Deploying Sonarqube on Azure WebApp for Containers


Sonarqube is a tool for developers to track the quality of a project. It provides a dashboard to view issues in a code base and integrates nicely with VSTS for analyzing pull requests, a good way to continuously improve the quality of our apps.
Deploying, running and maintaining Sonarqube can however be a little troublesome. Usually, it's done inside a VM that needs to be maintained, secured, etc. Even in Azure, a VM needs maintenance.

What if we could use the power of other cloud services to host our Sonarqube? The database could easily go into SQL Azure. But what about hosting the app? A container offering (ACS/AKS) can be a little complicated to handle (and deploying a full Kubernetes cluster just for Sonarqube is a bit extreme), and Azure Container Instances (ACI) is quite expensive for running a container permanently.

That leaves us with Web App for Containers, which runs a container inside the context of App Service for Linux. The main appeal is that everything is managed: from running and updating the host to certificate management and custom domains.

First try, running the sonarqube image

Sonarqube is available to pull from Docker Hub; the image runs on Linux using an Alpine distribution.
Using the Azure CLI, we can create a deployment that runs this image on Web App for Containers.

az group create --name "mysonarqubegroup" --location "West Europe"
az appservice plan create --resource-group "mysonarqubegroup" --name "mysonarqubeplan" --sku "S1" --is-linux
az webapp create --resource-group "mysonarqubegroup" --plan "mysonarqubeplan" --name "mysonarqube" --deployment-container-image-name "sonarqube"

This creates a resource group to hold our resources, then an App Service plan running on Linux, and finally the associated Web App running the sonarqube image.
You can try to run those commands, however a few things are not going to work as expected.
First, Sonarqube is going to use H2, its embedded database. Running in production on this database is not advised; you should instead use SQL Server or PostgreSQL. Secondly, you can start to customize your instance, install plugins, etc., but tomorrow morning you're going to wake up with a fresh instance and everything you installed will be gone. Weird! You might feel like Bill Murray in Groundhog Day.

In fact, all of this is explained by the container being stateless. Nothing is persisted or shared between instances and reboots. When the pool recycles, a new instance starts fresh, so all your changes, such as installed plugins, are discarded: they were written to the disk inside the running container.

Persist all the things !

With the previous commands we were stuck with the embedded database and no persistence across container reboots. Let's see how we can improve that and solve those problems.
First of all, we want to use a regular SQL database such as SQL Azure (it's also possible to use managed PostgreSQL instances, but we won't cover that here).

With a few commands, we can set up a database ready to host our data:

az sql server create --name "sonarqubedbserver" --resource-group "mysonarqubegroup" --location "West Europe" --admin-user "sonarqube" --admin-password "mySup3rS3c3retP@ssw0rd"
az sql db create --resource-group "mysonarqubegroup" --server "sonarqubedbserver" --name "sonarqube" --service-objective "S0" --collation "SQL_Latin1_General_CP1_CS_AS"
az sql server firewall-rule create --resource-group "mysonarqubegroup" --server "sonarqubedbserver" -n "AllowAllWindowsAzureIps" --start-ip-address "" --end-ip-address ""

We now have a database with the correct collation and a firewall rule allowing traffic from inside Azure datacenters (so that our container can communicate with the DB). We just have to make the container use it.
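As a quick sanity check, the connection string the container will eventually need can be assembled by hand. This is a plain-shell sketch using the values from the commands above; the `` suffix is the public endpoint of SQL Azure servers:

```shell
# Sketch: assembling the JDBC URL for the database created above.
# Server name and credentials must match the `az sql server create` call.
SQL_SERVER="sonarqubedbserver"
SQL_DB="sonarqube"
SQL_USER="sonarqube"
SQL_PASSWORD="mySup3rS3c3retP@ssw0rd"

JDBC_URL="jdbc:sqlserver://${SQL_SERVER};database=${SQL_DB};user=${SQL_USER}@${SQL_SERVER};password=${SQL_PASSWORD};encrypt=true;trustServerCertificate=false;loginTimeout=30;"
echo "$JDBC_URL"
```

We will feed this exact string to the Web App later as a connection string setting.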

In order to persist state across reboots, Web App for Containers has a little-known option which mounts a volume mapped to a folder on the host (and therefore backed by App Service storage). All files written to this volume are persisted across reboots. The volume is mounted by App Service at the path /home. The downside is that, at build time in Docker, we cannot use this folder, as its content is discarded when the volume is mounted. Additionally, we have to make Sonarqube use this directory to store all its state.

The vanilla Sonarqube image uses the folder /opt/sonarqube. One way to achieve what we want is to move the content we need from /opt/sonarqube to /home/sonarqube, then create symbolic links to preserve the layout. Unfortunately, the vanilla image also declares a volume on /opt/sonarqube/data, so we won't be able to move, replace or update that folder. All of this can be done by adding a thin layer to the Docker image containing a shell script that does the work.
First, the Dockerfile is quite simple:

FROM sonarqube:7.0-alpine
COPY ./bin/
RUN chmod +x ./bin/
ENTRYPOINT ["./bin/"]

It takes the vanilla image, adds the shell script we're going to look at next, gives it execute permission and declares it as the entry point of the container.

The first part of prepares all the required folders by either creating or moving them, then adds the symbolic links.


#!/bin/bash

echo Preparing SonarQube container

mkdir -p /home/sonarqube/data
chown -R sonarqube:sonarqube /home/sonarqube

mv -n /opt/sonarqube/conf /home/sonarqube
mv -n /opt/sonarqube/logs /home/sonarqube
mv -n /opt/sonarqube/extensions /home/sonarqube

chown -R sonarqube:sonarqube /home/sonarqube/data
chown -R sonarqube:sonarqube /home/sonarqube/conf
chown -R sonarqube:sonarqube /home/sonarqube/logs
chown -R sonarqube:sonarqube /home/sonarqube/extensions

rm -rf /opt/sonarqube/conf
rm -rf /opt/sonarqube/logs
rm -rf /opt/sonarqube/extensions

ln -s /home/sonarqube/conf /opt/sonarqube/conf
ln -s /home/sonarqube/logs /opt/sonarqube/logs
ln -s /home/sonarqube/extensions /opt/sonarqube/extensions

The second part of is simply the original startup script shipped with Sonarqube, adapted to our needs:

chown -R sonarqube:sonarqube $SONARQUBE_HOME

set -e

if [ "${1:0:1}" != '-' ]; then
  exec "$@"
fi

echo Launching SonarQube instance

exec su-exec sonarqube \
  java -jar lib/sonar-application-$SONAR_VERSION.jar \
  -Dsonar.log.console=true \
  -Dsonar.jdbc.url="$SQLAZURECONNSTR_SONARQUBE_JDBC_URL" \
  -Dsonar.path.data="/home/sonarqube/data" \
  -Dsonar.web.javaAdditionalOpts="$SONARQUBE_WEB_JVM_OPTS" \
  "$@"

The differences are in the parameters: we use the variable $SQLAZURECONNSTR_SONARQUBE_JDBC_URL, which contains the connection string to our database (so we no longer need sonar.jdbc.username or sonar.jdbc.password), and we point the data folder to the persisted directory /home/sonarqube/data.
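The SQLAZURECONNSTR_ prefix is not arbitrary: App Service injects connection strings into the container as environment variables prefixed by their type (SQLAZURECONNSTR_ for the SQLAzure type). A small defensive sketch, simulating that environment with a dummy value, fails fast when the variable is missing rather than letting Sonarqube silently fall back to H2:

```shell
# Simulated App Service environment: a SQLAzure connection string named
# SONARQUBE_JDBC_URL surfaces as SQLAZURECONNSTR_SONARQUBE_JDBC_URL.
# The value below is a placeholder, not a real server.
export SQLAZURECONNSTR_SONARQUBE_JDBC_URL="jdbc:sqlserver://;database=sonarqube"

# Abort startup early when the connection string is absent, instead of
# letting Sonarqube start on its embedded H2 database.
: "${SQLAZURECONNSTR_SONARQUBE_JDBC_URL:?SONARQUBE_JDBC_URL connection string is not configured}"
echo "Using ${SQLAZURECONNSTR_SONARQUBE_JDBC_URL}"
```

A guard like this could be dropped into the top of as an extra safety net.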

We can then build the Docker image and push it to Docker Hub or an Azure Container Registry.

docker build -t mysonarqube:latest .
docker tag mysonarqube:latest <myrepo>/mysonarqube:latest
docker push <myrepo>/mysonarqube:latest

We're now ready to use it, and for that we need a few more commands.

az webapp config connection-string set --resource-group "mysonarqubegroup" --name "mysonarqube" -t SQLAzure --settings SONARQUBE_JDBC_URL="jdbc:sqlserver://;database=sonarqube;user=sonarqube@sonarqubedbserver;password=mySup3rS3c3retP@ssw0rd;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*;loginTimeout=30;"

This creates the connection string that will be used by Sonarqube, exposed to the container as the $SQLAZURECONNSTR_SONARQUBE_JDBC_URL variable we talked about earlier.

az webapp config set --resource-group "mysonarqubegroup" --name "mysonarqube" --always-on true
az webapp log config --resource-group "mysonarqubegroup" --name "mysonarqube" --docker-container-logging filesystem
az webapp config container set --resource-group "mysonarqubegroup" --name "mysonarqube" --enable-app-service-storage true --docker-custom-image-name "<myrepo>/mysonarqube:latest"

We configure a few other things: the first line activates the Always-On capability of App Service; then we enable container logging (all stdout/stderr is persisted to disk and available to see what's going on inside the container); finally we configure the container with our brand-new image and activate the option to persist files.
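To check that the instance actually came up, SonarQube's Web API exposes /api/system/status, which returns a small JSON document whose status field reads UP once the server is ready. Here is a sketch parsing such a response without jq; the response shown is a hypothetical example, in real life it would come from `curl https://mysonarqube./api/system/status`:

```shell
# Hypothetical /api/system/status response from our instance.
response='{"id":"ABC123","version":"7.0","status":"UP"}'

# Extract the status field with sed; "UP" means the instance is ready.
status=$(printf '%s' "$response" | sed -n 's/.*"status":"\([A-Z]*\)".*/\1/p')
echo "$status"
```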
We're good to go!

Wrapping up all the things together

All the files to build the Docker image are available on my GitHub repository. The image is also available to pull from my Docker Hub.
We can improve the commands by using variables; with PowerShell, that leaves us with this script:

$resourceGroupName = "mysonarqubedeployment"
$location = "West Europe"
$sqlCredentials = Get-Credential
$sqlServerName = "mysonarqubedeployment"
$databaseSku = "S0"
$databaseName = "sonarqube"
$appServiceName = "mysonarqubedeployment"
$appServiceSku = "S1"
$appName = "mysonarqubedeployment"
$containerImage = "natmarchand/sonarqube:latest"

az group create --name $resourceGroupName --location $location

az sql server create --name $sqlServerName --resource-group $resourceGroupName --location $location --admin-user `"$($sqlCredentials.UserName)`" --admin-password `"$($sqlCredentials.GetNetworkCredential().Password)`"
az sql db create --resource-group $resourceGroupName --server $sqlServerName --name $databaseName --service-objective $databaseSku --collation "SQL_Latin1_General_CP1_CS_AS"
az sql server firewall-rule create --resource-group $resourceGroupName --server $sqlServerName -n "AllowAllWindowsAzureIps" --start-ip-address "" --end-ip-address ""

az appservice plan create --resource-group $resourceGroupName --name $appServiceName --sku $appServiceSku --is-linux
az webapp create --resource-group $resourceGroupName --plan $appServiceName --name $appName --deployment-container-image-name "alpine"
az webapp config connection-string set --resource-group $resourceGroupName --name $appName -t SQLAzure --settings SONARQUBE_JDBC_URL=`"jdbc:sqlserver://$;database=$databaseName;user=$($sqlCredentials.UserName)@$sqlServerName;password=$($sqlCredentials.GetNetworkCredential().Password);encrypt=true;trustServerCertificate=false;hostNameInCertificate=*;loginTimeout=30;`"
az webapp config set --resource-group $resourceGroupName --name $appName --always-on true
az webapp log config --resource-group $resourceGroupName --name $appName --docker-container-logging filesystem
az webapp config container set --resource-group $resourceGroupName --name $appName --enable-app-service-storage true --docker-custom-image-name "$containerImage"

Please note that with these commands, we don't create the webapp with the sonarqube image directly, as it would start the container without a valid configuration (no connection string, no App Service storage); we deploy a placeholder alpine image first and switch to our image once everything is configured.
