Securing the connection between Consul and ASP.Net Core


The previous article introduced how to use Consul to store the configuration of ASP.Net Core (or classic .Net) applications. However, it was missing an important thing: security! In this article, we will see how to address this by combining the ACL mechanism built into Consul with the previously developed code, in order to secure the connection between Consul and ASP.Net Core.

What’s required on Consul

On a normal Consul installation, the cluster should be secured by TLS (see here) to at least verify the authenticity of the server and force the API to use HTTPS.
Going further, it’s possible to use an ACL (Access Control List) token to grant rights to the different applications. For example, you can create an ACL that allows App1 to read its configuration key/values, declare itself in the service catalog, consume the service catalog and update its health status. The ACL would prevent App1 from reading other apps’ configuration or declaring another service in the catalog.

Declaring an ACL rule is easy once ACLs are activated (see here); it uses the following syntax in the Consul UI:

key "App1/Dev" {
  policy = "read"
}

After creating the ACL, the UI gives back a token which looks like a UUID; this token needs to be passed in the HTTP request headers.
The default policy can be configured to deny everything to anonymous calls.
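A single token can carry several rules. For example, a policy that lets App1 read its configuration keys and register its own service could look like the following sketch (the service name "app1" is hypothetical):

```hcl
key "App1/Dev" {
  policy = "read"
}

service "app1" {
  policy = "write"
}
```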

Adapting the code

Let’s adapt the code (some code hidden for brevity):

public class ConsulConfigurationProvider : ConfigurationProvider
{
    private const string ConsulIndexHeader = "X-Consul-Index";
    private const string ConsulAclTokenHeader = "X-Consul-Token";

    private readonly string _path;
    private readonly string _consulAclToken;
    private readonly HttpClient _httpClient;
    /* ... */

    public ConsulConfigurationProvider(IEnumerable<Uri> consulUrls, string path, string consulAclToken = null)
    {
        _path = path;
        _consulAclToken = consulAclToken;
        _consulUrls = consulUrls.Select(u => new Uri(u, $"v1/kv/{path}")).ToList();

        /* ... */
    }

    private async Task<IDictionary<string, string>> ExecuteQueryAsync(bool isBlocking = false)
    {
        var requestUri = isBlocking ? $"?recurse=true&index={_consulConfigurationIndex}" : "?recurse=true";
        using (var request = new HttpRequestMessage(HttpMethod.Get, new Uri(_consulUrls[_consulUrlIndex], requestUri)))
        {
            if (!string.IsNullOrWhiteSpace(_consulAclToken))
                request.Headers.Add(ConsulAclTokenHeader, _consulAclToken);

            using (var response = await _httpClient.SendAsync(request))
            {
                /* ... */
            }
        }
    }
}

The only change is to receive a token through the constructor and pass it in the header of each request.
Of course, the methods in the ConfigurationSource and in the extension methods should be updated too.
Don’t forget that the token is a secret; it should therefore be handled properly (as a Docker secret, a secret in Azure Key Vault, etc.).
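As an illustration, with Docker secrets the token ends up as a plain file under /run/secrets; the sketch below simulates that mount with a temporary directory (the secret name and token value are made up) and reads it into an environment variable the application can consume:

```shell
# Simulate the /run/secrets mount with a temp directory (illustration only)
secrets_dir=$(mktemp -d)
printf '4816d2de-1111-2222-3333-444455556666' > "$secrets_dir/consul_acl_token"

# Read the secret file and expose the token to the application
CONSUL_ACL_TOKEN=$(cat "$secrets_dir/consul_acl_token")
export CONSUL_ACL_TOKEN
echo "$CONSUL_ACL_TOKEN"
```

In a real deployment the directory would be /run/secrets and the value would never appear in the script itself.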

Going further with client certificate

It’s even possible to use a client certificate to authenticate the client. For this, a certificate must be installed in the machine’s certificate store. From a code perspective, what is needed is a method to retrieve the certificate and to use it with the HttpClient instance.

First of all, here’s a sample showing how to retrieve a certificate by its thumbprint:

private static X509Certificate2Collection GetLocalMachineCertificateByThumbprint(string thumbprint)
{
    using (var x509Store = new X509Store(StoreLocation.LocalMachine))
    {
        x509Store.Open(OpenFlags.OpenExistingOnly);
        // validOnly: true filters out expired or otherwise invalid certificates
        return x509Store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, true);
    }
}

We can now change the constructor of the ConfigurationProvider to use it:

public ConsulConfigurationProvider(IEnumerable<Uri> consulUrls, string path, string consulAclToken = null, string clientCertThumbprint = null)
{
    _path = path;
    _consulAclToken = consulAclToken;
    _consulUrls = consulUrls.Select(u => new Uri(u, $"v1/kv/{path}")).ToList();

    if (_consulUrls.Count <= 0)
    {
        throw new ArgumentOutOfRangeException(nameof(consulUrls));
    }

    var httpClientHandler = new HttpClientHandler { AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip };

    if (!string.IsNullOrWhiteSpace(clientCertThumbprint))
        httpClientHandler.ClientCertificates.AddRange(GetLocalMachineCertificateByThumbprint(clientCertThumbprint));

    _httpClient = new HttpClient(httpClientHandler, true);

    _configurationListeningTask = new Task(ListenToConfigurationChanges);
}

The changes here are calling the method above when a thumbprint is provided, and attaching the resulting certificates to the HttpClientHandler.

Final word

The ConfigurationProvider can now authenticate with its client certificate and present an ACL token to authorize its actions and access its private resources (read its configuration, update its health, etc.). Everything is now secured!

Using Consul for storing the configuration in ASP.Net Core


Consul from Hashicorp is a tool used in distributed architectures for service discovery, health checking and key/value storage for configuration. This article details how to use Consul for storing the configuration of an ASP.Net Core application by implementing a ConfigurationProvider.

Why use a tool to store the configuration?

Usually, the configuration of .Net apps is stored in configuration files such as App.config, Web.config or appsettings.json. ASP.Net Core introduced a new, extensible configuration framework which allows storing the configuration outside of config files and retrieving it from the command line, environment variables, etc.
The issue with configuration files is that they can be difficult to manage. In practice, we usually end up with a base configuration file plus transformation files that override it for each environment. They’re delivered at the same time as the binaries, so changing a configuration value means redeploying both configuration and binaries. Not very convenient.
Using a separate tool to centralize the configuration gives us two things:

  • Having the same configuration across all the machines (no machine out of sync)
  • Being able to change a value without redeploying anything (useful for feature toggling)

Introducing Consul

The purpose of this article is not to present Consul itself but to focus on using it with ASP.Net Core.
However, it can be useful to recall a few things. Consul provides a Key/Value store which is organized hierarchically: folders can be created to map the different applications, environments, etc. Here’s an example of a hierarchy that is going to be used throughout this article. Each leaf node can contain a JSON value.

/
|-- App1
| |-- Dev
| | |-- ConnectionStrings
| | \-- Settings
| |-- Staging
| | |-- ConnectionStrings
| | \-- Settings
| \-- Prod
|   |-- ConnectionStrings
|   \-- Settings
\-- App2
  |-- Dev
  | |-- ConnectionStrings
  | \-- Settings
  |-- Staging
  | |-- ConnectionStrings
  | \-- Settings
  \-- Prod
    |-- ConnectionStrings
    \-- Settings

Querying is easy since it’s a REST API; the keys are part of the URL. For example, getting the settings of App1 in the Dev environment looks like this: GET http://<host>:8500/v1/kv/App1/Dev/Settings
The response looks like this:

HTTP/1.1 200 OK
Content-Type: application/json
X-Consul-Index: 1071
X-Consul-Knownleader: true
X-Consul-Lastcontact: 0

[
    {
        "LockIndex": 0,
        "Key": "App1/Dev/Settings",
        "Flags": 0,
        "Value": "ewogIkludCI6NDIsCiAiT2JqZWN0IjogewogICJTdHJpbmciOiAidG90byIsCiAgIkJsYSI6IG51bGwsCiAgIk9iamVjdCI6IHsKICAgIkRhdGUiOiAiMjAxOC0wMi0yM1QxNjoyMTowMFoiCiAgfQogfQp9Cgo=",
        "CreateIndex": 501,
        "ModifyIndex": 1071
    }
]

It’s also possible to query any node recursively; GET http://<host>:8500/v1/kv/App1/Dev?recurse gives:

HTTP/1.1 200 OK
Content-Type: application/json
X-Consul-Index: 1071
X-Consul-Knownleader: true
X-Consul-Lastcontact: 0

[
    {
        "LockIndex": 0,
        "Key": "App1/Dev/",
        "Flags": 0,
        "Value": null,
        "CreateIndex": 75,
        "ModifyIndex": 75
    },
    {
        "LockIndex": 0,
        "Key": "App1/Dev/ConnectionStrings",
        "Flags": 0,
        "Value": "ewoiRGF0YWJhc2UiOiAiU2VydmVyPXRjcDpkYmRldi5kYXRhYmFzZS53aW5kb3dzLm5ldDtEYXRhYmFzZT1teURhdGFCYXNlO1VzZXIgSUQ9W0xvZ2luRm9yRGJdQFtzZXJ2ZXJOYW1lXTtQYXNzd29yZD1teVBhc3N3b3JkO1RydXN0ZWRfQ29ubmVjdGlvbj1GYWxzZTtFbmNyeXB0PVRydWU7IiwKIlN0b3JhZ2UiOiJEZWZhdWx0RW5kcG9pbnRzUHJvdG9jb2w9aHR0cHM7QWNjb3VudE5hbWU9ZGV2YWNjb3VudDtBY2NvdW50S2V5PW15S2V5OyIKfQ==",
        "CreateIndex": 155,
        "ModifyIndex": 155
    },
    {
        "LockIndex": 0,
        "Key": "App1/Dev/Settings",
        "Flags": 0,
        "Value": "ewogIkludCI6NDIsCiAiT2JqZWN0IjogewogICJTdHJpbmciOiAidG90byIsCiAgIkJsYSI6IG51bGwsCiAgIk9iamVjdCI6IHsKICAgIkRhdGUiOiAiMjAxOC0wMi0yM1QxNjoyMTowMFoiCiAgfQogfQp9Cgo=",
        "CreateIndex": 501,
        "ModifyIndex": 1071
    }
]

We can see several things in these responses. First, each key has its value encoded in Base64, which avoids mixing the JSON of the response with the JSON of the value. Then, notice the “Index” properties, both in the JSON body and in the HTTP headers. These properties are a kind of timestamp: they indicate if/when a value was created or updated, and they will let us know when the configuration needs to be reloaded.
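You can verify the encoding yourself; for instance, decoding the Value of the App1/Dev/Settings key from the response above gives back the JSON document:

```shell
# Decode the Base64-encoded "Value" field of the App1/Dev/Settings key
value="ewogIkludCI6NDIsCiAiT2JqZWN0IjogewogICJTdHJpbmciOiAidG90byIsCiAgIkJsYSI6IG51bGwsCiAgIk9iamVjdCI6IHsKICAgIkRhdGUiOiAiMjAxOC0wMi0yM1QxNjoyMTowMFoiCiAgfQogfQp9Cgo="
printf '%s' "$value" | base64 -d
```

The decoded output is a JSON object containing "Int": 42 and the nested "Object" values ("toto", the date, etc.) used in the rest of the article.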

ASP.Net Core configuration system

The configuration infrastructure relies on several abstractions from the Microsoft.Extensions.Configuration.Abstractions NuGet package. First, IConfigurationProvider is the interface to implement for supplying configuration values; then, IConfigurationSource is responsible for creating an instance of the implemented configuration provider.
You can browse several implementations on the ASP.Net GitHub repository.
Fortunately, instead of implementing IConfigurationProvider directly, it’s possible to inherit from a class named ConfigurationProvider in the Microsoft.Extensions.Configuration package which takes care of the boilerplate code (such as the reload token implementation).
This class contains two important members:

/* Excerpt from the implementation */
public abstract class ConfigurationProvider : IConfigurationProvider
{
    protected IDictionary<string, string> Data { get; set; }
    public virtual void Load()
    {
    }
}

Data is the dictionary containing all the keys and values. Load is the method called at application startup; as its name indicates, it loads the configuration from somewhere (a config file, or our Consul instance) and hydrates the dictionary.

Loading consul configuration in ASP.Net Core

For a first implementation, we are going to use an HttpClient to fetch the configuration from Consul. Then, as the configuration is hierarchical (it’s a tree), we will need to flatten it in order to put it in the dictionary. Easy, no?

First, let’s implement the Load method. It doesn’t do much by itself, since we need an asynchronous version; it just blocks on the asynchronous call (blocking is not ideal, but this is inspired by the ASP.Net Core implementation).

public override void Load() => LoadAsync().ConfigureAwait(false).GetAwaiter().GetResult();

Then, we query Consul to get the configuration values, in a recursive way (see above). The method uses some fields of the class: _consulUrls is a list of URLs to Consul instances (for fail-over) and _path is the prefix of the keys (such as App1/Dev). Once we get the JSON, we iterate over each key/value pair, decode the Base64 string, and then flatten the keys and JSON objects.

private async Task<IDictionary<string, string>> ExecuteQueryAsync()
{
    int consulUrlIndex = 0;
    while (true)
    {
        try
        {
            using (var httpClient = new HttpClient(new HttpClientHandler { AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip }, true))
            using (var request = new HttpRequestMessage(HttpMethod.Get, new Uri(_consulUrls[consulUrlIndex], "?recurse=true")))
            using (var response = await httpClient.SendAsync(request))
            {
                response.EnsureSuccessStatusCode();
                var tokens = JToken.Parse(await response.Content.ReadAsStringAsync());
                return tokens
                    .Select(k => KeyValuePair.Create
                    (
                        k.Value<string>("Key").Substring(_path.Length + 1),
                        k.Value<string>("Value") != null ? JToken.Parse(Encoding.UTF8.GetString(Convert.FromBase64String(k.Value<string>("Value")))) : null
                    ))
                    .Where(v => !string.IsNullOrWhiteSpace(v.Key))
                    .SelectMany(Flatten)
                    .ToDictionary(v => ConfigurationPath.Combine(v.Key.Split('/')), v => v.Value, StringComparer.OrdinalIgnoreCase);
            }
        }
        catch
        {
            consulUrlIndex++;
            if (consulUrlIndex >= _consulUrls.Count)
                throw;
        }
    }
}

The method that flattens the keys and values is a simple Depth First Search on the tree.

private static IEnumerable<KeyValuePair<string, string>> Flatten(KeyValuePair<string, JToken> tuple)
{
    if (!(tuple.Value is JObject value))
        yield break;

    foreach (var property in value)
    {
        var propertyKey = $"{tuple.Key}/{property.Key}";
        switch (property.Value.Type)
        {
            case JTokenType.Object:
                foreach (var item in Flatten(KeyValuePair.Create(propertyKey, property.Value)))
                    yield return item;
                break;
            case JTokenType.Array:
                break;
            default:
                yield return KeyValuePair.Create(propertyKey, property.Value.Value<string>());
                break;
        }
    }
}
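To make the flattening concrete, here is a shell walk-through of what happens to one entry (illustrative values): the _path prefix is stripped from the Consul key, the nested JSON property names are appended, and ConfigurationPath.Combine then joins the segments with ':', the separator that ASP.Net Core configuration uses.

```shell
# Hypothetical walk-through of one flattened entry
consul_key="App1/Dev/Settings"        # key as returned by Consul
json_path="Object/Object/Date"        # nested property path inside the JSON value

relative="${consul_key#App1/Dev/}"    # strip the _path prefix -> "Settings"
combined="$relative/$json_path"       # what Flatten produces -> "Settings/Object/Object/Date"
config_key=$(printf '%s' "$combined" | tr '/' ':')   # ConfigurationPath.Combine equivalent

echo "$config_key"                    # prints: Settings:Object:Object:Date
```

That final key is why Configuration["Settings:Object:Object:Date"] resolves to the value stored in Consul.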

The whole class, with its constructor and fields, looks like this:

public class SimpleConsulConfigurationProvider : ConfigurationProvider
{
    private readonly string _path;
    private readonly IReadOnlyList<Uri> _consulUrls;

    public SimpleConsulConfigurationProvider(IEnumerable<Uri> consulUrls, string path)
    {
        _path = path;
        _consulUrls = consulUrls.Select(u => new Uri(u, $"v1/kv/{path}")).ToList();

        if (_consulUrls.Count <= 0)
        {
            throw new ArgumentOutOfRangeException(nameof(consulUrls));
        }
    }

    public override void Load() => LoadAsync().ConfigureAwait(false).GetAwaiter().GetResult();

    private async Task LoadAsync()
    {
        Data = await ExecuteQueryAsync();
    }

    private async Task<IDictionary<string, string>> ExecuteQueryAsync()
    {
        int consulUrlIndex = 0;
        while (true)
        {
            try
            {
                var requestUri = "?recurse=true";
                using (var httpClient = new HttpClient(new HttpClientHandler { AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip }, true))
                using (var request = new HttpRequestMessage(HttpMethod.Get, new Uri(_consulUrls[consulUrlIndex], requestUri)))
                using (var response = await httpClient.SendAsync(request))
                {
                    response.EnsureSuccessStatusCode();
                    var tokens = JToken.Parse(await response.Content.ReadAsStringAsync());
                    return tokens
                        .Select(k => KeyValuePair.Create
                        (
                            k.Value<string>("Key").Substring(_path.Length + 1),
                            k.Value<string>("Value") != null ? JToken.Parse(Encoding.UTF8.GetString(Convert.FromBase64String(k.Value<string>("Value")))) : null
                        ))
                        .Where(v => !string.IsNullOrWhiteSpace(v.Key))
                        .SelectMany(Flatten)
                        .ToDictionary(v => ConfigurationPath.Combine(v.Key.Split('/')), v => v.Value, StringComparer.OrdinalIgnoreCase);
                }
            }
            catch
            {
                consulUrlIndex = consulUrlIndex + 1;
                if (consulUrlIndex >= _consulUrls.Count)
                    throw;
            }
        }
    }

    private static IEnumerable<KeyValuePair<string, string>> Flatten(KeyValuePair<string, JToken> tuple)
    {
        if (!(tuple.Value is JObject value))
            yield break;

        foreach (var property in value)
        {
            var propertyKey = $"{tuple.Key}/{property.Key}";
            switch (property.Value.Type)
            {
                case JTokenType.Object:
                    foreach (var item in Flatten(KeyValuePair.Create(propertyKey, property.Value)))
                        yield return item;
                    break;
                case JTokenType.Array:
                    break;
                default:
                    yield return KeyValuePair.Create(propertyKey, property.Value.Value<string>());
                    break;
            }
        }
    }
}

Dynamic configuration reloading

We can go further by using Consul’s change notifications. This works by simply adding a parameter (the index of the last configuration seen): the HTTP request then blocks until the next configuration change (or until the HttpClient times out).
Compared to the previous class, we just have to add a ListenToConfigurationChanges method that listens in the background on the blocking HTTP endpoint of Consul, and refactor a little.
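The blocking-query mechanics boil down to: remember the last X-Consul-Index seen, and send it back in the index parameter so Consul holds the response open until the index moves. A small shell sketch of the URI construction (1071 is the index from the sample response earlier):

```shell
# Build the blocking-query URI from the last seen X-Consul-Index
last_index=1071
blocking_uri="?recurse=true&index=${last_index}"
echo "$blocking_uri"   # prints: ?recurse=true&index=1071

# A GET on <consul>/v1/kv/App1/Dev with that query string now hangs until
# the configuration changes (or the HttpClient/server timeout elapses).
```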

public class ConsulConfigurationProvider : ConfigurationProvider
{
    private const string ConsulIndexHeader = "X-Consul-Index";

    private readonly string _path;
    private readonly HttpClient _httpClient;
    private readonly IReadOnlyList<Uri> _consulUrls;
    private readonly Task _configurationListeningTask;
    private int _consulUrlIndex;
    private int _failureCount;
    private int _consulConfigurationIndex;

    public ConsulConfigurationProvider(IEnumerable<Uri> consulUrls, string path)
    {
        _path = path;
        _consulUrls = consulUrls.Select(u => new Uri(u, $"v1/kv/{path}")).ToList();

        if (_consulUrls.Count <= 0)
        {
            throw new ArgumentOutOfRangeException(nameof(consulUrls));
        }

        _httpClient = new HttpClient(new HttpClientHandler { AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip }, true);
        _configurationListeningTask = new Task(ListenToConfigurationChanges);
    }

    public override void Load() => LoadAsync().ConfigureAwait(false).GetAwaiter().GetResult();

    private async Task LoadAsync()
    {
        Data = await ExecuteQueryAsync();

        if (_configurationListeningTask.Status == TaskStatus.Created)
            _configurationListeningTask.Start();
    }

    private async void ListenToConfigurationChanges()
    {
        while (true)
        {
            try
            {
                if (_failureCount > _consulUrls.Count)
                {
                    _failureCount = 0;
                    await Task.Delay(TimeSpan.FromMinutes(1));
                }

                Data = await ExecuteQueryAsync(true);
                OnReload();
                _failureCount = 0;
            }
            catch (TaskCanceledException)
            {
                _failureCount = 0;
            }
            catch
            {
                _consulUrlIndex = (_consulUrlIndex + 1) % _consulUrls.Count;
                _failureCount++;
            }
        }
    }

    private async Task<IDictionary<string, string>> ExecuteQueryAsync(bool isBlocking = false)
    {
        var requestUri = isBlocking ? $"?recurse=true&index={_consulConfigurationIndex}" : "?recurse=true";
        using (var request = new HttpRequestMessage(HttpMethod.Get, new Uri(_consulUrls[_consulUrlIndex], requestUri)))
        using (var response = await _httpClient.SendAsync(request))
        {
            response.EnsureSuccessStatusCode();
            if (response.Headers.Contains(ConsulIndexHeader))
            {
                var indexValue = response.Headers.GetValues(ConsulIndexHeader).FirstOrDefault();
                int.TryParse(indexValue, out _consulConfigurationIndex);
            }

            var tokens = JToken.Parse(await response.Content.ReadAsStringAsync());
            return tokens
                .Select(k => KeyValuePair.Create
                    (
                        k.Value<string>("Key").Substring(_path.Length + 1),
                        k.Value<string>("Value") != null ? JToken.Parse(Encoding.UTF8.GetString(Convert.FromBase64String(k.Value<string>("Value")))) : null
                    ))
                .Where(v => !string.IsNullOrWhiteSpace(v.Key))
                .SelectMany(Flatten)
                .ToDictionary(v => ConfigurationPath.Combine(v.Key.Split('/')), v => v.Value, StringComparer.OrdinalIgnoreCase);
        }
    }

    private static IEnumerable<KeyValuePair<string, string>> Flatten(KeyValuePair<string, JToken> tuple)
    {
        if (!(tuple.Value is JObject value))
            yield break;

        foreach (var property in value)
        {
            var propertyKey = $"{tuple.Key}/{property.Key}";
            switch (property.Value.Type)
            {
                case JTokenType.Object:
                    foreach (var item in Flatten(KeyValuePair.Create(propertyKey, property.Value)))
                        yield return item;
                    break;
                case JTokenType.Array:
                    break;
                default:
                    yield return KeyValuePair.Create(propertyKey, property.Value.Value<string>());
                    break;
            }
        }
    }
}

Plug everything together

We now have a ConfigurationProvider; let’s write a ConfigurationSource to create our provider.

public class ConsulConfigurationSource : IConfigurationSource
{
    public IEnumerable<Uri> ConsulUrls { get; }
    public string Path { get; }

    public ConsulConfigurationSource(IEnumerable<Uri> consulUrls, string path)
    {
        ConsulUrls = consulUrls;
        Path = path;
    }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        return new ConsulConfigurationProvider(ConsulUrls, Path);
    }
}

And some extension methods to make our source easy to use:

public static class ConsulConfigurationExtensions
{
    public static IConfigurationBuilder AddConsul(this IConfigurationBuilder configurationBuilder, IEnumerable<Uri> consulUrls, string consulPath)
    {
        return configurationBuilder.Add(new ConsulConfigurationSource(consulUrls, consulPath));
    }

    public static IConfigurationBuilder AddConsul(this IConfigurationBuilder configurationBuilder, IEnumerable<string> consulUrls, string consulPath)
    {
        return configurationBuilder.AddConsul(consulUrls.Select(u => new Uri(u)), consulPath);
    }
}

We can now declare the Consul source in our Program.cs, using other configuration sources (such as environment variables or command line arguments) to provide the URLs.

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
    .ConfigureAppConfiguration(cb =>
        {
            var configuration = cb.Build();
            cb.AddConsul(new[] { configuration.GetValue<Uri>("CONSUL_URL") }, configuration.GetValue<string>("CONSUL_PATH"));
        })
        .UseStartup<Startup>()
        .Build();

Now, it’s possible to use the standard configuration patterns of ASP.Net Core such as Options.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddOptions();
    services.Configure<AppSettingsOptions>(Configuration.GetSection("Settings"));
    services.Configure<AccountingFeaturesOptions>(Configuration.GetSection("FeatureFlags"));
    services.Configure<CartFeaturesOptions>(Configuration.GetSection("FeatureFlags"));
    services.Configure<CatalogFeaturesOptions>(Configuration.GetSection("FeatureFlags"));
}

To use them in our code, be careful about how you consume the options: for options that can be reloaded dynamically, IOptions<T> would only give you the initial value. Instead, ASP.Net Core requires using IOptionsSnapshot<T>.
This scenario is really awesome for feature toggling: you can enable and disable new features just by changing the toggle value in Consul and, without delivering anything, customers can use those new features. In the same manner, if a feature is buggy, you can disable it without rolling back or hot fixing.

public class CartController : Controller
{
    [HttpPost]
    public IActionResult AddProduct([FromServices]IOptionsSnapshot<CartFeaturesOptions> options, [FromBody] Product product)
    {
        var cart = _cartService.GetCart(this.User);
        cart.Add(product);
        if (options.Value.UseCartAdvisorFeature)
        {
            ViewBag.CartAdvice = _cartAdvisor.GiveAdvice(cart);
        }
        return View(cart);
    }
}

Conclusion

Those few lines of code allowed us to add support for Consul configuration in our ASP.Net Core application. In fact, any application (even a classic .Net app using the Microsoft.Extensions.Configuration packages) can benefit from this. It’s very handy in a DevOps environment: you can centralize all your configurations in one place and use hot reloading for live feature toggling.

Deploying Sonarqube on Azure WebApp for Containers


Sonarqube is a tool for developers to track the quality of a project. It provides a dashboard to view issues in a code base and integrates nicely with VSTS for analyzing pull requests, a good way to continuously improve the quality of our apps.
Deploying, running and maintaining Sonarqube can, however, be a little troublesome. Usually, it’s done inside a VM that needs to be maintained, secured, etc. Even in Azure, a VM needs maintenance.

What if we could use the power of other cloud services to host our Sonarqube? The database could easily go into SQL Azure. But what about hosting the app? A container offering (ACS/AKS) can be a little complicated to handle (and deploying a full Kubernetes cluster just for Sonarqube is a bit extreme). Azure Container Instances (ACI) is quite expensive for running a container permanently.

That leaves us with Web App for Containers, which runs a container inside the context of App Service for Linux. The main benefit is that everything is managed: from running and updating the host to certificate management and custom domains.

First try, running the sonarqube image

On Docker Hub, the Sonarqube image is available to pull here; it runs on Linux using an Alpine distribution.
Using the Azure CLI, we can create a deployment that runs this image on Web App for Containers.

az group create --name "mysonarqubegroup" --location "West Europe"
az appservice plan create --resource-group "mysonarqubegroup" --name "mysonarqubeplan" --sku "S1" --is-linux
az webapp create --resource-group "mysonarqubegroup" --plan "mysonarqubeplan" --name "mysonarqube" --deployment-container-image-name "sonarqube"

This creates a resource group for our deployment, then an App Service plan running on Linux, and finally the associated Web App running the sonarqube image.
You can try to run those commands; however, a few things won’t work as expected.
First, Sonarqube is going to use H2, its embedded database. Running production workloads on this database is not advised; you should use SQL Server or PostgreSQL instead. Secondly, you can start customizing your instance, installing plugins, etc., but the next morning you’ll wake up to a fresh instance: everything you installed is gone. Weird! You might feel like Bill Murray in Groundhog Day.

In fact, all of this is explained by the container being stateless: nothing is persisted or shared between instances and reboots. When the pool recycles, a new instance starts fresh, so all your changes, such as installed plugins, are discarded, since they were written to the disk inside the running container.

Persist all the things!

The previous commands left us stuck with the embedded database and no persistence across container reboots. Let’s see how we can improve that and solve those problems.
First of all, we want to use a regular SQL database such as SQL Azure (it’s also possible to use a managed PostgreSQL instance, but we won’t cover that here).

With a few commands, we can set up a database ready to host our data:

az sql server create --name "sonarqubedbserver" --resource-group "mysonarqubegroup" --location "West Europe" --admin-user "sonarqube" --admin-password "mySup3rS3c3retP@ssw0rd"
az sql db create --resource-group "mysonarqubegroup" --server "sonarqubedbserver" --name "sonarqube" --service-objective "S0" --collation "SQL_Latin1_General_CP1_CS_AS"
az sql server firewall-rule create --resource-group "mysonarqubegroup" --server "sonarqubedbserver" -n "AllowAllWindowsAzureIps" --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0

We now have a database with the correct collation and a firewall rule allowing traffic from inside the Azure datacenters (so that our container can communicate with the DB). We just have to make the container use it.

In order to persist state across reboots, Web App for Containers has a little-known option which mounts a volume mapped to a folder on the host (and therefore backed by the App Service storage). All files written to this volume are persisted across reboots. The volume is mounted by App Service at the path /home. The downside is that, at build time in Docker, we cannot use this folder, as its content is discarded when the volume is mounted. Additionally, we have to make Sonarqube use this directory to store all of its state.

The vanilla Sonarqube image uses the folder /opt/sonarqube. One way to achieve what we want is to move the content we need from /opt/sonarqube to /home/sonarqube and then create symbolic links to preserve the layout. Unfortunately, the vanilla Sonarqube image also declares a volume on /opt/sonarqube/data, so we won’t be able to move, replace or update that particular folder. All of this can be done by adding a thin layer to the Docker image: a shell script that does the work.
First, the Dockerfile is quite simple:

FROM sonarqube:7.0-alpine
COPY entrypoint.sh ./bin/
RUN chmod +x ./bin/entrypoint.sh
ENTRYPOINT ["./bin/entrypoint.sh"]

It takes the vanilla image, adds the shell script we’re going to see next, gives it the execute permission and declares it as the entry point of the container.

The first part of entrypoint.sh prepares all the required folders by either creating or moving them, then adds the symbolic links.

#!/bin/sh

echo Preparing SonarQube container

mkdir -p /home/sonarqube/data
chown -R sonarqube:sonarqube /home/sonarqube

mv -n /opt/sonarqube/conf /home/sonarqube
mv -n /opt/sonarqube/logs /home/sonarqube
mv -n /opt/sonarqube/extensions /home/sonarqube

chown -R sonarqube:sonarqube /home/sonarqube/data
chown -R sonarqube:sonarqube /home/sonarqube/conf
chown -R sonarqube:sonarqube /home/sonarqube/logs
chown -R sonarqube:sonarqube /home/sonarqube/extensions

rm -rf /opt/sonarqube/conf
rm -rf /opt/sonarqube/logs
rm -rf /opt/sonarqube/extensions

ln -s /home/sonarqube/conf /opt/sonarqube/conf
ln -s /home/sonarqube/logs /opt/sonarqube/logs
ln -s /home/sonarqube/extensions /opt/sonarqube/extensions
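The effect of this move-then-link dance can be checked with a tiny self-contained sketch, where temporary directories stand in for /opt/sonarqube and /home/sonarqube:

```shell
# Temp directories simulating the image folder and the persistent volume
opt=$(mktemp -d)    # stands in for /opt/sonarqube
home=$(mktemp -d)   # stands in for /home/sonarqube

# A config file baked into the image
mkdir -p "$opt/conf"
echo "sonar.log.level=INFO" > "$opt/conf/sonar.properties"

mv -n "$opt/conf" "$home/"       # first boot: move the folder to persistent storage
rm -rf "$opt/conf"               # no-op here, but needed when mv -n skipped the move
ln -s "$home/conf" "$opt/conf"   # link it back so SonarQube still finds its files

cat "$opt/conf/sonar.properties" # read through the symlink: sonar.log.level=INFO
```

On subsequent boots mv -n refuses to overwrite the existing persisted folder, the leftover image copy is removed, and the symlink is recreated, which is exactly what the script above does for conf, logs and extensions.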

The second part of entrypoint.sh is simply the original startup script provided with Sonarqube (available here), adapted to our needs:

chown -R sonarqube:sonarqube $SONARQUBE_HOME

set -e

# If the first argument is not an option, exec it instead of starting SonarQube
if [ "${1:0:1}" != '-' ]; then
  exec "$@"
fi

echo Launching SonarQube instance

# Run SonarQube as the sonarqube user, wired to the Azure SQL database
# and the persisted data directory
exec su-exec sonarqube \
  java -jar lib/sonar-application-$SONAR_VERSION.jar \
  -Dsonar.log.console=true \
  -Dsonar.jdbc.url="$SQLAZURECONNSTR_SONARQUBE_JDBC_URL" \
  -Dsonar.web.javaAdditionalOpts="$SONARQUBE_WEB_JVM_OPTS -Djava.security.egd=file:/dev/./urandom" \
  -Dsonar.path.data="/home/sonarqube/data" \
  "$@"

The differences are in the parameters: we use the variable $SQLAZURECONNSTR_SONARQUBE_JDBC_URL, which contains the connection string to our database (so we no longer need jdbc.username or jdbc.password), and we point sonar.path.data to the persisted directory /home/sonarqube/data.
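Note also the argument dispatch at the top of the script: if the first argument does not start with a dash, it is exec'd as-is (handy to get a shell in the container); otherwise the arguments are forwarded to the java command line. A small sketch of that logic (using the POSIX `${1#-}` form instead of the bash-only `${1:0:1}`):

```shell
# Demo of the first-argument dispatch used by the entrypoint:
# a non-dash first argument is exec'd directly instead of launching SonarQube.
run_entrypoint() {
  if [ -n "$1" ] && [ "${1#-}" = "$1" ]; then
    echo "exec: $*"
  else
    echo "launch sonarqube with extra opts: $*"
  fi
}
run_entrypoint sh -c 'id'
run_entrypoint -Dsonar.web.port=9000
```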

We can then build the Docker image and push it to Docker Hub or an Azure Container Registry.

docker build -t mysonarqube:latest .
docker tag mysonarqube:latest <myrepo>/mysonarqube:latest
docker push <myrepo>/mysonarqube:latest
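If you prefer an Azure Container Registry over Docker Hub, the push would look like this (the registry name myregistry is made up):

```shell
# Hypothetical push to an Azure Container Registry instead of Docker Hub.
az acr login --name myregistry
docker tag mysonarqube:latest myregistry.azurecr.io/mysonarqube:latest
docker push myregistry.azurecr.io/mysonarqube:latest
```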

We’re now ready to use it and for that, we need several other commands.

az webapp config connection-string set --resource-group "mysonarqubegroup" --name "mysonarqube" -t SQLAzure --settings SONARQUBE_JDBC_URL="jdbc:sqlserver://sonarqubedbserver.database.windows.net:1433;database=sonarqube;user=sonarqube@sonarqubedbserver;password=mySup3rS3c3retP@ssw0rd;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"

This creates the connection string that will be used by Sonarqube; App Service exposes it to the container as the $SQLAZURECONNSTR_SONARQUBE_JDBC_URL variable we talked about earlier.
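As a side note, App Service exposes connection strings to the container as environment variables prefixed by their type (SQLAZURECONNSTR_ for SQLAzure). A minimal sketch of what the container sees, with a made-up value:

```shell
# App Service turns a connection string named SONARQUBE_JDBC_URL of type
# SQLAzure into the SQLAZURECONNSTR_SONARQUBE_JDBC_URL environment variable.
# Simulate that here with a made-up value:
export SQLAZURECONNSTR_SONARQUBE_JDBC_URL="jdbc:sqlserver://example;database=sonarqube"
# This is the variable the entrypoint passes to -Dsonar.jdbc.url:
printenv SQLAZURECONNSTR_SONARQUBE_JDBC_URL
```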

az webapp config set --resource-group "mysonarqubegroup" --name "mysonarqube" --always-on true
az webapp log config --resource-group "mysonarqubegroup" --name "mysonarqube" --docker-container-logging filesystem
az webapp config container set --resource-group "mysonarqubegroup" --name "mysonarqube" --enable-app-service-storage true --docker-custom-image-name "<myrepo>/mysonarqube:latest"

We then configure a few other things: the first line activates the Always-On capability of App Service, the second enables container logging (stdout/stderr is persisted to disk, which is handy to see what's going on inside the container), and the last one points the container at our brand new image and activates the option to persist files.
We're good to go!

Wrapping up all the things together

All the files to build the Docker image are available on my GitHub repository. The image is also available to pull from my Docker Hub.
We can improve the commands to use variables; with PowerShell, that leaves us with this script:

$resourceGroupName = "mysonarqubedeployment"
$location = "West Europe"
$sqlCredentials = Get-Credential
$sqlServerName = "mysonarqubedeployment"
$databaseSku = "S0"
$databaseName = "sonarqube"
$appServiceName = "mysonarqubedeployment"
$appServiceSku = "S1"
$appName = "mysonarqubedeployment"
$containerImage = "natmarchand/sonarqube:latest"

az group create --name $resourceGroupName --location $location

az sql server create --name $sqlServerName --resource-group $resourceGroupName --location $location --admin-user `"$($sqlCredentials.UserName)`" --admin-password `"$($sqlCredentials.GetNetworkCredential().Password)`"
az sql db create --resource-group $resourceGroupName --server $sqlServerName --name $databaseName --service-objective $databaseSku --collation "SQL_Latin1_General_CP1_CS_AS"
az sql server firewall-rule create --resource-group $resourceGroupName --server $sqlServerName -n "AllowAllWindowsAzureIps" --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0

az appservice plan create --resource-group $resourceGroupName --name $appServiceName --sku $appServiceSku --is-linux
az webapp create --resource-group $resourceGroupName --plan $appServiceName --name $appName --deployment-container-image-name "alpine"
az webapp config connection-string set --resource-group $resourceGroupName --name $appName -t SQLAzure --settings SONARQUBE_JDBC_URL=`"jdbc:sqlserver://$sqlServerName.database.windows.net:1433;database=$databaseName;user=$($sqlCredentials.UserName)@$sqlServerName;password=$($sqlCredentials.GetNetworkCredential().Password);encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;`"
az webapp config set --resource-group $resourceGroupName --name $appName --always-on true
az webapp log config --resource-group $resourceGroupName --name $appName --docker-container-logging filesystem
az webapp config container set --resource-group $resourceGroupName --name $appName --enable-app-service-storage true --docker-custom-image-name "$containerImage"

Please note that with those commands, we don't create the webapp directly with the Sonarqube image, as it would start the container without a valid configuration (no connection string, no App Service storage); the image is only set at the last step, once everything else is configured.

Mobile DevOps with VSTS and HockeyApp, the slides

Standard

On Wednesday, June 15th, I had the opportunity to present a session with Mathilde Roussel at the Cross-Platform Paris meet-up.
The topic was DevOps for mobile applications with VSTS (Visual Studio Team Services) and HockeyApp, two Microsoft products.
Using an Android application, we demonstrated the concept of a build pipeline, unit and UI tests, as well as beta deployments and the handling of feedback and crashes.
You can find the slides below.