Using ASP.Net Webform Dependency Injection with .NET 4.7.2

Starting with .NET 4.7.2 (released April 30th, 2018), Microsoft offers an extension point to plug our favorite dependency injection container into ASP.Net Webforms applications, making it possible to inject dependencies into UserControls, Pages and MasterPages.
In this article, we are going to see how to build an adaptation layer to plug in Autofac or the container used in ASP.Net Core.

Dependency Injection for Webforms

In software engineering, dependency injection is a technique whereby one object (or static method) supplies the dependencies of another object. A dependency is an object that can be used (a service). An injection is the passing of a dependency to a dependent object (a client) that would use it. The service is made part of the client’s state. Passing the service to the client, rather than allowing a client to build or find the service, is the fundamental requirement of the pattern.

Wikipedia

In our example, we would like to inject a dependency, decoupled behind the IDependency interface, into our Index page and our Master page.

According to Microsoft's release notes for the framework, the extension point consists of implementing IServiceProvider and registering it in the Init method of the Global.asax this way: HttpRuntime.WebObjectActivator = new MyProvider();
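
In a Global.asax, a minimal sketch of that wiring could look like the following (MyProvider being a placeholder name for the adapter we are about to build):

using System.Web;

public class Global : HttpApplication
{
    public override void Init()
    {
        base.Init();
        // Plug our custom IServiceProvider as the activator used by Webforms
        // to instantiate Pages, UserControls and MasterPages.
        HttpRuntime.WebObjectActivator = new MyProvider();
    }
}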

Plugging Autofac

When building an Autofac container, we end up with an object implementing the IContainer interface. So, we have to build an adapter that wraps the Autofac container and forwards the resolution calls to it.
The first version is quite straightforward: we call Autofac if the type is registered, otherwise we fall back to the Activator class.
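
A minimal sketch of this first version (assuming the built container is passed through the constructor; this is an illustrative adaptation, not the exact sample code):

using System;
using Autofac;

public class AutofacServiceProvider : IServiceProvider
{
    private readonly IContainer _container;

    public AutofacServiceProvider(IContainer container)
    {
        _container = container;
    }

    public object GetService(Type serviceType)
    {
        // Resolve from Autofac when the type is known, otherwise fall back to Activator
        // (which also handles non-public constructors, like the default activator does).
        return _container.IsRegistered(serviceType)
            ? _container.Resolve(serviceType)
            : Activator.CreateInstance(serviceType, nonPublic: true);
    }
}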

However, it won’t work well. In fact, Webforms subclasses the Webforms objects (Pages, UserControls, MasterPages) at runtime, making them impossible to register in the ContainerBuilder. Therefore, for all those objects we end up in the Activator case.

Fortunately, Autofac provides a way to dynamically declare registrations through the concept of a RegistrationSource. By implementing one, we can register our Webforms objects at runtime.

Subclassed Webforms objects are by default declared in the ASP namespace: if we are asked for a type in this namespace, we generate a registration, otherwise we let the request go through.
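
A sketch of such a RegistrationSource could look like this (illustrative, not the exact code from the sample):

using System;
using System.Collections.Generic;
using Autofac.Builder;
using Autofac.Core;

public class WebformsRegistrationSource : IRegistrationSource
{
    public bool IsAdapterForIndividualComponents => false;

    public IEnumerable<IComponentRegistration> RegistrationsFor(
        Service service,
        Func<Service, IEnumerable<IComponentRegistration>> registrationAccessor)
    {
        // Only generate registrations for the concrete types that ASP.Net generates at
        // runtime, which live in the "ASP" namespace; let every other request go through.
        if (service is IServiceWithType typedService
            && typedService.ServiceType.IsClass
            && typedService.ServiceType.Namespace?.StartsWith("ASP") == true)
        {
            yield return RegistrationBuilder.ForType(typedService.ServiceType)
                .CreateRegistration();
        }
    }
}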

Once we have this RegistrationSource, we can use it in our ContainerBuilder:
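
For example (a sketch; Dependency is a hypothetical implementation of the IDependency interface from the introduction, and AutofacServiceProvider is our adapter):

var builder = new ContainerBuilder();

// Regular application registrations.
builder.RegisterType<Dependency>().As<IDependency>();

// Dynamic registrations for the Webforms objects subclassed at runtime.
builder.RegisterSource(new WebformsRegistrationSource());

// Wire the adapter as the Webforms activator.
HttpRuntime.WebObjectActivator = new AutofacServiceProvider(builder.Build());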

Please note that in this case, we never register the Index page or Master page.

Allowing “per request” lifetime

It would be interesting to make the “per request” lifetime available. This way, all the objects of the request (the page, the handler, the master page, etc.) can share the same instance of the dependencies. It is a kind of singleton, but scoped to a single HTTP request, making it safe to use (unlike a plain singleton).

To provide this, Autofac usually creates a LifetimeScope, uses it and stores it in a per-request bag (the Items property of the current HttpContext).
We are going to do the same in our AutofacServiceProvider: try to retrieve an existing LifetimeScope, create and store it if needed, and dispose it when the request ends. If there is no HttpContext, we fall back to the root scope, the container itself.
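
A sketch of what the provider could become (the ScopeKey name is illustrative, and DisposeRequestScope has to be called from Application_EndRequest in Global.asax):

using System;
using System.Web;
using Autofac;
using Autofac.Core.Lifetime;

public class AutofacServiceProvider : IServiceProvider
{
    private const string ScopeKey = "autofac-request-scope";
    private readonly IContainer _container;

    public AutofacServiceProvider(IContainer container)
    {
        _container = container;
    }

    public object GetService(Type serviceType)
    {
        var scope = GetOrCreateScope();
        return scope.IsRegistered(serviceType)
            ? scope.Resolve(serviceType)
            : Activator.CreateInstance(serviceType, nonPublic: true);
    }

    // Call this from Application_EndRequest to release the per-request instances.
    public static void DisposeRequestScope()
    {
        (HttpContext.Current?.Items[ScopeKey] as ILifetimeScope)?.Dispose();
    }

    private ILifetimeScope GetOrCreateScope()
    {
        var context = HttpContext.Current;
        if (context == null)
            return _container; // No HTTP request: fall back to the root scope.

        if (context.Items[ScopeKey] is ILifetimeScope existingScope)
            return existingScope;

        // Tag the scope so that InstancePerRequest() registrations resolve into it.
        var scope = _container.BeginLifetimeScope(MatchingScopeLifetimeTags.RequestLifetimeScopeTag);
        context.Items[ScopeKey] = scope;
        return scope;
    }
}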

Now, when a dependency is registered with the InstancePerRequest() method, only one instance is created per HTTP request.

Using Microsoft Dependency Injection container

We can use the same technique with the container from Microsoft. Although the container instance implements IServiceProvider, we have to wrap it anyway in order to handle the “per request” scope.

The main difference with Autofac is that there is no RegistrationSource, as this concept only exists in Autofac. However, there is a helper method, ActivatorUtilities.GetServiceOrCreateInstance, which allows creating an instance of an unregistered component while passing registered dependencies to its constructor. We can use it to create our instances.
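
A sketch of the equivalent provider over Microsoft.Extensions.DependencyInjection (names are illustrative; disposing the request scope at the end of the request is handled the same way as with Autofac):

using System;
using System.Web;
using Microsoft.Extensions.DependencyInjection;

public class MicrosoftServiceProvider : IServiceProvider
{
    private const string ScopeKey = "msdi-request-scope";
    private readonly ServiceProvider _rootProvider;

    public MicrosoftServiceProvider(IServiceCollection services)
    {
        _rootProvider = services.BuildServiceProvider();
    }

    public object GetService(Type serviceType)
    {
        // Resolves registered services and creates unregistered ones (like Pages)
        // while injecting their registered dependencies through the constructor.
        return ActivatorUtilities.GetServiceOrCreateInstance(GetRequestProvider(), serviceType);
    }

    private IServiceProvider GetRequestProvider()
    {
        var context = HttpContext.Current;
        if (context == null)
            return _rootProvider;

        if (context.Items[ScopeKey] is IServiceScope existingScope)
            return existingScope.ServiceProvider;

        var scope = _rootProvider.CreateScope();
        context.Items[ScopeKey] = scope;
        return scope.ServiceProvider;
    }
}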

Final word

We’ve seen how to create wrappers around popular dependency injection containers to provide dependency injection for ASP.Net Webforms, thanks to the new extension point available since .Net 4.7.2.
It is now possible to do clean dependency injection in Pages and to prepare our legacy apps for a transition to ASP.Net Core.

You can find the full samples on my GitHub :

Ensuring the correctness of your API

Designing and maintaining an API is hard. Whether you’re distributing assemblies, creating NuGet packages or exposing services (such as WCF, SOAP or REST), you must pay attention to the contracts you provide. The contract must be well designed for the consumer and must be stable over time. In this article, we’re going to see some rules to follow, tips to help and tools to use to ensure that your APIs are great to use.
Although this post focuses on the .Net environment, most of it applies to other environments (Java, Node, Python, etc.).

Versioning

Why version an API?

The first thing to do for an API is to version it. This way, you can show its evolution to your consumers. But don’t be mistaken: a version is not just random numbers, it’s also a contract you make with the consumer about what has changed in your API.

Sometimes, we see “timestamp” versions such as 2018.04.24. While this tells you when the API was built (or at least revised), the information is not very useful. Consider that you’re running version 2018.03.25: you would expect that only a few changes were made in those few weeks, yet the new version could actually contain a huge big-bang in the API.

Conversely, if you’re using version 2011.01.01, you could expect that updating to the new API will take time (7 years of changes, wow!), while the change might be only one added method.

Semantic versioning

In order to avoid those issues, a meaningful way to version is to use semantic versioning (see semver.org).
Using semantic versioning, you provide information that helps your consumers understand the changes made to your API.

It is based on the concept of breaking changes, and each number in x.y.z has a precise meaning.

  • x is the major, which must be incremented in case of a breaking change
  • y is the minor, which must be incremented in case of a non-breaking change
  • z is the patch, which must be incremented when making bug fixes

But how can we tell whether a change is breaking, non-breaking or just a bug fix?

A simple rule of thumb: if the code using your API must be changed when upgrading, then the change is breaking.

Note: a new major doesn’t mean you WILL have breaking changes in your code, it means you MIGHT have some. Sometimes, the major is increased just for “marketing” purposes or because a lot of features were added.

Examples of classification

Here are some changes that are breaking:

  • Removing a public method
  • Removing a public class
  • Removing a public property
  • Removing a parameter from a public method/constructor
  • Adding a parameter to a public method/constructor
  • Making a public member non-public (equivalent to deleting this member)

The changes that are non breaking :

  • Adding a public class
  • Adding a public method
  • Adding a public constructor (only if there was already a declared constructor)
  • Adding a public property
  • Making a non-public member public (equivalent to adding this member)

And the changes that are patches :

  • Any change that has a lower visibility than public (internal, private, etc.)
  • Changes in the implementation of a method/property

Note : Adding a property can be breaking if it is expected to be required (in cases such as SOAP services)

With these examples in mind, it’s pretty easy to understand that only changes to public members count towards the major/minor. Therefore, if you don’t want to be bothered when changing a member, don’t expose it (make it internal, private, etc.). In fact, when designing an API, consider making everything not visible by default and, by exception, make public only the things you do want to expose.

Backward compatibility of an API

When speaking of libraries (assemblies, jar, etc.), there are three kinds of compatibility :

  • With source compatibility, you expect that a new compatible version of the API (new minor or patch) keeps the source code compiling. No change is required.
  • With binary compatibility, you expect that a new compatible version of the API (new minor or patch) keeps the application running, without recompiling.
  • With behavioral compatibility, you expect that, with a new compatible version of the API (new minor or patch), the application keeps the same behavior (according to defined criteria).

Of course, a bug fix, and therefore a patch, might break behavioral compatibility. For example, a method was returning a wrong value (such as 1+1=3) and was fixed (now 1+1=2), which breaks behavioral compatibility. This is accepted as long as the previous behavior is considered buggy. But keep in mind that a consumer might have built a work-around on their side and the fix might cause a regression, so it is worth documenting.

.Net Assembly versioning

Dealing with the binary compatibility in .Net can be tough. Usually in a .Net assembly, you can find a file AssemblyInfo.cs (starting with .NET Core/Standard, this file is auto-generated).
It looks roughly like this :

using System;
using System.Reflection;

[assembly: System.Reflection.AssemblyCompanyAttribute("MyCompany")]
[assembly: System.Reflection.AssemblyConfigurationAttribute("Debug")]
[assembly: System.Reflection.AssemblyFileVersionAttribute("1.0.0.0")]
[assembly: System.Reflection.AssemblyInformationalVersionAttribute("1.0.0")]
[assembly: System.Reflection.AssemblyProductAttribute("MyProduct")]
[assembly: System.Reflection.AssemblyTitleAttribute("MyAssembly")]
[assembly: System.Reflection.AssemblyVersionAttribute("1.0.0.0")]

We can see three different assembly attributes for versioning : AssemblyFileVersion, AssemblyInformationalVersion and AssemblyVersion. They have different purposes.

AssemblyInformationalVersion is, as its name hints, purely informational. You can put almost anything in it, as explained on MSDN:

Although you can specify any text, a warning message appears on compilation if the string is not in the format used by the assembly version number, or if it is in that format but contains wildcard characters. This warning is harmless.

AssemblyFileVersion is the version used by Windows when you’re looking at the properties of a .dll file in the explorer.

AssemblyVersion is the real version used by .Net when checking versions.

When you have unsigned assemblies, and therefore no strong name, you can replace any assembly by any version of the same assembly; .NET won’t prevent you from doing so. However, if the replacement assembly is not binary compatible, you might end up with a MissingMemberException.

When using signed assemblies, the rules tighten: .Net checks that the strong name of the assembly is the same. As a reminder, a strong name is composed of the assembly name, its version, its culture and its cryptographic signature. Here is the strong name of a .NET Framework assembly: System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089

How does Microsoft version its APIs ?

Let’s take a look at what is done in the framework by inspecting the System.Core assembly of .NET 4.7.2:

[assembly: AssemblyTitle("System.Core.dll")]
[assembly: AssemblyProduct("Microsoft® .NET Framework")]
[assembly: AssemblyFileVersion("4.7.3056.0")]
[assembly: AssemblyInformationalVersion("4.7.3056.0")]
[assembly: AssemblyVersion("4.0.0.0")]

First, we can see that AssemblyFileVersion and AssemblyInformationalVersion are set to 4.7.3056.0, which means .NET 4.7.2 (in preview in Windows 10 Insider). Then AssemblyVersion is set to 4.0.0.0. Wait, what? Why not the same version as in the two other attributes? The answer is servicing, updates and that kind of concerns.

In fact, Microsoft wants to be able to replace the assemblies of the framework with a new version at every framework release, since they ensure that those versions are binary compatible. As seen above, it is impossible to replace an assembly with a different strong name, so they “hack” the AssemblyVersion by keeping only the major value and setting the others to zero. As long as they deliver a 4.x version, it will work.

Juggling with BindingRedirects in .Net

Luckily, there is a tool to soften the rules. Often seen but little understood, binding redirects are made for the case where you want to replace an assembly with a newer one whose strong name doesn’t match. It’s a tool on the consumer side that allows them to consciously bypass the rules. In the final configuration file of the application, those few lines serve this purpose:

<dependentAssembly>
  <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
  <bindingRedirect oldVersion="0.0.0.0-9.0.0.0" newVersion="9.0.0.0" />
</dependentAssembly>

Here, any assembly referencing a version of Newtonsoft.Json between 0.0.0.0 and 9.0.0.0 will be redirected at runtime to the assembly with version 9.0.0.0. By bypassing the rules, the consumer acknowledges that the versions are binary compatible (or at least that they understand the risks).

With recent versions of .Net, the tooling can automatically generate the binding redirects needed when using NuGet packages, in the application config file.

REST services versioning

Versioning a REST API is quite similar, although it can be done in multiple ways.
The easiest and most common is to include the major version in the resource URL, such as http://my.great.api/v1/resource . Releasing new minors or patches doesn’t change the URL, but a new major will.
The same remarks apply to the schema of the resources that are sent or received.

Recommendations about versioning

In my opinion, versioning like Microsoft does is a good approach. I usually either use only the major (x.0.0.0) or, my favorite, major and minor (x.y.0.0) for AssemblyVersion. For AssemblyFileVersion, I use x.y.z.b where b is the build number auto-generated by my build factory (TFS/VSTS, TeamCity, etc.).
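
For example, for version 2.1.3 produced by build 347 on the build server, the attributes could look like this (an illustrative sketch):

using System.Reflection;

// Checked by the .Net loader; kept stable within a major/minor so assemblies stay replaceable.
[assembly: AssemblyVersion("2.1.0.0")]
// Full version shown by Windows Explorer, including the patch and the build number.
[assembly: AssemblyFileVersion("2.1.3.347")]
// Purely informational, free to follow SemVer.
[assembly: AssemblyInformationalVersion("2.1.3")]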

Breaking changes in the real world

While most of the time people agree on the above, in practice it is harder than planned.
The first issue is that it is hard to track all the changes made to an API, and therefore it is sometimes possible to introduce a breaking change during a refactoring without noticing. This can be solved by tooling, which we will talk about in the last section of this article.

The second issue is finding ways to avoid breaking changes. A great way to train on this topic is to practice katas with the Baby Steps constraint. We will see some examples of how to deal with common situations.

Avoiding breaking changes

Deleting a parameter in a method

This one is usually easy to deal with : overloads !

public void DoSomethingUseful(int a, int b)
{
    //Do something with those integers
}

If we want to remove a parameter such as b, instead of removing it, we can deprecate the existing method and add an overload with the new signature.

[Obsolete("Will be removed in a future version")]
public void DoSomethingUseful(int a, int b)
{
    DoSomethingUseful(a);
}

public void DoSomethingUseful(int a)
{
    //Do something with that integer
}

New parameter in a method

public void DoSomethingUseful(int a)
{
    //Do something with that integer
}

The solution here is the same as above, although it can be slightly more complicated. In fact, providing a default value for the old overload to pass is not always an easy task and must be well thought out.

[Obsolete("Will be removed in a future version")]
public void DoSomethingUseful(int a)
{
    const int defaultValue = 42;
    DoSomethingUseful(a, defaultValue);
}

public void DoSomethingUseful(int a, int b)
{
    //Do something with those integers
}

Changing a name (method, property)

You realize after the first release that a property/method has a typo, or that its name doesn’t follow your conventions. Unfortunately, you’re stuck with it until the end of life of this major version!
However, it’s still possible to offer a correctly spelled member.

public void DoSomefingUzeful(int a)
{
    //Do something with that integer
}

Becomes

[Obsolete("Sorry guys, I was drunk while coding this method")]
public void DoSomefingUzeful(int a)
{
    DoSomethingUseful(a);
}

public void DoSomethingUseful(int a)
{
    //Do something with that integer
}

You can see in the three cases above that the deprecated members are just wrappers around the valid members. The logic must not stay in the obsolete member, only the “proxying” logic.

Removing something

Sorry, nothing to do here. Just put an [Obsolete] on it in the next release (minor or patch); you’ll give consumers time to adapt. Obviously, when deprecating something, you should offer a valid replacement.

Breaking changes in webservices

When introducing breaking changes in web services (whether REST or SOAP), you must offer the new endpoints side by side with the old ones during a sufficient period of time. Then you have to track the clients that are slow to migrate, which is why identifying the calling clients with API keys or similar solutions is interesting.
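
As an illustration, in an ASP.Net Core API the two majors could simply live side by side as two routed controllers (a hedged sketch with made-up resource and property names):

using Microsoft.AspNetCore.Mvc;

[Route("api/v1/orders")]
public class OrdersV1Controller : Controller
{
    // Old contract, kept alive until the late adopters have migrated.
    [HttpGet]
    public IActionResult Get() => Ok(new[] { new { Id = 1, Total = 42.0 } });
}

[Route("api/v2/orders")]
public class OrdersV2Controller : Controller
{
    // New contract containing the breaking changes.
    [HttpGet]
    public IActionResult Get() => Ok(new[] { new { Id = 1, TotalIncludingTaxes = 50.4 } });
}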

Chase the late adopters of your API

What is a good API ?

A quick tour of what makes a good API, in a few points. The list is, of course, not exhaustive.

API Design

First of all, think about the API you offer; think of it as a product. In the same way you would hire a designer to create an end-user product (a new phone, car, etc.), an API must also be designed.

Quite often, the API is designed after the business code is written, and you end up with an API reflecting your implementation of the business, no matter whether it has a meaning for the consumer. The API must be centered on the use cases of the consumer, and if it doesn’t match your implementation then an adaptation layer is necessary. Designing the API before implementing anything is recommended.

Use your API

Try to consume it! It is the best way to assess its usability. Having trouble consuming your API? Imagine someone not familiar with it.

Naming matters

You should already do it in your code but naming things appropriately is very important. Once again, names must be obvious for the consumer, not the implementer.

Document your API

Even if all your names are meaningful, documentation is very important. Swagger/OpenAPI for example is a great way of documenting REST APIs.

Be useful

Your API must serve a purpose and it must do it well. It does not need to do more (nor less). If you have things that are not relevant to the purpose of your API, hide them as implementation details. Also, don’t try to put the world in a single API: the purpose won’t be clear (no, having everything in one single API is not a purpose), the maintenance will be complicated and the usage even worse. Think modular and granular.

Avoid dependencies

As much as possible. Don’t be that guy who writes an NPM package for upper-casing a word and pulls in more than 1000 dependencies to do it.
Use only the dependencies you absolutely need. If your package starts to reference more and more dependencies, think about splitting it into a core module with extension modules, each extension having its necessary dependencies and the core almost none.

Stability

Of course, as seen above, don’t break things without warning. And even when breaking, think twice before offering a totally different API that has nothing in common with the previous version. Also, don’t push a new major every time: it might take time to integrate.

Joshua Bloch, Principal Software Engineer at Google, made a great slide deck that goes deeper (although it mostly applies to Java). You can find it here

Tools for creating and maintaining an API

Documenting

In the case of a REST API, a well-known specification is Swagger/OpenAPI. In the same way that WSDL files documented SOAP services, Swagger describes the API. Additionally, it is possible to plug in a UI that displays the documentation nicely and lets you query it.
You can find a tutorial in the Microsoft documentation to go further with ASP.Net Core

Ensuring compatibility

As said earlier, it is hard to track changes to the contracts of the objects you expose. Fortunately, it is possible to test this and integrate it into the continuous integration process.
Using two NuGet packages, ApprovalTests and PublicApiGenerator, and a test framework (such as xUnit), one can write non-regression tests on the contracts you expose.
The code would look like this:

using System;
using ApprovalTests;
using ApprovalTests.Reporters;
using Xunit;

[assembly: UseReporter(typeof(XUnit2Reporter))]
namespace MyGreatApi.Tests.ApiApprovals
{
    public class ApprovalTests
    {
        [Fact]
        public void TestApi()
        {
            var publicApiSnapshot = PublicApiGenerator.ApiGenerator.GeneratePublicApi(typeof(MyGreatApi.Person).Assembly);
            Approvals.Verify(publicApiSnapshot);
        }
    }
}

This test takes the API assembly; PublicApiGenerator examines all the public types (they are the API contract) and generates a snapshot of them. Then, Approvals compares it to an approved snapshot of the API stored in source control. If those two match, the test succeeds. If there is a change in your API (whether it is breaking or not), it fails. A manual operation is then needed: first examine the difference (is it breaking or not?) and decide whether you acknowledge it, or review your work to make the change non-breaking, for example.
Coupled with a code review, you should not let any contract change slip through the cracks. It will also help you increment the numbers of your semantic version correctly.

Conclusion

As introduced, writing an API is hard and is often taken lightly. With this article, you can now understand the issues related to this topic and hopefully you will be able to tackle them in order to give your consumers APIs that are enjoyable to consume. Don’t forget to design well and good luck !

Securing the connection between Consul and ASP.Net Core

The previous article introduced how to use Consul to store the configuration of ASP.Net Core (or classic .Net) applications. However, it was missing an important thing: security! In this article, we will see how to address this by using the ACL mechanism built into Consul, together with the previously developed code, in order to secure the connection between Consul and ASP.Net Core.

What’s required on Consul

On a normal Consul installation, the cluster should be secured with TLS (see here) to at least verify the authenticity of the server and force the API to use HTTPS.
Going further, it’s possible to use an ACL (Access Control List) token to give rights to the different applications. For example, you can create an ACL to allow App1 to read its configuration key/values, declare itself in the service catalog, consume the service catalog and update its health. The ACL would prevent App1 from reading other apps’ configuration or declaring another service in the catalog.

Declaring an ACL rule is easy once ACLs are activated (see here); it uses the following syntax in the Consul UI:

key "App1/Dev" {
 policy = "read"
}

After creating the ACL, the UI gives a token which looks like a UUID; this token needs to be passed in the HTTP request headers.
The default policy can be configured to deny everything for anonymous calls.
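
To illustrate the App1 scenario described above, the rules could look like this (an illustrative sketch using the same rule syntax, with made-up service names):

# Read-only access to App1's configuration keys
key "App1/" {
 policy = "read"
}

# App1 can register itself and update its health
service "app1" {
 policy = "write"
}

# App1 can consume the rest of the service catalog
service "" {
 policy = "read"
}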

Adapting the code

Let’s adapt the code (some code hidden for brevity):

public class ConsulConfigurationProvider : ConfigurationProvider
{
    private const string ConsulIndexHeader = "X-Consul-Index";
    private const string ConsulAclTokenHeader = "X-Consul-Token";

    private readonly string _path;
    private readonly string _consulAclToken;
    private readonly HttpClient _httpClient;
    /* ... */

    public ConsulConfigurationProvider(IEnumerable<Uri> consulUrls, string path, string consulAclToken = null)
    {
        _path = path;
        _consulAclToken = consulAclToken;
        _consulUrls = consulUrls.Select(u => new Uri(u, $"v1/kv/{path}")).ToList();

        /* ... */
    }

    private async Task<IDictionary<string, string>> ExecuteQueryAsync(bool isBlocking = false)
    {
        var requestUri = isBlocking ? $"?recurse=true&index={_consulConfigurationIndex}" : "?recurse=true";
        using (var request = new HttpRequestMessage(HttpMethod.Get, new Uri(_consulUrls[_consulUrlIndex], requestUri)))
        {
            if (!string.IsNullOrWhiteSpace(_consulAclToken))
                request.Headers.Add(ConsulAclTokenHeader, _consulAclToken);

            using (var response = await _httpClient.SendAsync(request))
            {
                /* ... */
            }
        }
    }
}

The only change is to receive a token through the constructor and pass it in the header of the request.
Of course, the methods in the ConfigurationSource and in the extension methods should be updated too, as sketched below.
Don’t forget that the token is a secret and therefore should be handled properly (as a Docker secret, a secret in Azure Key Vault, etc.)
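
A sketch of those updates, building on the classes from the previous article:

public class ConsulConfigurationSource : IConfigurationSource
{
    public IEnumerable<Uri> ConsulUrls { get; }
    public string Path { get; }
    public string ConsulAclToken { get; }

    public ConsulConfigurationSource(IEnumerable<Uri> consulUrls, string path, string consulAclToken = null)
    {
        ConsulUrls = consulUrls;
        Path = path;
        ConsulAclToken = consulAclToken;
    }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        return new ConsulConfigurationProvider(ConsulUrls, Path, ConsulAclToken);
    }
}

public static class ConsulConfigurationExtensions
{
    public static IConfigurationBuilder AddConsul(this IConfigurationBuilder configurationBuilder, IEnumerable<Uri> consulUrls, string consulPath, string consulAclToken = null)
    {
        return configurationBuilder.Add(new ConsulConfigurationSource(consulUrls, consulPath, consulAclToken));
    }
}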

Going further with client certificate

It’s even possible to use a client certificate to authenticate the client. For this, a certificate must be installed in the machine certificate store. From a code perspective, what is needed is a method to retrieve the certificate and use it with the HttpClient instance.

First of all, here’s a sample of how to retrieve a certificate by its thumbprint:

private static X509Certificate2Collection GetLocalMachineCertificateByThumbprint(string thumbprint)
{
    using (var x509Store = new X509Store(StoreLocation.LocalMachine))
    {
        x509Store.Open(OpenFlags.OpenExistingOnly);
        return x509Store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, true);
    }
}

We can now change the constructor of the ConfigurationProvider to use this.

public ConsulConfigurationProvider(IEnumerable<Uri> consulUrls, string path, string consulAclToken = null, string clientCertThumbprint = null)
{
    _path = path;
    _consulAclToken = consulAclToken;
    _consulUrls = consulUrls.Select(u => new Uri(u, $"v1/kv/{path}")).ToList();

    if (_consulUrls.Count <= 0)
    {
        throw new ArgumentOutOfRangeException(nameof(consulUrls));
    }

    var httpClientHandler = new HttpClientHandler { AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip };

    if (!string.IsNullOrWhiteSpace(clientCertThumbprint))
        httpClientHandler.ClientCertificates.AddRange(GetLocalMachineCertificateByThumbprint(clientCertThumbprint));

    _httpClient = new HttpClient(httpClientHandler, true);

    _configurationListeningTask = new Task(ListenToConfigurationChanges);
}

The changes here are the call to the method above, if a thumbprint is provided, and the use of the result in the HttpClientHandler.
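
As a usage sketch, assuming the AddConsul extension has also been extended to take the token and the thumbprint (CONSUL_TOKEN and CONSUL_CERT_THUMBPRINT being illustrative keys supplied by another configuration source such as environment variables):

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration(cb =>
        {
            var configuration = cb.Build();
            cb.AddConsul(
                new[] { configuration.GetValue<Uri>("CONSUL_URL") },
                configuration.GetValue<string>("CONSUL_PATH"),
                configuration.GetValue<string>("CONSUL_TOKEN"),           // ACL token (a secret)
                configuration.GetValue<string>("CONSUL_CERT_THUMBPRINT")); // client certificate thumbprint
        })
        .UseStartup<Startup>()
        .Build();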

Final word

The ConfigurationProvider can now authenticate with its client certificate and present an ACL token to authorize its actions and access its private resources (read its configuration, update its health, etc.). Everything is now secured!

Using Consul for storing the configuration in ASP.Net Core

Consul from Hashicorp is a tool used in distributed architectures for service discovery, health checking and KV storage for configuration. This article details how to use Consul for storing the configuration of an ASP.Net Core application by implementing a ConfigurationProvider.

Why use a tool to store the configuration ?

Usually, the configuration of .Net apps is stored in configuration files such as App.config, Web.config or appsettings.json. Starting with ASP.Net Core, a new and extensible configuration framework appeared; it allows storing the configuration outside of the config files and retrieving it from the command line, environment variables, etc.
The issue with configuration files is that they can be difficult to manage. In fact, we usually end up with a base configuration file and transformation files to override it for each environment. They’re delivered at the same time as the binaries and therefore, changing a configuration value means redeploying configuration and binaries. Not very convenient.
Using a separate tool to centralize the configuration gives us two things:

  • Having the same configuration across all the machines (no machine out of sync)
  • Being able to change a value without redeploying anything (useful for feature toggling)

Introducing Consul

The purpose of this article is not to talk about Consul itself but to focus on using it with ASP.Net Core.
However, it can be useful to recall a few things. Consul has a Key/Value store; it’s organized hierarchically, and folders can be created to map the different applications, environments, etc. Here’s an example of a hierarchy that will be used throughout this article. Each leaf node can contain a JSON value.

/
|-- App1
| |-- Dev
| | |-- ConnectionStrings
| | \-- Settings
| |-- Staging
| | |-- ConnectionStrings
| | \-- Settings
| \-- Prod
|   |-- ConnectionStrings
|   \-- Settings
\-- App2
  |-- Dev
  | |-- ConnectionStrings
  | \-- Settings
  |-- Staging
  | |-- ConnectionStrings
  | \-- Settings
  \-- Prod
    |-- ConnectionStrings
    \-- Settings

Querying is easy as it is a REST API, the keys are in the query. For example the query for getting the settings of App1 in the Dev environment looks like this : GET http://<host>:8500/v1/kv/App1/Dev/Settings
The response looks like this :

HTTP/1.1 200 OK
Content-Type: application/json
X-Consul-Index: 1071
X-Consul-Knownleader: true
X-Consul-Lastcontact: 0

[
    {
        "LockIndex": 0,
        "Key": "App1/Dev/Settings",
        "Flags": 0,
        "Value": "ewogIkludCI6NDIsCiAiT2JqZWN0IjogewogICJTdHJpbmciOiAidG90byIsCiAgIkJsYSI6IG51bGwsCiAgIk9iamVjdCI6IHsKICAgIkRhdGUiOiAiMjAxOC0wMi0yM1QxNjoyMTowMFoiCiAgfQogfQp9Cgo=",
        "CreateIndex": 501,
        "ModifyIndex": 1071
    }
]

It’s also possible to query any node in a recursive manner, GET http://<host>:8500/v1/kv/App1/Dev?recurse gives :

HTTP/1.1 200 OK
Content-Type: application/json
X-Consul-Index: 1071
X-Consul-Knownleader: true
X-Consul-Lastcontact: 0

[
    {
        "LockIndex": 0,
        "Key": "App1/Dev/",
        "Flags": 0,
        "Value": null,
        "CreateIndex": 75,
        "ModifyIndex": 75
    },
    {
        "LockIndex": 0,
        "Key": "App1/Dev/ConnectionStrings",
        "Flags": 0,
        "Value": "ewoiRGF0YWJhc2UiOiAiU2VydmVyPXRjcDpkYmRldi5kYXRhYmFzZS53aW5kb3dzLm5ldDtEYXRhYmFzZT1teURhdGFCYXNlO1VzZXIgSUQ9W0xvZ2luRm9yRGJdQFtzZXJ2ZXJOYW1lXTtQYXNzd29yZD1teVBhc3N3b3JkO1RydXN0ZWRfQ29ubmVjdGlvbj1GYWxzZTtFbmNyeXB0PVRydWU7IiwKIlN0b3JhZ2UiOiJEZWZhdWx0RW5kcG9pbnRzUHJvdG9jb2w9aHR0cHM7QWNjb3VudE5hbWU9ZGV2YWNjb3VudDtBY2NvdW50S2V5PW15S2V5OyIKfQ==",
        "CreateIndex": 155,
        "ModifyIndex": 155
    },
    {
        "LockIndex": 0,
        "Key": "App1/Dev/Settings",
        "Flags": 0,
        "Value": "ewogIkludCI6NDIsCiAiT2JqZWN0IjogewogICJTdHJpbmciOiAidG90byIsCiAgIkJsYSI6IG51bGwsCiAgIk9iamVjdCI6IHsKICAgIkRhdGUiOiAiMjAxOC0wMi0yM1QxNjoyMTowMFoiCiAgfQogfQp9Cgo=",
        "CreateIndex": 501,
        "ModifyIndex": 1071
    }
]

We can see multiple things in these responses. First, each key has its value encoded in Base64, to avoid mixing the JSON of the response with the JSON of the value. Then, we notice the “Index” properties, both in the JSON and in the HTTP headers. Those properties are a kind of timestamp: they let us know if/when a value was created or updated, and they will allow us to know whether we need to reload the configuration.
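
For instance, decoding one entry of the response could look like this (a small sketch where token is one JToken of the parsed JSON array):

// The "Value" field is a Base64 string (or null for folder nodes).
var encodedValue = token.Value<string>("Value");
var json = encodedValue == null
    ? null
    : Encoding.UTF8.GetString(Convert.FromBase64String(encodedValue));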

ASP.Net Core configuration system

The configuration infrastructure relies on several things in the Microsoft.Extensions.Configuration.Abstractions NuGet package. First, IConfigurationProvider is the interface to implement to supply configuration values; then, the purpose of IConfigurationSource is to provide an instance of the implemented configuration provider.
You can look at several implementations on the ASP.Net GitHub.
Fortunately, instead of directly implementing IConfigurationProvider, it’s possible to inherit from a class named ConfigurationProvider in the Microsoft.Extensions.Configuration package, which takes care of the boilerplate code (such as the reload token implementation).
This class contains two important things :

/* Excerpt from the implementation */
public abstract class ConfigurationProvider : IConfigurationProvider
{
    protected IDictionary<string, string> Data { get; set; }
    public virtual void Load()
    {
    }
}

Data is the dictionary containing all the keys and values; Load is the method called at the beginning of the application and, as its name indicates, it loads the configuration from somewhere (a config file, or our Consul instance) and hydrates the dictionary.

Loading consul configuration in ASP.Net Core

The first implementation we can make is going to use an HttpClient to fetch the configuration from Consul. Then, as the configuration is hierarchical (it’s a tree), we will need to flatten it in order to put it in the dictionary. Easy, no?

First thing, implementing the Load method. It doesn’t do much, as we need an asynchronous version; it just blocks on the asynchronous call (blocking is not ideal, but this is inspired by the ASP.Net Core implementation).

public override void Load() => LoadAsync().ConfigureAwait(false).GetAwaiter().GetResult();

Then, we query Consul to get the configuration values in a recursive way (see above). The method uses some fields defined in the class, such as _consulUrls, which is a list of URLs to Consul instances (for fail-over), and _path, which is the prefix of the keys (such as App1/Dev). Once we get the JSON, we iterate over each key/value pair, decoding the Base64 string and then flattening all the keys and the JSON objects.

private async Task<IDictionary<string, string>> ExecuteQueryAsync()
{
    int consulUrlIndex = 0;
    while (true)
    {
        try
        {
            using (var httpClient = new HttpClient(new HttpClientHandler { AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip }, true))
            using (var request = new HttpRequestMessage(HttpMethod.Get, new Uri(_consulUrls[consulUrlIndex], "?recurse=true")))
            using (var response = await httpClient.SendAsync(request))
            {
                response.EnsureSuccessStatusCode();
                var tokens = JToken.Parse(await response.Content.ReadAsStringAsync());
                return tokens
                    .Select(k => KeyValuePair.Create
                    (
                        k.Value<string>("Key").Substring(_path.Length + 1),
                        k.Value<string>("Value") != null ? JToken.Parse(Encoding.UTF8.GetString(Convert.FromBase64String(k.Value<string>("Value")))) : null
                    ))
                    .Where(v => !string.IsNullOrWhiteSpace(v.Key))
                    .SelectMany(Flatten)
                    .ToDictionary(v => ConfigurationPath.Combine(v.Key.Split('/')), v => v.Value, StringComparer.OrdinalIgnoreCase);
            }
        }
        catch
        {
            consulUrlIndex++;
            if (consulUrlIndex >= _consulUrls.Count)
                throw;
        }
    }
}

The method that flattens the keys and values is a simple Depth First Search on the tree.

private static IEnumerable<KeyValuePair<string, string>> Flatten(KeyValuePair<string, JToken> tuple)
{
    if (!(tuple.Value is JObject value))
        yield break;

    foreach (var property in value)
    {
        var propertyKey = $"{tuple.Key}/{property.Key}";
        switch (property.Value.Type)
        {
            case JTokenType.Object:
                foreach (var item in Flatten(KeyValuePair.Create(propertyKey, property.Value)))
                    yield return item;
                break;
            case JTokenType.Array:
                break;
            default:
                yield return KeyValuePair.Create(propertyKey, property.Value.Value<string>());
                break;
        }
    }
}

The whole class, with its constructor and its fields, looks like this:

public class SimpleConsulConfigurationProvider : ConfigurationProvider
{
    private readonly string _path;
    private readonly IReadOnlyList<Uri> _consulUrls;

    public SimpleConsulConfigurationProvider(IEnumerable<Uri> consulUrls, string path)
    {
        _path = path;
        _consulUrls = consulUrls.Select(u => new Uri(u, $"v1/kv/{path}")).ToList();

        if (_consulUrls.Count <= 0)
        {
            throw new ArgumentOutOfRangeException(nameof(consulUrls));
        }
    }

    public override void Load() => LoadAsync().ConfigureAwait(false).GetAwaiter().GetResult();

    private async Task LoadAsync()
    {
        Data = await ExecuteQueryAsync();
    }

    private async Task<IDictionary<string, string>> ExecuteQueryAsync()
    {
        int consulUrlIndex = 0;
        while (true)
        {
            try
            {
                var requestUri = "?recurse=true";
                using (var httpClient = new HttpClient(new HttpClientHandler { AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip }, true))
                using (var request = new HttpRequestMessage(HttpMethod.Get, new Uri(_consulUrls[consulUrlIndex], requestUri)))
                using (var response = await httpClient.SendAsync(request))
                {
                    response.EnsureSuccessStatusCode();
                    var tokens = JToken.Parse(await response.Content.ReadAsStringAsync());
                    return tokens
                        .Select(k => KeyValuePair.Create
                        (
                            k.Value<string>("Key").Substring(_path.Length + 1),
                            k.Value<string>("Value") != null ? JToken.Parse(Encoding.UTF8.GetString(Convert.FromBase64String(k.Value<string>("Value")))) : null
                        ))
                        .Where(v => !string.IsNullOrWhiteSpace(v.Key))
                        .SelectMany(Flatten)
                        .ToDictionary(v => ConfigurationPath.Combine(v.Key.Split('/')), v => v.Value, StringComparer.OrdinalIgnoreCase);
                }
            }
            catch
            {
                consulUrlIndex = consulUrlIndex + 1;
                if (consulUrlIndex >= _consulUrls.Count)
                    throw;
            }
        }
    }

    private static IEnumerable<KeyValuePair<string, string>> Flatten(KeyValuePair<string, JToken> tuple)
    {
        if (!(tuple.Value is JObject value))
            yield break;

        foreach (var property in value)
        {
            var propertyKey = $"{tuple.Key}/{property.Key}";
            switch (property.Value.Type)
            {
                case JTokenType.Object:
                    foreach (var item in Flatten(KeyValuePair.Create(propertyKey, property.Value)))
                        yield return item;
                    break;
                case JTokenType.Array:
                    break;
                default:
                    yield return KeyValuePair.Create(propertyKey, property.Value.Value<string>());
                    break;
            }
        }
    }
}

Dynamic configuration reloading

We can go further by using Consul’s change notification. It works by just adding a parameter (the last configuration index value); the HTTP request then blocks until the next configuration change (or the HttpClient timeout).
Compared to the previous class, we just have to add a method, ListenToConfigurationChanges, to listen in the background to the blocking HTTP endpoint of Consul, and refactor a little.

public class ConsulConfigurationProvider : ConfigurationProvider
{
    private const string ConsulIndexHeader = "X-Consul-Index";

    private readonly string _path;
    private readonly HttpClient _httpClient;
    private readonly IReadOnlyList<Uri> _consulUrls;
    private readonly Task _configurationListeningTask;
    private int _consulUrlIndex;
    private int _failureCount;
    private int _consulConfigurationIndex;

    public ConsulConfigurationProvider(IEnumerable<Uri> consulUrls, string path)
    {
        _path = path;
        _consulUrls = consulUrls.Select(u => new Uri(u, $"v1/kv/{path}")).ToList();

        if (_consulUrls.Count <= 0)
        {
            throw new ArgumentOutOfRangeException(nameof(consulUrls));
        }

        _httpClient = new HttpClient(new HttpClientHandler { AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip }, true);
        _configurationListeningTask = new Task(ListenToConfigurationChanges);
    }

    public override void Load() => LoadAsync().ConfigureAwait(false).GetAwaiter().GetResult();

    private async Task LoadAsync()
    {
        Data = await ExecuteQueryAsync();

        if (_configurationListeningTask.Status == TaskStatus.Created)
            _configurationListeningTask.Start();
    }

    private async void ListenToConfigurationChanges()
    {
        while (true)
        {
            try
            {
                if (_failureCount > _consulUrls.Count)
                {
                    _failureCount = 0;
                    await Task.Delay(TimeSpan.FromMinutes(1));
                }

                Data = await ExecuteQueryAsync(true);
                OnReload();
                _failureCount = 0;
            }
            catch (TaskCanceledException)
            {
                _failureCount = 0;
            }
            catch
            {
                _consulUrlIndex = (_consulUrlIndex + 1) % _consulUrls.Count;
                _failureCount++;
            }
        }
    }

    private async Task<IDictionary<string, string>> ExecuteQueryAsync(bool isBlocking = false)
    {
        var requestUri = isBlocking ? $"?recurse=true&index={_consulConfigurationIndex}" : "?recurse=true";
        using (var request = new HttpRequestMessage(HttpMethod.Get, new Uri(_consulUrls[_consulUrlIndex], requestUri)))
        using (var response = await _httpClient.SendAsync(request))
        {
            response.EnsureSuccessStatusCode();
            if (response.Headers.Contains(ConsulIndexHeader))
            {
                var indexValue = response.Headers.GetValues(ConsulIndexHeader).FirstOrDefault();
                int.TryParse(indexValue, out _consulConfigurationIndex);
            }

            var tokens = JToken.Parse(await response.Content.ReadAsStringAsync());
            return tokens
                .Select(k => KeyValuePair.Create
                    (
                        k.Value<string>("Key").Substring(_path.Length + 1),
                        k.Value<string>("Value") != null ? JToken.Parse(Encoding.UTF8.GetString(Convert.FromBase64String(k.Value<string>("Value")))) : null
                    ))
                .Where(v => !string.IsNullOrWhiteSpace(v.Key))
                .SelectMany(Flatten)
                .ToDictionary(v => ConfigurationPath.Combine(v.Key.Split('/')), v => v.Value, StringComparer.OrdinalIgnoreCase);
        }
    }

    private static IEnumerable<KeyValuePair<string, string>> Flatten(KeyValuePair<string, JToken> tuple)
    {
        if (!(tuple.Value is JObject value))
            yield break;

        foreach (var property in value)
        {
            var propertyKey = $"{tuple.Key}/{property.Key}";
            switch (property.Value.Type)
            {
                case JTokenType.Object:
                    foreach (var item in Flatten(KeyValuePair.Create(propertyKey, property.Value)))
                        yield return item;
                    break;
                case JTokenType.Array:
                    break;
                default:
                    yield return KeyValuePair.Create(propertyKey, property.Value.Value<string>());
                    break;
            }
        }
    }
}

Plug everything together

We now have a ConfigurationProvider; let’s write a ConfigurationSource to create our provider.

public class ConsulConfigurationSource : IConfigurationSource
{
    public IEnumerable<Uri> ConsulUrls { get; }
    public string Path { get; }

    public ConsulConfigurationSource(IEnumerable<Uri> consulUrls, string path)
    {
        ConsulUrls = consulUrls;
        Path = path;
    }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        return new ConsulConfigurationProvider(ConsulUrls, Path);
    }
}

And some extension methods to use our source easily:

public static class ConsulConfigurationExtensions
{
    public static IConfigurationBuilder AddConsul(this IConfigurationBuilder configurationBuilder, IEnumerable<Uri> consulUrls, string consulPath)
    {
        return configurationBuilder.Add(new ConsulConfigurationSource(consulUrls, consulPath));
    }

    public static IConfigurationBuilder AddConsul(this IConfigurationBuilder configurationBuilder, IEnumerable<string> consulUrls, string consulPath)
    {
        return configurationBuilder.AddConsul(consulUrls.Select(u => new Uri(u)), consulPath);
    }
}

We can now declare the Consul source in our Program.cs, using other sources (such as environment variables or command line arguments) to provide the URLs.

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
    .ConfigureAppConfiguration(cb =>
        {
            var configuration = cb.Build();
            cb.AddConsul(new[] { configuration.GetValue<Uri>("CONSUL_URL") }, configuration.GetValue<string>("CONSUL_PATH"));
        })
        .UseStartup<Startup>()
        .Build();

Now, it’s possible to use the standard configuration patterns of ASP.Net Core such as Options.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddOptions();
    services.Configure<AppSettingsOptions>(Configuration.GetSection("Settings"));
    services.Configure<AccountingFeaturesOptions>(Configuration.GetSection("FeatureFlags"));
    services.Configure<CartFeaturesOptions>(Configuration.GetSection("FeatureFlags"));
    services.Configure<CatalogFeaturesOptions>(Configuration.GetSection("FeatureFlags"));
}

To use them in our code, be careful how you consume the options: for options that can be reloaded dynamically, using IOptions<T> would only give you the initial value. Instead, ASP.Net Core requires using IOptionsSnapshot<T>.
This scenario is really awesome for feature toggling, as you can enable and disable new features just by changing the toggle value in Consul and, without delivering anything, customers can use those new features. In the same manner, if a feature is buggy, you can disable it without rolling back or hot-fixing.

public class CartController : Controller
{
    [HttpPost]
    public IActionResult AddProduct([FromServices]IOptionsSnapshot<CartFeaturesOptions> options, [FromBody] Product product)
    {
        var cart = _cartService.GetCart(this.User);
        cart.Add(product);
        if (options.Value.UseCartAdvisorFeature)
        {
            ViewBag.CartAdvice = _cartAdvisor.GiveAdvice(cart);
        }
        return View(cart);
    }
}

Conclusion

Those few lines of code allowed us to add support for Consul configuration in our ASP.Net Core application. In fact, any application (even a classic .Net app that uses the Microsoft.Extensions.Configuration packages) can benefit from this. It is very useful in a DevOps environment: you can centralize all your configurations in one place and use hot reloading for live feature toggling.

Deploying Sonarqube on Azure WebApp for Containers

Sonarqube is a tool for developers to track the quality of a project. It provides a dashboard to view issues on a code base and integrates nicely with VSTS for analyzing pull requests, a good way to keep improving the quality of our apps.
Deploying, running and maintaining Sonarqube can however be a little troublesome. Usually, it’s done inside a VM that needs to be maintained, secured, etc. Even in Azure, a VM needs maintenance.

What if we could use the power of other cloud services to host our Sonarqube? The database could easily go into SQL Azure. But what about hosting the app? Hosting in a container offering (ACS/AKS) can be a little complicated to handle (and deploying a full Kubernetes cluster just for Sonarqube is a bit extreme). Azure Container Instances (ACI) is quite expensive for running a container permanently.

That leaves us with Web App for Containers, which allows us to run a container inside the context of App Service for Linux. The main interest is that everything is managed: from running and updating the host to certificate management and custom domains.

First try, running the sonarqube image

On Docker Hub, a Sonarqube image is available to pull here; it runs on Linux using an Alpine distribution.
Using the Azure CLI, we can create a deployment that runs this image on Web App for Containers.

az group create --name "mysonarqubegroup" --location "West Europe"
az appservice plan create --resource-group "mysonarqubegroup" --name "mysonarqubeplan" --sku "S1" --is-linux
az webapp create --resource-group "mysonarqubegroup" --plan "mysonarqubeplan" --name "mysonarqube" --deployment-container-image-name "sonarqube"

This is going to create a resource group to deploy our resources; it then creates an App Service plan running on Linux and finally creates its associated Web App running the sonarqube image.
You can try to run those commands, however a few things are not going to work as expected.
First, it’s going to use H2 as the database, which is the embedded database of Sonarqube. It is not advised to run in production with this database; you should instead use SQL Server or PostgreSQL. Secondly, you can start to customize your instance, install plugins, etc., but tomorrow morning you’re going to wake up with a fresh instance and all the things you installed will be gone. Weird! You might feel like Bill Murray on Groundhog Day.

In fact, all of this can be explained by the fact that the container is stateless. Nothing is persisted nor shared between instances and reboots. When the pool recycles, a new instance starts fresh; therefore all your changes, such as installed plugins, are discarded as they were written to the disk inside the running container.

Persist all the things !

With the previous commands, we were stuck with the embedded database and with no persistence across container reboots. Let’s see how we can improve that and solve those problems.
First of all, we want to use a regular SQL database such as SQL Azure (it’s also possible to use managed PostgreSQL instances, but we won’t cover this).

With a few commands, we can set-up a database ready to host our data :

az sql server create --name "sonarqubedbserver" --resource-group "mysonarqubegroup" --location "West Europe" --admin-user "sonarqube" --admin-password "mySup3rS3c3retP@ssw0rd"
az sql db create --resource-group "mysonarqubegroup" --server "sonarqubedbserver" --name "sonarqube" --service-objective "S0" --collation "SQL_Latin1_General_CP1_CS_AS"
az sql server firewall-rule create --resource-group "mysonarqubegroup" --server "sonarqubedbserver" -n "AllowAllWindowsAzureIps" --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0

We now have a database with the correct collation and a firewall rule to allow traffic from inside Azure datacenters (so that our container can communicate with the DB). We just have to make the container use it.

In order to persist the state across reboots, Web App for Containers has a little-known option which allows mounting a volume that is mapped to a folder on the host (and therefore uses the App Service storage). All the files written in this volume are persisted across reboots. The volume is mounted by App Service at the path /home. The downside is that, at build time in Docker, we cannot use this folder, as its content is going to be discarded at mount. Additionally, we have to make Sonarqube use this directory to store all its state.

The vanilla Sonarqube image uses the folder /opt/sonarqube. One way to achieve what we want is to move the content we need from /opt/sonarqube to /home/sonarqube and then create symbolic links to preserve the layout. Unfortunately, the vanilla Sonarqube image also declares a volume on /opt/sonarqube/data, so we won’t be able to move, replace or update this folder. All of this can be done by adding a thin layer to the Docker image that contains a shell script doing all the work.
First, the Dockerfile is quite simple:

FROM sonarqube:7.0-alpine
COPY entrypoint.sh ./bin/
RUN chmod +x ./bin/entrypoint.sh
ENTRYPOINT ["./bin/entrypoint.sh"]

It takes the vanilla image, adds a shell script that we’re going to see next, gives it the execute permission and declares it as the entry point of the container.

The first part of entrypoint.sh prepares all the required folders by either creating or moving them, then adds the symbolic links.

#!/bin/sh

echo Preparing SonarQube container

mkdir -p /home/sonarqube/data
chown -R sonarqube:sonarqube /home/sonarqube

mv -n /opt/sonarqube/conf /home/sonarqube
mv -n /opt/sonarqube/logs /home/sonarqube
mv -n /opt/sonarqube/extensions /home/sonarqube

chown -R sonarqube:sonarqube /home/sonarqube/data
chown -R sonarqube:sonarqube /home/sonarqube/conf
chown -R sonarqube:sonarqube /home/sonarqube/logs
chown -R sonarqube:sonarqube /home/sonarqube/extensions

rm -rf /opt/sonarqube/conf
rm -rf /opt/sonarqube/logs
rm -rf /opt/sonarqube/extensions

ln -s /home/sonarqube/conf /opt/sonarqube/conf
ln -s /home/sonarqube/logs /opt/sonarqube/logs
ln -s /home/sonarqube/extensions /opt/sonarqube/extensions

The second part of entrypoint.sh is simply the original shell script provided with Sonarqube (available here), adapted to our needs:

chown -R sonarqube:sonarqube $SONARQUBE_HOME

set -e

if [ "${1:0:1}" != '-' ]; then
  exec "$@"
fi

echo Launching SonarQube instance

exec su-exec sonarqube \
  java -jar lib/sonar-application-$SONAR_VERSION.jar \
  -Dsonar.log.console=true \
  -Dsonar.jdbc.url="$SQLAZURECONNSTR_SONARQUBE_JDBC_URL" \
  -Dsonar.web.javaAdditionalOpts="$SONARQUBE_WEB_JVM_OPTS -Djava.security.egd=file:/dev/./urandom" \
  -Dsonar.path.data="/home/sonarqube/data" \
  "$@"

The differences are in the parameters: we use the variable $SQLAZURECONNSTR_SONARQUBE_JDBC_URL, which contains the connection string to our database, we don’t need jdbc.username nor jdbc.password anymore, and we also use the persisted directory /home/sonarqube/data for the data.

We can then build the Docker image and push it to Docker Hub or an Azure Container Registry.

docker build -t mysonarqube:latest .
docker tag mysonarqube:latest <myrepo>/mysonarqube:latest
docker push <myrepo>/mysonarqube:latest

We’re now ready to use it and for that, we need several other commands.

az webapp config connection-string set --resource-group "mysonarqubegroup" --name "mysonarqube" -t SQLAzure --settings SONARQUBE_JDBC_URL="jdbc:sqlserver://sonarqubedbserver.database.windows.net:1433;database=sonarqube;user=sonarqube@sonarqubedbserver;password=mySup3rS3c3retP@ssw0rd;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"

This creates the connection string that will be used by Sonarqube, the $SQLAZURECONNSTR_SONARQUBE_JDBC_URL variable we talked about earlier.

az webapp config set --resource-group "mysonarqubegroup" --name "mysonarqube" --always-on true
az webapp log config --resource-group "mysonarqubegroup" --name "mysonarqube" --docker-container-logging filesystem
az webapp config container set --resource-group "mysonarqubegroup" --name "mysonarqube" --enable-app-service-storage true --docker-custom-image-name "<myrepo>/mysonarqube:latest"

We configure a few other things: the first line activates the Always-On capability of App Service, then we enable the container logging feature (all the stdout/stderr will be persisted to disk and available to see what’s going on inside the container), and finally we configure the container with our brand new image and activate the option to persist files.
We’re good to go!

Wrapping up all the things together

All the files to build the Docker image are available on my GitHub repository. The image is also available to pull from my Docker Hub.
We can improve the commands to use variables; with PowerShell, that leaves us with this script:

$resourceGroupName = "mysonarqubedeployment"
$location = "West Europe"
$sqlCredentials = Get-Credential
$sqlServerName = "mysonarqubedeployment"
$databaseSku = "S0"
$databaseName = "sonarqube"
$appServiceName = "mysonarqubedeployment"
$appServiceSku = "S1"
$appName = "mysonarqubedeployment"
$containerImage = "natmarchand/sonarqube:latest"

az group create --name $resourceGroupName --location $location

az sql server create --name $sqlServerName --resource-group $resourceGroupName --location $location --admin-user `"$($sqlCredentials.UserName)`" --admin-password `"$($sqlCredentials.GetNetworkCredential().Password)`"
az sql db create --resource-group $resourceGroupName --server $sqlServerName --name $databaseName --service-objective $databaseSku --collation "SQL_Latin1_General_CP1_CS_AS"
az sql server firewall-rule create --resource-group $resourceGroupName --server $sqlServerName -n "AllowAllWindowsAzureIps" --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0

az appservice plan create --resource-group $resourceGroupName --name $appServiceName --sku $appServiceSku --is-linux
az webapp create --resource-group $resourceGroupName --plan $appServiceName --name $appName --deployment-container-image-name "alpine"
az webapp config connection-string set --resource-group $resourceGroupName --name $appName -t SQLAzure --settings SONARQUBE_JDBC_URL=`""jdbc:sqlserver://$sqlServerName.database.windows.net:1433;database=$databaseName;user=$($sqlCredentials.Username)@$sqlServerName;password=$($sqlCredentials.GetNetworkCredential().Password);encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"`"
az webapp config set --resource-group $resourceGroupName --name $appName --always-on true
az webapp log config --resource-group $resourceGroupName --name $appName --docker-container-logging filesystem
az webapp config container set --resource-group $resourceGroupName --name $appName --enable-app-service-storage true --docker-custom-image-name "$containerImage"

Please note that with those command lines, we don’t create the webapp with the Sonarqube image we want right away, as it would start the container without a valid configuration (no connection string, no App Service storage).

Mobile DevOps with VSTS and HockeyApp, the video

On Wednesday, June 15th, I had the opportunity to co-host a session with Mathilde Roussel for the Cross-Platform Paris meet-up.
The topic was DevOps for mobile applications with VSTS (Visual Studio Team Services) and HockeyApp, two Microsoft products.
Using an Android application, we demonstrated the concept of a build pipeline, unit and UI testing, as well as beta deployments and the handling of feedback and crashes.
You can find the video below.

Mobile DevOps with VSTS and HockeyApp, the slides

On Wednesday, June 15th, I had the opportunity to co-host a session with Mathilde Roussel for the Cross-Platform Paris meet-up.
The topic was DevOps for mobile applications with VSTS (Visual Studio Team Services) and HockeyApp, two Microsoft products.
Using an Android application, we demonstrated the concept of a build pipeline, unit and UI testing, as well as beta deployments and the handling of feedback and crashes.
You can find the slides below.