
Debugging Microservices with Docker Compose and HAProxy


When working on a system recently, we hit a problem: depending on the area we were working in, we had to remember to run all the components that area depended on. Running one service might also mean running our authentication service, a MongoDB instance, and another one or two services. That’s a lot to remember, and a lot that I don’t really care about – I only want to work on one of those.

To get around this problem I looked into making use of Docker Compose and HAProxy. The idea being that we spin up one Docker Compose which includes all components, the HAProxy container can then be used to route requests to a local debug instance if it’s running, or fall back to a container if it isn’t.
This might not fit everyone’s situation, but given all our services were containerized already, it seemed like a good fit for us. That said, if you know of another (better?) way of getting around this problem, please let me know!

As always, the sample can be found on GitHub here.

For this I’ve already set up a basic Website and API in .NET Core – the implementation of these doesn’t matter for this post.
What does matter is that each project has a Dockerfile exposing them on port 80, which looks like this:
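The original snippet isn’t shown here, but a minimal Dockerfile along those lines might look like this – the base image tag and DLL name are assumptions:

```dockerfile
# Run the pre-published output on port 80.
# Base image tag and DLL name are assumptions for this sketch.
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY ./publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "Api.dll"]
```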

For the first step, I’m creating a Docker Compose file that will build our two components (if you have something published, you can pull the image instead), pass in a base address for the API as an environment variable, and expose the previously mentioned port 80.
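A sketch of that first compose file, assuming project folders named web and api and an environment variable named ApiBaseAddress:

```yaml
version: '3'
services:
  web:
    build: ./web
    environment:
      # Mac-specific host DNS – use docker.for.win.localhost on Windows
      - ApiBaseAddress=http://docker.for.mac.localhost:9201
    expose:
      - "80"
  api:
    build: ./api
    expose:
      - "80"
```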

In this example you can see the base address for the API is being passed in as docker.for.mac.localhost:9201 – obviously this is Mac specific. Annoyingly, at the time of writing, Docker provides different DNS names for Windows and Mac, so use docker.for.win.localhost in its place if you’re on Windows. This is frustrating because it means you have to maintain two copies of both the Docker Compose and HAProxy config if you want this to work on multiple environments. From what I’ve found, nothing similar has been provided for Linux – though I could be wrong.

Now we need to add HAProxy – we can pull the image for this one. We’ll link the API and Web containers for use in the config, and bind some ports – I’m using 9200 and 9201.
I’m going to use 9200 for the Web tier of my application, and 9201 for the API layer.
The last bit is to add the config file as a volume – we’ll create this next. Our Docker Compose now looks like this:
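Putting that together, the compose file might now look something like this – service and file names are assumptions:

```yaml
version: '3'
services:
  web:
    build: ./web
    environment:
      # Now pointing at the HAProxy API port rather than the API container directly
      - ApiBaseAddress=http://docker.for.mac.localhost:9201
    expose:
      - "80"
  api:
    build: ./api
    expose:
      - "80"
  proxy:
    image: haproxy
    links:
      - web
      - api
    ports:
      - "9200:9200"
      - "9201:9201"
    volumes:
      # The config file we create next, mounted where the official image expects it
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
```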

I’ve changed the API base address to match our plans for the HAProxy config, and as you can see we’re looking for a config file in the same folder as the compose file, so let’s create that and get things set up.

The sections of the config we’re interested in for this post are the frontend and backend sections.
We want a frontend for each layer of the application – one for the web, one for the API, and so on – and a backend for each individual service.
The frontend defines rules for routing to a backend based on the URL. So if I had two APIs – values and users, for example – I could have addresses starting /values routed to the values service, and /users to the users service.

So for our example, I’ve set up two frontends, each binding to the port mentioned above, and providing url based routing to our services.
acl defines a rule and gives it a name. I’m using path_beg to base the rules on the start of the URL path.
use_backend defines which backend to use for the named rules described above.
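A sketch of those two frontends, assuming a /webtest path prefix for the web app and /api for the API (the acl and backend names are assumptions):

```haproxy
frontend web-tier
    bind *:9200
    acl is-web path_beg /webtest
    use_backend web-backend if is-web

frontend api-tier
    bind *:9201
    acl is-api path_beg /api
    use_backend api-backend if is-api
```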

We also have two backends, one for the Web service, and one for the API service. These tell the proxy where to look for a ‘server’ that can handle the frontend requests.
The first one will be our local dev instance (using the previously mentioned Docker DNS).
The second will be the container for that service along with backup to indicate that this should be used as a fallback if local dev is unavailable.
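The two backends might then look like this – the local debug ports 5020 and 8020 are the ones used later in the post, and the server names are assumptions:

```haproxy
backend web-backend
    # Local debug instance first; the container acts as a fallback if it's down
    server local-web docker.for.mac.localhost:5020 check
    server container-web web:80 check backup

backend api-backend
    server local-api docker.for.mac.localhost:8020 check
    server container-api api:80 check backup
```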

When you add more components to your application, you simply add more backends for them and add them to the appropriate frontend – or to a new frontend section if you’re adding a new layer, such as wanting MongoDB to run this way as well.

Now we have the Docker Compose and HAProxy config all set up, we can give it a try.
Our Dockerfiles are set to copy the publish directory to the container and use that to run, so in both the web and api folders, run dotnet publish -c Release -o publish to publish the code (or use the tools in Visual Studio).

First, navigate to the folder containing the compose file and run docker-compose build to build the containers with the latest published code. Then we simply run the command docker-compose up. That will spin up the containers.
Now if we navigate to localhost:9200/webtest/home we can see our page loads including the response to our API. Success!
The point of all this is to easily switch what’s being used, so if you start debugging the API application on its configured port (8020) and put a breakpoint in the API controller, you can refresh the page and see the breakpoint get hit.
Obviously the same applies to debugging the Web application on port 5020.

My current problem with this is that when switching back from debug to container, the first request will fail before the fallback takes place, which isn’t ideal. I’m planning to look into tuning the health checks to work around this in the future.
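For reference, HAProxy’s check timing options can shorten that failure window – a sketch I haven’t tuned, using the assumed backend names from earlier:

```haproxy
backend api-backend
    # Probe every second; mark the server down after one failed check
    # and back up after one successful check
    server local-api docker.for.mac.localhost:8020 check inter 1s fall 1 rise 1
    server container-api api:80 check backup
```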

While this is a basic example, you can see how this can expand to cover more services and prove useful, so hopefully it’s of use!

Azure Service Bus Listener with Azure Service Fabric


I’ve been starting to play around with Azure Service Fabric, and one of the things a friend wanted was to listen for messages on a service bus and handle them – doing some work with file processing and blob storage. Doing this in Service Fabric would easily enable such a thing at massive scale, so I thought I’d take a look at laying out the basics for him.
As such, this blog post is going to cover the basics of listening to an Azure Service Bus queue on Azure Service Fabric.

My sample code can be found on GitHub here.

First things first: before you can work with Service Fabric you need to set up your dev environment.
Obviously, you’ll need an Azure account and to have a Service Bus and Queue set up – you can find info on that here if required.

The application I’m going to create will be very simple, only consisting of two parts:
– A stateless service
– A dummy client console application

The client is just for publishing a message onto the service bus – we need to be able to check this thing works, right?
The stateless service is what we’ll deploy to service fabric. This will listen to the service bus, and trigger a method when something is received.

Now create a new solution in Visual Studio, and you’re going to want to create a new Service Fabric Application. From the window that appears, select a ‘Stateless Service’. I’ve called mine ‘ServiceBusTrigger’.
Other types of services may be more suitable depending on what you’re trying to achieve, but for my use case – receiving and processing messages – I don’t care about state.

Now we have the service scaffolded, let’s set up our service bus listener.
First, you’ll need to add the NuGet package WindowsAzure.ServiceBus.
In the ServiceBusTrigger class (or whatever name you gave to your service) you’ll see an override method called CreateServiceInstanceListeners.
This is where we’ll want to set up our listener, and in the future any other listeners you require for this service.
We need to create our ServiceInstanceListener – this will be a class that implements the ICommunicationListener interface. I created a folder to keep mine separate called ServiceListeners, and called my class ServiceBusQueueListener.
Implementing the ICommunicationListener interface you’ll see 3 methods – Abort, CloseAsync, and OpenAsync. We’ll use Abort and CloseAsync to close the connection to our service bus, and we’ll use OpenAsync to set up the client and start receiving messages.
Without any changes yet, your class should look like this:
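A sketch of that as-yet unimplemented class – the namespace is an assumption:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

namespace ServiceBusTrigger.ServiceListeners
{
    public class ServiceBusQueueListener : ICommunicationListener
    {
        public Task<string> OpenAsync(CancellationToken cancellationToken)
        {
            throw new NotImplementedException();
        }

        public Task CloseAsync(CancellationToken cancellationToken)
        {
            throw new NotImplementedException();
        }

        public void Abort()
        {
            throw new NotImplementedException();
        }
    }
}
```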

We’ve already added the NuGet package for the service bus, so let’s start configuring that.
We only need a connection string, and the name of a queue, in order to set up our client. I’ve added these as private fields on the class, so we can set them in the constructor, then access them in the OpenAsync method. I’ve also added two more fields – one for the service bus QueueClient, and one for an Action.
The Action will also be passed in the constructor and set, and we’ll call this method when a message is received. This approach lets this listener be quite reusable – you simply pass in the callback to handle incoming messages where it’s implemented. If there’s a better approach, let me know! 🙂

In the OpenAsync method we’ll create our client from the connection string and queue name, and then set it up so that when a message is received, we invoke the callback Action.
I’ve also added a Stop method to close down the client; this will be called by both CloseAsync and Abort, as in this scenario we want to do the same thing in both cases.
After these changes, your class should look something like this:
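A sketch of the completed listener, following the steps above – field and parameter names are assumptions:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

namespace ServiceBusTrigger.ServiceListeners
{
    public class ServiceBusQueueListener : ICommunicationListener
    {
        private readonly string _connectionString;
        private readonly string _queueName;
        private readonly Action<BrokeredMessage> _callback;
        private QueueClient _client;

        public ServiceBusQueueListener(Action<BrokeredMessage> callback,
            string connectionString, string queueName)
        {
            _callback = callback;
            _connectionString = connectionString;
            _queueName = queueName;
        }

        public Task<string> OpenAsync(CancellationToken cancellationToken)
        {
            // Create the client and invoke the callback for each received message
            _client = QueueClient.CreateFromConnectionString(_connectionString, _queueName);
            _client.OnMessage(message => _callback(message));
            return Task.FromResult(string.Empty);
        }

        public Task CloseAsync(CancellationToken cancellationToken)
        {
            Stop();
            return Task.CompletedTask;
        }

        public void Abort()
        {
            Stop();
        }

        private void Stop()
        {
            // Shared shutdown for both CloseAsync and Abort
            if (_client != null && !_client.IsClosed)
            {
                _client.Close();
            }
        }
    }
}
```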

That’s most of the work done!
Now back in the service class, we’ll need to register this listener in the CreateServiceInstanceListeners method. Here we simply create a new ServiceInstanceListener class with a new instance of our ServiceBusQueueListener.
I’ve got the connection string and queue name as private fields again, and I’ve added a simple ‘Test’ method that takes in a BrokeredMessage as a parameter, and will write the contents out to the Debug window. This will be the action we pass in to our listener.
All in, our service class now looks like this:
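A sketch of the service class at this point – the placeholder connection string and queue name are obviously assumptions, as is the namespace:

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using System.Fabric;
using Microsoft.ServiceBus.Messaging;
using Microsoft.ServiceFabric.Services.Communication.Runtime;
using Microsoft.ServiceFabric.Services.Runtime;

namespace ServiceBusTrigger
{
    internal sealed class ServiceBusTrigger : StatelessService
    {
        // Hardcoded for now; swapped for environment variables later in the post
        private readonly string _connectionString = "<your-connection-string>";
        private readonly string _queueName = "<your-queue-name>";

        public ServiceBusTrigger(StatelessServiceContext context)
            : base(context)
        { }

        protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
        {
            // Register our listener, passing Test as the message callback
            yield return new ServiceInstanceListener(context =>
                new ServiceListeners.ServiceBusQueueListener(Test, _connectionString, _queueName));
        }

        private void Test(BrokeredMessage message)
        {
            Debug.WriteLine($"Message received: {message.GetBody<string>()}");
        }
    }
}
```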

And that’s all that’s required!
To test it, let’s add a console project to our solution to send something to the service bus – this is my DummyClient.
Again, add the NuGet package for the service bus, and set up the connection string and queue name.
We then want to create a connection again, the same as we did in our listener class.
Now all that’s needed is to create a message and send it using the client:
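The DummyClient can be as small as this – placeholder values stand in for your own connection string and queue name:

```csharp
using Microsoft.ServiceBus.Messaging;

namespace DummyClient
{
    class Program
    {
        static void Main(string[] args)
        {
            var connectionString = "<your-connection-string>";
            var queueName = "<your-queue-name>";

            // Same connection setup as the listener, then send a single test message
            var client = QueueClient.CreateFromConnectionString(connectionString, queueName);
            client.Send(new BrokeredMessage("Hello from the DummyClient!"));
            client.Close();
        }
    }
}
```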

If you debug the service, and then run the DummyClient, you should now see your test message printed in the Output window!

I’ve added one last step to my sample on GitHub, and that’s to make use of environment variables from the config, so you can have the connection string and queue name swapped out depending on where you’re running the code.
The first step is to set up the default environment variables in ServiceManifest.xml – this is under the PackageRoot folder on your service.
In there find the CodePackage node, and underneath the EntryPoint add a new EnvironmentVariables node. And under that, add two EnvironmentVariable nodes, one for the connection string, and one for the queue name.
Your code package node should now look like this:
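A sketch of that CodePackage node – the variable names (and the exe name) are assumptions carried through the rest of the post:

```xml
<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ExeHost>
      <Program>ServiceBusTrigger.exe</Program>
    </ExeHost>
  </EntryPoint>
  <EnvironmentVariables>
    <EnvironmentVariable Name="ServiceBusConnectionString" Value="" />
    <EnvironmentVariable Name="QueueName" Value="" />
  </EnvironmentVariables>
</CodePackage>
```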


We also want to set these in ApplicationManifest.xml – this can be found under the ‘ServiceBusTrigger’ node (or whatever you called your project – not your service), under the ApplicationPackageRoot.
This time find the ServiceManifestImport node, and add a new EnvironmentOverrides node, setting the CodePackageRef to the one above.
Again, in here we want to add the two EnvironmentVariable nodes, however this time we’re going to set the value as a parameter. This section should look something like this:
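A sketch of that import section, assuming the same variable names as before:

```xml
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="ServiceBusTriggerPkg" ServiceManifestVersion="1.0.0" />
  <EnvironmentOverrides CodePackageRef="Code">
    <EnvironmentVariable Name="ServiceBusConnectionString" Value="[ServiceBusConnectionString]" />
    <EnvironmentVariable Name="QueueName" Value="[QueueName]" />
  </EnvironmentOverrides>
</ServiceManifestImport>
```

Note that each `[Name]` parameter also needs declaring in the manifest’s own Parameters section with a DefaultValue.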

Now, we can override that value from our environment-specific parameter XML files. These are found under the ApplicationParameters folder – there are three by default: Cloud.xml, Local1Node.xml, and Local5Node.xml.
In each of these files there is a Parameters node, under which we want to add two more Parameter nodes after the existing one. The name of the parameter should match the name in [ ] brackets that we set above, and the values should be the environment specific values you want:
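For example, a Cloud.xml along these lines – the first parameter is the scaffolded default, and the values here are placeholders:

```xml
<Application Name="fabric:/ServiceBusTrigger" xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <Parameters>
    <Parameter Name="ServiceBusTrigger_InstanceCount" Value="-1" />
    <Parameter Name="ServiceBusConnectionString" Value="Endpoint=sb://..." />
    <Parameter Name="QueueName" Value="cloud-queue" />
  </Parameters>
</Application>
```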

Now that’s configured, the last step is to read these values from the config and put them to use. Go back to the TriggerService class, and instead of hardcoding queue and connection string values, we’ll read them from the context and get the environment variables by name.
After updating, your constructor should now look like this:
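A sketch of that change – since Service Fabric injects the manifest’s environment variables into the service process, the simplest version reads them by name (the names match the earlier manifest sketch and are assumptions):

```csharp
public ServiceBusTrigger(StatelessServiceContext context)
    : base(context)
{
    // Values come from ServiceManifest.xml, overridden per environment
    _connectionString = Environment.GetEnvironmentVariable("ServiceBusConnectionString");
    _queueName = Environment.GetEnvironmentVariable("QueueName");
}
```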

And that’s it!
That’s all that’s required to set up a listener running in an Azure Service Fabric app. As you can see, it’s easy to switch out the service bus for any other listener you might want, or to extend the Action we passed in to handle anything we want when a message is received.

You can find my sample code on GitHub here.

As always, if you have any suggestions or objections please let me know! We’re all here to learn 🙂