
Azure Service Bus Listener with Azure Service Fabric

I've started playing around with Azure Service Fabric, and one of the things a friend wanted to do was to listen for messages on a service bus and handle them with some file processing and blob storage work. Service Fabric would make it easy to do this at massive scale, so I thought I'd take a look at laying out the basics for him.
As such, this blog post is going to cover the basics of listening to an Azure Service Bus queue on Azure Service Fabric.

My sample code can be found on GitHub here.

First things first: before you can work with Service Fabric, you need to set up your dev environment.
Obviously, you'll also need an Azure account and to have a Service Bus and queue set up – you can find info on that here if required.

The application I’m going to create will be very simple, only consisting of two parts:
– A stateless service
– A dummy client console application

The client is just for publishing a message onto the service bus – we need to be able to check this thing works, right?
The stateless service is what we’ll deploy to service fabric. This will listen to the service bus, and trigger a method when something is received.

Now create a new solution in Visual Studio, and you’re going to want to create a new Service Fabric Application. From the window that appears, select a ‘Stateless Service’. I’ve called mine ‘ServiceBusTrigger’.
Other types of services may be more suitable depending on what you’re trying to achieve, but for my use case – receiving and processing messages – I don’t care about state.

Now we have the service scaffolded, let's set up our service bus listener.
First, you'll need to add the NuGet package for WindowsAzure.ServiceBus.
In the ServiceBusTrigger class (or whatever name you gave to your service) you’ll see an override method called CreateServiceInstanceListeners.
This is where we’ll want to set up our listener, and in the future any other listeners you require for this service.
We need to create our ServiceInstanceListener – this will be a class that implements the ICommunicationListener interface. I created a folder to keep mine separate called ServiceListeners, and called my class ServiceBusQueueListener.
Implementing the ICommunicationListener interface, you'll see three methods – Abort, CloseAsync, and OpenAsync. We'll use Abort and CloseAsync to close the connection to our service bus, and OpenAsync to set up the client and start receiving messages.
Without any changes yet, your class should look like this:
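This is just the scaffolding you get from implementing the interface – the namespace here reflects my ServiceListeners folder, so yours may differ:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

namespace ServiceBusTrigger.ServiceListeners
{
    public class ServiceBusQueueListener : ICommunicationListener
    {
        public void Abort()
        {
            throw new NotImplementedException();
        }

        public Task CloseAsync(CancellationToken cancellationToken)
        {
            throw new NotImplementedException();
        }

        public Task<string> OpenAsync(CancellationToken cancellationToken)
        {
            throw new NotImplementedException();
        }
    }
}
```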

We've already added the NuGet package for the service bus, so let's start configuring that.
We only need a connection string and the name of a queue to set up our client. I've added these as private fields on the class, so we can set them in the constructor and access them in the OpenAsync method. I've also added two more fields – one for the service bus QueueClient, and one for an Action.
The Action will also be passed in through the constructor, and we'll invoke it whenever a message is received. This approach keeps the listener quite reusable – you simply pass in the callback that handles incoming messages wherever it's implemented. If there's a better approach, let me know! 🙂

In the OpenAsync method we’ll create our client from the connection string and queue name, and then set it up so that when a message is received, we invoke the callback Action.
I've also added a Stop method to close down the client; this will be called by both CloseAsync and Abort, as in this scenario we want to do the same thing in both cases.
After these changes, your class should look something like this:
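A sketch of the finished listener – since we're not exposing a communication endpoint for other services to call, OpenAsync has no meaningful address to return:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

namespace ServiceBusTrigger.ServiceListeners
{
    public class ServiceBusQueueListener : ICommunicationListener
    {
        private readonly string _connectionString;
        private readonly string _queueName;
        private readonly Action<BrokeredMessage> _messageReceived;
        private QueueClient _queueClient;

        public ServiceBusQueueListener(Action<BrokeredMessage> messageReceived,
            string connectionString, string queueName)
        {
            _messageReceived = messageReceived;
            _connectionString = connectionString;
            _queueName = queueName;
        }

        public Task<string> OpenAsync(CancellationToken cancellationToken)
        {
            // Create the client and invoke the callback for each message received.
            _queueClient = QueueClient.CreateFromConnectionString(_connectionString, _queueName);
            _queueClient.OnMessage(message => _messageReceived?.Invoke(message));

            // No communication endpoint to advertise, so return an empty address.
            return Task.FromResult(string.Empty);
        }

        public Task CloseAsync(CancellationToken cancellationToken)
        {
            Stop();
            return Task.CompletedTask;
        }

        public void Abort()
        {
            Stop();
        }

        // Both CloseAsync and Abort just need the service bus connection closed.
        private void Stop()
        {
            if (_queueClient != null && !_queueClient.IsClosed)
            {
                _queueClient.Close();
            }
        }
    }
}
```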

That’s most of the work done!
Now back in the service class, we’ll need to register this listener in the CreateServiceInstanceListeners method. Here we simply create a new ServiceInstanceListener class with a new instance of our ServiceBusQueueListener.
I’ve got the connection string and queue name as private fields again, and I’ve added a simple ‘Test’ method that takes in a BrokeredMessage as a parameter, and will write the contents out to the Debug window. This will be the action we pass in to our listener.
All in, our service class now looks like this:
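Something along these lines – the connection string and queue name values are placeholders (we'll move them into config shortly):

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using System.Fabric;
using Microsoft.ServiceBus.Messaging;
using Microsoft.ServiceFabric.Services.Communication.Runtime;
using Microsoft.ServiceFabric.Services.Runtime;
using ServiceBusTrigger.ServiceListeners;

namespace ServiceBusTrigger
{
    internal sealed class ServiceBusTrigger : StatelessService
    {
        // Hardcoded for now - we'll swap these for environment variables later.
        private readonly string _connectionString = "Endpoint=sb://...";
        private readonly string _queueName = "myqueue";

        public ServiceBusTrigger(StatelessServiceContext context)
            : base(context)
        { }

        protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
        {
            // Register our listener, passing in the callback to invoke per message.
            yield return new ServiceInstanceListener(context =>
                new ServiceBusQueueListener(Test, _connectionString, _queueName));
        }

        private void Test(BrokeredMessage message)
        {
            // Write the message contents out to the Debug window.
            Debug.WriteLine($"Message received: {message.GetBody<string>()}");
        }
    }
}
```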

And that’s all that’s required!
To test it, let’s add a console project to our solution to send something to the service bus – this is my DummyClient.
Again, add the NuGet package for the service bus, and set up the connection string and queue name.
We then want to create a connection again, the same as we did in our listener class.
Now all that’s needed is to create a message and send it using the client:
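A minimal sketch – swap in your own connection string and queue name:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

namespace DummyClient
{
    class Program
    {
        private const string ConnectionString = "Endpoint=sb://...";
        private const string QueueName = "myqueue";

        static void Main(string[] args)
        {
            // Same connection set-up as the listener, but this time we send.
            var client = QueueClient.CreateFromConnectionString(ConnectionString, QueueName);

            var message = new BrokeredMessage("Hello from the DummyClient!");
            client.Send(message);

            Console.WriteLine("Message sent.");
            Console.ReadLine();
        }
    }
}
```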

If you debug the service, and then run the DummyClient, you should now see your test message printed in the Output window!

I’ve added one last step to my sample on GitHub, and that’s to make use of environment variables from the config, so you can have the connection string and queue name swapped out depending on where you’re running the code.
The first step is to set up the default environment variables in ServiceManifest.xml – this is under the PackageRoot folder on your service.
In there, find the CodePackage node, and underneath the EntryPoint add a new EnvironmentVariables node; under that, add two EnvironmentVariable nodes – one for the connection string and one for the queue name.
Your code package node should now look like this:
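Something like this – the variable names are my own choice, and the Program value will match whatever your service exe is called:

```xml
<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ExeHost>
      <Program>ServiceBusTrigger.exe</Program>
    </ExeHost>
  </EntryPoint>
  <EnvironmentVariables>
    <EnvironmentVariable Name="ServiceBusConnectionString" Value="" />
    <EnvironmentVariable Name="QueueName" Value="" />
  </EnvironmentVariables>
</CodePackage>
```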


We also want to set these in ApplicationManifest.xml – this can be found under the ‘ServiceBusTrigger’ node (or whatever you called your project – not your service), under the ApplicationPackageRoot.
This time find the ServiceManifestImport node, and add a new EnvironmentOverrides node, setting the CodePackageRef to the one above.
Again, in here we want to add the two EnvironmentVariable nodes; however, this time we're going to set each value as a parameter. This section should look something like this:
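A sketch – the ServiceManifestName will match whatever your tooling generated (typically the service name with a 'Pkg' suffix), and note that each parameter also needs declaring in the Parameters node at the top of ApplicationManifest.xml:

```xml
<!-- Declared at the top of ApplicationManifest.xml, so values can be
     supplied per environment. -->
<Parameters>
  <Parameter Name="ServiceBusConnectionString" DefaultValue="" />
  <Parameter Name="QueueName" DefaultValue="" />
</Parameters>

<!-- Then the overrides, under the ServiceManifestImport node. -->
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="ServiceBusTriggerPkg" ServiceManifestVersion="1.0.0" />
  <EnvironmentOverrides CodePackageRef="Code">
    <EnvironmentVariable Name="ServiceBusConnectionString" Value="[ServiceBusConnectionString]" />
    <EnvironmentVariable Name="QueueName" Value="[QueueName]" />
  </EnvironmentOverrides>
</ServiceManifestImport>
```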

Now we can override those values from our environment-specific parameter XML files. These are found under the ApplicationParameters folder – there are three by default: Cloud.xml, Local1Node.xml, and Local5Node.xml.
In each of these files there is a Parameters node, under which we want to add two more Parameter nodes after the existing one. The name of each parameter should match the name in the [square] brackets we set above, and the values should be the environment-specific values you want:
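For example, in Local1Node.xml – the existing instance count parameter is whatever your template generated, and the other values here are placeholders:

```xml
<Parameters>
  <Parameter Name="ServiceBusTrigger_InstanceCount" Value="1" />
  <Parameter Name="ServiceBusConnectionString" Value="Endpoint=sb://..." />
  <Parameter Name="QueueName" Value="myqueue" />
</Parameters>
```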

Now that's configured, the last step is to read these values from the config and put them to use. Go back to the ServiceBusTrigger service class, and instead of hardcoding the queue and connection string values, we'll read the environment variables by name.
After updating, your constructor should now look like this:
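Environment variables declared in the manifests are exposed to the service process, so reading them by name is enough – a sketch, with the names matching the EnvironmentVariable nodes we set up:

```csharp
private readonly string _connectionString;
private readonly string _queueName;

public ServiceBusTrigger(StatelessServiceContext context)
    : base(context)
{
    // Names match the EnvironmentVariable nodes declared in the manifests.
    _connectionString = Environment.GetEnvironmentVariable("ServiceBusConnectionString");
    _queueName = Environment.GetEnvironmentVariable("QueueName");
}
```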

And that’s it!
That’s all that’s required to set up a listener running in an Azure Service Fabric app. As you can see, it’s easy to switch out the service bus for any other listener you might want, or to extend the Action we passed in to handle anything we want when a message is received.

You can find my sample code on GitHub here.

As always, if you have any suggestions or objections please let me know! We’re all here to learn 🙂

Reducing Consumed Request Units in DocumentDb with C# .NET

I’ve previously covered getting up and running with DocumentDb. Since then, one of the things I’ve been looking at with it is how to reduce the amount of request units each write operation to the database consumes.
This post will be assuming you’ve got that code, as we’ll be building on it here.

First, it’s good to understand what throughput you’ve set your collection to have, and what this means in terms of performance as you scale.
Azure lets you configure the collection to have a throughput of between 400 and 10,000 request units (RUs) per second.
Every operation you perform on DocumentDb consumes some of these request units, with the amount depending on the size and complexity of the operation.
Once you hit the limit of your throughput, DocumentDb will throttle requests to get you back under it. When this happens, you'll receive an HTTP status 429 response, 'RequestRateTooLarge'.
In this case, you need to read the 'x-ms-retry-after-ms' header, which gives you the time to wait before attempting the request again.
However, if you're using the .NET client SDK then this is handled for you most of the time – the SDK will catch the response, respect the header, and retry the request.
One thing to be wary of: don't set your throughput too high – Azure charges you for RESERVED throughput. Whatever you set the collection to is what you get charged for, regardless of whether you use it or not. I'll cover scaling the database up in a future post, but unless you're sure you'll need it, I'd set the throughput to the minimum, especially if it's just for testing.

Now, in order to find out if any changes we make will actually work, we’ll need to first find out how many request units we’re currently consuming.
For the purposes of this demo, I’m going to be writing the following class to the database:
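The exact properties don't matter much – something shaped like this, with the wrapper holding the lowercase id DocumentDb expects and a TimeStamp:

```csharp
using System;
using Newtonsoft.Json;

public class CustomerEntry
{
    // DocumentDb expects a lowercase 'id' property on documents.
    [JsonProperty(PropertyName = "id")]
    public Guid Id { get; set; }

    public DateTime TimeStamp { get; set; }

    public Customer Customer { get; set; }
}

public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }
    public string Address { get; set; }
}
```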

This is just so we have something vaguely more realistic than a load of randomly named properties dumped into an object.
So what we’ll be writing to the database is a CustomerEntry object. I like to wrap my objects alongside the id that DocumentDb expects, and a TimeStamp for any audit or debug issues.

Now we’ve done this, we’ll take the method from my previous post, and modify it slightly so we can output to the console how many request units the operation consumed:
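The response from CreateDocumentAsync includes the request charge, so the change is small – roughly this, where DatabaseName and CollectionName are the private fields from the previous post:

```csharp
private static async Task WriteDocument(object documentObject)
{
    var response = await client.CreateDocumentAsync(
        UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
        documentObject);

    // RequestCharge is the number of request units this operation consumed.
    Console.WriteLine($"Consumed {response.RequestCharge} request units");
}
```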

The parameter ‘documentObject’ will be an instance of CustomerEntry. So if you now run the console app, you’ll see the number of request units output to the window – I’m seeing 19.62 units consumed.
So now we want to try and reduce this amount, which we’re going to do by changing our indexing policy for the DocumentCollection.
In DocumentDb, the indexing determines what on the object is queryable. By default, the entire object is indexed.
We're going to change this to index only the one or two properties I might be interested in querying on, ignoring the rest. This might not be suitable for your application, so think carefully about what you want to index based on your needs.

To do this, I’m going to delete the existing DocumentCollection, and update the method to create a DocumentCollection to apply the new indexing policy.
We’re going to set the ExcludedPaths to ignore the entire Customer object, and then in our IncludedPaths we’ll add the specific property we want. In this case, I’m assuming I’ll only be interested in the Email on the Customer (as that would be my unique property).
So with these changes, our method will now look like this:
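A sketch of the updated method (it needs System.Linq, Microsoft.Azure.Documents, and Microsoft.Azure.Documents.Client in scope) – exclusions take precedence over inclusions, which is what lets us carve Email back out of the excluded Customer object:

```csharp
private static async Task CreateDocumentCollectionIfNotExists()
{
    // Query for an existing collection with our chosen id first.
    var collectionExists = client.CreateDocumentCollectionQuery(
            UriFactory.CreateDatabaseUri(DatabaseName))
        .Where(c => c.Id == CollectionName)
        .ToArray()
        .Any();

    if (collectionExists)
    {
        return;
    }

    var collection = new DocumentCollection { Id = CollectionName };
    collection.IndexingPolicy.Automatic = true;

    // Exclude the entire Customer object from indexing...
    collection.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/Customer/*" });

    // ...keep the root path included, so id and TimeStamp are still indexed...
    collection.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });

    // ...and add back the single Customer property we want to query on.
    collection.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/Customer/Email/?" });

    await client.CreateDocumentCollectionAsync(
        UriFactory.CreateDatabaseUri(DatabaseName),
        collection,
        new RequestOptions { OfferThroughput = 400 });
}
```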

So here we're first querying to see if the Document Collection exists, as discussed in my previous post. Then we set up our Document Collection, setting the Indexing Policy to automatic.
Now, we need to configure our excluded and included paths. We set the Excluded Path to exclude the entire Customer object, and set the Included Path to include everything. This leaves us with the id and TimeStamp properties being indexed, which will be useful for future queries.
Next, we set another Included Path, this time specifying the Email property on the Customer object. So now, our TimeStamp, id, and Customer.Email properties are all indexed, and as such are all queryable.
The real test is the difference in the consumed request units per write operation – remember we were previously consuming 19.62 RUs when writing.
Ensure the collection has been deleted in the Azure Portal, then run the code to create the collection again, and write an object to the collection. When doing so, I now see 7.24 RUs being consumed.
We’ve more than halved the amount we consume!
This is just one way of improving performance with DocumentDb – I’ll continue to learn more and will share what I find.

As always, if you have any improvements or suggestions, feel free to let me know!

Getting Started with DocumentDb and C# .NET

One of the things I've been playing around with recently is an Azure-hosted DocumentDb – coincidentally, it's also my first foray into NoSQL databases!
In this post I’ll be covering the basics of getting up and running with DocumentDb based on what I’ve encountered so far. To do this, we’re going to use a basic Console application, and through this we’ll create and read a new database, document collection, and documents themselves.

First, we need to make sure everything is set up in Azure (you'll need an Azure account with some credit available for this – if you have an MSDN subscription, you get £40 of free credit a month, or your local equivalent 🙂).
In the Azure portal click the '+ New' button, and in the pane that appears select 'Databases' and then 'NoSQL (DocumentDb)'.
In the ‘New account’ pane that loads, give your account an ID, a new Resource Group name, and select the Location.
Once done, click Create, and wait for the database to be deployed.

While waiting for the deployment, open Visual Studio and create a blank Console Application. Open the NuGet package manager so we can add DocumentDb – searching 'documentdb' should find it as the top result, titled 'Microsoft.Azure.DocumentDB'.
Now in your Program class, create two const strings. These will be used to hold the endpoint URI and the Primary Key of the DocumentDb account.

Now that the database has been deployed, we need to get the endpoint URI and the Primary Key for the account. To do this, navigate to your DocumentDb account and select ‘Keys’. From here copy the URI and Primary Key values into the strings in your console application.
We’ll also want to add an instance of DocumentClient to the class as well, which is the client used for interaction with the database, so your class should look something like this:
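With placeholders for your own values, something like:

```csharp
using System;
using Microsoft.Azure.Documents.Client;

namespace DocumentDbDemo
{
    class Program
    {
        // Copied from the 'Keys' blade of your DocumentDb account.
        private const string EndpointUri = "https://<your-account>.documents.azure.com:443/";
        private const string PrimaryKey = "<your-primary-key>";

        private static DocumentClient client;

        static void Main(string[] args)
        {
        }
    }
}
```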

Now, we need to create a new instance of DocumentClient using the endpoint and primary key:
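A minimal sketch:

```csharp
try
{
    client = new DocumentClient(new Uri(EndpointUri), PrimaryKey);
}
catch (Exception ex)
{
    Console.WriteLine($"Error connecting to DocumentDb: {ex.Message}");
}
```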

This will try to connect to your DocumentDb account, and write out the error if it fails.
Now we want to actually create a Database on our account, but we also need to account for the database already existing. The official docs tell you to attempt to read the database immediately in a try-catch block, creating the database in the catch block if the status code indicates it wasn’t found.
Personally, I hate this approach – I’m a firm believer that exception handling should never be used to control program flow.
Instead, for this example, we’ll query for the database first, and create the database if that doesn’t exist:
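Roughly like this – DatabaseName is the private field holding my chosen name, and the method needs System.Linq and Microsoft.Azure.Documents in scope:

```csharp
private const string DatabaseName = "MyDatabase"; // my chosen name

private static async Task CreateDatabaseIfNotExists()
{
    // Query for an existing database with a matching id, rather than
    // using a try-catch around a read.
    var databaseExists = client.CreateDatabaseQuery()
        .Where(d => d.Id == DatabaseName)
        .ToArray()
        .Any();

    if (!databaseExists)
    {
        await client.CreateDatabaseAsync(new Database { Id = DatabaseName });
    }
}
```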

Here we're querying for a database with an id matching the chosen name for your database – which I've stored in a private field. If one is found, we do nothing; otherwise we create a new database through the client, setting the name.

Now we have our database, we need a Document Collection for us to store our items. Again, the Microsoft article shows this done using a try-catch block. Instead, I’ve used another query, this time based on the collection name we want to use:
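Something like the following sketch, with CollectionName as another private field:

```csharp
private const string CollectionName = "MyCollection"; // kept for future use

private static async Task CreateDocumentCollectionIfNotExists()
{
    var collectionExists = client.CreateDocumentCollectionQuery(
            UriFactory.CreateDatabaseUri(DatabaseName))
        .Where(c => c.Id == CollectionName)
        .ToArray()
        .Any();

    if (!collectionExists)
    {
        var collection = new DocumentCollection { Id = CollectionName };

        await client.CreateDocumentCollectionAsync(
            UriFactory.CreateDatabaseUri(DatabaseName),
            collection,
            new RequestOptions { OfferThroughput = 400 });
    }
}
```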

Again, we query to see if a document collection with an id matching our given name already exists. If it does, we do nothing; otherwise we create a new DocumentCollection object, set the id to our chosen name (which I've also stored in a private field for future use), and pass that to the client alongside the URI for our database and a RequestOptions object. For the RequestOptions I've simply set the throughput to 400 for the collection – this is the lowest option available, and thus the cheapest.

Now we have our database and document collection, the last step is to write to our database. The method on the client accepts any object, as NoSQL makes no demand that whatever you write to the database adheres to a set schema.
So given an object ‘documentObject’ we can write to the database like this:
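A sketch:

```csharp
private static async Task WriteDocument(object documentObject)
{
    await client.CreateDocumentAsync(
        UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
        documentObject);
}
```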

Reading from the database is achieved in a similar fashion:
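Along these lines:

```csharp
private static async Task<Document> ReadDocument(string documentId)
{
    var response = await client.ReadDocumentAsync(
        UriFactory.CreateDocumentUri(DatabaseName, CollectionName, documentId));

    // The Resource property holds the document itself.
    return response.Resource;
}
```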

The documentId parameter is the id of the document you want to retrieve. If you want to keep track of the id properties of documents yourself, you can set an 'id' property on the object you write to the database. Note that this must be lowercase, which threw me off when I first attempted this! Personally, I keep track of the id by creating a Guid for each object before writing it to the database. Of course, there are other ways of querying for documents, which I may cover in the future.

But that’s all that’s needed to get up and running with the basics – you now have a database, a document collection, and can read and write documents to that collection! If anything is unclear, or you spot anything I could improve, please let me know!