Author: Ian Rufus

HTML Formatting in .NET Resource Strings


While working on a client website, I was adding content to the page based on the provided mockups and designs. All was going fine except for one part – a few paragraphs of content had formatting in the middle. Some had bold words, some had links, etc.
This was a problem because I was storing all content in resource strings to allow for easier localisation.
One solution is to split the content into multiple parts and apply the formatting around them, but I don’t think that’s ideal.
While it keeps any HTML code out of the content if you get it translated, I find it can also lead to a lack of context, so you might not get the translation you’re after.
I also find it quite messy having to piece multiple strings together around the formatting on the View – much easier to just have one string.

A (fairly quick) search around the net didn’t reveal a full solution, but it got me halfway there. So I thought I’d quickly document the steps needed to use resource strings and still have HTML formatting applied.

For a simple example, the string I might be trying to use could be:
Hello, my name is <strong>Ian Rufus</strong> and I love .NET! – stored in my resources under the name MyContentString.
Firstly, when writing the content on the View, don’t just use the string with @Strings.MyContentString
Instead, use @Html.Raw(@Strings.MyContentString) which allows us to render unencoded HTML.

This alone won’t get any formatting applied; we also need a basic change to the resource string.
Instead of the <strong>Ian Rufus</strong>
part of the content string I’m using, we need to replace the angle brackets with the HTML character codes for those symbols, which are &lt; for < and &gt; for >.
This will give you the full string as Hello, my name is &lt;strong&gt;Ian Rufus&lt;/strong&gt; and I love .NET!
This site is a great resource for finding the HTML codes you’ll be after.

And that’s it. The combination of using @Html.Raw() and HTML character codes in the resource string is all you need, and the formatting will now be displayed correctly on the View.
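As a minimal sketch, assuming a generated resource class named Strings, the View markup looks like this:

```cshtml
@* Renders the resource string unencoded, so the <strong> formatting takes effect *@
<p>@Html.Raw(Strings.MyContentString)</p>
```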

Hope that’s helpful!

Debugging Microservices with Docker Compose and HAProxy


When working on a system recently, we had a problem where depending on the area we were working in, we had to remember to run all the different components necessary. Running one service meant we may also have needed to run our authentication service, a MongoDb instance, and another one or two services. That’s a lot to remember, and a lot that I don’t really care about – I only want to work on one of those.

To get around this problem I looked into making use of Docker Compose and HAProxy. The idea is that we spin up one Docker Compose file which includes all components; the HAProxy container can then be used to route requests to a local debug instance if it’s running, or fall back to a container if it isn’t.
This might not fit everyone’s situation, but given all our services were containerized already, it seemed like a good fit for us. That said, if you know of another (better?) way of getting around this problem, please let me know!

As always, the sample can be found on Github here.

For this I’ve already set up a basic Website and API in .NET Core – the implementation of these doesn’t matter for this post.
What does matter is that each project has a Dockerfile exposing them on port 80, which looks like this:
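I haven’t reproduced the exact file here, but a minimal sketch of such a Dockerfile (assuming the published output lives in a publish folder, and an assembly name of Api.dll – both assumptions of mine) might be:

```dockerfile
FROM microsoft/aspnetcore:2.0
WORKDIR /app
# Copy the already-published output into the image
COPY ./publish .
# Expose the service on port 80 inside the container
EXPOSE 80
ENTRYPOINT ["dotnet", "Api.dll"]
```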

For the first step, I’m creating a Docker Compose file that will build our two components (if you have something published, you can pull the image instead), pass in a base address for the API as an environment variable, and expose the previously mentioned port 80.
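A sketch of that first compose file – the service names, folder layout, host ports, and the environment variable name are my own assumptions:

```yaml
version: '3'
services:
  api:
    build: ./api
    ports:
      - "8010:80"
  web:
    build: ./web
    environment:
      # Base address the web app uses to call the API (Mac-specific DNS name)
      - ApiBaseAddress=http://docker.for.mac.localhost:9201
    ports:
      - "8011:80"
```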

In this example you can see the base address for the API is being passed in as docker.for.mac.localhost:9201 – obviously this is Mac specific. Annoyingly, at the time of writing, Docker provides different DNS names for Windows and Mac, so use docker.for.win.localhost in its place if you’re on Windows. This is frustrating because it means you have to maintain two copies of both the Docker Compose and HAProxy config if you want this to be available on multiple environments. From what I’ve found, a similar DNS name hasn’t been provided for Linux – though I could be wrong.

Now we need to add HAProxy – we can pull the image for this one. We’ll link the API and Web containers for use in the config, and we’ll bind to some ports to use – I’m using 9200 and 9201.
I’m going to use 9200 for the Web tier of my application, and 9201 for the API layer.
The last bit is to add the config file as a volume – we’ll create this next. Our Docker Compose now looks like this:
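A sketch of the updated compose file, with the same caveats about names being my own (the config path inside the official haproxy image is /usr/local/etc/haproxy/haproxy.cfg):

```yaml
version: '3'
services:
  api:
    build: ./api
  web:
    build: ./web
    environment:
      - ApiBaseAddress=http://docker.for.mac.localhost:9201
  haproxy:
    image: haproxy:latest
    links:
      - api
      - web
    ports:
      # 9200 for the web tier, 9201 for the API layer
      - "9200:9200"
      - "9201:9201"
    volumes:
      # The config file we'll create next, sitting alongside the compose file
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
```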

I’ve changed the API base address to match our plans for the HAProxy config, and as you can see we’re looking for a config file in the same folder as the compose file, so let’s create that and get things set up.

The sections of the config we’re interested in for this post are the frontend and backend sections.
We want a frontend for each layer of the application, one for the web, one for the api, one for anything else you want etc., and a backend for each individual service.
The frontend will define rules for routing to a backend based on the url. So if I had two APIs – values and users – for example, I could have addresses of /values routed to the values service, and /users to the users service etc.

So for our example, I’ve set up two frontends, each binding to the port mentioned above, and providing url based routing to our services.
acl defines a rule and gives it a name. I’m using path_beg to base the rules on the URL.
use_backend defines which backend to use for the named rules described above.

We also have two backends, one for the Web service, and one for the API service. These tell the proxy where to look for a ‘server’ that can handle the frontend requests.
The first one will be our local dev instance (using the previously mentioned Docker DNS).
The second will be the container for that service along with backup to indicate that this should be used as a fallback if local dev is unavailable.
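A sketch of the relevant sections of haproxy.cfg – the backend names and routing paths are my own, and the global/defaults sections are trimmed; the debug ports 5020 and 8020 match the ones used later in the post:

```text
frontend web_frontend
    bind *:9200
    # acl defines a named rule; path_beg matches on the start of the URL
    acl is_web path_beg /webtest
    use_backend web_backend if is_web

frontend api_frontend
    bind *:9201
    acl is_api path_beg /api
    use_backend api_backend if is_api

backend web_backend
    # Local debug instance first; the container acts as the fallback
    server local_web docker.for.mac.localhost:5020 check
    server container_web web:80 check backup

backend api_backend
    server local_api docker.for.mac.localhost:8020 check
    server container_api api:80 check backup
```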

When you add more components to your application, you simply add more backends for them, and add them in to the appropriate frontend – or a new frontend section, if you’re adding a new layer such as wanting MongoDb to run this way as well.

Now we have the Docker Compose and HAProxy config all set up, we can give it a try.
Our Dockerfiles are set to copy the publish directory to the container and use that to run, so in both the web and api folders, run dotnet publish -c Release -o publish to publish the code (or use the tools in Visual Studio).

First, navigate to the folder containing the compose file and run docker-compose build to build the containers with the latest published code. Then we simply run the command docker-compose up. That will spin up the containers.
Now if we navigate to localhost:9200/webtest/home we can see our page loads including the response to our API. Success!
The point of all this is to easily switch what’s being used, so if you start debugging the API application locally on the expected port 8020 and put a breakpoint in the API controller, you can refresh the page and see the breakpoint get hit.
Obviously the same applies to debugging the Web application on port 5020.

My current problem with this is that switching back from debug to container, the first request will fail before the fallback takes place, which isn’t ideal. I’m planning on looking into using heartbeat or something to try and work around this in the future.

While this is a basic example, you can see how this can expand to cover more services and prove useful, so hopefully it’s of use!

Rendering Emails with RazorViewEngine in .NET Core 2.0


In this post I’m going to cover how to use the RazorViewEngine to render Views, and get the string content so it can be used as an email template.
As always, the code for this example can be found on Github here.

This has been done in .NET Core 2.0, and I’ve created this on my Mac – though being .NET Core it works equally well on Windows (tested on my Surface Pro), and in theory should on Linux too.

To start with I created a Web API and Console project using the dotnet cli. And given this was done through the cli, I then created a solution and added the two projects to it the same way.
In the code for the API I modified the Program class to set a specific port – 5020. This is just so that it launches on the same port when I run the code, so my Console app knows where to send requests.
It’s worth noting here that if you’re running this through Visual Studio you will have to set the port on the project setting as Visual Studio overrides the setting.

The first thing to do is set up a controller for us to talk to. By default the API comes with a ‘Values’ controller, so I’ve renamed that to ‘Email’, removed the boilerplate code, and added a simple Get method that, for now, just returns a hard coded string:
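As a sketch (the route is an assumption of mine), the controller is about as simple as it gets at this point:

```csharp
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class EmailController : Controller
{
    // Hard-coded response for now, just to prove the pipeline works
    [HttpGet]
    public string Get()
    {
        return "hello";
    }
}
```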

I’ve then set up the Console to send a request to this endpoint, printing out the result. This is just so we can easily see what’s being output:
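Something along these lines (the URL matches the port and route assumed above, both my own choices):

```csharp
using System;
using System.Net.Http;

class Program
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            while (true)
            {
                Console.WriteLine("Press enter to send a request...");
                Console.ReadLine();
                // Fire a request at the API and print whatever comes back
                var response = client.GetStringAsync("http://localhost:5020/api/email").Result;
                Console.WriteLine(response);
            }
        }
    }
}
```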

Now if you run both applications, you can see simply that “hello” gets printed to the console whenever we press enter.
The next step is to start setting up our API for the rendering. I’ve created a ‘Templating’ folder in the API, and a subfolder called ‘Emails’. In the Emails folder I’ve added one .cshtml file called HelloEmail.cshtml which is blank for now.
Create a class under the ‘Templating’ folder called RazorViewToStringRenderer. The purpose of this class is going to be to find an IView through the RazorViewEngine, and render that by calling RenderAsync.
The RenderAsync method takes in a ViewContext parameter. So the first thing we need to do is create the ViewContext – which means we need the parameters.
The first parameter is an ActionContext. I’ve added a private method to generate this which simply sets up a HttpContext, and uses that to create the ActionContext:
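A sketch of that private method:

```csharp
private ActionContext GetActionContext()
{
    // A minimal HttpContext backed by our service provider
    var httpContext = new DefaultHttpContext
    {
        RequestServices = _serviceProvider
    };
    return new ActionContext(httpContext, new RouteData(), new ActionDescriptor());
}
```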

As you can see I’ve used a field called _serviceProvider, which is an implementation of IServiceProvider, that we don’t have yet, so let’s add that at the top of the class. We’re also going to make use of two other fields – ITempDataProvider and IRazorViewEngine:

The TempDataProvider is needed for the ViewContext later, and the RazorViewEngine is what will help find and render our Views.
These fields will need setting, so let’s create a constructor for this class passing in the three interfaces, and setting the fields:
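A sketch of the fields and constructor:

```csharp
private readonly IServiceProvider _serviceProvider;
private readonly ITempDataProvider _tempDataProvider;
private readonly IRazorViewEngine _viewEngine;

public RazorViewToStringRenderer(
    IRazorViewEngine viewEngine,
    ITempDataProvider tempDataProvider,
    IServiceProvider serviceProvider)
{
    _viewEngine = viewEngine;
    _tempDataProvider = tempDataProvider;
    _serviceProvider = serviceProvider;
}
```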

Now we need a method that’s going to make use of GetActionContext, as well as handling the rest of the operation, so add an async method (RenderAsync, remember!) that takes in the name of the View we want, and a generic type parameter, TModel, for the View’s model – so this can be used for any and all Views.
In here we can call to create our ActionContext, and make use of the defined RazorViewEngine, and the view name parameter, to locate our View. After the call to locate the View, I’ve checked the Success property to check the View was actually found. The method should look like this for now:

As you can see we get our ActionContext, and use that and the passed in name variable to locate the View through the RazorViewEngine. If successful we get the View from the result, otherwise we throw an exception – obviously what you do in this scenario is up to you!

Now we have the View, we need to create the ViewContext. As mentioned, the first parameter is the ActionContext that we already have. The remaining parameters are the View, a ViewDataDictionary, a TempDataDictionary, a TextWriter, and HtmlHelperOptions.
So to create the ViewContext we pass in the ActionContext and View objects we already have, then we can create the remaining parameters – we instantiate a ViewDataDictionary of type TModel, with new instances of both of its constructor parameters, and set the Model property to our passed-in model.
Next is the TempDataDictionary which we create by passing in the HttpContext of our ActionContext, and our _tempDataProvider field.
For the TextWriter, I’ve added a using statement for a StringWriter, and placed the creation of the ViewContext in there – then I can use the defined StringWriter for the parameter.
Lastly, pass in a new instance of HtmlHelperOptions:

Now that we have both our View, and the ViewContext, we can simply call the RenderAsync method, and get the string output. Inside the using statement, after the creation of the ViewContext, we want to add the following lines:
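Putting all the steps above together, the complete method might look something like this (the method name is my own choice):

```csharp
public async Task<string> RenderViewToStringAsync<TModel>(string name, TModel model)
{
    var actionContext = GetActionContext();

    // Locate the View by name through the RazorViewEngine
    var viewEngineResult = _viewEngine.FindView(actionContext, name, false);
    if (!viewEngineResult.Success)
    {
        throw new InvalidOperationException($"Couldn't find view '{name}'");
    }

    var view = viewEngineResult.View;

    using (var output = new StringWriter())
    {
        var viewContext = new ViewContext(
            actionContext,
            view,
            new ViewDataDictionary<TModel>(
                new EmptyModelMetadataProvider(),
                new ModelStateDictionary())
            {
                Model = model
            },
            new TempDataDictionary(actionContext.HttpContext, _tempDataProvider),
            output,
            new HtmlHelperOptions());

        // Render the View into the StringWriter and hand back the string
        await view.RenderAsync(viewContext);
        return output.ToString();
    }
}
```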

This method will now render our View with the provided model, and return us the string representation of that!
We’ve passed in three parameters to this class, and we don’t want to worry about handling the setup of those objects, so we can use dependency injection to do the heavy lifting.
We want to add an interface for this class, so I created an IViewToStringRenderer interface, and had the class implement that. The interface simply defines the one method we’ve already implemented:
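A sketch of the interface (the method name is my own):

```csharp
public interface IViewToStringRenderer
{
    Task<string> RenderViewToStringAsync<TModel>(string name, TModel model);
}
```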

Now we can configure the startup to register this interface and implementation in the ConfigureServices method:
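A sketch of the registration (Scoped is my choice of lifetime here):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    // Wire the renderer up so it can be constructor-injected anywhere
    services.AddScoped<IViewToStringRenderer, RazorViewToStringRenderer>();
}
```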

.NET will do the rest for us!
Now we need the controller we previously set up to make use of this class. I’ve added a constructor to the EmailController class passing in IViewToStringRenderer, and setting this to a private field on the class. Once again, .NET handles injecting this into the controller, which should now start with this:

In order to test this, we’re going to need our templates to do something.
I’ve created a simple HelloEmailModel class, which just contains a Name property so we can see our model binding working. Then for the HelloEmail view, I’ve just set the model, and added a binding to the Name property:
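As a sketch, the model and View might look like this (the namespace is an assumption of mine):

```csharp
public class HelloEmailModel
{
    public string Name { get; set; }
}
```

And HelloEmail.cshtml:

```cshtml
@model RazorEmailSample.Templating.Emails.HelloEmailModel
<p>Hello, @Model.Name!</p>
```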

Back in the EmailController, I’m going to hard code in some values, as it’s not important for this demo.
I’ve created an instance of the Model, and set my name. Then I’ve added a try-catch block, which makes use of the RazorViewToStringRenderer to try and make use of our HelloEmail template.
If an exception is thrown (which I added earlier if the view wasn’t found) we return an error message instead:
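Putting the controller together, a sketch might look like this (the renderer method name and exception type match my own assumptions above):

```csharp
[Route("api/[controller]")]
public class EmailController : Controller
{
    private readonly IViewToStringRenderer _renderer;

    // .NET Core's DI injects the registered renderer for us
    public EmailController(IViewToStringRenderer renderer)
    {
        _renderer = renderer;
    }

    [HttpGet]
    public async Task<string> Get()
    {
        var model = new HelloEmailModel { Name = "Ian" };
        try
        {
            return await _renderer.RenderViewToStringAsync("HelloEmail", model);
        }
        catch (InvalidOperationException ex)
        {
            // Thrown when the view can't be found
            return $"Error: {ex.Message}";
        }
    }
}
```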

Great, let’s give it a run and see what happens…

Oh no! We’ve got an error instead.
Error: One or more errors occurred. (Couldn’t find view ‘HelloEmail’)

So RazorViewEngine can’t find the correct .cshtml file, let’s put a breakpoint in and see what’s happened.
We can see that the result’s Success indicator is false. Because of this, there is another property available to us – SearchedLocations. This tells us where the engine has tried to locate our view:

We can see that it’s searched the conventional folders for the templates, but that’s not where I’ve placed them. You could place the template there, but I like to keep things separated more – especially if you intend for your API to actually have some Views.
So now we need to tell the engine to look in the correct location. To do this, we need to add a ViewLocationExpander.
Under the Templating folder create a new class called ViewLocationExpander. This class will implement the IViewLocationExpander.
Implementing the interface gives us two methods we need to populate – ExpandViewLocations and PopulateValues.
I’ve added a list of strings which gets populated in the constructor – this is populated by getting the current directory, and finding all ‘Emails’ folders in there.
Then in the ExpandViewLocations we Union our list, with the passed in list of ViewLocations – if you don’t want to search the default locations, just return your own list.
For PopulateValues we’re just adding a customviewlocation value to the context, for the name of our expander.
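A sketch of the expander at this stage (note the constructor still produces absolute paths, which causes the problem described next):

```csharp
public class ViewLocationExpander : IViewLocationExpander
{
    private readonly List<string> _directoryLocations;

    public ViewLocationExpander()
    {
        // Find every 'Emails' folder below the current directory
        _directoryLocations = Directory
            .GetDirectories(Directory.GetCurrentDirectory(), "Emails", SearchOption.AllDirectories)
            .ToList();
    }

    public IEnumerable<string> ExpandViewLocations(
        ViewLocationExpanderContext context, IEnumerable<string> viewLocations)
    {
        // Search our folders as well as the default locations
        return _directoryLocations.Union(viewLocations);
    }

    public void PopulateValues(ViewLocationExpanderContext context)
    {
        context.Values["customviewlocation"] = nameof(ViewLocationExpander);
    }
}
```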

In order to make use of the expander, we need to configure the RazorViewEngine to make use of it.
In the Startup, we need to configure the RazorViewEngineOptions to add our expander to the ViewLocationExpanders on the options.
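A sketch of the options configuration in ConfigureServices:

```csharp
services.Configure<RazorViewEngineOptions>(options =>
{
    // Teach the RazorViewEngine about our extra view locations
    options.ViewLocationExpanders.Add(new ViewLocationExpander());
});
```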

If you run it now you’ll see…. An error still!
Putting a break point in again will show that we have indeed searched an extra location, but it doesn’t look quite right:

For one, that’s an absolute path, which we don’t want, as the RazorViewEngine doesn’t work with absolute paths. And you’ll also notice that it’s searched the folder, but not for the view we’re after. We can make a minor change to the ViewLocationExpander to fix this.
We want to remove the root path from each found location, and also append the file name, which can be done with a couple of select statements:
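A sketch of the updated constructor – {0} is the placeholder the view engine substitutes with the view name:

```csharp
_directoryLocations = Directory
    .GetDirectories(Directory.GetCurrentDirectory(), "Emails", SearchOption.AllDirectories)
    // Strip the absolute root so the engine gets an app-relative path...
    .Select(path => path.Replace(Directory.GetCurrentDirectory(), string.Empty))
    // ...then append the view file name pattern
    .Select(path => $"{path}/{{0}}.cshtml")
    .ToList();
```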

If we run this now, we’ll see that the email has been rendered with our model, hooray!

Let’s make a useful addition – if you have multiple different emails you’ll be managing, you don’t want to be maintaining all the layouts separately. It’s nice to have a consistent theme applied for you – so let’s add a layout file.
Under the Emails folder, add a _Layout.cshtml file.
I’m not going into the details of formatting etc, just enough to show working with layouts works.
In the layout I’m just setting the structure of the page, adding a title, and rendering our body content:
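A minimal sketch of _Layout.cshtml (remember the view itself needs its Layout property set, via the view or a _ViewStart file):

```cshtml
<html>
<head>
    <title>Hello Email</title>
</head>
<body>
    @RenderBody()
</body>
</html>
```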

Now if you run the app, you can see our layout is rendered as well as our View. The same works for rendering sections for scripts, content etc – just be careful about what your email client will allow!

There are two small things we want to do now – to make sure things run as expected when we deploy.
The first, is to ensure that the email templates are copied to the output directory – they can be found in the source folder now, but they need to be present when you publish and deploy.
Add the following to your project file:
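Something along these lines – the exact path depends on where your templates live:

```xml
<ItemGroup>
  <Content Include="Templating\Emails\**\*.cshtml">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```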

The second is to improve how we find the templates. The current idea works well enough, but what if you run the project from outside of its root folder? If you do so, and have another project containing views present, you can run into rendering issues due to confusion between views with the same name, or with different layout files being found.
To fix this, we pass in the content root path of the hosting environment, and filter out files that don’t match. Inject IHostingEnvironment into your startup, and store the content root in a field, then pass this to the expander. In the expander constructor, we want to only find files that contain that path:
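A sketch of the updated constructor, assuming the content root path is passed through from Startup:

```csharp
public ViewLocationExpander(string contentRootPath)
{
    _directoryLocations = Directory
        .GetDirectories(Directory.GetCurrentDirectory(), "Emails", SearchOption.AllDirectories)
        // Ignore any 'Emails' folders belonging to other projects
        .Where(path => path.Contains(contentRootPath))
        .Select(path => path.Replace(Directory.GetCurrentDirectory(), string.Empty))
        .Select(path => $"{path}/{{0}}.cshtml")
        .ToList();
}
```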

Now you can use the RazorViewEngine to render a View, and get the string content to be used for an email.

One last thing to mention is a potential issue when running integration tests. If you were to add a test using the renderer, you may see a lot of confusing errors about missing references in the View files. A workaround that worked for me is mentioned in this Github issue:
It’s just a case of creating one file, and adding a few lines to your project file. After making those changes the errors went away for me 🙂

As always, please comment, raise an issue on Github, or otherwise get in touch if you see any problems or improvements!

C# Unit Of Work Pattern with Dapper


UPDATE: The code here is based on an existing example on GitHub by Tim Schreiber, you can find the original code here.

At work I’ve started looking into using Dapper instead of Entity Framework. In our case, this is because of performance – we’re on EF 6, not Core (and Dapper is faster than even EF Core).

I’ve found it really easy to get started with, given there’s very little that Dapper makes you do that you weren’t already doing if you were using SqlConnection. If you’re used to Entity Framework, you’ll find a little more work is required to set everything up.

We wanted to get this working with the Unit of Work as our process involved updating half a dozen tables, and if something went wrong we wanted to roll back the changes and report the error.
As always, the code for this post can be found here.
Implementing this pattern comprises two parts – the unit of work implementation, and our repositories.

The first thing we’ll do is set up our repository classes, I’ll add two, but as they’re extremely common I’ll only cover one here. If you really want to see both, check out the source code.
I’ve also created entity classes for each of these, but as these are just POCOs there isn’t much point in covering them.
I’ve added a folder called ‘Repositories’ to keep these in, and in there I’ve added two new classes – EmployeeRepository and CustomerRepository.
Each of these has three methods – Add, Update and Remove.
We’ll pass in an IDbTransaction in the constructor of each repository, and from this we can get the connection which we’ll use to execute our SQL queries.
Add and Update take in an entity class representing a record in the table, and Remove simply takes in an int for the id column. Only Add returns anything: the newly set Id of the entity.
All in, this is a simple class:
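A sketch of the EmployeeRepository – the Employee POCO, table name, and columns are assumptions of mine:

```csharp
using System.Data;
using Dapper;

public class EmployeeRepository
{
    private readonly IDbTransaction _transaction;
    private IDbConnection Connection => _transaction.Connection;

    public EmployeeRepository(IDbTransaction transaction)
    {
        _transaction = transaction;
    }

    public int Add(Employee employee)
    {
        // Insert and return the newly generated id
        employee.Id = Connection.ExecuteScalar<int>(
            "INSERT INTO Employees (FirstName, LastName) VALUES (@FirstName, @LastName); " +
            "SELECT CAST(SCOPE_IDENTITY() AS INT)",
            employee,
            _transaction);
        return employee.Id;
    }

    public void Update(Employee employee)
    {
        Connection.Execute(
            "UPDATE Employees SET FirstName = @FirstName, LastName = @LastName WHERE Id = @Id",
            employee,
            _transaction);
    }

    public void Remove(int id)
    {
        Connection.Execute(
            "DELETE FROM Employees WHERE Id = @Id",
            new { Id = id },
            _transaction);
    }
}
```

Passing _transaction into each Dapper call is what ties every statement to the shared transaction.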

Since this is more of an example around the Unit of Work pattern, the SQL we’re using is really simple. The main thing to take note of here is the IDbTransaction being passed as a parameter to the Dapper Execute calls.
This does exactly what you’d expect, linking all changes to that transaction, so that if something goes wrong, we can rollback everything in that transaction rather than having to figure anything out for ourselves.
So let’s go ahead and make use of this by implementing the Unit of Work – I’ve done this in a class I’ve called DapperUnitOfWork.
This class will hold our repository – instantiating it with the transaction if null – and will include a Commit method which attempts to save our changes, rolling back if something goes wrong. I’ve also implemented IDisposable, and in the Dispose method I’m clearing up the IDbTransaction and IDbConnection.
Also worth noting the private ResetRepositories method – this is needed as we instantiate our repository classes with an IDbTransaction. When we’ve committed changes, we’ll be on a new transaction, so we’ll want to re-instantiate our repositories to reflect this.
Your class should look something like this:
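A sketch of the unit of work (shown with just the one repository for brevity):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public class DapperUnitOfWork : IDisposable
{
    private IDbConnection _connection;
    private IDbTransaction _transaction;
    private EmployeeRepository _employeeRepository;

    public DapperUnitOfWork(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    // Lazily create the repository against the current transaction
    public EmployeeRepository EmployeeRepository =>
        _employeeRepository ?? (_employeeRepository = new EmployeeRepository(_transaction));

    public void Commit()
    {
        try
        {
            _transaction.Commit();
        }
        catch
        {
            // Something went wrong - undo everything in this transaction
            _transaction.Rollback();
            throw;
        }
        finally
        {
            // Start a fresh transaction and force repositories to be recreated
            _transaction.Dispose();
            _transaction = _connection.BeginTransaction();
            ResetRepositories();
        }
    }

    private void ResetRepositories()
    {
        _employeeRepository = null;
    }

    public void Dispose()
    {
        _transaction?.Dispose();
        _transaction = null;
        _connection?.Dispose();
        _connection = null;
    }
}
```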

No surprises there, it’s easy to see what it’s doing, and it’s easy to use – simply create a new instance of the DapperUnitOfWork class in a using statement, and call the methods on your repository:
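For example (connectionString and the Employee properties are placeholders):

```csharp
using (var unitOfWork = new DapperUnitOfWork(connectionString))
{
    var employee = new Employee { FirstName = "Ian", LastName = "Rufus" };
    unitOfWork.EmployeeRepository.Add(employee);
    // Nothing hits the database permanently until we commit
    unitOfWork.Commit();
}
```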

Obviously this code could be improved in numerous ways – better error handling, abstraction etc.
But hopefully it’s clear enough to follow!
As always, get in touch if you think there’s a way to improve things 🙂
Or feel free to submit an issue/PR to the repo on Github here.

Azure Service Bus Listener with Azure Service Fabric


I’ve been starting to play around with Azure Service Fabric, and one of the things a friend wanted to do was listen for messages on a service bus and handle them, doing some work with file processing and blob storage. Doing this in Service Fabric would easily enable such a thing at massive scale, so I thought I’d take a look at laying out the basics for him.
As such, this blog post is going to cover the basics of listening to an Azure Service Bus queue on Azure Service Fabric.

My sample code can be found on GitHub here.

First things first, before you can work with Service Fabric you need to set up your dev environment.
Obviously, you’ll need an Azure account and to have a Service Bus and Queue set up – you can find info on that here if required.

The application I’m going to create will be very simple, only consisting of two parts:
– A stateless service
– A dummy client console application

The client is just for publishing a message onto the service bus – we need to be able to check this thing works, right?
The stateless service is what we’ll deploy to service fabric. This will listen to the service bus, and trigger a method when something is received.

Now create a new solution in Visual Studio, and you’re going to want to create a new Service Fabric Application. From the window that appears, select a ‘Stateless Service’. I’ve called mine ‘ServiceBusTrigger’.
Other types of services may be more suitable depending on what you’re trying to achieve, but for my use case – receiving and processing messages – I don’t care about state.

Now we have the service scaffolded, let’s set up our service bus listener.
First, you’ll need to add the Nuget package for WindowsAzure.ServiceBus.
In the ServiceBusTrigger class (or whatever name you gave to your service) you’ll see an override method called CreateServiceInstanceListeners.
This is where we’ll want to set up our listener, and in the future any other listeners you require for this service.
We need to create our ServiceInstanceListener – this will be a class that implements the ICommunicationListener interface. I created a folder to keep mine separate called ServiceListeners, and called my class ServiceBusQueueListener.
Implementing the ICommunicationListener interface you’ll see 3 methods – Abort, CloseAsync, and OpenAsync. We’ll use Abort and CloseAsync to close the connection to our service bus, and we’ll use OpenAsync to set up the client and start receiving messages.
Without any changes yet, your class should look like this:

We’ve already added the Nuget package for the service bus, so let’s start configuring that.
We only need a connection string, and the name of a queue, in order to set up our client. I’ve added these as private fields on the class, so we can set them in the constructor, then access them in the OpenAsync method. I’ve also added two more fields – one for the service bus QueueClient, and one for an Action.
The Action will also be passed in the constructor and set, and we’ll call this method when a message is received. This approach lets this listener be quite reusable – you simply pass in the callback to handle incoming messages where it’s implemented. If there’s a better approach, let me know! 🙂

In the OpenAsync method we’ll create our client from the connection string and queue name, and then set it up so that when a message is received, we invoke the callback Action.
I’ve also added a Stop method to close down the client; this will be called by both CloseAsync and Abort, as in this scenario we’ll want to do the same thing in both instances.
After these changes, your class should look something like this:
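A sketch of the finished listener, using the WindowsAzure.ServiceBus client:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

public class ServiceBusQueueListener : ICommunicationListener
{
    private readonly string _connectionString;
    private readonly string _queueName;
    private readonly Action<BrokeredMessage> _callback;
    private QueueClient _client;

    public ServiceBusQueueListener(
        Action<BrokeredMessage> callback, string connectionString, string queueName)
    {
        _callback = callback;
        _connectionString = connectionString;
        _queueName = queueName;
    }

    public Task<string> OpenAsync(CancellationToken cancellationToken)
    {
        _client = QueueClient.CreateFromConnectionString(_connectionString, _queueName);
        // Invoke the provided callback for every message received
        _client.OnMessage(message => _callback(message));
        return Task.FromResult(string.Empty);
    }

    public Task CloseAsync(CancellationToken cancellationToken)
    {
        Stop();
        return Task.CompletedTask;
    }

    public void Abort()
    {
        Stop();
    }

    private void Stop()
    {
        // Same shutdown behaviour for both CloseAsync and Abort
        if (_client != null && !_client.IsClosed)
        {
            _client.Close();
        }
    }
}
```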

That’s most of the work done!
Now back in the service class, we’ll need to register this listener in the CreateServiceInstanceListeners method. Here we simply create a new ServiceInstanceListener class with a new instance of our ServiceBusQueueListener.
I’ve got the connection string and queue name as private fields again, and I’ve added a simple ‘Test’ method that takes in a BrokeredMessage as a parameter, and will write the contents out to the Debug window. This will be the action we pass in to our listener.
All in, our service class now looks like this:
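A sketch of the service class (the connection string and queue name are placeholders here – they get moved into config later in the post):

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using Microsoft.ServiceBus.Messaging;
using Microsoft.ServiceFabric.Services.Communication.Runtime;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class ServiceBusTrigger : StatelessService
{
    private readonly string _connectionString = "<your service bus connection string>";
    private readonly string _queueName = "<your queue name>";

    public ServiceBusTrigger(StatelessServiceContext context)
        : base(context)
    { }

    protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
    {
        // Register our listener, passing Test as the message-handling callback
        yield return new ServiceInstanceListener(context =>
            new ServiceBusQueueListener(Test, _connectionString, _queueName));
    }

    private void Test(BrokeredMessage message)
    {
        // Write the message contents out to the Debug window
        Debug.WriteLine($"Message received: {message.GetBody<string>()}");
    }
}
```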

And that’s all that’s required!
To test it, let’s add a console project to our solution to send something to the service bus – this is my DummyClient.
Again, add the Nuget package for the service bus, and set up the connection string and queue name.
We then want to create a connection again, the same as we did in our listener class.
Now all that’s needed is to create a message and send it using the client:
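A sketch of the DummyClient (connection string and queue name are placeholders):

```csharp
using Microsoft.ServiceBus.Messaging;

class Program
{
    static void Main()
    {
        var connectionString = "<your service bus connection string>";
        var queueName = "<your queue name>";

        var client = QueueClient.CreateFromConnectionString(connectionString, queueName);
        // Send a simple string-bodied message for the service to pick up
        client.Send(new BrokeredMessage("Hello from the DummyClient!"));
        client.Close();
    }
}
```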

If you debug the service, and then run the DummyClient, you should now see your test message printed in the Output window!

I’ve added one last step to my sample on GitHub, and that’s to make use of environment variables from the config, so you can have the connection string and queue name swapped out depending on where you’re running the code.
The first step is to set up the default environment variables in ServiceManifest.xml – this is under the PackageRoot folder on your service.
In there find the CodePackage node, and underneath the EntryPoint add a new EnvironmentVariables node. And under that, add two EnvironmentVariable nodes, one for the connection string, and one for the queue name.
Your code package node should now look like this:
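Something like this – the package name, program name, and variable names are assumptions of mine:

```xml
<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ExeHost>
      <Program>ServiceBusTrigger.exe</Program>
    </ExeHost>
  </EntryPoint>
  <EnvironmentVariables>
    <EnvironmentVariable Name="ServiceBusConnectionString" Value="" />
    <EnvironmentVariable Name="QueueName" Value="" />
  </EnvironmentVariables>
</CodePackage>
```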


We also want to set these in ApplicationManifest.xml – this can be found under the ‘ServiceBusTrigger’ node (or whatever you called your project – not your service), under the ApplicationPackageRoot.
This time find the ServiceManifestImport node, and add a new EnvironmentOverrides node, setting the CodePackageRef to the one above.
Again, in here we want to add the two EnvironmentVariable nodes, however this time we’re going to set the value as a parameter. This section should look something like this:
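A sketch, with the same naming assumptions as above:

```xml
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="ServiceBusTriggerPkg" ServiceManifestVersion="1.0.0" />
  <EnvironmentOverrides CodePackageRef="Code">
    <EnvironmentVariable Name="ServiceBusConnectionString" Value="[ServiceBusConnectionString]" />
    <EnvironmentVariable Name="QueueName" Value="[QueueName]" />
  </EnvironmentOverrides>
</ServiceManifestImport>
```

Note the parameters referenced in [ ] brackets also need declaring in the ApplicationManifest’s Parameters section, with a DefaultValue.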

Now, we can override that value from our environment specific parameter xml files. These are found under the ApplicationParameters folder – there are three by default: Cloud.xml, Local.1Node.xml, and Local.5Node.xml.
In each of these files there is a Parameters node, under which we want to add two more Parameter nodes after the existing one. The name of the parameter should match the name in [ ] brackets that we set above, and the values should be the environment specific values you want:
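For example, in Cloud.xml (the first parameter is the instance count the template generates; the values here are placeholders):

```xml
<Parameters>
  <Parameter Name="ServiceBusTrigger_InstanceCount" Value="-1" />
  <Parameter Name="ServiceBusConnectionString" Value="Endpoint=sb://your-namespace..." />
  <Parameter Name="QueueName" Value="your-queue-name" />
</Parameters>
```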

Now that’s configured, the last step is to read these values from the config and put them to use. Go back to the ServiceBusTrigger class, and instead of hardcoding queue and connection string values, we’ll read the environment variables by name.
After updating, your constructor should now look like this:
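A sketch of the constructor – the manifests surface these as process environment variables, so one simple way to read them is:

```csharp
public ServiceBusTrigger(StatelessServiceContext context)
    : base(context)
{
    // Names must match the EnvironmentVariable nodes in the manifests
    _connectionString = Environment.GetEnvironmentVariable("ServiceBusConnectionString");
    _queueName = Environment.GetEnvironmentVariable("QueueName");
}
```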

And that’s it!
That’s all that’s required to set up a listener running in an Azure Service Fabric app. As you can see, it’s easy to switch out the service bus for any other listener you might want, or to extend the Action we passed in to handle anything we want when a message is received.

You can find my sample code on GitHub here.

As always, if you have any suggestions or objections please let me know! We’re all here to learn 🙂

Reducing Consumed Request Units in DocumentDb with C# .NET


I’ve previously covered getting up and running with DocumentDb. Since then, one of the things I’ve been looking at with it is how to reduce the amount of request units each write operation to the database consumes.
This post will be assuming you’ve got that code, as we’ll be building on it here.

First, it’s good to understand what throughput you’ve set your collection to have, and what this means in terms of performance as you scale.
Azure lets you configure the collection to have a throughput of between 400 and 10,000 request units (RUs) per second.
Every operation you perform on DocumentDb will consume some of these request units, the amount dependent on the size and complexity of what you’re doing.
Once you hit the limit of your throughput, DocumentDb will throttle requests to get you under the limit. When this happens, you’ll receive a HTTP Status 429 response, ‘RequestRateTooLarge’.
In this case, you need to get the header ‘x-ms-retry-after-ms’ which will give you the time to wait before attempting the request again.
However, if you’re using the .NET client SDK then this will be handled for you most of the time, as the SDK will catch the response, respect the header, and retry the request.
One thing to be wary of is not setting your throughput too high – Azure will charge you for RESERVED throughput. That means whatever you set the collection to, that is what you get charged for, regardless of if you use it or not. I’ll cover scaling the database up in a future post, but unless you’re sure you’ll need it, I’d set the throughput to the minimum, especially if it’s just for testing.

Now, in order to find out if any changes we make will actually work, we’ll need to first find out how many request units we’re currently consuming.
For the purposes of this demo, I’m going to be writing the following class to the database:
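A sketch of the class – beyond the id, TimeStamp, and Email mentioned below, the property names are my own:

```csharp
using System;
using Newtonsoft.Json;

public class CustomerEntry
{
    // DocumentDb expects a lowercase 'id' property
    [JsonProperty("id")]
    public string Id { get; set; }

    // Useful for audit or debug issues
    public DateTime TimeStamp { get; set; }

    public Customer Customer { get; set; }
}

public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }
}
```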

This is just so we have something that might be vaguely more realistic than me just dumping a load of randomly named properties into an object.
So what we’ll be writing to the database is a CustomerEntry object. I like to wrap my objects alongside the id that DocumentDb expects, and a TimeStamp for any audit or debug issues.

Now we’ve done this, we’ll take the method from my previous post, and modify it slightly so we can output to the console how many request units the operation consumed:
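A sketch of that modified method might look like the following – the `client` field, database name, and collection name are assumptions carried over from my earlier setup:

```csharp
// Write the document and report how many request units the operation consumed.
// ResourceResponse exposes the charge via its RequestCharge property.
private static async Task WriteDocument(object documentObject)
{
    var response = await client.CreateDocumentAsync(
        UriFactory.CreateDocumentCollectionUri("MyDatabase", "MyCollection"),
        documentObject);

    Console.WriteLine($"Request units consumed: {response.RequestCharge}");
}
```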

The parameter ‘documentObject’ will be an instance of CustomerEntry. So if you now run the console app, you’ll see the number of request units output to the window – I’m seeing 19.62 units consumed.
So now we want to try and reduce this amount, which we’re going to do by changing our indexing policy for the DocumentCollection.
In DocumentDb, the indexing determines what on the object is queryable. By default, the entire object is indexed.
We're going to change this to index only the one or two properties that I might be interested in querying on, ignoring the rest. This might not be suitable for your application, so think carefully about what you want to index based on your needs.

To do this, I’m going to delete the existing DocumentCollection, and update the method to create a DocumentCollection to apply the new indexing policy.
We’re going to set the ExcludedPaths to ignore the entire Customer object, and then in our IncludedPaths we’ll add the specific property we want. In this case, I’m assuming I’ll only be interested in the Email on the Customer (as that would be my unique property).
So with these changes, our method will now look like this:
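A sketch of the updated method is below – the database and collection names are my own, and the indexing paths follow DocumentDb's path syntax (`/*` for everything under a node, `/?` for a scalar value):

```csharp
private static async Task CreateDocumentCollection()
{
    // Only create the collection if it doesn't already exist.
    var exists = client.CreateDocumentCollectionQuery(
            UriFactory.CreateDatabaseUri("MyDatabase"))
        .Where(c => c.Id == "MyCollection")
        .AsEnumerable()
        .Any();
    if (exists) return;

    var collection = new DocumentCollection { Id = "MyCollection" };
    collection.IndexingPolicy.Automatic = true;

    // Exclude the entire Customer object from indexing...
    collection.IndexingPolicy.ExcludedPaths.Add(
        new ExcludedPath { Path = "/Customer/*" });

    // ...include everything else (which leaves id and TimeStamp indexed)...
    collection.IndexingPolicy.IncludedPaths.Add(
        new IncludedPath { Path = "/*" });

    // ...and add back the one Customer property we want to query on.
    collection.IndexingPolicy.IncludedPaths.Add(
        new IncludedPath { Path = "/Customer/Email/?" });

    await client.CreateDocumentCollectionAsync(
        UriFactory.CreateDatabaseUri("MyDatabase"),
        collection,
        new RequestOptions { OfferThroughput = 400 });
}
```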

So here we're first querying to see if the Document Collection exists, as discussed in my previous post. Then we set up our Document Collection, setting the Indexing Policy to automatic.
Now, we need to configure our excluded and included paths. We set an Excluded Path to exclude the entire Customer object, and set the root Included Path to include everything else. This leaves us with the id and TimeStamp properties being indexed, which will be useful for future queries.
Next, we set another Included Path, this time specifying the Email property on the Customer object. So now, our TimeStamp, id, and Customer.Email properties are all indexed, and as such are all queryable.
The real test is the difference in the consumed request units per write operation – remember we were previously consuming 19.62 RUs when writing.
Ensure the collection has been deleted in the Azure Portal, then run the code to create the collection again, and write an object to the collection. When doing so, I now see 7.24 RUs being consumed.
We’ve more than halved the amount we consume!
This is just one way of improving performance with DocumentDb – I’ll continue to learn more and will share what I find.

As always, if you have any improvements or suggestions, feel free to let me know!

Getting Started with DocumentDb and C# .NET


One of the things I’ve been playing around with recently is an Azure hosted DocumentDb – coincidentally it’s also my first foray into NoSQL databases!
In this post I’ll be covering the basics of getting up and running with DocumentDb based on what I’ve encountered so far. To do this, we’re going to use a basic Console application, and through this we’ll create and read a new database, document collection, and documents themselves.

First, we need to make sure everything is set up in Azure (you'll need an Azure account with some credit available for this – if you have an MSDN subscription, you get £40 free credit a month, or your local equivalent 🙂 ).
In the Azure portal click the ‘+ New’ button, and in the pane that appears select ‘Databases’ and then ‘NoSQL (DocumentDb)’
In the ‘New account’ pane that loads, give your account an ID, a new Resource Group name, and select the Location.
Once done, click Create, and wait for the database to be deployed.

While waiting for the deployment, open Visual Studio and create a blank Console Application. Open the Nuget package manager so we can add DocumentDb – searching ‘documentdb’ should find it as the top result, titled ‘Microsoft.Azure.DocumentDB’
Now in your Program class, create two const strings. These will be used to hold the endpoint URI and the Primary Key of the DocumentDb account.

Now that the database has been deployed, we need to get the endpoint URI and the Primary Key for the account. To do this, navigate to your DocumentDb account and select ‘Keys’. From here copy the URI and Primary Key values into the strings in your console application.
We’ll also want to add an instance of DocumentClient to the class as well, which is the client used for interaction with the database, so your class should look something like this:
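Something like this is what I mean – the placeholder endpoint and key values below are obviously stand-ins for your own:

```csharp
using System;
using Microsoft.Azure.Documents.Client;

public class Program
{
    // Copied from the 'Keys' blade of your DocumentDb account.
    private const string EndpointUri = "https://your-account.documents.azure.com:443/";
    private const string PrimaryKey = "<your-primary-key>";

    // The client used for all interaction with the database.
    private static DocumentClient client;

    public static void Main(string[] args)
    {
    }
}
```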

Now, we need to create a new instance of DocumentClient using the endpoint and primary key:
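A minimal sketch of that, with the failure written out to the console:

```csharp
try
{
    // Connect to the DocumentDb account using the endpoint and primary key.
    client = new DocumentClient(new Uri(EndpointUri), PrimaryKey);
}
catch (Exception e)
{
    Console.WriteLine($"Failed to create the DocumentClient: {e.Message}");
}
```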

This will try to connect to your DocumentDb account, and write out the error if it fails.
Now we want to actually create a Database on our account, but we also need to account for the database already existing. The official docs tell you to attempt to read the database immediately in a try-catch block, creating the database in the catch block if the status code indicates it wasn’t found.
Personally, I hate this approach – I’m a firm believer that exception handling should never be used to control program flow.
Instead, for this example, we’ll query for the database first, and create the database if that doesn’t exist:
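A sketch of that query-first approach, assuming the database name is held in a private `databaseName` field (and `using System.Linq;` for the query):

```csharp
private static async Task CreateDatabaseIfNotExists()
{
    // Query for a database with a matching id rather than
    // relying on a try-catch around a read.
    var databaseExists = client.CreateDatabaseQuery()
        .Where(db => db.Id == databaseName)
        .AsEnumerable()
        .Any();

    if (!databaseExists)
    {
        await client.CreateDatabaseAsync(new Database { Id = databaseName });
    }
}
```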

Here we’re querying for a database with an id matching the chosen name for your database – which I’ve stored in a private field. If any are found matching, we do nothing, otherwise we create a new database through the client, setting the name.

Now we have our database, we need a Document Collection for us to store our items. Again, the Microsoft article shows this done using a try-catch block. Instead, I’ve used another query, this time based on the collection name we want to use:
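That query might look like this, assuming a private `collectionName` field alongside `databaseName`:

```csharp
private static async Task CreateDocumentCollectionIfNotExists()
{
    var collectionExists = client.CreateDocumentCollectionQuery(
            UriFactory.CreateDatabaseUri(databaseName))
        .Where(c => c.Id == collectionName)
        .AsEnumerable()
        .Any();

    if (!collectionExists)
    {
        // 400 RU/s is the lowest (and cheapest) throughput option.
        await client.CreateDocumentCollectionAsync(
            UriFactory.CreateDatabaseUri(databaseName),
            new DocumentCollection { Id = collectionName },
            new RequestOptions { OfferThroughput = 400 });
    }
}
```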

Again, we query to see if a document collection with an id matching our given name already exists. If it does, we do nothing; otherwise we create a new document collection object, set the id to our chosen name (which I've also stored in a private field for future use), and pass that to the client alongside the URI for our database and a RequestOptions object. For the RequestOptions I've simply set the throughput to 400 for the collection – this is the lowest option available, and thus the cheapest.

Now we have our database and document collection, the last step is to write to our database. The method on the client accepts any object, as NoSQL doesn't require what's being written to the database to adhere to any set schema.
So given an object ‘documentObject’ we can write to the database like this:
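For example (reusing the same private name fields):

```csharp
private static async Task WriteDocument(object documentObject)
{
    // The client serialises whatever object it's given.
    await client.CreateDocumentAsync(
        UriFactory.CreateDocumentCollectionUri(databaseName, collectionName),
        documentObject);
}
```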

Reading from the database is achieved in a similar fashion:
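A sketch of a read, this time building a document URI from the document's id:

```csharp
private static async Task<Document> ReadDocument(string documentId)
{
    var response = await client.ReadDocumentAsync(
        UriFactory.CreateDocumentUri(databaseName, collectionName, documentId));

    return response.Resource;
}
```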

The documentId parameter is the id of the document you're wanting to retrieve. If you want to keep track of the id properties of documents yourself, you can set an 'id' property on the object you write to the database. Note that this must be lowercase, which threw me off when I first attempted this! Personally, I keep track of the id by creating a Guid for each object before writing to the database. Of course, there are other ways of querying for documents which I may cover in the future.

But that’s all that’s needed to get up and running with the basics – you now have a database, a document collection, and can read and write documents to that collection! If anything is unclear, or you spot anything I could improve, please let me know!

Give your Web API some Swagger with Swashbuckle


At work I’m currently working on my very first Web API using .NET Core. As part of this, we wanted to set it up to follow the OpenAPI specification, and provide a tool for other developers here to quickly find out more about the API and test some basic use cases.
Swagger seemed like a good answer to all this, and fortunately there’s even a ready to go implementation for .NET called Swashbuckle. In this post I’ll cover the basics to get up and running with Swashbuckle on a Web API.

I’m going to use a new, default Web API project for this, but it’s easy enough to add this to your existing work, as you’ll see.

First, we need to add Swashbuckle to our project. Open the nuget package manager, ensure you’re including pre-release packages, and find Swashbuckle. At the time of writing this, the latest stable version of Swashbuckle doesn’t support .NET Core, and so you’ll need the latest pre-release version until this goes live. At writing, the latest version is v6.0.0-beta902.
One of the good things about Swashbuckle is that it’s open source, you can find the new repo here. They’ve started from scratch to support .NET Core, so if you do encounter any issues, you can get involved and help resolve the issues yourself 🙂

Now that we have Swashbuckle, it needs to be configured so we can actually make use of it. To do this, go to Startup.cs, and in your ConfigureServices method add:
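With the pre-release package I'm using, registering the generator is a one-liner (later Swashbuckle versions may require options here):

```csharp
// Register the Swagger generator with the DI container.
services.AddSwaggerGen();
```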

and in your Configure method add the following:
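Again going by the beta I'm using (the UI method has since been renamed in later releases):

```csharp
// Serve the generated Swagger JSON document...
app.UseSwagger();

// ...and the Swagger UI at /swagger/ui.
app.UseSwaggerUi();
```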

And that’s it.
If you now run the project, and go to /swagger/ui, you'll see the name of your controller(s) – I see 'Values'. Clicking on that expands out a list of all the endpoints available, nicely colour coded based on what they do!

Initial setup of Swagger

So let’s customize it a bit. First, it would be good to give some more detail to anyone viewing your API.
We’ll add some basic details of the API, and who to contact. To do this, go back to your ConfigureServices method in Startup.cs, and add a call to ConfigureSwaggerGen on the services. In here, we want to create a new SingleApiVersion on the options, setting the desired values:
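Something like the following – the version, title, description, and contact values are of course whatever you want them to be (Info and Contact come from the Swashbuckle model namespace in the beta I'm using):

```csharp
services.ConfigureSwaggerGen(options =>
{
    // Describe a single version of the API, with contact details.
    options.SingleApiVersion(new Info
    {
        Version = "v1",
        Title = "My Web API",
        Description = "A sample Web API demonstrating Swashbuckle",
        Contact = new Contact
        {
            Name = "Ian Rufus",
            Email = "you@example.com" // placeholder
        }
    });
});
```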

As you can see, I've set a version, a title for the API and its description, as well as my contact details so people can shout at me for anything stupid I've done.

If you run it again and navigate to /swagger/ui, you’ll immediately see those details.

API details displayed in Swagger

The next thing we’ll want to do is add a bit more information to some of our endpoints. We can do that using XML Comments in our code.
First, on your project properties, under build, check the box for ‘XML documentation file’
Now Swashbuckle can make use of the XML comments, so let’s add some to our controller. I’m going to modify the post method, simply adding a few example return types:
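For example, on the default template's Post action (the response descriptions here are just illustrative):

```csharp
/// <summary>
/// Creates a new value.
/// </summary>
/// <param name="value">The value to create.</param>
/// <response code="200">Value created successfully</response>
/// <response code="400">Value was invalid</response>
[HttpPost]
public void Post([FromBody] string value)
{
}
```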

We now just need to make Swashbuckle include these comments, so we do that once again in Startup.cs, in our call to ConfigureSwaggerGen, add this to the options:
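Inside the ConfigureSwaggerGen lambda, something like this (the XML filename will match your project name – mine is a placeholder):

```csharp
// Point Swashbuckle at the XML documentation file the build produces.
var basePath = PlatformServices.Default.Application.ApplicationBasePath;
options.IncludeXmlComments(Path.Combine(basePath, "MyWebApi.xml"));
```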

And the using statement for PlatformServices:
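That is:

```csharp
using Microsoft.Extensions.PlatformAbstractions;
```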

The name of the XML file will depend on your project name, and can be found in your debug folder. This code simply finds the file, and passes the path through to the Swagger options.

If you run the code and once again navigate to Swagger and open your method, you'll see our response messages.

XML response codes displayed in Swagger


Another useful thing to do, when accepting a complex object as a parameter, is to have a sample model set up for swagger to use. If you’re adding validation to your model, it saves users worrying about it so much if they can just click the sample and have that sent through.

So first, we’ll need an example class set up, this is the one I created, imaginatively called ExampleModel:
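A sketch of it – the Email regex and the other property are my own choices:

```csharp
using System.ComponentModel.DataAnnotations;

public class ExampleModel
{
    public string Name { get; set; }

    // A simple email-shaped pattern, purely for demonstration.
    [RegularExpression(@"^[^@\s]+@[^@\s]+\.[^@\s]+$")]
    public string Email { get; set; }
}
```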

I’ve added a Regular Expression annotation to the email, just to give an example of when a sample model would be of use. Not having a sample model gives the value ‘string’ for all strings, which leaves your default value invalid. Adding a sample model avoids this.
So now we have a model, we need to create a SchemaFilter class. This class will implement SwaggerGen’s ISchemaFilter interface.
Implement the interface, and in the Apply method we want to check that the type matches our class, and if so, set the schema example to an instance of our class:
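Roughly like this – note the Apply signature has changed between Swashbuckle versions, so check the one your package exposes:

```csharp
public class ExampleModelSchemaFilter : ISchemaFilter
{
    public void Apply(Schema schema, SchemaFilterContext context)
    {
        // Only apply the example to our model type.
        if (context.SystemType == typeof(ExampleModel))
        {
            schema.Example = new ExampleModel
            {
                Name = "Ian Rufus",
                Email = "ian@example.com" // placeholder value
            };
        }
    }
}
```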

Now there are two ways of registering this. One is to add the filter as an attribute to your model class – whenever that model is used, your schema filter will apply. The other is to add it to the options when you configure SwaggerGen. For this example I'll be adding it to the model, so annotate your model class like so:
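That annotation looks like this (using the filter class sketched earlier):

```csharp
[SwaggerSchemaFilter(typeof(ExampleModelSchemaFilter))]
public class ExampleModel
{
    // ...properties as before...
}
```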

And you’ll need the Annotations using statement:
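In the beta I'm using that's:

```csharp
using Swashbuckle.SwaggerGen.Annotations;
```

(Later Swashbuckle.AspNetCore versions moved this to a different namespace, so check your package.)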

And finally modify your controller method to take in the model class as a parameter:
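For example, swapping the string parameter of the default Post action for the model:

```csharp
[HttpPost]
public void Post([FromBody] ExampleModel value)
{
}
```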

Now if you run the code, navigate to Swagger, and open that controller method, you’ll see your example model set out to the right hand side, and clicking in the box will add that model to the value text box, allowing you to post data without worrying about anything.

Example model in Swagger

Getting up and running with Swagger/Swashbuckle is as easy as that!
I’m still playing with this and learning more, as I’m sure there’s plenty of cool stuff I haven’t found yet, so will post an update if I find out anything useful.
And please feel free to let me know if you’ve come across anything I should know about!

Bubble Sort and Quick Sort with C#


One of the most common questions you get asked in technical interviews (as happened to me recently!) is to describe a sorting algorithm, so it’s good to have a couple up your sleeve.
In this post I’ll cover two of the most common sorting algorithms, bubble sort and quicksort, and their implementations in C# – and as usual, you can find all the source code on Github here.
I’ll try and cover some other sorting algorithms in a future post.

Bubble Sort

First up, let’s cover bubble sort. This is one of the most basic sorting algorithms.
Bubble Sort works by looping through the numbers, swapping each element with its neighbour whenever they're out of order (less than or greater than, depending on the order you want to sort by). Obviously, every time you swap two elements, there's a chance that the remaining elements are no longer in order. As such, we need two loops: an outer loop that keeps iterating while the elements aren't all considered ordered, and an inner loop that iterates over the elements, swapping as necessary. Every time we swap elements, we need to carry on this process; if no elements are re-ordered, we can break out of the loops to save time.

To do this, set up a boolean flag (I've called mine elementsLeftToSort), and have the outer loop continue only if that flag is true (as well as having values left to iterate). This is set to false on each iteration of the outer loop, and is only set to true if numbers are re-ordered – because as mentioned above, if ever we re-order numbers, we then need to check the entire sequence of numbers to see if subsequent changes are required.

This gives you an implementation that may look like this:
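Here's a sketch for an int array, sorting ascending; I've swapped the elements inline with a tuple so the snippet stands on its own, though a SwapElements helper works just as well:

```csharp
public static void BubbleSort(int[] values)
{
    var elementsLeftToSort = true;

    // Outer loop: keep passing over the array while swaps are still happening.
    for (var i = 0; i < values.Length && elementsLeftToSort; i++)
    {
        elementsLeftToSort = false;

        // Inner loop: swap any neighbouring pair that's out of order.
        for (var j = 0; j < values.Length - 1 - i; j++)
        {
            if (values[j] > values[j + 1])
            {
                (values[j], values[j + 1]) = (values[j + 1], values[j]);
                elementsLeftToSort = true;
            }
        }
    }
}
```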

The SwapElements method, which is also used for the Quicksort algorithm below, is just a simple method to swap the position in the array of the two indexes you pass in:
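A minimal version of that helper:

```csharp
// Swap the elements at the two given indexes in place.
private static void SwapElements(int[] values, int first, int second)
{
    var temp = values[first];
    values[first] = values[second];
    values[second] = temp;
}
```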

Given how this is structured, it's clear that this is not an optimal algorithm for larger datasets, despite it being one of the simplest to understand. The nested looping gives this a worst case complexity of O(n²) – and if more than a few numbers need sorting into place, you'll find that this is more or less the average as well. If you're unfamiliar with Big O Notation, you can find an overview on Wikipedia here.


Quicksort

Quicksort is a slightly more complex algorithm than bubble sort, but is still quite simple to implement.
Whereas bubble sort works by looping through elements and swapping them individually, quicksort works in a divide and conquer method, based on pivoting sections of the dataset around a value.
Quicksort does this recursively by finding a value to use as the pivot; this gives us two sections of the dataset around that value. The same process is then applied recursively to each section, until every section is considered sorted.
To do this you give the algorithm the array of values, and the low and high indexes of the array. The first step is then to partition the array – a value from the array is picked as the initial value for pivoting (I’m using the last value of the array). We then loop through values in the array, swapping elements until the pivot is in the correct location – all values lower in the array are less than the pivot, all higher are greater than. The position of this pivot element is then used to recursively sort the remaining sections – by using the location +/- 1 to get the pivot value for the remaining sections of the array.
We know when a section of the array has been sorted as we’ll end up with a pivot location less than/greater than the low/high index of the section of array being sorted, thus breaking the recursion on that section.
The resulting implementation may look something like this:
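A sketch of the approach described above, pivoting on the last value of each section; as with the bubble sort snippet, I've inlined the swaps so the block stands alone:

```csharp
public static void Quicksort(int[] values, int low, int high)
{
    // Recursion ends when the section has zero or one elements.
    if (low >= high) return;

    // Partition the section, then sort the two sides around the pivot.
    var pivotLocation = Partition(values, low, high);
    Quicksort(values, low, pivotLocation - 1);
    Quicksort(values, pivotLocation + 1, high);
}

private static int Partition(int[] values, int low, int high)
{
    // Use the last value of the section as the pivot.
    var pivot = values[high];
    var boundary = low - 1;

    // Move everything less than or equal to the pivot below the boundary.
    for (var j = low; j < high; j++)
    {
        if (values[j] <= pivot)
        {
            boundary++;
            (values[boundary], values[j]) = (values[j], values[boundary]);
        }
    }

    // Move the pivot into its final position and return its location.
    (values[boundary + 1], values[high]) = (values[high], values[boundary + 1]);
    return boundary + 1;
}
```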

The sample code I’ve uploaded on Github includes both implementations with a stop watch wrapped around each. An array of random numbers is generated and passed in to be sorted so you can see how long each algorithm takes for different sized arrays.
For example, when I run this locally, with an array of 1000 random integers, I get the following output:
Quicksort has taken 1 milliseconds to complete.
Bubble Sort has taken 8 milliseconds to complete.

While with an array of 10,000 random integers, I get:
Quicksort has taken 11 milliseconds to complete.
Bubble Sort has taken 752 milliseconds to complete.

This shows that while Bubble Sort can be equally effective in small arrays, it is (generally speaking) greatly outstripped by Quicksort on larger datasets.

Introduction to Inversion of Control in C# with Dry IoC


Recently I've been brushing up on some basic skills, and dependency injection with Inversion of Control is one of those things. Previously when doing this, I've stuck to the most commonly used IoC containers in .NET – Unity and Ninject.
However, after coming across this GitHub example I wanted to give Dry IoC a try given its performance benefits over most others.

All this code can be found on my GitHub account here.

The most basic use case of IoC is when you’re resolving an interface to a class implementation. Without IoC, you’d need to know the exact implementation of this interface whenever it’s needed, like so:
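For instance, with a trivial interface and implementation of my own invention:

```csharp
using System;

public interface IGreetingService
{
    void SayHello();
}

public class GreetingService : IGreetingService
{
    public void SayHello() => Console.WriteLine("Hello!");
}

public class Caller
{
    public void Run()
    {
        // Without IoC, the caller has to know the concrete type.
        IGreetingService service = new GreetingService();
        service.SayHello();
    }
}
```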

Obviously this isn't ideal as it keeps your code tightly coupled – you don't want Class A to know about the implementation details of Class B. Avoiding this is where Inversion of Control and Dependency Injection come in.

When using Dry IoC you can instead register the interface against the implementation when configuring the container:
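Reusing the example IGreetingService/GreetingService pair from above:

```csharp
// Configure the container once, mapping interface to implementation.
var container = new Container();
container.Register<IGreetingService, GreetingService>();
```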

Now you can then use that container to resolve the interface whenever required, like so:
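The caller then only ever sees the interface:

```csharp
// The concrete GreetingService type is never referenced here.
var service = container.Resolve<IGreetingService>();
service.SayHello();
```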

The same principle also applies when you’re registering a singleton, simply passing the Reuse.Singleton parameter when registering will ensure that the same instance is always returned:
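For example, with a small message service of my own design:

```csharp
public interface IMessageService
{
    void SetMessage(string message);
    void PrintMessage();
}

public class MessageService : IMessageService
{
    private string message;
    public void SetMessage(string message) => this.message = message;
    public void PrintMessage() => Console.WriteLine(message);
}

// Reuse.Singleton means every Resolve call returns the same instance.
container.Register<IMessageService, MessageService>(Reuse.Singleton);
```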

I’ve given this interface a method to change a string on the implementation, and a method to print that string out to the console. This allows you to see that changing the string on the first resolved interface is reflected on the second:
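Something like:

```csharp
var first = container.Resolve<IMessageService>();
first.SetMessage("Set on the first resolved instance");

// Because of Reuse.Singleton this is the same instance,
// so it prints the message set above.
var second = container.Resolve<IMessageService>();
second.PrintMessage();
```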

Obviously, this is the most basic usage you can find, so let’s start to dig a little deeper.
The next thing that might be needed is an implementation that requires a parameter in its constructor. There are two different ways of doing this, depending on whether the implementation requires a primitive type or a complex object.
For a primitive type you simply need to specify the parameter when registering the interface by using Made.Of to set up the factory method for the constructor:
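Sketched below with a hypothetical ParameterService taking a string; the Arg.Index pattern tells Dry IoC which constructor argument the supplied value maps to:

```csharp
public interface IParameterService
{
    void PrintParameter();
}

public class ParameterService : IParameterService
{
    private readonly string parameter;
    public ParameterService(string parameter) => this.parameter = parameter;
    public void PrintParameter() => Console.WriteLine(parameter);
}

// Made.Of sets up the factory method for the constructor,
// supplying the string for constructor argument 0.
container.Register<IParameterService, ParameterService>(
    made: Made.Of(
        () => new ParameterService(Arg.Index<string>(0)),
        request => "my constructor parameter"));
```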

While this looks quite complex at first glance, all it’s doing is specifying that when this interface is requested, the constructor should be invoked, and should take the given string as a parameter. This interface then gets resolved as normal when it’s required, and we can print out the parameter to see that this has worked:
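For example:

```csharp
var service = container.Resolve<IParameterService>();

// Prints whatever string was supplied at registration.
service.PrintParameter();
```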

Registering with a complex object to be passed in works in much the same way, however it requires that we register an instance of the object with the container:
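Along these lines, with a hypothetical TestObject:

```csharp
public class TestObject
{
    public string Name { get; set; }
    public int Value { get; set; }
}

// Register a specific instance under the key 'obj1'.
var testObject = new TestObject { Name = "Test", Value = 42 };
container.RegisterInstance(testObject, serviceKey: "obj1");
```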

The code above will register an instance of the TestObject class, giving it the key ‘obj1’ which can then be used to resolve it. Now that this object is available we can register the interface, specifying that this implementation of the object should be used in the constructor:
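Assuming an ObjectService whose constructor takes a TestObject, the keyed instance can be wired in via Arg.Of:

```csharp
// Arg.Of resolves the constructor argument using the given service key.
container.Register<IObjectService, ObjectService>(
    made: Made.Of(() => new ObjectService(Arg.Of<TestObject>("obj1"))));
```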

Again, the sample code on GitHub contains a method that will print out each field so you can see they have been populated. Registering an interface like this could be very useful, for example if you require some configuration settings to be applied that might need to change depending on whether you’re running in Debug or in Production.

This is all well and good, but for the most part when you’re using interfaces you’ll have multiple different implementations. In this case you need to be able to retrieve the specific implementation you want, or perhaps in some cases retrieve all registered implementations of the interface, so let’s take a look at those.

Registering multiple implementations to the container is easy – simply register them as normal, but pass a key when doing so that can later be used to retrieve it. The key is an object type – I’ve used an enum as an easy way to keep track of them:
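For instance, with two hypothetical implementations of an IMultiService interface:

```csharp
public enum ServiceKey
{
    First,
    Second
}

// Register each implementation against its own key.
container.Register<IMultiService, FirstService>(serviceKey: ServiceKey.First);
container.Register<IMultiService, SecondService>(serviceKey: ServiceKey.Second);
```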

Now they’ve been registered, they can be resolved easily enough again, just by specifying the key you registered them against:
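Like so:

```csharp
// Pass the key used at registration to get the specific implementation.
var first = container.Resolve<IMultiService>(ServiceKey.First);
var second = container.Resolve<IMultiService>(ServiceKey.Second);
```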

If you’re wanting to resolve all implementations of the interface you can do so into an IEnumerable by using ResolveMany instead of Resolve:
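For example:

```csharp
// ResolveMany returns every registered implementation of the interface.
IEnumerable<IMultiService> allImplementations =
    container.ResolveMany<IMultiService>();

foreach (var implementation in allImplementations)
{
    implementation.Print();
}
```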

Again, the sample code on GitHub includes Print methods on these implementations so that you can see the correct classes are being resolved.

I hope this has given a good basic introduction to Inversion of Control with C# and Dry IoC. In a future post I’ll expand on this and show how you can use IoC and Dependency Injection to help you follow the principles of TDD.

Link to the code on GitHub can be found here.