C# Unit Of Work Pattern with Dapper

UPDATE: The code here is based on an existing example on GitHub by Tim Schreiber; you can find the original code here.

At work I’ve started looking into using Dapper instead of Entity Framework. In our case, this is because of performance – we’re on EF 6, not Core (and Dapper is faster still than even EF Core).

I’ve found it really easy to get started with, given there’s very little that Dapper makes you do that you weren’t already doing if you were using SqlConnection. If you’re used to Entity Framework, you’ll potentially find more work is required to set everything up.

We wanted to get this working with the Unit of Work pattern, as our process involved updating half a dozen tables, and if something went wrong we wanted to roll back the changes and report the error.
As always, the code for this post can be found here.
Implementing this pattern comprises two parts – the unit of work implementation, and our repositories.

The first thing we’ll do is set up our repository classes. I’ll add two, but as they’re practically identical I’ll only cover one here. If you really want to see both, check out the source code.
I’ve also created entity classes for each of these, but as these are just POCOs there isn’t much point in covering them.
I’ve added a folder called ‘Repositories’ to keep these in, and in there I’ve added two new classes – EmployeeRepository and CustomerRepository.
Each of these has three methods – Add, Update and Remove.
We’ll pass in an IDbTransaction in the constructor of each repository, and from this we can get the connection which we’ll use to execute our SQL queries.
Add and Update take in an entity class representing a record in the table, and Remove simply takes in an int for the id column. Only Add returns anything: the newly generated Id of the entity.
All in, this is a simple class:
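Something along these lines – a sketch, assuming for illustration an Employee POCO with Id, FirstName and LastName properties and a matching Employees table:

```csharp
using System.Data;
using Dapper;

public class Employee
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class EmployeeRepository
{
    private readonly IDbTransaction _transaction;

    // The connection comes straight from the transaction we were given.
    private IDbConnection Connection => _transaction.Connection;

    public EmployeeRepository(IDbTransaction transaction)
    {
        _transaction = transaction;
    }

    public int Add(Employee employee)
    {
        // Insert the record, then return the newly generated identity as the Id.
        employee.Id = Connection.ExecuteScalar<int>(
            @"INSERT INTO Employees (FirstName, LastName)
              VALUES (@FirstName, @LastName);
              SELECT CAST(SCOPE_IDENTITY() AS INT);",
            employee,
            _transaction);

        return employee.Id;
    }

    public void Update(Employee employee)
    {
        Connection.Execute(
            "UPDATE Employees SET FirstName = @FirstName, LastName = @LastName WHERE Id = @Id",
            employee,
            _transaction);
    }

    public void Remove(int id)
    {
        Connection.Execute(
            "DELETE FROM Employees WHERE Id = @Id",
            new { Id = id },
            _transaction);
    }
}
```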

Since this is more of an example around the Unit of Work pattern, the SQL we’re using is really simple. The main thing to take note of here is the IDbTransaction being passed as a parameter to the Dapper Execute calls.
This does exactly what you’d expect, linking all changes to that transaction, so that if something goes wrong, we can rollback everything in that transaction rather than having to figure anything out for ourselves.
So let’s go ahead and make use of this by implementing the Unit of Work – I’ve done this in a class I’ve called DapperUnitOfWork.
This class will hold our repositories – instantiating each with the transaction if null – and will include a Commit method which will attempt to save our changes, rolling back if something goes wrong. I’ve also implemented IDisposable, and in the Dispose method I’m clearing off the IDbTransaction and IDbConnection.
Also worth noting is the private ResetRepositories method – this is needed because we instantiate our repository classes with an IDbTransaction. Once we’ve committed changes, we’ll be on a new transaction, so we’ll want to re-instantiate our repositories to reflect this.
Your class should look something like this:
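Here’s a sketch of mine – assuming the connection string is passed into the constructor, and showing only the EmployeeRepository (the CustomerRepository follows the same pattern):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public class DapperUnitOfWork : IDisposable
{
    private IDbConnection _connection;
    private IDbTransaction _transaction;
    private EmployeeRepository _employeeRepository;

    public DapperUnitOfWork(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    // Instantiate the repository with the current transaction if we don't already have one.
    public EmployeeRepository EmployeeRepository =>
        _employeeRepository ?? (_employeeRepository = new EmployeeRepository(_transaction));

    public void Commit()
    {
        try
        {
            _transaction.Commit();
        }
        catch
        {
            // Something went wrong - roll back everything in the transaction.
            _transaction.Rollback();
            throw;
        }
        finally
        {
            _transaction.Dispose();

            // We're on a new transaction now, so the repositories need re-instantiating.
            _transaction = _connection.BeginTransaction();
            ResetRepositories();
        }
    }

    private void ResetRepositories()
    {
        _employeeRepository = null;
    }

    public void Dispose()
    {
        _transaction?.Dispose();
        _transaction = null;
        _connection?.Dispose();
        _connection = null;
    }
}
```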

No surprises there, it’s easy to see what it’s doing, and it’s easy to use – simply create a new instance of the DapperUnitOfWork class in a using statement, and call the methods on your repository:
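For example, with connectionString assumed to be a valid SQL Server connection string:

```csharp
using (var unitOfWork = new DapperUnitOfWork(connectionString))
{
    var employee = new Employee { FirstName = "Jane", LastName = "Smith" };
    unitOfWork.EmployeeRepository.Add(employee);

    // Commit saves everything in the transaction, rolling back if anything fails.
    unitOfWork.Commit();
}
```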

Obviously this code could be improved in numerous ways – better error handling, further abstraction, etc.
But hopefully it’s clear enough to follow!
As always, get in touch if you think there’s a way to improve things 🙂
Or feel free to submit an issue/PR to the repo on GitHub here.

Reducing Consumed Request Units in DocumentDb with C# .NET

I’ve previously covered getting up and running with DocumentDb. Since then, one of the things I’ve been looking at is how to reduce the number of request units each write operation to the database consumes.
This post will be assuming you’ve got that code, as we’ll be building on it here.

First, it’s good to understand what throughput you’ve set your collection to have, and what this means in terms of performance as you scale.
Azure lets you configure a single-partition collection to have a throughput of between 400 and 10,000 request units (RUs) per second.
Every operation you perform on DocumentDb will consume some of these request units, the amount dependent on the size and complexity of what you’re doing.
Once you hit the limit of your throughput, DocumentDb will throttle requests to get you back under the limit. When this happens, you’ll receive an HTTP status 429 response, ‘RequestRateTooLarge’.
In this case, you need to read the ‘x-ms-retry-after-ms’ header, which will give you the time to wait before attempting the request again.
However, if you’re using the .NET client SDK then this will be handled for you most of the time, as the SDK will implicitly catch the response, respect the header, and retry the request.
One thing to be wary of is setting your throughput too high – Azure will charge you for RESERVED throughput. That means whatever you set the collection to is what you get charged for, regardless of whether you use it or not. I’ll cover scaling the database up in a future post, but unless you’re sure you’ll need it, I’d set the throughput to the minimum, especially if it’s just for testing.

Now, in order to find out if any changes we make will actually work, we’ll need to first find out how many request units we’re currently consuming.
For the purposes of this demo, I’m going to be writing the following class to the database:
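A sketch of that wrapper – the exact shape of the Customer class is an assumption for illustration, though it does need an Email property, as we’ll be indexing on it later:

```csharp
using System;
using Newtonsoft.Json;

public class CustomerEntry
{
    // DocumentDb expects a lowercase 'id' property on documents.
    [JsonProperty("id")]
    public Guid Id { get; set; }

    // Handy for any audit or debug issues.
    public DateTime TimeStamp { get; set; }

    public Customer Customer { get; set; }
}

public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }
}
```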

This is just so we have something that might be vaguely more realistic than me just dumping a load of randomly named properties into an object.
So what we’ll be writing to the database is a CustomerEntry object. I like to wrap my objects alongside the id that DocumentDb expects, and a TimeStamp for any audit or debug issues.

Now we’ve done this, we’ll take the method from my previous post, and modify it slightly so we can output to the console how many request units the operation consumed:
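Assuming the DocumentClient field (client) and the private _databaseName and _collectionName fields from the previous post, a sketch might look like this – the RequestCharge property on the response is the figure we’re after:

```csharp
private static async Task WriteDocument(object documentObject)
{
    var response = await client.CreateDocumentAsync(
        UriFactory.CreateDocumentCollectionUri(_databaseName, _collectionName),
        documentObject);

    // RequestCharge is the number of request units this operation consumed.
    Console.WriteLine($"Request units consumed: {response.RequestCharge}");
}
```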

The parameter ‘documentObject’ will be an instance of CustomerEntry. So if you now run the console app, you’ll see the number of request units output to the window – I’m seeing 19.62 units consumed.
So now we want to try and reduce this amount, which we’re going to do by changing our indexing policy for the DocumentCollection.
In DocumentDb, the indexing policy determines which parts of the object are queryable. By default, the entire object is indexed.
We’re going to change this to index only the one or two properties that I might be interested in querying on, ignoring the rest. This might not be suitable for your application, so think carefully about what you want to index based on your needs.

To do this, I’m going to delete the existing DocumentCollection, and update the method to create a DocumentCollection to apply the new indexing policy.
We’re going to set the ExcludedPaths to ignore the entire Customer object, and then in our IncludedPaths we’ll add the specific property we want. In this case, I’m assuming I’ll only be interested in the Email on the Customer (as that would be my unique property).
So with these changes, our method will now look like this:
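Roughly like this – the path strings use the standard DocumentDb index path syntax, where ‘/*’ matches everything beneath a node and ‘/?’ matches a scalar value:

```csharp
private static async Task CreateDocumentCollectionIfNotExists()
{
    // As before, only create the collection if it doesn't already exist.
    var collectionExists = client.CreateDocumentCollectionQuery(
            UriFactory.CreateDatabaseUri(_databaseName))
        .Where(c => c.Id == _collectionName)
        .AsEnumerable()
        .Any();

    if (collectionExists)
    {
        return;
    }

    var collection = new DocumentCollection
    {
        Id = _collectionName,
        IndexingPolicy = new IndexingPolicy
        {
            Automatic = true,
            IncludedPaths = new Collection<IncludedPath>
            {
                // Index everything by default - this keeps id and TimeStamp queryable...
                new IncludedPath { Path = "/*" },
                // ...and opt the Email property on Customer back in.
                new IncludedPath { Path = "/Customer/Email/?" }
            },
            ExcludedPaths = new Collection<ExcludedPath>
            {
                // ...but exclude the rest of the Customer object.
                new ExcludedPath { Path = "/Customer/*" }
            }
        }
    };

    await client.CreateDocumentCollectionAsync(
        UriFactory.CreateDatabaseUri(_databaseName),
        collection,
        new RequestOptions { OfferThroughput = 400 });
}
```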

So here we’re first querying to see if the Document Collection exists, as discussed in my previous post. Then we set up our Document Collection, setting the Indexing Policy to automatic.
Now, we need to configure our excluded and included paths. We set the Excluded Path to exclude the entire Customer object, and set the Included Path to include everything. This leaves us with the id and TimeStamp properties being indexed, which will be useful for future queries.
Next, we set another Included Path, this time specifying the Email property on the Customer object. So now, our TimeStamp, id, and Customer.Email properties are all indexed, and as such are all queryable.
The real test is the difference in the consumed request units per write operation – remember we were previously consuming 19.62 RUs when writing.
Ensure the collection has been deleted in the Azure Portal, then run the code to create the collection again, and write an object to the collection. When doing so, I now see 7.24 RUs being consumed.
We’ve more than halved the amount we consume!
This is just one way of improving performance with DocumentDb – I’ll continue to learn more and will share what I find.

As always, if you have any improvements or suggestions, feel free to let me know!

Getting Started with DocumentDb and C# .NET

One of the things I’ve been playing around with recently is an Azure hosted DocumentDb – coincidentally it’s also my first foray into NoSQL databases!
In this post I’ll be covering the basics of getting up and running with DocumentDb based on what I’ve encountered so far. To do this, we’re going to use a basic Console application, and through this we’ll create and read a new database, document collection, and documents themselves.

First, we need to make sure everything is set up in Azure. You’ll need an Azure account with some credit available for this – if you have an MSDN subscription, you get £40 free credit a month (or your local equivalent 🙂 ).
In the Azure portal, click the ‘+ New’ button, and in the pane that appears select ‘Databases’ and then ‘NoSQL (DocumentDb)’.
In the ‘New account’ pane that loads, give your account an ID, a new Resource Group name, and select the Location.
Once done, click Create, and wait for the database to be deployed.

While waiting for the deployment, open Visual Studio and create a blank Console Application. Open the NuGet package manager so we can add DocumentDb – searching ‘documentdb’ should find it as the top result, titled ‘Microsoft.Azure.DocumentDB’.
Now in your Program class, create two const strings. These will be used to hold the endpoint URI and the Primary Key of the DocumentDb account.

Now that the database has been deployed, we need to get the endpoint URI and the Primary Key for the account. To do this, navigate to your DocumentDb account and select ‘Keys’. From here copy the URI and Primary Key values into the strings in your console application.
We’ll also want to add an instance of DocumentClient to the class – this is the client used for all interaction with the database – so your class should look something like this:
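With placeholder values for the URI and key, something like:

```csharp
using System;
using Microsoft.Azure.Documents.Client;

public class Program
{
    // Replace these with the URI and Primary Key from the 'Keys' blade in the portal.
    private const string EndpointUri = "https://<your-account>.documents.azure.com:443/";
    private const string PrimaryKey = "<your primary key>";

    private static DocumentClient client;

    public static void Main(string[] args)
    {
        // We'll fill this in as we go.
    }
}
```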

Now, we need to create a new instance of DocumentClient using the endpoint and primary key:
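Along these lines:

```csharp
try
{
    client = new DocumentClient(new Uri(EndpointUri), PrimaryKey);
}
catch (Exception ex)
{
    // If the account can't be reached, write out what went wrong.
    Console.WriteLine($"Error connecting to the DocumentDb account: {ex.Message}");
}
```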

This will try to connect to your DocumentDb account, and write out the error if it fails.
Now we want to actually create a Database on our account, but we also need to account for the database already existing. The official docs tell you to attempt to read the database immediately in a try-catch block, creating the database in the catch block if the status code indicates it wasn’t found.
Personally, I hate this approach – I’m a firm believer that exception handling should never be used to control program flow.
Instead, for this example, we’ll query for the database first, and create the database if that doesn’t exist:
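Here’s a sketch of that approach, with the database name held in a private _databaseName field:

```csharp
private static async Task CreateDatabaseIfNotExists()
{
    // Query the account for a database with our chosen id.
    var databaseExists = client.CreateDatabaseQuery()
        .Where(db => db.Id == _databaseName)
        .AsEnumerable()
        .Any();

    if (!databaseExists)
    {
        await client.CreateDatabaseAsync(new Database { Id = _databaseName });
    }
}
```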

Here we’re querying for a database with an id matching the chosen name for your database – which I’ve stored in a private field. If one is found, we do nothing; otherwise we create a new database through the client, setting the name.

Now we have our database, we need a Document Collection for us to store our items. Again, the Microsoft article shows this done using a try-catch block. Instead, I’ve used another query, this time based on the collection name we want to use:
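A sketch, with the collection name held in a private _collectionName field:

```csharp
private static async Task CreateDocumentCollectionIfNotExists()
{
    // Query the database for a document collection with our chosen id.
    var collectionExists = client.CreateDocumentCollectionQuery(
            UriFactory.CreateDatabaseUri(_databaseName))
        .Where(c => c.Id == _collectionName)
        .AsEnumerable()
        .Any();

    if (!collectionExists)
    {
        var collection = new DocumentCollection { Id = _collectionName };

        // 400 RU/s is the lowest (and cheapest) throughput option.
        await client.CreateDocumentCollectionAsync(
            UriFactory.CreateDatabaseUri(_databaseName),
            collection,
            new RequestOptions { OfferThroughput = 400 });
    }
}
```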

Again, we query to see if a document collection with an id matching our given name already exists. If it does, we do nothing; otherwise we create a new DocumentCollection object, set the id to our chosen name (which I’ve also stored in a private field for future use), and pass that to the client alongside the URI for our database and a RequestOptions object. For the RequestOptions I’ve simply set the throughput to 400 for the collection – this is the lowest option available, and thus the cheapest.

Now we have our database and document collection, the last step is to write to our database. The method on the client accepts any object, as NoSQL doesn’t require what’s written to the database to adhere to any set schema.
So given an object ‘documentObject’ we can write to the database like this:
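A minimal sketch:

```csharp
private static async Task WriteDocument(object documentObject)
{
    await client.CreateDocumentAsync(
        UriFactory.CreateDocumentCollectionUri(_databaseName, _collectionName),
        documentObject);
}
```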

Reading from the database is achieved in a similar fashion:
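For example:

```csharp
private static async Task<Document> ReadDocument(string documentId)
{
    // Read the document by its id, and unwrap the resource from the response.
    var response = await client.ReadDocumentAsync(
        UriFactory.CreateDocumentUri(_databaseName, _collectionName, documentId));

    return response.Resource;
}
```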

The documentId parameter is the id of the document you’re wanting to retrieve. If you want to keep track of the id properties of documents yourself, you can set an ‘id’ property on the object you write to the database. Note that this must be lowercase, which threw me off when I first attempted it! Personally, I keep track of the id by creating a Guid for each object before writing to the database. Of course, there are other ways of querying for documents, which I may cover in the future.

But that’s all that’s needed to get up and running with the basics – you now have a database, a document collection, and can read and write documents to that collection! If anything is unclear, or you spot anything I could improve, please let me know!