Install local Redis instance with “Bash on Ubuntu on Windows”

In Windows 10, there is a feature that allows you to run a native Linux Bash shell. Pretty cool, and I’ve started using it to run a local Redis instance on my Windows dev machine.

Install Bash on Windows

To get this feature, you need the Windows 10 “Anniversary” update. You then need to turn on developer mode in Windows. To do this, go to Settings > Update & Security, select For developers on the left and select the Developer Mode radio button.

You then need to turn on the Bash feature in Windows. Go to Programs and Features in the Control Panel, select Turn Windows features on or off, then tick “Windows Subsystem for Linux”.

You’ll probably need a restart.

Install Redis

You should now be able to find “Bash on Ubuntu on Windows” (snappy name) from a search in your start menu. Open it.

The first thing we need to do is install Dotdeb – a repository which has a bunch of packages – including Redis – for use here.

To install Dotdeb:

You need to add a couple of lines to your APT sources list to allow APT to use Dotdeb. I did this using the nano text editor, with the following command:

sudo nano /etc/apt/sources.list

You will probably then be asked for your password. The next screen displays a list of sources. You need to add a couple of lines to the bottom:

deb http://packages.dotdeb.org squeeze all
deb-src http://packages.dotdeb.org squeeze all

Then press Ctrl + O followed by Enter to save in the same location, then Ctrl + X to exit nano.
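If you’d rather not use an editor, the same two lines can be appended straight from the shell – a sketch (point SOURCES at /etc/apt/sources.list and run with sudo on a real system; the repository URL is the one Dotdeb documented at the time):

```shell
# Append the Dotdeb repository lines to the given sources list.
# On a real system: SOURCES=/etc/apt/sources.list, run with sudo.
SOURCES="${SOURCES:-sources.list.test}"
echo "deb http://packages.dotdeb.org squeeze all" >> "$SOURCES"
echo "deb-src http://packages.dotdeb.org squeeze all" >> "$SOURCES"
```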

Then run the following two commands to fetch the Dotdeb GnuPG key and add it to APT:

wget http://www.dotdeb.org/dotdeb.gpg
sudo apt-key add dotdeb.gpg

Finally, you need to run the following to refresh the APT package lists:

sudo apt-get update

Once all this is done, you have the simple step of installing Redis itself:

sudo apt-get install redis-server

That should be you all set up.

Useful commands

Once installed you will, obviously, want to start up your local server. The command for this is:

sudo service redis-server start

You can check the server’s status with: sudo service redis-server status

You may also want to look at what’s stored on your server. To do this, enter the command redis-cli. Once in the CLI you can do stuff like:

info – what it says on the tin – info about your server instance: number of connections, memory usage etc.

keys * – lists all keys stored in the instance

get <myKey> – retrieves the object stored for the supplied key.
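For example, a quick session might look like this (the key and value shown are purely illustrative):

```
$ redis-cli
127.0.0.1:6379> set myKey "hello"
OK
127.0.0.1:6379> keys *
1) "myKey"
127.0.0.1:6379> get myKey
"hello"
```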


Note: most of the information for this came from another guide. I have provided a bit more instruction on setting up Redis, as I struggled a tad as a Linux noobie and figured I wouldn’t be the only one!

Testing Entity Framework context in-memory

I’m a great believer in having your tests as integrated and as “end-to-end” as possible, avoiding mocks on anything that does not cross a port or boundary.

One thing I have always found is that this becomes tricky when it comes to mocking out your data access, particularly with EF. You can quite easily abstract away your data access layer and that’s absolutely fine. However, I always found I lost something doing this, especially if a test encompasses more than one bit of database access – as you start dealing in fake data too often. Not to mention that if you do anything more funky than simple reads/writes in the layer you’re abstracting, then it isn’t going to be covered.

Entity Framework 7 / Core provides an in-memory version of your collections, which is great for this: so look into that if you’re on that version.

I am, however, still using EF6. Some third-party libraries exist to do this, like Effort, which works nicely. But I like to have a wee bit more control over the mocked instance, so I have started creating my own library for mocking your context.


I found I like to deal with the context in the same way I would any other mocked service or object, so I decided to create variations of the library for various popular mocking frameworks – to date, just Rhino.Mocks and Moq. Let’s say we are using MockEF.Rhino. The library will use Rhino to create its mock of the context: an in-memory collection version of the context is created by Rhino’s MockRepository, and thus any functions you like to use from Rhino still work. This also has the added benefit of being able to mock anything that exists on your context’s interface – so if you have a bunch of methods you’ve created, these are just mocked in the same way you always would.

Getting Started

To set up your context mock, simply call:

var context = new ContextBuilder<IMyContext>()
      .Setup(x => x.Authors, new List<Author> { new Author { Id = 1, Name = "bob" } })
      .Setup(x => x.Books);

The type argument of ContextBuilder must be an interface. Then, to use this in your code, you can set up a context factory, like so:

public interface IFactory
{
    IMyContext Create();
}

public class Factory : IFactory
{
    public IMyContext Create()
    {
        return new MyContext();
    }
}

// Class that needs to use the context.
public class Example
{
    private readonly IFactory _factory;

    public Example(IFactory factory)
    {
        _factory = factory;
    }

    public bool MethodThatUsesContext()
    {
        // Important: your context interface MUST implement IDisposable.
        // Firstly because this won't work otherwise, secondly because - you should anyway.
        using (var context = _factory.Create())
        {
            // Use context here.
        }
        return true;
    }
}

// When testing you can then stub your factory to return the context you have set up, like so:
var factory = MockRepository.GenerateMock<IFactory>();
factory.Stub(x => x.Create()).Return(myMockedContext); // context built using Rhino.MockEF.
var result = new Example(factory).MethodThatUsesContext();

// When running the code outside of tests, in your DI registrations you can use something like:
container.Register<IFactory, Factory>();

Current Limitations

Currently, any call to .Find(..) will only look for fields with the [PrimaryKey] attribute – if the key is set up as something else, or set up elsewhere, it’s likely to fail.

.SaveChanges() and .SaveChangesAsync() do pretty much nothing: any adds or updates are applied to the in-memory collection immediately, which is not the same behaviour as the real world. A good example of this is that the Attach(..) and Add(..) methods do the same thing.

To be Done

Plan to add more mocking frameworks.

Extension methods on libraries. Rather than manually creating a ContextBuilder – hide the methods behind an extension applied to the mocking framework’s library.

Test support for versions < EF6



Keep an Eye on Your Memory Consumption When using Akka.Net

I was recently working on a system that essentially involved using Akka to repeatedly call an API over HTTP, transform the data and store the result. Pretty simple stuff. Using Akka to do this seemed ideal, as some of the data transforms were a little complex and I was dealing with a lot of data and a huge number of requests. I had set up the system to start its process every 5 minutes, using the built-in scheduler – like so:

system.Scheduler.ScheduleTellRepeatedly(TimeSpan.Zero, TimeSpan.FromMinutes(5), someActor, someMessage, ActorRefs.NoSender);

What I did wrong

Now, to keep it simple, let’s say I had three actor types: CollectDataActor, TransformDataActor and StoreDataActor. The process simply involved the CollectDataActor calling the API, then telling the TransformDataActor to do its thing, which in turn tells the StoreDataActor to, well, save the data.

What I was doing to achieve this was calling:

var actor = Context.ActorOf(Props.Create<CollectDataActor>());

This would then collect the data, which came back as a List<> of items, and essentially create a TransformDataActor for every item in the list and tell it to process that item.

Why this is a problem

As I mentioned at the top of the post, I was scheduling this process to run every 5 minutes. This meant that every single time the process was scheduled to run new actors were created at each stage, a new actor to collect, a huge amount to transform and subsequently store. This resulted in memory consumption just increasing and increasing over time. Not good.


At first my solution was to kill all the actors after I was done with them. However, this became hard to manage as the system grew, and it didn’t work at all well for re-sending dead messages – it was a hack, to be honest.

The solution I used was to create all of the actors I’d need before the run starts. I knew I’d need only one CollectDataActor per process run, a shed load of TransformDataActors and the same number of SaveDataActors (one for each data item I transform).

What I did was create a router with a round-robin pool of TransformDataActors and SaveDataActors. This is a feature of Akka.Net that keeps a pool of any given actor type; when you message the router, it sends to the actors in the pool in turn (there is also a SmallestMailboxPool, which picks the actor with the fewest messages in its mailbox). Then, rather than creating an actor each time from the CollectDataActor, I can just select the router by its path and send to it.


//setup code.
var transformProps = Props.Create<TransformDataActor>().WithRouter(new RoundRobinPool(50));
var transformRouter = system.ActorOf(transformProps, "transformDataRouter");

var saveProps = Props.Create<SaveDataActor>().WithRouter(new RoundRobinPool(50));
var saveRouter = system.ActorOf(saveProps, "saveDataRouter");

With these set up, the CollectDataActor code can look a little something like:

public class CollectDataActor : ReceiveActor
{
    public CollectDataActor()
    {
        Receive<object>(message => Handle(message));
    }

    private void Handle(object message)
    {
        List<TypeMcType> data = CollectData();

        foreach (var item in data)
        {
            //NOTE: here we use "ActorSelection" to look up the existing router -
            //we do not create a new actor - this is the key difference!
            var actorPath = "akka://path/to/your/system/transformDataRouter";
            var actor = Context.ActorSelection(actorPath);
            actor.Tell(item);
        }
    }
}

Authenticate to Mongo Database that isn’t “admin”

If you have set up your Mongo instance by adding a user to the “admin” database to authenticate against, you may have run into some confusion about how you connect and authenticate against another database on that server.

Let’s say you are trying to use the “local” database… If you are using the command line, you need to add the parameter --authenticationDatabase. So your connection would look something like:

mongo --username martin.milsom --authenticationDatabase admin -p myPassword

The extra parameter lets Mongo know where the user is to authenticate against.

Now, if you are using the C# driver for this, as I was, the answer is to include the equivalent option in the connection URL. In the connection string this option is called authSource, for example:

var connectionString = "mongodb://martin.milsom:myPassword@localhost:27017/local?authSource=admin";
var database = new MongoClient(connectionString).GetDatabase("local");

Note here, the URL parameter "?authSource=admin" is how you tell the driver which database to authenticate against (the host and port shown are illustrative).

A Few Extra AutoMapper Features

AutoMapper is a really handy tool used to map properties from one object to another – very useful in situations such as mapping your domain objects into simplified POCO classes you would expose from an API. For more, visit their site. Having played with it a bit, I have stumbled across a couple of cool little features you may not know about.


Flattening

Flattening can be useful if the structures of your two objects do not match. Say, for example, you have the classes:

    public class Address
    {
        public string Street { get; set; }
        public string City { get; set; }
        public string PostCode { get; set; }
    }

    public class Phone
    {
        public string Home { get; set; }
        public string Mobile { get; set; }
    }

    public class Contact
    {
        public string Name { get; set; }
        public Address Address { get; set; }
        public Phone Phone { get; set; }
    }

    public class FlattenedContact
    {
        public string Name { get; set; }
        public string AddressStreet { get; set; }
        public string AddressCity { get; set; }
        public string AddressPostCode { get; set; }
        public string Home { get; set; }
        public string Mobile { get; set; }
    }

You’ll notice that the property naming convention on ‘FlattenedContact’ is inconsistent. This is on purpose, to show you how AutoMapper handles things differently.

The simplest way to map, is just use

Mapper.CreateMap<Contact, FlattenedContact>();

And then call

var contact = Mapper.Map<FlattenedContact>(myContact);

Except it’s not quite that simple. AutoMapper, in this instance, will not match the properties coming from the ‘Phone’ class, because the names of the properties they correspond to on the flattened object (‘Home’ & ‘Mobile’) are not prefixed with ‘Phone’ (the name of the source property). The ‘Address’ properties, on the other hand, will be matched, because they are called things such as ‘AddressCity’ etc. Makes sense? If you do not wish to rename your properties, you can tell AutoMapper explicitly:

            Mapper.CreateMap<Contact, FlattenedContact>()
                .ForMember(dest => dest.Mobile, opt=> opt.MapFrom(src=> src.Phone.Mobile))
                .ForMember(dest => dest.Home, opt=> opt.MapFrom(src=> src.Phone.Home));

Using this mapper instead of the simple previous one gives AutoMapper the information it needs, so all will be fine and dandy!
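To make the convention-versus-explicit-rule distinction concrete, here is a quick sketch using the classes above (the values in the comments are illustrative of how the configured map behaves):

```csharp
Mapper.CreateMap<Contact, FlattenedContact>()
    .ForMember(dest => dest.Mobile, opt => opt.MapFrom(src => src.Phone.Mobile))
    .ForMember(dest => dest.Home, opt => opt.MapFrom(src => src.Phone.Home));

var source = new Contact
{
    Name = "Bob",
    Address = new Address { Street = "1 High Street", City = "Leeds", PostCode = "LS1 1AA" },
    Phone = new Phone { Home = "0113 496 0000", Mobile = "07700 900000" }
};

var flattened = Mapper.Map<FlattenedContact>(source);
// flattened.AddressCity -> "Leeds"        (matched by the Address* naming convention)
// flattened.Mobile      -> "07700 900000" (matched by the explicit ForMember rule)
```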


Unflattening

As the heading may suggest, this is the opposite of what we achieved before. We will stick with the same example classes – imagine this time we want to convert from ‘FlattenedContact’ to ‘Contact’. To do this, you need to set up a simple map for each custom type within the complex object, plus rules to ensure AutoMapper applies them when mapping that object. Put simply, you create a map from ‘FlattenedContact’ to both ‘Address’ and ‘Phone’. It will look a little like this:

 Mapper.CreateMap<FlattenedContact, Address>();
 Mapper.CreateMap<FlattenedContact, Phone>();
 Mapper.CreateMap<FlattenedContact, Contact>()
      .ForMember(dest=> dest.Phone, opt=> opt.MapFrom(src=>src))
      .ForMember(dest=> dest.Address, opt=> opt.MapFrom(src=> src));

Then simply call:

var contact = Mapper.Map<Contact>(myFlattenedContact);

As with flattening, this only works by convention. In this case only the Phone class is mapped correctly; the Address properties are not, because AutoMapper does not recognise the property name “AddressCity” as a match for just “City”. You will have to do some explicit member mapping for this. One option would be to map them all individually:

 .ForMember(dest => dest.Address.City, opt => opt.MapFrom(src => src.AddressCity))
 .ForMember(dest => dest.Address.PostCode, opt => opt.MapFrom(src => src.AddressPostCode))
 .ForMember(dest => dest.Address.Street, opt => opt.MapFrom(src => src.AddressStreet));

Alternatively, you can tell AutoMapper to recognise a specified prefix on property names, so the convention matching strips it before comparing (a sketch using AutoMapper’s RecognizePrefixes configuration):

Mapper.Configuration.RecognizePrefixes("Address");
Mapper.CreateMap<FlattenedContact, Address>();
Mapper.CreateMap<FlattenedContact, Phone>();
Mapper.CreateMap<FlattenedContact, Contact>()
    .ForMember(dest => dest.Phone, opt => opt.MapFrom(src => src))
    .ForMember(dest => dest.Address, opt => opt.MapFrom(src => src));

Reverse Map

This is a nice and simple one. If you’re mapping from one object to another, you can call

Mapper.CreateMap<Source, Destination>().ReverseMap();

Instead of

Mapper.CreateMap<Source, Destination>();
Mapper.CreateMap<Destination, Source>();

Creating Your Own Rules

Sometimes you may want to do something more complicated during your map, such as converting a property’s type. This can be done by setting up a rule. To make this happen, you need to create a class that inherits from AutoMapper’s ‘TypeConverter’ class:

 public class CommaDelimitedStringConverter : TypeConverter<string, IEnumerable<string>>
 {
     //This method is a v.simple one to convert from a comma-
     //delimited string and output it as a list of strings.
     protected override IEnumerable<string> ConvertCore(string source)
     {
         return source.Split(',').ToList();
     }
 }

and then register this rule using:

 Mapper.CreateMap<string, IEnumerable<string>>().ConvertUsing(new CommaDelimitedStringConverter());

Alternatively you can just pass in a Func, for example:

Mapper.CreateMap<string, int>().ConvertUsing(Convert.ToInt32);

EDIT: As of version 4.2 of AutoMapper, registrations are no longer static and look a little more like:

 Mapper.Initialize(cfg =>
 {
     cfg.CreateMap<FlattenedContact, Contact>();
     cfg.CreateMap<MyType, MyOtherType>();
 });

Happy Mapping!

An Educated Guess is Still a Guess

In the real world…

Two weeks ago I moved house. I, obviously, needed to pack up my room. To do this task I had estimated both the time it would take and the amount of transportable storage I would need. I decided I could perform the task over three evenings, first packing the electrical devices, then some odd bits and bobs I had lying around and lastly, I would pack clothes. I had three big cardboard boxes, two large suitcases as well as a couple of rucksacks.

The good news was, I fit all of my stuff into the storage I had. Admittedly with some squashing of suitcases and some overflowing boxes – but I did it!!

Unfortunately, the time I allotted myself was nowhere near enough and I ended up packing until the early hours of the morning before I moved – but, hey, we all do overtime every now and then.

Even more unfortunate for me was that this move did not work out from the get go, and I’m now sat here surrounded by half-packed stuff – ready to move again. Joy.

The Educated Guess

Now that I’m packing again, I felt this time I knew exactly how long it would take and how much storage I would need. I had exactly the same amount of stuff as before and knew where everything was. Easy!

Sadly, incorrect! The packing took even longer. I guess the stress of the double move had a negative effect on my motivation for the task (I guess that’s why I’m sat writing this and not packing!). Also, the boxes and suitcases I had did not suffice, and I had to acquire another large box and another bag. How annoying!

So, what am I getting at?

In the software world most of us estimate and size our work items before doing them; usually this leads to us committing to complete a task in that time. If we relate this way of working back to my real-world moving example…

On the first move the perception would be that I “succeeded”. All of the stuff that needed to be moved, appeared to be moved. Awesome. Except:

  • I crammed so much stuff into my suitcase that it broke the zip when opening it. Doh!
  • My boxes were so full that stuff fell out when carrying them – and I still can’t find my computer mouse…
  • I stayed up so late that move day was a real slog.

So, from this I would like to draw your attention to the dangers of doing anything (and at any cost) to stick to your estimate. That missing computer mouse could be a very damaging bug you overlooked. Not only that, I “worked overtime” to get the job done. This caused a huge detriment to my ability the following day – if moving was my day job I certainly wouldn’t be able to keep this up every week.

Second time around is worse: I’ve already underestimated both the time and the storage I need. What’s the reason for this? Simple: this task, despite appearances, is not the same as the last one. First, I have changed – let’s face it, moving twice in a fortnight is no small amount of stress, and no matter the task, motivation is key. Secondly, the task itself is different. Okay, I’m moving the same stuff, but everything has a different place to be than before. This is the same when writing software: no two tasks are the same. Sure, you can draw similarities, but ultimately you’re just guessing again – and just because one task is similar to another, you can still be wildly far off your estimate.

Is Estimation Really Worthwhile?

Getting better at estimating work as a programmer is a skill that most seem to seek out, and there are some great tips out there on how to produce more accurate estimates.

But how much time must we spend estimating tasks? We can all sit researching the code base and assessing the backlog items until we’re blue in the face, but ultimately you will never know how long it will take until it’s done. Spending a large amount of your time preparing to give an estimate seems a lot like waste, to me.



So why do we get hung up on it? A lot of us run a Scrum process, and at the end of our sprints we’re either all very happy or very depressed. Our mood is dependent on whether we finished the work that we, ourselves, estimated we could do before we started the sprint. If we do everything on time – we estimated well. If we have stuff left to do – we estimated badly. So why would we want to feel depressed after a two-week slog of producing (I hope) great work, because of a speculative call we made at the beginning of the process? I think it is a much better thing to judge our work on how good it is – simple. But to do this we need to stop putting so much importance on our estimates – or at the very least prevent them from becoming a binding contract between ourselves and the business. After all, a certain agile principle does state:

“Working software is the primary measure of progress.”


Another way

I’ve been reading a fair bit recently on the idea of NoEstimates, which I believe can be attributed to Neil Killick – check out #NoEstimates for a whole lot of discussion on this.

I tend to hear a lot that estimates are required because otherwise we will not know whether we can deliver on time. This is fair: some people have clients and customers to keep happy. However, I have previously worked in a team that did not estimate their work; we ran a Cranked process. In this process, we did not fix the time of our iterations but the scope of them. By doing this, we could achieve some forecasting of work without once having to guess how long a piece of work would take. All we did was take the typical range of time our iterations took (removing outliers) and multiply by the number of work items a feature would require. Simple.

For example, say we needed to implement a login feature. This feature would require creating the back-end code, some form of UI, and making sure we restrict access to the application for non-logged-in users. That’s three pieces of work to me. The team’s iterations typically take 2-3 days, so that equates to 6 to 9 days (3 × [2 to 3 days]). Okay, it’s a simple example, but the idea works. You keep your work items small and focussed, and ultimately you don’t have to waste any more time estimating.

The beauty of removing this concern from us programmers and techy folk is that we can just concentrate on making the product beautiful, without spending our time or our emotional well-being on the concern of sticking to our estimates.

And this is just one thing you can do to help remove the waste in estimating – there is a whole bunch of material out there 🙂

Put simply

Let’s start making the quality of our code and the quality of the products we create our primary concern, not our estimations!!



End-to-end Integration tests for Web Api

Recently I was looking at the NancyFx framework. What I found I liked most about the whole thing was the Nancy.Testing library. It allows you to not just test the code inside your API, but actually make the request to the URL, with any headers, body or whatever along with it. This means you are testing everything: you can ensure you are sending back the right status codes and content types, and that you are handling authorization correctly.

To do this in Nancy, the code looks something like:


public void GetEmployeesExpectOK()
{
    var bootstrap = new ConfigurableBootstrapper(c => c.Module<EmployeeModule>());
    var browser = new Browser(bootstrap);

    BrowserResponse response = browser.Get("/Employees/", with =>
        with.Header("Accept", "application/json"));

    Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
}


As you can see here, you create a “Browser” and use it to make a request at the specified route. Pretty cool. This is particularly useful for adding automated acceptance tests to your application, as it allows us to test all the way from an API request, down to the data storage and back out again. However, we were not using NancyFx for our API – we were using ASP.Net Web API – so we needed to replicate this way of testing.

Happily, a library exists which allows you to implement tests this way for Web API, called WebApi.Testing. The code is very similar, though there are some flaws.


public void GetEmployeesExpectOK()
{
    var config = new HttpConfiguration();
    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional });

    var browser = new Browser(config);
    var response = browser.Get("/Employees/", (with) =>
        with.Header("Accept", "application/json"));

    Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
}

What I don’t like about this is that we have to re-define the routes of the API project, so we’re not really testing the real routes from our application, which is a shame. The solution we found was to edit the WebApiConfig in the API project, so we can get the routes from our test, like so:

public static void Register(HttpConfiguration config)
{
    // Setup Web API configuration routes.
    config = GetConfig(config);
}

//This is added so we can return the HttpConfiguration.
public static HttpConfiguration GetConfig(HttpConfiguration config)
{
    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional });

    return config;
}

Then in the test class:

public void GetEmployeesExpectOK()
{
    var config = ApiProject.WebApiConfig.GetConfig(new HttpConfiguration()); //Get the routes from the API project.
    var browser = new Browser(config);
    var response = browser.Get("/Employees/", (with) =>
        with.Header("Accept", "application/json"));

    Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
}

As you can see here, we just get the route configuration from the application and use it in the test – and now we are testing the routes too.

Writing a Prezi clone with HTML5, SVG, and Javascript

Aditya Bhatt

The idea

The other day, I took it upon myself to write a browser-based Prezi-like app without using Flash. Condition: it should run purely client-side.

It was also an experiment in evaluating whether the current APIs available in modern browsers are enough to handle the task. What follows is an account of what works, what doesn’t, and what could be done better.

SVG vs Canvas

When you’re building a rich graphics-intensive app like Prezi, you usually have two ways of rendering content: SVG and Canvas.

1. SVG provides a neat DOM that can be manipulated with existing DOM handling javascript libraries, such as jQuery. Canvas, on the other hand, is just a bitmap buffer. This means that you have to program your own DOM-like scene graph if you wish to use Canvas for handling presentation elements. Libraries for this already exist – most notably, fabric.js, but none…


The SpecFlow Chronicles – Volume 1 : Turning Tests Into Living Documentation

Tester Vs Computer

As I mentioned in last week’s post, part of my focus is to help my team increase its capabilities in the area of test automation.  We would like to increase the scope of our automation and, in particular, we would like to increase the level of collaboration between developers and black-box QA in implementing and maintaining the tests.

Some of the challenges that we’re currently facing:

  • Many black box testers lack the automation experience needed to automate the tests that they want to automate
  • Developers don’t know what the testers want to automate
  • The tests that are currently being run are not transparent to the business side
  • Existing test automation is not being maintained or exploited because it’s only well-understood by those that implemented it

Over the past few months, my team has been playing with an open-source tool called SpecFlow as we try to address these challenges.

