Knockout.js – Bindings

Last time I introduced the basics of the knockout.js library. We looked at how you can declare a very simple model and then bind it to a UI using ‘data-bind’ attributes. I alluded to the fact that there are more bindings that come with knockout.js.

Simply put, a binding ties a model property (or method) to the HTML DOM. Some bindings are one-way and some are (or at least can be) two-way. In the last post we used the ‘text’ and ‘click’ bindings. ‘text’ is a one-way binding from model to view, but it can be a live one: if you initialize the bound model property using ko.observable(), the DOM updates whenever the property changes. If the bound property is initialized as a primitive javascript type (i.e. a string or number), the binding is applied once and won’t track later changes.
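
Here’s a quick sketch of the difference (the property names are just illustrations):

var liveModel = { name: ko.observable("Alice") };
// <span data-bind="text: name"></span> re-renders whenever we do this:
liveModel.name("Bob");

var staticModel = { name: "Alice" };
// The same 'text' binding renders "Alice" once and never updates.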

The real power of bindings comes into play when you start binding more than just text fields. Using bindings you can attach the methods (behavior) on your javascript objects to events on DOM elements, such as ‘click’ (or any other DOM event via the ‘event’ binding). With knockout bindings you can remove a lot of the jQuery code that hooks up events and simply mark up your HTML with a few data-bind="" attributes.
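
For example, instead of hooking the event in jQuery code, the handler is declared right in the markup (the save method and button id here are illustrative):

// The jQuery way: find the element and wire the handler in code.
$("#saveButton").click(function () { model.save(); });

// The knockout way: declare it in the markup and let ko.applyBindings() wire it.
// <button data-bind="click: save">Save</button>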

Let’s look at a small example. We’ll start with the model again:

var model = {
    firstName: ko.observable(""),
    lastName: ko.observable(""),
    submit: function() {
        var shouldSubmit = confirm("Are you really sure?");
        return shouldSubmit;
    }
}

model.isValid = ko.dependentObservable(function() {
    return this.firstName().length > 0 && this.lastName().length > 0;
}, model);

ko.applyBindings(model);

As you can see we have a couple of simple properties (firstName, lastName), a submit() method, and a dependent observable, isValid(). Let’s look at the HTML we’ll bind this to and then look at things more closely.

<form data-bind="submit: submit">
    First Name: <input type="text" data-bind="value: firstName" />
    Last Name: <input type="text" data-bind="value: lastName" />
    <input type="submit" data-bind="enable: isValid" />
</form>

Using bindings we’ve bound the two name properties to the text boxes. You can also see that we disable the Submit button if either the first name or last name field is empty. The full example also displays the user’s full name as soon as you tab out of either text field (that part is in the fiddle linked below; a sketch follows). One trick here is that when you reference a ko.observable() field on your object and you want the underlying value, you call the field as a function (as we did in the isValid() method).
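
The snippet above doesn’t include the full-name piece; it looks something like this (a sketch — the fullName property name is mine):

model.fullName = ko.dependentObservable(function() {
    return this.firstName() + " " + this.lastName();
}, model);

Full Name: <span data-bind="text: fullName"></span>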

If you are interested you can play with this example here: http://jsfiddle.net/tWM8r/2/

You can find out more about Knockout’s built-in bindings on the website here: http://knockoutjs.com/documentation/introduction.html

Knockout’s bindings will allow you to do most of what you need to do, and when you’re missing something, you can extend Knockout by writing your own custom bindings (something akin to writing a jQuery plugin). We’ll get into custom bindings in a future post.

That’s all for now. Next time we’ll look at Knockout’s template binding feature, which is very powerful and useful.

ASP.NET MVC – SortDirection model binder

I’ve recently been working on an ASP.NET MVC 3 project. Yesterday I started working with the WebGrid that comes built-in. One thing that didn’t sit right with me was that the WebGrid posts the sort direction to your controller as either “ASC” or “DESC”, yet in the API when setting up the WebGrid in a view (we’re using Razor) you specify the sort direction using the enum System.Web.UI.WebControls.SortDirection.

I initially tried simply changing the sort direction parameter to be of type SortDirection. This doesn’t work, though, because the values posted by the WebGrid aren’t string equivalents of the enum values (i.e. “Ascending” and “Descending”). So, to fix this problem, I decided to build a custom model binder. Model binders in ASP.NET MVC are the pieces that map the data provided by the client (browser) to your controller action parameters.

So here’s the model binder that I came up with.

using System.Web.Mvc;
using System.Web.UI.WebControls;

public class SortDirectionModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        if (string.IsNullOrEmpty(bindingContext.ModelName))
        {
            return null;
        }

        var valueResult = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);

        if (valueResult == null)
        {
            return null;
        }

        if (string.IsNullOrWhiteSpace(valueResult.AttemptedValue) || valueResult.AttemptedValue == "ASC")
        {
            return SortDirection.Ascending;
        }
        else
        {
            return SortDirection.Descending;
        }
    }
}

In the model binder we simply grab the posted value and determine if we need to return SortDirection.Ascending or SortDirection.Descending. Once we have the model binder written, we need to let ASP.NET MVC know about it by adding it to the ModelBinders.Binders dictionary. I put this code in the Global.asax file like this:

using System.Web;
using System.Web.Mvc;

public class MyApplication : HttpApplication
{
    protected void Application_Start()
    {
        ModelBinders.Binders[typeof(System.Web.UI.WebControls.SortDirection)] = new SortDirectionModelBinder();
    }
}
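
With the binder registered, a controller action can take the enum directly. Here’s a sketch (the action body and repository are illustrative; “sort” and “sortdir” are the parameter names the WebGrid posts by default):

public ActionResult Index(string sort, SortDirection sortdir)
{
    // The model binder turns "ASC"/"DESC" into the enum for us.
    var customers = _customerRepository.GetSorted(sort, sortdir);
    return View(customers);
}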

Knockout.js – cleaning up the client-side

I’ve started a new project with a client and it’s quite a shift for me. Coming off a three-year, heavy-client (WinForms) project, I’m now working in ASP.NET MVC and javascript. One of my goals with regards to the architecture was to keep a handle on the javascript. The project is integrating with the ESRI ArcGIS Javascript API, so there will be lots of client-side javascript and it will be important to keep it organized and maintainable.

In looking around for javascript libraries to help with this I found Knockout.js. This is a library for building larger-scale web applications that use javascript more heavily.

Knockout works by extending the properties of your javascript models (basically a POJO… Plain Old Javascript Object). You can then bind your model to the view (HTML elements) by marking up the HTML elements with knockout bindings. Basically a knockout binding is a way to tie a property on your model to a specific attribute of an HTML element, and quite a few come built into Knockout (text, visible, css, click, etc; the full list is in the Knockout documentation).

Once you’ve got the view marked up you simply call

ko.applyBindings(model);

and Knockout takes it the rest of the way.

Let’s walk through a simple example as that’ll make things clearer (I’m going to use the same Login UI example Derick Bailey used for his first Backbone.js post so it’s a bit easier to compare apples to apples… hopefully).

The layout is very similar to a Backbone view except that in a Knockout view you add extra ‘data-bind’ attributes.
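
Here’s a minimal sketch of the kind of markup I mean (the id and field names are illustrative):

<form id="login-form">
    Username: <input type="text" data-bind="value: username" />
    Password: <input type="password" data-bind="value: password" />
    <button data-bind="click: login">Login</button>
</form>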

Notice the ‘data-bind’ attributes in the markup. These dictate how knockout will bind the model we provide to the UI. Let’s move on to the model now.
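
A matching model might look something like this (again, a sketch):

var model = {
    username: ko.observable(""),
    password: ko.observable(""),
    login: function() {
        // submit the credentials (left out for brevity)
    }
};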

This is very close to being a simple javascript object. The only trick is that any property that we want to bind to the UI should be initialized via the ko.observable() method. The value we pass to the ko.observable() will become the property’s underlying value.

Once we’ve instantiated a model object we simply call ko.applyBindings() and Knockout does the rest. (Note that the second parameter is the HTML element that should be the root of the binding. It is optional if you are only binding one object to your page, but I think in almost any application you’ll get to a point where you want to bind multiple objects so this shows how to do that.)
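
For example (reusing the illustrative id from the markup sketch above):

ko.applyBindings(model, document.getElementById("login-form"));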

At this point Knockout will keep the UI and model in sync as we manipulate it. This is full 2-way binding for the text elements. So if the user types in the username field, the model’s username property is updated. If we update the model’s username property the UI will be updated as well.

Derick noted in his blog post that he was mixing jQuery and Backbone.js. I’m doing some of that here, but I think Knockout.js needs jQuery less because of how the binding is declared (on the HTML elements themselves).

I’m going to try to follow Derick’s excellent series of posts on Backbone.js and comment on how the subjects he discusses compare and contrast with Knockout.js.

In the end I think both are very capable libraries, even if they come at the problem from slightly different angles. It’s all about better organization of javascript and better assignment of responsibilities in the javascript you write.

Getting Started with AutoTest.Net

I recently came across a new tool called AutoTest.Net.  This is a great tool if you are a TDD/BDD practitioner.  Basically AutoTest.Net can sit in the background and watch your source tree.  Whenever you make changes to a file and save it, AutoTest will build your solution and run all unit tests that it finds.  When it has run the tests it will display the failed tests in the AutoTest window (there is also a console runner if you are a hardcore console user). 

This takes away some of the friction that comes with unit testing because the explicit build-and-run-tests step drops out of your development loop.  Simply write a test and save: AutoTest runs your tests and you see a failed test.  Now implement the code that should make the test pass and save again; AutoTest detects the change and does another build and test run.  This cycle becomes automatic quite quickly and I’m finding I really like it.

Features

AutoTest.Net sports a nice set of features.  You can see the full list on the main GitHub repo page (scroll down), but here are a few.  It supports multiple test frameworks, including MSTest, NUnit, MSpec, and xUnit.net (I believe other frameworks can be supported by writing a plugin).  It can also monitor your project in two different ways.  One is by watching for changes to all files in your project (i.e. watching the source files).  The other is by watching only assemblies; using this second method it only runs tests when you recompile your code manually.  I’m not sure when this would be preferred, but I might find a use for it yet.

Edit: Svein (the author of AutoTest.Net) commented that one other compelling feature is Growl/Snarl support. If you have either one installed you can get test pass/fail notifications through Growl/Snarl, which means you don’t have to flip back to the AutoTest window to check the status of your code. Nice!

This leads me to one other feature that I missed and that is that AutoTest.Net is cross-platform. It currently supports both the Microsoft .NET framework as well as Mono. This means you can develop on OS X or Linux.

Getting AutoTest.Net

So, to get started, download AutoTest.Net from GitHub.  The easiest way is to click the “Downloads” link on the main AutoTest.Net project page and click one of the AutoTest zipfile links (at this time AutoTest.Net-v1.1.zip is the latest). 

AutoTest.Net Download

Once downloaded, unzip to a directory that you’ll run it from.  I’ve put it in my “utilities” directory.  At this point you can start using AutoTest.Net by running AutoTest.WinForms.exe or AutoTest.Console.exe.  If you run the WinForms version, AutoTest will start by asking you what directory you’d like to monitor for changes.  Typically you would select the root directory of your .Net solution (where your .sln file resides).  Once you’ve selected a directory and clicked OK, you’re ready to start development.

[Screenshot: the main AutoTest.Net window after a build and test run]

The screenshot above is the main AutoTest.Net window.  You can see that it ran 1 build and executed a total of 13 unit tests.  The neat thing is that now, as I continue to work, all I need to do to cause AutoTest.Net to recompile and re-run all tests is make a code change and save the file.  This sounds like a small change to the regular flow of code, save, build, run tests.  However, once I had worked with AutoTest.Net for a while it started to feel very natural, and going back to the old flow feels like adding friction.

Configuration

Alright, so we have AutoTest.Net running and that’s good.  As with most tools, it comes with a config file, AutoTest.config (which is shared between the WinForms and Console apps).  This config file is well-documented so for the most part you can just open it up and figure out what knobs you can adjust.

Overriding Options

One nice feature of AutoTest.Net is that you can keep a base configuration in the directory where the AutoTest binaries reside and then override it by dropping an AutoTest.config file in the root directory that you are monitoring.  This allows you to keep a sensible base config in your AutoTest application directory and override it per-project as needed.  Typically I set the “watch” directory to the directory my .sln file is in, so the AutoTest.config override goes right beside the .sln file.

[Screenshot: the monitored solution directory containing the .sln file, the AutoTest.config override, and _ignorefile.txt]

In the above screenshot I have the AutoTest.config file which overrides a few settings, one of which is the ignore file: _ignorefile.txt

IgnoreFile Option

One option that is useful is the IgnoreFile.  It is well-documented in the config file but it didn’t click with me initially.  The IgnoreFile option specifies a file that contains a list of files and folders that AutoTest.Net should ignore when monitoring the configured directories for changes.  The config file mentions the .gitignore file as a similar example, so if you are familiar with git, it should make sense.  The one piece that took me a while to figure out was where this ignore file should go.  I finally figured out that if you put it in the root of the monitored directory (i.e. beside your .sln file), it will be picked up (see the screenshot above).
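
As an illustration, my _ignorefile.txt has entries along these lines (folders and file patterns, one per line, in the spirit of .gitignore):

bin
obj
*.suo
*.user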

Wrapping up

AutoTest.Net is a great tool to help with the Red, Green, Refactor flow.  It removes some of the friction in my day-to-day work by eliminating the need to manually invoke a compile and test run. 

Lastly, observant readers will have noticed that I said it builds and runs all tests any time it notices a file change.  Greg Young is working on a tool called Mighty Moose that builds on top of AutoTest.Net as a Visual Studio add-in (I think there’s a standalone version too).  Mighty Moose ups the ante by figuring out what code changed and which tests are affected by that change.  It then runs only the affected tests, significantly cutting down the test run time.

Querying XML Columns in SQL Server 2008

We make use of XML columns in a few places on our current project.  XML columns are great for storing “blobs” of data when you want to be able to change the schema of the stored data without making database changes (they also provide flexibility in what you store in the column).

Occasionally, we need to query based on parts of the XML in these columns.  Now, because these are XML columns, it’s not quite as easy to query on parts of the column as it is with an int or nvarchar column.  This is further complicated when the stored XML uses namespaces.

Let’s start with the following table:

[Screenshot: the Customer table, which includes an xml column named CustomerInfo]

And here’s a sample of the XML stored in the column:

<CustomerInfo xmlns="http://schemas.datacontract.org/2004/07/Sample.Dto"
              xmlns:i="http://www.w3.org/2001/XMLSchema-instance"
              xmlns:d="http://custom.domain.example.org/Addresses">
    <FirstName>Jeremy</FirstName>
    <LastName>Wiebe</LastName>
    <d:Address>
        <d:AddressId>82FC06FA-46A9-47CA-81AA-A0B968BCEE49</d:AddressId>
        <d:Line1>123 Main Street</d:Line1>
        <d:City>Toronto</d:City>
    </d:Address>
</CustomerInfo>

Now we could query for this row in the table by first name using the following query:

WITH XMLNAMESPACES ('http://www.w3.org/2001/XMLSchema-instance' as i,
                    'http://schemas.datacontract.org/2004/07/Sample.Dto' as s,
                    'http://custom.domain.example.org/Addresses' as d)
select CustomerInfo.value('(/s:CustomerInfo/s:FirstName)[1]', 'nvarchar(max)') as FirstName,
       CustomerInfo.value('(/s:CustomerInfo/s:LastName)[1]', 'nvarchar(max)') as LastName,
       CustomerInfo.value('(/s:CustomerInfo/d:Address/d:City)[1]', 'nvarchar(max)') as City
from   Customer
where  CustomerInfo.value('(/s:CustomerInfo/d:Address/d:AddressId)[1]', 'nvarchar(max)') = '82FC06FA-46A9-47CA-81AA-A0B968BCEE49'

This would list the FirstName, LastName, and City for any row where the AddressId was ‘82FC06FA-46A9-47CA-81AA-A0B968BCEE49’. There are two things to note:

Namespaces

Any time you query XML that uses namespaces (in any language or platform), your query must take those namespaces into account.  In T-SQL, you define namespaces and their prefixes using the WITH XMLNAMESPACES statement, which associates each namespace prefix with a namespace URI.

Once you’ve defined them, you can use the namespace prefixes in your XPath queries to properly identify the XML elements you want.  One final note about namespaces: we had to assign a prefix (s) to what was the default namespace in the XML sample.  WITH XMLNAMESPACES does accept a DEFAULT clause for this, but I find an explicit prefix makes the XPath expressions easier to read.

XPath Query results

Notice that the XPath queries (in the CustomerInfo.value() calls) always index into their results to get the first matching node.  An XPath expression can match multiple nodes, but .value() requires a single node, so if you only want the first match (which is often the case), you need to index into the result.  Also note that with these XPath queries the results are 1-based, not 0-based, so [1] takes the first element in the result set.

There are other functions you can use to query XML columns in T-SQL; you can find out more in Books Online or on MSDN.  This page is a good starting point for querying XML columns: http://msdn.microsoft.com/en-us/library/ms189075.aspx.
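
For example, the .exist() method returns 1 when an XPath expression matches, which makes for tidier filtering. A sketch against the same table:

WITH XMLNAMESPACES ('http://schemas.datacontract.org/2004/07/Sample.Dto' as s,
                    'http://custom.domain.example.org/Addresses' as d)
select count(*)
from   Customer
where  CustomerInfo.exist('/s:CustomerInfo/d:Address[d:City="Toronto"]') = 1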

Uncovering intermittent test failures using VS Load Tests


Our team build has been flaky lately with several unit tests that fail intermittently (and most often only on the build server).  I was getting fed up with a build that was broken most of the time so I decided to see if I could figure out why the tests fail.

Working on the premise that the intermittent failures were caused either by multiple tests stepping on each other’s toes or by timing, I decided the easier theory to investigate would be the timing issue.

To create a new load test:

1) Select “Test –> New Test” and select “Load Test” from the “Add New Test” dialog

Add New Test

2) You will be presented with a Wizard that will help you build a load test.  I don’t know what each setting means, but for the most part I left the settings at their defaults.  I’ve noted the exceptions below.

a) Welcome

New Load Test Wizard - Welcome

b) Scenario

New Load Test Wizard - Scenario

c) Load Pattern

New Load Test Wizard - Scenario - Load Pattern

d) Test Mix Model

New Load Test Wizard - Scenario - Test Mix Model

e) Test Mix

The Test Mix step allows you to select which tests should be included in the load test.  For my purposes, I selected the single test that was failing intermittently.  If you select multiple tests this step allows you to manage the distribution of each test within the load.

New Load Test Wizard - Scenario - Test Mix

f) Counter Sets

New Load Test Wizard - Counter Sets

g) Run Settings

Finally, you are able to manage the Warm up duration and Run duration.  Warm up duration is used to “warm up” the environment.  During this time the load test controller simply runs the tests but does not include their results in the load test.  The Run duration represents how long the load test controller will run the mix of tests selected in the “Test Mix” step.

New Load Test Wizard - Run Settings

Once you click Finish, you will be presented with a view of the load test you’ve just created.  You can now click on the “Run test” button to execute the load test. 

Gotcha: The first time I tried to run a load test it complained that it couldn’t connect to the database.  You can resolve this by running the load test repository SQL script that ships with Visual Studio and then going into Visual Studio and selecting Test –> Administer Test Controllers…  This will bring up the following dialog.

Administer Test Controllers 

The only thing you need to configure in this dialog is the database connection (the script will create a database called “LoadTest”) so in my case the connection string looked like this:

Data Source=(local);Initial Catalog=LoadTest;Integrated Security=True

Now that you have the load test created, you are ready to run it.  Click the “Run test” button on the load test window (shown below). 

Load Test - Ready to Run

While the load test is running, Visual Studio will show you 4 graphs that give you information about the current test run.  You can play with the option checkboxes to show/hide different metrics.

In my case, as the load test ran the unit test I’d selected started to fail.  This at least showed me that I could reproduce the failure on my local machine now. 

Next I added a bit of code to the unit test prior to the assertion that was failing.  Instead of doing the assertion I added code to check for the failure condition and called Debugger.Break(). 
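
It looked something like this (the condition is illustrative):

// using System.Diagnostics;
if (actual != expected)   // the failure condition the assertion used to check
{
    Debugger.Break();     // stop in the debugger at the moment of failure
}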

I ran the test again, but this time as I hit the Debugger.Break() calls I couldn’t step through the code very well because the load test was running 25 simultaneous threads (simulating 25 users).  I dug around a bit and found that you can change that: select the “Constant Load Pattern” node and adjust the “Constant User Count” in the Properties window.

I ran the test again and this time I was able to debug the test and figure out what was wrong. 

Overall, the load test tool in Visual Studio was quite easy to set up and get going with.  The only extra requirement is a database, which is used to store the load test results.

Lessons learned using an IoC Container


I am working on my first project that uses an IoC container heavily.  Overall it’s been a very positive experience.  However, we have learned that some things can cause pain in testing, bug fixing, and general code maintenance.

Separate component registration from application start-up

This has been a constant source of problems on our project.  If you don’t separate component registration from your component start-up path you often run into a chicken-and-egg problem: you may be resolving a component that depends on other components that haven’t yet been registered.  This makes for a very brittle start-up path that is prone to breakage as you add dependencies to your components.  Now the container that is supposed to decouple things is causing you to micromanage your component registration and start-up.

The solution is to split registration from component start-up.  This makes perfect sense to me now, but wasn’t so obvious when we started the project.  Lesson learned!
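
Here’s a sketch of the shape we ended up with (using Unity, since that’s our container; the component names are illustrative):

// Phase 1: registration only -- nothing is resolved yet,
// so registration order doesn't matter.
var container = new UnityContainer();
container.RegisterType<IMessageBus, MessageBus>();
container.RegisterType<IScheduler, Scheduler>();

// Phase 2: start-up -- by now every dependency is registered,
// so components can be resolved in any order.
var scheduler = container.Resolve<IScheduler>();
scheduler.Start();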

Be careful with a global service locator

We went quite a while without using a global service locator (essentially a singleton instance of the container) but eventually found that we needed it.  This might have been avoided with better design-fu but we couldn’t think of a better alternative at the time. 

What we found was that the global container made things difficult to test.  This became especially painful when we needed to push mock objects into our container and then manually clean up the registrations to replace them with the production components that other tests might expect.  I’d be interested in hearing if you have a way to solve this problem, and if so, how.

Use a Full-Featured Container

When we started the project we had intended to use Enterprise Library for some features.  EntLib has Unity integration, so we settled on Unity as our IoC container by default.  The container itself has served us extremely well.  What we’ve found lacking, though, is its supporting infrastructure.  A container like StructureMap comes with a lot of nice features that aren’t core IoC features.  For example, it reduces the “ceremony” (as Jeremy D. Miller would say) of registering components by providing convention-based registration (a sketch follows).  StructureMap also provides detailed diagnostics that help you figure out what’s wrong with your component registrations when things don’t work.
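
As a taste, StructureMap’s convention-based registration looks roughly like this (a sketch):

var container = new Container(x => x.Scan(scan =>
{
    scan.TheCallingAssembly();
    // Maps IFoo to Foo automatically -- no per-component registration.
    scan.WithDefaultConventions();
}));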

That summarizes the biggest points that we’ve learned so far.  I’d be interested to hear some “gotchas” that you’ve learned from using an IoC container “in anger”. 

An Introduction to StructureMap


Over the next several months I will be presenting a walkthrough of StructureMap.  I will be starting with the basics of how to construct objects using StructureMap and then move on to more advanced topics. 

I know there is lots of great information out there around how to use StructureMap.  My intention for writing this series is mostly selfish.  I want to learn StructureMap, and as I learn it I’ll be blogging the features I’m learning as I go. 

I’m currently working on a project that uses the Unity dependency injection container.  This container has worked very well for us, but as I’ve read various blogs and articles I’ve come to realize that a good DI container provides more than just DI.  It’s my intention to learn the basic usage of StructureMap, but also to dig into the more advanced usages as well as its support infrastructure that helps you keep your dependency injection code to a minimum (thinking of auto-wiring, convention-based registration, etc).

I’ll also be updating this post as I write each new article so that this post can remain as a table of contents for the whole series.

  1. Manual component registration and resolution
  2. Auto-wiring components
  3. Convention-based registration
  4. Component lifetime management

Beware Calculations in Unit Tests


Today our project’s continuous integration build failed.  I traced it down to a unit test that was exercising a scheduling class.  The class takes a schedule and figures out the next time it should run based on the current date. 

Now in our project we’ve provided a way to stub the current date or current time by having a SystemClock class that has a static Instance property.  If we need to we can push a stubbed instance into that property in a unit test to control what the current date/time is.

The problem was that the unit test relied on DateTime.Now and then figured out the expected next scheduled time right in the test.  Those calculations were wrong and caused the test to fail.

Almost every time I see calculations in unit tests to figure out what the expected value should be (especially date-based calculations) I cringe, because those are unit tests waiting to fail.  If you’ve copied the calculations from the class under test, who says the production class got them right?  If you did the calculations differently, your unit test’s calculations could be wrong.  Wherever possible I prefer to keep the moving parts to a minimum and use constant dates as my expected values in date-based unit tests.
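
Here’s a sketch of what I mean (FakeClock, DailySchedule, and GetNextRunTime are illustrative; the SystemClock.Instance seam is the one described above):

[Test]
public void Next_run_time_is_based_on_the_stubbed_clock()
{
    // Pin "now" instead of calculating from DateTime.Now.
    SystemClock.Instance = new FakeClock(new DateTime(2011, 1, 3, 9, 0, 0));

    var schedule = new DailySchedule(runAt: new TimeSpan(10, 0, 0));

    // Constant expected value -- no calculations to get wrong.
    Assert.AreEqual(new DateTime(2011, 1, 3, 10, 0, 0), schedule.GetNextRunTime());
}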

ReSharper 4.5 – Tip #2 – Generate Members

>One of the other developers on the team I’m on pointed out a very handy feature in R# yesterday. It is the “Generate Members” feature and can be accessed using Ctrl+Ins. This handy tool is like a central point to accessing the generation features in R#. The nice thing is that the generated members are placed at the current location and you can generate as many members as you like in one shot. This makes doing things like wrapping a bunch of private fields really easy. So, the next time you are using Alt+Enter (Quick Fix) to generate some code using R#, consider using Alt+Ins and see what it can do.