Latest posts

by John Nye

Migrating a .NET project to .NET Core

A request was raised on GitHub to have Search Extensions support .NET Core. This post documents the steps taken as part of that migration in the hope that it may be useful to someone else.

PM> Install-Package NinjaNye.SearchExtensions

The solution I am upgrading (Search Extensions) consists of the following projects:

  • 1 class library
  • 3 test projects

NOTE: This blog assumes that you are using VS2017. You can download the Community edition free from the visual studio downloads page.

Upgrading the .csproj

The first step was to edit the .csproj file of the main (class library) project. You can open the .csproj in a text editor or simply right-click the project and select "Edit [Project_name].csproj". Once you have the editor open, clear the contents and replace them with the following snippet to make the project target netstandard1.6:

<Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
        <TargetFramework>netstandard1.6</TargetFramework>
        <Description>[DESCRIPTION]</Description>
    </PropertyGroup>    
</Project>

Once this is complete, reload and build the solution in Visual Studio 2017. This will highlight, through build errors, any incompatible code and identify which NuGet packages you need to install. At this point you have a list of errors that need fixing before you can proceed.

In order to get the project to build, you will most likely need to dive into your AssemblyInfo.cs, and potentially remove it altogether. Before you do so, remember to migrate your project metadata to your .csproj.

If you take advantage of [assembly: InternalsVisibleTo] then you will need to keep this in your AssemblyInfo.cs.
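
For reference, keeping that attribute is as simple as leaving a line like the following in your AssemblyInfo.cs (the test assembly name here is purely illustrative):

using System.Runtime.CompilerServices;

// Illustrative only: expose internals to a hypothetical test assembly
[assembly: InternalsVisibleTo("NinjaNye.SearchExtensions.Tests")]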

Targeting multiple frameworks

Upgrading my test projects was a little more involved, firstly because I wanted to test my code on both dotnet core and net461, and secondly because I wanted to migrate my NUnit tests to XUnit.

Fortunately, with the new xml based project files you can target multiple frameworks fairly simply:

<Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
        <!-- MULTIPLE FRAMEWORKS DEFINED BELOW -->
        <TargetFrameworks>netcoreapp1.1;net461</TargetFrameworks>
        <Description>Unit test project for Search Extensions</Description>
    </PropertyGroup>    
</Project>

Note that the <TargetFramework> element has been altered to <TargetFrameworks>.

Again, the above step will remove all referenced projects and packages. To reference my project under test I was able to add the following code after the closing </PropertyGroup> node (alternatively you can let VS2017 perform this for you using the UI):

<ItemGroup>
  <ProjectReference Include="..\NinjaNye.SearchExtensions\NinjaNye.SearchExtensions.csproj" />
</ItemGroup>

Once complete, I could then address my second problem: porting my NUnit tests to xUnit. This was simply a case of installing the xUnit NuGet package on the project and replacing a bunch of [Test] attributes with [Fact]... well, that and a bunch of assertion tweaks, but you get the idea.
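
To give a feel for the change, here is a small before/after sketch; the test itself is made up for illustration:

// NUnit (before)
[Test]
public void Containing_ReturnsMatchingRecords()
{
    var result = new[] { "test", "tester" }.Search(x => x).Containing("test").ToList();
    Assert.That(result.Count, Is.EqualTo(2));
}

// xUnit (after)
[Fact]
public void Containing_ReturnsMatchingRecords()
{
    var result = new[] { "test", "tester" }.Search(x => x).Containing("test").ToList();
    Assert.Equal(2, result.Count);
}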

Which .NET Standard API?

So my project had been converted and my tests were running (on multiple frameworks). The last piece of the puzzle was to decide which .NET Standard version to support. Essentially, the earlier the version you target, the greater the number of frameworks your code can run on, at the expense of the richness of the API you integrate with. In the end you don't choose which API to use... it chooses you!

To do this, you simply swap netstandard1.6 for netstandard1.5 in the .csproj, build the code and, based on the errors, decide whether you can support the lower version. In my case I was able to go back to netstandard1.5, but when I tried netstandard1.4 the API did not have enough functionality to perform the actions I required.

If you find that netstandard1.6 is not yet rich enough to migrate to, it might be worth waiting for .NET Standard 2.0. This release is due to add a huge number of additional APIs.

The final conversion

If the above isn't clear, please feel free to leave a comment, or alternatively head over to the SearchExtensions GitHub project and take a look around.

Finally, I'd like to thank Nick Mayne for his help during this conversion. Go and check out his GitHub page or follow him on Twitter (@nicholasmayne).


If you would like to know more about this or any other feature, please get in touch by adding a comment below, contact me on twitter (@ninjanye) or you can raise an issue on the github project

Improved Levenshtein searching with SearchExtensions

NOTE: This post will not cover what Levenshtein Distance is or how it is calculated.
For more information on Levenshtein Distance, please visit the Wikipedia page

Up until now, NinjaNye.SearchExtensions has only been able to calculate the Levenshtein distance between:

  • A single property and a string search term
  • A single property and one other property

To celebrate 10,000 downloads of SearchExtensions, the latest version allows you to calculate the distance between:

  • A single property and multiple strings
  • Multiple properties and a single string
  • Multiple properties and multiple strings
  • A single property and multiple other properties
  • Multiple properties and a single property
  • Multiple properties and multiple other properties

You can install NinjaNye.SearchExtensions from nuget using the following:

PM> Install-Package NinjaNye.SearchExtensions

All of this is fairly difficult to describe, so below are some examples to help me explain.

Calculate levenshtein distance against multiple terms

The following functionality has been in SearchExtensions for a little while, but it is a good starting point to describe how to get the Levenshtein distance between each user's first name and the word "Jim":

var result = context.Users.LevenshteinDistanceOf(x => x.FirstName)
      .ComparedTo("Jim");

The above will return each user along with the Levenshtein distance of the FirstName compared to "Jim". In version 2.0, it is now possible to compare the FirstName to more than one term:

var result = context.Users.LevenshteinDistanceOf(x => x.FirstName)
      .ComparedTo("Jim", "Fred");

Calculate levenshtein distance against multiple properties

Calculating the distance against multiple properties is also now supported:

var result = context.Users.LevenshteinDistanceOf(x => x.FirstName, x => x.LastName)
      .ComparedTo("Jim", "Fred");

The above will result in 4 comparisons per record. Because the levenshtein calculation must be performed in memory, you should always reduce the record set as much as possible beforehand, e.g:

var result = context.Users.Search(x => x.CountryCode)
      .EqualTo("GB")
      .LevenshteinDistanceOf(x => x.FirstName, x => x.LastName)
      .ComparedTo("Jim", "Fred");

When performing a Levenshtein search it is important to always reduce your record set as much as possible, especially when comparing multiple properties to multiple terms

A new result

The result of a Levenshtein search has also changed and now has additional properties to work with:

public interface ILevenshteinDistance<out T>
{
    /// <summary>
    /// The distance of the first comparison
    /// </summary>
    int Distance { get; }

    /// <summary>
    /// The queried item
    /// </summary>
    T Item { get; }

    /// <summary>
    /// A collection of all distances calculated
    /// </summary>
    int[] Distances { get; }

    /// <summary>
    /// The minimum distance of all levenshtein calculations
    /// </summary>
    int MinimumDistance { get; }

    /// <summary>
    /// The maximum distance of all levenshtein calculations
    /// </summary>
    int MaximumDistance { get; }
}

This means you can order the results so that the record with the closest match is ordered first:

var result = context.Users.LevenshteinDistanceOf(x => x.FirstName)
      .ComparedTo("Jim", "Fred")
      .OrderBy(x => x.MinimumDistance)
      .ThenBy(x => x.MaximumDistance);
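
Because each result also exposes the source item, you can combine this with a filter to keep only close matches; a small sketch (the threshold of 3 is arbitrary):

var closeMatches = context.Users.LevenshteinDistanceOf(x => x.FirstName)
      .ComparedTo("Jim", "Fred")
      .Where(x => x.MinimumDistance <= 3)
      .OrderBy(x => x.MinimumDistance)
      .Select(x => x.Item);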

If you would like to know more about this or any other feature, please get in touch by adding a comment below, contact me on twitter (@ninjanye) or you can raise an issue on the github project

I woke up this morning to find that search extensions now has over 10,000 downloads!!!

I wanted to take this opportunity to thank all of you that have contributed to the project be it by simply using SearchExtensions, requesting new features, raising issues on GitHub or even submitting PRs.

The future

The improvements I'd like to make to SearchExtensions include:

  • Support for .Net Core
  • Split out Soundex support to its own package
  • Split Levenshtein support to its own package

The above are only my own ideas for improving SearchExtensions. The best, and most useful, ideas often come from users. With that in mind:

If you have any suggestions on how the package could be improved, please get in touch by adding a comment below, contact me on twitter (@ninjanye) or you can raise an issue on the GitHub project

PM> Install-Package NinjaNye.SearchExtensions

SearchExtensions now supports searching of child collections for IQueryable

You can now perform child collection searches on IQueryable objects which will get translated and run on the data source server returning only the filtered records.

String searching

The functionality allows for the following to be performed on the data provider and not in memory

var shops = context.Shops.SearchChildren(s => s.Products)
                         .With(p => p.Name)
                         .Containing("ball");

The above returns shops that contain any product with the term "ball" in its Name field.

When using LinqToEntities, the above will send the following SQL:

SELECT [Extent1].[Id] AS [Id]
  -- Additional columns --        
FROM [dbo].[Shops] AS [Extent1]
WHERE EXISTS (
  SELECT 1 AS [C1]
  FROM [dbo].[Products] AS [Extent2]
  WHERE ([Extent1].[Id] = [Extent2].[Shop_Id])
    AND ([Extent2].[Name] LIKE N'%ball%')
)
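
For comparison, the hand-written LINQ that produces roughly the same query would look something like the sketch below (not the exact expression SearchExtensions builds):

var shops = context.Shops.Where(s => s.Products.Any(p => p.Name.Contains("ball")));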

Support is also in place for searching multiple properties and multiple values

var shops = context.Shops.SearchChildren(s => s.Products)
                         .With(p => p.Name, p => p.Desc)
                         .Containing("ball", "balls");

Additional string comparisons

All the usual string comparisons that you expect from SearchExtensions are also supported for searching child records:

  • .StartsWith() - property starts with any of the given terms
  • .EndsWith() - property ends with any of the given terms
  • .EqualTo() - property is equal to any of the given terms
  • .Containing() - property contains any of the given terms
  • .ContainingAll() - property contains all of the given terms

Support for non string filtering

Support has also been added to allow searching child collections for matches on other types. The following example returns shops who have products with a price between 10,000 and 100,000:

var shops = context.Shops.SearchChildren(s => s.Products)
                         .With(p => p.Price, p => p.RRP)
                         .Between(10000, 100000);

Both of the examples so far will return a collection of Shops where any of the shop's products match the given criteria.

The equivalent of the above without utilising SearchExtensions is as follows:

var shops = context.Shops.Where(s => s.Products.Any(p =>
                              (p.Price > 10000 && p.Price < 100000)
                           || (p.RRP > 10000 && p.RRP < 100000)));

All the usual non string comparisons have also been implemented for IQueryable searching:

  • .GreaterThan()
  • .GreaterThanOrEqualTo()
  • .LessThan()
  • .LessThanOrEqualTo()
  • .Between()

Method Chaining

As is the norm with SearchExtensions, it is also possible to chain your search criteria together in order to create more complex filtering:

var result = context.Countries.SearchChildren(x => x.Cities)
                    .With(c => c.Name, c => c.LocalName)
                    .StartsWith("A", "E", "I", "O", "U")
                    .EndsWith("S", "T", "U", "V", "W");

To read more about SearchExtensions, please visit ninjanye.github.io/SearchExtensions/


Thanks again to @bzbetty for getting in touch via a GitHub issue and creating this feature request.

If you have a new feature request, please get in touch by adding a comment below, contact me on twitter (@ninjanye) or you can raise an issue on the github project page

To install SearchExtensions, you can either use the NuGet Package Manager or run the following in the Package Manager Console:

PM> Install-Package NinjaNye.SearchExtensions

Search child collections

I received a request from a user who wondered if SearchExtensions could support something like the following:

Given I have a collection of Shops
I want to return all Shops that stock a Product with a Name containing "foot"

This feature is now available in the latest release via a new method, .SearchChildren(). Currently this is only available for LinqToObjects, not LinqToEntities (yet):

var shops = allShops.SearchChildren(s => s.Products)
                    .With(p => p.Name, p => p.Desc)
                    .Containing("foot", "feet");

The above returns shops that contain any product with the term "foot" or "feet" in its Name or Description fields.

Support has also been added to allow searching child collections for matches on other types. The following example returns shops who have products with a price greater than 10,000:

var shops = allShops.SearchChildren(s => s.Products)
                    .With(p => p.Price)
                    .GreaterThan(10000m);

Both of the above examples will return a collection of Shops where any of the shop's products match the given criteria.

The equivalent of the first example in LinqToObjects would be something like:

var shops = allShops.Where(s => s.Products.Any(p => p.Name.Contains("foot")
                                                 || p.Name.Contains("feet")
                                                 || p.Desc.Contains("foot")
                                                 || p.Desc.Contains("feet")));

Additional comparisons available

Other string comparisons available for normal searching are also available for searching children, such as:

  • .StartsWith() - property starts with any of the given terms
  • .EndsWith() - property ends with any of the given terms
  • .EqualTo() - property is equal to any of the given terms
  • .Containing() - property contains any of the given terms
  • .ContainingAll() - property contains all of the given terms

Non string comparisons have also been implemented when searching child collections:

  • .GreaterThan()
  • .GreaterThanOrEqualTo()
  • .LessThan()
  • .LessThanOrEqualTo()

Searching across multiple children

The ability to search across multiple children (of the same type) is also supported. Consider the following (contrived) example:

I want to return all `parents` who have a `Son` OR a `Daughter` with a `Forename` or `Nickname` beginning with 'A', 'E', 'I', 'O' or 'U'

The Model

public class Parent
{
  public IEnumerable<Child> Sons { get; private set; }
  public IEnumerable<Child> Daughters { get; private set; }
}

public class Child
{
  public int Age {get; private set;}
  public string Forename {get; private set;}
  public string Nickname {get; private set;}
  public string Surname {get; private set;}
}

Performing the search

Given a list of parents I could now do the following:

var result = parents.SearchChildren(p => p.Sons, p => p.Daughters)
                    .With(c => c.Forename, c => c.Nickname)
                    .StartsWith("A", "E", "I", "O", "U");

The result

The result of this search is a collection of parents who have a son or a daughter with a forename or nickname beginning with a vowel.

Note that all of a parent's children will still be present for each parent.

The filter is performed against the parent but the conditions are against the children.
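
In other words, iterating over the result gives you the parents untouched; a small sketch (assuming the model above):

foreach (var parent in result)
{
    // Each parent here has at least one matching son or daughter,
    // but parent.Sons and parent.Daughters themselves are not filtered.
    Console.WriteLine("{0} sons, {1} daughters", parent.Sons.Count(), parent.Daughters.Count());
}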


Thanks to @bzbetty for getting in touch via a GitHub issue.

This feature is not something I would have completed without his request, as I had not seen or envisaged a need for it.

If you have a new feature request, please get in touch by adding a comment below, contact me on twitter (@ninjanye) or you can raise an issue on the github project page

To install SearchExtensions, you can either use the NuGet Package Manager or run the following in the Package Manager Console:

PM> Install-Package NinjaNye.SearchExtensions

Search text for whole word matches

I recently received a request from a user looking to be able to search for terms whilst not returning partial matches. For instance, given I am searching for the word "tension", I would not expect "extension" to be returned.

In version 1.6 it is now possible to restrict your search to only return records where whole words are matched.

var result = data.Search(x => x.Name)
                 .Matching(SearchType.WholeWords)
                 .Containing("tension");

We can also mix the search types. The following example matches "tension" against whole words only, but then reverts to matching "search" against any occurrence:

var result = data.Search(x => x.Name)
                 .Matching(SearchType.WholeWords)
                 .Containing("tension")
                 .Matching(SearchType.AnyOccurrence)
                 .Containing("search");

The above code will return "tension research" but would not return "extension research"

This feature is also available to .StartsWith() and .EndsWith() methods.
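
Assuming the same fluent pattern, restricting a StartsWith search to whole words would look something like this sketch:

var result = data.Search(x => x.Name)
                 .Matching(SearchType.WholeWords)
                 .StartsWith("tension");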


Thanks to Rob for getting in touch via a previous post.
This feature has come about purely because he got in touch.

If you have a new feature request, please get in touch by adding a comment below, contact me on twitter (@ninjanye) or you can raise an issue on the github project page

PM> Install-Package NinjaNye.SearchExtensions

Release 1.5 is here!

I've been rather busy recently with holidays and moving jobs and haven't been able to spend as much time as I'd like on SearchExtensions, however I recently jumped back on the wagon and am pleased to announce a new release.

This release means you can use SearchExtensions to search against more than just strings. New types that are now supported include (but are not limited to) int, DateTime and decimal.

Each of these new types support the following search commands:

  • .EqualTo()
  • .LessThan()
  • .GreaterThan()
  • .Between()
  • .GreaterThanOrEqualTo()
  • .LessThanOrEqualTo()

In this post I will give examples on how to use each of these methods against an IQueryable<TestModel> collection named testContext.TestModels.

.EqualTo() method

This method is the same as the .IsEqual() string search method. Note: In this release the string IsEqual() method has been made obsolete and has been replaced by an .EqualTo() method in order to keep the method names consistent across the library.

var result = testContext.TestModels.Search(x => x.IntegerOne, x => x.IntegerTwo)
                                   .EqualTo(1, 5, 8, 12);

The above implementation will return any records where either IntegerOne or IntegerTwo equal 1, 5, 8 or 12.

.LessThan() and .LessThanOrEqualTo() methods

These methods allow you to return records where any of the supplied properties are less than (or less than or equal to) the supplied value:

var date = new DateTime(2013, 5, 13);
var result = testContext.TestModels.Search(x => x.DateOne, x => x.DateTwo, x => x.DateThree)
                                   .LessThan(date);

The above implementation will return any records where DateOne, DateTwo or DateThree is less than the supplied date.

.GreaterThan() and .GreaterThanOrEqualTo() methods

These methods allow you to return records where any of the supplied properties are greater than (or greater than or equal to) the supplied value:

var date = new DateTime(2013, 5, 13);
var result = testContext.TestModels.Search(x => x.DateOne, x => x.DateTwo, x => x.DateThree)
                                   .GreaterThan(date);

The above implementation will return any records where either DateOne, DateTwo or DateThree are greater than the supplied date.

.Between() method

This method allows you to return records where any of the supplied properties fall between the two supplied values:

var result = testContext.TestModels.Search(x => x.DecOne, x => x.DecTwo, x => x.DecThree)
                                   .Between(9.99, 29.99);

The above implementation will return any records where either DecOne, DecTwo or DecThree are between 9.99 and 29.99. Implementing this without SearchExtensions would require something like the following:

var result = testContext.TestModels.Where(x => (x.DecOne > 9.99 && x.DecOne < 29.99)
                                            || (x.DecTwo > 9.99 && x.DecTwo < 29.99)
                                            || (x.DecThree > 9.99 && x.DecThree < 29.99));

Combining instructions

As with all of the search extensions, these methods can be combined to easily create complex queries. When used against IQueryable, these extension methods will translate the commands to the data source where possible, meaning the data source performs the filtering rather than it happening in memory.

var result = testContext.Products.Search(x => x.Title, x => x.SubTitle, x => x.Description)
                                 .Containing("demo", "example", "search")
                                 .Search(x => x.USPrice, x => x.EuroPrice, x => x.GBPPrice)
                                 .Between(9.99, 29.99)
                                 .Search(x => x.StartDate)
                                 .LessThanOrEqualTo(DateTime.Today);

If you would like to know more about this or any other feature, please get in touch by adding a comment below, contact me on twitter (@ninjanye) or you can raise an issue on the github project page

PM> Install-Package NinjaNye.SearchExtensions

This post has also been published on 7digital's Corporate blog and Developer blog

Week 1 at 7digital

I have recently started at 7digital and already there are a few things of note that may seem small, but highlight the difference in attitude at 7digital compared to other companies I have worked for. Below are a few thoughts from my first week at 7digital.

Day 1 - Meet the team

After an incredibly frustrating start, owing to a 4 hour delay on my train journey, I was introduced to everyone, given my security pass and directed to a new starter guide that had a series of tasks that needed to be completed. These tasks ranged from installing software to getting added to email groups to reading up on the 7digital handbooks. So I logged into my Ubuntu machine and started going through the list... wait... Ubuntu?

Thoughts from the day:

  • Incredibly welcoming bunch.
  • Locate an Ubuntu book!

Day 2 - Empowerment

The new starter guide I had been pointed to stated that I should 'get into email groups', with some examples of which groups might be relevant. Ok, so who do I need to ask to add me? Since I have never had the 'power' to add myself to email groups before I thought this was a valid question, and it was... however the answer was straightforward - 'You can add yourself'. So off I went to the email group management page and added myself to the Tech email groups that I felt were relevant to me (with guidance from the team).

This was a small shift compared to previous companies and was a feeling that kept recurring...

  • Q: How do I get added to TeamCity? A: Oh, just register.
  • Q: Can I install XYZ on my machine? A: Of course, it's your machine!

Thoughts from the day:

I began to realise that I was entrusted and empowered to make the best decision for my situation, which sounds obvious but is not always forthcoming.

Day 3 - Getting my hands dirty

After getting myself an avatar sorted I paired up with Yogi on a task. Almost everything here is done in pairs, which I have always strived for in previous places but it hasn't always stuck. Here, everyone buys into the XP mantra and reaps the rewards of applying this discipline. After some productive dev time it was time to push our changes and release the code to SYSTEST and UAT. Everything worked, and the release was really easy since it was automatically triggered by our code push. So SYSTEST and UAT passed all the tests, fantastic. At this point Yogi mentioned something that made my blood run cold... he nonchalantly suggested that we push to LIVE... I looked at him to try and work out if he was serious. He was. I knew this happened, but my past experience with live deployments meant my natural reaction was to recoil at the thought. We put out an email and triggered the deployment, which, again, was really simple and easy. A few minutes later, the code was live, the tests had passed and the whole thing was incredibly painless.

Thoughts from the day:

  • Release all the things! It's not scary at all

Day 4 - In the groove

I arrived on day 4 raring to go. I continued to pair with other members of the team and contributed to 2 more live releases. I was able to use the site before a release and then again afterwards and see the changes in behaviour take place. This was strangely gratifying despite the changes being relatively small. The test suites and environments meant we had complete confidence that if something did not work correctly we would be notified early enough that it would not make it to live. Should a commit make it through the test environments and fail in live then, because of the incremental release system, we can immediately identify the cause of the issue and put in a failing test that lets us fix the issue as well as catch the error, should it happen again, in the future. Why doesn't everybody do this?

Thoughts from the day:

  • I think I'm addicted to this release malarkey
  • Mistakes are ok (and sometimes inevitable), just try and make them early and loudly

Day 5

The week has flown by. Friday rules dictate that we can't release to live since, should there be any problems, we don't all want to have to lose our weekends fixing them. But that's ok, in fact it's very sensible. Again I buddied up with a colleague and we started the next task. We made some good progress and were able to release our changes up to UAT, ready to be pushed to live on Monday. The cycle continued until the afternoon, when we were all invited to listen to some topics members of the company would like to share. The knowledge sharing session lasted an hour and was a great way to learn about technologies you may not otherwise have been exposed to. Once complete, the office began to wind down and the kitchen was suddenly populated with an array of snacks and beverages. I dutifully went to the kitchen and started to mingle with the masses, immediately being made to feel at home amongst the crowd.

Thoughts from the day:

  • Knowledge sharing is held in high regard
  • Enjoyed getting to know the wider population within 7digital

Conclusion

My first week at 7digital has been great. I've been made to feel incredibly welcome and have been exposed to new ways of working that all help to get features out the door quickly. This all helps to keep the team highly motivated and driven. There is a definite shift in the way 7digital does things compared to other companies.

I'm looking forward to spending my future here at 7digital and hope I can learn from the plethora of experience and talent. I'm completely sold on the continuous release strategy 7digital has employed and look forward to contributing further improvements to the 7digital suite.

I was recently playing around with Sitecore 8 with some colleagues and was looking into using the ItemWebApi to update an item's field value. We grabbed the documentation and went about our implementation.

First things first, I enabled the ItemWebApi and set it to the least restrictive setting since I was only testing.

Sitecore.ItemWebApi.config

  <site name="website">
    <patch:attribute name="itemwebapi.mode">StandardSecurity</patch:attribute>
    <patch:attribute name="itemwebapi.access">ReadWrite</patch:attribute>
    <patch:attribute name="itemwebapi.allowanonymousaccess">true</patch:attribute>
  </site>

Next we created some JavaScript to send a PUT request to Sitecore with some hard-coded IDs and values...

$.ajax({
    type: 'PUT',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded'},
    url: '/-/item/v1/?sc_itemid=9BDA4182-F418-4D85-8997-3ABEE4546641',
    data: { 'BF01288E-0CB1-44F4-8DD0-587C46374AFC': 'new field value' }
});

Having sent the request, I got a response containing the item that had been edited, BUT... on investigating the field I was editing, the value had not changed!

The issue

After a day of digging we finally found the cause of the issue.

Having decompiled the pipeline method responsible for the update operations we located the following code that was updating the fields.

public override void Process(UpdateArgs arguments)
{
  Assert.ArgumentNotNull((object) arguments, "arguments");
  Item[] scope = arguments.Scope;
  NameValueCollection form = arguments.Context.HttpContext.Request.Form;
  foreach (Item obj in scope)
  {
    if (!(Context.Site.Name == "shell") || obj.Access.CanWriteLanguage())
    {
      obj.Fields.ReadAll();
      obj.Editing.BeginEdit();
      foreach (string fieldId in (NameObjectCollectionBase) form)
      {
        Field field = UpdateScope.GetField(obj, fieldId);
        if (field != null && UpdateScope.CanUpdateField(field, arguments.Context.Settings.Mode))
          field.Value = form[fieldId];
      }
      obj.Editing.EndEdit();
    }
  }
}

Ok, so the fields that we want to update are read from the HttpContext.Request.Form collection. On debugging this we found that the form was always empty... and, as you'd expect, the form collection is also read-only. So why is there nothing in the form collection? Well, after a bit more digging, StackOverflow came to the rescue. Sure enough, when you read the MSDN documentation it clearly states:

The Form collection retrieves the values of form elements posted to the HTTP request body, with a form using the POST method.

Wait a minute, so Request.Form only populates the Form collection on POST requests...? And the collection is read-only, so we can't add to it. How could this ever have worked?

At this point I was at a loss, but not wanting to give up we kept on plugging away, creating our own pipeline processor to help debug things. Eventually we came to a solution which horrifies me and SHOULD NOT BE USED IN A PRODUCTION ENVIRONMENT

public class WtfItemWebApi : UpdateScope
{
    public override void Process(UpdateArgs arguments)
    {
        //Get the request
        HttpRequest httpRequest = arguments.Context.HttpContext.Request;
        //Retrieve the data sent up with the put request
        var putData = new StreamReader(httpRequest.InputStream).ReadToEnd();
        //Split the data into itemId and value
        var parts = putData.Split('=');
        string itemId = parts[0];
        string value = HttpUtility.UrlDecode(parts[1]);

        //This method is naughty... very very naughty
        MakeFormEditable(arguments.Context.HttpContext);
        // add field id and value to the form collection
        httpRequest.Form.Set(itemId, value);

        // Continue on to the default update field pipeline class
        base.Process(arguments);
    }

    /// <summary>
    /// Sitecore use PUT requests to update items
    /// Sitecore use HttpRequest.Form to read new field values
    /// Microsoft HttpRequest.Form is populated on *POST* requests
    /// Use REFLECTION to open the collection and make it writeable
    /// </summary>
    /// <param name="httpContext"></param>
    protected void MakeFormEditable(HttpContext httpContext)
    {
        var collection = httpContext.Request.Form;

        var propInfo = collection.GetType()
                                 .GetProperty("IsReadOnly", 
                                              BindingFlags.Instance | BindingFlags.NonPublic);
        propInfo.SetValue(collection, false, new object[] { });
    }
}

Now that I have my new update field pipeline class, all I need to do is plug it in place of the old one.

Sitecore.ItemWebApi.config

  <itemWebApiUpdate>
    <processor type="MyNamespace.Web.Pipelines.WtfWebApi, MyNamespace.Web" />
    <processor type="Sitecore.ItemWebApi.Pipelines.Update.ReadUpdatedScope, Sitecore.ItemWebApi" />
  </itemWebApiUpdate>

And... TA DA!!!! My fields are now updating.

This was a really difficult bug to find and I'd love to hear from anyone else who has used the update field functionality on the ItemWebApi since documentation is rather thin on the ground, especially for Sitecore 8.

The solution above is a HACK so please do not use this in a production environment. It was only put in place in order to get to the bottom of this issue.

If you have used the ItemWebApi to update field values, both successfully or otherwise, I'd love to hear from you so please use the comments form below. Alternatively, if you would prefer, feel free to get in touch with me via twitter @ninjanye

NDepend - My experiences

I recently started playing around with NDepend. Primarily I have been using NDepend to analyse NinjaNye.SearchExtensions.

First impressions

NDepend is hugely in-depth. My first analysis returned so much information it was difficult to know where to start. Fortunately, NDepend gives you a handy interactive dashboard that summarizes the results and allows you to drill down to the specific rule violations you are interested in.

UI

This product is vast. The sheer number of metrics means you can easily get lost. However, the UI (and VisualNDepend in particular) helps to resolve this by allowing you to quickly and easily navigate to any depth in the metrics.

Once you have completed an analysis you are given an HTML report as well as a Dashboard that summarises the key points. Other useful interactive tools to utilise are the Dependency Graph and the interactive Metric Map.

Personally, I prefer to run and analyse my metrics in VisualNDepend. The differences between this UI and the VS integrated view are small; however, VisualNDepend is built specifically for this purpose and doesn't have to compete with all my code windows in Visual Studio, which means everything just seems to flow a bit easier.

The Dashboard

NDepend Dashboard

The first thing that jumped out at me was the 2 critical rules that had been violated. The two rules I was breaking were as follows:

  • Avoid namespaces mutually dependent (1 occurrence)
  • Potentially dead methods (8 occurrences)

The first thing I thought I'd tackle was the 8 potentially dead methods. Fortunately, the methods it had identified were solely used when building expression trees, so none of the warnings were valid. I was able to easily remove this rule from my reporting, but that does mean the rule would not appear if I genuinely violated it.

The 'Mutually Dependent Namespaces' violation was the second critical violation and was something that was valid. Now that I was aware of it I was able to make a few changes to the code structure and resolve the issue.

Performance

There are some methods in SearchExtensions where performance is important, namely the Soundex and Levenshtein functionality. I was able to leverage the stats from NDepend to try to reduce the number of IL instructions and therefore improve performance.

My LevenshteinProcessor, for example, had a total of 257 IL instructions, which I was able to reduce to 218 (a 15% reduction). This was a concrete, measurable way of reducing IL instructions, which in turn should have a positive effect on performance, and something I would not have been able to do reliably without NDepend.

Warnings

Whilst analysing my code I did notice that some of it returned warnings that, it could be argued, were not issues at all. One instance of this was 'Potentially Dead Methods'. The methods flagged as dead were in fact used, but via expression trees, so NDepend was not aware that they were being used. In fairness to NDepend, the warning is prefixed with 'Potentially', and NDepend provides a really easy way of telling the analyser to ignore a method. You simply need to create an attribute and alter the rule definition as follows:

// If you don't want to link NDepend.API.dll,
// you can use your own IsNotDeadCodeAttribute and adapt this rule.
!m.HasAttribute("NinjaNye.SearchExtensions.NDepend.Attributes.IsNotDeadCodeAttribute".AllowNoMatch()))
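
The attribute itself can be trivial; below is a minimal sketch of what such a marker attribute might look like, with the name and namespace chosen to match the rule above:

using System;

namespace NinjaNye.SearchExtensions.NDepend.Attributes
{
    // Marker attribute whose only job is to tell NDepend that a method is reachable,
    // for example when it is invoked via expression trees or reflection.
    [AttributeUsage(AttributeTargets.Method | AttributeTargets.Constructor)]
    public class IsNotDeadCodeAttribute : Attribute
    {
    }
}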

Another warning that came up related to some Entity Framework Code First code. The critical rule that had been violated was the 'Method with too many parameters' rule. The feedback was that I had a bunch of methods with up to 15 parameters...??? (WHAT!!! Had I really done that...?). After some investigation I realised that the code at the source of this violation was Entity Framework configuration code that creates tables from my model classes, similar to the following:

CreateTable(
    "dbo.Table1",
    c => new
        {
            Id = c.Int(nullable: false, identity: true),
            CompetitionId = c.Int(nullable: false),
            WeekId = c.Int(nullable: false),
            StartDate = c.DateTime(nullable: false),
            ...
            ...
        })

This was slightly annoying and something I would like to avoid by editing the rule; however, at first glance I could not see an obvious way of ignoring anonymous methods. (Please get in touch if you have done this.)

Summary

I have been using NDepend for a couple of weeks now and I feel I am still only scratching the surface on what it can offer. The stats NDepend provide are so vast I have yet to master all but a few metrics. The metrics I have used have been incredibly useful in identifying code to refactor.

NDepend is also incredibly flexible and allows you to customise which rules to use as well as the definition of each rule meaning you can tighten or relax the definition of a rule to suit your needs.


If you would like to share your opinion, please use the comments form below. Alternatively, if you would prefer, feel free to get in touch with me via twitter @ninjanye

Search Extensions : Basic Levenshtein support

As part of release 1.3 of NinjaNye.SearchExtensions, 2 new features have been introduced.

This post talks about Levenshtein Distance support including, what has been delivered and what is still to come.

Basic Levenshtein Distance support

Using Search Extensions, you can now calculate the Levenshtein distance between a string property and any string value.

context.TestModels.LevenshteinDistanceOf(x => x.StringOne)
                  .ComparedTo("test");

The comparison can also be made between two properties:

context.TestModels.LevenshteinDistanceOf(x => x.StringOne)
                  .ComparedTo(x => x.StringTwo);

The result

The result of the above is defined as IEnumerable<ILevenshteinDistance<T>>.

In order to return the Levenshtein distance for a particular record, a new interface has been created. This interface allows us to return the result of the comparison as well as the source item itself and is defined as follows:

public interface ILevenshteinDistance<out T>
{
    int Distance { get; }
    T Item { get; }
}

This interface means that you can begin to filter out results based on the Levenshtein Distance. For example if we wanted to retrieve records where the Levenshtein Distance from "test" is less than 5 we would write the following:

var result = data.LevenshteinDistanceOf(x => x.StringOne)
                 .ComparedTo("test")
                 .Where(x => x.Distance < 5)
                 .Select(x => x.Item);

Future enhancements

Enhancements I'd like to make to the current Levenshtein support include the following:

  • Levenshtein Distance against multiple properties
  • Levenshtein Distance compared to multiple values.
  • A combination of the above

Levenshtein Distance against multiple properties

For example, it would be nice to be able to do something like the following:

context.TestModels.LevenshteinDistanceOf(x => x.StringOne, x => x.StringTwo)
                  .ComparedTo("test");

Levenshtein Distance compared to multiple values.

This would extend both the string search and the property search

// Distance from  "test" and "another"
context.TestModels.LevenshteinDistanceOf(x => x.StringOne)
                  .ComparedTo("test", "another");

// Distance from `StringTwo` and `StringThree`
context.TestModels.LevenshteinDistanceOf(x => x.StringOne)
                  .ComparedTo(x => x.StringTwo, x => x.StringThree);

Combining multiple properties with multiple comparisons

context.TestModels.LevenshteinDistanceOf(x => x.StringOne, x => x.StringTwo)
                  .ComparedTo("test", "another");

Useful Links

If you think you would find this package useful, you can download it using the following command.

PM> Install-Package NinjaNye.SearchExtensions


If you would like to get in touch about any of the above, please do so by adding a comment below, contact me on twitter (@ninjanye) or you can raise an issue on the GitHub project page

ContainingAll() string search feature

Recently I have been working on a couple of new additions to SearchExtensions. One of these changes was a request from a user to have the ability to only return results where all of the search terms were hit against any number of properties.

I'm pleased to say this feature is now part of the NinjaNye.SearchExtensions nuget package.

PM> Install-Package NinjaNye.SearchExtensions

How to use ContainingAll()

var result = context.Model.Search(x => x.Name, x => x.Desc)
                          .ContainingAll("test", "search");

This will return only records where all the search terms are matched in any of the defined properties, meaning that the following records would all be returned for the above.

{ Name= "test search", Desc = "desc"}
{ Name= "test", Desc = "search"}
{ Name= "ninja", Desc = "searchtest"}

This feature is also implemented as an IQueryable extension method, so the search can be performed on the data source, meaning you don't need to bring every potential record into memory before applying the filter.

IQueryable implementation

The SQL that is generated when using the IQueryable extension method will be similar to the following:

SELECT
[Extent1].[Id] AS [Id],
[Extent1].[Name] AS [Name],
[Extent1].[Desc] AS [Desc]
FROM [dbo].[Test] AS [Extent1]
WHERE (([Extent1].[Name] LIKE N'%test%') OR ([Extent1].[Desc] LIKE N'%test%'))
  AND (([Extent1].[Name] LIKE N'%search%') OR ([Extent1].[Desc] LIKE N'%search%'))

Thanks to @JamesReate for raising this as an issue. Chances are I would not have realised this was a requirement had you not made the request.

If you have a new feature request, please get in touch by adding a comment below, contact me on twitter (@ninjanye) or you can raise an issue on the github project page

Reversing a string efficiently

As part of a new feature on a nuget package I am working on I wanted to reverse a string into a new string. A simple request I thought, and indeed it was. Using System.Linq I was able to use the .Reverse() extension method.

var reversed = value.Reverse();

Wait a minute... my reversed variable isn't a string, it's IEnumerable<char>. No problem, just turn it back into a string (simply calling .ToString() on the enumerable won't do it, as that returns the type name rather than the characters). In most cases this would be fine, but for my particular scenario performance was important, so I went about creating a simple alternative:

public static class StringExtensions
{
    public static string ReverseString(this string value)
    {
        var sb = new StringBuilder(value.Length);
        for (int i = value.Length -1; i >= 0; i--)
        {
            sb.Append(value[i]);
        }
        return sb.ToString();
    }
}
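
For comparison, the LINQ-based version being measured is presumably something like the following sketch; note that the characters have to be materialised explicitly:

var reversedViaLinq = new string(value.Reverse().ToArray());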

After running some simple tests that reversed a 50 character string, the results were pleasing. The timings themselves differed between runs; however, the ReverseString() method consistently outperformed the LINQ-based approach.

An example of some of the test output shows that the time saved is not insignificant especially when you start dealing with millions of strings:

ReverseString test results:
.Reverse().ToString() time taken: 00:00:00.0017556
.ReverseString() time taken:      00:00:00.0003042
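
The timings above come from a simple Stopwatch comparison; a rough sketch of the approach (not the original test code, and requiring System.Diagnostics and System.Linq) might look like this:

var input = new string('a', 50); // a 50 character test string
var stopwatch = Stopwatch.StartNew();
var viaLinq = new string(input.Reverse().ToArray());
stopwatch.Stop();
Console.WriteLine(".Reverse() time taken:       {0}", stopwatch.Elapsed);

stopwatch.Restart();
var viaBuilder = input.ReverseString();
stopwatch.Stop();
Console.WriteLine(".ReverseString() time taken: {0}", stopwatch.Elapsed);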

UPDATE

My old buddy @paulthecyclist has managed to put forward an even quicker implementation of this functionality. Below is a quick summary of his implementation and how it compares to the above.

The Method

public static string ArrayReverse(this string value)
{
    var charArray = value.ToCharArray();
    Array.Reverse(charArray);
    return new string(charArray);
}

The method takes advantage of Array.Reverse() which as Paul points out is different to the Linq extension method on IEnumerable. This method is also much cleaner and simpler to read.

Performance

ReverseArray test results:
.ReverseString() time taken:      00:00:00.0003042
.ArrayReverse() time taken:       00:00:00.0002265

As you can see, the improvement in speed is quite substantial and will add up when performing this operation on many strings.

Thanks to @paulthecyclist for this addition. Be sure to go and check out his blog: http://paulthecyclist.com/


If you would like to get in contact, please do so by adding a comment below or you can contact me on twitter (@ninjanye)

I sometimes find myself writing something like the following when checking if a parent object is null before assignment:

string status;
if(parent == null)
{
    status = "no parent";
}
else
{
    status = parent.Status;
}

This to me seems long and cumbersome and only becomes more exaggerated if you need to check sub properties are not null.

string status;
if(parent == null)
{
    status = "no parent";
}
else if (parent.Child == null)
{
    status = "no child"
}
else if (parent.Child.Child == null)
{
    status = "no grandchild"
}
else
{
    status = parent.Child.Child.Status;
}

Now, I am aware of the new null-conditional (safe navigation) ?. operator, which does go some way to remedy this, and I am looking forward to using it in anger. However, the only issue I see is that you lose the ability to identify at which point the object was null, which in some cases might be required.

var status = parent?.Child?.Child?.Status;
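
Today the closest you can get is to combine ?. with ??, but the fallback is the same whichever link in the chain happened to be null; a quick sketch:

// All three null cases collapse into a single fallback value
var status = parent?.Child?.Child?.Status ?? "no status available";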

So I thought about it for a while. What are my options? How about a new operator?

Null Coalescing Ternary operator (??::)

string status = parent ?? "no parent"
                       :: parent.Child ?? "no child"
                       :: parent.Child.Child ?? "no grandchild"
                       :: parent.Child.Child.Status;

Let me break the above down with comments

                // parent is null assign "no parent"
string status = parent ?? "no parent"
                       // else if parent.Child is null assign "no child"
                       :: parent.Child ?? "no child"
                       // else if parent.Child.Child is null assign "no grandchild"
                       :: parent.Child.Child ?? "no grandchild"
                       // if all are not null, assign the grandchild status
                       :: parent.Child.Child.Status;

The big readability question?

I understand that some people find the ternary operator less readable than if/else statements, especially when nesting, and I tend to agree when it comes to long chains of nesting. So this idea might be dead in the water before it has even begun.

I personally put readability at the top of the priority list when performing code reviews or writing my own code, so the question is... Is the above more or less readable than the larger if/else example?

The above is merging two operators we are already aware of (null coalescing and ternary) so the learning curve should be small.

If you don't like the aforementioned operators, then chances are you're going to loathe this suggestion, and that's fine. I want to hear all arguments for and against this.


What are your thoughts?

I've been thinking about this on and off for a while. Will it be useful? Would you use such an operator? I'd like to hear your thoughts since, I hear, the ternary operator is not the most loved operator in the world; would this simply expand on an already 'out of love' operator?

If the feedback on this post is positive then I would consider submitting this to the Visual Studio UserVoice community. If the feedback is negative, I will not take offense, you will simply have saved me the embarrassment of putting a silly idea on to UserVoice.

If you would like to share your opinion, please use the comments form below. Alternatively, if you would prefer, feel free to get in touch with me via twitter @ninjanye

NinjaNye.SearchExtensions : Performance Analysis

This post is to explain, test and analyse the performance of the Soundex functionality within NinjaNye.SearchExtensions.

Test environment

All of the test results are from my development machine with the following specification:

  • Intel Core i5-3317U CPU @ 1.70GHz
  • 10Gb RAM
  • Windows 8.1 64bit operating system

Test Setup

The tests were all performed against 1 million randomly generated words ranging from 2 to 10 characters. Each test had a new set of randomly generated words to work with. This processing was not included in the timing results.

Building the random words was done as follows:

private List<string> words;
private void BuildWords(int wordCount)
{
    Console.WriteLine("Building {0} words...", wordCount);
    this.words = new List<string>();
    for (int i = 0; i < wordCount; i++)
    {
        string randomWord = this.BuildRandomWord();
        this.words.Add(randomWord);
    }
}

private const string letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
private string BuildRandomWord()
{
    var letterCount = RandomInt(2, 10);
    var sb = new StringBuilder(letterCount);
    for (int i = 0; i < letterCount; i++)
    {
        var letterIndex = RandomInt(0, 51);
        sb.Append(letters[letterIndex]);
    }
    return sb.ToString();
}

private static int RandomInt(int min, int max)
{
    var rng = new RNGCryptoServiceProvider();
    var buffer = new byte[4];

    rng.GetBytes(buffer);
    int result = BitConverter.ToInt32(buffer, 0);

    return new Random(result).Next(min, max);
}

The above methods meant I was simply able to have the following [SetUp] action performed before every test.

[SetUp]
public void Setup()
{
    this.BuildWords(1000000);
}

The tests

All the output you see below is taken from a single sample test run. These figures will vary slightly on each run since the source data is never exactly the same

Converting 1 million words to soundex

[Test]
public void ToSoundex_OneMillionRecords_UnderOneSecond()
{
    //Arrange
    Console.WriteLine("Processing {0} words", words.Count);
    var stopwatch = new Stopwatch();
    Console.WriteLine("Begin soundex...");
    stopwatch.Start();

    //Act
    var result = words.Select(x => x.ToSoundex()).ToList();
    stopwatch.Stop();
    Console.WriteLine("Time taken: {0}", stopwatch.Elapsed);
    Console.WriteLine("Results retrieved: {0}", result.Count());
    //Assert
    Assert.True(stopwatch.Elapsed.TotalMilliseconds < 1000);
}

Output

Building 1000000 words...
Processing 1000000 words
Begin soundex...
Time taken: 00:00:00.6148250
Results retrieved: 1000000

Querying 1 million words that sound like 'test'

[Test]
public void SearchSoundex_OneMillionWordsComparedToOneWord_UnderOneSecond()
{
    //Arrange
    Console.WriteLine("Processing {0} words", words.Count);

    var stopwatch = new Stopwatch();
    Console.WriteLine("Begin soundex search...");
    stopwatch.Start();

    //Act
    var result = words.Search(x => x).Soundex("test").ToList();
    stopwatch.Stop();
    Console.WriteLine("Time taken: {0}", stopwatch.Elapsed);
    Console.WriteLine("Results retrieved: {0}", result.Count);
    //Assert
    Assert.True(stopwatch.Elapsed.TotalMilliseconds < 1000);
}

Output

Building 1000000 words...
Processing 1000000 words
Begin soundex search...
Time taken: 00:00:00.6203847
Results retrieved: 554

Querying 1 million words that sound like 'test' or 'bacon'

[Test]
public void SearchSoundex_OneMillionWordsComparedToTwoWords_UnderOneSecond()
{
    //Arrange
    Console.WriteLine("Processing {0} words", words.Count);

    var stopwatch = new Stopwatch();
    Console.WriteLine("Begin soundex search...");
    stopwatch.Start();

    //Act
    var result = words.Search(x => x).Soundex("test", "bacon").ToList();
    stopwatch.Stop();
    Console.WriteLine("Time taken: {0}", stopwatch.Elapsed);
    Console.WriteLine("Results retrieved: {0}", result.Count);
    //Assert
    Assert.True(stopwatch.Elapsed.TotalMilliseconds < 1000);
}

Output

Building 1000000 words...
Processing 1000000 words
Begin soundex search...
Time taken: 00:00:00.4232135
Results retrieved: 1230

Querying 1 million words that sound like any of ten supplied words

[Test]
public void SearchSoundex_OneMillionWordsComparedToTenWords_UnderOneSecond()
{
    //Arrange
    Console.WriteLine("Processing {0} words", words.Count);

    var stopwatch = new Stopwatch();
    Console.WriteLine("Begin soundex search...");
    stopwatch.Start();

    //Act
    var result = words.Search(x => x).Soundex("historians", "often", "articulate", "great", "battles",
                                              "elegantly", "without", "pause", "for", "thought").ToList();
    stopwatch.Stop();
    Console.WriteLine("Time taken: {0}", stopwatch.Elapsed);
    Console.WriteLine("Results retrieved: {0}", result.Count);
    //Assert
    Assert.True(stopwatch.Elapsed.TotalMilliseconds < 1000);
}

Output

Building 1000000 words...
Processing 1000000 words
Begin soundex search...
Time taken: 00:00:00.5468836
Results retrieved: 7149

Pedal to the metal

I am really pleased with these performance results. I have not seen how this performs against other Soundex processors out there, but I hope it will be competitive. It was not always this performant and has only got to this stage through a lot of refactoring, backed up by my test cases.

You can see the full source code for the SoundexProcessor in the project's GitHub repository.

Performance tweaks

Regex is cool! but slooooooooow

An initial implementation of the SoundexProcessor used Regex. This was great as it enabled me to easily pass my tests, often by simply updating the regex codes. However, when it came to performance tests, processing 1 million records was taking close to 20 seconds... With my tests as a safety net I was able to rewrite the functionality the regex pattern was providing by simply analysing each character myself.

char.ToUpper() vs char.IsUpper()

Click here for the code this section relates to

Another small improvement I made: instead of converting each character to its upper-case counterpart and comparing that character to a set of upper-case characters, I simply check whether it is an upper-case character and perform the relevant checks based on that. This small change saved around 20% of the total running time at that point.
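
To illustrate the shape of that change, here is a purely illustrative sketch (not the actual SoundexProcessor code), using vowels as the example character set:

char c = 'e'; // some character from the word being processed

// Before: normalise first, then compare against upper case letters only
char upper = char.ToUpper(c);
bool isVowelBefore = upper == 'A' || upper == 'E' || upper == 'I' || upper == 'O' || upper == 'U';

// After: no conversion; branch on the case the character already has
bool isVowelAfter = char.IsUpper(c)
    ? c == 'A' || c == 'E' || c == 'I' || c == 'O' || c == 'U'
    : c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u';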


If you would like to know more about the soundex search functionality, please get in touch by adding a comment below or you can contact me on twitter (@ninjanye)

New Soundex support in NinjaNye.SearchExtensions

I have recently released a new version of NinjaNye.SearchExtensions nuget package. The main feature of this release is the Soundex search support.

PM> Install-Package NinjaNye.SearchExtensions

SearchExtensions is a library of IQueryable and IEnumerable extension methods to help simplify string searching.

What is Soundex

Soundex is a phonetic algorithm for indexing names by sound, as pronounced in English. The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling. [Source: Wikipedia]

As of release 1.1, NinjaNye.SearchExtensions supports converting and searching for words based on the soundex algorithm.

How to: Performing Soundex searches

Search where a single property sounds like a single search term

var result = data.Search(x => x.Property1).Soundex("test")

Search where any one of multiple properties sounds like a single search term

var result = data.Search(x => x.Property1, x => x.PropertyTwo)
                 .Soundex("test")

Search where a single property sounds like any one of multiple search terms

var result = data.Search(x => x.Property1).Soundex("test", "another")

Search where any one of multiple properties sounds like any of multiple search terms

var result = data.Search(x => x.Property1, x => x.PropertyTwo)
                 .Soundex("test", "another")

How to: Combining Soundex searches

Combining soundex searches is done in the same way as any other search, and a soundex search can be combined with any other search (although this may not always be appropriate).

Search where property1 sounds like term1 AND property2 sounds like term2

 var result = data.Search(x => x.Property1).Soundex("test")
                  .Search(x => x.Property2).Soundex("another")

How to: Converting words to Soundex

As part of this update I created an extension method on string that can be used to convert a word to its Soundex code. This extension method is public and can be used as you wish outside of the Search() functionality.

Producing the Soundex code for a word is simple. Firstly, make sure you are using the Soundex namespace:

using NinjaNye.SearchExtensions.Soundex;

Once you have this you can use the ToSoundex() extension method

string word = "test";
string soundex = word.ToSoundex();
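For reference, under the standard Soundex rules the word "test" encodes to T230, so a Soundex("test") search will match any word that shares that code.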

Converting multiple words to soundex codes

string sentence = "the quick brown fox";
string[] words = sentence.Split(' ');
var codes = words.Select(x => x.ToSoundex());

Performance

A lot of the examples I saw whilst researching the subject performed the same task but not always in the most performant way. Because of this I was keen to build something that would scale. Below are the tests I ran during development.

Test environment

All of these test results are from my development machine with the following specification:

  • Intel Core i5-3317U CPU @ 1.70GHz
  • 10Gb RAM
  • Windows 8.1 64bit operating system

All of the tests below were performed against 1 million randomly generated words ranging from 2 to 10 characters
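The random word generator itself isn't shown, but a minimal sketch of the kind of setup behind these figures might look like this (the alphabet and collection type are my assumptions); each timing below was then captured with a Stopwatch in the same way as the test shown earlier:

using System;
using System.Collections.Generic;
//...

var random = new Random();
const string letters = "abcdefghijklmnopqrstuvwxyz";

// Build 1 million random words of between 2 and 10 characters
var words = new List<string>(1000000);
for (int i = 0; i < 1000000; i++)
{
    int length = random.Next(2, 11); // upper bound is exclusive
    var word = new char[length];
    for (int j = 0; j < length; j++)
    {
        word[j] = letters[random.Next(letters.Length)];
    }
    words.Add(new string(word));
}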

Converting words using ToSoundex()

var result = words.Select(x => x.ToSoundex()).ToList();
Time taken: 0.6919661 seconds

Querying words that match 'test'

var result = words.Search(x => x).Soundex("test").ToList();
Time taken: 0.6385429 seconds (618 results)

Querying words that match two words

var result = words.Search(x => x).Soundex("test", "bacon").ToList();
Time taken: 0.4372583 seconds (1285 results)

Querying words that match ten words

var result = words.Search(x => x).Soundex("historians", "often", "articulate", "great", "battles", "elegantly", "without", "pause", "for", "thought").ToList();
Time taken: 0.5831033 seconds (7093 results)

To see a more in depth write up of the performance testing I have done, please see my latest post on the subject


Feature requests

I've had a great time developing this feature and I'm always open to new ideas, so if you have an idea for a feature that you believe would be a good addition to SearchExtensions, please get in touch.

Equally, if you are currently using SearchExtensions and can see areas that could be enhanced or improved, I'd love to hear from you.

If you would like to get in contact, please do so by adding a comment below or you can contact me on twitter (@ninjanye)

New Release of NinjaNye.SearchExtensions

I have recently released a new version of my nuget package NinjaNye.SearchExtensions. This is a major release and as such the version has been bumped to 1.0

PM> Install-Package NinjaNye.SearchExtensions

SearchExtensions is a library of IQueryable and IEnumerable extension methods to help simplify string searching.

Changes in Release 1.0

The changes made to the latest release of Search extensions are as follows:

  • Bump version to Release 1.0
  • Remove all previously [Obsolete] methods
  • Promote the fluent Search methods out of the NinjaNye.SearchExtensions.Fluent namespace
  • Remove the specific SearchAll() method in favour of utilising .Search()
  • Performance improvements
  • Code clean and refactoring

Bump version to 1.0

SearchExtensions has been helping projects for a while now and I feel that it has matured into something that I am happy to bring out of pre-release status. With the final set of changes (detailed below), I believe it is in a fit state to be given the released status.

This does not mean that development will halt. Far from it. In future releases I hope to build the following features:

If you have any features you would like to see on the task list, please get in touch by adding a comment below or you can contact me on twitter (@ninjanye)

Remove all previously [Obsolete] methods

The development of SearchExtensions, in the early days at least, was largely a voyage of discovery, especially when it came to defining the API that was exposed. As a result, some methods were made obsolete and superseded; a good example of this was the introduction of the fluent API, which left half a dozen public methods marked as [Obsolete]. Those methods have now been removed. All of the functionality still exists in the new fluent API, which has been in place for over 6 months, so I hope usage of the obsolete methods is now at a minimum.
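As a purely hypothetical illustration of how such a superseded method was signposted before being removed (this is not one of the library's actual methods):

// Hypothetical example only - points callers at the fluent replacement
[Obsolete("Use the fluent API instead, e.g. data.Search(x => x.Name).Containing(term)")]
public static IQueryable<T> OldStyleSearch<T>(this IQueryable<T> source,
                                              Expression<Func<T, string>> property,
                                              string searchTerm)
{
    // Forward existing callers on to the fluent equivalent
    return source.Search(property).Containing(searchTerm);
}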

Promote the fluent Search methods out of the NinjaNye.SearchExtensions.Fluent namespace

Previously, in order to use the fluent API you would have to include the following using statement in your code.

using NinjaNye.SearchExtensions.Fluent

Since the fluent API is now the only way of initiating a search, this has now been promoted to

using NinjaNye.SearchExtensions

This should mean that it is a lot easier to start using SearchExtensions once installed.

Upgrade .Search() to search all string properties

Previously if you wanted to search all string properties you would use a second method, namely SearchAll().

As part of this latest release, we have removed the additional SearchAll() method and simply updated Search() so that, by default, it will identify and search all string properties of an object if none are passed in.

//Search all string properties for "john"
var result = data.Search().Containing("john");

This new addition has been implemented across both IEnumerable and IQueryable extension methods

Performance improvements

Another inclusion in this release was a set of performance enhancements. The improvements we have implemented mean that the difference between using SearchExtensions and writing custom linq is negligible.

A further post to explore the performance comparisons will follow

Feature requests

I'm always open to new ideas so if you have an idea for a feature that you believe would be a good addition to SearchExtensions, please get in touch.

Equally, if you are currently using SearchExtensions and can see areas that could be enhanced or improved, I'd love to hear from you.

If you would like to get in contact, please do so by adding a comment below or you can contact me on twitter (@ninjanye)

Recently I wanted to expose part of this site so that a sister site on another domain could retrieve the most recent blog posts via a web api request. Luckily this is supported via a handy nuget package

PM> Install-Package Microsoft.AspNet.WebApi.Cors

This gives you access to an attribute that you can use to decorate your controllers/actions, making them available to predefined origins.
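For reference, the out-of-the-box usage looks roughly like this (the origin below is just the development URL used later in this post); the rest of this post builds a variation that reads the origins from configuration instead:

using System.Web.Http;
using System.Web.Http.Cors;
//...

// In WebApiConfig.Register - enable attribute based CORS support
config.EnableCors();

// On the controller (or an individual action)
[EnableCors(origins: "http://localhost:52811", headers: "*", methods: "*")]
public class PostsController : ApiController
{
    // ...
}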

Extending Web Api Cors using AppSettings

This package, in conjunction with an excellent article by Brock Allen on MSDN, helped me create a new attribute (below) that reads valid origins from the web.config appSettings section, meaning I can use the attribute and have different allowed origins for different flavours of the site or even different build configurations.

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false)]
public class EnableCorsByAppSettingAttribute : Attribute, ICorsPolicyProvider
{
    const string defaultKey = "cors:AllowedOrigins";
    private readonly string rawOrigins;
    private CorsPolicy corsPolicy;

    /// <summary>
    /// By default uses "cors:AllowedOrigins" AppSetting key
    /// </summary>
    public EnableCorsByAppSettingAttribute() 
        : this(defaultKey) // Use default AppSetting key
    {
    }

    /// <summary>
    /// Enables Cross Origin
    /// </summary>
    /// <param name="appSettingKey">AppSetting key that defines valid origins</param>
    public EnableCorsByAppSettingAttribute(string appSettingKey)
    {
        // Collect comma separated origins
        this.rawOrigins = ConfigurationManager.AppSettings[appSettingKey];
        this.BuildCorsPolicy();
    }

    /// <summary>
    /// Build Cors policy
    /// </summary>
    private void BuildCorsPolicy()
    {
        bool allowAnyHeader = String.IsNullOrEmpty(this.Headers) || this.Headers == "*";
        bool allowAnyMethod = String.IsNullOrEmpty(this.Methods) || this.Methods == "*";

        this.corsPolicy = new CorsPolicy
            {
                AllowAnyHeader = allowAnyHeader,
                AllowAnyMethod = allowAnyMethod,
            };

        // Add origins from app setting value
        this.corsPolicy.Origins.AddCommaSeperatedValues(this.rawOrigins);
        this.corsPolicy.Headers.AddCommaSeperatedValues(this.Headers);
        this.corsPolicy.Methods.AddCommaSeperatedValues(this.Methods);
    }

    public string Headers { get; set; }
    public string Methods { get; set; }

    public Task<CorsPolicy> GetCorsPolicyAsync(HttpRequestMessage request, 
                                               CancellationToken cancellationToken)
    {
        return Task.FromResult(this.corsPolicy);
    }
}

The above code takes advantage of a simple extension method called AddCommaSeperatedValues, which is defined as follows

internal static class CollectionExtensions
{
    public static void AddCommaSeperatedValues(this ICollection<string> current, string raw)
    {
        if (current == null)
        {
            return;
        }

        var valuesToAdd = raw.SplitCommaSeperatedValues();
        foreach (var value in valuesToAdd)
        {
            current.Add(value);
        }
    }
}

This in turn relies on another string extension method which turns a comma separated string into a collection of values

internal static class StringExtensions
{
    /// <summary>
    /// Splits a comma delimited string into a new collection
    /// </summary>
    /// <param name="raw">Comma delimited string of values to split</param>
    /// <returns></returns>
    public static IEnumerable<string> SplitCommaSeperatedValues(this string raw)
    {
        if (string.IsNullOrWhiteSpace(raw))
        {
            return Enumerable.Empty<string>();
        }

        return raw.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
                  .Select(s => s.Trim())
                  .ToList();
    }
}

Using the [EnableCorsByAppSetting] attribute

Using this new attribute is simple: just add it to the controller or action method you desire.

In the controller

We can either use the default app setting key by using the default constructor

[EnableCorsByAppSetting]
public class PostsController : ApiController
{
    public IEnumerable<PostSummaryViewModel> Get()
    {
        // Return recent posts
        // Code omitted
    }    
}

Or define our own appSetting to use by providing that appSetting key

[EnableCorsByAppSetting("cors:PostsOrigins")]
public class PostsController : ApiController
{
    public IEnumerable<PostSummaryViewModel> Get()
    {
        // Return recent posts
        // Code omitted
    }    
}

Now that this is set up, we simply need to add the origins to the appSetting.

In the web.config

Below we are defining the origin that is allowed during development

<appSettings>
    <add key="cors:AllowedOrigins" value="http://localhost:52811" />
</appSettings>

And if we want a different origin when in a release build, when the site goes live for example, we can simply add a transform to the web.release.config file

In web.release.config

<appSettings>
    <add key="cors:AllowedOrigins" value="http://www.mylivesite.com" 
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
</appSettings>

Thanks again to Brock Allen. I hope this is useful to those reading. If you have any questions or comments, please use the comments form below and I will endeavour to respond promptly.

I have just released the latest update to SearchExtensions nuget package.

This update allows for Ranked Searches to be performed as part of the fluent API.

Performing a Ranked Search

Using the fluent API, performing a ranked search is the same as performing a regular search; you simply need to transform your results to ranked using the .ToRanked() method.

var result = data.Search(x => x.Name)
                 .Containing("search")
                 .ToRanked();  // <-- This transforms the result

The ToRanked() method transforms the result from IQueryable<T> to IQueryable<IRanked<T>>

public interface IRanked<out T>
{
    int Hits { get; }
    T Item { get; }
}

This means you can order the results by those with the most hits. The hit count is also calculated across multiple properties; for example, you can add a second property to Search() and the Hits property will be calculated across all of the supplied properties.

var result = data.Search(x => x.Name, x => x.Description)
                 .Containing("search")
                 .ToRanked();

In the above example, the hit count will add together all hits in both the Name and Description properties.

The SQL

When connected to a SQL Server data source, performing a basic ranked search across 2 columns generates the following SQL:

SELECT 
    ((( CAST(LEN(
          CASE WHEN ([Extent1].[Name] IS NULL) 
               THEN N'' 
               ELSE [Extent1].[Name] 
          END) AS int)) - 
       ( CAST(LEN(REPLACE(
          CASE WHEN ([Extent1].[Name] IS NULL) 
          THEN N'' 
          ELSE [Extent1].[Name] 
          END, N'search', N'')) AS int))) / 6) 
    +  -- ** Add hits of first property to that of the second
    ((( CAST(LEN(
        CASE WHEN ([Extent1].[Description] IS NULL) 
             THEN N'' 
             ELSE [Extent1].[Description] 
         END) AS int)) - 
      ( CAST(LEN(REPLACE(
          CASE WHEN ([Extent1].[Description] IS NULL) 
               THEN N'' 
               ELSE [Extent1].[Description] 
          END, N'search', N'')) AS int))) / 6) AS [C2], 
    [Extent1].[Id] AS [Id], 
    [Extent1].[StringOne] AS [Name], 
    [Extent1].[Description] AS [Description]
    FROM [dbo].[TestModels] AS [Extent1]
    WHERE ([Extent1].[Name] LIKE N'%search%') OR ([Extent1].[Description] LIKE N'%search%')

This is fairly hard to decipher but I will try and explain what is going on. At the top we are counting the number of hits each row has. This is done by performing the following pseudo code:

([Property].length - [Property].Replace([searchTerm], '').length) / [searchTerm].length

This translates to the following SQL (with null checks). The following assumes we performed a search for the word "search" on the Property property.

-- ** GET THE LENGTH OF THE PROPERTY **    
    LEN(CASE WHEN ([Extent1].[Property] IS NULL) 
             THEN N'' 
             ELSE [Extent1].[Property] 
        END) 
-- ** SUBTRACT **
    -   
-- ** SEARCH FOR THE SEARCH TERM AND REMOVE OCCURRENCES **
    LEN(REPLACE(
          CASE WHEN ([Extent1].[Property] IS NULL) 
          THEN N'' 
          ELSE [Extent1].[Property] 
          END, N'search', N''))
 -- ** DIVIDE THE RESULT BY THE LENGTH OF THE 
 --    SEARCH TERM TO WORK OUT THE NUMBER OF HITS **
     / 6) 

Ranked Search combinations

Ranked searches can be combined with any of the other fluent search methods; however, it is only the Containing() method that contributes to the ranked search hit count.

var result = data.Search(x => x.Name, x => x.Description)
                 .StartsWith("abc", "test", "john")
                 .Containing("efg", "hij")
                 .ToRanked()

The above search in plain English...

  • Name OR Description property starts with "abc", "test", or "john"
  • AND Name OR Description contains "efg" or "hij"

I hope this new feature is helpful to those of you out there that have downloaded NinjaNye.SearchExtensions.

If you are new to NinjaNye.SearchExtensions you can download the package using the following command:

PM> Install-Package NinjaNye.SearchExtensions

Please let me know your thoughts in the comments below.

I am pleased to announce a NEW Fluent Search API for SearchExtensions nuget package

As of version 0.5, SearchExtensions now has a fluent API, giving you more control over your queries as well as making them easier to read. Here are the changes:

IQueryable Searching

Because of the Fluent API update, we are now able to offer up some additional methods to search.

Methods

Search methods available to IQueryable data are:

  • Containing - target property contains search term(s)
  • IsEqual - target property equals search term(s)
  • StartsWith - target property starts with search term(s)

Setup

Previous functionality of searching against any number of properties is still supported. Now however you only define the properties you want to perform a search against as part of the setup action.

using NinjaNye.SearchExtensions.Fluent;
//...

var result = data.Search(x => x.Name, x => x.Description)

Defining more than one property states that you want results that are matched within any of the properties.

Once we have identified the properties we wish to search we can start to perform some search actions

Performing a Containing search

Return all records where the Name property contains "search"

var result = data.Search(x => x.Name).Containing("search");

Return all records where the Name property OR the Description property contains "search":

var result = data.Search(x => x.Name, x => x.Description).Containing("search");

Return all records where the Name property OR the Description property contains "search" OR "term":

var result = data.Search(x => x.Name, x => x.Description).Containing("search", "term");

Performing a IsEqual search

Return all records where the Name property equals "search"

var result = data.Search(x => x.Name).IsEqual("search");

Return all records where the Name property OR the Description property equals "search":

var result = data.Search(x => x.Name, x => x.Description).IsEqual("search");

Return all records where the Name property OR the Description property equals "search" OR "term":

var result = data.Search(x => x.Name, x => x.Description).IsEqual("search", "term");

Performing a StartsWith search

Return all records where the Name property starts with "search"

var result = data.Search(x => x.Name).StartsWith("search");

Return all records where the Name property OR the Description property starts with "search":

var result = data.Search(x => x.Name, x => x.Description).StartsWith("search");

Return all records where the Name property OR the Description property starts with "search" OR "term":

var result = data.Search(x => x.Name, x => x.Description).StartsWith("search", "term");

Combining instructions

With the latest version of SearchExtensions you can also combine search actions. For instance

Search where the Name property starts with "john" AND contains "nye":

var result = queryableData.Search(x => x.Name)
                          .StartsWith("john")
                          .Containing("nye");

The ability to pass multiple search terms to any of the action methods still remains. The following returns any record where the Name property OR the Title property starts with either "john" or "web" AND contains "nye" or "developer"

var result = queryableData.Search(x => x.Name, x => x.Title)   
                          // that starts with "john" OR "web"
                          .StartsWith("john", "web")
                          // and contains "nye" OR "developer"
                          .Containing("nye", "developer")

IEnumerable (in memory) Searches

The fluent API has also been extended to support IEnumerable collections (not just IQueryable).

This means you can now perform all of the above searches on in-memory collections should you need to. The important thing to remember when performing an in-memory search is to set the string comparison you wish to use via SetCulture(). If SetCulture is not specified, StringComparison.CurrentCulture is used.

How to: Performing IEnumerable searches

These methods are identical to the IQueryable methods except that the comparison functions have an additional overload that takes a StringComparison.

The IEnumerable extensions also have an additional method named EndsWith.

var result = enumerableData.Search(x => x.Description)
                           // Set culture for comparison
                           .SetCulture(StringComparison.OrdinalIgnoreCase)
                           .StartsWith("abc")
                           .EndsWith("xyz")
                           .Containing("mno");

It is also possible to set the comparison multiple times

var result = enumerableData.Search(x => x.Description)
                           .SetCulture(StringComparison.OrdinalIgnoreCase)
                           .StartsWith("abc")  // Uses OrdinalIgnoreCase
                           .SetCulture(StringComparison.Ordinal)
                           .EndsWith("xyz")    // Uses Ordinal
                           .SetCulture(StringComparison.CurrentCulture)
                           .Containing("mno"); //Uses CurrentCulture

I hope you all enjoy this latest release. This package is still under development so I welcome any feedback you have.


Installation

To install SearchExtensions you can simply run the following in your Package Manager Console

PM> Install-Package NinjaNye.SearchExtensions

First things first, read the below blog post from Ned Batchelder. It was the single most useful blog post on this subject that I found:

http://nedbatchelder.com/text/stopbots.html

After finding Ned Batchelder's post I began to put his suggestions into action. This post is my take on his musings and covers the following subjects:

  • The Honeypot
  • The Timestamp
  • The Spinner
  • Field Names

Prep

Before we implement the fixes, let's assume we have something similar to the following, highly simplistic, example:

CreateViewModel.cs [Model]

public class CreateViewModel
{
    [Required]
    public string Name { get; set; }
    [Required, EmailAddress]        
    public string Email { get; set; }
    [Required]
    public string Content { get; set; }
    [Required]
    public int PostId { get; set; }
}

Create.cshtml [View]

@model CreateViewModel
@using(Html.BeginForm())
{
    @Html.AntiForgeryToken()
    @Html.HiddenFor(m => m.PostId)
    <div>
        @Html.LabelFor(m => m.Name): 
        @Html.TextBoxFor(m => m.Name)
    </div>
    <div>
        @Html.LabelFor(m => m.Email): 
        @Html.TextBoxFor(m => m.Email)
    </div>
    <div>
        @Html.LabelFor(m => m.Content):
        @Html.TextAreaFor(m => m.Content)
    </div>

    <input type="submit" value="submit" />
}

Controller.cs [Post action method only]

[HttpPost, ValidateAntiForgeryToken]
public ActionResult CreateComment(CreateViewModel viewModel)
{
    if(ModelState.IsValid)
    {
        //Save the comment and redirect
    }        
    return View(viewModel);        
}

Before we start, notice I have already implemented MVC's AntiForgeryToken and decorated my post action method with the [ValidateAntiForgeryToken] attribute to prevent Cross Site Request Forgery

The Honeypot

One way of thwarting the form-filling bots is to implement a honeypot. The idea here is that you have an input that is invisible to a human user but is picked up by the bots.

Implementing the Honeypot

We need to add the honeypot property to the Model and validate it as follows:

public class CreateViewModel : IValidatableObject
{
    // ...existing properties

    public string Honeypot { get; set; }

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if(!String.IsNullOrEmpty(this.Honeypot))
        {
            return new[]{ new ValidationResult("An error occurred") };
        }
        return new[]{ ValidationResult.Success };
    }        
}

Notice we are now implementing the IValidatableObject interface, which means we have to implement the Validate method. This is the method that is called when we check ModelState.IsValid in our controller.

'Displaying' the Honeypot

Obviously we are only displaying the honeypot to the bots, not the humans.

@model CreateViewModel
@using(Html.BeginForm())
{
    @* ..existing markup *@
    <div class="hidden">
        @Html.TextBoxFor(m => m.Honeypot)
    </div>
}

Above you can see that I have decided to hide the containing div, not the input itself. The hidden class is assumed to come from your own stylesheet (a rule that sets display: none, for example); a bot that blindly fills in every input will still populate the field.

The Timestamp

Implementing the Timestamp

The idea behind the timestamp is to record when the form was rendered so that stale submissions can be rejected (the validation below discards anything older than 20 minutes). You might decide to implement this in a slightly less obvious way; below I mix up how the timestamp is formatted when calling ToString()

public class CreateViewModel : IValidatableObject
{
    // ...existing properties

    // New constructor
    public CreateViewModel()
    {
        // Format the datetime in an inventive way
        this.Timestamp = DateTime.UtcNow.ToString("ffffHHMMyytssddmm");
    }

    // New property
    public string Timestamp { get; set; }
}

Notice how I have mixed up each of the date-parts to create an unintelligible value. Refer to the Custom Date and Time MSDN page to create your own variant.

Validating the Timestamp property means we need to extend the Validate() method.

public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
{
    var failedValidationResult = new ValidationResult("An error occurred");
    var failedResult = new[] { failedValidationResult };

    if(!String.IsNullOrEmpty(this.Honeypot))
    {
        return failedResult;
    }        

    DateTime timestamp;
    if (!DateTime.TryParseExact(this.Timestamp, "ffffHHMMyytssddmm", 
                                null, DateTimeStyles.None, out timestamp))
    {
        return failedResult;
    }

    //Check timestamp is within the last 20 minutes
    if (DateTime.UtcNow.AddMinutes(-20) > timestamp)
    {
        return failedResult;
    }
    return new[]{ ValidationResult.Success };
}            

Adding the Timestamp to the View

Next we need to add the Timestamp property as a hidden input to our View

@model CreateViewModel
@using(Html.BeginForm())
{
    @Html.HiddenFor(m => m.Timestamp)
    @* ..existing markup *@
    <div class="hidden">
        @Html.TextBoxFor(m => m.Honeypot)
    </div>
}

The Spinner

Hashing the timestamp together with other items adds another layer of security and makes the form tamper-proof should an attacker be able to decipher your custom date format. To do this we need to change our Model.

public class CreateViewModel : IValidatableObject
{
    public CreateViewModel()
    {
        // Format the datetime in an inventive way
        var timestamp = DateTime.UtcNow.ToString("ffffHHMMyytssddmm");
        this.Timestamp = timestamp;
        byte[] salt = HashHelper.CreateSalt();
        // Build your spinner in whichever way you choose            
        var toHash = String.Format("{0}{1}{2}", timestamp, Request.UserHostAddress, this.Id);
        //Be as creative as you please (I've added the word 'HASH')           
        //var toHash = String.Format("H{0}A{1}S{2}H", timestamp, Request.UserHostAddress, this.Id);            
        this.Hashed = Convert.ToBase64String(HashHelper.Hash(toHash, salt));            
        this.Salt = Convert.ToBase64String(salt);
    }

    // ...existing properties        

    public string Timestamp { get; set; }        
    public string Salt { get; set; }
    public string Hashed { get; set; }
}

As before, the additional properties need to be added to the View using the @Html.HiddenFor() Html helper.

Additionally, the HashHelper used above is a custom hash helper I have built that enables the quick creation of salts and hashes, and it will no doubt be the subject of a future post. You will need to decide how you generate your salts and hashes for your given situation.
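For completeness, a minimal sketch of the shape that helper needs for the code above to work (CreateSalt() returning a byte[] and Hash(string, byte[]) returning a byte[]) might look like this; it is only an illustration, not the helper used on this site:

using System;
using System.Security.Cryptography;
using System.Text;

internal static class HashHelper
{
    // Create a random salt using a cryptographic random number generator
    public static byte[] CreateSalt(int size = 16)
    {
        var salt = new byte[size];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(salt);
        }
        return salt;
    }

    // Hash the supplied value prefixed with the supplied salt
    public static byte[] Hash(string value, byte[] salt)
    {
        var valueBytes = Encoding.UTF8.GetBytes(value);
        var salted = new byte[salt.Length + valueBytes.Length];
        Buffer.BlockCopy(salt, 0, salted, 0, salt.Length);
        Buffer.BlockCopy(valueBytes, 0, salted, salt.Length, valueBytes.Length);

        using (var sha256 = SHA256.Create())
        {
            return sha256.ComputeHash(salted);
        }
    }
}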

Validating the Spinner

As before we need to update our Validate method to validate against our salt and hash.

public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
{
    var failedValidationResult = new ValidationResult("An error occurred");
    var failedResult = new[] { failedValidationResult };

    if(!String.IsNullOrEmpty(this.Honeypot) || String.IsNullOrEmpty(this.Timestamp)
       ||  String.IsNullOrEmpty(this.Salt) || String.IsNullOrEmpty(this.Hashed))
    {
        // Some expected data is missing
        return failedResult;
    }        

    DateTime timestamp;
    if (!DateTime.TryParseExact(this.Timestamp, "ffffHHMMyytssddmm", 
                                null, DateTimeStyles.None, out timestamp))
    {
        // Timestamp no longer matches our custom format
        return failedResult;
    }

    //Check timestamp is within the last 20 minutes
    if (DateTime.UtcNow.AddMinutes(-20) > timestamp)
    {
        // Form timestamp is older than 20 minutes ago 
        return failedResult;
    }

    byte[] salt = Convert.FromBase64String(this.Salt);

    // Remember to use the same format you did when building the hash
    var toHash = String.Format("{0}{1}{2}", this.Timestamp, Request.UserHostAddress, this.Id);
    //var toHash = String.Format("H{0}A{1}S{2}H", this.Timestamp, Request.UserHostAddress, this.Id);            

    var hashed = HashHelper.Hash(toHash, salt); 

    //Check the hash and salt have not been tampered with
    // If anything has changed or been altered, the hashes will not match
    if(this.Hashed.Equals(Convert.ToBase64String(hashed)))        
    {
        // Hashes match, all is good
        return new[]{ValidationResult.Success};
    }
    return failedResult;
}            

Adding the Hash and Salt to the View

We now need to add the properties to our form:

@model CreateViewModel
@using(Html.BeginForm())
{
    @Html.HiddenFor(m => m.Timestamp)
    @Html.HiddenFor(m => m.Salt)
    @Html.HiddenFor(m => m.Hashed)
    @* ..existing markup *@
    <div class="hidden">
        @Html.TextBoxFor(m => m.Honeypot)
    </div>
}

So now our form has everything we need added to it and the validation has been implemented. There is just one more task to complete.

Field names

In order to remove the obvious field names, we could hash them with a secret and a spinner; however, I chose to simply use random property names. This changes every aspect of our code. Our final structure might look like this:

Model

public class CreateViewModel : IValidatableObject
{
    public CreateViewModel()
    {
        // Format the datetime in an inventive way
        var timestamp = DateTime.UtcNow.ToString("ffffHHMMyytssddmm");
        this.PWHJVT = timestamp;
        byte[] salt = HashHelper.CreateSalt();
        // Build your spinner in whichever way you choose            
        var toHash = String.Format("{0}{1}{2}", timestamp, Request.UserHostAddress, this.Id);
        //Be as creative as you please (I've added the word 'HASH')           
        //var toHash = String.Format("H{0}A{1}S{2}H", timestamp, Request.UserHostAddress, this.Id);            
        this.BHVESH = Convert.ToBase64String(HashHelper.Hash(toHash, salt));            
        this.BSRGVS = Convert.ToBase64String(salt);
    }

    [Required(ErrorMessage = "The Name field is required")] 
    public string VJKSDN { get; set; }
    [Required(ErrorMessage = "The Email field is required")]
    [EmailAddress(ErrorMessage = "The Email field is not valid")]        
    public string GOVSWE { get; set; }
    [Required(ErrorMessage = "The Content field is required")]
    public string BPASCC { get; set; }
    [Required]
    public int ESVERI { get; set; }

    public string SGEGEH { get; set; } // Honeypot
    public string PWHJVT { get; set; } // Timestamp
    public string BSRGVS { get; set; } // Salt 
    public string BHVESH { get; set; } // Hash

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        var failedValidationResult = new ValidationResult("An error occurred");
        var failedResult = new[] { failedValidationResult };

        if(!String.IsNullOrEmpty(this.SGEGEH) || String.IsNullOrEmpty(this.PWHJVT)
           ||  String.IsNullOrEmpty(this.BSRGVS) || String.IsNullOrEmpty(this.BHVESH))
        {
            // Some expected data is missing
            return failedResult;
        }        

        DateTime timestamp;
        if (!DateTime.TryParseExact(this.PWHJVT, "ffffHHMMyytssddmm", 
                                    null, DateTimeStyles.None, out timestamp))
        {
            // Timestamp no longer matches our custom format
            return failedResult;
        }

        //Check timestamp is within the last 20 minutes
        if (DateTime.UtcNow.AddMinutes(-20) > timestamp)
        {
            // Form timestamp is older than 20 minutes ago 
            return failedResult;
        }

        byte[] salt = Convert.FromBase64String(this.BSRGVS);

        // Remember to use the same format you did when building the hash
        var toHash = String.Format("{0}{1}{2}", this.PWHJVT, Request.UserHostAddress, this.Id);
        //var toHash = String.Format("H{0}A{1}S{2}H", this.PWHJVT, Request.UserHostAddress, this.Id);            

        var hashed = HashHelper.Hash(toHash, salt); 

        //Check the hash and salt have not been tampered with
        // If anything has changed or been altered, the hashes will not match
        if(this.BHVESH.Equals(Convert.ToBase64String(hashed)))        
        {
            // Hashes match, all is good
            return new[]{ValidationResult.Success};
        }
        return failedResult;
    }                
}

View

@model CreateViewModel
@using(Html.BeginForm())
{
    @Html.AntiForgeryToken()
    @Html.HiddenFor(m => m.ESVERI)
    @Html.HiddenFor(m => m.PWHJVT)
    @Html.HiddenFor(m => m.BSRGVS)
    @Html.HiddenFor(m => m.BHVESH)

    <div>
        @Html.LabelFor(m => m.VJKSDN): 
        @Html.TextBoxFor(m => m.VJKSDN)
    </div>
    <div>
        @Html.LabelFor(m => m.GOVSWE): 
        @Html.TextBoxFor(m => m.GOVSWE)
    </div>
    <div>
        @Html.LabelFor(m => m.BPASCC):
        @Html.TextAreaFor(m => m.BPASCC)
    </div>
    <div class="hidden">
        @Html.TextBoxFor(m => m.SGEGEH)
    </div>

    <input type="submit" value="submit" />
}

Conclusion

Hopefully the above post will help you protect your sites from spam bots. Remember, this will not stop humans with malicious intent. If a human does try to take advantage of your site, make sure you have protected your output. For instance, if you are building a comment system, make sure you encode the comment when you write it out (for example @Html.Encode(Model.Content)) so that any html or scripts that might get posted are rendered harmless.

Final thought: I'd love to hear from anyone out there on this, as this post is merely my take on Ned's post for an ASP.Net MVC environment.

If there is enough interest I may follow this up with a new OSS project that makes it easy to make any class adhere to these rules; we could create custom html helpers and a base spam-proof class to base models on.

Let me know if that is something you think might be worth looking further into.

Thanks for reading this lengthy post. I hope it helped.

A new release of NinjaNye.SearchExtensions is now available from nuget. Following on from my previous post, I describe some changes to the method signatures used within search extensions.

Method signature restructure

Since the project began, the method signatures have not been very consistent and, as more methods were added, the mess was getting bigger, so I decided to set a standard to fix this.

Previously, methods were ordered to take advantage of the params keyword so that if a user had multiple search terms they could supply them very easily; likewise, if they wanted multiple properties, that was equally easy:

// Multiple search terms
var result = data.Search(x => x.Property1, "searchTerm1", "searchTerm2", "searchTerm3");
// Multiple properties
var result = data.Search("searchTerm1", x => x.Property1, x => x.Property2, x => x.Property3);

As the project grew this became messy and it was unclear whether to provide a search term first or a property. So I decided to put in place a rule whereby the first parameter always relates to the search term and, where needed, the second parameter always relates to the property. This means the first example above, with multiple search terms, now reads a bit differently:

// Old signature
var result = data.Search(x => x.Property1, "searchTerm1", "searchTerm2", "searchTerm3");
// New signature
var result = data.Search(new[]{ "searchTerm1", "searchTerm2", "searchTerm3" }, x => x.Property1);

Now, on this occasion, it may not be quite as nice to read, but the result is that all Search methods conform to the new standard, making the whole library much easier to use. Below is a list of the methods available, which I think illustrates the benefit of the change:

Old

Search(term);
Search(property, term);
Search(property, term, comparison);
Search(term, comparison, params properties);
Search(property, params terms);
Search(property, comparison, params term);
Search(terms, params properties);
Search(terms, comparison, properties);

New

Search(term);
Search(term, property);
Search(term, property, comparison);
Search(term, params properties);
Search(term, properties, comparison);
Search(terms, property);
Search(terms, property, comparison);
Search(terms, params properties);
Search(terms, properties, comparison);

I hope you agree that the new methods are much better than before and hopefully mean the package is much easier to integrate with.


NinjaNye.SearchExtensions is available as a nuget package and can be installed by running the following in your Package Manager Console

PM> Install-Package NinjaNye.SearchExtensions

As always, any comments are greatly appreciated. I'd love to hear how you are using SearchExtensions and maybe help find new ways to improve it.

A new release of NinjaNye.SearchExtensions is now available from nuget. One of the additions in the latest release is the ability to search all string properties of a given type.

Search All feature

This is a feature that was requested by a user on a previous post. Effectively, they wanted to perform a search on all string properties of the target type. With a bit of thought, this was fairly easy to implement. I already had a method that could take a list of properties, so all I needed to do was create a list of Expressions to represent each property of the given type. To do this I created a helper method as follows:

/// <summary>
/// Builds an array of expressions that map to every string property on an object
/// </summary>
/// <typeparam name="T">Type of object to retrieve properties from</typeparam>
/// <returns>An array of expressions for each string property</returns>
public static Expression<Func<T, string>>[] GetStringProperties<T>()
{
    var parameter = Expression.Parameter(typeof(T));
    var stringProperties = typeof(T).GetProperties()
                                    .Where(property => property.PropertyType == typeof(String)
                                                    && property.CanRead);

    var result = new List<Expression<Func<T, string>>>();
    // Loop through each string property...
    foreach (var property in stringProperties)
    {
        Expression body = Expression.Property(parameter, property);
        // ...and build a lambda expression to represent it
        result.Add(Expression.Lambda<Func<T, string>>(body, parameter));
    }
    return result.ToArray();
}

Once this was complete, I simply created a new Search overload and used my new helper method to retrieve the properties and pass them on to my existing search system.

/// <summary>
/// Search ALL string properties for a particular search term
/// </summary>
public static IQueryable<T> Search<T>(this IQueryable<T> source, string searchTerm)
{
    if (String.IsNullOrEmpty(searchTerm))
    {
        return source;
    }

    // Get all string properties from object
    var stringProperties = ExpressionHelper.GetStringProperties<T>();
    // Continue with search
    return source.Search(new[] {searchTerm}, stringProperties);
}

This can now be used to search all string properties on a given object as follows:

var result = data.Search("searchTerm");    

I hope you find the new functionality useful.


NinjaNye.SearchExtensions is available as a nuget package and can be installed by running the following in your Package Manager Console

PM> Install-Package NinjaNye.SearchExtensions

As always, any comments are greatly appreciated, if it wasn't for a comment on a previous post, the Search All feature would not have been implemented.

I have recently updated my site with a few design tweaks and changes. In addition, I thought it would be a good idea to integrate a nuget package I created called NinjaNye.SearchExtensions. It is pretty simple to integrate; here is how.

Installing NinjaNye.SearchExtensions nuget package

This part is really simple. Open the Package Manager Console from within Visual Studio and run the following command

PM> Install-Package NinjaNye.SearchExtensions

If you would prefer to use the Nuget GUI then you can do so by opening your package manager and searching for NinjaNye.SearchExtensions.

Integrating Ranked searches

One of the latest features of the search extensions is the ability to retrieve a ranked search result. Implementing this is as easy as performing a regular search, here is how:

var searchResult = this.postRepository.RetrieveAll()
                                      .RankedSearch(searchTerms, p => p.Title, p => p.Body)
                                      .OrderByDescending(r => r.Hits)
                                      .ThenByDescending(r => r.Item.Views)
                                      .ToList();

Once you have performed a ranked search, you can then order your results by the Hits property of IRanked<T> or by any other property of the original item. Above, I am ordering first by the most hits, then by the post with the most views.

This process simply builds up an expression tree, which means that when you call ToList() the expression tree is translated by the ORM for the specific data provider that sits behind it, and only matching results are returned from the data server

Try it out

This code is now live on my blog, so why not try a search using the search text box in the top right of the page, or download the package and have a go for yourself

I hope this is useful to someone, please give feedback using the comments form below.


Full source code for NinjaNye.SearchExtensions can be found here: https://github.com/ninjanye/searchextensions

The updates are coming thick and fast. I am pleased to say that I have extended NinjaNye.SearchExtensions to include support for searching with ranked results for both IQueryable and IEnumerable.

The SearchExtensions nuget package is available by running the following in your package manager console

PM> Install-Package NinjaNye.SearchExtensions

Performing a ranked search

Performing a ranked search imitates the current Search functionality but instead returns an IQueryable<IRanked<T>>

// Retrieve ranked search results where FirstName or LastName contains "john"
var result = context.Users.RankedSearch("john", u => u.FirstName, u => u.LastName);

So what is IRanked<T>? Well, it is simply as follows:

public interface IRanked<out T>
{
    int Hits { get; }
    T Item { get; }
}

Meaning, given the following users...

Id | FirstName | MiddleNames | LastName
53 | John      |             | Nye
54 | Johnny    | Jimmy       | Johnson
55 | Jimmy     | James       | Smith

The above RankedSearch() query returns the following data:

Hits | Id | FirstName | MiddleNames | LastName
1    | 53 | John      |             | Nye
2    | 54 | Johnny    | Jimmy       | Johnson

Notice it strips out Jimmy Smith since there are no matches and, instead of simply returning the data as the Search function would, RankedSearch() builds a new IRanked result which includes data about the search, specifically the number of times a search term was hit.

Order a ranked search by most relevant

The returned IRanked<T> result is still IQueryable, meaning you can then sort your ranked search results as you wish. The following example is fairly typical in that we are ordering our results by the most Hits.

// Retrieve ranked search results where FirstName or LastName contains "john"
var result = context.Users.RankedSearch("john", u => u.FirstName, u => u.LastName)
                          .OrderByDescending(r => r.Hits);

The SQL produced (when using a sql provider)

In order to count the number of occurrences (or hits) of a term in a way that can translate to SQL, I have constructed an expression tree equivalent to the following lambda for each property and search term

[property].Length - ([property].Replace([searchTerm], "").Length) / [searchTerm].Length

For our user search example this translates to:

([user.FirstName].Length - ([user.FirstName].Replace("john", "").Length) / "john".Length)
+ // Add the two hit counts together for each property
([user.LastName].Length - ([user.LastName].Replace("john", "").Length) / "john".Length)
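As a worked example using the table above, searching for "john" against Johnny Jimmy Johnson: FirstName "Johnny" is 6 characters long and becomes "ny" (2 characters) once "john" is removed, giving (6 - 2) / 4 = 1 hit; LastName "Johnson" is 7 characters and becomes "son" (3 characters), giving (7 - 3) / 4 = 1 hit; added together they produce the Hits value of 2 shown above.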

This, coupled with some null checks produces SQL similar to the following

SELECT 
    [Project1].[Id] AS [Id], 
    [Project1].[C1] AS [C1], 
    [Project1].[FirstName] AS [FirstName], 
    [Project1].[MiddleNames] AS [MiddleNames], 
    [Project1].[LastName] AS [LastName], 
    FROM ( SELECT 
        [Extent1].[Id] AS [Id], 
        [Extent1].[FirstName] AS [FirstName], 
        [Extent1].[MiddleNames] AS [MiddleNames], 
        [Extent1].[LastName] AS [LastName], 
        ((( CAST(LEN(CASE WHEN ([Extent1].[FirstName] IS NULL) THEN N'' ELSE [Extent1].[FirstName] END) AS int)) - ( CAST(LEN(REPLACE(CASE WHEN ([Extent1].[FirstName] IS NULL) THEN N'' ELSE [Extent1].[FirstName] END, N'john', N'')) AS int))) / 4) 
        + 
        ((( CAST(LEN(CASE WHEN ([Extent1].[LastName] IS NULL) THEN N'' ELSE [Extent1].[LastName] END) AS int)) - ( CAST(LEN(REPLACE(CASE WHEN ([Extent1].[LastName] IS NULL) THEN N'' ELSE [Extent1].[LastName] END, N'john', N'')) AS int))) / 4) AS [C1]
        FROM [dbo].[Users] AS [Extent1]
        WHERE (CASE WHEN ([Extent1].[FirstName] IS NULL) THEN N'' ELSE [Extent1].[FirstName] END LIKE N'%john%') OR (CASE WHEN ([Extent1].[LastName] IS NULL) THEN N'' ELSE [Extent1].[LastName] END LIKE N'%john%')
    )  AS [Project1]
    ORDER BY [Project1].[C1] DESC

That is all pretty difficult to read but the important part is that as part of the query you can see the following sql for each property and each search term

((( CAST(LEN(                                        -- Get length of property
    CASE WHEN ([Extent1].[FirstName] IS NULL)        -- Check property for null value
         THEN N''                                    -- Substitute null for empty string
         ELSE [Extent1].[FirstName]                  -- Use property value
     END) AS int)) 
-                                                    -- Minus the property without search term
   ( CAST(LEN(                                       -- Get the length of the replaced property
       REPLACE(                                      -- Replace search term in property
           CASE WHEN ([Extent1].[FirstName] IS NULL) --Null check repeated
                THEN N'' 
                ELSE [Extent1].[FirstName] 
           END, N'john', N'')) AS int))) 
       / 4)                                          -- Divide diff by the search term length

Hopefully this helps to explain the latest feature of the SearchExtensions project. I look forward to hearing your comments.

While writing some code for my NinjaNye.SearchExtensions project, I needed to find and replace a string with another string, crucially with StringComparison support so that users could specifically choose the culture and case rules used when searching.

Here is how I did it (as to my surprise, this is not supported by default as part of the string Replace() method)

internal static class StringExtensions
{
    public static string Replace(this string text, string oldValue, string newValue, 
                                 StringComparison stringComparison)
    {
        int position;
        while ((position = text.IndexOf(oldValue, stringComparison)) > -1)
        {
            text = text.Remove(position, oldValue.Length);
            text = text.Insert(position, newValue);
        }
        return text;
    }
}

As you can see, this is a simple extension method that uses the IndexOf overload that accepts a StringComparison, so the supplied comparison rules are honoured.

Usage of this method is as simple as the existing string Replace methods:

string text = "John Nye made this Extension method";
string newText = text.Replace("extension", "handy", StringComparison.OrdinalIgnoreCase);

The newText variable now equals "John Nye made this handy method", despite the case difference in the word "Extension".

Hope this is helpful to some of you out there.

For those interested in my SearchExtensions project, you can read more about it on the project's github pages. Alternatively, check out the SearchExtensions nuget package or install it by running the following in your package manager console

PM> Install-Package NinjaNye.SearchExtensions

I have recently updated my search extensions project to enable ranked search results. This enables a user to search for a term within a property but also order the results by the most relevant according to the number of hits.

Full source code can be found here: https://github.com/ninjanye/searchextensions

The SearchExtensions nuget package is also available by running the following

PM> Install-Package NinjaNye.SearchExtensions

The Goal

The thought behind a ranked search is to enable users to easily search their data collections and determine which results are more relevant than others.

How to use it

A ranked search is called in the same way as a regular search:

var result = queryableData.RankedSearch(x => x.Property, "searchTerm");

This produces the following SQL when used with a sql data provider. Notice that all the searching and ranking is done in SQL (not in memory)

SELECT 
    [Project1].[C1] AS [C1], 
    [Project1].[Property] AS [Property]
    ...
    FROM ( SELECT 
        [Extent1].[Property] AS [Property], 
        ...
        (( CAST(LEN([Extent1].[Property]) AS int)) - 
         ( CAST(LEN(REPLACE([Extent1].[Property], N'searchTerm', N'')) AS int))) 
        / 10 AS [C1]
        FROM [dbo].[Table] AS [Extent1]
        WHERE [Extent1].[Property] LIKE N'%searchTerm%'
    )  AS [Project1]

How it was built (Expression Trees)

So here is the implementation. Firstly, to represent my ranked result I have the following interface

public interface IRanked<out T>
{
    int Hits { get; }
    T Item { get; }
}

... with the following concrete class

internal class Ranked<T> : IRanked<T>
{
    public int Hits { get; set; }
    public T Item { get; set; }
}    

The RankedSearch extension method

public static class RankedSearchExtensions
{
    public static IQueryable<IRanked<T>> RankedSearch<T>(this IQueryable<T> source, 
                                            Expression<Func<T, string>> stringProperty, 
                                            string searchTerm)
    {
        var parameterExpression = stringProperty.Parameters[0];
        var hitCountExpression = CalculateHitCount(stringProperty, searchTerm);
        var rankedInitExpression = ConstructRankedResult<T>(hitCountExpression, 
                                                            parameterExpression);

        var selectExpression = 
               Expression.Lambda<Func<T, Ranked<T>>>(rankedInitExpression, parameterExpression);

        return source.Search(stringProperty, searchTerm)
                     .Select(selectExpression);
    }

The first thing this method does is call CalculateHitCount, which creates an expression that represents counting the number of times a search term occurs. I am using the following method of counting occurrences so that it can be used by all providers, specifically SQL.

Note: Always write down the code you are trying to build to help visualize the expression tree

x => (x.Name.Length - x.Name.Replace([searchTerm], "").Length) / [searchTerm].Length;

In terms of building the above as an expression tree, this was accomplished as follows:

private static BinaryExpression CalculateHitCount<T>(Expression<Func<T, string>> stringProperty, 
                                                     string searchTerm)
{
    Expression searchTermExpression = Expression.Constant(searchTerm);

    // Store term length to work out how many search terms were found
    Expression searchTermLengthExpression = Expression.Constant(searchTerm.Length);

    // Empty string expression to replace search terms with
    Expression emptyStringExpression = Expression.Constant("");        
    PropertyInfo stringLengthProperty = typeof (string).GetProperty("Length");

    //Calculate the length of property
    var lengthExpression = Expression.Property(stringProperty.Body, stringLengthProperty);

    // Replace searchTerm with empty string in property                                                     
    MethodInfo replaceMethod = typeof(string).GetMethod("Replace", 
                                                new[] {typeof (string), typeof (string)});
    var replaceExpression = Expression.Call(stringProperty.Body, replaceMethod, 
                                            searchTermExpression, emptyStringExpression);

    // Calculate length of replaced string
    var replacedLengthExpression = Expression.Property(replaceExpression, stringLengthProperty);

    // Calculate the difference between the property and the replaced property
    var charDiffExpression = Expression.Subtract(lengthExpression, replacedLengthExpression);

    // Divide the character difference by the number of characters in the
    // search term to get the amount of occurrences 
    return Expression.Divide(charDiffExpression, searchTermLengthExpression);
}

The second part of a RankSearch is to initialize a Ranked search result holding the hit count as well as returning the original item. We already have the hit count expression using the method above. We now need to build an expression tree that uses the hit count and builds a ranked result.

The equivalent lambda I want to build is as follows:

x => new Ranked<T>{ Hits = [hitCountExpression], Item = x}

This is represented as the following expression tree. It is fairly straightforward as it is simply initializing our ranked result:

private static Expression ConstructRankedResult<T>(Expression hitCountExpression, 
                                                   ParameterExpression parameterExpression)
{
    var rankedType = typeof (Ranked<T>);
    // Construct the object
    var rankedCtor = Expression.New(rankedType);

    // Assign hitCount to Hits property
    var hitProperty = rankedType.GetProperty("Hits");
    var hitValueAssignment = Expression.Bind(hitProperty, hitCountExpression);

    //Assign record to Item property
    var itemProperty = rankedType.GetProperty("Item");
    var itemValueAssignment = Expression.Bind(itemProperty, parameterExpression);

    // Initialize Ranked object with property assignments
    return Expression.MemberInit(rankedCtor, hitValueAssignment, itemValueAssignment);
}

Get in touch

I'm not entirely happy with the method name RankedSearch as it suggests the result is ordered by default. This is not the case, as the user can order the results as they see fit; RankedSearch simply provides an occurrence (hit) count of the search term. If you have a suggestion for a better method name, please get in touch via the comments below, twitter, or by emailing me using the link in the header

I am currently implementing the RankedSearch feature for use with multiple properties and multiple search terms (a future post, no doubt), but if you have any ideas for future features or enhancements then, again, please get in touch using the normal channels.

A while back when I was first releasing my blog, I came across an error that had not troubled me during local testing.

I had created a couple of new bundles to support Prettify in my BundleConfig.cs:

        bundles.Add(new ScriptBundle("~/Content/prettify")
                            .Include("~/Scripts/Prettify/prettify.js"));

        bundles.Add(new StyleBundle("~/Content/prettify")
                            .Include("~/Content/Prettify/prettify.css"));

After a bit of digging, the cause turned out to be the fact that the folder /Content/prettify already exists in my application, meaning that IIS was handling the request and not MVC. Once this was realised, the fix was simple: I changed the virtualPath parameter values so that they did not match existing folders.

        bundles.Add(new StyleBundle("~/bundles/css/prettify")
                            .Include("~/Content/Prettify/prettify.css"));

        bundles.Add(new ScriptBundle("~/bundles/js/prettify")
                            .Include("~/Scripts/Prettify/prettify.js"));

This was a simple fix once the cause was identified (from stackoverflow). I hope this is useful to anyone else out there with the same issue.

To be sure not to run into this error in the future I decided to use a simple prefix for all my bundles, namely bundles/css for styling and bundles/js for (yes, you've guessed it) script bundles.

I was recently validating a jQuery form submission and wanted to return all the model state errors as JSON back to the page. Retrieving all model state errors isn't as simple as you might think, since each field can contain multiple errors. Here is how I did it.

Define a class to hold each model error

public class Error
{
    public Error(string key, string message)
    {
        Key = key;
        Message = message;
    }

    public string Key{ get; set; }
    public string Message { get; set; }
}

Create an extension method to retrieve the errors

Because we are using Linq to retrieve the errors, we can either create the extension method using Method syntax or Query syntax. Both solutions are provided below.

Option 1: Uses Method syntax. It's a bit longer but arguably more readable.

public static class ModelStateExtensions
{
    public static IEnumerable<Error> AllErrors(this ModelStateDictionary modelState)
    {
        var result = new List<Error>();
        var erroneousFields = modelState.Where(ms => ms.Value.Errors.Any())
                                        .Select(x => new { x.Key, x.Value.Errors });

        foreach (var erroneousField in erroneousFields)
        {
            var fieldKey = erroneousField.Key;
            var fieldErrors = erroneousField.Errors
                               .Select(error => new Error(fieldKey, error.ErrorMessage));
            result.AddRange(fieldErrors);
        }

        return result;
    }
}

Option 2: Uses Query syntax. This solution is more compact but possibly less readable.

public static IEnumerable<Error> AllErrors(this ModelStateDictionary modelState)
{
    var result = from ms in modelState
                 where ms.Value.Errors.Any()
                 let fieldKey = ms.Key
                 let errors = ms.Value.Errors
                 from error in errors
                 select new Error(fieldKey, error.ErrorMessage);

    return result;
}

Using the extension method in a controller

Using the extension method is simple:

/**** IMPORTANT: Don't forget to add a using statement to use your extension method *****/

[HttpPost]
public ActionResult PostAction(MyViewModel viewModel)
{
    if (ModelState.IsValid)
    {
        // Perform logic
        return new Content("success");
    }

    //set error status
    Response.StatusCode = 400;
    // Important for live environment.
    Response.TrySkipIisCustomErrors = true;

    var modelErrors = ModelState.AllErrors(); // <<<<<<<<< SEE HERE
    return Json(modelErrors);
}

Notice the Response.TrySkipIisCustomErrors = true; line.

This caught me out when I released this code to a live environment because, by default, setting an error status code causes the IIS custom error page to replace your JSON response. See this Stack Overflow question for more information.

Reading the errors

All that is left to do now is set up our javascript method to read our json result.

$.post(postUrl, postData)
...
.fail(function (error) {
    var response = JSON.parse(error.responseText);
    for (var i = 0; i < response.length; i++) {
        var modelError = response[i];
        var fieldKey = modelError.Key;
        var message = modelError.Message;
        // apply custom logic with field keys and messages
        console.log(fieldKey + ': ' + message);
    }
});

I hope you find this helpful, please tweet this article using the links above or comment using the form below.

I recently created a custom comment engine for this blog and thought I'd share a simple overview of how I did it.

Demo: Simple validation demo to show custom comment validation

Creating the data models

In order to persist our comments in a data source, and retrieve them back, we need to define a data model that defines the information we require:

public class Comment
{
    [Required]
    public int Id { get; set; }

    [StringLength(50)]
    public string Author { get; set; }

    [Required]
    public string Body { get; set; }

    [StringLength(100)]
    public string Email { get; set; }

    [Required]
    public int PostId { get; set; }

    // Link to existing Post class 
    public virtual Post Post { get; set; }
}

I want to be able to access comments from a post so I also need to update my post model as follows:

public class Post
{
    // Current Properties...

    // New relationship property
    public virtual ICollection<Comment> Comments { get; set; }
}

Now that I have my data models I simply need to create a data migration and update my database using the following commands:

Add-Migration CreateComments -ProjectName Blog.Data -Verbose
Update-Database -ProjectName Blog.Data -Verbose

Now that our database is up to date you may need to implement some code in order to access your data, be it a repository or otherwise.
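
As a rough illustration (not the exact code behind this blog), a minimal Entity Framework based repository could look like the sketch below, where BlogContext is a hypothetical DbContext exposing a Comments set. It matches the Create and SaveChanges calls used in the controller further down:

public class CommentRepository
{
    // Hypothetical DbContext with a Comments DbSet
    private readonly BlogContext context;

    public CommentRepository(BlogContext context)
    {
        this.context = context;
    }

    public void Create(Comment comment)
    {
        context.Comments.Add(comment);
    }

    public IQueryable<Comment> GetByPostId(int postId)
    {
        return context.Comments.Where(c => c.PostId == postId);
    }

    public void SaveChanges()
    {
        context.SaveChanges();
    }
}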

Creating the view model

Now that the data can be persisted and retrieved from our datasource we can concentrate on how we allow users to create a comment. Firstly we need to create a viewmodel to represent a comment:

public class CommentViewModel
{
    //Represents a post id
    [Required]
    public int Id { get; set; }
    [Required]
    [StringLength(50)]
    public string Author { get; set; }
    [Required]
    [AllowHtml]
    public string Body { get; set; }
    [Required]
    [StringLength(100)]
    [DataType(DataType.EmailAddress)]
    [EmailAddress]
    public string Email { get; set; }
}

Creating the HttpPost action

We also need to create a post action in our controller to accept the comment form submissions. This will be a simple MVC post action like any other:

[HttpPost]
public ActionResult Comment(CommentViewModel viewModel)
{
    if (ModelState.IsValid)
    {
        //Mapping code - alternatively try AutoMapper
        var dataComment = new Comment();
        dataComment.PostId = viewModel.Id;
        dataComment.Author = viewModel.Author;
        dataComment.Body = viewModel.Body;
        dataComment.Email = viewModel.Email;

        // Create comment and save changes
        commentRepository.Create(dataComment);
        commentRepository.SaveChanges();

        return new EmptyResult();
    }

    var modelErrors = this.BuildModelErrors();
    // return a bad request to signify that adding the comment failed
    HttpContext.Response.StatusCode = 400;
    // return errors as Json, read by javascript
    return Json(modelErrors);
}

/// <summary>
/// Build a list of model errors from model state.  
/// This method flattens the model state errors.
/// </summary>
/// <returns>A list of Keys and Error messages</returns>
private List<ModelError> BuildModelErrors()
{
    var modelErrors = new List<ModelError>();
    var erroneousFields = this.ModelState.Where(ms => ms.Value.Errors.Any())
                                         .Select(x => new {x.Key, x.Value.Errors});

    foreach (var erroneousField in erroneousFields)
    {
        var fieldKey = erroneousField.Key;
        var fieldErrors = erroneousField.Errors.Select(error => 
                                            new ModelError(fieldKey, error.ErrorMessage));
        modelErrors.AddRange(fieldErrors);
    }
    return modelErrors;
}

//Class to hold model errors and the corresponding field key
private class ModelError
{
    public ModelError(string key, string errorMessage)
    {
        Key = key;
        ErrorMessage = errorMessage;
    }

    public string Key { get; set; }
    public string ErrorMessage { get; set; }
}        

If the view model fails validation I set a 400 (BadRequest) status code on the response and return the model errors as JSON. This will ensure that our response is not treated as a success in our javascript, but more on that further down.

Creating the form

We now need to create a form that can be used to create a comment. I have created this as a partial view so that I can add it easily to multiple pages.

I could have used a simple Html.BeginForm() or Ajax.BeginForm(), but I wanted a custom implementation with cleaner html, so I decided to use plain html and javascript with a lot of help from jQuery.

@model Blog.Models.Comments.CommentViewModel

<div class="comment-form-container">
    <form class="comment-form" data-action="@Url.Action("Comment", "Comments")">
        @Html.HiddenFor(m => m.Id)
        <div class="comment-body">
            <div>@Html.LabelFor(m => m.Author)</div>
            @Html.TextBoxFor(m => m.Author, new { Class = "comment-name" })
        </div>
        <div>
            <div>@Html.LabelFor(m => m.Email)</div>
            @Html.TextBoxFor(m => m.Email, new { Class = "comment-email" })
        </div>
        <div>
            <div>@Html.LabelFor(m => m.Body)</div>
            @Html.TextAreaFor(m => m.Body, new { Class="comment-body", rows="3", cols="50" })
        </div>
        <div class="comment-result" style="display: none;" >
            <span class="comment-result-text">An error occurred</span>
        </div>
        <div>
            <button type="submit" class="comment-form-submit">Submit comment</button>
        </div>
    </form>
</div>

Now we have a form, we simply need to hook up the form submission.

The javascript

Below is the javascript used to post our form to the controller:

$('.comment-form-container').on('click', '.comment-form-submit', function(e) {
    // prevent form submission
    e.preventDefault();
    var form = $(this).closest('form');
    var resultMessage = $('.comment-result', form);
    resultMessage.hide();

    var submitButton = $(this);
    // disable the submit button to stop 
    // accidental double click submission
    submitButton.attr('disabled', 'disabled');
    var resultMessageText = $('.comment-result-text', form);

    // client side validation
    if (validateComment(form, resultMessageText) == false) {
        resultMessage.addClass('comment-result-failure');
        resultMessage.fadeIn();
        // Re-enable the submit button
        submitButton.removeAttr('disabled');
        return;
    }

    var postUrl = form.data('action');
    var postData = form.serialize();

    $.post(postUrl, postData)
        .done(function(data) {
            resultMessage.addClass('comment-result-success');
            resultMessageText.html('Comment created successfully!');
            // Clear submitted value
            $('.comment-body', form).val('');
        })
        .fail(function() {
            resultMessage.addClass('comment-result-failure');
            resultMessageText.html('An error has occurred');
        })
        .always(function() {
            resultMessage.fadeIn();
            submitButton.removeAttr('disabled');
        });
});

I have also added some simple client side validation to check that required fields have values. This simply prevents an unnecessary post back to the server when we already know a required field hasn't been given a value.

// validate required fields
var validateRequired = function (input) {
    if (input.val().length == 0) {
        input.addClass('input-validation-error');
        return false;
    }
    return true;
};

var validateComment = function (commentForm, resultMessageContainer) {
    var isValid = true;
    var errorText = '';
    var name = $('.comment-name', commentForm);
    if (!validateRequired(name)) {
        errorText += 'The Name field is required.<br />';
        isValid = false;
    }

    var body = $('.comment-body', commentForm);
    if (!validateRequired(body)) {
        errorText += 'The Body field is required.<br />';
        isValid = false;
    }

    var email = $('.comment-email', commentForm);
    if (!validateRequired(email)) {
        errorText += 'The Email field is required.<br />';
        isValid = false;
    }

    if (isValid == false) {
        resultMessageContainer.html(errorText);
    }

    return isValid;
};  

In the following post I will show how we can leverage jQuery Validate to perform the client side validation for us automatically, replacing our custom validation.

Don't forget to check out the demo

Demo: Simple validation demo to show custom comment validation

Any comments, please let me know using the form below.

As part of my custom comment system I utilized a great tool from TopTenSoftware called MarkdownDeep. I was using this in conjunction with jQuery 1.9.1, which unfortunately caused a small issue with the resizer functionality on the site.

As you can see from the demo pages below, the fix was fairly trivial:

Demo: Current version of MarkdownDeep with jQuery 1.9.1

Demo: Fixed version of MarkdownDeep with jQuery 1.9.1

After getting the source code from GitHub it was simply a matter of locating the error, which lives in this.onResizerMouseDown:

// Handle click on resize bar
this.onResizerMouseDown = function (e) {
    // Initialize state
    var srcElement = (window.event) ? e.srcElement : e.target;

Debugging the file identified that e.srcElement was being used but was null. To fix this I simply put a null check around e.srcElement and fall back to e.target when it is null, as follows:

// Handle click on resize bar
this.onResizerMouseDown = function (e) {
    // Initialize state
    var srcElement = (window.event) ? (e.srcElement || e.target) : e.target;

And that was it. I am going to submit a pull request to the GitHub repository to see if I can get some feedback on this fix and maybe get it incorporated into the live code.

Hope this helps someone out there with the same problem.

Don't forget to comment using my new Markdown enabled comment engine!

Recently I had a requirement that, under certain conditions, a Sitecore field should become readonly. To accomplish this I created a new ReadOnlyWhenXYZValidator. The requirement stated that the validator should stop a user from saving the item if the field had changed. Here is how I accomplished it.

What I decided to do was create a custom validator that can be applied to each property that should become readonly once a condition is met.

[Serializable]
public class ReadOnlyWhenXYZValidator : StandardValidator
{
    public ReadOnlyWhenXYZValidator()
        : base()
    {
    }

    public ReadOnlyWhenXYZValidator(SerializationInfo info, StreamingContext context)
        : base(info, context)
    {
    }

    public override string Name
    {
        get { 
            return Sitecore.StringUtil.GetString(this.Parameters["ValidatorName"], 
                                                 this.GetType().ToString()); }
    }

    protected override ValidatorResult GetMaxValidatorResult()
    {
        return GetFailedResult(ValidatorResult.Error);
    }

    protected override ValidatorResult Evaluate()
    {
        Item item = this.GetItem();
        // The following will need to be replaced with your own logic to 
        // retrieve if an item should be readonly...
        var isReadOnly = ShouldBeReadOnly(item.ID.Guid);
        if (isReadOnly)
        {
            bool propertyHasChanged = HasPropertyChanged();
            if (propertyHasChanged)
            {
                this.Text = String.Format(
                   "Field '{0}' is readonly and cannot be updated",
                   this.GetFieldDisplayName());

                // Fatal Error is required in order to prevent saving
                return ValidatorResult.FatalError;
            }  
        }
        this.Text = "Valid";
        return ValidatorResult.Valid;
    }

    /// <summary>
    /// Returns a value indicating if the property value has changed
    /// </summary>
    private bool HasPropertyChanged()
    {
        var value = GetControlValidationValue();
        var contentItem = Sitecore.Context.ContentDatabase.GetItem(GetItem().ID);
        if (contentItem == null)
        {
            return false;
        }

        var oldValue = contentItem[GetField().ID];
        //Check if current value is different from an old value
        return value != oldValue;
    }
}

Notice I am returning a ValidatorResult.FatalError. This is the only ValidatorResult that prevents a user from saving an item, so although it sounds over the top, it is required.

Once complete, simply tell Sitecore to run the validator against the fields you wish to validate. This is done by firstly adding a new Validation Rule to System > Settings > Validation Rules > Field Rules or a sub folder of that.

Once added, simply navigate to your field definitions and, within the Validation Rules section, add your custom validator to each of the following:

  • Quick Action Bar
  • Validate Button
  • Validator Bar
  • Workflow

Once complete, load your items and you should find you can't save items that match the condition you defined. I hope this helps. Don't forget to share this article if you found it helpful.

If you want to validate your entity framework models but still automatically generate your models using entity framework database-first then you need to use partial classes to define your validation attributes. For example:

Say you have the following model

public class User {
    public string Name { get; set; }
}

If you wanted to put a string length validator on it you would need to create a partial class and utilize the MetadataTypeAttribute (this lives in System.ComponentModel.DataAnnotations).

The following classes should each be defined in their own separate file, NOT in the same file as your auto generated models, as that file is overwritten whenever your models are regenerated.

[MetadataTypeAttribute(typeof(UserMetadata))]
public partial class User {
}

The above code states that the metadata should be retrieved from a UserMetadata class, which should be defined as below. Again, this should not be placed in the auto generated file produced by Entity Framework.

public class UserMetadata{
    [StringLength(50)]
    public string Name {get; set;}
}

From now on if you try to create or update a user with a name that has more than 50 characters, validation will fail.
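
MVC model binding picks the buddy class up automatically, so ModelState.IsValid will reflect the StringLength rule. If you want to trigger the same validation manually outside of MVC, one option (a hedged sketch, not the only way) is to register the metadata provider yourself and use Validator:

using System.Collections.Generic;
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;

public static class UserValidationExample
{
    public static bool IsValid(User user)
    {
        // Tell the type descriptor that User's metadata lives on UserMetadata
        TypeDescriptor.AddProviderTransparent(
            new AssociatedMetadataTypeTypeDescriptionProvider(typeof(User), typeof(UserMetadata)),
            typeof(User));

        var results = new List<ValidationResult>();
        return Validator.TryValidateObject(user, new ValidationContext(user, null, null), results, true);
    }
}

// e.g. IsValid(new User { Name = new string('x', 51) }) returns false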

Recently @dot_Net_Junkie posted a question on Stack Overflow on how to implement TPT (Table Per Type) inheritance using Entity Framework. His original question used a DB-first approach, so my immediate thought was 'how about trying a code-first approach'. I set up a quick test project and this is how I did it:

The models

public abstract class BaseTable
{
    public int Id { get; set; }
    public string Name { get; set; }
}

[Table("DerivedWithRelation")]
public class DerivedWithRelation : BaseTable
{
    public int Amount { get; set; }
    public string About { get; set; }
    public int RelatedId { get; set; }

    public virtual ICollection<Relation> Relations { get; set; }
}

[Table("DerivedWithoutRelation")]
public class DerivedWithoutRelation : BaseTable
{
    public int Quantity { get; set; }
    public string Description { get; set; }
}

public class Relation
{
    public int Id { get; set; }
    public string RelationshipType { get; set; }

    public virtual DerivedWithRelation DerivedWithRelation { get; set; }
}

The context

public class MyContext : DbContext
{
    public MyContext()
        : base("DefaultConnection")
    {            
    }

    public IDbSet<BaseTable> BaseTables { get; set; }
    public IDbSet<DerivedWithRelation> DerivedWithRelations { get; set; }
    public IDbSet<DerivedWithoutRelation> DerivedWithoutRelations { get; set; }
}

Creating the Entity Framework code first migration

Running Add-Migration TPTCodeFirstTest -ProjectName MyProject.Data -StartUpProjectName MyProject.Data should create something similar to the following:

public partial class AddTPTTest : DbMigration
{
    public override void Up()
    {
        CreateTable(
            "dbo.BaseTables",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    Name = c.String(),
                })
            .PrimaryKey(t => t.Id);

        CreateTable(
            "dbo.Relations",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    RelationshipType = c.String(),
                    DerivedWithRelation_Id = c.Int(),
                })
            .PrimaryKey(t => t.Id)
            .ForeignKey("dbo.DerivedWithRelation", t => t.DerivedWithRelation_Id)
            .Index(t => t.DerivedWithRelation_Id);

        CreateTable(
            "dbo.DerivedWithRelation",
            c => new
                {
                    Id = c.Int(nullable: false),
                    Amount = c.Int(nullable: false),
                    About = c.String(),
                    RelatedId = c.Int(nullable: false),
                })
            .PrimaryKey(t => t.Id)
            .ForeignKey("dbo.BaseTables", t => t.Id)
            .Index(t => t.Id);

        CreateTable(
            "dbo.DerivedWithoutRelation",
            c => new
                {
                    Id = c.Int(nullable: false),
                    Quantity = c.Int(nullable: false),
                    Description = c.String(),
                })
            .PrimaryKey(t => t.Id)
            .ForeignKey("dbo.BaseTables", t => t.Id)
            .Index(t => t.Id);

    }

    public override void Down()
    {
        DropIndex("dbo.DerivedWithoutRelation", new[] { "Id" });
        DropIndex("dbo.DerivedWithRelation", new[] { "Id" });
        DropIndex("dbo.Relations", new[] { "DerivedWithRelation_Id" });
        DropForeignKey("dbo.DerivedWithoutRelation", "Id", "dbo.BaseTables");
        DropForeignKey("dbo.DerivedWithRelation", "Id", "dbo.BaseTables");
        DropForeignKey("dbo.Relations", "DerivedWithRelation_Id", "dbo.DerivedWithRelation");
        DropTable("dbo.DerivedWithoutRelation");
        DropTable("dbo.DerivedWithRelation");
        DropTable("dbo.Relations");
        DropTable("dbo.BaseTables");
    }
}

All that is left is to push our changes to the database:
Update-Database -ProjectName MyProject.Data -StartUpProjectName MyProject.Data -Verbose

Et voila: The following database structure is created:

Schema created using Entity Framework code first and TPT
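
To see the inheritance working end to end, here is a quick usage sketch (my own illustration, not part of the original question):

using (var context = new MyContext())
{
    context.DerivedWithRelations.Add(new DerivedWithRelation
    {
        Name = "With relation",
        Amount = 5,
        About = "Row is split across BaseTables and DerivedWithRelation"
    });
    context.DerivedWithoutRelations.Add(new DerivedWithoutRelation
    {
        Name = "Without relation",
        Quantity = 2,
        Description = "Row is split across BaseTables and DerivedWithoutRelation"
    });
    context.SaveChanges();

    // Querying the base set returns both derived types;
    // Entity Framework joins the TPT tables for us
    foreach (var row in context.BaseTables)
    {
        Console.WriteLine("{0}: {1}", row.GetType().Name, row.Name);
    }
}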

Hope that helps. If you found this useful, please share using the links at the start of this post.

Now available as a nuget package. Search for 'SearchExtensions' or run the following:

PM> Install-Package NinjaNye.SearchExtensions

Source code can be found here: https://github.com/ninjanye/searchextensions


I've recently updated my IQueryable search extension method library to include a search method that allows searching multiple terms against multiple properties. Here's the code, with some example output when connected to SQL.

The code

public static class QueryableExtensions
{
    /// <summary>
    /// Search multiple properties for multiple search terms
    /// </summary>
    /// <param name="source">Source data to query</param>
    /// <param name="searchTerms">search term to look for</param>
    /// <param name="stringProperties">properties to search against</param>
    /// <returns>Collection of records matching the search term</returns>
    public static IQueryable<T> Search<T>(this IQueryable<T> source, 
                                          IList<string> searchTerms, 
                                          params Expression<Func<T, string>>[] stringProperties)
    {
        if (!searchTerms.Any())
        {
            return source;
        }

        // The below is the equivalent lambda that is being constructed:
        // source.Where(x => x.[property1].Contains(searchTerm1)
        //                || x.[property1].Contains(searchTerm2)
        //                || x.[property2].Contains(searchTerm1)
        //                || x.[property2].Contains(searchTerm2)
        //                || x.[property3].Contains(searchTerm1)
        //                || x.[property3].Contains(searchTerm2)...)

        //Variable to hold merged 'OR' expression
        Expression orExpression = null;
        //Retrieve first parameter to use across all expressions
        var singleParameter = stringProperties[0].Parameters.Single();

        foreach (var searchTerm in searchTerms)
        {
            //Create expression to represent x.[property].Contains(searchTerm)
            ConstantExpression searchTermExpression = Expression.Constant(searchTerm);

            //Build a contains expression for each property
            foreach (var stringProperty in stringProperties)
            {
                //Synchronize single parameter across each property
                var swappedParamExpression = SwapExpressionVisitor.Swap(stringProperty, stringProperty.Parameters.Single(), singleParameter);

                //Build expression to represent x.[propertyX].Contains(searchTerm)
                var containsExpression = BuildContainsExpression(swappedParamExpression, searchTermExpression);

                orExpression = BuildOrExpression(orExpression, containsExpression);
            }
        }

        var completeExpression = Expression.Lambda<Func<T, bool>>(orExpression, singleParameter);
        return source.Where(completeExpression);
    }
}

The helper methods and SwapExpressionVisitor can all be found on my SearchExtension github project.

Using it

The new extension method can be used as follows (given source variable is of type IQueryable<Person> with FirstName and Nickname string properties)

var searchTerms = new List<string>{ "john", "fred", "barry" };
var result = source.Search(searchTerms, p => p.FirstName, 
                                        p => p.Nickname).ToList();

The result

When using a sql provider, the above produces the following sql

SELECT [Columns]
FROM [dbo].[People] AS [Extent1]
WHERE ([Extent1].[FirstName] LIKE N'%john%') 
   OR ([Extent1].[Nickname] LIKE N'%john%') 
   OR ([Extent1].[FirstName] LIKE N'%fred%') 
   OR ([Extent1].[Nickname] LIKE N'%fred%') 
   OR ([Extent1].[FirstName] LIKE N'%barry%') 
   OR ([Extent1].[Nickname] LIKE N'%barry%')    

Perfect...

Don't forget to download the package via nuget. I'll be releasing regular updates so please get in touch if you would like particular functionality added to the library.

If you found this post useful please share using the icons above or get in touch if you have any questions.

As part of my new blog I wanted to create my own URL shortener. I also wanted the short URLs to be based off the root of my site, allowing the following (as well as other controllers) to still work as normal:

Url Structure

I still want to maintain routing to my controllers, so the following URLs must behave as normal:

http://jnye.co/
http://jnye.co/Posts
http://jnye.co/Search

However, the following URL should also be supported (where '4N2VN7' can be replaced with any text):

http://jnye.co/4N2VN7

The Solution

The solution was fairly simple. First we have to create a route for each controller so they are not treated as short URLs. I also needed to add an additional route to identify the URL shortener. Below are the routes I added. Remember, order is important when it comes to routes: they are checked in order and, once one is matched, the remaining routes are not checked.

// Search controller
routes.MapRoute(
    name: "Search",
    url: "search/{action}/{id}",
    defaults: new {controller = "Search", action = "Index", id = UrlParameter.Optional}
);

//Posts controller
routes.MapRoute(
    name: "Posts",
    url: "posts/{action}/{id}",
    defaults: new { controller = "Posts", action = "Index", id = UrlParameter.Optional }
);

//Url shortener route
routes.MapRoute(
    name: "ShortUrl",
    url: "{shortUrl}",
    defaults: new { controller = "Home", action = "Index", shortUrl = UrlParameter.Optional }
);

//To catch http://jnye.co
routes.MapRoute(
    name: "Default",
    url: "{controller}/{action}/{id}",
    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);

In my Home controller I have the following code to match a short URL against my database. If none is supplied, or no match is found, I return the home page:

// GET: /Home/
public ActionResult Index(string shortUrl)
{
    if (string.IsNullOrEmpty(shortUrl))
    {
        //Show home page
        return View();                
    }

    //Retrieve related post
    var redirectPost = GetPostByShortUrl(shortUrl);
    if(redirectPost == null)
    {
        //No match, serve home page
        return View();
    }

    //Redirect to related post
    return RedirectToAction("Index", "Posts", new {area = "", id = redirectPost.Id});
}
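
The GetPostByShortUrl helper isn't shown above; a minimal sketch might look like this, assuming posts are stored with a ShortUrl column in an Entity Framework context (dataContext and the column name are assumptions):

private Post GetPostByShortUrl(string shortUrl)
{
    // Adjust to your own data access; this assumes a Posts set with a ShortUrl column
    return dataContext.Posts.FirstOrDefault(p => p.ShortUrl == shortUrl);
}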

...And that's it. Our very own custom URL shortener. Please share using the sharing links above, which utilize the very URL shortener I have implemented here ;-).

Here is my take on how test projects should be structured within a solution, as well as a list of my preferred tools. I have used this structure in the past and find it makes for easy navigation and understanding.

Framework

The testing framework I most commonly use is NUnit. I also prefer to use Moq as my mocking framework.

Project Structure

  • Create a test project for each project under test
  • Within the test project try and keep the namespaces in sync so that a test is immediately locatable by the class it is testing.
  • Create a folder named after the class under test, suffixed with ‘Tests’ (this is more than simply for descriptive purposes, it also saves a lot of headaches with namespaces)
  • Create a test file for each method under test, again with a ‘Tests’ suffix.

Below is a diagram which explains the above in a bit more detail.

// Project under test
- Solution.ProjectA.csproj [Project]
     - NamespaceA [Folder]
           - Namespace1 [Folder]
                TestableClass.cs [File]
                     - Method1()
                     - Method2() 

//Test project
- Solution.ProjectA.Tests.csproj [Test Project]
     - NamespaceA [Folder]
           - Namespace1 [Folder]
                - TestableClassTests [Folder]
                     - TestableClassTestBase.cs [File] <-- common setup methods and build mocks
                     - Method1Tests.cs [File]
                     - Method2Tests.cs [File]
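
For completeness, here is a hedged sketch of what the TestableClassTestBase mentioned above might contain, using NUnit and Moq (IDependency and TestableClass are placeholder names):

using Moq;
using NUnit.Framework;

public abstract class TestableClassTestBase
{
    protected Mock<IDependency> DependencyMock;
    protected TestableClass ObjectUnderTest;

    [SetUp]
    public void BaseSetUp()
    {
        // Common arrange step shared by Method1Tests, Method2Tests etc.
        DependencyMock = new Mock<IDependency>();
        ObjectUnderTest = new TestableClass(DependencyMock.Object);
    }
}

// Method1Tests.cs and Method2Tests.cs then simply inherit from TestableClassTestBase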

The Tests

In terms of the tests themselves, follow the Arrange, Act, Assert pattern as follows:

    [TestMethod]
    public void Method_Scenario_ExpectedResult()
    {
        //Arrange                  
        // The object under test would normally be setup in the [TestInitialize] method
        var objectUnderTest = ... 
        var parameterValue = ...

        //Act – perform the action you are testing
        var result = objectUnderTest.Method1(parameterValue);

        //Assert
        Assert.IsTrue(result);
    } 
  • Methods should be named as the title above describes (Method_Scenario_ExpectedResult)
  • Method – Method name under test, this will be the same for each test in a particular file.
  • Scenario – The scenario under test
  • ExpectedResult – the expected result (null is returned, a mock method is called, etc.)
  • Where possible each test should only have one assertion (although this isn’t always possible in reality)

If you have any questions or pointers to add, please don't hesitate to get in touch with me

I recently deployed a new test website to Windows Azure using Azure's Git integration. The problem I had is that I didn't want to drop and recreate the database each time I pushed a release (and lose my data), nor did I want to manually upgrade the database in Package Manager Console.

The solution (once found... hence the blog post) was simple.

In our DataContext we simply needed to set the initializer to migrate to the latest version. This is done as follows:

public class MyDataContext : DbContext
{
    public MyDataContext()
        :base("DefaultConnection")
    {            
    }

    // Context properties

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        Database.SetInitializer(
            new MigrateDatabaseToLatestVersion<MyDataContext, Configuration>());
        base.OnModelCreating(modelBuilder);
    }
}

It does exactly what it says on the tin (or class): when creating the data model, migrate the database to the latest version. Simples. The solution was clean and simple; it just took a bit of searching. Hopefully this post will help you resolve the same issue I had.
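
For reference, the Configuration type passed to MigrateDatabaseToLatestVersion is the migrations configuration class that Enable-Migrations generates; a typical (slightly trimmed) version looks like this:

using System.Data.Entity.Migrations;

internal sealed class Configuration : DbMigrationsConfiguration<MyDataContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = false;
    }

    protected override void Seed(MyDataContext context)
    {
        // Optional seed data; runs after the database has been migrated
    }
}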

And that's it. Every time I push to my release branch Azure picks it up and releases it. When the app starts my database is brought right up to date using my data migrations. Brilliant!

I recently read a blog post by Bipin Joshi on Creating Cascading DropDownLists Using ASP.NET MVC 4 And JQuery and it struck me that I would have coded this quite differently to the approach proposed by Bipin. That's not to say Bipin is wrong; this is simply my preferred solution to the problem, so I thought I'd share it.

See the live demo: /Demo/Jquery/cascading-dropdown-lists

Most of the changes are fairly minor, and some are purely for readability; nevertheless, I thought it would be useful to create a post about how I would accomplish the same task.

First off, I created a basic MVC4 web application which gave me some default files out of the box.

In the home controller I have decided to use a collection initializer to build the list of countries but, more importantly, I felt the code was cleaner using the ViewBag dynamic over ViewData.

    public ActionResult Index()
    {
        var countries = new List<string> {"USA", "UK", "India"};
        var countryOptions = new SelectList(countries);
        ViewBag.Countries = countryOptions;
        return View();
    }

Next is the GetStates() action method. Here I have made one change that enables me to retrieve the states over an HttpGet request. The reason for this is that I believe HttpGet is the best fit for this request, as we are simply retrieving information from the server. If we were adding or updating states then an HttpPost request would be required.

    public JsonResult GetStates(string country)
    {
        var states = new List<string>();
        switch (country)
        {
            case "USA":
                states.Add("California");
                states.Add("Florida");
                states.Add("Ohio");
                break;
            case "UK":
                states.Add("London");
                states.Add("Essex");
                break;
            case "India":
                states.Add("Goa");
                states.Add("Punjab");
                break;
        }

        //Add JsonRequest behavior to allow retrieving states over http get
        return Json(states, JsonRequestBehavior.AllowGet);
    }

The second, and final, part of my solution is the Index.cshtml file. In this file I have the html for the form as well as the javascript required to retrieve the states from the server.

First let's look at the entire Index.cshtml file (displayed in the next two code blocks):

@using (Html.BeginForm())
{
    <div>Select country:</div>
    <div>@Html.DropDownList("country", 
                            ViewBag.Countries as SelectList, 
                            "Please select", 
                            new { id = "country" })
    </div>
    <div>Select state:</div>
    <div>
        <select id="state" disabled="disabled"></select>
    </div>
    <input type="submit" value="Submit"/>
}


@section scripts
{
    <script type="text/javascript">
        $(function() {
            $('#country').on('change', function() {
                var stateDropdown = $('#state');
                //disable state drop down
                stateDropdown.prop('disabled', 'disabled');
                //clear drop down of old states
                stateDropdown.empty();

                //retrieve selected country
                var country = $(this).val();
                if (country.length > 0) {
                    // retrieve data using a Url.Action() to construct url
                    $.getJSON('@Url.Action("GetStates")', {
                        country: country
                    })
                    .done(function (data) {
                        //re-enable state drop down
                        stateDropdown.removeProp('disabled');
                        //for each returned state
                        $.each(data, function (i, state) {
                            //Create new option
                            var option = $('<option />').html(state);
                            //append state states drop down
                            stateDropdown.append(option);
                        });
                    })
                    .fail(function (jqxhr, textStatus, error) {
                        var err = textStatus + ", " + error;
                        console.log("Request Failed: " + err);
                    });
                }
            });
        })
    </script>
}

Unlike Bipin, I have decided to set the states drop down list as disabled by default, and also to use an overload of the @Html.DropDownList() extension method to set my default option as well as the id of the select list.

Below the html (in the same file) I have utilised the @section keyword, which enables me to define javascript that is automatically rendered at the bottom of the <body> tag thanks to the _Layout.cshtml page.

The first thing my javascript does is disable the states drop down as soon as the country value changes, regardless of the selection.

var stateDropdown = $('#state');
//disable state drop down
stateDropdown.prop('disabled', 'disabled');
//clear drop down of old states
stateDropdown.empty();

The main difference in my approach, however, is that I have decided to use jQuery's $.getJSON() method over $.ajax() as, in my opinion, it allows for more readable code and strips out some of the boilerplate code. It also negates the need to build an ajax options object.

$.getJSON('@Url.Action("GetStates")', {
    country: country
})

In order to get the url, I decided to use the @Url.Action() method in case of future routing or controller/action changes.

The second big change is that instead of setting success and error actions I have decided to use callback chaining in the form of .done() and .fail(). Again this is personal preference and largely down to readability.

.done(function (data) {
    //re-enable state drop down
    stateDropdown.removeProp('disabled');
    //for each returned state
    $.each(data, function (i, state) {
        //Create new option
        var option = $('<option />').html(state);
        //append state states drop down
        stateDropdown.append(option);
    });
})
.fail(function (jqxhr, textStatus, error) {
    var err = textStatus + ", " + error;
    console.log("Request Failed: " + err);
});

As an additional note, if the project catered for it I would probably implement the GetStates() method as a Web API method. This could easily be done by moving the method into a new controller that inherits from ApiController. You would then simply update the url in our $.getJSON call to point at the Web API method.
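
As a rough sketch (assuming the default Web API routing, so the url becomes something like /api/states?country=UK), the Web API version might look like this:

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class StatesController : ApiController
{
    // GET /api/states?country=UK
    public IEnumerable<string> Get(string country)
    {
        switch (country)
        {
            case "USA": return new[] { "California", "Florida", "Ohio" };
            case "UK": return new[] { "London", "Essex" };
            case "India": return new[] { "Goa", "Punjab" };
            default: return Enumerable.Empty<string>();
        }
    }
}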

See the live demo: /Demo/Jquery/cascading-dropdown-lists

Well, that's it. I hope you enjoy my take on this fairly common task. Please feel free to comment or tweet me, it'd be great to hear your thoughts on the different approaches.

Here are some links you might find useful that relate to this post:

Today I discovered a simple yet hugely beneficial feature of SpecFlow (and Cucumber) that I was previously unaware of.

The requirement was simple and so was the solution, once we found it.

In order to avoid repeating scenarios we wanted to be able to write a template scenario and iterate over a different set of parameters each time. Below is a simple, example of the problem as well as the solution.

Scenario: Selecting English language
    Given I am on the Home page
    When I select English
    Then the greeting should be Hello

Scenario: Selecting German language
    Given I am on the Home page
    When I select Deutsch
    Then the greeting should be Guten Tag

The solution was clean and simple and makes for incredibly intuitive test scripts.

Scenario Outline: Selecting a language
    Given I am on the Home page
    When I select <language>
    Then the greeting should be <greeting>
    Examples:
    | language | greeting  |
    | English  | Hello     |
    | Deutsch  | Guten Tag |

By changing our script from a Scenario to a Scenario Outline and replacing our languages and greetings with variables, we can add an Examples table to the foot of the test which defines how many times to run the test and with which values.
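
Behind the scenes the step definitions don't change at all; SpecFlow feeds each Examples row into the same bindings. Below is a hedged sketch of what those bindings might look like (the greeting lookup is faked purely for illustration):

using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class LanguageSelectionSteps
{
    private string displayedGreeting;

    [Given(@"I am on the Home page")]
    public void GivenIAmOnTheHomePage()
    {
        // Navigate to the home page in your test harness
    }

    [When(@"I select (.*)")]
    public void WhenISelectLanguage(string language)
    {
        // In a real test this would drive the UI; faked here for illustration
        displayedGreeting = language == "Deutsch" ? "Guten Tag" : "Hello";
    }

    [Then(@"the greeting should be (.*)")]
    public void ThenTheGreetingShouldBe(string greeting)
    {
        Assert.AreEqual(greeting, displayedGreeting);
    }
}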

If you want some further reading on using Scenario Outlines then I found the following useful: https://github.com/cucumber/cucumber/wiki/Scenario-Outlines

An interesting (if lengthy) question caught my eye on Stack Overflow this evening. I had a crack at it, so I thought I'd share the result.

The simplified problem was: given I have X apples and Y baskets, how can I evenly split the apples across the baskets? To solve it I ended up creating an IEnumerable extension method that can be applied to the source (the apples):

public static class EnumerableExtensions
{
    public static IEnumerable<IEnumerable<T>> Split<T>(this IEnumerable<T> source, int groups)
    {
        var listedSource = source.ToList();
        int extra;
        //Work out if number of items goes exactly into the groups
        int groupSize = Math.DivRem(listedSource.Count(), groups, out extra);

        while (listedSource.Any())
        {
            int newSize = groupSize;
            if (extra > 0)
            {
                //If any remainder items exist, increase the group size
                newSize++;
                extra--;
            }
            yield return listedSource.Take(newSize);
            listedSource = listedSource.Skip(newSize).ToList();
        }
    }
}

The above extension method enabled me to write the following code to split the apples:

int baskets = 4;
var apples = new List<int>{ 1, 2, 3, 4, 5, 6, 7, 8, 9 };
var result = apples.Split(baskets);    

And here's the proof...

Evenly splitting a list
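
Printing the result makes the grouping explicit: with 9 apples and 4 baskets, DivRem gives a group size of 2 with 1 left over, so the first basket receives the extra apple.

foreach (var basket in result)
{
    Console.WriteLine(string.Join(", ", basket));
}

// Output:
// 1, 2, 3
// 4, 5
// 6, 7
// 8, 9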

I'd be interested if anyone out there has any other approaches to this problem or any comments on the approach I have put together so please get in touch.

A common requirement in modern sites is for forms to be submitted and validated asynchronously. This was also a question posed by @troyhunt recently.

Here is my usual approach to the problem using MVC4.

First thing we need is a form. This form sits in a partial view (Create.cshtml)

@using (Html.BeginForm())
{
    <div>
        @* Add validation summary *@
        @Html.ValidationSummary(false)
    </div>
    <div>
        @Html.LabelFor(m => m.Property1)
        @Html.TextBoxFor(m => m.Property1)
    </div>
    <div>
        @Html.LabelFor(m => m.Property2)
        @Html.TextBoxFor(m => m.Property2)
    </div>
    <button id="submit-button" type="submit">Submit</button>
}

The partial view is then placed inside a containing div on the main page

<div id="form-container">
    @Html.Partial("Create")
</div>

Our post action is like any other, except for one thing. If all validation passes we return "success" as a string

[HttpPost]
public ActionResult Create(ViewModel viewModel)
{
    // Check model is valid
    if (!ModelState.IsValid)
    {
        //Return the form with validation errors
        return this.PartialView(viewModel);
    }

    //TODO: Perform success action

    return Content("success");
}

Finally, we have our javascript which will perform the post action

@section scripts
{
<script type="text/javascript">
    $(function() {
        $('#submit-button').on('click', function() {
            var form = $('#form-container form');
            var postUrl = form.attr('action');
            var postData = form.serialize();

            $.post(postUrl, postData, function (data) {
                if (data == 'success') {
                    //TODO: Add success action
                } else {
                    $('#form-container').html(data);
                }
            });
        });
    })
</script>
}

Now available as a nuget package. Search for 'SearchExtensions' or run the following:

PM> Install-Package NinjaNye.SearchExtensions

Source code can be found here: https://github.com/ninjanye/searchextensions


Continuing my latest theme of search extension methods, my new method allows users to search a property against multiple search terms.

The code has been added to my existing search extensions project which can be found on github by going to https://github.com/ninjanye/SearchExtensions

public static class QueryableExtensions
{
    public static IQueryable<T> Search<T>(this IQueryable<T> source, 
                                          Expression<Func<T, string>> stringProperty, 
                                          params string[] searchTerms)
    {
        if (!searchTerms.Any())
        {
            return source;
        }

        Expression orExpression = null;
        foreach (var searchTerm in searchTerms)
        {
            //Create expression to represent x.[property].Contains(searchTerm)
            var searchTermExpression = Expression.Constant(searchTerm);
            var containsExpression = BuildContainsExpression(stringProperty, searchTermExpression);

            orExpression = BuildOrExpression(orExpression, containsExpression);
        }

        var completeExpression = Expression.Lambda<Func<T, bool>>(orExpression, stringProperty.Parameters);
        return source.Where(completeExpression);
    }

    private static Expression BuildOrExpression(Expression existingExpression, Expression expressionToAdd)
    {
        if (existingExpression == null)
        {
            return expressionToAdd;
        }

        //Build 'OR' expression for each property
        return Expression.OrElse(existingExpression, expressionToAdd);
    }
}

This allows the following LINQ statement:

var users = context.Users.Search(u => u.UserName, "john", "bob", "fred");

... which, if used via Entity Framework against a SQL database, converts to the following SQL:

SELECT [Extent1].[Id] AS [Id],   
       [Extent1].[UserName] AS [UserName],   
       [Extent1].[FirstName] AS [FirstName],   
       [Extent1].[LastName] AS [LastName],   
       [Extent1].[Email] AS [Email]
FROM   [dbo].[Users] AS [Extent1]  
WHERE ([Extent1].[UserName] LIKE N'%john%')   
   OR ([Extent1].[UserName] LIKE N'%bob%')   
   OR ([Extent1].[UserName] LIKE N'%fred%')

Now available as a nuget package. Search for 'SearchExtensions' or run the following:

PM> Install-Package NinjaNye.SearchExtensions

Source code can be found here: https://github.com/ninjanye/searchextensions


Following on from my previous post on creating a generic search extension method for IQueryable, I decided to take the concept a step further and create an additional method that allows you to search multiple properties for a particular search term. The syntax I wanted to use for this new method was as follows:

//Search users where...
var ninjaUsers = dataContext.Users.Search("ninja", x => x.UserName,
                                                   x => x.FirstName,
                                                   x => x.LastName);

After a visit to stackoverflow and some expert guidance from @MarcGravell, this is the resulting code:

    public static IQueryable<T> Search<T>(this IQueryable<T> source, 
                                          string searchTerm, 
                                          params Expression<Func<T, string>>[] stringProperties)
    {
        if (String.IsNullOrEmpty(searchTerm))
        {
            return source;
        }

        var searchTermExpression = Expression.Constant(searchTerm);

        //Variable to hold merged 'OR' expression
        Expression orExpression = null;
        //Retrieve first parameter to use across all expressions
        var singleParameter = stringProperties[0].Parameters.Single();

        //Build a contains expression for each property
        foreach (var stringProperty in stringProperties)
        {
            //Synchronise single parameter across each property
            var swappedParamExpression = SwapExpressionVisitor.Swap(stringProperty, stringProperty.Parameters.Single(), singleParameter);

            //Build expression to represent x.[propertyX].Contains(searchTerm)
            var containsExpression = BuildContainsExpression(swappedParamExpression, searchTermExpression);

            orExpression = BuildOrExpression(orExpression, containsExpression);
        }

        var completeExpression = Expression.Lambda<Func<T, bool>>(orExpression, singleParameter);
        return source.Where(completeExpression);
    }

    private static Expression BuildOrExpression(Expression existingExpression, Expression expressionToAdd)
    {
        if (existingExpression == null)
        {
            return expressionToAdd;
        }

        //Build 'OR' expression for each property
        return Expression.OrElse(existingExpression, expressionToAdd);
    }

    private static MethodCallExpression BuildContainsExpression<T>(Expression<Func<T, string>> stringProperty, ConstantExpression searchTermExpression)
    {
        return Expression.Call(stringProperty.Body, typeof(string).GetMethod("Contains"), searchTermExpression);
    }

//Create SwapExpressionVisitor to merge the parameters from each property expression into one
public class SwapExpressionVisitor : ExpressionVisitor
{
    private readonly Expression from, to;
    public SwapExpressionVisitor(Expression from, Expression to)
    {
        this.from = from;
        this.to = to;
    }
    public override Expression Visit(Expression node)
    {
        return node == from ? to : base.Visit(node);
    }
    public static Expression Swap(Expression body, Expression from, Expression to)
    {
        return new SwapExpressionVisitor(from, to).Visit(body);
    }
}

Performing the following code against a DBContext (connected to a sql db):

//Search users where...
dataContext.Users.Search("ninja", x => x.UserName,  
                                          x => x.FirstName, 
                                          x => x.LastName).ToList();

Produces the following SQL:

SELECT [Extent1].[Id] AS [Id], 
       [Extent1].[UserName] AS [UserName], 
       [Extent1].[FirstName] AS [FirstName], 
       [Extent1].[LastName] AS [LastName], 
       [Extent1].[Email] AS [Email]
FROM   [dbo].[Users] AS [Extent1]
WHERE ([Extent1].[UserName] LIKE N'%ninja%') 
   OR ([Extent1].[FirstName] LIKE N'%ninja%') 
   OR ([Extent1].[LastName] LIKE N'%ninja%')

Because I can see more extension methods being added to this code I have created a SearchExtensions project on github. Please feel free to fork it and make your own additions. Many thanks to @MarcGravell for helping me to see this task through.

Now available as a nuget package. Search for 'SearchExtensions' or run the following:

PM> Install-Package NinjaNye.SearchExtensions

Source code can be found here: https://github.com/ninjanye/searchextensions


Following on from my previous post on creating a generic repository method, I decided to take it a step further and create a generic search extension method to perform the same task.

Here is the code:

public static class QueryableExtensions
{
    public static IQueryable<T> Search<T>(this IQueryable<T> source, Expression<Func<T, string>> stringProperty, string searchTerm)
    {
        if (String.IsNullOrEmpty(searchTerm))
        {
            return source;
        }

        // The below represents the following lambda:
        // source.Where(x => x.[property] != null
        //                && x.[property].Contains(searchTerm))

        //Create expression to represent x.[property] != null
        var isNotNullExpression = Expression.NotEqual(stringProperty.Body, 
                                                      Expression.Constant(null));

        //Create expression to represent x.[property].Contains(searchTerm)
        var searchTermExpression = Expression.Constant(searchTerm);
        var checkContainsExpression = Expression.Call(stringProperty.Body, typeof(string).GetMethod("Contains"), searchTermExpression);

        //Join not null and contains expressions
        var notNullAndContainsExpression = Expression.AndAlso(isNotNullExpression, checkContainsExpression);

        var methodCallExpression = Expression.Call(typeof(Queryable),
                                                   "Where",
                                                   new Type[] { source.ElementType },
                                                   source.Expression,
                                                   Expression.Lambda<Func<T, bool>>(notNullAndContainsExpression, stringProperty.Parameters));

        return source.Provider.CreateQuery<T>(methodCallExpression);
    }
}

Performing the following code against a DBContext (connected to a sql db):

string searchTerm = "test";
var results = context.Clubs.Search(club => club.Name, searchTerm).ToList();

Which produces the following SQL:

SELECT [Extent1].[Id] AS [Id], 
       [Extent1].[Name] AS [Name] 
FROM   [dbo].[Clubs] AS [Extent1]
WHERE  ([Extent1].[Name] IS NOT NULL) 
  AND  ([Extent1].[Name] LIKE N'%test%')

My next goal is to create an extension method that allows the user to pass multiple properties. The results will then match any of the supplied properties. Stay tuned...

Now available as a nuget package. Search for 'SearchExtensions' or run the following:

PM> Install-Package NinjaNye.SearchExtensions

Source code can be found here: https://github.com/ninjanye/searchextensions


Expression trees have been a bit of a magic box for me for a while so I decided to do something about it and learn a bit about them.

I work with entity framework quite a bit and in most cases I use a base repository to perform the common logic such as retrieving records by id, or retrieving all records. I decided to try and implement a generic string search method on my base repository.

I decided that I wanted the following syntax when calling my search functionality:

this.repository.Search(x => x.Name, searchTerm);

To break this down, the first parameter (x => x.Name) is a lambda expression that represents the string property I want to search within. The second parameter is the search text I want to match on.

After much trial and error, trawling Stack Overflow and swotting up on the MSDN documentation, I finally came up with the following:

public class Repository<T> : IRepository<T> 
    where T : class, IEntity
{
    /// <summary>
    /// Performs a search on the supplied string property
    /// </summary>
    /// <param name="stringProperty">Property to search upon</param>
    /// <param name="searchTerm">Search term</param>
    public virtual IQueryable<T> Search(Expression<Func<T, string>> stringProperty, string searchTerm)
    {
        var source = this.RetrieveAll();

        if (String.IsNullOrEmpty(searchTerm))
        {
            return source;
        }

        //The following is the query we are trying to reproduce
        //source.Where(x => T.[property] != null 
        //               && T.[property].Contains(searchTerm)

        //Create expression to represent T.[property] != null
        var isNotNullExpression = Expression.NotEqual(stringProperty.Body, Expression.Constant(null));

        //Create expression to represent T.[property].Contains(searchTerm)
        var searchTermExpression = Expression.Constant(searchTerm);
        var checkContainsExpression = Expression.Call(stringProperty.Body, typeof(string).GetMethod("Contains"), searchTermExpression);

        //Join not null and contains expressions
        var notNullAndContainsExpression = Expression.AndAlso(isNotNullExpression, checkContainsExpression);

        //Build final expression
        var methodCallExpression = Expression.Call(typeof (Queryable), 
                                                   "Where", 
                                                   new Type[] {source.ElementType}, 
                                                   source.Expression, 
                                                   Expression.Lambda<Func<T, bool>>(notNullAndContainsExpression, stringProperty.Parameters));

        return source.Provider.CreateQuery<T>(methodCallExpression);
    }

    public IDataContext DataContext { get; private set; }

    public Repository(IDataContext dataContext)
    {
        this.DataContext = dataContext;
    }

    /// <summary>
    /// Retrieve all records from context for a given type
    /// </summary>
    /// <returns></returns>
    public virtual IQueryable<T> RetrieveAll()
    {
        return this.DataContext.Set<T>();
    }
}
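To put the method in context, here is a rough usage sketch. Club and dataContext are stand-ins for your own entity type and data context (Club is assumed to implement IEntity so it satisfies the generic constraint):

// Hypothetical usage of the repository defined above
IRepository<Club> clubRepository = new Repository<Club>(dataContext);

// Roughly equivalent to clubs.Where(c => c.Name != null && c.Name.Contains("test"))
var matches = clubRepository.Search(x => x.Name, "test").ToList();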

Thanks in particular to the articles that helped me learn a bit about the magic box.

This task is not complete: I have yet to look at the SQL it produces when hooked up to my SQL database. I am also yet to look at its performance, but that will be the subject of a later post where I will hopefully make some tweaks and enhancements.

Any questions or pointers, please add a comment and I'll do my best to get back to you.

Recently I have been displaying enums as drop down lists on some MVC 3 projects and noticed there are a few people out there looking for a solution.

Here is an enum helper I use that turns an enum into a select list. Note: if an enum value has a description (set using the DescriptionAttribute), the helper will use that as its display text.

public static class EnumHelper
{
    // Get the value of the description attribute if the   
    // enum has one, otherwise use the value.  
    public static string GetDescription<TEnum>(this TEnum value)
    {
        var fi = value.GetType().GetField(value.ToString());

        if (fi != null)
        {
            var attributes = (DescriptionAttribute[])fi.GetCustomAttributes(typeof(DescriptionAttribute), false);

            if (attributes.Length > 0)
            {
                return attributes[0].Description;
            }
        }

        return value.ToString();
    }

    /// <summary>
    /// Build a select list for an enum
    /// </summary>
    public static SelectList SelectListFor<T>() where T : struct
    {
        Type t = typeof(T);
        return !t.IsEnum ? null
                         : new SelectList(BuildSelectListItems(t), "Value", "Text");
    }

    /// <summary>
    /// Build a select list for an enum with a particular value selected 
    /// </summary>
    public static SelectList SelectListFor<T>(T selected) where T : struct
    {
        Type t = typeof(T);
        return !t.IsEnum ? null
                         : new SelectList(BuildSelectListItems(t), "Value", "Text", selected.ToString());
    }

    private static IEnumerable<SelectListItem> BuildSelectListItems(Type t)
    {
        return Enum.GetValues(t)
                   .Cast<Enum>()
                   .Select(e => new SelectListItem { Value = e.ToString(), Text = e.GetDescription() });
    }
}
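For illustration, a hypothetical enum decorated with the DescriptionAttribute might look like this (the member names other than MyEnumValue are just examples):

using System.ComponentModel;

// The Description text is what GetDescription() returns,
// and therefore what is shown in the drop down list.
public enum MyEnum
{
    [Description("My first value")]
    MyEnumValue,

    [Description("My second value")]
    AnotherValue,

    // No description, so the raw name "Other" is displayed
    Other
}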

Once you have this helper class in place you can do the following.

In your controller:

//If you don't have an enum value use the type
ViewBag.DropDownList = EnumHelper.SelectListFor<MyEnum>();

//If you do have an enum value use the value (the value will be marked as selected)    
ViewBag.DropDownList = EnumHelper.SelectListFor(MyEnum.MyEnumValue);

In your View:

@Html.DropDownList("DropDownList")
@* OR *@
@Html.DropDownListFor(m => m.Property, ViewBag.DropDownList as SelectList, null)

Hey presto, you have a drop down list for your enums which binds back to your view model on post.
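If you want to see the strongly typed route end to end, here is a minimal sketch; the view model, controller and action names are hypothetical:

using System.Web.Mvc;

public class MyViewModel
{
    // Bound from the posted form value
    public MyEnum Property { get; set; }
}

public class MyController : Controller
{
    public ActionResult Edit()
    {
        ViewBag.DropDownList = EnumHelper.SelectListFor(MyEnum.MyEnumValue);
        return View(new MyViewModel());
    }

    [HttpPost]
    public ActionResult Edit(MyViewModel model)
    {
        // model.Property now contains the value selected in the drop down
        return View(model);
    }
}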

In this example I have two tables in my database, Customer and Company. Customer belongs to the dbo schema, whereas Company belongs to the 'other' schema. By default the edmx file does not offer up information about the schema of the original tables; however, this information is stored in the underlying XML.

[Image: the underlying edmx XML, showing the schema information for each table]

To retrieve this information we need to edit our tt template.

NOTE: I'd recommend getting the tangible T4 Editor when working with .tt files as it provides syntax highlighting and IntelliSense, which makes working with these files much easier.

At the top of our tt template (after the input file has been declared) we need to create a new store item collection that will grab the schema information using the code below.

// 'loader' and 'inputFile' are already declared near the top of the standard EF .tt template
StoreItemCollection sic;
loader.TryCreateStoreItemCollection(inputFile, out sic);
EntityContainer sicEntityContainer = sic.GetItems<EntityContainer>().FirstOrDefault();

Then, from within the foreach (EntityType entity in ItemCollection.GetItems<EntityType>()...) loop, you can get the current schema name with the following:

string schemaName = "Unknown";
if (sicEntityContainer != null)
{
    EntitySet eset = sicEntityContainer.GetEntitySetByName(code.Escape(entity), true);
    schemaName = eset.MetadataProperties["Schema"].Value.ToString();
}

Now that we have the schema name, we can do what we like with it. You might want to add a read-only property by adding the following just after the class is created.

public string Schema
{
    get { return "<#= schemaName #>"; }
}

With these small changes our auto generated classes now look like this:

[Image: comparison of the generated code before and after the template change]
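As a rough illustration (the property names are made up), the generated class for the Company table in the 'other' schema might now include something like this:

// Hypothetical generated output for the Company entity
public partial class Company
{
    // Added by the modified .tt template
    public string Schema
    {
        get { return "other"; }
    }

    public int Id { get; set; }
    public string Name { get; set; }
}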

Hope this helps.


What I do

About me

I'm an agile developer from Essex, working in London. I'm a keen advocate of test driven development and have experienced real world gains through practicing TDD.

Certifications

- MCTS in Web Applications Development with Microsoft .NET Framework 4 (2010)
- 70-480: Programming in HTML5 with JavaScript and CSS3 (2013)
- 70-486: Developing ASP.NET MVC 4 Web Applications (2013)

Experience

- Senior Developer, Codehouse (February 2013 - Current)
- Web Developer, Freshfields Bruckhaus Deringer LLP (November 2012 - February 2013)
- Developer, Atlas Computer Systems (May 2010 - November 2012)
- IT Analyst, Freshfields Bruckhaus Deringer (2007 - 2010)
- IT Analyst, Aon Consulting (2004 - 2007)


I am a software engineer from Essex, working in London. I largely work within the Microsoft stack, but have a wealth of experience across multiple languages and frameworks. I built this blog from scratch using ASP.NET MVC 4, Entity Framework Code First and MarkdownDeep.

If you have any feedback on my blog or just want to strike up a conversation, I'd love to hear from you. You can get in touch with me via Twitter (@ninjanye) or by email using the links in the header.

Check out my latest posts below, or use my tag cloud to find previous posts on a specific subject.