Wednesday, December 19, 2012

Using async/await (TPL) with Orchard

While working on a module for Orchard CMS I ran into some problems trying to glue async (Task) based code with (classic) synchronous code.

Here's the set-up:
  • Service TheMovieDb to call the themoviedb.org API.
  • Service MovieInfo to integrate TheMovieDb with Orchard.
  • AdminController for the front-end to call MovieInfo and coordinate views.

TheMovieDb uses Tasks, so I don't have to wait around for across-the-wire calls. MovieInfo uses Tasks too, and integrates with Orchard services. AdminController uses normal Action methods.

Being a TDD developer, I had all the pieces looking fine in isolation. But as soon as I put them together... splat.

It all starts with a deadlock (doesn't it always!). A quick google leads to Stephen Cleary (via SO): Don't Block on Async Code, and Async and Await. Armed with these, I now at least understand the problem.

Alas, at least in Orchard 1.6, async controllers are a no-go. It took a fair bit of searching, but I finally found an SO article describing the problem. So (for now) I've settled on a mixed sync / async solution.

Putting it all together:
  • TheMovieDb service stays as a normal async service.
  • MovieInfo service calls TheMovieDb with ConfigureAwait(false).
  • AdminController stays synchronous; it blocks on calls to MovieInfo, waiting for Result.

There was one last gotcha; anything after the await in the MovieInfo service executes on its own thread, so there's no access to Orchard services (such as WorkContext etc.). To get around this I just had to take care not to need the 'Orchard friendly' thread context after the await.
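In code, the arrangement looks roughly like this. This is a sketch only; apart from ConfigureAwait(false) and .Result, the type and method names are illustrative assumptions, not the module's actual code:

```csharp
using System.Threading.Tasks;
using System.Web.Mvc;

public class MovieInfoService
{
    private readonly TheMovieDbService _theMovieDb;

    public MovieInfoService(TheMovieDbService theMovieDb)
    {
        _theMovieDb = theMovieDb;
    }

    public async Task<MovieDetails> GetDetailsAsync(int movieId)
    {
        // ConfigureAwait(false): do NOT resume on the ASP.NET request
        // context, so a controller thread blocked on .Result can't deadlock us.
        var raw = await _theMovieDb.FetchMovieAsync(movieId).ConfigureAwait(false);

        // From here on we may be on a thread-pool thread: no WorkContext,
        // no Orchard per-request services.
        return new MovieDetails { Title = raw.Title };
    }
}

public class AdminController : Controller
{
    private readonly MovieInfoService _movieInfo;

    public AdminController(MovieInfoService movieInfo) { _movieInfo = movieInfo; }

    public ActionResult Details(int id)
    {
        // Synchronous action blocks on Result; this is safe only because
        // the awaits inside MovieInfoService use ConfigureAwait(false).
        var details = _movieInfo.GetDetailsAsync(id).Result;
        return View(details);
    }
}
```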

Tuesday, November 27, 2012

Azure web-role with warmup

To avoid the spin-up time on an Azure role, I use the IIS Application Initialization (formerly Application Warm-Up) module. The module is available by default for IIS8, so it's easier if you can run your Azure role on Server 2012 (osFamily=3).

The step-by-step instructions describe how to configure the module for use. Using this alongside an article on programmatically configuring Azure app pool settings, I've got enough pieces to create a RoleEntryPoint for the task.

You will also need to (find and) include a reference to the Microsoft.Web.Administration assembly. Oh, and in addition to configuring the module, be sure to set the IIS IdleTimeout to 0 (zero).

Putting it all together:
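A RoleEntryPoint along those lines might look like this. This is a sketch under stated assumptions, not the exact code from the post: it assumes the role runs elevated (so it can touch IIS configuration), and uses the preloadEnabled and startMode settings that Application Initialization relies on:

```csharp
using System;
using Microsoft.Web.Administration;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        using (var serverManager = new ServerManager())
        {
            // First site / root application of the web role.
            var mainApplication = serverManager.Sites[0].Applications["/"];

            // Tell IIS to preload the application (Application Initialization).
            mainApplication["preloadEnabled"] = true;

            var appPool = serverManager.ApplicationPools[mainApplication.ApplicationPoolName];

            // Keep the worker process warm: always running, never idle-out.
            appPool["startMode"] = "AlwaysRunning";
            appPool.ProcessModel.IdleTimeout = TimeSpan.Zero; // IdleTimeout = 0

            serverManager.CommitChanges();
        }
        return base.OnStart();
    }
}
```

Note the role needs `<Runtime executionContext="elevated" />` in its service definition for ServerManager to be allowed to commit changes.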

Monday, November 12, 2012

Display summary for content item in Orchard.

Lately I've been building skills using Orchard CMS.
I ran into a problem trying to display a list of content items; in particular, the in-built summary approach for Orchard didn't suit. The in-built approach takes the first 200 characters and displays them via the shape Parts_Common_Body_Summary. This shape is created down inside ~/Core/Common/Drivers/BodyPartDriver.cs, to be served up via the Parts.Common.Body.Summary.cshtml view template. Then, within the template, the body's HTML is used to create an excerpt of the first 200 chars, displayed as a <p>.

For me, I wanted to be able to specify the summary shape using HTML described in the content item itself. Initially I looked at just customizing the template view; this approach may have solved my immediate problem, but I would have had (yucky) logic inside my view template. Sounds like a great excuse for creating a Module :D

Since this was my first Orchard module, there was quite a learning curve to piece together how to go about this problem. I can recommend the Pluralsight training videos to help you get started. Otherwise the Orchard Doco can help. Of course, nothing beats learning by tracing code and figuring it out yourself :D.

Normal content part drivers in Orchard create a shape, which is then served up via Orchard (don't forget placement) and displayed using a view template. Initially I was inclined to follow this pattern, and create my own "Parts_MySummary" shape for use. For better or worse, I opted to alter the existing Summary shape. I'm not 100% sure this is a good idea, but hey, if the fine folk over on the Orchard Project are going to make such a highly flexible and extensible CMS, then this is the kind of thing that'll happen :O.

To override the in-built shape, I use my own IShapeTableProvider to hook into the display of Parts_Common_Body_Summary, and with the magic that is IoC... tada!
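The hook itself is small. A sketch of such a provider, assuming the standard Orchard 1.x shape table API (the alternate name KwdSummary is illustrative; the module's actual naming may differ):

```csharp
using Orchard.DisplayManagement.Descriptors;

public class SummaryShapeProvider : IShapeTableProvider
{
    public void Discover(ShapeTableBuilder builder)
    {
        // Intercept the in-built summary shape as it is about to display.
        builder.Describe("Parts_Common_Body_Summary")
            .OnDisplaying(displaying =>
            {
                // Add an alternate so our own template wins over the default
                // Parts.Common.Body.Summary.cshtml when it exists.
                displaying.ShapeMetadata.Alternates.Add(
                    "Parts_Common_Body_Summary__KwdSummary");
            });
    }
}
```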

Since I was there, the module is built using 3 strategies for the summary:
  1. Default (fallback) to in-built
  2. First paragraph - extract the first p element from the body
  3. Explicit text - editor defines the summary text to use.
Ideally I would have made this more extensible, allowing for other strategies to also be used. But I couldn't work out how to add shapes at run-time to represent different strategies. For now, if you want a different strategy: add an ISummaryStrategy service and provide a better Part editor. I will have to revisit when I learn more.

The final module is hosted on codeplex: http://kwdsummary.codeplex.com/

Thursday, October 25, 2012

Some tricks for MsBuild + VStudio

Many developers that I work with avoid working with MSBuild. This is a shame, since a little MSBuild knowledge can go a long way. Here are some tips to help others leverage MSBuild along with VStudio.

Use .targets and .proj

Both extensions are common for MSBuild files. So which to use? I use the following to help clarify an MSBuild file's purpose:

  • *.targets is used for projects that are to be imported; these are generally MSBuild files that have little use unless imported into another MSBuild project.
  • *.proj is used for projects that have their own useful targets; these are generally MSBuild files that contain targets to be called from the command line.

Edit your .csproj to include a .targets project

A great way to leverage MSBuild with your normal Visual Studio (.csproj) project is to edit the .csproj and import a corresponding .targets file. As a convention, I name the import after the .csproj file; as an example, for a project MyApp.csproj I use a MyApp.targets import. Simply add the following in your .csproj file:

<Import Project="$(MSBuildThisFileName).targets" Condition="Exists('$(MSBuildThisFileName).targets')" />

After importing, you can now make interesting extensions to the build process using the BeforeTargets and AfterTargets attributes.
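For example, a target in MyApp.targets that hooks in after the standard Build target (the target name and message are illustrative):

```xml
<Target Name="ReportBuild" AfterTargets="Build">
  <Message Text="Built $(TargetPath)" Importance="high" />
</Target>
```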

As of VStudio 2012, I can edit my custom MyApp.targets and, without reloading MyApp.csproj, the changes take effect.

A target naming approach

As your MSBuild projects get more complex, you may find (like me) that it can help to use a kind of 2-Target approach. In such cases I try to use a short name for outer; descriptive name for inner style.

My reasoning is that I use an 'outer' target from the command line, where I want a short, easier-to-type name. Whilst, when building the target(s), I want more descriptive names for future maintainability. For example, I might have a target makeHelp that calls an internal target AddItemsFromAdditionalPaths.

Use the MSBuild task

To help modularize, use the MSBuild task to 'call' an external project. In the child target, define an Outputs attribute, and assign a resulting item list to it. Now in the caller (parent) target, simply capture the items using TargetOutputs.
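A sketch of the two halves (project, target, and item names are illustrative): the child target declares its items via the target's Outputs attribute, and the parent captures them via the MSBuild task's TargetOutputs parameter:

```xml
<!-- Parent project -->
<Target Name="archive">
  <MSBuild Projects="Logs.proj" Targets="GatherLogFiles">
    <Output TaskParameter="TargetOutputs" ItemName="GatheredLogs" />
  </MSBuild>
  <Message Text="Gathered: @(GatheredLogs)" />
</Target>

<!-- Logs.proj (child) -->
<Target Name="GatherLogFiles" Outputs="@(LogFiles)">
  <ItemGroup>
    <LogFiles Include="logs\**\*.log" />
  </ItemGroup>
</Target>
```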

Extend when needed

MSBuild is surprisingly easy to extend. There are 2 well-known extension libraries: MSBuild Extension Pack and MSBuild Community Tasks. If neither suits, it's really easy to just create your own.

As another option, you can also inline a task (MSBuild in-line task). But I would recommend NOT doing this. A task is easier to understand when written as a C# class, rather than in-line.
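A minimal custom task as a C# class might look like this (CountLines and its parameters are hypothetical; the pattern is: derive from Microsoft.Build.Utilities.Task, mark inputs [Required] and results [Output]):

```csharp
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

// Hypothetical task: counts lines across a set of input files.
public class CountLines : Task
{
    [Required]
    public ITaskItem[] Files { get; set; }

    [Output]
    public int Total { get; set; }

    public override bool Execute()
    {
        foreach (var file in Files)
        {
            Total += System.IO.File.ReadAllLines(file.ItemSpec).Length;
        }
        Log.LogMessage("Counted {0} lines.", Total);
        return true; // false would fail the build
    }
}
```

Wire it in with a UsingTask element pointing at the compiled assembly, then call it like any built-in task.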

Leverage your .csproj project

Just as I like to extend my .csproj using a .targets file, it can also be useful to go the other way and build a .proj that imports your .csproj.

The Visual Studio IDE is great for maintaining source files, and content files and other files in your project. You can use the UI to visually add and organize project files.

On occasion I take advantage of this using a MyApp.proj MSBuild project that imports MyApp.csproj. This way I can build targets that have access to ItemGroups such as @(Compile) and @(Content).

Property Functions come in handy

Property Functions can be used to manipulate a property; string manipulation is one of the more useful capabilities. Unfortunately the syntax is verbose and can be challenging to read, so I tend to limit it to simpler uses.

One of the more useful tricks is to use property functions with item metadata. To do this, 'convert' the metadata to a string, and use it e.g.

    $([System.String]::new('%(RelativeDir)')).Replace('Stub', '$(OutPath)')

Item Metadata for 'special' task processing

Metadata can be a clean way to extend an existing ItemGroup, allowing Tasks to alter how they behave. I prefer to use metadata over creating 'working' ItemGroups. If I find myself creating an ItemGroup just for a Task, e.g.

<ItemGroup>
  <LogFilesToArchive />
</ItemGroup>

I prefer

<ItemGroup>
  <LogFiles Include="">
    <archive>$(oldFiles)</archive>
  </LogFiles>
</ItemGroup>

Using metadata does have a down-side; you need to ensure all Items have the metadata, otherwise MSBuild complains. To solve this use an ItemDefinitionGroup, or update the items with:

<ItemGroup>
  <LogFiles Include="@(LogFiles)">
    <archive></archive>
  </LogFiles>
</ItemGroup>
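The ItemDefinitionGroup alternative supplies a default value for the metadata instead, so every LogFiles item gets it from the start:

```xml
<ItemDefinitionGroup>
  <LogFiles>
    <archive></archive>
  </LogFiles>
</ItemDefinitionGroup>
```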

Using DependentUpon in .csproj

VStudio uses DependentUpon metadata to nest one item under another (like web.config and web.debug.config). You can do this for your own files. For example, if you use partials, edit the .csproj to group all the files under one.
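For example, to nest a partial class file under its 'main' file (file names are illustrative):

```xml
<ItemGroup>
  <Compile Include="Person.cs" />
  <Compile Include="Person.Validation.cs">
    <DependentUpon>Person.cs</DependentUpon>
  </Compile>
</ItemGroup>
```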

Add to an ItemGroup with recursive includes.

VStudio has a number of well-known item groups, such as @(Compile) and @(Content).

If you edit the .csproj, you can manually add items to these ItemGroups. It can be a faster / easier way to add items, rather than using the UI. I've even used recursive includes to add trees of files to a particular ItemGroup.
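For example, a recursive include that pulls a whole folder tree of scripts into @(Content) (the path is illustrative):

```xml
<ItemGroup>
  <Content Include="Scripts\**\*.js" />
</ItemGroup>
```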

Monday, October 22, 2012

Accessing the Orchard current user content item

While working on a custom theme for my Orchard-based site, I ran into a problem trying to leverage the current user content item. For my theme, I wanted to be able to present some additional information about the current user in the header of my theme.

Orchard uses Theme modules to separate presentation from content; this separation is one of the (many) reasons I'm starting to really love Orchard. Using a combination of the Designer Tools module and VStudio, you can peruse the shapes displayed on your page. From this I easily replaced the User.cshtml view with my own, but I couldn't work out how to actually reach the current user content item; the default User.cshtml (from ThemeMachine) uses WorkContext.CurrentUser, but this is the IUser, not the content item itself!!??

Eventually I stumbled onto: http://orchard.codeplex.com/discussions/255594, and found the user content item is simply WorkContext.CurrentUser.ContentItem. To keep my site clean, I opted to leverage the Profile module, and add my custom user data to the Profile content part. Then a little prep code in the view, and now I can leverage a custom DisplayName field in my view. I did have to rummage through the source code to find the best way to reach the data using the dynamic type; finally I settled on:

//http://orchard.codeplex.com/discussions/255594
//User controllable display name.
var displayName = String.Empty;
if(WorkContext.CurrentUser != null)
{
  dynamic user = WorkContext.CurrentUser.ContentItem;
  if(user.ProfilePart != null && 
    user.ProfilePart.Has(typeof(object), "DisplayName") &&
    user.ProfilePart.DisplayName.Value is string)
  {
    displayName = user.ProfilePart.DisplayName.Value.Trim();
  }
  
  if(String.IsNullOrWhiteSpace(displayName))
    { displayName = WorkContext.CurrentUser.UserName; }
}

Thursday, October 11, 2012

Using mq to manage local OSS code.

Recently I've started working with the Orchard CMS open source project. As part of this I needed to alter the code base to suit my environment. So how do I:

  1. Manage local / my own changes to the source.
  2. Refresh my tree when the source updates.
  3. Take changes from other forks.

This particular project is using Mercurial (hg), so I can leverage the MQ extension to manage my local code.
I use TortoiseHg to work with hg repositories; I'm not really a command-line kinda developer.

Enable MQ extension

Firstly, enable the MQ extension. Be sure to enable the extension in your global settings. More on patching from TortoiseHg Doc.

Clone the repo

The folks on the Orchard project use forks for managing contributions. Since this is a local copy, I don't want a fork; just a local clone will do. Otherwise I'd be adding to the noise on the project by holding a fork for an extended time period.

Create your patches repo.

With MQ enabled, you now have a little diamond in the tool-bar. Click it to view the MQ options. Now create an MQ repo. This creates a new hg repository inside the .hg folder, called patches. Since this is itself a repository, working with it is just like working with another repository.
Note: it is NOT a sub-repo. I got confused thinking I could work with the MQ repository like a sub-repository... that ended in tears.

Make a change as a patch.

My first change was to update the azure project. Make the change, but do NOT commit. Rather, use the diamond, click 'new patch', and then commit using QNew.
I find it useful to view the patch queue; use View --> Show Patch Queue. Another check is to actually open the patches repository: find it inside the .hg folder and open it in Tortoise; you should see your patch has been committed.

Update from main repo.

I can now, just like normal, pull from the main repository to get any changes.
Before pulling, I like to un-apply my patches, then re-apply them after any updates. Then I can check if each patch is still relevant, and update each patch (if needed) to reflect the refreshed code.

Taking changes from another fork.

Orchard uses forks for contributions. Since a fork is just a clone on CodePlex, I can clone a local copy. From Tortoise I can create a patch for a particular change set using r-click export. Then in my local repository, I can import this as a patch using Repository --> Import.
WARNING: be sure to import to patches, not the repository itself.

Extracting a patch.

Any change I make on my local clone may be useful as a contribution back to the Orchard project. This can be a bit of a problem, since I may have a few local patches that I don't want to contribute back (because they have no use to the project). The easiest way I've found is to:
  1. Fork Orchard and get a local clone (normal for Orchard). Open my fork clone.
  2. Use repository --> import and locate the patch in my local/.hg/patches folder.
  3. Import the patch to the Working Directory. Do not import directly to the repository.
  4. Fix the patch if needed, and commit the change set.
  5. All the other bits like running unit tests; push to my fork; submit pull request.
Well that's the plan, will have to see how it goes over time.


Monday, October 8, 2012

URL Rewrite for hosted mvc.net site.

Recently I've been working on deployment of an MVC.NET based site to Azure. As part of this I needed to re-map request URLs from the Azure DNS host name to my own DNS host. After a few fumbling starts I opted to use the IIS URL Rewrite module. The module is installed (by default) on Azure so it was good to go. I spent a good while looking into some of the other available options, in particular the SEO templates for lower-case and trailing-slash. I've had prior experience with URL re-writing modules... not much of it good. I did manage to get a rewrite rule set to trim and lower-case my URLs, but I felt it was too fragile to keep.
In the end I settled on just a host-name redirect.
<system.webServer>
  <!-- ...other settings... -->
  <rewrite>
    <rules>
      <rule name="HostName" enabled="true" stopProcessing="false">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="YOUR\.SERVICENAME\.cloudapp\.net" />
        </conditions>
        <action type="Redirect" url="{MapSSL:{HTTPS}}YOURHOSTNAME/{R:1}" />
      </rule>
    </rules>
    <rewriteMaps>
      <rewriteMap name="MapSSL" defaultValue="OFF">
        <add key="ON" value="https://" />
        <add key="OFF" value="http://" />
      </rewriteMap>
    </rewriteMaps>
  </rewrite>
</system.webServer>

Many thanks to RuslanY for his tips: http://ruslany.net/2009/04/10-url-rewriting-tips-and-tricks/.
This rule was inspired from the stackoverflow: http://stackoverflow.com/questions/2608994/iis7-url-rewriting-how-not-to-drop-https-protocol-from-rewritten-url

Of additional interest: http://www.iis.net/learn/extensions/url-rewrite-module/url-rewrite-module-configuration-reference
http://www.iis.net/learn/extensions/url-rewrite-module/using-the-url-rewrite-module

Tuesday, October 2, 2012

Least-Concept-Method

Often when writing code, a developer will use member overloading to provide an alternate, convenient call interface on a class. Assuming the developer has at least a passing interest in quality, they will try to stay DRY. To this end, one of the method overloads will contain the core 'work' for the member, whilst the other overloads perform only simple tasks and then hand off to the core method to do the real work. But which overload is best for the core?

I like to use a least-concept method for the core. By this I mean I select a set of parameter(s) that:

  1. Are not native types or simple constants.
  2. Don't require the method to navigate deeply into the parameter(s) (one level in is ideal).

Why not native types?
Often I've seen code that uses a core method that extracts the most basic, or native, data from more complex objects, and implements the core with this. For example, say I have a CountLines() method that needs a file; the developer may go for the most basic CountLines(string fileName) style. This approach has the benefit of being flexible for the caller, but it sacrifices clarity. The next developer has to know that what I really mean is the full path to the file, not just its file name. You could argue the parameter is badly named (and I'd agree), but I would still prefer a stronger type. C# is a typed language; a better idea is to leverage it. In this case I would go for a CountLines(FileInfo file) method.
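A sketch of the two overloads side by side (illustrative code, not from a real code base): the string overload is a thin convenience wrapper; the FileInfo overload is the core.

```csharp
using System.IO;
using System.Linq;

public static class FileStats
{
    // Convenience overload: no real logic, just hand off to the core.
    public static int CountLines(string fileName)
    {
        return CountLines(new FileInfo(fileName));
    }

    // Core overload: the stronger type makes the contract explicit -
    // this is a file on disk, not just any string.
    public static int CountLines(FileInfo file)
    {
        return File.ReadLines(file.FullName).Count();
    }
}
```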

Why not the most complex object as a parameter?
Sometimes you need to implement a method that takes a complex object, particularly when implementing an interface (plug-in), but the method only uses a small part of the complex object. Let's say you're implementing a RejectIfTooBusy(HttpApplication context) method. You want to reject if the current request is from a set of black-listed hosts. The context has the data you need via context.Request.Url.Host, but to reach this you need to check for nulls along the way. Rather than having this parameter-conversion code in your main implementation, I prefer to pull out a core RejectIfTooBusy(Uri whoRequested) method, and have this overload called by the first. By separating out the core member, it becomes clearer that all I'm checking is the request URL. It can also be useful to split a complex object, to highlight the bits that the method really uses.
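A sketch of that split (the class and the black-list are illustrative; only the RejectIfTooBusy names come from the text above):

```csharp
using System;
using System.Web;

public class BusyGuard
{
    // Hypothetical black-list; in real code this would come from config.
    private static readonly string[] BlackListedHosts = { "bad.example.com" };

    // Interface-facing overload: the null-checking / parameter-conversion
    // noise lives here, away from the real logic.
    public bool RejectIfTooBusy(HttpApplication context)
    {
        if (context == null || context.Request == null || context.Request.Url == null)
        {
            return false;
        }
        return RejectIfTooBusy(context.Request.Url);
    }

    // Core overload: the least-concept parameter makes it obvious that
    // all we inspect is the request URL.
    public bool RejectIfTooBusy(Uri whoRequested)
    {
        return Array.IndexOf(BlackListedHosts, whoRequested.Host) >= 0;
    }
}
```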

Wednesday, September 12, 2012

Generic Split() extension

Recently I needed to break a set into multiple sub-sets, just like the String.Split() operation.
I couldn't find a simple answer, so after a quick bit of code I now have a simple generic Split<T>() extension you can use to split a collection, similar to String.Split().

Code and ms tests found in the gist here: https://gist.github.com/3703440
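For reference, the shape of such an extension might be as follows (a sketch; the gist's actual code may differ):

```csharp
using System.Collections.Generic;

public static class SplitExtensions
{
    // Splits a sequence into sub-sequences on a separator value,
    // analogous to String.Split(): [1,0,2,3,0,4].Split(0) -> [1],[2,3],[4].
    public static IEnumerable<List<T>> Split<T>(this IEnumerable<T> source, T separator)
    {
        var comparer = EqualityComparer<T>.Default;
        var current = new List<T>();
        foreach (var item in source)
        {
            if (comparer.Equals(item, separator))
            {
                yield return current;        // emit the sub-set so far
                current = new List<T>();     // and start a new one
            }
            else
            {
                current.Add(item);
            }
        }
        yield return current; // trailing sub-set (may be empty, as String.Split does)
    }
}
```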

Friday, August 10, 2012

My default browser is now IE9 :(

A monumental change for me.
I have just changed from chrome to IE9 as my default browser. I feel... kinda yucky.

Why? One simple but painful function: support for the file protocol e.g. file://server/share
In IE (and ffox I hear) it opens my local explorer, a very useful behaviour.
In Chrome.. it does nothing.
I spent a fair amount of time trying to find a workaround; in the end... see title.

Friday, May 25, 2012

QUnit with mstest

Recently I've been doing more complex JavaScript coding. As part of the process, I wanted a better approach for JavaScript unit tests. After a Google I've settled on using QUnit, from the jquery guys.

QUnit runs in the browser, which I wanted, since I like to test on an actual browser JavaScript engine; it's closer to what the user experiences. I also wanted to be able to run the tests the same way I run .net tests: in my IDE (VStudio). But out of the box QUnit can't do that.

There are approaches to use QUnit with a server, but these focus on CI; a good thing; but not what I was after.

So, with a little code and some convention, I use a simple mstest base class to start a web server, open a page, run QUnit, and return the result. I also use a small bit of JavaScript to get QUnit to call back to my test, reporting results.

See the gist: qunit + mstest; using simple in-proc webserver (https://gist.github.com/2784505)

With this, I now run the unit tests in VStudio, same as normal tests. Too easy.
