Thursday, October 25, 2012

Some tricks for MSBuild + VStudio

Many developers I work with avoid MSBuild. This is a shame, since a little MSBuild knowledge can go a long way. Here are some tips to help others leverage MSBuild along with VStudio.

Use .targets and .proj

Both extensions are common for MSBuild files, so which should you use? I use the following convention to clarify an MSBuild file's purpose:

  • *.targets is used for projects that are meant to be imported; these are generally MSBuild files that have little use unless imported into another MSBuild project.
  • *.proj is used for projects that have their own useful targets; these are generally MSBuild files that contain targets to be called from the command line.

Edit your .csproj to include a .targets project

A great way to leverage MSBuild with your normal Visual Studio (.csproj) project is to edit the .csproj and import a corresponding .targets file. As a convention, I name the import after the .csproj file; for example, for a project MyApp.csproj I use a MyApp.targets import. Simply add the following to your .csproj file:

<Import Project="$(MSBuildThisFileName).targets" Condition="Exists('$(MSBuildThisFileName).targets')" />

After importing, you can make interesting extensions to the build process using the BeforeTargets and AfterTargets attributes.
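
For example, a MyApp.targets along these lines hooks a target in after the standard build (the target name and message here are just placeholders):

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="ReportOutput" AfterTargets="Build">
    <!-- Runs automatically after the standard Build target. -->
    <Message Text="Built $(MSBuildProjectName) to $(OutputPath)" Importance="high" />
  </Target>
</Project>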

As of VStudio 2012, I can edit my custom MyApp.targets and the changes take effect without re-loading MyApp.csproj.

A target naming approach

As your MSBuild projects get more complex, you may find (like me) that it can help to use a kind of two-target approach. In such cases I try to use a 'short name for the outer target; descriptive name for the inner target' style.

My reasoning is that I call an 'outer' target from the command line, where I want a short, easy-to-type name, while for the inner target(s) I want more descriptive names for future maintainability. For example, I might have a target makeHelp that calls an internal target AddItemsFromAdditionalPaths.
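
A minimal sketch of the idea (the target names come from the example above; the item and property names are placeholders):

<Target Name="makeHelp" DependsOnTargets="AddItemsFromAdditionalPaths">
  <!-- Short outer target, easy to type: msbuild MyApp.proj /t:makeHelp -->
  <Message Text="Building help from @(HelpSource)" />
</Target>

<Target Name="AddItemsFromAdditionalPaths">
  <!-- Descriptive inner target that does one well-named job. -->
  <ItemGroup>
    <HelpSource Include="$(AdditionalHelpPath)\**\*.md" />
  </ItemGroup>
</Target>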

Use the MSBuild task

To help modularize, use the MSBuild task to 'call' an external project. In the called project's target, declare an Outputs (or, from MSBuild 4.0, Returns) attribute and assign a resulting item list to it. Then, in the calling (parent) target, capture those items from the MSBuild task's TargetOutputs parameter.
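
A rough sketch, assuming MSBuild 4.0 or later (the project, target, item, and property names are placeholders):

<!-- Child project (e.g. Tools.proj): the target's Returns become the MSBuild task's TargetOutputs. -->
<Target Name="CollectLogs" Returns="@(LogFiles)">
  <ItemGroup>
    <LogFiles Include="$(LogDir)\**\*.log" />
  </ItemGroup>
</Target>

<!-- Parent project: call the child and capture its items. -->
<Target Name="Archive">
  <MSBuild Projects="Tools.proj" Targets="CollectLogs">
    <Output TaskParameter="TargetOutputs" ItemName="LogsToArchive" />
  </MSBuild>
  <Message Text="Archiving: @(LogsToArchive)" />
</Target>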

Extend when needed

MSBuild is surprisingly easy to extend. There are two well-known extension libraries: MSBuild Extension Pack and MSBuild Community Tasks. If neither suits, it's really easy to create your own.

As another option, you can also inline a task (an MSBuild in-line task), but I would recommend NOT doing this. A task is easier to understand when written as a C# class, rather than in-line.
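
Writing your own task really is only a few lines; here's a minimal sketch (the task name and logic are just an illustration):

using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

// A trivial custom task: logs each input file. Reference the built assembly
// from MSBuild with <UsingTask TaskName="ListFiles" AssemblyFile="..." />.
public class ListFiles : Task
{
    [Required]
    public ITaskItem[] Files { get; set; }

    public override bool Execute()
    {
        foreach (ITaskItem file in Files)
        {
            Log.LogMessage(MessageImportance.Normal, "File: {0}", file.ItemSpec);
        }
        return !Log.HasLoggedErrors;
    }
}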

Leverage your .csproj project

Just as I like to extend my .csproj using a .targets file, it can also be useful to go the other way and build a .proj that imports your .csproj.

The Visual Studio IDE is great for maintaining source files, content files, and other files in your project. You can use the UI to visually add and organize project files.

On occasion I take advantage of this using a MyApp.proj MSBuild project that imports MyApp.csproj. This way I can write targets that have access to ItemGroups such as @(Compile) and @(Content).
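
Such a wrapper project might look something like this (the target is just an illustration):

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="MyApp.csproj" />

  <!-- This target can now see the item groups Visual Studio maintains. -->
  <Target Name="listSource">
    <Message Text="Compile items: @(Compile)" />
    <Message Text="Content items: @(Content)" />
  </Target>
</Project>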

Property Functions come in handy

Property Functions can be used to manipulate a property; string manipulation is one of the more useful capabilities. Unfortunately the syntax is verbose and can be challenging to read, so I tend to limit it to simpler uses.

One of the more useful tricks is to use property functions with item metadata. To do this, 'convert' the metadata to a string and then use it, e.g.

    $([System.String]::new('%(RelativeDir)').Replace('Stub', '$(OutPath)'))

Item Metadata for 'special' task processing

Metadata can be a clean way to extend an existing ItemGroup, allowing tasks to alter how they behave. I prefer to use metadata over creating 'working' ItemGroups. If I find myself creating an ItemGroup just for a task, e.g.

<ItemGroup>
  <LogFilesToArchive />
</ItemGroup>

I prefer

<ItemGroup>
  <LogFiles Include="">
    <archive>$(oldFiles)</archive>
  </LogFiles>
</ItemGroup>

Using metadata does have a downside: you need to ensure all items have the metadata, otherwise MSBuild complains. To solve this, use an ItemDefinitionGroup (see the sketch after the snippet below), or update the items with:

<ItemGroup>
  <LogFiles Include="@(LogFiles)">
    <archive></archive>
  </LogFiles>
</ItemGroup>
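
The ItemDefinitionGroup route gives every LogFiles item a default value for the metadata, so nothing is left undefined:

<ItemDefinitionGroup>
  <LogFiles>
    <!-- Default value applied to any LogFiles item that doesn't set its own. -->
    <archive></archive>
  </LogFiles>
</ItemDefinitionGroup>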

Using DependentUpon in .csproj

VStudio uses DependentUpon metadata to nest one item under another (like web.debug.config under web.config). You can do this for your own files. For example, if you use partial classes, edit the .csproj to group all the files under one parent file.
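
For example, to nest a partial class file under its 'main' file (the file names here are just placeholders):

<ItemGroup>
  <Compile Include="Customer.cs" />
  <Compile Include="Customer.Validation.cs">
    <!-- Shows as a child of Customer.cs in Solution Explorer. -->
    <DependentUpon>Customer.cs</DependentUpon>
  </Compile>
</ItemGroup>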

Add to an ItemGroup recursively

VStudio has a number of well-known item groups, such as @(Compile) and @(Content).

If you edit the .csproj, you can manually add items to these ItemGroups. It can be a faster, easier way to add items than using the UI. I've even used recursive includes to add whole trees of files to a particular ItemGroup, as shown below.
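
For instance, a recursive wildcard pulls an entire folder tree of scripts into @(Content) (the path is just an example):

<ItemGroup>
  <!-- The ** wildcard recurses into sub-folders. -->
  <Content Include="Scripts\**\*.js" />
</ItemGroup>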

Monday, October 22, 2012

Accessing the Orchard current user content item

While working on a custom theme for my Orchard-based site, I ran into a problem trying to leverage the current user content item. For my theme, I wanted to present some additional information about the current user in the theme's header.

Orchard uses theme modules to separate presentation from content; this separation is one of the (many) reasons I'm starting to really love Orchard. Using a combination of the Designer Tools module and VStudio, you can peruse the shapes displayed on your page. From this I easily replaced the User.cshtml view with my own, but I couldn't work out how to actually reach the current user content item; the default User.cshtml (from ThemeMachine) uses WorkContext.CurrentUser, but this is the IUser, not the content item itself!

Eventually I stumbled onto http://orchard.codeplex.com/discussions/255594 and found the user content item is simply WorkContext.CurrentUser.ContentItem. To keep my site clean, I opted to leverage the Profile module and add my custom user data to the Profile content part. With a little prep code in the view, I can now use a custom DisplayName field. I did have to rummage through the source code to find the best way to reach the data via the dynamic type; I finally settled on:

//http://orchard.codeplex.com/discussions/255594
//User controllable display name.
var displayName = String.Empty;
if(WorkContext.CurrentUser != null)
{
  dynamic user = WorkContext.CurrentUser.ContentItem;
  if(user.ProfilePart != null && 
    user.ProfilePart.Has(typeof(object), "DisplayName") &&
    user.ProfilePart.DisplayName.Value is string)
  {
    displayName = user.ProfilePart.DisplayName.Value.Trim();
  }
  
  if(String.IsNullOrWhiteSpace(displayName))
    { displayName = WorkContext.CurrentUser.UserName; }
}

Thursday, October 11, 2012

Using mq to manage local OSS code.

Recently I've started working with the Orchard CMS open source project. As part of this I needed to alter the code base to suit my environment. So how do I:

  1. Manage local / my own changes to the source.
  2. Refresh my tree when the source updates.
  3. Take changes from other forks.

This particular project uses Mercurial (hg), so I can leverage the MQ extension to manage my local code. I use TortoiseHg to work with hg repositories; I'm not really a command-line kind of developer.

Enable MQ extension

First, enable the MQ extension. Be sure to enable it in your global settings. There is more on patching in the TortoiseHg documentation.

Clone the repo

The folks on the Orchard project use forks to manage contributions, but since this is a local copy I don't want a fork; a plain local clone will do. Otherwise I'd just be adding noise to the project by holding a fork for an extended period.

Create your patches repo.

With MQ enabled, you now have a little diamond in the toolbar. Click it to view the MQ options, and create an MQ repo. This creates a new hg repository inside the .hg folder, called patches. Since this is itself a repository, working with it is just like working with any other repository.
Note: it is NOT a sub-repo. I got confused thinking I could work with the MQ repository like a sub-repository... that ended in tears.

Make a change as a patch.

My first change was to update the Azure project. Make the change, but do NOT commit. Instead, use the diamond menu, click 'new patch', and then commit using QNew.
I find it useful to view the patch queue (View --> show patch queue). Another check is to actually open the patches repository: find it inside the .hg folder and open it in TortoiseHg, and you should see your patch has been committed.

Update from main repo.

I can now pull from the main repository, just like normal, to get any changes.
Before pulling, I like to un-apply my patches, then re-apply them after the update. I can then check whether each patch is still relevant and update it (if needed) to reflect the refreshed code.

Taking changes from another fork.

Orchard uses forks for contributions. Since a fork is just a clone on CodePlex, I can clone a local copy of it. From TortoiseHg I can create a patch for a particular changeset using right-click export. Then, in my local repository, I can import this as a patch using Repository --> Import.
WARNING: be sure to import to the patches queue, not the repository itself.

Extracting a patch.

Any change I make on my local clone may be useful as a contribution back to the Orchard project. This can be a bit of a problem, since I may have a few local patches that I don't want to contribute back (because they are of no use to the project). The easiest way I've found is to:
  1. Fork Orchard and get a local clone of the fork (the normal contribution flow for Orchard). Open that fork clone.
  2. Use Repository --> Import and locate the patch in my local/.hg/patches folder.
  3. Import the patch to the working directory. Do not import directly to the repository.
  4. Fix the patch if needed, and commit the change set.
  5. Do all the other bits: run the unit tests, push to my fork, and submit a pull request.
Well, that's the plan; we'll have to see how it goes over time.


Monday, October 8, 2012

URL Rewrite for hosted mvc.net site.

Recently I've been working on deploying an MVC.NET-based site to Azure. As part of this I needed to re-map request URLs from the Azure DNS host name to my own DNS host. After a few fumbling starts I opted to use the IIS URL Rewrite module, which is installed by default on Azure, so it was good to go. I spent a good while looking into some of the other available options, in particular the SEO templates for lower-case URLs and trailing slashes. I've had prior experience with URL rewriting modules... not much of it good. I did manage to get a rewrite rule set to trim and lower-case my URLs, but I felt it was too fragile to keep.
In the end I settled on just a host-name redirect.
<system.webServer>
  <!-- ...other configuration... -->
  <rewrite>
    <rules>
      <rule name="HostName" enabled="true" stopProcessing="false">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="YOUR\.SERVICENAME\.cloudapp\.net" />
        </conditions>
        <action type="Redirect" url="{MapSSL:{HTTPS}}YOURHOSTNAME/{R:1}" />
      </rule>
    </rules>
    <rewriteMaps>
      <rewriteMap name="MapSSL" defaultValue="OFF">
        <add key="ON" value="https://" />
        <add key="OFF" value="http://" />
      </rewriteMap>
    </rewriteMaps>
  </rewrite>
</system.webServer>

Many thanks to RuslanY for his tips: http://ruslany.net/2009/04/10-url-rewriting-tips-and-tricks/.
This rule was inspired by this Stack Overflow question: http://stackoverflow.com/questions/2608994/iis7-url-rewriting-how-not-to-drop-https-protocol-from-rewritten-url

Of additional interest: http://www.iis.net/learn/extensions/url-rewrite-module/url-rewrite-module-configuration-reference
http://www.iis.net/learn/extensions/url-rewrite-module/using-the-url-rewrite-module

Tuesday, October 2, 2012

Least-Concept-Method

Often when writing code a developer will use member overloading to provide alternate, convenient call interfaces on a class. Assuming the developer has at least a passing interest in quality, they will try to stay DRY. To this end, one of the method overloads will contain the core 'work' for the member, while the other overloads perform only simple tasks and then hand off to the core method to do the real work. But which overload is best for the core?

I like to use a least-concept method for the core. By this I mean I select a set of parameter(s) that:

  1. Are not native types or simple constants.
  2. Don't require the method to navigate deeply into the parameter(s) (one level in is ideal).

Why not native types?
Often I've seen code where the core method extracts the most basic, or native, data from more complex objects and implements the core with that. For example, say I have a CountLines() method that needs a file; the developer may go for the most basic CountLines(string fileName) style. This approach has the benefit of being flexible for the caller, but it sacrifices clarity. The next developer has to know that what I really mean is the full path to the file, not just its file name. You could argue the parameter is badly named (and I'd agree), but I would still prefer a stronger type. C# is a typed language; a better idea is to leverage it. In this case I would go for a CountLines(FileInfo file) method.
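
A small sketch of the idea (the names come from the example above; the line-counting logic is just an illustration):

using System.IO;
using System.Linq;

public static class LineCounter
{
    // Convenience overload: flexible for callers, but the string hides what is really meant.
    public static int CountLines(string fileName)
    {
        return CountLines(new FileInfo(fileName));
    }

    // Core overload: the stronger type makes the intent (a full path to a file) explicit.
    public static int CountLines(FileInfo file)
    {
        return File.ReadLines(file.FullName).Count();
    }
}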

Why not the most complex object as a parameter?
Sometimes you need to implement a method that takes a complex object, particularly when implementing an interface (plug-in), but the method only uses a small part of that object. Let's say you're implementing a RejectIfTooBusy(HttpApplication context) method. You want to reject the request if it comes from a set of black-listed hosts. The context has the data you need via context.Request.Url.Host, but to reach it you need to check for nulls along the way. Rather than having this parameter-conversion code in the main implementation, I prefer to pull out a core RejectIfTooBusy(Uri whoRequested) method and have it called by the first overload. By separating out the core member, it becomes clearer that all I'm checking is the request URL. It can also be useful to split a complex object apart to highlight the bits that the method really uses.
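
A rough sketch of the split, assuming a hypothetical black-list of hosts (the class name, list contents, and rejection behaviour are all placeholders):

using System;
using System.Collections.Generic;
using System.Web;

public class BusyFilter
{
    // Hypothetical black-list; in real code this would come from configuration.
    private static readonly HashSet<string> blackListedHosts =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase) { "bad.example.com" };

    // Plug-in entry point: navigates the complex object (with null checks) exactly once.
    public void RejectIfTooBusy(HttpApplication context)
    {
        if (context == null || context.Request == null || context.Request.Url == null)
        {
            return;
        }

        RejectIfTooBusy(context.Request.Url);
    }

    // Core overload: it is now obvious that only the request URL matters.
    private void RejectIfTooBusy(Uri whoRequested)
    {
        if (blackListedHosts.Contains(whoRequested.Host))
        {
            throw new HttpException(503, "Server too busy.");
        }
    }
}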
