Archive

Posts Tagged ‘CodeProject’

Best practice front-end architecture using Microsoft ASP.Net MVC and Rivets.js

A few years ago I wrote an article about best-practice architecture for web applications built in Microsoft.Net.  This was focused entirely on the back-end and I mentioned at the end that I would do a front-end article one day.  So, here we go…

First of all, let’s get some basic requirements down:

  • your business logic should be separated from your presentation logic
  • your business logic should be unit testable – and this means abstracting as much as possible so you can mock it later
  • your application should be as lightweight as possible – but more importantly, the application must not ‘bloat’ with superfluous or rarely-used features as it grows bigger

In addition, since I wrote the last article, the way we build modern web applications has made a massive shift towards the client side, with much of my work written in javascript these days.  The trouble with javascript is that it doesn’t naturally enforce rules on the developer.  If you are developing by yourself, you may be able to get away with this because you understand your own way of working.  But if you’re working in a multi-team environment, this isn’t good enough and you need to enforce your own rules using conventions which other developers must follow.  This article shows the conventions which I currently use to keep things organized and understandable.

Simple huh?  Let’s get into it.  If you’d like to follow along, you can download the sample project here (and don’t forget to run the included .sql file to create your database structure).

Separating your business logic from your presentation logic (a.k.a. Separating your javascript from your HTML)

For me, the reason for doing this primarily comes down to unit testing – you can’t be dealing with HTML manipulation when you are trying to test the SavePerson() method of your javascript file. 

The typical way to go about this is via two-way data binding, and for years the common approach has been third-party tools like Knockout.js.  Personally, I detest Knockout – they’ve done amazing work (including older browser support), but you have to completely rewrite your javascript models in order to make it work – which means:

  • you and other developers must become proficient ‘knockout developers’ in order to maintain the application
  • you become massively tied-in to the knockout framework

For these reasons, I had never built a properly data-bound front-end framework into any of my applications.  At least, not until rivets.js came along.

Rivets.js

This was a huge game changer for me.  It’s not as big or popular as the older frameworks such as Knockout, but it has one massive advantage – you can develop your javascript files with absolutely no knowledge of (or reference to) the fact that they are data-bound to rivets.  In fact, your javascript files have no idea that they are data-bound at all!  That is perfect – just the way it is supposed to be.  To clarify, here is an example of a file that displays a list of people:

 
var PeopleList = function (model) {
    var
    Init = function () {
        console.log("People", model);
    },
    ViewPerson = function (ev, data) {
        alert('You have clicked ' + data.person.DisplayName);
    },
    AddPerson = function () {
        var params = {
            firstName: 'Person ' + (model.People.length + 1)
        };

        // Call our MVC controller method
        lib.Data.CallJSON('home/createperson', params, function (newPerson) {

            // Add to our model - the view will update automatically
            model.People.push(newPerson);
        });

        return false;
    };

    Init();

    return {
        model: model,
        AddPerson: AddPerson,
        ViewPerson: ViewPerson
    };
};
 

Beautiful huh?  Imagine unit-testing that bad-boy – piece of cake!

So, with rivets.js you grab this javascript file and you ‘bind’ it to a block of HTML, and suddenly, as the user interacts with the HTML (like clicking an ‘Add person’ button), your javascript file will handle the events and react accordingly (like creating a new person).  For reference, here is my associated HTML view:

<div id="MyPeopleList" class="home">
<h2>People list</h2>

<table>
<tr data-each-person="model.People" data-on-click="ViewPerson">
<td data-html="person.DisplayName"></td>
</tr>
</table>

<p>
<a data-on-click="AddPerson">Create a new person</a>
</p>
</div>

<script> 
var viewID = 'MyPeopleList';

var view = document.getElementById(viewID);

rivets.bind(view, new PeopleList(model));

</script>

See the data-* attributes?  That’s rivets.js.  I’m not going to go into how the binding works – check out the rivets.js documentation for that.

 

Automatically wiring up your views to your controllers

In the HTML sample above, you can see a little script tag at the bottom which pulls in my PeopleList javascript and applies it to our HTML.  I build a very modular type of architecture so I end up with hundreds of these files and quite honestly I get sick of retyping the same thing again and again.  So, this is a good chance to introduce the first of my ‘conventions’ which I apply using the MVC framework – let’s jump to our C# code.  Specifically, the OnResultExecuted() method, which gets called after each of my MVC views is rendered (BTW, if you’re not familiar with Microsoft MVC then you’ll probably need to brush up on another blog before proceeding):

 
protected override void OnResultExecuted(ResultExecutedContext filterContext)
{
    var viewFolder = this.GetViewFolderName(filterContext.Result);
    var viewFile = this.GetViewFileName(filterContext.Result);
    var modelJSON = "";

    // Cast depending on result type
    if (filterContext.Result is ViewResult)
    {
        var view = (ViewResult)filterContext.Result;
        if (view.Model is BaseModel) modelJSON = view.Model.ToJSON();
    }
    else if (filterContext.Result is PartialViewResult)
    {
        var view = (PartialViewResult)filterContext.Result;
        if (view.Model is BaseModel) modelJSON = view.Model.ToJSON();
    }

    // Render our javascript tag which automatically brings in the file based on the view name
    if (!string.IsNullOrWhiteSpace(viewFolder) && !string.IsNullOrWhiteSpace(modelJSON))
    {
        var js = @"
<script>
    require(['lib', 'controllers/" + viewFolder + @"/" + viewFile + @"'], function(lib, ctrl) {
        lib.BindView(" + modelJSON + @", ctrl);
    });
</script>";

        // Write script
        filterContext.HttpContext.Response.Write(js);
    }

    base.OnResultExecuted(filterContext);
}
 

Don’t worry about all the custom function calls – you’ll find them in the example project download (I’ve also sketched the two view-name helpers just below) – the key points are:

  • find the physical path of the view that we are rendering (e.g. /home/welcome.cshtml)
  • use this path to determine which javascript file we have associated to it (e.g. /scripts/controllers/home/welcome.js)
  • automatically render the <script/> tag at the end of our view – exactly the same as we manually typed it into the HTML example I pasted above
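
Here’s roughly what those two view-name helpers might look like – a sketch that assumes the default /Views/{controller}/{action}.cshtml convention and ignores explicitly-named views (the real versions in the download are more careful):

protected virtual string GetViewFolderName(ActionResult result)
{
    // By convention, the view folder matches the controller portion of the route
    return ((this.RouteData.Values["controller"] as string) ?? "").ToLowerInvariant();
}

protected virtual string GetViewFileName(ActionResult result)
{
    // ...and the view file matches the action
    return ((this.RouteData.Values["action"] as string) ?? "").ToLowerInvariant();
}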

So, this handy method does a few things:

  • it saves me typing
  • it forces me and other developers in the team to store the javascript files in a consistent and predictable structure.  So, if I’m working in Visual Studio on the Welcome.cshtml view, I know immediately where I can find its javascript just by looking at the file name.
  • it provides a clean way for me to do my server-to-client model serialization, which deserves its own section…

Serializing your C# MVC model into your client side (javascript) model

Note the modelJSON variable you can see above.  Because I’m loading my javascript controller from server-side code, I have access to the MVC ViewModel and I am able to serialize it directly into my page.  This is something which few online examples of javascript data-binding frameworks show you – they always start with some model which is hard-coded into your javascript, which is completely impractical in real-life.

In practical terms, this has the other advantage that my server-side MVC models have precisely the same structure as the model I am dealing with in javascript.  This makes it easy for me to understand how my model is formatted when I’m working in javascript.
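
For completeness, the ToJSON() extension that OnResultExecuted() calls can be as simple as this minimal sketch, assuming Json.NET is available (the real one is in the example project):

public static class ModelExtensions
{
    // Serialize any MVC model to a JSON string, ready for embedding in the rendered page
    public static string ToJSON(this object model)
    {
        return model == null ? "" : Newtonsoft.Json.JsonConvert.SerializeObject(model);
    }
}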

BONUS: It would be amazing if I could somehow get some kind of javascript intellisense to work on my models by parsing the C# structure of my MVC models.  I could also use the same mechanism to do compile-time checking of my javascript code.  If anybody can think of a way to do this, please let me know.

Managing your CSS files in a large web application

Another problem I find with large projects is managing CSS files.  One typically has a common.css file and perhaps an admin.css file to try to split it out as required.  But as your project grows, you add more and more fluff to these files and they end up very, very large.  Then, to reduce the initial site load, you think you’ll pull out some targeted CSS classes into a separate file and reference it just in the pages that need it.  Except then you start forgetting which pages need it – and besides, with MVC applications these days you tend to pull in partial views all the time – they have no idea what page they are on and what CSS files they currently have access to.

So, here’s what I’ve finally come up with, and it’s very similar to how I pull in the javascript files above, only this time it uses MVC’s OnResultExecuting() method:

protected override void OnResultExecuting(ResultExecutingContext filterContext)
{
    if (filterContext.Result is ViewResult || filterContext.Result is PartialViewResult)
    {
        // Render a reference to our CSS file
        var viewPath = this.GetViewFolderName(filterContext.Result);
        if (!string.IsNullOrWhiteSpace(viewPath)) this.RenderPartialCssClass(viewPath);
    }

    base.OnResultExecuting(filterContext);
}

Again, ignore the missing method references, they’re in the example project.  Here’s what it’s doing:

  • find the path of the current view (e.g. /home/welcome.cshtml)
  • generate a CSS path name based on this path name (e.g. /content/xhome.min.css)
  • Render a <link/> tag pointing to the CSS file

The <link/> tag is written to the HTML stream before the HTML of the view is rendered, and voila – every time you pull down a view (either with a direct request, or a partial render using AJAX), the first thing it will do is tell the browser what CSS file it needs, and the browser will pop off and download that too.
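
Again, RenderPartialCssClass() ships with the download, but a minimal sketch looks something like this (the x{folder}.min.css naming is just my convention):

protected void RenderPartialCssClass(string viewFolder)
{
    // Map the view folder (e.g. 'home') to its stylesheet (e.g. '/content/xhome.min.css')
    var href = Url.Content("~/content/x" + viewFolder + ".min.css");

    // Write the <link/> tag to the response ahead of the view's own HTML
    this.HttpContext.Response.Write("<link rel=\"stylesheet\" type=\"text/css\" href=\"" + href + "\" />");
}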

Of course, this has its drawbacks:

  • the first time a view in each folder is downloaded, the browser has to make another (blocking) request to get the CSS file.  Remember though that the CSS file is cached so subsequent requests have no additional overhead.
  • it splits the CSS into files which aren’t named or organized according to their use.  For example, another way to organize files would be by function (admin, public etc).  Personally, this doesn’t bother me so I’m happy.

In addition to these files, I also have the usual ‘base’ CSS file, but now it should include only general styles like tables, headings etc.  When you have functionality which is specific to a module (HTML view) then you add it to the relevant file in its own section.  In the example project, you can see that xhome.min.css contains the styles which the welcome.cshtml view uses.

BONUS: the above works perfectly for partial views which are injected into an existing DOM.  However, if you use it for a regular view, then you’ll notice the <link/> tag is inserted at the very top of the document – i.e. above the opening <html/> tag.  Not strictly proper, although I haven’t found any practical problems in any browsers.  Still, if anybody finds an efficient way to render it in the <head/> tag instead, please let me know – I haven’t bothered looking into it.

Dependency injection using require.js

The final excellent technology I use in my front-end architecture is require.js.  This serves two purposes:

  • it allows my various javascript files to load their dependencies on demand instead of pre-loading all my files on the initial page load.  This is absolutely essential if your large application is to perform well.
  • it allows us to mock out files for unit testing, simply by replacing the require-main.js file

I’m not going to go into how require.js works – you can find out all about it on their website.  But again, it is a must-have as far as I’m concerned.

Unit testing

Unfortunately, the example project doesn’t have examples of unit testing.  I usually use Jasmine and Karma to run my tests, and I mock out the require-main.js file to stub out things like my jQuery dependencies.

Conclusion

So, that’s it – my current take on ‘best practice’ MVC architecture for real-world, large-scale web applications.  The next steps I’m looking forward to solving over the next year are:

  • compile time javascript errors
  • introduction of ES6 javascript
  • implementing the proposed Object.observe() pattern in ES7 (which could technically allow me to replace rivets.js but why re-invent the wheel?)
  • Visual Studio ‘knowing’ how javascript and cshtml files are linked, so you can do something akin to ‘view source’ to jump between them
  • a framework to bind my javascript files to non-HTML views – perhaps Android axml files?  Perhaps I’m dreaming….

Seeya

Automatic code documentation based on your C# comments

April 24, 2014

I’ve written a few APIs over the years and the worst part is writing the documentation:

  • it takes extra time
  • it must be updated every time you make changes to your code
  • it is a duplication of work because I already document my code inline anyway

So, here’s a handy utility I wrote which will use reflection to whip through your code and draw out the comments.

Generating XML Documentation

Before proceeding, you must setup your project to generate an XML file of your code comments.  This is done via the Properties –> Build menu in your project (presumably it’s a web project).  See this screenshot below:

 

[Screenshot: the project’s Properties –> Build tab, with the ‘XML documentation file’ option ticked and a path like bin\APIDocumentation.xml]

 

This generates an xml file in the bin directory every time you build.  The file contains all your code comments, ready for parsing by my helper utility.  The format is something like this:

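(The member names below are from my example project – yours will differ.)

<?xml version="1.0"?>
<doc>
    <assembly>
        <name>Web</name>
    </assembly>
    <members>
        <member name="M:Web.Controllers.PersonController.GetPerson(System.String)">
            <summary>
            Gets a person by their username
            </summary>
            <param name="username">The username to look up</param>
        </member>
    </members>
</doc>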

 

So, with this in place, here is the utility class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Web;
using System.Xml;
using System.Xml.Linq;
using Common;

namespace Web.Code
{
    /// <summary>
    /// Creates documentation for our various API methods
    /// </summary>
    public class ApiDocumentationGenerator
    {

        #region Sub classes

        public class Parameter
        {
            public string Name { get; set; }
            public string Type { get; set; }
            public string Description { get; set; }
        }

        public class Method
        {
            public string Name { get; set; }
            public List<Parameter> Parameters = new List<Parameter>();
            public string Summary { get; set; }
        }

        #endregion

        #region Properties

        public List<Method> Methods = new List<Method>();
        private string PathToXmlDocumentation = "";

        private XDocument _XmlDocumentation = null;
        private XDocument XmlDocumentation
        {
            get
            {
                if (_XmlDocumentation == null)
                {
                    _XmlDocumentation = XDocument.Load(this.PathToXmlDocumentation);
                }
                return _XmlDocumentation;
            }
        }

        #endregion

        /// <summary>
        /// Constructor
        /// </summary>
        /// <param name="pathToXmlDocumentationFile"></param>
        public ApiDocumentationGenerator(string pathToXmlDocumentationFile)
        {
            this.PathToXmlDocumentation = pathToXmlDocumentationFile;
        }

        /// <summary>
        /// Generates our classes
        /// </summary>
        public void Generate()
        {
            // BaseController is a class I wrote which all my MVC *Controller classes inherit from. If you don't have a base class, you can just use
            // whatever parent class you know your own API methods sit within. And if there is no parent class, then just get every type in the assembly
            var ass = System.Reflection.Assembly.GetAssembly(typeof(BaseController));

            // Get each class
            foreach (var controller in ass.GetTypes())
            {
                if (controller.IsSubclassOf(typeof(BaseController))) ExtractMethods(controller);
            }
        }

        /// <summary>
        /// Finds the methods in this controller
        /// </summary>
        /// <param name="controller"></param>
        private void ExtractMethods(Type controller)
        {
            foreach (var method in controller.GetMethods())
            {
                // My API methods are decorated with a custom attribute, ApiMethodAttribute, so only show those ones
                var attrs = System.Attribute.GetCustomAttributes(method);

                // Check our attributes show we have an API method
                var isAPIMethod = false;
                foreach (System.Attribute attr in attrs)
                {
                    if (attr is ApiMethodAttribute)
                    {
                        isAPIMethod = true;
                        break;
                    }
                }

                // Skip if not an API method
                if (!isAPIMethod) continue;

                // Parse out properties
                var meth = new Method();
                meth.Name = controller.Name.Replace("Controller", "") + "/" + method.Name;
                this.Methods.Add(meth);

                // Quick hack to detect the XML segment we want - I know that all my API methods are in *Controller classes, so I can just restrict to this
                var memberName = "Controller." + method.Name;

                // Get the methods from our documentation
                var docInfo = (
                    from m in this.XmlDocumentation.Descendants("members").Descendants("member")
                    where m.Attribute("name").Value.Contains(memberName)
                    select new
                    {
                        Summary = m.Descendants("summary").First().Value,
                        Params = m.Descendants("param")
                    }
                ).FirstOrDefault();

                // Now copy the XML back into my method/parameter classes
                if (docInfo != null)
                {
                    meth.Summary = docInfo.Summary;

                    // Add parameters
                    foreach (var param in docInfo.Params)
                    {
                        var p = new Parameter();
                        meth.Parameters.Add(p);
                        p.Name = param.Attribute("name").Value;
                        p.Description = param.Value;
                    }
                }
            }
        }
    }
}

 

Note that this won’t compile for you because it references a custom attribute, ApiMethodAttribute, and my base class, BaseController.  However, you could delete the logic around these and the documentation should still generate.

Now it’s just a matter of calling the class.  I use mine in an MVC ActionResult:

/// <summary>
/// Uses reflection to document our API methods
/// </summary>
/// <returns></returns>
public ActionResult APIDocumentation()
{
    var pathToDocs = HttpContext.Server.MapPath("~/bin/APIDocumentation.xml");
    var model = new ApiDocumentationGenerator(pathToDocs);
    model.Generate();
    return View("admin/apidocumentation", model);
}

And for clarity, I’ll include my View, so you can see how it’s used to render the results to the user:

@model Web.Code.ApiDocumentationGenerator
@{
    ViewBag.Title = "API Documentation";
}

<h2>API Documentation</h2>

@foreach (var method in Model.Methods.OrderBy(x => x.Name))
{
    <h3>@method.Name</h3>
    <p><i>@method.Summary</i></p>
    if (method.Parameters.Any())
    {
        <ul>
            @foreach (var param in method.Parameters)
            {
                <li><strong>@param.Name </strong>@param.Description</li>
            }
        </ul>
    }
}

 

Hope that helps.

Cheesebaron HorizontalScrollView with MvvmCross 3 (Hot Tuna)

February 14, 2014

Many thanks to Cheesebaron and Stuart for their amazing contributions to the Xamarin platform. Cheesebaron made a scrollable horizontal list view at https://github.com/Cheesebaron/Cheesebaron.HorizontalListView back in early 2012. For those interested, I have ported the Cheesebaron HorizontalListView to the latest version of MvvmCross (currently v3).

Hope that helps.

MVC Output Caching using custom FilterAttribute

August 29, 2013

 

As with ASP.Net Forms, MVC offers some out-of-the-box caching with its OutputCacheAttribute; however, as with classic ASP.Net, one quickly realizes its limitations when building complex systems.  In particular, it’s very difficult, and oftentimes impossible, to flush/clear the cache based on various events that happen within your application.
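
For reference, the built-in attribute is used like this – fine, until the day you need to evict that cached entry programmatically:

[OutputCache(Duration = 300, VaryByParam = "none")]
public ActionResult Index()
{
    return View();
}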

For example, consider a main menu which has an ‘Admin’ button for appropriately authorized users.  When your administrator initially views the page, the system will cache the HTML, including the Admin link.  If you later revoke this privilege, the site will continue serving the cached link even though the user is technically no longer authorized to access this part of the site.

Not good.

So, with a little to-ing and fro-ing, I’ve finalized my own FilterAttribute which does this for you.  The advantage of writing your own is that you can pass in whatever parameters you like, as well as have direct access to the current HttpContext, which in turn means you can check user-specific values, access the database – whatever you need to do.

How it works

The attribute essentially consists of just a couple of methods, implementing the IResultFilter and IActionFilter interfaces:

  • OnActionExecuting.  This method fires before your Action even begins.  By checking for a cache value here, we can abort the process before any long-running code in your Action method or View rendering executes
  • OnResultExecuting.  This method fires just before HTML is rendered to our output stream.  It is here that we inject cached content (if it exists).  Otherwise, we capture the output for next time

The code

I’ve commented the code below so you can follow more-or-less what is going on.  I won’t go into too much detail, but needless to say if you copy/paste this straight into your work, it won’t compile due to the namespace references.  I’m also using Microsoft Unity for dependency injection, so don’t be confused by ICurrentUser etc.

Finally, I’ve got a custom cache class, whose source code I haven’t included – just switch out my lines to access your own cache instead.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Web;
using System.Web.Mvc;
using BlackBall.Common;
using BlackBall.Common.Localisation;
using BlackBall.Contracts.Cache;
using BlackBall.Contracts.Enums;
using BlackBall.Contracts.Exporting;
using BlackBall.Contracts.Localisation;
using BlackBall.Contracts.Security;
using BlackBall.Common.Extensions;
using BlackBall.Logic.Cache;


namespace BlackBall.MVC.Code.Mvc.Attributes
{
    public class ResultOutputCachingAttribute : FilterAttribute, IResultFilter, IActionFilter
    {

        #region Properties & Constructors

        private string ThisRequestOutput = "";
        private bool VaryByUser = true;

        private ICurrentUser _CurrentUser = null;
        private ICurrentUser CurrentUser
        {
            get
            {
                if (_CurrentUser == null) _CurrentUser = Dependency.Resolve<ICurrentUser>();
                return _CurrentUser;
            }
        }

        public ResultOutputCachingAttribute(bool varyByUser = true)
        {
            this.VaryByUser = varyByUser;
        }

        private string _CacheKey = null;
        private string CacheKey
        {
            get { return _CacheKey; }
            set { _CacheKey = value; }
        }

        #endregion

        /// <summary>
        /// Queries the context and writes the HTML depending on which type of result we have (View, PartialView etc)
        /// </summary>
        /// <param name="filterContext"></param>
        private void CacheResult(ResultExecutingContext filterContext)
        {
            using (var sw = new StringWriter())
            {
                if (filterContext.Result is PartialViewResult)
                {
                    var partialView = (PartialViewResult)filterContext.Result;
                    var viewResult = ViewEngines.Engines.FindPartialView(filterContext.Controller.ControllerContext, partialView.ViewName);
                    var viewContext = new ViewContext(filterContext.Controller.ControllerContext, viewResult.View, filterContext.Controller.ViewData, filterContext.Controller.TempData, sw);
                    viewResult.View.Render(viewContext, sw);
                }
                else if (filterContext.Result is ViewResult)
                {
                    var view = (ViewResult)filterContext.Result;
                    var viewResult = ViewEngines.Engines.FindView(filterContext.Controller.ControllerContext, view.ViewName, view.MasterName);
                    var viewContext = new ViewContext(filterContext.Controller.ControllerContext, viewResult.View, filterContext.Controller.ViewData, filterContext.Controller.TempData, sw);
                    viewResult.View.Render(viewContext, sw);
                }
                var html = sw.GetStringBuilder().ToString();

                // Add data to cache for next time
                if (!string.IsNullOrWhiteSpace(html))
                {
                    var cache = new CacheManager<CachableString>();
                    var cachedObject = new CachableString() { CacheKey = CreateKey(filterContext), Value = html };
                    cachedObject.AddTag(CacheTags.Project, CurrentUser.CurrentProjectID);
                    if (this.VaryByUser) cachedObject.AddTag(CacheTags.Person, this.CurrentUser.PersonID);
                    cache.Save(cachedObject);
                }
            }
        }

        /// <summary>
        /// The result is beginning to execute
        /// </summary>
        /// <param name="filterContext"></param>
        public void OnResultExecuting(ResultExecutingContext filterContext)
        {
            var cacheKey = CreateKey(filterContext);

            if (!string.IsNullOrWhiteSpace(this.ThisRequestOutput))
            {
                filterContext.HttpContext.Response.Write("<!-- Cache start " + cacheKey + " -->");
                filterContext.HttpContext.Response.Write(this.ThisRequestOutput);
                filterContext.HttpContext.Response.Write("<!-- Cache end " + cacheKey + " -->");
                return;
            }

            // Intercept the response and cache it
            CacheResult(filterContext);
        }

        /// <summary>
        /// Action executing
        /// </summary>
        /// <param name="filterContext"></param>
        public void OnActionExecuting(ActionExecutingContext filterContext)
        {
            // Break if no setting
            if (!Configuration.Current.UseOutputCaching) return;

            // Our function returns nothing because the HTML is not calculated yet - that is done in another Filter
            Func<string, CachableString> func = (ck) => new CachableString() { CacheKey = ck };

            // This is the earliest entry point into the action, so we check the cache before any code runs
            var cache = new CacheManager<CachableString>();
            var cacheKey = new CachableString() { CacheKey = CreateKey(filterContext) };
            var cachedObject = cache.Load(cacheKey, func);
            this.ThisRequestOutput = cachedObject.Value;

            // Cancel processing by setting result to some non-null value. Refer http://andrewlocatelliwoodcock.com/2011/12/15/canceling-the-actionexecutingcontext-in-the-onactionexecuting-actionfilter/
            if (!string.IsNullOrWhiteSpace(this.ThisRequestOutput))
            {
                filterContext.Result = new ContentResult();
            }
        }

        public void OnActionExecuted(ActionExecutedContext filterContext)
        {
        }

        public void OnResultExecuted(ResultExecutedContext filterContext)
        {
        }

        /// <summary>
        /// Creates a unique key for this context
        /// </summary>
        /// <param name="filterContext"></param>
        /// <returns></returns>
        private string CreateKey(ControllerContext filterContext)
        {
            // Append general info about the state of the system
            var cacheKey = new StringBuilder();
            cacheKey.Append(Configuration.Current.AssemblyVersion + "_");
            if (this.VaryByUser) cacheKey.Append(this.CurrentUser.PersonID.GetValueOrDefault(0) + "_");

            // Append the controller name
            cacheKey.Append(filterContext.Controller.GetType().FullName + "_");
            if (filterContext.RouteData.Values.ContainsKey("action"))
            {
                cacheKey.Append(filterContext.RouteData.Values["action"].ToString() + "_");
            }

            // Add each parameter (if available)
            foreach (var param in filterContext.RouteData.Values)
            {
                cacheKey.Append((param.Key ?? "") + "-" + (param.Value == null ? "null" : param.Value.ToString()) + "_");
            }

            return cacheKey.ToString();
        }
    }
}
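
With the attribute in place, usage is just a matter of decorating your actions.  A hypothetical example:

// Cache the rendered HTML of this action, with a separate cache entry per user
[ResultOutputCaching(varyByUser: true)]
public ActionResult MainMenu()
{
    return PartialView("mainmenu");
}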

Alright, hope that helps – there’s nothing like HTML caching to make you feel like the best website builder in the world!

Step by Step Guide to Building a Cross-Platform Application in HTML, CSS & Javascript

January 19, 2012

Back in the days when your computers came in options of the cream, the white, the off-white, the ivory or the beige, it was very frustrating that an application you put so much effort into wasn’t usable on other computers.

You had to make a choice, and my choice was Windows.  It was just when Microsoft .Net came out and we figured it was a pretty good bet, which it was I reckon.

Then came the web…

A year or so later, despite many of our customers still being on dial-up, I moved to the web.  Unfortunately, this came with almost more headaches – Internet Explorer 6 was the king of browsers, but we had a few Netscapes, and this rogue called Firefox was starting to make waves.

Over the last 10 years, while I moved completely away from desktop applications, the browser wars just got worse – even IE couldn’t get versions working nicely together (compatibility mode??? WTF?).

Finally, although IE is still not perfect, it is definitely better and more importantly – you can pretty much ignore it and code just for ‘standards compliant’ browsers – Firefox and Chrome predominantly. 

And then the iPhone came along.

…then came the iPhone…

Actually, the iPhone web browser is incredible – I’m more confident of my websites running on the iPhone than I am in Internet Explorer.  And on top of this, they have to cram your site into a tiny little screen.  I’m very impressed.

Of course, the iPhone really comes into its own with its applications (as opposed to websites).  Not only do they run beautifully, but the entire App Store infrastructure exposes a developer’s work to millions of ready and willing credit card holders, eager to part with a couple of dollars just for the pleasure of the App Store buying experience.

Unfortunately, the iPhone also forced you to code in yet another language – Objective C – and I’m sorry, but I can barely keep up with .Net let alone learn another language built on C of all things.  I guess I wasn’t the only one to mourn this because there came a slew of WYSIWYGs and cross-platform compilers (Moonlight anybody?).  And to the top rose PhoneGap – a platform that essentially ‘wraps’ a web application in Objective C to convert it into a regular iOS application.  Not only that, it will also wrap it in the relevant languages to support Android, BlackBerry, Windows Phone, Symbian etc…  Such a simple concept, but just amazing.

…and the desktop comes full circle…

Windows 8 purportedly supports native HTML/Javascript/CSS applications. 

Woah – so suddenly, my old-school web-coding skills can be deployed on the web, major mobile devices and 90% of the world’s desktops?  Amazingly, yeah – I think they can.

The HTML/CSS/Javascript Application

So, sorry for the long preamble – I just need noobs to appreciate that this next decade of development shouldn’t be taken for granted.

The point is that now you can build an application in ONE language and deploy to multiple platforms.  However, it’s not quite as easy as that – there are many restrictions to what can be built and how.  In this article I’m going to walk you right through from start to finish.  I’ve built a few of these applications by now (the most recent is www.stringsof.me) so I’ll point out the pitfalls and hopefully save you a bit of time.

Know the Goal

I should point out that it is FAR easier to build an application with the knowledge of its intended use.  If it’s going to be used on iPhone or wrapped in PhoneGap, then you can test it incrementally on these platforms as you go.  Far far easier than trying to retro-fit an existing web application.  In fact, I recommend to anybody that no matter how big your web application is, you just start from scratch. Copy/paste what you need from the old one, but start with a clean slate – after all, this app will be used for years and years so you better make it a nice one.  So, here’s our goal:

[Diagram: HTML Architecture Overview – one HTML/Javascript/CSS codebase deployed as a website, a mobile website and wrapped mobile apps]

 

The Application

The application consists of three parts – an HTML file, one or more Javascript files and one or more CSS files. 

Below I’ve created a completely stripped-down application to try to indicate the core functionality, however if you want to see the full-blown thing in action, I suggest you View Source on m.stringsof.me

Index.html

<html>
<head>
    <script src="jquery.js" type="text/javascript"></script>
    <script src="phonegap.js" type="text/javascript"></script>
    <script src="settings.js" type="text/javascript"></script>
    <script src="app.js" type="text/javascript"></script>

    <link href="style.css" rel="stylesheet" type="text/css" />
    <link href="settings.css" rel="stylesheet" type="text/css" />
</head>
<body>
    <div id="MyContainer">
        Hi everybody, welcome to my App.
    </div>
    <script>
        var app = new App('MyContainer');
        app.Start();
    </script>
</body>
</html>

Nothing particularly flash here, but of note:

  • We are using jQuery, but that is just my preference
  • The PhoneGap.js file is required for our various AppStore installations, but on the web server we replace it with just a stub file
  • The Settings.js and Settings.css files enable us to manage the minor variations between our various platforms.  For example, iOS requires you to ask people before sending them push messages, Android doesn’t care, and push messages are irrelevant on a web-based app

      Settings.js

      The Settings file contains platform-specific variables.

      var Settings = function(){
          return {
              SiteRoot:'http://api.stringsof.me/',
              ConfirmPushNotificationsOnStartup: false
          }
      }();

      Data.js

      Data provides connectivity to our server.  This class could be stored in the main App.js file, but I’ve split it out here because in a bigger application you’d have lots of Javascript files and you don’t want circular references if you can help it (not that Javascript minds, grrrrr).

      var Data = function () {
          var that = this;
          this.SiteRoot = Settings.SiteRoot;
          return { 
              CallJSON : function(route, params, callback) { 
                  var triggerName = new Date().getTime().toString();
                  $("body").bind(triggerName, function(e, result) {  
                      callback(result);
                  }); 
      
                  // Add the JSON callback to the parameters
                  params._JsonPCb = 'Data.OnCallJSON';
                  params._JsonPContext = "'" + triggerName + "'";
      
                  // Make the JSON call
                  $.ajax({
                      url: that.SiteRoot + route,
                      data: params,
                      type: 'GET',
                      dataType: "jsonp"
                  });
              },
      
              OnCallJSON : function(result, triggerName) {
                  $("body").trigger(triggerName, result);
                  $("body").unbind(triggerName); }
          };
      } (); 

      App.js

      Encapsulates our main application code.  This file will usually get pretty big, but you can split it up later depending on your coding style.

      var App = function(containerID){
          this.ContainerID = containerID;
          var that = this;
          this.Start = function(){
              var $con = $('#' + that.ContainerID);
      
              // Get user
              Data.CallJSON('Person/GetPerson', {userName: 'ben'}, function(person){
                  $con.html('Welcome ' + person.FirstName);
              });
      
          }
          return {
              Start: that.Start
          }
      };

    Pretty simple.  All it does is get the user by their username (I have hard-coded ‘ben’ in this example).  On the callback, you display their name in our main <div/> element.

    One flashy thing I’ve done is use my method for calling JSONP with callbacks.  You can read up on it here, or take my word for it that it works.

    Data Access

    Most of us are used to dealing with server-side languages such as ASP.Net or PHP.  Despite some efforts (MVC perhaps), these technologies still leave the HTML dependent on, and to a certain extent aware of, the code that generated it.  For example, ASP.Net is heavily dependent on ViewState.  This may be fine for your web application – even your mobile web application – but it is worthless in your iPhone or Android app.

    Without a server-side language to create and bind our HTML, we must defer to Javascript.  And to get the data required to bind, we must make some kind of web service call.  Because we are using Javascript, it makes sense to return JSON-formatted objects.
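
    In ASP.Net MVC, for example, a plain same-domain JSON endpoint is a one-liner (DataService here is just a stand-in for your own data layer):

        public ActionResult GetPerson(string username)
        {
            var person = new DataService().GetPerson(username);
            return Json(person, JsonRequestBehavior.AllowGet);
        }

    As we’ll see next though, this isn’t enough once the caller lives outside your domain.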

    Cross-Domain Data Access Using JSONP

    Using JSON would be all you had to do (and in fact frameworks like ASP.Net MVC have excellent JSON support baked in), except that we want to use the same web service (and therefore the same returned objects) in our iPhone/Android application.  Consider our mobile web application:

    Essentially therefore, our application is getting JSON objects from the same domain (m.stringsof.me) as where it resides.  Now consider our iPhone/Android application:

    • our web service (written in ASP.Net for example) is at http://m.stringsof.me/service
    • our application is stored on the phone – it doesn’t have a concept of ‘domain’

    It doesn’t have a domain, which means we are making what the browser treats as a cross-domain request.  Unfortunately, despite many many modern web applications relying on this (check out the source of the Google home page), all web browsers consider it a gross security risk and your Javascript will error if you try to do it.  Enter JSONP…

    JSONP (JSON with Padding) gets around this issue with a crafty little trick which I’ve covered in another article.  You need to know how JSONP works in order to build your mobile application, so I suggest you brush up.

    Returning JSONP from an ASP.Net MVC Application

    As an aside, if you are an ASP.Net MVC user, you can create your own JsonPResult : JsonResult to return from your Actions:

        public class JsonPResult : JsonResult
        {
            public override void ExecuteResult(ControllerContext context)
            {
                var response = context.HttpContext.Response;
                var request = context.HttpContext.Request;
                
                // Open the JSONP javascript function
                var jsonpCallback = request.Params["_jsonpcb"];
                response.Write(jsonpCallback + "(");
    
                // Defer to base class for rendering the javascript object. Because we
                // have opened a javascript function first, it gets rendered as the first parameter
                base.ExecuteResult(context);
    
                // Add any additional parameters - this is not part of JSONP, but
                // a construct I've written to allow me to pass extra 'context' to the server and back
                var extraParams = request.Params["_jsonpcontext"];
                if (!string.IsNullOrEmpty(extraParams)) response.Write(extraParams);
    
                // Close the JSONP function
                response.Write(");");
            }
        }

     

    Using this, you can return JSONP directly from your regular MVC Controllers:

        public class PersonController : Controller
        {
            public ActionResult GetPerson(string username) {
                var person = new DataService().GetPerson(username);
                var result = new JsonPResult {Data = person};
                return result;
            }
        }

     

    Awesome huh?

    Rendering your Data

    Because I have returned JSON from the server, the next step is to render it to HTML.  Without going into too much detail, I personally use a couple of methods:

    • HTML templating.  I store HTML in a separate file (or hidden DIV in the Index.html page) and then bind sections of it using jQuery
    • use jQuery to create html such as $('body').append($('<div></div>').html('Hi there'));

    Why not just return HTML from the server?

    Good question, glad I thought of it.  Technically, there’s no reason why you shouldn’t.  In fact, if you did, you wouldn’t need to jump through all those cross-domain hoops with JSON etc.

    Frameworks like ASP.Net MVC actually encourage you to do this with their ‘View’ system and when I built the www version of stringsof.me (www.stringsof.me) I used these and they worked great. 

    When I moved to a mobile version of the application however, I found that this was a little short sighted.  What if I expose my objects to a third party who wants to use them in, for example, a Facebook plugin?  The HTML I return for ‘GetPerson()’ is not likely to suit their purposes so best to return the object and let them format it themselves.  Or what if the returned HTML expects Javascript to scroll it into place, but the requesting device doesn’t support Javascript?

    Although it is easier to specifically write your HTML (and even binding if you are using MVC), I eventually concluded that plain JSON objects are the most versatile mechanism.  By using Javascript as the rendering agent, you can make decisions based on the state of the client (such as width or support for location-based queries) which aren’t necessarily available to a server-rendered page.

    Media Queries

    Now that you are getting your data, the next step is adjusting the presentation between the various platforms.  For example, a full-blown website may have a big background image and the mobile version may remove this to account for low-bandwidth phones visiting it.  The fashionable way to do this these days is via CSS Media Queries.

    The basic premise is that your CSS reacts according to the type of media that is displaying the page.  For example:

    body{
        background-image:url('bg.jpg');
    }
    @media screen and (max-device-width : 320px){
        body{
            background-image:none;
        }
    }

    The code above says:

    • for the body of the page, use a background image of bg.jpg
    • however, if the device’s screen is 320px wide or narrower, do not show a background image

    The beauty of this is that it is platform independent – you don’t have to detect an iPhone or Android, you just have to know what its screen resolution is.  You can also switch out based on orientation (if the device is held upright or on its side):

    @media screen and (orientation: portrait){
        body{
            background-image:url('bg_narrow.jpg');
        }
    }
    @media screen and (orientation: landscape){
        body{
            background-image:url('bg_wide.jpg');
        }
    }

    Now, if the user turns their iPhone on its side, the background image will switch out to one that better suits its new dimensions.  Snazzy huh?

    Testing your Code

    Now that you have your HTML, Javascript and CSS working, you need to test it.  I have found that the best mechanism is Firefox using the Firebug plugin.  This will get you 99% of the way there, even for your iOS/Android applications later.

    You can test your media queries simply by resizing your browser window to the appropriate dimensions – getting a pretty decent idea of how your site will look on an iPhone compared to a full-size desktop browser.  Check it out by opening m.stringsof.me in a new browser window now and resizing.

    Deploying your Application as a Regular Website

    This is the easy one you’re probably used to – just create the website on IIS/Apache or whatever you are using and copy the HTML/Javascript/CSS files over.  The first time you deploy, remember to create Settings.css and Settings.js files and set them appropriately – they shouldn’t be in the main solution because they differ for other deployments.

    Remember you’ll also need to create an empty (or stubbed) PhoneGap.js file, as per the file reference in your Index.html file.  If you don’t, your site will still run but your visitors will get an unprofessional ‘404 Page Not Found’ error.

    Deploying your Application as a Mobile Website

    If you’ve used your CSS media queries correctly, your main website will double as your mobile website – no changes are required. 

    Deploying your Application as an iPhone and/or iPad App

    Ah, the part you’ve probably been waiting for all along. 

    • download PhoneGap from www.phonegap.com and follow their instructions for creating a new project in Xcode on your Mac (sorry, you need a Mac computer to build an iPhone app)
    • PhoneGap has comprehensive help files, so you are best off following them, but essentially the next step is to copy your HTML/Javascript/CSS files into the ‘www’ folder that PhoneGap provides.  Again create new Settings.js and Settings.css files accordingly.
    • note that PhoneGap also includes a PhoneGap.*.js file which contains the Javascript wrapper code to access the device hardware such as the camera.  Make sure the file is named exactly the same as that referenced in your Index.html file.

    Compiling and building your PhoneGap application is beyond the scope of this article sorry. 

    Beginner’s tips:

    • iOS is case-sensitive, so the <link/> and <script/> file references in your Index.html file must match the case of the files themselves.  If your CSS refers to images or folders, these are also case-sensitive. This took me about two hours to figure out – too much PC for me I guess.
    • I use DropBox to synchronize changes between my PhoneGap application and my website application.  Even though you are using the same HTML/Javascript/CSS files, they are copies of each other so a change in one must be copied to the other.  If you’re a PC user, you may also like to use File Backup Pro to quickly prepare and copy your changes to your server.

    Deploying your Application to Android (and other mobile devices)

    This uses PhoneGap again, and is the same as the iPhone installation above.  Again, the PhoneGap documentation does a much better job of explaining this than I can.

    Limitations

    The solution I have presented involves building a website predominantly in Javascript, and I have a few problems with this:

    • there is no compile-time checking.  I have colleagues that rave about Script#, but I don’t like the additional learning curve.
    • mainly because of point 1 above, it is hard to enforce architecture or development styles.  This makes working in a multi-developer team environment quite a bit tougher
    • search engines do not execute Javascript which means all they see on your web page is a little bit of HTML wrapper.  This means your website will not rank in Google, Yahoo etc.  You must therefore invest more in other SEO methods such as Site Maps and friendly URLs
    • JSONP can only be executed using GET requests, so if you need to upload a huge amount of data, such as an image, you are out of luck.  In m.stringsof.me, I had to deal with this when uploading the image that the user draws on the <canvas/> element.  I eventually solved it by breaking the base64-encoded representation of the image into 1000kb chunks and sending them to the server one after the other.  The server remembers what it gets and joins them together into a proper image at the end.  (This is why you see a percentage completion status when saving your work – each increment is a chunk of image).  You can View Source on the page to see how it was done, if you like.

    Summary

    Everybody likes a summary section so they know it’s the end of the article.  So, there you go.

    Using jQuery Binding to make cross-domain calls with Closure Callbacks

    December 18, 2011

    It was hard to come up with a title for this post because I somehow needed to convey the awesomeness of a solution to a problem which I don’t think a lot of people realise they have.

    Quite simply, it is to do with the asynchronous manner in which we make JSONP calls (if you’re not sure how JSONP works, I recommend this simple article from Rick Strahl).  As you know (or will after reading the article), JSONP relies on dynamically injecting a <script/> tag into our document.  Within this tag are two parts of javascript:

    1. the object you are returning from the server (in JSON format)
    2. a function wrapper which ‘pads’ the object (the ‘P’ in JSONP).

    For example, if I wish to return a Person object from the server, I would stream down:

    1. on the server, create the object in JSON (using Response.Write() for example in .Net):

    {FirstName: 'Larry', LastName: 'Phillips'}

    2. wrap the object in a function named ‘mycallback’:

    mycallback({FirstName: 'Larry', LastName: 'Phillips'})


    On the awaiting HTML page, I would have already written the following javascript function:

    function mycallback(person){
        alert(person.FirstName);
    }

     

    The script tag is injected into the page, the response is sent back, the browser automatically runs the Javascript within the script tag, the awaiting function is thereby called, and voila – an alert() box is shown.

    Problem: the callback is decoupled from the caller

    This looks tidy enough with one example, but when you have dozens or even hundreds of callbacks (such as I have on www.stringsof.me, which prompted this solution) it becomes very hard to manage because for every server call you need to write a separate corresponding callback.  It’s not shown in the example, but often the callback needs to tie back to the caller in some way to alter its state, which makes things even more complicated.

    This is extra frustrating because if you are working with jQuery (for example), you are dealing with nice ‘inline’ callbacks when using regular AJAX calls:

    $.ajax({
        url: that.SiteRoot + route,
        data: params,
        type: 'GET',
        dataType: "json",
        success: function(person){
            alert(person.FirstName);
        }
    });

     

    See?  The callback is written right within the AJAX call.  For programmers, it is tidy and easy to follow.  Unfortunately, if you specify the dataType property as ‘jsonp’, the callback doesn’t work.

    Unfortunately, we can’t do this with a cross-domain call….

    The reason the callback above doesn’t work for jsonp/cross-domain is (presumably) because it is not technically an AJAX call.  From Javascript’s point of view, it is just injecting a new DOM element into the page (the <script/> tag).  Once the tag’s src is downloaded, Javascript has already moved on to the next task.  It is the hack I described above which allows us to link the two.

    …until now!  Using jQuery’s bind and trigger

    Enter jQuery’s bind and trigger functionality.  Observe…if I write…

    $('body').bind('foo', function(){ alert("I'm called"); });

     

    …and then at any time later, I write….

    $('body').trigger('foo');

     

    …the popup appears.  So I saw this, and I thought, perhaps I can fake a callback between the two JSONP events.  So, here goes…

     

     

    var Data = function () {
        var that = this;

        return {
            CallJSON : function(route, params, callback) {
                // The trigger name must be unique for each call, so that multiple (almost) concurrent AJAX
                // calls can be made and assigned back to the same trigger each time
                var triggerName = new Date().getTime().toString();
                params._JsonPCbTrigger = triggerName;

                // The traditional JSONP callback is provided, and points to the OnCallJSON function below
                // Note that because it is hard-coded (and therefore the same each time), it could equally be hard-coded on the server
                params._JsonPCb = 'Data.OnCallJSON';

                // Use jQuery to prepare for a trigger called triggerName
                $("body").bind(triggerName, function(e, result) {
                    // Within the callback, we ignore the 'e' parameter (a jQuery artifact), and
                    // just pass the result straight through to the callback we passed into the main function
                    callback(result);
                });

                // Make the JSON call as usual
                $.ajax({
                    url: 'http://www.stringsof.me/' + route,
                    data: params,
                    type: 'GET',
                    dataType: "json"
                });
            },

            // This is the generic handler for *all* JSONP calls.
            OnCallJSON : function(result, triggerName) {
                $("body").trigger(triggerName, result);

                // We unbind afterwards, simply to release memory
                $("body").unbind(triggerName);
            }
        };
    } ();

    Now, anywhere in my code, I can call (for example):

    var params = { personID: 1 };

    Data.CallJSON('GetPersonByID', params, function(person) {
        alert(person.FirstName);
    });

     

    The Server Code

    That actually concludes the article, but for completeness and in case readers are still a little confused over JSONP, I’ll include the server code that is required to make this work.  It’s in C#, but in essence it is simply writing regular Javascript to the response.

    // Get the variables from the Request that we have sent up from Javascript
    var callBackTrigger = context.Request.Params["_JsonPCbTrigger"];
    var callbackFunctionName = context.Request.Params["_JsonPCb"];
    var personID = int.Parse(context.Request.Params["personid"]);

    // Defer to service to get the requested person
    var person = new PersonManager().GetPerson(personID);

    // Encode to JSON format e.g. {FirstName:'Larry', LastName:'Phillips'}
    var personJSON = Newtonsoft.Json.JsonConvert.SerializeObject(person);

    // The callback function on the client receives TWO parameters - the result that
    // we want, and the name of the trigger that jQuery needs for the .trigger().
    // Note the quotes around the trigger - it must arrive on the client as a javascript string
    var parameters = personJSON + ", '" + callBackTrigger + "'";

    // Finally, wrap the parameters in the function so that it is automatically
    // executed when the client renders it
    var functionCall = callbackFunctionName + "(" + parameters + ")";

    // Write to response
    context.Response.Write(functionCall);
    context.Response.End();

    Architecting Cascading Style Sheets (CSS)

    August 26, 2011

    There is one big problem with CSS, and that is its lack of ‘structure’.  When you have a 20-page project, fine – dump everything in one file.  When you have hundreds of pages, across half a dozen different design agencies, things start to become very messy.

    And I hate mess.

    A client of mine is in this situation.  Over three years, they have had 6 different designers, each with different visual styles – but, more importantly for me – each with different CSS requirements and implementations.

    The impact of this has been bothering me ever since I began, but I’ve always had more ‘architectural’ problems to deal with.  However, I decided enough is enough when I realized our total CSS payload was 1Mb, and our biggest CSS file consisted of over 5,000 lines.

    Architecting CSS

    There are a few things I want from my CSS:

    • I’m mostly a C# developer these days, so I like the object-oriented thing – I get it, it’s tidy, it’s re-usable.
    • The ‘cascading’ part of CSS is wonderful, but it is a double-edged sword.  When some designer submits a stylesheet with a selector like “.main h2”, it will inevitably affect portions of the site they weren’t even involved in.
    • Problems with CSS only manifest themselves when you actually view it – in other words, they are runtime errors.  And when you have hundreds of pages in the site, it is often the users that report a styling problem, not our testers.

    Close, but not close enough

    Unfortunately, I can’t claim to fix any of these problems.  I admire the ambition of projects like .less (www.dotlesscss.org/) but something about these didn’t really feel right to me.  I guess it’s because there is no mainstream support (mainstream support is very important to me as we have a constant stream of new developers coming through – standards are my friend).

    I’ve also read a lot of people’s opinions on CSS – how we need to move away from semantics, how we need to move towards semantics, etc….but again, I need something that has plenty of community support on the internet, and that my new developers will be able to hit the ground running with.

    A brilliant compromise

    So, in the interim, while I search for the perfect solution, I’ve come up with a pretty-damn-good one.  It consists of a single user control.  You might call it WonderControl, but I call it simply CssContainer.ascx:

    ASCX

    <asp:PlaceHolder runat="server" ID="PHStyle" />

    Pretty simple huh?  One line.

    Code-behind

    public partial class CssContainer : BaseUserControl
        {
    
            [
            PersistenceMode(PersistenceMode.InnerProperty),
            TemplateContainer(typeof(CssContainer)),
            TemplateInstance(TemplateInstance.Single),
            ]
            public ITemplate Style { get; set; }
    
            /// <summary>
            /// Init
            /// </summary>
            /// <param name="e"></param>
            protected override void OnInit(EventArgs e)
            {
                // Instantiate any CSS in our template
                this.EnableViewState = false;
                base.OnInit(e);
                if (Style != null) Style.InstantiateIn(PHStyle);
            }
    
            /// <summary>
            /// Pre render
            /// </summary>
            /// <param name="e"></param>
            protected override void OnPreRender(EventArgs e)
            {
                base.OnPreRender(e);
    
                // If, for some reason, the control is called twice, we ignore
                if (!HasRegisteredOnThisRequest())
                {
                    this.Visible = true;
                    this.RegisterStyles();
                }
                else
                {
                    this.Visible = false;
                }
            }
    
            /// <summary>
            /// This control is designed to render styles only once per request - duplicate styles are a waste of resources
            /// </summary>
            /// <returns></returns>
            private bool HasRegisteredOnThisRequest()
            {
    
                // Has this control been rendered before?
                Control parent = this.Parent;
                while (true)
                {
                    if (parent == null) break;
                    if (parent.GetType().IsSubclassOf(typeof(BaseUserControl)) || parent.GetType().IsSubclassOf(typeof(PageBase)))
                    {
                        break;
                    }
                    parent = parent.Parent;
                }
                if (parent == null)
                {
                    // I'll throw an exception here, but really you could just render it anyway - it would just result in multiple renderings per page
                    throw new Exception("CssContainer may only be used on classes inheriting from BaseUserControl or PageBase");
                }
    
                // The count is kept in the PageBase so that we may retain it per-request, but share across all instances of this CssContainer control
                if (!this.Page.GetType().IsSubclassOf(typeof(PageBase)))
                {
                    // I'll throw an exception here, but really you could just render it anyway - it would just result in multiple renderings per page
                    throw new Exception("CssContainer may only be used on pages inheriting from PageBase");
                }
    
            // Cast the page so we can reach the shared per-request count (the check above guarantees this is safe)
            var pageBase = (PageBase)this.Page;

            // This control is probably specified multiple times on any one page. Here, we record whether it's been done already or not
            if (pageBase.CssContainerCount.ContainsKey(parent.GetType())) return true;
            pageBase.CssContainerCount[parent.GetType()] = 1;
                return false;
            }
    
            /// <summary>
            /// Register the script contents, if any, to the page
            /// </summary>
            private void RegisterStyles()
            {
                // Pull template string OUT of the template we rendered in initially
                var sb = new StringBuilder();
                var tw = new StringWriter(sb);
                var writer = new HtmlTextWriter(tw);
                this.PHStyle.RenderControl(writer);
                writer.Close();
                tw.Close();
                var css = sb.ToString();
                if (string.IsNullOrWhiteSpace(css)) return;
    
                // User controls can be in multiple folder paths, so we must normalize any URLs
                css = css.Replace("~/", WebHelpers.GetFullUrlForPage(""));
    
                // Compress CSS etc
                css = Yahoo.Yui.Compressor.CssCompressor.Compress(css);
    
                // Render to page
                this.PHStyle.Visible = false;
                this.Controls.Add(new LiteralControl("<style type=\"text/css\">" + css + "</style>"));
            }
        }

    Okay, a little flashier, but really very simple.  I have tried to comment the code intuitively so you can follow it, but the general idea is:

    1. the user of the control specifies regular CSS in a template (called Style)
    2. we render that content into the PHStyle placeholder, just as you would a normal template
    3. we later get that CSS back out of the template, and render to the page between regular <style/> tags

    So, the end usage is something like this:

    <wc:Css runat="server">
    <Style>
    .column-con {
        overflow: hidden;
    }
    .column-con .column {
        background-color: #fff;
    }
    </Style>
    </wc:Css>

    The intention is that you drop this CSS directly into the page/usercontrol that is using it.

    Why this is bad…

    Yip, I understand what the problems are with this:

    1. It is inline, meaning page requests are bigger
    2. It can’t be cached on the browser
    3. It can only be re-used as far as the page or usercontrol it is sitting on

    But the answer to all of these is simple: if the pain of breaking these rules, for this set of styles, is too great, then just put them back in your regular external CSS style sheet.

    This control is not intended to replace your external sheets, but rather complement them.

    So, why it’s good…

    There’s only one reason: it’s tidy.  Where we have an obscure user control or page whose styles depart significantly from our usual styles, this is a brilliant way of isolating the styles without cluttering the main style sheet.

    And because it is rendered in code (as opposed to using direct <style/> tags), I know that I have a lot of control over what I might want to do with this in the future.  For example, I could:

    1. fake an external ‘file’ reference using a handler, thereby allowing the resulting request to be cached in the browser
    2. minify the CSS (actually, I’m already doing this in the example above)
    3. perform replacements such as variable names – just like they do at www.dotlesscss.org/
    4. I can easily find the styles which apply to my HTML, by just browsing to the top of the UserControl – no more ‘Search Entire Project’ to track down a CSS source (update: Resharper 6 has a fairly good implementation of ‘Go To Definition’ for CSS)
    5. etc etc etc

    But until I know how to handle my styles, this solution allows me to contain and manage them far better than regular ‘client-side’ CSS files and embedded <style/> tags allow.

    What about re-usable external style sheets?

    Previously, where we had isolated CSS, we would create separate CSS files and then pull them into the request ‘on-demand’.  Technically, this worked just as well as my solution above – in fact, it was better in that the resulting file would be cached on the client browser and thereby improve overall visit speed.

    But this can result in dozens (or many more) of CSS files cluttering up your project, confusing your developers, slowing your project load time etc.  And besides, as I said above, I know that later, when I’m less busy, I’ll be able to create ‘external’ references using ASHX handlers.

    But embedded styles are terrible!

    In theory, yes – but in practice, definitely not.  One of the worst things a developer can do is get caught up in ‘best practice’.  As long as you understand the reasoning behind the rules, you’re able to break them when you find the reasoning doesn’t apply to your current project.

    Besides, go do a ‘view source’ on the home page of almost any major site – you’ll see a lot of embedded styles.

    See? I’m in good company.

    Conclusion

    This is not the most technically-brilliant piece of work I’ve done, but I have to say it’s one of the most exciting.  I love being able to recklessly add styles exactly how I like with little regard for structure – content in the knowledge that future-Ben will be able to easily tidy it up one day.

    I strongly urge other developers to consider this type of structure for any medium-large projects they are undertaking.

    iOS fails AJAX POST using full URL of short domain name

    July 7, 2011 6 comments

    Well, here’s a weird one which I’ll try to share, although I can’t seem to find a heading which Google will pick up to let others know I’ve solved their problem.

    I was building an MVC/jQuery app and had a form posting to the server as follows:

    var model = {
        AnswerText: '',
        QuestionID: 60,
        QuestionDate: '2011-2-1'
    };

    $.ajax({
        url: siteRoot + 'Question/SaveAnswer',
        data: model,
        type: 'POST',
        success: function (data) {
            alert('success');
        }
    });

    The system worked fine on localhost, and fine on production using our old .com domain name.

    However, when we switched to a new domain name (www.stringsof.me), the system started failing, but only when browsed from the iPad – other browsers continued to work fine (although I admit I didn’t test it in Safari).

    The problem, as it turns out, was the siteRoot variable which prefixes the AJAX URL.  Although you can’t see it in the code above, this variable is set to the full domain name of the site (e.g. http://www.stringsof.me:80/).  When I changed siteRoot to just an empty string (making the request URL relative), it began working fine.

    Whew.

    Comparing the performance of AppFabric against Sql Server

    May 19, 2011 7 comments

    I’ve been doing a lot of work lately implementing distributed caching systems for various clients. During my initial scoping, I found a lot of information out there comparing the performance between cache types (AppFabric, Memcached etc), however I could find very little comparing the performance of caching vs the actual database (in my case, AppFabric vs Sql Server 2005), just that it’s “much better”.

    Despite this lack of statistical information, I went ahead with caching anyway (after all, ‘much better’ sounds pretty good), and because I’m a Microsoft shop, I left it tidy by selecting AppFabric to ease the load on Sql Server.

    Now that I’ve implemented the code to a decent extent through a particular website, I’ve been able to conduct my own performance benchmarks and here I’d like to share the results.

    Note in particular that these are what I would call real-world results. I didn’t attempt to isolate cache access based on particular SQL queries or size of the item. I didn’t reset the cache between web pages. I simply used the website in the same way I’d expect my users to, and recorded the overall times.

    Methodology

    My methodology was fairly simple and certainly prone to a margin of error:

      • I installed an ASP.Net website on a development machine which I knew I had sole access to. Sql Server was installed on the same box as the web server; however, I had a separate (and dedicated) server containing the cache.
      • I switched the cache off (via AppSettings configuration)
      • I browsed through a set number of web pages, without refreshing the client browser or anything tricky
      • I made a note of the pages I travelled through, then turned the cache on and went through the same pages again

    Around my ‘cache access’ block I had some simple code which counted the number of milliseconds the application spent trying to Save/Load the items (or bypassing the cache if it was off) and logged them to a text file.
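
    For illustration, the timing harness amounted to something like this (a minimal sketch with invented names – the real code just accumulated the milliseconds and flushed them to a text file):

    using System;
    using System.Diagnostics;

    // A sketch of the timing harness - the inner load/save delegates would call
    // AppFabric, HttpCache, or (with the cache off) go straight to Sql Server
    public class TimedCache
    {
        private readonly Func<string, object> _load;
        private readonly Action<string, object> _save;

        public long LoadMilliseconds;   // accumulated over the test run
        public long SaveMilliseconds;

        public TimedCache(Func<string, object> load, Action<string, object> save)
        {
            _load = load;
            _save = save;
        }

        public object Load(string key)
        {
            var sw = Stopwatch.StartNew();
            var item = _load(key);
            sw.Stop();
            LoadMilliseconds += sw.ElapsedMilliseconds;
            return item;
        }

        public void Save(string key, object item)
        {
            var sw = Stopwatch.StartNew();
            _save(key, item);
            sw.Stop();
            SaveMilliseconds += sw.ElapsedMilliseconds;
        }
    }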

    Results

    Data access is split into two parts – Save and Load. Note that Sql Server does not have the burden of a Save method, and AppFabric of course only calls Save the first time the item is accessed.

              AppFabric     Sql Server     Ratio
    Load      81,960ms      372,318ms      22%
    Save      2,265ms       NA             NA
    Total     84,225ms      372,318ms      22%

    Interesting results:

        • AppFabric increases my data-access speeds by almost 5x over Sql Server. Thank goodness for that, and of course the longer you cache an item, the greater this efficiency will get. Again, this is not saying that AppFabric accesses data 5x faster than Sql Server on a call-by-call basis, it is the overall time benefit of implementing caching
        • AppFabric has an additional overhead in saving a newly calculated item back to the cache. In my case it was 2,265ms for the 81,960ms I spent loading items – a ratio of about 1/36. The longer I ran the test (or cached the items before they expired), the better this ratio would have become

    To conclude

    AppFabric clearly has performance gains over Sql Server – in my limited test it made my data access 5x faster, and a longer test expiry period would have made this much much greater.

    Note also that the website I was using had (test) database tables of fewer than ~1,000,000 records, although some fairly funky SQL queries were being made here and there.  For an even larger database such as Facebook’s (or my client’s, hopefully) the SQL queries would take longer, but (I suspect) the caching times would remain exactly the same – another point in favour of AppFabric.

    Uh-oh…one more thing (and it sucks)

    Now that you’ve read this far, I’ll tell you the real reason I ran these tests, which is that I was convinced that AppFabric actually made my site slower. Not because SQL was better, but because the previous caching system I switched out was in fact the good old ASP.Net HttpCache utility. Like AppFabric, this cache is entirely in-memory but because it doesn’t concern itself with regions, tags, (and much more), it actually runs much much much faster. Let me type that again so Google picks it up – AppFabric is not nearly as fast as the built-in ASP.Net HttpCache utility. I ran the same test as above, but using the HttpCache:

              AppFabric     Sql Server     HttpCache
    Load      81,960ms      372,318ms      38,866ms
    Save      2,265ms       NA             9ms
    Total     84,225ms      372,318ms      39,875ms

    Yes, that’s right – AppFabric is 2.1x slower than the built-in ASP.Net caching utility. Very sad, especially when I see my site slow down after all my hard work. However, I’m sticking with AppFabric:

        • HttpCache resets whenever the IIS App Pool resets, when you redeploy and any other time it feels like it. I suspect if I ran this test over a long time (say, a week) then the results would be closer
        • There is no tagging in HttpCache – you have to roll your own by integrating into the key – and the subsequent parsing to ‘find by tag’ is slow
        • HttpCache cannot be expanded easily to hold gigabytes of data – it simply won’t work when my sites expand
        • The distributed nature of AppFabric allowed me to build a separate ‘admin’ tool where I can look into the cache from another website, count it, clear it etc
        • And if I’m honest, the final reason is that this type of caching is all in vogue at the moment and it’s something I feel I should be part of

    One thing I’m wondering about is actually using a combination of both – extremely high-access queries (like user permissions) could be HttpCache, leaving AppFabric to handle the larger datasets. Bit of an art form I reckon.
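
    If I ever try it, I imagine the shape would be something like this (a rough sketch under my own assumptions – the ICache abstraction and the key-prefix routing are mine, not code from either product):

    public interface ICache
    {
        T Load<T>(string key) where T : class;
        void Save<T>(string key, T item) where T : class;
    }

    // Routes very hot, small items (like user permissions) to the in-process cache
    // and leaves everything else to the distributed cache
    public class CombinedCache : ICache
    {
        private readonly ICache _local;        // e.g. a wrapper over HttpCache
        private readonly ICache _distributed;  // e.g. a wrapper over AppFabric

        public CombinedCache(ICache local, ICache distributed)
        {
            _local = local;
            _distributed = distributed;
        }

        private ICache Route(string key)
        {
            // Assumption for the sketch: hot items are flagged by a key prefix -
            // a real implementation would need a smarter policy
            return key.StartsWith("hot:") ? _local : _distributed;
        }

        public T Load<T>(string key) where T : class
        {
            return Route(key).Load<T>(key);
        }

        public void Save<T>(string key, T item) where T : class
        {
            Route(key).Save(key, item);
        }
    }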

    Best practice architecture for professional Microsoft.Net websites

    April 26, 2011 8 comments

    (Update: this article deals with back-end architecture. If you’re looking for HTML/JS architecture, check out this article.)

    Architecture technology is constantly evolving and there is a lot to keep up with. Over my career, I’ve reviewed a large number of approaches and today I’d like to draw a line in the sand and present to you all my current ‘best’ architecture.

    This solution has been used more-or-less in my last three major applications (most notably www.rate-it.co) and is the result of many hundreds of hours of investigating, comparing, and performance testing, as well as tens of thousands of hours backing real-world websites.

    There are a few reasons why I’m sharing it:

    • there’s no loss to me if I can help other people build their sites
    • I’m only as good as the knowledge I can attain.  By sharing mine, I hope that others will share theirs and thereby help me evolve my craft
    • this architecture will probably be obsolete within six months or so.  It will be an interesting record for me to see how my approach changes over time

    The project architecture

    I have created a working application (Visual Studio 2012, .Net 4.5) which summarizes the concepts I’m about to explain.  To install:

    • download the code from here
    • run the included setup.sql file to generate the database structure expected by your code
    • update the BlackBallArchitecture.Web\Connectionstrings.config file as appropriate

    First of all, let’s note the dependencies between the various projects:

    (Diagram: Ideal Microsoft Web Architecture)

    There was a time when I built web applications with two layers – a front-end ‘web’ project with my ASPX and ASCX files etc, and a data-access layer which basically parsed stored procedure results into DataTables.  This architecture was flawed for a number of reasons, but its biggest fault was that there was no separation between UI and business logic, nor even between UI and the data source (you need to know column names if you are accessing a data table).

    These days, I use the structure above.  To quickly summarize:

    • Common.dll is used to store generic functions (such as Extension methods) which are of use to the entire application.
    • Data.dll provides access to a Sql Server database using the Entity Framework Code-First Model (see below).  Data storage is often synonymous with business logic (e.g. saving a person record into the database), however their separation is essential if we are to unit test later.
    • Contracts.dll is where I record the ‘shape’ of the application.  It holds an interface to every business logic class (in Logic.dll), as well as all my entity definitions, such as Person or SystemLog.
    • Logic.dll is where I store all my business logic, such as email validation and accessing the data store (although note that the Logic layer does not actually have a reference to the data store)
    • Dependency.dll is used by Unity (see below) to map the Contracts to the actual business logic
    • Web.dll is the front-end web application, containing html files etc.
    • In addition, the application contains Test.dll and CodeGen.dll assemblies, which I have excluded from this diagram to avoid clutter.  I will explain them later.

    So, quite a few assemblies for an application that essentially just saves a person record to a database – in fact, I’ve gone from 2 to 6.  Why?

    Dependency Injection using Microsoft Unity

    The first thing you may notice above is that the Web assembly has no knowledge of either the data store or the Logic assembly.  In fact, all it has knowledge of is the logic structure (via the Contracts assembly) and a mechanism for accessing these contracts using the Dependency assembly.

    I have done this using Microsoft Unity, which allows me to use a little trick called Dependency Injection.

    If you refer to the Home/Welcome method, this means that instead of getting my list of people via…

    var people = new Logic.DataManagers.PersonManager().GetPerson(null);

    …I instead use:

    var people = Dependency.Resolve<IPersonManager>().GetPerson(null);
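
    For context, Dependency.Resolve is just a thin facade over the Unity container – something along these lines (a sketch, not the exact class from the sample project, which loads its mappings from the separate unity.config file):

    using Microsoft.Practices.Unity;
    using Microsoft.Practices.Unity.Configuration;

    public static class Dependency
    {
        private static readonly IUnityContainer Container = new UnityContainer();

        static Dependency()
        {
            // Reads the interface-to-implementation mappings from configuration
            Container.LoadConfiguration();
        }

        public static T Resolve<T>()
        {
            return Container.Resolve<T>();
        }
    }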

    The unity.config file (in the demo project) tells Unity that IPersonManager should be mapped to my PersonManager class.  By doing this, I have de-coupled the site’s dependency on our actual Logic implementation, instead using just an interface to gain access to my data.  This has a number of gnarly benefits:

    • if I find a bug, I can deploy another assembly with a class that implements IPersonManager, with the bug fixed in it
    • I can unit-test the code in total isolation from the actual business logic – an essential tenet of unit testing

    I can create different implementations of IPersonManager, based on varying project requirements.  For example, I once built a site which had two implementations – one in New Zealand and one in England.  The New Zealand site showed full access to everybody’s name whereas England had stronger privacy controls and would only show the first name.  I simply created two classes implementing IPersonManager, each with its own FormatName().  New Zealand’s implementation was:

    public string FormatName(Person p){
        return (p.FirstName + " " + p.LastName).Trim();
    }

    and England’s was:

    public string FormatName(Person p){
        return p.FirstName.Trim();
    }

    I included both the implementations in the same Logic.dll assembly, and simply modified the unity.config file of each website to suit.

    Another great use of Unity is when you want to temporarily replace some business logic without a full redeploy.  I will sometimes litter my code with logging functionality – for example, in the PersonManager.SavePerson() function, you can see this:

    Dependency.Resolve<ILogger>().Log(firstName + " record updated");

    But my unity.config file has TWO concrete implementations of ILogger:

    <typeAlias alias="ILogger" type="BlackBallArchitecture.Contracts.ILogger, BlackBallArchitecture.Contracts" />
    <typeAlias alias="DatabaseLogger" type="BlackBallArchitecture.Logic.DatabaseLogger, BlackBallArchitecture.Logic" />
    <typeAlias alias="NoLogger" type="BlackBallArchitecture.Logic.NoLogger, BlackBallArchitecture.Logic" />

    To save database stress, for the most part I turn logging off during production:

    <type type="ILogger" mapTo="NoLogger" />

    However, if there is a problem I can easily switch it on by modifying the file to:

    <type type="ILogger" mapTo="DatabaseLogger" />

    No recompile or redeploy required!
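
    For clarity, the two concrete implementations behind that mapping would look something like this (a sketch – the property name on SystemLog is an assumption):

    // Mapped in production to save database stress - it simply swallows the message
    public class NoLogger : BlackBallArchitecture.Contracts.ILogger
    {
        public void Log(string message)
        {
            // Deliberately does nothing
        }
    }

    // Mapped when diagnosing a problem - writes the message to the database
    public class DatabaseLogger : BlackBallArchitecture.Contracts.ILogger
    {
        public void Log(string message)
        {
            var log = new SystemLog { Summary = message };  // 'Summary' is an assumed property name
            Dependency.Resolve<BlackBallArchitecture.Contracts.DataManagers.ISystemLogManager>().SaveLog(log);
        }
    }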

    A misuse of Unity?

    The other thing that Unity allows me to do is create circular references between my assemblies.  For example, from my Logic layer, I am able to access the current user’s PersonID even though it is stored in a website cookie:

    private int Log(SystemLog logSummary)
    {
        // We can use Unity to call back to whichever storage mechanism we are using for the 'current user' info, without actually knowing what it is
        var personID = Dependency.Resolve<ICurrentUser>().PersonID;
    
    
        // Log
        logSummary.PersonID = personID;
        logSummary.WhenOccurred = DateTime.Now;
        var svc = Dependency.Resolve<BlackBallArchitecture.Contracts.DataManagers.ISystemLogManager>();
        svc.SaveLog(logSummary);
        return logSummary.SystemLogID;
    }

    ICurrentUser, meanwhile, is implemented in my Web.dll assembly as follows:

    public class AuthenticationManager : BlackBallArchitecture.Contracts.Security.ICurrentUser
    {
        public int? PersonID
        {
            get
            {
                if (!IsAuthenticated) { return null; }
                return int.Parse(HttpContext.Current.User.Identity.Name);
            }
        }
    }

    I know what you’re thinking – who is this handsome renegade?!  Well, there is a distinct difference between the above, and another method which I’ve seen in some Logic layers:

    var personID = int.Parse(HttpContext.Current.User.Identity.Name);

    When developers use this method, they are making an assumption that their business logic will always depend on or be run from a web application.  If they were to later reference this from another application (such as Silverlight or Windows Forms), it would fall apart.

    However, using Unity, if we were to reference this from a new application, we could just implement a new version of ICurrentUser and pull the PersonID variable from a different location such as in-memory or from file etc.  Very cool.
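
    For instance, a Windows Forms host or a test harness could register something as simple as this against ICurrentUser (a hypothetical implementation, purely for illustration):

    public class InMemoryCurrentUser : BlackBallArchitecture.Contracts.Security.ICurrentUser
    {
        // Set once at sign-in; the Logic layer neither knows nor cares that this
        // no longer comes from a cookie
        public int? PersonID { get; set; }
    }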

    Data access using LINQ2SQL and the Entity Framework Code First Model

    LINQ2SQL and I got off to a bad start because I moved into so many brown-field applications where it had been naively implemented and was killing the database.  For example, with ignorant use of foreign key relationships and a Repeater control, I once saw a web page making 50,000 database requests.

    If you’re working on a project by yourself, you can learn and remember the limitations of LINQ and it works well.  However, I usually need to plan for a team of developers – experienced and inexperienced – as well as anticipate the day when I leave and others come in and have to pick up the project.  For these reasons, I didn’t adopt LINQ until mid 2010, instead opting for stored procedures, which at least gave me 100% control over my SQL.

    That was until I discovered the Entity Framework Code First Model, probably the biggest game-changer for my architectures in the last few years.

    Non-data-bound Entities

    EF CF allows me to design my own entities (POCO) and bind my LINQ queries back to them.  Previously, LINQ returned its own entity types which meant that if you were using them in your web layer (for example, to render a list of people), your project ended up with a SQL dependency right from the web layer.  It also meant that you had to keep a (static) database connection open throughout the duration of the page call, so that changes to your entity could be ‘remembered’ by LINQ and saved back to the data store (this is most often done by storing a LINQ ‘context’ in your Global.asax class and opening it in Application.BeginRequest).  Urgh.

    With EF, you declare your entities completely separately from the data-access.  In my example project, I have declared them in the Contracts.Entities.Data namespace.  Note that there are not even any attributes involved here – there is nothing to suggest that they are going to be used in LINQ queries (actually, EF does support various attributes, I just avoided them).

    I then reference these entities back in the Data.dll assembly, using them as return types for my data context.
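
    To make that concrete, the entity and the data context end up looking something like this (a simplified sketch – the context name here is invented):

    // In Contracts.dll - a plain POCO, with no hint that it will ever be data-bound
    namespace BlackBallArchitecture.Contracts.Entities.Data
    {
        public class Person
        {
            public int PersonID { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
        }
    }

    // In Data.dll - only this assembly knows the entity is bound to LINQ queries
    namespace BlackBallArchitecture.Data
    {
        using System.Data.Entity;
        using BlackBallArchitecture.Contracts.Entities.Data;

        public class ArchitectureContext : DbContext
        {
            public DbSet<Person> People { get; set; }
        }
    }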

    The reason I put my entities in a separate assembly was so that I could use them from the web layer (they are return types from my logic layer) without requiring a reference to the Data.dll assembly, making it impossible for the developer to (inadvertently or not) call the data store without going through our Logic layer.

    Caching

    Data entities which are completely independent of the data source are also essential for caching – after all, you can’t store an open database connection on disk on another server.  See the section on Caching below for more details.

    Force your developers to explicitly call for more data

    This harks back to the example before about the 50,000 database hits in one page.  By removing sub-classes (which are what foreign keys resolve to in LINQ), you change this code…

    foreach(var person in people) {
        Response.Write(person.FirstName + " has " + person.Orders.Count() + " orders");
    }

    …to this:

    foreach(var person in people) {
        var totalOrders = new OrderManager().GetOrdersForPerson(person.PersonID);
        Response.Write(person.FirstName + " has " + totalOrders + " orders");
    }

    Both of these examples result in the database being called to get the total orders for each person, so if there are 50,000 people in your list, you will call the database 50,000 times.  The difference is that this behavior is not immediately apparent when looking at the first example.  With the second method, a good developer will realize what is happening and take actions to resolve it (perhaps by getting all order counts in one call before the loop starts).
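
    That fix might look something like this (a sketch – IOrderManager and GetOrderCountsForPeople are hypothetical names, returning a dictionary keyed by PersonID):

    // One database call up front, instead of one per person
    var orderCounts = Dependency.Resolve<IOrderManager>()
        .GetOrderCountsForPeople(people.Select(x => x.PersonID).ToList());

    foreach (var person in people)
    {
        Response.Write(person.FirstName + " has " + orderCounts[person.PersonID] + " orders");
    }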

    When I first started implementing this design for my clients, their developers were fairly unimpressed, and rightly so – it adds work.  But I consider it absolutely essential if you want to use LINQ2SQL in a multi-developer environment.

    T4 Templates for Code Generation

    One thing that soon becomes apparent when using the Code First model and Unity is that for each database entity you are dealing with, you have quite a number of classes:

    • an entity (e.g. Person)
    • a data-access class
    • an interface for the data-access class
    • ObjectContexts and Configurations for Code First
    • any number of other ‘helpers’ like my GetOrCreatePerson() method

    Typing these by hand would be very time consuming and error prone, so I use T4 Templates to do the heavy lifting for me.  T4 templates have been around for a long time, but were little known until things like the Entity Framework came out.  They are awesome in their simplicity.  T4 templates will only get you so far however.  In order to truly unleash the power of code generation, I also use a fantastic third-party tool called the T4 Toolbox which allows you to:

    • re-use and parameterize your templates, in much the same way you would with a regular C# class
    • generate multiple files from a single generator – no more files with 100,000 lines
    • deploy your generated files across multiple projects – an essential requirement for my design, although I suppose you could do it using multiple templates (one for each project) if you enjoyed wasting time

    The result is the CodeGen.dll assembly you have in the example project.  Every time the database structure changes, just open and save the Controller.tt file.  It queries the database and generates the files based on the structure.

    Of course, you may not wish your code to match the database structure – no problem, just write the various classes by hand.

    Caching

    The first thing to remember about caching is that SQL Server is a very good cache.  Its sole purpose is to remember things and return them to your application.

    However, once your site traffic picks up, you will come to realize that those fancy data queries are awfully time consuming for what results in just a couple of objects returned, so at this point, you need to cache.

    The attached project structure just uses the HttpApplication cache, built in to Microsoft.Net.  This cache is actually fantastically fast (see my previous post on cache performance comparison), but it has limitations like not being able to access it between web applications, nor can you distribute to other servers etc.  I’m not going to compare caches in this blog, but just remember:

      • you can’t cache a LINQ2SQL object that is still connected to its data store (hence my Code First POCO model is excellent for this)
      • caching should be done in the Logic layer – the web layer should have no idea where the objects come from (see the sketch below)
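
    As a sketch of that second point, the caching sits inside the manager class where the web layer can’t see it (the method shape follows the GetPerson example further down; the key scheme and LoadPersonFromStore helper are my own inventions):

    public Person GetPerson(int personID)
    {
        var key = "Person_" + personID;

        // Check the HttpApplication cache first
        var cached = (Person)System.Web.HttpRuntime.Cache[key];
        if (cached != null) return cached;

        // Cache miss - run the real query, then store the disconnected POCO
        var person = LoadPersonFromStore(personID);  // hypothetical helper wrapping the LINQ query
        System.Web.HttpRuntime.Cache.Insert(key, person);
        return person;
    }
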
    Unit Testing

    Now, I actually find unit testing extremely boring and often it is a waste of time.  However, when working in a multi-developer environment, tools such as unit testing are an invaluable insurance policy against sloppy development and miscommunication.

    A core tenet of unit testing is to isolate (and test) the minimum amount of functionality at once, and this means reducing dependencies between classes as much as possible.  For example, consider this function which returns a requested person…

    public Person GetPerson(int personID){
        var person = new LinqContext().Persons.FirstOrDefault(x => x.PersonID == personID);
        return person;
    }
    A very small and simple function and easy to test.  Except that it depends on the database (LinqContext).  This means that in order to test this function, you must first set up a database, including inserting appropriate test data.  Although possible, it kind of defeats the aforementioned tenet.

    So this brings me back to Dependency Injection and my beloved Code First model.  In the attached example project, I simply replace the database-bound IRepository with a memory-bound IRepository.  Then, when I call Dependency.Resolve<IPersonManager>().GetPerson(), instead of using LINQ2SQL on my database, it uses LINQ-to-Objects on my in-memory collection.  My logic is tested without any database required.

    This is actually amazing – unit testing aside, I’ve actually switched out my data store from a database to in-memory without changing a single line of my data-access code.  In theory, I could switch in an XML-based data store or shove things directly into the cloud (if somebody would build a LINQ2Cloud provider).  This is surely the whole purpose of LINQ, but it wasn’t practical until Microsoft gave us the Code First Model and POCO binding.

    Almost makes me want to actually write unit tests.
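
    The memory-bound swap is about as simple as it sounds – something like this (the IRepository shape is assumed from the description above):

    using System.Collections.Generic;
    using System.Linq;

    public interface IRepository
    {
        IQueryable<Person> Persons { get; }
    }

    // Bound in unit tests in place of the database-backed repository
    public class MemoryRepository : IRepository
    {
        private readonly List<Person> _persons = new List<Person>
        {
            new Person { PersonID = 1, FirstName = "Test", LastName = "Person" }
        };

        public IQueryable<Person> Persons
        {
            get { return _persons.AsQueryable(); }
        }
    }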

    Other Notes

    Broken Project References

    Above, I mentioned that our web layer has no reference to the Data or Logic layers, which is great.  Unfortunately, this means that when you compile the project, Visual Studio does not copy the DLLs into the bin folder, and so when you run it you get errors where the site can’t reference a requested class.  To circumvent this, I simply added some post-build commands to copy the files manually:

    copy "..\..\BlackBallArchitecture.Logic\bin\$(ConfigurationName)\BlackBallArchitecture.Logic.dll" "$(TargetDir)"
    copy "..\..\BlackBallArchitecture.Data\bin\$(ConfigurationName)\BlackBallArchitecture.Data.dll" "$(TargetDir)"

    Resharper

    Ah Resharper, what would we do without you?  With all these files and interfaces flying around, I find Resharper an absolute must for quickly navigating and refactoring my code.  Just spend the money guys.

    MVC

    This article dealt mainly with the solution layers behind the UI – business logic, testing and data access.  Unfortunately that’s only part of the story and the web layer still needs work.  In particular, it would be nice to see an MVC architecture separate the UI from the ‘UI logic’ (or UX, I suppose) – the code-behind model restricts the re-usability of our UX by tying it to ASPX markup (update: I’ve got a front-end architecture now).

    Summary – see it in action

    I’ve tried to be as clear as possible, but reading back, it’s pretty hard to explain everything all at once.  If you’re like me, the best way is to download the code and play around.  Even then, the true benefits of the model may not be apparent until you build a large system, or invite other developers to work alongside you.  Remember, you can also see it in action at www.rate-it.co.

    Anyway, I hope it’s been thought-provoking and hopefully of some use to some of you.  Again, if you have any improvements or even (especially) if you hate the architecture, please leave a comment below.
