Archive

Posts Tagged ‘CodeProject’

Cheesebaron HorizontalScrollView with MvvmCross 3 (Hot Tuna)

February 14, 2014 Leave a comment

Many thanks to Cheesebaron and Stuart for their amazing contributions to the Xamarin platform. Cheesebaron made a scrollable horizontal list view at https://github.com/Cheesebaron/Cheesebaron.HorizontalListView back in early 2012. For those interested, I have ported the Cheesebaron HorizontalListView to the latest version of MvvmCross (currently v3).

Hope that helps.

MVC Output Caching using custom FilterAttribute

August 29, 2013 Leave a comment

 

As with ASP.Net Forms, MVC offers some out-of-the-box caching with its OutputCacheAttribute. However, as with classic ASP.Net, one quickly realizes its limitations when building complex systems.  In particular, it is very difficult, and often impossible, to flush/clear the cache based on various events that happen within your application.
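For reference, the built-in attribute is typically applied like this (a minimal example – note there is no hook for evicting the entry early when your data changes):

public class HomeController : Controller
{
    // Cache the rendered HTML for an hour, one copy regardless of parameters.
    // Once cached, there is no built-in way to evict this entry early - the
    // limitation this article works around.
    [OutputCache(Duration = 3600, VaryByParam = "none")]
    public ActionResult Index()
    {
        return View();
    }
}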

For example, consider a main menu which has an ‘Admin’ button for appropriately authorized users.  When your administrator initially views the page, the system will cache the HTML, including the Admin link.  If you later revoke this privilege, the site will continue serving the cached link even though the user is no longer authorized to access this part of the site.

Not good.

So, with a little to-ing and fro-ing, I’ve finalized my own FilterAttribute which does this for you.  The advantage of writing your own is that you can pass in whatever parameters you like, as well as have direct access to the current HttpContext, which in turn means you can check user-specific values, access the database – whatever you need to do.

How it works

The attribute essentially consists of just a couple of methods, implementing the IResultFilter and IActionFilter interfaces:

  • OnActionExecuting.  This method fires before your Action even begins.  By checking for a cache value here, we can abort the process before any long-running code in your Action method or View rendering executes
  • OnResultExecuting.  This method fires just before HTML is rendered to our output stream.  It is here that we inject cached content (if it exists).  Otherwise, we capture the output for next time

The code

I’ve commented the code below so you can follow more-or-less what is going on.  I won’t go into too much detail, but needless to say if you copy/paste this straight into your work, it won’t compile due to the namespace references.  I’m also using Microsoft Unity for dependency injection, so don’t be confused by ICurrentUser etc.

Finally, I’ve got a custom cache class, whose source code I haven’t included – just switch out my lines to access your own cache instead.
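For orientation, here is roughly the shape of those cache classes – hypothetical stand-ins only (the real implementations aren't included), so substitute your own cache store:

using System;

// Hypothetical stand-ins for the custom cache classes used below - a sketch of
// their shape only, so the attribute code is readable; swap in your own cache.
public enum CacheTags { Project, Person }

public class CachableString
{
    public string CacheKey { get; set; }
    public string Value { get; set; }

    // Tags let entries be flushed in groups later (e.g. everything for one person)
    public void AddTag(CacheTags tag, int? id) { /* record tag against this entry */ }
}

public class CacheManager<T> where T : CachableString
{
    public void Save(T item) { /* write to the cache store */ }

    // Return the cached item if present; otherwise build an empty one via 'creator'
    public T Load(T key, Func<string, T> creator)
    {
        /* look up key.CacheKey in the cache store first */
        return creator(key.CacheKey);
    }
}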

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Web;
using System.Web.Mvc;
using BlackBall.Common;
using BlackBall.Common.Localisation;
using BlackBall.Contracts.Cache;
using BlackBall.Contracts.Enums;
using BlackBall.Contracts.Exporting;
using BlackBall.Contracts.Localisation;
using BlackBall.Contracts.Security;
using BlackBall.Common.Extensions;
using BlackBall.Logic.Cache;


namespace BlackBall.MVC.Code.Mvc.Attributes
{
    public class ResultOutputCachingAttribute : FilterAttribute, IResultFilter, IActionFilter
    {
        #region Properties & Constructors

        private string ThisRequestOutput = "";
        private bool VaryByUser = true;

        private ICurrentUser _CurrentUser = null;
        private ICurrentUser CurrentUser
        {
            get
            {
                if (_CurrentUser == null) _CurrentUser = Dependency.Resolve<ICurrentUser>();
                return _CurrentUser;
            }
        }

        public ResultOutputCachingAttribute(bool varyByUser = true)
        {
            this.VaryByUser = varyByUser;
        }

        private string _CacheKey = null;
        private string CacheKey
        {
            get { return _CacheKey; }
            set { _CacheKey = value; }
        }

        #endregion

        /// <summary>
        /// Queries the context and writes the HTML depending on which type of result we have (View, PartialView etc)
        /// </summary>
        /// <param name="filterContext"></param>
        private void CacheResult(ResultExecutingContext filterContext)
        {
            using (var sw = new StringWriter())
            {
                if (filterContext.Result is PartialViewResult)
                {
                    var partialView = (PartialViewResult)filterContext.Result;
                    var viewResult = ViewEngines.Engines.FindPartialView(filterContext.Controller.ControllerContext, partialView.ViewName);
                    var viewContext = new ViewContext(filterContext.Controller.ControllerContext, viewResult.View, filterContext.Controller.ViewData, filterContext.Controller.TempData, sw);
                    viewResult.View.Render(viewContext, sw);
                }
                else if (filterContext.Result is ViewResult)
                {
                    var view = (ViewResult)filterContext.Result;
                    var viewResult = ViewEngines.Engines.FindView(filterContext.Controller.ControllerContext, view.ViewName, view.MasterName);
                    var viewContext = new ViewContext(filterContext.Controller.ControllerContext, viewResult.View, filterContext.Controller.ViewData, filterContext.Controller.TempData, sw);
                    viewResult.View.Render(viewContext, sw);
                }
                var html = sw.GetStringBuilder().ToString();

                // Add data to cache for next time
                if (!string.IsNullOrWhiteSpace(html))
                {
                    var cache = new CacheManager<CachableString>();
                    var cachedObject = new CachableString() { CacheKey = CreateKey(filterContext), Value = html };
                    cachedObject.AddTag(CacheTags.Project, CurrentUser.CurrentProjectID);
                    if (this.VaryByUser) cachedObject.AddTag(CacheTags.Person, this.CurrentUser.PersonID);
                    cache.Save(cachedObject);
                }
            }
        }

        /// <summary>
        /// The result is beginning to execute
        /// </summary>
        /// <param name="filterContext"></param>
        public void OnResultExecuting(ResultExecutingContext filterContext)
        {
            var cacheKey = CreateKey(filterContext);

            if (!string.IsNullOrWhiteSpace(this.ThisRequestOutput))
            {
                filterContext.HttpContext.Response.Write("<!-- Cache start " + cacheKey + " -->");
                filterContext.HttpContext.Response.Write(this.ThisRequestOutput);
                filterContext.HttpContext.Response.Write("<!-- Cache end " + cacheKey + " -->");
                return;
            }

            // Intercept the response and cache it
            CacheResult(filterContext);
        }

        /// <summary>
        /// Action executing
        /// </summary>
        /// <param name="filterContext"></param>
        public void OnActionExecuting(ActionExecutingContext filterContext)
        {
            // Break if no setting
            if (!Configuration.Current.UseOutputCaching) return;

            // Our function returns nothing because the HTML is not calculated yet - that is done in another Filter
            Func<string, CachableString> func = (ck) => new CachableString() { CacheKey = ck };

            // This is the earliest entry point into the action, so we check the cache before any code runs
            var cache = new CacheManager<CachableString>();
            var cacheKey = new CachableString() { CacheKey = CreateKey(filterContext) };
            var cachedObject = cache.Load(cacheKey, func);
            this.ThisRequestOutput = cachedObject.Value;

            // Cancel processing by setting result to some non-null value. Refer http://andrewlocatelliwoodcock.com/2011/12/15/canceling-the-actionexecutingcontext-in-the-onactionexecuting-actionfilter/
            if (!string.IsNullOrWhiteSpace(this.ThisRequestOutput))
            {
                filterContext.Result = new ContentResult();
            }
        }

        public void OnActionExecuted(ActionExecutedContext filterContext)
        {
        }

        public void OnResultExecuted(ResultExecutedContext filterContext)
        {
        }

        /// <summary>
        /// Creates a unique key for this context
        /// </summary>
        /// <param name="filterContext"></param>
        /// <returns></returns>
        private string CreateKey(ControllerContext filterContext)
        {
            // Append general info about the state of the system
            var cacheKey = new StringBuilder();
            cacheKey.Append(Configuration.Current.AssemblyVersion + "_");
            if (this.VaryByUser) cacheKey.Append(this.CurrentUser.PersonID.GetValueOrDefault(0) + "_");

            // Append the controller name
            cacheKey.Append(filterContext.Controller.GetType().FullName + "_");
            if (filterContext.RouteData.Values.ContainsKey("action"))
            {
                cacheKey.Append(filterContext.RouteData.Values["action"].ToString() + "_");
            }

            // Add each parameter (if available)
            foreach (var param in filterContext.RouteData.Values)
            {
                cacheKey.Append((param.Key ?? "") + "-" + (param.Value == null ? "null" : param.Value.ToString()) + "_");
            }

            return cacheKey.ToString();
        }
    }
}
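Applying it is then just like the built-in attribute – for example (a hypothetical controller):

public class ProductController : Controller
{
    // Cache the rendered HTML separately for each user
    [ResultOutputCaching]
    public ActionResult Index()
    {
        return View();
    }

    // Content identical for everybody can share one cache entry
    [ResultOutputCaching(varyByUser: false)]
    public ActionResult About()
    {
        return View();
    }
}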

Alright, hope that helps – there’s nothing like HTML caching to make you feel like the best website builder in the world!

Step by Step Guide to Building a Cross-Platform Application in HTML, CSS & Javascript

January 19, 2012 3 comments

Back in the days when your computers came in options of the cream, the white, the off-white, the ivory or the beige, it was very frustrating that an application you put so much effort into wasn’t usable on other computers.

You had to make a choice, and my choice was Windows.  It was just when Microsoft .Net came out and we figured it was a pretty good bet, which it was I reckon.

Then came the web…

A year or so later, despite many of our customers still being on dial-up, I moved to the web.  Unfortunately, this came with almost as many headaches – Internet Explorer 6 was the king of browsers, but we had a few Netscapes, and this rogue called Firefox was starting to make waves.

Over the last 10 years, while I moved completely away from desktop applications, the browser wars just got worse – even IE couldn’t get versions working nicely together (compatibility mode??? WTF?).

Finally, although IE is still not perfect, it is definitely better and more importantly – you can pretty much ignore it and code just for ‘standards compliant’ browsers – Firefox and Chrome predominantly. 

And then the iPhone came along.

…then came the iPhone…

Actually, the iPhone web browser is incredible – I’m more confident of my websites running on the iPhone than I am in Internet Explorer.  And on top of this, they have to cram your site into a tiny little screen.  I’m very impressed.

Of course, the iPhone really comes into its own with its applications (as opposed to websites).  Not only do they run beautifully, but the entire App Store infrastructure exposes a developer’s work to millions of ready and willing credit card holders, eager to part with a couple of dollars just for the pleasure of the App Store buying experience.

Unfortunately, the iPhone also forced you to code in yet another language – Objective C – and I’m sorry, but I can barely keep up with .Net let alone learn another language built on C of all things.  I guess I wasn’t the only one to mourn this because there came a slew of WYSIWYGs and cross-platform compilers (Moonlight anybody?).  And to the top rose PhoneGap – a platform that essentially ‘wraps’ a web application in Objective C to convert it into a regular iOS application.  Not only that, it will also wrap it in the relevant languages to support Android, BlackBerry, Windows Phone, Symbian etc…  Such a simple concept, but just amazing.

…and the desktop comes full circle…

Windows 8 purportedly supports native HTML/Javascript/CSS applications. 

Woah – so suddenly, my old-school web-coding skills can be deployed on the web, major mobile devices and 90% of the world’s desktops?  Amazingly, yeah – I think they can.

The HTML/CSS/Javascript Application

So, sorry for the long preamble – I just need noobs to appreciate that this next decade of development shouldn’t be taken for granted.

The point is that now you can build an application in ONE language and deploy to multiple platforms.  However, it’s not quite as easy as that – there are many restrictions to what can be built and how.  In this article I’m going to walk you right through from start to finish.  I’ve built a few of these applications by now (the most recent is www.stringsof.me) so I’ll point out the pitfalls and hopefully save you a bit of time.

Know the Goal

I should point out that it is FAR easier to build an application with the knowledge of its intended use.  If it’s going to be used on iPhone or wrapped in PhoneGap, then you can test it incrementally on these platforms as you go.  Far far easier than trying to retro-fit an existing web application.  In fact, I recommend to anybody that no matter how big your web application is, you just start from scratch. Copy/paste what you need from the old one, but start with a clean slate – after all, this app will be used for years and years so you better make it a nice one.  So, here’s our goal:

(Figure: HTML architecture overview)

The Application

The application consists of three parts – an HTML file, one or more Javascript files and one or more CSS files. 

Below I’ve created a completely stripped-down application to try to indicate the core functionality, however if you want to see the full-blown thing in action, I suggest you View Source on m.stringsof.me

Index.html

<html>
<head>
    <script src="jquery.js" type="text/javascript"></script>
    <script src="phonegap.js" type="text/javascript"></script>
    <script src="settings.js" type="text/javascript"></script>
    <script src="app.js" type="text/javascript"></script>

    <link href="style.css" rel="stylesheet" type="text/css" />
    <link href="settings.css" rel="stylesheet" type="text/css" />
</head>
<body>
    <div id="MyContainer">
        Hi everybody, welcome to my App.
    </div>
    <script>
        var app = new App('MyContainer');
        app.Start();
    </script>
</body>
</html>

Nothing particularly flash here, but of note:

  • We are using jQuery, but that is just my preference
  • The PhoneGap.js file is required for our various App Store installations, but on the web server we replace it with just a stub file
  • The Settings.js and Settings.css files enable us to manage the minor variations between our various platforms.  For example, iOS requires you to ask people before sending them push messages, Android doesn’t care, and push messages are irrelevant on a web-based app

      Settings.js

      The Settings file contains platform-specific variables.

      var Settings = function(){
          return {
              SiteRoot: 'http://api.stringsof.me/',
              ConfirmPushNotificationsOnStartup: false
          }
      }();

      Data.js

      Data provides connectivity to our server.  This class could be stored in the main App.js file, but I’ve split it out here because in a bigger application you’d have lots of Javascript files and you don’t want circular references if you can help it (not that Javascript minds, grrrrr).

      var Data = function () {
          var that = this;
          this.SiteRoot = Settings.SiteRoot;
          return {
              CallJSON : function(route, params, callback) {
                  var triggerName = new Date().getTime().toString();
                  $("body").bind(triggerName, function(e, result) {
                      callback(result);
                  });

                  // Add the JSONP callback to the parameters
                  params._JsonPCb = 'Data.OnCallJSON';
                  params._JsonPContext = "'" + triggerName + "'";

                  // Make the JSONP call
                  $.ajax({
                      url: that.SiteRoot + route,
                      data: params,
                      type: 'GET',
                      dataType: "jsonp"
                  });
              },

              OnCallJSON : function(result, triggerName) {
                  $("body").trigger(triggerName, result);
                  $("body").unbind(triggerName);
              }
          };
      } ();

      App.js

      Encapsulates our main application code.  This file will usually get pretty big, but you can split it up later depending on your coding style.

      var App = function(containerID){
          this.ContainerID = containerID;
          var that = this;
          this.Start = function(){
              var $con = $('#' + that.ContainerID);

              // Get user
              Data.CallJSON('Person/GetPerson', {userName: 'ben'}, function(person){
                  $con.html('Welcome ' + person.FirstName);
              });
          }
          return {
              Start: that.Start
          }
      };

    Pretty simple.  All it does is get the user by their username (I have hard-coded 'ben' in this example).  On the callback, we display their name in our main <div/> element.

    One flashy thing I’ve done is use my method for calling JSONP with callbacks.  You can read up on it here, or take my word for it that it works.

    Data Access

    Most of us are used to dealing with server-side languages such as ASP.Net or PHP.  Despite some efforts (MVC perhaps), these technologies still leave the HTML dependent on, and to a certain extent aware of, the code that generated it.  For example, ASP.Net is heavily dependent on ViewState.  This may be fine for your web application – even your mobile web application – but it is worthless in your iPhone or Android app.

    Without a server-side language to create and bind our HTML, we must defer to Javascript.  And to get the data required to bind, we must make some kind of web service call.  Because we are using Javascript, it makes sense to return JSON-formatted objects.

    Cross-Domain Data Access Using JSONP

    Using JSON would be all you had to do (and in fact frameworks like ASP.Net MVC have excellent JSON support baked in), except that we want to use the same web service (and therefore the same returned objects) in our iPhone/Android application.  Consider our mobile web application:

    Essentially therefore, our application is getting JSON objects from the same domain (m.stringsof.me) as where it resides.  Now consider our iPhone/Android application:

    • our web service (written in ASP.Net for example) is at http://m.stringsof.me/service
    • our application is stored on the phone – it doesn’t have a concept of ‘domain’

    It doesn’t have a domain, which means our requests to the service are cross-domain calls.  Unfortunately, despite many modern web applications using them (check out the source of the Google home page), web browsers consider cross-domain requests a gross security risk and your Javascript will error if you try to make one.  Enter JSONP…

    JSONP (JSON with Padding) gets around this issue with a crafty little trick which I’ve covered in another article.  You need to know how JSONP works in order to build your mobile application, so I suggest you brush up.

    Returning JSONP from an ASP.Net MVC Application

    As an aside, if you are an ASP.Net MVC user, you can create your own JsonPResult (inheriting from JsonResult) to return from your controller actions:

        public class JsonPResult : JsonResult
        {
            public override void ExecuteResult(ControllerContext context)
            {
                var response = context.HttpContext.Response;
                var request = context.HttpContext.Request;
                
                // Open the JSONP javascript function
                var jsonpCallback = request.Params["_jsonpcb"];
                response.Write(jsonpCallback + "(");
    
                // Defer to base class for rendering the javascript object. Because we
                // have opened a javascript function first, it gets rendered as the first parameter
                base.ExecuteResult(context);
    
                // Add any additional parameters - this is not part of JSONP, but
                // a construct I've written to allow me to pass extra 'context' to the server and back
                var extraParams = request.Params["_jsonpcontext"];
                if (!string.IsNullOrEmpty(extraParams)) response.Write(extraParams);
    
                // Close the JSONP function
                response.Write(");");
            }
        }

     

    Using this, you can return JSONP directly from your regular MVC Controllers:

        public class PersonController : Controller
        {
            public ActionResult GetPerson(string username) {
                var person = new DataService().GetPerson(username);
                var result = new JsonPResult {Data = person};
                return result;
            }
        }

     

    Awesome huh?

    Rendering your Data

    Because I have returned JSON from the server, the next step is to render it to HTML.  Without going into too much detail, I personally use a couple of methods:

    • HTML templating.  I store HTML in a separate file (or hidden DIV in the Index.html page) and then bind sections of it using jQuery
    • use jQuery to create html such as $('body').append($('<div></div>').html('Hi there'));

    Why not just return HTML from the server?

    Good question, glad I thought of it.  Technically, there’s no reason why you shouldn’t.  In fact, if you did, you wouldn’t need to jump through all those cross-domain hoops with JSON etc.

    Frameworks like ASP.Net MVC actually encourage you to do this with their ‘View’ system and when I built the www version of stringsof.me (www.stringsof.me) I used these and they worked great. 

    When I moved to a mobile version of the application however, I found that this was a little short sighted.  What if I expose my objects to a third party who wants to use them in, for example, a Facebook plugin?  The HTML I return for ‘GetPerson()’ is not likely to suit their purposes so best to return the object and let them format it themselves.  Or what if the returned HTML expects Javascript to scroll it into place, but the requesting device doesn’t support Javascript?

    Although it is easier to specifically write your HTML (and even binding if you are using MVC), I eventually concluded that plain JSON objects are the most versatile mechanism.  By using Javascript as the rendering agent, you can make decisions based on the state of the client (such as width or support for location-based queries) which aren’t necessarily available to a server-rendered page.

    Media Queries

    Now that you are getting your data, the next step is adjusting the presentation between the various platforms.  For example, a full-blown website may have a big background image and the mobile version may remove this to account for low-bandwidth phones visiting it.  The fashionable way to do this these days is via CSS Media Queries.

    The basic premise is that your CSS reacts according to the type of media that is using the device.  For example:

    body{
        background-image:url('bg.jpg');
    }
    @media screen and (max-device-width : 320px){
        body{
            background-image:none;
        }
    }

    The code above says:

    • for the body of the page, use a background image of bg.jpg
    • however, if the device's screen is no wider than 320px, do not show a background image

    The beauty of this is that it is platform independent – you don't have to detect an iPhone or Android, you just have to know what its screen resolution is.  You can also switch out based on orientation (if the device is held upright or on its side):

    @media screen and (orientation: portrait){
        body{
            background-image:url('bg_narrow.jpg');
        }
    }
    @media screen and (orientation: landscape){
        body{
            background-image:url('bg_wide.jpg');
        }
    }

    Now, if the user turns their iPhone on its side, the background image will switch out to one that better suits its new dimensions.  Snazzy huh?

    Testing your Code

    Now that you have your HTML, Javascript and CSS working, you need to test it.  I have found that the best mechanism is Firefox using the Firebug plugin.  This will get you 99% of the way there, even for your iOS/Android applications later.

    You can test your media queries simply by resizing your browser window to the appropriate dimensions – getting a pretty decent idea of how your site will look on an iPhone compared to a full-size desktop browser.  Check it out by opening m.stringsof.me in a new browser window now and resizing.

    Deploying your Application as a Regular Website

    This is the easy one you’re probably used to – just create the website on IIS/Apache or whatever you are using and copy the HTML/Javascript/CSS files over.  The first time you deploy, remember to create Settings.css and Settings.js files and set them appropriately – they shouldn’t be in the main solution because they differ for other deployments.

    Remember you’ll also need to create an empty (or stubbed) PhoneGap.js file, as per the file reference in your Index.html file.  If you don’t, your site will still run but your visitors will get an unprofessional ‘404 Page Not Found’ error.

    Deploying your Application as a Mobile Website

    If you’ve used your CSS media queries correctly, your main website will double as your mobile website – no changes are required. 

    Deploying your Application as an iPhone and/or iPad App

    Ah, the part you’ve probably been waiting for all along. 

    • download PhoneGap from www.phonegap.com and follow their instructions for creating a new project in Xcode on your Mac (sorry, you need a Mac to build an iPhone app)
    • PhoneGap has comprehensive help files, so you are best off following them, but essentially the next step is to copy your HTML/Javascript/CSS files into the ‘www’ folder that PhoneGap provides.  Again create new Settings.js and Settings.css files accordingly.
    • note that PhoneGap also includes a PhoneGap.*.js file which contains the Javascript wrapper code to access the device hardware such as the camera.  Make sure the file is named exactly the same as that referenced in your Index.html file.

    Compiling and building your PhoneGap application is beyond the scope of this article, sorry.

    Beginner’s tips:

    • iOS is case-sensitive, so the <link/> and <script/> file references in your Index.html file must match the case of the files themselves.  If your CSS refers to images or folders, these are also case-sensitive. This took me about two hours to figure out – too much PC for me I guess.
    • I use DropBox to synchronize changes between my PhoneGap application and my website application.  Even though you are using the same HTML/Javascript/CSS files, they are copies of each other so a change in one must be copied to the other.  If you’re a PC user, you may also like to use File Backup Pro to quickly prepare and copy your changes to your server.

    Deploying your Application to Android (and other mobile devices)

    This uses PhoneGap again, and is the same as the iPhone installation above.  Again, the PhoneGap documentation does a much better job of explaining this than I can.

    Limitations

    The solution I have presented involves building a website predominantly in Javascript, and I have a few problems with this:

    • there is no compile-time checking.  I have colleagues that rave about Script#, but I don’t like the additional learning curve.
    • mainly because of point 1 above, it is hard to enforce architecture or development styles.  This makes working in a multi-developer team environment quite a bit tougher
    • search engines do not execute Javascript which means all they see on your web page is a little bit of HTML wrapper.  This means your website will not rank in Google, Yahoo etc.  You must therefore invest more in other SEO methods such as Site Maps and friendly URLs
    • JSONP can only be executed using GET requests, so if you need to upload a huge amount of data, such as an image, you are out of luck.  In m.stringsof.me, I had to deal with this when uploading the image that the user draws on the <canvas/> element.  I eventually solved it by breaking the base64-encoded representation of the image into 1000kb chunks and sending them to the server one after the other.  The server remembers what it gets and joins them together into a proper image at the end.  (This is why you see a percentage completion status when saving your work – each increment is a chunk of image).  You can View Source on the page to see how it was done, if you like.
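    To illustrate that last workaround, the receiving end can be sketched roughly like this – hypothetical names, no expiry or error handling, purely to show the reassembly idea:

    using System;
    using System.Collections.Generic;
    using System.Text;
    using System.Web.Mvc;

    // Hypothetical sketch of the server side of a chunked base64 upload
    public class ImageUploadController : Controller
    {
        private static readonly object Sync = new object();
        private static readonly Dictionary<string, StringBuilder> Pending =
            new Dictionary<string, StringBuilder>();

        // Chunks arrive in order via JSONP GETs; the final chunk triggers reassembly
        public ActionResult SaveChunk(string uploadID, int chunkIndex, int chunkCount, string data)
        {
            lock (Sync)
            {
                if (!Pending.ContainsKey(uploadID)) Pending[uploadID] = new StringBuilder();
                Pending[uploadID].Append(data);

                if (chunkIndex == chunkCount - 1)
                {
                    var bytes = Convert.FromBase64String(Pending[uploadID].ToString());
                    // ... save 'bytes' as the final image here ...
                    Pending.Remove(uploadID);
                }
            }
            return Content("{}", "application/json"); // body is ignored by the caller
        }
    }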

    Summary

    Everybody likes a summary section so they know it’s the end of the article.  So, there you go.

    Using jQuery Binding to make cross-domain calls with Closure Callbacks

    December 18, 2011 1 comment

    It was hard to come up with a title for this post because I somehow needed to convey the awesomeness of a solution to a problem which I don't think a lot of people realise they have.

    Quite simply, it is to do with the asynchronous manner in which we make JSONP calls (if you’re not sure how JSONP works, I recommend this simple article from Rick Strahl).  As you know (or will after reading the article), JSONP relies on dynamically injecting a <script/> tag into our document.  Within this tag are two parts of javascript:

    1. the object you are returning from the server (in JSON format)
    2. a function wrapper which ‘pads’ the object (the ‘P’ in JSONP).

    For example, if I wish to return a Person object from the server, I would stream down:

    1. on the server, create the object in JSON (using Response.Write() for example in .Net):

    { FirstName: 'Larry', LastName: 'Phillips' }

    2. wrap the object in a function named 'mycallback':

    mycallback({ FirstName: 'Larry', LastName: 'Phillips' })

     

    On the awaiting HTML page, I would have already written the following javascript function:

    function mycallback(person){
        alert(person.FirstName);
    }

     

    The script tag is injected into the page, the response is sent back, the browser automatically runs the Javascript within the script tag, the awaiting function is thereby called and voila – an alert() box is shown

    Problem: the callback is decoupled from the caller

    This looks tidy enough with one example, but when you have dozens or even hundreds of callbacks (such as I have on www.stringsof.me, which prompted this solution) it becomes very hard to manage because for every server call you need to write a separate corresponding callback.  It’s not shown in the example, but often the callback needs to tie back to the caller in some way to alter its state, which makes things even more complicated.

    This is extra frustrating because if you are working with jQuery (for example), you are dealing with nice ‘inline’ callbacks when using regular AJAX calls:

    $.ajax({
        url: that.SiteRoot + route,
        data: params,
        type: 'GET',
        dataType: "json",
        success: function(person){
            alert(person.FirstName);
        }
    });

     

    See?  The callback is written right within the AJAX call.  For programmers, it is tidy and easy to follow.  Unfortunately, if you specify the dataType property as ‘jsonp’, the callback doesn’t work.

    Unfortunately, we can’t do this with a cross-domain call….

    The reason the callback above doesn’t work for jsonp/cross-domain is (presumably) because it is not technically an AJAX call.  From Javascript’s point of view, it is just injecting a new DOM element into the page (the <script/> tag).  Once the tag’s src is downloaded, Javascript has already moved on to the next task.  It is the hack I described above which allows us to link the two.

    …until now!  Using jQuery’s bind and trigger

    Enter jQuery’s bind and trigger functionality.  Observe…if I write…

    $('body').bind('foo', function(){ alert("I'm called"); });

     

    …and then at any time later, I write….

    $('body').trigger('foo');

     

    …the popup appears.  So I saw this, and I thought, perhaps I can fake a callback between the two JSONP events.  So, here goes…

     

     

    var Data = function () {
        var that = this;

        return {
            CallJSON : function(route, params, callback) {
                // The trigger name must be unique for each call, so that multiple (almost) concurrent AJAX
                // calls can be made and assigned back to the same trigger each time
                var triggerName = new Date().getTime().toString();
                params._JsonPCbTrigger = triggerName;

                // The traditional JSONP callback is provided, and points to the OnCallJSON function below
                // Note that because it is hard-coded (and therefore the same each time), it could equally be hard-coded on the server
                params._JsonPCb = 'Data.OnCallJSON';

                // Use jQuery to prepare for a trigger called triggerName
                $("body").bind(triggerName, function(e, result) {
                    // Within the callback, we ignore the 'e' parameter (a jQuery artifact), and
                    // just pass the result straight through to the callback we passed into the main function
                    callback(result);
                });

                // Make the JSONP call as usual
                $.ajax({
                    url: 'http://www.stringsof.me/' + route,
                    data: params,
                    type: 'GET',
                    dataType: "jsonp"
                });
            },

            // This is the generic handler for *all* JSONP calls.
            OnCallJSON : function(result, triggerName) {
                $("body").trigger(triggerName, result);

                // We unbind afterwards, simply to release memory
                $("body").unbind(triggerName);
            }
        };
    } ();

    Now, anywhere in my code, I can call (for example):

    var params = { personID: 1 };
    Data.CallJSON('GetPersonByID', params, function(person) {
        alert(person.FirstName);
    });

     

    The Server Code

    That actually concludes the article, but for completeness and in case readers are still a little confused over JSONP, I’ll include the server code that is required to make this work.  It’s in C#, but in essence it is simply writing regular Javascript to the response.

    // Get the variables from the Request that we have sent up from Javascript
    var callBackTrigger = context.Request.Params["_JsonPCbTrigger"];
    var callbackFunctionName = context.Request.Params["_JsonPCb"];
    var personID = int.Parse(context.Request.Params["personid"]);

    // Defer to service to get the requested person
    var person = new PersonManager().GetPerson(personID);

    // Encode to JSON format e.g. {FirstName:'Larry', LastName:'Phillips'}
    var personJSON = Newtonsoft.Json.JsonConvert.SerializeObject(person);

    // The callback function on the client receives TWO parameters - the result that
    // we want, and the name of the trigger that jQuery needs for the .trigger().
    // Note the quotes around the trigger - it must arrive on the client as a string
    var parameters = personJSON + ", '" + callBackTrigger + "'";

    // Finally, wrap the parameters in the function so that it is automatically
    // executed when the client renders it
    var functionCall = callbackFunctionName + "(" + parameters + ")";

    // Write to response
    context.Response.Write(functionCall);
    context.Response.End();

    Architecting Cascading Style Sheets (CSS)

    August 26, 2011 1 comment

    There is one big problem with CSS, and that is its lack of ‘structure’.  When you have a 20-page project, fine – dump everything in one file.  When you have hundreds of pages, across half a dozen different design agencies, things start to become very messy.

    And I hate mess.

    A client of mine is in this situation.  Over three years, they have had 6 different designers, each with different visual styles – but, more importantly for me – each with different CSS requirements and implementations.

    The impact of this has been bothering me ever since I began, but I’ve always had more ‘architectural’ problems to deal with.  However, I decided enough is enough when I realized our total CSS payload was 1Mb, and our biggest CSS file consisted of over 5,000 lines.

    Architecting CSS

    There are a few things I want from my CSS:

    • I’m mostly a C# developer these days, so I like the object-oriented thing – I get it, it’s tidy, it’s re-usable.
    • The ‘cascading’ part of CSS is wonderful, but it is a double-edged sword.  When some designer submits a stylesheet with a selector like “.main h2”, it will inevitably affect portions of the site they weren’t even involved in.
    • Problems with CSS only manifest themselves when you actually view it – in other words, they are runtime errors.  And when you have hundreds of pages in the site, it is often the users that report a styling problem, not our testers.

    Close, but not close enough

    Unfortunately, I can’t claim to fix any of these problems. I admire the ambition of projects like .less (www.dotlesscss.org/) but something about these didn’t really feel right to me.  I guess it’s because there is no mainstream support (mainstream support is very important to me as we have a constant stream of new developers coming through – standards are my friend).

    I’ve also read a lot of people’s opinions on CSS – how we need to move away from semantics, how we need to move towards semantics, etc….but again, I need something that has plenty of community support on the internet, and that my new developers will be able to hit the ground running with.

    A brilliant compromise

    So, in the interim, while I search for the perfect solution, I’ve come up with a pretty-damn-good solution.  It consists of a single user control. You might call it WonderControl, but I call it simply CssContainer.ascx:

    ASCX

    <asp:PlaceHolder runat="server" ID="PHStyle" />

    Pretty simple huh?  One line.

    Code-behind

    public partial class CssContainer : BaseUserControl
        {
    
            [
            PersistenceMode(PersistenceMode.InnerProperty),
            TemplateContainer(typeof(CssContainer)),
            TemplateInstance(TemplateInstance.Single),
            ]
            public ITemplate Style { get; set; }
    
            /// <summary>
            /// Init
            /// </summary>
            /// <param name="e"></param>
            protected override void OnInit(EventArgs e)
            {
                // Instantiate any CSS in our template
                this.EnableViewState = false;
                base.OnInit(e);
                if (Style != null) Style.InstantiateIn(PHStyle);
            }
    
            /// <summary>
            /// Pre render
            /// </summary>
            /// <param name="e"></param>
            protected override void OnPreRender(EventArgs e)
            {
                base.OnPreRender(e);
    
                // If, for some reason, the control is called twice, we ignore
                if (!HasRegisteredOnThisRequest())
                {
                    this.Visible = true;
                    this.RegisterStyles();
                }
                else
                {
                    this.Visible = false;
                }
            }
    
            /// <summary>
            /// This control is designed to render styles only once per request - duplicate styles are a waste of resources
            /// </summary>
            /// <returns></returns>
            private bool HasRegisteredOnThisRequest()
            {
    
                // Has this control been rendered before?
                Control parent = this.Parent;
                while (true)
                {
                    if (parent == null) break;
                    if (parent.GetType().IsSubclassOf(typeof(BaseUserControl)) || parent.GetType().IsSubclassOf(typeof(PageBase)))
                    {
                        break;
                    }
                    parent = parent.Parent;
                }
                if (parent == null)
                {
                    // I'll throw an exception here, but really you could just render it anyway - it would just result in multiple renderings per page
                    throw new Exception("CssContainer may only be used on classes inheriting from BaseUserControl or PageBase");
                }
    
                // The count is kept in the PageBase so that we may retain it per-request, but share across all instances of this CssContainer control
            if (!this.Page.GetType().IsSubclassOf(typeof(PageBase)))
            {
                // I'll throw an exception here, but really you could just render it anyway - it would just result in multiple renderings per page
                throw new Exception("CssContainer may only be used on pages inheriting from PageBase");
            }
            var pageBase = (PageBase)this.Page;

            // This control is probably specified multiple times on any one page. Here, we record whether it's been done already or not
            if (pageBase.CssContainerCount.ContainsKey(parent.GetType())) return true;
            pageBase.CssContainerCount[parent.GetType()] = 1;
            return false;
            }
    
            /// <summary>
            /// Register the script contents, if any, to the page
            /// </summary>
            private void RegisterStyles()
            {
                // Pull template string OUT of the template we rendered in initially
                var sb = new StringBuilder();
                var tw = new StringWriter(sb);
                var writer = new HtmlTextWriter(tw);
                this.PHStyle.RenderControl(writer);
                writer.Close();
                tw.Close();
                var css = sb.ToString();
                if (string.IsNullOrWhiteSpace(css)) return;
    
                // User controls can be in multiple folder paths, so we must normalize any URLs
                css = css.Replace("~/", WebHelpers.GetFullUrlForPage(""));
    
                // Compress CSS etc
                css = Yahoo.Yui.Compressor.CssCompressor.Compress(css);
    
                // Render to page
                this.PHStyle.Visible = false;
                this.Controls.Add(new LiteralControl("<style type=\"text/css\">" + css + "</style>"));
            }
        }

    Okay, a little flashier, but really very simple.  I have tried to comment the code intuitively so you can follow it, but the general idea is:

    1. the user of the control specifies regular CSS in a template (called Style)
    2. we render that content into the PHStyle placeholder, just as you would a normal template
    3. we later get that CSS back out of the template, and render to the page between regular <style/> tags

    So, the end usage is something like this:

    <wc:Css runat="server">
    <Style>
    .column-con {
        overflow: hidden;
    }
    .column-con .column {
        background-color: #fff;
    }
    </Style>
    </wc:Css>

    The intention is that you drop this CSS directly into the page/usercontrol that is using it.

    Why this is bad…

    Yip, I understand what the problems are with this:

    1. It is inline, meaning page requests are bigger
    2. It can’t be cached on the browser
    3. It can only be re-used as far as the page or usercontrol it is sitting on

    But the answer to all of these is simple: if the pain of breaking these rules, for this set of styles, is too great, then just put them back in your regular external CSS style sheet.

    This control is not intended to replace your external sheets, but rather complement them.

    So, why it’s good…

    There’s only one reason: it’s tidy.  Where we have an obscure user control or page whose styles depart significantly from our usual styles, this is a brilliant way of isolating the styles without cluttering the main style sheet.

    And because it is rendered in code (as opposed to using direct <style/> tags), I know that I have a lot of control over what I might want to do with this in the future.  For example, I could:

    1. fake an external ‘file’ reference using a handler, thereby allowing the resulting request to be cached in the browser
    2. minimize the CSS (actually, I’m already doing this in the example above)
    3. perform replacements such as variable names – just like they do at www.dotlesscss.org/
    4. I can easily find the styles which apply to my HTML, by just browsing to the top of the UserControl – no more ‘Search Entire Project’ to track down a CSS source (update: Resharper 6 has a fairly good implementation of ‘Go To Definition’ for CSS)
    5. etc etc etc

    But until I know how to handle my styles, this solution allows me to contain and manage them far better than regular ‘client-side’ CSS files and embedded <style/> tags allow.

    What about re-usable external style sheets?

    Previously, where we had isolated CSS, we would create separate CSS files and then pull them into the request ‘on-demand’.  Technically, this worked just as well as my solution above – in fact, it was better in that the resulting file would be cached on the client browser and thereby improve overall visit speed.

    But this can result in dozens (or much more) of CSS files cluttering up your project, confusing your developers, slowing your project load time etc.  And besides, as I said above, I know that later, when I’m less busy, I’ll be able to create ‘external’ references using ASHX handlers.
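    For what it's worth, such a handler might look something like this – a minimal sketch, assuming the CSS for a given control can be looked up by key:

    using System;
    using System.Web;

    // Minimal sketch of an 'external file' endpoint for control-specific CSS,
    // referenced as e.g. <link href="styles.ashx?key=CssContainer" rel="stylesheet" />
    public class StylesHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            var key = context.Request.QueryString["key"];
            var css = LookupCss(key); // hypothetical - fetch the registered CSS for this key

            context.Response.ContentType = "text/css";

            // Let the browser cache the result - something inline <style/> blocks can't do
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetExpires(DateTime.Now.AddDays(7));
            context.Response.Write(css);
        }

        public bool IsReusable { get { return true; } }

        private string LookupCss(string key)
        {
            // Sketch only: in reality this might render the control's Style template
            return "/* css for " + (key ?? "unknown") + " */";
        }
    }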

    But embedded styles are terrible!

    In theory, yes – but in practice, definitely not.  One of the worst things a developer can do is get caught up in ‘best practice’.  As long as you understand the reasoning behind the rules, you’re able to break them when you find the reasoning doesn’t apply to your current project.

    Besides, go do a ‘view source’ on the home page of almost any major site – you’ll see a lot of embedded styles.

    See? I’m in good company.

    Conclusion

    This is not the most technically-brilliant piece of work I’ve done, but I have to say it’s one of the most exciting.  I love being able to recklessly add styles exactly how I like with little regard for structure – content in the knowledge that future-Ben will be able to easily tidy it up one day.

    I strongly urge other developers to consider this type of structure for any medium-large projects they are undertaking.

    iOS fails AJAX POST using full URL of short domain name

    July 7, 2011 6 comments

    Well, here’s a weird one which I’ll try to share although I can’t seem to find a heading which Google will pick up to let others know I’ve solved their problem.

    I was building an MVC/jQuery app and had a form posting to the server as follows:

    var model = {
        AnswerText: '',
        QuestionID: 60,
        QuestionDate: '2011-2-1'
    };

    $.ajax({
        url: siteRoot + 'Question/SaveAnswer',
        data: model,
        type: 'POST',
        success: function(data){
            alert('success');
        }
    });

    The system worked fine on localhost, and fine on production using our old .com domain name.

    However, when we switched to a new domain name (www.stringsof.me), the system started failing, but only when browsed from the iPad – other browsers continued to work fine (although I admit I didn’t test it in Safari).

    The problem, as it turns out, was the *siteRoot* variable which prefixes the AJAX URL.  Although you can't see it in the code above, this variable is set to the domain name of the site (e.g. http://www.stringsof.me:80/).  When I changed siteRoot to just an empty string, it began working fine.

    Whew.

    Comparing the performance of AppFabric against Sql Server

    May 19, 2011 7 comments

    I’ve been doing a lot of work lately implementing distributed caching systems for various clients. During my initial scoping, I found a lot of information out there comparing the performance between cache types (AppFabric, Memcached etc), however I could find very little comparing the performance of caching vs the actual database (in my case, AppFabric vs Sql Server 2005), just that it’s “much better”.

    Despite this lack of statistical information, I went ahead with caching anyway (after all, ‘much better’ sounds pretty good), and because I’m a Microsoft shop, I left it tidy by selecting AppFabric to ease the load on Sql Server.

    Now that I’ve implemented the code to a decent extent through a particular website, I’ve been able to conduct my own performance benchmarks and here I’d like to share the results.

    Note in particular that these are what I would call real-world results. I didn’t attempt to isolate cache access based on particular SQL queries or size of the item. I didn’t reset the cache between web pages. I simply used the website in the same way I’d expect my users to, and recorded the overall times.

    Methodology

    My methodology was fairly simple and certainly prone to a margin of error:

      • installed an ASP.Net website on a development machine which I knew I had sole access to. Sql Server was installed on the same box as the web server; however, I had a separate (and dedicated) server containing the cache
      • switched the cache off (via AppSettings configuration)
      • browsed through a set number of web pages, without refreshing the client browser or anything tricky
      • made a note of the pages I travelled through, then turned the cache on and went through the same pages again

    I had simple code around my ‘cache access’ block which simply counted the number of milliseconds the application spent trying to Save/Load the items (or bypassing if the cache was off) and logged them to a text file.
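    That instrumentation was nothing fancy – something along these lines (a sketch, not the code I actually ran):

    using System;
    using System.Diagnostics;
    using System.IO;

    // Rough sketch of the timing wrapper around each cache (or database) access
    public static class CacheTimer
    {
        public static T Time<T>(string operation, Func<T> work)
        {
            var sw = Stopwatch.StartNew();
            try
            {
                return work(); // the actual Load/Save, or the straight SQL call
            }
            finally
            {
                sw.Stop();
                File.AppendAllText(@"C:\temp\cache-timings.txt",
                    operation + "\t" + sw.ElapsedMilliseconds + "ms" + Environment.NewLine);
            }
        }
    }

    // e.g. var person = CacheTimer.Time("Load", () => cache.Load(key, creator));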

    Results

    Data access is split into two parts – Save and Load. Note that Sql Server does not have the burden of a Save method, and AppFabric of course only calls Save the first time the item is accessed.

                AppFabric      Sql Server     Ratio
    Load        81,960ms       372,318ms      22%
    Save        2,265ms        NA             NA
    Total       84,225ms       372,318ms      22%

    Interesting results:

        • AppFabric increases my data-access speeds by almost 5x over Sql Server. Thank goodness for that, and of course the longer you cache an item, the greater this efficiency will get. Again, this is not saying that AppFabric accesses data 5x faster than Sql Server on a call-by-call basis, it is the overall time benefit of implementing caching
        • AppFabric has an additional overhead in saving a newly calculated item back to the cache. In my case it was 2,265ms for the 81,960ms I spent loading items – a ratio of about 1/36. The longer I ran the test (or cached the items before they expired), the better this ratio would have become

    To conclude

    AppFabric clearly has performance gains over Sql Server – in my limited test it made my data access 5x faster, and a longer test expiry period would have made this much much greater.

    Note also that the website I was using had (test) database tables less than ~1,000,000 records, although some fairly funky SQL queries are being made here and there. For an even larger database such as Facebook (or my client’s database, hopefully) the SQL queries would take longer, but (I suspect) the caching times would remain exactly the same – another point in favour of AppFabric.

    Uh-oh…one more thing (and it sucks)

    Now that you’ve read this far, I’ll tell you the real reason I ran these tests, which is that I was convinced that AppFabric actually made my site slower. Not because SQL was better, but because the previous caching system I switched out was in fact the good old ASP.Net HttpCache utility. Like AppFabric, this cache is entirely in-memory but because it doesn’t concern itself with regions, tags, (and much more), it actually runs much much much faster. Let me type that again so Google picks it up – AppFabric is not nearly as fast as the built-in ASP.Net HttpCache utility. I ran the same test as above, but using the HttpCache:

                AppFabric      Sql Server     HttpCache
    Load        81,960ms       372,318ms      38,866ms
    Save        2,265ms        NA             9ms
    Total       84,225ms       372,318ms      39,875ms

    Yes, that’s right – AppFabric is 2.1x slower than the built-in ASP.Net caching utility. Very sad, especially when I see my site slow down after all my hard work. However, I’m sticking with AppFabric:

        • HttpCache resets whenever the IIS App Pool resets, when you redeploy and any other time it feels like it. I suspect if I ran this test over a long time (say, a week) then the results would be closer
        • There is no tagging in HttpCache – you have to roll your own by integrating into the key – and the subsequent parsing to ‘find by tag’ is slow
        • HttpCache cannot be expanded easily to hold gigabytes of data – it simply won’t work when my sites expand
        • The distributed nature of AppFabric allowed me to build a separate ‘admin’ tool where I can look into the cache from another website, count it, clear it etc
        • And if I’m honest, the final reason is that this type of caching is all in vogue at the moment and it’s something I feel I should be part of

    One thing I’m wondering about is actually using a combination of both – extremely high-access queries (like user permissions) could be HttpCache, leaving AppFabric to handle the larger datasets. Bit of an art form I reckon.
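    A sketch of what that combination might look like (hypothetical, but it shows the lookup order):

    using System;
    using System.Web;
    using System.Web.Caching;

    // Sketch: check the in-process HttpCache first, falling back to the distributed cache
    public class TwoTierCache
    {
        private readonly Cache _local = HttpRuntime.Cache;

        public T Get<T>(string key, Func<T> loadFromAppFabric) where T : class
        {
            var item = _local[key] as T;
            if (item != null) return item; // fastest path: in-process memory

            item = loadFromAppFabric(); // second tier: AppFabric (which may itself fall back to SQL)
            if (item != null)
            {
                // Keep the local copy short-lived so high-access data stays reasonably fresh
                _local.Insert(key, item, null, DateTime.UtcNow.AddSeconds(30),
                    Cache.NoSlidingExpiration);
            }
            return item;
        }
    }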

    Best practice architecture for professional Microsoft.Net websites

    April 26, 2011 8 comments

    Architecture technology is constantly evolving and there is a lot to keep up with. Over my career, I’ve reviewed a large number of approaches and today I’d like to draw a line in the sand and present to you all my current ‘best’ architecture.

    This solution has been used more-or-less in my last three major applications (most notably www.rate-it.co) and is the result of many hundreds of hours of investigating, comparing, and performance testing, as well as tens of thousands of hours backing real-world websites.

    There are a few reasons why I’m sharing it:

    • there’s no loss to me if I can help other people build their sites
    • I’m only as good as the knowledge I can attain.  By sharing mine, I hope that others will share theirs and thereby help me evolve my craft
    • this architecture will probably be obsolete within six months or so.  It will be an interesting record for me to see how my approach changes over time

    The project architecture

    I have created a working application (Visual Studio 2010, .Net 4.0) which summarizes the concepts I’m about to explain.  To install:

    • download the code from here
    • run the included setup.sql file to generate the database structure expected by your code
    • update the BlackBallArchitecture.Web\Connectionstrings.config file as appropriate
    • create an IIS web application pointing to the BlackBallArchitecture.Web project, set to run under ASP.Net 4.0 in Classic Mode

    First of all, let’s note the dependencies between the various projects:

    (Figure: Ideal Microsoft Web Architecture – project dependency diagram)

    There was a time when I built web applications with two layers – a front-end ‘web’ project with my ASPX and ASCX files etc, and a data-access layer which basically passed stored procedure calls through to the database.  This architecture was flawed for a number of reasons, but its biggest fault was that there was no separation between UI and business logic, nor even between UI and the data source (you need to know column names if you are accessing a data table).

    These days, I use the structure above.  To quickly summarize:

    • Common.dll is used to store generic functions (such as Extension methods) which are of use to the entire application.
    • Data.dll provides access to a Sql Server database using the Entity Framework Code-First Model (see below).  Data storage is often synonymous with business logic (e.g. saving a person record into the database), however their separation is essential if we are to unit test later.
    • Contracts.dll is where I record the ‘shape’ of the application.  It holds an interface to every business logic class (in Logic.dll), as well as all my entity definitions, such as Person or SystemLog.
    • Logic.dll is where I store all my business logic, such as email validation and accessing the data store (although note that the Logic layer does not actually have a reference to the data store)
    • Dependency.dll is used by Unity (see below) to map the Contracts to the actual business logic
    • Web.dll is the front-end web application, containing the ASPX and ASCX files etc.
    • In addition, the application contains Test.dll and CodeGen.dll assemblies, which I have excluded from this diagram to avoid clutter.  I will explain them later.

    So, quite a few assemblies for an application that essentially just saves a person record to a database – in fact, I’ve gone from 2 to 6.  Why?

    Dependency Injection using Microsoft Unity

    The first thing you may notice above is that the Web assembly has no knowledge of either the data store or the Logic assembly.  In fact, all it has knowledge of is the logic structure (via the Contracts assembly) and a mechanism for accessing these contracts using the Dependency assembly.

    I have done this using Microsoft Unity, which allows me to use a little trick called Dependency Injection.

    If you refer to default.aspx, this means that instead of getting my list of people via…

    var people = new Logic.DataManagers.PersonManager().GetPerson(null);

    …I instead use:

    var people = Dependency.Resolve<IPersonManager>().GetPerson(null);

    The unity.config file (in the demo project) tells Unity that IPersonManager should be mapped to my PersonManager class.  By doing this, I have de-coupled the site’s dependency on our actual Logic implementation, instead using just an interface to gain access to my data.  This has a number of gnarly benefits:

    • if I find a bug, I can deploy another assembly with a class that implements IPersonManager, with the bug fixed in it
    • I can unit-test the code in total isolation from the actual business logic – an essential tenet of unit testing
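
For reference, the wiring itself lives in unity.config.  Here is a sketch of the IPersonManager mapping, following the same typeAlias/type convention you’ll see for ILogger below (exact namespaces assumed):

<typeAlias alias="IPersonManager" type="BlackBallArchitecture.Contracts.DataManagers.IPersonManager, BlackBallArchitecture.Contracts" />
<typeAlias alias="PersonManager" type="BlackBallArchitecture.Logic.DataManagers.PersonManager, BlackBallArchitecture.Logic" />
<type type="IPersonManager" mapTo="PersonManager" />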

I can create different implementations of IPersonManager, based on varying project requirements.  For example, I once built a site which had two implementations – one in New Zealand and one in England.  The New Zealand site showed everybody’s full name, whereas England had stronger privacy controls and would only show the first name.  I simply created two classes implementing IPersonManager, each with its own FormatName().  New Zealand’s implementation was:

    public string FormatName(Person p){
        return (p.FirstName + " " + p.LastName).Trim();
    }

    and England’s was:

    public string FormatName(Person p){
        return p.FirstName.Trim();
    }

    I included both the implementations in the same Logic.dll assembly, and simply modified the unity.config file of each website to suit.

Another great use of Unity is when you want to temporarily replace some business logic without a full redeploy.  For example, I will sometimes litter my code with logging functionality; in the PersonManager.SavePerson() function, you can see this:

    Dependency.Resolve<ILogger>().Log(firstName + " record updated");

    But my unity.config file has TWO concrete implementations of ILogger:

    <typeAlias alias="ILogger" type="BlackBallArchitecture.Contracts.ILogger, BlackBallArchitecture.Contracts" />
    <typeAlias alias="DatabaseLogger" type="BlackBallArchitecture.Logic.DatabaseLogger, BlackBallArchitecture.Logic" />
    <typeAlias alias="NoLogger" type="BlackBallArchitecture.Logic.NoLogger, BlackBallArchitecture.Logic" />

    To save database stress, for the most part I turn logging off during production:

    <type type="ILogger" mapTo="NoLogger" />

    However, if there is a problem I can easily switch it on by modifying the file to:

    <type type="ILogger" mapTo="DatabaseLogger" />

    No recompile or redeploy required!
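
For completeness, the no-op logger is about as simple as a class gets – a sketch, assuming ILogger exposes just the Log(string) method used above:

public class NoLogger : BlackBallArchitecture.Contracts.ILogger
{
    public void Log(string message)
    {
        // Deliberately does nothing - swapped in via unity.config to silence logging
    }
}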

    A misuse of Unity?

The other thing that Unity allows me to do is create what would otherwise be circular references between my assemblies.  For example, from my Logic layer, I am able to access the current user’s PersonID even though it is stored in a website cookie:

private int Log(SystemLog logSummary)
{
    // We can use Unity to call back to whichever storage mechanism we are using for the 'current user' info, without actually knowing what it is
    var personID = Dependency.Resolve<ICurrentUser>().PersonID;

    // Log
    logSummary.PersonID = personID;
    logSummary.WhenOccurred = DateTime.Now;
    var svc = Dependency.Resolve<BlackBallArchitecture.Contracts.DataManagers.ISystemLogManager>();
    svc.SaveLog(logSummary);
    return logSummary.SystemLogID;
}

    ICurrentUser meanwhile, is implemented in my Web.dll assembly as follows:

    public class AuthenticationManager : BlackBallArchitecture.Contracts.Security.ICurrentUser
    {
        public int? PersonID
        {
            get
            {
                if (!IsAuthenticated) { return null; }
                return int.Parse(HttpContext.Current.User.Identity.Name);
            }
        }
    }

    I know what you’re thinking – who is this handsome renegade?!  Well, there is a distinct difference between the above, and another method which I’ve seen in some Logic layers:

    var personID = int.Parse(HttpContext.Current.User.Identity.Name);

When developers use this method, they are making an assumption that their business logic will always depend on, or be run from, a web application.  If they were to later reference this from another application (such as Silverlight or Windows Forms), it would fall apart.

    However, using Unity, if we were to reference this from a new application, we could just implement a new version of ICurrentUser and pull the PersonID variable from a different location such as in-memory or from file etc.  Very cool.
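
As a sketch, a Windows Forms host might register something like this instead (a hypothetical class – only the int? PersonID member is dictated by the interface above):

// Hypothetical non-web implementation of ICurrentUser - no cookies, no HttpContext
public class InMemoryCurrentUser : BlackBallArchitecture.Contracts.Security.ICurrentUser
{
    // Set once at login instead of being parsed from a forms-authentication cookie
    public int? PersonID { get; set; }
}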

    Data access using LINQ2SQL and the Entity Framework Code First Model

    LINQ2SQL and I got off to a bad start because I moved into so many brown-field applications where it had been naively implemented and was killing the database.  For example, with ignorant use of foreign key relationships and a Repeater control, I once saw a web page making 50,000 database requests.

If you’re working on a project by yourself, you can learn and remember the limitations of LINQ and it works well.  However, I usually need to plan for a team of developers – experienced and inexperienced – as well as anticipate the day when I leave and others come in and have to pick up the project.  For these reasons, I didn’t adopt LINQ until mid 2010, instead opting for stored procedures, which at least gave me 100% control over my SQL.

    That was until I discovered the Entity Framework Code First Model, probably the biggest game-changer for my architectures in the last few years.

    Non-data-bound Entities

    EF CF allows me to design my own entities (POCO) and bind my LINQ queries back to them.  Previously, LINQ returned its own entity types which meant that if you were using them in your web layer (for example, to render a list of people), your project ended up with a SQL dependency right from the web layer.  It also meant that you had to keep a (static) database connection open throughout the duration of the page call, so that changes to your entity could be ‘remembered’ by LINQ and saved back to the data store (this is most often done by storing a LINQ ‘context’ in your Global.asax class and opening in Application.BeginRequest).  Urgh.

    With EF, you declare your entities completely separately from the data-access.  In my example project, I have declared them in the Contracts.Entities.Data namespace.  Note that there are not even any attributes involved here – there is nothing to suggest that they are going to be used in LINQ queries (actually, EF does support various attributes, I just avoided them).
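
To give a flavour, the Person entity is just a plain class – a sketch (the real entity has more properties, but the shape is the point):

namespace BlackBallArchitecture.Contracts.Entities.Data
{
    // A plain POCO - no attributes, no base class, nothing tying it to the data store
    public class Person
    {
        public int PersonID { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}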

    I then reference these entities back in the Data.dll assembly, using them as return types for my data context.

    The reason I put my entities in a separate assembly was so that I could use them from the web layer (they are return types from my logic layer) without requiring a reference to the Data.dll assembly, making it impossible for the developer to (inadvertently or not) call the data store without going through our Logic layer.

    Caching

    Data entities which are completely independent of the data source are also essential for caching – after all, you can’t store an open database connection on disk on another server.  See the section on Caching below for more details.

    Force your developers to explicitly call for more data

This harks back to the example before about the 50,000 database hits in one page.  By removing the child properties (which are what foreign keys resolve to in LINQ), you change this code…

    foreach(var person in people) {
    Response.Write(person.FirstName + " has " + person.Orders.Count() + " orders");
    }

    …to this:

    foreach(var person in people) {
        var totalOrders = new OrderManager().GetOrdersForPerson(person.PersonID);
    Response.Write(person.FirstName + " has " + totalOrders + " orders");
    }

    Both of these examples result in the database being called to get the total orders for each person, so if there are 50,000 people in your list, you will call the database 50,000 times.  The difference is that this behavior is not immediately apparent when looking at the first example.  With the second method, a good developer will realize what is happening and take actions to resolve it (perhaps by getting all order counts in one call before the loop starts).
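
To make that fix concrete, the batched version might look something like this (a sketch – GetOrderCountsForPeople is a hypothetical helper returning a dictionary keyed by PersonID):

// Hypothetical batching helper: one database call for all the order counts
var orderCounts = new OrderManager().GetOrderCountsForPeople(people.Select(x => x.PersonID).ToList());
foreach(var person in people) {
    Response.Write(person.FirstName + " has " + orderCounts[person.PersonID] + " orders");
}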

    When I first started implementing this design for my clients, their developers were fairly unimpressed, and rightly so – it adds work.  But I consider it absolutely essential if you want to use LINQ2SQL in a multi-developer environment.

    T4 Templates for Code Generation

    One thing that soon becomes apparent when using the Code First model and Unity is that for each database entity you are dealing with, you have quite a number of classes:

    • an entity (e.g. Person)
    • a data-access class
    • an interface for the data-access class
    • ObjectContexts and Configurations for Code First
    • any number of other ‘helpers’ like my GetOrCreatePerson() method

    Typing these by hand would be very time consuming and error prone, so I use T4 Templates to do the heavy lifting for me.  T4 templates have been around for a long time, but were little known until things like the Entity Framework came out.  They are awesome in their simplicity.  T4 templates will only get you so far however.  In order to truly unleash the power of code generation, I also use a fantastic third-party tool called the T4 Toolbox which allows you to:

• re-use and parameterize your templates, in much the same way you would with a regular C# class
    • generate multiple files from a single generator – no more files with 100,000 lines
    • deploy your generated files across multiple projects – an essential requirement for my design, although I suppose you could do it using multiple templates (one for each project) if you enjoyed wasting time

    The result is the CodeGen.dll assembly you have in the example project.  Every time the database structure changes, just open and save the Controller.tt file.  It queries the database and generates the files based off the structure.
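
If you haven’t seen a T4 template before, it is just text with embedded C# control blocks.  A trivial standalone sketch (not from the attached project) which spits out one interface per entity name:

<#@ template language="C#" #>
<#@ output extension=".cs" #>
// Generated file - do not edit by hand
<# foreach (var entity in new[] { "Person", "SystemLog" }) { #>
public partial interface I<#= entity #>Manager { }
<# } #>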

    Of course, you may not wish your code to match the database structure – no problem, just write the various classes by hand.

    Caching

    The first thing to remember about caching is that SQL Server is a very good cache.  Its sole purpose is to remember things and return them to your application.

    However, once your site traffic picks up, you will come to realize that those fancy data queries are awfully time consuming for what results in just a couple of objects returned, so at this point, you need to cache.

The attached project structure just uses the HttpApplication cache built in to Microsoft.Net.  This cache is actually fantastically fast (see my previous post comparing cache performance), but it has limitations – you can’t access it from other web applications, nor distribute it to other servers etc.  I’m not going to compare caches in this blog, but just remember:

• you can’t cache a LINQ2SQL object that is still connected to its data store (hence my Code First POCO model is excellent for this)
• caching should be done in the Logic layer – the web layer should have no idea where the objects come from
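
As a sketch of that second point, a Logic-layer method might cache like this – I’ve used the built-in HttpRuntime cache for brevity (the attached project uses its own cache class, and the repository call here is illustrative):

public Person GetPerson(int personID)
{
    var key = "Person_" + personID;

    // Return the disconnected POCO straight from the cache if we have it
    var cached = (Person)System.Web.HttpRuntime.Cache[key];
    if (cached != null) return cached;

    // Otherwise hit the data store and cache the result for next time
    var person = Dependency.Resolve<IRepository>().Persons.FirstOrDefault(x => x.PersonID == personID);
    if (person != null) System.Web.HttpRuntime.Cache.Insert(key, person);
    return person;
}
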
Unit Testing

    Now, I actually find unit testing extremely boring and often it is a waste of time.  However, when working in a multi-developer environment, tools such as unit testing are an invaluable insurance policy against sloppy development and miscommunication.

    A core tenet of unit testing is to isolate (and test) the minimum amount of functionality at once, and this means reducing dependencies between classes as much as possible.  For example, consider this function which returns a requested person…

public Person GetPerson(int personID){
    var person = new LinqContext().Persons.FirstOrDefault(x => x.PersonID == personID);
    return person;
}
A very small and simple function, and easy to test.  Except that it depends on the database (LinqContext).  This means that in order to test this function, you must first set up a database, including inserting appropriate test data.  Although possible, it kind of defeats the aforementioned tenet.

So this brings me back to Dependency Injection and my beloved Code First model.  In the attached example project, I simply replace the database-bound IRepository with a memory-bound IRepository.  Then, when I call IPersonManager().GetPerson(), instead of using LINQ2SQL on my database, it uses LINQ2Entities on my in-memory collection.  My logic is tested without any database required.
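
A minimal sketch of the swap, assuming an IRepository contract that exposes IQueryable collections (the real interface in the attached project may differ):

// Hypothetical memory-bound repository - registered via unity.config in place of the LINQ2SQL one
public class MemoryRepository : IRepository
{
    private readonly List<Person> _people = new List<Person>();

    public IQueryable<Person> Persons
    {
        get { return _people.AsQueryable(); }
    }

    // Test fixtures seed their data here instead of inserting rows into a database
    public void Add(Person person) { _people.Add(person); }
}
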
This is actually amazing – unit testing aside, I’ve switched out my data store from a database to in-memory without changing a single line of my data-access code.  In theory, I could switch in an XML-based data store or shove things directly in the cloud (if somebody would build a LINQ2Cloud provider).

This is surely the whole purpose of LINQ, but it wasn’t practical until Microsoft gave us the Code First Model and POCO binding.

Almost makes me want to actually write unit tests.

    Other Notes

    Broken Project References

    Above, I mentioned that our web layer has no reference to the Data or Logic layers, which is great.  Unfortunately, this means that when you compile the project, Visual Studio does not copy the DLLs into the bin folder, and so when you run it you get errors where the site can’t reference a requested class.  To circumvent this, I simply added some post-build commands to copy the files manually:

    copy "..\..\BlackBallArchitecture.Logic\bin\$(ConfigurationName)\BlackBallArchitecture.Logic.dll" "$(TargetDir)"
    copy "..\..\BlackBallArchitecture.Data\bin\$(ConfigurationName)\BlackBallArchitecture.Data.dll" "$(TargetDir)"

    Resharper

    Ah Resharper, what would we do without you?  With all these files and interfaces flying around, I find Resharper an absolute must for quickly navigating and refactoring my code.  Just spend the money guys.

    CSS and Javascript Compression

    I use the wonderful Combres to minify and combine my CSS and Javascript files.  It is open source and you can find detailed instructions here.

    MVC

    This article dealt mainly with the solution layers behind the UI – business logic, testing and data access.  Unfortunately that’s only part of the story and the web layer still needs work.  In particular, it would be nice to see an MVC architecture separate the UI from the ‘UI logic’ (or UX, I suppose) – the code-behind model restricts the re-usability of our UX by tying it to ASPX markup.

    I suggest checking dedicated MVC articles for this as it is another blog post in and of itself.

    Summary – see it in action

    I’ve tried to be as clear as possible, but reading back, it’s pretty hard to explain everything all at once.  If you’re like me, the best way is to download the code and play around.  Even then, the true benefits of the model may not be apparent until you build a large system, or invite other developers to work alongside you.  Remember, you can also see it in action at www.rate-it.co.

    Anyway, I hope it’s been thought-provoking and hopefully of some use to some of you.  Again, if you have any improvements or even (especially) if you hate the architecture, please leave a comment below.

    From conception to customers in 24 hours – how we built a successful startup overnight

    April 3, 2011 6 comments

    www.findfish.at was conceived, developed and marketed in just 22.5 non-stop hours by myself and my business partner James.  As you may have heard, the app has taken off at a rate far faster than we were prepared for and it’s been a busy couple of weeks.  Now that we have it under control, I’ve been asked to give a technical summary of how we built the site in such a short time frame.

    So, here’s how we did it…

    The idea

This was James’ idea, pure and simple.  The story goes that he was out fishing and remarking on how much easier it was to catch a fish if you knew where they were – i.e. if you had a fish finder.  You still had to reel that bad boy in, you still had to coax him out of the water, but if you knew there were 100 fish under you, your odds went up considerably.

    Wouldn’t it be good, he thought, if dating was like that?

    …whoops, a little background might be in order…

    James and I co-founded Knowhere (www.knowhere.co.nz) back in 2006.  We know location-based services inside-out.  We were doing real-time user tracking (using Windows Mobile, back when it was awesome) before Google Latitude was dreamt of and before Facebook and Foursquare even existed.  We’ve dealt with ‘urban jungles’ and mass caching.  We developed our own SQL routines to query location data before SQL Server 2008 served them up out of the box.  We also know the frustration of building an awesome product only to fall flat trying to explain it to people, or worse yet – even letting them know it exists in the first place.

    So, when James thought “wouldn’t it be good if…”, he wasn’t just coming into it cold.

    …anyway, as I was saying…

    Wouldn’t it be good, he thought, if dating was like that?  That’s when he called me up.  James is the ideas man, but I’m the handsome developer who has to actually make them work.  We were both flat-out in March and didn’t have much time to spare, but the idea was good, it was fun to build, and we knew that our previous experience could make it work, where others couldn’t.

    So we decided to give ourselves 24 hours to build and market as much as we could, and then see what happened.

    Friday, 9am – cutting to the core of the problem

    The biggest lesson I learnt in this project was that age-old maxim in the development and design circles – “less is more”.  We had a lot of big ideas for this project, cool fun stuff which would make the app really great, keep it “feature rich” and intimidate others from trying to copy us.

    But forcing ourselves to ship within 24 hours made us really break down the problem into its simplest form, and then come up with the simplest solution to answer it.  So, out went:

    • customer logins
    • user tracking
    • graphics
    • caching (although I hoped I’d come to regret that, and I did)
    • friend invites
    • demographic capture
    • a contact page and supporting website
    • did I mention graphics?

    Which left us with:

    • tell us if you’re searching for girls or boys
    • tell us where you’d like to search
    • view the map

    That meant users could get the information they wanted in just two mouse clicks and often within about 10 seconds of typing the URL into their browser.

    Friday, 10:30am – the audience

We’ve already been burnt before by focusing too much on the technology and features of our systems, and not enough on how we’re going to promote the damn thing.  This was a particularly frustrating lesson for me to learn because I’m a software developer.  I take a lot of pride in things like reducing page load times, or setting focus to a form field when you enter a page.  But these things are quite frankly useless if nobody is using them.

    With www.findfish.at, we knew that our audience would be technology focused and youngish (actually, we were wrong about that one, as we found out later).  We also knew that our application would probably be used ‘on-demand’ i.e. as people were actually out and about with their friends.  With this in mind, we targeted the following platforms, in this order of priority:

    • iPhone (heard of it?)
    • Facebook App
    • Website

    If you’re not a developer, building three applications may seem like a big commitment in our 24 hour time period.  But if you are, you’d have heard of HTML5 and jQuery.

    Friday, 11:00am – the architecture

Two hours in to our mission, we had a workflow and a target platform.  Now I had to build it.  Luckily, I do a lot of work as a software architect and consultant and I’ve tried and tested a fair few development and design methodologies over the years.  Because of this, I was able to get us off the ground with a really solid foundation, consisting of:

    • Sql Server 2005 backend.  I don’t care how much flat-file caching you do, I still like to know that if my indexes fall over I can rebuild everything from scratch, and that scratch is my good old relational database.  Note that I didn’t use 2008 – that’s because I knew our production server didn’t have a license for it :)
    • Microsoft .Net 4.0.  I’m a .Net guy and I make no excuses for it – I reckon it is the strongest development platform available (feel free to post your hateful comments below, but I’m over those types of arguments so you probably won’t get a reply)
    • Unity Dependency Injection for IoC.  Unity was overkill in such a small project, but I wanted a foundation for unit testing later (should the app ever take off).  Unity also lets you do ridiculous tricks like (indirectly) calling the web layer from your data layer (for example, to record the current user in the Http Session) or vertically integrating your logging classes.  It is cool.
• Entity Framework 4.0.  LINQ is the technology I love to hate.  I have gone into so many Brownfield applications where LINQ has been ignorantly spread throughout the site and dragged down both the performance and the project separation considerably.  I saw a client once where one page was generating 50,000 database calls due to LINQ traversal within a repeater.  However, using EF4 and T4 templates, I am able to control exactly how I want my objects represented and avoid these pitfalls.  In particular, I do not represent any foreign key relationships in LINQ and I bind them to POCO in a separate web project.  This allows my web layer to have no project reference to my data layer, and forces my developers to explicitly call the logic/service layers every time they want some data.  It also allows me to cache the objects later, as they are not data-bound.
    • Memcached.  Actually, I didn’t put this in (only 24 hours remember), I just left a gap for it.  But I couldn’t have done it without Unity and the EF4 POCO model.

    So, using these, I was able to give myself a 200 hour head start on research and implementation for a top-notch and highly scalable architecture.  I also knew it was bug-free (or close to it) and worked in real-world conditions, due to its recent implementation on www.rate-it.co.

    The final thing I had to design was the front-end.  In particular, I wanted to use the same code for all three platforms – iPhone, Facebook and the regular website.

    Friday, 2pm – generalizing the front-end for multiple platforms

    Thank you so much iPhone for supporting HTML5 and proper web standards.  Because of this, I was able to build my HTML in exactly the same way as I would for a normal website.  And because Facebook Apps are in fact regular websites in iframes, I was able to use HTML for that too.

    Building an iPhone application in HTML is not as good as building a regular embedded app in iOS.  I know that.  I also knew I had 24 hours and it simply wasn’t feasible (also I can’t code iOS.  Also, I don’t own a Mac to develop on).  We figured that if the application proved popular, we could build an app later – hell, it might even be a way to monetize it.

    As it turns out though, this design decision had a number of other benefits:

    • people could access it immediately when they were out and about (they didn’t have to go to the AppStore first)
    • we could focus our marketing solely on the website URL
• there was a possibility that other mobile platforms like Android could use it (as it turns out, it doesn’t work on Android, but I’ve been too busy scrambling with other things to work out why.  I’ll bet it’s simple though)
    • I’ve already used www.phonegap.com for another client so I figured this might be a good stepping stone to a full-blown app one day.

    Of course, the three platforms have different form factors and so the final thing I had to do was switch in CSS styles to adjust widths, graphics etc depending on whether an iPhone was viewing the site or Facebook was.

    Friday, 2:01pm – hacking in the style sheets

    I’m pretty sure that HTML5 lets you switch in style sheets based on meta tags and media flags, but I didn’t have time to work all that out when I knew I could just do this:

/// <summary>
/// Detects the current device (e.g. iPhone) and overrides styles
/// </summary>
private void InjectStylesForCurrentDevice()
{
    var html = "";
    if (Request.Browser.MobileDeviceModel == "IPhone" || FindFish.Common.Configuration.Current.IsDeveloperMode)
    {
        html = @"
            <link rel='stylesheet' type='text/css' href='" + Library.GetFullUrl("~/include/css/theme/iphone.css") + @"'/>
            <meta name='viewport' content='initial-scale=1.0, user-scalable=no' />
            <meta name='apple-mobile-web-app-status-bar-style' content='black' />
            <link rel='apple-touch-icon' href='" + Library.GetFullUrl("~/include/img/iphone_touch_icon.jpg") + @"'/>
            <link rel='apple-touch-startup-image' href='" + Library.GetFullUrl("~/include/img/iphone_startup.jpg") + @"' />
        ";
    }
    if (html != "") this.PHExtraHeaderContent.Controls.Add(new LiteralControl(html));
}

    It’s crude, it’s not testable, but it worked in about 10 minutes.

    Friday 3:00pm – meanwhile on the other side of the office…

James isn’t a developer, which is great because for every hour I was developing, James was marketing.  He started with half a dozen press releases, each subtly worded to target its audience, and moved on to what is probably the crustiest promotional video ever made.

    I can’t code for hours on end like I used to do when I was young.  So every hour or so I’d walk over and annoy James.  This kept us talking and enthusiastic and helped us keep up to date on where we were going.

I say it again – what good is a cool application (including auto-focusing form fields!) if nobody is there to see it?  Having such a dedicated marketing effort was (and continues to be) the most important contributor to our success.  (James, do you have anything nice to say about all my development?)

    Friday, 4:00pm – the map

Having solved the platform targeting, there was one other ‘unknown’ that I needed to solve before I could be 100% sure that our application would work – how to present the data.  I’ve used Google Maps extensively for Knowhere so I knew I could get it going okay, but I wanted a bit more:

    • slick when used on an iPhone, just like the built-in Google Maps application
    • upgraded to the v3 API.  I had only used v2 before.
    • animations and graphics.  If not now, then at least the possibility later.

    As an aside, I didn’t even consider Microsoft Maps or anybody else.  Google is awesome and I have used it before so it was a no brainer.

    Fortunately for me, the Google Maps API v3 has been extensively rebuilt with a particular focus on mobile devices.  And to my delight, it supported the ‘pinch’ feature on the iPhone!

    As far as the graphics went, James and I had envisaged something cool like a radar scanning over the top of the map.  I still reckon I can do this – either with a floating DIV or using their custom overlays, but I didn’t want to get bogged down in the UI for too long – I’ve fallen into this quagmire one time too often – so I settled for a simple ‘bounce’ animation.  I’m actually thinking about taking this off now, because it is jerky on slower computers when there are 100+ items animating at once.

    I drew a couple of cute avatars – one for boys and one for girls – and used them for my marker icons.  Easy.  Here’s the resulting initialization:

this.EnsureMapInitialized = function(callback){
    if (this.HasInitialized) { callback(); return; }

    // UA strings report 'iPhone', so compare lower-case
    var useragent = navigator.userAgent.toLowerCase();
    var isIPhone = useragent.indexOf('iphone') != -1;
    var mapdiv = document.getElementById(that.MapID);

    // Load map into container
    var latlng = new google.maps.LatLng(0, 0);
    var myOptions = {
        zoom: 15,
        center: latlng,
        // iPhone zoom is done with the pinch gesture
        zoomControl: !isIPhone,
        zoomControlOptions: { style: google.maps.ZoomControlStyle.SMALL },
        panControl: false,
        streetViewControl: false,
        mapTypeControl: false,
        mapTypeId: google.maps.MapTypeId.ROADMAP
    };
    that.Map = new google.maps.Map(mapdiv, myOptions);

    // Store in state for retrieval later
    myMaps[that.MapID] = this;

    // Events
    google.maps.event.addListener(that.Map, 'bounds_changed', function(){
        // Only fire the callback once, on the first bounds_changed
        google.maps.event.clearListeners(that.Map, 'bounds_changed');
        if (callback != null) callback();
    });
    google.maps.event.addListener(that.Map, 'dragend', function(){ that.OnLocationChanged(); });
    google.maps.event.addListener(that.Map, 'zoom_changed', function(){ that.OnLocationChanged(); });

    this.HasInitialized = true;
};

    I took off all the overlays like pan and street view because I wanted an uncluttered interface and I also wanted to reduce the ‘googleness’ of the site as much as possible.  Note in particular that I switched the zoomControl off for iPhone devices which support the pinch.

    Friday, 8:00pm – getting the current location using Facebook Places

    Almost halfway through the day and I finally began the work which I knew would be the biggest and most frustrating – integrating the Facebook Graph API to allow users to share their location with us using Facebook Places.

    With our early emphasis on social media for marketing, a Facebook login seemed like a pretty good way to get people’s location.  In particular, if somebody agreed to sign in using Places, it would give us the ability to periodically update their location (anonymously of course) on our map, providing much more accurate results for our users.

    James and I both had reservations about how many people would actually use this feature.  We’re a couple of old-fashioned guys and we’ve been fighting the ‘big brother’ issue for years in Knowhere.  As it turns out an incredible number of people have actually used this feature – about a quarter of our visitors.  I guess people are realizing more and more that privacy is a two way street – the more information you give about yourself, the better service you will get.

    Facebook also gave us other information such as gender and age (although we don’t use or record the latter at the moment, perhaps one day).

    I’ve already blogged about the Facebook Graph API during my work for www.rate-it.co, so I’m not going to repeat it here.  Suffice to say, I more or less lifted the code and dropped it into our site (it helped that they used the same overall architecture).

    The call to get Places data was pretty simple once I had the architecture in place:

/// <summary>
/// Class representing the structure of the JSON-formatted response returned by the Facebook Places API. This allows us to
/// use the Newtonsoft serializer to deconstruct it into a compile-time-checked class.
/// </summary>
private class FacebookCheckinData
{
    public class FacebookCheckinInfo
    {
        public class PlaceInfo
        {
            public class LocationInfo
            {
                public double latitude { get; set; }
                public double longitude { get; set; }
            }
            public string id { get; set; }
            public LocationInfo location { get; set; }
        }
        public string id { get; set; }
        public DateTime created_time { get; set; }
        public PlaceInfo place { get; set; }
    }
    public List<FacebookCheckinInfo> data { get; set; }
}

/// <summary>
/// Loads the most recent check-in (location) for the given user
/// </summary>
/// <param name="accessToken"></param>
/// <param name="userID"></param>
public void LoadUser(string accessToken, string userID)
{
    var url = ("https://graph.facebook.com/" + userID + "/checkins/")
        .AppendQueryString(QueryKeys.OAUTH_CODE, accessToken, true);
    var webRequest = System.Net.WebRequest.Create(url);
    var webResponse = webRequest.GetResponse();
    StreamReader sr = null;
    string responseText = "";
    try
    {
        sr = new StreamReader(webResponse.GetResponseStream());
        responseText = sr.ReadToEnd();
    }
    finally
    {
        if (sr != null) sr.Close();
    }

    // Parse the text from JSON
    var checkins = Newtonsoft.Json.JsonConvert.DeserializeObject<FacebookCheckinData>(responseText);

    // Take the first (most recent) check-in only
    foreach (var checkin in checkins.data)
    {
        this.Latitude = checkin.place.location.latitude;
        this.Longitude = checkin.place.location.longitude;
        this.WhenRecorded = checkin.created_time;
        break;
    }
}

    I suppose the main thing of interest here is the class FacebookCheckinInfo which I was able to build after viewing Facebook’s JSON-formatted response.  This let me parse their response into a “C# friendly” object for me to use later.

    Friday, 11:30pm – time for a break

    We were both pretty ****ed by this point and starting to get a little grumpy.  James had brought around a few cigars for us to celebrate with in the morning, but we thought a half-way celebration was justified.  Unfortunately, as soon as one gets that taste of cigar in their mouth, one needs a drink so James had a beer and I had a whisky.  Just one mind – I knew I was vulnerable to a snooze.

    Remarkably, this combination served to wake us both up and we got back into it with a vengeance at midnight.

    Saturday, 1:00am – don’t force people to use Facebook

    Restricting people to using Facebook for our application would have been a pretty daft move – alienating both those people that didn’t want to give away their personal data and those without a Facebook account at all (yes, they exist, I was one of them six months ago).

    HTML5 to the rescue again with their new geolocation feature.  Now on the app, our visitors can choose to just ‘find people around me now’ and I run the following code:

SignInUsingCurrentLocation : function(){
    if (!navigator.geolocation){
        alert('Sorry, your browser does not support this feature - try signing in using Facebook above.');
        return;
    }
    navigator.geolocation.getCurrentPosition(function(position){
        // 'this' is not our object inside the callback, so use the captured 'that'
        that.Map.ShowLocation(position.coords.latitude, position.coords.longitude, that.FindMales);
    });
},

When I think of all the hassles we had writing Windows Mobile apps back in 2006, my heart bleeds.

    Saturday, 3:00am – the UI

    Because of the time-sink that it is, I purposefully left the UI to last.  This is despite my feeling that the UI is actually the most important part of a site (see what a progressive and magnanimous developer I am?).  Because iPhone users were our primary market, we knew it had to mimic an application as much as possible – and not look like a regular website.

    Although the application had three ‘parts’, I decided to keep it all on a single page and switch screens in and out using jQuery.  Although this increased the initial load time, I thought it was worthwhile to have a faster and more responsive app.

    I drew the three panels out using DIVS, each one with the width of the screen meaning that only one would show at once.  I then used jQuery to animate the screens left and right – kind of stepping the user through the form.

[screenshot: the three screens of the UI]

    Of note:

    • there is a single back button allowing the user to return.  Initially I had a button allowing you to skip straight to any of the three screens, but this was confusing.  Less is more.
    • each screen pushes the user through the path that we want them to take.  The ‘calls to action’ are in big button-like links, and there are no distracting work-flows to draw them away from their task
    • instead of just hiding/showing screens, I animated them left and right to give the user a sense of flow and process
    • I had to remove all background images and some CSS radius/shadow to improve the performance of the animations on the iPhone.  As it turns out, it’s better this way – they were just clutter.

    Saturday, 6:00am – the finishing touches

    That’s it!  End-to-end functionality achieved and tested (a little).  I was exhausted but needed to do a few more things for professional pride:

    • Google Analytics
    • cached the application page using ASP.Net OutputCache – it’s wicked fast
    • deferred the Google Maps loading until the user actually requested a location (as opposed to page load)
    • built us a logo – it’s a pixelated cityscape with fish floating above it.  Just like a real fish finder but in the city!  I’m quite chuffed with that one.
Saturday, 7:30am – the release!

At 7:30am we hit deploy, sent off our press releases, told our Facebook friends and then went and had another cigar and beer.  It is quite an anti-climax releasing a website like that – you send it out into the great wide world but you have no idea if anybody is using it (even Google Analytics takes 24 hours to turn around).  The phone doesn’t ring.  Your bank balance remains fairly average.  Your new neighbour who you haven’t introduced yourself to yet drives by on her way to work and sees you having a whisky and cigar on the deck at 7:30am.

Summary & Lessons Learnt

I have built applications in the past that were much, much more sophisticated than www.findfish.at.  I have had logging, offline email notifications, asynchronous task managers, caching, Cassandra, unit testing and lots of other buzz words.  I have built sites with a $50,000 design and UI budget.

But in terms of bang for your buck, www.findfish.at has been the best by a mile.  I think there are a number of reasons for this:

    • the limited time frame forced us to concentrate on the core product – no fluff
    • no graphic designer meant that we were forced to keep the site minimal – again emphasizing the purpose of the site without distracting the user.  It also improved page load times.
    • we put just as much effort into marketing as we did development
• we built an application that addressed a genuine need in the psyche of most human beings, and we modernized it for a technical audience.
    • the 24 hour factor gave us a unique marketing angle – something for non-technical journalists to grasp and run with
    • there are no barriers to using the site – not even a login
    • the ‘fish finder for singles’ branding is easy for new visitors to understand and immediately know what to expect from our application

Given a little more time, the only thing I’d have done differently is to add a loading screen so people could get immediate feedback when initially viewing the application – first impressions count (haha I actually wrote ‘fist impressions count’ then – that would be the subject of another blog).

    If you haven’t checked out www.findfish.at yet, go to it now then ‘like’ us on Facebook :)  Oh, and tell your friends.

    Making a Facebook Wall Post using the new Graph API and C#

    November 3, 2010 27 comments

    One of the features coming up with Rate-It is the ability to post your ratings/reviews to your Facebook wall.  For this, we decided to go with the new Facebook Graph API.

There are quite a few tricks and obstacles in the way of you doing this, particularly because the documentation omits a few details and is very PHP- or Javascript-centric.  With this in mind, we’ve posted below some very detailed instructions on how to get the system going using Microsoft C#.

    Debug your Facebook Application from localhost

One of the first problems you’ll have is trying to debug your application, and for this the easiest way is via localhost.  We have seen a few posts suggesting that this is not possible, but it is – and it is easy.  Simply set up a new application at www.facebook.com/developers and then enter your localhost information where a URL is required.  We’ve copied a screenshot of ours below:

[screenshot: our Facebook application settings, with localhost URLs]

    Getting a user to Authorize your Application

    Refer: http://developers.facebook.com/docs/authentication/

Facebook has a two-step approach for a user to authorize your application.  The first is to connect their Facebook account.  The second is to swap the code Facebook returns for an OAuth access token.

The key to all this – and where you’ll have the most problems – is simply structuring your URLs.  There are three, and here they are:

/// <summary>
/// The callback page where the user is returned to after authorizing with FB
/// </summary>
/// <returns></returns>
private string FacebookAfterAuthorize()
{
    return SiteRoot + "services/facebook/canvas/afterauthorise.aspx";
}

/// <summary>
/// Once FB returns a user code to us, we need to swap it for an OAuth token to use in queries
/// </summary>
/// <param name="codeReturnedFromInitialFacebookRequest"></param>
/// <returns></returns>
public string FacebookOAuthAccessTokenFromFacebookCode(string codeReturnedFromInitialFacebookRequest)
{
    var url = "https://graph.facebook.com/oauth/access_token"
        .AppendQueryString("client_id", Configuration.Current.FacebookAPIKey)
        .AppendQueryString("client_secret", Configuration.Current.FacebookAPISecret)
        .AppendQueryString("code", codeReturnedFromInitialFacebookRequest)
        .AppendQueryString("redirect_uri", this.FacebookAfterAuthorize());
    return url;
}

/// <summary>
/// The link the user clicks when they want to initialize their connection to FB
/// </summary>
/// <returns></returns>
public string FacebookAuthorize()
{
    var url = "https://graph.facebook.com/oauth/authorize"
        .AppendQueryString("client_id", Configuration.Current.FacebookAPIKey)
        .AppendQueryString("scope", "publish_stream,offline_access")
        .AppendQueryString("redirect_uri", this.FacebookAfterAuthorize());
    return url;
}

    Step 1 – Connect the Facebook Account

Simply create a hyperlink and present it to the user.  The hyperlink contains various GET parameters which tell Facebook who you are and what you want to do with the user’s account.  Here is ours:

var url = new LinkManager().FacebookAuthorize();
BtnFacebookAuthorize.NavigateUrl = url;

    Pretty simple.  This redirects the user to a Facebook page where they are presented with something like this:

[screenshot: the Facebook authorization prompt]

    Assuming the user selects ‘Allow’, they are redirected to the redirect_url parameter you specified in the hyperlink.  In our case, this was /services/facebook/canvas/afterauthorize.aspx.  Let’s look at that now.

    Step 2 – Switching the Facebook Code for the OAuth Code

The Graph API uses the OAuth system to authenticate users and at this point, we only have some kind of Facebook code.  We get the OAuth token by POSTing the Facebook code back up to Facebook.  I’ll let the code do the talking:

public void ProcessAfterAuthorization(HttpRequest request)
{
    // Facebook passes an empty code if the user declined
    string code = request.Params["code"];
    if (string.IsNullOrEmpty(code)) throw new FacebookAuthorizationException("You have chosen not to proceed with Facebook integration.");

    // Swap the Facebook code for an OAuth access token
    var url = new LinkManager().FacebookOAuthAccessTokenFromFacebookCode(code);
    var webRequest = WebRequest.Create(url);
    var webResponse = webRequest.GetResponse();
    StreamReader sr = null;
    string returnedKeyValuePairs = "";
    try
    {
        sr = new StreamReader(webResponse.GetResponseStream());
        returnedKeyValuePairs = sr.ReadToEnd();
    }
    finally
    {
        if (sr != null) sr.Close();
    }

    // Get the OAuth token from the returned querystring
    var authToken = returnedKeyValuePairs.GetQueryStringValue("access_token");
}

    Save the authToken field against your user record – this is what you’ll pass later when you make a wall post.  It will look a little something like this:

    108091332590110|a9fa8e86ed521818374c8733-100001367407858|vRR3VgTCPuBwgcsG4Kr2fUiw9A8

    Writing to the User’s Wall

    Refer: http://developers.facebook.com/docs/reference/api/post

    Writing to a user’s wall is a lot more complicated, and the documentation linked above omits a very important point.  Here is what we came up with:

public class PostToWall
{
    public string Message = "";
    public string AccessToken = "";
    public string ArticleTitle = "";
    public string FacebookProfileID = "";

    public string ErrorMessage { get; private set; }
    public string PostID { get; private set; }

    /// <summary>
    /// Perform the post
    /// </summary>
    public void Post()
    {
        if (string.IsNullOrEmpty(this.Message)) return;

        // Append the user's access token to the URL
        var url = "https://graph.facebook.com/me/feed"
            .AppendQueryString("access_token", this.AccessToken);

        // The POST body is just a collection of key=value pairs, the same way a URL GET string might be formatted
        var parameters = ""
            .AppendQueryString("name", "name")
            .AppendQueryString("link", "http://link.com")
            .AppendQueryString("caption", "a test caption")
            .AppendQueryString("description", "a test description")
            .AppendQueryString("source", "http://blackballsoftware.com/images/whitetheme/headerwhite.png")
            .AppendQueryString("actions", "{\"name\": \"View on Rate-It\", \"link\": \"http://www.rate-it.co.nz\"}")
            .AppendQueryString("privacy", "{\"value\": \"EVERYONE\"}")
            .AppendQueryString("message", this.Message);

        // Mark this request as a POST, and write the parameters to the request body (as opposed to the query string for a GET)
        var webRequest = WebRequest.Create(url);
        webRequest.ContentType = "application/x-www-form-urlencoded";
        webRequest.Method = "POST";
        byte[] bytes = System.Text.Encoding.ASCII.GetBytes(parameters);
        webRequest.ContentLength = bytes.Length;
        System.IO.Stream os = webRequest.GetRequestStream();
        os.Write(bytes, 0, bytes.Length);
        os.Close();

        // Send the request to Facebook, and read the response to get the confirmation code
        try
        {
            var webResponse = webRequest.GetResponse();
            StreamReader sr = null;
            try
            {
                sr = new StreamReader(webResponse.GetResponseStream());
                this.PostID = sr.ReadToEnd();
            }
            finally
            {
                if (sr != null) sr.Close();
            }
        }
        catch (WebException ex)
        {
            // To help with debugging, we grab the exception stream to get full error details
            StreamReader errorStream = null;
            try
            {
                errorStream = new StreamReader(ex.Response.GetResponseStream());
                this.ErrorMessage = errorStream.ReadToEnd();
            }
            finally
            {
                if (errorStream != null) errorStream.Close();
            }
        }
    }
}

    And to use, simply:

var post = new PostToWall();
post.Message = "Test message from Ben";
post.ArticleTitle = "A new rating has been posted";
post.AccessToken = "108091332590110|a9fa8e86ed521818374c8733-100001367407858|vRR3VgTCPuBwgcsG4Kr2fUiw9A8";
post.Post();
Response.Write("The Facebook post succeeded with ID: " + post.PostID);
Response.Write("<br/>");
Response.Write("The error message was: " + post.ErrorMessage);

    Right – hopefully that is useful to somebody, it will be to us when we forget how we did it in a few months.

    Apologies for the custom classes etc in the code like AppendQueryString() – I’m sure you can work them out.
