
Bridging the client-server boundary – an experiment in architectures for next-generation web applications

May 1, 2015

This article is the first in a series in which I design and experiment with a new mechanism for building tomorrow’s web applications.  Specifically, I’m interested in blurring the boundary between client & server, such that both parties are unaware of their relationship to the other.

You can download the source code at the bottom of this article

The problem

Consider the following very common scenario where a web page signs up a new user based on their email address:

  1. the human enters an email address and presses the Sign Up button
  2. the browser posts the email address to your server
  3. the server receives the email, runs it through the business logic and then saves it in the database
  4. the server returns (unloads the call stack), sending a 200-OK message back to the web browser
  5. the web browser receives the successful message and displays a message saying as much

In graphic terms, the communication flow is a little like this:

[Diagram: Simple]

This is all well and good, but now consider that, as part of the server’s business logic, the email address is compared with existing registrations and deemed to be a duplicate.  It is okay to save duplicates in our database, but we want to make sure that the human didn’t actually mean to Sign In to their existing account.  So, the communication now has an extra round-trip:

[Diagram: Second]

This is pretty easy to draw on a diagram, but in practice, coding it up involves a lot more function points:

[Diagram: Method Breakdown]

As the callouts in the diagram above show, our intrepid developer needs to write extra code to:

  • listen for the ‘duplicate email’ message from the server and display a confirmation box to the human asking if they’d like to continue even though it is a duplicate email
  • write an additional API method which accepts the selection that the user made.  In practice, this may be the same method with an optional override parameter, but the point is that it needs to be accommodated
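To make that cost concrete, here is a rough javascript sketch of the client-side plumbing those two bullets imply.  The endpoint, response shape and function names are all hypothetical, and the server is stubbed out as a plain function so the retry flow is easy to follow:

```javascript
// Hypothetical stand-in for the server's API, so the flow is runnable here.
// It refuses duplicates unless the override flag is set.
var existing = ['bob@example.com'];
function saveSignUpApi(email, overrideIfDuplicate) {
	if (existing.indexOf(email) !== -1 && !overrideIfDuplicate) {
		return { status: 409, error: 'DuplicateEmail' }; // the DuplicateEmailException surfaced as a response
	}
	existing.push(email);
	return { status: 200 };
}

// The extra client code the developer has to write: detect the duplicate
// response, ask the human, then call the SAME endpoint again with override set.
function signUp(email, confirmDuplicate) {
	var first = saveSignUpApi(email, false);
	if (first.status === 409 && first.error === 'DuplicateEmail') {
		if (!confirmDuplicate()) return 'cancelled';
		return saveSignUpApi(email, true).status === 200 ? 'saved' : 'failed';
	}
	return first.status === 200 ? 'saved' : 'failed';
}
```

The point to notice is that the client has to know about the duplicate-email case and explicitly re-post with the override flag – knowledge that really belongs on the server.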

To get technical, this is what our SaveSignUp() method might look like (in the business logic layer):

public void SaveSignUp(string email, bool? overrideIfDuplicate){
	var emailAlreadyExists = MyEmailServer.EmailAlreadyExists(email);
	if (emailAlreadyExists && overrideIfDuplicate != true) throw new DuplicateEmailException("This email already exists");

	// Save
	MySignUpService.Save(email);
}

Keep in mind that this method would be called twice – the first time with no overrideIfDuplicate parameter, and then the second time where the user has set it to ‘true’.

Further to this, when the server asks whether or not the duplicate should be saved, it implicitly assumes that the human is still there at the other end, ready to answer.  What if the existing database has 1 billion emails already and the duplication check takes 20 seconds – should we expect a human to wait that long for what they perceive to be a simple form submission?  Nup.

The answer

So this is the goal of my article – let’s see if we can develop an architecture like this:

[Diagram: Event]

This diagram was a little hard to draw, so please bear with me.  What I’m trying to convey is:

  • the initial saving of the sign up form returns immediately, and the human can continue with their workflow (including leaving our website altogether)
  • when the server detects a duplicate, it doesn’t specifically fire this back at the client (although in this diagram, the client does answer it).  You might initially think of this like an event, but in fact it is a pause in execution

A pause in execution

A pause in execution – and this will become the crux of my architecture.  Ultimately, I want the aforementioned function to be rewritten like this:

public void SaveSignUp(string email){
	var emailAlreadyExists = MyEmailServer.EmailAlreadyExists(email);
	var bridge = new MyFancyNewArchitectureMessenger();
	if (emailAlreadyExists && !bridge.Listen("This email has already been registered. Are you sure you wish to continue?")) return;

	// Save
	MySignUpService.Save(email);
}

In this rewritten example, we don’t have to write extra parameters to accept extra logic.  Instead, as questions arise, we simply ask them in what appears to be a synchronous manner (of course, the actual execution can’t be synchronous, but I want it to appear that way).

If this doesn’t look like much of a difference to you, consider this more real-world example with more logical paths:

The current way…

public void SaveSignUp(string email, bool overrideIfDuplicate, bool? inviteAFriend, string friendsEmailAddress){
	// Check if the email already exists, and whether the user has agreed to store it anyway
	var emailAlreadyExists = MyEmailServer.EmailAlreadyExists(email);
	if (emailAlreadyExists && !overrideIfDuplicate) throw new DuplicateEmailException("This email already exists");

	// TODO: save to database

	// Check if they want to invite a friend?
	if (!inviteAFriend.HasValue) throw new InviteAFriendException("Would you like to invite a friend?");

	// They've agreed to invite a friend?
	if (inviteAFriend.Value && !string.IsNullOrWhiteSpace(friendsEmailAddress)){
		// TODO: save friend's email address to database
	}
}

My new way…

public async Task SaveSignUp(string email){
	var bridge = Dependency.Resolve<IBridge>();

	// Confirm the email address?
	if (!regex.IsMatch(email))
	{
		// Confirm with user?
		if (!await bridge.Listen(new YesNoPrompt("Invalid email", "Your email is not a valid format. Are you sure you wish to save it?")))
		{
			return;
		}
	}

	// TODO: save to database

	// Would they like to invite a friend?
	if (await bridge.Listen(new YesNoPrompt("Invite friend", "Would you like to invite a friend?")))
	{
		var friendsEmail = await bridge.Listen(new Readline("Invite friend", "What is your friend's email address?"));

		// TODO: save friend's email address to database
	}
}

Although the two code examples above are similar in length, the new architecture is much easier to develop because it is built in a linear fashion:

  • With the former, our fearless developer would have also had to do a lot of work on the client such as passing up new variables and responding to different exception types.
  • With the latter, our developer simply asks questions of the bridge and waits for a response.  They have no knowledge or concern for how the questions are being asked.

Building a bridge between the client & server

For version one of my bridge, I decided to use a combination of now somewhat-old technologies:

  • SignalR. SignalR uses websockets and this allows me to push my bridge messages down to the client (I also use it to push messages from the client to the server, although this could equally have been done with a regular HTTP post).
  • Microsoft’s new async/await construct.  Because version one of the bridge uses loops and polling, I use async/await to move the processing to a different thread – thereby freeing up IIS to serve up other requests.

Let’s look at the code:

private async Task<T> Listen<T>(IBridgeMessage<T> msg){
	// Poll the cache, waiting for a result
	var cache = new CacheManager<IBridgeMessage<T>>();
	IBridgeMessage<T> result = null;

	// Cancel after a few seconds
	var cancel = new CancellationTokenSource();
	cancel.CancelAfter(TimeSpan.FromSeconds(20));

	// Version one - poll the cache in a loop to check for our return
	await Task.Factory.StartNew(() =>
	{
		while (true)
		{
			// Listen to see if we've timed out
			if (cancel.IsCancellationRequested) break;
					
			// Check the cache - the cache matches based on the msg.MessageID property
			result = cache.Load(msg, null);
			if (result != null) break;

			// Wait a wee while before checking again
			System.Threading.Thread.Sleep(100);
		}
	}, cancel.Token);

	// Return
	if (result == null)
	{
		NotifyCancelled(msg.MessageID);
		return default(T);
	}
	return result.Result;
}

As you can see, the code essentially just loops, checking for a variable in our cache (which our client later populates – see below).  Now, I know that this is not very elegant – obviously the server is still using resources even when ‘idle’, and clearly this method wouldn’t scale well once we had a few thousand concurrent users.  But it’s a good start for version one.

As I mentioned, the code will repeatedly check our cache for a return token, which is inserted by the client like this:

public void Answer(string jsonEncodedResult) {
	if (String.IsNullOrWhiteSpace(jsonEncodedResult)) return;

	// Deconstruct the entities
	var jToken = (JToken)Newtonsoft.Json.JsonConvert.DeserializeObject(jsonEncodedResult);

	var requestedTypeName = jToken.Value<string>("TypeName");

	// Because they were serialized from an abstract class (BaseTemplateItem), Newtonsoft can't automatically cast them to their types as they're just a collection of {name:value} pairs
	// So, we need to iterate through and cast them ourselves
	var itemTypes = System.Reflection.Assembly.GetAssembly(typeof(IBridgeMessage)).GetTypes().Where(x => x.IsSubclassOf(typeof(BaseBridgeMessage))).ToList();
	var thisType = itemTypes.FirstOrDefault(x => x.Name == requestedTypeName);
	if (thisType == null) return;
			
	// Need to cast to its appropriate type
	var item = Newtonsoft.Json.JsonConvert.DeserializeObject(jsonEncodedResult, thisType);

	// Just pop the result in our (shared) cache, and the listeners above will extract it
	var cache = new CacheManager<ICachable>();
	cache.Save((ICachable)item);
}

The bulk of this method is actually to do with parsing a JSON result back to our message type – in fact, the only part we’re interested in now is the last two lines where we store the result back in our cache, ready for our aforementioned loop to pick it up in the Listen() method.
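For context, the client half of this exchange isn’t shown in the snippets above.  A minimal sketch of it might look like the following – the `createBridgeClient`, `onPrompt` and `promptUser` names and the message shape are my assumptions for illustration, not taken from the project.  The server pushes a prompt down (via SignalR in version one), and the client posts the human’s answer back, which ultimately lands in the Answer() method above:

```javascript
// Hypothetical client-side bridge handler; names are illustrative only.
// 'send' posts an answer back to the server (e.g. a SignalR hub method),
// 'promptUser' shows the question to the human and returns their choice.
function createBridgeClient(send, promptUser) {
	return {
		// Called when the server pushes a bridge message down the websocket.
		onPrompt: function (msg) {
			var result = promptUser(msg.Title, msg.Text);
			// Echo the MessageID back so the server's polling loop
			// can match the answer in the cache.
			send(JSON.stringify({
				TypeName: msg.TypeName,
				MessageID: msg.MessageID,
				Result: result
			}));
		}
	};
}
```

Because the MessageID round-trips with the answer, the cache lookup inside Listen() can pair each response with the loop that is waiting on it.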

Proof of concept complete – what’s next for version two?

This code has established a semantic framework for how I want the code to work.  Specifically:

  • code is written in an apparently synchronous manner
  • code is written with no regard for who or what it is interacting with – no fiddling around with matching client-side API calls to server side methods
  • semantically, our code appears to pause in execution when a question is asked of the bridge.  Once the bridge responds, execution continues in a linear fashion.  Compare this with current methods, where the entire http request is sent again

However, there is one glaring problem which prevents this code from being used in real-world scenarios and that is to do with how we store and retain state…

Storing and resuming state

As we have seen, not only does our Listen() method hog a new thread, it essentially holds the entire call stack in memory while it waits. Although I haven’t tested it, I’m certain this will not scale well.

Unfortunately though, in order for my code to be semantically synchronous, I need some mechanism for both storing state before our call to Listen(), and restoring it after we get a response.

And by ‘state’, I include a variety of things:

  • the call stack
  • incoming parameters to our current function (and those of the calling function, and those of that calling function….and so on, up the call stack)
  • if we are building a web application, I need access to things like the Session and Cookies variables
  • any configuration values (e.g. stored in web.config or app.config files)

Replacing the way I store and resume state will be the focus of version two of this architecture.  At this point, I don’t know how I’m going to do it, but things worth pursuing are:

  • serializing and storing the call stack in the database
  • can .Net’s reflection classes allow me to resume the call stack from a specific point?
  • perhaps .Net is simply not capable of doing something like this, and something like Node.js would work better
  • on that note, would a scripting language (such as Node) allow me to literally build and execute code on the fly?
  • and I should actually run some benchmarks to see just how poorly the existing version-one code performs – perhaps it would suffice for a moderately-sized web application after all?

In addition, I’d like to extend the implementation to provide a different bridge.  We currently use SignalR to communicate with a web-based client, but because our bridge is abstracted via an interface, we could easily write a new Bridge which (for example) sent the user an email with a link to confirm/decline an action (think about that for a second – the code would still just be waiting on the Listen() method, even though behind the scenes our bridge is firing emails around the world, potentially over the course of a few hours or even days. That is pretty cool).

Download the example code

I’ve built a working prototype of this version one architecture, which you can download here.  Because I intend to extend the architecture, there is a lot of extra scaffolding, with the result that it is rather more complicated than a typical tutorial.  In particular:

  • I’m using Microsoft Unity for my dependency injection
  • the client-side javascript is architected using RequireJS
  • the CSS is written using LessCss and a Task Runner plugin to run Gulp.  Of course, you shouldn’t need to change the CSS, but just so you know…
  • the project was written in VS2013
  • I included all the Nuget binaries, so it is quite a big download but hopefully it means you can get up and running quicker

Okay, good luck and have fun.  If you have any ideas on how I can progress this, best to get me on Twitter – @benliebert

Practical tips and tricks for using ES6 in today’s web applications

Javascript’s ES6 upgrade has been a long time coming and brings a lot of really great features.  Since the spec was finalized late last year, we’ve jumped in with both feet and begun using ES6 in considerable parts of our new web applications.

The internet already has plenty of tutorials covering specific ES6 features and how to use them, but most show the features in isolation and there aren’t enough examples of how to actually take those code snippets and get them working in a real web application.

So, this blog post is a random collection of tips, tricks & styles that we’ve begun using in real-world web applications.  In no particular order…

 

Running ES6 and ES5 side-by-side

Practically speaking, you are very unlikely to be able to develop an application wholly in ES6 – if nothing else, most of your plugins are still written in ES5.  The crunch point is how modules are loaded into the system.  Typically, this is done using an AMD module loader like RequireJS, but ES6 has a snazzy new module syntax which is designed to replace it.

The practical challenge is getting these two technologies to run side by side.  Or, from our new ES6 point of view – how do we import a non-ES6 module? The answer is SystemJS.

Although it’s technically not, we like to think of SystemJS as a kind of wrapper for RequireJS and ES6 modules.  It basically inspects the structure of each file and:

  • if the file contains ES6 module syntax, it loads it as an ES6 module
  • otherwise, it loads it as a RequireJS (AMD) module

One gotcha that took us about an hour to work out was that you need to kick it all off using a call to System.import.  In retrospect, this is pretty obvious – I mean, something has to tell your browser how to start loading external dependencies.  So basically, it all ties together like this:

logon.es6

logon.es6 is your snazzy new logon module, written entirely in ES6.  It uses the new module and class syntax and your girlfriend thinks it’s really good.  Note that it has two dependencies – the first is lib, which you wrote yourself in ES5 and the second is a third-party plugin (in this case jQuery) which may or may not have any AMD- or module-syntax embedded.

import $ from 'jquery';
import lib from 'lib';

export default class Logon {
	/*
	AttemptLogOn
	Grabs the username and password provided and calls a service to authenticate
	*/
	AttemptLogOn(){
		let params = {
			username: $('#TxtUserName').val(),
			password: $('#TxtPassword').val()
		};

		lib.CallService('/secure/login', params);
	}
}
lib.js

lib.js is that old library file which you’ve built up over the last few years.  It is written in ES5 and full of helpful utility methods which you really can’t be bothered upgrading to ES6.  It uses RequireJS syntax to declare its dependencies at the top of the file.
define(['anotherdependency'], function (anotherDep) {

	/*
	CallService
	Makes an ajax request to the given URL
	*/
	var CallService = function (relativeUrl, params) {
		// Details omitted...
	};

	// Return public methods
	return {
		CallService: CallService
	};
});

Logon.html

Your regular HTML page.  It uses System.import to get the ball rolling:

<script src="systemjs.js"></script>
<input type="text" id="TxtUserName"/>

<input type="password" id="TxtPassword"/>

<script>

System.import('logon.es6').then(function(l){

    var log = new l.default();

    log.AttemptLogOn();

});

</script>

One more thing – you’ll probably need a System.config call to tell it how your files are organized etc:

System.config({
	baseURL: '/scripts/',
	paths: {
		'Views/*': '/views/*.js'
	},

	map: {
		"jquery": "lib/jquery",
		"jqueryui": "lib/jquery-ui.min"
	},

	meta: {
		"lib/jquery-ui.min": {
			deps: ['lib/jquery']
		}
	}
});




Writing a re-usable base class using ES6 inheritance

Along with modules, this is the feature we most appreciate in ES6 – a tidy way to create a re-usable base class for our controllers.  See, here is how our projects are typically laid out:

  • The application is divided into heaps and heaps of modules, like ‘logon’, ‘view chart’, ‘render menu’ etc etc
  • Each of these modules has a Javascript controller class which is bound to a view (using RivetsJS – we have an in-depth tutorial here)
  • There is a lot of common code which is repeated in our controllers, such as:
    • an Init() method to kick things off
    • a property called ‘model’ where we store the data for our view/controller
    • a reference to the view, in case we have to do something nasty like use jQuery to animate an element

Using ES6, we’ve now been able to create a tidy little BaseController class which encapsulates this once:

basecontroller.es6

Our Base Controller class is written exactly like a regular ES6 class…

 
import $ from 'jquery';
import lib from 'lib';

export default class BaseController{
	constructor(m) {
		this.IsLoading = false;

		this.model = m;

		// Our models all contain a reference to our view ID
		if (this.model !== null) this.view = $('#' + this.model.UniqueID);
		else this.view = $('<div></div>'); // Create an empty object so that we don't have to keep doing null checks if there is no model

		this.Init();
	}

	/*
	Init
	This method can be overridden by derived classes
	*/
	Init(){

	}

	/*
	UpdateModel
	Helper method to replace our model (for example, if we update a database record)
	*/
	UpdateModel(newModel){
		$.extend(this.model, newModel);
	}

	/*
	CallJSON
	Calls our web service to get the given JSON
	*/
	CallJSON(url, params){
		var p = new Promise((success, fail) => {
			// Adjust model state
			this.view.addClass('loading');
			this.IsLoading = true;

			// Call our web service
			lib.CallService(url, params).then(result => {
				// Adjust this model state
				this.view.removeClass('loading');
				this.IsLoading = false;

				// Pass back to the specific handler/caller
				success(result);
			}, err => {
				console.log("Error", err);
				this.view.removeClass('loading');
				this.IsLoading = false;
				fail(err);
			});
		});

		return p;
	}

}

logon.es6

Our re-written logon file may now look like this:

import BaseController from 'basecontroller';

export default class LogonControl extends BaseController {
	AttemptSignIn(){
		alert('Your current PersonID is ' + this.model.PersonID);
		let params = {
			username: this.view.find('#TxtUserName').val(),
			password: this.view.find('#TxtPassword').val()
		};

		// Use base method to make a JSON call
		this.CallJSON('signin', params).then((newModel) => {
			this.UpdateModel(newModel);
			alert('Your new PersonID is ' + this.model.PersonID);
		});
	}
}

And of course, you kick it all off by instantiating logon.es6 with a model in the constructor (note that the constructor lives in the BaseController class, accepts one parameter, and calls Init() for you):

System.import('logon').then((l) => {
	let model = {
		PersonID: 0
	};

	var log = new l.default(model);
});

Using traceur to make your ES6 code backwards compatible

Currently, most browsers only support a tiny subset of the ES6 standard, and we doubt we can rely on them coming up to speed for at least another 12 months, likely much longer.  So it becomes necessary to run a transpiler which converts your ES6 code back into ES5.

As far as we can tell, the most complete transpiler out there is Google’s Traceur.
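To see what a transpiler actually buys you, here is a hand-written illustration of the kind of rewrite involved.  Traceur’s real output is considerably more involved (it emits runtime helpers), so treat this as a sketch only:

```javascript
// ES6 source (what you write):
//   class Greeter {
//     constructor(name) { this.name = name; }
//     greet() { return `Hello, ${this.name}`; }
//   }
//
// ES5 output (roughly what a transpiler emits for old browsers):
function Greeter(name) {
	this.name = name;
}
Greeter.prototype.greet = function () {
	// Template literal rewritten as plain string concatenation
	return 'Hello, ' + this.name;
};
```

Old browsers understand the prototype-based version natively, which is why a build step lets you write ES6 today.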

There are two ways of doing this, the lazy way and the proper way.  The lazy way is to just include a script file in the <head/> of your application, but we’re not even going to show a demo of that here because it is short-sighted.  (If you’re asking, the thing we hate most is not that the transpiling is done in real-time in the browser, but the fact that you have to decorate your <script/> tags with type=”module”)

The better way to do this is to set up a task which runs traceur against your ES6 files at compile time and then point your browser at the generated ES5 files.  For this, we’ve used the new Gulp integration supported by Visual Studio 2015.  This is not the place to give a Gulp/VS tutorial, but once you’ve got your head around it, here is how we at Blackball do our transpiling:

gulpfile.js

/// <binding ProjectOpened='watchjs'/>
var gulp = require('gulp');
var watch = require('gulp-watch');
var traceur = require('gulp-traceur');
var rename = require("gulp-rename");

// Helpful error handler to display error messages in our Gulp console window
function onError(error) {
	console.log("ERROR: " + error.toString());
	this.emit('end');
}

// Watch runs the traceur task automatically each time our es6 files are edited
gulp.task('watchjs', function () {
	gulp.watch('**/*.es6', ['compiletraceur']);
});

// Our ES6 files are indicated with a .es6 file extension, so we just grab them all then save the transpiled .js file alongside each
gulp.task('compiletraceur', function () {
	return gulp.src('scripts/**/*.es6')
		.pipe(traceur())
		.on('error', onError)
		.pipe(rename(function (path) {
			path.extname = ".js";
		}))
		.pipe(gulp.dest('scripts/'));
});

 

Read it slowly and it kind of makes sense.

One problem with traceur is that your browser is running code which you didn’t write, so error logs do not match your ES6 files one-to-one.  This is surprisingly okay though – even though the structure of your files differs, the lines that cause errors are generally pretty similar and, practically speaking, we haven’t had any problems understanding which part of our ES6 code an error pertains to.

 

Summing up

Whether you like it or not, you’re all going to be coding in ES6 within the next five years, so you had better get on board.  Given the current state of support (tooling, blogs/forums, browsers…), it is not yet painless to use today, but if you like to play with new toys then hopefully this article will save you a few hours…

Best practice front-end architecture using Microsoft ASP.Net MVC and Rivets.js

May 28, 2014

A few years ago I wrote an article about best-practice architecture for web applications built in Microsoft.Net.  This was focused entirely on the back-end and I mentioned at the end that I would do a front-end article one day.  So, here we go…

First of all, let’s get some basic requirements down:

  • your business logic should be separated from your presentation logic
  • your business logic should be unit testable – and this means abstracting as much as possible so you can mock it later
  • your application should be as lightweight as possible – but more importantly, the application must not ‘bloat’ with superfluous or rarely-used features as it grows bigger

In addition, since I wrote the last article, the way we build modern web applications has shifted massively towards the client side, with much of my work written in javascript these days.  The trouble with javascript is that it doesn’t naturally enforce rules on the developer.  If you are developing by yourself, you may be able to get away with this because you understand your own way of working.  But if you’re working in a multi-team environment, this isn’t good enough and you need to enforce your own rules using conventions which other developers must follow.  This article shows the conventions which I currently use to keep things organized and understandable.

Simple huh?  Let’s get into it.  If you’d like to follow along, you can download the sample project here (and don’t forget to run the included .sql file to create your database structure).

Separating your business logic from your presentation logic (a.k.a. Separating your javascript from your HTML)

For me, the reason for doing this primarily comes down to unit testing – you can’t be dealing with HTML manipulation when you are trying to test the SavePerson() method of your javascript file. 

The typical way to go about this is via two-way data binding, and for years the common way of doing this has been with third-party tools like Knockout.js.  Personally, I detest Knockout – they’ve done amazing work (including older-browser support), but you have to completely rewrite your javascript models in order to make it work – which means:

  • you and other developers must become proficient ‘knockout developers’ in order to maintain the application
  • you become massively tied-in to the knockout framework

For these reasons, I had never built a properly data-bound front-end on any of my applications.  At least, until rivets.js came along.

Rivets.js

This was a huge game changer for me.  It’s not as big or popular as older frameworks such as Knockout, but it has one massive advantage – you can develop your javascript files with absolutely no knowledge of (or reference to) rivets.  In fact, your javascript files have no idea that they are data-bound at all!  That is perfect – just the way it is supposed to be.  To clarify, here is an example of a file that displays a list of people:

 
var PeopleList = function (model) {
	var
	Init = function () {
		console.log("People", model);
	},
	ViewPerson = function (ev, data) {
		alert('You have clicked ' + data.person.DisplayName);
	},
	AddPerson = function () {
		var params = {
			firstName: 'Person ' + (model.People.length + 1)
		};

		// Call our MVC controller method
		lib.Data.CallJSON('home/createperson', params, function (newPerson) {

			// Add to our model - the view will update automatically
			model.People.push(newPerson);
		});

		return false;
	};

	Init();
	return {
		model: model,
		AddPerson: AddPerson,
		ViewPerson: ViewPerson
	};
};
 

Beautiful huh?  Imagine unit-testing that bad-boy – piece of cake!

So, with rivets.js you grab this javascript file and you ‘bind’ it to a block of HTML, and suddenly, as the user interacts with the HTML (like clicking an ‘Add person’ button), your javascript file will handle the events and react accordingly (like creating a new person).  For reference, here is my associated HTML view:

<div id="MyPeopleList" class="home">
	<h2>People list</h2>

	<table>
		<tr data-each-person="model.People" data-on-click="ViewPerson">
			<td data-html="person.DisplayName"></td>
		</tr>
	</table>

	<p>
		<a data-on-click="AddPerson">Create a new person</a>
	</p>
</div>

<script>
	var viewID = 'MyPeopleList';

	var view = document.getElementById(viewID);

	rivets.bind(view, new PeopleList(model));
</script>

See the data-* attributes?  That’s rivets.js.  I’m not going to go into how the binding works – check out the rivets.js documentation for that.

 

Automatically wiring up your views to your controllers

In the HTML sample above, you can see a little script tag at the bottom which pulls in my PeopleList javascript and applies it to our HTML.  I build a very modular type of architecture, so I end up with hundreds of these files, and quite honestly I got sick of retyping the same thing again and again.  So, this is a good chance to introduce the first of my ‘conventions’, which I apply using the MVC framework – let’s jump to our C# code.  Specifically, the OnResultExecuted() method, which gets called after each of my MVC views is rendered (BTW, if you’re not familiar with Microsoft MVC then you’ll probably need to brush up on another blog before proceeding):

 
protected override void OnResultExecuted(ResultExecutedContext filterContext)
{
	var viewFolder = this.GetViewFolderName(filterContext.Result);
	var viewFile = this.GetViewFileName(filterContext.Result);
	var modelJSON = "";

	// Cast depending on result type
	if (filterContext.Result is ViewResult)
	{
		var view = ((ViewResult)filterContext.Result);
		if (view.Model is BaseModel) modelJSON = view.Model.ToJSON();
	}
	else if (filterContext.Result is PartialViewResult)
	{
		var view = ((PartialViewResult)filterContext.Result);
		if (view.Model is BaseModel) modelJSON = view.Model.ToJSON();
	}

	// Render our javascript tag which automatically brings in the file based on the view name
	if (!string.IsNullOrWhiteSpace(viewFolder) && !string.IsNullOrWhiteSpace(modelJSON))
	{
		var js = @"
<script>
	require(['lib', 'controllers/" + viewFolder + @"/" + viewFile + @"'], function(lib, ctrl) {
		lib.BindView(" + modelJSON + @", ctrl);
	});
</script>";

		// Write script
		filterContext.HttpContext.Response.Write(js);
	}

	base.OnResultExecuted(filterContext);
}
 

Don’t worry about all the custom function calls – you’ll find them in the example project download – the key points are:

  • find the physical path of the view that we are rendering (e.g. /home/welcome.cshtml)
  • use this path to determine which javascript file we have associated to it (e.g. /scripts/controllers/home/welcome.js)
  • automatically render the <script/> tag at the end of our view – exactly the same as we manually typed it into the HTML example I pasted above

So, this handy method does a few things:

  • it saves me typing
  • it forces me and other developers in the team to store the javascript files in a consistent and predictable format.  So, if I’m working in Visual Studio on the Welcome.cshtml view, I know immediately where I can find its javascript just by looking at the file name.
  • it provides a clean way for me to do my server-to-client model serialization, which deserves its own section…
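As an aside, the lib.BindView helper that the generated script tag calls lives in the example project; conceptually it does something like the following sketch.  This is a hypothetical reconstruction, not the project’s actual code – the rivets.bind call and DOM lookup are passed in as parameters so the sketch stays self-contained:

```javascript
// Plausible sketch of a BindView-style helper.
// 'controllerFactory' builds the controller (e.g. function (m) { return new PeopleList(m); }),
// 'findView' resolves the view element (would be document.getElementById in a browser),
// 'bind' stands in for rivets.bind.
function BindView(model, controllerFactory, findView, bind) {
	var controller = controllerFactory(model);   // construct the controller around the serialized model
	var view = findView(model.UniqueID);         // our models all carry their view's ID
	bind(view, controller);                      // wire the data-* attributes to the controller
	return controller;
}
```

The serialized MVC model flows straight through to the controller, which is what makes the server-to-client handoff feel seamless.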

Serializing your C# MVC model into your client side (javascript) model

Note the modelJSON variable you can see above.  Because I’m loading my javascript controller from server-side code, I have access to the MVC ViewModel and I am able to serialize it directly into my page.  This is something which few online examples of javascript data-binding frameworks show you – they always start with some model which is hard-coded into your javascript, which is completely impractical in real-life.

In practical terms, this has the other advantage that my server-side MVC models have precisely the same structure as the model I have dealing with in javascript.  This makes it easy for me to understand how my model is formatted when I’m working in javascript.

BONUS: It would be amazing if I could somehow get javascript intellisense to work on my models by parsing the C# structure of my MVC models.  I could also use the same mechanism to do compile-time checking of my javascript code.  If anybody can think of a way to do this, please let me know.

Managing your CSS files in a large web application

Another problem I find with large projects is managing CSS files.  One typically has a common.css file and perhaps an admin.css file to try to split things out as required.  But as your project grows, you add more and more fluff to these files and they end up very, very large.  Then, to reduce the initial site load, you think you’ll pull out some targeted CSS classes into a separate file and reference it just in the pages that need it.  Except then you start forgetting which pages need it – and besides, with MVC applications these days you tend to pull in partial views all the time – they have no idea what page they are on or what CSS files they currently have access to.

So, here’s what I’ve finally come up with, and it’s very similar to how I pull in the javascript files above, only this time it uses MVC’s OnResultExecuting() method:

protected override void OnResultExecuting(ResultExecutingContext filterContext)
{
if (filterContext.Result is ViewResult || filterContext.Result is PartialViewResult)
{
// Render a reference to our CSS file
var viewPath = this.GetViewFolderName(filterContext.Result);
if (!string.IsNullOrWhiteSpace(viewPath)) this.RenderPartialCssClass(viewPath);
}

base.OnResultExecuting(filterContext);
}

Again, ignore the missing method references, they’re in the example project.  Here’s what it’s doing:

  • find the path of the current view (e.g. /home/welcome.cshtml)
  • generate a CSS path name based on this path name (e.g. /content/xhome.min.css)
  • Render a <link/> tag pointing to the CSS file

The <link/> tag is written to the HTML stream before the HTML of the view is rendered, and voila – every time you pull down a view (either with a direct request, or a partial render using AJAX), the first thing it will do is tell the browser which CSS file it needs and the browser will pop off and download that too.
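The path mapping the C# helpers perform can be sketched in javascript for illustration; note that the exact naming convention (the ‘x’ prefix, the /content/ folder) is inferred from the /home/welcome.cshtml → /content/xhome.min.css example above, and the helper name is my own:

```javascript
// Sketch of the view-path to CSS-path convention described above.
// Assumes one minified CSS file per view folder, named x<folder>.min.css.
function cssPathForView(viewPath) {
    // '/home/welcome.cshtml' → folder name 'home'
    var folder = viewPath.split('/').filter(function (p) { return p; })[0];
    return '/content/x' + folder + '.min.css';
}

var cssPath = cssPathForView('/home/welcome.cshtml');
// cssPath is now '/content/xhome.min.css'
```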

Of course, this has its drawbacks:

  • the first time a view in each folder is downloaded, the browser has to make another (blocking) request to get the CSS file.  Remember though that the CSS file is cached so subsequent requests have no additional overhead.
  • it splits the CSS into files which aren’t named or organized according to their use.  For example, another way to organize files would be by function (admin, public etc).  Personally, this doesn’t bother me so I’m happy.

In addition to these files, I also have the usual ‘base’ CSS file, but now it should include only general styles like tables, headings etc.  When you have functionality which is specific to a module (HTML view) then you add it to the relevant file in its own section.  In the example project, you can see that xhome.min.css contains the styles which the welcome.cshtml view uses.

BONUS: the above works perfectly for partial views which are injected into an existing DOM.  However, if you use it for a regular view, then you’ll notice the <link/> tag is inserted at the very top of the HTML output – i.e. above the opening <html> tag.  Not strictly valid, although I haven’t found any practical problems in any browser.  Still, if anybody finds an efficient way to render it in the <head/> tag instead, please let me know – I haven’t bothered looking into it.

Dependency injection using require.js

The final excellent technology I use in my front-end architecture is require.js.  This serves two purposes:

  • it allows my various javascript files to load their dependencies on demand instead of pre-loading all my files on the initial page load.  This is absolutely essential if your large application is to perform well.
  • it allows us to mock out files for unit testing, simply by replacing the require-main.js file

I’m not going to go into how require.js works – you can find out all about it on their website.  But again, it is a must-have as far as I’m concerned.
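To give a flavour of what require.js buys you, here is a toy sketch of the AMD module pattern it implements (the loader below is a few lines of illustration only – the real library also loads scripts asynchronously and takes configuration; I’ve named the resolver requireDeps to avoid confusion with the real require):

```javascript
// Toy AMD-style loader: modules declare their dependencies by name,
// and are only constructed when something asks for them.
var modules = {};
function define(name, deps, factory) {
    // Resolve dependencies, then run the factory to build this module
    modules[name] = factory.apply(null, deps.map(function (d) { return modules[d]; }));
}
function requireDeps(deps, callback) {
    callback.apply(null, deps.map(function (d) { return modules[d]; }));
}

// Example modules mirroring the Settings/Data split used in this article:
define('settings', [], function () { return { SiteRoot: '/api/' }; });
define('data', ['settings'], function (settings) {
    return { urlFor: function (route) { return settings.SiteRoot + route; } };
});

var apiUrl;
requireDeps(['data'], function (data) { apiUrl = data.urlFor('Person/GetPerson'); });
// apiUrl is now '/api/Person/GetPerson'
```

The same indirection is what makes mocking easy for unit tests – point the ‘data’ name at a stub module and nothing else changes.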

Unit testing

Unfortunately, the example project doesn’t have examples of unit testing.  I usually use Jasmine and Karma to run my tests, and I mock out the require-main.js file to stub out things like my jQuery dependencies.

Conclusion

So, that’s it – my current take on ‘best practice’ MVC architecture for real-world, large-scale web applications.  The next steps I’m looking forward to solving over the next year are:

  • compile time javascript errors
  • introduction of ES6 javascript
  • implementing the proposed Object.observe() pattern in ES7 (which could technically allow me to replace rivets.js but why re-invent the wheel?)
  • Visual Studio ‘knowing’ how javascript and cshtml files are linked, so you can do something akin to ‘view source’ to jump between them
  • a framework to bind my javascript files to non-HTML frameworks – perhaps Android axml files?  Perhaps I’m dreaming….

Seeya

Automatic code documentation based on your C# comments

April 24, 2014

I’ve written a few APIs over the years and the worst part is writing the documentation:

  • it takes extra time
  • it must be updated every time you make changes to your code
  • it is a duplication of work because I already document my code inline anyway

So, here’s a handy utility I wrote which will use reflection to whip through your code and draw out the comments.

Generating XML Documentation

Before proceeding, you must set up your project to generate an XML file of your code comments.  This is done via the Properties –> Build menu in your project (presumably it’s a web project).  See the screenshot below:

 

(Screenshot: the project’s Properties –> Build settings, with the XML documentation file option enabled)

 

This generates an xml file in the bin directory every time you build.  The file contains all your code comments, ready for parsing by my helper utility.  The format is something like this:
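In rough terms (this is a hand-written sketch – actual member name strings include the full namespace, and a prefix such as M: for methods), the file looks like:

```xml
<doc>
  <assembly><name>Web</name></assembly>
  <members>
    <member name="M:Web.Controllers.PersonController.GetPerson(System.String)">
      <summary>Gets a person by their username</summary>
      <param name="username">The unique username to look up</param>
    </member>
  </members>
</doc>
```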

(Screenshot: a sample of the generated XML documentation file)

 

So, with this in place, here is the utility class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Web;
using System.Xml;
using System.Xml.Linq;
using Common;


namespace Web.Code
{
/// <summary>
/// Creates documentation for our various API methods
/// </summary>
public class ApiDocumentationGenerator
{

#region Sub classes

public class Parameter
{
public string Name { get; set; }
public string Type { get; set; }
public string Description { get; set; }
}

public class Method
{
public string Name { get; set; }
public List<Parameter> Parameters = new List<Parameter>();
public string Summary { get; set; }
}

#endregion

#region Properties

public List<Method> Methods = new List<Method>();
private string PathToXmlDocumentation = "";

private XDocument _XmlDocumentation = null;
private XDocument XmlDocumentation
{
get
{
if (_XmlDocumentation == null)
{
_XmlDocumentation = XDocument.Load(this.PathToXmlDocumentation);
}
return _XmlDocumentation;
}
}

#endregion

/// <summary>
/// Constructor
/// </summary>
/// <param name="pathToXmlDocumentationFile"></param>
public ApiDocumentationGenerator(string pathToXmlDocumentationFile)
{
this.PathToXmlDocumentation = pathToXmlDocumentationFile;
}

/// <summary>
/// Generates our classes
/// </summary>
public void Generate()
{
// BaseController is a class I wrote which all my MVC *Controller classes inherit from. If you don't have a base class, you can just use
// whatever parent class you know your own API methods sit within. And if there is no parent class, then just get every type in the assembly
var ass = System.Reflection.Assembly.GetAssembly(typeof(BaseController));

// Get each class
foreach (var controller in ass.GetTypes())
{
if (controller.IsSubclassOf(typeof(BaseController))) ExtractMethods(controller);
}
}

/// <summary>
/// Finds the methods in this controller
/// </summary>
/// <param name="controller"></param>
private void ExtractMethods(Type controller)
{
foreach (var method in controller.GetMethods())
{
// My API methods are decorated with a custom attribute, ApiMethodAttribute, so only show those ones
var attrs = System.Attribute.GetCustomAttributes(method);

// Check our attributes show we have an API method
var isAPIMethod = false;
foreach (System.Attribute attr in attrs)
{
if (attr is ApiMethodAttribute)
{
isAPIMethod = true;
break;
}
}

// Break if not an API method
if (!isAPIMethod) continue;

// Parse out properties
var meth = new Method();
meth.Name = controller.Name.Replace("Controller", "") + "/" + method.Name;
this.Methods.Add(meth);

// Quick hack to detect the XML segment we want - I know that all my API methods are in *Controller methods, so I can just restrict to this
var memberName = "Controller." + method.Name;

// Get the methods from our documentation
var docInfo = (
from m in this.XmlDocumentation.Descendants("members").Descendants("member")
where m.Attribute("name").Value.Contains(memberName)
select new {
Summary = m.Descendants("summary").First().Value,
Params = m.Descendants("param")
}
).FirstOrDefault();

// Now copy the XML back into my method/parameter classes
if (docInfo != null)
{
meth.Summary = docInfo.Summary;

// Add parameters
foreach (var param in docInfo.Params)
{
var p = new Parameter();
meth.Parameters.Add(p);
p.Name = param.Attribute("name").Value;
p.Description = param.Value;
}
}
}
}
}
}

 

Note that this won’t compile for you because it references a custom attribute, ApiMethodAttribute, and my base class, BaseController.  However, you could delete the logic around these and the documentation should still generate.

Now it’s just a matter of calling the class.  I use mine in an MVC ActionResult:

/// <summary>
/// Uses reflection to document our API methods
/// </summary>
/// <returns></returns>
public ActionResult APIDocumentation()
{
var pathToDocs = HttpContext.Server.MapPath("~/bin/APIDocumentation.xml");
var model = new ApiDocumentationGenerator(pathToDocs);
model.Generate();
return View("admin/apidocumentation", model);
}

And for clarity, I’ll include my View, so you can see how it’s used to render the results to the user:

@model Web.Code.ApiDocumentationGenerator
@{
ViewBag.Title = "API Documentation";
}

<h2>API Documentation</h2>

@foreach (var method in Model.Methods.OrderBy(x => x.Name))
{
<h3>@method.Name</h3>
<p><i>@method.Summary</i></p>
if (method.Parameters.Any())
{
<ul>
@foreach (var param in method.Parameters)
{
<li><strong>@param.Name </strong>@param.Description</li>
}
</ul>
}
}

 

Hope that helps.

Cheesebaron HorizontalScrollView with MvvmCross 3 (Hot Tuna)

February 14, 2014

Many thanks to Cheesebaron and Stuart for their amazing contributions to the Xamarin platform. Cheesebaron made a scrollable horizontal list view at https://github.com/Cheesebaron/Cheesebaron.HorizontalListView back in early 2012. For those interested, I have ported the Cheesebaron HorizontalListView to the latest version of MvvmCross (currently v3).

Hope that helps.

MVC Output Caching using custom FilterAttribute

August 29, 2013

 

As with ASP.Net Web Forms, MVC offers some out-of-the-box caching with its OutputCacheAttribute; however, as with classic ASP.Net, one quickly realizes its limitations when building complex systems.  In particular, it’s very difficult, and often impossible, to flush/clear the cache based on various events that happen within your application.

For example, consider a main menu which has an ‘Admin’ button for appropriately authorized users.  When your administrator initially views the page, the system will cache the HTML, including the Admin link.  If you later revoke this privilege, the site will continue serving the cached link even though the user is technically no longer authorized to access this part of the site.

Not good.

So, with a little to-ing and fro-ing, I’ve finalized my own FilterAttribute which does this for you.  The advantage of writing your own is that you can pass in whatever parameters you like, as well as have direct access to the current HttpContext, which in turn means you can check user-specific values, access the database – whatever you need to do.

How it works

The attribute essentially consists of just a couple of methods, implementing the IResultFilter and IActionFilter interfaces:

  • OnActionExecuting.  This method fires before your Action even begins.  By checking for a cache value here, we can abort the process before any long-running code in your Action method or View rendering executes
  • OnResultExecuting.  This method fires just before HTML is rendered to our output stream.  It is here that we inject cached content (if it exists).  Otherwise, we capture the output for next time

The code

I’ve commented the code below so you can follow more-or-less what is going on.  I won’t go into too much detail, but needless to say if you copy/paste this straight into your work, it won’t compile due to the namespace references.  I’m also using Microsoft Unity for dependency injection, so don’t be confused by ICurrentUser etc.

Finally, I’ve got a custom cache class, whose source code I haven’t included – just switch out my lines to access your own cache instead.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Web;
using System.Web.Mvc;
using BlackBall.Common;
using BlackBall.Common.Localisation;
using BlackBall.Contracts.Cache;
using BlackBall.Contracts.Enums;
using BlackBall.Contracts.Exporting;
using BlackBall.Contracts.Localisation;
using BlackBall.Contracts.Security;
using BlackBall.Common.Extensions;
using BlackBall.Logic.Cache;


namespace BlackBall.MVC.Code.Mvc.Attributes
{
public class ResultOutputCachingAttribute : FilterAttribute, IResultFilter, IActionFilter
{

#region Properties & Constructors

private string ThisRequestOutput = "";
private bool VaryByUser = true;

private ICurrentUser _CurrentUser = null;
private ICurrentUser CurrentUser
{
get
{
if (_CurrentUser == null) _CurrentUser = Dependency.Resolve<ICurrentUser>();
return _CurrentUser;
}
}

public ResultOutputCachingAttribute(bool varyByUser = true)
{
this.VaryByUser = varyByUser;
}

private string _CacheKey = null;
private string CacheKey
{
get { return _CacheKey; }
set
{
_CacheKey = value;
}
}

#endregion

/// <summary>
/// Queries the context and writes the HTML depending on which type of result we have (View, PartialView etc)
/// </summary>
/// <param name="filterContext"></param>
/// <returns></returns>
private void CacheResult(ResultExecutingContext filterContext)
{
using (var sw = new StringWriter())
{
if (filterContext.Result is PartialViewResult)
{
var partialView = (PartialViewResult)filterContext.Result;
var viewResult = ViewEngines.Engines.FindPartialView(filterContext.Controller.ControllerContext, partialView.ViewName);
var viewContext = new ViewContext(filterContext.Controller.ControllerContext, viewResult.View, filterContext.Controller.ViewData, filterContext.Controller.TempData, sw);
viewResult.View.Render(viewContext, sw);
}else if (filterContext.Result is ViewResult)
{
var partialView = (ViewResult)filterContext.Result;
var viewResult = ViewEngines.Engines.FindView(filterContext.Controller.ControllerContext, partialView.ViewName, partialView.MasterName);
var viewContext = new ViewContext(filterContext.Controller.ControllerContext, viewResult.View, filterContext.Controller.ViewData, filterContext.Controller.TempData, sw);
viewResult.View.Render(viewContext, sw);
}
var html = sw.GetStringBuilder().ToString();

// Add data to cache for next time
if (!string.IsNullOrWhiteSpace(html))
{
var cache = new CacheManager<CachableString>();
var cachedObject = new CachableString() { CacheKey = CreateKey(filterContext), Value = html };
cachedObject.AddTag(CacheTags.Project, CurrentUser.CurrentProjectID);
if (this.VaryByUser) cachedObject.AddTag(CacheTags.Person, this.CurrentUser.PersonID);
cache.Save(cachedObject);
}
}
}


/// <summary>
/// The result is beginning to execute
/// </summary>
/// <param name="filterContext"></param>
public void OnResultExecuting(ResultExecutingContext filterContext)
{
var cacheKey = CreateKey(filterContext);

if (!string.IsNullOrWhiteSpace(this.ThisRequestOutput))
{
filterContext.HttpContext.Response.Write("<!-- Cache start " + cacheKey + " -->");
filterContext.HttpContext.Response.Write(this.ThisRequestOutput);
filterContext.HttpContext.Response.Write("<!-- Cache end " + cacheKey + " -->");
return;
}

// Intercept the response and cache it
CacheResult(filterContext);
}

/// <summary>
/// Action executing
/// </summary>
/// <param name="filterContext"></param>
public void OnActionExecuting(ActionExecutingContext filterContext)
{
// Break if no setting
if (!Configuration.Current.UseOutputCaching) return;

// Our function returns nothing because the HTML is not calculated yet - that is done in another Filter
Func<string, CachableString> func = (ck) => new CachableString() { CacheKey = ck };

// This is the earliest entry point into the action, so we check the cache before any code runs
var cache = new CacheManager<CachableString>();
var cacheKey = new CachableString() { CacheKey = CreateKey(filterContext) };
var cachedObject = cache.Load(cacheKey, func);
this.ThisRequestOutput = cachedObject.Value;

// Cancel processing by setting result to some non-null value. Refer http://andrewlocatelliwoodcock.com/2011/12/15/canceling-the-actionexecutingcontext-in-the-onactionexecuting-actionfilter/
if (!string.IsNullOrWhiteSpace(this.ThisRequestOutput))
{
filterContext.Result = new ContentResult();
}
}

public void OnActionExecuted(ActionExecutedContext filterContext)
{

}

public void OnResultExecuted(ResultExecutedContext filterContext)
{

}

/// <summary>
/// Creates a unique key for this context
/// </summary>
/// <param name="filterContext"></param>
/// <returns></returns>
private string CreateKey(ControllerContext filterContext)
{

// Append general info about the state of the system
var cacheKey = new StringBuilder();
cacheKey.Append(Configuration.Current.AssemblyVersion + "_");
if (this.VaryByUser) cacheKey.Append(this.CurrentUser.PersonID.GetValueOrDefault(0) + "_");

// Append the controller name
cacheKey.Append(filterContext.Controller.GetType().FullName + "_");
if (filterContext.RouteData.Values.ContainsKey("action"))
{
cacheKey.Append(filterContext.RouteData.Values["action"].ToString() + "_");
}

// Add each parameter (if available)
foreach (var param in filterContext.RouteData.Values)
{
cacheKey.Append((param.Key ?? "") + "-" + (param.Value == null ? "null" : param.Value.ToString()) + "_");
}

return cacheKey.ToString();
}
}
}

Alright, hope that helps – there’s nothing like HTML caching to make you feel like the best website builder in the world!

Step by Step Guide to Building a Cross-Platform Application in HTML, CSS & Javascript

January 19, 2012

Back in the days when your computers came in options of the cream, the white, the off-white, the ivory or the beige, it was very frustrating that an application you put so much effort into wasn’t usable on other computers.

You had to make a choice, and my choice was Windows.  It was just when Microsoft .Net came out and we figured it was a pretty good bet, which it was I reckon.

Then came the web…

A year or so later, despite many of our customers still being in dial-up, I moved to the web.  Unfortunately, this came with almost more headaches – Internet Explorer 6 was the king of browsers, but we had a few Netscapes and this rogue called Firefox was starting to make waves.

Over the last 10 years, while I moved completely away from desktop applications, the browser wars just got worse – even IE couldn’t get versions working nicely together (compatibility mode??? WTF?).

Finally, although IE is still not perfect, it is definitely better and more importantly – you can pretty much ignore it and code just for ‘standards compliant’ browsers – Firefox and Chrome predominantly. 

And then the iPhone came along.

…then came the iPhone…

Actually, the iPhone web browser is incredible – I’m more confident of my websites running on the iPhone than I am in Internet Explorer.  And on top of this, they have to cram your site into a tiny little screen.  I’m very impressed.

Of course, the iPhone really comes into itself with its applications (as opposed to websites).  Not only do they run beautifully, but the entire App Store infrastructure exposes a developer’s work to millions of ready and willing credit card holders, eager to part with a couple of dollars just for the pleasure of the App Store buying experience. 

Unfortunately, the iPhone also forced you to code in yet another language – Objective C – and I’m sorry, but I can barely keep up with .Net let alone learn another language built on C of all things.  I guess I wasn’t the only one to mourn this because there came a slew of WYSIWYGs and cross-platform compilers (Moonlight anybody?).  And to the top rose PhoneGap – a platform that essentially ‘wraps’ a web application in Objective C to convert it into a regular iOS application.  Not only that, it will also wrap it in the relevant languages to support Android, BlackBerry, Windows Phone, Symbian etc…  Such a simple concept, but just amazing.

…and the desktop comes full circle…

Windows 8 purportedly supports native HTML/Javascript/CSS applications. 

Woah – so suddenly, my old-school web-coding skills can be deployed on the web, major mobile devices and 90% of the world’s desktops?  Amazingly, yeah – I think they can.

The HTML/CSS/Javascript Application

So, sorry for the long pre-amble – I just need noobs to appreciate that this next decade of development shouldn’t be taken for granted.

The point is that now you can build an application in ONE language and deploy to multiple platforms.  However, it’s not quite as easy as that – there are many restrictions to what can be built and how.  In this article I’m going to walk you right through from start to finish.  I’ve built a few of these applications by now (the most recent is www.stringsof.me) so I’ll point out the pitfalls and hopefully save you a bit of time.

Know the Goal

I should point out that it is FAR easier to build an application with the knowledge of its intended use.  If it’s going to be used on iPhone or wrapped in PhoneGap, then you can test it incrementally on these platforms as you go.  Far far easier than trying to retro-fit an existing web application.  In fact, I recommend to anybody that no matter how big your web application is, you just start from scratch. Copy/paste what you need from the old one, but start with a clean slate – after all, this app will be used for years and years so you better make it a nice one.  So, here’s our goal:

HTML-Architecture-Overview

 

The Application

The application consists of three parts – an HTML file, one or more Javascript files and one or more CSS files. 

Below I’ve created a completely stripped-down application to try to indicate the core functionality, however if you want to see the full-blown thing in action, I suggest you View Source on m.stringsof.me

Index.html

<html>
<script src="jquery.js" type="text/javascript"></script>
<script src="phonegap.js" type="text/javascript"></script>
<script src="settings.js" type="text/javascript"></script>
<script src="app.js" type="text/javascript"></script>

<link href="style.css" rel="stylesheet" type="text/css" />
<link href="settings.css" rel="stylesheet" type="text/css" />
<body>
<div id="MyContainer">
    Hi everybody, welcome to my App.
</div>
<script>
    var app = new App('MyContainer');
    app.Start();
</script>
</body>
</html>

Nothing particularly flash here, but of note:

  • We are using jQuery, but that is just my preference
  • The PhoneGap.js file is required for our various AppStore installations, but on the web server we replace with just a stub file
  • The Settings.js and Settings.css files enable us to manage the minor variations between our various platforms.  For example, iOS requires you to ask people before sending them push messages, Android doesn’t care, and push messages are irrelevant on a web-based app

      Settings.js

      The Settings file contains platform-specific variables.

      var Settings = function(){
          return {
              SiteRoot: 'http://api.stringsof.me/',
              ConfirmPushNotificationsOnStartup: false
          }
      }();

      Data.js

      Data provides connectivity to our server.  This class could be stored in the main App.js file, but I’ve split it out here because in a bigger application you’d have lots of Javascript files and you don’t want circular references if you can help it (not that Javascript minds, grrrrr).

      var Data = function () {
          var that = this;
          this.SiteRoot = Settings.SiteRoot;
          return { 
              CallJSON : function(route, params, callback) { 
                  var triggerName = new Date().getTime().toString();
                  $("body").bind(triggerName, function(e, result) {  
                      callback(result);
                  }); 
      
                  // Add the JSON callback to the parameters
          params._JsonPCb = 'Data.OnCallJSON';
                  params._JsonPContext = "'" + triggerName + "'";
      
                  // Make the JSON call
                  $.ajax({
                      url: that.SiteRoot + route,
                      data: params,
              type: 'GET',
              dataType: "jsonp"
                  });
              },
      
              OnCallJSON : function(result, triggerName) {
                  $("body").trigger(triggerName, result);
                  $("body").unbind(triggerName); }
          };
      } (); 

      App.js

      Encapsulates our main application code.  This file will usually get pretty big, but you can split it up later depending on your coding style.

      var App = function(containerID){
          this.ContainerID = containerID;
          var that = this;
          this.Start = function(){
              var $con = $('#' + that.ContainerID);

              // Get user
              Data.CallJSON('Person/GetPerson', {userName: 'ben'}, function(person){
                  $con.html('Welcome ' + person.FirstName);
              });

          }
          return {
              Start: that.Start
          }
      };

    Pretty simple.  All it does is get the user by their username (I have hard-coded 'ben' in this example).  On the callback, you display their name in our main <div/> element.

    One flashy thing I’ve done is use my method for calling JSONP with callbacks.  You can read up on it here, or take my word for it that it works.

    Data Access

    Most of us are used to dealing with server-side languages such as ASP.Net or PHP.  Despite some efforts (MVC perhaps), these technologies still leave the HTML dependent on, and to a certain extent aware of, the code that generated them.  For example, ASP.Net Web Forms is heavily dependent on ViewState.  This may be fine for your web application – even your mobile web application – but it is worthless in your iPhone or Android app.

    Without a server-side language to create and bind our HTML, we must defer to Javascript.  And to get the data required to bind, we must make some kind of web service call.  Because we are using Javascript, it makes sense to return JSON-formatted objects.

    Cross-Domain Data Access Using JSONP

    Using JSON would be all you had to do (and in fact frameworks like ASP.Net MVC have excellent JSON support baked in), except that we want to use the same web service (and therefore the same returned objects) in our iPhone/Android application.  Consider our mobile web application:

    Essentially therefore, our application is getting JSON objects from the same domain (m.stringsof.me) as where it resides.  Now consider our iPhone/Android application:

    • our web service (written in ASP.Net for example) is at http://m.stringsof.me/service
    • our application is stored on the phone – it doesn’t have a concept of ‘domain’

    It doesn’t have a domain, which means our calls are cross-domain requests.  Unfortunately, despite many modern web applications relying on this technique (check out the source of the Google home page), web browsers’ same-origin policy treats it as a security risk and your Javascript will error if you try to do it.  Enter JSONP…

    JSONP (JSON with Padding) gets around this issue with a crafty little trick which I’ve covered in another article.  You need to know how JSONP works in order to build your mobile application, so I suggest you brush up.
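For a rough flavour of the mechanism, here is a self-contained sketch (all names illustrative) of what the server emits and what happens when the injected <script> tag runs:

```javascript
// The server wraps its JSON payload in a call to a function the client named:
function buildJsonpResponse(callbackName, payload) {
    return callbackName + '(' + JSON.stringify(payload) + ');';
}

// The client defines that callback globally, then injects a <script> tag whose
// src points at the service; <script> tags are exempt from the same-origin policy.
var received = null;
globalThis.onPerson = function (person) { received = person; };

// Simulate what the browser does when the injected script executes:
var scriptText = buildJsonpResponse('onPerson', { FirstName: 'Ben' });
eval(scriptText); // in a real page, the browser runs this for you
// received is now { FirstName: 'Ben' }
```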

    Returning JSONP from an ASP.Net MVC Application

    As an aside, if you are an ASP.Net MVC user, you can create your own JsonpResult:ActionResult to return from your Views:

        public class JsonPResult : JsonResult
        {
            public override void ExecuteResult(ControllerContext context)
            {
                var response = context.HttpContext.Response;
                var request = context.HttpContext.Request;
                
                // Open the JSONP javascript function
                var jsonpCallback = request.Params["_jsonpcb"];
                response.Write(jsonpCallback + "(");
    
                // Defer to base class for rendering the javascript object. Because we
                // have opened a javascript function first, it gets rendered as the first parameter
                base.ExecuteResult(context);
    
                // Add any additional parameters - this is not part of JSONP, but
                // a construct I've written to allow me to pass extra 'context' to the server and back
                var extraParams = request.Params["_jsonpcontext"];
                if (!string.IsNullOrEmpty(extraParams)) response.Write(extraParams);
    
                // Close the JSONP function
                response.Write(");");
            }
        }

     

    Using this, you can return JSONP directly from your regular MVC Controllers:

        public class PersonController : Controller
        {
            public ActionResult GetPerson(string username) {
                var person = new DataService().GetPerson(username);
                var result = new JsonPResult {Data = person};
                return result;
            }
        }

     

    Awesome huh?

    Rendering your Data

    Because I have returned JSON from the server, the next step is to render it to HTML.  Without going into too much detail, I personally use a couple of methods:

    • HTML templating.  I store HTML in a separate file (or hidden DIV in the Index.html page) and then bind sections of it using jQuery
    • use jQuery to create html such as $('body').append($('<div></div>').html('Hi there'));
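    As a hedged illustration of the templating idea (the {{placeholder}} syntax below is my own invention for this sketch, not the project’s actual template format):

```javascript
// Minimal string templating: replace each {{key}} token with the
// matching property from the JSON model returned by the server.
function bindTemplate(template, model) {
    return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
        return model[key] == null ? '' : model[key];
    });
}

var html = bindTemplate('<div>Welcome {{FirstName}}</div>', { FirstName: 'Ben' });
// html is now '<div>Welcome Ben</div>'
```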

    Why not just return HTML from the server?

    Good question, glad I thought of it.  Technically, there’s no reason why you shouldn’t.  In fact, if you did, you wouldn’t need to jump through all those cross-domain hoops with JSON etc.

    Frameworks like ASP.Net MVC actually encourage you to do this with their ‘View’ system and when I built the www version of stringsof.me (www.stringsof.me) I used these and they worked great. 

    When I moved to a mobile version of the application however, I found that this was a little short sighted.  What if I expose my objects to a third party who wants to use them in, for example, a Facebook plugin?  The HTML I return for ‘GetPerson()’ is not likely to suit their purposes so best to return the object and let them format it themselves.  Or what if the returned HTML expects Javascript to scroll it into place, but the requesting device doesn’t support Javascript?

    Although it is easier to specifically write your HTML (and even binding if you are using MVC), I eventually concluded that plain JSON objects are the most versatile mechanism.  By using Javascript as the rendering agent, you can make decisions based on the state of the client (such as width or support for location-based queries) which aren’t necessarily available to a server-rendered page.

    Media Queries

    Now that you are getting your data, the next step is adjusting the presentation between the various platforms.  For example, a full-blown website may have a big background image and the mobile version may remove this to account for low-bandwidth phones visiting it.  The fashionable way to do this these days is via CSS Media Queries.

    The basic premise is that your CSS reacts according to the type of media that is using the device.  For example:

    body{
        background-image:url('bg.jpg');
    }
    @media screen and (max-device-width : 320px){
        body{
            background-image:none;
        }
    }

    The code above says:

    • for the body of the page, use a background image of bg.jpg
    • however, if the screen is smaller than 320px, do not show a background image

    The beauty of this is that it is platform independent – you don’t have to detect an iPhone or Android, you just have to know what its screen resolution is.  You can also switch based on orientation (whether the device is held upright or on its side):

    @media screen and (orientation: portrait){
        body{
            background-image:url('bg_narrow.jpg');
        }
    }
    @media screen and (orientation: landscape){
        body{
            background-image:url('bg_wide.jpg');
        }
    }

    Now, if the user turns their iPhone on its side, the background image will switch out to one that better suits its new dimensions.  Snazzy huh?

    Testing your Code

    Now that you have your HTML, Javascript and CSS working, you need to test it.  I have found that the best mechanism is Firefox with the Firebug plugin.  This will get you 99% of the way there, even for your iOS/Android applications later.

    You can test your media queries simply by resizing your browser window to the appropriate dimensions – getting a pretty decent idea of how your site will look on an iPhone compared to a full-size desktop browser.  Check it out by opening m.stringsof.me in a new browser window now and resizing.
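    If you would rather check a media query from script than by eyeballing the resize, newer browsers expose window.matchMedia, which reports whether a query currently matches.  A small sketch, with the matchMedia function passed in as a parameter (an assumption of mine, purely so the logic can be exercised outside a browser):

    ```javascript
    // Reports whether the page is currently in the 'mobile' state used by
    // the CSS media query above.  Pass window.matchMedia in a browser;
    // injecting the function keeps the logic testable outside one.
    function isMobileViewport(matchMediaFn) {
      return matchMediaFn("screen and (max-device-width : 320px)").matches;
    }

    // In a browser:
    //   isMobileViewport(window.matchMedia.bind(window));
    ```

    The bind call matters in some browsers, which insist that matchMedia is invoked with window as its context.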

    Deploying your Application as a Regular Website

    This is the easy one you’re probably used to – just create the website on IIS/Apache or whatever you are using and copy the HTML/Javascript/CSS files over.  The first time you deploy, remember to create Settings.css and Settings.js files and set them appropriately – they shouldn’t be in the main solution because they differ between deployments.

    Remember you’ll also need to create an empty (or stubbed) PhoneGap.js file to satisfy the file reference in your Index.html file.  If you don’t, your site will still run, but every page load will trigger an unprofessional ‘404 Page Not Found’ error for the missing script.
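    As an example of what a per-deployment Settings.js might hold (the property names and URL here are illustrative, not the actual stringsof.me ones), together with the no-op PhoneGap.js stub:

    ```javascript
    // Settings.js – one copy per deployment, never checked into the main
    // solution.  Property names are illustrative; use whatever your code reads.
    var Settings = {
      apiRoot: "http://example.com/api/",  // point at the right server
      isPhoneGap: false                    // true in the PhoneGap builds
    };

    // PhoneGap.js (website stub) – intentionally empty, so the <script>
    // reference in Index.html resolves instead of returning a 404.
    ```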

    Deploying your Application as a Mobile Website

    If you’ve used your CSS media queries correctly, your main website will double as your mobile website – no changes are required. 

    Deploying your Application as an iPhone and/or iPad App

    Ah, the part you’ve probably been waiting for all along. 

    • download PhoneGap from www.phonegap.com and follow their instructions for creating a new project in Xcode on your Mac (sorry, you need a Mac computer to build an iPhone app)
    • PhoneGap has comprehensive help files, so you are best off following them, but essentially the next step is to copy your HTML/Javascript/CSS files into the ‘www’ folder that PhoneGap provides.  Again create new Settings.js and Settings.css files accordingly.
    • note that PhoneGap also includes a PhoneGap.*.js file which contains the Javascript wrapper code to access the device hardware such as the camera.  Make sure the file is named exactly the same as that referenced in your Index.html file.

    Compiling and building your PhoneGap application is beyond the scope of this article, sorry.

    Beginner’s tips:

    • iOS is case-sensitive, so the <link/> and <script/> file references in your Index.html file must match the case of the files themselves.  If your CSS refers to images or folders, these are also case-sensitive. This took me about two hours to figure out – too much PC for me I guess.
    • I use DropBox to synchronize changes between my PhoneGap application and my website application.  Even though you are using the same HTML/Javascript/CSS files, they are copies of each other so a change in one must be copied to the other.  If you’re a PC user, you may also like to use File Backup Pro to quickly prepare and copy your changes to your server.

    Deploying your Application to Android (and other mobile devices)

    This uses PhoneGap again, and is the same as the iPhone installation above.  Again, the PhoneGap documentation does a much better job of explaining this than I can.

    Limitations

    The solution I have presented involves building a website predominantly in Javascript, and I have a few problems with this:

    • there is no compile-time checking.  I have colleagues that rave about Script#, but I don’t like the additional learning curve.
    • mainly because of point 1 above, it is hard to enforce architecture or development styles.  This makes working in a multi-developer team environment quite a bit tougher
    • search engines do not execute Javascript which means all they see on your web page is a little bit of HTML wrapper.  This means your website will not rank in Google, Yahoo etc.  You must therefore invest more in other SEO methods such as Site Maps and friendly URLs
    • JSONP requests can only be made as GETs, so if you need to upload a large amount of data, such as an image, you are out of luck.  In m.stringsof.me, I had to deal with this when uploading the image that the user draws on the <canvas/> element.  I eventually solved it by breaking the base64-encoded representation of the image into chunks small enough to fit in a GET request and sending them to the server one after the other.  The server remembers what it receives and joins the chunks back into a proper image at the end.  (This is why you see a percentage completion status when saving your work – each increment is a chunk of the image.)  You can View Source on the page to see how it was done, if you like.
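    The chunking trick in that last point can be sketched as follows.  The splitting and progress logic is the real idea; the sendChunk callback is a placeholder for whatever JSONP GET your application actually makes, and the 1,000-character chunk size is just a conservative guess at URL length limits.

    ```javascript
    // Splits a base64 string into pieces small enough to fit in a GET
    // request's query string (URL length limits vary between browsers
    // and servers; ~1,000 characters is a conservative choice).
    function chunkBase64(data, chunkSize) {
      var chunks = [];
      for (var i = 0; i < data.length; i += chunkSize) {
        chunks.push(data.substring(i, i + chunkSize));
      }
      return chunks;
    }

    // Hypothetical upload loop: send each piece with its index so the
    // server can reassemble them in order, and report progress as we go.
    function uploadImage(base64Image, sendChunk, onProgress) {
      var chunks = chunkBase64(base64Image, 1000);
      for (var i = 0; i < chunks.length; i++) {
        sendChunk(i, chunks.length, chunks[i]);   // e.g. a JSONP GET
        onProgress(Math.round(((i + 1) / chunks.length) * 100));
      }
    }
    ```

    The percentage shown while saving falls out of this naturally: each chunk acknowledged bumps the progress figure by one increment.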

    Summary

    Everybody likes a summary section so they know it’s the end of the article.  So, there you go.
