Posts Tagged ‘CodeProject’

Using the Microsoft Dynamics CRM Online Web API from your ASP.Net MVC website

August 30, 2016

The MS Dynamics Web API has a lot of promise, allowing users to authenticate using OAuth2 then granting your application access to their CRM data.  Unfortunately, the documentation provided by Microsoft is misleading at best.

After bashing my head against the wall for about ten hours, I got a response back from an MS developer with the following working solution, which I’ve padded-out and am sharing here to hopefully save somebody else the headache.

Here’s the working code sample – just drop it in as a standard ActionResult in your MVC project; no other code is required.

public async Task<ActionResult> GetAccountsFromDynamics()
{
    // Once you've created your Native Client in Azure AD, you can get the clientID for it
    var azureTenantGuid = "***";
    var clientID = "***";
    var tokenRequestUrl = string.Format(@"https://login.microsoftonline.com/{0}/oauth2/token", azureTenantGuid);

    // The credentials for the CRM *user* that you are accessing CRM on behalf of
    var crmUrl = "https://your_crm_url.dynamics.com";
    var userName = "***";
    var password = "***";

    // Connect to the authentication server
    var request = (HttpWebRequest)WebRequest.Create(tokenRequestUrl);
    request.Method = "POST";

    // Write our request to the request body
    using (var reqStream = await request.GetRequestStreamAsync())
    {
        var postData = string.Format(@"client_id={0}&resource={1}&username={2}&password={3}&grant_type=password", clientID, crmUrl, userName, password);
        var postBytes = new ASCIIEncoding().GetBytes(postData);
        reqStream.Write(postBytes, 0, postBytes.Length);
        reqStream.Close();
    }

    // Call the authentication server and parse out the response
    var accessToken = "";
    using (var response = (HttpWebResponse)await request.GetResponseAsync())
    {
        // Proceed interpreting the result
        var dataStream = response.GetResponseStream();
        if (dataStream != null)
        {
            var reader = new StreamReader(dataStream);

            // The response is returned as JSON; these lines just convert it to a C# object. The format includes our access token:
            // Example format: {"access_token": "abc...", "scope": "public"}
            var json = reader.ReadToEnd();

            var tokenSummary = json.FromJson<TokenSummary>();
            accessToken = tokenSummary.access_token;
        }
    }

    // Now make a request to Dynamics CRM, passing in the token
    var apiBaseUrl = "https://your_crm_url.dynamics.com/api/data/v8.1/";
    var httpClient = new HttpClient();
    httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
    var result = await httpClient.GetAsync(apiBaseUrl + "accounts?$select=name&$top=3");
    var accountInfoJson = await result.Content.ReadAsStringAsync();

    // You're done!
    return Content(accountInfoJson);
}
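
For completeness: TokenSummary and FromJson() above aren’t framework types – they’re a small DTO and helper you’ll need in your own project.  A minimal sketch, assuming Json.NET is available (any JSON library will do):

using Newtonsoft.Json;

// DTO matching the token endpoint's JSON response - only the field we use is included
public class TokenSummary
{
    public string access_token { get; set; }
}

public static class JsonExtensions
{
    // Minimal FromJson() helper, as used in the ActionResult above
    public static T FromJson<T>(this string json)
    {
        return JsonConvert.DeserializeObject<T>(json);
    }
}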

Bridging the client-server boundary – an experiment in architectures for next-generation web applications

May 1, 2015

This article is the first of a series in which I design and experiment with a new mechanism for building tomorrow’s web applications.  Specifically, I’m interested in blurring the boundary between client & server, such that both parties are unaware of their relationship to the other.

You can download the source code at the bottom of this article

The problem

Consider the following very common scenario where a web page signs up a new user based on their email address:

  1. the human enters an email address and presses the Sign Up button
  2. the browser posts the email address to your server
  3. the server receives the email, runs it through the business logic and then saves in the database
  4. the server returns (unloads the call stack), sending a 200-OK message back to the web browser
  5. the web browser receives the successful message and displays a message saying as much

In graphic terms, the communication flow is a little like this:

[Diagram: the simple request/response flow between browser and server]

This is all well and good, but now consider that, as part of the server’s business logic, the email address is compared with existing registrations and deemed to be a duplicate.  It is okay to save duplicates in our database, but we want to make sure that the human didn’t actually mean to Sign In to their existing account.  So, the communication now has an extra round-trip:

[Diagram: the same flow, with an extra round-trip to confirm the duplicate email]

This is pretty easy to draw on a diagram, but in practice, coding it up involves a lot more function points:

[Diagram: a method-level breakdown of the extra round-trip]

As the callouts in the diagram above show, our intrepid developer needs to write extra code to:

  • listen for the ‘duplicate email’ message from the server and display a confirmation box to the human asking if they’d like to continue even though it is a duplicate email
  • write an additional API method which accepts the selection that the user made.  In actual practice, this may be the same method with an optional parameter, but the point is that it needs to be accommodated

To get technical, this is what our SaveSignUp() method might look like (in the business logic layer):

public void SaveSignUp(string email, bool? overrideIfDuplicate){
	var emailAlreadyExists = MyEmailServer.EmailAlreadyExists(email);
	if (emailAlreadyExists && !overrideIfDuplicate.GetValueOrDefault()) throw new DuplicateEmailException("This email already exists");

	// Save
	MySignUpService.Save(email);
}

Keep in mind that this method would be called twice – the first time with no overrideIfDuplicate parameter, and then the second time where the user has set it to ‘true’.

Further to this, when the server confirms whether or not the duplicate should be saved, it implicitly assumes that the human is still there at the other end, ready to answer.  What if the existing database has 1 billion emails already and the duplication check takes 20 seconds – should we expect a human to wait this long for what they perceive to be a simple form submission?  Nup.

The answer

So this is the goal of my article – let’s see if we can develop an architecture like this:

[Diagram: the proposed flow – the server pauses execution and resumes once the client (eventually) answers]

This diagram was a little hard to draw, so please bear with me.  What I’m trying to convey is:

  • the initial saving of the sign up form returns immediately, and the human can continue with their workflow (including leaving our website altogether)
  • when the server detects a duplicate, it doesn’t specifically fire this back at the client (although in this diagram, the client does answer it).  You might initially think of this like an event, but in fact it is a pause in execution

A pause in execution

A pause in execution – and this will become the crux of my architecture.  Ultimately, I want the aforementioned function to be rewritten like this:

public void SaveSignUp(string email){
	var emailAlreadyExists = MyEmailServer.EmailAlreadyExists(email);
	var bridge = new MyFancyNewArchitectureMessenger();
	if (emailAlreadyExists && !bridge.Listen("This email has already been registered. Are you sure you wish to continue?")) return;

	// Save
	MySignUpService.Save(email);
}

In this rewritten example, we don’t have to write extra parameters to accept extra logic.  Instead, as questions arise, we simply ask them in what appears to be a synchronous manner (of course, the actual execution can’t be synchronous, but I want it to appear that way).

If this doesn’t look like much of a difference to you, consider this more real-world example with more logical paths:

The current way…

public void SaveSignUp(string email, bool overrideIfDuplicate, bool? inviteAFriend, string friendsEmailAddress){
	// Check if the email already exists, or if the user has agreed to store it anyway
	var emailAlreadyExists = MyEmailServer.EmailAlreadyExists(email);
	if (emailAlreadyExists && !overrideIfDuplicate) throw new DuplicateEmailException("This email already exists");

	// TODO: save to database

	// Check if they want to invite a friend?
	if (!inviteAFriend.HasValue) throw new InviteAFriendException("Would you like to invite a friend?");

	// They've agreed to invite a friend?
	if (inviteAFriend.Value && !string.IsNullOrWhiteSpace(friendsEmailAddress)){
		// TODO: save friend's email address to database
	}
}

My new way…

public async Task SaveSignUp(string email){
	var bridge = Dependency.Resolve<IBridge>();

	// Confirm the email address?
	if (!regex.IsMatch(email))
	{
		// Confirm with user?
		if (!await bridge.Listen(new YesNoPrompt("Invalid email", "Your email is not a valid format. Are you sure you wish to save it?")))
		{
			return;
		}
	}

	// TODO: save to database

	// Would they like to invite a friend?
	if (await bridge.Listen(new YesNoPrompt("Invite friend", "Would you like to invite a friend?")))
	{
		var friendsEmail = await bridge.Listen(new Readline("Invite friend", "What is your friend's email address?"));

		// TODO: save friend's email address to database
	}
}

Although the two code examples above are similar in length, the new architecture is much easier to develop because it is built in a linear fashion:

  • With the former, our fearless developer would have also had to do a lot of work on the client such as passing up new variables and responding to different exception types.
  • With the latter, our developer simply asks questions of the bridge and waits for a response.  They have no knowledge or concern for how the questions are being asked.

Building a bridge between the client & server

For version one of my bridge, I decided to use a combination of now somewhat-old technologies:

  • SignalR. SignalR uses websockets and this allows me to push my bridge messages down to the client (I also use it to push messages from the client to the server, although this could equally have been done with a regular HTTP post).
  • Microsoft’s new async/await construct.  Because version one of the bridge uses loops and polling, I use async/await to move the processing to a different thread – thereby freeing up IIS to serve up other requests.

Let’s look at the code:

private async Task<T> Listen<T>(IBridgeMessage<T> msg){
	// Poll the cache, waiting for a result
	var cache = new CacheManager<IBridgeMessage<T>>();
	IBridgeMessage<T> result = null;

	// Cancel after a few seconds
	var cancel = new CancellationTokenSource();
	cancel.CancelAfter(TimeSpan.FromSeconds(20));

	// Version one - poll the cache in a loop to check for our return
	await Task.Factory.StartNew(() =>
	{
		while (true)
		{
			// Listen to see if we've timed out
			if (cancel.IsCancellationRequested) break;

			// Check the cache - the cache matches based on the msg.MessageID property
			result = cache.Load(msg, null);
			if (result != null) break;

			// Wait a wee while before checking again
			System.Threading.Thread.Sleep(100);
		}
	}, cancel.Token);

	// Return
	if (result == null)
	{
		NotifyCancelled(msg.MessageID);
		return default(T);
	}
	return result.Result;
}

As you can see, the code essentially just loops, checking for a variable in our cache (which our client later populates – see below).  Now, I know that this is not very elegant – obviously the server is still using resources even when ‘idle’, and clearly this method wouldn’t scale well once we had a few thousand concurrent users.  But it’s a good start for version one.
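
A quick note on that CacheManager class: it’s my own wrapper and is included in the download at the bottom of this article.  The contract that Listen() relies on is roughly the following – a hypothetical sketch, not the actual implementation:

// Hypothetical sketch of the cache contract assumed by Listen() and Answer()
public interface ICachable
{
    string CacheKey { get; }
}

public class CacheManager<T> where T : class, ICachable
{
    // Returns the cached item matching the given item's key (Listen() matches on
    // msg.MessageID), or invokes the fallback when nothing has arrived yet
    public T Load(T itemToMatch, Func<string, T> fallback) { /* see download */ return null; }

    // Stores an item in a cache shared across requests, keyed by CacheKey
    public void Save(T item) { /* see download */ }
}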

As I mentioned, the code will repeatedly check our cache for a return token, which is inserted by the client like this:

public void Answer(string jsonEncodedResult) {
	if (String.IsNullOrWhiteSpace(jsonEncodedResult)) return;

	// Deconstruct the entities
	var jToken = (JToken)Newtonsoft.Json.JsonConvert.DeserializeObject(jsonEncodedResult);

	var requestedTypeName = jToken.Value<string>("TypeName");

	// Because they were serialized from an abstract class (BaseBridgeMessage), Newtonsoft can't automatically cast them to their types as they're just a collection of {name:value} pairs
	// So, we need to iterate through and cast them ourselves
	var itemTypes = System.Reflection.Assembly.GetAssembly(typeof(IBridgeMessage)).GetTypes().Where(x => x.IsSubclassOf(typeof(BaseBridgeMessage))).ToList();
	var thisType = itemTypes.FirstOrDefault(x => x.Name == requestedTypeName);
	if (thisType == null) return;

	// Need to cast to its appropriate type
	var item = Newtonsoft.Json.JsonConvert.DeserializeObject(jsonEncodedResult, thisType);

	// Just pop the result in our (shared) cache, and the listeners above will extract it
	var cache = new CacheManager<ICachable>();
	cache.Save((ICachable)item);
}

The bulk of this method is actually to do with parsing a JSON result back to our message type – in fact, the only part we’re interested in now is the last two lines where we store the result back in our cache, ready for our aforementioned loop to pick it up in the Listen() method.
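
The missing piece – how the question reaches the browser in the first place – is the SignalR push mentioned earlier.  The real plumbing is in the download; as a rough, hypothetical sketch of the idea (BridgeHub, connectionId and the prompt fields are illustrative only):

// Hypothetical sketch only - when Listen() is called, the bridge first pushes the
// prompt down to the user's browser over SignalR before it starts polling:
var hub = GlobalHost.ConnectionManager.GetHubContext<BridgeHub>();
hub.Clients.Client(connectionId).prompt(msg.MessageID, prompt.Title, prompt.Question);
// The browser displays the prompt, then calls the hub's Answer() method (above)
// with the JSON-encoded result, which lands in the cache for Listen() to pick up.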

Proof of concept complete – what’s next for version two?

This code has established a semantic framework for how I want the code to work.  Specifically:

  • code is written in an apparently synchronous manner
  • code is written with no regard for who or what it is interacting with – no fiddling around with matching client-side API calls to server side methods
  • semantically, our code appears to pause in execution when a question is asked of the bridge.  Once the bridge responds, execution continues in a linear fashion.  Compare this with current methods, where the entire http request is sent again

However, there is one glaring problem which prevents this code from being used in real-world scenarios and that is to do with how we store and retain state…

Storing and resuming state

As we have seen, not only does our Listen() method hog a new thread, it essentially holds the entire call stack in memory while it waits. Although I haven’t tested it, I’m certain this will not scale well.

Unfortunately though, in order for my code to be semantically synchronous, I need some mechanism for both storing state before our call to Listen(), and restoring it after we get a response.

And by ‘state’, I include a variety of things:

  • the call stack
  • incoming parameters to our current function (and those of the calling function, and those of that calling function….and so on, up the call stack)
  • if we are building a web application, I need access to things like the Session and Cookies variables
  • any configuration values (e.g. stored in web.config or app.config files)

Replacing the way I store and resume state will be the focus of version two of this architecture.  At this point, I don’t know how I’m going to do it, but things worth pursuing are:

  • serializing and storing the call stack in the database
  • can .Net’s reflection classes allow me to resume the call stack from a specific point?
  • perhaps .Net is simply not capable of doing something like this, and something like Node.js would work better
  • on that note, would a scripting language (such as Node) allow me to literally build and execute code on the fly?
  • and I should actually do some benchmarks to see just how poorly the existing version one code performs – perhaps it would actually suffice for a moderately-sized web application after all?

In addition, I’d like to extend the implementation to provide a different bridge.  We currently use SignalR to communicate with a web-based client, but because our bridge is abstracted via an interface, we could easily write a new Bridge which (for example) sent the user an email with a link to confirm/decline an action (think about that for a second – the code would still just be waiting on the Listen() method, even though behind the scenes our bridge is firing emails around the world, potentially over the course of a few hours or even days. That is pretty cool).

Download the example code

I’ve built a working prototype of this version one architecture, which you can download here.  Because I intend to extend the architecture, there is a lot of extra stuff, with the result that it is a lot more complicated than a typical tutorial.  In particular:

  • I’m using Microsoft Unity for my dependency injection
  • the client-side javascript is architected using RequireJS
  • The CSS is written using LessCss and a Task Runner plugin to run Gulp.  Of course,  you shouldn’t need to change the CSS, but just so you know…
  • the project was written in VS2013
  • I included all the Nuget binaries, so it is quite a big download but hopefully it means you can get up and running quicker

Okay, good luck and have fun.  If you have any ideas on how I can progress this, best to get me on Twitter – @benliebert


Practical tips and tricks for using ES6 in today’s web applications

Javascript’s ES6 upgrade has been a long time coming and brings a lot of really great features.  Since the spec was finalized late last year, we’ve jumped in with both feet and begun using ES6 in considerable parts of our new web applications.

The internet already has plenty of tutorials covering specific ES6 features and how to use them, but most of them are shown in isolation and there aren’t enough examples of how to actually pick up those code snippets and get them working in a real web application.

So, this blog post is a random collection of tips, tricks & styles that we’ve begun using in real-world web applications.  In no particular order….

 

Running ES6 and ES5 side-by-side

Practically speaking, you are very unlikely to be able to develop an application wholly in ES6.  If nothing else, most of your plugins are still in ES5.  The biggest sticking point is how modules are injected into the system.  Typically, this is done using an AMD dependency injection system like RequireJS, but ES6 has a snazzy new module syntax which is designed to replace this.

The practical challenge is getting these two technologies to run side by side.  Or, from our new ES6 point of view – how do we import a non-ES6 module? The answer is SystemJS.

Although it’s technically not, we like to think of SystemJS as a kind of wrapper for RequireJS and ES6 modules.  It basically inspects the structure of each file and:

  • if it contains ES6 module syntax, it assumes it is an ES6 module
  • otherwise, it loads it as if it were a RequireJS module

One gotcha that took us about an hour to work out was that you need to kick it all off using a call to System.import.  In retrospect, this is pretty obvious – I mean, something has to tell your browser how to start loading external dependencies.  So basically, it all ties together like this:

logon.es6

logon.es6 is your snazzy new logon module, written entirely in ES6.  It uses the new module and class syntax and your girlfriend thinks it’s really good.  Note that it has two dependencies – the first is lib, which you wrote yourself in ES5 and the second is a third-party plugin (in this case jQuery) which may or may not have any AMD- or module-syntax embedded.

import $ from 'jquery';
import lib from 'lib';

export default class Logon {
    /*
    AttemptLogOn
    Grabs the username and password provided and calls a service to authenticate
    */
    AttemptLogOn(){
        let params = {
            username: $('#TxtUserName').val(),
            password: $('#TxtPassword').val()
        };

        lib.CallService('/secure/login', params);
    }
}

lib.js
lib.js is that old library file which you’ve built up over the last few years.  It is written in ES5 and full of helpful utility methods which you really can’t be bothered upgrading to ES6.  It uses RequireJS syntax to declare its dependencies at the top of the file.
define(['anotherdependency'], function(anotherDep) {

    /*
    CallService
    Makes an ajax request to the given URL
    */
    var CallService = function(relativeUrl, params){
        // Details omitted…
    };

    // Return public methods
    return {
        CallService: CallService
    };
});

Logon.html

Your regular HTML page.  It uses System.import to get the ball rolling:

<script src="systemjs.js"></script>
<input type="text" id="TxtUserName"/>

<input type="password" id="TxtPassword"/>

<script>

System.import('logon.es6').then(function(l){

    var log = new l();

    log.AttemptLogOn();

});

</script>

 

One more thing – you’ll probably need a System.config call to tell it how your files are organized etc.:

System.config({
    baseURL: '/scripts/',
    paths: {
        'Views/*': '/views/*.js'
    },

    map: {
        'jquery': 'lib/jquery',
        'jqueryui': 'lib/jquery-ui.min'
    },

    meta: {
        'lib/jquery-ui.min': {
            deps: ['lib/jquery']
        }
    }
});

 

Writing a re-usable base class using ES6 inheritance

Along with modules, this is the feature we most appreciate in ES6 – a tidy way to create a re-usable base class for our controllers.  See, here is how our projects are typically laid out:

  • The application is divided into heaps and heaps of modules, like ‘logon’, ‘view chart’, ‘render menu’ etc etc
  • Each of these modules has a Javascript controller class which is bound to a view (using RivetsJS – we have an in-depth tutorial here)
  • There is a lot of common code which is repeated in our controllers, such as:
    • an Init() method to kick things off
    • a property called ‘model’ where we store the data for our view/controller
    • a reference to the view, in case we have to do something nasty like use jQuery to animate an element

Using ES6, we’ve now been able to create a tidy little BaseController class which encapsulates this once:

basecontroller.es6

Our Base Controller class is written exactly like a regular ES6 class…

import $ from 'jquery';
import lib from 'lib';

export default class BaseController{
    constructor(m) {
        this.IsLoading = false;

        this.model = m;

        // Our models all contain a reference to our view ID
        if (this.model !== null) this.view = $('#' + this.model.UniqueID);
        else this.view = $(); // Create an empty jQuery object so that we don't have to keep doing null checks if there is no model

        this.Init();
    }

    /*
    Init
    This method can be overridden by subclasses
    */
    Init(){

    }

    /*
    UpdateModel
    Helper method to replace our model (for example, if we update a database record)
    */
    UpdateModel(newModel){
        $.extend(this.model, newModel);
    }

    /*
    CallJSON
    Calls our web service to get the given JSON
    */
    CallJSON(url, params){
        var p = new Promise((success, fail) => {
            // Adjust model state
            this.view.addClass('loading');
            this.IsLoading = true;

            // Call our web service
            lib.CallService(url, params).then(result => {
                // Adjust this model state
                this.view.removeClass('loading');
                this.IsLoading = false;

                // Pass back to the specific handler/caller
                success(result);
            }, err => {
                console.log("Error", err);
                this.view.removeClass('loading');
                this.IsLoading = false;
                fail();
            });
        });

        return p;
    }
}

logon.es6
Our re-written logon file may now look like this:
import BaseController from 'basecontroller';
export default class LogonControl extends BaseController {
    AttemptSignIn(){
        alert('Your current PersonID is ' + this.model.PersonID);
        let params = {
            username: this.view.find('#TxtUserName').val(),
            password: this.view.find('#TxtPassword').val()
        };

        // Use base method to make a JSON call
        this.CallJSON('signin', params).then((newModel) => {
            this.UpdateModel(newModel);
            alert('Your new PersonID is ' + this.model.PersonID);
        });
    }
}

And of course, you kick it all off by instantiating logon.es6 with a model in the constructor (note that the constructor is in the BaseController class, and accepts one parameter):
System.import('logon').then((l) => {
    let model = {
        PersonID: 0
    };

    var log = new l(model); // No need to call Init() ourselves - the BaseController constructor does that
});

 

Using traceur to make your ES6 code backwards compatible

Currently, most browsers only support a tiny subset of the ES6 standards and we doubt that we could rely wholly on them coming up to speed for at least another 12 months, likely much longer.  So it becomes necessary to run a transpiler which converts your ES6 code back into ES5.

As far as we can tell, the most complete transpiler out there is Google’s Traceur.

There are two ways of doing this, the lazy way and the proper way.  The lazy way is to just include a script file in the <head/> of your application, but we’re not even going to show a demo of that here because it is short-sighted.  (If you’re asking, the thing we hate most is not that the transpiling is done in real-time in the browser, but the fact that you have to decorate your <script/> tags with type="module")

The better way to do this is to set up a task which runs traceur against your ES6 files at compile time and then point your browser at the generated ES5 files.  For this, we’ve used the new Gulp integration supported by Visual Studio 2015.  This is not the place to give a Gulp/VS tutorial, but once you’ve got your head around it, here is how we at Blackball do our transpiling:

gulpfile.js

/// <binding ProjectOpened='watchjs'/>
var gulp = require('gulp');
var watch = require('gulp-watch');
var traceur = require('gulp-traceur');
var rename = require("gulp-rename");

// Helpful error handler to display error messages in our Gulp console window
function onError(error) {
    console.log("ERROR: " + error.toString());
    this.emit('end');
}

// Watch runs the traceur task automatically each time our es6 files are edited
gulp.task('watchjs', function () {
    gulp.watch('**/*.es6', ['compiletraceur']);
});

// Our ES6 files are indicated with a .es6 file extension, so we just grab them all, then save the transpiled .js file alongside each
gulp.task('compiletraceur', function () {

    return gulp.src('scripts/**/*.es6')
        .pipe(traceur())
        .on('error', onError)
        .pipe(rename(function (path) {
            path.extname = ".js";
        }))
        .pipe(gulp.dest('scripts/'));
});

 

Read it slowly and it kind of makes sense.

One problem with traceur is that your browser is running code which you didn’t write, so error logs do not match your ES6 files one-to-one.  This is surprisingly okay though – even though the structure of your files differs, the lines that cause errors are generally pretty similar, and practically speaking we haven’t had any problems understanding which part of our ES6 code an error pertains to.

 

Summing up

Whether you like it or not, you’re all going to be coding in ES6 within the next five years, so you had better get on board.  Due to the lack of support (tooling, blogs/forums, browsers…), it is not really practical to use it everywhere today; however, if you like to play with new toys then hopefully this article will save you a few hours…


Best practice front-end architecture using Microsoft ASP.Net MVC and Rivets.js

May 28, 2014

A few years ago I wrote an article about best-practice architecture for web applications built in Microsoft.Net.  This was focused entirely on the back-end and I mentioned at the end that I would do a front-end article one day.  So, here we go…

First of all, let’s get some basic requirements down:

  • your business logic should be separated from your presentation logic
  • your business logic should be unit testable – and this means abstracting as much as possible so you can mock it later
  • your application should be as lightweight as possible – but more importantly, the application must not ‘bloat’ with superfluous or rarely-used features as it grows bigger

In addition, since I wrote the last article, the way we build modern web applications has made a massive shift to a client-side focus, with much of my work written in javascript these days.  The trouble with javascript is that it doesn’t naturally enforce rules on the developer.  If you are developing by yourself, you may be able to get away with this because you understand your own way of working.  But if you’re working in a multi-team environment, this isn’t good enough and you need to enforce your own rules using conventions which other developers must follow.  This article shows the conventions which I currently use to keep things organized and understandable.

Simple huh?  Let’s get into it.  If you’d like to follow along, you can download the sample project here (and don’t forget to run the included .sql file to create your database structure).

Separating your business logic from your presentation logic (a.k.a. Separating your javascript from your HTML)

For me, the reason for doing this primarily comes down to unit testing – you can’t be dealing with HTML manipulation when you are trying to test the SavePerson() method of your javascript file. 

The typical way to go about this is via two-way data binding, and for years the common way of doing this has been to use third-party tools like Knockout.js.  Personally, I detest Knockout – they’ve done amazing work (including older browser support), but you have to completely rewrite your javascript models in order to make it work – which means:

  • you and other developers must become proficient ‘knockout developers’ in order to maintain the application
  • you become massively tied-in to the knockout framework

For these reasons, I had never built a proper data-bound front-end framework into any of my applications.  At least, not until rivets.js came along.

Rivets.js

This was a huge game changer for me.  It’s not as big or popular as the older frameworks such as Knockout, but it has one massive advantage – you can develop your javascript files with absolutely no knowledge of (or reference to) the fact that they are data-bound to rivets.  In fact, your javascript files have no idea that they are data-bound at all!  That is perfect – just the way it is supposed to be.  To clarify, here is an example of a file that displays a list of people:

 
var PeopleList = function (model) {
    var
    Init = function () {
        console.log("People", model);
    },
    ViewPerson = function (ev, data) {
        alert('You have clicked ' + data.person.DisplayName);
    },
    AddPerson = function () {
        var params = {
            firstName: 'Person ' + (model.People.length + 1)
        };

        // Call our MVC controller method
        lib.Data.CallJSON('home/createperson', params, function (newPerson) {

            // Add to our model - the view will update automatically
            model.People.push(newPerson);
        });

        return false;
    };

    Init();
    return {
        model: model,
        AddPerson: AddPerson,
        ViewPerson: ViewPerson
    };
};
 

Beautiful huh?  Imagine unit-testing that bad-boy – piece of cake!

So, with rivets.js you grab this javascript file and you ‘bind’ it to a block of HTML, and suddenly, as the user interacts with the HTML (like clicking an ‘Add person’ button), your javascript file will handle the events and react accordingly (like creating a new person).  For reference, here is my associated HTML view:

<div id="MyPeopleList" class="home">
<h2>People list</h2>

<table>
<tr data-each-person="model.People" data-on-click="ViewPerson">
<td data-html="person.DisplayName"></td>
</tr>
</table>

<p>
<a data-on-click="AddPerson">Create a new person</a>
</p>
</div>

<script> 
var viewID = 'MyPeopleList';

var view = document.getElementById(viewID);

rivets.bind(view, new PeopleList(model));

</script>

See the data-* attributes?  That’s rivets.js.  I’m not going to go into how the binding works – check out the rivets.js documentation for that.

 

Automatically wiring up your views to your controllers

In the HTML sample above, you can see a little script tag at the bottom which pulls in my PeopleList javascript and applies it to our HTML.  I build a very modular type of architecture, so I end up with hundreds of these files and quite honestly I get sick of retyping the same thing again and again.  So, this is a good chance to introduce the first of my ‘conventions’ which I apply using the MVC framework – let’s jump to our C# code.  Specifically, the OnResultExecuted() method which gets called after each of my MVC views is rendered (BTW, if you’re not familiar with Microsoft MVC then you’ll probably need to brush up on another blog before proceeding):

 
protected override void OnResultExecuted(ResultExecutedContext filterContext)
{
    var viewFolder = this.GetViewFolderName(filterContext.Result);
    var viewFile = this.GetViewFileName(filterContext.Result);
    var modelJSON = "";

    // Cast depending on result type
    if (filterContext.Result is ViewResult)
    {
        var view = (ViewResult)filterContext.Result;
        if (view.Model is BaseModel) modelJSON = view.Model.ToJSON();
    }
    else if (filterContext.Result is PartialViewResult)
    {
        var view = (PartialViewResult)filterContext.Result;
        if (view.Model is BaseModel) modelJSON = view.Model.ToJSON();
    }

    // Render our javascript tag which automatically brings in the file based on the view name
    if (!string.IsNullOrWhiteSpace(viewFolder) && !string.IsNullOrWhiteSpace(modelJSON))
    {
        var js = @"
<script>
    require(['lib', 'controllers/" + viewFolder + @"/" + viewFile + @"'], function(lib, ctrl) {
        lib.BindView(" + modelJSON + @", ctrl);
    });
</script>";

        // Write script
        filterContext.HttpContext.Response.Write(js);
    }

    base.OnResultExecuted(filterContext);
}
 

Don’t worry about all the custom function calls – you’ll find them in the example project download – the key points are:

  • find the physical path of the view that we are rendering (e.g. /home/welcome.cshtml)
  • use this path to determine which javascript file we have associated to it (e.g. /scripts/controllers/home/welcome.js)
  • automatically render the <script/> tag at the end of our view – exactly the same as we manually typed it into the HTML example I pasted above

So, this handy method does a few things:

  • it saves me typing
  • it forces me and other developers in the team to store the javascript files in a consistent and predictable location.  So, if I’m working in Visual Studio on the Welcome.cshtml view, I know immediately where I can find its javascript just by looking at the file name.
  • it provides a clean way for me to do my server-to-client model serialization, which deserves its own section…

Serializing your C# MVC model into your client side (javascript) model

Note the modelJSON variable you can see above.  Because I’m loading my javascript controller from server-side code, I have access to the MVC ViewModel and I am able to serialize it directly into my page.  This is something which few online examples of javascript data-binding frameworks show you – they always start with some model which is hard-coded into your javascript, which is completely impractical in real-life.

In practical terms, this has the other advantage that my server-side MVC models have precisely the same structure as the model I am dealing with in javascript.  This makes it easy for me to understand how my model is formatted when I’m working in javascript.
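
(The ToJSON() call in the filter above is another helper from the example project – conceptually just a one-line extension method.  A minimal sketch, assuming Json.NET:)

using Newtonsoft.Json;

public static class ModelExtensions
{
    // Sketch of the ToJSON() extension used in OnResultExecuted()
    public static string ToJSON(this object model)
    {
        return JsonConvert.SerializeObject(model);
    }
}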

BONUS: It would be amazing if I could somehow get some kind of javascript intellisense to work on my models by parsing in the C# structure of my MVC models.   I could also use the same mechanism to do compile-time checking of my javascript code.  If anybody can think of a way to do this, please let me know.

Managing your CSS files in a large web application

Another problem I find with large projects is managing CSS files.  One typically has a common.css file and perhaps an admin.css file to try to split things out as required.  But as your project grows, you add more and more fluff to these files and they end up very, very large.  Then, to reduce the initial site load, you think you’ll pull out some targeted CSS classes into a separate file and reference it just in the files you need.  Except then you start forgetting which files need it – and besides, with MVC applications these days you tend to pull in partial views all the time – they have no idea what page they are on and what CSS files they currently have access to.

So, here’s what I’ve finally come up with, and it’s very similar to how I pull in the javascript files above, only this time it uses MVC’s OnResultExecuting() method:

protected override void OnResultExecuting(ResultExecutingContext filterContext)
{
    if (filterContext.Result is ViewResult || filterContext.Result is PartialViewResult)
    {
        // Render a reference to our CSS file
        var viewPath = this.GetViewFolderName(filterContext.Result);
        if (!string.IsNullOrWhiteSpace(viewPath)) this.RenderPartialCssClass(viewPath);
    }

    base.OnResultExecuting(filterContext);
}

Again, ignore the missing method references, they’re in the example project.  Here’s what it’s doing:

  • find the path of the current view (e.g. /home/welcome.cshtml)
  • generate a CSS path name based on this path name (e.g. /content/xhome.min.css)
  • Render a <link/> tag pointing to the CSS file

The <link/> tag is written to the HTML stream before the HTML of the view is rendered, and voila – every time you pull down a view (either with a direct request, or a partial render using AJAX), the first thing it will do is tell the browser what CSS file it needs, and the browser will pop off and download that too.
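
(RenderPartialCssClass is another helper from the example project.  Conceptually – and this is a hypothetical sketch, not the real code – it just writes the <link/> tag straight to the response:)

// Hypothetical sketch of RenderPartialCssClass - the real version is in the example project
private void RenderPartialCssClass(string viewFolder)
{
    // e.g. the 'home' view folder maps to /content/xhome.min.css
    var href = string.Format("/content/x{0}.min.css", viewFolder.ToLower());
    HttpContext.Response.Write(string.Format("<link rel=\"stylesheet\" href=\"{0}\" />", href));
}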

Of course, this has its drawbacks:

  • the first time a view in each folder is downloaded, the browser has to make another (blocking) request to get the CSS file.  Remember though that the CSS file is cached so subsequent requests have no additional overhead.
  • it splits the CSS into files which aren’t named or organized according to their use.  For example, another way to organize files would be by function (admin, public etc).  Personally, this doesn’t bother me so I’m happy.

In addition to these files, I also have the usual ‘base’ CSS file, but now it should include only general styles like tables, headings etc.  When you have functionality which is specific to a module (HTML view) then you add it to the relevant file in its own section.  In the example project, you can see that xhome.min.css contains the styles which the welcome.cshtml view uses.

BONUS: the above works perfectly for partial views which are injected into an existing DOM.  However, if you use it for a regular view, then you’ll notice the <link/> tag is inserted at the very top of the HTML document – i.e. above the opening <html/> tag.  Not proper, although I haven’t found any practical problems in any browsers.  Still, if anybody finds an efficient way to render it in the <head/> tag instead, please let me know – I haven’t bothered looking into it.

Dependency injection using require.js

The final excellent technology I use in my front-end architecture is require.js.  This serves two purposes:

  • it allows my various javascript files to load their dependencies on demand, instead of pre-loading all my files on the initial page load.  This is absolutely essential if your large application is to perform well.
  • it allows us to mock out files for unit testing, simply by replacing the require-main.js file

I’m not going to go in to how require.js works – you can find out all about it on their website.  But again, it is a must-have as far as I’m concerned.

Unit testing

Unfortunately, the example project doesn’t have examples of unit testing.  I usually use Jasmine and Karma to run my tests, and I mock out the require-main.js file to stub out things like my jQuery dependencies.

Conclusion

So, that’s it – my current take on ‘best practice’ MVC architecture for real-world, large-scale web applications.  The next steps I’m looking forward to solving over the next year are:

  • compile time javascript errors
  • introduction of ES6 javascript
  • implementing the proposed Object.observe() pattern in ES7 (which could technically allow me to replace rivets.js but why re-invent the wheel?)
  • Visual Studio ‘knowing’ how javascript and cshtml files are linked, so you can do something akin to ‘view source’ to jump between them
  • a framework to bind my javascript files to non-HTML views, perhaps Android axml files?  Perhaps I’m dreaming…

Seeya


Automatic code documentation based on your C# comments

April 24, 2014

I’ve written a few APIs over the years and the worst part is writing the documentation:

  • it takes extra time
  • it must be updated every time you make changes to your code
  • it is a duplication of work because I already document my code inline anyway

So, here’s a handy utility I wrote which will use reflection to whip through your code and draw out the comments.

Generating XML Documentation

Before proceeding, you must setup your project to generate an XML file of your code comments.  This is done via the Properties –> Build menu in your project (presumably it’s a web project).  See this screenshot below:

 

[Screenshot: the ‘XML documentation file’ setting on the project’s Properties –> Build page]

 

This generates an xml file in the bin directory every time you build.  The file contains all your code comments, ready for parsing by my helper utility.  The format is something like this:

[Screenshot: a sample of the generated XML documentation file]
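
If you haven’t seen one before, it follows the standard C# XML-documentation format – roughly like this (the member names here are just illustrative):

<doc>
    <members>
        <member name="M:Web.Controllers.PeopleController.SavePerson(System.String)">
            <summary>Saves a person</summary>
            <param name="email">The person's email address</param>
        </member>
    </members>
</doc>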

 

So, with this in place, here is the utility class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Web;
using System.Xml;
using System.Xml.Linq;
using Common;


namespace Web.Code
{
/// <summary>
/// Creates documentation for our various API methods
/// </summary>
public class ApiDocumentationGenerator
{

#region Sub classes

public class Parameter
{
public string Name { get; set; }
public string Type { get; set; }
public string Description { get; set; }
}

public class Method
{
public string Name { get; set; }
public List<Parameter> Parameters = new List<Parameter>();
public string Summary { get; set; }
}

#endregion

#region Properties

public List<Method> Methods = new List<Method>();
private string PathToXmlDocumentation = "";

private XDocument _XmlDocumentation = null;
private XDocument XmlDocumentation
{
get
{
if (_XmlDocumentation == null)
{
_XmlDocumentation = XDocument.Load(this.PathToXmlDocumentation);
}
return _XmlDocumentation;
}
}

#endregion

/// <summary>
/// Constructor
/// </summary>
/// <param name="pathToXmlDocumentationFile"></param>
public ApiDocumentationGenerator(string pathToXmlDocumentationFile)
{
this.PathToXmlDocumentation = pathToXmlDocumentationFile;
}

/// <summary>
/// Generates our classes
/// </summary>
public void Generate()
{
// BaseController is a class I wrote which all my MVC *Controller classes inherit from. If you don't have a base class, you can just use
// whatever parent class you know your own API methods sit within. And if there is no parent class, then just get every type in the assembly
var ass = System.Reflection.Assembly.GetAssembly(typeof(BaseController));

// Get each class
foreach (var controller in ass.GetTypes())
{
if (controller.IsSubclassOf(typeof(BaseController))) ExtractMethods(controller);
}
}

/// <summary>
/// Finds the methods in this controller
/// </summary>
/// <param name="controller"></param>
private void ExtractMethods(Type controller)
{
foreach (var method in controller.GetMethods())
{
// My API methods are decorated with a custom attribute, ApiMethodAttribute, so only show those ones
var attrs = System.Attribute.GetCustomAttributes(method);

// Check our attributes show we have an API method
var isAPIMethod = false;
foreach (System.Attribute attr in attrs)
{
if (attr is ApiMethodAttribute)
{
isAPIMethod = true;
break;
}
}

// Break if not an API method
if (!isAPIMethod) continue;

// Parse out properties
var meth = new Method();
meth.Name = controller.Name.Replace("Controller", "") + "/" + method.Name;
this.Methods.Add(meth);

// Quick hack to detect the XML segment we want - I know that all my API methods are in *Controller classes, so I can just restrict to this
var memberName = "Controller." + method.Name;

// Get the methods from our documentation
var docInfo = (
from m in this.XmlDocumentation.Descendants("members").Descendants("member")
where m.Attribute("name").Value.Contains(memberName)
select new {
Summary = m.Descendants("summary").First().Value,
Params = m.Descendants("param")
}
).FirstOrDefault();

// Now copy the XML back into my method/parameter classes
if (docInfo != null)
{
meth.Summary = docInfo.Summary;

// Add parameters
foreach (var param in docInfo.Params)
{
var p = new Parameter();
meth.Parameters.Add(p);
p.Name = param.Attribute("name").Value;
p.Description = param.Value;
}
}
}
}
}
}

 

Note that this won’t compile for you because it references a custom attribute, ApiMethodAttribute, and my base class, BaseController.  However, you could delete the logic around these and the documentation should still generate.
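
(If you do want to keep the attribute-based filtering, a simple marker attribute is all that’s needed – something like this, though your own version might carry extra metadata:)

using System;

// Minimal marker attribute - decorate each public API action with [ApiMethod]
[AttributeUsage(AttributeTargets.Method)]
public class ApiMethodAttribute : Attribute
{
}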

Now it’s just a matter of calling the class.  I use mine in an MVC ActionResult:

/// <summary>
/// Uses reflection to document our API methods
/// </summary>
/// <returns></returns>
public ActionResult APIDocumentation()
{
    var pathToDocs = HttpContext.Server.MapPath("~/bin/APIDocumentation.xml");
    var model = new ApiDocumentationGenerator(pathToDocs);
    model.Generate();
    return View("admin/apidocumentation", model);
}

And for clarity, I’ll include my View, so you can see how it’s used to render the results to the user:

@model Web.Code.ApiDocumentationGenerator
@{
ViewBag.Title = "API Documentation";
}

<h2>API Documentation</h2>

@foreach (var method in Model.Methods.OrderBy(x => x.Name))
{
<h3>@method.Name</h3>
<p><i>@method.Summary</i></p>
if (method.Parameters.Any())
{
<ul>
@foreach (var param in method.Parameters)
{
<li><strong>@param.Name </strong>@param.Description</li>
}
</ul>
}
}

 

Hope that helps.

Cheesebaron HorizontalScrollView with MvvmCross 3 (Hot Tuna)

February 14, 2014

Many thanks to Cheesebaron and Stuart for their amazing contributions to the Xamarin platform. Cheesebaron made a scrollable horizontal list view at https://github.com/Cheesebaron/Cheesebaron.HorizontalListView back in early 2012. For those interested, I have ported the Cheesebaron HorizontalListView to the latest version of MvvmCross (currently v3).

Hope that helps.


MVC Output Caching using custom FilterAttribute

August 29, 2013

 

As with ASP.Net Forms, MVC offers some out-of-the-box caching with its OutputCacheAttribute; however, as with classic ASP.Net, one quickly realizes its limitations when building complex systems.  In particular, it’s very difficult, and oftentimes impossible, to flush/clear the cache based on various events that happen within your application.

For example, consider a main menu which has an ‘Admin’ button for appropriately authorized users.  When your administrator initially views the page, the system will cache the HTML, including the Admin link.  If you later revoke this privilege, the site will continue serving the cached link even though the user is technically no longer authorized to access this part of the site.

Not good.

So, with a little to-ing and fro-ing, I’ve finalized my own FilterAttribute which does this for you.  The advantage of writing your own is that you can pass in whatever parameters you like, as well as have direct access to the current HttpContext, which in turn means you can check user-specific values, access the database – whatever you need to do.

How it works

The attribute essentially consists of just a couple of methods, implementing the IResultFilter and IActionFilter interfaces:

  • OnActionExecuting.  This method fires before your Action even begins.  By checking for a cache value here, we can abort the process before any long-running code in your Action method or View rendering executes
  • OnResultExecuting.  This method fires just before HTML is rendered to our output stream.  It is here that we inject cached content (if it exists).  Otherwise, we capture the output for next time

The code

I’ve commented the code below so you can follow more-or-less what is going on.  I won’t go into too much detail, but needless to say, if you copy/paste this straight into your work, it won’t compile due to the namespace references.  I’m also using Microsoft Unity for dependency injection, so don’t be confused by ICurrentUser etc.

Finally, I’ve got a custom cache class, whose source code I haven’t included – just switch out my lines to access your own cache instead.
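
For reference, the pieces of that cache class which the attribute relies on look roughly like this – a hypothetical sketch, not my actual implementation:

// Hypothetical sketch of the cache types the attribute uses
public class CachableString : ICachable
{
    public string CacheKey { get; set; }
    public string Value { get; set; }

    // Tags let related entries be flushed later by event (e.g. everything for a given person)
    public void AddTag(CacheTags tag, int? id) { /* ... */ }
}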

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Web;
using System.Web.Mvc;
using BlackBall.Common;
using BlackBall.Common.Localisation;
using BlackBall.Contracts.Cache;
using BlackBall.Contracts.Enums;
using BlackBall.Contracts.Exporting;
using BlackBall.Contracts.Localisation;
using BlackBall.Contracts.Security;
using BlackBall.Common.Extensions;
using BlackBall.Logic.Cache;


namespace BlackBall.MVC.Code.Mvc.Attributes
{
public class ResultOutputCachingAttribute : FilterAttribute, IResultFilter, IActionFilter
{

#region Properties & Constructors

private string ThisRequestOutput = "";
private bool VaryByUser = true;

private ICurrentUser _CurrentUser = null;
private ICurrentUser CurrentUser
{
get
{
if (_CurrentUser == null) _CurrentUser = Dependency.Resolve<ICurrentUser>();
return _CurrentUser;
}
}

public ResultOutputCachingAttribute(bool varyByUser = true)
{
this.VaryByUser = varyByUser;
}

private string _CacheKey = null;
private string CacheKey
{
get { return _CacheKey; }
set
{
_CacheKey = value;
}
}

#endregion

/// <summary>
/// Queries the context and writes the HTML depending on which type of result we have (View, PartialView etc)
/// </summary>
/// <param name="filterContext"></param>
/// <returns></returns>
private void CacheResult(ResultExecutingContext filterContext)
{
using (var sw = new StringWriter())
{
if (filterContext.Result is PartialViewResult)
{
var partialView = (PartialViewResult)filterContext.Result;
var viewResult = ViewEngines.Engines.FindPartialView(filterContext.Controller.ControllerContext, partialView.ViewName);
var viewContext = new ViewContext(filterContext.Controller.ControllerContext, viewResult.View, filterContext.Controller.ViewData, filterContext.Controller.TempData, sw);
viewResult.View.Render(viewContext, sw);
}else if (filterContext.Result is ViewResult)
{
var partialView = (ViewResult)filterContext.Result;
var viewResult = ViewEngines.Engines.FindView(filterContext.Controller.ControllerContext, partialView.ViewName, partialView.MasterName);
var viewContext = new ViewContext(filterContext.Controller.ControllerContext, viewResult.View, filterContext.Controller.ViewData, filterContext.Controller.TempData, sw);
viewResult.View.Render(viewContext, sw);
}
var html = sw.GetStringBuilder().ToString();

// Add data to cache for next time
if (!string.IsNullOrWhiteSpace(html))
{
var cache = new CacheManager<CachableString>();
var cachedObject = new CachableString() { CacheKey = CreateKey(filterContext), Value = html };
cachedObject.AddTag(CacheTags.Project, CurrentUser.CurrentProjectID);
if (this.VaryByUser) cachedObject.AddTag(CacheTags.Person, this.CurrentUser.PersonID);
cache.Save(cachedObject);
}
}
}


/// <summary>
/// The result is beginning to execute
/// </summary>
/// <param name="filterContext"></param>
public void OnResultExecuting(ResultExecutingContext filterContext)
{
var cacheKey = CreateKey(filterContext);

if (!string.IsNullOrWhiteSpace(this.ThisRequestOutput))
{
filterContext.HttpContext.Response.Write("<!-- Cache start " + cacheKey + " -->");
filterContext.HttpContext.Response.Write(this.ThisRequestOutput);
filterContext.HttpContext.Response.Write("<!-- Cache end " + cacheKey + " -->");
return;
}

// Intercept the response and cache it
CacheResult(filterContext);
}

/// <summary>
/// Action executing
/// </summary>
/// <param name="filterContext"></param>
public void OnActionExecuting(ActionExecutingContext filterContext)
{
// Break if no setting
if (!Configuration.Current.UseOutputCaching) return;

// Our function returns nothing because the HTML is not calculated yet - that is done in another Filter
Func<string, CachableString> func = (ck) => new CachableString() { CacheKey = ck };

// This is the earliest entry point into the action, so we check the cache before any code runs
var cache = new CacheManager<CachableString>();
var cacheKey = new CachableString() { CacheKey = CreateKey(filterContext) };
var cachedObject = cache.Load(cacheKey, func);
this.ThisRequestOutput = cachedObject.Value;

// Cancel processing by setting result to some non-null value. Refer http://andrewlocatelliwoodcock.com/2011/12/15/canceling-the-actionexecutingcontext-in-the-onactionexecuting-actionfilter/
if (!string.IsNullOrWhiteSpace(this.ThisRequestOutput))
{
filterContext.Result = new ContentResult();
}
}

public void OnActionExecuted(ActionExecutedContext filterContext)
{

}

public void OnResultExecuted(ResultExecutedContext filterContext)
{

}

/// <summary>
/// Creates a unique key for this context
/// </summary>
/// <param name="filterContext"></param>
/// <returns></returns>
private string CreateKey(ControllerContext filterContext)
{

// Append general info about the state of the system
var cacheKey = new StringBuilder();
cacheKey.Append(Configuration.Current.AssemblyVersion + "_");
if (this.VaryByUser) cacheKey.Append(this.CurrentUser.PersonID.GetValueOrDefault(0) + "_");

// Append the controller name
cacheKey.Append(filterContext.Controller.GetType().FullName + "_");
if (filterContext.RouteData.Values.ContainsKey("action"))
{
cacheKey.Append(filterContext.RouteData.Values["action"].ToString() + "_");
}

// Add each parameter (if available)
foreach (var param in filterContext.RouteData.Values)
{
cacheKey.Append((param.Key ?? "") + "-" + (param.Value == null ? "null" : param.Value.ToString()) + "_");
}

return cacheKey.ToString();
}
}
}
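
Using it is then just a matter of decorating an action – for example (BuildMenuModel() here is just an illustrative stand-in for your own code):

[ResultOutputCaching(varyByUser: true)]
public ActionResult MainMenu()
{
    // Only built once per user (and per assembly version) - after that,
    // OnActionExecuting() serves the HTML straight from the cache
    return PartialView("menu/mainmenu", BuildMenuModel());
}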

Alright, hope that helps – there’s nothing like HTML caching to make you feel like the best website builder in the world!
