Handling and storing dates in a globally distributed application

If you're building a cloud or web-based application that serves people around the world, then sooner or later you are going to have to deal with multiple timezones.

Unlike other 'base' types in computer programming, such as strings, numbers or booleans, dates are peculiar in that their value can change depending on where or when you are looking at them.  That is a pretty crazy wild card to deal with.

This article will describe the architecture your application should adopt in order to handle dates correctly across multiple timezones.  In particular, we'll use the following technologies:

  • SQL Server database
  • .Net application for your business logic (API)
  • Javascript (web browser) for the client interface

Let's break the article down into the different parts of the stack...

SQL Server

Consider this screenshot from SQL Server.  The two values below are actually showing the same instant.  The first is in the timezone where SQL Server is installed (New Zealand) and the second is in what we call the UTC timezone, which I like to think of as England.  As you can see, they are precisely 12 hours apart because NZ is on the other side of the world.  It is currently 8:19 in the morning here, but if I were to phone the Queen she would say it was 8:19 in the evening.  And yet we are both in the now.  Weird.

The screenshot above also highlights a serious issue with how SQL Server stores dates.  Note that there is absolutely no indicator in either date as to what timezone we are in.

[Quick disclaimer - SQL Server has actually had a better way of storing timezones against dates since 2008 (the datetimeoffset type), but the point remains that plenty of data storage mechanisms do not record timezones]

Consider now that immediately after these dates were stored, your application queries the first column (the New Zealand date).  It would pull the date out and say "hey! it's 8:19 in the morning".  This is correct if your application is running in New Zealand.  But if it was running in eastern Australia (which is only ten hours ahead of UTC), it would be two hours out - because 8:19am in New Zealand is actually only 6:19am there (and Australia spans several timezones, so the error would even differ across Australia!).

At this point - congratulations - you have done the worst thing in software development: you have irrevocably lost data.

Okay, you might say, no problem.  I will update my application to account for the timezone in which the date was stored (GMT+12).  Then, knowing my own timezone (e.g. eastern Australia, GMT+10) I can simply subtract the difference to get the two-hour offset and use that instead.
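To make that fixed-offset arithmetic concrete, here is a minimal javascript sketch of the idea (the function name and sample values are illustrative, not from any real codebase):

```javascript
// Illustrative sketch of the fixed-offset approach: shift a stored wall-clock
// value by the difference between the writer's and the reader's UTC offsets.
function adjustStoredDate(storedMs, storedOffsetMinutes, readerOffsetMinutes) {
  return new Date(storedMs - (storedOffsetMinutes - readerOffsetMinutes) * 60000);
}

// 8:19am stored by a New Zealand server (UTC+12 = 720 minutes),
// read by an application ten hours ahead of UTC (600 minutes)
var stored = Date.UTC(2016, 6, 15, 8, 19, 0); // treat the raw value as wall-clock time
var adjusted = adjustStoredDate(stored, 720, 600);
console.log(adjusted.getUTCHours() + ":" + adjusted.getUTCMinutes()); // prints "6:19"
```

The hard-coded 720 is exactly the fragile part, as the next paragraphs show.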

Now, your app will work fine... until you decide to host your server in another country - say, Australia.  If you still subtract two hours from the database value, you'll be two hours out again.  Instead you need to record when you switched servers, apply the offset for all dates stored before the switch, and a zero offset for dates stored since.

Which is great... until you have two servers in different timezones.  At this point, the source of your dates is intermingled, so you really need to start recording the timezone alongside each date - a pattern I see a bit.

All well and good.  Until Daylight Saving Time kicks in.  Different countries implement daylight saving differently, at different times of the year.  For all intents and purposes you should consider it random and subject to change without notice.  But that doesn't change the fact that (for example, in New Zealand) 'now' is 13 hours ahead of UTC in January, but only 12 hours ahead in July.  The New Zealand changeovers are in late September and early April.
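You can see that shifting offset directly from javascript's Intl API - a sketch assuming an ICU-enabled runtime with "shortOffset" support, where "Pacific/Auckland" is New Zealand's IANA zone name:

```javascript
// Report a timezone's UTC offset at a given instant using Intl's "shortOffset" name
function utcOffsetAt(instant, timeZone) {
  var parts = new Intl.DateTimeFormat("en-US", {
    timeZone: timeZone,
    timeZoneName: "shortOffset"
  }).formatToParts(instant);
  return parts.find(function (p) { return p.type === "timeZoneName"; }).value;
}

console.log(utcOffsetAt(new Date("2016-01-15T00:00:00Z"), "Pacific/Auckland")); // GMT+13 (NZDT)
console.log(utcOffsetAt(new Date("2016-07-15T00:00:00Z"), "Pacific/Auckland")); // GMT+12 (NZST)
```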

So finally, instead of recording the timezone name, you record the offset: a second column alongside your dates holding the number of minutes by which the stored value is offset from UTC (720 minutes being 12 hours).
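If you do go the offset-column route, converting back to UTC is simple arithmetic.  A hypothetical sketch in javascript (names and values are illustrative):

```javascript
// Reconstruct the UTC instant from a stored wall-clock timestamp
// plus its offset column (in minutes).
function storedToUtc(storedMs, offsetMinutes) {
  return new Date(storedMs - offsetMinutes * 60000); // 720 minutes = 12 hours
}

// 8:19am wall-clock stored with an offset column of 720 (New Zealand, UTC+12)
var nzWallClock = Date.UTC(2016, 6, 15, 8, 19, 0);
console.log(storedToUtc(nzWallClock, 720).toISOString()); // "2016-07-14T20:19:00.000Z"
```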

I see the option above quite a bit too.  But if you're at this stage, you may as well adopt the entire point of this article, which is to store all your dates in UTC to begin with.  This obviates the second column entirely, but it moves the onus of the conversion onto the application that stores the date in the first place.
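On the producing side, "store UTC to begin with" can be as simple as serialising every date with javascript's toISOString(), which always renders UTC regardless of the machine's local timezone (the helper name is illustrative):

```javascript
// toISOString() always outputs UTC (note the trailing 'Z'), regardless of
// the local timezone - ideal for sending dates to your data store.
function serialiseForStorage(date) {
  return date.toISOString();
}

var birthDate = new Date(Date.UTC(1979, 1, 1)); // months are 0-based: 1 = February
console.log(serialiseForStorage(birthDate)); // "1979-02-01T00:00:00.000Z"
```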

So, let's move to the application.

.Net application

As I said, your application needs to save dates in UTC, so your first attempt might be to use the handy ToUniversalTime() function to convert your dates just before saving them:

public void SaveDate(DateTime now){
    var utc = now.ToUniversalTime();
    database.Save(utc);
}

Unfortunately, in most modern languages (including .Net) this doesn't always do what you expect - what if the value of 'now' is already in UTC?  ToUniversalTime() decides what to do based on the DateTime.Kind property, and a value whose Kind is Unspecified is assumed to be in local time - so a UTC value that arrives without its Kind set will be shifted a second time and silently corrupted.

That's manageable, because you can write helper functions which check the Kind before converting, but it's not pretty.

With this in mind, at Blackball Software we apply a blanket convention to our code - all dates in your business logic must be UTC.  This means that in the function above, we simply remove the conversion to UTC because we know that the incoming 'now' variable is already in UTC format.  

Crucially important here is that the boundaries of your application - the parts where data comes in and out - must only receive and send dates in UTC.  Consider now a typical REST/JSON API endpoint:

public void SaveBirthDate(int personID, DateTime birthDate){
  var dataManager = new DataManager();
  
  // The data manager expects all dates as UTC, so we just pass straight through
  dataManager.SaveBirthDate(personID, birthDate);
}

This endpoint exposes a DateTime parameter (called birthDate), but note something peculiar - the DateTime type does not itself enforce UTC.  In fact, there is no type in .Net called "UTC Date" - there is only DateTime.

We now need to get a little more technical about the internal workings of the .Net DateTime type.  In particular, its DateTime.Kind property, which can be one of three values:

  • Local - the date is assumed to be in the local time (of the server where the application is running)
  • UTC - the date is in UTC
  • Unspecified - no format was specified

So, now consider a block of javascript calling this endpoint:

var data = {
  personID: 100,
  birthDate: "1979-02-01 00:00:00"
}
SendToServer(data);

What exactly is our .Net endpoint going to interpret this date as?  Local, UTC or Unspecified?  Well, no timezone information is provided in the string, so .Net will interpret it as Unspecified.  But here's the crucial point:

In .Net, an Unspecified DateTime will ultimately manifest in Local time.

This becomes an issue because unlike your .Net business logic application, you can't enforce-by-convention that all developers pass a UTC date to your API endpoint.  The endpoint is most likely used by not only your own front-end (where you could enforce by convention if you had to), but by third party developers that you have little or no communication with.  This means it becomes your responsibility to parse and convert the dates entering your application via the endpoints.  
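The "assume UTC at the boundary" rule can be sketched in javascript too (the function name and regex are illustrative, not from any particular library):

```javascript
// If an incoming date string carries no timezone suffix, treat it as UTC
// rather than letting the runtime interpret it as local time.
function parseAsUtc(s) {
  var hasZone = /(Z|[+-]\d{2}:?\d{2})$/.test(s);
  return new Date(hasZone ? s : s.replace(" ", "T") + "Z");
}

console.log(parseAsUtc("1979-02-01 00:00:00").toISOString()); // "1979-02-01T00:00:00.000Z"
```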

Thankfully, both ASP.Net MVC and the newer WebAPI allow you to inject your own logic into the pipeline.  In the case of WebAPI, here is the heart of a custom HttpParameterBinding that Blackball uses to intercept and convert all incoming DateTime values:

// ...code omitted for brevity
if (Descriptor.ParameterType == typeof(DateTime?)) {
    DateTime val;
    if (!string.IsNullOrWhiteSpace(stringValue)
        && DateTime.TryParse(stringValue, CultureInfo.CurrentCulture, DateTimeStyles.AssumeUniversal, out val)) {
        value = val.ToUniversalTime();
    }
}

Cool huh?  All logic in our back-end can now rest assured that any incoming dates are in UTC.  The full code of an HttpParameterBinding is a bit involved - too much for the scope of this article - but the salient point above is the use of DateTimeStyles.AssumeUniversal.
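The same boundary treatment can be sketched on the javascript side with a JSON.parse reviver that applies the AssumeUniversal idea to a whole payload (the field names and regexes here are purely illustrative):

```javascript
// Convert every ISO-looking date string in a parsed payload to a Date,
// assuming UTC whenever no timezone suffix is present.
var dateLike = /^\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}/;

function utcReviver(key, value) {
  if (typeof value === "string" && dateLike.test(value)) {
    var hasZone = /(Z|[+-]\d{2}:?\d{2})$/.test(value);
    return new Date(hasZone ? value : value.replace(" ", "T") + "Z");
  }
  return value;
}

var body = JSON.parse('{"personID":100,"birthDate":"1979-02-01 00:00:00"}', utcReviver);
console.log(body.birthDate.toISOString()); // "1979-02-01T00:00:00.000Z"
```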

The front-end

This finally brings us to the front end, which is unique in this scenario: it is the only tier that displays dates to (or collects dates from) humans, which is really the main purpose of your application.  Your human doesn't want to see dates in UTC - most of them probably don't even know what UTC is.  But because your application is rendered in the human's own timezone, it is usually fairly easy to convert on the fly just before rendering to the screen.  In javascript, this would be something akin to:

var person = server.GetPerson();
var utcDate = new Date(person.BirthDate); // parse the ISO string sent by the server
console.log("UTC date", utcDate.toISOString());
// getTimezoneOffset() returns (UTC - local) in minutes, e.g. -720 in New Zealand (UTC+12)
var offsetMinutes = utcDate.getTimezoneOffset();
var localDate = new Date(utcDate.getTime() - offsetMinutes * 60000);
console.log("Local date", localDate);

If the user was viewing this page in New Zealand, they would see a time two hours "ahead" of what a user in eastern Australia would see.
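To see that two-hour difference concretely, the same UTC instant can be rendered for both zones using Intl's timeZone option - a sketch assuming an ICU-enabled runtime, with "Pacific/Auckland" and "Australia/Brisbane" as the IANA zone names:

```javascript
// One absolute instant, two local renderings
var instant = new Date("2016-07-15T20:19:00Z");
var nz = instant.toLocaleTimeString("en-NZ", { timeZone: "Pacific/Auckland", hour12: false });
var au = instant.toLocaleTimeString("en-AU", { timeZone: "Australia/Brisbane", hour12: false });
console.log(nz); // e.g. 08:19:00 (UTC+12 in July)
console.log(au); // e.g. 06:19:00 (UTC+10, no DST)
```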

To conclude

So, here is really all you need to know in order to future-proof your application.  Do this from the start:

  • only ever store UTC dates in your data store
  • enforce (preferably by code, otherwise by convention) that only UTC dates are dealt with at the business logic level - including insertion into and retrieval from the data store
  • where you lose control over the developers using your application (i.e. you cannot enforce by convention), put a mechanism at your application boundaries - such as the HttpParameterBinding above - that enforces the rule for you

Easy huh?
