This blog explains why we always need a fallback (or even a fall-forward) plan. Just in case. How good is yours?
First of all, welcome to this, the first blog on the new site for Practical Data Migration. I'm keen to hear your feedback, either publicly via comments attached to this blog or, if you prefer to castigate us in private, via the Ask Johny feature, which you can use to contact me more discreetly.
And bookmark this site - we'll be adding to it regularly.
I would also like to thank my friends at the BCS for hosting Johny's Data Migration blog all these years. They've been a great support, but with the new website up and running it's time to say goodbye to the old and hello to the new.
Given that this is a new start I thought we should celebrate it by looking at the worst that can happen.
After long months of preparation, the big day comes along. Then disaster strikes. Now this can be for no predictable reason. Of course, if we have done our preparation correctly, the data will be ready, the technical processes will have been tested and the business processes will have been briefed out. Still, things can go unexpectedly wrong, and then we need a fallback: one that has been thought out in advance and, if not necessarily tested (as the cautionary tale that follows shows, testing is not always possible), at least seen to be feasible.
Unfortunately, it was one of my ex-clients (Network Rail) who presented us with the Christmas gift of an almost perfect example of how not to manage a situation where a transformation project has not run according to plan.
For those not familiar with the structure of the rail industry here in the UK, Network Rail is a not-for-profit company that owns the railways, but it neither maintains the infrastructure nor runs the trains. There are a number of franchises, geographically and route based, let out to independent Train Operating Companies (or TOCs in the parlance of the trade). The engineering is carried out by infrastructure companies (or Infra Tecs). So although Network Rail can commission rail work, it has no direct control over its delivery. This is a situation familiar, I think, to most of us working in IT transformation delivery: the client commissions, but someone else delivers, often with a number of sub-contractors for specialist elements.
Mostly in the private sector these disasters occur behind closed doors and the public is none the wiser. Just occasionally the whole mess becomes public. Then a besieged CEO has to stand up in front of the cameras and deal with the fallout. Please step up Network Rail's managing director Robin Gisby to explain on national television how overrunning engineering work left thousands of passengers without trains on the Saturday between Christmas and New Year. His next public appearance may well be before a select committee of the Houses of Parliament, where he will receive a further grilling.
The fault, it seems, was the breakdown of some machinery during a multi-million-pound maintenance activity outside London's King's Cross Station, followed by over-optimistic planning for the safety commissioning of the new system (and let's be honest, no one wants that skimped on). For those not familiar with the weird topography of the major rail routes into London, there are three major stations within a mile of one another that serve as termini for all destinations north of London, plus one station a few miles away in the City. Each station, however, serves local trains as well as long distance, and there is overlapping provision and re-use of the same track. This is a bequest from the profit-hunting Victorian railway entrepreneurs who competed for routes, as opposed to most other nations' more planned approach to transport provision.
All a bit of a messy inheritance, but the effective closure of King's Cross and signal problems on the lines out of Euston, plus issues elsewhere on the system, created mayhem in that busy period between Christmas and New Year.
So to recap: a complex legacy, a major transformation project, tight implementation deadlines, all delivered through a complex network of suppliers. Sound familiar?
And then it starts to go wrong.
Anyone who has been on one of our courses will know just how much we stress a fallback plan that puts you in a position that allows business as usual with minimum customer impact. When the changes you have made cannot simply be rolled back and the old regime reinstated, we may have to fall forward. Railway engineering work tends to be in this category, but so do some IT system changes. Ideally the fallback plan should be tested, but this is often not possible for technical, financial and programme reasons. In fact, fallback plans are rarely tested. They do have to be tenable, however.

Now I have no inside information on this one, but the Network Rail fallback plan seems to have been made up either on the fly or possibly on the back of an envelope. Up the line from King's Cross is Finsbury Park. This is a busy suburban station in its own right, with overground and underground lines intersecting. But it has only five lines and six platforms; King's Cross has 12 platforms. Suburban trains are typically 3 carriages long; inter-city trains are typically 11. So the plan to halt all the incoming trains at Finsbury Park and get customers to schlep up there with their suitcases, pushchairs, aged relatives, children, wheelchairs etc. was clearly flawed.

The result, as should have been anticipated, was that Finsbury Park was mobbed to the extent that incoming trains could not disgorge their passengers because of the crush, and the station had to be repeatedly closed for safety reasons. When passengers did reach the platform, the confusion was so great that getting on the right train was a lottery, and this caused knock-on problems up the line.
The lessons here are obvious. Firstly, never go into a change programme without a workable fallback strategy. Secondly, you can delegate the task but you can't delegate the responsibility. Finally, if it does all go wrong, get out in front of the cameras and apologise quickly. Mea culpa, not excuses.
As the old saying has it: there but for the grace of God go I. Things can always go wrong. Most of our migrations are amenable to a graceful fallback to existing systems, but in the 24/7 world we now work in, that may mean applying the updates that were queued up in anticipation of a go-live to the legacy system rather than to the target as planned. Sometimes we can't fall back to the legacy at all. Be prepared!
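To make that last point concrete, here is a minimal sketch in Python of the queued-updates idea: changes arriving during the migration window are held in a queue, and at go-live they are replayed either to the target (success) or back to the legacy system (fallback). Every name here is illustrative, invented for this sketch; it is not any real migration tooling, and a real cutover would of course involve far more checks than a single validation call.

```python
# Toy sketch of a cutover with a fallback path. Updates captured during the
# migration window sit in a queue; if the go-live validation of the target
# fails, the same updates are replayed to the legacy system so business as
# usual can continue. All names are hypothetical.

class System:
    """Stand-in for a legacy or target system that can accept updates."""
    def __init__(self, healthy):
        self.healthy = healthy
        self.records = []

    def apply(self, update):
        self.records.append(update)


def replay_updates(queue, apply_fn):
    """Apply each queued update in arrival order; return the count applied."""
    applied = 0
    for update in queue:
        apply_fn(update)
        applied += 1
    return applied


def cutover(queue, target, legacy, validate):
    """Attempt go-live; on a failed validation, fall back to legacy."""
    if validate(target):
        replay_updates(queue, target.apply)
        return "live-on-target"
    # Fallback: the legacy system must absorb the updates captured during
    # the window, otherwise business as usual is lost.
    replay_updates(queue, legacy.apply)
    return "fallen-back-to-legacy"


queue = ["upd-1", "upd-2", "upd-3"]
target = System(healthy=False)   # simulate a failed go-live check
legacy = System(healthy=True)
result = cutover(queue, target, legacy, validate=lambda s: s.healthy)
print(result, len(legacy.records))   # fallen-back-to-legacy 3
```

The point of the sketch is simply that the fallback branch has to exist and be workable before the big day, because by the time `validate` fails it is far too late to start designing one.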
From next week I will be running a series of blogs on the fraught question of testing data migration projects. This is a topic that is often raised and I'm going to throw my thoughts into the public domain and see if we can't get a debate going. I look forward to hearing from you.