TSB - A Classic Migration Failure

I haven’t blogged for a while but after the TSB fiasco various folks have asked my opinion, so I thought I’d put hand to keyboard with my observations.

Firstly let me say I have skin in the game here.  Iergo – my consulting company – banks with TSB and, while I'll try not to let this colour my view, there are aspects of our experience that I have not seen in the media reports but which give additional insights into the bigger picture.

This sorry tale has its origins in the banking crisis of 2008/9.  Lloyds TSB was rescued by the British Government, but part of the deal was that it had to spin off a retail bank consisting of 600 plus branches.  This became the new TSB.  It was subsequently purchased by Spanish banking group Sabadell.  At this point it was still running its business on Lloyds’ banking platform.

Come 2017 TSB declared themselves ready to part company with Lloyds and commenced a project to migrate its customers from Lloyds' to Sabadell's computer systems.  And after a few false starts the cutover was set for the 20th April this year.

The first clue that things were not going to go well came even before the 20th.  We were immediately suspicious when we were told that the bank would be offline for 36 hours over the weekend.  Why, in the 21st century, does this online bank think it necessary or acceptable to be closed to its customers for that length of time?

Unless of course they are working in ignorance of the options available to them. 

This ill omen was compounded by the delivery of a new phone app that was to replace the One Use Password generator (OUP) we had used with Lloyds.  It was never properly explained whether this new app was to be used for logging on or for authorising transactions.  I'm assuming that, as a replacement for the Lloyds software and keypad, this was their authentication app.

The whole thing seemed amateurish.  We took the precaution of moving a few tens of thousands of pounds to other accounts on the assumption that things were not going to go well.

Then their servers went dark a good two hours before the official shutdown.  So already we could see timing and organisational issues.

It was when all the customers tried to log back on that the disaster became public.

As customers, we can affirm that we did not satisfactorily log onto our online account until the following Friday, seven days after the shutdown, and the platform was shaky for at least a week after that.  Sometimes available, sometimes not.  Sometimes accepting our login credentials, sometimes not.  Sometimes telling us we had exceeded our permitted number of login attempts and needed our account reset via their help desk, yet always allowing us extra attempts if we rebooted the PC we were using.

Their authentication app was withdrawn, without notice, two days after cutover.  They had hacked together an online solution using passwords.  This has still not been fixed and is the weakest of all the banks we deal with.  What are the security implications?  Was there enough time to penetration test this solution?  Testing clearly not being their strong point.  What else, that we can't see, has also been hacked around with?  Obviously there are higher priority defects being fixed, because revamped authentication software or hardware has not yet appeared.

At one stage during this cycle we were directed to a help page with a phone number on it.  We tried the number and found ourselves connected to a Lloyds Bank help desk!

When we finally got full access we seem to have gained an account.  It doesn’t have any money in it and no one seems all that bothered.

As we were sitting around scratching our heads and wondering, someone asked about the IBAN.  For those of you not in the know, the IBAN is an international bank account number that you give out to clients abroad so they can pay you by credit transfer.  Ours was still Lloyds branded.  Shouldn’t it be TSB?

I hot-footed it round to our nearest branch and waited in the surprisingly good-tempered queue (so very British).  Got to the front and it was clear the staff had not been briefed on any change to the IBAN, or on whether cash in flight to the old one would get redirected to the new.  They were trying to be helpful but were as much in the dark as everyone else and had to use the public helpline to find answers to questions they should have had in their cutover training pack.

We are busy watching the accounts as outstanding payments come in.  So far everything seems to be accounted for.

From my end user perspective what does this tell me?

1. They broke the cardinal rule – never go into a cutover without a fall-back plan.  There are no exceptions to this rule and no excuses.  Hoping to muddle through is not a business continuity plan.

2. Why did they risk a big bang when they could have phased delivery?  Maybe simple domestic customers first, more complex customers down the timeline.  This would have allowed them to test their capacity and fix issues before they escalated.  Were they unaware of the technical options available to them?

3. Testing was clearly inadequate.

4. Testing, however, only gets you so far.  You can test defects out, but you can't test quality in, and this software was not, and is still not, fit for purpose, as the authentication app shows.

5. The lack of customer and staff preparation for the IBAN switch tells me there were failings on the business readiness side as well as the technical side.

6. The failure to even update existing pages with the correct phone number tells its own tale of attention to detail.

Conclusion

The 36-hour planned outage, the risky big bang approach, the lack of a fall-back plan, inadequate testing, poor business readiness, sloppy work across the website and the failure of the authentication app all speak to me of a lack of adequate skills and experience in large, complex data migration projects.  Even if the software "worked".

These are all standard failings on migration projects that go wrong.  It’s quite common for the software itself to work.  Here the boast was that the system balanced to the penny.  And well it may have, but we were locked out for a week and customers could access accounts that did not belong to them.  Success in technical terms but a failure in business terms is still a failure.  Probably a bigger failure than the other way around.

More worrying, as a customer, is that the scale of the undertaking clearly exceeded the skills of a senior management blinded by hubris.  The persistent denial that there was a problem, even when under scrutiny by a select committee of the House of Commons, demonstrates a more serious long-term issue for TSB. 

But not for iergo.  We are moving banks.

The TSB debacle is the latest in a long line of spectacular failures.  However, these public catastrophes are but the tip of the iceberg of failure.  Those of us on the inside know of scores of other projects managed just as badly but not exposed to the glare of publicity the unfortunate TSB crew have endured.

Trying to deliver a complex data migration on the cheap can turn out expensive.  Get experienced, trained support on board.  Work to an externally validated methodology.  And make sure you have a fall back plan.

Happy Migrations,

Johny Morris

Data Migration - Reconciliation Testing

This blog looks at data migration reconciliation (aka data migration audit) testing.  This is an essential form of testing that our clients should, and usually do, insist upon as part of user acceptance testing.  The question they want answering is "Did all the stuff I wanted moving from source to target get moved?"  And a jolly reasonable question it is too.

Read More

Testing Data Migration Step 2

In the last blog we looked at two major confusions that bedevil data migration testing – confusing building for quality with testing for defects and confusing data design issues with data migration faults. 

I have set aside a session at the next Data Migration Matters event (DMM8, 2nd June, London) exclusively to discuss the issue of Testing Data Migrations.  Check out the timetable at:

http://www.datamigrationmatters.co.uk/

I hope to see as many of you there as possible.  Let’s see if we can’t get some consensus around testing.

Now back to the blog.

This blog was meant to look at reconciliation.  However I have had a number of questions regarding Data Design so, reacting in an agile fashion, I'm going to take a moment to look at this.  For ease of understanding I will set this problem up as a plain vanilla ERP implementation using a waterfall approach, with a supplier or systems integrator delivering a Commercial Off The Shelf (COTS) package into a manufacturing or service delivery company client.

The supplier is responsible for understanding the operations of the COTS package on the one hand and for analysing the operations of the client on the other and then for bringing the two together in a perfect handshake.  Part of this fit has to be the data design.  Or does it?  Understanding the confusion surrounding this goes some way to understanding the confusion around testing I believe.

So what is “Data Design”?

Well what we are talking about here is the way the data structures built into a COTS package are used to support the data structures the client needs to carry out their business. 

Let’s take something simple – Lead to Cash (L2C).  Commercial organisations exist to sell stuff, at a profit.  So they all need a process of getting a lead and turning it into a sale and then delivering the product and collecting the cash.  This L2C process is fundamental to capitalism.

Without dwelling on all the detail, this journey from lead to cash involves the establishment of certain master or framework data items.  We have customers (both actual and potential), we have products both physical and logical (as in my case where I sell data migration consultancy).   If we concentrate on the products, all ERP packages will have some kind of Product-Master structure (please go with me on this one – I'm not going to dwell on the difference between a parts master and a product master).  Therefore all implementations of the lead to cash process will need a Product-Master establishing that is suitable for their physical and/or logical products.

But these Masters will not be the same in a house builder and a medical supplies manufacturer.

But who gets to design it?

So who is responsible for designing the use of the COTS Product-Master for our phantom client?  We need both the domain knowledge of the client and the COTS package knowledge of the supplier but it is always best practice to have one lead.  Are we target led or source led?

Well to my mind it must be the supplier.  They are the ones who know the target best and know how the Product-Master is related to accounts, product lifecycle management, supplier management etc. etc. within the target application.  The art of fitting the client’s business requirement to the structures on offer in the COTS Package requires knowledge of the COTS Package and the implication of design choices and this expertise is what the supplier brings to the party. 

Only the client has knowledge of the client product set that is the other half of the mix.  It is my contention that it should be the responsibility of the supplier to have the analysis skills that will reveal this knowledge in such a way that they can then perform the data design and deliver an operating platform that will enhance the activities of the client.  We need to bear in mind that most organisations do not replace the systems that support their businesses very often.  Therefore there is no reason why they should have developed the skills to analyse and articulate their business processes in a format ideally suited to a third party’s implementation requirements.  The supplier on the other hand has regular need for these items to be developed so it makes sense for them to have cultivated the skills needed to unearth these processes and the ancillary data.  They should have made these skills part of their own product set.

This is rarely contested but the level of detail in the data design often is.  It’s all very well placing the onus for data design of Product-Master on the shoulders of the supplier, but what about the detailed definition of items like the format of part numbers or the breakdown of a product set into discrete deliverables? 

There are two reasons for still saying that detailed Data Design belongs with the supplier.  Firstly, although the majority of the Data Design will stay as is – if you are a car manufacturer before the migration you will be one at the end, so you will have models and versions of models and so on – there are some data items that are only present because that is the way the client has traditionally done things.  It is for the supplier, if they are to add value, to challenge these vestigial elements and replace them with ones that will take advantage of the new system's capabilities.  The risk otherwise is that the new system becomes a poor reproduction of the old, which undermines the value proposition of making a change in the first place.

The second reason is linked to the first.  In modern highly integrated COTS packages the setting of some values has impacts across the application.  It takes knowledgeable experts to understand the implication of something as apparently simple as the parts numbering system and its relationship to the part breakdown structure.  This means we need critically engaged target system experts to facilitate the result.

So to optimise our investment and to avoid cock-ups it should be the COTS package experts who are driving the bus.

However all too often I find, on arrival at an in-flight project, that the subtle difference between creating a general structure for your Product-Master and taking this down to the level of detail that can actually run the business is a point of misunderstanding between the supplier and the client.  Often then, because it is the Data Migration team who best know the legacy data and the issue seems to be one to do with data, the task of providing this metadata falls on them.  This is wrong.

As an aside, if the main supplier will not commit to delivering the detailed data design but is waiting for the client to produce it, and given that the client may not have the skills to produce it, then the client should look to sub-contract that element out.

Whoever is performing the data design, it still remains the case that this is not a data migration task.  If you are moving house, the removal men expect to be told where the bedrooms, lounge and kitchen in your new residence are.  (OK, so they may be able to make a reasonable guess as to which room is the kitchen, but you get my point.)  They are not expecting to decide for you which cupboards you wish to use and certainly not to have to architect the dwelling.  This should be the same with us lifters and shifters of data.  Tell us where to stick the stuff and we will organise things so that is where it gets stuck.

If all of this has been a little dense then please allow me to recapitulate:

  • Data Migrations are always part of bigger programmes
  • There is usually an incumbent supplier or systems integrator contracted to implement the new Commercial Off The Shelf package
  • Data Design and Data Migration are not the same
  • Data Design is the alignment of the client metadata with the data structures available on the target system
  • Best practice is for the supplier to be responsible for detailed Data Design as well as detailed process design
  • If the supplier is not going to perform this task then, unless you have the skills in-house, get assistance from a supplier who can provide this service
  • In any case Data Design should be seen as a separate task from data migration and planned into the project from the beginning
  • Data migration is the finding, extraction, transformation and loading of data of the appropriate quality in the right place at the right time.  It is not responsible for defining where and what the right place is (a minimal sketch of this division follows below)
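
To make that division concrete, here is a minimal, hypothetical sketch in Python.  All the names are invented for illustration and are not taken from any real project or toolset; the mapping specification stands in for the Data Design deliverable, and the migration routine merely executes it.

```python
# A minimal, hypothetical sketch of the division of labour described above.
# The mapping specification is an output of Data Design (the supplier's
# responsibility); migrate() only executes it - it never decides what or
# where the "right place" is.  All names are invented for illustration.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class FieldMapping:
    source_field: str                                # where the data comes from (legacy)
    target_field: str                                # where Data Design says it must land
    transform: Callable[[Any], Any] = lambda v: v    # the agreed transformation rule

# Illustrative Product-Master mapping - in reality this comes from the design team
PRODUCT_MASTER_MAPPING = [
    FieldMapping("LEGACY_PART_NO", "PRODUCT_CODE", transform=str.strip),
    FieldMapping("DESC",           "PRODUCT_NAME", transform=str.title),
]

def migrate(source_rows: list[dict]) -> list[dict]:
    """Find, extract, transform and load - but only as the mapping dictates."""
    return [
        {m.target_field: m.transform(row[m.source_field]) for m in PRODUCT_MASTER_MAPPING}
        for row in source_rows
    ]
```

The point of the sketch is only that the mapping is a design artefact handed to the migration team, not something the migration team invents for themselves.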

Back to reconciliation

First of all then, what is data migration reconciliation (aka data migration audit)?  Well, put simply, it answers the business side question "How will I know that everything I had in my old system, that I wanted moving to the new system, made it across?"  It does not completely answer the kindred question "...and how do I know it landed in the right place?" because that involves both data migration issues (did we move the data according to the specification?) and target design issues (did the programme perform the data design correctly to match business processes, so that the locations to move the data into were available with appropriate behaviours?).
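
To give a flavour of what such a check can look like in practice, here is a minimal, hypothetical Python sketch (my own invented names, not any particular toolset).  It compares in-scope record counts and an order-independent checksum between source and target, and it assumes the fields being reconciled carry the same names on both sides, which in real life usually means mapping the target back into a comparable shape first.

```python
# A minimal sketch of one common reconciliation technique: compare the count
# of in-scope source records with the target, plus a simple order-independent
# checksum over the fields the business cares about.  Names are invented.

import hashlib

def checksum(rows, fields):
    """Order-independent digest of the chosen fields across all rows."""
    row_digests = sorted(
        hashlib.sha256("|".join(str(r[f]) for f in fields).encode()).hexdigest()
        for r in rows
    )
    return hashlib.sha256("".join(row_digests).encode()).hexdigest()

def reconcile(source_rows, target_rows, fields, in_scope=lambda r: True):
    """Did everything I wanted moved actually get moved?"""
    src = [r for r in source_rows if in_scope(r)]   # only what was meant to move
    return {
        "source_count": len(src),
        "target_count": len(target_rows),
        "counts_match": len(src) == len(target_rows),
        "content_match": checksum(src, fields) == checksum(target_rows, fields),
    }
```

In a real migration the counts and checksums would typically be cut by entity, by branch or by account type so that any discrepancy can be localised, but the principle is the same.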

Next time out we really will look at these two linked questions.

Testing Data Migrations Step 1

With the next Data Migration Matters event imminent, I intend to run this discussion up to the date of the event, and we have set aside a session in DMM8 to discuss the vexed question of Data Migration Testing, for which these blogs are the precursors.  So join the discussion online and then come along to DMM8 to make yourself heard - literally.

If there is one topic that generates more online chat than any other in the Data Migration space, it is the one about Testing Data Migrations.  Check out the various forums and you will see what I mean.  I am going to argue that, at bottom, this is due to a confusion about what the Data Migration project is about and therefore how to test that it has been successful.

However before I go any further with this let me make it plain that I am not a test analyst.  I have the utmost respect for their craft and I do not want to invade their space.  So anything I say here is not intended to be a lecture to far more skilled hands than me in this area.  It relates specifically to the perceived issues of testing Data Migrations rather than testing in general.

It is also true that testing, just like every other aspect of IT it seems, has its own tribes.  And I certainly do not want to get involved in the internecine particulars of disputes on which, as I say, I am not really qualified to opine.  So if I talk about Test Scenarios or Test Scripts or Test Cases, please accept them as the words of an informed bystander, not with the very particular meanings that one school or another will ascribe to them.

This being the first blog of a series, I want to lay down some fundamentals.  Step 1, and the subject of this blog, is that you can't test quality into a product.  It does not matter if that product is a motor car, a fine meal or a data migration.

Design and build quality in.  Test defects out.

A lil' more testing and I'm sure I'll be green

This may sound like a distinction without a difference but think about it.  If you wanted to build a motor car that was green and economical, one that managed 100-plus miles per gallon (35 km to the litre for our metric friends), you would not start with a Cadillac Eldorado and try to test the MPG into it (if that is one of your quality exit criteria – the Eldorado is a perfect example of 1950s American flamboyant self-confidence and needs no improving).

Yet this is often precisely the puzzle we are trying to solve in our data migration testing.  We are looking at the issue of quality from the wrong end of the project timeline.

But why is this?  Well, in part modern procurement processes are causing an issue with data design.  I have written about this before in other contexts, but it is an issue that will continue to bedevil both procurers and suppliers of new enterprise apps until the purchasers amend their buying practices and suppliers react accordingly.

In brief, the move to fixed price contracts for new system delivery, and the premium on time to market and price, has meant that suppliers have been forced to move data design down the timeline.  Depending on whom you employ, and the predilections of the buyer, we typically have a cascade of Discovery -> High Level Design -> Low Level Design -> Build -> Test -> Deliver as our phases.  The Low Level Design phase is a misnomer.  In the struggle to win business and keep timelines and costs down, the supplier is constrained to push the detailed work on what particular fields, including custom fields, will be used for, and therefore the precise details of their go-live start-up values, into the Build phase.  Of course we at the Data Migration end of things need these exact details to perform our data migration.  By the time the Build is complete we get a cascade of data requirements with user acceptance testing, bulk load testing etc. looming.

The temptation to tacitly assume that we can sort it all out in the testing is just plain wrong.  The “Throw the data at the target and see what sticks” approach to data migration is sadly making a reappearance in the hurly burly of modern implementations.

On less well ordered projects this cascade can also be incoherent and contradictory, and this is when the second confusion emerges...

Test the data migration not the data design

When time is tight and the detailed data design arrives way down the timeline, it is easy to confuse genuine data migration defects (wrongly selected data, badly transformed data, incomplete data sets etc.) with data design defects (different format requirements in different parts of the solution, incorrectly designed functionality etc.).  On a well run complex programme, roles and responsibilities for outcomes are understood in advance.  In a later blog I will show some techniques for managing this, but in essence data migration testing should be about what it says on the label – testing the data migration, not the quality of the solution.
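
As a hedged illustration of keeping those two apart, a migration-focused test might look something like the sketch below (invented Python, not any client's framework; `mapping` is assumed to be a list of objects carrying a source_field, a target_field and an agreed transform, as in the sketch further up this page).  Each check targets a migration defect; none of them passes judgement on whether the design itself was right.

```python
# Hypothetical migration-focused checks.  Each one tests that the migration
# honoured the agreed specification; none of them tests whether that
# specification (the data design) was correct in the first place.

def check_selection(source_rows, extracted_rows, in_scope):
    """Wrongly selected data: did we extract exactly the in-scope records?"""
    assert len(extracted_rows) == sum(1 for r in source_rows if in_scope(r))

def check_transformation(extracted_rows, loaded_rows, mapping):
    """Badly transformed data: does each target field obey the agreed rule?"""
    for src, tgt in zip(extracted_rows, loaded_rows):
        for m in mapping:
            assert tgt[m.target_field] == m.transform(src[m.source_field])

def check_completeness(loaded_rows, mandatory_fields):
    """Incomplete data sets: are the mandatory fields populated after load?"""
    for row in loaded_rows:
        assert all(row.get(f) not in (None, "") for f in mandatory_fields)
```

A failure in check_transformation is a migration defect; a field the design never provided a home for is not, and needs routing back to the design authority instead.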

This is also true of the gaps in the solution that we in the data migration team are often the first to be aware of.  Having analysed the source data we may see that a particular object – say a working at height risk assessment – contains ten fields in the source but only six in the target.  How this is best managed I will also cover later, but for now I make the statement that it is a design issue, not a data migration issue.

And one final point of distinction, if not confusion – I prefer to separate data migration testing from data migration reconciliation (or, as it is sometimes called, data migration auditing).

It may be that they answer the same question (did all the things I wanted moved from the source end up in the right place in the target) but because they typically require different techniques both in requirements analysis and in the migration build, it is less confusing if we separate them.

The next blog will be on reconciliation testing followed by end to end, user acceptance, functional, mock load and soak tests.  Finally, when we are agreed that we have the set of data migration test types we will look at how to manage the two confusions above.

I look forward to hearing from you, either over the normal media or in person at DMM8.

Data Migration Pirates and Brides

This week: news of a link-up between X88 and Experian, and a moan about pimping the search engines.

First let me boast at this point that the launch of this web site went excellently well.  It was well received, with a few minor quibbles.  A few folks said that the navigation menus were difficult to read in their semi-opaque state and this has been corrected.  Amazing what looks cool in the design studio but doesn't work so well in the real world.

There were also requests for a forum type page to be set up.  We are investigating the feasibility of this.  I'm for it personally; my only concern is that a lot of traffic (at the moment) is being directed via LinkedIn groups, and they have the same forum functionality.  Would I be causing myself more confusion by having parallel threads running in two places?  Personally I think it worth the risk but let's see, eh?


Onto the subject of today's diatribe.  Having put up this website, and therefore reviewed the Google search results for Practical Data Migration, I find that there are imposters (or at least impressionist wannabes) who are responding to the Practical Data Migration call without training, accreditation or compliance.  Caveat Emptor, as the Romans would have said (roughly translated: "Buyer Beware").  Genuine PDMv2 practitioners will be proudly bearing their PDMv2 Accreditation badges on their site, not masquerading behind a paid-for Google page placement.

Perhaps I should be flattered that PDM is something worth pretending to, I suppose.

I am refraining from naming names today but if these miscreants do not mend the error of their ways I may be forced to act. You have been warned.

Happier news: my friends at X88, responsible for the fabulous Pandora software, have finally come clean about their intentions and entered into nuptials with Experian.  For those in the data management fraternity here in the UK, you may be more familiar with the QAS brand than Experian, because QAS's data quality and enhancement tools are pretty well ubiquitous in the name and address space.  I certainly rarely go onto a site in the UK where they don't feature.

Of course if your work takes you closer to the sales process then you will know Experian for its credit rating services rather than the QAS brand - but it is the data quality aspects we will reflect on here.

As the two newly-weds bed in I have to congratulate the happy couple on what looks, to those who know them, like possibly a marriage made in heaven.  On the one hand the blushing young bride Pandora, in the way of youth everywhere, challenging accepted norms with youthful dynamism.  On the other, the more mature Experian, who already have a substantial presence in many industry verticals.

It seems that the offspring of this union are already out of the nursery and making their own way in the world.  A surprisingly short gestation period, one might say, but possibly a testament to the fecundity of the betrothed.  It propagates the benefit of the name and address awareness and data enhancement products in the Experian genome within the heterogeneous profiling and data quality abilities of Pandora, and builds on the pre-marital relationship they enjoyed in collaboration over the QAS/Experian Data Quality tool.  Perhaps not surprising then that Experian Pandora (as they have named their progeny, which may not be startlingly original but is at least explicable) emerged fully formed from the coming together of the two parties - perhaps more Aphrodite than Pandora, but who am I to extend this into a classicist discussion?

However, putting aside the marital humour for a moment and looking at this from a commercial and technical perspective, I can see this being a great success.  Experian is, as I said, ubiquitous and trusted at the highest levels within the finance departments of most FTSE 100 corporations.  Enterprises are betting their level of bad debt, and therefore profitability, on the basis of Experian's knowledge of the customer.  It is also one of the few credit rating agencies to come out of the 2008 debacle with an unimpaired reputation.  However, just as it is trusted, so it is a little stolid.  X88 on the other hand is the new kid on the block, bringing disruptive technology to market.  Combining the strengths of both of them - the trustworthiness of one brand and the brilliance of the other, the deep knowledge of data subjects (both people and other legal entities) of Experian and the technical innovation of Pandora - could create exactly the right cocktail for a fizzing success.  There is also the possibility of the contrary, with the bureaucracy of an established institution stifling the fleet-of-foot brilliance of the upstart, to the detriment of both.

However, this blog wishes the newly-weds well and will be keeping a watching brief.  Normally I would at this point give you the url to follow up this blog, but this romance has been of such a whirlwind nature that, checking qas.co.uk or experian.com or experian.co.uk or even X88.com, I search in vain for any reference to it.  But I expect the blushing bride and proud groom to make public their private joy at any moment.

Johny Morris

Data Migration - What if it all goes wrong

This blog explains why we always need a fall back (or even a fall forward) plan.  Just in case.  How good is yours?

First of all, welcome to this, the first blog of the new site for Practical Data Migration.  I'm keen to hear your feedback, either publicly via comments attached to this blog or, if you prefer to castigate us in private, via the Ask Johny feature, which lets you contact me more discreetly.

And bookmark this site - we'll be adding to it regularly.

I would also like to thank my friends at the BCS for hosting Johny's Data Migration blog all these years.  They've been a great support, but with the new website up and running it's time to say goodbye to the old and hello to the new.

Given that this is a new start I thought we should celebrate it by looking at the worst that can happen.  

After long months of preparation, the big day comes along.  Then disaster strikes.  Now this can be for no predictable reason.  Of course, if we have done our preparation correctly the data will be ready and the technical processes will have been tested.  The business processes will also have been briefed out.  Still, things can go unexpectedly wrong and then we need a fall back - one that has been thought out in advance and, if not necessarily tested (as the cautionary tale that follows shows, testing is not always possible), at least seen to be feasible.

Unfortunately it was one of my ex-clients (Network Rail) who presented us with the Christmas gift of the almost perfect example of how not to manage a situation where a transformation project has not run according to plan.

For those not familiar with the structure of the rail industry here in the UK, Network Rail is a not-for-profit company that owns the railways, but it neither maintains the infrastructure nor runs the trains.  There are a number of franchises, geographically and route based, let out to independent Train Operating Companies (or TOCs in the parlance of the trade).  The engineering is carried out by infrastructure companies (or Infra Tecs).  So although Network Rail can commission rail work it has no direct control over its delivery.  This is a situation familiar, I think, to most of us working in IT transformation delivery.  The client commissions but someone else delivers, often with a number of sub-contractors for specialist elements.

Mostly in the private sector these disasters occur behind closed doors and the public is none the wiser.  Just occasionally the whole mess becomes public.  Then a besieged CEO has to stand up in front of the cameras and deal with the fallout.  Please step up Network Rail's managing director Robin Gisby to explain on national television how overrunning engineering work left thousands of passengers without trains on the Saturday between Christmas and New Year.  His next public appearance may well be before a select committee of the Houses of Parliament where he will receive a further grilling.

Passenger concourse, King's Cross

The fault, it seems, was in the breakdown of some machinery during a multi-million pound maintenance activity outside London's King's Cross Station, followed by over-optimistic planning for the safety commissioning of the new system (and let's be honest, no one wants that skimped on).  For those not familiar with the weird topography of the major rail routes into London, there are three major stations within a mile of one another that serve as termini for all destinations north of London, plus one station a few miles away in the City.  Each station however serves local trains as well as long distance, and there is overlapping provision and re-use of the same track.  This is a bequest from the profit-hunting Victorian railway entrepreneurs who competed for routes, as opposed to most other nations' more planned approach to transport provision.

All a bit of a messy inheritance, but the effective closure of King's Cross and signal problems on the lines out of Euston, plus issues elsewhere on the system, created mayhem in that busy period between Christmas and New Year.

So to re-cap.  A complex legacy, a major transformation project, tight implementation deadlines, all delivered through a complex network of suppliers.  Sound familiar?

Finsbury Park station

The lock out

And then it starts to go wrong.  

Anyone who has been on one of our courses will know just how much we stress a fall back plan which puts you in a situation that allows business as usual with minimum customer impact.  When the changes you have made cannot be just rolled back and the old regime reinstated we may have to fall forward.  Railway engineering work tends to be in this category, but so do some IT system changes.  Ideally the fall back plan should be tested, but this is often not possible for technical, financial and programme reasons.  In fact fall back plans are rarely tested.  They do have to be tenable however.  Now I have no inside information on this one, but the Network Rail fall back plan seems to have either been made up on the fly or possibly on the back of an envelope.  Up the line from King's Cross is Finsbury Park.  This is a busy suburban station in its own right, with overground and underground lines intersecting.  But it only has five lines and six platforms.  King's Cross has 12 platforms.  Suburban trains are typically 3 carriages long, inter-city trains are typically 11.  So the plan to halt all the incoming trains at Finsbury Park and get customers to schlep up there with their suitcases, pushchairs, aged relatives, children, wheelchairs etc. was clearly flawed.  The result, as should have been anticipated, was that Finsbury Park was mobbed to the extent that incoming trains could not disgorge their passengers because of the crush, and the station had to be repeatedly closed for safety reasons.  When passengers did hit the platform the confusion was so great that getting on the right train was a lottery and this caused knock-on problems up the line.

The lessons here are obvious.  Firstly, never go into a change programme without a workable fall back strategy.  Secondly, you can delegate the task but you can't delegate responsibility.  Finally, if it does all go wrong, get out in front of the camera and apologise quickly.  Mea culpa, not excuses.

In the words of the Duke of Wellington - there but for the grace of God go I.  Things can always go wrong.  Most of our migrations are amenable to a graceful fall back to existing systems, but in the 24/7 world we now work in that may mean applying the updates that have backed up in anticipation of a go live to the legacy rather than to the target as planned.  Sometimes we can't fall back to the legacy.  Be prepared!

From next week I will be running a series of blogs on the fraught question of testing data migration projects.  This is a topic that is often raised and I'm going to throw my thoughts into the public domain and see if we can't get a debate going.  I look forward to hearing from you.

Johny Morris

@johnymorris

Big Data, Big Hats

This blog was first published in 2012 and is one of the most widely quoted of all my efforts. As you will see it was a lot of fun to write - I hope it is as much fun to read.  Some of the links probably no longer work but I don't think that detracts from its main message.

Is Big Data just another Marketing wheeze and does it suffer from the same semantic issues that have bedevilled other MI/BI/DW oversells of the past 20 years?

Read More

Data Migration – The Agile Experience

This blog was first published on the BCS website in February 2013.  In the nearly two years that have elapsed since it was written our thinking has developed and a white paper encapsulating this will be published shortly

Agile is definitely the development method in the ascendancy at the moment.  Seems wherever I go, or whichever clients I speak to, they are either doing Agile or wanting to be Agile...

Read More