TSB - A Classic Migration Failure

I haven’t blogged for a while, but after the TSB fiasco various folks have asked my opinion, so I thought I’d put hand to keyboard with my observations.

Firstly, let me say I have skin in the game here.  Iergo – my consulting company – banks with TSB and, while I’ll try not to let this colour my view, there are aspects of our experience that I have not seen in the media reports but which give additional insight into the bigger picture.

This sorry tale has its origins in the banking crisis of 2008/9.  Lloyds TSB was rescued by the British Government, but part of the deal was that it had to spin off a retail bank consisting of more than 600 branches.  This became the new TSB.  It was subsequently purchased by the Spanish banking group Sabadell.  At this point it was still running its business on Lloyds’ banking platform.

Come 2017, TSB declared themselves ready to part company with Lloyds and commenced a project to migrate their customers from Lloyds’ to Sabadell’s computer systems.  And after a few false starts the cutover was set for 20th April this year.

The first clue that things were not going to go well came even before the 20th.  We were immediately suspicious when we were told that the bank would be offline for 36 hours over the weekend.  Why, in the 21st century, does this online bank think it necessary or acceptable to be closed to their customers for that length of time? 

Unless of course they are working in ignorance of the options available to them. 

This ill omen was compounded by the delivery of a new phone app that was to replace the One Use Password generator (OUP) we had used with Lloyds.  It was never properly explained whether this new app was to be used for logging on or for authorising transactions.  I’m assuming that, as a replacement for Lloyds’ software and keypad, this was their authentication app.

The whole thing seemed amateurish.  We took the precaution of moving a few tens of thousands of pounds to other accounts on the assumption that things were not going to go well.

Then their servers went dark a good two hours before the official shutdown.  So already we could see timing and organisational issues.

It was when all the customers tried to log back on that the disaster became public.

Speaking as a customer, I can affirm that we did not satisfactorily log onto our online account until the following Friday, seven days after the shutdown, and the platform was shaky for at least a week after that.  Sometimes available, sometimes not.  Sometimes accepting our login credentials, sometimes not.  Sometimes telling us we had exceeded our permitted number of login attempts and needed our account reset via their help desk, yet always allowing us extra attempts if we rebooted the PC we were using.

Their authentication app was withdrawn, without notice, two days after cutover.  They had hacked together an online solution using passwords.  This has still not been fixed and is the weakest authentication of any bank we deal with.  What are the security implications?  Was there enough time to penetration test this solution?  Testing clearly not being their strong point.  What else, that we can’t see, has also been hacked around with?  Obviously there are higher-priority defects being fixed, because revamped authentication software or hardware has not yet appeared.

At one stage during this cycle we were directed to a help page with a phone number on it.  We tried the number and found ourselves connected to a Lloyds Bank help desk!

When we finally got full access, we seemed to have gained an account.  It doesn’t have any money in it and no one seems all that bothered.

As we were sitting around scratching our heads and wondering, someone asked about the IBAN.  For those of you not in the know, an IBAN (International Bank Account Number) is the account number you give out to clients abroad so they can pay you by credit transfer.  Ours was still Lloyds branded.  Shouldn’t it be TSB?
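
For the technically curious, here is a minimal sketch (entirely my own illustration, not either bank’s code) of why the branding question matters: in a UK IBAN the bank’s identity is baked into the number itself as a four-letter bank code, sitting right after the check digits.

    # A minimal sketch (my own illustration, not either bank's code) of the
    # structure of a UK IBAN: 'GB' + 2 check digits + 4-letter bank code
    # + 6-digit sort code + 8-digit account number, 22 characters in all.
    # The bank code ('LOYD' identifies Lloyds, for instance) is why an
    # unchanged IBAN is more than a cosmetic branding issue.

    def parse_gb_iban(iban):
        """Split a GB IBAN into its parts after verifying the ISO 13616 check digits."""
        iban = iban.replace(" ", "").upper()
        if len(iban) != 22 or not iban.startswith("GB"):
            raise ValueError("not a GB IBAN")
        # Mod-97 check: move the first four characters to the end and map
        # letters to numbers (A=10 ... Z=35); a valid IBAN leaves remainder 1.
        rearranged = iban[4:] + iban[:4]
        numeric = int("".join(str(int(c, 36)) for c in rearranged))
        if numeric % 97 != 1:
            raise ValueError("check digits do not validate")
        return {
            "bank_code": iban[4:8],        # identifies the bank itself
            "sort_code": iban[8:14],
            "account_number": iban[14:22],
        }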

I hotfooted it round to our nearest branch and waited in the surprisingly good-tempered queue (so very British).  Got to the front and it was clear the staff had not been briefed on any change to the IBAN, or on whether cash in flight on the old one would get redirected to the new.  They were trying to be helpful but were as much in the dark as everyone else, and had to use the public helpline to find answers to questions they should have had in their cutover training pack.

We are busy watching the accounts as outstanding payments come in.  So far everything seems to be accounted for.

From my end-user perspective, what does this tell me?

1. They broke the cardinal rule – never go into a cutover without a fall-back plan.  There are no exceptions to this rule and no excuses.  Hoping to muddle through is not a business continuity plan.

2. Why did they risk a big bang when they could have phased delivery?  Maybe simple domestic customers first, more complex customers further down the timeline.  This would have allowed them to test their capacity and fix issues before they escalated (a sketch of one such phased approach follows this list).  Were they unaware of the technical options available to them?

3. Testing was clearly inadequate.

4. Testing, however, only gets you so far.  You can test defects out, but you can’t test quality in, and this software was not, and is still not, fit for purpose, as the authentication app shows.

5. The lack of customer and staff preparation for the IBAN switch tells me there were failings on the business readiness side as well as the technical readiness side.

6. The failure even to update existing pages with the correct phone number tells its own tale about attention to detail.
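
To illustrate point 2, here is a minimal sketch of how a phased, cohort-based cutover might be expressed in code.  The cohort names and dates are my own assumptions for illustration, not TSB’s actual plan.

    from datetime import date

    # Hypothetical cohort schedule: migrate the simplest customers first and
    # the more complex ones later. These dates are illustrative, not TSB's.
    COHORT_GO_LIVE = {
        "domestic_current_account": date(2018, 4, 20),
        "savings_only": date(2018, 5, 18),
        "business_banking": date(2018, 6, 15),
    }

    def platform_for(cohort, today):
        """Route a customer to the old or new platform based on their cohort's go-live date."""
        go_live = COHORT_GO_LIVE.get(cohort)
        if go_live is not None and today >= go_live:
            return "new_platform"
        # Anything not yet migrated (or not in the schedule) stays where it is,
        # which also gives you a built-in fall-back: pull the cohort's date.
        return "old_platform"

    # A business customer in early May would still be on the old platform,
    # limiting the blast radius of any defects found in the first cohort:
    # platform_for("business_banking", date(2018, 5, 1))  ->  "old_platform"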

Conclusion

The 36-hour planned outage, the risky big bang approach, the lack of a fall-back plan, inadequate testing, poor business readiness, sloppy work across the website and the failure of the authentication app all speak to me of a lack of adequate skills and experience of large, complex data migration projects, even if the software "worked".

These are all standard failings on migration projects that go wrong.  It’s quite common for the software itself to work.  Here the boast was that the system balanced to the penny.  And well it may have, but we were locked out for a week and customers could access accounts that did not belong to them.  Success in technical terms but a failure in business terms is still a failure.  Probably a bigger failure than the other way around.
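
For what "balancing to the penny" amounts to in practice, here is a minimal reconciliation sketch – again my own illustration, with an assumed record format, not either bank’s code.  Every closing balance on the source system must match the opening balance on the target, and an account that appears on only one side, like our mystery extra account, is as much a defect as a penny out.

    from decimal import Decimal

    def reconcile(source_balances, target_balances):
        """Return the account IDs whose balances differ between the source and target systems."""
        mismatches = [acct for acct, bal in source_balances.items()
                      if target_balances.get(acct) != bal]
        # Accounts that exist only on the target are defects too.
        mismatches += [acct for acct in target_balances if acct not in source_balances]
        return mismatches

    # e.g. reconcile({"001": Decimal("120.00")},
    #                {"001": Decimal("120.00"), "002": Decimal("0.00")})  ->  ["002"]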

More worrying, as a customer, is that the scale of the undertaking clearly exceeded the skills of a senior management team blinded by hubris.  The persistent denial that there was a problem, even when under scrutiny by a select committee of the House of Commons, demonstrates a more serious long-term issue for TSB.

But not for iergo.  We are moving banks.

The TSB debacle is the latest in a long line of spectacular failures.  However, these public catastrophes are but the tip of the iceberg.  Those of us on the inside know of scores of other projects managed just as badly but not exposed to the glare of publicity the unfortunate TSB crew have endured.

Trying to deliver a complex data migration on the cheap can turn out expensive.  Get experienced, trained support on board.  Work to an externally validated methodology.  And make sure you have a fall-back plan.

Happy Migrations,

Johny Morris