Chaotic Changes

Systems and supplier changes – what could possibly go wrong?

Recent well-publicised failures in the delivery of client services following system or supplier changes are prompting speculation as to how these long-planned and tested changes could have gone so wrong. The last thing these firms wanted was to let customer-facing services slip – it’s the most visible and damaging kind of failure. Most firms focus far more on front-end functionality than on the engine room sitting behind it, so that when deadlines loom or corners have to be cut, it is the latter which suffers. This prompts the question: if the client can see this level of chaos, what else is happening behind the scenes? We have worked with a number of firms whose swan-like glide in front of clients has had to be supported by furious paddling beneath, as operational aspects of their service have been held together with Blu Tack and string.

The causes of these issues can be manifold, but there are some common themes in the failures we have seen:

  • Leapfrogging

Simply, trying to leapfrog from an ageing system or set of processes and services to a brave new world of shiny toys in one step. Sometimes this is driven by a desire to save customers the pain of multiple changes (though they would prefer that to the kind of failings we have seen recently). Sometimes it’s angst about being left behind by competitors. Either way, multiple changes bring too much complexity and an uncontrollable set of moving parts to the project. All the talk about AI or robo-advice being the future of financial services is interesting and exciting, but let’s not get carried away until we can take instructions, carry them out and keep clients informed.

  • The devil you know

A seductive line of thought, when faced with the shiny new toys of front-end functionality and a long wish list from the front office, is to retain those grubby legacy engine parts and plug the new toys into the front. There is also often a desire to limit changes to working practice, so that the teams of people in charge of the chewing gum and string (or, indeed, those with client-facing roles) suffer minimal disruption. Unfortunately, this often causes as many issues as it solves. Differences in functionality can range from the timing of updates, which leave data permanently out of sync, to the number of decimal places and the rounding rules used by each system (a small illustration follows below). On top of which, of course, there’s the building of the plumbing between all the parts. Some of the differences may seem trivial, particularly at the point when the decision is made by the business sponsor, who has not experienced the pain these apparently small differences can bring. Often, however, they prove surprisingly difficult to overcome, particularly if they demand changes to ancient software, the code for which is buried somewhere along with the Holy Grail. The dearth of knowledge of how the system actually works, and the lack of documentation, make the task of updating legacy systems still more difficult and prone to risk.
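To make the rounding point concrete, here is a minimal, purely illustrative sketch (in Python, with made-up figures, not taken from any particular firm or system): a legacy engine rounding trade values to two decimal places and a new front end keeping four will drift steadily apart, and no amount of plumbing between them will make the totals reconcile.

```python
# Purely illustrative: two systems valuing the same trades with different
# precision and rounding rules drift apart as transactions accumulate.
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

unit_price = Decimal("1.23456")   # hypothetical unit price
units_per_trade = Decimal("7")    # hypothetical trade size
trades = 250

legacy_total = Decimal("0")       # legacy engine: 2 decimal places, round half up
front_total = Decimal("0")        # new front end: 4 decimal places, bankers' rounding

for _ in range(trades):
    value = unit_price * units_per_trade
    legacy_total += value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    front_total += value.quantize(Decimal("0.0001"), rounding=ROUND_HALF_EVEN)

print(legacy_total, front_total, legacy_total - front_total)
```

Each individual trade differs by a fraction of a penny, but after a few hundred transactions the reconciliation break is large enough to need investigating by hand – and it will recur every day until one system changes its rules.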

  • You say potayto, I say potahto

These days, very few projects produce and agree detailed documentation of business requirements and specifications before the software is built or configured. Where documentation is produced at all, it tends to lag behind the reality. Many projects claim to be running an ‘Agile’ process, but in reality this often means only that the provider will show you what has been built and there will be a bit of time for tweaking if you don’t like it. The major risk with this approach is that you will not understand one another. In good faith, the provider may be diligently building functionality which is not what you expected and will not work for your business, or in the UK market and regulatory environment. The time saved in getting something out there leads to promises of delivery dates which leave little leeway for the mad scramble required to make the changes identified late in the process. Compromises made in the heat of the moment can then lead to the kinds of issues we have seen over recent months and years.

  • Testing times

After all this, there may be a lot less time for testing than planned. The scope of testing is often cut savagely to meet the published deadlines. Testing often proceeds in parallel with changes still being made, as the glitches described above, and those discovered during testing itself, are addressed. A casualty of this is often regression testing. An old-fashioned notion, perhaps, but there is a good reason for retesting functions after changes have been made elsewhere: too often we have seen major issues arise unexpectedly in one function because of changes made in another (a small illustration follows below). Even with excellent version control and a tightly controlled process for changes, things do slip through.
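As a small illustration of what regression testing means in practice, here is a sketch of a pinned test, written in Python with pytest assumed; the module and function names are hypothetical, invented for the example. The point is not the particular check, but that tests covering functions nobody intended to change are re-run on every release candidate, not just the tests for the new feature.

```python
# test_regression.py -- a minimal, purely illustrative regression check.
# "pricing.round_settlement_amount" is a hypothetical, long-standing function
# that nobody intended to change.
from pricing import round_settlement_amount  # hypothetical legacy function


def test_settlement_rounding_unchanged():
    # Expected values pinned from the current, known-good behaviour.
    assert round_settlement_amount(2.675) == 2.68
    assert round_settlement_amount(2.665) == 2.67


def test_negative_amounts_unchanged():
    assert round_settlement_amount(-2.675) == -2.68
```

If a change made elsewhere – say, to a fee module that shares the same rounding logic – alters this behaviour, a pinned test like this fails immediately, rather than the problem surfacing in client statements weeks later.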

  • Eternal optimism

Clearly, many of the previous factors are built on a bedrock of optimism that everything will work out in the end. This is particularly so when the firm decides to invest in tried and tested software from a known supplier. Many times we have heard, “it works for X so it will work for us”, ignoring the differences in surrounding technologies and products. It is easy, however, to lose sight of the changes the firm itself has made and the effects these will potentially have. A dash to the finish line can be swiftly followed by a major stumble, as we have seen. In this situation there is a very strong temptation to maintain that optimism, and a strong motivation not to lose face. Sometimes, though, the right answer is to stop, go away, fix it and then come back when it is working correctly. This requires two things. The first is clear heads, not governed by political pressures, to assess how badly things are broken and whether they can realistically be fixed in situ in a short timescale; this assessment should also consider clients’ best interests and other regulator-led considerations. The second is a realistic and tested rollback plan, to allow the old systems and processes to be put back into place. We have never seen a firm which had this. Hard to believe, maybe, but it is a sad fact that optimism always seems to triumph over learning from others in this scenario, and ploughing on regardless is then left as the only possible way forward. The cost and consequences of this are evident in what we are reading about in the press.

So, the lessons we would suggest are:

  • Hop, don’t leap
  • Think differently, think deeper
  • Write stuff down and agree it before you get too far in the process
  • Make a list and check it twice (at least)
  • Be more cynical, be more cautious

If you’d like to talk to us about operational or systems changes, do get in contact through our contacts page.
