Blog

26 APRIL 2017 David Richards, Cloud Tech News

Four nines & failure rates – will the cloud ever cut it for transactional banking?

Nearly nine hours of unplanned outage a year – in real-time banking? Is it any wonder financial services hasn’t made the final leap into the cloud?

Banks looking to take their cloud plans to the next level are likely to have returned to the drawing board following the latest Amazon Web Services outage, which disrupted the online activities of major organizations from Apple to the US Securities and Exchange Commission. One estimate suggests US financial services companies alone lost $160 million[1] – in just four hours. It’s been a timely reminder that any downtime is too much in an always-on digital economy, certainly for financial services.

The sobering point is that AWS was still delivering broadly within the terms of its service-level agreement (SLA). The S3 SLA in force promised 99.9% service availability – “three nines” – and even the stricter 99.99% (“four nines”) commitment that headlines many cloud services may be good enough for a lot of things, but it won’t do for banking.

Over a year, that 0.1% scope for unavailability equates to almost nine hours of unplanned outage – and even at four nines you are still left with nearly an hour – on top of any planned downtime for maintenance or updates. Combine the two and you’re looking at more than a day’s worth of potential service loss across a 12-month period. It’s hardly a recommendation for banks to move critical, live data into the cloud – however compelling the business drivers.

Banks need five nines (99.999%) service and data availability – the levels aimed for on their own premises. That’s a downtime tolerance of just over five minutes per year. And public cloud services are not set up to match that. It would be uneconomical.
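The arithmetic behind these “nines” is easy to check for yourself. A minimal sketch (the helper name below is purely illustrative, not any provider’s API):

```python
# Annual downtime permitted by an availability commitment.
# Uses a 365-day year (31,536,000 seconds).
def allowed_downtime_seconds(availability_pct, period_seconds=365 * 24 * 3600):
    """Seconds of downtime permitted per period at a given availability %."""
    return period_seconds * (1 - availability_pct / 100)

for label, pct in [("three nines", 99.9),
                   ("four nines", 99.99),
                   ("five nines", 99.999)]:
    secs = allowed_downtime_seconds(pct)
    print(f"{pct}% ({label}): {secs / 3600:.2f} hours/year "
          f"({secs / 60:.1f} minutes)")
```

Three nines works out to roughly 8.8 hours a year, four nines to roughly 53 minutes, and five nines to just over 5 minutes – each extra nine buys a tenfold reduction in tolerated downtime.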

The end doesn’t justify the means

Moving real-time transactions into the cloud is the final frontier for traditional regulated financial institutions. And there’s no question that they need, and want, to do this. It’s vastly more cost-efficient, and it’s the only way they can hope to compete with nimble financial upstarts, whose agility owes everything to being able to crunch huge numbers at high-speed using someone else’s top-of-the-range server farms. 

Financial authorities such as the UK’s Financial Conduct Authority have already accepted the cloud[2], which on the face of it gives banks the green light to be more ambitious. But not really, because the issued guidance doesn’t bridge the gap that matters to traditional banks – i.e. service levels remain inadequate for scenarios other than data archiving or disaster recovery.

In data archiving and backup applications, the cloud’s appeal hinges on its cost-efficiency, scalability and durability. But durability should not be confused with availability. Even if data is tightly safeguarded, and can be brought back online efficiently after a system crash or other crisis, this adds no value in a live-data scenario. If there is any chance that at some point access may be interrupted, the other merits of cloud don’t matter in this context.

And that’s why banks haven’t made the final leap to using cloud in a production environment – because these otherwise very viable on-demand data centers can’t offer them the very high availability assurances they need.

Lost market opportunity

So banks are stuck. The inability to move core systems and live data into the cloud is costing them competitively in lost market opportunity.

If they could make the leap, it would pave the way for advanced customer analytics, intelligent service automation, complex stock correlations, and predictive fraud detection: data-intensive applications that demand massive computer power – at a scale that their proprietary data centers simply can’t deliver.

But AWS and other mainstream cloud infrastructure providers have designed their services and service-level agreements to meet the needs of the majority: customers for whom the cost of an interrupted morning’s business, social feed or even hedge fund activity, though real, is at least partly offset by huge infrastructure savings.

Remaining open to new options

Banks absolutely need to be more ambitious and creative in their use of the cloud. Their future differentiation depends on having access to the same computer power, speed and flexible resource as their more nimble, less risk-averse competitors. But they are not going to make the transition until the service levels they rely on for core systems can be delivered.

Inadequate service levels are a significant stumbling block, but lessons will be learnt each time a high-profile cloud service is compromised[3]. In the meantime, the barriers can be overcome. Solving the data availability issue comes down to the way data is synchronized between sites (e.g. primary servers and secondary data centers), so that live data is always available in more than one place at the same time. It sounds impossible, but it isn’t.
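The idea can be illustrated with a deliberately simplified sketch – this is a toy model of synchronous multi-site replication, not WANdisco’s actual mechanism or any vendor’s API. Every write is applied to all live replicas before it completes, so any single surviving site can keep serving live data:

```python
# Toy sketch of synchronous multi-site replication (illustrative only).
class Replica:
    def __init__(self, name):
        self.name = name
        self.store = {}     # this site's copy of the live data
        self.alive = True   # whether the site is currently reachable

    def apply(self, key, value):
        self.store[key] = value

class ReplicatedStore:
    def __init__(self, replicas):
        self.replicas = replicas

    def write(self, key, value):
        # Synchronous fan-out: the write is applied at every live site
        # before returning, so all copies stay in lock-step.
        for r in self.replicas:
            if r.alive:
                r.apply(key, value)

    def read(self, key):
        # Any live replica can answer; an outage at one site does not
        # interrupt access to the data.
        for r in self.replicas:
            if r.alive:
                return r.store.get(key)
        raise RuntimeError("no replica available")

# Hypothetical two-site setup: on-premises primary plus a cloud region.
primary = Replica("on-prem")
cloud = Replica("cloud-region")
store = ReplicatedStore([primary, cloud])

store.write("balance:acct-1", 100)
primary.alive = False                 # simulate a site outage
print(store.read("balance:acct-1"))   # still served, from the cloud replica
```

A real system must also handle concurrent writes, network partitions and recovery of a failed site (typically via a consensus protocol), but the availability principle is the one shown: the data is live in more than one place at once.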

Achieve this (and at WANdisco we have) and the nines will take care of themselves.

[1] The massive AWS outage hurt 54 of the top 100 internet retailers — but not Amazon, Business Insider, March 2017

[2] FCA's cloud guidance: What it means for cloud providers and financial services firms, Computerworld, September 2016

[3] $310m AWS S3-izure: Why everyone put their eggs in one region, The Register, March 2017