By Joe Clabby, Clabby Analytics
For decades, banks, financial institutions, retailers, transporters and other industries that process bulk payments have relied on batch processing to generate end-of-day reconciliation reports, weekly or bi-weekly payrolls, consumer bills (such as utility and phone bills) and other work that is best processed in a group, or “batch”. And many of these institutions will continue to use batch processing for decades to come because it allows jobs to be shifted and run when computers are less busy, thus balancing system utilization and costs. In other words, for many jobs, batch will remain a viable, valuable approach.
However, technological innovations, new business models, customer expectations and regulatory pressures are causing a major shift in the way that many financial payments and transfers are supported:
- Consumers want to use their smartphones to instantly transfer money or pay for items – eliminating the need to carry cash. Smartphones are becoming the new wallet – and instant funds transfer using digital currency is the new cash;
- Consumers want deposits processed immediately;
- Merchants want same day payments;
- New business models based on mobile payments are evolving that overlay various services, such as authentication, fraud protection and security onto mobile transactions – making it possible to securely process transactions in real time;
- Government entities are putting rules in place to speed payment processing. For example, in the EU, regulations require clear information on payments, fast payments, consumer protection and a wide choice of services; in the US, the government requires credit card activity reporting to the IRS, lower debit card interchange fees, and compliance with the Payment Card Industry Data Security Standard; and,
- Consumers and enterprises are demanding the ability to instantly process payments across borders with little to no time delay.
As could be expected, a shift to real-time payments places new demands on computing systems because many jobs will need to be processed immediately – not left to be batch processed at a more convenient time for the bank or financial institution. This requirement for immediate payment processing also means that more computing capacity will be required at certain times during the day to handle spikes in demand during waking hours.
This shift in payment methodology will have a huge impact on IBM – particularly the IBM mainframe organization. IBM mainframes support 87% of the world’s credit card transactions; mainframes are used by 92 of the world’s top 100 banks – and by 23 of the top 25 retailers globally. A major shift in the way that payments are processed could alter this balance…
IBM: An Alternative Pricing Scheme
For years, IBM’s mainframe pricing was based on a “capacity pricing” model. That is, mainframe users paid for the amount of computing capacity that they used – and that capacity could be balanced over the course of a day using a batch pricing model.
So, for example, an organization might have to process 100,000 transactions a day – but using a batch model it could spread the processing of those transactions over a 24-hour period, thus balancing the capacity headroom it required. Under the new model, real-time payment processing, the bulk of transactions hit during waking hours, which means that an organization might have to process 75,000 of those transactions over an eight- or ten-hour period. As a result, the amount of capacity headroom required, and the related costs, would spike tremendously during that period.
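The headroom arithmetic above can be sketched in a few lines. This is an illustration only, using the article’s example figures and assuming transactions arrive uniformly within each window:

```python
# Capacity-headroom arithmetic from the article's example.
# Assumption: uniform arrival of transactions within each window.

DAILY_TXNS = 100_000

# Batch model: work is spread evenly across the full 24-hour day.
batch_rate = DAILY_TXNS / 24                # transactions per hour

# Real-time model: ~75,000 of those transactions arrive during an
# eight-hour business-hours window (the article's example).
peak_txns = 75_000
realtime_peak_rate = peak_txns / 8          # transactions per hour

# Capacity must be sized for the peak rate, not the daily average.
spike_factor = realtime_peak_rate / batch_rate

print(f"batch rate:        {batch_rate:,.0f} txns/hour")
print(f"real-time peak:    {realtime_peak_rate:,.0f} txns/hour")
print(f"required headroom: {spike_factor:.2f}x the batch baseline")
```

With these numbers, the real-time peak rate is more than double the batch baseline, which is exactly the spike in required capacity (and, under capacity pricing, cost) that the article describes.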
If IBM were to stay with its current capacity pricing model, many mainframe customers would see a huge increase in processing costs which would likely drive those companies to seek less expensive alternatives.
To remedy this situation, IBM has introduced a new approach: “utility pricing” for on-premises workloads. This enables mainframe clients to reduce their computing cost when dealing with high capacity peaks, uncertain volumes and growth rates, and volatile workload profiles (transaction spikes and surges). It also helps clients deal with restrictions in capital expenditures (CAPEX) and concerns about overall total-cost-of-ownership (TCO).
The way the solution works is this: a customer licenses the IBM Payments software solution (Financial Transaction Manager, or FTM). The software needed in support of FTM – z/OS, IBM MQ (Message Queue), IBM DB2 and IBM Operational Decision Manager Standard – is then charged per use for the payments processed. Charges apply only to payments processed in a customer’s production environment (FTM includes a counter that accurately collects and reports on the volumes processed).
On on-premises mainframe systems the FTM package is offered as a software-only or a combined software-and-hardware solution. Clients who choose to buy the software-only stack will see a range of benefits. These include lowered costs and risks around peak demands; predictable pricing linked directly to business/transaction volumes; no impact on other licensing costs (MLC); co-location with other core banking applications on their mainframes; and charges only for production payments. Clients who choose the combined software/hardware offering will see all of the above benefits, as well as significantly lower risk, by using a utility pricing model with a fixed cost per payment processed – and with limited upfront hardware spend.
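The contrast between the two pricing models can be sketched as a simple cost comparison. All of the rates below are hypothetical illustrations I have chosen for the example – they are not IBM’s actual prices – but they show why per-payment metering tracks business volume while capacity pricing tracks the peak:

```python
# Hedged sketch of capacity pricing vs. per-payment utility pricing.
# All rates are hypothetical, chosen only to illustrate the mechanics.

def capacity_cost(peak_txns_per_hour: float, rate_per_txn_hour: float) -> float:
    """Capacity model: pay for enough installed capacity to cover the peak,
    whether or not that capacity is used the rest of the time."""
    return peak_txns_per_hour * rate_per_txn_hour

def utility_cost(payments_processed: int, price_per_payment: float) -> float:
    """Utility model: pay only for production payments actually processed."""
    return payments_processed * price_per_payment

# Hypothetical month: 2.0M payments, with a peak hour of 9,375 txns/hour.
monthly_payments = 2_000_000
peak_rate = 9_375

fixed = capacity_cost(peak_rate, rate_per_txn_hour=25.0)          # assumed rate
metered = utility_cost(monthly_payments, price_per_payment=0.05)  # assumed rate

print(f"capacity-priced bill: ${fixed:,.0f}")
print(f"utility-priced bill:  ${metered:,.0f}")
```

The point of the sketch is structural, not the dollar figures: under the utility model the bill scales with payments processed, so a volume spike raises cost proportionally rather than forcing the customer to pay year-round for peak capacity.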
The most difficult objection to overcome when it comes to mainframe adoption is price, especially acquisition costs. Mainframe hardware, compared to most x86-based servers, is expensive. And related systems software, middleware, transaction processing environments and management software can also be expensive. In head-to-head competition, x86 solutions almost always look cheaper – and, accordingly, IT executive managers most often purchase x86-based servers on the basis of that perceived lower price.
With this new utility pricing model for payment processing, IBM has taken a giant step toward correcting what some perceive as a punitive, even disastrous, pricing scheme for highly variable workloads. By doing so, the company is also protecting its mainframe base as the transition to real-time payments takes place – and opening new, future opportunities for its z Systems as demands for stronger security and higher system capacity drive more prospects to consider z Systems mainframes.
I have to congratulate IBM’s z marketing organization for taking this corrective action. They have properly assessed the payments market – and have positioned z Systems to compete effectively and aggressively in this fast-growing market opportunity. What appears to be going on is that the company is taking a closer look at its systems features – the characteristics of processors; communications subsystems; file subsystems; scale-up and scale-out capabilities; security capabilities; queuing facilities; and more – and then looking more closely at workload demands.
In this case, z Systems marketing saw a major threat developing, worked with development to structure a utility pricing offering leveraging IBM’s own Financial Transaction Manager (FTM) payments software – and has presented its prospects and customers with a viable and attractive pricing alternative. Good work, IBM z Systems marketing!