By Joe Clabby, Clabby Analytics
A recent Clabby Analytics PundIT article entitled “IBM IBV Study: Why Infrastructure Matters” discussed why infrastructure (systems, storage, networks and systems software) decisions have become so important to enterprise information technology (IT) buyers. In that article we discussed the findings of an IBM Institute for Business Value (IBV) study showing that enterprise IT buyers are willing to invest heavily in infrastructure products, provided those products help drive top-line growth: enhanced revenue opportunities, better customer service, new insights and the like. The study also indicated that these buyers know that optimization is key to maximizing their infrastructure investments.
Last week in Greenwich, Connecticut, IBM invited 88 IT research analysts from around the world to an analyst briefing in order to showcase its Systems and Technology Group (STG) infrastructure products and to outline its strategies. IBM’s goal: to show that IBM is continuing to make large investments in infrastructure products, and to demonstrate that these products are well integrated, highly competitive, and leading edge in terms of features and functions.
From our perspective, IBM succeeded in proving that infrastructure matters on a number of fronts, including:
- The advanced designs of its POWER architecture and system implementations;
- Positioning the mainframe to handle new workloads such as Big Data analytics and mobile computing;
- Leadership in Flash storage; and
- Software-defined environments (SDEs), where IBM has leveraged its distributed systems management and resource scheduling products (the result of its acquisition of Platform Computing) to build out a broad suite of “software-defined” offerings.
The remainder of this report will look at each of these offerings more closely.
Power Systems – Open, Built for Data and Optimized for the Cloud
It should be no secret that IBM’s Power Systems sales have been stumbling lately, with revenue declines of 31%, 22%, and 28% over the last three quarters. In fact, one analyst asked me, “Why does IBM continue to invest in POWER?”, as if Power Systems would never recover. My answer to this analyst was “balderdash”.
Yes, this has been a bad period from a sales perspective for Power Systems – largely because IBM focused too heavily on taking low-hanging-fruit Unix business from its competitors and failed to focus on Linux on POWER and several other growth opportunities. But, under new leadership, the Power Systems organization has re-targeted the Power Systems line and is now focusing on capturing Linux applications, on analytics markets, and on optimizing Power Systems for the cloud. Further, to broaden the appeal of POWER architecture, IBM has also opened the POWER architecture to external vendors, setting the stage for a whole new generation of POWER-based systems solutions. At this STG briefing, IBM described how it will grow Power Systems revenue by focusing on cloud innovation, integration and management, and on three specific areas of analytics that can exploit the raw processing power of the POWER8 processor. For a comparison of POWER8 to x86 architecture, see this report.
Will Power Systems recover? Yes, and they will do so because of distinct processor performance advantages (such as the ability to process four times as many threads as competing Intel x86-based processors) and systems advantages (such as the ability to support very large configurations of main memory and Flash cache) over competing Intel x86-based systems. There are thousands of workloads that need the raw processing power of the POWER architecture, especially a new generation of analytics workloads and the evolving cognitive computing market. As the market becomes more familiar with these new workloads, as well as the technological advantages of Power Systems, sales will again turn positive.
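The thread-count advantage comes down to simple arithmetic: POWER8 runs up to eight hardware threads per core (SMT8) versus two for Intel Hyper-Threading, so at comparable core counts a POWER8 socket presents four times as many hardware threads to the operating system. A back-of-the-envelope sketch, using an assumed 12-core part for both architectures (core counts vary by model):

```python
# Toy comparison of hardware threads per socket. Core counts are
# illustrative assumptions; the SMT levels (8 for POWER8, 2 for Intel
# Hyper-Threading) are the published per-core thread counts.

def threads_per_socket(cores: int, smt: int) -> int:
    """Hardware threads a single socket presents to the OS."""
    return cores * smt

power8 = threads_per_socket(cores=12, smt=8)  # assumed 12-core POWER8 part
xeon = threads_per_socket(cores=12, smt=2)    # assumed 12-core x86 part

print(power8, xeon, power8 // xeon)  # → 96 24 4
```

Highly threaded workloads such as analytics, which keep many hardware threads busy at once, are exactly where this multiplier pays off.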
One hidden gem in the Power Systems presentation was a discussion of IBM’s new compute-on-demand (COD) mobile pricing. Power Systems experience peak-and-valley sales cycles where buyers who want better performance and more capacity buy the latest-greatest POWER servers – and then don’t refresh until the next version of POWER architecture comes along. IBM’s new COD/mobile pricing model enables IT buyers to more easily add capacity by finding and paying for POWER-based resources that can be located anywhere within a Power Systems cloud – changing the buyer focus from waiting for new generations of Power Systems to finding and exploiting capacity. If this new COD/mobile pricing model takes off, Power Systems should see more level sales.
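The mechanics of locating spare capacity within a pool can be pictured with a small toy model. The class and pool-member names below are hypothetical, not IBM's actual COD implementation; the sketch simply shows a request for extra cores being satisfied wherever inactive capacity exists, rather than by a new hardware purchase:

```python
# Hypothetical sketch of "mobile" capacity activation across a pool of
# Power servers. All names and numbers are illustrative assumptions.

class PoolMember:
    def __init__(self, name, installed_cores, active_cores):
        self.name = name
        self.installed = installed_cores
        self.active = active_cores

    @property
    def dark_cores(self):
        # Installed-but-inactive cores that could be activated on demand.
        return self.installed - self.active

def activate(pool, cores_needed):
    """Greedily activate spare cores anywhere in the pool; return a plan."""
    plan = {}
    for member in sorted(pool, key=lambda m: m.dark_cores, reverse=True):
        if cores_needed == 0:
            break
        take = min(member.dark_cores, cores_needed)
        if take:
            member.active += take
            plan[member.name] = take
            cores_needed -= take
    if cores_needed:
        raise RuntimeError("pool lacks capacity")
    return plan

pool = [PoolMember("server-a", 32, 24), PoolMember("server-b", 32, 30)]
print(activate(pool, 9))  # → {'server-a': 8, 'server-b': 1}
```

The point of the model: the buyer's question shifts from "which new machine do I buy?" to "where in my pool can capacity be activated and paid for?"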
Also worth mentioning: IBM’s Power Systems organization invited three members of the OpenPOWER Foundation to its STG meeting. Another way IBM intends to grow POWER sales is by opening the POWER architecture for development around the world. IBM partners described how they are building new, innovative systems based on the POWER8 architecture and said they are very pleased with the results they are seeing. IBM also mentioned that the OpenPOWER Foundation now has more than 50 vendor members.
IBM’s System z Mainframe – New Workloads
In past reports Clabby Analytics has described the difference between IBM’s System z mainframe architecture and other RISC architectures from a microprocessor/systems design perspective. The way we see it, IBM’s z processor is a single-thread processor that is highly optimized for processing “stacked” work. The System z system design is heavily optimized for fast input/output processing. Accordingly, we argue that the combination of a fast stack processor and a very large I/O subsystem make System z the market’s most efficient large volume transaction processor/server.
On the other hand, we have struggled to argue that System z is well suited to process analytics workloads. Yes, we have noted that recent improvements in the System z instruction set (70 new instructions plus optimization for Java workloads), combined with floating point improvements and out-of-order execution have improved System z analytics workload performance. But we’d probably still choose a Power System for processing many analytics workloads due to the ability of POWER8 to process so many threads so quickly. Be aware, however, that we are about to change this perspective.
At IBM’s STG meeting, certain data was shared with us that strengthens the argument that System z can be an excellent analytics workload processor. IBM’s System z organization argues that it makes a great deal of sense to process transactions and analytics on the same System z, because moving the data to other processors can create data management problems as well as significant additional expense. (We happen to concur with this point of view, as we state in this ETL report.) IBM’s System z development organization is adding new functionality that will greatly strengthen System z’s performance when processing analytics workloads alongside transaction workloads. Some of these new features and functions will be announced shortly (they are under non-disclosure at present); as soon as this information becomes publicly available, we will argue that System z makes an excellent transaction/analytics workload processor (expect us to issue a report on this topic in the November timeframe).
IBM to Reclaim Leadership in the Storage Market
For decades IBM was the leader in the storage market, from both a sales and an innovation perspective. But in the 1990s that leadership edge was challenged, and IBM became one of many storage array providers. The acquisition of Texas Memory Systems in 2012, however, has revitalized IBM’s storage business, positioning IBM to become the number one supplier of Flash storage for the enterprise. The computing market is in the process of moving away from mechanical hard disk drives to solid-state Flash storage, and IBM is particularly well positioned with its Flash storage units to claim major market share in this fast-growing segment.
One aspect of Flash storage particularly intrigues us at Clabby Analytics: the potential to build very large in-memory systems using Flash storage as memory cache. Last May we wrote a report on how IBM’s X6-based System x servers can support up to 12 TB of memory by combining main memory with Flash cache that serves as extended memory and is presented to the processor over System x memory channels (this technology, by the way, is being transitioned to Lenovo as part of the sale of the System x x86 business to Lenovo, a deal that should close shortly). We see huge potential for IBM’s Power Systems organization to mimic this Flash-cache-as-memory approach, albeit in a different manner: IBM can build huge Flash cache memory subsystems supporting up to 40 TB of Flash that connect to the POWER8 processor using IBM’s new CAPI interface. Clabby Analytics is in the process of writing a report on how POWER8-based servers are well positioned to drive down the cost of large in-memory systems by exploiting Flash as memory cache; expect this report to be released within a month.
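The economics behind Flash-as-memory are straightforward: Flash is far cheaper per gigabyte than DRAM, so backing a modest DRAM tier with a large Flash tier drives down the blended cost of a very large "in-memory" footprint. A toy cost model makes the point; the $/GB figures below are placeholder assumptions, not vendor pricing:

```python
# Toy blended-cost model for a hybrid DRAM + Flash-as-memory configuration.
# Prices are illustrative placeholders, not actual quotes.

def blended_cost_per_gb(dram_gb, flash_gb, dram_price=10.0, flash_price=1.0):
    """Weighted-average $/GB across the DRAM and Flash capacity tiers."""
    total_gb = dram_gb + flash_gb
    total_cost = dram_gb * dram_price + flash_gb * flash_price
    return total_cost / total_gb

pure_dram = blended_cost_per_gb(dram_gb=12_288, flash_gb=0)    # 12 TB all-DRAM
hybrid = blended_cost_per_gb(dram_gb=2_048, flash_gb=10_240)   # 2 TB + 10 TB

print(round(pure_dram, 2), round(hybrid, 2))  # → 10.0 2.5
```

Under these assumed prices the hybrid configuration delivers the same 12 TB footprint at a quarter of the blended cost per gigabyte, which is the opportunity we see for CAPI-attached Flash on POWER8.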
Software Defined Environments
In October 2013, Clabby Analytics wrote a report describing our view of how the software-defined computing marketplace would evolve. The way we see it, “software-defined” computing is all about abstracting systems, storage and networking hardware so that these resources can be programmed through software, by users as well as vendors, to perform new functions. This software-defined approach enables enterprises to tightly integrate application service-level objectives with the underlying infrastructure, allowing dynamic, automated reconfiguration of resources based on application/workload requirements or business policies. It also enables enterprises to improve Quality of Service (QoS) levels by software-defining network and storage hardware behaviors (giving you the ability to introduce new availability, security or other features instead of having to wait for your vendor to supply them).
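The core idea of policy-driven reconfiguration can be sketched in a few lines. This is a minimal illustrative model, not any vendor's actual controller: a software-defined control loop compares an application's observed metrics against service-level policies and emits the resource actions needed to stay within objectives. All metric names, thresholds and actions are assumptions for illustration:

```python
# Minimal sketch of policy-driven reconfiguration in a software-defined
# environment: observed application state is reconciled against policies,
# and resource actions are programmed automatically. All names and
# thresholds are hypothetical.

def reconcile(app_state, policies):
    """Return the resource actions needed to keep the app within its SLOs."""
    actions = []
    for policy in policies:
        metric = policy["metric"]
        if metric in app_state and app_state[metric] > policy["threshold"]:
            actions.append(policy["action"])
    return actions

policies = [
    {"metric": "cpu_util", "threshold": 0.80, "action": "add_vcpus"},
    {"metric": "iops", "threshold": 50_000, "action": "add_flash_tier"},
]

app_state = {"cpu_util": 0.91, "iops": 12_000}
print(reconcile(app_state, policies))  # → ['add_vcpus']
```

In a real SDE the "actions" would be API calls that program abstracted compute, storage and network resources; the toy loop simply shows the policy-to-action mapping that makes reconfiguration automatic rather than manual.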
But once your systems, storage, and network resources have been software defined, they will also need to be integrated with one another (otherwise your information systems will consist of software defined silos). We believe that the way that these resources will be tied together into dynamic working pools of resources will be through the use of an evolving cloud standard known as OpenStack – as well as through the use of associated standards such as TOSCA and Open Services for Lifecycle Collaboration (or OSLC for short).
At IBM’s STG briefing we saw great progress in the formation of “software defined environments”, especially in the area of resource management. In January 2012, IBM acquired Platform Computing, one of the early pioneers of “grid computing”. What IBM essentially acquired, however, was not a grid vendor but a maker of one of the most advanced distributed workload management software environments in the industry. IBM’s software-defined organization is now exploiting Platform Computing’s advanced resource management environment to deliver software-defined solutions across systems, storage and networks.
The net of what we saw at IBM’s STG analyst briefing was this: IBM is making major investments in systems, storage, networks and related infrastructure that will continue to differentiate IBM from other vendors in the computing marketplace. We note that last year IBM announced that it would spend a billion dollars to build out its Linux portfolio. Earlier this year, IBM stated that it would spend another billion dollars to build out its Watson (cognitive computing) ecosystem. And a few months ago, IBM announced that it is spending three billion dollars on research and development initiatives, including initiatives to build future chip and process technologies. This level of investment certainly indicates that IBM is taking its “infrastructure matters” mantra very seriously. Overall, we were very impressed by IBM’s innovations in processors, systems designs and Flash. These innovations are making it possible to build new and different types of systems environments, and with increasing infrastructure integration, IBM’s STG organization is well positioned to drive greater profitability in the future.