IBM’s Enterprise 2014 Showcase: Major Technology Shifts Ahead

By Joe Clabby, Clabby Analytics

For the past five years IBM’s Systems and Technology Group (STG) has sponsored IBM Enterprise, an annual customer-oriented conference that combines strategy sharing, product reviews and hands-on training. Of the dozen or so events that I attend every year this is by far my favorite because it succinctly describes IBM systems strategies, offers plenty of case studies and customer testimonials, and provides several venues in which to interact with IBM customers and seek their off-the-record feedback. That real-world but anonymous feedback on IBM products and services helps me formulate my analytical opinions.

This year IBM focused its STG messaging on its Power Systems, System z and storage lines (System i and PureSystems were barely mentioned – though the latter is really a software group product):

• For those who follow Power Systems, the big news was the arrival of the new scale-up solutions (the E870 and E880); the introduction of new packaged “accelerators” (a NoSQL machine and a graphical processing unit [GPU] machine); new IBM Power S824 servers that tightly integrate IBM and other OpenPOWER member technologies (including NVIDIA’s GPU accelerator technology); and the expansion of the POWER ecosystem (through the OpenPOWER Foundation);
• On the mainframe side, the messaging concentrated on IBM’s overarching CAMSS strategy (cloud, analytics, mobile, social and security) – with particularly strong messaging on mobile;
• On the storage side, IBM focused on its strategy to retake the number one position in storage as the market moves from power-hungry, slower spinning disks to solid-state Flash drives;
IBM Research also shared its roadmap for future microprocessor designs, and one of my favorite technology speakers of all time, Bernie Meyerson (an IBM Fellow), shared an important insight on how “factors” of improvement (exponential rises in performance) will become the new way to evaluate the systems of tomorrow.

The Power Systems Messaging
It is hard to say what the biggest news in the Power Systems session was – there was just so much of it.
The E870 and E880 are extremely powerful, extremely efficient scale-up systems built on IBM’s POWER8 architecture. The big news with these new servers is how many threads they can process simultaneously, how much main memory they can access and their performance advantage over x86 competitors. (For details on how POWER8 microprocessors compare to Intel’s x86 architecture, consider downloading this free report). For instance, the E880, the most powerful of the line, can support up to 192 cores, more than 1,500 simultaneous threads and up to 16TB of main memory. On the performance side, SAP S&D benchmarks show that nearly 1,000 users can be supported per core on these machines – 2.2X more than the closest competitor. In short, these machines are powerful data center workhorses.

As impressive as the new scale-up machines are, I was particularly drawn to the arrival of new “hybrid” system architectures that bolster POWER8 CPUs with GPUs (graphical processing units) and field-programmable gate arrays (FPGAs), and that have been designed to deliver exponential factors of performance improvement over traditional designs based on a single CPU. Over the past year I’ve been writing about the arrival of such architectures in the Intel camp (see this report, this report and this report). I was fascinated by IBM’s new POWER8- and FPGA-based NoSQL server and the new NVIDIA GPU- and POWER8-based Data Engine for Analytics – accelerators that are capable of running thousands of computations simultaneously and are well suited to compute-intensive applications.

These new hybrid system designs warrant a bit more discussion, but doing so raises some contradictions. First, a Wikipedia lookup offers a completely different definition of hybrid systems than what I’ve been calling hybrids. Second, IBM nomenclature describes systems with different types of processors working together to accelerate workload performance as “accelerators”. And third, it’s really hard in this industry to create a new class of server, primarily because it screws up the bean-counters’ (Gartner and IDC) market models. Still, I would advocate that this new class of systems be called “accelerator systems” rather than hybrids – so Clabby Analytics reports from this moment on will reflect that change.

Another point to consider with regard to these new accelerators was raised by IBM Fellow Bernie Meyerson. In his presentation, Meyerson contended that Moore’s Law is dead (the practice of shrinking microprocessors to put more transistors on a die and thereby double processor performance roughly every 18 months) – and that systems performance will instead be measured in factors of improvement over traditional architectures. These new accelerator systems are all about delivering factors of improvement – and in private discussions with IBM and other vendors, I was told that many, many more of these types of designs are coming.

So get ready for a major market shift – one toward performance improvements delivered by new accelerator system designs (and measured in “factors”). As an interesting anecdote, Stephen Leonard, IBM’s GM of Sales Worldwide, told a story about a customer to whom IBM offered a system with a 60X factor of performance improvement over a traditional single-CPU design. The customer expressed no interest, so Leonard re-worded his sales pitch by asking, “How would you like a system that would cost only 1/60th the price of your current system to execute your current workload?” Suddenly, the customer was greatly interested… Trust me, you’re going to be hearing a lot of this “factor” pitch for years to come.
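To see why those are really the same pitch, here is a back-of-the-envelope sketch (my own illustration – the 60X speedup comes from Leonard’s anecdote, but the dollar figure is hypothetical): a system that executes a workload 60X faster needs only 1/60th of the capacity to get through the same work, and – all else being equal – roughly 1/60th of the cost.

    # Back-of-the-envelope only: the 60X factor comes from the anecdote above;
    # the cost figure below is hypothetical, purely for illustration.
    speedup_factor = 60                   # performance factor vs. a single-CPU design
    current_cost = 600_000                # hypothetical cost of the current system ($)
    capacity_needed = 1 / speedup_factor  # fraction of today's capacity for the same workload
    equivalent_cost = current_cost * capacity_needed
    print(f"Capacity needed: {capacity_needed:.1%} of today's footprint")       # 1.7%
    print(f"Equivalent cost: ~${equivalent_cost:,.0f} vs. ${current_cost:,}")   # ~$10,000 vs. $600,000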

Not to be overlooked in this Power Systems discussion is the advance of Linux on POWER. Four years ago, when POWER7 was announced in NYC, both Tony Iams (now of Gartner) and I pressed IBM executives on their strategy to bring Linux to the POWER marketplace. We pointed out that Linux would be extremely important to the future of POWER – and we were told that IBM was working on Linux on POWER with its new “Chiphopper” program to capture Linux applications for the POWER chipset. But we knew that not enough effort was being put forward (primarily because IBM’s Power Systems organization was making a fortune capturing customers fleeing the failing HP Itanium and Oracle/Sun SPARC architectures). I hold IBM’s failure to address Linux on POWER back then responsible for the sudden decline in Power Systems revenue that started two quarters ago. The Unix market declined sharply and the Power Systems organization was simply not properly prepared to introduce the Linux on POWER solutions the market needed.
The good news is that the Power Systems organization is under new leadership, with general manager Doug Balog at the helm – and several big advances in the group’s commitment to Linux have taken place accordingly. New low-end Power Systems with Linux have been introduced; an Integrated Facility for Linux (IFL – similar to the one for the System z mainframe) has been introduced (see this report). Most importantly, IBM has introduced support for little-endian as well as big-endian Linux applications on the POWER microprocessor (endianness has to do with byte order – see the short illustration below – and by adopting little endian, Power Systems can now run almost every x86-based Linux application on the POWER microprocessor after a simple recompile). Plus, Linux is playing a big role in systems design at the OpenPOWER Foundation. As a result of these activities and more, I am now satisfied that IBM’s Linux on POWER strategy is on the right track.
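For readers who have not bumped into endianness before, a quick illustration (a generic Python sketch of byte order, not IBM code): a big-endian system stores a value’s most significant byte first, a little-endian system stores it last – which is why byte-order assumptions baked into x86 Linux code matter when moving that code to POWER.

    import struct

    value = 0x11223344
    # The same 32-bit integer, laid out two different ways:
    print(struct.pack(">I", value).hex())  # big-endian:    11223344 (most significant byte first)
    print(struct.pack("<I", value).hex())  # little-endian: 44332211 (least significant byte first)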

Finally, in the POWER arena at IBM Enterprise, the OpenPOWER Foundation was discussed at great length. This group was formed to license IBM’s POWER8 microprocessor architecture – and to leverage the open source community and interested third-party vendors to greatly expand the POWER ecosystem with all-new solutions. The new Data Engine for Analytics accelerator mentioned earlier is one of the first system designs to emerge from this alliance. I’m running short on space in this blog, so if you want to know more about the OpenPOWER Foundation, see our new, free report on our website.

The System z Discussion
In the mainframe main tent, the positioning of System z in cloud, analytics, mobile and social markets was highlighted. IBM focused on how the mainframe is positioned in the private cloud as well as the hybrid cloud world – and how the mainframe has become extremely successful as a mobile back-end server environment.

My take on the mainframe’s position in the cloud is this: IBM’s System z organization has much work ahead of it to successfully position the mainframe as a participant in today’s view of cloud computing. Ask most systems-knowledgeable people what a cloud is and they’ll point to the Amazon Web Services (AWS) cloud or the Microsoft Azure environment (both public-facing, x86-based cloud environments) – or they’ll claim to have a virtualized x86 private cloud of their own. Mainframes are often left out of today’s cloud discussions – despite the fact that mainframes represented the first cloud-in-a-box environment decades ago.

As for using System z as a back-end server for mobile environments, we first started seeing the excitement in mainframers’ eyes about using the platform for mobile about two years ago at an IBM SHARE user group meeting in Boston. These users recognized that their mainframes are the repositories for much of the world’s data. And what they understood intuitively was recently backed up by this Forbes article, which states that “[IBM] technology, in fact, powers 90% of banks and 80% of airlines across the globe. Moreover, 70% of enterprise data flow through IBM’s systems in some way.” My bet is that the lion’s share of that 70% flows through mainframes.

IBM brought several customers on stage to discuss their mainframe/mobile experiences. And the company also talked in great detail about the new Apple/IBM alliance that will integrate iPhones and iPads with back-end mainframe data for analytics queries, secure look-ups and the like. For more on how the mainframe can serve as a mobile back-end see this free report – and for more on how the Apple/IBM relationship works, look for an article by me on the SHARE website next month. Incidentally, I plan to update the above mobile report soon, refreshing it with several more customer examples from this IBM Enterprise conference.

One last note on the mainframe mobile strategy: getting access to back-end data from mobile devices was initially an expensive proposition because it burned expensive mainframe MIPS. IBM identified this situation and, in order to grow mainframe/mobile marketshare, came up with a pricing scheme that greatly reduced those costs. This pricing plan should be a major enticement for mainframe/mobile buyers.

Finally, when it came to discussing the mainframe and analytics, IBM focused on describing its DB2 Analytics Accelerator environment (covered by Brad Day of Enterprise Computing Advisors and me in this report). What is particularly interesting about the DB2 Analytics Accelerator is that it is an “accelerator” server – as described earlier. It delivers mainframe DB2 data to a tightly coupled accelerator server (which uses FPGAs and Intel processors) that processes certain analytics workloads factors of time faster than the mainframe can. The company recently updated this environment with new features. I also had a separate discussion with an IBM Fellow regarding the future of analytics on the mainframe – and there is substantially more to be told in this space. Unfortunately, a non-disclosure agreement prevents me from sharing the rest of this story with readers at the present time.

The Storage Strategy
Only a year ago Clabby Analytics wrote a report on how we believed the software-defined marketplace would evolve into a rich cloud environment that would provide users access to a wide range of open source and vendor-supplied software-defined services, while also letting users build their own software-defined systems, storage and network services. In that report, we described how IBM, EMC and Red Hat were building software-defined storage environments – but, at the time, we had difficulty getting a grasp on IBM’s strategy. Shortly after we wrote our report, Jamie Thomas moved into the role of IBM’s Storage general manager – and the strategy was suddenly better articulated. In her keynote speech at IBM Enterprise, Ms. Thomas described IBM’s storage strategy as follows:
1. IBM will become number one in storage by becoming the dominant player in Flash technology. There is a major market shift taking place away from mechanical disk technologies to solid-state Flash storage – and IBM is already number one in Flash storage. IBM will maintain and grow its Flash market share.
2. IBM will use software-defined storage as a means to differentiate itself from other storage vendors. The company is already the leader in software-defined storage (according to IDC) with a strong portfolio of products: the Elastic Storage technology launched in May is now available as software, as a service on SoftLayer, and as an appliance based on IBM POWER8. IBM SAN Volume Controller and IBM Virtual Storage Center products are also part of the software-defined storage portfolio. IBM will use software-defined techniques to provide clients with the flexibility to use heterogeneous hardware, and to simplify the deployment of storage services and hybrid clouds.

The Microprocessor Agenda
Microprocessors fascinate me – and I know that the current silicon design that relies on shrinkage to increase performance has run its course (if you shrink dies much more, designers will be working at the atomic level to build chips – and things could get ugly from a nuclear fission perspective…). So I’m extremely interested in some of the new 3D, carbon nanotube, and synaptic designs that are currently being explored. At Enterprise 2014, a representative from IBM’s Research division spoke about IBM’s $3 billion investment in next generation processor technologies – and observed that some major changes are coming our way. I have not yet had time to process all that I learned during this session – but if I can find the time I will attempt to write a new report (the last one I did can be found here) that describes some of the new technologies being researched and their possible impact on today’s microprocessor status quo.

Summary Observations
There was so much information presented at IBM Enterprise that I ended up with 15 pages of notes! It’s going to take some time for me to assimilate all of it – but my overriding take-away after this conference is this: Power Systems really has its act together and I’m expecting a big turnaround in the group’s financial performance accordingly. As for the mainframe, work is needed to position the platform as an important player in the cloud – but I’m excited about what I’ve heard is coming in mainframe-based analytics. And, in storage, IBM’s strategy and related product implementations have come a long way in only a year. So far, in fact, that we will issue a new report reflecting all of those changes shortly.
While attending Enterprise 2014, I also learned that one of IBM’s traditional foes, Hewlett-Packard (HP), may be split into two companies – a PC/printer company and an enterprise server/software/services company. A few years ago I wrote a report that described how HP had suffered from a decade of mismanagement (especially by underinvesting in research and development). IBM, on the other hand, is investing heavily in game-changing processor research and numerous other developments that may change the market (just as IBM did in the bipolar-to-CMOS days). I think splitting HP into two separate companies is an excellent idea (the low-margin PC business should be split from the high-margin enterprise hardware/services business – these are really two separate businesses).
All in all, IBM Enterprise is a wise use of Power Systems/mainframe/storage buyers’ time. I’d encourage these buyers to attend IBM’s next Enterprise conference (which I am told will be combined with the IBM Impact conference) in May 2015. If you do attend, don’t be surprised if I introduce myself and seek your feedback on IBM strategies, products and performance.
