Virtual Instruments Acquires Xangati: A New Application-Centric Focus to Bridge APM and IPM Tools

By Joe Clabby, Clabby Analytics

Almost two years ago we wrote our first report on Virtual Instruments (VI), a fast-growing, analytics-driven performance management company with a strong focus on making infrastructure more efficient. We described the VI product portfolio, which included “VirtualWisdom,” the company’s infrastructure performance management platform, and associated hardware and software offerings known as “Probes” (ProbeVM, ProbeSW, Probe FC and Probe NTAP). We also observed that the company was using “advanced correlation techniques, analytics and visualization to provide definitive and actionable insights on infrastructure/application behavior,” using hardware appliances to offload systems from having to burn precious cycles gathering monitoring information. In essence, VI had created a separate performance monitoring/availability management/utilization optimization environment that has a very low impact on system operation and latency.

Last year, we reported that Virtual Instruments had merged with Load DynamiX – adding a performance testing, validation and change management environment to its analytics-driven infrastructure management portfolio. With these combined facilities, customers are better able to understand and test application/infrastructure relationships – enabling them to significantly improve application performance, particularly as it relates to Fibre Channel storage. Since that acquisition, Virtual Instruments has expanded Load DynamiX functionality into network-attached storage with its new NAS Performance Probe – and will soon introduce an iSCSI Probe. VI customers have reacted favorably to this acquisition: for 2016 year to date, Virtual Instruments revenues are running at 122% of plan.

On November 15, VI announced that it had acquired Xangati, a provider of products that monitor, analyze and control private and hybrid cloud infrastructure in real time – in an application-aware context. VI describes Xangati as “a service assurance analytics and performance control platform that optimizes virtual application workloads, leveraging an in-memory platform architected for automation based on machine-learned heuristics.” The way we see it, however, is that Xangati expands the VI portfolio by providing new insights into application behavior in environments beyond storage – particularly into cloud network infrastructure and compute/virtualization layer activities. And what is special about Xangati is that it uses new types of analytics (contention analytics, predictive capacity analysis and adaptive control) to examine application activities within networks, private clouds, deep compute environments, at the virtualization layer and in public clouds.

With the Xangati acquisition, VI is expanding its reach – moving beyond its initial application/storage infrastructure performance management focus into the complex world of application behavior within hybrid clouds. And Xangati is a perfect match for VI in that the company’s solutions were built on machine-driven analysis of networks (particularly real-time security analysis) to speed the analysis of activities and application behavior. By merging Xangati functionality with VI’s VirtualWisdom analytics/reporting platform, VI customers will now be able to understand application behavior within cloud environments – expanding VI’s reach and value to IT infrastructure managers and application owners looking for a holistic application/infrastructure performance management environment. The VirtualWisdom infrastructure performance management (IPM) platform will become the ideal complement to application performance management (APM) solutions.

The competitive environment

The Xangati acquisition also extends VI’s reach into one of the hottest segments in the IT marketplace – public/private/hybrid cloud – and gives VI customers the ability to monitor application behavior across networks, within virtualized environments, within virtual desktop environments, within cloud environments and even down to the granular container level. This is all accomplished from a single-pane-of-glass management, analysis and reporting environment (VirtualWisdom – expect Xangati functionality to be blended into this VI management environment in phases during 2017). We are aware of only one other company (IBM) that offers this level of rich application-layer analysis combined with deep infrastructure analysis – and IBM’s approach and price points are completely different.

What makes VI’s approach different from application performance management vendors and other infrastructure performance management software suppliers (which usually offer point-product solutions) is that the company can use hardware appliances known as “Probes” to relieve systems of monitoring, management and analysis duties. This matters because running monitoring software on the systems being monitored burns system cycles. Those cycles aren’t free – they consume anywhere from 2 to 5% of a system’s resources – and that can be costly across dozens, hundreds or even thousands of systems. In addition, software-only monitoring solutions don’t support true real-time monitoring: VirtualWisdom hardware probes enable sub-second wire data collection and analysis, whereas software solutions typically poll the infrastructure once every 2 to 5 minutes. What VI has done is offload most of this processing to independent, low-cost, Intel-based servers that analyze large amounts of systems-/storage-/network-monitoring data – and then use analytics to look for problems, anomalies or opportunities to tune systems.
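
To put the agent-overhead argument in concrete terms, here is a rough back-of-envelope calculation. The fleet size, per-server cost and overhead percentages below are illustrative assumptions (the 2-5% figure echoes the range cited above), not Virtual Instruments data.

```python
# Back-of-envelope estimate of what software-only monitoring agents cost
# in raw compute. All inputs are illustrative assumptions, not VI figures.

def agent_overhead_cost(num_servers, annual_cost_per_server, overhead_fraction):
    """Annual compute spend consumed by monitoring agents."""
    return num_servers * annual_cost_per_server * overhead_fraction

if __name__ == "__main__":
    servers = 500                    # assumed fleet size
    cost_per_server = 10_000         # assumed annual cost per server (hardware, power, licenses)
    for overhead in (0.02, 0.05):    # the 2-5% agent overhead cited above
        wasted = agent_overhead_cost(servers, cost_per_server, overhead)
        print(f"{overhead:.0%} agent overhead on {servers} servers = ${wasted:,.0f}/year")
```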

It is also worth noting that some application performance management vendors are just starting to use machine learning and analytics while VI already offers a rich and varied suite of analytics tools and reporting facilities. VI’s solutions already work across domains, they already measure and contextualize how applications are deployed (topology) and how they behave; they already perform analysis on large amounts of data in real time; they already use predictive analytics to suggest recommendations; and they already close the loop by providing testing and validation in order to ensure that applications can be deployed on underlying infrastructure in a high-fidelity, highly efficient manner. In other words, VI has already taken a leadership position in application-centric infrastructure performance management.

The big impact

As far back as 2011, Clabby Analytics started to report on the increasing role of analytics in managing systems and infrastructure. The basic concept that we have been trying to get information technology (IT) executives to understand is this: with the advent of machine-driven analytics, systems can now look at more data than is humanly possible, in far less time, and provide important insights into application behavior, infrastructure shortcomings and tuning opportunities.

By making systems do more of the complex application behavior analysis, enterprises can save big money in three different ways: 1) the use of infrastructure/application behavior analysis tools results in the need for far fewer systems/application/database analysts to analyze system behavior – saving enterprises big money in analyst salaries and benefits; 2) less-skilled individuals are needed for troubleshooting and tuning – again saving enterprises big money in terms of salaries (while also helping enterprises address skills gaps); and, 3) infrastructure performance management tools help reduce overprovisioning – adding more equipment than needed to execute workloads (because infrastructure performance management tools help IT managers and administrators build more efficient systems that utilize fewer resources).

In addition to saving big money in salary/equipment costs, enterprises can also solve problems more quickly using real-time analytics that analyze large volumes of monitored data in timeframes that are orders of magnitude faster than human response times. In fact, the financial benefits of avoiding performance slowdowns and business outages can easily outweigh the significant CAPEX and OPEX savings.

Summary observations

We expect that VI’s approach to application/infrastructure integration will remain the same, even after the Xangati acquisition. The company collects data from applications, infrastructure and the network (instrumentation); it then contextualizes applications and infrastructure – showing how the two relate. Once contextualization has taken place, application/infrastructure analysis can be performed on workload performance, contention issues and the like. Once recommended changes have been made, developers can model, simulate and test the proposed changes. Finally, changes are deployed in production and adapted to workload needs. VI’s storage and testing products use this approach today; Xangati’s network and cloud performance monitoring products will follow the same path.

It is noteworthy that VI has become profitable this year. Its business is growing; new products are being rolled into its VirtualWisdom platform; and it is expanding its relationships with third parties such as APM and system vendors in order to find new routes to market. With its combined software and hardware approach we see VI as a unique offering – and with its expansion-through-acquisition approach, we believe the company will find other software companies that can complement this portfolio. By targeting hot market segments with unique and deep offerings, we see a solid growth path for Virtual Instruments in the future.


IBM’s World of Watson (WoW) Conference: Embedded Applications and a Thriving Ecosystem

By Joe Clabby, President, Clabby Analytics

Two years ago, after IBM’s Insight 2014 conference, I wrote a piece for the Pund-IT Review about IBM’s Watson cognitive computing environment in which I stated the following:

“We in the analyst community need to see some black ink coming from IBM’s cognitive computing Watson efforts. I personally believe that Watson is on the cusp of hitting it big – starting probably in about a year or two.” I further stated that: “Watson will become profitable in about two years as the betas reach closure and as IBM works out its deployment and pricing plans.”

At that time my big concerns regarding Watson were that I was not seeing an expansive portfolio of cognitive applications; many customer environments were still “in beta test”; the Watson business partner ecosystem was in its infancy; and IBM’s Watson pricing/deployment plans were “obscure”. But, after attending this year’s World of Watson (WoW) conference in Las Vegas, all of my concerns have been assuaged. Why is that the case? Because:

  • The Watson applications portfolio has been markedly expanded – helped by a huge ($1 billion+) investment by IBM in hardware and software development;
  • The Watson ecosystem has greatly expanded as dozens and dozens of developers and partners have created cognitive computing and analytics solutions on top of the Watson platform;
  • The “quiet beta customers” of 2014 have been replaced by “vocal Watson customer enthusiasts” – as evidenced by the 50 or so live testimonials at this year’s WoW conference, and by the dozens of Watson user break-out sessions that could be found on the conference schedule; and,
  • IBM’s pricing and deployment scheme for Watson has solidified – with the company placing a huge emphasis on Watson cloud service delivery models.

Just before getting on the plane to Las Vegas I reviewed IBM’s 3rd quarter earnings announcement and I noted that the company reported that cognitive solutions (which include Watson revenues) had reached $4.2 billion, up 4.5 percent. IBM also reported that cloud revenue within the segment grew 74 percent (up 75 percent adjusting for currency), and Solutions Software grew 8 percent. What this shows me is that IBM’s major initiatives in cognitive, cloud and service delivery are now delivering the “black ink” that I’ve been looking for.

And this revenue jump may be just the tip of the iceberg – Ginni Rometty, IBM’s Chairman, President and CEO, reported in her keynote address that cognitive computing is estimated to be a $31 billion industry in the near future! And with IBM’s clear leadership in cognitive technology, the future looks very bright indeed for IBM’s Watson cognitive computing initiative.

The World of Watson Conference

The World of Watson conference merged with the analytics/cognitive/data management conference formerly known as “Insight.” Plenty of discussions on databases, data management, analytics, security and other related topics were still to be found at the WoW event – but, by changing the conference name, IBM has chosen to drive home the importance of its cognitive computing offerings.

This year’s WoW conference was attended by a large contingent of customers, business partners and IBMers – 17,000+ strong. The exhibition floor, where I usually spend the lion’s share of my time, was the largest I’ve ever seen (with 500,000 square feet of space). The conference featured 1200 sessions, including 500+ client experiences; 120 business partners; and 200+ hands-on labs/certifications.

The exhibition floor itself was organized into four quadrants:

  • Transforming industries (real-life examples of cognitive-based industry transformations);
  • Monetizing data (how specific business processes and roles can thrive in the cognitive era);
  • Reimagining professions (how data, analytics, and cognitive services are coming together to enhance, scale, and accelerate human expertise); and,
  • Redefining development (how IBM cognitive tools benefit developers).

The key messages IBM sought to deliver were:

  1. Cognitive is transforming every aspect of our lives;
  2. Data, analytics and cloud are the foundation for cognitive business; and,
  3. Cognitive business is “the way” going forward (meaning it will help people make better decisions and solve complex problems more quickly).

The Big News: More Application Solutions

My number one concern with Watson back in 2014 was its limited application solution portfolio. At that time Watson felt more like a technology in search of problems to solve rather than an industry-focused collection of real-world solutions. But at WoW I observed that the Watson portfolio has been greatly expanded with a collection of on-premises and cloud-based solutions.

I also observed that there appear to be two approaches to expanding the Watson portfolio: 1) a concept I’ll call “front-ending,” and 2) a concept called “embedding.” Think of Watson mainly as an environment that can manage and analyze data. IBM now refers to the data that Watson curates as a “corpus” – a body of knowledge that grows with a given business. Enterprises can create corpuses containing all sorts of structured and unstructured data – but queries must be formulated, or query software and libraries created or made available, to interrogate each respective corpus. At WoW I saw dozens of applications developed by IBM and its business partners that reside on top of Watson (like an overlay), query a Watson corpus, and implement certain process flows to deliver desired results.
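
As a purely illustrative sketch of this “front-ending” pattern, the snippet below shows an application posting a natural-language question to a corpus query service over REST and acting on the answers. The endpoint URL, credential scheme and response fields are hypothetical placeholders, not the actual Watson APIs.

```python
# Hypothetical illustration of an application "front-ending" a managed corpus:
# the app formulates a query, sends it to a query service, and acts on the result.
# The URL, auth header and JSON fields are placeholders, not real Watson endpoints.
import requests

CORPUS_QUERY_URL = "https://example.com/corpus/facilities/query"  # placeholder
API_KEY = "REPLACE_ME"                                             # placeholder credential

def query_corpus(question: str) -> dict:
    """Send a natural-language question to the (hypothetical) corpus query service."""
    resp = requests.post(
        CORPUS_QUERY_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"question": question, "max_results": 3},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    answer = query_corpus("Which washrooms are below the soap reorder threshold?")
    for hit in answer.get("results", []):
        print(hit.get("text"), hit.get("confidence"))
```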

My favorite example of this approach was designed by a company known as Persistent. The company installs Internet-of-Things devices in a washroom to monitor the amount of toilet paper, soap and paper towels that are available. (Yes, I know this sounds mundane, but it is a good example of the “front-ending” approach I’m seeking to describe…). The application can then place an order for replacement materials once certain threshold levels have been reached – and it can also schedule a maintenance representative to replace dwindling supplies.

This application is not as profound as some of the work Watson is performing in areas such as medical research and the financial markets. But it is illustrative of: 1) new applications that are being written to query Watson corpuses; 2) workflows that are being automated; and, 3) new efficiencies that are being introduced.

The “embedding” options for the Watson application portfolio mean that existing software is being improved by adding Watson-based functionalities. For example, existing software may be able to tell an enterprise executive that there is a problem in given processes or business units – but that software may lack the facilities to tell the executive why the problem has occurred and what to do about it. By augmenting a traditional reporting system with embedded Watson functionality, executives can take advantage of the platform’s ability to analyze structured and unstructured data to quickly get to the root cause of a problem. Then knowledge bases can be created to suggest work-arounds to given problems.

One of the best examples I saw of this was in a new IBM product known as Watson Supply Chain which uses cognitive technology to increase transparency, intelligence and predictability. Working with a company known as TransVoyant (which manages an extensive database of global shipping information and tracks weather events), IBM’s Watson Supply Chain can provide comprehensive visibility into supply chain activities across the globe. So, for instance, if a shipment of hard disks manufactured in China is about to leave Hong Kong, and there is a monsoon brewing in its path, executives can be alerted to possible trouble, can explore alternative routes or find alternative suppliers – and can also be advised of the financial impacts of their decisions.

The net effect is that new intelligence and predictive analytics have been added to a traditional supply chain application by embedding Watson, as well as by introducing a new source of data (TransVoyant) to provide new insights. I saw dozens of examples of applications that now have similar new analytics features thanks to embedding intelligent, cognitive Watson analytics within their workflows.

The Expanded Partner Ecosystem

At WoW, the number of third-party ecosystem vendors was at least three times larger than at Insight 2014. IBM claims to now have 120+ Watson business partners. Not all were represented at WoW, but some of IBM’s most noteworthy Watson business partners include QueBIT, Centric Netherlands B.V., Enterprise Computing Limited, New England Systems, SYNNEX Infotec Corporation, Sirius Computer Solutions, Prolifics, Addedo, ICit Business Intelligence, Revelwood and Cubewise. A list of the awards that these Watson partners have received from IBM can be found here.

The Customers are Speaking Out

During WoW, close to fifty IBM Watson customers took to the stage to relay their experiences and successes. The big change here is that many customers in 2014 were still in beta and were not willing to share results; now users are clamoring to get to the stage. Some described the business benefits that they had achieved using Watson cognitive technologies; others went a step further to describe the benefits that Watson technology can deliver to society. This group included executives who appeared on stage with Ginni Rometty during her keynote: John B. King, U.S. Secretary of Education; Mary Barra, Chairman and Chief Executive Officer of General Motors Company; and Professor Yitzhak Peterburg, Chairman of the Board at Teva Pharmaceutical Industries Ltd.

What Was Missing

IBM tends to talk at a very high level about “solutions,” but it is often difficult to figure out which mix of products is required to build specific cognitive solutions. Granted, there are different data sets coming from various internal and external sources that need to be filtered, cleansed and managed; granted, some Watson service will be needed to access and analyze those data sets; and granted, customers want widely different application solutions – so a wide variety of products may be needed to construct a cognitive solution.

But IBM’s main-tent presentations were very light on describing products and their capabilities in depth – and on which services or APIs are required to construct certain types of solutions. When cruising the demo floor, I saw many customers packed deeply around certain demos attempting to figure out which products to use to build their respective cognitive solutions. So one thing IBM could do better at next year’s show would be to put more solutions architects on stage and on the exhibition floor to help prospective customers better understand its broad and deep software portfolio.

Summary Observations

I believe that IBM’s Watson has turned the corner from being an interesting experimental technology to becoming an invaluable cognitive system. The purpose of this tool is to provide new insights by analyzing vast amounts of data and assimilating answers or constructing predictions based upon that data analysis. I also believe the learning capability of Watson is critical, and a differentiator from complex analysis tools of the past and present. As data changes and as decisions are made, Watson adjusts and improves. At WoW, I saw numerous examples of how Watson is being used to augment problem understanding and resolution (IBM now calls the former A.I. [artificial intelligence] “augmented intelligence”).

From a competitive standpoint, there are numerous other companies building “cognitive” offerings. Microsoft, Google, Cognitive Scale, Cisco (Cognitive Threat Analytics), HPE (Haven on Demand), Customer Matrix, Spark Cognition (see this report for a description of these products) – to name but a few – all have cognitive programs underway. But IBM, with its huge investments in database management software, in security, in analytics, in process flow, in APIs and cloud architecture, in mobile, in geospatial, in the Internet-of-Things, in cloud services and delivery models, in software solutions – and more – is far better positioned than any other major technology company to lead the charge into cognitive computing. Two years ago I was looking for signs of increasing profitability from the Watson Initiative. I now see an immense new market opening up for cognitive solutions – and IBM is ideally positioned to claim a gigantic share of that market.


IBM Storage Announcements Bolster Hybrid Cloud Strategy

By Jane Clabby, Clabby Analytics

On November 2, 2016, IBM introduced a set of storage-related enhancements that improve businesses’ ability to easily and seamlessly modernize and optimize traditional workloads – improving performance and efficiency – as well as support new-generation, born-in-the-cloud applications by leveraging hybrid cloud architectures. These latest announcements improve scalability, efficiency, performance and flexibility across the already comprehensive IBM storage and software-defined storage portfolio. The new announcements include:

  • IBM Spectrum Virtualize
    • Transparent Cloud Tiering
    • Cloud FlashCopy
    • Flash enhancements
  • IBM Spectrum Scale
    • IBM Spectrum Scale 4.2.2
    • Cloud data sharing
  • IBM Storwize Family upgrades
  • IBM DS8880 High Performance Flash Enclosure Gen2
  • IBM DeepFlash Elastic Storage Server
  • VersaStack support for cloud-enabled Flash

Let’s look at each of these announcements in greater detail.

IBM Spectrum Virtualize

IBM Spectrum Virtualize Transparent Cloud Tiering

IBM Spectrum Virtualize Transparent Cloud Tiering extends local storage (including almost 400 IBM and non-IBM systems) transparently, cost-effectively and on-demand into the cloud for snapshots and restores, freeing up primary storage for demanding new-generation cognitive and analytics workloads. By using cloud as a back-end tier, capex (capital expenditure) can be converted to opex (operational expenditure), changing the economics of storage by minimizing up-front costs. This feature is available in IBM Spectrum Virtualize software, IBM Storwize V7000 Gen2/Gen2+, IBM FlashSystem V9000, IBM SAN Volume Controller and VersaStack configurations that use those arrays.

IBM Spectrum Virtualize Cloud FlashCopy

This solution provides full and incremental restore from cloud snapshots stored in public clouds, including IBM SoftLayer, OpenStack Swift and Amazon S3 clouds (others will be added in the future). Data is compressed before being sent to the cloud, minimizing bandwidth and storage requirements, and all data stored in the cloud is encrypted, including data that has been stored on all supported IBM and non-IBM arrays. Use cases include backup (storing copies of volumes in the cloud), archiving cold data (storing a copy in the cloud and deleting the primary copy), and transferring volumes from system to system to restore a volume onto a new system.

IBM Spectrum Virtualize family Flash enhancements

The IBM Spectrum Virtualize family enhancements will enable high density expansion with up to 3X better drive density than 12-drive 2U enclosures. New enclosures will support up to 92 drives in only 5U rack space for up to 920TB with 10TB NL-SAS HDDs and 1380TB with 15TB flash drives. These can be intermixed with 2U enclosures in the same system and are supported across Storwize V5000 Gen2, Storwize V7000 Gen2/Gen2+, FlashSystem V9000, SAN Volume Controller (DH8 and SV1) and VersaStack.

IBM Spectrum Scale

IBM Spectrum Scale Cloud Data Sharing

You may recall that back in July, IBM announced IBM Spectrum Scale Transparent Cloud Tiering, which enabled a single namespace for data across the hybrid cloud and provided policy-based tiering to manage data placement and migration seamlessly across on-premises and off-premises infrastructures. Movement of data is automated based on policy. For example, data that hasn’t been accessed in a week could be automatically migrated to the cloud and files can be recalled on demand.
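
Spectrum Scale expresses this kind of placement and migration logic in its own policy rule language; the Python sketch below only models the decision described above (“anything untouched for a week becomes a candidate for the cloud tier”) so the shape of such a rule is easier to see. The directory path, threshold and migrate call are assumptions for illustration, not IBM’s implementation.

```python
# Illustrative model of a policy-driven tiering decision: files not accessed
# for N days are candidates for migration to a cloud tier. This mimics the
# intent of a Spectrum Scale policy rule; it is not IBM's implementation.
import os
import time

COLD_AFTER_DAYS = 7  # assumed threshold, mirroring the "not accessed in a week" example

def find_cold_files(root: str, cold_after_days: int = COLD_AFTER_DAYS):
    """Yield paths whose last access time is older than the threshold."""
    cutoff = time.time() - cold_after_days * 86_400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:
                yield path

def migrate_to_cloud_tier(path: str) -> None:
    """Placeholder for the actual tiering action (e.g., an object-store upload)."""
    print(f"would migrate {path} to the cloud tier")

if __name__ == "__main__":
    for cold_file in find_cold_files("/data/projects"):  # assumed fileset path
        migrate_to_cloud_tier(cold_file)
```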

IBM Spectrum Scale Cloud Data Sharing adds to these capabilities with automated policy-driven replication and synchronization, and more granular control over data movement based on type of data, action, metadata or heat. Data can be moved from storage to storage, including data and metadata, bridging the gap between cloud and file infrastructure. This solution is ideal for cloud-native applications, DevOps, new generation applications such as Apache Spark, and/or workloads that are heavily parallel-processed. Data can be analyzed and then automatically tiered out to public cloud providers.

IBM Spectrum Scale Version 4.2.2

IBM Spectrum Scale 4.2.2 includes new features that improve security, usability, flexibility and Big Data analytics. Here is a sampling. An updated GUI provides better, more comprehensive monitoring and troubleshooting, and tasks can be streamlined using guided interfaces. The new revision includes iSCSI client support with diskless boot and RESTful API support. To better address analytics workloads, support for the Hadoop Distributed File System (HDFS) has been updated to support new applications and data oceans, and multiple HDFS domains can be included in a single storage pool – ideal for large-scale Big Data, analytics and cognitive workloads.

IBM Storwize Family Upgrades

IBM has added scalability improvements to systems built with IBM Spectrum Virtualize. For example, IBM reports the following improvements to IBM Storwize V7000 for all-Flash workloads:

  • 50% more drives per system
  • Almost 3x more drives per clustered system
  • 4x larger maximum flash drive size (up to 15.36 TB)
  • Almost 6X larger maximum single system (up to 11.4PB)
  • 8X larger maximum clustered system (up to 32PB in only 4 racks)

IBM DS8880 High Performance Flash Enclosure Gen2

IBM reports 2X performance improvement and 6X added capacity per enclosure. Gen2 also provides a 90% improvement in IOPS read performance and 50% in IOPS write performance, as well as a 110% improvement in read throughput and 85% in write throughput for real-time analytics, cognitive and traditional I/O intensive transactional workloads. Data protection is improved by using RAID 6 as the default.

IBM DeepFlash Elastic Storage Server

In July, IBM announced the DeepFlash 150 all-Flash array with IBM Spectrum Scale, designed for Big Data unstructured workloads and providing better economics, efficiency and time-to-results for analytics applications.

The new IBM DeepFlash Elastic Storage Server (ESS) is a packaged (one SKU) turnkey solution available in two configurations (GF1 with one DeepFlash system and up to 180TB usable, or GF2 with two DeepFlash systems and up to 360TB usable). It provides software-defined Flash storage for both file and object data, and includes IBM Spectrum Scale RAID erasure coding for advanced data protection. In addition, IBM reports up to 8X faster response time (compared to HDD) and throughput of 26.5GB/s read and 16.5GB/s write for the GF2. The DeepFlash ESS will grow to meet the demands of the business with seamless, in-place upgrades to multiple petabytes per system, and systems can be clustered for virtually unlimited scaling.

VersaStack Support for Cloud-enabled Flash

VersaStack, IBM’s converged infrastructure solution, adds support to enable a wider range of use cases and workloads in a converged cloud-enabled package. New systems include:

  • Storwize V5000/V5030F with Cisco UCS Mini – Entry to midsize
  • SAN Volume Controller with Cisco UCS – Mixed storage workloads
  • FlashSystem A9000 with Cisco UCS – VDI

These models join the Storwize V7000/V7000U/V7000F with Cisco UCS (medium to large enterprise) and the FlashSystem V9000/900 with Cisco UCS (high performance).

Summary observations

Over the last several years, the storage landscape has evolved considerably in order to adapt to and support new applications and workloads. Business initiatives around Oracle, PeopleSoft and SAP have been replaced with new generation Big Data, analytics and cognitive workloads and applications including Hadoop, Spark and MongoDB. And while NAS and SAN still play an important role in corporate data centers, large public cloud object stores like Amazon S3 and Microsoft Azure have become increasingly important as businesses look to store massive volumes of structured and unstructured data in a flat, global namespace at cloud-scale.

IBM has responded with a steady stream of announcements that support these trends. IBM Cloud Object Storage provides storage in the public cloud, but can also be deployed on-premises. Working as both a cloud service and local storage, it provides the capabilities of hybrid cloud, enabling businesses to seamlessly support both traditional and new generation workloads in a secure, scalable, efficient manner. IBM’s Spectrum Storage Suite of software defined solutions can be purchased software-only, as an appliance or as-a-service. Spectrum Virtualize enables heterogeneous enterprise-class data services including migration, encryption, tiering, snapshots and replication across 400 different storage arrays from a wide range of vendors, providing customers with flexibility and investment protection.

As the price of Flash has decreased and as its overall economics (efficiency through data reduction techniques) have improved, all-Flash storage arrays are being used for a wider range of applications. Recent additions to the IBM all-Flash line-up provide systems for the entry-level and mid-range. With the existing all-Flash arrays such as the FlashSystem A9000 and A9000R, and the DeepFlash 150 and DS8888, IBM can support applications ranging from entry-level to high-end mainframe-based applications.

With these latest additions to its storage portfolio, including comprehensive all-Flash array offerings, a family of flexible software defined Spectrum Storage solutions and a well-articulated hybrid cloud strategy, IBM is uniquely positioned to guide customers into the Cognitive Computing Era.


IBM Edge 2016 – Outthink the Status Quo

By Jane Clabby, Clabby Analytics

IBM Edge 2016 featured the company’s Storage, Power Systems and z Systems as foundations for the evolving cognitive computing era. Cloud computing was positioned as the delivery platform for digital innovation – including technology innovation, collaborative innovation and business model innovation. The theme of this year’s conference was: “Outthink the status quo.”

Statistics from this year’s IBM Institute for Business Value (IBV) study support the company’s overall strategic direction. For the first time, 72% of study participants selected technology as the main game changer, while 70% of C-suite executives (CxOs) plan to extend their partner networks and 80% are experimenting with different business models. As always, the conference included many customer success stories to reinforce IBM’s key messages.

While I will cover primarily storage and software-defined infrastructure in my review, there were other interesting storylines that piqued my interest this year – including Blockchain and the open source project Hyperledger.

Keynote

IBM Systems SVP Tom Rosamilia’s keynote led off with a discussion about the importance of systemic performance relative to microprocessor performance. With Moore’s Law tapering off and delivering diminishing returns, businesses must focus on system components working together to accelerate and optimize performance in Big Data, cognitive and analytics applications. The focus on systemic performance has enabled businesses to do things like combine transactional processing with analytics on z Systems, improving efficiency and time-to-results. And Flash storage now serves new generation workloads that require near-instantaneous performance.

Customer examples include:

  • Plenty of Fish, an on-line dating site with over 3 million active daily users, experienced bottlenecks when running matching algorithms on its clients’ personality data and images. Moving to all-Flash IBM Storage enabled the company to get to a 4 millisecond response time on its 30 servers, addressing the accelerating demands of its member base. IBM Flash offered lower latency than any other solution evaluated – which was the metric most critical to the company’s business.
  • Red Bull Racing designs and manufactures race cars that compete in 21 countries across 5 continents. The car design includes more than 100 lightweight high-power sensors, on-board computing systems, real-time telemetry and external data (weather, for example) that use IBM analytics to make complex data-driven decisions in real time. Cars must be both fast and safe, and must comply with industry regulation and auditing processes. To meet these demands, the company makes more than 30,000 engineering changes per year, which must be tested in virtual simulations before going live. An IBM partner since 2005, Red Bull recently adopted IBM Spectrum LSF to accelerate workload and resource management, increasing throughput for simulations by 30-50%. Other IBM solutions employed by Red Bull include IBM Spectrum Scale, IBM Spectrum Symphony, IBM Elastic Storage Server on Power and IBM Spectrum Protect.
  • Wells Fargo moved to Blockchain running on IBM Linux, introducing a new paradigm for financial transactions – a tamper-proof, shared distributed ledger that tracks each step in a transaction in a secure way.

IBM Storage and Software Defined Infrastructure

Ed Walsh, the new (he rejoined the company nine weeks ago) GM of IBM Storage and Software Defined Infrastructure, shared his perspective on the three compelling elements of the IBM storage portfolio:

  • The broadest storage portfolio in the industry
  • Software-defined leadership
  • Flash leadership

With an IBM software-defined infrastructure, customers can move into the Cognitive Computing Era, support new generation application workloads such as Hadoop and Spark, take out cost, and support growing business demands. IBM cognitive solutions provide the ability to ingest all sorts of data, learn from it and reason through it to improve and automate decision-making.

Broad portfolio

One of the most compelling aspects of IBM’s storage strategy is its commitment to flexibility and choice, enabling customers to purchase solutions as software-only (to be run on commodity storage), as an appliance, or as-a-service – with the same code base regardless of delivery model. Many of IBM’s software-defined storage offerings are available not only in the SoftLayer Cloud, but also in other public cloud environments such as Google Cloud, Amazon S3 and Microsoft Azure. This enables customers to easily move to hybrid cloud environments as well as support traditional and new generation workloads. The flexible licensing model of the Spectrum Storage Suite is another plus – allowing customers to mix and match products within the suite as needs change, paying based on capacity. Not only can customers scale up as needs grow, but they can also scale back as needs change.

The IBM Flash portfolio is equally comprehensive, offering a wide range of recently refreshed all-Flash arrays that can be used for new and traditional use cases including storage virtualization, grid-scale cloud, Big Data and business critical applications.

Software-defined storage – IBM Spectrum Storage

As mentioned above, IBM’s software-defined storage solutions enable customers to easily move to the hybrid cloud, which IDC reports will represent 80% of enterprise IT by 2017. According to IBM, these technologies allow customers to simultaneously optimize traditional applications by leveraging automation to free up resources for new applications, allowing organizations to modernize and transform. The portfolio includes:

  • IBM Spectrum Scale – Scale-out file storage
  • IBM Spectrum Accelerate – Scale-out block storage
  • IBM Cloud Object Storage – Scale-out object storage
  • IBM Spectrum Virtualize – Virtualized block storage
  • IBM Spectrum Protect – Data protection
  • IBM Spectrum Control – Common management layer across block and object
  • IBM Spectrum Copy Data Management – a new solution that enables customers to easily manage the proliferation of copies across the organization, not just those used for replication and data protection, but for other use cases as well.

IBM also unveiled another new offering based on customer demand – called Data Protection for Data Oceans – which combines the capabilities of IBM Spectrum Protect and IBM Spectrum Scale and is already in use at the European Union Water Company to improve data protection across oceans of data in a scale-out architecture. This is a direction that IBM will continue to pursue – building strategic solutions that solve specific problems.

We also learned more about IBM Spectrum Scale transparent cloud tiering, which allows administrators to add private or public cloud object storage as a target for Spectrum Scale data without the use of a gateway. Data can be seamlessly tiered to tape, to an object store or to public clouds – including IBM Cloud Object Store, Amazon S3, Google Cloud and Azure (or any cloud with a RESTful API interface) – in the same manner used to store data locally.

Software-defined infrastructure – IBM Spectrum Computing

The other element of IBM’s software-defined infrastructure is IBM Spectrum Computing workload management and resource scheduling. These solutions, originally designed to support the high-performance computing (HPC) market, are finding their way into a broader set of workloads and applications. Providing cloud-based scale-out and data virtualization, IBM Spectrum Computing is ideal for new generation applications such as Hadoop, Spark, SAP HANA and Cassandra, as well as Big Data and analytics applications. By virtualizing the compute grid, “cluster creep” is minimized, utilization is improved, and performance is predictable in multi-tenant, multi-workload shared infrastructure environments. The portfolio includes:

  • IBM Spectrum LSF — Intelligent workload management platform for distributed HPC environments
  • IBM Spectrum Symphony — Grid services for analytics, trade and risk applications
  • IBM Spectrum Conductor — Designed for open scale-out frameworks, providing real-time workload management for applications such as Hadoop, Spark, MongoDB and others
  • IBM Spectrum Conductor for Spark – Purpose-built Spectrum Conductor offering for Apache Spark

IBM Flash Solutions

On August 23, 2016, IBM announced three new all-Flash systems:

  • IBM Storwize V7000 – cost-optimized Flash for heterogeneous environments with high performance CPU and more cache for up to 45% performance improvement
  • IBM Storwize V5030F – low cost for entry-level and mid-size workloads — Flash-optimized with Real-time Compression, distributed RAID and multi-layer write caching
  • IBM FlashSystem V9000 – Ultra-low latency enterprise-class for mission-critical workloads with up to 30% performance improvement

These systems join the rest of the IBM all-Flash portfolio to cover a broad range of use cases and capacity and performance requirements, providing optimized performance while maintaining a low TCO. The following customer examples demonstrate the value of all-Flash.

  • University of Pittsburgh Medical Center (UPMC) has 3 million subscribers and 13PB of block storage managed by IBM Spectrum Storage. The organization has a home-grown message router that is very sensitive to latency, and delays were impacting customer billing. A V9000 front-ended by Spectrum Virtualize enabled UPMC to migrate the VMs within hours. The success of this project opened the floodgates for other Flash-oriented projects within the University.
  • State of Arizona Land Department generates revenue for local public schools by managing Arizona-based properties. With the V9000 and VersaStack, GIS applications that used aerial photography data, map sets and 3D models performed better than other solutions that were evaluated.

Blockchain

Blockchain is a protocol that originated in 2009 with Bitcoin to record financial transactions. Since then, it has evolved into a much richer solution that records the transfer and ownership of assets in many industries, providing database records that are validated and propagated and, more importantly, cannot be changed or deleted. Hyperledger is an open source project (one of the most popular such projects ever) launched as a collaborative effort to advance Blockchain technology. Hyperledger builds on the blockchain concepts behind Bitcoin, adding data encryption, access control policies and digital signatures for every record.

Here are examples of how Blockchain is used.

  • IBM uses Blockchain built on Hyperledger in its own supply chain. IBM was seeing 25,000 disputes every year over shipments, payments, invoices etc., and each dispute was costing the company $31,000. By adopting Blockchain, IBM was able to replace the existing ledger system with a secure record of orders and parts with a range of suppliers throughout the company. The new system helps enforce workflows and enables partners and suppliers to contribute to the workflow.
  • Everledger uses Blockchain to track the provenance of diamonds and other luxury items. The life story of each diamond was previously tracked on paper, which introduced risk of fraud and theft, costing insurers $50B annually and leading to higher insurance costs for businesses and customers. Using a Blockchain solution built on IBM’s LinuxONE platform, Everledger now provides a digital thumbprint of each diamond with information about custody and ownership, title transfer and chain of possession, as well as notarization and time stamping.

Summary Observations

IBM Edge 2016 was packed with lots of great information and customer success stories that demonstrate how the company’s technologies are being used and what tangible benefits are being derived. There were many examples that reinforced the idea of “systemic” performance improvements being provided by IBM products working in conjunction with each other. While my focus is on storage and software defined infrastructure, IBM’s hybrid cloud strategy is very compelling, incorporating elements from across the portfolio and linking them together to support new use cases and workloads (analytics and Big Data, for example) that require high levels of performance. This and IBM’s strength in analytics and cognitive computing are clear differentiators.

Some of my fellow analysts criticized the IBM storage portfolio for being too broad and too complex. But I see it a different way. Customers can address all of their storage requirements from within the IBM portfolio, so there is no need to use different vendors, which actually reduces complexity associated with ordering, managing and support (we used to call this one-stop shopping). The new Spectrum product names, based on product function, also contribute to ease-of-ordering, and the consistent user interface across the product line improves ease-of-use.

IBM introduced lots of new ideas and technologies at Edge 2016 that are, as Tom Rosamilia said, “transformative.” IBM reinforced this by positioning itself as the company to turn to when transforming a business. With deep hybrid cloud product offerings and expertise, with the richest and deepest analytics portfolio in the industry, and with its innovative spirit, I concur. I’m looking forward to Edge 2017 where I will be able to see even further progress in business transformation as more and more enterprises come forward to explain how they have used IBM technologies to drive innovation and new business models.


IBM’s Edge 2016 Conference: Blockchain; Accelerated Systems; Software-Defined Storage

By Joe Clabby, Clabby Analytics

IBM Edge 2016, held in Las Vegas last week, focused on three themes: 1) cognitive solutions – such as the cognitive/Big Data/advanced analytics technologies that have worked their way into various IBM products; 2) cloud architectures, services and platforms (especially the seamless blending of public and private clouds); and, 3) industry innovation (such as the progress being made with Blockchain protocols as they relate to the open Hyperledger standard).

The Biggest News – Industry innovation: Blockchain and the Hyperledger standard

At breakfast on the first morning after my arrival at the conference I sat next to an IBMer who asked “Hey, what do you think of the Blockchain news?” I told him that Blockchain, the architecture that underlies Bitcoin currency exchange, “wasn’t even on my radar screen.” He said “Wait til the end of the conference and let’s see what you have to say then.” He was right…

Here’s what I now have to say about the Blockchain protocol and related standards: “Blockchain, as manifested in the evolving Hyperledger standard, represents one of the most momentous changes to business process flow, transaction processing and accountability that I have ever seen – and I’ve been in the computing industry for almost forty years!”

Over my career I’ve witnessed the rise of mainframe computing; the arrival of distributed computing; the arrival of the personal computer; the standardization of networking protocols and the rise of the Internet; the move toward business process reengineering and the standardization of business applications (ERP, CRM, SCM, …); the rise of Big Data analytics; the arrival of cognitive computing, and much more. But with the arrival of the Blockchain protocol, I see a new wave of transaction processing that has the potential to greatly increase the security of transactions while eliminating process delays and human (middleman) overhead. When it comes to next generation transaction processing, I rate Blockchain and associated process flow and security activities as being as important to the future of conducting business as the Internet has been to collaboration and knowledge sharing.

For those not familiar with the technology, I suggest that you read this article by the Brookings Institution: The Blockchain: What It Is and Why It Matters. The Brookings Institution states that “the elegance of the Blockchain is that it obviates the need for a central authority [such as a bank] to verify trust and the transfer of value. It transfers power and control from large entities to the many, enabling safe, fast, cheaper transactions despite the fact that we may not know the entities we are dealing with.”

In other words, it changes the way that transactions will be handled in the future by not requiring a centralized authority to handle all the steps of conducting a transaction – instead, all the elements needed to conduct the transaction (such as contractual and payment information) become locked into a public record – a series of linked steps known as the Blockchain. Using this approach, computers can verify the validity of each transaction and create an immutable (untamperable, if there were such a word) record of a transaction’s flow. Say goodbye to bills of lading; to courier services; to other middlemen; to various contractual services; and to other time- and cost-consuming overhead (such as bank payment delays) related to processing today’s transactions.

And, as a result, expect tremendously lower transaction processing costs.

Why so? Note that the Blockchain protocol presents a method through which transactions can be processed. There are numerous other elements involved in streamlining Blockchain transaction flows including the use of consensus algorithms, various storage models – and services that help establish identity and trust; that control and regulate access and that establish the rules that govern a transaction (a contract). To incorporate these services, the Linux Foundation now hosts a project known as Hyperledger that adds the needed services (such as smart contracts) to support Blockchain-based transactions.
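
To make the “immutable record” idea described above concrete, here is a minimal hash-chaining sketch: each entry carries a cryptographic hash of the previous entry, so altering any earlier record invalidates everything after it. This is a teaching toy under my own simplifying assumptions, not Hyperledger’s actual data structures or consensus machinery.

```python
# Minimal hash-chained ledger: each block commits to the previous block's hash,
# so tampering with any earlier entry breaks the chain. A teaching sketch only;
# real blockchains add consensus, signatures, and smart-contract logic on top.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transaction: dict) -> None:
    """Add a new block that references the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "genesis"
    chain.append({"prev_hash": prev, "tx": transaction})

def verify_chain(chain: list) -> bool:
    """Recompute each link; any edit to an earlier block breaks verification."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

if __name__ == "__main__":
    ledger = []
    append_block(ledger, {"from": "buyer", "to": "supplier", "amount": 100})
    append_block(ledger, {"from": "supplier", "to": "carrier", "amount": 15})
    print("valid:", verify_chain(ledger))                   # True
    ledger[0]["tx"]["amount"] = 1_000_000                   # attempt to tamper with history
    print("valid after tampering:", verify_chain(ledger))   # False
```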

As is my nature as a systems and infrastructure analyst, I naturally considered what kind of systems environment I might run Hyperledger applications on. My first thought was about security – how to protect transactions from internal fraud as well as cybercriminals. Coincidentally, we’re just about to release a report on IBM’s LinuxONE architecture – an architecture that offers the most securable computing environment in the industry, with strong cryptographic support, solid access control and a huge emphasis on privacy – as well as an immense input/output communications subsystem and tons of computing capacity.

My immediate reaction to Hyperledger was that large enterprises with strong security requirements should first and foremost consider adopting Hyperledger on LinuxONE servers. The next day I learned that IBM’s own beta-Hyperledger environment had gone into production this month – and IBM had selected LinuxONE as its platform of choice for production-grade Hyperledger. I talked with representatives of IBM’s Power Systems group and found that organization had other priorities at this time – so I don’t expect much action on Hyperledger on Power Systems in the near term.

Further, my assessment of Intel servers is that they can indeed run Hyperledger – but I also know that Intel servers would require extensive reengineering to match the security, scalability and communication subsystem advantages of IBM LinuxONE servers. Accordingly, when Hyperledger gains greater acceptance across various industries, I expect LinuxONE sales to escalate.

Cognitive solutions

For five years Clabby Analytics has been tracking IBM’s progress in automating systems management, in overlaying systems management with machine-driven analytics, and in streamlining application performance management. So, at the Edge conference, IBM did not have to convince us that it was aggressively building cognitive solutions to manage its hardware offerings.

Next week we plan to publish a report entitled “Building a Mainframe IT Service Management Plan to Deliver Operational Excellence” that describes how IBM is infusing several of its systems/storage/network management products with analytics facilities. In this Clabby Analytics Research Report we state that “We see only one vendor (IBM) that can provide a comprehensive systems operations management suite that can efficiently monitor and manage applications, databases and systems by using intuitive, cognitive and analytics-driven software.”

We note that IBM “has taken a clear leadership position in using system intelligence and analytics to solve operational and performance issues. Further, over the past few years we’ve seen the company restructure its operations management products into cost-efficient, easily consumable suites – and offer selected management capabilities through cloud services.” And we note “that IBM has also simplified its pricing and license management practices – helping to make its operations management solutions more affordable.”

Based upon our research, we would strongly recommend that information technology (IT) operations managers consider building a holistic plan to manage their systems/storage/network/cloud environments. That plan should include integrated monitoring and management solutions as well as cognitive tools that simplify management and lower the skill levels needed to troubleshoot and tune systems (thus reducing administrative costs).

Cloud architecture and platforms

I’ve never been a big fan of cloud architecture, though my fellow Clabby Analytics researcher, Jane, is. To me, the idea of virtualizing resources and delivering computing functions as services has always seemed “old hat” (I remember when IBM introduced MSNF to virtualize mainframe facilities as far back as the 1970s). But although I find cloud plumbing to be boring, I’m strongly encouraged by the number of cloud services and service delivery models that I’ve been seeing at IBM events such as Interop, Interconnect – and now, Edge.

In the past, IBM largely focused on on-premises software licensing. But now, more and more of its products are becoming service offerings – available for deployment on-premises, or available as services from IBM and/or managed service providers. What I saw at Edge was clear progress in opening up access to IBM services using application program interfaces (APIs), with a major emphasis on building gateways between public and private (hybrid) clouds, as well as increased emphasis on DevOps tools (much of IBM’s software development is now done using agile development methods).

As an example of a new cloud services offering, consider IBM’s z Operational Insights, which just became generally available. This offering is designed to provide insights into z Systems operational performance without the need to install any on-premises software. It represents a way to take advantage of IBM expertise to efficiently manage and tune a given enterprise’s mainframe environment to lower cost, and also to anonymously benchmark an installation against its z Systems peers to see how efficient a given mainframe really is (data is anonymized and compared with other installations). When mainframe users express concerns about where the next generation of mainframe managers is going to come from, I steer them to products like this, which make it possible either to run management products on premises or to obtain additional support from trained external professionals through a secure cloud.

Other notes

One of our key research agenda items at Edge was to investigate why storage hardware sales at the traditional storage leaders have been declining over the past few years. Jane Clabby’s write-up in this week’s Pund-IT Review describes what she learned at Edge regarding storage. What I learned was that storage has been in flux – teetering between traditional HDD and solid-state Flash solutions; that new Big Data-intensive applications are driving analytics workloads to differing types of storage solutions – and that software-defined storage represents the future growth opportunity for traditional storage vendors. This topic is of great interest to us – so, accordingly, expect a more detailed report on our storage market findings in late October.

A few years ago, when we started looking at large in-memory databases, we pointed out that IBM’s Power Systems would make an ideal host for SAP’s HANA environment. Our reasoning: a POWER8 microprocessor can execute 8 hardware threads per core versus Intel’s 2 – and Power Systems also offer larger memory capacity and faster communications subsystems (with the ability to set up fast solid-state memory access to the CPU using the IBM CAPI interface). At Edge we were delighted to learn that Power Systems-based HANA solutions are performing at least twice as fast as similar Intel systems – and we were told that SAP is extremely pleased with HANA on Power Systems performance.

Also worthy of note in the Power Systems group is the release of the server with the oh-so-easy-to-remember name "S822LC for HPC." It employs POWER processors as well as NVIDIA graphics processors to speed the processing of compute- and parallel-intensive workloads. Though newly released, hundreds of these servers are already on order. And given that a lot of analytics workloads now resemble high-performance computing (HPC) workloads, we expect the S822LC for HPC to be very well received in the now very broad HPC marketplace (high performance computing now spans many industries, including the medical, financial and scientific communities). IBM's Power Systems sales representatives are actively seeking to find a home for the new S822LC for HPC solution in genomics, defense, financial services, fraud detection, market intelligence and other data/compute/parallel computing markets.

Finally, the OpenPOWER Foundation reported great progress in membership growth – and hinted that some important new industry relationships may be in the offing. I'll track and report on these new members when they are officially announced. The OpenPOWER Foundation also proudly discussed a number of new POWER microprocessor-based products that have made their way to market over the past year (I covered many of these announcements in my blog on the OpenPOWER Summit).

Summary Observations

I think the key messages that IBM wanted to deliver at Edge were “Partner with us to modernize and transform your computing environment. Let us help you maximize systemic workload performance across traditional data center and new analytics environments leveraging cloud architecture.”

From what I saw of advanced systems designs, cognitive management products, and APIs that transparently enable the union of public and private cloud environments, I am convinced that IBM can already meet its goal of helping customers transform their existing traditional data centers into the highly integrated, highly efficient hybrid cloud-based compute environments of the future.

As for innovation, the many advances that I saw in IBM hardware, software and cognitive computing were overshadowed by the stunning work that the company has done in conjunction with other vendors and enterprises as part of the Hyperledger project. Hyperledger will have a momentous impact on IBM, on small, medium and large businesses, and on the entire computing industry. We at Clabby Analytics plan to track this technology very closely in the future. Blockchain and Hyperledger are definitely now on our radar screen.

Posted in Uncategorized | 1 Comment

IBM’s POWER9 Microprocessor: The Basis for Accelerated Systems

By Joe Clabby, Clabby Analytics

Over the past several years, all major server makers have focused on redesigning their server architectures in order to eliminate data flow bottlenecks – building what have become known as "accelerated systems". Likewise, the major developers of database software have found new ways to speed database processing. Together, these advances have made the fast, efficient and affordable processing of very large databases (Big Data) a practical reality.

As far back as early 2013, Clabby Analytics started to report on the progress being made in accelerated system designs (our reports on VelociData, The Now Factory, IBM's PureData system, IBM's DB2 Analytics Accelerator and IBM Power Systems accelerated designs are available upon request). Accelerated system designs focus on speeding workload processing by improving system throughput. Improved throughput is achieved in a variety of ways including increasing internal bus speed, reducing communications overhead, off-loading tasks to other types of processors, improving memory management, reducing I/O drag, tuning execution methods, creating new interfaces that streamline peripheral access to processing power, and more.

What intrigued us about the early designs was that several systems were making use of field programmable gate arrays (FPGAs) to accelerate data transfer (processing data at line speed); and some designs were also making use of graphical processing units (GPUs) to accelerate parallel processing. We also found that the makers of general purpose CPUs (especially POWER and Intel processors) were more focused than ever before on processing large amounts of data (Big Data).

In 2014 we also started tracking database accelerators (our report on IBM's DB2 BLU Acceleration vs. SAP's HANA vs. Oracle's Exadata is available upon request). What we observed was that new algorithms combined with new data processing methodologies were being used to accelerate database processing speed. Also in 2014, we noted that sales of in-memory database software from IBM, SAP, Oracle, Altibase, Exasol, Kognitio, McObject, ParStream, Quartet FS, VMware and VoltDB were on the rise – helping further accelerate the processing of large amounts of data (see this report for further details).

Today, in 2016, we’re starting to see one systems vendor pull ahead of its competitors when it comes to architecting accelerated systems. In a recent briefing, IBM shared with us its plans for further accelerating its current POWER8-based servers – and IBM also shared its future plans for its next generation POWER9 architecture.

Why Accelerated Systems?

Traditional server designs can be used to process Big Data, technical and cognitive workloads. But the bigger question is: “how efficiently?” Accelerated systems can operate exponentially faster than traditional designs – and this speed advantage results in the ability to process more and more data at a significantly lower cost.

Anecdote: When the computer system salesman told an IT executive that his new system could process data exponentially faster than the current system – the executive said “I don’t care about ‘exponentially faster’ claims”. The salesman, not missing a beat, then said “Okay – let me rephrase: How would you like to use significantly fewer systems and pay for significantly fewer software licenses in order to process your workloads?” The IT executive was suddenly more interested.
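
The salesman's pitch is really just arithmetic. The sketch below works through a purely hypothetical example of how a per-server speedup translates into fewer servers and fewer per-server software licenses; every number in it is an assumption chosen for illustration.

    # Hypothetical consolidation math: servers and licenses needed to sustain a
    # fixed workload when each accelerated server does several times the work.
    import math

    workload_units = 1000        # total work to be processed per hour (arbitrary units)
    legacy_per_node = 10         # units/hour one traditional server handles (assumed)
    speedup = 4                  # assumed per-server speedup from acceleration
    license_per_node = 5000      # assumed annual software license cost per server ($)

    legacy_nodes = math.ceil(workload_units / legacy_per_node)
    accelerated_nodes = math.ceil(workload_units / (legacy_per_node * speedup))

    print("Traditional: %d servers, $%s in licenses"
          % (legacy_nodes, format(legacy_nodes * license_per_node, ",")))
    print("Accelerated: %d servers, $%s in licenses"
          % (accelerated_nodes, format(accelerated_nodes * license_per_node, ",")))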

IBM POWER8 Accelerated Systems

In June, 2014, Clabby Analytics took a close look at IBM’s POWER8 architecture when we compared it to Intel’s E5 v2 architecture (see this report). We found that both microprocessor/server environments had been designed to process Web, file and print, email, database, vertical-specific applications, high performance computing and cloud workloads.

We also found that POWER8 processors were more efficient than their E5 Xeon competitors (due to processing and bandwidth advantages, POWER8-based servers were able to deliver results more quickly). When we looked at IBM's Power Systems designs, we found that POWER8-based servers were better suited for data-intensive environments from a performance and price/performance perspective (POWER8-based servers cost less than E5 v2-based competitors due to aggressive IBM pricing and numerous efficiency advantages).

Today, as we look more closely at POWER8-based system designs, we find that IBM has further accelerated its Power Systems by:

  • Using fast – and sometimes different – processors to handle specific workloads;
  • Enhancing memory access using crypto and memory expansion – and by introducing transactional memory;
  • Increasing on-chip cache sizes and memory buffers to make it possible to place data closer to central processing units (CPUs) and graphical processing units (GPUs) where that data can be processed more quickly;
  • Increasing internal bus speed to accelerate data flow within a server;
  • Streamlining input/output (I/O) access to processors using interfaces such as CAPI and RDMA (and, soon, NVLink) to reduce I/O overhead and simplify I/O device-to-processor data movement;
  • Streamlining networking subsystems to speed communications;
  • Using virtual addressing to allow accelerators to use the same memory that processors use in order to remove operating system and device overhead; and,
  • Introducing hardware managed cache coherence.

With the introduction of POWER8, IBM placed PCIe Gen 3 logic directly on the chip, then built an interface to this logic known as the Coherent Accelerator Processor Interface (CAPI). CAPI enables customizable hardware accelerators, Flash devices and coprocessors to talk directly, and at very high speed, with POWER8 processors. Examples of CAPI-based solutions include IBM's Data Engine for NoSQL (which allows 40GB of flash storage to be used like extended memory); DRC Graphfind analytics; and Erasure Code Acceleration for Hadoop.

The reason this CAPI interface is so important is that it eliminates the need for a PCIe bridge, as well as the need to run the thousands of operating system and driver instructions (perhaps as many as 22.5K instructions) that are executed every time PCIe I/O resources are used. Instead, the logic for driving I/O resides on the chip, where the number of instructions is vastly reduced and the speed of interactions between the CPU and associated hardware devices is dramatically improved.
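
To put that 22,500-instruction figure in perspective, the rough calculation below converts it into CPU time at an assumed clock rate, instruction throughput and I/O rate; those three inputs are assumptions for illustration, while the instruction count comes from the paragraph above.

    # Rough, illustrative estimate of the CPU time consumed by per-I/O driver overhead.
    instructions_per_io = 22500   # figure cited above for a PCIe I/O path
    ipc = 2.0                     # assumed average instructions retired per cycle
    clock_hz = 4.0e9              # assumed 4 GHz core clock
    iops = 100000                 # assumed I/O operations per second

    seconds_per_io = (instructions_per_io / ipc) / clock_hz
    cores_of_overhead = seconds_per_io * iops   # fraction of one core spent on overhead

    print("Overhead per I/O: about %.1f microseconds of CPU time" % (seconds_per_io * 1e6))
    print("At %d IOPS: about %.2f cores' worth of overhead" % (iops, cores_of_overhead))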

Later this year IBM will introduce a new interface known as NVLink to its POWER8 processor. Created by NVIDIA, NVLink tightly couples NVIDIA GPUs with POWER8, enabling POWER8 processors and NVIDIA GP100 GPUs to jointly process data. This new interface is 5X to 12X faster than PCIe Gen 3 connections – leading to even more rapid processing of data-intensive workloads.
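
As a quick sanity check on what "5X to 12X" implies, the sketch below multiplies an assumed PCIe Gen 3 x16 baseline of roughly 16 GB/s by that range; the baseline is an assumption based on a common x16 configuration, not an IBM or NVIDIA specification.

    # What a 5X-12X speedup over PCIe Gen 3 implies for CPU-GPU bandwidth.
    pcie_gen3_x16_GBps = 16.0   # assumed baseline for a x16 Gen 3 connection
    for factor in (5, 12):
        print("%2dX of PCIe Gen 3 x16 is roughly %.0f GB/s" % (factor, pcie_gen3_x16_GBps * factor))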

Another method that IBM Power Systems use to accelerate data flow is known as remote direct memory access (RDMA) which enables data to be moved from the memory/storage of one system to the memory or storage of another system – at line speed. The types of workloads that benefit most from the use of RDMA are network-intensive applications that suffer from bandwidth/latency-related data retrieval issues. These include:

  • Large scale simulations, rendering, large scale software compilation, streaming analytics and trading decisions – the kinds of applications found most often in massively parallel, high performance computing (HPC);
  • Hyper-appliance, hyperconverged and hyperscale environments where large volumes of data need to be moved between servers and associated storage; and,
  • Workloads where network latency slows database performance and interferes with virtual machine (VM) density.
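
To illustrate why bypassing the host networking stack helps these workloads, here is a simple, purely illustrative transfer-time model that compares a conventional copy-through-the-kernel path with direct, RDMA-style placement of data in application memory; the bandwidth, copy-count and overhead numbers are assumptions, not measurements of any real adapter.

    # Purely illustrative per-message transfer-time model (all inputs assumed).
    # Conventional path: per-message software overhead plus extra memory copies.
    # RDMA-style path: the NIC places data directly into registered application memory.
    def transfer_us(msg_bytes, wire_GBps, copies, per_msg_overhead_us, memcpy_GBps=10.0):
        wire_us = msg_bytes / (wire_GBps * 1e3)            # GB/s * 1e3 = bytes per microsecond
        copy_us = copies * msg_bytes / (memcpy_GBps * 1e3)
        return wire_us + copy_us + per_msg_overhead_us

    msg = 64 * 1024  # 64 KB message
    conventional = transfer_us(msg, wire_GBps=5.0, copies=2, per_msg_overhead_us=30.0)
    rdma_style = transfer_us(msg, wire_GBps=5.0, copies=0, per_msg_overhead_us=2.0)

    print("Conventional path: about %.1f microseconds per 64 KB message" % conventional)
    print("RDMA-style path:   about %.1f microseconds per 64 KB message" % rdma_style)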

IBM’s accelerator-attach technologies such as CAPI and NVLink feed data to POWER8 exponentially faster than previous generation interconnects such as PCIe. To us, IBM appears to be more aggressive in accelerator-attach interfaces than its competitors, and – accordingly – we see this as a distinct competitive differentiator.

Also Noteworthy: Memory Activities

When POWER8-based systems were originally introduced, IBM announced that memory channel speed had been improved – again enabling more data to be delivered more quickly to processors, thus accelerating the processing of large volumes of data. IBM also announced its Durable Memory Interface (DMI). This interface is agnostic, meaning it enables different types of memory to be attached to the bus (so Power Systems are no longer tied only to DRAM memory).

What POWER9 Means to the Future of Accelerated Systems

Just over a year from now (in the second half of 2017), IBM will introduce its next generation POWER9 processors. POWER9 will consist of a family of chips that will be focused on 1) analytics and cognitive; 2) new opportunities in cloud and hyperscale; and, 3) technical computing.

Some of the improvements that can be expected when POWER9 is delivered include:

  • 24 cores;
  • Improvements to IBM's Vector-Scalar eXtension (VSX) – which improves floating-point performance (especially useful for cryptographic operations);
  • Improvements in branch prediction and a shorter pipeline;
  • The use of “execution slices” to improve performance;
  • The use of large, low-latency eDRAM cache to accommodate big datasets;
  • State-of-the-art I/O speed (by leveraging PCIe Gen4) – giving POWER9 3X faster bandwidth to access I/O and storage as compared with POWER8 (see the PCIe arithmetic sketch after this list);
  • A new 25Gbps advanced accelerator bus;
  • On-chip compression and cryptographic accelerators;
  • Access to next generation NVLink 2.0 to increase speed by 33%, plus memory coherence across GPUs and CPUs to enhance the usability of GPUs; and,
  • Optimization for 2-socket servers using direct-attached DDR4 memory channels.

POWER9-based servers will be delivered in multiple scale-up and scale-out form factors. And IBM, as could be expected, will continue to concentrate on making its next generation POWER processors energy efficient, strong in security features and rich in quality-of-service functionality (such as high availability and resiliency).
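
To show where a generational I/O jump like this comes from, here is a small per-lane arithmetic sketch: PCIe Gen 3 signals at 8 GT/s and Gen 4 at 16 GT/s, both with 128b/130b encoding, so per-lane bandwidth roughly doubles. The lane counts used below are illustrative assumptions; the 3X figure quoted above presumably also reflects additional lanes and other I/O changes beyond the raw per-lane rate.

    # Per-lane PCIe arithmetic: Gen 3 at 8 GT/s, Gen 4 at 16 GT/s, 128b/130b encoding.
    # Lane counts below are illustrative assumptions, not a POWER9 specification.
    def pcie_GBps(gt_per_s, lanes):
        return gt_per_s * (128.0 / 130.0) / 8.0 * lanes   # GB/s per direction

    print("PCIe Gen 3 x16: about %.1f GB/s per direction" % pcie_GBps(8, 16))
    print("PCIe Gen 4 x16: about %.1f GB/s per direction" % pcie_GBps(16, 16))
    print("PCIe Gen 4 x48: about %.1f GB/s per direction" % pcie_GBps(16, 48))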

Summary Observations

In days gone by, systems makers focused on increasing processor speed in order to improve processing performance. For almost 50 years, the number of transistors on a processor doubled about every two years (known as "Moore's Law") – so improving system performance constantly centered on improved processing power.
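
Stated as a formula, the doubling rule says that after t years the transistor count grows by a factor of roughly 2^(t/2). A minimal sketch, using an arbitrary starting count purely for illustration:

    # Moore's Law as stated above: transistor counts double roughly every two years.
    def growth_factor(years):
        return 2 ** (years / 2.0)

    for years in (2, 10, 20, 50):
        factor = growth_factor(years)
        print("After %2d years: roughly %sx the starting transistor count"
              % (years, format(factor, ",.0f")))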

But in the late 2000s, processors reached their peak in terms of the number of transistors that could practically be placed on a chip, forcing systems designers to focus more heavily on tuning memory and input/output (I/O) device access, on improving internal bus speed, on improving communications and on streamlining memory access in order to make systems perform more quickly. These highly tuned systems have become known as "accelerated systems".

IBM has been particularly aggressive in driving breakthroughs that attack bandwidth-limiting bottlenecks. It has created hardware accelerators; it has embedded accelerators at the chip level; it has implemented its own hardware interface (CAPI); and it is using NVIDIA's NVLink to connect GPUs directly to its POWER8 and POWER9 processors. From a memory perspective, it uses RDMA to speed memory access; it has increased memory channel bandwidth; and it will soon offer its customers the opportunity to use multiple different types of memory in its Power Systems. POWER8 offers large memory caches – as will POWER9.

It should be clear to readers that Power Systems are all about moving data efficiently. IBM has focused strongly on overcoming latency issues and other bottlenecks – delivering some of the most powerful servers in the industry for data processing. Meanwhile, an entire ecosystem has grown up around POWER8 – enabling major vendors from around the world to contribute new, innovative solutions to the POWER ecosystem.

As we look at IBM’s competitors, we see all leading systems makers working on accelerated system designs. However, we see several distinct differentiators in IBM’s Power Systems lines including several accelerator-attach technologies, high performance and strong cost/performance. We also see contributions and innovations introduced through the OpenPOWER Foundation as a major differentiator. From our perspective, IBM’s aggressive efforts in server acceleration and in database acceleration – combined with its broad portfolio of analytics software – make the company a formidable competitor in the world of accelerated, Big Data servers.

IBM's mantra for POWER8 was "designed for data" – meaning that several of its central features were designed to accelerate the processing of large databases. With POWER9, we believe the new mantra should be "designed for accelerated systems leadership" because the new POWER9 processor and system designs should significantly outperform competing Intel Xeon E7, Oracle SPARC M and SPARC64 system designs. For enterprises planning their future Big Data strategies, it is time to become very familiar with IBM's fast-approaching POWER9 architecture.

Posted in Uncategorized | Leave a comment

Comtrade Announces Microsoft SCOM Management Pack for Nutanix

By Jane Clabby, Clabby Analytics

Comtrade Software, a provider of IT infrastructure and application management and monitoring solutions, recently announced the Microsoft System Center Operations Manager (SCOM) Management Pack for Nutanix. The solution enables IT administrators to monitor the Nutanix appliance, providing visibility into infrastructure performance and end-user experience. It will monitor virtual machines (VMs) and look at the health of nodes, clusters, and storage, as well as track data protection and replication status.

According to IDC MarketScape: Worldwide Hyperconverged Systems 2014 Vendor Assessment, Nutanix had a 52% share of hyperconverged revenue during the first half of 2014. Hyperconverged systems – those that integrate compute, storage, networking and virtualization into a software-defined architecture that can be run on commodity hardware – are growing in popularity based on their flexibility, scalability, cost-effectiveness, integrated management and single vendor support.

Background

Comtrade Software is an IT solutions and software engineering services business specializing in monitoring, data protection and software integration. Comtrade Software is the software arm of Comtrade Group, which was established in 1990. The privately held company has 1,500 employees in 11 countries across Europe and North America. Technology centers are located in three major cities – Ljubljana (Slovenia), Belgrade (Serbia) and Sarajevo (Bosnia and Herzegovina). Comtrade has more than 1,000 customers in a range of industries, and its partners include Microsoft, Cisco, IBM, Intel, HP, Citrix, Fujitsu, EMC, Oracle, F5 and many others. Comtrade Software boasts 46% year-over-year revenue growth and 60% year-over-year customer growth.

Comtrade provides native Microsoft System Center Operations Manager plug-ins, or "management packs", giving IT administrators insight into infrastructure performance and end-user experience and helping ensure uninterrupted business operations on-premises or in the cloud. The company prioritized the Microsoft SCOM management platform based on its rapid market growth, "single pane of glass" architecture, scalability and analytics capabilities. Comtrade's management packs extend SCOM's capabilities for faster problem resolution, higher availability, reduced downtime and better administrative productivity and efficiency through automation of processes and common activities.

Comtrade provides SCOM management packs for F5 Networks, an Application Delivery Networking (ADN) company that optimizes the delivery of network-based applications and the security, performance and availability of infrastructure components such as servers, data storage, and other network resources. Comtrade's Citrix management packs were acquired by Citrix in January 2016.

This latest announcement enables Comtrade to leverage Nutanix’ success in the hyperconverged market. The Comtrade management pack for Nutanix provides key insight—with monitoring and automatic alerting for infrastructure components which impact application performance (storage, networking, I/O rates) but are outside the control of the Nutanix appliance—through a consolidated dashboard view.

SCOM MP for Nutanix – A Closer Look

The SCOM MP for Nutanix provides 360 degree monitoring of Nutanix hyperconverged infrastructure, looking at the health of infrastructure components and providing proactive alerts to IT in the event of abnormalities. Dashboard views identify areas of highest resource consumption. Application insight automatically discovers the role of the virtual machine (VM) by identifying the business critical applications that it is running. The applications dashboard illustrates the connection between the infrastructure and application layer, and enables infrastructure teams to identify possible issues and how they will impact application performance, enabling fixes to be performed before they affect users.

In the initial release, application awareness support includes Citrix XenApp/XenDesktop, with Microsoft Exchange, Skype for Business and SQL Server planned for subsequent releases.

Here are some highlights:

  1. By looking at the "Storage Overview" dashboard, the solution can identify when the free storage pool is low, alerting administrators to add more storage before application performance is impacted (a simplified version of this kind of rule is sketched after this list).
  2. Top VMs are ranked by CPU usage, memory usage, storage etc. in the “Overview VMs” dashboard so that VMs within the cluster that are being highly utilized can be flagged proactively before users are affected.
  3. Top clusters are ranked by CPU usage, memory usage, free space etc. so VMs can be migrated from overloaded clusters to those using fewer resources in order to maintain consistent application performance.
  4. By identifying which business critical applications are running on a cluster in the Application Dashboard, administrators can understand the roles of each VM and the resources consumed and make adjustments based on business requirements.
  5. In the "Overview Data Protection," "Protection Domains Table" or "Data Protection Topology" dashboards, any issues with replication can be detected, assuring high availability of business critical applications.
  6. The solution alerts when the Controller Virtual Machine (CVM) is down and provides detailed information about the problem and possible steps for resolution.
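
The "Storage Overview" check in item 1 comes down to a threshold rule. Here is a minimal, hypothetical sketch of that kind of rule in Python; the pool data, threshold and print-style alerting are invented for illustration and do not represent Comtrade's implementation or the actual SCOM or Nutanix APIs.

    # Hypothetical free-capacity rule in the spirit of the "Storage Overview" dashboard.
    # Pool figures, threshold and alert output are all invented for illustration.
    FREE_SPACE_WARNING_PCT = 20.0

    pools = [
        {"name": "pool-01", "capacity_tb": 40.0, "used_tb": 33.5},
        {"name": "pool-02", "capacity_tb": 40.0, "used_tb": 12.0},
    ]

    def check_pool(pool):
        free_pct = 100.0 * (pool["capacity_tb"] - pool["used_tb"]) / pool["capacity_tb"]
        if free_pct < FREE_SPACE_WARNING_PCT:
            print("ALERT: %s has only %.1f%% free - consider adding capacity" % (pool["name"], free_pct))
        else:
            print("OK:    %s has %.1f%% free" % (pool["name"], free_pct))

    for p in pools:
        check_pool(p)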

Summary Observations

As the IT landscape has become increasingly complex with growth in virtualization, cloud infrastructure and Big Data, IT managers have struggled to maintain performance, scalability and control. Hyperconverged systems, which package storage, memory and CPU power in a single, easily-managed appliance, enable IT administrators to respond quickly to unpredictable growth while still optimizing performance and availability. Nutanix emerged as an early leader in this market, challenging competitors such as EMC, NetApp and HP, and expanding the list of supported workloads to include business applications such as Microsoft Exchange and SQL Server. The Comtrade SCOM MP for Nutanix enables Comtrade to leverage this growth by providing a solution with native SCOM integration that monitors not only the components of the Nutanix appliance, but also the underlying infrastructure elements that impact application performance. This gives administrators a comprehensive single pane of glass view that provides a much more complete picture of potential problem areas, and enables proactive management – reducing downtime and improving application availability.

Posted in Uncategorized | Leave a comment