IBM DS8880F All-Flash Data Systems – Integral to IBM’s Cognitive Computing Strategy

By Jane Clabby, Clabby Analytics

When we think of data storage today, chances are that the first things that come to mind are “object storage”, “cloud storage”, “commodity storage” and “software-defined storage”. Nobody really talks much about SANs and block storage anymore – except for high-end database and transaction-oriented applications. The same has been true of IBM’s storage strategy.

For the past couple of years, IBM has made many announcements and published many marketing documents positioning its Storwize and Spectrum portfolios, including “all-flash” versions of those products. The DS8000 family, however, has typically been relegated to the background, with enhancements focused primarily on “speeds and feeds” – x times faster, x times more capacity, x times lower latency – rather than on new use cases and support for different types of workloads.

Imagine my surprise when IBM’s first storage briefing of 2017 highlighted not only the new all-flash DS8880F arrays, but also the strategic nature of the DS8880F family and how it underpins IBM’s cognitive computing and analytics offerings. In addition, this announcement introduces a lower-cost entry point (starting at 95,000 USD) for DS8880-class storage, bringing high-performance, low-latency storage to mid-range customers. According to IBM, a more cost-effective offering enables businesses in emerging geographies such as Latin America and Eastern Europe to afford DS8880F benefits in a smaller package.

New use cases/workloads

Here are a couple of examples of workloads that are ideal for the DS8880 family:

  • Cognitive computing

IDC forecasts that global spending on cognitive systems will reach nearly $31.3 billion in 2019, reflecting a five-year compound annual growth rate (CAGR) of 55%. With Watson, IBM has been a pioneer in cognitive computing, using Watson-based artificial intelligence technology in applications across a wide range of industries and enabling humans to solve complex problems more efficiently. Cognitive applications ingest, process and correlate huge volumes of data, which requires a robust computing infrastructure and high-performance, high-capacity storage.

  • Real-time analytics

Another area where IBM is seeing customer interest is real-time analytics, where information can be collected, processed, and analyzed in real-time for applications such as credit card fraud detection and the Internet of Things (IoT). The DS8880F provides robust resiliency and data security for the source OLTP data set, as well as the throughput and performance to enable real-time analytics. As a result, financial institutions can match credit card use patterns against real-time data collection, detecting potential fraud before it happens and saving millions of dollars. In transportation, IoT data collected from cars, street lights, cell phones and other sources, combined with real-time analytics, will enable driverless cars. In health care, patients can be monitored – collecting and analyzing several metrics simultaneously in real-time – to alert medical staff to potential problems, improving patient outcomes.

New-generation analytics and cognitive applications simply can’t be handled by traditional shared-nothing Hadoop clusters, HDFS, and commodity storage. Those systems are designed for scale and low cost, but don’t have the bandwidth or performance to support real-time analytics and cognitive systems in which businesses must collect and analyze data instantaneously. The new DS8880F data systems are designed specifically for these types of workloads, which require lower latency and sub-millisecond response time.

Workload consolidation

The new DS8880F models also enable the consolidation of all mission-critical workloads, both new and traditional, for IBM z Systems and IBM Power Systems under a single all-flash storage family.

  • Cognitive – Including Watson Explorer, Watson Content Analytics and Watson APIs that allow customers and ISVs to create their own Watson-based applications.
  • Analytics – Including IBM Cognos Analytics, IBM SPSS, IBM InfoSphere BigInsights, SAS business intelligence, Elasticsearch, Apache Solr and others.
  • Traditional/Database – Including IBM DB2, Oracle, SAP, PostgreSQL, MongoDB, Cassandra and others.

DS8880F all-flash data systems – A closer look

IBM’s new line-up of DS8880F data systems includes three all-flash arrays designed to support applications from entry-level business class to enterprise class to the “analytics” class required for real-time analytics and cognitive workloads. IBM takes a different approach in the design of its all-flash arrays: rather than just replacing HDDs with flash cards, IBM has completely rearchitected the system, optimizing it for flash for better performance. The systems use the Flash Enclosure Gen2 with 2.5” drives in a 4U enclosure and are built on the Power CECs (Central Electronic Complexes) used in Power servers. The models vary in number of cores, capacity, cache and number of Fibre Channel/FICON ports, but functionality is the same across the product line.

Here are the details:

  • DS8884F Business Class
    • Built with IBM Power Systems S822
    • 6-core POWER8 processor per S822
    • 256 GB cache (DRAM)
    • 32 Fibre Channel/FICON ports
    • 6.4 TB to 154 TB of flash capacity
  • DS8886F Enterprise Class
    • Built with IBM Power Systems S824
    • 24-core POWER8 processor per S824
    • 2 TB cache (DRAM)
    • 128 Fibre Channel/FICON ports
    • 6.4 TB to 614.4 TB of flash capacity
  • DS8888F Analytics Class
    • Built with IBM Power Systems E850
    • 48-core POWER8 processor per E850
    • 2 TB cache (DRAM)
    • 128 Fibre Channel/FICON ports
    • 6.4 TB to 1.22 PB of flash capacity

The Flash Enclosure Gen2 provides significant improvements over the previous generation in both IOPS (read: 500,000, up 47%; write: 300,000, up 50%) and throughput (read: 14 GB/s, up 268%; write: 10.5 GB/s, up 288%). Beyond the performance gains of all-flash itself, the architecture attaches the enclosure directly to the PCIe Gen3 bus rather than through a device adapter, eliminating a portion of the data path, which lowers latency and improves response time. The drawer has also been redesigned to provide higher bandwidth into the I/O drawer and to use an ASIC rather than an FPGA (also improving bandwidth).
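As a rough sanity check on those percentages, the implied previous-generation figures can be back-computed (a sketch; the Gen1 values below are inferred from the stated improvements, not published numbers):

```python
# Back-compute the implied previous-generation (Gen1) figures from the
# stated Gen2 numbers and percentage improvements. Illustrative arithmetic
# only -- the Gen1 values are inferred, not published figures.
gen2 = {
    "read_iops":  (500_000, 0.47),  # (Gen2 value, improvement over Gen1)
    "write_iops": (300_000, 0.50),
    "read_gbps":  (14.0,    2.68),
    "write_gbps": (10.5,    2.88),
}

implied_gen1 = {name: value / (1 + pct) for name, (value, pct) in gen2.items()}

for name, value in implied_gen1.items():
    print(f"{name}: implied Gen1 ≈ {value:,.1f}")
```

By this arithmetic, the Gen1 enclosure would have delivered roughly 340,000 read IOPS and about 3.8 GB/s of read throughput, which underscores how much of the gain comes from the redesigned data path rather than flash media alone.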

Other features of the DS8880F series include greater than “six-nines” availability; point-in-time copy with IBM FlashCopy; and Remote Mirror and Copy functions (Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror, z/OS Global Mirror, and z/OS Metro/Global Mirror) providing disaster recovery/replication across up to four sites and 1,000 miles with a 3-5 second RPO (recovery point objective) for businesses where continuous operation is critical.

Summary observations

With this announcement, IBM expands its offerings of high-end enterprise-class storage by adding a new lower-cost entry point that makes the systems affordable for a broader range of customers, while also enabling new use cases that require the performance of all-flash. The optimized architecture further reduces latency and response time, which will differentiate IBM in markets that require real-time analysis and results. Not only will these systems improve performance in their traditional database and OLTP realm, but the DS8880F family will also become a key building block in infrastructure built to support cognitive and analytics applications.

In order to attract new customers, IBM also plans to expand marketing outreach by promoting the DS8880 systems as part of its broader cognitive computing strategy. The recent analyst briefing was a good start. Like IBM, I have been guilty of shunting aside the DS8000 family in favor of covering IBM Storwize, Cleversafe object storage and the IBM Spectrum software-defined family. In the future, I will pay closer attention. And IBM customers and prospects – you should, too.


Year End Review: IBM Systems Group

By Joe Clabby, Clabby Analytics

Tom Rosamilia, Senior Vice President of IBM Systems Group, recently presented his annual year-end recap. In his opening statement, he reviewed IBM’s mission in the IT marketplace: to “continue to transform and innovate in technology, business models, and skills for the cognitive era.” He then pointed out several IBM acquisitions that illustrate the company’s focus in these areas, including Truven Health Analytics, Explorys, Merge, Clearleap, Ustream, The Weather Company and more.

What jumped out at me was that most of the new solutions that IBM has brought to market are focused on specific industries, like healthcare, or are focused on helping other vendors bring analytics solutions to the market, such as The Weather Company. Although IBM continues to deliver the infrastructure, database and transaction products it always has, it must be noted that the company truly sees itself as a leader in analytics and cognitive solutions.

IBM Systems’ accelerated 2016

Rosamilia underscored this point with a review of the key products in his portfolio that support the company’s emphasis on analytics. These included the activities that have taken place around IBM’s POWER architecture, its z Systems/LinuxONE offering, and in storage. Coincidentally, in 2016, Clabby Analytics wrote in-depth reports on each of these topics, including this POWER9 report, this LinuxONE report and this storage/software-defined report.

Rosamilia’s POWER commentary highlighted innovations around the company’s OpenPOWER initiative (which we cover in-depth in this blog), as well as IBM’s emphasis on extraordinarily fast Power-based servers that leverage other processor architectures, such as NVIDIA graphics processing units (GPUs), to serve the analytics marketplace.

He focused on Google and Rackspace’s efforts to develop an open server specification using IBM’s forthcoming POWER9 architecture, then described 2016’s arrival of POWER LC servers that combine POWER8 processors with NVIDIA’s GPUs and NVLink interconnect technology. Rosamilia also spent time discussing IBM’s open sourcing of its CAPI (Coherent Accelerator Processor Interface) technology and the progress being made within the OpenCAPI Consortium, as well as the company’s continuing efforts with NVIDIA to deliver PowerAI solutions.

My key take-aways from this discussion were that IBM is continuing to aggressively build “accelerated systems” that use multiple types of processors to accelerate analytics workloads, and that IBM is successfully engaging open communities to help build solutions on its POWER architecture and complementary technologies.

Z Systems, LinuxONE and Blockchain

The key points that Rosamilia chose to focus on regarding the company’s z Systems/LinuxONE mainframe architecture centered on positioning LinuxONE for hybrid cloud environments; the use of z13s for encrypted hybrid clouds; the relationship of the z/OS operating environment and the Apache Spark movement (a better way of processing large volumes of analytics data than Hadoop); the EZSource acquisition (for code analysis); and the availability of secure cloud services for Blockchain on LinuxONE.

The new “news” in Rosamilia’s review of z Systems/LinuxONE was his emphasis on Blockchain and HSBN (the company’s “high security business network”). Blockchain serves as the basis for a new way to perform transaction processing, one that features a secure “open ledger” shared amongst all concerned parties during the transaction. This new approach streamlines transaction and business processes and enables significantly greater security than traditional approaches.
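The tamper-evidence at the heart of that open-ledger model can be illustrated with a minimal hash chain (a conceptual sketch in Python; Hyperledger’s actual data structures and consensus protocols are far more involved):

```python
import hashlib
import json

def add_entry(chain, record):
    """Append a record, linking it to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash; any altered record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = digest
    return True

ledger = []
add_entry(ledger, {"from": "A", "to": "B", "amount": 100})
add_entry(ledger, {"from": "B", "to": "C", "amount": 40})
assert verify(ledger)

ledger[0]["record"]["amount"] = 100_000   # tampering breaks every later link
assert not verify(ledger)
```

Because each entry commits to the hash of its predecessor, no party sharing the ledger can quietly rewrite history, which is what makes the shared open ledger trustworthy without a central arbiter.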

I had not been aware that IBM had created a service offering featuring IBM LinuxONE servers overlaid with Hyperledger that enables customers to form smart contracts, to create shared ledgers, and to gather consensus along the route of completing a transaction – all in a secure and private environment. IBM claims that it is making solid headway with this offering in the securities, trade, finance, syndicated loans, supply chain, retail banking, public records and digital property management industries. Rosamilia shared examples of success stories using this LinuxONE/Bluemix offering, including the activities taking place at Wells Fargo, Walmart and Everledger.

IBM Storage in a Flash

In storage, Rosamilia focused on 2016 activities that resulted in IBM flash and software-defined solutions. He described efforts to round out the company’s flash array offerings from the low end through the high end, and also described how the company is providing storage solutions driven by software and appliance designs, along with IBM’s storage-as-a-service cloud offerings.

Rosamilia also provided examples of how IBM’s software-defined storage products are being used, including a discussion of DESY (the Deutsches Elektronen-Synchrotron research facility in Germany, which is using IBM’s Spectrum Scale to analyze petabytes of data in minutes, not days), the Arizona State Land Department (super-efficient land administration using IBM FlashSystems), and bitly (using IBM Cloud Object Storage to accelerate consumer research with faster, easier access to data derived from capturing over 10 billion online clicks per month).

Summary Observations

Rosamilia’s year-end review of IBM Systems’ highlights was a good 50,000-foot overview of the most important activities of 2016. But there is far more going on within this group than meets the eye.

Two years ago, IBM’s POWER organization was struggling: its former UNIX market stronghold was weakening as customers shifted to Linux on x86 architecture, and revenues were in steep decline. To right the ship, IBM decided to open source its POWER architecture to the industry. As a result, the company has revived its revenue stream while fostering advanced and innovative systems from the OpenPOWER community. What IBM’s POWER organization has done is truly remarkable: it rescued this architecture from the declines suffered by competitors including Oracle (Sun) and HPE, opened it up for collaborative systems integration, and built incredibly powerful new system designs using POWER processors, GPUs and FPGAs (field-programmable gate arrays).

For over 20 years, ever since industry pundits in the mid-1990s forecast the demise of the IBM mainframe, Clabby Analytics has taken the position that there is no other architecture better suited for processing secure transactions (and now in-transaction analytics workloads) than IBM’s z System. Given this position, we see IBM’s new LinuxONE mainframe servers as ideally positioned to support a projected major market move toward Hyperledger and Blockchain transaction processing over the coming years. This movement should greatly escalate the sale of mainframe servers. Long live the mainframe!

As for storage, the markets that IBM and every other enterprise vendor focus on have changed tremendously over the past few years, as customers shifted from traditional workloads to include more compute- and data-intensive workloads (genomics, simulations, what-if analysis, cognitive) and next-generation Big Data and born-in-the-cloud applications like Hadoop, NoSQL, Spark and Docker. Accordingly, IT executives are now looking for storage and software-defined infrastructure options that provide better IT performance, scalability, and agility at significantly lower cost. In addition, these same executives are grappling with which workloads belong on-premises – and which can be shifted to low-cost public cloud storage.

To address traditional storage requirements, as well as the new generation of compute- and/or data-intensive applications, IBM has revamped its storage line to include a complete range of solutions (including software-based offerings, services and storage hardware options such as appliances and all-flash arrays). To many, IBM’s myriad storage offerings may seem confusing, but from the perspective of IT managers and executives, storage needs to make full use of varying technologies; needs to accommodate private and public clouds; and needs to support both traditional applications and new workloads, including analytics. IBM storage accomplishes all of these objectives.

IBM’s year-end review was excellent at a high level. For more detail on each initiative, take a look at our free reports on IBM LinuxONE, IBM storage and software-defined storage, and POWER architecture.


INETCO – Monitoring and Analytics for EMV Chip Card Transactions

By Jane Clabby, Clabby Analytics

EMV (for Europay, MasterCard and Visa, the three companies that developed the standard) is a global standard for credit/debit cards that use computer chips to authenticate and secure transactions. More secure than magnetic-stripe (magstripe) cards, EMV chip cards protect bank information by generating a unique transaction code each time the card is used for payment. As a result, if transaction data is stolen, it cannot be used again – greatly reducing the risk of counterfeit card fraud. Businesses and credit card companies are in the midst of a transition from magstripe-based transactions to chip-based transactions.
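The principle behind those one-time codes can be sketched with a keyed MAC over the transaction details plus a counter (a simplified illustration only; real EMV cryptograms, known as ARQCs, use card-resident keys and a more involved derivation than shown here):

```python
import hmac
import hashlib

# Hypothetical card key for illustration; in EMV the key lives in the
# chip's secure element and never leaves the card.
CARD_KEY = b"secret-key-stored-in-the-chip"

def transaction_code(amount_cents, merchant_id, counter):
    """Derive a one-time code bound to this specific transaction and counter."""
    msg = f"{amount_cents}|{merchant_id}|{counter}".encode()
    return hmac.new(CARD_KEY, msg, hashlib.sha256).hexdigest()[:16]

# The same purchase produces a different code each time because the
# card's transaction counter advances -- so captured data can't be replayed.
code1 = transaction_code(4999, "MERCHANT-42", counter=1)
code2 = transaction_code(4999, "MERCHANT-42", counter=2)
assert code1 != code2
```

Since the issuer can recompute the expected code from its own copy of the key and counter, a stolen code from one transaction is useless for forging the next, which is exactly why chip transactions resist counterfeit fraud where static magstripe data does not.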

In a recent briefing, Marc Borbas, VP of Marketing at INETCO, a global provider of real-time transaction monitoring and analytics software, described how INETCO is helping ease this transition by providing insight to (1) prioritize terminal migration to EMV, (2) identify fraudulent usage patterns on non-EMV devices, (3) explain why EMV-capable transactions have defaulted to magstripe and (4) discover how to reach peak EMV efficiency.

INETCO Insight and INETCO Analytics give both card issuers and merchants the ability to make data-driven decisions during the EMV transition to reduce fraud liability, improve transaction performance and optimize terminal conversion.

INETCO Background

INETCO Insight provides agent-less transaction monitoring software and a data streaming platform for a real-time, end-to-end operations view into the performance of all digital banking transactions across banking channels, self-service networks, and payment processing environments. By using real-time network-based data capture rather than, for example, collecting data from log files, all transaction data is collected – including failed transactions, transaction attempts, fragments and duplicates – enabling IT operators to proactively identify issues.

As banks increasingly shift from human interaction to a range of self-service channels, valuable customer-oriented data is generated that can help financial institutions better serve their customers. The INETCO Analytics on-demand banking and customer analytics platform analyzes this collected data to provide business insight into how customers are interacting with retail banks and financial institutions – improving both profitability and customer experience. EMV migration is one example of how INETCO can collect and analyze transaction data to provide business value.

EMV transition

The adoption of EMV has already contributed to a drop in counterfeit fraud rates: in May 2016, Visa reported a 47% drop in counterfeit fraud at EMV-enabled merchants compared to the previous year. This may sound like good news, but according to The Strawhecker Group, by September 2016 only 44% of US retailers were estimated to have EMV-capable terminals, and only 29% could actually accept chip-based payments. As merchants and credit card companies make the change-over, MasterCard has seen a 77 percent year-over-year increase in counterfeit card fraud among merchants who have not completed the transition to EMV. In fact, $4 billion in fraud is expected this year, and as much as $10 billion between now and 2020, according to a new study from antifraud company iovation and financial industry consultant Aite Group.

Initially, the responsibility for fraudulent transactions was with the card issuer, but after October 1, 2015, liability shifted to the merchant (in most cases) if they had not upgraded systems to use chip-enabled cards, devices and transactions. This provides an incentive for both card issuers and merchants to make a speedy transition to chip technology.

But this transition is not without its challenges; with better security comes increased complexity. The transaction size is larger, and the chip, card and PIN all need to be verified – which may impact performance. The new terminals are expensive, and software may need to be upgraded to ensure interoperability. In addition, there are many decisions surrounding the transition itself. Which terminal locations should be migrated first? What is the competition doing? What is my transaction volume, and which terminals are the most profitable? What is the customer density in a given location?


As stated earlier, there are four ways INETCO can help with EMV migration. Let’s look at each in greater detail.

  1. Prioritize terminal migration to EMV – Many factors should be considered when deciding which terminals to migrate first. INETCO’s analytics can determine where non-compliant terminals are located and assess the impact. Terminals with high transaction volumes and/or high profitability can be upgraded first, and by looking at customer density within a particular location, businesses can decide to move new terminals to a different location.
  2. Identify fraudulent usage patterns on non-EMV devices – For businesses in the midst of the transition to EMV, INETCO can collect and analyze information to detect activity in magstripe transactions that could indicate fraudulent usage, minimizing financial exposure. For example, if a transaction is identified as high-risk based on a particular pattern (volume, time of day, location, etc.), the transaction can be declined.
  3. Provide information on why EMV-capable devices have defaulted to magstripe – Since many businesses are running magstripe and EMV in parallel as they shift over, it is important to identify why a particular transaction that should have been processed as EMV wasn’t. INETCO can easily spot transactions that should have gone EMV, identify the source of the issue, and help with charge-back dispute resolution. In addition, operators can see the split between magstripe and EMV in real-time and/or set an alert if the ratio crosses a specified threshold.
  4. Discover how to reach peak EMV efficiency – INETCO has identified three dimensions to consider during EMV migration. First, businesses should look at terminals and what percentage has been converted; configuration, certification and roll-out issues should be identified so that each conversion benefits from the lessons of previous ones, and merchants should be educated on the benefits of converting. Second, card issuers should look at what percentage of the customer base is active and how the cards are functioning; consumers, too, should be educated on the security benefits of chip cards. And finally, businesses must look closely at the transactions themselves and how to achieve top performance: are there software upgrade or interoperability issues affecting transactions? If so, what is the impact, and how can the issue be resolved?
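The monitoring logic behind points (2) and (3) above can be sketched as a toy rule-based check: flag high-risk magstripe transactions, and alert when the EMV fallback ratio on capable terminals crosses a threshold. (Illustrative only; the rules and field names are hypothetical and are not INETCO’s API.)

```python
# Hypothetical rule-based monitor: flag high-risk magstripe transactions
# and alert when too many EMV-capable terminals fall back to swipe.
HIGH_RISK_HOURS = range(1, 5)        # e.g. 01:00-04:59 local time (assumed rule)
FALLBACK_ALERT_RATIO = 0.20          # alert if >20% of capable-terminal txns fell back

def is_high_risk(txn):
    """A magstripe transaction matching a risky pattern (amount or hour)."""
    return txn["method"] == "magstripe" and (
        txn["amount"] > 1000 or txn["hour"] in HIGH_RISK_HOURS
    )

def fallback_ratio(txns):
    """Share of transactions on EMV-capable terminals that fell back to swipe."""
    capable = [t for t in txns if t["terminal_emv_capable"]]
    if not capable:
        return 0.0
    fallbacks = [t for t in capable if t["method"] == "magstripe"]
    return len(fallbacks) / len(capable)

txns = [
    {"method": "chip",      "amount": 50,   "hour": 14, "terminal_emv_capable": True},
    {"method": "magstripe", "amount": 2500, "hour": 2,  "terminal_emv_capable": True},
    {"method": "magstripe", "amount": 20,   "hour": 12, "terminal_emv_capable": False},
]

assert is_high_risk(txns[1])
if fallback_ratio(txns) > FALLBACK_ALERT_RATIO:
    print("ALERT: EMV fallback ratio above threshold")
```

A production system would of course score patterns statistically across millions of transactions in real time rather than with fixed thresholds, but the shape of the decision is the same.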

Summary observations

With so many changes in the way consumers interact with banks via mobile, on-line and other self-service channels, INETCO has evolved to support not only IT operators in proactively identifying performance issues, but business managers as well. With collected transaction data and INETCO Analytics, customer behavior and preferences can be analyzed to improve user experience and fraud patterns can be detected for better risk management – providing a much broader range of potential use cases for INETCO.

During the briefing, Borbas introduced the concept of the “Uberbanked Consumer”: today’s consumer who is faced with many banking options outside the realm of traditional banking, where a loyal customer selects a single bank for checking, savings, money market accounts, mortgages and other financial needs. The Uberbanked consumer values convenience, customer experience and a good deal – and will use a range of solutions from a range of financial institutions, both traditional (a bank) and non-traditional (Venmo). Because these users are fickle, transaction performance is becoming more and more important as traditional financial institutions compete to maintain customer loyalty and mindshare. This is another use case where INETCO can provide unique value.

I also inquired about Blockchain, a protocol that originated in 2009 with Bitcoin to record financial transactions and has since evolved to record the transfer and ownership of assets in many industries, providing database records that are validated and propagated and, more importantly, cannot be changed or deleted. INETCO is following this potentially disruptive trend closely, and believes there may be a future opportunity for the company to provide a centralized view of a system that is inherently decentralized.

I was pleased to see that INETCO has stuck to its roots in transaction monitoring and analytics for payment processing and financial institutions. At the same time, the company is well-positioned for the future–embracing new types of users, additional retail banking channels, adjacent industries (such as retail) and a growing portfolio of use cases.





Western Digital Adds Entry-Level Cloud-Scale Object Storage System

By Jane Clabby, Clabby Analytics

On November 15, 2016, Western Digital introduced a new addition to its object-based storage (OBS) solution portfolio: the ActiveScale P100. The integrated turnkey system is an easy-to-deploy entry-level system that scales modularly from 720TB to 19PB of raw capacity, and is designed for Big Data applications across on-premises and public cloud infrastructure in a range of industries, including life sciences, media and entertainment, and government/defense. Included in the new offering (and also available for the existing Active Archive System) is ActiveScale CM (cloud management), a new cloud-based monitoring tool that provides remote system health monitoring and predictive performance and capacity analytics.


According to IDC, file and object storage is expected to be a $32 billion market by 2020. Comprised primarily of unstructured data sets, these large volumes of data are increasingly used for Big Data analytics in applications such as fraud detection, machine learning, genomics sequencing, and seismic processing.

The Western Digital OBS family includes the new ActiveScale P100, and the Active Archive System. Both are scale-out OBS solutions that provide the benefits of object storage — including massive scale, easy and immediate access to data, and data longevity. Vertical integration makes these systems easier to buy, deploy and manage, and the tuning and optimization provide better performance than solutions that include white box or other DIY components.

Major features of the new ActiveScale P100 and existing Active Archive System include:

  • Amazon S3-compliant scale-up and scale-out solution – ideal for Big Data applications.
  • Strong consistency ensures that data is always up-to-date.
  • BitDynamics continuous data scrubbing provides integrity verification and self-healing.
  • Advanced erasure coding offers better data durability (up to 15 nines) than traditional RAID systems.
  • BitSpread provides data protection without replication, so capacity requirements are reduced.
  • ActiveScale SM (system management) provides a comprehensive, single-namespace view across scale-out infrastructure.
  • ActiveScale CM is a new cloud-based monitoring tool that provides remote system health monitoring and predictive performance and capacity analytics for multiple namespaces.
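The capacity argument behind the BitSpread and erasure-coding points above can be illustrated with simple arithmetic (a sketch with hypothetical parameters; Western Digital’s actual coding scheme is policy-configurable and is not published in these terms here):

```python
# Compare the raw capacity needed to store 1 PB of user data under
# 3-way replication versus a hypothetical k+m erasure code (k data
# fragments plus m parity fragments). The 12+6 code is illustrative only.
data_pb = 1.0

replication_factor = 3
replicated_raw = data_pb * replication_factor    # 3.0 PB of raw capacity

k, m = 12, 6                                     # hypothetical code width
erasure_raw = data_pb * (k + m) / k              # 1.5 PB of raw capacity

# The erasure-coded layout tolerates the loss of any m fragments while
# consuming half the raw capacity that triple replication would.
assert erasure_raw < replicated_raw
print(f"replication: {replicated_raw:.1f} PB raw; erasure {k}+{m}: {erasure_raw:.1f} PB raw")
```

This is why erasure coding can deliver high durability (surviving multiple simultaneous device failures) at a fraction of the capacity overhead of replication, which is the claim behind the “data protection without replication” bullet.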

Active Archive Customer Examples

  • Ovation Data, a data management company, uses the Active Archive System in conjunction with Versity Storage Manager (VSM) to build private storage clouds for customers in the oil & gas industry. The company selected the Active Archive System because it provides the economics of tape storage with the performance of disk storage. The solution provides cloud-scale storage, automated tiering to object storage and tape, and speedy, cost-effective access to data stored over a long period of time – improving efficiency and enabling users to make data-driven decisions that drive business value. The system also plugs directly into VSM without any modifications for quick, easy deployment.
  • EPFL (École Polytechnique Fédérale de Lausanne) is using Active Archive System for the Montreux Jazz Festival Project, a digital archive storing 50 years of the festival’s audio-visual content and accessible by academics for research and education purposes, and ultimately by the general public for their listening pleasure.

ActiveScale P100 – A Closer Look

The ActiveScale P100 is a modular turnkey system developed in response to customer demand for an easy-to-deploy system at an entry-level capacity, performance and price point.

The system includes three 1U system nodes combined with 10TB HelioSeal drives that form a cluster (metadata is copied to all system nodes) over a 10Gb Ethernet backplane, and pieces can easily be snapped together in 6U increments to expand the system. Scale-out options include a capacity configuration expandable to 9 racks and 19,440TB of raw capacity, as well as a performance configuration expandable to 2,160TB of raw capacity with throughput of up to 3.8GB per second. Configurable erasure coding allows software policies to be set along a durability-and-capacity continuum to determine the best match for a given workload.
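As a quick check on those figures, the stated maximums work out to whole multiples of the 720TB entry configuration (a sketch; exact module sizing is inferred from the published numbers, not from Western Digital documentation):

```python
# Back-of-the-envelope expansion math for the stated P100 configurations.
base_raw_tb = 720            # entry configuration, raw capacity
max_capacity_tb = 19_440     # scale-out capacity configuration, raw
max_performance_tb = 2_160   # scale-out performance configuration, raw

capacity_increments = max_capacity_tb / base_raw_tb        # base-sized steps
performance_increments = max_performance_tb / base_raw_tb  # base-sized steps

print(f"capacity config: {capacity_increments:.0f}x the base system")
print(f"performance config: {performance_increments:.0f}x the base system")
```

The capacity configuration is exactly 27 times the entry system and the performance configuration 3 times, consistent with the snap-together modular expansion the product describes.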

ActiveScale SM provides dashboard-based real-time system management and monitoring of system health to proactively identify potential problems such as performance or capacity issues. Also managed through this comprehensive view are installation, configuration, upgrades and capacity expansion. Management wizards automate many common functions.

ActiveScale CM collects and analyzes historical system data to identify trends and patterns for operational and business insight. It can correlate and aggregate multiple metrics across the entire namespace for a comprehensive view of storage infrastructure, or examine specific elements – individual geographies or applications, for example – to identify potential problem areas, providing better management of SLAs, improving efficiency, and reducing management costs.

Summary observations

As businesses recognize the value of collecting and analyzing years’ worth of data – largely unstructured and gathered from a wide variety of sources – traditional storage options don’t provide the flexibility or scalability required to support today’s Big Data applications. Tape can cost-effectively store data for many years, but isn’t easily accessible. SAN and NAS storage weren’t designed to support cloud-scale applications or to store unstructured data such as images or video. So many businesses have turned to object storage to overcome these limitations. Public cloud object stores like Amazon S3 and Microsoft Azure have become “go-to” solutions for storing large volumes of structured and unstructured data at cloud scale. Object storage cost-effectively provides massive scale, is designed for unstructured data and new data types generated by sensors and the IoT (Internet of Things), and provides easy accessibility to stored data.

The ActiveScale P100 provides the benefits of object storage in a modular turnkey system that is easy to scale, deploy, manage and grow. The vertically integrated software, networking, storage and server rack level system is up-and-running without the challenges and hidden costs associated with assembling a similar system. The integrated components have been tuned for optimal performance, and the built-in predictive analytics of ActiveScale SM enables proactive management. With an entry-level price point and easy expandability, the ActiveScale P100 enables businesses of all sizes and in any industry to take advantage of cloud-scale data storage, easy data access, and big data analytics for a range of workloads.



Virtual Instruments Acquires Xangati: A New Application-Centric Focus to Bridge APM and IPM tools

By Joe Clabby, Clabby Analytics

Almost two years ago we wrote our first report on Virtual Instruments (VI), a fast growing, analytics-driven performance management company with a strong focus on making infrastructure more efficient. We described the VI product portfolio which included “VirtualWisdom,” the company’s infrastructure performance management platform, and associated hardware and software offerings known as “Probes” (ProbeVM, ProbeSW, Probe FC and Probe NTAP). We also observed that the company was using “advanced correlation techniques, analytics and visualization to provide definitive and actionable insights on infrastructure/application behavior” using hardware appliances to offload systems from having to burn precious cycles gathering monitoring information. In essence, VI had created a separate performance monitoring/availability management/utilization optimization environment that has a very low impact on system operation and latency.

Last year, we reported that Virtual Instruments had merged with Load DynamiX – adding a performance testing, validation and change management environment to its analytics-driven infrastructure management portfolio. With these combined facilities, customers are better able to understand and test application/infrastructure relationships – enabling them to significantly improve application performance, particularly as it relates to Fibre Channel storage. Since that acquisition, Virtual Instruments has expanded Load DynamiX functionality into network-attached storage with its new NAS Performance Probe – and will soon introduce an iSCSI Probe. VI customers have reacted favorably to this acquisition: for 2016 year to date, Virtual Instruments revenues are running at 122% of plan.

On November 15, VI announced that it had acquired Xangati, a provider of products that monitor, analyze and control private and hybrid cloud infrastructure in real time – in an application-aware context. VI describes Xangati as “a service assurance analytics and performance control platform that optimizes virtual application workloads, leveraging an in-memory platform architected for automation based on machine-learned heuristics.” The way we see it, however, is that Xangati expands the VI portfolio by providing new insights into application behavior in environments beyond storage – particularly into cloud network infrastructure and compute/virtualization layer activities. And what is special about Xangati is that it uses new types of analytics (contention analytics, predictive capacity analysis and adaptive control) to examine application activities within networks, private clouds, deep compute environments, at the virtualization layer and in public clouds.

With the Xangati acquisition, VI is expanding its reach – moving beyond its initial application/storage infrastructure performance management focus into the complex world of application behavior within hybrid clouds. And Xangati is a perfect match for VI in that the company’s solutions were built on using machine-driven analysis of networks (particularly real-time security analysis) to speed the analysis of activities and application behavior. By merging Xangati functionality with the VI’s VirtualWisdom analytics/reporting platform, VI customers will now be able to understand application behavior within cloud environments – expanding VI’s reach and value to IT infrastructure managers and application owners looking for a holistic application/infrastructure performance management environment. The VirtualWisdom infrastructure performance management (IPM) platform will become the ideal complement to application performance management (APM) solutions.

The competitive environment

The Xangati acquisition also extends VI’s reach into one of the hottest market segments in the IT marketplace: public/private/hybrid cloud, and gives VI customers the ability to monitor application behavior across networks, within virtualized environments, within virtual desktop environments, within cloud environments and even down to the granular container level. This is all accomplished from a single-pane-of-glass management, analysis and reporting environment (VirtualWisdom – expect Xangati functionality to be blended into this VI management environment in phases in 2017). We are aware of only one other company (IBM) that offers this level of rich application layer analysis combined with deep infrastructure analysis – and IBM’s approach and price points are completely different.

What makes VI’s approach different from application performance management vendors and other infrastructure performance management software suppliers (which usually offer point product solutions) is that the company can use hardware appliances known as “Probes” to relieve systems from monitoring, management and analysis duties. The reason this is important is that running monitoring software on the systems themselves burns system cycles. Those cycles aren’t free – they consume anywhere from 2 to 5% of a system’s resources – and that can be costly across dozens, hundreds or even thousands of systems. In addition, software-only monitoring solutions don’t support true real-time monitoring. VirtualWisdom hardware probes enable sub-second wire data collection and analysis, whereas software solutions typically poll the infrastructure once every 2 to 5 minutes. What VI has done is offload most of this processing to independent low-cost, Intel-based servers that analyze large amounts of systems-/storage-/network-monitoring data – and then use analytics to look for problems, anomalies or opportunities to tune systems.
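
The cost side of the offload argument can be roughed out with back-of-envelope arithmetic. The 2–5% overhead range comes from the text above; the fleet size and per-server cost below are hypothetical inputs chosen for illustration:

```python
# Rough sketch of the agent-overhead cost argument: on-host monitoring
# software consumes 2-5% of each server's cycles (figure cited in the text).
# Fleet size and per-server cost are hypothetical illustration values.

def monitoring_overhead_cost(servers: int, cost_per_server: float,
                             overhead_fraction: float) -> float:
    """Dollars of server capacity effectively consumed by on-host agents."""
    return servers * cost_per_server * overhead_fraction

for frac in (0.02, 0.05):
    cost = monitoring_overhead_cost(500, 10_000, frac)
    print(f"{frac:.0%} overhead across 500 servers @ $10k each: ${cost:,.0f}")
```

Even at the low end of the range, the consumed capacity across a few hundred servers is the price of several additional machines — which is the economic case for moving that work onto dedicated probe appliances.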

It is also worth noting that some application performance management vendors are just starting to use machine learning and analytics while VI already offers a rich and varied suite of analytics tools and reporting facilities. VI’s solutions already work across domains, they already measure and contextualize how applications are deployed (topology) and how they behave; they already perform analysis on large amounts of data in real time; they already use predictive analytics to suggest recommendations; and they already close the loop by providing testing and validation in order to ensure that applications can be deployed on underlying infrastructure in a high-fidelity, highly efficient manner. In other words, VI has already taken a leadership position in application-centric infrastructure performance management.

The big impact

As far back as 2011, Clabby Analytics started to report on the increasing role of analytics in managing systems and infrastructure. The basic concept that we have been trying to get information technology (IT) executives to understand is this: with the advent of machine-driven analytics, systems can now look at more data than is humanly possible, in far less time, and provide important insights into application behavior, infrastructure shortcomings and tuning opportunities.

By making systems do more of the complex application behavior analysis, enterprises can save big money in three different ways: 1) the use of infrastructure/application behavior analysis tools results in the need for far fewer systems/application/database analysts to analyze system behavior – saving enterprises big money in analyst salaries and benefits; 2) less-skilled individuals are needed for troubleshooting and tuning – again saving enterprises big money in terms of salaries (while also helping enterprises address skills gaps); and, 3) infrastructure performance management tools help reduce overprovisioning – adding more equipment than needed to execute workloads (because infrastructure performance management tools help IT managers and administrators build more efficient systems that utilize fewer resources).

In addition to saving big money in salary/equipment costs, enterprises can also solve problems more quickly using real-time analytics that analyze large volumes of monitored data in timeframes that are orders of magnitude faster than human response times. In fact, the financial benefits of avoiding performance slowdowns and business outages can easily outweigh the significant CAPEX and OPEX savings.

Summary observations

We expect that VI’s approach to application/infrastructure integration will remain the same, even after the Xangati acquisition. The company collects data from applications, infrastructure and the network (instrumentation); it then contextualizes applications and infrastructure – showing how each relates. Once contextualization has taken place, application/infrastructure analysis can take place on workload performance, contention issues, etc. Once recommended changes have been made, developers can model, simulate and test the proposed changes. Finally, changes are deployed in production and adapted to workload needs. VI’s storage and testing products use this approach today; Xangati’s network and cloud performance monitoring products will also follow the same path.

It is noteworthy that VI has become profitable this year. Its business is growing; new products are being rolled into its VirtualWisdom platform; and, it is expanding its relationship with third parties such as APM and system vendors in order to find new ways to market. With its combined software and hardware approach we see VI as a unique offering – and with its expansion through acquisition approach, we believe the company will find other software companies that can complement this portfolio. By targeting hot market segments with unique and deep offerings, we see a solid growth path for Virtual Instruments in the future.


IBM's World of Watson (WoW) Conference: Embedded Applications and Thriving Ecosystem

By Joe Clabby, President, Clabby Analytics

Two years ago, after IBM’s Insight 2014 conference, I wrote a piece for the Pund-IT Review about IBM’s Watson cognitive computing environment in which I stated the following:

“We in the analyst community need to see some black ink coming from IBM’s cognitive computing Watson efforts. I personally believe that Watson is on the cusp of hitting it big – starting probably in about a year or two.” I further stated that: “Watson will become profitable in about two years as the betas reach closure and as IBM works out its deployment and pricing plans.”

At that time my big concerns regarding Watson were that I was not seeing an expansive portfolio of cognitive applications; many customer environments were still “in beta test”; the Watson business partner ecosystem was in its infancy; and IBM’s Watson pricing/deployment plans were “obscure”. But, after attending this year’s World of Watson (WoW) conference in Las Vegas, all of my concerns have been assuaged. Why is that the case? Because:

  • The Watson applications portfolio has been markedly expanded – helped by a huge ($1 billion+) investment by IBM in hardware and software development;
  • The Watson ecosystem has greatly expanded as dozens and dozens of developers and partners have created cognitive computing and analytics solutions on top of the Watson platform;
  • The “quiet beta customers” of 2014 have been replaced by “vocal Watson customer enthusiasts” – as evidenced by the 50 or so live testimonials at this year’s WoW conference, and by the dozens of Watson user break-out sessions that could be found on the conference schedule; and,
  • IBM’s pricing and deployment scheme for Watson has solidified – with the company placing a huge emphasis on Watson cloud service delivery models.

Just before getting on the plane to Las Vegas I reviewed IBM’s 3rd quarter earnings announcement and I noted that the company reported that cognitive solutions (which include Watson revenues) had reached $4.2 billion, up 4.5 percent. IBM also reported that cloud revenue within the segment grew 74 percent (up 75 percent adjusting for currency), and Solutions Software grew 8 percent. What this shows me is that IBM’s major initiatives in cognitive, cloud and service delivery are now delivering the “black ink” that I’ve been looking for.

And this revenue jump may be just the tip of the iceberg – Ginni Rometty, IBM’s Chairman, President and CEO, reported in her keynote address that cognitive computing is estimated to be a $31 billion industry in the near future! And with IBM’s clear leadership in cognitive technology, the future looks very bright indeed for IBM’s Watson cognitive computing initiative.

The World of Watson Conference

The World of Watson conference merged with the analytics/cognitive/data management conference formerly known as “Insight.” Plenty of discussions on databases, data management, analytics, security and other related topics were still to be found at the WoW event – but, by changing the conference name, IBM has chosen to drive home the importance of its cognitive computing offerings.

This year’s WoW conference was attended by a large contingent of customers, business partners and IBMers – 17,000+ strong. The exhibition floor, where I usually spend the lion’s share of my time, was the largest I’ve ever seen (with 500,000 square feet of space). The conference featured 1200 sessions, including 500+ client experiences; 120 business partners; and 200+ hands-on labs/certifications.

The exhibition floor itself was organized into four quadrants:

  • Transforming industries (real-life examples of cognitive-based industry transformations);
  • Monetizing data (how specific business processes and roles can thrive in the cognitive era);
  • Reimagining professions (how data, analytics, and cognitive services are coming together to enhance, scale, and accelerate human expertise); and,
  • Redefining development (how IBM cognitive tools benefit developers).

The key messages IBM sought to deliver were:

  1. Cognitive is transforming every aspect of our lives;
  2. Data, analytics and cloud are the foundation for cognitive business; and,
  3. Cognitive business is “the way” going forward (meaning it will help people make better decisions and solve complex problems more quickly).

The Big News: More Application Solutions

My number one concern with Watson back in 2014 was its limited application solution portfolio. At that time Watson felt more like a technology in search of problems to solve rather than an industry-focused collection of real world solutions. But at WoW I observed that the Watson portfolio has been greatly expanded with a collection of on premises and cloud-based solutions.

I also observed that there appear to be two approaches to expand the Watson portfolio: 1) a concept I’ll call “front-ending,” and, 2) a concept called “embedding.” Think of Watson mainly as an environment that can manage and analyze data. IBM now refers to the data that Watson curates as a “corpus” – a body of knowledge that grows with a given business. Enterprises can create corpuses with all sorts of structured and unstructured data in them – but queries need to be formulated or query software or libraries need to be created or made available to query each respective corpus. At WoW I saw dozens of applications developed by IBM and its business partners that reside on top of Watson (like an overlay), that query a Watson corpus, and that implement certain process flows to deliver desired results.

My favorite example of this approach was designed by a company known as Persistent. This company can install Internet-of-Things devices in a washroom that monitor the amount of toilet paper, soap, and paper towels that are available. (Yes, I know this sounds mundane, but it is a good example of the “front-ending” approach that I’m seeking to describe…). The application can then place an order for replacement materials once certain threshold levels have been reached – and the application can also schedule a maintenance representative to replace dwindling supplies.

This application is not as profound as some of the other applications that Watson is performing in the areas of medical research or in financial markets. But it is illustrative of: 1) new applications that are being written to query Watson corpuses; 2) workflows that are being automated; and, 3) new efficiencies that are being introduced.
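
The front-ending workflow just described boils down to a threshold check that triggers a reorder and a maintenance visit. The sketch below is a hypothetical illustration of that logic — the supply names, threshold value, and function are invented, not Persistent's actual application:

```python
# Hypothetical sketch of the IoT "front-ending" workflow described above:
# sensor readings are compared to a threshold; anything running low
# triggers a reorder and a maintenance dispatch. All names are invented.

REORDER_THRESHOLD = 0.20  # reorder when a supply falls below 20% remaining

def check_washroom(levels):
    """Return the list of supplies that need reordering (and a visit)."""
    return [item for item, level in levels.items()
            if level < REORDER_THRESHOLD]

readings = {"toilet_paper": 0.15, "soap": 0.60, "paper_towels": 0.10}
low = check_washroom(readings)
print(low)  # ['toilet_paper', 'paper_towels']
```

In the real application, the query against the Watson corpus and the ordering/scheduling workflow would sit behind a check like this one.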

The “embedding” options for the Watson application portfolio mean that existing software is being improved by adding Watson-based functionalities. For example, existing software may be able to tell an enterprise executive that there is a problem in given processes or business units – but that software may lack the facilities to tell the executive why the problem has occurred and what to do about it. By augmenting a traditional reporting system with embedded Watson functionality, executives can take advantage of the platform’s ability to analyze structured and unstructured data to quickly get to the root cause of a problem. Then knowledge bases can be created to suggest work-arounds to given problems.

One of the best examples I saw of this was in a new IBM product known as Watson Supply Chain which uses cognitive technology to increase transparency, intelligence and predictability. Working with a company known as TransVoyant (which manages an extensive database of global shipping information and tracks weather events), IBM’s Watson Supply Chain can provide comprehensive visibility into supply chain activities across the globe. So, for instance, if a shipment of hard disks manufactured in China is about to leave Hong Kong, and there is a monsoon brewing in its path, executives can be alerted to possible trouble, can explore alternative routes or find alternative suppliers – and can also be advised of the financial impacts of their decisions.

The net effect is that new intelligence and predictive analytics have been added to a traditional supply chain application by embedding Watson, as well as by introducing a new source of data (TransVoyant) to provide new insights. I saw dozens of examples of applications that now have similar new analytics features thanks to embedding intelligent, cognitive Watson analytics within their workflows.

The Expanded Partner Ecosystem

At WoW, the number of third-party ecosystem vendors was at least three-fold larger than Insight 2014. IBM claims to now have 120+ Watson business partners. Not all were represented at WoW but some of IBM’s most noteworthy Watson business partners include QueBIT, Centric Netherlands B.V., Enterprise Computing Limited, New England Systems, SYNNEX Infotec Corporation, Sirius Computer Solutions, Prolifics, Addedo, ICit Business Intelligence, Revelwood and Cubewise. A list of the awards that these Watson partners have received from IBM can be found here.

The Customers are Speaking Out

During WoW, close to fifty IBM Watson customers took to the stage to relay their experiences and successes. The big change here is that many customers in 2014 were still in beta and were not willing to share results, but now users are clamoring to get to the stage. Some provided descriptions of the business benefits that they had achieved using Watson cognitive technologies; others went a step further to describe the benefits that Watson technology can deliver to society. This group included executives who appeared on stage with Ginni Rometty during her keynote: John B. King, U.S. Secretary of Education; Mary Barra, Chairman and Chief Executive Officer of General Motors Company; and Professor Yitzhak Peterburg, Chairman of the Board at Teva Pharmaceutical Industries Ltd.

What Was Missing

IBM tends to talk at a very high level about “solutions”, but it is often difficult to figure out which mix of products are required to build specific cognitive solutions. Granted, there are different data sets coming from various internal and external sources that will need to be filtered, cleansed and managed; granted that some Watson service will be needed to access and analyze those data sets; and granted that customers want a variety of widely different application solutions – so a wide variety of products may be needed to construct a cognitive solution.

But IBM's main tent presentations were very light on describing products and their capabilities in depth – and on which services or APIs are required to construct certain types of solutions. When cruising the demo floor, I saw customers packed deeply around certain demos attempting to figure out which products to use to build their respective cognitive solutions. So one thing IBM could do better at next year's show would be to put more solutions architects on stage and on the exhibition floor to help prospective customers better understand its broad and deep software portfolio.

Summary Observations

I believe that IBM’s Watson has turned the corner from being an interesting experimental technology to becoming an invaluable cognitive system. The purpose of this tool is to help provide new insights by analyzing vast amounts of data and assimilating answers or constructing predictions based upon that data analysis. I also believe the learning capability of Watson is critical, and a differentiator from complex analysis tools of the past and present. As data changes and as decisions are made, Watson adjusts and improves. At WoW, I saw numerous examples of how Watson is being used to augment problem understanding and resolution (IBM is now calling the former A.I. [artificial intelligence] “augmented intelligence”).

From a competitive standpoint, there are numerous other companies building “cognitive” offerings. Microsoft, Google, Cognitive Scale, Cisco (Cognitive Threat Analytics), HPE (Haven on Demand), Customer Matrix, Spark Cognition (see this report for a description of these products) – to name but a few – all have cognitive programs underway. But IBM, with its huge investments in database management software, in security, in analytics, in process flow, in APIs and cloud architecture, in mobile, in geospatial, in the Internet-of-Things, in cloud services and delivery models, in software solutions – and more – is far better positioned than any other major technology company to lead the charge into cognitive computing. Two years ago I was looking for signs of increasing profitability from the Watson Initiative. I now see an immense new market opening up for cognitive solutions – and IBM is ideally positioned to claim a gigantic share of that market.


IBM Storage Announcements Bolster Hybrid Cloud Strategy

By Jane Clabby, Clabby Analytics

On November 2, 2016, IBM introduced a set of storage-related enhancements that will improve businesses’ ability to easily and seamlessly modernize and optimize traditional workloads by improving performance and efficiency, as well as support new-generation, born-in-the-cloud applications by leveraging hybrid cloud architectures. These latest announcements improve scalability, efficiency, performance and flexibility across the already comprehensive IBM storage and software-defined storage portfolio. New announcements include:

  • IBM Spectrum Virtualize
    • Transparent Cloud Tiering
    • Cloud Flash Copy
    • Flash enhancements
  • IBM Spectrum Scale
    • IBM Spectrum Scale 4.2.2
    • Cloud data sharing
  • IBM Storwize Family upgrades
  • IBM DS8880 High Performance Flash Enclosure Gen2
  • IBM DeepFlash Elastic Storage Server
  • VersaStack support for cloud-enabled Flash

Let’s look at each of these announcements in greater detail.

IBM Spectrum Virtualize

IBM Spectrum Virtualize Transparent Cloud Tiering

IBM Spectrum Virtualize Transparent Cloud Tiering extends local storage (including almost 400 IBM and non-IBM systems) transparently, cost-effectively and on-demand into the cloud for snapshots and restores, freeing up primary storage for demanding new-generation cognitive and analytics workloads. By using cloud as a back-end tier, capex (capital expenditure) can be converted to opex (operational expenditure), changing the economics of storage by minimizing up-front costs. This feature is available in IBM Spectrum Virtualize software, IBM Storwize V7000 Gen2/Gen2+, IBM FlashSystem V9000, IBM SAN Volume Controller and VersaStack configurations that use those arrays.

IBM Spectrum Virtualize Cloud FlashCopy

This solution provides full and incremental restore from cloud snapshots stored in public clouds, including IBM SoftLayer, OpenStack Swift and Amazon S3 clouds (others will be added in the future). Data is compressed before it is sent to the cloud, minimizing bandwidth and storage requirements, and all data stored in the cloud is encrypted, including data that has been stored on all supported IBM and non-IBM arrays. Use cases include backup to store copies of volumes in the cloud, archiving cold data (storing a copy in the cloud and deleting the primary copy), and transferring volumes from system to system to restore a volume to a new system.

IBM Spectrum Virtualize family Flash enhancements

The IBM Spectrum Virtualize family enhancements will enable high density expansion with up to 3X better drive density than 12-drive 2U enclosures. New enclosures will support up to 92 drives in only 5U rack space for up to 920TB with 10TB NL-SAS HDDs and 1380TB with 15TB flash drives. These can be intermixed with 2U enclosures in the same system and are supported across Storwize V5000 Gen2, Storwize V7000 Gen2/Gen2+, FlashSystem V9000, SAN Volume Controller (DH8 and SV1) and VersaStack.
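
The density claim above checks out with quick arithmetic — comparing drives per rack unit for the new 92-drive 5U enclosure against a conventional 12-drive 2U enclosure:

```python
# Sanity-checking the "up to 3X better drive density" claim from the text.
new_enclosure = 92 / 5     # 18.4 drives per rack unit (92 drives in 5U)
old_enclosure = 12 / 2     # 6.0 drives per rack unit (12 drives in 2U)

improvement = new_enclosure / old_enclosure
print(f"{improvement:.2f}x denser")  # ~3.07x, consistent with "up to 3X"

# Capacity figures also line up with the drive counts cited:
print(92 * 10, "TB with 10TB NL-SAS HDDs")   # 920 TB
print(92 * 15, "TB with 15TB flash drives")  # 1380 TB
```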

IBM Spectrum Scale

IBM Spectrum Scale Cloud Data Sharing

You may recall that back in July, IBM announced IBM Spectrum Scale Transparent Cloud Tiering, which enabled a single namespace for data across the hybrid cloud and provided policy-based tiering to manage data placement and migration seamlessly across on-premises and off-premises infrastructures. Movement of data is automated based on policy. For example, data that hasn't been accessed in a week could be automatically migrated to the cloud, and files can be recalled on demand.
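
The "not accessed in a week" policy above reduces to a simple selection rule over file access times. The sketch below illustrates that logic only — Spectrum Scale actually expresses such rules in its own SQL-like policy language, and the file names here are invented:

```python
# Conceptual sketch of the tiering policy described in the text: files
# untouched for a week are candidates for migration to a cloud tier.
# This is NOT IBM's policy syntax; it only illustrates the selection logic.
import time

WEEK_SECONDS = 7 * 24 * 3600

def select_for_cloud_tier(files, now):
    """Return paths whose last access time is more than a week old."""
    return [f["path"] for f in files if now - f["atime"] > WEEK_SECONDS]

now = time.time()
catalog = [
    {"path": "/data/hot.csv",  "atime": now - 3600},            # an hour ago
    {"path": "/data/cold.csv", "atime": now - 30 * 24 * 3600},  # a month ago
]
print(select_for_cloud_tier(catalog, now))  # ['/data/cold.csv']
```

Recall on demand would be the inverse operation: a read against a migrated file triggers a copy back from the cloud tier.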

IBM Spectrum Scale Cloud Data Sharing adds to these capabilities with automated policy-driven replication and synchronization, and more granular control over data movement based on type of data, action, metadata or heat. Data can be moved from storage to storage, including data and metadata, bridging the gap between cloud and file infrastructure. This solution is ideal for cloud-native applications, DevOps, new-generation applications such as Apache Spark, and workloads that are heavily parallel-processed. Data can be analyzed and then automatically tiered out to public cloud providers.

IBM Spectrum Scale Version 4.2.2

IBM Spectrum Scale includes new features that improve security, usability, flexibility and Big Data analytics. Here is a sampling of new features. An updated GUI provides better, more comprehensive monitoring and troubleshooting, and tasks can be streamlined using guided interfaces. The new revision includes iSCSI client support with diskless boot and RESTful API support. To better address analytics workloads, Hadoop Distributed File System (HDFS) support has been updated for new applications and data oceans, and multiple HDFS domains can be included in a single storage pool – ideal for large-scale Big Data and analytic cognitive workloads.

IBM Storwize Family Upgrades

IBM has added scalability improvements to systems built with IBM Spectrum Virtualize. For example, IBM reports the following improvements to IBM Storwize V7000 for all-Flash workloads:

  • 50% more drives per system
  • Almost 3x more drives per clustered system
  • 4x larger maximum flash drive size (up to 15.36 TB)
  • Almost 6X larger maximum single system (up to 11.4PB)
  • 8X larger maximum clustered system (up to 32PB in only 4 racks)

IBM DS8880 High Performance Flash Enclosure Gen2

IBM reports 2X performance improvement and 6X added capacity per enclosure. Gen2 also provides a 90% improvement in IOPS read performance and 50% in IOPS write performance, as well as a 110% improvement in read throughput and 85% in write throughput for real-time analytics, cognitive and traditional I/O intensive transactional workloads. Data protection is improved by using RAID 6 as the default.

IBM DeepFlash Elastic Storage Server

In July, IBM announced the Deep Flash 150 all-Flash array with IBM Spectrum Scale, designed for Big Data unstructured workloads and providing better economics, efficiency and time-to-results for analytics applications.

The new IBM DeepFlash Elastic Storage Server (ESS) is a packaged (one SKU) turnkey solution available in two configurations (GF1, with one DeepFlash system and up to 180TB usable, or GF2, with two DeepFlash systems and up to 360TB usable). It provides software-defined Flash storage for both file and object data, and includes IBM Spectrum Scale RAID erasure coding for advanced data protection. In addition, IBM reports up to 8X faster response time (compared to HDD) and throughput of 26.5GB/s read and 16.5GB/s write for the GF2. The DeepFlash ESS can grow to meet the demands of the business, with seamless in-place upgrades to multiple petabytes per system and clustering for virtually unlimited scaling.

VersaStack Support for Cloud-enabled Flash

VersaStack, IBM’s converged infrastructure solution, adds support to enable a wider range of use cases and workloads in a converged cloud-enabled package. New systems include:

  • Storwize V5000/V5030F with Cisco UCS Mini – Entry to midsize
  • SAN Volume Controller with Cisco UCS – Mixed storage workloads
  • FlashSystem A9000 with Cisco UCS – VDI

These models join the Storwize V7000/V7000U/V7000F with Cisco UCS (medium to large enterprise) and the FlashSystem V9000/900 with Cisco UCS (high performance).

Summary observations

Over the last several years, the storage landscape has evolved considerably in order to adapt to and support new applications and workloads. Business initiatives around Oracle, PeopleSoft and SAP have been replaced with new generation Big Data, analytics and cognitive workloads and applications including Hadoop, Spark and MongoDB. And while NAS and SAN still play an important role in corporate data centers, large public cloud object stores like Amazon S3 and Microsoft Azure have become increasingly important as businesses look to store massive volumes of structured and unstructured data in a flat, global namespace at cloud-scale.

IBM has responded with a steady stream of announcements that support these trends. IBM Cloud Object Storage provides storage in the public cloud, but can also be deployed on-premises. Working as both a service and as local storage, it provides the capabilities of hybrid cloud, enabling businesses to seamlessly support both traditional and new-generation workloads in a secure, scalable, efficient manner. IBM’s Spectrum Storage Suite of software-defined solutions can be purchased software-only, as an appliance or as-a-service. Spectrum Virtualize enables heterogeneous enterprise-class data services including migration, encryption, tiering, snapshots and replication across 400 different storage arrays from a wide range of vendors, providing customers with flexibility and investment protection.

As the price of Flash has decreased and as its overall economics (efficiency through data reduction techniques) have improved, all-Flash storage arrays are being used for a wider range of applications. Recent additions to the IBM all-Flash line-up provide systems for the entry-level and mid-range. With the existing all-Flash arrays such as the FlashSystem A9000 and A9000R, and the DeepFlash 150 and DS8888, IBM can support applications ranging from entry-level to high-end mainframe-based applications.

With these latest additions to its storage portfolio, including comprehensive all-Flash array offerings, a family of flexible software defined Spectrum Storage solutions and a well-articulated hybrid cloud strategy, IBM is uniquely positioned to guide customers into the Cognitive Computing Era.
