IBM Interconnect 2017: Cloud Solutions, Watson-enabled Applications and Blockchain

By Joe Clabby, Clabby Analytics

Last week approximately 20,000 people attended IBM’s Interconnect Cloud Computing event at the Mandalay Bay complex in Las Vegas, and their reasons for attendance varied widely. Some wanted to learn more about IBM’s cloud and cognitive computing products and strategies, others were interested in traditional enterprise computing products, still others in application development or asset management or security. Interconnect, with its broad spectrum of keynotes, product demonstrations, customer testimonials and hands-on labs, was able to address this wide range of attendee requirements.

I went to Interconnect 2017 without a specific research agenda. Interconnect is the industry’s largest cloud computing showcase, so I went to learn more about IBM’s cloud strategy, its cloud products and its new cloud services, and to hear what vendors and customers are saying about the cloud marketplace. After listening to the presentations of several IBM executives, after talking with numerous product managers at exhibition booths, after listening to customer testimonial after customer testimonial, and after watching product demonstrations – I left Interconnect with several new perspectives.

My biggest finding is that IBM has done a masterful job building an enterprise-class hybrid cloud environment that features analytics, machine learning and cognitive computing. I stopped publishing reports on IBM’s cloud efforts back in 2014 because I perceived that IBM was having a difficult time getting its cloud act together (more on this later). What I found at this conference is that IBM has a clear cloud strategy with offerings that are distinctly different from the cloud offerings of Microsoft, Amazon and Google. And due to these differences, I believe that IBM will see stronger growth in enterprise cloud computing as compared with its leading competitors over the next several years.

I also noted that IBM has become more aggressive in building out its Watson/third-party software ecosystem. For the past several years IBM has focused strongly on adding cognitive and analytics overlays onto its traditional enterprise software environments. But at Interconnect 2017 I found dozens of Watson-enabled third-party software vendors – and learned that IBM’s Watson organization is looking to recruit thousands of third-party software makers to its Watson/cloud environment.

Finally, I observed that IBM is “all in” when it comes to Blockchain technology – and it has the perfect platform on which to implement Blockchain: LinuxONE. Further, IBM offers a secure, IBM-managed Blockchain service environment that should appeal to customers who want to offload their transaction processing to a company with a deep transaction processing heritage.

IBM’s Hybrid Cloud

After several years of watching IBM try to architect its own cloud environment (Smart Cloud), and several years of watching IBM acquire a whole bunch of cloud-related companies, including Cast Iron, Coremetrics, Sterling Commerce, Unica, Emptoris, Varicent and Kenexa – with no apparent cohesive rhyme or reason for these acquisitions – I lost confidence that IBM would be able to recover against new generation cloud competitors such as Akamai, Rackspace, Amazon, Google, Microsoft and the telcos. I determined that I would come back to covering IBM cloud computing when IBM had a clear strategy, a coherent product line and rising market share potential.

I’m back.

At Interconnect 2017, Arvind Krishna, senior vice president, Hybrid Cloud and Director, IBM Research, presented a slide that brought me back into the fold (see Figure 1). This slide shows an integrated, secure, cohesive Watson/analytics cloud environment on which a variety of IBM software solutions can run. The design of this environment is distinctly different from the design of competing clouds that lack deep cognitive and analytics elements. IBM has moved its rich portfolio of homegrown analytics, transaction processing and integrated software solutions to this unique, enterprise-strength cloud – contributing to and helping drive the company’s $24.5 billion in software revenue. In the shaded area (upper left), IBM has also developed several ancillary businesses that run on its hybrid cloud – new businesses with unique offerings that exploit cognitive computing and analytics, and that represent a huge growth opportunity for the company. Healthcare, financial services and the Internet of Things are examples of these high-growth opportunities. In short, the company now has a clear strategy; it has a coherent product line – and, given the uniqueness of its position in the cloud marketplace, the company’s cloud environment has strong potential to significantly grow market share.

IBM’s Chairman, President and CEO Ginni Rometty spoke about the solid revenue growth that she is seeing in both Watson and the IBM Cloud. She also described how the IBM cloud can be differentiated from the clouds of other vendors such as Microsoft, Google and Amazon. She emphasized that IBM is in the hybrid cloud business, helping customers build clouds that suit their enterprise needs without having to give away their data. According to Ginni, the value of data is that it generates unique insights. Competitors, she argued, are often in favor of democratizing and sharing data – and this gives away the advantage of developing unique competitive insights. IBM advocates helping enterprises gather data from a variety of sources, both internal and external, to help generate those unique insights. Ginni’s speech made it perfectly clear that IBM knows its position in the cloud marketplace; that it knows what its customers want; and that the company can now deliver solutions to its customers that its cloud competitors cannot.

Expanding the Watson/Cloud Ecosystem

When I attended the World of Watson conference last year I noted that a growing number of vendors were starting to overlay Watson on top of their traditional software offerings. The big news at this year’s Interconnect conference was that cloud leader Salesforce.com has now integrated Watson with its own Einstein program in order to simplify the use of its software and help users make better decisions.

At last year’s World of Watson conference, however, I left without a clear understanding of how IBM was going to make it possible for the broader ecosystem of third-party vendors to overlay Watson on top of their own infrastructure, management and application solutions. At Interconnect I learned that IBM is wide open to helping third-party software vendors, including competitors, integrate their solutions with Watson on the IBM cloud.

In short, third-party software makers who are not integrating analytics, cognitive services and cloud architecture into their product offerings are going to have a hard time surviving. IBM offers an integrated, secure cognitive/cloud environment on which third-party software makers can deploy their solutions. Vendors looking to the next wave in computing would be well served to look into deploying their solutions on IBM’s Watson Cloud.

Software makers who wish to know more about this topic should contact me at joeclabby@AOL.com.

Blockchain, LinuxONE and Blockchain for Hyperledger Fabric v1.0

In September 2016, I started writing about Blockchain, a new way of processing transactions based upon sharing a common ledger (see this blog). I’ve since come to the conclusion that IBM’s LinuxONE platform and its new Blockchain services are the best approaches to take when building Blockchain networks.

To build a Blockchain transaction processing environment, Blockchain users need secure platforms, networks and a ledger that accounts for transactions. For those not familiar with IBM’s LinuxONE, it is a platform that offers high-speed architecture, fast elliptic curve processing, and dedicated hardware for acceleration. It is 2.3 times faster than competitors – with integrated security levels such as EAL 5+ that no other architecture in the industry can match. Hardware security also includes FIPS 140-2 compliance. As part of the security features of this architecture, security keys are placed into protected memory and into hardware security modules in order to prevent access and tampering. IBM does not make it possible for code to take control of secured modules, and focuses on encryption protection to prevent administrative abuse. Try finding performance this strong and security this deep on any other platform…
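To make the elliptic curve point concrete, here is a minimal sketch – my own generic illustration using the Python cryptography package, not IBM code – of the sign-and-verify operation that Blockchain ledgers perform constantly. On LinuxONE the private key would sit inside a hardware security module and the math would be accelerated in silicon; in this software-only sketch the key simply lives in memory.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

# Key a ledger participant would hold. On LinuxONE this private key would be
# generated and kept inside a hardware security module; here it lives in memory.
private_key = ec.generate_private_key(ec.SECP256R1())
transaction = b"transfer 100 units from account A to account B"

# Sign the transaction payload; only the key holder can produce this signature.
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# Any peer holding the public key can verify it; a tampered payload raises
# cryptography.exceptions.InvalidSignature instead of verifying silently.
private_key.public_key().verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```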

At Interconnect 2017, IBM introduced a new Blockchain service, IBM’s Blockchain for Hyperledger Fabric v1.0 – a service that offers both security and scale, as well as related governance tools. This service, which uses LinuxONE servers on the back-end, helps protect against external as well as internal attacks, offers the levels of security previously described, and uses the secure service containers and hardware security modules previously described – all running on a highly auditable operating environment. I travel to parts of the world that are not familiar with LinuxONE architecture – where IT executives are reluctant to use it, preferring x86-based solutions – even though LinuxONE uses an operating environment that is quite familiar in the industry: Linux. For those geographies that want the best Blockchain implementation available, but don’t want to run their own Blockchain environment, I would strongly recommend evaluating the above-mentioned Blockchain service.

One last thing on Blockchain: regulation. As transactions take place across various domains, they run into dozens or perhaps hundreds of regulations that must be addressed. Watson now offers a regulatory service that can help streamline the complicated problem of dealing with regulations. The Blockchain story – LinuxONE combined with automated Watson regulation services – will be difficult for other vendors to mimic and overcome.

Summary Observations

Over the past few years, I’ve watched the behemoth called IBM change its course. It has changed its cloud strategy – it no longer tries to compete with the public cloud vendors or the traditional enterprise cloud makers per se – it now offers a solution that is completely different. IBM calls its offering a “hybrid cloud” – a blend of integrated public and private cloud facilities driven by analytics and cognitive computing. The company has taken its traditional enterprise management tools and cloud-enabled them – weaving them into the fabric of the cloud. In fact, wherever it makes sense, it has cloud- and Watson-enabled its entire product line, from management tools and infrastructure through applications, reporting tools and databases. And it has modified its pricing and go-to-market models – actually promoting many of its offerings first and foremost as “software-as-a-service” solutions.

These changes at IBM involved huge internal cultural changes, including migrating the “enterprise computing old guard” to “new disciples of analytics and cognitive computing.” These changes involved “new think” – like how do we most efficiently develop and support applications under the new, analytics/machine-driven cloud model? And how do we offer traditional, on-premises solutions as well as software-as-a-service solutions? They involved retooling, integration efforts, the introduction of new APIs to join the old world to the new mobile world – and the new cloud environment to the existing on-premises environment. IBM has worked to integrate existing traditional management applications into the new model of cloud computing – and to find or create new solutions more suited for the new hybrid cloud environments.

The IBM Cloud has come a long way since I stopped covering it in 2014. At Interconnect 2017, I saw firm evidence that IBM has made it over the hump – and has transitioned to a modern, extremely competitive analytics/cognitive cloud company that can address enterprise needs for a blended, rationalized public/private/hybrid computing environment. Enterprises planning for a future of cognitively enabled applications and secure Blockchain must take a closer look at the IBM Cloud.


IBM Connect 2017: Collaboration Under the Radar

By Joe Clabby, Clabby Analytics

IBM runs six large technology events each year: World of Watson, InterConnect, Amplify, Edge, Vision and Connect. I usually attend the two largest, World of Watson and InterConnect, along with 15,000 to 20,000 other interested parties. Both of these events are huge and cover a wide range of technologies, from systems and infrastructure to management, cloud, cognitive computing, analytics and more. Although I suspect that IBM’s deep portfolio of collaboration products can be found at some of these events, I just plain don’t notice them.

My earliest memories of IBM date back to the 1970s when I was selling competitive products against their word processing and computer systems. In the 1980s, IBM collaborative computing “office products” started making the scene. Does anyone remember PROFS and DISOSS? In the 1990s IBM stunned the marketplace by purchasing an office product company by the name of Lotus for a whopping $3 billion. As a research analyst at the time, I could not fathom what IBM saw in Lotus and its email and collaboration products. By the early 2000s, I could clearly see the value of the purchase as IBM brought to market a myriad of new office and document management offerings, creating a multibillion-dollar collaboration software environment that dwarfed its now “minuscule” investment in Lotus.

This year I chose to attend IBM’s Connect 2017 event in San Francisco – leaving my 75° abode in sunny Charleston, South Carolina, to travel to cold and rainy San Francisco. And as surprising as this may sound, I’m glad I did. Why so?

  • IBM Connect 2017 was an industry tradeshow built around IBM’s and its business partners’ collaborative solutions. It offered roadmaps and commitments for older IBM products, such as Sametime, Domino and Notes. It also showcased newer IBM products such as Verse (a versatile, modern-day mail and messaging environment with a collaborative overlay), Connections (a social business network platform), and Watson Workspace and Work Services (a conversational collaboration environment and a set of APIs that allow developers to build applications that understand conversations and intent, and allow for integration into existing work applications).
  • Business partner participation at the event was strong, featuring new collaborative product offerings between IBM and Cisco and between IBM and Box. Connect also highlighted many blended solutions by vendors whose products overlay IBM offerings, such as project management environments blended with Notes, and mobile interfaces blended with Domino environments. Also featured were a slew of new products that integrate Watson cognitive technologies with existing business applications, such as Watson Workspace woven into traditional software offerings, thus delivering new functionality to market.

What I Heard and Saw: Business Partners

I always seem to gravitate toward the EXPO floors at these events. I think the main reason is that I just plain love to play with technology – I like to see the way it’s used; I like to see what new and innovative directions developers have taken with their hardware and software solutions; and the EXPO floors are a great place to talk with vendors about what’s really happening in the marketplace, as well as to meet IT buyers.

At Connect, I had a long conversation with Alex Homsi, the CEO of Trilog Group, an IBM business partner, whom I asked to help put IBM Connect 2017 into context. The way Mr. Homsi described it, the IBM Connect events are all about getting things done. They are about increasing productivity and efficiency, but mostly about collaboration in the processing of complex workflows. As I stood in the Trilog booth, Mr. Homsi gestured around the floor: “Look over there – you see companies that offer telepresence, that create virtual project rooms, that offer sales collaboration tools and much, much more.” When I prodded him for more information about why people come to the event, Mr. Homsi told me that “They come to solve complex, mega-problems,” and then he proceeded to talk about how his own project solutions help customers save millions (or in one case, hundreds of millions) of dollars by digitally capturing content, effectively communicating it and then coordinating the efforts of large groups. Incidentally, Mr. Homsi is also CEO of a company called Darwino, a company that helps customers mobilize IBM Notes Domino applications and migrate to the cloud.

I also talked at length with a company by the name of Oblong, a maker of a digital content management environment that I wish I owned. For sci-fi fans who may have seen the movie “Minority Report”, you might recall Tom Cruise pulling files and data streams from a wide variety of sources, which he examined in the holographic 3D air space that surrounded him. He could expand files, shrink files, push files to the side, and look at multiple displays of live and recorded data in real time – easily moving between static and dynamic filing environments at the touch of his hand. Oblong makes a highly scalable environment that can scale across hundreds of displays (sorry, no holographs yet) where the information it collects can be collaboratively shared amongst large teams. I pictured disaster response use cases where a room of people look to coordinate a response to an event – and I have to admit, I thought back to the NASA launch room in the movie “Apollo 13” where a group of scientists collaborated on a way to help astronauts return to earth after a failed lunar landing (I guess I’ve been watching too many old movies lately…). Anyway, you get the point: the world is now digital, and these digital sources can be easily harvested and displayed – enabling people to more easily collaborate and make better decisions.

A company by the name of Imaging Systems, Inc. out of Slovenia also caught my attention with a product called IMiS MOBILE that can be used to easily mobile-enable legacy applications – thus broadening the platforms that can be used to conduct collaborative activities using hand-held devices. I liked this product because of its programming simplicity.

I went to at least a dozen other booths, including the IBM Verse and Connection booths (covered in the next section).

Last but not least, business partners were truly excited about using Watson to make their applications “smarter”. I wrote about this trend in my Pund-IT trip summary after attending World of Watson 2016 – and this trend is becoming omnipresent (I’m seeing it everywhere across the traditional software applications markets) as ISVs recognize that using Watson can simplify the use of their products while expanding the types and accuracy of the solutions they create. Watching the industry move to “Watson Enablement” is truly one of the most fascinating trends I’ve ever seen in the computing industry – it seems that the sky’s the limit in terms of what machine intelligence blended with traditional applications can now do.

What I Heard and Saw From IBM

The lead speaker at Connect was Inhi Cho Suh, IBM’s General Manager of Collaborative Solutions. She took the stage to tell the audience of about 2,000 Connect attendees what was going on in the collaboration marketplace and what IBM was doing to address the needs of the market. I’ve known Ms. Suh for years – I first met her in the early days of IBM’s foray into the analytics marketplace – and she’s a real straight shooter. Bringing a person with her background to the collaboration space is a very smart move on IBM’s part – she knows analytics technology extremely well, and she knows how to Watson-enable IBM’s collaborative offerings as well as how to help business partners do so.

Ms. Suh took the stage to tell the audience about the trends that she is seeing in the collaborative computing market space. She contended that the way we engage with others in the work world is changing thanks to new innovations, especially the use of Watson cognitive services that are being used to simplify products and extend their capabilities. She also talked about open collaboration where companies in the collaboration marketplace are working more closely together to build jointly integrated solutions. I saw a clear example of this with the joint IBM/Cisco announcement that integrates Cisco collaboration products with IBM collaboration products – in the past these two companies would’ve been strong competitors who likely would not have worked together, but now both are pleased to show how the best of each company’s solutions can be blended to create a more powerful and integrated collaborative environment.

Ms. Suh also talked about how collaboration products are getting better at streamlining process flows.

Other speakers talked about how cognitive computing is being blended with analytics. In short, cognitive computing is being used to help sort and prioritize “what is important to me”; it is streamlining the flow of work; it’s using bots and virtual assistants to aid humans in their decision-making processes; it’s using Internet of Things sensory devices to aid decision-making; and it’s helping individuals focus better.

There was also an interesting discussion about whether today’s tools “create more noise” than help. From what I saw on the demo floor, today’s tools can take a lot of the uncertainty and human guesswork out of decision-making, while at the same time making processes flow more easily. The “more noise” argument does not hold with me.

As for IBM products, I got a close look at IBM Connections and IBM Verse. Verse was pretty cool – a modern mail-and-messaging environment with a collaborative overlay that made it simple to access and sort the work of fellow team members as well as handle external inputs. I especially liked that the product could be used in a mode where you don’t have to delete your emails and related documents; you just leave them in your inbox when you’re done with them – and if you need to refer back to them you perform a search on the few keywords that you may remember and your document appears. I’ve deleted emails and messages for decades – what a silly concept… As for IBM Connections, I rarely have a need to collaborate with a group of people to conduct project work, but if I did, I’d consider using this social, collaborative environment.

As an Aside

I had the opportunity to attend a customer-presented session on deploying Blockchain on IBM’s Bluemix. For those not aware of Blockchain, it’s a new way of processing transactions based upon Bitcoin technology. It deals with distributed databases, distributed servers, encrypted data, the mining of that data – and establishes consensus between nodes. This technology is being used to create trusted, synchronized transactions. If a transaction is tampered with in any way, all stakeholders know about it and the transaction is thus broken. The ability to conduct secure transactions is one of the major points of this technology – but probably the biggest selling point is that it takes intermediaries who add processing, time and cost overhead to transactions out of the picture. It’s pretty exciting stuff and represents a whole new, more efficient and secure way of processing transactions – and I will have the pleasure of speaking about it at a government conference in Dubai in May. It was fun to see how another practitioner handled the topic.
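To illustrate why tampering is so easy to detect, here is a toy model – my own simplification, not IBM’s Hyperledger or Bluemix code – in which each block records the hash of the block before it, so altering any transaction breaks every later link in the chain.

```python
# Toy hash-chain model of tamper detection; not production blockchain code.
import hashlib
import json

def block_hash(block):
    # Hash the block's contents, including the previous block's hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def build_chain(transactions):
    chain, prev = [], "0" * 64                      # genesis: no previous hash
    for tx in transactions:
        block = {"tx": tx, "prev_hash": prev}
        prev = block_hash(block)
        chain.append((block, prev))                 # store block plus its recorded hash
    return chain

def chain_is_valid(chain):
    prev = "0" * 64
    for block, recorded in chain:
        if block["prev_hash"] != prev or block_hash(block) != recorded:
            return False                            # a link no longer matches
        prev = recorded
    return True

chain = build_chain(["A pays B 10", "B pays C 4"])
print(chain_is_valid(chain))         # True
chain[0][0]["tx"] = "A pays B 1000"  # tamper with an early transaction
print(chain_is_valid(chain))         # False -- every node can detect the change
```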

Conclusion

I truly enjoy going to technology shows. Throughout my lifetime, I have been a technology sales representative, a project manager and a technology researcher. I remember how I used to do things in the old days, and I like seeing how technological advances have simplified tasks, and have made people more productive and efficient.

As I looked at the wide array of technological solutions presented at Connect 2017, I kept asking myself “How can I use these tools as a research analyst to my advantage?” I left Connect with some new ideas to explore, and with the self-realization that I go to events such as this to learn, to look for new innovations, to talk with people who have used or developed these technologies – and most of all to surround myself with people with like interests who enjoy technology and innovation as much as I do.

Will I go again to IBM’s Connect next year? I hope so…

IBM Enhances Spectrum Storage Management, Integration and Flexibility

IBM’s February 7, 2017 software-defined storage announcement was so chock full of new capabilities for the company’s Spectrum Storage and Cloud Object Storage that it’s tough to sum it up in one sentence or even a paragraph. But a look at the history of IBM Spectrum Storage will provide some context and illustrate IBM’s prime objectives: consistency, integration and flexibility.

The Spectrum Family of software-defined storage was first announced in February 2015 — a rebranding of existing IBM storage solutions with names more indicative of their functions. In early 2016, IBM announced the IBM Spectrum Storage Suite, a single capacity-based license that includes all the IBM Spectrum Storage offerings. Over time, the suite has become more of a “family”, with a consistent user experience across products by using IBM Storage Design Language (based on IBM design language) and improved integration between members of the product family.

For example, Spectrum Scale and Spectrum Accelerate are managed through Spectrum Control, and Spectrum Scale can be used as a target with Spectrum Protect – an example of integration between these two products. Another great example is the integration of IBM Cloud Object Storage (formerly Cleversafe, acquired in 2015) with IBM Spectrum Storage offerings (we’ll discuss this in greater detail later). IBM Cloud Object Storage is now a full-fledged member of the Spectrum Storage Suite with no change in the license charge.

In most cases, IBM Spectrum Storage solutions are available in flexible deployment models as software-only and as-a-service offerings delivered via a hybrid cloud model—including on-premises solutions, private and public cloud implementations. Data can be easily shared across on-premises and cloud deployment models—providing maximum flexibility for customers.

IBM Spectrum Storage Suite Announcements

IBM announced enhancements to IBM Spectrum Virtualize, IBM Spectrum Control/Storage Insights, IBM Spectrum Accelerate and IBM Cloud Object Storage. Here are the details.

IBM Spectrum Virtualize

The big news with IBM Spectrum Virtualize is that it is now available as software-only and can be deployed on x86 servers, providing more options for service providers who may also choose to offer Spectrum Control for management and Spectrum Protect for back-up. Bare metal support on the Supermicro SuperServer 2028U-TRTP+ is being introduced (in addition to existing support for the Lenovo System x3650 M5). As a result, service providers can offer new services to their customers (for example, disaster recovery as a service for clients with IBM’s SVC and/or Storwize) while also adding new capabilities to their own infrastructure.

Other enhancements include I/O throttling for host clusters, GUI support for host groups, increased background FlashCopy transfer rates (4x improvement), automatic resizing of remote mirror volumes, and automatic consistency protection for metro/global mirror relationships.

IBM Spectrum Control and Storage Insights (SaaS version)

Perhaps one of the most exciting enhancements to Storage Insights is support for Dell EMC storage (VNX, VNXe and VMAX), enabling cloud-based storage analytics for Dell EMC customers and providing better insight, flexibility and efficiency for customers with heterogeneous storage environments. For example, administrators can see at a glance which storage volumes need to be re-tiered for cost-optimized storage, examine future capacity requirements and reallocate unused storage across both IBM and Dell EMC storage infrastructure. A new consolidated chargeback reporting tool provides better accounting for data owners and administrators and adds support for Dell EMC VNXe.

IBM Spectrum Accelerate

The latest revision of IBM Spectrum Accelerate (V11.5.4) provides the following:

  • Non-disruptive data-at-rest encryption – data security is improved by using SED (self-encrypting drive) based encryption, which can be done on-premises or in the cloud. Standard key management tools, including IBM Security Key Lifecycle Manager and SafeNet KeySecure, are supported, and the functionality is available at no additional charge.
  • Hyper-Scale Manager 5.1 enables centralized management of up to 144 systems running IBM Spectrum Accelerate, on- or off-premises (FlashSystem A9000/A9000R, XIV Gen 3, Spectrum Accelerate software-only).
  • VMware 6 support enables enhanced IBM Spectrum Accelerate solutions in hyperconverged configurations, including VMware ESXi 6.0 and vCenter 6.0. vSphere Web Client certification provides IBM native storage visibility. These features are delivered in IBM Spectrum Control Base Edition for centralized management across the entire IBM storage portfolio.

IBM Cloud Object Storage

As mentioned earlier, IBM Cloud Object Storage is now included in the IBM Spectrum Storage portfolio, having been integrated with:

  • IBM Spectrum Scale (for an additional storage tier behind IBM Spectrum Scale);
  • IBM Spectrum Protect (to provide an electronic vault, initial target or storage in the cloud);
  • IBM Spectrum Control (to enable monitoring of IBM Cloud Object Storage); and
  • IBM Spectrum CDM (for data archiving in IBM Cloud Object Storage).

Supported environments include on-premise single and multi-site, hybrid cloud dedicated multi-site, and cloud-only running on dedicated supported cloud infrastructure. Existing clients get the new capabilities and can use them immediately.

Other new features include:

  • Unified NFS/Object access adds the ability to use the object store via an NFS file system interface. IBM will use its own native NFS file access capability for object storage to support applications that communicate with storage using file system protocols, in addition to those that speak natively to object storage – with all the scalability, security, reliability and capacity characteristics (hundreds of PBs and billions of files) of IBM Cloud Object Storage. Included is the ability to interoperate between the file system interface and the object storage RESTful API, a common storage pool for NFS and object data, and file-to-object migration. Customers can, for example, ingest a data set through the file system interface, then access the same data via the object interface when processing it with an analytics application (a minimal sketch of this pattern appears after this list). IBM Cloud Object Storage NFS is ideal for large-scale file archives, long-term backup retention, and file-accessible content repository applications.
  • Enhanced storage-as-a-service (STaaS) increases the maximum number of user accounts that can be supported on a single system. The system can scale to millions of users, each with their own accounts and isolated storage containers, all on a single system. The system is designed both for a commercial STaaS product that can be sold to other businesses and for employees within an organization, with use cases including medical archive-as-a-service, archive/back-up as-a-service, IT storage as-a-service and cloud storage as-a-service.
  • IPv6 is now supported for management of all devices and all nodes— nodes can be configured to support IPv4 or IPv6, so IBM Cloud Object storage can easily fit into a customer’s existing network architecture if they have adopted IPv6.
  • Preconfigured bundles of IBM Cloud Object Storage are now available in Entry, Workhorse and Enterprise classes. These bundles are turnkey, simple-to-order complete systems with a balanced, pre-tuned set of components for various levels of capacity and performance – designed to simplify a customer’s entry into object storage.
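As referenced in the unified NFS/Object bullet above, here is a minimal sketch of the dual-access pattern: ingest data through a file path (an assumed NFS export) and read the same object back through an S3-compatible API. The endpoint, credentials, mount point and bucket names are placeholders of my own, not documented IBM Cloud Object Storage values.

```python
# Hedged sketch of unified file/object access; all names below are assumptions.
import boto3

NFS_MOUNT = "/mnt/cos-export/sensor-data"         # assumed NFS export of the bucket
BUCKET, KEY = "sensor-data", "2017/02/readings.csv"

# 1) Ingest via the file interface -- what a legacy application would do.
with open(f"{NFS_MOUNT}/2017/02/readings.csv", "w") as f:
    f.write("device,temp\nd1,21.4\n")

# 2) Read the same object via the S3-compatible API -- what an analytics job would do.
s3 = boto3.client(
    "s3",
    endpoint_url="https://cos.example.internal",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
print(body.decode())
```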

Summary Observations

Having followed IBM Storage for several years, I can say that the direction the company is taking with its storage portfolio is definitely making my job easier. Back in the old days, there were so many disparate hardware and software offerings handled by different siloed groups at IBM Storage, it was difficult to keep up with them all. And it was even tougher to figure out how all the products worked together. In some cases, I learned that it was tough to figure out because they didn’t.

Since the rebranding in early 2015, IBM has been making steady progress in integrating its storage portfolio at both the marketing and technical level. A consistent user experience makes the products feel like a family and makes storage administration easier and more efficient. Synergies between the products enable enterprise customers and service providers to offer new services. IBM’s software-defined Spectrum Storage strategy gives customers more flexibility, enabling Spectrum products to be purchased as software-only, as an integrated solution or as-a-service. In addition, IBM supports a wide range of underlying hardware platforms and a hybrid cloud storage model.

These latest enhancements to the portfolio continue to build on the themes of integration, flexibility, and consistent management— which ultimately improve efficiency and drive down storage costs, while at the same time enabling new use cases. I am particularly impressed by how quickly Cleversafe, now IBM Cloud Object storage, has been integrated with IBM Spectrum Storage and enhanced to support unified file and object access. IBM Spectrum Storage has something for everyone— and now it’s easier than ever to find the integrated software-defined storage solution that your organization needs.


IBM DS8880F All-Flash Data Systems – Integral to IBM’s Cognitive Computing Strategy

By Jane Clabby, Clabby Analytics

When we think of data storage today, chances are that the first things that come to mind are “object storage”, “cloud storage”, “commodity storage” and “software-defined storage”. Nobody really talks much about SANs and block storage anymore – except for high-end database and transaction-oriented applications. The same has been true for IBM’s storage strategy.

For the past couple of years, IBM has made many announcements and published many marketing documents that position its Storwize and Spectrum portfolios, including “all-flash” versions of those products. However, the DS8000 family is typically relegated to the background, and enhancements have focused primarily on “speeds and feeds” – x times faster, x more capacity, x times lower latency etc. – rather than new use cases and support for different types of workloads.

Imagine my surprise when IBM’s first storage briefing of 2017 highlighted not only the new all-flash DS8880F arrays, but also the strategic nature of the DS8880F family and how it underpins IBM’s cognitive computing and analytics offerings. In addition, this announcement provides a lower-cost entry point (starting at 95,000 USD) for DS8880-class storage, providing high-performance low-latency storage for mid-range customers. According to IBM, providing a more cost-effective offering enables businesses in emerging geographies such as Latin America and Eastern Europe to afford DS8880F benefits in a smaller package.

New use cases/workloads

Here are a couple of examples of workloads that are ideal for the DS8880 family:

  • Cognitive computing

IDC forecasts that global spending on cognitive systems will reach nearly $31.3 billion in 2019, with a five-year compound annual growth rate (CAGR) of 55%. With IBM Watson, IBM has been a pioneer in cognitive computing – using Watson-based artificial intelligence technology in applications across a wide range of industries, enabling humans to solve complex problems more efficiently. Cognitive applications ingest, process and correlate huge volumes of data, which requires a robust computing infrastructure and high-performance, high-capacity storage.

  • Real-time analytics

Another area where IBM is seeing interest from customers is in real-time analytics, where information can be collected, processed, and analyzed in real time for applications such as credit card fraud detection and the Internet of Things (IoT). The DS8880F provides the robust resiliency and data security for the source OLTP data set as well as the throughput and performance to enable real-time analytics. As a result, financial institutions can match credit card use patterns with real-time data collection, detecting potential fraud before it happens and saving millions of dollars. In transportation, IoT data collected from cars, street lights, cell phones and other sources, combined with real-time analytics, will enable driverless cars. In health care, patients can be monitored – collecting and analyzing several metrics simultaneously in real time – to alert medical staff to potential problems, improving patient outcomes.

New-generation analytics and cognitive applications simply can’t be handled by traditional shared-nothing Hadoop clusters, HDFS, and commodity storage. Those systems are designed for scale and low cost, but they don’t have the bandwidth or performance to support real-time analytics and cognitive systems, which require businesses to collect and analyze data instantaneously. The new DS8880F data systems are designed specifically to handle these types of workloads that require lower latency and sub-millisecond response time.

Workload consolidation

The new DS8880F models also enable the consolidation of all mission-critical workloads, both new and traditional, for IBM z Systems and IBM Power Systems under a single all-flash storage family.

  • Cognitive – Including Watson Explorer, Watson Content Analytics and Watson APIs that allow customers and ISVs to create their own Watson-based applications.
  • Analytics – Including IBM Cognos Analytics, IBM SPSS, IBM InfoSphere BigInsights, SAS business intelligence, Elasticsearch, Apache Solr and others.
  • Traditional/Database – Including IBM DB2, Oracle, SAP, PostgreSQL, MongoDB, Cassandra and others.

DS8880F all-flash data systems – A closer look

IBM’s new line-up of DS8880F data systems includes three all-flash arrays designed to support applications from entry-level business class to enterprise class to the “analytics” class required for real-time analytics and cognitive workloads. IBM uses a different approach in the design of its all-flash arrays. Rather than just replacing HDDs with Flash cards, IBM has completely rearchitected the system, optimizing it for Flash for better performance. The systems use the Flash Enclosure Gen2 with 2.5” drives in a 4U enclosure and are built on the Power CECs (Central Electronic Complexes) used in Power servers. The models vary based on number of cores, capacity, cache and number of Fibre Channel/FICON ports, but the functionality is the same across the product line.

Here are the details:

  • DS8884F Business Class
    • Built with IBM Power Systems S822
    • 6-core POWER8 processor per S822
    • 256 GB Cache (DRAM)
    • 32 Fibre Channel/FICON ports
    • 6.4 TB to 154 TB of flash capacity
  • DS8886F Enterprise Class
    • Built with IBM Power Systems S824
    • 24-core POWER8 processor per S824
    • 2 TB Cache (DRAM)
    • 128 Fibre Channel/FICON ports
    • 6.4 TB to 614.4 TB of flash capacity
  • DS8888F Analytics Class
    • Built with IBM Power Systems E850
    • 48-core POWER8 processor per E850
    • 2 TB Cache (DRAM)
    • 128 Fibre Channel/FICON ports
    • 6.4 TB to 1.22 PB of flash capacity

The Flash Enclosure Gen2 provides significant improvements over the previous generation in both IOPS (read: 500,000, up 47%; write: 300,000, up 50%) and throughput (read: 14 GB/s, up 268%; write: 10.5 GB/s, up 288%). Aside from the performance gains seen from using all-flash, the unique architectural design attaches the enclosure to the PCIe Gen3 bus rather than going through a device adapter, eliminating a portion of the data path, which lowers latency and improves response time. The drawer itself has also been redesigned to provide higher bandwidth into the I/O drawer and to use an ASIC rather than an FPGA in the drawer itself (also improving bandwidth).
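As a quick sanity check on those percentages – reading each one as a relative gain over the previous generation, which is my assumption about how the figures were stated, not data IBM published for Gen1 – the implied Gen1 numbers can be backed out in a few lines:

```python
# My reading of the announced percentages as relative gains over the prior
# generation (an assumption, not figures IBM published for Gen1).
gen2 = {
    "read IOPS":  (500_000, 0.47),
    "write IOPS": (300_000, 0.50),
    "read GB/s":  (14.0, 2.68),
    "write GB/s": (10.5, 2.88),
}
for metric, (new, gain) in gen2.items():
    implied_gen1 = new / (1 + gain)
    print(f"{metric}: Gen2 = {new:,}, implied Gen1 ~ {implied_gen1:,.1f}")
```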

Other features of the DS8880F series include greater than “six-nines” availability; point-in-time copy functions with IBM FlashCopy; and Remote Mirror and Copy functions with Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror, IBM z/OS Global Mirror, and z/OS Metro/Global Mirror, providing disaster recovery/replication across up to 4 sites and 1,000 miles with a 3-5 second RPO (recovery point objective) for businesses where continuous operation is critical.

Summary observations

With this announcement, IBM expands its offerings of high-end enterprise-class storage by adding a new lower cost entry point that makes the systems affordable for a broader range of customers, while also enabling new use cases that require the performance of all-flash. The optimized architecture further reduces latency and response time, which will differentiate IBM in markets that require real-time analysis and results. Not only will these systems improve performance in their traditional database and OLTP realm, but the DS8880F family will become a key building block in infrastructure built to support cognitive and analytics applications.

In order to attract new customers, IBM also plans to expand marketing outreach by promoting the DS8880 systems as part of its broader cognitive computing strategy. The recent analyst briefing was a good start. Like IBM, I have been guilty of shunting aside the DS8000 family in favor of covering IBM Storwize, Cleversafe object storage and the IBM Spectrum software-defined family. In the future, I will pay closer attention. And IBM customers and prospects – you should, too.


Year End Review: IBM Systems Group

By Joe Clabby, Clabby Analytics

Tom Rosamilia, Senior Vice President of the IBM Systems group, recently presented his annual year-end recap. In his opening statement he started with a review of IBM’s mission in the IT marketplace: to “continue to transform and innovate in technology, business models, and skills for the cognitive era.” He then pointed out several IBM acquisitions that illustrate the company’s focus in these areas, including Truven Health Analytics, Explorys, Merge, Clearleap, Ustream, The Weather Company and more.

What jumped out at me was that most of the new solutions that IBM has brought to market are focused on specific industries, like healthcare, or are focused on helping other vendors bring analytics solutions to the market, such as The Weather Company. Although IBM continues to deliver the infrastructure, database and transaction products it always has, it must be noted that the company truly sees itself as a leader in analytics and cognitive solutions.

IBM Systems’ accelerated 2016

Rosamilia underscored this point with a review of the key products in his portfolio that support the company’s emphasis on analytics. These included the activities that have taken place around IBM’s POWER architecture, its z Systems/LinuxONE offering, and in storage. Coincidentally, in 2016, Clabby Analytics wrote in-depth reports on each of these topics, including this POWER9 report, this LinuxONE report and this storage/software-defined report.

Rosamilia’s POWER commentary highlighted innovations around the company’s OpenPOWER initiative (which we cover in-depth in this blog), as well as IBM’s emphasis on extraordinarily fast Power-based servers that leverage other processor architectures, such as NVIDIA graphics processing units (GPUs), to serve the analytics marketplace.

He focused on Google and Rackspace’s efforts to develop an open server specification using IBM’s forthcoming POWER9 architecture, then described 2016’s arrival of POWER LC servers that combine POWER8 processors with NVIDIA’s GPUs and NVLink interconnect technologies. Rosamilia also spent time discussing IBM’s open sourcing of its CAPI (Coherent Accelerator Processor Interface) technology and the progress being made within the OpenCAPI Consortium, as well as the company’s continuing efforts with NVIDIA to deliver PowerAI solutions.

My key take-aways from this discussion were that IBM is continuing to aggressively build “accelerated systems” that use multiple types of processors to accelerate analytics workloads, and that IBM is successfully engaging open communities to help build solutions on its POWER architecture and complementary technologies.

z Systems, LinuxONE and Blockchain

The key points that Rosamilia chose to focus on regarding the company’s z Systems/LinuxONE mainframe architecture centered on positioning LinuxONE for hybrid cloud environments; the use of z13s for encrypted hybrid clouds; the relationship between the z/OS operating environment and the Apache Spark movement (a better way of processing large volumes of analytics data than Hadoop); the EZSource acquisition (for code analysis); and the availability of secure cloud services for Blockchain on LinuxONE.

The new “news” in Rosamilia’s review of z Systems/LinuxONE was his emphasis on Blockchain and HSBN (the company’s “high security business network”). Blockchain serves as the basis for creating a new way to perform transaction processing, one that features a secure “open ledger” that is shared amongst all concerned parties during the transaction. This new approach streamlines transaction and business processes and enables significantly greater security than traditional approaches.

I had not been aware that IBM had created a service offering featuring IBM LinuxONE servers overlaid with Hyperledger that enables customers to form smart contracts, to create shared ledgers, and to gather consensus along the route of completing a transaction – all taking place in a secure and private environment. IBM claims that it is making solid headway with this offering in the securities, trade, finance, syndicated loans, supply chain, retail banking, public records and digital property management industries. Rosamilia shared examples of success stories using this LinuxONE/Bluemix offering, including the activities taking place at Wells Fargo, Walmart and Everledger.

IBM Storage in a Flash

In storage, Rosamilia focused on 2016 activities that resulted in IBM Flash and software-defined solutions. He described efforts to round out the company’s Flash array offerings from the low-end all the way through the high-end, and also described how the company is providing storage solutions driven by software and appliance designs, along with IBM’s storage-as-a-service cloud offerings.

Rosamilia also provided examples of how IBM’s software-defined storage products are being used, including a discussion of DESY (the Deutsches Elektronen-Synchrotron research facility in Germany, which is using IBM’s Spectrum Scale to analyze petabytes of data in minutes, not days), the Arizona State Land Department (super-efficient land administration using IBM FlashSystem), and bitly (using IBM Cloud Object Storage to accelerate consumer research with faster, easier access to data derived from capturing over 10 billion online clicks per month).

Summary Observations

Rosamilia’s year-end review of IBM Systems’ highlights was a good 50,000-foot overview of the most important activities that took place in 2016. But there is far more going on within this group than meets the eye.

Two years ago, IBM’s POWER organization was struggling: its former UNIX market stronghold was weakening as customers shifted to Linux on x86 architecture, and revenues were in strong decline. To right the ship, IBM decided to open source its POWER architecture to the industry. And, as a result, the company has revived its revenue stream while fostering advanced and innovative systems from the OpenPOWER community. What IBM’s POWER organization has done is truly remarkable: they rescued this architecture from the declines suffered by competitors, including Oracle (Sun) and HPE, opened it up for collaborative systems integration, and built incredibly powerful new system designs using POWER processors, GPUs and FPGAs (field programmable gate arrays).

For over 20 years, ever since industry pundits in the mid-1990s forecast the demise of the IBM mainframe, Clabby Analytics has taken the position that there is no other architecture better suited for processing secure transactions (and now in-transaction analytics workloads) than IBM’s z System. Given this position, we see IBM’s new LinuxONE mainframe servers as ideally positioned to support a projected major market move toward Hyperledger and Blockchain transaction processing over the coming years. This movement should greatly escalate the sale of mainframe servers. Long live the mainframe!

As for storage, the markets that IBM and every other enterprise vendor focus on have changed tremendously over the past few years as customers shifted from traditional workloads to include more compute- and data-intensive workloads (genomics, simulations, what-if analysis, cognitive), and next-generation Big Data and born-in-the-cloud applications, like Hadoop, NoSQL, Spark and Docker. Accordingly, IT executives are now looking for storage and software-defined infrastructure options that provide better IT performance, scalability, and agility at significantly lower cost. In addition, these same executives are grappling with rationalizing which workloads belong on-premises – and which workloads can be shifted to low-cost public cloud storage.

To address traditional storage requirements, as well as the new generation of compute- and/or data-intensive applications, IBM has revamped its storage line to include a complete range of solutions (including software-based offerings, services and storage hardware options such as appliances and all-Flash arrays). To many, IBM’s myriad storage offerings may seem confusing, but if you look at them from the perspective of IT managers and executives, storage needs to make full use of varying technologies, needs to accommodate private and public clouds, and needs to support both traditional applications and new workloads, including analytics. IBM storage accomplishes all of these objectives.

IBM’s year-end review was excellent at a high level. But for more details on each initiative, take a look at our free reports on IBM LinuxONE, IBM storage and software defined storage, and POWER architecture at www.ClabbyAnalytics.com.


INETCO – Monitoring and Analytics for EMV Chip Card Transactions

By Jane Clabby, Clabby Analytics

EMV (Europay, MasterCard and Visa – the three companies that developed the EMV standard) is a global standard for credit/debit cards that use computer chips to authenticate and secure chip-card transactions. More secure than magnetic stripe (magstripe) cards, EMV chip cards generate a unique, encrypted transaction code each time the card is used for payment. As a result, if transaction data is stolen, it cannot be used again – greatly reducing the risk of counterfeit card fraud. Businesses and credit card companies are in the midst of a transition between magstripe-based transactions and chip-based transactions.
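As a loose illustration of why a captured chip-card transaction cannot be replayed, here is a simplified sketch using an HMAC and a transaction counter. This is my own stand-in, not the actual EMV ARQC algorithm (which derives session keys inside the chip); the point is only that the ever-incrementing counter makes every cryptogram single-use.

```python
# Simplified stand-in for an EMV cryptogram; illustrative only, not the real spec.
import hashlib
import hmac

CARD_KEY = b"card-unique-secret"        # in reality derived and stored in the chip

def cryptogram(amount_cents, merchant, counter):
    # Mix the transaction details and the counter into a keyed hash.
    msg = f"{amount_cents}|{merchant}|{counter}".encode()
    return hmac.new(CARD_KEY, msg, hashlib.sha256).hexdigest()

last_seen = 41                           # issuer tracks the last counter it accepted

def authorize(amount_cents, merchant, counter, presented):
    global last_seen
    expected = cryptogram(amount_cents, merchant, counter)
    if counter <= last_seen or not hmac.compare_digest(expected, presented):
        return "DECLINED"
    last_seen = counter
    return "APPROVED"

first = cryptogram(2500, "grocer-17", 42)
print(authorize(2500, "grocer-17", 42, first))   # APPROVED
print(authorize(2500, "grocer-17", 42, first))   # DECLINED -- replayed counter
```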

In a recent briefing, Marc Borbas, VP of Marketing at INETCO, a global provider of real-time transaction monitoring and analytics software, described how INETCO is helping ease this transition by providing insight to (1) prioritize terminal migration to EMV, (2) identify fraudulent usage patterns in non-EMV devices, (3) provide information on why EMV-capable transactions have defaulted to magstripe, and (4) discover how to reach peak EMV efficiency.

INETCO Insight and INETCO Analytics give both card issuers and merchants the ability to make data-driven decisions during the EMV transition to reduce fraud liability, improve transaction performance and optimize terminal conversion.

INETCO Background

INETCO Insight provides agent-less transaction monitoring software and a data streaming platform for a real-time, end-to-end operations view into the performance of all digital banking transactions in all banking channels, self-service networks, and payment processing environments. By using real-time network-based data capture rather than collecting data in log files, for example, all transaction data is collected – including failed transactions, transaction attempts, fragments and duplicates, enabling IT operators to proactively identify issues.

As banks increasingly shift from human interaction to a range of self-service channels, valuable customer-oriented data is generated that can help financial institutions better serve their customers. The INETCO Analytics on-demand banking and customer analytics platform analyzes this collected data to provide business insight into how customers are interacting with retail banks and financial institutions – improving both profitability and customer experience. EMV migration is one example of how INETCO can collect and analyze transaction data to provide business value.

EMV transition

The adoption of EMV has already contributed to a drop in counterfeit fraud rates, with Visa reporting, in May 2016, a 47% drop in counterfeit fraud at EMV-enabled merchants compared to the previous year. This may sound like good news, but according to The Strawhecker Group, by September 2016 only 44% of US retailers were estimated to have EMV-capable terminals, while only 29% could actually accept chip-based payments. As the window of opportunity closes and merchants and credit card companies make the change-over, MasterCard has seen a 77 percent increase in counterfeit card fraud year-over-year among merchants who have not completed the transition to EMV. In fact, $4 billion in fraud is expected this year, and as much as $10 billion is predicted between now and 2020, according to a new study from antifraud company iovation and financial industry consultant Aite Group.

Initially, the responsibility for fraudulent transactions was with the card issuer, but after October 1, 2015, liability shifted to the merchant (in most cases) if they had not upgraded systems to use chip-enabled cards, devices and transactions. This provides an incentive for both card issuers and merchants to make a speedy transition to chip technology.

But this transition is not without its challenges; with better security comes increased complexity. The transaction size is larger, and the chip, card and PIN all need to be verified – which may impact performance. The new terminals are expensive and software may need to be upgraded to ensure interoperability. In addition, there are many decisions surrounding the transition itself. Which terminal locations should be migrated first? What is the competition doing? What is my transaction volume and which terminals are the most profitable? What is the customer density in a given location?

INETCO and EMV

As stated earlier, there are four ways that INETCO can help with EMV migration. Let’s look at each of these in greater detail.

  1. Prioritize terminal migration to EMV – Many factors should be considered when looking at which terminals to migrate first. INETCO’s analytics can determine where non-compliant terminals are located and assess the impact. By analyzing transaction volumes through a particular terminal and/or profitability, these terminals can be upgraded first. By looking at customer density within a particular location, businesses can decide to move new terminals to a different location.
  2. Identify fraudulent usage patterns in non-EMV devices – For businesses in the midst of the transition to EMV, INETCO can collect and analyze information to detect activity in magstripe transactions that could indicate fraudulent usage, minimizing financial exposure. For example, if a transaction is identified as high-risk based on a particular pattern (volume, time of day, location, etc.), the transaction can be declined (a simple sketch of this kind of rule appears after this list).
  3. Provide information on why EMV-capable devices have defaulted to magstripe – Since many businesses now are running magstripe and EMV in parallel as they shift over, it is important to identify why a particular transaction that should have been processed as EMV wasn’t. INETCO can easily spot transactions that should have gone EMV, and identify the source of the issue and also help with charge-back dispute resolution. In addition, operators can see the split between magstripe and EMV in real-time and/or set an alert if the threshold reaches a specified metric.
  4. Discover how to reach peak EMV efficiency – INETCO has identified three dimensions that should be considered during EMV migration. First, businesses should look at terminals: what percentage has been converted? Configuration, certification and roll-out issues should be identified so that each conversion benefits from knowledge gained in previous ones, and merchants should be educated to understand the benefits of converting. Second, businesses should look at cardholders: what percentage of the card base is active, and how are the cards functioning? Consumers, too, should be educated on the security benefits of using chip cards. Finally, businesses must look closely at the transactions themselves and how to achieve top performance: are there software upgrade and interoperability issues affecting transactions? If so, what is the impact, and how can the issue be resolved?
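
To make the terminal-prioritization idea in item 1 concrete, here is a minimal sketch that ranks non-EMV terminals by a weighted blend of transaction volume and profitability. The field names, weights and sample figures are illustrative assumptions, not INETCO’s actual scoring model; in practice the weights would be tuned to a bank’s own economics, and customer-density data could be folded in as a third factor.

```python
# Hypothetical sketch: rank non-EMV terminals for migration priority.
# Field names, weights and sample figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Terminal:
    terminal_id: str
    monthly_txn_volume: int   # transactions per month
    monthly_profit: float     # estimated profit contribution (USD)
    emv_capable: bool         # already EMV-capable?

def migration_priority(t: Terminal,
                       volume_weight: float = 0.6,
                       profit_weight: float = 0.4) -> float:
    """Higher score = migrate sooner. Weighted blend of volume and profit."""
    if t.emv_capable:
        return 0.0
    return volume_weight * t.monthly_txn_volume + profit_weight * t.monthly_profit

terminals = [
    Terminal("ATM-001", 12_000, 4_500.0, False),
    Terminal("ATM-002", 3_000, 900.0, False),
    Terminal("ATM-003", 20_000, 7_800.0, True),
]

for t in sorted(terminals, key=migration_priority, reverse=True):
    print(t.terminal_id, round(migration_priority(t), 1))
```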
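
Similarly, the real-time magstripe/EMV split and threshold alerting described in item 3 can be pictured as a small monitoring loop over collected transactions. The transaction fields and the 40% alert threshold below are assumptions for illustration; INETCO derives these views from its own network-level data collection.

```python
# Hypothetical sketch: track the magstripe vs. EMV split in a batch of
# transactions and raise an alert when the magstripe share crosses a threshold.
from collections import Counter

MAGSTRIPE_ALERT_THRESHOLD = 0.40  # assumed operator-configured threshold

def monitor_split(transactions):
    counts = Counter()
    fallbacks = []
    for txn in transactions:
        counts[txn["entry_mode"]] += 1
        # An EMV-capable card read via magstripe at an EMV-capable terminal
        # suggests a fallback worth investigating (or a chargeback dispute).
        if (txn["entry_mode"] == "magstripe"
                and txn["card_emv_capable"]
                and txn["terminal_emv_capable"]):
            fallbacks.append(txn["txn_id"])

    total = sum(counts.values()) or 1
    magstripe_share = counts["magstripe"] / total
    if magstripe_share > MAGSTRIPE_ALERT_THRESHOLD:
        print(f"ALERT: magstripe share {magstripe_share:.0%} exceeds threshold")
    return magstripe_share, fallbacks

sample = [
    {"txn_id": "T1", "entry_mode": "emv", "card_emv_capable": True, "terminal_emv_capable": True},
    {"txn_id": "T2", "entry_mode": "magstripe", "card_emv_capable": True, "terminal_emv_capable": True},
    {"txn_id": "T3", "entry_mode": "magstripe", "card_emv_capable": False, "terminal_emv_capable": True},
]
print(monitor_split(sample))
```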

Summary observations

With so many changes in the way consumers interact with banks via mobile, online and other self-service channels, INETCO has evolved to support not only IT operators in proactively identifying performance issues, but business managers as well. With collected transaction data and INETCO Analytics, customer behavior and preferences can be analyzed to improve user experience, and fraud patterns can be detected for better risk management, providing a much broader range of potential use cases for INETCO.

During the briefing, Borbas introduced the concept of the “Uberbanked Consumer”: today’s consumer faces many banking options outside the realm of traditional banking, where a loyal customer once selected a single bank to provide checking, savings, money market accounts, mortgages and other financial needs. The Uberbanked consumer values convenience, customer experience and a good deal, and will use a range of solutions from a range of financial institutions, both traditional (a bank) and non-traditional (Venmo). Because these users are fickle, transaction performance is becoming more and more important as traditional financial institutions compete to maintain customer loyalty and mindshare. This is another use case where INETCO can provide unique value.

I also inquired about Blockchain, a technology that originated in 2009 with Bitcoin as a way to record financial transactions and has since evolved to record the transfer and ownership of assets in many industries, providing database records that are validated and propagated and, more importantly, cannot be changed or deleted. INETCO is following this potentially disruptive trend closely, and believes there may be a future opportunity for the company to provide a centralized view of a system that is inherently decentralized.

I was pleased to see that INETCO has stuck to its roots in transaction monitoring and analytics for payment processing and financial institutions. At the same time, the company is well-positioned for the future, embracing new types of users, additional retail banking channels, adjacent industries (such as retail) and a growing portfolio of use cases.


Western Digital Adds Entry-Level Cloud-Scale Object Storage System

By Jane Clabby, Clabby Analytics

On November 15, 2016, Western Digital introduced a new addition to its object-based storage (OBS) solution portfolio — the ActiveScale P100. The integrated turnkey system is an easy-to-deploy, entry-level system that scales modularly from 720TB to 19PB of raw capacity, and is designed for Big Data applications across on-premises and public cloud infrastructure in a range of industries including life sciences, media and entertainment, and government/defense. Included in the new offering (and also available for the existing Active Archive System) is ActiveScale CM (cloud management), a new cloud-based monitoring tool that provides remote system health monitoring and predictive performance and capacity analytics.

Background

According to IDC, file and object storage is expected to be a $32 billion market by 2020. Consisting primarily of unstructured data sets, these large volumes of data are increasingly being used for Big Data analytics in applications such as fraud detection, machine learning, genomics sequencing, and seismic processing.

The Western Digital OBS family includes the new ActiveScale P100 and the Active Archive System. Both are scale-out OBS solutions that provide the benefits of object storage, including massive scale, easy and immediate access to data, and data longevity. Vertical integration makes these systems easier to buy, deploy and manage, and factory tuning and optimization provide better performance than solutions assembled from white-box or other DIY components.

Major features of the new ActiveScale P100 and existing Active Archive System include:

  • Amazon S3-compliant scale-up and scale-out solution – ideal for Big Data applications (a minimal access sketch appears after this list).
  • Strong consistency ensures that data is always up-to-date.
  • BitDynamics continuous data scrubbing provides integrity verification and self-healing.
  • Advanced erasure coding offers better data durability (up to 15 nines) than traditional RAID systems.
  • BitSpread provides data protection without replication, so capacity requirements are reduced.
  • ActiveScale SM (system management) provides a comprehensive, single-namespace view across scale-out infrastructure.
  • ActiveScale CM is a new cloud-based monitoring tool that provides remote system health monitoring and predictive performance and capacity analytics for multiple namespaces.
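
Because both systems are Amazon S3 compliant, existing S3 tooling should work against them largely unchanged. The sketch below points the standard boto3 client at a placeholder endpoint with placeholder credentials and bucket names; it illustrates typical S3-compatible usage rather than Western Digital’s documented procedure.

```python
# Minimal sketch: an S3-compatible object store can be addressed with a
# standard S3 client. Endpoint URL, credentials and bucket name are
# placeholders, not real values from Western Digital.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://activescale.example.com",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

# Store and retrieve an object exactly as one would against Amazon S3.
s3.put_object(Bucket="genomics-archive", Key="run-042/sample.bam", Body=b"...")
obj = s3.get_object(Bucket="genomics-archive", Key="run-042/sample.bam")
print(obj["ContentLength"])
```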

Active Archive Customer Examples

  • Ovation Data, a data management company, uses the Active Archive System in conjunction with Versity Storage Manager (VSM) to build private storage clouds for customers in the oil & gas industry. The company selected the Active Archive System because it provides the economics of tape storage with the performance of disk storage. The solution delivers cloud-scale storage, automated tiering to object storage and tape, and speedy, cost-effective access to data stored over a long period of time, improving efficiency and enabling users to make data-driven decisions that drive business value. The system also plugs directly into VSM without any modifications for quick, easy deployment.
  • EPFL (École Polytechnique Fédérale de Lausanne) is using Active Archive System for the Montreux Jazz Festival Project, a digital archive storing 50 years of the festival’s audio-visual content and accessible by academics for research and education purposes, and ultimately by the general public for their listening pleasure.

ActiveScale P100 – A Closer Look

The ActiveScale P100 is a modular turnkey system developed in response to customer demand for an easy-to-deploy system at an entry-level capacity, performance and price point.

The system includes three 1U system nodes combined with 10TB HelioSeal drives that form a cluster (metadata is replicated across all system nodes) over a 10Gb Ethernet backplane, and additional capacity can be snapped in as 6U increments to expand the system. Scale-out configuration options include a capacity configuration expandable to 9 racks and 19,440TB of raw capacity, as well as a performance configuration expandable to 2,160TB of raw capacity with throughput of up to 3.8GB per second. Configurable erasure coding allows software policies to be set along a durability and capacity continuum to determine the best match for a given workload.
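
To illustrate the durability-and-capacity continuum that configurable erasure coding exposes, the arithmetic below compares a few hypothetical data-plus-parity policies against simple 3x replication, using the P100’s 720TB entry configuration as the raw capacity. The specific policy numbers are assumptions for illustration, not Western Digital’s actual BitSpread parameters.

```python
# Illustrative sketch of the capacity/durability trade-off behind configurable
# erasure coding. The (data, parity) policies below are assumptions, not
# Western Digital's actual BitSpread parameters.
def usable_fraction(data_shards: int, parity_shards: int) -> float:
    """Fraction of raw capacity available for user data under k+m erasure coding."""
    return data_shards / (data_shards + parity_shards)

raw_tb = 720  # entry-level raw capacity cited for the ActiveScale P100

for k, m in [(18, 5), (13, 4), (9, 3)]:
    print(f"{k}+{m} policy: ~{usable_fraction(k, m) * raw_tb:.0f} TB usable "
          f"({usable_fraction(k, m):.0%} of raw), tolerates {m} shard losses")

# For comparison, 3x replication keeps only one third of raw capacity usable.
print(f"3x replication: ~{raw_tb / 3:.0f} TB usable (33% of raw)")
```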

ActiveScale SM provides dashboard-based, real-time system management and monitoring of system health to proactively identify potential problems such as performance or capacity issues. Installation, configuration, upgrades and capacity expansion are also managed through this comprehensive view, and management wizards automate many common functions.

ActiveScale CM collects and analyzes historical system data to identify trends and patterns for operational and business insight. It can correlate and aggregate multiple metrics across the entire namespace for a comprehensive view of storage infrastructure, or it can examine specific elements, such as individual geographies or applications, to identify potential problem areas – providing better management of SLAs, improving efficiency, and reducing management costs.

Summary observations

As businesses recognize the value of collecting and analyzing years’ worth of data – largely unstructured data collected from a wide variety of sources – traditional storage options don’t provide the flexibility or scalability required to support today’s Big Data applications. Tape can cost-effectively store data for many years, but isn’t easily accessible. SAN and NAS storage weren’t designed to support cloud-scale applications or to store unstructured data such as images or video. So many businesses have turned to object storage to overcome these limitations. Public cloud object stores like Amazon S3 and Microsoft Azure have become “go-to” solutions for storing large volumes of structured and unstructured data at cloud scale. Object storage cost-effectively provides massive scale, is designed for unstructured data and the new data types generated by sensors and IoT (the Internet of Things), and provides easy access to stored data.

The ActiveScale P100 provides the benefits of object storage in a modular turnkey system that is easy to scale, deploy, manage and grow. The vertically integrated software, networking, storage and server rack-level system is up and running without the challenges and hidden costs associated with assembling a similar system from components. The integrated components have been tuned for optimal performance, and the built-in predictive analytics of ActiveScale CM enable proactive management. With an entry-level price point and easy expandability, the ActiveScale P100 enables businesses of all sizes, in any industry, to take advantage of cloud-scale data storage, easy data access, and Big Data analytics for a range of workloads.
