Amidst Continuing Growth, Compuware Introduces Topaz on AWS and CloudBees Jenkins Enterprise

Introduction

If you had asked me three years ago what I thought of Compuware, I would have described it as “a point product company in managed decline.” At the time, Compuware was bifurcated between mainframe point solutions and application performance management software. Sales had softened; it was slow to release new products; and its portfolio was “stagnant.” In short, the company was struggling.

But, in late 2014, everything changed for Compuware with a cash investment infusion; the hiring of a new, more focused management team; major changes in company culture (including a stronger emphasis on innovation); and the introduction of a new strategy strongly focused on development/operations (DevOps), build/deploy, data management and cybersecurity. Accordingly, I wrote a report at the end of 2015 that described the new Compuware.

Nearly two years later, I see Compuware as a company focused on making it easy for customers to consume its product offerings – while at the same time being optimized to create new products and services. Its two most recent announcements cover Topaz on AWS (Amazon Web Services) and expanded support for CloudBees Jenkins Enterprise.

The Topaz on AWS Announcement

Since January 2015, when Compuware promised to introduce new products and/or significant technology improvements every quarter, the company has been on a tear, expanding its Topaz DevOps product line to include Topaz for Enterprise Data, Topaz for Program Analysis, Topaz for Total Test and Topaz Workbench – and now offering its Topaz development environment in the cloud with Topaz on AWS.

A casual observer might look at Compuware’s new Topaz on AWS and think: “no big deal – many other vendors have taken their on-premises offerings and moved them to AWS and other public clouds.” But those observers would be wrong – there is much more going on below the surface of the Topaz on AWS story. In fact, Topaz on AWS:

  • Enables users to get access to the latest and greatest features and functionality of Topaz without having to go through internal channels to upgrade and deploy their on-premises Topaz environments. This is hugely important because it provides sophisticated DevOps functionality to users more quickly than ever before – rapidly improving developer productivity;
  • Simplifies the deployment of DevOps software, tools and related infrastructure – enabling enterprises to deploy DevOps solutions in minutes rather than waiting days or weeks. Customers set up Topaz on AWS by answering a few questions on a template – and they’re off to the races;
  • Frees administrators from having to support on-premises desktop deployments; and,
  • Enables enterprises to scale Topaz on demand – so the red tape involved in getting tools to developers to support new initiatives suddenly disappears.

In short, Topaz on AWS users are able to increase their productivity and scale capacity whenever needed (especially useful for new projects) – while quickly gaining access to one of the richest mainframe DevOps environments on the market.

Topaz on AWS leverages Amazon’s AppStream 2.0 technology – a managed, secure application streaming service that allows users to stream desktop applications from AWS to any device capable of running a Web browser. AWS maintains an extensive global infrastructure that ensures a highly secure and performant user experience.
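Compuware supplies the template that drives the actual setup; the sketch below is only meant to show roughly what provisioning an AppStream 2.0 fleet and stack looks like programmatically with the AWS SDK for Python (boto3). The image name, instance type and fleet size are hypothetical placeholders, not values Compuware publishes.

```python
# Minimal sketch: provisioning an AppStream 2.0 fleet and stack with boto3.
# Image name, instance type and sizing are hypothetical placeholders.
import boto3

appstream = boto3.client("appstream", region_name="us-east-1")

# Create a fleet of streaming instances from a (hypothetical) Topaz image.
appstream.create_fleet(
    Name="topaz-dev-fleet",
    ImageName="topaz-workbench-image",      # placeholder image name
    InstanceType="stream.standard.medium",  # size to the developer workload
    ComputeCapacity={"DesiredInstances": 5},
)

# Create a stack (the user-facing entry point) and associate the fleet with it.
appstream.create_stack(Name="topaz-dev-stack")
appstream.associate_fleet(FleetName="topaz-dev-fleet", StackName="topaz-dev-stack")

# Start streaming; developers then connect from any HTML5-capable browser.
appstream.start_fleet(Name="topaz-dev-fleet")
```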

Also noteworthy, there are no new charges for Topaz. Licenses can be used on premises, in the cloud, or in a mixed environment. Users do, however, need to pay to use the Amazon cloud (provisioning a virtual server in AWS should be in the $0.10-per-hour range).

The CloudBees Jenkins Enterprise Announcement

In addition to its new Topaz on AWS offerings, Compuware also announced that it is collaborating with CloudBees, a maker of build, test and deployment products that speed applications to production using continuous delivery practices. Compuware’s ISPW (a source code management and release automation environment), as well as Compuware’s Topaz for Total Test, can be integrated with CloudBees Jenkins Enterprise (an automated continuous delivery environment). The combination makes it possible for application developers to more easily integrate and orchestrate DevOps efforts across diverse platforms.

Also worth mentioning, developers now have the ability to set up Webhooks in ISPW to stream information about ISPW events to other DevOps tools and to communicate with Jenkins to drive Continuous Integration processes. Team members can stay apprised of DevOps activities via Slack and HipChat.
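To make the mechanics concrete, here is a minimal sketch (in Python, using Flask and Jenkins’ standard remote-trigger API) of a receiver that accepts an ISPW-style event and kicks off a Jenkins build. The endpoint path, payload field names and job name are assumptions made for illustration; the actual ISPW event format is documented by Compuware.

```python
# Minimal sketch of a webhook receiver that triggers a Jenkins job.
# Payload fields, job name, URL and credentials are hypothetical placeholders.
from flask import Flask, request
import requests

app = Flask(__name__)

JENKINS_URL = "https://jenkins.example.com"
JOB_NAME = "mainframe-ci"          # hypothetical Jenkins pipeline job
AUTH = ("ci-user", "api-token")    # Jenkins user + API token

@app.route("/ispw-events", methods=["POST"])
def ispw_event():
    event = request.get_json(force=True)
    # Forward selected event details to Jenkins as build parameters.
    params = {
        "setId": event.get("setId", ""),       # assumed field names
        "level": event.get("level", ""),
        "eventType": event.get("eventType", ""),
    }
    resp = requests.post(
        f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
        params=params,
        auth=AUTH,
        timeout=10,
    )
    return ("", resp.status_code)

if __name__ == "__main__":
    app.run(port=8080)
```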

Summary Observations

It is a joy to watch Compuware’s progress in the mainframe DevOps arena. The company has found a way to greatly expand its portfolio using internal development resources – and it has gotten “creative” by expanding its portfolio through dynamic partnerships (see this report for details on the company’s partnerships with AppDynamics, Atlassian, SonarSource, Splunk and others). Today the Compuware portfolio also contains solutions integrated with Software Engineering of America (SEA), XebiaLabs, CorreLog, Dynatrace, BMC, and Conic IT. All told, Compuware’s DevOps portfolio now spans developer productivity, code quality, continuous integration, source code management, release automation, test data management, application performance management and cybersecurity.

To date, Compuware has successfully delivered new, innovative products and solutions for twelve straight quarters. The company is making money. And it has a very interesting roadmap for bringing new products and functionality to market that includes new offerings in application understanding, code editing, data management, quality assurance, source and release management, performance and security. With all the progress the company has made to date, combined with what I know about its future plans, I can’t wait to see where Compuware is 12 quarters from now!


New Payment Processing Demands, New Pricing on IBM z Systems

By Joe Clabby, Clabby Analytics

For decades, banks, financial institutions, retailers, transporters and other industries that process bulk payments have relied on batch processing to generate end-of-day reconciliation reports, weekly or bi-weekly payrolls, consumer bills (such as utility and phone bills) and other work that is best processed in a group or “batch”. And many of these institutions will continue to use batch processing for decades to come because it allows jobs to be shifted and run when computers are less busy, thus balancing system utilization and costs. In other words, for many jobs, batch will remain a viable, valuable approach.

However, technological innovations, new business models, customer expectations and regulatory pressures are causing a major shift in the way that many financial payments and transfers are supported:

  • Consumers want to use their smartphones to instantly transfer money or pay for items – eliminating the need to carry cash. Smartphones are becoming the new wallet – and instant funds transfer using digital currency is the new cash;
  • Consumers want deposits processed immediately;
  • Merchants want same day payments;
  • New business models based on mobile payments are evolving that overlay various services, such as authentication, fraud protection and security onto mobile transactions – making it possible to securely process transactions in real time;
  • Government entities are putting rules in place to speed payment processing. For example, in the EU, regulations require clear information on payments, fast payments, consumer protection and a wide choice of services; in the US, the government requires credit card activity reporting to the IRS, lower debit card interchange fees, and compliance with the Payment Card Industry Data Security Standard; and,
  • Consumers and enterprises are demanding the ability to instantly process payments across borders with little to no time delay.

As could be expected, a shift to real-time payments places new demands on computing systems because many jobs will need to be processed immediately – not left to be batch processed at a more convenient time for the bank or financial institution. This requirement for immediate payment processing also means that more computing capacity will be required at certain times during the day to handle spikes in demand during waking hours.

This shift in payment methodology will have a huge impact on IBM – particularly the IBM mainframe organization. IBM mainframes support 87% of the world’s credit card transactions; mainframes are used by 92 of the world’s top 100 banks – and by 23 of the top 25 retailers globally. A major shift in the way that payments are processed could alter this balance…

IBM: An Alternative Pricing Scheme

For years, IBM’s mainframe pricing was based on a “capacity pricing” model. That is, mainframe users paid for the amount of computing capacity that they used – and that capacity could be balanced over the course of a day using a batch processing model.

So, for example, an organization might have to process 100,000 transactions a day – but using a batch model it could spread the processing of those transactions over a 24-hour period, thus balancing the capacity headroom it required. Under the new real-time payment processing model, the bulk of transactions hits during waking hours, which means that an organization might have to process 75,000 of those transactions over an eight- or ten-hour period. As a result, the amount of capacity headroom required – and the related costs – would spike tremendously during that period.
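A quick back-of-the-envelope calculation, using the figures from the example above, makes the capacity problem concrete:

```python
# Back-of-the-envelope: hourly throughput needed under batch vs. real-time models.
daily_transactions = 100_000

# Batch model: spread evenly over 24 hours (a simplifying assumption).
batch_rate = daily_transactions / 24            # ~4,167 transactions/hour

# Real-time model: 75,000 of those transactions land in an 8-hour peak window.
peak_volume = 75_000
realtime_peak_rate = peak_volume / 8            # ~9,375 transactions/hour

print(f"Batch:     {batch_rate:,.0f} transactions/hour")
print(f"Real-time: {realtime_peak_rate:,.0f} transactions/hour at peak")
print(f"Peak headroom multiplier: {realtime_peak_rate / batch_rate:.1f}x")
```

Even in this simplified sketch, peak-hour throughput (and therefore capacity headroom) more than doubles – and that spike is exactly what a pure capacity pricing model penalizes.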

If IBM were to stay with its current capacity pricing model, many mainframe customers would see a huge increase in processing costs which would likely drive those companies to seek less expensive alternatives.

To remedy this situation, IBM has introduced a new approach: “utility pricing” for on-premises workloads. This enables mainframe clients to reduce their computing cost when dealing with high capacity peaks, uncertain volumes and growth rates, and volatile workload profiles (transaction spikes and surges). It also helps clients deal with restrictions in capital expenditures (CAPEX) and concerns about overall total-cost-of-ownership (TCO).

The way the solution works is that a customer licenses the IBM payments software solution (Financial Transaction Manager, or FTM). The software needed in support of FTM – z/OS, IBM MQ (Message Queue), IBM DB2 and IBM Operational Decision Manager Standard – is then charged per use for the payments processed. That covers just payments processed in a customer’s production environment (FTM has a counter that accurately collects and reports on the volumes processed).
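To see why per-use charging matters, consider a deliberately simplified cost model. All of the rates and volumes below are invented for illustration; they bear no relation to IBM’s actual FTM pricing.

```python
# Illustrative comparison of capacity-based vs. per-payment ("utility") pricing.
# All rates and volumes are hypothetical; IBM's actual pricing differs.

def capacity_cost(peak_mips: float, dollars_per_mips: float) -> float:
    """Capacity model: you pay for the headroom needed at the daily peak."""
    return peak_mips * dollars_per_mips

def utility_cost(payments_processed: int, dollars_per_payment: float) -> float:
    """Utility model: you pay per production payment actually processed."""
    return payments_processed * dollars_per_payment

# Hypothetical month in which real-time peaks force ~2.25x the batch-era capacity.
print(capacity_cost(peak_mips=2_250, dollars_per_mips=40.0))                 # 90000.0
print(utility_cost(payments_processed=3_000_000, dollars_per_payment=0.02))  # 60000.0
```

The point of the model is not the specific numbers but the shape of the curve: under utility pricing, cost tracks business volume rather than the peak capacity that must be held in reserve.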

In on-premises mainframe systems the FTM package is offered as a software-only or a combined software and hardware solution. Clients who choose to buy the software-only stack will see a range of benefits. These include lowered cost/risk around peak demands, predictable pricing linked directly to business/transaction volumes, no impact on other licensing costs (MLC), co-location with other core banking applications on their mainframes, and charges only for production payments. Clients who choose the combined software/hardware offering will see all of the above benefits, as well as significantly lower risk by using a utility pricing model with a fixed cost per payment processed – and with limited upfront hardware spend.

Summary Observations

The most difficult objection to overcome when it comes to mainframe adoption is price, especially acquisition costs. Mainframe hardware, compared to most x86-based servers, is expensive. And related systems software, middleware, transaction processing environments and management software can also be expensive. In head-to-head competition, x86 solutions almost always look cheaper – and, accordingly, IT executive managers most often purchase x86-based servers on the basis of that perceived lower price.

With this new utility pricing model for payment processing, IBM has taken a giant step forward – moving beyond capacity pricing for highly variable workloads to correct what some perceive as a punitive, even disastrous pricing scheme. By doing so, the company is also protecting its mainframe base as the transition to real-time payments takes place, and opening new, future opportunities for its z Systems as demands for stronger security and higher system capacity drive more prospects to consider z Systems mainframes.

I have to congratulate IBM’s z marketing organization for taking this corrective action. They have properly assessed the payments market – and have positioned z Systems to compete effectively and aggressively in this fast-growing market opportunity. What appears to be going on is that the company is taking a closer look at its systems features – the characteristics of processors; communications subsystems; file subsystems; scale-up and scale-out capabilities; security capabilities; queuing facilities; and more – and then looking more closely at workload demands.

In this case, z Systems marketing saw a major threat developing, worked with development to structure a utility pricing offering leveraging IBM’s own FTM payments software – and has presented its prospects and customers with a viable and attractive pricing alternative. Good work, IBM z Systems marketing!


IBM LinuxONE: A Strategy Refinement

By Joe Clabby, Clabby Analytics

Clabby Analytics has argued for years that IBM needs to do a better job of explaining which workloads belong on which servers (x86, Power Systems, mainframes). Our primary argument has been that microprocessors process workloads differently; and systems are designed differently – meaning that workloads perform better when placed on systems that are best suited to process them. IBM has traditionally resisted providing such guidance, leaving sales teams and customers/prospects to work out which workloads belong on which processors/servers.

Last year, we took it upon ourselves to publish this report in which we discussed which workloads belong on LinuxONE vs. x86 servers. Robert Francis Group also published a similar report. IBM, on the other hand, continued to focus its sales efforts on server consolidation and the price advantages LinuxONE had over distributed x86 server environments (upwards of 30% cost savings for certain workloads).

This year, IBM seems to have gotten the message: to further increase sales of LinuxONE it’s going to have to do some workload positioning work. Accordingly, IBM has done a strategic rethink of LinuxONE positioning. The “new think” at IBM is that LinuxONE should be pitched as: 1) a powerful, scale-up server environment that is ideal for data-intensive processing; and, 2) an unparalleled secure server environment. Pricing will become a secondary, corollary argument. We agree with this new positioning – and this article explains why.

Background

IBM introduced Linux on its mainframe architecture almost 20 years ago. In the early days, adopters initially deployed the then not-quite-enterprise-grade operating environment on isolated partitions on their mainframes where they could experiment with using mainframe power and scale to drive custom as well as open source Linux applications. As Linux continued to improve, adopters became more comfortable with Linux as a resilient and secure operating environment – and the adoption level of Linux on the mainframe rose steadily.

As demand for Linux on the mainframe continued to increase, IBM decided that it would be wise to build and promote Linux-only mainframes – turnkey, integrated Linux mainframes configured and priced to compete head-to-head with large Intel x86-based servers. Accordingly, less than two years ago, at LINUXCON, IBM announced its “LinuxONE” servers. Since that announcement, IBM’s LinuxONE servers have experienced strong growth as consolidation servers, as database servers, and as servers for new generation Linux applications such as Blockchain.

A Shift in Buying Patterns

IBM’s strategic rethink is being driven by LinuxONE customers and prospects searching for super-strong security as well as outstanding data processing, scalability and management:

  • The strongest growth for LinuxONE is occurring in China, where LinuxONE has become recognized as a highly secure, high-performance – and very price-competitive – server for a wide range of Linux and open source applications;
  • New applications such as Blockchain are also making their way to LinuxONE, driven by performance and security Quality-of-Service requirements; and,
  • IBM customers are starting to realize that it makes more sense to run “a single version of the truth” database on highly scalable, scale up servers such as the mainframe as compared to the complexity of controlling multiple copies of databases across distributed servers.

The Security Difference

IBM’s mainframe architecture has long had very significant design advantages over x86-based servers – with clear and distinct advantages found at the microprocessor level as well as in the overall system design:

  • At the microprocessor level, a sizeable portion of microprocessor real estate is devoted to encryption/decryption services.
  • At the system level, tamper-responding cryptography cards can be added to address security and compliance requirements.
  • At the operating system level, the operating environment can control access to logical partitions – protecting them from internal and external exploits.

With these design advantages, mainframe architecture has been able to achieve a Common Criteria security ranking of EAL Level 5+ (no x86 server has achieved this ranking). And IBM’s Crypto Express6S has been designed to meet the Federal Information Processing Standard known as FIPS 140-2 Level 4 using PCIe interconnect. Add to these hardware rankings that IBM also offers a comprehensive set of security tools – including zSecure and Watson-enabled QRadar – as well as many related security services, and the competitive advantages of the IBM LinuxONE architecture and software/service portfolio over x86 servers and the related security ecosystem become readily apparent.

When it comes to security on IBM’s mainframe, it should also be noted that in July of this year IBM introduced a new approach to security with “pervasive encryption” (which encrypts all data within a mainframe environment), as well as secure service containers that help isolate workloads to prevent tampering. With pervasive encryption, IBM’s mainframe can encrypt data-at-rest without application changes – while offering tremendously better performance than x86 architecture while doing so. Data-in-flight (networked data) can also be encrypted and protected with full end-to-end network security. And pervasive encryption helps enterprises lower their compliance testing costs because auditors no longer have to check to see what data is protected and how – instead, ALL data is encrypted. Finally, it is important to note that mainframes and LinuxONE servers offer industry-leading secure Java performance via TLS (2-3x faster than Intel).

IBM’s Secure Service Containers are worth a closer look. What IBM announced was a statement of direction regarding a Security on Demand (SoD) service that will allow its clients to use Docker container technology and secured containers to build new services of their own design. IBM’s Secure Service Containers completely protect memory by isolating it in LPARs so that no peer environment can access memory in another container. With Secure Service Containers, a known good image boots in firmware; all data and code are encrypted by default, which can also help clients protect sensitive and/or proprietary code; and system administrators can’t gain access to that data through remote command line access, nor directly access the operating system. With SSCs, IBM has clearly concentrated on preventing access to data from external as well as internal forces.

IBM is also quick to point out that there are clear and distinct differences in the way that the x86 world approaches security as compared with the mainframe world. Intel offers a facility known as Software Guard Extensions (SGX), which essentially creates a secure enclave of protected memory (only 90 MB, however). To use SGX, developers need to write to the SGX APIs. So, to tighten security in x86 container environments, the x86 ecosystem needs to step forward and write code – and should a developer miss a line of code or write to the wrong API, an application becomes insecure. Contrast this approach with IBM’s Secure Service Containers, which require NO software code changes to take advantage of container security for applications. If administrators want to expose administrative functions, such as starting a cron job or scheduling a daily backup, they can do so by exposing those functions through Web pages or RESTful services. If an administrator cannot properly access services, they have to fix their access – but they’re not operating under the belief that an application is secure when it isn’t. Secure Service Containers provide a secure infrastructure for deploying virtual appliances, helping to contain and address infrastructure security threats and vulnerabilities.

Couple the mainframe’s solid hardware security (including tamper responding crypto) with IBM’s rich software security portfolio – and with related services – and enterprises that are extremely security conscious would be hard pressed to find a more secure solution than IBM’s z14 and LinuxONE servers.

LinuxONE as a Scale-up Database Environment

What is easier to control: 1) multiple copies of a database distributed amongst many servers; or, 2) a single version of a database running on a high performance, scale-up server? Obviously, the answer is #2.

With billions of dollars invested in building mainframe architecture, IBM has architected the world’s most powerful and scalable scale-up commercial server environment. And the company has an explicit desire to capture as much enterprise data as possible on its scale-up mainframe platform. So it should come as no surprise that IBM’s LinuxONE marketing organization wants to go after data intensive workloads.

From a performance perspective, IBM’s LinuxONE offers industry-leading performance when processing Java workloads (up to 50% faster than Intel).

From a system design perspective, mainframes can vertically scale within the same frame to 170 cores, equivalent to hundreds of x86 cores. Mainframes can use up to 640 POWER-based cores to deliver unparalleled channel input/output, as well as to ensure data integrity by checking data quality. Mainframes offer advanced single instruction, multiple data (SIMD) extensions that enable processors to perform the same operation on multiple data points simultaneously (critical to financial applications). Further, mainframes offer pause-less garbage collection to enable vertical scaling while maintaining predictable performance. Mainframes can support large data-in-memory applications by providing access to 32TB of main memory, and access to much larger caches than Intel offers. With mainframe architecture, it is also possible to build a large cloud within a single box.
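The SIMD point is easiest to see in software terms. The NumPy sketch below applies a single operation to an entire array of values at once; it is only an analogy for what vector hardware does per instruction, not z/Architecture vector code, and the balances and rate are made-up figures.

```python
# Software analogy for SIMD: one operation applied to many data points at once.
import numpy as np

balances = np.array([1_000.00, 2_500.00, 10_000.00, 750.00])  # illustrative values
daily_rate = 0.0001

# Scalar style: one multiply per element, one at a time.
interest_scalar = [b * daily_rate for b in balances]

# Vectorized style: a single expression operates on all elements together,
# analogous to how SIMD hardware processes multiple data points per instruction.
interest_vector = balances * daily_rate

assert np.allclose(interest_scalar, interest_vector)
```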

Which data workloads belong on the LinuxONE mainframe? Clearly those workloads that need access to large memory. And clearly those workloads that need near real-time results (such as banking applications where transactions and balances must be done in near real time). And those workloads that require strong security. In short, secure, time sensitive stateful workloads such as databases and systems of record where data is persistent should be deployed on mainframe architecture.

Which data workloads do not belong on the LinuxONE mainframe? Clearly workloads where time is not a critical factor – such as e-mail, Web searches or Twitter tweets. Or small workloads that don’t need scale. Or workloads with low security Quality-of-Service requirements.

Summary Observations

For at least a decade we have argued that workload characteristics should dictate system choice – and that no single system is ideal for all jobs. Accordingly, IT executives should understand the differences between mainframe and x86 architectures in order to make the optimal choice for executing various workloads.

The way that IBM’s LinuxONE (mainframe) z processors process work is distinctly different from the way that x86 processors process work. In short, some microprocessors focus on processing large numbers of threads (examples: x86, POWER and SPARC) – while the LinuxONE microprocessor focuses on placing data in large cache and then quickly executing threads (a process known as stacking).

Mainframe system design is also distinctly different than typical x86 server designs. Mainframes feature access to very large memory (32TB); they have very large I/O channels; they can have dozens of communications processors to offload network processing from the CPU; they offer super strong security – and they can scale vertically much higher than x86 based servers. Plus mainframes can operate at 100% of capacity for sustained periods whereas typical x86-based servers run in the 50-60% range. With all of these advantages, it is easy to argue that IBM’s LinuxONE mainframes are particularly well suited for processing data intensive workloads. LinuxONE is an especially powerful solution when used for making business decisions based on a central source of truth.

As for security, given the huge data breaches and constant stream of attacks on corporate databases, security is becoming a key decision factor in server selection. With major advantages in processor design, in server design, in pervasive encryption, Secure Service Containers and in its software and service portfolios – as well as in FIPS and EAL certifications – LinuxONE is clearly the better choice when compared to x86 solutions.

According to a report published last year by the Robert Francis Group, the best workloads to put onto LinuxONE are “applications requiring rapid disaster recovery, business-critical ISV applications, business connectors, data services, development of WebSphere and Java applications, email and collaboration applications, network infrastructure, virtualization and security services, and Web servers and Web application servers.” These types of applications can exploit LinuxONE strengths in reliability, availability, processing power, networking and security. But we would also add that the best workloads to put onto a LinuxONE server are those that can take advantage of LinuxONE cache to host more and more virtual machines that can then make variable use of underlying resource pools. These types of applications can be run more efficiently on LinuxONE than on comparatively low cache x86 servers.

What I’m starting to see is a set of new use cases; what I’m hoping to see from IBM’s LinuxONE organization is a litany of use cases with proof points that clearly depict and articulate why LinuxONE is a better choice for certain workloads than x86 architecture. Right now we have a strategy shift statement that will focus LinuxONE on databases and security. This is a step in the right direction – but IBM needs to provide a lot more guidance using use cases and customer proof points if it intends to convince the market that LinuxONE servers are the optimal choice to execute secure, data-intensive workloads.


A Trip to the Cyber Range: An Immersive Response and Strategic Planning Exercise

By Joe Clabby, Clabby Analytics

Does your organization have a comprehensive plan in place to deal with major data breaches? Does your organization have the right tools and the right policies and procedures in place to isolate the cause of a breach and shut it down? Does that plan include ways to analyze breaches; ways to assess the damage; and ways to interact with the onslaught of press, customers and regulators who will demand answers and assurances afterward? Do your employees know their exact responsibilities should a breach occur? These are the types of questions that business and information technology (IT) executives will find themselves asking should they visit IBM’s new X-Force Command Center Cyber Range in Cambridge, Massachusetts.

Last week, when I had the opportunity to visit IBM’s Cyber Range, I was guided to a large conference room that featured over 30 displays – an immersive, simulated security operations center. IBM security faculty started my lesson in cybersecurity breach response with an overview of the Cyber Range, followed by a discussion of goals and objectives. The individual who guided me through the exercise explained a bit about typical security challenges, and then detailed how the security drill would work. Essentially, I would be in charge of a fictitious company and guide its response to a security breach in which customer data was published on the web for all to see.

And then, all heck broke loose. The exercise started with a blindside – a crisis situation I was not prepared to handle. (I’ll not describe the blindside – it would ruin the fun for future attendees – but trust me, it catches you off-guard).

As I attempted to deal with the crisis, a litany of questions jumped into my mind:

  • What data has been published?
  • Was the leak plugged?
  • Was it internal or external?
  • At what point do I address the media, and what do I tell them?

As the crisis unfolded, I was asked in rapid fashion to handle the customer outcry and the press inquiries, and to deal with regulators. Again, this is another activity on which I’ll be scarce with details in order to maintain the element of surprise – but let’s just say this: as you go through this exercise you realize that a plan should have been put in place to handle a crisis like the one I was facing BEFORE the situation occurred. After around forty-five stressful minutes, the exercise drew to a close with a final press interview, which IBM confirmed I handled well.

Looking back at the experience, I realized that there has to be a process in place, supported by the right security tools and the right project management/process flow tools – and supported by specific individuals assigned to handle the wide variety of tasks that must be addressed should a breach occur.

The tools

As I watched the exercise unfold, I was shown a lot of tools that were used to identify the source of the breach; to analyze internal behaviors; to manage the process flow; to assess the potential damage in terms of regulatory fines; and more. I have seen most of IBM’s security product offerings in demonstrations at various conferences – but I had never seen them all in action, working together in a seamless, integrated fashion. Impressive.

I have written about several of IBM’s security tools over the past few years (see this report, this report and this report) – so I had a working knowledge of the features and functions of the respective products. I was most impressed by IBM’s Resilient environment, an incident response platform that provides a means to work collaboratively through the breach process in an organized, project management-like fashion.

IBM Resilient offers a series of working “playbooks” that help guide executives through all of the steps of the breach process. Activities can be set up in advance, with contact names, a description of the task that must be performed, and the name and contact information for individuals assigned to execute tasks. This software can even extrapolate a company’s financial exposure in terms of regulatory fines (and remember that if certain tasks aren’t executed in certain time frames, the fines go up exponentially).
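To illustrate the kind of exposure estimate such a platform can produce (the formula and figures below are my own invented stand-ins, not IBM’s), imagine a per-record fine that escalates for every day a regulatory notification deadline is missed:

```python
# Hypothetical fine-escalation model: exposure grows the longer a required
# notification is delayed past its regulatory deadline. All figures are invented.

def estimated_fine(records_exposed: int, days_late: int,
                   base_per_record: float = 2.0,
                   escalation_per_day: float = 0.25) -> float:
    """Base fine per exposed record, compounding for each day past the deadline."""
    multiplier = (1.0 + escalation_per_day) ** max(days_late, 0)
    return records_exposed * base_per_record * multiplier

# Reporting on time vs. ten days late on a breach of 100,000 records.
print(estimated_fine(100_000, days_late=0))    # 200,000.0
print(estimated_fine(100_000, days_late=10))   # ~1,862,645
```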

IBM’s i2 visual investigation and analysis tools played a major role in the breach analysis process.

As I contend in one of the above-mentioned reports, cognitive computing is on a path to become pervasive in the security community. During the course of the breach exercise, IBM’s cognitively-enabled QRadar was used to formulate a “threat query” which it forwarded to IBM Watson for Cyber Security for additional analysis.

Watson searched its corpus of structured and unstructured data – and searched for other threat entities – and let me know about threats that may have been related to the breach that I was attempting to manage. The information gleaned by Watson for Cyber Security was refined by QRadar – using insights identified by QRadar, by Watson for Cyber Security and by the security administrator. In a relatively short amount of time these tools enabled me to identify the source of the breach.

IBM’s BigFix Detect was also used in the analysis process. This program is an endpoint detection and response (EDR) environment designed to identify malicious behavior at the point where external threats begin. When used with Watson for Cyber Security, the combination can quickly – within minutes – deliver targeted remediation to endpoints that have been compromised, helping cut off cyber attacks.

IBM did not overtly try to sell me on any of these products – the company merely showed what needed to be done during a cyber attack, then used the tools at its disposal to do so. Interestingly, IBM did not mention any of the services that it could use to help me implement a breach response plan or formulate a comprehensive security plan. This was likely due to the abbreviated nature of my visit – the exercise can run anywhere from 4-8 hours, whereas I could only devote 60 minutes to it.

Service offerings

Though IBM did not describe its services, I think those offerings are worth mentioning here. I know from previous research that IBM’s security services fall into seven categories: 1) data and application security services; 2) offensive security testing; 3) incident response and intelligence services; 4) identity and access management; 5) infrastructure and endpoint security; 6) security strategy, risk and compliance; and, 7) security intelligence and optimization.

As background, IBM’s data and application security services help secure data and applications. The company’s data security capabilities are based on its Guardium offering, which provides data and file activity monitoring, threat detection analytics, vulnerability assessment and more. The company also offers application security on cloud consulting services. IBM’s offensive security testing services include a programmatic approach to security testing of all types, including human, process, hardware, IoT, application and infrastructure.

The IBM X-Force Red portal helps provide visibility into asset vulnerabilities and offensive security reporting facilities. IBM X-Force Incident Response and Intelligence Services are backed by an organization that helps prepare clients to instantly respond to security incidents. The services offered include a proactive retainer program known as IBM X-Force IRIS Vision Retainer and cybersecurity consulting services to deal with active threats.

IBM’s identity and access management services protect against breaches, and include identity and access strategy and assessment, cloud identity and insider threat protection services. IBM’s infrastructure and endpoint security aim at transforming existing email, Web, network, server and endpoint environments into modern, well secured environments through managed security services, cloud security services, network protection services (such as a managed firewall service), and through managed detection and response services.

The company’s security strategy, risk and compliance service aims at meeting regulatory requirements. This service makes recommendations for the better management of risks, compliance and governance – and includes IBM’s GDPR privacy service.

IBM’s security intelligence and optimization service aims at proactively detecting and prioritizing threats. This service is designed to assist clients throughout the Threat Management lifecycle, including strategic security operations center planning and deployment, 24×7 threat monitoring and analysis, rules and use case management, and closed-loop intelligence processing. This offering also includes services from the IBM X-Force Command Centers such as the IBM X-Force Hosted Threat Analysis Service. Managed SIEM and strategic SOC consulting are two important services in this area.

Summary observations

I thoroughly enjoyed my visit to IBM’s Cyber Range. It was a fun, immersive and extremely informative exercise—a great way to spend an afternoon in Cambridge other than strolling Havad Yahd.

When I started the exercise, I had no idea what to expect. It could’ve been like other briefings – typically discussions of a vendor’s strategies, products and services. But IBM has taken this process to the next level, making it an immersive learning and teaching experience. I “got into” the exercise big time, marveling at the technologies I had at my disposal. IBM’s suite of integrated tools made it easy for me to ascertain the source of the problem and respond rapidly, in an organized fashion, to a data breach.

Due to time constraints, I could not finish the full experience. I was told that it can last for hours, and that teams of security personnel are provided learning materials and planning materials that they fill out in real time to help them figure out how to build their own security strategies. The Cyber Range is really an educational opportunity made fun through the use of immersion technologies.

I loved the parting words of the lead faculty member as he concluded our time together. “The Cyber Range is where security best practices meet. It’s where the game of CLUE converges with security process flow on a Disney roller coaster ride.” From my perspective, he’s 100% right – it’s fun, it’s exciting and it is a fantastic learning experience for those looking to IBM to help them build a comprehensive, fast-response security breach plan.


PowerPlex 2017 – Makers Making an Impact

By Jane Clabby, Clabby Analytics

This year’s PowerPlex conference was held in Atlanta, Georgia on May 8-11. The event built upon the “making things” theme introduced last year—highlighting the manufacturing focus of Plex, as well as the company’s customers, prospects and partners. Each attendee’s badge told us, for example, “we make automobile braking systems” or “we make candy” or “we make things happen”.

Plex Systems, based in Troy, Michigan, makes cloud-based manufacturing ERP software linking the “shop floor to the top floor” with an end-to-end solution for financials, HR, manufacturing operations, customer and sales management, and supply chain planning—capturing information across the business for analytics and reporting.

Plex plays a large role in helping manufacturing businesses make things—with close to 600 customers in 1800 facilities in 20+ countries, representing $35B in total annual revenue. The Plex Manufacturing Cloud processes over 5 billion transactions every day—and 50 percent of those transactions come from connected devices and machines on the shop floor. With an annual renewal rate of over 95 percent, these customers rely on Plex so that they can focus on their core business. In fact, over the past 12 months Plex has delivered 99.995 percent uptime to its customers—just 26 minutes of unplanned downtime.

Fourth Industrial Revolution

The Fourth Industrial Revolution (4IR) was a central theme at this year’s conference. Three market forces—cloud computing, the evolution of manufacturing and the rise of the Industrial Internet of Things (IIoT)—are driving 4IR. By taking advantage of these forces, the Plex community has pioneered the concept of connected manufacturing—connected systems, processes, smart products, and people across the manufacturing floor—the foundation of 4IR. Plex customers are ahead of the curve, leveraging these trends to “make things” and to make things better and more efficiently. Here are several examples.

  • GenZe, a maker of electric bikes and scooters, is part of the rising sharing economy dedicated to providing affordable transportation across the US. The scooters are internet-enabled and connected to the Plex Cloud, allowing GenZe to collect information that can be analyzed to understand customer usage patterns and to provide monitoring of the scooter’s parts to support proactive maintenance.
  • Polamer Precision, winner of the 2017 Plex Impact Award for innovation, is using Microsoft HoloLens with Plex on the shop floor in a pilot program. The 3D technology is being used to illustrate the layout of the manufacturing floor—to see visually how forklifts move across the manufacturing floor—with the goal of automating that process.
  • Firstronic, a provider of electronics manufacturing services (EMS) and a 2016 Industry Week Best Plants winner, leverages Plex’s cloud architecture to shorten the process of bringing on new plants from 12-18 months to fewer than 90 days. Plex provides the transparency to track material movement, and traceability at the component level. The company will expand Plex usage to do more predictive planning, using Plex to identify which parts are running low and to order additional stock.

Skills Shortage

In my conversations with attendees at this year’s Plex conference, the skills shortage was a common frustration. Many businesses are using Plex to automate processes that were previously done manually, enabling some shop-floor activities to be accomplished with fewer people and allowing reallocation of resources throughout the company. Others are doing more on-the-job training with mobile devices and looking ahead to using augmented reality to instruct workers on the shop floor. While these are great solutions, they don’t really get to the heart of the problem. Our young people need to be educated in skills that will prepare them for the jobs that are available.

Mike Rowe, well-known TV host and podcaster, delivered the opening keynote on Day Two of the conference and provided his insight into the issue. Inspired by a trip he made to the San Francisco sewers to do an interview for Evening Magazine (a program on the local CBS affiliate), Mike traveled across 50 states and apprenticed at 300 “dirty” jobs, ranging from road kill cleaner to worm dung farmer.

After this experience, Mike had a “peripeteia” — reaching an unexpected reversal in his thinking about “dirty” jobs. With 5.6 million available jobs and $1.3 trillion in student loans funding education for jobs that don’t exist, the world needs to change its thinking.

As a result, Mike founded mikeroweWORKS Foundation, a public charity that is dedicated to providing the technical and vocational training required to fill the 3 million available skilled trade jobs currently available in the US—jobs that pay well but don’t necessarily require a college degree. During the conference, Plex presented Mike Rowe with a $25,000 donation which will be used to provide scholarships to people getting trained for high-demand skilled jobs.

Collaboration and Community

Collaboration was also an overarching theme at the conference. With over 13,000 active users in the Plex community worldwide, many rely on other users to work through issues or provide suggestions as to how Plex can be used in different and innovative ways to meet business goals. One third of the sessions at PowerPlex were hosted by customers, sharing ideas and insight with the rest of the community.

Along with customers, the conference includes prospects and partners, and Plex prospects are encouraged to solicit product feedback from existing customers. Finally, PowerPlex 2017 marked the third year that the conference included a PowerPlex session devoted to women. This year’s luncheon included a cross-section of strong, articulate and successful women in executive positions who had great advice for women pursuing careers in technology and manufacturing.

Final Thoughts

Makers are truly making an impact, and based on my many conversations at PowerPlex 2017, Plex customers are already defining the fourth industrial revolution and what it means to the future of manufacturing.


IBM Interconnect 2017: Cloud Solutions; Watson-enabled Applications and Blockchain

By Joe Clabby, Clabby Analytics

Last week approximately 20,000 people attended IBM’s Interconnect Cloud Computing event at the Mandalay Bay complex in Las Vegas, and their reasons for attendance varied widely. Some wanted to learn more about IBM’s cloud and cognitive computing products and strategies, others were interested in traditional enterprise computing products, still others in application development or asset management or security. Interconnect, with its broad spectrum of keynotes, product demonstrations, customer testimonials and hands-on labs, was able to address the broad range of requirements of its attendees.

I went to Interconnect 2017 without a specific research agenda. Interconnect is the industry’s largest cloud computing showcase, so I went to learn more about IBM’s cloud strategy, its cloud products, its new cloud services and to learn what vendors and customers are saying about the cloud marketplace. After listening to the presentations of several IBM executives, after talking with numerous product managers at exhibition booths, after listening to customer testimonial after customer testimonial, and after watching product demonstrations – I left Interconnect with several new perspectives.

My biggest finding is that IBM has done a masterful job building an enterprise-class hybrid cloud environment that features analytics, machine learning and cognitive computing. I stopped publishing reports on IBM’s cloud efforts back in 2014 because I perceived that IBM was having a difficult time getting its cloud act together (more on this later). What I found at this conference is that IBM has a clear cloud strategy with offerings that are distinctly different from the cloud offerings of Microsoft, Amazon and Google. And due to these differences, I believe that IBM will see stronger growth in enterprise cloud computing than its leading competitors over the next several years.

I also noted that IBM has become more aggressive in building out its Watson/third-party software ecosystem. For the past several years IBM has focused strongly on adding cognitive and analytics overlays onto its traditional enterprise software environments. But at Interconnect 2017 I found dozens of Watson-enabled third-party software vendors – and learned that IBM’s Watson organization is looking to recruit thousands of third-party software makers to its Watson/cloud environment.

Finally, I observed that IBM is “all in” when it comes to Blockchain technology – and it has the perfect platform on which to implement Blockchain: LinuxONE. Further, IBM offers a secure, IBM-managed Blockchain service environment that should appeal to customers who want to offload their transaction processing to a company with a deep transaction processing heritage.

IBM’s Hybrid Cloud

After several years of watching IBM try to architect its own cloud environment (Smart Cloud), and several years of watching IBM acquire a whole bunch of cloud-related companies, including Cast Iron, Coremetrics, Sterling Commerce, Unica, Emptoris, Varicent and Kenexa – with no apparent cohesive rhyme or reason for these acquisitions – I lost confidence that IBM would be able to recover against new generation cloud competitors such as Akamai, Rackspace, Amazon, Google, Microsoft and the telcos. I determined that I would come back to covering IBM cloud computing when IBM had a clear strategy, a coherent product line and rising marketshare potential.

I’m back.

At Interconnect 2017, Arvind Krishna, senior vice president, Hybrid Cloud and Director, IBM Research, presented a slide that brought me back into the fold (see Figure 1). This slide shows an integrated, secure, cohesive Watson/analytics cloud environment on which a variety of IBM software solutions can run. The design of this environment is distinctly different from the design of competing clouds that lack deep cognitive and analytics elements. IBM has moved its rich portfolio of home-grown analytics, transaction processing and integrated software solutions to this unique, enterprise-strength cloud – contributing to and helping drive the company’s $24.5 billion in software revenue. In the shaded area (upper left), IBM has also developed several ancillary businesses that run on its hybrid cloud – new businesses with unique offerings that exploit cognitive computing and analytics, and that represent a huge growth opportunity for the company. Healthcare, financial services and the Internet of Things are examples of these high-growth opportunities. In short, the company now has a clear strategy; it has a coherent product line – and, given the uniqueness of its position in the cloud marketplace, the company’s cloud environment has strong potential to significantly grow market share.

IBM’s Chairman, President and CEO Ginni Rometty spoke about the solid revenue growth that she is seeing in both Watson and the IBM Cloud. She also described how the IBM cloud can be differentiated from the clouds of other vendors such as Microsoft, Google and Amazon. She emphasized that IBM is in the hybrid cloud business, helping customers build clouds that suit their enterprise needs without having to give away their data. According to Ginni, the value of data is that it generates unique insights. Competitors, she argued, are often in favor of democratizing and sharing data – and this gives away the advantage of developing unique competitive insights. IBM advocates helping enterprises gather data from a variety of sources, both internal and external, to help generate those unique insights. Ginni’s speech made it perfectly clear that IBM knows its position in the cloud marketplace; that it knows what its customers want; and that the company can now deliver solutions to its customers that its cloud competitors cannot.

Expanding the Watson/Cloud Ecosystem

When I attended the World of Watson conference last year I noted that a growing number of vendors were starting to overlay Watson on top of their traditional software offerings. The big news at this year’s Interconnect conference was that cloud leader Salesforce.com has now integrated Watson with its own Einstein program in order to simplify the use of its software and help users make better decisions.

At last year’s World of Watson conference, however, I left without a clear understanding of how IBM was going to make it possible for the broader ecosystem of third-party vendors to overlay Watson on top of their own infrastructure, management and application solutions. At Interconnect I learned that IBM is wide open to helping third-party software vendors, including competitors, integrate their solutions with Watson on the IBM cloud.

In short, third-party software makers who are not integrating analytics, cognitive services and cloud architecture into their product offerings are going to have a hard time surviving. IBM offers an integrated, secure cognitive/cloud environment on which third-party software makers can deploy their solutions. Vendors looking to the next wave in computing would be well served to look into deploying their solutions on IBM’s Watson Cloud.

Software makers who wish to know more about this topic should contact me at joeclabby@AOL.com.

Blockchain, LinuxONE and Blockchain for Hyperledger Fabric v1.0

In September 2016, I started writing about Blockchain, a new way of processing transactions based upon sharing a common ledger (see this blog). I’ve since come to the conclusion that IBM’s LinuxONE platform and its new Blockchain services are the best approaches to take when building Blockchain networks.

To build a Blockchain transaction processing environment, Blockchain users need secure platforms, networks and a ledger that accounts for transactions. For those not familiar with IBM’s LinuxONE, it is a platform that offers high-speed architecture, fast elliptic curve processing, and dedicated hardware for acceleration. It is 2.3 times faster than competitors – with integrated security levels such as EAL Level 5+ that no other architecture in the industry can match. Hardware security also includes FIPS 140-2 compliance. As part of the security features of this architecture, security keys are placed into protected memory and into hardware security modules in order to prevent access and tampering. IBM does not make it possible for code to take control of secured modules, and focuses on encryption protection to prevent administrative abuse. Try finding performance this strong and security this deep on any other platform…

At Interconnect 2017, IBM introduced a new Blockchain service, IBM’s Blockchain for Hyperledger Fabric v1.0 – a service that offers both security and scale, as well as related governance tools. This service, which uses LinuxONE servers on the back-end, helps protect from external as well as internal attacks, offers the levels of security previously described, and uses the secure service containers and hardware security modules previously described – all running on a highly auditable operating environment. I travel to parts of the world that are not familiar with LinuxONE architecture – where IT executives are reluctant to use it, preferring x86-based solutions – even though LinuxONE uses an operating environment that is quite familiar in the industry: Linux. For those geographies that want the best Blockchain implementation available, but don’t want to run their own Blockchain environment, I would strongly recommend evaluating the above-mentioned Blockchain service.

One last thing on Blockchain: regulation. As transactions take place across various domains, they run into dozens or perhaps hundreds of regulations that must be addressed. Watson now offers a regulatory service that can help streamline the complicated problem of dealing with regulations. The Blockchain story – LinuxONE combined with automated Watson regulation services – will be difficult for other vendors to mimic and overcome.

Summary Observations

Over the past few years, I’ve watched the giant behemoth called IBM change its course. It has changed its cloud strategy – it no longer tries to compete with the public cloud vendors or the traditional enterprise cloud makers per se – it now offers a solution that is completely different. IBM calls its offering a “hybrid cloud” – a blend of integrated public and private cloud facilities driven by analytics and cognitive computing. The company has taken its traditional enterprise management tools and cloud-enabled them – weaving them into the fabric of the cloud. In fact, wherever it makes sense, it has cloud- and Watson-enabled its entire product line, from management tools and infrastructure through applications, reporting tools and databases. And it has modified its pricing and go-to-market models – actually promoting many of its offerings first and foremost as “software-as-a-service” solutions.

These changes at IBM involved huge internal cultural changes, including migrating the “enterprise computing old guard” to become “new disciples of analytics and cognitive computing.” These changes involved “new think” – like how do we most efficiently develop and support applications under the new, analytics/machine-driven cloud model? And how do we offer traditional, on-premises solutions as well as software-as-a-service solutions? They involved retooling, integration efforts, and the introduction of new APIs to join the old world to the new mobile world – and the new cloud environment to the existing on-premises environment. IBM has worked to integrate existing traditional management applications into the new model of cloud computing – and to find or create new solutions more suited for the new hybrid cloud environments.

The IBM Cloud has come a long way since I stopped covering it in 2014. This week, at Interconnect 2017, I saw firm evidence that IBM has made it over the hump – and has transitioned to a modern, extremely competitive analytics/cognitive cloud company that can address enterprise needs for a blended, rationalized public/private/hybrid computing environment. Enterprises planning for a future of cognitively-enabled applications and secure Blockchain must take a closer look at the IBM Cloud.


IBM Connect 2017: Collaboration Under the Radar

By Joe Clabby, Clabby Analytics

IBM runs six large technology events each year: World of Watson, InterConnect, Amplify, Edge, Vision and Connect. I usually attend the two largest, World of Watson and InterConnect, along with 15,000 to 20,000 other interested parties. Both of these events are huge and cover a wide range of technologies, from systems and infrastructure to management, cloud, cognitive computing, analytics and more. Although I suspect that IBM’s deep portfolio of collaboration products can be found at some of these events, I just plain don’t notice them.

My earliest memories of IBM date back to the 1970s, when I was selling competitive products against its word processing and computer systems. In the 1980s, IBM collaborative computing “office products” started making the scene. Does anyone remember PROFS and DISOSS? In the 1990s, IBM stunned the marketplace by purchasing an office-products company by the name of Lotus for a whopping $3 billion. As a research analyst at the time, I could not fathom what IBM saw in Lotus and its email and collaboration products. By the early 2000s, I could clearly see the value of the purchase as IBM brought to market a myriad of new office and document management offerings, creating a multibillion-dollar collaboration software business that dwarfed its now “minuscule” investment in Lotus.

This year I chose to attend IBM’s Connect 2017 event in San Francisco – leaving my 75° abode in sunny Charleston, South Carolina, to travel to cold and rainy San Francisco. And as surprising as this may sound, I’m glad I did. Why so?

  • IBM Connect 2017 was an industry tradeshow built around IBM’s and its business partners’ collaborative solutions. It offered roadmaps and commitments for older IBM products, such as Sametime, Domino and Notes. It also showcased new IBM products such as Verse (a versatile, modern-day mail and messaging environment with a collaborative overlay), Connections (a social business network platform), and Watson Workspace and Work Services (a conversational collaboration environment and a set of APIs that allow developers to build applications that understand conversations and intent, and that integrate into existing work applications).
  • Business partner participation at the event was strong, featuring new collaborative product offerings between IBM and Cisco and between IBM and Box. Connect also highlighted many blended solutions from vendors whose products overlay IBM offerings, such as project management environments blended with Notes and mobile interfaces blended with Domino environments. Also featured were a slew of new products that integrate Watson cognitive technologies, such as Watson Workspace, with existing business applications – thus delivering new functionality to market.

What I Heard and Saw: Business Partners

I always seem to gravitate toward the EXPO floors at these events. I think the main reason is that I just plain love to play with technology – I like to see the way it’s used; I like to see the new and innovative directions that developers have taken with their hardware and software solutions; and the EXPO floors are a great place to talk with vendors about what’s really happening in the marketplace, as well as to meet IT buyers.

At Connect, I had a long conversation with Alex Homsi, the CEO of Trilog Group, an IBM business partner, whom I asked to help put IBM Connect 2017 into context. The way Mr. Homsi described it, the IBM Connect events are all about getting things done. They are about increasing productivity and efficiency, but mostly about collaboration in the processing of complex workflows. As I stood in the Trilog booth, Mr. Homsi gestured around the floor: “Look over there – you see companies that offer telepresence, that create virtual project rooms, that offer sales collaboration tools and much, much more.” When I prodded him for more information about why people come to the event, Mr. Homsi told me that “they come to solve complex, mega-problems,” and then he proceeded to talk about how his own project solutions help customers save millions (or, in one case, hundreds of millions) of dollars by digitally capturing content, effectively communicating it, and then coordinating the efforts of large groups. Incidentally, Mr. Homsi is also CEO of a company called Darwino, which helps customers mobilize IBM Notes/Domino applications and migrate them to the cloud.

I also talked at length with a company by the name of Oblong, maker of a digital content management environment that I wish I owned. Sci-fi fans who have seen the movie “Minority Report” might recall Tom Cruise pulling files and data streams from a wide variety of sources and examining them in the holographic 3D space that surrounded him. He could expand files, shrink files, push files to the side, and look at multiple displays of real-time and recorded data – easily moving between static and dynamic filing environments at the touch of his hand. Oblong makes a highly scalable environment that can span hundreds of displays (sorry, no holographs yet), where the information it collects can be collaboratively shared among large teams. I pictured disaster-response use cases in which a room full of people coordinates the response to an event – and I have to admit, I thought back to the NASA control room in the movie “Apollo 13,” where a group of scientists collaborated on a way to help astronauts return to Earth after a failed lunar landing (I guess I’ve been watching too many old movies lately…). Anyway, you get the point: the world is now digital, and these digital sources can be easily harvested and displayed – enabling people to collaborate more easily and make better decisions.

A company by the name of Imaging Systems, Inc. out of Slovenia also caught my attention with a product called IMiS MOBILE, which makes it easy to mobile-enable legacy applications – thus broadening the range of hand-held devices that can be used to conduct collaborative activities. I liked this product because of its programming simplicity.

I went to at least a dozen other booths, including the IBM Verse and Connections booths (covered in the next section).

Last but not least, business partners were truly excited about using Watson to make their applications “smarter.” I wrote about this trend in my Pund-IT trip summary after attending World of Watson 2016 – and it is becoming omnipresent (I’m seeing it everywhere across traditional software application markets) as ISVs recognize that using Watson can simplify the use of their products while expanding the types and accuracy of the solutions they create. Watching the industry move to “Watson enablement” is truly one of the most fascinating trends I’ve ever seen in the computing industry – it seems the sky’s the limit in terms of what machine intelligence blended with traditional applications can now do.

What I Heard and Saw From IBM

The lead speaker at Connect was Inhi Cho Suh, IBM’s General Manager of Collaborative Solutions. She took the stage to tell the audience of about 2,000 Connect attendees what was going on in the collaboration marketplace and what IBM was doing to address the needs of the market. I’ve known Ms. Suh for years – I first met her in the early days of IBM’s foray into the analytics marketplace – and she’s a real straight shooter. Bringing a person with her background to the collaboration space is a very smart move on IBM’s part – she knows analytics technology extremely well, and she knows how to Watson-enable IBM’s collaborative offerings as well as how to help business partners do the same.

Ms. Suh took the stage to tell the audience about the trends that she is seeing in the collaborative computing market space. She contended that the way we engage with others in the work world is changing thanks to new innovations, especially the use of Watson cognitive services that are being used to simplify products and extend their capabilities. She also talked about open collaboration where companies in the collaboration marketplace are working more closely together to build jointly integrated solutions. I saw a clear example of this with the joint IBM/Cisco announcement that integrates Cisco collaboration products with IBM collaboration products – in the past these two companies would’ve been strong competitors who likely would not have worked together, but now both are pleased to show how the best of each company’s solutions can be blended to create a more powerful and integrated collaborative environment.

Ms. Suh also talked about how collaboration products are getting better at streamlining process flows.

Other speakers talked about how cognitive computing is being blended with analytics. In short, cognitive computing is being used to help sort and prioritize “what is important to me”; it is streamlining the flow of work; it is powering bots and virtual assistants that aid humans in their decision-making processes; it is drawing on Internet of Things sensor devices to inform decisions; and it is helping individuals focus better.

There was also an interesting discussion about whether today’s tools “create more noise” than help. From what I saw on the demo floor, today’s tools can take a lot of the uncertainty and human guesswork out of decision-making, while at the same time making processes flow more easily. The “more noise” argument does not hold with me.

As for IBM products, I got a close look at IBM Connections and IBM Verse. Verse was pretty cool: a modern mail-and-messaging environment with a collaborative overlay that made it simple to access and sort the work of fellow team members as well as handle external inputs. I especially liked that the product can be used in a mode where you don’t have to delete your emails and related documents – you just leave them in your inbox when you’re done with them, and if you need to refer back to one, you search on the few keywords you remember and the document appears. I’ve deleted emails and messages for decades – what a silly concept… As for IBM Connections, I rarely have a need to collaborate with a group of people on project work, but if I did, I’d consider using this social, collaborative environment.

As an Aside

I had the opportunity to attend a customer-presented session on deploying Blockchain on IBM’s Bluemix. For those not aware of Blockchain, it’s a new way of processing transactions based on the technology that underpins Bitcoin. It involves distributed databases, distributed servers, encrypted data and the mining of that data – and it establishes consensus between nodes. The technology is being used to create trusted, synchronized transactions: if a transaction is tampered with in any way, all stakeholders know about it and the transaction is thus broken. The ability to conduct secure transactions is one of the major points of this technology – but probably the biggest selling point is that it takes intermediaries, who add processing, time and cost overhead to transactions, out of the picture. It’s pretty exciting stuff and represents a whole new, more efficient and secure way of processing transactions – and I will have the pleasure of speaking about it at a government conference in Dubai in May. It was fun to see how another practitioner handled the topic.
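To make that tamper-detection point concrete, here is a minimal sketch of a hash-chained ledger in Python. It is purely my own illustration of the general idea – it is not IBM or Hyperledger Fabric code, and it leaves out consensus, encryption and networking entirely:

# Minimal illustration of why a tampered transaction "breaks" a blockchain.
# My own sketch for explanation only -- NOT Hyperledger Fabric code; it omits
# consensus, encryption and networking.
import hashlib
import json

def block_hash(index, data, prev_hash):
    # Hash a block's contents together with the previous block's hash.
    payload = json.dumps({"index": index, "data": data, "prev": prev_hash},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, data):
    # Append a new block whose hash is chained to the last block's hash.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    index = len(chain)
    chain.append({"index": index, "data": data, "prev": prev_hash,
                  "hash": block_hash(index, data, prev_hash)})

def verify(chain):
    # Any stakeholder can re-run this check; any edit breaks the chain.
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev:
            return False
        if block["hash"] != block_hash(block["index"], block["data"], block["prev"]):
            return False
    return True

ledger = []
append_block(ledger, {"from": "A", "to": "B", "amount": 100})
append_block(ledger, {"from": "B", "to": "C", "amount": 40})
print(verify(ledger))                    # True -- all parties agree

ledger[0]["data"]["amount"] = 1000000    # someone "adjusts" an old record
print(verify(ledger))                    # False -- the tampering is evident

Because each block’s hash folds in the previous block’s hash, changing any recorded transaction invalidates every block that follows it – which is why tampering is immediately evident to all stakeholders.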

Conclusion

I truly enjoy going to technology shows. Throughout my lifetime, I have been a technology sales representative, a project manager and a technology researcher. I remember how I used to do things in the old days, and I like seeing how technological advances have simplified tasks, and have made people more productive and efficient.

As I looked at the wide array of technological solutions presented at Connect 2017, I kept asking myself “How can I use these tools as a research analyst to my advantage?” I left Connect with some new ideas to explore, and with the self-realization that I go to events such as this to learn, to look for new innovations, to talk with people who have used or developed these technologies – and most of all to surround myself with people with like interests who enjoy technology and innovation as much as I do.

Will I go to IBM’s Connect again next year? I hope so…