This analogy also reveals several requirements for cloud computing:
- Pay-per-use. If you consume 1 kWh, you pay for 1 kWh. If you create 1,000 invoices, you pay for the creation of 1,000 invoice documents.
- Low complexity. A server executing a CPU-intensive calculation for a few hours, a database of any given capacity, or an HR service managing all your employees – whatever you consume from the cloud, it must be as simple as consuming 1 kWh of electric power.
- Higher SLAs at lower cost. The public power grid is reliable, handles the peak loads of a single customer easily, and the price per kWh is low. For ERP systems, the equivalent would be less downtime in the cloud and a lower TCO compared to on-prem systems. Both conditions must be fulfilled!
The key to achieving both – better and cheaper – is optimization and standardization. Does anybody ask the power provider for 180-volt lines, 80 Hz, or DC power? Of course not. All customizations are done in-house. And if the price per kWh were too high, everybody would install PV panels and a large battery.
Standardize at what level?
IT services have many layers. We speak of Infrastructure-as-a-Service (IaaS), which provides networks and computers on demand, or Platform-as-a-Service (PaaS), which provides managed services such as a database. The highest level, Software-as-a-Service (SaaS), shifts responsibility for the ERP module completely to an external provider.
Let’s look at standardization at the IaaS and PaaS levels first.
Very early on, the hyperscalers identified one problem with providing only base services: the later integration requirements of higher-value services. In your own IT center, you store a document on the file server, and that’s it. In the cloud, however, storing the file also creates a file-creation event.
Suddenly, things can happen whenever a new file appears, such as loading its contents into a database or notifying an employee. All kinds of workflows become possible because such events are provided. It is a little thing, but the impact is huge. Wouldn’t it be nice to have the same option at the ERP level? For example, when a financial period is closed, the event would be broadcast and other workflows would be activated, without the need to change anything in the ERP system.
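To make the mechanism concrete, here is a minimal sketch of a consumer reacting to such a file-creation event. The event shape follows the AWS S3 notification format; the downstream actions (load_into_database, notify_employee) are hypothetical placeholders for whatever workflow steps you would attach.

```python
# Minimal sketch: dispatching workflows from a cloud file-creation event.
# Event shape follows the AWS S3 notification format; the two downstream
# actions are hypothetical placeholders, not a real API.

def load_into_database(bucket, key):
    # Placeholder: a real implementation would insert the file's
    # contents into a database table.
    return f"loaded s3://{bucket}/{key}"

def notify_employee(bucket, key):
    # Placeholder: e.g. send an e-mail or a chat message.
    return f"notified about s3://{bucket}/{key}"

def handle_file_created(event):
    """Run all registered workflows for every newly created object."""
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(load_into_database(bucket, key))
        results.append(notify_employee(bucket, key))
    return results

# Example event, abbreviated to the fields used above.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "invoices"},
                "object": {"key": "2024/invoice-0001.pdf"}}}
    ]
}

print(handle_file_created(sample_event))
```

The important point is that the storage service emits the event for free; the consumer only subscribes. An ERP "period closed" event would work the same way, just with business fields instead of bucket and key.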
As a customer, I would like to have such events on premises as well.
The on-prem cloud
The one thing the cloud cannot solve is customers’ unwillingness to store data outside the corporate network. A cloud vendor can offer arguments like data-processing contracts, data that never leaves the country, or the existence of firewalls, but nothing will help. Both requirements – the better integration options of the cloud and data locality on premises – can be met if the hyperscaler simply adds another region: the customer’s own IT center.
The first offerings in this direction already exist, and my prediction is that this will be the next big thing in cloud computing. The difference between on-prem and cloud computing will vanish because both are the same; the location is the only difference. The customer’s IT center will get all cloud features and functions, and the data will not leave the premises.
Adding a new region is nothing special for hyperscalers. Services are available in some regions but not others. Different regions have different capacities. Nothing special.
This approach combines all the advantages without the disadvantages. A customer wants to spin up another database? No problem: specify the region as either the local IT center or the hyperscaler-provided hardware and, if the capacity is available, it will be up and running a few seconds later. Cost allocation, differentiated pricing, harmonized administration – everything is provided out of the box, and container technology is the enabler.
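The idea can be sketched as follows. Neither the client class nor the region names below are a real API; they only illustrate that in this model "on-prem" becomes just one more value of the region parameter.

```python
# Hypothetical sketch: provisioning a database where the customer's own
# IT center is registered as a private region. Class and region names
# are invented for illustration, not a real hyperscaler API.

class CloudClient:
    # Regions the (imaginary) hyperscaler knows about, including the
    # customer's local IT center.
    KNOWN_REGIONS = {"eu-central-1", "us-east-1", "onprem-datacenter-1"}

    def create_database(self, name, region):
        if region not in self.KNOWN_REGIONS:
            raise ValueError(f"unknown region: {region}")
        # In reality this would schedule containers on whatever
        # hardware backs the region – hyperscaler or local.
        return {"name": name, "region": region, "status": "running"}

client = CloudClient()
# The same call targets different hardware, depending only on the region:
cloud_db = client.create_database("erp-analytics", "eu-central-1")
local_db = client.create_database("erp-analytics", "onprem-datacenter-1")
print(cloud_db["region"], local_db["region"])
```

Because the provisioning call is identical, cost allocation, pricing, and administration can be handled uniformly across both kinds of regions.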
In such a world, we get cloud features on premises, and hyperscalers also get customers on board who are not willing to store or process data outside their physical control.
Sticking to my initial analogy, this would be a rooftop solar photovoltaic system. Power generated locally is consumed locally; it augments the public grid without trying to replace it.
Consequences for SAP
The current SAP cloud strategy is not well suited to support this vision. SAP is trying to own the customer, using the hyperscalers’ hardware for its own purposes.
None of my requirements for outstanding success are fulfilled. Just ask yourself these few simple questions:
- Does SAP offer pay-per-use contracts for S/4HANA Cloud?
- How complex is it to use S/4HANA Cloud?
- Is the TCO of the cloud services a fraction of the on-prem TCO?
- Can you easily transition from on-prem systems to the cloud and back?
- Are all cloud services available in all hyperscaler regions?
The current SAP strategy only works if its offerings are better in every conceivable area. However, can you name one SAP solution for which there is not at least one competitor that is better in most key aspects? Even within SAP, there are multiple solutions for most areas and, depending on whom you talk to, you get different recommendations. These are some of the reasons why SAP invents and retires products at such a scale, despite the costs and the impact on the margin.
The optimal solution for all parties – customers, hyperscalers, and SAP alike – would be to change the cloud strategy for the fifth time: to provide services within the customers’ own hyperscaler environments instead of SAP running its services on the hyperscalers’ hardware.
Furthermore, if I am right that on-prem landscapes will become a new cloud region over the coming years, the strategy outlined above would allow SAP to participate in that vision as well.