In many organizations, the SAP environments that have grown over the years form part of the core of corporate IT. Even today, the guiding principle is often to change as little as possible on proven systems. A look at how these SAP systems are equipped and operated shows that they usually run on classic dedicated servers and use storage systems attached via Fibre Channel.
Together with the server vendor, the application users determine, on the basis of SAP user figures, volume overviews and benchmarks, how many SAPS (SAP Application Performance Standard) are needed; they also establish how much RAM, CPU, hard-disk and network-card capacity a server must be equipped with to handle the anticipated workloads promptly.
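The sizing described above can be sketched as a simple calculation. The per-user SAPS values, headroom factor and target utilization below are illustrative planning assumptions, not SAP reference figures; real sizing would start from SAP Quick Sizer output and vendor benchmark results.

```python
# Illustrative SAPS sizing estimate. All numbers are hypothetical
# planning assumptions, not SAP reference values.

USER_PROFILES = {
    # activity class: (number of users, assumed SAPS per concurrent user)
    "low":    (200, 4),
    "medium": (100, 8),
    "high":   (30, 16),
}

HEADROOM = 1.2          # 20 % reserve for growth and load peaks (assumption)
TARGET_CPU_UTIL = 0.65  # size servers for ~65 % average CPU load (assumption)

def required_saps(profiles=USER_PROFILES,
                  headroom=HEADROOM, target_util=TARGET_CPU_UTIL):
    """Return the total SAPS the server platform must deliver."""
    base = sum(users * saps for users, saps in profiles.values())
    return base * headroom / target_util

print(round(required_saps()))  # -> 3840
```

The result is then matched against published SAPS benchmark figures for candidate server models to pick the hardware configuration.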
Many companies have virtualized their application servers; when it comes to sizing, users must decide whether to use a two-tier or three-tier architecture. Typical SAP tasks – such as creating SAP clones for test, development and production – have been partly automated with scripts, but end-to-end automation, ideally triggered by the relevant business department itself, is still the exception.
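An end-to-end clone workflow of the kind mentioned above can be sketched as an orchestration pipeline. The step functions below are hypothetical placeholders; a real implementation would call the storage array's snapshot tooling and SAP's own system-copy procedures at each stage.

```python
# Sketch of an end-to-end SAP clone pipeline. Every step function here is
# a hypothetical placeholder -- in practice each would shell out to the
# storage vendor's CLI or to SAP system-copy tooling.

def snapshot_source(sid):
    print(f"[1/4] snapshot storage volumes of {sid}")

def provision_target(sid):
    print(f"[2/4] attach snapshot clone to target host for {sid}")

def rename_system(src_sid, dst_sid):
    print(f"[3/4] post-copy steps: rename {src_sid} -> {dst_sid}, adjust profiles")

def register_clone(dst_sid):
    print(f"[4/4] register {dst_sid} in monitoring and backup")

def clone_sap_system(src_sid="PRD", dst_sid="QAS"):
    """Run the full pipeline and return the new system ID."""
    snapshot_source(src_sid)
    provision_target(src_sid)
    rename_system(src_sid, dst_sid)
    register_clone(dst_sid)
    return dst_sid

clone_sap_system()
```

Wrapping the steps in one callable function is what makes self-service possible: the business department triggers a single job rather than a sequence of manual admin tasks.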
At the same time, new architectures and technologies are making their case for adoption in the data center, promising simpler and more efficient IT operations. Hyper-converged systems, which combine servers, storage, network components and virtualization software in a single system, play an important role in SAP environments.
On the network side, these solutions usually support at least 10 Gigabit Ethernet and occasionally also Fibre Channel. The first challenge here is to change configurations that have long been customary: specifically, abandoning classic storage subsystems and switching to local storage.
It is important for companies that hyper-converged solutions also support the familiar SAP operating procedures – especially cloning, backup and snapshot integration – that classic storage systems have made possible. The advantages are a reduced total cost of ownership, simplified operation and greater flexibility. A typical example of a hyper-converged system is the Dell PowerEdge FX.
Software-defined data center
The architecture of a software-defined data center takes this a step further, and it is often hyper-converged systems that form its basis. Here, in addition to the now widespread server virtualization, storage too is defined purely in software.
Examples of such solutions are vSAN from VMware with the Dell vSAN Ready Nodes, the Nutanix architecture as used on the Dell XC systems, and solutions based on Microsoft Storage Spaces. For a completely software-defined data center, however, the network components are still missing. Here, three approaches have established themselves in the market:
- Controller-based administration, in which all switches in the network are orchestrated by a central component, for instance using the OpenFlow communication protocol.
- Freedom to choose the operating system on the network components. Here, switches are reduced to bare hardware onto which the user installs the preferred network operating system.
- Overlay networks that place a virtual network on top of the classic network. An example is VMware NSX, integrated into the hypervisor, in which all network structures and mechanisms are realized in software.
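The encapsulation idea behind the overlay approach can be illustrated with VXLAN, the tunnel format defined in RFC 7348 and used by NSX-V: an 8-byte header carrying a 24-bit virtual network identifier (VNI) wraps the virtual network's Ethernet frame so it can travel over the existing IP network. This is a sketch of the frame format, not an NSX API.

```python
# Minimal illustration of VXLAN encapsulation (RFC 7348): an 8-byte
# header with a 24-bit VNI is prepended to the inner Ethernet frame.
import struct

VXLAN_FLAGS = 0x08  # "I" flag set: a valid VNI is present

def vxlan_encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend an 8-byte VXLAN header carrying the 24-bit VNI."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # byte 0: flags, bytes 1-3: reserved, bytes 4-6: VNI, byte 7: reserved
    header = struct.pack("!B3xI", VXLAN_FLAGS, vni << 8)
    return header + inner_frame

def vxlan_vni(packet: bytes) -> int:
    """Read the VNI back out of an encapsulated packet."""
    (word,) = struct.unpack("!I", packet[4:8])
    return word >> 8

pkt = vxlan_encapsulate(5001, b"\x00" * 14)  # dummy inner Ethernet frame
print(vxlan_vni(pkt))  # -> 5001
```

Because each tenant network is identified only by its VNI inside the tunnel header, the physical switches need no knowledge of the virtual topology; they simply forward IP packets.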
What matters for SAP operation is essentially the provision of sufficient bandwidth, a requirement that all three variants can meet.
Software-Defined Anything – and thus the consistent separation of hardware and software on the basis of open standards – offers extensive options in terms of the design and operation of new solutions.
Companies benefit from greater flexibility and a lower commitment of resources in the implementation phase. Optimally matched to one another, servers, software-defined storage and software-defined networking act as the central components for building a high-performance, efficient and future-proof software-defined data center.
The individual components of the solution are already mature and proven, but their interplay is still in the development and evaluation phase; users are in the process of finding out which variants suit which application scenarios. Here, companies depend on the consultancy and support of partners such as Dell, who can cover the whole solution portfolio in all its details.