For over twenty years, IBM was “king,” dominating the large-computer market. By the 1980s, the world had woken up to the fact that the IBM mainframe was expensive and difficult: it took a long time and a lot of work to get anything done. Eager for a new solution, tech professionals turned to the brave new concept of distributed systems as a more efficient alternative. On June 21, 1988, IBM announced the AS/400, its answer to distributed computing.
1988: The Launch of the AS/400
The AS/400, a small standalone system, was intended to be a mainframe for small businesses and a platform for distributed computing. Back then the new standard was interactive computing, and interactive meant green screens. IBM was right in there competing with Data General, DEC, Honeywell, and everybody else in the computing world. The AS/400 took off soon after its introduction and enjoyed incredible success until the world started to change at the turn of the 21st century, as PCs grew into Intel-based servers. One of the keys to the system's growth was the insulation the architecture provided to applications. In 1995 the old CISC-based 48-bit processor was replaced with a 64-bit PowerPC-based processor, and as part of the upgrade all applications were automatically re-translated to take full advantage of the new hardware.
The New Millennium: The Shift to Client-Server Computing
At the start of the new millennium, there was a shift to client-server computing and the new distributed model with web, application, and database servers. The AS/400 became standardized with the rest of the IBM platform, moving to standard IBM POWER chips with POWER5. The launch of POWER6 brought standard virtualization technology, where you could take one server and create several completely independent virtual servers on it.
This was a slightly different approach from the rest of the world. IBM went with the idea that you would take a platform, put a pair of Virtual I/O (VIO) servers on it, and then create multiple VMs, each connected to both VIO servers. The result was a very redundant environment: you could take one VIO server down and everything would continue to run through the other. You could do your maintenance, make your changes, and bring the first VIO server back up; the VMs would reconnect to it and you would have full redundancy again. Then you could take the other one down and do its maintenance.
In comparison, VMware's approach was to take over the whole machine and simply run virtual machines on top. If there was a physical problem, or the machine needed maintenance, the only thing you could do was migrate those virtual machines to another system, and VMware got very good at that. However, if the system crashed, you were restarting everything on another physical host.
With the introduction of standard virtualization technology came the concept that if you had a VM, you should be able to migrate it to another physical server. In the IBM world, that became LPAR mobility (Live Partition Mobility). The AS/400 had been based on dedicated internal storage, and the final modification gave it full SAN support. From a hardware point of view, by the 2010 timeframe the AS/400 was a modern system with IBM POWER chips, SAN storage, and virtualization in place.
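For readers who want to see what a Live Partition Mobility operation looks like in practice, here is a minimal sketch that drives the HMC command line over SSH from Python. The HMC address, user, and system and partition names are hypothetical placeholders, and the validate-then-migrate migrlpar pattern shown should be checked against your HMC release before use.

```python
# Minimal sketch: driving Live Partition Mobility through the HMC CLI over SSH.
# The HMC address, user, and system/partition names are hypothetical placeholders.
import subprocess

HMC = "hscroot@hmc01.example.com"  # hypothetical HMC address

def hmc(command: str) -> str:
    """Run a command on the HMC over SSH and return its output."""
    result = subprocess.run(
        ["ssh", HMC, command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# 1. Validate that the partition can move (VIOS pairing, SAN zoning, and so on).
hmc("migrlpar -o v -m SourceServer -t TargetServer -p PRODLPAR")

# 2. Perform the live migration; the workload keeps running during the move.
hmc("migrlpar -o m -m SourceServer -t TargetServer -p PRODLPAR")
```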
Designed to be an Integrated System
In parallel with all of the hardware changes, IBM continued to enhance the AS/400 as an integrated system, which was totally different from most of the rest of the world. When you bought the operating system, it came with an integrated database, integrated security, job scheduling, and job management capabilities. It was designed like a mainframe, where you could run multiple workloads on it. IBM's competitors were concerned that trying to run everything on just one server wouldn't work, and instead built their systems with multiple servers: web servers, app servers, and database servers.
With IBM i, everything was included. Early on, IBM decided the best-of-breed web server was Apache, so a version of Apache was packaged up as the IBM web server. IBM then built its own application server, the WebSphere Application Server (WAS). Customers who bought an AS/400 had the right to run WAS Express and a web server on it so that they could move into the new web world. However, very few AS/400 customers did.
The biggest reason the AS/400 was successful from an application development point of view was that everything was integrated. As an application developer, you could go in and design your green screen, design your database, and then write a program that pulled that information together. Without any extra work, the program knew what the database looked like and what the screen looked like, and all the programmer had to do was look after the logic that made the data flow.
It was a very productive application environment, and IBM customers were all locked and loaded with green screens on everybody's desk. When dedicated green-screen terminals went away, they were replaced by PCs running Client Access, which provided a green-screen emulator. Everything you needed for web development and web services, and all of the new open-source languages such as Python and Perl, was there as well. Whether people used it or not was a totally different challenge, but IBM provided it and brought it all into place.
2003: Capacity Backup Unit
IBM now had a virtualized environment in place, with all of the standard web enablement and web services. The third piece of becoming cloud-enabled was to go through another transformation. The first step came in the early 2000s, when IBM realized that customers were now in an environment where they needed a higher level of availability. In addition to the production system, customers wanted a backup system, and it didn't make sense to have full software licensing on that backup system.
IBM started with the totally intuitive title of a “Capacity Backup Unit,” or CBU. Its licensing model meant that you had a fully licensed production box up and running, and you could bring another box in with minimum licensing on it so that it could be up and operational. You then used some sort of replication software, IBM's preferred offering being PowerHA, although a number of companies also offered logical replication, so that the second system could be a hot, or at least a warm, standby. In the case of a failure, you could logically transfer all of your software licensing to that backup system and run while remaining in compliance, as far as IBM was concerned, from a licensing perspective.
2011: Capacity on Demand
The next step involved addressing the needs of IBM's high-end enterprise systems. IBM has two classes of systems: high-end enterprise systems and smaller systems, which it currently calls “scale-out” systems. The enterprise systems are designed to be very robust and, from a scale-up perspective, very powerful. IBM started with those large scale-up systems and introduced what it called “Capacity on Demand.” With Capacity on Demand, you could buy a system that had 16 processors and a terabyte of memory, but license it with only eight processors and 512 GB of memory active. If you needed more than that, you could get a hardware activation code that would turn on another core and another 16 GB of memory, or whatever increment you wanted.
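To make the mechanics concrete, here is a minimal sketch, in Python, of how a Capacity on Demand model tracks installed versus activated resources. The class and its activation method are illustrative only, not IBM's actual licensing logic; the numbers mirror the 16-core, one-terabyte example above.

```python
# Illustrative model of permanent Capacity on Demand activations.
# Mirrors the example in the text: 16 cores / 1 TB installed,
# 8 cores / 512 GB initially active. Not IBM's actual licensing logic.
from dataclasses import dataclass

@dataclass
class CoDSystem:
    installed_cores: int = 16
    installed_memory_gb: int = 1024
    active_cores: int = 8
    active_memory_gb: int = 512

    def apply_activation(self, cores: int = 0, memory_gb: int = 0) -> None:
        """Apply a (hypothetical) hardware activation code for more capacity."""
        if self.active_cores + cores > self.installed_cores:
            raise ValueError("cannot activate more cores than are installed")
        if self.active_memory_gb + memory_gb > self.installed_memory_gb:
            raise ValueError("cannot activate more memory than is installed")
        self.active_cores += cores
        self.active_memory_gb += memory_gb

system = CoDSystem()
system.apply_activation(cores=1, memory_gb=16)  # turn on one more core and 16 GB
print(system.active_cores, system.active_memory_gb)  # 9 528
```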
This was followed by “Trial Capacity on Demand,” where you could just turn everything on for a period of 30 days.
IBM realized that some of its high-end enterprise customers only needed extra capacity for a short period of time; a customer might have a seasonal workload that needed three extra cores, but only for a certain number of months. As a result, IBM came out with “Temporary Capacity on Demand,” which meant that you could turn cores on, IBM would monitor your system, and on a quarterly basis you would be billed for what you actually used. IBM also launched “Utility Capacity on Demand,” where you could pre-buy minutes of CPU and, when you used them up, buy more.
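The sketch below, again illustrative Python rather than IBM's metering code, shows the basic idea behind Utility Capacity on Demand: a pre-purchased pool of processor minutes is drawn down as temporarily activated cores are used.

```python
# Illustrative Utility Capacity on Demand metering: pre-bought processor
# minutes are consumed as temporarily activated cores run. Numbers are made up.
class UtilityCoD:
    def __init__(self, prepaid_core_minutes: int):
        self.balance = prepaid_core_minutes

    def record_usage(self, extra_cores: int, minutes: int) -> None:
        """Draw down the prepaid balance for temporarily activated cores."""
        self.balance -= extra_cores * minutes
        if self.balance <= 0:
            print("Prepaid processor minutes exhausted - time to buy more.")

pool = UtilityCoD(prepaid_core_minutes=100_000)
pool.record_usage(extra_cores=3, minutes=60 * 24)  # three extra cores for one day
print(pool.balance)  # 95680 core-minutes remaining
```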
2016: Mobile Capacity on Demand
Customers who had multiple systems, but Temporary Capacity on Demand on only one of them, soon asked for the ability to put that temporary capacity on whichever system needed it. IBM then created “Mobile Capacity on Demand” for these large systems managed with a Hardware Management Console (HMC). If you had two or three systems in a pool under the same HMC, you could take a mobile activation and apply it, as a temporary activation, to whichever system you needed to use.
Building Public Clouds
Customers started to have a lot more flexibility and, in fact, were starting to build their own private clouds. Based on that kind of capability, IBM started to look at building public clouds, where you could have a workload in your own cloud and build a workload in somebody else's cloud. IBM had already reached the point of LPAR mobility, the ability to do a live move of a running virtual machine from one physical server to another. In a private cloud environment, you have the ability to move things around. You don't have the ability to move a running workload from a private cloud to a public cloud, because you simply don't have the connection; much of the mechanism depends on shared storage. But you could start to build a private cloud and a public cloud, and then use web services, which most people hadn't yet started to use, to tie the two together and make them work together.
2019: Enterprise Pools 2.0
The last step IBM took was to launch Enterprise Pools 2.0, which it expanded to the whole product line. Whereas Capacity on Demand and Mobile Capacity on Demand had been for the enterprise systems, Enterprise Pools 2.0 provided the ability to license the smaller systems differently as well. In an Enterprise Pool, you are able to share all of your software licenses dynamically: you manage the systems with an HMC and connect the HMC to a cloud monitor from IBM. If, for example, you have 10 licenses across your three systems, then as long as your total utilization stays under 10 licenses, nothing more is owed. If you use more than 10, IBM keeps track of it on a per-minute basis and draws on Utility Capacity on Demand, where you pre-buy a pool of processor minutes; whenever you run over your entitlement, you draw on that pool, and when it gets down to zero, you buy more.
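As a rough illustration of that pooled accounting, the Python sketch below meters one minute at a time against the example's 10-license entitlement and draws any overage from prepaid processor minutes. It is a simplified model, not the logic IBM's cloud monitor actually runs.

```python
# Illustrative Enterprise Pools 2.0 accounting: three systems share ten
# pooled licenses; per-minute usage above the entitlement draws on prepaid
# Utility Capacity on Demand processor minutes. All numbers are made up.
ENTITLED_LICENSES = 10

def meter_minute(core_usage_by_system: dict, prepaid_minutes: float) -> float:
    """Return the prepaid balance left after metering one minute of pool usage."""
    total = sum(core_usage_by_system.values())
    overage = max(0.0, total - ENTITLED_LICENSES)  # cores above the pooled entitlement
    return prepaid_minutes - overage               # one minute drawn per excess core

balance = 50_000.0
balance = meter_minute({"sysA": 4.0, "sysB": 3.5, "sysC": 1.0}, balance)  # 8.5 cores: no draw
balance = meter_minute({"sysA": 6.0, "sysB": 5.0, "sysC": 2.0}, balance)  # 13 cores: draws 3
print(balance)  # 49997.0
```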
Now you have the ability to build a very effective private cloud with multiple servers, to move workloads around, to share your licenses, and to connect to a number of instances in the public cloud. IBM has built this technology and now has a Power virtual cloud offering that it is rolling out.
2022–2023: IBM Power10 Scale-Out Systems
The launch of the enterprise Power10 systems delivers performance, security, and availability enhancements for mission-critical workloads. New features such as transparent memory encryption, support for quantum-safe cryptography, embedded matrix math accelerators for AI, and a new memory architecture (DDIMMs) with enhanced bandwidth and RAS features help keep continuous operation optimized for demanding workloads. Power Private Cloud with dynamic capacity and consumption-based billing can reduce costs while optimizing resource utilization based on workload and seasonal demands.
With the launch of the new Power10 scale-out systems, IBM is bringing all of these enterprise features to smaller systems, providing a consistent, always-on architecture across the entire Power Systems lineup. The cloud and consumption options are only getting better.
Overcoming Backup and Recovery Challenges
Although it is not currently available, IBM's intention is that the utility minutes you buy could be used on whichever system needs them, including the ones in the public cloud. Instead of having to pay six different bills, as a business person you would like to be able to put all of your budgeted money in one pool and have it used wherever it is needed most. Although that would be the ideal situation, there are some challenges when you get into this kind of environment, particularly with public clouds. One is backup and recovery, which can be difficult in a virtual world.
For example, if a web server is 25 GB and the answer is, “If it dies, start another one,” that's not a big deal. When you're talking about a 20 TB database and it dies, “just start another one” doesn't work very well. Backup also becomes more complicated on larger systems. A web server can be backed up very nicely over the network in a virtual environment, but as you get into very large file servers and very large data servers, that breaks down. You really need a virtual tape library (VTL) with a high-capacity connection that you can back up to at gigabyte-per-second speeds to get a backup done in a timely manner. More importantly, the days when systems just totally blew up are largely behind us; that doesn't happen very often anymore.
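To put rough numbers on that, the short calculation below compares how long a 20 TB restore would take over a typical network link versus a high-capacity VTL connection. The throughput figures are assumptions chosen for illustration, not measurements of any particular product.

```python
# Rough restore-time arithmetic for a 20 TB database. The throughput figures
# are illustrative assumptions, not measured values for any product.
DB_SIZE_TB = 20
GB_PER_TB = 1024

def restore_hours(throughput_gb_per_s: float) -> float:
    """Hours to move the whole database at a sustained throughput."""
    return DB_SIZE_TB * GB_PER_TB / throughput_gb_per_s / 3600

print(f"Over a ~1 Gbit/s network (~0.1 GB/s): {restore_hours(0.1):.0f} hours")
print(f"Over a VTL link at ~2 GB/s:           {restore_hours(2.0):.1f} hours")
```

Under these assumptions, the network restore runs more than two days, while the VTL restore finishes in a few hours.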
If you do have a disaster and you're doing recovery over a network, it can take a very long time, and most customers in this kind of environment aren't prepared for that. You need high-speed backup and recovery capacity because, these days, 90% of the time the recovery is for an application problem: someone did something they shouldn't have done, and you have to reset the database so they can do it right.
Backup these days is normally used for application-level recovery, as opposed to system crashes and rebuilds. At the same time, even when you go to a world-class data center, which is what most people are looking at, a Tier 4 facility with redundant power, redundant cooling, and redundant everything, there is still a non-zero chance that a system is going to roll over and die. In those environments, most cloud providers will simply restart it. Your recovery time will probably be a few hours; some customers would accept that, and some wouldn't. As a result, you have to have some kind of high availability strategy that replicates the database to another server, ideally in another data center. This significantly reduces the recovery time, because now you have a hot system running somewhere else.
The fact that you're in a public cloud doesn't guarantee you're never going to crash. For web servers, those environments handle failure well: if a server dies and you lose a couple of instances, more are automatically started on another server. The AS/400, which became the IBM eServer iSeries and then an IBM Power System running IBM i, is a different kind of environment: it is still fundamentally a monolithic architecture, where you have the ability to run your database, your application server, and, if you want, your web servers all on one platform.
Quite often, people will move the web servers out because they get a little more redundancy and the ability to scale up and down much more easily. However, there are some real advantages to having the application server run alongside the database, simply because you get rid of the communication overhead. If you're filling out an inquiry screen that takes 100 I/Os, that's not a big deal. If you're running a big batch process that does a million I/Os, the multiplier says it will run far faster locally than it would over a communication link.
When you can run locally and on the same server as the database, you get much better performance.
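A back-of-the-envelope comparison makes the multiplier obvious. The per-operation latencies below are assumed round numbers chosen for illustration, not benchmarks of any specific system.

```python
# Back-of-the-envelope I/O overhead: a local database call versus the same call
# crossing a network link. Latencies are assumed round numbers for illustration.
LOCAL_US = 50    # assumed microseconds per local database operation
REMOTE_US = 550  # assumed: the same operation plus ~0.5 ms of network round trip

def total_seconds(operations: int, per_op_us: float) -> float:
    """Total time spent in database calls for a given number of operations."""
    return operations * per_op_us / 1_000_000

for ops in (100, 1_000_000):
    print(f"{ops:>9} I/Os: local {total_seconds(ops, LOCAL_US):7.3f} s, "
          f"remote {total_seconds(ops, REMOTE_US):7.3f} s")
```

With these assumptions, the 100-I/O inquiry screen pays only hundredths of a second either way, while the million-I/O batch job picks up roughly eight extra minutes of pure network overhead.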
A Long Journey and the Road Ahead
It's been a long journey, from the AS/400, a green-screen, stand-alone, internal-disk distributed system built for the '90s, to an IBM Power System running IBM i with the ability to build a private cloud and move workloads around. IBM continues to build on top of that. The IBM software stack started with PowerVM, which provided the basic virtualization to take a system and carve it up. It was followed by the VIO server, which let us virtualize the I/O, create virtual machines, and move them from system to system. Then came PowerSC, which helps you manage security, more in the AIX and Linux world than in the IBM i world, since IBM i has very strong built-in security. And lastly there is PowerVC, or Power Virtualization Center, which provides a front end to automate the creation and management of virtual machines.
Dale Perkins
Dale Perkins is a Senior Solutions Architect with Mid-Range. Dale has been involved with Technical Sales Support for the IBM System/38, AS/400, iSeries, i5, and Power Systems since 1979. Dale is a server consolidation specialist who has extensive knowledge of both the IBM software and hardware architecture. Dale works directly with customers assisting them in building robust and flexible infrastructures.
He is an IBM Power Systems Specialist in Technical Solutions Design and was also one of the first IBM Systems Architects. In this role, he was responsible for designing solutions for customers using the complete IBM product line. He has been a key player in numerous server consolidation projects and is an expert in LPAR design and virtualization technologies.