Deployment Architecture Evolution

Deploying Natively

Before the widespread adoption of cloud computing services and open-source solutions, it was common for companies to purchase their own hardware, buy commercial, proprietary software (such as Oracle WebLogic or IBM WebSphere), and deploy their product natively. On the hardware side, companies had to acquire and maintain their own servers and networking equipment. This required significant capital investment, as well as physical space to house the servers, often leading to the operation of dedicated data centers. As a result, companies sought to maximize the use of their hardware by hosting multiple applications on the same physical server. This need for high server utilization meant that deployments had to be carefully planned so that applications did not interfere with each other's operations. Some of the challenges that come with hosting multiple applications on a single machine are:

- Dependency conflicts: applications may require different, incompatible versions of the same libraries or runtimes.
- Resource contention: one application can consume CPU, memory, or disk I/O that its neighbours need.
- Port and filesystem conflicts: applications may compete for the same network ports, paths, or configuration files (see the sketch below).
- Weak fault and security isolation: a crash, memory leak, or compromise in one application can affect every other application on the host.
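As a rough illustration of the port-conflict problem, the Python sketch below shows the kind of ad-hoc check an operator might script before starting another service on a shared host. The port number and addresses are illustrative assumptions, not values from any particular system.

```python
# Illustrative sketch: detect whether another co-hosted application is
# already listening on a port before starting a new service on this host.
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if some process is already listening on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        # connect_ex returns 0 when the connection succeeds, i.e. the port is taken.
        return sock.connect_ex((host, port)) == 0

if __name__ == "__main__":
    if port_in_use(8080):  # 8080 is an assumed example port
        print("Port 8080 is already taken by another application on this host.")
    else:
        print("Port 8080 is free; the application can bind to it.")
```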

To avoid these challenges, various strategies for isolation and resource management were employed. These included:

- Running each application under its own operating-system user account with restricted permissions.
- Statically partitioning resources among applications, for example with per-process limits and, later, control groups (see the sketch after this list).
- Careful capacity planning, and scheduling heavy batch workloads to off-peak hours.
- Running applications in separate virtual machines on a hypervisor, trading additional overhead for stronger isolation.
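As a minimal sketch of the static resource partitioning mentioned above, the Python example below caps one application's memory and CPU time before launching it, so a runaway process cannot starve its neighbours on the same host. The command, file name, and limit values are assumptions for illustration; modern Linux systems would typically use control groups instead of plain per-process limits.

```python
# Illustrative sketch of per-process resource partitioning on a shared Linux
# host: cap an application's memory and CPU time before launching it.
# The command and the limit values are assumed examples, not a real configuration.
import resource
import subprocess

MEMORY_LIMIT_BYTES = 512 * 1024 * 1024  # 512 MiB of address space
CPU_LIMIT_SECONDS = 60 * 60             # one hour of CPU time

def apply_limits() -> None:
    # Runs in the child process just before exec(), so the limits apply
    # only to this application, not to its neighbours on the host.
    resource.setrlimit(resource.RLIMIT_AS, (MEMORY_LIMIT_BYTES, MEMORY_LIMIT_BYTES))
    resource.setrlimit(resource.RLIMIT_CPU, (CPU_LIMIT_SECONDS, CPU_LIMIT_SECONDS))

if __name__ == "__main__":
    # "app_server.py" is a hypothetical application entry point.
    subprocess.run(["python", "app_server.py"], preexec_fn=apply_limits)
```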

All of these strategies were necessary so that applications could coexist on the same hardware without significantly degrading one another's performance or stability. Despite best efforts, however, natively deployed applications running on shared hardware still had drawbacks such as:

- Incomplete isolation: "noisy neighbour" effects and cascading failures remained common.
- Environment drift: hand-configured servers diverged over time, so an application that worked on one machine could fail on another.
- Slow capacity changes: adding capacity meant purchasing, racking, and configuring new hardware, a process measured in weeks rather than minutes.
- Difficult scaling: an application could not easily be scaled independently of the other applications sharing its host.

Deployment therefore still required engineers to perform a great deal of manual infrastructure configuration. These manual changes slowed the overall process, making it difficult to respond quickly to changing business needs or to deploy new features and updates with the agility that modern businesses require.