Written by Nipuna Dilhara
Where it all started - Data Centers
In the modern era, deploying a web service has become a routine task that requires minimal effort. However, that wasn't the case a few decades ago.
In the late 90s, the internet was a trending topic thanks to the emergence of web applications. Making data available online and accessible from anywhere in the world was a fascinating idea.
During this period, companies began building their services as monolithic web applications, deployed and hosted on bulky physical servers in local data centers. However, deploying monolithic applications and maintaining physical servers required a lot of work and a considerable amount of hardware.
The hardware had to be delivered, assembled, and configured, which was time-consuming and demanded domain expertise that was rare in those days. It was the opposite of what we experience today.
As time went on, the amount of data to be processed gradually increased with the advancement of new technologies and, more importantly, with the growth of businesses and their use cases. More complex data types such as relational data, graph representations, and document-oriented data came into play. Traditional approaches based on local servers were no longer an option. For these reasons, businesses started running several environments on a single hosting service to make better use of hardware and maintenance effort.
The concept of Network-Attached Storage (NAS) was the first such step toward shared hosting services. It was simply a file-storage server attached to a computer network that provides data access to a large and diverse group of clients.
Even though NAS was an effective solution to the problem, deployment and the required resources were highly expensive. As a result, small and medium-sized companies could not afford to move to NAS-based solutions.
Introduction of Cloud Computing
With further technological advancements, the concept of Virtual Machines (VMs) surfaced as a more affordable and flexible approach. As the name indicates, a VM is a software application that acts as a separate virtual computer inside a physical one, allowing users to run different operating systems with their own applications and configurations on the same physical machine. During the 1990s, these concepts provided the foundation for the computer infrastructure that made cloud computing possible.
As the required technological groundwork fell into place, Amazon launched Amazon Web Services Elastic Compute Cloud (AWS EC2) in 2006, marking a turning point for cloud computing. It allowed individuals and businesses to rent VMs to run their web services with minimal configuration cost and effort. EC2 provided secure, scalable compute capacity through virtual servers hosted in the cloud.
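As a present-day illustration of how renting such a virtual server works, here is a minimal sketch using Python and the boto3 SDK. The region, AMI ID, and instance type below are placeholder assumptions for illustration, not values from the original article.

```python
# A minimal sketch of launching an EC2 virtual server with boto3.
# The region, AMI ID, and instance type are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id}")
```

A single API call like this replaces what once took weeks of hardware procurement and assembly.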
EC2 offered five key advantages over the known limitations of the traditional data-center approach, and these became common features of later cloud computing services:
- Reduced labor cost
- Reduced risk
- Reduced infrastructure cost
- Scaling
- Reduced lead time
Hosting monolithic applications on local servers required extensive maintenance and management. With the arrival of cloud computing, those responsibilities were transferred to the cloud service providers, which was a huge relief and advantage for businesses.
Cloud computing also freed businesses from the burden of hardware failures and the expertise needed to handle them. Cloud services provided the required infrastructure, along with its repair and maintenance, at lower cost.
In the early days, businesses expanding with a growing customer base had to go through the troublesome process of buying more servers and related infrastructure. With the arrival of cloud computing, it became much easier and more cost-effective to scale the number of instances up or down as requirements changed.
The significantly shorter application deployment time made cloud computing a good bargain for businesses. It also gave them an edge over competitors still relying on traditional data-center hosting.
IaaS, SaaS, and PaaS
Three popular paradigms of cloud computing changed the way infrastructure is used:
- Infrastructure as a Service (IaaS)
- Software as a Service (SaaS)
- Platform as a Service (PaaS)
IaaS
IaaS was the first to be introduced. AWS Simple Storage Service (S3) was one such offering, soon followed by Google, Microsoft, and other leading names in the IT industry.
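To show how such infrastructure is consumed through a simple API rather than physical hardware, here is a minimal, hedged sketch of storing and retrieving an object in S3 with boto3. The bucket name and key are placeholder assumptions.

```python
# A minimal sketch of using S3 object storage via boto3.
# The bucket name and key are illustrative placeholders, and the
# bucket is assumed to already exist.
import boto3

s3 = boto3.client("s3")

# Upload a small object.
s3.put_object(Bucket="example-bucket", Key="greeting.txt", Body=b"Hello, cloud!")

# Read the object back.
obj = s3.get_object(Bucket="example-bucket", Key="greeting.txt")
print(obj["Body"].read().decode())
```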
The term ‘cloud’ originally referred to the public cloud: a set of computing resources delivered by a provider and housed in that provider’s data centers. However, as cloud infrastructure software such as OpenStack matured, businesses became able to run their own clouds in their own data centers. Such self-hosted systems built on on-premises hardware became known as private clouds.
PaaS
PaaS came into play after IaaS. Where IaaS provided all the infrastructure necessities, PaaS added an operating system layer on top of IaaS. This freed businesses from the burden of installing operating systems and setting up the relevant configurations. The cloud service provider took responsibility for:
- Operating system installation
- Applying patches and upgrades
- Monitoring the system
Microsoft Azure, Google App Engine, Heroku, and AWS Elastic Beanstalk are some examples of PaaS offerings.
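To make the division of responsibility concrete, here is a hedged sketch of the kind of artifact a developer hands to a PaaS such as Heroku or Elastic Beanstalk: just the application code. It assumes the Flask library is installed; the platform supplies and patches the operating system and runtime underneath it.

```python
# A minimal web application of the kind a PaaS runs on your behalf.
# Assumes the Flask library; the platform provisions the operating
# system, runtime, patching, and monitoring around it.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a PaaS-hosted app!"

if __name__ == "__main__":
    # Locally you start it yourself; on a PaaS the platform starts it.
    app.run(host="0.0.0.0", port=8000)
```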
If a business wants its own PaaS, it can host one on on-premises servers, on top of a public IaaS cloud, or on a mix of private and public clouds, an arrangement generally referred to as a hybrid cloud.
SaaS
Software as a Service was the successor of PaaS. Where IaaS provided the necessary infrastructure and PaaS provided an operating system to run on, SaaS is a software distribution model in which businesses host their applications and make them available to customers. When a customer requested a published application, the business provided access to a copy of the software for the customer to work with. Customers received new features and updates promptly, and customer-specific data was saved on the customer's device, in the cloud, or both.
In parallel with these cloud computing advancements, application architectures also evolved massively. As monolithic applications needed to scale over time, the amount of data they processed grew as well, making it difficult to run them on a single server. Applications therefore moved toward a more fine-grained approach in which they were broken into small, manageable parts, each running on dedicated machines. This paved the way for the arrival of containers.
Containers
A container packages a complete runtime environment: an application together with all the dependencies, libraries, and configuration files it needs to run smoothly.
The first commercial container implementation was introduced as a feature of the Solaris 10 Unix operating system under the name “Zones”. Containers and VMs share some similarities: both provide a secure, independent space for software to execute and give the impression of an individual system with its own administrators and users. Unlike virtual machines, however, a stack of containers can share a single host operating system and start in significantly less time than VMs.
In the early days of containers, the community believed containerization was hard to do. The arrival of Docker in 2013 turned this belief upside down. Docker quickly caught the attention of industry experts because of how easy it made bundling an application with everything it needs to run. As containers gathered attention, cloud service providers began hosting and managing containers on behalf of businesses, a model that eventually got the name Container as a Service (CaaS). Kubernetes, Google Container Engine, and AWS ECS are some well-known examples in the CaaS space.
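To show how simple containerized execution became, here is a hedged Python sketch using the Docker Engine's Python SDK (the `docker` package). It assumes the SDK is installed and a Docker daemon is running on the host.

```python
# A minimal sketch of running a container from Python.
# Assumes the `docker` Python SDK is installed and a Docker daemon is running.
import docker

client = docker.from_env()

# Run a throwaway container: the image bundles the application and all
# its dependencies, so the host only needs a container runtime.
output = client.containers.run(
    "python:3.11-slim",  # public image carrying its own runtime
    ["python", "-c", "print('hello from a container')"],
    remove=True,         # clean up the container after it exits
)
print(output.decode())
```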
Containers offered developers a simplified interface for shipping code directly, reducing what they had to worry about when deploying applications. However, it was not enough: industry experts kept looking for an even simpler, more convenient approach that would further reduce complexity for developers. It was in this era that the concept of serverless emerged, making it possible for developers to focus only on their code and the corresponding service configuration instead of everything else. With the arrival of serverless, the entire burden of deploying an application to its final hosted location was transferred to the service provider.
Serverless
Serverless architecture offers developers both Backend as a Service (BaaS) and Function as a Service (FaaS) capabilities.
BaaS lets programmers concentrate on the front end of an application while relying on managed backend infrastructure without building or maintaining it. FaaS is a serverless backend model that lets developers write modular pieces of code that execute in response to specific events. This approach removed the need to maintain physical infrastructure, or to follow a deployment procedure for server-side environments, and so let programmers focus solely on application logic. Developers simply write their logic and upload it to the FaaS platform, which invokes the function as an executable unit whenever a corresponding event occurs. This makes serverless an event-driven approach, where the flow of the program is determined by events and user inputs.
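As a concrete illustration, here is a hedged sketch of the kind of function a developer uploads to a FaaS platform, written in the AWS Lambda Python handler style. The event field used below is a placeholder assumption, not part of any fixed event schema.

```python
# A minimal FaaS function in the AWS Lambda Python handler style.
# The platform invokes handler(event, context) when a matching event
# (an HTTP request, a queue message, a file upload, ...) occurs.
import json

def handler(event, context):
    # `event` carries the trigger's payload; the "name" field here is
    # an illustrative assumption.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

When and where this code runs is entirely the platform's decision; scaling, patching, and provisioning never enter the developer's view.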
Businesses got the most out of these evolutionary advancements. The arrival and development of serverless technologies reduced the cost and effort required to deploy and maintain business applications. When businesses wanted to scale their operations, or even change the direction of their services, they could do so with considerably less burden than with monolithic applications hosted on their own servers. Tasks that once took years became far less time consuming as serverless concepts evolved.
The release of AWS Lambda marked a major milestone for serverless, bringing function invocations down to milliseconds while offering greater scalability and availability. AWS Lambda allowed programmers to focus completely on the logical aspects of the application.
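For completeness, here is a hedged sketch of invoking such a deployed function programmatically with boto3; the function name and payload are placeholder assumptions.

```python
# A minimal sketch of invoking a deployed Lambda function with boto3.
# The function name and payload are illustrative placeholders.
import json
import boto3

lam = boto3.client("lambda")

response = lam.invoke(
    FunctionName="hello-function",  # hypothetical function name
    Payload=json.dumps({"name": "serverless"}).encode(),
)
print(json.loads(response["Payload"].read()))
```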
Competition for dominance of the serverless domain soon grew rapidly. Although Amazon still appears to be in the lead, providers such as Microsoft and Google entered the fray, threatening Amazon's dominance. Even as of 2020, Amazon still leads cloud infrastructure services, launching new concepts and technologies regularly.
But the race hasn't ended yet. The evolution of serverless concepts and technologies continues, and there is surely much more to come. The IT industry's focus on serverless technologies is greater than ever before. Even at the university level, serverless has become a trending topic, creating opportunities for new projects and research. It's already clear that serverless technologies will be a key factor in the future of IT and business. For the moment, let's keep our heads high and see where these technological evolutions take us.