A Brief History of Containerization

Updated 12 November 2020

Before getting into the history of containers, let's start with a question: do you know what a container is, exactly? If not, let's begin with what a container is, and then discuss its emergence.

Container

A container is a method of packaging an application so that it can run with its dependencies isolated from the rest of the system.

Containers typically provide a means of creating container images based on different Linux distributions, an API to handle the container life cycle, client tools to communicate with that API, functionality to take snapshots, the ability to migrate container instances from one container host to another, and so on.

Container History

Necessity is the mother of invention, and computer history is no exception. In the early days, computers were dedicated to a single task that could take days or even weeks to run, which is why virtualisation emerged in the 1960s and 1970s. Below is a summary of container history:

From 1970

Multiple computers were connected to a single mainframe, which allowed computing to be done from a central location: every connected machine could be controlled from one central computer. The drawback of this centralization was that if the mainframe broke down, all the connected computers went down with it.

In 1979

The change root (chroot) command made it possible to change the apparent root directory for a running process, along with all of its children.
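
To make the idea concrete, here is a minimal Python sketch of chroot in action; the /srv/jail path is a hypothetical directory that must already contain whatever files the process needs, and the script has to run as root:

```python
import os

# A minimal sketch: confine this process to /srv/jail (a hypothetical
# directory prepared in advance). os.chroot requires root privileges.
os.chroot("/srv/jail")
os.chdir("/")  # step into the new root so relative paths resolve inside it

# From here on, "/" means /srv/jail: this process and all of its
# children can no longer see files outside the new root.
print(os.listdir("/"))
```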

In 2000

FreeBSD introduced the jail command in its operating system. While similar to the chroot command, it also included additional process sandboxing features to isolate filesystems, users, networks, etc. The jail command makes it possible to assign an IP address, configure custom software installations, and make modifications to each jail independently.

In 2004

This phase introduced Solaris containers, which create application environments through the use of Solaris Zones. Through zones, you can give an application full user, process, and file system space, along with access to the system hardware. However, the application can only see what is within its zone.

In 2006

This phase introduced a new feature called control groups (cgroups). Cgroups allocate system resources such as CPU time, system memory, and network bandwidth, or combinations of these resources, among the processes running on the system.
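
As a rough illustration, the sketch below uses Python to create a cgroup by hand; it assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup (the default on recent Linux distributions), root privileges, the memory and cpu controllers enabled for the parent group, and a hypothetical group name of "demo":

```python
import os

group = "/sys/fs/cgroup/demo"  # hypothetical group under a cgroup v2 mount
os.makedirs(group, exist_ok=True)

# Cap the group's memory at 256 MiB and its CPU usage at half a core
# (50 ms of CPU time per 100 ms period).
with open(os.path.join(group, "memory.max"), "w") as f:
    f.write(str(256 * 1024 * 1024))
with open(os.path.join(group, "cpu.max"), "w") as f:
    f.write("50000 100000")

# Move the current process into the group; the limits now apply to it
# and to every child it spawns.
with open(os.path.join(group, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))
```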

In 2008

Cgroups were merged into Linux kernel 2.6.24, which led to the development of the project we now call LXC (Linux Containers). LXC provides operating system-level virtualisation by allowing several isolated Linux environments (containers) to run on a shared Linux kernel. Each of those containers has its own network and process space.

In 2013

Google launched Let Me Contain That For You (LMCTFY), an open source container stack. These containers allow the isolation of resources used by multiple applications running on a single computer. Applications can be container-aware and can therefore create and manage their own subcontainers.

Rise of the Docker Container

Docker was introduced in 2013 as an open source project. It can package containers so that they can be moved from one environment to another. Docker initially relied on LXC technology, later replacing LXC with libcontainer. Docker expanded its community contributions by providing global and local container registries, a RESTful API, and a CLI client.
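
To see this in practice, here is a minimal sketch using the Docker SDK for Python (installed with pip install docker); it assumes a local Docker daemon is running and able to pull images:

```python
import docker

client = docker.from_env()  # connects to the daemon via its RESTful API

# Run a throwaway container from the official alpine image: Docker pulls
# the image if it is missing, runs the command in isolation, and removes
# the container when it exits.
output = client.containers.run("alpine", "echo hello from a container",
                               remove=True)
print(output.decode())
```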

Later, Docker introduced Docker Swarm, a container cluster management system. Swarm mode helps developers deploy and scale applications automatically across the cluster, and keeps services running when one node fails.
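
As a rough sketch of what swarm mode looks like from the same Python SDK (assuming the host is not already part of a swarm, and using a hypothetical service named "web"):

```python
import docker

client = docker.from_env()
client.swarm.init()  # turn this daemon into a single-node swarm manager

# Ask the swarm to keep three replicas of an nginx service running; the
# scheduler places them across the cluster and replaces any replica
# whose node fails.
client.services.create(
    "nginx",
    name="web",
    mode=docker.types.ServiceMode("replicated", replicas=3),
)
```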

Docker's swarm mode is similar to Kubernetes, a lightweight, extensible, open source framework for managing container workloads and services.

Similarly, AWS Fargate is a serverless compute engine for containers that helps you focus on building your application. With Fargate, you no longer have to configure or scale groups of virtual machines to operate containers. This removes the need to select server types, decide when to scale node groups, or optimize cluster packing.
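
For illustration, launching a task on Fargate with boto3 looks roughly like this; the cluster name, task definition, and subnet ID are hypothetical placeholders, and the task definition is assumed to be registered already:

```python
import boto3

ecs = boto3.client("ecs")

# Run one copy of a registered task definition on Fargate: no EC2
# instances to provision, patch, or scale.
ecs.run_task(
    cluster="demo-cluster",                 # hypothetical cluster
    launchType="FARGATE",
    taskDefinition="demo-task:1",           # hypothetical task definition
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # hypothetical subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
```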

In short, a container is a software package that contains all the dependencies the software needs to run. In our next blog, we will discuss containers and Docker in more detail.

For any help or query, please contact us or raise a ticket.
