Automating container creation
Containers – in the eyes of many – are magic. You can put all the stuff you need for a smaller application or a section of a larger application into an environment solely catered to it where it can function on its own. It’s like creating a separate planet where polar bears can live in their native environment forever free from the terrors of global warming. In this way, containers are amazing since they can help maintain nearly extinct technologies in environments that can sustain them. That is truly magic. But casting the spell is rather bothersome, which is why we automate stuff.
Sample 1: Creating containers based on a list of requirements
A container changes between initialization and stoppage as the files and configurations inside it change. Capturing an image from this changed container gives an image with additional layers stacked on top of the initial layer, and that is also a way to create custom containers. It is useful when the images we find mostly match our requirements but not exactly: we can add a few steps (and a few layers) to make the container just the way we would like it, then turn it into an image that can be replicated for other containers. We can do all of this with Python (big surprise, amirite?):
1. Let's once again start with some simple code that runs a container based on an image:
import docker
# Connect to the local Docker daemon using the environment's settings
client = docker.from_env()
# Start a detached container from the latest Ubuntu image
container = client.containers.run("ubuntu:latest", detach=True, command="/bin/bash")
container_id = container.id
print("Container ID: " + container_id)
This set of commands will run a container containing the latest version of Ubuntu. It will also give us the ID of the container, which will be important in the next step. This will be our starting point.
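One thing to keep in mind is that a detached bash process with no terminal attached will usually exit right away, so the container above may already be stopped by the time you check on it. That is fine for the commit in the next step, which works on stopped containers too. If you would rather keep the container running and make some real changes inside it before committing, here is a minimal sketch of one way to do that (the tty=True flag, the exec_run call, and the /opt/hello.txt file are illustrative choices, not part of the original example):
import docker
client = docker.from_env()
# tty=True attaches a pseudo-terminal, which keeps bash (and the container) running
container = client.containers.run("ubuntu:latest", detach=True, tty=True, command="/bin/bash")
# Make an actual filesystem change so a later commit has a new layer to capture
container.exec_run("bash -c 'echo hello > /opt/hello.txt'")
# Confirm the container is still up before committing
container.reload()
print(container.status)  # expected: "running"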
2. Now, let’s add on to it:
# You can put in any command you want, as long as it works
new_command = "ls"
# Commit the container's current state as a new, tagged image
new_image = client.containers.get(container_id).commit(repository="<whatever_you_want>", tag="latest")
new_image_tag = "<whatever_you_want>:latest"
# Run a new container from the committed image with the new command
new_container = client.containers.run(new_image_tag, detach=True, command=new_command)
Now, we have a new container that runs the new command on top of everything captured from the original Ubuntu container. The commit turned the original container's state into a new tagged image, and the new container is different from the original but built from that image.
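If you want to confirm that the commit really did stack new layers on top of the Ubuntu base, you can ask Docker for the image's history. Here is a minimal sketch of that check (the tag is the same placeholder used above, and the keys read from each history entry come from Docker's image history API):
import docker
client = docker.from_env()
# Look up the committed image by the placeholder tag from step 2
committed = client.images.get("<whatever_you_want>:latest")
# Each history entry describes one layer, newest first
for layer in committed.history():
    print(layer.get("Id"), layer.get("CreatedBy"))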
3. Next, we need to export this image for later use:
image = client.images.get("<whatever_you_want>:latest")
with open("<insert_file_path_here>", "wb") as f:
    for chunk in image.save():  # save() yields the image as tar archive chunks
        f.write(chunk)
This will stream your image to the desired file path as a tar archive. Putting all of this code together, we get the following:
import docker
# Step 1: Initialize and run a container
client = docker.from_env()
container = client.containers.run("ubuntu:latest", detach=True, command="/bin/bash")
container_id = container.id
print("Container ID: " + container_id)
# Step 2: Add a layer
# You can put in any command you want, as long as it works
new_command = "ls"
new_image = client.containers.get(container_id).commit(repository="<whatever_you_want>", tag="latest")
new_image_tag = "<whatever_you_want>:latest"
new_container = client.containers.run(new_image_tag, detach=True, command=new_command)
# Step 3: Export the layered container as an image
image = client.images.get("<whatever_you_want>:latest")
with open("<insert_file_path_here>", "wb") as f:
    for chunk in image.save():  # save() yields the image as tar archive chunks
        f.write(chunk)
The full code gives us the complete picture and shows that all of this can be done in just a few short steps. Adding layers simply means making more changes inside the container and committing them. You can even start from an empty base image that has nothing in it if you want.
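As a quick sanity check on the exported archive, you can load it back into Docker with the SDK and run a container from it. Here is a minimal sketch of that round trip (the file path is the same placeholder as above, and the ls command is just an example):
import docker
client = docker.from_env()
# Read the tar archive produced in step 3 and load it back into Docker
with open("<insert_file_path_here>", "rb") as f:
    loaded_images = client.images.load(f.read())
# load() returns a list of images; take the first one
restored = loaded_images[0]
print("Restored image ID:", restored.id)  # the tag may not survive the export, so use the ID
# Run a short-lived container from the restored image and show its output
container = client.containers.run(restored.id, detach=True, command="ls")
container.wait()  # block until the container finishes
print(container.logs().decode())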
This is all good if you are creating individual customized images, but another complicated aspect of containers is orchestrating multiple containers to perform a task together. That takes a lot of work, which is why Kubernetes was created. Kubernetes clusters – even though they simplify container orchestration a lot – can be quite a handful, so this is another area of container automation where Python can be useful.