Plastic meets Docker

Thursday, January 22, 2015

Docker seems to be the new trend in application virtualization, and it is growing so fast that even Microsoft is getting ready to run Docker containers on Azure. They are also getting Windows ready to be dockerized.

This blog post explains how to run the pre-built Plastic SCM server Docker image we have published at https://hub.docker.com. It covers the container structure we’ve prepared and how to isolate the server container from the data container to ease upgrades.

Meeting Docker

I’ve been playing with Docker these days, studying the integration possibilities it offers and finding an initial approach to wrap our beloved Plastic SCM server in a Docker container. And after some hours toying with our own Docker containers, I must say that I’m very excited about its potential!

But first things first: what exactly is Docker?

Docker: Sharing standardized module containers

Docker, as they define themselves, is an open platform for developers and sysadmins to build, ship and run distributed applications. It allows developers to create their own isolated environments according to their needs, define communication points (disk volumes or TCP/UDP ports) and save these settings into a Docker image.

Once the image has been set up, any other developer or sysadmin can run a new instance of the image, instantiating it into a Docker container.

Each one of those containers is run and virtualized on top of the Docker server, ensuring there’s no interference between them or the host OS.

Also, any change made in a container is persistent: all containers have their own local filesystem, which is managed by Docker. It won’t be accessed by any other container unless explicitly told to do so.
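That persistence is easy to see with docker diff, which lists the filesystem changes a container has made on top of its image. Here's a minimal sketch, assuming Docker is installed and using the public busybox image:

```shell
# Run a throwaway container that writes a file and exits.
docker run --name demo busybox sh -c 'echo hello > /data.txt'
# The container has stopped, but its filesystem changes are preserved:
docker diff demo    # lists an 'A /data.txt' entry for the added file
# Remove the demo container when done.
docker rm demo
```

The file lives in the container's own layer, so no other container (and nothing on the host) sees it unless you share it explicitly.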

Docker brings improvements for both developers and sysadmins

The images and containers are a pretty effective way to share controlled work environments between developers! As the Docker developers say, it prevents last-minute surprises caused by differences between development and production environments, so nobody will need to point the finger at anyone claiming “it works on my machine!” Think about the time spent getting a new team member up to speed, too: with Docker, they will just need to run the image provided by their team!

There’s also an important improvement for sysadmins, since Docker comes with the necessary tools to assemble complex application servers made out of many components. If everything’s done right, it will be as simple as running new components from the needed images with any necessary parameter properly set. The more specific the images are, the less trouble updates will be: the sysadmin just needs to download and run the image of the new component version, connect it with the rest and that’s it! How cool is that?

Docker image hosting

A cloud service is available to share and publish Docker images: https://hub.docker.com

It works like Git. A user prepares an image on his/her machine, pushes it to the cloud repository (perhaps tagging that version), and it’s immediately available to be pulled by any other authorized user.

Please note here that the Docker Hub contains image repositories, which contain all image versions pushed by its owner. Additionally, Docker allows us to set the visibility of our repositories as either public or private, as well as to grant write permissions to other users.
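The push workflow itself is Git-like. Here's a sketch with illustrative names (you would first authenticate with docker login, and myuser/myimage stands in for a repository you own):

```shell
# Illustrative repository name and version tag; replace with your own.
REPO=myuser/myimage
TAG=1.0.0

docker tag "$REPO:latest" "$REPO:$TAG"   # tag the local image with a version
docker push "$REPO:$TAG"                 # publish that version to your Hub repository
docker pull "$REPO:$TAG"                 # any authorized user can now pull it
```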

Dockerizing PlasticSCM

After the previous intro, it’s now time to put that knowledge into motion!

Let’s build ourselves our brand new PlasticSCM image. You’ll find the sources at https://github.com/PlasticSCM/plastic-docker, and a working image named plasticscm/server at the Docker Hub.

This time I won’t go into details of how to build an image from scratch, since it’s still a work in progress. Instead I’ll show you how to get a Plastic SCM container running on Docker in a clean and seamless way. I’ll be configuring Plastic’s user-password authentication and a sample SQLite backend for the time being.

I’ll be using two containers: a data-only container holding the server’s data volumes, and a server container running the Plastic SCM server itself.

First I’ll set up the data container. Its only responsibility will be storing the data (config files, databases and logs) to decouple it from our local filesystem. We just need to execute this:

sudo docker run --name plastic_data plasticscm/server echo "My data container"

This will download our plasticscm/server image and set up a virtual environment out of it. Docker will start a new container in which the command echo "My data container" will be run. Finally, the container will stop. Yes, we don’t need it to be running to access its data volumes; that’s the beauty of it! It’s called the Data-Only Container Pattern, and it’s already quite common. We can now display our current containers using docker ps -a (we need -a because stopped containers are not shown by default):

$ sudo docker ps -a
CONTAINER ID        IMAGE                      COMMAND                CREATED             STATUS                   PORTS               NAMES
b0ea04ae5bcf        plasticscm/server:latest   "echo 'My data conta   1 minute ago        Exited (0) 2 hours ago                       plastic_data

As you see, the container has been successfully stopped. Our image exposes three data volumes:

  • /db/sqlite, where the SQLite database files are stored.
  • /config, containing all needed PlasticSCM configuration files (users.conf, groups.conf and plasticd.lic at this time).
  • /logs, where the server logging process writes all messages.
I’ll now start the production server. This time we don’t need to pass any command as an argument, since we’ve built our image to start a Plastic SCM server by default:

PLASTIC_SERVER_ID=$(sudo docker run -d -p 8087:8087 --name plastic_server --volumes-from plastic_data plasticscm/server)

Let’s have a look at the parameters: we’re starting our plastic_server container from our previous image, plasticscm/server. We’re telling Docker that we want it detached (-d), since it’s a server, and we want port 8087 in the container mapped to port 8087 on the host machine (-p 8087:8087). Finally, we want to mount the data volumes we initialized before in our plastic_data container (--volumes-from plastic_data). Oh, and running an image in detached mode returns the new container ID, so we might as well store it in a variable, just in case. Let’s see the container list again:
$ sudo docker ps -a
CONTAINER ID        IMAGE                      COMMAND                CREATED             STATUS                   PORTS                    NAMES
27e9de2c8c60        plasticscm/server:latest   "/opt/plasticscm5/se   26 seconds ago      Up 25 seconds            0.0.0.0:8087->8087/tcp   plastic_server      
b0ea04ae5bcf        plasticscm/server:latest   "echo 'My data conta   2 hours ago         Exited (0) 2 hours ago                            plastic_data  

That’s our brand new Plastic SCM server up and running! It will be listening on localhost:8087, since we mapped the ports that way. The image also creates a ‘root’ user with password ‘root’, added to a group called ‘administrators’. Keep that in mind if you wish to try it right now!

You can test connectivity by running a cm lrep command; in my case, I run it from an external machine as follows:

>cm lrep 192.168.116.148:8087
Enter credentials to connect to server [192.168.116.148:8087]
User: root
Password: ****
1 default 192.168.116.148:8087

It works!

Configure users

You’ll probably need to add more users, though. Here’s a command to help you with that:

$ sudo docker run --rm --volumes-from plastic_data plasticscm/server umtool cu user_name user_password

We’re running a new container to execute our umtool application, mounting the data volumes from our data container. This is absolutely required, since we’re modifying our production data! Also, the --rm parameter tells Docker to remove the container once the executed command finishes.

To refresh the user cache, a server restart is needed. We can take advantage of Docker’s restart command to do that:

$ sudo docker restart plastic_server
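If you have several users to add, the same two steps can be wrapped in a small loop, so the restart happens only once at the end. The user names and passwords below are placeholders:

```shell
# Placeholder user names; replace with your own.
USERS="alice bob carol"
for u in $USERS; do
  # Each iteration runs a short-lived container against the shared data volumes.
  sudo docker run --rm --volumes-from plastic_data plasticscm/server \
    umtool cu "$u" "${u}_secret"
done
# A single restart at the end refreshes the server's user cache.
sudo docker restart plastic_server
```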

Configuring the license

It’s worth noting that Plastic comes with a 5-day trial license. You can upload your own license file this way (assuming it’s in the present working directory):

$ docker run --rm --volumes-from plastic_data -v $(pwd):/newlicense plasticscm/server cp /newlicense/plasticd.lic /config

A server restart is needed in this case, too.

Upgrade to a new Plastic release

What if a new PlasticSCM image version is released? The answer is as simple as running a new server container from the new image!

Since the data is stored in a separate container, it is just a matter of running the new container with the new Plastic version.

To proceed with the update, we would stop the current container first:

sudo docker stop plastic_server

And then we run the new one, indicating the version number as a tag, 5.4.16.638 in this example:

$ PLASTIC_SERVER_ID=$(docker run -d -p 8087:8087 --name plastic_server --volumes-from plastic_data plasticscm/server:5.4.16.638)

Backups

Finally, let’s talk about backups. Everything is stored and exposed in a data volume inside our plastic_data container. So, let’s do something similar to what we did when adding a user to backup our data:

mkdir backup
sudo docker run --rm --volumes-from plastic_data -v $(pwd)/backup:/backup plasticscm/server tar cvf /backup/databases_backup_$(date +%F).tar /db/sqlite

In this example, all database files will be stored in a tar file. The -v flag mounts a directory from the host machine ($(pwd)/backup in the example) into a container path (/backup). But you can use the paths and command that best fit your needs, of course! Logs can be retrieved likewise, as follows:

mkdir log_backup
docker run --rm --volumes-from plastic_data -v $(pwd)/log_backup:/log_backup plasticscm/server tar cvf /log_backup/logs_$(date +%F).tar /logs

Since the loader.log.conf file included in the image is configured to also write to standard output, the docker logs command can be used as well:

$ docker logs plastic_server
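To restore one of those database backups, the same pattern works in reverse. Here's a sketch with a hypothetical archive name; since tar strips the leading / when creating the archive, extracting with -C / places the files back under /db/sqlite:

```shell
# Hypothetical backup file name; use the one produced by your backup run.
BACKUP_FILE=databases_backup_2015-01-22.tar
# Unpack the archive over the data volumes, then restart the server.
sudo docker run --rm --volumes-from plastic_data -v $(pwd)/backup:/backup \
  plasticscm/server tar xvf "/backup/$BACKUP_FILE" -C /
sudo docker restart plastic_server
```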

Why reuse the server image?

A question you may have is: “why are these guys using the plasticscm/server image over and over again? Do they really need whatever is in there?”

If you have browsed the Docker Hub a bit, you have probably noticed that there are basic, bare OS images we could have used to create our data container and to manipulate backups and logs. However, that would require downloading entirely different images, slowing the process down a bit. Docker builds images from shared layers, so each new container run from an already-downloaded image doesn’t duplicate it; reusing plasticscm/server saves precious disk space on our server. Also, using our own image means that we can rely on its known paths, volumes and users, which might not be possible with the basic images. We recommend having a look at this article if you’re interested in the subject.

Wrapping up

We’ve seen through this example how to get a simple Plastic SCM server working in a controlled, standardized environment in a couple of minutes. Honestly, here at Codice Software we’re pretty amazed by the possibilities Docker brings and how smoothly it handles environment administration operations.

This was just a simple proof of concept, but it’s also the beginning of our work with Docker. We’re looking forward to releasing a fully customizable image, compatible with all the authentication and database backends supported by Plastic SCM.

Last, but not least, I’d like to thank mariusk, who recently wrote to us on our forum detailing the Docker image he built to ship Plastic SCM. You’re all welcome to participate and create your own images as well!
