$ date

Sat Dec 19 19:00:53 UTC 2015


The new age: Deploying code via Containers

Alright, calm down.

Take a deep breath... And another. And another.


Good.

Now that you are calm, we can talk about my workflow with Dock-- No, don't go away yet!

Alright, I know a ton of people blogged the shit out of stuff related to Docker: The good, the bad and the ugly.

I want to talk about my own experience and workflow, and how Docker improved it a lot at a slight cost.


My experience comes from Carbon, my Lua Application Toolkit, and an IRC Bot called Cobalt.

Carbon is written in Go and uses two C libraries: LuaJIT and PhysicsFS.

It is go get-able and does not use a Makefile.

Normal Workflow (Before Docker)

My normal workflow on Carbon before Docker looked like this:

  1. Commit Changes and Push to GitHub.

  2. Wait for the CI(s) to finish building and possibly fix issues with a new commit. (Anywhere from ~1-10 minutes.)

  3. After passing, ssh to my test server, attach to tmux. (Pretty much instant.)

  4. go get Carbon and hope it doesn't fail for weird reasons. (~1 minute.)

  5. ^C the already-running test instance and start it again. (~30 seconds.)

  6. Repeat.
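In shell terms, one round trip looked roughly like this (the server name, import path and start command are placeholders, not my real ones):

$ ssh test-server
$ tmux attach
$ go get -u github.com/someuser/carbon
$ # ^C the running instance, then start it again
$ carbon testsite.lua

Now multiply that by every single change.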

I used this workflow for a long time, but eventually got sick of it.

Nothing is automated here. Sure, the CIs test the code, but they neither give me a build I can use nor deploy anything automatically.

Docker Workflow

This is the new workflow I have for my test site and Cobalt.

This is what I have to do:

  1. Commit Changes and Push to GitHub.

  2. Repeat.

Yep, you read right. Just commit and everything else is automatic.

A lazy developer's dream.


Right, you're probably wondering what is actually happening, aren't you?

  1. Commit and Push changes to GitHub.

  2. GitHub pokes TravisCI, "Hey man, got some fresh code fo' ya!"

  3. TravisCI builds the Dockerfiles, which do the same thing as the CI before: compile the code.

  4. TravisCI pushes the built Docker images to Docker Hub, which hosts them for Docker users to pull.

  5. Docker Hub notifies two services:

    1. Tutum

      1. Tutum detects the update and redeploys the test site.

    2. Docker Hub internally

      1. Docker Hub triggers a rebuild of Cobalt to make it use the new image, just in case it changed.

      2. Docker Hub notifies Tutum if the build of Cobalt succeeded, which redeploys Cobalt.
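The heavy lifting lives in the Dockerfile. Mine isn't reproduced here, but the idea is a sketch like this (the base image, package names and import path are assumptions for illustration):

  FROM golang:1.5
  # Carbon needs the two C libraries at build time
  RUN apt-get update && apt-get install -y libluajit-5.1-dev libphysfs-dev
  # go get does the rest, no Makefile needed
  RUN go get github.com/someuser/carbon
  CMD ["carbon"]

TravisCI essentially just runs docker build and docker push; Docker Hub and Tutum take it from there.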


That's it.

This is my workflow, and I like it.

I mean, it may take a bit longer end to end, maybe a minute or two, but in that minute I don't have to do anything; I can just code.

There are a few downsides to this workflow, sadly:

  1. To restart Cobalt or my test site, I usually have to hit Redeploy in Tutum instead of being able to just ^C it.

  2. I am dependent on these services. I have given up control to Tutum, Docker Hub and TravisCI.

I can deal with these downsides, and there are workarounds/notes:

  1. Just restart the container using Docker itself on the test box, or via a nice TUI like sen.

  2. Well damn. If any of those break, I really am doomed. But I don't think that's gonna happen anytime soon. In the worst case, I can run builds, deployment and testing myself.
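For the first one, restarting by hand really is just one command on the test box (the container name here is a placeholder):

$ docker restart cobalt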

And? Why the hell should I care?

Well, maybe you shouldn't.

Maybe you have the perfect setup for you, where you have to just program and everything else is done for you.

But I don't have that situation.

Eh, I don't know, maybe people working at a big-ass company with no control whatsoever really do have fewer troubles after all...


I like my workflow: it takes work off my shoulders, and best of all, it is free.

I mean, the server isn't, but there is still GitHub Education if you are a student... :)

$ cd ..