Docker has adopted the "ship early and often" mantra of software developers, but it isn't just shipping a new version of the Docker client a mere two months after the last one. Instead, it's offering up a major architectural change in Docker image delivery -- a clear sign Docker's success is forcing it to keep pace with its customers' real-world needs.
The original incarnation of the Docker image-delivery framework, Docker Registry, had begun experiencing performance issues under load. Scott Johnston, senior vice president of product at Docker, said in a phone conversation that this was part of a learning experience about how the company would need to support security, enhanced access control, and performance for Docker users.
Performance for Docker Registry, he said, "has become pretty evident as a critical feature, as we've watched the Docker Hub [itself based on Registry] grow. When you go from zero to three hundred million images downloaded, it really stresses the performance of the system."
Docker's answer to this was to radically refresh the architecture of Registry, switching the language used to write the software from Python to Google's Go language (which Docker itself is written in), and making changes to the way the protocol delivers images. Originally, layers of Docker images were delivered to clients sequentially; the new system downloads them in parallel.
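The gain from parallel delivery can be sketched in Go, the language the new Registry is written in. This is an illustrative sketch only, not the actual registry client: the `fetchLayer` function and the digests are stand-ins for the real blob transfers, and the point is simply that each layer is pulled in its own goroutine rather than one after another.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// fetchLayer stands in for an HTTP pull of one image layer
// (hypothetical; the real client negotiates blobs over the registry API).
func fetchLayer(digest string) string {
	return "pulled " + digest
}

// pullAll fetches every layer concurrently instead of sequentially.
func pullAll(digests []string) []string {
	results := make([]string, len(digests))
	var wg sync.WaitGroup
	for i, d := range digests {
		wg.Add(1)
		go func(i int, d string) { // one goroutine per layer
			defer wg.Done()
			results[i] = fetchLayer(d) // each write lands in its own slot
		}(i, d)
	}
	wg.Wait() // block until every layer has arrived
	return results
}

func main() {
	layers := []string{"sha256:aaa", "sha256:bbb", "sha256:ccc"}
	out := pullAll(layers)
	sort.Strings(out)
	fmt.Println(out)
}
```

With slow network transfers in place of the stub, total pull time approaches that of the largest single layer rather than the sum of all layers.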
The changes to the Docker client also reflect how rapidly demand and use cases have evolved for Docker, even if those changes still answer only some of the criticisms laid at Docker's doorstep.
Aside from a Windows edition of the Docker client, soon to be joined by an actual Windows edition of the Docker engine, most of the other new features are in Compose, the tool used to assemble applications from the contents of multiple containers. Compose now allows configurations to be shared between multiple applications and app environments, as a way of establishing heritable dependencies for container-based apps.
Those configurations can also be used to separate the development version of a given application from the deployed version -- another sign of Docker's professed interest in providing tools across the entire lifecycle of an application.
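In Compose's file format, this kind of sharing is expressed with an `extends` key, which lets one service definition inherit from another across files. The file, service, and image names below are illustrative, not from Docker's announcement:

```yaml
# common.yml -- shared base definition inherited by every environment
webapp:
  image: example/webapp
  ports:
    - "8000:8000"

# development.yml -- dev environment layers its overrides on the base
web:
  extends:
    file: common.yml
    service: webapp
  volumes:
    - .:/code   # mount the working tree only in development
```

A deployed environment would extend the same base without the development-only volume, keeping a single source of truth for the common settings.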
Version 1.6 of the Docker Engine also boasts two features that stem from the way Docker is drawing at least as many feature requests from the ops side as from the developer side: more detailed image and container handling, and a new set of logging drivers.
The sheer number of containers, and of apps built on them, coming out of the dev community, Johnston said, means QA, staging, and ops now need tools to manage and inspect those same apps. The first version of the logging framework in Docker, he said, "was okay for developers, largely, but we know these admins that are managing hundreds if not thousands of nodes take logging very seriously, and have a whole different level of systems they put in place."
Syslog, one of the logging frameworks now supported directly by Docker, is a widely used staple among admins for aggregating data collected from multiple servers -- the kind of collection mechanism needed for introspection across many Docker containers. Johnston hopes other major logging framework producers -- Logstash or Splunk, for instance -- will step up and create drivers for Docker, something that seems inevitable given Docker's growth.
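The driver is chosen per container at launch via the engine's `--log-driver` flag; the image name below is illustrative:

```shell
# Route this container's stdout/stderr to the host's syslog daemon
docker run --log-driver=syslog example/webapp

# Or disable logging entirely for a chatty container
docker run --log-driver=none example/webapp
```

Once container output flows through syslog, the existing aggregation pipelines admins already run pick it up with no Docker-specific plumbing.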