Inside the Cloud

Incapture Technologies Blog


Rapture Runner

November 13, 2014

In a cloud/distributed environment with a large number of standard processes running, we wanted a standard mechanism for ensuring that all processes have a consistent configuration and that there is a central place where operational managers can view the “state of the system”. We created the Rapture Runner process to handle this. It is not an absolute requirement that Rapture processes be coordinated through Rapture Runner, but for some deployments it can be a very useful tool for managing what could otherwise be a complex environment.

Rapture Runner is an embedded Rapture application: it contains a complete copy of the Rapture kernel and can therefore access all of the Rapture APIs internally. In an environment using Rapture Runner it is intended to be the master process for all Rapture processes; each server hosts one instance of Rapture Runner. As it starts up, an instance connects to the Rapture data and messaging environment and spawns any other Rapture processes that are configured to run on that server. These processes could be binaries already installed on the server (through a tool such as Puppet or Chef) or stored within Rapture as blobs. The configuration for which processes run on which servers is stored within Rapture and managed through a dedicated “runner” API.

The general architecture is shown in the diagram below:


Runner Configuration

Rapture Runner manages three main concepts:

Applications and Libraries

An application in Rapture Runner is a process that can be run. Its definition (the web front end view is shown below) gives the application a name, description and version. Rapture Runner combines this information with the “APPSOURCE” configuration parameter to locate the binaries for the application. A similar technique is used to define optional (Java) libraries that should be embedded within the downloaded application.

Rapture Runner applications
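As a minimal sketch of the idea above, an application definition can be thought of as a name, description and version that are combined with the APPSOURCE parameter to locate binaries. The class and template format below are illustrative assumptions, not the actual Rapture Runner schema.

```java
// Illustrative sketch only: the fields mirror the concepts described in
// the text, not Rapture Runner's real internal representation.
class RunnerApplication {
    final String name;        // e.g. "WebServer" (hypothetical example)
    final String description;
    final String version;     // combined with APPSOURCE to find binaries

    RunnerApplication(String name, String description, String version) {
        this.name = name;
        this.description = description;
        this.version = version;
    }

    // Resolve a download location by substituting name and version into
    // an APPSOURCE-style template (the ${...} format is an assumption).
    String resolveSource(String appSourceTemplate) {
        return appSourceTemplate
                .replace("${name}", name)
                .replace("${version}", version);
    }
}
```

A template such as `http://repo/${name}-${version}.jar` would then resolve to a concrete artifact location for each defined application.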


Schedules

A schedule defines how and when an application should be run. Using a cron-style definition, Rapture Runner can ensure a process is running on certain days and between certain times. The schedule also controls which “server group” the process should be running on:

Rapture Runner Schedule
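The schedule concept can be sketched as a check of "should this process be running right now on this group?". The real system uses a cron-style definition; the simplified day-set and time-window below stand in for it and are not Rapture Runner's actual format.

```java
import java.time.DayOfWeek;
import java.time.LocalTime;
import java.util.Set;

// Simplified stand-in for a cron-style schedule entry: a set of days
// plus a start/end time window, tied to a server group.
class ScheduleEntry {
    final String serverGroup;   // group the process should run on
    final Set<DayOfWeek> days;
    final LocalTime start, end;

    ScheduleEntry(String serverGroup, Set<DayOfWeek> days,
                  LocalTime start, LocalTime end) {
        this.serverGroup = serverGroup;
        this.days = days;
        this.start = start;
        this.end = end;
    }

    // Should the process be running at this day/time?
    boolean shouldRun(DayOfWeek day, LocalTime now) {
        return days.contains(day) && !now.isBefore(start) && now.isBefore(end);
    }
}
```

A runner instance would evaluate each entry periodically, starting processes whose window has opened and stopping those whose window has closed.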

Server group

Finally, Rapture Runner manages the concept of a “server group”, containing an explicit or implicit definition of which physical or virtual servers are members of a given group:

Rapture Server Groups
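The explicit/implicit distinction can be sketched as follows: membership is either a literal list of host names or a pattern that hosts are matched against. This representation is an assumption for illustration, not Rapture's actual definition format.

```java
import java.util.Set;
import java.util.regex.Pattern;

// Sketch of a server group: explicit membership via a host list,
// implicit membership via a host-name pattern (hypothetical scheme).
class ServerGroup {
    final String name;
    private final Set<String> explicitHosts;  // may be empty
    private final Pattern implicitPattern;    // may be null

    ServerGroup(String name, Set<String> hosts, String pattern) {
        this.name = name;
        this.explicitHosts = hosts;
        this.implicitPattern = pattern == null ? null : Pattern.compile(pattern);
    }

    // Is the named server a member of this group?
    boolean contains(String hostName) {
        if (explicitHosts.contains(hostName)) return true;
        return implicitPattern != null && implicitPattern.matcher(hostName).matches();
    }
}
```

An implicit pattern such as `web-\d+` lets new servers join a group simply by following a naming convention, with no configuration change.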

Server startup

With this configuration in mind we can now describe how Rapture Runner performs its work. On startup, an instance determines which server groups it belongs to. Membership of a server group implies, via the schedule entries, that certain processes should be running. For each process that should be running, Rapture Runner downloads the application and library binaries and starts the application.
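The startup sequence just described can be sketched as a small loop over groups and their currently scheduled applications. The interfaces below are stand-ins for Rapture's data and messaging connections, not the real runner API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the startup flow: find our server groups, then start every
// application the schedule says should currently be running on them.
class RunnerStartup {
    // Stand-in for queries against configuration stored in Rapture.
    interface ConfigStore {
        List<String> groupsForServer(String hostName);
        List<String> applicationsForGroup(String group); // apps due to run now
    }
    // Stand-in for fetching binaries and spawning the process.
    interface Launcher {
        void downloadAndStart(String applicationName);
    }

    static List<String> startScheduled(ConfigStore store, Launcher launcher,
                                       String hostName) {
        List<String> started = new ArrayList<>();
        for (String group : store.groupsForServer(hostName)) {
            for (String app : store.applicationsForGroup(group)) {
                launcher.downloadAndStart(app);
                started.add(app);
            }
        }
        return started;
    }
}
```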

From that point on, Rapture Runner actively checks three things. If the schedule indicates that a new process needs to be started or an existing process should be stopped (for example, during a maintenance window), that activity is performed. If an application managed by Rapture Runner terminates unexpectedly it is restarted, up to a maximum restart count. Finally, through the API (and the web interface) an application can be manually restarted or reconfigured.
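The restart-up-to-a-maximum behaviour can be sketched as a small policy object. The reset-on-manual-restart behaviour is an assumption for illustration; the text only specifies that a maximum restart count exists.

```java
// Sketch of the restart policy: relaunch a process that exits
// unexpectedly, but only up to a maximum number of times.
class RestartPolicy {
    private final int maxRestarts;
    private int restarts = 0;

    RestartPolicy(int maxRestarts) { this.maxRestarts = maxRestarts; }

    // Called when the supervised process terminates unexpectedly;
    // returns true if the runner should start it again.
    boolean onUnexpectedExit() {
        if (restarts >= maxRestarts) return false;
        restarts++;
        return true;
    }

    // A manual restart through the API resets the counter (assumption).
    void manualRestart() { restarts = 0; }
}
```

Capping restarts prevents a crash-looping application from consuming resources indefinitely while still surviving transient failures.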


The Rapture Runner API also tracks the state of the active servers in this context, particularly their status and capabilities. This can be queried through the API, and the web front end to Rapture Runner uses it to display the current status:

Rapture Runner Status


Rapture Runner is an ideal tool for environments where operations wishes to manage the state of Rapture applications through Rapture itself. Because the runner API is available through the Rapture API, configuration management of the environment can be delegated to other tools or handled through the web interface of Rapture Runner. As always with Rapture, there is a great deal of flexibility to allow for many different deployment and management approaches.
