Incapture Technologies Blog

Rapture Series
Utilizing Java Watch Service

At Incapture we often implement data ingestion workflows for clients, typically as part of a larger re-engineering effort. Frequently, this involves waiting for file-based data to arrive from another system or vendor and then loading it. This is where Java’s Watch Service comes into play. Recently I was reading about Java’s Watch Service, which is included in the java.nio.file package, and thought it could help us with client engagements.

Watch Service allows you to monitor a directory and choose which types of events you want notifications for. The supported events are create, modify and delete.
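As a minimal sketch of the API (the directory handling, file name and timeout here are illustrative), the following registers a directory and reports the first event seen:

```java
import java.nio.file.*;
import java.util.concurrent.TimeUnit;

// Minimal sketch of java.nio.file's WatchService: register a directory for
// create/modify/delete events and return the first event seen (null on timeout).
class WatchDemo {

    static String awaitFirstEvent(Path dir, long timeoutSeconds) throws Exception {
        try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
            dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_MODIFY,
                    StandardWatchEventKinds.ENTRY_DELETE);
            WatchKey key = watcher.poll(timeoutSeconds, TimeUnit.SECONDS);
            if (key == null) return null; // nothing happened before the timeout
            for (WatchEvent<?> event : key.pollEvents()) {
                // event.context() is the path of the affected file, relative to dir
                return event.kind().name() + ":" + event.context();
            }
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("watchdemo");
        // Simulate a vendor dropping a file into the watched directory.
        new Thread(() -> {
            try {
                Thread.sleep(500);
                Files.createFile(dir.resolve("prices.csv"));
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }).start();
        System.out.println(awaitFirstEvent(dir, 10));
    }
}
```

A long-running server would loop on watcher.take() and call key.reset() after processing each key, but a single poll is enough to show the mechanics.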

We have released ‘WatchServer’ as part of our open source platform. The server provides a file system monitoring capability that maps file system events to Rapture actions in a repeatable and configurable fashion.

Typically the action would be a Workflow. As a reminder, ‘Workflows’ in Rapture:

  • Are constructs that define a set of tasks (or steps) that need to be performed in some order
  • Contain steps that can be implemented in various languages (Reflex, Java, Python, etc.)
  • Contain state that can be updated by each step
  • Manage step switching and execution via an internal pipeline
  • Can be initiated using Workflow API or attached to an event

There are many use cases we could support with this architecture plus Rapture platform capabilities, some of which are:

  • Loading csv file(s) to create time series accessible via Rapture’s Series API
  • Loading pdf file(s), indexing them and making them searchable via Rapture’s Search API
  • Loading xml file(s) and transforming to (json) documents accessible via Rapture’s Document API

To illustrate, I’ve developed a workflow that loads a SamplePriceData.xlsx file, extracts data from each row and creates a (json) document for each row in a Rapture document repository.

The WatchServer detects ENTRY_CREATE events and runs the workflow, which:

  1. Loads the file from /opt/test and stores it in a Rapture blob repository at blob://archive/yyyyMMdd_HHmmss/SamplePriceData.xlsx
  2. Creates a Rapture document repository containing one document for each row in the spreadsheet, at document://data/yyyyMMdd_HHmmss/ROW000001..N. This step uses Apache POI, a Java API for Microsoft documents, to extract data from the spreadsheet.
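The timestamped URIs in the steps above can be built with java.time; here is a small sketch (the class and method names are mine, not part of Rapture):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Builds the timestamped archive URIs used by the workflow described above.
class ArchiveUris {
    // The yyyyMMdd_HHmmss stamp shared by the blob and document repositories.
    static final DateTimeFormatter STAMP = DateTimeFormatter.ofPattern("yyyyMMdd_HHmmss");

    static String blobUri(LocalDateTime now, String fileName) {
        return "blob://archive/" + STAMP.format(now) + "/" + fileName;
    }

    static String rowDocUri(LocalDateTime now, int row) {
        // Row documents are numbered ROW000001..N
        return String.format("document://data/%s/ROW%06d", STAMP.format(now), row);
    }

    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.of(2016, 5, 1, 13, 45, 30);
        System.out.println(blobUri(now, "SamplePriceData.xlsx"));
        // blob://archive/20160501_134530/SamplePriceData.xlsx
        System.out.println(rowDocUri(now, 1));
        // document://data/20160501_134530/ROW000001
    }
}
```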

It is straightforward to set up and run locally using images from Incapture’s public Docker Hub account. Make sure to install Docker on your local system first! I use Docker for Mac.

Once the workflow has run, you can view the results in the default Rapture system UI at http://localhost:8000.

The archived xlsx file saved as a blob:

archive repository

and the subsequent documents created in document://data repository:


Using WatchServer in conjunction with Workflows gives you a flexible but well-defined approach to implementing your domain-specific data-loading processes, plus the benefits of the built-in operational support Rapture provides.

If you’d like more information about Incapture or Rapture, please email me or our general email address and we will get back to you for a more in-depth discussion.

Rapture and REST

At Incapture we implemented a REST server to demonstrate exposing Rapture (Kernel) calls through a REST style interface. Specifically, to perform CRUD operations on the various Rapture data types: document, blob and series. This approach can be used when modeling and implementing your own Rapture client’s domain resources and interactions.

We wanted to use a simple and straightforward REST framework, so we chose Spark. It allows you to get started quickly and provides everything needed to build an API.

Let’s focus on Rapture ‘Documents’. One of the prime uses of Rapture is to manage access to data, and Rapture has the concept of a repository for doing so. Various repository configurations and implementations are provided ‘out of the box’. For the purposes of this post we will be considering a versioned document repository hosted on MongoDB.

Document data repositories manage data as key/value pairs addressable through URIs. In fact, all data in a Rapture system is uniquely addressable via a URI; this is a key concept in using the platform.

For example, consider the following document with URI document://orders/ORD000023312 and data:

    {
        "id" : "ORD000023312",
        "orderDate" : "20150616",
        "ordType" : "market",
        "side" : "buy",
        "quantity" : 4000000.0,
        "strategy" : "XYZ",
        "fund" : "FUNDNAME",
        "status" : "FILLED"
    }

Let’s look at the process of creating a document repository and loading a document.

The first step is to spin up a local Rapture system; this can be done easily using Docker. All the Docker images are available on Incapture’s public Docker Hub registry.

So let’s begin the process of:

  1. Creating a Document repository using a POST action
  2. Adding a document using a POST action
  3. Using GET action to retrieve the data
  4. Deleting the document
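Sketched as java.net.http requests, the four calls look like this. The host and port are assumptions for a local setup (Spark’s default port is 4567), and the requests are only built here, not sent:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Builds (but does not send) the four REST calls described above.
// BASE is an assumption for a local environment.
class OrderCalls {
    static final String BASE = "http://localhost:4567";
    static final String ORDER = BASE + "/doc/orders/ORD000023312";

    // 1. Create the document repository
    static HttpRequest createRepo() {
        return HttpRequest.newBuilder(URI.create(BASE + "/doc/orders"))
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"config\":\"NREP USING MONGODB {prefix=\\\"orders\\\"}\"}"))
                .build();
    }

    // 2. Add the order document
    static HttpRequest putOrder(String orderJson) {
        return HttpRequest.newBuilder(URI.create(ORDER))
                .PUT(HttpRequest.BodyPublishers.ofString(orderJson))
                .build();
    }

    // 3. Retrieve the data
    static HttpRequest getOrder() {
        return HttpRequest.newBuilder(URI.create(ORDER)).GET().build();
    }

    // 4. Delete the document
    static HttpRequest deleteOrder() {
        return HttpRequest.newBuilder(URI.create(ORDER)).DELETE().build();
    }

    public static void main(String[] args) {
        System.out.println(createRepo().method() + " " + createRepo().uri());
        // POST http://localhost:4567/doc/orders
    }
}
```

Each request would be sent with java.net.http.HttpClient, including the session obtained from the /login call.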

A Postman collection is available with working API calls. Please note this uses https://localhost as we’re using Docker’s native (Mac) tools. The Postman collection includes a /login call and provides all the necessary body (Raw JSON/Application) inputs.

The first task is to create a versioned Document repository configured to use MongoDB.  The REST call is as follows:

    POST /doc/:authority
    Example: /doc/orders
    Body: {"config":"NREP USING MONGODB {prefix=\"orders\"}"}

The server will route this call and create this repository: document://orders

Here is the (Spark) method implementing the route; note the Rapture Kernel calls:

post("/doc/:authority", (req, res) -> {
    Map<String, Object> data = JacksonUtil.getMapFromJson(req.body());
    String authority = req.params(":authority");
    String config = (String) data.get("config");
    CallingContext ctx = getContext(req);
    if (Kernel.getDoc().docRepoExists(ctx, authority)) {
        halt(409, String.format("Repo [%s] already exists", authority));
    }
    Kernel.getDoc().createDocRepo(ctx, authority, config);
    return new RaptureURI(authority, Scheme.DOCUMENT).toString();
});

Next we will create a new ‘order’ document at URI document://orders/ORD000023312. The body for the call is provided in the postman collection.

   PUT /doc/:uri
   Example: /doc/orders/ORD000023312
   Body: {..order json here..}

Note the Rapture Kernel call to write a document: putDoc(String uri, String body).

    put("/doc/*", (req, res) -> {
        return Kernel.getDoc().putDoc(getContext(req), getDocUriParam(req), req.body());
    });

We won’t go through the subsequent GET and DELETE calls, as the Postman collection and GitHub code are available to review.


  1. RESTServer Github repository
  2. Setting up local Docker environment
  3. Postman collection

If you’d like more information about Incapture or Rapture, please email me or our general email address and we will get back to you for a more in-depth discussion.

Entitlements in Practice

Building from an earlier blog post which provides conceptual grounding on entitlements, this post provides some practical examples of how to implement entitlements in Rapture.


Entitlements in Rapture allow administrators to clearly define who can access what in Rapture.  It is a permissioning system based on users, groups, and entitlements.  API calls made to Rapture are protected by entitlements that are defined at compile-time.  A defined entitlement is associated with a number of groups of users, and this association can be made at run-time.


User – A user represents a person who is making calls to Rapture or an application that is making calls to Rapture. A user is a single entity with a username/password who needs access to Rapture.
Group – A group represents a collection of users.
Entitlement – An entitlement is a named permission that has 0 or more groups associated with it. If an entitlement has no groups associated with it, it is essentially open and any defined user in Rapture can access it. If an entitlement has at least 1 group associated with it, any user wishing to access the resource protected by this entitlement must be a member of one of the associated groups.


The use of entitlements is best explained by using a simple example.

User “bob” is a defined user in Rapture.  He writes a Reflex script to update the description associated with his username in Rapture.  Thus, he wants to use the updateMyDescription API call.

#user.updateMyDescription("My name is Bob");

He is successful.  What happened underneath the hood?

Let’s first examine how updateMyDescription is defined in the user.api file in the RaptureNew/ApiGen project.  Every API call in Rapture has a defined entitlement associated with it.

[Update the current description for a user.]
@public RaptureUser updateMyDescription(String description);

The entitlement string for the updateMyDescription call is defined as “/user/write”.  Entitlements are always defined as hierarchical slashed strings with an optional wildcard.  This means that if a user has permissions for /user, he will be permissioned for the entitlements /user/read and /user/write as well.  If a user has permission for /user/xxx, that does not mean he has permissions for /user/yyy, but he does have permissions for /user/xxx/yyy.
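The hierarchical rule can be illustrated with a short sketch (this is my own illustration, not Rapture’s actual implementation):

```java
// Illustrative only: a granted permission path covers an entitlement
// if every one of its slash-separated segments matches as a prefix.
class EntitlementCheck {

    static boolean covers(String granted, String requested) {
        String[] g = granted.split("/");
        String[] r = requested.split("/");
        if (g.length > r.length) return false; // granted path is more specific
        for (int i = 0; i < g.length; i++) {
            if (!g[i].equals(r[i])) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(covers("/user", "/user/write"));       // true
        System.out.println(covers("/user/xxx", "/user/yyy"));     // false
        System.out.println(covers("/user/xxx", "/user/xxx/yyy")); // true
    }
}
```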

On startup, a brand new Rapture instance always creates every single entitlement possible (by scanning every single *.api file) and initializes it as empty.  This means any defined user has permissions to all entitlements on startup of a clean brand-new Rapture instance.  Using the Entitlements API, users can then be added to groups, and groups can then be associated with particular entitlements to control access.  These definitions and associations are persisted to the configuration repository.

Back to our example.  Assuming that “bob” executed that api call against a brand new instance of Rapture, the entitlement of “/user/write” would have had 0 groups associated with it.  The entitlement check would have passed, since remember, an entitlement with 0 associated groups is wide-open to all defined users in Rapture.  How do we make it not pass?  We have to use the Entitlements API to associate a group of users with the entitlement “/user/write”.

#entitlement.addUserToEntitlementGroup("groupThatHasAccessToUserWrite", "alice");
#entitlement.addGroupToEntitlement("/user/write", "groupThatHasAccessToUserWrite");

Notice above that only the user “alice” has been assigned to the group “groupThatHasAccessToUserWrite”.  That group was associated with the “/user/write” entitlement.  User “bob” is not a member of that group.  Therefore, if Bob were to execute his call again after the above changes were made, it would fail.  In order for Bob to be able to make that call, he would have to be added to that group using the Entitlements API.

Dynamic Entitlements

Dynamic entitlements are entitlements with a wildcard in the string, such as /user/put/$d.  The wildcard is substituted at runtime based on the argument(s) of the API call that is made.  This allows Rapture to define entitlements that are based on the actual arguments of the API call being made.  Here is a table showing the currently defined substitutions:

Wildcard   Substituted With
$d         documentPath
$a         authority
$f         full path (i.e. authority/documentPath)
$u         current_user

Another Example

User “bob” wants to read a document out of Rapture.  Thus, he writes a Reflex script to use the getContent API call, which has the following definition:

[Retrieve the content for a document.]
@public String getContent(String docURI);

Bob’s Reflex script:

#doc.getContent("document://myAuthority/alicesDocs/doc");

The getContent call is protected by the dynamic entitlement “/data/read/$f”. The $f in this entitlement gets substituted such that Bob’s entitlement request looks like the following:

/data/read/myAuthority/alicesDocs/doc

The entitlement system will check if Bob is member of the group associated with that entitlement.  For the sake of this example, let’s assume that Alice had previously created an entitlement “/data/read/myAuthority/alicesDocs/doc” with a group with just her username included.  She basically wanted an area that she can keep private.  This means Bob’s call will fail.  He is not a member of the group associated with the entitlement “/data/read/myAuthority/alicesDocs/doc”.

The wildcard substitutions defined above are based on the Rapture URI argument that is passed into the call.  The values of documentPath, partition, and authority are all components of a RaptureURI object.  At this point, it only makes sense to use dynamic entitlements with API calls that have a RaptureURI string as an argument.
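A sketch of the substitution itself, using the example values from this post (again my own illustration, not Rapture’s code):

```java
// Expands a dynamic entitlement string using the components of the
// RaptureURI argument and the calling user, per the table above.
class DynamicEntitlement {

    static String expand(String entitlement, String authority, String docPath, String user) {
        return entitlement
                .replace("$f", authority + "/" + docPath) // full path first
                .replace("$a", authority)
                .replace("$d", docPath)
                .replace("$u", user);
    }

    public static void main(String[] args) {
        System.out.println(expand("/data/read/$f", "myAuthority", "alicesDocs/doc", "bob"));
        // /data/read/myAuthority/alicesDocs/doc
    }
}
```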

Integrating Third-Party Services with Rapture: Stripe (Payments)

Clients building applications using Rapture may want to collect payment from users based on some usage metric (recurring subscription, service-based fees, consumption-based fees, etc.). In this blog post, we will describe how we integrated Stripe with Rapture to set up a subscription service for our hosted trial environments through the Incapture developer portal.

Stripe offers a suite of APIs that support online commerce. Two aspects of their offering stood out to us –
i. Emphasis on security and PCI compliance — all sensitive credit card data is directly sent to Stripe’s vault, without it touching Incapture’s servers
ii. Developer-friendly APIs — good documentation wins, hands down.

I’ll give examples of how easy it was to build the integration using Rapture’s Reflex language — a procedural language that runs on the Java Virtual Machine — with Stripe’s API. This article is as much about Stripe subscriptions as it is about Rapture, Reflex and a front-end framework (in this case, Angular) providing the requisite stack to build a simple web app.


I. Creating a Subscription Plan in Stripe
The use case our application addresses is migrating clients from a free to paid subscription after some initial trial period. Our first step was to define parameters of a subscription plan through the Stripe dashboard; we opted for a test 30-day recurring subscription at $50 per month with no limits on usage. Now, every time a customer requests a new environment, we can associate the subscription plan ID with it.

Stripe has the option of specifying a trial period while creating a subscription plan — a handy feature that releases you from the responsibility of keeping a tab on the trial end date. However, this also necessitates collection of payment details at the time an environment is set up. We decided to not use the feature to ensure a zero-pressure customer onboarding experience.

So, when a customer accesses their dev portal dashboard, in addition to being notified when the trial period ends, there’s now an option to “Upgrade” their environment.


Subscription status on an environment card

Fig. 1. ‘Subscription Status’ on an environment card on the dev portal dashboard


II. Creating a Form to Collect Payment Details
We took advantage of Stripe’s Checkout form. It is customizable at a high level (company logo, title etc.) as well as at a functional level. We can use the same form for two different purposes — creating a subscription and updating payment details — by passing in appropriate arguments to the handler.


Stripe Checkout handler with different arguments

Fig.2. Stripe Checkout handler with different arguments


III. Creating a Subscription
If a customer signs up for a subscription for the very first time (i.e. they have never provided their payment details before), submitting the form creates two new objects:
i. Stripe customer
ii. Stripe subscription — that ties the customer object to the subscription plan we created via the Stripe dashboard.

A returning user who previously created a subscription for another environment will already be associated with a customer object (and, consequently, a payment source) and we do not need to collect payment details again. All we do is create a new subscription object and link it to the customer object.

Once created, Stripe will automatically renew the subscription every 30 days.


IV. Managing Subscriptions
Developers have a lot of flexibility in designing payment workflows in Rapture applications. For instance, basic tasks like updating payment information and canceling or refreshing subscriptions can be fully automated. Alternatively, certain actions can trigger alerts that allow for a support team member to connect with a client. Rapture also provides a number of extension points that may be linked with payments. The entitlements framework can be used to manage access to certain services and datasets based on subscription tier. Because all system activity is automatically logged, producing usage reports and using this data to inform customer segmentation and pricing analysis becomes a quick task.


V. The Mechanics
Incapture’s dev portal uses an Angular front-end and an API server built on the Rapture platform. For most apps, Reflex is the scripting language we employ to tap into Rapture’s powerful service framework mechanism: a service endpoint written in Reflex is the medium that the front-end and server use to talk to each other. Stripe has a RESTful API and Reflex leverages the entire platform API — including the ability to handle HTTP request and response objects. The result? A fully-functional Stripe app built really quickly!

Let’s take a look at the example of creating a Stripe customer object.


Flow diagram for creating a Stripe customer object

Fig.3. Flow diagram for creating a Stripe customer object


From the Subscription page, following the ‘Subscribe’ button click, we present the Stripe Checkout modal to collect a customer’s card details. After submitting the Checkout form, if everything checks out, Stripe returns a token ID that should be used to create a new customer object.

This line in our Angular controller invokes the createSubscription Reflex script (remember, creating a customer is actually a step encountered while creating a subscription for the very first time):
paymentService.createSubscription(createCustomer,,, planId, envName)
(The first argument is a flag that is set to true or false depending on the use case.)

Moving on to the Reflex part…
An important aspect of Reflex is that we can call one script from another. So, in our main script that contains the core logic (that follows the flow of creating a subscription), we call another script that makes the Stripe API call to create a customer object.
(This separation of core logic from vendor-specific API calls will also make it really easy to update scripts if we decide to switch to another payment platform in the future.)

stripeCreateCustomer = fromjson(#script.runScript("script://curtis/stripe_createCustomer",
                                 {'token': token, 'email': email, 'planId': planId}));

(email and planId are optional arguments.)

The stripe_createCustomer script itself is this:

response = {};
import HttpData as http;
headers = {};
headers["Authorization"] = "Bearer " + ENV['STRIPE_SK'];
url = "";
params = {};
params["source"] = token;
params["description"] = "First plan: " + planId;
params["email"] = email;
urlwithparams = $http.uriBuilder(url,params);
stripeResponse = $http.get(urlwithparams, "POST", null, "JSON", headers);
if (stripeResponse.error == null) do
     response = stripeResponse;
else do
     response.error = stripeResponse.error;
end
return json(response);

(Reflex has a number of built-in functions and special operators — e.g. ENV[…] — that are semantic shortcuts when interacting with Rapture. More power to you!)

Importing the HttpData module into the script gives us the ability to make REST calls with support for HTTP POST, GET, DELETE etc.

On successful creation, Stripe returns a customer object in its response. The onus of processing what is required falls on the caller Reflex script. In this case, we are only interested in the customerId value; so, we retrieve it from the response and store it in our database. (customerId is what we use to get a customer’s Stripe-related info — including determining whether a customer has already subscribed to a plan and, therefore, not asking for their payment details again.)
Further processing can specify what should be sent to the front-end JavaScript code (e.g. feedback).

Creating web apps that integrate with third-party services is a breeze when built with the Rapture-Reflex-Angular stack! In a later post, we’ll explore how to make Stripe’s webhooks talk to our dev portal and Slack.

Rapture On-Demand

Over the past couple of months at Incapture Technologies we have been working on a way to allow potential clients to experiment with a fully-fledged Rapture environment as seamlessly as possible. Our solution was to use our own product, Rapture, to create a user dashboard & management application, used along with Docker & Tutum (soon to be Docker Cloud) to manage & launch trial environments as needed.

Once an interested user is up and running with their own environment, we provide them with our “Getting Started Guide”. That web page contains the Rapture Platform documentation as well as Introductory tutorials available in 3 languages (Reflex, Java or Python). These are in place to help a user get acquainted with how to use the Rapture API, the Rapture Information Manager browser application, and the real-world impact of how Rapture could be a solution for them. The first tutorial application demonstrates how easy it is to build an application in your chosen programming language on top of Rapture. This tutorial uses cleansed hedge fund sample data (price series, positions, orders, trades etc.) that comes conveniently pre-installed on a trial environment. The tutorial highlights how easy it is to load and transform data in various forms (csv blob, series data, or json document) using the Rapture platform.


Infrastructure & Deployment:

Our applications are all deployed as Docker containers in this scenario. If you aren’t already aware, Docker containers are essentially lightweight, self-contained (highly portable) VMs. Images can be tagged and pushed to Docker Hub to be pulled from elsewhere. We strive to stay lean where we can, so some containers are even based on Alpine Linux (less than 5MB!). In short, containers provide a highly repeatable way to deploy applications.

This means that you can deploy your own application as a Docker container on top of Rapture, and that your applications can be scalable, upgradeable, reliable and portable to different environments, just as Rapture is!

The next level up, we have Tutum (soon to be Docker Cloud). Via Tutum’s API, we are able to request host machines from Amazon Web Services (EC2), as well as specify what containers (and release version #) we would like to deploy where. Containers (called services in Tutum) can be organized into pre-configured “stacks” (collection of containers/services). This is useful for applications like Rapture, since we can define a stack that contains RabbitMQ, MongoDB, and Rapture, then deploy it, without having to worry about deploying individual containers.
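For illustration, a stack file along those lines might look like the sketch below; the image names, tags and port mapping are hypothetical, not our actual configuration (8665 is the RaptureAPI port mentioned later in this post):

```yaml
# Hypothetical Tutum stack: messaging, storage, and the Rapture API server.
rabbitmq:
  image: rabbitmq:3
mongodb:
  image: mongo:3
rapture:
  image: incapture/raptureapi:latest   # hypothetical image name/tag
  links:
    - rabbitmq
    - mongodb
  ports:
    - "8665:8665"
```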

User Dashboard Application:

From the beginning of this project, we knew we would need some form of “portal” or “dashboard” for users, and a way for us to keep track of them. We built our own solution using RaptureCore. The purpose of this application is two-fold:

  1. Allow a potential client, or interested persons to register with us, request an environment once verified, and view the environments that they own.
  2. Allow ourselves a way to manage users & environments, secure operational admin credentials for each environment, and generate useful data and metrics to help sales.


For potential customers, this is where you can:

          • Request an environment
          • Browse to your environment
          • View your available environments & their status
          • Add users to an environment


Several things get set into motion when a user requests a trial environment. A machine comes up in Tutum/EC2 and a “stack” yaml file gets generated and saved to Tutum. The services/containers we define in the stack automatically deploy to the requested host when the host comes online.
Once we have our core components installed (RabbitMQ, MongoDB & RaptureAPI server, seen in the diagram), we can begin applying “plugins” to Rapture. Plugins range from pre-populating cleansed data or setting up the default trial user and their permissions, to an entire UI (Rapture Information Manager).

During this setup process, emails are sent to both Incapture Technologies and the requestor to notify them when certain stages of setup are complete. We also have a proxy server with a custom program that listens for hosts in our Tutum VPC and updates the proxy’s configs with their locations. This way, people can access their sites by a nice-looking URL (ex: <trialInstanceName>) moments after the application comes online.

At that proxied URL, the trial user can access the Rapture Information Manager from a web browser. The Rapture Information Manager allows users to view, create and edit data on their Rapture instance. Users can also execute Reflex scripts here and use a REPL Reflex window.

Users can also access their Rapture instance by RaptureAPI at <url>:8665/rapture. Users have the choice of using Reflex, Java, or Python to develop in (or all three!).


Earlier we mentioned using our user dashboard application to generate useful metrics and data that could also assist sales and improve user experience. There are two main areas to this:

        1. Salesforce integration
          • We implemented Salesforce’s rest API to allow our sales team to see new leads the moment they register.
          • This helps sales keep track of current and interested prospects and our interactions with them.
        2. Analysis of our own generated data
          • We have the ability to log & audit any activity that occurs on a trial Rapture instance. This means that we keep track of who is logging in, what APIs are being used, what exceptions are being thrown, etc.
          • We plan to automate sending emails to users if they have not logged in for a certain period (come back!) or if they seem to be throwing an abnormal amount of exceptions (need help with that?).
          • Provide them relevant documentation based upon which API they seem to be using most.
          • We can get insight into which of the three languages you can write Rapture applications in is the most popular (Reflex, Java or Python).

Rapture is all about safe storage, management, and visibility of data. This equates to a great platform for business intelligence and data warehousing, allowing users to generate insightful data & metrics in addition to what they already store.

Building web applications on Rapture

In this next set of Rapture blogs I want to explore how we at Incapture build web applications on Rapture. You can use this same technique to build your own applications, as this framework is part of the general Rapture product. This is not the only way to do it – Rapture is a platform, after all, and there are many ways to use it to create such an application.

As a taster for what I’ll be talking about I can show you a demonstration “front page” of our sandbox application:

Screen Shot 2015-02-26 at 1.49.22 PM

Here we have a tileset of “applications” that we have installed into this environment, and the ability to launch them. Over the next couple of blog posts I’ll explain behind the scenes how I used the framework to create these applications.

We should first consider what our requirements and constraints are for such an application framework. Ideally I wanted to have a very simple deployment approach – it should be straightforward to “add” an application to an existing environment – and to be able to do that in a more containerized deployment as well. I also didn’t want to have to compile and deploy “binaries” each time I changed a small aspect of an application. Finally it would be nice if I could examine and make minor modifications to an application from within Rapture itself.

In this introductory post I’ll simply talk about the underlying features of Rapture that will be used in this application framework. Subsequent posts will take each part and show how it all comes together.

The most fundamental part of a web application is its static content – the html pages, the images, the javascript code and stylesheets. Within Rapture we have a good place to put such content – we call it a “blob repository”. Content in a blob repository has a mime type (which is very useful when serving content) and can also be versioned (which is very good if we make a mistake and need to roll back content). We will need a way to serve content to anonymous users who haven’t logged in yet, and to serve application content to those who have. Rapture’s security and entitlements model will help here.

The other aspect of a web application relates to the dynamic content. An earlier blog post talked about this but one way this can be achieved in Rapture is by serving the dynamic content (usually called via ajax calls from a javascript context on the client) through the deployment of Reflex scripts. Using scripts in this way helps to mitigate large amounts of data being passed for local processing.

Finally we’ll need a way of packaging up this content into something that can be easily deployed to an application instance. Rapture has a concept called “Features” which is an ideal match for this type of deployment approach. Even better – features can be packaged into a self-installable executable which we can run against an environment in a repeatable way.

So with static content, scripts and then the “real” data and workflows associated with our application we can very quickly create applications that can run in a Rapture environment. The end point for this blog journey will be to explain how we can create an application that can present this type of information:

Screen Shot 2015-02-26 at 2.03.06 PM

Watch this space (or subscribe using the button up and to the right) for more updates on this exciting way to build applications on Rapture.

Docker & Rapture Revisited, Etienne

In an earlier post (Rapture and Docker ) I talked about how we quickly put together some Docker images for running Rapture, and the means by which we linked various containers together to form a standalone Rapture environment. In this post I’ll talk about the next steps in this journey – where we’ve created a tool to manage our Docker deployments so that it is trivially easy to start and stop Rapture environments for demo purposes. We’ll be using this tool “behind the scenes” for clients and prospective clients so that they can quickly get an isolated environment together to play with.

The tool/application we created has been internally branded “Etienne”. It’s written in Go and presents a web page for an operator to interact with. It also has an API that is driven by an internal administration Rapture environment we use for managing and supporting clients.

On start up with nothing actively running, the web page looks something like this:

Screen Shot 2015-01-13 at 11.11.09 AM

As part of the configuration of the instance we’ve passed it three main things: information about the Docker environments that are available to the app (along with security credentials to connect to them), information about topologies that can be deployed (collections of images and how to bind them together), and the domain name that Etienne is bound to, since Etienne also acts as a pass-through between the outside world and Rapture. In this way, if we start an environment called “alan”, a domain name of the form “alan.<our domain>” might be bound to the Rapture environment started.

When we start a topology, Etienne picks an appropriate set of Docker environments (based on load and with a preference for services to be started on the same environment), starts the images (creating docker containers) and then binds them all together. After starting an environment — I called this “alan” — the Etienne screen looks something like this:

Screen Shot 2015-01-13 at 11.24.27 AM

In this environment I have four different images in play. The first two provide the underlying fabric for this environment – MongoDB and RabbitMQ in this example. The third is a Rapture environment itself. The final one is a special container that has a prepackaged Rapture feature within. On startup this container connects to Rapture and installs the feature on to it – it’s a nice component based way of taking a base environment and enhancing it with functionality.

Once started the Etienne UI has a number of features that are simply passed down to the underlying Docker environment. For example, you can view the logs of a given Docker container:

Screen Shot 2015-01-13 at 11.26.50 AM

And the link on the screen also allows someone internal to our network to log into Rapture (external users will go through the external domain name, which gets proxied through Etienne). Clicking on that URL will present a log-in screen, and once logged in the UI is based on whatever was in the base platform plus any additional features.

Screen Shot 2015-01-13 at 11.30.18 AM

With Etienne we have a very useful tool for building and deploying Rapture environments. The power really comes from combining Docker (the containerization) with the modular approach of Rapture, which allows us to compose an environment from a number of parts – the underlying “fabric” of a database and messaging, a core code base, and a suite of features that enhances that environment. With these together we will be able to present a wide variety of custom demonstrations of Rapture for many different audiences.

That comes next! Watch this space… If you would like a demonstration of Rapture before we have this automated, please don’t hesitate to contact me.

Extending Rapture – Reflex

In this article and the next I’ll show two different ways in which we can extend Rapture. They are not the only ways to extend Rapture but they will show the general approach that was taken in the architectural design.

Here I’ll show how we can extend the Reflex language with new functionality. As Reflex can be used in numerous places within a Rapture-centric ecosystem, an extension to the language can greatly increase the power and flexibility of the Rapture platform and make available any systems that the extension links to.

General interface

For this example we’ll choose Twilio as a service that we want to interact with through Reflex. Twilio is a good choice as it has a well-defined and structured API that can be easily integrated with a Java-based application.

Rapture is written in the Java language and Reflex extensions must also be written in Java (or target the Java Virtual Machine). Reflex extensions must implement the following interface:

package reflex.importer;

public interface Module {
    void configure(List<ReflexValue> parameters);
    void setReflexHandler(IReflexHandler handler);
    void setReflexDebugger(IReflexDebugger debugger);

    boolean canUseReflection();
    boolean handlesKeyhole();

    ReflexValue keyholeCall(String name, List<ReflexValue> parameters);
}


This interface has three configuration style methods – the first passes configuration parameters from the Reflex script to the module and the other two pass in two important interfaces that the module can use to interact with Rapture (the ReflexHandler) and a debugger. Modules typically save this information for use in actual implemented calls.

Invocation approach

The next set of calls are really probes by Reflex to determine how calls in the script should be passed to the module. The simplest approach is to use reflection – where a method in Reflex called “hello” would correspond to a method on the implementing class that looks like the following:

   public ReflexValue hello(List<ReflexValue> params);

In this way the function can be passed parameters from the script and return a value back to the script. I’ll talk about the ReflexValue class a little later.
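Under the hood, this reflection style of dispatch can be sketched with plain Java reflection. The following is a self-contained illustration – `Object` and `String` stand in for `ReflexValue`, and `HelloModule` is a hypothetical module, not a Rapture class:

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;

// Hypothetical module: one public method per script-visible function,
// each taking a List of values and returning a value.
class HelloModule {
    public Object hello(List<Object> params) {
        return "Hello, " + params.get(0);
    }
}

public class ReflectionDispatch {
    // Find a method whose name matches the call made in the script and invoke it.
    static Object call(Object module, String name, List<Object> params) throws Exception {
        Method m = module.getClass().getMethod(name, List.class);
        return m.invoke(module, params);
    }

    public static void main(String[] args) throws Exception {
        Object result = call(new HelloModule(), "hello", Arrays.<Object>asList("Rapture"));
        System.out.println(result); // prints: Hello, Rapture
    }
}
```

The point is that the script runtime never needs a compile-time reference to the module – the method name used in the script is enough to locate and invoke the implementation.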

The second method is to use a keyhole mechanism. Using this technique the “hello” call would invoke the keyholeCall method with the first parameter being “hello” and the second parameter containing the parameters passed from the script.
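The keyhole style can be sketched the same way – every call from the script funnels through a single dispatch method, with the call name as the first argument. Again the call names and types here are illustrative stand-ins, not Rapture’s actual API:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of keyhole dispatch: one entry point, switched on the call name.
class KeyholeModule {
    public Object keyholeCall(String name, List<?> parameters) {
        switch (name) {
            case "hello":
                return "Hello, " + parameters.get(0);
            case "add":
                return (Integer) parameters.get(0) + (Integer) parameters.get(1);
            default:
                throw new IllegalArgumentException("Unknown call: " + name);
        }
    }
}

public class KeyholeDispatch {
    public static void main(String[] args) {
        KeyholeModule m = new KeyholeModule();
        System.out.println(m.keyholeCall("hello", Arrays.asList("Rapture"))); // Hello, Rapture
        System.out.println(m.keyholeCall("add", Arrays.asList(2, 3)));       // 5
    }
}
```

The trade-off is clear: reflection keeps each function as an ordinary method, while the keyhole gives the module full control over dispatch (useful for dynamically generated call names).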

Twilio Example

For our Twilio Reflex add-in we’ll use Twilio’s Java API library and, in particular, attempt to send an SMS message using the system. The pseudo code to send an SMS is reproduced below:

        TwilioRestClient client = new TwilioRestClient(ACCOUNT_SID, AUTH_TOKEN);
        Account account = client.getAccount();
        SmsFactory smsFactory = account.getSmsFactory();
        Map<String, String> smsParams = new HashMap<String, String>();
        smsParams.put("To", params.get(0).asString());
        smsParams.put("From", params.get(1).asString());
        smsParams.put("Body", params.get(2).asString());
        Sms sms;
        sms = smsFactory.create(smsParams);

Here we create an Account object from a TwilioRestClient (noting that we have to pass in an ACCOUNT_SID and an AUTH_TOKEN to make that happen), and then there are three extra pieces of information we need to send the SMS – who it’s to, who it’s from and the body of the message.

We could pass in the ACCOUNT_SID and AUTH_TOKEN in every single call, but this is exactly the purpose of the interface’s configure call. We can start by implementing that:

package reflex.module;

public class Twilio implements Module {
    private String ACCOUNT_SID = "";
    private String AUTH_TOKEN = "";

    public void configure(List<ReflexValue> arg0) {
        if (arg0.size() == 2) {
            ACCOUNT_SID = arg0.get(0).asString();
            AUTH_TOKEN = arg0.get(1).asString();
        }
    }
}

Here we assume the configuration is passed to the module as two strings, something like the following:

import Twilio as tw with ("ZZZZZZ6a624d40443a09039a0256e78a","827ea98ac8dbf6ac51c77eZZZZZZZc");

The AUTH_TOKEN and ACCOUNT_SID are part of our contract with Twilio.

The implementation of our sendSms function is as follows:

    public ReflexValue sendSms(List<ReflexValue> params) {
        TwilioRestClient client = new TwilioRestClient(ACCOUNT_SID, AUTH_TOKEN);
        Account account = client.getAccount();
        SmsFactory smsFactory = account.getSmsFactory();
        Map<String, String> smsParams = new HashMap<String, String>();
        smsParams.put("To", params.get(0).asString());
        smsParams.put("From", params.get(1).asString());
        smsParams.put("Body", params.get(2).asString());
        Sms sms;
        try {
            sms = smsFactory.create(smsParams);
            return new ReflexValue(sms.getStatus());
        } catch (TwilioRestException e) {
            throw new ReflexException(-1, String.format("Could not send sms - %s", e.getMessage()), e);
        }
    }

It’s simply adapted from the example code above with some exception handling thrown in.

The final pieces of code in our module satisfy the rest of the interface and let Reflex know that it should use reflection:

public class Twilio implements Module {

    public ReflexValue keyholeCall(String name, List<ReflexValue> parameters) {
        return null;
    }

    public boolean handlesKeyhole() {
        return false;
    }

    public boolean canUseReflection() {
        return true;
    }

    public void setReflexHandler(IReflexHandler handler) {
        // We do not use the Reflex Handler
    }

    public void setReflexDebugger(IReflexDebugger debugger) {
        // We do not use the Reflex Debugger
    }
}


That’s about it – now, if this jar file is on the classpath of a Rapture application that is invoking a Reflex script the following code (with appropriate account information!) will send an SMS:

import Twilio as tw with ("ZZZZZZ6a624d40443a09039a0256e78a","827ea98ac8dbf6ac51c77eZZZZZZZc");
res = $tw.sendSms("+15121112222","(888) 111-2222","Hello from Rapture!");

In some of our application environments we use this exact technique to send alert messages in case of certain failures.


We’ve created some other Reflex add-ins as samples – some of them more for fun and some very relevant:

  • A charting library that creates SVG charts
  • A library to allow searching of an ElasticSearch database
  • A Microsoft Excel Spreadsheet reader and writer
  • A PDF file creator
  • An interface into Wolfram Alpha
  • An interface into the Yahoo Financials FX API

As before – if you’d like more information about Incapture or Rapture please drop me a line personally or to our general email address and we will get back to you for a more in depth discussion.

Rapture and Docker

Docker is a container virtualization platform that “lets you quickly assemble applications from components and eliminates the friction that can come when shipping code”, as the Docker site puts it.

I thought it would be interesting to see what it would take to completely deploy a sample Rapture environment using Docker. It was straightforward to pull together a number of containers for this purpose and have a complete Rapture environment running on Docker in about 30 minutes – with the boot up time of the environment measured in seconds.

host ~> docker ps
CONTAINER ID        IMAGE                               COMMAND                CREATED             STATUS              PORTS                                               NAMES
c13338c5c5c8        incapture/raptureapiserver:latest   "/bin/bash /rapture/   About an hour ago   Up About an hour>8665/tcp                             lonely_ardinghelli
96a4817c7da6        incapture/rabbitmq:latest           "rabbitmq-start"       20 hours ago        Up 20 hours>15672/tcp,>5672/tcp   rabbitmq
bc7fe0f0ed87        incapture/mongo:latest              "/bin/sh -c usr/bin/   21 hours ago        Up 21 hours>27017/tcp                            mongo
host s004 ~>

Defining the lower level images

Docker introduces the concept of an image. We create containers from images. For my sample Rapture environment I used MongoDB as the database system and RabbitMQ as the messaging system. A quick internet search gave me some good recipes for those images – here expressed in their Dockerfiles (the configuration file used to create an image).

For mongodb:

FROM ubuntu:latest
RUN apt-key adv --keyserver hkp:// --recv 7F0CEB10
RUN echo 'deb dist 10gen' | tee /etc/apt/sources.list.d/10gen.list
RUN apt-get update
RUN apt-get install -y mongodb-org
RUN mkdir -p /data/db
EXPOSE 27017
ENTRYPOINT usr/bin/mongod

And for rabbitmq (a little bit more tricky due to the key signing of the install)

FROM dockerfile/ubuntu
ADD bin/rabbitmq-start /usr/local/bin/

# Install RabbitMQ.
RUN \
  wget -qO - | apt-key add - && \
  echo "deb testing main" > /etc/apt/sources.list.d/rabbitmq.list && \
  apt-get update && \
  DEBIAN_FRONTEND=noninteractive apt-get install -y rabbitmq-server && \
  rm -rf /var/lib/apt/lists/* && \
  rabbitmq-plugins enable rabbitmq_management && \
  echo "[{rabbit, [{loopback_users, []}]}]." > /etc/rabbitmq/rabbitmq.config && \
  chmod +x /usr/local/bin/rabbitmq-start

# Define environment variables.

# Define mount points.
VOLUME ["/data/log", "/data/mnesia"]

# Define working directory.

# Define default command.
CMD ["rabbitmq-start"]

# Expose ports.
EXPOSE 15672

(This was taken almost directly from a publicly available rabbitmq Dockerfile.)

With those files in place I could create the images using these commands (run in the folder that contained the appropriate Dockerfile)

docker build -t incapture/mongo .
docker build -t incapture/rabbitmq .

After these commands completed I had a set of local images that I could use to start MongoDB and RabbitMQ.

Starting the lower level containers

With those images created I could quickly start a container for the two applications. I gave them very specific names in my example rather than using docker’s automatic naming facility:

docker run -d --name=mongo incapture/mongo
docker run -d --name=rabbitmq incapture/rabbitmq

After running these commands I have two local containers running on my host – one for MongoDB and one for RabbitMQ.


Before I created my Rapture image I had to consider how Rapture was going to “find” the two services I had started previously. The connection information for MongoDB and RabbitMQ is stored in a set of configuration files that live either on the resource (class) path of Rapture or in a specific location on the filesystem, with that location driven by an environment variable. Normally this configuration names the host or ip address of the underlying service – but in this case I didn’t want to put a specific name in the configuration file. I would rather use a more general name and rely on docker’s linking (or add-host) facility to bind the service I had started to that general name. Here I used the general names “rabbit” and “mongo”. We’ll see how they are referenced in a moment.

The RaptureAPIServer image

Rapture has a pre-packaged application available that wraps the core platform along with a servlet based binding for the API and a web front end for general operational management. It’s called “RaptureAPIServer”. For this example I am going to take that codebase and create an image from it.

To do this I copied a built (and installable) version of RaptureAPIServer. Once built a RaptureAPIServer has three subfolders – a bin folder for the start scripts and the website, a lib folder for the java libraries and an etc folder for the configuration. I created a local folder for my image creation and copied in the built application. Structurally it looks like this:

Screen Shot 2014-12-16 at 1.22.15 PM

Next I added two additional configuration files for the bindings to the services, placed in the etc/rapture/config folder.
Note that these refer to a general server “mongo” for MongoDB and “rabbit” for RabbitMQ.

Finally I created the DockerFile for RaptureAPIServer. It is reproduced below:

FROM ubuntu:latest
MAINTAINER Alan Moore <>
RUN apt-get update && apt-get install -y default-jdk
COPY app/RaptureAPIServer /rapture
ENV RAPTURE_CONFIG_HOME /rapture/etc/rapture/config
WORKDIR /rapture/bin
EXPOSE 8665
ENTRYPOINT [ "/bin/bash", "/rapture/bin/RaptureAPIServer" ]

The Dockerfile installs the Java JDK (as Rapture needs Java to run) and copies the contents of the Rapture application into a folder called “/rapture”. It sets up the environment variable RAPTURE_CONFIG_HOME (where Rapture can optionally look for configuration files), sets the working directory appropriately, exposes the port used by the API and web site, and invokes the start script for Rapture.
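The configuration lookup this enables can be sketched in Java – a plausible illustration of the order the startup log shows later (a packaged classpath default, optionally overridden by a file under RAPTURE_CONFIG_HOME), not Rapture’s actual implementation:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;

// Sketch: prefer a config file under $RAPTURE_CONFIG_HOME, fall back to the
// default packaged on the classpath. Paths and names are illustrative.
public class ConfigLookup {
    static InputStream open(String name) {
        String home = System.getenv("RAPTURE_CONFIG_HOME");
        if (home != null) {
            File f = new File(home, name);
            if (f.exists()) {
                try {
                    return new FileInputStream(f);
                } catch (FileNotFoundException e) {
                    // fall through to the classpath default
                }
            }
        }
        return ConfigLookup.class.getResourceAsStream("/rapture/config/defaults/" + name);
    }

    public static void main(String[] args) {
        // With neither the environment variable nor a packaged default
        // present, the lookup simply finds nothing.
        System.out.println(open("RaptureMONGODB.cfg") != null ? "found" : "not found");
    }
}
```

This is why setting RAPTURE_CONFIG_HOME in the image is enough to override the packaged defaults without rebuilding the application itself.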

We build the image in the standard way:

docker build -t incapture/raptureapiserver .

Running Rapture

Finally we can start a Rapture container. The only special thing to do in this case was to link the new container with the previously started services:

docker run -d --link mongo:mongo --link rabbitmq:rabbit -P incapture/raptureapiserver

Rapture starts perfectly well – initializing the MongoDB environment for first use.

21:29:31,716  INFO [main] ( - Found default for RaptureLOGGER.cfg in classpath. URL: jar:file:/rapture/lib/RaptureAppConfig-!/rapture/config/defaults/RaptureLOGGER.cfg
21:29:31,720  INFO [main] ( - RAPTURE_CONFIG_HOME is set to /rapture/etc/rapture/config, retrieve config from there for RaptureLOGGER.cfg
21:29:31,723  INFO [main] ( - No global config found for RaptureLOGGER.cfg.
21:29:31,723  INFO [main] ( - No app-specific config found for RaptureLOGGER.cfg.
21:29:31,798  INFO [main] ( <> [] - Starting RaptureAPIServer
21:29:31,819  INFO [main] ( <> [] - ==================================
21:29:31,821  WARN [main] ( <> [] - No JMX port defined
21:29:31,827  INFO [main] ( <> [] - Loading addins
21:29:31,828  INFO [main] ( <> [] - No default found for RaptureLOCAL.cfg.
21:29:31,829  INFO [main] ( <> [] - RAPTURE_CONFIG_HOME is set to /rapture/etc/rapture/config, retrieve config from there for RaptureLOCAL.cfg
21:29:31,829  INFO [main] ( <> [] - Found global config for RaptureLOCAL.cfg in RAPTURE_CONFIG_HOME. URL: /rapture/etc/rapture/config/RaptureLOCAL.cfg
21:29:31,829  INFO [main] ( <> [] - No app-specific config found for RaptureLOCAL.cfg.
21:29:31,834  INFO [main] ( <> [] - Using /rapture/lib/RaptureAPI- from path of class library as the base folder
21:29:31,835  INFO [main] ( <> [] - Addins will be loaded from /rapture/addins
21:29:31,835  INFO [main] ( <> [] - Configured as a web server
21:29:31,864  INFO [main] ( <> [] - Loading Rapture Config
21:29:31,870  INFO [main] ( <> [] - Found default for rapture.cfg in classpath. URL: jar:file:/rapture/lib/RaptureAppConfig-!/rapture/config/defaults/rapture.cfg
21:29:31,870  INFO [main] ( <> [] - RAPTURE_CONFIG_HOME is set to /rapture/etc/rapture/config, retrieve config from there for rapture.cfg
21:29:31,871  INFO [main] ( <> [] - Found global config for rapture.cfg in RAPTURE_CONFIG_HOME. URL: /rapture/etc/rapture/config/rapture.cfg
21:29:31,871  INFO [main] ( <> [] - No app-specific config found for rapture.cfg.
21:29:31,975  INFO [main] ( <> [] - Successfully loaded config file
21:29:31,976  INFO [main] ( <> [] - Bootstrap config is REP {} USING MONGODB { prefix="rapture.bootstrap" }
21:29:32,076  INFO [main] ( <> [] - Found default for RaptureMONGODB.cfg in classpath. URL: jar:file:/rapture/lib/MongoDb-!/rapture/config/defaults/RaptureMONGODB.cfg
21:29:32,076  INFO [main] ( <> [] - RAPTURE_CONFIG_HOME is set to /rapture/etc/rapture/config, retrieve config from there for RaptureMONGODB.cfg
21:29:32,077  INFO [main] ( <> [] - Found global config for RaptureMONGODB.cfg in RAPTURE_CONFIG_HOME. URL: /rapture/etc/rapture/config/RaptureMONGODB.cfg
21:29:32,078  INFO [main] ( <> [] - No app-specific config found for RaptureMONGODB.cfg.
21:29:32,080  INFO [main] ( <> [] - Host is mongodb://rapture:rapture@mongo/RaptureMongoDB
21:29:32,095  INFO [main] ( <> [] - Username is rapture
21:29:32,095  INFO [main] ( <> [] - Host is [mongo]
21:29:32,096  INFO [main] ( <> [] - DBName is RaptureMongoDB
21:29:32,096  INFO [main] ( <> [] - Collection is null
21:29:32,877  INFO [main] ( <> [] - Creating audit log provider for kernel with config LOG {} using LOG4J {}
21:29:32,880  INFO [main] ( <> [] - Creating audit log from config - LOG {} using LOG4J {}
21:29:32,892  INFO [main] ( <> [] - No default found for RaptureRUNNER.cfg.
21:29:32,892  INFO [main] ( <> [] - RAPTURE_CONFIG_HOME is set to /rapture/etc/rapture/config, retrieve config from there for RaptureRUNNER.cfg
21:29:32,893  INFO [main] ( <> [] - No global config found for RaptureRUNNER.cfg.
21:29:32,893  INFO [main] ( <> [] - No app-specific config found for RaptureRUNNER.cfg.
21:29:32,894  WARN [main] ( <> [] - No config files found for RaptureRUNNER.cfg
21:29:32,894  INFO [main] ( <> [] - Unable to find overlay file for RaptureRUNNER.cfg
21:29:32,901  INFO [main] ( <> [] - Tue Dec 16 21:29:32 UTC 2014 (user=raptureApi)  [kernel] kernel: Instance started
21:29:32,901  INFO [main] ( <> [] - AppStyle is webapp
21:29:32,901  INFO [main] ( <> [] - Configuring server on port 8665
21:29:32,907  INFO [main] ( <> [] - Starting WebServer for Rapture

And the web site is also available through the host port mapping:

Screen Shot 2014-12-16 at 1.30.58 PM

Next steps

This was just a quick demonstration of how easy it is to create a Rapture environment within a set of Docker containers. In the real world more work would be involved in automating the bindings and handling containers that run on different hosts – but there is nothing inherently difficult about that. In fact, a higher-level Rapture environment and workflows could manage the topology of these satellite environments!

As before – if you’d like more information about Incapture or Rapture please drop me a line personally or to our general email address and we will get back to you for a more in depth discussion.

A Reflex Sandbox

If you’ve been reading this blog for a while you may be interested in having a little play with Rapture and perhaps Reflex. We’re in the process of setting up a sandbox environment for this and wanted to create some simple web pages for interacting with the environment. This post shows how I created a simple page that can be used to edit and run Reflex scripts. The real page will have a number of additions to help guide the novice user and will probably look a little different but the core aspects of the work will be the same.

In a previous post I talked about an architectural approach we’ve used to have a web page (JavaScript) talk to a Rapture back end — we used Reflex scripts as the “processing” on the server, returning json formatted documents back to the client. The code on the page ends up being simple Ajax calls to Rapture.

Let’s start with a simple screen shot of the page in action:

Simple Reflex View

In this page we have a Reflex script being displayed on the left, the output on the right, some buttons and parameter information at the bottom left and that’s about it. The real “function” of the web page could therefore be broken down into:

  • Read a Reflex script from Rapture and display it.
  • Edit that script
  • Save that script
  • Run that script on Rapture, capturing the output
  • Display that output
  • Alter parameters passed to script execution

We also did not want to write everything from scratch, so we leaned on the large body of open source and free software available in this area – Bootstrap for the layout, the ACE editor for editing the script, and jQuery for the Ajax plumbing.

The code for the page consists of the html for the general layout and some javascript for the main interaction. A truncated form of the page is reproduced below – it’s a simple bootstrap container:

<div class="container-fluid">
  <div class="row">
    <div class="col-md-6">
      <div class="panel panel-default">
        <div class="panel-heading">Reflex Script</div>
        <div class="panel-body"></div>
        <div class="panel-footer">
          <div><button class="btn btn-info" id="saveScript">Save</button>
          <button class="btn btn-error" id="runScript">Run</button></div>
        </div>
      </div>
    </div>
    <div class="col-md-6">
      <div class="panel panel-default">
        <div class="panel-heading">Output</div>
        <div class="panel-body"></div>
        <div class="panel-footer">Return value from execution:
          <div class="alert alert-info" id="execOutput"></div>
        </div>
      </div>
    </div>
  </div>
</div>

It’s worth pointing out at this point that I’m not a UI or UX expert and Incapture Technologies is hiring. After you’ve finished critiquing my little hack I’d love to hear from you if you want to join our team.

The layout above contains simple placeholders for the main functionality. The main work is done in the associated javascript code. I’ll split that up and explain each in turn.

The first snippet of code is about parsing the parameters passed to the web page. In this example the script to edit/run and parameters are passed in the url string. We of course rely on entitlements to protect the environment from callers manipulating those parameters – if you recall from an earlier post every api call in Rapture is protected by entitlements.

var QueryString = function() {
    var query_string = {};
    var query = window.location.search.substring(1);
    var vars = query.split("&");
    for ( var i = 0; i < vars.length; i++) {
        var pair = vars[i].split("=");
        if (typeof query_string[pair[0]] === "undefined") {
            query_string[pair[0]] = pair[1];
        } else if (typeof query_string[pair[0]] === "string") {
            var arr = [ query_string[pair[0]], pair[1] ];
            query_string[pair[0]] = arr;
        } else {
            query_string[pair[0]].push(pair[1]);
        }
    }
    return query_string;
}();

I found this code on the internet a while back. There are now better ways of doing this, but for my little page it does the trick – after execution the variable QueryString will be a dictionary of the parameter names and values.

The first real work I needed to do was to load the Reflex script associated with the parameter “id” passed to the page. This snippet does that job:

function startup() {

    var id = QueryString.id;
    $.ajax({
        url : "../web/getScript.rrfx?id=" + id,
        dataType : 'json',
        success : function(data) {
            editor.setValue(data.content, -1);
            window.scrollTo(0, 0);
        }
    });
}

The variable editor is setup earlier on – it’s an instance of an “ACE” editor:

var editor = ace.edit("editor");

Basically the loading code makes an Ajax call to the endpoint “web/getScript.rrfx” with the parameter “id” being the script to load. Rapture interprets this as “run the Reflex script web/getScript, setting the web variable to the passed parameters”. I’d previously installed this Reflex script in the Rapture environment; it looks like this:

response = {};
script = #script.getScript(web.id);
response.content = script.script;

This simple Reflex script loads the stored script object (which contains information about the script as well as the script text itself) and returns just the script text as the content of the response. The script prints out the json form of that map, which is what is returned to the JavaScript page. If you look at the JavaScript code above you see it setting the editor content to “data.content”.

So after this process we will have a nice editor showing our Reflex script, which we can edit and save. The saving code is attached to the save button and it looks like this:

$("#saveScript").on("click", function() {
    var d = editor.getValue();
    var id = QueryString.id;
    var vals = {};

    vals['id'] = id;
    vals['contents'] = encodeURIComponent(d);

    $.ajax({
        url : "../web/putScript.rrfx",
        dataType : 'json',
        type : 'POST',
        data : vals,

        success : function(data, textStatus, jqXHR) {
            $("#message").text("Saved document");
        }
    });
    return false;
});

This code should feel similar to the earlier one – it’s taking the content of the editor and the script to save – it’s packaging them up as a set of parameters to be sent to a Reflex “web/putScript” call. The contents of that Reflex script is reproduced below:

id = web.id;
contents = web.contents;
if #script.doesScriptExist(id) do
   rscript = #script.getScript(id);
   rscript.script = contents;
   #script.putScript(id, rscript);
else do
   #script.createScript(id, "REFLEX", "PROGRAM", contents);
end
println({ "status" : "ok"});

This code basically checks to see if the script already exists – if it does it loads the script, updates the contents and then saves the script back. If it doesn’t it calls a specific Rapture API call to create a new script. Again note that the script above is run in the security context of the logged in user and every API call made has its entitlements checked.

So we’ve loaded and saved a script, the final real part of our page is running the script and seeing the output. The first part (running the script) follows our usual pattern:

$('#runScript').click(function() {
    var scriptURI = $('#contentName').text();
    $.ajax({
          url: "../web/runScriptView.rrfx",
          dataType: 'json',
          data: QueryString,
          success: function(data) {
              // display the captured output and the return value
          },
          error: function (error) {
              // display the error to the user
          }
    });
});

Here there’s a little complexity processing the return value (we’ll look at that in a moment) but the execution of the script is done by executing the web url “/web/runScriptView” and that is itself a Reflex script that looks like the following:

scriptURI = web['id'];
res = #script.runScriptExtended(scriptURI, web);

The API call runScriptExtended basically runs the script and captures all of the output that script does (println calls for example) and the return value from the execution of the script. This structure is then returned to the caller – in this case the JavaScript ajax success or error functions.

In our little page I wanted to do very trivial display parsing – if the return was json I could display it in an editor-type context, if it was html I could display it directly, and otherwise I’d treat it as raw text. So the complicated-looking code in the success/error functions of our execution code is there to show the output in the right format and to display the return value. In this example I used the rather fragile technique of looking at the first character returned to determine the format, calling different display code accordingly:

function processOutput(output) {
    if (output[0][0] == '{') {
        processJson(output);
    } else if (output[0][0] == '<') {
        processHtml(output);
    } else {
        processText(output);
    }
}

function processJson(output) {
    outputJ.setValue(output[0], -1);
}

function processHtml(output) {
    var div = document.getElementById('htmlOut');
    // data is raw html
    var data = "";
    for (var i = 0; i < output.length; i++) {
        data = data + output[i];
    }
    div.innerHTML = data;
}

function processText(output) {
    var oldTable = document.getElementById('execOutputList'),
        newTable = oldTable.cloneNode(false), tr, td;
    var tbody = document.createElement('tbody');
    for (var i = 0; i < output.length; i++) {
        tr = document.createElement('tr');
        addCell(tr, output[i]);
        tbody.appendChild(tr);
    }
    newTable.appendChild(tbody);
    oldTable.parentNode.replaceChild(newTable, oldTable);
}

function addCell(tr, element) {
    var td = document.createElement('td');
    if (element == undefined) {
        td.appendChild(document.createTextNode(""));
    } else {
        td.appendChild(document.createTextNode(element));
    }
    tr.appendChild(td);
}

For html output the picture looks something like:

Screen Shot 2014-12-10 at 8.25.16 AM

And for simple text we have:

Screen Shot 2014-12-10 at 8.26.43 AM

And that is pretty much it! The code for our simple page is split across three different layers – the html layout, the Javascript client side plumbing and the server side Reflex processing sitting on top of Rapture. Together they can make rich user experiences with a short development lifecycle. Look out for the sandbox Reflex explorer in the wild soon!


As before – if you’d like more information about Incapture or Rapture please drop me a line personally or to our general email address and we will get back to you for a more in depth discussion.
