- April 4, 2016
- Joe Grabenstein
Over the past couple of months at Incapture Technologies we have been working on a way to let potential clients experiment with a fully-fledged Rapture environment as seamlessly as possible. Our solution was to use our own product, Rapture, to build a user dashboard & management application that works alongside Docker & Tutum (soon to be Docker Cloud) to manage & launch trial environments as needed.
Once an interested user is up and running with their own environment, we provide them with our “Getting Started Guide”. That web page contains the Rapture platform documentation as well as introductory tutorials in three languages (Reflex, Java, or Python). These are in place to help a user get acquainted with the Rapture API, the Rapture Information Manager browser application, and the real-world impact of how Rapture could be a solution for them. The first tutorial application demonstrates how easy it is to build an application in your chosen programming language on top of Rapture. It uses cleansed hedge fund sample data (price series, positions, orders, trades, etc.) that comes conveniently pre-installed on a trial environment, and highlights how easy it is to load and transform data in various forms (CSV blob, series data, or JSON document) using the Rapture platform.
Infrastructure & Deployment:
Our applications are all deployed as Docker containers in this scenario. If you aren’t already aware, Docker containers are essentially lightweight, self-contained (highly portable) VMs. Container images can be tagged and pushed to Docker Hub to be pulled from elsewhere. We strive to stay lean where we can, so some of our containers are even built on Alpine Linux (a base image of less than 5 MB!). In short, containers provide a highly repeatable way to deploy applications.
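To give a feel for how small an Alpine-based container can be, here is a hypothetical minimal Dockerfile (not our actual build file; the script name is purely illustrative):

```dockerfile
# Hypothetical example: a tiny Alpine-based image for a helper service.
FROM alpine:3.3

# Install only what the service needs; --no-cache keeps the image lean.
RUN apk add --no-cache curl

# The entrypoint script here is illustrative, not a real Incapture artifact.
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

An image built this way can then be tagged and pushed to Docker Hub like any other, e.g. `docker build -t myorg/helper:1.0 . && docker push myorg/helper:1.0`.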
This means that you can deploy your own application as a Docker container on top of Rapture, making it scalable, upgradeable, reliable, and portable across environments, just as Rapture is!
One level up, we have Tutum (soon to be Docker Cloud). Via Tutum’s API, we can request host machines from Amazon Web Services (EC2) and specify which containers (and release versions) we would like to deploy where. Containers (called services in Tutum) can be organized into pre-configured “stacks” (collections of containers/services). This is useful for applications like Rapture, since we can define a single stack containing RabbitMQ, MongoDB, and Rapture, then deploy it as a unit without having to worry about deploying individual containers.
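A sketch of what such a stack file might look like (service names, image tags, and the Rapture image name are illustrative assumptions, not our production configuration):

```yaml
# Hypothetical Tutum stack file: Rapture's core services deployed together.
rabbitmq:
  image: rabbitmq:3
mongodb:
  image: mongo:3.2
rapture:
  image: incapture/raptureapi:latest   # illustrative image name
  ports:
    - "8665:8665"
  links:
    - rabbitmq
    - mongodb
```

Deploying the stack brings up all three services on the target host with the messaging and storage containers linked to the Rapture API server.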
User Dashboard Application:
From the beginning of this project, we knew we would need some form of “portal” or “dashboard” for users, and a way for us to keep track of them. We built our own solution using RaptureCore. The purpose of this application is two-fold:
- Allow a potential client, or interested persons to register with us, request an environment once verified, and view the environments that they own.
- Allow ourselves a way to manage users & environments, secure operational admin credentials for each environment, and generate useful data and metrics to help sales.
For potential customers, this is where you can:
- Request an environment
- Browse to your environment
- View your available environments & their status
- Add users to an environment
Several things are set in motion when a user requests a trial environment. A machine comes up in Tutum/EC2 and a “stack” YAML file gets generated and saved to Tutum. The services/containers we define in the stack automatically deploy to the requested host when the host comes online.
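As a rough sketch of this provisioning step (the endpoint path, payload shape, and image names are assumptions based on Tutum's REST API conventions, not Incapture's actual code):

```python
"""Sketch: push a trial-environment stack definition to Tutum.

All service names, image names, and the endpoint path are illustrative
assumptions, not Incapture's production code.
"""
import json
import urllib.request

TUTUM_API = "https://dashboard.tutum.co/api/v1"


def build_stack_payload(trial_name):
    """Build the JSON body for a stack containing Rapture's core services."""
    return {
        "name": trial_name,
        "services": [
            {"name": "rabbitmq", "image": "rabbitmq:3"},
            {"name": "mongodb", "image": "mongo:3.2"},
            {
                "name": "rapture",
                "image": "incapture/raptureapi:latest",  # illustrative
                "linked_to_service": ["rabbitmq", "mongodb"],
                "container_ports": [{"inner_port": 8665, "outer_port": 8665}],
            },
        ],
    }


def create_stack(trial_name, user, apikey):
    """POST the stack definition; Tutum then deploys the services to the
    requested host once it comes online."""
    req = urllib.request.Request(
        TUTUM_API + "/stack/",
        data=json.dumps(build_stack_payload(trial_name)).encode(),
        headers={
            "Authorization": "ApiKey %s:%s" % (user, apikey),
            "Content-Type": "application/json",
        },
    )
    return json.load(urllib.request.urlopen(req))
```

In practice our dashboard application drives this step automatically when a verified user clicks "Request an environment".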
Once we have our core components installed (RabbitMQ, MongoDB & the RaptureAPI server, seen in the diagram), we can begin applying “plugins” to Rapture. Plugins range from pre-populating cleansed data or setting up the default trial user and their permissions, to an entire UI (the Rapture Information Manager).
During this setup process, emails are sent to both Incapture Technologies and the requestor to notify them when certain stages of setup are complete. We also have a proxy server with a custom program that listens for hosts in our Tutum VPC and updates the proxy’s configuration with their locations. This way, people can access their sites at a nice-looking URL (e.g., <trialInstanceName>.incapture.net) moments after the application comes online.
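A simplified sketch of the proxy-updater idea, assuming an nginx-style reverse proxy (the template, port, and function names are assumptions for illustration, not our production code):

```python
def vhost_block(trial_name, backend_ip, backend_port=8665):
    """Render an nginx server block routing <trial_name>.incapture.net
    to a trial host discovered in the Tutum VPC. Illustrative template."""
    return (
        "server {\n"
        "    listen 80;\n"
        "    server_name %s.incapture.net;\n"
        "    location / {\n"
        "        proxy_pass http://%s:%d;\n"
        "    }\n"
        "}\n" % (trial_name, backend_ip, backend_port)
    )


def rebuild_config(hosts):
    """hosts: mapping of trial name -> container IP, e.g. as reported by
    the Tutum API. Returns one config body covering every live trial."""
    return "\n".join(vhost_block(name, ip) for name, ip in sorted(hosts.items()))
```

The custom listener would regenerate this file and reload the proxy whenever a host appears or disappears, which is what lets a trial URL resolve moments after the stack comes online.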
At that proxied URL, the trial user can access the Rapture Information Manager from a web browser. The Rapture Information Manager allows users to view, create and edit data on their Rapture instance. Users can also execute Reflex scripts here and use a REPL Reflex window.
Users can also access their Rapture instance via the Rapture API at <url>:8665/rapture. Users have the choice of developing in Reflex, Java, or Python (or all three!).
Earlier we mentioned using our user dashboard application to generate useful metrics and data that could also assist sales and improve user experience. There are two main areas to this:
- Salesforce integration
- We use Salesforce’s REST API to let our sales team see new leads the moment they register.
- This helps sales keep track of current and interested prospects and our interactions with them.
- Analysis of our own generated data
- We have the ability to log & audit any activity that occurs on a trial Rapture instance. This means we keep track of who is logging in, which APIs are being used, what exceptions are being thrown, and so on.
- We plan to automate sending emails to users if they have not logged in for a certain period (come back!) or if they seem to be throwing an abnormal number of exceptions (need help with that?).
- We can also provide users with relevant documentation based on which APIs they seem to use most.
- We can get insight into which of the three languages you can write Rapture applications in (Reflex, Java, or Python) is the most popular.
Rapture is all about safe storage, management, and visibility of data. This equates to a great platform for business intelligence and data warehousing, allowing users to generate insightful data & metrics in addition to what they already store.