
Incapture Technologies Blog



Utilizing Java Watch Service

At Incapture we often implement data ingestion workflows for clients, typically as part of a larger re-engineering effort. Frequently this involves waiting for file-based data to arrive from another system or vendor and then loading it. This is where Java’s Watch Service comes into play. I recently read about the Watch Service, which is included in the java.nio.file package, and thought it could help us with client engagements.

The Watch Service lets you monitor directories and choose which types of events you want notifications for. The events are create, modify and delete.
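To see the API in action, here is a minimal sketch (not part of WatchServer itself; the class and method names are mine) that registers a directory for all three event kinds and collects whatever arrives:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class DirWatcher {

    // Register a directory for create/modify/delete events and return the watcher.
    public static WatchService watch(Path dir) {
        try {
            WatchService watcher = dir.getFileSystem().newWatchService();
            dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_MODIFY,
                    StandardWatchEventKinds.ENTRY_DELETE);
            return watcher;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Wait up to timeoutSeconds for the next batch of events; return their kind names.
    public static List<String> nextEventKinds(WatchService watcher, long timeoutSeconds) {
        List<String> kinds = new ArrayList<>();
        try {
            WatchKey key = watcher.poll(timeoutSeconds, TimeUnit.SECONDS);
            if (key != null) {
                for (WatchEvent<?> event : key.pollEvents()) {
                    kinds.add(event.kind().name());
                }
                key.reset(); // re-arm the key so further events are delivered
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return kinds;
    }

    // Self-contained demo: watch a temp directory, drop a file in, and report
    // whether an ENTRY_CREATE event was observed within 10 seconds.
    public static boolean demo() {
        try {
            Path dir = Files.createTempDirectory("watchdemo");
            WatchService w = watch(dir);
            Files.createFile(dir.resolve("SamplePriceData.xlsx"));
            return nextEventKinds(w, 10).contains("ENTRY_CREATE");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

A production server would loop on the watcher rather than poll once, but the registration and event-draining pattern is the same.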

We have released ‘WatchServer’ as part of our open source platform. The server provides file system monitoring that maps file system events to Rapture actions in a repeatable and configurable fashion.

Typically the action is a Workflow. As a reminder, ‘Workflows’ in Rapture:

  • Are constructs that define a set of tasks (or steps) that need to be performed in some order
  • Contain steps that can be implemented in various languages (Reflex, Java, Python etc)
  • Contain state that can be updated by each step
  • Manage step switching and execution via an internal pipeline
  • Can be initiated using Workflow API or attached to an event
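As a rough illustration of the idea (this is my own sketch, not Rapture's actual Workflow implementation), a workflow can be thought of as an ordered list of steps sharing mutable state:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class MiniWorkflow {
    private final List<Consumer<Map<String, Object>>> steps = new ArrayList<>();

    // Append a step; steps run strictly in the order they were added.
    public MiniWorkflow step(Consumer<Map<String, Object>> s) {
        steps.add(s);
        return this;
    }

    // Run every step against a shared, mutable state map and return the final state.
    public Map<String, Object> run() {
        Map<String, Object> state = new HashMap<>();
        for (Consumer<Map<String, Object>> s : steps) {
            s.accept(state);
        }
        return state;
    }
}
```

Rapture's real workflows add step switching via an internal pipeline, multi-language steps and persistence, but the ordered-steps-plus-shared-state shape is the core idea.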

There are many use cases we could support with this architecture plus Rapture platform capabilities, some of which are:

  • Loading csv file(s) to create time series accessible via Rapture’s Series API
  • Loading pdf file(s), indexing them and making them searchable via Rapture’s Search API
  • Loading xml file(s) and transforming to (json) documents accessible via Rapture’s Document API

To illustrate, I’ve developed a workflow that loads a SamplePriceData.xlsx file, extracts data from each row and creates a (json) document for that row in a Rapture document repository.

The WatchServer detects ENTRY_CREATE events and runs the workflow, which:

  1. Loads the file from /opt/test and stores it in a Rapture blob repository at blob://archive/yyyyMMdd_HHmmss/SamplePriceData.xlsx
  2. Creates a Rapture document repository containing one document for each row in the spreadsheet, at document://data/yyyyMMdd_HHmmss/ROW000001..N. This uses Apache POI, a Java API for Microsoft documents, to extract data from the spreadsheet.
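The timestamped URIs the workflow writes to are easy to construct; here is a small sketch (the class and helper names are mine, not part of the actual workflow code):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class ArchiveUri {
    // Timestamp pattern used in the archive path, e.g. 20160714_093000.
    private static final DateTimeFormatter STAMP =
            DateTimeFormatter.ofPattern("yyyyMMdd_HHmmss");

    // Build the timestamped blob URI the incoming file is archived under.
    public static String blobUri(String fileName, LocalDateTime when) {
        return "blob://archive/" + when.format(STAMP) + "/" + fileName;
    }

    // Build the document URI for the Nth spreadsheet row (1-based), zero-padded to six digits.
    public static String rowUri(LocalDateTime when, int rowNumber) {
        return String.format("document://data/%s/ROW%06d", when.format(STAMP), rowNumber);
    }
}
```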

It is straightforward to set up and run locally using images from Incapture’s public Docker Hub account. Make sure to install Docker on your local system first! I use Docker for Mac.

Once the workflow has run, you can view the results in the default Rapture system UI at http://localhost:8000.

The archived xlsx file saved as a blob:

archive repository

and the subsequent documents created in the document://data repository:


Using WatchServer in conjunction with Workflows gives you a flexible yet well-defined approach to implementing your domain-specific data loading processes, plus the benefits of the built-in operational support Rapture provides.

If you’d like more information about Incapture or Rapture, please email me or our general address and we will get back to you for a more in-depth discussion.

Rapture and REST

At Incapture we implemented a REST server to demonstrate exposing Rapture (Kernel) calls through a REST-style interface, specifically to perform CRUD operations on the various Rapture data types: document, blob and series. This approach can be used when modeling and implementing your own Rapture client’s domain resources and interactions.

We wanted a simple and straightforward REST framework, so we chose Spark. It allows you to get started quickly and provides everything needed to build an API.

Let’s focus on Rapture ‘Documents’. One of the prime uses of Rapture is to manage access to data, and it does so through the concept of a repository. Various repository configurations and implementations are provided out of the box. For the purposes of this post we will consider a versioned document repository hosted on MongoDB.

Document repositories manage data as key/value pairs addressable through URIs. In fact, all data in a Rapture system is uniquely addressable via a URI; this is a key concept in using the platform.
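To make the addressing scheme concrete, here is a simplified stand-in (not Rapture's actual RaptureURI class) that splits a URI of the form scheme://authority/docPath into its parts:

```java
public class SimpleRaptureUri {
    public final String scheme;
    public final String authority;
    public final String docPath;

    private SimpleRaptureUri(String scheme, String authority, String docPath) {
        this.scheme = scheme;
        this.authority = authority;
        this.docPath = docPath;
    }

    // Naive split of scheme://authority/docPath; the real class handles more cases.
    public static SimpleRaptureUri parse(String uri) {
        int sep = uri.indexOf("://");
        if (sep < 0) {
            throw new IllegalArgumentException("not a Rapture URI: " + uri);
        }
        String scheme = uri.substring(0, sep);
        String rest = uri.substring(sep + 3);
        int slash = rest.indexOf('/');
        return slash < 0
                ? new SimpleRaptureUri(scheme, rest, "")
                : new SimpleRaptureUri(scheme, rest.substring(0, slash), rest.substring(slash + 1));
    }
}
```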

For example, consider the following document with URI document://orders/ORD000023312 and data:

    {
        "id" : "ORD000023312",
        "orderDate" : "20150616",
        "ordType" : "market",
        "side" : "buy",
        "quantity" : 4000000.0,
        "strategy" : "XYZ",
        "fund" : "FUNDNAME",
        "status" : "FILLED"
    }

Let’s look at the process of creating a document repository and loading a document.

The first step is to spin up a local Rapture system, which can be done easily using Docker; all the Docker images are available on Incapture’s public Docker Hub registry.

Let’s walk through the process of:

  1. Creating a Document repository using a POST action
  2. Adding a document using a POST action
  3. Using a GET action to retrieve the data
  4. Deleting the document

A Postman collection is available with working API calls. Please note it uses https://localhost, as we’re using Docker’s native Mac tooling. The collection includes a /login call and provides all the necessary body (raw JSON) inputs.

The first task is to create a versioned document repository configured to use MongoDB. The REST call is as follows:

    POST /doc/:authority
    Example: /doc/orders
    Body: {"config":"NREP USING MONGODB {prefix=\"orders\"}"}

The server will route this call and create this repository: document://orders

Here is the Spark method implementing the route; note the Rapture Kernel calls:

post("/doc/:authority", (req, res) -> {
    Map<String, Object> data = JacksonUtil.getMapFromJson(req.body());
    String authority = req.params(":authority");
    String config = (String) data.get("config");
    CallingContext ctx = getContext(req);
    if (Kernel.getDoc().docRepoExists(ctx, authority)) {
        halt(409, String.format("Repo [%s] already exists", authority));
    }
    Kernel.getDoc().createDocRepo(ctx, authority, config);
    return new RaptureURI(authority, Scheme.DOCUMENT).toString();
});

Next we will create a new ‘order’ document at URI document://orders/ORD000023312. The body for the call is provided in the Postman collection.

   PUT /doc/:uri
   Example: /doc/orders/ORD000023312
   Body: {..order json here..}

Note the Rapture Kernel call to write a document: putDoc(String uri, String body).

    put("/doc/*", (req, res) -> {
        return Kernel.getDoc().putDoc(getContext(req), getDocUriParam(req), req.body());
    });

We won’t go through the subsequent GET and DELETE calls, as the Postman collection and GitHub code are available to review.


  1. RESTServer GitHub repository
  2. Setting up local Docker environment
  3. Postman collection

If you’d like more information about Incapture or Rapture, please email me or our general address and we will get back to you for a more in-depth discussion.

Exec Breakfast Series: Using Data to Gain an Edge in Asset Management

San Francisco and New York City – July 14, 2016

Incapture Technologies (“Incapture”) sponsored a gathering of leading practitioners to explore how asset managers can harness new data sources, analytical tools, and technology platforms to drive performance in coming years. Organized and hosted by United Sales and Marketing Group (“USAM”), the event was held on July 14th at the Core Club in New York City.

The event featured a panel discussion moderated by Peter Knez, co-founder of Incapture. Peter enumerated the factors which have created conditions for profound disruption in asset management and highlighted why embracing “datafication” is essential for firms to gain and maintain an edge going forward.

Drew Kellerman, Managing Director of Business Development for Vertical Knowledge (“VK”), shared examples of how open source data can be curated to generate actionable insights that inform investment and trading decisions.

Braxton McKee, Founder and CEO of Ufora, highlighted the importance of instilling a disciplined engineering culture oriented towards building systems that are consistent, repeatable, and robust.

Larry Leibowitz, CEO of Incapture Technologies, illustrated how firms must adopt an open and flexible, platform driven approach to their technology stack in order to capitalize on these opportunities. This is especially true for firms who have built up a proliferation of vendor products over the years and lack the agility to quickly implement new technologies.

A highlight reel featuring key takeaways will be available in the coming weeks.

About Incapture
Incapture Technologies supports and develops Rapture, an open and extensible information curation platform targeted at technology-savvy information workers. Rapture shortens the development cycle of complex projects, thereby significantly improving business agility. Purpose-built for the capital markets industry, it has a particular focus on research & analysis, risk & compliance, and other data-driven business lines.

Visit our website and the Rapture project on GitHub to learn more.

About Vertical Knowledge
VK is a global supplier of open source data and analytics for the defense, financial services, and commercial markets. It enables clients to generate actionable insight from the compliant use of open source data.

About Ufora
Ufora provides Data Science Engineering consulting to select firms to optimize their data science stack for speed, scale and accuracy. Building on years of experience in building highly complex distributed computing systems and parallel processing engines, Ufora’s engineers can quickly identify opportunities for efficiency in your existing data science stack and implement the changes without disruption to your ongoing data science work.

Visit Ufora’s website and the Pyfora project on GitHub to learn more.

About USAM
USAM Group provides outsourced sales and marketing services to financial technology vendors. Leveraging the deep industry experience of established sales professionals, USAM helps companies grow revenue faster and more cost effectively than they could by hiring and managing their own sales team.

Visit USAM’s website to learn more.

Entitlements in Practice

Building from an earlier blog post which provides conceptual grounding on entitlements, this post provides some practical examples of how to implement entitlements in Rapture.


Entitlements in Rapture allow administrators to clearly define who can access what in Rapture.  It is a permissioning system based on users, groups, and entitlements.  API calls made to Rapture are protected by entitlements that are defined at compile-time.  A defined entitlement is associated with a number of groups of users, and this association can be made at run-time.


  • User – A user represents a person or an application making calls to Rapture: a single entity with a username/password who needs access to Rapture.
  • Group – A group represents a collection of users.
  • Entitlement – An entitlement is a named permission with zero or more associated groups. If an entitlement has no groups associated with it, it is essentially open, and any defined user in Rapture can access it. If an entitlement has at least one group associated with it, any user wishing to access the resource it protects must be a member of one of those groups.


The use of entitlements is best explained by using a simple example.

User “bob” is a defined user in Rapture.  He writes a Reflex script to update the description associated with his username in Rapture.  Thus, he wants to use the updateMyDescription API call.

#user.updateMyDescription("My name is Bob");

He is successful.  What happened underneath the hood?

Let’s first examine how updateMyDescription is defined in the user.api file in the RaptureNew/ApiGen project.  Every API call in Rapture has a defined entitlement associated with it.

[Update the current description for a user.]
@public RaptureUser updateMyDescription(String description);

The entitlement string for the updateMyDescription call is defined as “/user/write”.  Entitlements are always defined as hierarchical slashed strings with an optional wildcard.  This means that if a user has permissions for /user, he is also permissioned for the entitlements /user/read and /user/write.  If a user has permission for /user/xxx, that does not mean he has permissions for /user/yyy, but he does have permissions for /user/xxx/yyy.
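The hierarchy rule can be captured in a few lines. This is a sketch of the rule as described, not Rapture's actual entitlement checker:

```java
public class EntitlementCheck {
    // A granted permission covers a requested entitlement when it matches exactly
    // or is an ancestor of it in the slash-separated hierarchy.
    public static boolean covers(String granted, String requested) {
        return requested.equals(granted) || requested.startsWith(granted + "/");
    }
}
```

So a grant on /user covers /user/write, while a grant on /user/xxx covers /user/xxx/yyy but not /user/yyy.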

On startup, a brand new Rapture instance always creates every single entitlement possible (by scanning every single *.api file) and initializes it as empty.  This means any defined user has permissions to all entitlements on startup of a clean brand-new Rapture instance.  Using the Entitlements API, users can then be added to groups, and groups can then be associated with particular entitlements to control access.  These definitions and associations are persisted to the configuration repository.

Back to our example.  Assuming that “bob” executed that api call against a brand new instance of Rapture, the entitlement of “/user/write” would have had 0 groups associated with it.  The entitlement check would have passed, since remember, an entitlement with 0 associated groups is wide-open to all defined users in Rapture.  How do we make it not pass?  We have to use the Entitlements API to associate a group of users with the entitlement “/user/write”.

#entitlement.addUserToEntitlementGroup("groupThatHasAccessToUserWrite", "alice");
#entitlement.addGroupToEntitlement("/user/write", "groupThatHasAccessToUserWrite");

Notice above that only the user “alice” has been assigned to the group “groupThatHasAccessToUserWrite”.  That group was associated with the “/user/write” entitlement.  User “bob” is not a member of that group.  Therefore, if Bob were to execute his call again after the above changes were made, it would fail.  In order for Bob to be able to make that call, he would have to be added to that group using the Entitlements API.

Dynamic Entitlements

Dynamic entitlements are entitlements with a wildcard in the string, such as /user/put/$d.  The wildcard is substituted at runtime based on the argument(s) of the API call that is made.  This allows Rapture to define entitlements that are based on the actual arguments of the API call being made.  Here is a table showing the currently defined substitutions:

Wildcard   Substituted With
$d         documentPath
$a         authority
$f         full path (i.e. authority/documentPath)
$u         current_user
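A sketch of how such a substitution might work (again, my own illustration rather than Rapture's actual implementation):

```java
public class DynamicEntitlement {
    // Expand the wildcards in an entitlement template from the call's URI
    // argument and the calling user.
    public static String substitute(String template, String authority,
                                    String docPath, String user) {
        return template
                .replace("$f", authority + "/" + docPath) // full path
                .replace("$d", docPath)
                .replace("$a", authority)
                .replace("$u", user);
    }
}
```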

Another Example

User “bob” wants to read a document out of Rapture.  Thus, he writes a Reflex script to use the getContent API call, which has the following definition:

[Retrieve the content for a document.]
@public String getContent(String docURI);

Bob’s Reflex script:

#doc.getContent("document://myAuthority/alicesDocs/doc");
The $f in this entitlement gets substituted such that Bob’s entitlement request looks like the following:

/data/read/myAuthority/alicesDocs/doc
The entitlement system will check if Bob is member of the group associated with that entitlement.  For the sake of this example, let’s assume that Alice had previously created an entitlement “/data/read/myAuthority/alicesDocs/doc” with a group with just her username included.  She basically wanted an area that she can keep private.  This means Bob’s call will fail.  He is not a member of the group associated with the entitlement “/data/read/myAuthority/alicesDocs/doc”.

The wildcard substitutions defined above are based on the Rapture URI argument that is passed into the call.  The values of documentPath, partition, and authority are all components of a RaptureURI object.  At this point, it only makes sense to use dynamic entitlements with API calls that have a RaptureURI string as an argument.

Integrating Third-Party Services with Rapture: Stripe (Payments)

Clients building applications using Rapture may want to collect payment from users based on some usage metric (recurring subscription, service-based fees, consumption-based fees, etc.). In this blog post, we will describe how we integrated Stripe with Rapture to set up a subscription service for our hosted trial environments through the Incapture developer portal.

Stripe offers a suite of APIs that support online commerce. Two aspects of their offering stood out to us –
i. Emphasis on security and PCI compliance — all sensitive credit card data is directly sent to Stripe’s vault, without it touching Incapture’s servers
ii. Developer-friendly APIs — good documentation wins, hands down.

I’ll give examples of how easy it was to build the integration using Rapture’s Reflex language — a procedural language that runs on the Java Virtual Machine — with Stripe’s API. This article is as much about Stripe subscriptions as it is about Rapture, Reflex and a front-end framework (in this case, Angular) providing the requisite stack to build a simple web app.


I. Creating a Subscription Plan in Stripe
The use case our application addresses is migrating clients from a free to paid subscription after some initial trial period. Our first step was to define parameters of a subscription plan through the Stripe dashboard; we opted for a test 30-day recurring subscription at $50 per month with no limits on usage. Now, every time a customer requests a new environment, we can associate the subscription plan ID with it.

Stripe has the option of specifying a trial period while creating a subscription plan — a handy feature that releases you from the responsibility of keeping tabs on the trial end date. However, this also necessitates collection of payment details at the time an environment is set up. We decided not to use the feature, to ensure a zero-pressure customer onboarding experience.

So, when a customer accesses their dev portal dashboard, in addition to being notified when the trial period ends, there’s now an option to “Upgrade” their environment.


Subscription status on an environment card

Fig. 1. ‘Subscription Status’ on an environment card on the dev portal dashboard


II. Creating a Form to Collect Payment Details
We took advantage of Stripe’s Checkout form. It is customizable at a high level (company logo, title etc.) as well as at a functional level. We can use the same form for two different purposes — creating a subscription and updating payment details — by passing in appropriate arguments to the handler.


Stripe Checkout handler with different arguments

Fig.2. Stripe Checkout handler with different arguments


III. Creating a Subscription
If a customer signs up for a subscription for the very first time (i.e. they have never provided their payment details before), submitting the form creates two new objects:
i. Stripe customer
ii. Stripe subscription — that ties the customer object to the subscription plan we created via the Stripe dashboard.

A returning user who previously created a subscription for another environment will already be associated with a customer object (and, consequently, a payment source) and we do not need to collect payment details again. All we do is create a new subscription object and link it to the customer object.
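The first-time-versus-returning logic boils down to a lookup-or-create on the customer object. Here is a Java sketch of that decision, with a stand-in for the real Stripe call (the class, method names and "cus_" id format are illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

public class SubscriptionFlow {
    // Maps our users to their Stripe customer ids (a database in practice).
    private final Map<String, String> customerIdByUser = new HashMap<>();

    // Return the user's customer id, creating a Stripe customer only on
    // their very first subscription; returning users reuse the existing one.
    public String ensureCustomer(String user, String cardToken) {
        return customerIdByUser.computeIfAbsent(user, u -> createCustomer(u, cardToken));
    }

    // Stand-in for the real "create customer" API call to Stripe.
    protected String createCustomer(String user, String cardToken) {
        return "cus_" + user;
    }
}
```

Once the customer exists, each new environment only needs a new subscription object linked to that customer.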

Once created, Stripe will automatically renew the subscription every 30 days.


IV. Managing Subscriptions
Developers have a lot of flexibility in designing payment workflows in Rapture applications. For instance, basic tasks like updating payment information and canceling or refreshing subscriptions can be fully automated. Alternatively, certain actions can trigger alerts that allow for a support team member to connect with a client. Rapture also provides a number of extension points that may be linked with payments. The entitlements framework can be used to manage access to certain services and datasets based on subscription tier. Because all system activity is automatically logged, producing usage reports and using this data to inform customer segmentation and pricing analysis becomes a quick task.


V. The Mechanics
Incapture’s dev portal uses an Angular front-end and an API server built on the Rapture platform. For most apps, Reflex is the scripting language we employ to tap into Rapture’s powerful service framework mechanism: a service endpoint written in Reflex is the medium that the front-end and server use to talk to each other. Stripe has a RESTful API and Reflex leverages the entire platform API — including the ability to handle HTTP request and response objects. The result? A fully-functional Stripe app built really quickly!

Let’s take a look at the example of creating a Stripe customer object.


Flow diagram for creating a Stripe customer object

Fig.3. Flow diagram for creating a Stripe customer object


From the Subscription page, following the ‘Subscribe’ button click, we present the Stripe Checkout modal to collect a customer’s card details. After submitting the Checkout form, if everything checks out, Stripe returns a token ID that should be used to create a new customer object.

This line in our Angular controller invokes the createSubscription Reflex script (remember, creating a customer is actually a step encountered while creating a subscription for the very first time):
paymentService.createSubscription(createCustomer, token, email, planId, envName)
(The first argument is a flag that is set to true or false depending on the use case.)

Moving on to the Reflex part.
An important aspect of Reflex is that we can call one script from another. So, in our main script that contains the core logic (that follows the flow of creating a subscription), we call another script that makes the Stripe API call to create a customer object.
(This separation of core logic from vendor-specific API calls will also make it really easy to update scripts if we decide to switch to another payment platform in the future.)

stripeCreateCustomer = fromjson(#script.runScript("script://curtis/stripe_createCustomer",
                                 {'token': token, 'email': email, 'planId': planId}));

(email and planId are optional arguments.)

The stripe_createCustomer script itself is this:

response = {};
import HttpData as http;
headers = {};
headers["Authorization"] = "Bearer " + ENV['STRIPE_SK'];
url = "";
params = {};
params["source"] = token;
params["description"] = "First plan: " + planId;
params["email"] = email;
urlwithparams = $http.uriBuilder(url, params);
stripeResponse = $http.get(urlwithparams, "POST", null, "JSON", headers);
if (stripeResponse.error == null) do
    response = stripeResponse;
else do
    response.error = stripeResponse.error;
end
return json(response);
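For readers unfamiliar with uriBuilder, here is a rough Java analogue of what that call does — appending URL-encoded query parameters to a base URL. The class name and the host in the test below are my own illustration, not Stripe's or Rapture's:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.Map;

public class UrlParams {
    // Append each parameter as an encoded key=value pair, ? first then &.
    public static String withParams(String url, Map<String, String> params) {
        StringBuilder sb = new StringBuilder(url);
        char sep = '?';
        for (Map.Entry<String, String> e : params.entrySet()) {
            sb.append(sep).append(encode(e.getKey())).append('=').append(encode(e.getValue()));
            sep = '&';
        }
        return sb.toString();
    }

    private static String encode(String s) {
        try {
            return URLEncoder.encode(s, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
    }
}
```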

(Reflex has a number of built-in functions and special operators, e.g. ENV[…], that are semantic shortcuts when interacting with Rapture. More power to you!)

Importing the HttpData module into the script gives us the ability to make REST calls with support for HTTP POST, GET, DELETE etc.

On successful creation, Stripe returns a customer object in its response. The onus of processing what is required falls on the caller Reflex script. In this case, we are only interested in the customerId value; so, we retrieve it from the response and store it in our database. (customerId is what we use to get a customer’s Stripe-related info — including determining whether a customer has already subscribed to a plan and, therefore, not asking for their payment details again.)
Further processing can specify what should be sent to the front-end JavaScript code (e.g. feedback).

Creating web apps that integrate with third-party services is a breeze when built with the Rapture-Reflex-Angular stack! In a later post, we’ll explore how to make Stripe’s webhooks talk to our dev portal and Slack.

Rapture available under open source license

San Francisco – Tuesday May 17, 2016

Incapture Technologies (“Incapture”) is excited to announce that the Rapture framework is now available as an open source offering.

Rapture delivers a development environment and run-time for distributed enterprise applications. Developers interact with a consistent API accessible in various languages which abstracts a number of fundamental tasks including data management, cloud deployment, messaging, entitlements, and audit.

Rapture is informed by decades of experience building and managing data driven applications for global asset management firms. The framework is equally suited for firms modernizing their existing environment or startups launching new offerings.

In order to capitalize on the promise of new technology paradigms while responding to client and regulatory demands for increased transparency and operational resiliency, capital markets participants must adopt a new approach to technology architecture. Closed, monolithic applications tied together with bespoke integrations and manual workarounds will give way to open architectures with API based integrations to internal and third party data and services. Rapture provides the foundation to realize this vision.

Releasing Rapture as open source is in line with an industry-wide movement towards open architectures. It also positions Rapture as a collaboration point amidst a burgeoning ecosystem of developers and vendors reshaping capital markets with new data sources, analytical tools, and applications.

Rapture is available through the open source MIT license. Incapture offers support licenses which provide on-demand access to experienced support resources for teams building mission critical applications.

Visit the Rapture project GitHub page to access documentation, and visit our website to request access to a cloud-hosted Rapture trial environment.

About Incapture
Incapture Technologies supports and develops Rapture, an open and extensible information curation platform targeted at technology-savvy information workers. Rapture shortens the development cycle of complex projects, thereby significantly improving business agility. Purpose-built for the capital markets industry, it has a particular focus on research & analysis, risk & compliance, and other data-driven business lines.

Visit our website to learn more.

Incapture partners with USAM to deliver Capital Markets solutions

San Francisco and New York City – Friday April 22, 2016

Incapture Technologies (“Incapture”) has partnered with United Sales and Marketing Group (“USAM”) to serve Capital Markets clients in North America.

Incapture has created an open source data integration and curation framework called Rapture. Rapture dramatically increases business agility by reducing the time to develop innovative new data-intensive applications, while delivering much-needed operational transparency and management oversight in managing complex data and applications. It is equally suited as a foundation when launching new products as it is when optimizing and simplifying existing business processes. Fundamental tasks such as data management, cloud deployment, messaging, entitlements, and auditing are abstracted by the framework freeing developers to focus on higher value work.

USAM will engage with prospects to understand their challenges and objectives in order to identify initiatives that will benefit from Rapture adoption. Incapture, whose team has decades of experience building and managing technology for leading asset managers and financial services firms, will collaborate with clients to design and deliver solutions.

Larry Leibowitz, Chief Executive Officer of Incapture, said: “The USAM team has a proven track record of helping capital markets firms identify and successfully deploy new technologies that deliver real business value. Their market knowledge and relationships are highly complementary to Incapture’s technology development and solution delivery capabilities.”

Feargal O’Sullivan, Chief Executive Officer of USAM, said: “Over the years, USAM has reviewed and analyzed hundreds of FinTech offerings, constantly searching for those that provide end-users a truly useful solution to problems that may otherwise seem intractable. The Rapture framework delivered by Incapture is the best approach we’ve seen to empower developers while ensuring effective curation and operational controls are standard in complex projects.”

About Incapture
Incapture Technologies supports and develops Rapture, an open-architecture, extensible information curation platform targeted at technology-savvy information workers. Rapture shortens the development cycle of complex projects, thereby significantly improving business agility. It has a particular focus on research & analysis, risk & compliance, and other data-driven businesses.

Incapture is backed by a number of senior Financial Services executives including Bob Diamond (ex-CEO of Barclays), Duncan Niederauer (ex-CEO of NYSE Euronext), and Tom Glocer (ex-CEO of Thomson Reuters), who are all intimately familiar with the challenges Rapture addresses.
Visit our website to learn more.

About USAM
United Sales and Marketing Group is a New York City based sales and marketing agency that specializes in the financial technology (FinTech) sector. Leveraging the deep industry experience of established sales professionals, USAM helps companies grow revenue faster and more cost effectively than they could by hiring and managing their own sales team.
Visit USAM’s website to learn more.

Rapture On-Demand

Over the past couple of months at Incapture Technologies we have been working on a way to let potential clients experiment with a fully fledged Rapture environment as seamlessly as possible. Our solution was to use our own product, Rapture, to create a user dashboard and management application, used along with Docker and Tutum (soon to be Docker Cloud) to manage and launch trial environments as needed.

Once an interested user is up and running with their own environment, we provide them with our “Getting Started Guide”. That web page contains the Rapture platform documentation as well as introductory tutorials in three languages (Reflex, Java or Python). These are in place to help a user get acquainted with the Rapture API, the Rapture Information Manager browser application, and the real-world impact of how Rapture could be a solution for them. The first tutorial application demonstrates how easy it is to build an application in your chosen programming language on top of Rapture. It uses cleansed hedge fund sample data (price series, positions, orders, trades etc.) that comes conveniently pre-installed on a trial environment, and highlights how to load and transform data in various forms (csv blob, series data, or json document) using the Rapture platform.


Infrastructure & Deployment:

Our applications are all deployed as Docker containers in this scenario. If you aren’t already aware, Docker containers are essentially lightweight, self-contained (and highly portable) VMs. Images can be tagged and pushed to Docker Hub to be pulled from elsewhere. We strive to stay lean where we can, so some containers even use the Alpine Linux OS (less than 5 MB!). In short, containers provide a highly repeatable way to deploy applications.

This means you can deploy your own application as a Docker container on top of Rapture, making it scalable, upgradeable, reliable and portable across environments, just as Rapture is!

One level up, we have Tutum (soon to be Docker Cloud). Via Tutum’s API, we can request host machines from Amazon Web Services (EC2) and specify which containers (and release versions) to deploy where. Containers (called services in Tutum) can be organized into pre-configured “stacks” (collections of containers/services). This is useful for applications like Rapture, since we can define a stack containing RabbitMQ, MongoDB, and Rapture, then deploy it without having to worry about deploying individual containers.

User Dashboard Application:

From the beginning of this project, we knew we would need some form of “portal” or “dashboard” for users, and a way for us to keep track of them. We built our own solution using RaptureCore. The purpose of this application is two-fold:

  1. Allow a potential client, or interested persons to register with us, request an environment once verified, and view the environments that they own.
  2. Allow ourselves a way to manage users & environments, secure operational admin credentials for each environment, and generate useful data and metrics to help sales.


For potential customers, this is where you can:

          • Request an environment
          • Browse to your environment
          • View your available environments & their status
          • Add users to an environment


Several things get set into motion when a user requests a trial environment. A machine comes up in Tutum/EC2 and a “stack” yaml file gets generated and saved to Tutum. The services/containers we define in the stack automatically deploy to the requested host when the host comes online.
Once we have our core components installed (RabbitMQ, MongoDB & the RaptureAPI server, seen in the diagram), we can begin applying “plugins” to Rapture. Plugins range from pre-populating cleansed data or setting up the default trial user and their permissions, to an entire UI (the Rapture Information Manager).

During this setup process, emails are sent to both Incapture Technologies and the requestor to notify them when certain stages of setup are complete. We also have a proxy server with a custom program that listens for hosts in our Tutum VPC and updates the proxy’s configuration with their locations. This way, people can access their sites by a nice-looking URL (e.g., <trialInstanceName>) moments after the application comes online.
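The idea behind that custom program can be sketched in a few lines. This is illustrative only: our real listener talks to the Tutum API to discover hosts, which is faked here with a plain dict, and the domain and ports are placeholders:

```python
# Sketch of generating proxy config from discovered hosts.
# Real implementation listens to the Tutum API; here we fake discovery.

def render_proxy_config(instances, domain="example.com"):
    """Render a minimal nginx-style config mapping each trial
    instance name to the host it runs on."""
    blocks = []
    for name, host in sorted(instances.items()):
        blocks.append(
            "server {\n"
            f"    server_name {name}.{domain};\n"
            f"    location / {{ proxy_pass http://{host}:8665; }}\n"
            "}"
        )
    return "\n".join(blocks)

if __name__ == "__main__":
    # A newly discovered trial host gets a friendly hostname immediately.
    print(render_proxy_config({"acme-trial": "10.0.1.17"}))
```

Each time a host appears or disappears, the config is regenerated and the proxy reloaded, so the friendly URL stays current.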

At that proxied URL, the trial user can access the Rapture Information Manager from a web browser. The Rapture Information Manager allows users to view, create and edit data on their Rapture instance. Users can also execute Reflex scripts here and use a REPL Reflex window.

Users can also access their Rapture instance via the Rapture API at <url>:8665/rapture. Users can choose Reflex, Java, or Python to develop in (or all three!).


Earlier we mentioned using our user dashboard application to generate useful metrics and data that could also assist sales and improve user experience. There are two main areas to this:

        1. Salesforce integration
          • We integrated with Salesforce’s REST API to let our sales team see new leads the moment they register.
          • This helps sales keep track of current and interested prospects and our interactions with them.
        2. Analysis of our own generated data
          • We can log & audit any activity that occurs on a trial Rapture instance, so we know who is logging in, which APIs are being used, what exceptions are being thrown, and so on.
          • We plan to automate sending emails to users who have not logged in for a certain period (come back!) or who seem to be throwing an abnormal number of exceptions (need help with that?).
          • We can provide relevant documentation based on which APIs a user seems to use most.
          • We can also get insight into which of the three languages for writing Rapture applications (Reflex, Java, or Python) is the most popular.
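The analysis above boils down to simple aggregation over audit events. The event shape below is invented for the example; real Rapture audit records carry more detail:

```python
# Toy illustration of analysing trial audit logs; event fields are invented.
from collections import Counter
from datetime import datetime, timedelta

events = [
    {"user": "alice", "api": "doc",    "when": datetime(2016, 3, 1)},
    {"user": "alice", "api": "doc",    "when": datetime(2016, 3, 2)},
    {"user": "alice", "api": "series", "when": datetime(2016, 3, 2)},
    {"user": "bob",   "api": "blob",   "when": datetime(2016, 2, 1)},
]

def api_usage(events):
    """Count API calls per (user, api) pair."""
    return Counter((e["user"], e["api"]) for e in events)

def inactive_users(events, now, days=14):
    """Users whose most recent activity is older than `days` ago."""
    last_seen = {}
    for e in events:
        last_seen[e["user"]] = max(last_seen.get(e["user"], e["when"]), e["when"])
    cutoff = now - timedelta(days=days)
    return sorted(u for u, t in last_seen.items() if t < cutoff)

print(api_usage(events).most_common(1))                 # alice's favourite API
print(inactive_users(events, datetime(2016, 3, 15)))    # candidates for a nudge email
```

From counts like these it is a short step to “send relevant docs for the doc API” or “email bob, he has gone quiet”.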

Rapture is all about safe storage, management, and visibility of data. This equates to a great platform for business intelligence and data warehousing, allowing users to generate insightful data & metrics in addition to what they already store.

Data Lifecycle Manager for Regulatory Compliance

Compliance with regulatory mandates demands a fundamental re-think of how banks and asset managers approach data access, analysis, and governance. What is required? A flexible and open solution that promotes innovation while also providing operational controls. The Data Lifecycle Manager built on Rapture delivers on this ask.  

Fallout from the financial crisis continues as risk managers and IT professionals rush to implement solutions that address regulatory mandates. Banks and asset management firms must satisfy high level principles (the Basel Committee on Banking Supervision’s Risk Data Aggregation and Risk Reporting Principles) and specific asks (Volcker Rule and KYC/AML).

Data governance and operational transparency are consistent themes across these regulations, all driven by a motivation to bolster risk management capabilities. Manual workarounds and siloed systems that inhibit data sharing are rightly called out as areas of concern. A few changes on the margin to monolithic legacy systems will not be sufficient; general consensus is that a more fundamental re-think is needed:

Ideally, banks need a single platform that can be automated for data accumulation and aggregation. And since data governance has taken on additional importance with the Volcker Rule, the platform needs to be robust enough to provide an audit trail for risk managers, auditors and regulators.

Many of the largest banks with over $50 billion in assets are building their own in-house systems. However, it is questionable how well those systems are integrated with existing systems at the firms. Even more questionable is how smaller firms will cope with the cost and complexity of new technology demands.

IT departments face the confounding challenge of responding to these regulations while remaining responsive to business users seeking to experiment with the latest cloud-based research and trading offerings.

Incapture Technologies was founded by architects and developers with decades of experience delivering technology solutions for global asset managers. The challenges faced in those environments drove the development of Rapture, an open architecture framework.

By virtue of being built on Rapture, our products and solutions are flexible and adaptable while also offering a high degree of operational governance and controls. Some specific benefits that accrue to users include:

  • An automatically generated audit log of all system events
  • A robust entitlements framework that governs activity at the document level
  • Operational resiliency and scalability
  • Promotion of code through distinct research, testing, and production environments in service of a healthy SDLC
  • Transparency to underlying operations which makes troubleshooting easier
  • Open and modular architecture which promotes integration with legacy systems and adoption of new offerings
  • A robust SDK that supports continued enhancements
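To make the first two benefits concrete, here is a toy sketch of document-level entitlements with automatic auditing. This is purely illustrative and is not Rapture’s actual entitlement model or API:

```python
# Toy document-level entitlement check with automatic auditing.
# Illustrative only -- not Rapture's real entitlements model.
audit_log = []

entitlements = {
    "doc://risk/positions": {"read": {"risk-team"}, "write": {"risk-admins"}},
}

def check(user_groups, uri, action):
    """Allow `action` on `uri` if the user shares a group with the
    entitlement, recording every attempt in the audit log."""
    allowed = bool(user_groups & entitlements.get(uri, {}).get(action, set()))
    audit_log.append((uri, action, sorted(user_groups), allowed))
    return allowed

print(check({"risk-team"}, "doc://risk/positions", "read"))   # allowed
print(check({"risk-team"}, "doc://risk/positions", "write"))  # denied, but still audited
```

The point is that every access attempt, allowed or denied, lands in the audit trail that regulators and risk managers ask for.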

These underlying benefits are crucial; a recent survey of global banks found that, of the 11 risk data principles outlined by the Basel Committee, Data Architecture & IT Infrastructure and Adaptability (part of Data Aggregation) had the lowest compliance. Adding new features and tools to existing applications will not help firms respond to these asks; rather, a platform-driven approach is required.

Incapture Technologies delivers Products and Solutions that enable clients to leverage and benefit from foundational aspects of the Rapture platform. One product offering particularly suited to addressing the regulatory requirements detailed earlier is the Data Lifecycle Manager which covers the full lifecycle of enterprise data:

  • Capture: Automate the collection, validation, transformation, and storage of inbound data, whether through batch processing or real-time event streams
  • Curate: Empower searching, locating, and retrieval of data by creating a data catalog of associated data descriptions
  • Search: Federated access to data through consistent references which abstract away underlying operational concerns
  • Distribute: Expose data and services to various consumers including commercial and open source data visualization and analysis tools
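The capture → curate → search flow above can be sketched in miniature. The data structures and URIs here are invented for illustration and do not reflect the Data Lifecycle Manager’s actual implementation:

```python
# Minimal sketch of capture -> curate -> search; structures are invented.
catalog = {}   # uri -> metadata description
store = {}     # uri -> validated record

def capture(uri, record):
    """Validate and store an inbound record."""
    if "id" not in record:
        raise ValueError("record missing 'id'")
    store[uri] = record
    return uri

def curate(uri, description, tags):
    """Attach a searchable description to stored data."""
    catalog[uri] = {"description": description, "tags": set(tags)}

def search(tag):
    """Find data by catalog tag, returning consistent references."""
    return sorted(u for u, meta in catalog.items() if tag in meta["tags"])

u = capture("data://trades/2016-03-01", {"id": "T-100", "qty": 500})
curate(u, "EOD trade file", ["trades", "eod"])
print(search("eod"))
```

Consumers work against the references that `search` returns, never against the physical storage, which is what makes the distribution step flexible.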

Having implemented Rapture as the underlying fabric for data management, firms can easily build extensions to address specific needs; for example, build and execute fully audited workflows that automate extraction of structured data from a variety of sources to meet client on-boarding requirements. Or, integrate risk management models and reporting tools to deliver a flexible and transparent regulatory reporting engine. In all cases, clients are prepared for the future by virtue of a robust underlying architecture that delivers flexibility and operational resiliency.

The Data Lifecycle Manager built on Rapture offers IT departments a unique path forward from their present challenges. We welcome questions and inquiries as to how Incapture can support your business needs.

Building web applications on Rapture

In this next set of Rapture blogs I want to explore how we at Incapture build web applications on Rapture. You can use the same technique to build your own applications, as this framework is part of the general Rapture product. It is not the only way to do this – Rapture is a platform, after all, and there are many ways to use it to create such an application.

As a taster for what I’ll be talking about I can show you a demonstration “front page” of our sandbox application:

[Screenshot: demonstration front page of the sandbox application]

Here we have a tileset of “applications” that we have installed into this environment, and the ability to launch them. Over the next couple of blog posts I’ll explain behind the scenes how I used the framework to create these applications.

We should first consider what our requirements and constraints are for such an application framework. Ideally I wanted to have a very simple deployment approach – it should be straightforward to “add” an application to an existing environment – and to be able to do that in a more containerized deployment as well. I also didn’t want to have to compile and deploy “binaries” each time I changed a small aspect of an application. Finally it would be nice if I could examine and make minor modifications to an application from within Rapture itself.

In this introductory post I’ll simply talk about the underlying features of Rapture that will be used in this application framework. Subsequent posts will take each part and show how it all comes together.

The most fundamental part of a web application is its static content – the HTML pages, images, JavaScript code, and stylesheets. Within Rapture we have a good place to put such content – we call it a “blob repository”. Content in a blob repository has a MIME type (very useful when serving content) and can also be versioned (very handy if we make a mistake and need to roll back content). We will need a way to serve content to anonymous users who haven’t logged in yet, and to serve application content to those who have. Rapture’s security and entitlements model will help here.
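A toy sketch shows the two properties that matter here, MIME types and versioned rollback. This is an in-memory illustration of the idea, not Rapture’s actual blob API:

```python
# Toy versioned blob store illustrating MIME types and rollback.
# Not the real Rapture blob repository API.
class BlobRepo:
    def __init__(self):
        self._versions = {}  # uri -> list of (content, mime_type)

    def put(self, uri, content, mime_type):
        """Append a new version and return its version number."""
        self._versions.setdefault(uri, []).append((content, mime_type))
        return len(self._versions[uri])

    def get(self, uri, version=None):
        """Fetch a specific version, or the latest if none given."""
        history = self._versions[uri]
        idx = (version - 1) if version else -1
        return history[idx]

repo = BlobRepo()
repo.put("blob://site/index.html", "<html>v1</html>", "text/html")
repo.put("blob://site/index.html", "<html>v2</html>", "text/html")
print(repo.get("blob://site/index.html"))              # latest version
print(repo.get("blob://site/index.html", version=1))   # roll back to the first
```

The MIME type travels with the content, so whatever serves the blob can set the right `Content-Type` header without guessing.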

The other aspect of a web application is its dynamic content. An earlier blog post touched on this, but one way to achieve it in Rapture is by serving dynamic content (usually requested via AJAX calls from a JavaScript context on the client) through deployed Reflex scripts. Using scripts in this way helps avoid passing large amounts of data to the client for local processing.
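The benefit of server-side scripts is easy to see with a sketch: aggregate on the server and ship only the summary to the browser, not every raw row. The data and field names below are invented for the example (shown in Python rather than Reflex):

```python
# Sketch: collapse row-level data server-side so the ajax response is small.
import json

raw_rows = [
    {"desk": "rates", "pnl": 120.0},
    {"desk": "rates", "pnl": -45.0},
    {"desk": "fx",    "pnl": 30.5},
]

def summarise(rows):
    """Reduce row-level data into per-desk totals on the server."""
    totals = {}
    for r in rows:
        totals[r["desk"]] = totals.get(r["desk"], 0.0) + r["pnl"]
    return totals

# The payload the client actually receives: a small summary, not every row.
payload = json.dumps(summarise(raw_rows), sort_keys=True)
print(payload)
```

With thousands of rows the saving is substantial, and the browser-side JavaScript stays simple because the shaping happened on the server.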

Finally we’ll need a way of packaging up this content into something that can be easily deployed to an application instance. Rapture has a concept called “Features” which is an ideal match for this type of deployment approach. Even better – features can be packaged into a self-installable executable which we can run against an environment in a repeatable way.

So with static content, scripts and then the “real” data and workflows associated with our application we can very quickly create applications that can run in a Rapture environment. The end point for this blog journey will be to explain how we can create an application that can present this type of information:

[Screenshot: example of the information the finished application will present]

Watch this space (or subscribe using the button up and to the right) for more updates on this exciting way to build applications on Rapture.
