Risk Analytics

Building a Scalable and Portable Risk Engine with Apache Beam and Dataflow

This article details a company project focused on building a high-performance risk engine using Apache Beam and Google Cloud Dataflow. The project aimed to address the growing need for scalable compute power and risk management capabilities within financial institutions.

Motivation:

The drive towards scalable, grid-based compute for risk management stems from both external pressures (regulatory changes, market volatility) and internal needs (improved efficiency, better risk assessment). This project specifically targeted the concept of “risk muscle”—leveraging risk management as a driver for business growth.

Traditional Risk Engine Architecture:

Traditional risk engines typically combine a compute grid, a workflow engine, a caching layer, and input/output mechanisms for trades and risk results. This project aimed to modernize that architecture using distributed data processing.

Introducing Dataflow and Apache Beam:

  • Dataflow: Google Cloud Dataflow is a fully managed service for batch and stream data processing. It provides a unified model for handling both bounded and unbounded datasets. Dataflow builds upon Google’s extensive experience in big data, including technologies like FlumeJava and MillWheel.

  • Apache Beam: Apache Beam is an open-source, unified programming model for defining and executing both batch and streaming data processing pipelines. It provides an abstraction layer over various execution engines (“runners”), including Dataflow, Apache Flink, and Apache Spark. A central building block is the DoFn, a user-defined function applied via ParDo to immutable collections of data (PCollections), as sketched below.
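
To make the Beam vocabulary concrete, here is a minimal Java sketch: a PCollection of trade identifiers is transformed by a DoFn applied via ParDo. The class and element names (MinimalRiskPipeline, PriceTradeFn, the String trade IDs) are illustrative placeholders, not the project's actual code.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;

public class MinimalRiskPipeline {

  /** A DoFn receives one element at a time and emits zero or more outputs. */
  static class PriceTradeFn extends DoFn<String, Double> {
    @ProcessElement
    public void processElement(@Element String tradeId, OutputReceiver<Double> out) {
      // Placeholder valuation; the real project delegates to a C++ quant library.
      out.output((double) tradeId.hashCode());
    }
  }

  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // PCollections are immutable; each transform produces a new one.
    PCollection<String> trades = p.apply(Create.of("trade-1", "trade-2", "trade-3"));
    trades.apply("PriceTrades", ParDo.of(new PriceTradeFn()));

    p.run().waitUntilFinish();
  }
}
```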

Project Implementation Details:

  1. Programming Model: The project utilized Java for the Beam pipeline implementation, leveraging its robust ecosystem and integration capabilities.

  2. Quant Library Integration: A key challenge was integrating existing C++ quant libraries into the Java-based Beam pipeline. This was achieved using the Java Native Interface (JNI) combined with an out-of-process execution strategy, so that a crash in the C++ code could not take down the worker container. A “dead-letter” pattern was implemented to monitor the external C++ process and handle failures gracefully (see the dead-letter sketch after this list).

  3. Serialization: Protocol Buffers (protobuf) was chosen for efficient serialization and deserialization of the data exchanged between Java and C++. This kept data structures consistent across languages and simplified cross-language communication (a round-trip sketch follows this list).

  4. Module Separation: Beam’s I/O connectors provided ready-made integration with data sources and sinks such as MongoDB, Kafka, and Elasticsearch. This modularity gave a clear separation of concerns between data ingestion/output and core pipeline logic (an I/O sketch follows this list).

  5. Testing: Shared protobuf schemas allowed the same test inputs to be exercised consistently in the Java and C++ components, simplifying debugging and validation (a pipeline-level test sketch also follows this list).
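
To make the dead-letter pattern concrete, a minimal sketch follows, assuming a multi-output DoFn: successful valuations go to the main output, and trades whose external C++ call failed go to a side output for inspection or replay. ExternalPricer, CallQuantLibFn, and the string-based trade representation are illustrative stand-ins, not the project's actual classes.

```java
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.beam.sdk.values.TupleTagList;

public class QuantCallWithDeadLetter {
  // Two output streams: successful valuations and trades whose native call failed.
  static final TupleTag<Double> RESULTS = new TupleTag<Double>() {};
  static final TupleTag<String> DEAD_LETTERS = new TupleTag<String>() {};

  /** Stand-in for the out-of-process C++ call; the real project shells out and reads protobuf back. */
  static class ExternalPricer {
    static double price(String trade) throws Exception {
      if (trade.isEmpty()) throw new IllegalArgumentException("empty trade");
      return trade.hashCode();
    }
  }

  static class CallQuantLibFn extends DoFn<String, Double> {
    @ProcessElement
    public void processElement(@Element String trade, MultiOutputReceiver out) {
      try {
        double pv = ExternalPricer.price(trade);
        out.get(RESULTS).output(pv);
      } catch (Exception e) {
        // Route the failing trade to the dead-letter output instead of failing
        // the bundle (and the worker) as an in-process JNI crash would.
        out.get(DEAD_LETTERS).output(trade);
      }
    }
  }

  static PCollectionTuple price(PCollection<String> trades) {
    return trades.apply(
        "PriceViaQuantLib",
        ParDo.of(new CallQuantLibFn()).withOutputTags(RESULTS, TupleTagList.of(DEAD_LETTERS)));
  }
}
```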
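
The protobuf round trip between the two languages can be illustrated as below. The Trade message, its fields, and the com.example.risk package are hypothetical; only the generated-class API (newBuilder, toByteArray, parseFrom) is standard protobuf.

```java
// Assumes a generated class TradeProtos.Trade produced from a hypothetical trade.proto.
import com.example.risk.TradeProtos.Trade;

public class ProtoRoundTrip {
  public static void main(String[] args) throws Exception {
    // Build the message on the Java side (field names are assumptions)...
    Trade trade = Trade.newBuilder()
        .setTradeId("IRS-42")
        .setNotional(10_000_000)
        .setCurrency("USD")
        .build();

    // ...serialize to the compact wire format that the C++ process reads
    // with its own generated bindings...
    byte[] wire = trade.toByteArray();

    // ...and parse it back (here in Java; in the project the C++ side does this).
    Trade parsed = Trade.parseFrom(wire);
    System.out.println(parsed.getTradeId());
  }
}
```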
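
The sketch below shows how such connectors compose, assuming Beam's standard KafkaIO and MongoDbIO modules; the broker address, topic, URI, database, and collection names are placeholders rather than the project's configuration.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.io.mongodb.MongoDbIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptor;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.bson.Document;

public class IoConnectorsSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p
        // Read trade events from Kafka (broker and topic are placeholders).
        .apply("ReadTrades",
            KafkaIO.<String, String>read()
                .withBootstrapServers("kafka:9092")
                .withTopic("trades")
                .withKeyDeserializer(StringDeserializer.class)
                .withValueDeserializer(StringDeserializer.class)
                .withoutMetadata())
        // Keep only the payload and wrap it as a BSON document.
        .apply("ToDocument",
            MapElements.into(TypeDescriptor.of(Document.class))
                .via((KV<String, String> kv) -> new Document("trade", kv.getValue())))
        // Write results to MongoDB (URI, database, collection are placeholders).
        .apply("WriteResults",
            MongoDbIO.write()
                .withUri("mongodb://mongo:27017")
                .withDatabase("risk")
                .withCollection("results"));

    p.run();
  }
}
```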
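
The cross-language tests themselves are protobuf-driven; on the Beam side, the natural complement is a pipeline-level unit test using TestPipeline and PAssert. The sketch below shows that pattern with a stand-in transform, not the project's pricing logic.

```java
import org.apache.beam.sdk.testing.PAssert;
import org.apache.beam.sdk.testing.TestPipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;
import org.junit.Rule;
import org.junit.Test;

public class PricingTransformTest {
  @Rule public final transient TestPipeline pipeline = TestPipeline.create();

  @Test
  public void upperCasesTradeIds() {
    PCollection<String> input = pipeline.apply(Create.of("irs-1", "fxo-2"));

    // Stand-in transform; in the project this would be the pricing ParDo.
    PCollection<String> output =
        input.apply(MapElements.into(TypeDescriptors.strings()).via(String::toUpperCase));

    // PAssert checks the contents of a PCollection when the test pipeline runs.
    PAssert.that(output).containsInAnyOrder("IRS-1", "FXO-2");
    pipeline.run().waitUntilFinish();
  }
}
```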

Runner Comparison: Dataflow vs. Flink:

While Dataflow was the primary target runner, Apache Flink was also evaluated for on-premise deployment. Key differences included:

  • Managed Service: Dataflow is a fully managed service, while Flink requires manual configuration and scaling.
  • Optimization: Dataflow applies service-side optimizations such as graph fusion and dynamic work rebalancing, which the open-source Flink distribution does not provide.
  • Deployment: Dataflow runs exclusively on Google Cloud Platform (GCP), while Flink can be deployed on a range of platforms, including on-premise clusters, AWS Elastic MapReduce (EMR), and Azure Kubernetes Service (AKS). For on-premise Flink deployments, the Data Artisans Platform (now Ververica Platform) was recommended for enhanced monitoring and management. Switching runners is a launch-time configuration choice, as the sketch after this list shows.
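
Because the pipeline code is runner-agnostic, moving between Dataflow and Flink comes down to launch-time options. A minimal sketch, assuming the standard DataflowRunner and FlinkRunner flags; all flag values are placeholders.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class RunnerSelection {
  public static void main(String[] args) {
    // The same pipeline code runs on either engine; the runner is chosen at launch time, e.g.
    //   --runner=DataflowRunner --project=<gcp-project> --region=<region> --tempLocation=gs://<bucket>/tmp
    //   --runner=FlinkRunner --flinkMaster=<host:port>
    // (Flag values are placeholders; the matching runner artifact must be on the classpath.)
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();

    Pipeline pipeline = Pipeline.create(options);
    // ... apply the same transforms as before ...
    pipeline.run();
  }
}
```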

Performance Results:

Performance testing showed comparable results between on-premise Flink and Dataflow in the cloud, particularly when protobuf was used for serialization. However, Dataflow’s auto-scaling and optimization features provided significant advantages in efficiency and ease of management.

Streaming Analytics and Windowing:

The project also explored streaming analytics using Beam’s windowing capabilities. Key concepts included event time, processing time, watermarks, and triggering mechanisms. Fixed windows were initially used, with plans to explore session windows for more complex scenarios. Challenges related to handling late-arriving data and synchronizing trade data with market data updates were identified as areas for future development.
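
These concepts map directly onto Beam's Window transform. The sketch below shows fixed event-time windows with an on-time firing at the watermark and late firings afterwards; the window length, late-firing delay, and allowed lateness are illustrative values, not the project's settings.

```java
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

public class RiskWindowing {
  /** Assigns trade events to one-minute fixed windows based on event time. */
  static PCollection<String> window(PCollection<String> tradeEvents) {
    return tradeEvents.apply(
        "FixedWindows",
        Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
            // Fire when the watermark passes the end of the window (event time)...
            .triggering(
                AfterWatermark.pastEndOfWindow()
                    // ...and again for late data, shortly after it arrives (processing time).
                    .withLateFirings(
                        AfterProcessingTime.pastFirstElementInPane()
                            .plusDelayOf(Duration.standardSeconds(30))))
            // Accept data up to ten minutes late before it is dropped.
            .withAllowedLateness(Duration.standardMinutes(10))
            .accumulatingFiredPanes());
  }
}
```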

Conclusion:

This project successfully demonstrated the power and flexibility of Apache Beam and Google Cloud Dataflow for building a scalable and portable risk engine. The use of protobuf, out-of-process execution, and modular design enabled seamless integration with existing C++ quant libraries. While Flink offered a viable on-premise option, Dataflow’s managed service capabilities provided significant advantages in terms of performance, scalability, and operational efficiency. Future work will focus on refining streaming analytics and exploring more advanced windowing strategies. The project also highlighted the importance of choosing the right serialization method and understanding the nuances of different runners when developing Beam pipelines.