Streaming Data Pipelines: Creating Next Gen Banking Analytics

ATB AI Lab, in the Library, with the Streaming Data Pipeline 

Making banking work for people means evolving solutions to meet the demands of tomorrow. In this whitepaper, Dmitriy Volinskiy and Gunjan Kaur of the ATB AI Lab break down how their team, in partnership with the University of Alberta, is finding innovative ways to deliver robust and cost-efficient real-time data processing and analysis.
    

Modern banking systems are a labyrinthine network of intricate processes, with incoming raw data typically landing in an electronic warehouse for scheduled end-of-day processing and analysis. The daily processing of banking information involves many different manipulation and analysis steps, often created by various groups with different data use cases. The result is process fragmentation; high computational cost, because in-house computational resources must be maintained and operated to process this considerable amount of data in a relatively short time span; and data analysis inefficiencies, because the teams processing and using this data work in silos. Another side effect of this structure is a significant delay between raw information entering the system and its appearance in an actionable form on an expert’s monitor. If the header for this section had you confused, then here’s the question: who killed this monster?

Our ATB AI Lab is working in partnership with the University of Alberta (UofA) to create a next-gen data analysis stack that exploits modern serverless computing in conjunction with streaming data pipelines to allow for robust and cost-efficient real-time data processing and analysis. With real-time capabilities comes the ability to monitor suspicious transactions as they occur, make rapid adjustments to improve our customers’ experiences in our mobile app and website, and support many more applications that help make banking work for our customers. In this article we explore some of the design philosophies and technologies that are making this paradigm shift possible, and examine two use cases that have emerged from our lab.

Working Toward a Collaborative System

The primary objective of our multi-year research partnership with the UofA is to develop flexible, real-time data-analysis systems that leverage cutting-edge machine learning and whose construction follows ‘collaborative’ design principles. Our systems are composed of several independent services, or modules, which communicate with one another via ‘messages’ handled by a high-capacity, intrinsically fault-tolerant pub/sub service that ensures messages persist until they are consumed by a service. This architecture takes the traditional concept of modular software design to its logical conclusion: each service in an engine is effectively containerized and operates completely independently of all other components, except for the messaging bus, which transfers information throughout the system. This method of construction allows components to be added to the platform without impacting the pre-existing infrastructure.
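To make the messaging pattern concrete, the sketch below shows how a single module might consume from and publish to a managed pub/sub bus, illustrated here with Google Cloud Pub/Sub’s Python client. The project, topic, and subscription names, and the transformation applied, are hypothetical placeholders rather than details of our actual engine.

```python
# A minimal sketch of one module in a 'collaborative' engine exchanging
# messages over Google Cloud Pub/Sub. All names below are illustrative.
import json
from google.cloud import pubsub_v1

PROJECT_ID = "example-project"            # hypothetical project
TOPIC_ID = "enriched-transactions"        # hypothetical topic this module produces
SUBSCRIPTION_ID = "raw-transactions-sub"  # hypothetical subscription it consumes

publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

def handle_message(message):
    """Consume a message from the bus, do this module's work, and re-publish."""
    record = json.loads(message.data.decode("utf-8"))
    record["processed_by"] = "example-module"  # placeholder transformation
    publisher.publish(topic_path, json.dumps(record).encode("utf-8"))
    message.ack()  # the bus retains the message until it is acknowledged

# Pulls messages asynchronously; other modules remain entirely unaffected.
streaming_pull = subscriber.subscribe(subscription_path, callback=handle_message)
```

Because each module only knows about its input and output topics, a new module can be attached by pointing it at existing topics, without touching the modules already running.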

Collaborative design earns its name from its decentralized implementation: the system grows in functionality through the collaboration of modules from various contributing authors. The advantage is that, in a revamped banking system, teams looking to work with incoming data can add the functionality they need on top of the existing system, simply ‘connecting’ their module to whatever inputs they require from the core modules and from modules that other teams have already added to the engine.

Elastic Computing and Streaming Data Pipelines

Our current engine prototypes make use of Apache Beam’s unified programming model, in conjunction with Google Cloud Platform (GCP)’s ‘compute as a service’ DataFlow pipeline tool, to handle both batch and streaming data sources with no barriers to the size or complexity of the data. These pipelines naturally describe the transformations and control flow of incoming data, and perform automatic resource allocation to make full use of available compute power. Tools such as DataFlow allow these pipelines to run on elastic cloud computing infrastructure that scales the number of workers running at any given time based on the volume of data entering the system. By taking advantage of the automated horizontal scalability of modern on-demand serverless computing and the flexibility of data pipelines to handle real-time data, our engine sits at the forefront of next-generation big data processing and analysis.
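As an illustration of what such a pipeline can look like, here is a minimal Apache Beam sketch that reads a stream, windows it, and writes per-customer aggregates back to the bus. The topics and transforms are illustrative assumptions; pointing the pipeline options at the DataflowRunner lets the same code execute on GCP’s autoscaling infrastructure.

```python
# A minimal sketch of a streaming Apache Beam pipeline. Topic names and the
# aggregation performed are placeholders, not our production pipeline.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # add runner/project args for Dataflow

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadTransactions" >> beam.io.ReadFromPubSub(
            topic="projects/example-project/topics/transactions")  # hypothetical topic
        | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WindowIntoMinutes" >> beam.WindowInto(beam.window.FixedWindows(60))
        | "KeyByCustomer" >> beam.Map(lambda tx: (tx["customer_id"], tx["amount"]))
        | "SumPerCustomer" >> beam.CombinePerKey(sum)
        | "Serialize" >> beam.Map(
            lambda kv: json.dumps(
                {"customer_id": kv[0], "windowed_total": kv[1]}).encode("utf-8"))
        | "WriteSummaries" >> beam.io.WriteToPubSub(
            topic="projects/example-project/topics/windowed-summaries")
    )
```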

We are also designing our engines to improve the robustness of machine learning models applied to real-time data analysis. The nature of real-time data is that bursts and delays in the receipt of information are commonplace; however, they prove problematic for machine learning models applied to time-windowed data. To ensure that our systems can handle these (semi-regular) irregularities, we are exploring the use of data streams augmented and blended with artificial data streams that stabilize the flow of data into our models, so that anomalies can be detected more effectively even within delayed and bursty data.
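One way to realize this blending, sketched below under the assumption that the pipeline is built in Apache Beam, is to flatten the real stream together with a low-volume synthetic ‘heartbeat’ stream so that every time window contains at least some elements. The topic, firing interval, and filler record are hypothetical.

```python
# A minimal sketch of blending a real stream with a synthetic heartbeat stream
# so downstream windowed models always receive data, even during delays.
import json
import apache_beam as beam
from apache_beam.transforms.periodicsequence import PeriodicImpulse

def build_blended_stream(pipeline):
    real = (
        pipeline
        | "ReadReal" >> beam.io.ReadFromPubSub(
            topic="projects/example-project/topics/transactions")  # hypothetical
        | "ParseReal" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
    )
    synthetic = (
        pipeline
        | "Heartbeat" >> PeriodicImpulse(fire_interval=10)  # one element every 10 s
        | "MakeFiller" >> beam.Map(
            lambda ts: {"customer_id": None, "amount": 0.0, "synthetic": True})
    )
    # Downstream windowed transforms consume the union of both streams and can
    # filter out, or down-weight, records flagged as synthetic.
    return (real, synthetic) | "Blend" >> beam.Flatten()
```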

Another feature we are exploring in our engines is the use of competing, redundant databases to serve pipelines with different volumes and ‘urgencies’. For instance, one database solution can be employed for small, fast queries with an emphasis on immediate data availability, and a second for larger, more comprehensive data requests that can afford to be served after a longer delay. By introducing this redundancy, and by optimizing which database solution serves which data requests through a recommendation system based on request urgency and size, we achieve information retrieval that best suits the needs of the client service in terms of the availability, size, or consistency of the requested information. This means that data pipelines that need to make information available in real time, and pipelines that perform non-urgent data processing and warehousing tasks, can both be served by the same query-handling engine, keeping overall costs down while still achieving the desired latencies.
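A toy illustration of the routing idea is shown below; the thresholds and store labels are invented for the example and are not the rules used in our engine.

```python
# A hypothetical sketch of request routing: pick which of two redundant stores
# should serve a query based on its urgency and expected size.
from dataclasses import dataclass

@dataclass
class QueryRequest:
    expected_rows: int   # rough size estimate supplied by the client service
    max_latency_ms: int  # how quickly the client needs an answer

FAST_STORE = "low-latency store"      # serves small, urgent lookups
BULK_STORE = "analytical warehouse"   # serves large, delay-tolerant scans

def route_query(request: QueryRequest) -> str:
    """Return which backing store should serve this request."""
    if request.max_latency_ms <= 500 and request.expected_rows <= 10_000:
        return FAST_STORE
    return BULK_STORE

# Example: a real-time fraud lookup vs. a nightly reporting job.
print(route_query(QueryRequest(expected_rows=50, max_latency_ms=100)))
print(route_query(QueryRequest(expected_rows=5_000_000, max_latency_ms=60_000)))
```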

Research → A Better Bank

The purpose of the research in our AI Lab is to provide innovative solutions for core business problems. We look to iterate toward production-ready outcomes and learn from disproved hypotheses. In line with our work on real-time analysis systems, we have developed a pair of systems to accomplish the tasks of Transaction Anomaly Detection and Customer Experience Experimentation.

The process of manually inspecting daily transactions for suspicious activity involves time-consuming analysis of user transaction histories and the application of ad-hoc metrics for identifying anomalies. We created an end-to-end anomaly detection engine that continually monitors real-time transactions and can serve as a standalone alert system, or as a suspicious-transaction curation device for human fraud experts. The engine ingests incoming transactions, simultaneously updating a database of customer transaction histories and feeding DataFlow pipelines that process the incoming transaction stream and perform batch calculation and analysis of customer transaction history statistics. These statistics are combined with the incoming data to enrich each transaction with a customer activity profile that highlights deviations from the customer’s historical behaviour. The augmented transaction details are used as input to a supervised machine learning classifier, and the model’s output triggers alerts on individual transactions that are deemed suspicious. Built on top of GCP’s elastic computation framework, the engine dynamically scales its computational resources with the volume of incoming transactions.
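The sketch below illustrates the enrichment-and-scoring step in isolation: a transaction is combined with the customer’s historical statistics and passed to a pre-trained classifier whose probability output drives the alert. The feature set, model file, and threshold are hypothetical stand-ins, not the engine’s actual configuration.

```python
# A minimal sketch of enriching a transaction with the customer's historical
# profile and scoring it with a pre-trained classifier. Names are placeholders.
import pickle
import numpy as np

with open("fraud_classifier.pkl", "rb") as f:  # hypothetical pre-trained model
    classifier = pickle.load(f)

ALERT_THRESHOLD = 0.9  # illustrative probability cut-off for raising an alert

def enrich(transaction: dict, history_stats: dict) -> np.ndarray:
    """Combine the raw transaction with the customer's historical statistics."""
    mean = history_stats["mean_amount"]
    std = max(history_stats["std_amount"], 1e-6)
    return np.array([
        transaction["amount"],
        (transaction["amount"] - mean) / std,  # z-score vs. customer history
        transaction["hour_of_day"],
        history_stats["txns_last_24h"],
    ]).reshape(1, -1)

def score_and_alert(transaction: dict, history_stats: dict) -> bool:
    """Return True (trigger an alert) if the transaction looks suspicious."""
    features = enrich(transaction, history_stats)
    suspicion = classifier.predict_proba(features)[0, 1]
    return suspicion >= ALERT_THRESHOLD
```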

Our second case study involved setting up an experimentation system to optimize customer experience. Having an experimentation back-end capable of testing price and cohort curves, optimizing notification delivery, and measuring campaign outcomes is key to developing products driven by customer choice. We created a prototype capable of ingesting data from our mobile or web app, checking for outliers, and conducting statistical tests for hypothesis testing. Most importantly, it enables us to deliver products and services that make banking work for our customers.


Experimentation enables us to quantitatively measure customer response to changes we make to their experience. Experiments use statistical hypothesis testing to rigorously determine whether a particular change is producing the intended result. Whether these experiments concern customer response to a change in our banking app workflow or the efficacy of a proposed new campaign, our prototype will help guide the development of cutting-edge hypothesis testing systems that enable rapid analysis of outcomes.

Our prototype is built using loosely coupled modules, following our collaborative design philosophy. Data is ingested from our mobile app or website into a data pipeline capable of operating on both real-time streaming data and scheduled batches of data. This pipeline performs preprocessing steps before supplying data to a BigQuery database for outlier detection and sample-size validation. Incoming data is also analyzed to select an appropriate statistical measure for hypothesis testing: the engine can, for example, select a t-test based on the distribution of the sample data and then report the results of that test. These results are then output for visualization and further analysis. Using Google’s App Engine and a framework such as Flask, the entire system can be wrapped in a web-app environment for deployment.
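A minimal sketch of the test-selection logic, assuming SciPy as the statistics library, might look like the following; the normality check and significance level are illustrative choices rather than the prototype’s exact rules.

```python
# A minimal sketch of choosing between a t-test and a non-parametric test based
# on how the two samples are distributed. Thresholds are illustrative.
from scipy import stats

def run_experiment_test(control, treatment, alpha=0.05):
    """Pick and run an appropriate two-sample test; return a small result dict."""
    normal = (
        stats.shapiro(control).pvalue > alpha
        and stats.shapiro(treatment).pvalue > alpha
    )
    if normal:
        test_name = "t-test"
        result = stats.ttest_ind(control, treatment, equal_var=False)
    else:
        test_name = "Mann-Whitney U"
        result = stats.mannwhitneyu(control, treatment)
    return {
        "test": test_name,
        "statistic": float(result.statistic),
        "p_value": float(result.pvalue),
        "significant": result.pvalue < alpha,
    }
```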

The system offers a simple way to run principled, automated, and highly scalable statistical hypothesis tests, and will lead to improved customer satisfaction through better access to powerful customer-insight analytics.

It’s all about making banking work for people. To stay up to date with our Transformation initiatives, subscribe to alphaBeta below.
