tempo - Time Series Utilities for Data Teams Using Databricks

Project Description

The purpose of this project is to make time series manipulation with Spark simpler. Operations covered under this package include AS OF joins, rolling statistics with user-specified window lengths, featurization of time series using lagged values, and Delta Lake optimization on time and partition fields.

Using the Project

Install in Databricks notebooks using:

%pip install git+https://github.com/databrickslabs/tempo.git

Install locally using:

pip install git+https://github.com/databrickslabs/tempo.git

Starting Point: TSDF object, a wrapper over a Spark data frame

The entry point into all of tempo's time series features is the TSDF object, which wraps a Spark data frame. At a high level, a TSDF holds a data frame composed of many smaller time series, one per partition key. To create a TSDF object, a distinguished timestamp column must be provided; the public methods use it to sort each series. Optionally, a sequence number and partition columns can be provided as the default columns from which new features are derived. Below are the public methods available for TSDF transformation and enrichment.
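
The sections below assume two example Spark data frames, skewTrades and skewQuotes. A minimal sketch of how such inputs could be built and wrapped in a TSDF is shown here; the schemas and values are illustrative assumptions, not the project's own sample data.

from pyspark.sql import SparkSession, functions as F
from tempo import TSDF

spark = SparkSession.builder.getOrCreate()

# Hypothetical trade and quote events keyed by symbol, with an event_ts timestamp.
skewTrades = spark.createDataFrame(
    [("2020-08-21 09:30:00", "AAPL", 500.10, 100),
     ("2020-08-21 09:30:05", "AAPL", 500.25, 200)],
    ["event_ts", "symbol", "trade_pr", "trade_vol"],
).withColumn("event_ts", F.col("event_ts").cast("timestamp"))

skewQuotes = spark.createDataFrame(
    [("2020-08-21 09:29:59", "AAPL", 500.05, 500.15)],
    ["event_ts", "symbol", "bid_pr", "ask_pr"],
).withColumn("event_ts", F.col("event_ts").cast("timestamp"))

# A TSDF is created by naming the distinguished timestamp column of the wrapped data frame.
trades_tsdf = TSDF(skewTrades, ts_col="event_ts")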

Sample Reference Architecture for Capital Markets

1. asofJoin - AS OF Join to Paste Latest AS OF Information onto Fact Table

This join uses windowing to select the latest record from a source table and merge it onto the base fact table.

Parameters:

ts_col = timestamp column on which to sort the fact and source tables
partition_cols = columns used to partition the TSDF into more granular time series for windowing and sorting


from tempo import *

base_trades = TSDF(skewTrades, ts_col = 'event_ts')
normal_asof_result = base_trades.asofJoin(skewQuotes, partition_cols = ["symbol"], right_prefix = 'asof').df
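
For intuition, the result is equivalent to keeping, for each fact row, the most recent source row at or before that row's timestamp. A naive sketch of that semantic follows; it is not tempo's implementation, which relies on windowing as described above, and it assumes (symbol, event_ts) uniquely identifies a trade row.

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Pair every trade with all earlier-or-equal quotes for the same symbol...
joined = skewTrades.alias("t").join(
    skewQuotes.alias("q"),
    (F.col("t.symbol") == F.col("q.symbol")) & (F.col("q.event_ts") <= F.col("t.event_ts")),
    "left",
)

# ...then keep only the latest quote per trade row.
w = Window.partitionBy(F.col("t.symbol"), F.col("t.event_ts")).orderBy(F.col("q.event_ts").desc())
naive_asof = joined.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn")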

2. Skew Join Optimized AS OF Join

The skew-optimized AS OF join buckets each set of partition_cols into time brackets so that the latest source record can be merged onto the fact table without the work concentrating in a few oversized partitions.

Parameters:

ts_col = timestamp column for sorting
partition_cols = partition columns for defining granular time series for windowing and sorting
tsPartitionVal = value used to break up each partition into time brackets
fraction = overlap fraction
right_prefix = prefix used for source columns when merged into the fact table

from tempo import *

base_trades = TSDF(skewTrades, ts_col = 'event_ts')
partitioned_asof_result = base_trades.asofJoin(skewQuotes, partition_cols = ["symbol"], tsPartitionVal = 1200, fraction = 0.1, right_prefix='asof').df
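
Roughly, rows are assigned to fixed-width time brackets derived from the timestamp, and windowing then happens within each bracket rather than across a whole, possibly skewed, partition. A sketch of the bracket assignment is below; treating tsPartitionVal as a number of seconds is an assumption made for illustration.

from pyspark.sql import functions as F

# Assign each row to a 1200-second (20-minute) bracket; overlapping a fraction of
# each bracket with its neighbour keeps boundary records visible to both sides.
bucketed = skewTrades.withColumn(
    "ts_partition", F.floor(F.col("event_ts").cast("long") / 1200)
)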

3 - Approximate Exponential Moving Average

The approximate exponential moving average uses an approximation of the form EMA = e * lag(col, 0) + e * (1 - e) * lag(col, 1) + e * (1 - e)^2 * lag(col, 2) + ... (continuing for window terms) to define a rolling moving average based on exponential decay.

Parameters:

ts_col = timestamp on which to sort for computing the previous n terms, where n is the size of the window
window = number of lagged values to compute for the moving average


from tempo import *

base_trades = TSDF(skewTrades, ts_col = 'event_ts')
ema_trades = base_trades.EMA("trade_pr", window = 180, partitionCols = ["symbol"]).df
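
For intuition, the weighted sum in the formula above can be written out directly with lag columns. A rough hand-rolled equivalent follows; the decay factor e and the window length here are assumptions, not tempo's defaults.

from pyspark.sql import functions as F
from pyspark.sql.window import Window

e, window = 0.2, 5  # assumed decay factor and window size
w = Window.partitionBy("symbol").orderBy("event_ts")

# EMA ~ sum over k of e * (1 - e)^k * lag(trade_pr, k), with missing lags treated as 0
ema_expr = sum(
    e * (1 - e) ** k * F.coalesce(F.lag("trade_pr", k).over(w), F.lit(0.0))
    for k in range(window)
)
manual_ema = skewTrades.withColumn("ema_trade_pr", ema_expr)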

4 - Volume-weighted average point (VWAP) Calculation

This calculation computes a volume-weighted average point, where point can be any feature, e.g. a price, a temperature reading, etc.

Parameters:

ts_col = column on which to bin for the VWAP calculation (defaults to the minute unit)
price_col = feature column on which to aggregate


from tempo import *

base_trades = TSDF(skewTrades, ts_col = 'event_ts')
vwap_res = base_trades.vwap(price_col = "trade_pr").df
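
For reference, a per-minute VWAP is simply sum(price * volume) / sum(volume) within each time bin. A sketch of that computation follows, assuming the hypothetical trade_vol volume column from the earlier sketch and minute-level binning.

from pyspark.sql import functions as F

manual_vwap = (
    skewTrades
    .withColumn("time_group", F.date_trunc("minute", F.col("event_ts")))
    .groupBy("symbol", "time_group")
    .agg((F.sum(F.col("trade_pr") * F.col("trade_vol")) / F.sum("trade_vol")).alias("vwap"))
)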

5 - Time Series Lookback Feature Generation

Method for placing lagged values into an array for traditional ML methods

Parameters:

ts_col = timestamp column used for sorting and computing lagged values per partition
partitionCols = columns to use for more granular time series calculation
lookbackWindowSize = cardinality of the computed feature vector
featureCols = features to aggregate into an array


from tempo import *

base_trades = TSDF(skewTrades, ts_col = 'event_ts')
res_df = base_trades.withLookbackFeatures(featureCols = ['trade_pr'] , lookbackWindowSize = 20, partitionCols=['symbol']).df
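
Conceptually, each output row carries an array of the most recent lookbackWindowSize feature values for its key. A sketch of that idea using a plain rows-between window (not tempo's implementation):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

lookback = 20
w = Window.partitionBy("symbol").orderBy("event_ts").rowsBetween(-(lookback - 1), 0)
manual_lookback = skewTrades.withColumn("features", F.collect_list("trade_pr").over(w))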

6 - Range Stats Lookback Append

Method for computing rolling statistics based on the distinguished timestamp column

Parameters:

ts_col = timestamp column used for sorting values to compute rolling values
partitionCols = partition columns used for the range stats windowing in Spark


from tempo import *

base_trades = TSDF(skewTrades, ts_col = 'event_ts')
res_stats = base_trades.withRangeStats(partitionCols=['symbol']).df
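
As a point of reference, a single rolling statistic of this kind can be expressed with a range-based window. The 10-minute lookback below is an assumption for illustration; withRangeStats appends a set of such statistics (e.g. mean, sum, count) rather than one column.

from pyspark.sql import functions as F
from pyspark.sql.window import Window

w = (
    Window.partitionBy("symbol")
    .orderBy(F.col("event_ts").cast("long"))
    .rangeBetween(-600, 0)  # 600-second (10-minute) lookback
)
manual_mean = skewTrades.withColumn("mean_trade_pr", F.avg("trade_pr").over(w))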

Project Support

Please note that all projects in the /databrickslabs github account are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements (SLAs). They are provided AS-IS and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects.

Any issues discovered through the use of this project should be filed as GitHub Issues on the Repo. They will be reviewed as time permits, but there are no formal SLAs for support.

Project Setup

After cloning the repo, it is highly advised that you create a virtual environment to isolate and manage packages for this project, like so:

python -m venv <path to project root>/venv

You can then install the required modules via pip:

pip install -r requirements.txt

Building the Project

Once in the main project folder, build into a wheel using the following command:

python setup.py bdist_wheel

Releasing the Project

Instructions for how to release a version of the project
