Postgres
Featureform supports Postgres as an Offline Store.

Implementation

Primary Sources

Tables

Table sources are used directly via a view. Featureform will never write to a primary source.
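As a sketch of what this looks like in practice, an existing Postgres table can be registered as a primary source on the provider object. The provider name assumes the "postgres_docs" provider from the Configuration section below has already been applied, and the table name is illustrative:

```python
import featureform as ff

# Illustrative: assumes a provider named "postgres_docs" has already
# been registered and applied (see the Configuration section below).
postgres = ff.get_postgres("postgres_docs")

# Register an existing table as a primary source. Featureform reads it
# through a view and never writes to the table itself.
transactions = postgres.register_table(
    name = "transactions",
    table = "Transactions",  # the underlying Postgres table name
)
```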

Files

Files are copied into a Postgres table via a Kubernetes Job kicked off by our coordinator. If a schedule is set, the table is atomically re-copied on each run.

Transformation Sources

SQL transformations are used to create a view. By default, those views are materialized and refreshed according to the schedule parameter. Deprecated transformations are converted to unmaterialized views to save storage space.
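For example, a SQL transformation can be declared with the provider's decorator, where upstream sources are referenced with the `{{name.variant}}` templating syntax. The table, column, and variant names below are illustrative:

```python
# Illustrative sketch: assumes "postgres" is a registered provider object
# and "transactions" is a registered primary source.
@postgres.sql_transformation()
def average_user_transaction():
    # Referenced sources use the {{name.variant}} templating syntax.
    return "SELECT CustomerID AS user_id, AVG(TransactionAmount) AS avg_transaction_amt " \
           "FROM {{transactions.default}} GROUP BY user_id"
```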

Offline to Inference Store Materialization

When a feature is registered, Featureform creates an internal transformation to get the newest value of every feature and its associated entity. A Kubernetes job is then kicked off to sync it with the Inference Store.
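A feature registration might look like the following sketch. The entity, column names, and the `redis` inference store object are illustrative and assumed to be registered elsewhere:

```python
import featureform as ff

# Illustrative sketch: "average_user_transaction" is a registered
# transformation and "redis" a registered inference store provider.
user = ff.register_entity("user")

average_user_transaction.register_resources(
    entity = user,
    entity_column = "user_id",
    inference_store = redis,
    features = [
        {"name": "avg_transactions", "column": "avg_transaction_amt", "type": "float32"},
    ],
)
```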

Training Set Generation

Every registered feature and label is associated with a view. That view contains three columns: the entity, the value, and the timestamp. When a training set is registered, it is created as a materialized view via a JOIN on the corresponding label and feature views.
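A training set registration joining one label with one feature might look like this sketch; the resource names and variants are illustrative:

```python
import featureform as ff

# Illustrative names: "fraudulent" is a registered label and
# "avg_transactions" a registered feature, each with a "default" variant.
ff.register_training_set(
    "fraud_training", "default",
    label = ("fraudulent", "default"),
    features = [("avg_transactions", "default")],
)
```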

Configuration

First we have to add a declarative Postgres configuration in Python.
postgres_config.py
```python
import featureform as ff

ff.register_postgres(
    name = "postgres_docs",
    description = "Example offline store",
    team = "Featureform",
    host = "0.0.0.0",
    port = 5432,
    user = "postgres",
    password = "password",
    database = "postgres",
)
```
Once our config file is complete, we can apply it to our Featureform deployment:
```shell
featureform apply postgres_config.py --host $FEATUREFORM_HOST
```
We can verify that the provider was created by checking the Providers tab of the Feature Registry.