Quickstart (Docker)
This quickstart uses the standalone containerized version of Featureform. It can connect to the same providers as the Kubernetes Hosted version, but lacks its scaling capabilities.
This quickstart will walk through creating a few simple features, labels, and a training set from a fraud detection dataset using Postgres and Redis.
Requirements
Python 3.7+
Docker
Step 1: Install The Featureform CLI
pip install featureform
Step 2: Pull Containers
We’ll use Postgres and Redis as providers in this quickstart. We’ve created a Featureform Postgres container that comes preloaded with data. You can check out the source data here; we’ll just be using a sample of it.
We’ll also run the Featureform container in the foreground so logs are available.
docker run -d -p 5432:5432 featureformcom/postgres
docker run -d -p 6379:6379 redis
docker run -p 7878:7878 -p 80:80 featureformcom/featureform
If you’re running on an ARM machine (e.g. Apple Silicon), use this command for the Featureform container instead:
docker run -p 7878:7878 -p 80:80 -e ETCD_ARCH="ETCD_UNSUPPORTED_ARCH=arm64" featureformcom/featureform
Wait 30-60 seconds to allow the Postgres container to fully initialize the sample data.
Step 3: Set Environment Variables
We can store the address of our container’s endpoint in an environment variable so it can be used for applying definitions and serving later.
export FEATUREFORM_HOST=localhost:7878
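If you’d rather not rely on the environment variable, the address can also be passed directly to the Python client. A minimal sketch, assuming the ServingClient constructor accepts a host argument alongside the insecure flag used later in this quickstart:
from featureform import ServingClient

# Point the client at the local container explicitly instead of using FEATUREFORM_HOST.
serving = ServingClient(host="localhost:7878", insecure=True)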
Step 4: Apply Definitions
Featureform definitions can be applied from either a local file or a URL, and multiple files can be applied at the same time.
We’ll set the --insecure flag since we’re using an unencrypted endpoint on the container.
featureform apply https://featureform-demo-files.s3.amazonaws.com/definitions.py --insecure
Step 5: Dashboard and Serving
The dashboard is available at http://localhost (port 80 on the Featureform container).
In the dashboard you should be able to see that 2 Sources, 1 Feature, 1 Label, and 1 Training Set have been created.
You can also check the status of the training set with:
featureform get training-set fraud_training default --insecure
When the status of these resources is READY, you can serve them with:
curl https://featureform-demo-files.s3.amazonaws.com/serving.py -o serving.py && python serving.py
curl https://featureform-demo-files.s3.amazonaws.com/training.py -o training.py && python training.py
This will download and run sample serving and training scripts, which will serve a single feature value and a sample of a training data set.
How Does It Work?
Now that we have everything running, we’ll walk through what was done to create the training set and feature.
Apply
If we download the definitions.py file, we can see what Featureform is doing when we run featureform apply.
First we register the Postgres and Redis containers as providers so Featureform is aware of them.
import featureform as ff
postgres = ff.register_postgres(
    name="postgres-quickstart",
    host="host.docker.internal",  # The Docker DNS name for Postgres
    port="5432",
    user="postgres",
    password="password",
    database="postgres",
)

redis = ff.register_redis(
    name="redis-quickstart",
    host="host.docker.internal",  # The Docker DNS name for Redis
    port=6379,
)
We can then register our sources.
The first source we’ll register is the Transactions table that already exists in Postgres. Registering it makes Featureform aware of the table so it can be used as a dependency.
We can then create a Transformation source off of our Transactions table. This is done using an SQL query that is executed in Postgres and saved in a table.
transactions = postgres.register_table(
    name="transactions",
    table="Transactions",  # This is the table's name in Postgres
)

@postgres.sql_transformation()
def average_user_transaction():
    return "SELECT CustomerID as user_id, avg(TransactionAmount) " \
           "as avg_transaction_amt from {{transactions.default}} GROUP BY user_id"
We can then register our feature, label, and training set.
The feature is registered off of the table we created with our SQL Transformation.
The label is registered off of our base Transactions table.
A Training Set can be created by joining our feature and label together.
@ff.entity
class User:
    avg_transactions = ff.Feature(
        average_user_transaction[["user_id", "avg_transaction_amt"]],  # We can optionally include the `timestamp_column` "timestamp" here
        type=ff.Float32,
        inference_store=redis,
    )
    fraudulent = ff.Label(
        transactions[["customerid", "isfraud"]], variant="quickstart", type=ff.Bool
    )

ff.register_training_set(
    "fraud_training",
    label=("fraudulent", "quickstart"),
    features=["avg_transactions"],
)
The ff.entity decorator will use the lowercased class name as the entity name. The class attributes avg_transactions and fraudulent will be registered as a feature and label, respectively, associated with the user entity. Indexing into the sources (e.g. average_user_transaction) with [["<ENTITY COLUMN>", "<FEATURE/LABEL COLUMN>"]] returns the required parameters to the Feature and Label registration classes.
When registering more than one variant, we can use the Variants registration class:
@ff.entity
class User:
    avg_transactions = ff.Variants(
        {
            "quickstart": ff.Feature(
                average_user_transaction[["user_id", "avg_transaction_amt"]],
                type=ff.Float32,
                inference_store=redis,
            ),
            "quickstart_v2": ff.Feature(
                average_user_transaction[["user_id", "avg_transaction_amt"]],
                type=ff.Float32,
                inference_store=redis,
            ),
        }
    )
    fraudulent = ff.Label(
        transactions[["customerid", "isfraud"]], variant="quickstart", type=ff.Bool
    )
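A training set can then pin which feature variant it uses by passing a (name, variant) tuple in its features list, mirroring how the label variant is specified. A sketch extending the definitions above, assuming the quickstart_v2 variant; the training-set name fraud_training_v2 is made up for illustration:
ff.register_training_set(
    "fraud_training_v2",
    label=("fraudulent", "quickstart"),
    features=[("avg_transactions", "quickstart_v2")],
)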
Serving
We can serve single features from Redis with the Serving Client. The features() method takes a list of feature names and the entity we want values for.
from featureform import ServingClient
serving = ServingClient(insecure=True)
user_feat = serving.features(["avg_transactions"], {"user": "C1214240"})
print("User Result: ")
print(user_feat)
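A specific variant can also be requested by passing a (name, variant) tuple instead of just the feature name. A sketch, assuming the quickstart_v2 variant from the Variants example above:
from featureform import ServingClient

serving = ServingClient(insecure=True)
# Request an explicit variant of the feature for the same entity.
user_feat_v2 = serving.features(
    [("avg_transactions", "quickstart_v2")], {"user": "C1214240"}
)
print(user_feat_v2)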
Training
We can serve a training dataset from Postgres with the Serving Client as well. This example takes the name of the training set and prints out a small sample of its rows.
from featureform import ServingClient

client = ServingClient(insecure=True)
dataset = client.training_set("fraud_training")
for i, batch in enumerate(dataset):
    print(batch)
    if i > 25:
        break
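The returned dataset can also be transformed before iterating. A minimal sketch, assuming the dataset object exposes repeat, shuffle, and batch helpers (the counts below are arbitrary):
from featureform import ServingClient

client = ServingClient(insecure=True)
# Loop over the data 10 times, shuffle with a 1000-row buffer, and yield batches of 8 rows.
training_dataset = client.training_set("fraud_training").repeat(10).shuffle(1000).batch(8)
for feature_batch in training_dataset:
    # A model's training step would consume each batch here.
    print(feature_batch)
    break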