Use Case Examples

Use case-based notebooks.

Follow along with these use cases to familiarize yourself with core Gretel features. These examples provide a starting point for common use cases that you can modify to suit your specific needs. We walk through three use cases using both the Gretel SDK and the Gretel CLI in #CLI and SDK Examples. The rest of our #Example Notebooks use the Gretel SDK and are provided in Jupyter Notebook format.

What's Next?

After trying some of our use case examples below, dive into the Gretel Fundamentals section to understand the core Gretel concepts you'll be working with regularly.

CLI and SDK Examples

These examples walk through three core Gretel use cases using both the CLI and the SDK.
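
Before running any of the examples, configure an authenticated SDK session. The snippet below is a minimal sketch, assuming the gretel-client package is installed and you have an API key from the Gretel Console.

```python
# Minimal sketch: authenticate the Gretel Python SDK before running any example.
# Assumes `pip install gretel-client` and an API key from the Gretel Console.
from gretel_client import configure_session

# "prompt" asks for the API key interactively; you can also set the
# GRETEL_API_KEY environment variable instead.
configure_session(api_key="prompt", validate=True)
```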

Example Notebooks

Synthetics

Notebook
Launch
Description

Walk through the basics of using Gretel's Python SDK to create a synthetic dataset from a Pandas DataFrame or CSV (see the minimal SDK sketch after this table).

Train a synthetic model locally and generate data in your environment.

Conditional data generation (seeding a model) is helpful when you want to preserve some of the original row data (primary keys, dates, important categorical data) in synthetic datasets.

Balance demographic representation bias in a healthcare dataset using conditional data generation with a synthetic model.

Use a synthetic model to boost the representation of an extreme minority class in a dataset by incorporating features from nearest neighbors.

Run a sweep to automate hyperparameter optimization for a synthetic model using Weights & Biases.

Augment a popular machine learning dataset with synthetic data to improve downstream accuracy and algorithmic fairness.

This notebook shows how to generate synthetic data directly from a multi-table relational database to support data augmentation and subsetting use cases.

Generate synthetic daily oil price data using the DoppelGANger GAN for time-series data.

Produce a quality score and detailed report for any synthetic dataset vs. real world data.

Use the Gretel ACTGAN model to conditionally generate additional minority samples for a dataset that has only a few instances of the minority class.

Synthesize a sample database using Gretel Relational Synthetics.
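
As referenced above, here is a minimal sketch of the core SDK flow the synthetics notebooks build on: train a model on a CSV in Gretel Cloud and read back a preview of the synthetic data. The project name, blueprint name, and file path are placeholders, not values taken from any specific notebook.

```python
# Minimal sketch: train a synthetic model on a CSV and fetch a data preview.
# The blueprint name and file paths are placeholders (assumptions), not the
# exact configs used in the notebooks above.
import pandas as pd

from gretel_client import configure_session
from gretel_client.helpers import poll
from gretel_client.projects import create_or_get_unique_project
from gretel_client.projects.models import read_model_config

configure_session(api_key="prompt", validate=True)

project = create_or_get_unique_project(name="synthetics-example")
config = read_model_config("synthetics/default")  # assumed stock blueprint name

model = project.create_model_obj(model_config=config, data_source="train.csv")
model.submit_cloud()   # train in Gretel Cloud
poll(model)            # block until the job finishes

# The trained model ships a gzipped CSV preview of generated records.
synthetic_df = pd.read_csv(model.get_artifact_link("data_preview"), compression="gzip")
print(synthetic_df.head())
```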

Transforms

Notebook
Launch
Description

In this blueprint, we will create a transform policy to identify and redact or replace PII with fake values. We will then use the SDK to transform a dataset and examine the results (a minimal SDK sketch follows this table).

Label and transform sensitive data locally in your environment.

In this deep dive, we will walk through some of the more advanced features to de-identify data with the Transform API, including bucketing, date shifts, masking, and entity replacements.

This notebook walks through creating a policy using the Transform API to de-identify and anonymize data in a Postgres database for test use cases.

This notebook uses the Gretel Relational Transform model to redact PII in a sample database.
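
As referenced above, the transform notebooks share the same submit-and-poll pattern: train a transform model from a policy, then run it against a dataset to produce de-identified records. The sketch below assumes a stock "transform/default" blueprint and a placeholder input file; the notebooks use their own policies with rules for redaction, replacement, bucketing, and date shifting.

```python
# Minimal sketch: train a Transform model and apply it to a dataset.
# The blueprint name and file path are assumptions, not values from the notebooks.
from gretel_client import configure_session
from gretel_client.helpers import poll
from gretel_client.projects import create_or_get_unique_project
from gretel_client.projects.models import read_model_config

configure_session(api_key="prompt", validate=True)

project = create_or_get_unique_project(name="transform-example")
config = read_model_config("transform/default")  # assumed stock blueprint name

# Training builds the labeling pipeline described by the policy.
model = project.create_model_obj(model_config=config, data_source="customers.csv")
model.submit_cloud()
poll(model)

# Applying the trained model produces the de-identified output records.
record_handler = model.create_record_handler_obj(data_source="customers.csv")
record_handler.submit_cloud()
poll(record_handler)
```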

Classify

Notebook
Launch
Description

In this blueprint, we will create a classification policy to identify PII as well as values matching a custom regular expression. We will then use the SDK to classify data and examine the results (a minimal SDK sketch follows this table).

Label managed and custom data types locally in your environment.

In this blueprint, we analyze and label a set of free-text email dumps, looking for PII and other potentially sensitive information using NLP.
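
As referenced above, a classification job follows the same submit-and-poll pattern, with the policy supplied as a config. The config below is illustrative only: the layout and label names are assumptions meant to show the general shape of a Classify policy, not the exact policies used in the notebooks.

```python
# Minimal sketch: submit a Classify job with an inline policy config.
# The config layout and label names below are assumptions for illustration.
from gretel_client import configure_session
from gretel_client.helpers import poll
from gretel_client.projects import create_or_get_unique_project

configure_session(api_key="prompt", validate=True)

project = create_or_get_unique_project(name="classify-example")

# Assumed policy shape: ask Classify to look for a few common PII label types.
config = {
    "schema_version": "1.0",
    "name": "classify-pii",
    "models": [
        {
            "classify": {
                "data_source": "_",
                "labels": ["person_name", "email_address", "phone_number"],
            }
        }
    ],
}

model = project.create_model_obj(model_config=config, data_source="emails.csv")
model.submit_cloud()
poll(model)
```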

Evaluate

Notebook
Launch
Description

In this notebook, we benchmark datasets and models to analyze multiple synthetic generation algorithms (including, but not limited to, Gretel models). The Benchmark report provides a Synthetic Data Quality Score (SQS) for each generated synthetic dataset, as well as train time, generate time, and total runtime (in seconds).

Evaluate synthetic data vs. real data by training AutoML classifiers on each. The Gretel Synthetic Data Utility Report provides a detailed table of classification metrics.

Evaluate synthetic data vs. real data by training AutoML regression models on each. The Gretel Synthetic Data Utility Report provides a detailed table of regression metrics.
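
The quality comparison behind these reports can also be produced directly from the SDK. The sketch below assumes the gretel-client evaluation module's QualityReport class and placeholder file paths; it compares a synthetic dataset against the real data it was trained on and prints the top-line Synthetic Data Quality Score.

```python
# Minimal sketch: compare a synthetic dataset against real data and print the
# top-line Synthetic Data Quality Score. File paths are placeholders, and the
# QualityReport usage reflects the gretel-client evaluation module as assumed here.
from gretel_client import configure_session
from gretel_client.evaluation.quality_report import QualityReport

configure_session(api_key="prompt", validate=True)

report = QualityReport(data_source="synthetic.csv", ref_data="real.csv")
report.run()            # runs the evaluation job in Gretel Cloud

print(report.peek())    # summary score
# The full detailed report is also available for saving to disk.
```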
