Gretel Transform combines data classification with data transformation to easily detect and anonymize or mutate sensitive data.
Gretel Transform supports custom transformation logic, an expanded library of detectable and fakeable entities, and detection of both PII and custom entities.
Gretel Transform is a general-purpose programmatic dataset editing tool. Most commonly, Gretel customers use it to:
- De-identify datasets, for example by detecting Personally Identifiable Information (PII) and replacing it with fake PII of the same type.
- Pre-process datasets before using them to train a synthetic data model, for example by removing low-quality records (such as records containing too many blank values) or dropping columns of UUIDs or hashes, which are not useful to synthetic data models since they contain no discernible correlations or distributions for the model to learn.
- Post-process synthetic data generated by a synthetic data model, for example to validate that generated records respect business-specific rules, and to drop or fix any records that don't.
As with other Gretel models, you can configure Transform using YAML. Transform config files consist of two sections:
- `globals`, which contains default parameter values (such as the locale and seed used to generate fake values) and user-defined variables applicable throughout the config.
- `steps`, which lists transformation steps applied sequentially. Transformation steps can define variables (`vars`) and manipulate columns (`add`, `drop`, and `rename`) and rows (`drop` and `update`). In practice, most Transform configs contain a single step, but multiple steps are useful when, for example, the new value of column B depends on the original (non-transformed) value of column A, and column A must itself eventually be transformed. In that case, the first step can set the new value of column B while leaving column A unchanged, and the second step can then set the new value of column A.
Below is an example config which shows this config structure in action:
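The sketch below reconstructs such a config from the walkthrough that follows; confirm the exact step syntax and field names against the Reference page.

```yaml
# Sketch of a two-step Transform config; the structure mirrors the
# walkthrough below, but verify exact field names against the Reference page.
schema_version: "1.0"
name: example-transform
models:
  - transform_v2:
      globals:
        locales:
          - en_CA
          - fr_CA
      steps:
        # Step 1: add a row_index column, drop rows with a blank user_id,
        # then store each record's original index in row_index.
        - columns:
            add:
              - name: row_index
          rows:
            drop:
              - condition: row.user_id is none
            update:
              - name: row_index
                value: index
        # Step 2: fake detected phone numbers, drop the now-unneeded user_id
        # column, and rename the phone number columns.
        - columns:
            drop:
              - name: user_id
            rename:
              - name: phone_number_1
                value: cell_phone
              - name: phone_number_2
                value: home_phone
          rows:
            update:
              - entity: phone_number
                value: fake.phone_number()
```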
The config above:
- Sets the default locale for fake values to Canada (English) and Canada (French). When multiple locales are provided, a random one is chosen from the list for each fake value.
- Adds a new column named `row_index`, initially containing only blank values.
- Drops invalid rows, which we define here as rows containing blank `user_id` values. `condition` is a Jinja template expression, which allows for custom validation logic.
- Sets the value of the new `row_index` column to the index of the record in the original dataset (this can be helpful for use cases where the ability to "reverse" transformations or maintain a mapping between the original and transformed values is important).
- Replaces all values within columns detected as containing phone numbers (including `phone_number_1` and `phone_number_2`) with fake phone numbers having area codes in Canada, since the default locale is set to `en_CA` and `fr_CA` in the `globals` section. `fake` is a Faker object supporting all standard Faker providers.
- Drops the sensitive `user_id` column. Note that this is done in the second step, since that column is needed in the first step to drop invalid rows.
- Renames the `phone_number_1` and `phone_number_2` columns to `cell_phone` and `home_phone`, respectively.
To get started building your own Transform config for de-identification or for pre- or post-processing datasets, see the Examples page for starter configs covering several use cases, and the Reference page for the full list of supported transformation steps, template expression syntax, and detectable entities.
Below are a few complete sample configs to help you quickly get started with some of the most common Transform use cases.
Fall back on hashing for entities not supported by Faker. If you don't require NER, remove the last rule (`type: text -> fake_entities`) to run this config more than 10x faster on datasets containing free text columns.
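A rough sketch of such a config is shown below; the `classify` globals, the `fallback_value` field, and the `hash` and `fake_entities` filters are assumed names used here for illustration, so check the Reference page for the exact syntax.

```yaml
schema_version: "1.0"
models:
  - transform_v2:
      globals:
        classify:
          enable: true   # assumed flag enabling column entity detection
      steps:
        - rows:
            update:
              # Fake any column detected as a known entity; fall back to
              # hashing when Faker has no provider for that entity type.
              - condition: column.entity is not none
                value: column.entity | fake
                fallback_value: this | hash
              # Run NER over free text columns and fake the detected entities.
              # Remove this rule if you don't need NER -- it is the slowest
              # part of the config on datasets with free text columns.
              - type: text
                value: this | fake_entities
```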
If you need to preserve certain ID columns for auditability or to maintain relationships between tables, you can explicitly exclude these columns from any transformation rules.
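For instance, a sketch of one way to do this, assuming `globals` variables can be referenced as `vars` inside conditions and that the current column name is exposed as `column.name` (check the Reference page for the exact names):

```yaml
schema_version: "1.0"
models:
  - transform_v2:
      globals:
        classify:
          enable: true   # assumed, as in the previous sketch
        vars:
          # Hypothetical list of ID columns to leave untouched
          preserved_columns: ["user_id", "account_id"]
      steps:
        - rows:
            update:
              # Only fake detected entities in columns that are not preserved
              - condition: column.name not in vars.preserved_columns and column.entity is not none
                value: column.entity | fake
```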
You can use Python's built-in `re` library for regex operations. Below, we go a step further by listing all regular expressions we want to replace, along with their Faker function mappings, in the `regex_to_faker` variable, then iterating through them to replace all of their occurrences in every free text column.
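A sketch along those lines follows; the `regex_to_faker` mapping and the regexes themselves are illustrative, and it assumes the `re` module and Faker providers are accessible from template expressions as described above (the Jinja `namespace` object is only there to carry the partially substituted text across loop iterations):

```yaml
schema_version: "1.0"
models:
  - transform_v2:
      globals:
        vars:
          # Hypothetical mapping of regex patterns to Faker provider names
          regex_to_faker:
            "[0-9]{3}-[0-9]{2}-[0-9]{4}": ssn
            "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+": email
      steps:
        - rows:
            update:
              # Apply every regex in turn to all free text columns,
              # replacing matches with a fake value of the mapped type.
              - type: text
                value: >
                  {% set ns = namespace(text=this) %}
                  {% for pattern, provider in vars.regex_to_faker.items() %}
                  {% set ns.text = re.sub(pattern, fake[provider](), ns.text) %}
                  {% endfor %}
                  {{ ns.text }}
```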
Transform can be used to post-process synthetic data to increase accuracy, for example by dropping invalid rows according to custom business logic, or by ensuring calculated field values are accurate.
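As a small sketch (the column names and rules here are hypothetical, purely to illustrate the pattern):

```yaml
schema_version: "1.0"
models:
  - transform_v2:
      steps:
        - rows:
            drop:
              # Hypothetical business rule: a shipment can't precede its order
              - condition: row.ship_date < row.order_date
            update:
              # Hypothetical calculated field: recompute the total from its parts
              - name: total_amount
                value: row.unit_price * row.quantity
```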