Privacy Protection

Fine-tune Gretel's privacy protection filters to prevent adversarial attacks and better meet your data sharing needs.

In addition to the privacy inherent in the use of synthetic data, you can add supplemental protection by means of Gretel's privacy filters. These configuration settings help ensure that the generated data is safe from adversarial attacks.

Primary Protection Filters

There are four privacy protection mechanisms:

Overfitting Prevention: This mechanism ensures that the synthetic model will stop training before it has a chance to overfit. When a model is overfit, it starts to memorize the training data instead of learning generalized patterns in the data. This is a severe privacy risk, as overfit models are commonly exploited by adversaries seeking to gain insights into the original data. Overfitting prevention is enabled using the validation_split and early_stopping configuration settings, and is available with Gretel LSTM.

Both of these settings are booleans. Setting validation_split to True will automatically set aside a random 20% of the training data as validation data to prevent overfitting.

We recommend keeping validation_split enabled, except in the following scenarios:

  1. Time-series data: information leakage could occur, making the validation set less useful.

  2. Anomaly detection use cases: you may wish to ensure that the model is trained on all positive samples.

  3. Small datasets: if your training dataset is very small, you may need to ensure all samples are present for model training.
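
As a minimal sketch, here is how the two settings appear in the params block of a Gretel LSTM configuration when one of the scenarios above applies; the values assume a very small dataset where every sample is needed for training.

params:
  early_stopping: True       # still stop training before the model can overfit
  validation_split: False    # keep all samples in the training set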

Similarity Filters: Similarity filters ensure that no synthetic record is overly similar to a training record. Synthetic records that are overly similar to training records can be a severe privacy risk, as adversarial attacks commonly exploit such records to gain insights into the original data. Similarity Filtering is enabled by the privacy_filters.similarity configuration setting. Similarity filters are available for Gretel LSTM and Gretel ACTGAN.

Allowed values are null, auto, medium, and high. A value of medium will filter out any synthetic record that is an exact duplicate of a training record, while high will filter out any synthetic record that is 99% similar or more to a training record. auto is equivalent to medium for most datasets, but can fall back to null if the similarity filter prevents the synthetic model from generating the requested number of records. However, if differential privacy is enabled, auto similarity filters will always be equivalent to null.
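
As an illustrative fragment (not a complete configuration), the similarity filter is set under privacy_filters; the comment summarizes the auto behavior described above.

privacy_filters:
  similarity: auto   # behaves like medium for most datasets; can fall back to null
                     # when filtering would block the requested number of records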

Outlier Filters: Outlier filters ensure that no synthetic record is an outlier with respect to the training dataset. Outliers revealed in the synthetic dataset can be exploited by Membership Inference Attacks, Attribute Inference Attacks, and a wide variety of other adversarial attacks, and are therefore a serious privacy risk. Outlier Filtering is enabled by the privacy_filters.outliers configuration setting. Outlier filters are available for Gretel LSTM and Gretel ACTGAN.

Allowed values are null, auto, medium, and high. A value of medium will filter out any synthetic record that has a very high likelihood of being an outlier, while high will filter out any synthetic record that has a medium to high likelihood of being an outlier. auto is equivalent to medium for most datasets, but can fall back to null if the outlier filter prevents the synthetic model from generating the requested number of records. However, if differential privacy is enabled, auto outlier filters will always be equivalent to null.
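
For a higher-protection profile, such as data destined for external sharing, both filters can be raised to high. This fragment is illustrative and would take the place of the privacy_filters block in the full configuration shown below.

privacy_filters:
  outliers: high     # drop records with a medium-to-high likelihood of being outliers
  similarity: high   # drop records that are 99% or more similar to a training record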

Differential Privacy: We recommend the Gretel Tabular DP model for generating data with strong differential privacy guarantees (ε < 10).
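
For reference, a minimal Gretel Tabular DP configuration might look like the sketch below. The tabular_dp model key and the epsilon and delta parameter names are assumptions made for illustration; consult the Gretel Tabular DP model documentation for the exact schema.

schema_version: "1.0"

models:
  - tabular_dp:            # assumed model key for Gretel Tabular DP
      data_source: __tmp__
      params:
        epsilon: 5         # assumed parameter: target privacy budget (smaller is more private)
        delta: auto        # assumed parameter: delta in the (epsilon, delta)-DP guarantee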

Model Configuration

Synthetic model training and generation are driven by a configuration file. Here is an example configuration with commonly used privacy settings for Gretel LSTM.

schema_version: "1.0"

models:
  - synthetics:
      data_source: __tmp__
      params:
        early_stopping: True        # stop training before the model begins to overfit
        validation_split: True      # hold out a random 20% of training data as validation data
        dp: False                   # set to True to train with differential privacy
        dp_noise_multiplier: 0.001  # amount of noise added to gradients during DP training
        dp_l2_norm_clip: 5.0        # per-sample gradient clipping norm used during DP training
      privacy_filters:
        outliers: medium            # filter synthetic records likely to be outliers
        similarity: medium          # filter synthetic records that duplicate training records
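
If you use the auto filter values, they fall back to null only when filtering would prevent the model from producing the requested number of records, so it can help to make that request explicit. As a sketch, assuming the synthetics model accepts a generate section with a num_records field, the request sits alongside params and privacy_filters:

      generate:
        num_records: 5000   # assumed field: number of synthetic records to request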

Understanding Privacy Protection Levels

Your Data Privacy Score is calculated by measuring how well your data is protected against simulated adversarial attacks. Your Privacy Configuration Score is calculated from the privacy mechanisms that were enabled, and both scores are displayed in the Gretel Performance Report. The top of the report displays gauges showing the scores for the generated synthetic data.

Values can range from Excellent to Poor, and we provide a matrix with the recommended Privacy Protection Levels for a given data sharing use case.

Data sharing use case | Excellent | Very Good | Good | Normal
Internally, within the same team | ✓ | ✓ | ✓ | ✓
Internally, across different teams | ✓ | ✓ | ✓ |
Externally, with trusted partners | ✓ | ✓ | |
Externally, public availability | ✓ | | |

We provide a summary of the protection level against Membership Inference Attacks and Attribute Inference Attacks.

For each metric, we provide a breakdown of the attack results that contributed to the score.

Membership Inference Protection is a measure of how well-protected your data is from membership inference attacks. A membership inference attack is a type of privacy attack on machine learning models where an adversary aims to determine whether a particular data sample was part of the model's training dataset. By exploiting the differences in the model's responses to data points from its training set versus those it has never seen before, an attacker can attempt to infer membership. This type of attack can have critical privacy implications, as it can reveal whether specific individuals' data was used to train the model. To simulate this attack, we take a 5% holdout from the training data prior to training the model. Based on directly analyzing the synthetic output, a high score indicates that your training data is well-protected from this type of attack. The score is based on 360 simulated attacks, and the percentages indicate how many fell into each protection level.

Attribute Inference Protection is a measure of how well-protected your data is from attribute inference attacks. An attribute inference attack is a type of privacy attack on machine learning models where an adversary seeks to infer missing attributes or sensitive information about individuals from their data that was used to train the model. By leveraging the model's output, the attacker can attempt to predict unknown attributes of a data sample. This type of attack poses significant privacy risks, as it can uncover sensitive details about individuals that were not intended to be revealed by the data owners. Based on directly analyzing the synthetic output, an overall high score indicates that your training data is well-protected from this type of attack. For a specific attribute, a high score indicates that even when other attributes are known, that specific attribute is difficult to predict.

We also provide a summary of available and enabled privacy protections.
