Running Locally
This walkthrough shows how to set up a local, GPU-accelerated environment for training synthetic data models and generating data, as an alternative to training in the Gretel SaaS service. By default, Gretel jobs execute in the Gretel SaaS cloud and leverage its compute and state-of-the-art GPUs; the steps below configure your own systems for model training and generation instead.

Supported Operating Systems

Debian or Ubuntu 18.04 is highly recommended for training deep learning models.

Supported GPUs

For training and generating data from Gretel's deep learning-based synthetic data models, a GPU is strongly recommended. The following GPU-enabled devices are supported:

Install Docker Engine

To run Gretel Workers in your own environment, you must install Docker on your host(s). Docker's convenience script is the recommended way to install it:
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
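As a quick sanity check once the install script finishes, a small POSIX-shell helper can confirm that the commands this guide relies on are actually on your PATH. This is a sketch, and the name `require_cmd` is ours, not part of Docker or Gretel:

```shell
# Sketch: fail loudly when a required command is missing from PATH.
# require_cmd is our own helper name, introduced for illustration.
require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "missing required command: $1" >&2
    return 1
  }
}
```

For example, `require_cmd curl && require_cmd docker` before continuing with the steps below.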
After Docker is installed, you should be able to access it with sudo, for example: sudo docker ps
To run Docker without sudo, add your user to the docker group:
sudo groupadd docker
sudo usermod -aG docker $USER
From here, restart your shell session (or log out and back in); you should then be able to run Docker without sudo via this command: docker ps
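Group changes only take effect in new login sessions, so it can help to check membership before assuming docker works without sudo. The sketch below does that; `in_group` is our own helper name, not part of Docker:

```shell
# Sketch: check whether a user belongs to a given group (e.g. docker).
# usage: in_group GROUP [USER]; USER defaults to the current user.
in_group() {
  id -nG "${2:-$(id -un)}" | tr ' ' '\n' | grep -qx "$1"
}
```

For example: `in_group docker || echo "log out and back in (or run: newgrp docker) first"`.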

GPU Configuration

Below is a script that configures an Ubuntu 18.04 machine with NVIDIA GPUs to work with Gretel's CLI. Provided you have created a VM or set up a machine with Ubuntu, Docker, and a GPU, you can run:
curl https://raw.githubusercontent.com/gretelai/gretel-blueprints/main/utils/gpu_docker_setup.sh | bash
When this script completes you should see output similar to the following:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.19.01    Driver Version: 465.19.01    CUDA Version: 11.3     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA Tesla T4     On   | 00000000:00:04.0 Off |                    0 |
| N/A   59C    P0    28W /  70W |      0MiB / 15109MiB |      6%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
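If you want to check the driver and CUDA versions programmatically rather than eyeballing the table, a small sketch can pull them out of nvidia-smi's output. The awk field positions are an assumption based on the banner layout above and may need adjusting for other nvidia-smi versions:

```shell
# Sketch: extract the driver and CUDA versions from nvidia-smi's banner line.
# Field positions ($6, $9) assume the table layout shown above.
nvidia_smi_versions() {
  awk '/Driver Version/ { print "driver=" $6, "cuda=" $9 }'
}
```

For example, `nvidia-smi | nvidia_smi_versions` would print `driver=465.19.01 cuda=11.3` for the output above.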
Finally, if you have not already configured Docker to let your current user run commands without sudo (the groupadd and usermod commands shown above), do so now, then log out and back in. At this point, your machine is configured to launch Gretel Workers with GPU support.