GCP Setup
We will deploy Gretel Hybrid's required cloud infrastructure using Terraform (an infrastructure as code tool). This guide will walk you through all the steps necessary to install and configure Terraform, even if you haven't used it before.
Gretel Hybrid must be deployed within a GKE cluster. You may choose to use either a Standard GKE cluster or an Autopilot cluster.
For CPU- and GPU-based Gretel jobs, we recommend configuring node types with at least 16 GiB of memory and 4 vCPUs. Only one GPU device is required per worker run.
The specific node types we recommend are:
CPU Gretel Model Workers -> n2-standard-4
GPU Gretel Model Workers -> g2-standard-4
Please consult the GPU quota section in the appendix for help requesting a GPU quota increase if you run into any resource constraints when deploying GPU nodes.
If you're missing brew, helm, or kubectl, refer to their installation documentation before continuing.
macOS
You can install the gcloud CLI using brew.
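For reference, a typical Homebrew installation looks like the following. The cask name is an assumption based on common usage; verify it with `brew search google-cloud` if it is not found.

```bash
# Install the gcloud CLI via Homebrew (cask name assumed -- verify with `brew search google-cloud`)
brew install --cask google-cloud-sdk
```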
Linux
Windows
Log in with the gcloud CLI using the following commands.
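The commands below are the standard gcloud login flow. The second command provides Application Default Credentials, which Terraform's Google provider uses; the project ID is a placeholder for your own.

```bash
# Authenticate your user account with gcloud
gcloud auth login

# Provide Application Default Credentials for tools like Terraform
gcloud auth application-default login

# Point gcloud at the GCP project you will deploy into (placeholder project ID)
gcloud config set project <your-project-id>
```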
If you have not done so for your GCP Project, you will need to enable the compute and container APIs using the following commands.
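Both APIs can be enabled with a single gcloud call, for example:

```bash
# Enable the Compute Engine and Kubernetes Engine (container) APIs for the active project
gcloud services enable compute.googleapis.com container.googleapis.com
```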
Now that you have a local copy of the repository, let's change into the working directory used for deploying a full Gretel Hybrid environment with Terraform.
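As a rough sketch, assuming the repository was cloned into ./gretel-hybrid and the GCP full deployment example lives under a path like the one below (adjust to match the actual repository layout):

```bash
# Change into the full_deployment working directory (path is an assumption -- adjust as needed)
cd gretel-hybrid/gcp/full_deployment
```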
Here is what our working directory looks like.
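Roughly speaking, you can expect a standard Terraform root module containing the two example files this guide renames later; main.tf and variables.tf are assumptions about a typical layout, not confirmed contents.

```bash
# Example listing only -- exact contents may differ from the repository
ls
# backend.tf.example        <- rename to backend.tf to use a GCS state bucket
# terraform.tfvars.example  <- rename to terraform.tfvars and set your values
# main.tf                   <- assumed: module wiring for the Gretel Hybrid deployment
# variables.tf              <- assumed: variable declarations behind terraform.tfvars
```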
Storing the Terraform state in a GCS bucket (explained in the appendix) provides two benefits: it is much harder to accidentally delete a state file from your bucket, and anyone with access to the state file can work with the infrastructure as code configuration, allowing for collaboration within your team or organization.
If you're deploying this Gretel Hybrid Cluster for a production environment or for any sort of extended test, you should create a GCS Bucket to keep your Terraform State so that you do not lose state information.
We provide a script to create this Bucket for you. Simply run the command below, setting the flags as desired.
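If the script's invocation is unavailable to you, the bucket it creates can also be provisioned manually with standard gsutil commands; this is a hedged sketch with placeholder bucket name and region, and the provided script may configure additional settings.

```bash
# Create a GCS bucket to hold Terraform state (placeholder name/region -- choose your own)
gsutil mb -l us-central1 gs://my-gretel-hybrid-tf-state

# Turn on object versioning so prior state versions are recoverable
gsutil versioning set on gs://my-gretel-hybrid-tf-state
```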
Now that the GCS bucket is created, we need to tell Terraform to use it. Rename the backend.tf.example file to backend.tf and edit the backend block at the beginning of the file. Save your changes to the backend.tf file.
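As a sketch, the rename plus the kind of gcs backend block you should end up with; the bucket name and prefix below are placeholders for your own values.

```bash
# Rename the example backend file
mv backend.tf.example backend.tf

# After editing, the backend block in backend.tf should look roughly like:
#
#   terraform {
#     backend "gcs" {
#       bucket = "my-gretel-hybrid-tf-state"   # the GCS bucket created above
#       prefix = "gretel-hybrid"               # any state prefix you prefer
#     }
#   }
```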
The next step is to configure the variables Terraform will use to create the resources for Gretel Hybrid. First, rename the terraform.tfvars.example file to terraform.tfvars using the mv command shown below. Then review the variables inside the file, which are listed with their default values, and configure them as desired.
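The rename command:

```bash
# Rename the example variables file, then open it in an editor to review the defaults
mv terraform.tfvars.example terraform.tfvars
```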
Make any desired changes to the provided variables. These will be used to create the GCP and Kubernetes resources that are part of the Gretel Hybrid deployment.
After retrieving your API key, export the necessary variable using the below command. The variable name must match TF_VAR_gretel_api_key exactly.
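A minimal example of exporting the key; the placeholder value is your own API key copied from the Gretel Console.

```bash
# Terraform picks up any TF_VAR_-prefixed environment variable as an input variable
export TF_VAR_gretel_api_key="<your-gretel-api-key>"
```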
Run these terraform commands from the full_deployment directory.
Initialize Terraform. This is an idempotent operation and is always safe to run (resources will not be created or destroyed).
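The standard initialization command:

```bash
# Download providers/modules and configure the GCS backend defined in backend.tf
terraform init
```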
View the changes terraform will make upon deployment. Use this any time you make changes to take a closer look at what is going on.
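The standard planning command:

```bash
# Show the execution plan without changing any infrastructure
terraform plan
```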
Deploy the module. This will require user confirmation so don't walk away from your shell until you confirm by typing "yes" to start the deployment.
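The standard apply command:

```bash
# Create the Gretel Hybrid infrastructure; type "yes" at the prompt to proceed
terraform apply
```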
It will take 5-10 minutes for all the necessary resources to be deployed. Congratulations! You've deployed everything necessary to run Gretel Hybrid within your own cloud tenant.
Follow our Test Your Deployment guide to verify your deployment by running a model training job.
If you would like to clean up your provisioned GCP resources, the following command will delete all provisioned resources. The command will ask for confirmation before proceeding.
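The standard teardown command:

```bash
# Destroy everything this module created; confirm with "yes" when prompted
terraform destroy
```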
Using the filters at the top of the page, you can search for GPU quotas in the region you are operating in.
First you can filter by using the gpu string, then choose your GPU type:
Once you select the GPU type you want to request an increase for, you can then filter on your region:
Finally, select the specific quota, then click Edit Quotas in the top right and choose the quota required. We recommend using L4, T4, or A100 GPUs. Next, you need to make sure the global GPU quota is set for your project:
You can filter on Quota: GPUs (all regions) and increase the quota.
Even if you have already installed the Gretel CLI, we need to make sure that the GCP libraries are installed for testing our deployment once it is finished. Run the below command to install the latest version of the Gretel CLI with the GCP dependencies.
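A hedged sketch of the install; we assume the GCP dependencies are exposed as a gcp extra on the gretel-client package, so check the package documentation if pip reports an unknown extra.

```bash
# Install/upgrade the Gretel CLI with GCP dependencies (extra name assumed to be "gcp")
pip install -U "gretel-client[gcp]"
```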
The official gcloud CLI installation documentation is available at https://cloud.google.com/sdk/docs/install. Follow the installation instructions for your platform from that page.
After installing the gcloud CLI you will need to install the gke-gcloud-auth-plugin so kubectl can authenticate to your GKE cluster. Please run the below command.
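Assuming gcloud was installed via the SDK installer, the plugin can be added as a gcloud component; if you installed gcloud through a Linux package manager, install the plugin through that package manager instead.

```bash
# Install the kubectl authentication plugin for GKE as a gcloud component
gcloud components install gke-gcloud-auth-plugin
```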
If you're using kubectl version 1.25 or earlier, you'll need to follow some extra steps to use the gke-gcloud-auth-plugin. These steps are documented by Google.
The Terraform CLI will utilize the authenticated session from the gcloud CLI to deploy and manage your Gretel Hybrid infrastructure.
The official Terraform installation instructions are available on HashiCorp's website at https://developer.hashicorp.com/terraform/install. On that page there is a dropdown menu where you can select your OS and installation method.
The Gretel Hybrid git repository is hosted on GitHub. You may clone the repository by running the below command.
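A hedged example of cloning; the repository URL below is an assumption based on the project name, so substitute the URL from the official link if it differs.

```bash
# Clone the Gretel Hybrid repository (URL assumed -- verify against the official link)
git clone https://github.com/gretelai/gretel-hybrid.git
cd gretel-hybrid
```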
Terraform stores information about currently deployed resources in the Terraform state. By default, Terraform stores this information in a local file within the current working directory. You can store the Terraform state in a GCS bucket instead, which provides the two benefits described earlier in this guide.
You can get your Gretel API key from the console by clicking the drop down menu in the top right hand corner of the console and selecting "API Key" under the "Account Settings" section. If you haven't logged into the Gretel Console or set up an account, follow our documentation to sign up first.
For Gretel models that utilize GPUs, you'll need to request GPU quota increases. Visit the Quotas page in the Google Cloud console.