Oracle Database
Connect to your Oracle database.
Prerequisites. To create an Oracle Database-based workflow, you will need:
A source Oracle Database connection.
(optional) A list of tables OR SQL queries.
(optional) A destination Oracle Database connection OR object storage connection.
For the source database connection, we recommend using a backup or clone with read-only permissions, instead of connecting directly to your production database.
Do not use your input database connection as an output connector. This action can result in the unintended overwriting of existing data.
An oracle connection is created using the following parameters:
First, create a file on your local computer containing the connection credentials. This file should also include type, name, config, and credentials. The config and credentials fields should contain fields that are specific to the connection being created.
Below is an example Oracle Database connection:
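A sketch of such a credentials file follows, assuming the parameters described later on this page; the exact placement of username and password under config versus credentials may differ in your Gretel version, and the password is a placeholder:

```yaml
# Illustrative sketch; verify field placement against your Gretel version.
type: oracle
name: my-oracle-connection
config:
  username: john
  host: myserver.example.com
  port: 1521
  service_name: my_service_name
credentials:
  password: "<your-password>"   # placeholder, not a real credential
```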
Now that you've created the credentials file, use the CLI to create the connection.
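Assuming the Gretel CLI is installed and authenticated, a command along these lines creates the connection; the exact subcommand and flags may vary by CLI version, and the project ID and filename are placeholders:

```
gretel connections create --project <project-id> --from-file oracle_connection.yaml
```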
In Oracle, the CREATE SCHEMA command does not create a new, standalone schema. Instead, you create a user; when the user is created, a schema is automatically created for that user, and that schema is used by default when the user logs in. To prevent name clashes or data accidents, we encourage you to create separate Oracle users for the Source and Destination connections.
The Oracle source action requires enough access to read from tables and access schema metadata. The following SQL script will create an Oracle user suitable for a Gretel Oracle source.
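A minimal sketch of such a script is shown below. The user name, password, and grants are illustrative assumptions (read access to table data plus catalog metadata), not Gretel's exact script; adapt them to your security policies, and consider narrowing SELECT ANY TABLE to per-table grants.

```sql
-- Illustrative only: names and grants are assumptions, not Gretel's exact script.
CREATE USER gretel_source IDENTIFIED BY "<strong-password>";
GRANT CREATE SESSION TO gretel_source;
-- Read access to table data and schema metadata:
GRANT SELECT ANY TABLE TO gretel_source;
GRANT SELECT_CATALOG_ROLE TO gretel_source;
```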
The following SQL script will create an Oracle user suitable for a Gretel Oracle destination. It will write to its own schema.
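A minimal sketch of a destination user script follows. The user name, tablespace, and grants are illustrative assumptions; the key requirements are the ability to create tables in its own schema and quota to store data.

```sql
-- Illustrative only: names, tablespace, and grants are assumptions.
CREATE USER gretel_dest IDENTIFIED BY "<strong-password>";
GRANT CREATE SESSION TO gretel_dest;
GRANT CREATE TABLE TO gretel_dest;
-- Allow the user to store data in its own schema (default tablespace assumed to be USERS):
ALTER USER gretel_dest QUOTA UNLIMITED ON USERS;
```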
For more details, please check your installation's version and see the Oracle documentation on CREATE USER.
The oracle_source action reads data from your Oracle database. It can be used to extract:
an entire database, OR
selected tables from a database, OR
the results of one or more SQL queries against a database.
Each time the workflow is run, the source action will extract the most recent data from the source database.
When combined in a workflow, the data extracted from the oracle_source action is used to train models and generate data with the gretel_tabular action, and can be written to an output database with the oracle_destination action. Your generated data can also be written to object storage connections; for more information, see Writing to Object Storage.
For the source database connection, we recommend using a backup or clone with read-only permissions, instead of connecting directly to your production database.
The oracle_source action takes slightly different inputs depending on the type of data you wish to extract. Flip through the tabs below to see the input config parameters and example action YAMLs for each type of extraction.
Entire Database
Example Source Action YAML
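The example YAML itself is not reproduced here; a sketch for the entire-database case might look like the following. The action name, connection ID, and exact nesting of config keys are assumptions based on the parameters described on this page:

```yaml
# Illustrative sketch; verify key names against your Gretel version.
actions:
  - name: oracle-read        # hypothetical action name
    type: oracle_source
    connection: <oracle-source-connection-id>
    config:
      sync:
        mode: full           # extract all records from all tables
```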
Whether you are extracting an entire database, selected tables, or querying against a database, the oracle_source action always provides a single output, dataset.
The output of an oracle_source action can be used as the input to a gretel_tabular action in order to transform and/or synthesize a database.
The oracle_destination action can be used to write gretel_tabular action outputs to Oracle destination databases.
Whether you are writing an entire database, selected tables, or table(s) created via SQL query, the oracle_destination action always takes the same input, dataset.
There are multiple strategies for writing records into the destination database. These strategies are configured from the sync.mode field on a destination config. sync.mode may be one of truncate, replace, or append.
When sync.mode is configured with truncate, records are first truncated from the destination table using the TRUNCATE TABLE DDL command.
When sync.mode is configured with truncate, the destination table must already exist in the database.
When sync.mode is configured with replace, the destination table is first dropped and then recreated using the schema from the source table. If the source table is from Oracle, the DDL is extracted using the GET_DDL interface from the DBMS_METADATA package. If the source table is from a non-Oracle source, the destination table schema is inferred based on the column types of the source schema (if present) or data.
When sync.mode is configured with replace, the destination table does not need to exist in the destination.
To respect foreign key constraints and referential integrity, tables without foreign keys are inserted first, and tables with foreign key references are inserted last.
When applying the table truncation or drop operations for truncate or replace, operations are applied in reverse insertion order. This ensures records aren't deleted while they still have incoming foreign key references.
It's also important to note: all table data is first dropped from the database before inserting new records back in. These operations are not atomic, so there may be periods of time when the destination database is in an incomplete state.
When sync.mode is configured with append, the destination action will simply insert records into the table, leaving any existing records in place.
When using the append sync mode, referential integrity is difficult to maintain. It's only recommended to use append mode when syncing ad hoc queries to a destination table. If append mode is configured with a source that syncs an entire database, the destination will likely be unable to insert records while maintaining foreign key constraints or referential integrity, causing the action to fail.
Example Destination Action YAML
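The example YAML itself is not reproduced here; a sketch of a destination action might look like the following. The action names, connection ID, and exact config keys are assumptions based on the parameters described on this page:

```yaml
# Illustrative sketch; verify key names against your Gretel version.
actions:
  - name: oracle-write       # hypothetical action name
    type: oracle_destination
    connection: <oracle-destination-connection-id>
    input: synthesize        # assumed name of the preceding gretel_tabular action
    config:
      sync:
        mode: replace        # drop and recreate destination tables
      dataset: '{outputs.synthesize.dataset}'
```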
You can also write your output dataset to an object storage connection like Amazon S3 or Google Cloud Storage. Whether you are writing an entire database, selected tables, or table(s) created via SQL query, the {object_storage}_destination action always takes the same inputs: filename, input, and path. Additionally, S3 and GCS take bucket, and Azure Blob takes container.
Example Destination Action YAML
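The example YAML itself is not reproduced here; a sketch for an S3 destination might look like the following. The action name, connection ID, bucket name, and path are assumptions, and the templated references follow the {outputs.<action-name>...} pattern described above:

```yaml
# Illustrative sketch; verify key names against your Gretel version.
actions:
  - name: s3-write           # hypothetical action name
    type: s3_destination
    connection: <s3-connection-id>
    input: synthesize        # assumed name of the preceding gretel_tabular action
    config:
      bucket: my-gretel-bucket
      path: oracle-synthetic/
      filename: '{outputs.synthesize.dataset.files.filename}'
      input: '{outputs.synthesize.dataset.files.data}'
```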
Create a synthetic version of your Oracle database.
The following config will extract the entire Oracle database, train and run a synthetic model, then write the outputs of the model back to a destination Oracle database while maintaining referential integrity.
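The config itself is not reproduced here; a hypothetical end-to-end sketch follows, chaining the three actions described on this page. All action names, connection IDs, and exact config keys are assumptions and may differ from your Gretel version:

```yaml
# Illustrative end-to-end workflow sketch, not Gretel's exact config.
name: oracle-synthesize-workflow
actions:
  - name: oracle-read
    type: oracle_source
    connection: <source-connection-id>
    config:
      sync:
        mode: full
  - name: synthesize
    type: gretel_tabular
    input: oracle-read
    config:
      project_id: <project-id>
      train:
        dataset: '{outputs.oracle-read.dataset}'
  - name: oracle-write
    type: oracle_destination
    connection: <destination-connection-id>
    input: synthesize
    config:
      sync:
        mode: replace
      dataset: '{outputs.synthesize.dataset}'
```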
name: Display name of your choosing used to identify your connection within Gretel. Example: my-oracle-connection
username: Unique identifier associated with the specific account authorized to access the database. The connection will be to this user's schema. Example: john
password: Security credential to authenticate the username. Example: ...
host: Fully qualified domain name (FQDN) used to establish a connection to the database server. Example: myserver.example.com
port: Optional port number; if left empty, the default value 1521 will be used. Example: 1521
service_name: Name of the database service to connect to. Example: my_service_name
instance_name (optional): Name of the specific database instance for this connection. Example: instance_id
params (optional): JDBC URL parameters that can be used for advanced configuration. Example: key1=value1;key2=value2
Type: oracle_source
Connection: oracle
Entire Database config:
sync.mode: full - extracts all records from tables in database; (coming soon) subset - extract percentage of records from tables in database
Selected Tables config:
sync.mode: full - extracts all records from selected tables in database; (coming soon) subset - extract percentage of records from selected tables in database
Table list: sequence of mappings that lists the table(s) in the database to extract; name - table name
SQL Queries config:
name - name of query; will be treated as the name of the resulting table
query - SQL statement used to query the connected database
Additional name and query mappings can be provided to include multiple SQL queries.
Output:
dataset: A reference to the data extracted from the database, including tables and relationships/schema.
Type: oracle_destination
Connection: oracle
Input:
dataset: A reference to the table(s) generated by Gretel and (if applicable) the relationship schema extracted from the source database.
sync.mode:
replace - overwrites any existing data in table(s) at destination
append - adds generated data to existing table(s); only supported for query-created tables without primary keys
filename: The name(s) of the file(s) to write data back to. File name(s) will be appended to the path if one is configured. This is typically a reference to the output from the previous action, e.g. {outputs.<action-name>.dataset.files.filename}
input: Data to write to the file. This should be a reference to the output from the previous action, e.g. {outputs.<action-name>.dataset.files.data}
path: Defines the path prefix to write the object(s) into.
bucket (S3 and GCS only): The bucket to write object(s) to. Please include only the name of the bucket, e.g. my-gretel-bucket.
container (Azure Blob only): The container to write object(s) to. Please include only the name of the container, e.g. my-gretel-container.