Submit a new data ingestion request to upload the required data into NexusOne. Returns the ingestion flow details.
The access token received from the authorization server in the OAuth 2.0 flow.
The name of the ingestion job.
Defines the type of data ingestion (Valid values: file, jdbc, lakehouse).
The target schema where data will be stored.
The target table name for the data.
Defines how the ingested data is written to the target table (Valid values: append, overwrite, merge).
The cron expression that specifies when or how often the ingestion runs.
Columns used to match and merge data when the mode is merge.
Unique ID of the uploaded file.
S3 URL to the file you want to ingest.
The format of the file (Valid values: csv, orc, xml, parquet, xls).
Options for reading the file.
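Putting the parameters above together, a file ingestion request body might look like the sketch below. The field names (`name`, `type`, `schema`, `table`, `mode`, `schedule`, `merge_keys`, `file_id`, `file_url`, `file_format`, `file_options`) are illustrative assumptions, not confirmed by this reference; check the actual NexusOne request schema before use.

```python
import json

# Hypothetical file-ingestion request body; all field names are
# illustrative -- verify them against the real NexusOne schema.
file_request = {
    "name": "daily_sales_upload",            # name of the ingestion job
    "type": "file",                          # file | jdbc | lakehouse
    "schema": "analytics",                   # target schema
    "table": "sales",                        # target table
    "mode": "append",                        # append | overwrite | merge
    "schedule": "0 2 * * *",                 # cron: run every day at 02:00
    "file_id": "f-123e4567",                 # unique ID of the uploaded file
    "file_url": "s3://example-bucket/uploads/sales.csv",
    "file_format": "csv",                    # csv | orc | xml | parquet | xls
    "file_options": {"header": "true", "delimiter": ","},
}
print(json.dumps(file_request, indent=2))
```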
Connection URL for the JDBC data source when the ingestion type is jdbc.
Username for the JDBC connection.
Password for the JDBC connection.
Type of JDBC database.
Source schema in the JDBC database.
Source table in the JDBC database.
SQL query used to extract data via JDBC.
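A JDBC ingestion uses the connection parameters above instead of a file reference. The sketch below uses assumed field names (`jdbc_url`, `jdbc_user`, `jdbc_password`, `jdbc_db_type`, `source_schema`, `source_table`, `query`); treat them as placeholders for whatever the NexusOne schema actually defines.

```python
import json

# Hypothetical JDBC-ingestion request body; field names are illustrative.
jdbc_request = {
    "name": "orders_from_postgres",
    "type": "jdbc",
    "schema": "staging",                     # target schema
    "table": "orders",                       # target table
    "mode": "overwrite",
    "schedule": "0 */6 * * *",               # cron: every six hours
    "jdbc_url": "jdbc:postgresql://db.example.com:5432/shop",
    "jdbc_user": "ingest_svc",
    "jdbc_password": "REDACTED",             # inject from a secret store, never hard-code
    "jdbc_db_type": "postgresql",            # type of JDBC database
    "source_schema": "public",               # source schema in the JDBC database
    "source_table": "orders",                # source table in the JDBC database
    "query": "SELECT * FROM public.orders WHERE status = 'shipped'",
}
print(json.dumps(jdbc_request, indent=2))
```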
Source schema in the lakehouse when the ingestion type is lakehouse.
Source table in the lakehouse.
WHERE conditions for filtering lakehouse data (For example, salary > 2000, store_id = 1).
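A lakehouse ingestion is also a natural fit for merge mode, which needs the merge-key columns described earlier. As before, the field names below (`merge_keys`, `source_schema`, `source_table`, `conditions`) are assumed for illustration only.

```python
import json

# Hypothetical lakehouse-ingestion request body using merge mode;
# field names are illustrative.
lakehouse_request = {
    "name": "hr_salaries_sync",
    "type": "lakehouse",
    "schema": "reporting",                   # target schema
    "table": "salaries",                     # target table
    "mode": "merge",
    "merge_keys": ["employee_id"],           # columns matched when mode is merge
    "source_schema": "hr",                   # source schema in the lakehouse
    "source_table": "salaries",              # source table in the lakehouse
    "conditions": ["salary > 2000", "store_id = 1"],  # WHERE filters from the reference
}
print(json.dumps(lakehouse_request, indent=2))
```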
Domain to be set for the dataset in DataHub.
Tags to be set for the dataset in DataHub.
The logged-in user's email address.
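To submit a request, the OAuth 2.0 access token described above goes in the `Authorization` header. The sketch below assembles (but does not send) the HTTP call; the endpoint URL and the `X-User-Email` header name are assumptions, since this reference does not state them.

```python
import json
import urllib.request

# Hypothetical endpoint; the real NexusOne path is not given in this reference.
ENDPOINT = "https://nexusone.example.com/api/v1/ingestions"

def build_ingestion_request(body: dict, access_token: str,
                            user_email: str) -> urllib.request.Request:
    """Assemble (without sending) an ingestion request with OAuth 2.0 auth."""
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {access_token}",  # token from the OAuth 2.0 flow
            "Content-Type": "application/json",
            "X-User-Email": user_email,                 # illustrative header name
        },
        method="POST",
    )

req = build_ingestion_request(
    {"name": "demo_job", "type": "file"}, "example-token", "analyst@example.com"
)
print(req.get_method(), req.get_header("Authorization"))
```

Sending the request (e.g. via `urllib.request.urlopen(req)`) should return the ingestion flow details described at the top of this reference.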