Submit a new data ingestion request to upload the required data into NexusOne. Returns the ingestion flow details.
The access token received from the authorization server in the OAuth 2.0 flow.
Request model for data ingestion jobs.
The name of the ingestion job.
Defines the type of data ingestion (Valid values: file, jdbc, lakehouse).
The target schema where data will be stored.
The target table name for the data.
Defines how the output is written to the target table (Valid values: append, overwrite, merge).
The cron expression that specifies when or how often the ingestion runs.
Columns used to match and merge data when the mode is merge.
Optional list of column transformations to apply during ingestion. Supports casting to different data types, renaming columns, and encrypting values using vault_encrypt UDF.
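To make the request model above concrete, here is a sketch of a merge-mode ingestion payload. The JSON key names (`name`, `type`, `target_schema`, `merge_columns`, and so on) are assumptions for illustration only; consult the API schema for the exact field names.

```python
# Illustrative ingestion-request payload for a merge-mode job.
# Key names are assumptions, not the confirmed API schema.
payload = {
    "name": "daily_orders_load",    # name of the ingestion job
    "type": "file",                 # file | jdbc | lakehouse
    "target_schema": "sales",       # schema where data will be stored
    "target_table": "orders",       # target table name
    "mode": "merge",                # append | overwrite | merge
    "schedule": "0 2 * * *",        # cron: run every day at 02:00
    "merge_columns": ["order_id"],  # match keys, required when mode is merge
    "transformations": [
        {"column": "amount", "cast": "decimal(10,2)"},      # type cast
        {"column": "cust_email", "udf": "vault_encrypt"},   # encrypt values
    ],
}

# Plausible sanity check: merge mode needs columns to match on.
assert payload["mode"] != "merge" or payload["merge_columns"]
```

The `merge_columns` list only matters when `mode` is `merge`; for `append` or `overwrite` it can be omitted.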
Unique ID of the uploaded file.
S3 URL to the file you want to ingest.
The format of the file (Valid values: csv, orc, xml, parquet, xls).
Options for reading the file.
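For file-based ingestion, the source is identified either by the ID of a previously uploaded file or by a direct S3 URL, together with the file format and reader options. The key names below are assumptions for illustration:

```python
# Illustrative file-source section of an ingestion request.
# Key names and option names are assumptions, not the confirmed schema.
file_source = {
    "file_id": None,                                 # unique ID of an uploaded file, if used
    "s3_url": "s3://my-bucket/exports/orders.csv",   # or a direct S3 location
    "format": "csv",                                 # csv | orc | xml | parquet | xls
    "options": {"header": "true", "delimiter": ","}, # options for reading the file
}

assert file_source["format"] in ("csv", "orc", "xml", "parquet", "xls")
```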
Connection URL for the JDBC data source when the format is jdbc.
Username for the JDBC connection.
Password for the JDBC connection.
Type of JDBC ingestion (Valid values: table, query).
Source schema in the JDBC database.
Source table in the JDBC database.
SQL query used to extract data via JDBC.
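The two JDBC ingestion types can be sketched side by side: `table` names a schema and table to pull, while `query` supplies arbitrary SQL. Key names are assumptions for illustration:

```python
# Illustrative JDBC-source sections; key names are assumptions.
# Table-based extraction: pull a whole table from the source schema.
jdbc_table_source = {
    "url": "jdbc:postgresql://db.example.com:5432/shop",
    "username": "etl_user",
    "password": "********",     # supply via a secret store in practice
    "type": "table",            # table | query
    "source_schema": "public",
    "source_table": "orders",
}

# Query-based extraction: run SQL instead of naming a table.
jdbc_query_source = {
    "url": "jdbc:postgresql://db.example.com:5432/shop",
    "username": "etl_user",
    "password": "********",
    "type": "query",
    "query": "SELECT order_id, amount FROM public.orders WHERE amount > 0",
}
```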
Source schema in the lakehouse when the format is lakehouse.
Source table in the lakehouse.
WHERE conditions for filtering lakehouse data (For example, salary > 2000, store_id = 1).
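A lakehouse source names a schema and table and can apply WHERE-style filters like those in the examples above. Key names here are assumptions for illustration:

```python
# Illustrative lakehouse-source section; key names are assumptions.
lakehouse_source = {
    "source_schema": "hr",
    "source_table": "employees",
    # WHERE conditions filtering the rows to ingest:
    "conditions": ["salary > 2000", "store_id = 1"],
}
```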
Domain to be set for the dataset in DataHub.
Tags to be set for the dataset in DataHub.
The logged-in user's email address.
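Putting the pieces together, a submission sends the payload with the OAuth 2.0 access token as a bearer credential. The endpoint URL and key names below are assumptions for illustration; the token comes from your authorization server. This sketch builds the request but leaves the network call commented out:

```python
import json
import urllib.request

ACCESS_TOKEN = "eyJhbGciOi..."  # hypothetical token from the OAuth 2.0 flow

# Key names are illustrative assumptions, not the confirmed schema.
payload = {
    "name": "daily_orders_load",
    "type": "lakehouse",
    "target_schema": "sales",
    "target_table": "orders",
    "mode": "append",
    "domain": "sales_domain",          # DataHub domain for the dataset
    "tags": ["daily", "orders"],       # DataHub tags for the dataset
    "email": "analyst@example.com",    # logged-in user's email address
}

req = urllib.request.Request(
    "https://api.nx1cloud.com/ingestions",   # assumed endpoint path
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually submit the job
```

The response described at the top of this section would then contain the ingestion flow details for the created job.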