Converting files
Converting JSON to CSV specifying a JSON path schema:
echo '[{"id":1,"name":"Mark"}, {"id":2,"name":"Maria"}]' > example.json

curl https://konbert.com/api/v1/pipelines/sqlify/convert/run \
  -u "${API_KEY}:" \
  -F "file=@example.json" \
  -F "options[to]=csv" \
  -F "options[schema][0][name]=id" \
  -F "options[schema][0][path]=\$[*].id" \
  -F "return_output=true"
Will return:
"id"
"1"
"2"
Converting JSON to CSV without a schema:
echo '[{"id":1,"name":"Mark"}, {"id":2,"name":"Maria"}]' > example.json

curl https://konbert.com/api/v1/pipelines/sqlify/convert/run \
  -u "${API_KEY}:" \
  -F "file=@example.json" \
  -F "options[to]=csv" \
  -F "options[output][separator]=;" \
  -F "return_output=true"
Will return:
"id";"name"
1;"Mark"
2;"Maria"
If you just want to convert a file in a single request, this is the simplest way to do it.
The convert endpoint is:
POST https://konbert.com/api/v1/pipelines/sqlify/convert/run
It accepts these params:
| Name | Type | Description |
| --- | --- | --- |
| file | multi-part file | The file that you want to convert. |
| file_id | string | The id of an already-uploaded file to convert. You need to specify either file or file_id. |
| return_output | bool | Whether to return the result file directly, or a file object response instead. |
| options | object | Options for the conversion. |
| options[from] | string | (optional) The input format, csv or json; detected automatically if not specified. |
| options[to] | string | The output format: csv, sql or json. |
| options[schema] | field[] | A list of fields; this is where you specify which fields you want to convert and their types. |
| options[output] | object | Encoding options for the output format. |
| options[input] | object | Decoding options for the input format. |
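The bracketed form-field names in the curl examples encode nested options. A minimal Python sketch of that encoding (the flatten_options helper is ours for illustration, not part of the API):

```python
def flatten_options(options, prefix="options"):
    """Flatten a nested options dict into the bracketed form-field
    names used by the convert endpoint, e.g.
    {"schema": [{"name": "id"}]} -> {"options[schema][0][name]": "id"}."""
    fields = {}

    def walk(value, key):
        if isinstance(value, dict):
            for k, v in value.items():
                walk(v, f"{key}[{k}]")
        elif isinstance(value, list):
            for i, v in enumerate(value):
                walk(v, f"{key}[{i}]")
        else:
            fields[key] = str(value)

    walk(options, prefix)
    return fields


# Reproduces the options -F fields from the first curl example above.
fields = flatten_options({
    "to": "csv",
    "schema": [{"name": "id", "path": "$[*].id"}],
})
```

Each resulting key/value pair becomes one `-F "key=value"` argument, alongside the `file` and `return_output` fields.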

Files

File object

| Name | Type | Description |
| --- | --- | --- |
| file_id | string | Unique file id. |
| type | string | MIME type of the file. |
| size | integer | Size of the file in bytes. |
| schema | field[] | A list of fields defining the schema of the file. |
| runs | run[] | A list of runs that used this file as an input. |
| download_url | string | URL to get the raw data of this file. |
| upload_url | string | URL to upload the chunks of the file. |

Field object

| Name | Type | Description |
| --- | --- | --- |
| name | string | Name of the field. |
| type | string | Type of the field; one of string, integer, float, json. |
| path | string | The source of the field value: the column index for a CSV file, or a JSON path expression for a JSON file. |

Get all your files

curl https://konbert.com/api/v1/files -u "${API_KEY}:"
The above command returns JSON structured like this:
{
  "status": "ok",
  "message": "",
  "data": [
    {
      "file_id": "966065f3-35c1-4f6d-b0f6-f2e960896973",
      "type": "application/json",
      "size": 1024,
      "schema": [],
      "runs": [],
      "download_url": "https://api.sqlify.io/v1/files/966065f3-35c1-4f6d-b0f6-f2e960896973/data",
      "upload_url": "https://api.sqlify.io/v1/files/966065f3-35c1-4f6d-b0f6-f2e960896973/data"
    }
  ]
}
This endpoint retrieves all the files that belong to your account.

HTTP Request

GET https://konbert.com/api/v1/files

Result

| Name | Type | Description |
| --- | --- | --- |
| data | file[] | List of files. |
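The file list arrives wrapped in the status/message/data envelope shown above. A minimal Python sketch for unwrapping it (the list_file_ids helper is ours for illustration, not part of any official client):

```python
import json


def list_file_ids(response_body):
    """Return the file ids from a GET /api/v1/files response body,
    raising if the envelope reports an error status."""
    payload = json.loads(response_body)
    if payload["status"] != "ok":
        raise RuntimeError(payload["message"] or "request failed")
    return [f["file_id"] for f in payload["data"]]
```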

Get a file

curl https://konbert.com/api/v1/files/966065f3-35c1-4f6d-b0f6-f2e960896973 -u "${API_KEY}:"
The above command returns JSON structured like this:
{
  "status": "ok",
  "message": "",
  "data": {
    "file_id": "966065f3-35c1-4f6d-b0f6-f2e960896973",
    "type": "application/json",
    "size": 1024,
    "schema": [],
    "runs": [],
    "download_url": "https://api.sqlify.io/v1/files/966065f3-35c1-4f6d-b0f6-f2e960896973/data",
    "upload_url": "https://api.sqlify.io/v1/files/966065f3-35c1-4f6d-b0f6-f2e960896973/data"
  }
}
Retrieve a single file by its id.

HTTP Request

GET https://konbert.com/api/v1/files/${FILE_ID}

Result

| Name | Type | Description |
| --- | --- | --- |
| data | file | File object. |

Create a file

There are a few ways you can create a file: via a URL, a simple file upload, or a chunked file upload.

Create a file via a URL

POST https://konbert.com/api/v1/files
Params
| Name | Type | Description |
| --- | --- | --- |
| url | string | The URL where the API should get the data from; make sure it's publicly accessible. |
curl -X POST https://konbert.com/api/v1/files \
  -u "${API_KEY}:" \
  -d "url=https://example.com/data.json"  # hypothetical URL; use your own publicly accessible one
The above command returns JSON structured like this:
{
  "status": "ok",
  "message": "",
  "data": {
    "file_id": "966065f3-35c1-4f6d-b0f6-f2e960896973",
    "type": "application/json",
    "size": 1024,
    "schema": [],
    "runs": [],
    "download_url": "https://api.sqlify.io/v1/files/966065f3-35c1-4f6d-b0f6-f2e960896973/data",
    "upload_url": "https://api.sqlify.io/v1/files/966065f3-35c1-4f6d-b0f6-f2e960896973/data"
  }
}

Create a file via a simple file upload

We recommend this method if you just want to upload a file and it is smaller than 200 MB.
POST https://konbert.com/api/v1/files
Params
| Name | Type | Description |
| --- | --- | --- |
| file | file | A multi-part form part representing your file. |

Result

All the calls to create a file return the created file.
| Name | Type | Description |
| --- | --- | --- |
| data | file | File object. |

Chunked upload for large files

We recommend this method if your file is bigger than 200 MB.
First create the file, specifying its total size:

Create the file

POST https://konbert.com/api/v1/files
| Name | Type | Description |
| --- | --- | --- |
| size | integer | The total size of your file. |
Now all you need to do is split the file and upload the chunks to this endpoint.

Upload a chunk

POST https://konbert.com/api/v1/files/${FILE_ID}/data
| Name | Type | Description |
| --- | --- | --- |
| file | file | A multi-part form part representing your chunk. |
| size | integer | The size of the chunk in bytes. Every chunk except the last must be exactly 5 MiB (5 × 1024 × 1024 = 5242880 bytes). |
| offset | integer | The offset of the chunk in bytes. |
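The chunking rule above can be sketched in Python; each (offset, chunk) pair then becomes one POST to the endpoint with size=len(chunk) (the iter_chunks helper is ours for illustration, not part of the API):

```python
CHUNK_SIZE = 5 * 1024 * 1024  # 5242880 bytes; every chunk but the last must be this size


def iter_chunks(data, chunk_size=CHUNK_SIZE):
    """Yield (offset, chunk) pairs covering `data` in order.

    Each pair maps to one POST /api/v1/files/${FILE_ID}/data request
    with file=<chunk>, size=len(chunk) and offset=<offset>.
    """
    for offset in range(0, len(data), chunk_size):
        yield offset, data[offset:offset + chunk_size]
```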

Return Code

The upload chunk endpoint returns 201 Created when the last chunk is uploaded, and 200 OK for the rest of the chunks.

Update a file

You can update a file to set its schema.
POST https://konbert.com/api/v1/files/${FILE_ID}
Params
| Name | Type | Description |
| --- | --- | --- |
| schema | field[] | List of fields. |
| schema.name[] | string | Name of the field. |
| schema.type[] | string | Type of the field; one of string, integer, float, json. |
| schema.path[] | string | The source of the field value: the column index for a CSV file, or a JSON path expression for a JSON file. |

Runs

A run holds the information about the execution of a step. A step is a process that takes a file and outputs another. Pipelines are lists of steps.

Run object

| Name | Type | Description |
| --- | --- | --- |
| id | string | Unique run id. |
| step | string | Name of the step that the run is executing. |
| status | string | The run status: one of queued, running, failed or completed. |
| finished | bool | Indicates whether the run has finished; equivalent to having a failed or completed status. |
| output_file_id | string | File id of the result; only populated once the run is completed. |
| error_code | string | An error code in case the run has failed, for example: limit_exceeded. |
| error_message | string | An error message describing the failure. |
| error_details | string | A more detailed error message. |

Get a run

curl https://konbert.com/api/v1/runs/2f123527-86d4-439f-be17-b579450196f4 \
  -u "${API_KEY}:"
{
  "status": "ok",
  "message": "",
  "data": {
    "step": "sqlify/detect-schema",
    "status": "queued",
    "output_file_id": null,
    "id": "2f123527-86d4-439f-be17-b579450196f4",
    "finished": false
  }
}
Check the state of a run.

HTTP Request

GET https://konbert.com/api/v1/runs/${RUN_ID}
| Name | Type | Description |
| --- | --- | --- |
| data | run | Run object. |
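Since a run stays queued or running until its finished flag is true, clients typically poll this endpoint. A minimal Python sketch, with the HTTP call injected as a callable so it works with any client (the wait_for_run helper and get_run parameter are ours for illustration, not part of the API):

```python
import time


def wait_for_run(run_id, get_run, interval=1.0, timeout=300.0):
    """Poll a run until its `finished` flag is true.

    `get_run(run_id)` must return the decoded `data` object from
    GET /api/v1/runs/${RUN_ID}; it is injected so the sketch stays
    HTTP-client agnostic. Returns the final run object.
    """
    deadline = time.monotonic() + timeout
    while True:
        run = get_run(run_id)
        if run["finished"]:
            return run
        if time.monotonic() > deadline:
            raise TimeoutError(f"run {run_id} did not finish within {timeout}s")
        time.sleep(interval)
```

On a completed run, read output_file_id; on a failed run, inspect error_code and error_message.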

Queue runs

curl https://konbert.com/api/v1/pipelines/sqlify/convert-to-csv/run \
  -u "${API_KEY}:" \
  -d "file_id=9c7c6d10-ed9d-4560-88db-9a80487fedbd"
{
  "status": "ok",
  "message": "pipeline queued",
  "data": [
    {
      "step": "sqlify/detect-schema",
      "status": "queued",
      "output_file_id": null,
      "id": "2f123527-86d4-439f-be17-b579450196f4",
      "finished": false
    },
    {
      "step": "sqlify/export-csv",
      "status": "queued",
      "output_file_id": null,
      "id": "55f01194-3bfb-496d-947c-43ffb267171b",
      "finished": false
    }
  ]
}
A step is a process that runs on an input file and produces an output file.
The sqlify/detect-schema step analyzes the file and writes the detected schema to the file.
The sqlify/export-* steps take the data, cast it to the schema, and encode it in the desired format.
A pipeline is a group of steps that are chained together.
These are the pipelines that we have available:
| Name | Steps | Description |
| --- | --- | --- |
| sqlify/convert-to-csv | sqlify/detect-schema, sqlify/export-csv | Outputs a CSV file |
| sqlify/convert-to-json | sqlify/detect-schema, sqlify/export-json | Outputs a JSON file |
| sqlify/convert-to-sql | sqlify/detect-schema, sqlify/export-sql | Outputs a SQL file |
The detect-schema step skips files that already have a schema unless the rewrite_schema option is passed, so you can set the schema on your file and run the pipeline safely.

HTTP Request

POST https://konbert.com/api/v1/pipelines/${PIPELINE}/run
Params
| Name | Type | Description |
| --- | --- | --- |
| file_id | string | The input file id. |
| options | object | (optional) Any options that you want to pass to the steps. |
Response
| Name | Type | Description |
| --- | --- | --- |
| data | run[] | List of runs. |