Iris commands
An overview of the iris API.
These docs are outdated! Please check out https://docs.titanml.co for the latest information on the TitanML platform. If there's anything that's not covered there, please contact us on our discord.
Also see the API
By default, these commands send a request to the backend and then print a table with the response to the command line. If you would like to receive the response as a JSON object instead, add the --json flag after the command.
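For example (illustrative only; <command> stands in for any iris command together with its required arguments):

```bash
# Append --json to any iris command to get the raw JSON response instead of a table
iris <command> --json
```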
iris distil
Dispatches knowledge distillation jobs to the TitanML platform. You can look at all of the iris distil arguments by using this command: iris distil --help.
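For example, a sketch of a distillation job (the model, dataset, subset, and field names below are placeholders; substitute your own):

```bash
# Dispatch a knowledge distillation job for a sequence classification model
iris distil \
  --model bert-base-uncased \
  --dataset glue \
  --subset sst2 \
  --task sequence_classification \
  --num-labels 2 \
  --text-fields sentence \
  --name my-distil-job
```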
iris finetune
Dispatches fine-tuning jobs to the TitanML platform.
For the above two commands, both model and dataset can be any of:
A HuggingFace model/dataset
A UUID generated by iris upload
The filepath of a model/dataset folder (which will then be uploaded as part of the job).
More detail on how to launch a distillation or fine-tuning job can be found in the distillation and fine-tuning guides.
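For illustration, the same fine-tuning job with each of the three accepted forms for --model (all values are placeholders, and the UUID is made up):

```bash
# A model from the HuggingFace hub
iris finetune -m bert-base-uncased -d my_dataset -t sequence_classification -nl 2 -tf text

# A UUID previously returned by iris upload
iris finetune -m 123e4567-e89b-12d3-a456-426614174000 -d my_dataset -t sequence_classification -nl 2 -tf text

# A local model folder, uploaded as part of the job
iris finetune -m ./models/my-local-model -d my_dataset -t sequence_classification -nl 2 -tf text
```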
The following arguments apply to both iris distil and iris finetune:
| Option | Shorthand | Type | Description |
| --- | --- | --- | --- |
| --model | -m | TEXT | The model to optimize. [default: None] [required] |
| --dataset | -d | TEXT | The dataset to optimize the model with. [default: None] [required] |
| --task | -t | [sequence_classification \| question_answering \| token_classification] | The task to optimize the model for. [default: None] [required] |
| --subset | -ss | TEXT | The subset of the dataset to use. |
| --name | -n | TEXT | The name to use for this job. Visible in the Titan web interface. |
| --file | -f | PATH | The .yaml file containing experiment parameters. |
| --short-run | -s | flag | Truncates the run after 1 batch and 1 epoch. Will give poor results, but useful to check that the model and dataset choices are valid. |
| --num-labels | -nl | INTEGER | Number of labels. Required for the sequence_classification task. [default: None] |
| --text-fields | -tf | TEXT | Text fields. Required for the sequence_classification task. [default: None] |
| --has-negative | -hn | flag | Whether the dataset contains negative (unanswerable) examples. Required for the question_answering task. |
| --label-name | -ln | int:TEXT | The label names used for token classification. |
| --help | | flag | Show the help message and exit. |
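As a rough sketch of the task-specific options (model and dataset names are placeholders, and the 0:NAME syntax and repeated use of --label-name are assumptions based on its int:TEXT type):

```bash
# question_answering: the dataset contains unanswerable (negative) examples
iris distil -m deepset/roberta-base-squad2 -d squad_v2 -t question_answering --has-negative

# token_classification: map label indices to label names
iris distil -m bert-base-cased -d conll2003 -t token_classification \
  -ln 0:O -ln 1:B-PER -ln 2:I-PER
```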
You can use these additional arguments to specify hyperparameters when running a fine-tuning job:
| Option | Shorthand | Type | Description |
| --- | --- | --- | --- |
| --batch-size | -bs | INTEGER | The batch size to use for training. Default is 16. |
| --learning-rate | -lr | FLOAT | The learning rate to use for training. Default is 2e-5. |
| --num-epochs | -ne | INTEGER | Number of epochs to finetune for. Default is 1. |
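For example, a fine-tuning job overriding the defaults (the model and dataset names are placeholders):

```bash
# Fine-tune for 3 epochs with a larger batch size and learning rate
iris finetune \
  -m bert-base-uncased \
  -d imdb \
  -t sequence_classification \
  -nl 2 \
  -tf text \
  --batch-size 32 \
  --learning-rate 3e-5 \
  --num-epochs 3
```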
iris get
Gets objects from the Titan API. You can look at all of the iris get commands by using this command: iris get --help
iris status
Retrieves the status of an experiment. You can look at all of the iris status commands by using this command: iris status --help
iris upload
Uploads local models and datasets by local filepath. You can look at all of the Iris upload commands by using iris upload --help
.
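A hypothetical invocation, assuming the command takes the artefact's path (check iris upload --help for the exact arguments):

```bash
# Upload a local dataset folder; the response includes a UUID you can pass to later jobs
iris upload ./data/my_dataset
```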
iris delete
Deletes a model or dataset (i.e. an artefact), or an entire experiment, from the TitanML store.
iris download
Downloads a finished, Titan-optimised model directly onto your machine in the ONNX format.
iris pull
Downloads a finished, Titan-optimised model as a Docker image (usage is equivalent to docker pull).
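A sketch of the intended usage; the image name here is a placeholder (check iris pull --help for the exact form):

```bash
# Pull the optimised model as a Docker image, just like docker pull
iris pull my-optimised-model
```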
iris infer
Runs inference by sending a request to the Triton Inference Server.
iris makesafe
Converts a model to the safetensors format, including those models which the transformers save method can't convert.
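A hypothetical invocation, assuming the command takes a local model path (check iris makesafe --help for the exact arguments):

```bash
# Convert a local model folder to the safetensors format
iris makesafe ./models/my-model
```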