iris API

These docs are outdated! Please check out https://docs.titanml.co for the latest information on the TitanML platform. If there's anything that's not covered there, please contact us on our Discord.

N.B. Some 'Options' are mandatory; they must be passed as explicit flags to prevent irreversible operations from being run by accident.

iris

Usage:

$ iris [OPTIONS] COMMAND [ARGS]...

Options:

  • --help: Show this message and exit.

Commands:

  • delete: Delete objects from the TitanML Store.

  • download: Download the titan-optimized ONNX model.

  • get: Get objects from the TitanML Store.

  • infer: Run inference on a model.

  • login: Login to iris.

  • logout: Logout from iris.

  • makesafe: Convert a non-safetensor model into a safetensor model, including for models with shared weights.

  • post: Dispatch a job to the TitanML platform.

  • pull: Pull the titan-optimized server Docker image.

  • status: Get the status of an experiment.

  • upload: Upload an artefact to the TitanML Hub.

  • version: Print the version of iris installed.

iris delete

Delete objects from the TitanML Store.

Usage:

$ iris delete [OPTIONS] [OBJECT]:[experiment|artefact]

Arguments:

  • [OBJECT]:[experiment|artefact]: What type of object to delete - experiment or artefact (model/dataset). [default: experiment]

Options:

  • -i, --id TEXT: Which object to delete [required]

  • --help: Show this message and exit.
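
For example, to delete an artefact by ID (the ID shown is illustrative; use one from the TitanML Hub):

$ iris delete artefact -i 101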

iris download

Download the titan-optimized ONNX model.

Usage:

$ iris download IMAGE

Arguments:

  • IMAGE: The model to download, from those shown in the TitanML Hub. [required]
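
For example (the model name is illustrative; use one shown in the TitanML Hub):

$ iris download my-optimised-model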

iris get

Get objects from the TitanML Store.

Usage:

$ iris get [OPTIONS] [OBJECT]:[experiment|artefact]

Arguments:

  • [OBJECT]:[experiment|artefact]: What type of object to get [default: experiment]

Options:

  • -i, --id TEXT: Which object to get. If None, then all accessible objects are returned. Queries specified by --id are evaluated server-side.

  • -q, --query TEXT: A JMESPath query string, to filter the objects returned by the API. Evaluated client-side.

  • -h, --headers TEXT: Headers to send with the get request. Should be provided as colon separated key value pairs: -h a:b -h c:d -> {a:b, c:d} [default: ]

  • --help: Show this message and exit.
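
For example, to get a single experiment by ID, or to list all accessible artefacts (the ID is illustrative):

$ iris get experiment -i 42

$ iris get artefact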

iris infer

Creates an ML inference server on the specified port, allowing inference to be run on input texts.

Usage:

$ iris infer [OPTIONS]

Options:

  • --target TEXT: The URL to run the inference server on. [default: localhost]

  • -p, --port INTEGER: The port to run the inference server on. [default: 8000]

  • -t, --task [sequence_classification|glue|question_answering|token_classification]: The task to optimize the model for. [required]

  • --use-cpu: Whether to use the CPU. If False, the GPU will be used (via TensorRT). Choose CPU only when the optimized model is in CPU format (ONNX Runtime). [default: False]

  • -t, --text TEXT: The text to run inference on. In classification tasks, this is the TEXT to be classified. In question answering tasks, this is the QUESTION to be answered. [required]

  • -c, --context TEXT: The context in question answering tasks. Only used in question answering tasks. [default: ]

  • --help: Show this message and exit.
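
For example, to run question answering on the default host and port (the texts are illustrative; long flags are used because -t is listed for both --task and --text):

$ iris infer --task question_answering --text "Who wrote Hamlet?" --context "Hamlet is a tragedy written by William Shakespeare."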

iris login

Login to iris.

Usage:

$ iris login

iris logout

Logout from iris.

Usage:

$ iris logout

iris makesafe

Convert an unsafe (pytorch_model.bin) model into a safetensors (.safetensors) model, including for models with shared weights. The output weights file is placed in the input model's folder, allowing the model to then be uploaded to the TitanML Store.

Usage:

$ iris makesafe [MODEL]

Arguments:

  • [MODEL]: The path of the folder containing the model to be converted to safetensors. [default: model]
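
For example, to convert a model stored in ./my-model (the path is illustrative):

$ iris makesafe ./my-model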

iris post

Dispatch a job to the TitanML platform.

Usage:

$ iris post [OPTIONS]

Options:

  • -m, --model TEXT: The model to optimize. [required]

  • -d, --dataset TEXT: The dataset to optimize the model with. [required]

  • -t, --task [sequence_classification|glue|question_answering|token_classification]: The task to optimize the model for. [required]

  • -n, --name TEXT: The name to use for this job. Visible in the TitanML Hub. [default: ]

  • -f, --file TEXT: Load the options from a config file [default: ]

  • -s, --short-run: Truncates the run after 1 batch and 1 epoch. Will provide bad results, but useful to check that the model and dataset choices are valid. [default: False]

  • -nl, --num-labels INTEGER: Number of labels. Required for task sequence_classification

  • -tf, --text-fields TEXT: Text fields. Required for task sequence_classification

  • -hn, --has-negative: Has negative. Required for question_answering [default: False]

  • -ln, --label-names TEXT: Names of token labels. Required for task token_classification. Specify as a mapping with no spaces: -ln 0:label1 -ln 1:label2

  • --help: Show this message and exit.
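
For example, a sequence classification job, and a token classification job with a label mapping (the model, dataset, and label names are illustrative):

$ iris post --model bert-base-uncased --dataset imdb --task sequence_classification --num-labels 2 --text-fields text --name my-classification-job

$ iris post -m bert-base-uncased -d conll2003 -t token_classification -ln 0:O -ln 1:B-PER -ln 2:I-PER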

iris pull

Pull the titan-optimized server Docker image.

Usage:

$ iris pull IMAGE

Arguments:

  • IMAGE: The model to pull from those shown in the TitanML Hub. [required]
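
For example (the image name is illustrative; use one shown in the TitanML Hub):

$ iris pull my-optimised-model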

iris status

Get the status of an experiment.

Usage:

$ iris status [OPTIONS]

Options:

  • -i, --id INTEGER: The id of the experiment to get the status of [required]
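
For example, to check on an experiment (the ID is illustrative):

$ iris status -i 42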

iris upload

Upload an artefact to the TitanML Hub.

Usage:

$ iris upload [OPTIONS] SRC [NAME] [DESCRIPTION]

Arguments:

  • SRC: The location of the artefact on disk. Should be a folder, containing either a model or a dataset. For more information on the supported formats, see here. [required]

  • [NAME]: The name of the artefact. Displayed in the TitanML Hub.

  • [DESCRIPTION]: A short description of the artefact. Displayed in the TitanML Hub.

Options:

  • --help: Show this message and exit.
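
For example, to upload a model folder with a display name and description (the values are illustrative):

$ iris upload ./my-model my-model "A finetuned sentiment classifier"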
