TitanML Documentation

Iris commands

An overview of the iris API.


Last updated 1 year ago

These docs are outdated! Please check out https://docs.titanml.co for the latest information on the TitanML platform. If there's anything that's not covered there, please contact us on our Discord.

Also see the iris API.

Contents

By default, these commands send a request to the backend, then print a table with the response to the command line. If you would like to receive the response as a JSON object instead, simply add the --json flag after the command.
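For example, a status check can be printed as a table or captured as JSON (a sketch, assuming iris is installed and you are signed in):

```shell
# Default: prints a table of experiment statuses to the command line.
iris status

# Same request, but the response is returned as a JSON object.
iris status --json
```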

iris distil

Dispatches knowledge distillation jobs to the TitanML platform. You can look at all of the iris distil arguments by using this command: iris distil --help.

iris finetune

Dispatches fine-tuning jobs to the TitanML platform.

For the above two commands, both model and dataset can be any of:

  • A HuggingFace model/dataset

  • A UUID generated by iris upload

  • The filepath of a model/dataset folder (which will then be uploaded as part of the job).
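Putting this together, a knowledge distillation job on a HuggingFace model and dataset might be dispatched like this (a sketch: the model, dataset, subset, and text-field names are illustrative placeholders; the flags are those documented in the table below):

```shell
# Distil a sequence-classification model on a HuggingFace dataset.
# bert-base-uncased, glue, sst2, and sentence are placeholder names -
# substitute your own model, dataset, subset, and text field.
iris distil \
  --model bert-base-uncased \
  --dataset glue \
  --subset sst2 \
  --task sequence_classification \
  --num-labels 2 \
  --text-fields sentence \
  --name my-distillation-job
```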

The following arguments apply to both iris distil and iris finetune:

| Command | Shortcut | Type | Description |
| --- | --- | --- | --- |
| --model | -m | TEXT | The model to optimize. [default: None] [required] |
| --dataset | -d | TEXT | The dataset to optimize the model with. [default: None] [required] |
| --task | -t | [sequence_classification \| question_answering \| token_classification] | The task to optimize the model for. [default: None] [required] |
| --subset | -ss | TEXT | The subset of the dataset to use. |
| --name | -n | TEXT | The name to use for this job. Visible in the Titan web interface. |
| --file | -f | PATH | The .yaml file containing experiment parameters. |
| --short-run | -s | flag | Truncates the run after 1 batch and 1 epoch. Will produce poor results, but useful for checking that the model and dataset choices are valid. |
| --num-labels | -nl | INTEGER | Number of labels. Required for the sequence_classification task. [default: None] |
| --text-fields | -tf | TEXT | Text fields. Required for the sequence_classification task. [default: None] |
| --has-negative | -hn | flag | Whether the dataset has negative examples. Required for the question_answering task. |
| --label-name | -ln | int:TEXT | The label names used for token classification. |
| --help | | flag | Show this message and exit. |

You can use these additional arguments to specify hyperparameters when running a fine-tuning job:

| Command | Shortcut | Type | Description |
| --- | --- | --- | --- |
| --batch-size | -bs | INTEGER | The batch size to use for training. Default is 16. |
| --learning-rate | -lr | FLOAT | The learning rate to use for training. Default is 2e-5. |
| --num-epochs | -ne | INTEGER | Number of epochs to finetune for. Default is 1. |
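For instance, a fine-tuning run that overrides all three hyperparameters might look like this (a sketch: the model and dataset names are placeholders):

```shell
# Fine-tune with a larger batch size, a lower learning rate, and 3 epochs.
iris finetune \
  --model bert-base-uncased \
  --dataset glue \
  --subset sst2 \
  --task sequence_classification \
  --num-labels 2 \
  --text-fields sentence \
  --batch-size 32 \
  --learning-rate 1e-5 \
  --num-epochs 3
```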

iris get

Gets objects from the TitanAPI. You can look at all of the iris get commands by using this command: iris get --help.

iris status

Retrieves the status of an experiment. You can look at all of the iris status commands by using this command: iris status --help.

iris upload

Uploads local models and datasets by local filepath. You can look at all of the iris upload commands by using iris upload --help.
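A minimal sketch (the positional-filepath form is an assumption; check iris upload --help for the exact signature):

```shell
# Upload a local model folder. The command returns a UUID, which can then
# be passed as --model to iris distil or iris finetune.
# Positional-path usage is assumed - verify with `iris upload --help`.
iris upload ./my-model-folder
```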

iris delete

Deletes a model or dataset (i.e. an artefact), or an entire experiment, from the TitanML store.

iris download

Downloads a finished, Titan-optimised model directly onto your machine in the ONNX format.

iris pull

Downloads a finished, Titan-optimised model as a Docker image (usage is equivalent to docker pull).

iris infer

Runs inference by sending a request to the Triton Inference Server.

iris makesafe

Converts a model to the safetensors format, including models that the transformers save method can't convert.

For more explanation on how to launch a distillation or fine-tuning job, see the Using iris distil and Using iris finetune guides.
