TitanML Documentation

Monitoring progress


Last updated 1 year ago

These docs are outdated! Please check out https://docs.titanml.co for the latest information on the TitanML platform. If there's anything that's not covered there, please contact us on our discord.

Viewing results in terminal

To view the results of a posted experiment in your terminal window, run the iris status command. The syntax is iris status -i <experiment ID>, e.g. iris status -i 183. By default, results are displayed for all of the compressed model sizes (medium, small, and extra small) as well as the baseline and fp16 variants. You will see something like this:

[
    {
        "name": "183:M",
        "results": {
            "val/f1": 0.9109947681427002,
            "latency": 135.6980712890625,
            "n_params": 33360770,
            "val/loss": 0.34705352783203125,
            "val/accuracy": 0.875
        }
    },
    {
        "name": "183:XS",
        "results": {
            "val/f1": 0.8896797299385071,
            "latency": 53.609185791015626,
            "n_params": 12750594,
            "val/loss": 0.3862529397010803,
            "val/accuracy": 0.8480392098426819
        }
    },
    {
        "name": "183:baseline",
        "results": {
            "val/f1": 0.9042016863822937,
            "latency": 107.39190673828125,
            "n_params": 109483778,
            "val/loss": 0.6844630837440491,
            "val/accuracy": 0.8602941036224365
        }
    },
    {
        "name": "183:fp16",
        "results": {
            "val/f1": 0.9042016863822937,
            "latency": 105.9765380859375,
            "n_params": 109483778,
            "val/loss": 0.6844630837440491,
            "val/accuracy": 0.8602941036224365
        }
    },
    {
        "name": "183:S",
        "results": {
            "val/f1": 0.9052631855010986,
            "latency": 68.0869140625,
            "n_params": 22713986,
            "val/loss": 0.33026769757270813,
            "val/accuracy": 0.8676470518112183
        }
    }
]
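This output is plain JSON, so it is easy to post-process. As a minimal sketch (the helper function is ours, not part of iris, and the records below are copied from the example output above, rounded for brevity), you could pick the most accurate model that fits a latency budget:

```python
# Records in the shape returned by `iris status` (values from the example above).
records = [
    {"name": "183:M", "results": {"val/f1": 0.9110, "latency": 135.70, "n_params": 33360770}},
    {"name": "183:S", "results": {"val/f1": 0.9053, "latency": 68.09, "n_params": 22713986}},
    {"name": "183:XS", "results": {"val/f1": 0.8897, "latency": 53.61, "n_params": 12750594}},
    {"name": "183:baseline", "results": {"val/f1": 0.9042, "latency": 107.39, "n_params": 109483778}},
]

def best_under_latency(records, max_latency_ms):
    """Return the highest-F1 record whose measured latency fits the budget.

    Returns None if no model is fast enough.
    """
    eligible = [r for r in records if r["results"]["latency"] <= max_latency_ms]
    return max(eligible, key=lambda r: r["results"]["val/f1"], default=None)

best = best_under_latency(records, max_latency_ms=100.0)
print(best["name"])  # prints "183:S" — the small model wins under a 100 ms budget
```

Under a 100 ms budget only the S and XS models qualify, and S has the higher F1; loosening the budget to 150 ms would select the medium model instead.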

Viewing results on the TitanHub web interface

Immediately after uploading an experiment, clicking on it on the 'experiments' page will show a progress indicator for each job. Expect at least 30 minutes before your new, compressed models are ready; the precise duration is hard to estimate because it depends heavily on the size of your dataset and the complexity of your task specifications.

When your models are ready, you can click on a model on the main/dashboard page to see a graph of performance against cost for each model size. Use the dropdown at the top to toggle between F1 score, loss and accuracy as a performance measure.

Click on a particular data point (each blue data point represents one of the TyTN model sizes) to see information about it. We will look at this further in the next section.

Go to app.titanml.co and log in with the same username and password as before to get started.

When you upload an experiment with iris post, it will appear under the 'Models' tab in the form <experiment ID>-<experiment name>_<model size>. For example, 183-test_experiment_M corresponds to the medium-sized (33MB) version of the model we uploaded to TitanHub in the example above.
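The naming scheme is regular enough to split apart programmatically, which can be handy when scripting against many experiments. A small sketch (the helper name and regex are ours, not part of iris; the size suffixes are those seen in the example output above):

```python
import re

# Pattern for the TitanHub model naming scheme:
#   <experiment ID>-<experiment name>_<model size>
# Size suffixes assumed from the example output: XS, S, M, baseline, fp16.
NAME_RE = re.compile(r"^(?P<exp_id>\d+)-(?P<exp_name>.+)_(?P<size>XS|S|M|baseline|fp16)$")

def parse_model_name(name):
    """Split a TitanHub model name into experiment ID, experiment name, and size."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognised model name: {name!r}")
    return m.groupdict()

parts = parse_model_name("183-test_experiment_M")
# parts == {"exp_id": "183", "exp_name": "test_experiment", "size": "M"}
```

Because the size suffix is matched last and the experiment-name group is greedy, underscores inside the experiment name (as in test_experiment) are handled correctly.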
