Titan Optimise ✨: Knowledge Distillation

How to get the most out of Titan Optimise


Last updated 1 year ago

These docs are outdated! Please check out https://docs.titanml.co for the latest information on the TitanML platform. If there's anything that's not covered there, please contact us on our Discord.

Garbage in, garbage out. Quality in, quality out.

The Titan-compressed models are produced from the models that you feed in from the command line. The better the model you put into the system, the better the model you will get out. In many cases, an ELECTRA Large model will be the best choice.

If you feed an already very small model into TitanML to be compressed even further, the results will not be nearly as good as if you had started with a large, high-accuracy model.

Whilst you might see some drop-off in performance relative to your input model, the output model will substantially outperform a directly finetuned model of a similar size. Rather than comparing the accuracy of the smaller output model against the larger input model, compare it against a model with a similar resource profile: TitanML models typically perform much better!

The same applies to datasets: the larger and higher-quality your input dataset, the better the resulting TitanML models will be.
