
Benchmark experiments for knowledge distillation


Last updated 1 year ago

These docs are outdated! Please check out https://docs.titanml.co for the latest information on the TitanML platform. If there's anything that's not covered there, please contact us on our Discord.

On this page you'll find a few examples of knowledge distillation experiments you can run with public HuggingFace models for different use cases. For any of these experiments, you can substitute the model, the dataset, or both with a path to a suitable local folder if you want to try using your own models/datasets.
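For example, a local-folder run might look like the following sketch. The folder names are hypothetical placeholders; point them at any folder containing a HuggingFace-format model or dataset.

iris distil \
	--model ./my_local_model \
	--dataset ./my_local_dataset \
	--task question_answering \
	--name my_local_experiment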

If you want to try running each experiment with a different model, we have a small selection of sample models for each task here.

Question Answering on SQuAD v2

iris distil \
	--model TitanML/Electra-Large-SQUADV2 \
	--dataset squad_v2 \
	--task question_answering \
	--name my_test_squadv2 \
	--has-negative

Note that since SQuAD v2 contains questions which are not answerable from the context, the --has-negative flag is required here. Any other dataset containing questions that are not answerable from context must likewise be passed to iris distil with this flag.

Remember, you can always use the abbreviated iris distil arguments as listed here; this goes for any task, and applies to both local and remote models/datasets. E.g.

iris distil -m TitanML/Electra-Large-SQUADV2 -d squad_v2 -t question_answering -hn -n my_test_squad

Sequence classification with GLUE MRPC

iris distil \
	--model TitanML/Electra-Large-MRPC \
	--dataset glue \
	--task sequence_classification \
	--subset mrpc \
	-tf sentence1 \
	-tf sentence2 \
	-nl 2 \
	--name my_test_mrpc

Remember, you can skip the --subset argument if you're not using a dataset (like GLUE) that has subsets!
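For instance, here is a sketch of a sequence classification run on a dataset without subsets — no --subset argument needed. The model name is a hypothetical placeholder; the imdb dataset's text field and its two labels are what the -tf and -nl arguments refer to.

iris distil \
	--model my_org/my_sequence_model \
	--dataset imdb \
	--task sequence_classification \
	-tf text \
	-nl 2 \
	--name my_test_imdb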

Token Classification with conll2003

conll2003 has 9 token labels as shown below; pass each one to iris distil in the form {index}:{label}.

iris distil \
        --model TitanML/Electra-Large-CONLL2003 \
        --dataset conll2003 \
        --subset conll2003 \
        --task token_classification \
        -ln 0:O \
        -ln 1:B-PER -ln 2:I-PER \
        -ln 3:B-ORG -ln 4:I-ORG \
        -ln 5:B-LOC -ln 6:I-LOC \
        -ln 7:B-MISC -ln 8:I-MISC \
        --name my_test_conll

This is the same as the example we used here.
