Benchmark experiments for knowledge distillation
These docs are outdated! Please check out our latest documentation for up-to-date information on the TitanML platform. If there's anything that's not covered there, please contact us.
On this page you'll find a few examples of knowledge distillation experiments you can run with public HuggingFace models for different use-cases. For any of these experiments, you can substitute the model, dataset or both with a path to a suitable local folder if you want to try using your own models/datasets.
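For example, a local substitution might look something like the sketch below. The folder paths and the flag spellings (`--model`, `--dataset`, `--task`, `--name`) are illustrative assumptions rather than exact syntax; run `iris distil --help` to confirm the options your version accepts.

```bash
# Illustrative sketch only: point the model/dataset arguments at local
# folders instead of HuggingFace Hub IDs. Flag spellings are assumptions --
# confirm them with `iris distil --help`.
iris distil \
    --model ./models/my-finetuned-bert \
    --dataset ./data/my-qa-dataset \
    --task question_answering \
    --name local-folders-example
```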
If you want to try running each experiment with a different model, we have a small selection of sample models for each task.
Note that since this experiment uses SQuAD v2, the `--has_negative` flag is not necessary. However, any other dataset containing questions that are not answerable from the context must be passed to `iris distil` with this flag.
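As a rough sketch, a run on a custom QA dataset that does contain unanswerable questions might look like the following. Only `--has_negative` comes from the note above; the model/dataset names and the remaining flag spellings are hypothetical placeholders.

```bash
# Sketch: pass --has_negative (see note above) when the QA dataset contains
# questions with no answer in the context. The model/dataset names and the
# other flag spellings here are placeholders -- check `iris distil --help`.
iris distil \
    --model bert-base-uncased \
    --dataset my-org/qa-with-unanswerable-questions \
    --task question_answering \
    --has_negative
```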
Remember that you can always use the abbreviated `iris distil` arguments; this goes for any task, and applies to both local and remote models/datasets.
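For example, here's a sketch of the abbreviated form, assuming short flags such as `-m`, `-d`, `-t` and `-n` for the model, dataset, task and experiment name (check `iris distil --help` for the exact abbreviations in your version):

```bash
# Sketch: short-flag spellings (-m, -d, -t, -n) are assumptions standing in
# for --model, --dataset, --task and --name; confirm with `iris distil --help`.
iris distil \
    -m prajjwal1/bert-tiny \
    -d squad_v2 \
    -t question_answering \
    -n abbreviated-args-example
```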
Remember that you can skip the subset argument if your dataset doesn't have subsets; it's only needed for datasets like GLUE that do.
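For a dataset that does have subsets, the subset might be passed along the lines of the sketch below; the `--subset` spelling, the task name and the other flags are assumptions, with GLUE's SST-2 subset used purely as an illustration.

```bash
# Sketch: GLUE is a collection of subsets, so one has to be named explicitly.
# The --subset spelling and the other flags are assumptions -- check
# `iris distil --help`. For a dataset without subsets, simply omit it.
iris distil \
    --model bert-base-uncased \
    --dataset glue \
    --subset sst2 \
    --task sequence_classification \
    --name glue-sst2-example
```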
conll2003 has 9 token labels, as shown below; pass each one to `iris distil` in the form `{index}:{label}`.
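For reference, the nine `ner_tags` labels in the HuggingFace conll2003 dataset are listed in the sketch below, together with one way the `{index}:{label}` pairs might be supplied; the repeated `--labels` flag and the other flag spellings are assumptions, so check `iris distil --help` for the exact argument names.

```bash
# conll2003 ner_tags (index:label), per the HuggingFace dataset card:
#   0:O  1:B-PER  2:I-PER  3:B-ORG  4:I-ORG  5:B-LOC  6:I-LOC  7:B-MISC  8:I-MISC
# Sketch: the repeated --labels flag and the other flag spellings are
# assumptions -- confirm the exact argument names with `iris distil --help`.
iris distil \
    --model bert-base-cased \
    --dataset conll2003 \
    --task token_classification \
    --labels 0:O --labels 1:B-PER --labels 2:I-PER \
    --labels 3:B-ORG --labels 4:I-ORG \
    --labels 5:B-LOC --labels 6:I-LOC \
    --labels 7:B-MISC --labels 8:I-MISC
```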
This experiment is the same as the example we used before.