Benchmark experiments for finetuning

These docs are outdated! Please check out https://docs.titanml.co for the latest information on the TitanML platform. If there's anything that's not covered there, please contact us on our Discord.

On this page you'll find a few examples of finetuning experiments you can run with public HuggingFace models for different use cases. For any of these experiments, you can substitute the model, the dataset, or both with a path to a suitable local folder if you want to try using your own models or datasets.
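
For example, here's a minimal sketch of a run that points both arguments at local folders; the paths and experiment name below are placeholders, so substitute your own:

iris finetune \
	--model ./path/to/my_local_model \
	--dataset ./path/to/my_local_dataset \
	--task question_answering \
	--name my_local_experiment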

If you want to try running each experiment with a different model, we have a small selection of sample models for each task here.

Question Answering on SQuAD v2

iris finetune \
	--model bert-base-uncased \
	--dataset squad_v2 \
	--task question_answering \
	--name my_test_squadv2 \
	--has-negative

Note that since SQuAD v2 contains questions which are not answerable from the context, the command above passes the --has-negative flag. Any other dataset containing unanswerable questions must also be passed to iris finetune with this flag.

Remember you can always use the abbreviated iris finetune arguments as listed here; this goes for any task, and applies to both local and remote models/datasets. E.g.

iris finetune -m TitanML/Electra-Large-SQUADV2 -d squad_v2 -t question_answering -hn -n my_test_squad

Sequence classification with GLUE MRPC

This is the same as the example we used here.

iris finetune \
	--model bert-base-uncased \
	--dataset glue \
	--task sequence_classification \
	--subset mrpc \
	-tf sentence1 \
	-tf sentence2 \
	-nl 2 \
	--name my_test_mrpc

Remember you can skip the subset argument if you're not using a dataset (like GLUE) with subsets!
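
For instance, here's a sketch of a similar sequence classification run on a hypothetical local dataset with no subsets; the dataset path and the text field name are placeholders for whatever your own data uses:

iris finetune \
	--model bert-base-uncased \
	--dataset ./path/to/my_local_dataset \
	--task sequence_classification \
	-tf text \
	-nl 2 \
	--name my_test_no_subset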

Token Classification with conll2003

conll2003 has 9 token labels as shown below; pass each one to iris finetune in the form {index}:{label}.

iris finetune \
        --model bert-base-uncased \
        --dataset conll2003 \
        --subset conll2003 \
        --task token_classification \
        -ln 0:O \
        -ln 1:B-PER -ln 2:I-PER \
        -ln 3:B-ORG -ln 4:I-ORG \
        -ln 5:B-LOC -ln 6:I-LOC \
        -ln 7:B-MISC -ln 8:I-MISC \
        --labels-column ner_tags \
        --name my_test_conll

Language Modelling with tiny_shakespeare

tiny_shakespeare is a dataset consisting of the works of Shakespeare (see here for more information). To train a large language model to produce text in the style of Shakespeare, try the following:

iris finetune \
        --model facebook/opt-125m \
        --dataset tiny_shakespeare \
        --task language_modelling \
        --name shakespeare