Using iris finetune

These docs are outdated! Please check out https://docs.titanml.co for the latest information on the TitanML platform. If there's anything that's not covered there, please contact us on our Discord.

Shortcut! If you'd rather use the GUI than the command line, you can find the command builder on the web app at app.titanml.co

Remember to ensure you have the latest version of iris installed before running any commands! You can upgrade to the latest version by running pip install --upgrade titan-iris.

You can now fine-tune a model on a particular dataset on the TitanML platform using iris finetune. The command sends a request to the backend with your specified model and dataset, along with some information about your desired task. For example:

iris finetune \
	--model google/electra-large-discriminator \
	--dataset squad_v2 \
	--task question_answering \
	--name my_test_squadv2 \
	--has-negative

This will fine-tune an ELECTRA Large model on the SQuAD v2 question-answering dataset using the default values for batch size, learning rate and number of training epochs (16, 2e-5 and 1, respectively). To specify your own values for these hyperparameters, you can include any or all of them as arguments:

iris finetune \
	--model google/electra-large-discriminator \
	--dataset squad_v2 \
	--task question_answering \
	--name test_finetune_squad \
	--has-negative \
	--batch-size 32 \
	--learning-rate 3e-5 \
	--num-epochs 10

Or in short form:

iris finetune -m google/electra-large-discriminator -d squad_v2 -t question_answering -hn -n test_finetune_squad -bs 32 -lr 3e-5 -ep 10
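
Note that --has-negative (-hn) tells the backend that the dataset contains unanswerable questions, which is what distinguishes SQuAD v2 from the original SQuAD. If you'd like to see what these negative examples look like before launching a job, you can inspect the dataset locally with the Hugging Face datasets library. This step is purely illustrative and not required by iris:

from datasets import load_dataset

# Load a small slice of the SQuAD v2 validation split
squad = load_dataset("squad_v2", split="validation[:100]")

# Unanswerable ("negative") examples have an empty answers list
negatives = [ex for ex in squad if len(ex["answers"]["text"]) == 0]
print(f"{len(negatives)} of {len(squad)} examples are unanswerable")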

The same applies to sequence and token classification tasks. For example, to fine-tune on a sequence classification task:

iris finetune \
	--task sequence_classification \
	--dataset glue \
	--subset mrpc \
	--model TitanML/Electra-Large-MRPC \
	--name test_finetune_mrpc \
	--text-fields sentence1 \
	--text-fields sentence2 \
	--num-labels 2 \
	--batch-size 32 \
	--learning-rate 3e-5 \
	--num-epochs 10
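
Here --text-fields names the dataset columns containing the input text (passed once per column) and --num-labels gives the number of classes. If you want to double-check these against the dataset's schema before submitting, here is a quick, optional sketch with the Hugging Face datasets library:

from datasets import load_dataset

# MRPC is a subset of GLUE: each example is a pair of sentences
# plus a binary equivalence label
mrpc = load_dataset("glue", "mrpc", split="train")

print(mrpc.column_names)             # ['sentence1', 'sentence2', 'label', 'idx']
print(mrpc.features["label"].names)  # ['not_equivalent', 'equivalent'], hence --num-labels 2

For token classification, the -ln flag maps each integer label id in the dataset to a human-readable tag name, and --labels-column names the column holding those ids: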
iris finetune \
	--model TitanML/Electra-Large-CONLL2003 \
	--dataset conll2003 \
	--subset conll2003 \
	--task token_classification \
	--name test_finetune_conll \
	-ln 0:O \
	-ln 1:B-PER -ln 2:I-PER \
	-ln 3:B-ORG -ln 4:I-ORG \
	-ln 5:B-LOC -ln 6:I-LOC \
	-ln 7:B-MISC -ln 8:I-MISC \
	--labels-column ner_tags \
	--batch-size 32 \
	--learning-rate 3e-5 \
	--num-epochs 10
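
The id-to-tag mapping above follows CoNLL-2003's NER label set. If you're unsure which ids a dataset uses, the tag names are stored in the dataset's feature metadata in id order, so you can read the mapping off directly (again an optional check using the Hugging Face datasets library):

from datasets import load_dataset

conll = load_dataset("conll2003", split="train")

# ner_tags is a sequence of class labels; .names lists the tags in id order
print(conll.features["ner_tags"].feature.names)
# ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']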
