Titan Train 🎓: Finetuning Service

Using iris finetune



These docs are outdated! Please check out https://docs.titanml.co for the latest information on the TitanML platform. If there's anything that's not covered there, please contact us on our discord.

Shortcut! If you'd rather use the GUI than the command line, you can find the command builder on the web app at app.titanml.co.

Remember to ensure you have the latest version of iris installed before running any command! You can update it by running pip install --upgrade titan-iris.
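
For example, a minimal upgrade-and-verify sequence (assuming pip is available on your PATH):

# Upgrade iris to the latest release
pip install --upgrade titan-iris
# Show the installed package metadata, including its version
pip show titan-iris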

You can fine-tune a model on a dataset of your choice on the TitanML platform using iris finetune. iris finetune sends a request to the backend based on the model and dataset you specify, along with some information about your desired task. For example:

iris finetune \
	--model google/electra-large-discriminator \
	--dataset squad_v2 \
	--task question_answering \
	--name my_test_squadv2 \
	--has-negative
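
Here the --has-negative flag signals that the dataset includes negative (unanswerable) examples, as SQuAD v2 does.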

This will fine-tune an ELECTRA Large model on the SQuAD v2 question-answering dataset using the default values for batch size, learning rate and number of training epochs (16, 2e-5 and 1, respectively). To specify your own values for these hyperparameters, you can include any or all of them as arguments:

iris finetune \
	--model google/electra-large-discriminator \
	--dataset squad_v2 \
	--task question_answering \
	--name test_finetune_squad \
	--has-negative \
	--batch-size 32 \
	--learning-rate 3e-5 \
	--num-epochs 10

Or in short form:

iris finetune -m google/electra-large-discriminator -d squad_v2 -t question_answering -hn -n test_finetune_squad -bs 32 -lr 3e-5 -ep 10
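
Here -m, -d, -t, -hn, -n, -bs, -lr and -ep are the short forms of --model, --dataset, --task, --has-negative, --name, --batch-size, --learning-rate and --num-epochs, respectively.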

The same applies to sequence and token classification tasks:

iris finetune \
	--task sequence_classification \
	--dataset glue \
	--subset mrpc \
	--model TitanML/Electra-Large-MRPC \
	--name test_finetune_mrpc \
	--text-fields sentence1 \
	--text-fields sentence2 \
	--num-labels 2 \
	--batch-size 32 \
	--learning-rate 3e-5 \
	--num-epochs 10
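
Note that --text-fields is passed once per input column: MRPC is a sentence-pair classification task, so both sentence1 and sentence2 are supplied.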
iris finetune \
	--model TitanML/Electra-Large-CONLL2003 \
	--dataset conll2003 \
	--subset conll2003 \
	--task token_classification \
	--name test_finetune_conll \
	-ln 0:O \
	-ln 1:B-PER -ln 2:I-PER \
	-ln 3:B-ORG -ln 4:I-ORG \
	-ln 5:B-LOC -ln 6:I-LOC \
	-ln 7:B-MISC -ln 8:I-MISC \
	--labels-column ner_tags \
	--batch-size 32 \
	--learning-rate 3e-5 \
	--num-epochs 10
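
Each -ln argument maps an integer class id in the ner_tags column to a human-readable label, here following the BIO tagging scheme used by CoNLL-2003.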