Supported models & tasks

These docs are outdated! Please check out https://docs.titanml.co for the latest information on the TitanML platform. If there's anything that's not covered there, please contact us on our Discord.

Titan Optimise was created to optimise models for non-generative language tasks. For generative model optimisation, check out Titan Takeoff.

Supported tasks

  • Text classification - sequence_classification or glue

    Note that GLUE, the General Language Understanding Evaluation benchmark, is a collection of sequence classification tasks used to evaluate natural language understanding systems. You'll need to specify which columns in your dataset contain the sequences to be classified, as well as how many labels there are. Using glue as the task and a GLUE task as the dataset is a handy shortcut (see the first sketch after this list).

  • Question answering - question_answering

    The most common datasets for question answering are the SQuAD datasets, but TitanML does support others. If you use a different training dataset, you must indicate to Iris whether your dataset contains unanswerable questions (see how to do this here, and the question answering sketch after this list).

  • Token classification - token_classification

    TitanML also supports classification tasks involving individual tokens (including Named Entity Recognition). As with sequence classification, you must indicate which columns in your input dataset are to be classified, and how many labelled classes they are to be classified into (see the token classification sketch after this list).

  • Causal language modelling - language_modelling

    Causal language modelling is how large language models like GPT-4 and Claude are trained. The model learns to predict the next word (technically, the next token) given a string of previous words (tokens). TitanML supports language modelling for LLMs like OPT and Pythia. Large models will automatically use state-of-the-art parameter-efficient training. See below for supported models.

  • Conditional language modelling (sequence to sequence) - language_modelling

    TitanML also supports conditional language modelling as a task. Conditional language modelling (also known as sequence-to-sequence modelling) involves producing output tokens conditioned on both the previous output tokens and an additional input sequence; examples include translation and summarisation. Provide language_modelling as the iris task, and the platform will automatically deduce the task type from the model used. See below for supported models, and the language modelling sketch after this list.
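
For text classification, the things you'd need to specify are the text columns and the label count. Here's a minimal sketch using the Hugging Face datasets library (not the TitanML API itself) to inspect a GLUE task; SST-2 and its column names are real, but how you pass these values to iris is covered in the configuration docs:

```python
# A minimal sketch (Hugging Face `datasets`, not the TitanML API): inspect a
# GLUE task to find the text column and label count that iris needs to know.
from datasets import load_dataset

# SST-2 is one of the GLUE sequence classification tasks.
dataset = load_dataset("glue", "sst2", split="train")

print(dataset.column_names)                   # ['sentence', 'label', 'idx']
print(dataset.features["label"].num_classes)  # 2 labelled classes
```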
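For question answering, the key dataset property is whether some questions are unanswerable. A minimal sketch, again using Hugging Face datasets rather than iris itself: SQuAD v2 marks unanswerable questions with an empty answer list, while the original SQuAD has none.

```python
# A minimal sketch: check whether a QA dataset contains unanswerable questions.
# SQuAD v2 marks them with an empty list of answer texts.
from datasets import load_dataset

dataset = load_dataset("squad_v2", split="validation")

has_unanswerable = any(len(example["answers"]["text"]) == 0 for example in dataset)
print(has_unanswerable)  # True -> tell Iris the dataset has unanswerable questions
```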
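For token classification, the dataset carries one label per token rather than one per sequence. A minimal sketch with CoNLL-2003, a standard NER dataset, showing the label column and class count you'd need to report:

```python
# A minimal sketch: a token classification dataset (CoNLL-2003 NER) has one
# label per token; the label column and its class count are what iris needs.
from datasets import load_dataset

dataset = load_dataset("conll2003", split="train")

print(dataset.column_names)  # ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags']
per_token_label = dataset.features["ner_tags"].feature  # a ClassLabel per token
print(per_token_label.num_classes)  # 9 classes: O, B-PER, I-PER, B-ORG, ...
```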
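Finally, the two language modelling flavours behind the single language_modelling task name. This sketch uses Hugging Face transformers to illustrate the task types themselves, not TitanML's training code: a causal model such as OPT continues a prompt token by token, while a sequence-to-sequence model such as T5 conditions its output on a separate input sequence.

```python
# A minimal sketch of the two language modelling flavours (Hugging Face
# `transformers`, not TitanML's training code).
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer

# Causal language modelling: predict the next token from the previous tokens.
causal_tok = AutoTokenizer.from_pretrained("facebook/opt-125m")
causal_lm = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
prompt = causal_tok("The capital of France is", return_tensors="pt")
print(causal_tok.decode(causal_lm.generate(**prompt, max_new_tokens=5)[0]))

# Conditional language modelling: output conditioned on an input sequence,
# e.g. summarisation or translation.
s2s_tok = AutoTokenizer.from_pretrained("t5-small")
s2s_lm = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
source = s2s_tok("summarize: Titan Optimise compresses large language models.",
                 return_tensors="pt")
print(s2s_tok.decode(s2s_lm.generate(**source, max_new_tokens=20)[0],
                     skip_special_tokens=True))
```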

Supported models

The TitanML platform supports optimising models for sequence_classification, token_classification, and question_answering. language_modelling is coming soon! Only the following model families are supported.

Never use DistilBERT again!

There's usually little reason to use DistilBERT! You'll typically get far better results by starting with a larger, stronger model, like BERT or ELECTRA, and then using TitanML to compress it down to a similar size to DistilBERT.

Are there models that you use that you would like to see supported? Please let us know at hello@titanml.co or on our Discord.
