
Using a local model



These docs are outdated! Please check out https://docs.titanml.co/docs/category/titan-takeoff for the latest information on the Titan Takeoff server. If there's anything that's not covered there, please contact us on our Discord.

How to use a model I have saved locally?

If you have already fine-tuned a model, you might want to run it in the Takeoff server instead of a model from Hugging Face.

To do this, save the model to a folder inside the local cache directory, ~/.iris_cache.

For example, you can do this using the Hugging Face save_pretrained interface:

import os

model = ...     # load your fine-tuned model
tokenizer = ... # load the matching tokenizer

# Expand '~' explicitly: save_pretrained does not expand it for you
save_dir = os.path.expanduser('~/.iris_cache/<my_model_folder>')
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)

Now you can start the Takeoff server using:

iris takeoff --model <my_model_folder>
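Before launching the server, it can be worth checking that the saved folder contains the files that save_pretrained writes, since a partial save will fail at load time. A minimal sketch, assuming the default cache location above; the helper name is ours, not part of the Takeoff CLI, and the exact file set varies by model type:

```python
from pathlib import Path


def cached_model_ready(model_folder: str, cache_dir: str = "~/.iris_cache") -> bool:
    """Check that a cached model folder contains the core files save_pretrained writes."""
    folder = Path(cache_dir).expanduser() / model_folder
    # config.json is written by model.save_pretrained;
    # tokenizer_config.json by tokenizer.save_pretrained
    required = ["config.json", "tokenizer_config.json"]
    return all((folder / name).exists() for name in required)
```

If this returns False, re-run the save step for whichever of the model or tokenizer is missing.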

Caching HF Models

The .iris_cache folder is also where models are saved once they have been downloaded from Hugging Face. This avoids lengthy repeated downloads of large language models.
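Because locally saved models and downloaded Hugging Face models end up in the same cache, a quick way to see which model folders are available is to list the directories under ~/.iris_cache. A minimal sketch; the helper name is an illustration, not part of the Takeoff tooling:

```python
from pathlib import Path


def list_cached_models(cache_dir: str = "~/.iris_cache") -> list[str]:
    """Return the names of model folders currently in the Takeoff cache."""
    cache = Path(cache_dir).expanduser()
    if not cache.exists():
        return []
    # Each cached model is a directory; ignore stray files
    return sorted(p.name for p in cache.iterdir() if p.is_dir())
```

Any name returned here can be passed directly to iris takeoff --model.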
