Using a local model
These docs are outdated! Please check out the latest TitanML platform documentation for up-to-date information. If there's anything that's not covered there, please contact us.
If you have already fine-tuned a model, you might want to run it in the Takeoff server instead of a model from Hugging Face.
To do this, save the model to the local folder `~/.iris_cache`.
For example, you can do this using the Hugging Face `.save_pretrained` interface:
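A minimal sketch using the `transformers` library; the checkpoint name `my-finetuned-model` and the cache subfolder name are placeholders for your own fine-tuned model:

```python
import os

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load your fine-tuned model and its tokenizer
# (replace "my-finetuned-model" with your own checkpoint, or use the
# model/tokenizer objects you already have in memory after fine-tuning).
model = AutoModelForCausalLM.from_pretrained("my-finetuned-model")
tokenizer = AutoTokenizer.from_pretrained("my-finetuned-model")

# Save both the weights and the tokenizer into a subfolder of the iris cache
# so the Takeoff server can pick them up from there.
save_dir = os.path.expanduser("~/.iris_cache/my-finetuned-model")
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)
```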
Now you can start the Takeoff server using:
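A sketch of the launch command, assuming the subfolder name you used under `~/.iris_cache` is passed as the model name; the exact `iris takeoff` flags may differ for your Takeoff version:

```bash
# Launch Takeoff against the locally saved model; "my-finetuned-model"
# matches the folder name used in the save step above.
iris takeoff --model my-finetuned-model --device cuda
```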
The `~/.iris_cache` folder is also where we save models once they have been downloaded from Hugging Face. This avoids lengthy repeated downloads of large language models.