# Using a local model

{% hint style="danger" %}
These docs are outdated! Please check out <https://docs.titanml.co> for the latest information on the TitanML platform.\
\
If there's anything that's not covered there, please contact us on our [discord](https://discord.com/invite/83RmHTjZgf).
{% endhint %}

### How do I use a model I have saved locally?

If you have already fine-tuned a model, you might want to run it in the Takeoff server instead of a model downloaded from Hugging Face.

To do this, save the model to a folder inside the local \~/.iris\_cache directory.

For example, you can do this using the Hugging Face .save\_pretrained interface:

```python
model = ...     # your fine-tuned model, e.g. a transformers PreTrainedModel
tokenizer = ... # the matching tokenizer

# Save both into a subfolder of the Takeoff cache directory
model.save_pretrained('~/.iris_cache/<my_model_folder>')
tokenizer.save_pretrained('~/.iris_cache/<my_model_folder>')
```
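Since Takeoff looks the model up by folder name, it can be worth confirming the checkpoint actually landed where you expect before launching. A minimal stdlib sketch (`cache_dir` and `looks_like_saved_model` are hypothetical helpers, not part of iris; the `config.json` check relies on the fact that `save_pretrained` always writes one):

```python
from pathlib import Path

def cache_dir(model_folder: str) -> Path:
    """Path Takeoff will look in for a locally saved model."""
    return Path.home() / ".iris_cache" / model_folder

def looks_like_saved_model(model_folder: str) -> bool:
    """A transformers checkpoint saved with save_pretrained always
    contains a config.json; checking for it catches typos in the
    folder name before you start the server."""
    return (cache_dir(model_folder) / "config.json").is_file()
```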

Now you can start the Takeoff server using:

```bash
iris takeoff --model <my_model_folder>
```
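Once the server is up, you can send it generation requests over HTTP. As a sketch only: the default port 8000, the `/generate` endpoint, and the `{"text": ...}` payload shape are assumptions based on older Takeoff releases, so check the current docs for your version before relying on them. A stdlib-only client might look like:

```python
import json
from urllib import request

# Assumed defaults for an older Takeoff release -- verify against
# the docs for the version you are running.
TAKEOFF_URL = "http://localhost:8000/generate"

def build_request(prompt: str) -> request.Request:
    """Build a POST request carrying the prompt as a JSON body."""
    payload = json.dumps({"text": prompt}).encode("utf-8")
    return request.Request(
        TAKEOFF_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With a server running, you would send it like this:
# with request.urlopen(build_request("Hello, Takeoff")) as resp:
#     print(resp.read().decode("utf-8"))
```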

### Caching HF Models

The \~/.iris\_cache folder is also where models are saved once they have been downloaded from Hugging Face. This avoids lengthy repeated downloads of large language models.
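Because cached models can run to several gigabytes each, it can be useful to check how much disk the cache is taking up. A small stdlib sketch (`cache_size_bytes` is a hypothetical helper, not part of iris):

```python
from pathlib import Path

def cache_size_bytes(cache: Path = Path.home() / ".iris_cache") -> int:
    """Total size in bytes of all files under the cache directory."""
    if not cache.exists():
        return 0
    return sum(f.stat().st_size for f in cache.rglob("*") if f.is_file())

# Example: report the cache size in gigabytes
# print(f"{cache_size_bytes() / 1e9:.2f} GB")
```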
