# When should I use the Takeoff Server?

{% hint style="danger" %}
These docs are outdated! Please check out <https://docs.titanml.co> for the latest information on the TitanML platform.\
\
If there's anything that's not covered there, please contact us on our [Discord](https://discord.com/invite/83RmHTjZgf).
{% endhint %}

* [Super-fast, optimised inference, even on local hardware like CPUs](https://youtu.be/LvrEO_lNjcA)
* Quickly experiment with running inference on different LLMs
* [Create local versions of ChatGPT & The Playground](https://titanml.gitbook.io/iris-documentation/titan-takeoff-inference-server/chat-and-playground-ui)
* Create local, private inference servers (think HF Inference Servers, but on your own hardware)
* [All for generative models](https://titanml.gitbook.io/iris-documentation/titan-takeoff-inference-server/supported-models)
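As a rough sketch of the "local and private" workflow, the snippet below sends a prompt to a locally running Takeoff server over HTTP. The endpoint path (`/generate`), port (`8000`), and payload field name (`text`) are illustrative assumptions, not the confirmed API — check the current TitanML docs for the exact request format.

```python
import json
import urllib.request

# Assumed local endpoint -- verify the path and port against the Takeoff docs.
TAKEOFF_URL = "http://localhost:8000/generate"


def build_payload(prompt: str) -> bytes:
    """Encode a prompt as a JSON request body (field name is an assumption)."""
    return json.dumps({"text": prompt}).encode("utf-8")


def generate(prompt: str, url: str = TAKEOFF_URL) -> str:
    """Send a prompt to the local server and return the raw response body."""
    req = urllib.request.Request(
        url,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")


if __name__ == "__main__":
    # Requires a Takeoff server running locally; nothing leaves your machine.
    print(generate("What is the capital of France?"))
```

Because the server runs on your own hardware, prompts and completions never leave your network, which is the main draw over hosted inference APIs.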
