These docs are outdated! Please check out https://docs.titanml.co for the latest information on the TitanML platform. If there's anything that's not covered there, please contact us on our Discord.
Super-fast optimised inference, even on local hardware like CPUs
Quickly experiment with running inference on different LLMs
Create local versions of ChatGPT & The Playground
Create inference servers that are local and private (think HF Inference Servers, but local)
All for generative models
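As a sketch of what "local and private inference servers" looks like in practice, the snippet below builds an HTTP request against a locally hosted generation endpoint. The endpoint path (`/generate`), port (`3000`), and payload shape (`{"text": ...}`) are assumptions for illustration only; consult docs.titanml.co for the actual API.

```python
# Hypothetical sketch of querying a local inference server.
# Endpoint, port, and payload shape are illustrative assumptions,
# not the documented Takeoff API.
import json
import urllib.request


def build_request(prompt: str,
                  url: str = "http://localhost:3000/generate") -> urllib.request.Request:
    """Build a POST request carrying a generation prompt as JSON."""
    payload = json.dumps({"text": prompt}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_request("What is the capital of France?")
# To actually send it (requires a running local server):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode("utf-8"))
```

Because the server runs on your own machine, prompts and completions never leave your hardware, which is the privacy advantage over hosted inference APIs.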