Which hardware should I deploy to?
These docs are outdated! Please see https://docs.titanml.co for the latest information on the TitanML platform. If anything isn't covered there, please contact us on our Discord.
We built Titan so that you can achieve high-quality inference even on much cheaper, less powerful hardware. Because we deploy with the Triton Inference Server, we can detect your target hardware, enabling TitanML to optimise your model with that hardware in mind. For the very best results, however, we recommend deploying to a GPU with Tensor Cores: this lets us apply the most advanced optimisation techniques available.
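If you're unsure whether your GPU has Tensor Cores, you can check its CUDA compute capability: Tensor Cores were introduced with NVIDIA's Volta architecture (compute capability 7.0), and all later generations (Turing, Ampere, Ada, Hopper) include them. Below is a minimal sketch of that check; the `has_tensor_cores` helper is illustrative, not part of the TitanML API, and the PyTorch query shown in the comment assumes you have `torch` with CUDA support installed.

```python
def has_tensor_cores(major: int, minor: int = 0) -> bool:
    # Tensor Cores first shipped with Volta (compute capability 7.0);
    # Turing (7.5), Ampere (8.x), Ada (8.9) and Hopper (9.0) all have them.
    return (major, minor) >= (7, 0)


# On a machine with PyTorch and a CUDA GPU, you could query the
# capability like this (hypothetical usage, not run here):
#   import torch
#   major, minor = torch.cuda.get_device_capability(0)
#   print(has_tensor_cores(major, minor))

# Examples against known architectures:
print(has_tensor_cores(6, 1))  # Pascal (e.g. GTX 1080): no Tensor Cores
print(has_tensor_cores(7, 5))  # Turing (e.g. T4): has Tensor Cores
```

Alternatively, `nvidia-smi` on recent drivers reports the same information via `nvidia-smi --query-gpu=name,compute_cap --format=csv`.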