Which hardware should I deploy to?

We built Titan so that you can achieve high-quality inference even on cheaper, less powerful hardware. Because we deploy with the Triton Inference Server, we can detect your target hardware, enabling TitanML to optimise your model with that hardware in mind. For the very best results, however, we recommend deploying to a GPU with Tensor Cores - this allows us to apply the most advanced optimisation techniques.
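If you are unsure whether your GPU has Tensor Cores, one rule of thumb is to check its CUDA compute capability: NVIDIA GPUs with compute capability 7.0 (Volta) or higher include Tensor Cores. The sketch below is a hypothetical helper (not part of TitanML) that applies this check; in practice you could obtain the capability pair from a tool such as `nvidia-smi` or a framework query.

```python
def has_tensor_cores(major: int, minor: int) -> bool:
    """Return True if a CUDA compute capability indicates Tensor Cores.

    Tensor Cores first appeared in the Volta architecture,
    which has compute capability 7.0.
    """
    return (major, minor) >= (7, 0)


# Example capability pairs:
#   GTX 1080 (Pascal) -> (6, 1): no Tensor Cores
#   Tesla T4 (Turing) -> (7, 5): has Tensor Cores
#   A100 (Ampere)     -> (8, 0): has Tensor Cores
print(has_tensor_cores(6, 1))  # False
print(has_tensor_cores(7, 5))  # True
print(has_tensor_cores(8, 0))  # True
```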