The Latent AI Efficient Inference Platform (LEIP) optimizes neural networks for edge devices, supports repeatable delivery pipelines that scale, and rapidly produces models for different hardware targets.
LEIP adds edge capabilities to your current MLOps pipelines and standardizes trusted, scalable model delivery. LEIP lets you move your models to the data rather than your data to the cloud: models optimized for compute, memory, and power can run directly on edge hardware.
The LEIP SDK includes pre-configured, customizable templates called Recipes that let you quickly develop and deploy models to common hardware targets with the best possible results.