Documentation
Build efficient edge AI applications with Embedl Hub.
Optimize and deploy your model on any edge device with the Embedl Hub Python library:
- Compile your model for execution on the CPU, GPU, NPU, or other AI accelerators on your target devices.
- Quantize your model for lower latency and memory usage.
- Profile your model's latency and memory usage on real edge devices.
Embedl Hub logs your metrics, parameters, and profiling results, so you can inspect and compare runs on the web and reproduce them later.
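Embedl Hub handles quantization for you, but as background on the idea behind the second bullet above, here is a minimal, framework-agnostic sketch of affine int8 quantization. This is illustrative only: the function names are hypothetical and this is not the Embedl Hub API.

```python
import numpy as np

def quantize_int8(weights):
    """Map a float32 tensor onto 8-bit integers (affine quantization).

    Hypothetical helper for illustration; not part of Embedl Hub.
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    # One quantization step covers 1/255 of the value range.
    scale = (w_max - w_min) / 255.0
    zero_point = round(-w_min / scale)
    q = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float values."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(256, 256).astype(np.float32)
q, scale, zp = quantize_int8(weights)

# int8 storage is 4x smaller than float32.
print(weights.nbytes // q.nbytes)  # 4
```

Storing 8-bit integers instead of 32-bit floats cuts memory by 4x, and integer arithmetic is typically faster on edge accelerators; the cost is a small round-trip error, bounded by roughly half a quantization step per value.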


Get started
Follow the setup guide to get started with Embedl Hub. Then choose your path: use cloud providers to test across a wide range of devices, or connect your own hardware for fast experimentation.