Releases: roboflow/inference

v1.1.1

13 Mar 17:41
af014f7


🚀 Added

🌀 Execution Engine v1.8.0

Steps gated by control flow (e.g. after a ContinueIf block) can now run even when they have no data-derived lineage — meaning they don't receive batch-oriented inputs from upstream steps. Lineage and execution dimensionality are now derived from control flow predecessor steps. Existing workflows are unaffected.

  • 🔀 Control flow lineage — The compiler now tracks lineage coming from control flow steps (e.g. branches after ContinueIf). When a step has no batch-oriented data inputs but is preceded by control flow steps, its execution slices and batch structure are taken from those control flow predecessors.
  • 🔓 Loosened compatibility check — Previously, steps with control flow predecessors but no data-derived lineage would fail at compile time with ControlFlowDefinitionError. That check is now relaxed: lineage is derived from control flow predecessors when no input data lineage exists. The strict check still runs when the step does have data-derived lineage.
  • New step patterns — Steps triggered only by control flow that don't consume batch data now compile and run correctly. For example, you can send email notifications or run other side-effect steps after a ContinueIf without wiring any data into parameters like message_parameters — the step will execute once per control flow branch.
  • 🐛 Batch.remove_by_indices nested batch fix (breaking) — When removing indices via Batch.remove_by_indices, nested Batch elements are now recursively filtered by the same index set. Previously, only the top-level batch was filtered while nested batches were left unchanged, which could cause downstream blocks to silently process None values or fail outright.
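The nested-batch fix can be illustrated with a small sketch. This is not the actual Batch implementation from inference - it is a simplified, hypothetical model of the behaviour described above, where removing indices now also filters nested Batch elements by the same index set:

```python
class Batch:
    """Simplified stand-in for a workflow batch (illustrative only)."""

    def __init__(self, content):
        self._content = list(content)

    def remove_by_indices(self, indices_to_remove):
        kept = []
        for idx, element in enumerate(self._content):
            if idx in indices_to_remove:
                continue  # drop the element at a removed index
            if isinstance(element, Batch):
                # New behaviour: nested batches are filtered recursively
                # with the same index set (previously left unchanged).
                element = element.remove_by_indices(indices_to_remove)
            kept.append(element)
        return Batch(kept)

    def __len__(self):
        return len(self._content)
```

Under the old behaviour, a nested batch would keep elements at the removed indices, which downstream blocks could then misinterpret as valid data.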

Please review our change log 🥼 which outlines all introduced changes. PR: #2106 by @dkosowski87

Warning

One breaking change is included due to a bug fix in Batch.remove_by_indices with nested batches (see above) - impact is expected to be minimal.

🚧 Maintenance

Full Changelog: v1.1.0...v1.1.1

v1.1.0

11 Mar 15:59
6d5e50a


ℹ️ About 1.1.0 release

This inference release brings important changes to the ecosystem:

  • We have deprecated Python 3.9, which has reached EOL.
  • We have not made inference-models the default backend for running predictions - this change has been postponed until version 1.2.0.

🚀 Added

🧠 Qwen3.5

Thanks to @Matvezy, inference now supports the new Qwen3.5 model.

Qwen3.5 is Alibaba's latest open-source model family (released Feb 2026), ranging from 0.8B to 397B parameters. The headline feature is native multimodal (text + vision) support. inference and Workflows support the small 0.8B-parameter version.

The model is available only with the inference-models backend, released in inference-models 0.20.0.

🪄 GPT-5.4 support

Thanks to @Erol444, the LLM Workflows block now supports GPT-5.4, keeping inference current with the latest OpenAI model lineup.

⚙️ Selectable inference backend for batch processing

Following up on the inference 1.0.0 release, Roboflow clients can now select which inference backend is used for batch processing - giving more fine-grained control when mixing legacy and new engine workloads.

Using inference-cli, one can specify which model backend should be used: inference-models or old-inference.

inference rf-cloud batch-processing process-images-with-workflow \
    --workflow-id <your-workflow> \
    --batch-id <your-batch> \
    --api-key <your-api-key> \
    --inference-backend inference-models
# or - for videos
inference rf-cloud batch-processing process-videos-with-workflow \
    --workflow-id <your-workflow> \
    --batch-id <your-batch> \
    --api-key <your-api-key> \
    --inference-backend inference-models

The same can be configured in the Roboflow App and via the HTTP integration - check out the Swagger docs.

Caution

Currently, the default backend is old-inference, but that will change in the near future. Roboflow clients should verify the new backend now, and make the necessary adjustments in their integrations if they want to keep using the old-inference backend.

🦺 Maintenance

🐍 Drop of Python 3.9 and upgrade to transformers>=5

We've ported all public builds to Python versions newer than 3.9, which had been slowing down the onboarding of new features. Thanks to this deprecation, we were able to migrate to transformers>=5 and enable the new Qwen 3.5 model.

Other changes

Full Changelog: v1.0.5...v1.1.0

v1.0.5

06 Mar 20:40
c3eedf0


What's Changed

New Contributors

Full Changelog: v1.0.4...v1.0.5

v1.0.4

04 Mar 14:13
52a37e4


What's Changed

Full Changelog: v1.0.3...v1.0.4

v1.0.3

03 Mar 18:35
79519d4


What's Changed

New Contributors

Full Changelog: v1.0.2...v1.0.3

v1.0.2

27 Feb 21:22
f4b5fdf


What's Changed

Full Changelog: v1.0.1...v1.0.2

v1.0.1

23 Feb 12:24
2031bcd


What's Changed

Full Changelog: v1.0.0...v1.0.1

v1.0.0

20 Feb 16:07
13a6e40


🚀 Added

💪 inference 1.0.0 just landed 🔥

We are excited to announce the official 1.0.0 release of Inference, previewed two weeks ago with the 1.0.0rc1 release candidate.

Over the past years, Inference has evolved from a lightweight prediction server into a widely adopted runtime powering local deployments, Docker workloads, edge devices, and production systems. After hundreds of releases, the project has matured — and so has the need for something faster, more modular, and more future-proof.

inference 1.0.0 closes one chapter and opens another. This release introduces a new prediction engine that will serve as the foundation for future development.

⚡ New prediction engine: inference-models

We are introducing inference-models, a redesigned engine for running models, focused on:

  • faster model loading and inference
  • improved resource utilization
  • better modularity and extensibility
  • cleaner separation between serving and model runtime
  • support for different backends, including TensorRT

Important

Alongside inference 1.0.0, we also released the first stable build of inference-models, 0.19.0. You can use the new engine in inference - just set the env variable USE_INFERENCE_MODELS=True.
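For local experiments, the flag can be set in the environment before inference starts. A minimal sketch - only the USE_INFERENCE_MODELS variable name comes from this release; everything else is generic Python:

```python
import os

# Opt in to the new inference-models engine; leaving the variable
# unset (or "False") keeps the legacy engine, so existing
# deployments are unaffected.
os.environ["USE_INFERENCE_MODELS"] = "True"

use_new_engine = os.environ.get("USE_INFERENCE_MODELS", "False") == "True"
```

In Docker deployments, the same flag can be passed as a container environment variable (e.g. `-e USE_INFERENCE_MODELS=True`).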

Caution

The new inference-models engine is wrapped with adapters so it can serve as a drop-in replacement for the old engine. We are making it the default engine on the Roboflow platform, but clients running inference locally have USE_INFERENCE_MODELS set to False by default. We would like all clients to test the new engine - when the flag is not set, inference works as usual.
In approximately 2 weeks, with the inference 1.1.0 release, we will make inference-models the default engine for everyone.

Caution

inference-models is a completely new backend, and we've fixed a lot of problems and bugs along the way. As a result, predictions from your model may differ - but according to our tests, they are better quality-wise. That said, we may still have introduced some minor bugs - please report any problems and we will do our best to fix them 🙏

🛣️ Roadmap

Today's release is just the start of broader changes in inference - the plan for the future is as follows:

  • shortly after release, we will complete our work on the Roboflow platform - including migrating the small fraction of models not yet onboarded into the new registry used by inference-models and adjusting automations on the platform. Until this is finished, clients who very recently uploaded or renamed models may see HTTP 404 errors - contact us for support in such cases.
  • there will be consecutive hot-fixes (if needed), released as 1.0.x versions.
  • clients running inference locally should test the inference-models backend now, as in approximately 2 weeks it will become the default engine.
  • We still have some work to do in 1.x.x - mainly providing patches - but we are starting the march towards 2.0, which should bring new quality to other components of inference - stay tuned for updates.
  • You should expect new contributions to inference to be based on the inference-models engine; they may not work if you don't migrate.

Caution

One of the problems we have not addressed in 1.0.0 is model cache purging - the new inference-models engine uses a different local cache structure than the old engine. As a result, an inference server with USE_INFERENCE_MODELS=True does not perform clean-up on the volume with models pulled from the platform. If you run locally, this should generally not be an issue, since we expect clients to use only a limited number of different models in their deployments.
If you use a large number of models, or your disk space is tight, you should perform periodic clean-ups of /tmp/cache when running the new inference. This issue will be addressed before the 1.1.0 release.
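A periodic clean-up like the one suggested above can be sketched as follows. The /tmp/cache path comes from the note; the age threshold and function name are illustrative assumptions, not part of inference:

```python
import os
import time

def purge_stale_cache(cache_dir="/tmp/cache", max_age_days=7):
    """Remove cached files not modified within max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)  # stale cached model artifact
                removed.append(path)
    return removed
```

Run it from cron (or any scheduler) on hosts where disk space is tight; adjust the retention window to match how often your deployment rotates models.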

🎨 Semantic Segmentation in inference

Thanks to @leeclemnet, the DeepLabV3Plus segmentation model was onboarded to inference and can now be used by clients.

📐 Area Measurement block 🤝 Workflows

Thanks to @jeku46, we can now measure area sizes with Workflows.

🚧 Maintenance

🏅 New Contributors

Full Changelog: v0.64.8...v1.0.0

v0.64.8

13 Feb 20:44
2fa30cf


💪 Added

  • Fisheye cameras in camera calibration block by @Erol444 in #1996
    The calibration block previously supported only polynomial calibration, which does not handle fisheye distortions well. This change adds support for fisheye calibration.
  • Heatmap block by @Erol444 in #1986
    This change adds a heatmap block (built on supervision's heatmap annotator), which supports both:
      • detections, producing a heatmap of where detections occurred
      • tracklets, which ignore stationary objects (default: on), so the heatmap reflects movement rather than static objects
    (demo video: heatmap2.mp4)

🚧 Maintenance

Full Changelog: v0.64.7...v0.64.8

v1.0.0rc1

06 Feb 18:45
9de60d5


inference 1.0.0rc1 — Release Candidate

Today marks an important milestone for Inference.

Over the past years, Inference has grown from a lightweight prediction server into a widely adopted runtime used across local deployments, Docker, edge devices, and production systems. Hundreds of releases later, the project has matured significantly - and so has the need for a faster, more modular, and more future-proof foundation.

inference 1.0.0rc1 is a preview of the 1.0.0 release, which will close one chapter and open another: it introduces a new prediction engine that will become the foundation for all future development.

🚀 New prediction engine - inference-models

We are introducing inference-models, a redesigned execution engine focused on:

  • faster model loading and inference
  • improved resource utilization
  • better modularity and extensibility
  • cleaner separation between serving and model runtime
  • stronger foundations for future major versions

The engine is already available today in:

  • inference-models package → 0.18.6rc8 (RC)
  • inference package and Docker → enabled with env variable
USE_INFERENCE_MODELS=True

inference-models wrapped within the old inference is a drop-in replacement. This allows testing the new runtime without changing existing integrations.

Important

Predictions from your models may change - but generally for the better! inference-models is a completely new engine for running models; we have fixed a lot of bugs and made it multi-backend - capable of running onnx, torch, and even trt models! It automatically negotiates with the Roboflow model registry to choose the best package for your environment. We have already migrated almost all Roboflow models to the new registry and are working hard to achieve full coverage soon!

📅 What happens next

  • Next week

    • Stable Inference 1.0.0
    • Stable inference-models release
    • Roboflow platform updated to use inference-models as the default engine
  • In the coming weeks

    • inference-models becomes the default engine for public builds (USE_INFERENCE_MODELS becomes opt-out, not opt-in)
    • continued performance improvements and runtime optimizations

🔭 Looking forward - the road to 2.0

This engine refresh is only the first step. We are starting work toward Inference 2.0, a larger modernization effort similar in spirit to the changes introduced with inference-models.

Stay tuned for future updates!