Inspiration

Agentic AI systems like OpenClaw and toolkits like the Pi monorepo are becoming powerful enough to handle real work: messages, calendars, code reviews, flight check-ins, and more. What stands out to me is not a lack of capability but a lack of simple control. Even the best agents still require users to guide them through chat interfaces, menus, and follow-up prompts, especially when a task finishes and a decision is needed. That friction felt familiar from creative tools like Photoshop and Premiere Pro. I wanted to explore what it would look like to operate an agent through physical buttons and hardware instead of text, and the DevStudio challenge provided exactly that.

What it does

Everything! LogiClaw turns an AI agent into something you can operate through Logitech hardware. It maps agent capabilities onto the MX Creative Console and Actions Ring, so whenever a new event occurs the agent surfaces a clear next step - replying to a message, rescheduling a meeting, or approving a completed task - directly on physical controls. Users approve actions with buttons, select intent with the Actions Ring, and review the agent's full action history with the console dial, which opens a native desktop timeline view.

How we built it

LogiClaw is implemented as a plugin using the Logitech Actions SDK and integrates with two agent backends. One connects to an OpenClaw gateway, and the other connects to a custom sandboxed local agent based on tools from Pi-mono. The plugin supports predefined agent skills configured by the user, as well as dynamic action slots that the agent fills at runtime based on context. Turning the console dial opens a desktop application that displays a chronological view of agent actions and conversations.

Challenges we ran into

The main challenge was designing an interaction model that felt safe and predictable while still being flexible. Agent systems tend to be opaque, so mapping their behavior onto a few predefined Actions required careful constraints.
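One concrete form those constraints can take is an allowlist check before anything reaches the hardware: the agent may propose actions, but only actions from a predefined set are ever surfaced. The sketch below assumes hypothetical names (`ALLOWED_ACTIONS`, `validate`) and example action ids, not the shipped constraint set.

```python
# Minimal sketch of the constraint idea: agent proposals are filtered
# against a predefined allowlist, so button presses stay predictable.
# The allowlist contents are examples, not the actual configured set.
ALLOWED_ACTIONS = {"agent.reply", "agent.reschedule", "agent.approve"}

def validate(proposed: list[str]) -> list[str]:
    """Drop any agent-proposed action that is not explicitly allowed."""
    return [action for action in proposed if action in ALLOWED_ACTIONS]
```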

Accomplishments that we're proud of

  • A working MVP that integrates with both OpenClaw and a custom agent
  • Dynamic, agent-generated Actions surfaced on Logitech hardware
  • A dial-driven, native desktop timeline for reviewing agent history
  • I drew the crustacean on the cover myself :-) AI's great at many things, but sometimes a little human touch is needed

What's next for LogiClaw

Next steps include refining the UX, completing support for the Creative Console keypad and dial, and finishing the transition to a fully sandboxed custom agent backend. Longer term, LogiClaw could support packaged skill sets distributed through the Logitech Marketplace, making agentic workflows accessible to a wider audience without added complexity.

Built With

  • Logitech Actions SDK
  • OpenClaw
  • Pi-mono
