Lightly

Software Development

Your Data. At Its Full Potential.

About us

We help companies improve machine learning models by curating vision data.

Website
https://www.lightly.ai/
Industry
Software Development
Company size
11-50 employees
Headquarters
Zurich
Type
Privately Held
Founded
2019

Updates

  • The Lightly x mimic meetup was a full house 🙌

    We packed the room for a fireside chat on multimodal AI, and the conversation did not disappoint. The panel talked about how teams are actually building with multiple modalities, the data challenges that come with it, and what AI means for how we engineer software.

    Oh, and the pizza was great too 🍕

    We're already thinking about the next one. If you missed this edition - keep an eye out! 👀 Thanks to everyone who attended!

  • Training computer vision models usually means facing a massive bottleneck: the need for huge labeled datasets. And let's be honest, generic models rarely fit specific, niche domains. That's why we built LightlyTrain. 🚀

    We've put together a quick 1-minute overview showing how you can build production-ready CV models using pretraining, fine-tuning, autolabeling, and distillation - all in one workflow.

    Here's how LightlyTrain changes your workflow:
    ✅ Fine-Tune in a Few Lines of Code: Load a pretrained backbone and fine-tune on your own labeled data for object detection, segmentation, or classification.
    ✅ No Labels? No Problem: Pretrain directly on raw, unlabeled images to build stronger, domain-adapted foundations before you ever touch a label.
    ✅ Better Starting Point = Better Fine-Tuning: Because the backbone already understands your domain, fine-tuning converges faster and needs far fewer labeled examples to hit strong performance.
    ✅ Automated Annotation: Cut manual labeling work with the autolabeling and distillation pipeline.
    ✅ Fully On-Premises: Keep your data 100% secure and run everything locally.

    Getting started is as simple as one command:
    💻 pip install lightly-train

    Check out the video below to see the full workflow in action - from pretraining on unlabeled data to fine-tuning a model ready for deployment.
    ▶️ Watch the full workflow in action: https://lnkd.in/e9fZgrQE
    🔗 Learn more and get started: https://lnkd.in/er9PH25g
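The "few lines of code" pretraining step can be sketched roughly as follows. This is a minimal sketch based on LightlyTrain's public quick-start; the exact argument names and model strings are assumptions, so check the docs linked above before relying on them. The import is guarded so the sketch also runs without the package installed.

```python
# Hypothetical sketch of a LightlyTrain self-supervised pretraining run.
# Argument names ("out", "data", "model") follow the public quick-start;
# treat them as assumptions and verify against the official docs.
try:
    import lightly_train
except ImportError:
    lightly_train = None  # package not installed; the sketch still builds the config


def build_pretrain_config(data_dir: str, model: str, out_dir: str) -> dict:
    """Collect the arguments for a pretraining run on unlabeled images."""
    return {
        "out": out_dir,    # where checkpoints and logs are written
        "data": data_dir,  # folder of raw, unlabeled images (no annotations needed)
        "model": model,    # backbone to pretrain, e.g. a torchvision model name
    }


config = build_pretrain_config("my_images/", "torchvision/resnet50", "out/pretrain")

if lightly_train is not None:
    lightly_train.train(**config)  # pretrain on unlabeled data
else:
    print("lightly-train not installed; run: pip install lightly-train")
```

After pretraining, the resulting domain-adapted backbone is what you would fine-tune on your (much smaller) labeled set, per the bullets above.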

  • 📣 Last but definitely not least, meet Igor Susmelj, Co-Founder & CTO at Lightly and one of the hosts of this Thursday's fireside chat.

    Igor studied Electrical Engineering at ETH Zurich and took Lightly through Y Combinator in 2021.

    👀 His take on why ML teams struggle? It's rarely the models. It's the data. Fragmented workflows, hard-to-trust pipelines, datasets that fall apart at scale - that's where most teams actually get stuck. Building a product around that reality, rather than around a single clever algorithm, has been the real challenge.

    That kind of thinking is exactly what this evening is about.

    👏 Thursday, April 23rd at 6pm, Lightly HQ Zurich. If you are a CTO or engineer building in Switzerland, this one is for you: https://luma.com/cdnhf3bu

  • Lightly reposted this post from mimic:

    As summer settles in, we're excited to team up with Lightly to host a curated AI meetup at their HQ.

    🗓️ Date: April 23 (Thursday)
    ⏲️ Time: 6pm
    📍 Location: Bahnhofstrasse 86, Zürich

    We'll keep it real. Featuring a fireside chat with:
    👉 Elvis Nava (Co-Founder & CTO at mimic)
    👉 Igor Susmelj (Co-Founder & CTO at Lightly)
    👉 Franziska Geiger (Senior ML Engineer at Cradle)
    👉 Tiago Kieliger (Co-Founder & CTO at Rivia)

    This is an invite-only event with limited capacity. Registration link in the comments!

  • 🎙️ Next up in our speaker lineup for the Lightly x mimic AI Meetup - meet Tiago Kieliger, Co-Founder & CTO at Rivia.

    Tiago studied cybersecurity at ETH and worked for the Swiss Department of Defense before co-founding Rivia in drug development. 🔬

    👉 He'll be bringing a perspective that's pretty rare in AI circles - what it actually takes to ship AI in a highly regulated industry where errors simply aren't an option.

    Join us this Thursday, April 23rd at 6pm at Lightly HQ in Zurich. Spots are limited: https://luma.com/cdnhf3bu

  • Lightly reposted this

    🗓️ Next week I'm co-hosting an AI meetup with our friends at mimic.

    We opened 60 spots. It got fully booked within 2 days. Flattering, really. But also a reminder that a *full* room isn't the same as a *good* room.

    I've been to enough tech events over the years to know that the thing that actually makes them worth going to is who's there. Not the agenda, not the snacks. The crowd.

    So we're being pretty deliberate about our waitlist. If you're an engineer/CTO building with AI, this one's for you. DM me for a spot or register here: https://luma.com/cdnhf3bu :)

  • We've added EUPE support to LightlyTrain 🧠

    EUPE is Meta's newest visual backbone, in the same family as DINOv2 and DINOv3 - but instead of being trained purely self-supervised, it's distilled from a bunch of other Meta foundation models. EUPE might just be the strongest universal image encoder out there right now, doing really well on both global and dense tasks.

    With EUPE in LightlyTrain, you can:
    • Use it as a teacher to distill into smaller, faster models
    • Fine-tune it directly for things like object detection or semantic segmentation

    It's fully integrated, so a few lines of code are all you need to start training (snippet below 👇).

    Links if you want to dig in:
    🔗 LightlyTrain — https://lnkd.in/esF-zV8i
    🔗 EUPE — https://lnkd.in/emeAruZu
    🔗 Object detection quick start — https://lnkd.in/egVCRwE9
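The first bullet above, using a large encoder as a teacher for smaller models, comes down to training the student to match the teacher's output distribution rather than hard labels. Here is a generic, plain-Python illustration of that distillation objective (an illustration of the general idea only, not LightlyTrain's actual implementation; the temperature value and toy logits are made up):

```python
import math


def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities; higher temperature = softer targets."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between softened teacher and student distributions.

    The student is pushed toward the teacher's full output distribution,
    which carries more signal per image than a one-hot label does.
    """
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))


# A student that agrees with the teacher incurs a lower loss than one that doesn't.
teacher = [4.0, 1.0, 0.5]
aligned_student = [3.8, 1.1, 0.4]
misaligned_student = [0.5, 1.0, 4.0]
loss_good = distillation_loss(teacher, aligned_student)
loss_bad = distillation_loss(teacher, misaligned_student)
```

In a real training loop this loss would be minimized over batches of images with both models producing logits per image; frameworks typically also mix in a standard label loss when labels exist.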

  • Teaching a model to recognize dogs in a 5,000-image unlabeled dataset? Masa did it with 4 examples and a couple of refinement clicks.

    In this demo she shows how LightlyStudio's few-shot classifier works: use semantic search to find a few images of what you're looking for, mark them as positive examples, and let the classifier learn from there. A quick refinement loop, a few corrections, and the model knows the difference between "dog" and "not dog" across the entire dataset. 🐕

    This is particularly useful when you have a large pile of raw unlabeled data and need to start organizing it fast, without setting up a full labeling pipeline first.

    See it in action in the video below 👇 Book a Demo to try LightlyStudio: https://lnkd.in/ewX3mPAs
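Few-shot classifiers of this kind typically operate on image embeddings: a handful of positive examples define a region in embedding space, and anything close enough gets the label. A generic nearest-centroid sketch of that idea (an illustration only, not LightlyStudio's actual implementation; the 2-D embeddings and threshold are made up, real embeddings come from a vision model):

```python
def centroid(vectors):
    """Mean vector of the positive examples."""
    n = len(vectors)
    dims = len(vectors[0])
    return [sum(v[d] for v in vectors) / n for d in range(dims)]


def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)


def classify(embeddings, positives, threshold=0.9):
    """Label each embedding 'dog' if it is close enough to the positive centroid."""
    c = centroid(positives)
    return ["dog" if cosine_similarity(e, c) >= threshold else "not dog"
            for e in embeddings]


# Made-up 2-D embeddings standing in for a real vision model's output.
positives = [[0.9, 0.1], [0.95, 0.05], [0.85, 0.2], [0.9, 0.15]]  # the 4 marked examples
dataset = [[0.92, 0.1], [0.1, 0.95], [0.88, 0.12]]
labels = classify(dataset, positives)  # ["dog", "not dog", "dog"]
```

The "refinement loop" in the post maps onto this sketch naturally: marking a misclassified image as positive or negative and recomputing the centroid (or threshold) shifts the decision boundary for the whole dataset at once.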
