ShiftMag https://shiftmag.dev/ Insightful engineering content & community

How Uber Engineers Use AI Agents
https://shiftmag.dev/how-uber-engineers-use-ai-agents-8617/
Fri, 13 Mar 2026 15:39:36 +0000

At the Pragmatic Summit, I heard firsthand that Uber engineers aren’t just using AI to write code anymore; they’re assigning it work. Let’s see how that plays out.

The post How Uber Engineers Use AI Agents appeared first on ShiftMag.


At the Pragmatic Summit, I listened to Uber’s Director of Engineering, Anshu Chadha, and Principal Engineer, Ty Smith, discuss how one of the world’s largest technology companies is integrating generative AI into its engineering workflow.

The key takeaway they shared:

At Uber, engineers are beginning to assign coding tasks to AI agents much like managers distribute work among their teams.

Say hello to my new colleague – AI

Uber has been using AI for years in systems like its matching platform, but bringing generative AI into the day-to-day work of engineers is a newer step.

According to Anshu, the goal isn’t to replace engineers – it’s to help them get more done.

We’re not pushing for AI to automate all humans in the company. Our goal is to let engineers focus on creative work rather than toil.

Practically speaking, repetitive tasks such as code migrations, upgrades, documentation, and bug fixes are now being handled by AI-powered agents. According to Anshu, it frees engineers to build features and enhance the user experience.

The end of hands-on programming as we know it?

One of the biggest shifts Uber has observed is the transition from traditional AI-assisted coding tools toward agent-based workflows.

Tools like GitHub Copilot made coding faster by helping developers in the moment, but now we’re entering a new era: AI agents that can work independently, tackling tasks without needing someone at the keyboard.

Back in 2022 and 2023, developer velocity saw a modest 10–15% increase. Today, the paradigm has shifted to what we call “peer programming,” where developers can delegate workloads to AI agents and intervene or redirect them as needed.

This approach essentially positions engineers as tech leads directing AI agents. Developers define the goal, while agents execute parts of the work in the background and return results for review.

Uber has built an internal platform that plugs AI agents right into its engineering workflow, mostly on Michelangelo, its machine learning platform. This gives access to models from OpenAI and Anthropic, as well as Uber’s own internal models.

On top of that, they’ve created agent-driven tools that tap into company data (source code, documentation, Jira tickets, Slack) so the AI agents have enough context to actually get work done.

AI tackles toil, but gaining trust is the real challenge

At the conference, a standout demo was Uber’s “Minions” system. Engineers submit a prompt via web, Slack, or command line, and it generates code changes and opens pull requests automatically. Ty says:

You give the agent a prompt and expect a pull request as the output. A few minutes later the system notifies you on Slack that the task is complete and the PR is ready to review.

The platform also helps engineers craft better prompts by suggesting improvements when instructions are unclear, increasing the likelihood that the agent will succeed.
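Uber’s internal implementation is not public, so the details below are speculation, but the workflow Ty describes maps onto a simple delegate-and-review loop. Every name and field in this sketch is invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical sketch of a prompt-in, PR-out agent workflow like the
# "Minions" system described above. All names and fields are invented;
# this is not Uber's code.
@dataclass
class PullRequest:
    branch: str
    summary: str

def run_agent_task(prompt: str) -> PullRequest:
    # 1. An agent would plan and edit code based on the prompt (stubbed here).
    branch = "agent/" + prompt.lower().replace(" ", "-")[:30]
    # 2. It opens a pull request and notifies the requester (e.g. via Slack).
    return PullRequest(branch=branch, summary=f"Automated change: {prompt}")

pr = run_agent_task("Upgrade logging library")
print(pr.branch)  # agent/upgrade-logging-library
```

The engineer’s role in this loop is exactly the one described above: review the resulting PR, then merge, redirect, or discard.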

When Uber first rolled out agentic workflows, they found about 70% of submitted tasks were “toil” – repetitive maintenance work developers usually avoid. These predictable tasks are ideal for AI, creating a feedback loop: the more AI handles, the more developers are willing to delegate.

Still, scaling AI isn’t just about technology. Supporting engineers as they adjust from traditional workflows and gain confidence in AI-generated code is an important focus.

Sharing success stories sparks faster AI adoption

Uber found that peer-driven adoption worked better than mandates. Anshu points out:

The most successful tactic has been sharing wins. When engineers see examples from their peers where AI helped them accomplish something impressive, adoption spreads quickly.

But measuring real impact remains tricky.

Uber tracks metrics like developer satisfaction, productivity, and code output, but connecting them to business outcomes is harder. “These are activity metrics, not business outcomes,” Anshu says.

To fix this, Uber is working to track the full development lifecycle (from design to production) to see how AI truly speeds up product delivery.

AI is powerful but EXPENSIVE

Cost is also becoming an issue. Running large language models at scale requires expensive compute resources, and AI infrastructure spending has grown dramatically, Anshu explains:

Since 2024, our costs have gone up at least six times. This technology is amazing, but the cost of AI is too high.

That’s why Uber is investing in AI infrastructure that picks the right model for each task, balancing performance and cost. With the AI landscape changing fast, the company continuously evaluates new tools and updates its stack:

What’s successful this month may be overtaken next month. So we constantly test new tools, gather feedback from developers, and adapt.

To conclude: For Uber, generative AI is now central to engineering. And their experience shows that success depends as much on culture, cost control, and ongoing experimentation as on the technology itself.

License Laundering and the Death of Clean Room
https://shiftmag.dev/license-laundering-and-the-death-of-clean-room-8528/
Tue, 10 Mar 2026 14:10:37 +0000

A canary died in the open source coal mine and a hundred people showed up to argue about the autopsy.

The post License Laundering and the Death of Clean Room appeared first on ShiftMag.


Last week, a Python library called chardet became the most contested piece of open source software on the internet. And not because it does anything glamorous – it detects character encodings and figures out whether your file is UTF-8 or Shift-JIS.

It’s plumbing. The kind of thing that used to sit inside requests (and still does if you have it installed) which means it’s probably somewhere in your dependency tree whether you know it or not.
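If you’ve never touched it, chardet’s public surface is essentially one call, `chardet.detect(raw_bytes)`, which returns a guessed encoding and a confidence score. Here’s a self-contained toy sketch of the kind of heuristics involved; the real library scores bytes against statistical models per charset, not these two shortcuts:

```python
# Toy sketch of encoding detection. chardet's actual algorithm uses
# per-charset frequency models; this only illustrates the problem.
def guess_encoding(data: bytes) -> str:
    # An explicit byte-order mark is the easiest signal.
    if data.startswith(b"\xef\xbb\xbf"):
        return "UTF-8-SIG"
    # Bytes that decode cleanly as UTF-8 are a strong signal on their own.
    try:
        data.decode("utf-8")
        return "utf-8"
    except UnicodeDecodeError:
        # chardet would now score the bytes against charset models
        # (Shift-JIS, windows-1252, ...). We give up.
        return "unknown"

print(guess_encoding("naïve".encode("utf-8")))  # utf-8
```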

Here’s what happened:

The maintainer who kept this plumbing running for twelve years used Claude to rewrite it from scratch, then published the result under MIT instead of LGPL. The original author, who disappeared from public life in 2011, came back from the dead to object. More than two hundred comments followed. Most of them were unhelpful.

What makes this interesting isn’t the law, it’s that every single participant in this fight chose a position that protects their ego over one that would have actually fixed anything.

Three licenses walk into a bar…

If you don’t live in licensing land, here’s the short version – three licenses matter for this story: MIT, GPL, and LGPL. They all let you use, modify, and distribute the code. The difference is what they demand in return.

MIT says: do whatever you want. Use it in your startup, your side project, your proprietary product. Keep the copyright notice, and we’re done. No further obligations. This is why companies love it – their lawyers read it once, nod, and move on.

GPL (General Public License) says: you can use and modify this code, but if you distribute the result, you have to release your entire program’s source code under the same license. That’s the “copyleft” mechanism, keeping free software free. The trade-off: it’s viral – if GPL code touches your code, your code inherits the obligation. Most companies treat GPL like radioactive material; legal teams won’t approve it for anything shipped to customers.

LGPL (Lesser GPL) is the compromise. It says: you can use this library inside proprietary software without the copyleft spreading to your code (that’s the “Lesser” part). But if you modify the library itself, those modifications must stay under LGPL. You must include the license, provide the library’s source, and let users swap in their own modified version. That last part matters: it’s meant for libraries like chardet – shared plumbing usable everywhere, but with improvements flowing back to the community.

The practical consequence: MIT code can go anywhere. GPL code can only go into other GPL projects. LGPL sits in between: any project can use it, but nobody can take improvements private.

The Python standard library requires permissive licensing (MIT, BSD, or similar) because anything in stdlib ships with every Python installation on earth, including inside proprietary products. LGPL’s requirements (source disclosure, relinkability, copyleft on modifications) make that impossible. That one restriction is what set this entire fight in motion.

The release that started it all.

The maintainer should have forked

Dan Blanchard has been primary maintainer of chardet since 2013, contributing nearly seven hundred commits versus forty-eight from the next person. He aimed to relicense the library for Python standard library inclusion – a goal on record since 2014. It’s reasonable: LGPL prevents stdlib inclusion, and chardet is one of Python’s most widely used packages.

So, he used Claude Code to write a clean reimplementation. Ran JPlag plagiarism detection. Got 0.04% average similarity to the old codebase. Published the design documents. Published the implementation plans. Released it as version 7.0.0 under MIT.

And he did all of this under the same package name, in the same repository, on the same PyPI listing.

That’s the mistake – not the rewrite, not the AI. Not even the license change in isolation. The mistake is claiming your code is “an independent work, not a derivative” while simultaneously shipping it as the next version of the thing you say it’s independent from.

You don’t get to have it both ways: if 7.0.0 is an independent work, it’s not chardet. Call it chardet-ng. Call it chardetect. Call it encoding-detector. Ship it as a new package. Let people migrate on their own terms. The Python standard library doesn’t care what the package is called on PyPI; it cares about the license and the code quality.

There’s a real question buried here that most people skipped over: who actually controls the PyPI listing? Dan has been the sole active maintainer for over a decade. Could he even publish a fresh package under the chardet name without owning the existing listing? PyPI doesn’t have a “fork the namespace” button. The infrastructure assumes continuity.

That’s a genuine constraint, not just a convenience play. But it doesn’t change the outcome – if the code is independent, it deserves an independent identity, even if the migration is harder.

But Dan didn’t do that, because the value isn’t in the code – the value is in the name. In the twelve years of trust built by that name. In the fact that thousands of requirements.txt files already have chardet in them.

He knows this. He said as much in the issue: “It’s not like this was a thing I just popped into last week.” Right. That’s the point. The name carries weight that the code, by itself, does not. And the name isn’t his to relicense.

The mob should have shown up years ago

242 comments. People comparing Dan to a sex offender. People offering to fund lawsuits. People from a group called “Monadic Sheep” volunteering to take over the project. The FSF was invoked. DMCA was invoked incorrectly. Someone brought up trademark law. Someone posted a Rust rewrite just to prove a point.

Where were all these people for the last twelve years?

Dan maintained this library alone. No funding. No co-maintainers. No help. The other two people on the chardet team haven’t committed since 2017 at the latest – one of them not since 2012. The original author deleted his entire internet presence in 2011. This is one of the most depended-upon packages in the Python ecosystem, and it was held together by a single person on their own time.

Now that person does something people don’t like, and suddenly everyone has opinions about governance and stewardship and the spirit of free software.

Mike Hoye nailed it in the thread:

If the end state of open source projects is that devs are left to work alone for years on the keystone projects of this jenga tower we’re calling modern infrastructure, and then we collectively jump all over them when they turn to the kind of help that, however reprehensible it might be, actually shows up to help, then this entire FOSS project is just a popularity contest where the losers join a slow, lonely suicide pact.

That’s not comfortable to read. It’s accurate.

The people most outraged by this license change are people who benefited from Dan’s work for a decade without lifting a finger. Consumed the output of a copyleft license without contributing back. Relied on a single maintainer without offering support. And now they’re furious he made a unilateral decision without consulting them.

If you want a voice in how a project is governed, you have to be present when the project needs governing. Not just when it does something you don’t like.

The last time anyone other than Dan Blanchard contributed to chardet, Donald Trump was just starting his first term.

The AI optimists should stop celebrating

Armin Ronacher wrote a blog post about this called “AI And The Ship of Theseus.” He’s excited. He sees AI rewrites as a way to finally escape the GPL, which he views as a restriction on sharing:

If you throw away all code and start from scratch, even if the end result behaves the same, it’s a new ship.

With respect to Armin, whose work I deeply admire, this framing is dangerous.

What he’s describing is license laundering. Take copyleft code. Feed it to a model that was trained on that code. Ask the model to produce something functionally equivalent. Point at the output and say “look, no similarity.” The fact that a plagiarism detector can’t find matching tokens doesn’t mean the work is independent. It means the laundering was effective.
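To make concrete what “no similarity” actually measures: tools like JPlag compare normalized token streams, not raw text (JPlag itself uses greedy string tiling over token sequences, which also catches reordering). This toy Jaccard sketch only illustrates the idea that renaming every identifier already drives the score toward zero:

```python
import re

# Toy similarity metric over identifier tokens. Real plagiarism
# detectors like JPlag compare token *sequences*; this set-based
# Jaccard index is only an illustration of the principle.
def token_similarity(src_a: str, src_b: str) -> float:
    tokens = lambda s: set(re.findall(r"[A-Za-z_]\w*", s))
    a, b = tokens(src_a), tokens(src_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Functionally identical code, every name changed (hypothetical snippets):
old = "def detect(buf): return sniff_bom(buf) or scan_escapes(buf)"
new = "def guess(data): return check_header(data) or run_models(data)"
print(round(token_similarity(old, new), 2))
```

The two snippets do the same thing, yet the score is already low. That’s precisely why a near-zero similarity number proves absence of copying tokens, not absence of derivation.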

If this technique is legitimate, every copyleft project in existence is one Claude session away from becoming MIT. Or proprietary. The same trick works in both directions.

Someone in the GitHub thread made the sharpest observation of the entire debate: take a leaked Windows source code dump, run it through an LLM, and release the output as open source. Is that acceptable? If not, explain why chardet is different. The mechanism is identical. The only variable is whether you sympathize with the copyright holder.

Ronacher also points out that:

Vercel happily reimplemented bash but got visibly upset when someone reimplemented Next.js in the same way.

He means this as a critique of hypocrisy. It’s actually a critique of his own position. Everyone is fine with license laundering when it benefits them. Nobody is fine with it when it’s their code being laundered. That’s not a principled stance. That’s convenience.

The celebration of AI-assisted relicensing as “exciting” or “progress” only works if you assume that copyleft licenses are a mistake that needs a technological workaround. If you think authors should have the right to choose how their work is used, this should terrify you. Not because the law is clear, but because it isn’t. And the people with the resources to push the boundaries are the ones who benefit from copyleft disappearing.

The man who defined open source says you already lost

Bruce Perens showed up in a separate chardet issue. If you don’t know the name: he co-founded the Open Source Initiative and wrote the Open Source Definition. He’s the person who decided what “open source” means.

His position should make everyone uncomfortable.

To the copyleft defenders, he said:

The courts have not sided with plaintiffs in finding AI work to be infringing so far, because the law as it stands today is built primarily around the concept of literal copying, cut and paste of the actual text.

The AI doesn’t do that. It produces statistically probable output from a blended model of everything it’s been trained on. The result is “unrecognizable as derived from any one source.”

Then he added this:

I was hoping the courts would go a different way than they have so far. My present conclusion is that the wrong side may have won, but they won.

Read that again. The man who defined open source licensing thinks the wrong side won. And he’s telling you to act accordingly.

To the AI enthusiasts celebrating, he offered no comfort either:

I am not evangelizing this. This might not be the world I would have liked to have, but it’s the one we got.

And to the companies wondering what to do, he was blunt:

I do not recommend rejecting an AI-mediated Open Source program with verified low-similarity to other works on the basis of legal risk at this time.

Not because it’s right. Because the law, as it currently works, doesn’t have the tools to stop it. “So, yes, it’s copying, but it’s not the kind of copying the court is going to prosecute as a copyright violation.” (He expanded on this in an interview with The Register.)

Perens is describing a world where copyright law was built for photocopiers and the technology outran it. The copyleft purists are right on principle. The AI laundering crowd is right on current law. And the gap between those two things is where every open source project now lives.

Nobody in the chardet thread wants to sit with this. The purists want to believe the law will catch up. The optimists want to believe the law was always wrong. Perens is telling both sides that the law is what it is, it’s probably not going to change fast enough to matter, and you should plan accordingly.

That’s a f*****g eulogy, not a victory speech.

The copyleft purists should stop pretending they’re owed

On the other side of the thread, people are arguing that Dan owes the LGPL, the FSF, and the original author something that goes beyond what the license actually says.

The LGPL says derivative works must be released under the same license. The core legal question is whether 7.0.0 is a derivative work. That’s a question for a judge, not for a GitHub comment thread. And it’s genuinely unclear. The JPlag numbers suggest structural independence. The fact that Claude was trained on the original code suggests something murkier.

Some commenters went further than the legal point: Dan should step down, never be trusted, his work is a supply chain risk, and twelve years of maintenance entitle him to nothing.

That’s not a principled defense of copyleft. That’s resentment wearing a license as a mask.

A license is a legal instrument. It grants and restricts specific rights. It does not create a moral hierarchy where the original author has permanent authority over a project they abandoned fifteen years ago, simply because they chose the license. The LGPL doesn’t say “the maintainer must defer to the original author in perpetuity.” It says derivative works must carry the same license.

If the code is truly independent, the LGPL doesn’t apply. If it’s not, the LGPL requires the license to be reverted. Those are the only two outcomes the license contemplates. “The maintainer should resign in shame” is not one of them.

Dan’s mistake was strategic, not moral. He should have made a new project. He didn’t, and now he’s in a mess. But treating him like he committed a crime against the commons, when he’s the only person who actually showed up to maintain the commons for over a decade, is selective outrage at its worst.

The only person who did their job right

Mark Pilgrim opened the issue: four paragraphs, no insults, no legal threats, no grandstanding.

I respectfully insist that they revert the project to its original license.

Then he stopped talking.

He made his position clear, provided his reasoning, and let others respond. He didn’t engage with the two hundred comments that followed. Didn’t threaten lawsuits or moralize. He stated a fact as he understands it and left.

Mark Pilgrim was a hero of mine. I grew up on Dive Into Python. His blog was one of those places where you went to learn how to think about the web, not just how to build for it. When he deleted his entire internet presence in 2011, it hit a lot of us hard. People called it an “infosuicide.” Whatever his reasons were, the web lost something real that day. Seeing his name show up in a GitHub issue in 2026 was – I don’t know. Not closure. But something.

Jason Scott vouched for his identity. And then he did something remarkable: he stayed out of it. Jason Scott is the kind of person who could give a two-hour talk on this topic without repeating himself. He’s an archivist, a historian, and nobody’s fool. The fact that he showed up, confirmed Mark was Mark, and then chose not to add his voice to the pile – that’s restraint most people in that thread couldn’t manage. Sometimes the most useful thing you can do is not talk.

The person with the most legitimate claim to outrage was the least outraged person in the thread. The person most qualified to add commentary chose silence. There’s a lesson in that, if anyone’s paying attention.

Meanwhile, in the real world

While the GitHub thread debated philosophy, someone who works at NVIDIA opened a separate issue with a different framing entirely. No ideology. No license theory. Just a practical assessment from someone who has to get software approved by a legal team before it ships. (They explicitly noted their opinions don’t represent NVIDIA’s.)

The title: “v7.0.0 presents unacceptable legal risk to users due to copyright controversy.”

The conclusion:

chardet v7.0.0 is absolutely toxic. If my employer’s open source review legal people got wind of it, I seriously doubt that they’d approve v7.0.0 and up for any use under any circumstances whatsoever.

This is the part that should have stopped everyone cold.

Dan’s stated goal was to make chardet more widely adopted. Get it into the Python standard library. Remove the LGPL barrier that kept companies from contributing.

Instead, the rewrite made chardet less usable than it was before. Under LGPL, any company could use it freely – LGPL is specifically designed to be non-viral for library consumers. Under the disputed MIT, no company with a functioning legal team will touch it. Not because MIT is worse than LGPL, but because the provenance is radioactive. The license dispute itself is the contamination.

Dan’s own user is telling him:

I can’t use this anymore. The license isn’t the problem. The uncertainty is. And uncertainty is worse than the restriction ever was.

This is what happens when you optimize for a theoretical audience (stdlib committee, hypothetical future contributors) instead of the actual one (the thousands of projects that depend on your library right now). You trade a known constraint for an unknown risk. And in enterprise software, unknown risk is the one thing nobody can accept.

What this is actually about

This fight isn’t about chardet, it’s about three things crashing into each other at once, and nobody wants to untangle them because each thread, pulled separately, leads somewhere uncomfortable.

Thread 1: who owns a project? Open source has no good answer for what happens when the original author leaves and a sole maintainer carries the project for a decade. The license governs the code. Nothing governs the name, the reputation, the PyPI listing, the trust. Dan accumulated something real over twelve years. It’s not copyright. It’s not a trademark. But it’s not nothing, and the current system has no framework for recognizing it or limiting it.

Thread 2: AI makes clean-room arguments meaningless. The entire concept of a clean-room implementation (where you build something from scratch without ever looking at the original code, so you can prove your version is independent) assumes that knowledge contamination is binary. Either you’ve seen the code or you haven’t. LLMs break this model completely. The model has “seen” the code during training. The developer has seen the code during years of maintenance. The output has near-zero structural similarity.

Is that independence, or is it effective laundering? Nobody knows. No court has ruled. The first ruling will set precedent for every copyleft project in every language.

Thread 3: copyleft has a sustainability problem. LGPL kept chardet from entering the Python standard library for over a decade. It kept a sole maintainer trapped in a licensing box that actively prevented the project from growing. The license did exactly what it was designed to do, and the result was a critical dependency maintained by one exhausted person. If your license’s primary effect is preventing adoption and discouraging contribution, you should at least acknowledge that outcome before invoking the license as sacred.

None of these threads have clean answers. But here’s what would have worked: Dan creates a new project called chardetect under MIT. He announces that chardet 6.x is the final LGPL release and will receive security fixes only. He points the community to the new project. Mark has no grounds to object because the new project doesn’t claim to be chardet. The Python stdlib gets its MIT implementation. Everyone who depends on chardet can migrate on their own schedule. Nobody’s trust gets violated.

That didn’t happen because it would have required Dan to give up the one thing that made the rewrite valuable: the name.

The uncomfortable question

Pull up your dependency tree. Find the packages maintained by a single person. Check when they last committed. Check who else has commit access.
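A starting point, using only Python’s standard library: list what’s installed and who claims authorship. Package metadata can’t show commit history or co-maintainers (for that you have to check the repository itself), so treat this as a prompt for a manual audit, not an answer:

```python
from importlib import metadata

# Map each installed distribution to its declared Author metadata.
# This is only a rough bus-factor prompt: metadata says nothing about
# who actually commits, or how recently.
def authors_in_tree() -> dict:
    out = {}
    for dist in metadata.distributions():
        name = dist.metadata.get("Name") or "unknown"
        out[name] = dist.metadata.get("Author") or "unknown"
    return out

for pkg, author in sorted(authors_in_tree().items())[:10]:
    print(f"{pkg}: {author}")
```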

That’s your chardet. It’s sitting there right now. And when the maintainer finally snaps, burns out, or makes a decision you don’t like, you’ll have opinions about governance too.

The question is whether you’ll have earned them.

Further reading: Simon Willison’s analysis of the chardet dispute, Armin Ronacher’s “AI And The Ship of Theseus”, Bruce Perens in The Register, the original GitHub issue, and the legal risk issue.

The original Mozilla research on universal character set detection that predates all of this. The US Copyright Office report on AI and copyrightability. Google v. Oracle on API copyrightability and fair use.

The journey of a lone female software developer
https://shiftmag.dev/the-journey-of-a-lone-female-software-developer-2876/
Fri, 06 Mar 2026 07:59:00 +0000

Being the only female software developer in a team is both unique and challenging – and it happens a lot. This is my story.

The post The journey of a lone female software developer appeared first on ShiftMag.


I have to be honest; this path is laden with high expectations, stereotypes, and the constant pressure to prove oneself.

But it’s also a journey of resilience, empowerment, friendship, and breaking barriers.

So, you may ask, what does it feel like to be the only girl in a team or a company? In some cases, it’s not a good feeling.

Who’s going to organize a party?

Arriving at a site and noticing all eyes on you, simply because of your gender, can be disconcerting.

Women in tech understand that it can take time to find friends, as there may not be many female peers who can relate to their experiences. You long for someone who can reassure you and help alleviate the stares. It often feels like a game of musical chairs in your mind, where you anticipate being the one left standing.

Therefore, women must exert additional effort, determination, and speed to establish their position in an environment where they are frequently perceived as outliers.

Also, day-to-day work life for female engineers can go beyond coding – it often entails being both voluntarily and involuntarily nominated as the diversity and non-technical-stuff champion.

In my previous jobs, I learned that being assigned certain tasks often reflects an underlying expectation for women to fulfill specific roles. Such tasks included organizing meetings and lunches, buying gifts for other employees on various holidays, arranging and preparing rooms for meetings and taking notes from them, or representing the company at external events to ensure ‘representation and diversity.’

For some reason, these requests were always the first to go to me or possibly another female colleague. It’s a delicate balance between embracing the role of a team player and ensuring I don’t inadvertently box myself into stereotypes.

But, while organizing team events and managing administrative tasks, I’ve learned the importance of volunteering for projects that push my limits, not just those that fit into stereotypical female roles.

Stand out but also fit in

The pressure to stand out while also fitting in is a constant companion for some of our colleagues.

Our actions, I’ve noticed, can be magnified – a double-edged sword. Every mistake I make can seem bigger, but my successes are also more visible.

By being “The Only One,” practically everyone around you knows you by name – yours alone. This makes people more likely to notice and remember your actions.

I used to feel reluctant to ask questions or clarify doubts, worrying that doing so might reinforce stereotypes about women’s technical abilities, even though seeking clarification is a normal part of the learning and development process.

This visibility has forced me to constantly confront imposter syndrome and the feeling that I have to work twice as hard to be considered at least half as good. It’s a struggle in which I’ve had to learn that my worth is not diminished by my gender and that certain limitations exist only in my head.

My journey has been riddled with moments that remind me that I am navigating a space that was not originally designed for me – from the awkwardness some colleagues exhibited around women to being labeled “too loud, too pushy” in discussions, while male colleagues displaying similar behavior were described as “assertive” and “passionate.”

There were times when I presented some topic or joined a discussion, and I could feel the dynamics and energy change immediately – people were more skeptical, answered questions as if I had joined the team yesterday, or I got the feeling they were trying to disagree with me by force.

It’s like I was constantly being measured by a different standard that fluctuates between being too feminine and not feminine enough.

One of my biggest challenges has been integrating into teams where I am the exception. While trying to adapt to such an environment, it can be tempting to “become one of the guys.” It’s the kind of survival tactic that, while effective, often makes you replicate these uncool behaviors rather than change things for the better.

If I had to give any advice on how to find oneself in such a situation, it would be to communicate very directly with others and not be afraid to speak up for yourself. Don’t doubt yourself. If someone interrupts you or tries to take over what’s yours – speak up.

But if that doesn’t work, find a new team – one that accepts you both as a woman and as an engineer.

Embrace the uniqueness

Despite these challenges, I’ve discovered that my unique position as the only woman on the team can bring noteworthy perspectives.

My journey in navigating male-dominated spaces has honed my communication skills, enabling me to mediate complex discussions with all kinds of people with relative ease. It can also positively influence team dynamics and help with conflict resolution and customer interactions.

My experiences have taught me the importance of empathy and emotional intelligence in tech – a field often criticized for lacking both. These “soft skills,” which I bring to the table, complement the team’s technical expertise, driving us toward more nuanced and holistic solutions.

I have learned not to take failures, criticism, and differing opinions personally. In my opinion and experience, in male teams, everything is talked about much more straightforwardly, without being crude, and nobody has a problem with it. We discuss ideas and actions, not individuals. If someone says, “This idea is stupid because of …” – no harm, no foul.

Once you have experienced being “The Only One” and fully faced it, you will forget that you are a “female engineer” and start to see yourself as an individual co-worker, equal to everyone else.

You will not be surprised by any similar situation in your life or future jobs. You will know your value, and you will always stick to it.

The woman who coined the term “software engineering”

Diversity and inclusion in tech require a collective effort, not an individual pursuit.

Looking ahead, I draw inspiration from pioneer women who have shattered ceilings and paved the way for others like me. Their stories are not just tales of personal achievement but beacons of what women can accomplish in the face of adversity.

If you’re not familiar with Margaret Hamilton: a name synonymous with the Apollo missions, she led the software engineering division at MIT under NASA’s Project Apollo. Her groundbreaking work in developing the onboard flight software for the Apollo missions was critical to the success of landing astronauts on the Moon.

Hamilton dedicated herself to her work and innovatively approached software development, thereby coining the term “software engineering.” She pioneered this discipline at a time when the aerospace engineering community had yet to recognize the crucial role of software.

Her vision and tenacity in a field dominated by men during the 1960s demonstrate the profound impact one person can have on technology and space exploration. Hamilton’s legacy teaches me the power of perseverance and the importance of pioneering new frontiers, regardless of the obstacles.

On the other hand, Susan Wojcicki revolutionized YouTube as its CEO from 2014 to 2023. Under her leadership, YouTube not only grew into a global platform but also became a critical space for voices from diverse backgrounds to be heard.

Her advocacy for gender equality and diversity in the tech industry is evident in her initiatives to promote female representation and inclusion. Her efforts to implement family-friendly policies at YouTube, including extended maternity leave, have set new standards for supporting women in the workplace.

These stories illuminate the path and assure me that the journey towards diversity and inclusion in tech is not a solitary one but a collective endeavor enriched by the contributions of those who dare to dream and break barriers.

I hope it continues to inspire the next generation of female developers. It’s a reminder that our differences are our strengths, and through embracing them, we can reshape the tech industry to be more inclusive, innovative, and reflective of the world it serves.

The post The journey of a lone female software developer appeared first on ShiftMag.

20 developers share their unfiltered thoughts on AI https://shiftmag.dev/20-developers-share-their-unfiltered-thoughts-on-ai-8454/ Thu, 05 Mar 2026 14:43:26 +0000 https://shiftmag.dev/?p=8454 What do you really think about AI? Be honest, as David Beckham would say.

The post 20 developers share their unfiltered thoughts on AI appeared first on ShiftMag.


When it comes to AI, few are indifferent – some rely on it heavily, while others remain skeptical:

I was probably the most skeptical person about AI, but when I started using it, I realized it’s just a tool.

Some even say, “It’s still not good enough for me. I’ll probably use it in a couple of years once it matures.”

But for many, AI is now essential: it speeds up tasks, supports research, and even offers crash courses in new technologies.

Love it or hate it, it’s here to stay.

AI opens a world of endless possibilities

Developers see AI’s potential in all sorts of ways. Some use it like a supercharged search engine to dig up code documentation. Others point to breakthroughs in medicine, like AI-assisted cancer screening or discovering new cures.

Before, you could only imagine trying something – now I can spend 20–30 minutes and see what’s possible.

DeepMind’s AlphaFold, which finally solved the decades-old protein folding problem, was a standout. “For me, that was the first time I thought, Oh man, this is really something.”

… and it isn’t here to take your job

Worried about AI taking over your job? Most developers aren’t. Sure, it can handle repetitive tasks, but it can’t take responsibility. “AI will never ask me why, only humans can challenge, understand, and question,” said a senior developer.

The general agreement: engineers, mathematicians, and doctors aren’t going anywhere. AI is a tool to help us, not a replacement for us. But fears persist outside the office too: many worry that society is becoming too reliant on AI, leading to a loss of critical thinking.

Curiosity is the best defense against fear.

Still, there’s hope. Curiosity, participants agreed, is the antidote to fear. Engaging with AI directly (experimenting, learning, and testing its limits) reduces anxiety and keeps humans in control.

Curious what your colleagues think about AI? Watch the video, share your thoughts, and maybe even agree that the best use case could be… the Will Smith eating spaghetti videos.

The Glossary You Must Read If You Wanna Talk About AI https://shiftmag.dev/the-glossary-you-must-read-if-you-wanna-talk-about-ai-8413/ Wed, 04 Mar 2026 15:08:43 +0000 https://shiftmag.dev/?p=8413 I often hear AI terms used loosely, so I put together this guide to explain key concepts like agents, tools, and LLMs clearly.

The post The Glossary You Must Read If You Wanna Talk About AI appeared first on ShiftMag.


AI terminology can be confusing, especially when words like agents, skills, tools, and LLMs get used interchangeably.

That’s why I put together this glossary as a quick reference, to explain these concepts and help everyone, technical or not, talk about AI clearly.

Agent Skill

An agent skill is a predefined capability or behavior that an AI agent uses to accomplish specific tasks like searching the web, writing code, sending emails, or reading files. Skills give agents a structured way to interact with tools, APIs, or data sources, making them more reliable and reusable across workflows. Think of them as modular “superpowers” you can plug into an agent.

At a minimum, skills are just folders the agent reads, containing logic, instructions, assets, templates, and more. Most of today’s state-of-the-art agent apps let you create your own custom skills.

MCP

MCP (Model Context Protocol) is an open standard that lets AI agents connect to external tools and data sources consistently. Instead of creating a custom integration for every service (like Slack, Google Drive, or GitHub…) MCP provides a universal “plug-in” format, allowing any MCP-compatible server to communicate with any MCP-compatible AI.

Think of it as USB-C, but for AI tool integrations.

Agent Tool

A tool is a function (code) that an AI agent can execute when it decides to. That’s why each tool has a name and a description, which influence when the model chooses to use it (for example, “Use this function to pull the latest tickets from a Jira project”).

Besides the name and description, the function contains the code that the AI agent runs with the required arguments. For example, the agent could call:

jira_fetch_tickets(project="AI", limit=10)

Tools are also the components that power MCP servers. In the Open WebUI project, users can even write custom Python tools that agents can invoke.

Large Language Model (LLM)

An AI model within the deep learning spectrum, primarily designed for language understanding and content generation. LLMs excel at processing and generating human-like text.

Token

In Generative AI, a token is the smallest unit of information a language model processes. Depending on the language and the model’s design, a token can represent a whole word, part of a word, or even a single character. Tokens are the building blocks that language models use to understand and generate text.

Context Window

The context window is the number of tokens an LLM can process as input or generate as output. Input and output limits are usually different, with the input capacity typically much larger than the output.
For example, GPT-4o has an input limit of 128,000 tokens and an output limit of 16,384 tokens.
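As a rough illustration of working within those limits, here is a sketch that budget-checks a prompt. The “one token is roughly four characters” heuristic is a common rule of thumb for English text, not an exact tokenizer, and the limits below are just the GPT-4o figures quoted above.

```python
# Rough context-window budget check (heuristic, not a real tokenizer).

INPUT_LIMIT = 128_000   # e.g. GPT-4o input tokens
OUTPUT_LIMIT = 16_384   # e.g. GPT-4o output tokens

def rough_token_count(text: str) -> int:
    # ~4 characters per token is a common approximation for English.
    return max(1, len(text) // 4)

def fits_context(prompt: str, max_output_tokens: int = 1_000) -> bool:
    return (rough_token_count(prompt) <= INPUT_LIMIT
            and max_output_tokens <= OUTPUT_LIMIT)
```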

Fine-Tuning

Fine-tuning is the process of modifying a language model’s neural network using your own data. It’s different from simply adding documents to a conversation or adjusting prompts, which don’t change the model’s underlying structure (see RAG for an alternative approach).

Retrieval Augmented Generation (RAG)

Retrieval-Augmented Generation (RAG) is a system that uses a vector database to store ingested data, such as documents, web pages, and other sources. When a question is asked, relevant data is retrieved and combined with the question before being sent to a language model (LLM). The LLM itself doesn’t change, but it “sees” the retrieved information, allowing it to answer based on this additional context.
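The flow can be sketched in a few lines of Python. The character-frequency “embedding” below is a deliberately toy stand-in for a real embedding model and vector database, and rag_prompt simply prepends the retrieved chunks to the question before it would be sent to the LLM.

```python
# Toy RAG sketch: embed, retrieve by similarity, prepend context to the prompt.

def embed(text: str) -> list[float]:
    # Toy embedding: letter-frequency vector (real systems use an embedding model).
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def rag_prompt(question: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}"
```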

Prompt

A prompt is the user’s input that kicks off the model’s text generation. It guides the model to produce relevant and coherent responses based on the context or question. Prompts can be simple or detailed, shaping the quality and direction of the output.

ChatGPT

ChatGPT is an OpenAI product built on the GPT family of large language models (LLMs). As a product, it can use different LLMs, such as GPT-4o, GPT-4o-mini, and o1.

AI Agent

An AI agent is a software system powered by LLMs that performs tasks, answers questions, and automates processes for users. They can range from simple chatbots to advanced digital or robotic systems capable of running complex workflows autonomously. Key features include planning, using tools, perceiving their environment, and remembering past interactions, which help them improve performance over time.

Prompt Injection

Prompt injection is a type of cyberattack on large language models (LLMs), where malicious inputs are disguised as normal prompts to manipulate the model’s behavior or output. These attacks can make the model ignore safeguards, reveal sensitive information, or carry out unauthorized actions.

Google Gemini

Google’s family of Large Language Models (LLMs).

Anthropic Claude

Anthropic’s family of Large Language Models (LLMs).

Meta Llama

Meta’s family of Large Language Models.

LLM Parameters

LLM parameters are the components within a large language model that determine its behavior and capabilities. Learned during training, they include weights and biases that help the model understand and generate language. Generally, more parameters mean a smarter model, but they also require more computing power, especially memory (RAM), to run.

Copilot

Copilot is Microsoft’s branding for different AI agents, such as:

  • GitHub Copilot, which assists with coding
  • Copilot 365, which helps with Office and Windows tasks

System Prompt

A system prompt is a set of instructions or guidelines given to a language model to set its behavior, tone, and limits during a conversation.

Prompt Engineering

Prompt engineering is the practice of designing and refining prompts to optimize a language model’s performance and output. It involves crafting specific inputs that guide the model to produce the desired responses, improving accuracy, relevance, and coherence.

Digital Twin

Digital twins are virtual representations of assets, people, or processes and their environments that simulate strategies and optimize behaviors. In the CPaaS space, this usually refers to AI agents that mimic people using audio and video modalities.

Multimodal

Multimodal refers to the ability of AI systems to process and combine multiple types of data inputs (text, images, audio, video) to perform tasks or generate outputs. This approach allows AI models to understand and create content across different modalities, resulting in more comprehensive and context-aware applications.

Vector Database

Unlike traditional databases that store structured data in tables, vector databases are optimized for operations like similarity search, allowing efficient retrieval of data points that are mathematically close to a given query vector.

This capability is essential for applications such as recommendation systems, image recognition, natural language processing, and other AI-driven tasks where data is represented as vectors. For a common implementation, see RAG (Retrieval-Augmented Generation).

Hybrid Search

Hybrid search combines the strengths of vector search and traditional full-text search to improve the relevance of retrieved results. Vector search captures the semantic meaning of queries, matching based on context and intent, while full-text search ensures precise keyword matches.

By blending these approaches, hybrid search increases the likelihood of retrieving the most relevant documents, even when queries are vague or phrased differently from the source content. This boosts the accuracy of retrieval and enhances the overall effectiveness of the RAG (Retrieval-Augmented Generation) pipeline.
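A minimal sketch of the blending idea, with the semantic score passed in as a number (in a real system it would come from vector search) and a toy word-overlap keyword score. Production systems often use reciprocal rank fusion rather than the plain weighted sum shown here.

```python
# Toy hybrid scoring: weighted sum of a semantic score and a keyword score.

def keyword_score(query: str, doc: str) -> float:
    # Fraction of query words that appear in the document.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_score(semantic: float, keyword: float, alpha: float = 0.5) -> float:
    # alpha weights semantic similarity against exact keyword matches.
    return alpha * semantic + (1 - alpha) * keyword
```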

Embedding Model

An embedding model is a machine learning model trained to convert input text into numerical vectors, which can then be used for vector similarity search. Embedding models are a key part of the RAG (Retrieval-Augmented Generation) pipeline, as they transform user questions into vector representations.

OpenAI Shares How They’re Turning Engineers into AI Team Leads https://shiftmag.dev/openai-shares-how-theyre-turning-engineers-into-ai-team-leads-8262/ Mon, 02 Mar 2026 15:56:59 +0000 https://shiftmag.dev/?p=8262 Roles aren’t disappearing - capabilities are expanding, and often the problem isn’t the system, it’s the prompt. I saw that firsthand at this year’s Pragmatic Summit in San Francisco.

The post OpenAI Shares How They’re Turning Engineers into AI Team Leads appeared first on ShiftMag.

Six months ago, if someone had told me that engineers would start naming their AI agents and treating them like teammates, I probably would’ve rolled my eyes.

Honestly, even today, it still sounds a little… absurd.

That is, until I heard directly at the Pragmatic Summit in San Francisco that this is happening right now inside OpenAI.

Vijaye Raji and Thibaut Sottiaux from OpenAI say AI is shifting development from manual coding to guiding AI teams (setting goals and guardrails) while speeding up work and keeping core roles essential.

Close the laptop. Join the meeting. Come back to finished code. 

Raji (CTO of Applications at OpenAI) has been at OpenAI for only six months, and he has already seen Codex go from just a tool, to an extension, to an agent… and now it actually feels like a teammate.

Inside OpenAI, they recently launched something called a Codex Box.

Basically, engineers can grab a dev box on the server, fire off prompts, and let the system run things in parallel while they just work from their laptop. Sounds amazing, right?

Photo by Ivan Brezak Brkan

Some engineers are using hundreds of billions of tokens per week across multiple agents – not for fun, but because that’s just how they build now. Raji said:

Software development inside OpenAI isn’t a single-threaded human loop anymore. It’s parallel. And that is going to become the new normal.

Designers and PMs are writing code. What’s going on?

Sottiaux (Engineering lead for Codex, OpenAI) described how the Codex team works today.

“It changes constantly. Almost week to week,” he said. “We look for bottlenecks, solve them, and then a new one pops up.”

At first, the slowest part was code generation, then it became code review, and now the friction often comes from understanding user needs faster – parsing feedback from Twitter, Reddit, and SDK experiments and turning that into product direction.

Speed up coding, and suddenly reviews become the bottleneck. Fix reviews, and CI/CD slows things down. That rhythm has become normal. Instead of debating every trade-off in design docs and discarding alternatives, teams try multiple implementations in parallel and focus on what actually works.

“Trying things is cheaper,” Sottiaux added. “So we try more things.”

And the roles? They’re blurring. Designers are shipping more code, PMs are writing and testing ideas – it’s not that roles are disappearing, it’s that everyone’s capabilities are expanding.

Usually the problem is the prompt, not the system

What about long-running, autonomous tasks?

AI coding tools might seem like advanced autocomplete – type a few words, get a few lines back. Helpful, yes, but still reactive. Sottiaux challenged that:

Give the model a meaningful, well-defined objective, and it doesn’t just respond – it runs, for hours.

Inside OpenAI, the model runs on its own for hours, sometimes producing full reports. Engineers review the results, pick what works, and feed it back – this isn’t just suggestions anymore, it’s delegated execution.

There was also an unusually honest anecdote shared during the discussion: a researcher admitted that whenever he thought he was smarter than Codex, it turned out the problem was the prompt, not the system.

The bottleneck isn’t typing speed – it’s defining the goal clearly.

Photo by Ivan Brezak Brkan

AI tools accelerate work and shape AI-native engineers

During weekly analytics reviews, teams don’t assign follow-ups, they just trigger Codex threads. “Twenty minutes later, the answers are ready before the meeting even ends,” one leader said.

In high-severity incidents, Codex gets effectively paged into calls to help figure out what went wrong and suggest the fastest recovery. “It’s like having small consultants working quietly in parallel,” they added.

So what does this mean for junior engineers?

OpenAI is hiring new grads and running a strong internship program, believing the next generation will be AI-native and comfortable with these tools from day one.

At the same time, strong foundations, guardrails, and code reviews remain essential. As they put it, “Foundations will never go out of fashion.”

Engineers will guide AI teams, speeding up code without touching every line

Vijaye has spent more than two decades in the industry. He has lived through the rise of developer tools, the shift to higher-level abstractions, the mobile wave, and the social platform era. In his view, none of those transitions felt quite like this one.

What makes the current moment different isn’t just what the technology can do, it’s how quickly it is evolving. The speed of change, he suggested, is on another level entirely.

And Sottiaux expects that pace to accelerate even further.

In the near term, I anticipate another order-of-magnitude jump in development speed, enabled by networks of agents collaborating toward large, shared goals. Instead of a single assistant responding to prompts, entire clusters could work together on complex builds.

As systems get more complex, engineers stop checking every line of code and start setting constraints, guardrails, and validating outputs. It’s less about manual control and more about guiding the system, and working through a single assistant that coordinates all the agents behind the scenes.

Whether this ends up being the smartest leap in the industry or a step we rushed into too quickly, only time will tell.

Chip Huyen: To Build or Not to Build – When AI Can Do It All? https://shiftmag.dev/chip-huyen-to-build-or-not-to-build-when-ai-can-do-it-all-8238/ Fri, 27 Feb 2026 14:00:19 +0000 https://shiftmag.dev/?p=8238 I was at Pragmatic Summit when Chip Huyen reframed the AI conversation - if any product can be generated from a clear description, code isn’t the constraint, and true value lies elsewhere.

The post Chip Huyen: To Build or Not to Build – When AI Can Do It All? appeared first on ShiftMag.


“If AI can replicate almost anything quickly and cheaply, what’s the point of building anything at all?” Chip Huyen asked at the start of her talk at the Pragmatic Summit.

And that question carries weight because she isn’t a casual AI observer: she’s an ex-Netflix researcher, former NVIDIA core developer, and an author who explores AI engineering.

She told us a personal story: after building a product, someone recreated it with AI almost immediately.

That moment forced her to confront hard questions – if anything can be copied, where’s the moat, the incentive, or the point of the effort?

“I built a product – and someone copied it with AI”

After she built a product, someone emailed her a clone generated with AI. The message read: “I love what you’ve built. So I used AI to recreate exactly that. And here’s the link.”

She described her reaction bluntly: “I’m flattered. But also, why the f**k?”

That moment crystallized a new reality: if replication requires minimal effort, traditional defensibility weakens. Technical execution no longer guarantees leverage. She framed the shift clearly:

If you can describe a piece of software, then AI can build it for you. The constraint moves upstream. The critical question is no longer how to build, but what to build.

The real advantage comes from context

But Chip pushed back against the idea that AI erases all opportunities.

Common problems are quickly handled by AI, but challenges with nuance and context remain – and those are where real value lies.

She illustrated this with chatbots: U.S. users expect instant replies, while in parts of Asia, waiting signals respect. These nuances matter. As AI handles common solutions, advantage goes to those who master context (cultural, behavioral, or domain-specific) where generic automation fails.

Chip spoke at this year’s Pragmatic Summit in San Francisco.

Engineering culture is changing

Workflows built around humans writing code (pull requests, line-by-line reviews, mentorship) don’t work the same when AI generates large chunks.

Junior developers may disengage, and even seniors wonder “How do I give feedback to my AI?”

The focus moves from polishing code to designing instructions and systems. Mentorship now teaches structured thinking in a human–AI–human loop.

And Chip didn’t have answers about job displacement or copyright; she acknowledged the uncertainty.

I do think it’s a bit scary, and I don’t really know what the future looks like. But builders still shape tools that affect labor markets, creative industries, and institutions.

When AI acts, who’s accountable?

As AI systems move beyond code editors, the risks grow. Chip drew a hard line: if AI acts in the real world (like a car hitting a pedestrian), mistakes can’t be undone.

The question isn’t if AI can act, but whether it should without strict limits.

Engineers now must build guardrails, monitoring, and escalation paths from the start – autonomy demands containment.

Enjoy building, but choose wisely what to build

Chip closed on a personal note:

Fundamentally, I enjoy building. It just brings me joy.

In an environment where execution becomes cheap, intrinsic motivation gains weight. She compared building to music that creates tension and resolution, and to assembling Lego sets for friends. Not every project requires a moat. Not every product needs revenue logic.

Her final reframing carried strategic weight. If replication becomes trivial, the advantage may belong to those who decide what deserves to exist. Vision, context, and responsibility define the new frontier. Execution follows.

I Tried Recreating OpenClaw – And The Hype Is Real https://shiftmag.dev/i-tried-recreating-openclaw-and-the-hype-is-real-8232/ Tue, 24 Feb 2026 14:58:12 +0000 https://shiftmag.dev/?p=8232 After spending time with OpenClaw and seeing how it actually works, I’m convinced the hype is real. It shows that autonomous AI agents are finally living up to their promise.

The post I Tried Recreating OpenClaw – And The Hype Is Real appeared first on ShiftMag.


I was skeptical when I first ran OpenClaw; it looked like just another AI tool riding the hype. Turns out, it’s not.

After experimenting with it and extending its messaging, I also found out that much of its core power (its AI agent architecture and human-in-the-loop interactions) can be recreated with off-the-shelf tools like the Agents SDK and Messages API.

In this post, I’ll share what I learned from using OpenClaw, explain why messaging is what makes autonomous agents truly work, and show how developers can leverage existing tools to build something similar without starting from scratch.

The agent that broke the internet 

In just three months, OpenClaw has taken off on GitHub, earning 200k stars in 84 days and thousands of forks. By mid-February, SecurityScorecard was tracking over 240k instances running in the wild.

With LLM token costs of $5-50 per instance, the project is already accounting for millions in inference spending, and it’s even causing Mac mini shortages as people rush to self-host OpenClaw. (You can actually run it on much cheaper hardware, which makes the story even crazier.)

The hype around the project is undeniable, even with a steep barrier to entry (users must install and run the server software themselves) and despite ongoing security concerns and reported vulnerabilities.

Why I think the hype is justified

OpenClaw’s AHA moment is hard to ignore. It shows there’s real demand for autonomous AI agents, ones that free users from being stuck in a chat window on sites like chatgpt.com.

I’ve always felt that calling those website chatbots “agents” was a stretch – they’re more like conversation buddies than AI doing real work for you.

True agents, in my view, should run in the background, acting and reacting on their own without forcing users to stay glued to a single site. That’s exactly the experience OpenClaw delivers.

The “hold my beer” moment

As a developer, I was curious. Running OpenClaw was impressive, but I wanted to know: how does it actually work? And even more, what would it take to recreate its wow factor myself? Let’s break it down.

The first key ingredient is an AI agent, and I mean this in a very specific sense.

As Anthropic puts it, agents are systems where the LLM controls the program’s flow, instead of classic code deciding when to call the LLM. At a high level, agent apps are basically a while loop that calls the LLM and hooks in all the tools the AI might need. With the rise of MCP, connecting these tools has become easier and more standardized.
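That “while loop plus tools” shape can be sketched in a few lines. Here call_llm and the tools dict are hypothetical stand-ins for a real model client and real integrations, and the reply format is an assumption for illustration.

```python
# Sketch of an agent loop: the LLM controls the flow by either answering
# or requesting a tool call; tool results are fed back until it finishes.

def agent_loop(task: str, call_llm, tools: dict, max_steps: int = 10) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(history)            # the model decides the next step
        if reply.get("tool") is None:
            return reply["content"]          # final answer, loop ends
        result = tools[reply["tool"]](**reply.get("args", {}))
        history.append({"role": "tool", "content": str(result)})
    return "Step limit reached"
```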

On the surface this seemed simple, but I quickly got bogged down in a bunch of edge cases and details to implement. Luckily, we don’t need to reinvent the wheel here: there are ready-to-use SDKs wrapping all the agent logic, the recently renamed Agents SDK being a prime example. That got the AI agent part covered. But there was still one secret ingredient missing.

Users still need to approve important actions

Let’s go back to the OpenClaw user experience. Even when freed from a chat website, agents still need a way to stay in touch with their users. 

The human-in-the-loop approach remains essential for responsible AI: no one should discover their agent’s spending spree on a month-end bank statement. Critical actions still need user approval, and important results still need to be communicated. 

That’s why messaging channels are the very first feature highlighted in OpenClaw’s documentation.

Messaging is what makes autonomous AI agents actually work for you. It lets them check in, keep you in the loop, and get your approval for important actions, without forcing you to refresh a page or babysit a chat window. It’s what gives you peace of mind, convenience, and, most importantly, control.

Cheat codes for messaging

Back to coding.

Connecting to mobile operators or chat services might sound intimidating at first, but I had a secret weapon: I work at Infobip. Luckily, you don’t need that advantage; anyone can pick up the unified Messages API and start sending and receiving messages on users’ phones.

With connectivity sorted, all I had to do was figure out how to hook the agent up to it.

There are a few flows:

  • First up is passing new messages from users to the agent as prompts; basically, launching new tasks.
  • Secondly, the agent needs a way to send out reports. MCP servers work best here, as they are easy to integrate and for LLMs to trigger.
  • Finally, sending the agent’s output to the phone and getting the user’s feedback or confirmation. This is the all-important human-in-the-loop part! Historically, interpreting free-form input from users might have been hard, but these days we can easily pass it to an LLM and ask it to summarize the intent: does the user approve of the suggested action or not? Easy.
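Those three flows can be wired together roughly like this. Everything named here – run_agent, send_message, execute, and the keyword-based classify_intent – is a hypothetical stand-in: in practice the intent check would itself be an LLM call, and messages would go through something like the Messages API.

```python
# Sketch of human-in-the-loop messaging around an agent (all stand-in names).

def classify_intent(reply: str) -> bool:
    # Stand-in for an LLM call that summarizes whether the user approved.
    return any(word in reply.lower() for word in ("yes", "approve", "go ahead"))

def handle_incoming(user_message: str, run_agent, send_message) -> None:
    # Flow 1: an inbound message launches a new agent task.
    proposal = run_agent(user_message)
    # Flow 2: the agent reports back and asks for approval.
    send_message(f"Proposed action: {proposal}. Reply to approve.")

def handle_confirmation(user_reply: str, execute, send_message) -> None:
    # Flow 3: interpret the free-form reply and act only on approval.
    if classify_intent(user_reply):
        execute()
        send_message("Done.")
    else:
        send_message("Okay, cancelled.")
```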

And with that my experiment was over. 

Do a few off-the-shelf components (like the Agents SDK and Messages API) replicate the full OpenClaw experience? Not entirely. But they can help you kickstart a new project, up to the point where you can focus on your core features. And that’s the part that really matters.

It’s time to pay attention to autonomous agents

If you’re already working in AI (or thinking about it) autonomous agents are where things are moving. OpenClaw shows the demand is real, and the tools to build agents that can reason, act, and communicate are already here. Messaging isn’t just nice to have; it’s how your agent stays useful without you having to babysit it. With unified messaging APIs and MCP, sending updates and notifications is easy, so you can focus on shaping how your agent thinks and acts.

Dev.to Acquired. Is This the End of the Beloved Developer Blog Network? https://shiftmag.dev/dev-to-got-acquired-whats-in-it-for-you-as-a-developer-8172/ Fri, 20 Feb 2026 13:40:01 +0000 https://shiftmag.dev/?p=8172 We break down how the MLH acquisition could affect content quality, promotion, and your reach.

The post Dev.to Acquired. Is This the End of the Beloved Developer Blog Network? appeared first on ShiftMag.


If you’ve ever published on Dev.to, built an audience there, or just scrolled through its feed looking for a useful tutorial, you probably had a few question marks over your head when news of the acquisition dropped.

Major League Hacking (MLH) has officially acquired Dev.to.

It sounds strategic. MLH runs hackathons, fellowships, and developer programs worldwide, and Dev.to is one of the most recognizable developer publishing platforms on the internet. Community meets talent pipeline. Content meets events. Growth meets distribution.

But if you’re a developer (especially one who’s spent years building a presence on Dev.to) the real challenge is figuring out how this shift will affect your work and your audience.

A new chapter for Dev.to?

MLH announced the acquisition as a step toward “building the largest community for software professionals.” They’ve also emphasized that Dev.to will remain community-first.

The open-source project that powers Dev.to will continue to exist.

Nothing radical, nothing alarming – at least based on what’s been shared so far. Still, developers naturally have questions. When platforms change ownership, people who’ve invested time and trust in them want to understand what the long-term impact might be.

The uncomfortable questions devs are asking

Five years ago, Lane Wagner argued in his post The Collapsing Quality of Dev.to that the platform’s overall content quality was declining, with repetitive, low-effort, and beginner-focused posts crowding out deeper, more insightful content – and that was long before the rise of the current AI-driven flood.

The author highlighted a lack of effective moderation tools and incentives that favor quantity over quality, making it difficult for higher-value content to stand out. As a result, Dev.to risks becoming less useful and less credible for experienced developers.

On the other hand, Samuel Zacharie raised the question on Dev.to itself: Has the platform become a victim of its own success? He argues that as Dev.to expanded, its content increasingly leaned toward beginner-focused, repetitive, and bootcamp-style posts, the kind that are easy to produce and tend to perform well in search rankings.

According to him, that shift makes it harder for more advanced perspectives to surface, ultimately diluting the diverse, high-quality developer voices that once defined the platform.

These aren’t just random opinions; they reflect a broader frustration with how developer publishing platforms scale. And that’s where the acquisition comes in.

Could quality drop further? Here’s what authors think

This is where the reactions start to split. 

To better understand how authors perceive the situation, we reached out to several developers who actively publish on Dev.to and asked them how they feel about the acquisition, and whether they are optimistic or concerned about what comes next. Jacob (northerndev) said:

My initial reaction to the MLH acquisition is a mix of optimism and caution. Dev.to is valuable because it is raw and community-driven. MLH has a great track record with hackathons and getting people to build things, which is fantastic for energy and momentum.

However, his main concern as an author is the signal-to-noise ratio. Hackathons naturally produce a lot of fast, chaotic output. Combined with the current wave of AI-generated content, there is a real risk that the platform could become flooded with low-effort posts.

If MLH uses their resources to filter out the noise and highlight genuine, hard-learned engineering experiences, this acquisition is a huge win. If it just becomes a marketing channel for hackathon projects, we lose the core value of the site. Signal needs to win over noise.

Hackathons spark momentum, experimentation, and projects, but they also produce rapid output. With today’s AI-assisted writing tools, the barrier to publishing is even lower.

That’s the context for this acquisition: not 2019, not pre-ChatGPT, but in the heart of what many developers call the AI slop era.

On the other hand, not everyone sees this as a threat.

Maame, another author on the platform, is optimistic:

As someone who writes about the student developer journey, Python, and Git on my Dev.to profile, I find this news very exciting. In my view, bringing together the community-driven storytelling of Dev.to with the hands-on energy of Major League Hacking (MLH) is the perfect combination. 

Hackathons are where much innovation begins, and Dev.to is where it’s documented and shared. Maame believes that closer ties between these collaboration-heavy events and the platform naturally benefit both authors and the wider developer ecosystem:

I’m optimistic that this will lead to more integrated ways for developers to showcase what they build.

Two very different outlooks. 

Reddit discussions about the quality drop started a long time ago.

So, what does this mean for you?

Ultimately, what this acquisition means depends on which side of the platform you’re on and how it unfolds.

If you’re an author, now is the time to watch closely. Notice what gets promoted: thoughtful engineering deep-dives, or fast hackathon-style recaps and AI-assisted tutorials. Your reach, your positioning, and even your choice to keep publishing here could hinge on how incentives shift.

If you’re a reader, the question is simpler: does the content get better or noisier? Will you find real-world lessons from builders, or just surface-level posts optimized for speed and visibility? MLH could bring structure, resources, and stronger curation to Dev.to, but growth pressures might also amplify existing quality concerns.

For now, it’s too early to call it a win or a loss. The coming months will show whether this turns into a story of revitalization or dilution.

The post Dev.to Acquired. Is This the End of the Beloved Developer Blog Network? appeared first on ShiftMag.

]]>
Developer Goals Don’t Have to Be Corporate Theatre https://shiftmag.dev/developer-goals-dont-have-to-be-corporate-theatre-8028/ Thu, 19 Feb 2026 13:02:04 +0000 https://shiftmag.dev/?p=8028 I get it - goals often feel like extra homework. But I’ve found they don’t have to be. Done right, they can keep you focused, accelerate your learning, and guide better decisions.

The post Developer Goals Don’t Have to Be Corporate Theatre appeared first on ShiftMag.

]]>

From my experience as a people manager (and an even longer stint as a developer), I’ve noticed that many engineers don’t see much value in goals.

If this is how you feel when your manager asks you to set yearly goals (or hands you new ones), you’re not alone.

From my perspective, they often feel like extra homework, tacked on after the “real work” of coding, only to resurface at the end of the year when someone asks about your progress. And from where I stand, you’re right: when treated this way, goals rarely deliver tangible benefits.

But it doesn’t have to be like this.

You’re missing out (on what exactly?)

Periodic goals, when used correctly, are a powerful long-term focus tool. In fast-paced teams, swamped with Jira tickets, code reviews, and spikes, you can be constantly busy yet rarely challenge the true priority or value of your next task.

That’s what goals are for:

  • Company goals make sure you’re working on what matters.
  • Personal development goals help you learn intentionally from your day-to-day work.

If your goals aren’t doing that, it’s probably because they’re not being used as intended.

How goals started making sense (and bringing value) to me

Above all, you must pick something that you truly feel strongly about. Realising this simple truth was what changed things for me. It is better to have one goal that truly meets this requirement than four you only slightly care about.

My turning point came from my personal, not professional, life. I realised that my life wasn’t heading where I wanted, and if I didn’t change course, I wouldn’t be happy with how I’d spent it. This insight helped me see the importance of long-term planning (and I wouldn’t have seen it if I hadn’t read Stephen Covey’s classic book The 7 Habits of Highly Effective People).

From there, I began setting goals, actually tracking them, and finally seeing the value.

How to make goals work

For goals to work for you, three simple requirements must be met:

  1. Choose goals you genuinely believe are important and valuable.
  2. Track progress regularly (if you’re not tracking, it’s usually a sign that requirement 1 isn’t really met).
  3. Connect goals to your daily work, so progress happens as you ship, not as an afterthought.

Don’t isolate goals from your workflow

A common problem is that goals are too detached from your daily work. You likely spend most of your time in the codebase, making changes based on tickets from your issue‑tracking system. Now consider a goal like:

By the end of the year, I will have finished a Kotlin fundamentals training.

That’s planning to fail. Will you truly make room for this while putting tickets on hold? If yes, great. But for most of us, day-to-day tasks feel more urgent. December arrives, and you realise you’ve made no progress. Cue the last-minute scramble.

Define a different goal instead

When I see a Kotlin feature in the codebase that I don’t understand, I’ll read the documentation and make a note of it (at least once a week).

Why this works: you don’t need much extra time, it won’t derail the task at hand, it’s easy to track, and it builds a habit of continuous learning rather than a one-off course completion.

Focus on the “how,” not just “what”

Another tip is to define goals that focus on how you’d like to accomplish something. The ticket from your issue‑tracking system probably already tells you what to do; a goal can add criteria on top of that, or sharpen the focus on quality.

Let’s say you introduced a few bugs into the system recently and want to avoid this in the future. You could define a goal:

For every applicable pull request, I will leave a comment documenting measured test coverage before and after the change and ensure it increases.

This keeps the goal anchored in your daily work while adding a perspective often missing from the ticket.
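As a concrete illustration of that coverage goal, here is a minimal sketch of formatting the before/after comparison for a pull request comment. This assumes you measure coverage yourself (e.g. with a tool like `pytest --cov` on a Python project); the helper name and the numbers are hypothetical:

```python
def coverage_comment(before: float, after: float) -> str:
    """Format a PR comment line from coverage percentages
    measured before and after a change."""
    delta = after - before
    verdict = "increased" if delta > 0 else "did not increase"
    # "pp" = percentage points; {:+.1f} keeps the sign visible
    return f"Coverage: {before:.1f}% -> {after:.1f}% ({delta:+.1f}pp, {verdict})"

print(coverage_comment(78.4, 81.2))
# Coverage: 78.4% -> 81.2% (+2.8pp, increased)
```

The point isn’t the tooling; it’s that the goal produces a visible, trackable artifact on every applicable pull request.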

Keep the goals visible

This relates to Requirement 2: if you don’t see your goals often, you’re likely to forget to track them. Keep them in sight: as a sticky note on your monitor or pinned at the top of your to-do list.

Use OKRs (or something similarly effective)

There’s a reason OKRs have stuck around: they’re simple and effective. A practical check I like is to define the Objective and then ask:

Will it really happen once these KRs are met?

If the answer is “not quite,” adjust your Key Results until they make the objective inevitable.

Final thought

If you’ve made it this far, I hope it’s a bit clearer how goals can help. Pick goals that matter, track them regularly, and tie them to your daily work. You’ll feel the difference.

Happy goal‑setting!

The post Developer Goals Don’t Have to Be Corporate Theatre appeared first on ShiftMag.

]]>