
Coders Code, AI Codes Faster. Developers Still Matter.

It’s 2 a.m. I’m halfway through a beer, sitting across from my good friend. We’ve done this many times, arguing about tech, late into the night, each trying to convince the other they’re wrong. Sometimes it’s about programming languages, sometimes about remote work, sometimes about who should’ve never been promoted.

Tonight it’s AI (recently, it’s only AI). And it’s getting heated.

He’s at the edge of his seat, waving his hands like some sort of prophet of doom. “All software engineers are screwed,” he says. “AI is going to take over everything. You’ll see.”

I lean back, take a sip, and shake my head. “Not all of us,” I say. “Just the coders.”

That’s when things get interesting.

Coders vs Developers (Yeah, Again)

These words get thrown around like they mean the same thing. Coder, programmer, developer. It all kind of blends together, especially in job titles and HR speak.

But for the sake of this post, I’m going to draw a line.

  • Coders are people who only write code. They take a ticket, search Stack Overflow (or, these days, paste it into ChatGPT), patch something together, and call it a day.
  • Developers are people who build software. They think in systems. They understand design decisions, trade-offs, and long-term impact. They don’t just solve problems; they figure out which problems are worth solving.

Both write code. But only one knows what they’re doing beyond the code.

And AI is very good at replacing coders.

Why Coders Should Be Worried

Let’s be real. In the last few years, especially during the COVID hiring boom, the tech industry pulled in a ton of people who honestly weren’t ready.

Demand was high. Supply was low. Companies were desperate. So people with very little understanding of software engineering fundamentals got hired fast. Some learned the basics from bootcamps. Some jumped in from other industries. Some just happened to be in the right place at the right time.

Now, with AI tools improving and the hype settling down, a lot of those folks are being laid off.

But here’s the thing: AI isn’t causing the layoffs. The layoffs are happening because many of those roles never had much depth in the first place.

If your whole job was taking a vague ticket and gluing together some JavaScript you barely understood, then yeah, you’re replaceable. Sorry if I sound rough, but it’s true.

Developers Are Still Needed — More Than Ever

And here’s where I pushed back hard during our argument. Good developers are not going anywhere. In fact, they’re the ones making all this AI progress possible.

They’re building the tools, improving the workflows, designing the systems that scale. They understand trade-offs. They can lead teams through messy migrations. They know what needs to be done when the product, business, and tech all pull in different directions.

AI can autocomplete a function. It can’t design a resilient system that handles real-world chaos. It can suggest a fix. It can’t mentor a junior engineer or navigate a political roadmap meeting.

AI is fast, sure. But good developers are smart, adaptable, and connected to the bigger picture.

AI Is Just Another Tool

Every time a new tool comes along, people panic. We saw this with high-level languages, IDEs, cloud computing, and no-code platforms. Now it’s AI.

You can argue this one is different, that its potential is unprecedented. Sure, but it’s still just potential at this point. We’re still figuring out how to use it effectively and make it work for everyone. More importantly, we’re still figuring out whether we can make it work for everyone.

But developers don’t panic. They adapt. They look at the tool, figure out how to use it, and make their work better. That’s what separates them from coders who just want to copy/paste their way through the day.

Wrapping Up

It’s almost 4 a.m. now. The bar’s closing soon (it’s late even for a hipsterish place in the middle of Berlin), and neither of us is totally convinced. But I can tell he’s thinking about it.

So here’s the takeaway:

Coders are in trouble. Developers are not. And the more you understand the whole system (the code, the business, the people), the safer you are.

How AI Personalization Fuels Groupthink and Uniformity

No, this isn’t another article crapping all over AI and babbling about our AI overlords replacing us at the top of the food chain. It’s a critical take on the ever-rising trend of pushing AI-powered suggestion and recommendation mechanisms in every service and tool we use.

I read Slack’s privacy principles on search, learning, and artificial intelligence a few days ago. Like many users, I wasn’t delighted to see how Salesforce’s Slack will use private conversations from its customers to train its AI. There’s been plenty of backlash online, and everyone and their mother has already shared an opinion about it somewhere. I see no point in adding to that pile, but I find another part of that text just as concerning.

Right now, I want to focus on how the data will be used. Please note that Slack is not the only one (or the first, for that matter) doing this; it was just the trigger that made me think again about how the bubbles we live in keep getting smaller. Slack is the primary communication medium for many businesses worldwide. Let me be idealistic for a moment and say that all those businesses want to create value. And to create value, you need creativity, new ideas, and original thoughts: exactly the things you’d want more of, right?

Here are some of the ways Slack will use its customers’ data to “make your life easier”:


  1. Autocomplete: Slack might make suggestions to complete search queries or other text, for example autocompleting the phrase “Customer Support” after a user types the first several letters of this phrase.
  2. Emoji Suggestion: Slack might suggest emoji reactions to messages using the content and sentiment of the message, the historic usage of the emoji, and the frequency of use of the emoji on the team in various contexts.
  3. Search Results: Our search machine learning models help users find what they’re seeking by identifying the right results for a particular query. We do this based on historical search results and previous engagements (…)

At first glance, these features seem harmless, even helpful. They save time, reduce friction, and enhance user experience. However, beneath the surface lies a more troubling consequence: the potential for these features to stifle creativity and reinforce groupthink.

Consider the autocomplete function. By suggesting common completions based on past data, Slack’s AI could inadvertently discourage users from thinking outside the box. If the AI continually nudges users toward conventional phrases and ideas, it might limit the expression of novel thoughts. Over time, this can create a feedback loop where the most common ideas become even more dominant.
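The feedback loop described above can be sketched in a few lines of Python. This is a toy model with made-up phrases and counts, not Slack’s actual implementation: completions are ranked purely by how often they’ve been used, and every accepted suggestion feeds straight back into those counts.

```python
from collections import Counter

# Toy frequency-based autocomplete (an illustration, not Slack's system).
# Two near-equal candidates to start with:
usage = Counter({"customer support": 10, "customer survey": 9})

def suggest(prefix):
    """Return the historically most-used phrase starting with `prefix`."""
    matches = [p for p in usage if p.startswith(prefix)]
    if not matches:
        return None
    return max(matches, key=lambda p: usage[p])

def accept(suggestion):
    """Each accepted suggestion feeds back into the frequency counts."""
    usage[suggestion] += 1

# Every time the top suggestion is accepted, its lead grows,
# so the runner-up becomes ever less likely to surface.
for _ in range(5):
    accept(suggest("customer s"))

print(suggest("customer s"))       # "customer support", now at 15 uses
print(usage["customer survey"])    # still 9: it never gets a chance
```

After a handful of interactions the initially tiny gap (10 vs 9) has widened to 15 vs 9, and the second phrase is effectively invisible: the feedback loop in miniature.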

Emoji suggestions present a similar issue. If certain emojis are suggested more frequently because they have been used more often in the past, it could lead to a homogenization of emotional expression. This might seem trivial, but it’s one more thing that makes the whole communication experience uniform, predictable, and boring.

Search results, too, are influenced by past behaviors. If AI algorithms prioritize results based on what has been frequently searched for and engaged with, they can create a self-reinforcing loop that amplifies popular ideas and suppresses less common ones. If everyone is guided to the same set of information, the range of ideas and solutions considered will narrow.
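The same self-reinforcing loop is easy to demonstrate with a toy engagement-based ranker (again, an illustration with invented document names, not Slack’s actual ranking model): results are ordered by historical clicks, and each click raises the score of whatever was already on top.

```python
# Toy engagement-based search ranking (not Slack's real model).
# Documents start with nearly identical engagement:
clicks = {"doc_a": 5, "doc_b": 4, "doc_c": 3}

def ranked_results():
    """Order results purely by historical click counts."""
    return sorted(clicks, key=clicks.get, reverse=True)

def simulate_search():
    # Most users click the first result they see,
    # which immediately boosts that result's score.
    top = ranked_results()[0]
    clicks[top] += 1

for _ in range(100):
    simulate_search()

print(ranked_results())  # doc_a stays on top; the gap only widens
print(clicks)            # {'doc_a': 105, 'doc_b': 4, 'doc_c': 3}
```

One hundred searches later, the result that started one click ahead has absorbed every single click, while the others never moved. Rankings driven only by past engagement freeze the popular choice in place.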

In essence, AI personalization in everyday tools can build an echo chamber, nudging all of us toward uniformity and groupthink.

There’s a popular saying: “Always use the right tool for the job.” Maybe we should add a part that goes, “…if it’s not dangerous in the long run.”