How do you envision the role of AI in software development evolving in the future?

Last Updated: 22.06.2025 00:25

In the past 3 years there have been 3 pivotal moments:

the introduction of code completion tools (GitHub Copilot etc.), which liberate devs from memorizing precise syntax,

conversational LLM agents (ChatGPT, Claude etc.), which can accelerate research, simulate brainstorming and perform small technical tasks,

agent-centric IDEs (Cursor, Windsurf, Claude Code…), which empower agents to reason over an entire codebase, provide more actionable answers and perform more useful tasks.

We are entering a new phase of uncertainty. In the late 2010s/early 2020s (the “pre-copilot era”) the developer experience was consolidating around a few widely adopted tools. Now the market for these tools is fragmented again. We’re still waiting to see how the dust is going to settle, IMO.

I think that “vibe coding”, i.e. giving a brief description of what you want to achieve and getting fully functional code as a result, is going to have very limited impact. It works, yes, but only in very specific cases; it doesn’t scale well, and the savings it creates are not worth the trouble in the general case.

The trends I expect to continue are:

developers will spend less time typing code and more time thinking about code, i.e. describing their projects and discussing what they want to achieve with an agent, which requires reasoning about and formalizing what they want to accomplish.

a larger part of the code in codebases is going to be generated. This doesn’t mean that a large portion of the tasks once handled by humans can be entirely delegated to AI, but rather that, in a typical commit, an increasingly large proportion of the changed lines will be written automatically.

in the “pre-copilot era” there was a general push towards code quality, as in: developers were nudged into writing code that was easier for their fellow developers to maintain. Code quality is going to evolve into: code that AI agents find easy to work with. The two are not incompatible, but it means things like more comments and more tests (see the first sketch after this list).

developers will spend more time on quality assurance, both upstream and downstream. Thinking: how should this piece of code integrate into the larger whole? What are the signals that it’s broken? What logs, testing, monitoring and alerting should I put in place? (see the second sketch after this list)
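
To make the “more comments, more tests” point concrete, here is a minimal sketch in Python. It is purely illustrative: the LineItem and order_total_cents names are hypothetical, not from any real codebase; the point is simply that a docstring stating intent and a small test give an agent (or a colleague) something reliable to work with.

# Hypothetical example of “agent-friendly” code: descriptive names, a docstring
# that states intent and edge cases, and a test that doubles as documentation.
from dataclasses import dataclass


@dataclass
class LineItem:
    """A single purchasable item on an order."""
    name: str
    unit_price_cents: int
    quantity: int


def order_total_cents(items: list[LineItem]) -> int:
    """Return the total price of an order in cents.

    Empty orders total zero, and a zero quantity contributes nothing.
    Prices are kept in integer cents to avoid floating-point drift.
    """
    return sum(item.unit_price_cents * item.quantity for item in items)


def test_order_total_handles_empty_and_zero_quantity() -> None:
    # The test spells out the behaviour that callers (human or agent) rely on.
    assert order_total_cents([]) == 0
    assert order_total_cents([LineItem("mug", 1250, 0)]) == 0
    assert order_total_cents([LineItem("mug", 1250, 2)]) == 2500


if __name__ == "__main__":
    test_order_total_handles_empty_and_zero_quantity()
    print("ok")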

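To make the quality-assurance point concrete as well, here is a second minimal sketch, again hypothetical and using only Python’s standard library: the job name, the 10% failure-rate threshold and the log messages are illustrative assumptions, not a real monitoring setup. The idea is that the code itself emits the signals (failure counts, duration) that tests, dashboards and alerts can then be built on.

# Hypothetical sketch of “what are the signals that it’s broken”: counts and
# timings logged by the job itself, which monitoring and alerting can watch.
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("sync_job")


def run_sync_job(records: list[dict]) -> int:
    """Process records, logging how many failed and how long the run took."""
    started = time.monotonic()
    failures = 0
    for record in records:
        try:
            # Placeholder for the real per-record work.
            if "id" not in record:
                raise ValueError("record missing id")
        except ValueError:
            failures += 1
            log.warning("skipping bad record: %r", record)
    elapsed = time.monotonic() - started
    log.info("sync finished: %d records, %d failures, %.2fs", len(records), failures, elapsed)
    # An alerting rule could key off this line, e.g. failure rate above 10%.
    if records and failures / len(records) > 0.10:
        log.error("failure rate above threshold, investigate")
    return failures


if __name__ == "__main__":
    run_sync_job([{"id": 1}, {}, {"id": 3}])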