The AI Race and Its Impact on Software Engineering Careers

https://news.ycombinator.com/item?id=43163011

The Accelerating AI Race

This AI race is happening so fast. As a software developer/engineer, I’m worried about my job prospects. Time will tell what happens to the West Coast housing bubbles once software engineers lose their high price tags. Will the next wave of knowledge workers move in and take their place?

Industry Perspectives

Adapting to Disruption

One perspective is that while the software development job market is being massively disrupted, there are strategies to come out on top:

  • Learn more of the entire stack, especially backend and DevOps
  • Embrace increased productivity to ship more products and solo projects
  • Be highly selective about how you spend your productive time
  • Set up an effective personal knowledge management system and agentic assistants

Breadth vs. Depth

There’s an ongoing debate about whether it’s better to gain broad experience across many areas or develop deep expertise in a few things. When AI can “slurp the whole internet in a single gulp,” is broadening your expertise the best allocation of your limited human training cycles?

Some suggest being “T-shaped” - having wide reach plus one narrow domain you can really excel in.

Knowledge Management Tools

For personal knowledge management, several approaches seem effective:

  • Using AI as a writing assistant for large documents to help assimilate knowledge
  • Combining multiple LLMs (ChatGPT, Perplexity, Gemini) to merge responses and provide critical feedback
  • Tools like Obsidian with Copilot plugin for note-taking
  • Emerging tools like WilmerAI for routing AI workflows and Hoarder for automatically ingesting and categorizing content into a local RAG
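The multi-LLM approach above can be sketched in a few lines. This is a minimal, hypothetical sketch: `ask_chatgpt`, `ask_perplexity`, and `ask_gemini` are stand-in stubs, not real API clients, so only the routing-and-merging logic is shown.

```python
# Hypothetical sketch of the "merge multiple LLM responses" workflow.
# The ask_* functions are stubs standing in for real API calls.

def ask_chatgpt(prompt: str) -> str:
    return f"[chatgpt] answer to: {prompt}"

def ask_perplexity(prompt: str) -> str:
    return f"[perplexity] answer to: {prompt}"

def ask_gemini(prompt: str) -> str:
    return f"[gemini] answer to: {prompt}"

def merge_responses(prompt: str) -> str:
    """Collect a draft from each model, then feed all drafts to one
    model to critique and merge into a single answer."""
    drafts = [ask(prompt) for ask in (ask_chatgpt, ask_perplexity, ask_gemini)]
    critique_prompt = (
        "Merge the following drafts into one answer, noting disagreements:\n\n"
        + "\n\n".join(drafts)
    )
    # In practice this second call goes to whichever model you trust
    # most as an editor; the stub just echoes the combined prompt.
    return ask_gemini(critique_prompt)
```

The design choice here is the second "editor" pass: rather than picking one model's answer, every draft is passed back through a single model for critical feedback, which is the merge-and-critique pattern described above.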

Is the Race Slowing Down?

Some argue the pace is actually slowing. After a wild period last year until around Llama 3, recent improvements have been relatively small. Even reasoning models represent small improvements over explicit planning with agents that were already possible.

However, others point out that linear improvements in AI performance can cause cubic improvements in task coverage due to the dimensionality of the labor-skill space.
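The geometric intuition can be made concrete with a toy calculation. The three-dimensional skill space is an illustrative assumption from the argument above, not a measured quantity:

```python
def task_coverage(capability: float, dim: int = 3) -> float:
    # If AI capability is a radius in a dim-dimensional skill space,
    # the tasks covered scale like the volume of a ball: radius**dim.
    # Constant factors cancel when comparing ratios, so they are omitted.
    return capability ** dim

# A 2x (linear) capability gain yields an 8x gain in task coverage
# when the skill space is 3-dimensional.
gain = task_coverage(2.0) / task_coverage(1.0)
```

Under this model, modest-looking benchmark improvements can translate into disproportionately large expansions of the set of tasks an AI can handle.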

Short to Medium Term Outlook

Many aren’t concerned in the short to medium term, believing there are too many edge cases and nuances that AI systems will miss:

  • Systems don’t always work as documented
  • AI struggles to differentiate between bugs in services versus bugs in its own code
  • AI has difficulty learning about bugs in the first place
  • AI can’t easily differentiate between bug reports and security threats

The complexity of the world means we’ll need people to guide AI in tricky situations. Good software engineers likely aren’t going anywhere soon.

Beyond Silicon Valley

The disruption may affect more than just Silicon Valley/West Coast. These models could disrupt employment in the industry globally, with AI labs specifically targeting software engineers.

For people outside SV who haven’t seen the high pay associated with being there, software engineering is often just a standard job - stressful with ongoing learning requirements. The anxiety of disruption is even higher for them since they likely had less disposable income to invest or save.

Ironically, software itself may be the first job automated by AI - not manual labor or driving. Other industries have hit dead ends or faced barriers like regulation and closed knowledge.

Looking Forward

The future remains uncertain, but adapting to work alongside AI tools seems to be the consensus approach for now. Understanding how to leverage these tools effectively within existing workflows may be the key to staying relevant in an evolving industry.

Selected Comments

fallinditch 13 hours ago | parent | next [–]

My guess is that, yes, the software development job market is being massively disrupted, but there are things you can do to come out on top:

  • Learn more of the entire stack, especially the backend, and devops.

  • Embrace the increased productivity on offer to ship more products, solo projects, etc

  • Be highly selective as far as possible in how you spend your productive time: being uber-effective can mean thinking and planning in longer timescales.

  • Set up an awesome personal knowledge management system and agentic assistants


whynotminot 10 hours ago | root | parent | next [–]

> Learn more of the entire stack, especially the backend, and devops.

I actually wonder about this. Is it better to gain some relatively mediocre experience at lots of things? AI seems to be pretty good at lots of things.

Or would it be better to develop deep expertise in a few things? Areas where even smart AI with reasoning still can get tripped up.

Trying to broaden your base of expertise seems like it’s always a good idea, but when AI can slurp the whole internet in a single gulp, maybe it isn’t the best allocation of your limited human training cycles.


aizk 4 hours ago | root | parent | next [–]

I was advised to be T-shaped: wide reach plus one narrow domain you can really nail.

j_maffe 10 hours ago | root | parent | prev | next [–]

Do you have any specific tips for the last point? I completely agree with it and have set up a fairly robust Obsidian note-taking structure that will benefit greatly from an agentic assistant. Do you use specific tools or frameworks for this?

fallinditch 3 hours ago | root | parent | next [–]

What works well for me at the moment is to write ‘books’, i.e. use AI as a writing assistant for large documents. I do this because the act of compiling the info with AI assistance helps me to assimilate the knowledge. I use a combination of ChatGPT, Perplexity, and Gemini with NotebookLM to merge responses from separate LLMs, provide critical feedback on a response or a chunk of writing, etc. This is a really accessible setup and is great for my current needs. Taking it to the next stage with agentic assistants is something I’m only just starting out on. I’m looking at WilmerAI [1] for routing AI workflows and Hoarder [2] for automatically ingesting and categorizing bookmarks, docs, and RSS feed content into a local RAG.

[1] https://github.com/SomeOddCodeGuy/WilmerAI

[2] https://hoarder.app/


jmehman 1 hour ago | root | parent | prev | next [–]

Do you know about the Copilot plugin for Obsidian?

ijidak 6 hours ago | root | parent | prev | next [–]

I love this, especially the last point. But what do you use for agentic assistants?


fallinditch 2 hours ago | root | parent | next [–]

See my answer above; it’s something I want to get into. I’m inspired by this post on Reddit - it’s very cool what this guy is doing. https://www.reddit.com/r/LocalLLaMA/comments/1i1kz1c/sharing


bilbo0s 12 hours ago | root | parent | prev | next [–]

This is really good advice. Underrated comment.


viraptor 12 hours ago | parent | prev | next [–]

It seems to be slowing down, actually. Last year was wild until around Llama 3. The latest improvements are relatively small. Even the reasoning models are a small improvement over the explicit planning with agents that we could already do before; it’s just nicely wrapped and slightly tuned for that purpose. DeepSeek made some serious efficiency improvements, but not so much in user-visible things. So I’d say the AI race has started to plateau a bit recently.


j_maffe 10 hours ago | root | parent | next [–]

While I agree, you have to remember how high the dimensionality of the labor-skill space is. The way I see it, you can imagine the capability of AI as a radius, and the number of tasks it can cover as a sphere. Linear improvements in performance cause cubic (or whatever the labor-skill dimensionality is) improvements in task coverage.

manmal 14 minutes ago | root | parent | next [–]

I’m not sure that’s true with the latest models. o3-mini is good at analytical tasks and coding, and it really sucks at prose. Sonnet 3.7 is good at thinking but lost some ability in creating diffs.

LouisSayers 11 hours ago | parent | prev | next [–]

I’m not too concerned short to medium term. I feel there are just too many edge cases and nuances that are going to be missed by AI systems. For example, systems don’t always work in the way they’re documented to. How is an AI going to differentiate cases where there’s a bug in a service vs a bug in its own code? How will an AI even learn that the bug exists in the first place? How will an AI differentiate between someone reporting a bug and a hacker attempting to break into a system?

The world is a complex place and without ACTUAL artificial intelligence we’re going to need people to at least guide AI in these tricky situations.

My advice would be to get familiar with using AI and new AI tools and how they fit into our usual workflows.

Others may disagree, but I don’t think software engineers (at least the good ones) are going anywhere.


throw234234234 12 hours ago | parent | prev | next [–]

It has the potential to affect a lot more than just SV/the West Coast - in fact, SV may be one of the only areas with some silver lining from AI development. I think these models have a chance to disrupt employment in the industry globally. Ironically, it may be only SWEs and a few other industries (writing, graphic design, etc.) that truly change. You can see that they and other AI labs are targeting SWEs in particular - just look at the announcement “Claude 3.7 and Code” - very little mention of any other domains in their announcement posts.

For people who aren’t in SV for whatever reason and haven’t seen the really high pay associated with being there, SWE is just a standard job, often stressful, with lots of ongoing learning required. The pain/anxiety of being disrupted is even higher for them, since having high disposable income to invest/save would have been less likely. Software to them would have been a job with pay comparable to other jobs in the area, often requiring you to be degree-qualified as well - anecdotally, many I know got into it for the love, not the money.

Who would have thought the first job automated by AI would be software itself? Not manual labor, or self-driving cars. Other industries either seem to have hit dead ends or had other barriers (regulation, closed knowledge, etc.) that make it harder to do. SWEs have set an example for other industries: don’t let AI in, or keep it in-house as long as possible. Be closed source, in other words. Seems ironic in hindsight.


throw83288 11 hours ago | root | parent | next [–]

What do you even do then as a student? I’ve asked this dozens of times with zero practical answers. Frankly, I’ve become entirely numb to it all.

throw234234234 10 hours ago | root | parent | next [–]

Be glad that you are empowered to pivot - I’m assuming you are still young, being a student. In a disrupted industry you either want to be young (time to change out of it) or old (50+, able to retire with enough savings). The middle-aged people (say 15-25 years in the industry; your 35-50 year olds) are most in trouble, depending on the domain they are in.

For all the “friendly” marketing, IMO they are targeting tech jobs in general - for many people, if it wasn’t for tech/coding/etc., they would never need to use an LLM at all. Anthropic’s recent stats on who uses their products are telling - it’s mostly code, code, code. The real answer is either to pivot to a domain where the computer-use/coding skills are secondary (i.e. you need the knowledge but it isn’t primary to the role) or to move to an industry which isn’t very exposed to AI, either due to natural protections (e.g. trades) or artificial ones (e.g. regulation, or oligopolies colluding to prevent knowledge leaking to AI). May not be a popular comment on this platform - I would love to be wrong.


throw83288 10 hours ago | root | parent | next [–]

Not enough resources to get another bachelor’s, and a master’s is probably practically worthless for a pivot. I would have to throw away the past 10 years of my life and start from scratch, with zero ideas for any real skill-developing projects, since I’m not interested at all. I’d probably be a completely non-viable candidate in anything I would choose. Maybe only robotics would work, and that’s probably going to be solved quickly, because you assume nothing LLMs do is actual generalization. Once Field X is eaten, the labs will pivot and use the generalization skills developed to blow out Field Y for the next earnings report.

I think at this current 10x/yr capability curve (read: 2 years -> 100x, 4 years -> 10,000x) I’ll get screwed no matter what is chosen - especially the fields in proximity to computing, which makes anything in which coding is secondary fruitless. Regulation is a paper wall, oligopolies will want to optimize as much as any firm, and trades are already saturating.

This is why I feel completely numb about this, I seriously think there is nothing I can do now. I just chose wrong because I was interested in the wrong thing.


fragmede 8 minutes ago | root | parent | next [–]

If you’re taking a really high-level look at the whole problem, you’re zooming too far out and missing the trees themselves. You chose the wrong parents to be born to, but so did most of us. You were interested in what you were interested in. You didn’t ask what’s the right thing to be interested in, because there’s no right answer to that. What you’ve got is a good head on your shoulders and the youth to be able to chase dreams.

Yeah, it’s scary. In the 90s, outsourcing was going to be the end of lucrative programming jobs in the US. There’s always going to be a reason to be scared. Sometimes it’s valid; sometimes the sky is falling because aliens are coming, and it turns out to be a weather balloon. You can definitely succumb to the fear. It sounds like you have. But courage isn’t the absence of fear; it’s what you do in the face of it. Are you going to let that fear paralyze you into inaction? Just not do anything other than post about being scared to the Internet?

Or, having identified that fear, are you gonna wrestle it down to the ground and either retrain into anything else and start from near zero - something not programming that you believe isn’t about to be automated away - or dive in deeper, get a master’s in AI, learn all of the math behind LLMs, and be an ML expert who trains the AI? That job’s not going away; there are still a ton of techniques to be discovered/invented and niches to be found. Fine-tuning an existing LLM to be better at some niche is gonna be hot for a while.

You’re lucky: you’re in a position to be able to go for a master’s, even if you don’t choose that route. Others with a similar doomer mindset have it worse, being too old and not in a position to even consider doing a master’s.

Face the fear and look into the future with eyes wide open. Decide to go into chicken farming or nursing or firefighter or aircraft mechanic or mortician or locksmith or beekeeping or actuary.


currymj 5 hours ago | root | parent | prev | next [–]

I think if you believe LLMs can truly generalize, will be able to replace all labor in entire industries, and will 10x every year, you pretty much should believe in ASI, at which point having a job is the least of your problems. If you rule out ASI, then that means progress is going to have to slow. Consider that programming has been getting more and more automated continually since 1954. So put yourself in a position where what LLMs can do is a complement to what you can do. Currently, you still need to understand how software works in order to operate one of these things successfully.


throw234234234 40 minutes ago | root | parent | next [–]

I don’t know if I agree with that, and as a SWE myself it’s tempting to think so - it is a form of coping, a hope that we will all be in it together. However, rationally, I can see where these models are evolving, and it leads me to think the software industry is on its own here, at least in the short/medium term. Code and math - and with math you typically need to know enough about the domain to know what abstract concept to ask for - so that really just leaves coding and software development. Even non-technical people understand the result they want from code.

You can see it in this announcement - it’s all about “code, code, code” and how good they are at “code”. This is not by accident. The models are becoming more specialised, and the techniques used to improve them beyond standard LLMs are not as general across a wide variety of domains.

We engineers think AI automation is about difficulty and intelligence, but that’s only partly true. It’s also about whether the engineer has the knowledge of what they want to automate, whether the training data is accessible and vast, and whether they even know WHAT data is applicable. This combination of deep domain skills and AI expertise is actually quite rare, which is why every AI CEO wants others to go “vertical” - they want others to do that legwork on their platforms. Even if it eventuates, it is rare enough that those domains, if they automate, will automate a LOT slower - not at the deltas of a new model every few months.

We don’t need AGI/ASI to impact the software industry; in my opinion, we just need well-targeted models that get better at a decent rate. At some point they either hit a wall or surpass people - time will tell - BUT they are definitely targeting SWEs at this point.


ttul 3 hours ago | parent | prev | next [–]

Trade your labour for capital. Own the means of production. This translates to: build a startup.
