Not sure what these guys are studying but can tell you in the real world - essentially zero AI rollout in accounting world for anything serious.
We've got access to some fancy enterprise copilot version, deep research, MS office integration and all that jazz. I use it diligently every day...to make me a summary of today's global news.
When I try to apply it to actual accounting work, it hallucinates left, right & center on stuff that can't be wrong. Millions and millions off. That's how you get the taxman to kick down your door. Even a simple "are these two numbers the same" check gets false positives so often that it's impossible to trust. So now I've got a review tool whose output I can't trust? It's like a programming language where the equality (==) symbol has a built in 20% random number generator and you're supposed to write mission critical code with it.
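To put a number on that analogy (a toy sketch, nothing to do with any real tool): if each individual comparison is wrong 20% of the time, a clean pass over even a small ledger is vanishingly unlikely.

```python
import random

def flaky_eq(a, b, error_rate=0.2):
    """An '==' that randomly returns the wrong answer 20% of the time."""
    result = (a == b)
    return (not result) if random.random() < error_rate else result

# Chance that all comparisons over a 40-record ledger come back correct:
p_all_correct = 0.8 ** 40
print(f"{p_all_correct:.6f}")  # ~0.000133 -- a fully correct run almost never happens
```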
I keep trying to get it to review my personal credit card statements. I have my own budget tracking app that I made, and sometimes there are discrepancies. Resolving this by hand is annoying, and an LLM should be able to do it: scrape the PDF, compare the records to mine, find the delta.
I've tried multiple models over the course of 6 months. Yesterday it told me I made a brilliant observation, but it hasn't managed to successfully pin down a single real anomaly. Once it told me the charges were Starbucks, when I had not been to a Starbucks—it's just that Starbucks is a probable output when analyzing credit card statements.
And I'm only dealing with a list of 40 records that I can check by hand, with zero consequences if I get it wrong beyond my personal budgeting being off by 1%.
I can't imagine trusting any business that leans on this for inappropriate jobs.
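For what it's worth, once the rows are out of the PDF the comparison step is fully deterministic - a multiset difference, not a judgment call. A minimal sketch with made-up records (the PDF-scraping step is the only place fuzziness genuinely creeps in):

```python
from collections import Counter

def find_delta(statement, budget):
    """Return (records only on the statement, records only in the budget).

    Records are (date, merchant, amount) tuples; using Counter rather than
    a set means duplicate charges are handled correctly.
    """
    s, b = Counter(statement), Counter(budget)
    return sorted((s - b).elements()), sorted((b - s).elements())

statement = [("2024-03-01", "GROCER", "42.10"), ("2024-03-02", "GAS", "60.00")]
budget    = [("2024-03-01", "GROCER", "42.10"), ("2024-03-02", "GAS", "59.00")]

extra, missing = find_delta(statement, budget)
print(extra)    # [('2024-03-02', 'GAS', '60.00')]
print(missing)  # [('2024-03-02', 'GAS', '59.00')]
```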
>> I keep trying to get it to review my personal credit card statements. I have my own budget tracking app that I made, and sometimes there are discrepancies. Resolving this by hand is annoying, and an LLM should be able to do it: scrape the PDF, compare the records to mine, find the delta.
This is a perfect example of what people don't understand (or on HN keep forgetting). LLMs do NOT follow instructions, they predict the next word in text and spit it out. The process is somewhat random, and certainly does not include an interpreter (executive function?) to execute instructions - even natural language instructions.
There's a very fun video about accounting by Dan Toomey [0] that I think really drives home the point that accounting is:
1) Extremely important
2) Not that glamorous
I always think of accountants as the "nerds" of the finance world. I say this lovingly - I think in another life I would have become an accountant. I find it very fascinating. I worked at a company that worked with auditing datasets, so I knew much more about accounting than I would have otherwise.
Nobody ever wants to listen to accountants because they either are giving you bad news, or telling you the things that you should be doing. No one can deny how important they are, despite how much it seems like everyone wants to get rid of them.
An accounting story I love is how my old company got a lot of business because of Enron. Part of the reason that Enron was caught was due to their audit fees.
Their fee disclosures showed that Arthur Andersen was charging a huge percentage for non-auditing work (audit fee disclosures report what percentage of fees was auditing-related and what wasn't). This was a huge red flag.
My company was the only one at the time that kept track of audit fees, and so a huge number of people paid to access that data stream.
If one day I quit programming, maybe I'll get my CPA.
"...can tell you in the real world - essentially zero AI rollout in accounting world for anything serious."
The jobs the researchers concluded were affected were "unregulated" ones where there are no college education or professional certification requirements, e.g.,
receptionists
translators
software "engineers"
"Not sure what these guys are studying..."
Apparently, they studied payroll data from ADP on age, job title and headcount together with, who would have guessed, data from an AI company (Anthropic).
It should also be noted that there are some pretty big flaws in the analysis. They mention "the distribution of firms using ADP services does not exactly match the distribution of firms across the broader US economy," but make no attempt to adjust their analysis for it. They also drop 30% of the data for which there is no job title recorded. With such a skewed sample, it's hard to tell how the analysis is supposed to generalize.
There seems to be this dream of Tax AI Software that will just do all of the taxes. But other than using AI as a fancy text search, I don't see it happening for a long long time. LLMs can't do arithmetic or count.
Yeah - classifying an invoice as, say, building rent versus printer ink, it'll have some success. So we'll see some of it at the very bottom end.
>LLMs can't do arithmetic or count.
Yes. The fancy copilot stuff does use pandas/python to look at Excel files, so things like adding up a table do work sometimes, but the parameters going into the pandas code need to make sense too, in the garbage-in-garbage-out sense. The base LLM doesn't seem to understand the grid nature of Excel, so it ends up looking at the wrong cells or misunderstanding how headings relate to the numbers, etc.
It'll get better but there doesn't seem to be the equivalent of "use LLM to write boilerplate code" in this world.
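A toy illustration of that failure mode (hand-written Python, no real copilot involved): if generated code doesn't know which rows are headers or pre-computed totals, it can produce a total that is silently wrong rather than raising an error.

```python
# A worksheet as rows: title, header, data rows, then a pre-computed Total row.
sheet = [
    ["FY2024 Expenses", ""],
    ["Item",            "Amount"],
    ["Building rent",   2000],
    ["Printer ink",     45],
    ["Total",           2045],
]

# Plausible generated code: "sum every number in the Amount column".
naive = sum(row[1] for row in sheet if isinstance(row[1], (int, float)))
print(naive)    # 4090 -- silently double-counts the Total row

# Correct code needs the layout: the data lives in rows 2 and 3 only.
correct = sum(row[1] for row in sheet[2:4])
print(correct)  # 2045
```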
Yup, using AI for any serious tax calculation or even advice is a REALLY BAD idea.
A close relative is a top expert in US Trust & Estate Tax law working at a well-known BigLaw firm. Of course they have substantial AI initiatives, integration with their system, mandatory training, etc.
She finds the AI marginally useful for some things, but overall not very much, and there are serious errors - particularly the types of errors only a top expert would catch.
One of the big examples is that in the world of T&E law, there are a lot of mediocre (to be kind) attorneys who claim expertise but are very bad at it (causing a lot of work for the more serious firms and a lot of costs & losses for the intended heirs). The mediocre-minus attorneys of course also write blogs and papers to market themselves, often in greater volume than the top experts. Many of these blogs/papers are seriously WRONG, as in giving the exact opposite of the right advice.
Everyone here sees where this is going. The AI has zero ability to reason or figure out which parts of its training input are from actual top experts and which are dreck. The AI cannot reason, and cannot even validly check its 'thinking' against the existing tax code (which is massive), or the regulations and rulings (which are orders of magnitude more massive). So, the AI gives advice that is confident, cheerful, and WRONG.
Worse yet, the LLM's advice is wrong in ways only a top expert would know, and in ways that will massively screw the heirs. But the errors will likely only be discovered decades later, when it is too late to fix.
Seriously, do NOT use LLMs for tax advice, unless you are also consulting a TOP professional. And skipping the LLM part is best.
My relative is quite frustrated and annoyed by the whole thing: a tool that should be especially helpful with these massive codes/regs/rulings often turns out to be more work than just using the standard WestLaw/Lexis legal database searches.
LLMs basically can't do arithmetic directly; expecting them to is a skill issue. Most models can and will happily write and execute code to do that work instead.
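A sketch of the division of labor that does work (illustrative amounts only): let the model extract and structure the figures, and let ordinary code do the adding - with Decimal rather than float, so binary rounding drift never shows up in a ledger.

```python
from decimal import Decimal

# Classic binary float drift: harmless in graphics, not in a ledger.
print(0.1 + 0.2)  # 0.30000000000000004

# Suppose the model has extracted these line items as strings;
# Decimal keeps the cents exact.
charges = ["19.99", "3.50", "0.01"]
total = sum(Decimal(c) for c in charges)
print(total)      # 23.50
```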
> It's like a programming language where the equality (==) symbol has a built in 20% random number generator and you're supposed to write mission critical code with it.
<bad joke>
Why are we talking about JavaScript in a thread about AI?
</bad joke>
The accounting note is not true in the traditional sense. The field in the US is just getting offshored to India/PH/Eastern Europe for better or for worse. There is even a big push to lower the educational requirements to attain licensure in the US (Big 4 partners want more bodies and are destroying the pipeline for US students). Audit quality will continue to suffer and public filers will issue bunk financials if they aren't properly attested to.
It's amusing to see programmers in the US promoting remote work.
Do those people really believe they're intellectually superior to the rest of the world? If a job can be done purely remotely, what stops the employer from hiring someone who lives in a cheaper place?
As a US-based developer I do not feel threatened by the "cheap" offshore developers I encounter. I've repeatedly been hired to clean up after offshore developers who:
* lied about their capabilities/experience to get the job,
* failed to grok requirements through the language barrier,
* were unable to fix critical bugs in their own code base,
* committed buggy chatgpt output verbatim,
* and could not be held liable because their firm is effectively beyond the reach of the US legal system.
In a couple of projects I've seen a single US based developer replace an entire offshore team, deliver a superior result, and provide management with a much more responsive communication loop, in 1% of the billable hours. The difference in value is so stark that one client even fired the VP who'd led the offshoring boondoggle.
Software talent is simply not as fungible as some MBAs would like to believe.
I've worked alongside (but never with) offshore developers, often from the big consultancy companies. One thing they tend to do is place one competent developer and a dozen less-so, so that the work gets done by the one but they get paid for a dozen people.
But I also believe the managers hiring offshore employees are fully aware of this. If they aren't then they're not very good managers and/or have no idea what they're doing.
The offshore people mainly work on SAP and legacy systems though; it turns out it's very hard to find willing or competent people in Europe that actually want to work on / with SAP. However, foreign workers have less qualms about learning stuff like that, since the money is really good.
Yes this is the agency model here in Croatia. You would get one senior developer covering 2-3 projects and a few junior/mid developers working full time.
I have a feeling it's not working that well anymore, because the people covering those juniors just earn more going to work straight for the client, with less of a burden on them. It used to be harder, so the agencies had leverage; nowadays even big companies will hire individual B2B contractors.
The only management experience I've had was as a team lead at a US-based consulting company. It was really stressful because I felt like I was managing a team that wasn't capable of doing the work. I was expected to spend at least some of my time coding, and was responsible for the overall project. This is the first time it has occurred to me that this might have been intentionally set up to exploit me while maximizing the amount we can charge the client.
We couldn't find good SAP security folks to save our lives at a previous job. $900/hr for consultants.
regular "line" SAP admins had to be found in Mexico and brought up on TN visas -- still well paid but generally pretty good, doubly so because we had a Mexico City office and could retain the staff even after they rotated back to MX.
I’ve worked with awful, stereotypically garbage offshore teams. I’ve worked with quality offshore teams. The difference was money. The quality teams made less than, but nearly as much as an American worker. Maybe not a FAANG guy or a New York / SF worker, but all those small cities in flyover states? They came in 20-30k under, perhaps.
Language, cultural, and time barriers still come into play regardless of how good they are, however.
Likewise! Though Latin American engineers also tend to be some of the priciest offshore developers (along with European engineers). Excellent engineers, but there's still some churn from the friction of hiring and maintaining teams overseas.
As posted above, we had great success with Mexican hires out of Mexico City.
General perception was the universities there produced qualified graduates who were not paper tigers (or didn't lie about creds).
Rates for them were pretty good, and we had better alignment with timezones and holidays.
Reasonably good alignment in terms of legal and HR issues -- easier to enforce than, like, Bangladesh
The NAFTA / USMCA / whatever its called now Visa made it easy for them to come across the border for a few years as well. Pay bump for a while plus a chance to work in HQ or the IT office directly, make fat stacks, and then rotate back to MX and buy a nice house. The Mexico City PMs were also instrumental for bridging the language gap when running projects in other LATAM countries.
We find it incredibly hard to hire these people. It turns out a lot of US companies are also interested in smart, capable, cheap engineers in Central Time Zone.
That's not the talent not being fungible but the trust and accountability not being fungible. Which is a structural issue and unlikely to be resolved. I suspect it's more profitable for a lot of VPs for offshore labor to be as inefficient as possible.
> and could not be held liable because their firm is effectively beyond the reach of the US legal system.
this is a big one. The last F500 I was at dropped Tata for several internal support teams due to the belief that they were messing with proprietary code and/or had screwed things up so badly they warranted a lawsuit - but had no legal levers to chase them for damages.
ditto for the one-off programmer who sexually harasses people while remote -- how does a remote worker sue, or get sued, and under what law?
We've all had similar experiences. But if you have been around long enough, you will also encounter highly competent and sometimes outright brilliant folks who run circles around most of us. A bit less common in India than, say, Eastern Europe, but that's about it.
Anyway, highly competent and experienced folks will always thrive regardless of environment. It's the quiet rest who should be worried, from multiple angles.
But at the same time, I doubt there is anything special about me or my US-born coworkers. We aren't superior just because of the continent we live in. But offshore work is, almost as a rule, terrible quality done by people who are frustrating to work with. It doesn't make sense.
This experience is most likely from dealing with offshore software farms. Those are the same shit as their Western counterparts, and even worse because of language and logistics. On an individual scale, however, one can easily find great developers in, for example, Eastern Europe and former USSR countries who do an amazing job for a very attractive price. Just not dirt cheap.
And yes, there is nothing special about North America as far as the quality of software developers in general. Mostly you get average, buzzword-indoctrinated, not-so-great people with amazing salary expectations.
Firstly, I want to say that we are "cheap" because things are dirt cheap here.
Now, I am not a software developer - I'm in high school - but I have my brother and cousins working in the software dev industry, and here are my thoughts.
>language barrier: I genuinely don't know what kind of incompetent developers you'd have to hire for that. I mean, sure, if you hire extremely shitty developers, but even that's rare.
Most people here are comfortable enough with English, in the sense that literally anyone can speak English and mostly get the point across. Yes, I have heard of some misunderstandings, but I don't think it's really much of an issue.
Now, some outsourcing companies are mass recruiters who recruit from CS colleges where no one else recruits (Tata Consultancy Services, Infosys?), and the thing with them is that they don't even pay what a mediocre developer expects, even in INDIA. They are basically exploiting junior developers, and are compared with govt. institutions in my country given how slow they are.
My brother works at a decent consultancy, but he says there are a lot of inefficiencies in the system.
He worked on a project, and we estimated he got 1% or less of what the client paid, for work that he MOSTLY did. So my brother has way more incentive to freelance and get a "remote job", not consultancy work.
I think you're conflating remote jobs and consultancies. Remote hiring / freelancing Indians is still cheaper than a consultancy, IMO - consultancies are parasites on the developers.
My brother works at a consultancy right now because the job market is shitty, and he has gotten offers of 4x his current salary from countries like Switzerland and America. Yet my family doesn't want him to do the 4x-income work, because he is already working a job and they don't want him to burn out.
And they don't want him to leave the job because it's "safe" - you can't trust these startups, given how volatile they are, and if they fail, whoops, the job market is really messed up right now, even in India. Also, arranged marriage is a huge thing, and the girl's family usually checks the company the boy works at, and they usually get suspicious if it's a remote job (and I mean, for good reason).
Also, trust me, some Indians can definitely work in American time zones too, though that is a little tough. But I mean, we are okay if you call us once or twice late at night, when it's day in America and you have something really urgent. At least I am okay with that.
And you could pay 2x the salary a normal Indian dev gets, and I feel like even that would be less than an American dev.
This can really filter for devs with seniority or good projects.
It's a problem of incentives for consultancies (which is what you seem to hate), and maybe that's a bit fair given how many inefficiencies I see in that system. Just hire remotely and directly, I suppose.
People expect that they can pay 0.05x in the Philippines or India, or 0.1x in Poland or Estonia, when that's just not going to happen. I've heard a few people say the multiplier starts at something like 0.4x or more for equivalent talent.
Since that comes with all the disadvantages and risks you'd expect from splitting your team across two countries and operating in a market you don't understand, at that price point a US company should probably start thinking about spinning up a cheaper team in, I dunno, Dallas rather than offshoring.
You're delusional. Of course, if you take the cheapest possible offshore workers, you get terrible results compared to an experienced engineer in a developed country.
But it's a bit like IKEA: if you buy their cheapest stuff, it will fall apart after a few months, while their "expensive" lines are far cheaper than the competition at the same quality.
You might think you're a solid mahogany table, but at the end of the day you're probably the same table as the one being sold at IKEA, just more expensive.
> what stops the employer from hiring someone who lives in a cheaper place?
I've worked with remote workers from around the world. Let me preface by saying there are of course exceptions but:
What I've found is that most often Americans exhibit self-starting and creativity. What I mean by that is non-US workers are great if you give them a specific task, even a really hard task.
But if you give them a nebulous problem, or worse, a business outcome, they tend to perform much more poorly. And I rarely see non-americans say something like "I think our customers would like it if we added X to the product, can I work on that?".
I don't think it's because Americans are better at this -- I think it's cultural. America has a much higher risk tolerance than the rest of the world. Failing is considered a good thing in the USA. And the USA is much more entrepreneurial than the rest of the world.
These two things combined create a culture difference that makes a business difference.
Additionally, what I've found is that the exceptions tend to move here because their risk taking is much more acceptable here (or they are risk takers willing to move across the world, hard to say which way the causation goes).
>> What I've found is that most often Americans exhibit self-starting and creativity.
I'm going to counterpoint somewhat. I think those attributes are evenly spread into all countries, but equally I think they are uncommon in all countries.
I don't live in the US. I have traveled there and elsewhere. I would agree that there are lots of cultural differences between places, even places as nominally similar as say the UK, Australia and the US.
Of course who you interact with in various places matters. If you go to India and visit a remote-programming-company you'll meet a specific kind of person, one well suited to providing the services they offer.
Dig a bit deeper elsewhere and you'll find some very bright, very creative, engineers in every culture. In some cases those folk are doing remote work for US companies. In a few cases they're building the software (creatively and all) that the US company is selling.
In countries that are isolated for one or other reason creativity thrives. Israel, South Africa, Russia, all have (or had) exceptional engineering abilities developed because international support was withheld.
Yes, it is hard to find good talent. It is hard to develop and nurture it. But it exists everywhere. And more and more I'm seeing folks outside the US take American jobs, precisely because American workers are so keen to explain how portable those jobs are.
I understand that the American psyche is built on exceptionalism. And that does exist in some areas. But unfortunately it also acts as a filter, blinding you to both exceptionalism elsewhere and inferiority at home. By the time you realise someone else has the edge, it's too late. We've seen this in industry after industry. Programming is no different.
I understand also that shooting the messenger is easier than absorbing the message. Let the down-voting begin.
> I think those attributes are evenly spread into all countries, but equally I think they are uncommon in all countries.
The data does not support your statement. From a startup report just four days ago:
The United States alone generates 46.6% of all startup activity worldwide, nearly half of the global total. Together with China (9.2%), the United Kingdom (5.6%), and India (5%), these four countries account for 66.4% of the absolute global startup activity.
I will give you that Israel in particular has a strong risk taking culture, as does Singapore and Estonia. And there are a lot of startups coming out of there.
But overall the US has way more risk taking.
And like I said at the very beginning, there are of course exceptions. Yes, every culture has some brilliant risk takers. But at least until recently, many of them came to the USA after they got successful.
There is startup activity in the US because there is enough capital to fund it. Getting funding for a startup even in pretty rich countries in EU is more difficult by an order of magnitude.
It's not so much about risk taking as about getting proper funding and overcoming the bureaucracy barriers.
E.g. Poland itself has very low startup rates, but somehow Poles who go to the USA create things like OpenAI ;)
Creativity and startups are two different things. Many of those startups are not creative in any way. And conversely, being creative does not imply creating a company. This is about how capital works.
America is unique in the way its businessmen tend to think that creating a business is the only way to be creative.
And incidentally, the post was about employee creativity.
Um, so VC-funded startups are the very definition of "not risky". Basically you'll do something as long as someone else ponies up a big pile of cash to pay for it. Pretty much any other business model, where you build with your own time, money, or capital, is much more risky.
Equally, I don't think this is an argument for American exceptionalism (which is the point under discussion).
This is the best HN comment I have ever seen. So elegant.
I am going to use the "I think you should reflect on that" line from now on. It's just pleasant to me: it seems professional and actually inviting to a discussion, while also showcasing the hidden irony of the original case that you pointed out.
You've made my case for me, if by "I think you should reflect on that" you do in fact mean "[you've] been holding a silly opinion and don't want to admit it".
The former is a passive-aggressive way to say the latter. I aim to, and encourage others to say what they mean.
All jokes aside, the commenter I initially replied to really should reflect on why their concept of creativity and risk tolerance is so linked with financial outcomes, because that is a very particular association, and it may inform their worldview more than they realise.
> The former is a passive-aggressive way to say the latter. I aim to, and encourage others to say what they mean.
While I wouldn't prescribe that someone sit down and think about why they tie the two together, you are probably right that it's reflective of their greater worldview. I wouldn't prescribe it because, odds are, they have already reflected on it quite a bit. One thing I've really taken away recently, reading about the lives of ordinary immigrants to early America, is that modern people are incredibly good at constantly reflecting on and adapting their models of self and of belief. I believe this constant reshaping is probably the main reason echo chambers are so effective, and dangerous.
Re: the irony, I don't see it, but I'm happy to hear your explanation of it. For what it's worth, my own interpretation of my words isn't passive aggressive, it's (charitably) pretty direct, or even (less charitably) plain old aggressive-aggressive.
Okay, so, uhh, I think it was my comment where you said it looks passive-aggressive, and I just read it again, and yeah, it does.
So yeah, thanks - in the sense that I am not going to use this phrase now that I realize it. Not sure how I even found it professional; man, I am cringing.
But maybe in the context the OP used it, it was a good roast, and I liked the use of the phrase in that context. But yeah, good point.
For what it's worth, I also don't see the irony. And I also didn't see that it was passive-aggressive until you pointed it out, and then I saw it... So, uh, yeah.
> Israel, South Africa, Russia, all have (or had) exceptional engineering abilities developed because international support was withheld.
I think if you add the US to the list this theory disappears. It's more the frontier/self reliant/entrepreneurial attitude that I think makes the difference.
>What I've found is that most often Americans exhibit self-starting and creativity.
Isn't that mostly a function of how incentives are aligned? I had a job with a lot of outsourcing to India. The Indians were given specific bits of code to write. They didn't even know how their code fit into the application.
Their entire incentive structure was geared toward getting them to write those bits of code as quickly as possible, finish, and take another task. There just wasn't any room for "self-starting and creativity".
I have a feeling if the entire application had been moved to India things would have been different.
It could be. But I worked at companies where we had full time employees all around the world, all of whom had full access to the same information the rest of us had. And I still saw this behavior generally. There were of course exceptions.
Interestingly the biggest exceptions were ones that had at some point lived and worked in the USA, and then had returned to their home country for some reason or another.
> I have a feeling if the entire application had been moved to India things would have been different.
I had direct experience with this. We had an office of full time employees in India tasked with a project, but I still had to hand hold them through most of the key decisions (which I didn't have to do with the US based teams nearly as much).
It's also like: no, I don't think a family is supposed to be a place where some guy at the top extracts all the money, then trickles it down, and I get a percentage of what I did. That doesn't sound like a family.
Someone should write a blog post on this phenomenon; to me, it seems like Americans have a parasocial bond with companies. (I vaguely remember a blog post along the lines of "the Stripe CEO said my name once", and it felt parasocial, man.)
I mean, I just feel like Americans complaining about Indian devs are complaining about the wrong things. Maybe I don't get them, but what they are saying isn't true. I just don't get it, man.
I have seen Indian govt jobs be much more like American private jobs, in the sense that employment becomes central to people's identity and there is a sense of tight-knit community for the most part. Maybe that has to do with the fact that the govt isn't usually exploiting its own workers, and the tight-knit sense of community comes from things like teaching really poor children, or building roads - my uncle flexes on me that he built this road or that college, showing me the absolute chad he sometimes is.
Cultural differences do exist. I don't understand why this isn't a major problem, because it's behavior I've seen again and again and again: Indians seem terrified of showing any initiative whatsoever (including asking), any own contribution, and do what you've asked them and only what you've asked them. They are also terrified of being accused of doing nothing. This goes to extremes, such as purposefully taking a very long time to finish a simple task simply because they haven't gotten a new one, don't dare ask for one, have to be seen to be working, and can't come up with anything themselves.
You want a long list of simple tasks finished? Excellent workers. An endless ticket queue with simple problems? There's a few issues with them not escalating real problems, but ok.
You want an application developed and a lot of problems solved? Stay away.
Well when you are paid peanuts, you do the bare minimum.
And an incentives issue.
Some software engineers do the job, and if they finish the work early, the company just starts having more expectations of them WHILE PAYING THE SAME.
So you are effectively rewarded if you don't work, or take longer to do the same thing - at least in consultancies and similar businesses in India.
I feel like a lot of Indians, especially software devs, don't have this allegiance to a company where we consider the company to be our "family", and I find that really fair. My cousins always tell me that a company extracts 10x more value from you than it gives back. Not sure how much of that is true in the US, but some developers are literally exploited in India. They couldn't care less about the application being developed when they are stuck in a state of limbo where they won't get fired for shitty work, but won't really move up the ladder either - and even if they do good work, it would take years for the company to notice. It's better to just change companies for that raise.
An incentive issue at its finest, which could be - and is - fixed by many people. Just because you used a consultancy that sucked, or had people that sucked, doesn't make us all shitty software devs, man.
It's not a cultural issue. It really offended me that by coating us all in this "culture", you said some things which are clearly offensive.
Maybe I can see the point that software attracts a lot of shy people, so they are shy about taking the first initiative, but that's not a cultural issue.
The culture of our schools matters too: most schools don't incentivize extracurricular activities much, so we don't do them, and that's why we don't usually take initiative - because boom, everything depends on what you wrote in a 3-hour exam.
The incentive system is flawed, but maybe I have hope. To be honest, things aren't that much better anywhere else in the world either. I just feel like either the devs I have met IRL are really good, or you guys' experience hasn't been that good - but it isn't that big of a difference, and I feel like things get a little exaggerated when I come to forums like this.
I have often experienced that it isn't a problem of pay or of incentives. They're terrified of asking for something to do for example. As in scared, and not a little bit. Not underpaid.
I'm not claiming they're well-paid, but I don't think this is the issue, or at least not the primary issue.
> And I rarely see non-americans say something like "I think our customers would like it if we added X to the product, can I work on that?".
The most crucial difference in this context is that Americans are employed directly by the company, while foreign workers are behind several layers of management belonging to several companies. While you can walk around and deliver elevator pitches to higher-ups, foreign workers must track their time spent on tasks down to the minute in Jira. Then, they must find a manager who would like to pitch a feature to a manager who would pitch a feature to a manager in the U.S.
Exactly. I used to work in such a situation for a few years (consulting company hiring EE devs). I tried suggesting things, building PoC’s, pitching it to the manager, all was met with just “we’re on a limited budget, so stick with what we’ve arranged.”
Had I built the things anyway it wouldn’t be met with praise, but looked down upon for bypassing the manager (or I just wouldn’t get paid for those hours).
Many big corporations tend to be similar even when you’re employed directly.
You can’t truly be creative when you’re stuck 7 layers of mgmt deep. You also have to understand that for those who’ve only worked in such situations, “risking” their position at a foreign company just to appear smart doesn’t seem like a good idea, so they don’t do it.
While on the one side I think you have a point, on the other there's different dynamics in place as well; you're comparing offshore workers to internal employees. An offshore worker gets hired to do a job for another company, an internal employee is part of the company.
That is, an external worker (and I'm a consultant, I know) gets paid per hour; if the company goes under for whatever reason, they just move on to the next assignment, while an internal employee depends more on their job.
Anyway that's just a theory. I'm a "consultant" which is just a fancy word for a temp / hired hand, and I'm somewhere in the middle in that I will think along with the company and propose improvements, but at the same time have lower risk and much less attachment to the companies I work for.
I don't think it's cultural per se. As an extreme example, the CEOs of Google and Microsoft were both born and raised in India.
> An offshore worker gets hired to do a job for another company, an internal employee is part of the company.
I've experienced both. Working with offshore employees and full time employees who happened to be in foreign countries. It was a similar experience with both, the exception being the ones that had previously lived and worked in the US.
> I don't think it's cultural per se. As an extreme example, the CEOs of Google and Microsoft were both born and raised in India.
Sundar Pichai moved to the US when he was in college. His entire working career and a bunch of his schooling was in the US.
Satya Nadella did the same.
As I said in my original reply, the ones who are more entrepreneurial or successful tend to move to the US (or at least used to).
> Failing is considered a good thing in the USA. And the USA is much more entrepreneurial than the rest of the world.
I wonder how many devs have been sacked for going out of their way and making stuff nobody in business asked for, or perhaps that broke something along the way and ended up being a net negative: in the EU vs US and other parts of the world.
Might be loosely related to how much money the company has to burn and the nature of their work (e.g. probably not looked well upon in consulting where you have to convince clients to pay for whatever you've made), as well as how popular each type of work is in each part of the world.
I've worked with experts from around the world. After a certain level of competitiveness they are all pretty much the same. Once you become "pals" they all start suggesting improvements. Maybe you socialize better with americans.
Sure, at the highest levels you'll find these traits everywhere. But there is a reason these folks have ascended to the highest levels. What I'm saying is that you find it far more often in junior people in the USA.
> I don't think it's because Americans are better at this -- I think it's cultural.
My experience is ANY delegation incurs a big loss in agency. I want to create a startup -> my employees are much less invested than I am. My remote (French) employees are even less invested. My Ukrainian employees are completely passive and I fired them. The more the distance, the less invested, the more passive.
It’s tempting to attribute this to your country’s qualities, but my experience is every country is a mixed bag.
> non-us workers are great if you give them a specific task, even a really hard task
...which is a lot like the LLMs! Maybe the skillset required to manage non-US workers is the same as for managing ChatGPT 6o, but the latter scales better.
> America has a much higher risk tolerance than the rest of the world.
America is one of the most risk-averse countries in the world, seriously. Americans are constantly scared: of losing their job, of physical injury, of everything and everywhere.
> Failing is considered a good thing in the USA
America punishes failure pretty hard. Some people's failures are ignored, but most people's failures are punished in pretty significant ways.
Americans are truly exceptional people. Or, at least, that's what I learned in American-made training on cultural differences. The funniest part is that the training touched on nationalism. You see, nationalism is a negative quality exhibited by people in other countries. Americans have a positive version of that: patriotism.
That's not at all what I said. I said I see it far more often in Americans than other cultures. And I have stepped out of my bubble many times. I've worked with a lot of people in a lot of countries.
> What I mean by that is non-us workers are great if you give them a specific task, even a really hard task. But if you give them a nebulous problem, or worse, a business outcome, they tend to perform much more poorly.
I mean come on, how do you expect people to interpret this paragraph? I can only assume you are trolling, so I'm done here.
It's easy to criticize that part, but his last sentence is spot on: the creative IT minds from those countries tend to migrate to places that better match their entrepreneurial personality, and those usually won't be China or India but rather somewhere in America, or even Europe.
They go where they are told it's easier to get money for their ideas. This has long been the US. However it looks like it is changing in some fields lately.
I'm one of those offshore people that live in a cheaper place and works remotely for a US co.
The majority of people in the company are still in the US, and even for the East coast, the timezones are just annoying to work around sometimes. Either I need to do late days, or they have to do uber early mornings/SUPER late days, don't even get me started on West coast where the hours basically never match. And I'm in the closest timezone I can be for the US.
And there's also a cultural aspect to it. I simply work differently from how the US bosses expect, because my employer has to respect workers' rights if they want to hire people in the EU (unless they hire them as contractors, and even then workers have many protections). I clock off at exactly 17:00, I never answer messages outside working hours, I don't do overtime or anything resembling it, etc. And yes, they don't pay me what I'd make in the US, but it's really not that much lower, plus life is just cheaper, even here in the Netherlands. I get paid less in relative terms, but from what I can tell, other than the people getting paid obscene amounts, my quality of life is higher than most of my US counterparts'.
I've noticed my US colleagues are much more willing to waste away their lives for their employer as well, even if there's no real expectation for them to do so, and the business obviously prefers those kind of employees over the ones like me.
So there's still plenty of reasons to keep hiring US-based devs, from cultural to logistical. Maybe you guys should work on getting some actual worker protections first, though...
> but it's really not that much lower, plus life is just cheaper, even here in the Netherlands.
Lived in the NL for 4 years; it was many things, but cheap wasn't one of them. With the enormous taxes, high rents and mediocre salaries, I don't think you can make the case that it's somehow cheaper than the U.S., unless you specifically mean Manhattan or Silicon Valley.
As some have said, it's not about being superior. Common language, background, maybe overlaps in education, and avoiding cultures like those at Indian offshore companies where there is a lot of churn, maybe 1 Sr person you "hired" really farming the work out to multiple Jr people.
I agree with what you've written, but I've worked with colleagues in South America and Eastern Europe where none of those problems existed: folks spoke perfect English, people were incredibly motivated to do a good job, and they spoke up proactively when problems arose.
I have had issues with Indian outsourcers like you say (lots of churn, time zone hell, a culture of pretending everything is fine until release day and then saying "sorry, nothing works", etc.), but it's a bigger world now, and there are still lots of folks making half of US dev salaries where none of these problems exist.
My intuition says there are some stylistic differences. Some development cultures seem to get somewhat better results with rigid computer-engineering tasks with high-granularity requirements and straightforward goals, even when the tasks are really hard, deeply technical, and the goals are difficult. Others seem better at the more nebulous sort of tasks with a lot of flexibility. Both are really useful mindsets that become much less useful when improperly applied.
Given, outsourcing is probably going to be hit-or-miss regardless of who’s doing it.
As an outsider I think Americans still have the upper hand in, for lack of a better term, work ethic.
A lot of that stems from a lack of job security. Stuff like suddenly being locked out of your work email/slack or being escorted out of company premises is largely unheard of in the rest of the world.
As a point of comparison: I'm a contractor based in a popular outsourcing destination. My contract is extended well over a month before it expires and I would need to do something particularly harmful to be let go just like that, as our client values continuity of services and will hold the agency accountable should that suffer.
Over here if a job listing mentions "US client" it typically means considerably more work for considerably more pay. Some go for that, others opt for more relaxed roles. I can't imagine having US jobs as the only option.
It’s amusing to see these comments as if American tech companies don’t already have offices all over the world.
Even a mid-size tech company I worked for had over a dozen small offices around the world to collect as many qualified developers as they could. They had some remote work too.
Still hired a lot of Americans. Thinking that remote work will be the end of American workers has been the driving force behind outsourcing pushes for decades, but it hasn’t worked that way.
> Still hired a lot of Americans. Thinking that remote work will be the end of American workers has been the driving force behind outsourcing pushes for decades, but it hasn’t worked that way.
The difference is that back then the project lead could explore outsourcing certain roles to India, EE and LatAm, while today the VP can explore outsourcing the project lead roles to those countries. These countries have built up their own native tech talent, many of whom already bring more to the table than the typical American - they work longer hours, for cheaper, and often bring a lot more experience. I've seen companies who only run sales teams with Americans, with the rest of the workforce being shipped out.
Notably, India already has nearly 2000 GCCs (Global Capability Centers, mega complexes of offices for foreign companies) set up, with that number only projected to increase as more mid-market firms expand. While many of them are just back offices, some, like Walmart's GCC, are the entire tech division: the CTO remains in the US, while the entire software team is in India. Where earlier the Indian team had to adjust their hours to the USA's, now quite a few US-based employees have had to adjust their hours to India's.
It's not intellectual superiority. They've already offshored all the other jobs they can. If they could offshore my job, they would. But it's very hard to find reliable talent anywhere, much less offshore. It is easier to find the talent here, and there's more of it. Then there's the complexity of hiring, the timezones, language barrier, and all the other small complications that add up.
Once you have world-class experts all over the developing world, my job might disappear. But you need experience to get there, which they aren't getting, because they aren't here. It's privilege 101: if you have it, you get more of it; if you don't have it, you don't get any of it. We're very privileged to be high-value domestic workers.
And by the way, remote work has been a thing here for decades, yet the calculation hasn't changed. Our remote jobs are still safe.
You’ll get downvoted but in my experience, which may not be representative of the entire population, this is true.
A mid-size US tech company I know well went fully remote after a lot of insistence from the workforce, prior to the pandemic they were fully in office.
Soon enough they started hiring remotely from EU, and now the vast majority of their technical folks are from there. The only US workers remaining are mostly GTM/sales. I personally heard the founder saying “why should we pay US comp when we can get extremely good talent in EU for less than half the cost”. EU workers, on average, also tend to not switch job as frequently, so that’s a further advantage for the company.
Once you adapt to remote-only, you can scoop some amazing talent in Poland/Ukraine/Serbia/etc for $50k a year.
I think most programmers in the US simply don't realize how much they earn compared to the rest of the world.
I'm not talking about rural Chinese villages whose name you can't pronounce. Or the stereotypical Indian call centers. I'm talking about highly educated programmers who can communicate fluently in English, in cities like Beijing or Munich. If people in SV know how (relatively) little their counterparts make in these places, they'd be much more opposed to remote work.
And that was before LLM. Today practically the entire planet can write passable English.
Yeah, for $100k or slightly less you can hire very good devs with 5+ years' experience in CN or DE, who often speak English at full professional proficiency without the help of LLMs. I know because I currently work for a fully remote startup with people from both countries. For that kind of money, what can you do in the U.S., hire below-average juniors? Even the most clueless junior likely makes more in SV.
Flip that around. Junior devs in the US earning $100k is the anomaly. That this is the case indicates the pipeline for competent developer talent is bottlenecked. Right now is still an amazing time to be in tech. That the industry is so hungry for talent that it's paying such rates and expanding abroad in search of new supply is a sign of its health.
Agreed, it's a harsh truth. Even good old outsourcing seems to be in resurgence. Lately I see large delegations of IT body shops at work claiming 60% savings with AI plus a dev/support center in India.
It may or may not work, but per their plan it could crater 70% of the IT/software department by 2027.
It's interesting: AI seems to be enabling the middle in a positive way.
On the other side, we have started to find that the value of outsourcing to very low cost regions has completely disappeared.
I expect that the wages in eastern Europe will quickly rise in a way they never did in former outsourcing hotspots (India for example), because they are able to do similarly complex and quality work to westerners, and are now enabled by awesome translation tools.
The low quality for cheaper is now better served by the Artificial Indian.
There's a lot of nuance in these types of stories. First, the US is far from uniform in salaries: large metro areas differ from smaller areas, which differ from CA/SV. Europe also isn't uniform, and in Western Europe, unless a company moves to all contractors, it will pay significantly more into a country's equivalent of social security. Personally, I would be uncomfortable having my entire development staff be contractors, as their interests are not exactly aligned with mine.
Amazing talent may end up cheaper in certain locales for a period of time, but if they are amazing they will become more expensive.
IMO, what's at risk are the entry/mid FAANG type jobs that pay a lot for what they are.
My previous employer stopped hiring in the EU (except for the UK, where they were based, and South Africa, where the CTO was from) because the labor laws there made it too difficult for them to fire people, which was a particularly troublesome for them as they had almost quarterly layoffs. They switched back to hiring in the UK and US where there are fewer worker protections.
The euro's fixed exchange rates across much of the EU massively drag down the international cost of a German software engineer, and US companies have yet to wise up to that fact.
It doesn't matter what they promote, remote labor is an economic reality. It's not as if employers are going to forget they can offshore your job because you show up to the office 8am sharp every morning.
The moment they can replace you for cheaper, they will, whether you insist on working remotely or not.
And being "superior" doesn't necessarily mean extraordinary coding skills. The vast majority of code to be written doesn't require that. What it requires, however, is a combination of common sense and a good understanding of the underlying business. This is in short supply in many of the locations the jobs are being offshored to. But let's be honest, it was also in short supply in the corporate IT departments being offshored, though not quite to the same degree.
Yes. I think American programmers are at a local optimum for combining ingenuity and work ethic. You can get more ingenuity vs work ethic or the other way around elsewhere, but the American blend seems to be best.
That's my argument against looking for a 100% remote job. Even if the company is happy with you now, eventually there will be new management that sees your job as low-hanging fruit for expense reduction.
I have no comment on your strawmanning about programmers thinking they're geniuses or something.
But I've yet to meet an accountant who puts in their 40 hours a week and somehow manages to grow their backlog rather than shrink it.
Whereas bad programmers who will do that exist in spades.
Clearly the two professions are not identical.
That said, I've had two mind-bogglingly bad accountants on my payroll in the past who would have made $100K+ mistakes if we hadn't caught them, and we fired the fuck out of those dumbasses. One was American and one was Filipino.
You're getting downvoted, but IMO what you're saying is exactly true, and I've seen it happen.
In my experience, pre-2015 or so, offshoring was limited in its utility. Communication was a bitch because videoconferencing from everyday laptops wasn't quite there yet, and a lot of the favored offshoring centers like India had horrible time zone overlap with the US. And perhaps most importantly, companies as a whole weren't used to fully supporting remote colleagues.
Now, though, if I interact with the majority of my colleagues over Zoom/Teams/Meet anyway, what difference does it matter where they're sitting? I've worked with absolutely phenomenal developers from Argentina, Poland and Ukraine, and there was basically no difference logistically between working with them and American colleagues. Even the folks in Eastern Europe shifted their day slightly later so that we would get about 4 hours of overlap time, which was plenty of time for communication and collaboration, and IMO made folks even more productive because it naturally enforced "collaboration hours" vs. "heads down hours".
I understand why people like remote, but I agree, US devs pushing for remote should understand they're going to be competing against folks making less than half their salaries.
> ... should understand they're going to be competing against folks making less than half their salaries.
The lower salary can be offset by the lower need for money when you don't need to buy your lunch, you don't need that expensive car to get to work and so on. The time you used for commuting could instead be spent working for another company part time.
Regulation is for when businesses cannot regulate themselves.
In many larger companies also, nationstate threats and national security are a trending issue.
If you deal with a lot of PII, outsourcing your data processing pipelines to China isn't going to fly with Congress when you get subpoena'ed for a round with Hawley.
I mean, while this might be true, Europe is full to the brim of developers who speak fluent English, and yet cost maybe a third of their US counterparts. Programming is really quite far from being a global market.
Oh look, another person who thinks engineers are commodities, especially in a field as loosely defined and unregulated as software engineering.
They always ask “if a job can be done remote why not just hire a foreigner in a cheap place?” and never ask “if the foreigner was so good as the American engineer why wouldn’t they be getting paid the same as the American?”
It’s like they think companies are dumb and there is some undiscovered engineering arbitrage opportunity waiting to be tapped that will end the high 6 figure salaries of American software engineers forever.
And yet, since the 90s, software engineer salaries only go up. Millions of Indians flood the foreign markets, but American tech salaries only go up. Covid hits and everyone goes remote, but the salaries only go up. They always go up. American tech holds a supremacy over the world that you will likely not see the end of in your lifetime. There is so much money, so much risk taking, so much drive to dominate, other countries are generations behind.
But hey, keep doing what you're doing. Maybe you'll save a couple bucks while your competitors gobble up the market with far better engineering talent. Not "equivalent" talent: better talent.
> if the foreigner was so good as the American engineer why wouldn’t they be getting paid the same as the American
You should also ask whether you're paying American so much because they are so good, or are you paying them so much because rents in SF are so high?
> Covid hits and everyone goes remote, but the salaries only go up. They always go up.
Once again, did it go up because COVID infections somehow made american workers even better or because lockdowns caused mini tech boom while money printing tanked the dollar's value?
Do you really think companies are paying high tech salaries out of the goodness of their hearts? Like “oh this individual lives in a HCOL area, let’s pay them an appropriate amount” or “let’s share the spoils of this tech boom with our workers! $1.5million dollar bonus for everyone!”
>>Do those people really believe they're the most intellectually superior to the rest of the world? If a job can be done purely remotely, what stops the employer from hiring someone who lives in a cheaper place?
Capitalism dictates that a capable remote worker will not keep working for a single employer, as that would be a waste of time.
He or she will work for multiple employers (overemployed and such), maximizing earnings, which will constantly keep a gap between in-office and remote workers.
The reports from the usual "offshoring centers" aren't exactly inspiring. It's a bloodbath over there.
Seems like the capabilities of current systems map onto "the kind of labor that gets offshored" quite well. Some of the jobs that would get offloaded to India now get offloaded to Anthropic's datacenters instead.
How many of these jobs are getting offshored because of AI?
Language barriers, culture, and knowledge are some of the biggest challenges to overcome for offshoring. AI potentially solves many of those challenges
Language barriers: LLMs are language models and all of the major ones are built in English, speaking that language fluently is surely a prerequisite to interacting with them efficiently?
Knowledge: famously LLMs "know" nothing and are making things up all of the time and sometimes approximate "knowledge"
My experience: hosted LLMs are very good, but even 30B models you run locally are quite poor (at least in Romanian). To some degree they still hallucinate words (they don't conjugate properly sometimes).
Google Translate is relatively awful by comparison. I have an intern now who barely speaks my native language and has very bad English, so we've been using an LLM all the time, and it's always spot on, even for phrases that don't translate directly.
I bet I can do a good job communicating with you without speaking a common language.
I tested ChatGPT when it launched with my obscure native language, which is spoken by maybe 6 million people, certainly isn't easy to learn or elegant in design, and doesn't have much in common with English.
It was absolutely flawless, down to the accents and little quirks; no tool before even came close.
The parent is plain wrong and doesn't have a clue... that's what happens when folks skip learning foreign languages, the most important thing for life you can learn at school. Actively using multiple languages literally increases brain plasticity, much better than endlessly doing sudoku or similar brain teasers.
Language barriers: The outsourced workers I know use AI to help them ask and answer questions about things in English they don’t perfectly understand because English is their second language. They use it to write better English from English with grammatical mistakes
Knowledge: True to an extent, but my assumption here is that it would be used to fill in gaps or correct misunderstandings. Not wholesale doing my job. At least that’s often how I use it
I worry things will be lost in translation (maybe they already have been), or the LLMs will fill in the gaps with wrong information, like some weird game of telephone.
That said, I have one ESL on my team who uses LLMs a lot like that and it's fine so who knows.
It’s basically a solved problem for Japanese <-> English. There are some hiccups, but my coworkers who aren’t fluent in English do pretty good job. We have most of our Slack set up with LLM-auto translations, and it’s been a couple of years of smooth sailing at this point.
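For context, that kind of auto-translation pipeline needs surprisingly little glue code. This is a hypothetical sketch, not the actual setup described above: `Message`, `relay`, and `fake_translate` are all made-up names, and the translator callable is a stand-in for whatever LLM API you'd plug in. It only shows the routing logic of keeping the original message verbatim and mirroring it into each other target language.

```python
# Hypothetical sketch of a chat auto-translation relay (names are invented).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    author: str
    lang: str   # language the message was written in, e.g. "en" or "ja"
    text: str

def relay(msg: Message, targets: list[str],
          translate: Callable[[str, str, str], str]) -> dict[str, str]:
    """Return {language: text}: the original is kept verbatim, and every
    other target language is filled in by the translator callable."""
    out = {msg.lang: msg.text}
    for lang in targets:
        if lang != msg.lang:
            out[lang] = translate(msg.text, msg.lang, lang)
    return out

# Toy translator so the sketch runs without API keys; a real setup would
# call an LLM here instead.
def fake_translate(text: str, src: str, dst: str) -> str:
    return f"[{src}->{dst}] {text}"

mirrored = relay(Message("kenji", "ja", "デプロイ完了です"), ["en", "ja"], fake_translate)
print(mirrored["ja"])  # original text, untouched
print(mirrored["en"])  # machine-translated copy
```

The useful property is that the original text is never round-tripped through the translator, so fluent readers always see exactly what was written.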
I think you are conflating two things. AI could be going after new entry-level jobs in software engineering. I'm not a professional engineer but an accountant by trade (I like writing software as a hobby, lol), and this article looks like evidence that IIT grads will have a harder time getting the jobs AI is attacking. My point rests on the fact that the report doesn't really reconcile with AI destroying entry-level jobs in accounting; rather, this type of work is being offshored to APAC/India. New COEs for shared services are still being built up in India for mid-cap companies to this day, and I don't mean Cognizant and Wipro, but rather the end customer being the company in question, with really slick offices there.
My experience has been that the cheap outsourcing to India is one of the main areas AI is a real disruption. You can go straight to the artificial indian, and get a better result than an outsourced worker with AI tooling. It's one of the most obvious "I no longer need a person for this" experiences I have had since self checkout.
I expect that other areas like accounting that use outsourcing are going to see similar effects in a few years.
I think the article doesn't really prove AI is the culprit but I think this other article disproves that offshoring is. If offshoring was the culprit why is it only affecting the most junior employees? I think the case is still open but AI is the leading candidate.
Maybe for software engineering but not for accounting. I've had to interface with many offshored teams and interviewed at places where accounting ops were in COE centers in EU/APAC.
Do you have any actual evidence that supports the headline? The article does not. It simply mentions 13% decline in relative employment and then blames AI with no actual evidence. Given what I know about the current state of AI and off-shoring, I think off-shoring is a million times more likely to be the culprit than AI.
The entire accounting department at my firm has moved to Poland. That's nice for them, but as a US citizen it does mean the writing is on the wall. On the plus side, I learned a fun fact: Malgorzata is a more common name than I had ever imagined.
IT help was outsourced to India years ago. I expect them to be replaced with AI the minute their government stops handing the firm big contracts because I’ve never spoken to anyone from that group who was actually better than a chat bot.
Have you seen how the profession has worked post-SOX? Did you know 2016 was the peak year for accounting-student enrollment at US universities? I want you to think laterally about this.
Every executive publicly saying obviously* false things like X job will be done by AI in 18 months is putting downward pressure on the labor market. The pressure is essentially peer pressure among executives: are we stupid for continuing to hire engineers instead of handing our engineering budget to Anthropic?
* - Someone should maintain a walkback list to track these. I believe recent additions are Amodei of Anthropic and the CEOs of AWS and Salesforce. (Benioff of Salesforce, in February: "We're not going to hire any new engineers this year." Their careers page shows a pivot from that position.)
Stuff that isn’t pure SaaS. Physical products that benefit from hands on interaction with customers, worksites, and other internal producers. Small and/or local businesses that want someone whose face they can see in person.
I wonder how much this actually matters? I understand that for an auditor, having a quality reputation matters. But if all audits from all firms are bad, how much would the world economy suffer?
Likewise for the legal profession, if all judges made twice the number of mistakes, how much would the world suffer?
The current system is not long term stable, and poor accounting is part of the reason more people don't know that. Even worse accounting would speed up the decline.
> Likewise for the legal profession, if all judges made twice the number of mistakes, how much would the world suffer?
Is this hyperbole? It seems like the real question being asked here is "would the world be worse off without deterministic checks and balances", which I think most people would agree is true, no?
I read it as assuming the deterministic checks and balances are already absent. We have the illusion of determinism but, in practice, audits (and justice) are mostly theatre as it is.
From that perspective, lowering the quality of something that is already non-rigorous might not have any perceivable effect. It's only a problem if public perception drops, but that's a marketing issue the Big 4 already have a handle on.
Then you would have to think twice about the company you may be giving money to (i.e. via the stock market and private bank loans). That's the whole objective of this. Every company is going to need an accountant in one way or another, and you don't really need to follow strict GAAP for management requirements (what else is EBITDA for, if anything?), but that's something completely different from saying: I made x dollars and spent y dollars, here is what I have and what I owe, please give me money.
At the end of the day it is a question of convenience/standards, if GAAP didn't exist maybe firms could use a modified accrual standard that is wholly compliant with tax reporting and that's it.
This past year, I’ve seen a lot of entry-level jobs quietly disappear. It’s not that people are getting laid off, it’s that no one’s hiring beginners anymore. What’s really missing isn’t just the jobs, it’s the chance to grow. If there’s nowhere to start, how are new people supposed to get in and learn?
Were there ever that many low-level Junior jobs though?
In my experience, almost everyone in college would get an internship Junior / Senior year and convert into an FTE after graduation. Those that were not so talented or not so lucky usually struggled to find work, taking many months to finally land a job. Most typically at a Booz Allen Hamilton type of place that was just throwing bodies into seats.
At all of my employers, I’ve never really seen any openings for Juniors, only Mid and Senior positions. The few Juniors we did bring on outside of an internship pipeline were either internal transfers, e.g. a SOC analyst given a chance or a nepotism type of hire.
I got out of school 15 years ago so its been a while now, but at that time there were a ton of junior roles.
I got a CS bachelors from a decent state school, nothing fancy, and everyone I kept in touch with had found an entry level role pretty quickly after graduation.
I did do an internship and had an offer from them, but the pay was pretty low and I really didn't want to move where they were. It was a bit stressful turning that down early senior year without a backup yet, but I ended up with quite a few interviews and an offer before graduation.
Unfortunately I think many of those jobs can also be attributed to the general economic health of the low-interest-rate era.
Companies now need to move beyond pre-revenue and turn a profit, or, if you’re an established company, cut costs and increase margins against other economic headwinds (tariffs, inflation, government policies, etc.)
A Junior dev (and most devs onboarding) will typically require 6-8 months to start being able to meaningfully contribute, then there’s a general oversight/mentorship for a few years after.
Yes, they produce; however, I think a junior’s market salary plus the opportunity cost of mid- and senior-level time spent on mentoring is a hard pill to swallow.
The team I work on is stretched very thin, and even after layoffs (which management agreed went too far) it’s like pulling teeth to get another dev to build things companies are begging for, and even willing to pay cash upfront for us to build.
If you’re getting into the current job market as a junior, you’ll likely need to go heavy in the buzzword tech, accept a position from a smaller company that pays substantially less, then in 1-2 years job hop into a higher paying mid level role (not to say 1-2 years makes anyone mid level imo)
Sadly, they will just have to try harder. It is still doable especially for an American, and I'm not a fan of these doomsayers' prophesying. There is still hope because TikTok and video games are putting most young people in a trance.
It is a career path, but it 1) is a path that only works for a small number of people, most people don't earn anything like that 2) requires a special kind of personality and set of skills 3) is subject to the whim of algorithms 4) requires brand building over time but can be destroyed overnight for many reasons. Most "regular" jobs are much more stable.
Not so long ago people who played with programming and computers were wasting their time and potential, disappointed their parents and would have been better off getting a "regular" job.
Um, no. Way more young people are making that kind of money by being a programmer or doctor or lawyer or nurse or actuary or something versus the minuscule number of people making any significant money on tiktok or video games.
Video and graphical designers? I’m not saying this with optimism, but rather as an observation.
I know a handful of digital marketers who work for different marketing firms, and the use of GenAI for those tasks has exploded; usually tasks they previously had in-house people or freelancers do.
No, it can't replace graphic designers except in low-stakes projects. Companies won't risk their projects on cheap AI with other, larger costs on the line.
Instead of replace, think reduce. A graphical designer/lead might have farmed out some work to a contractor, with AI they now just do it themselves. It doesn't look like 'cheap AI' because they used the tools to augment their skills - not replace.
I don't know what it is about AI that makes people think in absolutes.
I can think of a handful of people I work with who could be replaced by LLM. The hallucinations would be less frequent than the screw-ups the current humans make.
It could at least consolidate 5 of those people into 1 with increased efficiency.
If general government policies adapted to a world with fewer available jobs, it wouldn't be an issue.
As it stands, our governments are carrying on as normal despite the introduction of a technology with the capability to erase an enormous number of jobs from the pool (certainly many standard office jobs).
A world with less work and no general change in policy (no UBI or similar scheme) is a world with more unemployed people, falling living standards, more crime, and more instability.
It doesn't have to replace people on a one-for-one basis to cause job losses. Let's say LLMs make your developers 50% more efficient. Doesn't it stand to reason you can lay off the lowest performing 33% and get the same amount (or more) of work?
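The arithmetic behind that claim can be sketched in a few lines (a back-of-envelope model, not anything from the thread: it just treats team output as headcount times per-person productivity):

```python
# Back-of-envelope check of the claim above: if each remaining
# developer becomes 50% more productive, total output is preserved
# after cutting roughly a third of the team.

def output(headcount: int, productivity: float) -> float:
    """Total team output modeled as headcount * per-person productivity."""
    return headcount * productivity

before = output(9, 1.0)   # 9 devs at baseline productivity
after = output(6, 1.5)    # 6 devs, each 50% more productive

# 6 * 1.5 == 9 * 1.0, so cutting 3 of 9 (33%) keeps output flat.
assert after == before
```

Whether real teams behave this linearly is exactly what the replies below dispute.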
No, it does not stand, because you're thinking linearly. Companies can't simply drop 33% of employees, because there is competition. If competitors use both humans and AI, they will get more value from both. No AI has sufficient autonomy or capability to be held accountable for its mistakes.
There is less upside in reducing costs than in increasing profits. Companies actually want to increase profits, not just reduce costs that will be eaten away by competition. In a world where everyone has the same AIs, humans still make the difference.
Have you ever been at a company where the limiting factor was finding stuff to build? I've never seen one personally.
If there's any productivity increase, they'll just build even more stuff.
(And that's if we accept a 50% increase; I'd say 5% is already generous.)
I think the initial job loss from AI will come from having individual workers be more productive and eliminate the need to have larger teams to get the same work done.
Eventually, maybe. Right now I see a lot more people wasting time with AI in search of these promised efficiencies. A lot of companies reducing headcount are simply hiding the fact that they are deprioritizing projects or reducing their overall scope because the economy is shit (I know, I know - but it feels worse than reported IMO), and that's the right business-cycle thing to do. If you're dramatic and take the DOGE/MAGA approach to management, just fire everyone and the important issues will become obvious where investment is actually needed. It's a headcount 'zero based budget' played out IRL. The truth is, there is a lot of fat to be cut from most large companies, and I feel like the current business trend is to be ruthless with the blade, especially since you have AI as a rose-colored scapegoat.
The way I like to describe it is that you can't go from 1 developer to 0 thanks to AI, but you might be able to go from 10 to 9. Although not sure what the exact numbers are.
I'll go further than you. Even if the team is a cost center, it may not make sense to reduce the headcount if there's still more work to do. After all, if an internal team that just assists other teams in the company without directly creating value suddenly becomes more productive, it could in turn make the other teams more productive. Automatically reducing headcount after a productivity increase is like that effect where people drive more dangerously when wearing seatbelts.
I agree it's a popular excuse; however, unlike the blockchain craze, there are legitimate use cases of productivity improvements with AI.
And if you can (in some cases) substantially increase productivity, then logically you can reduce team size and be as productive with less.
With the right prompting, you can cut a copywriting team in half easily.
My business has one copywriter/strategist, who I’ve automated the writing part by collecting transcripts and brand guidelines from client meetings. Now she can focus on much higher quality edits, work with other parts of the strategy pipeline, and ultimately more clients than before.
I can easily imagine a corp with 100 junior copywriters quickly reducing headcount.
The problem is people (not sure if it's coping) present an argument that either it can perfectly replace someone 100% or it's a useless fad.
Even increasing the average productivity by 10-20% is huge and in some areas (like copywriting as you've mentioned) the gains are much bigger than that. Of course there's also the argument of the infinite demand (i.e. demand will always overshadow any gains in supply caused by AI) but evidence is never provided.
I used to hire someone who worked part time from home to bookmark some of the key pages in thousands of pdfs just so that I can directly jump to those pages instead of spending time myself on finding those pages.
AI can now do it very cheap so no need to give that job to a human anymore.
My use case cannot be solved by just grepping for keywords, because that returns false positives. You need some intelligence (human or AI) to figure out whether it's the right page to bookmark.
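The two-pass workflow described above (cheap keyword search, then a judgment call to weed out false positives) can be sketched roughly like this. The page texts are inlined for illustration, and `is_key_page` is a crude stand-in for the judgment step, which in practice would be the part-time worker or an LLM; a real version would extract page text from the PDF first (e.g. with pypdf):

```python
# Pass 1: keyword search flags every mention, including false positives.
# Pass 2: a judgment step keeps only pages worth bookmarking.

pages = [
    "Annual report ... see the summary of revenue by region",   # mentions it
    "Summary of key findings: revenue grew 12% year over year", # actual key page
    "Footnote: for a summary, refer to page 2",                 # mentions it
]

def keyword_hits(pages, keyword):
    """First pass: index of every page that merely mentions the keyword."""
    return [i for i, text in enumerate(pages) if keyword in text.lower()]

def is_key_page(text):
    """Second pass: stand-in for the human/LLM judgment call.
    Here a crude heuristic: the keyword must open the page like a heading."""
    return text.lower().startswith("summary")

candidates = keyword_hits(pages, "summary")                   # all three pages
bookmarks = [i for i in candidates if is_key_page(pages[i])]  # only page 1
```

The point of the comment stands either way: pass 1 is trivially automatable, and it's pass 2 that used to require paying a human.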
This is what HasanAbi mentioned the other day. Betting on AI to do a job (especially if it fails to replace it) is a double economic whammy. You get rid of original people who did the job, and then you don't have funding for other things (also includes people) because you need to recover the AI costs.
Collective delusion about AI (or similar craze) can be large enough to actually tank the economy.
That I agree with. The problem with the assertion that AI took all these jobs is that the baseline used for assessing job losses sits right at the peak of epic programmer hiring.
> I can’t think of a single job that modern AI could easily replace.
It could replace many workers, perhaps sacrificing quality, but that's considered quite acceptable by those making these decisions because of the huge labor cost savings.
It also could raise the quality of work product for those working at a senior level by allowing them to rapidly iterate on ideas and prototypes. This could lower the need for as many junior workers.
There are tons of internship-like positions where the employer just wants someone to prepare powerpoints and stuff of that nature, that they then edit because the intern doesn’t do a very good job at powerpoint.
Organizations are choosing to eliminate workers rather than amplify them with AI because they'd rather own 100% of diminished capacity than share proceeds from exponentially increased capacity. That's the rent extraction model consuming its own productive infrastructure.
The Stanford study documents organizations systematically choosing inferior economic strategies because their rent-extraction frameworks cannot conceptualize workers as productive assets to amplify.
This reveals that these organizations are economic rent-seekers that happen to have productive workers, not production companies that happen to extract rents. When forced to choose between preserving rent extraction structures or maximizing value creation, they preserve extraction even at the cost of destroying productive capacity. So what comes next?
When you don’t need as many people because of automation, you also don’t need them to fight your wars; you use drones and other automated weapons. You don’t need things like democracy, because that existed to prevent people from turning to revolution, and that problem has been solved with automated weapons. So then you don’t really need as many people anymore, and you stop providing the expensive healthcare, food production, and water to keep them all alive.
Yeah, this is what we are seeing today. Also, it's not just junior jobs going: according to Amazon, they are using it to get rid of expensive senior employees while actually holding onto juniors using AI tools.
We have seen a lot of use of H-1B and outsourcing despite the massive shortage of jobs. Seeing lots of fake job sites filled with AI-generated fake openings and paid memberships for access to "premium jobs."
They're using ICE to effectively pay half the country to murder the other half, but the ICE budget is limited so that automated systems can then gun down the ICE community to replace 99.9% of humans with machines.
Ultimately this is great for Russia, because they'll still be able to invade even if they have only 300 soldiers left in their military, after a low-orbit nuke blast shuts down the AI-run US; basically only Melania, swinging her purse at the troops, will be among the few left alive to resist.
There's likely a slippery slope fallacy in there somewhere (I hope). If interested in the (not so) sci-fi aspects of automated weapons and their ramifications, I often plug Daniel Suarez's great Kill Decision talk and book: https://www.youtube.com/watch?v=pMYYx_im5QI
Ukraine and Russia are already employing low-cost recon and hunter-killer drones.
Israel is already using sniper drones in Palestine that use AI to fly around and headshot whatever moves, as well as AI to select their bombing targets.
"Furthermore, employment declines are concentrated in occupations where AI is more likely to automate, rather than augment, human labor."
No mention of rent-seeking.
No evidence they are being economically short-sighted.
> they'd rather own 100% of diminished capacity than share proceeds from exponentially increased capacity
They're using cheap AI to replace more expensive humans. There's no reason to think they are missing some exponential expansion opportunity that keeping those humans would achieve, and every reason to think otherwise.
I hope AI fuels a re-independence of many industries by making business software discovery and integration cheap and easy. Every plumber with more than 10 years' experience should own their company, with low-cost software running it; the efficiency gains from consolidating resources a la private equity for marketing and bookkeeping go away in an AI-powered world.
> Ancient egypt (elite in pyramids, slaves otherwise) is more likely.
No you're both being optimistic. The feudal lords had a vital need for serfs, and the pharaohs slaves.
It'll be more like elite in pyramids, everyone else (who survives) lives like a rat in the sewers, living off garbage and trying to stay out of sight. Once the elite no longer need workers like us, they'll withdraw the resources everyone else needs to live comfortably, or to live at all. They're not making more land, and the capitalist elite have "better" uses for energy than heating your home and powering your shit.
I would say that’s still being optimistic. The end will come when Baidu, Facebook and Microsoft’s AI engage in total war against each other for survival while we watch in horror and incomprehension. The elites are just as fucked as anyone else.
> This reveals that these organizations are economic rent-seekers that happen to have productive workers, not production companies that happen to extract rents.
Your perspective is so contrary to reality I'm actually not sure if you're trolling or not. There is no such thing as pure value creation. In order for labor to create value, it must be aligned with the company's value proposition, i.e. what convinces customers to pay for the value that the company provides. Half the people off in the corner building something that they think is valuable are actually building something that customers do not care about, won't pay more for, and increase the company's maintenance burden.
Keeping labor aligned with value creation is the whole game. If it wasn't, then all these rent-seeking-first enterprises would have fired their layers and layers of middle management a long time ago; the company needs to pay them a salary (reducing profits) but they don't write any code / "produce any value". All these massive corporations would have moved to a flat management hierarchy a long time ago, if labor was truly capable of aligning itself to improving value generation; and if you think there's some nefarious/conspiratorial reason why massive corporations don't do that, then most of them would have been out-competed a long time ago by co-ops with flat management hierarchies that could produce the same value at a lower price due to lower administration costs.
Needing to hire employees is a necessary evil for businesses. Aligning employees is hard. Motivating employees is hard. Communication is hard. Businesses do not exist to provide people with jobs, which are created out of sheer necessity, and are destroyed when that necessity goes away.
You got there in the end. Hiring people is a necessary evil, and AI allows companies to massively reduce the necessity of that evil. Having done budgeting and forecasting for a wide range of organizations: companies will do anything to avoid hiring an employee. I’ve seen companies spend 3x what an employee would cost just to avoid the increased headcount.
The forces of capital do not want to share a single penny and are solely focused on getting to a place of rent.
What data or special insight do you have as to whether amplifying or eliminating is actually productive?
This argument is vacuous if you consider a marginal worker. Let's say AI eliminates one worker, Bob. You could argue "it was better to amplify Bob and share the gains". However, that assumes the company needs more of whatever Bob produces. That means you could also make an argument "given that the company didn't previously hire another worker Bill ~= Bob, it doesn't want to share gains that Bill would have provided blah blah". Ad absurdum, any company not trying to keep hiring infinitely is doing rent extraction.
You could make a much more narrow argument that cost of hiring Bill was higher than his marginal contribution but cost of keeping Bob + AI is lower than their combined contribution, but that's something you actually need to justify. Or, at the very least, justify why you know that it is, better than people running the company.
ChatGPT (might have) made a few superfluous email jobs obsolete and the people responding to this comment are acting like we’re standing on the threshold of Terminator 3.
Don't underestimate how much of the economy is "superfluous email jobs". Have you seen how stupid the average person is?[0] These people need jobs too.
[0] I was going to mark this as sarcasm, but then I remembered that the US elected Donald Trump as president, 2 times so far, so I'm going to play it straight.
Implying "superfluous email jobs" isn't a significant portion of the international job market. Most people that work in offices fit under this definition.
> Most people that work in offices fit under this definition.
Not at all. The majority of office jobs can't be automated by current generation LLMs, because the jobs themselves serve either creative or supervisory functions. Generative AI might be able to fill in creative functions one day, but the whole point of a supervisory role is to verify the status of inputs and outputs. A lot of these roles already have legal moats around them (e.g. you can't have an LLM sign financial statements), but even if we assume that regulations would change, the technical problem of creating supervisory "AI" hasn't been solved; even if it was, implementation won't be trivial.
Lots of people are sceptical, but I cannot imagine a use for entry-level positions anymore. At my work everybody has taken to calling AI "the intern", which causes no confusion because we don't have, and have no use for, interns.
So what are the long term risks when senior staff leaves and those need to be replaced with new seniors that have never seen the existing work vs. promoting younger people who knows the projects and practises?
Long... term... risk? What is this archaic concept? I just want to get into a position where I can extract rent from children when they grow up. Why do I need to worry about this so called long term risk?
These days normally what corporates do is bring in people from abroad who have the skills.
The idea that companies would seek to train up domestic workers if there is a skill shortage is outdated today – even if theoretically this might be good for the domestic workforce. It's just cheaper and easier to import the skills needed.
I think I disagree. It sounds like companies that do this are average at best and have nothing of their own; otherwise it would not be possible to just replace the workers. If your company is average, it does not attract real talent.
But maybe my comment doesn't apply to every discipline. I was mostly thinking about software development.
I'm not talking about "replacing" workers. I'm talking about hiring the most qualified which in a global talent pool will almost always be someone from abroad. There's no reason for a company to hire a relatively unskilled graduate domestically when they could hire someone with more experience from abroad.
If you look at the workforce makeup of many large tech companies today there's a reason Indian and East-Asian talent is so overrepresented, and it's not because they lack talent. It's because if you actually want to hire the best of the best you're not going to bring in juniors from the domestic workforce and train them up.
> I'm not talking about "replacing" workers. I'm talking about hiring the most qualified which in a global talent pool will almost always be someone from abroad. There's no reason for a company to hire a relatively unskilled graduate domestically when they could hire someone with more experience from abroad.
What I meant by replacing was that when someone retires, you hire a new worker and hope that they immediately produce similar value. If the company has unique skill requirements, it is unlikely that the new hire provides similar value, at least immediately. How long it takes is an open question.
Alternatively, you hire a junior _on top of_ the existing senior person. A junior doesn't have as good a value/cost ratio, but they still contribute, and the ratio improves over time.
And when the senior then retires, you are comparing the value of this trained junior against the risk of a random new hire. You are never replacing a retiree directly with a "junior"; the junior is just an additional, optional investment.
> If you look at the workforce makeup of many large tech companies today there's a reason Indian and East-Asian talent is so overrepresented, and it's not because they lack talent. It's because if you actually want to hire the best of the best you're not going to bring in juniors from the domestic workforce and train them up.
I haven't heard yet a company that actually has produced valuable product because of it. Usually the flow has been, that some Western country has created a successful product, and then later the workforce has been changed. And almost always the quality has decreased. But it does not matter because the product got already decent market share, and it takes years for revenue to drop because of that.
> I haven't heard yet a company that actually has produced valuable product because of it. Usually the flow has been, that some Western country has created a successful product, and then later the workforce has been changed. And almost always the quality has decreased. But it does not matter because the product got already decent market share, and it takes years for revenue to drop because of that.
Are you talking about out-sourcing or hiring the best talent from abroad with work-visas? I guess I don't understand why you would think domestic labour is inherently better? I understand this perspective with outsourcing because it can be difficult to maintain quality when outsourcing to a team in another country, but there's plenty of examples of successful tech companies hiring the best talent from abroad, no? I'd argue this is almost the norm for large tech companies in the US.
And for what it's worth, I'm not saying I agree with this. I'm just saying that if you're a company genuinely interested in hiring the best of the best it makes no sense to limit your Labour pool to the domestic market, and it rarely makes sense to consider graduates unless they are truly exceptional given their relative lack of experience.
In the past companies were simply forced to hire and train domestically because countries like India and China didn't have the education or technological access to compete with the average graduate in the US. Today most of the world is online and education is decent enough that the best talent is far less concentrated to a few geographical areas.
In a pure meritocracy (which the US is close to) in a globalised world, it's quite rare that someone geographically local will be the best person available for any given role.
But you can domestically hire that junior too; I am just arguing against the pure replacement model. You can pick the top talent from juniors as well. Otherwise, if nobody ever hires juniors, at some point you run out of workers entirely, because there's a stretch of every career where you simply can't get a job in your field. Young people won't invest in the field if the first job requires that you're a magician or have 10+ years of work experience that you either can't have or must build in your free time.
Anyway, big tech doesn't work as a generic example, because they have far more resources, so it can't be generalized.
The US is also an exception, because the only language expectation is English. In other countries you may need two languages.
Traditionally, interns exist as a well-vetted and well-shaped supply of labor (which is very difficult to find through the traditional hiring process). The work they complete is secondary. Are companies going to stop needing good employees? Is nobody going to need to work in 40 years when all the current employees are phased out?
If interns existed like that, they haven't in the 21st century. They are free, disposable, short-term labour to be ripped through.
Also, what company do you know of that thinks in 40-year terms? The longest budgeting process I've been part of was 3 years, and the year-3 numbers were understood to be pure wish fulfilment.
I'm not sure when interns stopped being part of the hiring chain, but I remember almost universally circa 2010 or 2011 in the great recession, almost every single intern and co-op student I knew were told bluntly there would be absolutely no hiring of them into full time roles.
Up until the great recession it was pretty normal, almost expected, that a good internship turned into a good job offer.
Every day when I am out in the city, I am amazed by how many jobs we have NOT managed to replace with AI yet.
For example, cashiers. There are still many people spending their lives dragging items over a scanner, reading a number from a screen, holding out their hand for the customer to put money in, and then sorting the coins into boxes.
Self checkout has been a thing for ages. Heck, in Japan the 7-Elevens have cashiers, but you put the money into a machine that counts it and dispenses change for them.
Supermarkets are actually getting rid of self checkouts due to crime. Surprise surprise, having less visible "supervision" in a store results in more shoplifting than having employees who won't stop it anyway.
It’s also just resulting in atrocious customer experience.
I can go to Safeway or the smaller chain half a block away.
The Safeway went all in on self checkouts. The store is barely staffed, shelves are constantly empty, you have to have your receipt checked by security every time, they closed the second entrance permanently, and for some reason the place smells.
The other store has self checkouts, but they also have loads of staff. I usually go through the normal checkout because it’s easier, and since they have adequate staff and self checkout lines it tends to be about the same speed too.
End result is I don’t shop at Safeway if I can avoid it.
Cashiers should not, and will not prevent theft. They're not paid nearly enough to get in danger, and it is not their job.
I'm sure you can find videos of thefts in San Francisco if you need a visual demonstration. No cashier is going to jump in front of someone to stop a theft.
True, but having a cashier standing there waiting to scan your items will prevent most normal people from stealing. Sure, some will brazenly walk right past with a TV on their shoulder, but most people won't.
If there's no cashier and you're doing it yourself, a whole lot more people will "forget" to scan a couple items, and that adds up.
There's usually a security person or two in the store, looking over the self checkouts. I agree that job prevents a lot of people from becoming opportunistic thieves, but I'm making a distinction between cashiers and security. Today the store needs both.
> The fact that a cashier does not prevent all thefts does not mean a cashier does NOTHING for theft.
Yes, for one thing, it ignores that a very large share of retail theft is insider theft, and that cash handling positions are the largest portion of that.
They absolutely do. It’s not the cashiers acting as security; it’s that adequate staffing makes people less likely to steal. It’s not about stopping crimes in progress, just reducing opportunistic theft.
Use your army of lawyers to help file misdemeanor theft charges of course. Then get one out of 100 of the defendants who actually has something to lose to pay big damages to fund it to happen again the next time.
You ban them from coming back after a few warnings. Stores seem really icy about facial recognition right now, though. The optics are pretty bad (pun intended?).
There have been a few stores that won't really stop people from coming back, they just quietly file charges, and then the person finds out next time they get pulled over or something along those lines.
Detecting theft does not mean theft is prevented. You then need the government to prosecute, and impose sufficient punishment to deter theft. This is not cheap, nor a given that it will happen.
So take the broken god awful experience of self checkout and add another layer of “I think you did something wrong so now you have to stand around waiting for an actual person”?
You mean ordering kiosks and self-checkout machines? We have automated it, it's just not everywhere has implemented it.
The one I'm desperately waiting for is serverless restaurants—food halls already do it but I want it everywhere. Just let me sit down, put an order into the kitchen, pick it up myself. I promise I can walk 20 feet and fill my own drink cup.
You seem to like self-checkout processes. I don't. I avoid any place where I have to interact with a screen. Be it a screen installed on-premise or the screen on my phone. It is not a relaxing experience for me.
Japan does this a lot of places, and it makes the experience much easier.
And I think the entire mid and low range restaurants could replace servers with a tablet and people would be happier. I'm not sure how it doesn't make more money for the restaurant too, making it so easy to order more during a meal.
There used to be two humans standing at the cash register; now, because of software, automatic change machines, and cameras, there is only one. One of those humans' jobs got automated.
Call it what you like but replacing the work of humans one for one is difficult and usually not necessary. Reformulating the problem to one that machines can solve is basically the whole game. You don't need a robot front desk worker to greet you, you just need a tablet to do your check in.
This. And I do their work a lot more slowly because it's not my regular job, and I actually already had to do some of the work (getting the items out of my trolley and onto the conveyor). Now I stand there forever fumbling with barcodes, trying to get bags to stay open, switching between getting items out of the trolley and scanning. The old checkout system is so much more efficient when you are buying anything more than a couple of items at a time.
Yeah this is like saying Aldi “automated” cart return. They didn’t, they got every shopper to do the work themselves. Automated cart return would be if you just gave the cart a little “giddyup!” when you were done and it found its way home. Or those cart conveyor belts at Ikea, it’s only part of the process but that part is automated.
[edit] Aldi did automate the management of getting shoppers to do that work, because there’s not a person standing there taking and handing out quarters, but (very simple) machines. Without those machines they might need a person, so that hypothetical role (the existence of which might make the whole scheme uneconomical) is automated. But they didn’t automate cart return, all that work’s still being done by people.
Many "productivity improvements" in the modern era are just externalizing the problem. It's like saying I automated recycling by dumping all my unsorted stuff in my neighbor's bin.
Do you consider all forms of "self-service" to not be automation of a job that previously required an additional human?
Like checking in at the airport via kiosk/app for example. Do you consider that to be "doing the work" of the desk clerk? Or say ordering at a restaurant by scanning a QR code, in both cases I have to look at the menu, decide what I want, and input my order into a system. But with the QR code there's no longer a human necessary.
Serverless restaurants have been common in Australia for decades. You just get a buzzer and then need to go pick up your food when it is ready. There's a single person behind the bar to take orders and pour beer/wine/soda.
Seems like a perfect option for robots (not humanoid ones). Bring me my food. You can still keep people in the kitchen for a while, but servers in many restaurants are not really needed.
Pharmacists are my favourite. They're a human vending machine that is bad at counting and reading. But law protects them. Pretty good regulatory capture.
Please actually understand what pharmacists actually do and _why_ AI is not a good replacement for them yet, unless you want to die of certain drugs interactions.
Hahaha, this drug interaction nonsense is what online people tell each other. It isn't even real. It's like "nice trigger discipline" or "the postal police don't fuck around" and shit like that. Just something that is not true but for some reason is internet urban legend.
Retail pharmacists are human vending machines. You don't need AI. It's a computer prescription written by a far more qualified human which is then provided to a nigh-illiterate half-wit who will then try as hard as possible to misread it. Having then misread it, the patient must then coax them out of their idiocy until they apologize and fulfill what's written.
Meanwhile some Internet guy who gets all his information from the Internet will repeat what he's heard on the Internet. I know this because anyone passingly acquainted with this would have at least made the clarification between compounding pharmacists and retail pharmacists or something.
Pharmacists are a fantastic example. My pharmacy is delivered my prescription by computer. They text me, by computer, when it's ready to pick up. I drive over there … and it isn't ready, and I have to loiter for 15 minutes.
Also, after the prescription ends, they're still filling it. I just never pick it up. The autonomous flow has no ability to handle this situation, so now I get a monthly text that my prescription is ready. The actual support line is literally unmanned, and messages given it are piped to /dev/null.
The existing automation is hot garbage. But C-suite would have me believe our Lord & Savior, AI, will fix it all.
The only way AI could fix this is if it said "replace the pharmacist with a vending machine and hire a $150k junior engineer to make sure the DB is updated afterwards", which, you never know, Claude Opus 4 might suggest. At that point, we'll know AGI has been achieved.
It presents a difference-in-differences (https://en.wikipedia.org/wiki/Difference_in_differences) design that exploits staggered adoption of generative AI to estimate the causal effect on productivity. It compares headcount over time by age group across several occupations, showing significant differentials across age groups.
Page 3: "We test for a class of such confounders by controlling for firm-time effects in an event study regression, absorbing aggregate firm shocks that impact all workers at a firm regardless of AI exposure. For workers aged 22-25, we find a 12 log-point decline in relative employment for the most AI-exposed quintiles compared to the least exposed quintile, a large and statistically significant effect."
Interesting technique, that DID. But it assumes the non-treatment factors would affect both the treatment group and the control group equally, i.e. that the effect would scale linearly. If the treatment group was more exposed to the non-treatment factors, then an increase in those factors could account for a larger difference than the one seen at time 1. I don't know which other industry they used as the control group, but interest rates could have a superlinear effect on tech compared to that industry, so the difference-in-differences would be explained by the non-treatment factor too.
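For readers unfamiliar with the method, here is a toy 2x2 DID calculation with made-up numbers (loosely echoing the paper's quoted "12 log-point" figure; the actual study uses event-study regressions with many controls):

```python
# Toy 2x2 difference-in-differences with hypothetical log-employment values.
treated_pre, treated_post = 4.60, 4.42   # AI-exposed group
control_pre, control_post = 4.10, 4.04   # least-exposed group

treated_diff = treated_post - treated_pre    # change in the treated group
control_diff = control_post - control_pre    # change in the control group

# DID nets out the common trend. It is only valid under "parallel trends":
# absent treatment, both groups would have moved by the same amount,
# which is exactly the assumption being questioned here.
did = treated_diff - control_diff
print(round(did, 2))   # prints -0.12, i.e. a 12 log-point relative decline
```

If the treated group is also more sensitive to, say, interest rates, the -0.12 gap conflates the treatment effect with that differential exposure.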
I appreciate the link to differences in differences, I didn't know what to call this method.
The OP's point could still be valid: it’s still possible that macro factors like inflation, interest rates, or tariffs land harder on the exact group they label ‘AI-exposed.’ That makes the attribution messy.
You really do have to account for why this is mainly happening in industries that are adopting AI, why it's almost exclusively impacting entry-level positions (with senior positions steady or growing), and why controlling for broad economic conditions failed to correct for this. I doubt very much that these three Stanford professors would be blindsided by the concept of rates and tariffs.
My personal theory is that the stock market rewards the behavior of cutting jobs as a signal of the company being on the AI bandwagon. Doesn't matter if the roles were needed or not. Line goes up, therefore it is good.
This is a complete reversal from the past, where a high headcount was an easy signal of a company's growth (i.e. more people means more people building features, which means more growth).
Investors are lazy. They see one line go down, they make the other line go up.
CEOs are lazy. They see line go up when other line goes down. So they make other line go down.
(I am aware that "line go up" is a stupid meme. But I think it's a perfect way to describe what's happening. It is stupid, lazy, absurd, memetic. It's the only thing that matters, stripped of anything incidental. Line must go up.)
If we replace all juniors with AI, in a few years there won't be skilled talent for senior positions.
AI assistance is a lot different from AI running the company and making expensive decisions. While it could progress, bear in mind that some seniors continue to move up the ranks. Will AI eventually be the CEO?
We all dislike how some CEOs behave, but will AI really value life at all? CEOs have to have some place to live, after all.
The AI will at least be cheaper than a CEO, and it might also be more competent and more ethical. The argument against making a Large Language Model the CEO seems to mostly be about protecting the feelings of the existing CEO; maybe the Board should look past these "feelings" and be bold?
Nah, the insistence that humans are somehow uniquely too smart to destroy themselves is obviously laughable, it's troubling that you wrote that down without bursting into laughter because it is so silly.
First, we're discussing what an AI might do, with terms like "no air", e.g. wholesale destruction.
So please do show when the human race has destroyed itself entirely. Oh wait, you're here to read this? I guess that has never, ever, ever happened. Ever. Because if one human exists, there's never been a case where humans were wiped out, with for example no air.
So the "obvious" aspect is not quite so clear. There's no evidence of it, merely conjecture.
Second, at no point did I say smart or not smart.
Instead, I discussed two viewpoints. The viewpoint of an AI, which does not require air, and the viewpoint of a human, which does care about air.
Get the difference?
You may want to dive into global warming, or pollution, or what not. These however, are longer term issues. Destruction today is far different than destruction in 100 years. Or 1000. For example, if global warming predictions are accurate, or even worse, there will still be humans somewhere for hundreds of years, without much change.
Some people might starve, the equator may be too hot, people will die, but there will be places to live still. The planet may go runaway CO2, but that partially abates as more humans die. Fewer humans, less CO2.
Yet either way, it's long term, and no one can definitively say the outcome.
Long term isn't typically part of most decision trees. There's a reason for that. When thinking long-term, you have to think of all the possible permutations, all the possible things that could happen, all the possible things that occur, and those greatly and massively expand with time.
Any thinking being which considers all of their actions right now, in the moment, would become almost immediately paralyzed if it had to consider those actions extremely long-term. Each move, each action, with massive pause and hours/days/weeks of thought. Do you consider how your next step will impact people 4,000 years in the future? A million? A billion?
What about eating toast for breakfast? How much consideration does the average entity put, into consuming food for energy, and yet looking forward on their actions for a billion years?
Beyond that, there is no accurate data for future outcomes, to make a proper determination of what may happen 500, a thousand, a million, a billion years in the future. So all of these computational chains are realistically for naught. After all, the sun may expand sooner than predicted. Certainly the moon will likely move further from Earth, and the Earth's spin will slow down. How will now, affect everything in that future?
You may say, why don't we consider our actions, you know, just in the next hundred years? But now, suddenly, are not you considering your actions in too short of a time frame? Should you not consider what the human race and what the Earth itself will be like in a billion years? If you're not considering those things, then are you not depriving entities and beings and living organisms a healthy planet one billion years in the future?
Where does it stop? Where does it begin? Where and how far in the future should you consider your actions, on a day-to-day basis?
More importantly, how much of what you do in a day should you consider with regards to the future? Which acts of yours are detrimental to the future? Do you know? Are you sure? Do you have any idea?
Obviously, some of the thoughts above are somewhat long term. Yet not thinking long term is why we got into this issue with global warming! And truthfully, if the complaint is that we're destroying the future planet for species that live here besides ourselves, then we really should be considering 10k, 50k, a million years in the future.
Anything else is only selfishly considering our own personal descendants for a couple of generations.
But let's take a step back. I'm not trying to say that we or anybody else can make these kinds of in-depth long scope decisions, nor am I saying that we should not care about global warming. Obviously we should. We actually know it's a problem now. We knew in the 70s.
Instead, what I'm saying is that individuals are individuals and excessively considering long-term ramifications of all of your actions can be highly detrimental to the capacity to make any decision whatsoever. Imagine an AI, which every single time it made a decision, every single time it even decided to compute something, every time it decided to take an action in the real world, it had to consider the ramifications one billion years hence.
Imagine the amount of processing power that would require. Now imagine the amount of energy or "food" needed. This is why beings cannot sit around for all eternity while a wolf leaps upon them, while they ponder all the possible future ramifications of even the tiniest or even medium-scale acts. And this is why the average human being does not consider the ramifications. It's an evolutionary requirement.
And as I've suggested above, it is also going to be a requirement for AI. Certainly it can consider some of its acts, much like a human being can consider some of its acts, but that's not how things work on a day-to-day basis.
Human beings solve this by observation after the fact of many of our acts, and secondarily, by reviewing what's happening in the environment and the world around us as we make change, and then determining if we should scale back or roll back what we're doing.
The same will be true of AI. If the same is not true of AI, AI will cause global warming merely by trying to stop it.
The sheer computational power required for an AI, and all the AIs making decisions, to optimally choose what's best for 1M years in the future? It would eat enormous amounts of energy, thus making global warming worse whilst trying to make it better!
Whether or not we should be putting more energy into considering these things doesn't mean that it's possible for the average thinking entity to do so.
Software development is one of the listed industries. Well before AI, we had seen that few companies wanted entry-level devs due to the training investment required.
Reductions in call centers have been going on for a while as more people use automated solutions (not necessarily AI), and many of the growing companies make it hard to reach a real person anyway (Amazon, Facebook, etc). I feel like AI is throwing fuel on the existing fire, but isn't as much of a driver as the headlines suggest.
American workers are truly under attack from all sides. H1B. Outsourcing. What's left? Blue-collar manufacturing is mostly gone. White-collar work is well on its way out. Why is our own government (by the people, for the people) actively assisting in destroying Americans' ability to get jobs (H1B)? Especially in these conditions. I'm no racist or idiot, but it's unacceptable. I didn't expect the government to actively be conspiring with big corps to make my economic position weaker. Unbelievable breach of trust. We need to demand change from our government.
It’s an unpopular opinion in the current environment but it’s the program that allows international talent to connect with local capital that creates all the jobs in tech.
The biggest problem right now is that there is no distinction between companies replacing American labor with cheap labor and entrepreneurial talent that creates jobs. Everyone is on the same visa.
And when we're all out of work except for the doctors and nurses, electricians and plumbers, there will be nobody to contribute to consumer spending. And we will suffer, at the hand of the government that assisted in this scam.
Having to work with ESL contractors from firms like Cognizant or HCL is true pain. Normally it would be like 3-4 US employees working on something, and then it's like 20-30 ESL outsourced people working on it. The quality is so poor, though, that it's not worth it.
My current org nuked their contract with HCL after 2 years because of how shitty they are, and now everything is back onshore. Millions wasted, lol. Corporations are so silly sometimes.
They also need 5 people to do the work of one US worker. And then another US worker to guide them and do some QA on the output they produce. I don't see how it saves money. There are other countries with lower wages than the US where this doesn't happen, such as Poland or Australia.
> You really do have to account for why this is mainly happening in industries that are adopting AI
Correlation is not causation. The original research paper does not prove a connection.
> I doubt very much that these three Stanford professors would be blindsided by the concept of rates and tariffs.
They are nonetheless subject to publish or perish pressure and have strong incentives to draw publishable attention-grabbing results even where the data is inconclusive.
Tariffs are just a massive government revenue generating consumption tax on particular industries. We would expect unemployment among the young trying to enter those industries to be hit hardest.
Is there some central authority that’s telling people to blame this all on AI, or how is everyone reaching this conclusion and ignoring the other obvious factors you stated?
It is in their interest to find explanations for reductions in labor that don't assign the blame to corporate greed.
For example, a call center might use the excuse of AI to fire a bunch of people. They would have liked to just arbitrarily fire people a few years ago, but if they did that people would notice the reduction in quality and perhaps realize it was done out of self-serving greed (executives get bigger bonuses / look better, etc). The AI excuse means that their service might be worse, perhaps inexcusably so, but no one is going to scrutinize it that closely because there is a palatable justification for why it was done.
This is certainly the type of effect I feel like underlies every story of AI firing I've heard about.
How is firing a bunch of people because you made a machine that you believe can do their jobs not textbook corporate greed? It seems like the worst impulses of Taylorism made manifest?
This is worse: this is just pretending like the machine does their jobs because it benefits them.
The big (biggest?) problem of modernity is that quality is decorrelated from profit. There's a lot more money in having the optics of doing a good job than in actually doing it; the economy is so abstracted and distributed that competition's mechanism for punishing bad behavior, shitty customer service, low standards, crappy work, and fraud is very weak. There is too much information asymmetry, and the timescale of information propagation is too long to have much of an effect. As long as no one notices what you're fucking up very quickly, you can get away with it for a long time.
Seems even worse to me. At least in the 'competition' paradigm there's a mechanism for things getting better for consumers. No such thing here.
It doesn't need to be a conspiracy. Incentives align sometimes. A lot of people are invested in AI replacing jobs, and it would be nice for them if the buzz was that it is actually the case.
Blaming AI is better because it helps corporations convince the working class that their jobs are in long-term danger, so they collectively settle for less favorable work terms and compensation, unlike if they were convinced that things will gradually improve with the upcoming monetary easing cycle.
End of ZIRP and the Sec. 179 change for engineering salaries probably explains more of this (plus the increase in outsourcing). I’m sure some decision makers also threw AI into the mix but the financials of hiring software engineers in the US was already challenging before AI “took everyone’s job”.
Since this article is about AI, and since this comment seems rather low effort compared to the Stanford study, I went ahead and used low effort to analyze the report and compare it to this comment. Here's my low-effort AI response:
> Prompt: Attached is a paper. Below is an argument made against it. Is there anything in the paper that addresses the argument?: High interest rates + tariff terror -> less investment -> less jobs
> High rates/firm shocks: They add firm–time fixed effects that absorb broad firm shocks (like interest-rate changes), and the within-firm drop for 22–25-year-olds in AI-exposed roles remains.
> “Less investment” story: They note the 2022 §174 R&D amortization change and show the pattern persists even after excluding computer occupations and information-sector firms.
> Other non-AI explanations: The decline shows up in both teleworkable and non-teleworkable jobs and isn’t explained by pandemic-era education issues.
> Tariffs: Tariffs aren’t analyzed directly; broad tariff impacts would be soaked up by the firm–time controls, but a tariff-specific, task-level channel isn’t separately tested.
Fitting, since it came up with unrelated information (the R&D tax thing) and the 3rd bullet point. Also started talking about tariffs as if it had addressed them, then notes that it doesn't address them.
I generally agree that AI is the scapegoat, but not for those same reasons. Despite the lack of job growth and the tariffs, recent data shows the economy grew about 3%. Even if it's not AI as the primary driver, efficiency seems to have increased.
How does that make sense? Wouldn’t high interest rates and tariffs cause more expensive engineers to have disproportionate opportunity? I remember during 2008 it was much easier for my employer to justify junior engineers than senior ones.
In my division we have 1000 IT workers. I'd guess the number born in the USA is less than 10%, probably less than 5%. Americans just don't work hard enough; I think now that they are a minority they probably don't want to stay either.
> Some examples of these highly exposed jobs include customer service representatives, accountants and software developers.
We seem to be in this illogical (delusional?) era where we are being told that AI is 'replacing' people in certain sectors or types of work (under the guise that AI is better or will soon be better than humans in these roles) yet those same areas seem to be getting worse?
- Customer service seems worse than ever as humans are replaced with "AI" that doesn't actually help customers more than 'website chatbots' did 20 years ago.
- Accounting was a field that was desperate for qualified humans before AI. My attempts to use AI for pretty much anything accounting related has had abysmal results.
- The general consensus around software development seems to be that while AI is lowering the barrier of entry to "producing code", the rate of production of tech debt and code that no one "owns" (understands) has exploded with yet-to-be-seen consequences.
> "The general consensus around software development seems to be that while AI is lowering the barrier of entry to "producing code", the rate of production of tech debt and code that no one "owns" (understands) has exploded with yet-to-be-seen consequences."
^ This. (Though I'm not sure about it being "general consensus".) Vibe code is the payday loan (or high-interest credit card) of tech debt. Demo-quality code has a way of making it into production. Now "everyone" can produce demos and PoCs. Companies that leverage AI as a powerful tool in the hands of experienced engineers may be able to iterate faster and increase quality, but I expect a sad majority to learn the hard way that there's no free lunch, and shipping something you don't understand is a recipe for disaster.
I'm not sure it's that our job is the most automatable, but that the interface is the easiest to adapt to our workflow.
I have a feeling language models will be good at virtually every "sit at a desk" job in a virtually identical capacity, it's just the act of plugging an AI into these roles is non-obvious.
Just as every business was eventually impacted by the Internet, the early applications were just an artifact of what was an easy business decision, e.g. it was easier to start a dotcom than to migrate a traditional corporate process.
What we will see here with AI is not the immediate replacement of jobs, but the disruption of markets with offerings that human labor simply can't out-compete.
> I'm not sure it's that our job is the most automatable
I don't know. It seems pretty friendly to automation to me.
When was the last time you wrote assembly? When was the last time you had to map memory? Or think about blitting memory to a screen buffer to draw a square on a screen? Schedule processes and threads?
These are things that I routinely did as a junior engineer writing software a long time ago. Most people at that time did. For the most part, the computer does them all now. People still do them, but only when it really counts and applications are niche.
Think about how large code bases are now and how complicated software systems are. How many layers they have. Complexity on this scale was unthinkable not so long ago.
It's all possible because the computer manages much of the complexity through various forms of automation.
Expect more automation. Maybe LLMs are the vehicle that delivers it, maybe not. But more automation in software is the rule, not the exception.
RAD programming held the same promise, as did UML, flow/low/no code platforms.
Inevitably, people remember that the hard part of programming isn't so much the code as it is putting requirements into maintainable code that can respond to future requirements.
LLMs basically only automate the easiest part of the job today. Time will tell if they get better, but my money is on me fixing people's broken LLM generated businesses rather than being replaced by one.
Indeed. Capacity to do the hard parts of software engineering well may well be our best indicator of AGI.
I don't think LLMs alone are going to get there. They might be a key component in a more powerful system, but they might also be a very impressive dead end.
Sometimes I think we’re like cats that stumbled upon the ability to make mirrors. Many cats react like there’s another cat in the mirror, and I wonder if AGI is just us believing we can make more cats if we make the perfect mirror.
This has been my argument as well. We've been climbing the abstraction ladder for years. Assembly -> C -> OOP ->... this just seems like another layer of abstraction. "Programmers" are going to become "architects".
The labor cost of implementing a given feature is going to drop dramatically. Jevons paradox will hopefully still mean that the labor pool will just be used to create '10x' the output (or whatever the number actually is).
If the cost of a line of code / feature / app becomes basically '0', will we still hit a limit in terms of how much software can be consumed? Or do consumers have an infinite hunger for new software? It feels like the answer has to be 'it's finite'. We have a limited attention span of (say) 8hrs/person * 8 billion.
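That attention ceiling works out to (using the comment's own assumed numbers, not real data):

```python
# Back-of-envelope ceiling on daily attention available to consume software,
# under the assumed figures above: 8 hrs/person/day, 8 billion people.
people = 8_000_000_000
hours_per_person = 8
total_attention_hours = people * hours_per_person
print(total_attention_hours)   # prints 64000000000 person-hours per day
```

A large number, but a finite one, which is the point: demand for software cannot scale past the hours available to use it.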
I do think software engineering is more exposed than many other jobs for multiple reasons:
There is an unimaginable amount of freely accessible training data out there. There aren't for example many transcribed therapy sessions out there.
The only thing that matters about software is that it's cheap and it sort of works. Low-quality software is already common. Bugs aren't usually catastrophic in the way structural failures would be.
Software engineers are expensive compared to many other white-collar workers.
Software engineering is completely unregulated and there is no union or lobby for software engineers. The second an LLM becomes good enough to replace you, you're gone.
Many other "sit at desk" jobs have at least some tasks that can't be done on a computer.
Software engineering feels like an extremely uncertain career right now.
I'm not so certain that non-desk jobs will be safe either. What makes the current LLMs great at programming is the vast amount of training data. There might be some other breakthrough for typical jobs - some combination of reinforcement learning, training on videos of people doing things, LLMs and old-fashioned AI.
> These models are extremely unreliable when unsupervised.
> It doesn't feel like that will change fundamentally with just incrementally better training.
I could list several things that I thought wouldn't get better with more training and then got better with more training. I don't have any hope left that LLMs will hit a wall soon.
Also, LLMs don't need to be better programmers than you are, they only need to be good enough.
No matter how much better they get, I don't see any actual sign of intelligence, do you?
There is a lot of handwaving around the definition of intelligence in this context, of course.
My definition would be actual on-the-job learning, plus reliability I don't need to second-guess every time.
I might be wrong, but those 2 requirements seem incompatible with the current approach/hardware limitations.
Intelligence doesn't matter. To quote "Superintelligence: Paths, Dangers, Strategies":
> There is an important sense, however, in which chess-playing AI turned out to be a lesser triumph than many imagined it would be. It was once supposed, perhaps not unreasonably, that in order for a computer to play chess at grandmaster level, it would have to be endowed with a high degree of general intelligence.
The same thing might happen with LLMs and software engineering: LLMs will not be considered "intelligent" and software engineering will no longer be thought of as something requiring "actual intelligence".
Yes, current models can't replace software engineers. But they are getting better at it with every release. And they don't need to be as good as actual software engineers to replace them.
Chess was once thought to require general intelligence. Then computing power became cheap enough that using raw compute made computers better than humans. Computers didn't play chess in a very human-like way and there were a few years where you could still beat a computer by playing to its weaknesses. Now you'll never beat a computer at chess ever again.
Similarly, many software engineers think that writing software requires general intelligence. Then computing power became cheap enough that training LLMs became possible. Sure, LLMs don't think in a very human-like way: There are some tasks that are trivial for humans and where LLMs struggle but LLMs also outcompete your average software engineer in many other tasks. It's still possible to win against an LLM in an intelligence-off by playing to its weaknesses.
It doesn't matter that computers don't have general intelligence when they use raw compute to crush you in chess. And it won't matter that computers don't have general intelligence when they use raw compute to crush you at programming.
The burden of proving that software development requires general intelligence is on you. I think the stuff most software engineers do daily doesn't require it. And I think LLMs will get continuously better at it.
I certainly don't feel comfortable betting my professional future on software development for the coming decades.
"It is difficult to get a man to understand something when his salary depends upon his not understanding it" ~ Upton Sinclair
Your stance was the widely held one, not just on Hacker News but also among the leading proponents of AI, when ChatGPT first launched. A lot of people thought hallucination was something that simply couldn't be overcome, and that LLMs were nothing but glorified stochastic parrots.
Well, things have changed quite dramatically lately. AI could plateau. But the pace at which it is improving is pretty scary.
Regardless of real "intelligence" or not, the current reality is that AI can already do quite a lot of traditional software work. This wasn't even remotely true six months ago.
I think I have a pretty good idea of what AI can do for software engineering, because I use it for that nearly every day and I experiment with different models and IDEs.
The way that has worked for me is to make prompts very specific, to the point where the prompt itself would not be comprehensible to someone who's not in the field.
If you sat a rando with no CS background in front of Cursor, Windsurf or Claude code, what do you suppose would happen?
It seems really doubtful to me that overcoming that gap is "just more training", because it would require a qualitatively different sort of product.
And even if we came to a point where no technical knowledge of how software actually works was required, you would still need to be precise about the business logic in natural language. Now you're writing computer code in natural language that will read like legalese. At that point you've just invented a new programming language.
Now maybe you're thinking, I'll just prompt it with all my email, all my docs, everything I have for context and just ask it to please make my boss happy.
But the level of integrative intelligence, combined with specialized world knowledge required for that task is really very far away from what current models can do.
The most powerful way that I've found to conceptualize what LLMs do is that they execute routines from huge learnt banks of programs that re-combine stored textual information along common patterns.
They're cut and paste engines where the recombination rules are potentially quite complex programs learnt from data.
This view fits well with the strengths and weaknesses of LLMs - they are good at combining two well understood solutions into something new, even if vaguely described.
But they are quite bad at abstracting textual information into a more fundamental model of program and world state and reasoning at that level.
I strongly suspect this is intrinsic to their training, because doing this is simply not required to complete the vast majority of text that could realistically have ended up in training databases.
Executing a sophisticated cut&paste scheme is in some ways just too effective; the technical challenge is how do you pose a training problem to force a model to learn beyond that.
I just completed a prototype of a non-trivial product that was vibe-coded just to test the ability and limits of LLMs.
My experience aligns largely with your excellent comment.
> But the level of integrative intelligence, combined with specialized world knowledge required for that task is really very far away from what current models can do.
Where LLMs excel is in putting out large templates of what is needed, but they are frayed at the edges. Imagine programming as a jigsaw puzzle where the pieces have to fit together. LLMs can align the broader pieces, but fail to fit them precisely.
> But they are quite bad at abstracting textual information into a more fundamental model of program and world state and reasoning at that level.
The more fundamental model of a program is a "theory" or "mental model", which unfortunately is not codified in the training data. LLMs can put together broad outlines based on their training data, but lack precision when modeling at a more abstract level. For example, how concurrency could impact memory access is not precisely understood by the LLM, since it lacks a theory of it.
> the technical challenge is how do you pose a training problem to force a model to learn beyond that.
This is the main challenge - how can an LLM learn more abstract patterns. For example, in the towers of hanoi problem, can the LLM learn the recursion and what recursion means. This requires LLM to learn abstraction precisely. I suspect LLMs learn abstraction "fuzzily" but what is required is to learn abstraction "precisely". The precision or determinism is largely where there is still a huge gap.
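For concreteness, the Hanoi recursion mentioned above is only a few lines of code; the interesting question is whether a model has internalized the invariant behind it (move n-1 aside, move the largest disk, move n-1 back) rather than memorized move sequences. A minimal sketch:

```python
def hanoi(n, src, aux, dst, moves):
    """Move n disks from src to dst using aux; appends (from, to) pairs to moves."""
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)  # park the top n-1 disks on the spare peg
    moves.append((src, dst))            # move the largest disk directly
    hanoi(n - 1, aux, src, dst, moves)  # stack the n-1 disks back on top of it

moves = []
hanoi(3, "A", "B", "C", moves)
print(len(moves))  # the recurrence gives exactly 2**n - 1 moves, so 7
```

Learning this "precisely" means learning the abstraction that makes the 2^n - 1 bound obvious, not the surface text of any particular solution.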
LLM-boosters would point to the bitter lesson and say it is a matter of time before this happens, but I am a skeptic. I think the process of symbolism or abstraction is not yet understood enough to be formalized.
AI training costs have been increasing around 3x annually over each of the last 8 years to achieve their performance improvements. Last year, spending across all labs was $150bn. Keeping the 3x trend means that, to keep pace with current advances, costs should rise to $450bn in 2025, $1.35tn in 2026, $4tn in 2027, $12tn in 2028, $36tn in 2029, and $109tn in 2030. For reference, the GDP of the world is around $125tn.
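The compounding is easy to sanity-check. A quick sketch, taking the comment's figures ($150bn baseline last year, 3x annual growth) purely as given assumptions:

```python
# Project lab spend under an assumed 3x annual growth from a $150bn 2024 baseline.
base_spend_bn = 150        # claimed 2024 spend across labs, in $bn (comment's figure)
world_gdp_bn = 125_000     # rough world GDP, ~$125tn

spend = {year: base_spend_bn * 3 ** (year - 2024) for year in range(2025, 2031)}
for year, cost in spend.items():
    print(year, f"${cost / 1000:.2f}tn")
# By 2030 the projection ($109tn) is within sight of total world GDP.
```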
I think the labs will be crushed by the exponent on their costs faster than white-collar work will be crushed by the 5% improvement exponent.
Be careful you're not confusing the costs of training an LLM and the spending from each firm. Much of that spending is on expanding access to older LLMs, building new infrastructure, and other costs.
That's a fair criticism of my method; however, model training costs are a significant cost centre for the labs. Modelling from there instead of from total expenditure only adds 2-3 years before model training costs exceed the entire global economy.
"Model collapse" is a popular idea among people who know nothing about AI, but it doesn't seem to be happening in the real world. Dataset quality estimation shows no quality drop over time, despite estimates of "AI contamination" ticking up over time. Some data quality estimates even show weak inverse effects (dataset quality is rising a little over time?), which is a mindfuck.
The performance of frontier AI systems also keeps improving, which is entirely expected. So does price-performance. One of the most "automation-relevant" performance metrics is "ability to complete long tasks", and that shows vaguely exponential growth.
Given the number of academic papers about it, model collapse is a popular idea among the people who know a lot about AI as well.
Model collapse is something demonstrated when models are recursively trained largely or entirely on their own output. Given that most training data is still generated or edited by humans, or is curated synthetic data, I'm not entirely certain why one would expect to see evidence of model collapse right now; but dismissing it as something that can't happen in the real world seems a bit premature.
We've found the conditions under which model collapse happens more slowly or fails to happen altogether. Basically all of them are met in real-world datasets. I do not expect that to change.
In 2025 you can add quality to JPEGs. Your phone does it and you don't even notice. So the rhetorical metaphor holds up, in that AI is rapidly changing the fundamentals of how technology functions beyond our capacity to anticipate or keep up with it.
This is an especially bad example: a nice shiny grille is going to be strongly reflecting stuff that isn't already part of the image (and likely isn't covered well by adjacent pixels, due to the angle doubling of reflection).
Sure, you can view an LLM as a lossy compression of its dataset. But people who make the comparison are either trying to imply a fundamental deficiency, a performance ceiling, or trying to link it to information theory. And frankly, I don't see a lot of those "hardcore information theory in application to modern ML" discussions around.
The "fundamental deficiency/performance ceiling" argument I don't buy at all.
We already know that LLMs use high-level abstractions to process data, very much unlike traditional compression algorithms. And we already know how to use tricks like RL to teach a model things its dataset doesn't, which is where an awful lot of the recent performance improvement is coming from.
And if you get that "sometimes" down to "rarely" and then "very rarely" you can replace a lot of expensive and inflexible humans with cheap and infinitely flexible computers.
That's pretty much what we're experiencing currently. Two years ago code generation by LLMs was usually horrible. Now it's generally pretty good.
I as a human being can of course not be replaced by a next token predictor.
But I as a chess player can easily be replaced by a chess engine and I as a programmer might soon be replaceable by a next token predictor.
The only reason programmers think they can't be replaced by a next token predictor is that programmers don't work that way. But chess players don't work like a chess engine either.
I think humans who think they can't be replaced by a next token predictor think too highly of themselves.
LLMs show it plain and clear: there's no magic in human intelligence. Abstract thinking is nothing but fancy computation. It can be implemented in math and executed on a GPU.
"What's actually happening" is all your life you've been told that human intelligence is magical and special and unique. And now it turns out that it isn't. Cue the coping.
We've already found that LLMs implement the very same type of abstract thinking as humans do. Even with mechanistic interpretability being in the gutters, you can probe LLMs and find some of the concepts they think in.
But, of course, denying that is much less uncomfortable than the alternative. Another one falls victim to AI effect.
> "What's actually happening" is all your life you've been told that human intelligence is magical and special and unique. And now it turns out that it isn't. Cue the coping.
People have been arguing this is not the case for at least hundreds of years.
Hallucination has significantly decreased in the last two years.
I'm not saying that LLMs will positively replace all programmers next year, I'm saying that there is a lot of uncertainty and that I don't want that uncertainty in my career.
If it gets to the point where I can no longer find a tech job I am just going to buy a trailer, live somewhere cheap, and just make money doing odd jobs while spending most of my time programming what I want. I don't want to participate in a society where all I have for job options is a McJob or some Amazon warehouse.
This is the best thing engineers can do. I moved to building as a solo founder. I am building an LLM enabled coding product and I teach. I'm hosting a session on Claude Code today, 134 guests signed up. I'm gradually planning to make money teaching for a few months while building the product.
That's actually a good idea. Now I just need to come up with an idea for an SAAS app. I was thinking originally or making one of the games on my project backlog and seeing how much I could make off it. Or creating one of the many idea I have for websites and webapps and see where they go.
Would be more difficult depending on where you live. My plan was to talk to others online and see if I could find someone willing to live such a simple life with me, maybe starting with an LDR first (I'm sort of doing that already)
Unrelated to the discussion, but I love these kinds of backup plans. I've found that most guys I talk to have one. Just a few days ago a guy was telling me that, if his beloved wife ever divorces him, then he'd move to a tropical island and become a coconut seller.
(My personal plan: find a small town in the Sonoran Desert that has a good library, dig a hole under a nice big Saguaro cactus, then live out my days reading library books in my cool and shady cave.)
The future seems very uncertain right now and we are living in weird times. Its always a good idea to have a backup plan in case your career path doesn't work out!
Mine is forest firefighter. Surely with climate change there will be no shortage of work, and while dangerous and bad for you, it seems kind of fun.
Due to the Compact of Free Association, US citizens can permanently settle, live, and work in Micronesia with no visa or even any real checks, other than a quick look at the passport.
I'd argue that, out of white-collar jobs, it is actually one of the least automatable still. I.e. the rest of the jobs are likely going to get disrupted much faster because they are easier to automate (and have been the target of automation by the software industry for the past century). Whatever numbers we're seeing now may be too early to reflect this accurately.
Also there are different metrics that are relevant like dollar count vs pure headcount. Cost cutting targets dollars. E.g. entry level developers are still expensive compared to other jobs.
Most "Software Engineering" is just applying the same code in slightly different contexts. If we were all smarter it would have been automated earlier through the use of some higher-level language.
It's not really an intelligence thing. You could have the most intelligent agent, but if the structural incentives for that agent are, for example, "build and promote your own library for X for optimal career growth", you would still have massive fragmentation. And under the current rent-seeking capitalist framework, this is a structural issue at every level. Firefox and Chrome? Multiple competing OSes? How many JS libraries? Now sure, maybe if everyone were perfectly intelligent _and_ perfectly trusting, then you could escape this.
You don’t need to perfect AI to the point of becoming credentialed professionals to gut job markets— it’s not just developers, or creative markets. Nobody’s worried that the world won’t have, say, lawyers anymore — they’re worried that AI will let 20% of the legal workforce do 100% of the requisite work, making the skill essentially worthless for the next few decades because we’d have way too many lawyers. Since the work AI does is largely entry-level work, that means almost nobody will be able to get a foothold in the business. Wash, rinse, repeat to varying levels across many white collar professions and you’ve got some real bad times brewing for people trying to enter the white collar workforce from now on— all without there being a single AI lawyer in the world.
We might end up needing 20% or so fewer doctors, because all that bureaucracy can be automated. A simple automated form pre-filler can save a lot of time. It's likely that hospitals will try saving there.
You know the difference between doctors and programmers? One has a regulated profession and a lobby; the other has neither. Actually, all the other has is the richest store of open training data for AI companies among all professions (and it's not medicine).
It’s just engineers getting high on their own supply. All the hype men for the software are software engineers (or adjacent.)
Frankly, any time I see research indicating software engineering is at a high risk of being automated, I outright dismiss it as pseudo science. It ain’t happening with current tech.
I'm not necessarily opposed to unionization in general but it's never going to save many US software industry jobs. If a unionization drive succeeds at some big tech company then the workers might do well for a few years. But inevitably a non-union startup competitor with a lower cost structure and more flexible work rules will come along and eat their lunch. Then all the union workers will get laid off anyway.
Unionization kind of worked for mines and factories because the company was tied to a physical plant that couldn't easily be moved. But software can move around the world in milliseconds.
Unions _can_ protect against this, but they have to do it via lobbying the government for protectionism, tariffs, restricting non-union competition etc.
Indeed, just look at the CGI VFX industry of Hollywood. The US invented it and was the leader for a long time, but now it has been commodified, standardized, and run into the ground, because union or not, you can't stop US studios from offshoring the digital asset work to another country where labor is 80% cheaper than California and quality is 80% there. So the US is left with making the software tools that VFX artists use, as the cutting-edge graphics & GPU know-how is all clustered there.
Similarly, a lot of non-cutting edge SW jobs will also leave the US as tooling becomes more standardized, and other nations upskill themselves to deliver similar value at less cost in exchange for USD.
This was when programmers were making software to time Amazon worker's bathroom breaks so believing "this could never happen to me" was probably an important psychological crutch.
This is, if true, a fundamental shift in the value of labor. There really isn’t a non-Luddite way to save these jobs without destroying American tech’s productivity.
That said, I’m still sceptical it isn’t simply a reflection of an overproduction of engineers and a broader economic slowdown.
Not really. If it’s overproduction, the solution is tighter standards at universities (and students exercising more discretion around which programmes they enroll in). If it’s overproduction and/or outsourcing, the solutions include labour organisation and, under this administration, immigration curbs and possibly services tariffs.
Either way, if it’s not AI the trend isn’t secular—it should eventually revert. This isn’t a story of junior coding roles being fucked, but one of an unlucky (and possibly poorly planning and misinformed) cohort.
It can be oversupply/outsourcing and also secular: You can have basically chronic oversupply due to a declining/maturing industry. Chronic oversupply because the number of engineers needed goes down every year and the pipeline isn't calibrated for that (academia has been dealing with this for a very long time now, look up the postdocalypse). Outsourcing, because as projects mature and new stuff doesn't come along to replace, running maintenance offshore gets easier.
Software isn't eating the world. Software ate the world. New use cases have basically not worked out (metaverse!) or are actively harmful.
So your argument is that we're so special that we deserve to hold back human progress to have a privileged life? If it's not that, what would you want a union to do in this situation?
Do you really believe that all technological progress has bettered humanity? Where’s the four day work week we were promised? I thought automation was supposed to free us from labor.
I like human progress. I don’t like the apparent end goal that the entire wealth of the planet belongs to a few thousand people while the rest of us live in the mud.
Unions can only prevent automation up to a point. Really the only thing that could have reasonably prevented this would have been for programmers to not produce as much freely accessible training data (formerly known as "open source software").
Exactly. I am always so impressed by the fact that developers never see that open source is essentially them giving away free labor to giant corporations. Developers basically programmed their way out of a job, for free. It's the only profession that is proud to have its best work done on unpaid time and used for free by big corporations.
Unions won’t solve this for you. If a company just decides they have enough automation to reduce union workforce it can happen the next time contracts get negotiated.
Either way, there are layoff provisions with union agreements.
Isn't that just delaying the inevitable? Yangshan Deep-Water Port in Shanghai is one of the most automated ports. Considering there are more people in China than in the US, China still automated their port.
I'm not making a value judgment on the specific case of dock workers, I'm rather saying that unions can and do prevent automation. If Software Devs had unionized earlier, a lot of positions would probably still be around.
In Hollywood, union bargaining bought some time at least. Unions did mandate limits on the use of AI for a lot of the creation process.
AI is still used in Hollywood but nobody is proud of it. No movie director goes around quoting percentages of how many scenes were augmented by AI or how many lines in the script were written by ChatGPT.
Unions work in physical domains that need labor “here and now”, think plumbers, electricians, and the like. You can’t send that labor overseas, and the union can control attempts at subversion via labor force importation. But even that has limitations, e.g. union factory workers simply having their factory shipped overseas.
Software development at its core can be done anywhere, anytime. Unionization would crank the offshoring that already happens into overdrive.
We're not "too important." All a union would do is create extra problems for us.
There are two possibilities:
a) This is a large scale administrative coordination problem
b) We don't need as many software engineers.
Under (a) unionizing just adds more administrators and exacerbates the problem, under (b) unions are ineffective and just shaft new grads or if they manage to be effective, kills your employer (and then no one has a job.)
You can't just administrate away reality. The reason SWEs don't have unions is that most of us (unlike blue-collar labor) are intelligent enough to understand this. Additionally, there was something to be said about factory work, where the workers really were fungible and the work was capital-intensive; software development is almost the polar opposite, where there's no capital and the value is the theory the programmers have in their heads, making them a lot less fungible.
Finally we do have legal tools like the GPL which do actually give us a lot of negotiating power. If you work on GPL software you can actually just tell your employer "behave or we'll take our ball and leave" if they do something stupid.
Unions would just delay the inevitable while causing other downsides like compressing salary bands, make it difficult to fire non-performers, union fees, increasing chance of corruption etc.
For a recent example:
> Volkswagen has an agreement with German unions, IG Metall, to implement over 35,000 job cuts in Germany by 2030 in a "socially responsible" way, following marathon talks in December 2024 that avoided immediate plant closures and compulsory layoffs, according to CNBC. The deal was a "Christmas miracle" after 70 hours of negotiations, aiming to save the company billions by reducing capacity and foregoing future wage increases, according to MSN and www.volkswagen-group.com.
I mean, I still don't want to unionize with the guys who find `git` too complicated to use (which is apparently the majority of HN). Also, you guys all hate immigrants which is not my vibe, sorry.
I really hope nobody had themselves convinced that software engineering couldn't be automated. Not with the code enterprise has been writing for decades now (lots and lots and lots of rules for gluing state to state, which are extremely structured but always just shy of being so structured that they were amenable to traditional finite-rule-based automation).
The goal of the industry has always been self-replacement. If you can't automate at least part of what you're working on you can't grow.
... unfortunately, as with many things, this meshes badly with capitalism when the question of "how do you justify your existence to society" comes up. Hypothetically, automating software engineering could lead to the largest open-source explosion in the history of the practice by freeing up software engineers to do something else instead of toil in the database mines... But in practice, we'll probably have to get barista jobs to make ends meet instead.
The experiences people are having when working with big, complex codebases don’t line up with your gloomy outlook. LLMs just fall apart beyond a certain project size, and then the tech debt must be paid.
Is it gloomy? I personally liken it to inventing the washing machine instead of doing laundry by hand, beating it against a washboard, for another hundred years.
CEOs citing savings from AI should be able to show higher profits soon. The fact that they’re not means those tall tales are coming home to roost soon.
Are people really replacing customer service agents with LLMs? It doesn't seem practical.
Customer service handles all the things that customers aren't trusted to manage on their own with a self-service portal - that's the whole point of having a trusted human involved at all. Giving those tasks to LLMs won't work because the customer can just prompt inject their way to whatever toolcalls correspond to their desired outcome.
I had a home internet outage recently and needed the ISP to send a tech out.
Five years ago, you'd wait on-hold to talk to a human being with a script. "Is the light on your modem red?" "Try turning your modem on and off again." "OK let's schedule a technician for you."
Today I get a pre-LLM chatbot, with the same script, no waiting on-hold, available 24/7, with pictures and guides. It's somewhat faster because no human has to read the script over the phone, the words just appear instantaneously. Getting through the script tree to the outcome I want is an easy and obvious path.
The vast, vast majority of Tier 1 support is not something where the customer service agent has any agency. They have a script. Taking humans out of the loop here is an improvement.
I mentioned it before as well — I know first hand companies that have replaced most low level CS with elevenlabs + GPT ticket systems, while tracking the main metrics. Overall, it’s been better and cheaper than operating all the human agents.
I think they might be replacing first-tier support, the people who currently answer really stupid questions, handle really common cases, etc. When you need real support and there's no script you go to second-tier support...
As a person who aspires to actually read documentation, try common troubleshooting, google it, etc. before calling support I'd really love to go directly to second-tier, but apparently bulk of support calls are low-effort users, and now they'd get the pleasure of LLM, instead of a person, telling them to reset their router, make sure the thingie on a spray bottle is in "on" position, or call the airline.
For similar forms of automation, isn’t 13% somewhat inline with what you’d expect?
For example, I wonder how many fewer juniors were needed when we had better programming languages and tools? Do certain programming practices lead to fewer new workers? How many new factory workers aren’t hired on the factory floor due to a form of automation?
I think you missed my point. I was trying to point out how similar the situation programmers face with AI is to that of the Luddites. People think of them as anti-technology, but they clearly weren't. HN feels a lot like a forum for weavers discussing the hot new automatic looms and how excited they are for them, without realizing the true goal and scope.
But how do they know it's AI and not, say, other economy-related issues? Just the other day there was a report that companies are not adopting AI because they don't know about it... and now suddenly, so many are using AI that it causes 13% decline in jobs?
> Current AI valuations assume one thing: mass workforce extinction pays the bill.
I disagree. My evidence is simple: just look at how the most recent generation of smartphones is being advertised. Look at the platforms like Base44 that are spamming their ads all over YouTube. The bet is diversified quite a bit, into the expectation that end users will (eventually) pay through the nose for AI-powered toys.
I think this is to be expected; all tech has societal impact like this. It's just happening over a span of a few years instead of decades and centuries. Failure of government policymaking at its peak.
Yes, stunt growth if that growth is immediately harmful to the public. Provide adverse incentives that increase the cost of replacing humans: fewer or no government subsidies, incentives, or tax breaks if you replace humans with LLMs. Even without replacing humans, tax LLM usage like cigarettes.
In the short term, that is. Over time, wind down these artificial costs so that humans transition to roles that can't be automated by LLMs (go to school, get training, etc. in other fields), instead of having millions of unemployed, restless people collapsing your society.
But everyone is on the take, they want their short term lobbying money and stock tips so they can take what's theirs and run before the ship sinks. (if I can be a bit over dramatic :) )
I’m suss about this paper when it makes this claim:
“where AI is more likely to automate, rather than augment, human labor.”
Where is AI currently automating human labor? Not software engineering. Or: what's the difference between AI that augments me so I can do the job of three people, and AI that "automates human labor"?
I was also curious about this. Table A1 on page 56 lists examples of positions that are automated vs augmented, and these are the positions the authors think are going to be most augmented (allegedly taken from [0]):
- Chief Executives
- Maintenance and Repair Workers, General
- Registered Nurses
- Computer and Information Systems Managers
After skimming [0], I can't seem to find a listing of jobs that would be augmented vs automated, just a breakdown of the % of analyzed queries that were augmenting vs automating, so I'm a bit confused where this is coming from.
When the Stanford paper looked at augment vs automate, they used the data from Anthropic's AI Economic Index. That paper defined the terms like this:
We also analyze how AI is being used for tasks, finding 57% of usage suggests augmentation of human capabilities (e.g., learning or iterating on an output) while 43% suggests automation (e.g., fulfilling a request with minimal human involvement).
From the data, software engineers are automating their own work, not augmenting. Anthropic's full paper is here:
I see a worrisome trend. On one hand, many of my proto-boomer friends are suffering from ageism, and memes claim that over-50-year-olds are unemployable. Not 100% fidelity, but there's some truth to it.
Then I hear about a lot of youngsters struggling to find work, and see articles like this.
Well, who's left? Is there a sweet spot at like 31 that are just cleaning up?
Beside the point, but over 50 = proto-boomer? You mean para-boomer, maybe? Gen X is <= 60, I believe, so you're referring to the boomer/Gen X cusp, I think.
Reading the paper [1], I disagree; I think we are actually seeing an economic downturn due to the massive spending during the COVID-19 pandemic. Borrowing was high, and now the interest on that borrowing is high.
> The patterns we observe in the data appear most acutely starting in late 2022, around the time of rapid proliferation of generative AI tools. They also hold for both occupations with a high share of college graduates and ones with a low college share, suggesting deteriorating education outcomes during COVID-19 do not drive our results.
The economic downturn was a delayed effect of high spending at low interest rates during COVID-19. In the UK [2], interest rates were as low as 0.1% (19 Mar 20). While interest rates are low, it makes sense to invest heavily, and one of the largest investments most companies make is in employment. In late 2022 we saw interest rates rise above 1%, then 2%, and by mid-2023 they were above 5%. The largest cost to cut when borrowing is expensive is, again, employment. For every employee at a medium to large business, the fully loaded cost is 1.5x-2x their wage. If you're a big tech firm with 5 employees at $100k each, you could save $1mn a year cutting the right staff. Apparently Microsoft have sacked some 15k employees this year [3]; assuming an average wage of $100k a year, they may have just wiped up to $3bn off their burn rate.
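The back-of-envelope savings above follow from the usual fully-loaded-cost rule of thumb. A sketch, where the 1.5x-2x overhead multiplier is the comment's assumption rather than a fixed figure:

```python
def annual_savings(headcount, avg_wage, overhead_multiplier=2.0):
    """Fully loaded cost of cut roles: wage times an assumed overhead multiplier."""
    return headcount * avg_wage * overhead_multiplier

# 5 engineers at $100k with 2x overhead -> $1mn/year
print(annual_savings(5, 100_000))           # 1000000.0
# 15,000 roles at $100k with 2x overhead -> up to $3bn/year
print(annual_savings(15_000, 100_000))      # 3000000000.0
```

At the lower 1.5x bound the Microsoft figure drops to about $2.25bn, which is why "up to" is doing real work in that sentence.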
As the paper itself concedes:
> One possibility is that our results are explained by a general slowdown in technology hiring from 2022 to 2023 as firms recovered from the COVID-19 Pandemic.
AI is being trialled as an approach to maintaining growth despite these significantly deep cuts. The Microsoft CEO is under the impression that up to 30% of their code is now written “by software” [4] (are we talking about automatically generated files at compilation, or LLMs?); apparently more than 30% of Google’s code is, too. I do not believe that AI has yet been convincingly linked with job losses.
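The payroll arithmetic a few lines up can be sanity-checked in a few lines of Python. This is only a back-of-envelope sketch: the 2x multiplier is the top of the 1.5x–2x range stated in the comment, and the wage figures are that comment's own assumptions, not real Microsoft numbers.

```python
# Rough sanity check of the payroll figures in the comment above.
# All inputs are the comment's stated assumptions, not real data.

def annual_savings(headcount, avg_wage, overhead_multiplier=2.0):
    """Yearly fully loaded cost removed by cutting this many staff."""
    return headcount * avg_wage * overhead_multiplier

print(annual_savings(5, 100_000))       # 1000000.0  (the "$1mn a year" example)
print(annual_savings(15_000, 100_000))  # 3000000000.0  (the "up to $3bn" example)
```

Both headline numbers check out only at the top of the stated overhead range; at 1.5x the Microsoft figure would be closer to $2.25bn.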
My company did exactly this earlier in the year. It was a blatant lie and everyone who works here knew it. None of the people laid off were actually replaced with AI, the work they did was just eliminated.
It’s also just natural cost cutting from businesses that were previously massively over hiring, and outside of AI don’t exactly have a ton of areas with huge growing investment.
Plus slashing jobs like this keeps the plebs in line. They don’t like software engineers having the money and job security to raise a stink over things. They want drones terrified of losing everything.
This study feels pretty weak. Software as an occupation is collapsing, but it's not due to AI. Articles and "studies" like this are just a smoke screen to keep your eye off the ball.
This is good though right? There was decline in manufacturing jobs in the West over the last several decades because we could automate and outsource to cut costs. We all agree that cheap iPhones and clothes are good... Right...?
Now we get cheap services too!
Job losses to automation shouldn't be a sudden concern to people. This has been happening for decades. The only difference is that it's now impacting the more privileged in society as well.
Thinly veiled economic propaganda aside, I am dealing with a different AI mess every day. Technical debt is exploding everywhere I turn. There is an ever larger part of me these days that wishes I could just call the bluff all at once and let all the companies in question learn the inevitable lessons here the hard way.
The worst thing for me would just be needing to get a job like the one I had before being a dev; the stakes are so much grander for all the companies. It's only really existential for the side of this that isn't me/us. I've been working since I was 15, I can figure it out. I'll be happier cutting veggies in a kitchen than every single CEO out there when all is said and done!
It emphasizes "AI adoption linked to 13% decline," which implies causation. The study itself only claims "evidence consistent with the hypothesis."
The article also largely highlights job loss for young workers, while only briefly mentioning cases where AI complements workers.
The study's preliminary status -- it is not peer reviewed -- is noted, but only once and at the end. If the article were more balanced it would have noted this at the beginning.
Articles on the same subject by the World Economic Forum, McKinsey, and Goldman Sachs are more balanced and less alarmist.
Short-term, discrete numbers like these are interesting to look at, but they don't really tell us much about the long-term trajectory. In parallel: [1].
Slow down people. Let's stop jumping to biases and see what we have here.
Note upfront: I'm not suggesting AI is not having an impact. That would be foolish. But I will say there's *a lot* less to the conclusion of this study than it appears, simply because the data is questionable. It's not that they did anything wrong per se. I won't press that point here because it'll end up a HN cluster fuck. Cluster fuck aside, the caveats and associated doubt are enough to say, "Don't bet the farm on this study." Great banter for the bar? Sure.
It's an interesting study but I've seen it called "absolute proof" and other type things. Don't be fooled, it's not that.
> "This study uses data from ADP, the largest payroll processing firm in America. The company provides payroll services for firms employing over 25 million workers in the US. We use this information to track employment changes for workers in occupations measured as more or less exposed to artificial intelligence"
a) I'm calling this out because I've seen posts on LinkedIn saying it was a sample of 25M. Nope! ADP simply does payroll for that many.
b) The size of the US workforce is ~165M, making ADP's coverage ~15% of the workforce.
c) Do the businesses ADP serves come from particular industries? Are they of a particular size, or in particular geographic locations? It's not only about the size of the sample - which we'll get to shortly - but the nature of the companies - which we'll also get to shortly.
> "We make several sample restrictions for our main analysis sample."
d) It's great that they say this, but it should raise an eyebrow.
> "We include only workers employed by firms that use ADP’s payroll product to maintain worker earnings records. We also exclude employees classified by firms as part-time from the analysis and subset to people between the age of 18 and 70."
e) Translation: we did a slight bit of pruning (read: cherry-picking).
> "The set of firms using payroll services changes over time as companies join or leave ADP’s platform. We maintain a consistent set of firms across our main sample period by keeping only companies that have employee earnings records for each month from January 2021 through July 2025."
f) Translation: More cherry-picking.
> "In addition, ADP observes job titles for about 70% of workers in its system. We exclude workers who do not have a recorded job title."
g) Translation: More cherry-picking.
> "After these restrictions we have records on between 3.5 and 5 million workers each month for our main analysis sample, though we consider robustness to alternative analyses such as allowing for firms to enter and leave the sample."
h) 3.5M to 5.0M feels like a large enough sample... if it weren't so "restricted." Furthermore, there's no explanation of the 1.5M delta, or of how adding or removing that many records impacts the analysis.
i) And they considered that why? And did what they did why? It's a significant assumption that gets nothing more than a hand wave.
> "While the ADP data include millions of workers in each month, the distribution of firms using ADP services does not exactly match the distribution of firms across the broader US economy."
j) Translation: as mentioned above ADP !== a representation of the broader economy.
> "Further details on differences in firm composition can be found in Cajner et al. (2018) and ADP Reserch (2025)."
k) Great, there's a citation, but given the acknowledged delta, isn't at least a line or two in order? Something about the nature of the delta, and THEN the citation?
l) Editorial: You might think this hand-wave is ok, but to me it's usually indicative of a tell and a smell.
m) Finally, do understand the nature of academia and null research (which has been mentioned on HN). In short, there is a (career / financial) incentive to find something novel (read: worth publishing). You advance your career by doing not-null research.
Again, I'm not suggesting anything nefarious per se. But this study is getting A LOT of attention. All things considered, more than it objectively deserves.
__Again: I'm not suggesting AI is not having an impact. That would be foolish.__
As I see it, it's really a lack of willpower among "capitalists" to actually be capitalist.
We can't call it incompetence because neither those whom we have come to know as capitalists nor their advisors are incompetent, which means they quite literally do not want to offset any decline in jobs or (job creation) that can be linked to progress.
That's not strange. A "capitalist" wants market participation to grow, infinitely, which is possible. Who we came to know as capitalists don't care about the markets, actual market growth or market participation. They only care about the growth of the value of the markets, "however" that happens.
I highly recommend that journalists and economists dig into this with a bit more radical honesty. There'd be more value in that: more blog posts, more articles, more discussions on all platforms, and thus more participation.
I mean it's a scapegoat vs straw man vs actual culprit kind of situation ... isn't it?
Ostensibly it's to help programmers, or writers, or lawyers, or whomever. But those are just the users of AI.
The owners and buyers of AI at a company level are developing and using it to push down payroll expenses. That's it. It's to avoid paying people, and providing them benefits. Even if you fire 50% of your employees, realize it was a terrible mistake, and hire most of them back, it's a net reduction in payroll costs.
It just means they bought the hype and thought they could actually use AI to fire people. Many companies try to do it, and I don't know how many it works for.
But I do know that companies fire people and stay short-staffed just to keep payroll down all the time. Even when externally that seems like a terrible idea, and likely impacts bottom line. It's important to realize just how much companies hate payroll. And AI is a great way to try to reduce it.
Is it pure AI or a guy in India that can cover the gap using AI to create good enough slop to pass?
Everyone is doubling down on hiring IN India right now. H1B isn’t even a thing. It’s offshoring to Indians that are utilizing AI to ship good enough slop. Everyone’s India office is rapidly expanding.
I have a controversial take that the only reason knowledge jobs have been well paid and fairly safe in the West was because for a while they've been difficult to outsource, not because the skills themselves are particularly rare or hard to acquire.
A few decades ago there was a focus in several low-cost economies (like China) to build out a manufacturing base and because of cheap global shipping it just made sense to outsource manufacturing. It's not that manufacturing is unskilled or "unproductive" as some claim.
Over the last few years hundreds of millions of people from countries like India have gotten online, and in many of those countries education and English proficiency are now good enough that, with the internet, knowledge jobs can be outsourced. And so obviously they will be.
This idea that there are jobs that require such high IQ and knowledge that only a Westerner can do them is kinda silly. We should assume a race to the bottom is the rule when it comes to labour. At the end of it the only jobs that will be safe are those which cannot be easily automated or easily outsourced. Those will probably be skilled manual labour jobs that require a physical presence – jobs like roofing or plumbing.
Not sure what these guys are studying but can tell you in the real world - essentially zero AI rollout in accounting world for anything serious.
We've got access to some fancy enterprise copilot version, deep research, MS office integration and all that jazz. I use it diligently every day...to make me a summary of today's global news.
When I try to apply it to actual accounting work, it hallucinates left, right & center on stuff that can't be wrong. Millions and millions off. That's how you get the taxman to kick down your door. Even simple "are these two numbers the same" checks get false positives so often that it's impossible to trust. So now I've got a review tool whose output I can't trust? It's like a programming language where the equality (==) symbol has a built-in 20% random number generator and you're supposed to write mission-critical code with it.
I keep trying to get it to review my personal credit card statements. I have my own budget tracking app that I made, and sometimes there are discrepancies. Resolving this by hand is annoying, and an LLM should be able to do it: scrape the PDF, compare the records to mine, find the delta.
I've tried multiple models over the course of 6 months. Yesterday it told me I made a brilliant observation, but it hasn't managed to successfully pin down a single real anomaly. Once it told me the charges were Starbucks, when I had not been to a Starbucks—it's just that Starbucks is a probable output when analyzing credit card statements.
And I'm only dealing with a list of 40 records that I can check by hand, with zero consequences if I get it wrong beyond my personal budgeting being off by 1%.
I can't imagine trusting any business that leans on this for inappropriate jobs.
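For what it's worth, the reconciliation described above (scrape, compare, find the delta) is a deterministic job once the records are parsed. A minimal sketch in plain Python, assuming both sides have already been extracted into (date, amount, merchant) tuples - the row format and sample data here are made up, and the hard PDF-scraping step is deliberately out of scope:

```python
from collections import Counter

def find_delta(statement_rows, budget_rows):
    """Multiset difference between two transaction lists.

    Each row is a (date, amount, merchant) tuple. Returns the rows
    that appear on one side but not the other, preserving duplicates.
    """
    stmt, mine = Counter(statement_rows), Counter(budget_rows)
    only_on_statement = list((stmt - mine).elements())
    only_in_budget = list((mine - stmt).elements())
    return only_on_statement, only_in_budget

stmt = [("2024-05-01", 4.50, "COFFEE SHOP"), ("2024-05-02", 12.00, "GROCER")]
mine = [("2024-05-01", 4.50, "COFFEE SHOP")]
extra, missing = find_delta(stmt, mine)
print(extra)    # [('2024-05-02', 12.0, 'GROCER')]
print(missing)  # []
```

The point of the comparison: this kind of exact set logic is precisely what an LLM's sampling process does not give you for free, which is why the delta it reports can't be trusted.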
>> I keep trying to get it to review my personal credit card statements. I have my own budget tracking app that I made, and sometimes there are discrepancies. Resolving this by hand is annoying, and an LLM should be able to do it: scrape the PDF, compare the records to mine, find the delta.
This is a perfect example of what people don't understand (or on HN keep forgetting). LLMs do NOT follow instructions; they predict the next word in text and spit it out. The process is somewhat random, and certainly does not include an interpreter (executive function?) to execute instructions - even natural language instructions.
Agreed. I keep trying stuff because I feel like I’m missing whatever magic people are talking about.
So far, I’ve found nothing of value besides natural language search.
There's a very fun video about accounting by Dan Toomey [0] that I think really drives home the point that accounting is:
1) Extremely important
2) Not that glamorous
I always think of accountants as the "nerds" of the finance world. I say this lovingly - I think in another life I would have become an accountant. I find it very fascinating. I worked at a company that worked with auditing datasets, so I know much more about accounting than I would have otherwise.
Nobody ever wants to listen to accountants because they either are giving you bad news, or telling you the things that you should be doing. No one can deny how important they are, despite how much it seems like everyone wants to get rid of them.
An accounting story I love is how my old company got a lot of business because of Enron. Part of the reason Enron was caught was its audit fees.
Their audit-fee disclosures showed that Arthur Andersen was charging them for a huge percentage of non-audit work (fee disclosures break out what percentage was audit-related and what wasn't). This was a huge red flag.
My company was the only one at the time that kept track of audit fees, and so a huge number of people paid to access that data stream.
If one day I quit programming, maybe I'll get my CPA.
[0]: https://youtu.be/vL4INHaK-sA?si=jIvFQVtrXU6tjh-1
"...can tell you in the real world - essentially zero AI rollout in accounitng world for anything serious."
The jobs the researchers concluded were affected were "unregulated" ones where there are no college education or professional certification requirements, e.g.,
"Not sure what these guys are studying..."Apparently, they studied payroll data from ADP on age, job title and headcount together with, who would have guessed, data from an AI company (Anthropic)
https://digitaleconomy.stanford.edu/publications/canaries-in...
This study has not been peer-reviewed
It should also be noted that there are some pretty big flaws in the analysis. They mention "the distribution of firms using ADP services does not exactly match the distribution of firms across the broader US economy," but make no attempt to adjust their analysis for it. They also drop 30% of the data for which there is no job title recorded. With such a skewed sample, it's hard to tell how the analysis is supposed to generalize.
There seems to be this dream of Tax AI Software that will just do all of the taxes. But other than using AI as a fancy text search, I don't see it happening for a long long time. LLMs can't do arithmetic or count.
Yeah - classifying an invoice as, say, building rent or printer ink, it'll have some success. So we'll see some of it at the very bottom end.
>LLMs can't do arithmetic or count.
Yes. The fancy copilot stuff does use pandas/Python to look at Excel files, so stuff like "add up a table" does work sometimes, but the parameters going into the pandas code need to make sense too, in the garbage-in-garbage-out sense. The base LLM doesn't seem to understand the grid nature of Excel, so it ends up looking at the wrong cells or misunderstands how headings relate to the numbers, etc.
It'll get better but there doesn't seem to be the equivalent of "use LLM to write boilerplate code" in this world.
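The heading/grid failure mode is easy to demonstrate in pandas. A toy sketch - the sheet layout and numbers are invented, and in real use the frame would come from `pd.read_excel`:

```python
import pandas as pd

# Toy version of the failure mode: the real header is NOT row 0 (a title
# row sits above it), so naively generated code reads the wrong cells.
raw = pd.DataFrame([
    ["Q1 Expense Report", None],   # title row
    ["Category", "Amount"],        # the actual header
    ["Rent", 1000],
    ["Ink", 50],
])

# Correct parsing requires knowing the grid: header on row 1, data from row 2.
table = raw.iloc[2:].rename(columns=dict(zip(raw.columns, raw.iloc[1])))
table["Amount"] = table["Amount"].astype(int)
print(table["Amount"].sum())  # 1050
```

The pandas code itself is deterministic; the failure comes from the model guessing the wrong header row or column, which no amount of correct arithmetic downstream can fix.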
It is profoundly bad at accounting. But with a calculator tool, it works okay for math.
Aside: Hey, what's the prompt you're using for a summary of the news events?
Yup, using AI for any serious tax calculation or even advice is a REALLY BAD idea.
A close relative is a top expert in US Trust & Estate Tax law working at a well-known BigLaw firm. Of course they have substantial AI initiatives, integration with their system, mandatory training, etc.
She finds the AI marginally useful for some things, but overall not very much, and there are serious errors, particularly the types of errors only a top expert would catch.
One of the big examples is that in the world of T&E law, there are a lot of mediocre (to be kind) attorneys who claim expertise but are very bad at it (causing a lot of work for the more serious firms and a lot of costs & losses for the intended heirs). The mediocre-minus attorneys of course also write blogs and papers to market themselves, often in greater volume than the top experts. Many of these blogs/papers are seriously WRONG, as in giving the exact opposite of the right advice.
Everyone here sees where this is going. The AI has zero ability to reason or figure out which parts of its training input are from actual top experts and which are dreck. The AI cannot reason, and cannot even validly check its 'thinking' against the existing tax code (which is massive), or the regulations and rulings (which are orders of magnitude more massive). So the AI gives advice that is confident, cheerful, and WRONG.
Worse yet, the LLM's advice is wrong in ways only a top expert would know, and in ways that will massively screw the heirs. But the errors will likely only be discovered decades later, when it is too late to fix.
Seriously, do NOT use LLMs for tax advice, unless you are also consulting a TOP professional. And skipping the LLM part is best.
My relative is quite frustrated and annoyed by the whole thing, which ought to be far more helpful given these massive codes/regs/rulings, but she finds it often more work than just using the standard WestLaw/Lexis legal database searches.
LLMs basically can't do arithmetic directly, trying to get them to do so is a skill issue. Most models can and will happily write and execute code to do that work instead.
Which drives me a little crazy. Every LLM worth its salt should just hand the arithmetic of any question off to a tool via MCP or whatever; I assume the good ones do.
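That pattern is easy to sketch: the model emits an arithmetic expression as a tool call, and deterministic code evaluates it. A minimal, safe evaluator in Python - the tool-wiring around it (MCP, function calling) is omitted, and the operator whitelist is this sketch's own choice:

```python
import ast
import operator as op

# Minimal, safe arithmetic evaluator: the kind of deterministic
# "calculator tool" an LLM can call instead of doing math in-weights.
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calc(expr: str):
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -ev(node.operand)
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

print(calc("15000 * 100000 * 2"))  # 3000000000
```

Walking the AST instead of calling `eval` keeps the tool from executing arbitrary model-generated code, which matters when the expression comes from an untrusted generator.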
> It's like a programming language where the equality (==) symbol has a built in 20% random number generator and you're supposed to write mission critical code with it.
<bad joke> Why are we talking about JavaScript in a thread about AI? </bad joke>
In fairness, a "20% random number generator" on "mission critical code" is something they literally do at NASA
The accounting note is not true in the traditional sense. The field in the US is just getting offshored to India/PH/Eastern Europe for better or for worse. There is even a big push to lower the educational requirements to attain licensure in the US (Big 4 partners want more bodies and are destroying the pipeline for US students). Audit quality will continue to suffer and public filers will issue bunk financials if they aren't properly attested to.
It's amusing to see programmers in the US promoting remote work.
Do those people really believe they're the most intellectually superior to the rest of the world? If a job can be done purely remotely, what stops the employer from hiring someone who lives in a cheaper place?
As a US-based developer I do not feel threatened by the "cheap" offshore developers I encounter. I've repeatedly been hired to clean up after offshore developers who:
* lied about their capabilities/experience to get the job,
* failed to grok requirements through the language barrier,
* were unable to fix critical bugs in their own code base,
* committed buggy chatgpt output verbatim,
* and could not be held liable because their firm is effectively beyond the reach of the US legal system.
In a couple of projects I've seen a single US-based developer replace an entire offshore team, deliver a superior result, and provide management with a much more responsive communication loop, in 1% of the billable hours. The difference in value is so stark that one client even fired the VP who'd led the offshoring boondoggle.
Software talent is simply not as fungible as some MBAs would like to believe.
I've worked alongside (but never with) offshore developers, often from the big consultancy companies. One thing they tend to do is place one competent developer and a dozen less-so, so that the work gets done by the one but they get paid for a dozen people.
But I also believe the managers hiring offshore employees are fully aware of this. If they aren't then they're not very good managers and/or have no idea what they're doing.
The offshore people mainly work on SAP and legacy systems though; it turns out it's very hard to find willing or competent people in Europe that actually want to work on / with SAP. However, foreign workers have less qualms about learning stuff like that, since the money is really good.
Yes this is the agency model here in Croatia. You would get one senior developer covering 2-3 projects and a few junior/mid developers working full time.
I have a feeling it's not working that well anymore, because the people covering those juniors just earn more going to work straight for the client, with less burden on them. It used to be harder, so the agencies had leverage; nowadays even big companies will hire individual B2B contractors.
The only management experience I've had was as a team lead at a US-based consulting company. It was really stressful because I felt like I was managing a team that wasn't capable of doing the work. I was expected to spend at least some of my time coding, and was responsible for the overall project. This is the first time it has occurred to me that this might have been intentionally set up to exploit me while maximizing the amount we can charge the client.
we couldn't find good SAP security folks to save our life at a previous job. 900/hr for consultants.
regular "line" SAP admins had to be found in Mexico and brought up on TN visas -- still well paid but generally pretty good, doubly so because we had a Mexico City office and could retain the staff even after they rotated back to MX.
For a counterpoint, I’ve worked with many great engineers in Latin America who are smart, capable, and in the same time zones as the US
I’ve worked with awful, stereotypically garbage offshore teams. I’ve worked with quality offshore teams. The difference was money. The quality teams made less than, but nearly as much as an American worker. Maybe not a FAANG guy or a New York / SF worker, but all those small cities in flyover states? They came in 20-30k under, perhaps.
Language, cultural, and time barriers still come into play regardless of how good they are, however.
Likewise! Though Latin American engineers also tend to be some of the priciest offshore developers (along with European engineers). Excellent engineers, but there's still some churn from the friction of hiring and maintaining teams overseas.
As posted above, we had great success with Mexican hires out of Mexico City.
General perception was the universities there produced qualified graduates who were not paper tigers (or didn't lie about creds).
Rates for them were pretty good, and we had better alignment with timezones and holidays.
Reasonably good alignment in terms of legal and HR issues -- easier to enforce than, like, Bangladesh
The NAFTA / USMCA / whatever its called now Visa made it easy for them to come across the border for a few years as well. Pay bump for a while plus a chance to work in HQ or the IT office directly, make fat stacks, and then rotate back to MX and buy a nice house. The Mexico City PMs were also instrumental for bridging the language gap when running projects in other LATAM countries.
Trump's ICE might be the end of that approach tho
We find it incredibly hard to hire these people. It turns out a lot of US companies are also interested in smart, capable, cheap engineers in Central Time Zone.
That's not the talent not being fungible but the trust and accountability not being fungible. Which is a structural issue and unlikely to be resolved. I suspect it's more profitable for a lot of VPs for offshore labor to be as inefficient as possible.
Exactly, its an incentives issue (see my comment as I talk about it in detail there)
> and could not be held liable because their firm is effectively beyond the reach of the US legal system.
this is a big one. last F500 I was at dropped Tata for several internal support teams due to belief that they were messing with proprietary code and/or had screwed things up so badly they warranted a lawsuit -- but had no legal levers to chase them for damages.
ditto for the one-off programmer who sexually harasses people while remote -- how does a remote worker sue, or get sued, and under what law?
or finance / tax -- who pays the payroll tax?
We've all had similar experiences. But if you've been around long enough, you will also encounter highly competent and sometimes outright brilliant folks who run circles around most of us. A bit less common in India than, say, Eastern Europe, but that's about it.
Anyway, highly competent and experienced folks will always thrive regardless of environment. It's the quiet rest that should be worried, from multiple angles.
More or less my experience too.
But at the same time, I doubt there is anything special about me or my US born coworkers. We aren't superior just because of the continent we live in. But offshore work is almost as a rule terrible quality done by people that are frustrating to work with. It doesn't make sense
This experience is most likely from dealing with offshore software farms. Those are the same shit as their Western counterparts, and even worse because of language and logistics. On an individual scale, however, one can easily find great developers in, for example, Eastern Europe and former USSR countries who do an amazing job for a very attractive price. Just not dirt cheap.
And yes, there is nothing special about North America as far as quality of software developers in general. Mostly you get average, buzzword-indoctrinated, not-so-great people with some amazing expectations salary-wise.
Firstly, I want to say that we are "cheap" because things are dirt cheap here.
Now, I am not a software developer - I'm in high school - but I have my brother/cousins working in the software dev industry, and here are my thoughts.
>language barrier: I genuinely don't know how incompetent the developers you're hiring would have to be for this - I mean, sure, if you hire extremely shitty developers, but even that's rare.
Most people here are comfortable enough with English, in the sense that literally anyone can speak English and mostly get the point across. Yes, I have heard of some miscommunications, but I don't think it's really much of an issue.
Now, some outsourcing companies are mass recruiters who recruit from CS colleges where no one else recruited (Tata Consultancy Services, Infosys?), and the thing with them is that they don't even pay the mediocre expectations of a developer, even in INDIA; they are basically exploiting junior developers and are compared with govt. institutions in my country given how slow they are.
My brother works in a decent Consultancy services but he says that there are a lot of inefficiencies in the system.
He worked on a project, and we estimated he got 1% or less of the project's value for work that he MOSTLY did. And so my brother has way more incentive to freelance and get a "remote job," not consultancy work.
I think you confused the remote job and consultancy parts. Remote hiring / freelancing Indians is still cheaper than a consultancy, imo - the consultancies are parasites on the developers.
My brother works in a consultancy right now because the job market is shitty and he has gotten offers 4x his current salary from countries like switzerland and america. Yet, my family doesn't want him to do the 4x income work because he is already working a job and they don't want him to burn out
And they don't want him to leave the job because it's "safe" - you can't trust these startups, etc., given their volatile nature, and if they fail, then whoops, the job market is really messed up right now, even in India. Also, arranged marriage is a huge thing, and the girl's family usually checks the company the boy works in; they usually get fishy if it's a remote job (and I mean, for good reason).
Also trust me some indians can definitely work in american timezones too but that is a little tough. But I mean, we are okay if you might call us once or twice late at night when its day in america and you have something really urgent. Atleast I am okay with that.
And you could pay 2x the salary the normal Indian dev gets, and I feel like even that would be less than an American dev. This can really filter for devs with seniority or good projects.
It's a problem of incentives for consultancies (which is what you seem to hate), and maybe that's a bit fair given how many inefficiencies I see in that system. Just remote hire directly, I suppose.
People expect that they can pay 0.05x in the Philippines or India, or 0.1x in Poland or Estonia, when that's just not going to happen. I've heard a few people say the multiplier starts at something like 0.4x or more for equivalent talent.
Since that comes with all the disadvantages and risks you'd expect from splitting your team across two countries and operating in a market you don't understand, at that price point a US company should probably start thinking about spinning up a cheaper team in, I dunno, Dallas rather than offshoring.
you're delusional. of course if you take the cheapest possible offshore workers you get terrible results when compared to an experienced engineer in a developed country.
but it's a bit like ikea: if you buy their cheapest stuff it will fall apart after a few months but their "expensive" lines are still far cheaper than the competition but the same quality.
you might think you're a solid mahogany table but at the end of the day you're probably the same table as being sold at ikea, just more expensive
GitHub copilot already replaces 50% of what offshore talent could do. Can’t imagine someone spinning up an offshore dev hut instead of buying more AI.
Exactly my experience.
> what stops the employer from hiring someone who lives in a cheaper place?
I've worked with remote workers from around the world. Let me preface by saying there are of course exceptions but:
What I've found is that, most often, Americans exhibit self-starting and creativity. What I mean by that is non-US workers are great if you give them a specific task, even a really hard task.
But if you give them a nebulous problem, or worse, a business outcome, they tend to perform much more poorly. And I rarely see non-Americans say something like "I think our customers would like it if we added X to the product, can I work on that?".
I don't think it's because Americans are better at this -- I think it's cultural. America has a much higher risk tolerance than the rest of the world. Failing is considered a good thing in the USA. And the USA is much more entrepreneurial than the rest of the world.
These two things combined create a culture difference that makes a business difference.
Additionally, what I've found is that the exceptions tend to move here because their risk taking is much more acceptable here (or they are risk takers willing to move across the world, hard to say which way the causation goes).
>> What I've found is that only Americans exhibit self-starting and creativity.
I'm going to counterpoint somewhat. I think those attributes are evenly spread into all countries, but equally I think they are uncommon in all countries.
I don't live in the US. I have traveled there and elsewhere. I would agree that there are lots of cultural differences between places, even places as nominally similar as say the UK, Australia and the US.
Of course who you interact with in various places matters. If you go to India and visit a remote-programming-company you'll meet a specific kind of person, one well suited to providing the services they offer.
Dig a bit deeper elsewhere and you'll find some very bright, very creative, engineers in every culture. In some cases those folk are doing remote work for US companies. In a few cases they're building the software (creatively and all) that the US company is selling.
In countries that are isolated for one reason or another, creativity thrives. Israel, South Africa, Russia: all have (or had) exceptional engineering abilities, developed because international support was withheld.
Yes, it is hard to find good talent. It is hard to develop and nurture it. But it exists everywhere. And more and more I'm seeing folks outside the US take American jobs, precisely because American workers are so keen to explain how portable those jobs are.
I understand that the American psyche is built on exceptionalism. And that does exist in some areas. But unfortunately it also acts as a filter, blinding you both to exceptionalism elsewhere and to inferiority at home. By the time you realise someone else has the edge, it's too late. We've seen this in industry after industry. Programming is no different.
I understand also that shooting the messenger is easier than absorbing the message. Let the down-voting begin.
> I think those attributes are evenly spread into all countries, but equally I think they are uncommon in all countries.
The data does not support your statement. From a startup report just four days ago:
The United States alone generates 46.6% of all startup activity worldwide, nearly half of the global total. Together with China (9.2%), the United Kingdom (5.6%), and India (5%), these four countries account for 66.4% of the absolute global startup activity.
I will give you that Israel in particular has a strong risk taking culture, as does Singapore and Estonia. And there are a lot of startups coming out of there.
But overall the US has way more risk taking.
And like I said at the very beginning, there are of course exceptions. Yes, every culture has some brilliant risk takers. But at least until recently, many of them came to the USA after they got successful.
There is startup activity in the US because there is enough capital to fund it. Getting funding for a startup, even in pretty rich countries in the EU, is more difficult by an order of magnitude.
It's not just funding; bankruptcy laws are also written in a way that encourages entrepreneurship, without being overburdened by regulations.
It's not so much about risk taking as about getting proper funding and overcoming bureaucratic barriers. E.g. Poland itself has very low startup rates, but somehow Poles who go to the USA create things like OpenAI ;)
Creativity and startups are two different things. Many of those startups are not creative in any way. And conversely, being creative does not imply creating a company. This is about how capital works.
America is unique in the way its businessmen tend to think that creating a business is the only way to be creative.
And incidentally, the post was about employee creativity.
Um, so VC-funded startups are the very definition of "not risky". Basically you'll do something as long as someone else ponies up a big pile of cash to pay for it. Pretty much any other business model, where you build with your own time, money, or capital, is much more risky.
Equally I don't think this is an argument for American exceptionalism (which is the point under discussion.)
It's interesting that your metrics for creativity and risk taking are financial. I think you should reflect on that.
This is the best HN comment I have ever seen. So elegant. I am going to use the "I think you should reflect on that" line from now on. It's just pleasant to me: it seems professional and actually inviting to a discussion, while also showcasing the hidden irony of the original case you pointed out.
This is art, Mr. White!
I'm actually a mirror salesperson.
I would advise against it, personally. It's a passive-aggressive, thought-terminating cliché that might as well be saying "I know better than you".
> Its a passive aggressive,
I think it's read as passive-aggressive when people realise they've been holding a silly opinion and don't want to admit it.
> thought terminating cliche
The irony.
> that might as well be saying "I know better than you".
Sometimes people do know better than you. I think I should reflect on that.
You've made my case for me, if by "I think I should reflect on that" you do in fact mean "[you've] been holding a silly opinion [and] don't want to admit it".
The former is a passive-aggressive way to say the latter. I aim to, and encourage others to say what they mean.
All jokes aside, the commenter I initially replied to really should reflect on why their concept of creativity and risk tolerance is so linked with financial outcomes, because that is a very particular association, and it may inform their worldview more than they realise.
> The former is a passive-aggressive way to say the latter. I aim to, and encourage others to say what they mean.
I suppose you don't see the irony?
While I wouldn't prescribe someone to sit down and think about why they tie the two together, you are probably right that it's reflective of their greater worldview(s). I wouldn't prescribe it because odds are, they already have reflected on it quite a bit. One thing I've really taken away recently reading about the historic lives of ordinary immigrants to early America, is that modern peoples are incredibly good at constantly reflecting and adapting their models of self, and of belief. I believe this constant reshaping is probably the main reason echo chambers are so effective, and dangerous.
Re: the irony, I don't see it, but I'm happy to hear your explanation of it. For what it's worth, my own interpretation of my words isn't passive aggressive, it's (charitably) pretty direct, or even (less charitably) plain old aggressive-aggressive.
Okay, so uhh, I think it was my comment where you said it looks passive-aggressive, so I just read it again and yeah, it does.
So yeah, thanks, in the sense that I am not going to use this phrase now that I realize it. Not sure how I even found it professional; man, I am cringing.
But maybe in the context OP used it, it was really a good roast, and I liked the use of the phrase in that context. But yeah, good point.
For what it's worth, I also don't see the irony. And I also didn't see that it was passive-aggressive until you pointed it out, and then I saw it... So uh, yeah.
> Israel, South Africa, Russia, all have (or had) exceptional engineering abilities developed because international support was withheld.
I think if you add the US to the list this theory disappears. It's more the frontier/self reliant/entrepreneurial attitude that I think makes the difference.
>What I've found is that only Americans exhibit self-starting and creativity.
Isn't that mostly a function of how incentives are aligned? I had a job with a lot of outsourcing to India. The Indians were given specific bits of code to write. They didn't even know how their code fit into the application.
Their entire incentive structure was geared toward getting them to write those bits of code as quickly as possible, finish, and take another task. There just wasn't any room for "self-starting and creativity".
I have a feeling if the entire application had been moved to India things would have been different.
It could be. But I worked at companies where we had full time employees all around the world, all of whom had full access to the same information the rest of us had. And I still saw this behavior generally. There were of course exceptions.
Interestingly the biggest exceptions were ones that had at some point lived and worked in the USA, and then had returned to their home country for some reason or another.
> I have a feeling if the entire application had been moved to India things would have been different.
I had direct experience with this. We had an office of full time employees in India tasked with a project, but I still had to hand hold them through most of the key decisions (which I didn't have to do with the US based teams nearly as much).
I think what you saw is more related to work/life balance than any innate difference in people. That's certainly my experience.
Employment is central to Americans' identity in a way that's almost considered perverse elsewhere.
Exactly!!
It's also like, no, I don't think a family is supposed to be a place where some guy at the top extracts all the money and then trickles it down, and I get a percentage of what I did. That doesn't sound like a family.
Someone should write a blog post on this phenomenon; to me, it seems like Americans have a parasocial bond with companies (I vaguely remember the Stripe CEO said my name once or something along those lines, in a blog post, and it felt parasocial, man).
I mean, I just feel like Americans complaining about Indian devs are complaining about the wrong things. Maybe I don't get them, but what they're saying isn't true. I just don't get it, man.
I have seen Indian govt jobs be much more like American private jobs, in the sense that employment becomes central to people's identity and there is this sense of tight-knit community for the most part. Maybe that has to do with the fact that the govt isn't usually exploiting its own workers, and the tight-knit sense of community comes from helping really poor children through teaching, or building roads; my uncle flexes on me that he built this road or this college, showing me the absolute chad he sometimes is.
IBM / Hofstede has a lot of studies on this.
Cultural differences do exist. I don't understand why this isn't treated as a major problem, because it's behavior I've seen again and again and again: Indians seem terrified of showing any initiative whatsoever (including asking questions), or any contribution of their own; they do what you've asked them, and only what you've asked them. They are also terrified of being accused of doing nothing. This goes to extremes, such as purposefully taking a very long time to finish a simple task simply because they haven't been given a new one, don't dare ask for one, have to be seen to be working, and can't come up with anything themselves.
You want a long list of simple tasks finished? Excellent workers. An endless ticket queue with simple problems? There's a few issues with them not escalating real problems, but ok.
You want an application developed and a lot of problems solved? Stay away.
Well when you are paid peanuts, you do the bare minimum.
And an incentives issue.
Some software engineers do the job, and if they finish the work early, the company just starts having higher expectations of them WHILE PAYING THE SAME. So you are effectively better off if you don't work, or if you take longer to do the same thing, at least in consultancies and similar businesses in India.
I feel like a lot of Indians, especially software devs, don't have this allegiance to a company where we consider the company to be our "family", and I find that really fair. My cousins always tell me that a company extracts 10x more value from you than what it gives back. Not sure how much of that is true in the US, but some developers are literally exploited in India. They couldn't care less about the application being developed if they are stuck in this state of limbo where they won't get fired for doing shitty work, but they won't really get up the ladder either; even if they do good work, it would take years for the company to notice, and it's better to just change companies for that raise.
An incentive issue at its finest, one that can be and is being fixed by many people. Just because you used a consultancy that sucked, or had people that sucked, doesn't make us all shitty software devs, man.
It's not a cultural issue. It really offended me, because by painting us all with this "culture" brush, you said some things which are clearly offensive.
Maybe I can grant that software attracts a lot of shy people, so they are shy about taking the first initiative, but that's not a cultural issue.
It depends on the culture of our schools: most schools don't incentivize extracurricular activities that much, so we don't do them, and that's why we don't usually take initiative, because in the end everything comes down to what you wrote in a 3-hour exam.
The incentive system is flawed, but maybe I have hope. To be honest, things aren't that much better anywhere else in the world. I just feel like either the devs I have met IRL are really good, or you guys' experience hasn't been that good, but it isn't that big of a difference, and I feel like things get a little exaggerated when I come to forums like this.
I have often experienced that it isn't a problem of pay or of incentives. They're terrified of asking for something to do for example. As in scared, and not a little bit. Not underpaid.
I'm not claiming they're well-paid, but I don't think this is the issue, or at least not the primary issue.
> And I rarely see non-americans say something like "I think our customers would like it if we added X to the product, can I work on that?".
The most crucial difference in this context is that Americans are employed directly by the company, while foreign workers are behind several layers of management belonging to several companies. While you can walk around and deliver elevator pitches to higher-ups, foreign workers must track their time spent on tasks down to the minute in Jira. Then, they must find a manager who would like to pitch a feature to a manager who would pitch a feature to a manager in the U.S.
Exactly. I used to work in such a situation for a few years (consulting company hiring EE devs). I tried suggesting things, building PoC’s, pitching it to the manager, all was met with just “we’re on a limited budget, so stick with what we’ve arranged.”
Had I built the things anyway it wouldn’t be met with praise, but looked down upon for bypassing the manager (or I just wouldn’t get paid for those hours).
Many big corporations tend to be similar even when you’re employed directly.
You can’t truly be creative when you’re stuck 7 layers of mgmt deep. You also have to understand that for those who’ve only worked in such situations, “risking” their position at a foreign company just to appear smart doesn’t seem like a good idea, so they don’t do it.
While on the one side I think you have a point, on the other there's different dynamics in place as well; you're comparing offshore workers to internal employees. An offshore worker gets hired to do a job for another company, an internal employee is part of the company.
That is, an external worker (and I'm a consultant, I know) gets paid per hour; if the company goes under for whatever reason, they just move on to the next assignment, while an internal employee depends more on their job.
Anyway that's just a theory. I'm a "consultant" which is just a fancy word for a temp / hired hand, and I'm somewhere in the middle in that I will think along with the company and propose improvements, but at the same time have lower risk and much less attachment to the companies I work for.
I don't think it's cultural per se. As an extreme example, the CEOs of Google and Microsoft were both born and raised in India.
> An offshore worker gets hired to do a job for another company, an internal employee is part of the company.
I've experienced both. Working with offshore employees and full time employees who happened to be in foreign countries. It was a similar experience with both, the exception being the ones that had previously lived and worked in the US.
> I don't think it's cultural per se. As an extreme example, the CEOs of Google and Microsoft were both born and raised in India.
Sundar Pichai moved to the US when he was in college. His entire working career and a bunch of his schooling was in the US.
Satya Nadella did the same.
As I said in my original reply, the ones who are more entrepreneurial or successful tend to move to the US (or at least used to).
> Failing is considered a good thing in the USA. And the USA is much more entrepreneurial than the rest of the world.
I wonder how many devs have been sacked for going out of their way and making stuff nobody on the business side asked for, or stuff that broke something along the way and ended up being a net negative, in the EU vs. the US and other parts of the world.
Might be loosely related to how much money the company has to burn and the nature of their work (e.g. probably not looked well upon in consulting where you have to convince clients to pay for whatever you've made), as well as how popular each type of work is in each part of the world.
In my own experience (an EU company, acquired by a US one), US developers are good at burning money, less good at actually delivering a reliable product.
But it can be due to terrible management hiring terribly.
I don't know about others but for me, I don't really care about business outcome. Why should I? It's the manager or the business side's job.
I work for an American company. 90% of my job is covering my ass because if I push for a novel idea and it fails, it's going to be a huge problem.
I've worked with experts from around the world. Past a certain level of competence, they are all pretty much the same. Once you become "pals", they all start suggesting improvements. Maybe you just socialize better with Americans.
Sure, at the highest levels you'll find these traits everywhere. But there is a reason these folks have ascended to the highest levels. What I'm saying is that you find it far more often in junior people in the USA.
I find it far more often in UK juniors; US juniors tend to be more concerned with moving to their next job than doing the current one.
Maybe we just have different cultural expectations.
US juniors are getting paid a lot more at their second job.
No, you found it in your self-selecting experience.
> I don't think it's because Americans are better at this -- I think it's cultural.
My experience is ANY delegation incurs a big loss in agency. I want to create a startup -> my employees are much less invested than I am. My remote (French) employees are even less invested. My Ukrainian employees are completely passive and I fired them. The more the distance, the less invested, the more passive.
It’s tempting to attribute this to your country’s qualities, but my experience is every country is a mixed bag.
> non-us workers are great if you give them a specific task, even a really hard task
...which is a lot like the LLMs! Maybe the skillset required to manage non-US workers is the same as for managing ChatGPT 6o, but the latter scales better.
> America has a much higher risk tolerance than the rest of the world.
America is one of the most risk-averse countries in the world, seriously. Americans are constantly scared: of losing their job, of physical injury, of everything, everywhere.
> Failing is considered a good thing in the USA
America punishes failure pretty hard. Some people's failures are ignored, but most people's failures are punished in pretty significant ways.
Yes, only a tiny minority of palatable failures is allowed in the US. For everything else, society will discard you like rubbish.
Only Americans exhibit creativity and drive? What nationalistic nonsense is this? Step outside of your bubble lol.
Americans are truly exceptional people. Or, at least, that's what I learned in American-made training on cultural differences. The funniest part is that the training touched on nationalism. You see, nationalism is a negative quality exhibited by people in other countries. Americans have a positive version of that: patriotism.
That's not at all what I said. I said I see it far more often in Americans than other cultures. And I have stepped out of my bubble many times. I've worked with a lot of people in a lot of countries.
They agree with me.
> What I mean by that is non-us workers are great if you give them a specific task, even a really hard task. But if you give them a nebulous problem, or worse, a business outcome, they tend to perform much more poorly.
I mean come on, how do you expect people to interpret this paragraph? I can only assume you are trolling, so I'm done here.
So does ChatGPT. Have fun changing French fries into salad.
And now you're talking to people who don't agree with you. Maybe you hadn't punctured your bubble as much as you believe.
It's easy to criticize that part, but his last sentence is spot on: the creative IT minds from those countries tend to migrate to places that match their entrepreneurial personality better, and those usually won't be China or India but rather somewhere in America or even Europe.
They go where they are told it's easier to get money for their ideas. This has long been the US. However it looks like it is changing in some fields lately.
I'm one of those offshore people that live in a cheaper place and works remotely for a US co.
The majority of people in the company are still in the US, and even for the East coast, the timezones are just annoying to work around sometimes. Either I need to do late days, or they have to do uber early mornings/SUPER late days, don't even get me started on West coast where the hours basically never match. And I'm in the closest timezone I can be for the US.
And there's also a cultural aspect to it. I simply work differently from how the US bosses expect, because my employer has to respect workers' rights if they want to hire people in the EU, unless they hire them as contractors (and even then there are many protections). I clock off at exactly 17:00, I never answer messages outside working hours, I don't do overtime or anything resembling it, etc. And yes, they don't pay me the same as I would get in the US, but it's really not that much lower, plus life is just cheaper, even here in the Netherlands. I get paid less in relative terms, but from what I can tell, other than the people getting paid obscene amounts, my quality of life is higher than most of my US counterparts'.
I've noticed my US colleagues are much more willing to waste away their lives for their employer as well, even if there's no real expectation for them to do so, and the business obviously prefers those kind of employees over the ones like me.
So there's still plenty of reasons to keep hiring US-based devs, from cultural to logistical. Maybe you guys should work on getting some actual worker protections first, though...
> but it's really not that much lower, plus life is just cheaper, even here in the Netherlands.
I lived in the NL for 4 years; it was many things, but cheap wasn't one of them. With the enormous taxes, high rents, and mediocre salaries, I don't think you can make the case that it's somehow cheaper than the US, unless you specifically mean Manhattan and Silicon Valley.
As some have said, it's not about being superior. It's about a common language, shared background, maybe overlaps in education, and avoiding cultures like those at Indian offshore companies, where there is a lot of churn and the one Sr person you "hired" is really farming the work out to multiple Jr people.
Timezone overlap is also a big one.
I agree with what you've written, but I've worked with colleagues in South America and Eastern Europe where none of those problems existed: folks spoke perfect English, people were incredibly motivated to do a good job, and they spoke up proactively when problems arose.
I have had issues with Indian outsourcers like you say (lots of churn, time zone hell, a culture of pretending everything is fine until release day and then saying "sorry, nothing works", etc.), but it's a bigger world now, and there are still lots of folks making half of US dev salaries where none of these problems exist.
My intuition says there are some stylistic differences. It seems like some development cultures have better results with more rigid computer-engineering sorts of tasks: high-granularity requirements and more straightforward goals, even if the tasks are really hard, deeply technical, and the goals are difficult. I think some are better at the more nebulous sort of tasks with a lot of flexibility. Both are really useful mindsets that seem much less useful if improperly applied.
Given, outsourcing is probably going to be hit-or-miss regardless of who’s doing it.
As an outsider I think Americans still have the upper hand in, for lack of a better term, work ethic.
A lot of that stems from a lack of job security. Stuff like suddenly being locked out of your work email/slack or being escorted out of company premises is largely unheard of in the rest of the world.
As a point of comparison: I'm a contractor based in a popular outsourcing destination. My contract is extended well over a month before it expires and I would need to do something particularly harmful to be let go just like that, as our client values continuity of services and will hold the agency accountable should that suffer.
Over here if a job listing mentions "US client" it typically means considerably more work for considerably more pay. Some go for that, others opt for more relaxed roles. I can't imagine having US jobs as the only option.
It’s amusing to see these comments as if American tech companies don’t already have offices all over the world.
Even a mid-size tech company I worked for had over a dozen small offices around the world to collect as many qualified developers as they could. They had some remote work too.
Still hired a lot of Americans. Thinking that remote work will be the end of American workers has been the driving force behind outsourcing pushes for decades, but it hasn’t worked that way.
> Still hired a lot of Americans. Thinking that remote work will be the end of American workers has been the driving force behind outsourcing pushes for decades, but it hasn’t worked that way.
The difference is that back then the project lead could explore outsourcing certain roles to India, EE, and LatAm, while today the VP can explore outsourcing the project lead roles to those countries. These countries have built up their own native tech talent, many of whom already bring more to the table than the typical American: they work longer hours, for cheaper, and often bring a lot more experience. I've seen companies that run only their sales teams with Americans, with the rest of the workforce shipped out.
Notably, India already has nearly 2000 GCCs (Global Capability Centers, mega-complexes of offices for foreign companies) set up, with that number only projected to increase as more mid-market firms expand. While many of them are just back offices, some, like Walmart's GCC, house the entire tech division: the CTO remains in the US, while the entire software team is in India. And while earlier the Indian team would have had to adjust their hours to the US's, now quite a few US-based employees have had to adjust their hours to India's.
All of that has been true for decades, except maybe the specific numbers.
It's not intellectual superiority. They've already offshored all the other jobs they can. If they could offshore my job, they would. But it's very hard to find reliable talent anywhere, much less offshore. It is easier to find the talent here, and there's more of it. Then there's the complexity of hiring, the timezones, language barrier, and all the other small complications that add up.
Once you have world-class experts all over the developing world, my job might disappear. But you need experience to get there, which they aren't getting, because they aren't here. It's privilege 101: if you have it, you get more of it; if you don't have it, you don't get any of it. We're very privileged to be high-value domestic workers.
And by the way, remote work has been a thing here for decades, yet the calculation hasn't changed. Our remote jobs are still safe.
You’ll get downvoted but in my experience, which may not be representative of the entire population, this is true.
A mid-size US tech company I know well went fully remote after a lot of insistence from the workforce, prior to the pandemic they were fully in office.
Soon enough they started hiring remotely from EU, and now the vast majority of their technical folks are from there. The only US workers remaining are mostly GTM/sales. I personally heard the founder saying “why should we pay US comp when we can get extremely good talent in EU for less than half the cost”. EU workers, on average, also tend to not switch job as frequently, so that’s a further advantage for the company.
Once you adapt to remote-only, you can scoop some amazing talent in Poland/Ukraine/Serbia/etc for $50k a year.
I think most programmers in the US simply don't realize how much they earn compared to the rest of the world.
I'm not talking about rural Chinese villages whose names you can't pronounce, or the stereotypical Indian call centers. I'm talking about highly educated programmers who can communicate fluently in English, in cities like Beijing or Munich. If people in SV knew how (relatively) little their counterparts make in these places, they'd be much more opposed to remote work.
And that was before LLMs. Today practically the entire planet can write passable English.
Yeah, for $100k or slightly less you can hire very good devs with 5+ years' experience in CN or DE, who often speak English at full professional proficiency without the help of LLMs too. I know because I currently work for a fully remote startup with people from both countries. For that kind of money, what can you get in the US, below-average juniors? Even the most clueless junior likely makes more in SV.
Flip that around. Junior devs in the US earning $100k is the anomaly. The fact that this is the case indicates the pipeline for competent developer talent is bottlenecked. Right now is still an amazing time to be in tech. The fact that the industry is so hungry for talent that it's paying such rates and expanding abroad in search of new supply is a sign of its health.
Agree. It is a harsh truth. Even good old outsourcing seems to be in resurgence. Lately I see large delegations of IT bodyshops at work, claiming 60% savings with AI plus a dev/support center in India.
It may or may not work, but per their plan it could crater 70% of the IT/software department by 2027.
It's interesting: AI seems to be enabling the middle in a positive way.
On the other side, we have started to find that the value of outsourcing to very low cost regions has completely disappeared.
I expect that the wages in eastern Europe will quickly rise in a way they never did in former outsourcing hotspots (India for example), because they are able to do similarly complex and quality work to westerners, and are now enabled by awesome translation tools.
The low quality for cheaper is now better served by the Artificial Indian.
There's a lot of nuance in these types of stories. First, the US is far from uniform in salaries. Salaries in large metro areas are different from smaller areas, and different again from CA/SV. Europe also isn't uniform, and in Western Europe, if a company doesn't move to all contractors, it will pay significantly more into a country's equivalent of social security. Personally, I would be uncomfortable having my entire development staff be contractors, as their interests are not exactly aligned with mine.
Amazing talent may end up cheaper in certain locales for a period of time, but if they are amazing they will become more expensive.
IMO, what's at risk are the entry/mid FAANG type jobs that pay a lot for what they are.
My previous employer stopped hiring in the EU (except for the UK, where they were based, and South Africa, where the CTO was from) because the labor laws there made it too difficult for them to fire people, which was particularly troublesome for them as they had almost quarterly layoffs. They switched back to hiring in the UK and US, where there are fewer worker protections.
The fixed exchange rates between EU countries massively drags down the international cost of a German software engineer, and US companies have yet to wisen up to that fact.
It doesn't matter what they promote, remote labor is an economic reality. It's not as if employers are going to forget they can offshore your job because you show up to the office 8am sharp every morning.
The moment they can replace you for cheaper, they will, whether you insist on working remotely or not.
The trend of offshoring came and went nearly two decades ago.
Time zone differences, language barriers and cultural differences proved insurmountable.
Hybrid remote seems to work quite well, on the other hand.
Being in the office won't stop offshoring anyway.
And being "superior" doesn't necessarily mean extraordinary coding skills. The vast majority of code to be written doesn't require that. What it requires is a combination of common sense and a good understanding of the underlying business. This is in short supply in many of the locations the jobs are being offshored to. But let's be honest, it was also in short supply in the corporate IT departments being offshored, though not quite to the same degree.
Yes. I think American programmers are at a local optimum for combining ingenuity and work ethic. You can get more ingenuity vs work ethic or the other way around elsewhere, but the American blend seems to be best.
In an ideal world, we'd have some sort of central system that businesses are bound by, in the interest of the common good, to employ domestic workers.
But alas, such a system is fundamentally impossible. Physics just won't allow it.
That's my argument against looking for a 100% remote job. Even if the company is happy with you now, eventually there will be new management that sees your job as low-hanging fruit for expense reduction.
They've been trying to offshore the work for most of a century now. There are still millions of software engineers in the US.
I have no comment on your strawmanning about programmers thinking they're geniuses or something.
But I've yet to meet an accountant who puts in their 40 hours a week and somehow manages to grow their backlog rather than shrink it.
Whereas bad programmers who will do that exist in spades.
Clearly the two professions are not identical.
That said, I've had two mind-bogglingly bad accountants on my payroll in the past who would have made $100K+ mistakes if we hadn't caught them, and we fired the fuck out of those dumbasses. One was American and one was Filipino.
When I was on projects with India, churn there was much higher than from EU sources.
You're getting downvoted, but IMO what you're saying is exactly true, and I've seen it happen.
In my experience, pre-2015 or so, offshoring was limited in its utility. Communication was a bitch because videoconferencing from everyday laptops wasn't quite there yet, and a lot of the favored offshoring centers like India had horrible time zone overlap with the US. And perhaps most importantly, companies as a whole weren't used to fully supporting remote colleagues.
Now, though, if I interact with the majority of my colleagues over Zoom/Teams/Meet anyway, what does it matter where they're sitting? I've worked with absolutely phenomenal developers from Argentina, Poland, and Ukraine, and there was basically no difference logistically between working with them and with American colleagues. Even the folks in Eastern Europe shifted their day slightly later so that we would get about 4 hours of overlap time, which was plenty for communication and collaboration, and IMO made folks even more productive because it naturally enforced "collaboration hours" vs. "heads down hours".
I understand why people like remote, but I agree, US devs pushing for remote should understand they're going to be competing against folks making less than half their salaries.
> ... should understand they're going to be competing against folks making less than half their salaries.
The lower salary can be offset by the lower need for money when you don't need to buy your lunch, you don't need that expensive car to get to work and so on. The time you used for commuting could instead be spent working for another company part time.
Regulation is for when businesses cannot regulate themselves.
In many larger companies also, nationstate threats and national security are a trending issue.
If you deal with a lot of PII, outsourcing your data processing pipelines to China isn't going to fly with Congress when you get subpoenaed for a round with Hawley.
I mean, while this might be true, Europe is full to the brim of developers who speak fluent English, and yet cost maybe a third of their US counterparts. Programming is really quite far from being a global market.
You think having a HQ in US would prevent a company from opening an office in another country?
Oh look, another person who thinks engineers are commodities, especially in a field as loosely defined and unregulated as software engineering.
They always ask “if a job can be done remotely, why not just hire a foreigner in a cheap place?” and never ask “if the foreigner were as good as the American engineer, why wouldn't they be getting paid the same as the American?”
It’s like they think companies are dumb and there is some undiscovered engineering arbitrage opportunity waiting to be tapped that will end the high 6 figure salaries of American software engineers forever.
And yet, since the 90s, software engineer salaries only go up. Millions of Indians flood the foreign markets, but American tech salaries only go up. Covid hits and everyone goes remote, but the salaries only go up. They always go up. American tech holds a supremacy over the world that you will likely not see the end of in your lifetime. There is so much money, so much risk taking, so much drive to dominate, other countries are generations behind.
But hey, keep doing what you're doing. Maybe you'll save a couple bucks while your competitors gobble up the market with far better engineering talent. Not "equivalent" talent: better talent.
> if the foreigner were as good as the American engineer, why wouldn't they be getting paid the same as the American
You should also ask whether you're paying American so much because they are so good, or are you paying them so much because rents in SF are so high?
> Covid hits and everyone goes remote, but the salaries only go up. They always go up.
Once again, did it go up because COVID infections somehow made American workers even better, or because lockdowns caused a mini tech boom while money printing tanked the dollar's value?
Do you really think companies are paying high tech salaries out of the goodness of their hearts? Like “oh this individual lives in a HCOL area, let’s pay them an appropriate amount” or “let’s share the spoils of this tech boom with our workers! $1.5million dollar bonus for everyone!”
>>Do those people really believe they're the most intellectually superior to the rest of the world? If a job can be done purely remotely, what stops the employer from hiring someone who lives in a cheaper place?
Capitalism dictates that a capable remote person will not keep working for a single employer, as that would be a waste of time.
They will work for multiple employers (overemployed and such), maximizing earnings, which will keep a constant gap between in-office and remote workers.
If remote work is cheaper for the owners, then why are the workers the ones promoting it?
The reports from the usual "offshoring centers" aren't exactly inspiring. It's a bloodbath over there.
Seems like the capabilities of current systems map onto "the kind of labor that gets offshored" quite well. Some of the jobs that would get offloaded to India now get offloaded to Anthropic's datacenters instead.
And some jobs, offshored or not, are just human frontend to datacenters.
How many of these jobs are getting offshored because of AI?
Language barriers, culture, and knowledge are some of the biggest challenges to overcome for offshoring. AI potentially solves many of those challenges.
> AI potentially solves many of those challenges
Isn't it exactly the opposite?
Language barriers: LLMs are language models, and all of the major ones are built in English; speaking that language fluently is surely a prerequisite to interacting with them efficiently?
Knowledge: famously, LLMs "know" nothing, are making things up all of the time, and only sometimes approximate "knowledge".
Nope, LLMs are quite functional in non-English languages. My partner regularly works with ChatGPT in Turkish.
My experience: hosted LLMs are very good, but even 30B models you run locally are quite poor (at least in Romanian). To some degree they still hallucinate words (they don't conjugate properly sometimes).
LLMs are really good with translations.
Google Translate is relatively awful. I have an intern now who barely speaks my native language and has very bad English, so we've been using it all the time, and it's always spot on, even for phrases that don't translate directly.
I bet I can do a good job communicating with you without speaking a common language.
I tested ChatGPT when it launched with my obscure native language, which is spoken by maybe 6 million people, certainly isn't easy to learn nor elegant in design, and doesn't have much in common with English.
It was absolutely flawless, down to the accentuation and little quirks that no tool before even came close to.
Parent is plain wrong and doesn't have a clue... that's what happens when folks skip learning foreign languages, the most important thing for life you can learn at school. Actively using multiple languages literally increases brain plasticity, much better than endlessly running sudoku or similar brain teasers.
Language barriers: The outsourced workers I know use AI to help them ask and answer questions about things in English they don’t perfectly understand because English is their second language. They use it to write better English from English with grammatical mistakes
Knowledge: True to an extent, but my assumption here is that it would be used to fill in gaps or correct misunderstandings. Not wholesale doing my job. At least that’s often how I use it
I worry things will be lost in translation (maybe they already have been), or the LLMs will fill in the gaps with wrong information, like some sort of weird telephone game.
That said, I have one ESL speaker on my team who uses LLMs a lot like that, and it's fine, so who knows.
It’s basically a solved problem for Japanese <-> English. There are some hiccups, but my coworkers who aren’t fluent in English do pretty good job. We have most of our Slack set up with LLM-auto translations, and it’s been a couple of years of smooth sailing at this point.
Found this article from last year saying IIT grads are facing the same grim outlook, as technology hiring in India for new grads has also dried up:
https://www.bloomberg.com/news/newsletters/2024-05-30/tough-...
So, that doesn't seem like a likely culprit unless you have some convincing evidence.
I think you are conflating two things. AI could be going after new entry-level jobs in software engineering. I am not a professional engineer but an accountant by trade (I like writing software as a hobby lol), and this article looks like evidence that IIT grads will have a harder time getting the jobs that AI is attacking. My comment rests on the fact that the report doesn't really reconcile with AI destroying entry-level jobs in accounting, but rather with this type of work being offshored to APAC/India. There are still new COEs being built up for mid-cap companies' shared services in India to this day, and I don't mean Cognizant and Wipro, but rather the end customer being the company in question, with really slick offices there.
My experience has been that cheap outsourcing to India is one of the main areas where AI is a real disruption. You can go straight to the Artificial Indian and get a better result than from an outsourced worker with AI tooling. It's one of the most obvious "I no longer need a person for this" experiences I have had since self-checkout.
I expect that other areas like accounting that use outsourcing are going to see similar effects in a few years.
I think the article doesn't really prove AI is the culprit, but I think this other article disproves that offshoring is. If offshoring were the culprit, why is it only affecting the most junior employees? I think the case is still open, but AI is the leading candidate.
This is exactly right.
The H1B pipeline has not decreased at all whereas millions of American workers have been laid off.
Maybe for software engineering but not for accounting. I've had to interface with many offshored teams and interviewed at places where accounting ops were in COE centers in EU/APAC.
Offshoring is parallel to H1B.
Happening simultaneously sadly.
Do you have any evidence of this? The rationale, as it's being presented, seems like a coping strategy or a conspiracy theory.
Do you have any actual evidence that supports the headline? The article does not. It simply mentions 13% decline in relative employment and then blames AI with no actual evidence. Given what I know about the current state of AI and off-shoring, I think off-shoring is a million times more likely to be the culprit than AI.
The entire accounting department at my firm has moved to Poland. That's nice for them, but as a US citizen it does mean the writing is on the wall. On the plus side, I learned a fun fact: Malgorzata is a more common name than I had ever imagined.
IT help was outsourced to India years ago. I expect them to be replaced with AI the minute their government stops handing the firm big contracts because I’ve never spoken to anyone from that group who was actually better than a chat bot.
Have you seen how the profession has worked post-SOX? Did you know 2016 was the peak year for accounting-student enrollment at universities in the States? I want you to think laterally about this.
Yup, 95% of the AI hype is to apply pressure on the labor market and provide cover for offshoring/downsizing.
Where is the evidence for this? Who is "applying pressure on the labour market"?
Every executive publicly saying obviously* false things like X job will be done by AI in 18 months is putting downward pressure on the labor market. The pressure is essentially peer pressure among executives: are we stupid for continuing to hire engineers instead of handing our engineering budget to Anthropic?
* - Someone should maintain a walkback list to track these. I believe recent additions are Amodei of Anthropic and the CEOs of AWS and Salesforce. (Benioff of Salesforce, in February: "We're not going to hire any new engineers this year." Their careers page shows a pivot from that position.)
Maybe it's a good time to ask for advice. Which IT job roles and companies are least vulnerable to offshoring? Defense contractors and the like?
Stuff that isn’t pure SaaS. Physical products that benefit from hands on interaction with customers, worksites, and other internal producers. Small and/or local businesses that want someone whose face they can see in person.
"One-person billion-dollar company", the new buzz phrase now that "democratize" has become so yesterday, is in my opinion just that: a buzz phrase.
Well good thing we have our best guys in gov't to address this /s
> Audit quality will continue to suffer
I wonder how much this actually matters? I understand that for an auditor, having a quality reputation matters. But if all audits from all firms are bad, how much would the world economy suffer?
Likewise for the legal profession, if all judges made twice the number of mistakes, how much would the world suffer?
The current system is not long term stable, and poor accounting is part of the reason more people don't know that. Even worse accounting would speed up the decline.
> Likewise for the legal profession, if all judges made twice the number of mistakes, how much would the world suffer?
Is this hyperbole? It seems like the real question being asked here is "would the world be worse off without deterministic checks and balances", which I think most people would agree is true, no?
I read it as assuming the deterministic checks and balances are already absent. We have the illusion of determinism but, in practice, audits (and justice) are mostly theatre as it is.
From that perspective, lowering the quality of something that is already non-rigorous might not have any perceivable effect. It's only a problem if public perception drops, but that's a marketing issue the Big 4 already have a handle on.
They don’t though. Marketing hits reality all the time. The Big 4 will survive, but you can only gaslight people for so long.
The all-in on AI shows a lack of imagination around innovation.
Then you would have to think twice about the company you may be giving money to (i.e., via the stock market and private bank loans). That's the whole objective of this. Every company is going to need an accountant in one way or another, and you don't really need to follow strict GAAP for management reporting (what else is EBITDA for, if anything?), but that's something completely different from saying: I made x dollars and spent y dollars, here is what I have and what I owe, please give me money.
At the end of the day it is a question of convenience/standards; if GAAP didn't exist, maybe firms could use a modified accrual standard that is wholly compliant with tax reporting, and that's it.
This past year, I’ve seen a lot of entry-level jobs quietly disappear. It’s not that people are getting laid off, it’s that no one’s hiring beginners anymore. What’s really missing isn’t just the jobs, it’s the chance to grow. If there’s nowhere to start, how are new people supposed to get in and learn?
Were there ever that many low-level Junior jobs though?
In my experience, almost everyone in college would get an internship Junior / Senior year and convert into an FTE after graduation. Those that were not so talented or not so lucky usually struggled to find work, taking many months to finally land a job. Most typically at a Booz Allen Hamilton type of place that was just throwing bodies into seats.
At all of my employers, I’ve never really seen any openings for Juniors, only Mid and Senior positions. The few Juniors we did bring on outside of an internship pipeline were either internal transfers, e.g. a SOC analyst given a chance or a nepotism type of hire.
I got out of school 15 years ago so its been a while now, but at that time there were a ton of junior roles.
I got a CS bachelors from a decent state school, nothing fancy, and everyone I kept in touch with had found an entry level role pretty quickly after graduation.
I did do an internship and had an offer from them, but the pay was pretty low and I really didn't want to move to where they were. It was a bit stressful turning that down early senior year without a backup yet, but I ended up with quite a few interviews and an offer before graduation.
That's funny because I've been rejected from Booz Allen so many times lol
Unfortunately, I think many of those jobs can also be attributed to general economic health after the low-interest-rate era.
Companies now need to get past pre-revenue and turn a profit, or, if you're an established company, you need to cut costs/increase margins against other economic headwinds (tariffs, inflation, government policies, etc.).
A Junior dev (and most devs onboarding) will typically require 6-8 months to start being able to meaningfully contribute, then there’s a general oversight/mentorship for a few years after.
Yes, they produce; however, I think a junior's market salary, plus the opportunity cost of higher-salaried mid- and senior-level devs spending time mentoring, is a hard pill to swallow.
The team I work on is stretched very thin, and even after layoffs (which management agreed went too far), it's like pulling teeth to get another dev to build things companies are begging for and are even willing to pay cash upfront for us to build.
If you’re getting into the current job market as a junior, you’ll likely need to go heavy in the buzzword tech, accept a position from a smaller company that pays substantially less, then in 1-2 years job hop into a higher paying mid level role (not to say 1-2 years makes anyone mid level imo)
The question is always: Is this simply the effects of a recession or AI?
No one wants to hire juniors, but when the alternative is too expensive they are an acceptable solution.
Or if you have some incentive structure where you can get more work out of them like consultancies.
With a market flooded with senior people accepting a paycut for a job why even attempt hiring juniors?
Sadly, they will just have to try harder. It is still doable especially for an American, and I'm not a fan of these doomsayers' prophesying. There is still hope because TikTok and video games are putting most young people in a trance.
TikTok and video games are also a more viable path to making $100,000 a month than any other professional path.
It is a career path, but 1) it only works for a small number of people (most don't earn anything like that), 2) it requires a special kind of personality and set of skills, 3) it is subject to the whim of algorithms, and 4) it requires brand-building over time but can be destroyed overnight for many reasons. Most "regular" jobs are much more stable.
Not so long ago people who played with programming and computers were wasting their time and potential, disappointed their parents and would have been better off getting a "regular" job.
Sure, but your comment does not address the facts I listed. Many jobs have been unstable over the past 20 years and remain so.
Um, no. Way more young people are making that kind of money by being a programmer or doctor or lawyer or nurse or actuary or something versus the minuscule number of people making any significant money on tiktok or video games.
AI is the popular cover excuse for layoffs.
I can’t think of a single job that modern AI could easily replace.
Video and graphic designers? I'm not saying this with optimism, but rather as an observation.
I know a handful of digital marketers who work for different marketing firms, and their use of GenAI for those tasks has exploded: usually tasks they previously had in-house people or freelancers do.
Now they just do it themselves.
No it can't replace graphical designers except in low stakes projects. Companies won't risk their projects on cheap AI with other larger costs on the line.
Instead of replace, think reduce. A graphical designer/lead might have farmed out some work to a contractor, with AI they now just do it themselves. It doesn't look like 'cheap AI' because they used the tools to augment their skills - not replace.
I don't know what it is about AI that makes people think in absolutes.
Already happening, and for a solid couple of years now. AI slop on billboards, buses, newspapers, etc. is a daily occurrence.
Obviously companies like Apple aren't going to cut corners straight away, but small and medium-sized companies? Already doing it.
>Companies won't risk their projects on cheap AI with other larger costs on the line.
Coca-Cola's Christmas ad had AI slop in it last year. That doesn't seem very cheap or low-stakes.
Did the cost reduction have more value than the change in ad outcomes?
I can think of a handful of people I work with who could be replaced by LLM. The hallucinations would be less frequent than the screw-ups the current humans make.
It could at least consolidate 5 of those people into 1 with increased efficiency.
I could consolidate those 5 with a trained chimp, but it wouldn't mean that chimps are about to overthrow mankind.
If general government policies adapted to a world with fewer available jobs, it wouldn't be an issue.
As it stands, our governments are continuing on as normal despite the introduction of a technology which has the capability to erase an enormous amount of jobs from the pool (certainly many standard office jobs).
A world with less work and no general change in policy (no UBI or similar scheme) is a world with more unemployed people, falling living standards, more crime, and more instability.
That is assuming that this time it's different (tm) - for the first time in the multiple millennia of human history
It doesn't have to replace people on a one-for-one basis to cause job losses. Let's say LLMs make your developers 50% more efficient. Doesn't it stand to reason you can lay off the lowest performing 33% and get the same amount (or more) of work?
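For what it's worth, the arithmetic behind that claim does check out. A quick sketch (the team size is illustrative; the 1.5x factor is the 50% efficiency gain from the comment):

```python
# If each remaining developer becomes 1.5x as productive, cutting a
# third of the team leaves total output unchanged.
team = 120                             # developers before layoffs
output_before = team * 1.0             # baseline: 1 unit of work per dev
output_after = (team * 2 / 3) * 1.5    # a third fewer devs, each 1.5x as productive
print(output_before, output_after)     # → 120.0 120.0
```

Which is exactly why the replies below split on whether companies would pocket that as headcount reduction or spend it building more.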
It also means that with lower costs your service becomes more attractive and maybe attracts more customers, so might even grow the number of workers.
This is known as Jevons Paradox https://en.wikipedia.org/wiki/Jevons_paradox
No, it does not stand, because you're thinking linearly. Companies can't simply drop 33% of employees, because there is competition. If a competitor uses both humans and AI, they will get more value from both. No AI has sufficient autonomy or capability to be held accountable for its mistakes.
There is less upside in reducing costs than in increasing profits. Companies actually want to increase profits, not just reduce costs that will be eaten away by competition. In a world where everyone has the same AIs, humans still make the difference.
Think about Meta. A simple example is Metaverse. Mistakes and competition? Then, look at their market capitalization over the last three years.
There hasn't been a single study yet that concludes there are any benefits to AI.
Either it's a cover for something, or people are a bit too overzealous in believing in gains that haven't materialized yet.
Have you ever been at a company where the limiting factor was finding stuff to build? I've never seen one personally. If there's any productivity increase, they'll just build even more stuff.
(And that's if we agree on a 50% increase; I'd say 5% is already generous.)
I think the initial job loss from AI will come from having individual workers be more productive and eliminate the need to have larger teams to get the same work done.
Eventually, maybe. Right now I see a lot more people wasting time with AI in search of these promised efficiencies. A lot of companies reducing headcount are simply hiding the fact that they are deprioritizing projects or reducing their overall scope because the economy is shit (I know, I know - but it feels worse than reported IMO) and that's the right business cycle thing to do. If you're dramatic and take the DOGE/MAGA approach to management, just fire everyone and the important issues will become obvious where investment is actually needed. It's a headcount 'zero based budget' played out IRL. The truth is, there is a lot of fat to be cut from most large companies and I feel like it's the current business trend to be ruthless with the blade, especially since you have AI as a rose colored scapegoat.
The way I like to describe it is that you can't go from 1 developer to 0 thanks to AI, but you might be able to go from 10 to 9. Although not sure what the exact numbers are.
For cost centers, maybe. If your development team or org is a revenue generator with a backlog, I don't see why the team would be trimmed.
I'll go further than you. Even if the team is a cost center, it may not make sense to reduce the headcount if there's still more work to do. After all, an internal team that just assists other teams in the company without directly creating value suddenly become more productive could in turn make the other teams more productive. Automatically reducing headcount after a productivity increase is like that effect where people drive more dangerously when wearing seatbelts.
I agree its a popular excuse, however unlike the blockchain craze there’s legitimate use cases of productivity improvements with AI.
And if you can (in some cases) substantially increase productivity, then logically you can reduce team size and be as productive with less.
With the right prompting, you can cut a copywriting team in half easily.
My business has one copywriter/strategist, for whom I've automated the writing part by collecting transcripts and brand guidelines from client meetings. Now she can focus on much higher-quality edits, work with other parts of the strategy pipeline, and ultimately take on more clients than before.
I can easily imagine a corp with 100 junior copywriters quickly reducing headcount.
The problem is that people (not sure if it's coping) present an argument that either it can perfectly replace someone 100% or it's a useless fad.
Even increasing average productivity by 10-20% is huge, and in some areas (like copywriting, as you mentioned) the gains are much bigger than that. Of course there's also the infinite-demand argument (i.e., demand will always outstrip any supply gains caused by AI), but evidence is never provided.
I used to hire someone who worked part-time from home to bookmark some of the key pages in thousands of PDFs, just so that I could jump directly to those pages instead of spending time finding them myself.
AI can now do it very cheaply, so there's no need to give that job to a human anymore.
I know I have dozens of tasks like that but I can’t seem to think of them when I’m wondering what to do with AI!
GNU parallel + pdfgrep has saved my ass too many times
My use case cannot be solved by just grepping for keywords, because that returns false positives. You need some intelligence (human or AI) to figure out whether a given page is the right one to bookmark.
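One way to structure that kind of pipeline, as a rough sketch: a cheap keyword pass over extracted page text narrows the candidates, then a pluggable "is this really the right page?" check prunes the false positives. The sample pages, keyword, and stand-in heuristic below are all made up for illustration; the second stage is where a human reviewer or an LLM call would actually plug in.

```python
def keyword_pass(pages, keyword):
    """Stage 1: indices of all pages that merely mention the keyword (grep-equivalent)."""
    return [i for i, text in enumerate(pages) if keyword.lower() in text.lower()]

def looks_like_summary_page(text):
    """Stage 2 stand-in: demand supporting context, not just the keyword itself."""
    required = ("total", "balance")
    return all(term in text.lower() for term in required)

def pages_to_bookmark(pages, keyword, confirm=looks_like_summary_page):
    """Keyword pass, then the smarter confirmation check."""
    return [i for i in keyword_pass(pages, keyword) if confirm(pages[i])]

pages = [
    "Cover letter mentioning your statement in passing.",
    "Statement summary. Total due: $412.10. Previous balance: $390.",
    "Ad: open a new statement savings account today!",
]
print(keyword_pass(pages, "statement"))      # → [0, 1, 2]  (grep alone: two false positives)
print(pages_to_bookmark(pages, "statement")) # → [1]
```

Swapping `looks_like_summary_page` for an LLM classification call is exactly the "some intelligence" step the comment describes.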
The jobs aren't being taken by AI. The capital that used to fund those positions is instead being diverted into AI initiatives.
This is what HasanAbi mentioned the other day. Betting on AI to do a job (especially if it fails to replace it) is a double economic whammy: you get rid of the people who originally did the job, and then you don't have funding for other things (including other people) because you need to recover the AI costs.
Collective delusion about AI (or similar craze) can be large enough to actually tank the economy.
> AI is the popular cover excuse for layoffs.
That I agree with. The problem with the assertion that AI took all these jobs is that the baseline used for assessing the losses sits right at the peak of epic programmer hiring.
> I can’t think of a single job that modern AI could easily replace.
That I am less sure of.
> I can’t think of a single job that modern AI could easily replace.
It could replace many workers, perhaps sacrificing quality, but that's considered quite acceptable by those making these decisions because of the huge labor cost savings.
It also could raise the quality of work product for those working at a senior level by allowing them to rapidly iterate on ideas and prototypes. This could lower the need for as many junior workers.
Have you taken a Waymo yet?
no
Poor-quality translation is one that actually is impacted. Maybe some graphic artists.
There are tons of internship-like positions where the employer just wants someone to prepare powerpoints and stuff of that nature, that they then edit because the intern doesn’t do a very good job at powerpoint.
Organizations are choosing to eliminate workers rather than amplify them with AI because they'd rather own 100% of a diminished capacity than share the proceeds of a vastly increased one. That's the rent-extraction model consuming its own productive infrastructure. The Stanford study documents organizations systematically choosing inferior economic strategies because their rent-extraction frameworks cannot conceptualize workers as productive assets to amplify. This reveals that these organizations are economic rent-seekers that happen to have productive workers, not production companies that happen to extract rents. When forced to choose between preserving rent-extraction structures and maximizing value creation, they preserve extraction even at the cost of destroying productive capacity. So what comes next?
> So what comes next?
When you don’t need as many people because of automation, you also don’t need them to fight your wars; you use drones and other automated weapons. You don’t need things like democracy, because that existed to keep people from turning to revolution, and automated weapons have solved that problem. So then you don’t really need as many people anymore, and you stop providing the expensive healthcare, food production, and water to keep them all alive.
Yeah, this is what we are seeing today. Also, it's not just junior jobs going: according to Amazon, they are using AI to get rid of expensive senior employees while actually holding onto juniors who use AI tools.
We have seen a lot of use of H-1B and outsourcing despite the massive shortage of jobs. We're also seeing lots of fake job sites filled with AI-generated fake openings and paid memberships for access to "premium jobs."
They're using ICE to effectively pay half the country to murder the other half, but the ICE budget is limited so that automated systems can then gun down the ICE community to replace 99.9% of humans with machines.
Ultimately this is great for Russia because they'll still be able to invade even if they have only 300 soldiers left in their military, after they hit a low orbit nuke blast to shutdown the Ai US, basically only Melania swinging her purse at the troops will be one of the few left alive to resist.
> Ultimately this is great for Russia
Wat
> When you don’t need as many people because of automation
You want to sell your stuff to someone, tho. So, unless you find a way to automate consumption as well, you do need people and lots of it.
Dark. But I can't think of a way to rebuke it…
There's likely a slippery slope fallacy in there somewhere (I hope). If interested in the (not so) sci-fi aspects of automated weapons and their ramifications, I often plug Daniel Suarez's great Kill Decision talk and book: https://www.youtube.com/watch?v=pMYYx_im5QI
It's deterministic and assumes that those in power are one uniform force. It's still possible to push for a different future.
This works even better with a declining fertility rate!
The current wave of automation (LLMs) aren't capable of "fighting your wars".
Ukraine and Russia are already employing low-cost recon and hunter-killer drones.
Israel is already using sniper drones in Palestine that use AI to fly around and headshot whatever moves, as well as AI to select its bombing targets.
the future is now, isn't it exciting?
Why does Mr Beast dig wells in Africa?
To launder his reputation? To distract that the source of his wealth is selling gambling and sugar to children? To feel better about himself?
palantir and anduril :))
Your claim is not supported by the paper:
"Furthermore, employment declines are concentrated in occupations where AI is more likely to automate, rather than augment, human labor."
No mention of rent-seeking.
No evidence they are being economically short-sighted.
> they'd rather own 100% of diminished capacity than share proceeds from exponentially increased capacity
They're using cheap AI to replace more expensive humans. There's no reason to think they are missing some exponential expansion opportunity that keeping those humans would achieve, and every reason to think otherwise.
I hope AI fuels a re-independence of many industries by making business software discovery and integration cheap and easy. Every plumber with more than 10 years' experience should own their company, with low-cost software running it. The efficiency gains from consolidating resources a la private equity for marketing and bookkeeping go away in an AI-powered world.
> So what comes next?
Feudalism.
That's optimistic.
Ancient Egypt (elite in pyramids, slaves otherwise) is more likely.
>> Feudalism.
> That's optimistic.
> Ancient Egypt (elite in pyramids, slaves otherwise) is more likely.
No, you're both being optimistic. The feudal lords had a vital need for serfs, and the pharaohs for slaves.
It'll be more like elite in pyramids, everyone else (who survives) lives like a rat in the sewers, living off garbage and trying to stay out of sight. Once the elite no longer need workers like us, they'll withdraw the resources they need to live comfortably, or to even live at all. They're not making more land, and the capitalist elite have "better" uses for energy than heating your home and powering your shit.
I would say that’s still being optimistic. The end will come when Baidu, Facebook and Microsoft’s AI engage in total war against each other for survival while we watch in horror and incomprehension. The elites are just as fucked as anyone else.
Is that what you think of yourself?
> This reveals that these organizations are economic rent-seekers that happen to have productive workers, not production companies that happen to extract rents.
Your perspective is so contrary to reality I'm actually not sure if you're trolling or not. There is no such thing as pure value creation. In order for labor to create value, it must be aligned with the company's value proposition, i.e. what convinces customers to pay for the value that the company provides. Half the people off in the corner building something that they think is valuable are actually building something that customers do not care about, won't pay more for, and increase the company's maintenance burden.
Keeping labor aligned with value creation is the whole game. If it wasn't, then all these rent-seeking-first enterprises would have fired their layers and layers of middle management a long time ago; the company needs to pay them a salary (reducing profits) but they don't write any code / "produce any value". All these massive corporations would have moved to a flat management hierarchy a long time ago, if labor was truly capable of aligning itself to improving value generation; and if you think there's some nefarious/conspiratorial reason why massive corporations don't do that, then most of them would have been out-competed a long time ago by co-ops with flat management hierarchies that could produce the same value at a lower price due to lower administration costs.
Needing to hire employees is a necessary evil for businesses. Aligning employees is hard. Motivating employees is hard. Communication is hard. Businesses do not exist to provide people with jobs, which are created out of sheer necessity, and are destroyed when that necessity goes away.
You got there in the end. Hiring people is a necessary evil, and AI allows companies to massively reduce the necessity of that evil. Having done budgeting and forecasting for a wide range of organizations: companies will do anything to avoid hiring an employee. I've seen companies spend 3x what an employee would cost just to avoid the increased headcount.
The forces of capital do not want to share a single penny and are solely focused on getting to a place of rent.
Probably because there are no free markets anymore, it's all monopoly, cartel, and/or regulatory capture.
Competition would fix a whole lot of problems.
It's really just the american companies deciding to do this. Seems like glorified suicide, tbh
What data or special insight do you have as to whether amplifying or eliminating is actually productive?
This argument is vacuous if you consider a marginal worker. Let's say AI eliminates one worker, Bob. You could argue "it was better to amplify Bob and share the gains". However, that assumes the company needs more of whatever Bob produces. That means you could also make an argument "given that the company didn't previously hire another worker Bill ~= Bob, it doesn't want to share gains that Bill would have provided blah blah". Ad absurdum, any company not trying to keep hiring infinitely is doing rent extraction.
You could make a much more narrow argument that cost of hiring Bill was higher than his marginal contribution but cost of keeping Bob + AI is lower than their combined contribution, but that's something you actually need to justify. Or, at the very least, justify why you know that it is, better than people running the company.
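To make that narrow argument concrete, here is a toy numerical comparison of the two margins (every number here is hypothetical, chosen only to illustrate the structure of the argument, not taken from any real company):

```python
# Hypothetical numbers illustrating the marginal-worker comparison:
# hiring an extra worker (Bill) vs. keeping an existing worker (Bob) plus AI.
wage = 100_000                 # annual cost of one worker (Bob or Bill)
ai_cost = 10_000               # annual cost of AI tooling

bill_output = 105_000          # value an additional hire Bill would add
bob_plus_ai_output = 150_000   # value Bob produces when amplified by AI

# Margin 1: hiring Bill -> his marginal value minus his cost
bill_net = bill_output - wage                        # 5,000: barely worth it

# Margin 2: keeping Bob + AI -> combined value minus combined cost
bob_ai_net = bob_plus_ai_output - (wage + ai_cost)   # 40,000: clearly worth it

print(bill_net, bob_ai_net)
```

The point is that both decisions can be individually rational at the margin; declining to hire Bill doesn't by itself prove the company is forgoing an "exponential" opportunity.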
Late Stage Capitalism. The real paper clip maximizers are the Silicon Valley and Wallstreet bros we met along the way.
ChatGPT (might have) made a few superfluous email jobs obsolete and the people responding to this comment are acting like we’re standing on the threshold of Terminator 3.
Don't underestimate how much of the economy is "superfluous email jobs". Have you seen how stupid the average person is?[0] These people need jobs too.
[0] I was going to mark this as sarcasm, but then I remembered that the US elected Donald Trump as president, 2 times so far, so I'm going to play it straight.
So instead of training and educating these people, you want them to keep that "obsolete" job?
Aren't we a little late for that? Because by that logic, we would still be using postmen to send messages.
Implying "superfluous email jobs" isn't a significant portion of the international job market. Most people that work in offices fit under this definition.
> Most people that work in offices fit under this definition.
Not at all. The majority of office jobs can't be automated by current generation LLMs, because the jobs themselves serve either creative or supervisory functions. Generative AI might be able to fill in creative functions one day, but the whole point of a supervisory role is to verify the status of inputs and outputs. A lot of these roles already have legal moats around them (e.g. you can't have an LLM sign financial statements), but even if we assume that regulations would change, the technical problem of creating supervisory "AI" hasn't been solved; even if it was, implementation won't be trivial.
Lots of people are sceptical, but I cannot imagine a use for entry-level positions anymore. At my work everybody got to calling AI "the intern," which is not confusing, because we do not have, and have no use for, interns.
So what are the long-term risks when senior staff leave and need to be replaced with new seniors who have never seen the existing work, vs. promoting younger people who know the projects and practices?
Long... term... risk? What is this archaic concept? I just want to get into a position where I can extract rent from children when they grow up. Why do I need to worry about this so called long term risk?
>I just want to get into a position where I can extract rent from children when they grow up.
That's called social security.
These days normally what corporates do is bring in people from abroad who have the skills.
The idea that companies would seek to train up domestic workers if there is a skill shortage is outdated today – even if theoretically this might be good for the domestic workforce. It's just cheaper and easier to import the skills needed.
I think I disagree. It sounds like companies that do this are average at best and have nothing of their own; otherwise it would not be possible to just replace the workers. If your company is average, it does not attract real talent.
But maybe my comment did not cover every discipline. I was mostly thinking about software development.
I'm not talking about "replacing" workers. I'm talking about hiring the most qualified which in a global talent pool will almost always be someone from abroad. There's no reason for a company to hire a relatively unskilled graduate domestically when they could hire someone with more experience from abroad.
If you look at the workforce makeup of many large tech companies today there's a reason Indian and East-Asian talent is so overrepresented, and it's not because they lack talent. It's because if you actually want to hire the best of the best you're not going to bring in juniors from the domestic workforce and train them up.
> I'm not talking about "replacing" workers. I'm talking about hiring the most qualified which in a global talent pool will almost always be someone from abroad. There's no reason for a company to hire a relatively unskilled graduate domestically when they could hire someone with more experience from abroad.
What I meant by replacing was that when someone retires, you hire a new worker and hope they immediately produce similar value. If the company has unique skill requirements, it is unlikely that the new hire provides similar value, at least immediately. How long that takes is an open question.
Alternatively, you have already hired a junior _on top of_ the existing senior person. The junior doesn't have as good a value/cost ratio, but they still contribute, and that contribution grows over time.
And when the senior then retires, you are comparing the value of this trained junior against the risk of a random new hire. You are never replacing the retired person directly with a "junior"; the junior is just an additional, less essential investment.
> If you look at the workforce makeup of many large tech companies today there's a reason Indian and East-Asian talent is so overrepresented, and it's not because they lack talent. It's because if you actually want to hire the best of the best you're not going to bring in juniors from the domestic workforce and train them up.
I haven't yet heard of a company that actually produced a valuable product because of it. Usually the flow has been that some Western company created a successful product, and then later the workforce was changed. And almost always the quality decreased. But it doesn't matter, because the product already got decent market share, and it takes years for revenue to drop because of that.
> I haven't yet heard of a company that actually produced a valuable product because of it. Usually the flow has been that some Western company created a successful product, and then later the workforce was changed. And almost always the quality decreased. But it doesn't matter, because the product already got decent market share, and it takes years for revenue to drop because of that.
Are you talking about out-sourcing or hiring the best talent from abroad with work-visas? I guess I don't understand why you would think domestic labour is inherently better? I understand this perspective with outsourcing because it can be difficult to maintain quality when outsourcing to a team in another country, but there's plenty of examples of successful tech companies hiring the best talent from abroad, no? I'd argue this is almost the norm for large tech companies in the US.
And for what it's worth, I'm not saying I agree with this. I'm just saying that if you're a company genuinely interested in hiring the best of the best it makes no sense to limit your Labour pool to the domestic market, and it rarely makes sense to consider graduates unless they are truly exceptional given their relative lack of experience.
In the past companies were simply forced to hire and train domestically because countries like India and China didn't have the education or technological access to compete with the average graduate in the US. Today most of the world is online and education is decent enough that the best talent is far less concentrated to a few geographical areas.
In a pure meritocracy (which the US is close to) in a globalised world, it's quite rare that someone geographically local will be the best person available for any given role.
But you can domestically hire that junior too; I am just arguing against that pure replacement model. You can pick the top talent from juniors as well. Otherwise, if nobody ever hires juniors, at some point you run out of workers entirely, because there is a portion of everyone's life when they simply cannot get a job in their field. Young people won't invest in the field if the first job requires that you are a magician or have 10+ years of work experience that you either can't have or must acquire in your free time.
Anyway, big tech doesn't work as a generic example, because they have far more resources, so their approach cannot be generalized.
The US is also an exception because the only language expected is English. In other countries you may need two languages.
The beauty is that, as mere workers, it's not our problem. Let the ruling class figure it out.
I think you are misunderstanding the purpose of interns.
Free labour, easy to dispose of labour? Even though it’s low quality at that price point it can’t be beat. That’s the only use I have ever seen.
Oh he is understanding the abuse perfectly
Traditionally, interns exist as a well-vetted and well-shaped supply of labor (which is very difficult to find through the traditional hiring process). The work they complete is secondary. Are companies going to stop needing good employees? Is nobody going to need to work in 40 years when all the current employees are phased out?
If interns ever existed like that, they haven't in the 21st century. They are free, disposable, short-term labour to be ripped through.
Also, what company do you know of that thinks in 40-year terms? The longest budgeting process I've been part of was 3 years, and the year-3 numbers were understood to be pure wish fulfilment.
I'm not sure when interns stopped being part of the hiring chain, but I remember almost universally circa 2010 or 2011 in the great recession, almost every single intern and co-op student I knew were told bluntly there would be absolutely no hiring of them into full time roles.
Up until the great recession it was pretty normal, almost expected, that a good internship turned into a good job offer.
I'm guessing that just never reversed.
Every day when I am out in the city, I am amazed by how many jobs we have NOT managed to replace with AI yet.
For example, cashiers. There are still many people spending their lives dragging items over a scanner, reading a number from a screen, holding out their hand for the customer to put money in, and then sorting the coins into boxes.
How hard can it be to automate that?
They don’t need AI for that, they just cut staff to the bare minimum and put in self checkouts.
And then they hire supervisors, helpers and checkout guards/security. I hope it at least makes sense on paper.
Not AI - but businesses have tried offshoring them - https://www.npr.org/2022/09/30/1126167551/would-you-like-a-s...
>How hard can it be to automate that?
Self checkout has been a thing for ages. Heck in Japan the 711s have cashiers but you put the money into a machine that counts and distributes change for them.
Supermarkets are actually getting rid of self checkouts due to crime. Surprise surprise, having less visible "supervision" in a store results in more shoplifting than having employees who won't stop it anyway.
I've seen it in Japan, the machine just handles the money. But you still need a human to scan things/check to make sure things are scanned correctly.
It’s also just resulting in atrocious customer experience.
I can go to Safeway or the smaller chain half a block away.
The Safeway went all in on self checkouts. The store is barely staffed, shelves are constantly empty, you have to have your receipt checked by security every time, they closed the second entrance permanently, and for some reason the place smells.
The other store has self checkouts, but they also have loads of staff. I usually go through the normal checkout because it's easier, and since they have adequate staff and self-checkout lines, it tends to be about the same speed too.
End result is I don’t shop at Safeway if I can avoid it.
The hard part is preventing theft, not adding numbers.
Cashiers should not, and will not prevent theft. They're not paid nearly enough to get in danger, and it is not their job.
I'm sure you can find videos of thefts in San Francisco if you need a visual demonstration. No cashier is going to jump in front of someone to stop a theft.
True, but having a cashier standing there waiting to scan your items will prevent most normal people from stealing. Sure, some will brazenly walk right past with a TV on their shoulder, but most people won't.
If there's no cashier and you're doing it yourself, a whole lot more people will "forget" to scan a couple items, and that adds up.
There's usually a security person or two in the store, looking over the self checkouts. I agree that job prevents a lot of people from becoming opportunistic thiefs, but I'm making a distinction between cashiers and security. Today the store needs both.
Pretty sure if a "security person" worked so well, Walmart wouldn't be severely reducing self checkouts at their stores to Walmart Plus members only.
That might be regional, then. I wouldn't say $COUNTRY is exactly a high-trust society, but it's not quite that bad for us over here.
I haven't observed this happening here (Toronto, Canada).
That's not the type of theft they were talking about. Rather, self scanners purposely not scanning items to get them for free, etc
I had a roommate in college who used to stuff containers of beef into produce bags full of kale, and weigh that on the self-service scanner.
A thief doesn't know what a cashier will do. And a cashier is an eye witness or can yell "hey stop them!"
You're doing the all or nothing fallacy. The fact that a cashier does not prevent all thefts does not mean a cashier does NOTHING for theft.
> The fact that a cashier does not prevent all thefts does not mean a cashier does NOTHING for theft.
Yes, for one thing, it ignores that a very large share of retail theft is insider theft, and that cash handling positions are the largest portion of that.
Cashiers absolutely do something for theft.
They absolutely do. It’s not the cashiers being security, it’s having adequate staffing making people less likely to steal. Its not stopping crimes that have occurred it’s just reducing opportunistic theft.
Is the theft really happening at the checkout?
And if so, why can't we detect it via camera + AI?
You detect someone leaving your store with a 4€ item. What then?
Use your army of lawyers to help file misdemeanor theft charges of course. Then get one out of 100 of the defendants who actually has something to lose to pay big damages to fund it to happen again the next time.
You ban them from coming back in after a few warnings. Stores seem really icy about facial recognition right now though. The optics are pretty bad (a play on words pun?)
Who is going to stop them from coming back in?
There have been a few stores that won't really stop people from coming back, they just quietly file charges, and then the person finds out next time they get pulled over or something along those lines.
No one, they get automatically flagged, and then someone asks them to leave. Or the police are called and they are trespassed.
You install AI-powered turnstiles at the entrance. Come on haven't you seen or read any dystopian media? :)
There are stores that are abandoning self-checkouts completely and going back to cashiers as the theft rose to unsustainable numbers.
Checkouts are often the only egress points, so having a pair of eyes on them does have some effect compared to having none at all.
Detecting theft does not mean theft is prevented. You then need the government to prosecute, and impose sufficient punishment to deter theft. This is not cheap, nor a given that it will happen.
So take the broken god awful experience of self checkout and add another layer of “I think you did something wrong so now you have to stand around waiting for an actual person”?
No thanks.
Amazon could not do it. They claimed they could, but it was just Indians overseas watching the video and tabulating totals.
You mean ordering kiosks and self-checkout machines? We have automated it, it's just not everywhere has implemented it.
The one I'm desperately waiting for is serverless restaurants—food halls already do it but I want it everywhere. Just let me sit down, put an order into the kitchen, pick it up myself. I promise I can walk 20 feet and fill my own drink cup.
You seem to like self-checkout processes. I don't. I avoid any place where I have to interact with a screen. Be it a screen installed on-premise or the screen on my phone. It is not a relaxing experience for me.
Japan does this a lot of places, and it makes the experience much easier.
And I think the entire mid and low range restaurants could replace servers with a tablet and people would be happier. I'm not sure how it doesn't make more money for the restaurant too, making it so easy to order more during a meal.
Self check-out machines aren't automation.
There used to be two humans standing at the cash register; now, because of software, automatic change machines, and cameras, there is only one. One of those humans' jobs got automated.
Call it what you like but replacing the work of humans one for one is difficult and usually not necessary. Reformulating the problem to one that machines can solve is basically the whole game. You don't need a robot front desk worker to greet you, you just need a tablet to do your check in.
I do their work. No work got automated.
This. And I do their work a lot more slowly because it's not my regular job, and I actually already had to do some of the work (getting the items out of my trolley and onto the conveyor). Now I stand there forever fumbling with barcodes, trying to get bags to stay open, switching between getting items out of the trolley and scanning. The old checkout system is so much more efficient when you are buying anything more than a couple of items at a time.
Yeah this is like saying Aldi “automated” cart return. They didn’t, they got every shopper to do the work themselves. Automated cart return would be if you just gave the cart a little “giddyup!” when you were done and it found its way home. Or those cart conveyor belts at Ikea, it’s only part of the process but that part is automated.
[edit] Aldi did automate the management of getting shoppers to do that work, because there’s not a person standing there taking and handing out quarters, but (very simple) machines. Without those machines they might need a person, so that hypothetical role (the existence of which might make the whole scheme uneconomical) is automated. But they didn’t automate cart return, all that work’s still being done by people.
Many "productivity improvements" in the modern era are just externalizing the problem. It's like saying I automated recycling by dumping all my unsorted stuff in my neighbors bin.
Do you consider all forms of "self-service" to not be automation of a job that previously required an additional human?
Like checking in at the airport via kiosk/app for example. Do you consider that to be "doing the work" of the desk clerk? Or say ordering at a restaurant by scanning a QR code, in both cases I have to look at the menu, decide what I want, and input my order into a system. But with the QR code there's no longer a human necessary.
Serverless restaurants have been common in Australia for decades. You just get a buzzer and then need to go pick up your food when it is ready. There's a single person behind the bar to take orders and pour beer/wine/soda.
I don't use self-checkouts at the stores, nor would I eat at automated or self-service restaurants. I have a kitchen for that already.
But it's good if both are available, as apparently there will be customers for both.
Seems like a perfect option for (non-humanoid) robots: bring me my food. You can still keep people in the kitchen for a while, but servers in many restaurants are not really needed.
Pharmacists are my favourite. They're a human vending machine that is bad at counting and reading. But law protects them. Pretty good regulatory capture.
Please understand what pharmacists actually do and _why_ AI is not a good replacement for them yet, unless you want to die of certain drug interactions.
Hahaha, this drug interaction nonsense is what online people tell each other. It isn't even real. It's like "nice trigger discipline" or "the postal police don't fuck around" and shit like that. Just something that is not true but for some reason is internet urban legend.
Retail pharmacists are human vending machines. You don't need AI. It's a computer prescription written by a far more qualified human which is then provided to a nigh-illiterate half-wit who will then try as hard as possible to misread it. Having then misread it, the patient must then coax them out of their idiocy until they apologize and fulfill what's written.
Meanwhile some Internet guy who gets all his information from the Internet will repeat what he's heard on the Internet. I know this because anyone passingly acquainted with this would have at least made the clarification between compounding pharmacists and retail pharmacists or something.
> trigger discipline
> In 2011, firearm injuries accounted for ... 851 deaths from the accidental discharge of firearms [in the United States].
https://jamanetwork.com/journals/jamainternalmedicine/fullar...
Pharmacists are a fantastic example. My pharmacy is delivered my prescription by computer. They text me, by computer, when it's ready to pick up. I drive over there … and it isn't ready, and I have to loiter for 15 minutes.
Also, after the prescription ends, they're still filling it. I just never pick it up. The autonomous flow has no ability to handle this situation, so now I get a monthly text that my prescription is ready. The actual support line is literally unmanned, and messages given it are piped to /dev/null.
The existing automation is hot garbage. But C-suite would have me believe our Lord & Savior, AI, will fix it all.
The only way AI could fix this is if it said "replace the pharmacist with a vending machine and hire a $150k junior engineer to make sure the DB is updated afterwards", which, you never know, Claude Opus 4 might suggest. At that point, we'll know AGI has been achieved.
High interest rates + tariff terror -> less investment -> fewer jobs
But let's blame AI
Let's read the paper instead: https://digitaleconomy.stanford.edu/wp-content/uploads/2025/...
It presents a difference-in-differences (https://en.wikipedia.org/wiki/Difference_in_differences) design that exploits staggered adoption of generative AI to estimate the causal effect on productivity. It compares headcount over time by age group across several occupations, showing significant differentials across age groups.
Page 3: "We test for a class of such confounders by controlling for firm-time effects in an event study regression, absorbing aggregate firm shocks that impact all workers at a firm regardless of AI exposure. For workers aged 22-25, we find a 12 log-point decline in relative employment for the most AI-exposed quintiles compared to the least exposed quintile, a large and statistically significant effect."
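For intuition, here is a minimal simulated sketch of the difference-in-differences idea (toy data and toy numbers; this is not the paper's actual design or code). A shock common to both groups cancels out of the double difference, leaving only the treatment effect:

```python
# Toy difference-in-differences: two groups (AI-exposed = treated,
# least-exposed = control), observed before and after AI adoption.
# DiD = (treated_post - treated_pre) - (control_post - control_pre),
# which nets out any shock that hits both groups equally.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

true_effect = -0.12   # a 12 log-point decline, as in the quoted result
common_shock = 2.0    # macro shock hitting BOTH groups after adoption

treated_pre  = 10.0 + rng.normal(0, 0.5, n)
treated_post = 10.0 + common_shock + true_effect + rng.normal(0, 0.5, n)
control_pre  = 8.0 + rng.normal(0, 0.5, n)
control_post = 8.0 + common_shock + rng.normal(0, 0.5, n)

did = (treated_post.mean() - treated_pre.mean()) \
    - (control_post.mean() - control_pre.mean())
print(f"DiD estimate: {did:.3f}")  # should be close to -0.12
```

Note the key assumption baked in here: the common shock affects both groups identically (parallel trends). If the macro shock hit the treated group harder, the double difference would absorb part of it and misattribute it to treatment.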
Interesting technique, that DiD. But it assumes the non-treatment factors would affect the treatment group and the control group equally, i.e. that their effect scales linearly. If the treatment group was more exposed to the non-treatment factors, an increase in those factors could account for a larger difference than the one seen at time 1. I don't know which other industry they used as the control group, but interest rates could have a superlinear effect on tech compared to it, so the difference of differences could be explained by the non-treatment factor too.
I appreciate the link to differences in differences, I didn't know what to call this method.
The OP's point could still be valid: it’s still possible that macro factors like inflation, interest rates, or tariffs land harder on the exact group they label ‘AI-exposed.’ That makes the attribution messy.
Those fixed effects are estimated separately for each age group, controlling for that.
pg. 19, "We run this regression separately for each age group."
Were entry level jobs the first to go in earlier developer downturns?
Is AI being used to attempt to mitigate that effect?
I don't think their methods or any statistical method could decouple a perfectly correlated signal.
Without AI, would junior jobs have grown as quickly as others?
I'm not trying to be clever here. I'm trying to be publicly stupid in an effort to understand.
You really do have to account for why this is mainly happening in industries that are adopting AI, why it's almost exclusively impacting entry-level positions (with senior positions steady or growing), and why controlling for broad economic conditions failed to correct for this. I doubt very much that these three Stanford professors would be blindsided by the concept of rates and tariffs.
My personal theory is that the stock market rewards the behavior of cutting jobs as a signal of the company being on the AI bandwagon. Doesn't matter if the roles were needed or not. Line goes up, therefore it is good.
This is a complete reversal in the past where having a high headcount was an easy signal of a company's growth (i.e. more people, means more people building features, means more growth).
Investors are lazy. They see one line go down, they make the other line go up.
CEOs are lazy. They see line go up when other line goes down. So they make other line go down.
(I am aware that "line go up" is a stupid meme. But I think it's a perfect way to describe what's happening. It is stupid, lazy, absurd, memetic. It's the only thing that matters, stripped of anything incidental. Line must go up.)
Juniors become seniors.
If we replace all juniors with AI, in a few years there won't be skilled talent for senior positions.
AI assistance is a lot different than AI running the company and making expensive decisions. While it could progress, bear in mind that some seniors continue to move up the ranks. Will AI eventually be the CEO?
We all dislike how some CEOs behave, but will AI really value life at all? CEOs have to have some place to live, after all.
The AI will at least be cheaper than a CEO; it might also be more competent and more ethical. The argument against making a Large Language Model the CEO seems to mostly be about protecting the feelings of the existing CEO. Maybe the Board should look past these "feelings" and be bold?
I'll re-explain.
A human CEO might do morally questionable things. All do not, of course, but some may.
Yet even so, they need a planet with air, water, and some way to survive. They also may want their kids to survive.
An AI may not care.
It could be taking "bad CEO" behaviour to a whole new level.
And even if the AI had human ethics, humans play "us vs them" games all the time. You don't get much more "them" than an entirely different lifeform.
Nah, the insistence that humans are somehow uniquely too smart to destroy themselves is obviously laughable, it's troubling that you wrote that down without bursting into laughter because it is so silly.
Ah, the classic cynical brooding response.
First, we're discussing what an AI might do, with terms like "no air". EG, wholesale destruction.
So please do show when the human race has destroyed itself entirely. Oh wait, you're here to read this? I guess that has never happened. Ever. Because as long as one human exists, there has never been a case where humans were wiped out with, for example, no air.
So the "obvious" aspect is not quite so clear. There's no evidence of it, merely conjecture.
Second, at no point did I say smart or not smart.
Instead, I discussed two viewpoints. The viewpoint of an AI, which does not require air, and the viewpoint of a human, which does care about air.
Get the difference?
You may want to dive into global warming, or pollution, or what not. These however, are longer term issues. Destruction today is far different than destruction in 100 years. Or 1000. For example, if global warming predictions are accurate, or even worse, there will still be humans somewhere for hundreds of years, without much change.
Some people might starve, the equator may be too hot, people will die, but there will be places to live still. The planet may go runaway CO2, but that partially abates as more humans die. Fewer humans, less CO2.
Yet either way, it's long term, and no one can definitively say the outcome.
Long term isn't typically part of most decision trees, and there's a reason for that. When thinking long-term, you have to think of all the possible permutations, all the possible things that could happen, and those branches expand greatly and massively with time.
Any thinking being which considers all of their actions right now, in the moment, would become almost immediately paralyzed if it had to consider those actions extremely long-term. Each move, each action, with massive pause and hours/days/weeks of thought. Do you consider how your next step will impact people 4,000 years in the future? A million? A billion?
What about eating toast for breakfast? How much consideration does the average entity put, into consuming food for energy, and yet looking forward on their actions for a billion years?
Beyond that, there is no accurate data for future outcomes, to make a proper determination of what may happen 500, a thousand, a million, a billion years in the future. So all of these computational chains are realistically for naught. After all, the sun may expand sooner than predicted. Certainly the moon will likely move further from Earth, and the Earth's spin will slow down. How will now, affect everything in that future?
You may say, why don't we consider our actions, you know, just in the next hundred years? But now, suddenly, are not you considering your actions in too short of a time frame? Should you not consider what the human race and what the Earth itself will be like in a billion years? If you're not considering those things, then are you not depriving entities and beings and living organisms a healthy planet one billion years in the future?
Where does it stop? Where does it begin? Where and how far in the future should you consider your actions, on a day-to-day basis?
More importantly, how much of what you do in a day should you consider with regards to the future? Which acts of yours are detrimental to the future? Do you know? Are you sure? Do you have any idea?
Obviously, some of the thoughts above are somewhat long term. Yet not thinking long term is why we got into this issue with global warming! And truthfully, if the complaint is that we're destroying the future planet for species that live here besides ourselves, then we really should be considering 10k, 50k, a million years in the future.
Anything else is only selfishly considering our own personal descendants for a couple of generations.
But let's take a step back. I'm not trying to say that we or anybody else can make these kinds of in-depth long scope decisions, nor am I saying that we should not care about global warming. Obviously we should. We actually know it's a problem now. We knew in the 70s.
Instead, what I'm saying is that individuals are individuals and excessively considering long-term ramifications of all of your actions can be highly detrimental to the capacity to make any decision whatsoever. Imagine an AI, which every single time it made a decision, every single time it even decided to compute something, every time it decided to take an action in the real world, it had to consider the ramifications one billion years hence.
Imagine the amount of processing power that would require. Now imagine the amount of energy or "food" needed. This is why beings cannot sit around for all eternity while a wolf leaps upon them, while they ponder all the possible future ramifications of even the tiniest or even medium-scale acts. And this is why the average human being does not consider the ramifications. It's an evolutionary requirement.
And as I've suggested above, it is also going to be a requirement for AI. Certainly it can consider some of its acts, much like a human being can consider some of its acts, but that's not how things work on a day-to-day basis.
Human beings solve this by observation after the fact of many of our acts, and secondarily, by reviewing what's happening in the environment and the world around us as we make change, and then determining if we should scale back or roll back what we're doing.
The same will be true of AI. If the same is not true of AI, AI will cause global warming merely by trying to stop it.
The sheer computational power required for an AI, and all the AIs making decisions, to optimally choose what's best for 1M years in the future? It would eat enormous amounts of energy, thus making global warming worse whilst trying to make it better!
Whether or not we should be putting more energy into considering these things doesn't mean that it's possible for the average thinking entity to do so.
The AI most certainly does not care, because it is a computer program. It also doesn't want to buy a boat.
It also doesn’t care if the company goes bankrupt tomorrow without paying out their bonus.
Given the timeline this is more likely a reversion to the mean following the end of zero interest rate policy.
Software development is one of the listed industries. Well before AI we have seen that few companies wanted entry level devs due to the training and such.
Reducing in call centers has been going on for a while as more people use automated solutions (not necessarily AI) and many of the growing companies make it hard to reach a real person anyways (Amazon, Facebook, etc). I feel like AI is throwing fuel on the existing fire, but isn't as much of a driver as the headlines suggest.
The jobs are going to India
American workers are truly under attack from all sides. H1B. Outsourcing. What's left? The blue-collar manufacturing is mostly gone; white-collar work is well on its way out. Why is our own government (by the people, for the people) actively assisting in destroying Americans' ability to get jobs (H1B)? Especially in these conditions. I'm no racist or idiot, but it's unacceptable. I didn't expect the government to be actively conspiring with big corps to make my economic position weaker. Unbelievable breach of trust. We need to demand change from our government.
Your government no longer works for you, it works for a small group of billionaires. Does it make sense then?
It’s an unpopular opinion in the current environment but it’s the program that allows international talent to connect with local capital that creates all the jobs in tech.
Nearly half the unicorns in the country were founded by foreigners living in the country. https://gfmag.com/capital-raising-corporate-finance/us-unico...
The biggest problem right now is that there is no distinction between companies replacing American labor with cheap labor and entrepreneurial talent that creates jobs. Everyone is on the same visa.
Efficiency rules all.
It just doesn't make sense to pay someone $10 when you can pay someone else $2
And when we're all out of work except for the doctors and nurses, electricians and plumbers, there will be nobody to contribute to consumer spending. And we will suffer, at the hand of the government that assisted in this scam.
If those predictions are correct, AI is going to hit everyone, including the doctors, nurses, plumbers, and electricians.
Maybe they will give us subsistence UBI
Two dollars, and a 12-hour time difference with a full day between messages in conversations.
They will come back (eventually).
Having to work with ESL contractors from firms like Cognizant or HCL is true pain. Normally it would be like 3-4 US employees working on something, and then it's like 20-30 ESL outsourced people working on the same thing. The quality is so poor, though, that it's not worth it.
My current org nuked their contract with HCL after 2 years because of how shitty they are, and now everything is back onshore. Millions wasted, lol. Corporations are so silly sometimes.
They also need 5 people to do the work of one US worker, and then another US worker to guide them and do some QA on the output they produce. I don't see how it saves money. There are other countries with lower wages than the US where this doesn't happen, such as Poland or Australia.
They are. Companies also do this and then wonder why they get blackmailed for terabytes of leaked proprietary data on the darkweb.
Saving money on wages isn't the only consideration.
> You really do have to account for why this is mainly happening in industries that are adopting AI
Correlation is not causation. The original research paper does not prove a connection.
> I doubt very much that these three Stanford professors would be blindsided by the concept of rates and tariffs.
They are nonetheless subject to publish or perish pressure and have strong incentives to draw publishable attention-grabbing results even where the data is inconclusive.
> I doubt very much that these three Stanford professors would be blindsided by the concept of rates and tariffs.
Digital Economy Lab and the Stanford Institute for Human-Centered Artificial Intelligence
I fully expect that these professors would be blindsided by even the most rudimentary real world economics.
Tariffs are just a massive government revenue generating consumption tax on particular industries. We would expect unemployment among the young trying to enter those industries to be hit hardest.
Do you understand that American employers don't have to pay American tariffs?
I'm curious who you think pays American tariffs.
You first
Everyone pays mate
> Do you understand that American employers don't have to pay American tariffs?
Except they do, if their raw materials, tools, etc., are imported.
More investment -> more return on investment -> "AI is increasing worker efficiency" -> This is good for AI.
Less investment -> more layoffs -> "AI is replacing workers" -> This is good for AI.
A computer does something good -> "That's AI" -> This is good for AI.
A computer does something bad -> "It needs more AI" -> This is good for AI.
It seems more true than the "this is good for bitcoin" meme now that bitcoin seems to track the dollar very closely
Is there some central authority that’s telling people to blame this all on AI, or how is everyone reaching this conclusion and ignoring the other obvious factors you stated?
It is in their interest to find explanations for reductions in labor that don't assign the blame to corporate greed.
For example, a call center might use the excuse of AI to fire a bunch of people. They would have liked to just arbitrarily fire people a few years ago, but if they did that people would notice the reduction in quality and perhaps realize it was done out of self-serving greed (executives get bigger bonuses / look better, etc). The AI excuse means that their service might be worse, perhaps inexcusably so, but no one is going to scrutinize it that closely because there is a palatable justification for why it was done.
This is certainly the type of effect I feel like underlies every story of AI firing I've heard about.
How is firing a bunch of people because you made a machine that you believe can do their jobs not textbook corporate greed? It seems like the worst impulses of Taylorism made manifest?
This is worse: this is just pretending like the machine does their jobs because it benefits them.
The big (biggest?) problem of modernity is that quality is decorrelated from profit. There's a lot more money in having the optics of doing a good job than in actually doing it; the economy is so abstracted and distributed that the mechanism of competition to punish bad behavior, shitty customer service, low standards, crappy work, and fraud is very weak. There is too much information asymmetry, and the timescale of information propagation is too long to have much of an effect. As long as no one notices what you're fucking up very quickly, you can get away with it for a long time.
Seems even worse to me. At least in the 'competition' paradigm there's a mechanism for things getting better for consumers. No such thing here.
> It is in their interest to find explanations for reductions in labor that don't assign the blame to corporate greed.
Exactly.
It doesn't need to be a conspiracy. Incentives align sometimes. A lot of people are invested in AI replacing jobs, and it would be nice for them if the buzz was that it's actually the case.
Blaming AI is better because it helps corporations convince the working class that their jobs are in long-term danger, so they collectively settle for less favorable work terms and compensation, unlike if they were convinced that things will gradually improve with the upcoming monetary easing cycle.
I'm sorry, have you read the paper, or did you just want to recite those here?
Here's the study:
https://digitaleconomy.stanford.edu/publications/canaries-in...
It looks like they're looking at data for the last few years, not just the last few months.
I haven't read it, and maybe you can disagree with their opinions, but there does appear to be a slowdown in hiring of recent college graduates.
End of ZIRP and the Sec. 179 change for engineering salaries probably explains more of this (plus the increase in outsourcing). I’m sure some decision makers also threw AI into the mix but the financials of hiring software engineers in the US was already challenging before AI “took everyone’s job”.
Since this article is about AI, and since this comment seems rather low effort compared to the Stanford study, I went ahead and used low effort to analyze the report and compare it to this comment. Here's my low-effort AI response:
> Prompt: Attached is a paper. Below is an argument made against it. Is there anything in the paper that addresses the argument?: High interest rates + tariff terror -> less investment -> less jobs
> High rates/firm shocks: They add firm–time fixed effects that absorb broad firm shocks (like interest-rate changes), and the within-firm drop for 22–25-year-olds in AI-exposed roles remains.
> “Less investment” story: They note the 2022 §174 R&D amortization change and show the pattern persists even after excluding computer occupations and information-sector firms.
> Other non-AI explanations: The decline shows up in both teleworkable and non-teleworkable jobs and isn’t explained by pandemic-era education issues.
> Tariffs: Tariffs aren’t analyzed directly; broad tariff impacts would be soaked up by the firm–time controls, but a tariff-specific, task-level channel isn’t separately tested.
Fitting, since it came up with unrelated information (the R&D tax thing) and the 3rd bullet point. Also started talking about tariffs as if it had addressed them, then notes that it doesn't address them.
I generally agree that AI is the scapegoat, but not for those same reasons. Despite the lack of job growth and the tariffs, recent data shows the economy grew about 3%. Even if it's not AI as the primary driver, efficiency seems to have increased.
How does that make sense? Wouldn’t high interest rates and tariffs cause more expensive engineers to have disproportionate opportunity? I remember during 2008 it was much easier for my employer to justify junior engineers than senior ones.
Do you consider things to be that single-faceted, that other factors cannot realistically be a part of the equation?
I have to admit that hearing something is "single-faceted" would be a nice break from hearing that something is "complex and multifaceted".
I was here in the 90s dotcom boom and interest rates were higher than today.
(High interest rates + tariff terror -> less investment -> less jobs) + AI
Well, you do have CEOs out there saying it...
Less investment? You must be trolling. I encourage you to look at the amount of stupid money that has been "invested" into LLMs.
> But let's blame AI
The thing whose exact purpose is to replace labor? Must be a conspiracy going on to suggest its linked to reducing labor. Bias! Agenda!
The jobs are going overseas
2 things can be true
But usually one is more true. I'm on the camp of high interests and high tariffs being the cause more.
In my division we have 1000 IT workers. I'd guess the number born in the USA is less than 10%, probably less than 5%. Americans just don't work hard enough; I think now that they are a minority they probably don't want to stay either.
Translation: "We spent so much on useless AI tools we had to stop hiring grads"
> Some examples of these highly exposed jobs include customer service representatives, accountants and software developers.
We seem to be in this illogical (delusional?) era where we are being told that AI is 'replacing' people in certain sectors or types of work (under the guise that AI is better or will soon be better than humans in these roles) yet those same areas seem to be getting worse?
- Customer service seems worse than ever as humans are replaced with "AI" that doesn't actually help customers more than 'website chatbots' did 20 years ago.
- Accounting was a field that was desperate for qualified humans before AI. My attempts to use AI for pretty much anything accounting related has had abysmal results.
- The general consensus around software development seems to be that while AI is lowering the barrier of entry to "producing code", the rate of production of tech debt and code that no one "owns" (understands) has exploded with yet-to-be-seen consequences.
> "The general consensus around software development seems to be that while AI is lowering the barrier of entry to "producing code", the rate of production of tech debt and code that no one "owns" (understands) has exploded with yet-to-be-seen consequences."
^ This. (Tho I'm not sure about it being "general consensus".) Vibe code is the payday loan (or high-interest credit card) of tech debt. Demo-quality code has a way of making it into production. Now "everyone" can produce demos and PoCs. Companies that leverage AI as a powerful tool in the hands of experienced engineers may be able to iterate faster and increase quality, but I expect a sad majority to learn the hard way that there's no free lunch, and shipping something you don't understand is a recipe for disaster.
Brutal that software engineering went from one of the least automatable jobs to a job that is universally agreed to be "most exposed to automation".
Was good while it lasted though.
I'm not sure it's that our job is the most automatable, but that the interface is the easiest to adapt to our workflow.
I have a feeling language models will be good at virtually every "sit at a desk" job in a virtually identical capacity, it's just the act of plugging an AI into these roles is non-obvious.
Like every business was impacted by the Internet equally, the early applications were just an artifact of what was an easy business decision.. e.g. it was easier to start a dotcom than to migrate a traditional corporate process.
What we will see here with AI is not the immediate replacement of jobs, but the disruption of markets with offerings that human labor simply can't out-compete.
> I'm not sure it's that our job is the most automatable
I don't know. It seems pretty friendly to automation to me.
When was the last time you wrote assembly? When was the last time you had to map memory, or think about blitting memory to a screen buffer to draw a square on the screen? Or schedule processes and threads?
These are things that I routinely did as a junior engineer writing software a long time ago. Most people at that time did. For the most part, the computer does them all now. People still do them, but only when it really counts and applications are niche.
Think about how large code bases are now and how complicated software systems are. How many layers they have. Complexity on this scale was unthinkable not so long ago.
It's all possible because the computer manages much of the complexity through various forms of automation.
Expect more automation. Maybe LLMs are the vehicle that delivers it, maybe not. But more automation in software is the rule, not the exception.
RAD programming held the same promise, as did UML and flow/low/no-code platforms.
Inevitably, people remember that the hard part of programming isn't so much the code as it is putting requirements into maintainable code that can respond to future requirements.
LLMs basically only automate the easiest part of the job today. Time will tell if they get better, but my money is on me fixing people's broken LLM generated businesses rather than being replaced by one.
Indeed. Capacity to do the hard parts of software engineering well may well be our best indicator of AGI.
I don't think LLMs alone are going to get there. They might be a key component in a more powerful system, but they might also be a very impressive dead end.
Sometimes I think we’re like cats that stumbled upon the ability to make mirrors. Many cats react like there’s another cat in the mirror, and I wonder if AGI is just us believing we can make more cats if we make the perfect mirror.
This has been my argument as well. We've been climbing the abstraction ladder for years. Assembly -> C -> OOP ->... this just seems like another layer of abstraction. "Programmers" are going to become "architects".
The labor cost of implementing a given feature is going to dramatically drop. Jevons paradox will hopefully still mean that the labor pool will just be used to create '10x' the output (or whatever the number actually is).
If the cost of a line of code / feature / app becomes basically '0', will we still hit a limit in terms of how much software can be consumed? Or do consumers have an infinite hunger for new software? It feels like the answer has to be 'it's finite'. We have a limited attention span of (say) 8hrs/person * 8 billion.
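Back-of-envelope on that ceiling, using the numbers above (my arithmetic, just illustrating the point):

```python
# Rough ceiling on total human attention available to consume software.
people = 8_000_000_000   # world population, order of magnitude
hours_per_day = 8        # the generous per-person screen-time budget assumed above

attention_hours_per_day = people * hours_per_day
print(f"{attention_hours_per_day:,} person-hours/day")  # 64,000,000,000 person-hours/day
```

However cheap production gets, consumption is capped at roughly that figure, so demand for software is finite even if supply is effectively free.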
the cost of creating a line of code dropped to zero. the ongoing cost of having created a line of code has if anything gone up.
LLMs are just another layer of abstraction on top of countless. It’s not going to be the last layer, though.
I do think software engineering is more exposed than many other jobs for multiple reasons:
There is an unimaginable amount of freely accessible training data out there. There aren't for example many transcribed therapy sessions out there.
The only thing that matters about software is that it's cheap and it sort of works. Low-quality software is already common. Bugs aren't usually catastrophic in the way structural failures would be.
Software engineers are expensive compared to many other white-collar workers.
Software engineering is completely unregulated and there is no union or lobby for software engineers. The second an LLM becomes good enough to replace you, you're gone.
Many other "sit at desk" jobs have at least some tasks that can't be done on a computer.
Software engineering feels like an extremely uncertain career right now.
I'm not so certain that non-desk jobs will be safe either. What makes the current LLMs great at programming is the vast amount of training data. There might be some other breakthrough for typical jobs - some combination of reinforcement learning, training on videos of people doing things, LLMs and old-fashioned AI.
The only thing that AI is good at is a job that someone has already done before.
So 99% of all jobs
Maybe it's just the nature of being early adopters.
Other fields will get their turn once a baseline of best practices is established that the consultants can sell training for.
In the meantime, memes aside, I'm not too worried about being completely automated away.
These models are extremely unreliable when unsupervised.
It doesn't feel like that will change fundamentally with just incrementally better training.
> These models are extremely unreliable when unsupervised.
> It doesn't feel like that will change fundamentally with just incrementally better training.
I could list several things that I thought wouldn't get better with more training and then got better with more training. I don't have any hope left that LLMs will hit a wall soon.
Also, LLMs don't need to be better programmers than you are, they only need to be good enough.
No matter how much better they get, I don't see any actual sign of intelligence, do you?
There is a lot of handwaving around the definition of intelligence in this context, of course. My definition would be actual on-the-job learning, and reliability I don't need to second-guess every time.
I might be wrong, but those 2 requirements seem not compatible with current approach/hardware limitations.
Intelligence doesn't matter. To quote "Superintelligence: Paths, Dangers, Strategies":
> There is an important sense, however, in which chess-playing AI turned out to be a lesser triumph than many imagined it would be. It was once supposed, perhaps not unreasonably, that in order for a computer to play chess at grandmaster level, it would have to be endowed with a high degree of general intelligence.
The same thing might happen with LLMs and software engineering: LLMs will not be considered "intelligent" and software engineering will no longer be thought of as something requiring "actual intelligence".
Yes, current models can't replace software engineers. But they are getting better at it with every release. And they don't need to be as good as actual software engineers to replace them.
There is a reason chess was "solved" so fast. The game maps very nicely onto computers in general.
A grandmaster chess playing ai is not better at driving a car than my calculator from the 90s.
Yes, that's my point. AI doesn't need to be general to be useful. LLMs might replace software engineers without ever being "general intelligence".
Sorry for not making my point clear.
I'm arguing that the category of the problem matters a lot.
Chess is, compared to self-driving cars and (in my opinion) programming, very limited in its rules, the fixed board size and the lack of "fog of war".
I think I haven't made my point clear enough:
Chess was once thought to require general intelligence. Then computing power became cheap enough that using raw compute made computers better than humans. Computers didn't play chess in a very human-like way and there were a few years where you could still beat a computer by playing to its weaknesses. Now you'll never beat a computer at chess ever again.
Similarly, many software engineers think that writing software requires general intelligence. Then computing power became cheap enough that training LLMs became possible. Sure, LLMs don't think in a very human-like way: There are some tasks that are trivial for humans and where LLMs struggle but LLMs also outcompete your average software engineer in many other tasks. It's still possible to win against an LLM in an intelligence-off by playing to its weaknesses.
It doesn't matter that computers don't have general intelligence when they use raw compute to crush you in chess. And it won't matter that computers don't have general intelligence when they use raw compute to crush you at programming.
The burden of proving that software development requires general intelligence is on you. I think the stuff most software engineers do daily doesn't require it. And I think LLMs will get continuously better at it.
I certainly don't feel comfortable betting my professional future on software development for the coming decades.
"It is difficult to get a man to understand something when his salary depends upon his not understanding it" ~ Upton Sinclair
Your stance was the widely held stance not just on hacker news but also by the leading proponents of ai when chatgpt was first launched. A lot of people thought the hallucination aspect is something that simply can't be overcome. That LLMs were nothing but glorified stochastic parrots.
Well, things have changed quite dramatically lately. AI could plateau. But the pace at which it is improving is pretty scary.
Regardless of real "intelligence" or not, the current reality is that AI can already do quite a lot of traditional software work. This wasn't even remotely true if you were to go 6 months back.
How will this work exactly?
I think I have a pretty good idea of what AI can do for software engineering, because I use it for that nearly every day and I experiment with different models and IDEs.
The way that has worked for me is to make prompts very specific, to the point where the prompt itself would not be comprehensible to someone who's not in the field.
If you sat a rando with no CS background in front of Cursor, Windsurf or Claude Code, what do you suppose would happen?
It seems really doubtful to me that overcoming that gap is "just more training", because it would require a qualitatively different sort of product.
And even if we came to a point where no technical knowledge of how software actually works was required, you would still need to be precise about the business logic in natural language. Now you're writing computer code in natural language that will read like legalese. At that point you've just invented a new programming language.
Now maybe you're thinking, I'll just prompt it with all my email, all my docs, everything I have for context and just ask it to please make my boss happy.
But the level of integrative intelligence, combined with specialized world knowledge required for that task is really very far away from what current models can do.
The most powerful way that I've found to conceptualize what LLMs do is that they execute routines from huge learnt banks of programs that re-combine stored textual information along common patterns.
They're cut and paste engines where the recombination rules are potentially quite complex programs learnt from data.
This view fits well with the strengths and weaknesses of LLMs - they are good at combining two well understood solutions into something new, even if vaguely described.
But they are quite bad at abstracting textual information into a more fundamental model of program and world state and reasoning at that level.
I strongly suspect this is intrinsic to their training, because doing this is simply not required to complete the vast majority of text that could realistically have ended up in training databases.
Executing a sophisticated cut&paste scheme is in some ways just too effective; the technical challenge is how do you pose a training problem to force a model to learn beyond that.
I just completed a prototype of a non-trivial product that was vibe-coded just to test the ability and limits of LLMs.
My experience aligns largely with your excellent comment.
> But the level of integrative intelligence, combined with specialized world knowledge required for that task is really very far away from what current models can do.
Where LLMs excel is putting out large templates of what is needed, but they are frayed at the edges. Imagine programming as a jigsaw puzzle where the pieces have to fit together. LLMs can align the broader pieces, but fail to fit them precisely.
> But they are quite bad at abstracting textual information into a more fundamental model of program and world state and reasoning at that level.
The more fundamental model of a program is a "theory" or "mental model" which unfortunately is not codified in the training data. LLMs can put together broad outlines based on their training data, but lack precision in modeling at a more abstract level. For example, how concurrency could impact memory access is not precisely understood by the LLM, since it lacks a theory of it.
> the technical challenge is how do you pose a training problem to force a model to learn beyond that.
This is the main challenge - how can an LLM learn more abstract patterns. For example, in the towers of hanoi problem, can the LLM learn the recursion and what recursion means. This requires LLM to learn abstraction precisely. I suspect LLMs learn abstraction "fuzzily" but what is required is to learn abstraction "precisely". The precision or determinism is largely where there is still a huge gap.
LLM-boosters would point to the bitter lesson and say it is a matter of time before this happens, but I am a skeptic. I think the process of symbolism or abstraction is not yet understood enough to be formalized.
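For reference, the Towers of Hanoi recursion mentioned above is tiny to state; the open question raised is whether an LLM learns this abstraction precisely rather than "fuzzily". A minimal sketch of the standard recursive solution:

```python
def hanoi(n, src, aux, dst, moves):
    # Move n disks from src to dst, using aux as scratch space.
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)   # clear the top n-1 disks out of the way
    moves.append((src, dst))             # move the largest remaining disk
    hanoi(n - 1, aux, src, dst, moves)   # stack the n-1 disks back on top

moves = []
hanoi(3, "A", "B", "C", moves)
print(len(moves))  # prints 7, i.e. 2**3 - 1 moves
```

The precision question is exactly the one the parent raises: the pattern generalizes to any n only if the recursion itself is learned, not just example move sequences.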
Ironic to post that quote about AI considering the hype is pretty much entirely from people who stand to make obscene wealth from it.
> That LLMs were nothing but glorified stochastic parrots.
Well yes, now we know they make kids kill themselves.
I think we've all fooled ourselves like this beetle
https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...
For thousands of years, up until 2020, anything that conversed with us could safely be assumed to be another sentient/intelligent being.
Now we have something that does that, but is neither sentient nor intelligent, just a (complex) deterministic mechanism.
I've heard this described as a kind vs. a wicked learning environment.
LLMs can code, but they can’t engineer IMO. They lack those other parts of the brain that are not the speech center.
Does it have to? Stack enough "it's 5% better" on top of each other and the exponent will crush you.
AI training costs have increased around 3x annually in each of the last 8 years to achieve their performance improvements. Last year, spending across all labs was $150bn. Keeping the 3x trend means that, to keep pace with current advances, costs should rise to $450bn in 2025, $900bn in 2026, $2.7tn in 2027, $8.1tn in 2028, $25tn in 2029, $75tn in 2030 and $225tn in 2031. For reference, the GDP of the world is around $125tn.
I think the labs will be crushed by the exponent on their costs faster than white-collar work will be crushed by the 5% improvement exponent.
Be careful you're not confusing the costs of training an LLM and the spending from each firm. Much of that spending is on expanding access to older LLMs, building new infrastructure, and other costs.
That’s a fair criticism of my method, however model training costs are a significant cost centre for the labs. Modelling from there instead of from total expenditure only adds 2-3 years before model training costs are larger than the entire global economy.
Your math is a bit less than it should be because you doubled instead of trebling for 2026.
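For what it's worth, the compounding arithmetic is easy to reproduce. A minimal sketch, using the $150bn 2024 base quoted above and assuming the 3x growth factor stays constant:

```python
# Project lab spending under a constant 3x annual growth factor,
# starting from the $150bn figure quoted for 2024.
spend_bn = {2024: 150}
for year in range(2025, 2031):
    spend_bn[year] = spend_bn[year - 1] * 3  # treble every year

for year, bn in spend_bn.items():
    label = f"${bn / 1000:g}tn" if bn >= 1000 else f"${bn}bn"
    print(year, label)
```

A strict 3x trend gives $450bn (2025), $1.35tn (2026), $4.05tn (2027), $12.15tn (2028), $36.45tn (2029) and ~$109tn (2030), approaching the ~$125tn world GDP even sooner than the figures in the comment above.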
The current trained models are already pretty good enough for many things.
Is that so? Ok, let the consumers decide: increase the price and let's see how many users are willing to pay it.
They are mediocre plagiarism machines at best.
Are LLMs stackable? If they keep misunderstanding each other, it'll look more like successive applications of JPEG compression.
By all accounts, yes.
"Model collapse" is a popular idea among the people who know nothing about AI, but it doesn't seem to be happening in real world. Dataset quality estimation shows no data quality drop over time, despite the estimates of "AI contamination" trickling up over time. Some data quality estimates show weak inverse effects (dataset quality is rising over time a little?), which is a mindfuck.
The performance of frontier AI systems also keeps improving, which is entirely expected. So does price-performance. One of the most "automation-relevant" performance metrics is "ability to complete long tasks", and that shows vaguely exponential growth.
Given the number of academic papers about it, model collapse is a popular idea among the people who know a lot about AI as well.
Model collapse is something demonstrated when models are recursively trained largely or entirely on their own output. Given that most training data is still generated or edited by humans, or is curated synthetic data, I'm not entirely certain why one would expect to see evidence of model collapse happening right now; but to dismiss it as something that can't happen in the real world seems a bit premature.
We've found under what conditions model collapse happens slower or fails to happen altogether. Basically all of them are met in real-world datasets. I do not expect that to change.
The jpeg compression argument is still valid.
It's lossy compression at the core.
In 2025 you can add quality to jpegs. Your phone does it and you don't even notice. So the rhetorical metaphor employed holds up, in that AI is rapidly changing the fundamentals of how technology functions beyond our capacity to anticipate or keep up with it.
> add quality to jpegs
Define "quality", you can make an image subjectively more visually pleasing but you can't recover data that wasn't there in the first place
You can if you know what to fill from other sources.
Like, the grille of a car. If we know the make and year, we can add detail with each zoom by filling in from external sources.
This is an especially bad example, a nice shiny grille is going to be strongly reflecting stuff that isn't already part of the image (and likely isn't covered well by adjacent pixels due the angle doubling of reflection).
Is this like how crypto changed finance and currency
I don't think it is.
Sure, you can view an LLM as a lossy compression of its dataset. But people who make the comparison are either trying to imply a fundamental deficiency, a performance ceiling, or trying to link it to information theory. And frankly, I don't see a lot of those "hardcore information theory in application to modern ML" discussions around.
The "fundamental deficiency/performance ceiling" argument I don't buy at all.
We already know that LLMs use high level abstractions to process data - very much unlike traditional compression algorithms. And we already know how to use tricks like RL to teach a model tricks that its dataset doesn't - which is where an awful lot of recent performance improvements is coming from.
Sure, you can upscale a badly compressed jpeg using ai into something better looking.
Often the results will be great.
Sometimes the hallucinated details will not match the expectations.
I think this applies fundamentally to all of the LLM applications.
And if you get that "sometimes" down to "rarely" and then "very rarely" you can replace a lot of expensive and inflexible humans with cheap and infinitely flexible computers.
That's pretty much what we're experiencing currently. Two years ago code generation by LLMs was usually horrible. Now it's generally pretty good.
I think you are selling yourself short if you believe you can be replaced by a next token predictor :)
I as a human being can of course not be replaced by a next token predictor.
But I as a chess player can easily be replaced by a chess engine and I as a programmer might soon be replaceable by a next token predictor.
The only reason programmers think they can't be replaced by a next token predictor is that programmers don't work that way. But chess players don't work like a chess engine either.
I think humans who think they can't be replaced by a next token predictor think too highly of themselves.
LLMs show it plain and clear: there's no magic in human intelligence. Abstract thinking is nothing but fancy computation. It can be implemented in math and executed on a GPU.
LLMs have no ability to reason whatsoever.
They do have the ability to fool people and exacerbate or cause mental problems.
LLMs are actually pretty good at reasoning. They don't need to be perfect, humans aren't either.
Why isn't it then
What's actually happening is that all your life, experience has told you that if something can talk to you, it must be somewhat intelligent.
Now you can't get around the fact that this might not be the case.
You're like that beetle going extinct mating with beer bottles.
https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...
"What's actually happening" is all your life you've been told that human intelligence is magical and special and unique. And now it turns out that it isn't. Cue the coping.
We've already found that LLMs implement the very same type of abstract thinking as humans do. Even with mechanistic interpretability being in the gutters, you can probe LLMs and find some of the concepts they think in.
But, of course, denying that is much less uncomfortable than the alternative. Another one falls victim to AI effect.
> "What's actually happening" is all your life you've been told that human intelligence is magical and special and unique. And now it turns out that it isn't. Cue the coping.
People have been arguing this is not the case for at least hundreds of years.
Considering we don't understand consciousness at ALL or how humans think, you might want to backtrack your claims a bit.
Any abstraction you're noticing in an LLM is likely just a plagiarized one
This boring reductionist take on how LLMs work is so outdated that I'm getting second-hand embarrassment.
Sorry, I meant a very fancy next token predictor :)
Lots of technology is cool if you get to just say “if we get rid of the limitations” while offering no practical way to do so.
It’s still horrible btw.
Hallucination has significantly decreased in the last two years.
I'm not saying that LLMs will positively replace all programmers next year, I'm saying that there is a lot of uncertainty and that I don't want that uncertainty in my career.
Pretty crazy, and all you have to do is assume exponential performance growth for as long as it takes.
If it gets to the point where I can no longer find a tech job I am just going to buy a trailer, live somewhere cheap, and just make money doing odd jobs while spending most of my time programming what I want. I don't want to participate in a society where all I have for job options is a McJob or some Amazon warehouse.
That's plan C; plan B is to one-person-SaaS a better app than my current company makes.
This is the best thing engineers can do. I moved to building as a solo founder. I am building an LLM enabled coding product and I teach. I'm hosting a session on Claude Code today, 134 guests signed up. I'm gradually planning to make money teaching for a few months while building the product.
until you realize the success of a business is way more dependent on non-engineering skills
That's actually a good idea. Now I just need to come up with an idea for a SaaS app. I was originally thinking of making one of the games on my project backlog and seeing how much I could make off it. Or creating one of the many ideas I have for websites and webapps and seeing where they go.
Is it hard to date with a trailer?
Would be more difficult depending on where you live. My plan was to talk to others online and see if I could find someone willing to live such a simple life with me, maybe starting with an LDR first (I'm sort of doing that already)
Not if it has a hitch.
Beginning to suspect this person is living in a trailer or cave and collecting info for their UniqueDating SaaS.
>Buy a trailer, live somewhere cheap, do odd jobs
Unrelated to the discussion, but I love these kinds of backup plans. I've found that most guys I talk to have one. Just a few days ago a guy was telling me that, if his beloved wife ever divorces him, then he'd move to a tropical island and become a coconut seller.
(My personal plan: find a small town in the Sonoran Desert that has a good library, dig a hole under a nice big Saguaro cactus, then live out my days reading library books in my cool and shady cave.)
Is it hard to date living under a cactus?
Yes, that's where living under a date palm is better.
it must be easier than dating on top of a cactus
Nah dating under a cactus is easy: just don't be a prick.
The future seems very uncertain right now and we are living in weird times. Its always a good idea to have a backup plan in case your career path doesn't work out!
Mine is forest firefighter. Surely with climate change there will not be a shortage of work, and while dangerous and bad for you, it seems kind of fun.
> he'd move to a tropical island and become a coconut seller.
Is there a visa for that? Doesn't seem feasible unless he lives in a country that has a tropical island already.
Due to the Compact of Free Association, US citizens can permanently settle, live, and work in Micronesia with no visa or even any real checks other than a quick look at the passport.
I'd argue that, out of white collar jobs, it is actually one of the least automatable still. I.e. the rest of the jobs are likely going to get disrupted much faster because they are easier to automate (and have been the target of automation by the software industry in the past century). Whatever numbers we're seeing now may be too early to reflect this accurately.
Also there are different metrics that are relevant like dollar count vs pure headcount. Cost cutting targets dollars. E.g. entry level developers are still expensive compared to other jobs.
I'm sure those who lost a job to software at some point are feeling a great deal of sympathy for developers who are now losing out to automation.
Despite being the target of a lot of schadenfreude, most software developers aren't working on automation.
Nice watching it tear down recruiters though.
Most "Software Engineering" is just applying the same code in slightly different contexts. If we were all smarter it would have been automated earlier through the use of some higher-level language.
> If we were all smarter
It's not really an intelligence thing. You could have the most intelligent agent, but if the structural incentives for that agent are, for example, "build and promote your own library for X for optimal career growth", you would still have massive fragmentation. And under the current rent-seeking capitalist framework, this is a structural issue at every level. Firefox and Chrome? Multiple competing OSes? How many JS libraries? Now sure, maybe if everyone was perfectly intelligent _and_ perfectly trusting, then you could escape this.
It's the least regulated (i.e., not at all). So it will be the first to be changed.
AI lawyers? Many years away.
AI civil engineers? Same thing, there is a PE exam that protects them.
You don’t need to perfect AI to the point of becoming credentialed professionals to gut job markets— it’s not just developers, or creative markets. Nobody’s worried that the world won’t have, say, lawyers anymore — they’re worried that AI will let 20% of the legal workforce do 100% of the requisite work, making the skill essentially worthless for the next few decades because we’d have way too many lawyers. Since the work AI does is largely entry-level work, that means almost nobody will be able to get a foothold in the business. Wash, rinse, repeat to varying levels across many white collar professions and you’ve got some real bad times brewing for people trying to enter the white collar workforce from now on— all without there being a single AI lawyer in the world.
Same thing for doctors. Turns out radiologists are fine, it's software engineers that should be scared.
We might end up needing 20% or so fewer doctors, because all that bureaucracy can be automated. A simple automated form pre-filler can save a lot of time. It's likely that hospitals will try saving there.
You know the difference between doctors and programmers? One has a regulated profession and a lobby; the other has neither. Actually, all the other has is the richest trove of open training data for AI companies of any profession (and it's not medicine).
Oh really?
https://medium.com/backchannel/how-technology-led-a-hospital...
It’s just engineers getting high on their own supply. All the hype men for the software are software engineers (or adjacent.)
Frankly, any time I see research indicating software engineering is at a high risk of being automated, I outright dismiss it as pseudo science. It ain’t happening with current tech.
Which universe is that, the one consisting of the union of AI charlatans and people who don’t understand software engineering?
You know even the CEOs are backtracking on that nonsense right?
Too bad engineers were "too important" to unionize because their/our labor is "too special."
I think you could find 10,000 quotes from HN alone on why SDEs were immune to the labor market struggles that would necessitate a union.
Oh well, good luck everyone.
I'm not necessarily opposed to unionization in general but it's never going to save many US software industry jobs. If a unionization drive succeeds at some big tech company then the workers might do well for a few years. But inevitably a non-union startup competitor with a lower cost structure and more flexible work rules will come along and eat their lunch. Then all the union workers will get laid off anyway.
Unionization kind of worked for mines and factories because the company was tied to a physical plant that couldn't easily be moved. But software can move around the world in milliseconds.
Unions _can_ protect against this, but they have to do it via lobbying the government for protectionism, tariffs, restricting non-union competition etc.
Indeed, just look at the CGI VFX industry of Hollywood. US invented it and was the leader for a long time, but now it has been commodified, standardized and run into the ground, because union or not, you can't stop US studios from offshoring the digital asset work to another country where labor is 80% cheaper than California and quality is 80% there. So the US is left with making the SW tools that VFX artists use, as the cutting edge graphics & GPU knowhow is all clustered there.
Similarly, a lot of non-cutting edge SW jobs will also leave the US as tooling becomes more standardized, and other nations upskill themselves to deliver similar value at less cost in exchange for USD.
This was when programmers were making software to time Amazon worker's bathroom breaks so believing "this could never happen to me" was probably an important psychological crutch.
Saying “programmers” did this is about as useful as saying humans did it.
This is, if true, a fundamental shift in the value of labor. There really isn’t a non-Luddite way to save these jobs without destroying American tech’s productivity.
That said, I’m still sceptical it isn’t simply a reflection of an overproduction of engineers and a broader economic slowdown.
Yeah I agree that outsourcing and oversupply are the real culprits and AI is a smoke screen. The outcome is the same though.
> outcome is the same though
Not really. If it’s overproduction, the solution is tighter standards at universities (and students exercising more discretion around which programmes they enroll in). If it’s outsourcing, the solutions include labour organisation and, under this administration, immigration curbs and possibly services tariffs.
Either way, if it’s not AI the trend isn’t secular—it should eventually revert. This isn’t a story of junior coding roles being fucked, but one of an unlucky (and possibly poorly planning and misinformed) cohort.
It can be oversupply/outsourcing and also secular: You can have basically chronic oversupply due to a declining/maturing industry. Chronic oversupply because the number of engineers needed goes down every year and the pipeline isn't calibrated for that (academia has been dealing with this for a very long time now, look up the postdocalypse). Outsourcing, because as projects mature and new stuff doesn't come along to replace, running maintenance offshore gets easier.
Software isn't eating the world. Software ate the world. New use cases have basically not worked out (metaverse!) or are actively harmful.
So your argument is that we're so special we deserve to hold back human progress to have a privileged life? If not that, what would you want a union to do in this situation?
I’d prefer that my family are financially stable over “human progress”. One benefits me and the other benefits tech companies. Easy choice.
If our ancestors had thought like that we'd all be very busy and "stable" doing subsistence farming like we were doing 10,000 years ago.
Better our children never have to work because the robots do everything and they inherited some ownership of the robots.
Do you really believe that all technological progress has bettered humanity? Where’s the four day work week we were promised? I thought automation was supposed to free us from labor.
I don't think all progress has benefitted humanity but I do think we've never worked less while earning more than the present.
I like human progress. I don’t like the apparent end goal that the entire wealth of the planet belongs to a few thousand people while the rest of us live in the mud.
Unions can only prevent automation up to a point. Really the only thing that could have reasonably prevented this would have been for programmers to not produce as much freely accessible training data (formerly known as "open source software").
Exactly. I am always so impressed by the fact that developers never see that open source is essentially them giving away free labor to giant corporations. Developers basically programmed their way out of a job, for free. It's the only profession that is proud to have its best work done on unpaid time and used for free by big corporations.
Unions wouldn't stop any of this, but professionalization would.
Unions won’t solve this for you. If a company just decides they have enough automation to reduce union workforce it can happen the next time contracts get negotiated.
Either way, there are layoff provisions with union agreements.
Tell that to dock workers, who have successfully delayed the automation of ports to the extent we see them automated in e.g. the PRC [0].
Hell, they're even (successfully) pushing back against automated gates! [1]
[0] https://www.cnn.com/2024/10/02/business/dock-workers-strike-...
[1] https://www.npr.org/2024/10/03/nx-s1-5135597/striking-dockwo...
Isn't that just delaying the inevitable? Yangshan Deep-Water Port in Shanghai is one of the most automated ports. Considering there are more people in China than in the US, China still automated their port.
I'm not making a value judgment on the specific case of dock workers, I'm rather saying that unions can and do prevent automation. If Software Devs had unionized earlier, a lot of positions would probably still be around.
The dock owner may not have a lot of alternatives to negotiating with the union. If devs unionize, the work can move.
In Hollywood, union bargaining bought some time at least. Unions did mandate limits on the use of AI for a lot of the creation process.
AI is still used in Hollywood but nobody is proud of it. No movie director goes around quoting percentages of how many scenes were augmented by AI or how many lines in the script were written by ChatGPT.
Unions work in physical domains that need labor “here and now”, think plumbers, electricians, and the like. You can’t send that labor overseas, and the union can control attempts at subversion via labor force importation. But even that has limitations, e.g. union factory workers simply having their factory shipped overseas.
Software development at its core can be done anywhere, anytime. Unionization would crank the offshoring that already happens into overdrive.
We're not "too important." All a union would do is create extra problems for us.
There are two possibilities:
a) This is a large scale administrative coordination problem
b) We don't need as many software engineers.
Under (a) unionizing just adds more administrators and exacerbates the problem, under (b) unions are ineffective and just shaft new grads or if they manage to be effective, kills your employer (and then no one has a job.)
You can't just administrate away reality. The reason SWEs don't have unions is because most of us (unlike blue collar labor) are intelligent enough to understand this. I think additionally there was something to be said about factory work where the workers really were fungible and it was capital intensive, software development is almost the polar opposite where there's no capital and the value is the theory the programmers have in their head making them a lot less fungible.
Finally we do have legal tools like the GPL which do actually give us a lot of negotiating power. If you work on GPL software you can actually just tell your employer "behave or we'll take our ball and leave" if they do something stupid.
You said: All a union would do is create extra problems for us.
Then you said:
a) This is a large scale administrative coordination problem
Pray tell: what is it a union does other than the latter?
Or is your position that “union” is some narrowly defined, undifferentiated structural artifact of a specific legal system?
Unions would just delay the inevitable while causing other downsides: compressed salary bands, difficulty firing non-performers, union fees, an increased chance of corruption, etc.
For a recent example:
> Volkswagen has an agreement with German unions, IG Metall, to implement over 35,000 job cuts in Germany by 2030 in a "socially responsible" way, following marathon talks in December 2024 that avoided immediate plant closures and compulsory layoffs, according to CNBC. The deal was a "Christmas miracle" after 70 hours of negotiations, aiming to save the company billions by reducing capacity and foregoing future wage increases, according to MSN and www.volkswagen-group.com.
I mean, I still don't want to unionize with the guys who find `git` too complicated to use (which is apparently the majority of HN). Also, you guys all hate immigrants which is not my vibe, sorry.
Then don’t complain when some other group treats you the same
Universally? Nah.
I really hope nobody had themselves convinced that software engineering couldn't be automated. Not with the code enterprise has been writing for decades now (lots and lots and lots of rules for gluing state to state, which are extremely structured but always just shy of being so structured that they were amenable to traditional finite-rule-based automation).
The goal of the industry has always been self-replacement. If you can't automate at least part of what you're working on you can't grow.
... unfortunately, as with many things, this meshes badly with capitalism when the question of "how do you justify your existence to society" comes up. Hypothetically, automating software engineering could lead to the largest open-source explosion in the history of the practice by freeing up software engineers to do something else instead of toil in the database mines... But in practice, we'll probably have to get barista jobs to make ends meet instead.
The experiences people are having when working with big, complex codebases don’t line up with your gloomy outlook. LLMs just fall apart beyond a certain project size, and then the tech debt must be paid.
Is it gloomy? I personally liken it to inventing the washing machine instead of doing laundry by hand, beating it against a washboard, for another hundred years.
If you want to know what will happen to software engineers in the US just follow the path of US factory workers in the 90s.
Looks like the study pretty arbitrarily picks "exposed industries" and notes that employment rate there has declined.
CEOs citing savings from AI should be able to show higher profits soon. The fact that they’re not means those tall tales are coming home to roost soon.
Nah, its going to be like when everyone included "bitcoin" in their quarterly reports and the market goes nuts, until it stops
Are people really replacing customer service agents with LLMs? It doesn't seem practical.
Customer service handles all the things that customers aren't trusted to manage on their own with a self-service portal - that's the whole point of having a trusted human involved at all. Giving those tasks to LLMs won't work because the customer can just prompt inject their way to whatever toolcalls correspond to their desired outcome.
I had a home internet outage recently and needed the ISP to send a tech out.
Five years ago, you'd wait on-hold to talk to a human being with a script. "Is the light on your modem red?" "Try turning your modem on and off again." "OK let's schedule a technician for you."
Today I get a pre-LLM chatbot, with the same script, no waiting on-hold, available 24/7, with pictures and guides. It's somewhat faster because no human has to read the script over the phone, the words just appear instantaneously. Getting through the script tree to the outcome I want is an easy and obvious path.
The vast, vast majority of Tier 1 support is not something where the customer service agent has any agency. They have a script. Taking humans out of the loop here is an improvement.
I mentioned it before as well — I know first hand companies that have replaced most low level CS with elevenlabs + GPT ticket systems, while tracking the main metrics. Overall, it’s been better and cheaper than operating all the human agents.
I think they might be replacing first-tier support, the people who currently answer really stupid questions, handle really common cases, etc. When you need real support and there's no script you go to second-tier support...
As a person who aspires to actually read documentation, try common troubleshooting, google it, etc. before calling support, I'd really love to go directly to second-tier. But apparently the bulk of support calls are from low-effort users, and now they'd get the pleasure of an LLM, instead of a person, telling them to reset their router, make sure the thingie on the spray bottle is in the "on" position, or call the airline.
AI isn’t replacing people: leaders are.
The real disruption is whether we use it to multiply human potential, or to shrink it in the name of control.
Yeah, in the same way ice cream is linked to homicides!
For similar forms of automation, isn’t 13% somewhat inline with what you’d expect?
For example, I wonder how many fewer juniors were needed when we had better programming languages and tools? Do certain programming practices lead to fewer new workers? How many new factory workers aren’t hired on the factory floor due to a form of automation?
How many weavers were put out of work by textile automation in the 18th century?
https://en.wikipedia.org/wiki/Luddite
all of them, which is not a brilliant argument when you discover where, who, and how the majority of our clothes are made today, to be fair.
I’ll upvote you though because I hadn’t read the whole backstory of the luddites before.
I think you missed my point. I was trying to point out how similar the situation programmers face with AI is to the one the Luddites faced. People think of them as anti-technology, but they clearly weren't. HN feels a lot like a forum of weavers discussing the hot new automated looms and how excited they are for them, without realizing the true goal and scope.
But how do they know it's AI and not, say, other economy-related issues? Just the other day there was a report that companies are not adopting AI because they don't know about it... and now suddenly, so many are using AI that it causes 13% decline in jobs?
Read the paper and find out!
This is all covert AI advertising.
https://esborogardius.substack.com/p/if-ai-doesnt-fire-you-i...
> Current AI valuations assume one thing: mass workforce extinction pays the bill.
I disagree. My evidence is simple: just look at how the most recent generation of smartphones is being advertised. Look at the platforms like Base44 that are spamming their ads all over YouTube. The bet is diversified quite a bit, into the expectation that end users will (eventually) pay through the nose for AI-powered toys.
A recession could also explain this drop.
I think this is to be expected; all tech has societal impact like this. It's just happening over a span of a few years instead of decades and centuries. Failure in government policy making at its peak.
Wait; what would you look for a gov policy to do here? Stunt growth? Prevent efficiency or something else like what?
Tax incentives, labor laws, etc.
Yes, stunt growth if that growth is immediately harmful to the public. Provide adverse incentives that increase the cost of replacing humans: fewer or no government subsidies, incentives, or tax breaks if you replace humans with LLMs. Even without replacing humans, tax LLM usage like cigarettes.
In the short term, that is. Over time, wind down these artificial costs so that humans can transition to roles that can't be automated by LLMs: go to school, get training, etc. in other fields. Better that than having millions of unemployed, restless people collapsing your society.
But everyone is on the take, they want their short term lobbying money and stock tips so they can take what's theirs and run before the ship sinks. (if I can be a bit over dramatic :) )
I’m suss about this paper when it makes this claim:
“where AI is more likely to automate, rather than augment, human labor.”
Where is AI currently automating human labor? Not Software Engineering. Or - what’s the difference between AI that augments me so I can do the job of three people and AI that “automates human labor”
I was also curious about this. Table A1 on page 56 lists examples of positions that are automated vs augmented, and these are the positions the authors think are going to be most augmented (allegedly taken from [0]):
- Chief Executives
- Maintenance and Repair Workers, General
- Registered Nurses
- Computer and Information Systems Managers
After skimming [0], I can't seem to find a listing of jobs that would be augmented vs automated, just a breakdown of the % of analyzed queries that were augmenting vs automating, so I'm a bit confused where this is coming from.
[0]: https://arxiv.org/abs/2503.04761
When the Stanford paper looked at augment vs automate, they used the data from Anthropic's AI Economic Index. That paper defined the terms like this:
We also analyze how AI is being used for tasks, finding 57% of usage suggests augmentation of human capabilities (e.g., learning or iterating on an output) while 43% suggests automation (e.g., fulfilling a request with minimal human involvement).
From the data, software engineers are automating their own work, not augmenting. Anthropic's full paper is here:
https://arxiv.org/html/2503.04761v1
Sounds like a snake eating its own tail.
What is the effective difference between augment and automate? Either way, fewer man hours are needed to produce the same output.
If your job is to swing a hammer, then hammer swinging robot automates your job.
If your job is to swing a hammer, then drill robot augments your job (your job is now swing hammer and drill hole).
How that is different from drill bot automating human driller's job is an exercise left to the reader.
> What is the effective difference between augment and automate?
The paper says one of those is impacted, and the other isn't.
So, yeah, not only that's what the GP is asking, but I'd like to know it too.
> What is the effective difference between augment and automate?
If the field has a future.
The total output isn't going to stay the same, though.
Suffering has begun.
Soon the world will be ripe for a bloody revolution.
I see a worrisome trend. On one hand, many of my proto-boomer friends are suffering from ageism, and memes claim that over-50-year-olds are unemployable. Not 100% accurate, but there's some truth to it.
Then I hear about a lot of youngsters struggling to find work, and see articles like this.
Well, who's left? Is there a sweet spot at like 31 that are just cleaning up?
31 would line up with the post-housing-bubble recovery boom
beside the point, but over 50 = proto-boomer? You mean para-boomer, maybe? Gen X is <=60, I believe, so you're referring to the boomer/genx cusp, I think..
boomer-adjacent seemed like a mouthful
genx is now proto-boomer
Proto as a prefix means it's first or at least before.
Reading the paper [1], I disagree, I think we are actually seeing an economic downturn due to the infinite spending during the COVID-19 pandemic. Borrowing was high, and now the interest on that borrowing is high.
> The patterns we observe in the data appear most acutely starting in late 2022, around the time of rapid proliferation of generative AI tools. They also hold for both occupations with a high share of college graduates and ones with a low college share, suggesting deteriorating education outcomes during COVID-19 do not drive our results.
The economic downturn was a delayed effect of high spending at low interest rates during COVID-19. In the UK [2], interest rates were as low as 0.1% (19 Mar 20). While interest rates are low, it makes sense to invest heavily, and one of the largest investments most companies make is in employment. In late 2022 we saw interest rates rise above 1%, then 2%, and by mid-2023 they were above 5%. The largest cost to cut when borrowing is expensive is, again, employment. For every employee, the fully loaded cost (at medium to large businesses) is 1.5x-2x their wage. If you're a big tech firm and you cut 5 employees at $100k each, you could save up to $1mn a year by cutting the right staff. Apparently Microsoft has sacked some 15k employees this year [3]; assuming an average wage of $100k a year, they may have just wiped up to $3bn off their burn rate.
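A rough back-of-envelope version of that arithmetic (the $100k average wage and the 2x overhead multiplier are this comment's assumptions, not reported figures):

```python
def annual_savings(headcount: int, avg_wage: float, overhead: float) -> float:
    """Estimated yearly payroll saved by cutting `headcount` roles.

    Fully loaded cost per employee = wage * overhead multiplier.
    """
    return headcount * avg_wage * overhead

# 5 employees at $100k with a 2x overhead multiplier: ~$1mn/year
print(annual_savings(5, 100_000, 2.0))       # 1000000.0

# ~15k layoffs at the same assumed wage: up to ~$3bn/year
print(annual_savings(15_000, 100_000, 2.0))  # 3000000000.0
```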
As the paper itself concedes:
> One possibility is that our results are explained by a general slowdown in technology hiring from 2022 to 2023 as firms recovered from the COVID-19 Pandemic.
AI is being trialled as an approach to maintaining growth despite significantly deep cuts. The Microsoft CEO is under the impression that up to 30% of their code is now written “by software” [4] (are we talking about automatically generated files at compilation, or LLMs?); apparently the share at Google is also above 30%. I do not believe that AI has yet been convincingly linked with job losses.
[1] https://digitaleconomy.stanford.edu/wp-content/uploads/2025/...
[2] https://www.bankofengland.co.uk/boeapps/database/Bank-Rate.a...
[3] https://www.financialexpress.com/trending/microsoft-is-filte...
[4] https://techcrunch.com/2025/04/29/microsoft-ceo-says-up-to-3...
The study is bs. While executives are blaming AI, it is nowhere near replacement levels.
What I bet is happening under the covers is reprioritization of work, offshoring or both.
Why bet? In the news recently, Australian bank CBA was caught offshoring positions while claiming the jobs had been replaced by AI.
> What I bet is happening under the covers is reprioritization of work, offshoring or both.
AI has been frequently used as an explanation for layoffs.
Before AI, layoffs would be a positive signal to investors, but they'd be demoralizing to staff and/or harm the brand.
Now, you can say, "Wow, we're so good at technology, we've eliminated ___ jobs!" and try to get the best of both worlds.
My company did exactly this earlier in the year. It was a blatant lie and everyone who works here knew it. None of the people laid off were actually replaced with AI, the work they did was just eliminated.
Yeah, unquestioning “journalists” have allowed them to turn laying off thousands into an ad for their new tech.
It’s also just natural cost cutting from businesses that were previously massively over hiring, and outside of AI don’t exactly have a ton of areas with huge growing investment.
Plus slashing jobs like this keeps the plebs in line. They don’t like software engineers having the money and job security to raise a stink over things. They want drones terrified of losing everything.
This study feels pretty weak. Software as an occupation is collapsing, but it's not due to AI. Articles and "studies" like this are just a smoke screen to keep your eye off the ball.
Why is it collapsing?
This is good though right? There was decline in manufacturing jobs in the West over the last several decades because we could automate and outsource to cut costs. We all agree that cheap iPhones and clothes are good... Right...?
Now we get cheap services too!
Job losses to automation shouldn't be a sudden concern to people. This has been happening for decades. The only difference is that it's now impacting the more privileged in society as well.
Surely this must be linked to a general slowing of the economy.
Thinly veiled economic propaganda aside, I am dealing with a different AI mess every day. Technical debt is exploding everywhere I turn. An ever larger part of me these days wishes I could just call the bluff all at once and let all the companies in question learn the inevitable lessons here the hard way.
The worst case for me would be just needing to get a job like I had before being a dev; the stakes are so much grander for the companies. It's only really existential for the side of this that isn't me/us. I've been working since I was 15, I can figure it out. I'll be happier cutting veggies in a kitchen than every single CEO out there when all is said and done!
Sensationalist, alarmist, b.s. article.
It emphasizes "AI adoption linked to 13% decline," which implies causation. The study itself only claims "evidence consistent with the hypothesis."
The article also largely highlights job loss for young workers, while only briefly mentioning cases where AI complements workers.
The study's preliminary status (it is not peer reviewed) is noted, but only once and at the end. A more balanced article would have noted this at the beginning.
Articles on the same subject by the World Economic Forum, McKinsey, and Goldman Sachs are more balanced and less alarmist.
Short-term, discrete numbers like these are interesting to look at, but they don't really tell us much about the long-term trajectory. In parallel: [1].
[1] "Nvidia Forecasts Decelerating Growth After Two-Year AI Boom" <https://news.ycombinator.com/item?id=45053175>
Slow down people. Let's stop jumping to biases and see what we have here.
Note upfront: I'm not suggesting AI is not having an impact. That would be foolish. But I will say there's *a lot* less to the conclusion of this study than it seems, simply because the data is questionable. It's not that they did anything wrong per se; I won't press that here because it'll end up an HN cluster fuck. Cluster fuck aside, the caveats and associated doubt are enough to say, "Don't bet the farm on this study." Great banter for the bar? Sure.
It's an interesting study but I've seen it called "absolute proof" and other type things. Don't be fooled, it's not that.
https://digitaleconomy.stanford.edu/wp-content/uploads/2025/...
From the original study:
> "This study uses data from ADP, the largest payroll processing firm in America. The company provides payroll services for firms employing over 25 million workers in the US. We use this information to track employment changes for workers in occupations measured as more or less exposed to artificial intelligence"
a) I'm calling this out because I've seen posts on LinkedIn saying it was a sample of 25M. Nope! ADP simply does payroll for that many.
b) The size of the US workforce is ~165M, making ADP's coverage ~15% of the workforce.
https://www.statista.com/statistics/191750/civilian-labor-fo...
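The ~15% coverage figure checks out, given the study's 25M workers and a ~165M labor force (both rounded figures from the sources above):

```python
adp_workers = 25_000_000      # workers whose payroll ADP processes (per the study)
us_workforce = 165_000_000    # approximate US civilian labor force
coverage = adp_workers / us_workforce
print(f"{coverage:.1%}")      # 15.2%
```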
c) Do the businesses ADP serves come from particular industries? Are they of a particular size, or in particular geographic locations? It's not only about the size of the sample (which we'll get to shortly) but the nature of the companies (which we'll also get to shortly).
> "We make several sample restrictions for our main analysis sample."
d) It's great that they say this, but it should raise an eyebrow.
> "We include only workers employed by firms that use ADP’s payroll product to maintain worker earnings records. We also exclude employees classified by firms as part-time from the analysis and subset to people between the age of 18 and 70."
e) Translation: we did a slight bit of pruning (read: cherry-picking).
> "The set of firms using payroll services changes over time as companies join or leave ADP’s platform. We maintain a consistent set of firms across our main sample period by keeping only companies that have employee earnings records for each month from January 2021 through July 2025."
f) Translation: More cherry-picking.
> "In addition, ADP observes job titles for about 70% of workers in its system. We exclude workers who do not have a recorded job title."
g) Translation: More cherry-picking.
> "After these restrictions we have records on between 3.5 and 5 million workers each month for our main analysis sample, though we consider robustness to alternative analyses such as allowing for firms to enter and leave the sample."
h) 3.5M to 5.0M feels like a large enough sample... if it weren't so "restricted." Furthermore, there's no explanation of the 1.5M delta, or of how adding or removing that many workers impacts the analysis.
i) And they considered that why? And did what they did why? It's a significant assumption that gets nothing more than a hand wave.
> "While the ADP data include millions of workers in each month, the distribution of firms using ADP services does not exactly match the distribution of firms across the broader US economy."
j) Translation: as mentioned above ADP !== a representation of the broader economy.
> "Further details on differences in firm composition can be found in Cajner et al. (2018) and ADP Research (2025)."
k) Great, there's a citation, but given the acknowledged delta, isn't at least a line or two in order? Something about the nature of the delta, and THEN the citation?
l) Editorial: You might think this hand-wave is OK, but to me it's usually a tell and a smell.
m) Finally, do understand the nature of academia and null research (which has been mentioned on HN). In short, there is a (career / financial) incentive to find something novel (read: worth publishing). You advance your career by doing not-null research.
Again, I'm not suggesting anything nefarious per se. But this study is getting A LOT of attention. All things considered, more than it objectively deserves.
__Again: I'm not suggesting AI is not having an impact. That would be foolish.__
"AI adoption used as excuse for 13% decline in jobs for young U.S. workers"
[dupe] Discussion: https://news.ycombinator.com/item?id=45025978
And a better source article.
As I see it, it's really the lack of "capitalists" willpower to be actually capitalist.
We can't call it incompetence, because neither those whom we have come to know as capitalists nor their advisors are incompetent. Which means they quite literally do not want to offset any decline in jobs (or job creation) that can be linked to progress.
That's not strange. A "capitalist" wants market participation to grow, infinitely, which is possible. But those we have come to know as capitalists don't care about the markets, actual market growth, or market participation. They only care about the growth of the value of the markets, "however" that happens.
I highly recommend that journalists and economists dig a bit more radically honest into the matter. There'd be more value in that, more blog posts, more articles, more discussions on all platforms, and thus more participation.
I mean it's a scapegoat vs straw man vs actual culprit kind of situation ... isn't it?
Now it is getting harder and harder for young people
Of course it has. That's the entire point of AI.
Ostensibly it's to help programmers, or writers, or lawyers, or whomever. But those are just the users of AI.
The owners and buyers of AI at a company level are developing and using it to push down payroll expenses. That's it. It's to avoid paying people, and providing them benefits. Even if you fire 50% of your employees, realize it was a terrible mistake, and hire most of them back, it's a net reduction in payroll costs.
I will say though as someone who solo runs their own product business, right now feels like a great time to be building your own thing.
No idea if this will last long though.
I do wonder, will this drive the value of software “things” way down? Once everyone starts solo building their own things…
Fewer and fewer people to buy said thing while the layoffs and hiring freezes continue
Yeah, who’s going to pay for your single person AI-powered vibe coded calendar organiser product when nobody has a job?
Once stagflation really starts to set in, things will get rough.
> Even if you fire 50% of your employees, realize it was a terrible mistake, and hire most of them back, it's a net reduction in payroll costs.
This is inane. If an employer hired most of these employees back it means that firing them negatively impacted the bottom line.
It just means they bought the hype and thought they could actually use AI to fire people. Many companies try to do it, and I don't know how many it works for.
But I do know that companies fire people and stay short-staffed just to keep payroll down all the time. Even when externally that seems like a terrible idea, and likely impacts bottom line. It's important to realize just how much companies hate payroll. And AI is a great way to try to reduce it.
yet another clickbait "ai is taking jobs" study that doesn't investigate whether or not the employment decrease is directly caused by the ai adoption
Is it pure AI or a guy in India that can cover the gap using AI to create good enough slop to pass?
Everyone is doubling down on hiring IN India right now. H1B isn’t even a thing. It’s offshoring to Indians that are utilizing AI to ship good enough slop. Everyone’s India office is rapidly expanding.
I have a controversial take that the only reason knowledge jobs have been well paid and fairly safe in the West was because for a while they've been difficult to outsource, not because the skills themselves are particularly rare or hard to acquire.
A few decades ago there was a focus in several low-cost economies (like China) to build out a manufacturing base and because of cheap global shipping it just made sense to outsource manufacturing. It's not that manufacturing is unskilled or "unproductive" as some claim.
Over the last few years hundreds of millions of people from countries like India have gotten online, and in many of those countries education and English proficiency are now sufficient that, with the internet, knowledge jobs can be outsourced. And so obviously they will be.
This idea that there are jobs that require such high IQ and knowledge that only a Westerner can do them is kinda silly. We should assume a race to the bottom is the rule when it comes to labour. At the end of it the only jobs that will be safe are those which cannot be easily automated or easily outsourced. Those will probably be skilled manual labour jobs that require a physical presence – jobs like roofing or plumbing.