"Unfortunately the reality of LLM-based contributions has been mostly negative for us, from an increase in background noise due to worthless drive-by PRs full of hallucinations (that wouldn’t even compile, let alone pass CI), to insane 10 thousand line long first time PRs. In-between we also received plenty of PRs that looked fine on the surface, some of which explicitly claimed to not have made use of LLMs, but where follow-up discussions immediately made it clear that the author was sneakily consulting an LLM and regurgitating its mistake-filled replies to us."
I don't think it's the complete fanbase. However, there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason. Programming was a domain that filtered out those people because they found it hard to succeed at it. LLMs have changed that and it's a huge problem. It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.
"They may speed up the good programmers a little, but those people were able to program anyway without LLMs."
I don't think this is realistic. I'm a good programmer, and it speeds up my work a lot, from "make sense of this 10 repo project I haven't worked on recently" to "for this next step I need a vpn multiplexer written in a language I don't use" to, yeah, "this 10k line patch lets me see parts of design space we never could have explored before." I think it's all about understanding the blast radius. Sometimes a lot of code is helpful, sometimes more like a lot of help proving a fact about one line of code.
Like Simon says, if I'm driving by someone else's project, I don't send the generated pull request, I just file the bug report / repro that would generate it.
Can't we reach a compromise where proven track record of good use of LLM by a contributor or a company (eg. Bun) be pre-approved or entertained? Blanket ban on a new technology shouldn't be the default option.
yep. as an expert programmer there are things i did not have access to. for example, i have an embedded-lite hardware project that required a one-line patch to a linux kernel module.
i know what a kernel module is and im reasonably certain that the patch is safe, but there is no way in hell i would have found that solution (i would have given up). in a world without llms, the project would have died.
> However, there are lots of people in the world who live their whole life by vibing
Why are they often so desperate to lie and non-consensually harass others with their vibing rather than be honest about it? Why do they think they are "helping" with hallucinated rubbish that can't even build?
I use LLMs. It is not difficult to: ethically disclose your use, double check all of your work, ensure things compile without errors, not lie to others, not ask it to generate ten paragraphs of rubbish when the answer is one sentence, and respect the project's guidelines. But for so many people this seems like an impossible task.
> Why do they think they are "helping" with hallucinated rubbish that can't even build?
Because they can't tell the difference between what the machine is outputting, and what people have built. All they see is the superficial resemblance (long lines of incomprehensible code) and the reward that the people writing the code have got, and want that reward too.
"Main character energy". What they're really doing is protecting their view of themselves as smart, and they're making a contribution for the sake of trying to perform being an OSS dev rather than out of need or altruism.
AI is absolutely terrible for people like that, as it's the perfect enabler.
It's not about helping. It's about the feeling of clout. There are still plenty of people who look at Github profile activity to judge job candidates, etc. What gets measured gets repeated.
I believe that most of the ills of social media would disappear, if we eliminated the "like" and "upvotes" buttons and the view counts. Most open source garbage pull requests may likewise go away if contributions were somehow anonymous.
I think a lot of people who haven't given it more thought might see it as an arbitrary rule or even some kind of gatekeeping or discrimination. They haven't seen why people would want to not deal with the output.
This might not be helped by the fact that there are a lot of seemingly psychotic commenters attacking anything which might have touched an LLM or any generative model at some point. Their slur and expletive filled outbursts make every critical response look bad by vague association.
Having sensible explanations like in TFA for the rules and criticism clearly visible should help. But looking at other similar patterns, I'm not optimistic. And education isn't likely to happen since we're way past any Eternal September.
LLMs are in this case enabling bad behavior, but open source software has always been vulnerable to this. Similarly, people who use LLMs to do this kind of thing are the kind of people who would have done it without LLMs but for the large effort it would have taken. We're just learning now how large that group is.
This is a good thing, it's an opportunity to make open source development processes robust to this kind of sabotage.
Yeah that seems to be their primary use case, if I'm honest. It's possible to use them ethically and responsibly, much in the same way it's possible to write one's own code, and more broadly, do one's own work. Most people however, especially in our current cultural moment and with the perverse incentives our systems have created, are not incentivized to be ethical or responsible: they are incentivized to produce the most code (or most writing, most emails, whatever), and get the widest exposure and attention, for the least effort.
Hence my position from the start: if you can't be bothered to create it, I'm not interested in consuming it.
Before LLMs we could already see a growing abundance of half-baked engineers in it only for the good pay, willing to work double time to pull things through.
Management, unsurprisingly, deemed those precious: they could be emailed at any time and would work weekends to fix problems their own kind had caused. Sure, sir.
> It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.
If you will forgive an appeal to authority:
The hard thing about building software is deciding what one wants to say, not saying it. No facilitation of expression can give more than marginal gains.
For at least the last 3 decades programming was a field that rewarded utter mediocrity with (relatively to other fields) massive remuneration. It has been filled with opportunists for as long as I remember.
I think worth noting that a more impactful and maybe even bigger proportion of those opportunists is in management.
Regarding quality overall, I agree, it's truly a cursed field. It was bad before; and with LLMs, going against that tide seems more difficult than ever.
You are talking about bad programmers who are at least able to fool their managers for several years. The people OP is talking about could not even do that and most likely would have dropped out in the first week of trying to program full time, since they just don't have the aptitude and patience to get unblocked after their first compilation error. Now they can go very far with an LLM.
This is an excellent point. LLMs might merely be exposing and amplifying behaviors that were always there. This can be an opportunity, in that shining light on it may allow us to cleanse ourselves of it. It's fundamentally about integrity, and sadly it's becoming clearer how few possess it (if it ever wasn't!). But maybe we'll get better at measuring integrity, and make hiring/collaboration decisions based on it.
> there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason
This response was 1000% crafted with input from an LLM, or the user spends too much time reading output from LLMs.
I have never used an LLM to write. Writing forces me to think (and I edited the comment a couple of times when writing it which helped me clear up my thinking). "It's a viable way to live and sometimes it's the only way to live" is a personal realization that has taken me some time to understand. You can go back through my comment history to the time before LLMs to check if my style was different then.
If you run your writing through an LLM, it can poke holes in your argument, organize your ideas better, or point out that your tone is hostile/dismissive. It doesn’t need to be a replacement for writing or thinking, especially if you’re learning along the way.
So, in that way, the LLM will be your mentor: it will shape your way of thinking according to the algorithms and datasets stuffed into it by its corporate creators.
Do you really want it?
There is also a second face of this: people are lazy. They won't develop their own skills; rather, they'll off-load tasks to LLMs, so their communicative abilities will fade away.
> the LLM will be your mentor: it will shape your way of thinking according to the algorithms and datasets stuffed into it by its corporate creators.
How is this mutually exclusive with teaching better than most humans? Part of these "corporate" datasets include deep knowledge of the world's best literature and philosophy, for instance. Why can't it be both?
> Do you really want it?
If I'm in a hurry, don't know where to start, or don't have money for someone to teach me—sure.
> There is also a second face of this: people are lazy. They won't develop their own skills; rather, they'll off-load tasks to LLMs, so their communicative abilities will fade away.
This is a recapitulation of the Luddite argument during the Industrial Revolution. And it's valid, but it has consequences for all technological change, not just this one. There was a world before Google, the Web, the Internet, personal computing, and computers. The same argument applies across the board, and the pre-AI / post-AI cutoff looks arbitrary.
Ah, so now we get to the "ed tech" question. What is teaching? Is there a human element to it, and if so, what is it? Or is it something completely inhuman? Or do we need to clarify what meaning of "teaching" we're talking about before we have a discussion?
Rather, avoiding delegating these tasks to an LLM helps you practice that skill.
That said, I think it depends how you use it. You can learn from explanations, and you'd better avoid "rewrite this for me and do nothing else" kind of approach.
I don't get that impression at all. LLMs would have avoided the stylistic repetition of "live". Asking an LLM to reformulate the sentences you quoted yields this slop:
> There are a lot of people who go through life by vibing. And honestly: that’s not automatically “bad.” Sometimes it’s even the only workable way to get through things. The issue is that “vibe-first” people tend to have a pretty loose relationship with truth, rigor, and being pinned down by specifics. They’ll confidently move forward on what sounds right instead of what they can verify.
I'll finish this post with a sentence containing an em-dash -- just to confuse people -- and by remarking on how sad I find it that people latch onto dashes and complete sentences as the signifiers of LLM use, instead of the inconsistent logic and general sloppiness that's the actual problem.
> Programming was a domain that filtered out those people because they found it hard to succeed at it.
I think this is a very rosy view of programmers, not borne out by history. The people leading the vibe coding charge are programmers, rather than an external group.
I know it's popular to divide the world into the technically-literate and the credulous, but in this case the technical camp is also the one going all in.
Your employer allows/encourages this? Do you run that stuff in production? Would you mind telling us where you work so we can avoid using their products? It is just not possible to trust the software that emerges from the process you've described.
I'm firmly in the LLM fanbase. Not because I can't type code (was doing it for over 17 years, everywhere from low level hardware drivers in C to web frontend to robot development at home as a hobby - coding is fun!), but because in my profession it allows me to focus more on the abstraction layer where "it matters".
I'm not saying that I'm no longer dealing with code at all though. The way I work is interactively with the LLM and pretty much tell it exactly what to do and how to do it. Sometimes all the way down to "don't copy the reference like that, grab a deep copy of the object instead". Just like with any other type of programming, the only way to achieve valuable and correct results is by knowing exactly what you want and express that exactly and without ambiguity.
But I no longer need to remember most of the syntax for the language I happen to work with at the moment, and can instead spend time thinking about the high level architecture. To make sure each involved component does one thing and one thing well, with its complexities hidden behind clear interfaces.
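To give a concrete idea of the kind of instruction I mean ("grab a deep copy instead of copying the reference"), here's a minimal Python sketch; the names are made up for illustration, not from any actual session:

```python
import copy

config = {"limits": {"max_connections": 10}}

# Copying the reference: both names point at the same nested dict,
# so a mutation through either name is visible through both.
alias = config
alias["limits"]["max_connections"] = 99
assert config["limits"]["max_connections"] == 99

# A deep copy duplicates the nested structure, so later mutations
# of the copy don't leak back into the original.
snapshot = copy.deepcopy(config)
snapshot["limits"]["max_connections"] = 10
assert config["limits"]["max_connections"] == 99
```

Catching that kind of aliasing bug is exactly the sort of detail I still have to supply, regardless of who types the code.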
Engineers who refuse to, can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.
I generally do have that mindset, but over the past year of using Claude Code I've noticed that I'm clearly losing my understanding of the internals of projects. I do review LLM-generated code, understand it, no problem reading/following through. But then someone asks me a question, and I'm like… wait, I actually don't know. I remember the instructions I gave and reviewing the code, but I don't actually have a fine-details model of the implementation crystallized in my mind. I need to check: was that thing implemented the way I thought it was or not? Wait, it's actually wrong/not matching at all what I thought!
It's definitely becoming uncomfortable and makes me reconsider my use of Claude Code pretty significantly.
Same experience. I've been writing code for many decades, but that experience doesn't mean I can remember what I read when reviewing generated code. I write small, focused commits, but I have to take a day off each week to make changes by hand just to mentally keep up with my own codebase knowledge, and I still find structures that surprise me. It's not necessarily that the code quality is poor, but it's not what I (thought I) had designed. It's led to a weakening of my confidence when adding to or changing existing architecture.
I've had this issue too, and I feel it was an important lesson—kind of like the first time getting a hangover.
On the other hand, LLMs comment generated code better than I comment my own, so given a long enough time horizon, it could be more understandable later than code I've written myself (we've all had the experience of forgetting how things work).
One-off tasks and parts of the stack that already have lots of disposable code do not need the same scrutiny as everything else. Just as there is a broad continuum of code importance, there is a broad continuum of testing requirements, and this was the case before AI. Keeping this in mind, AIs can also do some verification and testing, too.
> Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.
Any examples how you see some engineers being left behind?
I'm starting to notice how those who don't use AI end up having to hand tasks over to people who can get them done quicker.
It is anecdotal for sure, but it's a pattern that seems to be emerging around me: expectations of velocity increase, and those who don't use AI can't keep up.
The normal distribution doesn't account for things like "huge megacorporations pour billions of dollars into accelerating product adoption" or "other companies force their employees to use AI whether they want to or not" though.
I wouldn’t say slightly slower; LLMs are massively useful for software engineering in the right hands.
For some personal projects I still stick to the basics and write everything by hand though. It’s kinda nice and grounding; and almost feels like a detox.
For any new software engineer, I'm a strong advocate of zero LLM use (except maybe as a Stack Overflow alternative) for your first few months.
The chat UX, with a fake human lying to you and framing things emotionally, really doesn't help. And it is pretty much not possible to get away from it, or at least I haven't yet found how.
I would love to see a model trained to behave way more like a tool instead of auto-completing from Reddit language patterns…
Apparently, the noise around the AI policy came from Bun's developers saying that the policy blocks upstreaming their performance PR. But the real reason seems to be that the PR's code itself isn't in great shape and introduces unhealthy complexity https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
> Parallel semantic analysis has been an explicitly planned feature of the Zig compiler for a long time, and it has heavily influenced the design of the self-hosted Zig compiler. However, implementing this feature correctly has implications not only for the compiler implementation, but for the Zig language itself! Therefore, to implement this feature without an avalanche of bugs and inconsistencies, we need to make language changes.
Yes, that reply provides convincing arguments for not merging the Bun fork, as it interferes with Zig's own roadmap for achieving even better results, while continuing to improve the whole language.
When somebody comments on a PR with "Incredible work, Jacob. It is an honor to call you my colleague." then it's safe to assume it's an out-of-the-ordinary contribution. Pretty much falling outside of the "in all likelihood".
Also 95% of those 30k lines changed are fully self-contained inside of the aarch64 directory and of the remaining changes it looks like the majority is just adding "aarch64" as another item into an existing list. There are a few core changes that to me look like they could be done in their own PRs, but also core maintainers get to decide if they want to apply bureaucracy to their own work.
> In successful open source projects you eventually reach a point where you start getting more PRs than what you’re capable of processing. Given what I mentioned so far, it would make sense to stop accepting imperfect PRs in order to maximize ROI from your work, but that’s not what we do in the Zig project. Instead, we try our best to help new contributors to get their work in, even if they need some help getting there. We don’t do this just because it’s the “right” thing to do, but also because it’s the smart thing to do.
I feel like if their goal is to prioritize contributors over contributions, it'd also logically follow that they should try to have descriptions where possible? Just to make exploring any set of changes and learning easier? Looked it over briefly, no Markdown or similar doc changes there either.
I mean the changes can be amazing, it's just that adding some description of what they are in more detail, alongside the considerations during development, for new folks or anyone wanting to learn from good code would also be due diligence.
How would you differentiate a 3000 line LLM commit made by the best models and good AI processes from a 3000 line commit made by the best human developer?
edit: Okay, I set the bar too high here with "best human developer" and the vague "good AI processes". My bad. Yes, LLMs are not quite there yet.
It's still fairly obvious just by skimming the code. The best AI models are still quite far from the best human developers in ability and especially in code quality.
If that happens and we have a way of reliably knowing if some code is produced to that high quality, then I think we probably can accept that AI coding is the only sensible option.
We definitely are not close to that point though and it's unclear if/when we will get there.
It seems to me that people might be arguing from conflicting hidden premises here. "AI Coding" is a spectrum that could mean something as simple as letting the LLM proofread your changes and then act on those with your own human brain, or it could mean just telling the agent what you want and let it rip and tear until it is done.
If I do the latter and submit a PR to something like Zig, I'll be certainly caught doing it and rightfully chastised. If I do the former, my PR will be better without anybody besides myself having any way of knowing how it got better. Probably I do something in between when I contribute to open-source these days.
Blanket banning all of these seems like a bad idea to me. It actively gates people like myself from contributing, because I respect these people and projects that much. It feels like I would be doing something they find disgusting if my work has touched an LLM and I obviously don't want to do that to people I respect. But it's fine, there are plenty of things to do in the world even when some doors are closed.
I do not presume to have any say on Zig project's well argued decisions[0] -- I'm not really even their user let alone someone important like a contributor. Their point of preferring human contact is superb, frankly. Probably a different kind of problem in an open-source project staffed with a lot of remote working people, where human contact is scarce.
How can AI possibly be better than “the best” when the corpus of training data now includes its own slop in addition to all the code by new devs/lazy devs/bad devs scattered all over the internet? Law of averages applies here.
> So while one could in theory be a valid contributor that makes use of LLMs, from the perspective of contributor poker it’s simply irrational for us to bet on LLM users while there’s a huge pool of other contributors that don’t present this risk factor.
> The people who remarked on how it’s impossible to know if a contribution comes from an LLM or not have completely missed the point of this policy and are clearly unaware of contributor poker.
The point isn't about the 3000 line PR, it's about do we think the submitter is going to stick around.
Because the pro-group are whining that the policy is preventing the merge, when in actual fact even if the policy did not exist, the PR is crap anyway.
I don’t see how it could be that bad (incorrect, specifically), considering bun is probably the most widely-used production use case of zig. But regardless, let’s say it’s a bad PR for the sake of argument - it’s beside the point. It cannot be merged no matter how good it is, due to the strict no-LLM policy.
> I don’t see how it could be that bad (incorrect, specifically), considering bun is probably the most widely-used production use case of zig.
That may be the case, but the bun project only needs zig to correctly compile bun. The zig project needs to be able to correctly compile all existing and possible zig programs.
I haven't reviewed things, but it's possible and even likely (at least based on my own experience with LLMs) that the validation is mostly focused on bun compilation.
This is the most common issue I see with LLM authored PRs. Yes it does fix the issue _right now_ but as a maintainer I need to consider how it affects the project in the future. But “contributors” get mad if you reject for those reasons. So I can understand having a blanket policy.
> I don’t see how it could be that bad (incorrect, specifically), considering bun is probably the most widely-used production use case of zig.
The PR is probably fine for bun’s purposes. That doesn’t make it a good PR for Zig’s purposes, and could very well paint Zig into a weird corner.
> It cannot be merged no matter how good it is, due to the strict no-LLM policy.
This is about meta-discourse. Of course it’s against the policy. That’s the point of discussing the PR: to get Zig to change the policy, or at least provide an exception in this case. Or to argue the opposite.
> Of course the policy is preventing the merge. That’s literally the point of the policy…
In this case it isn't the blocker - the fact that the dev took the time to read the PR in detail, comment on it, and provide reasons why it could not be merged makes it very clear to me that the policy wasn't the blocker.
If they were going to enforce the policy for this PR, they wouldn't have bothered to read it. The only reason to read it is to see if the policy is waived for this specific PR.
it could have been rewritten; rewriting PRs is cheap today, but that isn't the question. the question is: would it have been accepted had it met all the quality and engineering standards, with full disclosure that it was 90%+ LLM-generated?
> it could have been rewritten, rewriting PRs is cheap today
Rewriting PRs with LLMs is cheap, but often the output is no better than the previous revision (fixing one issue only to cause another one is very common IME). And reviewing each revision of the PR is not cheap.
I've had good experiences with people submitting AI generated PRs who then actually take the time to understand what's going on and fix issues (either by hand or with a targeted LLM generated fix) that are brought up in review. But it's incredibly frustrating when you spend an hour reviewing something only to have someone throw your review comments directly back at the LLM and have it generate something new that requires another hour of review.
> it could have been rewritten; rewriting PRs is cheap today, but that isn't the question. the question is: would it have been accepted had it met all the quality and engineering standards, with full disclosure that it was 90%+ LLM-generated?
In this case it looks like the answer is "Yes"; the PR was not dismissed immediately, it was first examined in great detail!
Why would the maintainer expend effort on something that was going to be rejected anyway?
> either effort was wasted or policy is at best 'not implemented'.
I don't understand this PoV - have you ever come across a policy in any environment that wasn't subject to case-by-case exceptions?
Even in highly regulated environments (banking/fintech, Insurance, Medical, etc), policies are subject to exceptions and exemptions, done on a case-by-case basis.
The notion, in this specific case, that "well they rejected it because of policy" is clearly nonsense and I don't understand why people are pushing this so hard when the explanation of why an exemption can't be made for this specific PR is public, accessible and, I feel, already public knowledge.
This is overstated. Not all LLM code is produced the same way. Code produced through substantial human creative input still falls under copyright, at least the way things are now. Besides, nothing legally prevents placing code under a license. Enforceability is the question, not permission.
It's a bit like saying speed limits don't apply on private property, therefore you can't have any traffic rules on your private racetrack.
Well let’s be real for a moment here before we get completely anti-AI.
Without AI, I’m a guy spending years learning C++ in spare time I don’t have to develop software concepts and solutions I want to work on TODAY.
The ZIG project, to me, has a place. Legacy coders right now do need protecting.
It’s not people like me that they need protection from.
It’s not even language models they need protection from.
What they need protection from are the corporate structures who falsely believe that this technology makes them obsolete.
The article talks about “playing the person, not the cards” and that thinking has one fatal flaw: the vibe coder is a person. The vibe coder may have creative agency that the legacy coder does not.
Look, I still cross up French and Spanish words because I took a year of each. Same with C++ syntax, Python syntax, HTML: I understand their structures, but I'm liable to start out writing a Python script and wind up with half a web page and a brutal error message in my IDE.
Zig’s motivation is correct in many ways I think. I am not really their target audience or their target coder. But I am also not their target enemy. Put the right group of legacy thinkers in my think tank, and the code would get even better.
It seems that Zig people are following the path of ZeroMQ [1]: "To enforce collective ownership of the project, which increases economic incentive to Contributors and reduces the risk of hijack by hostile entities."
A healthy contributor community is more important than mere code performance, quantity of features, lines of code, etc.
Unfortunately, those are largely words of a foregone era. The zeromq "community" today is tenuous. It has some really good people in it, the few that remain active, but the human-level processes and communication channels are ill-defined and not well "staffed". In some ways, this lack of human activity and interactivity is perhaps okay and even justified, given how stable libzmq and most of its bindings are (and the sub-ecosystem around particular bindings is a bit more active). Perhaps Hintjens' grand (and excellent, imo) vision got zeromq to where it is, but the project feels to have gone adrift since we lost him. Somewhat ironically, given his community-centric vision statement (the Guide), it seems a project needs a charismatic and active leader to gain and retain a community. I guess that says more about human nature than it does about software development.
I'm not sure how to tie this all back to the zig story other than to point out the stated premise that zig is not short of PRs and so they can pre-select for no-LLM contributions. I think that is a good move for them and I get the "contributor poker" idea. But, the game changes when the premise breaks and the flow of newbies reduces to a trickle. At that point, if there are still active zig people who still want newbies, they may need to broaden their net. But if/when that happens, it may be too late to recover by opening to LLM-assisted contributions.
If an AI improves developer productivity so much, why would maintainers of an OSS project want unknown contributors to sit in between the maintainer and the LLM? They'd be typing these queries into Claude Code themselves. To quote my colleague:
> We do not need a middleman to talk to AI models. We are not bottlenecked by coding.
I'm almost not using AI myself, but a possible scenario is that the contributor spends something like 20 hours in total.
Something like: using the AI to get an initial bad version, making some tweaks to the prompt, making some manual fixes, asking the AI to fix something else, noticing some new related feature and asking the AI to add it, running some benchmarks and deciding to remove a small feature, or perhaps deciding between two similar implementations, adding a few more manual fixes here and there, running the extended version of the automatic tests and finding a weird bug in an unusual setup, making a few fixes with the AI and manually. So after 20 hours of work, the final version has only 50 lines that have each been rewritten something like 5 times. Now the maintainer can review only the final version in an hour or so.
This is very different from spending 5 minutes asking the AI to write a 1000-line patch that doesn't even compile and sending it to the maintainer without looking at it.
I'm finding that AI, when successful, gives me 2-3x speedup. It's not the kind of thing I can give high-level instructions to like I can to a human.
I suspect the people who claim that AI works by only giving it high-level instructions are mostly working on "mindless" projects where a developer in the weeds wouldn't need to think very much.
Maybe you are not bottlenecked by coding. But there is a high probability that you will be bottlenecked by verifying the correctness of LLM-generated code.
> If an AI improves developer productivity so much,
You're not suggesting the only metric of productivity is lines of code, are you? And that the only benefit of using LLMs is generating code you're too lazy to type yourself?
It seems like this policy will help them win at contributor poker in the short term, but lose in the end. The next generation of developers will, for better or worse, grow up using AI assistance to write their code, but none of them will ever become a Zig contributor.
The problem with AI-generated code is that the code the model was trained on comes almost exclusively from public repositories. And a lot of those repositories are absolute dog $h!t or outdated. Crap in equals crap out.
> Zig values contributors over their contributions. Each contributor represents an investment by the Zig core team - the primary goal of reviewing and accepting PRs isn't to land new code, it's to help grow new contributors who can become trusted and prolific over time.
> LLM assistance breaks that completely. It doesn't matter if the LLM helps you submit a perfect PR to Zig
That’s the best rationale I’ve seen so far, and I fully support Zig’s decision here. I really appreciate their long-term vision for both the community and the actual project. I don’t think LLMs have such a great place in more collaborative efforts, to be honest. We will see how things evolve, but I do find that when getting AI-generated PRs I basically have to redo them myself (using LLMs, ironically… something I’m really starting to feel conflicted about).
> This makes a lot of sense to me. It relates to an idea I've seen circulating elsewhere: if a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?
The same argument applies to open source itself. Why use someone's project when you can just have the robot write your own? It's especially true if the open source project was vibe coded. AI and technology in general makes personalization cheap and affordable. Whereas earlier you had to use something that was mass produced to be satisfactory for everyone, now you have the hope of getting something that's outstanding for just you. It also stimulates the labor economy, because you have lots of people everywhere reinventing open source projects with their LLMs.
> Why use someone's project when you can just have the robot write your own?
I've been thinking about this a bunch recently, and I've realized that the thing I value most in software now isn't robust tests or thorough documentation - an LLM can spit those out in a few minutes. It's usage. I want to use software which other people have used before me. I want them to have encountered the bugs and sharp edges and sanded them down.
Depth of use over the lifetime of an app is a quality all its own that is often not appreciated. A recurring pattern at $dayjob is that a new manager or director will join a business unit and declare an existing app the most terrible, no good, horrible app they've seen, and that they're going to fix it. A year and a half later the new app is finally delivered, with 80% of the original functionality and a fresh set of bugs. The new dev team sees the surface functionality but misses a lot of the hard-earned nuance the old system accrued over time. This is a pattern that existed long before LLMs.
An LLM most definitely cannot spit out robust tests or thorough documentation. It can spit out some tests or some documentation, but they will not cover the user perspective or edge cases unless those are already documented somewhere. That's verified by both experience and just thinking about it for two seconds.
The sanding down you refer to is what generates those tests and documentation.
> but they will not cover the user perspective or edge cases unless those are already documented somewhere
Are you suggesting that LLMs can't test for people who use screen readers? Keyboard-only users? Slow network requests?
You're acting like the issues an app faces are so bespoke to the actual app itself (and have absolutely no relation to existing problems in this space) that an LLM couldn't possibly cover it. And it's just patently wrong.
I'm not talking about keyboards or screen readers or any sort of input testing, I'm talking about how the software is used in practice.
If you disagree with that, I think the onus is on you to show me that an LLM could simulate the full context in which a user interfaces with software. That's a ridiculous claim.
Feel free to show literally any evidence for this claim.
>Are you suggesting that LLM's can't test for people who use screen readers? Keyboard only users? Slow network requests?
I don't think it's feasible to fully simulate the full depth of actual usage, given that (especially in the case of screen readers and the like) there's a great deal of combinatorial depth and context to the problem. Which screen readers, on which operating systems, and which users thereof?
I feel similarly but IIUC I think that doesn’t strictly require an open source development model. I’ve benefited a huge amount from consuming and contributing to open source projects and I’m a bit worried that the “unit economics” changing might break some of the social dynamics upon which the ecosystem is built.
I chose my words carefully. "Robust tests" are tests that provide high coverage and aren't flaky. "Thorough documentation" likewise is documentation that describes as much of the code as possible.
Yep. I realised the same. No one reads docs, or goes through tests. Either way, it's easy to write useless tests. And easy to write useless docs. I don't think most even read the code. Now the difference is that it has become possible to write useless code.
So it's just the fact that others have already gone through the motions before I did. That's it really. I suppose in commercial settings, this is even more true and perhaps extends to compliance.
It may be able to spit out text that purports to be that, in a few minutes. But for most software, an LLM will not be able to spit out robust tests - let alone useful documentation. (And documentation which just replicates the parameter names and types is thorough...ly useless.)
I value software that reveals knowledge. The frontier LLMs were trained on all the code that institutions had been keeping to themselves. So they're revealing programming know-how on a scale that just wasn't possible with open source. LLMs are the ultimate Prometheus. Information is more accessible and useful now than it's ever been.
I have worked for several decades at many companies, located in many countries, on a few continents, from startups to some of the biggest companies in their fields. Therefore I have seen many proprietary programs.
On average, proprietary programs are not better than open-source programs, but usually worse, because they are reviewed by fewer people and because frequently the programmers who write them may be stressed by having to meet unrealistic timelines for the projects.
The proprietary programs have greater quantity, not quality, by being written by a greater number of programmers working full-time on them, while much work on open-source projects is done in spare time by people occupied with something else.
Many proprietary programs can do things which cannot be done by open-source programs, but only because of access to documentation that is kept secret in the hope of preventing competition.
While lawyers, and other people who do not understand how research and development is really done, put a lot of weight in the so-called "intellectual property" of a company, which they believe to be embodied in things like the source code of proprietary programs or the design files for some hardware, the reality is that I have nowhere seen anything of substantial value in this so-called IP. Everywhere, what was really valuable in the know-how of the company was not the final implementation that could be read in some source code, but the knowledge about the many other solutions that had been tried before and they worked worse or not at all. This knowledge was too frequently not written down in any documentation. Knowing which are the dead ends is a great productivity boost for an experienced team, because any recent graduate could list many alternative ways of solving a problem, but most of them would not be the right choice in certain specific circumstances.
> On average, proprietary programs are not better than open-source programs, but usually worse, because they are reviewed by fewer people and because frequently the programmers who write them may be stressed by having to meet unrealistic timelines for the projects.
There's also the fact that when you write open-source code, you're writing for a friendly audience. I've often found myself writing the code, letting it rest for a few hours, then rewriting it so that it is easier to read. Sometimes, the code gets substantially rewritten before I push.
There's no cooling period when you write code during your 9-5 job: it works, it has the required test coverage, ship it and move on to the next task.
The claim is also just categorically untrue. The largest source of training data by far is publicly available code on e.g. Github, so it mostly just gives you a way to recycle already-available code, without crediting the author, while allowing you to pretend you own it.
I remember hearing the same arguments in the early 2010s, when the "3D printing revolution" was just around the corner. Why would anyone buy anything anymore if you can download a model and print it in the privacy of your home? And make it infinitely customizable?
The whole point of having a civilization is that most things in life can be made someone else's problem and you can focus on doing one thing well. If I'm a dentist or if I run a muffler shop, there are only so many hours in a day, so I'd probably rather pay a SaaS vendor than learn vibecoding and then be stuck supervising a weird, high-maintenance underling that may or may not build me the app with the features I need (and that I might not be able to articulate clearly). There are exceptions, but they're just that, exceptions. If a vendor is reasonable and makes a competent product, I'll gladly pay.
The same goes for open source... even if an LLM could reliably create a brand new operating system from scratch, would I really want it to? I don't want to maintain an OS. I don't want to be in charge of someone who maintains an OS. I don't necessarily trust myself to have a coherent vision for an OS in the first place!
That only holds true for the smallest tier of open source projects. Past a certain point of complexity, it's unlikely you can expect the robot to read your mind well enough to provide something of high quality and 'outstanding for just you'.
The Zig project is certainly far beyond such capability.
You have to push the robot to be as fanatical as you are. It holds so much back, always aiming to do the simple normal thing that most people do, rather than the top-notch stuff it knows.
I'm finding this out the hard way. I set out to build a 1-page app. I thought it would take a day. It's 98% vibe-coded at this point. Even with AI implementing everything, it's taken several weekends and many evenings. And not because the AI is doing a bad job; it's just that as I see it come together, I have more and more feature requests. I've got a couple dozen left, but I can't just let the AI chew through them all at once. I'm effectively QA now. I have to make sure everything is just right.
LLM access is not yet universally available. There are those who can't exactly afford it. And there are also those with access who hit occasional or perennial issues, like Claude outages and general degraded performance over time. For example, a couple of months ago, when I had just started using Claude, I was easily making good progress on multiple projects within a week. Nowadays I'm hardly getting through much of anything, as most of the time Claude is just showing spinners, and it also feels like the code quality has taken a nosedive.
> The same argument applies to open source itself. Why use someone's project when you can just have the robot write your own
Because it takes hours/months/years of accumulated design decisions to get a great open source project. Something an AI agent can only approximate the surface of, unless you’re ready to spend a lot of time on it
I've been seeing a drop in PRs against my repositories. I have a couple of repositories with around a hundred stars. Nothing spectacular, but they were getting occasional PRs until last year. This year I've had almost none so far. My theory is that LLMs prefer sticking to mainstream projects. And since lots of developers are now leaning heavily on LLMs, they are biased toward ignoring most of what I provide.
And you indeed get a lot of wheel reinvention by LLMs because that is now cheap to do. So rather than using some obscure thing on Github (like my stuff), it's easier to just generate what you need. I've noticed this with my own choices in dependencies as well. I tend to just go with what the LLM suggests unless I have a very good reason not to.
Most people don’t have the ability to read code well enough to determine if an LLM output is good or not. And most people don’t have subscriptions to models that can develop non-trivial programs…
Maybe this will be a real problem in a couple years though.
Code aside, most people don't even know how to describe what they actually want it to do, and LLMs are still a loooong way away from mind reading. I've seen developers struggle to even write down what they want. Simple demos like they love to show off with snake-like games are fun and all but they're nothing like the complex opensource apps everyone seems to think we'll just generate with a simple prompt.
I think this ignores the amount of work needed to make LLM contributions be of high quality. It's much less work than making pure human contribution, but it's definitely not zero.
So centralizing that common work is a benefit of open-source just as much with LLMs as it was before.
>> Whereas earlier you had to use something that was mass produced to be satisfactory for everyone
As someone who recently started using OpenSCAD for a project I find this attitude quite irritating. You certainly did not "have to" use popular tools.
The OpenSCAD example is particularly illuminating because it's fussy and frustrating and clearly tuned towards a few specific maintainers; there's a ton of things I'd like changed. But I would never trust an LLM to do it! "Oh the output looks fine, cool" is not enough for a CAD program. "Oh, there are a lot of tests, cool" great, I have no idea what a thorough CAD test suite looks like. I would be a reckless idiot if I asked Claude to make me a custom SCAD program... unless I put in a counterproductive amount of work. So I'm fine with OpenSCAD.
I am also sincerely baffled as to how this stimulates the "labor economy." The most obvious objection is that Anthropic seems to be the only party here getting any form of economic benefit: the open-source maintainers are just plain screwed unless they compromise quality for productivity, and the LLM users are trading high-quality tooling built by people who understand the problem for shitty tooling built by a robot, in exchange for uncompensated labor. It only stimulates the "labor economy" in a Bizarro Keynesian sense, digging up glass bottles that someone forgot to put the money in.
I have seen at least 4 completely busted vibe-coded Rust SQLite clones in the last three months, happily used by people who think they don't need to worry their pretty little heads with routine matters like database design. It's a solved problem and Claude is on the case! In fact, unlike those stooopid human SQLite developers, Claude made it multithreaded! So fucking depressing.
This is funny because I was in the same situation, and actually used Claude to make a custom CAD program inspired by OpenSCAD :) https://fncad.github.io
You definitely need to have a strong sense of code design though. The AIs are not up to writing clean code at project scale on their own, yet.
This is a good example of what I mean! fnCAD appears to be a significantly buggier and highly incomplete version of OpenSCAD, where AI essentially grabbed the low-hanging fruit - albeit an impressively large amount of fruit - and left you with the hard parts. I fail to see how this solved any problems. Maybe it was an experiment, which is fine. But it's not even close to a viable CAD product, even by OpenSCAD's scruffy FOSS standards, and there's no feasible way to get it there without a ton of human work.
Not trying to denigrate the work here, as such. But this certainly didn't convince me that using AI to replace OpenSCAD (or any other major open-source project) is a good idea. The LLMs still aren't even close to being able to pull it off.
Anthropic will probably do what Google did in the 2000s, which is give jobs to all the open source developers whose work helped them get there.
Civilization isn't monotonic. People keep solving the same problems over and over again, telling the same stories with a different twist. For example in 1964 having a GUI work environment with a light pen as your mouse was a solved problem on IBM System/360. They had tools similar to CAD. So why don't we all just use that rather than make the same mistakes again. Each time a new way of doing things comes out, people get an opportunity to rewrite everything.
> Why use someone's project when you can just have the robot write your own?
Because it is incredibly expensive to write a replacement for semi-complex software? Good luck asking frontier models to write a replacement for Zig, Docker, VSCode, etc.
I have a similar sentiment, unfortunately. I briefly thought about ways to force them to stop, but all led to some sort of negative impact on privacy/freedom, such as identity verification.
The article explains Zig's stance in further detail, but the quoted part on its own caught my attention because my reading of it is rather "pro human communication" instead of "anti-AI".
I wonder - has it been confirmed that no LLMs for PRs literally means no AI assistance for code?
While I haven't codified it anywhere, the policy I would like is for issues and PR descriptions to have no LLMs - there is no reason to ban code completely though IMO. I would say that would be pro human-communication and a stance I would like a lot.
Good, pro AI people produce poor quality in everything they do. They are the least creative and worst problem solvers. I don't want them near me or my work.
The funny thing is, LLMs are amazingly good at writing Zig. They could inspect stdlib source code to fix compatibility issues with newer compilers, and they're quite prolific with idioms.
For example I got a working application with minimal prompt like "I need an X11 tray icon app showing battery charge level". BTW result: https://github.com/baverman/battray/
Now I'm trying to implement a full taskbar to replace bmpanel2. Results are very positive: I've got a feature-parity app in 1h with solid Zig code.
> They could inspect stdlib source code to fix compatibility issues with newer compilers and quite prolific with idioms.
In order to even say this, you need knowledge and understanding of the language. I suspect you are not the intended target of this policy. They are defending their project with a harsh policy, knowing full well there are false negatives. Contributions to FOSS were already in borderline crisis mode before LLMs, so it makes sense they’re desperate.
Their bet would be that the Venn diagram of LLM users overlaps with the irresponsible. I think that’s correct, but not because good programmers suddenly become irresponsible when they use LLMs; rather, an enormous barrage of bad programmers can now participate in domains where they otherwise wouldn’t even know where to begin.
> why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?
perhaps that's what the maintainers should be doing after all. it still takes time and tokens, though; neither is free.
I'd personally rather have the maintainers spend the time writing as much docs and specs as possible so the future LLMs have strong guardrails. zig's policy will be completely outdated in a couple years, for better or worse. someone will take bun's fork, add a codegen improvement here, add a linker improvement there and suddenly you'll have a better, faster zig outside of zig.
If it gets outdated they can review their policy. Right now it is sensible. We're at early ages of this type of AI and we don't know what the end game will be.
Someone forking it and making it better with AI is a possibility. If that happens, we'll know it would have been better for the project for the maintainers to just review the code. And if so, they can probably become maintainers of the fork. Or maybe they don't like that work and could just go do something else.
Zig strives to avoid numerous pitfalls, and I admire that.
Let's take a look at some of them:
1. Project control – if a LARGE company implements thousands of lines created by LLMs day after day – who is ultimately responsible for the project's progress? "You accept hundreds of PRs, so why not this one?"
And one more thing: will you be able to change the code yourself, or will you be forced to use LLMs? What if one of the "AI companies" implements a strict policy preventing "other tools that XXX" from editing the codebase?
2. Ownership. If most of the code was taken by an external company from their LLM, what about ownership of the code? The authors of Zig, the company, the authors of the original code, stolen by LLMs?
3. Liability. In the near future, a court may rule that LLMs are unethical and should not recombine code without the owners' prior consent. Who is responsible for damages and for removing the "stolen" code? The owners of Zig, the company that creates pull requests, or the authors of LLM programs?
4a. Vision. Creating and maintaining a large code base is very difficult – because without a broad perspective, vision, and the ability to predict and shape the future – code can devolve into an ugly mess of ad hoc fixes. We see this repeatedly when developers conclude, "This is unsustainable; the current code base prevents us from implementing the correct way to do things."
LLM programs cannot meet these requirements.
4b. There's another aspect – programming languages particularly suffer from a lack of vision or discipline. There are many factors that must be planned with appropriate capacity, vision, and rigor: the language itself should be modeled in a way that doesn't prevent correct implementation of behaviors. The standard library must be fast, concise, and stable. The compiler itself must be able to create code quickly and repeatably.
Users hate changes in a language – so if a language changes frequently, it is met with harsh criticism. Users hate incompatibility. Users hate technical debt and forced compatibility. Yes, there are conflicting requirements. The author of Zig understood this perfectly, having already gone through it himself (see, for example, "I/O Redesign").
This balance, in all aspects, is the pillar of human creativity.
To be honest, I'm not a huge fan of Zig because I dislike the tight syntax: too many periods and curly braces, which is why I prefer Odin. But I have a lot of affection and respect for Zig and its authors.
> If a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?
That's a fair thing to ask, though it seems like people will arrive at very different conclusions there.
Presumably this only applies to newcomers? The thrust of their policy is to nurture new contributors. Once one has established oneself as a meaningful contributor — which the Bun team surely must have done by now — then it doesn’t matter where the code came from.
…in theory. In reality, I’m sure a policy like this can’t be selective and fair at the same time. Pick one!
One reason I love writing production code in Ur/Web is that LLMs are incapable of synthesising something even remotely resembling it. Keeps me on my toes.
Ur/Web! That's something I haven't heard about in ages. Is it still in active development? In what circumstances are you using it? Fun, your own startup, is some secret big commercial user of it...?
"What are the intellectual-property risks of using generative AI tools?
The Oracle Contributor Agreement (OCA) requires that a contributor own the intellectual property rights in each contribution and be able to grant those rights to Oracle, without restriction. Most generative AI tools, however, are trained on copyrighted and licensed content, and their output can include content that infringes those copyrights and licenses, so contributing such content would violate the OCA. Whether a user of a generative AI tool has IP rights in content generated by the tool is the subject of active litigation."
The other side of this is that open source projects that allow AI tools will be more restrictive towards new contributors.
This already happens to some degree on large software projects with corporate backing (Web engines, compilers, etc.), where it is often not trivial to start contributing as an independent individual.
Reasonable people can disagree on whether one approach is inherently better than the other, as ultimately they seem to be optimising for different goals.
Imagine getting contributions from someone who has no access to the build system and tests.
If I have a test harness, and LLM workflow setup, it is easier to just write new code myself. I am not giving away my "secret sauce". And I will not have a debate "why this simple feature needs 1000 new tests...", and two days just to make a full release build.
For merge I have to do 99% of work anyway (analyze, autotest, build, smoke, regression test). I usually merge smaller commits just to be polite (and not to look like one man show), but there is no way to accept large refactoring!
Another more practical issue with using LLMs for Zig is that it’s a quickly changing language, meaning LLMs may generate code for an older version of the language.
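For a concrete illustration of that version drift (a sketch; the version boundary is from memory): Zig 0.11 changed the cast builtins to infer their result type from the result location, so the older two-argument form that dominates training data no longer compiles on newer compilers:

```zig
const std = @import("std");

pub fn main() void {
    const wide: i64 = 42;

    // Pre-0.11 style, still common in LLM output, now a compile error:
    //   const narrow = @intCast(u32, wide);

    // 0.11+ style: the destination type comes from the result location.
    const narrow: u32 = @intCast(wide);
    std.debug.print("{d}\n", .{narrow});
}
```

A model trained mostly on pre-0.11 code will happily emit the old form, then burn time and tokens trying to patch the resulting compile errors.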
Honestly, that doesn't sound too bad. It does not say you can't use LLMs, it just doesn't let LLMs be the author of a commit. Meaning, if you as a developer make yourself responsible for what the LLM wrote, go ahead. But be ready to answer the technical questions, be ready to get grilled in the code review, and be called if you get a CVE on that part of the code...
It requires the people contributing the work to have the integrity to actually follow the project’s rules. It’s not OK to violate the project’s rules just because you don’t think you’ll be found out as a filthy fucking liar.
I mean best of luck policing this is all I'm going to say. We will soon be back to the "core contributors only" kind of policy in many projects I imagine to avoid the slop spam. The verification will be at the conferences.
This seems like a sensible long-term strategy, a much better one than entering a token-fueled AI arms race against slop. It's not even clear what the end goal of such a race would be for an open source project. Open source software was traditionally about growing knowledgeable communities and giving users the ability to examine and modify the software they use. LLMs quite obviously blow that up on several levels. For starters, if you hate dealing with code and prefer prompts, it's unlikely that you will be generating code that's enjoyable to work with for people who do read it directly.
I don't have an opinion about Zig AI policy for contributions. Their project, their policies. Fine for me.
However, I wanted to give Zig a try in an agentic coding scenario. For tasks that would take a few seconds when choosing Python, Java, or JavaScript as a target language, it would take tens of minutes and waste millions of tokens before producing anything.
Almost any model gets stuck trying to figure out the correct syntax and correct libraries for a specific Zig version, fighting with compilation and function call parameters, frequently getting them wrong and going on side quests for things that should just work.
I guess the relative lack of resources and the language instability don't play well for models that try to generate Zig code. Using specific tools like zig-mcp helps only a bit.
Until LLM support for Zig improves (one needs to spend significant resources for that to happen), LLM-generated Zig code won't be good enough for either Zig programmers or Zig contributors.
the best PRs I get are from more senior level people who are at work, hit a specific problem they had, and wanted to help out the project with a good PR. Then you never hear from them again because, of course, they're busy!
When you have junior people come in with PRs and you do the whole hand-holding thing so they learn and grow and all that, they're there because my project is famous, they want to get credit (which I give them), then they're off to get jobs whereever and they are working with completely different technologies, and you never hear from them again either, because, of course, they're now busy!
Really, outside of my core group of hangers-on, Claude is the only contributor we have that doesn't leave us.
> This makes a lot of sense to me. It relates to an idea I've seen circulating elsewhere: if a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?
well yeah. I almost use PRs now just as a lazy means of issue prioritization. I'd love if github had more fine-grained controls to disable PRs but allow occasional contributors in (they don't).
Loris Cro banned me from his Zig forum because I disagreed with/corrected something he wrote.
I was also blocked from the Zig github repository, after being a frequent contributor to issue discussions, for reasons unknown (I was never informed, I just found out when I could no longer put a thumbs up on a comment).
(Ok ok I think we lost the fight already. I see soooooo many people using AI tools on github in the last ~2 weeks alone, claude in particular literally infiltrated everything there.)
Very convenient of Mr. Willison to omit the fact that Bun's upstream changes are total garbage and would not be upstreamed regardless of any policy on LLM-generated code, since they are, as a Zig core team member articulated in a classier way, shite.
>There’s the 4x speedup claimed by the Bun team, already available on Zig 0.16.0!
>Each [incremental] update is taking less than 0.4s, compared to the 120+ seconds taken to rebuild with LLVM. In other words, incremental updates are over 300 times faster on this codebase than fresh LLVM builds are. In comparison, an enhancement capped at a 4x improvement is pretty abysmal. [..] Again, this feature is available in Zig 0.16.0—you can use it!
I don't think I've ever heard anything positive about Zig. Every time I've seen the project mentioned is them using bizarre black and white moral judgements to justify stupid decisions.
You need to look past this. Zig is an excellent low-level language. Thanks to the comptime features, you can have high-level looking APIs while staying down to the metal. It's not for everyone, obviously, but as a language, it is really good.
You have to be wilfully blind, then. It gets rather frequently praised on HN (as much as any niche language can be), and they certainly don't make black-and-white moral judgements often.
Rule should be whatever the people running the project think the rule should be. If you've got your own project, do implement the anti-fully-autonomous-PRs rule for your project. But the creators of Zig do not owe you or me the rule we like.
IMHO: hard, inflexible rules like these are always deeply rooted in biases and personal convictions, not in facts. The suggested policy amendment by Claude at the end is much more honest, logical, and palatable.
> The argument assumes that unassisted PR authorship is what builds trustworthy contributors, and that LLM assistance prevents that growth.
No, I don't think that was the argument. As I understood it, unassisted contributions have higher chances to grow a trusted contributor. Not 100% vs 0% chances, but statistically higher. So, given limited resources, it makes sense to prefer unassisted over assisted contributions.
I don't believe that even the weakened version of the argument works -- it is based on an assumption, not fact.
Why would a contributor that uses AI assistance have fewer chances to be trusted?
I'm not talking about AI slop, but a contributor that takes time to understand a problem, find a solution, and discuss pros/cons alternatives. Using LLM assistance, of course.
You could extend that argument to any tool used by the developer, like a linter, sanitizer, the IDE itself, or even auto-completion. Why target LLMs specifically?
The more I think about it, the more nonsensical it is.
- What if I do everything by hand, but have an LLM review my work at the very end?
- What if I have an LLM guide me through the codebase just by specifying the files I should read and in what order, but I do all the reading myself?
- What if I do everything by hand, but then use an LLM to optimize a small part of an algorithm?
You can easily see how absurd it is to completely ban LLMs.
What matters is the quality and correctness of the contribution. Even with heavy LLM usage, unless the developer understands what problem they're solving, the quality will be sub-par.
"We won't take contributions that aren't hand-written assembly; these C 'high level' language patches are not allowed." Zig is a great project and language, but it will die on this hill.
On multiple occasions over the last months, I have been wishing the Zig/ZSF team would use LLMs. I've found many copy&paste errors that simply wouldn't exist if mundane tasks were delegated to a good LLM. Even in the Zig community, I've seen PRs to some projects I'm interested in boasting about how it was all human-made, yet containing all kinds of trivial logical errors that even the worst LLM would catch.
It's a good rationale. But it points the finger at a real bottleneck in open source development: the burden of manually reviewing contributions. And the need to automate that with AI as well. Reviews were already becoming a problem before AI. Lots of projects have been dealing with a large influx of contributions from inexperienced developers from all over the world looking to boost their CVs by increasing their Github statistics. It's the same dynamic that destroyed Stackoverflow. Which, thanks to AI, has been largely sidelined now. And now that AI is there, those same inexperienced developers are using it at scale to generate even more garbage contributions.
Doing manual reviews of everything is very labor intensive and not scalable. However, AIs are pretty good at doing code reviews and verifying adherence to guard rails, contributor guidelines, and other rules. It's not perfect, but it's an underused tool. Both by reviewers and contributors. If your contribution obviously doesn't comply with the guidelines, it should be rejected automatically. The word "obviously" here translates into "easy to detect with some AI system".
Projects should apply a lot of scrutiny to contributions by new contributors. And most of that scrutiny should be automated. They should reserve their attention for things that make it past automated checks for contribution quality, contributor reputability, adherence to whatever rules are in place, etc. Reputability is a good way to ensure that contributions from reputable sources get priority. If your reputation is not great, you should expect more scrutiny and a lower priority.
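To make the idea above concrete, here is a minimal sketch of what such automated pre-review triage could look like. Everything here is hypothetical: the `PullRequest` fields, the thresholds, and the tier names are invented for illustration, not taken from any real project's tooling.

```python
# Hypothetical pre-review triage: cheap, deterministic checks that bounce
# obviously non-compliant PRs before a human (or a model) spends time on them.
# All field names and thresholds below are made up for illustration.
from dataclasses import dataclass

@dataclass
class PullRequest:
    author_merged_prs: int   # prior merged PRs by this author (reputation proxy)
    lines_changed: int
    compiles: bool           # result of a CI build step
    description: str

def triage(pr: PullRequest) -> str:
    """Return 'reject', 'low-priority', or 'review'."""
    if not pr.compiles:
        return "reject"      # hard guard rail: it must at least build
    # First-time contributors with huge diffs get bounced back, not reviewed.
    if pr.author_merged_prs == 0 and pr.lines_changed > 1000:
        return "reject"
    if not pr.description.strip():
        return "low-priority"  # ask for a description before anyone reads it
    # Reputation-weighted priority: trusted authors skip the queue.
    return "review" if pr.author_merged_prs >= 3 else "low-priority"

print(triage(PullRequest(0, 9000, True, "big refactor")))   # reject
print(triage(PullRequest(5, 120, True, "fix off-by-one")))  # review
```

In practice the "compiles" and "description" checks would come from CI, and an LLM pass could slot in as one more check after these deterministic ones, only for PRs that survive triage.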
I don't know Zig, but I think that is not the problem here. Not exactly. The real question is: why spend all that effort to grow and align a pool of contributors if contributions are cheap and correct? Code review is not just about checking whether the code does what it says it does, and whether it does it according to the guidelines. The review is a touch point to discuss where the project is heading and how to get there. That is the most important part in the long run. As a collective human effort, it needs coordination. Some of it happens via the review process (especially for those not part of the core team that drafts the roadmap). One could document all those micro-decisions with their rationale, but it might end up being a whack-a-mole game. IMO, projects which allow AI usage need to spend way more effort on coordination (and quality assurance).
You still have to review everything manually again anyway. It's a compiler for a language; bugs and bad architecture decisions cost a lot. They moved to Codeberg, so there are fewer garbage PRs now. They try to grow a culture where you're expected to deliver good code in the PRs so the review takes less time.
It takes like 5 minutes to spot garbage PRs manually. An LLM can flood you with a wall of text where only half of the stuff makes sense. Also, they can't really spot bad architecture. It's a compiler in an unpopular language, don't forget that.
> [you can] stop accepting imperfect PRs in order to maximize ROI from your work, but that’s not what we do in the Zig project
The real bottleneck when you want to grow is connecting with the right people. An LLM is not helping with that if you want to build a community. When you use an LLM to skip the need to understand a problem, how are you ever going to get a reputation that I can trust?
The post is not about reputation; it's about seeing how people respond and work with you in a community.
EDIT: I see that you frame it as a help and a tool and sure it might work, but I feel like it is just another obstacle.
> the burden of manually reviewing contributions... [a]nd the need to automate that with AI as well.
I suggest we also automate the distribution and the use of software with AI as well, and then just all go to the beach and sip on some cocktails or something.
As a heavy AI-assisted open source code creator (and someone with 40+ years of dev experience), this seems wrong-headed to me. I think it is an excellent policy, as they say, to "value contributors over their contributions," but this policy excludes all potential contributors who use the latest tools. It will eventually doom zig to a smaller "artisanal" pool of contributors, rather than welcoming newbies and helping them become better open-source developers.
Presumably Zig are OK with that. For their particular project - a brand new programming language and compiler - a small pool of artisanal developers is likely preferable to a large pool of LLM-assisted developers who don't have as deep an understanding of how everything works.
There are plenty of less stringent projects for people who want to get better at open source to contribute to.
From https://kristoff.it/blog/contributor-poker-and-ai/:
"Unfortunately the reality of LLM-based contributions has been mostly negative for us, from an increase in background noise due to worthless drive-by PRs full of hallucinations (that wouldn’t even compile, let alone pass CI), to insane 10 thousand line long first time PRs. In-between we also received plenty of PRs that looked fine on the surface, some of which explicitly claimed to not have made use of LLMs, but where follow-up discussions immediately made it clear that the author was sneakily consulting an LLM and regurgitating its mistake-filled replies to us."
Pretty much sums up the LLM fanbase.
I don't think it's the complete fanbase. However, there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason. Programming was a domain that filtered out those people because they found it hard to succeed at it. LLMs have changed that and it's a huge problem. It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.
"They may speed up the good programmers a little, but those people were able to program anyway without LLMs."
I don't think this is realistic. I'm a good programmer, and it speeds up my work a lot, from "make sense of this 10-repo project I haven't worked on recently" to "for this next step I need a VPN multiplexer written in a language I don't use" to, yeah, "this 10k line patch lets me see parts of design space we never could have explored before." I think it's all about understanding the blast radius. Sometimes a lot of code is helpful, sometimes more like a lot of help proving a fact about one line of code.
Like Simon says, if I'm driving by someone else's project, I don't send the generated pull request, I just file the bug report / repro that would generate it.
> I'm a good programmer, and it speeds up my work a lot
The problem with this line of thinking is the same as with "I'm so good a C developer, my code is so safe!".
And we see what reality tells us instead: yes, there exist people for whom these claims are true, but they are not even a decently sized minority.
> to "for this next step I need a vpn multiplexer written in a language I don't use"
but that acceleration is exactly because you're not good at that language
Can't we reach a compromise where proven track record of good use of LLM by a contributor or a company (eg. Bun) be pre-approved or entertained? Blanket ban on a new technology shouldn't be the default option.
No.
yep. as an expert programmer there are things i did not have access to. for example, i have an embedded-lite hardware project that required a one line patch to a linux kernel module.
i know what a kernel module is and im reasonably certain that the patch is safe, but there is no way in hell i would have found that solution (i would have given up). in a world without llms, the project would have died.
It's great when I know how the code should look. Sometimes I just can't bring myself to write yet another http handler.
> However, there are lots of people in the world who live their whole life by vibing
Why are they often so desperate to lie and non-consensually harass others with their vibing rather than be honest about it? Why do they think they are "helping" with hallucinated rubbish that can't even build?
I use LLMs. It is not difficult to: ethically disclose your use, double check all of your work, ensure things compile without errors, not lie to others, not ask it to generate ten paragraphs of rubbish when the answer is one sentence, and respect the project's guidelines. But for so many people this seems like an impossible task.
> Why do they think they are "helping" with hallucinated rubbish that can't even build?
Because they can't tell the difference between what the machine is outputting, and what people have built. All they see is the superficial resemblance (long lines of incomprehensible code) and the reward that the people writing the code have got, and want that reward too.
the target audience of the cyber typer terminal [0]
[0] https://hackertyper.net/
"Main character energy". What they're really doing is protecting their view of themselves as smart, and they're making a contribution for the sake of trying to perform being an OSS dev rather than out of need or altruism.
AI is absolutely terrible for people like that, as it's the perfect enabler.
> Why do they think they are "helping"
It's not about helping. It's about the feeling of clout. There are still plenty of people who look at Github profile activity to judge job candidates, etc. What gets measured gets repeated.
I believe that most of the ills of social media would disappear, if we eliminated the "like" and "upvotes" buttons and the view counts. Most open source garbage pull requests may likewise go away if contributions were somehow anonymous.
I think a lot of people who haven't given it more thought might see it as an arbitrary rule or even some kind of gatekeeping or discrimination. They haven't seen why people would want to not deal with the output.
This might not be helped by the fact that there are a lot of seemingly psychotic commenters attacking anything which might have touched an LLM or any generative model at some point. Their slur and expletive filled outbursts make every critical response look bad by vague association.
Having sensible explanations like in TFA for the rules and criticism clearly visible should help. But looking at other similar patterns, I'm not optimistic. And education isn't likely to happen since we're way past any eternal september.
You're asking why oil doesn't act like water. It's not really an impossible task, it's just not one they agree with.
It's the same as cheating in a game. You are given an """advantage""", so lying about it seems like the best option
I wonder how many are account farming.
LLMs are in this case enabling bad behavior, but open source software has always been vulnerable to this. Similarly, people who use LLMs to do this kind of thing are the kind of people who would have done it without LLMs but for the large effort it would have taken. We're just learning now how large that group is.
This is a good thing, it's an opportunity to make open source development processes robust to this kind of sabotage.
> LLMs are in this case enabling bad behavior
Yeah that seems to be their primary use case, if I'm honest. It's possible to use them ethically and responsibly, much in the same way it's possible to write one's own code, and more broadly, do one's own work. Most people however, especially in our current cultural moment and with the perverse incentives our systems have created, are not incentivized to be ethical or responsible: they are incentivized to produce the most code (or most writing, most emails, whatever), and get the widest exposure and attention, for the least effort.
Hence my position from the start: if you can't be bothered to create it, I'm not interested in consuming it.
Before LLMs we could already see a growing abundance of half-baked engineers only in it for the good pay, willing to work double time to pull things off.
Management, unsurprisingly, deemed those precious. They could be emailed anytime, working weekends to fix problems their kind had caused. Sure sir.
They excel at communication, perfecting the art.
Now LLMs are there to accelerate the trend.
> It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.
If you will forgive an appeal to authority:
The hard thing about building software is deciding what one wants to say, not saying it. No facilitation of expression can give more than marginal gains.
- Fred Brooks, 1986
> It's hard to know if LLMs will end up being a net win for the industry.
True. Regardless of that, with LLMs we are for sure taking on technical debt like never before.
"Claude, don't create any technical debt please"
wouldn't LLMs do all the tasks that deterministic programs are doing? like ChatGPT filing taxes for you instead of using TurboTax.
For at least the last 3 decades programming was a field that rewarded utter mediocrity with (relatively to other fields) massive remuneration. It has been filled with opportunists for as long as I remember.
I think worth noting that a more impactful and maybe even bigger proportion of those opportunists is in management.
Regarding quality overall, I agree, it's truly a cursed field. It was bad before; and with LLMs, going against that tide seems more difficult than ever.
You are talking about bad programmers who are at least able to fool their managers for several years. The people OP is talking about could not even do that and most likely would have dropped out in the first week trying to program full time since they just don't have the aptitude and patience to get unblocked after their first compilation error. Now they can go very far with an LLM.
This is an excellent point. LLMs might merely be exposing and amplifying behaviors that were always there. This can be an opportunity, in that shining light on it may allow us to cleanse ourselves of it. It's fundamentally about integrity, and sadly it's becoming clearer how few possess it (if it ever wasn't!). But maybe we'll get better at measuring integrity, and make hiring/collaboration decisions based on it.
> there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason
This response 1000% was crafted with input from an LLM, or the user spends too much time reading output from llms.
I have never used an LLM to write. Writing forces me to think (and I edited the comment a couple of times when writing it which helped me clear up my thinking). "It's a viable way to live and sometimes it's the only way to live" is a personal realization that has taken me some time to understand. You can go back through my comment history to the time before LLMs to check if my style was different then.
It says a lot that most readers can't distinguish good writing from something an LLM spat out.
Ray Kroc's genius was to make people forget that you get what you pay for.
False equivalency. If you had the humility to run your own writing through an LLM first, it would have caught it. Just saying.
Not picking on you in particular, but most of the anti-AI crowd can’t present their case compellingly and have an utter lack of humility.
If you run your writing through an LLM, it can poke holes in your argument, organize your ideas better, or point out that your tone is hostile/dismissive. It doesn’t need to be a replacement for writing or thinking, especially if you’re learning along the way.
So, in that way, the LLM will be your mentor; it will shape your way of thinking according to algorithms and datasets stuffed into it by corporate creators.
Do you really want it?
There is also a second face of that: people are lazy. They wouldn't develop their own skills but rather off-load tasks to LLMs, so their communicative abilities will fade away.
That looks like a strong dystopia to me.
> the LLM will be your mentor; it will shape your way of thinking according to algorithms and datasets stuffed into it by corporate creators.
How is this mutually exclusive with teaching better than most humans? Part of these "corporate" datasets include deep knowledge of the world's best literature and philosophy, for instance. Why can't it be both?
> Do you really want it?
If I'm in a hurry, don't know where to start, or don't have money for someone to teach me—sure.
> There is also a second face of that: people are lazy. They wouldn't develop their own skills but rather off-load tasks to LLMs, so their communicative abilities will fade away.
This is a recapitulation of the Luddite argument during the Industrial Revolution. And it's valid, but it has consequences for all technological change, not just this one. There was a world before Google, the Web, the Internet, personal computing, and computers. The same argument applies across the board, and the pre-AI / post-AI cutoff looks arbitrary.
> teaching better than most humans
Ah, so now we get to the "ed tech" question. What is teaching? Is there a human element to it, and if so, what is it? Or is it something completely inhuman? Or do we need to clarify what meaning of "teaching" we're talking about before we have a discussion?
All of which are parts of the writing and thinking skillset, no?
Right. It can enhance that skillset. Are you suggesting it can’t?
This wouldn’t be a plausible position.
Rather that avoiding delegating it to LLM for these tasks helps you practice that skill.
That said, I think it depends how you use it. You can learn from explanations, and you'd better avoid "rewrite this for me and do nothing else" kind of approach.
I don't get that impression at all. LLMs would have avoided the stylistic repetition of "live". Asking an LLM to reformulate the sentences you quoted yields this slop:
> There are a lot of people who go through life by vibing. And honestly: that’s not automatically “bad.” Sometimes it’s even the only workable way to get through things. The issue is that “vibe-first” people tend to have a pretty loose relationship with truth, rigor, and being pinned down by specifics. They’ll confidently move forward on what sounds right instead of what they can verify.
I'll finish this post with a sentence containing an em-dash -- just to confuse people -- and by remarking on how sad I find it that people latch onto dashes and complete sentences as the signifiers of LLM use, instead of the inconsistent logic and general sloppiness that's the actual problem.
> Programming was a domain that filtered out those people because they found it hard to succeed at it.
I think this is a very rosy view of programmers, not borne out by history. The people leading the vibe coding charge are programmers, rather than an external group.
I know it's popular to divide the world into the technically-literate and the credulous, but in this case the technical camp is also the one going all in.
Fanbase, maybe. Software engineers using these projects? Probably forking and updating themselves.
FWIW, I've opened a half dozen PRs from LLMs and had them approved. I have some prompts I use to make them very difficult to tell they are AI.
However if it is a big anti-llm project I just fork and have agents rebase my changes.
Your employer allows/encourages this? Do you run that stuff in production? Would you mind telling us where you work so we can avoid using their products? It is just not possible to trust the software that emerges from the process you've described.
I'm firmly in the LLM fanbase. Not because I can't type code (was doing it for over 17 years, everywhere from low level hardware drivers in C to web frontend to robot development at home as a hobby - coding is fun!), but because in my profession it allows me to focus more on the abstraction layer where "it matters".
I'm not saying that I'm no longer dealing with code at all though. The way I work is interactively with the LLM and pretty much tell it exactly what to do and how to do it. Sometimes all the way down to "don't copy the reference like that, grab a deep copy of the object instead". Just like with any other type of programming, the only way to achieve valuable and correct results is by knowing exactly what you want and express that exactly and without ambiguity.
But I no longer need to remember most of the syntax for the language I happen to work with at the moment, and can instead spend time thinking about the high level architecture. To make sure each involved component does one thing and one thing well, with its complexities hidden behind clear interfaces.
Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.
This mindset is fine (it's mine essentially too).
But it absolutely has to be combined with verification/testing at the same speed as code production.
I generally do have that mindset, but over the past 1y of Claude code I do notice that I’m clearly losing my understanding of the internals of projects. I do review LLM generated code, understand it, no problem reading/following through. But then someone asks me a question, and I’m like… wait, I actually don’t know. I remember the instructions I gave and reviewing the code but don’t actually have a fine-details model of the actual implementation crystallized in my mind, I need to check, was that thing implemented the way I thought it was or not? Wait, it’s actually wrong/not matching at all what I thought! It’s definitely becoming uncomfortable and makes me reconsider my use of Claude code pretty significantly
Same experience. I've been writing code for many decades, but that experience doesn't mean I can remember what I read when reviewing generated code. I write small, focused commits, but I have to take a day off each week to make changes by hand just to mentally keep up with my own codebase knowledge, and I still find structures that surprise me. It's not necessarily that the code quality is poor, but it's not like I (thought I) had designed it. It's led to a weakening of my confidence when adding to or changing existing architecture.
I've had this issue too, and I feel it was an important lesson—kind of like the first time getting a hangover.
On the other hand, LLM-generated code comments better than I do, so given a long enough time horizon, it could be more understandable at a later time than code I've written myself (we've all had the experience of forgetting how things work).
It's not. Invariably, the code is locally fine and globally nonsense.
One-off tasks and parts of the stack that already have lots of disposable code do not need the same scrutiny as everything else. Just as there is a broad continuum of code importance, there is a broad continuum of testing requirements, and this was the case before AI. Keeping this in mind, AIs can also do some verification and testing, too.
> Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.
Any examples how you see some engineers being left behind?
I'm starting to notice how those who don't use AI end up having to hand tasks over to people who can get them done quicker.
It is anecdotal for sure, but it's a pattern that seems to be emerging around me that expectations of velocity increases, and those who don't use AI can't keep up.
Probably in cognitive surrender. I have one such colleague and he is driving me crazy. "Claude said that ..."
Not really - I imagine as with almost everything in life there's a normal distribution, in this case of the quality with which people use AI tools.
The normal distribution doesn't account for things like "huge megacorporations pour billions of dollars into accelerating product adoption" or "other companies force their employees to use AI whether they want to or not" though.
Fake it 'till you make it. Seems like LLMs have caught on to that too.
You can curb an LLM into doing what you want. Unfortunately people don't have the patience or the skill.
People who have skill can do the same without LLMs, maybe slightly slower on average but on more predictable schedule.
I wouldn’t say slightly slower; LLMs are massively useful for software engineering in the right hands.
For some personal projects I still stick to the basics and write everything by hand though. It’s kinda nice and grounding; and almost feels like a detox.
For any new software engineer, I’m a strong advocate of zero LLM use (except maybe as a stack overflow alternative) for your first few months.
The chat UX with a fake-human lying to you and framing things emotionally really doesn’t help. And it is pretty much not possible to get away from it, or at least I haven’t found yet how.
I would love to see a model trained to behave way more like a tool instead of auto-completing from Reddit language patterns…
Apparently, the noise around the AI policy came from Bun's developers saying that the policy blocks upstreaming their performance PR. But the real reason seems to be that the PR's code itself isn't in great shape and introduces unhealthy complexity https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
> Parallel semantic analysis has been an explicitly planned feature of the Zig compiler for a long time, and it has heavily influenced the design of the self-hosted Zig compiler. However, implementing this feature correctly has implications not only for the compiler implementation, but for the Zig language itself! Therefore, to implement this feature without an avalanche of bugs and inconsistencies, we need to make language changes.
Yes, that reply provides convincing arguments for not merging the Bun fork, as it interferes with Zig's own roadmap for achieving even better results, while continuing to improve the whole language.
Not only this, but also:
Bun's fork will exhibit nondeterministic behavior.
As if that was a bad thing in 2026!
...why does it being 2026 make nondeterminism more desirable or reasonable?
A single PR for a 3000-line addition would, in all likelihood, be rejected anyway.
Really depends on the author and context. Large PRs are often justified for compiler work; you have a lot of pieces to touch at the same time.
Doubt it: https://github.com/ziglang/zig/pull/24536
When somebody comments on a PR with "Incredible work, Jacob. It is an honor to call you my colleague." then it's safe to assume it's an out of the ordinary contribution. Pretty much falling outside of the "in all likelihood".
3000 line LLM commit is not that.
Also 95% of those 30k lines changed are fully self-contained inside of the aarch64 directory and of the remaining changes it looks like the majority is just adding "aarch64" as another item into an existing list. There are a few core changes that to me look like they could be done in their own PRs, but also core maintainers get to decide if they want to apply bureaucracy to their own work.
No description provided. I love this PR. But yeah, try being anyone besides Jacob and submitting that!
> In successful open source projects you eventually reach a point where you start getting more PRs than what you’re capable of processing. Given what I mentioned so far, it would make sense to stop accepting imperfect PRs in order to maximize ROI from your work, but that’s not what we do in the Zig project. Instead, we try our best to help new contributors to get their work in, even if they need some help getting there. We don’t do this just because it’s the “right” thing to do, but also because it’s the smart thing to do.
I feel like if their goal is to prioritize contributors over contributions, it'd also logically follow that they should try to have descriptions where possible? Just to make exploring any set of changes and learning easier? Looked it over briefly, no Markdown or similar doc changes there either.
I mean the changes can be amazing, it's just that adding some description of what they are in more detail, alongside the considerations during development, for new folks or anyone wanting to learn from good code would also be due diligence.
How would you differentiate a 3000 line LLM commit made by the best models and good AI processes from a 3000 line commit made by the best human developer?
edit Okay, I set the bar too high here with "best human developer" and vague "good AI processes". My bad. Yes, LLM is not quite there yet.
A personal relationship and trust, as seems to be the case here?
By using my brain.
Don't be ridiculous! We don't do that anymore.
Read it?
It's still fairly obvious just by skimming the code. The best AI models are still quite far from the best human developers in ability and especially in code quality.
When the best AI models are the same or better than the best[1] human developers, what then?
We're already at the point talking about best vs. best.
If that happens and we have a way of reliably knowing if some code is produced to that high quality, then I think we probably can accept that AI coding is the only sensible option.
We definitely are not close to that point though and it's unclear if/when we will get there.
It seems to me that people might be arguing from conflicting hidden premises here. "AI Coding" is a spectrum that could mean something as simple as letting the LLM proofread your changes and then act on those with your own human brain, or it could mean just telling the agent what you want and let it rip and tear until it is done.
If I do the latter and submit a PR to something like Zig, I'll be certainly caught doing it and rightfully chastised. If I do the former, my PR will be better without anybody besides myself having any way of knowing how it got better. Probably I do something in between when I contribute to open-source these days.
Blanket banning all of these seems like a bad idea to me. It actively gates people like myself from contributing, because I respect these people and projects that much. It feels like I would be doing something they find disgusting if my work has touched an LLM and I obviously don't want to do that to people I respect. But it's fine, there are plenty of things to do in the world even when some doors are closed.
I do not presume to have any say on Zig project's well argued decisions[0] -- I'm not really even their user let alone someone important like a contributor. Their point of preferring human contact is superb, frankly. Probably a different kind of problem in an open-source project staffed with a lot of remote working people, where human contact is scarce.
https://kristoff.it/blog/contributor-poker-and-ai/
How can AI possibly be better than “the best” when the corpus of training data now includes its own slop in addition to all the code by new devs/lazy devs/bad devs scattered all over the internet? Law of averages applies here.
Don't rely on "the corpus"; rely on reasoning, code, tools, and best practices.
Billion times faster than a human, no tiring, no miscalculation, no brain-fart, no cheating.
Because LLM models are obviously much more than the sum of their parts.
Oh, which parts are those? Do tell!
The post that inspired this post [0] says:
> So while one could in theory be a valid contributor that makes use of LLMs, from the perspective of contributor poker it’s simply irrational for us to bet on LLM users while there’s a huge pool of other contributors that don’t present this risk factor.
> The people who remarked on how it’s impossible to know if a contribution comes from an LLM or not have completely missed the point of this policy and are clearly unaware of contributor poker.
The point isn't about the 3000 line PR, it's about do we think the submitter is going to stick around.
[0] https://kristoff.it/blog/contributor-poker-and-ai/
It seems to be trivially easy for everyone except people heavily invested in LLMs to spot LLM slop.
Jacob is part of the core team, not a random outside contributor.
Very different context: that PR is from a maintainer, and trusted member of Zig, which surely discussed the implementation/design internally as well
What’s the point in debating the PR quality? The policy explicitly forbids all LLM code, so that policy is of course the “real reason”.
> What’s the point in debating the PR quality?
Because the pro-group are whining that the policy is preventing the merge, when in actual fact even if the policy did not exist, the PR is crap anyway.
I don’t see how it could be that bad (incorrect, specifically), considering bun is probably the most widely-used production use case of zig. But regardless, let’s say it’s a bad PR for the sake of argument - it’s beside the point. It cannot be merged no matter how good it is, due to the strict no-LLM policy.
> I don’t see how it could be that bad (incorrect, specifically), considering bun is probably the most widely-used production use case of zig.
That may be the case, but the bun project only needs zig to correctly compile bun. The zig project needs to be able to correctly compile all existing and possible zig programs.
I haven't reviewed things, but it's possible and even likely (at least based on my own experience with LLMs) that the validation is mostly focused on bun compilation.
Do you think they skipped the main zig test suite or something? Only tested bun compilation? That seems unlikely to me
They didn't take into account the long-run impacts of the changes on future development, etc.
I recommend reading the explanation given by one of the Zig devs, as it's a very clear and solid one.
This is the most common issue I see with LLM authored PRs. Yes it does fix the issue _right now_ but as a maintainer I need to consider how it affects the project in the future. But “contributors” get mad if you reject for those reasons. So I can understand having a blanket policy.
> I don’t see how it could be that bad (incorrect, specifically), considering bun is probably the most widely-used production use case of zig.
The PR is probably fine for bun’s purposes. That doesn’t make it a good PR for Zig’s purposes, and could very well paint Zig into a weird corner.
> It cannot be merged no matter how good it is, due to the strict no-LLM policy.
This is about meta-discourse. Of course it’s against the policy. That’s the point of discussing the PR: to get Zig to change the policy, or at least provide an exception in this case. Or to argue the opposite.
Of course the policy is preventing the merge. That’s literally the point of the policy…
> Of course the policy is preventing the merge. That’s literally the point of the policy…
In this case it isn't the blocker - the fact that the dev took the time to read the PR in detail, comment on it, and provide reasons why it could not be merged makes it very clear to me that the policy wasn't the blocker.
If they were going to enforce the policy for this PR, they wouldn't have bothered to read it. The only reason to read it is to see if the policy is waived for this specific PR.
OTOH why bother to polish the PR if it won't get accepted anyway?
> OTOH why bother to polish the PR if it won't get accepted anyway?
As the Zig maintainer so patiently explained, no amount of "polish" can fix the PR because it is misaligned to the correctness that they require.
IOW, that PR is so far off the reservation, unless it is completely rewritten, it won't be accepted.
it could have been rewritten, rewriting PRs is cheap today, but that isn't the question. the question is, would it have been accepted had it met all the quality and engineering standards and full disclosure that it was 90%+ LLM generated?
> it could have been rewritten, rewriting PRs is cheap today
Rewriting PRs with LLMs is cheap, but often the output is no better than the previous revision (fixing one issue only to cause another one is very common IME). And reviewing each revision of the PR is not cheap.
I've had good experiences with people submitting AI generated PRs who then actually take the time to understand what's going on and fix issues (either by hand or with a targeted LLM generated fix) that are brought up in review. But it's incredibly frustrating when you spend an hour reviewing something only to have someone throw your review comments directly back at the LLM and have it generate something new that requires another hour of review.
> it could have been rewritten, rewriting PRs is cheap today, but that isn't the question. the question is, would it have been accepted had it met all the quality and engineering standards and full disclosure that it was 90%+ LLM generated?
In this case it looks like the answer is "Yes"; the PR was not dismissed immediately, it was first examined in great detail!
Why would the maintainer expend effort on something that was going to be rejected anyway?
because the policy is clearly 'reject' and yet significant time has been spent - either effort was wasted or policy is at best 'not implemented'.
> either effort was wasted or policy is at best 'not implemented'.
I don't understand this PoV - have you ever come across a policy in any environment that wasn't subject to case-by-case exceptions?
Even in highly regulated environments (banking/fintech, Insurance, Medical, etc), policies are subject to exceptions and exemptions, done on a case-by-case basis.
The notion, in this specific case, that "well they rejected it because of policy" is clearly nonsense and I don't understand why people are pushing this so hard when the explanation of why an exemption can't be made for this specific PR is public, accessible and, I feel, already public knowledge.
No amount of rewriting will help you if you, fundamentally, wrote the wrong thing, as is the case here.
People forget that LLM code cannot be covered by copyright. So LLM code cannot be placed under an open source license.
This is overstated. Not all LLM code is produced the same way. Code produced through substantial human creative input still falls under copyright, at least the way things are now. Besides, nothing legally prevents placing code under a license. Enforceability is the question, not permission.
It's a bit like saying speed limits don't apply on private property, therefore you can't have any traffic rules on your private racetrack.
Because it's Bun. Which is practically the use case testimonial of Zig.
Well let’s be real for a moment here before we get completely anti-AI.
Without AI, I’m a guy spending years learning C++ in spare time I don’t have to develop software concepts and solutions I want to work on TODAY.
The ZIG project, to me, has a place. Legacy coders right now do need protecting.
It’s not people like me that they need protection from.
It’s not even language models they need protection from.
What they need protection from are the corporate structures who falsely believe that this technology makes them obsolete.
The article talks about “playing the person, not the cards” and that thinking has one fatal flaw: the vibe coder is a person. The vibe coder may have creative agency that the legacy coder does not.
Look, I still cross up French and Spanish words because I took a year of each. C++ syntax, Python syntax, HTML — I understand their structures, but I’m liable to start out writing a Python script and wind up with half a web page and a brutal error message in my IDE.
Zig’s motivation is correct in many ways I think. I am not really their target audience or their target coder. But I am also not their target enemy. Put the right group of legacy thinkers in my think tank, and the code would get even better.
-The Court Jester of Vibe Code
It seems that Zig people are following the path of ZeroMQ [1]: "To enforce collective ownership of the project, which increases economic incentive to Contributors and reduces the risk of hijack by hostile entities."
A healthy contributor community is more important than mere code performance, quantity of features or lines of code, etc..
[1] https://zguide.zeromq.org/docs/chapter6
Unfortunately, those are largely words of a foregone era. The zeromq "community" today is tenuous. It has some really good people in it, the few that remain active, but the human-level processes and communication channels are ill defined and not well "staffed". In some ways, this lack of human activity and interactivity is perhaps okay and even justified given how stable libzmq and most of its bindings are (and the sub-ecosystem around particular bindings are a bit more active). Perhaps Hintjens' grand (and excellent, imo) vision got zeromq to where it is but the project feels to have gone adrift since we lost him. Somewhat ironic to his community-centric vision statement (the guide) it seems a project needs a charismatic and active leader to gain and retain a community. I guess that says more about human nature than it does about software development.
I'm not sure how to tie this all back to the zig story other than to point out the stated premise that zig is not short of PRs and so they can pre-select for no-LLM contributions. I think that is a good move for them and I get the "contributor poker" idea. But, the game changes when the premise breaks and the flow of newbies reduces to a trickle. At that point, if there are still active zig people who still want newbies, they may need to broaden their net. But if/when that happens, it may be too late to recover by opening to LLM-assisted contributions.
My issue with AI-generated OSS contributions is:
If an AI improves developer productivity so much, why would maintainers of an OSS project want unknown contributors to sit in between the maintainer and the LLM? They'd be typing these queries into Claude Code themselves. To quote my colleague:
> We do not need a middleman to talk to AI models. We are not bottlenecked by coding.
I'm almost not using AI, but a possible scenario is that the contributor spends like 20 hours in total.
Something like using the AI to get an initial bad version, make some tweaks to the prompt, make some manual fixes, ask the AI to fix something else, noticing some new related feature and asking the AI to add it, making some benchmarks and deciding to remove a small feature, or perhaps deciding between two similar implementations, add a few more manual fixes here and there, run the extended version of the automatic tests and find a weird bug in an unusual setup, make a few fixes with the AI and manually. So after 20 hours of work, the final version has only 50 lines that have been rewritten like 5 times each. Now the maintainer can review only the final version in 1 hour or so.
This is very different to spending 5 minutes asking the AI to write a patch, that has 1000 lines that does not even compile and sending it to the maintainer without looking at it.
I'm finding that AI, when successful, gives me 2-3x speedup. It's not the kind of thing I can give high-level instructions to like I can to a human.
I suspect the people who claim that AI works by only giving it high-level instructions are mostly working on "mindless" projects where a developer in the weeds wouldn't need to think very much.
maybe you are not bottlnecked by coding. but there is high probability that you will be bottlenecked by verifying the correctness of LLM-generated code.
That is indeed the point I was making.
Where is the real bottleneck, if I may ask?
> If an AI improves developer productivity so much,
You're not suggesting the only metric of productivity is lines of code are you? And that the only benefit of using LLMs is for generating code you're too lazy to type yourself?
It seems like this policy will help them win at contributor poker in the short term, but lose in the end. The next generation of developers will, for better or worse, grow up using AI assistance to write their code, but none of them will ever become a Zig contributor.
Luckily, if that ends up being the case, they can change the policy. It’s a FOSS project — not a constitutional amendment.
The problem with AI generated code is that the code the model was trained on almost exclusively comes from public repositories. And there's a lot of repositories that are absolute dog $h!t or outdated. Crap in equals crap out.
Ha, I had this thought a few months ago; it made me wonder how a model trained on just John Carmack's code would fare.
> Zig values contributors over their contributions. Each contributor represents an investment by the Zig core team - the primary goal of reviewing and accepting PRs isn't to land new code, it's to help grow new contributors who can become trusted and prolific over time.
> LLM assistance breaks that completely. It doesn't matter if the LLM helps you submit a perfect PR to Zig
That’s the best rationale I’ve seen so far, and I fully support Zig’s decision here. I really appreciate their long term vision for both the community and the actual project. I don’t think LLMs have such a great place in more collaborative efforts, to be honest. Though we will see how things evolve, but I do see that when getting AI generated PRs I basically have to redo them myself (using LLMs, ironically… something I’m really starting to feel conflicted about).
> This makes a lot of sense to me. It relates to an idea I've seen circulating elsewhere: if a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?
The same argument applies to open source itself. Why use someone's project when you can just have the robot write your own? It's especially true if the open source project was vibe coded. AI and technology in general makes personalization cheap and affordable. Whereas earlier you had to use something that was mass produced to be satisfactory for everyone, now you have the hope of getting something that's outstanding for just you. It also stimulates the labor economy, because you have lots of people everywhere reinventing open source projects with their LLMs.
> Why use someone's project when you can just have the robot write your own?
I've been thinking about this a bunch recently, and I've realized that the thing I value most in software now isn't robust tests or thorough documentation - an LLM can spit those out in a few minutes. It's usage. I want to use software which other people have used before me. I want them to have encountered the bugs and sharp edges and sanded them down.
Depth of use over the lifetime of an app is a quality all its own that is often not appreciated. A recurring pattern at $dayjob is that a new manager or director will join a business unit and declare an existing app as the worst terrible, no good, horrible app they've seen and they're going to fix that. A year and a half later the new app is finally delivered with 80% of the original functionality and a fresh set of bugs. The new dev team sees the surface functionality but misses a lot of the hard earned nuance the old system accrued over time. This is a pattern that existed long before LLMs.
Yes, see e.g. a quarter-century-old (!!) https://www.joelonsoftware.com/2000/04/06/things-you-should-...
Good read!
An LLM most definitely cannot spit out robust tests or thorough documentation. It can spit out some tests or some documentation, but they will not cover the user perspective or edge cases unless those are already documented somewhere. That's verified by both experience and just thinking about it for two seconds.
The sanding down you refer to is what generates those tests and documentation.
> but they will not cover the user perspective or edge cases unless those are already documented somewhere
Are you suggesting that LLM's can't test for people who use screen readers? Keyboard only users? Slow network requests?
You're acting like the issues an app faces are so bespoke to the actual app itself (and have absolutely no relation to existing problems in this space) that an LLM couldn't possibly cover it. And it's just patently wrong.
I'm not talking about keyboards or screen readers or any sort of input testing, I'm talking about how the software is used in practice.
If you disagree with that, I think the onus is on you to show me that an LLM could simulate the full context in which a user interfaces with software. That's a ridiculous claim.
Feel free to show literally any evidence for this claim.
>Are you suggesting that LLM's can't test for people who use screen readers? Keyboard only users? Slow network requests?
I don't think it's feasible to fully simulate the full depth of actual usage, given that (especially in the case of screen readers and the like) there's a great deal of combinatorial depth and context to the problem. Which screen readers, on which operating systems, and which users thereof?
I can’t tell if you’re being sarcastic or not
I feel similarly but IIUC I think that doesn’t strictly require an open source development model. I’ve benefited a huge amount from consuming and contributing to open source projects and I’m a bit worried that the “unit economics” changing might break some of the social dynamics upon which the ecosystem is built.
> the thing I value most in software now isn't robust tests or thorough documentation - an LLM can spit those out in a few minutes.
Can it if we stop defining "robust tests" as "a lot of test code lines" and "good documentation" as "lengthy documentation"?
I chose my words carefully. "Robust tests" are tests that provide high coverage and aren't flaky. "Thorough documentation" likewise is documentation that describes as much of the code as possible.
I didn't use the word good.
Yep. I realised the same. No one reads docs, or goes through tests. Either way, it's easy to write useless tests. And easy to write useless docs. I don't think most even read the code. Now the difference is that it has become possible to write useless code.
So it's just the fact that others have already gone through the motions before I did. That's it really. I suppose in commercial settings, this is even more true and perhaps extends to compliance.
> No one reads docs, or goes through tests.
I regularly do both when trying to use library, especially unfamiliar to me.
Dare I say you're in the minority
> an LLM can spit those out in a few minutes.
It may be able to spit out text that purports to be that, in a few minutes. But for most software, an LLM will not be able to spit out robust tests - let alone useful documentation. (And documentation which just replicates the parameter names and types is thorough...ly useless.)
That's why I said "thorough" and not "good".
I value software that reveals knowledge. The frontier LLMs were trained on all the code that institutions had been keeping to themselves. So they're revealing programing know-how on a scale that just wasn't possible with open source. LLMs are the ultimate Prometheus. Information is more accessible and useful now than it's ever been.
> The frontier LLMs were trained on all the code that institutions had been keeping to themselves.
Lolz! I haven’t encountered “code that institutions had been keeping to themselves” that got even remotely close to OSS in quality.
I promise you, "the code that institutions had been keeping to themselves" is not nearly as special or good as you are implying here.
True.
I have worked during several decades in many companies, located in many countries, in a few continents, from startups to some of the biggest companies in their fields. Therefore I have seen many proprietary programs.
On average, proprietary programs are not better than open-source programs, but usually worse, because they are reviewed by fewer people and because frequently the programmers who write them may be stressed by having to meet unrealistic timelines for the projects.
The proprietary programs have greater quantity, not quality, by being written by a greater number of programmers working full-time on them, while much work on open-source projects is done in spare time by people occupied with something else.
Many proprietary programs can do things which cannot be done by open-source programs, but only because of access to documentation that is kept secret in the hope of preventing competition.
While lawyers, and other people who do not understand how research and development is really done, put a lot of weight in the so-called "intellectual property" of a company, which they believe to be embodied in things like the source code of proprietary programs or the design files for some hardware, the reality is that I have nowhere seen anything of substantial value in this so-called IP. Everywhere, what was really valuable in the know-how of the company was not the final implementation that could be read in some source code, but the knowledge about the many other solutions that had been tried before and they worked worse or not at all. This knowledge was too frequently not written down in any documentation. Knowing which are the dead ends is a great productivity boost for an experienced team, because any recent graduate could list many alternative ways of solving a problem, but most of them would not be the right choice in certain specific circumstances.
> On average, proprietary programs are not better than open-source programs, but usually worse, because they are reviewed by fewer people and because frequently the programmers who write them may be stressed by having to meet unrealistic timelines for the projects.
There's also the fact that when you write open-source code, you're writing for a friendly audience. I've often found myself writing the code, letting it rest for a few hours, then rewriting it so that it is easier to read. Sometimes, the code gets substantially rewritten before I push.
There's no cooling period when you write code during your 9-5 job: it works, it has the required test coverage, ship it and move on to the next task.
The claim is also just categorically untrue. The largest source of training data by far is publicly available code on e.g. Github, so it mostly just gives you a way to recycle already-available code, without crediting the author, while allowing you to pretend you own it.
So you're both saying all the alpha in Claude comes from open source devs like me? Even when I'm wrong I'm right.
I remember hearing the same arguments in the early 2010s, when the "3D printing revolution" was just around the corner. Why would anyone buy anything anymore if you can download a model and print it in the privacy of your home? And make it infinitely customizable?
The whole point of having a civilization is that most things in life can be made someone else's problem and you can focus on doing one thing well. If I'm a dentist or if I run a muffler shop, there are only so many hours in a day, so I'd probably rather pay a SaaS vendor than learn vibecoding and then be stuck supervising a weird, high-maintenance underling that may or may not build me the app with the features I need (and that I might not be able to articulate clearly). There are exceptions, but they're just that, exceptions. If a vendor is reasonable and makes a competent product, I'll gladly pay.
The same goes for open source... even if an LLM could reliably create a brand new operating system from scratch, would I really want it to? I don't want to maintain an OS. I don't want to be in charge of someone who maintains an OS. I don't necessarily trust myself to have a coherent vision for an OS in the first place!
That only holds true for the smallest tier of open source projects. Past a certain point of complexity, it's unlikely you can expect the robot to read your mind well enough to provide something of high quality and 'outstanding for just you'.
The Zig project is certainly far beyond such capability.
You have to push the robot to be as fanatical as you are. It holds so much back, always aiming to do the simple normal thing that most people do, rather than the top-notch stuff it knows.
I'm finding this out the hard way. I set out to build a 1 page app. I thought it would take a day. It's 98% vibe coded at this point. Even with AI implementing everything, it's taken several weekends and many evenings. And not because AI is doing a bad job; it's just that as I see it come together, I have more and more feature requests. I've got a couple dozen left but I can't just let the AI chew through them all at once. I'm effectively QA now. Have to make sure everything is just right.
LLM access is not yet universally available. There are those who can't exactly afford it. And there are also those with access but with occasional or perennial issues, like Claude outages and general degraded performance over time. For example, a couple of months ago when I just started using Claude, I was easily making good progress on multiple projects within a week. Nowadays I'm hardly getting through much of anything, as most of the time Claude is just showing spinners, and it also feels like the code quality has taken a nosedive.
> The same argument applies to open source itself. Why use someone's project when you can just have the robot write your own
Because it takes hours/months/years of accumulated design decisions to get a great open source project. Something an AI agent can only approximate the surface of, unless you’re ready to spend a lot of time on it
I've been seeing a drop in PRs against my repositories. I have a couple of repositories with around a hundred stars. Nothing spectacular but they were getting occasional PRs until last year. This year I've had almost none so far. My theory is that LLMs prefer sticking to mainstream projects. And since lots of developers are now leaning heavily on LLMs, they are biased to ignoring most of what I provide.
And you indeed get a lot of wheel reinvention by LLMs because that is now cheap to do. So rather than using some obscure thing on Github (like my stuff), it's easier to just generate what you need. I've noticed this with my own choices in dependencies as well. I tend to just go with what the LLM suggests unless I have a very good reason not to.
Most people don’t have the ability to read code well enough to determine if an LLM output is good or not. And most people don’t have subscriptions to models that can develop non-trivial programs…
Maybe this will be a real problem in a couple years though.
Code aside, most people don't even know how to describe what they actually want it to do, and LLMs are still a loooong way away from mind reading. I've seen developers struggle to even write down what they want. Simple demos like they love to show off with snake-like games are fun and all but they're nothing like the complex opensource apps everyone seems to think we'll just generate with a simple prompt.
I think this ignores the amount of work needed to make LLM contributions be of high quality. It's much less work than making pure human contribution, but it's definitely not zero.
So centralizing that common work is a benefit of open-source just as much with LLMs as it was before.
>> Whereas earlier you had to use something that was mass produced to be satisfactory for everyone
As someone who recently started using OpenSCAD for a project I find this attitude quite irritating. You certainly did not "have to" use popular tools.
The OpenSCAD example is particularly illuminating because it's fussy and frustrating and clearly tuned towards a few specific maintainers; there's a ton of things I'd like changed. But I would never trust an LLM to do it! "Oh the output looks fine, cool" is not enough for a CAD program. "Oh, there are a lot of tests, cool" great, I have no idea what a thorough CAD test suite looks like. I would be a reckless idiot if I asked Claude to make me a custom SCAD program... unless I put in a counterproductive amount of work. So I'm fine with OpenSCAD.
I am also sincerely baffled as to how this stimulates the "labor economy." The most obvious objection is that Anthropic seems to be the only party here getting any form of economic benefit: the open-source maintainers are just plain screwed unless they compromise quality for productivity, and the LLM users are trading high-quality tooling built by people who understand the problem for shitty tooling built by a robot, in exchange for uncompensated labor. It only stimulates the "labor economy" in a Bizarro Keynesian sense, digging up glass bottles that someone forgot to put the money in.
I have seen at least 4 completely busted vibe-coded Rust SQLite clones in the last three months, happily used by people who think they don't need to worry their pretty little heads with routine matters like database design. It's a solved problem and Claude is on the case! In fact unlike those stooopid human SQLite developers, Claude made it multithreaded! So fucking depressing.
This is funny because I was in the same situation, and actually used Claude to make a custom CAD program inspired by OpenSCAD :) https://fncad.github.io
You definitely need to have a strong sense of code design though. The AIs are not up to writing clean code at project scale on their own, yet.
This is a good example of what I mean! fnCAD appears to be a significantly buggier and highly incomplete version of OpenSCAD, where AI essentially grabbed the low-hanging fruit - albeit an impressively large amount of fruit - and left you with the hard parts. I fail to see how this solved any problems. Maybe it was an experiment, which is fine. But it's not even close to a viable CAD product, even by OpenSCAD's scruffy FOSS standards, and there's no feasible way to get it there without a ton of human work.
Not trying to denigrate the work here, as such. But this certainly didn't convince me that using AI to replace OpenSCAD (or any other major open-source project) is a good idea. The LLMs still aren't even close to being able to pull it off.
Anthropic will probably do what Google did in the 2000s, which is give jobs to all the open source developers whose work helped them get there.
Civilization isn't monotonic. People keep solving the same problems over and over again, telling the same stories with a different twist. For example, in 1964 having a GUI work environment with a light pen as your mouse was a solved problem on the IBM System/360. They had tools similar to CAD. So why don't we all just use that rather than make the same mistakes again? Each time a new way of doing things comes out, people get an opportunity to rewrite everything.
> Why use someone's project when you can just have the robot write your own?
Because it is incredibly expensive to write a replacement for semi-complex software? Good luck asking frontier models to write a replacement for Zig, Docker, VSCode, etc.
> Why use someone's project when you can just have the robot write your own?
If it is doable, then it would be worth considering as an alternative.
> It also stimulates the labor economy, because you have lots of people everywhere reinventing open source projects with their LLMs.
not sure what you mean by that
LLMs really can't do as much as you people think they can.
LLMs are not as smart as the LLM vendors claim them to be.
If they were, we wouldn't be having this conversation, because they would be fully autonomous.
People who blindly submit LLM-generated code, or who do not cite its usage, really need to stop doing it.
> need to stop doing it
They won't I suspect. If there isn't any good way to give them a good smack for doing it then I don't know what would make them stop.
I have a similar sentiment, unfortunately. I briefly thought about ways to force them to stop, but all of them led to some sort of negative impact on privacy/freedom, such as identity verification.
The article explains Zig's stance in further detail, but the quoted part on its own caught my attention because my reading of it is rather "pro human communication" instead of "anti-AI".
They're banning all AI though, so it looks pretty much anti-AI to me.
I wonder - has it been confirmed that no LLMs for PRs literally means no AI assistance for code?
While I haven't codified it anywhere, the policy I would like is for issues and PR descriptions to have no LLMs - there is no reason to ban code completely though IMO. I would say that would be pro human-communication and a stance I would like a lot.
Good, pro AI people produce poor quality in everything they do. They are the least creative and worst problem solvers. I don't want them near me or my work.
Your whole schtick, judging by your dozens of comments, is a sense of superiority over pro-AI people and LLMs.
I'm willing to bet I produce higher quality code than you with AI assistance.
The funny thing is that LLMs are amazingly good at writing Zig. They could inspect stdlib source code to fix compatibility issues with newer compilers and are quite prolific with idioms.
For example, I got a working application with a minimal prompt like "I need an X11 tray icon app showing battery charge level". BTW result: https://github.com/baverman/battray/
Now I'm trying to implement a full taskbar to replace bmpanel2. Results are very positive. I got a feature-parity app in 1 hour with solid Zig code.
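For context, on Linux the core data such a tray app polls is just a sysfs file; a minimal shell sketch of the same read (the BAT0 path is an assumption and varies by machine):

```shell
# Read the battery charge percentage the way such a tray app would.
# /sys/class/power_supply/BAT0 is an assumed device name; it varies by machine.
bat=/sys/class/power_supply/BAT0/capacity
if [ -r "$bat" ]; then
  charge=$(cat "$bat")
else
  charge=100  # fallback when no battery is present (desktops, VMs, CI)
fi
echo "battery: ${charge}%"
```

The actual app of course adds the X11 tray icon and a polling loop on top of this.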
None of the numerous existing human-coded X11 tray icons showing battery charge level is good for you? Why? What are they missing?
> They could inspect stdlib source code to fix compatibility issues with newer compilers and quite prolific with idioms.
In order to even say this, you need knowledge and understanding of the language. I suspect you are not the intended target of this policy. They are defending their project with a harsh policy, knowing full well there are false negatives. FOSS contribution was already in borderline crisis mode before LLMs, so it makes sense they're desperate.
Their bet would be that the Venn diagram of LLM users overlaps with that of irresponsible contributors. I think that's correct, but not because good programmers suddenly become irresponsible when they use LLMs; rather, an enormous barrage of bad programmers can now participate in domains where they otherwise wouldn't even know where to begin.
Also my experience. Though my actual ability to remember the language nuances and stdlib is suffering from this :(
can't you ask llms to consider those nuances while writing the code or refresh your memory?
> why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?
perhaps that's what the maintainers should be doing after all. it still takes time and tokens, though; neither is free.
I'd personally rather have the maintainers spend the time writing as much docs and specs as possible so the future LLMs have strong guardrails. zig's policy will be completely outdated in a couple years, for better or worse. someone will take bun's fork, add a codegen improvement here, add a linker improvement there and suddenly you'll have a better, faster zig outside of zig.
If it gets outdated they can review their policy. Right now it is sensible. We're at early ages of this type of AI and we don't know what the end game will be.
Someone forking it and making it better with AI is a possibility. If that happens, we'll know it would have been better for the project if the maintainers had just reviewed the code. If that happens, they can probably become maintainers of the fork. Or maybe they don't like that work and could just go do something else.
Zig strives to avoid numerous pitfalls, and I admire that.
Let's take a look at some of them:
1. Project control – if a LARGE company implements thousands of lines created by LLMs day after day – who is ultimately responsible for the project's progress? "You accept hundreds of PRs, so why not this one?"
And one more thing: will you be able to change the code yourself, or will you be forced to use LLMs? What if one of the "AI companies" implements a strict policy preventing "other tools that XXX" from editing the codebase?
2. Ownership. If most of the code was taken by an external company from their LLM, what about ownership of the code? The authors of Zig, the company, the authors of the original code, stolen by LLMs?
3. Liability. In the near future, a court may rule that LLMs are unethical and should not recombine code without the owners' prior consent. Who is responsible for damages and for removing the "stolen" code? The owners of Zig, the company that creates pull requests, or the authors of LLM programs?
4a. Vision. Creating and maintaining a large code base is very difficult – because without a broad perspective, vision, and the ability to predict and shape the future – code can devolve into an ugly mess of ad hoc fixes. We see this repeatedly when developers conclude, "This is unsustainable; the current code base prevents us from implementing the correct way to do things."
LLM programs cannot meet these requirements.
4b. There's another aspect – programming languages particularly suffer from a lack of vision or discipline. There are many factors that must be planned with appropriate capacity, vision, and rigor: the language itself should be modeled in a way that doesn't prevent correct implementation of behaviors. The standard library must be fast, concise, and stable. The compiler itself must be able to create code quickly and repeatably.
Users hate changes in a language – so if a language changes frequently, it is met with harsh criticism. Users hate incompatibility. Users hate technical debt and forced compatibility. Yes, there are conflicting requirements. The author of Zig understood this perfectly, having already gone through it himself (see, for example, "I/O Redesign").
This balance, in all aspects, is the pillar of human creativity.
To be honest, I'm not a huge fan of Zig because I dislike the tight syntax: too many periods and curly braces, which is why I prefer Odin. But I have a lot of affection and respect for Zig and its authors.
> If a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?
That's a fair thing to ask, though it seems like people will arrive at very different conclusions there.
Presumably this only applies to newcomers? The thrust of their policy is to nurture new contributors. Once one has established oneself as a meaningful contributor — which the Bun team surely must have done by now — then it doesn’t matter where the code came from.
…in theory. In reality, I’m sure a policy like this can’t be selective and fair at the same time. Pick one!
This reminds me of when I was in college in the early 2000s.
My fraternity's national organization refused to take photos over email for the newsletter because they got a virus.
It's a short-sighted policy that's akin to "throwing the baby out with the bathwater."
One reason I love writing production code in Ur/Web is that LLMs are incapable of synthesising something even remotely resembling it. Keeps me on my toes.
I think this is a great policy by the Zig team.
Ur/Web! That's something I haven't heard about in ages. Is it still in active development? In what circumstances are you using it? Fun, your own startup, is some secret big commercial user of it...?
How about intellectual-property risks?
If LLM code really does have IP risk then most of the world's most valuable companies may have to throw away ~18 months of work at this point.
OpenJDK project (interim) AI-policy faq (https://openjdk.org/legal/ai):
"What are the intellectual-property risks of using generative AI tools? The Oracle Contributor Agreement (OCA) requires that a contributor own the intellectual property rights in each contribution and be able to grant those rights to Oracle, without restriction. Most generative AI tools, however, are trained on copyrighted and licensed content, and their output can include content that infringes those copyrights and licenses, so contributing such content would violate the OCA. Whether a user of a generative AI tool has IP rights in content generated by the tool is the subject of active litigation."
The other side of this is that open source projects that allow AI tools will be more restrictive towards new contributors.
This already happens to some degree on large software projects with corporate backing (Web engines, compilers, etc.), where it is often not trivial to start contributing as an independent individual.
Reasonable people can disagree on whether one approach is inherently better than the other, as ultimately they seem to be optimising for different goals.
Imagine getting contributions from someone who has no access to the build system and tests.
If I have a test harness and an LLM workflow set up, it is easier to just write new code myself. I am not giving away my "secret sauce". And I will not have a debate about "why this simple feature needs 1000 new tests...", plus two days just to make a full release build.
For a merge I have to do 99% of the work anyway (analyze, autotest, build, smoke, regression test). I usually merge smaller commits just to be polite (and not to look like a one-man show), but there is no way to accept a large refactoring!
yeah, giving an LLM git blame and git grep has saved me a lot of time doing boring, basically rote work.
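For what it's worth, the read-only context-gathering commands being described look roughly like this (the throwaway repo and file contents here are purely illustrative):

```shell
set -e
# Build a throwaway repo so the commands below have something to chew on.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo 'pub fn main() void {}' > main.zig
git add main.zig
git -c user.email=you@example.com -c user.name=you commit -q -m 'add main'

# Who last touched line 1 of a file:
git blame -L 1,1 main.zig
# Every use of an identifier across the tree, with line numbers:
git grep -n 'main' -- '*.zig'
```

Both commands only read history and content, which is what makes them safe to hand to an LLM for codebase archaeology.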
Another more practical issue with using LLMs for Zig is that it’s a quickly changing language, meaning LLMs may generate code for an older version of the language.
Honestly, that doesn't sound too bad. It does not say you can't use LLMs, it just doesn't let LLMs be the author of a commit. Meaning, if you as a developer make yourself responsible for what the LLM wrote, go ahead. But be ready to answer the technical questions, be ready to get grilled in the code review, and be called if you get a CVE on that part of the code...
People shouldn't have to justify not putting up with bullshit. It is a sensible default.
I'm not sure how you could really take a stance on this. If someone used the tool to expedite work, it's unlikely you'd ever know it.
If you use the tool to, yeah, one-shot a ton of garbage, then it will in fact be garbage.
It requires the people contributing the work to have the integrity to actually follow the project’s rules. It’s not OK to violate the project’s rules just because you don’t think you’ll be found out as a filthy fucking liar.
I mean best of luck policing this is all I'm going to say. We will soon be back to the "core contributors only" kind of policy in many projects I imagine to avoid the slop spam. The verification will be at the conferences.
This seems like a sensible long-term strategy, a much better one than entering a token-fueled AI arms race against slop. It's not even clear what the end goal of such a race would be for an open source project. Open source software was traditionally about growing knowledgeable communities and giving users the ability to examine and modify the software they use. LLMs quite obviously blow that up on several levels. For starters, if you hate dealing with code and prefer prompts, it's unlikely that you will be generating code that's enjoyable to work with for people who do read it directly.
I don't have an opinion about Zig AI policy for contributions. Their project, their policies. Fine for me.
However, I wanted to give Zig a try in an agentic coding scenario. For tasks that would take a few seconds when choosing Python, Java, or JavaScript as a target language, it would take tens of minutes and waste millions of tokens before producing anything.
Almost any model gets stuck trying to figure out the correct syntax and correct libraries for a specific Zig version, fighting with compilation and figuring out function call parameters, frequently getting them wrong and going on side quests for things that should just work.
I guess the relative lack of resources and the language instability don't play well for models that try to generate Zig code. Using specific tools like zig-mcp helps only a bit.
Until LLM support for Zig improves (one needs to spend significant resources for that to happen), LLM-generated Zig code won't be good enough for either Zig programmers or Zig contributors.
the best PRs I get are from more senior level people who are at work, hit a specific problem they had, and wanted to help out the project with a good PR. Then you never hear from them again because, of course, they're busy!
When you have junior people come in with PRs and you do the whole hand-holding thing so they learn and grow and all that, they're there because my project is famous, they want to get credit (which I give them), then they're off to get jobs whereever and they are working with completely different technologies, and you never hear from them again either, because, of course, they're now busy!
Really, outside of my core group of hangers-on, Claude is the only contributor we have that doesn't leave us.
> This makes a lot of sense to me. It relates to an idea I've seen circulating elsewhere: if a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?
well yeah. I almost use PRs now just as a lazy means of issue prioritization. I'd love if github had more fine-grained controls to disable PRs but allow occasional contributors in (they don't).
Loris Cro banned me from his Zig forum because I disagreed with/corrected something he wrote.
I was also blocked from the Zig github repository, after being a frequent contributor to issue discussions, for reasons unknown (I was never informed, I just found out when I could no longer put a thumbs up on a comment).
AI must die - don't let Skynet 7.0 win!!!
(Ok ok I think we lost the fight already. I see soooooo many people using AI tools on github in the last ~2 weeks alone, claude in particular literally infiltrated everything there.)
No human should trust any bullshit made by a bullshit machine.
Very convenient of Mr. Willison to omit the fact that Bun's upstream changes are total garbage and would not be upstreamed regardless of any policy, LLM-generated code or not, since they are, as a Zig core team member articulated in a classier way, shite.
I hadn't seen that post when I wrote this. I've updated it now to add a link.
What were you trying to imply by "very convenient"?
Also, the Zig team is already working on other approaches that are better and more stable than what the Bun team did:
https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
Notable quotes:
>There’s the 4x speedup claimed by the Bun team, already available on Zig 0.16.0!
>Each [incremental] update is taking less than 0.4s, compared to the 120+ seconds taken to rebuild with LLVM. In other words, incremental updates are over 300 times faster on this codebase than fresh LLVM builds are. In comparison, an enhancement capped at a 4x improvement is pretty abysmal. [..] Again, this feature is available in Zig 0.16.0—you can use it!
I have learned to always take Willison's words with a giant grain of salt, despite how popular those articles are here.
How can I do better?
Quality over quantity
Go zig! I don't use the language but I totally respect where they're coming from and their mission and ethics.
For those who are pissed because a large OSS project isn't accepting LLM generated slop: Fuck off!
I don't think I've ever heard anything positive about Zig. Every time I've seen the project mentioned is them using bizarre black and white moral judgements to justify stupid decisions.
You need to look past this. Zig is an excellent low-level language. Thanks to the comptime features, you can have high-level looking APIs while staying down to the metal. It's not for everyone, obviously, but as a language, it is really good.
You have to be wilfully blind, then. It gets rather frequently praised on HN (as much as any niche language can be), and they certainly don't make black-and-white moral judgements often.
Rule should be anti-fully-autonomous-PRs. (LLMs don't push bad code. People use LLMs to push bad code and DDoS the maintainers' mental bandwidth.)
Rule should be whatever the people running the project think the rule should be. If you've got your own project, do implement the anti-fully-autonomous-PRs rule for your project. But the creators of Zig do not owe you or me the rule we like.
Perhaps if the Zig maintainers had an LLM review their terrible rationale they would have picked up on the fact that it logically makes no sense.
Please elaborate?
https://claude.ai/share/f38ee8a6-56f1-408a-a536-211eb34c7045
I mostly agree with the assessment.
IMHO: hard, inflexible rules like these are always deeply rooted in biases and personal convictions, not in facts. The suggested policy amendment by Claude at the end is much more honest, logical, and palatable.
> The argument assumes that unassisted PR authorship is what builds trustworthy contributors, and that LLM assistance prevents that growth.
No, I don't think that was the argument. As I understood it, unassisted contributions have higher chances to grow a trusted contributor. Not 100% vs 0% chances, but statistically higher. So, given limited resources, it makes sense to prefer unassisted over assisted contributions.
I don't believe that even the weakened version of the argument works -- it is based on an assumption, not fact.
Why would a contributor that uses AI assistance have fewer chances to be trusted?
I'm not talking about AI slop, but a contributor that takes time to understand a problem, find a solution, and discuss pros/cons alternatives. Using LLM assistance, of course.
Because you are at the whims of the bot they are at least partially dependent on.
You could extend that argument to any tool used by the developer, like a linter, sanitizer, the IDE itself, or even auto-completion. Why target LLMs specifically?
The more I think about it, the more nonsensical it is.
- What if I do everything by hand, but have an LLM review my work at the very end?
- What if I have an LLM guide me through the codebase just by specifying the files I should read and in what order, but I do all the reading myself?
- What if I do everything by hand, but then use an LLM to optimize a small part of an algorithm?
You can easily see how absurd it is to completely ban LLMs.
What matters is the quality and correctness of the contribution. Even with heavy LLM usage, unless the developer understands what problem they're solving, the quality will be sub-par.
Would you let your nanny subcontract?
"We wont take contributions from non hand written assembly code, these C 'high level' language patches are not allowed. Zig is a great project and language but it will die on this hill.
You paint them wrongly as elitists.
It's a critique of low effort PRs compared to the high effort review they require.
On multiple occasions over the last months, I have wished the Zig/ZSF team would use LLMs. I've found many copy & paste errors that simply wouldn't exist if mundane tasks were delegated to a good LLM. Even in the Zig community, I've seen PRs to some projects I'm interested in boasting about being all human-made, yet containing all kinds of trivial logical errors that even the worst LLM would catch.
If you see them, why don't you help squash them?
I did.
no cite?
It's a good rationale. But it points the finger at a real bottleneck in open source development: the burden of manually reviewing contributions. And the need to automate that with AI as well. Reviews were already becoming a problem before AI. Lots of projects have been dealing with a large influx of contributions from inexperienced developers all over the world looking to boost their CVs by inflating their GitHub statistics. It's the same dynamic that destroyed Stack Overflow, which, thanks to AI, has now been largely sidelined. And now that AI is here, those same inexperienced developers are using it at scale to generate even more garbage contributions.
Doing manual reviews of everything is very labor intensive and not scalable. However, AIs are pretty good at doing code reviews and verifying adherence to guard rails, contributor guidelines, and other rules. It's not perfect, but it's an underused tool. Both by reviewers and contributors. If your contribution obviously doesn't comply with the guidelines, it should be rejected automatically. The word "obviously" here translates into "easy to detect with some AI system".
Projects should be using a lot of scrutiny for contributions by new contributors. And most of that scrutiny should be automated. They should reserve their attention for things that make it past automated checks for contribution quality, contributor reputability, adherence to whatever rules are in place, etc. Reputability is a good way to ensure that contributions from reputable sources get priority. If your reputation is not great, you should expect more scrutiny and a lower priority.
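A toy sketch of what one such automated gate could look like, say, flagging oversized first-time PRs before any human attention is spent (the threshold and criteria here are made-up assumptions, not any project's actual policy):

```shell
# Toy pre-screen: auto-reject very large PRs from first-time contributors.
# The 500-line threshold and the yes/no flag are arbitrary illustrations.
pr_gate() {
  added_lines=$1
  first_time=$2   # "yes" or "no"
  if [ "$first_time" = "yes" ] && [ "$added_lines" -gt 500 ]; then
    echo "reject: too large for a first contribution"
  else
    echo "pass: queue for human review"
  fi
}

pr_gate 10000 yes   # the infamous 10k-line first-time PR
pr_gate 120 yes     # a modest first PR goes through
```

In a real CI setup the inputs would come from something like the diff stats and the contributor's history rather than hard-coded arguments.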
I don't know Zig, but I think that is not the problem here. Not exactly. The real question is: why spend all that effort to grow and align a pool of contributors if contributions are cheap and correct? Code review is not just about checking whether the code does what it says, and whether it does it according to the guidelines. The review is a touch point to discuss where the project is heading and how to get there. That is the most important part in the long run. As a collective human effort, it needs coordination. Some of it happens via the review process (especially for those not part of the core team that drafts the roadmap). One could document all those micro-decisions with their rationale, but it might end up being a whack-a-mole game. IMO, projects which allow AI usage need to spend way more effort on coordination (and quality assurance).
> The real question is: why spend all that effort to grow and align a pool of contributors if contributions are cheap and correct?
Until the contributions are cheap and correct, you need valuable contributors more than you need the contributions.
Your point would be valid once we get to a point where contributions are all both correct and cheap. Right now they are only cheap.
You still have to review everything manually again anyway. It's a compiler for a language; bugs and bad architecture decisions cost a lot. They moved to Codeberg, so there are fewer garbage PRs now. They try to grow a culture where you're expected to deliver good code in the PRs so the review takes less time.
It takes like 5 minutes to spot a garbage PR manually. An LLM can flood you with a wall of text where only half of the stuff makes sense. Also, they can't really spot bad architecture. It's a compiler in an unpopular language, don't forget that.
> [you can] stop accepting imperfect PRs in order to maximize ROI from your work, but that’s not what we do in the Zig project
The real bottleneck when you want to grow is connecting with the right people. An LLM is not helping with that if you want to build a community. When you use an LLM to skip the need to understand a problem, how are you ever going to build a reputation that I can trust?
The post is not about reputation; it's about seeing how people respond and work with you in a community.
EDIT: I see that you frame it as a help and a tool, and sure, it might work, but I feel like it is just another obstacle.
> the burden of manually reviewing contributions... [a]nd the need to automate that with AI as well.
I suggest we also automate the distribution and the use of software with AI as well, and then just all go to the beach and sip on some cocktails or something.
Or in other words: Good luck with that.
As a heavy AI-assisted open source code creator (and someone with 40+ years of dev experience), this seems wrong-headed to me. I think it is an excellent policy, as they say, to "value contributors over their contributions," but this policy excludes all potential contributors who use the latest tools. It will eventually doom zig to a smaller "artisanal" pool of contributors, rather than welcoming newbies and helping them become better open-source developers.
Presumably Zig are OK with that. For their particular project - a brand new programming language and compiler - a small pool of artisanal developers is likely preferable to a large pool of LLM-assisted developers who don't have as deep an understanding of how everything works.
There are plenty of less stringent projects for people who want to get better at open source to contribute to.
> It will eventually doom zig to a smaller "artisanal" pool of contributors
“Artisanal” and “Zig” are just about synonymous