This article was painful to read because of all the misconceptions. A cpio archive is not a filesystem. Author uses initramfs, which is based on tmpfs. Linux can extract cpio to tmpfs. An archive of files and directories is in itself not a program.
Just because something looks similar doesn't mean it's equivalent. Binary programs are executed on the CPU, so if there's an interpreter involved it's hiding in the hardware environment. That's outside the scope of an OS kernel.
If you have a shell script in your filesystem and run it, you need to also provide the shell that interprets the script. Author omits this detail and confuses the kernel with the shell program.
Linux can easily be compiled without support for initramfs and ramdisk. It can still boot and run whatever userland sits in the filesystem.
"Linux initrd interpreter" hurts my brain. That's not how it works.
Edit: should've read further. Still a backwards way of explaining things imho.
> An archive of files and directories is in itself not a program.
Okay, but you can make the same argument to say that ELF files aren't programs in and of themselves either. In fact, some ELF files are dynamic libraries without an entrypoint, and therefore not actually executable in any meaningful way unless connected to yet another program.
If you can accept that some ELF files are executables and some aren't, then you can also accept that some CPIOs are executables and some aren't. What's the difference between ld.so unpacking an ELF file into RAM and running its entrypoint, and the Linux kernel unpacking an initramfs into RAM and running its entrypoint?
Binary programs are executed on the CPU, but the program file is an archive with sections; usually only one of them is the program, while the others are all metadata. The CPU isn't capable of understanding the program file at all. Linux has to establish the conditions under which the program runs: at a minimum, setting up the address space in which the program counter lies, then jumping to that address. The instructions for how to do that are in the metadata sections of the ELF executable.
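To make that concrete, here's a minimal sketch in Python, using a hand-built 64-byte ELF64 header rather than a real binary (the entry address is hypothetical): the loader side just checks the magic and reads the e_entry field out of the metadata before it can jump anywhere.

```python
import struct

ENTRY = 0x401000  # hypothetical entry-point virtual address

# Build a fake ELF64 file header. Field layout follows the ELF64 spec;
# e_entry is the 8-byte field at byte offset 24.
e_ident = b"\x7fELF" + bytes([2, 1, 1, 0]) + b"\x00" * 8  # 64-bit, LE, v1, SysV
header = struct.pack(
    "<16sHHIQQQIHHHHHH",
    e_ident,
    2,        # e_type: ET_EXEC
    0x3E,     # e_machine: EM_X86_64
    1,        # e_version
    ENTRY,    # e_entry: where execution starts after the kernel maps the program
    0, 0, 0,  # e_phoff, e_shoff, e_flags (empty in this sketch)
    64, 56, 0, 64, 0, 0,  # ehsize, phentsize, phnum, shentsize, shnum, shstrndx
)

# The "loader" side: check the magic, then pull e_entry out of the metadata.
assert header[:4] == b"\x7fELF"
(entry,) = struct.unpack_from("<Q", header, 24)
print(hex(entry))  # prints 0x401000
```

The CPU never sees any of this; the kernel (or ld.so) consumes the metadata, sets up the mappings, and only then does the CPU start fetching instructions at that address.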
It's the init in the cpio which is the interpreted program, and the rest of the cpio is memory for this interpreted program.
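For the curious, the "newc" cpio format the kernel unpacks at boot can be sketched in a few lines: just concatenated ASCII headers, names, and file bodies, ending in a TRAILER!!! record. The /init payload here is a hypothetical stand-in, not a bootable image.

```python
def newc_entry(name: bytes, data: bytes, mode: int) -> bytes:
    # 110-byte header: "070701" magic plus 13 eight-digit hex fields.
    fields = [
        0,              # c_ino
        mode,           # c_mode (file type + permissions)
        0, 0,           # c_uid, c_gid
        1,              # c_nlink
        0,              # c_mtime
        len(data),      # c_filesize
        0, 0, 0, 0,     # dev/rdev major/minor
        len(name) + 1,  # c_namesize, counting the trailing NUL
        0,              # c_check (unused for the "070701" variant)
    ]
    header = b"070701" + b"".join(b"%08X" % f for f in fields)
    out = header + name + b"\x00"
    out += b"\x00" * (-len(out) % 4)           # pad name to a 4-byte boundary
    out += data + b"\x00" * (-len(data) % 4)   # pad data likewise
    return out

archive = (
    newc_entry(b"init", b"#!/bin/sh\necho hello from init\n", 0o100755)
    + newc_entry(b"TRAILER!!!", b"", 0)
)
print(archive[:6])  # prints b'070701'
```

Hand this archive (optionally gzipped) to the kernel as an initramfs and it unpacks /init into rootfs and execs it — assuming the image also ships the /bin/sh that the shebang names, which is exactly the interpreter point being argued above.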
At least it isn't AI slop!
I dunno, sure seems like "AI research" at least.
Isn't every OS an interpreter for machine code with kernel privileges?
No. The OS's software doesn't individually read each instruction and decide what to do with it.
It passes it off to the hardware (CPU) which runs the instructions.
Most of the time. But sometimes, no. See ATL thunk emulation (last I checked, still alive in the windows kernel) and ntvdm handling of the BOP pseudoinstruction.
See also: Jazelle DBX.
Hell, on modern x86 processors, many “native” instructions are actually a series of micro-ops for a mostly undocumented and mostly poorly understood microcode architecture that differs from the natively documented instruction set.
It’s turtles all the way down.
Jazelle and micro-ops are not interpreters; they are executed in hardware.
This one is an interpreter for CPIO files.
Everything is an interpreter?
From earlier in the series.
"Okay, so the reason I initially did this was because I didn’t want to pay Contabo an extra $1.50/mo to have object storage just to be able to spawn VPSes from premade disk images."
I think there's a sweet spot between "I spent 50 hours to save 1.50$/mo" and "every engineer should be spending 250K$/mo in tokens".
Host employees still need to eat; if we can't afford 1.50$/mo, then we aren't really professionals and are just coasting on real infrastructure subsidized by professionals that pay for the pay-as-you-go infrastructure.
It's still possible to go even further to these extremes; there are thousands of developers who just coast by on github pages and vercel subdomains. So at least having a VPS puts you ahead of that mass competitively, but trying to save 1.50$/mo is a harsh place to be. At that point I don't think that the technical skills are the bottleneck, it's more likely that there's some social work that needs to be done, and that obsessing over running doom on curl is not a very productive use of one's time in a critical economic spot.
I write this because I am in that spot, but perhaps I'm reading a bit much into it.
That sounds like something I would've done... When I was a kid, the 5€/month for a VPS was a massive expense, to the point where I occasionally had to download my 10GB rootfs to my mom's windows laptop, terminate the instance and then rebuild it once I had enough money. Eventually I got an old Kindle that was able to run an app called Terminal IDE which had a Linux shell with some basic programs like busybox, gcc. Spartacus Rex, if you're out there, thank you for making my entire career possible.
And I think this point is heavily under-appreciated in the cloud vs. on-prem debate.
The cost for 1 hour of cloud CPU time is the same (barring discounts), no matter who you are. The cost for 1 hour of engineer time varies wildly. If you're a non-profit or a solo dev, you may even consider that cost to be "free."
If your engineer costs are far lower than what AWS assumes they are, going with AWS is a stupid decision, you're far better off using VPSes / dedicated servers and self-hosting all the services you need to run on top.
The author did write that, yes. But it's very obviously a joke. The real reasons are literally the very next paragraph:
> I thought it was a neat trick, a funny shitpost that riffs on the eternal curl | sh debate. I could write a blog post about it, I tell you about how you can do it yourself, one thousand words, I learn something, you learn something, I get internet points, win win.
> it's more likely that there's some social work that needs to be done, and that obsessing over running doom on curl is not a very productive use of one's time in a critical economic spot.
It can be a problem, but it can also just be a human following their special interests that give them joy.
For me as an ADHD person, engaging with my special interests is a hard requirement to keep my mental health in check, and therefore a very good use of my time.
I like the term host employee, carrying the LLM parasite as it uses us to embody itself and reproduce into the singularity.
I think they meant employees from the hosting company. But that's a funny interpretation!
> if we can't afford 1.50$/mo, then we aren't really professionals and are just coasting on real infrastructure subsidized by professionals
This is a strange claim.
Whether someone is getting paid or not to do something is what determines who is a professional, not whether or how much they're paying someone else. (And that's the only thing that matters, unlike the way that "professional" is used as a euphemism in Americans' bizarre discursive repertoire.)
I think the sense of the word professional here is not the boolean professional/amateur, but the sense of professionalism: the characteristic of taking business seriously, not letting personal matters intervene, and, in this case, investing in tools.
To give an example: suppose you hire a painter, and they show up in non-work attire with no ladder and no brush, and ask you to buy a can of paint and a brush for them. Compare that to a contractor who bills you a flat rate and brings their own ladder, work clothing and shoes, a pneumatic spray painter, and a breathing mask. Who is more professional?
It's part of a broader debate for sure, OP seems to have done it more for the experience than to actually save 1.50$.
Nope. There's no broader debate. "Professional" means "X is getting paid for this", not "X is paying something in order for X to be able to do this". It's that simple.
> To give an example: suppose you hire a painter, and they show up in non-work attire with no ladder and no brush, and ask you to buy a can of paint and a brush for them. Compare that to a contractor who bills you a flat rate and brings their own ladder, work clothing and shoes, a pneumatic spray painter, and a breathing mask. Who is more professional?
Literally meaningless. Are both getting paid?
It always depends on results. It can be unprofessional to design a system that takes an external dependency like S3 for granted, especially if it's not needed. As long as the hack isn't worse than the official $1.50 happy path, you might as well save the end customer a monthly fee and reduce your attack surface.
I think hacks like these have a positive effect on the industry. They push back on meaningless, encroaching monetization and encourage Contabo to reevaluate their service offerings to ensure they justify the price.
... I think you're reading a bit much into it. It's less that I couldn't afford to pay that, and more that I didn't want to pay that, and iterating on the solution I used to dodge that led me down a giant rabbit hole of learning more about Linux while solving stupider and stupider problems I posed for myself.
That four part blog was one of the most entertaining things I've read this year, thanks.
Really in the spirit of "hacker" news IMO.
I get the motivation: it's less about avoiding the 1.50 per month and more of a challenge to work around it!
Calling cheap hacks unprofessional misses the point; some surprisingly portable tricks only show up when you stop paying for everything on autopilot.
I really get that, and I value these otherwise pointless hack articles as much as the next guy. But I think I was specifically getting at the fact that these might actually turn into an economically useful skill just by finding a sweet spot in the amount of money they can save.
1.50$/mo is still in the toy realm (and games can be very good for practicing before the real stuff), but using tricks like this to save 50$/mo or 500$/mo or 5k$/mo or 50k$/mo and so on can definitely cross the threshold into actually (massively) useful.
The biggest challenge in crossing that bridge is matching clients who have bad engineers but good budgets with good engineers who have no budget. There are probably thousands of engineers currently spinning 5$/mo into impressive architecture for their blog or their 2-user startup, and clients throwing buckets of cash at tokens and zapier/n8n. The world needs Cupids to match those together.
Turing’s Theta Combinator
man ld.so:
... (in which case no command-line options to the dynamic linker can be passed and, in the ELF case, the dynamic linker which is stored in the .interp section of the program is executed)
note how the ELF section is named.
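For the curious, the lookup execve performs can be sketched against a hand-built ELF rather than a real binary: walk the program headers, find PT_INTERP, and read the interpreter path out of the file (this is the same data that `readelf -p .interp` dumps). The path below is just the common x86-64 glibc one.

```python
import struct

INTERP = b"/lib64/ld-linux-x86-64.so.2\x00"  # varies per distro/libc

# File layout: 64-byte ELF header, one 56-byte program header, then the string.
e_ident = b"\x7fELF" + bytes([2, 1, 1, 0]) + b"\x00" * 8
ehdr = struct.pack("<16sHHIQQQIHHHHHH", e_ident,
                   2, 0x3E, 1,     # ET_EXEC, EM_X86_64, version 1
                   0x401000,       # e_entry (hypothetical)
                   64, 0, 0,       # e_phoff, e_shoff, e_flags
                   64, 56, 1,      # e_ehsize, e_phentsize, e_phnum
                   64, 0, 0)       # e_shentsize, e_shnum, e_shstrndx
phdr = struct.pack("<IIQQQQQQ",
                   3,              # p_type: PT_INTERP
                   4,              # p_flags: PF_R
                   120, 0, 0,      # p_offset, p_vaddr, p_paddr
                   len(INTERP), len(INTERP), 1)
elf = ehdr + phdr + INTERP

# The kernel-side lookup: walk program headers, find PT_INTERP, read the path.
phoff, = struct.unpack_from("<Q", elf, 32)  # e_phoff sits at byte offset 32
phnum, = struct.unpack_from("<H", elf, 56)  # e_phnum at byte offset 56
for i in range(phnum):
    p_type, _, p_offset, _, _, p_filesz, _, _ = struct.unpack_from(
        "<IIQQQQQQ", elf, phoff + i * 56)
    if p_type == 3:  # PT_INTERP
        print(elf[p_offset:p_offset + p_filesz].rstrip(b"\x00").decode())
        # prints /lib64/ld-linux-x86-64.so.2
```

So even a "native" ELF executable arrives with a note saying which program should interpret it, which is the point being made about section naming.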
Well - Linux is kind of a generic interface for actionable, programmable tasks. One could use Windows for this too, but IMO Linux is in general better suited for that task.
The only area where I think Windows may be better is the graphical user interface. Now, the Windows interface annoys me to no end, but GNOME annoys me and KDE annoys me too. I have mostly been using fluxbox or icewm, sometimes xfce or mate-desktop when I feel fancy, but by and large I think my "hardcore desktop days" are over. I want things to be fast and efficient and simple. Most of the work I do I handle via the command line, plus a bit of web browsing and writing code/text in an editor (say, 95% of the activities).
> I want things to be fast and efficient and simple.
Sway + foot with keybinds to provision each workspace to your liking is pretty nice. No desktop, but really flies for your use case (mine also). Bind window focus to your most comfortable keys.
> The only area I think Windows may be better is the graphical user interface.
Nah. You're right about Gnome and KDE, but Windows is even worse because you can't exactly escape from Microsoft's insane labyrinth or its awful wm. Frankly, I'm not a fan of the Xerox bloodline of desktop interfaces in general. The mpx/mux heritage is the one I like: 9wm, cwm or dwm. Closer to Engelbart and just generally all-around better.