> Lisp hackers have been effortlessly reshaping the language for decades using the powerful macro system and extending and bending the language to their will.
I've written a bit of Racket code (https://github.com/evdubs?tab=repositories&q=&type=&language...) and I still haven't written a macro. In only one case did I even think a macro would be useful: merging class member definitions to include both the type and the default value on the same line. It's sort of a shame that Racket, a Scheme with a much larger standard library and many great user-contributed libraries, has to deal with the Scheme/Lisp marketing of "you can build low level tools with macros" when it's more likely that Racket developers won't need to write macros since they're already written and part of the standard library.
> But the success of Parsec has filled Hackage with hundreds of bespoke DSLs for everything. One for parsing, one for XML, one for generating PDFs. Each is completely different, and each demands its own learning curve. Consider parsing XML, mutating it based on some JSON from a web API, and writing it to a PDF.
What a missed opportunity to preach another gospel of Lisp: s-expressions. XML and JSON are forms of data that are likely not native to the programming language you're using (the exception being JSON in JavaScript). What is better than XML or JSON? s-expressions. How do Lisp developers deal with XML and JSON? Convert it to s-expressions. What about defining data? Since you have s-expressions, you aren't limited to XML and JSON and you can instead use sorted maps for your data or use proper dates for your data; you don't need to fit everything into the array, hash, string, and float buckets as you would with JSON.
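For flavor, here's a rough Python sketch of the JSON-to-s-expression translation a Lisp reader gets essentially for free (the `to_sexp` helper and its output format are purely illustrative, not any particular library's API):

```python
import json

def to_sexp(value):
    """Render a JSON-like Python value as an s-expression string."""
    if isinstance(value, dict):
        # each key/value pair becomes a (key value) form
        pairs = " ".join(f"({k} {to_sexp(v)})" for k, v in value.items())
        return f"({pairs})"
    if isinstance(value, list):
        return "(" + " ".join(to_sexp(v) for v in value) + ")"
    if isinstance(value, str):
        return f'"{value}"'
    if value is True:
        return "#t"
    if value is False:
        return "#f"
    return str(value)  # numbers, etc.

doc = json.loads('{"user": "evdubs", "repos": [1, 2, 3]}')
print(to_sexp(doc))  # ((user "evdubs") (repos (1 2 3)))
```

In a Lisp, of course, the result isn't a string but live data you can map, sort, and pattern-match over with the same functions you use on everything else - which is the point being made above.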
If you've been hearing about Lisp and you get turned off by all of this "you can build a DSL and use better macros" marketing, Racket has been a much more comfortable environment for a developer used to languages with large standard libraries like Java and C#.
When I learned Scheme, I liked the language but strongly disliked macros and quotation. I'd only been using it a short while, and when I searched for solutions to a few problems, these "fexpr" things kept coming up, which I didn't understand, along with this "Kernel" language. I decided to learn it, since "fexprs" were apparently the solution to several of my problems. This wasn't easy at first - I had to read the Kernel Report several times - but I ended up finding it far more intuitive than using macros and quotes.
I've not written a Scheme macro since. I've written hundreds of Kernel operatives though.
I was also a typoholic previously, but am in remission now thanks to Kernel.
Think of macros as what you want when you want to perform computation at compile time rather than run time.
An example: building the equivalent of a switch statement, but that compares (via string equality) with a set of strings. The macro would translate this into code that would do something like a decision tree on string length or particular characters at particular positions.
Basically anything that's done with a preprocessor in another language can be done with macros in Lisp family languages.
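The string-switch example can't be sketched runnably as an actual macro here, but the shape of the code such a macro might *emit* can be hand-built in Python (the length-first strategy and all names are illustrative; a real macro would generate this dispatch structure at compile time rather than at closure-creation time):

```python
from collections import defaultdict

def compile_string_switch(cases, default):
    """Build a dispatcher that branches first on string length, then on an
    exact match - mimicking the decision tree a macro could emit."""
    by_len = defaultdict(dict)
    for key, action in cases.items():
        by_len[len(key)][key] = action
    by_len = dict(by_len)

    def dispatch(s):
        bucket = by_len.get(len(s))   # first branch: string length
        if bucket is not None:
            action = bucket.get(s)    # then: exact comparison
            if action is not None:
                return action()
        return default()

    return dispatch

handle = compile_string_switch(
    {"GET": lambda: "read", "POST": lambda: "create", "PUT": lambda: "update"},
    default=lambda: "unknown")
print(handle("POST"))  # create
```

The macro version wins because the tree is baked into the generated code - no dictionaries consulted at runtime at all.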
The other motivation for me is to drastically reduce boilerplate code. I can’t believe people here are saying they never use macros, they are so good for this that avoiding them sounds to me like a skill issue! Overuse can damage readability, sure, but so can pretending macros are not an option.
Operatives do that for me, better than macros. Parent is correct that macros are compile time, which gives them a performance advantage over operatives - but IMO, they're not better ergonomically. I find operatives simpler, cleaner and more powerful.
I understand the use case, but Scheme macros never felt intuitive to me. I think it may be the quotation more than anything that I dislike - though I also dislike that they're second class (which was the key thing which led me to Kernel).
I use C preprocessor macros extensively and don't have the typical dislike for them that many people have - though I clearly understand their limitations and the advantage Scheme macros have over them.
Since learning Kernel, the boundary of "compile time" and "runtime" is more blurry - I can write operatives which behave somewhat like a macro, and I do more "multi-stage" programming, where one operative optimizes its argument to produce something more efficient which is later evaluated - though there are still limitations due to the inability to fully compile Kernel.
As one example, I've used a kind of operative I call a "template", which evaluates its free symbols ahead of time but doesn't actually evaluate the body. When we later apply it to some operands, it replaces the bound symbols with the operands, looking up any symbols to produce an expression which we don't need to evaluate immediately either - but this expression has all symbols fully resolved. This is somewhere between a macro and a regular operative.
Consider:
    ($define! z 10)
    ($define! @add-z
      ($template (x y)
        (+ x y z)))
In this template `x` and `y` are bound variables and `+` and `z` are free. The template resolves the free symbols and returns an operative expecting 2 operands, effectively providing an operative with the body:
    ([#applicative: +] x y 10)
When we call the template with the two operands, it resolves any symbols in the arguments and returns the full expression with no symbols present, but it doesn't evaluate the expression yet.
When we decide to evaluate the expression, no symbol lookup is necessary - it can perform the operation rather quickly, despite the slow interpretation.
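A loose Python analogue of the $template behavior described above, with strings standing in for symbols and tuples for expressions (`make_template`, `evaluate`, and the `env` dictionary are all invented for illustration - real Kernel environments and operatives behave quite differently):

```python
def evaluate(expr):
    """Evaluate a tuple as (operator, *operands); anything else is a value."""
    if isinstance(expr, tuple):
        op, *args = (evaluate(e) for e in expr)
        return op(*args)
    return expr

def make_template(params, body, env):
    """Resolve the body's free symbols against `env` now; leave the
    bound parameters open until application time."""
    def resolve(expr):
        if isinstance(expr, str):
            return expr if expr in params else env[expr]  # free symbol -> value now
        if isinstance(expr, tuple):
            return tuple(resolve(e) for e in expr)
        return expr

    resolved = resolve(body)  # e.g. (+ x y z) -> (<sum-fn> 'x' 'y' 10)

    def apply_template(*args):
        binding = dict(zip(params, args))
        def substitute(expr):
            if isinstance(expr, str):
                return binding[expr]  # only bound params remain as strings
            if isinstance(expr, tuple):
                return tuple(substitute(e) for e in expr)
            return expr
        closed = substitute(resolved)     # fully resolved, not yet evaluated
        return lambda: evaluate(closed)   # thunk: evaluate when we choose

    return apply_template

env = {"+": lambda *xs: sum(xs), "z": 10}
add_z = make_template(("x", "y"), ("+", "x", "y", "z"), env)
thunk = add_z(1, 2)   # all symbols resolved; nothing evaluated yet
print(thunk())        # 13
```

As in the Kernel version, by the time the thunk runs there are no symbol lookups left - only applications of already-resolved values.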
---
The $template form above isn't too difficult to implement. I've iterated through several versions of this - some of which only partially resolved the bound symbols - but lost them in a RAID failure. An earlier version, which has some issues, I still have because I put it online:
At present the best interpreter is klisp, and the fastest is bronze-age-lisp, which is based on klisp, with parts hand-written in 32-bit x86 assembly.
I've been working on a faster interpreter for a number of years as a side project, optimized for x86_64, with some parts in C and some in assembly. It has diverged in some places from the Kernel Report, but still retains what I see as the key ingredients.
My modified Kernel has optional types, and we have operatives to `$typecheck` complex expressions ahead of evaluating them. I intend to go all in on the "multi-stage" aspect and have operatives to JIT-compile expressions in a manner similar to the above template.
I use klisp[1] and bronze-age-lisp[2] mostly for testing, as they're the closest to a feature complete implementation of the Kernel Report.
I've written a number of less complete interpreters over the years. I currently have a long-running side-project to provide a more complete, highly optimized implementation for x86_64.
Sometime back 15 years ago [0], I hit a bit of an existential crisis regarding my career and the kind of work I was doing.
I thought the particular technology I was working in was "part of the problem", as I felt pigeon-holed by .NET and C# to always be a corporate-monkey CRUD consultant. So, I went out in search of something better. Different programming languages. Different environments. Just something that wasn't working for asshole clients who thought it was okay to yell at people about an outage in a hotel on the complete opposite side of the country that was more due to local radio interference than anything I had done in the database code that configured things. Long story involving missing a holiday with my family over something completely outside of my control and yet I still got blamed for it. The problem wasn't the technology, it was the company I was working for, but at that time in my life, I didn't understand the difference.
Racket was a life preserver at that time.
It's really hard to explain, because I never actually ended up working in Racket full-time and I haven't even touched it in probably 10 years. But it still has this impact on my identity as a software developer. I learned Racket. I forced myself out of being a Blub programmer and into someone who saw the strings that underwrote The Universe. The beauty of S-expressions and syntactic forms and code-is-data and all that. It had a permanent impact on my view of what this job could be.
I still work primarily in .NET. Most of the things that were technological issues with .NET Framework got resolved by what was first .NET Core and what is now .NET. So, I no longer feel like my tools are holding me back. And I'll forever be thankful to Racket (and the community! The Racket listserv was amazing back then. Probably still is, I just don't interact with it anymore) for being there for me.
Edit: Haskell was in fact another language I explored at that time, in addition to OCaml and Ruby and Python (ugh! Don't get me started on Python!) and many other things. They were all "cool" in their own way, but nothing felt like Racket. They all had their own weird rules that felt like being bossed around again. Racket felt like art. Racket felt like it was there for me, not the other way around.
[0] I still think of this time as the "mid-point" in my career, but it's now been long enough ago that I've been more past the crisis than I was ever in it. Strange feelings.
> [...] who thought it was okay to yell at people about [...]
That society as a whole accepts this kind of abuse, no matter the industry or circumstances, is beyond me. It's an abuse of power. If anybody did this to anyone, the only appropriate response would be to walk and never come back. Nobody would accept this kind of crap from family and friends, so why is it OK in a professional setting? Because of the money/power dynamics at play? We need a consensus in society to walk; that would end it in no time.
I think I much prefer Haskell DSLs over Lisp macros as the basis for APIs in foreign code. That might be due to my relative inexperience with Lisps, but macros just seem to make all the bad aspects of dynamically typed languages much worse. Looking at some piece of code in isolation, not only is it often impossible to tell the type/shape of the data coming in (as is common with dynamic type systems), but with macros added to the mix I also can't tell what the control flow is. So to understand what a single piece of code is doing, I find myself chasing hints that are scattered throughout the entire codebase.
Contrast this to Haskell's use of DSLs – although they really can be quite dense sometimes, I feel like, when I get stuck, I can always just dig into the documentation on Hackage, and figure out things from the type definitions (even when explanations in docs are lacking). Though it does require being comfortable with the abstractions being used (monads and such). Rust is similar in this manner but to a lesser extent.
But again, maybe the macro critique stems from my inexperience with them.
I wrote a couple of macros that record data transiting through code at runtime (it's in Clojure, so basically almost every function is pure, returning what it produces as if it were water flowing out of a faucet), store these intermediary results in a file, and finally display these values in the code itself, as comments, just below the call site that produced them.
You can then, for a given call-site, choose to "load" these recorded computations, which will change the displayed comments, both below this call site and all the other instrumented call-sites that are downstream to it, even for code sitting in other source files.
It's a bit fragile and needs more polishing, but it's a lot more convenient than any type system, which will always get in the way or not be powerful enough, and it allows me to see what kind of data flows through my program without running it - because I record everything and display the result not at compile time but at coding time, in the same window, alongside the rest of the code. I don't understand why this was never done before (to the best of my knowledge). The biggest limit I've encountered is that Clojure doesn't provide any means to identify areas of my code that are not pure.
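For readers who want the gist without Clojure: here's a bare-bones Python sketch of the recording half of this idea, with a decorator standing in for the macro (the editor integration that displays results as comments under each call site is the part not shown, and all names are invented):

```python
import functools

# call-site name -> list of (args, result) pairs captured at runtime
RECORDINGS = {}

def record(fn):
    """Record every call's inputs and output so they can later be
    displayed next to the code that produced them."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        RECORDINGS.setdefault(fn.__name__, []).append((args, result))
        return result
    return wrapper

@record
def normalize(s):
    return s.strip().lower()

normalize("  Hello ")
print(RECORDINGS["normalize"])  # [(('  Hello ',), 'hello')]
```

The Clojure version is more interesting precisely because macros let the instrumentation know *which* call site produced each value, not just which function.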
> You can pause, inspect objects, change values, and even redefine a broken function on the fly to test a fix in any environment (yes even in production, while running).
I see this mentioned often, and it sounds amazingly useful (especially the part about fixing in production!). But how widespread is it among the Lisp dialects to be able to connect to a running program, debug, and hotfix it? I understand Common Lisp has it, but I struggled to figure out how to do it in, say, Racket. Admittedly, I'm a relatively inexperienced Lisp programmer, so maybe I wasn't looking in the right place or for the right words. Which Lisp dialects do indeed support the extreme version of this capability to inspect and edit running programs?
It’s trivially easy to do in Clojure (literally one line of code to start an nREPL server, after deps/requires), and often very useful in dev and personal, local projects. In practice, I’ve never once used it in a user-facing production system, in 16 years of writing Clojure.
Out of the box, there’s zero security or audit trail. Building that properly isn’t trivial and, even with it in place, many corporate infosec teams would have fits if you suggested that engineers can make arbitrary inspections/modifications to a running production system.
Where it could be appropriate, often you’re running the code in autoscaling containers or something similar. Modifying one instance then is rarely anything but a terrible idea.
Where I have used it is for things like long-running internal batch systems that run a single instance and never touch any sensitive data. Connecting a REPL in those cases is much more flexible and powerful than, say, building a dashboard UI or a control API over http, and you get it for free.
Yes but I don’t know how someone familiar with a Jetbrains IDE can claim that only Lisp has that feature. I love Common Lisp and SLIME, but most of what it can do, I can also do in Java with the IDE. Change a method definition while it’s running and then restart the method? No problem. Run any code within the context of the running method? Yes, Java can do it. Change local variables values in the middle of a method? Easy!
The Lisp REPL is still superior because it comes with more stuff, like DECOMPILE, INSPECT and so on, which can only exist because the language is essentially a compiler even at runtime - which can also be a problem for sensitive domains... but in Java you can do all those things using the IDE, so the distance between what is possible in Lisp and a language with good IDE support like Java or Kotlin is now negligible, in my opinion.
I've frequently said that Java + JRebel gets the closest to the Common Lisp + SLIME experience (closer than Python), but as you say, the Lisp experience is still superior; the Java ecosystem has yet to close the gap*. The widest part of that gap I'd mention is not having the condition system built into Java (though I'm aware people have tried to make a comparable one as a library); lacking it degrades the debugging experience considerably (even though simple step-debugging is typically more pleasant than in Lisp). IntelliJ's drop-frame feature isn't good enough.

The other problem is needing Java + something. What you get with just a regular JVM running under your IDE is no better than what other languages offer (if they offer anything) as their cute hotswap/hotpatch feature, and it comes with big limitations (like no changing method signatures, no adding/removing methods or properties, or only applying changes to new objects). Once you're doing something non-trivial, especially if you're trying to incrementally develop your program rather than just debug one specific problem, you'll have to restart.

In contrast, Common Lisp's got its disassemble, describe, inspect, compile, fmakunbound, ... all being functions callable at runtime, and update-instance-for-redefined-class is part of the standard language too. Support for live reloading of everything is baked into the language rather than a hack on top; SLIME is just a convenient way of working with it. It's still convenient to restart the program occasionally, but few things force you to.
Unfortunately JRebel has killed their free tier, so I'd now point unwilling-to-pay programmers to something like https://github.com/JetBrains/JetBrainsRuntime which is IntelliJ/Eclipse/whatever-independent. I haven't tried it myself yet though... Given they only address the biggest class reloading concerns, I doubt it's actually comparable to JRebel for business-world Java. JRebel handles among other things dynamic reloading from XML changes and reinitializing autowired Spring beans that other classes use for dependencies.
*Caveat, I've been out of the professional Java grind for a while, I'd be pleasantly surprised if some new version that's come out contradicts me.
People do it in Clojure all the time in the dev setup. And you technically can do in your customer environments too, but it's of course a bit of a cowboy thing to do there.
"Cowboy thing" is putting it mildly. It invites/incentivises terrible behavioral patterns. The next guy looking has no idea what happened to that running system. (That next guy may well be you yourself a week or month later.)
it's been my experience that when most people say "Lisp does this that or the other", what they usually mean is "Common Lisp does this that or the other". Often there's an implicit "with SLIME" in there as well
Can you elaborate on how this is doable (in, say, Racket) and what tooling is needed? I'm afraid your reply doesn't add much information beyond the same assertion that I quoted that was in the article posted to HN. And I haven't been able to find information on this with Racket.
That sort of hotfix workflow isn't really a thing in Racket or Scheme in general. Changing the definition of a function doesn't update everything else that calls that function like it does in CL.
Clojure allows for that, giving you neat hot reload capabilities when working in Clojurescript. I believe Emacs Lisp works the same way, and allows for fairly fluid debugging sessions.
Universal hot reload is really a messy beast though. For every "yeah we can just reload this without re-init'ing the structure" there's another "actually reloading causes weird state issues and you have to restart everything anyways" thing.
I've found that hot reloading _specific, targeted things_ tends to get you closer to where you want. But even then... sometimes using the browser dev tools to experiment on the output will get you where you want faster than trying to hot reload ClojureScript while having to "reset" state over and over again or otherwise work around weirdness.
I think this flow works well in Emacs though because you're operating on an editor. So you can change things, press buttons, change things, and have a good mental model. Emacs Lisp methods tend to have very little state to them as well (instead the editor is holding a bunch of exposed state).
Meanwhile React (for example) has _loads_ of hard-to-munge state, which means that swapping one component for another inline might be totally fine, might crash things in a weird way, or might do nothing at all. Sometimes just a full page refresh will save you thinking about this.
I use it a lot for my one man projects; it is really fantastic in that setting. I use SBCL exclusively; it is very fast and robust and has image based development. I have my own versioning toolkit so I don't go insane.
It is obvious why it is not really used or recommended as it really falls flat in a team setting, mostly even when 2 people are involved. But fixing bugs live as they happen and then spitting out a new .exe for clients is still a lot faster than modern alternatives. Far more dangerous too.
What makes you think it falls flat in a team setting? There are plenty of N-pizza-sized teams successfully using Lisp to this day, and you're probably aware of many teams that successfully used Lisp in the past, too. There's also the success of Clojure. What's required to have a well-functioning team is mostly programming-language independent; Lisp itself won't save a team lacking those properties any more than, say, Java would.
Python is not Lisp, but jumping into a Python REPL in a halfway-run program and poking at the internals easily is _very_ useful as a debugging tool, quickly getting you answers on some messier programs.
It's a shame that other scripting languages that theoretically have the capabilities to do this don't do this (looking at you, node! Chrome dev tools are fine but way too futzy compared to `import pdb; pdb.set_trace()` and "just" using stdin)
I do also use Emacs, and with Emacs Lisp `trace-function` means you can very quickly get call traces in your running instance without having to pull out a debugger and the like. Not like you can't trace functions with `gdb` of course. But the lowered barrier to entry and the ability to do in-process debugging dynamically means you just have access to richer debugging tools from the outset.
In ruby it used to be common to ssh into a box, attach to the console and edit files from the REPL and rerun the code to see if your patch worked. I haven’t touched it in years and I doubt many people do that anymore.
A common workflow is to run code to test some function in the REPL and then promote it to a test when you are ready, and this process has been the smoothest in lisps, especially since you can create your own test harness if you need to.
Fun fact: giving AI a REPL also reduces error rates so much that you can save half the tokens/time or more.
Not Lisp, but for those interested in editing programs that are running in production:
I read an Erlang article saying that hot swapping is not actually very useful in production for various reasons, and that a blue-green deployment is preferred instead. Can't find the link atm. This was close: https://learnyousomeerlang.com/relups
Compare to this comment: https://news.ycombinator.com/item?id=42405168
Hot swaps for small patches and bugfixes, and hard restarts for changing data structures and supervisor tree.
It's not that hot swapping isn't useful; it's just difficult to do well, and you need to write your code in a way that supports it. If you need zero downtime on a device that can't do a blue-green deployment, then the BEAM has you covered. Most people just don't need that, so the extra hassle isn't worth constantly considering how to migrate data in flight.
It’s common in Clojure as well as other Lisps. I was just doing that exact thing, modifying a running program in production, earlier this week, adding in print calls to gather debugging information and then modifying the code to fix the bug and it immediately going live and the correct behavior verified.
I also see this mentioned often and have wondered the same. I can sort of envision this working in a single threaded application, but how would this work in a web application for example? If a problematic function needs to be debugged, can you pick what thread you're debugging? If not, do all incoming requests get blocked while you debug and step through stack frames?
Being paused in the debugger is per-thread. If the server's using a thread-per-request model, and you're stopped in the request, then other requests can proceed just fine. If some of those requests also trigger the debugger, they'll pause and have to wait, they won't interrupt your current debugging view. Extra care should be taken in any sort of production debugging, of course. (At a Java BigCo, production debugging was technically allowed but required multiple signoffs, the engineer wasn't the one in control but had to direct someone else, lots of barriers to prevent looking at arbitrary customer data, and of course still limited to what you can do with a standard JVM restarted in debug mode. (Mainly setting breakpoints and walking stack traces.))
But the nicest part is that once you connect to the production application, apart from network lag it's no different than if you were developing and debugging locally on similarly specced hardware to the server; you have all the same tools. Many of the broader activities around "debugging" don't need to happen in a paused thread that was entered with an explicit breakpoint or error; they can happen in a separate thread entirely. You connect, then you can start inspecting (even modifying) any global state, you can define new variables, you can inspect objects, you can define new functions to test hypotheses, redefine existing functions... if you want all requests to pause until you're done, you can make it so. Or if you want to temporarily redirect all requests to some maintenance page, you can make that so instead.

A simple thing I like doing sometimes when developing locally (and I could do it on a production binary too) is to define some (namespaced) global variable and redefine a singly-dispatched method to set it to the self object (possibly conditionally), and once I have it I might redefine the method again to have that bit commented out just so I know it won't change underneath me. Alternatively I can (and sometimes do) instead set this where the object is created. Then I have a nice variable independent of any stack frames that I can inspect, pass to other method calls, change properties of, whatever, at my leisure without really impacting the rest of the program's running operation.

Another neat trick is being able to dynamically add/remove inherited mixin superclasses on a class, and when you do that it automatically impacts all existing objects of that class as well. Mixin classes are characterized by having aspect-oriented methods associated with them; you can define custom :before, :after, or :around methods independent of the primary method that gets called for some object.
The nREPL is present even in newer dialects. It is as easy as installing Calva vscode extension for Clojure, or jacking in with Cider. This makes it perfect for LLM interaction as well.
> Of course, to be completely fair about my toolkit, standard Scheme can sometimes lack the heavyweight, “batteries-included” ecosystem required for massive enterprise production compared to the JVM.
I was thinking the whole time, "this person would _love_ Clojure".
Kawa is unfortunately a somewhat shoddy project. A lot of half-baked features / abstraction ideas (e.g. trying to support CL, for whatever reason), dubious tooling for a Java project (autotools), and unclean, inconsistent code formatting. It's also missing some features that are expected in a real Scheme, like multi-shot continuations; someone wrote up support for them as an MSc thesis, but due to the mentioned shoddiness its integration upstream stalled and it hasn't been merged.
At some point I thought of forking it to cut out and polish the core, but then my attention got caught by Graal's Truffle framework as a plausibly better path for implementing Scheme in Java.
It's funny: I can definitely sympathize with wanting multi-shot continuations, but I can't think of many times when I've wanted them to solve a problem.
> Actually, in my opinion, Scheme (and Lisp) allows you to express complex systems and problem domains in more simple terms than any other language can.
Short article. Worth reading. But all I swallowed was this one sentence.
It's the syntax. If you like semicolons, that's why you like Pascal-like languages.
Actually, variations on M-expressions have been created many times in the Lisp world. (Look what you can do with macros!) So far, none of them has caught on. The latest attempt for Scheme is SRFI-266, which creates a very nice infix expression sublanguage. If I were working on a team, I would encourage them to use this, but I don't know if it has enough traction to become widespread.
it's not just the syntax. the entire language, and even the ecosystem in general, has relatively few atoms that can be combined with a higher degree of freedom than the alternatives.
it has both upsides and downsides. the upsides mostly win for me.
You could wrap it in an unsafeIO function to make it return `()` again.
However, I’ve had very little use of printing for debugging. In Haskell you write small (ish) and pure functions that you can test extensively with property based testing. The types already help a lot as well.
So basically the only place where you deal with unexpected input is at the communication boundaries of the app, where you're in some form of IO already and printing is just available.
That's fine for a library or locally run executable, but I've worked on distributed systems in Haskell and you really need logging in place to track what is going on.
Of course, you will have IO somewhere in an executable where you can handle logging, so just separate pure and IO and make sure you have good tests for the pure functions. Also, lint to catch partial functions and dangerous lazy ones (or use an alternative prelude).
Sure, you want logging and tracing (in the RPC sense, not Debug.Trace.trace).
Most of this can still be done from IO places where the pure functions collect enough error information bubbling up (e.g. content and line/col of parser errors etc.) to not need ad hoc print statements for debugging.
Eh sure. But you can always collect/carry decisions in something like an Either. When using arrows or your own monadic bindings it is even possible to abstract this away from view.
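For the curious, here's a minimal Python sketch of the Either pattern being described - Left carries the error, Right the value, and bind short-circuits (`parse_int`, `bind`, and the class names are invented for illustration; Haskell's `Either` with `>>=` is the real thing):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

L = TypeVar("L")
R = TypeVar("R")

@dataclass
class Left(Generic[L]):
    value: L  # carries the error

@dataclass
class Right(Generic[R]):
    value: R  # carries the successful result

def bind(e, f):
    """Monadic bind: short-circuit on Left, apply f to the value on Right."""
    return e if isinstance(e, Left) else f(e.value)

def parse_int(s):
    try:
        return Right(int(s))
    except ValueError:
        return Left(f"not an int: {s!r}")

result = bind(parse_int("42"), lambda n: Right(n * 2))
print(result)  # Right(value=84)
```

With monadic syntax (do-notation in Haskell) the `bind` plumbing disappears from view, which is the "abstract this away" point made above.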
What evolution in particular do you think? The developers use it for commercial products in quantum computing and defense [1]. That doesn't mean it's done in some complete language ecosystem sense (which is discussed in [1], and one could argue Haskell also never feels "finished"), but it also doesn't seem like an unfinished hobby project. Given that it's embedded in Common Lisp, there's always a way to fill in the library gaps, sort of like how if a "native" library doesn't exist in Clojure, one can always reach for Java.
[1] From Toward Safe, Flexible, and Efficient Software in Common Lisp at the European Lisp Symposium, "[Coalton] has been used for the past 5 or so years [...] first in quantum computing and now a serious defense application." https://youtu.be/xuSrsjqJN4M&t=9m14s
I am an avid SBCL and Coalton user (and sponsor of both when I can) and never said it was not a great thing; comparing it to Haskell is, outside the shared theoretical type system roots, just a bit early, type-system-wise.
I agree with you, further: you made an excellent promotional comment for Coalton and CL; keep doing that, please. I have said many times here before that I did not like my time away from CL, and Coalton makes it even better.
Compared to Lisp? OK, fine. Syntax doesn't get simpler than Lisp's. But compared to JavaScript? C++? C#? Haskell is top tier when it comes to syntactic and conceptual elegance. The biggest problem is tooling, I would say.
I could not agree less. People used to call Python “executable pseudocode” - in that spirit, Haskell is executable pseudo-math. If you’ve done enough higher math that a professor’s whiteboard notation feels natural to you, then Haskell might feel like a reasonable approximation of that style. Otherwise: it’s line noise.
Haskell is very elegant and pretty. It's hard to describe what pretty is when it comes to programming languages, but imo golang is ugly, rust is good, and Haskell the best.
I don't believe monads are a "heavy handed abstraction" and that's what prevents people from prototyping in Haskell.
What really prevents people from writing in Haskell at a reasonable speed is the poor language design. Programming languages are supposed to aid in reading by emphasizing structure. It's important to emphasize that a particular group of "words" constitutes a function call, or a variable definition, or a type definition -- whatever the language has to offer.
Haskell is a word salad. Every line you read, you have to read multiple times, every time trying to guess the structure from the disconnected acronyms. It belongs to the "buffalo buffalo buffalo buffalo" gimmick family. This is a huge roadblock on the way to prototyping as well as any other activity that implies the ability to read code quickly. And then it's also spiced by the most bizarre indentation rules invented by men.
This is not at all a problem with eg. SML or Erlang, even though they are roughly in the same category of languages.
Haskell would've been a much better language if it had made its syntax more systematic, disallowed syntactic extensions such as user-invented infix operators and overloading of literals (heavens, why???), and required parentheses around function arguments both for definition and for application. The execution model is great, the type system is great... but the surface, the front door to all these nice things the language has, is just some amateur-level nonsense.
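To make those two complaints concrete, here is a small illustrative sketch (the `|>` operator and the names are invented for the example, not taken from any comment here):

```haskell
-- Literal overloading: the token 1 is sugar for (fromInteger 1),
-- so its meaning depends entirely on the type the context demands.
asInt :: Int
asInt = 1

asDouble :: Double
asDouble = 1  -- same token, different type and representation

-- User-invented infix operator: readers must know its fixity
-- declaration before they can even parse expressions that use it.
infixl 1 |>
(|>) :: a -> (a -> b) -> b
x |> f = f x

main :: IO ()
main = print ((2 :: Int) |> (+ 1) |> (* 3))  -- parses as ((2 |> (+1)) |> (*3)) = 9
```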
* * *
As for the upsides of using languages from the Lisp family for practical problems... I don't find (syntax-rules ...) all that exciting. I understand this was an attempt to constrain the freedom given by Common Lisp macros, and I don't think it worked. I think it's clumsy and annoying to deal with. The very first time I tried to use it, I ran into its limitations, and that felt completely unjustified. To prototype, you want freedom of movement, not some pedantry that will stand in your way and demand you work around it somehow.
The absolute selling point, however, is SWANK. Instead of editing the source code, you are editing the program itself, which you can interact with at points of your choosing. I don't know of any modern language that offers this kind of experience. Even back in the '80s, I think, this approach to programmers interacting with computers was common. At school, we had terminals with some variety of Basic, and it worked just like that: you type the program and it instantly shows the effect of your changes. Then there was also Forth, which worked in a similar way: it felt like you were "talking" to the computer in a very organized and structured way, but in real time.
Most mainstream languages today sprouted from the idea of batch jobs, where the programmer isn't at the keyboard when the program runs. They came with the need to anticipate and protect the programmer from every minor mistake they might've easily detected and fixed during an interactive session far, far in advance.
Whenever I think about writing in C, or Rust, or Haskell, I imagine being tasked with going to the grocery store blindfolded: I'd need to memorize the number of steps, the turns, predict the traffic, have canned strategies for what to do when potatoes go on sale... I deeply regret that programming evolved along this path, and that our idea of what it means to program is, mostly, the skill of guessing an impossible-to-predict future, instead of learning to react to events as they unfold.
This is not what "subjective" means. You can't argue something is subjective because many people don't agree with an opinion.
When someone argues subjectivity (in a negative sense), they need to show that the opinion does not rely on facts, but rather that it's based on... nothing (feelings).
I offered a very easy way to numerically assess the negative impact of poor language design choices made by Haskell's designers. It's not about what I "feel" about the language: in Java, you write a three-word program, and you usually get a unique interpretation. In Haskell, you write a three-word program, and you get 9 (nine) possible interpretations. It's impossible for a human to examine nine interpretations simultaneously and figure out which of them are valid and might fit the context. So, reading a Haskell program takes longer and requires more effort than reading a Java program.
Of course, Haskell programmers find ways to adapt to their misfortune. They try to avoid pathological cases (e.g., writing four-word programs, let alone five!), they memorize a lot of acronyms and non-typographical symbols that they later use to prune the search for a possible meaning of the program. They invent conventions on top of the bare language design that constrain the search space for possible programs to make their task easier.
It's absolutely possible that after layers of conventions and a long time spent memorizing various acronyms and symbols, Haskell programmers catch up to the speed of programmers in other languages: after all, the superficial difficulties with the language might seem like a small price to pay for access to the riches that lie beyond the surface. The language's grammar rules cannot account for the entirety of the performance of the programmers who choose to write in it.
This situation is very similar to the "universal" (claimed, but not in practice) mathematical language, which is extremely difficult to read, write, edit, typeset... yet the tradition of using it prevails and the overwhelming majority of mathematicians use, and prefer using the "universal" mathematical language even though much saner alternatives exist.
There aren't a lot of Haskell programmers, so "lots" is maybe an exaggeration.
I see OP's point. Haskell feels (or felt, I admit I haven't been keeping up the last 15 years) needlessly obtuse sometimes, like how people love to invent new infix operators all the time.
> Haskell is a word salad. Every line you read, you have to read multiple times, every time trying to guess the structure from the disconnected acronyms. This is a huge roadblock on the way to prototyping as well as any other activity that implies the ability to read code quickly.
I couldn't disagree more. Yes, there is more upfront work in understanding Haskell code. But it's very dense. Once you understand the patterns, you can read it much more quickly. Just like map/filter/fold are harder to understand than a for-loop, but once you do understand them, you can immediately see what kind of iteration is being applied. The for-loop can do all kinds of crazy index manipulation that you always have to digest from scratch.
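A tiny sketch of that density (the function and its name are mine, invented for illustration): the combinator pipeline names the shape of the iteration directly, where an equivalent loop would bury it in index bookkeeping.

```haskell
-- Sum of the squares of the even numbers in a list.
-- filter/map/sum each name one stage of the iteration.
sumSquaresOfEvens :: [Int] -> Int
sumSquaresOfEvens = sum . map (^ 2) . filter even

main :: IO ()
main = print (sumSquaresOfEvens [1 .. 10])  -- 4+16+36+64+100 = 220
```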
> And then it's also spiced by the most bizarre indentation rules invented by men.
Again, I'm quite surprised by this criticism. The rule is extremely simple: inner expressions must be indented more. You're free to decide by how much. That's why there are many "styles" out there. Maybe that's what you mean by bizarre. But it's not like the language is forcing weird constraints on you. If anything, the constraints are too lax. Any other language with non-mandatory indentation allows that as well. In general, I really don't understand why more languages don't do mandatory indentation. You only need curly braces and semicolons if you want the option to write a whole if/else/while/... statement on one line. But nobody does that.
Not to support the parent comment, which I disagree with, but if you use multi-line let-bindings, those require that you indent not just more than the previous line, but exactly as much as the first token after the let keyword on the previous line. It's a very strange rule, all the more surprising because it's inconsistent even with the rest of the language. It is totally avoidable if you, like I think most experienced Haskellers do, just prefer ‘where’, but people more familiar with procedural code usually lean into using ‘let’ everywhere because it feels more familiar.
I think the strange indentation used to be required in more places - I vaguely remember running into it a lot more when I started with Haskell 20 years ago, but that was also just when I was new to the language. These days I just keep ‘let’ to a bare minimum, so it doesn’t bother me. One thing that made Elm frustrating was that it disallowed ‘where’ clauses, forcing you to deal with this weird edge case all the time.
No, the issue is that if the first binding is on the same line as the `let`, you are required to write, e.g.:
    someValue = let f = 9
                    fo = 10
                    foo = 123
      in f+fo+foo
rather than:
    someValue = let f = 9
        fo = 10
        foo = 123
      in f+fo+foo
I think it used to be the case that it had to be indented past the `=` or the `let` even if it wasn't on the same line. Note also that `in` has to be indented past `someValue`, but doesn't need to be indented as far as `let`.
This is fine:
    someValue = let
        f = 9
        fo = 10
        foo = 123
      in f+fo+foo
So, it is possible to land on sane indentation, but the parser is much pickier than, e.g., Python's offside rule, so it takes some trial and error for new users to find it, and it can be frustrating if you're just temporarily modifying an expression to quickly try something out.
I honestly think it would be less surprising if the parser just disallowed writing the first binding on the same line as the `let` entirely, treating it only as a block, but some people (bewilderingly) do seem to prefer to write their code with the excessive indentation (I'd imagine with editor support, rather than manually maintaining the spacing).
I feel like you are describing that the parser is too lenient rather than too picky. It could just require you to always put `let` and `in` on their own lines, in which case the indentation makes sense, I think. It's only when trying to keep more stuff on the same line that the details of Haskell's indentation rules come into play.
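For reference, a compilable sketch of the `let`-and-`in`-on-their-own-lines style discussed in this subthread (the names come from the examples above; the `main` wrapper is added to make it runnable):

```haskell
someValue :: Int
someValue =
  let
    f = 9
    fo = 10
    foo = 123
  in f + fo + foo
-- The bindings open a layout block at the column of `f`;
-- `in` is indented less than the bindings, which closes the block.

main :: IO ()
main = print someValue  -- 9 + 10 + 123 = 142
```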
> It's important to emphasize that a particular group of "words" constitutes a function call, or a variable definition, or a type definition -- whatever the language has to offer.
For background: my first time in college, I was studying typography. An integral part of this trade is figuring out what is easier for people to read by answering questions such as: what is the best line length, what number of columns per page is best, what number of ascenders per typeface is best, considering letter frequencies and coincidence, and so on.
It also comes with the editing part, as in the trade of taking a manuscript (a text intended to be published) and making sure that the text meets certain reader expectations in terms of consistency, clarity, and structure. This, obviously, includes the use of punctuation, but it's more about the language structure, things like adjective order or anaphora usage, etc.
Programming languages can be judged using the same rules because, at the end of the day, we read them and need to interpret them. People have particular strengths and weaknesses when it comes to reading: we can remember an anaphora's anchor for only so long, we can hold only so many "variables" in fast-to-access memory, we can do only so many levels of adverb-phrase nesting, and so on.
Haskell was designed by someone completely oblivious to human reading abilities. It's very demanding and straining when it comes to extracting structure from text, in the same way that, in English, you'd struggle to extract structure from so-called "garden path" sentences, because there it's intentionally obfuscated. I don't believe Haskell is intentionally obfuscated; instead, I attribute the poor performance to a lack of awareness on the part of the author.
To convey the same point by means of example: Haskell is almost uniquely bad in that given a program
A B C
the programmer can't tell if the program is actually A(B, C), or B(A, C), or C(A, B), or A(B(C)), or A(C(B)), or (A(B))(C), or (B(C))(A), or (B(A))(C), or (C(B))(A).
There's absolutely no reason a language should offer these kinds of puzzles, especially in a very large quantity as Haskell does. Removing this "feature" would make the language a lot easier to work with.
In Haskell it's only ever one of A(B)(C) or B(A)(C), and you can tell which based on which characters B is made up of. If B starts with one of !#$%&*+./<=>?@\^|-~` it's the second situation, otherwise it's the first.[0] All functions are unary in Haskell, so A(B, C), B(A, C) and C(A, B) can never actually happen. The cases where it looks like A(B(C)), etc. are happening are actually cases of B(A)(C); e.g. f $ g is a B(A)(C) case where B = $. So the basic syntax of Haskell is actually very simple and consistent, but due to lazy evaluation the functions can affect control flow much more than in other languages.
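A minimal sketch of the two cases (the function names are invented for illustration):

```haskell
-- Juxtaposition is always left-associated application: the A(B)(C) case.
addThen :: Int -> Int -> Int
addThen a b = a + b

-- A symbolic name in the middle is infix: f $ x parses as ($) f x,
-- the B(A)(C) case.
double :: Int -> Int
double = (* 2)

main :: IO ()
main = do
  print (addThen 1 2)     -- (addThen 1) 2 = 3
  print (double $ 1 + 2)  -- ($) double (1 + 2) = 6
```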
0: OK, there are some additional non-ASCII Unicode symbols, but everything but string literals should be kept ASCII IMO.
> the programmer can't tell if the program is actually
What do you mean, "can't tell"? If I see this in Python
(A)(B)(C)
how do I know which of your 9 it means? Well, I'm a Python programmer so I know that it means
A(B)(C)
which is the function A applied to B, which returns a function that gets applied to C. If you're a Haskell programmer you know that it means the same thing.
I grant you that it is odd to those who are unfamiliar and it took me quite a while to get used to it, but it's much better to write that way in Haskell when writing programs that use higher-order functions.
Mmm. I think I understand where you are coming from. You can write incomprehensible code in Haskell very easily, and I agree that some people tend to write Haskell in a way that is easy to write but very hard to read.
But that is a choice. I prefer not using complex function compositions and lenses because of this, splitting complex expressions into a bunch of let bindings, etc.
So you also can write very readable code in Haskell.
It's just not good because you need to work around its limitations, whatever its purpose is. Not good for prototyping because it's the red tape you need to cut to get work done. Red tape isn't, in general, a bad thing, but when it comes to prototyping it is.
I think most people misunderstand syntax-rules. It was not meant as the macro system for Scheme. It was meant as the template macro system everyone could agree on, while leaving the more powerful low-level macro systems to the implementations: syntax-case, explicit/implicit renaming, syntactic closures, or what have you.
From your last paragraph, I am curious which languages / paradigms you advocate for. Sorry it wasn't clear to me except that you like SWANK, which I'm not familiar with.
I wish there was some sort of a single metric that would allow measuring languages against each other and thus determining the best one. Unfortunately, there are multiple variables and the relationship between the variables is unclear. But, going totally with my gut feeling, some examples of good languages (in terms of ease of reading) include:
* Prolog (and, by extension, Erlang).
* Pascal.
* Java 5 and earlier (and Go, as it's almost Java's twin).
These languages somehow manage to hit the sweet spot of enough regularity and enough diversity, with few unexpected syntax constructs (e.g., Pascal and Java have the "dangling else" problem, but it's manageable compared to the problems introduced by optional statement delimiters in Go or JavaScript, for example). In every such case, a programmer must program defensively against these sorts of language "pathologies".
To give some examples of questionable or outright bad design decisions:
* In Common Lisp (and Scheme as well as a number of similar languages) there's a problem with identifying the open parenthesis that will be closed by typing the closing parenthesis. Programmers must invent tools and techniques to manage this problem.
* In C++, there's (or, at least, was for a long time) a laughable rookie "whoopsie" when it comes to ">>" in templates vs. the infix operator. And the "solution" offered by the language designers makes you think they were just... lazy (add a space).
Here are also examples of some (perhaps, accidentally) good decisions:
* Kebab-case in many Lisp-family languages. In Latin script, the position of the hyphen, at the middle height of lower-case letters, is a better choice than, e.g., the underscore (which is derided as "not a typographic character"). For the same reason, e.g., in traditional Hebrew, hyphens are at the height of a capital letter (Hebrew doesn't have lower-case letters, and the shape of its letters is better suited for hyphens at the top rather than the middle).
* Clojure as well as Racket (afaik, deliberately) introduced more kinds of parenthesis-like delimiters to make it easier to guess which expression is being terminated by the currently typed delimiter.
* * *
Note that this is a "superficial" metric, because languages are also valuable for concepts they are able to express both in terms of program logic as well as program application to the hardware it manages; the ability to process, modify, generate, analyze the language automatically; the ability to constrain the language to a desired subset of all available operations... Incorporating all of these into a single metric seems like mission impossible :)
> Are you mixing tabs and spaces? Maybe an example here would help.
This is not what "rules" means. Rules aren't about what I do. Rules are about what the language treats as legal or illegal. I don't write in Haskell at all because I don't like it and have no use for it, but Haskell's rules don't change because of that; they are still mindbogglingly complex when it comes to telling the programmer whether the next line is indented by the right amount. None of that complexity is necessary, and it could've been totally avoided if the language used statement delimiters.
> No, this is important, so that default strings don't have to be something crummy.
My argument is that to get a little accidental convenience you sacrificed a huge amount of routine convenience. The mental load of having to distrust a string when you see it is just not worth the accidental convenience of writing a prepared statement and making it appear as if it was a string. In other words, you are the guy who traded a donkey for three beans, but the beans didn't sprout into a huge ladder that took you to the giant's castle. You just made a very watery soup and that was that.
> Again, an example would be helpful.
Look up the example I gave in the adjacent reply.
> I thought lazy execution was widely agreed to be the worst part of Haskell.
It's good because it's unique and, when it fits the purpose, it's useful for that particular purpose and nigh irreplaceable, because it is unique. It's worth having for the sake of research, to understand how languages can be designed and what tools or techniques can be discovered on this path. This is said from the perspective that Haskell is not an end product, but rather research attempting to study how languages can work and what concepts they can develop.
I learned Scheme before Haskell and as much as I enjoyed the experience, I still wouldn't reach for Haskell first. It's pretty much limited to my xmonad configuration.
Can you say more about the system? A lifetime ago I was really excited about gambit (and bigloo) but I never had the chance to work with them beyond messing around here and there after work.
That's why I switched to Common Lisp, its type system isn't perfect but it works well enough for my needs (especially with the occasional (describe 'sycamore:tree-insert) in the REPL).
> Remember that later, just adding a simple print somewhere is not going to work without refactor (welcome to the IO monad).
This hits home for me. Print statements are essential to the way I code. I use them to debug, to examine variables and parameters, to trace execution flow. I rarely use debuggers.
For caveman debugging, if I'm not sitting in a monad, I usually reach for something like Debug.Trace. Typically that's in Idris or my own language, but I see that haskell has it too.
For my own language, I have the syntax highlighting set to put the `trace` keyword in red, so I can easily clean up.
Debug.Trace.trace and friends can help here. This can work from pure code.
But lazy evaluation does imply that trace functions only execute when the expression is actually forced. Still, it is quite helpful during debugging.
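A minimal sketch of this (Debug.Trace and its `trace` function are real; the example program is invented):

```haskell
import Debug.Trace (trace)

-- trace emits its message (to stderr) when the expression it wraps is
-- forced, so it works from pure code without threading IO through.
step :: Int -> Int
step x = trace ("step: " ++ show x) (x * 2)

main :: IO ()
main = print (step (step 3))
-- The trace messages fire only when print forces the result;
-- the printed value is 2 * (2 * 3) = 12.
```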
Picking a language is a matter of selecting the best fit given the constraints of the project.
For stuff I like to work on on my own time, "how I like to work" is a major forcing constraint. So it's no surprise that I have a large number of Lisp projects sitting around. Maybe it's because I'm auDHD, but the ability to evolve a program through active dialogue with the machine (and not of the sloppotron variety) just fits better with how I think through a problem and its solution.
Irrespective of the language, I love the REPL. For this reason, among others, I just cannot get into Agentic Coding. It seems like a step back to batch processing.
I tried some ML language once; I found it difficult even to write a basic factorial example, which in Scheme I could do both iteratively and recursively with ease.
Either with S9 Scheme for quick fun (it has Unix sockets and ncurses :D ) or Chicken Scheme for completeness (R5RS/R7RS-small + modules), I always have fun with both.
Oh, and well, Forth too, but more like a puzzle (although it shines at teaching you that you can do a lot with fixed point). Hint: write helpers for rationals (a/b, where a is an integer and b a non-zero integer) and complex numbers by placing two items on the stack for each case (for the rational helpers you need four: a/b [+-*/] c/d).
You can have a look at qcomplex.tcl (either online or installed) as an example of how it can work, even under JimTcl itself, by just sourcing that file. Magic: complex numbers under jimsh thanks to the algebraic properties. So you can implement the same for yourself in some Forths, even under eForth for Muxleq. Useless? It depends; on an ESP32 it can be damn fast, faster than MicroPython.
I think a syntax either matches how our brains work or it doesn't. I think anyone is capable of learning any syntax; the question is whether they want to. At some level, programming is art.
From my limited SMLNJ experience I think for something as simple as factorial, it is nearly the same. Both have TCO, recursion, inner functions, pattern matching and those good things. You can structure the code the same way.
I mean, in Scheme it is longer to write. I enjoy Lisps and use Emacs for everything, but Haskell can be as terse, or even more terse. (Which is not always a good thing.)
> Lisp hackers have been effortlessly reshaping the language for decades using the powerful macro system and extending and bending the language to their will.
I've written a bit of Racket code (https://github.com/evdubs?tab=repositories&q=&type=&language...) and I still haven't written a macro. In only one case did I even think a macro would be useful: merging class member definitions to include both the type and the default value on the same line. It's sort of a shame that Racket, a Scheme with a much larger standard library and many great user-contributed libraries, has to deal with the Scheme/Lisp marketing of "you can build low level tools with macros" when it's more likely that Racket developers won't need to write macros since they're already written and part of the standard library.
> But the success of Parsec has filled Hackage with hundreds of bespoke DSLs for everything. One for parsing, one for XML, one for generating PDFs. Each is completely different, and each demands its own learning curve. Consider parsing XML, mutating it based on some JSON from a web API, and writing it to a PDF.
What a missed opportunity to preach another gospel of Lisp: s-expressions. XML and JSON are forms of data that are likely not native to the programming language you're using (the exception being JSON in JavaScript). What is better than XML or JSON? s-expressions. How do Lisp developers deal with XML and JSON? Convert it to s-expressions. What about defining data? Since you have s-expressions, you aren't limited to XML and JSON and you can instead use sorted maps for your data or use proper dates for your data; you don't need to fit everything into the array, hash, string, and float buckets as you would with JSON.
If you've been hearing about Lisp and you get turned off by all of this "you can build a DSL and use better macros" marketing, Racket has been a much more comfortable environment for a developer used to languages with large standard libraries like Java and C#.
> How do Lisp developers deal with XML and JSON? Convert it to s-expressions.
As a common lisp developer, that is only very vaguely true for me.
The mapping I prefer for json<->Lisp is:
This falls out of my desire for the mapping to be bijective:
- The only built-in type that is unambiguously a mapping type is hash-table.
- nil is the only value that is falsy in CL.
- () is the same as nil, so we can't use it as an empty list; vectors are the obvious alternative.
- Not really any obvious values left to use for "null", so punt to a keyword.
In Kernel I would use something like this:
Where &, :, @ are defined as: Using the "person" example from the JSON/syntax section on Wikipedia: I would then define `?`. Now we can query the object.

[0] https://web.cs.wpi.edu/~jshutt/kernel.html
For what it's worth, anytime I have written a macro it's usually not because it's needed, but just because I think it'll be fun :)
When I learned Scheme, I liked the language but strongly disliked macros and quotation. I'd only been using it a short while, and when I searched for solutions to a few problems, these "fexpr" things kept popping up, which I didn't understand, along with this "Kernel" language. I decided to learn it, since "fexprs" were apparently the solution to several of my problems. This wasn't easy at first - I had to read the Kernel Report several times - but I ended up finding it way more intuitive than using macros and quotes.
I've not written a Scheme macro since. I've written hundreds of Kernel operatives though.
I was also a typoholic previously, but am in remission now thanks to Kernel.
https://web.cs.wpi.edu/~jshutt/kernel.html
Think of macros as what you want when you want to perform computation at compile time rather than run time.
An example: building the equivalent of a switch statement, but that compares (via string equality) with a set of strings. The macro would translate this into code that would do something like a decision tree on string length or particular characters at particular positions.
Basically anything that's done with a preprocessor in another language can be done with macros in Lisp family languages.
The other motivation for me is to drastically reduce boilerplate code. I can’t believe people here are saying they never use macros; they are so good for this that avoiding them sounds to me like a skill issue! Overuse can damage readability, sure, but so can pretending macros are not an option.
Operatives do that for me, better than macros. Parent is correct that macros are compile time, which gives them a performance advantage over operatives - but IMO, they're not better ergonomically. I find operatives simpler, cleaner and more powerful.
I understand the use case, but Scheme macros never felt intuitive to me. I think it may be the quotation more than anything that I dislike - though I also dislike that they're second class (which was the key thing which led me to Kernel).
I use C preprocessor macros extensively and don't have the typical dislike for them that many people have - though I clearly understand their limitations and the advantage Scheme macros have over them.
Since learning Kernel, the boundary of "compile time" and "runtime" is more blurry - I can write operatives which behave somewhat like a macro, and I do more "multi-stage" programming, where one operative optimizes its argument to produce something more efficient which is later evaluated - though there are still limitations due to the inability to fully compile Kernel.
As one example, I've used a kind of operative I call a "template", which evaluates its free symbols ahead of time but doesn't actually evaluate the body. When we later apply it to some operands, it replaces the bound symbols with the operands, looking up any symbols to produce an expression which we don't need to immediately evaluate either - but this expression has all symbols fully resolved. This is somewhere between a macro and a regular operative.
Consider:
In this template `x` and `y` are bound variables and `+` and `z` are free. The template resolves the free symbols and returns an operative expecting 2 operands, effectively providing an operative with the body: When we call the template with the two operands, it resolves any symbols in the arguments and returns the full expression with no symbols present, but it doesn't evaluate the expression yet. When we decide to evaluate the expression, no symbol lookup is necessary - it can perform the operation rather quickly, despite the slow interpretation.

---

The $template form above isn't too difficult to implement. I've iterated several forms of this - some which only partially resolved the bound symbols - but lost them in a RAID failure. An earlier version which has some issues I still have, because I put it online:

---

At present the best interpreter is klisp, and the fastest is bronze-age-lisp, which uses klisp - with parts hand-written in 32-bit x86 assembly.
I've been working on a faster interpreter for a number of years as a side project, optimized for x86_64 with some parts C and some parts assembly. It has diverged in some parts from the Kernel report, but still retains what I see are the key ingredients.
My modified Kernel has optional types, and we have operatives to `$typecheck` complex expressions ahead of evaluating them. I intend to go all in on the "multi-stage" aspect and have operatives to JIT-compile expressions in a manner similar to the above template.
Which implementation do you use?
I use klisp[1] and bronze-age-lisp[2] mostly for testing, as they're the closest to a feature complete implementation of the Kernel Report.
I've written a number of less complete interpreters over the years. I currently have a long-running side-project to provide a more complete, highly optimized implementation for x86_64.
[1]:https://github.com/dbohdan/klisp
[2]:https://github.com/ghosthamlet/bronze-age-lisp
Sometime back 15 years ago [0], I hit a bit of an existential crisis regarding my career and the kind of work I was doing.
I thought the particular technology I was working in was "part of the problem", as I felt pigeon-holed by .NET and C# to always be a corporate-monkey CRUD consultant. So, I went out in search of something better. Different programming languages. Different environments. Just something that wasn't working for asshole clients who thought it was okay to yell at people about an outage in a hotel on the complete opposite side of the country that was more due to local radio interference than anything I had done in the database code that configured things. Long story involving missing a holiday with my family over something completely outside of my control and yet I still got blamed for it. The problem wasn't the technology, it was the company I was working for, but at that time in my life, I didn't understand the difference.
Racket was a life preserver at that time.
It's really hard to explain, because I never actually ended up working in Racket full-time and I haven't even touched it in probably 10 years. But it still has this impact on my identity as a software developer. I learned Racket. I forced myself out of being a Blub programmer and into someone who saw the strings that underwrote The Universe. The beauty of S-expressions and syntactic forms and code-is-data and all that. It had a permanent impact on my view of what this job could be.
I still work primarily in .NET. Most of the things that were technological issues with .NET Framework got resolved by what was first .NET Core and what is now just .NET. So, I no longer feel like my tools are holding me back. And I'll forever be thankful to Racket (and the community! The Racket listserv was amazing back then. Probably still is, I just don't interact with it anymore) for being there for me.
Edit: Haskell was in fact another language I explored at that time, in addition to OCaml and Ruby and Python (ugh! Don't get me started on Python!) and many other things. They were all "cool" in their own way, but nothing felt like Racket. They all had their own weird rules that made me feel like I was being bossed around again. Racket felt like art. Racket felt like it was there for me, not the other way around.
[0] I still think of this time as the "mid-point" in my career, but it's now long enough ago that I've spent more time past the crisis than I ever spent in it. Strange feelings.
> [...] who thought it was okay to yell at people about [...]
That society as a whole accepts this kind of abuse, no matter the industry or circumstances, is beyond me. It's an abuse of power. If anybody did this to anyone, the only appropriate response would be to walk away and never come back. Nobody would accept this kind of crap from family and friends, so why is it okay in a professional setting? Because of the money/power dynamics at play? If we had a consensus in society to walk away, that would end it in no time.
> Nobody would want to accept this kind of crap from family and friends
Hm… I think I have bad news for you.
I think I much prefer Haskell DSLs over Lisp macros as the basis for APIs in foreign code. That might be due to my relative inexperience with Lisps, but macros just seem to make all the bad aspects of dynamically typed languages much worse. Looking at some piece of code in isolation, not only is it often impossible to tell the type/shape of the data coming in (as is common with dynamic type systems), but with macros added to the mix I also can't tell what the control flow is. So to understand what a single piece of code is doing, I find myself chasing hints that are scattered throughout the entire codebase.
Contrast this to Haskell's use of DSLs – although they really can be quite dense sometimes, I feel like, when I get stuck, I can always just dig into the documentation on Hackage, and figure out things from the type definitions (even when explanations in docs are lacking). Though it does require being comfortable with the abstractions being used (monads and such). Rust is similar in this manner but to a lesser extent.
But again, maybe the macro critique stems from my inexperience with them.
I wrote a couple of macros that record data transiting through code at runtime (it's in Clojure, so basically almost every function is pure, returning what it produces as if it were water flowing out of a faucet), store these intermediary results in a file, and finally display these values in the code itself, as comments, just below the call site that produced them.
You can then, for a given call-site, choose to "load" these recorded computations, which will change the displayed comments, both below this call site and all the other instrumented call-sites that are downstream to it, even for code sitting in other source files.
It's a bit fragile and needs more polishing, but it's a lot more convenient than any type system, which will always get in the way or not be powerful enough, and it lets me see what kind of data flows through my program without running it. Because I record everything and display the results not at compile time but at coding time, in the same window, alongside the rest of the code. I don't understand why this was never done before (to the best of my knowledge). The biggest limit I run into is that Clojure doesn't provide any means to identify areas of my code that are not pure.
> You can pause, inspect objects, change values, and even redefine a broken function on the fly to test a fix in any environment (yes even in production, while running).
I see this mentioned often, and it sounds amazingly useful (especially the part about fixing in production!). But how widespread among the Lisp dialects is the ability to connect to a running program, debug it, and hotfix it? I understand Common Lisp has it, but I struggled to figure out how to do it in, say, Racket. Admittedly I'm a relatively inexperienced Lisp programmer, so maybe I wasn't looking in the right place or for the right words. Which Lisp dialects do indeed support the extreme version of this capability to inspect and edit running programs?
It’s trivially easy to do in Clojure (literally one line of code to start an nREPL server, after deps/requires), and often very useful in dev and personal, local projects. In practice, I’ve never once used it in a user-facing production system, in 16 years of writing Clojure.
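For context, the one-liner in question looks roughly like this (a sketch; the dependency version and port number are illustrative, and it assumes `nrepl/nrepl` is on the classpath):

```clojure
;; deps.edn (version number is illustrative):
;; {:deps {nrepl/nrepl {:mvn/version "1.3.0"}}}
(require '[nrepl.server :refer [start-server]])

;; Start an nREPL server inside the running process; editors like
;; CIDER or Calva can then connect to localhost:7888 and evaluate
;; code live in the process.
(defonce server (start-server :port 7888))
```

Connecting remotely works the same way, which is exactly why exposing that port on a production box demands the security considerations discussed below.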
Out of the box, there’s zero security or audit trail. Building that properly isn’t trivial and, even with it in place, many corporate infosec teams would have fits if you suggested that engineers can make arbitrary inspections/modifications to a running production system.
Where it could be appropriate, often you’re running the code in autoscaling containers or something similar. Modifying one instance then is rarely anything but a terrible idea.
Where I have used it is for things like long-running internal batch systems that run a single instance and never touch any sensitive data. Connecting a REPL in those cases is much more flexible and powerful than, say, building a dashboard UI or a control API over http, and you get it for free.
Yes, but I don't know how someone familiar with a JetBrains IDE can claim that only Lisp has that feature. I love Common Lisp and SLIME, but most of what it can do, I can also do in Java with the IDE. Change a method definition while it's running and then restart the method? No problem. Run any code within the context of the running method? Yes, Java can do it. Change local variable values in the middle of a method? Easy!
The Lisp REPL is still superior because it comes with more stuff, like DECOMPILE, INSPECT and so on, which can only exist because the language is essentially a compiler even at runtime (which can also be a problem for sensitive domains)... but in Java you can do all those things using the IDE, so the distance between what is possible in Lisp and in a language with good IDE support like Java or Kotlin is now negligible, in my opinion.
I've frequently said that Java + JRebel gets the closest to the Common Lisp + SLIME experience (closer than Python), but as you say the Lisp experience is still superior; the Java ecosystem has yet to close the gap*. The widest part of that gap is not having the condition system built into Java (though I'm aware people have tried to make a comparable one as a library); lacking it degrades the debugging experience considerably (even though simple step-debugging is typically more pleasant than in Lisp). IntelliJ's drop-frame feature isn't good enough.

The other problem is needing Java + something. What you get with just a regular JVM running under your IDE is no better than what other languages offer (if they offer anything) as their cute hotswap/hotpatch feature, and it comes with big limitations (like no changing method signatures, no adding/removing methods or properties, or changes only applying to new objects). Once you're doing something non-trivial, especially if you're trying to incrementally develop your program rather than just debug one specific problem, you'll have to restart.

In contrast, Common Lisp has disassemble, describe, inspect, compile, fmakunbound, ... all callable as functions at runtime, and update-instance-for-redefined-class is part of the standard language too. Support for live reloading of everything is baked into the language rather than being a hack on top; SLIME is just a convenient way of working with it. It's still convenient to restart the program occasionally, but few things force you to.
Unfortunately JRebel has killed their free tier, so I'd now point unwilling-to-pay programmers to something like https://github.com/JetBrains/JetBrainsRuntime, which is IntelliJ/Eclipse/whatever-independent. I haven't tried it myself yet, though... Given that it only addresses the biggest class-reloading concerns, I doubt it's actually comparable to JRebel for business-world Java. JRebel handles, among other things, dynamic reloading from XML changes and reinitializing autowired Spring beans that other classes use as dependencies.
*Caveat, I've been out of the professional Java grind for a while, I'd be pleasantly surprised if some new version that's come out contradicts me.
I used this in the past: https://ssw.jku.at/dcevm/
Though nowadays the IntelliJ debugger with the OpenJDK is enough for me. I know what works and what doesn’t so I rarely feel frustrated.
People do it in Clojure all the time in the dev setup. And you technically can do it in your customer environments too, but that is of course a bit of a cowboy thing to do.
"Cowboy thing" is putting it mildly. It invites/incentivises terrible behavioral patterns. The next guy looking has no idea what happened to that running system. (That next guy may well be you yourself a week or month later.)
It's been my experience that when most people say "Lisp does this, that, or the other", what they usually mean is "Common Lisp does this, that, or the other". Often there's an implicit "with SLIME" in there as well.
This is doable in Common Lisp, Scheme/Racket, and Clojure. Yes, it might require some tooling.
Can you elaborate on how this is doable (in, say, Racket) and what tooling is needed? I'm afraid your reply doesn't add much information beyond the same assertion that I quoted that was in the article posted to HN. And I haven't been able to find information on this with Racket.
That could very well be it. I guess I had gotten my hopes up, seeing the statement in a piece that purported to be specifically about Scheme.
That sort of hotfix workflow isn't really a thing in Racket or Scheme in general. Changing the definition of a function doesn't update everything else that calls that function like it does in CL.
Maybe emacs lisp works that way?
You know, after some testing with a bunch of different scheme implementations, I take back what I said, at least for working in a REPL.
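For instance, a session along these lines (a sketch of the kind of test described: redefine `g` at the top level, then call `f` again):

```scheme
(define (g x) (+ x 1))
(define (f x) (g x))       ; f calls g through its top-level binding
(display (f 5)) (newline)  ; 6
(define (g x) (+ x 2))     ; redefine g
(display (f 5)) (newline)  ; 7 if f sees the new g, 6 if it captured the old one
```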
outputs 6 and 7 in every one I tried, not the 6 and 6 I expected.

Clojure allows for that, giving you neat hot-reload capabilities when working in ClojureScript. I believe Emacs Lisp works the same way, and allows for fairly fluid debugging sessions.
Universal hot reload is really a messy beast, though. For every "yeah, we can just reload this without re-init'ing the structure" there's another "actually, reloading causes weird state issues and you have to restart everything anyway".
I've found that hot reloading _specific, targeted things_ tends to get you closer to where you want. But even then... sometimes using the browser dev tools to experiment on the output will get you where you want faster than trying to hot reload ClojureScript but having to "reset" state over and over again or otherwise work around weirdness.
I think this flow works well in Emacs, though, because you're operating on an editor. So you can change things, press buttons, change things again, and keep a good mental model. Emacs Lisp functions tend to have very little state of their own as well (instead, the editor holds a bunch of exposed state).
Meanwhile React (for example) has _loads_ of hard-to-munge state, which means that swapping one component for another inline might be totally fine, or might crash things in a weird way, or might have no effect at all. Sometimes just a full page refresh will save you from having to think about all this.
I use it a lot for my one-man projects; it is really fantastic in that setting. I use SBCL exclusively; it is very fast and robust and has image-based development. I have my own versioning toolkit so I don't go insane.
It is obvious why it is not really used or recommended, as it really falls flat in a team setting, often even when only 2 people are involved. But fixing bugs live as they happen and then spitting out a new .exe for clients is still a lot faster than modern alternatives. Far more dangerous, too.
What makes you think it falls flat in a team setting? There are plenty of N-pizza-sized teams successfully using Lisp to this day, and you're probably aware of many teams that successfully used Lisp in the past, too. There's also the success of Clojure. What's required to have a well-functioning team is mostly programming-language independent; Lisp itself won't save a team lacking those properties any more than, say, Java would.
Python is not Lisp, but jumping into a Python REPL in a halfway-run program and easily poking at the internals is _very_ useful as a debugging tool, quickly getting you answers on some messier programs.
It's a shame that other scripting languages that theoretically have the capability to do this don't (looking at you, Node! Chrome dev tools are fine but way too futzy compared to `import pdb; pdb.set_trace()` and "just" using stdin).
I do also use Emacs, and with Emacs Lisp, `trace-function` means you can very quickly get call traces in your running instance without having to pull out a debugger and the like. Not that you can't trace functions with `gdb`, of course. But the lowered barrier to entry and the ability to do in-process debugging dynamically mean you have access to richer debugging tools from the outset.
In Ruby it used to be common to ssh into a box, attach to the console, edit files from the REPL, and rerun the code to see if your patch worked. I haven't touched it in years and I doubt many people do that anymore.
Yeah, not having an equivalent of pdb.set_trace() is what turned me off compiled languages, but with AI I'm not even sure anymore.
A common workflow is to run code to test some function in the REPL and then promote it to a test when you are ready, and this process has been the smoothest in lisps, especially since you can create your own test harness if you need to.
Fun fact: giving AI a REPL also reduces error rates so much that you can save up to half the tokens/time or more.
Not Lisp, but for those interested in editing programs that are running in production:
I read some Erlang article saying that hot swapping is not actually very useful in production for various reasons, and that a blue-green deployment is preferred instead. Can't find the link at the moment. This was close: https://learnyousomeerlang.com/relups
Compare with this comment: https://news.ycombinator.com/item?id=42405168 (hot swaps for small patches and bugfixes, hard restarts for changing data structures and the supervisor tree).
It's not that hot swapping isn't useful, it's just difficult to do well, and you need to write your code in a way that supports it. If you need zero downtime on a device that can't do a blue-green deployment, the BEAM has you covered. Most people just don't need that, so the extra hassle of constantly considering how to migrate data in flight isn't worth it.
Have had to do this live in a production MtG card management application. It worked well. The owner kept their MtG card money. Lisp saved the day.
It's common in Clojure as well as other Lisps. I was doing that exact thing earlier this week: modifying a running program in production, adding print calls to gather debugging information, then modifying the code to fix the bug, having the fix immediately go live, and verifying the correct behavior.
I also see this mentioned often and have wondered the same. I can sort of envision this working in a single-threaded application, but how would this work in, for example, a web application? If a problematic function needs to be debugged, can you pick which thread you're debugging? If not, do all incoming requests get blocked while you debug and step through stack frames?
Being paused in the debugger is per-thread. If the server's using a thread-per-request model, and you're stopped in the request, then other requests can proceed just fine. If some of those requests also trigger the debugger, they'll pause and have to wait, they won't interrupt your current debugging view. Extra care should be taken in any sort of production debugging, of course. (At a Java BigCo, production debugging was technically allowed but required multiple signoffs, the engineer wasn't the one in control but had to direct someone else, lots of barriers to prevent looking at arbitrary customer data, and of course still limited to what you can do with a standard JVM restarted in debug mode. (Mainly setting breakpoints and walking stack traces.))
But the nicest part is that once you connect to the production application, apart from network lag it's no different than if you were developing and debugging locally on similarly specced hardware: you have all the same tools. Many of the broader activities around "debugging" don't need to happen in a paused thread entered via an explicit breakpoint or error; they can happen in a separate thread entirely. You connect, then you can start inspecting (even modifying) any global state, define new variables, inspect objects, define new functions to test hypotheses, redefine existing functions... If you want all requests to pause until you're done, you can make it so. Or if you want to temporarily redirect all requests to some maintenance page, you can make that so instead.

A simple thing I like doing sometimes when developing locally (and I could do it on a production binary too) is to define some (namespaced) global variable and redefine a singly-dispatched method to set it to the self object (possibly conditionally); once I have it, I might redefine the method again with that bit commented out, just so I know it won't change underneath me. Alternatively I can (and sometimes do) set the variable where the object is created instead. Then I have a nice variable, independent of any stack frames, that I can inspect, pass to other method calls, change properties of, whatever, at my leisure, without really impacting the rest of the program's operation.

Another neat trick is being able to dynamically add/remove inherited mixin superclasses on a class; when you do that, it automatically affects all existing objects of that class as well. Mixin classes are characterized by having aspect-oriented methods associated with them: you can define custom :before, :after, or :around methods independent of the primary method that gets called for some object.
The nREPL is present even in newer dialects. It is as easy as installing the Calva VS Code extension for Clojure, or jacking in with CIDER. This makes it perfect for LLM interaction as well.
> Of course, to be completely fair about my toolkit, standard Scheme can sometimes lack the heavyweight, “batteries-included” ecosystem required for massive enterprise production compared to the JVM.
I was thinking the whole time, "this person would _love_ Clojure".
Kawa is a Scheme which runs on the JVM and is pretty great.
https://www.gnu.org/software/kawa/index.html
I am one of these people who cannot countenance a Lisp that doesn't have `syntax-case`.
Kawa is unfortunately a somewhat shoddy project: a lot of half-baked features / abstraction ideas (e.g. trying to support CL, for whatever reason), dubious tooling for a Java project (autotools), and unclean, inconsistent code formatting. It's missing some features expected in a real Scheme, like multi-shot continuations; someone wrote research about adding them as an MSc thesis, but due to the aforementioned shoddiness its integration upstream stalled and was never merged.
At some point I thought of forking it to cut out and polish the core, but then my attention got caught by GraalVM's Truffle framework as a plausibly better path for implementing Scheme in Java.
It's funny: I can definitely sympathize with wanting multi-shot continuations, but I can't think of many times where I've wanted them to solve a problem.
As a part-time Schemer, I also love Clojure and reach for it more often than Scheme these days.
> Actually, in my opinion, Scheme (and Lisp) allows you to express complex systems and problem domains in more simple terms than any other language can.
Short article. Worth reading. But all I swallowed was this one sentence.
It's the syntax. If you like semicolons, that's why you like Pascal-like languages.
For all practical purposes, the syntax of Lisp isn't just a cosmetic choice, though.
Lisp was meant to be written with M-expressions instead of S-expressions anyway.
For a brief period of time over 60 years ago, yes. :)
If you want a Lisp that basically has M-expressions, try Dylan. It even started with an S-expression syntax initially and then converted to infix.
M-expressions were never implemented and never used.
Actually, variations on M-expressions have been created many times in the Lisp world. (Look what you can do with macros!) So far, none of them has caught on. The latest attempt for Scheme is SRFI-266, which creates a very nice infix expression sublanguage. If I were working on a team, I would encourage them to use this, but I don't know if it has enough traction to become widespread.
Haskell's syntax comes from ISWIM, which was motivated quite a lot by m-expressions.
Except in Mathematica, which isn't formally a Lisp, but practically it's used like one a lot of the time.
It's not just the syntax. The entire language, and even the ecosystem in general, has relatively few atoms, which can be combined with a higher degree of freedom than the alternatives.
It has both upsides and downsides. The upsides mostly win for me.
(In Haskell) > just adding a simple print somewhere is not going to work without a refactor
Interesting. How do people cope with this in practice? Does it mean you can't really use log() statements for debugging?
You just add `trace`. It's not hard.
https://wiki.haskell.org/Debugging
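A minimal sketch of what that looks like: `Debug.Trace.trace` prints its message as a side effect and returns its second argument, so it drops into pure code without changing any types.

```haskell
import Debug.Trace (trace)

-- A pure function instrumented with trace; no refactor into IO needed.
square :: Int -> Int
square n = trace ("square called with " ++ show n) (n * n)

main :: IO ()
main = print (square 3 + square 4)  -- traces to stderr, then prints 25
```

(Because of laziness, the order of trace output can be surprising, which is one reason it is a debugging aid rather than a logging mechanism.)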
There is always `trace`.
https://wiki.haskell.org/Debugging#Printf_and_friends
Whoa you were faster than me!
You could wrap it in unsafePerformIO to make it return `()` again.
However, I've had very little use for printing when debugging. In Haskell you write small(ish) pure functions that you can test extensively with property-based testing. The types already help a lot as well.
So basically the only place where you deal with unexpected input is at the communication boundaries of the app, where you are in some form of IO already and printing is readily available.
That's fine for a library or locally run executable, but I've worked on distributed systems in Haskell and you really need logging in place to track what is going on.
Of course, you will have IO somewhere in an executable, where you can handle logging, so just separate the pure code from the IO and make sure you have good tests for the pure functions. Also, lint to catch partial functions and dangerous lazy ones (or use an alternative prelude).
Sure you want logging and tracing (in the RPC sense not Debug.Trace.trace).
Most of this can still be done from the IO layer, provided the pure functions collect enough error information to bubble up (e.g. the content and line/col of parser errors) that you don't need ad hoc print statements for debugging.
In practice this just doesn't happen because you've composed a bunch of pure functions with various branches within them.
You lose the ability to log "why" some effect is happening.
Eh, sure. But you can always collect/carry decisions in something like an Either. With arrows, or your own monadic binds, it's even possible to abstract this away from view.
If you know Lisp, just reach for Coalton instead of Haskell.
Coalton has some evolving to do before that, but it is good and flexible enough.
What evolution in particular do you think? The developers use it for commercial products in quantum computing and defense [1]. That doesn't mean it's done in some complete language ecosystem sense (which is discussed in [1], and one could argue Haskell also never feels "finished"), but it also doesn't seem like an unfinished hobby project. Given that it's embedded in Common Lisp, there's always a way to fill in the library gaps, sort of like how if a "native" library doesn't exist in Clojure, one can always reach for Java.
[1] From Toward Safe, Flexible, and Efficient Software in Common Lisp at the European Lisp Symposium, "[Coalton] has been used for the past 5 or so years [...] first in quantum computing and now a serious defense application." https://youtu.be/xuSrsjqJN4M&t=9m14s
I am an avid SBCL and Coalton user (and a sponsor of both when I can be) and never said it was not a great thing; comparing it to Haskell is, outside the theoretical type-system roots, just a bit early, type-system-wise.
I agree with you, and you wrote an excellent promotional comment for Coalton and CL; keep doing that, please. I have said many times here before that I did not like my time away from CL, and Coalton makes it even better.
Seems a bit similar to 'Why I prefer Scheme to Haskell' (<https://news.ycombinator.com/item?id=3816385>, 2012). Almost plagiarized, even, but that may just be a coincidence.
Because they're elegant. Haskell is a conceptual and syntax mess.
Compared to Lisp? OK, fine. Syntax doesn't get simpler than Lisp's. But compared to JavaScript? C++? C#? Haskell is top tier when it comes to syntactic and conceptual elegance. The biggest problem is tooling, I would say.
I could not agree less. People used to call Python “executable pseudocode” - in that spirit, Haskell is executable pseudo-math. If you’ve done enough higher math that a professor’s whiteboard notation feels natural to you, then Haskell might feel like a reasonable approximation of that style. Otherwise: it’s line noise.
(I write Haskell professionally)
I don't think "Haskell: more elegant than JavaScript and C++" would make a good promotional motto.
That's like bragging about being prettier than Danny Trejo.
Haskell is very elegant and pretty. It's hard to describe what "pretty" means when it comes to programming languages, but IMO Go is ugly, Rust is good, and Haskell is the best.
For me one of the best things about Haskell is syntax and how clean it is.
I guess beauty is in the eye of the beholder. I've always liked Haskell and OCaml syntax.
I don't believe monads are a "heavy-handed abstraction", or that they're what prevents people from prototyping in Haskell.
What really prevents people from writing in Haskell at a reasonable speed is the poor language design. Programming languages are supposed to aid in reading by emphasizing structure. It's important to emphasize that a particular group of "words" constitutes a function call, or a variable definition, or a type definition -- whatever the language has to offer.
Haskell is a word salad. Every line you read, you have to read multiple times, every time trying to guess the structure from the disconnected acronyms. It belongs to the "buffalo buffalo buffalo buffalo" gimmick family. This is a huge roadblock on the way to prototyping as well as any other activity that implies the ability to read code quickly. And then it's also spiced by the most bizarre indentation rules invented by men.
This is not at all a problem with, e.g., SML or Erlang, even though they are roughly in the same category of languages.
Haskell would've been a much better language if it had made its syntax more systematic: disallowed syntactic extensions such as user-invented infix operators and the overloading of literals (heavens, why???), and required parentheses around function arguments, both at definition and at application sites. The execution model is great, the type system is great... but the surface, the front door to all these nice things the language has, is just amateur-level nonsense.
* * *
As for the upsides of using languages from the Lisp family for practical problems... I don't find (syntax-rules ...) all that exciting. I understand it was an attempt to constrain the freedom given by Common Lisp macros, and I don't think it worked. It's clumsy and annoying to deal with. The very first time I tried to use it, I ran into its limitations, and that felt completely unjustified. To prototype, you want freedom of movement, not some pedantry that stands in your way and demands you work around it.
The absolute selling point, however, is SWANK. Instead of editing the source code, you are editing the running program itself, which you can interact with at points of your choosing. I don't know of any modern language that offers this kind of experience. As recently as the 80s, though, this approach to interacting with computers was common. At school, we had terminals with some variety of BASIC, and it worked just like that: you type the program and it instantly shows the effect of your changes. Then there was also Forth, which worked in a similar way: it felt like you were "talking" to the computer in a very organized and structured way, but in real time.
Most mainstream languages today sprouted from the idea of batch jobs, where the programmer isn't at the keyboard when the program runs. They come with the need to anticipate, far in advance, every minor mistake that could have been easily detected and fixed during an interactive session.
Whenever I think about writing in C, or Rust, or Haskell, I imagine being tasked with going to the grocery blindfolded: I'd need to memorize the number of steps, the turns, predict the traffic, have canned strategies for what to do when potatoes go on sale... I deeply regret that programming evolved using this evolution path, and our idea of what it means to program is, mostly, the skill of guessing the impossible to predict future, instead of learning to react to the events as they unfold.
Your criticism of Haskell is entirely subjective. There are lots of people, myself included, that like and prefer Haskell's syntax.
This is not what "subjective" means. You can't argue something is subjective because many people don't agree with an opinion.
When someone argues subjectivity (in a negative sense), they need to show that the opinion does not rely on facts, rather it's based on... nothing (feelings).
I offered a very easy way to numerically assess the negative impact of the poor design choices made by Haskell's designers. It's not about what I "feel" about the language: in Java, you write a three-word program and you usually get a unique interpretation. In Haskell, you write a three-word program and you get nine possible interpretations. It's impossible for a human to examine nine interpretations simultaneously and figure out which of them are valid and might fit the context. So reading a Haskell program takes longer and requires more effort than reading a Java program.
Of course, Haskell programmers find ways to adapt to their misfortune. They try to avoid pathological cases (e.g. writing four-word programs, let alone five!), they memorize a lot of acronyms and non-typographical symbols that they later use to prune the search for a program's possible meaning, and they invent conventions on top of the bare language design that constrain the search space of possible programs to make their task easier.
It's absolutely possible that after layers of conventions and a long time spent memorizing various acronyms and symbols, Haskell programmers catch up to the speed of programmers in other languages: after all, the superficial difficulties with the language might seem like a small price to pay for access to the riches that lie beyond the surface. The language's grammar rules alone cannot account for the entire performance of the programmers who choose to write in it.
This situation is very similar to the "universal" (claimed, but not in practice) mathematical language, which is extremely difficult to read, write, edit, typeset... yet the tradition of using it prevails and the overwhelming majority of mathematicians use, and prefer using the "universal" mathematical language even though much saner alternatives exist.
There aren't a lot of Haskell programmers, so "lots" is maybe an exaggeration.
I see OP's point. Haskell feels (or felt; I admit I haven't been keeping up for the last 15 years) needlessly obtuse sometimes, like how people love to invent new infix operators all the time.
> Haskell is a word salad. Every line you read, you have to read multiple times, every time trying to guess the structure from the disconnected acronyms. This is a huge roadblock on the way to prototyping as well as any other activity that implies the ability to read code quickly.
I couldn't disagree more. Yes, there is more upfront work understanding Haskell code. But it's very dense. Once you understand the patterns, you can read it much quicker. Just like map/filter/fold are harder to understand than a for-loop, but once you do, you can immediately see what kind of iteration is applied. The for-loop can do all kinds of crazy index manipulation that you always have to digest from scratch.
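A small Haskell sketch of the contrast (the function and names are my own, not from the comment above):

```haskell
-- Once the combinators are familiar, the shape of the computation is
-- visible at a glance: filter the evens, square each, sum the results.
sumOfEvenSquares :: [Int] -> Int
sumOfEvenSquares = sum . map (^ 2) . filter even
```

The equivalent indexed loop would have to be re-derived by the reader each time, which is the density trade-off being described.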
> And then it's also spiced by the most bizarre indentation rules invented by men.
Again, quite surprised by this criticism. The rule is extremely simple: inner expressions must be indented more. You're free to decide by how much. That's why there are many "styles" out there. Maybe that's what you mean by bizarre. But it's not like the language is forcing weird constraints on you. If anything, the constraints are too lax. Any other language with non-mandatory indentation allows that as well. In general, I really don't understand why more languages don't do mandatory indentation. You only need curly braces and semicolons if you want the option to write a whole if/else/while/... statement on one line. But nobody does that.
> inner expressions must be indented more
Not to support the parent comment, which I disagree with, but if you use multi-line let-bindings, those require that you indent not just more than the previous line, but exactly as much as the first token after the `let` keyword on the previous line. It's a very strange rule, all the more surprising because it's inconsistent with the rest of the language. It is totally avoidable if you, like I think most experienced Haskellers do, just prefer `where`, but people more familiar with procedural code usually lean into using `let` everywhere because it feels more familiar.
I think the strange indentation used to be required in more places - I vaguely remember running into it a lot more when I started with Haskell 20 years ago, but that was also just when I was new to the language. These days I just keep ‘let’ to a bare minimum, so it doesn’t bother me. One thing that made Elm frustrating was that it disallowed ‘where’ clauses, forcing you to deal with this weird edge case all the time.
So you want to line the equals signs up or similar?
No, the issue is that if the first binding is on the same line as the `let`, every subsequent binding is required to line up exactly with that first binding, rather than merely being indented. I think it used to be the case that it had to be indented past the `=` or the `let` even if it wasn't on the same line. Note also that `in` has to be indented past `someValue`, but doesn't need to be indented as far as `let`.
So, it is possible to land on sane indentation, but the parser is much pickier than, e.g., Python's off-side rule, so it takes some trial and error for new users to find it, and it can be frustrating if you're just temporarily modifying an expression to quickly try something out.

I honestly think it would be less surprising if the parser just disallowed writing the first binding on the same line as the `let` entirely, treating it only as a block, but some people (bewilderingly) do seem to prefer to write their code with the excessive indentation (I'd imagine with editor support, rather than manually maintaining the spacing).
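A minimal sketch of the rule being described (the functions and binding names are my own):

```haskell
-- With the first binding on the same line as `let`, every later
-- binding must start in exactly the same column as the first one:
f :: Int -> Int
f x = let a = x + 1
          b = x * 2   -- must align with `a`, not merely be indented
      in a + b

-- Putting `let` on its own line avoids the deep indentation:
g :: Int -> Int
g x =
  let a = x + 1
      b = x * 2
  in a + b
```

Both definitions compute the same thing; only the layout differs.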
I feel like you are describing that the parser is too lenient rather than too picky. It could just require you to always put `let` and `in` on their own lines, in which case the indentation makes sense, I think. It's only when trying to keep more stuff on the same line that the details of Haskell's indentation rules come into play.
> I couldn't disagree more
[proceeds to agree on all points]
Not even sure what to tell you... Have more introspection?
> It's important to emphasize that a particular group of "words" constitutes a function call, or a variable definition, or a type definition -- whatever the language has to offer.
Syntax highlighting? Please take a look at https://play.haskell.org/
I am completely baffled by this comment. Are you missing the parenthesized function calls by any chance? If so then I can relate a bit.
No, it's not syntax highlighting.
For background: my first time in college, I was studying typography. An integral part of this trade is figuring out what is easier for people to read by answering questions such as: what is the best line length, what number of columns per page is best, what number of ascenders per font face is best, considering letter frequencies and coincidences, and so on.
It also comes with the editing part, as in the trade of taking a manuscript (a text intended to be published) and making sure that the text meets certain reader expectations in terms of consistency, clarity, and structure. This obviously includes the use of punctuation, but it's more about the language structure: things like adjective order or anaphora usage, etc.
Programming languages can be judged using the same rules, because, at the end of the day, we read them and need to interpret them. People have particular strengths and weaknesses when it comes to reading: we can remember an anaphora's anchor for only so long, we can hold only so many "variables" in fast-to-access memory, we can only do so many levels of adverb-phrase nesting, and so on.
Haskell was designed by someone completely oblivious to human abilities to read. It's very demanding and straining when it comes to extracting structure from text, in the same way that, in English, you'd struggle to extract structure from a so-called "garden path" sentence, because it's intentionally obfuscated. I don't believe Haskell is intentionally obfuscated; instead, I attribute the poor performance to a lack of awareness on the part of the author.
To convey the same point by means of example: Haskell is almost uniquely bad in that, given a program like `A B C`, the programmer can't tell if the program is actually A(B, C), or B(A, C), or C(A, B), or A(B(C)), or A(C(B)), or (A(B))(C), or (B(C))(A), or (B(A))(C), or (C(B))(A).

There's absolutely no reason a language should offer these kinds of puzzles, especially in the very large quantity that Haskell does. Removing this "feature" would make the language a lot easier to work with.
In Haskell it's only ever one of (A(B))(C) or (B(A))(C), and you can tell which based on which characters B is made up of. If B starts with one of !#$%&*+./<=>?@\^|-~` it's the second situation, otherwise it's the first.[0] All functions are unary in Haskell, so A(B, C), B(A, C) and C(A, B) can never actually happen. The cases where it looks like A(B(C)), etc. are happening are actually cases of (B(A))(C); e.g. f $ g is a (B(A))(C) case where B = $. So the basic syntax of Haskell is actually very simple and consistent, but due to lazy evaluation the functions can affect control flow much more than in other languages.
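A sketch of the two shapes, with throwaway names of my own (`quot` and `$` are standard Prelude functions):

```haskell
-- `quot 10 3`: the middle token is not symbolic, so this parses as
-- (A(B))(C), i.e. (quot 10) applied to 3.
exApply :: Int
exApply = quot 10 3

-- `negate $ 7`: the middle token starts with a symbol character, so
-- this parses as (B(A))(C), i.e. ($) negate 7.
exInfix :: Int
exInfix = negate $ 7
```

The same three-token shape, disambiguated purely by the lexical class of the middle token.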
0: OK, there are some additional non-ASCII Unicode symbols, but everything but string literals should be kept ASCII IMO.
> the programmer can't tell if the program is actually
What do you mean, "can't tell"? If I see `A(B)(C)` in Python, how do I know which of your 9 it means? Well, I'm a Python programmer, so I know that it means the function A applied to B, which returns a function that gets applied to C. If you're a Haskell programmer, you know that `A B C` means the same thing.

I grant you that it is odd to those who are unfamiliar, and it took me quite a while to get used to it, but it's much better to write that way in Haskell when writing programs that use higher-order functions.
Mmm. I think I understand where you are coming from. You can write incomprehensible code in Haskell very easily, and I agree that some people tend to write Haskell in a way that is easy when writing but very hard during reading.
But that is a choice. I prefer not using complex function compositions and lenses for this reason, and I split complex expressions into a bunch of let bindings, etc.
So you also can write very readable code in Haskell.
> (syntax-rules ...) The very first time I tried to use it, I ran into its limitations
syntax-case is the general purpose construct to use. syntax-rules is a restricted, easy-things-should-be-easy construct.
https://www.scheme.com/tspl2d/syntax.html
You don't need syntax-case to do advanced things, though. Alex Shinn's match.scm uses all the dirty syntax-rules tricks.
It is pretty awful to write things like that.
It's just not good because you need to work around its limitations, whatever its purpose is. Not good for prototyping because it's the red tape you need to cut to get work done. Red tape isn't, in general, a bad thing, but when it comes to prototyping it is.
I think most people misunderstand syntax-rules. It was not meant as the macro system for Scheme. It was meant as the template macro system everyone could agree on, while leaving the more powerful low-level macro systems to the implementations: syntax-case, explicit/implicit renaming, syntactic closures, or what have you.
Agree. It got the ball rolling.
From your last paragraph, I am curious which languages / paradigms you advocate for. Sorry it wasn't clear to me except that you like SWANK, which I'm not familiar with.
I wish there was some sort of a single metric that would allow measuring languages against each other and thus determining the best one. Unfortunately, there are multiple variables and the relationship between the variables is unclear. But, going totally with my gut feeling, some examples of good languages (in terms of ease of reading) include:
* Prolog (and, by extension, Erlang).
* Pascal.
* Java 5 and earlier (and Go, as it's almost Java's twin).
These languages somehow manage to hit the sweet spot of enough system and enough diversity, with few unexpected syntax constructs (e.g. Pascal and Java have the "dangling else" problem, but it's manageable compared to the problems introduced by optional statement delimiters in Go or JavaScript, for example). In every case, a programmer must program defensively against these sorts of language "pathologies".
To give some examples of questionable or outright bad design decisions:
* In Common Lisp (and Scheme as well as a number of similar languages) there's a problem with identifying the open parenthesis that will be closed by typing the closing parenthesis. Programmers must invent tools and techniques to manage this problem.
* In C++, there's a laughable (or at least there was, for a long time) rookie "whoopsie" when it comes to ">>" in templates vs. the infix operator. And the "solution" offered by the language designers makes you think they were just... lazy (add a space).
Here are also examples of some (perhaps, accidentally) good decisions:
* Kebab-case in many languages of the Lisp family. In Latin script, the position of the hyphen in the middle of the lower-case letter is a better choice than, e.g., the underscore (which is reputed to be "not a typographic character"). Same reason why, e.g., in traditional Hebrew, hyphens are at the height of a capital letter (Hebrew doesn't have lower-case letters, and the shape of the letters is better suited to hyphens at the top rather than the middle).
* Clojure as well as Racket (afaik, deliberately) introduced more kinds of parenthesis-like delimiters to make it easier to guess which expression is being terminated by the delimiter currently being typed.
* * *
Note that this is a "superficial" metric, because languages are also valuable for concepts they are able to express both in terms of program logic as well as program application to the hardware it manages; the ability to process, modify, generate, analyze the language automatically; the ability to constrain the language to a desired subset of all available operations... Incorporating all of these into a single metric seems like mission impossible :)
Try Clojure with CIDER/nREPL (roughly similar to SLIME/SWANK).
>And then it's also spiced by the most bizarre indentation rules
Are you mixing tabs and spaces? Maybe an example here would help.
>overloading of literals (heaven, why???)
No, this is important, so that default strings don't have to be something crummy. Even C++ got on this bandwagon.
>and requiring parenthesis around function arguments both for definition and for application.
??? Again, an example would be helpful. Usually the complaint with Haskell is that people don't use enough parentheses.
>The execution model is great
...I thought lazy execution was widely agreed to be the worst part of Haskell.
> Are you mixing tabs and spaces? Maybe an example here would help.
This is not what "rules" means. Rules aren't about what I do. Rules are about what the language treats as legal or illegal. I don't write in Haskell at all because I don't like it and have no use for it, but Haskell's rules don't change because of that; they are still mind-bogglingly complex when it comes to telling the programmer whether the next line is indented the right amount or not. None of that complexity is necessary, and it could've been totally avoided if the language used statement delimiters.
> No, this is important, so that default strings don't to have to be something crummy.
My argument is that to get a little accidental convenience you sacrificed a huge amount of routine convenience. The mental load of having to distrust a string when you see it is just not worth the accidental convenience of writing a prepared statement and making it appear as if it was a string. In other words, you are the guy who traded a donkey for three beans, but the beans didn't sprout into a huge ladder that took you to the giant's castle. You just made a very watery soup and that was that.
> Again, an example would be helpful.
Look up the example I gave in the adjacent reply.
> I thought lazy execution was widely agreed to be the worst part of Haskell.
It's good because it's unique and, when it fits the purpose, it's useful for that particular purpose and nigh irreplaceable, because it is unique. It's worth having for the sake of research, to understand how languages can be designed and what tools or techniques can be discovered on this path. This is said from the perspective that Haskell is not the end product, but rather research attempting to study how languages can work and what concepts they can develop.
> if the language used statement delimiters
I mean, it does. Whitespace-sensitive syntax is entirely opt-in: you choose whether to omit delimiters. Here's an explicit delimiter example:
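Something along these lines (a minimal sketch of my own; the original snippet didn't make it into this thread) compiles with no layout sensitivity at all:

```haskell
-- Explicit braces and semicolons opt out of the layout rule entirely,
-- so this can be indented (or not) however you like:
f :: Int -> Int
f x = let { a = x + 1; b = x * 2 } in a + b

main :: IO ()
main = do { print (f 2); print (f 3) }
```

The layout rule merely inserts these braces and semicolons for you when you leave them out.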
I learned Scheme before Haskell and as much as I enjoyed the experience, I still wouldn't reach for Haskell first. It's pretty much limited to my xmonad configuration.
I have written a very large codebase in Scheme (gambit) and in the end I really, really, wanted a type system to catch bugs.
Can you say more about the system? A lifetime ago I was really excited about gambit (and bigloo) but I never had the chance to work with them beyond messing around here and there after work.
Jank looks promising if you want a typed Lisp. It’s essentially native Clojure without the JVM: https://jank-lang.org/
In case you're into machine learning, I'm also building something similar - a tensor-first, native Clojure-like ML framework.
There's also Crunch Scheme (from the creator of Chicken): https://wiki.call-cc.org/eggref/6/crunch
That's why I switched to Common Lisp, its type system isn't perfect but it works well enough for my needs (especially with the occasional (describe 'sycamore:tree-insert) in the REPL).
https://github.com/carp-lang/Carp might be of interest. It's a statically typed lisp.
I get where you're coming from but I talked to a few folks working in large Haskell codebases and I'm not sure I would make that trade.
Yeah, it's genuinely a case of "software hard."
> Remember that later, just adding a simple print somewhere is not going to work without refactor (welcome to the IO monad).
This hits home for me. Print statements are essential to the way I code. I use them to debug, to examine variables and parameters, to trace execution flow. I rarely use debuggers.
For caveman debugging, if I'm not sitting in a monad, I usually reach for something like Debug.Trace. Typically that's in Idris or my own language, but I see that haskell has it too.
For my own language, I have the syntax highlighting set to put the `trace` keyword in red, so I can easily clean up.
Debug.Trace.trace and friends can help here. This can work from pure code.
But lazy evaluation does imply that trace calls only execute when the traced value is actually forced. Still, it is quite helpful during debugging.
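For instance, a minimal sketch (`addLogged` is my own name; `Debug.Trace.trace` is the real library function):

```haskell
import Debug.Trace (trace)

-- trace returns its second argument, printing the message as a side
-- effect when -- and only when -- that value is forced. No IO monad
-- refactor needed.
addLogged :: Int -> Int -> Int
addLogged x y = trace ("adding " ++ show x ++ " and " ++ show y) (x + y)
```

Under laziness, if the result of `addLogged` is never demanded, the message never prints, which is the caveat mentioned above.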
Picking a language is a matter of selecting the best fit given the constraints of the project.
For stuff I like to work on on my own time, "how I like to work" is a major forcing constraint. So it's no surprise that I have a large number of Lisp projects sitting around. Maybe it's because I'm auDHD, but the ability to evolve a program through active dialogue with the machine (and not of the sloppotron variety) just fits better with how I think through a problem and its solution.
Haskell is godsend when using LLMs though.
Irrespective of the language, I love the REPL. For this reason, among others, I just cannot get into Agentic Coding. It seems like a step back to batch processing.
Many people here are saying AIs work great with the REPL.
I think it does, but Agentic does not.
Or that the AI is the REPL?
I tried some ML language once; it's difficult even to write a basic factorial example, which in Scheme I could do iteratively and recursively with ease.
Either with S9 Scheme for quick fun (it has Unix sockets and ncurses :D ) or Chicken Scheme for completeness (R5RS/R7RS-small + modules), I always have fun with both.
Oh, and well, Forth, too, but more like a puzzle (although it shines at teaching you that you can do a lot with fixed point). Hint: write helpers for rationals (a/b, where a is an integer and b a non-zero integer) and complex numbers by placing two items on the stack for each case (for the rational helpers you need four: a/b [+-*/] c/d).
You can have a look at qcomplex.tcl (either online or installed) as an example of how it can work even under JimTCL itself by just sourcing that file. Magic: complex numbers under jimsh thanks to the algebraic properties. So you can implement the same for yourself in some Forths, even under EForth for Muxleq. Useless? It depends; under an ESP32 it can be damn fast, faster than MicroPython.
I don't see how:

Racket:

OCaml:

Whenever someone complains about not being able to use a slightly different syntax, I assume they just don't have any neuroplasticity anymore.
I think syntax matches with our brains or not. I think anyone is capable of learning any syntax. The question is whether they want to. At some level, programming is art.
Haskell:
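The snippet itself is missing above; a typical Haskell factorial, in both the recursive and one-liner styles, would be a sketch like this (my own reconstruction):

```haskell
-- Recursive, with pattern matching:
fact :: Integer -> Integer
fact 0 = 1
fact n = n * fact (n - 1)

-- Or the one-liner:
fact' :: Integer -> Integer
fact' n = product [1 .. n]
```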
Obligatory "The Evolution of a Haskell Programmer":
https://people.willamette.edu/~fruehr/haskell/evolution.html
That's an odd way to rewrite most of the SICP exercises for Scheme.
From my limited SML/NJ experience, I think for something as simple as factorial it is nearly the same. Both have TCO, recursion, inner functions, pattern matching, and those good things. You can structure the code the same way.
Even as simple as `fact n = product [1..n]`, which is a working Haskell implementation?

I mean, in Scheme it is longer to write. I enjoy Lisps and use Emacs for everything, but Haskell can be as terse, or even more terse. (Which is not always a good thing.)
I think in terms of token count it comes out to about the same; and Lisp admits fewer kinds of tokens.
> I tried some ML language once, it's difficult even to write a basic factorial example
What do you mean? It's one of the first things taught in any tutorial for the ML family or Haskell.