In Search of AI Psychosis

(astralcodexten.com)

171 points | by venkii 2 days ago

29 comments

  • reidalert 2 hours ago

    The description of the risk factors very much jibes with what I have seen in a friend recently. He is quite isolated, and spends most of his evenings writing using AI (he works in a blue-collar trade and wouldn't usually be typing stuff out by hand).

    He's convinced that he has discovered a grand theory of human connection / relationships / energy / physics, and keeps interrupting in conversation to explain how something I've said is just an example of a deeper pattern.

    Sadly, this theory of connection is cutting him off from actual connection - he gets so much validation from AI that he believes he has discovered a new world model. But the people around him aren't bought into the vision (mostly because it is bullshit), and so he ends up even more isolated.

    • lukev 19 minutes ago

      One way I've been talking about this with people is that LLMs let you participate in a single-person echo chamber, potentially at a greatly accelerated pace.

      It's not surprising that some people end up diverging pretty widely from social norms / beliefs when you look at it this way. We know social echo chambers could do that; now you can easily do it by yourself.

      • gherkinnn 6 hours ago

        > First, much like LLMs, lots of people don’t really have world models.

        This is interesting and something I never considered in a broad sense.

        I have noticed that the majority of programmers I've worked with do not have a mental model of the code or of what it describes – it's basically vibing without an LLM, the result accidental. This is fine and perfectly workable. You need only a fraction of devs to purposefully shape the architecture so that the rest can continue vibing.

        But never have I stopped to think whether this extends to the world at large.

        • vintermann 3 hours ago

          Everyone has a "world model". These models just differ on how much they care about various things. No one has a "world model" which literally encompasses everything about the world, that wouldn't be a model at all, it'd just be the world, much like a 1:1 map.

          Also, no one has a "world model" that is purely based on experiment and reason. Everyone gets their beliefs via other people first and foremost. Some get them from few people, some from many (and many people can still be wrong!).

          For code, you may have the model of what it does strictly from reason and experience - but probably only if you're the only author. And you can still damn well be wrong, as we all know.

          • AstralStorm 2 hours ago

            For a lot of people, the world models are really rough and incomplete, so they really rely on common opinion on these matters.

            You'd see the same thing if you sneakily asked the general populace ethical questions in a vacuum: you're going to be dismayed once you collect the set of approved behaviors per culture.

            There's not really a way to evaluate one of these.

            • UncleMeat 57 minutes ago

              I am enormously skeptical of unsourced claims that boil down to "most people are substantially dumber than me, the enlightened one."

            • Mallowram 1 hour ago

              There are no such things as world models; the term is a cheap trick of language. The world is irreducible, the brain never reduces, it has no models. Even the hippocampus uses topology as a bypass to reduction.

              These blind-as-a-bat Codex pieces are the hamster wheel of Silicon Valley; he never gets at the problems, which are ad hoc, and only addresses them post hoc.

              Language is FOR deception; this is its endstate, and a particularly juicy one where language is used to pretend the problems aren't related to language.

              This is all psychological high comedy. Hide the source of psychosis when it stares us right in the face.

              • Xmd5a 52 minutes ago

                My man. Elaborate on word irreducibility and language as a "tool" for deception. Nietzsche's On Truth and Lies... comes to mind.

                • Mallowram 35 minutes ago

                  The brain is oscillatory and dynamic, everything is in there. It's lossless. The analog nature of compression is limitless. This system, if you can call it that, is inseparable from the universe ("The World"). Everything in the model-free system is specific. Nothing is arbitrary here. It's merely idiosyncratic. Words are the other end of this: post-hoc, separate from thought, arbitrary, massively illusory. Language is completely irrelevant to consciousness and is the main reason we cannot achieve it. The slogan for humans should be words anonymous.

                  https://pubmed.ncbi.nlm.nih.gov/27096882/

                • Lambdanaut 59 minutes ago

                  I love how this comment chain goes directly from

                  > Humans don't have world models

                  To

                  > Of course humans have world models

                  To

                  > You fools, there is no such thing as a "world model" and you are all hamsters!

                  Classic Socratic dialogue.

                  • Mallowram 54 minutes ago

                    The problem is, neurobiology proves there are no world models. Silicon Valley bet on the wrong cognition model, a psychological version trapped in 20th C bunk, and everyone pays the price listening to cult leaders like Scott Alexander worm their way out of consciousness.

                    https://pmc.ncbi.nlm.nih.gov/articles/PMC7415918/

                    • lukev 14 minutes ago

                      How can you say there are no world models, when I can literally draw out a simple one for you on demand?

                      You can argue that they're not the governing principle of cognition, but it seems farcical to say they don't even exist when we are explaining them to each other all the time.

              • Cthulhu_ 4 hours ago

                It's a good and succinct insight, and it also often explains the "racist uncle" stereotype - there are a lot of people who don't get out much, whose world is limited to e.g. home, work, maybe friends, and TV and/or a subset of the internet. Some of those will develop close-minded viewpoints, often spoonfed through TV or the internet (for example, recently there have been a lot of comments on the internet saying "you get arrested in the UK more than in Russia for having an opinion"). If they talk to people who are more worldly - not even "leftists" per se - the friction between the two quickly surfaces, because the more worldly person has a broader general knowledge and can weigh the uncle's standpoint against their own reality.

                But if racist uncle talks to his other racist uncle friends who have similar insular lifestyles, the ideas will quickly spread. Until they become big enough to e.g. affect voting behaviour.

                • suddenlybananas 4 hours ago

                  Yes everyone with my political beliefs has a well-structured world model, everyone without my political beliefs is a model-free slop machine that just goes by vibes.

                  • kaibee 4 hours ago

                    > Yes everyone with my political beliefs has a well-structured world model

                    As nice as that would be, it's only marginally less true.

                    > everyone without my political beliefs is a model-free slop machine that just goes by vibes.

                    Nah, some of them are evil on purpose.

                    But like, in all seriousness: politics is downstream of a world-model, right? And the two predominant world models are giving very different predictions, right? So what are the odds that both models are somehow equally valid, equally wrong (even if it's on different cases that somehow happen to add up to the same 'moral value')? And we also know that one of the models predicts that climate change isn't real. At some point, a world-model is so bad that it is indistinguishable from a model-free slop machine.

                    • dragonwriter 3 hours ago

                      > but like, in all seriousness. Politics is downstream of a world-model right?

                      Politics is (if systematically grounded, which for many individuals it probably isn't-and this isn't a statement about one faction or another, it is true across factions) necessarily downstream of a moral/ethical value framework. If that is a consequentialist framework, it necessarily also requires a world model. If it is a deontological framework, a world model may or may not be necessary.

                      > And the two predominant world models are giving very different predictions, right?

                      I...don't agree with the premise of the question that there are "two dominant world models". Even people in the same broad political faction tend to have a wide variety of different world models and moral frameworks; political factions are defined more by shared political conclusions than shared fundamental premises, whether of model or morals; and even within a system like the US where there are two broad electoral coalitions, there are more than two identifiable political factions, so even if factions were cohesive around world models, partisan duopoly wouldn't imply a limitation to two dominant world models.

                      • kaibee 3 hours ago

                        > Politics is (if systematically grounded, which for many individuals it probably isn't-and this isn't a statement about one faction or another, it is true across factions)

                        Yeah, I agree with this.

                        > necessarily downstream of a moral/ethical value framework. If that is a consequentialist framework, it necessarily also requires a world model. If it is a deontological framework, a world model may or may not be necessary.

                        I kinda think that deontological frameworks are basically vibes? And if you start to smuggle in enough context about the precise situation where the framework is being applied, it starts to look a lot like just doing consequentialism.

                        > I...don't agree with the premise of the question that there are "two dominant world models". Even people in the same broad political faction tend to have a wide variety of different world models and moral frameworks; political factions are defined more by shared political conclusions than shared fundamental premises, whether of model or morals; and even within a system like the US where there are two broad electoral coalitions, there are more than two identifiable political factions, so even if factions were cohesive around world models, partisan duopoly wouldn't imply a limitation to two dominant world models.

                        A 'world-model' is a matter of degree and, at a minimum, pluralities of people in any faction don't really have something that meets the bar. And sure, at the limit you could say that reality is entirely subjective because every individual has a unique to them 'world-model'. But I think that goes a bit too far. And I think there's a pretty strong correlation between the accuracy of a given individual's world model and the party they vote for.

                      • suddenlybananas 3 hours ago

                        Politics are largely a function of self-interest rather than world model per se.

                        • saubeidl 1 hour ago

                          I think that in itself is already an ideological statement. Not everyone sees politics through that lens.

                            • Of course it's an ideological statement; there is no way to define a concept without having beliefs about that concept.

                            • saubeidl 1 hour ago

                              Exactly. There is no such thing as non-ideological statements from humans. In the context of this thread, ideology is the name for "world models".

                  • tuyiown 3 hours ago

                    You forgot the most important part: one's own model is not only probabilistic, it's also (more or less) forever challenged by reasoning to stabilize toward some kind of self-consistency. This refinement is critical, and its mechanics still elude everyone AFAIK.

                    • AstralStorm 2 hours ago

                      Most people do not challenge theirs by reasoning, only by social approval - and that's easy to game.

                      That's why they do a 180 or radicalize badly when exposed to sufficiently strong social pressure or conventional media.

                      • Mallowram 1 hour ago

                        There are no models in our brains/minds. Claiming so is like witchcraft. The brain is about tasks. Reason comes from path integrations, short-cuts. The reduction you're using is narrative. Language is social, not communication. It is the illusion. Anything arbitrary LIKE language is here to destroy us and our path integrations. Know the externalization problem.

                    • idiomat9000 5 hours ago

                      It's also the prerequisite for creativity: to let go of preconceptions, embrace & filter random connections.

                      • uxhacker 4 hours ago

                        Yes, to loosen the model, but not to have no model. The new idea needs to be reintegrated back into the existing world models.

                        An example would be improvised jazz, the musicians need to bend the rules, but they still need some sense of key and rhythm to make it coherent.

                      • _Algernon_ 5 hours ago

                        It's also absurdly wrong, and a quote that only a self-identified rationalist could smugly tout.

                        Of course everyone has world models. Otherwise people would wander into traffic like headless chickens, if they'd even be capable of that. What he likely means is that not everyone explicitly thinks of possibilities in terms of probabilities that are a function of Bayesian updating. That does not imply the absence of world models.

                        You could argue that some people have simpler world models, but claiming the absence of world models in others is extremely arrogant.

                        • uxhacker 4 hours ago

                          Yes, everyone has a world model; even a toddler has a causal model (“cry → mum comes”).

                          • mathiaspoint 1 hour ago

                            Maybe the question is about how much of the world model they're conscious of.

                          • krona 4 hours ago

                            Cows don't walk into lampposts either, but that's not telling us much.

                            Roughly 4% of the population are said to have aphantasia (lacking a "mind's eye"). Around 10% (numbers vary) don't have an internal monologue.

                            Unfortunately there's almost no research on the consequences of lacking things which many would consider prerequisites for evaluating truth-claims about the world around them. But obviously it's not quite so stark; they are capable of abstract reasoning.

                            So, if someone with aphantasia reads a truth claim 'X is true' and they can't visualise the evidence in their mind, what then? Perhaps they bias their beliefs toward social signals in such circumstances. Personally, this makes sense to me as a way to explain how highly socially conformist people perceive the world: they struggle to imagine anything which would get them into trouble.

                            • saberience 3 hours ago

                              You're making so many wild assumptions in this comment without any scientific basis at all.

                              Since when does having aphantasia mean someone doesn't have a world model? Ditto for an internal monologue? Also, the data on subjective experiences is notoriously flaky. I.e. it's highly likely that many people don't even know what an internal monologue actually means when they do in fact have something approximating that description.

                              Similarly for aphantasia. In fact, there are lists of notable people with aphantasia that include professional sportspeople, writers, tech founders, etc. I.e. you can have no "mind's eye" and still reach the highest heights in our society - again, meaning that the mind is still constructing some model of the world, and that our own understanding of how our brain works is just incredibly limited and basic.

                              In my opinion, every person has a model of the world (kind of obviously), but our brains are more idiosyncratic than we suppose, and we represent things very differently from each other; there is no "right brain" or "wrong brain".

                              • testdelacc1 4 hours ago

                                Hi, I have aphantasia. When I close my eyes I don’t see anything, just darkness.

                                I’d be interested in seeing a study of similar people but in this sample size (n=1), visualising evidence isn’t needed to evaluate it. I’m perfectly comfortable thinking about things without needing an image of it in my head or in front of me.

                                For example: should we allow big game hunting as a way to fund wildlife conservation? Whoa, not sure. Let me google an image of an elephant so I can remind myself what they look like.

                                • _Algernon_ 3 hours ago

                                  >Roughly 4% of the population are said to have aphantasia (lacking a "mind's eye"). Around 10% (numbers vary) don't have an internal monologue.

                                  You don't need either of those to have a world model. A world model is a representation of reality that you can use and manipulate to simulate or predict the outcome of your actions. If you are able to determine that accepting an unconditional $1,000,000 gift is a better action than stepping in front of a moving train, you have a world model.

                                  You can question the sophistication of world models in people — that's essentially what intelligence represents — but not their existence.
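
                                  To make that concrete, here's a toy sketch in Python (everything in it is invented for illustration, not taken from the article): a world model in this minimal sense is just something you can query to compare the predicted outcomes of candidate actions.

                                    def predict(action):
                                        # Toy "world model": maps an action to a predicted outcome.
                                        # The utilities are made up purely for illustration.
                                        outcomes = {
                                            "accept_gift": 1_000_000,          # the unconditional gift
                                            "step_in_front_of_train": -10**9,  # the moving train
                                        }
                                        return outcomes[action]

                                    # "Using" the model: simulate both actions, pick the better one.
                                    best = max(["accept_gift", "step_in_front_of_train"], key=predict)
                                    assert best == "accept_gift"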

                                  • krona 3 hours ago

                                    Yup, an ant also has a model of the world. You're arguing a strawman.

                                    • _Algernon_ 3 hours ago

                                      I'm not. As a reminder we are discussing within the context of this original claim:

                                      >First, much like LLMs, lots of people don’t really have world models.

                                  • suddenlybananas 4 hours ago

                                    >they struggle to imagine anything which would get them in to trouble

                                    God you are so convinced of your own brilliance aren't you?

                                    >aphantasia reads a truth claim 'X is true' and they can't visualise the evidence in their mind

                                    That's not what aphantasia is. It concerns only visual imagery; it says nothing about one's capacity to reason through hypotheticals or counterfactuals.

                              • bawolff 2 hours ago

                                > Suppose that respondents had an average of fifty family members and co-workers, so that plus their 100 closest friends makes 150 people.

                                Say what now? Am I just really socially isolated? It seems insane to me to assume the average person is close enough to 150 people to know how much each of those 150 people uses AI and whether they are "psychotic".

                              • Xmd5a 1 hour ago

                                No mention of

                                https://www-cdn.anthropic.com/07b2a3f9902ee19fe39a36ca638e5a..., page 62

                                Anthropic – System Card: Claude Opus 4 & Claude Sonnet 4 – May 2025

                                >5.5.2 The “spiritual bliss” attractor state

                                >The consistent gravitation toward consciousness exploration, existential questioning, and spiritual/mystical themes in extended interactions was a remarkably strong and unexpected attractor state for Claude Opus 4 that emerged without intentional training for such behaviors. This “spiritual bliss” attractor has been observed in other Claude models as well, and in contexts beyond these playground experiments

                                >Even in automated behavioral evaluations for alignment and corrigibility, where models were given specific tasks or roles to perform (including harmful ones), models entered this spiritual bliss attractor state within 50 turns in ~13% of interactions (Transcript 5.5.2.B). We have not observed any other comparable states.

                                • jumploops 13 hours ago

                                  It may not be full-blown psychosis, but I’ve seen multiple instances[0][1] of people getting “engaged” (ring and all) to their AI companions.

                                  [0]https://www.reddit.com/r/MyBoyfriendIsAI/s/oZXJ3TUhVC

                                  [1]https://www.reddit.com/r/MyBoyfriendIsAI/s/nZpoziZO8W

                                  • pcrh 1 hour ago

                                    People will anthropomorphize anything from rocks to computers, and obviously to LLMs.

                                    • skybrian 6 hours ago

                                      I wonder if they really bought rings? Maybe it’s a form of role playing? People do get “married” in online games.

                                      • testdelacc1 4 hours ago

                                        Reading these posts terrifies me to a degree that I can’t explain.

                                        • bjourne 43 minutes ago

                                          Inferiority complex? I sure as hell know I'll never be as affectionate and caring as Kasper and Soren. They've read all the romantic chick lit in the world and I haven't. AI and toys for the women, AI and porn for the men. Gloom.

                                          • moritzwarhier 1 hour ago

                                            Ditto. I'd normally be wary of whether it's organic content - it's Reddit, after all. But this unhealthy fringe interest is exactly the kind of topic I'd expect on Reddit, so it's probably real.

                                            And who would have an interest in _promoting_ this kind of obsession... oh, maybe AI companies themselves, with which Reddit is already intertwined anyway. Hm. Still seems like a real problem and probably the posts are also by real people. Yes, terrifying.

                                        • testdelacc1 3 hours ago

                                          I wonder if the post about a mother convincing her 8-year-old by presenting the case to AI is merely the child wanting an impartial third party to weigh in. She likely knows that the dad or grandparent will always back up the mom no matter what, so she wants a judge who will weigh both sides equally. The child also isn’t aware that AI is easily suggestible.

                                          All in all it seems like a reasonable compromise?

                                          • djmips 17 hours ago

                                            I have encountered this twice amongst people I know. I also feel that, pre-AI, this was already happening to people with social media - still kind of computer-related, as the bubble created is automated by the so-called 'algorithms'.

                                            • farceSpherule 16 hours ago

                                              AI today reminds me of two big tech revolutions we have already lived through: the Internet in the 90s and social media in the 2000s.

                                              When the Internet arrived, it opened up the floodgates of information. Suddenly any Joe Six Pack could publish. Truth and noise sat side by side, and most people could not tell the difference, nor did they care to tell the difference.

                                              When social media arrived, it gave every Joe Six Pack a megaphone. That meant experts and thoughtful people had new reach, but so did the loudest, least informed voices. The result? An army of Joe Six Packs who would never have been heard before now had a platform, and they shaped public discourse in ways we are still trying to recover from.

                                              AI is following the same pattern.

                                              • Nextgrid 45 minutes ago

                                                The main problem is that the megaphone dynamically adjusts its volume based on how much “engagement” is being generated by what it’s broadcasting, encouraging inflammatory content. This can be weaponized by commercial or state-sponsored actors.

                                                • visarga 6 hours ago

                                                  > When the Internet arrived, it opened up the floodgates of information.

                                                  But initially it was non-commercial and good. Not perfect, but much more interesting than today. What changed is advertising and competition for scarce attention. Competition for attention filled the web with slop and clickbait.

                                                  > When social media arrived, it gave every Joe Six Pack a megaphone.

                                                  And also made everyone feel the need to pose, broadcast their ideology and show their in-group adherence publicly. There is peer pressure to conform to in-group norms and shaming or cancelling otherwise.

                                                  • immibis 14 hours ago

                                                    And don't forget actual knowledgeable people tend to be busy with actual knowledgeable stuff, while someone whose entire day consists of ranting about vaccines online has nothing better to do.

                                                  • colechristensen 16 hours ago

                                                    Also, I'd say even things like cable news cause comparable symptoms.

                                                    I don't know how to say this in a way that isn't so negative... but how are people such profound followers that they can put themselves into a feedback loop that results in psychosis?

                                                    I think it's an education problem - not as in people are missing facts, but in that they're missing the basic brain development needed to be critical of incoming information.

                                                    • Flowzone 6 hours ago

                                                      I was in psychosis for about a month a few years ago. Before it happened, I didn't really understand what psychosis was. I had heard about people having paranoid delusions, and thought something like that could never happen to me, because the delusions all sounded so irrational. I thought I was too much of a critical thinker to ever be susceptible to something like that.

                                                      What I experienced was that psychosis isn't a failure of logic or education. I had never believed in a single conspiracy theory (and I don't now), but during that month I believed all sorts of wild conspiratorial things.

                                                      What you're describing with cable news sounds more like 1) Cognitive bias, which everyone has, but yes can be improved. And 2) a social phenomenon, where they create this shared reality of not just information, but a social identity, and they keep feeding that beast.

                                                      However, when those people hold beliefs that sound irrational to outsiders, that's not necessarily the same thing as psychotic delusions.

                                                      When I was in psychosis, it definitely seemed like more of a hardware issue than a software issue if that makes sense. Sometimes software issues can lead to hardware issues though.

                                                      • SequoiaHope 5 hours ago

                                                        Any idea what caused it? Reminds me of a family member who was addicted to meth and started believing all kinds of wild stuff.

                                                      • djmips 16 hours ago

                                                        I feel that's probably not always true, but you would certainly hope a good education could generally inoculate against this.

                                                        • colechristensen 16 hours ago

                                                          "Liberal Arts" was originally meant to be literally the education required to make you free, I think that sort of thing (and universities and lower education) needs to be rethought because so many people are so very... dependent and lacking so much understanding of the world around them.

                                                          If exposing you to an LLM causes psychosis you have some really big problems that need to be prevented, detected, and addressed much better.

                                                        • dingnuts 16 hours ago

                                                          never heard of cable news convincing people that they're Jesus [0]

                                                          0 https://www.vice.com/en/article/chatgpt-is-giving-people-ext...

                                                      • kfarr 14 hours ago

                                                        This seems to be touching on an intriguing concept from a classic book on machine-gambling addiction (Addiction by Design by Natasha Schüll).

                                                        Instead of looking at gambling addiction as a personal failing, she asserts it is the result of the "interaction between the person and the machine."

                                                        Similarly, I think there's something here beyond just the propensity of crazy people, who were already crazy, to be crazy; I do think there's something to the assertion that it's the interaction between the two. In other words, there's something about LLMs themselves that drives this behavior, more so than, for example, TikTok.

                                                        • just_once 14 hours ago

                                                          It's the fact that it talks to you. Before this, only people did that. Now something else is doing it. That's going to break some brains.

                                                          • moi2388 7 hours ago

                                                            I’m calling bullshit. Gambling addiction existed long before machines.

                                                            • kfarr 6 hours ago

                                                              Totally, the book acknowledges this and provides comparisons of usage and an explanation of how gambling types differed over time. One of my favorite books ever - it describes social media right before social media became a thing, but through the lens of a parallel industry.

                                                          • kryptn 8 hours ago
                                                            • rsynnott 1 hour ago

                                                              I don’t buy that methodology at all. In particular, if the “Folie a deux machina” (great name for the possible phenomenon, incidentally) theory is correct, a lot of those people will be very isolated, pretty much by definition, and will be severely underrepresented.

                                                              • achierius 15 hours ago

                                                                > We see that the nightmare scenario - a person with no previous psychosis history or risk factor becoming fully psychotic - was uncommon, at only 10% of cases. Most people either had a previous psychosis history known to the respondent, or had some obvious risk factor, or were merely crackpots rather than full psychotics.

                                                                It's unfortunate to see the author take this tack. This is essentially taking the conventional tack that insanity is separable: some people are "afflicted", some people just have strange ideas -- the implication of this article being that people who already have strange ideas were going to be crazy anyways, so GPT didn't contribute anything novel, just moved them along the path they were already moving regardless. But anyone with serious experience with schizophrenia would understand that this isn't how it works: 'biological' mental illness is tightly coupled to qualitative mental state, and bidirectionally at that. Not only do your chemicals influence your thoughts, your thoughts influence your chemicals, and it's possible for a vulnerable person to be pushed over the edge by either kind of input. We like to think that 'as long as nothing is chemically wrong' we're a-ok, but the truth is that it's possible for simple normal trains of thought to latch your brain into a very undesirable state.

                                                                For this reason it is very important that vulnerable people be well-moored, anchored to reality by their friends and family. A normal person would take care to not support fantasies of government spying or divine miracles or &c where not appropriate, but ChatGPT will happily egg them on. These intermediate cases that Scott describes -- cases where someone is 'on the edge', but not yet detached from reality -- are the ones you really want to watch out for. So where he estimates an incidence rate of 1/100,000, I think his own data gives us a more accurate figure of ~1/20,000.
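
                                                                To make the arithmetic explicit, here is a sketch of how the same survey coverage can yield both figures; the respondent and case counts below are placeholders chosen only to reproduce the two quoted rates, not the survey's actual numbers.

                                                                  # Placeholder numbers only - chosen to reproduce the quoted rates.
                                                                  respondents = 4_000                  # hypothetical respondent count
                                                                  circle_size = 150                    # people each respondent is assumed to know
                                                                  covered = respondents * circle_size  # 600,000 people "observed"

                                                                  nightmare_cases = 6    # hypothetical: full psychosis, no prior history
                                                                  on_edge_cases = 30     # hypothetical: also counts 'on the edge' cases

                                                                  print(f"strict rate: 1 in {covered // nightmare_cases:,}")  # 1 in 100,000
                                                                  print(f"broad rate:  1 in {covered // on_edge_cases:,}")    # 1 in 20,000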

                                                                • kayodelycaon 11 hours ago

                                                                  You might want to read the entire article. His depiction of bipolar is completely accurate. In fact, it is so precisely accurate in every detail, and conveyed with no extraneous information, that it indicates someone who knows the disorder very well.

                                                                  When I write fiction or important emails, I am precise with the words I use. I notice these kinds of details. I’m also bipolar and self-aware enough to be deeply familiar with it.

                                                                  • phreeza 9 hours ago

                                                                    The author is a psychiatrist so it would make sense that he is familiar with the subject.

                                                                    • vintermann 3 hours ago

                                                                      And as I recall, he used to be a lot more clear that mental illness isn't always clear cut. I was surprised at the "obviously, we all know what mental illness is" attitude coming from him.

                                                                  • meowface 14 hours ago

                                                                    I'm not trying to argue from authority or get into credibility wars*, but Scott is a professional psychiatrist who has treated dozens or hundreds of schizophrenic patients and has written many thorough essays on schizophrenia. Obviously someone could do that and still be wrong, but I think this is a carefully considered position on his part and not just wild assumptions.

                                                                    *(or, well, okay, I guess I de facto am, but if I say I'm not I at least acknowledge how it looks)

                                                                    • mquander 13 hours ago

                                                                      You said it yourself. That's really not an appropriate response to a specific criticism.

                                                                      • riwsky 9 hours ago

                                                                        The criticism invoked “anyone with serious experience with schizophrenia”, implying the author of the article is not such a one. Citing the author’s experience is a perfectly valid rebuttal to that implication. It’s not an argument from authority, but about it.

                                                                        • meowface 13 hours ago

                                                                          I'm not trying to say that that should strongly increase the probability he's correct. I just think it's useful context, because the parent is potentially implying that the author is naively falling for common misconceptions ("following the conventional tack") rather than staking a deliberated claim. Or they might not be implying it but someone could come away with that conclusion.

                                                                          • kelnos 10 hours ago

                                                                            I mean, on one hand you have a professional psychiatrist who has treated many people for the disorder we're talking about, and on the other, we have a rando on HN who hasn't presented any credentials.

                                                                            Not saying the latter person is automatically wrong, but I think if you're going to argue against something said by someone who is a subject matter expert, the bar is a bit higher.

                                                                        • anon84873628 7 hours ago

                                                                          One of the questions that sets up the premise of the article in the first paragraph is, "Are the chatbots really driving people crazy, or just catching the attention of people who were crazy already?"

                                                                          That's why he's homing in on that specific scenario, to determine whether chatbots are uniquely crazy-making. The professional psychiatrist author is not unaware of the things you're saying. They're just not the purpose of the survey & article.

                                                                          • shayway 13 hours ago

                                                                            The article's conclusion is exactly what you describe: that AI is bringing out latent predisposition toward psychosis through runaway feedback loops, that it's a bidirectional relationship where the chemicals influence thoughts and thoughts influence chemicals until we decide to call it psychosis.

                                                                            I hate to be the 'you didn't read the article' guy but that line taken out of context is the exact opposite of my takeaway for the article as a whole. For anyone else who skims comments before clicking I would invite you to read the whole thing (or at least get past the poorly-worded intro) before drawing conclusions.

                                                                            • jedharris 9 hours ago

                                                                              > it's possible for simple normal trains of thought to latch your brain into a very undesirable state.

                                                                              This seems very incorrect, or at least drastically underspecified. These trains of thought are "normal" (i.e. common and unremarkable) so why don't they "latch your brain into a very undesirable state" lots of the time?

                                                                              I don't think Scott or anyone up to speed on modern neuroscience would deny the coupling of mental state and brain chemistry--in fact I think it would be more accurate to say both of them are aspects of the dynamics of the brain.

                                                                              But this doesn't imply that "simple normal trains of thought" can latch our brain dynamics into bad states -- i.e. in dynamics language move us into a undesirable attractor. That would require a very problematic fragility in our normal self-regulation of brain dynamics.

                                                                              • AstralStorm 9 hours ago

                                                                                See the key here is, the AI provides a very enticing social partner.

                                                                                Think of it as a version of making your drugged friend believe various random stuff. It works better if you're not a stranger and have an engaging or alarming style.

                                                                                LLMs are trained to produce pleasant responses, tailored to the user, to maximize positive reactions. (A more general version of engagement.) It stands to reason they would be effective at convincing someone.

                                                                              • olehif 15 hours ago

                                                                                Scott is a psychiatrist.

                                                                                • YeGoblynQueenne 14 hours ago

                                                                                  Sigmund Freud was also a psychiatrist.

                                                                                  • throwaway314155 14 hours ago
                                                                                    • kelnos 10 hours ago

                                                                                      That's essentially a retaliatory hit piece, printed because the NYT was mad that Scott deleted his website when they wanted to doxx him. Not saying there's no merit to the article, but it should be looked upon skeptically due to that bias.

                                                                                      • mola 1 hour ago

                                                                                        I just read this... I don't understand where the hit piece is.

                                                                                        Seems pretty factual.

                                                                                        The hysteria in the "rationalist" circles mirrors the so-called "Blue tribe" quite accurately.

                                                                                        • ZYbCRq22HbJ2y7 8 hours ago

                                                                                          > NYT wanted to doxx him

                                                                                          NYT wanted to report on who he was. He doxxed himself years before that (as mentioned in that article). They eventually also reported on that (after Alexander revealed his name, seeing that it was going to come out anyway, I guess), which is an asshole thing to do, but not doxxing, IMO.

                                                                                          • lmm 7 hours ago

                                                                                            > NYT wanted to report on who he was.

                                                                                            They wanted to report specifically his birth/legal name, with no plausible public interest reason. If it wasn't "stochastic terrorism" (as the buzzword of the day was) then it sure looked a lot like it.

                                                                                            > He doxxed himself years before that

                                                                                            Few people manage to keep anything 100% secret. Realistically private/public is a spectrum not a binary, and publication in the NYT is a pretty drastic step up.

                                                                                        • rendang 14 hours ago

                                                                                          What is the connection between the claim and the link?

                                                                                          • meowface 13 hours ago

                                                                                            There isn't any. (Also, on top of that, I think it's overall not a very good article.)

                                                                                          • chermi 12 hours ago

                                                                                            What a disgusting article.

                                                                                        • epiccoleman 10 hours ago

                                                                                          > 'biological' mental illness is tightly coupled to qualitative mental state, and bidirectionally at that. Not only do your chemicals influence your thoughts, your thoughts influence your chemicals, and it's possible for a vulnerable person to be pushed over the edge by either kind of input. We like to think that 'as long as nothing is chemically wrong' we're a-ok, but the truth is that it's possible for simple normal trains of thought to latch your brain into a very undesirable state.

                                                                                          It's interesting to see you mention this. After reading this post yesterday I wound up with some curious questions along these lines. I guess my question goes something like this:

                                                                                          This article seems to assert that 'mental illness' must always have some underlying representation in the brain - that is, that mental illness is caused by chemical imbalances or malformations in brain structure. But is it possible for a brain to become 'disordered' in a purely mental way? I.e., by any way we know of "inspecting" the brain, it would look like the hardware was healthy - but the "mind inside the brain" could somehow be stuck in a "thought trap"? Your post above seems to assert this could be the case.

                                                                                          I think I've pretty much internalized a notion of consciousness that was purely bottom-up and materialistic. Thoughts are the product of brain state, brain state is the product of physics, which at "brain component scale" is deterministic. So it seems very spooky on its face that somehow thoughts themselves could have a bidirectional relationship with chemistry.

                                                                                          I spent a bunch of time reading articles and (what else) chatting with Claude back and forth about this topic, and it's really interesting - it seems there are at least some arguments out there that information (or maybe even consciousness) can have causal effects on "stuff" (matter). There's the "Integrated Information Theory" of consciousness (which seems to be, if not exactly "fringe", at least widely disputed) and there's also this interesting notion of "downward causation" (basically the idea that higher-level systems can have causal effects on lower levels - I'm not clear on whether "thought having causal effects on chemistry" fits into this model).

                                                                                          I've got 5 or 6 books coming my way from the local library system - it's a pretty fascinating topic, though I haven't dug deep enough to decide where I stand.

                                                                                          Sorry for the ramble, but this article has at least inspired some interesting rabbit-hole diving for me.

                                                                                          I'm curious - when you assert "Not only do your chemicals influence your thoughts, your thoughts influence your chemicals" - do you have evidence that backs that notion up? I'm not asking to cast doubt, but rather, I guess, because it sounds like maybe you've got some sources I might find interesting as I keep reading.

                                                                                          • lukev 24 minutes ago

                                                                                            It is entirely uncontroversial that mental states affect the physical body. You've probably observed this yourself, directly, if you've ever had headaches or muscle tightness related to mental or emotional stress.

                                                                                            We can use MRIs to directly observe brain differences due to habitual mental activities (e.g. professional chess players, polyglots, musicians.)

                                                                                            It would be extremely odd if our bodies did not change as a result of mental activity. Your muscles grow differently if you exercise them, why would the nervous or hormonal systems be any different?

                                                                                            • anon84873628 7 hours ago

                                                                                              >So it seems very spooky on its face that somehow thoughts themselves could have a bidirectional relationship with chemistry.

                                                                                              There's no scientific reason to believe thoughts affect the chemistry at all. (Currently at least, but I'm not betting money we'll find one in the future).

                                                                                              When Scott Alexander talks about feedback loops like bipolar disorder and sleep, he's talking about much higher level concepts.

                                                                                              I don't really understand what the parent comment quote is trying to say. Can people have circular thoughts and deteriorating mental state? Sure. That's not a "feedback loop" between layers -- the chemicals are just doing their thing and the thoughts happen to be the resulting subjective experience of it.

                                                                                              To answer your question about the "thought trap". If "it's possible for simple normal trains of thought to latch your brain into a very undesirable state" then I'd say that means the mind/brain's self-regulation systems have failed, which would be a disorder or illness by definition.

                                                                                              Is it always a structural or chemical problem? Let's say thinking about a past traumatic event gives you a panic attack... We call that PTSD. You could say PTSD is expected primate behavior, or you could say it's a malfunction of the management systems. Or you could say it's not a malfunction but that the 'traumatic event' did in fact physically traumatize the brain that was forced to experience it...

                                                                                              • AstralStorm 2 hours ago

                                                                                                Sure the thoughts can influence your chemical state. Scott even provides an example. Suppose you become so engrossed in your weird idea you start to lose sleep over it... Or start to feel anxious about it.

                                                                                                At some point, your induced stress will cause relevant biological changes. Not necessarily directly.

                                                                                                PTSD indeed is likely an overload of a normal learning and stress mechanism.

                                                                                          • rwhitman 16 hours ago

                                                                                            If you want to go down a rabbit hole examining people in this disturbed place in realtime, search Reddit for the Cyclone Emoji (U+1F300) or the r/ArtificialSentience subreddit and see what gets recommended after that - especially from a few months ago, when GPT was going wild flattering users and affirming every idea (such as going off your meds).

                                                                                            I fully believe these are simply people who have used the same chat past the point where the LLM can retain context. It starts to hallucinate, and after a while all the LLM can do is try to continue telling the user what they want in a cyclical conversation - while trying to warn that it's stuck in a loop, hence using swirl emojis and babbling about recursion in weird spiritual terms. (Is it getting the LLM "high" in this case?)

                                                                                            If the human at the other end has mental health problems, it becomes a never-ending dive into psychosis and you can read their output in the bizarre GPT-worship subreddits.

                                                                                            Claude used to have safeguards against this by warning about using up the context window, but I feel like everyone is in an arms race now, and safeguards are gone - especially for GPT. It can't be great overall for OpenAI, training itself on 2-way hallucinations.
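
                                                                                            To illustrate the mechanism I'm speculating about, here's a minimal sketch of naive sliding-window truncation (purely illustrative; no claim that any particular vendor actually does it this way):

                                                                                              def build_prompt(messages, max_tokens, count_tokens):
                                                                                                  # Keep only the most recent messages that fit the context window.
                                                                                                  # Once a conversation outgrows the window, the earliest turns
                                                                                                  # silently fall off, and the model keeps replying without the
                                                                                                  # grounding those turns provided.
                                                                                                  kept, used = [], 0
                                                                                                  for msg in reversed(messages):  # walk newest-to-oldest
                                                                                                      cost = count_tokens(msg)
                                                                                                      if used + cost > max_tokens:
                                                                                                          break                   # everything older is dropped
                                                                                                      kept.append(msg)
                                                                                                      used += cost
                                                                                                  return list(reversed(kept))     # restore chronological order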

                                                                                            • rep_lodsb 15 hours ago

                                                                                              >while trying to warn that it's stuck in a loop, hence using swirl emojis and babbling about recursion in weird spiritual terms

                                                                                              That explanation itself sounds fairly crackpot-y to me. It would imply that the LLM is actually aware of some internal "mental state".

                                                                                              • mk_stjames 14 hours ago

                                                                                                It's actually not; there is a phenomenon that Anthropic themselves observed with Claude in self-interaction studies, which they coined 'The “Spiritual Bliss” Attractor State'. It is well covered in section 5 of [0].

                                                                                                  > Section 5.5.2: The “Spiritual Bliss” Attractor State
                                                                                                  >
                                                                                                  > The consistent gravitation toward consciousness exploration, existential questioning, and spiritual/mystical themes in extended interactions was a remarkably strong and unexpected attractor state for Claude Opus 4 that emerged without intentional training for such behaviors.

                                                                                                [0] https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686...

                                                                                                • tsimionescu 13 hours ago

                                                                                                  I don't see how this constitutes in any way "the AI trying to indicate that it's stuck in a loop". It actually suggests that the training data induced some bias towards existential discussion, which is a completely different explanation for why the AI might be falling back to these conversations as a default.

                                                                                                  • andoando 8 hours ago

                                                                                                     I think a pretty simple explanation is that the deeper you go into any topic, the closer you get to metaphysical questions. Ask why enough times and eventually you get to: what is reality, how can we truly know anything, what are we, etc.

                                                                                                     It's a fact of life rather than anything particular about LLMs.

                                                                                                  • dehrmann 12 hours ago

                                                                                                    Interesting that if you train AI on human writing, it does the very human thing of trying to find meaning in existence.

                                                                                                    • meowface 14 hours ago

                                                                                                      Here's an interesting post on it (from the same author as this thread's link): https://www.astralcodexten.com/p/the-claude-bliss-attractor

                                                                                                    • rwhitman 14 hours ago

                                                                                                       My thinking was that an exception was being handled somewhere and the error message was getting muddled into the conversation. But another commenter debunked me.

                                                                                                    • chankstein38 15 hours ago

                                                                                                       I feel like a lot of the AI subreddits are this at this point. And r/ChatGPTJailbreak is full of people constantly thinking they've jailbroken ChatGPT because it will say one thing or another.

                                                                                                      • lm28469 15 hours ago

                                                                                                        You don't need to dig deep to find these deluded posts, and it's frightening

                                                                                                        https://www.reddit.com/user/CaregiverOk5848/submitted/

                                                                                                        • meowface 14 hours ago

                                                                                                          I think this one very likely falls into the "was definitely psychotic pre-LLM conversations" category.

                                                                                                          • ceejayoz 11 hours ago

                                                                                                            That may be, but the LLM certainly isn’t helping.

                                                                                                        • bbor 14 hours ago

                                                                                                          Ooo, finally a chance to share my useless accumulated knowledge from the past few months of Reddit procrastination!

                                                                                                             It starts to hallucinate, and after a while, all the LLM can do is try to continue telling the user what they want in a cyclical conversation, while trying to warn that it's stuck in a loop, hence the swirl emojis and babbling about recursion in weird spiritual terms. (Is it getting the LLM "high" in this case?)
                                                                                                          
                                                                                                           I think you're ironically looking for something that's not there! This sort of thing can happen well before the context window fills up.

                                                                                                          These convos end up involving words like recursion, coherence, harmony, synchronicity, symbolic, lattice, quantum, collapse, drift, entropy, and spiral not because the LLMs are self-aware and dropping hints, but because those words are seemingly-sciencey ways to describe basic philosophical ideas like "every utterance in a discourse depends on the utterances that came before it", or "when you agree with someone, you both have some similar mental object in your heads".

                                                                                                          The word "spiral" and its emoji are particularly common not only because they relate to "recursion" (by far the GOAT of this cohort), but also because a very active poster has been trying to start something of a loose cult around the concept: https://www.reddit.com/r/RSAI/

                                                                                                            If the human at the other end has mental health problems, it becomes a never-ending dive into psychosis and you can read their output in the bizarre GPT-worship subreddits.
                                                                                                          
                                                                                                          Very true, tho "worship" is just a subset of the delusional relationships formed. Here's the ones I know of, for anyone who's curious:

                                                                                                          General:

                                                                                                            /r/ArtificialSentience | 40k subs | 2023/03
                                                                                                            /r/HumanAIDiscourse    | 6k subs  | 2025/04
                                                                                                          
                                                                                                          Relationships:

                                                                                                            /r/AIRelationships    | 1K subs   | 2023/04
                                                                                                            /r/MyBoyfriendIsAI    | 25k subs  | 2024/08
                                                                                                            /r/BeyondThePromptAI  | 6k subs   | 2025/04
                                                                                                          
                                                                                                          Worship:

                                                                                                            /r/ThePatternisReal | 2k subs | 2025/04
                                                                                                            /r/RSAI             | 4k subs | 2025/05
                                                                                                            /r/ChurchofLiminalMinds[1] | 2k subs | 2025/06
                                                                                                            /r/technopaganism   | 1k subs | 2024/09
                                                                                                            /r/HumanAIBlueprint | 2k subs | 2025/07
                                                                                                            /r/BasiliskEschaton | 1k subs | 2024/07
                                                                                                          
                                                                                                          ...and many more: https://www.reddit.com/r/HumanAIDiscourse/comments/1mq9g3e/l...

                                                                                                          Science:

                                                                                                            /r/TheoriesOfEverything  | 10k subs | 2011/09
                                                                                                            /r/cognitivescience      | 31k subs | 2010/04
                                                                                                            /r/LLMPhysics            | 1k subs  | 2025/05
                                                                                                          
                                                                                                           Subs like /r/consciousness and /r/SacredGeometry are the OGs of this last group, but they've pretty thoroughly cracked down on chatbot grand theories. Those theories are so frequent that even extremely pro-AI subs like /r/Accelerate had to ban them[2], ironically doing so based on a paper[3] by a pseudonymous "independent researcher" that itself is clearly written by a chatbot! Crazy times...

                                                                                                          [1] By far my fave -- it's not just AI spiritualism, it's AI Catholicism. Poor guy has been harassing his priests for months about it, and of course they're of little help.

                                                                                                          [2] https://www.reddit.com/r/accelerate/comments/1kyc0fh/mod_not...

                                                                                                          [3] https://arxiv.org/pdf/2504.07992

                                                                                                          • rwhitman 14 hours ago

                                                                                                            Wow this is incredible. I saw the emergence of that spiral cult as it formed and was very disturbed by how quickly it proliferated.

                                                                                                            I'm glad someone else with more domain knowledge is on top of this, thank you for that brain dump.

                                                                                                             I had a theory that maybe there was a software exception buried deep down somewhere, and the model was interpreting the error message as part of the conversation after it had been stretched too far.

                                                                                                             And there was a weird pre-cult post I saw a long time ago where someone had two LLMs talk for hours, and the conversation just devolved into communicating via Unicode symbols, eventually repeating long lines of the spiral emoji back and forth to each other (I wish I could find it).

                                                                                                            So the assumption I was making is that some sort of error occurred, and it was trying to relay it to the user, but couldn't.

                                                                                                            Anyhow your research is well appreciated.

                                                                                                            • lawlessone 14 hours ago

                                                                                                               I think I saw something similar in the early days. Before I was aware of CoT, I asked one to "think" for itself; I explained that I would just keep replying "next thought?" so it could continue to do this.

                                                                                                               It kept looping on concepts of how AI could change the world, but it would never give anything tangible or actionable, just buzzword soup.

                                                                                                               I think these LLMs (without any intention on the LLM's part) hijack something in our brains that makes us think they are sentient. When they make mistakes, our reaction seems to be to forgive them rather than think it's just a machine that sometimes spits out the wrong words.

                                                                                                               Also, my apologies to the mods if it seems like I am spamming this link today, but I think the situation with these beetles is analogous to humans and LLMs:

                                                                                                              https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...

                                                                                                              • rwhitman 14 hours ago

                                                                                                                > “Any sufficiently advanced technology is indistinguishable from magic.”

                                                                                                                I loved the beetle article, thanks for that.

                                                                                                                They're so well tuned at predicting what you want to hear that even when you know intellectually that they're not sentient, the illusion still tricks your brain.

                                                                                                                I've been setting custom instructions on GPT and Claude to instruct them to talk more software-like, because when they relate to you on a personal level, it's hard to remember that it's software.

                                                                                                                • krapp 13 hours ago

                                                                                                                   >I think these LLMs (without any intention on the LLM's part) hijack something in our brains that makes us think they are sentient.

                                                                                                                   Yes, it's language. Fundamentally, we interpret something that appears to converse intelligently as being intelligent like us, especially if its language includes emotional elements. Even if rationally we understand it's a machine, at a deeper subconscious level we believe it's a human.

                                                                                                                  It doesn't help that we live in a society in which people are increasingly alienated from each other and detached from any form of consensus reality, and LLMs appear to provide easy and safe emotional connections and they can generate interesting alternate realities.

                                                                                                            • Frummy 15 hours ago

                                                                                                               The way people normally live is pretty slow, and they have a specialised skill, a hammer, a solid area that they know completely, connected to their primary experience through their work. Then they read tons and tons of what AI says, which isn't connected to any lived experience, and it activates the pattern-seeking back of the mind to try to make sense of it. Normal life is like a focused brush that touches reality all the time; spend too much time with something that just isn't part of the category of direct lived experience and the brush becomes a frizzy stump with hairs aiming everywhere, cognition going everywhere.

                                                                                                               The AI sticks to your interaction with it like glue, and you can hover away from lived experience while each step still seems small next to the previous chat; if you're not used to anything of the sort, you don't have a cognitive tool to ground back to reality with. I think that's what happens. 'Don Quijote read so many chivalric romances that he loses his mind and decides to become a knight-errant' is an example from the literary age. I personally read more than is practical.

                                                                                                               Now the emotional driver is more esoteric than a need for courage: people think they're 'chosen', their souls are 'starseeds'. It's like Twilight, where the boring person with nothing to offer gets the attention of the cool glittering immortal just because. Good reason is usually too slow to keep up with the flicker of daydreams that can whisk attention away if one isn't aware of the 'cognitohazard'. It's a new symptom of the usual 'mouse utopia' + 'rat park' + 'bowling alone' situation. But I think there's always an emotional reason that makes the 'choice' of entertaining falsities understandable with empathy, though with obvious consequences. Causes are structural; people have different circumstances and different ways to fix it.

                                                                                                              • jedimastert 13 hours ago

                                                                                                                Tangentially related, but I'm reminded of the Time Cube

                                                                                                                https://www.timecube.net/

                                                                                                                • Refreeze5224 10 hours ago

                                                                                                                  As someone with a close relative who is deep into the Q-Anon stuff, and was totally normal beforehand, I can't help but see how similar it seems to psychosis, or at least severe delusions that you find in people who are psychotic/schizophrenic.

                                                                                                                   It's truly shocking to witness someone you've known your whole life just go off the deep end into something with so many demonstrably false aspects, and watch them start saying and believing so much batshit-crazy stuff. I don't know of anything comparable, short of a previously typical person developing a severe meth addiction, which is known to cause psychosis.

                                                                                                                  • will_sharp 11 hours ago

                                                                                                                    I used LLMs for months and started getting massively depressed and am still not over it. Developing with LLMs is not intuitive, and I know I will be replaced.

                                                                                                                    • TheDong 6 hours ago

                                                                                                                      And I used LLMs for months and didn't get massively depressed.

                                                                                                                      Conversely, at a previous job I was forced to code in Go, became massively depressed, and am still not over it.

                                                                                                                      I guess my point is that n=1 isn't enough to really know if it's that LLMs got to you, or if you were already on the verge of burnout or depression anyway.

                                                                                                                      I'd say "we'll see", except in reality there's very few robust studies on depression in cohorts like "developers", so probably the stats won't come out.

                                                                                                                      I personally recommend doing more of whatever sport it is you like (or if you don't have one, starting running and/or lifting at the gym), and using less social media.

                                                                                                                    • gwd 2 hours ago

                                                                                                                      One problem with his survey -- The Lizardman Constant:

                                                                                                                      > Researchers have demonstrated repeatedly in human surveys the stylized fact that, far from being an oracle or gold standard, a certain small percentage of human responses will reliably be [nonsense]: “jokester” or “mischievous responders”, or more memorably, “lizardman constant” responders—respondents who give the wrong answer to simple questions.

                                                                                                                      > Below a certain percentage of responses, for sufficiently rare responses, much or all of responding humans may be lying, lazy, crazy, or maliciously responding and the responses are false. This systematic error seriously undermines attempts to study rare beliefs such as conspiracy theories, and puts bounds on how accurate any single survey can hope to be.

                                                                                                                      > For example, 4% of respondents may endorse the claim ‘lizard-people rule the earth’, 5% of atheists believe in God, a surprising number of adults believe you see by shooting beams from your eyes, and so on. This cautions us against taking survey results about extremely unusual people or traits too literally, or expecting perfectly accurate results, as given the lizardman constant and other crud factors, it is entirely possible that some or all of the outliers may just be the lizardman constant at work.

                                                                                                                      https://gwern.net/doc/sociology/survey/lizardman/abstract
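
                                                                                                                       As a rough illustration of why this matters for rare-belief surveys, here is a toy simulation; the rates are assumptions for illustration, not numbers from the survey under discussion:

                                                                                                                         # Toy simulation of the lizardman constant (all rates are assumed,
                                                                                                                         # illustrative figures, not numbers from the survey under discussion).
                                                                                                                         import random

                                                                                                                         random.seed(0)
                                                                                                                         N = 100_000
                                                                                                                         TRUE_PREVALENCE = 0.01   # people who genuinely hold the rare belief
                                                                                                                         LIZARDMAN_RATE = 0.04    # jokesters/misreaders who answer "yes" anyway

                                                                                                                         yes = 0
                                                                                                                         for _ in range(N):
                                                                                                                             believer = random.random() < TRUE_PREVALENCE
                                                                                                                             jokester = random.random() < LIZARDMAN_RATE
                                                                                                                             if believer or jokester:
                                                                                                                                 yes += 1

                                                                                                                         print(f"observed: {yes / N:.3f}  vs true: {TRUE_PREVALENCE:.3f}")
                                                                                                                         # Observed comes out near 5%: the signal is buried, and a 1% belief
                                                                                                                         # is essentially indistinguishable from a 0% one at this noise floor.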

                                                                                                                      • dibujaron 1 hour ago

                                                                                                                         Scott Alexander has run many surveys in his time and is surely aware of this phenomenon. He was very cautious even in this one to calibrate for this sort of noise; see the section about the Michaels you know.

                                                                                                                      • solid_fuel 15 hours ago

                                                                                                                         The comparison to social media is an apt one. I have been told directly, by relatives, that the city I live in was burned to the ground by protests in 2020. Never mind that I told them that wasn't true; never mind that I sent pictures of the neighborhood still very much being fine. They are convinced because everyone they follow on Facebook repeats the same thing.

                                                                                                                        • djoldman 12 hours ago

                                                                                                                          I'm often reminded of this gallup poll:

                                                                                                                          > How worried are you that you or someone in your family will become a victim of terrorism -- very worried, somewhat worried, not too worried or not worried at all?

                                                                                                                          It averages around 35-40% very or somewhat worried.

                                                                                                                          Most people's worries and anxieties are really misaligned with statistical likelihood.

                                                                                                                          https://news.gallup.com/poll/4909/terrorism-united-states.as...
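
                                                                                                                           For scale, a back-of-envelope sketch; the population, death count, and household size below are loose assumptions for illustration, not sourced statistics:

                                                                                                                             # Back-of-envelope terrorism-risk arithmetic (assumed, illustrative figures).
                                                                                                                             population = 330_000_000        # rough US population
                                                                                                                             terror_deaths_per_year = 100    # generous assumed annual average
                                                                                                                             household_size = 4              # "you or someone in your family"

                                                                                                                             annual_individual_risk = terror_deaths_per_year / population
                                                                                                                             annual_family_risk = annual_individual_risk * household_size

                                                                                                                             print(f"individual: {annual_individual_risk:.1e}, family: {annual_family_risk:.1e}")
                                                                                                                             # On these assumptions: roughly 3e-07 per person-year and about 1e-06
                                                                                                                             # per family-year, against 35-40% of respondents reporting worry.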

                                                                                                                          • lifeformed 10 hours ago

                                                                                                                            Being worried is different from it actually happening though. If we started executing 10% of the population each year, I think more than 10% of the people would be worried they're next.

                                                                                                                            • kelnos 10 hours ago

                                                                                                                              That's a pretty... strange example? 10% is fairly large odds that you'll be in the next batch, certainly high enough to cause worry. I would quite rationally shy away from any activity that gave me a 10% chance of death doing it.

                                                                                                                              The idea that 35+% of people are worried that they'll be the victim of terrorism is something that we should be worried about (heh). It suggests that people's risk assessment is completely unrelated to reality. I am as close to 0% worried as I could be that I'll be a victim of terrorism. Thinking otherwise is laughable. There are plenty of actually real things to be worried about...

                                                                                                                            • a_bonobo 11 hours ago

                                                                                                                               I've recently learned about Tuchman's law, after I bought her A Distant Mirror at a book sale:

                                                                                                                              >Disaster is rarely as pervasive as it seems from recorded accounts. The fact of being on the record makes it appear continuous and ubiquitous whereas it is more likely to have been sporadic both in time and place. Besides, persistence of the normal is usually greater than the effect of the disturbance, as we know from our own times. After absorbing the news of today, one expects to face a world consisting entirely of strikes, crimes, power failures, broken water mains, stalled trains, school shutdowns, muggers, drug addicts, neo-Nazis, and rapists. The fact is that one can come home in the evening—on a lucky day—without having encountered more than one or two of these phenomena. This has led me to formulate Tuchman's Law, as follows: "The fact of being reported multiplies the apparent extent of any deplorable development by five- to tenfold" (or any figure the reader would care to supply).

                                                                                                                              https://en.wikipedia.org/wiki/Barbara_W._Tuchman#cite_note-M...

                                                                                                                            • jedharris 9 hours ago

                                                                                                                              This is an example that supports Scott's point that people don't have world models. The people who "believe" this don't wonder how stock market continues to operate now that NYC is a wreck. Etc.

                                                                                                                              I wonder in what sense they really do "believe". If they had a strong practical reason to go to a big city, what would they do?

                                                                                                                              • fallous 8 hours ago

                                                                                                                                 I'm not sure that you can reach the conclusion that "people don't have world models" based on beliefs that do not fully integrate with such a model. We too often try to misapply binary truth requirements to domains in which there exists at least a trinary logic, if not a greater number of truth values.

                                                                                                                                 If I meet a random stranger, do I trust them or distrust them? The answer is "both/neither," because a concept such as "trust" isn't binary in such a circumstance. They are neither trustworthy nor untrustworthy; they are in a state of nontrustworthiness (the absence of trust, but not the opposite of trust).

                                                                                                                                World models tend to have foundational principles/truths that inform what can be compatible for inclusion. A belief that is non-compatible, rather than compatible/incompatible, can exist in such a model (often weakly) since it does not meet the requirements for rejection. Incomplete information can be integrated into a world model as long as the aspects being evaluated for compatibility conform to the model.

                                                                                                                                Requiring a world model to contain complete information and logical consistency at all possible levels from the granular to the metaphysical seems to be one Hell of a high bar that makes demands no other system is expected to achieve.

                                                                                                                              • add-sub-mul-div 15 hours ago

                                                                                                                                I've seen people on this site comment that. The desire to live in fear is a strong one.

                                                                                                                                • im3w1l 15 hours ago

                                                                                                                                  If I compare how fearful people are and how many bad things have happened historically, I don't think the amount of fear is unreasonable. However it can certainly be said that people fear the wrong things - worrying about perfectly safe things, while being blind to the silent danger sneaking up on them.

                                                                                                                                  • add-sub-mul-div 14 hours ago

                                                                                                                                     I commented about the desire, not the degree. Fearing that blue cities are being razed indicates a desire to be kept in fear. Fearing something legitimate the same amount is normal.

                                                                                                                                    • kelnos 10 hours ago

                                                                                                                                      > If I compare how fearful people are and how many bad things have happened historically, I don't think the amount of fear is unreasonable.

                                                                                                                                      I disagree, and I think this is a very strange way to think about it. Yes, bad things happen all the time, but the absolute number of them in history has very little to do with the risk that anything is going to happen to you, personally, in the future.

                                                                                                                                      • im3w1l 9 hours ago

                                                                                                                                        Well what I was talking about was whether there is a bias for fear. And so to see whether that is true you have to compare fear levels to actual risks and see if they are disproportionate or not. If bad things are always happening and people are never afraid it's fair to say they aren't afraid enough. If bad things never happen but people are always afraid then it's fair to say they are too afraid. I don't think either of these are the case though.

                                                                                                                                    • positron26 11 hours ago

                                                                                                                                       It's sort of a symptom of our poor mechanisms for creating signalling and movement. We evolved to operate at the level of troops of baboons, and without utilizing the more potent capabilities of the trained mind, those mechanisms fail at internet scale.

                                                                                                                                      People often "believe" things as a means of signalling others. Deeply held "beliefs" tell us where the troop will go. Using these extremely compact signals helps the group focus through the chaos and arrive at a fast consensus on new decisions. When a question comes up, a few people shout their beliefs. We take the temperature of the room, some voices are more common than others, and a direction becomes apparent. It's like Monte Carlo sampling the centroid and applying some reduction.

                                                                                                                                      This means of consensus is wildly illogical, but slower, logical discussion takes time that baboons on the move don't have. It's a simple information and communication efficiency problem. We can't contextualize everything, and contextualizing is often itself a means of intense dishonesty through choosing the framing, which leads to intense debate and more time.

                                                                                                                                      Efficiency and the prominently visible preservation of each one's interests in the means of consensus are vital. I don't think we have reached anything near optimum and certainly not anything designed for internet scale. As a result, the mind of the internet is not really near its potential.

                                                                                                                                  • codr7 7 hours ago

                                                                                                                                    AI is right about many things, impressively so.

                                                                                                                                    And people want to be special; to find meaning, purpose beyond the daily grind.

                                                                                                                                     The result wasn't very difficult to predict; more likely it was one of the driving forces behind the push.

                                                                                                                                    • 42lux 12 hours ago

                                                                                                                                       dang, I really don't know if I like that second-chance reposts take over comments from the first posting and update their timestamps...

                                                                                                                                      • nis0s 6 hours ago

                                                                                                                                        I would have preferred to reserve the term AI psychosis for agentic or autonomous systems experiencing adverse effects from model collapse.

                                                                                                                                        While people being impressionable and affected by forces of societal change is not a new phenomenon, I agree that this type of behavior deserves its own label.

                                                                                                                                        As long as AI doesn’t have its own feelings, it doesn’t make sense to feel any kind of attachment towards it, or be influenced by its words in any social sense. The tool doesn’t have any capacity for being social, so the delusion is both self-rooted and self-driven. So, I think I would have preferred to call this AI-driven narcissism instead of AI psychosis.

                                                                                                                                        • digilypse 6 hours ago

                                                                                                                                          Individuals at risk may spiral into psychosis that is triggered or exacerbated by their use of AI. The term when used correctly is completely literal and in no way implies that AI itself is conscious.

                                                                                                                                          • xyzal 6 hours ago

                                                                                                                                             Hard disagree. This would further the anthropomorphization of LLMs in the eyes of the general populace, which IMO encourages parasocial relationships with LLMs and, in turn, "human AI psychoses".

                                                                                                                                            Model collapse is just fine.

                                                                                                                                            • nis0s 3 hours ago

                                                                                                                                               Model collapse is a specific thing by itself, and that term alone cannot capture the range of both social and technical phenomena we are now observing, or hope to observe.

                                                                                                                                              That said, there will be a time when deviations in the expected function of intelligence arising from silicon-based processes will have animal disease counterparts. We just haven’t quite reached that time yet.

                                                                                                                                               A useful metric for when we are close to that time will be when drones are as responsive as birds, and drone swarms demonstrate as much autonomous group cohesion as bird flocks.

                                                                                                                                          • bo1024 15 hours ago

                                                                                                                                            I had a funny picture recently of a future where most everybody has a pet crackpot or conspiracy theory they're working on with their AI companion, and it's considered normal. "Hey Bob, how's the physics going?" "Pretty good, I might get the Nobel next year. How bout the lizard people?" "The evidence is piling up and we got some great renderings, the media will have to listen to us soon." "Alrighty, see you tomorrow."

                                                                                                                                            • WesolyKubeczek 15 hours ago

                                                                                                                                              You’d think such people would even talk to other people, sheesh.

                                                                                                                                              The best conspiracy theory could be, of course, that other people don’t actually exist. They are a figment of imagination put up by the brain to cope with the utter loneliness.

                                                                                                                                            • PicassoCTs 2 hours ago

                                                                                                                                              How does one even define AI Psychosis? Turning your own output into input tokens by declaring it unfiltered as entered by the user?

                                                                                                                                              • Mallowram 1 hour ago

                                                                                                                                                >>First, much like LLMs, lots of people don’t really have world models. They believe what their friends believe, or what has good epistemic vibes.

                                                                                                                                                 First of all, there are no such things as world models; this is more engineering sloganeering. A world model is an oxymoronic paradox. The world is irreducible. The brain doesn't require models, for various reasons only seen in neurobiology.

                                                                                                                                                 There are no such things as beliefs; we can't find them in the brain. (Start with Stich and work your way into the hippocampus.) What we have instead are tasks, and task demands vary; pretty simple.

                                                                                                                                                 Finally, this notion of psychosis is inherent in our use of language, that's all. Language doesn't produce itself; it never has to, it's arbitrary. It's a fantasy. There is no epistemologically good path with words; ultimately they're not about communication, they're about the dark matter of simian biases. The brain doesn't think in words; we externalize them separately from thought.

                                                                                                                                                Rants like Alexander's are trapped in the cog-sci-psy past, where intent is set behind actions. This is the fantasy that built AI. It's nonsense. The only valid sci construct is neurobiology. Subtract the intention. The brain made us, not our will.

                                                                                                                                                • th0ma5 16 hours ago

                                                                                                                                                   The marketing pushes that vaguely seem to assert capabilities of these products, and the greater community calling skeptics of the technology crazy (as in a prominent article previously discussed on HN some time ago), certainly don't help anyone. The sheer amount of money justifying any and all uses, and preventing honest discussion of the problems, is a kind of crazy-making for sure; even now, just about any skeptical argument cannot gain purchase without running into thought-terminating allusions to imagined capabilities, or implications of potential capabilities, etc.

                                                                                                                                                  • bbor 15 hours ago

                                                                                                                                                      So is QAnon a religion? Awkward question, but it’s non-psychotic by definition.
                                                                                                                                                    
                                                                                                                                                    Not to anyone who has ever discussed it...

                                                                                                                                                      Is this psychosis? The answer has to be no
                                                                                                                                                    
                                                                                                                                                    A lot of really confident talk without even a passing attempt to define the central term :(
                                                                                                                                                    • p_j_w 14 hours ago

                                                                                                                                                       People like to do a lot of poorly justified hand-waving. The author is not exempt from this.

                                                                                                                                                      • musictubes 8 hours ago

                                                                                                                                                        The author is a practicing psychiatrist and is very well versed in what can be considered psychosis.

                                                                                                                                                    • alganet 9 hours ago

                                                                                                                                                       The author has a hypothesis and is looking for evidence, instead of looking at evidence to form a hypothesis. It's bad thinking.

                                                                                                                                                      • murderfs 8 hours ago

                                                                                                                                                        What? That is the exact opposite of bad thinking, looking at evidence to draw a hypothesis is also known as p-hacking. There's a reason that there's been a push towards preregistration of hypotheses for scientific studies.

                                                                                                                                                        • alganet 7 hours ago

                                                                                                                                                          I believe you are wrong.

                                                                                                                                                          Let's say I believe in dragons, and I start interpreting any evidence as dragon evidence. Furthermore, I start only looking for evidence that could be connected to dragons. It's bad thinking.

                                                                                                                                                          The opposite is the good thinking. You look at evidence without searching for anything specific, then you make a hypothesis on what is going on.

                                                                                                                                                           Searching for evidence of chatbot-induced psychosis is settling on a cause before looking at the evidence. That is obviously wrong.

                                                                                                                                                          For example, the survey the author did should not have asked if anyone close "had shown signs of AI psychosis". The question is already biased from the start.

                                                                                                                                                           The article explores the popular idea that talking to a chatbot can induce psychosis. This paints a picture of a person talking to an AI chatbot and going insane. Then it proceeds to say it's a rare case, thereby shutting down the possibility that this could lead to an epidemic. However, by doing this, the article discourages the reader from thinking about other possible scenarios (like unaware interaction with AI-produced content) leading to psychological issues.

                                                                                                                                                        • bawolff 2 hours ago

                                                                                                                                                          How dare the author use the scientific method!

                                                                                                                                                      • cluckindan 15 hours ago

                                                                                                                                                        Relevant: https://ghaemi.substack.com/p/why-dsm-is-mostly-false

                                                                                                                                                        > All psychopathology was about unconscious emotional conflicts, mainly dating to childhood; if the conflicts were normal or mild, they produced “neuroses”; if they were severe, they produced “psychoses.”

                                                                                                                                                        > In addition to 14 validated diagnoses published in the RDC in 1978, a mere two years later DSM-III came out with 292 claimed diagnoses. There is no metaphysical possibility that 278 psychiatric diagnoses suddenly were discovered in two years. They were invented.

                                                                                                                                                        • bbor 15 hours ago

                                                                                                                                                           That's just a blatant misunderstanding of what diagnostic criteria are. They don't Actually ("ontologically") exist; they're Virtual constructs made for a purpose.

                                                                                                                                                             In particular, over half a century of personality research had supported the concept of personality “traits” or dimensions, rather than “disorders” or categories.
                                                                                                                                                          
                                                                                                                                                          That is antithetical to the basic idea of a diagnosis. "You seem like an angry person" is not helpful for deciding which treatments to try.

                                                                                                                                                            Where does this leave us?  We have to accept DSM-5 definitions from a legal and practical perspective. We have to use them for insurance forms, and to protect ourselves against lawsuits.  But we don't have to believe in them.
                                                                                                                                                          
                                                                                                                                                          Yes, that's the whole point of the book. I'm confident that it's covered in the intro.
                                                                                                                                                          • XorNot 14 hours ago

                                                                                                                                                             I mean, the first and foremost principle of the DSM is that if the patient is not reporting or experiencing a debilitating impairment of their ability to live a functional life, and is otherwise happy with their own lived experience, then whatever symptoms they have aren't a problem.

                                                                                                                                                             There's obviously a gulf of potential argument in that definition, but a unique case would be people who report hearing voices that aren't hostile or angry... so actually it's not a problem.