Brute-forcing my algorithmic ignorance with an LLM in 7 days

(blog.dominikrudnik.pl)

81 points | by qikcik 7 hours ago

11 comments

  • stephen_cagle 2 hours ago

    > Find Minimum in Rotated Sorted Array

    I've seen that problem in an interview before, and I thought the solution I hit upon was pretty fun (if dumb).

      from bisect import bisect_left  # imports the snippet relies on
      from typing import List

      class Solution:
          def findMin(self, nums: List[int]) -> int:
              class RotatedList():
                  # nums viewed through a rotation offset
                  def __init__(self, rotation):
                      self.rotation = rotation
                  def __getitem__(self, index):
                      return nums[(index + self.rotation) % len(nums)]

              class RotatedListIsSorted():
                  # virtual list of booleans: does rotating by index sort nums?
                  def __getitem__(self, index) -> bool:
                      rotated = RotatedList(index)
                      print(index, [rotated[i] for i in range(len(nums))])
                      return rotated[0] < rotated[len(nums) // 2]
                  def __len__(self):
                      return len(nums)

              # leftmost rotation at which the list reads as sorted
              rotation = bisect_left(RotatedListIsSorted(), True)
              print('rotation =>', rotation)
              return RotatedList(rotation)[0]
    
    
    I think it is really interesting that you can define "list-like" things in Python using just two methods. This is neat because sometimes you can reframe an entire problem as a binary search over a (virtual) list of candidate solutions; here you are looking for the leftmost point at which it becomes True. Anyway, I often bomb interviews by trying out something goofy like this, but I don't know, when it works, it is glorious!
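
    For anyone curious, here is the same trick in isolation (a hypothetical toy example, not from any interview): `bisect_left` only needs `__getitem__` and `__len__`, so it happily searches a "list" whose elements are computed on demand:

```python
from bisect import bisect_left

class IsBigEnough:
    """A virtual list of booleans that is never materialized:
    index i maps to the value (i * i >= target)."""
    def __init__(self, target):
        self.target = target

    def __getitem__(self, i):
        return i * i >= self.target

    def __len__(self):
        return self.target + 1  # enough indices to contain the answer

# False < True in Python, so the virtual list is sorted and
# bisect_left finds the leftmost True: here ceil(sqrt(10)) = 4.
print(bisect_left(IsBigEnough(10), True))  # -> 4
```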

    Good luck on your second round!

    • kixiQu 2 hours ago

      I'm always interested in write-ups when folks try new attacks on self-study.

      I will also admit that this part hurt my heart to read (vicarious embarrassment):

      > the recruiter mentioned I needed to pay more attention to code debuggability (whatever this means - I assume that under the corpo-language, they mean that I wrote invalid code)

      • qikcik 1 hour ago

        I completely understand why that line caused vicarious embarrassment. Looking back, I realize my brain was (and still is) operating on a completely different definition of that word, based on my daily constraints. I plan to write more about this in Part Two, but at that point in time, I wasn't even aware of this alternative understanding of the term.

        In telco, when a remote node crashes at a client's site, I often only have access to a heavily restricted subset of logs, and the debugging communication loop via email can take days to understand "what happens". Because of that, I write defensive, strictly encapsulated code, and I think in terms of domain-specific states and objects that can be explicitly tracked from an external PoV.

        Similarly, during game jams, "debuggable and maintainable" means to me that the code is modular enough that I can completely rip out and rewrite a core mechanic in the final 3 hours just because the game design suddenly changed.

        My habit of writing code optimized for remote logs and sudden architectural shifts actually became my biggest enemy under the algorithmic interview (or 45-minute LeetCode) constraint. It makes the core algorithmic state less clear and hides algorithmic mistakes under layers of defensive "if" statements (where I would normally drop a debug log).

        I am simply used to not trusting the inputs, whereas in algorithmic problems, the whole point is to exploit constraints that you need to be absolutely sure about.

        So the "if" statements that usually increase "debuggability" in telco or during game jams are the exact opposite of the "debuggability" term used in algorithmic thinking.
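
        A hypothetical sketch of that contrast (illustrative only, not actual interview code): the same minimum-of-rotated-array task written with the telco habit versus trusting the stated constraints:

```python
# Hypothetical illustration of the two habits, not real interview code.

# Telco/game-jam habit: never trust the input, wrap everything in checks.
def find_min_defensive(nums):
    if nums is None:
        raise ValueError("nums must not be None")
    if len(nums) == 0:
        raise ValueError("nums must be non-empty")
    if any(not isinstance(n, int) for n in nums):
        raise TypeError("nums must contain only ints")
    return min(nums)  # O(n), and the checks dominate the reading

# Interview habit: the statement guarantees a non-empty rotated sorted
# array, so the binary-search state stays front and center.
def find_min_trusting(nums):
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] > nums[hi]:   # minimum lies strictly right of mid
            lo = mid + 1
        else:                      # minimum is at mid or to its left
            hi = mid
    return nums[lo]

print(find_min_trusting([4, 5, 6, 7, 0, 1, 2]))  # -> 0
```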

        Thanks for naming this issue so clearly - it is a very valid reality check.

      • TrackerFF 3 hours ago

        Note: I haven't done any tech interview in 6 years.

        I'm kind of surprised they still do leetcode-style questions on remote interviews these days. I thought those types of interviews would be 100% gamed by now.

        • mgraczyk 2 hours ago

          Still plenty of signal. You'd be surprised at how badly most people do at very simple questions.

          • gsnedders 2 hours ago

            Yeah, this.

            In my experience, it’s the relatively basic questions that have the highest value — both because they’re what you run into programming most often, and because they’re less likely to overwhelm candidates in a high-stress setting.

            The goal, at least from my point of view, isn't to see whether they can come up with the perfect algorithm, but to see how they construct an algorithm, how they communicate the decisions they're making, how they respond to challenges about edge cases, etc.

            I’m also strongly in favour of picking out questions that are reflective of the actual codebase they’re being hired for — find something with some basic algorithmic complexity which has a relatively simple and easy to explain input and output, and use that as the problem.

            In general, I think the best problems are those which any competent senior engineer could design a good solution for almost off the top of their head with little difficulty.

            • epolanski 2 hours ago

              I am quite passionate about algos, do lots of katas on codewars for fun, and done plenty of leetcodes.

              Then I had a technical interview where I was asked to implement a simple algo for the tris game (aka tic-tac-toe), and my mind went completely blank.

              I was tired; I'm in the EU, and this was a San Francisco startup interviewing me at their lunchtime, which is very late in Italy.

              And I generally don't like being interviewed or given tasks.

              Of course the solution is beyond simple, but I struggled even at brute forcing it.
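
              For reference, the win check really is only a few lines (a generic sketch, not necessarily what that interviewer asked for):

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None.
    board is a 3x3 list of lists containing 'X', 'O', or None."""
    lines = []
    for i in range(3):
        lines.append([board[i][0], board[i][1], board[i][2]])  # rows
        lines.append([board[0][i], board[1][i], board[2][i]])  # columns
    lines.append([board[0][0], board[1][1], board[2][2]])      # diagonal
    lines.append([board[0][2], board[1][1], board[2][0]])      # anti-diagonal
    for line in lines:
        if line[0] is not None and line[0] == line[1] == line[2]:
            return line[0]
    return None

board = [['X', 'O', None],
         ['O', 'X', None],
         ['O', None, 'X']]
print(winner(board))  # -> X
```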

              I can easily do these kind of exercises (and much harder ones obviously) for fun, but not when interviewed.

              I struggled with the same thing in university. I graduated with 104/110 even though I was consistently among the most prepared, and I learned in order to learn, not to pass exams (plenty of stellar performers didn't remember anything a few weeks after exams).

              Once I asked a professor why he graded me 27/30 even though I had spent an hour answering everything in detail, including the hardest questions.

              "Because you never appear convinced when you answer".

              I get nervous, I don't like to prove my knowledge this way. I rethink constantly what I'm saying, or even how I sound.

              I forget how to type braces or back ticks.

              I did not have any issues when not interviewed, or in written exams, or during my research period when I published 3 papers that have been highly cited.

              But I am just not a fan of these types of interviews; they tell absolutely nothing about the candidate.

              You interview me and you'll have the very wrong impression if you ask me to live code or white board.

              Meanwhile I've seen leetcode black belts spend most of their time logged on Tekken 7 on discord, consistently creating work and providing negative value while somehow always selling their high skills.

              I have found much more value in seeing personal projects, and OSS contributions.

              Never asked a single of these bs questions and never failed at hiring anyone. Not once.

              • mgraczyk 2 hours ago

                > I am just not a fan of these types of interviews; they tell absolutely nothing about the candidate.

                Unfortunately this is wrong, and I have seen tons of data at 5 companies showing it. These kinds of interviews really do correlate well with job performance.

                There is noise, but large companies in particular need a scalable process, and this one works pretty well.

                Startups shouldn't do this, though, but the reason is the opposite of what you're complaining about: it's too easy to accidentally waste your time on somebody who is merely good at leetcode.

                • epolanski 2 hours ago

                  I have never seen correlation.

                  The most important thing about a candidate is whether he's gonna be professional and hard working.

                  Technical interviews tell me nothing about it.

                  Of course I can see why large companies, with a high number of candidates relative to positions, need such methods; they can afford false negatives.

                  But outside those cases personal projects and OSS contributions say much more.

                  • mgraczyk 2 hours ago

                    How many people have you interviewed and hired? I have interviewed around 400 and hired around 20, and I've seen data compiled on over 100,000 interviews. I have never worried about a false negative, except DEI stuff pre-2021

                    Maybe this is a Europe vs US thing?

                    • epolanski 1 hour ago

                      I've hired around 10 and interviewed around 50/60.

                      Half the people I hired, I did so without any technical interview, I met them in coding communities and saw everyday their OSS contributions and skills.

                      In any case I'm not saying your method is wrong; I'm just saying there are individuals like me who don't do well in these kinds of situations/exams and can easily become false negatives.

                      I'm also saying that this isn't rocket science, and in general trying to understand if the person is honest and hard working is much more important than his coding skills.

                      I've seen way too many stellar graduates and leetcode ninjas who were just terrible at their job or completely uninterested.

                      And in Europe it's hard to fire them.

                • giva 35 minutes ago

                  > I forget how to type braces or back ticks.

                  US layout and compose key on AltGr. You'll never look back.
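
                  On X11, assuming the standard setxkbmap tool, that setup is a one-liner (Wayland desktops expose the same options through their keyboard settings):

```shell
# US layout, with the right Alt key acting as a Compose key:
# Compose ' e -> é, Compose " u -> ü, Compose s s -> ß, etc.
setxkbmap us -option compose:ralt
```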

                  • pstuart 1 hour ago

                    Yes, understanding algos is valuable, and implementing them off the top of one's head is a nice party trick, but in the end we're paid to solve problems, and it's much faster and easier to identify existing solutions and adapt them than to reinvent the wheel.

                    Knowing when and what to optimize is vital.

                • autotune 3 hours ago

                  They don't, at least in the SRE space. I have been interviewing for 6 months without a single coding or LeetCode-type challenge. I would avoid any company that offered one, but so far I haven't even been given the chance to avoid it in an interview.

                  • jevndev 2 hours ago

                    Unfortunately this practice is prevalent still. Recently I’ve been applying to jobs in the two industries I have experience in (algorithmic robotics and fintech) and nearly half of the companies that I’ve heard back from start with either a timed leetcode problem or an HR interview which is immediately followed by a timed leetcode problem. It’s exhausting.

                    • autotune 2 hours ago

                      Interesting. I am going for a broad search rather than being targeted. Maybe it's as you say, an industry specific problem. At my last fintech job they just quizzed me a bit on Terraform and asked me about experience, though that role ended up being a disaster later on.

                    • culopatin 1 hour ago

                      What have you encountered instead?

                  • ashwinnair99 4 hours ago

                    The honest ones, who admit they used it as a learning tool rather than a shortcut, are getting more use out of it than anyone else.

                    • nico 2 hours ago

                      Recently had a coding interview in which I was allowed to search online but not use any AI. On the first google search, the interviewer realized that the first result is now AI generated and said I couldn’t use anything from there. So I had to just click on different links and piece together what I needed from inside the pages

                      • You have to build a house, but don't use concrete mixers; you must mix by hand to really see if you know the physics of concrete.

                      • e12e 5 hours ago

                        Interesting article - but perhaps a bit light on details in some places, like:

                        > I generated a list of the most common interview tasks

                        How? I suppose they mean gathered, or searched for, not strictly generated?

                        Also a little light on details of the actual interview.

                        I'm also a little confused about the listing of "problems" - do they refer to some specific leet-code site's listing of problems?

                        It seems like half-way between naming an actual algorithm/problem and naming a concrete exercise.

                        As for:

                        > How is it that we do not use this "forgotten and forbidden" coding in our daily production code, even though all highly reusable, useful code is essentially an exploitation of the intersection between classical algorithmic thinking and real-world problems?

                        I'm not sure what to say - most of this stuff lives in library code and data structure implementations for any language in common use?

                        Indeed the one saving grace of leet code interview is arguably that it shows if the candidate can choose sane data structures (and algorithms) when implementing real-world code?

                        • qikcik 3 hours ago

                          You are right, I missed some crucial details in the blog entry. I will definitely take your feedback into account for Part 2, where I want to do a more detailed deep dive into the prompting protocols (with maybe some exact examples) and my learning strategy.

                          To answer your questions:

                          1. By "generated" I mean that I prompted the LLM incrementally to provide me the list of the next LeetCode problems to do (without the deep research/search function)

                          2. Yes, the problem names are the exact names from LeetCode. Initially, the LLM suggested this format, and I later forced it to stick to real LeetCode problems.

                          This allowed me to verify some output independently of the LLM (avoiding hallucinations), cross-check solutions with other materials, and track my progress.

                          Interestingly, I realized later that the LLM was mostly pulling from the standard Blind 75 problem set, and almost all the problems are from that list.

                          3. About the "forgotten and forbidden" code: I probably phrased it poorly in the article. As you said, this algorithmic logic is abstracted away in standard libraries and data structures. The disconnect for me (and I suspect for many "business logic" developers too) is that our daily production code rarely requires writing these fundamental structures from scratch, so we do not see the patterns that can also be applied in more high-level business logic. But this is still an in-progress hypothesis in my mind, without detailed examples.

                        • and12-qwd 3 hours ago

                          It is kind of odd to admit this before the second round of interviews. Perhaps glorification of LLMs is now a positive, but still it is a gamble.

                          It is also odd that this article appears here after someone complained about vibe coding killing the interest in algorithms.

                          This game is played often. People have valid complaints, then someone posts a "rebuttal" ("LLMs are not bad for $X---they are good for $X").

                          Anyway, he uses LLMs more in a search capacity, which is less controversial than generative AI and vibe coding.

                          • qikcik 3 hours ago

                            Just a quick correction: the entire recruitment process is already finished. Publishing this is a retrospective analysis of a closed chapter.

                          • gurachek 4 hours ago

                            Your "no compiler" rule on day 3 taught you more than the LLM did. The LLM made concepts click. But the binary search vanishing under interview stress proves that understanding something and being able to produce it under pressure are totally different skills. Nobody talks about this enough in the "just use ChatGPT to learn" discourse.

                            • qikcik 3 hours ago

                              You are 100% right. For me, the most important thing is that the LLM teacher allowed me to break through my algorithmic ignorance in just one week.

                              The rest is somehow orthogonal to the LLM and is just pure practice. It is very easy to procrastinate with an LLM without actual practice.

                              It allowed me to actually see the problem space and something like the "beauty of classical algorithms". It shifted my "unknown unknowns" into "known unknowns". I had failed so many times to achieve exactly that without an LLM in the past.

                              • gurachek 2 hours ago

                                Yeah, LLMs are the perfect procrastination tool because they feel productive. You're "learning", you're "exploring", you're having this great conversation about the problem. And then you close the tab and realize you never actually wrote anything yourself.

                                The best procrastination device ever built because it validates you the entire time. Great post, even beyond the algorithms example.

                            • tom-blk 5 hours ago

                              Very cool, I have personally been studying zk-cryptography with a similar approach, works really well with some caveats. Will save this article and try this version as well when the time comes!

                              • piokoch 6 hours ago

                                This is very interesting, I've been using LLM to learn new things that way and it really worked. To some extent, learning with LLM is better than taking any course, even with a tutor, as I am getting something prepared for me, in terms of my experience, progress level, etc.

                                LLM is going to change schools and universities a lot, teachers, tutors will have to find themselves in the new reality, as they have a strong competitor with infinite resources and huge knowledge, patient and ready to work with every student in a distinct way, according to student's needs, level, intelligence, etc.

                                Instruction-based tutoring is dead from that perspective, why should I follow someone reciting a book or online tutorial, while there is a tool that can introduce me into subject in a better and more interesting way?

                                Sure, there are great teachers, who are inspiring people, who are able to present the topic in a great way, the point is, they are minority. Now, everyone can have a great tutor for a few dollars a month (or for free, if you don't need generating too much data quickly).

                                • fragmede 6 hours ago

                                  To some extent. I had Claude (Sonnet 4.5) generate some homework problems for students I was teaching to code, and the problems/answers weren't actually right. They were subtly wrong, which makes me worry about using it for other subjects.

                                  • e12e 5 hours ago

                                    I think that Knoll’s law of media accuracy applies quite well to LLMs as well:

                                    > “everything you read in the newspapers is absolutely true, except for the rare story of which you happen to have firsthand knowledge”.

                                  • pandatigox 6 hours ago

                                    Sounds interesting, can you share some useful prompts for learning?

                                    • r_lee 6 hours ago

                                      (not OP but..) I personally am not very into "prompting", you just need to figure out how these models work

                                      it's best when you ask a well known problem/thing they can reference (vs. a niche way to solve exactly what you want to solve)

                                      then you work backwards, i.e. why is it like this, what is this for, what are the alternative ways to accomplish this, etc.

                                      it's a big query engine after all.

                                      don't try to ask like "what is the exact right way" or etc. because it will try to generate that and likely hallucinate if there is no such answer in its training corpus.

                                      instead ask what the model does know, or doesn't.

                                      • ImPostingOnHN 4 hours ago

                                        I can! What thing have you been learning lately? What is your current knowledge level? What is it your goal to learn about it next?

                                      • traceroute66 5 hours ago

                                        > LLM is going to change schools and universities a lot, teachers, tutors will have to find themselves in the new reality, as they have a strong competitor with infinite resources and huge knowledge, patient and ready to work with every student in a distinct way, according to student's needs, level, intelligence, etc.

                                        No it won't. It really, really won't. You clearly don't have any university professors amongst your friends or acquaintances.

                                        What you wrote is what the STUDENTS think. The students think they have found a cheat code.

                                        No university professor considers LLM "a competitor". They see the slop output every day on their desk.

                                        The reality is just like LLMs will confidently push out slop code, they will also push out slop for everything else. Because the reality is that LLMs are nothing more than a party trick, a stats based algorithm that gives you answers within a gaussian curve.

                                        The students come to the professors with stupid questions because they've been trusting the AI instead of learning properly. Some of the students even have the audacity to challenge the professor's marking saying "but the AI said it is right" in relation to some basic math formula that the student should know how to solve with their own brain.

                                        So what do my university professor friends end up doing ?

                                        They spend their evenings and weekends thinking up lab tasks that the students cannot achieve by simply asking the LLM for the answer. The whole point of university is you go there to learn to reason and think with your own damn brain, not paste the question into a text box and paste the answer to your professor.

                                        Trying to cheat your way through university with an LLM is a waste of the students time, a waste of the professors time and a waste of the university's infrastructure.

                                        That, my friend, is the reality.

                                        • cjbgkagh 4 hours ago

                                          I’m an unusually good programmer, I’ve worked in over 25 different programming languages and have been doing it since I was 6. I’ve spent most of my career as an applied researcher in research orgs where my full time job is study.

                                          Finding new relevant things to learn gets progressively more difficult, and LLMs have blown that right open. Even if they have zero new ideas, their encoding and searching of existing ideas is nothing like I've seen before. If they can teach me things, they can definitely teach less experienced people things as well. Sometimes it takes a bit of prodding: it will insist something is impossible, but when presented with evidence to the contrary will go on to give working prototypes. Which means in these very long-tail instances it does still help to have some prerequisite knowledge. I wish they were more able to express uncertainty.

                                          I think the primary reason Ed Tech hasn't been disrupted is that an expensive education is a costly signal and a class demarcator; making it cheaper defeats the primary purpose. Grade creep, the reproducibility crisis, plagiarism crises, and cheating scandals fail to undermine this purpose. In fact the worse it gets, the more it becomes a costly signal. As inequality increases, so does the importance of social signals. In many countries universities are given special privileges to act as a gateway to permanent residency, which is extremely profitable. If anything is to replace education, it would have to either supplant this role as a social signal, or the reward for the social signal would need to be lost, and I don't see either happening anytime soon short of a major calamity.

                                          • wiseowise 3 hours ago

                                            > No it won't. It really, really wont. You clearly don't have any university professors amongst your friends or acquaintances.

                                            Maybe some fancy professors in their cushy Ivy league ivory tower won't, but a lot of teachers that work for minimal salary sure will.

                                            > Because the reality is that LLMs are nothing more than a party trick, a stats based algorithm that gives you answers within a gaussian curve.

                                            A lot of humans can't even do that.

                                            > Some of the students even have the audacity to challenge the professor's marking saying "but the AI said it is right" in relation to some basic math formula that the student should know how to solve with their own brain.

                                            Students challenge professors over some stupid assumption, more news at 11.

                                            > Trying to cheat your way through university with an LLM is a waste of the students time, a waste of the professors time and a waste of the university's infrastructure.

                                            Who even said anything about cheating? Witch hunting too much? For the majority of layman topics an LLM will be a far superior offering, precisely because LLMs have no ego and will reply to the best of their abilities instead of chastising students for, God forbid, HAVING THE AUDACITY to disagree over a topic.

                                            • traceroute66 2 hours ago

                                              > LLMs will reply to their best abilities

                                              Which includes hallucination, reward-hacking, over-confident delivery of completely wrong answers etc.

                                              > HAVING AUDACITY to disagree over a topic.

                                              When we are discussing a long-standing centuries-old textbook mathematical formula which is internationally recognised there is no disagreement to be had.

                                              If an LLM hallucinates and tells a student that the textbook mathematical formula is wrong, and the student has the audacity to complain to the professor on that basis, I see no issue with the professor firmly challenging the student. University is there to foster learning and reasoning using your own brain, not outsourcing it to a hallucinating LLM.

                                            • pcf 4 hours ago

                                              He said: "LLM is going to change schools and universities a lot"

                                              You said: "No it won't. It really, really wont."

                                              With the explosive development of LLMs and their abilities, it seems your point of view is probably the hopeful one while the other poster has the realistic one.

                                              It seems that you simply can't say anything about what LLMs will not be able to do, especially when your main argument rests on current "AI slop", which is being eradicated more and more.

                                              • traceroute66 4 hours ago

                                                > "AI slop" as your main reason, which is being more and more eradicated.

                                                The slop is the hard truth.

                                                As I made perfectly clear in my original post. My university professor friends get handed AI slop by their students each and every day.

                                                There is no "eradication of slop" happening. If anything, it is getting worse. Trust me, my friends see the output from all the latest algorithms on their desk.

                                                The students think they are being very clever, the students think the magical LLM is the best thing since sliced bread.

                                                All the professor sees is a wall of slop on their desk and a student that is not learning how to reason and think with their own damn brain.

                                                And when the professors tries politely and patiently to challenge them and test their understanding as you would expect in a university environment, the snowflake students just whine and complain because they know they've been caught out drinking the LLM kool-aid again for the 100th time this week.

                                                Hence the student is wasting their time and money at university, and the professor is wasting their time trying to teach someone who is clearly not interested in learning because they think they can get the answer in 5 seconds from an LLM chatbot.

                                                My professor friends chose the career they did because they enjoy the challenge of helping students along the way through their courses and watching them develop.

                                                They are no longer seeing that same development in their students. And instead of devoting time to helping students, they are wasting time thinking up over-engineered fiendishly-complicated lab-tasks and tests that the students cannot cheat using LLM.

                                                It is honestly a lose-lose situation for everybody.

                                                • culopatin 3 hours ago

                                                  I think you're missing the point. The conversation is not about what students give the professors, it's about how students learn. This obviously requires someone that wants to learn.

                                                  • traceroute66 3 hours ago

                                                    > it's about how students learn. This obviously requires someone that wants to learn.

                                                    FINALLY ! Someone who gets the point I was trying to make. I wish I could upvote you a million times.

                                                    This is precisely the point. Professors are happy to help people who want to learn.

                                                    Students who prefer to copy/paste into LLMs do not want to learn. University is there to foster learning and reasoning using your own brain. An LLM helps with neither.

                                                    • saltcured 2 hours ago

                                                      Sweep aside the misunderstanding about students trying to "cheat" with LLM output instead of engaging with the topic at hand. I think there is a secondary debate here, even when you understand the original intent of the post above. It still boils down to the same concerns about "slop". Not the student presenting slop to the existing teaching system, but the student being led astray by the slop they are consuming on their own.

                                                      Being an autodidact has always been a double-edged sword. You can potentially accelerate your learning and find your own specialization, but it is an extremely easy failure mode to turn yourself into some semi-educated crank. Once in a while, this leads to some renegade genius who opens new branches of knowledge. But in more cases, it aborts useful learning. The crank gets lost in their half-baked ontology, unable to fix the flaws or progress to more advanced topics.

                                                      The whole long history of learning institutions is, in part, trying to manage this very human risk. One of a teacher's main roles is to recognize a student who is spiraling out in this manner and steer them back. Nearly everyone has this potential to incrementally develop a sort of self-delusion, if not getting reality-checked on a regular basis. It takes incredible diligence to self-govern and never lose yourself in the chase.

                                                      This is where "sycophancy" in LLMs is a bigger problem than mere diction. If the AI continues to function as a sort of keyhole predictor, it does not have the context to model a big-picture purpose like education and keep all the incremental wanderings on course and bound to reality. Instead, it can amplify this worst-case scenario where you plunge down some rabbit-hole.

                                                    • wiseowise 3 hours ago

                                                      I sure hope those "university professor friends" exist, and you're not self-distancing. Because you really need help with a mindset like that. Students are not your enemies, and LLMs are not out to get you. Seek help.

                                                  • pr337h4m 4 hours ago

                                                    What you call "slop" is a far better education than what 99% of children in the world receive.

                                                    • traceroute66 4 hours ago

                                                      > What you call "slop" is a far better education than what 99% of children in the world receive.

                                                      I'm talking about university education here. Where the hell does the clutching-at-straws "99% of children in the world" argument come into it?

                                                      • wiseowise 3 hours ago

                                                        > clutching at straws

                                                        > as if he wasn't the one who clutched at some mythical "the university"

                                                        Next you'll tell us that anything below Ivy League isn't considered a university and kids should've known better and used their damn head™ really hard to be born into better families so they could get a real university™ education.

                                                        • traceroute66 2 hours ago

                                                          > Next you'll tell us that anything below Ivy League isn't considered a university

                                                          I know professors from across the spectrum, and I most certainly do not think the Ivy League should be placed on a pedestal.

                                                          That is all I am willing to say on that subject. Let's not drift off into personal attacks, please.

                                                  • jeffbee 6 hours ago

                                                    LLMs aren't any of these things: infinite, knowledgeable, patient, or ready. They are a compressed representation of all of the misstatements and misunderstandings in the history of Reddit. If you think you've been using LLMs to "learn new things", it could be because you aren't already familiar with the domain and can't see where they're misleading you.

                                                    • mistahenry 5 hours ago

                                                      I mean, you've collapsed a complex, mixed system into a single negative narrative.

                                                      Examples of how I learn with LLMs:

                                                      - Paste sections from reading and ask questions / clarify my understanding / ask it to quiz me

                                                      - Produce Anki cards by pasting in chapter text and then culling out the good ones

                                                      - Request resources / links for further learning

                                                      Basically, LLMs serve as a thinking partner. Yes, it's a fallible tool, not an oracle. But dismissing the idea that you can learn (and learn faster / more efficiently) with LLMs is reductionist.

                                                  • doctorpangloss 4 hours ago

                                                    another POV is, it used to be cool to work for Google

                                                    it's been so uncool and harrowing for a while now to deal with their leetcode BS. i mean, obviously this guy is well-meaning, but he didn't learn anything, other than for the paycheck and whatever desperate circumstances require that.

                                                    the LLM stuff being used to solve their interview process is an inflection point where the desire to work for Google for any reason other than money drops steeply

                                                    maybe this is why Google DeepMind researchers keep leaving to start their own successful companies