10 comments

  • dvt 1 hour ago

    So weird/cool/interesting/cyberpunk that we have stuff like this in the year of our Lord 2026:

       ├── MEMORY.md            # Long-term knowledge (auto-loaded each session)
       ├── HEARTBEAT.md         # Autonomous task queue
       ├── SOUL.md              # Personality and behavioral guidance
    
    Say what you will, but AI really does feel like living in the future. As far as the project is concerned, pretty neat, but I'm not really sure about calling it "local-first" as it's still reliant on an `ANTHROPIC_API_KEY`.

    I do think that local-first will end up being the future long-term, though. I built something similar last year (unreleased), also in Rust, but it ran the model locally (you can see how slow/fast it is here [1], keeping in mind I have a 3080 Ti and was running Mistral-Instruct).

    I need to revisit this project and release it. Building in the context of the OS is pretty mind-blowing, so kudos to you. I think the paradigm of how we interact with our devices will fundamentally shift in the next 5-10 years.

    [1] https://www.youtube.com/watch?v=tRrKQl0kzvQ

    • halJordan 1 hour ago

      You absolutely do not have to use a third-party LLM. You can point it at any OpenAI- or Anthropic-compatible endpoint. It can even be on localhost.

      • dvt 1 hour ago

        Ah true, missed that! Still a bit cumbersome & lazy imo; I'm a fan of just shipping with that capability out of the box (Hugging Face's Candle is fantastic for downloading/syncing/running models locally).

        • embedding-shape 57 minutes ago

          Ah come on, lazy? As long as it works with the runtime you want to use, instead of hardcoding their own solution, it should work fine. If you want to use Candle, even if you'd have to implement new architectures to be able to use it, you still can: just expose it over HTTP.

      • atmanactive 1 hour ago

        > but I'm not really sure about calling it "local-first" as it's still reliant on an `ANTHROPIC_API_KEY`.

        See here:

        https://github.com/localgpt-app/localgpt/blob/main/src%2Fage...

  • ramon156 1 hour ago

    Pro tip (sorry if these comments are overdone): write your posts and docs yourself (or at least edit them).

    Your docs and this post are all written by an LLM, which doesn't reflect much effort.

    • Szpadel 32 minutes ago

      Counterargument: I always hated writing docs, so most of the things I built at my day job didn't have any, and that made them more difficult for others to use.

      I've also been burnt many times when some software's docs said one thing and, after many hours of debugging, I found out the code does something different.

      LLMs are so good at creating decent descriptions and keeping them up to date that I believe docs are the number one thing to use them for. Yes, you can tell a human didn't write them, but so what? If they are correct, I see no issue at all.

      • DonaldPShimoda 20 minutes ago

        > if they are correct I see no issue at all.

        Indeed. Are you verifying that they are correct, or are you glancing at the output and seeing something that seems plausible enough and then not really scrutinizing? Because the latter is how LLMs often propagate errors: through humans choosing to trust the fancy predictive text engine, abdicating their own responsibility in the process.

        As a consumer of an API, I would much rather have static types and nothing else than incorrect LLM-generated prosaic documentation.

        • jack_pp 11 minutes ago

          Can you provide examples in the wild of LLMs creating bad descriptions of code? Has it ever happened to you?

          Somehow I doubt at this point in time they can even fail at something so simple.

    • bakugo 1 hour ago

      > which doesn't reflect much effort.

      I wish this was an effective deterrent against posting low effort slop, but it isn't. Vibe coders are actively proud of the fact that they don't put any effort into the things they claim to have created.

      • g0h0m3 57 minutes ago

        GitHub repo that is nothing but forks of others' projects and some 4chan utilities.

        Professional codependent leveraging anonymity to target others. The internet is a mediocrity factory.

  • applesauce004 1 hour ago

    Can someone explain to me why this needs to connect to LLM providers like OpenAI or Anthropic? I thought it was meant to be a local GPT. Sorry if I misunderstood what this project is trying to do.

    Does this mean the inference is remote and only the context is local?

    • atmanactive 1 hour ago

      It doesn't. It has to connect to SOME LLM provider, but that CAN also be a local Ollama server (running instance). The choice ALWAYS needs to be present since, depending on your use case, Ollama (a local-machine LLM) could be just right, or it could be completely unusable, in which case you can always switch to data-center-sized LLMs.

      The README gives only an Anthropic example, but, judging by the source code [1], you can use other providers, including Ollama, just by changing that one config-file line.

      [1] https://github.com/localgpt-app/localgpt/blob/main/src%2Fage...
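
      Something in this spirit, for example (the key names here are my guess, not necessarily the real ones; check the README and that source file):

          # hypothetical config sketch; the actual key names may differ
          [llm]
          provider = "openai"                     # any OpenAI-compatible backend
          base_url = "http://localhost:11434/v1"  # e.g. Ollama's OpenAI-compatible endpoint
          model = "qwen2.5:14b"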

      • vgb2k18 1 hour ago

        If local isn't configured, then it falls back to online providers:

        https://github.com/localgpt-app/localgpt/blob/main/src%2Fage...

        • halJordan 1 hour ago

          It doesn't need to

  • thcuk 39 minutes ago

    "cargo install localgpt" fails to build under Linux Mint. Git clone and change Cargo.toml by adding:

        # Desktop GUI
        eframe = { version = "0.30", default-features = false, features = [
            "default_fonts",
            "glow",
            "persistence",
            "x11",
        ] }

    That is, add "x11". Then cargo build --release succeeds. I am not a Rust programmer.

  • dpweb 1 hour ago

    Made a quick bot app (OC clone). For me, I just want to iMessage it, but I don't want to give Full Disk rights to the terminal (to read the iMessage db).

    It uses MLX for the local LLM on Apple silicon. Performance has been pretty good for a base-spec M4 mini.

    I also don't want to install little apps when I don't know what they're doing, reading my chat history and Mac system folders.

    What I did was create a Shortcut on my iPhone to write iMessages to an iCloud file, which syncs to my Mac mini (quickly), and a script loop on the mini processes my messages. It works.
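
    The loop on the mini is nothing fancy; a rough sketch (the path and the model call are placeholders, not my exact script):

        // rough sketch: poll the iCloud-synced file for new lines and hand them to the bot
        use std::{fs, thread, time::Duration};

        fn main() {
            // placeholder path: wherever the Shortcut writes inside iCloud Drive
            let inbox = "/Users/me/Library/Mobile Documents/com~apple~CloudDocs/bot-inbox.txt";
            let mut seen = 0usize; // lines already handled

            loop {
                if let Ok(contents) = fs::read_to_string(inbox) {
                    let lines: Vec<&str> = contents.lines().collect();
                    for msg in &lines[seen.min(lines.len())..] {
                        if msg.trim().is_empty() {
                            continue;
                        }
                        // placeholder: send msg to the local model (MLX server, Ollama, etc.)
                        // and write the reply somewhere the Shortcut can read it back
                        println!("new message: {msg}");
                    }
                    seen = lines.len();
                }
                thread::sleep(Duration::from_secs(5)); // cheap polling is fine for this
            }
        }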

    Wondering if others have ideas so I can iMessage the bot; I'm in iMessage and don't really want to use another app.

  • theParadox42 1 hour ago

    I am excited to see more competitors in this space. Openclaw feels like a hot mess with poor abstractions. I got bitten by a race condition that skipped all of my cron jobs for the past 36 hours, as did many others, before it got fixed. The CLI is also painfully slow for no reason other than that it was vibe coded in TypeScript. And the error messages are poor and hidden, and the TUIs are broken… and the CLI has bad path conventions. All I really want is a nice way to authenticate between various APIs and then let the agent build and manage the rest of its own infrastructure.

  • mraza007 44 minutes ago

    I love how you used SQLite (FTS5 + sqlite-vec).

    It's fast and amazing for embedding storage and lookups.
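
    For anyone curious about the pattern, here's a generic sketch with rusqlite (not the project's actual schema or table names, and it assumes the sqlite-vec extension is already loaded):

        // generic sketch: keyword search via FTS5 plus nearest-neighbour search via sqlite-vec
        use rusqlite::{Connection, Result};

        fn hybrid_search(conn: &Connection, text: &str, embedding_json: &str) -> Result<Vec<i64>> {
            // BM25-ranked keyword matches from a hypothetical FTS5 table `notes_fts`
            let mut fts = conn.prepare(
                "SELECT rowid FROM notes_fts WHERE notes_fts MATCH ?1 ORDER BY rank LIMIT 10",
            )?;
            let mut ids: Vec<i64> = fts
                .query_map([text], |row| row.get(0))?
                .collect::<Result<_>>()?;

            // KNN matches from a hypothetical vec0 table `vec_notes`; the query vector is JSON text
            let mut knn = conn.prepare(
                "SELECT rowid FROM vec_notes WHERE embedding MATCH ?1 AND k = 10 ORDER BY distance",
            )?;
            let nearest: Vec<i64> = knn
                .query_map([embedding_json], |row| row.get(0))?
                .collect::<Result<_>>()?;
            ids.extend(nearest);
            Ok(ids)
        }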

  • DetroitThrow 36 minutes ago

    It doesn't build for me unfortunately. I'm using Ubuntu Linux, nothing special.

  • dalemhurley 1 hour ago

    I’m playing with Apple Foundation Models.

  • AndrewKemendo 1 hour ago

    Properly local too, with the llama and ONNX format models available! Awesome.

    I assume I could just adjust the TOML to point to a locally hosted DeepSeek API, right?