SKILL.make: Makefile Styled Skill File

(github.com)

29 points | by teaonly 4 hours ago

6 comments

  • gavmor 1 hour ago

    So—let me get this straight—this is just a suggestion to format skills' dot-md files with inline `makefile` pseudocode codeblocks for the sake of one-shot alignment?

    Have you tried it in Polish? [0]

    0. https://arxiv.org/html/2501.02266v1

    • stingraycharles 2 hours ago

      I don’t get it.

      “Dependency Resolution: The harness resolves the DAG (Directed Acyclic Graph) automatically. No more relying on an LLM to "guess" the next logical step. Uses the Target: Dependency + Recipe model to ensure Agents follow a strict execution order without skipping steps.”

      How does it do that? Does it just generate a Makefile? If so, why not just put the actual Makefile as a resource in the skill package and provide execution commands? That way the Makefile doesn’t need to be read at all.

      If not, and you rely on an LLM interpreting the execution order, wouldn’t that statement just be false?
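The alternative being suggested here can be sketched concretely (target names and recipe commands below are invented for illustration, not taken from the project): ship a real Makefile as a skill resource, and let `make` itself resolve the DAG, so no LLM ever has to infer the execution order.

```shell
# Write a tiny three-step skill Makefile (\t produces the required
# recipe tabs; the targets are hypothetical).
printf 'fetch:\n\t@echo "step: fetch data"\n\nanalyze: fetch\n\t@echo "step: analyze data"\n\nreport: analyze\n\t@echo "step: render report"\n' > skill.mk

# A dry run prints the recipes in dependency order without executing
# them: the order is computed by make from the DAG, not guessed.
make -n -f skill.mk report
```

The dry run prints the fetch, analyze, and report steps in that order, which is the point: the ordering comes from `make`'s dependency resolution, deterministically.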

      • hrimfaxi 1 hour ago

        It seems like it relies on an LLM to guess the next logical step and codify it.

        https://github.com/Teaonly/SKILL.make/blob/06872841537273376...

        • teaonly 2 hours ago

          What I did here was rewrite SKILL.md in Makefile style, using a DAG structure and omitting the prose describing the process. So this should be considered a pseudo-Makefile; writing a SKILL in the Makefile style is a very natural approach.

          • SwellJoe 2 hours ago

            You're just repeating the readme, not answering the question.

            • teaonly 2 hours ago

              My next step is to design the recipes to be hot-loadable. The goal is self-evolution: optimizing the recipes independently without changing the DAG. This kind of local optimization is something Markdown lacks but Makefiles support.
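One way to read the hot-loadable idea (a sketch of the concept only, not the project's actual layout; every file and variable name here is invented): keep the fixed DAG in one file and pull the recipe bodies in from another via `include`, so the recipes can be rewritten or optimized without ever touching the graph.

```makefile
# --- dag.mk --- the fixed structure: targets and dependencies only.
include recipes.mk            # hot-loadable recipe bodies live here

report: analyze
	$(RECIPE_report)

analyze: fetch
	$(RECIPE_analyze)

fetch:
	$(RECIPE_fetch)

# --- recipes.mk --- regenerated/optimized independently of dag.mk.
RECIPE_fetch   = @echo "fetching data (v2, cached)"
RECIPE_analyze = @echo "analyzing data"
RECIPE_report  = @echo "rendering report"
```

Swapping in a new recipes.mk changes how each step runs, while the target/dependency graph stays byte-identical, which is what "local optimization without changing the DAG" would mean in Makefile terms.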

              • SwellJoe 36 minutes ago

                I'm not sure using LLMs is good for your mental health.

                You should probably step away from the computer for a little while. LLMs are not always safe for everyone to use, for reasons I don't think are well understood; I don't know exactly why they only cause trouble for some people, but the way you're talking is concerning.

                I'm being sincere here, not trying to be dismissive.

                • flexagoon 1 hour ago

                  How is this relevant to the question?

              • How is the DAG enforced, if not by executing “make”? Then you’re just relying on the LLM to infer intent, which invalidates the claim, right?

            • forestcall 2 hours ago

              This is interesting. Do you have a robust skill built with this that I could check out? I have been working on a planning skill with sub-agents that do things like research with Tavily and Exa; it uses Claude CLI and Codex CLI to write separate plans and compare them, and uses a plan template with a micro-task layout with multiple phases, tests, etc.

              • teaonly 4 hours ago

                The core idea of this project is to use Makefiles to style SKILL documentation, leveraging Makefiles' built-in DAG semantics and well-defined syntax. The advantages are as follows:

                1. It reduces the token consumption of the original MD format;

                2. SKILLs are easier to read and better suited to AI use, because the inherent DAG effectively serves as a Plan Mode;

                3. Makefiles are ideal for auditing (git tracing, call statistics), providing a solid foundation for future self-evolving engineering.
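The auditing point (3) can be illustrated with plain git tooling (the repository, file names, and recipe text below are made up for the sketch): because the skill is one tracked file, ordinary `git log`/`git diff` show exactly which recipe changed and when.

```shell
# Set up a throwaway repo with a one-target skill Makefile.
dir=$(mktemp -d) && cd "$dir"
git init -q
printf 'fetch:\n\t@echo "fetch v1"\n' > skill.mk
git add skill.mk
git -c user.email=demo@example.com -c user.name=demo commit -qm "initial fetch recipe"

# "Self-evolve" the recipe and commit the change.
printf 'fetch:\n\t@echo "fetch v2 (optimized)"\n' > skill.mk
git -c user.email=demo@example.com -c user.name=demo commit -qam "optimize fetch recipe"

# The file's history is the audit trail for recipe evolution.
git log --oneline -- skill.mk
git diff HEAD~1 -- skill.mk
```

`git log` shows both commits against skill.mk and `git diff` pinpoints the recipe line that changed, which is the kind of traceability a prose SKILL.md edited in place doesn't give you as cleanly.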

                • ares623 1 hour ago

                  The recent obsession with token savings is pretty funny. If you extrapolate far enough, you end up back where we started: programming languages.

                  • thegagne 1 hour ago

                    Hah, I think about this all the time. I think we subtly desire LLMs to be more and more deterministic and efficient. This is why one of the main uses of LLMs is building tools to make their job easier.

                    I made my own project, with one of its goals being to cut token usage, but found that the real goal was just ensuring quality and making things more programmatic.

                    https://ktext.dev

                    It basically ends up being agents.md as a schema-driven YAML file. I'm thinking about extending it to also generate or replace skill.md.

                    I think the proliferation of markdown is cool, and lowers the barrier to entry, but it’s also very unpredictable and loose. I think over time we will drive these to be more like config files instead of free text.

                    • xandrius 1 hour ago

                      Yeah, I wonder what kind of work people do that they need more than a 500k or 1M context window.

                      Even when it's a big project, breaking it down doesn't change the output quality.

                      • nunodonato 1 hour ago

                        It's crazy; I have seen so many projects pop up just focusing on reducing token usage. At least caveman speak is funny!

                        Have to say that since we switched to our own model on a rented GPU, we stopped worrying about tokens and just use the hell out of our AI as much as we want :)

                        • nunodonato 1 hour ago

                          Remember TOON? It killed JSON.

                          /s just in case

                          • xandrius 1 hour ago

                            The thing which is 98% JSON and absolutely didn't kill JSON?
