4 comments

  • gavinray 2 hours ago

      > The shift for me was realizing test generation shouldn’t be a one-off step. Tests need to live alongside the codebase so they stay in sync and have more context.
    
    Does the actual test code generated by the agent get persisted to the project?

    If not, you have kicked the proverbial can down the road.

    • ashish004 2 hours ago

      Yes gavinray, it gets persisted to the project. It lives alongside the codebase, so any generated test has the best context for what is being shipped, which lets the AI models test any feature more accurately and consistently.

    • avikaa 3 hours ago

      This solves a massive headache. The drift between externally generated tests and an active codebase is a brutal problem to maintain.

      Using vision-based execution instead of brittle XPaths is a great baseline, but moving the test definitions to live directly alongside the repo context is definitely the real win here.

      Did you find that generating the YAML from the codebase context entirely eliminated the "stale test" issue, or do developers still need to manually tweak the generated YAML when mobile UI layouts change drastically? Great project!
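
      For readers wondering what a repo-resident, plain-language test definition might look like, here is a purely hypothetical sketch. The file name, keys, and step phrasing are my assumptions for illustration; the actual finalrun YAML schema may differ:

      ```yaml
      # tests/login.finalrun.yaml — hypothetical example, not the real schema
      name: user-can-log-in
      steps:
        - open the app
        - tap the "Sign in" button
        - type "demo@example.com" into the email field
        - type the test password into the password field
        - tap "Continue"
        - verify the home screen shows a welcome message
      ```

      Because steps are described in natural language and resolved visually at run time rather than via XPath selectors, small layout changes don't require rewriting selectors, only regenerating or tweaking the affected steps.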

      • ashish004 3 hours ago

        Hi Avikaa, finalrun provides skills that you can integrate with any IDE of your choice. You can just ask the finalrun-generate-test skill to update all the tests for your new feature.

  • sahilahuja 3 hours ago

    Agentic testing. Kudos to your decision to open-source it!

  • arnold_laishram 5 hours ago

    Looks pretty cool. How does your agent understand plain English?