You talk about "semantic tree memory", but what does the system actually use as storage? Just the text that is passed into and generated by the LLM? Doesn't that strategy quickly risk context-window overflow, as well as content loss due to the LLM's imperfect recall? Did you consider integrating an external storage layer via tool calls?
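To make the suggestion concrete, here is a minimal sketch of what such an external storage layer exposed via tool calls could look like (entirely hypothetical: `TreeMemoryStore`, `store_node`, and `retrieve_node` are illustrative names, not part of the system under discussion). The idea is that the LLM keeps only node ids and short summaries in its context and fetches full content on demand.

```python
import json
import uuid


class TreeMemoryStore:
    """Hypothetical external store for semantic tree nodes, keyed by id."""

    def __init__(self):
        # node_id -> {"summary", "content", "parent", "children"}
        self._nodes = {}

    def store_node(self, summary: str, content: str, parent_id: str | None = None) -> str:
        """Persist a node and return its id; only the id and summary need to stay in context."""
        node_id = str(uuid.uuid4())
        self._nodes[node_id] = {
            "summary": summary,
            "content": content,
            "parent": parent_id,
            "children": [],
        }
        if parent_id in self._nodes:
            self._nodes[parent_id]["children"].append(node_id)
        return node_id

    def retrieve_node(self, node_id: str) -> str:
        """Return the full node so the LLM can re-read it on demand instead of relying on recall."""
        node = self._nodes.get(node_id)
        return json.dumps(node) if node else "unknown node id"


# Tool descriptions in an OpenAI-style function-calling format (names and schema are illustrative).
TOOLS = [
    {"type": "function", "function": {
        "name": "store_node",
        "description": "Save a semantic tree node externally; returns its id.",
        "parameters": {"type": "object",
                       "properties": {"summary": {"type": "string"},
                                      "content": {"type": "string"},
                                      "parent_id": {"type": "string"}},
                       "required": ["summary", "content"]}}},
    {"type": "function", "function": {
        "name": "retrieve_node",
        "description": "Fetch the full content of a previously stored node by id.",
        "parameters": {"type": "object",
                       "properties": {"node_id": {"type": "string"}},
                       "required": ["node_id"]}}},
]
```

With something along these lines, the tree structure would live outside the context window, and the prompt would only ever carry summaries plus ids, which seems to sidestep both the overflow and the recall problem.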