Show HN: OpenSymbolicAI – Agents with typed variables, not just context stuffing



Hi HN,

We've spent the last year building AI agents and kept hitting the same wall: prompt engineering doesn't feel like software engineering. It feels like guessing.

We built OpenSymbolicAI to turn agent development into actual programming. It's an open-source (MIT) framework that lets you build agents from typed primitives, explicit decompositions, and unit tests.

THE MAIN PROBLEM: CONTEXT WINDOW ABUSE

Most agent frameworks (anything built on the ReAct pattern) force you to dump tool outputs back into the LLM's context window just to decide the next step.

The agent searches the DB.

It gets back 50 KB of JSON.

You paste that 50 KB back into the prompt just to ask, "What do I do next?"

This is slow and expensive, and it confuses the model.
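
For contrast, here's roughly what that loop looks like in a naive ReAct-style agent (a minimal sketch; llm and run_tool are stand-ins for your model call and tool dispatcher, not any real framework's API):

import json

def react_loop(llm, run_tool, task: str, max_steps: int = 10) -> str:
    """A caricature of the ReAct pattern: every tool result is pasted
    straight back into the prompt, however large it is."""
    history = ""
    for _ in range(max_steps):
        prompt = f"Task: {task}\nHistory:{history}\nWhat do I do next?"
        action = llm(prompt)           # the model re-reads everything, every step
        if action.startswith("FINAL:"):
            return action
        result = run_tool(action)      # e.g. 50 KB of JSON from the DB...
        history += f"\n{action}\n{json.dumps(result)}"  # ...stuffed straight back in
    return history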

THE SOLUTION: DATA AS VARIABLES

In OpenSymbolicAI, the LLM generates a plan (code) that manipulates variables. The heavy data itself (search results, PDF contents, API payloads) lives in Python runtime variables and never passes through the LLM's context until a specific primitive actually needs to read it.

Think of it as pass-by-reference for agents. The LLM manipulates variable handles (like docs in the example below), while the Python runtime holds the actual data.
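
Under the hood, the indirection is nothing exotic. Stripped of details, it amounts to something like this (a simplified sketch of the idea, not our actual internals):

class VariableStore:
    """Maps short symbolic handles to heavy payloads held in Python memory."""

    def __init__(self):
        self._values: dict[str, object] = {}

    def put(self, name: str, value: object) -> str:
        self._values[name] = value
        return name                   # the planner only ever sees this string

    def get(self, name: str) -> object:
        return self._values[name]     # dereferenced only when a primitive runs

store = VariableStore()
handle = store.put("docs", {"results": ["...50 KB of JSON..."]})
# The LLM plans with the short handle "docs"; only a consuming
# primitive ever calls store.get(handle) to touch the payload.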

EXAMPLE: A RAG AGENT

Instead of the LLM hallucinating a plan from a wall of text, it simply writes the logic that manipulates the data containers.

class ResearchAgent(PlanExecute):

  @primitive
  def retrieve_documents(self, query: str) -> list[Document]:
      """Fetches heavy documents from vector DB."""
      # Returns heavy objects that stay in Python memory
      return vector_store.search(query)

  @primitive
  def synthesize_answer(self, docs: list[Document]) -> str:
      """Consumes documents to generate an answer."""
      # This is the ONLY step that actually reads the document text
      context = "\n".join([d.text for d in docs])
      return llm.generate(context)

  @decomposition(intent="Research quantum computing")
  def _example_flow(self):
      # The LLM generates this execution plan.
      # Crucially: The LLM manages the 'docs' variable symbol,
      # but never sees the massive payload inside it during planning.
      docs = self.retrieve_documents("current state of quantum computing")
      return self.synthesize_answer(docs)

agent = ResearchAgent()
agent.run("Research the latest in solid state batteries")
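
And because primitives are plain typed methods, you can unit test them like any other code. A rough sketch (assuming @primitive methods stay directly callable, Document takes a text field as above, and llm.generate is patchable; adapt to your wiring):

from unittest.mock import patch

def test_synthesize_answer_reads_all_docs():
    agent = ResearchAgent()
    docs = [Document(text="qubits"), Document(text="error correction")]
    # Stub the model call so the test is deterministic and offline;
    # the stub just echoes back the context it was given.
    with patch.object(llm, "generate", side_effect=lambda ctx: ctx):
        answer = agent.synthesize_answer(docs)
    assert "qubits" in answer and "error correction" in answer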

DISCUSSION

We'd love to hear from the community about:

Where have you struggled with prompt engineering brittleness?

What would convince you to try treating prompts as code?

Are there other domains where this approach would shine?

What's missing to make this production-ready for your use case?

The code is intentionally simple Python: no magic, no framework lock-in. If the approach resonates, it's easy to adapt to your specific needs or to integrate with existing codebases.

Repos:

Core (Python): https://github.com/OpenSymbolicAI/core-py

Docs: https://www.opensymbolic.ai/

Blog (Technical deep dives): https://www.opensymbolic.ai/blog
