The Learning Loop Engine

Elixium shifts the focus from "shipping features" to "validating hypotheses." The Learning Loop is the core engine that powers this shift, integrating Agentic AI into your daily ceremonies.


Core Concepts

Hypothesis

Every story starts as a bet. Instead of just a "User Story," you define a Hypothesis: "If we build X, we expect outcome Y." This guides the AI and the team to focus on impact, not output.
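
To make the shape of a bet concrete, here is a minimal sketch of a story that carries a hypothesis. The field names below are illustrative assumptions, not Elixium's actual schema.

  // Illustrative sketch only: field names are assumptions, not Elixium's schema.
  interface HypothesisStory {
    title: string;
    hypothesis: string;       // "If we build X..."
    expectedOutcome: string;  // "...we expect outcome Y."
  }

  const checkoutBet: HypothesisStory = {
    title: "One-click checkout",
    hypothesis: "If we reduce checkout to a single step",
    expectedOutcome: "cart abandonment drops measurably within two weeks",
  };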

Risk Profile

Stories are tagged with a Risk Profile (User, Tech, Market, Compliance, or Model Drift). The Board visualizes this risk with colored borders, so you can "tackle the riskiest assumptions first."
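
As a rough sketch, the tags could be modeled as a simple union type. The values mirror the list above; the type itself is an assumption for illustration.

  // Illustrative sketch only: tag values come from the list above.
  type RiskProfile = "User" | "Tech" | "Market" | "Compliance" | "Model Drift";

  // e.g. a story betting on a new ML-backed feature might carry:
  const risks: RiskProfile[] = ["Tech", "Model Drift"];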

Ceremonies

Smart Standup

The Standup Mode toggle on your board filters out noise. It hides "Done" work and highlights blocked or high-risk items in the "Current Iteration" (see the sketch after this list).

  • Focuses the team on active risks.
  • Hides the backlog to reduce cognitive load.
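
A minimal sketch of the filtering rule, assuming hypothetical column, blocked, and highRisk fields on each card:

  // Illustrative sketch only: field names are assumptions, not Elixium's schema.
  interface BoardStory {
    title: string;
    column: "Icebox" | "Current Iteration" | "Done";
    blocked: boolean;
    highRisk: boolean;
  }

  // Standup Mode, roughly: keep only the Current Iteration and
  // float blocked or high-risk items to the top.
  function standupView(stories: BoardStory[]): BoardStory[] {
    const urgency = (s: BoardStory) => (s.blocked || s.highRisk ? 1 : 0);
    return stories
      .filter((s) => s.column === "Current Iteration")
      .sort((a, b) => urgency(b) - urgency(a));
  }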

Knowledge Search (Retro)

Don't lose your learnings. The Search Learnings bar lets you search past stories by their Outcome Summary.

"What did we learn about 'latency' last quarter?"

Agent Integration

The Elixium Agent (via MCP) participates in your loop in three ways (sketched below):

  • Context Awareness: It reads the "Current Iteration" to understand active work.
  • Hypothesis Generation: It can propose new experiments (Stories) directly into your Icebox.
  • Outcome Recording: It can help summarize what was learned when a story moves to Done.
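
As a sketch of the first two behaviors, using the /api/context and /api/stories endpoints named in the Security Note below. The HTTP methods, payload shape, and base URL are assumptions, not a documented contract.

  // Illustrative sketch only: /api/context and /api/stories are named in the
  // Security Note below; methods, payload shape, and base URL are assumptions.
  const ELIXIUM_URL = process.env.ELIXIUM_URL ?? "http://localhost:3000";

  // Context Awareness: read the Current Iteration before acting.
  async function readContext(): Promise<unknown> {
    const res = await fetch(`${ELIXIUM_URL}/api/context`);
    return res.json();
  }

  // Hypothesis Generation: propose a new experiment into the Icebox.
  async function proposeStory(hypothesis: string, expectedOutcome: string): Promise<void> {
    await fetch(`${ELIXIUM_URL}/api/stories`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ hypothesis, expectedOutcome, column: "Icebox" }),
    });
  }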

Security Note (Production)

To secure the Agent API endpoints in a production environment (e.g. GCP or AWS), you must set the following environment variable:

ELIXIUM_API_KEY=your-secure-random-key

Without this key, the Agent APIs (/api/stories, /api/context) will be disabled in production.
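
For illustration only, here is one way such a guard might look, assuming an Express-style middleware and a hypothetical x-elixium-api-key request header; neither detail is specified by Elixium.

  // Illustrative sketch only: the middleware shape and the x-elixium-api-key
  // header are assumptions; only the ELIXIUM_API_KEY variable comes from above.
  import type { Request, Response, NextFunction } from "express";

  export function agentApiGuard(req: Request, res: Response, next: NextFunction): void {
    const key = process.env.ELIXIUM_API_KEY;
    if (process.env.NODE_ENV === "production" && !key) {
      // No key configured: keep the Agent APIs disabled.
      res.status(503).json({ error: "Agent API disabled" });
      return;
    }
    if (key && req.header("x-elixium-api-key") !== key) {
      res.status(401).json({ error: "Invalid API key" });
      return;
    }
    next();
  }

Whatever the real mechanism, treat the key as a secret: use a long random value per environment and store it in your cloud provider's secret manager rather than in source control.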