This is the most important question to answer clearly, because most AI marketing conflates two very different things.
The LLM (language model) itself does not learn from your data. Whether self-hosted or cloud-based, the underlying model has fixed weights. It generates responses based on its pre-existing training — not on your catalog, your history, or your decisions.
What does grow and get smarter is your account-level context layer. SENTINEL maintains, for your account only:
- Knowledge Brain — A vector database of every document, SOP, contract, and product spec you've uploaded, indexed semantically so SENTINEL AI can retrieve the exact right context when answering your questions.
- Operational Principles — A growing set of rules and preferences SENTINEL has inferred from your decisions over time. Which recommendations you accepted. Which you overrode. Which pricing moves worked on which channels. This is stored as structured data in your account, not inside the LLM.
- Catalog & History Context — Your full SKU catalog, order history, P&L data, and channel performance are always available as context that gets injected into every AI query. The LLM doesn't "know" this — it reads it fresh each time.
- Decision Memory — When you make a judgment call (e.g., "don't reprice during Q4"), SENTINEL logs it. Future recommendations are shaped by this history.
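To make the separation concrete, here is a minimal sketch of what per-query context assembly like this could look like. All names (`AccountContext`, `build_prompt`) are illustrative assumptions, not SENTINEL's real API — the point is that the stored account context is re-read and injected into the prompt on every query, while the model itself never changes:

```python
from dataclasses import dataclass, field

@dataclass
class AccountContext:
    knowledge_docs: list    # snippets retrieved from the vector "Knowledge Brain"
    principles: list        # inferred Operational Principles
    catalog_summary: str    # SKU / P&L / channel snapshot
    decision_log: list = field(default_factory=list)  # logged judgment calls

def build_prompt(question: str, ctx: AccountContext) -> str:
    """Assemble a fresh prompt each time; the LLM reads context, it doesn't store it."""
    sections = [
        "## Relevant documents\n" + "\n".join(ctx.knowledge_docs),
        "## Operational principles\n" + "\n".join(ctx.principles),
        "## Catalog & history\n" + ctx.catalog_summary,
        "## Past decisions\n" + "\n".join(ctx.decision_log),
        "## Question\n" + question,
    ]
    return "\n\n".join(sections)

# Hypothetical account data for illustration only
ctx = AccountContext(
    knowledge_docs=["Q4 pricing SOP: hold prices Nov-Dec."],
    principles=["Prefer margin over volume on marketplace channels."],
    catalog_summary="1,240 SKUs; top channel: Amazon US.",
    decision_log=["2023-11-02: don't reprice during Q4."],
)
prompt = build_prompt("Should we cut prices on SKU-881 this week?", ctx)
```

Nothing here retrains the model: richer context means a longer, more specific prompt, not new weights.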
The analogy: The LLM is the engine. Your business context is the fuel. The engine doesn't change shape — it just runs on increasingly rich fuel that is yours and only yours.