Context Engineering: The Skill That Makes or Breaks AI in Your Business
You've probably heard the term "prompt engineering." Maybe you've even tried it — typing carefully worded instructions into ChatGPT and noticing that phrasing matters. That's real. But it's about 10% of what determines whether AI actually works inside a business.
The other 90% is something the AI industry has started calling context engineering. And it's the single biggest reason the same AI tool can produce brilliant results at one company and useless noise at another.
The Term Everyone's Talking About
In mid-2025, Andrej Karpathy — former head of AI at Tesla and one of the founding researchers at OpenAI — drew a clear line between the two concepts. Context engineering, he wrote, is the "delicate art and science of filling the context window with just the right information for the next step."1 Shopify CEO Tobi Lütke made the same observation, arguing that context engineering better captures what people actually do when they build AI systems that work.2
The distinction matters more than it might sound.
A prompt is an instruction. "Summarize this report." "Draft a response to this customer complaint." "Analyze these invoices for discrepancies." Prompt engineering is about making that instruction clear and precise.
Context engineering is everything else the AI sees when it processes that instruction — and it's what determines whether the output is actually useful.
The prompt is what you say to the AI: "Summarize this customer complaint and draft a response."

The context is everything else the AI sees: your return policy, the customer's order history, your brand voice guidelines, the escalation criteria, last quarter's complaint trends.
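The difference can be sketched in a few lines of Python. This is an illustrative mock-up, not a real API call: the message format mimics common chat-style APIs, and every policy string and the customer ID are invented for the example. The point is proportion: the prompt is one short line, while the context is the much larger payload assembled around it.

```python
# Hypothetical sketch: the prompt is one line; the context is everything
# assembled around it before the model ever sees that line.

PROMPT = "Summarize this customer complaint and draft a response."

def build_context(customer_id: str) -> str:
    """Gather business context (all sources here are illustrative stand-ins)."""
    return_policy = "Returns accepted within 30 days with receipt."          # policy docs
    order_history = f"Customer {customer_id}: 3 orders, last on 2025-05-12."  # CRM lookup
    voice_guide = "Tone: warm, direct, no jargon. Sign off as 'The Support Team'."
    escalation = "Escalate if order value > $5,000 or legal language appears."
    return "\n".join([return_policy, order_history, voice_guide, escalation])

def assemble_messages(customer_id: str) -> list[dict]:
    """The model receives context plus prompt together, never the prompt alone."""
    return [
        {"role": "system", "content": build_context(customer_id)},
        {"role": "user", "content": PROMPT},
    ]

messages = assemble_messages("C-1042")
```

Prompt engineering tunes the one-line `PROMPT`; context engineering owns everything `build_context` gathers, and keeps it current.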
Why the Same Tool Produces Different Results
Here's a scenario we see constantly. Two manufacturing companies license the same AI tool for the same use case — say, automating customer inquiry responses. Company A sees immediate productivity gains. Company B concludes the tool doesn't work.
The technology is identical. The difference is context.
Company A's implementation includes product specifications, shipping timelines, pricing tiers, common customer questions, escalation rules, and brand voice guidelines — all fed to the AI as context before it generates a single response. The AI knows what it's talking about because someone ensured it had the right information.
Company B typed "respond to customer emails professionally" and expected the AI to figure out the rest.
This isn't a technology failure. It's a context failure. And it explains the staggering gap between companies that see real ROI from AI and those that write it off as hype.
Context Is Not Static
Here's what makes this genuinely hard — and why it can't be a one-time setup.
Your business context changes. Constantly. You launch a new product line. A key supplier changes lead times. Your return policy shifts for Q4. You hire a new sales team that works differently than the old one. A major customer renegotiates terms.
Every one of these shifts affects the context your AI agents need to produce accurate outputs. And none of them trigger an automatic update. The AI doesn't know your supplier changed lead times from 6 weeks to 8 unless someone tells it. It will confidently quote the old number — and it'll look perfectly professional while doing it.
The failure mode of AI isn't dramatic. It doesn't crash. It doesn't throw errors. It just quietly becomes wrong — and keeps sounding confident while it does.
This is the drift problem, and it's the primary driver of the 95% pilot failure rate we've written about before. The initial context was right. The business changed. The context didn't. The AI's outputs degraded so gradually that nobody noticed until stakeholders had already lost trust.
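Drift is detectable if someone looks for it. A minimal sketch of that discipline, with invented facts and dates: compare when each context entry was last reviewed against when its underlying source last changed, and flag the ones that have silently gone stale.

```python
from datetime import date

# Illustrative sketch: flag context entries whose source changed after the
# entry was last reviewed -- the quiet drift described above. All facts
# and dates are hypothetical.

context_entries = [
    {"fact": "Supplier lead time: 6 weeks", "last_reviewed": date(2025, 3, 1)},
    {"fact": "Return window: 30 days",      "last_reviewed": date(2025, 9, 20)},
]
source_changes = {
    # The supplier moved to 8 weeks in August; the context was never updated.
    "Supplier lead time: 6 weeks": date(2025, 8, 15),
}

def stale(entries: list[dict], changes: dict) -> list[str]:
    """Return facts whose source changed more recently than their last review."""
    return [e["fact"] for e in entries
            if changes.get(e["fact"], date.min) > e["last_reviewed"]]

flagged = stale(context_entries, source_changes)
# flagged == ["Supplier lead time: 6 weeks"]
```

The hard part isn't the check; it's that populating `source_changes` requires a person who notices the business changed.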
What Good Context Engineering Looks Like
In practice, maintaining context for a business AI deployment involves several ongoing disciplines. None of them are technically exotic. All of them require someone who understands the business well enough to know what's changed and what the AI needs to know about it.
There's the business knowledge layer — product catalogs, pricing, policies, org structure, workflow documentation. This is the foundation. Most implementations get it partially right at launch and never update it.
There's the operational layer — current lead times, active promotions, seasonal adjustments, staffing changes. This changes weekly or monthly and requires regular attention.
There's the quality feedback layer — reviewing AI outputs, spotting patterns in errors, identifying where the context is stale or incomplete. This is the most valuable and most frequently neglected discipline.
And there's the expansion layer — recognizing when a workflow that wasn't originally automated should be, or when an AI agent's scope should grow because the context has matured enough to support it.
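Because each layer changes at a different speed, the maintenance work can be framed as a review cadence per layer. A sketch of that idea, where the layer names mirror the taxonomy above but every cadence and date is hypothetical:

```python
from datetime import date, timedelta

# Illustrative sketch: each context layer gets a review cadence matching how
# fast it changes; flag the layers whose cadence has lapsed.

LAYERS = {
    "business_knowledge": {"refresh_days": 90, "last_updated": date(2025, 6, 1)},
    "operational":        {"refresh_days": 7,  "last_updated": date(2025, 10, 1)},
    "quality_feedback":   {"refresh_days": 14, "last_updated": date(2025, 10, 20)},
}

def layers_due(layers: dict, today: date) -> list[str]:
    """Return the names of layers overdue for human review."""
    return [name for name, meta in layers.items()
            if today - meta["last_updated"] > timedelta(days=meta["refresh_days"])]

due = layers_due(LAYERS, today=date(2025, 10, 25))
# business_knowledge: 146 days since review, cadence 90  -> due
# operational:         24 days since review, cadence 7   -> due
# quality_feedback:     5 days since review, cadence 14  -> current
```

The schedule is trivial to automate; deciding what goes into each layer when it's reviewed is the part that needs a human who knows the business.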
MIT Technology Review observed in late 2025 that context engineering represents a fundamental shift in AI development — moving from crafting individual instructions to designing entire information environments.3 That's the right framing. And it's why the work requires sustained human attention, not a one-time configuration.
Who Does This Work?
In most companies today, nobody does. That's the honest answer.
The AI gets deployed, the initial context gets configured, and then the consultants leave and nobody owns the ongoing work of keeping things current. It's not an IT function — IT doesn't know enough about business operations to maintain business context. It's not an operations function — operations doesn't know enough about how AI systems process context to do it well. The work sits in a gap between the two, and that gap is exactly where most AI investments go to die.
This is the job we built the Fractional AI Agent Manager role to fill. Not because the concept is complicated, but because the work is ongoing and it requires someone who lives in both worlds — deep familiarity with the client's business and deep familiarity with how AI systems consume and use context.
The companies that figure out who owns this work will compound their AI investments over time. The ones that don't will keep resetting to zero.
1 Andrej Karpathy. Post on X, June 25, 2025. Endorsed the term "context engineering" over "prompt engineering," describing it as the "delicate art and science of filling the context window with just the right information for the next step."
2 Tobi Lütke, CEO of Shopify. Post on X, June 18, 2025. Advocated for "context engineering" as a more accurate term, writing: "It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM."
3 MIT Technology Review. "From Vibe Coding to Context Engineering: 2025 in Software Development." November 2025. Analysis of the industry-wide shift from individual prompt-based approaches to systematic context management in production AI applications.
Want to understand what context engineering looks like for your specific workflows? That's exactly what a Discovery Day is designed to uncover.