The explosion of generative AI (GenAI) in 2023 ushered in a new era of optimism. Startups, corporates, and even casual users imagined a world where AI would deliver instant content, code, or insights. Behind the curtain, leaders bet heavily on plug-and-play solutions, assuming that smart prompts or connections to internal data would yield business-transforming results. But as the initial exuberance fades and we approach 2025, it’s clear: GenAI isn’t a magic box. Real breakthroughs require more than clever queries or raw information – they demand real context.
The cracks appear: Why prompt engineering and RAG aren’t enough
Early on, “prompt engineering” became the must-have skill: the ability to coax the best answers from large language models (LLMs) through detailed instructions. While this unlocked incremental gains, teams soon ran into severe limits. No matter how crafty the prompt, AI can’t produce useful outcomes if it lacks the crucial details, nuances, and lived experience that drive business value.
Retrieval-Augmented Generation (RAG) promised to bridge that gap by automatically searching knowledge bases and feeding new data to the model. But like a Google search gone wrong, RAG is only as good as what is stored and surfaced. Too often it returns irrelevant, outdated, or generic information, hidden behind the scenes with little transparency or control for the user.
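To make that failure mode concrete, here is a minimal sketch of a RAG loop, using a toy in-memory document store and naive keyword scoring rather than any particular vendor's API. Whatever the retriever surfaces, accurate or not, is exactly what the model gets to reason over.

```python
# Minimal illustration of a RAG loop: the model only ever "knows" what the
# retriever surfaces, so stale or irrelevant documents flow straight into
# the prompt. The document store and scoring below are deliberately naive.

KNOWLEDGE_BASE = {
    "pricing_2022.md": "Our enterprise plan costs $49 per seat (last updated 2022).",
    "onboarding.md": "New customers are onboarded by the success team within 5 days.",
    "roadmap_q3.md": "Q3 priorities: mobile app redesign and SSO support.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Score documents by crude keyword overlap and return the top-k texts."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(text.lower().split())), text)
        for text in KNOWLEDGE_BASE.values()
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for score, text in scored[:k] if score > 0]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt: retrieved snippets plus the question."""
    context = "\n".join(retrieve(query)) or "(no relevant documents found)"
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."

if __name__ == "__main__":
    # Whatever model is called with this prompt will confidently repeat an
    # outdated 2022 price: garbage in, garbage out.
    print(build_prompt("What does the enterprise plan cost?"))
```

In a production system the store would be a vector database and the scoring an embedding similarity, but the dynamic is the same: the quality of what goes into that store, and how it is kept current, determines everything downstream.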
The core shared weakness? Both approaches overlook an uncomfortable truth: “Garbage in, garbage out.” Without high-quality, curated context, even the most sophisticated AI flounders.
The missing ingredient: Context is king
What really drives effective AI in business? Context—the kind typically held in the heads of experienced team members, buried in Slack threads, or never documented at all. Most of the knowledge that makes an organisation run smoothly isn’t found in public wikis or static databases. The subtleties of customer relationships, product quirks, or fast-changing priorities are invisible to machines unless teams explicitly surface and structure them.
Think of deploying an LLM as onboarding a brilliant but clueless intern—you wouldn’t expect success without proper orientation, mentorship, and access to living organisational knowledge. Yet, that’s been the default approach for most GenAI deployments so far.
Beyond knowledge management: Context engineering defined
Traditional “knowledge management” conjures images of dense, unused wikis and bureaucratic processes—anathema to fast-moving startups. But the AI era demands a new discipline, one that captures the critical, evolving context of real work and makes it accessible to machines.
Context engineering is this practical evolution. Formally, it’s the intentional design, curation, and delivery of high-quality, relevant, and task-specific information—structured so that AI systems can reason more effectively and produce accurate, business-aligned outputs. As researchers at Anthropic define it, context engineering is “the set of strategies for curating and maintaining the optimal set of tokens (information) during LLM inference”.
This discipline goes beyond just writing documents or constructing prompt templates. It means identifying what knowledge matters most for a given use case, continuously surfacing and structuring it, and ensuring AI has access to the exact context needed—no more, no less—at just the right moment. It is, as some have described it, “the delicate art and science of filling the context window with just the right information for the next step”.
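As a rough illustration of that selection step, the sketch below assumes the surrounding application already has candidate snippets with relevance scores and a token budget (both hypothetical); it simply ranks the snippets and packs only what fits into the window.

```python
# A rough sketch of the curation step, assuming a hypothetical token budget
# and relevance scores supplied by the surrounding application: rank the
# candidate snippets, then pack only what fits into the context window.

from dataclasses import dataclass

@dataclass
class ContextItem:
    source: str       # where the snippet came from (doc, ticket, Slack thread)
    text: str         # the snippet itself
    relevance: float  # 0..1, however the application chooses to score it

def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly four characters per token for English text."""
    return max(1, len(text) // 4)

def curate_context(items: list[ContextItem], budget_tokens: int) -> str:
    """Greedily pack the most relevant snippets into the token budget."""
    selected: list[str] = []
    used = 0
    for item in sorted(items, key=lambda i: i.relevance, reverse=True):
        cost = estimate_tokens(item.text)
        if used + cost > budget_tokens:
            continue  # skip anything that would overflow the window
        selected.append(f"[{item.source}] {item.text}")
        used += cost
    return "\n".join(selected)

if __name__ == "__main__":
    candidates = [
        ContextItem("crm", "Acme Corp renewal is blocked on a security review.", 0.9),
        ContextItem("wiki", "General company history and founding story...", 0.2),
        ContextItem("slack", "Acme's CTO prefers async updates, never calls.", 0.8),
    ]
    print(curate_context(candidates, budget_tokens=25))
```

The packing itself is trivial; the hard, ongoing work is producing those relevance scores and keeping the underlying snippets current, which is precisely what context engineering treats as a first-class discipline.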
Context engineering in practice: Real-world impact
Forward-thinking teams are already realising outsized returns by putting context engineering into practice.
At Etsy, for example, a team built an AI-assisted onboarding system to answer new employees' questions about the company's Travel & Entertainment (T&E) policies. Rather than fine-tuning a large language model, they applied context engineering: the T&E policy documents were embedded and served to the model through retrieval-augmented generation, which let the system answer questions with high accuracy. Even so, some prompts still produced hallucinations, so the team layered on further context engineering techniques, asking the model to cite its sources and using chain-of-thought prompting to make the answers more reliable.
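A simplified sketch of those two mitigations might look like the following; the policy snippets, reference IDs, and call_llm() placeholder are illustrative assumptions, not Etsy's actual implementation.

```python
# A simplified sketch of the two mitigations described above: require the
# model to cite its sources and to reason step by step, then verify that
# every citation actually exists in the supplied context. The policy
# snippets, reference IDs, and call_llm() are hypothetical placeholders.

import re

POLICY_SNIPPETS = {
    "T&E-4.2": "Economy class is required for flights under six hours.",
    "T&E-7.1": "Meals are reimbursable up to $75 per day with itemised receipts.",
}

def build_grounded_prompt(question: str) -> str:
    """Prompt that demands step-by-step reasoning and per-claim citations."""
    context = "\n".join(f"[{ref}] {text}" for ref, text in POLICY_SNIPPETS.items())
    return (
        "You are answering questions about the company's T&E policy.\n"
        f"Policy excerpts:\n{context}\n\n"
        "Think through the relevant rules step by step, then answer.\n"
        "Cite the policy reference (e.g. [T&E-4.2]) for every claim.\n"
        "If the excerpts do not cover the question, say so instead of guessing.\n\n"
        f"Question: {question}"
    )

def citations_are_grounded(answer: str) -> bool:
    """Cheap post-check: every bracketed citation must match a real snippet."""
    cited = set(re.findall(r"\[(T&E-[\d.]+)\]", answer))
    return bool(cited) and cited.issubset(POLICY_SNIPPETS)

if __name__ == "__main__":
    prompt = build_grounded_prompt("Can I fly business class on a 4-hour flight?")
    print(prompt)
    # answer = call_llm(prompt)              # hypothetical model call
    # assert citations_are_grounded(answer)  # reject ungrounded answers
```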
Another example comes from teams adopting visual and collaborative approaches to AI workflow design. Instead of relying on linear chat interfaces, these teams use shared canvases to curate the right information for their models — organising prompts, data, and prior project insights into a single, navigable workspace. By visually mapping their thinking and structuring context before engaging AI, teams reduce redundancy, improve accuracy, and strengthen alignment. This shifts AI interactions from isolated exchanges to a shared process of reasoning, where human context and machine intelligence evolve together.
Why context engineering matters—for startups and enterprises
For startups where speed is survival, context engineering offers a lightweight, flexible alternative to traditional knowledge management. It’s about curating what matters now and feeding it to AI at every touchpoint—no need for encyclopedic documentation. For corporates, context engineering becomes the glue across silos, ensuring AI-powered systems understand the nuance of each department or project.
Ultimately, context engineering bridges the gap between AI’s raw potential and practical productivity. It’s a competitive differentiator, not just in AI adoption but in how teams capture, evolve, and share the information that defines their business edge.
The road ahead: Don’t chase the magic box—engineer your context
The GenAI wave has proven that powerful models alone aren’t enough. As access to state-of-the-art LLMs becomes commoditised, the real differentiator is context—the knowledge flows, priorities, and tacit insights that drive your business each day.
Winning teams will stop chasing the illusion of one-click AI answers. Instead, they’ll invest in context engineering—structuring and maintaining the information environment their AI needs to succeed.
In the age of ubiquitous AI, context is king. Don’t just prompt your way to the future. Engineer it.

Andrey Leskov is Co-Founder & CEO of illumi.
TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.
Featured image: Growtika on Unsplash

