

16 September, 2025
The idea that Large Language Models can be plugged into systems as reasoning engines opened the door to far more autonomous digital systems, operating like independent agents. It is a powerful shift that will undoubtedly redefine the very fabric of work - but it will take time, and you need to make innovation decisions today. Where is this technology now, and what does the immediate future look like?
Over the past year, through both research and the hands-on implementation of multiple Agentic systems, we have developed a clearer view of where things are headed over the next 365 days. Since we need to build systems today that create impact soon, we are focused on extracting the real potential of LLM-based technologies while staying immune to the hype.
In the midst of such a sweeping sociotechnical shift, predicting the long-term future is impossible. But as a business leader, you still need to understand where the vectors are pointed and what the likely reality of the coming months will look like. These four well-grounded trends are here to help you do exactly that.
Here is our current take on Agentic AI. Ambitious, but grounded in viability.
1. The main direction: domain-specific, specialised Vertical Agents
While many companies are still figuring out how to give employees secure access to personal AI productivity tools that can handle confidential data (CoPilot, ChatGPT, Gemini etc.), most have also realised that real transformation requires a systematic rethinking of how work is done across entire business functions. This means creating Agentic AI Systems that support a specific business function, role, or workflow.
The goal is to maximise utility and reliability in a narrow domain, using Large Language Models as AI brains at multiple points. These systems can lean towards more autonomous AI agents, towards more tightly structured agentic workflows, or any combination of the two.
Most companies will likely adopt a mixed build-and-buy strategy. They will buy off-the-shelf systems for generic functions where limited flexibility is acceptable. However, they will build their own systems for processes that are core to their competitive advantage, are unique to the company, or change frequently.
2. Companies will strike a balance between traditional software and probabilistic AI
The idea of autonomous AI Agents that perform tasks with the independence and decision-making power of a human teammate is compelling. Independent thinking is exactly what makes humans such flexible problem solvers. However, there are still very few situations where companies are ready to grant this level of autonomy to AI. Doing so will require technical solutions that enable both control and independence, and it will also need changes in governance, legal frameworks, and company culture.
Most leaders are still operating within the old paradigm where software is fully deterministic; unpredictability is something tolerated only from human employees. As a result, most near-term Agentic AI systems will be hybrids of traditional software and GenAI components. Companies will continue to use conventional business logic where it works well, and plug in LLM Brains only where they create clear cost or capability advantages.
For many use cases, this hybrid structure is ideal, not a half-step compromise. It captures the benefits of LLMs while avoiding much of their unpredictability. Adding Agentic elements to existing systems opens up thousands of business use cases that were previously too costly or entirely unfeasible to tackle. The “LLM brain” components make systems capable of processing messy human inputs, following loosely defined procedures that traditional systems could never handle, and communicating in a natural, human-like way, redefining the employee or customer experience.
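To make the hybrid pattern concrete, here is a minimal sketch in Python. Everything in it is illustrative rather than a reference implementation: the `call_llm` placeholder stands in for whichever model provider you use, and the ticket categories and triage rules are invented for the example. The point is the shape, with conventional code validating and routing, and the model invoked only for the messy, unstructured part.

```python
import json
from dataclasses import dataclass

@dataclass
class Ticket:
    customer_id: str
    category: str   # one of "billing", "delivery", "other"
    summary: str

def call_llm(prompt: str) -> str:
    """Placeholder: wire this to whichever model provider you use.
    It should return a JSON string with the keys requested in the prompt."""
    raise NotImplementedError

def triage_email(customer_id: str, email_body: str) -> Ticket:
    # Deterministic guard: conventional validation stays in plain code.
    if not email_body.strip():
        return Ticket(customer_id, "other", "empty message")

    # LLM brain: only the messy, unstructured text is delegated to the model.
    prompt = (
        "Classify this customer email as billing, delivery or other, and "
        "summarise it in one sentence. Reply as JSON with keys "
        '"category" and "summary".\n\n' + email_body
    )
    data = json.loads(call_llm(prompt))

    # Deterministic guard again: anything outside the allowed set is downgraded.
    category = data.get("category", "other")
    if category not in {"billing", "delivery", "other"}:
        category = "other"
    return Ticket(customer_id, category, data.get("summary", ""))
```

The deterministic checks before and after the model call are what keep the unpredictability contained within a well-defined boundary.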
3. Evaluation know-how will be crucial
Companies that want to move beyond experimentation and deploy real systems need to build specialised Agentic AI solutions that work reliably within their defined domains. This often means giving the AI Brain a narrow decision space, setting strict procedures, and designing well-defined choice architectures, supported by extra layers of control.
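As a hypothetical illustration, a narrow decision space can be as simple as a closed set of allowed actions plus a validation layer wrapped around the model's choice. The action names, the refund scenario, and the threshold below are placeholders; a real system would add logging, auditing, and richer policies on top.

```python
from enum import Enum

# The AI brain may only pick from this closed set of actions.
class Action(Enum):
    APPROVE_REFUND = "approve_refund"
    REQUEST_MORE_INFO = "request_more_info"
    ESCALATE_TO_HUMAN = "escalate_to_human"

MAX_AUTO_REFUND_EUR = 50.0  # illustrative policy threshold, not a real figure

def validate_decision(raw_choice: str, refund_amount_eur: float) -> Action:
    """Extra control layer: anything unexpected falls back to a human."""
    try:
        action = Action(raw_choice.strip().lower())
    except ValueError:
        # The model answered outside the defined choice architecture.
        return Action.ESCALATE_TO_HUMAN

    # Strict procedure: high-value refunds are never decided autonomously.
    if action is Action.APPROVE_REFUND and refund_amount_eur > MAX_AUTO_REFUND_EUR:
        return Action.ESCALATE_TO_HUMAN
    return action
```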
Still, some level of probabilistic behaviour must remain. If we remove that entirely, we lose the simulated general intelligence that makes this technology valuable in the first place. Because of this, evaluating the quality and reliability of outputs is very different from classical software testing. In traditional systems, we could aim for 100 percent test coverage by enumerating all possible scenarios. That is not feasible here. The possible permutations are endless.
Despite this, meaningful evaluation is possible. By combining data science, human feedback, and synthetic data generation, companies can build robust frameworks that produce actionable metrics. Without this kind of evaluation in place, product development remains guesswork. There is no way to verify whether a change, such as introducing a new model, actually improves output quality. Leadership will also lack the confidence needed to integrate these systems into everyday operations.
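One simplified sketch of what such a framework might look like at its core: a fixed case set that mixes human-labelled and synthetic examples, run against the agentic component after every change so that the resulting metrics are directly comparable. The structure below assumes a simple categorical accuracy metric; real evaluations usually add rubric-based or judge-model scoring for free-text outputs.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EvalCase:
    input_text: str
    expected_category: str
    source: str  # "human_labelled" or "synthetic"

def evaluate(classify: Callable[[str], str], cases: List[EvalCase]) -> Dict[str, dict]:
    """Run the agentic component over a fixed case set and report accuracy
    per data source, so two model or prompt versions can be compared."""
    totals: Dict[str, List[int]] = {}  # source -> [correct, total]
    for case in cases:
        correct, total = totals.setdefault(case.source, [0, 0])
        total += 1
        if classify(case.input_text) == case.expected_category:
            correct += 1
        totals[case.source] = [correct, total]
    return {
        source: {"accuracy": correct / total, "cases": total}
        for source, (correct, total) in totals.items()
    }
```

Running the same case set before and after swapping in a new model or prompt yields a comparable number, which is exactly what is missing when product development is guesswork.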
4. Augmenting humans is the path to truly impactful systems
In simple or low-stakes scenarios, full automation with Agentic AI can work well. For example, a content recommendation engine does not need to be right every time. In more complex or high-stakes workflows, however, augmenting human workers is the more effective approach. This is the paradigm most likely to move beyond pilots and into real deployment.
In these systems, humans and machines collaborate. The aim is higher productivity, better output quality, and ideally, greater job satisfaction. The system helps the human do their job better by removing repetitive tasks, surfacing useful information at the right moment, and enhancing their capabilities.
This collaboration can take different forms. In some cases, it is a strategic division of labour, where the machine handles data analysis and monitoring at scale, while the human makes decisions. In others, it is a more integrated process, such as co-authoring a medical report.
This approach will remain dominant unless company culture shifts significantly. Managers would need to accept some unpredictability in exchange for higher ROI. Legal and regulatory frameworks around issues like liability would also need to adapt. Until then, human-AI collaboration is the model that works.
It is also the more responsible one. Companies and societies are better off using Agentic AI to expand human potential rather than to replace it. This is exactly why we are focused on augmentation at Supercharge.