The journey to Agentic AI impact in 2025

This is a guest blogpost by Paul O’Sullivan, SVP Solution Engineering and Salesforce UKI CTO.

Artificial Intelligence has undergone an extraordinary evolution — from predicting trends and generating content to powering hyper-automation — culminating now in the rise of Agentic AI, a revolutionary leap toward autonomous decision-making and action.

Reflecting on the AI journey so far, we’ve experienced its predictive wave, where businesses leveraged data to forecast trends and inform decisions. Then came the generative wave, popularised by tools like ChatGPT, creating content and engaging users in entirely new ways. Sandwiched between these cycles was hyper-automation, where enterprises sought efficiency by automating repetitive processes, though often at significant cost and effort.

Agents represent the next breakthrough. Agentic AI isn’t just about predicting outcomes or generating content; it’s about acting with reason. These AI agents form a digital workforce, combining predictive insights and generative capabilities to make decisions and execute tasks autonomously across multiple systems. Think of them as digital co-workers who never clock out, capable of handling everything from managing customer service tickets to optimising supply chains.

However, every AI innovation brings challenges. Missteps like poorly thought-out DIY implementations, data misuse, or lack of solid governance can lead to disastrous outcomes. The stakes are higher this time, and so are the potential rewards.

Key challenges for agentic AI implementation

  1. Avoiding the DIY trap

A do-it-yourself (DIY) approach to developing AI agents has the potential to become a financial black hole. Training an in-house LLM isn’t just expensive; it’s impractical for most enterprises. Without the necessary infrastructure, engineering support, and continuous tuning, the system not only underperforms but also poses substantial risks to data privacy and security. Leveraging existing enterprise-grade AI platforms that are built for scale, security, and seamless integration allows businesses to deploy autonomous agents without reinventing the wheel.

  2. Integrating disparate data sources

Many enterprises suffer from fragmented data environments. Customer interactions might be stored in Slack, order data in PDFs, and strategy documents in scattered Google Docs. Without unifying this data, enterprises risk incomplete insights and disjointed customer experiences. Adopting a unified data integration platform empowers AI agents to understand customer journeys end to end, generating valuable contextual intelligence.
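
As a purely illustrative sketch of what unification means in practice (the source systems, field names, and `unify` helper here are hypothetical, not any specific platform’s API), partial records from several systems can be merged into one profile keyed on a shared customer identifier:

```python
from collections import defaultdict

# Hypothetical partial views of the same customer held in different systems.
slack_interactions = [{"customer_id": "C001", "last_message": "Is my order delayed?"}]
order_records = [{"customer_id": "C001", "open_orders": 2}]
strategy_notes = [{"customer_id": "C001", "segment": "enterprise"}]

def unify(*sources):
    """Merge partial records from multiple systems into one profile per customer."""
    profiles = defaultdict(dict)
    for source in sources:
        for record in source:
            profiles[record["customer_id"]].update(record)
    return dict(profiles)

profiles = unify(slack_interactions, order_records, strategy_notes)
print(profiles["C001"])
# One profile now combines messaging, order, and segment context,
# giving an agent the full customer picture rather than three fragments.
```

A real integration layer would also handle identity resolution, conflicting values, and freshness, but the principle is the same: one profile, many sources.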

  3. Navigating regulatory landmines

The rapid adoption of AI has far outpaced regulatory frameworks. When data is absorbed into an LLM’s training set, questions about ownership, compliance, and governance emerge, making auditing and regulatory adherence an ongoing challenge for businesses. To operate ethically and ensure trust, businesses must prioritise governance frameworks; this infrastructure helps organisations govern AI usage, secure sensitive data, and align with global data privacy regulations.

  4. Taming unstructured data

From support tickets to social media interactions, a significant portion of enterprise data is trapped in unstructured formats, making it inaccessible to AI models designed to extract actionable insights. Integrating platforms that unlock unstructured data enables AI systems to analyse emails, PDFs, and even audio files, improving the accuracy and reliability of AI-driven insights and ultimately enhancing customer interactions and decision-making.
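
As a minimal, hypothetical sketch of what “unlocking” unstructured data can look like, the snippet below pulls structured fields out of a free-text support email using simple pattern matching; a production pipeline would use NLP or an extraction model, and the ticket text and field names are invented for illustration:

```python
import re

# Hypothetical raw support ticket: free text with structure buried inside.
ticket = """From: jane@example.com
Subject: Refund request for order #48213
My espresso machine arrived damaged. Please refund order #48213.
"""

def extract_fields(text):
    """Extract structured fields from an unstructured support email."""
    email = re.search(r"From:\s*(\S+@\S+)", text)
    order = re.search(r"order\s*#(\d+)", text, re.IGNORECASE)
    return {
        "email": email.group(1) if email else None,
        "order_id": order.group(1) if order else None,
        "intent": "refund" if "refund" in text.lower() else "other",
    }

fields = extract_fields(ticket)
print(fields)  # {'email': 'jane@example.com', 'order_id': '48213', 'intent': 'refund'}
```

Once the ticket is reduced to structured fields, an agent can route, prioritise, or act on it the same way it would act on database records.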

  5. Building a flexible, integrated AI ecosystem

AI agents can’t excel in isolation. Without connectivity across data, business logic, automation, and workflows, enterprises fail to deliver AI at scale. Deploying a comprehensive, flexible AI ecosystem ensures that AI agents interact seamlessly with human employees, business processes, and customer data, enabling scalability and accuracy.

  6. Beware of hyper-automation disguised as agentic AI

Beware of technology marketed as “agentic AI” when it’s merely automation with no true reasoning or autonomy. Many solutions labelled as AI still rely on predefined workflows and lack the ability to independently assess, adapt, or act without human input. True agentic AI requires contextual understanding and decision-making capabilities. Organisations should critically evaluate claims, ensuring they invest in innovation that enhances strategic outcomes rather than just repackaging automation as intelligence.
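
One way to see the distinction in miniature (purely illustrative; the queue names and the `choose_action` policy are hypothetical stand-ins, where a real agent would use an LLM or planner): a scripted workflow follows a fixed mapping, while an agentic step assesses context before choosing an action:

```python
def scripted_workflow(ticket_type):
    """Hyper-automation: a predefined route table, no assessment of context."""
    routes = {"billing": "finance_queue", "outage": "oncall_queue"}
    return routes.get(ticket_type, "general_queue")

def choose_action(context):
    """Stand-in for agent reasoning (in practice, an LLM call or planner)."""
    if context["sentiment"] == "angry" and context["customer_tier"] == "enterprise":
        return "escalate_to_human"
    if context["confidence"] >= 0.8:
        return "resolve_autonomously"
    return "ask_clarifying_question"

def agent_step(context):
    """Agentic step: assess the situation first, then pick an action."""
    return choose_action(context)

print(scripted_workflow("billing"))  # always finance_queue, regardless of context
print(agent_step({"sentiment": "angry",
                  "customer_tier": "enterprise",
                  "confidence": 0.9}))  # escalate_to_human
```

The litmus test: if the system’s behaviour is fully determined by a branch table written in advance, it is automation; if it weighs context and can choose differently in novel situations, it is closer to agency.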

Agents in 2025 and beyond

The future will see agents becoming an indispensable part of the enterprise toolkit. They’ll interact with customers, learn from mistakes, and build trust—just like human employees. That said, malicious agents and governance challenges loom large. Robust data policies and ongoing oversight will be essential to managing the ethical and practical implications of autonomous AI systems.

As AI continues to evolve, so too will workflows and task execution. Within the next decade, tasks that today seem familiar will be replaced by entirely new methodologies driven by automation and AI. The adoption of Agentic AI will also prompt a seismic shift in education systems. Workers across generations need to understand the risks, rewards, and ethics of AI. This calls for a collaborative effort between industries and academia to prepare tomorrow’s workforce.

The enterprises that rise above the rest will be those that partner within ecosystems designed to deliver agentic AI grounded in trust, reliability, and scale.