How organisations can secure AI agents

Dan Karpati, Check Point Software’s vice-president of AI, discusses the unique challenges and potential ways to secure AI agents

Artificial intelligence (AI) agents have the potential to boost business productivity, but their autonomous nature and reasoning capabilities are expected to introduce new security risks.

At Check Point’s CPX 2025 conference in Bangkok this week, Dan Karpati, the company’s vice-president of AI, outlined what organisations can do to secure their AI agent workforce – starting with the need to understand the key difference between AI agents and traditional AI algorithms.

“AI agents are designated with a goal, but as opposed to an algorithm, they have the ability to decide and reason,” he told Computer Weekly, adding that this autonomy makes it challenging to define the boundaries of an agent’s actions. “If you give an AI agent the ability to solve tickets, or to screen resumes and then act upon them, it’s very hard to decide where the border lines are in terms of what it can do.”

This difficulty is compounded by the way AI agents interact with their environments – they can access and process data from various sources, including databases and web pages, creating opportunities for threat actors to plant malicious content in those sources to manipulate the behaviour of AI agents.

Karpati predicted that the proliferation of AI agents will also lead to new identity challenges. “As we go along, we will see more enterprises having flipped agents,” he said, referring to agents compromised by malicious actors.

These rogue agents could blend seamlessly into an organisation’s AI workforce, making detection difficult. The problem extends beyond individual agents to the orchestration platforms that manage them. Securing these central control points will be critical to preventing widespread compromise, he added.

To address these challenges, Karpati stressed the importance of AI-driven security capabilities to counter the non-deterministic nature of agent behaviour. “You need AI to understand … if [an agent] is still related to the job or not,” he said.

Karpati also advocated for industry standards to govern the development and authentication of AI agents. “There should be some commonality between the AI agents,” he suggested, referring to standardised interfaces and communication protocols. Emerging agent frameworks from Amazon Web Services (AWS) and Microsoft are a step in the right direction, added Karpati.

Managing the lifecycle of AI agents also presents unique challenges. Unlike traditional software, agents can modify their own code, leading to a more dynamic and unpredictable lifecycle. “It’s a different lifecycle, and they will be maintained in a more agile way,” he said.

Check Point’s own approach to AI agent development involves a hybrid architecture that leverages both cloud-based and on-premise models. The company uses function calling, which allows AI models to select and execute predefined functions based on natural language instructions. This ensures sensitive customer data remains in Check Point’s control.

“The only interface to the large language model is a prompt and all the data is still on-premise,” said Karpati.
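The function-calling pattern Karpati describes can be sketched in a few lines. In the stub below, the cloud-hosted model sees only the prompt text and replies with the name of a predefined function plus its arguments; the function itself runs locally against on-premise data. All function and variable names here are hypothetical illustrations, not Check Point's actual implementation, and the "model" is a stand-in for a real LLM call.

```python
import json

# Predefined functions that run on-premise; sensitive data never leaves.
def lookup_customer_tickets(customer_id: str) -> list:
    local_db = {"c-42": ["ticket-1001: VPN outage", "ticket-1002: policy sync"]}
    return local_db.get(customer_id, [])

TOOLS = {"lookup_customer_tickets": lookup_customer_tickets}

def mock_model_select_tool(prompt: str) -> dict:
    # Stand-in for the cloud LLM: it receives only the prompt string and
    # returns a function name plus JSON-encoded arguments to call locally.
    return {"name": "lookup_customer_tickets",
            "arguments": json.dumps({"customer_id": "c-42"})}

def handle(prompt: str) -> list:
    call = mock_model_select_tool(prompt)  # only the prompt goes off-site
    fn = TOOLS[call["name"]]               # dispatch to a local function
    args = json.loads(call["arguments"])
    return fn(**args)                      # data stays on-premise

print(handle("Show open tickets for customer c-42"))
```

The key design point, as Karpati notes, is that the model's output is restricted to selecting among predefined functions, so the blast radius of a manipulated prompt is bounded by what those functions are allowed to do.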

For organisations operating in highly regulated industries, Check Point offers open-source models that can be deployed in their own environments. This allows for greater control and customisation, including the ability to train models on proprietary data.

Karpati also discussed the importance of considering factors such as performance and form when selecting AI models. He anticipates a future where multiple models operate concurrently, each optimised for specific tasks.

“We will get to a point where we have multiple models, but which one you use will depend on what you ask or what the agent performs,” said Karpati, adding that this flexibility will allow organisations to tailor their AI deployments to specific needs.
