Innovating with AI agents and managing risks: how do you find the right balance?
The days of cautiously experimenting with AI chatbots are behind us. The new development: AI agents. More and more organizations are investing in them, and many employees have already started working with them.
This has brought us to a phase comparable to the sudden growth of Microsoft Teams around 2020: proliferation. Anyone with a Copilot license can now develop agents themselves, and people are seizing that opportunity left and right. At the same time, oversight is hard to maintain. Innovation is desirable and necessary, but what about data quality and security?
We sat down with Jesse van der Zwet, Power Platform consultant at Wortell. In this blog post, he explains how to find the right balance between innovation and risk management!
What types of agents can you build?
"Microsoft offers several tools that make it easy to build and share agents yourself," says Jesse. "But there is little quality oversight. In addition, Microsoft has made it very easy to connect both internal and external data sources to agents. This significantly increases the risk of company data being exposed."
People tend to overlook that risk; they focus mainly on the benefits an agent can bring.
According to Jesse, agents can roughly be divided into two categories: "First, there are agents that help individuals work more efficiently. With such personal productivity agents, you automate tasks you perform daily. For example, if you consult a handbook every day to look up certain answers, it is worth building an agent once that contains all the necessary information. You can then chat with it to quickly retrieve the right data."
With such agents, quality matters at the individual level: the agent must provide that employee with correct information. But what about agents deployed across the entire organization? For example, an agent that lets all employees look up their vacation days and personal HR information?
"Those agents fall into the second category," Jesse explains. "With these, you want to protect personal data carefully and ensure there is a single source of truth. Because many people use such an agent, it must comply with certain regulations. There are simply greater risks involved."
Is it worth building such agents? "Yes, of course," says Jesse. "They benefit the entire organization and can also increase revenue and job satisfaction."
(AI) Governance that grows with you
Get a grip on agent development, deployment, and management in a growing Copilot ecosystem
Want to keep a grip? Create a framework!
How do you benefit from innovation with agents while mitigating the risks? "Everything stands or falls with the foundation," says Jesse. "As an organization, you want to set clear rules, guidelines, and frameworks, taking into account security and the best practices within your sector. For example, you can prevent confidential data from simply ending up in an agent, or prevent people from creating duplicate agents. When you create a framework in which employees can work safely with agents, you ensure that agents do what they do best: contribute to innovation and growth."
Another advantage of such a framework: cost control. Jesse: "If you develop agents without limits, it comes with a hefty price tag, and organizations naturally want to avoid unforeseen costs. By setting a solid framework in advance, you also ensure transparency in costs and usage. This way, the IT department keeps a grip on the number of agents employees build and the effect this has on costs."
Need support in the AI agent era? At Wortell, we know everything about agents and apply relevant Microsoft, AI, and sector-specific best practices. Learn more about Agent Governance.