You Just Hired an AI Agent but Nobody Told HR

AI agents can read your email, browse the web and execute code. Most small teams have no rules around any of it

You Just Hired an AI Agent but  Nobody Told HR

Agentic AI tools are powerful, autonomous and spreading fast. Here is what your team needs to know before something goes wrong.

AI Security for the Rest of Us

Most of the conversation around AI security happens in enterprise circles, in rooms full of people debating zero trust architecture and data loss prevention budgets. That conversation matters. But it is not the one that startups, small businesses and local councils need to have right now.

The one they need is simpler and more urgent. You are probably already using AI tools. Does anyone in your organisation have any rules around that?

What changed

For years, AI was something you queried. You asked it a question. It gave you an answer. The worst case was a wrong answer or a privacy policy you did not read carefully enough.

That is no longer the shape of the problem.

The tools being built and deployed right now are agentic. They do not just answer questions. They browse the web, read and send email, write and execute code, manage files and call external services on your behalf. The demos look impressive because they are impressive. They are also doing things that carry real consequences.

An agent that can read your email can be manipulated through your email. An agent that browses the web can be hijacked by a webpage that contains carefully crafted text designed to override its instructions. That attack has a name. It is called prompt injection, and it is not theoretical. It is happening now.

That changes the security question entirely. It is no longer just about what data the AI can see. It is about what the AI can do.

The permissions problem nobody talks about

Most people deploying AI tools locally are not thinking carefully about what those tools can access. Run something like Ollama with a web interface and an agent framework, and you may find yourself with a locally hosted model that has broad access to your file system, your command line and your network, sitting behind little or no authentication.

OpenClaw is a good example of why this matters. Powerful capability, very weak authentication assumptions, built on the presumption that local means safe. Local does not mean safe if you are on a shared network, if your firewall is misconfigured, or if someone on your team opens the wrong link.

Least privilege applies to AI agents just as much as it does to human users. Probably more, because an agent does not get suspicious. It just executes.

Microsoft Copilot is a specific problem for small teams on Microsoft 365

If your organisation uses Microsoft 365, and most do, then Copilot is either already deployed or coming soon. The pitch is seamless productivity across your entire Microsoft environment.

The risk is that Copilot does not introduce new permissions. It inherits the ones that already exist. And in most small organisations, those permissions are a mess. Old SharePoint sites nobody cleaned up. Files shared with everyone back in 2021 because it was easier. Documents nobody thought about because nobody thought AI would ever go looking.

Copilot will find all of it. Instantly. And surface it in response to a question from someone who probably should not have seen it.

The US House of Representatives banned Copilot for congressional staff over exactly this kind of concern. That is not a trivial data point.

Where your data actually goes

This is the question most people skip. It matters.

There are broadly three options. You can run a model locally using something like Ollama and OpenWebUI, in which case your data stays on your hardware and you own the risk entirely. You can use a sovereign API provider like Infomaniak, which keeps your data under Swiss and European jurisdiction with contractual guarantees that actually hold up when lawyers enter the room. Or you can use a US hyperscaler, which is convenient, capable and subject to American legal reach regardless of which data centre your prompts land in.

None of those options is wrong by default. But choosing without understanding what you are choosing is a problem.

For a local council handling citizen data, the answer is probably not a US hyperscaler. For a startup doing internal drafting work, a sovereign API provider is a reasonable middle ground. For a security-conscious team that wants absolute control, local hosting is worth the operational overhead.

The decision should be deliberate. Most of the time it is not.

What regulation is coming for you

The EU AI Act is real and it is in force. Most small organisations are not paying attention to it yet because the obligations are phased and the guidance is still catching up with the technology.

That will change.

ISO 42001 is the AI management system standard. Your enterprise clients will start asking whether you comply with it. Your public sector clients may require it. Getting ahead of it now costs far less than retrofitting it later.

Neither of these frameworks is impossibly demanding for a small organisation. But they do require you to know what AI tools you are running, what they have access to, and what your policy is. If you cannot answer those three questions today, that is where to start.

Three things to do this week

Write down every AI tool your team is using. All of them, including the ones people are using on personal accounts.

Decide where your data is allowed to go. Put it in writing, even if it is one paragraph.

Apply least privilege to any agentic tool before you expand what it can access. Not after something goes wrong. Before.

The tools are genuinely useful. The risks are genuinely real. Neither of those facts cancels out the other.

The question is whether you are making a deliberate choice or just hoping for the best.