
AI Agents and the rise of non-human identities: securing AI at scale

  • Writer: Revio
  • May 28, 2025
  • 2 min read

Updated: Aug 7, 2025

Artificial intelligence is transforming business operations, from APIs to internal LLM-powered assistants. But this transformation brings a hidden risk: the explosion of non-human identities (NHIs: bots, agents, and service accounts) that now outnumber human users by as much as 45:1 in some organisations.


Each AI agent, pipeline, or bot must authenticate to services, typically via secrets like API keys or tokens. As AI scales, so does the sprawl of credentials: 23.7 million secrets were reportedly exposed on public GitHub in 2024 alone. These NHIs are not governed like human users. They rarely have credential rotation, scoped access, or de-provisioning policies, making them low-hanging fruit for attackers.


At Revio, we see this as both a critical identity crisis and an opportunity: with the right controls, organisations can harness AI’s power securely.


Five essential controls to secure AI-driven NHIs


1. Audit and sanitise data sources: Modern LLMs use retrieval-augmented generation (RAG) to access live data. If that data contains secrets in Confluence, Jira, or Slack, your AI could unknowingly leak credentials in responses or logs. Action: Regularly scan and scrub internal sources for secrets.
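By way of illustration, a first pass at this can be as simple as a regex sweep over an export of your internal content. The sketch below assumes a hypothetical ./confluence_export directory of text dumps and a handful of well-known credential formats; real tooling would use a far richer pattern set.

```python
import re
from pathlib import Path

# Illustrative patterns for common credential formats (not exhaustive).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S{16,}"),
}

def scan_export(root: str) -> list[tuple[str, int, str]]:
    """Scan exported wiki/chat text files for likely secrets."""
    findings = []
    for path in Path(root).rglob("*.txt"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings

if __name__ == "__main__":
    # "./confluence_export" is a hypothetical dump of internal pages.
    for path, lineno, kind in scan_export("./confluence_export"):
        print(f"{path}:{lineno}: possible {kind}")
```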


2. Centralise and govern NHIs: You can’t protect what you don’t track. Use centralised secrets management tooling to manage and rotate credentials for all NHIs. Action: Build inventories that focus on secrets ownership, not just accounts.
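As a sketch of what “ownership, not just accounts” can look like in practice, the record below ties each NHI to a secret reference, an accountable owner, and a rotation date. All field names and the 90-day policy are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class NHIRecord:
    """One non-human identity and the secret it holds (fields are illustrative)."""
    identity: str        # e.g. "billing-export-bot"
    secret_id: str       # reference into your secrets manager, never the value itself
    owner: str           # accountable human or team
    scopes: list[str]    # what the credential may access
    last_rotated: date

def rotation_overdue(record: NHIRecord, max_age_days: int = 90) -> bool:
    """Flag credentials that have outlived the rotation policy."""
    return date.today() - record.last_rotated > timedelta(days=max_age_days)

inventory = [
    NHIRecord("billing-export-bot", "vault://kv/billing/api-key",
              "payments-team", ["billing:read"], date(2025, 1, 15)),
]
for rec in inventory:
    if rotation_overdue(rec):
        print(f"ROTATE: {rec.identity} (owner: {rec.owner})")
```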


3. Secure AI integrations and deployments: Model Context Protocol (MCP) simplifies how AI connects to services. It has been reported that approximately 5.2% of MCP servers contained hardcoded secrets, a clear red flag. Action: Integrate secrets scanning into the CI/CD pipeline to catch exposed credentials.
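A minimal sketch of such a pipeline gate, assuming the same illustrative patterns as above: it scans only the files changed on the current branch and fails the build if anything matches. Dedicated scanners are far more thorough; this just shows where the check sits.

```python
import re
import subprocess
import sys

# Reuses the same illustrative patterns as the audit sketch above.
SECRET_RE = re.compile(r"AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36}")

def changed_files(base: str = "origin/main") -> list[str]:
    """Files touched on this branch relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f]

def main() -> int:
    hits = 0
    for name in changed_files():
        try:
            text = open(name, errors="ignore").read()
        except OSError:
            continue  # deleted in this diff, or unreadable
        for match in SECRET_RE.finditer(text):
            print(f"{name}: possible secret starting {match.group()[:8]}...")
            hits += 1
    return 1 if hits else 0  # non-zero exit code fails the pipeline stage

if __name__ == "__main__":
    sys.exit(main())
```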


4. Log with caution: LLM applications log everything, including prompts, context, and responses, for tuning. If secrets are exposed at any stage, they’re logged and duplicated across systems. Action: Implement log sanitisation before data hits external storage, and automate it with scanning tools triggered by scripts.
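In Python, for example, one place to enforce this is a logging filter that redacts anything secret-shaped before a record reaches a handler. The patterns below are illustrative stand-ins for your organisation’s real detection rules.

```python
import logging
import re

# Illustrative credential patterns; a real deployment would share these
# with the organisation's secrets scanner.
REDACT_RE = re.compile(r"AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36}")

class SecretRedactor(logging.Filter):
    """Rewrite log records so likely credentials never reach storage."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = REDACT_RE.sub("[REDACTED]", str(record.msg))
        return True  # keep the record, just sanitised

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("llm_app")
logger.addFilter(SecretRedactor())

fake_token = "ghp_" + "a" * 36  # stand-in secret for the demo
logger.info(f"prompt context attached token {fake_token}")
# -> INFO:llm_app:prompt context attached token [REDACTED]
```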


5. Restrict AI access by design: LLMs don’t need unrestricted access to your CRM or source code. Apply zero-trust and least-privilege principles, and don’t over-permission in the name of innovation; this is where breaches begin.
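A deny-by-default gateway between agents and tools is one way to make this concrete. The sketch below is purely illustrative: the agent names, action names, and dispatch stub are all assumptions, but the shape, an explicit allowlist checked on every call, is the point.

```python
# Hypothetical deny-by-default gateway between LLM agents and internal tools.
ALLOWED_ACTIONS: dict[str, set[str]] = {
    "support-assistant": {"crm.read_ticket"},          # read-only, narrowly scoped
    "release-bot": {"repo.read", "ci.trigger_build"},  # no write access to source
}

class PermissionDenied(Exception):
    pass

def invoke_tool(agent: str, action: str, **kwargs):
    """Agents only get actions they were explicitly granted; everything else fails."""
    if action not in ALLOWED_ACTIONS.get(agent, set()):
        raise PermissionDenied(f"{agent} may not call {action}")
    print(f"executing {action} with {kwargs}")  # stand-in for the real dispatch

invoke_tool("support-assistant", "crm.read_ticket", ticket_id=42)  # allowed
try:
    invoke_tool("support-assistant", "crm.delete_ticket", ticket_id=42)
except PermissionDenied as err:
    print(err)  # denied: never granted, so never executed
```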


Don’t forget the human element

Policies, tools and processes mean nothing without developer and engineer buy-in. Security and DevOps must collaborate to embed safe practices into AI development workflows. At Revio, we recommend:


  • Security onboarding for AI developers

  • Developer-friendly secrets detection tools

  • Cross-functional audits of AI deployments