10 AI cybersecurity considerations for boards
Boards embracing AI must do so securely. These 10 questions help leaders manage risk, protect data, and build trust in their AI strategy.

AI is changing the shape of business, fast. From automating operations to augmenting decision-making, it's becoming embedded in how organisations compete and grow.
But behind the promise lies a growing web of risk. Every AI system adopted introduces potential vulnerabilities — in data handling, model integrity, regulatory exposure, and more.
Cybersecurity isn't a topic reserved for technical and compliance teams: it involves every stakeholder, from top to bottom. For boards, it's a question of governance. Decisions around AI are strategic, and without robust oversight, organisations may unlock innovation at the cost of safety, trust, and compliance. Yet AI is not straightforward subject matter for boards to oversee, let alone to adopt themselves.
In this article, we explore ten critical cybersecurity considerations that every board should weigh when adopting AI, to ensure that they achieve the upside potential of AI while keeping the risks under control. Let’s dive in.
What are the top 10 cybersecurity considerations for boards adopting AI?
AI is no longer experimental — it’s foundational. But as its adoption accelerates, boards need to ensure that security keeps pace with innovation.
Recent data from PwC’s 28th Annual Global CEO Survey reveals that almost half of CEOs say their biggest priorities over the next three years are integrating AI — including GenAI — into technology platforms, business processes, and workflows. The momentum is clear, but so is the pressure on those at the top to get it right.
These ten considerations equip boards with the insight needed to champion secure, responsible AI.
1. Data protection and privacy
To start with perhaps the most obvious consideration: data protection. AI feeds on data, large volumes of it, and sometimes that data is sensitive, personal, or regulated. This is especially true of the material that boards discuss. Without proper safeguards, AI could wreak havoc on an organisation's information security and quickly create liabilities.
To avoid compromising data protection and privacy, your board should:
- Ensure AI initiatives comply with the GDPR, the Swiss Federal Act on Data Protection (FADP), and other relevant data protection laws.
- Push for privacy-by-design principles in AI systems, where data protection is baked in from the start.
- Insist on encryption protocols, access controls, and clear data governance policies (see the sketch below).
- Encourage regular audits of how data is collected, stored, processed, and shared.
Put simply: if your AI isn’t protecting people’s data, it’s putting your organisation at risk.
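To make the encryption point concrete, here is a minimal sketch of protecting a document at rest with AES-256-GCM. It is purely illustrative, not any vendor's implementation, and assumes the open-source Python `cryptography` package; in production the key would live in a KMS or HSM, never in application code.

```python
# A minimal, illustrative sketch (not any vendor's implementation) of
# encrypting a board document at rest with AES-256-GCM, using the
# open-source "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_document(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt with AES-256-GCM; the 12-byte nonce is prepended to the output."""
    nonce = os.urandom(12)                      # unique nonce per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_document(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)       # in production: store in a KMS/HSM
sealed = encrypt_document(b"Q3 board minutes", key)
assert decrypt_document(sealed, key) == b"Q3 board minutes"
```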
2. Third-party vendor management
AI rarely operates in a vacuum. Most systems are powered by, or tethered to, a network of external vendors — each one adding capability, but also complexity.
Boards need to view these relationships with a careful eye. While some vendors offer compelling features, their dependencies often run deep: cloud-based models, proprietary infrastructure, and opaque data pathways can all quietly increase exposure.
To stay in control of third-party risks, boards should:
- Vet providers not just for performance, but for how and where data is handled — including on-premise vs. cloud-based setups.
- Prioritise solutions that avoid vendor lock-in and limit external dependencies.
- Insist on clear security commitments and concrete incident response terms in every agreement.
- Regularly reassess vendor risk, especially as AI tools evolve or scale.
Autonomy isn’t a luxury. When AI becomes a fixture in decision-making, knowing exactly who holds the keys — and where those keys live — becomes a matter of strategic resilience.
3. AI system integrity
AI doesn’t just absorb data — it learns from it. And that makes it vulnerable.
From adversarial prompts to data poisoning, attackers can twist how systems behave, often without detection. These aren’t just technical risks — they’re governance blind spots. If an AI system is corrupted, its decisions can mislead leadership, misdirect strategy, or even put customers at risk.
Boards must take steps to verify that AI outputs remain accurate, secure, and accountable — not just at launch, but continuously.
To maintain the integrity of AI systems, boards should:
- Mandate routine vulnerability assessments and stress testing of AI systems.
- Ensure ongoing monitoring for irregular outputs or behaviours that signal compromise (see the sketch below).
- Create protocols to disable AI systems rapidly in the event of misuse or breach.
- Support investment in explainable AI tools to ensure outputs can be traced and validated.
When AI makes decisions, boards need to be sure those decisions haven’t been tampered with.
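As one way to picture the monitoring bullet above, here is a toy sketch that watches a rolling window of model responses and flags statistically unusual behaviour. The signal (output length) and the threshold are illustrative stand-ins; real deployments would track richer telemetry such as refusal rates, toxicity scores, or tool-call patterns.

```python
# Toy sketch: flag anomalous model behaviour by comparing a rolling window
# of output lengths against a trusted baseline. The signal and threshold
# are illustrative stand-ins for richer production telemetry.
from collections import deque
from statistics import mean, stdev

class OutputMonitor:
    def __init__(self, baseline: list[int], window: int = 50, z_limit: float = 3.0):
        self.mu = mean(baseline)                 # baseline needs >= 2 samples
        self.sigma = stdev(baseline)
        self.recent = deque(maxlen=window)
        self.z_limit = z_limit

    def observe(self, output_text: str) -> bool:
        """Record one model response; return True if behaviour looks anomalous."""
        self.recent.append(len(output_text))
        if len(self.recent) < self.recent.maxlen:
            return False                         # wait until the window fills
        z = abs(mean(self.recent) - self.mu) / (self.sigma or 1.0)
        return z > self.z_limit                  # escalate per the protocols above

monitor = OutputMonitor(baseline=[120, 135, 128, 142, 118])
alert = monitor.observe("A suspiciously short reply")   # False until window fills
```

The specific statistic matters less than the pattern: a known-good baseline exists, deviations are detected automatically, and detection feeds the shutdown protocols above.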
4. Ethical AI use
Ethics and AI are intrinsically linked, and for good reason. Ensuring that AI isn't reinforcing or creating bias is fundamental to the way we use AI tooling. And this goes beyond ethics alone: it's also a reputational risk, and in some cases, a legal one. Ethical lapses can erode public trust and stakeholder confidence.
To ensure ethics in AI adoption, boards should:
- Lead the development of company-wide principles for ethical AI use.
- Ensure diverse teams are involved in AI training and validation processes.
- Encourage independent reviews of algorithmic fairness and discrimination risk.
- Define clear accountability for AI decisions and how they impact both customers and employees.
As a recent report from KPMG highlights, "Building trust in AI requires a commitment to ethical principles, transparency, and accountability. Failure to address bias and fairness can lead to significant reputational damage and regulatory scrutiny."
Ethics and security go hand-in-hand — both are essential to responsible governance.
5. Regulatory compliance
The regulatory landscape for AI is shifting beneath our feet. New laws are being introduced at pace, with frameworks such as the EU's AI Act set to reshape how organisations develop and deploy AI. Boards need to continuously monitor and respond to these changes, to ensure compliance at every step of their organisation's AI journey.
To stay on top of AI regulatory compliance, boards should:
- Create reporting structures to help stay informed about current and upcoming AI-related regulations in all jurisdictions where they operate.
- Engage legal and compliance teams early in the AI adoption process to ensure alignment with regulatory requirements.
- Implement robust documentation practices to demonstrate compliance efforts and decision-making processes.
Deloitte puts it plainly: "The regulatory landscape for AI is rapidly evolving, with new laws and standards emerging globally. Organizations must stay informed and proactive to ensure compliance and avoid potential penalties."
With legal and reputational risks on the line, AI compliance can’t be reactive — it needs to be built into the board’s strategic foresight.
6. Incident response planning
AI amplifies impact, and the same is true for cyber incidents. These can escalate quickly and create scenarios that ordinary incident response playbooks simply can't handle. Boards need to ensure that their organisations can respond swiftly and decisively if and when things go wrong.
To formulate robust AI incident response plans, boards should:
- Ensure incident response plans are updated to include AI-specific risks and response protocols.
- Run scenario-based drills that simulate AI-driven disruptions or breaches.
- Define a communications strategy — including disclosure obligations and stakeholder messaging.
- Establish a crisis decision-making protocol, including who can shut down AI systems if needed (a kill-switch sketch follows below).
Preparedness means rehearsing the worst-case, not just hoping for the best.
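To illustrate the "who can shut down AI systems" point, here is a hypothetical kill-switch pattern: every AI feature checks a centrally controlled flag before running, so an authorised responder can disable it instantly and leave an audit trail. The feature names and logging here are invented for the example.

```python
# Hypothetical kill-switch pattern: every AI feature checks a centrally
# controlled flag before running, so an authorised responder can disable
# it instantly. Feature names and the audit logging are invented here.
import threading

class KillSwitch:
    def __init__(self) -> None:
        self._disabled: set[str] = set()
        self._lock = threading.Lock()

    def disable(self, feature: str, actor: str, reason: str) -> None:
        with self._lock:
            self._disabled.add(feature)
        # Real systems would write this to a tamper-evident audit log / SIEM.
        print(f"AUDIT: {actor} disabled '{feature}': {reason}")

    def is_enabled(self, feature: str) -> bool:
        with self._lock:
            return feature not in self._disabled

switch = KillSwitch()
if switch.is_enabled("meeting-summary-ai"):     # checked before every model call
    pass                                        # ... invoke the model here ...
switch.disable("meeting-summary-ai", actor="CISO", reason="suspected prompt injection")
```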
7. Employee training and awareness
AI tools can introduce new interfaces and new behaviours. If skills gaps exist and employees don’t understand the risks, they become a weak link. Boards should champion a culture of awareness — where learning is continuous, curiosity is encouraged, and security habits evolve alongside the tech.
To get employee training and awareness around AI right, boards should:
- Ensure training programmes include AI-specific cyber hygiene and use cases.
- Promote ongoing learning through refreshers, updates, and peer-led sessions.
- Encourage a culture of curiosity and caution — where asking “how does this work?” is welcomed.
- Track participation in training and measure awareness levels over time.
Technology changes fast — awareness needs to keep up.
8. Continuous monitoring and evaluation
AI isn’t a one-and-done deployment. It evolves, and so do the threats around it. Boards should push for systems that keep pace in real time — with tools, metrics, and reporting that shine a light on blind spots before they become breaches.
To ensure an evolving approach to AI governance, boards should:
- Mandate real-time monitoring systems to flag irregular activity or security anomalies.
- Support investments in tools that offer continuous threat intelligence and model transparency.
- Request regular reports on system health, risk exposure, and mitigation efforts.
- Challenge management to define clear KPIs for AI security (a toy example follows below).
Your board can't afford blind spots when innovation is moving this quickly; continuous monitoring and evaluation help ensure there are none.
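As a toy illustration of the KPI bullet above, the snippet below computes two simple AI-security metrics from an event log. The event names and metrics are hypothetical; the point is that management can report measurable trends to the board.

```python
# Toy illustration: two simple AI-security KPIs computed from an event log.
# Event names, sources, and thresholds are hypothetical.
from collections import Counter

events = [  # in practice, pulled from a SIEM or monitoring pipeline
    {"type": "ai_request"}, {"type": "ai_request"}, {"type": "anomaly_flag"},
    {"type": "ai_request"}, {"type": "blocked_prompt"}, {"type": "ai_request"},
]
counts = Counter(e["type"] for e in events)
total_requests = counts["ai_request"] or 1      # avoid division by zero

kpis = {
    "anomaly_rate": counts["anomaly_flag"] / total_requests,
    "blocked_prompt_rate": counts["blocked_prompt"] / total_requests,
}
print(kpis)   # e.g. trend these figures in the board's quarterly risk report
```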
9. Collaboration with cybersecurity experts
AI threats are fast-moving, complex, and often invisible until it’s too late. Relying solely on in-house resources risks missing what’s lurking outside the perimeter. Boards need to broaden their field of vision — by bringing in specialists, stress-testing defences, and learning from how others win (or lose) the same fight.
When considering external AI advice, boards should:
- Commission independent reviews and penetration testing from external specialists.
- Leverage industry threat-sharing networks for real-time insights.
- Encourage cross-sector collaboration to learn from others’ successes — and failures.
- Appoint board-level advisors or subcommittees focused on cybersecurity and AI governance.
Security isn’t a silo. It’s an ecosystem.
10. Transparent communication
AI can be complex — but your stakeholders still need clarity. When systems feel opaque or secretive, trust erodes fast. Confusion breeds fear, and silence after a breach only makes things worse. Boards need to govern with openness — sharing how AI is used, what safeguards exist, and how issues will be handled when they arise.
When it comes to communications around AI, boards should:
- Proactively share how AI is being used, and what safeguards are in place.
- Disclose breaches swiftly and clearly, with a focus on action and accountability.
- Communicate regularly with regulators, shareholders, and internal stakeholders.
- Publish AI use policies and ethical frameworks where possible.
In the absence of information, people assume the worst. Openness builds trust — and strengthens resilience.
What is Sherpany’s approach to AI cybersecurity?
As research from EY points out, "Boards have a responsibility to understand the full range and extent of the risks and opportunities presented by AI." This process should involve evaluating solutions thoroughly. Boards embracing AI need assurance that their tools aren't just smart, but secure.
Sherpany takes this responsibility seriously, blending rigorous security protocols with a privacy-first mindset to protect what matters most: your data. Visit the Sherpany Trust Centre to learn more.
Here’s how Sherpany addresses AI-related cybersecurity concerns with confidence and clarity:
1. Encrypted, private, and geo-redundant
Sherpany’s infrastructure is designed for maximum protection:
- End-to-end encryption: AES-256 and 256-bit SSL/TLS encryption secure all data in transit and at rest.
- Geo-redundant hosting: Data is housed in independent Swiss-based data centres, ensuring 99.9% availability and robust disaster recovery.
- Private AI model: Any AI functionality we introduce runs in a fully private environment using open-source models, with no vendor lock-in, no public model integration, and no data leakage.
- On-premise AI infrastructure: Our AI features are designed to operate within our existing secure systems — with no reliance on third-party platforms or external APIs.
There’s no back door, no shadow copies, no handing off your board data to cloud vendors. Sensitive board materials deserve more than standard protection. Sherpany treats them like state secrets.
2. Built-in, certified compliance
Regulations are complex and constantly evolving. Sherpany builds compliance into the foundation:
- Certified to ISO 27001: One of the most recognised global standards for information security management.
- ISAE 3000 Type II assurance: Independent validation of our control systems and safeguards.
- Fully compliant with GDPR and Swiss data protection laws: Your data remains compliant with global data protection and privacy laws.
- No US CLOUD Act exposure: We believe that your data should be sovereign and shielded from foreign access, including under the US CLOUD Act.
This isn’t box-ticking. It’s peace of mind, architected from the ground up.
3. Always-on risk management
Cybersecurity isn’t static — and neither is Sherpany’s approach:
- Continuous monitoring: Our systems are under constant surveillance to detect and respond to threats.
- Independent audits: Regular third-party reviews, including compliance with FINMA 18/3, ensure accountability.
- Bug Bounty programme: Collaborations with ethical hackers help us uncover vulnerabilities before bad actors can.
We don’t wait for problems. We hunt them down.
4. Security tools to avoid human errors
Even the strongest tech can be undone by a weak password or misplaced device. Sherpany empowers users with smart controls:
- Two-factor authentication (2FA): Enhanced login security powered by Futurae.
- Granular access control: Tailor who sees what, and when, with strict permission settings and role-based access (see the sketch below).
- Remote wipe functionality: Lost device? One tap erases Sherpany data from the mobile app.
Your team stays in control — even when things go wrong.
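For readers who want to see what "granular access control" can look like underneath, here is a minimal, hypothetical sketch of a deny-by-default, role-based check keyed to document confidentiality levels. It is not Sherpany's implementation; the roles, levels, and policy table are invented for illustration.

```python
# Minimal, hypothetical sketch of deny-by-default role-based access keyed to
# document confidentiality levels. Roles, levels, and the policy table are
# invented for illustration; this is not Sherpany's implementation.
from enum import Enum

class Level(Enum):
    PUBLIC = 1
    CONFIDENTIAL = 2
    STRICTLY_CONFIDENTIAL = 3

POLICY = {  # which document levels each role may open
    "guest":  {Level.PUBLIC},
    "member": {Level.PUBLIC, Level.CONFIDENTIAL},
    "admin":  {Level.PUBLIC, Level.CONFIDENTIAL, Level.STRICTLY_CONFIDENTIAL},
}

def can_open(role: str, doc_level: Level) -> bool:
    """Unknown roles get an empty permission set, i.e. access is denied."""
    return doc_level in POLICY.get(role, set())

assert can_open("member", Level.CONFIDENTIAL)
assert not can_open("guest", Level.STRICTLY_CONFIDENTIAL)
```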
5. Built for regulated industries
Some sectors face tighter constraints — and Sherpany is built for them:
- Confidentiality classification levels: Label and restrict access based on document sensitivity.
- Restricted actions: Block copy-paste, download, and printing for sensitive materials.
- Password-protected uploads: Add an extra barrier to confidential files.
Sherpany's AI features are built the same way we build everything: with control, transparency, and long-term trust in mind. We ensure your AI adoption never outpaces your defences.
Augment your board with AI — securely
AI can be a powerful ally — but only when it’s wrapped in strong, proactive cybersecurity. As boards steer their organisations through this technological shift, security can’t be an afterthought. It needs to sit at the centre of every AI conversation.
By addressing these ten critical considerations, boards can govern with confidence, ensuring their AI journey is not only innovative, but safe, compliant, and ethical. The risks are real — but with the right approach, so are the rewards.
Sherpany augments the capabilities of boards with AI — but without compromising on cybersecurity. Book a free demo today and see how our platform helps protect what matters most.