From Our Radar: Emerging Signals in Cyber, AI & Regulation

As artificial intelligence continues to advance, new regulatory frameworks and cybersecurity risks are rapidly emerging. From the implementation of the EU AI Act to rising concerns around deepfake-enabled fraud, voice assistant vulnerabilities, and gaps in cyber insurance, the landscape is shifting fast. These developments are creating both challenges and opportunities — especially for startups building solutions at the intersection of compliance, threat detection, and next-gen cybersecurity.

EU AI Act Implementation

The European Union’s landmark AI Act—the world’s first comprehensive AI regulation—was formally adopted in 2024 and categorizes AI systems based on risk levels:
Prohibited: Applications that manipulate human behavior or use real-time biometric identification in public spaces.
High-Risk: Systems deployed in sectors like healthcare, critical infrastructure, and education must meet strict requirements for transparency, safety, and human oversight.

This regulatory push is fueling growth in early-stage startups focused on AI compliance, traceability, and model auditing, particularly in the EU’s enterprise and public-sector markets.
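For startups building compliance or auditing tools, a common first step is an internal inventory that tags each AI system with the tier it likely falls under and the obligations that follow. The sketch below is a purely illustrative Python mapping, not the Act's legal test; the domain names and the default tier are assumptions for demonstration.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    OTHER = "other"  # tiers below high-risk are out of scope for this sketch

# Illustrative mapping only; real classification requires legal review of the
# system's intended purpose against the Act's actual criteria.
EXAMPLE_DOMAIN_TIERS = {
    "behavioral-manipulation": RiskTier.PROHIBITED,
    "public-realtime-biometric-id": RiskTier.PROHIBITED,
    "healthcare": RiskTier.HIGH_RISK,
    "critical-infrastructure": RiskTier.HIGH_RISK,
    "education": RiskTier.HIGH_RISK,
}

def classify(domain: str) -> RiskTier:
    """Return the assumed tier for a system's application domain."""
    return EXAMPLE_DOMAIN_TIERS.get(domain, RiskTier.OTHER)

if __name__ == "__main__":
    for d in ("healthcare", "behavioral-manipulation", "retail-chatbot"):
        print(f"{d}: {classify(d).value}")
```

In practice, a tool like this would feed traceability and audit workflows: once a system is tagged high-risk, the corresponding transparency, safety, and human-oversight requirements can be tracked against it.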


Cyber Insurance Gaps Widening

The rise of deepfake-enabled fraud and AI-driven cyber threats is exposing gaps in traditional cyber insurance models. According to recent industry analysis, deepfakes now account for 6.5% of digital fraud attempts, with some segments reporting a 3,000% year-over-year increase in deepfake-related fraud.

These developments are prompting insurers to reevaluate their policies, creating a clear opening for startups offering real-time threat quantification, dynamic risk scoring, and AI-enhanced underwriting tools that enable more adaptive and accurate pricing of cyber risk.
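As one way to picture what "dynamic risk scoring" could look like in practice, the sketch below recomputes a simple exposure score as normalized threat signals change, so pricing could react between renewals rather than only at annual review. The signal names, weights, and formula are assumptions for illustration, not any insurer's actual underwriting model.

```python
from dataclasses import dataclass

# Assumed signal weights; a real insurer would calibrate these against loss data.
SIGNAL_WEIGHTS = {
    "deepfake_fraud_attempts": 0.4,
    "phishing_volume": 0.3,
    "unpatched_critical_cves": 0.3,
}

@dataclass
class ThreatSignals:
    deepfake_fraud_attempts: float  # normalized 0..1
    phishing_volume: float          # normalized 0..1
    unpatched_critical_cves: float  # normalized 0..1

def risk_score(signals: ThreatSignals) -> float:
    """Weighted 0..100 exposure score; higher means more exposure."""
    raw = sum(weight * getattr(signals, name) for name, weight in SIGNAL_WEIGHTS.items())
    return round(100 * raw, 1)

print(risk_score(ThreatSignals(0.8, 0.5, 0.2)))  # 53.0 under these assumed weights
```

The point of the sketch is the feedback loop: as a monitored signal such as deepfake fraud attempts rises, the score, and therefore the priced risk, moves with it.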


Voice Assistants Under Scrutiny

Recent discoveries of critical vulnerabilities in widely used smart assistants, including Alexa, Google Assistant, and Siri, are raising red flags in the security community. Exploits have ranged from unauthorized device control to passive audio surveillance through adversarial prompts and signal interference. This scrutiny is accelerating demand for privacy-by-design architectures and context-aware security layers that go beyond traditional device-level safeguards.
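To illustrate what a "context-aware security layer" might mean here, the sketch below gates sensitive assistant actions on signals beyond the spoken command itself. The action names and trust signals are hypothetical; production assistants do not expose hooks in this form.

```python
SENSITIVE_ACTIONS = {"unlock_door", "transfer_money", "read_messages"}

def allow_action(action: str, *, speaker_verified: bool,
                 device_unlocked: bool, on_local_network: bool) -> bool:
    """Allow routine requests freely; require extra context for sensitive ones."""
    if action not in SENSITIVE_ACTIONS:
        return True
    # Sensitive actions need a verified speaker plus at least one additional trust signal.
    return speaker_verified and (device_unlocked or on_local_network)

print(allow_action("play_music", speaker_verified=False,
                   device_unlocked=False, on_local_network=False))  # True
print(allow_action("unlock_door", speaker_verified=True,
                   device_unlocked=False, on_local_network=True))   # True
```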

