
Double-edged sword of AI meets EU AI Act

AI, the trillion-dollar game changer, fuels both innovation and a growing wave of cyberattacks. While the EU seeks balance with new regulations, this "double-edged sword" continues to shape businesses and lives worldwide.


The state of AI applications today

The European Union (EU) is taking a big step towards an AI-powered future with the implementation of its first comprehensive set of AI regulations. These regulations aim to strike a balance between encouraging innovation and mitigating the risks of AI, providing businesses with a clearer legal landscape, fostering trust, and accelerating AI adoption by consumers.¹

While the new EU AI Act helps prevent AI-led cybercrime in a number of ways, some bad bots still slip through the cracks. 

To help you prepare for what’s to come, here’s what AI-led attacks look like today and how prevention and detection techniques coupled with the new EU AI regulation can help combat them for businesses and consumers.

 

How AI is being used in cyberattacks today

Sophisticated manipulation techniques, deepfakes, malicious chatbots, and voice clones used for identity theft are the chilling reality of AI-led cyberattacks, which exploit consumers' lack of awareness of these techniques. Attackers use AI to develop deepfakes and voice clones of real people in order to trick customer service representatives into transferring money to their accounts.²

Attacks like these existed long before generative AI took the spotlight in 2023. Given the pace of its development, however, legal guidance governing the misuse of AI has become paramount to fostering transformative applications and promoting consumer trust.

 

How the new EU AI Act can help outweigh the risks of AI

The Council presidency and the European Parliament’s negotiators have reached a provisional agreement on the proposal on harmonised rules on artificial intelligence (AI).³

This act categorizes the use of AI into four risk levels, each with different rules:

  • Minimal or no risks
    The vast majority of AI systems do not pose risks, and therefore they can continue to be used and will not be regulated or affected by the EU's AI Act.
  • Limited risks
    AI systems that present only limited risks will be subject to very light transparency obligations, such as disclosure that their content was AI-generated, so that users can make informed decisions concerning further use.
  • High risks
    A wide range of high-risk AI systems will be authorized, though subject to a set of requirements and obligations for gaining access to the EU market.
  • Unacceptable risks
    For some uses of artificial intelligence, the risks are deemed unacceptable, so these systems will be banned from use in the EU. These include cognitive behavioral manipulation, predictive policing, emotion recognition in the workplace and educational institutions, and social scoring. Remote biometric identification systems such as facial recognition will also be banned, with some limited exceptions.


The Act focuses on overseeing AI systems, ensuring their safety, transparency, traceability, non-discrimination, and environmental friendliness. These policies, coupled with mandated assessment and governance, can help prevent AI-assisted cyberattacks by banning unacceptable-risk AI applications and imposing strict requirements on high-risk ones.

 

How does the EU AI Act promote businesses' and consumers' trust in AI?

The new AI Act will require organizations to take into account the cybersecurity implications of AI systems, including the risk of cyberattacks that can leverage AI-specific assets such as training data sets or trained models. 

Here are ways the new AI Act can provide peace of mind to businesses and consumers:¹

  1. Regulatory clarity and harmonization

    The new regulation establishes a clear and harmonized regulatory framework for AI systems across the EU, ensuring consistent protection of rights and freedoms.

    It takes a risk-based approach, subjecting high-risk AI systems to stricter requirements while allowing more flexibility for lower-risk systems.

  2. Transparency and explainability

    It mandates transparency and explainability measures so users can better understand how AI systems function and the decisions they make.

  3. Safety and robustness

    It requires providers to implement technical and organizational measures to ensure AI systems are safe, robust, and respect privacy and data protection rules.

  4. Responsible innovation

    It establishes AI regulatory sandboxes to allow for responsible testing and development of new AI systems under supervision.

    It also encourages voluntary codes of conduct to promote AI literacy and responsible practices among developers and users.

  5. Traceability and oversight

    It creates an EU database for registering high-risk AI systems, increasing transparency and traceability.


Prevention and detection of AI-assisted attacks

While the new regulatory framework prompts AI operators and companies that use AI to enhance their security protocols in accordance with EU AI Act guidelines, bad bots remain a threat.

However, there are many ways to defend against AI-led attacks today:

Prevention

  • Spreading awareness: Victims of AI-led attacks, such as deepfakes and voice clones, are often unaware of these techniques. As a fundamental strategy, educating potential targets about these practices helps them recognize and deter attacks before any damage is done.

Detection

For detecting face manipulation:

  • Visible edges: Look for inconsistencies in skin tone, texture, or additional facial elements around the swapped face.
  • Blurred details: Pay attention to blurry teeth, eyes, or other sharp features, especially during close-ups.
  • Unnatural expressions: Observe for limited or strange facial movements, particularly in profile views or quick head turns.
  • Inconsistent lighting: Watch for sudden changes in lighting or shadows that don't match the environment.
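The "inconsistent lighting" cue above can also be checked programmatically, at least in spirit. Below is a minimal, illustrative sketch (not a production detector; the function names, the example frame, and the threshold are invented for this example): it flags a face region whose average brightness diverges sharply from the rest of the frame, as can happen when a face is pasted into footage shot under different lighting. Real deepfake detection relies on trained models, not a single brightness check.

```python
# Toy heuristic: compare the average brightness of a face region
# against the whole frame. A large gap is one possible sign of a
# composited (swapped) face. Images are 2D lists of grey values 0-255.

def region_mean(img, top, left, height, width):
    """Mean pixel value of a rectangular region in a 2D grayscale image."""
    total = count = 0
    for row in img[top:top + height]:
        for px in row[left:left + width]:
            total += px
            count += 1
    return total / count

def lighting_mismatch(img, face_box, threshold=40):
    """True if the face region's mean brightness deviates from the
    whole frame's by more than `threshold` grey levels."""
    top, left, h, w = face_box
    face = region_mean(img, top, left, h, w)
    frame = region_mean(img, 0, 0, len(img), len(img[0]))
    return abs(face - frame) > threshold

# Example: a dark 8x8 frame with an unnaturally bright 3x3 "face" pasted in
frame = [[30] * 8 for _ in range(8)]
for r in range(2, 5):
    for c in range(2, 5):
        frame[r][c] = 220
print(lighting_mismatch(frame, (2, 2, 3, 3)))  # prints True
```

In practice this check would run on face crops located by a face detector, and the threshold would be tuned on real footage; the point here is only that several of the visual cues listed above can be turned into measurable signals.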

For detecting voice manipulation:

  • Metallic tones: The audio may sound robotic or have a metallic twang.
  • Mispronunciation: Words, especially from different languages, might be pronounced incorrectly.
  • Monotone voice: Speech lacks natural emphasis and sounds flat.
  • Unnatural pronunciation: Accents or word stresses don't match the expected speaker.
  • Extraneous sounds: Background noise or unnatural pauses may signal manipulation.
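The "monotone voice" cue likewise has a measurable counterpart: natural speech has a varied loudness contour, while flat delivery varies little. The sketch below (illustrative only; the function names and threshold are invented, and real voice-clone detection uses trained models over spectral features) flags audio whose frame-to-frame energy barely changes:

```python
import math

# Toy heuristic: split a signal into frames, compute each frame's RMS
# energy, and flag the audio as "monotone" when the energy contour is
# nearly flat relative to its mean.

def frame_energies(samples, frame_size=100):
    """RMS energy of consecutive non-overlapping frames."""
    energies = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        energies.append((sum(s * s for s in frame) / frame_size) ** 0.5)
    return energies

def sounds_monotone(samples, flatness_threshold=0.05):
    """True if frame energy varies little relative to its mean
    (coefficient of variation below the threshold)."""
    energies = frame_energies(samples)
    mean = sum(energies) / len(energies)
    variance = sum((e - mean) ** 2 for e in energies) / len(energies)
    return (variance ** 0.5) / mean < flatness_threshold

# Synthetic demo: a constant-amplitude tone vs. an amplitude-modulated one
flat = [math.sin(0.2 * i) for i in range(1000)]
lively = [math.sin(0.2 * i) * (1 + math.sin(0.005 * i)) for i in range(1000)]
print(sounds_monotone(flat))    # prints True
print(sounds_monotone(lively))  # prints False
```

A flat energy contour alone does not prove manipulation, of course; like the listening cues above, it is one signal among several that can raise suspicion.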


Conclusion

Industry leaders, researchers, and policymakers worldwide hold a positive outlook on AI's potential, fueled by the development of cutting-edge prevention and detection techniques. Embracing these tools and adhering to the EU AI Act's framework are crucial steps toward a future where AI empowers us all.

Learn how to protect your platform from cyberattacks through a 360° fraud prevention solution.

Sources:

  1. EU AI Act
  2. FTC cyberattacks
  3. European Council AI Act press release