
The EU's groundbreaking new AI Act: Shaping an ethical future for technology

Explore the implications of the EU's groundbreaking AI Act, pioneering legislation shaping the future of AI regulation.

The European Union has taken a monumental step in digital policy with the introduction of the Artificial Intelligence Act (AI Act). The first of its kind, this pioneering legislation aims to create a structured regulatory framework for the development and deployment of AI technologies. The EU's objective is to leverage the benefits of AI (e.g., improved healthcare, safer transportation, efficient manufacturing, sustainable energy) while ensuring these technologies are safe, transparent, and aligned with societal values. 

Monumental shifts like this can only be achieved with the help of future-facing digital professionals. In this blog post, we’ll explore what the AI Act does. We’ll also highlight how a Digital Futures MA can help you shape a more ethical future for technology. 

What are the goals and purpose of the AI Act? 

The AI Act was proposed by the European Commission in April 2021 and adopted by the European Parliament in March 2024. It regulates AI systems according to the risk they pose to users. Its primary goals include:

  • Ensuring safety and accountability by making AI systems safe, transparent, and traceable. 
  • Promoting non-discrimination by avoiding biases. 
  • Supporting environmental sustainability through eco-friendly technologies. 
  • Maintaining human control to prevent harmful outcomes.  

This comprehensive approach seeks to foster an ethical and responsible environment for AI development. 

Risk categories to protect EU citizens 

The AI Act introduces a risk-based classification system for all AI technologies. High-risk systems are defined as those that could negatively impact safety or fundamental rights, and they must be assessed before they reach the market and throughout their lifecycle. This category includes AI used as a safety component of products, as well as systems deployed in critical sectors such as infrastructure, education, and law enforcement.

Unacceptable-risk systems, such as those involved in cognitive behavioural manipulation, social scoring, and biometric identification, will be banned, with narrow exceptions for law enforcement in serious cases. This structure aims to protect EU citizens while enabling the ethical advancement of AI.

Criticisms of the AI Act 

Fundamentally, the goal of the AI Act is to protect EU citizens. However, it has faced considerable criticism for its approach to regulating artificial intelligence. Critics argue that the act overregulates, imposing heavy compliance costs and barriers to entry that stifle innovation and could deter AI developers and businesses from operating within the EU.

Meanwhile, other criticisms centre on the act's broad definitions and the vast discretionary power it grants to the European Commission, which could create loopholes and leave the law's application unclear and unpredictable.

For example, governments and police forces are granted special exceptions for the use of high-risk AI systems. But these institutions may also have the power to abuse such technology. What if a government uses AI in a way that cognitively manipulates users into voting for its party? Will there be repercussions if governments and police forces harness AI to create social credit systems like the one adopted in China?

These criticisms suggest that despite the positive intentions of the AI Act, its current form could inadvertently hamper the innovation required to foster a competitive digital economy.  

Navigate and shape AI trends with the Digital Futures MA 

The EU’s AI Act is a pioneering step towards creating a safer and more ethical future for AI, but it’s just the beginning. As AI continues to evolve, its regulation becomes increasingly complex, driving a global need for experts who can navigate these challenges. 

Our MA in Digital Futures equips you with the critical skills needed to influence AI and broader digital policies. Through modules like AI and Society and Mapping New Trends in the Digital Landscape, you'll gain the expertise to evaluate AI’s societal impacts, anticipate future trends, and engage in meaningful dialogue with policymakers, industry leaders, and the public. Whether advising on regulations, ensuring compliance, or shaping ethical innovation, you’ll be prepared to lead in this rapidly changing field. 

Together, these modules provide a comprehensive toolkit that blends technical knowledge with strategic foresight, empowering you to contribute effectively to conversations around AI regulation and beyond. 

