
Europe’s New AI Code of Conduct: Inside the Plan to Tame Big Tech’s Models by 2025

Introduction

The European Union is rolling out a groundbreaking AI Code of Practice for general-purpose AI models – essentially a voluntary code of conduct aimed at taming the likes of ChatGPT and other big AI systems ahead of sweeping new laws. The EU AI Act, the bloc’s landmark AI regulation, starts applying to general-purpose AI models in August 2025, bringing hefty new rules and penalties. In the meantime, Brussels is pushing AI providers to self-regulate now. The European Commission has just finalized the first Code of Practice for General Purpose AI (GPAI), designed as a toolkit for AI developers to self-assess and mitigate risks before the hard regulatory “crackdown” kicks in digital-strategy.ec.europa.eu nelsonmullins.com. This code – co-created with industry and experts – is poised to become a model blueprint for safe AI development in the EU and potentially worldwide.

Why does this matter? Europe’s approach is the first of its kind: a voluntary but incentivized rulebook for AI models that can generate text, images, code, and more. By adhering to the code now, companies can earn goodwill with regulators, enjoy lighter red tape, and claim a “presumption of compliance” when the law comes into force digital-strategy.ec.europa.eu nelsonmullins.com. It’s a high-stakes experiment in balancing innovation with oversight – essentially “behave now, or face the consequences later.” In this in-depth report, we break down everything you need to know about the EU’s GPAI Code of Conduct: what it is, why it was created, what it demands from AI providers, and how experts and industry are reacting.

The EU AI Act and the General-Purpose AI Challenge

The EU AI Act is the world’s first comprehensive AI law, adopting a risk-based framework for AI systems. After intense negotiations, EU lawmakers agreed on strict rules for “high-risk” AI applications (like those used in healthcare or hiring) and even bans on some harmful uses. But when it came to general-purpose AI (GPAI) – highly versatile AI models that can perform a broad range of tasks – regulators faced a challenge. Systems like OpenAI’s GPT-4, Google’s Gemini, or Anthropic’s Claude are not built for one specific use; they can be adapted for countless downstream applications. This generative AI boom arrived late in the lawmaking process, forcing a rethink on how to police such powerful, general systems euronews.com euronews.com.

EU member states initially worried that slapping heavy rules on foundation models might stifle innovation and homegrown AI startups euronews.com. The European Parliament, on the other hand, pushed for tough requirements on these models to address concerns around market dominance and potential abuses of AI euronews.com. The compromise was a “third way”: a co-regulatory approach where the detailed obligations for general-purpose AI would be fleshed out in codes of practice and technical standards, rather than spelled out rigidly in the law euronews.com. EU Internal Market Commissioner Thierry Breton championed this idea, drawing inspiration from the EU’s 2018 Code of Practice on Disinformation – a voluntary pact with tech firms to fight fake news euronews.com. The logic was that flexible codes could keep pace with fast-evolving AI technology better than hard law, while still holding companies accountable.

This approach was written into the AI Act (notably Article 56, which provides for codes of practice as a route to demonstrating compliance). By late 2023, as the AI Act neared final approval, the Commission kicked off work on a General-Purpose AI Code of Practice. The aim: define how providers of large-scale AI models should ensure safety, transparency, and fundamental rights protection when their models are used across various contexts nelsonmullins.com nelsonmullins.com. In essence, the EU recognized that foundation models need a foundation of rules – but opted to let industry experts and regulators draft those rules collaboratively.

Why Launch a Voluntary Code Now?

With the AI Act formally approved in 2024, a countdown began. The law’s provisions on general-purpose AI enter into application on August 2, 2025 digital-strategy.ec.europa.eu. This means that from that date, any company offering a qualifying AI model in Europe must meet new obligations (on transparency, risk controls, data governance, etc.). However, enforcement won’t happen overnight – the Act gives some grace periods: an extra year for new models and two years for existing models before full penalties apply digital-strategy.ec.europa.eu. Still, the message from Brussels has been clear: there will be no delays or pauses in rolling out the AI Act. “There is no grace period. There is no pause,” a European Commission spokesperson insisted when asked if the AI Act’s enforcement might be postponed nelsonmullins.com.

The Commission introduced the GPAI Code of Practice to bridge the gap until these legal deadlines bite. By encouraging companies to voluntarily adopt best practices now, regulators hope to reduce chaos and non-compliance once the law is live. Commission officials call it a win-win: companies that sign up to the code early will have “reduced administrative burden” and greater legal certainty when the Act kicks in iapp.org. In fact, adhering to the code will create a “rebuttable presumption of conformity” with the AI Act’s requirements – essentially a safe harbor that shows regulators you are playing by the rules nelsonmullins.com. As one legal expert noted, following the code gives providers a beneficial safe harbor and avoids “unnecessary scrutiny or uncertainty from the EU AI Office” down the line nelsonmullins.com nelsonmullins.com.

Another reason for launching the code now is the collaborative process itself. Drawing up the code has taken months of negotiation among stakeholders. The Commission convened 13 independent experts in late 2024 to draft the code, and sought input from over 1,000 stakeholders – including major model providers, startups, academics, AI safety researchers, copyright holders, and civil society groups digital-strategy.ec.europa.eu. This broad consultation was meant to ensure the code is technically feasible and innovation-friendly, not just a bureaucratic wish list. By publishing the code in July 2025 (after initially hoping for May), the EU squeezed it out just weeks before the Act’s GPAI rules start – making good on an informal deadline iapp.org. It sets expectations early: “Any company providing AI models to EU markets should assess their compliance readiness now,” urged a team of tech lawyers, noting that conforming to the code now effectively jump-starts EU AI Act compliance nelsonmullins.com nelsonmullins.com.

Development of the GPAI Code of Practice

The path to the final code was not entirely smooth. Work began around September 2024, and multiple draft versions were circulated. The process was, by design, a co-regulatory exercise with the newly formed EU AI Office (a body created by the AI Act to oversee implementation) coordinating the effort. Four working groups tackled different aspects of the code, chaired by experts to ensure balance euronews.com. Interested stakeholders were invited to contribute through workshops and public consultations, making the drafting as inclusive as possible euronews.com. However, critics worried that letting industry draft the rules could lead to watered-down commitments – the classic concern with voluntary codes that companies might “do too little, too late” euronews.com. EU officials tried to counter this by involving civil society and independent academics, echoing the strategy used to strengthen the disinformation code in later years euronews.com euronews.com.

By July 10, 2025, the final version of the GPAI Code of Practice was delivered to the Commission. It represents the first detailed compliance framework under the AI Act specifically for general AI models nelsonmullins.com. The code’s drafters had to strike a balance between competing pressures. On one side, industry groups argued earlier drafts were too restrictive and cumbersome. On the other, AI safety advocates and some lawmakers felt the code risked being too lenient, deferring too much to tech companies’ preferences iapp.org. “The process was dogged with complaints on both sides,” an IAPP report noted – industry said the code was overly strict, while civil society feared drafters were “too deferential to technology companies’ demands.” iapp.org iapp.org

In fact, even as the ink dries, a political tug-of-war continues over the code’s contents. MEP Brando Benifei, a lead architect of the AI Act in the European Parliament, praised the final draft for preserving “important provisions on fundamental rights and copyright protections”, which he and colleagues fought to include iapp.org. “Model providers obtained important concessions, so there’s no excuse not to uphold the Code,” Benifei said, urging that “the credibility of Europe’s AI framework now depends on [the] AI Office’s ability to translate these commitments into practice with robust oversight [and] real consequences for non-compliance.” iapp.org On the other hand, Benifei and several other MEPs have also complained that the Commission made last-minute changes without properly consulting Parliament. In a public letter, a group of lawmakers (including Axel Voss, Sergey Lagodinsky, Kim van Sparrentak and others) alleged that certain transparency and risk assessment measures were weakened in the final hour iapp.org. They argued the Commission allowed narrowing of the Act’s scope via the code and questioned the process: “How does the Commission consider the objectives of the AI Act to be safeguarded if the European Parliament was not consulted on such significant changes… while most providers reportedly received the full text of the final draft?” the letter stated iapp.org. This highlights the fine line the Commission walked: incorporating industry feedback to make the code workable, while trying not to undermine the law’s intent.

Despite these debates, the Commission is now moving forward. The GPAI Code of Practice is voluntary, but it won’t be a toothless document. Once the code is formally endorsed by the Commission and EU Member States, likely in the coming weeks, AI model providers will be invited to sign on and commit to its provisions digital-strategy.ec.europa.eu iapp.org. Signatories will effectively pledge to implement the code’s measures in developing and deploying their AI models. The more companies that join, the stronger the code’s impact will be. It remains to be seen if all the major players – from OpenAI and Google to open-source model developers – will embrace this “clear, collaborative route to compliance” as the Commission hopes iapp.org.

Key Provisions of the AI Code of Practice

The General-Purpose AI Code of Practice is structured into three main chapters, reflecting the core obligations that the AI Act imposes on GPAI providers: Transparency, Copyright, and Safety & Security digital-strategy.ec.europa.eu. Each chapter lays out detailed measures and documentation that providers should implement to demonstrate compliance with the corresponding legal requirements. Below, we break down the key commitments in each area:

  • Transparency Obligations: Every provider of a general-purpose AI model will need to maintain comprehensive documentation about their model – essentially a technical “model card” that can be shared with regulators. The code includes a Model Documentation Form as a template, which captures details like the model’s technical specifications, architecture, training data characteristics, performance benchmarks, and even the computational resources and energy consumption used during training nelsonmullins.com nelsonmullins.com (a minimal illustrative sketch of such a form follows this list). By filling out this standardized form, companies can fulfill their documentation duties under Article 53(1)(a)-(b) of the AI Act natlawreview.com. The code specifies that such information need only be provided to the EU’s AI Office or national authorities upon request (with a legal basis), rather than published outright iapp.org. This addresses industry concerns about exposing trade secrets, while still ensuring regulators can obtain the information when needed. Notably, open-source AI models are exempt from some documentation requirements unless they pose systemic risks iapp.org. In practice, transparency also means providers must publish certain summaries or disclosures to inform the public about their AI (e.g. stating that output was AI-generated, where applicable) – though much of the heavy documentation remains internal until requested natlawreview.com. All these steps aim to make even the most powerful AI systems more traceable and accountable, without stifling their development.
  • Copyright & Data Governance: In a world where AI models train on vast internet data, the EU is insisting that models respect intellectual property rights. Article 53(1)(c) of the AI Act requires providers to have a policy to comply with EU copyright law and honor content owners’ rights natlawreview.com. The Code of Practice fleshes this out with multiple concrete measures natlawreview.com. Signatories must implement a robust copyright compliance policy, including mechanisms to prevent their AI from infringing copyrights in its outputs and to handle complaints from rightsholders nelsonmullins.com nelsonmullins.com. For example, a provider should set up an online channel for copyright complaints (an email or web form/API) so that creators can report unauthorized use of their works nelsonmullins.com. Companies also need to designate a point of contact specifically for copyright issues and publish those contact details nelsonmullins.com. The code further commits providers to responsible data sourcing practices. This means abiding by website owners’ wishes not to be scraped: honoring robots.txt files that opt out of web crawling and not bypassing paywalls or technical measures designed to protect content iapp.org nelsonmullins.com. The European Commission will even maintain a blocklist of websites “persistently infringing” copyright (sites notorious for pirated content), and AI model crawlers should avoid ingesting data from those sources iapp.org. The code explicitly asks providers to promise not to circumvent measures intended to prevent unauthorized access to copyrighted material iapp.org. Together, these commitments seek to ensure that the training and operation of GPAI models do not ride roughshod over creators’ rights. In effect, Europe is saying: AI models shouldn’t build their power by stealing artistic or journalistic content – and if they do, there must be recourse for those affected.
  • Safety & Security Measures: This third chapter applies only to the most advanced “frontier” AI models – those general-purpose systems considered to pose “systemic risk.” The AI Act’s Article 55 introduces extra obligations for very powerful models that could have far-reaching impact on society or safety natlawreview.com. The Code of Practice defines criteria for these frontier models, including a quantitative threshold: models exceeding 10^25 FLOPs (floating-point operations) in compute are a likely benchmark for this category nelsonmullins.com. In 2025, only an estimated 5–15 companies worldwide (think OpenAI, Google, Microsoft, Anthropic, etc.) have models at that scale nelsonmullins.com. But as AI compute efficiency grows, more may enter this elite group. Providers of such high-risk GPAI models must adopt enhanced governance and risk management frameworks nelsonmullins.com. Concretely, the code mandates establishing formal internal governance structures with independent oversight – for example, an external advisory board or internal committee focused on AI risk nelsonmullins.com. These models should undergo rigorous evaluations by independent experts (“red teaming”) to probe for vulnerabilities, biases, or harmful behavior before release, after any major upgrade, and periodically during operation nelsonmullins.com nelsonmullins.com. In other words, the biggest models need a continuous safety-testing regimen by qualified outsiders, not just in-house engineers. If tests reveal serious issues, the provider is expected to mitigate them and, under the AI Act, report serious incidents to authorities. The code details an incident reporting timeline – potentially as short as 2 to 15 days after a provider becomes aware of a major incident involving their model, they should alert the AI Office nelsonmullins.com. Cybersecurity is another focus: frontier model providers must implement state-of-the-art security controls to protect their AI systems from tampering or misuse nelsonmullins.com. This includes end-to-end encryption of model access, strict access controls, and insider threat protections to prevent leaks or malicious use of these powerful models nelsonmullins.com. Additionally, these providers should produce a comprehensive “model report” compiling all the details about the AI system, its known risks, and the mitigations in place natlawreview.com. This report can be shared with the EU AI Office to demonstrate ongoing compliance and risk management. The overarching idea is to treat frontier AI a bit like the aviation or pharma industries: heavy-duty oversight, testing, and safety checks, given the stakes.
  • Support for Downstream Users: An often overlooked aspect of the code is the requirement for GPAI providers to help those who use their models in end applications. Many companies will build AI-powered products using a foundation model (for instance, integrating an open-source language model into a medical app). The code asks model developers to provide information and tools to downstream developers so they too can comply with the AI Act nelsonmullins.com. This might mean publishing best practices, sharing details on model limitations (the intended-use and known-limitations fields in the documentation sketch after this list illustrate the kind of information worth passing along), or offering optional filters and settings that downstream users can apply to reduce risk. The goal is a compliant AI ecosystem: obligations don’t stop at the model provider; each provider should equip the next player in the value chain to uphold safety and transparency as well.
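
To make the transparency duty concrete, here is a minimal sketch of the kind of record the Model Documentation Form asks for, expressed as a Python dataclass. The field names are illustrative assumptions drawn from the categories described above (technical specifications, training data characteristics, benchmarks, compute and energy); the official form defines its own layout.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Illustrative stand-in for the code's Model Documentation Form.

    Field names are assumptions based on the Transparency chapter's
    categories, not the official template.
    """
    model_name: str
    version: str
    architecture: str                      # e.g. "decoder-only transformer"
    parameter_count: int
    training_data_summary: str             # characteristics of training data
    performance_benchmarks: dict = field(default_factory=dict)
    training_compute_flops: float = 0.0    # computational resources used
    energy_consumption_kwh: float = 0.0    # energy used during training
    intended_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

# Filled in once per model and handed to the AI Office on request:
doc = ModelDocumentation(
    model_name="example-lm",
    version="1.0",
    architecture="decoder-only transformer",
    parameter_count=7_000_000_000,
    training_data_summary="Filtered public web text plus licensed corpora",
)
```

Keeping such a record machine-readable has a side benefit: the non-confidential fields (intended uses, known limitations) can be shared directly with downstream integrators, which is exactly what the downstream-support bullet above calls for.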
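
The data-sourcing commitment is similarly mechanical to implement. Below is a minimal sketch, using Python’s standard-library urllib.robotparser, of the robots.txt check a training-data crawler would run before ingesting a page; the “ExampleAIBot” user-agent string is a hypothetical placeholder.

```python
from urllib import robotparser

def may_crawl(page_url: str, robots_url: str,
              user_agent: str = "ExampleAIBot") -> bool:
    """Return True only if the site's robots.txt permits this crawler."""
    rp = robotparser.RobotFileParser()
    rp.set_url(robots_url)
    try:
        rp.read()  # fetch and parse the site's robots.txt
    except OSError:
        # Conservative default: if robots.txt is unreachable, don't ingest.
        return False
    return rp.can_fetch(user_agent, page_url)

# A code-compliant pipeline skips opted-out sources rather than working
# around them (and would also consult the Commission's blocklist):
if may_crawl("https://example.com/articles/1",
             "https://example.com/robots.txt"):
    print("OK to fetch")
else:
    print("Opted out - skip this source")
```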
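
The 10^25 FLOP threshold is easier to grasp with the widely used rule of thumb that training a dense transformer costs roughly 6 FLOPs per parameter per training token. That heuristic is an assumption for illustration only – the Act looks at the cumulative compute actually used, not an estimate.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # the AI Act's presumption threshold

def training_flops_estimate(params: float, tokens: float) -> float:
    """Rough dense-transformer training cost: ~6 FLOPs/parameter/token."""
    return 6.0 * params * tokens

# A hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = training_flops_estimate(70e9, 15e12)  # ~6.3e24 FLOPs
tier = ("systemic-risk tier" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
        else "below the 1e25 threshold")
print(f"{flops:.2e} FLOPs -> {tier}")
```

By this rough estimate, even a very large open-weight model can sit just under the line – consistent with the article’s point that only a handful of frontier labs are expected to trip the systemic-risk obligations today.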

It’s clear the GPAI Code of Practice goes well beyond vague principles – it lays out concrete steps, forms, and technical expectations. As one analysis put it, the code’s requirements amount to “the most concrete guidance to date” on how to meet the EU AI Act’s complex rules for general AI nelsonmullins.com nelsonmullins.com. Many provisions align with what leading AI companies claim to do already (e.g. OpenAI has model cards and red-team exercises), but enshrining them in a code with regulatory backing is a significant move. Importantly, the code’s measures also mirror emerging global AI governance trends. For instance, the U.S. NIST AI Risk Management Framework and various international AI principles emphasize documentation, transparency, and risk mitigation – so investments to comply with the EU code could pay off in multiple jurisdictions nelsonmullins.com.

Why Would AI Providers Sign Up?

Committing to a voluntary code means extra work – so what’s the incentive for companies? The European Commission has been quite blunt about the benefits. Signatories will find compliance easier and safer once the AI Act is enforced. By adhering to the code, providers can demonstrate they meet the Act’s requirements on transparency, safety, and copyright, giving them a “reduced administrative burden” and increased legal certainty digital-strategy.ec.europa.eu iapp.org. Regulators are likely to view code-abiding companies more favorably, possibly streamlining audits or approval processes. In legal terms, following the code grants a presumption of conformity – meaning if you’re ever in a dispute or inspection, you can point to your code compliance as evidence you took appropriate measures nelsonmullins.com. It doesn’t guarantee immunity from penalties, but it sets a strong starting point in your favor.

EU officials also pitch the code as a way to shape the rules collaboratively rather than having one-size-fits-all enforcement. “Co-designed by AI stakeholders, the Code is aligned with their needs,” noted Henna Virkkunen, a European Commission Vice-President, adding that companies should join to keep AI “safe and transparent while allowing for innovation.” iapp.org. By signing up, firms signal to the public and regulators that they are responsible actors not needing to be dragged into compliance kicking and screaming. This could be a reputational boost, especially amid rising public scrutiny of AI. The code effectively operationalizes the AI Act’s principles – so joining it early could give companies a head-start and avoid last-minute scrambles to retrofit their processes in 2025.

There’s also a strategic, global angle. The EU’s AI regulation is likely to set a de facto standard internationally (much as GDPR did for data privacy). Companies that align with it now may gain an edge in global trust and be better prepared as other regions craft their own AI rules. Tech policy experts highlight that investments in compliance capabilities for the EU “will likely serve multiple regulatory regimes” as similar initiatives spring up elsewhere nelsonmullins.com. In short, signing the code could future-proof a company’s AI governance. It’s worth noting that some major AI players have already shown interest in responsible AI commitments (for example, several U.S. companies pledged voluntary AI safety measures in talks with the White House). The EU’s code provides a concrete checklist to act on those commitments.

Finally, consider that not signing the code might actually put a target on a company’s back. Regulators may wonder why a big AI provider refuses to pledge to practices that peers have agreed are reasonable. One legal analysis warned that “deviating from [the code] may invite unnecessary scrutiny or uncertainty from the EU AI Office.” nelsonmullins.com nelsonmullins.com In other words, if most of the industry follows the code and one outlier doesn’t, that outlier could face tougher questions once enforcement starts. Thus, even though the code is voluntary, market pressure and regulatory pragmatism could drive companies to the table.

Mixed Reactions and Expert Commentary

The launch of the GPAI Code of Conduct has drawn a wide spectrum of reactions – from enthusiastic support to sharp criticism – reflecting the high stakes involved.

European policymakers have largely welcomed the code as a necessary step. They view it as a proof of concept that the AI Act’s novel approach can work. As mentioned, MEP Brando Benifei sees no excuse for AI firms not to comply, given that they won some compromises in the code’s drafting iapp.org. Other lawmakers, however, remain cautious. A group of Members of European Parliament publicly worried that the code might weaken parts of the AI Act if not done right, and they were displeased with the Commission’s opaque finalization process iapp.org iapp.org. These MEPs argue that Parliament – which co-wrote the AI Act – should have had more say in the code’s outcome to ensure it doesn’t undercut the law’s intent. This political friction hints at a potential battle: if the code ends up too lax or if companies don’t sign on, the Parliament could push for stricter measures or oversight of the Commission’s implementation.

From the industry side, there is relief that the code provides clarity, but also concern about its depth. The Computer & Communications Industry Association (CCIA), representing tech firms, warned that the final code is “overly prescriptive and beyond the Act’s scope, putting any signatory that chooses to agree to [it] at a higher regulatory burden.” iapp.org Boniface de Champris, a senior policy manager at CCIA Europe, acknowledged that some onerous safety/security steps were streamlined in drafting, but he criticized that handling of copyright complaints and opt-out mechanisms got even stricter in the final text iapp.org. In his words, “with so little time left, companies are left in the dark, and it’s clear more time is needed to finalise the overall framework and give companies a fair compliance window.” iapp.org This captures a common industry gripe: the code was published mere weeks before the law’s start date, so companies must scramble to adjust if they sign on. Businesses had lobbied for a delay or “grace period” for AI Act enforcement (some floated a two-year pause), but as noted, the Commission firmly shot that down nelsonmullins.com.

Representatives of global tech firms are taking a measured stance. Marco Leto Barone, policy director at the Information Technology Industry Council (ITI), said companies will now have to “decide if the code is workable.” iapp.org He emphasized the need for clear guidance from the Commission to interpret the code’s measures properly and “grant sufficient time for implementation and compliance, given the imminent entry into application of the AI Act rules” iapp.org. In other words, industry wants practical guidance and maybe some flexibility as they implement the code’s many requirements. There’s also an element of “trust, but verify” – ITI and others will be watching how EU authorities treat code signatories versus non-signatories, and whether the code truly aligns with the Act (no hidden surprises).

On the flip side, AI ethics and civil society advocates generally applaud the EU for not watering down the AI Act and for involving a broad coalition in the code development. However, some are cautious that a voluntary code is only as good as its uptake and enforcement. If key players refuse to sign or if compliance isn’t monitored, the code could end up as a paper tiger. These groups call for the new EU AI Office to be vigilant. As MEP Benifei noted, Europe’s AI credibility depends on the AI Office to ensure the code’s commitments are translated into real action, with “robust oversight, real consequences for non-compliance, and ongoing dialogue with civil society” iapp.org. Essentially, trust but verify applies here too: trust that companies will do their part, but verify through audits and penalties if they don’t.

There’s also the question of global competitive impact. Some non-EU companies privately worry that the code (and AI Act) could impose constraints that make their AI less agile or innovative, at least in Europe. Yet others see an opportunity: compliance could become a quality mark. If European consumers and enterprises prefer AI systems that meet EU standards (for safety, transparency, no copyright theft, etc.), then signing the code might be good for business. The dynamic is similar to how many U.S. companies adopted GDPR practices globally because users valued privacy. The global tech community is watching closely. Even outside the EU, experts recognize that Europe is effectively “stress-testing” AI governance with this approach. Success could set a template for other countries; failure would be a cautionary tale.

Broader Impact and Next Steps

With the GPAI Code of Practice now unveiled, several immediate steps are on the horizon. First, the code needs official endorsement. The European Commission and Member State representatives (likely through the AI Act’s governance bodies) will review the final text and formally endorse it, possibly with a political sign-off by EU ministers digital-strategy.ec.europa.eu. Given the code was developed by Commission-appointed experts and has input from many countries, endorsement is expected, though any last-minute objections by Member States could tweak details.

Simultaneously, the Commission has promised to issue guidelines clarifying key concepts, notably defining which AI providers and models count as “general-purpose” under the Act natlawreview.com natlawreview.com. These guidelines, expected by end of July 2025, will help companies know for sure if they fall under the GPAI category (e.g. Does a smaller language model with 500 million parameters count? What about open-source models integrated into products?). This clarity is essential so that firms can determine whether they should bother with the code and related compliance. As of the code’s publication, some uncertainty remains around the exact scope – an uncertainty industry groups have been fretting about iapp.org iapp.org.

Once endorsed, the focus shifts to company sign-ups. The Commission has been actively urging organizations to join. “I invite all general-purpose AI model providers to adhere to the Code. Doing so will secure them a clear, collaborative route to compliance with the EU’s AI Act,” said Commission Vice-President Henna Virkkunen in a statement iapp.org. We will likely see public announcements from major AI players about their stance. Some may jump on board immediately, expressing commitment to trustworthy AI. Others might take a wait-and-see approach, or negotiate certain interpretations of the code’s provisions before signing. The AI Office could facilitate workshops or Q&A sessions with industry to address concerns as companies evaluate signing.

Another key development to watch is the interaction with standards. The AI Act envisions technical standards (through CEN/CENELEC, ISO, etc.) to complement these codes of practice. Now that the code is in place, work may accelerate on formal standards for AI model documentation, risk assessment, transparency reports and so on. These standards could eventually be harmonized with the code, giving companies even more concrete checklists to follow.

Looking further ahead, the true test will come after August 2025 when the AI Act’s provisions take effect. At that point, any GPAI providers (code signatory or not) are legally bound to comply. The EU’s new AI Office (a central enforcement coordinator) will start overseeing compliance, alongside national authorities digital-strategy.ec.europa.eu. If a company is following the code, it should, in theory, be largely in compliance with the law. The AI Office might then use the code as a benchmark in its regulatory inspections. If a company chose not to follow the code, it can still comply via its own methods, but it may face more intense scrutiny to prove it meets the Act’s requirements. As one industry rep noted, everyone will be watching how regulators “assess how closely the code aligns with the Act’s requirements” and how they treat those who embrace it versus those who don’t iapp.org iapp.org.

There is also the possibility that the code evolves. The Commission can update codes of practice as technology changes or if gaps are identified (much easier than amending the law). Indeed, the code itself recognizes the “rapid pace of AI development” and stresses that a “purposive interpretation focused on systemic risk assessment and mitigation is particularly important to ensure [the Safety chapter] remains effective, relevant, and future-proof.” iapp.org iapp.org We might see new chapters or addendums in the future, e.g., if novel AI risks emerge (such as advanced AI agents). The co-regulatory model is meant to be flexible – but it will require ongoing engagement between regulators, industry, and other stakeholders to keep it on track.

Globally, the EU’s move could be influential. Policymakers in the U.K., U.S., Canada, and elsewhere are deliberating how to handle general AI models. The EU’s GPAI Code of Practice serves as a concrete example of a detailed governance framework. If it succeeds – meaning it draws broad participation and demonstrably increases AI transparency and safety – other jurisdictions might adopt similar voluntary codes or even make some aspects mandatory. It could also feed into discussions at international bodies like the GPAI (Global Partnership on AI) or the G7’s Hiroshima AI process, which emphasize AI safety and norms. Conversely, if the code flounders (say, only a few companies sign on, or signatories don’t actually live up to it), it could reinforce skeptics’ views that only hard law works.

In summary, the EU’s Code of Conduct for General-Purpose AI is a bold experiment in governing cutting-edge technology through a mix of voluntarism and impending regulation. Europe is essentially saying to AI developers: prove to us you can be responsible, and we’ll work with you – but if you don’t, our law has sharp teeth ready. The coming months will show how AI giants respond. Will they embrace this chance to shape their destiny, or drag their feet until the absolute legal deadline? One thing is certain: with AI advancing at breakneck speed, doing nothing is not an option. As former Internal Market Commissioner Thierry Breton often put it, Europe aims to ensure AI is developed “with trust and safety by design.” The GPAI Code of Practice is now on the table to make that vision a reality – and the world is watching closely to see if it delivers on its promise of making AI systems safe, transparent, and worthy of our trust iapp.org iapp.org.

Sources

  • European Commission – General-Purpose AI Code of Practice now available (Press Release, 10 July 2025) digital-strategy.ec.europa.eu digital-strategy.ec.europa.eu
  • IAPP News – European Commission receives final version of GPAI Code of Practice (C. Andrews, 10 July 2025) iapp.org iapp.org
  • Nelson Mullins Law Alert – EU Commission Publishes GPAI Code of Practice: Compliance Obligations Begin Aug 2025 (J. Kelly et al., 11 July 2025) nelsonmullins.com nelsonmullins.com
  • Euronews (Opinion) – Pivotal moment for EU AI regulation: Who will lead the GPAI code of practice? (K. Zenner & C. Kutterer, 26 Aug 2024) euronews.com euronews.com
  • Hunton Andrews Kurth Privacy Blog – EU Publishes General-Purpose AI Code of Practice (14 July 2025) natlawreview.com natlawreview.com
  • Reuters – “Artificial intelligence rules to go ahead, no pause,” quoting EC spokesperson (T. Regnier, 4 July 2025) nelsonmullins.com
  • Various expert statements and letters via IAPP, CCIA, ITI, and EU officials iapp.org iapp.org, illustrating the range of reactions to the new code.
