Claremont-UC Undergraduate Research Conference on the European Union

The EU's Capacity to Lead the Transatlantic Alliance in AI Regulation

In the face of Chinese advances in AI in terms of technological prowess and influence, there has been a call for collaboration between the EU and the US to create a foundation for AI governance based on shared democratic beliefs. This paper maps out the EU, US, and Chinese approaches to AI development and regulation as we analyze the capacity of the US and EU to establish international standards for AI regulation through channels such as the TTC. As the EU rolled out a proportionate and risk-based approach to ensure stricter regulation for high-risk AI technologies, it laid the foundation for international rule-shaping in the AI domain. The important question is whether the EU can effectively collaborate with the US in response to China’s aggressive AI initiative and, more importantly, lead the transatlantic effort to become the “world-leading region on developing and deploying cutting-edge, ethical and secure AI.”


Introduction
In the face of Chinese advances in AI in terms of technological prowess and influence, there has been a call for collaboration between the EU and the US to create a foundation for AI governance based on shared democratic beliefs (Sheehan et al., 2021). The ethical impetus underlying the EU's current regulatory AI framework, particularly at this moment of fraught geopolitical rivalry and weakening democratic norms, represents a watershed moment in global politics. It is the first-ever AI legal framework to impose ethical guidelines and requirements as well as compliance mechanisms with extraterritorial reach. Thus, the motivating question of our inquiry is not only whether the EU can cooperate with the US in forming the guidelines for ethical and secure AI, but whether the Union can effectively leverage its first-mover AI policy status, coupled with its digital strategy, by harnessing its market power and regulatory reach to lead the transatlantic effort.
Before diving into the work that has been done to regulate AI, we first need to consider the definition and scope of the system of technologies in question. Historically, philosophers and computer scientists have looked to the Turing Test as a definition of a machine's ability to display intelligent behavior. By defining AI as systems that strive to replicate human behaviors, we find most of the disciplines of AI: natural language processing, knowledge representation, automated reasoning, computer vision, and machine learning. Looking beyond the Turing Test as an incomplete definition of the mechanics of AI, Stuart Russell and Peter Norvig define AI as "the study of agents that receive percepts from the environment and perform actions" (Russell & Norvig, 2010). Each such agent implements a function that maps "percept sequences to action" to achieve the best outcome, or a constrained optimal outcome, for a given environment and performance parameters, independent of idealized human approaches. From this definition, we can see a source for the definition agreed upon by the European Commission's High-Level Expert Group on Artificial Intelligence: "systems that display intelligent behaviour by analysing their environment and taking actions--with some degree of autonomy--to achieve specific goals" (European Commission, 2019). This definition of AI is echoed in policy documents from the United States, the United Kingdom, and the OECD. With similar understandings, we can appreciate the importance of AI as it rapidly diffuses across all sectors of society, from information and communication networks and social media platforms to finance, healthcare, and transportation, to defense and military applications.

Assessing the EU's Position in the AI Landscape
Authors of the recent National Security Commission Report on Artificial Intelligence assert: "The race for AI supremacy is not like the space race to the moon. AI is not even comparable to a general-purpose technology like electricity. However, what Thomas Edison said of electricity encapsulates the AI future: 'It is a field of fields ... it holds the secrets which will reorganize the life of the world'" (NSCAI, 2021, p. 7). The race for AI supremacy has been discussed at great length, and it sheds light on who will lead the coming 'AI era' not only in terms of technological prowess but also in terms of regulatory reach. As the following overview of various assessments of global AI competition reveals, the EU is characterized as a laggard, but scrutiny of these reports in terms of the metrics and methodologies employed permits a slightly different conclusion, one in distinct contrast to the media-hyped messaging around the AI race as one chiefly of two contestants--China and the United States.
No other report reinforces the race-of-two narrative better than the Center for Data Innovation's assessment published in 2019 and updated in 2021 (Castro et al., 2019; Castro & McLaughlin, 2021). The clear takeaway in both reports was that the United States maintains a first-place lead in AI while China continues to close the gap and Europe falls further behind. The report examines the progress China, the European Union, and the United States have made in AI relative to each other--examining 30 metrics across six categories: talent, research, development, hardware, adoption, and data--finding that the United States leads in four categories (talent, research, development, and hardware) and China in two (adoption and data). Yet, as their own overall rankings indicate, the EU is second in four of the six categories (talent, research, development, and adoption). Furthermore, as the authors point out in the opening summary of the 2021 report, the EU's performance in overall AI strength, when controlled for size of labor force, also situates it in second rather than last place. Additionally, their updated analysis showed a rather mixed picture when the EU was compared with the US. For instance, EU venture capital and private equity funding as a percentage of US funding increased from 13% to 22% between 2016 and 2019, and the quality of EU AI research papers increased as measured by the field-weighted citation impact (FWCI), whereas that of the US decreased (Castro & McLaughlin, 2021, p. 3). Moreover, the authors acknowledge that many of the indicators are proxies, that there is no authoritative and reliable industrial classification for firms developing AI technologies, and that government R&D spending is not considered. The analysis also fails to note that while American tech giants such as Amazon and Google allow the US to reap tremendous benefits solely because of the location of their headquarters, much of their success derives from their business activities in European and Asian markets.
Another widely cited gauge of AI competitiveness is the Global AI Index published by Tortoise Media. The Global AI Index analyses 54 countries and creates an index underpinned by indicators very similar to those of the Castro et al. (2019) report. Its rankings place the United States and China at numbers one and two, but, interestingly, France and Germany, the EU's two largest Member States, are ranked 5th and 6th, and Ireland comes in at number 10 (Tortoise Media, 2022). With the findings of both reports in consideration, it seems a misinterpretation to constantly describe the EU as an AI laggard.
A final report we reviewed to gain insight into the debates and different measures of AI competitiveness was the 2020 Carnegie Endowment for International Peace working paper Europe and AI: Leading, Lagging Behind, or Carving Its Own Way?, authored by Erik Brattberg, Raluca Csernatoni, and Venesa Rugova. What is most interesting about this report is the inclusion of a measure for the strength of regulatory and ethical frameworks, reinforcing our view that both empirical and normative factors should be considered in providing an overall assessment of AI competition. In sum, these authors conclude: "Europe is well positioned to help establish best practices and set global standards and norms to steer the future direction of AI developments toward applications that truly have meaningful values for societies and ensure the security of citizens. With its human-centric focus, the EU's strategy is distinctive, given its emphasis on the trustworthy and secure development of AI in full respect of citizens' rights, freedoms and values" (Brattberg et al., 2020, p. 32). While the authors acknowledge that the EU may be carving its own way, they also caution that normative principles and regulations will not be sufficient to be a global AI leader. Their report was published one year prior to the roll-out of the EU's AI Act and its accompanying data strategy, and as our following analysis shows, the framework should simultaneously promote innovation while providing personal protection.
What is most significant about the literature discussing the AI landscape is how the economic market share derived from the adoption of AI applications is generally assumed to be a zero-sum game; Chinese advances are viewed as major threats to the West, largely on the assumption that big-push industrial policies will create competitive advantage within the knowledge economy. We contend that this assumption should not be taken for granted, since advances in AI research, as well as the development of AI by corporations, would be economically beneficial for all countries.

The European Union Approach to AI
The EU AI Act

Figure 1: EU AI Timeline

As the timeline we have constructed in Figure 1 indicates, the European Union has been deliberating publicly over its AI strategy for at least the past three years. Although spearheaded by the European Commission in its traditional role as the sole proposer of legislation, the development of the EU's AI approach has clearly been a multi-stakeholder process of consultation with technologists, researchers, ethicists, business leaders, as well as policy makers at all levels of government.
In sum, the EU's Regulation Laying Down Harmonized Rules on Artificial Intelligence (commonly referred to as the AI Act) takes a "proportionate and risk-based approach" that ensures stricter regulation for AI technologies that pose a higher risk to our lives. The framework classifies the use of AI into the following four categories:
1. Minimal- or no-risk applications, such as spam detector AIs, which are permitted without restrictions.
2. Limited-risk applications, such as ticket booking AIs, which are subject to transparency obligations.
3. High-risk applications that "interfere with important aspects of our lives," such as job application filtering AIs, which are subject to several different obligations, including checks to ensure unbiased and high-quality training data and sufficient oversight in the design and implementation of the system.
4. Prohibited applications that "use subliminal techniques to cause physical or psychological harm to someone," such as social scoring systems, which would be banned altogether.
In short, Commissioner Vestager in her press conference announcing the legislation stressed that the legal framework "shapes the trust we must build if we want people and businesses to embrace AI solutions" (European Commission, 2021c). Most importantly, rather than a rights-based or values-based approach, it pursues a proportionate, risk-based one, ensuring that the framework encourages innovation and shapes competitive advantage.
Thus, the EU is insistent that the regulation will be proportionate, with the strictest rules applied only to high-risk AI while the rest remain voluntary, non-binding guidelines. This will prevent overburdening entrepreneurs or stifling innovation through complex and unnecessary requirements and compliance costs. The EU especially wants to avoid being "excessively prescriptive" or putting disproportionate burden on small and medium-sized enterprises (SMEs), as it also stressed in the White Paper (European Commission, 2021a). One of the most innovative aspects of the legislation that shows the commitment to promoting SMEs' capacity to use AI is the regulatory sandbox instrument: "The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation" (European Commission, 2021b, p. 17). This element is yet another attempt by the EU to encourage SMEs to innovate while ensuring compliance with the regulation. As Anu Bradford commented: "The US and China have been the ones that have been innovators, and leading in investment into AI, [b]ut this regulation seeks to put the EU back in the game. It is trying to balance the idea that the EU needs to become more of a technological superpower and get itself in the game with China and the US, without compromising its European values or fundamental rights" (Espinoza & Murgia, 2021).

The Broader EU Digital Strategy
The EU's AI regulation needs to be understood as an integral component of the much broader digital strategy based on the Electronic Commerce Directive (European Union, 2000) and now articulated in the recently released 2030 Digital Compass. The accompanying Digital Services Act, Digital Markets Act, Data Governance Strategy, and Data Act--the last two of which we will analyze in further detail--build on the e-Commerce Directive alongside the AI Act with concrete mechanisms to achieve the EU's digital transformation objectives by 2030. The AI legislation is not only a major part of the EU's digital transformation but is also intrinsically related to the accompanying legislation proposed. We contend that these pieces of legislation will bolster the EU's potential to become a global AI leader in terms of both technological competitiveness and regulatory power.
Before we delve into the EU's current data strategy, we would be remiss if we did not discuss the EU's flagship General Data Protection Regulation (GDPR), which set the global standard for access to personal data. The GDPR's "privacy by design" principle, incorporated in Article 25 of the regulation, increasingly ensures that products are designed to a single standard. Though it will require much more scrutiny of how the AI regulation is finalized, particularly regarding where the boundaries lie between high- and low-risk AI applications, the GDPR will be inextricably tied to its implementation and sets the foundation for the EU data strategy, which aims to achieve the Union's objective of "global competitiveness and data sovereignty" (European Commission, 2022).
In order to realize this vision, the European Commission adopted the Regulation on European Data Governance (commonly called the Data Governance Act) on 25 November 2020, which proposes to establish several mechanisms that boost data availability and sharing across states and sectors through the re-use of certain public data, the potential anonymization or pseudonymization of personal data, and methods to allow individuals and non-profit companies to provide consent to "process personal data pertaining to them" (European Commission, 2020a).
The Data Governance Act (DGA) will have major implications for the EU's AI framework, as access to large datasets is crucial to accurately train models in today's AI systems. A major critique of the EU's AI prowess is its lack of public, easily accessible data. In fact, the EU places behind the US and China in nearly every single data metric in the Castro et al. (2019) report, having considerably less accessible data than the US and especially China in domains ranging from the internet of things to productivity. The DGA is what the European Commission hopes will enable the EU to become a leading data economy and, in the process, a giant in the AI space, all while remaining true to its values and principles. In fact, the impact assessment support study on the DGA projects savings of approximately €120 billion a year in the EU health sector and up to €20 billion a year in transportation labor costs with the increased availability of health and mobility data alone (European Commission, 2020b). Furthermore, the European Digital SME Alliance, the largest European network of information and communication technology SMEs, strongly welcomes the regulation, though concerns remain regarding the lack of legal clarity when it comes to data protection, privacy, and intellectual property (European Digital SME Alliance, 2021).
On February 23, 2022, the EU unveiled the second key pillar of its 2020 European Data Strategy: the Regulation on Harmonised Rules on Fair Access to and Use of Data--now called the EU Data Act. The Data Act proposes major legislative overhaul in the way data is accessed, shared, and leveraged. The act proposes fundamental changes in the design of products in a way that makes associated data easily accessible; an "unfairness test" in data sharing contracts between businesses to prevent exploitation of SMEs; several data interoperability measures to facilitate switching between data processing services; as well as obligations to protect EU-held non-personal data from international access (European Commission, 2022).
The Data Act significantly builds upon the DGA's measures to increase data availability and furthers the Digital Markets Act's (DMA) objectives by introducing data interoperability measures and measures to empower SMEs, as well as to induce more actors to participate in the data economy. In advancing the objectives of the DGA and DMA, the Data Act will have positive implications for the development of EU AI by bolstering the data economy. The territorial scope of the provisions detailed in the legislation is also noteworthy, as the data in question are not simply dependent on the location of the data provider but cover any data placed on the EU market, including non-personal data (Allen & Overy, 2022). The ubiquity of data and the size of the EU market will strongly push non-EU digital service providers to make appropriate changes.
Compared to the DGA, the Data Act is very bold in what it aims to accomplish and is thus likely to encounter major pushback by the industry and governments alike (Perarnaud & Fanni, 2022). In fact, in the Commission's May 2021 Inception Impact Assessment, there was strong opposition from the industry towards the proposed data transfer provisions (European Commission, 2021d). The obligatory nature of the provisions and restrictions on data sharing with non-EU countries has already caused technological and automotive industries to push back (Bertuzzi, 2022). Most concerning, however, is the clear division among EU member states regarding their views on the Data Act, with the Netherlands even publishing a non-paper on the Data Act in January criticizing the proposed data-transfer obligations among other provisions (Kingdom of the Netherlands, 2021). The EU has been increasingly cautious in their digital strategy to avoid being overly prescriptive with their legislation for good reason--evidenced especially by their choice to implement a risk-based approach for the AI Act--and while the Data Act is a noticeable shift away from this trend, we will have to wait to see how the legislation evolves over the next year at least to better gauge its potential impact on the EU's AI space.

The United States Approach to AI
Unlike the EU with its GDPR regime, the United States lacks a comprehensive, centralized set of regulations that protect the privacy of personal information. As evidenced by our discussion of the GDPR, data and their respective legislative protections in the EU lay the foundation for the current efforts to establish AI frameworks and policies. Thus, the lack of directly implemented policies and protections for American citizens and consumers signals a key difference in the US government's approach to developing and regulating emerging technology: prioritizing military and commercial competitiveness over ethics-based regulation.

Executive Approaches
Russell Vought of the Office of Management and Budget was unequivocal in this general policy disposition, underscoring the Trump Administration's reluctance to regulate AI in his memorandum to the White House executive departments: "Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth (...) Agencies should consider new regulation only after they have reached the decision, in light of the foregoing section and other considerations, that Federal regulation is necessary" (The White House, 2020). The Biden Administration's approach, however, is more open to international regulatory cooperation and receptive to a values-based approach to AI policy. Through appointments and prioritization of the Office of Science and Technology Policy (OSTP), Biden aims to invest in AI both as a geopolitical tool and a scientific advancement, unlike Trump's approach of utilizing AI solely as a geopolitical tool (Hao, 2021). Dr. Alondra Nelson and Dr. Eric Lander of the OSTP emphasize the need for an AI Bill of Rights to "clarify the rights and freedoms we expect data-driven technologies to respect," specifically citing sources of problems within AI systems such as datasets insufficient to represent American society in cases of discriminatory arrests, and algorithmic tendencies toward extreme bounds such as negative "sentiment analysis" of race, gender, and general internet trends (Schwartz, 2021). In recognition of these issues, Lander & Nelson (2021) shift the focus from the lack of regulation to the competitive landscape: "In the United States, some of the failings of AI may be unintentional, but they are serious and they disproportionately affect already marginalized individuals and communities... In a competitive marketplace, it may seem easier to cut corners. But it's unacceptable to create AI systems that will harm many people."
The intentional avoidance of regulation under previous presidencies has allowed companies to operate without regulatory checks or adherence to data privacy concerns. At this time, federal remedial steps to address data privacy, marginalization, safety, and security may influence some change in how corporations develop AI. The OSTP proposes the use of federal contracts to increase adherence to the potential AI Bill of Rights. However, the leverage of federal contracts pales in comparison to the EU's capacity to penalize regulatory infractions. Thus, working with the EU through the EU-US Trade and Technology Council (TTC) to address emerging technologies may allow the US to regain some semblance of regulatory capacity over tech companies domestically and abroad.

National Security Commission on Artificial Intelligence
Since 2018, the National Security Commission on Artificial Intelligence (NSCAI) has encouraged AI innovation by channeling spending and centralizing resources on defense. The NSCAI's 2021 report evaluates the status and prospects of AI development in the US in the context of national security and the foreign threats posed to the US AI agenda. In addition to the increased efforts in development, the report includes policy and societal approaches to help foster the development of safe and accepted AI. Efforts like establishing "Justified Confidence in AI Systems" aim to ensure democratic values are secured with regard to privacy, civil liberties, and civil rights (NSCAI, 2021).
Like the EU in its assessment of high-risk AI technology, the NSCAI recognizes that, due to the lack of complete transparency into the mechanics of AI algorithms and methodology, results may seem unreliable and unpredictable, thus undermining confidence in AI. The report goes on to recommend that the US government ensure that faith and trust are secured throughout the lifecycles of AI technologies through "AI Risk Assessment Reports" and "AI Impact Statements" (NSCAI, 2021, Chapter 8).
Overall, the NSCAI report paints a picture of the US as significantly behind the Chinese government in competing in AI markets. In terms of policy recommendations and government oversight, the report mentions the need for regulations and for concrete definitions of high-risk AI and accountability. Nevertheless, the vagueness of these recommendations underscores the upper hand the EU has in leading the transatlantic AI effort.

US Congress and Federal Agencies
Current congressional debates indicate a growing interest in pursuing regulation of emerging data-based technologies through federal antitrust legislation. Through the Federal Trade Commission Act (1914) and the Equal Credit Opportunity Act (1974), the legal basis exists to prevent corporations from using "unfair or deceptive practices" or "biased algorithms" on the basis of race, color, religion, sex, age, and more. Despite limited resources, the recent antitrust litigation against tech giants at both the state and federal level indicates the US's willingness to enforce stricter and more rigorous rules for commercial development of artificial intelligence-based technologies. The information displayed in Table 1 encapsulates the broad approach the US is taking regarding AI regulation. Overall, the Congressional approach to regulating and promoting the influences of AI has shifted to address algorithmic impact and data privacy. In addition, the Federal Trade Commission (FTC) has developed a new approach to target corporations utilizing deceptive data practices: "algorithmic destruction" (Kaye, 2022). With examples such as Weight Watchers in 2022 and Cambridge Analytica in 2019, the FTC has increased enforcement against duplicitous data harvesting systems.
Mirroring the EU's risk-based regulatory framework, the National Institute of Standards & Technology (NIST) released the AI Risk Management Framework: Initial Draft in March 2022. As the framework provides guidance for the AI lifecycle, it coincides with many principles of the EU AI Act: robustness, safety, privacy, accountability, and transparency. However, as an initial set of voluntary guidelines, the risk management framework (RMF) lacks explicit protection against discrimination as well as protection for consumers and the environment. It also lacks established risk profiles to guide regulatory enforcement. As such, this initial American approach to AI risk management offers an opportunity to develop collaborative international standards, depending on the eventual synergy between the future NIST classifications and the established EU classifications for risk in AI.
Significantly less of the American legislation considers the ethical implications of algorithmic technologies. Additionally, the US does not guarantee privacy as a fundamental right in the same way as the EU does. However, with recent advancements in corporate data collection and algorithmic oversight in the US, the transatlantic alliance may converge on standards for personal digital identity protection. Thus, specific differences in ethics and values are yet to be explored, pending further legislation from the US. The EU-US Trade and Technology Council will enable a united push to set the international standards on "ethical and secure" AI technologies and provide a stronger counter to Chinese AI advancements in R&D and regulatory influence.

Transatlantic Discrepancies: The CLOUD Act vs. the EU Data Regime
The US Clarifying Lawful Overseas Use of Data (CLOUD) Act serves as a framework for facilitating bilateral international agreements on transferring personal electronic data and evidence for use in criminal proceedings. The act was enacted on March 23, 2018 to allow US law enforcement agencies to access data stored in other countries. The crux of the problem is that corporations operating on US soil with data stored in EU locations must comply with the CLOUD Act, while the data stored within EU boundaries must also comply with GDPR guidelines.
The CLOUD framework differs from EU guidelines and existing legislation in two ways: the security of personal data and the autonomy of corporations. This is exemplified by the Schrems II case in the context of transatlantic data sharing. As stated in Rojszczak (2020): "The essence of the Schrems case, as well as other judgments of the CJEU regarding the transfer of personal data outside the EEA, was the assessment of whether the legislation of a third country actually contained adequate mechanisms to protect the rights of data subjects, due to which there was no risk of breach of the guarantees arising from the Charter of Fundamental Rights (CfR)." Thus, the ability to protect individual rights depends on the existence of an appropriate bilateral agreement linking the United States with the country whose laws would be violated due to a US warrant. As there has not been an explicit agreement established between the US and the EU, the CLOUD Act is unenforceable for corporations like Microsoft within the EU. As the US Supreme Court noted in a patent infringement case, Microsoft Corp. v. AT&T Corp., "United States law governs domestically but does not rule the world." The result of this case shows that an agreement needs to be made between these two jurisdictions to further the goal of a transatlantic AI framework. As the CLOUD Act attempts to establish extraterritorial access to data, it is in contradiction with the EU's GDPR and most likely the recently proposed EU Data Act, which addresses the protection of EU-held non-personal data from international access. On another note, Rep.
Mike McCaul, a co-chair of the Congressional Internet Caucus and ranking member of the House Foreign Affairs Committee, voiced concerns about the legislation proposed in the EU with respect to digital strategy, such as the Digital Market Act, claiming that it would "localize the cloud to only the EU," preventing "free flow of information" (Baksh, 2022).
The success of the transatlantic effort hinges upon the resolution of these conflicts, including data-sharing agreements that allow multinational corporations to conduct business. In fact, resolving these conflicts is among the items on the TTC agenda waiting to be addressed.

EU-US Trade & Technology Council
Established under the direction of US President Biden and European Commission President von der Leyen in June 2021, the EU-US Trade and Technology Council serves as the preliminary forum for bilateral international agreements with respect to emerging technologies. Through this Council, ten working groups have been established to address topics such as technological standards cooperation as well as data governance and technological platforms. Through the established Joint Technology Competition Policy Dialogue and goals to expand trade and investment, reduce technical barriers in trade, and facilitate cooperation, this collaboration provides common ground for establishing standards on an international level (countering those of the Chinese).
The second meeting of the TTC, scheduled for May 2022, originally planned to address sharing mechanisms for economic data and shared definitions of trustworthy AI (Bruwell, 2022). However, due to the crisis in Ukraine, analysts predict that the TTC will instead address the economic isolation of Russia as a coordinated effort between the US and EU. Thus, the timeline for convergence of AI standards is in question.

The Chinese Approach to AI
As Rep. Robin Kelly (D-Ill.) asserted in the EU parliamentary hearing, "[n]ations that do not share our commitment to democratic values are racing to be the leaders in AI and set the rules for the world…We cannot allow this to happen" (Overly & Heikkilä, 2021). The EU and US are starkly aware of the challenges a non-democratic actor may pose to the international order with regard to AI governance. It is nonetheless important to establish empirically the Chinese approach to AI, as China aims to portray itself not only as a technical leader but also as a global standard setter and ethical authority.
According to a study by the Center for Strategic and International Studies (Livingston, 2021), the astonishing growth in Chinese technical progress is a direct result of the Chinese Communist Party's (CCP) increased involvement in the management of the domestic economy. Through the promotion of "party building" provisions and increasingly symbiotic relationships between corporations and the CCP, Beijing can influence the technology sector with more precision and to a far greater degree than the US or the EU can. In addition, with ambitious efforts by the Chinese government to increase its influence in global technical standard-setting processes (Clarke, 2021), China is positioning itself as a heavyweight in international technology policy and standards. Examples can be seen in 5G technology and global navigation systems. Currently, Huawei, China's national flagship telecommunications company, holds 15.39% of 5G patent families, with Chinese companies together covering over 30%. Huawei is followed by the US firm Qualcomm at 11%, and China's country share is followed by South Korea's at 16% (Samsung and LG). In addition, China's BeiDou navigation satellite system is rivaling the US GPS system, with countries like Pakistan switching over from long-standing GPS constellations.
While the exponential rise in Chinese technological prowess is clear, what of China's ethical vision? The Beijing Academy of Artificial Intelligence (BAAI) published the Beijing Artificial Intelligence Principles (Table 2), outlining its perception of the Chinese government's ethical principles. These principles mirror those of the EU and the OECD. However, the steps taken to cement China as an AI leader call these frameworks into question, as China's published ethical frameworks tend to contradict its actions. Because the fundamental beliefs about freedom, privacy, and security held by the Chinese Communist Party do not coincide with those of the EU and US, similarities in ethical frameworks or general AI development approaches may be limited to the broad objectives of "Doing Good" and "Being Responsible." In addition, the authors of the 2020 Carnegie Endowment for International Peace report point out that Chinese AI products are becoming more difficult to export as Western governments' concerns about data privacy standards and security risks grow, making it even harder to encourage other countries to adopt Chinese standards (Sheehan et al., 2021). Nonetheless, the very fact that the Chinese AI strategy includes a narrative about ethical AI illustrates the EU's capacity as a norm shaper, given that the Chinese vision, hollow as it may be, mimics the EU and OECD ethical frameworks.

Conclusion
The EU's normative imprint is apparent from even a cursory glance at our overview of the different AI ethical principles and guidelines. The EU has exercised global leadership as a norm entrepreneur by inspiring the emphasis on human-centric, trustworthy, ethical, and secure AI. The key difference is that the US framework is voluntary and lacks the compliance mechanisms the EU has promulgated in its regulation. The Chinese ethical principles must be considered with circumspection, especially in contrast to China's current practices and uses of AI, such as widespread facial recognition for social scoring, which the EU's regulation expressly prohibits.
As is clear from our more in-depth analysis of the United States' strategy on AI, the transatlantic alliance's prospects of becoming a global standard setter in the AI field may hinge on whether the two regions collaboratively address issues of algorithmic transparency and data privacy. The newly formed EU-US Trade and Technology Council should serve to promote greater cooperation and coordination. The crucial question is whether the US will be able to implement pending legislation and regulatory frameworks and follow through on the purported shared values. In case of stalled progress from the US, the EU seems prepared to assert its strategic autonomy and pursue with all due speed the role of a global leader through a values-based and human-centric approach to the pervasive and defining technology of the 21st century. We believe it will do so with the force of public opinion and moral weight on its side.
The EU's overarching premise behind its regulatory and ethical approach to AI is centered on building trust and promoting understanding of what AI is and how it is used. Public sentiment on both sides of the Atlantic seems well aligned with the spirit and the letter of the EU's approach: building public confidence while remaining open to the benefits AI can bring to humanity. This emphasis, together with the EU's first-mover advantage, leads us to conclude that the EU indeed has the capacity to steer the transatlantic alliance towards a more ethically grounded and transparent AI era.