
Generative AI: Regulatory differences across the U.S., EU, and China


PHAEDO JUNE 2023

PERSPECTIVE








Table of Contents



Executive Summary

Introduction

What is Generative AI?

    Sources of Information for Generative AI

    Types of Knowledge

Parenting Generative AI

    China and Generative AI

    U.S. and Generative AI

    EU and Generative AI

Generative AI and the Meaning of Reality





Averill Campion, PhD

Founding Director, PHAEDO

June 2023


Executive Summary

This analysis examines the basis for the differences in approaches to regulating generative AI technology among the U.S., the European Union, and China. The release of OpenAI’s ChatGPT in November 2022 signaled an unprecedented leap in artificial intelligence capabilities. As a result, governments have taken initial steps toward control. In line with this, the U.S., China, and the EU have very different initiatives for controlling generative AI: (1) the U.S.’s “Blueprint for an AI Bill of Rights,” (2) China’s draft Measures for the Management of Generative AI Services, issued through its cyber super-regulator, the Cyberspace Administration of China (CAC), and (3) the EU’s AI Act. This perspective paper explores the meaning behind these differences and distinguishes three styles of parenting generative AI.





Keywords: generative AI, large language models, ChatGPT, regulation

1. Introduction

Since 2017, technological developments have advanced from big data and data science analytics toward the realization of artificial intelligence and machine learning, both supervised and unsupervised by humans. Governments have openly acknowledged the AI arms race, focusing mainly on the deep-tech competition between the U.S. and China. At the same time, the American, European, and Chinese governments also recognize the need to exert some control over artificial intelligence to reduce potential harm to citizens.

The release of OpenAI’s ChatGPT symbolizes the advent of generative AI, yet the future of specific commercial products for generative AI is still uncertain and speculative. A handful of companies such as Google, Microsoft, Baidu, and Alibaba are rushing to commercialize this technology, as the market potential for these products is projected to reach tens of billions of dollars in revenue.

Yet one key distinction between private-sector proprietary knowledge in the U.S. and China is that private Chinese companies are directly interconnected with the Chinese Communist Party (CCP), which creates a data- and information-sharing relationship between government and industry that is dissimilar to the industry-government relationships in the U.S. and the EU. In China, the idea is to ensure that government can interfere with business, so that the two are effectively one and the same.

In contrast, private companies in the U.S. are not obligated to share general intellectual property with the government. Many tech companies now sell their software “as a service,” which makes it even more difficult to understand the inner workings of the technology. American companies like OpenAI are “openly” guarding their trade secrets and data.[1] This exemplifies how tech companies are protected from government interference in America.

The foundational principle of the United States is the protection of the individual from government, and this seems to have translated to the realm of companies as well; government “interference” thus takes on a different form in America. The Declaration of Independence was written in the context of ending what was perceived to be too much (British) government control over the colonies. The Constitution thus aims to ensure individuals’ freedom to choose among options, rather than taking that freedom away by forcing a certain action.

On the other hand, in the EU the intention is to enable government to protect individual freedoms, like the protection of personal data, from private-sector opportunism and behavioral monitoring. The General Data Protection Regulation (GDPR) is the case in point of this type of regulatory action, which aims to harmonize the application of rules throughout member states.

In line with this, the U.S., China, and the EU have very different initiatives for controlling generative AI: (1) the U.S.’s “Blueprint for an AI Bill of Rights,” (2) China’s draft Measures for the Management of Generative AI Services, issued through its cyber super-regulator, the Cyberspace Administration of China (CAC), and (3) the EU’s AI Act.


2. What is generative AI?

Generative AI, the technology behind ChatGPT, can be broadly defined as an AI system capable of producing new content, ranging from text to images and audio, by recognizing patterns learned from existing data. Other examples of LLMs besides OpenAI/Microsoft’s ChatGPT are Google’s Bard, Baidu’s Ernie, and Alibaba’s Tongyi Qianwen. One main distinction of generative AI is that it “generates new data based on the training data”[2].

In particular, ChatGPT is a large language model (LLM), a type of generative AI that has acquired its robustness from a combination of computational power, access to vast amounts of data, billions of parameters, and human feedback. By contrast, “traditional AI” mainly works with numbers, more rarely with text (as in natural language processing), and its machine learning models make predictions and classifications on new, unseen data[3].

LLMs are known for their ability to generate natural-sounding language[4] through machine learning. LLMs can be defined as deep learning models, or neural networks, trained on vast amounts of text with millions or billions of parameters; they can answer questions, summarize documents, classify text, and generate text. GPT-3, for example, contains over 175 billion parameters.
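
To make the idea of “generating new data based on the training data” concrete, here is a toy Python sketch (an illustration only, not how any production model is built) that learns which words follow which in a tiny corpus and samples likely next words. Real LLMs replace this frequency table with a neural network holding billions of parameters, but the underlying task, next-token prediction, is the same.

```python
import random
from collections import defaultdict

# Tiny stand-in "training data"; a real model would ingest terabytes of text.
corpus = "the model learns patterns from text and the model generates text".split()

# Record which words follow which in the training data.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate new text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no known continuation; stop generating
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the model learns patterns from text and the model"
```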

Several factors are crucial to building a generative AI model: billions in funding; training data encompassing a majority of the information on the Internet[5] (e.g., 45 terabytes of text); and massive amounts of processing power (e.g., 285,000 processor cores and 10,000 graphics cards), which is both costly and environmentally unfriendly. While exact estimates are unknown, it has been hypothesized that the electricity consumed to run ChatGPT each month is equivalent to the consumption of 175,000 people, with an estimated 1,287 MWh required for training[6]. This helps explain why only a handful of companies are currently able to do this.

2.1 Sources of Information for Generative AI

Tech companies are secretive about the data they use to train these AI models. OpenAI has not disclosed its training data, but the Washington Post, through an analysis of Google’s C4 data set, reveals that the data used to train Meta’s LLaMA and Google’s T5 comes from around 10 million websites, dominated by the journalism, entertainment, and software development industries[7]. Well-known websites like Wikipedia and Scribd, and even state voter registration databases, were also included, according to the Post investigation.

This also raises a question: if websites like Scribd are subscription-only digital libraries, then the costs of these projects must also cover the fees for accessing scientific journals or other paywalled information, which must be very expensive. From an ethical standpoint, an argument could be made for why this kind of locked-up knowledge isn’t open source so that ordinary people have access to it; it is financially straining for a curious individual to pay for all of those subscriptions on a topic of interest, for example.

The engineers at OpenAI indicated that they improved the quality of the Common Crawl dataset by filtering it, performing fuzzy deduplication to remove redundancy, and adding known high-quality reference corpora to the training mix[8]. Even when reinforcement learning from human feedback is applied to a model with billions of parameters, the sources of information raise a bigger question of what we mean by “knowledge,” especially high-quality knowledge.
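
As a rough illustration of what fuzzy deduplication involves, the sketch below compares documents by the overlap (Jaccard similarity) of their character n-grams and drops near-duplicates. This is a simplified assumption: the GPT-3 paper reports using MinHash-based deduplication to make this feasible at scale, and the 0.8 threshold here is an arbitrary choice.

```python
def ngrams(text: str, n: int = 3) -> set:
    """Character n-grams serve as a cheap fingerprint of a document."""
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Overlap between two fingerprint sets, between 0 and 1."""
    return len(a & b) / len(a | b) if a | b else 0.0

def deduplicate(documents: list, threshold: float = 0.8) -> list:
    """Keep each document only if it is not near-identical to one already kept."""
    kept, fingerprints = [], []
    for doc in documents:
        fp = ngrams(doc)
        if all(jaccard(fp, seen) < threshold for seen in fingerprints):
            kept.append(doc)
            fingerprints.append(fp)
    return kept

docs = [
    "The quick brown fox jumps over the lazy dog.",
    "The quick brown fox jumps over the lazy dog!",  # near-duplicate: punctuation only
    "Large language models are trained on web text.",
]
print(deduplicate(docs))  # the near-duplicate second document is dropped
```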


2.2 Types of Knowledge

There are, of course, different types of knowledge: in general, there is explicit knowledge and tacit knowledge, i.e., information that is codifiable versus information that can only be obtained through personal experience, insight, or wisdom. Explicit knowledge, which is easily transferable and extractable, covers the information that can be obtained from the Internet through encyclopedias, books, and scientific articles. In the case of LLMs, this type of knowledge is then shaped through reinforcement learning from human feedback and the model’s parameters. However, there are limitations to the text that can be generated from this information and the answers a chatbot can provide. This becomes an epistemological question.

If LLMs are supposed to enhance decision making for humans by providing summaries, answers, and generated text, the source of this knowledge can be classified as “know-that” knowledge, or knowledge of facts. In epistemology, there are three types of knowledge: knowledge by acquaintance (like knowing a parent or friend), knowledge-that (the knowledge gained from facts), and knowledge-how (knowing how to play the piano, for instance). While it is beyond the scope of this paper to analyze the different facets of this debate, we know that tacit knowledge is important. In simple terms, gaining tacit knowledge is why researchers visit labs in other countries: to obtain insight and wisdom from a renowned scientist in a way that cannot be transmitted through other means. This is the basis of corporate espionage and intellectual property theft: obtaining implicit knowledge about something that is embedded in a context.

In its current form, an LLM can present a summary of facts and synthesize insights from text by inferring patterns and relationships among words. Decision making is thus enhanced, or supported, through an effective presentation of “know-that.” Developers must attempt to understand whether it is possible to make a model learn from artificial experience and have a sort of “self-attention,” in the same way a human acquires wisdom from experience and interaction with others, or the insight that comes from “know-how.” Currently, ChatGPT uses a “multi-head” attention mechanism, which applies the attention operation several times in parallel rather than performing it once[9].
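
The sketch below is a minimal NumPy illustration of this mechanism, not ChatGPT’s actual implementation: each token’s output becomes a weighted mix of every token’s representation (self-attention), and several independent “heads” run in parallel before their outputs are concatenated (multi-head attention). The dimensions and random weights are toy values chosen for readability.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Normalize raw scores into attention weights along the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(x: np.ndarray, w_q, w_k, w_v) -> np.ndarray:
    """Scaled dot-product self-attention for a single head."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v      # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])  # token-to-token relevance
    return softmax(scores) @ v               # weighted mix of all tokens

rng = np.random.default_rng(0)
seq_len, d_model, n_heads = 4, 8, 2          # 4 tokens, toy dimensions
d_head = d_model // n_heads
x = rng.normal(size=(seq_len, d_model))      # stand-in token embeddings

# "Multi-head": run several independent attention heads in parallel,
# then concatenate their outputs into one representation per token.
heads = [
    self_attention(x, *(rng.normal(size=(d_model, d_head)) for _ in range(3)))
    for _ in range(n_heads)
]
output = np.concatenate(heads, axis=-1)
print(output.shape)  # (4, 8): one combined vector per input token
```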

Self-attention through iteration enables the model to attune itself to subtle meanings and complex relationships within inputs[10]. But self-attention is more akin to introspection than to learning from experience; it does not impart the wisdom of actually knowing how to do something, as opposed to knowing the facts about how to do it. We do not yet understand the limits of reinforcement learning from human feedback: it can guide and set boundaries, but it cannot completely transfer our intangible world. In sum, if we as humans operate and are shaped by combinations of intangible and tangible knowledge, if we learn from experience, and if we know things that cannot be verbalized or codified, then no matter how efficient text generation becomes for many uses, its application is not omnipotent. At least for now, generative AI lacks this very important human capability.


3. Parenting Generative AI

Children are born innocent; we often comment on how precious a baby seems because of this purity. As children grow, responsible parents continuously present their child with a set of values, guide them, and correct their behavior so they understand boundaries. Eventually, our children go out into the world and may test their values and beliefs over time, consciously or unconsciously, and behavior becomes shaped by a combination of sources. There is also a moment in this process when parents realize just how much their child is their own being, that the combination of genetics and nurture has created something with its own unique way. Yet, while a parent does not have access to all of the thoughts and feelings a child may have, they still know their child.

But can we say generative AI is born innocent? Is generative AI’s genetics the training data, and its nurture the parameters and reinforcement learning from human feedback? Developers are responsible for growing and shaping the evolution of this technology; perhaps we should refer to them as parents. We all raise our kids differently, and when it comes to raising generative AI, distinct approaches to control are emerging across cultures. What does this actually mean? Let us examine the different parenting of generative AI in the U.S., China, and the EU.


3.1 China and Generative AI

It is no surprise that the Chinese government has banned OpenAI’s ChatGPT; this is in line with similar bans, enforced through its firewall, on American platforms such as Facebook (Meta), Twitter, and Google. In China, social media, mobile payments, and messaging are instead all controlled through the WeChat app. This is because China is fully dedicated to the control and censorship of information. As a result, generative AI in China must uphold the surveillance state and the desired control of information in line with the CCP.

The way in which the CCP will attempt to align generative AI with its values is reflected in its recent draft regulation. In April 2023, the Cyberspace Administration of China released its draft “Measures for the Management of Generative Artificial Intelligence Services,” which sets out rules for creating generative AI. To uphold socialist values and avoid undermining the CCP, the rules regulate the type of content products may generate[11]. In particular, this regulatory effort holds developers accountable for the underlying data used to train algorithms and for inappropriate content (as deemed by the CCP) generated by the platforms[12]. Moreover, the main intention is to ensure that generative AI reflects “indigenous development”[13] in a way that is unique to China.

According to translations of the draft measures, the governance of the data used to train generative AI must meet both broad and demanding requirements, with liability placed in the hands of the companies or service providers[14]; both the organizations and the individuals who create the models are responsible. Some of the draft’s principles are also desired internationally, such as the elimination of bias (the Chinese regulation states that discrimination is not acceptable), respect for intellectual property rights, authenticity, and the security of information. Both the EU and China also require that products be registered in a database. The uncertainty, however, concerns the authenticity of China’s commitment to upholding these common standards (anti-discrimination, protection of intellectual property, and information security) and whether such acknowledgement is merely a ceremonial attempt to comply with international norms.

There are other similarities between the Chinese and EU approaches, such as outright bans and the requirement of product review before commercial release. What distinguishes the Chinese approach is that the CAC threatens that a service may cease to exist if it violates social or commercial ethics (e.g., by releasing content that subverts state power or calls for the overthrow of the system), and that products must be submitted to the CAC for security review before public release[15].

AI Parenting: China

In that sense, it is less about outright harm to citizens and more about the erosion of CCP values; for the EU, it is about preventing outright harm. Interestingly, this could be interpreted as meaning that China sees great risk in technology’s ability to change the beliefs and values of its citizens. For the CCP, the risk of being overthrown because of a value change among the population appears to be a real threat. Until now, the thought was that globalization[16] might softly influence China to adopt more democratic processes; instead, it has only become more authoritarian. So it is notable that generative AI seems to possess such unprecedented power that the CCP is taking aggressive measures to control its development and alignment.

Previously, social media technology was viewed more as an organizing tool, facilitating communication and interaction around revolutionary ideas that had already formed among a group of people, as during the Arab Spring, than as the causal mechanism[17]. However, the fact that the CCP sees generative AI as risking the erosion of socialist values and social morality places the capability of generative AI at an unprecedented level of influence over a population.

At the same time, keeping this “unprecedented level of influence” out of the hands of China is the argument American tech companies use for expediting generative AI model advancements with as little restraint as possible, despite uncertainty about the consequences. For Americans, the fear is more about the threat of control than the fear of disruption. Yet both viewpoints represent a zero-sum game and the evolution of a bifurcated or fragmented system. For China, the objective of parenting generative AI is to ensure that the information citizens receive through various mediums aligns completely with its subjective CCP reality, which itself becomes a sort of fake reality.

Ironically, it is the concept of “fake reality” that the CCP fears as a weapon for undermining its own distinct version of reality. Every year, thousands of Chinese students enroll in international universities, where their values may be confronted and questioned in daily life. Yet the CCP still promotes these exchanges, and has historically created similar institutional exchanges for international students interested in China. So it must be a question of scale: thousands of students studying abroad and experiencing the Western way is not the same as a billion citizens accessing an information technology that could psychologically target beliefs and values at an unpredictable rate.

If CCP “socialist beliefs” are fragile enough to be changed through generative AI, does this say something about the value system or about the capacity of the technology? Are all beliefs and values ultimately corrodible? For now, that remains an open conversation. For the Americans, the fear is not necessarily about a sudden change in values, but that this technology could further enable authoritarian control that prevents the freedoms afforded by democratic values: to hold different conversations and perspectives, and to possess multiple, subjective viewpoints in accordance with the rule of law.

In sum, the generative AI products developed by Chinese companies will reflect the cultural and political context in which they are born. An optimistic perspective would say that, while it may take longer, Chinese developers will eventually figure out how to comply with the demands of the regulation, create the trillions of parameters and the human reinforcement learning feedback, and find ways around the data-quantity limitations needed to adequately align generative AI with CCP values.

The question is whether, or to what extent, the openness of the Western system could become a vulnerability, for the same reason the Chinese approach keeps its system closed. Overall, the Western approach seems to have more confidence in its parenting ability: that openness will not become a channel for authoritarian values to slowly replace democratic ones through free-flowing access.



3.2 U.S. and Generative AI

The U.S. has no official regulatory plans for controlling generative AI, but the Biden administration has presented its “Blueprint for an AI Bill of Rights,” and various stakeholders have expressed interest in promoting some sort of regulation in the future. In its current form, the blueprint is a framework and handbook of principles for guiding the “design, use, and deployment of systems to protect the public.” Throughout the AI Bill of Rights, it is emphasized that the technology must not sacrifice the protection of democratic values and civil rights.

In particular, accountability in the American context is expressed as a matter of co-responsibility with civil society: automated system development should always consult a variety of communities, stakeholders, and experts. The difficulty of consistently implementing this ideal of co-development and cooperation amongst civil society is a key risk. This expression, too, is culturally accurate. Alexis de Tocqueville, for example, wrote about his impression of the power of associations, foundations, and nonprofits in America to shape society in ways that a top-down government could not do in Europe.

This unique power of the American populace (especially the organization of civil society) and its bottom-up, federalist approach seemed to Tocqueville to counter any centralized control of decision making. Indeed, the foundational principles of the American Constitution protect the separation of the state and the individual. This is why, in the U.S., too harsh an infringement by the state upon the freedom of a company can be viewed as anti-American.

In line with this, Big Tech in America has its own stance on the regulatory path of the technology. OpenAI, for instance, has expressed its own ideas for the governance of superintelligence. This is clearly expressed on its website: “we think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits)”[18].

This stance is not counter to the Biden Administration’s Blueprint, per se, in that OpenAI still supports public oversight: the power of the people to decide and influence the way AI will behave.

In the American context, a more normative approach has emerged in lieu of a coercive regulatory approach to institutional control. Behavior is controlled by prescribing what is right and wrong through an institutionalizing process, rather than through outright bans and punishment for noncompliance, as seen in China and the EU. However, this does not mean that some more coercive type of regulatory control won’t ever emerge in the American system.

The difference between the Chinese tactic and the American tactic is that the Americans, too, want to make sure the technology upholds values (e.g., democratic values), but democratic values enable the freedom to have different conversations and to exchange different viewpoints (within the rule of law), whereas an authoritarian regime does not allow this: there is only one viewpoint. Freedom is a threat to total control, and total control is the threat to freedom.

But one should not underestimate the normative-cognitive cultural approach to change. As the author Yuval Noah Harari points out, the success of feminism as a social movement, after thousands of years without change, was based on a strategy of storytelling, discussion, and trying to change people’s minds; its success was not due to coercion or violence.


AI Parenting: U.S.

The U.S. approach to parenting generative AI entails a sense of openness and transparency, which are democratic values. The irony is that companies like OpenAI are in fact not open and do not share much information publicly. To what extent are democratic values supposed to promote and ensure an open-source approach to technology development? While the normative ideal is that AI parenting in America is in the hands of civil society, the reality seems to be that it is mainly in the hands of Big Tech companies. The normative approach to parenting thus protects commercial interests, which is also an important American principle, but it potentially puts civil society’s intended influence over the technology’s development at a disadvantage. For civil society to realize and practice this power of controlling behavior, it must develop its awareness and realize its agency.

The result is that China uses coercion to control the behavior of its companies so that generative AI maintains the CCP’s subjective reality, while the U.S. relies on norms to prescribe how AI companies should behave, which ultimately leaves generative AI free to create an unlimited number of fake realities. What is the difference between a “fake” reality constructed and aligned with the view of the CCP and a multitude of “fake” realities that can take on whatever form they want, as may be the case in the U.S.? In one situation, reality is subjectively controlled; in the other, the freedom of subjectivity is protected. Technological fears become bifurcated too: the fear of being overthrown vs. the fear of being controlled. There are perhaps certain advantages to the American approach, which, in an optimistic light, could lead to a more natural and organic path of responses to this unprecedented change.


3.3 EU and Generative AI

The AI Act is the EU’s attempt at a regulatory response to generative AI. This approach aims to create obligations for producers and users based on risk levels, ranging from outright bans to assessments before marketplace release. The EU goes a step further than China in upholding international standards by specifically requiring published summaries of the copyrighted data used to train models, requiring that models not be designed to generate illegal content, and banning AI for biometric surveillance, emotion recognition, and predictive policing.

The basis of the EU approach is for government to protect citizens from technology companies’ miscalculation, opportunism, or greed. Its unique feature is a three-tier, risk-based approach to assessing AI systems for people’s safety. Compliance for companies and producers would include registering models in an EU database before market release and disclosing when content is AI-generated, including distinguishing deepfakes from real images. This will obviously require extensive documentation on the producer end, with fines and investigatory powers as consequences for noncompliance[19].

A distinction between the U.S. and EU approaches is that the EU focuses on individuals’ right to an explanation, in this case the right to know that the information they are receiving, in various forms, is AI-generated[20]. Here, the differentiation between reality and fake reality requires direct disclosure, whereas in the U.S. it is currently up to the discretion of companies to make that disclosure (or not). Clearly, the EU is taking a proactive approach to mitigating the highest levels of risk posed to society in a people-centered, informationally accessible manner.

That is not to say that the U.S. has taken no legislative steps to address the problem of deepfakes posed by generative AI. In fact, laws such as the 2021 National Defense Authorization Act and the Identifying Outputs of Generative Adversarial Networks Act require extensive research and reporting by federal agencies like the Department of Homeland Security and the Department of Defense over the next five years[21]. Through the financial support of public funds for research, the U.S. could end up creating its own ways to mitigate risks rather than depending on companies to do so themselves. Nor does it mean that U.S. companies fail to recognize the gravity of inaction on identifying generative AI content and deepfakes.


AI Parenting: EU

The EU’s style of parenting generative AI is, as usual, more paternal in reach and, as the legislation conveys, more human-centric. The model is reasonable, leaving AI applications that are not high-risk largely unregulated. The Treaty of Rome, a foundational document of the European Union, is based on the principle of ensuring that its efforts constantly improve the living and working conditions of its peoples; government is thus seen as playing a direct role in improving standards of living in the EU. Companies wanting to benefit from the EU market must therefore meet EU standards. While OpenAI may never have to disclose its training data in America, it currently will have to do so if it wants access to the EU.

EU AI parenting sets rules of basic good practice to guide the behavior of generative AI, but it does not go so far as to oversee the development itself, as may be the case in China. While this middle way draws criticism from American Big Tech, namely that regulation can stifle innovation, the more relaxed American style of parenting has been proven to have its faults too. Most notably, the Facebook-Cambridge Analytica scandal revealed that Facebook’s behavioral science capacity was being used to harvest data and apply it in micro-targeting techniques for political campaigns.

Predicting the so-called Brussels Effect for generative AI is tricky. The AI Act has done a good job of identifying areas that most can probably agree on: the need for companies to disclose when content generated on their platforms constitutes deepfakes, and citizens’ (or, in the case of China, the CCP’s) right to information about the data used to train generative AI models, especially as a check and balance for copyrighted material. The problem is that identifying a deepfake doesn’t necessarily deter people from watching it or prevent its release into the public sphere. So perhaps the strongest aspect of this legislation is its proactiveness in banning unacceptable risks, like government-based social scoring, and in attempting to protect the copyrighted material of artists, creators, and researchers.

Both the Chinese and EU styles are very protection-oriented, but for different reasons and based on different values. China’s generative AI regulation aims to protect the power of the state, the EU’s aims to protect citizens, and America’s relaxed style benefits business. This time, the EU may be right about its proactive approach, especially when numerous technologists have called for a temporary halt to advancing generative AI, and scholars and politicians alike agree that, for the first time in history, the next ten years are unpredictable in an unprecedented way in terms of the capabilities generative AI will provide.


4. Generative AI: The meaning of reality

If generative AI is capable of contributing to fake reality, where people can access images, videos, and audio files that have been synthetically modified to mimic something real, then populations may experience a combination of consciously choosing to engage with fake reality and unconsciously doing so. The more generative AI becomes the primary source of information for people, the more it calls into question the difference between information filtered through CCP generative AI, which clearly creates a type of subjective, “fake” reality, and the multitude of fake realities that will be accessible in the Western world. Indeed, is anything objective after all? Breakthrough research in quantum physics tells us: no[22].

Quantum physics examines the building blocks of nature itself. Based on experiments measuring the state of elementary particles called photons, quantum mechanics reveals that it is possible for two people to observe and experience two realities that are at odds with one another, and both coexist[23]. The act of measurement itself destroys the whole: the second a quantum system is observed or measured, as soon as one constituent is pinned down, another is lost.

To turn this into a metaphor: as a parent, if my child is scared of the big bad wolf and wants to hear a comforting poem, I can easily reach out to a generative AI application like ChatGPT for help, and ChatGPT is capable of producing something better than I could improvise. In this moment, two realities (or a multitude) exist: one in which I choose generative AI and another in which I improvise. Individual agency is important; it gives me the power to present the type of reality I want to my child. And the moment I choose not to use the generative AI solution, the potentiality of the other version is lost.

Because I have a personal relationship with my child, I know from experience that maybe all my kid really needs is a kiss and a hug and for me to stay next to him while he falls asleep. I can only sense this from experience. From experience, I know what it is like to communicate with my child without words. Parents spend almost the first two years of a child’s life interacting continuously with a babbling baby, and this experience of development creates an inexplicable connection between parent and child. Let us not forget how much knowledge and understanding is acquired through this intangible interaction between parent and baby; let us not forget its meaning. Information is not just knowledge, and knowledge is not just information.



References


Briscoe, S. (2021, January 12). U.S. law addresses deepfakes. ASIS International.

Brown, H., Guskin, E., & Mitchell, A. (2012, November 28). The role of social media in the Arab uprisings. Pew Research Center. https://www.pewresearch.org/journalism/2012/11/28/role-social-media-arab-uprisings/

Dimitropoulos, S. (2022, June 29). Objective reality may not exist at all, quantum physicists say. Popular Mechanics.

Engler, A. (2023, April 25). The EU and U.S. diverge on AI regulation: A transatlantic comparison. Brookings. https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/

Heaven, W. D. (2023, March 14). GPT-4 is bigger and better than ChatGPT—but OpenAI won’t say why. MIT Technology Review.

Ludvigsen, K. (2023, March 1). ChatGPT’s electricity consumption. Towards Data Science.

McKinsey & Company. (2023, January 19). What is generative AI?

McMorrow, R., & Liu, N. (2023, April 11). China slaps security reviews on AI products as Alibaba unveils ChatGPT challenger. Financial Times. https://www.ft.com/content/755cc5dd-e6ce-4139-9110-0877f2b90072

Ruby, M. (2023, January 30). How ChatGPT works. Towards Data Science.

Schaul, K., Chen, S. Y., & Tiku, N. (2023, April 19). Inside the secret list of websites that make AI like ChatGPT sound smart. Washington Post.

Toner, H., Haluza, Z., Luo, Y., Dan, X., Sheehan, M., Huang, S., Chen, K., Creemers, R., Triolo, P., & Meinhardt, C. (2023, April 19). How will China’s generative AI regulations shape the future? A DigiChina forum. Stanford DigiChina.

Triolo, P. (2023, April 12). ChatGPT and China: How to think about large language models and the generative AI race. The China Project.

Trotta, F. (2023, June). A gentle introduction to generative AI for beginners. Towards Data Science. https://towardsdatascience.com/a-gentle-introduction-to-generative-ai-for-beginners-8c8752085900

Wu, Y. (2023, May 23). Understanding China’s new regulations on generative AI. China Briefing.





[1] Heaven, W. D. (2023, March 14). GPT-4 is bigger and better than ChatGPT—but OpenAI won’t say why. MIT Technology Review.

[2] Trotta, F. (2023, June). A gentle introduction to generative AI for beginners. Towards Data Science.

[3] Ibid.

[4] Google Techspert. (2023, April 11). What is generative AI?

[5] McKinsey & Company. (2023, January 19). What is generative AI?

[6] Ludvigsen, K. (2023, March 1). ChatGPT’s electricity consumption. Towards Data Science.

[7] Schaul, K., Chen, S. Y., & Tiku, N. (2023, April 19). Inside the secret list of websites that make AI like ChatGPT sound smart. Washington Post.

[8] Brown, T., et al. (2020, July). Language models are few-shot learners. arXiv. https://arxiv.org/pdf/2005.14165.pdf

[9] Ruby, M. (2023, January 30). A brief introduction to the intuition and methodology behind the chat bot you can’t stop hearing about. Towards Data Science.

[10] Ibid.

[11] Wu, Y. (2023, May 23). Understanding China’s new regulations on generative AI. China Briefing.

[12] Triolo, P. (2023, April 12). ChatGPT and China: How to think about LLMs and the generative AI race. The China Project.

[13] Ibid.

[14] Toner, H., et al. (2023, April 19). How will China’s generative AI regulations shape the future? A DigiChina forum. Stanford DigiChina.

[15] McMorrow, R., & Liu, N. (2023, April 11). China slaps security reviews on AI products as Alibaba unveils ChatGPT challenger. Financial Times.

[16] Council on Foreign Relations. China’s approach to global governance. https://www.cfr.org/china-global-governance/

[17] Brown, H., et al. (2012, November 28). The role of social media in the Arab uprisings. Pew Research Center.

[18] OpenAI. Governance of superintelligence. https://openai.com/blog/governance-of-superintelligence

[19] Engler, A. (2023, April 25). The EU and U.S. diverge on AI regulation: A transatlantic comparison. Brookings.

[20] Ibid.

[21] Briscoe, S. (2021, January 12). U.S. law addresses deepfakes. ASIS International.

[22] MIT Technology Review. (2019, March 12). A quantum experiment suggests there’s no such thing as objective reality. https://www.technologyreview.com/2019/03/12/136684/a-quantum-experiment-suggests-theres-no-such-thing-as-objective-reality/

[23] Dimitropoulos, S. (2022, June 29). Objective reality may not exist at all, quantum physicists say. Popular Mechanics.
