French President Emmanuel Macron said it correctly: “We are over-regulating and underinvesting. If we continue with our usual agenda over the next 2 to 3 years, we will be out of the market. I have no doubt!”
AI is transforming my world. I’m building a startup in the AI space, advising a venture capital firm on AI, and have been teaching AI for over 10 years. And I agree with Macron… “I have no doubt!”
Europe talks about regulating AI, but it seems to miss that there is a balance to strike between fostering innovation and upholding societal values. In my eCornell course on AI and products, I had an amazing discussion with Aneesh Chopra, who recently joined the National AI Advisory Committee. He emphasized the need to balance the interests of citizens, governments, and businesses in shaping AI policy.
Let’s discuss the three key levers we should look at when talking about regulation.
First, transparency and control of AI systems are essential in both their development and deployment. As I often remind my students working on rapid AI prototyping: “Bias in, bias out.” The quality of the data fed into AI systems directly affects the outcomes: if the data is biased, the results will be too. Transparency is therefore crucial for any regulatory framework. We need to be able to audit how AI systems make decisions. Open data models, or at least open weights, like Meta’s approach with Llama, are positive steps toward this goal.
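“Bias in, bias out” can be shown with a minimal sketch. The records below are entirely synthetic (the groups, labels, and counts are invented for illustration): a model that simply mirrors skewed historical data faithfully reproduces the skew.

```python
from collections import Counter

# Hypothetical approval records (invented data): group "B" was historically
# denied far more often, so a model that mirrors the data inherits that bias.
history = (
    [("A", "approve")] * 80 + [("A", "deny")] * 20
    + [("B", "approve")] * 30 + [("B", "deny")] * 70
)

def majority_decision(records, group):
    """Predict the most common historical outcome for a group."""
    outcomes = Counter(label for g, label in records if g == group)
    return outcomes.most_common(1)[0][0]

print(majority_decision(history, "A"))  # -> approve
print(majority_decision(history, "B"))  # -> deny: bias in, bias out
```

Nothing in the model is malicious; the bias lives entirely in the training data, which is why transparency about that data matters.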
Second, regulation should avoid demanding what is not technically feasible. And unfortunately, some proposals do exactly that. Take large language models (LLMs): they rely on probability distributions. When you hear “Life is like a box of…,” most people think “chocolates” because of the popular Forrest Gump quote. Why not “box of surprises”? An LLM tends toward the statistical middle. This makes it difficult to explain AI’s reasoning, and demanding full explainability is like asking cars not to scare horses (which was once a real regulatory demand). If regulations require the impossible, we risk stifling AI development. As Emmanuel Macron said: “we are over-regulating”.
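The pull toward the “statistical middle” can be sketched with a toy next-token distribution. The probabilities below are invented for illustration; a real LLM derives them from training data across billions of documents.

```python
# Toy next-token probabilities for the prompt "Life is like a box of ..."
# (invented numbers; a real model computes these from its training corpus)
next_token_probs = {
    "chocolates": 0.87,  # dominated by the famous Forrest Gump quote
    "surprises": 0.06,
    "crayons": 0.04,
    "rain": 0.03,
}

def greedy_next_token(probs):
    """Pick the most probable continuation -- the 'statistical middle'."""
    return max(probs, key=probs.get)

print(greedy_next_token(next_token_probs))  # -> chocolates
```

The model does not “reason” its way to “chocolates”; it simply reflects the frequencies in its training data, which is why demands to explain its choice in human terms are so hard to satisfy.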
Third: Instead of focusing on the technology itself, regulations should target how AI is used. Existing legal frameworks already address bias and fairness; we don’t need to reinvent the wheel. For example, if an AI-driven healthcare app misdiagnoses a patient, current product liability laws hold companies accountable. Similarly, AI tools used in hiring must not introduce bias based on race, gender, or other protected categories. Anti-discrimination laws, like the Civil Rights Act in the U.S., already govern hiring practices and can be applied to AI-driven decisions.
Don’t get me wrong, there is a need for regulation. AI is powerful, and we should establish checks and balances on that power. But we also need to recognize that regulation doesn’t happen in a vacuum. In my courses, I often talk about car insurance. It is essentially an AI model predicting the likelihood of a car accident and its damage. Gender as an input to the model, like it or not, has predictive value. The EU, reflecting its societal values, has regulated that gender may not be used in such models, even though this may lead to less effective risk distribution and higher overall costs. As a society, we may be willing to pay that price, but what if other societies are not? I wrote about China’s use of data in AI for the think-tank *Intereconomics* five years ago. Data is the lifeblood of AI. Those with access to data will build better AI. If one society, like Europe, restricts data collection to protect its citizens, while another, like China, aggressively pushes data collection, then China will over time have more data and better models. Will China create more economic value?
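The insurance trade-off can be sketched numerically. All figures below are invented for illustration: suppose one group historically files claims at a higher rate. A model allowed to use the attribute prices each group at its own expected cost; a model barred from using it must charge everyone the pooled average, so the lower-risk group subsidizes the higher-risk one.

```python
# Synthetic accident data (invented numbers for illustration only).
claim_rate = {"group_x": 0.10, "group_y": 0.06}  # annual claim probability
exposure = {"group_x": 500, "group_y": 500}      # policyholders per group
avg_claim_cost = 10_000                          # cost per claim

def premium_with_attribute(group):
    """Price each group at its own expected claim cost."""
    return claim_rate[group] * avg_claim_cost

def premium_pooled():
    """Price everyone at the pooled average when the attribute is banned."""
    total_claims = sum(claim_rate[g] * exposure[g] for g in exposure)
    return total_claims * avg_claim_cost / sum(exposure.values())

print(premium_with_attribute("group_y"))  # 600.0: priced at its own risk
print(premium_pooled())                   # 800.0: lower-risk group subsidizes
```

Neither price is “wrong”; the choice between them is a societal value judgment, which is exactly the point about regulation not happening in a vacuum.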
Take Reddit, for example—they sold their data to Google for $60 million. Why? Because Google knows that access to data is critical for training its models. The way countries manage data will have a profound impact on their economic future. Data laws are the new trade laws, and they will play a pivotal role in shaping the AI-driven societies of tomorrow.
Emmanuel Macron is right: “We are over-regulating and underinvesting.” While Europe aims to safeguard societal values, it risks falling behind in the global AI race if innovation is stifled. Regulators should focus on transparency where it is technically feasible, and on targeted regulation of use cases rather than of the technology itself. AI holds immense power that will change our landscape, and I hope that Europe will not “be out of the market”.

