Fletcher breaks down this story in English. Octavio reacts and expands in Spanish. Follow along with the live transcript and tap any word for its translation. Upper Intermediate level: perfect for confident speakers refining their skills.
So I want to start with a moment.
I was in Brussels about two years ago, talking to some people in the European Parliament, and one of them said to me, completely seriously, 'We are going to regulate artificial intelligence before the Americans even agree on what it is.' I laughed.
I shouldn't have laughed.
Bueno, es que tienen razón.
Well, they're right.
En 2024, el Parlamento Europeo aprobó la Ley de Inteligencia Artificial, la primera ley de este tipo en el mundo.
In 2024, the European Parliament passed the Artificial Intelligence Act, the first law of its kind in the world.
Es un documento de más de cuatrocientas páginas que intenta controlar cómo se usa la IA en Europa.
It's a document of more than four hundred pages that tries to control how AI is used in Europe.
Four hundred pages.
Which, honestly, is about three hundred and ninety pages more than most tech CEOs have read.
But here's what gets me, before we get into the details: why Europe?
Why is it always Brussels that goes first on this stuff?
Mira, hay una razón histórica.
Look, there's a historical reason.
Europa tiene una memoria muy específica de lo que pasa cuando los gobiernos usan la tecnología para controlar a las personas.
Europe has a very specific memory of what happens when governments use technology to control people.
El nazismo, el estalinismo, la Stasi en Alemania del Este.
Nazism, Stalinism, the Stasi in East Germany.
Eso no es abstracto aquí.
That's not abstract here.
Es historia reciente.
It's recent history.
The extraordinary thing is how directly that shows up in the actual text of the law.
There are prohibitions that read less like tech policy and more like, I don't know, a direct response to the twentieth century.
Exactamente.
Exactly.
La ley organiza la inteligencia artificial en niveles de riesgo.
The law organizes artificial intelligence into risk levels.
Hay cosas que están completamente prohibidas, cosas que son de alto riesgo y necesitan supervisión, y cosas que tienen poco o ningún riesgo y son básicamente libres.
There are things that are completely prohibited, things that are high-risk and need oversight, and things that have little or no risk and are basically free.
Right, so let's talk about what's actually banned.
Because some of this is genuinely striking.
The law prohibits AI systems that manipulate people without their knowledge, using what it calls 'subliminal techniques.' It bans systems that exploit the vulnerabilities of specific groups.
Sí, y lo más importante, en mi opinión: prohíbe los sistemas de puntuación social.
Yes, and the most important thing, in my opinion: it prohibits social scoring systems.
Eso es lo que hace China con sus ciudadanos, asignarles una puntuación basada en su comportamiento.
That's what China does with its citizens, assigning them a score based on their behavior.
En Europa, eso es ilegal ahora.
In Europe, that is now illegal.
I spent time in Beijing in 2015, and I was already hearing about early versions of this system.
And I'll tell you, the journalists I knew there, local journalists, were terrified of it.
Not because of what it was then, but because of what it could become.
The chilling effect on behavior is the whole point.
Exacto.
Exactly.
Y la ley también limita el reconocimiento facial en tiempo real en espacios públicos.
And the law also restricts real-time facial recognition in public spaces.
No está completamente prohibido porque hay excepciones para casos de seguridad muy graves, como el terrorismo.
It's not completely banned because there are exceptions for very serious security cases, like terrorism.
Pero el principio general es: no.
But the general principle is: no.
Now, to be fair, the exceptions worry some civil liberties groups quite a bit.
Because 'terrorism' is a category that has a way of expanding over time.
I've watched that happen in several countries I've covered.
But let's stay with the framework for a moment.
Bueno, los sistemas de alto riesgo incluyen la IA que se usa en educación para evaluar a los estudiantes, en selección de empleados, en sistemas de crédito bancario, en medicina.
Well, the high-risk systems include AI that is used in education to evaluate students, in employee selection, in bank credit systems, in medicine.
Estas aplicaciones no están prohibidas, pero tienen que cumplir requisitos muy estrictos de transparencia y supervisión humana.
These applications are not prohibited, but they must meet very strict requirements for transparency and human oversight.
The employment angle is the one that interests me most personally.
Because we already know that algorithmic hiring tools have demonstrated biases against women, against people with certain names, against older workers.
This has been documented.
The law puts that in scope.
Sí, y aquí viene la parte interesante.
Yes, and here comes the interesting part.
Esto no es la primera vez que Europa hace algo así.
This is not the first time Europe has done something like this.
El RGPD, el Reglamento General de Protección de Datos, entró en vigor en 2018 y todo el mundo dijo que era exagerado, que iba a destruir la economía digital europea.
The GDPR, the General Data Protection Regulation, came into force in 2018, and everyone said it was excessive, that it was going to destroy the European digital economy.
I remember that.
Tech companies were practically in mourning.
And then something strange happened, which is that the rest of the world started quietly copying it.
There's actually a name for this phenomenon: the Brussels Effect.
Claro, el efecto Bruselas.
Of course, the Brussels Effect.
La idea es que cuando Europa establece un estándar regulatorio, las empresas multinacionales prefieren seguir una sola regla global en lugar de tener sistemas diferentes para cada mercado.
The idea is that when Europe sets a regulatory standard, multinational companies prefer to follow one global rule rather than having different systems for each market.
Entonces, en la práctica, la regla europea se convierte en la regla mundial.
So in practice, the European rule becomes the global rule.
Look, I find that argument genuinely compelling, but I also think it deserves some scrutiny.
Because there's a counterargument, and it's not a stupid one: what if you're regulating a technology you don't fully understand yet, and you freeze the development of something that could be enormously beneficial?
Es que ese argumento lo usan siempre las empresas tecnológicas.
The thing is, tech companies always use that argument.
'No nos reguléis todavía porque no entendéis la tecnología.' Pero si esperas a entenderla completamente, ya es demasiado tarde.
'Don't regulate us yet because you don't understand the technology.' But if you wait until you fully understand it, it's already too late.
El daño ya está hecho.
The damage is already done.
The American approach has been essentially the opposite, right?
Let it run, see what breaks, then maybe write some guidelines.
And the result is, I don't know, Facebook and the 2016 election, algorithmic radicalization, a generation of kids with serious mental health problems.
La verdad es que creo que hay una diferencia filosófica profunda entre Europa y Estados Unidos en este tema.
The truth is I think there is a deep philosophical difference between Europe and the United States on this issue.
En Europa, creemos que el mercado tiene que servir a las personas, no al revés.
In Europe, we believe the market has to serve people, not the other way around.
En Estados Unidos, parece que a veces es al contrario.
In the United States, it sometimes seems to be the opposite.
No, you're absolutely right about that, and I say that as an American who has spent a lot of time outside America.
There's a reason the concept of 'human dignity' appears in the European Charter of Fundamental Rights and doesn't have an obvious equivalent in US law.
Mira, en España tenemos una historia muy particular con esto.
Look, in Spain we have a very particular history with this.
Durante el franquismo, el estado usaba información sobre los ciudadanos para controlarlos, para perseguirlos.
During Francoism, the state used information about citizens to control them, to persecute them.
Mis abuelos vivieron eso.
My grandparents lived through that.
Entonces, cuando hablamos de sistemas de vigilancia automática, no es teórico para nosotros.
So when we talk about automatic surveillance systems, it's not theoretical for us.
That's a point I don't think most American tech critics fully appreciate.
The conversation in Silicon Valley about AI regulation is almost entirely abstract and economic.
It's about market share and innovation cycles.
It's not about what it feels like to live under surveillance.
A ver, hay otro aspecto de la ley que es muy importante y que quiero explicar: los modelos de IA de uso general, como ChatGPT o Gemini.
Let's see, there's another aspect of the law that is very important and that I want to explain: general-purpose AI models, like ChatGPT or Gemini.
La ley tiene requisitos específicos para estos sistemas porque son tan potentes que pueden usarse para muchas cosas diferentes.
The law has specific requirements for these systems because they are so powerful they can be used for many different things.
And this is where it gets genuinely complicated.
Because OpenAI and Google and Meta have been lobbying hard in Brussels.
And there are real questions about whether the final text of the law was softened in response to that pressure.
Sí, y es una crítica legítima.
Yes, and it's a legitimate criticism.
Los modelos de IA más potentes, los que la ley llama 'de riesgo sistémico', tienen obligaciones más estrictas: transparencia sobre los datos de entrenamiento, gestión de riesgos, cooperación con las autoridades.
The most powerful AI models, the ones the law calls models with 'systemic risk,' have stricter obligations: transparency about training data, risk management, cooperation with authorities.
Pero algunos expertos creen que no es suficiente.
But some experts believe it's not enough.
Here's the thing about regulatory capture, and I've seen this in financial regulation, in pharmaceutical regulation, in media regulation: the companies that have the resources to comply with complex rules are often the big companies.
And compliance becomes a barrier to entry that protects them from smaller competitors.
Eso es un riesgo real.
That is a real risk.
Las pequeñas empresas europeas que trabajan con inteligencia artificial pueden tener dificultades para cumplir todos los requisitos.
Small European companies that work with artificial intelligence may have difficulty meeting all the requirements.
La ley tiene algunas excepciones para startups y pequeñas empresas, pero muchos expertos creen que no son suficientes.
The law has some exceptions for startups and small businesses, but many experts believe they are not enough.
So there's a real irony here.
A law designed to protect people from the power of big tech could end up entrenching the position of big tech.
I mean, that's a genuinely uncomfortable possibility.
Bueno, es posible.
Well, it's possible.
Pero también hay que considerar lo contrario.
But you also have to consider the opposite.
Sin ninguna regulación, las grandes empresas hacen lo que quieren porque tienen más poder que los gobiernos pequeños.
Without any regulation, big companies do what they want because they have more power than small governments.
Al menos con una ley europea, hay un contrapoder real.
At least with a European law, there is a real counterpower.
Right, and the democratic legitimacy argument matters here.
The EU has 450 million citizens.
That's a market no tech company can afford to walk away from.
Which means Brussels has leverage that, say, Ecuador does not have.
Exacto.
Exactly.
Y la implementación va a ser gradual.
And the implementation is going to be gradual.
Las prohibiciones absolutas empezaron a aplicarse en febrero de 2025.
The absolute prohibitions started applying in February 2025.
Las normas para los sistemas de alto riesgo entran en vigor en 2026.
The rules for high-risk systems come into force in 2026.
Y hay un período de adaptación hasta 2027 para los modelos de uso general.
And there is an adaptation period until 2027 for general-purpose models.
Three years is a long time in AI.
I mean, think about where we were three years before ChatGPT launched.
Nobody was having this conversation.
The technology moves faster than any regulatory process can.
Sí, y por eso la ley incluye mecanismos de revisión.
Yes, and that's why the law includes review mechanisms.
No es estática.
It's not static.
Se puede actualizar para incluir nuevas categorías de riesgo.
It can be updated to include new risk categories.
La idea es que sea un marco flexible, no un texto rígido que queda obsoleto en dos años.
The idea is that it's a flexible framework, not a rigid text that becomes obsolete in two years.
You know, there's a historical parallel I keep coming back to.
When the FDA, the Food and Drug Administration in the US, got its modern regulatory powers in 1938 through the Food, Drug, and Cosmetic Act, it was partly in response to a drug called Elixir Sulfanilamide that had killed over a hundred people.
Regulation followed catastrophe.
Maybe Europe is trying to get ahead of that this time.
Mira, creo que hay también una dimensión cultural importante.
Look, I think there's also an important cultural dimension.
En Europa, y especialmente en países como Alemania o Francia, hay una tradición de desconfiar del poder sin control, ya sea del estado o de las empresas.
In Europe, and especially in countries like Germany or France, there is a tradition of distrusting unchecked power, whether from the state or from companies.
Esa desconfianza es el motor de esta ley.
That distrust is the engine behind this law.
So what happens if it doesn't work?
What if the enforcement is weak, what if companies find loopholes, what if the AI Office that's supposed to oversee all of this doesn't have the technical capacity to actually audit these systems?
Es una preocupación legítima.
It's a legitimate concern.
El RGPD tenía ese problema al principio.
The GDPR had that problem at the beginning.
Las multas eran enormes en teoría, hasta el cuatro por ciento de la facturación global de una empresa, pero la aplicación fue lenta durante años.
The fines were enormous in theory, up to four percent of a company's global turnover, but enforcement was slow for years.
Con la ley de IA puede pasar lo mismo.
The same thing could happen with the AI Act.
And yet.
The GDPR fines did eventually start landing.
Meta has paid over a billion euros in GDPR fines.
That changes behavior.
Even if enforcement is slow, the threat is real.
La verdad es que, aunque yo creo en la necesidad de esta regulación, también reconozco que Europa tiene un problema de competitividad en tecnología.
The truth is, although I believe in the need for this regulation, I also recognize that Europe has a competitiveness problem in technology.
No tenemos un Google, ni un Amazon, ni un OpenAI.
We don't have a Google, or an Amazon, or an OpenAI.
Y eso es serio.
And that's serious.
I'm glad you said that, actually, because I think there's a tension between two things Europe genuinely wants: to be a global standard-setter on ethics, and to be competitive in the technology that's going to define the next fifty years.
Those goals are not automatically compatible.
No, espera.
No, wait.
Yo creo que sí pueden ser compatibles.
I think they can be compatible.
Si Europa consigue demostrar que es posible desarrollar inteligencia artificial de forma ética y segura, y que eso genera confianza entre los ciudadanos y las empresas, eso también es una ventaja competitiva.
If Europe manages to show that it's possible to develop artificial intelligence in an ethical and safe way, and that this generates trust among citizens and companies, that is also a competitive advantage.
That's actually a compelling reframe.
Trustworthy AI as a brand.
I mean, we've seen how badly the 'move fast and break things' brand has aged.
There might be a real market for the alternative.
Exactamente.
Exactly.
Y si el mundo quiere hacer negocios con Europa, o acceder al mercado europeo, tendrá que seguir las normas europeas.
And if the world wants to do business with Europe, or access the European market, it will have to follow European rules.
Eso es el efecto Bruselas en acción.
That is the Brussels Effect in action.
No es imperialismo regulatorio, es influencia a través del mercado.
It's not regulatory imperialism, it's influence through the market.
Right, so here's where I'd land for anyone listening who wants to actually understand why this matters.
This isn't just a tech story.
It's a story about what kind of society you want to live in, about who controls the tools that are increasingly shaping every decision that affects your life.
Sí, y lo que me parece fundamental es esto: la inteligencia artificial va a tomar decisiones que afectan a la vida de las personas, a sus trabajos, a su acceso a servicios, a cómo son tratados por las instituciones.
Yes, and what I find fundamental is this: artificial intelligence is going to make decisions that affect people's lives, their jobs, their access to services, how they are treated by institutions.
La pregunta es si esas decisiones van a tener algún control democrático o no.
The question is whether those decisions are going to have any democratic oversight or not.
And that question has no easy answer.
But I think the fact that Europe is at least trying to ask it formally, in law, is significant.
Whether the law works or not is a separate debate.
That it exists at all is something.
Octavio, as always, you've given me more to think about than I came in with.
Bueno, para eso estamos.
Well, that's what we're here for.
Y para los que escucháis: la ley es compleja, el debate es real, y nadie tiene todas las respuestas todavía.
And for those of you listening: the law is complex, the debate is real, and nobody has all the answers yet.
Pero es importante entender lo que está en juego.
But it's important to understand what is at stake.
La próxima vez que uséis una aplicación que os recomienda algo, que os evalúa, que toma una decisión sobre vosotros, recordad que alguien, en algún lugar, tuvo que decidir si eso necesitaba reglas o no.
The next time you use an app that recommends something, that evaluates you, that makes a decision about you, remember that someone, somewhere, had to decide whether that needed rules or not.