Voices
First-person perspectives on the world of work

The Future of Work Podcast

Episode 51
Artificial intelligence

The rise of AI in China – Digital technologies and their regulation

14 November 2023

Over the last decade, China has become a leading developer and user of digital technologies, in particular artificial intelligence. Whether in e-commerce, ride-hailing services or autonomous vehicles, Chinese companies operating in these areas are world competitors and contribute significantly to China's development. In parallel with the rise of these industries, Chinese policymakers have enacted a series of regulations around data and consumer protection to ensure the proper use of these technologies and to prevent market abuses by dominant players. To date, China's regulatory scope and breadth are well ahead of those of its leading competitors in Europe or the United States. Yet, little is known about the impact of both technological development and regulatory activism on the world of work in China.

Kendra Schaefer shares with us her in-depth knowledge of the evolution of digital technologies and their regulation in China. Given her hands-on experience advising both Chinese and foreign companies in this area, she is well placed to discuss technological trends in China, the motives that drive Chinese regulators and how effective they are in implementing regulation. She will also help us to better understand the impact these trends are having on Chinese workers, for instance in the ride-hailing industry and beyond.

Transcript

[music]

-Hello. Welcome to this edition of the ILO's Future of Work podcast.

I'm Ekkehard Ernst and I'm your host today.

Over the last decade,

China has become a leading developer and user of digital technologies,

in particular, artificial intelligence.

Whether in e-commerce, ride-hailing services,

or autonomous vehicles,

Chinese companies operating in these areas are world competitors

and contribute significantly to the development of their home country.

At the same time,

Chinese policymakers have accompanied these developments

by enacting a series of regulations

around data and consumer protection

to ensure the proper use of these technologies

and to prevent market abuses by dominant players.

Today, its regulatory scope and breadth

are well ahead of those of leading competitors in Europe or the United States.

Yet, little is known about the effective impact

of both the technological development and the regulatory activism

on the world of work in China.

Today, we want to explore the rise of digital China and the specific approach

Chinese policymakers take

in both developing and regulating the use of artificial intelligence.

To explore this topic,

we could not have a better guest than Kendra Schaefer,

who is with us today in the show.

Kendra is head of tech policy research at Trivium China and is based in Beijing.

She works on Chinese government data infrastructure,

social credit system technology,

and other aspects of digital regulation and development.

Kendra, welcome to the Future of Work podcast.

-It's great to be here.

-Before we dive deeper into developments around AI in China,

can you tell us a bit more about yourself and your work?

-Absolutely.

I run Trivium China's tech policy monitoring service.

What that basically means is that I oversee a team of analysts

that looks at Chinese government documents all day long.

Those documents are related to tech and data,

and then we analyze them and inform investors,

governments, and corporations in terms of what those documents mean.

-Excellent, thank you.

Thank you very much.

Tell us more about

the recent developments in the tech sector in China.

What are some of the key developments that you see

and that we should keep an eye on, and how, in particular, in your opinion,

will they affect the world of work in China?

-Absolutely. I think first,

as an entry point to looking at AI in China

and AI regulation and how it impacts the labour force specifically,

I think we need to understand the social backdrop of labour in China

and where algorithms plug into that backdrop.

Many of your listeners may know that

China is home to one of the world's largest

domestic migrant labour populations.

You've got about 300 million migrant workers

who are typically rural residents.

They live in villages where there aren't a lot of work opportunities.

It's very common for them to leave the villages,

go to the big cities, find work there,

and send the money home to their kids or their parents.

This is a common social phenomenon.

It used to be that those labourers were absorbed mostly

by traditional industries like construction, for example,

or heavy industry traditional factories.

More and more now,

those labourers are being absorbed by the platform economy.

They're going to the cities and they are doing restaurant delivery,

or they are working for courier services,

or they are working for logistics companies

as drivers within a particular city,

or if they have a car, they're working as ride-hailing drivers in many locations.

China also has a growing and quite severe youth unemployment crisis.

As youth are waiting to find longer-term jobs,

and they're exiting college,

they're also turning to the platform economy

to support their entry to the workforce

or to supplement their income;

same thing, they're doing restaurant delivery.

These workers are working for these major apps

like Meituan-Dianping or Ele.me,

or DiDi, China's largest ride-hailing company,

or JD.com, one of China's biggest

domestic logistics networks,

and Shansong, which is an intracity errand-running company

that will send packages

from one side of the city to the other within 30 minutes,

this kind of thing.

The emergence of gig work has totally changed the labour environment,

and that change has been driven by platforms in the platform sector.

Those platforms are run by algorithms.

What we increasingly see now is regulators turning their attention

to the way that those algorithms are impacting that labour force.

-Excellent.

Maybe looking to the next step: now that we have seen the impact

of the rise of digital technologies, a key question is obviously

what Chinese policymakers do with these developments.

As I mentioned at the beginning,

from the outside, it looks like they're extremely active

in regulating a lot of these new technologies.

In some sense, my impression is that they're actually far ahead

of their counterparts in Europe or the United States.

Maybe you can tell us a bit more about the specific way

Chinese policymakers are regulating AI,

also the motivation that drives them, and how these policymakers in Beijing

and elsewhere in the country try to square the challenge of, on the one hand,

as you said, developing these new services,

and at the same time regulating

the digital tools for the benefit of society.

-Sure, absolutely.

China has essentially, to date, put out three major regulations

that govern AI specifically.

Not all of those regulations relate to labour.

I'll try to keep my comments focused on how those algorithms impact

or how those regulations impact the labour market.

Just to give everybody a big-picture view, China has done a lot of work on AI

and data and cybersecurity laws and regulations.

There are three big laws

that form the foundation of China's cyber governance regime

more broadly outside of AI.

That's the Cybersecurity Law.

Very simply, that's about the security of critical networks.

There's the Data Security Law

which governs data related to national security.

There's the Personal Information Protection Law which,

as the title says, governs data related to individuals,

sensitive personal information, this kind of thing.

Those three laws really only got formulated

a year and a half, two years ago, and finalized and put into place.

On top of those three laws,

China's cyberspace regulator, the CAC, started looking at

how algorithms could be--

specifically, how algorithms and AI should be regulated.

The first major algorithm and AI-related regulation to come out

targeted recommendation algorithms,

and this is the regulation that impacts labour most specifically.

A recommendation algorithm under this regulation

basically refers to any kind of algorithm--

when we think of recommendation algorithms,

we usually think of e-commerce or social media,

some kind of engine that looks at maybe what you've put in your cart before,

what you've clicked on or what ads you've viewed

and then recommends products that you might want to buy,

or if it's social media,

it recommends content based on videos you've viewed before

or content you've clicked on before things you've liked, that kind of stuff.

This regulation does encompass those kinds of recommendations,

but here's the critical piece.

It also encompasses algorithms

that recommend things like driver delivery schedules.

For example,

let's say that I am a driver for a restaurant delivery app,

and I need to deliver a meal

from point A to point B.

I do that and the algorithm notes that I made that delivery in 30 minutes.

It assumes that that delivery can be made in 30 minutes,

and now, for the next person who has to make that delivery,

that is the time the algorithm will predict:

that they should be delivering that package,

or that food, in that amount of time.

The algorithm would also control things like

how much rest drivers might get per hour,

how many packages they could be expected to deliver within a certain day.

That is also included in the scope of algorithm regulations.

This regulation was implemented to deal with that.
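
To make the dynamic Kendra describes more concrete, here is a minimal, purely hypothetical Python sketch, not any platform's actual code, of how a naive scheduler could keep ratcheting a route's delivery-time target downward whenever one courier beats the current expectation. All names and numbers are illustrative assumptions.

```python
# Hypothetical sketch (not any platform's actual code): a naive scheduler that
# ratchets down the expected delivery time for a route by anchoring on the
# fastest time any courier has achieved so far.

class NaiveRouteScheduler:
    def __init__(self, initial_estimate_min: float = 40.0):
        # Start with a generous estimate for the route, in minutes.
        self.expected_minutes = initial_estimate_min

    def record_delivery(self, actual_minutes: float) -> None:
        # If one courier beats the current expectation (for example by running
        # red lights or skipping breaks), that faster time becomes the new
        # target for everyone assigned to the route afterwards.
        if actual_minutes < self.expected_minutes:
            self.expected_minutes = actual_minutes

    def deadline_for_next_courier(self) -> float:
        return self.expected_minutes


if __name__ == "__main__":
    scheduler = NaiveRouteScheduler()
    for observed in [35.0, 30.0, 28.0, 27.0]:
        scheduler.record_delivery(observed)
        print(f"Next courier must deliver within "
              f"{scheduler.deadline_for_next_courier():.0f} minutes")
```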

You can immediately see

how platforms could abuse that kind of algorithm.

The platforms in China have very, very low delivery fees.

Think maybe 50 US cents per delivery, or thereabouts.

I've ordered DoorDash in the United States

and it's $10 to $15 per delivery; it's quite expensive.

In China, the margins on those deliveries are incredibly low.

Workers are getting paid very little, profit margins

on each delivery for the platforms are quite low,

so that incentivizes the platforms to use these algorithms

to basically optimize delivery schedules

in a way that violates labour rights on a de facto basis:

schedules that might encourage drivers to drive unsafely,

running red lights to get to where they're supposed to go

by the time the algorithm says they should be there,

or incentivize them not to rest or take any breaks

or incentivize them to work overtime

to hit these quotas that the algorithm has set for them and this kind of thing.

That is where this regulation really focuses for the workforce:

how can we prevent that? It basically forbids algorithms

from implementing those kinds of stipulations

that violate labour rights in any way.

Now, that is not the only thing these regulations do.

They have a huge, broad focus. In terms of your original question,

where does China stand against

maybe other countries

in terms of algorithm regulation, they also look at lots of other things.

Because it is China, they focus on censorship,

what kind of content algorithms are allowed to disseminate

on the Chinese Internet by content algorithms, for example.

They focus on privacy, they focus on transparency,

they focus on the protection of minors online.

You can't recommend content to kids that's inappropriate.

They focus on a whole host of other things as well.

That was the very first AI-related regulation

that came out in China.

Two other regulations have come out since then,

I won't get as deep into those because they're not really labour-related,

but the second one was related to deepfakes,

governing algorithms that generate audio and video content,

content created by a machine and not a human being.

The third policy was related to ChatGPT-like services,

AI-generated content.

For both of those, no other country to my knowledge

has yet put out an AI-generated content regulation,

certainly not one as broad or sweeping as the ones China has implemented.

From those perspectives, we can say that China is certainly ahead.

We also know that China's legislative plan

for the next five years potentially includes an artificial intelligence law.

What that law is going to have in it is still unclear.

That law hasn't been drafted.

It's in the pre-drafting research stage where policymakers are talking about

what should be in it.

It'll probably be at least a year

and more likely three, four, even five years

before that law comes out and is finalized,

but China already has that law on its legislative agenda.

It's really moving forward very, very quickly in that regard.

-Excellent. I think it's certainly timely to see that, at least in China,

the regulator seems to move ahead,

given the speed of the technological development.

I wanted to follow up on one specific question or point that you raised

which I found very interesting is that,

if I understand you correctly,

these algorithmic or these AI regulations, in a sense, to a certain extent

try to support the enforcement of labour rights

which some of these platforms might try to circumvent.

Are you saying that essentially so far

the labour rights are not sufficiently enforced,

and in a sense, these AI regulations are supposed to help

the regulator or the policymakers

to enforce these regulations through these platforms?

-You could definitely say

that the AI rules were structured in a way to support that.

Essentially, what the regulations say is

you cannot circumvent traditional labour rights via technology.

In other words, they're just putting down on paper

the fact that no, it is not okay to violate labour rights

because a machine violated those rights and you as a human didn't do it

or you as a company didn't do it.

A machine did it.

That's also not allowed.

That's basically all those regulations say at the moment.

In reality, these companies have struggled

to implement those regulations

and have frankly done the bare minimum necessary

to meet baseline requirements in those regulations.

The regulations are not very detailed.

They're actually quite general.

You mentioned a minute ago that China tends to move very quickly

in response to emerging technologies on the regulatory front.

That's true.

The reason that China is able to move so quickly

is because the regulations

that they release are quite general, to begin with.

You'll see an emerging technology come out,

like the recommendation algorithms,

you'll see a regulation come out six months or a year later

before policymakers really even understand the technology very well.

They don't know what they're dealing with and they don't know how to regulate it,

but they'll release a policy that includes some general principles

for how they want to see companies treat that technology

even before they really have a grasp on specifics.

Then they'll iterate on that regulation as it becomes clear

where the real actual problems are.

This initial regulation essentially just said in very simple terms,

"You can't use algorithms to violate labour rights,"

and included a little bit of language

that makes it specific to the gig economy or delivery drivers.

As time goes on, I expect to see those regulations

become a lot more specific

and a lot more detailed

as regulators figure out really where the issues are,

but companies like Meituan did respond to the release of those regulations

by doing things like promising

to slow down delivery schedules

and not allow the algorithm to tighten, tighten, tighten.

You have to get there in 30 minutes,

29 minutes, 28 minutes, 27 minutes...

and to ensure that there's adequate rest time

and this kind of stuff.

There isn't very strict guidance yet from the regulatory side

on specifically what those algorithms have to do.
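
By way of contrast, the sketch below shows the kind of guardrails a compliant scheduler might add in response to the rules Kendra mentions: a floor below which the target time cannot tighten and a guaranteed rest allowance. Again, this is a hypothetical illustration; the thresholds are invented and are not taken from the regulation or from any company's practice.

```python
# Hypothetical sketch of possible guardrails after the recommendation-algorithm
# rules: a floor on how far the delivery target can tighten and a guaranteed
# rest allowance. Thresholds are invented for illustration only.

from dataclasses import dataclass


@dataclass
class GuardedRouteScheduler:
    expected_minutes: float = 40.0        # current delivery-time target
    floor_minutes: float = 30.0           # target can never tighten below this
    rest_minutes_per_hour: float = 10.0   # guaranteed rest allowance

    def record_delivery(self, actual_minutes: float) -> None:
        # Faster deliveries still pull the target down, but never below the floor.
        self.expected_minutes = max(self.floor_minutes,
                                    min(self.expected_minutes, actual_minutes))

    def deadline_for_next_courier(self) -> float:
        return self.expected_minutes


if __name__ == "__main__":
    scheduler = GuardedRouteScheduler()
    for observed in [35.0, 29.0, 27.0]:
        scheduler.record_delivery(observed)
        print(f"Target: {scheduler.deadline_for_next_courier():.0f} min, "
              f"rest: {scheduler.rest_minutes_per_hour:.0f} min/hour")
```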

-Excellent. I think this is a good segue to my next question

because in a sense,

what you're describing for China is also true in other countries.

One of the key concerns that we have in Western countries

is often the possible bias and discrimination

that results from the widespread use and implementation of AI.

Many policymakers in this part of the world are very concerned

that underprivileged groups on the labour market

will face even worse forms of exclusion

if these digital tools become widely used.

How are Chinese policymakers dealing with these issues?

-That's actually a really interesting question

and an area where I would say Chinese policy differs a little bit,

in terms of its focus, from maybe Western countries,

the EU, the US, or something like that.

I say that because

a lot of the international or Western conversation around AI

focuses on things like algorithmic sorting on HR platforms.

An algorithm looks through your resume, and if you are a person of colour,

your resume goes to the bottom of the pile, or something like that.

That's an obvious and egregious example,

but there are much more subtle examples of discrimination and bias

that happen in algorithmic job selection.

That's much less of a focus in China.

That said,

there are discrimination and bias clauses in most of China's AI regulations.

What they're mostly focused on is religion and ethnicity

and I think, health.

Not so much on sexual orientation

and not so much for the purposes of job selection.

Usually, what they're looking at is

when you are creating group segments

within your algorithm to identify individuals,

there are certain things that they discourage you

from using as an identifier, and religion is on the list,

ethnicity is on the list,

but that hasn't really been focused on the job market yet.

-Thank you. Maybe one last question I have,

and in a sense, you had already started to allude to it a bit,

is about the outlook of these regulatory developments.

One of the tensions I see in the type of work

that we are doing at the ILO is that a lot of countries struggle

to find the right balance between on the one hand

wanting to be a technological leader in this new area,

and at the same time, regulating all these issues

that you mentioned before on platforms, on discrimination, et cetera.

Actually, there are two parts to the question.

One of the questions is to what extent we will see

that Chinese policymakers,

now that they have already introduced all this regulation,

might become a bit more, let's say,

relaxed and continue focusing on the development of the technology.

The other question, or the related question is

to what extent will we see an increasing segregation

within the regulatory space across the world?

In the past,

when we looked at data protection regulation,

it seemed Europe would take the lead and would implement that

through the General Data Protection Regulation,

as a way of enforcing a global standard, but in AI regulation,

it doesn't seem to be the case.

It rather looks like, with China now moving in a specific direction,

Europe trying to follow to a certain extent,

and the US certainly moving in a very different direction,

that we actually see an increasing segregation.

Is that something that you observe as well,

or would you see a more relaxed approach in China as well,

focusing more on the technological development

rather than on the regulatory front?

-Yes, sure. In terms of your first question,

whether or not Chinese policymakers will relax now.

I think the answer to that is both yes and no.

I'll explain what I mean.

We actually have a very good example.

Let's take China's most recent regulation on AI,

which is the regulation for AI-generated content.

One of the things that regulation requires companies to do

if they want to release a public-facing ChatGPT-style algorithm,

or an app that uses a ChatGPT-style algorithm,

is that they have to file their algorithm with the cyberspace regulator.

That requirement was released

after companies started developing AI tools.

What I mean by that is that ChatGPT came out;

I think it kind of hit public consciousness in November of last year.

Chinese companies immediately started responding

with their own innovative tools.

Then the CAC came in and said, "Whoa, whoa, whoa.

There are censorship issues here.

There are safety issues here.

There are too many risks.

We don't have a good handle on these risks."

We have talked to some of the companies in China,

like Baidu, for example.

They went to these companies and said, "You need to slow your roll,

and we don't want to see you releasing any public-facing algorithms

until these regulations are finalized

and you are compliant with these regulations."

Many AI companies were really just sitting on their hands

waiting for the CAC to get its act together

so that they could get this license registration done

and launch to the public.

Meanwhile, in the US, companies were kind of iterating

very, very quickly on artificial intelligence tools.

You had a situation

where that regulation slowed down development,

but now that bottleneck has cleared.

Now we see companies

one after another in China launching public-facing AIGC tools

very, very rapidly.

The floodgates are open for innovation in AI.

I don't think regulators are going to relax.

I think now what they're going to do is watch for additional risks

and put out increasingly strict regulations

to mitigate those risks as time goes on.

That's the answer to that.

The answer to your second question

in terms of whether or not I'm starting to see increasing segregation.

It's funny, I see

segregation happening along one channel and unification happening along another.

What I mean by that is that most countries, China included,

or at least the major players in the AI innovation space,

have drafted some top-level UN-friendly

AI principles document that says AI shouldn't hurt people,

and AI should be this or that or protect privacy,

and AI shouldn't be used for military purposes,

and this kind of general high-level stuff

that sounds great in an international forum,

but the definition of--

these principles are almost too general to really be helpful.

Those words, privacy, protecting freedom,

those words mean different things depending on who's saying them.

I'm not sure.

On one hand, there's a general understanding among everybody,

and that understanding is common and widespread,

that AI is a risk, that it needs to be regulated,

that it could be quite dangerous, corrosive to economies,

corrosive to society, et cetera.

On the other hand, the actual way that AI is being regulated

in various countries, as you said,

is starting to diverge along very different tracks.

There's definitely some kind of segmentation happening.

I don't think there's any chance at the moment

that the EU, for example, looks to China and says, "Oh, well,

we're going to pick up what you're doing

because you've written the first regulations on that."

Typically what tends to happen is the EU looks at Chinese regulations,

sees that half of the regulation is censorship-focused,

and throws the baby out with the bath water,

and says, "That's a useless Chinese Communist party regulation."

That's a little bit unfortunate because while absolutely,

half of those regulations are about content censorship,

the other half includes some very forward-thinking,

unique consumer protections or unique technological protections

that would definitely be worth a thorough read and study.

I think what is likely to happen is that

China will continue developing its own regulatory pathway.

The EU will still probably lead

the Western world in AI thought leadership and AI regulation.

The US will continue to lag behind primarily

because it hasn't formulated at the federal level,

even the most fundamental laws

necessary for privacy protection and AI protections,

such as a federal data privacy law.

I think that's the state of play right now.

We'll see if countries are able to bring those things

into closer alignment in an international forum,

but right now it's pretty discombobulated.

-Thank you so much, Kendra.

That was an excellent conversation.

I definitely learned a lot today

and I hope that you will be able to join us again to share new insights

once they come along.

Thanks again for joining us today,

and for those of you who listened in today,

you can find out more about Kendra's work and Trivium via the links on our website

that we share together with this podcast.

For now, let me wish you goodbye.

I hope you will join us again soon

for another edition of the ILO Future of Work podcast.

Thank you very much.