[music]
Hello and welcome to this edition of the ILO's Future of Work podcast.
I'm Ekkehard Ernst, Economist at the ILO.
I'm Ayisha Piotti,
Director of Artificial Intelligence Policy at the Centre for Law
and Economics at the Swiss Federal Institute of Technology, ETH Zurich,
as well as the Managing Partner of the Swiss-based firm, RegHorizon.
This edition of the Future of Work podcast
is a coproduction with Geneva Macro Labs, RegHorizon,
and the Centre for Law and Economics at the ETH Zurich.
Have you ever talked to a computer during an interview?
Have you ever been made aware of a machine screening your CV,
or did you ever get the impression that you had been selected
by an algorithm for a job you interviewed for?
How did this make you feel?
Today, we want to explore the rising use
of artificial intelligence in the hiring process.
To explore this topic,
we could not have a better guest than Mona Sloane.
Mona is Research Assistant Professor
at the New York University Tandon School of Engineering
and Senior Research Scientist at the NYU Center for Responsible AI.
She's also Principal Investigator at the Tübingen AI Center
of the Eberhard Karls University Tübingen in Germany.
Mona, welcome to the Future of Work podcast.
As you have worked extensively on the role of AI in the hiring process,
can you tell us a bit more about your research?
Of course. Thank you so much, Ekkehard and Ayisha, for having me.
It is an absolute pleasure to be here.
I am very happy to talk about my work.
I should say that I am a sociologist, and as such,
I am mostly interested generally in the ways in which AI expresses
and constitutes social relations
and how AI has become integral to how we organize society.
One of my main interests is how AI affects
discretionary decision-making in the professions
and in recruiting specifically.
I currently conduct extensive research,
extensive qualitative research by way of interviews with recruiters,
with talent acquisition managers, with sourcers,
and also with HR tech vendors
to understand how AI shifts and changes
and shapes professional decision-making in the profession of recruiting.
Excellent. Thank you.
We have heard that policymakers
have become increasingly active in ensuring the safe
and fair use of these tools
and in preventing discrimination during the hiring process.
The European Union, for instance, has been particularly active,
but we also see regulatory activity in other countries,
such as the United States, where you are based.
Mona, you're based in New York, in particular,
can you tell us a bit more about
the discussion among policymakers in the US
and compare this with the situation
in Europe that you also know very well?
Thank you for that great question.
I think it's very important
to have a good understanding
of how AI regulation is taking shape on both sides of the pond, as it were.
Indeed, there's a similar spirit but a very different approach
to how policymakers actually come at the problem of AI regulation.
In Europe, of course, we have the European Commission,
which has crafted the EU AI Act
that we are all expecting to kick in sometime this year.
That regulation is what we would call an omnibus regulation,
so a big regulatory framework that has been in the making
for a very long time that has one approach
for AI as a sort of almost super technology
that affects all areas of social life.
That approach is risk-based.
We have almost a top-down way of mitigating
the risk that AI is posing potentially to citizens and to society.
Then in the United States, on the federal level,
we have comparable attempts
to introduce bills that similarly regulate AI
as a general-purpose technology.
For example, we had the Algorithmic Accountability Act that was introduced in 2019,
but those haven't really come to fruition yet.
What we see instead in the United States is a more localized
approach whereby local policymakers either on the state level
or on the even more local level, so in city governments,
approach specific AI problems in smaller regulatory frameworks.
This is a phenomenon that Stefaan Verhulst,
the co-director of the GovLab, and I have called AI localism.
In AI localism, we have, as I said, specific problems that are addressed.
One of the flagship bills that we have here
in New York on AI is, indeed,
on the use of AI tools in hiring and employment.
That bill was signed into law last year.
It was supposed to take effect in January 2023.
It's currently a little bit on pause because regulators realized
that they wanted a little bit more input from stakeholders.
That bill has very specific mandates
with regards to how, essentially,
AI-driven technology in the HR sector
is being assessed as it enters the market and affects recruiting.
Very, very interesting, actually.
Indeed, we see the regulatory landscape changing, right?
You talked about EU, you talked about New York.
At the same time, I think we are also seeing much higher pressure
from the civil society for companies to, in fact,
comply not only with these changing rules
but also to ensure ethical development
and deployment of these technologies.
How do you see, Mona, companies handling this?
Specifically in terms of providers of AI tools, in your experience,
how are they reacting to these rising demands
and these additional requirements?
That is a great question, Ayisha.
The interesting thing about AI in recruiting
or AI in HR is that it often is sold
as the solution to human bias.
We do know that HR, and recruiting particularly,
have long been prone to human bias.
It's a problem.
Employment discrimination is an old problem.
Employment discrimination is a problem that is addressed through existing
and long-standing non-tech-focused regulation, both in Europe, of course,
and in the United States.
Some of these technologies are marketed
as actually fixing various kinds of bias in the recruiting
and assessment process.
That's I think an important piece of information.
When it comes to questions around compliance,
we really are a little bit in a limbo space right now,
whereby we know a lot about the regulatory frameworks
and regulatory approaches but they haven't actually been passed yet.
Again, we're waiting for the EU AI Act.
The New York City bill hasn't kicked in yet either.
There are various bills that have kicked in, for example,
the Illinois Artificial Intelligence Video Interview Act,
which focuses on regulating, specifically,
interviewing software that is used in the recruiting process.
Other than that, we're still waiting for those regulations to fully kick in
and to see what compliance actually looks like.
That means when I talk to companies who design these tools
and deploy these tools and make money off these tools, they do,
of course, have a pressing interest in either being told
or finding out what sufficient compliance
actually could look like because, of course,
not knowing is a risk for them.
I think there is an urgent need actually
for innovating compliance
in the context of AI regulation.
The reason why I'm saying that is because we have this uncertainty
going on, and in absence of clearer guidelines,
we will inevitably fall
into a precedent regime whereby powerful actors
will show us what an interpretation of this legislation could look like.
That doesn't necessarily mean
that those precedents are truly independent
or they are sufficient.
I think we really need an investment,
and I mean that quite literally, into figuring out
what compliance in the HR regulatory space could look like.
I think it's imperative that money is made
available for this as a research problem, as a problem for NGOs,
for interdisciplinary research, for collaborations with industry,
of course, with standards bodies, and so on,
but it is currently a little bit of a race
to define what compliance means by precedent, I would say.
Oh, I think that's really relevant, right?
What you've pointed out is this gap in a way that exists.
On the one hand, we're asking for compliance,
we're saying you should be doing many of these things,
but it's not so clear.
Thank you for that. Very clear.
When I talk to trade union activists, in particular,
but I think that's also true for people at business associations
or employers' associations,
I often get the impression that there's a strong sense
of urgency to address the challenges that you mentioned,
and others, that these AI hiring tools pose, while, at the same time,
there's a lack of knowledge
as to where and how these tools are currently being used,
and so what dangers they might pose.
We hear a lot about biases and discrimination,
but I think there's a lack of understanding,
what are these potential dangers that we haven't really fully understood,
and in particular, how trade unions can best support their members.
In your view, how should trade unions address these issues,
and how can policymakers support them, for instance,
by requiring HR departments to fully disclose the use of such tools?
Thank you, Ekkehard.
That also is a great and extremely important question.
I want to start answering that by stating that very often,
AI technology can lead
to a furthering of power imbalances,
just by way of being very complex
and being infrastructural to various organizational processes
or business processes in that AI technologies get secretly
or quietly embedded into existing processes
without knowledge of all the stakeholders
or literacy around how these tools work, what their assumptions are,
what data they're collecting, how they're analyzing the data
and interacting dynamically with the environments
that they're being embedded into.
That's a general sociotechnical dynamic that is unfolding.
Of course, it becomes quite acute
when we have a growth of the mediating role
that this technology plays vis-à-vis workers and employers.
Because AI always is a scaling technology,
it's deployed to increase efficiencies in decision-making
and organizational decision-making,
so the opaqueness that is baked into the system is also scaled up
when that happens.
I think, at the minimum,
what should be required really is that,
as you said, Ekkehard,
HR departments provide registries for their workers stating,
"These are the technologies that we're using."
When they are large corporations,
let's say McDonald's is buying a license for a tool, they have,
of course, significant purchasing power in the HR tech market.
They could mandate more transparency from the AI vendor and could,
for example, as part of the licensing or purchasing process,
require more AI transparency and explainability.
For example, they could ask,
"What are the assumptions that you bake into your system?
What is the data that you're using?
What are you optimizing for?
What are the possibilities for AI transparency that can be provided
for the people that get enrolled into the use of the system,
so the workers?"
I think that is something that trade unions can concretely ask for,
so basically, create a space in which AI transparency
and explainability is mandated by way of purchasing
power because I think that's the more effective way.
Then, of course, I want to give a shout-out to Dr. Christina Colclough,
who runs the Why Not Lab and works a lot
with trade unions on the threats
and challenges, but also opportunities of new technology,
specifically AI technology.
There's a lot of need for increased literacy, of course, here.
I think that is something where trade unions
can more actively engage with scholars
and researchers but also with NGOs
who are working on these problems.
The space of accountable, fair,
and transparent AI is large and it is growing,
and there's a lot of appetite for impact
and for collaborations with organizations such as trade unions.
I think that's another thing that trade unionists could embark upon.
Mona, thank you so much, actually, for what you've shared.
What I see a little bit here is, of course,
the need for works councils to be aware of what is happening and,
of course, to be involved, essentially.
I'm just wondering because I heard something in Germany.
I believe they have a process, which is in place.
I don't remember if this is because of the law
or it's just a practice where this is done.
I was just wondering if you know a bit more about it
and if you could share that
and also whether such a system of co-creation or consultation
would make sense in other parts of the world as well.
Yes, thank you for that question.
I will say that I'm not an expert in that regulation,
specifically in Germany.
I have read about it a little bit, but I'm not an expert.
What I can say as a native German but also as someone
who has conducted a little bit of research
on co-determination in German companies
is that the social organization
of workers' councils in Germany has a long tradition.
There's traditionally strong worker representation
and participation in fundamental decisions about an organization
by way of the elected workers' council representatives.
It is somewhat unsurprising that an important topic,
such as the use of AI systems in the selection, assessment,
and evaluation of workers,
is a workers' council topic in that workers' council
representatives engage actively in questions around that.
What I'm trying to say here is that this participation
in questions around the use of AI and hiring assessment
and evaluation flows from the strong worker organization,
worker representation in German organizations.
It is not something that flows from tech regulation.
That is something I think that's really interesting
and relevant to understand
the cultural specificity but then to also explore,
can tech regulation actually be interpreted
as a way to strengthen worker organization and worker participation?
Almost to flip the script, if we look at it in other cultural contexts,
and if you look in the US, interestingly enough,
over the past three to five years,
we've actually had increased unionization,
actually in the tech sector among "white-collar workers"
and solidarity movements with "blue-collar workers,"
such as the Amazon warehouse workers.
There is something really interesting
that is going on here around worker organizing
and technology-driven organizational decision-making,
control, and surveillance.
Yes, that's how I would respond to that question.
I love that.
I think this idea of flipping it,
and perhaps that could be something to be looked more into,
I think absolutely.
Thank you for sharing that.
Yes, I think it's a great way of closing this podcast.
Mona, thank you very much for joining us today.
If you want to find out more about Mona's work and research,
you can find links on the web page of this podcast and on the ILO website.
My thanks also to Ayisha Piotti,
who has been a wonderful co-moderator.
For now, let me wish you goodbye.
I hope you will join us again soon
for another edition of the ILO Future of Work podcast.
[music]