The use of artificial intelligence (AI) in recruitment has been in the news recently, not for solving our recruitment challenges but for perpetuating the bias that has long plagued some of our processes.
Dr Tiberio Caetano, Chief Scientist at the Gradient Institute and a speaker at our upcoming Future of Talent retreat, has been working in AI for over 20 years as a practitioner, researcher and entrepreneur.
I chatted with him to find out more about the state of AI and machine learning, the ethics involved, and how the Talent Acquisition (TA) process can maintain its integrity as we continue to use these technologies.
Tiberio, can you explain what exactly machine learning is and how it relates to AI?
If AI is the product, machine learning is the key technology that enables that product. In order to build an AI system, we need to make use of machine learning.
Machine learning is a computer programming paradigm in which the programmer gives a machine a goal to be achieved, along with "resources" for achieving it in the form of data, and the machine itself infers from the data which precise steps to take in order to achieve the goal.
For example, to create an AI that plays chess, we give the machine a goal (say, "checkmate" or "don't be checkmated") and data (the rules of chess plus lots of examples of games), and the machine infers what it has to do in any position to maximise its chances of winning.
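The paradigm Tiberio describes can be sketched in a few lines of code. In the toy example below, the programmer supplies only a goal (maximise accuracy) and some data (labelled examples); the program infers the decision rule itself rather than having it hand-coded. The data and the hiring-score framing are entirely made up for illustration.

```python
def learn_threshold(examples):
    """Infer the score threshold that best separates the two labels.

    examples: list of (score, label) pairs, where label is 0 or 1.
    The programmer specifies only the goal (accuracy on the data);
    the rule itself (the threshold) is learned, not hand-written.
    """
    candidates = sorted({score for score, _ in examples})
    best_threshold, best_accuracy = None, -1.0
    for t in candidates:
        # Count how often "predict 1 when score >= t" matches the label.
        correct = sum(1 for score, label in examples
                      if (score >= t) == bool(label))
        accuracy = correct / len(examples)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = t, accuracy
    return best_threshold, best_accuracy

# Hypothetical data: (assessment score, successful-hire label).
data = [(35, 0), (42, 0), (55, 0), (61, 1), (70, 1), (88, 1)]
threshold, accuracy = learn_threshold(data)
print(threshold, accuracy)  # → 61 1.0
```

Real machine-learning systems search far richer families of rules than a single threshold, but the division of labour is the same: humans specify the goal and supply the data, and the machine works out the steps.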
Is it a problem then, if AI can predict what we are going to do?
It can be a problem. A machine that correctly predicts which customers are more likely to renew their insurance policy after a significant increase in premium can be used to exploit vulnerable customers by charging an unfairly high premium.
But not necessarily. A machine that correctly predicts a driver is about to fall asleep by monitoring his or her eye movements can trigger an alarm to wake the person up.
What determines whether the deployment of an AI system is a problem is both the purpose for which it is used and how well its operation fits that purpose.
Do you think, then, that we can trust AI to make our hiring decisions for us?
Should we trust people to make our hiring decisions? Well, it depends on the people, right? It is the same answer for an AI: it depends on the AI.
For instance, if in a given context the AI has been demonstrated to perform better (against objective measures of performance) than the best available human benchmark in that context, we will need a good reason not to hand over a decision to an AI.
In the same way that different people perform differently across different tasks, so do AI systems.
I believe it is more likely that, at least in the short term, AI will be used to assist us in making hiring decisions rather than making final decisions itself.
How is ethics related to AI? Why do you think ethical AI has become such a pressing concern?
The more power we have, the more important it is that we ensure that power is used for good since the harms of using that power in the wrong way can be devastating.
AI can be a very powerful technology, not unlike bio or nuclear engineering, and as such it can be used to create a lot of good or a lot of harm.
Therefore it is imperative that we understand how to make ethical AI. This will require both technical tools and non-technical tools, such as regulation.
What responsibilities do organisations have for the ethical integrity of the recruitment technologies they employ? And how might these obligations be discharged in your opinion?
The same responsibility they have for the ethical integrity of their existing hiring decisions. How is the ethical integrity of current decisions assessed?
There is plenty of evidence of unconscious human bias in hiring practices, and this is a very hard question to answer precisely.
One direction is to ask how we measure what constitutes an ethical decision in the first place. Once we have a handle on that, we can assess AI systems (as well as humans) against that criterion.
When a candidate participates in an AI-based hiring process, who in your opinion owns the data? What are the data ownership and privacy obligations?
These are complex questions to which I believe there aren’t yet good answers. The situation is clearer from a privacy perspective as there is a lot of legislation in this space.
As for property rights over data, this remains a big open question for which I don't believe there are good pragmatic answers at this point in time; there is also much less regulation in that area.
The thinking behind consumer data rights is relevant here. Recent moves on data portability, for example in Australia with the new open banking regulation, are welcome in principle as they give consumers the power to take their data with them.
What advice would you give for organisations who are looking at implementing AI-powered recruitment technologies?
Start by defining what constitutes a good hiring process from a conceptual viewpoint, and be as precise as necessary to be able to assess any hiring process against those criteria.
If you don't have a clear definition of what success looks like, how can you assess whether an AI is doing a good job?
Are there any good resources on ethics in AI that you would recommend?
I’d suggest this blog post from Gradient Institute: https://gradientinstitute.org/blog/1/
Dr Tiberio Caetano will be sharing a framework for creating actionable ethics for AI-driven recruitment at the upcoming Future of Talent retreat. Join him and 35 of Australia’s most senior TA leaders at this exclusive, 2.5-day retreat. If you are an internal TA executive for a corporate organisation, we would love for you to be part of this. Register your interest here.