With the proliferation of AI and gamified assessments in the HR tech arena in recent years, it's easy to be dazzled by their promise to enhance candidate experience, increase process efficiency and improve quality of hire.
But if you are thinking it all sounds too good to be true, this may be one case where the hype does meet reality, provided you do your homework.
Here are the three questions you should be asking when choosing your next assessment provider:
1. Does the science stack up?
An assessment is still an assessment, so the traditional rules around validity and reliability that psychologists use to evaluate the rigour behind an assessment still apply.
Whilst gamification and AI capture and use data differently, the measures still need to be reliable, in that they are stable over time, and valid, in that there is evidence they measure what they are supposed to measure. Without this, we could be making some pretty important decisions with data that is neither dependable nor particularly meaningful.
2. How are you building and maintaining your algorithms?
All algorithms need to learn from a data set, and the data set an assessment provider trains its algorithm on is crucial. Algorithms work best when they are customised to your organisation and trained on data that reflects the context in which they will be used and the problems you are trying to solve.
For example, if you are building a model for graduate hiring and you know you see a spike in turnover at the two-year mark, make sure the model is built on data captured from your current high-performing graduates who have stayed beyond that mark. This way the model captures what is unique about that population, so you select more high-performing graduates who are likely to stay.
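The training-population idea above can be sketched as a simple filter. The field names and thresholds here are hypothetical, just to show the shape of the selection:

```python
# Hypothetical sketch: pick a training population of current graduates
# who are high performers AND have stayed past the two-year mark.
grads = [
    {"id": 1, "tenure_years": 3.0, "performance": 4.5},
    {"id": 2, "tenure_years": 1.5, "performance": 4.8},  # left too early
    {"id": 3, "tenure_years": 2.5, "performance": 3.9},  # tenure OK, performance below bar
]

training_set = [
    g for g in grads
    if g["tenure_years"] >= 2.0 and g["performance"] >= 4.0
]
print([g["id"] for g in training_set])  # only grad 1 qualifies
```

The point is that both criteria matter: training only on high performers, regardless of tenure, would bake the two-year turnover problem into the model.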
Also, keep in mind that algorithms have a shelf-life. They need to be refreshed at least annually to ensure their ongoing relevance. Does the vendor actually provide this service?
Finally, be careful about building models from CV or demographic information such as age, gender or postal address, or from data collated through social media profiles. These types of data are inconsistent and can introduce unwanted bias into your selection process.
3. How do you mitigate bias and support diversity?
Bias in algorithms has gained significant attention in the media recently, with bodies such as the Australian Human Rights Commission, Singapore's Personal Data Protection Commission and the World Economic Forum all working to develop standards and guidelines for the ethical use of AI.
You want to understand how the vendor approaches bias mitigation. For example, you will want them to consider the diversity of the data set on which they build their models, to test models for bias before they are deployed, and to back-test for adverse impact afterwards.
You will also want to understand what tools the vendor uses to detect and remove bias from its algorithms. With open-source bias-detection tools such as Audit-AI freely available, they should have a clear plan for keeping their algorithms balanced and fair.
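One widely used back-test for adverse impact is the "four-fifths rule": the selection rate for any group should be at least 80% of the rate for the most-selected group. The sketch below is a minimal illustration with made-up numbers, not the method of any particular tool or vendor:

```python
# Hypothetical four-fifths rule check on selection rates (illustrative data).
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

groups = {
    "group_a": selection_rate(48, 100),  # 48% selected
    "group_b": selection_rate(30, 80),   # 37.5% selected
}

top_rate = max(groups.values())
for name, rate in groups.items():
    impact_ratio = rate / top_rate
    flag = "OK" if impact_ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
    print(f"{name}: rate={rate:.2f}, impact ratio={impact_ratio:.2f} -> {flag}")
```

Here group_b's impact ratio falls just below 0.8, which is exactly the kind of signal a vendor's back-testing process should surface and act on before, and after, deployment.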
The use of gamification and AI in assessment, whilst relatively new, is here to stay. We look forward to its continued evolution, and believe its true potential is not simply to make today's processes more efficient, but to make the world a fairer place – and that's something we can all be proud of.
This article is contributed by pymetrics.