Nowadays, you don’t have to look far to find a technological fear-monger warning that artificial intelligence (AI) will soon take over the world.
While I could write a dissertation on why nearly all their fears are either unfounded or easily avoidable, it’s hard to argue with the fact that in 2021, AI truly is everywhere we look.
We each interact with it daily in obvious ways, like when we ask Alexa to order us more batteries or when our self-driving Tesla drives us to the store to pick them up in person, as well as in subtler ways, like when we watch targeted ads on Hulu or get personalized search results on Google.
These increasingly popular use cases for AI share a primary goal: offer convenience for the customer in exchange for profits for the provider (both by cutting costs and generating revenue, often in the form of collecting and selling personal data).
It’s a simple formula that most consumers have been largely, if somewhat skeptically, on board with, leading to astonishing valuations for some of the most innovative AI startups and record-breaking investments by some of the largest corporations in the world. (In fact, a recent report by PwC anticipates that AI will contribute over $15 trillion to the global economy by 2030, a staggering figure that illustrates that this technological revolution is gaining speed and isn’t going to stop anytime soon.)
As an organization, Wonderlic is committed to science and, above all else, fairness. It’s in our DNA. That’s why, when considering our own investment in data science capabilities, we asked ourselves if and how we could use AI responsibly—for good. This line of thinking led us to the development of our ethical AI principles, which anchor on transparency, bias mitigation, user privacy, and user experience.
Our AI Innovation practice established these priorities before we even wrote our first line of code, and our commitment to them has remained steadfast ever since. With several AI-powered tools now live in our product, and many more launching soon, we believe now more than ever that a responsible, thoughtful approach to AI, with the good of humanity in mind, can break the concerning patterns that have overshadowed many of the commercial AI innovations of the past several years.
AI and better job matching
Our determination to use AI to improve the fairness of pre-hire assessments led us first to job matching.
During the job-matching process, users of our platform match the roles they are hiring for to pre-programmed job profiles we can use to score candidates. Unfortunately, this is not always an easy task, especially at smaller organizations where hybrid roles are common and a role’s actual responsibilities may not match those traditionally associated with its title. Poor matches can set standards for applicants that are too low, too high, or simply misaligned with the work, any of which can invalidate the results of a pre-hire assessment like WonScore.
This challenge led us to introduce a series of AI-powered innovations that tackle the problem head-on. The first was our semantic job search tool, which finds job profile matches based on the meaning of the words in a job title rather than looking for direct keyword matches. This simple solution had a huge impact, increasing the occurrence of successful job matches by over 50%. We weren’t entirely satisfied, however, since it accounted only for the details included in job titles and ignored the nuances that are often revealed in job descriptions.
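To make the idea concrete, here’s a minimal sketch of how semantic title matching can work using an off-the-shelf sentence-embedding model. This is illustrative only: the model name, profile list, and scoring below are stand-ins, not our production system.

```python
# A sketch of semantic job-title matching with sentence embeddings.
# Model choice and profile data are hypothetical placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical library of pre-programmed job profiles.
profiles = ["Software Engineer", "Customer Service Representative",
            "Office Manager", "Sales Associate", "Data Analyst"]
profile_embs = model.encode(profiles, convert_to_tensor=True)

def match_title(query: str, top_k: int = 3):
    """Return the profiles whose titles are closest in meaning to `query`."""
    query_emb = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, profile_embs)[0]  # cosine similarities
    ranked = scores.argsort(descending=True)[:top_k]
    return [(profiles[int(i)], round(float(scores[i]), 3)) for i in ranked]

# "Full-stack developer" shares no keywords with "Software Engineer",
# but a semantic search still surfaces it as the closest profile.
print(match_title("Full-stack developer"))
```

The point of the embedding approach is exactly the failure mode above: a keyword search returns nothing for a title like “Full-stack developer,” while a meaning-based search still lands on the right profile.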
That led us to build on the tool with an optional job description search, where our proprietary AI reads a provided job description, parses out the job-related competencies required for the role, and cross-references those against our database of job profiles, all in a matter of seconds. These tools not only serve our primary objective of improving fairness; they also dramatically improve our platform’s user experience by saving users countless hours of valuable time.
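One way a pipeline like this can be sketched is in two stages: extract competencies from the description by comparing its sentences against a competency lexicon, then rank profiles by overlap with the extracted set. Again, everything named below (the lexicon, the profile mappings, the threshold) is a hypothetical stand-in, not a description of our actual system.

```python
# A sketch of description-based matching: embed a competency lexicon,
# keep competencies that are semantically close to some sentence of the
# posted description, then rank profiles by competency overlap.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

competencies = ["negotiation", "data analysis", "customer service",
                "project management", "public speaking"]
comp_embs = model.encode(competencies, convert_to_tensor=True)

# Hypothetical profile -> competency mapping.
profiles = {
    "Sales Associate": {"negotiation", "customer service"},
    "Data Analyst": {"data analysis", "project management"},
}

def extract_competencies(description: str, threshold: float = 0.35) -> set:
    """Keep any competency semantically close to a description sentence."""
    sentences = [s.strip() for s in description.split(".") if s.strip()]
    sent_embs = model.encode(sentences, convert_to_tensor=True)
    sims = util.cos_sim(comp_embs, sent_embs)   # (competency, sentence)
    best = sims.max(dim=1).values               # best sentence per competency
    return {c for c, s in zip(competencies, best) if float(s) >= threshold}

def rank_profiles(description: str):
    """Order profiles by how many extracted competencies they share."""
    found = extract_competencies(description)
    return sorted(profiles, key=lambda p: len(profiles[p] & found), reverse=True)

print(rank_profiles("You will build dashboards and analyze sales data. "
                    "You will coordinate timelines across teams."))
```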
AI and bias mitigation
While we continue to develop significant breakthroughs in our job-matching software, including a major upcoming expansion to our supported network of jobs, we’re also focusing heavily on developing software that will mitigate the bias in our cognitive ability assessments.
For decades, cognitive ability tests have faced an industry-wide challenge: average score differences across racial and ethnic groups. Used incorrectly, those differences can produce adverse impact, meaning members of one group are selected at a meaningfully lower rate than members of another. To overcome this problem, our AI team is developing new kinds of assessments that don’t suffer from the same level of racial and ethnic subgroup differences as the current generation of cognitive ability assessments.
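To make “adverse impact” concrete: in U.S. hiring it is commonly screened with the four-fifths rule from the EEOC’s Uniform Guidelines, which flags any group whose selection rate falls below 80% of the highest group’s rate. A minimal sketch (the counts are invented for illustration):

```python
# Illustration of the EEOC "four-fifths rule" used to screen for
# adverse impact. The applicant counts below are made up.
def selection_rate(hired: int, applied: int) -> float:
    return hired / applied

def adverse_impact_ratios(groups: dict) -> dict:
    """Compare each group's selection rate to the highest group's rate."""
    rates = {g: selection_rate(h, a) for g, (h, a) in groups.items()}
    best = max(rates.values())
    # A ratio under 0.80 is the conventional flag for adverse impact.
    return {g: round(r / best, 2) for g, r in rates.items()}

applicants = {"Group A": (48, 120), "Group B": (24, 100)}  # (hired, applied)
print(adverse_impact_ratios(applicants))
# Group A: 1.0, Group B: 0.6 -> Group B falls below the 0.80 threshold.
```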
Specifically, our research is focused on “alternate inputs”: assessments that go beyond the typical multiple-choice format, thanks to the innovative approaches to gathering and processing data that artificial intelligence and machine learning make possible.
One of the most promising methods we’re exploring is free-response text questions, in which candidates respond to open-ended, workplace-related prompts. Our preliminary findings show that free-response text scores align well with the scores of our existing cognitive ability tests while suggesting smaller differences between racial and ethnic groups, as well as between genders: exactly the results we were hoping for.
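Two standard statistics put “align well” and “smaller differences” in concrete terms: the correlation between the new scores and existing cognitive scores, and Cohen’s d for the standardized gap between subgroups. Here’s a sketch on synthetic data; the numbers are randomly generated placeholders, not our actual findings.

```python
# How convergence and subgroup gaps might be quantified: Pearson r for
# alignment with existing scores, Cohen's d for standardized group
# differences. All data here is synthetic, not real assessment results.
import numpy as np

rng = np.random.default_rng(0)
cognitive = rng.normal(50, 10, 500)                  # existing test scores
free_text = 0.8 * cognitive + rng.normal(0, 6, 500)  # hypothetical new scores
group = rng.integers(0, 2, 500)                      # two synthetic subgroups

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

r = np.corrcoef(cognitive, free_text)[0, 1]          # convergent validity
d = cohens_d(free_text[group == 0], free_text[group == 1])
print(f"correlation with existing test: r = {r:.2f}")
print(f"subgroup difference: d = {d:.2f}")  # smaller |d| means a smaller gap
```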
We feel confident that these bias-mitigation innovations will be our most important and impactful use case to date. And we can’t wait to see the impact that our fairness-focused AI has on bolstering diversity and inclusion at the organizations, small and large, that we serve every day.
To dig deeper into Wonderlic’s AI-related projects, visit our team’s blog. And to learn more about another recent innovation spearheaded by the project team (not AI-related, but still very cool), check out this article on our new Candidate Feedback Reports.
About the Author: Ross Piper is the Head of Artificial Intelligence here at Wonderlic.