Ethical Recruiting & Hiring Automation Considerations

Kristen Flores
August 19, 2020

Automation is an integral part of hiring, especially today: teams are leaning on automation more than ever before, with fewer resources and more applicants. That makes it increasingly important to understand the biases that might be affecting your recruiting process and putting certain groups of people at a disadvantage. It also means considering what matters to candidates: not just which tech tools you’ve implemented, but whether candidates feel engaged, are communicated with, receive closure, and leave the process feeling they were evaluated fairly.


With many AI tools on the market, some important ethical considerations to keep in mind include:

- Datasets: Wherever human data is involved, bias is involved. Structural injustice and unbalanced representation of the population can be baked into your dataset and carry through the rest of your process, which can lead to discrimination against protected classes (see the sketch after this list). 

- Designers: All the decisions you make while designing an algorithm, the biases from your development team, and who you have at the table impact the software you develop. At the end of the day, algorithms project the designers’ values and experiences into the system and risk including their biases in the process.

- Privacy: Be mindful of the data you’re using throughout the recruiting process and of what is appropriate, job-relevant, and legal for an employer to consider. 
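
As a concrete illustration of the dataset point above, here is a minimal sketch of a representation audit in Python. The column names, group labels, benchmark figures, and 10-point threshold are all hypothetical; a real audit would use your own applicant data and an appropriate population benchmark.

```python
import pandas as pd

# Hypothetical applicant data; the column and group labels are illustrative.
applicants = pd.DataFrame({
    "candidate_id": range(8),
    "gender": ["F", "M", "M", "F", "M", "M", "M", "F"],
})

# Share of each group in the data vs. an assumed benchmark population.
observed = applicants["gender"].value_counts(normalize=True)
benchmark = pd.Series({"F": 0.5, "M": 0.5})  # assumed benchmark, not a fact

comparison = pd.DataFrame({"observed": observed, "benchmark": benchmark})
comparison["gap"] = comparison["observed"] - comparison["benchmark"]
print(comparison)

# Flag groups meaningfully under-represented (the cutoff is a judgment call).
under = comparison[comparison["gap"] < -0.10]
if not under.empty:
    print("Under-represented groups:", list(under.index))
```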


One of the best ways to reduce bias is to start with experts who aren’t technologists. People data carries sociological meaning that a technologist may be less equipped to uncover, and working with an expert in this field (such as an IO psychologist) can help you determine how to proceed based on your goals. They can also help you assess whether the data you’re using is fair, since seemingly neutral data points can serve as proxies for protected attributes such as gender or race. 
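
One way such an expert might test for proxies is to check how well supposedly neutral features predict a protected attribute: if they predict it well above chance, they are a proxy risk. Below is a minimal sketch using scikit-learn on synthetic data; the feature names, the built-in correlations, and the 0.6 threshold are all assumptions for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data; a real audit would use your candidate signals.
rng = np.random.default_rng(0)
n = 500
protected = rng.integers(0, 2, n)  # 0/1 protected attribute (e.g., gender)
# These "neutral" features are deliberately correlated with the attribute.
zip_income = 50_000 + 5_000 * protected + rng.normal(0, 10_000, n)
gap_years = rng.poisson(1 + protected)
X = pd.DataFrame({"zip_income": zip_income, "gap_years": gap_years})

# If the features predict the protected attribute well above chance (0.5),
# they are acting as proxies for it and deserve scrutiny.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(clf, X, protected, cv=5, scoring="roc_auc").mean()
print(f"Proxy-check AUC: {auc:.2f} (0.5 = no signal, 1.0 = perfect proxy)")
if auc > 0.6:  # threshold is a judgment call, not a standard
    print("Features carry protected-attribute signal; review before use.")
```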


What is “fair”?

The concept of fairness is defined differently depending on the audience and the purpose. It’s important to understand the groups of people the tools will be used on, as well as where and when they will be used. Time-bound events such as COVID-19 can introduce candidates who differ from those who historically applied for a given position, and your process should account for that. 


Discard data points where you see substantial differences across groups, and look instead at more universal, broadly applicable measures that are less likely to vary by group. pymetrics does this with behavioral science: measures that apply across gender, age, and culture, replacing commonly used signals with baked-in bias, such as university attended, test scores, or GPA. This helps level the playing field. 
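
One simple way to operationalize "substantial differences across groups" is a standardized mean difference (Cohen's d) per data point, flagging features with large gaps for removal. The sketch below uses invented numbers; the 0.8 cutoff is a common rule of thumb for a "large" effect, not a legal standard.

```python
import numpy as np
import pandas as pd

def cohens_d(a: pd.Series, b: pd.Series) -> float:
    """Standardized mean difference between two equal-sized groups."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

# Invented candidate data: a test-score signal vs. a behavioral measure.
df = pd.DataFrame({
    "group":      ["A"] * 4 + ["B"] * 4,
    "sat_score":  [1400, 1350, 1420, 1380, 1150, 1200, 1100, 1180],
    "behavioral": [0.61, 0.55, 0.70, 0.58, 0.63, 0.52, 0.66, 0.57],
})

a, b = df[df.group == "A"], df[df.group == "B"]
for feature in ["sat_score", "behavioral"]:
    d = cohens_d(a[feature], b[feature])
    flag = "  <- large group gap, consider discarding" if abs(d) > 0.8 else ""
    print(f"{feature}: d = {d:+.2f}{flag}")
```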


Take into account the population you’ll be using the tools on: make sure that population is represented in the dataset and that representation is balanced across groups. As you develop, audit for performance across those groups. 
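
A standard audit here, not specific to this post, is the US EEOC’s "four-fifths" rule of thumb: the selection rate for any group should be at least 80% of the rate of the most-selected group. A minimal sketch with made-up outcomes:

```python
import pandas as pd

# Hypothetical screening outcomes (1 = advanced, 0 = screened out).
results = pd.DataFrame({
    "group":    ["A"] * 10 + ["B"] * 10,
    "advanced": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1,   # group A: 80% advance
                 1, 0, 1, 0, 0, 1, 0, 1, 0, 1],  # group B: 50% advance
})

rates = results.groupby("group")["advanced"].mean()
impact_ratio = rates / rates.max()
print(impact_ratio)

# Four-fifths rule: ratios below 0.8 suggest possible adverse impact.
flagged = impact_ratio[impact_ratio < 0.8]
if not flagged.empty:
    print("Possible adverse impact against:", list(flagged.index))
```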


In the case of rapid shifts or unexpected events such as COVID-19, a bright spot of the technology is that you can run simulations using prior populations. You can take data on candidates from different industries, such as hospitality and healthcare, and compare where they perform similarly and differently, to understand where they might fit into new industries as large groups look for new roles. 
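
As a rough illustration of that kind of simulation, the sketch below scores candidates from one industry against a trait profile built from another industry’s successful hires. Every name and number here is invented, and cosine similarity is just one plausible comparison; this is not pymetrics’ actual method.

```python
import numpy as np

# Hypothetical trait profile (e.g., attention, risk tolerance, planning),
# each scaled 0-1, built from successful hires in the target industry.
healthcare_success_profile = np.array([0.8, 0.3, 0.7])

# Candidates coming from another industry, with the same trait measures.
hospitality_candidates = {
    "cand_1": np.array([0.75, 0.35, 0.65]),
    "cand_2": np.array([0.20, 0.90, 0.30]),
}

def fit_score(candidate: np.ndarray, profile: np.ndarray) -> float:
    """Cosine similarity between a candidate and the target profile."""
    return float(candidate @ profile /
                 (np.linalg.norm(candidate) * np.linalg.norm(profile)))

for name, traits in hospitality_candidates.items():
    print(name, round(fit_score(traits, healthcare_success_profile), 2))
```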


Keeping technology free of bias

When it comes to AI tools, there are a lot of shiny objects out there. It’s essential to differentiate tools with scientific validity in predicting success in a specific job from those without it. Tools like facial recognition are not predictive of job performance, skills, or aptitudes. Humans are not static; they change and learn, and their success depends on past experience, skills, aptitudes, how they relate to others on the team, and more. Be cautious of these technologies and apply the filter “would I measure this quality if the technology weren’t available?” to decide whether to include it in your process. 


Formalize your decision-making process rather than leaving it to subjective judgment. Keep an audit trail you can map back to, so you can explain to yourself, your team, and the candidate why a decision was made. Constantly review your process, data, and technology to gather feedback and improve; the landscape will continue to shift, and your processes should shift with it. 
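
One lightweight way to formalize this is a structured audit record for every screening decision, written to an append-only log that can be reviewed later. The fields below are illustrative, not a standard schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ScreeningDecision:
    """One auditable record per candidate decision; fields are illustrative."""
    candidate_id: str
    stage: str            # e.g., "resume_screen", "assessment"
    decision: str         # "advance" or "reject"
    criteria_used: list   # the job-relevant signals actually considered
    model_version: str    # which algorithm/version produced the score
    reviewer: str         # the human accountable for the decision
    timestamp: str

record = ScreeningDecision(
    candidate_id="c-1042",
    stage="assessment",
    decision="advance",
    criteria_used=["planning", "attention"],
    model_version="screen-v1.3",
    reviewer="j.doe",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# An append-only JSON-lines file serves as the audit trail.
with open("decision_log.jsonl", "a") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```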


Transparency 

When partnering with a vendor, transparency is key. Be clear about what you’re trying to accomplish and what has and hasn’t worked; if you’re not careful, you can reproduce existing patterns that include bias. Make sure the vendor’s technology is relevant to your goals and your specific organization, identifying best-fit candidates for your specific setting. Be transparent about the situation in your organization: if you want to reduce existing biases in your workforce, the focus should be on optimizing the algorithm for fairness before it launches. Transparency also matters for candidates, who should understand the tools you’re using and what’s being assessed, and for internal teams, who need to know how to interpret and act on the results. 


The candidate experience

For candidates specifically, their experience should matter to more than the HR team: these people can easily become clients, customers, or referrers of other candidates. Think about ways to make the experience valuable even when it doesn’t result in a job offer. For example, candidates who play the pymetrics games can access their results and learn more about themselves, which can help with career pathing, personal development, and considering other opportunities. Candidates walk away with a more positive perception of the company and a stronger sense that they were treated fairly. 


To truly be a brand that works to increase diversity and do social good, it’s important to work with and value your candidates: help them understand their strengths and weaknesses so they can find success. 


The future of AI 

AI isn’t going anywhere; if anything, it will become more embedded in HR processes. Regulation is still catching up: creating standards, governance models, and best practices; defining responsible development and due diligence; and then teaching companies how to build these capacities. Legislation around AI in hiring is growing, and lawmakers are taking note. 


It will be increasingly useful not only in hiring but also in workforce management. As job needs and skills change, these tools can help redeploy employees to different roles based on their potential, aptitudes, and existing skills. 


Where to start?

Start by understanding your current workforce: before you look externally, look internally to understand how teams are performing, what skills and employees you already have, and what you’ll need going forward. Take the time, while operations may be slower, to do due diligence on your process and on the technology you’re considering, to make sure it’s fair to your employees and candidates. 


If you’d like to learn more about introducing technology that can help reduce bias in your recruiting process, reach out to us here.