The Ethics of AI: 5 Implications for HR Leaders to Consider
Artificial intelligence (AI) has become a cultural phenomenon that promises to transform how workplaces operate. But when exciting new technologies emerge, our society tends to put the cart before the horse. And we haven’t had a technology this powerful in many years (maybe ever).
While there are people on both extremes of the AI argument—some ready to dive head-first into an AI-powered future and others warning of an imminent robot apocalypse—most AI experts fall somewhere in the middle. They’re cautiously optimistic and recommend that governments and businesses alike spend more time thinking through the potential risks and ethical implications of AI before implementing the technology on a large scale.
When it comes to people operations, HR leaders have a bit of a tightrope to walk with AI. At a time when HR teams are being asked to do more with less, AI tools are already becoming available to streamline tasks, free up time, and make work more efficient and effective. At the same time, people leaders understand the importance of protecting employee privacy and trust, securing data, and keeping the “human” in human resources.
As AI continues to be integrated into HR solutions, we want to help you think through some of the ethical considerations of using AI in HR. In this article, we’ll cover five implications to consider as you create AI policies and integrate the technology into your own workflows.
How is AI currently being used in HR?
The use of AI and automation in HR technologies is expected to expand rapidly, and we’ll have much to learn about how to best leverage those capabilities in the coming months and years. But we’re already seeing some use cases that give us a peek into what is (and will be) possible in HR and beyond.
Talent acquisition has experienced the biggest boom in AI solutions by far, with tools for sourcing candidates, reviewing resumes, and even conducting screening interviews. Many teams already use generative AI chatbots like ChatGPT for tactical tasks: brainstorming ideas, running scenario-based roleplays, and generating content and interview questions.
Other AI systems are allowing HR to scale internal talent management. A great example comes from the HR team at IBM, which uses its own AI technology (IBM Watson) to help connect employees with growth opportunities within the organization. Predictive analytics matches employees with potential roles based on factors like location, pay grade, job role, and experience. Over 1,500 IBM employees have moved into new internal roles since the program launched.
“Our biggest opportunity area as we look forward is growing skills and skills management,” said Obed Louissaint, vice president of Talent, Watson Health & Employee Experience at IBM.
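To make the idea concrete, here’s a minimal sketch (in Python) of what attribute-based role matching could look like under the hood. The attributes, weights, and scoring logic are illustrative assumptions for this example, not IBM’s actual Watson implementation.

```python
# Illustrative sketch only (not IBM's actual matching algorithm).
# Scores how well an employee's profile fits an open internal role
# using a simple weighted overlap of hypothetical attributes.

def match_score(employee: dict, role: dict) -> float:
    """Return a 0-1 score for how well an employee fits a role."""
    weights = {"location": 0.2, "pay_grade": 0.2, "job_family": 0.3, "skills": 0.3}
    score = 0.0
    if employee["location"] == role["location"]:
        score += weights["location"]
    if abs(employee["pay_grade"] - role["pay_grade"]) <= 1:  # within one grade
        score += weights["pay_grade"]
    if employee["job_family"] == role["job_family"]:
        score += weights["job_family"]
    if role["skills"]:
        overlap = len(employee["skills"] & role["skills"]) / len(role["skills"])
        score += weights["skills"] * overlap
    return score

employee = {"location": "Austin", "pay_grade": 7, "job_family": "HR",
            "skills": {"analytics", "recruiting", "onboarding"}}
role = {"location": "Austin", "pay_grade": 8, "job_family": "HR",
        "skills": {"analytics", "recruiting", "compensation"}}

print(f"Match score: {match_score(employee, role):.2f}")  # 0.90
```

Even in a toy version like this, notice that every input is a choice someone made. The factors you weight (and the ones you leave out) shape who gets surfaced for opportunities.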
As more AI solutions emerge, there’s a huge potential to transform HR in a positive way. But that bright future requires diligence and care from HR leaders who want to embrace AI technologies while keeping the workplace safe, ethical, and human-centric.
5 ethical issues to watch out for
To ensure you use the power of AI for good in your organization, consider the following ethical implications. Understanding these potential risks ahead of time allows you to put policies and protections in place before you implement new AI-powered tools.
1. Biased hiring decisions
As more companies use AI in the recruiting process, concerns about bias in the technology have been making headlines. Generative AI models (like those behind ChatGPT) are trained on vast data sets created by humans. Because humans are inherently biased, some of those biases inevitably make their way into the AI models.
To understand how AI bias can create unfair hiring practices, consider this example from Textio. To check for gender bias in ChatGPT, the company analyzed performance feedback written using the tool. They found that when discussing “a bubbly receptionist,” ChatGPT assumed the employee was a woman, while presuming “the unusually strong construction worker” was a man. And when discussing a kindergarten teacher (regardless of other traits), the AI used she/her pronouns 100% of the time. Nurses were also presumed to be women 9 out of 10 times.
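Audits like Textio’s don’t require exotic tooling. Here’s a minimal sketch of one way to count gendered pronouns across a batch of AI-generated feedback. The two sample texts are hypothetical stand-ins, and a real audit would use many more samples and more careful language analysis than a simple regex.

```python
# A minimal sketch of a Textio-style pronoun audit on AI-generated text.
# Assumes you already have generated feedback samples in hand.
import re
from collections import Counter

SHE = re.compile(r"\b(she|her|hers)\b", re.IGNORECASE)
HE = re.compile(r"\b(he|him|his)\b", re.IGNORECASE)

def pronoun_counts(samples: list[str]) -> Counter:
    """Count gendered pronouns across a batch of generated texts."""
    counts = Counter()
    for text in samples:
        counts["she/her"] += len(SHE.findall(text))
        counts["he/him"] += len(HE.findall(text))
    return counts

# Hypothetical feedback a model wrote about "a kindergarten teacher"
samples = [
    "She creates a warm classroom and her students adore her.",
    "He manages his classroom well and communicates with parents.",
]
counts = pronoun_counts(samples)
total = sum(counts.values()) or 1
for pronoun, n in counts.items():
    print(f"{pronoun}: {n} ({n / total:.0%})")
```

If one set of pronouns dominates for a role description that says nothing about gender, that’s a signal worth investigating before the tool touches real hiring decisions.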
Over-reliance on AI or using it to make hiring decisions without checks and balances in place can damage employee trust, undoing any benefits that might be gained. According to a Pew Research survey, two-thirds of workers said they wouldn’t want to apply for a role if AI was used to make the hiring decision. To earn the trust of employees and prospective talent, HR leaders must use the tools in ways that make hiring more fair and inclusive rather than less.
2. Invasion of employee privacy
A 2022 study by PwC found that 95% of HR leaders have either implemented new methods to track remote worker productivity or plan to do so in the future. Many of these productivity tracking tools have AI capabilities already or will be offering them soon. Whether or not you choose to use monitoring tools, it’s essential to protect employees’ privacy rights and personal data.
Privacy and security, in general, should be a primary concern when using AI for work purposes. While some AI tools are designed not to retain or reuse the information entered in user prompts, others feed that data back into their machine learning models, potentially putting employee or customer data at risk.
Howard Ting, CEO of the data security firm Cyberhaven, told SHRM that when deciding what information to include in an AI prompt, you shouldn’t share anything you wouldn’t want made public.
“It would probably be fine to ask ChatGPT to write a job description,” Ting said. “But you wouldn’t want to provide ChatGPT with details about a harassment allegation against an employee or information about an employee’s leave of absence due to a medical problem.”
When using any AI technology, anonymize employee and customer data, along with any other proprietary company information. If you’re unsure whether it’s safe to share something with an AI tool, err on the side of caution.
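As a simple illustration of that habit, here’s a minimal redaction sketch that scrubs obvious personally identifiable information from a prompt before it goes to an AI tool. The patterns (and the EMP-##### ID format) are assumptions for this example; production-grade anonymization usually calls for dedicated tooling such as named-entity recognition.

```python
# A minimal "scrub before you send" sketch, assuming simple
# regex-detectable PII. Real anonymization needs dedicated tooling.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMPLOYEE_ID": re.compile(r"\bEMP-\d{5}\b"),  # hypothetical ID format
}

def redact(text: str) -> str:
    """Replace matched PII with labeled placeholders before prompting an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a reply to jane.doe@example.com (EMP-40213) about her 555-201-4455 inquiry."
print(redact(prompt))
# Draft a reply to [EMAIL] ([EMPLOYEE_ID]) about her [PHONE] inquiry.
```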
3. Inequitable performance management & compensation decisions
Problems with bias in AI algorithms aren’t limited to talent acquisition. Performance evaluation tools that use AI technology might favor certain employee groups, leading to discrimination in talent management. New technologies used in creating content for training and development can also inadvertently reinforce stereotypes and unconscious biases.
Worse still, these issues can lead to unfair or discriminatory compensation practices. Without human oversight, AI tools may unintentionally perpetuate pay disparities baked into historical salary data.
To keep AI-assisted performance evaluation ethical, HR teams should manually review all AI recommendations, ensure evaluation criteria are fair and inclusive, and involve diverse teams in assessment processes. For learning and development, curate content carefully, verify AI-suggested learning paths, and ensure balanced representation of diverse perspectives.
When it comes to compensation practices, HR leaders should conduct regular pay equity audits, establish transparent compensation models, and close any identified gaps as soon as they’re discovered.
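For teams that want to see what a basic audit could look like, here’s a minimal pay equity sketch using pandas. The column names, sample salaries, and 5% flag threshold are illustrative assumptions, not legal or compliance guidance.

```python
# A minimal pay-equity audit sketch. Groups salaries by job level and
# demographic group, then flags levels where the median gap exceeds 5%.
import pandas as pd

df = pd.DataFrame({
    "job_level": ["L3", "L3", "L3", "L4", "L4", "L4"],
    "group":     ["A",  "B",  "A",  "B",  "A",  "B"],
    "salary":    [88000, 82000, 90000, 104000, 112000, 99000],
})

# Median salary per demographic group within each job level
medians = df.groupby(["job_level", "group"])["salary"].median().unstack()

# Flag levels where the gap between groups exceeds 5% of the higher median
gap = (medians.max(axis=1) - medians.min(axis=1)) / medians.max(axis=1)
print(medians.assign(gap_pct=(gap * 100).round(1), flag=gap > 0.05))
```

In practice you’d control for tenure, location, and performance before attributing a gap to bias, but even a rough pass like this can show you where to look closer.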
4. Automated termination decisions
Terminating an employee is a job no people leader takes lightly, and termination decisions should never be handed off to AI without human oversight.
Unfortunately, issues with unfair uses of AI in HR have become so widespread that even legislators have taken notice. Senator Bob Casey of Pennsylvania recently put forth two new bills targeting the use of AI in personnel decisions and employee surveillance.
Here’s a snippet from Casey’s “No Robot Bosses Act,” shared by NBC News: “Systems and software, not humans, are increasingly making decisions on whom to interview for a job, where and when employees should work, and who gets promoted, disciplined, or even fired from their job.”
Empathy and a firsthand understanding of human behavior are essential to uncovering why someone may be struggling with performance. At the very least, employees have a right to be heard (by a human) and should have the chance to respond in any case of dismissal.
5. Lack of transparency and accountability
While many employees are still getting used to the idea of AI in the workplace, most would be comfortable with it if their employers were simply more transparent about how it’s being used. Integrating AI solutions into HR practices without clear explanations can erode employee trust.
Company leaders should always opt for transparency in how they’re using AI, especially if it’s being used to track employees or influence decisions that impact them directly.
Kerry Wang, CEO and co-founder of Searchlight, a San Francisco-based AI technology company, recently shared an example with SHRM to illustrate the power of transparency.
Wang referenced an AI tool that can predict whether an employee is about to quit based on their workplace behavior. The kicker: research found the tool’s predictions only hold if employees don’t know the technology is in place.
“For it to work, it would have to be kept secret from the workforce,” she said. “And that is not something I’m willing to do, even if it can accurately predict turnover.” She said when her company partners with a client, they first send out communication to employees with details on what is happening, why it’s happening, and what to expect.
“When we do that, 70-80% of employees opt into the data collection,” she said. “When you give people the choice and explain to them the benefits of using AI, the majority will agree to opt in.”
Putting guardrails in place
AI has the potential to change the game in some really exciting ways if we can learn to (ethically) make it part of our personal and professional lives. But the work must start today.
Strategic HR leaders have an opportunity to pave the way by implementing clear policies on data collection, usage, and employee consent. And with regular human oversight, HR teams can ensure any information produced by AI is accurate, fair, and used appropriately.
But it’s not all on HR leaders to ensure these guardrails are put in place. AI developers and technology providers also have a duty to regularly audit their models and to train their own development teams to recognize and mitigate bias.
When you’re evaluating potential AI and HR vendors, don’t be afraid to ask how bias is addressed in their systems. Ask how data is used and how employee and customer information is anonymized. And most importantly, ask how they’ll handle any concerns that arise once their tools are in use.
As Amanda Halle, seasoned HR leader and founder of Mindful Growth Partners, told us, “It all begins with learning, education, and training. Trust and support come when you clearly understand, define, and communicate the what, the why, and the how of AI at your organization.”
Amanda recommends three basic steps for HR leaders when rolling out any AI-powered tools:
- Level-set and make sure you’re all speaking the same language. This starts with AI education and training to level the playing field and build a shared, foundational understanding.
- Safeguard by protecting personally identifiable information and proprietary company information. You want to create the guidelines, guardrails, and philosophies that fit your organization.
- Encourage and reward learning, experimentation, and sharing. Encourage the safe use of tools, share the learnings that come out of this use, and reward that behavior.
Fair and ethical AI can enable HR professionals to make a greater impact for employees. Just be sure to consider all the factors at play and exercise discretion and restraint when using any new technology. Make a plan for how you and your team will appropriately leverage the technology to improve employee experience and business outcomes.
Get the HR guide to ChatGPT
Are you considering using ChatGPT to improve efficiencies on your team? Read our latest guide, HR + AI: 6 Ways Strategic People Leaders Can Leverage ChatGPT.
You’ll learn the potential benefits as well as the ethical considerations and risks of using generative AI. You’ll also find HR use cases you can implement today, with example prompts to help you get started.