
Ethical AI: guidelines and best practices for HR pros

Learn how to ensure ethical AI implementation in your HR processes with this all-encompassing guide on ethical AI in the workplace. From establishing an AI ethics committee and regularly auditing AI systems to developing ethical AI policies, fostering collaboration, engaging in industry-wide conversations, and promoting a culture of continuous learning, you can use AI to your advantage while ensuring smart and equitable use of evolving tech.

Keith MacKenzie

Passionate about human resources, employment, and business management, and an expert at sharing that expertise.


As AI continues to revolutionize the field of human resources, concerns about the ethical implications of this technology are growing. People are worried that AI will be used for deceptive and malicious purposes. And even when it isn’t used maliciously, the adoption of generative AI in the workplace may still widen inequality.

Striking a balance between harnessing the power of AI and addressing its challenges is possible. Many are driving that conversation – and you, in human resources, are part of it. Your work directly involves human beings, so it makes sense that you want to approach AI ethically as well.

We’ll help you out here. We’ll share examples of how ethical use of AI has been established in various circles, and then guide you on how to ensure ethical AI standards are met in your own work.



Real-life examples of power and responsibility

Uncle Ben’s famous quote to Peter Parker rings loud and true here: “With great power comes great responsibility.” In that spirit, we have real-life examples of organizations and individuals who are driving the importance of balancing the power of the latest technologies with the challenges they present.

Ethical Intelligence founder Olivia Gambelin is one such example. In a LinkedIn post, she discussed the potential risks associated with generative AI, including security, bias, patenting and more – and emphasized the opportunity at play here: to build an ethical AI framework from the start so that we can maximize the good we can do with it.

There are also formal organizational and individual initiatives that have emerged over the last few years – let’s look at three of them:

1. IBM: Trusted AI Initiative

IBM has made significant efforts to ensure ethical and responsible use of AI through their Trusted AI initiative. Through it, IBM has developed AI solutions that prioritize fairness and transparency while minimizing bias.

By establishing a set of guidelines, best practices and tools, IBM ensures that their AI technologies are developed and implemented ethically. Their AI Fairness 360 toolkit, for example, is an open-source library that provides metrics and algorithms to help detect and mitigate bias in AI systems.
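If you have developers or people analytics specialists on your team, here’s a rough idea of what that toolkit looks like in practice. This is a minimal sketch using the open-source aif360 Python package on a made-up hiring dataset – the column names, group encodings and numbers are hypothetical stand-ins, not a drop-in implementation.

```python
# A minimal sketch of checking a hiring dataset for bias with IBM's
# open-source AI Fairness 360 (aif360) package. The DataFrame, the
# 'gender' and 'hired' columns, and the 0/1 group encodings are
# hypothetical stand-ins for your own data.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical screening outcomes: 1 = advanced to interview, 0 = rejected
df = pd.DataFrame({
    "gender": [1, 1, 0, 0, 1, 0, 1, 0],   # 1 = privileged group, 0 = unprivileged
    "hired":  [1, 1, 0, 1, 1, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"gender": 0}],
    privileged_groups=[{"gender": 1}],
)

# Disparate impact: ratio of favorable-outcome rates (1.0 means parity).
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference: gap in favorable-outcome rates (0.0 means parity).
print("Statistical parity difference:", metric.statistical_parity_difference())
```

Metrics like these only surface potential problems; the same toolkit also ships mitigation algorithms, such as reweighing training data, for the teams that build and maintain the models.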

That’s more for developers who want to maintain high ethical standards in their AI work. However, it’s a powerful example of a leading brand that values ethical development of groundbreaking technology such as artificial intelligence.

2. Accenture: Responsible AI Framework

Like IBM, leading professional services company Accenture developed a Responsible AI Framework to address the ethical challenges that AI presents.

This framework outlines six core principles, including transparency, accountability and fairness, to guide the development and deployment of AI systems.

Accenture also established a dedicated AI Ethics Committee, pulling together experts from various disciplines to ensure that their AI solutions adhere to these principles and promote responsible AI use across the organization.

3. Dr. Timnit Gebru: Black in AI

Widely respected AI researcher and ethicist Dr. Timnit Gebru has led the charge in advocating for responsible AI use for years. Her focus is on mitigating bias and ensuring fairness in AI systems – a growing concern with the surge of ChatGPT usage across all disciplines.

As part of that focus on AI bias mitigation, Dr. Gebru co-founded Black in AI, which aims to increase the representation of people of color in AI research and development. She continues to play a leading role through her research and advocacy.

Actionable tips for HR pros in ethical AI

Now, how about yourself? If you’re working in human resources, you’re likely already incorporating ChatGPT and other AI tools into your workflow through the automated creation of job descriptions, interview questions and other things.

But there is a risk of relying too much on AI to steer processes, as Amazon learned the hard way in late 2018.

Also, diversity, equity, inclusion and belonging are likely major priorities for you. So how do you combine the undeniable benefits of AI-driven optimization with maintaining fairness, decency and ethics in your work?

You can start right now with these six focal areas:

1. Prioritize fairness and transparency

It’s likely you have already emphasized the importance of fairness and transparency throughout your organization in terms of communication, opportunity and collaboration. You’ll need to apply that same thinking to your AI systems. Here’s how:

Establish clear evaluation criteria

Develop a well-defined set of criteria for assessing the fairness and transparency of AI systems. This should include considerations such as data quality, explainability and the impact of the AI system on different employee groups.

Vet AI vendors thoroughly

When selecting AI solutions, carefully evaluate vendors based on their commitment to ethical AI principles. Inquire about their efforts to minimize bias, promote transparency and ensure data privacy.

Implement explainable AI

Choose AI systems that provide explanations for their recommendations, allowing you and your team to understand the reasoning behind AI-generated decisions.

Communicate AI usage with employees

Inform employees about the use of AI within the organization and the specific areas where it is being applied. Clearly communicate the goals and benefits of AI, addressing any concerns or misconceptions they may have.

Conduct bias and fairness assessments

Regularly assess your AI systems for potential biases and fairness issues. This can involve analyzing the training data, validating AI-generated decisions, and monitoring AI system performance across different employee groups.
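To make that concrete, here’s a minimal, hypothetical sketch of one such assessment: comparing the rates at which different candidate or employee groups receive a favorable AI-generated outcome, and flagging gaps against the “four-fifths” rule of thumb often used in US hiring compliance. The column names and data are invented for illustration – this is a starting point, not a compliance tool.

```python
# A minimal sketch of a group-level fairness check on AI-screened candidates.
# The 'group' and 'advanced' columns are hypothetical; swap in your own data
# and the demographic dimensions you actually track.
import pandas as pd

screened = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "advanced": [1,    1,   0,   1,   0,   0,   1,   1],  # 1 = passed the AI screen
})

# Selection rate per group: share of each group that passed the AI screen.
rates = screened.groupby("group")["advanced"].mean()

# Adverse impact ratio: each group's rate vs. the highest-rated group.
# The 'four-fifths' rule of thumb flags ratios below 0.8 for closer review.
impact_ratio = rates / rates.max()

print(rates)
print(impact_ratio)
print("Flag for review:", impact_ratio[impact_ratio < 0.8].index.tolist())
```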

Establish an AI ethics committee

Create a cross-functional team of stakeholders responsible for overseeing the ethical use of AI in your business. This committee should monitor AI implementation, enforce ethical guidelines, and address any ethical concerns that may arise. It can include representatives from HR, IT, legal and other relevant departments – that diversity of perspectives is crucial here.

Provide training on AI ethics

Offer training and resources for HR professionals and other employees involved in AI implementation. This can help ensure that your team understands the importance of ethical AI use and is equipped to make informed decisions.

There’s no reason fairness and transparency should exist solely within human-driven processes. Your AI tools can absolutely be fair and transparent as well, but as the manager of those tools, it’s your job to ensure that your technologies don’t fail in this area.

2. Diversify AI development teams

The infamous ‘racist soap dispenser’ is a perfect example of the risks of non-diverse teams when designing products – the people who design a product are also its first testers, so their blind spots can become the product’s blind spots.

That thinking applies to AI development teams too. If you’re in the software development field, you want your teams to be diverse so as to avoid design faux pas like the one above. Here’s how you can ensure that diversity thrives where you are:

Expand talent sourcing

Broaden your search for AI talent by exploring diverse channels, such as niche job boards, online communities and professional networks that cater to or specialize in underrepresented groups. Or, if you represent one of those networks or communities, consider building your own branded job board.


Review job descriptions

Ensure that your job postings are inclusive and free of gendered language or other biases that might discourage diverse candidates from applying.

Implement blind recruitment

Utilize blind recruitment techniques, such as anonymizing resumes, to reduce unconscious bias in the hiring process.
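To make “anonymizing resumes” a little more concrete, here’s a toy sketch that masks identifying fields in a structured candidate record before reviewers see it. The field names are hypothetical, and in practice anonymization of free-text resumes is usually handled by your ATS or a dedicated tool rather than a script like this.

```python
# A toy sketch of blind-recruitment-style redaction on a structured candidate
# record. Field names are hypothetical; real-world anonymization of free-text
# resumes is usually handled by your ATS or a dedicated tool.
IDENTIFYING_FIELDS = {"name", "email", "phone", "photo_url", "address", "date_of_birth"}

def blind_copy(candidate: dict) -> dict:
    """Return a copy of the record with identifying fields masked."""
    return {
        key: ("[REDACTED]" if key in IDENTIFYING_FIELDS else value)
        for key, value in candidate.items()
    }

candidate = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "years_experience": 6,
    "skills": ["Python", "people analytics"],
}

print(blind_copy(candidate))
# {'name': '[REDACTED]', 'email': '[REDACTED]', 'years_experience': 6,
#  'skills': ['Python', 'people analytics']}
```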

Foster an inclusive work environment

Create a workplace culture that values and promotes diversity, equity, and inclusion. This will not only attract diverse talent but also support their retention and career development.

Offer training and development opportunities

Provide training, mentorship and career advancement opportunities to underrepresented employees, helping them grow professionally and contribute to AI development.

Set diversity goals

Establish clear DEI objectives for AI development teams, and track their progress over time. This can help ensure that your organization remains committed to fostering diverse AI development teams and continues to focus on this area going forward.

Diversity may feel like a richly covered topic for many teams, but there’s a reason for that – it’s not just about the teams. It’s about the results of their work: a diverse team builds more inclusive software, because unique experiences and perspectives are pulled together into a single product.

3. Regularly audit AI systems

We touched on the importance of setting goals in the last section. You want to be sure those goals are met regularly – and to do that, you need a system in place that properly tracks and audits your AI systems so you can jump on any biased or otherwise unethical outputs your tools may churn out.

Regular audits not only ensure that you’re on top of anything that may happen – they also give you an opportunity to refine your AI implementation strategy to make sure your tools align with your business’ mission, vision and especially values.

Follow these guidelines for a failsafe audit process:

Establish a schedule

Create a regular schedule for auditing your AI systems, based on factors such as system complexity, usage frequency and potential impact on employees.

Define performance metrics

Identify relevant metrics to assess AI system performance, such as accuracy, fairness and explainability. This will help you tangibly evaluate and measure AI systems during audits.

Monitor AI system outputs

Keep a close eye on AI-generated decisions and recommendations, looking for any signs of bias, discrimination or other unintended consequences.

Review training data

Periodically examine the data used to train your AI systems. AI learns from real-life human data, and any skew in that data carries through into AI-generated decisions – so it’s crucial to ensure that the source material itself is diverse, accurate and free of bias.
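As a hypothetical illustration of what reviewing training data can look like, the sketch below compares the group mix in a training set against a benchmark such as your applicant pool and flags groups that fall well short of it. The labels, benchmark shares and threshold are assumptions for the example, not a standard.

```python
# A minimal sketch: compare group representation in AI training data against
# a benchmark (e.g. your applicant pool). Labels, benchmark figures and the
# 20% shortfall threshold are hypothetical assumptions for illustration.
import pandas as pd

training = pd.Series(["A", "A", "A", "A", "B", "A", "B", "A", "A", "C"])
training_share = training.value_counts(normalize=True)

# Hypothetical benchmark: group shares in the wider applicant pool.
benchmark = pd.Series({"A": 0.55, "B": 0.30, "C": 0.15})

comparison = pd.DataFrame({"training": training_share, "benchmark": benchmark}).fillna(0)
comparison["ratio"] = comparison["training"] / comparison["benchmark"]

# Flag groups whose share of the training data falls well below the benchmark.
under_represented = comparison[comparison["ratio"] < 0.8]
print(comparison)
print("Under-represented groups:", under_represented.index.tolist())
```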

Engage external auditors

Consider working with external auditors or third-party organizations to conduct unbiased evaluations of your AI systems. The additional layer of scrutiny that this expertise provides can be invaluable.

Implement a feedback loop

Encourage employees to share their experiences and concerns about AI system usage. This feedback is indispensable in identifying potential issues and areas for improvement.

Update and refine AI systems

Based on your audit findings, make necessary adjustments to your AI systems, addressing any biases or performance issues uncovered during the audit process.

Proper oversight doesn’t happen on its own. To ensure that your AI tools and processes run free of bias, implement the above tips so that your company can reap the full benefits of AI in its workflows while mitigating – and even eliminating – the risks stemming from bias and prejudice.

4. Develop ethical AI policies

Now, you need clear ethical guidelines and policies for your colleagues to follow when they use artificial intelligence in their day-to-day work. Rulebooks mean structure, and structure is crucial to success. Not only do you need to establish these – you also must enforce them, with clear information on potential risks, ethical considerations and especially compliance requirements to ensure that AI is implemented responsibly.

Related: Our AI tool policy template can come in handy here.

Get started with these action items:

Conduct a risk assessment

Evaluate the potential ethical, legal and social risks associated with AI implementation in your organization. Consider factors such as data privacy, algorithmic fairness, and employee impact.

Consult relevant guidelines and frameworks

Refer to industry-specific guidelines, frameworks and best practices for ethical AI. You can check with professional organizations and even government agencies for examples of such guidelines.

Involve stakeholders

In line with the AI ethics committee recommendation above, you can collaborate with multiple stakeholders and leaders from various departments, including HR, IT, legal and executive teams, to develop comprehensive AI policies that address diverse perspectives and concerns. This can include policies unique to specific teams and functions.

Define AI usage boundaries

Clearly outline the permissible and prohibited uses of AI within your organization. Take into account different ethical considerations and regulatory requirements as you do so.

Incorporate transparency and accountability

Ensure that your AI policies highlight the importance of transparency in AI processes and decision-making – and establish clear lines of accountability for AI system performance and outcomes.

Communicate policies organization-wide

Be uniform and thorough in your communications. Share your ethical AI policies with all employees. Provide training or resources to ensure that everyone understands that they have a role in upholding these guidelines – and that they know what they must do to maintain standards.

Regularly review and update policies

Again, tracking and auditing are a must. Review your AI policies consistently to ensure that they remain up to date. Adjust accordingly to stay in line with evolving ethical considerations, industry standards and technological advancements.

Ensuring ethical use of AI – and that the AI you use is in itself ethical and fair – will not happen in a vacuum, nor can it happen simply because you’ve advised your employees and colleagues to do so. You need to prescribe ethical AI throughout your organization, and that can only happen with a clear prescription. That’s the value of building guides and policies – not just for AI, but for any area of your business.

5. Foster collaboration

The workplace is by nature a collaborative environment. You can work this to your advantage when ensuring that ethical AI practices are consistently implemented and maintained throughout your teams.

Some tips to get you started:

Promote knowledge sharing

Encourage employees to share their expertise, experiences and insights when using AI in their workflows. This can be done via anonymous surveys and in-person workshops to foster a culture of continuous learning and improvement in the area.

Create internal communication channels

Another aspect of sharing knowledge is providing a space for employees to actively discuss AI-related topics in your organization. This can be a new chat channel, an intra-company forum, or even emails and regular meetings, giving employees multiple avenues to voice concerns, share ideas and collaborate on further AI initiatives.

Partner with AI vendors

Since you’re already auditing the AI systems being used in your company, you can also build strong relationships with AI vendors to address any ethical concerns that may arise. You can then optimize and fine-tune your systems to ensure fairness and inclusivity.

Engage with external experts

You can consult with external experts such as Dr. Timnit Gebru and other AI ethicists and industry leaders to gain insights and advice on ensuring ethical AI use and overcoming challenges.

Participate in industry events and forums

Likewise, you can learn from others in the ethical AI space (such as IBM, Accenture and more). Go to industry events, conferences and forums and actively engage in discussions. Learn from other organizations’ experiences and contribute to the shaping of best practices all around.

Again, ethical AI does not happen in a vacuum. Use the existing knowledge that’s out there to your advantage, and also contribute your own experiences. We can’t progress in isolation from one another – a culture of continuous learning through collaboration has tremendous value here.

6. Engage in industry-wide conversations

Following on from the above, your peers are likely as engaged in the overall conversation around ethical AI as you are. For example, a LinkedIn post from Caroline Fairchild explicitly expresses concerns about the greater threat AI poses to marginalized groups.

When you get involved in these conversations, be it on LinkedIn or at industry events, you can stay informed about the best practices and experiences that will shape the future of AI in HR.

Follow these tips to advocate for responsible use of artificial intelligence and contribute to shaping AI policy and regulations as an HR professional:

Raise awareness

Educate employees, management, stakeholders and peers about the importance of responsible AI use. Shed light on the potential risks, ethical considerations and best practices as part of those interactions.

Promote ethical AI champions

Encourage and support employees who demonstrate a strong commitment to ethical AI practices. You may even incentivize them with public recognition and rewards. Empower them to lead the charge as advocates and role models throughout your company.

Collaborate with industry peers

Again, collaboration is huge here. You can network with other HR professionals to share insights, experiences and actionables related to responsible AI use. Your commitment is stronger as a collective than as an individual.

Share success stories

Everyone likes a success story. Those stories are inspirational and informative and deserve celebration. Put a spotlight on moments where your company has successfully implemented AI in an ethical and responsible manner – and more so, show the results and benefits.

When people share knowledge and success stories about those triumphs and accomplishments, that’s powerful information. Equally powerful is sharing challenges with your industry peers and seeking out best practices for overcoming those challenges. That dialogue is crucial to ensuring ethical AI across the board. The reasoning behind calls for a moratorium on AI development is understandable, but deeper within them is a call for conversation and understanding. That’s the value of industry-wide conversation.

You can be part of the ethical AI conversation

The primary takeaway from all of this for you as an HR professional is this: establish a culture of continuous learning. AI is growing exponentially and will continue to do so – it’s understandable if you’re struggling to keep pace with all the new developments and information around AI.

When that technology grows and evolves, the orbiting opportunities and challenges will grow with it – and that includes the ethical use of artificial intelligence.

It is crucial for you, as an HR professional, to embrace the opportunities that AI presents while ensuring smart and equitable use of the evolving tech. You don’t want to shy away from it altogether because it does have a place in your work – but you also don’t want it to get away from you either. Striking a careful balance between harnessing the benefits of AI and mitigating potential risks is what you’re aiming to do here.

Be proactive, driven and optimistic as you do so. Look at the real-life examples above – IBM, Accenture, Dr. Gebru, Caroline Fairchild, Olivia Gambelin – they’re all directly contributing to the conversation around ethical use of AI at work and at play. You can be part of that conversation too.

