Five key considerations for using AI in legal recruitment
The Suited team recently attended the 2024 NALP Annual Education Conference in Boston, where over 1,700 people joined, the most of any AEC to date. As the largest conference in the legal careers profession, it was a golden opportunity for us to gain insights into trends impacting law firms, connect with partners and peers, and share our latest product updates and exciting results with the industry.
To top it off, we had the privilege of leading a timely discussion on the regulatory landscape for AI-powered technology and the implications for legal recruitment efforts. Moderated by our Chief Product Officer Kaelin Burns, “AI Laws for Talent Tools: Making Sense of it All” featured a panel of technology leaders including Satish Ramakrishnan, VP of Engineering at HolisticAI, Alex Urbelis, Senior Counsel at Crowell LLP, and Aaron Myers, Chief Technology Officer of Suited.
Below are a few thought-provoking learnings from the education panel.
Five top takeaways regarding AI hiring tools
1. Understanding AI is key to properly evaluating and mitigating risks
With rapid advancements in AI technology, applications of AI across various business functions are accelerating. This growth has created ambiguity around what AI is and how different communities interpret and apply it in talent acquisition.
During the panel, for example, audience members were polled to generate a word cloud of terms or phrases that come to mind when thinking of the word AI. Among responses, words like “intelligence,” “efficiency,” “the future,” and “technology” were most prominent.
To simplify the definition of AI, panelists mapped out its primary subcategories (a brief code sketch follows the list):
- Machine learning (ML) is a subset of AI focused specifically on how a system can think, perceive, and learn, drawing heavily on the domains of mathematics and computer science.
- Generative AI (GenAI) leverages machine learning to generate any number of formats/outputs (like images, text, video) in response to a prompt.
- Large language models (LLMs) are a type of generative AI that leverages machine learning to produce “accurate” text responses to a prompt.
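To make these distinctions concrete, here is a minimal illustrative sketch in Python (our own, not from the panel). The data, labels, and model choices are hypothetical, and the generative example assumes the `transformers` library is installed.

```python
# Illustrative sketch only: contrasting classical machine learning
# with generative AI. The data and model choices are hypothetical.
from sklearn.linear_model import LogisticRegression

# Classical ML: learns a mapping from input features to discrete labels.
# Toy feature vectors (e.g., two assessment scores) with pass/fail labels.
X = [[0.2, 0.4], [0.8, 0.9], [0.5, 0.1], [0.9, 0.7]]
y = [0, 1, 0, 1]
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.6, 0.8]]))  # a discrete prediction, e.g. [1]

# Generative AI / LLM: produces open-ended text in response to a prompt.
# "gpt2" is used here only because it is small and freely downloadable.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("Legal recruiting is", max_new_tokens=20)[0]["generated_text"])
```

The first model can only assign one of the labels it was trained on; the second produces novel text, which is why the two carry very different capabilities and risks.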
Without a firm understanding of these types of AI, businesses cannot fully comprehend the criteria necessary to assess a tool's capabilities or risks. The result could be deploying AI tools that produce biased or inaccurate results, or deploying nothing at all and missing the incredible opportunities these new technologies offer.
For legal recruitment specifically, talent teams must know which type of AI a given hiring tool employs so they can ask the right questions before deploying it at any stage of the recruiting process and mitigate risk accordingly.
Further, without clarity around what AI is (and what it isn’t), regulating how AI is built and used will prove problematic.
2. AI technology is not inherently biased, but the data it’s trained on can be
One of the main concerns around the use of AI today is whether it enables fair and accurate decision-making. For this reason, it’s critical to understand where bias originates so that recruiting teams know where to look for it, as well as how to identify and avoid it.
Panelists pointed out that AI on its own is not biased; however, the data that trains AI and machine learning models can be. This is a critical distinction: it signals to organizations the value of understanding the sources of, and relationships within, their training data so they can proactively control for and mitigate bias. Appropriately mitigating potential bias in AI requires a detailed, iterative process that rigorously tests both input data and output results for bias before any AI-powered tool is deployed.
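To illustrate what such output testing can look like in practice, one widely used check in U.S. hiring contexts is the four-fifths (80%) rule for adverse impact: the selection rate for any group should be at least 80% of the highest group's rate. The Python sketch below is a minimal, hypothetical example of that single check, not a complete bias audit.

```python
# Minimal, hypothetical sketch of one output-side bias check: the
# "four-fifths rule," under which the selection rate for any group
# should be at least 80% of the highest group's selection rate.
from collections import defaultdict

# (group, selected) outcomes for candidates evaluated by a tool;
# the groups and results here are invented for illustration.
outcomes = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in outcomes:
    totals[group] += 1
    selected[group] += was_selected  # bool counts as 0 or 1

rates = {g: selected[g] / totals[g] for g in totals}
highest = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / highest
    flag = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```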
3. AI should never replace human decision-making—it should only ever augment it
AI is a powerful tool for interpreting data and helping humans understand more, but it should not be left to its own devices to make final decisions in place of humans. Instead, AI should provide data that supports human decision-making.
Take, for instance, AI-powered resume screeners: a screener that automatically rejects or advances candidates based on keyword matches is a model in which AI substitutes for humans on final judgments that affect outcomes. When AI alone determines whether a candidate advances to an interview, the lack of human intervention can lead to bias.
On the other hand, AI can be effective at analyzing large amounts of data with complex relationships, helping humans better interpret information and draw informed conclusions. Aaron Myers used Suited as an example: machine learning models analyze candidate data, and the results are passed to recruiters, who weigh these data points alongside everything else in their recruiting process. In this case, AI provides a data-driven, objective view that supplements the subjective view of humans, allowing for more accurate and equitable hiring decisions.
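This division of labor can be expressed as a simple design pattern. The sketch below is a hypothetical illustration of a human-in-the-loop workflow, not Suited's actual implementation; the names, fields, and threshold are all invented.

```python
# Hypothetical human-in-the-loop pattern (not any vendor's actual code):
# the model contributes a score as one input among many, and the
# recruiter retains the final decision.
from dataclasses import dataclass

@dataclass
class CandidateReview:
    candidate_id: str
    model_score: float    # one data point among many, from an ML model
    recruiter_notes: str  # interviews, writing samples, references, etc.

def recommend(review: CandidateReview) -> str:
    """Return guidance for the recruiter, never an accept/reject decision."""
    if review.model_score >= 0.7:  # illustrative threshold
        return "strong signal: prioritize for human review"
    return "weaker signal: review alongside other materials"

# The recruiter weighs this alongside everything else they know;
# no candidate is advanced or rejected by the model alone.
review = CandidateReview("cand-001", 0.82, "strong writing sample")
print(recommend(review))
```

The key design choice is that the function returns guidance rather than a decision; advancing or rejecting a candidate remains a human judgment.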
One audience member asked how AI can help solve and control for biases inherent in humans. Ultimately, panelists stressed the shortcomings of purely technical solutions and reiterated the importance of leaning on technology experts to control for AI bias more effectively.
4. Organizations must do their due diligence before utilizing AI products
Developers of AI must make sure the models being built are transparent, robust, and free of bias, but organizations using these tools must also conduct thorough due diligence on them. In the end, the user of the technology will bear the consequences of poor outcomes. This further emphasizes the role of humans in overseeing the development and use of AI to ensure it is not being misused or influencing decision-making in negative ways.
Satish Ramakrishnan explained the role of AI governance platforms like his own company, HolisticAI, in accounting for bias, transparency, efficacy, and compliance so that businesses looking to adopt and scale AI can do so confidently.
5. Law firms should be leading on the use of AI tools in the talent space
AI technology is advancing rapidly, regulations are evolving frequently, and adoption is accelerating across industries. Since law firms will be advising their clients on the use of these technologies, they need to be at the forefront of both using and stress testing AI products to help ensure the technology advances in a responsible and ethical manner. This means law firms must adopt new talent tools and urge AI developers to meet the standards they would advise their own clients to demand.
And because legislation has not kept pace with how quickly AI is being built and optimized, there is an even greater need for the legal industry to continue learning about, testing, and investing in AI technology so that organizations can properly mitigate risks. Law firms are also uniquely positioned to evaluate the risks of using AI and to understand and inform regulations that, right now, are highly complex and fragmented.
Want to stay in the loop?
Subscribe to our email list for Suited-related updates, or visit our Newsroom.