AI and algorithms: Why the human touch is important

Students take part in a march in protest against the government's handling of this year's A-level results. Photo: Isabel Infantes/EMPICS Entertainment

The A-level results drama last month focused attention on how algorithms are used in decision-making. As the use of AI expands, Esther Langdon looks at how organisations can get the best out of AI while keeping its outcomes fair.

Artificial intelligence and algorithms have been very much in the news following the recent A-level results fiasco.

Stories of young students whose hopes were dashed simply because the “computer says no” made headlines for several days. The outsourcing of life-changing decisions to a machine, and the total authority of its (sometimes inexplicable) pronouncements, cut through. Put simply, it seemed unfair. The Prime Minister recognised the public outcry and called the algorithm “mutant”.

The A-level furore has arguably provided a very human face to the challenges presented by the greater use of artificial intelligence.

Many more of us are working remotely in response to the Covid-19 pandemic, and the world of work continues to react. Workplaces are becoming less centralised, and the way we communicate and collaborate is changing. We can expect the use of AI in the workplace to gather pace and momentum.

Expanding use

Recruitment and onboarding were seen as early candidates for the use of AI in the workplace, where algorithms can be used in the screening and sorting process.

AI is also being used to drive employee engagement, measure employee wellbeing, track productivity, assess skills gaps, manage performance and organise workflow. The list of areas where AI and people analytics can be applied in the workplace is ever expanding.

HR practitioners need to prepare for this digital age and make sure that they are best placed to harness the power of AI at work, while not shying away from its complexities.

Confidence here is key. Many of us feel that AI is something outside our own area – be it HR, employment law, employee engagement, diversity and inclusion or performance management. There can be a tendency to shy away from engaging with the impersonal face of AI.

The human touch

However, artificial intelligence is just that – artificial. It needs human input to make it work and should be a servant, not a master. To make sure AI doesn’t become the master, it is vital that HR practitioners can approach AI questions with confidence.

Part of this is simply familiarity with key concepts and terminology – “artificial intelligence”, “machine learning”, “deep learning”, “network architecture”, “structured data”, “data sets”, “algorithm”, “virtual agent”, “image recognition” and so on.

Being fluent in these terms, and in the ideas, tools and processes behind them, makes it easier for practitioners to challenge the application of AI and to frame the questions that need to be asked in an HR context.

This ability to scrutinise and challenge is an inherent part of using AI ethically and appropriately. As the recent A-level issues showed, impersonal AI decisions can be scrutinised and found wanting, and the government faced criticism and threats of legal action as a response.

But what would an employer’s scrutiny of a proposed application of AI before using it look like?

Accountability and transparency: As above, the starting point for any business using AI is to understand what it is asking of the AI and precisely how this will be achieved. An employer should be able to answer a candidate’s questions about its recruitment process without having to refer them to a third-party vendor or the supplier of an algorithm.

Addressing bias: The concept of a biased algorithm is now fairly well known – “bias in, bias out”, or even “rubbish in, rubbish out”. Businesses must, however, move beyond a theoretical understanding of these risks to taking ownership of them and identifying the steps needed to mitigate them. Mitigation is the key word here – the risks may not be capable of elimination, in a machine any more than in a human, but efforts must be made.

Employers must work with their vendors to understand how any algorithm works and “stress test” it for bias. Is the algorithm tainted with discrimination? Does it simply repeat a historic “good” outcome which is itself biased? What is its statistical accuracy? How is the data sourced? Are the balance, quality and breadth of the data good enough? Does it contain examples of each protected characteristic? How wide is the pool of people assessing the data? Are there contexts in which the algorithm will not work? And so on.
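
To make “stress testing” concrete, here is a minimal sketch of one such check: comparing the algorithm’s selection rates across groups against the “four-fifths” rule of thumb used in adverse-impact analysis. The function names and sample data are illustrative assumptions, not a prescribed method, and passing this check is not proof that an algorithm is free of bias.

    # A minimal sketch of one bias "stress test": comparing selection rates
    # across groups using the four-fifths (80%) rule of thumb from
    # adverse-impact analysis. Data, names and threshold are illustrative.
    from collections import defaultdict

    def selection_rates(outcomes):
        """outcomes: list of (group, selected) pairs; selected is True/False."""
        totals, chosen = defaultdict(int), defaultdict(int)
        for group, was_selected in outcomes:
            totals[group] += 1
            if was_selected:
                chosen[group] += 1
        return {g: chosen[g] / totals[g] for g in totals}

    def four_fifths_check(outcomes, threshold=0.8):
        """Flag any group whose selection rate falls below `threshold` times
        the highest group's rate - a first-pass screen only, not proof of
        discrimination or of its absence."""
        rates = selection_rates(outcomes)
        best = max(rates.values())
        return {g: (r, r >= threshold * best) for g, r in rates.items()}

    # Illustrative data: (group label, whether the algorithm shortlisted them)
    sample = [("A", True), ("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False), ("B", False)]
    for group, (rate, ok) in four_fifths_check(sample).items():
        print(f"group {group}: selection rate {rate:.0%}, passes 80% rule: {ok}")

A check like this is deliberately crude: it will not detect subtler problems such as proxy variables, but it shows the kind of question employers should be able to put to a vendor and see answered with evidence.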

Keeping AI in its place: For many of us, the idea of a wholly automated HR process – recruitment for example – would fill us with doubt, serving as a reminder that AI should be kept in its place. It is a powerful tool but one which should complement and be subject to human decision making, rather than replace it.
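
As a purely hypothetical illustration of AI complementing rather than replacing human judgment, the sketch below routes every algorithmic recommendation through a named human reviewer before it becomes final; the class and field names are assumptions for the example only, not a real system.

    # A hypothetical sketch of keeping a human "in the loop": the algorithm
    # proposes, but no outcome is final until a named reviewer confirms or
    # overturns it. All names and structure are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ScreeningDecision:
        candidate_id: str
        algorithm_recommendation: str      # e.g. "shortlist" or "reject"
        reviewer: Optional[str] = None     # filled in by a human
        final_outcome: Optional[str] = None

    def human_review(decision, reviewer, agree, override_outcome=None):
        """Record the human sign-off; the reviewer may override the machine."""
        decision.reviewer = reviewer
        decision.final_outcome = (decision.algorithm_recommendation
                                  if agree else override_outcome)
        return decision

    proposal = ScreeningDecision("cand-042", algorithm_recommendation="reject")
    # Here the reviewer disagrees with the algorithm and overrides it:
    final = human_review(proposal, reviewer="hr-reviewer-1", agree=False,
                         override_outcome="shortlist")
    print(final)

The design point is simply that the machine’s output is a recommendation, and accountability for the outcome stays with an identifiable person.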

Keeping AI under constant review: Review and governance of AI at work needs to be continuous and meaningful.

Data protection: Increased use of AI in the workplace also brings with it complex considerations of data protection and data security, including the basic data protection principles of lawfulness, fairness and transparency.

Often, personal data, and special category personal data, will be processed, and there are particular considerations in relation to automated decision-making. Legal obligations, and the need to foster trust and engagement with the positive uses of AI, mean that these issues must be at the front of the agenda.

Getting the most out of an AI-enhanced workplace is an exciting opportunity, but one that requires engagement with the issues and possible obstacles.
