In my time working at British Airways in the 1980s, I interviewed hundreds of applicants for IT roles.
We received many applications, and a frequent topic of conversation among interviewers was how to make a first pass that would cut down the time spent on the process. This came down to finding an appropriate algorithm for assessing applications: at the time the possibilities were crude, as we didn’t have the volume of data or the sophisticated systems available now.
Half-humorously, I suggested two possibilities. One was to reject anyone with a doctorate, as candidates with doctorates were far too academic for a practical role. The second was to reject anyone with a computer science degree, as we had to spend a lot of time getting them to unlearn a non-commercial approach to programming. (Neither of these was adopted.)
Dangerous assumptions
Since then, things have changed. The amount of data available has grown exponentially, and whether we are assessing an individual’s performance at work, their suitability for a job or their application for a bank loan, it can be tempting to delegate the decision to complex algorithms, often using AI, with no transparency as to how the decisions are being made. But this is a risky route to take. If we don’t know how an algorithm assesses suitability, it is entirely possible that dangerous assumptions are being made.
In my suggested algorithms, I challenged an assumption in conventional recruiting – that a computer science degree was the best preparation for being a programmer in industry. Other assumptions can arise when selecting a metric on which to base a choice. A classic example is the use of credit scores when recruiting. Credit scores are not even great at what they are supposed to do – the algorithms are relatively simplistic, and mostly opaque. More to the point, there is no evidence that they provide a good indicator of how well someone will do a job.
In IT history we have a very good example of assumptions coming back to bite us: the millennium bug. Between the 1960s and the 1980s, when memory and storage were scarce resources, years were often stored as just two digits – so, for example, ‘24’ would be assumed to refer to 1924. But if a program using dates in this format tried to calculate the age of someone born in (19)84 by subtracting that from (20)24, the result would be a negative age – and, quite possibly, a program crash.
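To make the arithmetic concrete, here is a minimal sketch in Python – purely illustrative, with names and numbers of my own invention rather than from any real system – of what goes wrong once the dates straddle 2000:

    # Illustrative only: ages computed from years stored as two digits,
    # as many 1960s-80s systems did to save memory and storage.
    def age_from_two_digit_years(birth_yy, current_yy):
        # Both years are two digits: 84 means 1984, 24 means 2024.
        return current_yy - birth_yy

    print(age_from_two_digit_years(24, 99))  # 75: fine while both dates are pre-2000
    print(age_from_two_digit_years(84, 24))  # -60: someone born in 1984, 'aged' against 2024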
The system designers thought that 2000 was too far into the future to worry about – but this assumption came back to bite them.

Another example of bad assumptions occurred when an attempt was made to improve teaching in Washington DC schools in the early 2000s. Quality of teaching is hard to measure, so it was assumed that the proxy metric of improvement in student performance over the year would do. This is far easier to monitor, but it fails to take into account variance in student populations from year to year. One teacher scored 6 percent one year, then 96 percent the next. His skills had not changed; instead, the scores reflected how capable each cohort already was at the start of the year.
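A hypothetical, greatly simplified sketch of that kind of proxy metric shows why: if teachers are scored on their class’s average gain over the year, the score is driven as much by where the cohort starts as by the teaching.

    # Hypothetical numbers, purely to illustrate the flaw in the proxy metric:
    # score a teacher on the class's average gain over the school year.
    def average_gain(start_scores, end_scores):
        gains = [end - start for start, end in zip(start_scores, end_scores)]
        return sum(gains) / len(gains)

    # Same teacher, same teaching - only the incoming cohort differs.
    print(average_gain([40, 45, 50], [60, 66, 70]))  # about 20 points of 'improvement'
    print(average_gain([85, 90, 92], [88, 92, 93]))  # about 2 points of 'improvement'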
Perpetuating stereotypes
Because AI systems and algorithms don’t understand what they are trying to do, they can easily base decisions on flawed assumptions. Left to their own devices, many systems can perpetuate racial and sex-based stereotypes. For example, the fact that the majority of successful CEOs are men (simply because fewer women are CEOs) could easily result in a woman receiving a lower assessment score when applying for a CEO role, purely for being female.
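A deliberately crude sketch – made-up data, no real system – of how this can happen: a scorer that simply rates candidates on how closely they resemble past successful CEOs ends up penalising women for the historical imbalance alone.

    # Hypothetical toy scorer: rate a candidate by the fraction of past
    # successful CEOs who share their gender. Because far fewer women have
    # held the role, a female candidate scores lower for that reason alone.
    successful_ceos = ["male"] * 7 + ["female"]

    def naive_score(candidate_gender):
        return successful_ceos.count(candidate_gender) / len(successful_ceos)

    print(naive_score("male"))    # 0.875
    print(naive_score("female"))  # 0.125 - marked down simply for being female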
This may sound as if algorithms should never be used in decision making. That’s not the case. They are excellent when, for example, there are far too many options to choose between practically. But those responsible for such algorithms need to ensure that their decision making is transparent. It takes more effort to produce an algorithm that can explain its decisions, but we can’t afford to take the lazy route. Unless we know how a choice has been made – and unless it can be explained in a comprehensible way – decisions that can have huge impacts on people’s lives can easily go wrong.