Predictive Modeling

The word algorithm might bring to mind computer code, 1s and 0s, but in practice algorithms are formulas that make decisions for you, big and small, every day.

Amazon uses algorithms to suggest new products you may like whenever you make a purchase. You may notice that after you buy dog food, Amazon will suggest collars and chew toys. Algorithms use collected data to make decisions; however, how that data is used to make those determinations is often kept under tight lock and key by the companies that write them.

This is important because many companies are using algorithms to make far weightier decisions about people than which products to suggest. For example, job-finding services use algorithms to determine which applicants to suggest for high-paying jobs. According to Molly Schuetz, writing for Bloomberg Businessweek, “Researchers have documented that they’re less likely to refer opportunities for high-paying positions to women and people of color because those job-seekers don’t match the typical profile of people in those jobs — mostly white men.”

This type of algorithm, called predictive modeling, makes inferences based on patterns in past data. That means its predictions can be biased if the data is used incorrectly or misrepresents a group of people. “A study from the University of California, Berkeley found that algorithmic lending systems were 40% less discriminatory than face-to-face interactions but still tended to charge higher interest rates to Latino and African-American borrowers. One reason: their profiles suggested they didn’t shop as much as other people,” Schuetz wrote.
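To make that mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data and hypothetical feature names, of how a model trained on biased historical decisions reproduces the bias. It does not reflect any real lender’s system; it only illustrates the pattern Schuetz describes.

```python
"""Minimal sketch: a model trained on biased historical decisions
replicates the bias. All data and feature names are synthetic."""
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "history": past approvals penalized group 1 directly,
# independent of the legitimate feature (shopping activity).
shopping = rng.normal(size=n)
group = rng.integers(0, 2, n)
past_approved = (shopping - group + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train on the historical decisions -- the only "ground truth" available.
model = LogisticRegression().fit(np.column_stack([shopping, group]), past_approved)

# Two applicants identical in every respect except group membership:
applicants = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(applicants)[:, 1])  # group 1 scores markedly lower
```

Because the only record of “correct” outcomes the model ever sees is the biased historical decisions, it faithfully learns the disparity: two applicants with identical behavior but different group labels receive very different scores.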

Even more troubling, some police forces have employed predictive policing, using algorithms to forecast which areas are more likely to see violent crime. However, because the algorithms were trained on historical data about past police engagement, they simply sent officers back to the same areas. In another example, “Police in Durham, England used data from credit-scoring agency Experian, including income levels and purchasing patterns, to predict recidivism rates for people who had been arrested. The results suggested, inaccurately, that people from socio-economically disadvantaged backgrounds were more likely to commit further crimes,” Schuetz wrote.
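The feedback loop behind predictive policing is easy to simulate. Below is a toy Python sketch, with all numbers invented rather than drawn from any real deployment: two areas have identical underlying crime rates, but one starts with more recorded incidents because it was patrolled more heavily in the past. Since patrols are allocated by past records, and officers can only record crime where they are sent, the initial disparity never corrects itself.

```python
# Toy simulation of a predictive-policing feedback loop.
# All numbers are invented for illustration only.
true_crime_rate = [1.0, 1.0]   # two areas with identical underlying rates
recorded = [30.0, 10.0]        # area 0 was historically over-patrolled

for year in range(5):
    total = sum(recorded)
    patrols = [r / total for r in recorded]   # allocate patrols by past records
    # Recorded incidents scale with patrol presence, not with true crime:
    # officers can only record crimes in the areas they are sent to.
    recorded = [recorded[i] + 100 * patrols[i] * true_crime_rate[i]
                for i in range(2)]
    print(f"year {year}: patrol share = {[round(p, 2) for p in patrols]}")
```

Even though the two areas are identical, the model’s own past decisions keep 75% of patrols in area 0 indefinitely, which is exactly the “sent the police back to the same areas” effect described above.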

A Bloomberg article notes that the U.S. Congress is reviewing a bill called the Algorithmic Accountability Act of 2019, which would force companies to test their algorithms for bias. In the U.K., a government-led group is working on a report that is expected to promote a code of ethics for algorithmic integrity. The good news is that attention is now turning to how these algorithms are used and to some of their unintended consequences.