But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.
Yaël Eisenstat is a former CIA officer, National Security Advisor to Vice President Biden, and CSR leader at ExxonMobil. She was the Elections Integrity Operations Head at Facebook from June to November 2018.
Warnings that AI and machine learning systems are being trained on “bad data” abound. The oft-touted solution is to ensure that humans train the systems with unbiased data, meaning that humans need to avoid bias themselves. But that would require tech companies to train their engineers and data scientists to understand cognitive bias, as well as how to “combat” it. Has anyone stopped to ask whether the humans who feed the machines really understand what bias means?
Companies such as Facebook—my former employer—Google, and Twitter have repeatedly come under attack for a variety of bias-laden algorithms. In response to these legitimate fears, their leaders have vowed to conduct internal audits and asserted that they will combat this exponential threat. But humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.
In my six months at Facebook, where I was hired to be the Head of Global Elections Integrity Ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.
Over more than a decade working as a CIA officer, I went through months of training and routine re-training on structured methods for checking assumptions and understanding cognitive biases. It is one of the most important skills for an intelligence officer to develop. Analysts and operatives must hone the ability to test assumptions and do the uncomfortable and often time-consuming work of rigorously evaluating their own biases when analyzing events. They must also examine the biases of those providing information to collectors: assets, foreign governments, media, adversaries.
This kind of training has traditionally been reserved for those in fields requiring critical analytic thinking and, to the best of my knowledge and experience, is less common in technical fields. While tech companies often have mandatory “managing bias” training to help with diversity and inclusion issues, I did not see any such training in the field of cognitive bias and decision making, particularly as it relates to how products and processes are built and secured.