Allison Koenecke, who received her PhD from Stanford’s Institute for Computational and Mathematical Engineering (ICME), describes how her experiences in academia and industry shaped her decision to return to academia. Currently a postdoc at Microsoft Research in the Machine Learning and Statistics group, she starts as an Assistant Professor of Information Science at Cornell University next year. Her research interests lie at the intersection of economics and computer science, with projects focusing on fairness in algorithmic systems and causal inference in public health.
Allison says that in her career so far, she has always tried to keep as many doors open as possible, but recognized that at some point you have to start closing doors and specialize. After getting her bachelor's degree in mathematics from MIT, she worked in economic consulting for a few years and realized she wanted to do something with more social benefit. While working in industry and during summer internships, she kept in touch with professors and kept up with her research so that the option of returning to school would stay open.
One of the main reasons she chose to stay in academia was that industry and government did not offer what she was looking for. For example, if you stay in industry long-term and your research critiques big tech companies, you may run into roadblocks or not be seen as the neutral third-party observer you would be in academia. And at a government think tank, your work wouldn't necessarily have as much reach as in academia. But even more than that, a lot of the reason she stayed in academia was the people.
Allison's research is interdisciplinary and falls into two categories. The first is fairness in online and algorithmic services, such as speech-to-text or online ads, examining racial disparities in those services. The second is causal inference, usually applied to areas like public health. Most of her thesis focuses on fairness in the services we use every day.
One of her research projects examines Google ads used to enroll people in food stamps, and how to make decisions about fairness when it costs more to show those ads to Spanish speakers than to English speakers. She is also doing fairness research on racial disparities in speech-to-text systems developed by large tech companies, to ensure those systems are usable for African American populations who may not be able to use them simply because they speak a variety of English other than Standard American English. She says you need people thinking about fairness problems at all steps of the pipeline before you build a product that might harm certain groups. She is hoping to bring awareness to different blind spots to make sure technology actually works for everyone.
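Disparity audits of this kind typically compare a system's word error rate (WER) across speaker groups. As a rough illustration of the idea, here is a minimal sketch: the `word_error_rate` and `group_wer_gap` functions and all transcripts below are hypothetical, not taken from Allison's work or any real dataset.

```python
# Sketch of a per-group word-error-rate (WER) comparison, the kind of
# disparity metric used in speech-to-text fairness audits.
# All names and transcripts here are illustrative, not real data.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def group_wer_gap(samples):
    """Average WER per group, plus the max-min gap across groups."""
    per_group = {}
    for group, ref, hyp in samples:
        per_group.setdefault(group, []).append(word_error_rate(ref, hyp))
    avg = {g: sum(v) / len(v) for g, v in per_group.items()}
    gap = max(avg.values()) - min(avg.values())
    return avg, gap

# Toy illustration with made-up transcripts:
samples = [
    ("group_a", "the weather is nice today", "the weather is nice today"),
    ("group_a", "call me back tomorrow", "call me back tomorrow"),
    ("group_b", "the weather is nice today", "the weather nice today"),
    ("group_b", "call me back tomorrow", "call me bat tomorrow"),
]
avg, gap = group_wer_gap(samples)
```

A nonzero gap here signals that the transcriber performs systematically worse for one group, which is exactly the kind of pipeline-level check she argues should happen before a product ships.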