Areas of Expertise: The intersection of technology and society, youth social media use, the relationship between contemporary social inequities and technology, privacy, digital backchannels, and social visualization design.
Bio: Danah Boyd is a Principal Researcher at Microsoft Research and the founder of Data & Society. She is also a Visiting Professor at New York University's Interactive Telecommunications Program. An academic and scholar, she researches the intersection of technology and society.
For over a decade, Boyd's research focused on how young people use social media as part of their everyday practices. She wrote It's Complicated: The Social Lives of Networked Teens (2014) to document her findings.
Boyd also co-authored two books, Hanging Out, Messing Around, and Geeking Out: Kids Living and Learning with New Media (2009) and Participatory Culture in a Networked Era (2015), highlighting different aspects of that work.
More recently, Boyd has turned her focus to understanding how contemporary social inequities relate to technology and society more generally. She is collaborating with an amazing network of researchers working on topics like media manipulation, the future of work, fairness and accountability in machine learning, combating bias in data, and the cultural dynamics surrounding artificial intelligence. These questions are core to the mandate of Data & Society, a research institute that Boyd founded in 2013.
Areas of Expertise: Free speech, global privacy, malware, and nation-state spyware
Bio: Eva Galperin is the Director of Cybersecurity at the Electronic Frontier Foundation and a Technical Advisor at the Freedom of the Press Foundation, Callisto, and the Center for Long-Term Cybersecurity.
Eva is an advocate for civil liberties, focusing on women’s safety. She’s successfully lobbied for companies to flag domestic abuse spyware and works to protect vulnerable groups from online surveillance.
Areas of Expertise: Cybersecurity, Hacking, Infosec, Malware, Programming
Bio: Marcus Hutchins is a computer security researcher for cybersecurity firm Kryptos Logic.
I became aware of Marcus Hutchins through his wild WIRED profile. He is known for temporarily stopping the WannaCry ransomware attack and then getting arrested for hacking. A former NSA hacker described Marcus as a "reversing savant." He tweets and blogs about information security.
Areas of Expertise: AI
Bio: Stuart J. Russell is a Professor of Computer Science at the University of California, Berkeley and Adjunct Professor of Neurological Surgery at the University of California, San Francisco. He founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley.
Russell co-authored Artificial Intelligence: A Modern Approach, which is used in more than 1,400 universities in 128 countries. He also wrote Human Compatible: Artificial Intelligence and the Problem of Control. Stuart is particularly skilled at making AI ethics issues understandable to us all.
Areas of Expertise: Government transparency, journalism, prisons
Bio: Kevin Gosztola is Managing Editor of Shadowproof and co-hosts a weekly podcast called Unauthorized Disclosure.
There is no accountability without transparency, and when it comes to the US federal government, that transparency is under attack. From state secrets to FISA courts to prosecutions of whistleblowers under the Espionage Act, keeping government accountable to the people has never been harder or more dangerous. Shadowproof is a great source for keeping up with the war on transparency.
Areas of Expertise: Social implications of data systems, machine learning and artificial intelligence.
Bio: Kate Crawford is a leading researcher and professor who has spent the last decade studying the social implications of data systems, machine learning and artificial intelligence. She is a Senior Principal Researcher at MSR-NYC, the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris, and the Miegunyah Distinguished Visiting Fellow at the University of Melbourne. Kate is the co-founder of the AI Now Institute at New York University, the world's first university institute dedicated to researching the social implications of artificial intelligence and related technologies.
In 2018, Crawford partnered with data viz guru Vladan Joler to create Anatomy of an AI System, a research paper and map that forensically analyzes the often exploitative extraction of natural resources, labor, and data to power a single AI system: the Amazon Echo. The project's 23-by-16.5-foot map of this system has been featured in exhibitions at institutions like the Victoria & Albert Museum, Ars Electronica, and the Milan Triennale. Crawford has presented the project to policymakers at The Hague and the European Union, and has met with world leaders from France, Germany, Spain, and Argentina to discuss her research. She has a forthcoming book on AI and politics that will be published in 2021.
Areas of Expertise: AI, racial and gender bias, facial recognition technology
Bio: Joy Buolamwini is a "poet of code" who uses art and research to illuminate the social implications of artificial intelligence. She founded the Algorithmic Justice League to create a world with more ethical and inclusive technology. Her featured TED Talk on algorithmic bias has over 1 million views. Her MIT thesis methodology uncovered significant racial and gender bias in AI services from companies like Microsoft, IBM, and Amazon. In late 2018, in partnership with the Georgetown Law Center on Privacy and Technology, Joy launched the Safe Face Pledge, the first agreement of its kind to prohibit the lethal application of facial analysis and recognition technology.
Joy's research has been covered in over 40 countries, and as a renowned international speaker she has championed the need for algorithmic justice at the World Economic Forum and the United Nations. She serves on the Global Tech Panel convened by the Vice-President of the European Commission to advise world leaders and technology executives on ways to reduce the harms of AI.
Areas of Expertise: Artificial intelligence (AI), machine learning, deep learning, computer vision and cognitive neuroscience.
Bio: Fei-Fei Li is a Professor of Computer Science at Stanford University, a Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence, and a Co-Director of the Stanford Vision and Learning Lab. Li previously served as Vice President at Google and Chief Scientist of AI/ML at Google Cloud.
Dr. Li has published more than 200 scientific articles in top-tier journals and conferences. She is the founder of ImageNet and the ImageNet Challenge, a critical large-scale dataset and benchmarking effort that has contributed to the latest developments in deep learning and AI. Li is also a leading voice advocating for diversity in STEM and AI. She is co-founder and chairperson of AI4ALL, a national non-profit aimed at increasing inclusion and diversity in AI education.
Areas of Expertise: Copyright, monopoly, privacy.
Bio: Cory Doctorow is a blogger, journalist, and science fiction author. He writes at Pluralistic: Daily links from Cory Doctorow, Boing Boing, and other outlets.
Cory Doctorow is a clear and concise writer. He makes nerdy, science-y posts engaging and understandable. He also walks the talk: as a proponent of liberalizing copyright laws, he publishes his books under a Creative Commons license, and as a privacy proponent, he includes no ads, tracking, or data collection on his blog or newsletter.
Areas of Expertise: Responsible innovation at scale with new & emerging technologies like Artificial Intelligence (AI).
Bio: Mia Shah-Dand is the CEO of Lighthouse3, a research and advisory firm based in Oakland, California, where she advises large organizations on responsible innovation at scale with new and emerging technologies like Artificial Intelligence (AI). She brings together diverse groups of stakeholders to build human-centric programs at the intersection of data, technology, and governance. Mia is also the founder of the Women in AI Ethics initiative and creator of the 100 Brilliant Women in AI Ethics list.
Areas of Expertise: Cybersecurity, entrepreneurship, social media, "digital democracy"
Bio: Alex Stamos is a Greek American computer scientist and adjunct professor at Stanford University's Center for International Security and Cooperation. He is the former chief security officer at Facebook.
In that role, Stamos led a team of engineers, researchers, investigators, and analysts charged with understanding and mitigating information security risks. Throughout his career, Alex has worked toward making security a more representative field and has highlighted the work of diverse technologists as an organizer of the Trustworthy Technology Conference and OURSA.
Areas of Expertise: Data privacy, algorithmic fairness, technological policy and government
Bio: Dr. Latanya Sweeney is the Professor of the Practice of Government and Technology at the Harvard Kennedy School. She formerly served as Chief Technologist of the U.S. Federal Trade Commission.
As Professor of the Practice of Government and Technology at the Harvard Kennedy School and in the Harvard Faculty of Arts and Sciences, Department of Government, Dr. Sweeney's mission is to create and use technology to assess and solve societal, political, and governance problems, and to teach others how to do the same. Her earliest concern about technology's clash with society was privacy: her work launched the field now known as data privacy, and she is the founding Director of the Data Privacy Lab at Harvard. She was also the first to demonstrate discrimination in online algorithms, work that founded the emerging field of algorithmic fairness. More recently, her research with Ji Su Yoo and Jinyan Zang was the first to demonstrate vulnerabilities in voter websites during the 2016 election.
Areas of Expertise: Social impacts of technology, privacy and surveillance, inequality, research methods and complex systems
Bio: Originally from Turkey, and formerly a computer programmer, Dr. Tufekci became interested in the social impacts of technology and began to focus on how digital and computational technology interact with social, political and cultural dynamics. She is an Associate Professor at the UNC School of Information and Library Science (SILS), a principal researcher at Carolina’s Center for Information, Technology, and Public Life (CITAP), and a faculty associate at the Harvard Berkman Klein Center for Internet and Society. She was previously an Andrew Carnegie Fellow and a fellow at the Center for Information Technology Policy at Princeton University. Dr. Tufekci is a contributing writer for The Atlantic and regularly writes columns for the New York Times, WIRED, and Scientific American.
Dr. Tufekci's research interests revolve around the intersection of technology and society. Her academic work focuses on social movements and civics, privacy and surveillance, and social interaction. She has become a go-to source for national and international media outlets looking for insights on the impact of social media and the growing influence of machine algorithms. Her book, Twitter and Tear Gas: The Power and Fragility of Networked Protest (Yale University Press, 2017), examines the dynamics, strengths, and weaknesses of 21st-century social movements.