THE ALGORITHMIC MAJORITIES PROJECT
AI is reshaping surveillance, affecting communities across the globe. This focus area investigates the impacts of surveillance technologies on communities’ privacy, autonomy, and agency. It connects local experiences and knowledge across contexts to uncover patterns of harm, and it supports resistance and alternatives to extractive surveillance systems.
Emerging technologies, including AI, frequently reproduce and intensify existing social hierarchies across race, gender, disability, class, and other intersecting identities. Algorithmic harms often arise at these intersections. This focus area investigates how algorithmic systems shape lived experiences along these axes, with particular attention to autonomy, bias, violence, access, and epistemic justice. It aims to uncover such inequities and support context-sensitive, community-led technology design and governance.
Communities across the globe are developing strategies to resist harmful AI deployments, from surveillance-resistant practices to collective advocacy efforts. This focus area highlights grassroots networks building alternatives, fostering solidarity, and strengthening long-term resilience in the face of algorithmic injustice. It also explores proposals for alternative algorithmic futures, including democratic information ecosystems, participatory auditing and benchmarking, and other community-driven approaches to reshaping technological power.