Today's society is a complex, interconnected world in which humans live intertwined with a variety of computing, sensing, and communicating devices that generate massive amounts of systematically stored data. AI systems, powered by algorithms that learn from this data, are shaping how humans interact with each other (e.g., social networks), interact with information (e.g., search engines, personalized news feeds), conduct business (e.g., financial trading, sharing-economy platforms), and learn (e.g., educational technology).
Our goal is to pose new questions and address emerging problems through an interdisciplinary effort: one that builds an understanding of all aspects of human-data-algorithm interactions and leads to tools, both algorithmic and regulatory, that synthesize knowledge from computer science and data science with the social sciences and humanities. Ultimately, these developments will enable well-informed recommendations for approaches to policy-making and governance that are adapted to this new ecosystem.
Examples of questions and directions include:
1. Understand how humans behave in the face of algorithms: Measure how human behavior changes when interacting with and through algorithms, and how these changes affect individual and group outcomes.
2. Quantify/understand how algorithms impact humans: Develop quantitative measures and models of how training data affects algorithmic outputs, and of how these algorithms, in turn, shape the options available to humans, their opinions, and their decision-making abilities (a minimal illustration of the first part appears after this list).
3. Design computational methods that empower society: Develop a theory and practice of regulating algorithms, along with tools that give citizens and policymakers alike greater control over key issues such as privacy, learning, and decision-making.
4. Devise recommendations: Suggest approaches to policy-making, lawmaking, and governance that are adapted to this new world of ubiquitous algorithmic decision-making.
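To make direction 2 concrete, the following is a minimal sketch of one way to quantify how individual training examples affect an algorithm's output: leave-one-out retraining of a simple classifier on synthetic data. The dataset, model choice, and variable names are illustrative assumptions, not part of any specific system discussed above.

```python
"""Sketch: measure how much each training point influences a model's
prediction for a single query point, via leave-one-out retraining.
Everything here (synthetic data, logistic regression) is an assumed,
simplified stand-in for a real system."""

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary-classification data (assumption, for illustration only).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
x_query = X[:1]  # a single query point whose prediction we inspect

# Baseline model trained on the full dataset.
base = LogisticRegression(max_iter=1000).fit(X, y)
base_prob = base.predict_proba(x_query)[0, 1]

# Leave-one-out influence: how much does removing training point i
# shift the predicted probability for the query point?
influences = []
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    model_i = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    influences.append(model_i.predict_proba(x_query)[0, 1] - base_prob)

# Training points with the largest absolute effect on this prediction.
top = np.argsort(np.abs(influences))[::-1][:5]
print("Most influential training points:", top)
```

Exhaustive retraining is feasible only at toy scale; at realistic scale the same question is typically approached with approximations (e.g., influence-function or data-valuation methods), which is precisely the kind of quantitative tooling direction 2 calls for.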