The Impact Lab – Google AI blog

Globalized technology has the potential to create far-reaching social impact, and having a research approach based on existing international standards of human and civil rights is an essential component to ensure responsible and ethical AI development and deployment. The Impact Lab team, part of Google’s Responsible AI team, uses a variety of interdisciplinary methodologies to provide critical and rich analysis of the potential implications of technology development. The team’s mission is to explore the socioeconomic and human rights impacts of artificial intelligence, publish fundamental research, and incubate new mitigations that will enable machine learning (ML) practitioners to advance global equity. We research and develop scalable, rigorous and evidence-based solutions using data analytics, human rights and participatory frameworks.

What makes the Impact Lab unique is its multidisciplinary approach and the team’s diversity of experience, spanning applied and academic research. Our goal is to broaden the epistemic lens of Responsible AI by centering the voices of historically marginalized communities and to move beyond biased impact analysis by offering a research-based approach to understanding how diverse perspectives and experiences should influence technology development.

What are we doing?

In response to the accelerating complexity of ML and the deepening connection between large-scale ML and people, our team critically examines traditional assumptions about how technology affects society in order to better understand this interaction. We collaborate with academic scholars in social science and the philosophy of technology and publish foundational research on how ML can be beneficial and useful. We also provide research support for some of our organization’s most challenging efforts, including the 1,000 Languages initiative and ongoing work to test and evaluate language and generative models. Our work lends weight to Google’s AI Principles.

For this purpose, we:

  • Conduct foundational and exploratory research aimed at creating large-scale socio-technical solutions
  • Create datasets and research-based frameworks to evaluate ML systems
  • Define, identify, and assess negative societal impacts of AI
  • Create responsible solutions for the data collection used to build large models
  • Develop novel methodologies and approaches that support responsible deployment of ML models and systems, ensuring safety, fairness, robustness, and accountability to users
  • Translate external community and expert feedback into empirical insights to better understand user needs and impacts
  • Seek equitable collaboration and strive for mutually beneficial partnerships

We seek not only to refine existing frameworks for assessing the adverse impacts of AI in order to answer ambitious research questions, but also to promote the importance of this work.

Current research efforts

Understanding social issues

Our motivation for providing rigorous analytical tools and approaches is to ensure that socio-technical impact and equity are well understood in relation to cultural and historical nuance. This matters because it helps build the incentive and capacity to better understand communities who experience disproportionate burdens, and it demonstrates the value of rigorous, focused analysis. Our goals are to proactively engage with external thought leaders in this problem space, to reframe our existing mental models when assessing potential harms and impacts, and to avoid relying on unfounded assumptions and stereotypes about ML technologies. We collaborate with researchers at Stanford, the University of California, Berkeley, the University of Edinburgh, the Mozilla Foundation, the University of Michigan, the Naval Postgraduate School, Data & Society, EPFL, the Australian National University, and McGill University.

We explore systemic social issues and create useful artifacts to advance the development of responsible artificial intelligence.

Centering underrepresented voices

We also developed the Equitable AI Research Roundtable (EARR), a new community-based research coalition created to establish ongoing collaboration with external nonprofit and research leaders who are equity experts in the fields of education, law, social justice, AI ethics, and economic development. These collaborations offer the opportunity to engage with multidisciplinary experts on complex research questions about how we center and understand equity, drawing on lessons from other fields. Our partners include PolicyLink; The Education Trust – West; Notley; Partnership on AI; the Othering and Belonging Institute at UC Berkeley; the Michelson Institute for Intellectual Property; the HBCU IP Futures Collaborative at Emory University; the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS); and the Charles A. Dana Center at the University of Texas, Austin. The objective of the EARR program is to expand the scope of expertise and relevant knowledge as it relates to our work on responsible and safe approaches to AI development.

Through semi-structured workshops and discussions, EARR has provided critical perspectives and insights on how to conceptualize equity and vulnerability as they relate to AI technology. We have collaborated with EARR contributors on a range of topics, including generative AI, algorithmic decision-making, transparency, and explainability, with outputs ranging from adversarial surveys to frameworks and case studies. The process of translating research insights across disciplines into technical solutions is not always easy, but this research has been a rewarding partnership. We presented our initial evaluation of this engagement in a paper.

Figure: EARR – components of the ML development life cycle in which multidisciplinary expertise is key to mitigating human biases.

Grounded in civil and human rights values

In partnership with our Civil and Human Rights Program, our research and analysis process is grounded in internationally recognized human rights frameworks and standards, including the Universal Declaration of Human Rights and the UN Guiding Principles on Business and Human Rights. Using civil and human rights frameworks as a starting point allows for a context-specific approach to research that considers how a technology will be deployed and its effect on communities. Most importantly, a rights-based approach to research enables us to prioritize conceptual and applied methods that emphasize the importance of understanding the most vulnerable users and the most salient harms in order to inform day-to-day decision-making, product design, and long-term strategies.

Current work

Social context to aid dataset development and evaluation

We strive to use an approach to data collection, model development, and evaluation that is grounded in equity and avoids expedient but potentially risky shortcuts, such as using incomplete data or failing to account for the historical and sociocultural factors reflected in a dataset. Responsible data collection and analysis require an additional level of careful attention to the context in which the data were generated. For example, one may observe differences in outcomes across demographic variables that will be used to build models, and should then interrogate the structural and system-level factors at play, since some variables may ultimately be reflections of historical, social, and political circumstances. When we rely on proxy data, such as race or ethnicity, gender, or zip code, we risk systematically flattening the lived experiences of entire groups of diverse people and using them to build models that can recreate and perpetuate harmful, inaccurate profiles of whole populations. Critical data analysis also requires a careful awareness that correlations or relationships between variables do not imply causation; the associations we observe are often driven by additional variables.
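
To make the confounding concern concrete, here is a minimal, hypothetical sketch in Python (using only NumPy and pandas; every variable name and number is invented for illustration and does not describe any real dataset or Google system). It simulates an outcome gap across a demographic proxy that is driven entirely by a structural factor: the naive between-group comparison shows a large gap, while a comparison conditioned on that factor largely dissolves it.

```python
# Illustrative sketch only: how a between-group outcome gap can be explained by a
# structural confounder rather than by group membership itself.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical demographic proxy (e.g., a coarse group label derived from zip code).
group = rng.integers(0, 2, size=n)

# Structural factor (e.g., historical access to resources) that differs by group
# for historical and policy reasons, not because of the group itself.
resource_access = rng.normal(loc=np.where(group == 1, 0.0, 1.0), scale=1.0)

# The outcome depends only on the structural factor, not on group membership.
outcome = 2.0 * resource_access + rng.normal(scale=1.0, size=n)

df = pd.DataFrame({"group": group, "resource_access": resource_access, "outcome": outcome})

# Naive view: a large outcome gap between groups, which a model trained on the
# proxy alone would happily reproduce.
print(df.groupby("group")["outcome"].mean())

# Conditioning on the structural factor: within comparable bands of resource
# access, the between-group gap largely disappears, pointing to the confounder.
df["access_band"] = pd.qcut(df["resource_access"], q=5, labels=False)
print(df.groupby(["access_band", "group"])["outcome"].mean().unstack("group"))
```

The point of the sketch is not the specific simulation but the habit it illustrates: before treating a demographic proxy as a modeling feature, ask what structural factors it stands in for.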

The relationship between social context and model outcomes

Building on this expanded and nuanced social understanding of data and dataset construction, we also approach the problem of anticipating and improving the impact of ML models once they are deployed in the real world. There are many ways in which the use of AI in contexts ranging from education to health care has exacerbated existing inequities, because the developers and decision-making users of these systems lacked adequate social understanding and historical context and did not involve relevant stakeholders. This is a research challenge for the ML field in general and one that is central to our team.
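
One common, if partial, practice for surfacing such effects before deployment is to disaggregate evaluation metrics across subgroups rather than reporting a single aggregate score. The Python sketch below illustrates the idea with hypothetical labels, predictions, and group annotations and a deliberately biased toy predictor; it is a generic illustration, not a description of the team's methodology or any real system.

```python
# Illustrative sketch: per-group error rates instead of one aggregate metric.
import numpy as np

def disaggregated_rates(y_true, y_pred, groups):
    """Return per-group false positive and false negative rates."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        yt, yp = y_true[mask], y_pred[mask]
        fp = np.sum((yp == 1) & (yt == 0))
        fn = np.sum((yp == 0) & (yt == 1))
        negatives = max(np.sum(yt == 0), 1)
        positives = max(np.sum(yt == 1), 1)
        report[g] = {"fpr": fp / negatives, "fnr": fn / positives, "n": int(mask.sum())}
    return report

# Hypothetical evaluation data.
rng = np.random.default_rng(1)
groups = rng.integers(0, 2, size=1_000)
y_true = rng.integers(0, 2, size=1_000)
# A toy predictor that is systematically more error-prone on group 1.
flip = rng.random(1_000) < np.where(groups == 1, 0.3, 0.1)
y_pred = np.where(flip, 1 - y_true, y_true)

for g, stats in disaggregated_rates(y_true, y_pred, groups).items():
    print(f"group {g}: FPR={stats['fpr']:.2f}  FNR={stats['fnr']:.2f}  n={stats['n']}")
```

Disaggregated numbers like these are a starting point for the kind of stakeholder engagement described above, not a substitute for it.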

Globally responsible AI centered on community experts

Our team also recognizes the importance of understanding the global socio-technical context. In line with Google’s mission to “organize the world’s information and make it universally accessible and useful,” our team engages in global research collaborations. For example, we are collaborating with the Natural Language Processing team and the Human Centered team at the Makerere Artificial Intelligence Lab in Uganda to explore the cultural and linguistic nuances involved in language model development.

Conclusion

We continue to address the real-world impacts of ML models by conducting further socio-technical research and by engaging external experts who are also members of historically and globally disenfranchised communities. The Impact Lab is excited to offer an approach that advances the development of solutions to applied problems through the use of social-science, evaluation, and human rights epistemologies.

Acknowledgments

We would like to thank every member of the Impact Lab team – Jamila Smith-Loud, Andrew Smart, Jalon Hall, Darlene Neal, Amber Ebinama, and Qazi Mamunur Rashid – for all the hard work they do to ensure that ML is more responsible to its users and society across communities and around the world.