Tips for applying an intersectional framework to AI development – TechCrunch

December 18, 2020
in Technology
5 min read
Kendra Gaunt (she/her or they/them pronouns) is a data and AI product owner at The Trevor Project, the world’s largest suicide prevention and crisis intervention organization for LGBTQ youth. A 2019 Google AI Impact Grantee, the organization is implementing new AI applications to scale its impact and save more young LGBTQ lives.

By now, most of us in tech know that the biases we hold as humans carry over into the AI applications we build, and that those applications have become sophisticated enough to shape our everyday lives and even influence our decision-making.

The more prevalent and powerful AI systems become, the sooner the industry must address questions like: What can we do to move away from using AI/ML models that demonstrate unfair bias?

How can we apply an intersectional framework to build AI for all people, knowing that different individuals are affected by and interact with AI in different ways based on the converging identities they hold?

Intersectionality: What it means and why it matters

Before tackling the tough questions, it’s important to take a step back and define “intersectionality.” Coined by legal scholar Kimberlé Crenshaw, the term names a framework that empowers us to consider how someone’s distinct identities come together and shape the ways in which they experience the world and are perceived in it.

This includes the resulting biases and privileges that are associated with each distinct identity. Many of us may hold more than one marginalized identity and, as a result, we’re familiar with the compounding effect that occurs when these identities are layered on top of one another.

At The Trevor Project, the world’s largest suicide prevention and crisis intervention organization for LGBTQ youth, our chief mission is to provide support to each and every LGBTQ young person who needs it, and we know that those who are transgender and nonbinary and/or Black, Indigenous, and people of color face unique stressors and challenges.

So, when our tech team set out to develop AI to serve and exist within this diverse community — namely to better assess suicide risk and deliver a consistently high quality of care — we had to be conscious of avoiding outcomes that would reinforce existing barriers to mental health resources like a lack of cultural competency or unfair biases like assuming someone’s gender based on the contact information presented.

Though our organization serves a particularly diverse population, underlying biases can exist in any context and negatively impact any group of people. As a result, all tech teams can and should aspire to build fair, intersectional AI models, because intersectionality is the key to fostering inclusive communities and building tools that serve people from all backgrounds more effectively.

Doing so starts with identifying the variety of voices that will interact with your model, as well as the groups for which these identities overlap. Defining the problem you’re solving is the first step: once you understand who is affected by it, you can identify a solution. Next, map the end-to-end experience journey to learn the points where these people interact with the model. From there, there are strategies every organization, startup and enterprise can apply to weave intersectionality into every phase of AI development, from training to evaluation to feedback.

Datasets and training

The quality of a model’s output relies on the data on which it’s trained. Datasets can contain inherent bias due to the nature of their collection, measurement and annotation — all of which are rooted in human decision-making. For example, a 2019 study found that a healthcare risk-prediction algorithm demonstrated racial bias because it relied on a faulty dataset for determining need. As a result, eligible Black patients received lower risk scores in comparison to white patients, ultimately making them less likely to be selected for high-risk care management.

Fair systems are built by training a model on datasets that reflect the people who will be interacting with the model. It also means recognizing where there are gaps in your data for people who may be underserved. However, there’s a larger conversation to be had about the overall lack of data representing marginalized people — it’s a systemic problem that must be addressed as such, because sparsity of data can obscure both whether systems are fair and whether the needs of underrepresented groups are being met.

To start analyzing this for your organization, consider the size and source of your data to identify what biases, skews or mistakes are built in, and how the data can be improved going forward.
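
To make this audit concrete, here is a minimal sketch in Python, assuming the data lives in a pandas DataFrame with hypothetical “race” and “gender” columns; the column names and values are illustrative, not The Trevor Project’s actual schema.

import pandas as pd

# Illustrative records; in practice this would be your training data.
df = pd.DataFrame({
    "race":   ["Black", "white", "Black", "white", "Asian", "Black"],
    "gender": ["woman", "man",   "man",   "woman", "woman", "woman"],
})

# Count each intersectional group rather than each dimension alone,
# so gaps that only appear at the intersections become visible.
group_counts = df.groupby(["race", "gender"]).size().sort_values()
print(group_counts)

The smallest groups flag where the model will have the least signal to learn from, and where future data collection should focus first.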

The problem of bias in datasets can also be addressed by amplifying or boosting specific intersectional data inputs, as your organization defines them. Doing this early on will inform your model’s training formula and help your system stay as objective as possible; otherwise, your training formula may be unintentionally optimized to produce irrelevant results.

At The Trevor Project, we may need to amplify signals from demographics that we know disproportionately find it hard to access mental health services, or for demographics that have small sample sizes of data compared to other groups. Without this crucial step, our model could produce outcomes irrelevant to our users.
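
One simple way to amplify sparse groups, offered as an illustration rather than The Trevor Project’s actual method, is to oversample each intersectional group with replacement until it matches the size of the largest one:

import pandas as pd

def oversample_groups(df, group_cols, seed=0):
    """Resample every intersectional group (with replacement) so each
    contributes as many rows as the largest group."""
    target = df.groupby(group_cols).size().max()
    parts = [grp.sample(n=target, replace=True, random_state=seed)
             for _, grp in df.groupby(group_cols)]
    return pd.concat(parts, ignore_index=True)

Note that naive oversampling repeats the same few records for very small groups, which invites overfitting; reweighting examples at training time, sketched in the evaluation section below, is a gentler alternative.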

Evaluation

Model evaluation is an ongoing process that helps organizations respond to ever-changing environments. Evaluating fairness began with looking at a single dimension — like race or gender or ethnicity. The next step for the tech industry is figuring out how to best compare intersectional groupings to evaluate fairness across all identities.

To measure fairness, try defining intersectional groups that could be at a disadvantage and the ones that may have an advantage, and then examine whether certain metrics (for example, false-negative rates) vary among them. What do these inconsistencies tell you? How else can you further examine which groups are underrepresented in a system and why? These are the kinds of questions to ask at this phase of development.
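
The false-negative comparison described above might look like the following sketch, where “y_true”, “y_pred” and the grouping columns are hypothetical names standing in for your own labels and predictions:

import pandas as pd

def false_negative_rates(df, group_cols, label="y_true", pred="y_pred"):
    """Per-group false-negative rate: among actual positives,
    the share the model predicted negative."""
    positives = df[df[label] == 1].copy()
    positives["missed"] = positives[pred] == 0
    return positives.groupby(group_cols)["missed"].mean()

A wide spread between the highest and lowest group rates is exactly the kind of inconsistency these questions are meant to surface.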

Developing and monitoring a model based on the demographics it serves from the start is the best way for organizations to achieve fairness and alleviate unfair bias. Based on the evaluation outcome, a next step might be to purposefully overserve statistically underrepresented groups to facilitate training a model that minimizes unfair bias. Since algorithms can lack impartiality due to societal conditions, designing for fairness from the outset helps ensure equal treatment of all groups of individuals.
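
As one possible mechanism for “overserving” underrepresented groups at training time, sketched here with a scikit-learn estimator as an assumption rather than the article’s prescribed tooling, each example can be weighted inversely to the size of its intersectional group:

import pandas as pd
from sklearn.linear_model import LogisticRegression

def inverse_frequency_weights(df, group_cols):
    """Per-row weights inversely proportional to intersectional group size."""
    sizes = df.groupby(group_cols)[group_cols[0]].transform("size")
    return (len(df) / sizes).to_numpy()

# Hypothetical usage, where X and y are your features and labels:
# weights = inverse_frequency_weights(df, ["race", "gender"])
# model = LogisticRegression().fit(X, y, sample_weight=weights)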

Feedback and collaboration

Teams should also have a diverse group of people involved in developing and reviewing AI products: people who are diverse not only in identities, but also in skill set, exposure to the product, years of experience and more. Consult stakeholders and those impacted by the system to identify problems and biases.

Lean on engineers when brainstorming solutions. When defining intersectional groupings at The Trevor Project, we worked across the teams closest to our crisis-intervention programs and the people using them, like Research, Crisis Services and Technology. And once you launch, reach back out to stakeholders and the people interacting with the system to collect feedback.

Ultimately, there isn’t a “one-size-fits-all” approach to building intersectional AI. At The Trevor Project, our team has outlined a methodology based on what we do, what we know today and the specific communities we serve. This is not a static approach and we remain open to evolving as we learn more. While other organizations may take a different approach to build intersectional AI, we all have a moral responsibility to construct fairer AI systems, because AI has the power to highlight — and worse, magnify — the unfair biases that exist in society.

Depending on the use case and community in which an AI system exists, the magnification of certain biases can result in detrimental outcomes for groups of people who may already face marginalization. At the same time, AI also has the ability to improve quality of life for all people when developed through an intersectional framework. At The Trevor Project, we strongly encourage tech teams, domain experts and decision-makers to think deeply about codifying a set of guiding principles to initiate industry-wide change — and to ensure future AI models reflect the communities they serve.
