
Evan Shieh

AI Ethics Researcher and Educator / Executive Director, Young Data Scientists League

Queens, NY, USA / Asian or Asian American

👨 he/him/his

🎓MS, Artificial Intelligence and Systems, Stanford
🎓BS, Computer Science, Stanford

😁 Evan loves the outdoors – hiking, fishing, gardening, and foraging

Evan inspires others to embrace curiosity and learn from the diversity of perspectives around them.

ABOUT HIS WORK

As an AI ethics researcher and educator, I could not do this work without the diverse, multicultural communities I've been blessed to be a part of. Working in AI ethics means navigating the tensions, challenges, and insights that come from applying multiple disciplinary lenses (across STEM and the humanities). It also means learning from youth perspectives, something we're especially proud of at the Young Data Scientists League.

WATCH & DISCUSS

Watch:
HAI Seminar: Intersectional Biases in Generative Language Models and Their Psychosocial Impacts

Discuss: 

  1. What does it mean for an AI to be “fair”?

  2. How can an AI system learn things that might be unfair or biased from the internet?

  3. Why might an AI describe or show some groups of people more often than others?

  4. How could that make someone feel if they don’t see themselves represented fairly?

  5. Why do you think it’s important for people from different backgrounds to help design and test AI systems?

  6. How can having a team with many perspectives make technology better for everyone?

  7. What might happen if only one group of people gets to decide what “normal” looks like in an AI’s answers or images?

  8. Have you ever seen technology that didn’t work well for everyone? What could have been done differently?

  9. How might AI systems affect how people see themselves or others in the world?

  10. What can we do to make sure technology includes and respects all kinds of people?

  11. If you could give advice to the people who make AI, what would you tell them to do to make it more fair?

  12. What responsibilities do we have as users to notice and speak up about unfairness in technology?

