"Algorithms aren't right for everything on Twitter," the social network notes in a blog post published Wednesday, arguing that users themselves are better placed to decide how to frame a picture.
The platform, which has launched an initiative to make its AI systems more ethical, investigated the biases reported by its users.
The algorithm, in service since 2018, is designed to crop images around what it deems most important, reducing their size and thus decluttering users' feeds.
Racial and gender bias
"People on Twitter noticed instances where our model chose white people over Black people," explains Rumman Chowdhury, the company's director of software engineering. "We tested our system on a large image dataset to determine whether there was a problem."
The company's research team found a 4% difference in favor of white people overall, and a 7% difference in favor of white women compared with Black women. Comparing men and women overall showed an 8% gap in favor of women.
The San Francisco-based company also looked for potential "male gaze" bias, in which the algorithm would crop to a woman's chest or legs rather than her face. But only 3% of the images tested were cropped away from people's heads, and those crops centered on non-physical elements, such as a number on a sports jersey.
"We found no evidence of objectification bias."
In March, the social network tested a new way of displaying photos without cropping them, then rolled the tool out worldwide, letting people preview how their tweets would look before posting.
"This reduces our reliance on machine learning for a task that users are able to perform themselves," she added.