New 10-shade skin tone scale to test Google’s AI for bias

Alphabet Inc.’s Google unveiled a palette of ten skin tones on Wednesday, touting it as a step forward in building devices and applications that better serve people of color.

The company says its new Monk Skin Tone Scale replaces the Fitzpatrick Skin Type, a flawed six-shade standard the tech industry had widely adopted for judging whether smartwatch heart-rate sensors, facial recognition systems, and other products exhibit color bias.
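
In practice, testing for that kind of bias typically means breaking a product’s evaluation metrics down by tone bucket and comparing across buckets. The sketch below is a minimal illustration of that idea, not Google’s actual pipeline; the data and function names are hypothetical.

```python
# A minimal sketch of per-tone bias testing, assuming hypothetical data.
# `records` pairs a 1-10 tone bucket with whether a face detector succeeded;
# none of these names come from Google's actual tooling.
from collections import defaultdict

def per_tone_accuracy(records):
    """Return {tone: detection rate} for (tone, detected) pairs."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for tone, detected in records:
        totals[tone] += 1
        hits[tone] += int(detected)
    return {tone: hits[tone] / totals[tone] for tone in sorted(totals)}

# A wide gap between the best- and worst-served tones is the bias signal.
sample = [(2, True), (2, True), (2, True), (9, True), (9, False), (9, False)]
print(per_tone_accuracy(sample))  # -> {2: 1.0, 9: 0.333...}
```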

Fitzpatrick underrepresented people with darker skin, according to tech experts. Media outlets reported last year that Google was working on an alternative.

The company teamed up with Harvard University sociologist Ellis Monk, who studies colorism and has felt dehumanized by cameras that failed to detect his face and render his skin tone accurately.

Fitzpatrick, according to Monk, is excellent at capturing distinctions in lighter skin. But because the majority of the world’s people have darker skin, he wanted a scale that “does a better job for the bulk of the planet,” he explained.

Monk picked 10 tones using Photoshop and other digital art tools, a workable number for the people who help train and evaluate AI systems. He and Google surveyed 3,000 people across the United States and found that a large share of them felt a 10-point scale matched their skin just as well as a 40-shade palette did.
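
One practical reason a small, fixed palette helps with annotation and evaluation is that any sampled skin color can be snapped to its nearest reference tone. The sketch below assumes a crude nearest-neighbor match in RGB space with placeholder hex swatches; these are NOT the official Monk values, and real annotation relies on human raters rather than raw pixel distance.

```python
# Snap a sampled skin color to the nearest of ten reference tones via
# Euclidean distance in RGB. The hex swatches are placeholders spanning
# light to dark, not the official Monk Skin Tone swatches.
PLACEHOLDER_TONES = [
    "#f7ede4", "#f3e7db", "#f0dcc8", "#e7c6a5", "#d4a981",
    "#b08261", "#8d5a3f", "#68422e", "#4a3123", "#2d211a",
]

def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def nearest_tone(rgb):
    """Return the 1-10 bucket whose placeholder swatch is closest to `rgb`."""
    swatches = [hex_to_rgb(h) for h in PLACEHOLDER_TONES]
    dists = [sum((a - b) ** 2 for a, b in zip(rgb, s)) for s in swatches]
    return dists.index(min(dists)) + 1

print(nearest_tone((150, 100, 70)))  # -> 7 with these placeholder swatches
```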

The Monk scale is “a nice compromise between being representative and being tractable,” according to Tulsee Doshi, head of product for Google’s responsible AI team.

Google is already putting the scale to use. In Google Images, beauty-related searches such as “bridal makeup looks” can now be filtered by Monk tone, and searches like “cute infants” now display images with varied skin tones.

The Monk scale is also being used to ensure that a wide spectrum of people are happy with the filter options in Google Photos and that the company’s face-matching technology is not biased.

Still, flaws can creep into products if companies lack enough data on each of the tones, or if the people or tools used to classify others’ skin are skewed by lighting differences or personal bias, Doshi said.
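
Doshi’s data-coverage caveat is easy to check mechanically: if some tone buckets have too few examples, any per-tone metric computed on them is noise. A minimal sketch, assuming hypothetical tone labels and an arbitrary minimum count:

```python
# Flag any of the ten buckets with fewer than `min_count` samples, since
# per-tone metrics computed on them would be unreliable. Labels and the
# threshold are illustrative assumptions.
from collections import Counter

def underrepresented_tones(tone_labels, min_count=1000, num_tones=10):
    counts = Counter(tone_labels)
    return [t for t in range(1, num_tones + 1) if counts[t] < min_count]

labels = [1] * 5000 + [2] * 4000 + [9] * 120 + [10] * 40
print(underrepresented_tones(labels))
# -> [3, 4, 5, 6, 7, 8, 9, 10]: buckets with zero or too few samples
```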
