Artificial intelligence bias

Machine learning models are known to amplify biases present in their training data. These data biases frequently do not become apparent until long after a model has been deployed, and sometimes only when a failure attracts public attention, for example by going viral on social media.

To tackle this issue and to enable preemptive analysis of large-scale datasets, REVISE (REvealing VIsual biaSEs) is a tool that assists in the investigation of a visual dataset, surfacing potential biases along three dimensions:

1 – Object-based: biases relating to the size, context, or diversity of object representation (a minimal sketch follows this list).

2 – Gender-based: metrics that aim to reveal stereotypical portrayals of people of different genders.

3 – Geography-based: analyses of how different geographic locations are represented.
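To make the object-based dimension concrete, one simple metric is how much of the image each object class typically occupies: classes that only ever appear very small, or only ever fill the frame, are represented in a biased range of scales and contexts. The sketch below is purely illustrative and assumes bounding-box annotations in an (x, y, width, height) format; it is not REVISE's own implementation.

```python
from collections import defaultdict

def normalized_object_sizes(annotations, image_sizes):
    """Group the fraction of image area each annotated object occupies by class.

    annotations : list of dicts with keys "image_id", "category", "bbox",
                  where bbox is (x, y, width, height) -- an assumed format
    image_sizes : dict mapping image_id -> (width, height)
    """
    sizes = defaultdict(list)
    for ann in annotations:
        img_w, img_h = image_sizes[ann["image_id"]]
        _, _, box_w, box_h = ann["bbox"]
        sizes[ann["category"]].append((box_w * box_h) / (img_w * img_h))
    return sizes

# Toy example: compare the average normalized size per class.
sizes = normalized_object_sizes(
    annotations=[
        {"image_id": 1, "category": "dog", "bbox": (10, 10, 50, 40)},
        {"image_id": 1, "category": "person", "bbox": (100, 20, 200, 400)},
    ],
    image_sizes={1: (640, 480)},
)
for category, fractions in sizes.items():
    print(category, sum(fractions) / len(fractions))
```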

REVISE is an open-source tool that automatically detects these possible forms of bias in a visual dataset along the object-based, gender-based, and geography-based axes, and suggests next steps for mitigating them.
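As a further illustration of the gender-based axis, one can count how often each object class co-occurs with images annotated with different gender labels; strongly skewed counts point to stereotypical portrayals worth inspecting. The snippet below is a hypothetical sketch over an assumed per-image annotation schema, not REVISE's own code or data format.

```python
from collections import Counter, defaultdict

def object_gender_cooccurrence(images):
    """Count how often each object class co-occurs with each gender label.

    images : iterable of dicts with keys "objects" (list of object class
             names) and "gender_labels" (list of annotated labels) -- an
             assumed schema for illustration only.
    """
    counts = defaultdict(Counter)
    for img in images:
        for obj in set(img["objects"]):
            for label in set(img["gender_labels"]):
                counts[obj][label] += 1
    return counts

# Toy example: heavily one-sided counts flag classes to inspect more closely.
counts = object_gender_cooccurrence([
    {"objects": ["laptop", "tie"], "gender_labels": ["male"]},
    {"objects": ["laptop", "oven"], "gender_labels": ["female"]},
])
print(dict(counts["tie"]))   # {'male': 1}
print(dict(counts["oven"]))  # {'female': 1}
```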

https://github.com/princetonvisualai/revise-tool
