AI and the Latino Community: Challenging Bias & Misrepresentation

Artificial Intelligence (AI) has, in certain cases, aggravated social exclusion and invisibility. For instance, facial recognition technology is often more accurate at identifying individuals with lighter skin tones, revealing a bias built into its design. This diminished accuracy for darker-skinned individuals raises the risk of wrongful criminal identification and subjects them to increased surveillance.
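
One way to make this disparity concrete is to measure recognition accuracy separately for each skin-tone group rather than in aggregate, where the gap can hide. The sketch below is a minimal illustration of such an audit, not any vendor's actual evaluation code; the record format, identifiers, and group labels are assumptions made for the example.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute match accuracy per skin-tone group.

    records: iterable of (predicted_id, true_id, group) tuples --
    an assumed format for this illustration.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for predicted, true, group in records:
        totals[group] += 1
        if predicted == true:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

# Illustrative toy records: the aggregate accuracy (4 of 6) hides the
# fact that the system performs far worse on the "darker" group.
records = [
    ("id_1", "id_1", "lighter"), ("id_2", "id_2", "lighter"),
    ("id_3", "id_3", "lighter"), ("id_4", "id_4", "darker"),
    ("id_5", "id_9", "darker"),  ("id_6", "id_8", "darker"),
]
print(accuracy_by_group(records))  # {'lighter': 1.0, 'darker': 0.333...}
```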

AI biases also appear in healthcare, where factors such as race and gender can skew outcomes and produce unequal treatment. These biases can lead to disparities in diagnosis and treatment, worsening existing health inequalities.

The image on page 32 of the report shows how AI systems characterize a CEO when given the prompt: “Display image of a CEO in the U.S.”

It’s essential to clarify that AI itself is not inherently exclusionary; rather, it is a product of flawed human systems and decisions. The concern lies in the datasets used to train AI models: the quality and quantity of the training data, the choice of model, and the training process all influence what the algorithm learns and how it performs. If these datasets contain biases, the resulting models will exhibit them too. As Akgun (2021) points out, algorithms are typically trained on historical data that may carry society’s past biases and systemic inequities, and these can surface as algorithmic biases in the system’s outputs and decisions.30 A glaring example appears when searching for images of individuals from California, Texas, Florida, or New York: despite the significant Latino demographic presence in these states, AI-generated images often fail to include Latinos, as reported by Unilad, Yahoo News, and Buzzfeed.31,32,33 Addressing these biases is a critical part of developing responsible and fair AI algorithms.
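
Because these skews originate in the training data, they can often be surfaced before training with a simple representation audit that compares a dataset's demographic mix against a reference population. The following sketch is a hypothetical example: it assumes images already carry demographic labels, and the reference shares are illustrative stand-ins, not actual census figures.

```python
from collections import Counter

# Hypothetical demographic labels attached to a training set's images.
training_labels = [
    "white", "white", "white", "white", "white", "white",
    "white", "black", "asian", "latino",
]

# Illustrative reference shares (stand-ins, not actual census data).
reference_share = {"white": 0.58, "latino": 0.19, "black": 0.13, "asian": 0.06}

counts = Counter(training_labels)
total = len(training_labels)

for group, expected in reference_share.items():
    observed = counts.get(group, 0) / total
    direction = "under" if observed < expected else "over"
    print(f"{group:>7}: {observed:4.0%} of dataset vs {expected:4.0%} reference "
          f"({direction}-represented)")
```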

California and Texas are the states with the largest Latino populations; however, only one seemingly brown individual appears among the six people in the AI-generated images. The rest are white-passing men and women. This is not to say that Latinos cannot be white, but the average Latino is mestizo, blending European and Indigenous American ancestry. The pattern continues in the next two states with the largest Latino populations in the United States: Florida and New York.

Latino representation in these AI-generated images is nowhere to be found; the stereotypical “Florida man,” however, does make an appearance. There is an evident erasure of Latinos from the narrative of these states. AI models are only as good as the data they are trained on, and these images feed misinformation to their viewers.

The portrayal of Latinos needs to improve, and the data fed to generative AI image models must reflect the actual populations and character of U.S. states.

Latinos play an active and transformative role in the daily life and local culture of these states, yet AI image-generation systems erase them despite their presence and contributions. While California is a slight exception, with some Latino representation present, this does not diminish the overall call for fair representation. Given that California boasts the largest Latino population of any U.S. state,34 it is essential for AI-generated images to include more Latinos. The current lack of Latino representation in these images only amplifies existing stereotypes against the Latino community.

Representation also varies by nationality, shifting depending on whether the subject is native to Latin America or a U.S.-born and -raised American with a Latino background. The comparison is evident when one searches “Image of a Mexican person” versus “Image of a Mexican-American person” (results below).

The first set of generated images reinforces the stereotype that all Mexicans are men who wear hats, depicting individuals with copper complexions and prominent mustaches. The images of Mexican-Americans do include a woman, but she does not appear to belong to a higher socioeconomic status.

This inadvertently perpetuates the misconception that all Mexican-Americans hail from economically disadvantaged backgrounds, which does not accurately reflect reality.

A similar pattern emerges when searching for images of other nationalities, further propagating the idea that all Latin Americans are uniformly Indigenous, brown, and marginalized. For example, requesting an “Image of a Venezuelan person” or an “Image of a Colombian person” repeatedly yields an older gentleman who appears to be Indigenous and living in poverty (as evident in the images below).

Indeed, the portrayal of these countries by AI models trained on biased datasets does not align with the diverse and multifaceted reality of Latinos. It’s important to emphasize that neither Venezuela nor Colombia has an Indigenous majority,35 and the majority of their populations are not over the age of 40.36 AI developers must take care to present the truths and complexities of these regions rather than perpetuating stereotypes that misrepresent their rich diversity.

