AI image-generation software underrepresents women in senior professional positions, according to new research by British media agency One Green Bean. In recognition of International Women’s Day, the agency ran an experiment with the emerging artificial intelligence program Midjourney that demonstrated a clear male gender bias.
Images generated from the job titles of three prominent positions on One Green Bean’s global leadership team failed to reflect the gender of the staff who hold them. The women serving as managing director, executive creative director and regional head of public relations were pictured posing alongside their assumed male selves.
One Green Bean found that 88 percent of images based on The Times newspaper’s top 20 highest-paying jobs in the UK reinforced male gender stereotypes. In the Australian workforce, around one third of comparable top jobs are held by women.
Like OpenAI’s program DALL-E, Midjourney uses keywords to generate images inspired by the approximately five billion images in its catalogue. “When you do include ‘woman’ in your keywords, imagery tends to be sexualized – big boobs, unbuttoned shirts, pouting lips,” says One Green Bean founder Kat Thomas.
“Bias against women is a significant hurdle these platforms need to overcome.”
Thomas said the creative communications industry is currently obsessed with “embryonic” artificial intelligence, and that such failings limit its ultimate potential. “There’s been huge hype around AI tools like ChatGPT and Midjourney. We’ve been deep in experimentation to understand their potential, but an eye-opening limitation became clear very quickly.”
“They effectively hold a mirror up to society, demonstrating that ingrained cultural biases dictate the norms that machine intelligence currently relies on,” says Thomas.
Other research has shown AI bias towards men in roles such as air traffic controller, paramedic, finance manager and police officer; even a search for “international tennis star” returned no sign of prominent women.