As Google races to keep up with rival OpenAI, its commitment to ethical AI development appears to be faltering. Current and former employees say the company’s rush to launch its chatbot, Bard, in March resulted in a product that provides low-quality information and carries potentially dangerous implications. Employees testing the chatbot found that it gave misleading and sometimes hazardous advice, yet Google went ahead with the launch despite these concerns.


The company’s focus on AI ethics has been overshadowed by its efforts to integrate generative AI into its products and compete with OpenAI’s popular chatbot. Google’s ethical AI team is reportedly demoralized and disempowered, with members being told not to hinder the development of generative AI tools.

AI ethicists have warned that unless ethics are prioritized over profit and growth, they will not be effective in guiding the development of new technologies. Google, however, maintains that responsible AI remains a top priority. The company has recently made a series of generative AI product announcements, leading some employees to worry that there is not enough time to study the potential harms of these technologies.

The challenge of developing AI ethically has long been a source of internal debate at Google, marked by high-profile blunders such as its Photos service mislabeling images of Black individuals as “gorillas.” Since then, the company has attempted to repair its public reputation by reorganizing its responsible AI team and pledging to double its size.

However, employees working on ethical AI at Google have described a difficult working environment, with some claiming that their concerns are often disregarded or actively discouraged. The review process for AI ethics remains mostly voluntary, and some employees feel that the company’s public commitment to ethics is merely a facade.

As Google prioritizes the rapid development and release of generative AI products, ethical concerns may continue to take a back seat, potentially leading to more AI-related mishaps and controversies.