Google’s most advanced Artificial Intelligence (AI), Gemini, has made waves after its recent official launch.
Touted for its enhanced reasoning, planning, and understanding capabilities, Gemini represents a significant leap forward in AI technology.
This upgrade builds upon the foundation laid by the earlier Bard model and integrates the powerful “Ultra 1.0” model, promising a marked step up in performance.
However, the excitement surrounding Gemini’s launch quickly turned to scrutiny over the weekend with a public backlash against the model’s image generation feature.
Users took to social media to criticize the tool for generating historically inaccurate images, with news outlets reporting notable examples including depictions of non-white individuals as Nazi soldiers.
The inaccuracies sparked widespread outrage and raised concerns about the potential implications of such flawed representations.
In response to the uproar, Google acknowledged the issue and issued a public apology for the inaccurate results.
The company pledged to address the concerns raised by users and to improve the tool’s ability to accurately represent diverse historical contexts, emphasizing its dedication to responsible AI development and promising proactive steps to rectify the situation.
The firm also decided to temporarily pause Gemini’s image generation function. This move is intended to let Google’s engineering team focus on resolving the historical inaccuracies and ensuring the responsible use of the technology.
Google has committed to re-enabling the feature only once it meets the highest standards of accuracy and reliability.