Google's AI Just Made Up a Supreme Court Decision About Wine Shipping
The problem with Google Gemini goes well beyond images of Black George Washington and Asian Female Nazis.
As many of you may have read recently, Google’s generative artificial intelligence (AI) tool, Gemini, has provided some problematic responses to user queries. In particular, the AI’s inability to render historically accurate, reality-based images in response to prompts has drawn criticism. As the New York Post wrote:
“Though the Gemini chatbot remains up and running, Google paused its image AI feature last week after it generated female NHL players, African American Vikings and Founding Fathers, as well as an Asian woman dressed in 1943 military garb when asked for an image of a Nazi-era German soldier.”
As reported, Google has shut down its Gemini AI image generator for now. However, the AI’s text-generation feature is still up and running. This gave me a chance to use and test it, to discover whether its bias, or, as the kids are saying today, its “Woke Tendencies,” extended to text responses as well.
What I learned was that the AI delivers far more disturbing responses via text than images of a Black George Washington or an Asian female Nazi.
Google Gemini is making up Supreme Court decisions.
Consider the following exchange I had with Gemini concerning legal questions surrounding state laws and interstate wine shipping: