Google’s Gemini AI will soon resume generating images of people

Google hopes to re-enable images of people in Gemini, its multimodal generative AI tool, according to the DeepMind founder


📅 Published on August 1, 2024 🕒 5 min read


Google’s multimodal generative AI tool, Gemini, is set to regain its ability to produce depictions of people, according to DeepMind founder Demis Hassabis. In an interview at the Mobile World Congress in Barcelona, Hassabis revealed that the capability to respond to prompts for images of humans will be reinstated in the “next few weeks.” This comes after Google temporarily suspended the feature due to historical inaccuracies in the images produced.

The snafu occurred when users pointed out that Gemini was depicting historical figures, such as the US founding fathers, as a diverse group of individuals rather than the white men they actually were. When asked about the issue, Hassabis attributed it to Google’s failure to distinguish prompts seeking a “universal depiction”, where diverse portrayals are appropriate, from those calling for historical specificity. He emphasized that handling these nuances is a challenge across the advanced AI field.

According to Hassabis, prompts that request images of historical figures should result in a narrower distribution of outputs, aligning with historical accuracy. Google aims to reintroduce the feature after making the necessary adjustments in the coming weeks. However, when questioned about preventing the misuse of generative AI tools for propaganda purposes, Hassabis acknowledged the complexity of the issue. He suggested that combating this challenge would require joint efforts from civil society, governments, and tech companies.

The discussion also touched upon the risks associated with open-source general-purpose AI models, which Google also provides. Hassabis highlighted the need to ensure that downstream applications of these systems do not become harmful as they gain increasingly powerful capabilities. He emphasized the importance of addressing this issue before next-generation AI systems with planning and problem-solving capabilities become widespread.

Moving on to the future of AI devices and their impact on the mobile market, Hassabis predicted the emergence of next-generation smart assistants that are genuinely useful in people’s everyday lives. He suggested that such advancements might reshape the choice of mobile hardware, potentially challenging the dominance of smartphones. Hassabis speculated that glasses or other forms of wearable technology could enhance AI systems’ contextual understanding, making them even more helpful in users’ daily lives.

In conclusion, Google plans to reinstate the human depiction feature in its Gemini AI tool with the aim of providing accurate portrayals of historical figures. The challenges faced in addressing historical accuracy and preventing the misuse of AI tools highlight the need for broader societal discussions. Furthermore, the future of AI devices holds immense potential in revolutionizing our interaction with technology.


Q&A: Frequently Asked Questions

Q1: Why did Google suspend the ability of its AI tool, Gemini, to generate images of people?

Google temporarily suspended the feature due to historical inaccuracies in the images Gemini produced. The tool was depicting historical figures, such as the US founding fathers, as a diverse group of individuals rather than in a historically accurate way. Google recognized the need to address these inaccuracies before restoring the capability.

Q2: When will Gemini’s capability to generate images of people be reinstated?

According to Demis Hassabis, the founder of DeepMind, Google aims to reinstate this feature in the “next few weeks.” The company is working to make the necessary adjustments to Gemini to ensure accurate depictions of historical figures while offering a range of possibilities.

Q3: How can generative AI tools be prevented from being misused by bad actors?

Preventing the misuse of generative AI tools is a complex challenge. According to Hassabis, addressing it requires collaboration between tech companies, civil society, and governments, since it involves deciding on and enforcing the limits of these tools. Ensuring that the technology reflects the values and intentions of its creators, and thwarting unauthorized usage, are critical aspects that need to be tackled collectively.

Q4: What are the risks associated with open-source general-purpose AI models?

Open-source general-purpose AI models raise concerns about downstream usage. As these systems become more powerful, it becomes crucial to ensure that they are not misused or repurposed for harmful ends. Because the field is still nascent, the risks are currently lower; however, as AI technology advances, society must seriously consider the potential proliferation and misuse of these systems by individuals or even rogue states.


🌐 For more information on this topic, check out the following links:

  1. Google pauses AI tool Gemini’s ability to generate images of people after historical inaccuracies (TechCrunch)
  2. Google’s Five Big Updates for the Pixel 8 and 8 Pro (Digital Trends)
