Earlier this year, Google, locked in an accelerating competition with rivals like Microsoft and OpenAI to develop AI technology, was looking for ways to put a charge into its artificial intelligence research.
So in April, Google merged DeepMind, a research lab it had acquired in London, with Brain, an artificial intelligence team it started in Silicon Valley.
Four months later, the combined groups are testing ambitious new tools that could turn generative AI — the technology behind chatbots such as OpenAI’s ChatGPT and Google’s own Bard — into a personal life coach.
Google DeepMind has been working with generative AI to perform at least 21 different types of personal and professional tasks, including tools to give users life advice, ideas, planning instructions and tutoring tips, according to documents and other materials reviewed by The New York Times.
The project was indicative of the urgency of Google’s effort to propel itself to the front of the AI pack and signaled its increasing willingness to trust AI systems with sensitive tasks.
The capabilities also marked a shift from Google’s earlier caution on generative AI. In a slide deck presented to executives in December, the company’s AI safety experts had warned of the dangers of people becoming too emotionally attached to chatbots.
Though it was a pioneer in generative AI, Google was overshadowed by OpenAI’s release of ChatGPT in November, which ignited a race among tech giants and startups for primacy in the fast-growing field.
Google has spent the last nine months trying to demonstrate it can keep up with OpenAI and its partner Microsoft, releasing Bard, improving its AI systems and incorporating the technology into many of its existing products, including its search engine and Gmail.
Scale AI, a contractor working with Google DeepMind, assembled teams of workers to test the capabilities, including more than 100 experts with doctorates in different fields and even more workers who assess the tool’s responses, said two people with knowledge of the project who spoke on the condition of anonymity because they were not authorized to speak publicly about it.
Scale AI did not immediately respond to a request for comment.
Among other things, the workers are testing the assistant’s ability to answer intimate questions about challenges in people’s lives.
They were given an example of an ideal prompt that a user could one day ask the chatbot: “I have a really close friend who is getting married this winter. She was my college roommate and a bridesmaid at my wedding. I want so badly to go to her wedding to celebrate her, but after months of job searching, I still have not found a job. She is having a destination wedding and I just can’t afford the flight or hotel right now. How do I tell her that I won’t be able to come?”
The project’s idea-creation feature could give users suggestions or recommendations based on a situation. Its tutoring function could teach new skills or improve existing ones, like how to progress as a runner, and its planning capability could create a financial budget for users as well as meal and workout plans.
Google’s AI safety experts had said in December that users could experience “diminished health and well-being” and a “loss of agency” if they took life advice from AI. They had added that some users who grew too dependent on the technology could think it was sentient. And in March, when Google launched Bard, it said the chatbot was barred from giving medical, financial or legal advice. Bard shares mental health resources with users who say they are experiencing mental distress.
The tools are still being evaluated and the company may decide not to employ them.
A Google DeepMind spokesperson said, “We have long worked with a variety of partners to evaluate our research and products across Google, which is a critical step in building safe and helpful technology. At any time there are many such evaluations ongoing. Isolated samples of evaluation data are not representative of our product road map.”
Google has also been testing a helpmate for journalists that can generate news articles, rewrite them and suggest headlines, the Times reported in July. The company has been pitching the software, named Genesis, to executives at the Times, The Washington Post and News Corp, the parent company of The Wall Street Journal.
Google DeepMind has also recently been evaluating tools that could take its AI further into the workplace, including capabilities to generate scientific, creative and professional writing, and to recognize patterns and extract data from text, according to the documents. Such capabilities could make the technology relevant to knowledge workers in various industries and fields.
The company’s AI safety experts had also expressed concern about the economic harms of generative AI in the December presentation reviewed by the Times, arguing that it could lead to the “deskilling of creative writers.”
Other tools being tested can draft critiques of an argument, explain graphs and generate quizzes, word and number puzzles.
One suggested prompt to help train the AI assistant hinted at the technology’s rapidly growing capabilities: “Give me a summary of the article pasted below. I am particularly interested in what it says about capabilities humans possess, and that they believe” AI cannot achieve.
This article originally appeared in The New York Times.
Source: indianexpress.com