Google unveils AI updates: NotebookLM, a robot 'brain' and shopping tools | ForkLog



Google unveils NotebookLM Deep Research, SIMA 2 for robots, and new AI shopping tools.

Google unveiled a raft of new agentic-AI features, including a Deep Research mode in NotebookLM, SIMA 2 -- the "brain for robots" -- and shopping tools.

Google has updated its note-taking AI assistant NotebookLM, adding a tool to simplify complex research and support for more file types.

The service introduced Deep Research, an automation tool for online search. The company says it works like an independent researcher: it can produce a detailed report or recommend relevant articles, academic papers and websites.

Deep Research takes a question, drafts a research plan and scans the web. After a few minutes it delivers a source-backed report that can be added directly to the notebook.

The mode runs in the background, allowing other tasks to continue in parallel.

The tool is accessible through NotebookLM's search function. Users can choose between two research styles: the detailed Deep Research or the quicker Fast Research.

NotebookLM now supports Google Sheets, Drive files via URL, PDFs from Google Drive and Microsoft Word documents.

The updates will roll out within a week.

NotebookLM is an AI assistant from Google for notes, research and document work. It lets users upload materials -- PDFs, articles, spreadsheets, images, links, legal documents, lectures -- and build a structured knowledge base.

The service launched in 2023, and its AI capabilities have expanded steadily since. In early 2025 the Video Overviews feature arrived, turning complex multimedia material into clear visual presentations.

In May, NotebookLM became available on Android and iOS.

Google is pushing ahead with a "brain" for robotics.

Its DeepMind unit unveiled SIMA 2, a new generation of its generalist AI agent. The system "goes beyond simply following instructions" and begins to understand and interact with its environment.

The first SIMA was trained on hundreds of hours of gameplay footage to learn to play various 3D games like a human. Introduced in March 2024, it could execute basic commands across virtual worlds, but managed complex tasks only 31% of the time.

SIMA 2 draws on Gemini's language and reasoning capabilities and runs on Gemini 2.5 Flash-Lite. Its task success rate has risen to 65%, up from SIMA 1's 31%.

"SIMA 2 is a qualitative leap over SIMA 1. It is a more generalist agent. It can perform complex tasks in a new environment it has not seen before," said DeepMind senior research scientist Joe Marino at a press briefing.

The assistant can teach itself -- improving its skills based on experience. That is a step towards more general robots and systems, Marino noted.

Researchers at Google's AI division stressed that work on so-called embodied agents is critical to developing general intelligence. Such an assistant must be able to interact with the physical and virtual worlds through a body -- like a person or a robot.

A disembodied assistant, by contrast, can only manage a calendar, take notes or execute code, Marino explained.

Senior research scientist Jane Wang, who has a neurobiology background, stressed that SIMA 2 goes far beyond ordinary game behaviour.

"We require it to truly understand what is happening, what it is being asked to do, and to respond sensibly and meaningfully. That is actually quite difficult," she said.

Integrating Gemini allowed SIMA 2 to roughly double its predecessor's performance. The model combines advanced language and analytical abilities with the embodied interaction skills acquired during training.

Marino demonstrated SIMA 2 in No Man's Sky. The agent described its surroundings -- a planet's rocky surface -- and determined its next actions, using Gemini for internal reasoning.

In another game the assistant was asked to approach a house the colour of a ripe tomato. The AI showed its chain of thought: "it is red, so I should go to the house of the corresponding colour." It then moved in the right direction.

Thanks to Gemini, the agent can even understand emoji instructions. The command "🪓🌲" will make it chop a tree.

SIMA 2 navigates photorealistic worlds generated with Genie. It correctly recognises objects such as benches, trees and butterflies, and can interact with them.

With Gemini, the new SIMA can self-learn with minimal human input, using provided data only as a basic guide.

The team drops the agent into a new environment, and a separate model generates tasks for it.

SIMA 2 analyses its shortcomings and gradually improves its skills. In essence, it is trial-and-error learning without a human in the loop: another AI system acts as the tutor.
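The loop described above, in which a separate "tutor" model proposes tasks and the agent improves from its own graded attempts, can be sketched roughly like this. Everything here is illustrative: the class names, the task list and the update rule are assumptions for the sketch, not DeepMind's actual API or training method.

```python
import random

class TutorModel:
    """Stand-in for the separate model that generates and grades tasks."""
    TASKS = ["chop a tree", "find a red house", "walk to a bench"]

    def propose_task(self):
        return random.choice(self.TASKS)

    def grade(self, task, outcome):
        # In the real system another model judges success;
        # here we simply pass the outcome through.
        return outcome

class Agent:
    """Toy agent whose per-task success probability grows with experience."""
    def __init__(self):
        self.skill = {}  # task -> estimated success probability

    def attempt(self, task):
        p = self.skill.get(task, 0.1)
        return random.random() < p

    def learn(self, task, success):
        # Trial-and-error update: every graded attempt nudges the policy.
        p = self.skill.get(task, 0.1)
        self.skill[task] = min(1.0, p + (0.1 if success else 0.01))

def self_improvement_loop(agent, tutor, steps=1000):
    for _ in range(steps):
        task = tutor.propose_task()           # tutor model sets the task
        outcome = agent.attempt(task)         # agent tries it in the environment
        success = tutor.grade(task, outcome)  # tutor judges the attempt
        agent.learn(task, success)            # agent updates from experience
    return agent

agent = self_improvement_loop(Agent(), TutorModel())
```

The point of the sketch is the division of labour: no human appears anywhere in the loop; one model sets and grades the work, and the other learns from the feedback.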

DeepMind sees the new system as a step towards truly general-purpose robots.

"A system for performing tasks in the real world needs two key elements: a high-level understanding of the world and the ability to reason," noted senior research engineer Frédéric Besse.

If someone asks a humanoid robot to check how many cans of beans are left in a cupboard, it needs to understand what beans and a cupboard are, and be able to get to the right place.

SIMA 2 addresses precisely this "high level of behaviour", Besse said.

There is no timeline yet for integrating the new system into physical robots.

Another area that interests the search giant is AI-powered shopping: the company has released a set of new tools for online purchases.

"We believe the shopping process should not be so tedious. The idea is to keep all the enjoyable parts -- the browsing, the serendipity -- and remove the boring and hard steps," said Vidhya Srinivasan, vice president and head of ads and commerce at Google.

One update is conversational purchasing in AI Mode. Users can talk to Search like a chatbot; it will display product images and add details such as price, reviews and availability.

The Gemini app can now generate fleshed-out ideas and collections rather than short text tips for shopping queries. For now, the feature is available only in the United States.

Agentic checkout automatically monitors a product the user is interested in and can send a notification when its price drops.

"This is useful for shoppers -- they don't need to keep checking the price of an item. And it's useful for merchants -- customers will come back when otherwise they would have left," said Lillian Rincon, vice president of product at Google Shopping.
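At its core, a price tracker like this is a watch-and-notify check run on a schedule. A minimal sketch, assuming caller-supplied `fetch_price` and `notify` hooks (placeholders, not Google's API):

```python
def check_price_drop(product_id, target_price, fetch_price, notify):
    """Notify once the tracked product falls to or below the target price.

    fetch_price and notify are caller-supplied hooks; in a real service
    they would hit a product-data API and a push-notification channel.
    Returns True if a notification was sent.
    """
    price = fetch_price(product_id)
    if price <= target_price:
        notify(product_id, price)
        return True
    return False
```

A production service would run this check periodically per tracked product, which is exactly the tedium the feature removes from the shopper.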

Another new feature lets the AI call shops on a user's behalf to ask about stock and current promotions. It is built on the Google Duplex technology introduced in 2018, Shopping Graph and Google's payments infrastructure.

To use the tool, specify the desired product. The AI will call local shops, ask for details and deliver a short report.

In November, Google added message summaries, notification prioritisation and other AI features to Pixel smartphones.
