Google I/O 2024: everything revealed, including Gemini AI, Android 15 and more

At the end of I/O, Google’s annual developer conference at Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company said “AI” 121 times. This was, essentially, the gist of Google’s two-hour keynote — bringing AI into all of Google’s apps and services used by more than two billion people around the world. Here are all the major updates from Google’s big event, plus some additional announcements that came after the keynote.

Gemini 1.5 Flash

Google has announced an all-new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini Nano, the company’s smallest model, which runs natively on devices. Google said it created Flash because developers wanted a lighter, less expensive model than Gemini Pro for building AI-powered apps and services, while retaining some of what sets Gemini Pro apart from competing models, like its million-token context window. Later this year, Google will double Gemini’s context window to 2 million tokens, meaning it will be able to process two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.
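Since Flash is pitched primarily at developers, here is a minimal sketch of what calling it through Google’s Python SDK (the google-generativeai package) might look like. The model identifiers and method names reflect the SDK as documented around I/O 2024 and could change; the environment-variable name and prompt are placeholders.

```python
# Minimal sketch: calling Gemini 1.5 Flash via Google's google-generativeai
# Python SDK. Names reflect the SDK as documented around I/O 2024.
import os

import google.generativeai as genai

# Assumes an API key from Google AI Studio is stored in GOOGLE_API_KEY.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# "gemini-1.5-flash" targets the lighter, cheaper model described above;
# "gemini-1.5-pro" would select the larger long-context model instead.
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "Summarize the key announcements from a two-hour developer keynote."
)
print(response.text)
```

In this sketch, swapping the model string is the only change needed to trade Flash’s speed and cost for Pro’s larger context window.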

Project Astra

Google showed off Project Astra, an early version of a universal AI-powered assistant that Demis Hassabis, CEO of Google DeepMind, said is Google’s version of an AI agent “that could be useful in everyday life.”

In a video that Google says was filmed in a single take, an Astra user moves around Google’s London office holding up their phone and pointing the camera at different things — a speaker, some code on a whiteboard, the view out a window — and has a natural conversation with the app about what it sees. In one of the most impressive moments of the video, it correctly tells the user where they left their glasses earlier, without the user ever having brought the glasses up.

The video ends with a surprise: when the user finds the missing glasses and puts them on, we learn that they have an onboard camera system and can use Project Astra to seamlessly carry on the conversation with the user, which could suggest that Google is working on a competitor to Meta’s Ray-Ban smart glasses.

Ask Photos

Google Photos was already smart when it came to searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you’ll be able to ask Google Photos a complex question like “show me the best photo from each national park I’ve visited” when the feature rolls out over the next few months. Google Photos will use GPS information as well as its own judgment of what is “best” to present you with options. You can also ask Google Photos to generate captions for posting the photos to social media.

Veo and Imagen 3

Google’s new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora. Google said it can produce “high-quality” 1080p videos that can last “more than a minute,” and it can understand cinematic concepts like a time-lapse.

Meanwhile, Imagen 3 is a text-to-image generator that Google claims handles text better than its predecessor, Imagen 2. The result is the company’s highest-quality text-to-image model, with an “amazing level of detail” for “realistic, lifelike images” and fewer artifacts — which essentially pits it against OpenAI’s DALL-E 3.

Google Search

Google is making big changes to how Search fundamentally works. Most of the updates announced today, like the ability to ask really complex questions (“Find the best yoga or Pilates studios in Boston and view details on their offerings and walk times from Beacon Hill”) and using Search to plan meals and vacations, won’t be available unless you sign up for Search Labs, the company’s platform that lets people try out beta features.

But the big new feature, which Google calls AI Overviews and which the company has been testing for a year now, is finally rolling out to millions of people in the United States. Google Search will now present AI-generated answers at the top of results by default, and the company says it will make the feature available to more than a billion users around the world by the end of the year.

Gemini on Android

Google is integrating Gemini directly into Android. When Android 15 comes out later this year, Gemini will be able to know what app, photo, or video you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn’t bring it up at all during today’s keynote.

Google isn’t quite ready to roll out the latest version of its smartwatch operating system, but it is promising some major improvements to battery life when it arrives. The company said Wear OS 5 will consume 20 percent less power than Wear OS 4 if a user runs a marathon. Wear OS 4 already brought battery-life improvements to the smartwatches that support it, but it could still be a lot better at managing a device’s power. Google also gave developers new guidance on how to conserve power and battery so they can create more efficient apps.

The Android 15 developer preview may have been going on for months, but there are still more features coming. Theft Detection Lock is a new Android 15 feature that will use AI (there it is again) to predict phone theft and lock things down accordingly. Google says its algorithms can detect motions associated with theft, such as grabbing the phone and walking, biking, or driving away. If an Android 15 handset picks up on one of these situations, the phone’s screen will quickly lock, making it much harder for a phone snatcher to access your data.
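Google hasn’t said exactly how the detection works, but the general shape of such a feature can be sketched: classify a short window of motion-sensor readings and lock the screen when the pattern resembles a grab-and-run. The Python below is a deliberately crude, hypothetical heuristic; the thresholds, types, and lock_screen callback are all invented for illustration, and it is not Android’s actual on-device model.

```python
# Purely illustrative sketch of the idea behind a theft-detection lock:
# flag a sudden jerk followed by sustained fast movement, then lock.
# This is NOT Google's implementation, which runs on-device in Android 15.
from dataclasses import dataclass


@dataclass
class MotionSample:
    accel_magnitude: float  # m/s^2, magnitude of acceleration
    speed: float            # m/s, estimated from fused sensors


def looks_like_theft(window: list[MotionSample]) -> bool:
    """Crude heuristic: a sharp initial jerk, then running-pace movement."""
    if len(window) < 10:
        return False
    jerk = max(s.accel_magnitude for s in window[:5]) > 25.0      # hypothetical threshold
    fleeing = sum(s.speed for s in window[5:]) / len(window[5:]) > 3.0  # roughly running pace
    return jerk and fleeing


def on_motion_window(window: list[MotionSample], lock_screen) -> None:
    # In the real feature both the decision and the lock happen on-device.
    if looks_like_theft(window):
        lock_screen()
```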

There were a bunch of other updates as well. Google said it will add digital watermarks to AI-generated video and text, make Gemini accessible in the side panel in Gmail and Docs, power an AI-driven virtual teammate in Workspace, listen in on phone calls and detect in real time whether you’re being scammed, and much more.

Follow all the news from Google I/O 2024 live here!

Updated May 15, 2:45 PM ET: This story was updated after publication to include details about the new Android 15 and Wear OS 5 announcements released after the I/O 2024 keynote.