Good morning and welcome to our live blog of Google’s “Live from Paris” event.
It’s another exciting day in the world of AI as Google prepares to counter Microsoft’s big Bing and Edge announcements from yesterday. This week’s scuffle reminds me of the big tech heavyweight battles of the early 2010s, when Microsoft and Google traded blows over mobile and desktop software.
But this is a new era, and the squared circle is now artificial intelligence and machine learning. Microsoft seems to think it can steal a march on Google Search – and against all odds, it may actually do so. I’ll reserve judgment until we see what Google announces today.
What exactly do we expect from Google today? Clearly search will be the main focus as we start to get more details on how Google’s conversational AI will be incorporated into search.
Any big change to the search engine would certainly be a huge deal, as Google has barely changed the outward-facing UI of the minimalist bar where most of us type without thinking.
But today we’re more likely to see small, incremental changes – Google has called Bard an “experimental” feature, and it’s only based on a “lightweight version” of its LaMDA AI technology (which stands for Language Model for Dialogue Applications, if you were wondering).
Like Microsoft’s new Bing, any Bard search integration will likely be presented as an optional extra rather than a replacement for the classic search bar – but even that would be huge news for a search engine that has 84% market share (at least for now).
From Google’s Twitter thread: “4/ When people turn to Google for deeper information and understanding, AI can help us get to the bottom of what they are looking for. We start with AI-powered search capabilities that transform complex information into easy-to-digest formats so you can see the full picture and then discover more.” pic.twitter.com/BxSsoTZsrp (February 6, 2023)
Only 15 minutes until the start of Google’s “Live from Paris” event. One of the main questions for me is how interactive Google’s conversational AI will be – with the new version of Microsoft Bing, chat answers can be progressively expanded with more detail.
This is a big change from traditional search because it means the first result can be the start of a longer conversation. Also, will Google cite its sources in the same way as the new Bing? Early screenshots were unclear on this, but we’ll find out more soon.
Just two minutes to go until Google’s live stream starts. It’s unclear why Paris was chosen as the location for the event – but perhaps it has something to do with those rumored Maps features…
Senior Vice President Prabhakar Raghavan speaks on stage about “the next frontier of our information products and how AI is driving that future.”
He points out that Google Lens goes “beyond the traditional notion of search” to help you shop, letting you place a virtual version of a chair you want to buy in your living room. But, as he says, “search is never solved” and remains Google’s moonshot.
The message right now: Google has been using AI technology in its products for quite some time.
A billion people use Google Translate. Google says many Ukrainian refugees have used it to help them navigate their new surroundings.
A new technique called “zero-shot machine translation” learns to translate into another language without needing traditional paired training examples. Google has added 24 new languages to Translate using this method.
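Google hasn’t published the models behind this, but the basic pattern – one shared multilingual model that you simply point at a source and target language – can be sketched with an open model. Here’s a minimal illustration using Hugging Face’s M2M100; the model choice and language pair are my assumptions, not Google’s setup:

```python
# Minimal sketch of many-to-many translation with a single shared model,
# using the open M2M100 model -- an illustrative stand-in, not Google's system.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

def translate(text: str, src: str, tgt: str) -> str:
    tokenizer.src_lang = src                   # declare the source language
    encoded = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **encoded,
        # Forcing the first token steers generation into the target language
        forced_bos_token_id=tokenizer.get_lang_id(tgt),
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

# English to Ukrainian, echoing the Translate use case mentioned above
print(translate("Where is the nearest train station?", src="en", tgt="uk"))
```

Because every language shares one model, translation directions that never appeared together in training can still work – that’s the “zero-shot” part.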
Google Lens has also reached an important milestone – people now use it more than 10 billion times a month, so it’s no longer a novelty. And it keeps growing: in the coming months, Lens users will be able to search what’s on their phone screen.
For example, you’ll be able to long-press the power button on your Android phone and then search a photo on your screen. As Google says, “if you see it, you can search it.”
Multisearch also lets you find real-world objects – a shirt or a chair, for example – in different colors, and it’s rolling out worldwide for any image in search results.
Okay, let’s move on to “large language models” like LaMDA. That’s the technology behind Google’s new “Bard” AI chat service, which it calls “experimental.”
As previously announced, Google is releasing a lightweight LaMDA model to “trusted testers” this week. There’s no word yet on when it will be released to the public other than the “coming weeks” that Google mentioned earlier this week.
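LaMDA itself isn’t publicly available, so to make the idea concrete, here’s a toy prompt-based dialogue loop with a small open model via Hugging Face’s transformers library – the model and prompt format are stand-ins chosen for illustration, not anything Google has described:

```python
# Toy dialogue loop with a small open language model (GPT-2 as a stand-in).
# A production chatbot like Bard adds safety filtering, grounding, and a far
# larger model -- this only shows the bare conversational pattern.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

history = "The following is a conversation with a helpful assistant.\n"
for user_turn in [
    "What constellations are best to look for when stargazing?",
    "When is the best time of year to see them?",
]:
    history += f"User: {user_turn}\nAssistant:"
    output = generator(history, max_new_tokens=40, do_sample=True)[0]["generated_text"]
    answer = output[len(history):].split("User:")[0].strip()  # keep only the new reply
    print(f"Q: {user_turn}\nA: {answer}\n")
    history += f" {answer}\n"
```

Note how each answer is appended to the running history – that’s what lets the second question build on the first, the “longer conversation” behavior mentioned earlier.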
Generative AI is coming to Google Search, and Google is giving more examples of how it will work.
For example, you’ll be able to ask “what constellations are best to look for when stargazing,” and then drill down to the time of year when they’re best seen. All very similar to Microsoft’s new Bing Chat.
Google also talks about generative imagery, which can create 3D images from still photos. The company says we could use it to design products or find the perfect pocket square for a new jacket. There are no details yet on how this will roll out, though.
Developers will also receive a large set of tools and APIs for building AI-powered applications.
Google vice president and general manager Chris Phillips is now on stage talking about Google Maps. “We’re changing Google Maps again,” he says.
Now we’re getting a demonstration of the very impressive “Immersive View” we’ve seen before. It uses AI to stitch together billions of Street View photos into Superman-style flyovers of major landmarks and even peeks inside restaurants.
Good news: Immersive View is finally rolling out in several cities, including London, Los Angeles, New York, and San Francisco, with more cities to follow over the next few months. Now we’re looking at “Live View Search”…
Google Maps “Live View Search” combines AI with AR to help you visually find things nearby, such as restaurants, ATMs, and transportation hubs, by looking through your phone’s camera.
It’s already available in five cities on Android and iOS, with Barcelona, Dublin, and Madrid to follow “in the coming months.” Now we get the outdoor demo – you tap the camera icon in Google Maps to see real places superimposed on the camera view, including ones that are out of sight.
You can see whether they’re currently busy and how highly they’re rated. It’s not a completely new feature, but it’s definitely a useful one that’s now rolling out on a wider scale.
Google is now talking about the lesser-known “Indoor Live View” for airports, train stations, and shopping malls. In a few select cities and locations, this uses AR arrows to show you where things like elevators and baggage claim are located – damn handy, though a bit limited at the moment.
Thankfully, Google is rolling it out more widely with its “largest expansion of Indoor Live View yet”, bringing it to 1,000 new locations in airports, train stations, and shopping centers in London, Tokyo, and Paris “over the coming months.”
Good news for electric car owners: Google is also rolling out new Maps features for EV drivers to make sure you have enough charge.
“To eliminate range anxiety, we’ll use AI to suggest the best place to recharge, whether you’re going on a trip or just running errands nearby.”
It takes into account traffic, your current charge level, and expected energy consumption en route. It will also show stops with “very fast” chargers for quick top-ups.
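Google hasn’t said how its suggestion logic works, but you can picture it as a simple scoring problem over the factors it lists. Here’s a deliberately simplified sketch – every name, number, and weight is hypothetical:

```python
# Toy sketch of ranking EV charging stops by the factors Google mentions
# (traffic detour, charger speed, reachability on current charge).
# All values and weights are made up for illustration.
from dataclasses import dataclass

@dataclass
class ChargingStop:
    name: str
    detour_minutes: float   # extra travel time with current traffic
    charger_kw: float       # charger power; higher means faster top-ups
    distance_km: float      # driving distance from current position

def reachable(stop: ChargingStop, battery_kwh: float, kwh_per_km: float) -> bool:
    """Can we reach this stop on the remaining charge?"""
    return stop.distance_km * kwh_per_km <= battery_kwh

def score(stop: ChargingStop) -> float:
    """Lower is better: prefer small detours and fast chargers."""
    return stop.detour_minutes + 60.0 / stop.charger_kw

stops = [
    ChargingStop("Station A", detour_minutes=4, charger_kw=150, distance_km=30),
    ChargingStop("Station B", detour_minutes=1, charger_kw=50, distance_km=55),
]
battery_kwh, kwh_per_km = 10.0, 0.18   # remaining charge and consumption rate
candidates = [s for s in stops if reachable(s, battery_kwh, kwh_per_km)]
best = min(candidates, key=score)
print(f"Suggested stop: {best.name}")
```

The real system presumably folds in live traffic, charger availability, and learned consumption models, but the shape of the problem – filter to reachable stops, then rank them – is likely the same.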
Hmm, it looks like the live stream has gone down – maybe Microsoft’s Clippy has been up to some nefarious tricks. Hopefully it’ll be back soon.
There were a few more minor announcements to wrap up Google’s presentation, including the arrival of Blob Opera.
Google also announced a suite of new AI-powered Arts and Culture features. Among the new tools is the ability to search – and study in detail – famous paintings by thousands of artists.
The company says it is making a conscious effort to preserve endangered languages by using AI and Google Lens to intuitively translate words for popular household items.
That’s all from Google for today – the livestream plug was pulled somewhat abruptly to mark the end of the event.
Overall, it was pretty disappointing, and felt like a defensive ploy to deflate Microsoft’s AI hype balloon. While there were a few mini announcements – the Indoor Live View expansion for Google Maps, the “search your screen” feature for Google Lens, and the Google Bard search demo – there was certainly nothing on the scale of Microsoft’s new chat feature for Bing and Edge.
Google also slightly undercut its own AI event by hastily previewing Google Bard a few days earlier – and we haven’t really learned anything new about how it will perform when it goes public in the “coming weeks”.
Alright, we’re off to see if Google Maps’ Immersive View is up and running in London – thanks for tuning in, and keep an eye out for fresh updates here as we get more official post-event info from Google.
It’s a day after Google’s somewhat disappointing AI event in Paris – and it seems AI chatbots aren’t quite ready for prime time in search. Reuters noticed that one of Bard’s answers in Google’s demo was inaccurate.
When asked to provide some examples of discoveries made by the James Webb Space Telescope, the AI chatbot claimed that JWST “took the very first pictures of a planet outside of our own solar system” – an honor that actually belongs to the European Southern Observatory’s Very Large Telescope, which imaged an exoplanet back in 2004.
A Google spokesperson acknowledged that “this highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester program,” adding that the aim is for Bard’s answers to meet a high bar for quality, safety, and groundedness in real-world information.
Apparently, Microsoft forced Google to go public with Bard a little earlier than it would have liked. Google has stressed that the AI chatbot is “experimental”, with a disclaimer under the search box warning that “Bard may provide inaccurate or inappropriate information.”
Still, none of that has put us off – we can’t wait to take Bard for a spin when it’s available “in the coming weeks.”