Google puts AI on everything at I/O

In addition to the new gadgets paraded on the catwalk of Google I/O yesterday, there was a slew of other improvements to Google’s ridiculously large product line, most of which involved AI enhancements.

The most attention-getting aspect of Google’s I/O announcement extravaganza yesterday was its lineup of new gadgets, including the Google Pixel Watch, all of which appear to signify Google’s plans to play a bigger role in the devices industry and build its own digital ecosystem.

The problem with introducing a thousand other things at the same time is that some of them get lost in the shuffle – so let’s have a look at some of the other Google developments that were either properly announced or waxed lyrical about on its blog.

Google Translate

Google Translate now supports 24 new languages, including its first indigenous languages of the Americas. This brings the total number of languages the software can translate to 133. The new languages were added using a technique known as Zero-Shot Machine Translation, in which a machine learning model only ever sees monolingual text – meaning it learns to translate into another language without ever seeing an example translation. That’s quite brilliant, isn’t it?
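For a concrete sense of what querying a single multilingual model looks like, here’s a minimal sketch using an open model (Meta’s NLLB-200 via the Hugging Face transformers library). It is purely illustrative – not the system behind Google Translate – and the Quechua language code is our own choice of example.

```python
# Illustrative only: an open multilingual model (NLLB-200), not Google's system.
# Shows what asking one shared model to translate into a low-resource language
# such as Quechua looks like in practice.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",   # English, Latin script
    tgt_lang="quy_Latn",   # Ayacucho Quechua, one of the language families Google added
)

print(translator("Good morning, how are you?")[0]["translation_text"])
```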

Google Maps

Thanks to breakthroughs in 3D mapping and machine learning, there’s a new feature in Maps called ‘immersive view,’ which is built by fusing billions of aerial and street-level photographs into a high-quality representation of a place. Los Angeles, London, New York, San Francisco, and Tokyo will be the first cities to get it.

It appears to be a video game rendition of a city that you can zoom about in to get a sense of what the region is like – or, as Google puts it: “Imagine you’re planning a trip to London and want to know the greatest spots to visit and dine. You may virtually fly over Westminster with a fast search to see the neighborhood and breathtaking architecture of attractions like Big Ben up close. You can use the time slider to view what the neighborhood looks like at different times of day and in different weather conditions, as well as see where the busy locations are, with Google Maps’ helpful information placed on top.”

It also launched eco-friendly routing, which appears to do exactly what it says. Given the rising cost of gasoline, this could be one of the more practical parts of the show.

YouTube chapters

Last year, Google introduced auto-generated chapters for YouTube videos, which are meant to make it quicker to jump to the parts of a video that interest you the most. Thanks to some DeepMind multimodal technology, YouTube can now supposedly combine text, audio, and video to auto-generate chapters better and faster. Meanwhile, speech recognition models are being used to transcribe videos and automate translation into several languages.
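As a rough illustration of the transcription side of that (not YouTube’s actual pipeline – the model and file name here are our own placeholders), this is what running an open speech recognition model over an audio clip looks like:

```python
# Illustrative sketch only: an open speech recognition model, not YouTube's
# internal system. Transcribes a local audio file into text that could then
# be fed to a translation or chaptering step.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# "interview_clip.wav" is a placeholder path for any short audio file.
result = asr("interview_clip.wav")
print(result["text"])
```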

Google Workspace

Google has also pointed its huge AI brains at Google Docs, which can now generate automatic summaries of long documents that you don’t want to read, using various natural language processing tricks. The feature, dubbed TL;DR, will also be rolled out to Google Chat, which means you’ll be able to get executive summaries of long-winded conversations with friends – which seems a little strange, but hey, we’re all busy.
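To give a flavour of what automatic summarisation looks like in code – Google hasn’t published the model behind the Docs feature, so this sketch leans on an open summarisation model instead:

```python
# Illustrative sketch: an open summarisation model standing in for whatever
# Google actually runs behind the Docs/Chat summaries.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

long_document = (
    "Google I/O featured a long list of announcements, ranging from new "
    "Pixel hardware to AI upgrades across Translate, Maps, YouTube, "
    "Workspace and Assistant, plus a preview of Android 13 and LaMDA 2."
)

summary = summarizer(long_document, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```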

Google Assistant

Look and Talk – which basically means you can look at the screen of a Nest Hub Max and the device will know it’s you, so you can bark your orders without having to say ‘Ok Google’ first – was added to Google’s AI voice assistant. What an incredible time to be alive.

AI Test Kitchen and LaMDA 2

Google bills LaMDA 2 as its “most advanced conversational AI ever.” It can be used for a variety of purposes, and it was released alongside something called AI Test Kitchen, which is essentially a way for people outside Google to try out these conversational AI capabilities and give feedback.

“Say you’re writing a story and need some inspired ideas,” runs one example. “Maybe one of your characters is diving into the depths of the ocean. You can ask what that might feel like. Here LaMDA describes a scene in the Mariana Trench. It even generates follow-up questions on the fly. You might ask LaMDA to conjure up a list of possible inhabitants. Remember that the model was not hand-programmed for specific topics such as submarines or bioluminescence. It synthesized these concepts from its training data. That’s why you can ask about almost anything, from Saturn’s rings to being on a planet made of ice cream.”

It’s a little difficult to pin down exactly what’s being described here and how useful it will be, but it’s all probably good stuff for dedicated programmers. Perhaps there are people out there who have been waiting for an AI tool to describe an ice cream planet to them.

Machine learning hub

Google stated that its data center in Mayes County, Oklahoma will host the world’s largest publicly available machine learning hub for Google Cloud customers. What’s going on under the hood? Eight Cloud TPU v4 pods, which are reportedly custom-built on the same networking infrastructure that powers Google’s most powerful neural models. This provides a total of nine exaflops of processing capacity, allowing complicated models and workloads to be run. Woof.
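As a sanity check on that nine-exaflops figure, here’s a back-of-the-envelope calculation using publicly quoted TPU v4 numbers (roughly 275 peak bfloat16 teraflops per chip and 4,096 chips per pod – our assumptions, not figures from Google’s announcement):

```python
# Back-of-the-envelope check of the "nine exaflops" claim, using publicly
# quoted TPU v4 figures (assumed here, not taken from Google's post):
#   ~275 teraflops peak bf16 per chip, 4,096 chips per pod, 8 pods in the hub.
PEAK_TFLOPS_PER_CHIP = 275        # teraflops (bfloat16), approximate
CHIPS_PER_POD = 4096
PODS = 8

total_exaflops = PEAK_TFLOPS_PER_CHIP * CHIPS_PER_POD * PODS / 1_000_000
print(f"Aggregate peak compute: ~{total_exaflops:.1f} exaflops")  # ~9.0 exaflops
```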

Android 13

The focus was on safety and privacy. Rich Communication Services (RCS) has replaced SMS as the new standard for text messaging. This brings end-to-end encryption as well as a slew of other communication features that SMS lacks but apps like WhatsApp already offer.

Google Wallet will be available on the upcoming Pixel Watch, and you’ll soon be able to keep items like driver’s licenses, hotel keys, and office pass cards on it, according to Google.

There was also a focus on interoperability – making tablets, phones, and watches on Google’s platform work better together and share files, for example. This is clearly a significant element of its aim to create a true gadget ecosystem, and there’s an interesting emphasis on expanding the umbrella to include other manufacturers: “Since the launch of our unified platform with Samsung last year, there have been over three times as many active Wear OS devices.” Samsung, Fossil Group, Montblanc, Mobvoi, and others will begin to release Wear OS-powered products later this year. For the first time, Google Assistant will be available on Samsung Galaxy watches, starting with the Galaxy Watch4 series.

And there you have it: the best of Google I/O 2022. Other announcements were made, but these appear to be the most important. As a non-AI-generated TL;DR, the themes were clearly around cementing Android as a platform beyond your phone, strengthening security, and using Google’s massive AI capabilities to tweak and improve everything across its enormously vast portfolio of products and services.

Read more: Google is creating its own ecosystem of devices

