What's new at the Google I/O 2018 Developer Conference? May 10, 2018

On the morning of May 9, the three-day Google I/O developer conference officially opened. As Google's most important annual launch event, Google I/O had a lot to offer: in addition to the official unveiling of the next-generation Android P, Google also went into depth on many new features. For consumers of smart products, this event was well worth following.

AI to Change the World

Unexpectedly, what Google CEO Sundar Pichai opened with on stage was not the much-anticipated Android P, but Google's AI strategy.

Pichai said AI technology can "change the world" and help Google better serve society. For example, he noted that AI lets doctors find the cause of disease by examining the retina and even predict risks a patient may face, underscoring how important AI technology is to the world.

Pichai then introduced the audience to the AI-based voice separation and recognition feature Google is currently researching. In simple terms, it can intelligently separate the speech of different speakers in a video and add captions for each of them.

Google is also extending its AI capabilities to accessibility: future versions of Gboard will add Morse code input, so people with motor disabilities can more easily type the text they want.

Meanwhile, future versions of Gmail will be able to automatically complete simple sentences, another application of AI. The popular Google Photos app will also gain AI features: it will automatically recognize the people in a photo and offer sharing suggestions, brighten underexposed photos, convert photos of documents into PDFs, adjust colors in selected areas of a photo, colorize black-and-white photos, and more. The functionality is very powerful.

Google also launched TPU 3.0, its AI training hardware platform; customers can use these chips to train AI models more efficiently and make them smarter.

A Reborn Google Assistant

The highlight of the AI section was, naturally, Google Assistant. Google spent a great deal of time introducing the evolved Assistant, which, in short, is smarter, more human-like, and closer to everyday life.

First, Google Assistant is adding six new voices that sound closer to real people. Google also shared a figure: Assistant now covers 80 countries and regions worldwide and supports about 30 languages (regrettably, Chinese is not yet among them). To handle the accents of users of different ages and languages around the world, Google has also improved speech recognition, so even elderly users who speak less clearly can use Assistant with ease.

With the help of AI, Assistant's multi-step semantic understanding and task execution have improved significantly. For example, the new Assistant can recognize that a single request actually contains two questions with several named items, parse them, and carry out each part in turn.

Most surprising of all, Google Assistant can even make appointments for you over the phone. The live demonstration showed the Assistant simulating a human voice and tone almost perfectly while talking to the staff on the other end of the call, and it could even respond naturally to unexpected questions. The effect was striking.

If Google Assistant can deliver this in real-world use, it is arguably the AI product that currently comes closest to a real human.

Google then introduced a new Google News application. The brand-new app is also built on AI: it learns from large amounts of data to surface the news each user is most interested in. It can also intelligently present the background and related coverage of the story being read, giving users a better reading experience.

A New, AI-Based Operating System

After the series of AI-related features, the new Android system was finally unveiled.

Android is now ten years old, and Android P brings many improvements and changes over previous versions. Its three themes are intelligence, simplicity, and digital wellbeing, meaning that in addition to adding AI elements, Android P provides new interaction logic and adjusts itself to everyday life.

The "smart" features of Android P are mainly reflected in the use of AI technology to make more use of user habits. For example, it will use AI for data analysis, self-determine the use of the user's App, and make targeted optimization to improve the battery life.

Android P also introduces a new approach to brightness adjustment. Rather than relying on a new sensor, it uses AI to learn from the user's habits, collect and analyze usage data, and set screen brightness on its own.

Android P also adds a new design element called Slices. Combined with Android P's app prediction, Slices can show the user the actions and content the AI predicts they will want next. Google also introduced ML Kit, a system-level development kit that lets developers quickly add AI features to their apps, as sketched below.
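
For a concrete sense of how ML Kit is used, below is a minimal Kotlin sketch of on-device text recognition, assuming the Firebase ML Kit Vision APIs in the form they were introduced at I/O 2018; the bitmap source and log tag are illustrative placeholders rather than anything shown in the keynote.

    import android.graphics.Bitmap
    import android.util.Log
    import com.google.firebase.ml.vision.FirebaseVision
    import com.google.firebase.ml.vision.common.FirebaseVisionImage

    fun recognizeText(bitmap: Bitmap) {
        // Wrap the input image and obtain the on-device text recognizer.
        val image = FirebaseVisionImage.fromBitmap(bitmap)
        val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

        // Recognition runs asynchronously and reports results via Task callbacks.
        recognizer.processImage(image)
            .addOnSuccessListener { result ->
                result.textBlocks.forEach { block -> Log.d("MLKitDemo", block.text) }
            }
            .addOnFailureListener { e -> Log.e("MLKitDemo", "Text recognition failed", e) }
    }

The same ML Kit surface also exposes ready-made detectors for faces, barcodes, image labeling, and landmarks, with the option to fall back to cloud processing for higher accuracy.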

Google then officially introduced Android P's new navigation model: gestures. The familiar three navigation buttons are reduced, in the normal state, to a single Home button, whose shape has also changed. Briefly, swiping up from the Home button once brings up the multitasking view with a row of suggested apps at the bottom; swiping up again opens the app drawer.

Sliding the Home button sideways jumps straight into the multitasking view. There, you can scrub through the app cards with the pill-shaped button at the bottom, check background apps, and even copy text directly from the content shown in a background card.

The volume controls and screen-rotation behavior have also been adjusted to reduce accidental operations.

The third theme is how Android P can improve users' digital wellbeing. Android P adds a feature called Dashboard, essentially an enhanced app usage statistics screen: users can see how much time they spend in each app and how often they open it, and adjust their habits accordingly (a sketch of the kind of per-app usage data involved follows below).
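
As an illustration of the data Dashboard works with, here is a minimal Kotlin sketch that queries per-app foreground time through Android's long-standing UsageStatsManager API. This is not the Dashboard feature itself, only the kind of usage statistics it surfaces, and it assumes the user has granted the app usage-access permission in system settings.

    import android.app.usage.UsageStatsManager
    import android.content.Context
    import java.util.concurrent.TimeUnit

    // Returns a map of package name -> foreground time (ms) over the last 24 hours.
    fun dailyForegroundTime(context: Context): Map<String, Long> {
        val usm = context.getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager
        val end = System.currentTimeMillis()
        val start = end - TimeUnit.DAYS.toMillis(1)

        // Query daily buckets and sum foreground time per package.
        return usm.queryUsageStats(UsageStatsManager.INTERVAL_DAILY, start, end)
            .groupBy { it.packageName }
            .mapValues { (_, stats) -> stats.sumOf { it.totalTimeInForeground } }
    }

Dashboard presents this sort of per-app time alongside unlock counts and notification counts, and lets the user set per-app time limits on top of it.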

In addition, Android P natively builds in features such as flip-to-silence Do Not Disturb, a night wind-down mode, and more.

Surprisingly, the first batch of devices eligible for the Android P Beta comes from several well-known handset makers: besides Google's own phones, Nokia, vivo, Xiaomi, Sony, Essential, and OPPO are among the first vendors to support the Android P Beta. The beta can be downloaded for hands-on feedback, but the specific supported models must be checked on Google's developer site.

That covers the main points of Android P; next came the new Google Maps and Google Lens.

The changes in Google Maps mainly add route-change prompts and AR walking navigation (the latter has already been implemented by many Chinese map apps), while Google Lens expands the range of objects it can recognize to include materials, styles, and text; previously, Google Lens mostly only recognized landmark buildings.

Starting next week, Google Lens will be supported on devices from several vendors.

Next, the CEO of Waymo, Google's self-driving business, took the stage to describe progress in autonomous driving. According to the Waymo CEO, Waymo's technology has cut the error rate in pedestrian detection by a factor of 100. Riding the momentum of AI, the system can perceive disturbances on the road in real time and dodge them, and it can also predict and track the movements of other vehicles to reduce the risk of traffic accidents.

Looking back over this opening keynote, it was essentially a speech built around AI: the biggest highlight was the evolution of Google Assistant, while Android P received only a brief introduction and was quickly passed over.

In this keynote, Google gave us a rough sense of how far it can go with AI technology, and the content did bring plenty of surprises. As for whether Google Assistant is really that useful, and what fun Android P has in store, we will have to wait until the products ship and users can judge for themselves.