
Google launched two new machine learning APIs into open beta  

The API listens to people talking and processes the information to do different things. Source: Daimto

Google announced an aggressive expansion of its cloud platform on July 20, 2016. Two new machine learning APIs that can parse human language and process it in varied ways entered open beta that day. The company also announced an extensive expansion of its cloud service coverage in the United States, something gamers on the West Coast should appreciate.

The first piece of software was inspired by the company’s Parsey McParseface, a very successful parser built on the SyntaxNet framework that understands the structure of language. The software analyzes a sentence and finds its root, which is usually the verb.

Then, the software identifies every other element of the sentence based on its relationship to the root. Once it has collected all that data, the software can give an accurate description of the text’s grammatical structure; in fact, Google says it is the most accurate parser on the market.

Cloud Natural Language is analyzing client complaints

The evolution of that framework is Cloud Natural Language (CNL), and it takes machine understanding of human language to a whole new level. CNL does more than understand the structure of a text: it is also designed to recognize the sentiment behind a paragraph while identifying proper names, whether they belong to people or companies. In other words, this piece of software can grasp what the author meant to say and can tell names apart from common nouns.
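In practice, CNL is called over REST with a simple JSON document, and the same body is shared by the sentiment and entity methods. A minimal sketch of building that request, assuming the beta-era `v1beta1` endpoint and field names from the launch documentation:

```python
import json

# Beta endpoint at launch; the document body is POSTed to
# .../documents:analyzeSentiment or .../documents:analyzeEntities.
API_BASE = "https://language.googleapis.com/v1beta1"

def build_document_request(text, language="en"):
    """Build the JSON body shared by the analyze* methods."""
    return {
        "document": {
            "type": "PLAIN_TEXT",   # or HTML
            "language": language,
            "content": text,
        },
        # How returned character offsets are computed.
        "encodingType": "UTF8",
    }

body = build_document_request("The delivery was late and the box was damaged.")
print(json.dumps(body, indent=2))
```

The sentiment response scores the text’s polarity while the entity response lists the proper names found, which is how a pile of customer reviews can be sorted into trends without anyone reading them one by one.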

Google stated in the announcement that this tool has many uses. One of them would be combing through thousands and thousands of customer support reviews to identify trends. Ocado Technology is already running CNL over the client feedback database of its online supermarket. The British company said the API has made it simpler to process and understand natural language in huge blocks of data, and that it wasn’t too difficult to plug into the systems it already had in place.

Computers will also be able to understand human speech

Human communication goes far beyond written text. Throughout history, our species developed a sophisticated method of verbal communication. Now it is time for computers to learn it too, and to that end, Google also released Cloud Speech.

The Cloud Speech alpha test drew even more interest than CNL, with more than 5,000 companies in the queue to try it. The API listens to people talking and processes the information to do different things. The most common is turning audio input into written text, yes, like a dictation machine. However, the API supports more than 80 languages, and HyperConnect is giving it a sci-fi use.

The company’s ultimate goal is to let people talk to each other regardless of time, language, and distance. Its app, Azar, has reported more than 50 million downloads in more than 200 countries. The API can, for example, listen to an English speaker, translate the information into Spanish, and text the result to the person at the other end of the call. Another company, VoiceBase, is using the API to predict the outcome of recorded calls.

The API can easily be improved with hint words: developers can add a custom list of commands to enhance its listening capabilities. For example, a designer can set a smart TV to pay particular attention to commands such as “Pause” and “Play” while a movie is playing.
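Attaching those hint words is just one more field on the recognition config. A sketch, assuming the beta-era `speechContext` field with a `phrases` list (the smart-TV command list is a made-up example):

```python
def with_phrase_hints(config, phrases):
    """Return a copy of a recognition config with hint phrases attached."""
    hinted = dict(config)  # shallow copy; the caller's config is untouched
    hinted["speechContext"] = {"phrases": list(phrases)}
    return hinted

base = {"encoding": "LINEAR16", "sampleRate": 16000, "languageCode": "en-US"}

# Bias the recognizer toward the commands a TV remote actually expects.
tv_config = with_phrase_hints(base, ["Pause", "Play", "Rewind"])
```

The hints don’t restrict the recognizer to those words; they only raise the odds that a short, ambiguous utterance is transcribed as one of them.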

Google also expanded its cloud reach to the US West Coast

The tech giant also announced a new region for cloud customers. The new Oregon Cloud Region will offer three primary services: Google Compute Engine, Google Cloud Storage, and Google Container Engine. Also known as us-west1, the new zone will significantly reduce latency in many areas. In other words, gamers in Vancouver, Seattle, Portland, San Francisco, and Los Angeles can expect 30-80% lower latency.

The director of Multiplay Game Services, Paul Manuel, confirmed the gaming company will be one of the first to work in the new Oregon Region. Multiplay hosts game servers, holds gaming events, and offers many other services to giants such as Ubisoft, Activision, and Valve. He also praised the logistics and customer support the company receives from Google.

Google’s cloud platform project is expanding aggressively, and it will reach Tokyo, the gaming capital of the world, later this year. Also, the company confirmed ten new locations for 2017.

Developers who want to give these two new APIs a try can visit Google’s website to check availability and pricing. The news was published on the company’s blog by Apoorv Saxena, Cloud Natural Language and Translate API Product Manager; Dan Aharon, Cloud Speech API Product Manager; and Dave Stiver, Product Manager, Google Cloud Platform.

Source: Cloud Platform

Hector Morales: