Our Analysis of Google Home
Initially | G2: Make clear how well the system can do what it can do.
★★★★☆
While the Google Home is not always clear about what you can and can’t ask it to do, it is quite good at signaling how well it can handle a given request. If it doesn’t understand your question at all, it says so plainly, with an answer such as “sorry, I don’t quite understand”, and if it understands only part of your question, it tries to give you as relevant an answer as it can, even if that just means reading out the top search results for your query. This fallback can be annoying, though, since those search results are often unhelpful, which is why we docked it one point.
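To make that pattern concrete, here is a minimal, hypothetical sketch of a confidence-based fallback like the one described above. The thresholds, function names, and stub helpers are all assumptions made for illustration; this is not Google Home’s actual implementation.

```python
# A toy sketch of a confidence-based fallback policy for a voice assistant.
# Thresholds and helper functions are illustrative assumptions only.

def search_top_results(query: str) -> list[str]:
    """Stand-in for a web search; a real assistant would call a search API."""
    return [f"Top result for '{query}'", f"Another result for '{query}'"]

def answer_intent(intent: str) -> str:
    """Stand-in for a direct answer to a fully understood request."""
    return f"Here is the answer to your request about {intent}."

def respond(query: str, intent: str, confidence: float) -> str:
    if confidence < 0.3:
        # Low confidence: say so plainly instead of guessing.
        return "Sorry, I don't quite understand."
    if confidence < 0.7:
        # Partial understanding: fall back to search results so the user
        # still gets something, even if it is not a direct answer.
        return "Here's what I found: " + "; ".join(search_top_results(query))
    # High confidence: answer the recognized intent directly.
    return answer_intent(intent)

print(respond("should I wear a jacket today?", "weather", 0.9))
```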
During Interaction | G5: Match relevant social norms.
★★★★★
Users can talk to the Google Home in a casual tone, which makes an exchange with it feel like an everyday conversation. Google Home is especially good at following up on its answers with relevant questions, which makes the interaction feel even more sophisticated. If a user were to ask, “should I wear a jacket today?”, Google Home would respond by sharing the relevant weather information. This behavior is industry-leading and makes for an especially practical personal AI assistant.
When Wrong | G11: Make clear why the system did what it did.
★★☆☆☆
When it comes to transparency about how it chooses its responses, the Google Home gives the user little insight. Sometimes it gives responses that are irrelevant or vague; other times it gives far too much information, and it is never clear why it behaves one way or the other.
Over Time | G13: Learn from user behavior.
★☆☆☆☆
Google Home does a very poor job of learning from its interactions with users. Over the few years I have owned one, I have yet to see any noticeable improvement in how it interacts with me specifically; after months of ownership, my Google Home treats me the same way it would treat a brand-new user. Not only does Google Home offer little room for customization in general, but there also seems to be no attempt by the assistant to better understand how individual users interact with it.
Guidelines We Found Interesting
Initially | G1: Make clear what the system can do.
We chose Uber: ★★★★★
When using Uber, it is very clear why the price of a ride is what it is. When demand for rides in an area is high, Uber tells users explicitly that surge pricing is in effect and that rides will cost more; when demand is lower, that is reflected in a lower price. Uber’s rating system is also very intuitive, with drivers and passengers rating each other based on the experiences they had. This leads to higher-rated drivers being able to choose higher-rated passengers.
During Interaction | G4: Show contextually relevant information.
We chose Robinhood: ★★★★★
Robinhood, the popular commission-free stock trading app, makes it very easy for users to invest in new and different stocks. It uses AI to suggest stocks it thinks a user will want to examine further, based either on the positions the user currently holds or on their more recent stock searches. Through these suggestions, Robinhood further streamlines an otherwise complicated financial transaction.
When Wrong | G9: Support efficient correction.
We chose email spam filters: ★★★★★
Spam filters are a prime example of an AI system that is very easy to correct. If an email you would rather have in your spam folder slips through to your inbox, you can simply mark it as spam, telling your provider to send future emails of that type to the spam folder. Conversely, if an important email lands in the spam folder, you can mark it as “not spam” so that similar emails reach your inbox in the future.
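As a rough illustration of this correction loop (and not any real provider’s filter), the toy classifier below updates simple word counts when the user marks a message as spam or not spam; the class and method names are invented for the example.

```python
from collections import Counter

# Toy illustration of "support efficient correction": one click in the mail
# client maps to one call that updates the word counts used for future mail.
# Real providers use far more sophisticated models than this.

class ToySpamFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def _score(self, text: str) -> float:
        words = text.lower().split()
        spam = sum(self.spam_words[w] for w in words)
        ham = sum(self.ham_words[w] for w in words)
        return spam / (spam + ham) if spam + ham else 0.5

    def is_spam(self, text: str) -> bool:
        return self._score(text) > 0.5

    def mark_spam(self, text: str) -> None:
        # User correction: a spam message reached the inbox.
        self.spam_words.update(text.lower().split())

    def mark_not_spam(self, text: str) -> None:
        # User correction: a legitimate message was filtered out.
        self.ham_words.update(text.lower().split())

filter_ = ToySpamFilter()
filter_.mark_spam("win a free prize now")
filter_.mark_not_spam("meeting agenda for tomorrow")
print(filter_.is_spam("claim your free prize"))  # True after the correction
```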
Over Time | G14: Update and adapt cautiously.
We chose the Sony Aibo: ★★★★★
In the past year, Sony released a new version of its beloved robot dog, Aibo. Many users were impressed with how well Aibo adapts to its environment over time. For example, when you first let Aibo loose in a room, it walks around clumsily and gets stuck often, but after a few days it becomes very good at understanding the layout of the room, including where its charging station is relative to everything else.
By Sohum Gupta