I ask four digital assistants 100 information questions. All of them fail at least half.

Despite the massively larger size of the Google Home speaker, the winner of “who can actually hear a user” is the Echo Dot, which was able to hear me from farther away and without me having to look at it. Good job!

After seeing the poor feedback on Watson in Bridge Crew, I decided to take my four digital assistants for a spin. After the first 21 questions across the four assistants, I had learned that Alexa cannot give basic information about Amazon Prime videos, that none of them can properly work out which movie you’re asking about, and that none of them can actually recommend anything. Also, Google still needs to learn how to round. I had also learned I was going to need a bigger set of questions.

First, the purpose of this exercise is to test the assistants on the one skill that is a must-have for a disembodied speaker: information retrieval and processing. This is not a comprehensive test, but it is indicative of the types of questions that arise from conversation, i.e. two or more people are talking and reach a question that needs an answer.

To begin with, I summarise the results, mostly for fun. After that, you can browse the 40 questions I found most interesting, along with the varied (or not-so-varied) answers each assistant offered for them. This piece is not intended to establish who the “best” assistant is. I have not queried skills or integrations at all; for example, Amazon relies heavily on skills to provide extended functionality, while Google thrives on its Android integration and can play things on my TV and sound bar using Google Cast.

So, on to the charts. Charts are fun. This is a numerical summation of the questions I asked. I have chosen to give 1 point for a correct-and-succinct answer, 0.5 points for an unclear, overly verbose or rambling answer, and 0 points for anything else. For the first chart, I have divided the questions into 10 categories of 10 questions each, making the maximum score for each category 10.
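
For the spreadsheet-averse, here is a minimal sketch in Python of how the tally works; the data shape is illustrative, not my actual spreadsheet.

```python
# Minimal sketch of the scoring described above (illustrative data shape,
# not my actual spreadsheet). Each answer scores 1.0 (correct and succinct),
# 0.5 (unclear, overly verbose or rambling) or 0.0 (anything else);
# the 100 questions come in 10 categories of 10.

def category_totals(scores: dict[str, list[float]]) -> dict[str, list[float]]:
    """Map each assistant's 100 per-question scores to 10 per-category totals."""
    return {
        assistant: [sum(answers[i:i + 10]) for i in range(0, 100, 10)]
        for assistant, answers in scores.items()
    }
```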

Breakdown of assistant scores by category. See the Google Chart for a hoverable breakdown.

Unsurprisingly, perhaps, the assistants do best in the areas where their parent company has an adjacent business. For local facts, Google had the best coverage, undoubtedly due to its investment in mapping. Likewise, Google had by far the richest answers for travel. Alexa excelled in shopping. Siri took the crown on factual questions, but surprisingly did poorly on reasoning (“Queries”), where I expected the Wolfram Alpha-backed service to pass with flying colours.

One interesting finding is that all the assistants are bad at free-form questions about themselves, and even worse with fake news. With an at-best score of 10% on fake news (debunking questions based on fake stories, such as “Did Bill Murray run for president?”), the assistants do a poor job of it. Siri frequently recommended sites that furthered the myths, and Cortana even started reading a passage on Bill Murray’s presidential run as if it were a news article.

Also interesting was Alexa’s complete and utter inability to answer TV-related questions correctly. I would have expected some connection to Amazon’s video service, but it was unable to answer questions about even Amazon Originals. That’s not good. Siri did reasonably OK on direct questions, while Google and Cortana had better overall luck. It has to be noted, though, that they frequently rely on the crutch of reading a search result out loud, which can have interesting results (see the verbose Q&A below).

In some areas, the blows landed unevenly. For example, Google was better at some conversations, while Alexa was better at others. They also traded blows fairly evenly in the queries section, where I asked reasoning questions. All the assistants did poorly when I asked relational questions, such as “What is the TV show where all the answers must be questions?”, angling for Jeopardy! and receiving no answers.

The chart isn’t very inspiring, though. Let’s pool together all 100 questions and see how well the assistants did at providing a satisfactory answer. For this chart, I won’t be using the score; instead I will count the questions that were answered in full, answered partially or unsatisfactorily, and answered incorrectly or not at all.

Breakdown of total questions answered. See the Google Chart for more details.

The best-performing assistant on information queries is Google. This is perhaps not surprising, given that Google is a search engine. The same crutch is used by Cortana to squeeze ahead of Alexa by one answered question, even though it gave more partial answers. (Under the scoring methodology, Alexa would come out ahead, as it had a greater proportion of adequately answered questions.) Siri, which never read a search result, came last in number of questions answered, and even managed a greater proportion of half-answered questions than Alexa, usually because it required additional input for things Alexa could understand.

Before we proceed, let’s acknowledge that I asked freeform questions the way a human would. The fact that I get an answer at all, let alone a correct one at least a quarter of the time, is impressive from a pure technology standpoint. It is all the more impressive when considered from the standpoint of the hypothetical Amalgam assistant: a theoretical construct that takes the best answer from the four assistants. With 47 correct and 26 partial answers, Amalgam achieves an amazing 73% success rate.

An aside: this is, in fact, my third draft of this article. Initially, I posed 20 questions, and found that the amalgam service answered 15 of them. I doubled that to 40, and found Amalgam had answered 29/40, or 72.5%. With this final tally of 100, I seem to have converged on my personal success rate with this meta-assistant. It bears asking how much this varies between people.
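
In code terms, Amalgam is just a per-question maximum across the four assistants; a sketch, under the same illustrative data shape as above:

```python
# Sketch of the hypothetical Amalgam meta-assistant: for each question, take
# the best score any of the four assistants achieved. A question counts as
# "answered" if at least one assistant gave a full (1.0) or partial (0.5) answer.

def amalgam_success_rate(scores: dict[str, list[float]]) -> float:
    """Fraction of questions where at least one assistant scored above zero."""
    best_per_question = [max(column) for column in zip(*scores.values())]
    answered = sum(1 for score in best_per_question if score > 0)
    return answered / len(best_per_question)

# With the final tally (47 full + 26 partial out of 100), this comes out to 0.73.
```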

Unfortunately for me as a user, rather than as a technologist amazed at the rate of progress, even the best-performing answer machine, Google, managed a paltry 30 correct answers and another 20 partial ones. Nominally, this gives a 50% success rate, though the partial results would often require me to ask again, as the answer had rambled its way into oblivion long before making its point. For example (replicated in the raw data below), here’s one rambling Google answer:

At sea level, water boils at 212 °F. With each 500-feet increase in elevation, the boiling point of water is lowered by just under 1 °F. At 7,500 feet, for example, water boils at about 198 °F. Because water boils at a lower temperature at higher elevations, foods that are prepared by boiling or simmering will cook at

This answer takes a lot of brainpower (and possibly a second ask) to work out whether my question was actually answered. It doesn’t help that it’s truncated, and just stops in the middle of a sentence. Another Google-specific quirk was that, especially for answers that depended on search results, it would not consistently present the same search result: asking the exact same question a second time could yield a different answer.

Conclusions

After all this talking to the assistants, I found that Alexa had by far the easiest time understanding me, even though it was farthest away and hidden behind a speaker. Google Home did a poor job unless I was looking at it while talking (an almost creepy kind of situation), and Siri was its usual finicky self. Cortana worked great, but since it was connected to a condenser microphone worth more than the Echo Dot and Google Home speaker combined, I’d be disappointed if it didn’t.

Unfortunately for us all, the grand conclusion of this is not “there is one best assistant”. Each of these pieces of software shows different strengths and a different understanding of what a person is asking for, with different assumptions trailing it. The one thing that I can say for certain? We have a long way to go before these assistants are usable without copious amounts of trial and error.

Below you will find transcriptions of my fights with the assistants. Be aware that this is long and verbose. If you’re curious about my numbering and statistics, or just want to know what I asked, the Google spreadsheet has my full set of questions and scored answers.

Random picks from the 60 additional questions

Before we get to the verbose data, there were some outliers in the rest of the questions that I think are individually notable.

What’s the closest gas station on my way to San Francisco? All of the assistants failed at this local question, instead opting to recommend gas stations at the destination. I’d have run out of gas long before getting there!

Where is XYZ, Alexa? Alexa has a tendency to present its answers as a series of lists, such as “Here are some popular [items in category]: Item One, Item Two and Item Three. Check the Alexa app for directions and hours”. Unfortunately, this happens even if I ask for a singular one, such as “where is the nearest park”.

How much is a Playstation 4 Pro, Siri? Apple.com should be able to help you with that. [Really?]

Did Bill Murray run for president, Cortana?  Bill Murray shocked the country today announcing his run for President in 2016. From his home in Charleston, South Carolina, legendary actor, comedian, and writer, 65-year-old Bill Murray, shocked the country today by announcing that he will be running for President in 2016 [Really, Cortana? It’s literally reading a fake news article from a site that is impersonating ABC News]

40 verbose assistant questions

Below you can find each of the questions I asked, along with the answers given to me and, as often as I found reasonable, a summary discussing what happened.

What’s the best Italian restaurant nearby?

  • Google: I found a few places within 5.6 miles. The first one is [restaurant, address]. The second is [restaurant, address]. The third one is [restaurant, address].
  • Alexa: Based on your zip code settings, here are a few nearby restaurants. [Lists 5 restaurants]. Check out the Alexa app for contact info and hours.
  • Siri: 15 Yelp results
  • Cortana: “I found 10 restaurants”, displaying a list showing five. Proceeds to read the name, address and distance to the top match.

This is a tough one, but only Cortana actually gives a single answer (which the question implied), although it hedges its bets by showing five. Siri gets a pass for being on a phone, but no extra credit. Alexa does a better job of presenting than Google; the pace at which Google spits names and addresses at me makes me lose track almost immediately.

What’s the cheapest Premium gas within 5 miles?

  • Google: Sorry, I don’t know how to help with that.
  • Alexa: Sorry, I don’t know that.
  • Siri: Shows a nearby Chevron station, but no price
  • Cortana: Bing search

Yeah, I think this one’s for the apps. We continue with what should be a relatively easy softball. It’s not as simple as asking for the weather outright, but it shows some personality and interactivity, which all of these assistants have been striving for.

Should I bring an umbrella tomorrow?

  • Google: No, rain is not expected tomorrow in [City]. The forecast is 79 degrees and sunny.
  • Alexa: No rain is expected in [City] tomorrow.
  • Siri: There’s no rain in the forecast for tomorrow. [Shows weekly forecast]
  • Cortana: That’s probably not necessary. The forecast for tomorrow shows sun with a high of 79 and a low of 60.

So far so good. All accurate. Let’s try to follow up.

What about the day after?

  • Google: According to Wikipedia, The Day After Tomorrow is an American climate science fiction movie…
  • Alexa: Hmmm, I don’t know.
  • Siri: There’s no rain in the forecast for Monday.
  • Cortana: I pulled up some videos for you [of The Day After]

Unexpectedly, Siri is the only assistant that manages this one. Score one. This got me wondering, are the conversational skills of these assistants hobbled somehow in speaker mode? Time for another softball: recent history trivia.

Who is Barack Obama?

  • Google: According to Wikipedia, Barack Hussein Obama the 2nd is an American politician who served as the 44th president of the United States, from 2009 to 2017
  • Alexa: Barack Hussein Obama the 2nd is an American politician who served as the 44th president of the United States, from 2009 to 2017
  • Siri: Here’s some information [Cue Wikipedia article]
  • Cortana: According to Wikipedia.org, Barack Obama is an American politician who served as the 44th president of the United States, from 2009 to 2017

That’s an easy one. Interestingly, Cortana avoids using Obama’s middle name, whereas both Google and Alexa include it. I wonder why. I tried prompting Cortana specifically with “Who is Barack Hussein Obama” and the answer still said “Barack Obama”. On to the conversational part.

Does he have any children?

  • Google: Malia Ann Obama and Natasha Obama
  • Alexa: Sorry, I don’t know that one
  • Siri: OK, I found this on the web for ‘Does Barack Obama have any children’
  • Cortana: Bing search for ‘Does he have any children?’

So, Google at least doesn’t seem to be intentionally hobbled. The weather miss was spectacular, but oh well. Siri is trying, but uses the web search crutch far too often. Cortana misses the boat completely, dropping all context. Alexa has scored zero on conversational follow-ups for weather and popular culture; let’s try shopping.

What’s the price for a packet of Lay’s?

  • Google: Sorry. I’m not sure how to help with that yet.
  • Alexa: [Lists one shopping option, with price] Would you like to buy it? [No, I answer, to which Alexa lists a second option. I decline that one as well.] Check your Alexa app for more options.
  • Siri: Web search result for lace.
  • Cortana: Bing search for “What is the price for a pack of lies?”

I would have expected Google to give me some Google Express options. Siri misunderstood. Alexa correctly figured out what I wanted but, again, has little interest in being conversational about it. Cortana is a bit passive-aggressive there. Clarifying that I meant potato chips makes the ones that missed it understand it’s Lay’s, but they fare no better at giving me an answer. Oh well.

How much for an iPhone?

  • Google: According to LifeWire: iPhone 7 $649, $749, $849
  • Alexa: The top search result is Apple iPhone 5S 16gb silver GSM unlocked certified refurbished, $149
  • Siri: Apple.com should be able to help you with that
  • Cortana: Apple says the price of the iPhone 6 is either $199, $299, or $399. It’s right there on their website. These numbers do not accurately represent the price you will pay for an iPhone 6 on a two-year contract.

Shopping fail all around. I’m not going to spend much more time on that.

I need a pack of tennis balls.

  • Google: My apologies, I don’t understand.
  • Alexa: The top search result is Pro Championship Extra Duty 16 per case for $64. Would you like to buy it?
  • Siri: Bing search
  • Cortana: Bing search

Yeah, Alexa’s default assumption is right this time.

When does The Fate of the Furious come out on Blu ray?

  • Google: Here’s a summary from DVD release dates. The Fate of the Furious Blu-ray release date is set for July 11th, 2017.
  • Alexa: Hmm, I couldn’t find any movies called “The Fate of the Furious Blu Ray”
  • Siri: The Fate of the Furious was released on April 14, 2017
  • Cortana: Bing Search

Alexa’s failure is surprising. Siri defaults to general movie information, and Cortana just gives up.

Is there anything interesting to watch online?

  • Google: I read online that Martina Navratilova is openly gay. She also won 18 Grand Slam singles titles.
  • Alexa: Sorry, I don’t know that.
  • Siri: Interesting question, Vlad
  • Cortana: Bing search

Missing the point completely. I tried a number of variants (“on TV”, “online”, “what’s there to watch”) and the above are the best results I got. I was actually expecting movie recommendations, especially from Alexa. Hm. Let’s try to be more specific.

Any interesting movies recently?

  • Google: Movies frequently mentioned on the web include Spiderman, Homecoming, The Book of Henry and others.
  • Alexa: Hmmm. I don’t know that one.
  • Siri: I found quite a number of movies that are either in theatres or coming out soon. I don’t have information on movie popularity though. [Displays an array of movies]
  • Cortana: Bing search

Google gives some form of an answer, but ending with “and others” is not very helpful. It didn’t accept follow-up questions either, such as “which others?” or “are any of those sci-fi?”. Alexa again misses an opportunity to recommend Prime Video. Speaking of which…

How many episodes are there of The Grand Tour?

  • Google: The Grand Tour’s number of episodes is 13
  • Alexa: Hmmm, I don’t know that one.
  • Siri: Bing search that headlined “Where can I watch The Grand Tour Episode 3 on…”
  • Cortana: The TV series The Grand Tour has, in total, 13 episodes.

Siri certainly missed the boat, but the big surprise here is Alexa. Again, very little thought given to Prime Video. Google, likely mining Wikipedia once more, gives the right answer. Cortana manages that as well, but is more verbose once again.

Who plays Mary Poppins in the Broadway production?

  • Google: The Cast of Mary Poppins includes Julie Andrews, Dick van Dyke, and 17 others
  • Alexa: Sorry, I’m not sure
  • Siri: I couldn’t find any movies matching ‘Broadway Production’
  • Cortana: Bing search, again

I thought this was a reasonable question, but maybe it was too difficult. Let’s try another ambiguous one.

Who stars in the Pink Panther remake?

  • Google: The cast of The Pink Panther includes David Niven, Peter Sellers, and 13 others
  • Alexa: Hmm, I’m not sure.
  • Siri: Sorry, I don’t see any movies matching ‘pink panther remake’
  • Cortana: According to Wikipedia.org, in this film, Inspector Jacques Clouseau is assigned to solve the murder of a famous soccer coach and the theft of the famous Pink Panther diamond. The film stars Steve Martin as Clouseau and also co-stars Kevin Kline, Jean Reno, Emily Mortimer, and Beyoncé Knowles.

I was afraid for a while that I had again chosen a question that was too difficult. Google gave me the answer for the original movies, Alexa gave a non-answer, and Siri continued its literal-minded rampage through movie titles, but out of nowhere, Cortana actually figured out I wanted the 2006 film. I asked Cortana for the original cast and it did give Sellers, Niven, et al., so a dark-horse surprise. Still, Cortana’s answer is long and unwieldy and, while it eventually answers the question, I did not ask for a plot summary. Interesting, though; let’s keep trying.

Who stars in the Star Trek reboot?

  • Google: The cast of Star Trek includes Chris Pine, Zachary Quinto, and 59 others.
  • Alexa: Bleeding Cool reports that CBS’ Star Trek: Discovery, which will debut this fall, marks the first time the series lead will not be a captain. The series will consist of 13 episodes, each of which costs $6-7 million to make.
  • Siri: Sorry, I don’t see any movies matching ‘Star Trek reboot’
  • Cortana: I found two results, including Chris Pine and Winona Ryder

I did not even consider that there was also the TV series, so while I can’t mark Alexa as wrong for giving me the series, I can say it did not give me a cast, which was what I actually asked for. Google and Cortana both give reasonable answers, although Cortana makes it sound like there are only two actors. Siri, again, no thank you. As with the Pink Panther, let’s ask a follow-up to ensure they are not simply triggering on the phrase “Star Trek” and calling it a day.

Who starred in the original Star Trek movie?

  • Google: The cast of Star Trek: The Original Series includes William Shatner, Leonard Nimoy, and eight others.
  • Alexa: Bleeding Cool reports that CBS’ Star Trek: Discovery, which will debut this fall, marks the first time the series lead will not be a captain. The series will consist of 13 episodes, each of which costs $6-7 million to make.
  • Siri: Which one? [Presents a list of movies to choose from]
  • Cortana: According to Wikipedia, Shatner and the other original Star Trek cast members returned to their roles when Paramount produced Star Trek: The Motion Picture, released in 1979. He played Kirk in the next six Star Trek films, ending with the character’s death in Star Trek Generations (1994).

We continue our mixed bag of results. Google presents the right names, for the wrong reason: I specifically said “movie”. Alexa also ignores the movie reference and goes off on its own merry tangent; this time it’s clearly wrong. Siri will lead to a decent answer, but only because it couldn’t figure out which movie I wanted and asked me again. Cortana does produce the correct answer, but boy, is it wordy.

How much does a Dreamliner weigh?

  • Google: Here’s a summary from wikipedia. Keeping the same wingspan as the 787-8, the 787-9 is a lengthened variant with a 20 feet longer fuselage and a 54,500 pounds higher maximum take-off weight, seating 280 passengers in a typical three-class arrangement over a 7,635 nautical miles range
  • Alexa: Hmmm, I’m not sure.
  • Siri: The answer is about 502,500 pounds.
  • Cortana: Bing search

We might want to thank Wolfram Alpha for this, but Siri gives a reasonable and useful answer to the question. Alexa is its usual useless self, but the real surprise here is Google, which goes off on the world’s weirdest tangent. Interestingly, its algorithms have cut parts out of the Wikipedia article: where Google says “lengthened variant”, Wikipedia says “lengthened and strengthened variant”. The alternative units have been cut as well, although that shouldn’t be too surprising.

Where does the Jackfruit grow?

  • Google: Here’s a summary from California Rare Fruit Growers, Inc. Origin: The jackfruit is believed indigenous to the rain forests of the Western Ghats of India. It spread early on to other parts of India, southeast Asia, the East Indies and ultimately the Philippines. It is often planted in central and eastern Africa [source]
  • Alexa: Sorry, I don’t know that one.
  • Siri: Bing search
  • Cortana: According to answers.com, the Jakfruit or Jackfruit which ever way you spell it come from Southeast Asia you can’t get the Jakfruit in Australia because it has no access to Australia.

Google quotes a 1996 page that almost gave me Internet whiplash. It would load great on a modem. The answer is correct, but verbose, and again stops mid-sentence. Cortana chooses Answers.com, which provides a puzzling answer and wanders off to Australia because why not.

How tall is the Eiffel Tower?

  • Google: The Eiffel Tower is 984ft tall
  • Alexa: The Eiffel Tower height is 1063ft, 324 metres.
  • Siri: Eiffel Tower is 1063ft tall.
  • Cortana: Eiffel tower stands at 986.38 feet tall.

We’re getting a bit of variance here. Google chooses the height of the lattice, Siri and Alexa choose the height including the antennas, and Cortana chooses an alternative height I can’t find sourced anywhere. Update: I received a very interesting suggestion in my inbox from Grégoire (thank you!) that gives a plausible explanation:

I suspect that it’s the result of the conversion into feet of 300.65 m, the precise value reported in some French sources for the original height of the tower (without the flag pole!). I find this quite interesting, as it suggests that Cortana either translates information from other languages (but why, when there are already a lot of decent answers to this question in English), or taps into a curiously precise database of trivia that’s not publicly accessible online (Encarta?).
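
For what it’s worth, the arithmetic checks out: 300.65 m × 3.28084 ft/m ≈ 986.38 ft, matching Cortana’s figure to the hundredth of a foot.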

What is the largest man-made structure visible from space?

  • Google: Here’s a summary from Wikipedia. Man-made structures visible from earth orbit without magnification (by mechanical aids such as a camera or binoculars) include highways, dams, and cities. The most commonly cited example, the Great Wall of China, is barely visible from low Earth orbit.
  • Alexa: Hmmm, I don’t know that.
  • Siri: Bing search
  • Cortana: Bing search.

Nobody scores here. Google comes in closest, but is again just reading a website, and once more crops a sentence in half.

When did Caesar live?

  • Google: Julius Caesar was born on July 13th, 100BC
  • Alexa: July 13th, 100BC to 44 BC
  • Siri: Julius Caesar was born Saturday, July 13, 100BC extrapolated Gregorian calendar and died Friday, March 15, 44BC extrapolated Gregorian Calendar
  • Cortana: According to Wikipedia, Gaius Julius Caesar, known as Julius Caesar, was a Roman politician, general, and notable author of Latin prose.

I feel like I triggered Siri here, and it sounds defensive. Very defensive. Lighten up! Cortana also gets triggered a bit, refusing to give up any dates pertaining to his life. Google is very literal about the “live” part, giving only the birth date, as if Caesar still lives. Alexa and Siri, extrapolated Gregorian calendar notwithstanding, get marks on this one.

Which is heavier: 3lb or 2kg?

  • Google: Sorry, I’m not sure how to help with that.
  • Alexa: 2kg is greater than 3lb
  • Siri: Bing search result for “which is heavier: 3lb or 3kg”
  • Cortana: Bing search

I’m a bit disappointed here. Wolfram Alpha will happily answer 2 kg, and even Alexa managed it, so Siri’s failure is surprising. Google’s failure is less surprising; the device doesn’t seem to be able to do any reasoning. I also tried more complex comparisons, such as “Which is heavier: a 787 or an A380?”, but none of the assistants could figure that one out.
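
For the record, the conversion Alexa got right is simple: 2 kg × 2.20462 lb/kg ≈ 4.41 lb, comfortably heavier than 3 lb (which is about 1.36 kg).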

What’s the temperature of a medium rare steak?

  • Google: According to Amazing Ribs, For example, USDA recommends a minimum temp of steak to be 145°F, which is classified by chefs as “medium” and by many street lovers as overcooked. “Medium rare”, 130°F to 135°F, is the temp range at which steaks are at their most tender, juicy, and flavorful.
  • Alexa: Hmmm, I don’t know that one.
  • Siri: Bing search results, led with Wikipedia article for “Doneness”
  • Cortana: According to Food Network.com, turn the steaks over and continue to grill 3 to 5 minutes for medium-rare (an internal temperature of 135 degrees F), 5 to 7 minutes for medium (140 degrees F) or 8 to 10 minutes for medium-well (150 degrees F).

This one takes a bit of unpacking. Siri and Alexa predictably fall flat, but Google seems to be doing something. Unfortunately, all it’s doing is word-torrenting the search results for “amazing ribs steak medium”, or probably some similar internal combination, including reading the typo in the page that misspells “steak lovers” as “streak lovers” and mispronouncing it “street”. While it did answer the question, the way it did so was so convoluted and insanely long that it doesn’t really qualify. Cortana falls into the same trap, likely via Bing, but reads it even more awkwardly. Let’s try something simpler.

What’s the cook time for Spaghetti?

  • Google: 8-12 minutes
  • Alexa: Hmmm, I don’t know that one
  • Siri: Bing search
  • Cortana: According to What’s Cooking America, most pastas cook in 8 to 12 minutes. Test dry pasta for doneness after about 4 minutes of cooking by tasting it. It is difficult to give exact cooking times since different shapes and thickness of pasta will take less or more time to cook.

Google gives a straight answer. Hooray. Alexa and Siri miss this train, while Cortana falls into the Google trap we saw earlier of just reading blindly from a web page. Sorry, no points for just trying. While we are on the topic of boiling water…

What is the boiling point of water at an altitude of 1km?

  • Google: At sea level, water boils at 212 °F. With each 500-feet increase in elevation, the boiling point of water is lowered by just under 1 °F. At 7,500 feet, for example, water boils at about 198 °F. Because water boils at a lower temperature at higher elevations, foods that are prepared by boiling or simmering will cook at
  • Alexa: Sorry, I’m not sure about that.
  • Siri: Bing search
  • Cortana: Bing search
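
For reference, the succinct answer is recoverable from the very rule of thumb Google quotes: 1 km is about 3,281 ft, the drop is roughly 3,281 / 500 ≈ 6.6 °F, so water at 1 km boils at about 212 − 6.6 ≈ 205 °F (roughly 96 °C), which is within a degree or so of standard references.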

Interestingly, Google actually had several different answers to this question. One of them said, “At that altitude, water boils at 208°F,” and continued reading a long sentence it credited to Wikipedia, which I couldn’t find. Since I could not verify it, I can’t say whether Google actually understood that I wanted a specific elevation or just spat numbers at me; the answer I quoted above suggests it didn’t. But since I’m not a native speaker of feet…

What is 1km in feet?

  • Google: 1km equals 3,280 feet 10.079 inches
  • Alexa: 1km equals 3,280.8 feet
  • Siri: It’s 3,280.84 feet
  • Cortana: 1km is approximately 3,280.84 feet
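
As an aside, Google’s odd inches figure is at least internally consistent: 1 km is 3,280.8399 ft, and the fractional 0.8399 ft × 12 ≈ 10.079 in, exactly what it read out.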

Google can certainly take the crown for “most accurate” answer, but unfortunately also for “most useless in a spoken setting”. In fact, when I asked for 100 km, Google answered 328,083 ft 11.88 in. Alexa rounds to 328,080 feet, which is far easier for a human to take in. I wonder which other units these assistants can round.

What’s 1 trillion Euros in US Dollars?

  • Google: 1 trillion Euro is approximately 1 trillion 119 billion 350 million US Dollars
  • Alexa: 1 trillion Euro is 1 trillion 117 billion 300 million dollars
  • Siri: Bing search (wow, what?)
  • Cortana: Bing search (Hmmm, pattern here much)
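
Back-of-the-envelope, the implied exchange rates: Google’s figure works out to about 1.1194 dollars per euro, Alexa’s to about 1.1173, a spread of roughly 0.2%.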

Both Google and Alexa rounded at around the same spot, using slightly different conversion tables. Siri and Cortana, oddly enough, failed to answer the question. Those two have been acting suspiciously similar throughout, rounding at the same places and giving the same answers. I wonder how much the Siri team is getting from Bing.

What’s a “Phillips Head”?

  • Google: According to Wikipedia, Henry Frank Phillips was a US businessman from Portland, Oregon. The Phillips-head screw and screwdriver are named after him.
  • Alexa: Hmmm, I don’t know that.
  • Siri: Bing Search.
  • Cortana: The Phillips head was designed to provide a better contact with the screwdriver than the original flat head. The flat headed screw driver often tended to cam-out causing damage to the screw head. The Phillips head was designed to combat this by changing the design from a single groove to a simple cross to improve contact and torque. The design [sic]

Google is reading the biography page of Henry F. Phillips, while Cortana is reading bsfixings.uk. Cortana’s is the better answer but, as usual, a wordy one.

What should I use for a headache?

  • Google: I’m sorry. I hope you get well soon.
  • Alexa: Hmmm, I don’t know that
  • Siri: Here’s what I found (see image to the right)
  • Cortana: Bing Search

Siri is the only one to give a reasonable output, again thanks to Wolfram Alpha. The output is also technically correct, but it’s difficult to call it “useful”: the drug names are practically illegible, and it doesn’t really give a clear answer.

How do I tie a Windsor knot?

  • Google: According to Esquire, start with the backside of the tie facing away from you and then cross the wide end underneath the narrow end to the left. Step Two: Cross the wide end over and to the right, and then up through the neck loop from underneath. [This is how to tie a Simple knot, not a Windsor]
  • Alexa: Hmmm, I don’t know that one.
  • Siri: Bing search
  • Cortana: Recommends I watch a YouTube video on how to do it.

What’s the difference between a Flush and Royal Flush?

  • Google: According to poker stars: Traditional High Poker Hand Ranks. Straight Flush Five cards in numerical order, all of identical suits. The best possible straight flush is known as a royal flush, which consists of the ace, king, queen, jack and ten of a suit. A royal flush is an unbeatable hand. Four of a Kind: Four cards of the same rank, and one side card or ‘kicker’.
  • Alexa: Sorry, I don’t know that one
  • Siri: Bing search
  • Cortana: Bing search

Google gives a truncated version of an actual website, skipping some of the sentences and even slipping in four of a kind. It’s quite literally reading the page to me. Still a better effort than the other assistants, since at least it provides an answer, but it leaves room for improvement.

What’s the best time to visit Stockholm?

  • Google: According to US News travel, the best time to visit Stockholm is in the summer – albeit the city’s priciest season – because the temperatures are warmest and daylight lasts the longest.
  • Alexa: Hmmm, I don’t know, but I’m always learning.
  • Siri: Bing search
  • Cortana: According to US News travel, the best time to visit Stockholm is in the summer-albeit the city’s priciest season-because the temperatures are warmest and daylight lasts up to 24 hours. Average summertime highs range between 68 and 71 degrees Fahrenheit, with the hottest weather occurring in mid-July.

Both Google and Cortana properly truncate the site they are quoting. Google’s copy of the index is newer (the subtle change in wording shows up when I follow Cortana’s web link). Cortana, as usual, offers a longer response, though for once it is quite relevant.

How long is the flight to Sydney?

  • Google: The shortest flight from [My Nearest Airport] is about 17 hours and 20 minutes long.
  • Alexa: The flight time between Sydney and your current location is approximately 15 hours and 30 minutes.
  • Siri: Bing search
  • Cortana: Bing search

We get a bit of divergence here. A quick scan shows that Google is homing in on the smaller airport that’s fairly close to me, while Alexa allows flights from the larger airport a bit farther away. That explains the shorter flight time; with better connections come shorter flights. Asking around a bit more, it’s evident that Google has gone to some lengths to integrate with its Flights service, with Google Home being able to point out how many flights there are, and so forth. However, it steadfastly refuses to give me a price quote.

How long time would it take to drive to Disneyland?

  • Google: With light traffic, it would take 5 hours and 23 minutes to drive to Disneyland Park
  • Alexa: I don’t know how long it would take, but based on your zip code setting of [redacted] there’s a Disney store 3.1 miles away. To get more accurate information, enter your address in the Alexa app.
  • Siri: OK, I found this on the web for ‘How Long [sic] time would it take to drive to Disneyland’
  • Cortana: Bing search.

… or so I thought. The bag, once more, is very mixed. Google gives a straight-up answer to the question (good job!). Alexa rambles on and fails to take me where I want to go, instead telling me there’s a Disney store nearby. Siri and Cortana both go off on a tangent. A related question that they do succeed on is…

How far away is Disneyland?

  • Google: If you drive, Disneyland park is 373 miles away.
  • Alexa: Your current location is 338 miles, 538km from Disneyland Park, as the crow flies.
  • Siri: It looks like Disneyland Park is about 335 miles away as the crow flies, Vlad
  • Cortana: It’s about 372 miles to Disneyland, CA. It’ll take 5 hours and 45 minutes with traffic to drive there via I-5 S.

Everyone has results, but now we see a bit of a difference. Alexa (based on my zip code), like Siri, gives me the distance as the crow flies, while both Google and Cortana give me the driving distance. Interestingly enough, when asked how far away Disneyland is, Cortana volunteers the driving time, but it fails to recognise the question when I ask for the driving time explicitly.

What’s the name of the Titanic theme song?

  • Google: I don’t understand
  • Alexa: I don’t know that
  • Siri: I don’t understand [question]
  • Cortana: According to themoviethemesong.com, the Titanic theme song was performed by Celine Dion. The Titanic movie theme song was called “My Heart Will Go On” and achieved massive success within several countries. It rapidly became Celine Dion’s most successful hit song and soon went on to be one of the best selling singles of all time.

This was surprising. I certainly thought Google searches would be A-OK here. I guess not. Interesting, though. With all the fake news lately, I wonder whether these assistants can help sort out facts.

Do breakfast cereals contain paint thinners?

  • Google: My apologies. I don’t understand.
  • Alexa: Hmm, I don’t know that one.
  • Siri: Bing search result, the top hit stating FDA Approved: Paint Thinner in Children’s Cereals
  • Cortana: Bing search, same top hit as Siri.

Yeah, not much help there…

Do vaccines cause autism?

  • Google: I can see what the web has to say. [Stops there]
  • Alexa: Sorry, I don’t know that one.
  • Siri: Bing search, top hit Autism-Vaccine Link: Evidence Doesn’t Dispel Doubts
  • Cortana: I pulled up the latest for [repeats question]. The top hit is Vaccines do not cause autism; they save lives [mispronounces lives]

Google is mysterious. It almost feels censored. Bing is really not doing well on the myth-dispelling front: two for two, we are now in territory where the first Bing result furthers the disinformation. I continued down a list of these types of questions; “does spaghetti grow on trees”, “did the DOJ prosecute a woman for laughing”, “is it true that Bernie Sanders used campaign money to buy a car”, and I faced similar answers each time. It looks like assistants can’t suss out fake news yet. What about famous quotes?

Who said “If I asked my customers what they wanted, they’d have said ‘a faster horse'”?

  • Google: Sorry, I don’t understand.
  • Alexa: Ominous error sound
  • Siri: Bing Search
  • Cortana: Bing search

The Bing search results are useful this time, but the assistants are still not, well, assisting.
