Star Trek: Bridge Crew is a fascinating social game. It is quintessentially a communication game, much like Keep Talking and Nobody Explodes: the goal is to overcome a computer opponent by cooperating and talking with each other. The players never compete on who does “best”, nor are they rewarded for having one person significantly better than the rest. In true Star Trek fashion, it promotes egalitarian gameplay. Except in the single player game, where one instead gets to have a split personality while trying desperately to man all stations, all the time. Not really that interesting.
So when the Watson addition was announced, I was intrigued. Here is a futuristic vision, apt in a futuristic game, that would try to let me talk to my AI crew. Interesting, I thought. Now that the game has launched with this feature, I figured I would try it out, planning an overview of how much interaction the voice commands added: were they merely alternative ways to invoke the single player commands, or could voice commands add nuance? Would they expand the interaction richness from “quarter impulse” to “one sixth impulse”, and would it work? A quick aside here: my main curiosity is not whether it is possible but whether it is easy. For the former, the answer is yes. The latter… now that’s interesting.
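To make that distinction concrete, here is a minimal sketch of the two models, with entirely hypothetical names and mappings (this is not the game’s or Watson’s actual logic): a fixed grammar only accepts the canned phrases the menu already offers, while a slightly more conversational parser can extract an arbitrary fraction like “one sixth”.

```python
from fractions import Fraction

# Fixed grammar: exactly the phrases the single-player command menu already offers.
FIXED_COMMANDS = {
    "quarter impulse": 0.25,
    "half impulse": 0.5,
    "full impulse": 1.0,
}

# Word-to-number tables for a slightly more "conversational" parse.
CARDINALS = {"one": 1, "two": 2, "three": 3, "five": 5}
DENOMINATORS = {"half": 2, "third": 3, "quarter": 4, "sixth": 6, "eighth": 8}


def parse_fixed(utterance: str):
    """Recognizes only the exact canned phrases; anything else is rejected."""
    return FIXED_COMMANDS.get(utterance.strip().lower())


def parse_flexible(utterance: str):
    """Also accepts phrases like 'one sixth impulse' by extracting a fraction."""
    words = utterance.strip().lower().split()
    if len(words) == 3 and words[2] == "impulse":
        num = CARDINALS.get(words[0])
        den = DENOMINATORS.get(words[1])
        if num and den:
            return float(Fraction(num, den))
    return parse_fixed(utterance)


print(parse_fixed("one sixth impulse"))     # None: not on the canned menu
print(parse_flexible("one sixth impulse"))  # 0.1666...: the richer interaction
```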
Upon launching Bridge Crew after the patch, I was immediately greeted with a message from Ubisoft. It detailed how the new voice commands were powered by Watson’s speech recognition and “conversational engine”, how my audio would be sent to Watson, and how that would only happen while I had the crew order command open. Agreeing to whatever terms and conditions they had, because of course I did, I set off. Unfortunately, my bold journey quickly went nowhere.
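That description amounts to push-to-talk gating: the microphone is only forwarded to the speech service while the crew order menu is held open. A rough sketch of the pattern, with hypothetical names throughout (this is not the game’s code or Watson’s API):

```python
# Push-to-talk gating: audio is captured and sent to a speech service only
# while the command menu is open. All names here are hypothetical.
class VoiceCommandChannel:
    def __init__(self, speech_client):
        self.speech_client = speech_client   # wrapper around some speech-to-text service
        self.menu_open = False
        self.buffer = bytearray()

    def on_menu_opened(self):
        """Player holds the crew-order button: start buffering microphone audio."""
        self.menu_open = True
        self.buffer.clear()

    def on_audio_frame(self, frame: bytes):
        """Microphone callback: only keep audio while the menu is open."""
        if self.menu_open:
            self.buffer.extend(frame)

    def on_menu_closed(self):
        """Player releases the button: send the captured audio, and nothing else."""
        self.menu_open = False
        if self.buffer:
            self.speech_client.transcribe(bytes(self.buffer))
            self.buffer.clear()
```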
I fire up Bridge Crew, start a solo adventure, and receive a hail. Immediately, I respond with “Answer Hail”. Pause. Well, the hail got answered, so I reason there is probably some delay. No matter; such is the price of progress. The hail done, I immediately order the helmsman to navigate to the ship I have selected. Nothing happens, but maybe I was too ambitious right off the bat. I fire off a sequence of simpler and simpler commands, ending with “full impulse” and finally “red alert”, and note that nothing gets answered. At all. Which brings me to today’s actual message.
Voice interaction is tough. People will naturally try to say a lot of things, and the key is to help the user understand what is happening. A user will, pretty much by definition, want to use the system; if they are failing to do so, there should be an approachable way for them to continue. In this case, I had nothing. Nothing in the UI would tell me whether the speech service was even working. Compare with Alexa or Siri, which will complain profusely if they cannot connect to the internet to run their algorithms. Instead, I was sitting in my virtual captain’s chair, barking random orders and hoping one of them would get accepted. Not good.
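The fix being asked for here is mundane: probe the speech backend and tell the player what state it is in, the way Alexa and Siri announce a lost connection. A hedged sketch of that pattern follows; the URL, the HUD calls, and the function names are all placeholders I made up, not Ubisoft’s or Watson’s actual interfaces.

```python
# Surface speech-service status to the player instead of failing silently.
# Everything here is hypothetical: a pattern, not the game's implementation.
import urllib.request

SPEECH_SERVICE_HEALTH_URL = "https://speech.example.com/health"  # placeholder


def speech_service_reachable(timeout: float = 2.0) -> bool:
    """Cheap reachability probe before accepting voice commands."""
    try:
        with urllib.request.urlopen(SPEECH_SERVICE_HEALTH_URL, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def on_command_menu_opened(hud):
    """Show the player whether barking orders can possibly work right now."""
    if speech_service_reachable():
        hud.show("Listening...")  # mic icon, listening indicator, etc.
    else:
        hud.show("Voice commands unavailable: cannot reach the speech service.")
```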