This week Facebook launched its bid to capture the market for personal digital assistant services (for now available only to select groups of people in San Francisco). Facebook’s ‘M’ interacts with the user via the Facebook Messenger app, but as with its competitors Siri (Apple), Now (Google), Cortana (Microsoft) and Echo (Amazon), the serious work is done through cloud services.
As I have already posted at length about my concerns regarding the growing trend of processing potentially intimate data through cloud services, I will refrain from doing so again. On a technology level, a key difference from its competitors is that ‘M’ uses human assistants who step in to help answer questions when the algorithm is incapable of responding; their answers are then used to further train the AI. This is most likely the reason why the response latencies reported during informal testing of ‘M’ ranged from a few seconds up to 30 minutes. Given the current state of even the best AI, the inclusion of humans in the loop undoubtedly gives the service greater flexibility and power for answering complex queries.
A side effect of the ‘human in the loop’ may well turn out to be that users become more privacy conscious in their use of the service. For better or for worse, it has been shown [ref avatar psychiatrist study] that people have a tendency to reveal more about themselves when they interact with a fully automated system than when they know that their communications will be seen by a human. We might speculate that this is due to an instinctual speciesism, akin to people not wanting some intimate acts to be observed by humans but not caring if they are observed by animals.
Ironically, as I will discuss in my position paper at EthiComp2015 this week, when dealing with personal digital assistants, the AI algorithm that tracks and accumulates all of the user’s past interactions with the system may well be a greater cause for concern regarding the privacy of users than any one of the army of human assistants [see also my paper at the 2nd International Conference on Internet Science].
Before jumping to the conclusion that the decision to include a ‘human in the loop’ makes Facebook’s ‘M’ the preferred service for privacy conscious users (who for some reason nevertheless insist that they must use a personal digital assistant), it is important to remember one reason why Facebook thinks it can outflank the competition in personal digital assistants: Facebook hopes that the rich database of personal ‘likes’ and social media information it holds on all of its users will give it an edge when it comes to mining for personalized recommendations.
Facebook’s David Marcus is apparently promising that ‘M’ will not engage in certain kinds of data-mining of the user without having received explicit permission to do so. Given Facebook’s past attitudes towards privacy and user consent, however, I must admit to a high level of skepticism about its definition of ‘explicit permission’. Will it be another case of Terms-and-Conditions-based consent to have generically defined ‘research’ done with the user’s data?
Another feature of ‘M’ that appears prominently in Facebook’s advertising is the promise that ‘M’ can not only answer questions and give recommendations, but can go beyond that and even make purchases and reservations for you. The question is, how would you feel if the restaurant your date brings you to was not chosen directly by your date, but rather by ‘M’ on the basis of your date’s Facebook profile? Would this make it less personal? Would it communicate less about what your date thinks of you? Or would this be no different than when your date chooses a place based on the top three results that came back from a Google search?
A final thought: what will the first major hack of a personal digital assistant look like? What kinds of information will the hackers gain access to?