The most unexpected insights from our Connected homes came as a direct result of people living in a Design Fiction. In real life, people will gradually build up intelligent systems that interact with in-home technology and external services over a five-year period. By turning real homes into fictionalised versions of 2024 homes, our participants lived through that evolution in fast-forward. They were therefore acutely aware of what changed, and of how their expectations grew as their homes’ capabilities increased. This was most apparent in their interaction with the intelligent agents themselves.
Voice is Different
The medium of conversation itself creates very different levels of trust and expectation with intelligent agents. The concept of agency seems straightforward when we just have a few speakers in our homes, likely connected to some lightbulbs and a music streaming service. However, as with many connected systems, there are network effects at play here: the whole is very different to the sum of its parts. By this, I mean that in homes where most functions are connected to a single intelligent agent, there are fundamental changes in the way that humans perceive the system.
Voice is a far more natural mode of interaction than typing, tapping or swiping. We can speak far faster than we can type, but the difference is not simply one of speed. When we are having a conversation, all of our subconscious experience suggests that it is with another human, and the suggestions made in that conversation are attributed to human recommendation. Given the long history of word of mouth as the most powerful influence on purchase decisions, we start to see that the power of the spoken word outweighs the fact that it came from a virtual mouth.
The personality of the home
This is primarily because participants very quickly and very naturally personified the voices in their home. At a practical level this took the form of politeness: ensuring that the children said please and thank you when talking to Alexa. The specific roles that the agent played were also humanised, being variously described over the course of the research as “butler”, “assistant”, “servant” and “coach”. However, it is the personification of the system itself that has the most significant implications: when the agent controls the whole functionality of the home, it ceases to be seen as physically located in a speaker and instead becomes the voice of the home itself.
The voice and agency of the home have two complementary and profound effects. Firstly, there is a huge jump in the level of expectation about what the system should be able to do. You expect that your home should already know everything about you, and therefore be able to predict what you want based on that knowledge. Participants knew rationally that this was not yet the case, but felt that it was the natural evolution of the system. They also sought out ways to tell their home more about what they needed and wanted, in order to get closer to this goal of connected living.
Secondly, when agency resides in the home rather than in a speaker, concerns about Google and Amazon having access to that information appear to diminish. Expectations of intuition go hand in hand with a willingness to provide more personal data in order to achieve it. Provided that there is a clear increase in the agent’s ability to predict and serve, the value exchange in a data ‘transaction’ is seen as favourable.
Rationally, this may not be the case, and ethically it gives rise to a whole new level of concern about the potential power of surveillance capitalism. It was, however, very clearly observable in reflexive interviews with research participants. Yet this trust in the agent doesn’t necessarily extend to other services and brands that are seeking a foothold in the connected home. As more and more intimate motivations and data are shared with the home, trust is going to be the most significant attribute a service or product requires to be invited in.
Trust in the collaborative system
Trust operates at many levels, of course. Trust that the service will work is table stakes, and “working” means that it will slot seamlessly into the existing connected system of hardware, agent and interaction. We also need to be able to trust that a new service can access the data that the system already holds, without either introducing vulnerabilities or asking again for something that it should already know. “Only asking once” is an overlooked priority: it signals that services are themselves intelligent, and able to collaborate intuitively with the rest of the system. As services for intelligent agents begin to access sensor and wearable data, take control of security cameras or door locks, and change their behaviour based on rules set by a banking, health or weather service, trust in the interoperability and collaboration of those services is key to success. Importantly, our participants did not expect that they would personally have to either set the rules for a new service or define how it collaborated with the system. They saw that as being taken care of in the background by their agent, in the same way that the agent manages the temperature or house cleaning today.