Defining Consumer Artificial Intelligence

This is an excerpt from a report I co-authored in 2017, titled Consumer Artificial Intelligence: The unintended consequences of implementing artificial intelligence for personal use.

The term Artificial Intelligence (AI) has a long and intricate history with roots in philosophy, mathematics, psychology, neuroscience, computer science, and linguistics. Existing definitions of AI are often confused by the discipline’s various academic subfields (e.g. neural networks, machine learning, computer vision, natural language processing, etc.), as well as its portrayal in media and popular culture. It is therefore not surprising to discover that, when we asked, both public and private sector organisations claimed that the term had become overloaded with countless meanings.

The academic field of artificial intelligence is concerned with attempts to understand and build intelligent entities (Russell and Norvig, 2010). According to seminal work by Russell and Norvig (2010), the various definitions of AI can be organised along two dimensions, represented in Table 2 below. The top row is concerned with a system’s reasoning process, whereas the bottom row considers a system’s behaviour. Definitions on the left measure the success of a system in terms of human performance, and those on the right in terms of ideal intelligence. Ideal intelligence is referred to as rationality and reflects a system’s ability to do the right thing (ibid.). These definitions lead to four possible goals for AI research, each of which has been widely pursued using different methods by different people.


                     Human Performance    Ideal Intelligence
Reasoning Process    Thinking Humanly     Thinking Rationally
Behaviour            Acting Humanly       Acting Rationally
Table 2: Definitions of AI are organised into four categories (Russell and Norvig, 2010).

When asked about the goals of AI, organisations we spoke to for this report described systems that fell into one of two broad categories: anthropomorphic or analytical. When describing systems with anthropomorphic goals, we heard phrases such as “AI emulates human intelligence” and “AI completes human tasks”. These views align with the human-centred group of definitions in Table 2, which measure success based on human performance. The second group of descriptions we heard were analytical, based on mathematics and advanced information processing. In this category, we heard phrases such as “AI is an evolution of statistical modelling” and “AI is made up of rational agents”. These descriptions align with the rationalist approach to AI, which considers ideal intelligence and focuses on mathematics and engineering.

In our discussions, we also observed a loose relationship between an individual’s view of AI systems and their attitude towards regulation and policymaking. Individuals who identified the goals of AI systems as anthropomorphic were more likely to talk about the technology as if it were human, suggesting an extension of existing human-centred regulations to AI systems (e.g. taxation). On the other hand, individuals who adopted a rationalist approach to AI discussed regulating a system’s inputs or its point of action rather than the system as a whole. In this way, they drew a distinction between the underlying mathematics (known as the agent function) and a system’s actions.

This report will adopt a rationalist approach to AI. Intelligence will be viewed as being concerned with acting rationally “so as to achieve the best outcome or, when there is uncertainty, the best expected outcome” (Russell and Norvig, 2010, p.4). The report will focus on artificial intelligence as the study of intelligent agents, where an intelligent agent is a system that perceives its environment through sensors and, guided by an agent program, uses actuators to take the best possible action upon that environment. Agents can also improve their performance over time, either by learning from an existing set of training data or simply through experience.
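The perceive–decide–act loop described above can be sketched in a few lines of code. This is only an illustrative sketch, not anything from the report itself: the class and function names (Agent, thermostat_function) and the toy thermostat scenario are invented for the example, but the structure follows the rationalist framing, in which the agent function (the underlying mathematics) is kept separate from the actions the system takes.

```python
class Agent:
    """Maps a sequence of percepts to an action via an agent function."""

    def __init__(self, agent_function):
        self.agent_function = agent_function  # the underlying mapping (the mathematics)
        self.percepts = []                    # history of sensor readings

    def step(self, percept):
        """Perceive the environment, then act on it."""
        self.percepts.append(percept)              # sensor input
        return self.agent_function(self.percepts)  # actuator output


def thermostat_function(percepts, target=20.0):
    """A toy rational agent function: choose the action with the best
    expected outcome, here keeping temperature near a target."""
    current = percepts[-1]  # latest temperature reading
    return "heat" if current < target else "off"


agent = Agent(thermostat_function)
print(agent.step(18.5))  # below target -> "heat"
print(agent.step(21.0))  # above target -> "off"
```

Note how regulation "at the input or point-of-action" maps naturally onto this structure: one can constrain the percepts fed into `step` or the actions it returns without reasoning about the agent function itself.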

Intelligent agents are being applied to a wide range of consumer applications, making AI an increasingly significant part of everyday life. One example of adapting AI for personal use is UK-based Biobeats, which combines health data from wearables with evidence-based machine learning to help consumers manage their health and productivity. Skim.it is a Google Chrome extension for students and teachers that uses AI to extract key information from articles on major news websites. Additionally, Jukedeck, with origins at the University of Cambridge, uses AI to compose and adapt professional-quality music, giving consumers audio files that are personalised to their needs. Consumer AI also exists beyond the realm of software agents, playing an increasingly important role in consumer electronics. As an example, the world’s most successful robotic vacuum cleaner, the Roomba, adopts AI techniques to cover surface areas more efficiently (Knight, 2015).

Personal assistants such as Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, and Google Assistant are all examples of AI deployed for personal use. What is more, SoundCloud relies on machine learning to determine which songs to recommend next, while Netflix, Facebook, and Snapchat use subfields of AI such as computer vision to improve their core products.

Informed by the discussion on artificial intelligence, we have coined the term Consumer Artificial Intelligence to describe this empirical trend.
Consumer AI is defined as the commercial development of intelligent agents for personal use.
In addition to these empirical examples of Consumer AI, our survey shows that the three most used technology categories among consumers are ones that make extensive use of AI: social networking (e.g. spam filtering), online shopping (e.g. product recommendations), and entertainment (e.g. computer vision applications). This ongoing development and adoption of Consumer AI is likely to lead to unintended consequences. This report explores unintended consequences in ten of the most commonly discussed interest areas, which emerged from our conversations with businesses, policymakers, and academics.

References

Knight, W. (2015). The Roomba Now Sees and Maps a Home. Technology Review. [Online]. Available at: https://www.technologyreview.com/s/541326/the-roomba-now-sees-and-maps-a-home/ [Accessed: 14 May 2017].

Russell, S. J. and Norvig, P. (2010). Artificial Intelligence: A Modern Approach. 3rd ed. Upper Saddle River, NJ: Prentice Hall.