When it comes to introducing artificial intelligence (AI), there is a gulf opening up in organisations.
On one side of this chasm are the technical teams working on AI, full of enthusiasm for the technology and its potential. On the other side are the business stakeholders. Less inclined to trust technology or to adopt it for its own sake, they are concerned about the risks of AI. If businesses are to take advantage of AI's potential, they need to find a way to bridge this chasm and move forward.
A comparison with marketing
I think this AI chasm has much in common with the chasm identified by Geoffrey Moore in his book Crossing the Chasm. Moore’s book focuses on the technology adoption lifecycle, which identifies which types of people buy at each stage of a technology’s development. Innovators love technology and actively search for new things. Early adopters are similar: they want to be ahead of the curve, but they are not looking to adopt technology for its own sake; they want to put it to use, and they recognise that there may be challenges in doing so. There is only a small gap between these two groups, because innovators are prepared to buy technology just because it is new, whereas early adopters need it to be usable.
The big gap, however, the ‘chasm’ of the title, is between early adopters and the early majority. Early adopters want to be ahead of others, and they know that they must put up with some pain in order to do so. The early majority want a product that will be useful and easy to use. Above all, they want a smooth transition to new technology: increased productivity, without revolution.
The issue for marketers is that early adopters do not act as role models for the early majority. In other words, you cannot sell to the early majority on the strength of early adopters’ experience. The early majority want to see examples of people like them using the technology. This, of course, raises the question of how you break into the early majority market when everyone is waiting for someone else to buy first.
Applying the idea to AI
Early adopters and the early majority have quite a lot in common with AI technology enthusiasts and business users respectively. Technology teams see the potential, and are willing to work to make things happen. Business users see the potential for things to go wrong: the risk that the technology will learn the wrong things and take control away from the business, for example. They also see the challenges of implementation.
Can we therefore use the principles in Crossing the Chasm to find a way forward for AI adoption? I think so.
Moore argues that to break into the early majority market, businesses need to target a specific niche that is, for whatever reason, more receptive. This allows some use cases to be developed. Within a small niche, members are also likely to share their experiences, spreading the product more rapidly. For AI, this would be analogous to IT specialists working with a single forward-thinking business unit to build a new AI system.
It would also mean sharing information about the development. For example, we know from experience with computer games that AI systems can be tightly controlled: they can learn within defined boundaries. Their development is driven by experience, but it is also constrained, ensuring that the business remains in control at all times. Once this information spreads throughout a business, attitudes are likely to change. Knowledge is power, after all.
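To make that idea concrete, here is a minimal sketch (my own illustration, not something from Moore or any particular game engine) of learning within defined boundaries: a simple tabular Q-learning agent whose exploration and updates are confined to a whitelisted set of actions. The action names and parameters are all hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical sketch: an agent that can only ever choose from an
# explicitly whitelisted set of actions. The business (or game designer)
# defines the boundary; the agent learns only within it.

ALLOWED_ACTIONS = ["move_left", "move_right", "wait"]  # the defined boundary

class BoundedAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Both exploration and exploitation are confined to ALLOWED_ACTIONS,
        # so the agent can never act outside the boundary.
        if random.random() < self.epsilon:
            return random.choice(ALLOWED_ACTIONS)
        return max(ALLOWED_ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # Standard Q-learning update: learning is driven by experience,
        # but only over the permitted action set.
        best_next = max(self.q[(next_state, a)] for a in ALLOWED_ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Hypothetical usage: one step of experience inside the boundary.
agent = BoundedAgent()
action = agent.choose(state="start")
agent.learn("start", action, reward=1.0, next_state="next")
```

However the rewards evolve, the agent’s behaviour can never leave the whitelist. That bounded-but-adaptive property is exactly what makes this style of AI easier for a business to stay in control of.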
Another key aspect of breaking into the early majority market is to demonstrate practicality. The same applies to AI. It seems likely that AI will become more acceptable as its benefits become more obvious. For example, if AI improves customer experience, then customers—and by extension, executives—will be more willing to trust it.
Taking responsibility for development
Ultimately, however, the business needs to be prepared to accept and embrace change. Change is necessary for innovation to happen, and without serious, disruptive innovation, companies are likely to fail in a competitive environment. For AI to be effective, it must be developed responsibly. This requires organisations to take responsibility for AI developments: setting down the ethical requirements, and then monitoring development closely. This may be hard, but I think we have a responsibility to take on that challenge and deliver the wider benefits of AI for society.