The First Step Towards Responsible AI Needs To Be About People Not Strategy!


I was recently consulting for an organisation that was looking to adopt a framework to govern the implementation of Artificial Intelligence (AI) technologies. Like many organisations in their sector, they had been running various ‘lab’ experiments for some time and had seen positive results; but something was still holding them back from wholesale investment.
A major consulting firm had encouraged them to ‘accelerate’ their innovation by using a framework to govern the roll-out. I asked them where they felt it needed more focus, and they responded that it felt somewhat vanilla, a re-hashing of any old IT project management best practice. “Surely there is something different about AI?” they asked.
I couldn’t agree more.
There is no magic to AI. Today’s AI is a collection of methodologies that apply extreme reductionism to Big Data in order to elicit patterns, calculate probabilities or make predictions. What we called AI in yesteryear we now write off as ‘expert systems’ or ‘rule-based engines’. No doubt, the fancy math of the future will steal the crown from today’s AI and we will look back at this period aghast at how carried away we became.
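To see how little magic is involved, consider what “eliciting patterns” and “calculating probabilities” can reduce to: counting. The sketch below is a deliberately toy illustration (the email snippets and labels are invented for this example) of how a crude spam probability falls out of nothing more than tallying words.

```python
from collections import Counter

# Hypothetical "training data": a few past messages labelled spam or not.
emails = [
    ("win money now", True),
    ("meeting at noon", False),
    ("win a prize", True),
    ("lunch at noon", False),
]

# "Pattern elicitation" here is just counting: how often does each word
# appear in spam versus non-spam messages?
spam_counts, ham_counts = Counter(), Counter()
for text, is_spam in emails:
    (spam_counts if is_spam else ham_counts).update(text.split())

def spam_probability(word):
    """Crude estimate: the share of this word's occurrences that were spam."""
    s, h = spam_counts[word], ham_counts[word]
    return s / (s + h) if (s + h) else 0.5  # unseen words: no evidence either way

print(spam_probability("win"))   # 1.0 - seen only in spam
print(spam_probability("noon"))  # 0.0 - seen only in non-spam
```

Real systems replace the counting with far fancier math, but the reductionist spirit, data in, probability out, is the same.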
While there is no magic to AI, that’s not to say its role in our workplaces and our homes won’t be hugely impactful. This has very little to do with the technology itself, however, and everything to do with how the technology has the potential to change the nature of the relationships between us as humans, between us and machines, and between machines.

Human to Human Interface

Take for example the human to human interface. Facebook’s newsfeed uses AI to decide what slices of information I see from my network. It makes countless computations, the result of which will invariably be some loss. Do I see the pictures from my friend’s birthday party or the update from another who has just lost her dog? Will I be angered by some outrageous news article or amused by a cat sitting on a robotic vacuum cleaner?
Filtering information in ways that shape who we communicate with, and how and when, carries an ethical challenge. Considering the issues and playing ombudsman to such challenges is beyond the professional experience of the average (or even the exceptional) data scientist, yet these are, for the most part, the people we leave these decisions to.
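The structure of that loss is easy to make concrete. In the toy sketch below (authors, topics and engagement scores are all hypothetical), a ranking model scores each post and the feed shows only the top few; everything else is silently dropped, which is exactly where the ethical weight sits.

```python
# Hypothetical feed items with engagement scores a model might assign.
posts = [
    {"author": "Alice", "topic": "birthday party", "score": 0.91},
    {"author": "Beth",  "topic": "lost her dog",   "score": 0.34},
    {"author": "News",  "topic": "outrage piece",  "score": 0.88},
    {"author": "Carol", "topic": "cat on vacuum",  "score": 0.77},
]

FEED_SIZE = 2  # only so many slots on the screen

def rank_feed(posts, k=FEED_SIZE):
    """Keep the k highest-scoring posts; everything else is silently lost."""
    return sorted(posts, key=lambda p: p["score"], reverse=True)[:k]

shown = rank_feed(posts)
hidden = [p for p in posts if p not in shown]
```

Whoever chooses the scoring function, and the cut-off, is making the ethical call, whether they recognise it or not.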

Human to Machine Interface

When historians write the story of the impact of digital technology on society, my guess is Bryan Lawson will be credited with a much bigger role than he is today recognised for.
His 1980 book How Designers Think marked the birth of the ‘design thinking’ movement, now the buzzword of the day in boardrooms around the world. Every hip consultancy starts their deck with some variation of the Blockbuster Video/Kodak story and how design thinking can help sleepy corporations avoid annihilation by some bot-based solution dreamt up by a teenager in Shoreditch or Silicon Valley.
The result is that we have collectively never thought so much about the design of our technology as we do today. While this might seem like an obvious improvement, there is a subtle difference between clever design and deliberate manipulation, a line the tech industry crosses all too often.
The shape of the buttons on the interface, the color of the text, the choice of words used in captions: all have an impact on how people use technology. When the design of technology is intended to make it as addictive as possible, as Sean Parker, the early Facebook investor, confessed last year, then few would deny there is a problem. But how many of us ensure that a code of ethics governs the design choices we make on user interfaces? And of the few that do, how many publish it?

Machine to Machine Interface

There are many aspects to the machine to machine interface that are worthy of exploration, but the one I’d like to highlight today is the fact that at the time of writing, in late 2018, humans are predominantly the integration point between machines.
Everyone who has ever faxed (yes, I hear fax machines are still used) or emailed documents to a back-office processing centre will be aware that some poor person is faced with the task of re-keying that information into another system, saving you that drudgery.
The offshoring and outsourcing of the late 1990s and 2000s took bad business processes and moved them to places where labor was cheap. Now that labor costs are rising in Eastern Europe, South-East Asia and South America, we’re starting to look at Robotic Process Automation (RPA) solutions to mitigate the expense.
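At its core, the work an RPA bot takes over from that human re-keyer is a mechanical mapping between two systems’ formats. A minimal sketch, with an entirely hypothetical record and field mapping, might look like this:

```python
# Hypothetical record as extracted from a faxed or emailed form.
incoming = {"Customer Name": "J. Smith", "Acct No.": "00123", "Amount": "1,250.00"}

# The target back-office system expects a different schema. This mapping
# is what a human re-keyer applies by hand, and an RPA bot applies by rule.
FIELD_MAP = {"Customer Name": "customer", "Acct No.": "account_id", "Amount": "amount"}

def rekey(record):
    """Translate a record into the target system's schema."""
    out = {FIELD_MAP[key]: value for key, value in record.items()}
    out["amount"] = float(out["amount"].replace(",", ""))  # normalise the number format
    return out

print(rekey(incoming))
```

When the integration point is this mechanical, the human adds no judgment, which is precisely why such roles are first in line for automation.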
My former CEO at Deutsche Bank, John Cryan, once said of the organisation that many of its staff worked “like robots”, implying that a future of robots behaving like people would be preferable to the present state. This view is by no means universally accepted, and it highlights an important issue: leaders today will need to decide where the line is drawn, and what level of human touch or subjectivity ought to be preserved in a more fully automated future.
There are countless studies on the future of work and the impact of automation on jobs, and the task now for organisations who want to lead us into the Fourth Industrial Revolution is to have that difficult conversation with us all as to where and when we should draw the lines.
This is why people need to be put before strategy. It would be easy to build a target state in which the minimum headcount is required to keep a business running, but what of the social impact? To what extent do we owe an obligation to support those we lay off? What considerations should we have for how our customers (and our colleagues) interact with technology? And with each other?
This is why you can only talk about responsible AI if you are responsible to people first.
