The First Step Towards Responsible AI Needs To Be About People Not Strategy!


I was recently consulting for an organisation that was looking to implement a framework to govern the implementation of Artificial Intelligence (AI) technologies. Like many organisations in their sector, they had been running various ‘lab’ experiments for some time, and had seen positive results; but there was still something holding them back from wholesale investment.
A major consulting firm had encouraged them to ‘accelerate’ their innovation by using a framework to govern the roll-out. I asked them where they felt it needed more focus, and they responded that it felt somewhat vanilla, a re-hashing of any-old IT project management best practice. “Surely there is something different about AI?” they asked.
I couldn’t agree more.
There is no magic to AI. Today’s AI is a collection of methodologies that apply extreme reductionism to Big Data in order to elicit patterns, calculate probabilities or make predictions. What we called AI in yesteryear we now write off as ‘expert systems’ or ‘rule-based engines’. No doubt, the fancy math of the future will steal the crown from today’s AI and we will look back at this period aghast at how carried away we became.
While there is no magic to AI, that’s not to say its role in our workplace and our home is not going to be hugely impactful. This has very little to do with the technology itself, however, but rather because of how the technology has the potential to change the nature of the relationships between us as humans, between us and machines and also between machines.

Human to Human Interface

Take, for example, the human to human interface. Facebook’s newsfeed uses AI to decide what slices of information I see from my network. It makes countless computations, the results of which invariably involve some loss. Do I see the pictures from my friend’s birthday party or the update from another who has just lost her dog? Will I be angered by some outrageous news article or amused by a cat sitting on a robotic vacuum cleaner?
Any filtering of information that affects who, how, or when we communicate with each other carries an ethical challenge. Weighing these issues and playing ombudsman to the resulting disputes is beyond the professional experience of the average (or even the exceptional) data scientist, yet these are, for the most part, the people to whom we leave these decisions.

Human to Machine Interface

When historians write the story of the impact of digital technology on society, my guess is Bryan Lawson will be credited with a much bigger role than he is today recognised for.
His 1980 book How Designers Think marked the birth of the ‘design thinking’ movement, which is the buzzword of the day in boardrooms around the world. Every hip consultancy now starts its deck with some variation of the Blockbuster Video/Kodak story and how design thinking can help sleepy corporations avoid annihilation by some bot-based solution dreamt up by a teenager in Shoreditch or Silicon Valley.
The result is that we have collectively never thought so much about the design of our technology as we do today. While this might seem like an obvious improvement, there is a subtle difference between clever design and manipulated effect, a line the tech industry crosses all too often.
The shape of the buttons on the interface, the color of the text, the choice of words used in captions – all have an impact on how people use technology. When the design of technology is intended to make it as addictive as possible, as early Facebook investor Sean Parker confessed last year, few would deny there is a problem. But how many of us ensure that a code of ethics governs the design choices we make on user interfaces? And of the few that do, how many publish it?

Machine to Machine Interface

There are many aspects to the machine to machine interface that are worthy of exploration, but the one I’d like to highlight today is the fact that at the time of writing, in late 2018, humans are predominantly the integration point between machines.
Everyone who has ever faxed (yes, I hear they are still used) or emailed documents to a back-office processing centre will be aware that some poor person faces the task of re-keying that information into another system, saving you the drudgery.
The offshoring and outsourcing of the late 1990s and 2000s took bad business processes and moved them to places where labor was cheap. Now that labor costs are rising in Eastern Europe, South-East Asia and South America, we’re starting to look at Robotic Process Automation (RPA) solutions to mitigate this expense.
My former CEO at Deutsche Bank, John Cryan, once said of the organisation that many of the staff worked “like robots”, implying that a future of robots behaving like people would be preferable to the present state. This view is by no means universally accepted, and it highlights an important issue. Leaders today will need to decide where the line is drawn, and what level of human touch or subjectivity ought to be preserved in the more fully-automated future.
There are countless studies on the future of work and the impact of automation on jobs, and the task now for organisations who want to lead us into the Fourth Industrial Revolution is to have that difficult conversation with us all as to where and when we should draw the lines.
This is why people need to be put before strategy. It would be easy to build a target state requiring the minimum headcount to keep a business running, but what of the social impact? To what extent do we owe an obligation to support those we lay off? What considerations should we have for how our customers (and our colleagues) interact with technology? And with each other?
This is why you can only talk about responsible AI if you are responsible to people first.
