
AI for All

Writer: Zoë Rose

Updated: Feb 17



Yesterday at #CiscoLiveEMEA I was a panelist speaking with Rebecca Stone, Kimberly Moss, and Jason Warfield on the panel AI for All: bridging innovation and inclusive pathways. The other panelists shared many fab points - I'd recommend watching if you can see the recording - and I wanted to share a few thoughts of my own below.


Whilst AI is a sexy term, and every vendor seems to be adding it to their marketing, I don't believe we truly understand the impact of what it entails.


Let's start from the beginning. As a consultant for many years, I joined clients to support projects. These could be failing projects or projects that were just starting out - in both situations, the first thing I did was understand what the requirements were. If they hadn't been gathered, I prioritised gathering them. You cannot succeed if you don't know what you actually need or the goal you're working towards.


Over time the industry talked about automation, and again, when the requirements were missing, that led to workflows failing faster. Then we talked about Machine Learning, and we were failing faster still, but not realising it until later on. Now with AI, if we still don't look at our actual needs and goals, we're failing faster in a sexy-sounding way, and often without the ability to understand why, as so many times 'AI' is not understood. A favourite book of mine is Hello World by Hannah Fry, and she gives great examples of technological solutions that people accepted as-is without questioning, even when the results clearly didn't make sense. She even talks about an autonomous car that ended up driving into the water instead of over a bridge, because it 'learnt' that roads are surrounded by grass on both sides.


Back to the panel yesterday: we talked about how we can make use of AI in a way that doesn't reinforce our bias. How can we create an inclusive solution that improves our environments? I think it goes back to the beginning - understanding what we need. A few questions that come to mind are:


  • What are you trying to achieve with AI? 

  • What data are you giving it, how is it learning? 

  • Can you correct the data at a later stage? 

  • Is your data safe? Is it limited to your tenant, or is it being used to train the model, which could lead to data leakage? 

  • How do you validate the results are accurate? 

  • How long will you be reviewing or sampling the results to ensure it continues to be accurate? 

  • How vital is accuracy with this workflow? 


As a hiring manager, I look at the goal of the role I'm trying to hire for - my biggest fear is allowing my unconscious bias to hire in my own image and not give all parties a fair chance. I ask myself what skills are actually needed and what are nice to have, and I reflect on why one person may stand out to me over another. I also work with recruitment to ensure I'm seeing the right CVs, not just the ones that tick all the boxes.


As a people manager, I ask the people I'm managing how they want me to work with them: what I can do to support them, where they want to go in their careers, and what I can do to ensure they understand the needs of the business.


What does this have to do with AI? I'm ensuring the teams I work in are diverse - the points of view, skills, and experiences vary. When we're using tooling that automates our tasks, we each question it differently. We question the results rather than accepting them at face value, and we draw on our experiences to embed more robust workflows that work for everyone. It isn't perfect, and there will always be a level of bias, but with different eyes on the task and continuous improvement, we can hold ourselves accountable when things need to be adjusted.
