A leader’s guide to building autonomous teams (batteries not included)
- 12 April, 2016 11:16
If you have been lucky enough to visit Masdar City in the United Arab Emirates or the Henn Na Hotel in Japan, you have experienced the next evolution of autonomous machines for the consumer market.
Masdar City is designed to be a hub for clean technology companies. Transport within the city is managed through a personal rapid transit system of point-to-point electric driverless cars.
Japan’s Henn Na Hotel prides itself on efficiency and complements its staff with ‘actroid’ robots, which greet you, carry luggage and make coffee. Check-in and check-out tasks are managed by the droids, one of which is a dinosaur – for English speakers, of course.
The pace of change in technology has futurists refining their predictions at ever shorter intervals. In October 2014 Elon Musk, head of SpaceX and Tesla, stated autonomous cars would be available by 2023. Fast forward eight months to 2015 and his estimate had been revised to 2018 – a staggering change in expectations.
Engineering and science, the constructs of our labours, are making leaps and bounds on the journey of continuous improvement. Bins in Singapore not only inform councils when they are reaching capacity but also use this information to direct garbage truck drivers along the most efficient routes – and, consequently, to increase the profits to be made.
Conversely, the leadership and management styles we as humans employ have advanced very little on those of our predecessors. As leaders we should want to foster the characteristics of these machines in our teams: they continuously improve with minimal supervision – they are autonomous.
Building an autonomous leadership practice is not a new concept – it is on the elusive path to building a high performing team and very much akin to the following popular organisation types: the holacracy, participatory management, and the somewhat outmoded ‘flat structure’.
The key similarity is the distributed decision-making power base, which even Peter Drucker alluded to in his studies (I told you it was not novel!).
He stated that: “No institution can possibly survive if it needs geniuses or supermen to manage it. It must be organised in such a way as to be able to get along under a leadership composed of average human beings.”
Coming from a technology background, being prescriptive was ingrained in our thought process – it led to quality products that were tried and tested, and subsequently to success. I was proud of the scientific methodology, as there was always a logical path to the best outcome.
When I was promoted into management I was fortunate to build a new function and my own team – every manager’s paradise. Just as the team was notching up successes, I was told I would be given an opportunity to transition into what was considered a low-performing group with a history of autocratic leadership in a hierarchical organisation. I set myself a task: within one year, we as a team would change this.
The following is an extract of our journey, addressed to managers in hierarchical structures riddled with ageing processes. For them, there may just be moments of likeness to learn from.
Enter level 1: The unknown (learning)
In building an autonomous machine, it is crucial to understand the available data and knowledge of the environment in which it exists. Without knowing the limitations and complexity of the data it is difficult to determine the model, or paradigm, which is to be used for learning.
Unlike most basic software applications, which follow prescriptive behavioural rules, autonomous machines cannot have every outcome coded into them.
Within the first month of moving into the role as manager of the team, I met with everyone we worked with: anyone who supplied us, had an impact on us or had an interest in us.
I met each of these people individually – team members, counterpart managers, senior managers, the CIO and business stakeholders.
I asked them three simple questions:
- What service or output do you receive from the team?
- How could we improve?
- What do you enjoy about working with us?
I then held a team meeting and asked the same questions – this time of the collective that reported to me. By the end of these interviews I was not only exhausted and overwhelmed, but disheartened. I had expected to consolidate a prioritised list of issues to resolve; instead I had a list longer than an army barracks roll call.
Supervised learning: Empowerment
Of the learning paradigms used to build knowledge for AI and autonomous machines, one of the first-generation approaches was ‘supervised learning’. Key to this approach is providing training examples and labelled input for the learning algorithm to work on.
Monitoring accuracy and performance allows the computer scientist to tweak the algorithm. This approach is often thought of as the one requiring the most guidance.
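The supervised-learning loop described above can be sketched in a few lines of Python. This is a minimal illustration, not any real team's tooling: the data, labels and simple threshold model are all invented for the example.

```python
# Minimal sketch of supervised learning: labelled training examples
# go in, a simple model comes out, and measured accuracy tells the
# 'scientist' whether to tweak it. All data here is illustrative.

def accuracy(examples, threshold):
    """Fraction of examples the rule 'x >= threshold -> True' gets right."""
    correct = sum(1 for x, label in examples if (x >= threshold) == label)
    return correct / len(examples)

def train_threshold(examples):
    """Pick the candidate threshold with the best training accuracy."""
    return max((x for x, _ in examples), key=lambda t: accuracy(examples, t))

# Training set: (defects found in review, was the release delayed?)
training = [(1, False), (2, False), (5, True), (8, True)]
best = train_threshold(training)
print(best, accuracy(training, best))
```

The key point is the feedback loop: the accuracy measure plays the role of the supervisor, telling us how well the learned rule matches the labelled answers.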
I spent the following three months with the team summarising the problems into ‘categories’. These eventually became the team’s vision of how they could improve.
As a team we knew what we aspired to be. The next step was finding the yellow brick road to get there and, most importantly, the relationships and skills we needed to do so. These relationships targeted stakeholders who were key influencers in other areas and would enable rather than block.
The first priority was the quality of our work. The majority of quality issues would require time, and rebuilding trust and confidence with our stakeholders. Initially they were very quick to recall issues from the past – corporate memory is greater than an elephant’s.
The key was to be transparent and acknowledge the known issues. We explained how we would target improvement; the perceived priority of each issue amongst all other initiatives; and who in the team would be the primary contact.
This was powerful for two reasons. First, it allowed our stakeholders to collaborate with us on whether the prioritisation was accurate; second, the team was empowered by being bestowed the same authority as me in relation to senior stakeholders.
Reinforcement learning: Affirmation
Improvements in AI and the need to solve more complex problems meant ‘basic’ approaches such as supervised learning were no longer practical. It became essential for the machine to adapt to the environment it exists in and to learn without, or with minimal, input data.
Reinforcement learning was one of the newer approaches to resolve this dilemma – it provides a ‘reward’ when the machine selects tasks or produces results that are close to optimal.
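That reward mechanism can also be sketched briefly. The actions and reward values below are invented purely to illustrate the idea of a value estimate being nudged by feedback rather than trained on labelled answers:

```python
# Sketch of reinforcement learning: no labelled answers, only a
# reward signal that nudges the value estimate of each action.
# Actions and rewards are illustrative, not a real incident process.

values = {"escalate": 0.0, "fix_in_team": 0.0}  # estimated value per action
alpha = 0.5                                      # learning rate

def learn(action, reward):
    # Move the running estimate towards the observed reward.
    values[action] += alpha * (reward - values[action])

# Try each action repeatedly, observing its (made-up) reward.
for _ in range(10):
    learn("escalate", 0.2)      # slow hand-offs pay off poorly
    learn("fix_in_team", 1.0)   # resolving within the team is rewarded

best = max(values, key=values.get)
print(best)  # the action with the highest learned value
```

After enough episodes the higher-reward behaviour dominates the value table – the computational analogue of affirmation reinforcing a valid path.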
One of the first changes was removing myself as the key dependency and sole quality assurance monitor for all tasks. Initially, I made the technical leads and the engineers on call responsible. As on-call duty worked off a roster, this dispersed ‘leadership for the week’ on a rotational basis.
This included ensuring all calls were resolved within SLA, all changes were reviewed and tested, and attendance at shared services meetings was representative. These measures were reported at regular intervals, both within and outside the group.
Evidence of the team being accountable and making decisions for themselves was clear to me when the team faced a major issue. This incident required us to respond within a few minutes and have a resolution within 30 minutes.
I could have easily stepped in and told them the course of action; rather I made a calculated call to observe what happened. We had successfully made changes to the distributed leadership model in the past few months; this was a test to observe whether these perceived improvements had changed the behaviour of the team.
They began formulating ideas, none of which at first resolved the core issue. Amongst themselves they debated and refined ideas, eventually reaching a conclusion and seeking my approval, for official purposes, at the last moment.
The need for a manager to be involved in troubleshooting and coordination for deep technical matters was removed. In the past this was unheard of.
Two key steps for being autonomous had been built without them realising it:
- My affirmation of their behaviour was reinforcement of a valid path being taken.
- They had built the knowledge base for more efficient problem resolution for subsequent times.
Following this, and successful learning in other scenarios (some of which came out of mistakes), I removed myself from close observation. In the organisational context it was initially important to have supervised learning, to ensure the team’s vision was leading down the path of improvement.
Some may refer to this as an element of micromanagement. Eventually, when they had proved their capabilities, they moved into unsupervised territory, still achieving the same goals.
Progress to level 2: Removing the training wheels
All autonomous systems need a period of exploration and study. A key issue with most AI is its need for continued access to, and growth of, its training sets in order to improve. Without this, its ability to handle new situations is challenged, limiting its performance.
This will often mean experiencing the dreaded corporate ‘F’ word – failure. It may not be one failure; it could be many. The more often and the faster anyone fails, the better they become and the more knowledge they obtain – provided they learn from the failure.
It is ludicrous to believe that failure cannot occur amongst professionals – these are the same individuals who did not receive perfect marks for every subject in every year of their schooling. Not every decision they made growing up left them unscathed.
What needs to be created is a safe environment for the team to learn in - whether this is through personal education time, an area or sandbox to potter with new ideas and tools, or open communication across the organisation.
Whatever the construct may be, it needs to exist.
Not only does it need to exist to ensure the right form of experimentation occurs, but it should also be monitored and measured for growth or potential development.
This is the key difference between unintentional learning and learning directed towards autonomy. As humans we forget what we have altered or improved; that knowledge is lost and failure is re-experienced.
In doing this, the organisation manifests symptoms of insanity; the definition of such insanity being the action of doing the same thing over and over and expecting different results.
By creating basic constructs for observing and measuring the learning, progress becomes visible: you can see what was there before and what is there now. It is a leader’s duty to provide this to the team and to facilitate their self-monitoring, as when you are in the midst of learning you often do not see how much has been achieved.
Classical training, such as that targeted through career development discussions, was not the only means of learning. I allowed my team to attend the many free seminars and workshops on offer – in IT there is an abundance of these.
I invited a number of senior engineers from other organisations, some international, to speak to the team on latest technologies and capabilities.
Internally we had subject matter experts who were required to share their knowledge through weekly information sharing sessions with the remainder of the team.
A significant problem was the lack of test beds – we did not have them for all environments, which exacerbated the fear of failure when making changes to, or implementing in, production systems. This had to be resolved.
We built the business cases and sourced the funding these systems required. In one case it took eight months of ‘selling’ the idea and its importance to our senior leaders.
Progress to level 3: Diversity in the network
In early forms of AI, each algorithm worked in isolation to produce a certain output. With the introduction of neural networks – the premise of which was to emulate the human mind – feedback between the ‘neurons’, and across outer layers of the network, provides novel ways of learning and long-term memory.
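The feedback idea is easiest to see in a single artificial neuron. The sketch below uses the classic perceptron rule to learn logical AND; it illustrates the principle only, with toy data:

```python
# A single 'neuron' with feedback: the prediction error flows back
# to adjust the weights, which is the learning mechanism neural
# networks build on. Toy example: learning logical AND.

def step(x):
    return 1 if x >= 0 else 0

weights, bias = [0, 0], 0
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for _ in range(10):                      # repeated passes over the data
    for inputs, target in data:
        pred = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
        error = target - pred            # the feedback signal
        weights = [w + error * x for w, x in zip(weights, inputs)]
        bias += error

preds = [step(sum(w * x for w, x in zip(weights, i)) + bias) for i, _ in data]
print(preds)  # [0, 0, 0, 1]: the neuron has learned AND
```

Each wrong prediction feeds an error back into the weights; after a few passes the neuron classifies every case correctly – feedback turned into long-term memory.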
As a human, one is not perfect. For someone like me, who strives to have no blemishes on the record and keeps an OCD ‘clean desk’, this can be difficult to fathom.
Rather than always attempting to improve on what may be deemed as inadequacies, I seek to fill in the gaps through the team. This ensures balanced ownership and a strong decision making base.
As an example, I may not be a highly artistic individual, but I most certainly make an attempt to find those who are.
It is not only a matter of ensuring diversity in leadership but also of ensuring variance within the team. This includes differences in skillsets and knowledge, and in frames of reference, which may span industries and geographical locations.
I once worked in an organisation that had a team of engineers who all came from one university. In that cohort it was in fact a young individual from another city, outside the traditional technical stream, who solved problems in the most innovative ways.
While there was merit in the traditional engineers, they did not push the organisation’s way of thinking, as their education had given them only one frame of reference for problem solving.
As a leader you need first to acknowledge what works best within your core team before trying to start in a greenfield. If these are individuals who are already high performing and successful, or have the potential to be, aim to grow the team not just by filling the gaps but also by finding new members who can excel and mentor others in specific areas.
Of key importance in introducing and maintaining diversity is not going to the other extreme of the spectrum, where a team is in conflict due to irreconcilable differences.
If there is no clear direction which the team prescribes to and believes in, such diversity can lead to destructive conflict and can have devastating effects on morale and collaboration.
As much as I would like to state that following the rules above will achieve success, it is not guaranteed. These principles of autonomous machines can, however, serve as a guide to assist a leader in building their team; think of it as a leader’s supervised learning.
Our teams are not machines that can be programmed; they have minds, souls, consciousness. They dream and have aspirations. Each of us is unique in our capacity to enable innovation.
I believe AI is simpler than people leadership, as it is built on a logical construct. People, not just knowledge or machinery, are the true supply for our 21st-century markets. The Harvard Business Review recently labelled this transition the ‘human economy’.
There are, however, certain likenesses with these digital constructions. There is a core, or master, algorithm to which all elements are aligned.
What we can learn from machines is the persistent vision of a leader ensuring our teams are aligned to the same goal – our master algorithm. Business environments are emergent: they morph all the time and will eventually be obsolete. Just as our machines adapt to change, so should our teams. Such organisational maturity will be the path to creating high-performing teams.
One year on, the team is a success story and one of the top performers in operational reports. The quality and skills of the engineers are highly esteemed even beyond their official technical expertise.
There is more to be had as we embark on our journey of delivering the strategy we built together. Stay tuned for hacks to level-up in the next edition.