While many assume AI will simply take over, I argue that humans in the loop will matter greatly in smoothing out the worst outcomes of an AI-first organization.
Consider that, at least under the current paradigm, AI is trained on human data, so it may exhibit the same psychological traits we humans do (in terms of outcome, not qualia).
Things like optimism, kindness, and companionship. But also stupidity, deception, and malice.
Of course, the AI does not process these traits in a “human way,” but the outcome is the same; from the outside, it is as if the AI had these features, simply as a result of generalizing from human data.
In short, we have an AI trained on human data with a human-inspired architecture (the neural net is far from a human brain, yet it is inspired by the neuron-like structure of our brains). Why would we expect such an AI to showcase traits different from those we humans showcase?
Not surprisingly, recent research shows that AI has begun to demonstrate a capacity for deception.
Now more than ever, powerful AI tools in the hands of “stupid humans” can cause irreversible organizational damage. My argument here is that in the age of AI, the organizational risk of putting these tools in the hands of the stupid is much higher than in the past.
Therefore, and this is the key takeaway: now more than ever, understanding human stupidity helps you manage it not only at the human level but also at the human-machine interaction level.
That is why you want to ensure you have “an intelligent system” in place to filter out, smooth out, and possibly weed out the irreversible effects of stupidity, starting with an understanding of its psychological features.
To do that, I want to return to a classic on human stupidity, The Basic Laws of Human Stupidity by Carlo M. Cipolla, and adapt it to building teams in the Age of AI.