Aren’t we all a bit afraid of it? Of robots that can read our thoughts and minds? The absolute end of privacy? But will it ever come that far? This blog is written for those who ask themselves these questions, and I’ll do my best to formulate a satisfying answer in which I explore both the current state of technology and the ethical consequences of such a scenario, as well as how we could avoid those consequences. But as stated in my first blog on this topic, it remains a prediction, a glimpse into the future. And although it’s a guess based on arguments, only time will tell what will come true and what won’t.
So I guess we’re a bit afraid of a scenario that would reduce our (and future generations’) privacy to less than zero. The scenario in which, in order to make the world a better place, we would all need to wear a mind monitor, which makes our thoughts not ours anymore … sounds surrealistic, no? But is there a reason to be scared, or is it merely irrational?
So let me tell you where we are today. We humans can’t read each other’s minds, and so far robots and machines can’t either, even though they sometimes give the impression that they can. As you could read in the previous blog posts, state-of-the-art technology is approaching a point at which we can accurately quantify stress levels using real-time, non-invasive techniques that work outside a lab environment. In other words, there is still quite some work left before we develop a wearable that reads our thoughts. I personally think we will move towards a scenario in which we can distinguish emotions rather than thoughts, simply because a single small thought is so volatile that it doesn’t induce non-invasively measurable changes in the body, the way stress does through the autonomic nervous system. For sure, there are changes in the brain when we think, but imagine how we would have to measure those. Invasively indeed! Therefore, not very relevant to the surrealistic scenario I described earlier.
Now, to dwell a bit more on this scenario, assume it comes true and we could read each other’s minds. Imagine that we could stop intolerance towards others by means of a system that could prevent serious conflicts, escalation of arguments, … Maybe the world would become a more tolerant place, but the loss of privacy is a high cost, isn’t it? So if we want to make the world a better place, isn’t it better to tackle the cause of a problem rather than its symptoms? And shouldn’t we (and the policy makers) keep that in mind?
Because if we want to stop the evil in this world, there must be a better way than reading other people’s minds. So let there be doubt.
Voltaire once taught us: doubt may be unpleasant, but absolute certainty is absurd. I personally think Voltaire could be right, because too often, bad things originate from people who act as if their vision, opinion or statement is absolute truth. And too often this leads to intolerance towards those who doubt them, since the so-called ‘failed attempts at thinking’ of their opponents ‘should be forgotten quickly’, ‘are proof of being stupid’, and even worse … That’s what gets people into trouble, and it is the reason why we would need mind readers (in our scenario) in the first place.
So before we decide to equip everyone with a mind monitor, I would like to make a plea for a bit more doubt in the world. For more people who consider that the other could be right, and who look at their own thinking and knowledge with modesty.
Because most likely, the way the intolerant act and think is also largely based on biological impulses and social conventions that they internalised uncritically, as is the case for most of us. We tend to swallow the biases and prejudices of our surroundings without even noticing.
This provides a good reason to doubt a bit more, but the biases in our thoughts and actions happen to be quite hard to see and even harder to accept. People might quickly claim that they have changed. But if it happened too fast, don’t be surprised if they fall back into their old habits. Or they claim the opposite: that people cannot change the way they are, and that it’s all the fault of their DNA, their social environment or the way they were educated or raised, … And although I won’t deny that these factors are hard to resist, and that they should be acknowledged, I also believe that people bear responsibility for the life they are living. So no, I won’t hand them the confirmation to seek direct relief in this sort of determinism and basically say: “I’m not responsible for what I think, nor for what I do or who I become.”
Because that’s like the highway to trouble.
Therefore, I think we should encourage ourselves and others to see those situations in which someone or something makes us aware of some sort of prejudice or bias in our thinking as wake-up calls to take personal responsibility for how we think and act. It’s about developing our own opinions and set of values upon which we act, while acknowledging that we are biased in ways we can’t even imagine. We probably don’t possess the absolute truth, and should therefore be tolerant of different perspectives. And when I say “different perspectives”, I of course exclude criminal ideologies and points of view.
So when you open your mouth, also open your mind. Because there might be a point of view that’s more correct than yours. Be willing to discuss and to adjust your thinking as you learn from others. Only then can we find the truth, and maybe this approach has more potential to make the world a little bit better than mind-reading robots will ever have.
So although I’m an AI engineer, I’m quite convinced that we won’t progress towards the scenario I described in the introduction. First of all, technology still lacks the ability to read thoughts, and it will take a tremendous amount of effort to give it that ability. We also shouldn’t forget that today’s technology, among which the stress monitors we discussed, has many interesting and noble applications. Remember the stress monitor used during racing, where potentially deadly mistakes can be avoided, or the horses that can be selected more effectively for the mounted police? In other words, the benefits can be numerous if we use this kind of technology wisely.
So if a thought reader were ever developed, I hope that future generations will be wise enough to tackle problems at the root. Thought monitors might complement this process, for instance by giving direct feedback to a user in an environment where his or her privacy is respected, but they should never replace it. Because I think we shouldn’t read people’s minds as if they were sensors in a network. You know, not-knowing is somehow charming, and it can pave the way to a flourishing society in which mystery keeps life sparkling. But just in case you’re about to throw this out of the window, better remember the old saying: you don’t know what you’ve got until it’s gone.
Because only after you lose something precious do you realise that you were blind and that you didn’t know much after all. So perhaps this is a reminder that the truth belongs to those who look at her from … a different perspective.