Let me go back to «the different waves of AI», which I introduced in the first part of this post. I would like to deepen our understanding of them with more detail about how they are currently manifesting in society.
Internet AI is about personalization. This is AI as mainly used by Facebook and Google when they relate to users. This is what The Social Dilemma masterfully illustrates. This is the AI that can hold an ongoing conversation with every user, one at a time, creating personalized information bubbles. This AI is at the root of all complaints about technology and smartphones being highly addictive and polarizing society toward the extremes.
Business AI is about optimization. This is the AI used by Facebook and Google when they relate to advertisers. This is also illustrated in The Social Dilemma regarding business and politics. It is likewise the AI used by Microsoft and Amazon to better manage any business, any function, any industry, including their own. This AI is about optimizing and squeezing value out of every single opportunity. This AI is at the root of all major disruptive forces in industries as different as food delivery (Uber Eats), ride-hailing (Uber), room rental (Airbnb) or neobanking.
Perception AI is about amplification. This is AI when it bridges from digital to offline, back and forth, through all kinds of devices and sensors. This AI is at the core of speech technologies and surveillance technologies. It is also the root of all advanced health services, and of everything related to smart cities and real-time monitoring of anything that can be monitored.
Autonomous AI is about human substitution in narrow areas of activity. This is the AI of driverless autonomous vehicles (from self-driving cars and trucks to autonomous drones) and robotics in general, by which I mean almost anything robotic you can imagine inside a building (including a factory or a hospital) that performs repetitive work or responds to a set of external variables.
Artificial General Intelligence, or AGI, is AI which behaves as a subject that cannot be distinguished from a human being, and which is able to perform as wide an array of activities and tasks as any person. The AI in the movie Her is an easy illustration of this. Of course, this is when AI reaches the threshold of general intelligence that allows for the replacement of any person at any task, at any place, at any moment.
Superintelligence is about a god-like entity with a much higher intelligence than human beings, which can manage everything. People's only option is to subordinate themselves and surrender. The AI in the movie Transcendence is an easy illustration of this.
Just in case, let me remind you here that AI targets full autonomy and automation, which means full independence from humans, which means full replacement of human beings, which means big, big, big machines working by themselves, where humans occasionally help with maintenance. In some situations, AI can be understood as a technology add-on to a human. However, again, AI targets full automation and full replacement.
About the impact of AI on society
As such, the impact on everything is enormous. This is widely known in the industry, mainly because it is the digital industry that is creating it and setting the goals. No speculation here. Just remember that «the best way to guess the future is to create it». And the digital industry is creating it in real time.
Among all the impacts considered disruptive, there are plenty of positives all around, which fulfill all kinds of dreams. However, regarding the less positive impacts, there is common agreement on full disruption in two areas:
- the economy as we know it.
- the military as we know it.
The impact of AI on the economy carries one big certainty, massive wealth creation, and one big uncertainty, who will get it. Of course, if nothing changes and we stay on automatic pilot, the new wealth will go into the pockets of the super-rich and the remaining 99.9% will end up worse off than we are, which means poorer, as has already been happening for the last two decades. The richest people in the world and the most highly valued corporations in the world are those of the digital industry. So, the uncertainty is not whether this is happening but HOW TO CHANGE what is already happening. All the talk and noise and inventiveness about the future of work is a conversation about this change. The key question here is not which jobs there will be, but whether there will be enough jobs for everybody. It is not about quality but about quantity, considering that the AI target is to replace humans. Once agreement is reached that there will not be enough jobs, so that many people will not be able to work at all, the conversation opens up to new tools for wealth distribution (different from jobs), such as universal basic income.
The impact of AI on the military also carries one big certainty, the massive manufacturing of lethal autonomous weapons (also called killer robots), and one big uncertainty, whether there will be a large-scale AI war with unimaginable consequences. I read that in 2016, spending on AI by the US army was six times bigger than that of all the non-military industries. We can easily guess that this is being matched by all the other “modern armies”. A simple check on YouTube helps acknowledge that this is happening and that it started quite some years ago. Especially striking is how much creativity is going into the so-called killer drones, which are in effect target-guided bullets. The uncertainty here is about the new geopolitics that all this is provoking and how parties will react to changes in the balance of military power. There is a warning light in the industry, but nothing seems to be happening at street level, so threats and risks are also growing “autonomously”.
One key thing regarding «impact» is setting the goals for AI. This may be the most difficult challenge ever, as the goals that we define will be embedded inside AI and will drive its full power and strength in one direction. Remember again that AI is looking for full autonomy and automation, which means full independence from humans, which means big, big, big machines working by themselves on automatic pilot, pursuing those goals 24×7.
Currently, it looks like AI's primary goal is economic and is framed as «profit maximization». The Age of Surveillance Capitalism clearly exposes the organizational mindsets, strategies and processes that built, and keep sustaining, this goal of «profit maximization». And, let's recognize it, AI is doing extremely well: digital companies dominate the top 10 most valuable companies in the world, and their founders are among the top 10 richest people in the world. However, the side effects, which fall outside the scope and goal definition, are also easily recognized. We have The Social Dilemma here to pinpoint them. Just in case, I will point out two very important side effects: drug-like addiction to technology among young people, and extreme polarization and confrontation of political views among citizens.
Of course, nothing is easily known about AI's military goals. I heartily hope and wish that we stay like this, without knowing, for centuries. No more comments.
About the specifics of the books
The book The Age of Surveillance Capitalism, by Shoshana Zuboff, mainly talks about waves 1 and 2 in the West and about how much the industry is on automatic pilot, building extreme wealth inequality and skyrocketing the probability of negative impacts. The book is full of economic and business concepts that organize what is new in an interesting and useful way, for which I would like to thank her openly.
The book AI Superpowers, by Kai-Fu Lee, talks about how China is matching, even surpassing, all Western efforts, and bringing a Chinese flavor to this AI game.
The book Life 3.0, by Max Tegmark, reflects on AI as a no-limits, self-paced innovation process with extremely strong forces unfolding, forces that can be for the better or for the worse. It looks to me like we humans are behaving as witnesses, without voice or vote, in the AI self-paced creation process. Some side conversations are happening, but it feels like we as a society are fully on automatic pilot, which means that we are conducting business for profit without accepting any responsibility for its negative side effects (now or later). Eventually, this book gets very theoretical (if you are not involved in the work of creating AI) and plays out the future in such a way that post-human intelligence eventually spreads around the universe.
The book Origin, by Dan Brown, is a best-selling novel which I have read twice, first in English and later in Portuguese, and which beautifully tells a story that helps here (no teaser).
All these books represent a wake-up call about the huge threats and risks of AI. If we add them to other voices like Yuval Noah Harari and Tristan Harris, or my own voice, then it is extremely easy to accept and conclude that “AI is coming” just as “winter is coming” in Game of Thrones. The thing to find out is not even when it will arrive, as it is already reaching us a bit at a time, but whether we will be able to manage the side effects it is bringing as it grows indefinitely.