I originally posted this on Quora, but it’s an interesting topic, and I wonder what others have to say.
Question: Do you have any idea about the contribution of AI and other emerging technologies in the 21st century?
It depends on whether you’re talking about AI or AGI (artificial general intelligence). There’s a big difference between the two, and the gap in difficulty is enormous.
Current AI (i.e., machine learning) is really just a kind of semi-automated statistical analysis: we have to tell it whether it got the answer right so that it can fit a model that predicts the training data with minimal error and maximum generality.
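To make the “tell it whether it got the answer right” loop concrete, here is a minimal, hypothetical sketch of supervised learning: fitting a straight line to labelled examples by gradient descent, where the only feedback the model receives is how wrong each prediction was.

```python
# Minimal sketch of supervised learning: fit y = w*x + b by gradient
# descent on squared error. "Telling it if it got the answer right"
# is the error computed against known labels. Toy data, illustrative only.
data = [(x, 2.0 * x + 1.0) for x in range(10)]  # labelled examples of y = 2x + 1

w, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y          # how wrong the prediction is
        grad_w += 2 * err * x
        grad_b += 2 * err
    w -= lr * grad_w / len(data)       # nudge the model toward less error
    b -= lr * grad_b / len(data)

print(round(w, 2), round(b, 2))        # converges toward 2.0 and 1.0
```

The model never “understands” the rule y = 2x + 1; it only adjusts parameters to reduce measured error on the examples it was shown, which is the point being made here.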
Herein lies the problem with trying to build one monolithic model that is general enough to apply to all circumstances yet specific enough to get the answers right: such a model would be so large and complex, and would require so many examples covering so many unthinkable and unintuitive scenarios, that building it is practically infeasible.
Despite this, current AI, when specialised appropriately, is already proving incredibly useful. It has contributed hugely to technology and is making our lives more convenient: identifying objects for us, automatically tagging images for searchability, and learning our preferences to suggest things we might like (whatever else that data ends up being used for is another story).
It is already being used to make inferences that a human might not have thought of from datasets, and is also being applied to scientific papers and theorem proving.
It is even being used for technology that can detect diseases, which could make highly regular and detailed health checkups easy, so we can pick up the onset of disease earlier, and ultimately save lives.
It is clear even this kind of simple pattern spotting is a powerful tool.
But while there are certain things that this approach is able to do better than a human, there is a limit to it: there is no true cognition happening, no logical reasoning, no fluid representation of concepts that can be manipulated and reasoned about on the fly.
There is no mechanism of automated error detection and correction, nor of automated and intelligent searching of the solution space (in a sense the graph of all possible concepts), nor of figuring out ways to conduct experiments to determine external factors it cannot verify a priori.
I think, though, this question is really asking about AGI.
AGI has incredible potential to drive exponential change. Exponential because it would enable generalised self-replicating machines, which could very quickly grow to huge numbers and achieve speed and efficiency far beyond human capability.
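The “exponential” claim is just compounding. If each machine can build one copy of itself per cycle, the population doubles every cycle; a toy illustration (the cycle count and starting population are hypothetical, for scale only):

```python
# Toy model of self-replication: each machine builds one copy of itself
# per cycle, so the population doubles each cycle.
machines = 1
for cycle in range(30):
    machines *= 2      # every existing machine replicates once

print(machines)        # 2**30 = 1,073,741,824 machines after just 30 cycles
```

Even with long cycle times, doubling quickly overwhelms any fixed rate of human manufacturing, which is what makes the prospect transformative.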
However, the degree and the value of that potential change depends on a number of things:
Is it even morally permissible in human terms to force something with self-awareness and intelligence that may even exceed our own to do our bidding? Of course, that depends on a lot of things.
Is artificial general intelligence possible? I think there are good reasons to believe that it is.
Are we able to set up an artificial general intelligence with the proper system of incentives and values such that it is actually able to do something useful, does it in the way we want it to, and continues to do so? Unfortunately, there are good reasons to doubt this.
Can the technology be abused in a way that poses a serious threat to our existence? If so, are we able to prevent the technology from being abused in this way? Unfortunately, there are good reasons to believe yes to the former and no to the latter (since it isn’t exactly a problem of material regulation).
Can humans adapt to the kind of society that might exist once the fruits of this technology abound? I have no idea, but I think there are reasons to believe it won’t be smooth sailing.
If not, then depending on which point we get to, that could be really bad. It probably won’t be like Terminator, but man, there are some pretty bleak possibilities.
If so, then yes, it will have massive potential to improve our existence.
We could set up an automated, self-repairing system of intelligent machines, factories, mining facilities and food farms that will:
1. Mine the deepest, most oxygen-deprived depths of the earth and its oceans, which are unsafe for humans, in a way that won’t interfere with natural ecosystems.
2. Grow food in huge skyscraper factories.
3. Process the materials and food using the most efficient and environmentally friendly approaches.
4. Produce technology and processed food in the factories to given specifications.
5. Distribute them to the people who have requested them.
Perhaps even asteroid mining will become feasible, but there may be enough material deep within the earth if advanced mining technology can reach it.
We could construct self-replicating mining robots that take materials from asteroids or perhaps Mercury and build what’s known as a Dyson swarm.
A Dyson sphere is not a feasible technology as we currently understand gravity and the limits of material science, because the shearing forces involved would tear any rigid structure apart. A Dyson swarm, however, is a multitude of small, mobile satellites that can be instructed to reflect sunlight to a specific point and track it.
This would give us effectively unlimited solar energy at an absolutely mind-boggling scale. However, nuclear fusion might make all of that unnecessary.
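For a sense of that scale, a back-of-envelope estimate using round published figures I believe to be roughly right (the Sun’s total luminosity is about 3.8 × 10²⁶ W; humanity currently runs on the order of 2 × 10¹³ W, i.e. ~20 TW):

```python
# Back-of-envelope scale of a Dyson swarm. Both figures are approximate,
# round published values, used here only for an order-of-magnitude comparison.
solar_output_w = 3.8e26   # total solar luminosity, watts
human_usage_w = 2e13      # rough current global power consumption, watts

ratio = solar_output_w / human_usage_w
print(f"{ratio:.1e}")     # ~1.9e+13: the Sun emits ~19 trillion times our usage
```

Capturing even a minuscule fraction of that output would dwarf all current human energy consumption, which is why the idea keeps coming up in these discussions.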
When the cost in human time and resources is zero, you’re in a post-scarcity society.
The question then is: can we regulate ourselves in such a way that we can coexist peacefully, when there are no resources or basic sustenance to worry about or war over, but only ideas and each other?
We’ve fought plenty of ideological and material wars, but the shared challenge of maintaining existence and the incentives of profit give all nations a common reason to cooperate that can supersede ideological differences, as it largely has since the World Wars.
But what happens when there is no objective common ground external to ourselves except existence itself?
There may be a race condition, or perhaps a catch-22: if there is a way to structure society so that it can function in post-scarcity, we might need to implement it before post-scarcity arrives. Or at least hope that any period of chaos, if there is one, is transient whilst we establish a new status quo.
A question further still is, will any of this even happen at all?
Will humans want to give up the responsibility of maintaining mere existence? I suspect there will be a strong divide, and perhaps new nations will form in the process, as people who prefer life as it was before post-scarcity migrate to other lands to live the old way.
I would not be surprised if cults of anti-AI fanatics form, likely of orthodox-religious origin, and probably even some terrorist factions. I bet a number of science-fiction stories have hit the nail on the head on this point.
Or would politics force a future of disproportionate access to this potential wealth of resources?
Part of me doubts the feasibility of that. If it is known that effectively unlimited resources are available, and are easy to obtain through technology, then I don’t think the common man would sit tight without his share and obediently take what little scraps are given.
The majority of the worst authoritarian regimes have come to an end, are coming to an end, or have at least transformed into something less authoritarian and more liberal, where people are able to lead a happy life.
The imprisonment of the world seems an impracticality to me, and perhaps an irrational fear.
Who knows. It’s a very deep question.