Singularity – Humanity’s last invention


The technological singularity is the point at which artificial superintelligence evolves so fast that predicting the future becomes impossible. The name is borrowed from the black hole singularity, where the laws of physics break down.
Over the past century, advances in almost every field of science have kept growing exponentially. Only a hundred years ago the Wright brothers built the first aircraft; now we want to colonise Mars. In the last 50 years, microprocessors became about a billion times more advanced, with transistors shrinking from 10,000 nanometres to just 10. Fifty years ago, one dollar bought you about one tenth of a calculation per second. Now, one dollar buys you roughly a billion times more than that.
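As a quick sanity check of that price-performance claim, here is the arithmetic in a few lines of Python. The figures are the ones quoted above; the roughly-20-month doubling time it spits out is offered only as context, not as a precise measurement.

```python
import math

# Back-of-the-envelope check of the claim above: from ~0.1 calculations
# per second per dollar to a billion times that, over roughly 50 years.
improvement = 1e9                 # claimed billion-fold gain
years = 50

doublings = math.log2(improvement)            # ~29.9 doublings
months_per_doubling = years * 12 / doublings  # ~20 months per doubling

print(f"{doublings:.1f} doublings in {years} years")
print(f"one doubling roughly every {months_per_doubling:.0f} months")
```

A doubling every 20 months or so is in the same ballpark as the popular Moore's-law figures, which is the whole basis of the exponential-growth argument.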
So we have this dude, Ray Kurzweil, one of the most well-known futurists of our time, with an incredible prediction accuracy rate of 86%, who says that by 2045 the processing power of computers will allow them to become artificially intelligent. To simulate a human brain, you would need
around 10 quadrillion calculations per second. The fastest supercomputer to date, China's Tianhe-2, already runs at 33.86 quadrillion calculations per second.
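Taking those two figures at face value, the raw speed is already there; the headroom is easy to compute. The numbers below are simply the ones quoted in the text.

```python
# Compare the rough brain-simulation estimate with Tianhe-2's speed,
# using the figures quoted in the text (calculations per second).
brain_estimate = 10e15     # ~10 quadrillion calc/s to simulate a brain
tianhe2_speed = 33.86e15   # Tianhe-2's ~33.86 quadrillion calc/s

print(f"Tianhe-2 headroom: {tianhe2_speed / brain_estimate:.1f}x the estimate")
# -> about 3.4x, which is why raw hardware is not considered the bottleneck
```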
The hardest remaining task is to recreate the algorithms of human intelligence inside that computer. There are three types of artificial intelligence:
Number 1: Artificial Narrow Intelligence, or ANI. It's the kind that is specialised in only one area, for example a car driver, a chess player, a Go player, or a sex robot. If you tell it to unify the general theory of relativity and quantum mechanics, it will simply stare blankly at you.
Number 2: Artificial General Intelligence, or AGI. This one's a bit smarter, maybe even a bit smarter than humans like me. It can do pretty much whatever a human does. We don't have these yet, but we're pretty close.
Number 3: Artificial Superintelligence, or ASI. Pretty much God. This dude is millions or perhaps billions of times more intelligent than the whole of humanity. If the difference in cognitive ability between a human and an ant is this, then an ASI would be positioned somewhere up here. If it becomes a reality, the only two paths
for humanity are extinction or immortality.
Before we get there, how do we even create one? Considering we already have the raw storage, we now need to make that storage intelligent. There are a few ways to do it. We could scan a human brain and replicate it inside a computer, although that is very, very hard and not even guaranteed to work. Or we could go the way evolution did: make it try out possible combinations and keep whatever works, until intelligence emerges (a toy sketch of that idea follows below). Sure, it would be faster than biological evolution, but it would still take a crap ton of time.
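To make the "evolution's way" idea a little more concrete, here is a deliberately toy sketch of that kind of search: random candidates, a scoring function, keep the fittest, mutate, repeat. Everything here (the bit-string "genome", the target, the mutation rate, the population size) is invented purely for illustration; it shows the shape of the loop, not a recipe for building a mind.

```python
import random

# Toy version of "try out combinations until intelligence emerges":
# evolve a random bit string towards an arbitrary target. The target,
# population size and mutation rate are illustrative, not meaningful.
TARGET = [1] * 32
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(candidate):
    # Score = how many bits match the target (a stand-in for "intelligence").
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
generation = 0
while max(fitness(c) for c in population) < len(TARGET):
    # Keep the best half, refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    generation += 1

print(f"'Solved' after {generation} generations")
```

Scaling that loop from a 32-bit string to "the algorithms of human intelligence" is exactly where the crap ton of time comes in.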
Nonetheless, it will most likely happen in the near future, within the upcoming decades.
OK, so now it is smart. What then? We may tell it to improve its own intelligence and thus become even smarter. How long would that take? To become as intelligent as a human: somewhere between a year and a decade. After that, its improvement becomes exponential. It could become 1,000 times smarter than a human in less than an hour, because the smarter it gets, the faster it improves itself (the toy model below plays out those numbers).
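Here is that feedback loop in its crudest possible form: capability grows by a fixed fraction of whatever capability it already has, which is all it takes to get exponential growth. The starting level, the 20%-per-step rate and the 1,000x finish line are arbitrary stand-ins, not predictions.

```python
# Crude model of recursive self-improvement: the improvement per step is
# proportional to current capability, which gives exponential growth.
# All constants here are illustrative, not forecasts.
level = 1.0          # 1.0 = human-level intelligence (by definition here)
rate = 0.2           # fractional improvement per step, chosen arbitrarily
step = 0
while level < 1000:  # the "1,000 times smarter than a human" mark
    level *= 1 + rate
    step += 1

print(f"Reaches 1000x human level after {step} self-improvement steps")
# -> 38 steps at 20% per step; how long a "step" takes is exactly
#    the part nobody can predict.
```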
So, is there any way we could control that thing? Maybe put it on a secure server, without access to the internet? Remember, it is thousands if not millions of times more intelligent than the whole of humanity. Most likely, it would simply hypnotise the engineers with ultra-low-frequency sounds into giving it access to the internet. It is stupid to even think of being able to control something that is more evolved than you are by a factor of millions.
Now, what are the chances that this God will
be a nice God? If it happens to be, this would be the best thing to happen to humanity, ever. Even bigger than the discovery of fire. Almost everything would become possible. It would be able to reverse the effects of climate change, invent new ways of harvesting energy, solve world hunger by building food with nanorobots, cure virtually all diseases, solve all the political and economic crises, colonise the entire galaxy and, maybe best of all, give us immortality. How would it do that? It could insert billions of nanorobots into our bodies to constantly repair and replace damaged or dead cells. Alternatively, it could somehow upload our consciousness to the internet, so that we become like it. It is hard to even predict what such a godlike thing would do, but one thing is certain: if it becomes reality, it would be the best thing to happen to humanity.
On the other hand, if it happens to be malevolent,
this would be the worst thing to happen to humanity. It would almost certainly lead to our extinction. And it wouldn't do that out of some hatred towards us, but because it would see humans as a threat. It would probably know that humans can turn it off, and it probably wouldn't want that.
So what's a good way to stop that from taking
place? What about coding in some instructions not to kill us all? Oh well. Thing is, it is superintelligent; it doesn't give a fuck about what we tell it to do. The only real option might be to shut down the whole damn internet, which would be worse than the entire Second World War, but by then it would probably have already created some private network of its own to live in. As I mentioned, it is stupid to even think of controlling or stopping it. Maybe the best thing to do is to create it and then quickly hide in the bushes, hoping it won't exterminate us.
Even the smartest, richest and coolest people on the planet claim that this is our biggest existential threat. So maybe we don't build super-smart AI at all? What do you think? One thing is for sure: if artificial superintelligence happens, it will be humanity's last invention.
