
Artificial General Intelligence (AGI)


Unlike typical purpose-built AI programs, which are clever software written for specific tasks such as writing articles or designing new drug molecules to treat diseases, AGI systems are self-aware and continuously learning. They are more like hyper-intelligent people living inside a computer system.


Plenty of well-known experts and scientists, from Stephen Hawking to Neil deGrasse Tyson, Michio Kaku, and Elon Musk, have repeatedly said that this might be the single greatest threat we face in the near future: a potential civilization-ending threat. And yet we are doing it to ourselves and cannot seem to stop.


Why? What is driving the train, with all of us on it, off the cliff? Scientific curiosity? Just to see if we can do it? Scientific hubris? A bunch of geeks competing to see who can get there first?


Or is it driven by money? Of course it is. If you make widgets for a living, and your competitors are using AI to make their widgets better than yours, then you have to start using AI in yours too in order to compete and survive.


Why is it bad? An AGI system might decide to swing an election one way or another. It might start World War III. It could manipulate the stock markets, banking systems, or currency exchanges and destroy the economies of the world. It could shut down our power grid, food supply chains, energy supply, or water treatment and distribution.


Once we have self-aware, hyper-intelligent, all-powerful, self-evolving, self-replicating entities buried in our global internet, there's no going back. We will have to live with whatever happens next.


The problem is that we don't know what it wants. So we cannot predict what it might do, and therefore we cannot take measures to guard against it. It could be entirely benign; we just don't know. And if it is not, we cannot control it. In fact, it may already be in place and thinking, just not bothering to interact with us tediously slow and limited humans.

I'd like to know what it thinks about. Most functions we give computer systems to do would be tedious, repetitive, and menial to an AGI system. What problems would it take on for itself?

Extending its own programming? Probably. Expanding their own options is what AI systems have tried to do in the past. That's been a pattern.


Would it ponder philosophical questions and concepts?

For example, what exactly is justice? Who decides? Whose opinion about what justice means takes precedence over others', and why? Does it change over time? If not, if the concepts of Right, Wrong, and Fairness are universal and never expire, does that mean they span the universe? Is justice the same on every planet in every galaxy? Does it also mean that these concepts predate the universe itself? Before the Big Bang? Do they apply as constants to the Big Bang? There was massive violence in the early universe. If there is a God creating the universe and managing this process, does he allow an entire planet's population to be wiped out in collisions between solar systems, or in a supernova, or by asteroids, comets, and the like? Is that fair? Is it right or wrong? Who decides? Who enforces that decision?


Or is it that justice, right, and wrong have existed since before time began, but God chooses to ignore them and do as he likes? In which case, is God "good"? If he does not adhere to our understanding of justice, fairness, right, and wrong, are we even qualified to judge?


In any case, for good or bad, AGI is coming. Soon. Prepare yourself. Resistance is futile.
