
The Technological Singularity: will A.I. ever surpass us?

Status
Not open for further replies.

Lore

Infinite Gravity
BRoomer
Joined
Mar 5, 2008
Messages
14,135
Location
Formerly 'Werekill' and 'NeoTermina'
http://en.wikipedia.org/wiki/Technological_Singularity
^The Wikipedia page, for those not familiar with the topic.

The technological singularity is supposedly the point in time when A.I. surpasses human intelligence. Some also define it as a period of extremely rapid technological progress, but that is not the point of this thread.

Do you think A.I. could ever surpass human intelligence? Or do you think it could never catch up to the brain?
 

JustKindaBoredUKno

Smash Lord
Joined
Dec 19, 2007
Messages
1,606
Location
Southeast Michigan
It will eventually. The rate at which technology advances will make it inevitable. But the question is, how controlled will it be? Movies like The Matrix, although far from factual, offer a scary insight into a possible future where machines get sick of "their human masters" and take over.

But that's a slightly different tangent of the topic.

Will technology surpass human intelligence? It already has, in an information kind of way. But whether it will ever gain the freedom to rule over itself, to make decisions for itself, is a different story.
 

AltF4

BRoomer
BRoomer
Joined
Dec 13, 2005
Messages
5,042
Location
2.412 – 2.462 GHz
Define intelligence, in this context.


Computers have already been more intelligent, for a long time, in terms of computational ability and memory capacity. Computers can quickly solve well-defined problems. Humans, however, are currently better suited to general problem solving on poorly defined problems.

So there's not a direct comparison. There's a clear line between problems humans are better at solving and problems machines are better at solving. The line will continue to move in the direction of more machine ability, however.

So when would you say this "technological singularity" occurs? At the 50/50 point? We've probably already passed that point.
 

Surri-Sama

Smash Hero
Joined
Apr 6, 2005
Messages
5,454
Location
Newfoundland, Canada!
When it comes to A.I., the first thing I think about is learned responses. When and IF computers can learn from their errors is when I think they will pass humans, intelligence-wise. But can a computer learn? What kind of "computing" would lead a computer to learn?

A computer humanoid grabs an electric fence, gets shocked, and is damaged.

Will the computer do it again?

Right now, if the computer's objective were behind that fence, then yes, the computer would grab the fence until it could not.

How do we get a computer to learn that you must go around the fence instead of going through it? Or find an alternate means in general when the first means does not work?
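For what it's worth, this "shocked once, go around next time" behavior is exactly what reinforcement learning tries to capture. Below is a minimal tabular Q-learning sketch in Python; the toy world, the state and action names, and the reward numbers are all invented for illustration, not taken from any real system:

```python
import random

# Tiny deterministic world: from 'start' the agent can grab the
# electrified fence (painful, episode over) or take the longer,
# safe path around it to reach the goal.
#   state -> action -> (next_state, reward, episode_done)
WORLD = {
    "start":  {"grab_fence": ("start", -10.0, True),   # shocked and damaged
               "go_around":  ("detour", -1.0, False)}, # longer, but safe
    "detour": {"advance":    ("goal",  +10.0, True)},  # objective reached
}

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s, acts in WORLD.items() for a in acts}
    for _ in range(episodes):
        state, done = "start", False
        while not done:
            actions = list(WORLD[state])
            if rng.random() < eps:                 # sometimes explore
                action = rng.choice(actions)
            else:                                  # otherwise exploit
                action = max(actions, key=lambda a: q[(state, a)])
            nxt, reward, done = WORLD[state][action]
            future = 0.0 if done else max(q[(nxt, a)] for a in WORLD[nxt])
            q[(state, action)] += alpha * (reward + gamma * future
                                           - q[(state, action)])
            state = nxt
    return q

q = q_learn()
best = max(WORLD["start"], key=lambda a: q[("start", a)])
print(best)  # prints: go_around
```

After training, the learned value of "go_around" is clearly higher than the value of grabbing the fence again, and the agent got there purely from trial-and-error reward signals, with no one spelling out the rule.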
 

AltF4

BRoomer
BRoomer
Joined
Dec 13, 2005
Messages
5,042
Location
2.412 – 2.462 GHz
Would a human do it again? lol, very likely so!



Machine learning already exists, in limited forms. The thing is, we don't understand how WE work, so it's very difficult to make a machine that works like us.

One of the most popular theories is that all intelligence is based on symbol manipulation. That's all it is. This comes from two areas: computer science and psychology. The computer science end of it is very easy to understand. A Turing machine is a theoretical machine which essentially just manipulates symbols. It moves them around and performs computations. A system is called "Turing complete" when it can simulate a Turing machine, and by the Church–Turing thesis such a system can compute anything that can be computed at all.
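To make the "just manipulates symbols" point concrete, here is a toy Turing machine simulator in Python. The transition-table format and the little bit-flipping program are my own invented illustration, not a full universal machine:

```python
# A toy Turing machine simulator: the whole "machine" is a transition
# table over symbols -- read a symbol, write one, move the head, and
# change state. The example program below inverts a binary string.

def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# (state, symbol_read) -> (symbol_written, head_move, next_state)
INVERT = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),   # past the end: stop
}

print(run_tm("1011", INVERT))  # prints: 0100
```

Everything the machine "does" is in that table of symbol rewrites, which is the sense in which computation is nothing but symbol manipulation.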

The standpoint from Psychology says that there is strong evidence that humans think by manipulating abstract symbols. (I'm not a psychology expert. I know only what I know from my AI courses!) We break down our thoughts into small chunks and learn new things by combining these parts in new ways.



So what exactly is learning? We already have machines that can "learn". Learning is trivial in a well-defined problem. The hard part is a general-purpose learning algorithm.

What you want is a machine that, given any problem or situation, can learn how to solve it. That's a hard problem indeed! In fact, I can't think of a harder one!



You see, we as humans routinely do things we take for granted, like "solving" unsolvable problems! For instance, the standard example of an unsolvable problem (formally, "undecidable") is the Halting Problem: deciding, for an arbitrary program and input, whether that program will eventually finish or run forever.

We as humans can easily say "Well, it's been a long time, and it should have finished by now" and can tell (with high accuracy) whether the program is still actually working or has hung. But no single algorithm can make that call correctly for every possible program.

Now, try thinking up a way for a computer to solve ANY unsolvable problem! This is why AI research is so slow moving.
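The "it should have finished by now" heuristic is easy to write down, and writing it down shows exactly why it is not a solution to the Halting Problem. A sketch in Python, with programs modeled as generators that yield once per "step" (all names invented for illustration):

```python
# No program can decide halting in general (Turing), but the human
# heuristic "it should have finished by now" is easy to encode: run
# the program for a fixed step budget and give up after that.

def counts_down(n):
    while n > 0:
        yield          # one "step" of work
        n -= 1

def loops_forever():
    while True:
        yield          # one "step", forever

def probably_halts(program, budget=10_000):
    """Heuristic: True if `program` finishes within `budget` steps,
    False if we lose patience. It can be wrong, which is exactly why
    it is not a decision procedure for the Halting Problem."""
    steps = 0
    for _ in program:
        steps += 1
        if steps >= budget:
            return False  # gave up -- *guess* that it never halts
    return True           # actually finished within the budget

print(probably_halts(counts_down(100)))    # prints: True
print(probably_halts(loops_forever()))     # prints: False
print(probably_halts(counts_down(10**6)))  # prints: False (wrong! it halts)
```

The last case is the catch: a slow but finite program gets misjudged as non-halting, which is the gap between a useful human-style guess and an actual solution.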
 

MasterWarlord

Smash Champion
Joined
Aug 24, 2008
Messages
2,911
Computer AI has already surpassed our intelligence without a doubt; as AltF4 said, it's more likely than not past the 50/50 margin.

The bigger question is indeed: will it ever gain its own will? I find it highly unlikely, but it is possible. All it really boils down to is the people programming the AI. They'll have to make sure that they have some overarching program that makes all of its goals be for the good of mankind... But then again, that's more likely than not impossible, and we can't really know at this point in time either.

I find it less likely that computers would gain a will, though, than that they would just malfunction. Not so much malfunction, but that their "learning" would make them learn something humans didn't want them to, or something incorrect. Computers will never gain "free will" unless we try to simulate one, in which case it would still just be doing what we told it to.

Although that whole discussion is really rather silly... We'll all be long dead before AI ever becomes intelligent enough for this to become a possibility.
 

slartibartfast42

Smash Lord
Joined
Dec 29, 2006
Messages
1,490
Location
Canton, Ohio
There are 3 main things that computers have on us:

1. Speed - obviously, computers can compute things extremely quickly compared to humans.
2. Accuracy - calculators will never make a mistake if they are programmed correctly.
3. Endurance - computers don't get tired; they can repeat an equation indefinitely without getting bored or losing efficiency.

However, computers will likely never surpass us beyond this without groundbreaking changes to their design. Computers are programmed with instructions, and follow those instructions exactly. They cannot react to a situation that they have not been programmed to react to, or perform actions that they haven't been programmed to perform. We have plenty of things computers don't have, and most likely never will:

1. The ability to create new ideas - Computers only perform functions that they have been programmed to perform. A computer will never be able to do anything that a human programmer hasn't given it programming for. How can computers ever surpass us if they are unable to do anything without a human telling them how?

2. The ability to understand events outside previous instruction - Like F4 said with the infinite loops. The computer cannot tell that it was supposed to stop after 1,000 loops. It keeps following the code exactly while you madly press the end-program button because the program is taking longer than the two seconds you expected. You had the experience of running similar programs before and seeing that they shouldn't take that long. The computer is just following the code, and it won't learn from its infinite-loop mistake; it will just keep making the mistake over and over as long as you hit the "run" button.

3. Emotion - Obviously, computers don't have this. Emotion isn't normally associated with intelligence, but I think that a lack of emotion would definitely keep any computer from surpassing humans. Even if a computer could obtain more intelligence than humans, what would it WANT to do? Nothing. Nothing whatsoever. It'd just sit there.
 

Mewter

Smash Master
Joined
Apr 22, 2008
Messages
3,609
But what if you could program the computer so that it follows the exact same design as humans? Would that make them intelligent and emotional? Would they be able to make complex and intelligent "choices"? Is there a kind of programming that allows for free (random) thought or self-programming intelligence?
IF there is, and when it's made, I want Asimov's laws put into place. That may protect us, but maybe then they would reprogram themselves. Maybe a second (unreachable and separate) unit inside of itself could stop it from ever changing these laws by interfering with the robot.
Computers are smarter than us only as far as their programming goes. If they can somehow access free thought (maybe through copying the human brain), then maybe they'll become infinitely more powerful.
When and IF this happens, the phrase "boring like a robot" may not have so much meaning anymore.
 

zrky

Smash Lol'd
Joined
Jun 1, 2008
Messages
3,265
Location
Nashville
There are 3 main things that computers have on us:

1. Speed - obviously, computers can compute things extremely quickly compared to humans.
2. Accuracy - calculators will never make a mistake if they are programmed correctly.
3. Endurance - computers don't get tired; they can repeat an equation indefinitely without getting bored or losing efficiency.
These are the ingenious things about computers, but really I don't think A.I. will ever surpass human intelligence, because we can only program them to the extent of our own knowledge. So if the word "intelligence" is being used strictly, then no, they can't surpass us, for the reason above. If the word is used loosely, i.e. they can do faster calculations more accurately, then yes, they already have surpassed us. So there is a fine line between the strict and loose uses of the word. Really, I don't think A.I. will ever be more "intelligent" than we are.
 

cman

Smash Ace
Joined
May 17, 2008
Messages
593
There are 3 main things that computers have on us:

1. Speed - obviously, computers can compute things extremely quickly compared to humans.
2. Accuracy - calculators will never make a mistake if they are programmed correctly.
3. Endurance - computers don't get tired; they can repeat an equation indefinitely without getting bored or losing efficiency.

However, computers will likely never surpass us beyond this without groundbreaking changes to their design. Computers are programmed with instructions, and follow those instructions exactly. They cannot react to a situation that they have not been programmed to react to, or perform actions that they haven't been programmed to perform. We have plenty of things computers don't have, and most likely never will:

1. The ability to create new ideas - Computers only perform functions that they have been programmed to perform. A computer will never be able to do anything that a human programmer hasn't given it programming for. How can computers ever surpass us if they are unable to do anything without a human telling them how?

2. The ability to understand events outside previous instruction - Like F4 said with the infinite loops. The computer cannot tell that it was supposed to stop after 1,000 loops. It keeps following the code exactly while you madly press the end-program button because the program is taking longer than the two seconds you expected. You had the experience of running similar programs before and seeing that they shouldn't take that long. The computer is just following the code, and it won't learn from its infinite-loop mistake; it will just keep making the mistake over and over as long as you hit the "run" button.

3. Emotion - Obviously, computers don't have this. Emotion isn't normally associated with intelligence, but I think that a lack of emotion would definitely keep any computer from surpassing humans. Even if a computer could obtain more intelligence than humans, what would it WANT to do? Nothing. Nothing whatsoever. It'd just sit there.
What doctorate/expertise/renown/etc. in the field do you have to say it can't be done? These things haven't been done yet, but I contend that no one in the world fully understands the capabilities of computers, especially at the rate the technology is improving (remember Moore's law?).
 

slartibartfast42

Smash Lord
Joined
Dec 29, 2006
Messages
1,490
Location
Canton, Ohio
I'm just a normal teenager taking programming classes in high school. I have a basic knowledge of how coding works, although there could easily be (and probably are) languages capable of things I don't realize. But from what I've seen, computers just follow one basic set of instructions, and follow it exactly. It's almost like humans are setting down a track for the electrical signals to follow, and when the computer runs, it merely goes down that path. New paths cannot be created without input from the programmer. The computer cannot really do anything unexpected.

As for me being able to say it's NOT possible... almost ANYTHING has the POTENTIAL to be possible. We're just arguing the likelihood at our current point in time. And I think it's not happening.
 

ElemMasterZeph92

Smash Journeyman
Joined
Nov 8, 2008
Messages
399
Location
Somewhere not home...
I believe that it will always be equal to man, but since man isn't perfect, A.I. will be the same (as will whatever else man creates). A.I. can get smarter than human IQ, but like all humans it will suffer from our faults. Like in science-fiction movies, the A.I. sees man's faults and tries to forcefully correct them (itself a trait inherited from man).
 

AgentJGV

Smash Journeyman
Joined
Sep 2, 2007
Messages
466
Location
Northeast Ohio (AKA Smashghetto)
But what if you could program the computer so that it follows the exact same design as humans? Would that make them intelligent and emotional? Would they be able to make complex and intelligent "choices"? Is there a kind of programming that allows for free (random) thought or self-programming intelligence?
IF there is, and when it's made, I want Asimov's laws put into place. That may protect us, but maybe then they would reprogram themselves. Maybe a second (unreachable and separate) unit inside of itself could stop it from ever changing these laws by interfering with the robot.
Computers are smarter than us only as far as their programming goes. If they can somehow access free thought (maybe through copying the human brain), then maybe they'll become infinitely more powerful.
When and IF this happens, the phrase "boring like a robot" may not have so much meaning anymore.
But see, the problem is that computers do EXACTLY what you tell them to. Nothing more, and nothing less. Everything that a computer does has already been predetermined. They can't "think" and they can't "feel". These are human capacities that even we don't understand. How, then, can we replicate them in a computer?

Now let's move on to my thoughts. Let's look at The Elder Scrolls: Oblivion. This is an incredible game. It interacts with you and you can interact back. It is a living world. But it is only extremely well-coded programming. Every scenario, every encounter, every option... is predetermined.

Another point I want to bring up is random numbers. In a programming language, when you want a series of random numbers, you have to call a special function. This function gives a number that looks random but is actually just the product of a deterministic equation. This is because computers can't be random; they're made to be exact. Because of this, computers are limited to what we tell them.
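For the curious, the "equation" behind many library random functions is not even that complex. A linear congruential generator is a classic example; this sketch uses the well-known Numerical Recipes constants, and the point is exactly the one above: the same seed always reproduces the same "random" stream.

```python
# A linear congruential generator: one classic deterministic formula
# behind pseudo-random numbers. Same seed in, same "random" stream out.
# Constants are the well-known ones from Numerical Recipes.

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m   # scale into [0, 1)

gen = lcg(seed=42)
first = [next(gen) for _ in range(3)]

# Deterministic: re-seeding reproduces the exact same sequence.
gen2 = lcg(seed=42)
assert [next(gen2) for _ in range(3)] == first
```

Real libraries use fancier generators (Python's `random` uses the Mersenne Twister), but they are every bit as deterministic: "randomness" here is just a formula whose output looks patternless.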

My opinion? Computers will pass us if we program them to.
 

SkylerOcon

Tiny Dancer
Joined
Mar 21, 2008
Messages
5,216
Location
ATX
The only time computers will ever pass us is if we tell them to. And even then, we'd have to know all kinds of things we don't know right now to be able to make something better than us. Computers can only do what we program them to, and this includes the very basic way that computers 'learn' through AI. Sure, AI can be good, but will it ever replicate a human? No.

Let's take Smash as an example. I'm sure all of you are familiar with Melee. Humans discovered wavedashing, and we've turned it into a fundamental part of playing Melee. But have you ever seen a computer do a wavedash? I'm guessing no. When you program a computer to 'learn', it will only learn things that it's supposed to learn. No matter how much we wavedash in front of it, it will never wavedash itself.
 

Mewter

Smash Master
Joined
Apr 22, 2008
Messages
3,609
But see, the problem is that computers do EXACTLY what you tell them to. Nothing more, and nothing less. Everything that a computer does has already been predetermined. They can't "think" and they can't "feel". These are human capacities that even we don't understand. How, then, can we replicate them in a computer?
We could replicate them in a computer WHEN we have an adequate understanding of the human brain. By then, though, we'll probably know how to genetically engineer brains.

Now let's move on to my thoughts. Let's look at The Elder Scrolls: Oblivion. This is an incredible game. It interacts with you and you can interact back. It is a living world. But it is only extremely well-coded programming. Every scenario, every encounter, every option... is predetermined.

Another point I want to bring up is random numbers. In a programming language, when you want a series of random numbers, you have to call a special function. This function gives a number that looks random but is actually just the product of a deterministic equation. This is because computers can't be random; they're made to be exact. Because of this, computers are limited to what we tell them.
Unless we gain the technology and a knowledge of the chemistry of the human brain. It may be a challenge, but I'm sure that if we survive long enough, it will happen. Now, how can we make a robot like a living organism? Well, we've recently created "seeing" robots. That is, they see and react like a locust, without the use of radar or vibrations. All they need is "sight", i.e. the camera.
http://www.ncl.ac.uk/press.office/press.release/content.phtml?ref=959179671
http://www.scienceagogo.com/news/20000307201150data_trunc_sys.shtml
Now, if we could replicate this on a larger scale... with individual parts mimicking individual body parts.
My only question now is... is it possible to make a robot behave like a living organism? Is there something about organisms on a cellular scale that robots cannot replicate, no matter what?

My opinion? Computers will pass us if we program them to.
I agree.
 

RDK

Smash Hero
Joined
Jan 3, 2006
Messages
6,390
I like to think that by the time what you're talking about happens (AI supposedly becoming "too smart" for us), the human race will have lost the need for the bodies we inhabit today and will have somehow technologically enhanced ourselves, mind and body.

I would assume this is what's going to happen. Human evolution will come about one way or another, be it biological or technological.


My opinion? Computers will pass us if we program them to.
Fool. Haven't you ever seen 2001?

:p
 

meresilence0

Smash Apprentice
Joined
May 5, 2008
Messages
95
Location
Twitter
Computers are great at raw computational issues, as AltF4warrior has previously said, and humans make 'better' moral choices. And since those are two TOTALLY different types of intelligence, computers cannot be compared to humans. Computer/human relations can be summed up in a single programming command: SUBO. If you make a computer do something with the SUBO command, then it has to obey you, even if that goes against any other programming commands in place. Computers will always listen to humans, and even if they become "self-aware" (which would mean that, by definition, everything except the toaster is "self-aware"), they still must obey us.

Either that, or we'll all have mecha-robot supercomputer suits. If that's the case, then humans will have been surpassed by technology because they ARE technology, and not human.

Or else maybe the toasters will dominate the world before we can stop them. Or the grey goo theory is right. We won't know until we're screwed over, I guess.
 

Mewter

Smash Master
Joined
Apr 22, 2008
Messages
3,609
I would assume this is what's going to happen. Human evolution will come about one way or another, be it biological or technological.
Hah.
We're going to engineer ourselves to be cyborgs.:laugh:
Awesome.

Either that, or we'll all have mecha-robot super computer suits. If that's the case, then humans will have been passed by technology because they are technology, and not human.
Exactly. We wouldn't be "human" anymore, per se. Who needs suits, either? Why don't we just cut off our arms, put on robotic ones, and spray-paint them the color of flesh?(I, Robot)

Seriously, though. It starts to make sense when people say the line between human and machine will be blurred if we ever harness superhuman robot-suit implants. Human cyborgs would be way cooler and smarter than pure robots. If technology can't copy organisms completely, then robots will have a missing quality, a "flaw". Cyborgs would have the benefits of both sides of the coin. If we don't change with the times, though, then humans will have less control over their robot creations.

And the Gray goo theory... funny.
 

manhunter098

Smash Lord
Joined
Apr 12, 2008
Messages
1,100
Location
Orlando, Sarasota, Tampa (FL)
I really doubt we will be enhancing ourselves robotically. I would think that human enhancement will come from biological engineering, not mechanical, though mechanical enhancement will definitely arise. Plus, even made into cyborgs, we would still be human, unless biology is absent entirely. If we go the genetic route for our enhancement, though, we would definitely no longer be human after some time.
 

Mewter

Smash Master
Joined
Apr 22, 2008
Messages
3,609
I see.
So, as long as we retain yet some of our original genetic information, we are still human.
If we go by that logic then, I guess that we would stop being human once we've genetically altered ourselves to a certain extent where the genetically modified ones cannot breed with the originals?
 

J.TwisT

Smash Apprentice
Joined
Nov 17, 2008
Messages
111
Location
Sierra Vista, AZ
As far as A.I. becoming overly intelligent:
Until someone makes a thinking, evolving, changing robot or technology, it will never happen. We tell them to kill, they kill; but they don't use logic or emotion, they just kill. (Sorry, this was the first function that came to mind on the subject.)

We might not be enhancing ourselves robotically, but if those who are crippled become fully functioning through technology, they are cyborgs. Also, in the absence of biology we would not be human but totally robotic.

If we take a genetic route we will totally change from human to some other sort of race entirely. If we cannot breed with the originals, we would have become a different species, so you would be exactly right.
 

manhunter098

Smash Lord
Joined
Apr 12, 2008
Messages
1,100
Location
Orlando, Sarasota, Tampa (FL)
I see.
So, as long as we retain yet some of our original genetic information, we are still human.
If we go by that logic then, I guess that we would stop being human once we've genetically altered ourselves to a certain extent where the genetically modified ones cannot breed with the originals?
Pretty much. As long as a person can still have children capable of reproduction with someone who is biologically human then they are still human. They might be a different subspecies, but they are still human.
 

RDK

Smash Hero
Joined
Jan 3, 2006
Messages
6,390
I really doubt we will be enhancing ourselves robotically. I would think that human enhancement will come from biological engineering, not mechanical, though mechanical enhancement will definitely arise. Plus, even made into cyborgs, we would still be human, unless biology is absent entirely. If we go the genetic route for our enhancement, though, we would definitely no longer be human after some time.
Who knows? I've read somewhere in a science article that theoretical development of femtotechnology (the 10^-15 m scale, two steps of a thousand below nanotechnology's 10^-9 m) is in the works. In layman's terms, these would work like tiny biological organisms: mechanical "antibodies" small enough to pass through the body. The only difference is that they're technologically engineered, rather than biological.

As far as A.I. becoming overly intelligent:
Until someone makes a thinking, evolving, changing robot or technology, it will never happen. We tell them to kill, they kill; but they don't use logic or emotion, they just kill. (Sorry, this was the first function that came to mind on the subject.)
This reminds me of the Revelation Space trilogy by Alastair Reynolds. The premise of the books is that in the far future, after a galaxy-spanning war, the human race encounters a group of machines called the Inhibitors that are designed to "inhibit" intelligence beyond a certain evolutionary point on all planets. Basically, as soon as a species becomes too advanced, it is destroyed.

Without revealing any spoilers, my point was to tie your post in with the "inhibitor" idea in the books: the machines started to fail after a thousand or so years, just as all machines eventually fail. Without us, they're just machines.
 