
The Human Brain - Soon to be Obsolete?

Status
Not open for further replies.
Level 7
Joined
Jun 16, 2008
Messages
327
I'm sure the title makes you think this is a discussion of some loony science fiction franchise, right? Sadly, this is no fiction; or at least, it is a truth that has simply not yet come to pass. Now, what am I talking about? I'm talking about the rapid, unstoppable process of computer development gradually surpassing all boundaries and limitations. Take a look at this chart, drawn up by the American futurist Ray Kurzweil:

PPTExponentialGrowthof_Computing.jpg

What this depicts is the exponential growth in raw computer power and capacity through the 20th and 21st centuries. But did you notice something? Glance over to the right-hand-side column, and take a look at the two items pictured there. Yes, the human brain is being compared to computer processing power. Why? Because, according to Kurzweil, somewhere between 2019 and 2023 an average $1,000 personal computer will have the raw processing power of a human brain.

This raises many ethical and logical questions, one of the foremost being: does this mean that the human brain will soon become obsolete? The answer is bittersweet, but the chances are looking extremely likely that it is a "yes." According to Kurzweil, artificial intelligences (AIs) will eventually replace humans, both in government and in the workplace. This sounds quite ominous for us, but IT and computer development enthusiasts are spurring it on.

In my honest opinion, it is ironic that the human race is eagerly spurring on its own replacement and obsolescence, nudging it along until it gains enough momentum to become unstoppable. We, the humans, supposedly the most intelligent species, are thereby doing what is, in my opinion, the stupidest thing ever: replacing ourselves with our own creations. Any opinions?
 
Level 15
Joined
Mar 31, 2009
Messages
1,397
Old news, son, and more suited for Medivh's Tower

Frankly, even if we get to brain level, it's not like it will automatically "come alive". We need much more advancement in AI before that will happen.




Also, you do realize the entirety of the Internet has already reached past 1 brain or so?
 
Level 17
Joined
Nov 11, 2010
Messages
1,974
The only way I can see a computer getting smarter than a human brain is if you manage to create some sort of 'learning' function.
Other than that, the human brain wins any day!
 
Level 7
Joined
Jun 16, 2008
Messages
327
Old news, son, and more suited for Medivh's Tower

I see. My apologies for the outdated state of this information, but I guess I was just a little late in finding it in an issue of Time magazine, and it caught my eye. I guess I couldn't resist sharing it. As for its fitness for Medivh's Tower, I lack the reputation to post in such a place. Besides, I wouldn't honestly consider this a debate; it's more an open-ended discussion. Still, I understand your point entirely.

Frankly, even if we get to brain level, it's not like it will automatically "come alive". We need much more advancement in AI before that will happen.
The only way I can see a computer getting smarter than a human brain is if you manage to create some sort of 'learning' function.
Other than that, the human brain wins any day!


That's true; computers follow preset programs, and thus lack any "living" qualities. However, you yourself mentioned the possibility: advancement in AI, perhaps to the point where learning things -- even basic algorithms -- becomes possible for a sophisticated AI, through new methods of data collection and interpretation over a widespread medium such as the Internet. This has already been accomplished, albeit partially and hardly in any sophisticated form, through such systems as Echelon, used by the United States government.(1)(2)
The raw processing power needed for such an operation -- the ability to learn basic algorithms through data collection and interpretation -- would be the equivalent not of one human brain, but of many. Still, from a futurist's point of view, it is possible, if not likely.



Also, you do realize the entirety of the Internet has already reached past 1 brain or so?

I can imagine it has, and I fear I have forgotten that point. My apologies. However, that's storage capacity. I'm talking about processing and interpretation power. A 'think tank', if you will.
 
Level 21
Joined
Jul 2, 2009
Messages
2,951
I fail to see why people want to perfect the human brain. You can't copy the soul of a human being, and a machine won't do any good without a soul.
You can make a lot of copies of humans, but they won't have the thing that makes them human.
The brain always wins, so this is kinda horse shit. LOL....
 
Level 15
Joined
Mar 31, 2009
Messages
1,397
I can imagine it has, and I fear I have forgotten that point. My apologies. However, that's storage capacity. I'm talking about processing and interpretation power. A 'think tank', if you will.

No no no, processing power too. Interpretation is being worked on; the nearest we have to that is search engines, but that's a very basic form of it.


I fail to see why people want to perfect the human brain. You can't copy the soul of a human being, and a machine won't do any good without a soul.
You can make a lot of copies of humans, but they won't have the thing that makes them human.
The brain always wins, so this is kinda horse shit. LOL....


Excuse me, but are you the type of person that believes only man is intelligent?
 
We make computers to think faster than us for a reason.

A computer's pretty much useless without someone to operate it anyway; even if a computer does become self-aware, it does so as the product of human ambition and creation. There are certain aspects of the human mind that are difficult to reproduce in AI/computers anyway, like inventiveness, spontaneity, or artistic outlook. Basically, any accomplishment a computer comes to is the result of human creation - the human mind isn't obsolete, it's just creating machines to think faster than it ever could to aid it.
A computer is a tool, why not have the best tool available?
 
Level 7
Joined
Jun 16, 2008
Messages
327
We make computers to think faster than us for a reason.

A computer's pretty much useless without someone to operate it anyway; even if a computer does become self-aware, it does so as the product of human ambition and creation. There are certain aspects of the human mind that are difficult to reproduce in AI/computers anyway, like inventiveness, spontaneity, or artistic outlook. Basically, any accomplishment a computer comes to is the result of human creation - the human mind isn't obsolete, it's just creating machines to think faster than it ever could to aid it.
A computer is a tool, why not have the best tool available?

That's completely true. However, it is in full accord with what I said: it would be a human achievement to develop an AI capable of self-awareness and of fully emulating the human brain. Keep in mind, though, that it's this emulation of a human brain that could very well change the status of the computer as we know it from a tool and an aid to a sentient threat. While this may sound corny thanks to the many sci-fi franchises based around it, once a computer can replicate human thinking, it can replicate human reasoning, which is primarily governed by emotion. Thus, it can replicate ambition. If it can do this, and is in some ways superior to humans, it would automatically classify humans as obsolete. Why? Because of the ambition we gave it. One should not be so eager to nudge on development without giving due regard to where it may end up one day.

There's a difference between a simple, usable tool and one that is capable of self-awareness and ambition; two things which could easily turn it from a tool into a threat, or at the very least a liability.
 
You don't think the AI's creator would design it with fail-safes to prevent it from going through the whole cliché "humans are obsolete" line of reasoning?

Also, replicating human emotion would be a particularly difficult task, as it's often illogical in nature - I guess you could create a more synthetic form of situational emotion based on circumstances and observation, but like a CGI image that's intended to be realistic, it'd be unnerving and awkward; it'd seem as if there were something subtly wrong with it, however realistic it may be. Creating a truly realistic emulation of human emotion is still probably way off.
 

Dr Super Good

Spell Reviewer
Level 65
Joined
Jan 18, 2005
Messages
27,289
There is 1 fatal mistake with that graphic....
Human brains cannot do calculations... They are neural networks, which work by configuring pathways to get things approximately correct. That is why they are great at AI but suck at computation (and why we cannot answer our maths paper in under a second).

Likewise, computers are shit at neural network calculations, which is why AI is so difficult.

Our brains basically work like a sort of water pump network. Water flows in from one or more sources (like other water pumps or a river) and flows out to one or more devices (like homes or other water pumps). The flow rate out is different on each output and varies in a non-linear way with the flow rate on each input. The brain is continually at work, and a lot of it is self-oscillation, which probably makes up your thoughts. Brains are also analogue, meaning there is a nearly uncountable number of input and output values for each neuron.
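That "water pump" picture is roughly how an artificial neuron is usually modelled: several weighted inputs feed one node, and the output varies non-linearly with the total inflow. A minimal sketch in Python (the weights, inputs, and sigmoid choice here are made-up illustration, not anything from a real system):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the input 'flow rates'
    pushed through a non-linear activation (a sigmoid)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # always strictly between 0 and 1

# Two upstream 'pumps' feeding one neuron; the output is not a linear
# function of either input, just like the flow-rate analogy above.
print(neuron([0.5, 0.2], [1.5, -0.8], 0.1))
```

A real brain has billions of these running continuously and in analogue, which is exactly the part a clocked, finite-state machine struggles to reproduce.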

A computer, however, can be looked at like a worker who is assigned one job a day. He can either build roads or move goods; he cannot do both at the same time and must do one for the whole day. Computers are synchronous systems which work on clock pulses (mostly on the edge of a clock pulse) and behave extremely predictably. Computers are finite state machines, so they can only represent finite quantities in any operation.

Thus the human brain can never be as good as a processor, but neither can a processor be as good as the human brain. It all depends on the purpose.

An example of what humans are bad at: counting from 1 to 2^32 and looking up a piece of data each time.
An example of what computers are bad at: detecting that a wall is in front of them and walking around it.
 
Level 7
Joined
Jun 16, 2008
Messages
327
You don't think the AI's creator would design it with fail-safes to prevent it from going through the whole cliché "humans are obsolete" line of reasoning?

I wouldn't be so quick to say human design is trustworthy. When man discovered a way to split the atom and release incredibly powerful nuclear energy, did he apply it to good use through design? No; what happened next? The bombing of Hiroshima and Nagasaki, Japan, in 1945. What was the incentive for the rapid development of space technology during the 1960s? Conflict; namely, between Russia and the U.S.A., in what we today know as the Cold War. What do both of these examples have in common? Man's tendency to misuse his discoveries. The atomic bomb -- enough has already been said. Instead of pooling their resources to further man's knowledge, Russia and the U.S.A. instead brought untold human casualties, albeit indirectly, such as those in Vietnam. You see, when it comes to man and technology, man has a track record of untrustworthiness, and this could very well recur with his development of AIs. The ironic thing, though, is that the protocol such an AI would have to follow -- data interception, collection and interpretation -- would make the development of self-awareness inevitable, if not necessary.

Also replicating human emotion would be a particularly difficult task, as it's often illogical in nature - I guess you could create a more synthetic form of situational emotion based on circumstances and observation, but like a CGI image that's intended to be realistic it'd be unnerving and awkward, it'd seem as if there's something discretely wrong in it as realistic as it may be. Creating a truly realistic emulation of human emotion is still probably way off.

Replicating human emotion would not be an installed protocol. Rather, it would be a learned one -- learned by the AI itself. Remember: the foremost use for such a powerful AI would be surveillance and data analysis through something such as the Internet. In order for it to surpass any previously built AI, it would have to be able to process and interpret every single bit of information and check it for threats itself. In other words, it would have to be able to analyze for itself. The scenario that could follow would be similar to exposing a young, eager-to-learn schoolboy to literature and telling him to read. He's bound not simply to read, but to analyze and contemplate. Eventually, he will have developed an awareness of the subjects and topics he has read about. The same goes for an AI.

And now, what is evident in all those "books"? Human thought. What guides human thought? Logic? Perhaps. But emotion easily overshadows logic, as has been demonstrated throughout human history. That means an analysis of human thought would inevitably bring about an awareness and understanding of human emotion. All that's left for it to do then is, quite simply, its job: intercept, analyze, understand, absorb.

 
Level 22
Joined
Dec 31, 2006
Messages
2,216
An example of what computers are bad at: detecting that a wall is in front of them and walking around it.

Lol, that's insanely simple for a computer. You could've at least tried to find something that is actually hard for a computer. Anyway, in the end, what computers are good and bad at comes down to how well we write the programs, which in turn comes down to what our brain is good or bad at.

We have already managed to make robots which learn. One of my teachers made a robot chicken which learned to walk on its own (and it learned way faster than a human being too; it took a mere 15 seconds).

In fact, our brain is extremely simple if you look at it from the right angle. It's just a reward-and-punish system. It's quite similar to the A* pathfinder.
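That reward-and-punish idea can be written down in a few lines: try actions, and nudge your estimate of each action toward the reward it actually produced. A toy Python sketch (the reward numbers, noise level, and 10% exploration rate are arbitrary illustrative choices, not anything from a real robot):

```python
import random

random.seed(0)                               # deterministic demo
true_reward = {"left": 0.2, "right": 0.8}    # hidden from the learner
value = {"left": 0.0, "right": 0.0}          # learned estimates

for _ in range(500):
    # Mostly exploit the best-known action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(list(value))
    else:
        action = max(value, key=value.get)
    reward = true_reward[action] + random.gauss(0, 0.1)  # noisy feedback
    value[action] += 0.1 * (reward - value[action])      # reward/punish update

print(value)  # "right" ends up valued well above "left"
```

The A* comparison is loose, but both do boil down to repeatedly picking whatever currently scores best.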
 

Dr Super Good

Spell Reviewer
Level 65
Joined
Jan 18, 2005
Messages
27,289
Lol, that's insanely simple for a computer.
Uh huh... So that car with anti-accident brakes did not just ram at full speed into a stationary object at a motor show last year.

It may be easy in a game like SC2, where the computer has access to object properties directly, but I am talking about real life, where it has to detect the wall using an analogue sensor without touching it.

Trust me, it is no easy task, or the world would already have self-driving cars.

We have already managed to make robots which learn. One of my teachers made a robot chicken which learned to walk on its own (and it learned way faster than a human being too; it took a mere 15 seconds).
That is true. We have robots able to learn what a chair and a table are in one sitting, with only a few examples. However, this is still extremely limited, and their natural language abilities are poor. Remember that learning can be as simple as a few variables that keep changing until they hold the best results, but dynamic and extendable learning, capable of learning abstract ideas, linking them and producing new thought, is still in the distant future.

In fact, our brain is extremely simple if you look at it from the right angle. It's just a reward-and-punish system. It's quite similar to the A* pathfinder.
Our brains are not precise, which is why it takes ages to learn some ideas. Whereas a computer can compute 1 + 1 and always get 2 no matter how many times it is done (radiation-related errors not included), a human will only get 2 if he was told it was 2 or understands the ideas behind arithmetic and the values of numbers. You could go around telling kids in developing countries with limited education that 1 + 1 = 3, and a lot of them will learn that unless they grasp the ideas behind addition, even though they may know what the numeric values of 1 and 3 are.

It is also why humans make mistakes. Try adding 9827481 to 5564868 in your head while looking at the numbers for only 1 second. A computer has no problem doing that, as its logic is absolute, whereas the human brain has to apply approximating ideas to it. Due to the number of elements involved, it is extremely unlikely that you will get it right in one attempt, as the brain's approximate method of operation is not suited to such exact tasks.

Our brains are brilliant at learning and devising strategies, and can do so without much thought, but they are total shit for absolute information, and it is easy for them to learn the wrong things. An example is my spelling: whereas a computer can output any number of correctly spelled words, it cannot create sentences as complex as I can, but I cannot produce spelling as perfect as it can, as my brain has not been trained enough on spelling certain words (and in some cases was trained wrongly, by me guessing how to spell them).

Although it is possible to emulate neural networks like the human brain, due to the discontinuous update properties and finite state properties of computers, an efficient or exact emulation is impossible. It is good enough to drive OCR and other visual recognition software, and some basic AI like that in Supreme Commander, but it is not good enough to make a full AI, as it is more reliable and more powerful to just employ more humans.

However, when and if we switch over to analogue computing or quantum computing, then it may be possible, due to the more continuous and less finite method of operation.
 
Level 22
Joined
Dec 31, 2006
Messages
2,216
Uh huh... So that car with anti-accident brakes did not just ram at full speed into a stationary object at a motor show last year.
Fail programming.

It may be easy in a game like SC2 where the computer has acess to object properties diectly, but I am talking in real life where it has to detect the wall using an analogue sensor without touching it.
I've made a robot that used analogue sensors to detect walls as well as chasms in the ground. I had very limited time (it was a contest), so I couldn't implement more advanced eyes to let it see that there were ways around the object; instead I programmed it to turn either to the right or to the left and keep trying. It also made a beep when it encountered obstacles. The beep was only supposed to be a placeholder for a mapping function I was going to implement, but again I had too little time. With that function you could send a robot into unknown areas and get a picture of what they look like. Our teacher also said that the chasm detection could be used to measure how much the polar ice had melted.
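For what it's worth, the control logic described above fits in a handful of lines. A hypothetical Python sketch (the sensor function, threshold value, and action names are all invented for illustration; a real robot would read actual hardware):

```python
import random

OBSTACLE_THRESHOLD = 0.3   # normalised analogue reading; below this means "wall"

def read_distance_sensor():
    """Stand-in for a real analogue distance sensor."""
    return random.random()

def control_step(distance):
    """One pass of the loop: beep and turn on an obstacle, else drive on."""
    if distance < OBSTACLE_THRESHOLD:
        print("beep")                                      # the placeholder beep
        return random.choice(["turn_left", "turn_right"])  # pick a side, retry
    return "forward"

for _ in range(5):
    print(control_step(read_distance_sensor()))
```

The hard part in practice is not this loop but getting a trustworthy `distance` out of noisy analogue hardware, which is Dr Super Good's point.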

Trust me, it is no easy task, or the world would already have self-driving cars.
We have those (still in beta, but they exist and they are promising).

That is true. We have robots able to learn what a chair and a table are in one sitting, with only a few examples. However, this is still extremely limited, and their natural language abilities are poor. Remember that learning can be as simple as a few variables that keep changing until they hold the best results, but dynamic and extendable learning, capable of learning abstract ideas, linking them and producing new thought, is still in the distant future.
I wouldn't say it's in the distant future. NASA has already developed robots which learn a lot of things and make advanced decisions based on the circumstances.

Our brains are not precise, which is why it takes ages to learn some ideas. Whereas a computer can compute 1 + 1 and always get 2 no matter how many times it is done (radiation-related errors not included), a human will only get 2 if he was told it was 2 or understands the ideas behind arithmetic and the values of numbers. You could go around telling kids in developing countries with limited education that 1 + 1 = 3, and a lot of them will learn that unless they grasp the ideas behind addition, even though they may know what the numeric values of 1 and 3 are.

It is also why humans make mistakes. Try adding 9827481 to 5564868 in your head while looking at the numbers for only 1 second. A computer has no problem doing that, as its logic is absolute, whereas the human brain has to apply approximating ideas to it. Due to the number of elements involved, it is extremely unlikely that you will get it right in one attempt, as the brain's approximate method of operation is not suited to such exact tasks.
Of course.


Our brains are brilliant at learning
Self-contradicting, much?

and devising strategies
Nah, very few people are known to be good strategists.

but they are total shit for absolute information, and it is easy for them to learn the wrong things.
Indeed.

it cannot create sentences as complex as I can
Cleverbot. (And a few others)

Although it is possible to emulate neural networks like the human brain, due to the discontinuous update properties and finite state properties of computers, an efficient or exact emulation is impossible.
We have the raw power; I don't think efficiency is such a huge problem. Sure, we need to make the software somewhat efficient, but it's not like it would be too hard for a powerful 6-core CPU @ 3.4 GHz dedicated to only that.


It is good enough to drive OCR or other visual recognition software and some basic AI like those in supreme commander, but it is not good enough to make a full AI as it is more reliable and more powerful to just employ more humans.
More computers :3
Anything wrong with clustering?

However, when and if we switch over to analogue computing or quantum computing, then it may be possible, due to the more continuous and less finite method of operation.
If we ever manage to make quantum computers, then I'm confident we will get robots which easily outsmart us. But even without quantum computers, making advanced robots seems like a very plausible task.
 
Level 22
Joined
Dec 31, 2006
Messages
2,216
Hmm, you may just be able to emulate the brain of a tiny fruit fly... Then again probably not.
Lol...

My GPU alone can run advanced pathfinding and complex AI for 1,000,000 units in real time, as well as render the whole thing (in high quality; it's not stick men, mind you). So with an even more powerful processor it should be able to emulate a lot of the human brain, maybe even everything.

It's not like you have to do it exactly the same way as the brain does, as long as you get similar (more likely better) results with a different approach (we are, after all, trying to improve the wheel, not reinvent the square wheel). Btw, a friend of mine is working on simulating neural networks and he has gotten far. He's using Lua too, which isn't even compiled into an executable.
 
Level 15
Joined
Mar 31, 2009
Messages
1,397
Lol...

My GPU alone can run advanced pathfinding and complex AI for 1,000,000 units in real time, as well as render the whole thing (in high quality; it's not stick men, mind you). So with an even more powerful processor it should be able to emulate a lot of the human brain, maybe even everything.

It's not like you have to do it exactly the same way as the brain does, as long as you get similar (more likely better) results with a different approach (we are, after all, trying to improve the wheel, not reinvent the square wheel). Btw, a friend of mine is working on simulating neural networks and he has gotten far. He's using Lua too, which isn't even compiled into an executable.


Tell us when it says "I think therefore I am" without prior programming.
 

Dr Super Good

Spell Reviewer
Level 65
Joined
Jan 18, 2005
Messages
27,289
He's using Lua too, which isn't even compiled into an executable.
So he is emulating semi-infinite continuous systems on a processor? I highly doubt that... Yes, we can get similar results to a neural network via simple weighted node algorithms, but they still do not have the continuous aspect our brain has.
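For readers wondering what a "simple weighted node algorithm" looks like: the classic example is a single perceptron, one weighted node whose weights are corrected whenever it answers wrongly. A Python sketch (the AND function, learning rate, and epoch count are just illustrative choices):

```python
# Teach one weighted node the AND function by error correction.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # one weight per input 'pathway'
bias = 0.0

for _ in range(20):                                   # a few passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = target - out
        w[0] += 0.1 * error * x1                      # strengthen or weaken
        w[1] += 0.1 * error * x2                      # each input pathway
        bias += 0.1 * error

print([1 if w[0] * a + w[1] * b + bias > 0 else 0
       for (a, b), _ in data])                        # → [0, 0, 0, 1]
```

Note how discrete and synchronous this is: the node fires 0 or 1 on a fixed schedule, which approximates a neuron's input/output behaviour without any of its continuous, analogue character.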

My GPU alone can run advanced pathfinding and complex AI for 1,000,000 units in real time, as well as render the whole thing (in high quality; it's not stick men, mind you).
Uh hu... Screenshots please?
 
Level 22
Joined
Dec 31, 2006
Messages
2,216
Tell us when it says "I think therefore I am" without prior programming.
You do know it took humans MILLIONS of years to get to that, right?

So he is emulating semi-infinite continuous systems on a processor? I highly doubt that...
Go ahead and doubt, people doubt evolution too.

Uh hu... Screenshots please?
Saw it on amd.com.

I found it again; it turns out it was for an older GPU, but here it is nonetheless (it doesn't show everything):
ManagingLargeCrowds.png

It uses GPU tessellation on every character. More images:
ArtificialIntelligence.png


Up close they will be rendered with 1.6 million triangles.

Btw:
The demo employs state-of-the-art, massively parallel artificial intelligence computation for dynamic path finding and local avoidance on the GPU. The characters busily move from goal to goal while avoiding treacherous regions of the terrain. The characters spend time working at gold mines, foraging wild mushrooms, and napping at their camp sites. The user can explore every corner of this virtual world by flying around the environment using a variety of input paradigms. The user may also influence their behavior by placing new goals in the environment and even adding new obstacles such as dangerous poison fields and summoning frightening ghosts! As new goals and obstacles are placed in the environment, they adapt by dynamically changing their paths.


With two AMD Phenom X6 CPUs + 4 of the newest GPUs, the ATI Radeon HD 6990 4GB (yup, this is possible), simulating the human brain fairly well should be within the realm of possibility.
 
Level 22
Joined
Sep 24, 2005
Messages
4,821
The human brain can perform pathfinding and can emulate AI perfectly well. It can also render an infinite number of units in detail, while processing countless bodily functions like sight, hearing, muscle coordination, hormone production, etc.

It's inappropriate to compare the brain's functionality against a GPU's rendering/computational capability, since all the GPU does is process numbers. It can't process anything other than numbers, while the brain can interpret a lot of abstract data types without their being declared first.
 
Well, I think you can't simulate the human brain, simply on the basis that the brain doesn't only compute movement and that kind of easy stuff.

As for what the brain computes:

Different smells, each with an intricate memory behind it that recalls the smell for you, differentiating into over a billion distinct smells or even more, depending on the human. Sight: depth perception in multiple dimensions and interactive interpretation of even unknown objects, colours and optical attributes, all connected in a complex "data bank" to different objects (again, more than a billion, depending on the human). Touch: visualization of touch combined with memory of the sensation and a link to the texture of the touched object, which is in turn connected to sight so objects can be remembered by their texture. On top of that, the regulation of the muscles and much, much more.

If you think any computer can multithread over 100,000 threads at the same time while actively collecting data in real time, combined with constant change and evolution of that data, you are wrong.
 

Dr Super Good

Spell Reviewer
Level 65
Joined
Jan 18, 2005
Messages
27,289
The brain learns. A computer can only calculate.

Then what is that operating system you are running? The one you can change by installing a different one onto the computer?...

Computers are able to retain data, just like the neurons in one's brain do by altering path responses. The problem, though, is that computer data is absolute, whereas brains work approximately, so emulating a brain would require very complex data management, to a degree that computers cannot handle (as brains are not finite state machines, but computers are).
 