
Watson

Level 4
Joined
Aug 17, 2011
Messages
119
Saw this Watson thing for the first time today, more here

Watson is made up of ninety IBM POWER 750 servers, 16 terabytes of memory, and 4 terabytes of clustered storage. Davidian continued, “This is enclosed in ten racks including the servers, networking, shared disk system, and cluster controllers. These ninety POWER 750 servers have four POWER7 processors, each with eight cores. IBM Watson has a total of 2880 POWER7 cores.”
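
Those figures multiply out, as a quick sanity check in Python shows (a minimal sketch, using only the article's numbers):

servers = 90           # IBM POWER 750 servers
cpus_per_server = 4    # POWER7 processors in each server
cores_per_cpu = 8      # cores per POWER7 processor

print(servers * cpus_per_server * cores_per_cpu)  # 2880, matching the article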


and they are now putting this thing's brain in a Wii, silly!



 
Level 34
Joined
Sep 6, 2006
Messages
8,873
I watched one of the episodes of Jeopardy with Watson. It was pretty amazing, although the contestants did really well for having to take on a computer.
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,258
and they are now putting this thing's brain in a Wii, silly!
You seem not to have read the article properly. It is clear the article author's opinion is the same as yours (fortunately, he also mentioned the truth).

The Wii U (not the Wii) will use technology similar to that which the Watson AI used in its servers. The Wii U (and especially the Wii) cannot run any part of Watson, as it is far too weak. The technology, however, probably does allow for a great cost-to-performance ratio.
 
Level 4
Joined
Aug 17, 2011
Messages
119
The Wii U (not the Wii) will use technology similar to that which the Watson AI used in its servers. The Wii U (and especially the Wii) cannot run any part of Watson, as it is far too weak. The technology, however, probably does allow for a great cost-to-performance ratio.

Oh heavy, I was kinda wondering how they were gonna fit all that stuff in one 'lil console... o.0

Still, awesome tech!


Re: IBM making all new processors. Okay, so I heard from a friend that a new generation of processor is being designed (or has been designed, I don't know). This guy explained it to my mate as the equivalent of a '99-core' processor.

Okay, so this friend of mine apparently spoke to some guy who worked at Google. My mate was working at a pub and met the guy one night. They spoke for a while, it came out that this guy worked for Google, yadda-yadda, and then my mate was told not to buy a new PC for the next year or 18 months. He was told about this new processor design technology, explained to him as the equivalent of a '99-core processor'. This happened about a week ago...


So I tried to Google (search) it, but after page 5, still nothing! Now, personally I doubt anyone has made a '99-core processor' (firstly, don't cores have to be symmetrical, so having an odd number is ridiculous? and also, what is the point of 99 cores, running a billion different apps in the background? Wouldn't you rather have fewer cores running at a higher clock speed instead?). So, I think that the concept of a 'super strong' processor was just explained as having '99 cores' (to this friend, who is not that tech savvy), but he refuses to budge, saying a 99-core processor is coming!

So, can anyone confirm or deny this? Is a '99-core' processor being designed? Is it something else (not cores) that will make new processors so much stronger that it was explained as having 99 cores? Or was this all just drunken big talk/bullshit to start off with?
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,258
(firstly, don't cores have to be symmetrical, so having an odd number is ridiculous?
Only the actual core layout has to be patterned in such a way as to keep communication delays constant (usually geometrically, involving multiples of 2). Nothing stops them from disabling some of those cores so that an odd number is present.

and also, what is the point of 99 cores, running a billion different apps in the background? Wouldn't you rather have fewer cores running at a higher clock speed instead?)
I take it you have been trapped in a time bubble for the last 7 years... There is a reason clock rates now (reaching 4 GHz for normal CPUs) are not much faster than clock rates then (about 3 GHz). One cannot make components run efficiently and stably at much higher clock rates. This is due to factors such as tolerance and capacitance (as you know, higher frequencies lower the impedance of capacitive units such as the material between 2 layers of a printed circuit). More processors, on the other hand, allow you to perform more in parallel. As more is performed in parallel, the processor might not need to have all cores running all the time, meaning that you can turn off extra cores and even lower the clock rate to save power.
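
To put a rough number on that power saving, here is a minimal Python sketch using the standard CMOS dynamic power approximation P = C * V^2 * f (the formula is standard but not from the post, and the capacitance, voltage, and clock figures are made up for illustration):

def dynamic_power(capacitance, voltage, frequency):
    # Standard CMOS dynamic power approximation: P = C * V^2 * f.
    return capacitance * voltage ** 2 * frequency

base = dynamic_power(1e-9, 1.2, 3.0e9)    # 1 nF switched, 1.2 V, 3 GHz (made-up figures)
scaled = dynamic_power(1e-9, 1.0, 2.0e9)  # drop voltage to 1.0 V and clock to 2 GHz
print(round(scaled / base, 2))            # 0.46: under half the power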

So, can anyone confirm or deny this? Is a '99-core' processor being designed?
I think this might be what he was referring to. Notice the large number of cores.
 
Level 4
Joined
Aug 17, 2011
Messages
119
I take it you have been trapped in a time bubble for the last 7 years...

hehe... must be, it's called university, mate. I don't study computers, so I don't have time to keep up to speed with the latest tech developments when I'm busy grinding away at Economics, Stats and Finance... So, I'll apologise for my incompetence; for example:

as you know, higher frequencies lower the impedance of capacitive units such as the material between 2 layers of a printed circuit

so, that I actually did not know; but it makes sense that "small wires melt when they get too hot", if that is more or less what you are implying in layman's terms...

and also, thanks for the link, that Larrabee stuff seems interesting. Should keep me busy for a while!
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,258
so, that I actually did not know; but it makes sense that "small wires melt when they get too hot", if that is more or less what you are implying in layman's terms...
That has nothing to do with what I said...

Capacitive elements have an impedance of 1/(jwC), where j = i = sqrt(-1), w = omega = 2*pi*f, and C is the capacitance in farads. This impedance translates to a resistance (magnitude) with a phase offset (as it is a complex quantity). One can easily see that frequency is inversely proportional to the resulting resistance. At DC ((f)requency = 0) you end up with division by 0 for the resistance, which can be viewed as an impossibly high resistance and explains why capacitors block DC currents. As f goes to infinity the result goes to 0, which means that as f increases capacitors approach a short circuit.
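
Here is a minimal numeric sketch of that relation in Python (the 1 pF capacitance is an arbitrary illustrative value, not from the post):

import math

def capacitor_impedance(f, c=1e-12):
    # Z = 1 / (jwC) with w = 2*pi*f; the 1 pF default is an arbitrary choice.
    if f == 0:
        return float("inf")  # DC: infinite resistance, the capacitor blocks current
    return 1 / (1j * 2 * math.pi * f * c)

for f in (0.0, 1e3, 1e6, 1e9):
    print(f, abs(capacitor_impedance(f)))
# Magnitude: inf, ~1.6e8, ~1.6e5, ~160 ohms, falling towards a short circuit.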

The way processors are currently designed is very compact. This means that connections between components inside them are very, very small. This brings about a variety of problems. Firstly, the connections are very close together, separated by semiconductor acting as an insulator (which is basically what a capacitor is), so the chip can be viewed as having dissimilar connections joined by capacitors, and thus current leakage occurs (and more so at higher frequencies). Secondly, the metal tracks that make up each channel have a very small inductance (as all wires do), resulting in systematic signal skew. Finally, distortions to magnetic fields add some element of randomness to the skew.

This is not even mentioning how the actual logical elements made up from transistors have skew due to their own means of operation. Worse still, if a signal that is not very close to a logical level (low or high) enters a transistor, there is a chance of metastability occurring, which can potentially cascade through the entire processor, causing it to produce unintended results.

In order to minimise these effects, a certain minimum time per transistor step (receiving, producing a reliable result, and letting the result arrive at the next transistor element) is defined. Of course, logical elements are made of multiple transistor elements, and a computer functions by having many logical elements chained together. The end result is that the maximum clock rate of a computer is defined by the length of the critical path (the longest chain of logical elements), such that it achieves a reliability that is usable (will not error in months of operation). The allocated clock is based on worst-case tolerances, and thus you can overclock (raise the clock frequency of) most CPUs by a variable amount while still keeping high reliability.
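
As a toy illustration of the critical path idea (the path names and per-gate delays below are made-up numbers, not real process figures):

# Toy model: the maximum clock rate is set by the slowest (critical) path.
# Path names and per-gate delays (in picoseconds) are made-up figures.
paths = {
    "adder carry chain": [30, 25, 25, 40, 30],
    "decode logic": [20, 20, 25],
    "register bypass": [15, 30],
}

critical_ps = max(sum(delays) for delays in paths.values())  # longest chain: 150 ps
f_max_ghz = 1e3 / critical_ps                                # 1 GHz period = 1000 ps
print(critical_ps, round(f_max_ghz, 2))                      # 150 ps -> ~6.67 GHz ceiling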

Of course, this does not mean that processors are stuck in the 3-4 GHz range. Only CISC processors like the x86/64 architecture (what most PCs use) are limited to reliable operation in this range. RISC architectures have already pushed past 6 GHz due to shorter critical paths, but this does not mean they perform better than lower-clocked processors (the predecessor of the processor used in Watson ran at 6 GHz, but they found it more efficient to reduce the clock to 4 GHz and use the headroom this freed to improve its performance in other ways).

You might be wondering how a square wave behaves when passed through a capacitor (after all, computer clocks are square waves). A square wave is actually a spectrum of sine waves at odd multiples of the square wave's frequency. As the frequency of a square wave is increased, its harmonic content moves to ever higher frequencies (where capacitive leakage is worse).
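
A small Python sketch of that decomposition (the 4/(pi*n) harmonic amplitudes come from the standard Fourier series of an ideal square wave, not from the post; the number of harmonics is an arbitrary choice):

import math

def square_wave_approx(t, f, harmonics=5):
    # Sum the first few odd harmonics: amplitude 4/(pi*n) at frequency n*f.
    total = 0.0
    for k in range(harmonics):
        n = 2 * k + 1  # odd multiples: 1, 3, 5, ...
        total += (4 / (math.pi * n)) * math.sin(2 * math.pi * n * f * t)
    return total

for t in (0.05, 0.15, 0.25, 0.35, 0.45):
    print(t, round(square_wave_approx(t, 1.0), 3))
# Outputs hover around +1 across the first half-cycle, approximating the flat top.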
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,258
The layout generally follows some form of symmetry to keep track distances similar. However, that does not stop them from disabling defective cores in higher-end models, nor does it mean that core counts need to be even numbers; the cores just have to be positioned so signals travel a similar distance.
 