
[Need genius to explain] How does the computer work?


Deleted member 219079

I mean, yeah, "google it b*tch". Well, I tried, and all I come up with is "instructions are passed through the CPU to achieve stuff", "the CPU reads RAM", "the CPU reads 1s and 0s", etc.

But I mean at the basic level, how does it work?

I mean, how can it be dynamic?? I understand the Minecraft-style gates (NOR, OR, AND, NOT, etc.), but how can that be dynamic?

And what's the role of the CPU, RAM and HDD anyway? Couldn't the gates just be on the motherboard itself?

o_O
 
Level 29
Joined
Jul 29, 2007
Messages
5,174
You basically have a lot of electronic parts, and they work together through electric magic, and TADAM, a computer!

You can try to google how to make simple things out of logic gates (which are built from transistors), like calculators.
Computers are the same, just with billions of 'em and much more complicated circuits.

Yes, the gates could be on the motherboard, and in some cases they are, but that removes the ability for people to replace parts, which is a big part of the computer world.
Silent Power (which seems to have died, sadly) is an example of a company that built everything directly onto the motherboard.

As to the HDD, you need some place to store permanent data, and DRAM sticks (aka RAM) are volatile, meaning they don't hold information once electricity stops going through them (this makes sense, right?).
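To make the calculator example concrete, here is a minimal sketch in C++ (purely illustrative, with made-up function names; this is not how real hardware is described) of how a handful of Boolean gates form a 1-bit full adder, and how chaining four of them gives a 4-bit ripple-carry adder:

#include <cstdio>

// Toy model: each "gate" is just a Boolean function on 1-bit values.
static int AND(int a, int b) { return a & b; }
static int OR (int a, int b) { return a | b; }
static int XOR(int a, int b) { return a ^ b; }

// A 1-bit full adder built only from the gates above.
static void fullAdder(int a, int b, int carryIn, int& sum, int& carryOut) {
    sum      = XOR(XOR(a, b), carryIn);
    carryOut = OR(AND(a, b), AND(carryIn, XOR(a, b)));
}

int main() {
    // Chain four full adders into a 4-bit ripple-carry adder: 5 + 3.
    int a[4] = {1, 0, 1, 0};   // 5 in binary, least significant bit first
    int b[4] = {1, 1, 0, 0};   // 3 in binary, least significant bit first
    int carry = 0, result = 0;
    for (int i = 0; i < 4; ++i) {
        int sum;
        fullAdder(a[i], b[i], carry, sum, carry);
        result |= sum << i;
    }
    std::printf("5 + 3 = %d (carry out %d)\n", result, carry); // prints 8 (carry out 0)
    return 0;
}

Real CPUs use faster adder structures, but the principle of composing simple gates into arithmetic is exactly the same.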
 

Deleted member 219079

And the role of the CPU? Do the pins of the CPU connect to the outputs of the gates? Why does it generate so much heat?


Thanks for explaining DRAM and HDD, although I still can't form the big picture :(
 
Level 29
Joined
Jul 29, 2007
Messages
5,174
The CPU is literally your computer. It's the main event. It holds all the logic and runs everything (although, you might know that GPUs also have processor units on them!).
The pins are for inputs/outputs and are connected to the rest of the computer's components through buses, which are the equivalent of electric cables but built directly into the PCB (the usually-green board that the components of most electronic devices sit on).

Electronics generate heat because they are not ideal. When current passes through something that resists it, heat is generated. When something isn't ideal (as in everything, because ideal components can't exist), it has resistance.
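To put rough numbers on that (the values below are illustrative, not the specs of any particular chip): at the package level the dissipated power is P = V x I, which is the same as I^2 x R for the chip's effective resistance R = V / I, and essentially all of it comes out as heat.

#include <cstdio>

int main() {
    // Illustrative numbers only: a desktop CPU drawing roughly 65 A at 1.2 V.
    double voltage = 1.2;                // volts across the package
    double current = 65.0;               // amps flowing through it
    double power   = voltage * current;  // P = V * I (equivalently I^2 * R with R = V / I)
    std::printf("Dissipated power: %.0f W, all of which ends up as heat\n", power); // ~78 W
    return 0;
}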
 

Deleted member 219079

I just searched a little.


Fundamentally, it's fucking simple. Whoever invented those gates kind of invented computers too.


Thanks for your answers!
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,198
CPUs actually do not use gates. Although their logic can be modelled to some degree with gates, and in the old days mainframes were made out of gates, the reality is that gates are not compact or fast enough to make modern processors.

Modern processors are designed using VLSI (very-large-scale integration). They are made using transistor logic to form functional units, which are then arranged over a huge (tiny, but comparatively huge) area into bigger units. This repeats until you get the overall processor layout (what you can discern by looking at the actual piece of silicon through a microscope, outside of the protective casing).

Transistor logic operates by moving charge carriers (supplied by doping agents diffused into the silicon) under the influence of an electric field. The gate of such a transistor creates a charge-carrier bridge between two doped sections when an appropriate voltage level is reached. This allows current to flow and so fundamentally forms a transistor.

If only it were that easy, since there are actually two different kinds of doping. N-type transistors use negative charge carriers inside a positively doped well. As the name implies, they are very good at carrying negative charge but are incapable of carrying positive charge. You also have P-type transistors (positive charge carriers), which are the inverse: P-type doping inside an N-type well. Since negative charge carriers flow more freely than positive ones, most substrate is P-type by default. This means that for PMOS transistors you need to generate an N-well.

Due to the size and scale of the circuits involved, parasitic capacitance is a huge issue, so layouts have to minimize parallel tracks. Additionally, to carry away rogue charges and prevent signal leakage, all transistor structures need to be amply connected to the bulk so that their well/substrate keeps isolating the transistor's active area.

The capacitance of the transistors themselves is also huge. In order for the circuits to function correctly at high clock rates, transistors have to be sized to match what they are driving, so simply branching from one transistor into many is not possible without making the driving transistor larger. There is also the issue of current propagation creating race conditions, but digital circuit design handles that; in doing so it introduces the idea of critical paths, which limit the maximum obtainable clock rate.

This pretty much concludes the VLSI part of circuit design. From there on it is all digital circuit design and the need to optimize the logic. This area itself is also immensely complicated. To reduce the critical path, a technique called pipelining is used, which breaks each step of instruction execution into small, approximately equally long segments that can be executed in a chain. Doing this introduces the issue of decision logic stalling the pipeline, which is combated to some degree with branch prediction.
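As a rough illustration of why pipelining helps (a toy cycle-count model I'm adding, not any real microarchitecture): if an instruction's work is split into four equal stages, a new instruction can enter the pipeline every cycle once it is full, so throughput approaches one instruction per cycle even though each individual instruction still takes four cycles.

#include <cstdio>

int main() {
    const int stages       = 4;    // e.g. fetch, decode, execute, write-back
    const int instructions = 100;

    // Unpipelined: each instruction occupies the whole datapath for `stages` cycles.
    int unpipelined = instructions * stages;

    // Pipelined: `stages` cycles to fill the pipe, then (ideally, with no stalls)
    // one instruction completes every cycle.
    int pipelined = stages + (instructions - 1);

    std::printf("unpipelined: %d cycles, pipelined: %d cycles\n",
                unpipelined, pipelined);   // 400 vs 103
    return 0;
}

Stalls from branches and dependencies eat into that ideal figure, which is where the prediction mentioned above comes in.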

Even logically simple operations like addition can be too slow for high clock rates. Either new and faster ways of computing them are used, or they are turned into dedicated functional units. These units can then run in parallel inside a highly open-ended pipeline, which in turn requires logic to manage the pipeline so that everything executes correctly and stalls when necessary.
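One example of such a "faster way of computing" is carry-lookahead addition (my own illustration, not something named in the post): instead of letting the carry ripple from bit to bit, generate and propagate signals let all the carries of a small block be computed with a fixed, shallow amount of logic, which shortens the critical path.

#include <cstdio>

// 4-bit carry-lookahead adder: all carries are computed from generate (g)
// and propagate (p) signals in constant logic depth, instead of rippling.
static unsigned add4_cla(unsigned a, unsigned b, unsigned c0) {
    unsigned g[4], p[4], c[5];
    for (int i = 0; i < 4; ++i) {
        g[i] = (a >> i) & (b >> i) & 1u;          // bit i generates a carry
        p[i] = ((a >> i) ^ (b >> i)) & 1u;        // bit i propagates a carry
    }
    c[0] = c0 & 1u;
    c[1] = g[0] | (p[0] & c[0]);
    c[2] = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c[0]);
    c[3] = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c[0]);
    c[4] = g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1]) | (p[3] & p[2] & p[1] & g[0])
               | (p[3] & p[2] & p[1] & p[0] & c[0]);
    unsigned sum = 0;
    for (int i = 0; i < 4; ++i) sum |= (p[i] ^ c[i]) << i;
    return sum | (c[4] << 4);                      // 5-bit result including carry out
}

int main() {
    std::printf("9 + 7 = %u\n", add4_cla(9, 7, 0)); // prints 16
    return 0;
}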

Eventually you get to a working instruction machine which you can call a processor. The processor can then communicate with external resources to gain more memory, persistent storage, communication with other processors etc. These are done through communication buses and protocols.

And what's the role of the CPU, RAM and HDD anyway? Couldn't the gates just be on the motherboard itself?
The motherboard is a circuit board, not an integrated circuit. Although manufactured with similar principles and even tools, it is a completely different technology. You cannot build integrated-circuit logic directly on a circuit board; you can only mount packaged chips onto it.

Additionally, among the integrated circuits that make up your computer there can be differences in technology. The most widely known is that RAM uses a completely different technology from the CPU due to the way RAM operates. RAM is far more power-efficient than the cache on your CPU for that reason, but RAM technology cannot be made into high-performance CPUs. Until recently it was impossible to make affordable integrated circuits with storage density similar to mechanical HDDs; with solid state that is changing. Solid state storage is printed much like RAM and CPUs, but again using an entirely different technology.

Many modern computers, such as those from Apple, integrate all three together to some extent. A combined CPU/GPU on a single die is soldered onto the motherboard, along with RAM (often soldered directly onto the CPU/GPU package), and flash memory chips (solid state storage) are also soldered directly onto the motherboard. The result is a highly optimized system that can sit in unbelievably thin cases and weigh practically nothing, at the expense of being completely unable to upgrade or replace anything in the future.
 

Deleted member 219079

The result is a highly optimized system that can sit in unbelievably thin cases and weigh practically nothing, at the expense of being completely unable to upgrade or replace anything in the future.
I'd be ok with it, as long as it's a stable product with a long lifespan.

CPUs actually do not use gates. Although their logic can be modelled to some degree with gates, and in the old days mainframes were made out of gates, the reality is that gates are not compact or fast enough to make modern processors.
Hmm... Does the CPU in this video I watched use gates or the solution you described? Cuz that video makes the CPU look pretty simple to understand; it would be cool if modern CPUs' structure was as simple :D

A CPU/GPU
My i7-3770 has an iGPU or something like that which makes it able to handle graphics; is that what you mean by CPU/GPU?
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,198
I'd be ok with it, as long as it's a stable product with a long lifespan.
Which for Mac users is generally 1-2 years; after all, they cannot be seen in public using a Mac that is not the latest model. JK, although a lot of people have complained about the new design trend and prefer the dated but upgradeable models instead.

Hmm... Does the CPU in this video I watched use gates or the solution you described? Cuz that video makes the CPU look pretty simple to understand; it would be cool if modern CPUs' structure was as simple :D
No modern integrated CPU is built out of logic gates, for the reason I mentioned earlier: they just are not compact enough, fast enough or efficient enough. The fundamental logic can be modelled using gates, but such a model does not factor in issues like clock skew, physical layout, etc.

The processor shown looks kind of basic. Companies like Intel and AMD use massive supercomputers running 24/7 to simulate their processor designs for viability and functionality.

Physically, processors use functional areas, often neatly laid out in grids, that contain functional units. These functional units can be a handful of hand-optimized transistors that perform certain simple logic functions. Computers then place and route most of the functional units together, as it would not be economically viable to do that manually.

A good example of this at work is cache memory. To save on transistors it has imperfect digital access logic, with sense amplifiers to clean up the weak signals coming out of the cells. Another example is that in high-throughput digital circuits (like a functional unit) no explicit storage buffers are used, as the natural capacitance of the transistors is sufficient to hold information between clocks (which is why some devices have a minimum clock speed).

Modern processors are extremely complicated. For example, a modern i7 processor has a microcode system for backwards compatibility, huge tiered caches, multiple execution units, multiple pipelines, special-purpose units for operations like video processing and encryption, special-purpose floating point units, security logic for OSes, etc. Each alone might not be that complicated to think about, but when you combine them all together you understand why Intel and AMD invest several billion every year in designing them.

Games like Minecraft simulate gates to some extent. However, as mentioned earlier, no such logic exists natively in the technology used. Both P- and N-MOS transistors are required in order to achieve full digital logic operations, as each only allows one kind of charge through.
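To illustrate why both kinds are needed (a toy truth-table model with made-up names, not a real circuit simulation): in CMOS logic the PMOS devices form a pull-up network to the supply and the NMOS devices form a pull-down network to ground, and only the two together drive the output to a definite level for every input combination. Here is a NAND gate modelled that way:

#include <cstdio>

// Toy CMOS NAND gate: a PMOS device conducts when its input is 0 (pull-up),
// an NMOS device conducts when its input is 1 (pull-down).
static int cmosNand(int a, int b) {
    bool pullUp   = (a == 0) || (b == 0);   // two PMOS in parallel up to Vdd
    bool pullDown = (a == 1) && (b == 1);   // two NMOS in series down to ground
    if (pullUp)   return 1;                 // output driven high
    if (pullDown) return 0;                 // output driven low
    return -1;  // floating; never happens here because the two networks are complementary
}

int main() {
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            std::printf("NAND(%d,%d) = %d\n", a, b, cmosNand(a, b));
    return 0;
}

Drop either network and some input combinations leave the output floating, which is the practical reason a single transistor type cannot give you complete digital logic.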

My i7-3770 has an iGPU or something like that which makes it able to handle graphics; is that what you mean by CPU/GPU?
Yes, a recent trend has been towards having both the GPU and CPU manufactured at the same time on the same piece of silicon. Doing this can result in smaller, more efficient and cheaper-to-make complete solutions. The currently best-known examples are the PS4 and Xbox One, which both use a single-chip solution from AMD as their main x86 processor and to render the high-definition graphics.

This used to not be possible due to the surface area required. However, recent technology trends have made it so that surface area is no longer a major issue; instead, physical properties like power consumption and yield dominate designs. Good luck getting something the size of your fingertip to dissipate 300 watts of power without developing localized hotspots.

Most modern processors are actually designed to be a lot more powerful than they turn out. Chances are the processor you are using has defective/unreliable units inside it which have been disabled in the factory for improved yield. The perfect ones (with all units functioning) are sold as the performance/high-end models, which is why they cost a fortune. The cheaper models are often high-end models with several defects that have been masked by disabling functionality. In the end it does not matter, as long as the performance is sufficient and the reliability is still near perfect (even a one-in-a-billion error rate is too high).
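A back-of-the-envelope yield model shows why that salvaging pays off (the numbers below are invented purely for illustration): if each large unit on a die has a small independent chance of being defective, only a modest fraction of dies come out perfect, but far more become sellable once a single bad unit can be disabled.

#include <cmath>
#include <cstdio>

int main() {
    // Illustrative numbers only: 8 big units per die, 5% chance each is defective.
    const int    units   = 8;
    const double pDefect = 0.05;
    const double pGood   = 1.0 - pDefect;

    // Probability that every unit works (the "perfect" high-end bin).
    double allGood = std::pow(pGood, units);

    // Probability that at most one unit is defective (salvageable as a lower
    // bin by disabling the bad unit in the factory).
    double atMostOneBad = allGood + units * pDefect * std::pow(pGood, units - 1);

    std::printf("perfect dies:             %.1f%%\n", allGood * 100.0);       // ~66.3%
    std::printf("usable with 1 disabled:   %.1f%%\n", atMostOneBad * 100.0);  // ~94.3%
    return 0;
}

That sorting of dies into different product tiers is the binning mentioned further down the thread.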
 

Deleted member 219079

That's very informal post, thanks :) +rep

Most modern processors are actually designed to be a lot more powerful than they turn out. Chances are the processor you are using has defective/unreliable units inside it which have been disabled in the factory for improved yield.
And how is that possible? Are you saying CPU is not dependent on all its parts? That just confuses me :(

I only have a couple of questions about cores and threads. How can a CPU have more than one core inside it; doesn't that require twice the space? How can they decide which core runs which instruction without interfering, and what even is a thread? Also, how can all the cores connect to the bus (I'm assuming there's still a bus, i.e. lines from the CPU to RAM, like in the video) without messing up the data from the other cores?

And my i7 has this Hyper-Threading feature; how can that work without interference?
Intel® Hyper-Threading Technology (Intel® HT Technology) uses processor resources more efficiently, enabling multiple threads to run on each core. As a performance feature, Intel HT Technology also increases processor throughput, improving overall performance on threaded software.


Damn, I watched that video and I was like "goddamn CPUs are easy to understand, imma make my own factory now and make some $$$" but after the point where you said they don't use gates anymore my dream shattered :/
 
Level 14
Joined
Sep 27, 2009
Messages
1,547
The word you meant to say was "informative".

And how is that possible? Are you saying CPU is not dependent on all its parts? That just confuses me :(

I don't know shit, but here's my guess: every time you make a CPU, there are some defective parts on it, so they're designed to be more powerful than they will be when the customer gets it. That way they can just disable the defective parts without decreasing the processing power so much that they'd have to scrap the whole CPU.
 

Deleted member 219079

Like there'd be any difference >.>
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,198
And how is that possible? Are you saying CPU is not dependent on all its parts? That just confuses me :(
Because there is only a probability that what they fabricate is functional due to all the errors and inaccuracies involved.

Some lower-end models can be specifically fabricated as such (micro controllers, for example); however, usually the high-end chips are also sold as low-end ones for yield.
 

Deleted member 219079

After two years of hard deduction I've arrived at the conclusion that informal isn't the same as informative.

I don't know shit, but here's my guess: every time you make a CPU, there are some defective parts on it, so they're designed to be more powerful than they will be when the customer gets it. That way they can just disable the defective parts without decreasing the processing power so much that they'd have to scrap the whole CPU.
And I guess this is CPU binning.

...

How does a GPU work? I'm learning OpenGL, where the library provides functions to compile a shader at program execution time and send it to the GPU. So it ends up as a sequence of instructions that are executed when they're reached in the pipeline.

That's all easily explained, but there are a gazillion cores on the GPU, right? How is the pipeline handled? How are the parameters handled? Does the GPU have caches/registers besides its RAM?

I'm seeing OpenGL as a spaghetti of compile-time ifs resulting in function calls to the driver software; I doubt OpenGL actually interacts with the GPU at all.

Edit: That "how does a GPU work" is too broad a question; I'm more interested in the processing of the data.
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,198
How does a GPU work? I'm learning OpenGL, where the library provides functions to compile a shader at program execution time and send it to the GPU. So it ends up as a sequence of instructions that are executed when they're reached in the pipeline.

That's all easily explained, but there are a gazillion cores on the GPU, right? How is the pipeline handled? How are the parameters handled? Does the GPU have caches/registers besides its RAM?

I'm seeing OpenGL as a spaghetti of compile-time ifs resulting in function calls to the driver software; I doubt OpenGL actually interacts with the GPU at all.

Edit: That "how does a GPU work" is too broad a question; I'm more interested in the processing of the data.
To simplify...

OpenGL calls -> driver -> hardware

The OpenGL calls interface with the graphics driver. The graphics driver then uses the OpenGL calls to modify the state of the GPU hardware. Calls such as compiling shaders cause the driver to convert the shader code into shader instructions it can use to set up the GPU. Shader-binding calls cause the driver to set up the GPU to run that shader with a certain state. The hardware ultimately executes the shaders and their given configurations. How the hardware executes them depends on the driver and the capabilities of the hardware. For example, if the shader uses 64-bit floats it will execute much slower on non-professional hardware such as consumer-grade GPUs, because that hardware has very inefficient support for 64-bit float types.
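For example, on the application side the shader-compiling step the driver responds to looks roughly like this (a minimal C++ sketch using standard OpenGL calls; it assumes an OpenGL context is already current and a loader such as GLEW has been initialized, and the GLSL source is deliberately trivial):

// Minimal sketch: call this only once an OpenGL context is current and the
// function pointers are loaded (e.g. via GLEW or glad).
#include <cstdio>
#include <GL/glew.h>

static const char* vertexSrc =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;\n"
    "uniform mat4 mvp;\n"
    "void main() { gl_Position = mvp * vec4(position, 1.0); }\n";

GLuint compileVertexShader() {
    // The driver translates this GLSL text into instructions for the GPU.
    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shader, 1, &vertexSrc, nullptr);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        std::fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return shader;
}

The driver takes that GLSL string and turns it into whatever internal instruction format the particular GPU understands, which is why the same shader source can run on very different hardware.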

The graphics pipeline comes about due to parallelism. While the GPU is running a shader to produce some results, instructions can be queued so that immediately after the shader execution completes another shader or other operation can run, keeping the GPU constantly busy. Even the graphics driver has some form of parallelism due to the use of hardware interrupts. With Vulkan/D3D12 there are multiple instruction queues, so the GPU can run different parts of itself in parallel and even potentially execute other instructions during resource or configuration stalls. Modern pipelines are mostly implemented in hardware and work thanks to the modular nature of GPUs/CPUs, where a single instruction cannot occupy every single part at the same time and the execution of certain parts takes time.

For example, a CPU can run multiple multiplication instructions at once since it has multiple such units available, with each multiplication unit taking multiple cycles to complete a multiplication operation. As long as the instructions do not depend on each other, the CPU can run the multiplications in parallel, starting them at a rate of one per cycle. With enough such units the CPU could in theory complete one multiplication instruction per tick; in reality there are far fewer units, because that ideal is never reachable anyway (other overhead such as memory reads/writes, logic, or dependencies cause instructions to stall awaiting results from previous instructions). Some CPUs go further with technology such as Hyper-Threading, which duplicates parts of the front end (decode and issue state) so that two threads share the same ALU units, allowing more of the ALUs to be used at the same time, such as when a resource stall occurs in one of the threads.
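You can get a feel for this from software with a toy experiment (my own sketch; the exact numbers depend entirely on the CPU and compiler settings): a single chain of dependent multiplications is limited by multiply latency, while several independent chains let the CPU overlap them, so the second loop below often takes roughly as long as the first despite doing four times the work.

#include <chrono>
#include <cstdio>

int main() {
    const int n = 100000000;       // 100 million multiplications per chain
    volatile double sink = 0.0;    // keeps the compiler from deleting the loops

    // Dependent chain: each multiply needs the previous result, so the CPU
    // cannot overlap them and is limited by the multiply unit's latency.
    auto t0 = std::chrono::steady_clock::now();
    double a = 1.0;
    for (int i = 0; i < n; ++i) a *= 1.0000001;
    sink = a;
    auto t1 = std::chrono::steady_clock::now();

    // Four independent chains: the CPU can keep several multiply operations
    // in flight at once, overlapping their latencies.
    double b0 = 1.0, b1 = 1.0, b2 = 1.0, b3 = 1.0;
    for (int i = 0; i < n; ++i) {
        b0 *= 1.0000001; b1 *= 1.0000002; b2 *= 1.0000003; b3 *= 1.0000004;
    }
    sink = b0 + b1 + b2 + b3;
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::printf("dependent chain:    %.1f ms\n", ms(t1 - t0).count());
    std::printf("independent chains: %.1f ms\n", ms(t2 - t1).count());
    return 0;
}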

GPUs work in a similar way, except that their pipelines are less complicated since they are not intended to handle a lot of flow-control logic (they do so very badly). The graphics driver sends instructions (which it compiles from OpenGL calls and shaders) to the GPU using I/O, which the GPU then executes. Instructions get buffered, and the GPU can potentially be running multiple instructions at the same time due to pipelining.

GPUs do have caches, but they are not usually listed since, unlike with CPUs, there is a well-defined cache requirement for efficient GPU execution. Since they are very small they might not be CPU-style caches, and may instead be more like registers, which use different layouts and have faster access times. GPUs also run at lower clock speeds than CPUs and have highly integrated memory, so they have less of a problem with memory I/O speed.

GPUs do not have a "gazillion cores"; rather, they have a lot of ALU units that operate on data in a massively parallel way. For example, a single core of a GPU might run an instruction on hundreds of pixels per tick, unlike a CPU core, which can only operate on a single data unit per tick. The graphics driver instructs each of the cores to run on some set of data, which they do very efficiently, in a fraction of the time the CPU would take. For example, if one wants to transform a large buffer of several thousand 3D vertices with some transformation matrix, the GPU can potentially do so in a few cycles, as it will transform a large subset of that buffer in parallel every tick using dedicated vector/matrix hardware, whereas the CPU will struggle to transform even a single vertex per core per tick using its more general-purpose logic hardware.
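For contrast, here is that vertex transform written as plain serial CPU code (a sketch using my own minimal vector/matrix types): one 4x4 matrix multiplication per vertex, one vertex at a time per core, which is exactly the per-element work a GPU spreads across its many ALUs in parallel.

#include <cstddef>
#include <vector>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };   // row-major for this sketch

// Transform one vertex: a 4x4 matrix times a 4-component vector.
static Vec4 transform(const Mat4& t, const Vec4& v) {
    return {
        t.m[0][0]*v.x + t.m[0][1]*v.y + t.m[0][2]*v.z + t.m[0][3]*v.w,
        t.m[1][0]*v.x + t.m[1][1]*v.y + t.m[1][2]*v.z + t.m[1][3]*v.w,
        t.m[2][0]*v.x + t.m[2][1]*v.y + t.m[2][2]*v.z + t.m[2][3]*v.w,
        t.m[3][0]*v.x + t.m[3][1]*v.y + t.m[3][2]*v.z + t.m[3][3]*v.w,
    };
}

// The CPU walks the buffer one vertex at a time; a GPU runs the same transform
// over many vertices at once across its ALUs.
static void transformBuffer(std::vector<Vec4>& vertices, const Mat4& mvp) {
    for (std::size_t i = 0; i < vertices.size(); ++i)
        vertices[i] = transform(mvp, vertices[i]);
}

int main() {
    Mat4 identity = {{{1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1}}};
    std::vector<Vec4> verts(10000, Vec4{1.0f, 2.0f, 3.0f, 1.0f});
    transformBuffer(verts, identity);
    return 0;
}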
 