Science Forums

IC architecture



If Intel doesn't survive, what's to keep the others in line?

You are correct, Buffy. There need to be two big players to bring the pricing of their products down and to drive better quality, so without Intel, AMD processors would be much more expensive and not as good as they are.

As for the semiconductor bit: transistors are built on the semiconducting principle. It defines their capabilities, hence the two semiconductors used (silicon and germanium).

They very well are, and they can be and are used as semiconductors (in a way), although in most cases on the boards they are not. An example would be in a CRT monitor power supply: there is a diode-and-transistor loopback bridge (not in all monitors, but most have it) feeding a microprocessor that determines how much current the monitor needs, and it opens or closes one of the transistors controlling the exit point on one of the coils (transformers) before power distribution.

As for my experience? None. Java level and some C++. I know next to nothing practical on this level. That's why I wish to learn it.
You see, this is where the problems start arising. Creating a processor is one thing, but it is like a computer with no BIOS: if you plug it in, yeah, some channels will get opened, but it won't do anything. You will need to define an instruction set for your processor to be usable; the human-readable form of that instruction set is called assembly, and you will probably have to write the actual machine code in hex or binary (whichever you prefer, but hex needs translation to binary)...

 

and yes, I think I can do better than a thousand people. Even a million. I have a tendency to see things others don't.
Don't doubt you a bit; it is possible and may even happen. History shows that single people have created revolutions in the past, so why not now? (Newton, Socrates, Tesla, Einstein...) You can do some research on current architectures (x86, SPARC and others). You will need to get beyond the basics of electronics and understand the workings of very sophisticated boards, from power supplies to NICs and motherboards. You also need to learn assembly; as TeleMad said, it will get you in touch with the inner workings of the processor. And last but not least, you need to get the actual prints for processors, to find out exactly what goes on inside. But if you want to go in that direction, be ready for a long trip: nothing that will take months, but something that may take your entire life, so be ready to make that kind of commitment :hyper:

...

I want to know how to design a Processor from scratch. Ultimately I want to make a new standard for Personal Computing. Update the current technology while thinning out the old stuff. Redesign the current standard to optimize performance and stuff.

 

I have an intense interest in basics. I wish to make the off-the-shelf components myself, but I can't comprehend how a bunch of transistors can work together to produce a variable image and I/O.

I am curious about your thoughts. There are a lot of important questions to think about when building a microprocessor. The first one I can think of: which do you prefer, "big endian" or "little endian"? Basically, do you want to read your bytes from the left or the right? Do you wish to utilize "pipelining"? If so, how deep do you wish to make it? Pipelining is used in processors to queue up the multiple stages of the instructions being executed. Most modern microprocessors use multi-stage execution to fetch, decode and execute an instruction. The depth of a pipeline is a clue to performance. However, there are (as always) two camps of thought (longer or shorter pipelines). AMD uses shorter pipelines, while Intel uses longer pipelines with lots of stages.

Athlon (AMD) and Pentium (Intel) use little endian, while MIPS and PowerPC use big endian (note: PowerPC has a mode to switch endianness). The PowerPC 970 (IBM) is what is used in the Apple G5.

Another concept is superscalar execution (multiple execution units): how many do you want to use?

 

A lot goes into making a chip (design) before ever heating up any silicon... :)

 

From a later post...

...

My intention is twofold: robotics, which is my real dream above all else, and a new computer system. The current models are weighed down by that which made them good: old technology which worked then but only slows current progress.

 

Oh, and I do paper work. I have no illusion that I could even afford to build a small processor lab, but I could learn theory and run simulations.

 

As for electronics? I have a pretty good grasp, but I need some hands-on. I need to know the theory to make it practical, hence my asking this here.

As already mentioned, I would especially learn C++ (the best high-level language I know that can do systems programming). You can learn more Java if you're more interested in Internet development. For architecture, I have to agree with TeleMad: it would do you best to get your hands dirty and do some assembly programming. This is not for the faint of heart; you will have many head-banging sessions learning the ins and outs. In the process, you will learn the architecture of the chip (computer) you are working on. So only do this if you have the boldness and the passion to take it on. There are a few jobs with this skill (getting fewer).

 

Then, to build a processor, you need to learn an HDL such as VHDL or Verilog, plus ASIC design, FPGAs, and whatever design tools you choose. The tools are not cheap (say $30k+). So your best bet is to go to an engineering college, major in EE, and get to grad school (MS or higher); you might get to design some simple circuits. I'm thinking of signing up this fall for EE560 at USC. Debating if I wish to spend the time (my girlfriend will be upset). :D Oh, and I forgot to mention the sleepless nights. There will be a lot of those. ;)

 

Anyway, Good Luck! All the best. :)

 

Maddog


Do you wish to utilize "pipelining" ? ... Another concept is SuperScalar (multiple execution units) How many do you want to use.

 

Right, if KAC is going to out-design the current leaders, he'll have to do some very advanced design. Besides pipelining and multiple execution units, he'll also have to have on-die cache, a cache controller, speculative execution, out-of-order execution, good algorithms for flushing and refilling the pipelines when a branch prediction misses, an advanced FPU, streaming SIMD instructions, multi-threading functionality, a CPU scheduler, algorithms to oversee the stack during context switches, a memory management unit offering a protected mode, and so on.


As long as we're discussing where design should head, there are two steering vectors I think should be included in the design path toward denser processing capability. The first is the ability to parallelize at a much smaller scale and integration level than current methods offer. Ultimately, the ability to truly parallel-task at the processor level is desired. This would let parallel processing channels communicate progress with each other far better than can be achieved using discrete components. Look what the integration of processors and floating-point coprocessors achieved. Imagine a 64-node cluster in your current PC frame :)

 

The second is the development of optical processing. I believe it offers the greatest gain in processing speed with the minimum gain in processing heat. We'll look back at current processor technology the way we look at Z80s today: as archaic, antiquated technology. Imagine a 64-node cluster running at light speed in your current PC frame :)


Bio (nice name, by the way), I do not understand what exactly you are trying to ask. Can you please explain a bit more? It may make sense to you, and even to me, but I'm not sure...

 

I am knot shure I hunderstendt yure cents of hewmourr.

 

But is it true that cannibals won't eat clowns because they taste funny?


Ooh, I see what you mean, Bio; sorry for asking. I was just not sure whether or not you were talking about the topic at hand, thus my confusion...

 

Anyways, let me just tell you a few specs of the Cell processor so you can get an idea of the real coolness of it. Check it out:

 

OK, first of all, there are no definite specs on Cell processors, because the only real source of information is the patent that was filed in '02, and that needs to be deciphered, cuz it sounds like "it was written by a robotic lawyer running Gentoo in text mode". So all of the following is approximate and may or may not hold true in the end...

 

Firstly, the amount of money being spent on the project is tremendous: IBM is building two 65nm chip fabrication facilities for billions of dollars each, Sony paid IBM hundreds of millions to set up a production line in Fishkill, and then there are a few hundred million being spent on the development itself, so it's not a cheap idea...

 

The Cell architecture is designed for high-performance distributed computing. There are hardware and software cells: software cells are made up of apulets (programs), and hardware cells are the actual computation grounds for the software. The Cell architecture is not fixed, which allows you to add/remove cells at your convenience. So, having an HDTV, a PlayStation 3, a computer and a PDA at your house, you can easily link them together to do whatever you want: you can sit there watching TV while your PS3 is churning through a SETI@home work unit every 5-6 minutes or so (and yes, your eyes must be popping out from just this one spec). You can distribute the software cells over a network, and it is not media dependent, so be it the World Wide Web or a WiFi connection at your house, everything can participate.

 

Hardware cells are made up of

1 PU (processing unit)

8 APUs (Attached Processing Units)

DMAC (Direct Memory Access Controller)

I/O interface

 

The end result is a 4.6 GHz, 1.3 V processor with 6.4 gigabit/sec off-chip communication, running at approximately 85 degrees Celsius.

 

PU

It's been said that the PU part of the processor will be a POWER-architecture processor. Currently there are three: POWER4, POWER5 and PowerPC 970 (aka G5, the derivative of POWER4). IBM's press release showed a multi-threaded, multi-core processor, so it will probably be a derivative of a POWER5 proc, probably something like what will appear in the G6.

APUs

APUs are self-contained vector processors, completely independent of each other. Each has 128 registers of 128 bits. There are 4 floating-point units capable of 32 gigaflops and 4 integer units capable of 32 GOPS. There will also be a small 128 KB local memory instead of cache (because cache stinks), and no virtual memory system used at runtime. APUs are vector (or SIMD) processors that do multiple operations simultaneously with a single instruction. Each APU so far appears to be capable of 4x32-bit operations per cycle (8 if you count multiply-adds), so in order to work, programs will have to be vectorised. The lack of cache indicates that APUs operate differently from any other conventional processor.

Conventional processors perform operations on registers that are directly read from or written to main memory. Operating on main memory is very, very slow, so caches are used to hide the cost of going to or from it. Caches work by storing the part of memory the processor is working on; if you are working on a meg of data, for example, it is likely that only a very small portion of it will be present in the cache. If the data being worked on is not in the cache, the processor stalls and waits for it to be fetched. A high-end server CPU can spend upwards of 80% of its time waiting for memory, which is extremely inefficient.

To avoid the complexity of cache, the APUs take the radical approach of not including any, and instead use a series of local memories, one in each APU. APUs operate on registers that are read from or written to the local store in blocks of 1024 bits, and APUs can't act directly on main memory either; there is no coherency mechanism to access another APU's memory. This may sound inflexible, but it delivers data to the APUs at a huge rate: if 2 registers can be moved per cycle to or from the local memory, the first incarnation will deliver 147 gigabytes per second per APU, making the aggregate bandwidth over a terabyte/s (insane, huh?)

Just for comparison: to match a Cell processor running at its theoretical maximum, you would need about 5 dual-core Opterons, sufficiently overclocked (to about 3 GHz), running optimised SSE assembly and directly connected via HyperTransport, to achieve that kind of processing power...

 

source: (http://www.blachford.info/computer/Cells/Cell0.html)


what do you think of Cell processors?

 

They are a step on the path to full parallel-tasking ability. It is a term you have to be careful with, though: cell processing is also another term for distributed computing, like that done in the SETI project. Cell processors, OTOH, are starting to integrate multiple FPUs on a single chip with multiple processing elements. I think it is a very promising architecture.


I know this is pretentious of me, but is it possible to repair the spelling of "architecture"? It drives me a little nuts.

 

PS- I apologize if I antagonized anyone by my compulsions.

 

I know how you feel. I cringe every time I see this thread surface too. I know it's a little retentive, but I wish one of the admins would fix it too. "Arcutecture" makes me think of giant squid, even though that is spelled Architeuthis.


Just ran across this tonight.

 

"Designed by IBM, Sony, and Toshiba, the new Cell chip contains a main PowerPC-based core microprocessor and up to eight additional processors.

 

The new CPU ... has a pushpin-sized 90-nanometer design. It uses 234 million transistors and can run at speeds of over 4 GHz.

 

...

 

... the eight 128-bit processors, called synergistic processing elements (SPEs), make the Cell ideal for managing high-bandwidth video. ..." (Many Processors in One, John R Quain, PC Magazine, April 12, 2005, p20)


  • 1 year later...

Wow, I'm sorry I left for a while. You guys and gals are on top of this. I like the Cell architecture; it seems to me like a good idea in general. It's an analog solution for a quantum problem.

 

One of the benefits of the Q-chips is that they could create a librarian for every book in the library, therefore finding the book you're looking for in a fraction of the time it would otherwise take. A cell processor could sort of do this.

 

I have since been looking at some other things about the architecture of computers. Like buses, for instance: how does double data rate work? Why is it that Rambus doesn't beat DDR? It's faster, right? Everyone here has looked at the processor of the PS3, right? It's pretty, and I want it in my next computer.

