Science Forums

# How does a 'computer' compute?

## Recommended Posts

Suppose we have an n-bit finite computer, in which repeatedly adding 1 eventually wraps around to 0.

Let's take an example: with 2 bits, 11 (in base 2) = 3 is the biggest number, so 1+1+1+1 = 4 = 0, hence it's like arithmetic modulo 4.

So does it have 5 elements (2^2 + 1), namely (0, 1, 2, 3, 4 = 0)?

Would this mean that the 'field' has characteristic 4, which would not allow unique division?

Does anyone know whether the computer internally forms a field of integers mod p?
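The wraparound can be sketched in Python, simulating 2-bit values with a bit mask. Note that since 4 = 0, there are only 4 distinct values, not 5; and division mod 4 is indeed not unique, so Z/4 is a ring, not a field:

```python
# A sketch of 2-bit machine arithmetic: results are taken mod 4 (mask with 0b11).
MASK = 0b11  # 2 bits

def add2(a, b):
    """Add two 2-bit values, wrapping around like the hardware would."""
    return (a + b) & MASK

x = 0
for _ in range(4):          # 1 + 1 + 1 + 1
    x = add2(x, 1)
print(x)                    # wraps back to 0, so the machine works mod 4

# Division is not unique mod 4: 2*1 = 2 and 2*3 = 6 = 2 (mod 4),
# so "2 divided by 2" could be 1 or 3 -- Z/4 is a ring, not a field.
print((2 * 1) & MASK, (2 * 3) & MASK)
```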

##### Share on other sites

A23,

for someone who understands modular arithmetic, I'm surprised you don't already understand how computers do calculations.

Computers do all their calculations in "words" of memory. In the early 70's, a word was one byte long, or 8 bits. So the biggest integer you could make in one word was 2^8 - 1, or 255. If you used the leftmost bit as a "sign" flag, then in one word you could make integers from -127 to +127.

The solution was to concatenate several words of memory, making it possible to express much larger integers. With 2 words (16 bits), you could have integers from -32,767 to +32,767 (two's-complement machines get one more: -32,768). The more words you assigned to a number, the bigger the range of numbers.
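As a sketch (assuming two's complement, which modern machines use rather than the sign-flag scheme above), the 16-bit limits and wraparound can be seen with Python's struct module:

```python
import struct

def as_int16(n):
    """Reinterpret the low 16 bits of n as a signed 16-bit integer (two's complement)."""
    return struct.unpack('>h', struct.pack('>H', n & 0xFFFF))[0]

print(as_int16(32767))      # largest positive 16-bit value
print(as_int16(32767 + 1))  # overflow wraps around to -32768
```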

Now, for "real" numbers (ones with decimal fractions) they used another programming trick. They used, let's say, 6 words of memory (48 bits) to express a real number. It was formatted this way:

For the real number 123.456, we can express that as 0.123456 * 10^3

The mantissa would be 123456, understood as a real fraction between 0 and 1.

(The decimal point always sits immediately to the left of the first non-zero digit.)

The exponent would be 3.

So, the first (left) bit would be the sign of the mantissa. (you can have negative mantissas).

The next (say) 39 bits would be the mantissa itself. This gives you about 10 or 11 digits.

The next bit would be the sign of the exponent. (you can have negative exponents).

The last seven bits would be the exponent.

So, you could express real numbers with magnitudes from 0.1 * 10^(-127) up to 0.9999999999 * 10^(+127), positive or negative.

If you wanted to express zero, then all bits were set to 0.

If you wanted to express infinity (actually, numeric overflow), then all bits were set to 1.

The "real" number 1 would be:

00000000 00000000 00000000 00000000 00000001 00000001
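For comparison, modern machines use the IEEE 754 layout rather than the word-based scheme described above (biased exponent, implicit leading 1), but the sign/exponent/mantissa split is the same idea. A sketch of pulling apart a 64-bit double in Python:

```python
import struct

# Reinterpret the double 123.456 as a 64-bit unsigned integer to get its bits.
bits, = struct.unpack('>Q', struct.pack('>d', 123.456))
sign     = bits >> 63                 # 1 bit
exponent = (bits >> 52) & 0x7FF       # 11 bits, biased by 1023
mantissa = bits & ((1 << 52) - 1)     # 52 bits, with an implicit leading 1

print(sign, exponent - 1023)          # 0 6  (123.456 = 1.xxx * 2**6)
```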

As technology grew, "words" of memory were increased to 16 bits, 32 bits, 64 bits.

Different companies used different (but similar) schemes for representing integers and reals.

But the essential approach remained the same.

Does that help?

##### Share on other sites

In the early 70's, a word was one byte long, or 8 bits.

Weeelll, I wouldn't live up to my title if I didn't mention this.

Until the x86 architecture became the prevailing architecture that defined standards, a byte could be anywhere from 6 to about 12 bits...

Oddies:

CDC 6000-series scientific mainframes used ten 6-bit bytes per 60-bit word; they also had 12-bit peripheral processors and often referred to 12 bits as a byte as well

The PDP-10 supported variable-width bytes of 1 to 36 bits

The Univac 1100/2200 series used 6-bit bytes

The IBM 1401 also had a 6-bit byte

The Russian Strela computer also had 6-bit bytes

And not to leave out Setun, a ternary-logic computer using a 3-state "bit" (well, a trit, as they were called); 6 trits made up a tryte, which is about 9.5 bits, so 2 trytes would represent about 19 bits :rolleyes: (I wish there were more information on it; I think it was quite a Soviet marvel, as was a lot of the Soviet computing program. Did you know, for example, that there were a few DIY computer kits available for electronics enthusiasts that told them how to build a computer out of more or less common bits of hardware available at electronics stores and whatnot?)
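The "about 9.5 bits" figure checks out: a tryte of 6 trits holds 3^6 = 729 distinct values, so its binary equivalent is log2(729) bits. A quick check:

```python
import math

values_per_tryte = 3 ** 6                 # 6 trits, 3 states each
bits_per_tryte = math.log2(values_per_tryte)
print(values_per_tryte, round(bits_per_tryte, 2))   # 729 9.51
print(round(2 * bits_per_tryte, 1))                 # two trytes: ~19.0 bits
```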

##### Share on other sites

Weeelll, I wouldn't live up to my title if I didn't mention this.

Until the x86 architecture became the prevailing architecture that defined standards, a byte could be anywhere from 6 to about 12 bits...

Alex,

thank you for reminding me of the Bronze Age of my life.

Yes, I programmed a CDC 6600 back in '73. 60-bit words. First computer I ever played "Star Trek" on. The OS had this horrible problem of "thrashing", where it slowed down to a crawl and then hung up. It didn't actually crash--it was just spending 100% of CPU time exchanging RAM pages out to and in from the hard drives. :D If you learned to detect the onset of slowing down, and you had good reflexes, it was possible to log out fast enough to avoid losing your "stuff".

That was the only non-8n-bit computer I think I ever used.

PDP-11s were 16-bit. Actually, they were 17-bit! Yes.

The only way you could tell was with the rotate (assembly) instruction. The bits rotated to the right, with the lowest bit appearing back on the left. But a phantom 17th bit rotated with them. Never understood why.

[ED] Anybody remember the original game of "Star Trek"? :)

If anybody does, tell me and I will start a new thread on that subject.

##### Share on other sites

The OS had this horrible problem of "thrashing", where it slowed down to a crawl and then hung up.

I didn't know Windows was out in '73 already?

##### Share on other sites

Well, there were one or two issues with the CDC 6600 OSes, because the system was pretty different from most mainframes; rather, it was pretty radical at the time. For example, you had a central processor that had no I/O instructions, with 10 parallel functional units able to process 10 instructions simultaneously, plus peripheral processors taking care of I/O, which were themselves formidable systems with 10 processors, tuned to running the OS and doing I/O. Anyway, because of those nuances, there were no OSes that would just run on these mainframes, so they had to write one.

So they quickly threw together COS, an OS based on one from the CDC 3000 series: dirty, but quickly developed for the first unit delivery. The 6600 was, however, promised SIPROS, the Simultaneous Processing OS, which was supposed to have incredible features for the time. Well, SIPROS development was just not going well, and eventually CDC sanctioned further development of COS, which was delivered as SCOPE (what the developers referred to as Sunnyvale's Collection Of Programming Errors) and always suffered severely from reliability issues... Then it got rewritten by one coder in his off-hours and became MACE, which became the basis for KRONOS, a system that used TELEX time sharing and BATCHIO. Then there was a project to unify KRONOS and SCOPE, which produced the system called NOS, which under the hood was KRONOS extended with SCOPE features, though they didn't publicize this much.

But see, unlike SCOPE or SIPROS, there was nothing special about the OSes that Microsoft developed: they support (rather crappily) only a handful of architectures, and are in no way innovative, so...

##### Share on other sites

• 1 year later...

Yes dude, I am working on it; actually I am a computer programmer, so I know it very well.

This is great information about how a computer computes, and it will be helpful to anyone who works on it.

##### Share on other sites

Yes dude, I am working on it; actually I am a computer programmer, so I know it very well.

Actually, the majority of programmers know very little about how computers actually function. I can't recall how many times I've had to explain to "programmers", for example, why a float like 42.1 is actually approximated when stored...
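You can see the approximation directly in Python: constructing a Decimal from a float reveals the exact binary value the machine actually stored.

```python
from decimal import Decimal

# 42.1 has no finite binary expansion, so the nearest representable
# double is stored instead of 42.1 itself.
stored = Decimal(42.1)              # the exact value in memory
print(stored)
print(stored == Decimal('42.1'))    # False: not quite 42.1
```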

##### Share on other sites

Actually, the majority of programmers know very little about how computers actually function. I can't recall how many times I've had to explain to "programmers", for example, why a float like 42.1 is actually approximated when stored...

I agree, but your example is, I think, less a failure to understand how computers function (their electronics) than a failure to be able to do long division.

To some extent, this inability has to do with when one was in primary school. When I attended US public elementary school in the 1960s, our “new math” curriculum required us to be able to do arithmetic in any base system, so those of us who didn’t forget it have a pretty easy time understanding your example (ie: [imath]110100101 \div 1010 = 101010.0\overline{0011}[/imath] ). Though in my schools, we weren’t required to recognize repeating parts until the 7th grade, we knew empirically that the digital representation of many quotients didn’t terminate.
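That long division is easy to mechanize. A sketch computing the base-2 expansion of 421/10 one bit at a time (the repeating 0011 appears right away):

```python
# Long division of 421 by 10 in base 2.
n, d = 421, 10
q, r = divmod(n, d)                 # integer part: 42 = 101010 in binary
frac_bits = []
for _ in range(16):                 # generate a few fractional bits
    r *= 2
    frac_bits.append(str(r // d))
    r %= d
print(bin(q)[2:] + '.' + ''.join(frac_bits))   # 101010.0001100110011001
```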

Alas, the new math fell out of favor in US public school systems about 10 years after its introduction, and even at its height it was not taught properly by many teachers. Among professional computer programmers, it's a rare one I find who can do, on demand, the 5th-grade arithmetic of the 1960s US.

Another quirk of my education is that, in my 1978 to 1982 undergrad days, there was such a dearth of programming classes (easy As for those of us who programmed a lot) that nearly all of us took one or even two classes in COBOL. COBOL actually has some data types (eg: PACKED-DECIMAL) that store numbers as 4-bit decimal digits (packed BCD). Nearly all (or perhaps all – I'm not enough of a mainframe hardware trivia-ist to be sure) IBM mainframes, and some DEC VAX machines, actually implement BCD and packed BCD arithmetic in hardware.
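For illustration, packed BCD is simple to emulate: two decimal digits per byte, one per nibble. A hypothetical encoder in Python (the function name is mine, not COBOL's):

```python
def to_packed_bcd(n: int) -> bytes:
    """Encode a non-negative integer as packed BCD, two decimal digits per byte."""
    digits = str(n)
    if len(digits) % 2:                 # pad to an even number of digits
        digits = '0' + digits
    return bytes((int(digits[i]) << 4) | int(digits[i + 1])
                 for i in range(0, len(digits), 2))

print(to_packed_bcd(1234).hex())        # '1234' -- the nibbles ARE the digits
```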

I was surprised to learn that the latest generations (ca 2007) of IBM's POWER architecture RISC CPUs, POWER6 and POWER7, implement IEEE 754-2008 decimal floating-point arithmetic, which supports up to 34 decimal digits: promising for a couple of my favorite things, interpreted languages and arbitrary-precision decimal calculators. :)
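Python's decimal module can mimic that 34-digit arithmetic: set the context precision to 34 (the significand size of IEEE 754 decimal128), and decimal fractions like 42.1 stay exact instead of picking up binary rounding error:

```python
from decimal import Decimal, getcontext

getcontext().prec = 34              # decimal128 carries 34 significant digits
a = Decimal('42.1')
print(a * 10)                       # exactly 421.0, no binary rounding error
print(Decimal(1) / Decimal(7))      # 34 significant digits of 1/7
```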

##### Share on other sites

My favorite question on the topic of floats, of all time, has been this:

me - so you see 42.1 would be stored by your computer as 0100 0010 0010 1000 0110 0110 0110 0110

not me - oh i see, so how do you know which one represents the decimal point?

me - facepalm

Also, as to your point about my example, point taken; I just don't know if talking about single n-channel MOSFET NOR gates would quite tickle most people's fancy. Though from the sound of it, I am already looking forward to a reply, Craig :)
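The bit pattern quoted in the exchange above can be reproduced by packing 42.1 as a 32-bit IEEE 754 float and printing the bits:

```python
import struct

# Pack 42.1 as a big-endian 32-bit float, then reinterpret the bytes as an integer.
raw, = struct.unpack('>I', struct.pack('>f', 42.1))
print(f'{raw:032b}')    # 01000010001010000110011001100110
print(hex(raw))         # 0x42286666
```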
