
Page Table Question


Recommended Posts

In terms of memory management, is there one page table for each process or is there a single table which manages the memory for each process? If there is only one page table, does it load in when the process is being handled by the CPU (multithreading) or is it there permanently and processes access their individual parts of it?

 

Any more information about the topic would be gratefully accepted. B)


First of all, welcome to Hypography Leo ;)

 

In terms of memory management, is there one page table for each process or is there a single table which manages the memory for each process? If there is only one page table, does it load in when the process is being handled by the CPU (multithreading) or is it there permanently and processes access their individual parts of it?

Memory management is a really interesting subject. As I understand, you are trying to ask how the computer memory is managed and what decides what the processor processes, when and what happens. So if I am correct, here you go:

 

First of all, let's break down memory management into levels, and let's see what we are looking at when we start a particular program.

 

OK, we start the program. The program is stored on the hard drive, but nothing is stupid enough to work with a program there, so the program is copied into RAM. The program itself is nothing more than a set of instructions for your processor, and sections of the program get pulled into cache to get processed; how much is pulled depends on the size of the processor cache. At this point your program is operating in a sort of window: you assign and keep track of all allocations of memory, you write and read certain locations and delete others, and all of this is very much automated, but not by your processor at the time; in fact, by your program. Wait, you think, how is that true? Well, let's skip back to the program's creation...

When you write code, be that Perl, Python, Lisp, C++ or C, you create a set of word instructions, but at this point your code is text; it can't execute, because the whole point of programming languages is the representation of certain commands as CPU calls. You need to compile your program, and contrary to a very popular belief, the compilation process produces object code. That is not yet runnable instructions: a lot of it is unreadable, and many parts point to many other things on the system, such as libraries and premade system commands, display codes and a bunch more. It is in fact linking that turns object code into machine language; a linker adds the necessary instructions to your code that will enable it to run.

As a classic example, take a hello world program. Your code says output "hello world". When you compile the code, the instructions say: get the code for display from the iostream library on the system; make enough space to store H, e, l, l, ... (actually, at this point, the hexadecimal notation of ASCII, so 0x48, 0x65, 0x6C, 0x6C, ...); then take the output function and feed it the location of the 0x48 value, which figures out that 0x48 is actually an H, so it lights up the pixels needed to display an H on the screen. And the linker says: OK, now pull it all together, incorporate all the code, and translate it into binary instructions for the processor. (This is a broad explanation, so don't pay close attention to the details.) Anyways, the program's own memory is managed by the program. Now where does RAM tie in?
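
To make the compile-then-link pipeline concrete, here is a minimal sketch of that hello world (assuming g++ as the compiler; any C++ compiler works the same way):

    // hello.cpp - the classic example from above
    #include <iostream>

    int main() {
        // "Hello" is stored as the bytes 0x48 0x65 0x6C 0x6C 0x6F (ASCII in hex)
        std::cout << "Hello, world!" << std::endl;
        return 0;
    }

Compiling with g++ -c hello.cpp produces hello.o, the object code (not yet runnable); linking with g++ hello.o -o hello resolves the references into the iostream library and produces an executable the processor can actually run.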

Ram

"In DRAM, a transistor and a capacitor are paired to create a memory cell, which represents a bit of data. The capacitor holds the bit of information 1 if it is charged and 0 if not so. Memory cells are etched onto a silicon wafer in an array of columns (bitlines) and rows (wordlines). The intersection of a bitline and wordline constitutes the address of the memory cell. DRAM works by sending a charge through the appropriate column (CAS) to activate the transistor at each bit in the column. When writing, the row lines contain the state the capacitor should take on. When reading, the sense-amplifier determines the level of charge in the capacitor. If it is more than 50 percent, it reads it as a 1; otherwise it reads it as a 0. The counter tracks the refresh sequence based on which rows have been accessed in what order. The length of time necessary to do all this is so short that it is expressed in nanoseconds (billionths of a second). A memory chip rating of 70ns means that it takes 70 nanoseconds to completely read and recharge each cell.

 

Memory cells alone would be worthless without some way to get information in and out of them. So the memory cells have a whole support infrastructure of other specialized circuits. These circuits perform functions such as:

 

  • Identifying each row and column (row address select and column address select)
  • Keeping track of the refresh sequence (counter)
  • Reading and restoring the signal from a cell (sense amplifier)
  • Telling a cell whether it should take a charge or not (write enable)

Other functions of the memory controller include a series of tasks that include identifying the type, speed and amount of memory and checking for errors."
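
To make the row-and-column addressing in that quote concrete, here is a toy sketch; the 1024-columns geometry is purely an assumption, not how any particular chip is laid out:

    // Toy model: split a linear cell address into a wordline (row) and
    // bitline (column), as the row/column address select circuits do.
    #include <cstdio>

    const unsigned kColumns = 1024;  // hypothetical bitlines per row

    int main() {
        unsigned address = 5000;            // linear cell address
        unsigned row = address / kColumns;  // wordline (row address select)
        unsigned col = address % kColumns;  // bitline (column address select)
        std::printf("cell %u -> row %u, column %u\n", address, row, col);
        return 0;
    }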

 

But aside from DRAM, there is also static RAM; static RAM is what the CPU cache uses:

 

"Static RAM uses a completely different technology. In static RAM, a form of flip-flop holds each bit of memory. A flip-flop for a memory cell takes four or six transistors along with some wiring, but never has to be refreshed. This makes static RAM significantly faster than dynamic RAM. However, because it has more parts, a static memory cell takes up a lot more space on a chip than a dynamic memory cell. Therefore, you get less memory per chip, and that makes static RAM a lot more expensive. "

 

For more detailed information, take a look at http://computer.howstuffworks.com/cache0.htm and ram0.htm

 

OK, back to cache. Cache is regulated by the processor (I won't go deep into it, but you can read how conventional cache works in the "cell architecture" thread, first post, "how conventional cache works"...)

 

Anyways, from there, there is one link that I have not touched yet, and that is how the OS runs your program. When you issue the execution command, the operating system picks it up and looks at all the processes running: certain system processes need to run, so they take priority, and some other processes are not crucial, so their priority is lower. The OS will send signals to copy the program to cache, at the same time looking at how much space it takes and where it resides, to make sure that nothing overwrites the locations the program is using. There is a whole bunch more that the OS deals with, as it provides an interface for the program to access the processor, and so forth. Anyhow, this is it for now; if you have more questions, ask. Again, I can be mistaken in a few places, so I'd recommend doing a search on it; there are many resources and books available on everything I've discussed, so take it easy ;)


Thank you for your help. ;) I am currently doing a BSc in Computer Games Technology and my coursework requires me to simulate memory using multithreading. The description you gave is very helpful in explaining how the hardware affects memory management. However, I would also like information on how typical OS's manage memory for programs in multi-processor systems, if you have any information about that.


Here is a pretty descriptive OS memory management page on OS X from Apple (I recommend you read this):

http://developer.apple.com/documentation/Performance/Conceptual/ManagingMemory/Concepts/AboutMemory.html

And here is another page on how Solaris deals with memory; Solaris supports multithreading, and there are all kinds of charts you can use on there (also recommended reading): http://developers.sun.com/solaris/articles/multiproc/multiproc.html

On the programming level, here are some hints on how Linux memory allocation can be used:

http://www-106.ibm.com/developerworks/linux/library/l-osmig2.html?ca=dgr-lnxw09OS2LinP2

A paper on Unix memory allocation:

http://www.usenix.org/publications/login/1998-12/musings.html

A PDF on the software side again:

http://peace.snu.ac.kr/publications/data/96/parallelcomputing2003.pdf

Here is a PDF on the performance of memory allocation on multithreaded systems in Linux:

www.citi.umich.edu/techreports/reports/citi-tr-00-5.pdf

A very detailed paper by IBM on memory allocation (I recommend you read this one especially):

http://www-128.ibm.com/developerworks/linux/library/l-memory/

And more tips and tricks on the programming side of all of it:

http://www.fastcgi.com/archives/fastcgi-developers/2001-October/001682.html

 

Have fun ;)

I really had no time to go through the info, but I glanced over most of it (except the PDFs), and most of it seems like something that would interest you, so from here it is your turn ;)


None of that addressed page tables. And I didn't see the original poster ask what source code was, or how source code gets compiled or linked, or what RAM is.

 

Well, Alex's response was long and thorough...at least you could help out by pointing us in the right direction, TM.


Leo_E_49: In terms of memory management, is there one page table for each process or is there a single table which manages the memory for each process?

 

It depends.

 

In general, one separate page table is stored in memory for each process. In this case, each process has a pointer (PTBR, for page-table base register) that points to the base address of its page table: this pointer is maintained in the process's PCB (process control block) along with other information specific to that process (register values, instruction pointer, CPU flags, etc.). So when the CPU switches to working on a different process, all that needs to be done to use the new process's page table is performed automatically: that process's PTBR is loaded into the CPU as part of the normal course of a context switch and then referenced.
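
As a rough sketch in code (the structure names and fields are invented for illustration; a real PCB holds much more):

    #include <cstdint>

    // One page table per process: each PCB carries a pointer to its
    // process's own table, restored on every context switch.
    struct PageTableEntry {
        uint64_t frame_number;  // physical frame this virtual page maps to
        bool     present;       // is the page resident in RAM right now?
    };

    struct ProcessControlBlock {
        int             pid;
        PageTableEntry* page_table_base;  // the PTBR value for this process
        uint64_t        saved_registers[16];
        uint64_t        instruction_pointer;
    };

    // The CPU's current PTBR, modelled here as a global.
    PageTableEntry* g_ptbr = nullptr;

    // Restoring the PCB automatically restores the PTBR, so every address
    // translation after the switch uses the new process's page table.
    void context_switch_to(ProcessControlBlock& next) {
        g_ptbr = next.page_table_base;
        // ...restore registers, instruction pointer, CPU flags, etc.
    }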

 

However, some platforms use what's called an inverted page table. In this case, there is only one 'master' page table that serves all processes.

 

Leo_E_49: If there is only one page table, does it load in when the process is being handled by the CPU (multithreading) or is it there permanently and processes access their individual parts of it?

 

In the case of an inverted page table, each physical address frame is represented in the page table with an entry. So the inverted page table would be there 'permanently', and each process, when it gets handed the CPU, would be allowed to access only its own entries (a process ID is stored in each entry of the inverted page table to implement memory protection, ensuring that only the appropriate process - identified by its process ID - accesses the physical memory referenced in that entry).
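
Here is a sketch of that lookup, with the process-ID check doing the memory protection (a linear scan for clarity; real implementations hash on the process ID and page number):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Inverted page table: ONE entry per physical frame, shared by all
    // processes, rather than one table per process.
    struct InvertedEntry {
        int      pid;           // which process owns this frame
        uint64_t virtual_page;  // which of that process's pages lives here
        bool     valid;
    };

    // Returns the frame number, or -1 on a miss (a page fault).
    long translate(const std::vector<InvertedEntry>& table,
                   int pid, uint64_t virtual_page) {
        for (std::size_t frame = 0; frame < table.size(); ++frame) {
            const InvertedEntry& e = table[frame];
            if (e.valid && e.pid == pid && e.virtual_page == virtual_page)
                return static_cast<long>(frame);  // the entry's index IS the frame
        }
        return -1;  // not resident: the OS must page it in
    }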

 

 

Now I could go on and explain at great length and very thoroughly how glucose is metabolized in our cells, but that would be only slightly less relevant than what alexander presented.


None of that addressed page tables. And I didn't see the original poster ask what source code was, or how source code gets compiled or linked, or what RAM is.

You know tele, the least you can do is actually read the post before replying like that...

Now, if you'd actually done so, you would probably have noticed the line "as I understand, you are trying to ask how the computer memory is managed and what decides what the processor processes, when and what happens", in which case almost everything I've said is of relevance (note "almost"), unlike the glucose metabolism in our cells.

As to page tables, here is a wiki article that will describe page tables in general; inverted, multilevel, and virtualised page tables; the data and roles of page tables (with a nice diagram); as well as a little bit about virtual memory: http://en.wikipedia.org/wiki/Page_table


alexander: You know tele, the least you can do is actually read the post before replying like that...

 

LOL! Nice attempt at spin alex. I understood what you said, and, as I pointed out, what you said was irrelevant to what was being asked.

 

And as we all can see by comparing your and my responses to the original person's questions, unlike you, I actually read AND UNDERSTOOD what was being asked. And also unlike you, I gave an appropriate response to those questions.

 

Anyone who wants to see what a GOOD, RELEVANT response to the original person's questions about page tables looks like, read my first response to those questions.

 

Anyone who wants to see what a BAD, IRRELEVANT response to the original person's questions about page tables looks like, read alex's first response to those questions.

 

alexander: Now, if you'd actually done so, you would probably have noticed the line "as I understand, you are trying to ask how the computer memory is managed and what decides what the processor processes, when and what happens" ...

 

Which only shows you didn't know what you were supposed to be talking about!! If you understood what the person was asking you'd be able to see how useless your first response is as an answer to the actual questions that were asked.

 

alexander: ... in which case almost everything I've said is of relevance ...

 

Nope. Wrongo. The original person asked about ... PAGE TABLES. Your response was irrelevant to what that person asked.

 

I like the way you are trying to save face here ... it's quite entertaining. Let me paraphrase.

 

"Look TeleMad, we all know that nothing I said addressed what the original person asked, but look, I decided that since I didn't know anything about what s/he asked about - PAGE TABLES - that I'd completely change the topic of discussion to something I did know at least a little about ... simple things like what source code is, what compiling and linking are, and also throw in some quote about some stuff on RAM ... that way, I could pretend that I was answering his/her question."

 

alexander: As to page tables, here is a wiki article that will describe page tables in general; inverted, multilevel, and virtualised page tables; the data and roles of page tables (with a nice diagram); as well as a little bit about virtual memory: http://en.wikipedia.org/wiki/Page_table

 

Uhm, no thank you. Unlike you, I already know what page tables are and how they are used. I don't have to fake answers like you do.


In terms of memory management, is there one page table for each process or is there a single table which manages the memory for each process? If there is only one page table, does it load in when the process is being handled by the CPU (multithreading) or is it there permanently and processes access their individual parts of it?

 

It sounds to me like you are asking about object-oriented programming! :-)

 

Well, in simplest terms, an object is nothing more than the programmatic representation of a real-world ‘thing’. For example, a ‘chess geek’ might wish to write a chess program. To programmatically model the game, he would create King, Queen, Rook, Bishop, Knight, and Pawn objects in code.

 

One of the first steps in preparing to model a real-world object, like a chess piece, is to determine what characteristics of interest are common to all instances of that ‘thing’. For example, all chess pieces have an intrinsic value (a Queen would be worth approximately nine Pawns, and so on) as well as having a color (black or white) and a unique combination of a file and rank (e5, d4, etc.) which indicates what square it alone occupies. The individual pieces of descriptive data a program needs to track for an object are called its attributes or properties. But a list of attributes alone is not enough to fully model a real-world object because they describe only the object’s ‘appearance’, not what it can do. Another step is to determine what actions the object performs, as well as what actions are performed on it. A chess piece can move to an unoccupied square, capture an opponent’s piece, be captured by the opponent, check the opponent’s King, be promoted, and so on. The list of actions an object performs, or has performed on it, describe its behavior or functionality, and are implemented in code using object functions called methods. Loosely speaking, objects are nouns, attributes are adjectives, and methods are verbs.

 

To fully model a real-world entity a programmer needs to track both its attributes and its behaviors. In the pre-object oriented world, the two were kept separate because data was data and functions were functions; there was really no way to bundle the two together into a single construct. But with the advent of OOP (object-oriented programming), all of the data and all of the functions needed to fully model a real-world object could be bound together into a single unit – a process known as encapsulation. Besides organizing everything needed into a nice neat bundle, encapsulation also helps to prevent unwanted changes to an object’s data (you wouldn’t want a Pawn to somehow reach the 9th rank, because there isn’t one), and to hide the inner workings of the object to provide a simple interface that can be used to manipulate it (you don’t need to know the internal workings of a transmission in order to change gears using a gearshift). The self-contained code unit that encapsulates all of the attributes and actions needed to fully model a real-world object is known as a class.

 

Note that classes are not objects – rather, a class serves as a template or blueprint for the creation of objects. To help clarify the difference, let’s switch to discussing phones for a minute. Picture the blueprint: a large sheet of paper, sprawled out on a table, with a schematic showing the resistors, capacitors, and other electronics devices that together constitute a phone. Can you make a call with the blueprint? Of course not. It serves only as a list of instructions for constructing a real phone. What you need is an actual phone, created from the blueprint, to make a call. Relating this back to OOP, the blueprint for a phone is analogous to a class, and the actual phone you can make a call on is analogous to an object. To reiterate the point, a class is just a nonfunctional blueprint – an object is a functional manifestation of a class and has a physical existence in memory. The process of creating an object from a class is called instantiation, and each object created from a class is an instance of that class.
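
To make this concrete, here is a minimal sketch of such a class (the names and rules are illustrative, not from any real chess program):

    #include <string>

    // Encapsulation: the attributes are private, so the outside world can
    // reach them only through the public methods, which can enforce the rules.
    class ChessPiece {
    public:
        ChessPiece(std::string color, char file, int rank)
            : color_(color), file_(file), rank_(rank) {}

        // behavior (a method); move validation could go here
        void MoveTo(char file, int rank) { file_ = file; rank_ = rank; }

        std::string Color() const { return color_; }

    private:
        std::string color_;  // attribute: "white" or "black"
        char file_;          // attribute: 'a' through 'h'
        int  rank_;          // attribute: 1 through 8
    };

The class itself is only the blueprint; writing ChessPiece pawn("white", 'e', 2); instantiates it, and pawn is then an object, an instance of ChessPiece with a physical existence in memory.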

 

Because objects encapsulate all of their own data, each instance of a class has its own copy of its class’s data members. Therefore, data members are also commonly referred to as instance variables. And since each object contains and maintains its own copy of an instance variable, two objects of the same class can have different values for the same attribute (one ChessPiece might have its Color property set to “white” whereas another could have its set to “black”).

 

Three of the primary concepts of OOP are encapsulation, inheritance, and polymorphism, of which only the first has been discussed so far. So what about the second one…what is inheritance? Suppose a programmer has a class – such as a ChessPiece class – that she has worked on for some time: she’s managed to eliminate all bugs, optimize performance, and get everything exactly as she wants. The problem is, it’s too basic – the class describes what is common to all chess pieces, but she needs to model specific types, like Kings and Queens, each of which has features unique to it. The programmer could create a King class from scratch, but that would be like reinventing the wheel. Why can’t she base the new class on her preexisting, perfected ChessPiece class, instantly gaining all of the fields, properties, and methods of that class instead of writing them all over again? Why isn’t that ChessPiece class’s code reusable? Actually, it is, and that’s what inheritance is all about. Inheritance is the process of creating a new class from a preexisting class, wherein the new class inherits the parent class’s attributes and behaviors. In order to distinguish the derived class from the parent (and from any other classes derived from the same parent), the programmer tailors it by adding new data members and/or methods, and/or by overriding the implementation of one or more of the parent’s methods. The preexisting class that is inherited from is called the base class, or sometimes the parent class or super class; the new class that inherits from the base class is called the derived class, or sometimes the sub class. Because a derived class typically adds new members on top of those it inherited, a derived class is said to extend its base class.
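
A sketch of that reuse, building on the hypothetical ChessPiece class above:

    // Inheritance: King gains all of ChessPiece's members for free and
    // extends the base class with data and behavior of its own.
    class King : public ChessPiece {
    public:
        King(std::string color, char file, int rank)
            : ChessPiece(color, file, rank), has_moved_(false) {}

        bool CanCastle() const { return !has_moved_; }  // new method

    private:
        bool has_moved_;  // new data member the base class lacks
    };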

 

Each derived class has a special kind of relationship with its base class, which is called an “is a” relationship. For example, a King is a ChessPiece. Note that the opposite is not true: that is, it is wrong to say that a ChessPiece is a King. This one-way relationship from a derived class to its base class allows a derived-class object to be used anywhere that a base-class object is called for. If a function expects to receive a ChessPiece object as a parameter, the program can pass a King (or a Queen, or a Rook, etc.) instead, because, a King is a ChessPiece.
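
In code, that one-way relationship looks like this (again using the hypothetical classes above):

    // Describe() expects the base class...
    void Describe(const ChessPiece& piece) { /* ... */ }

    int main() {
        King whiteKing("white", 'e', 1);
        Describe(whiteKing);  // ...but a King IS a ChessPiece, so this is legal
        return 0;
    }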

 

Inheritance creates a class hierarchy (typically, a nested hierarchy). For example, if a programmer were modeling the field of biology, she could create a class hierarchy that represents the standard taxonomic levels: kingdom, phylum, class, order, family, genus, and species, where each is a collection of the level that follows it (for example, a genus is a collection of related species). The programmer could have a base class Animal to represent the kingdom Animalia. From that base class, several new classes could be derived, such as Chordate to represent the phylum Chordata. Even though Chordate would be a derived class, it could also serve as a base class for more specialized classes. For example, Mammal, which would represent the taxonomic class Mammalia, could be one of the classes that extends (inherits from) Chordate. And from Mammal several new, more-specialized classes could be derived, one of which could be called Primate, to represent the order Primate; and so on, down to the species level. The previously mentioned one-way relationship exists throughout a class hierarchy. A Primate is a Mammal; a Mammal is a Chordate; and a Chordate is an Animal. So anywhere that an Animal object is expected in code, either a Chordate object, or a Mammal object, or a Primate object could be used in its place. Note that moving up the class hierarchy leads to more-generalized classes (Animal) while moving down leads to more-specialized classes (Primate).

 

Well, that makes it two out of three… so what does polymorphism mean? Simple, it means “existing in many forms”. That’s good to know, but not really useful. A better explanation is that polymorphism means “one interface, multiple methods”. But again, that definition does little good without some background.

 

In some strongly typed languages, there are multiple numeric data types and each is distinct. That is, a short integer that stores the number 29 is fundamentally different from a long integer that stores a value of 29, and an error could result if a programmer attempted to use one where the other is expected. If the strongly typed language lacks polymorphism (such as the popular C language), then to find the absolute value of a number requires one function for each of the many different numeric data types, with each function requiring a different name. Thus, there would be a fabs() function (used to find the absolute value of a floating-point value), and a labs() function (used for long integers), and so on. In such cases, it is up to the programmer to make sure she invokes the correct function based on the type of variable she is working on, or an error will occur. Doesn’t it seem that it should be possible to have just one function name, abs(), since the action being performed is the same in all cases? With polymorphism, it is indeed possible.

 

Function overloading allows the programmer to create multiple functions with the same name, as long as their signatures differ. Since a function signature is derived from a combination of the function’s name and the number, order, and data types of its parameters, the functions abs(long lNumb), abs(float fNumb), abs(double dNumb), and so on, each has a different signature. Because the compiler can differentiate between the various overloaded functions based on their signatures, it knows which one of the many to invoke based on what data type the function call uses. (Note that function overloading does not eliminate the need to write multiple functions. Each of the various overloaded functions implements the action of interest in its own way, and so still needs its own code). Thus, the programmer uses a single “interface” (that is, the same syntax, such as myAbs = abs(someNumber);) to call one of several different, but related, implementations of a particular action: “one interface, multiple methods”…polymorphism.
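
A sketch of such overloads, using the myAbs name from the paragraph above:

    // One name, several signatures; the compiler picks the version
    // that matches the argument's data type.
    long   myAbs(long n)   { return n < 0 ? -n : n; }
    float  myAbs(float f)  { return f < 0.0f ? -f : f; }
    double myAbs(double d) { return d < 0.0 ? -d : d; }

    int main() {
        long   a = myAbs(-29L);   // invokes myAbs(long)
        double b = myAbs(-29.5);  // invokes myAbs(double)
        (void)a; (void)b;         // silence unused-variable warnings
        return 0;
    }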

 

Related to function overloading is operator overloading, in which the programmer defines the action a particular operator (such as the addition operator, +) performs relative to a given class, with the end result being that the operator performs multiple actions overall (but only one per class). As an example, in many strongly typed languages the + operator performs only one action – to add two numeric values together, yielding the sum. What if a programmer wanted to be able to simply ‘add’ two strings together by tacking the second one onto the end of the first one (a process known as string concatenation) using the + operator? With operator overloading he could. He would write a function that defines the steps needed to ‘add’ two strings together and tie it to the + operator for the string class. The way the + operator works relative to other classes would not be changed, so it would perform more than one action overall: “one interface, multiple methods”…polymorphism.
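
A sketch with a toy string class (std::string already overloads + exactly this way; MyString exists purely to show the mechanics):

    #include <cstring>

    class MyString {
    public:
        MyString(const char* s) : buf_() {
            std::strncpy(buf_, s, sizeof(buf_) - 1);  // leave room for '\0'
        }

        // Overloaded +: for MyStrings, 'add' now means 'concatenate'.
        MyString operator+(const MyString& rhs) const {
            MyString result(buf_);
            std::strncat(result.buf_, rhs.buf_,
                         sizeof(result.buf_) - std::strlen(result.buf_) - 1);
            return result;
        }

    private:
        char buf_[128];
    };

    // usage: MyString greeting = MyString("Hello, ") + MyString("world");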

 

Another way that polymorphism is implemented is through inheritance. Remember what was stated previously: since a derived class is a base class, the derived class can be used anywhere that the base class is expected. Therefore, (1) if several classes are derived from the same base class, and (2) each implements a given function inherited from the base class differently, and (3) that function is invoked indirectly, through a pointer or reference to the base class, then the various versions of that function can be called using the same statement: it just depends upon which object type it is called for. Perhaps an example would help clear that up.

 

The classes King, Queen, Rook, Bishop, Knight, and Pawn are all derived from ChessPiece. Furthermore, the algorithm needed to determine what moves a King can make is different from the one used for a Queen, and so on: so each derived class would override (i.e., implement its own code in order to redefine the functionality of) the base class function. Consequently, the list of instructions that constitute the King class’s GetMoves() method would differ from that for the Pawn class, and so on. That much would all be set up in class definitions: the next steps occur in the actual program code. First, a reference or pointer to the base class is created and pointed to a particular object – let’s say a Rook. This is legal only because a Rook is a ChessPiece (that’s the role inheritance plays in this form of polymorphism). Finally, the GetMoves() method is invoked indirectly, through the base class pointer or reference. Since the base class’s method is declared to be overridable, the runtime is smart enough to call the version of GetMoves() associated with the object type pointed to, not the type of pointer being used. So it is the Rook’s GetMoves() method that is called; yet the programmer didn’t have to write the code to specifically call the Rook class’s version. And if she next points the base class pointer to a different piece type – let’s say a Bishop – and uses the same exact invocation, then the Bishop’s GetMoves() method is called. In fact, that very same invocation would work for any type of chess piece. Thus, a single statement, the exact same syntax, can invoke any of several redefinitions of functionality: “one interface, multiple methods”…polymorphism.
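
Here is that mechanism in miniature (simplified from the ChessPiece example, with only two piece types and console output to keep it short):

    #include <cstdio>

    class Piece {
    public:
        virtual void GetMoves() const { std::puts("generic piece moves"); }
        virtual ~Piece() {}
    };

    class Rook : public Piece {
    public:
        void GetMoves() const override { std::puts("along ranks and files"); }
    };

    class Bishop : public Piece {
    public:
        void GetMoves() const override { std::puts("along diagonals"); }
    };

    int main() {
        const Piece* p = new Rook();
        p->GetMoves();       // prints "along ranks and files"
        delete p;

        p = new Bishop();
        p->GetMoves();       // identical syntax, prints "along diagonals"
        delete p;
        return 0;
    }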

 

After that long trek, an answer to the question we all forgot about can be offered. Polymorphism is having multiple ‘functions’ with the same name exist in the same program (either by having different signatures, being inherited and overridden, or being redefined relative to a given class), with the programmer being able to call a particular version at will, even though the exact same syntax is used in each case; “one interface, multiple methods”.

 

A common buzzword in the industry these days is reusable code: everyone wants it, but how does one go about getting it? As mentioned earlier, one method of reusing code is inheritance, which reuses preexisting code to create the bulk of a new class (instead of reinventing the wheel from scratch). Another method of code reuse in OOP is known as composition, in which one class has a “has a” relationship with another class. For example, a Computer has a Monitor, and it has a HardDrive, and it has a Keyboard, and so on. The component classes – Monitor, HardDrive, Keyboard, CPU, Speakers, etc. – would be coded, debugged, and perfected as distinct entities with meaning in their own right. Later, they would all be bundled together to form a new Computer class without writing one new line of code (at least in the ideal world!).
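
In code, composition is nothing more exotic than member objects (the component classes are assumed to already exist, debugged and perfected):

    class Monitor   { /* already written and debugged */ };
    class HardDrive { /* already written and debugged */ };
    class Keyboard  { /* already written and debugged */ };

    // Composition: a Computer HAS-A Monitor, HAS-A HardDrive, HAS-A Keyboard.
    class Computer {
    private:
        Monitor   monitor_;
        HardDrive hard_drive_;
        Keyboard  keyboard_;
    };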

 

As one might expect, there’s more to OOP than what has been covered so far. For example, no mention has been made of a couple of special class methods called constructors and destructors. Remember those attributes and properties that a class contains? Wouldn’t it be nice to be able to set them to 0 or some other default value at the moment an object is created, rather than having to first create an “empty” object, and then having to go back and set the data members’ values? Initializing an object with values when it is created can be done by defining a special type of class method called a constructor, which is automatically invoked by the compiler or runtime once for each object at the moment of instantiation. And as one might expect, constructors, like other class methods, can be overloaded. For example, a class could have one that is a default constructor, that is, a constructor that accepts no parameters and merely sets its member variables to some default values; as well as one or more parameterized constructors, which do accept values from the instantiation call and assign them to the instance variables as the object is being created. In C-style languages, a constructor has the same name as its class; in Visual Basic.NET, it is a sub named new.

 

Is there balance in the Universe? If there is a class method that is implicitly invoked when an object is constructed, is there one that fires automatically just before an object is “deconstructed”? Yes. A destructor is a special class method that the runtime implicitly invokes just before an object is destroyed. A common use of destructors is to run clean-up code (code that closes files that were opened, frees memory that was allocated, closes database connections, or otherwise releases resources). In C-style languages, a destructor has the same name as its class, but with a tilde (~) prepended; in .NET, the general role of a destructor is implemented using the Finalize method.
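
A sketch of both, using a hypothetical LogFile class whose destructor runs exactly the kind of clean-up code described above:

    #include <cstdio>

    class LogFile {
    public:
        // default constructor: no parameters, just sets a safe default
        LogFile() : handle_(nullptr) {}

        // parameterized constructor: acquires a resource at instantiation
        LogFile(const char* path) : handle_(std::fopen(path, "a")) {}

        // destructor (same name, tilde prepended): implicitly invoked just
        // before the object is destroyed, so it releases the resource
        ~LogFile() {
            if (handle_) std::fclose(handle_);
        }

    private:
        std::FILE* handle_;
    };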

 

So I hope that answers your question about page tables! :-)


Uhm, no thank you. Unlike you, I already know what page tables are and how they are used. I don't have to fake answers like you do.

First of all, I've almost always admitted that you know more than me; otherwise you would not have the degree and GPA that you've mentioned somewhere before, and a poor freshman college student cannot compare to your knowledge and experience in the field. Secondly, the above quote is completely irrelevant, as I was aiming the information at Leo, not you, since you've definitely shown your knowledge and understanding of the subject a post before that.

 

It's sad that we don't see you around here all that much, as there is much that we can learn from you; maybe you should bestow your knowledge upon us, tele, not just correct my wrongs. Any literature or websites or anything to point us in the right direction?


Anyone who wants to see what a GOOD, RELEVANT response to the original person's questions about page tables looks like, read my first response to those questions.

 

Anyone who wants to see what a BAD, IRRELEVANT response to the original person's questions about page tables looks like, read alex's first response to those questions.

 

...snip...

 

Uhm, no thank you. Unlike you, I already know what page tables are and how they are used. I don't have to fake answers like you do.

 

What the hell is this? If you want to treat our mods (or anyone else here) like that - go somewhere else. Enough already.


TeleMad: Anyone who wants to see what a GOOD, RELEVANT response to the original person's questions about page tables looks like, read my first response to those questions.

 

Anyone who wants to see what a BAD, IRRELEVANT response to the original person's questions about page tables looks like, read alex's first response to those questions.

 

...snip...

 

Uhm, no thank you. Unlike you, I already know what page tables are and how they are used. I don't have to fake answers like you do.

 

Tormod: What the hell is this?

 

It's called facts. For example, read the first two paragraphs of mine you quoted there and please explain to us how they are not facts. You can't, because they are facts.

 

Tormod: If you want to treat our mods (or anyone else here) like that - go somewhere else. Enough already.

 

I'm sorry, I thought he was a MOD, not a GOD. Instead of letting things go, he wanted to do a little posturing. Well, he needed to be put in his place and I did. You don't like that...tough ****. He's not above being wrong, out of line, or corrected. Neither are you.

 

Now, do you have anything of worth to add to this thread? Do you have any corrections to my factual statements you want to make? Or are you just flexing your muscles to impress everyone here?


In terms of memory management, is there one page table for each process or is there a single table which manages the memory for each process? If there is only one page table, does it load in when the process is being handled by the CPU (multithreading) or is it there permanently and processes access their individual parts of it?

 

Any more information about the topic would be gratefully accepted. :)

Leo,

 

I see no one really started answering your thread until #7. Let me have a go... :)

Memory management for machines is somewhat dependent upon the architecture. What is in common with all machines (microprocessors or larger systems) is that memory management comes in two flavors. Since most systems these days have operating systems, a page table of allocated pages is kept for each [user] task in the system, and one common pool for [system] tasks. How a page table is managed is dependent upon the OS. For example, most Unix flavors (including Linux) load a table to allocate at load time of the task. How the table is defined (size, etc.) is based upon the size of RAM, OS parameters, and defaults. Primarily, this is because a program and its data can grow larger than RAM, and things must be paged in and out. A number of assumptions are made (mostly on what to keep and what to dump). Least Recently Used (LRU) is most common, and basically keeps what has been used most recently. This can cost you (such as a page fault) where this assumption bit you in the keester and you need to get back what you just dumped.

Typically embedded systems don't often need memory management, though that is the start of another topic. Another point is that memory management is NOT cache management. That is also another HUGE topic.
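
As a toy sketch of the LRU bookkeeping (just the idea, nothing like a real kernel's implementation):

    #include <list>

    // Most recently used pages at the front; the victim to dump is
    // whatever has drifted to the back.
    class LruPages {
    public:
        void Touch(int page) {        // a page was just accessed
            pages_.remove(page);      // pull it out if already tracked...
            pages_.push_front(page);  // ...and move it to the front
        }

        int Evict() {                 // assumes at least one page is tracked
            int victim = pages_.back();
            pages_.pop_back();        // the "keester" case: we may need
            return victim;            // this page right back
        }

    private:
        std::list<int> pages_;
    };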

 

So, for the case of multithreading: a process can spawn multiple threads if the OS supports multithreading, like Solaris or MS Win NT/2k/XP. Each thread can make requests to page in memory. The process's page table handles all requests for all threads in that process, whereas a system task can elect to take from the system pool or manage its own private pool of RAM pages. What is important for system tasks is what level of non-maskable interrupt (i.e. Unix) to accept. The important thing is that a thread is like a timeslice of the machine.

 

This may all change for gaming, as I understand Sony is about to introduce the Cell architecture to gaming with the PS3. I am anxious to see the specs on this setup. In essence this is parallel processing, which is very similar to multithreading. The difference is that PP allows threads to be scheduled out and gathered when done. Another method is Data Flow Computing (common for Cells). Here a Cell will work on a stream of data, doing the least with each item and passing it on. This is best when the data volume is high, like radar/signal processing or video capture/editing. A lot of the CPU vendors are considering multi-core (multiple CPUs per die). If you are going for a gaming degree in CS, make sure you learn these latest things to stay ahead of the pulse of technology. Good luck to you!

 

Maddog


First of all, I've almost always admitted that you know more than me; otherwise you would not have the degree and GPA that you've mentioned somewhere before, and a poor freshman college student cannot compare to your knowledge and experience in the field.

 

Well now, I wouldn't say all that.

 

Yes, I know more than you do - ABOUT SOME THINGS. Guess what...you know more than I do ABOUT SOME THINGS. For example, have each of us create a web site from scratch, without any assistance, and you'd come out on top. And remember the subdiscussion in a different thread that revolved around security? I know some basic stuff about security but you know much more. But when it comes to C++ programming and page tables, I have the edge (at least for now!).

 

 

*************************************

PS: As far as why people won't catch me "faking" answers, I either (1) never respond in the first place, often because I don't know enough about the subject to respond, or (2) say one or two things but then drop out of the discussion because it has advanced past my level of knowledge, or (3) know the subject well enough and can answer all of the questions asked of me.

 

Basically, I know my strengths and weaknesses and remain within those limits. I am not going to enter a Mr. Universe contest, nor am I going to enter a male-model contest. And if I ever did enter one of those, others that deserve to be in those contests would rightly single me out and say things about me. That would be my fault, not theirs.

