
Mass storage and Efficiency


Guest loarevalo


It's the only perspective.

 

Beta 1 is virtually useless: a shrimp hors d'oeuvre, a very pretty and very tasty one, but one that despite its ludicrous size leaves you salivating for much more.

 

Vista is insanely power-hungry.

 

On a P4 3.0E Prescott CPU, 3 GB of RAM, and a 256 MB ATI X800 All-In-Wonder with Audigy 2 ZS audio, the OS runs like a dream... but it can't seem to 'do' anything with all that power.

 

ATI says they will not create AIW multimedia drivers for this OS; maybe for Beta 2, since they will need to prove their superiority over NVIDIA. The Creative audio drivers require powerful voodoo and a three-hour ritual just to get the OS to support the card... and even then I can't get it running well.

 

For now, Beta 1 is a 6-gigabyte demo that won't run much of anything multimedia. For day-to-day net-centric apps, fine, your white elephant works well, but it's no good for gaming or multimedia.

 

 

Beta 2 will bring NVIDIA and possibly proper audio drivers into the game, making it a proper XP replacement for pioneers with powerful machines.

 

And why rag on Wintel for requiring so much power? Apple requires a fair bit of power to run its OS as well, and it's not like Linux is anywhere near as user-friendly or compatible.


I thought about this when noticing that my digital pictures aren't really 5 MB resolution but about 500 KB in the worst cases, and 3 MB in the best - though they all took 5 MB of hard drive.
There's a confusion of terms here.

A typical digital camera has 5 mega-pixels (not mega-bytes) on its light-sensing CCD chip. Each pixel is capable of measuring a single color. 3 pixels together produce a "24-bit (3-byte) RGB color", 8 bits (1 byte) each for Red, Green, and Blue. For reasons having to do with the human eye's preference for green, 4 pixels are usually used to get the 24 bits of RGB data – Red, Green, Blue, and an extra Green. Some cameras use tricks to pretend they're only using 3.

 

So, image only, with no formatting or compression, a picture requires about 3.75 – 5 MB.
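To make that arithmetic concrete, here is a small Python sketch of the estimate above. The 5-megapixel figure and the 4-sensor-pixels-per-RGB-sample assumption come from this post; everything else is plain arithmetic:

```python
# Back-of-the-envelope size of an uncompressed photo from a 5-megapixel sensor.

SENSOR_PIXELS = 5_000_000        # 5 megapixels on the CCD
BYTES_PER_SENSOR_PIXEL = 1       # each sensor pixel measures one 8-bit color sample

# Raw sensor data: one byte per pixel.
raw_sensor_bytes = SENSOR_PIXELS * BYTES_PER_SENSOR_PIXEL

# If 4 sensor pixels (R, G, B + extra G) are combined into one 24-bit RGB pixel:
rgb_pixels = SENSOR_PIXELS // 4
rgb_bytes = rgb_pixels * 3

print(f"raw sensor data : {raw_sensor_bytes / 1e6:.2f} MB")  # ~5.00 MB
print(f"combined RGB    : {rgb_bytes / 1e6:.2f} MB")         # ~3.75 MB
```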

 

A typical digital camera stores the data using the JPEG (.JPG, etc.) file format. JPEG uses complicated color transformation and data compression algorithms to reduce the size of the resulting file. It usually does a really good job, so the initial 3.75 MB of data will typically compress down to the 0.5 – 3 MB you're experiencing.

 

As Turtle notes, the JPEG file spec leaves a place to stick all sorts of useful text – titles, dates, exposure info – but these few hundred bytes of text don't add much to the size of the file.

 

How much compression you'll actually get is complicated to calculate, but, roughly, the more complicated the color pattern you're photographing, the less compression you'll get. To experiment, try photographing (with flash off) a blank sheet of paper, then something with very high-contrast edges, such as a printed poster, and compare the resulting files' sizes.
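If you'd rather not burn camera shots, the same effect can be shown with a few lines of Python. This is only an illustrative sketch (it assumes the Pillow imaging library is installed), not anything from the camera itself:

```python
# A "blank sheet vs. busy scene" experiment done in software.
# Requires the Pillow library (pip install Pillow).
import io
import random
from PIL import Image

SIZE = (800, 600)

# A "blank sheet of paper": every pixel the same color -> compresses extremely well.
blank = Image.new("RGB", SIZE, (240, 240, 240))

# A "complicated color pattern": random noise -> almost incompressible.
noisy = Image.new("RGB", SIZE)
noisy.putdata([(random.randrange(256), random.randrange(256), random.randrange(256))
               for _ in range(SIZE[0] * SIZE[1])])

def jpeg_size(img, quality=85):
    """Return the size in bytes of the image encoded as a JPEG."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

print(f"blank sheet : {jpeg_size(blank):,} bytes")
print(f"noisy image : {jpeg_size(noisy):,} bytes")
```

The blank image typically comes out tens of times smaller than the noisy one at the same quality setting, which is the whole point of the experiment.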

 

:lol: You can make a game out of trying to guess the size of a particular photo. :)


Possibly, as we store several types of information for the same thought/memory.

 

If you were to digitally convert that information, even heavily compressed (requiring tons of processing to access those memories on the fly), you'd certainly run into terabytes.

 

Consider that our memory is made of reconstructions of events, and that our memories are mostly visual, tied to olfactory stimulus. Our vision is estimated at 200+ fps in the tiny macular area, and of reduced quality and frequency further out, where the pickup is only a few times per second. And how would you even store a smell? If it's just chemical ingredients, then a smell wouldn't take up much more than a text file's worth of data; you'd tie that "smell" into generic memories, like a smell file labelled rose automatically linking to a picture file rose.jpg, which loads progressively as your macular dot moves around the picture, going from hazy blur to 1200 dpi where you are focusing - meaning the file itself needs to be 1200 dpi but only streams out what you need to see, for bandwidth efficiency.

Furthermore, if your macula falls onto a thorn you might link to another sense memory: the tactile pain response of gripping a rose for the first time and discovering that thorns hurt. A tactile response would require quite a few bits of data - force of grip, textures, heat, and possibly light and chemical information if you want to get fancy. Sound bits could be fairly rich but don't need to be, since unless you purposely recorded something you wanted to remember, you're not likely to be able to recall it in hi-def, though you could supersample the low-res sound to increase the gain.
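Purely as an illustration of that cross-linking idea - none of this comes from any real model of memory, and every field name and the rose example are invented for the sketch - here is what such a record might look like as a data structure:

```python
# Hypothetical sketch of the cross-linked "memory record" idea described above.
# All names and values are illustrative assumptions, not a real memory model.
from dataclasses import dataclass, field

@dataclass
class SmellMemory:
    label: str                        # e.g. "rose"
    chemical_ingredients: list[str]   # a smell as a short list of compounds: text-file sized
    linked_images: list[str] = field(default_factory=list)   # e.g. ["rose.jpg"]
    linked_tactile: list[str] = field(default_factory=list)  # e.g. ["thorn_prick"]

@dataclass
class TactileMemory:
    label: str
    grip_force: float    # arbitrary units
    texture: str
    temperature: float   # degrees C

rose_smell = SmellMemory(
    label="rose",
    chemical_ingredients=["geraniol", "citronellol"],
    linked_images=["rose.jpg"],
    linked_tactile=["thorn_prick"],
)
thorn = TactileMemory(label="thorn_prick", grip_force=4.5, texture="sharp", temperature=22.0)

# Recalling the smell drags the linked visual and tactile records along with it.
print(rose_smell.linked_images, rose_smell.linked_tactile)
```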

 

It makes you wonder, then, whether the brains of babies and children aren't built to suck data in at supersampled rates so that the first memories are as visceral as possible.

 

How to test this? Watch a commercial for the first time... it's very likely that several minutes later, when you see it again, you'll notice something: it seems far shorter this time. Even though you're taking in the raw information and comparing it to the memories you just put down, the experience seems fleeting the second time around.

 

Which, as you gain more experiences, overlaps into new experiences.

 

Video games, for instance: the more of them you play, the less visceral the experiences become.

 

Thus, once you reach adulthood, you're only tweaking the information you've already laid down; very little is new. So at adulthood you should be hitting your data storage limit, and as our species grows, so will that limit. Certain individuals will have more and some less, but for the most part people have roughly the same amount - taxi drivers and Einstein being exceptions, with visibly larger brains to hold much more 3D spatial-type information: visual and conceptual data.

 

What could happen, for androids and humans both, is offloading memory altogether. Sammy seems to think they can implant memory chips into people's brains; once they can do that, it's only a few steps toward streaming that information to a datacenter and linking the brain to it via Wi-Fi [WiMAX] or something similar. The question of Wi-Fi telepathy being another thread, I'll leave it alone for now.

 

Would people be more productive keeping only links to data locally, while true memory is locked away, cross-linked to a databank of human memory?

 

Questions of that making people vulnerable to 1984-style rewriting of history are probably fodder for another thread...

 

But 17 terabytes - hmm. If that were just video at internet streaming quality, say 720x480 at 3 megabits, storing only what you can actually see, then even at that modest resolution a person with exceptional recall would only be able to hold about two months of visual information. People don't have such perfect recall skill, and yet they do remember several years' worth of information. So assume our compression allows something like 720x480 to be imploded down to something like 100x100, and that recall can then rebuild it to 1080p HD (totally not for accuracy). The less you remember something, the higher the compression ratio gets, and the higher the possibility of data corruption or total loss - but once you remember something, you can rebuild it. That is the strongest point of the human brain: if you have a similar memory, you can repair old ones or build entirely new ones. It's equally a horrible flaw, since someone else can get you to create false memories you can easily believe are real.

With about 17 TB at 100x100, each memory would represent about 0.1 megabits compared to the 3-megabit original, so you could hold roughly five years of data ripe for reconstruction; depending on how much time you spent, you could recreate the memory several ways and possibly lose the original in the process. But again, the numbers are totally BS and only consider visual data, while smell and tactile memory are very important, if not as distinct or prevalent.
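As a quick arithmetic check on those admittedly rough numbers, using only the 17 TB, 3-"megabit", and 0.1-"megabit" figures above plus a simple 24/7 recording assumption: the "two months" and "five years" figures line up if the rates are read as megabytes per second, so both readings are shown in this sketch:

```python
# Back-of-the-envelope check of the 17 TB figure above.
# The post's "2 months" and "5 years" work out if the stream rates are read as
# megaBYTES per second rather than megabits; both readings are computed here.

STORAGE_BYTES = 17e12  # 17 terabytes

def months_of_footage(bytes_per_second):
    """Continuous (24/7) recording time, in months, that 17 TB can hold."""
    seconds = STORAGE_BYTES / bytes_per_second
    return seconds / (30 * 24 * 3600)

# Read as megabits per second (3 Mbit/s full, 0.1 Mbit/s compressed):
print(f"3 Mbit/s  : {months_of_footage(3e6 / 8):.1f} months")            # ~17.5 months
print(f"0.1 Mbit/s: {months_of_footage(0.1e6 / 8) / 12:.1f} years")      # ~43.7 years

# Read as megabytes per second (the reading that matches the post's figures):
print(f"3 MB/s    : {months_of_footage(3e6):.1f} months")                # ~2.2 months
print(f"0.1 MB/s  : {months_of_footage(0.1e6) / 12:.1f} years")          # ~5.5 years
```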

 

Possibly the basis of our creativity is that ability to generate information based on years of experience, as much as the roiling chemical imbalances in our brains.


Or whether neurons can be emulated without resorting to just software or qubits.

 

It would be like emulating a network in software. In that case, one neuron can't be equated to one logic gate; what matters is how many cycles it takes to calculate all the possible (and then relevant) connections that neuron is capable of, in order to complete a simple task like pun recognition.
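As a rough illustration of why the "cycles per neuron" view dwarfs the "one neuron = one gate" view, here is a toy estimate. The synapse count, firing rate, and operations-per-synapse figures are ballpark assumptions for the sketch, not measurements:

```python
# Toy estimate: operations per second to emulate one neuron's connections in software,
# versus a single logic-gate evaluation. All constants are rough, illustrative assumptions.

SYNAPSES_PER_NEURON = 7_000    # commonly cited ballpark for a cortical neuron
FIRING_RATE_HZ = 10            # average spikes per second (assumption)
OPS_PER_SYNAPSE_EVENT = 10     # weight lookup, multiply, accumulate, plasticity update (assumption)

ops_per_second = SYNAPSES_PER_NEURON * FIRING_RATE_HZ * OPS_PER_SYNAPSE_EVENT
print(f"~{ops_per_second:,} operations/second to emulate one neuron")  # ~700,000

# A logic gate, by contrast, is a single boolean operation per evaluation,
# which is why "1 neuron = 1 gate" wildly understates the emulation cost.
```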

