Science Forums

UI, Where Do We Go Next?


Recommended Posts

In the mobile marketplace I suspect the direction will be driven by the marketing people.

The UI is heavily linked to brand identity.

 

My guess is that as desktop apps fade away under the cloud, web apps not aimed at the conventional office environment may come under pressure to move away from the keyboard/mouse and screen combo.


Well, specialized desktop apps may still put up a fight, i.e. games and such; server apps are here to stay, and high-end computing is not going anywhere. But I agree that desktop apps as such are not going to live long, while web-based apps are here to stay and prosper. The mobile market has very app-specific UIs with few standards (I know, I have written a couple). But my question is broader: how will we improve the UI further? How will our interaction with the computer be better or different in, say, 10 years? Where is the next jump in interfaces?


I think augmented reality in the home or office could be fun: virtual monitors, or photo frames on walls, mixed with finger tracking etc. for input.

Virtual secretary on the other side of the desk with... :tongue:

 

The problem with glasses will always be frame rate and update lag; we did some work with Nintendo using our SuperFX chip and their VR headset. Any lag makes you sick as a dog when it affects large areas of screen, and is dead irritating for small items.


I second that; I also think AR will be the thing. Compared to VR it is much easier to make cool :-), which is why I guess AR wins over VR.

 

Well, we found AR on glasses was harder than VR when just using motion tracking, and I suspect cameras doing scene recognition and edge tracking will not help much.

The human eye is very good at detecting movement, and if your overlay graphics are not perfectly solid against the background it will drive the user mad.

There are two problems:

1: Update rate: when the user moves their head, your objects stay still between updates for tiny fractions of a second, exposing background edges.

2: Motion prediction: rendering takes time and is normally pipelined, so render time is longer than the update period. So when you start drawing the graphics, you have to draw them at the position predicted for the moment the frame will actually be displayed.
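
To make point 2 concrete, here's a minimal sketch (hypothetical names, Python) of the kind of extrapolation involved: rather than rendering at the last measured head orientation, you render at the orientation the head is predicted to reach when the frame is actually displayed.

def predict_yaw(yaw_now, yaw_prev, dt_sample, render_latency):
    """Linearly extrapolate head yaw (degrees) forward by the render latency."""
    angular_velocity = (yaw_now - yaw_prev) / dt_sample  # deg/s
    return yaw_now + angular_velocity * render_latency

# Sensor samples 5 ms apart, pipelined render latency of 10 ms: the head
# moved 1 degree in the last 5 ms (200 deg/s), so draw 2 degrees ahead.
print(predict_yaw(yaw_now=31.0, yaw_prev=30.0,
                  dt_sample=0.005, render_latency=0.010))  # 33.0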


Yeah, but before AR on glasses you would have more and more AR on "normal" screens, and that is easier, I think.

 

Well, our guesses as to what would be acceptable (because what we could do with an FX chip at the time was not: 20 fps with a 1/15th second lag) were:

Fully opaque glasses showing camera output with overlay graphics: 60 fps minimum update with 1/30th second lag.

Transparent glasses with overlay: 200 fps with 1/100th second lag.

Otherwise the eye just spots the overlay moving against the background all the time, which is very irritating.
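
A rough calculation (head speed is an assumption) shows why the transparent case needs so much tighter a budget: against the real world, which has zero lag, the overlay's registration error is roughly head angular velocity times lag, whereas with camera passthrough the background is delayed by the same amount, so the relative slip is far smaller.

head_speed = 100.0  # deg/s; a brisk but ordinary head turn (assumed figure)
for name, lag in [("opaque, 1/30 s lag", 1 / 30),
                  ("transparent, 1/100 s lag", 1 / 100)]:
    print(f"{name}: {head_speed * lag:.1f} deg of overlay slip")
# opaque, 1/30 s lag: 3.3 deg of overlay slip
# transparent, 1/100 s lag: 1.0 deg of overlay slip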

 

EDIT: Sorry, I misread your answer a bit.


  • 2 weeks later...

Hi Alexander,

 

So now that the web apps start looking and feeling more and more like desktop apps, what do you think is the next step for UI development?

 

I saw a few very good German videos of an FPV (first-person view) radio-controlled model aeroplane that used sensors in VR goggles to actually control the r/c plane.

 

While the throttle was fixed, he could turn the model aeroplane left, right, up or down just by moving his head, all while sitting on a couch inside a house. I kind of like the concept of the monitor reflecting the real outcomes of the inputs given, as opposed to results obtained through calculated perceptions and their inherent calculated perceptual errors.
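
A hypothetical sketch of the control mapping such a rig might use: goggle yaw and pitch angles mapped onto standard r/c servo pulse widths for rudder and elevator, with the throttle left alone. The names, ranges and centre pulse are assumptions, not details from the videos.

def angle_to_pulse_us(angle_deg, max_deg=45.0):
    """Map a head angle in [-max_deg, +max_deg] to a 1000-2000 us servo pulse."""
    a = max(-max_deg, min(max_deg, angle_deg))  # clamp to the servo's range
    return int(1500 + (a / max_deg) * 500)      # 1500 us = stick centred

# Head turned 20 deg right and tilted 10 deg down; throttle stays fixed:
rudder   = angle_to_pulse_us(20.0)    # 1722 us -> turn right
elevator = angle_to_pulse_us(-10.0)   # 1388 us -> nose down
print(rudder, elevator)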

 

It's possibly not legal though, depending on where you fly.


Check out Sixth Sense (not the movie, this one here: http://www.pranavmistry.com/projects/sixthsense/ ), especially while thinking and talking about augmented reality. VR is hard, and it doesn't feel natural: you are replacing the real world with something else, we don't really have a good display technology for that, it takes a fair amount of computing power, and it limits your movement and what you can do. Interaction with the real world, on the other hand, with systems that understand gestures and objects and can project over those objects in real time and create more use out of them, is something I think is a feasible interface for your devices in the future.

And you can almost do this with mobile phones. I mean, hell, take a Kal-El processor, add a camera (maybe even a plenoptic camera) and a small projector, a couple of LiPo batteries, and lots and lots of clever code, and you can have a Sixth Sense device that you can wear around your neck, with enough processing power to run the system (you'd want WiFi, maybe 3G/4G, but you don't need a screen, which saves weight and gives you more space for batteries in such a contraption).

So I'm siding with AR for user interaction with mobile devices in the future... but that still leaves the desktop, the original question...
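
For flavour, here is a minimal sketch (not the actual Sixth Sense code; that project tracked coloured fingertip markers with a webcam) of that style of gesture input, using OpenCV to threshold a frame for a marker colour and take the largest blob's centroid. The HSV bounds are assumptions.

import cv2
import numpy as np

def find_marker(frame_bgr, hsv_lo=(35, 80, 80), hsv_hi=(85, 255, 255)):
    """Return (x, y) of the largest green blob in the frame, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)  # biggest blob = the marker
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

Feed that a stream of camera frames and the centroid track becomes your pointer; gestures are then just shapes drawn by that track over time.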


So now that the web apps start looking and feeling more and more like desktop apps, what do you think is the next step for UI development?

I think Jef Raskin had it pretty much right in his 1993 opinion piece Down With GUIs! (a few minutes' read, and longer to digest, which I consider essential for anyone reading a thread like this), which starts:

Bluntly: Graphical User Interfaces (GUIs) are not human-compatible. As long as we hang on to interfaces as we now know them, computers will remain inherently frustrating, upsetting, and stressful.

and goes on to define OS and application:

An operating system, even the saccharine Mac or Windows desktop, is the program you have to hassle with before you get to hassle with the application.

What he's getting at is the idea that, when an interface, even a user interface, is working well, the user is barely aware it exists. He or she just does things to a physical object - some sort of input device - and the computer does what she or he wants. Raskin's ideal computer had lots of physically distinct input devices - things that looked like pens and brushes and musical instruments, a QWERTY keyboard, etc. - that were dedicated to doing what you expect their non-computer analogs to do - drawing, making sounds, making text, etc. - not navigating to an app to do something. In the same way that, when you're using, say, a good hammer to drive nails, you're not much aware of the hammer, when you're using a good computer to do something, you shouldn't be much aware of the computer.

 

As best I can tell, despite Raskin and lots of people liking and working on the idea (Raskin was very well connected in the computer industry - unfortunately, he died too young in 2005 - google "father of the mac" and you'll get him), Raskin's ideal vision of a computer the user is barely aware is there didn't get very far, and remains pretty much confined to computer labs. Its very antithesis - OS/UIs dedicated to launching apps from big, alphabetically-by-name organized heaps - is the dominant UI on small devices - phones by any other name - and shows signs of overtaking and rendering obsolete the traditional keyboard-with-a-screen terminal and computer device, while at the same time these traditional devices look more and more like giant phones. The present-day browser, essentially (or, in the case of OS/UIs like Chrome, actually), is an app launcher and container. Raskin's thing-you-hassle-with-before-you-can-do-what-you-want-to-do (or wanted to do, if by the time you've got the necessary app started, possibly having had to shop for and install it first, you've forgotten what it was you wanted to do) has become the dominant computer UI paradigm.

 

While I'd not go as far as to say Raskin's computer-that-does-what-it-looks-like-you'd-expect-it-to-do is the next step in UI development, I've a quixotic hope that something like it will do to the current pile-of-apps UI what the GUI did to the CLI, and a hunch that if it does, some sort of VR/AR will enable it - that is, what you're actually looking at when you look at the computer-that-does-what-it-looks-like-you'd-expect-it-to-do won't be a real object, but something seen via stereoscopic eyewear and felt through a haptic glove.


Personally I see the trouble with UIs as being that they're too slow. Touch pads and screens seem nicer than peripherals, but they're still not very nice in my opinion. In fact I think navigation is worse now than it was, as hotkeys seem to have been displaced by touching. Every time I take my fingers off the keyboard to touch the pad it adds to my annoyance.

 

I think computers are too slow, or the code is too bloated and poorly written. I see hardly any difference in speed between Windows 3.1 and OS X 10.6 (OK, that's an exaggeration, but not much of one). It just looks better. Design has displaced function, which, to the general user, is good because they like seeing a pretty desktop, but for any power user it's almost redundant. It's like VR, 3D etc.: they add nothing to productivity in their current form as far as I can see.

 

So I think the next step in UI will be something that increases the speed of workflow substantially. The internet drastically needs it: so much info and no way to access it efficiently. Search engines still look like Usenet newsgroups did 15 years ago.

 

In my opinion the next step will be holographic, although of course something may come out tomorrow that trumps it. I want to tell some hot holographic woman what I want and how I want it, and I want it now!


I would hope there would be no difference in speed; it is the magnitude of velocity, and I would hope that wouldn't differ between operating systems. And I have this one major question, and it comes to me every time I see a commercial for some bloatware that is supposed to "speed up" your PC: what is "fast" (or rather "speed"), and how is it measured for these "tests"? I can speed up my computer by taking it in the car with me, relativistically, relative to the planet's surface, but that apparently doesn't make my computer fast. So what is a "fast computer"?

 

People often have these weird notions; in fact I heard one just two weeks ago that I had to disagree with: "well, my Windows 3.11 booted faster than Windows 7, and it was running on a [blah][blah][blah]". The question then is: is that the benchmark by which "speed" is measured? If so, it is a faulty one at best, and is influenced by a lot of factors; we sometimes need to add extra stuff to make operating systems, programs, whatever, work with new technology.

Take Win 3.11 vs 7: there is so much in terms of software, APIs, and even hardware that is different that you can't put one against the other and say that one is better. For software, we have libraries that let you create things you would otherwise have to write from scratch, that differ across hardware and contain hundreds of thousands of lines of code - take DX or OpenGL. For APIs, we have messaging, we have the kernel, we have all sorts of systems that let us easily interact with whatever parts we need, without having to hook into them and write masses and masses of code to do otherwise very simple tasks. For hardware, take just modern multi-core processors: you need code to run them, code to schedule them, code to abstract the management complexity away from the applications, code to reserve them, and some protection to make sure that the code you are executing is the code you wrote, not something someone injected. There is a lot that new systems do - so much that the complexity of Windows 3.1, compared to Windows 7, is like a console-based web browser compared to Chrome or Firefox...

 

And with these extra pieces, I can assure you, on modern hardware Windows 7 will kill 3.11, simply because it can run whatever you throw at it concurrently. Throw 16 processes at each Windows on an i7 and 7 will finish the tasks 8 times faster than 3.11, because they will take 2 processor cycle sets on W7 and 16 on W3.11. And if you had to develop something complex, you can focus on your piece of the puzzle instead of reinventing the wheel everyone else has to reinvent; so if both you and I were given a high-demand game to write, and we typed at exactly the same speed, I would be done with my game and playing it long before you even had your graphics engine researched...
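
The arithmetic behind that 8x figure, as a toy sketch (the core count is an assumption: an i7 with 8 hardware threads versus a single-core machine under W3.11):

import math

def cycle_sets(n_tasks, n_cores):
    """Rounds of scheduling needed to run n_tasks equal-length tasks."""
    return math.ceil(n_tasks / n_cores)

w7   = cycle_sets(16, 8)     # 2 sets on an 8-thread i7
w311 = cycle_sets(16, 1)     # 16 sets, one task at a time
print(w7, w311, w311 // w7)  # 2 16 8 -- the 8x speedup claimed above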


Aye, all true enough, although my post was more of a polemic about waiting around for computers to do things than a display of ignorance of the obvious complexity of present-day hardware and software.

 

I still maintain, though, that 'doing things' on computers seems to take a long time. Computers transcend time and it's annoying. 'Click and forget' would be oh so much better, which is why I like the idea of telling some pocket hologram what I want and it doing it for me.


I was just thinking about this again, and actually I reckon the next UI is already here; it just isn't being produced for mass consumption yet. It's that computer screen where everything is done with your hands: full Mac tablets, Minority Report style. A guy gave a lecture on it years ago at TED - I forget his name; he works for Microsoft now, I think. He did the whiteboard Wii thing as well.

 

But yeah, after that I want holograms! And then I want to be a cyborg! Ha.

