Imagine a friend out shopping, telling you to feel the softness of a cashmere sweater she is considering buying, and you having to respond that you can’t actually feel it for lack of bandwidth and processing power. Sounds strange? Only because you are not used to haptic virtual reality, yet!
Haptic Virtual Reality lets you feel virtual objects
Researchers at the Swiss Federal Institute of Technology (ETH) in Zurich have demonstrated a system for creating a virtual model of an object through 3D laser scanning (a technology already available commercially and used to create 3D images of statues in museums around the world) and supplementing it with information on texture and touch, so that a person can look at the virtual object and manipulate it with a haptic interface, feeling it as if it were in her hand.
When I read this news I started to consider what it would take to bring this idea to a mass market, one that I assume would be interested in extending communication from voice and images to real presence. Touch is a fundamental sense in giving us the perception of “being there”.
The work being done at ETH is important because it provides the conceptual tools and practical devices to convert the sense of touch into a model that can be transmitted and re-enacted, letting people feel the object at a distance. But how heavy a burden would this place on the infrastructure?
For a single object it is not that much. You need to create a model of it that contains the visual information, hence you have to image it over 360 degrees: let’s say some 32 images (each covering 45 degrees horizontally and vertically, enough for high-quality modelling, since 8 horizontal steps times 4 vertical steps covers the whole sphere). That’s equivalent to little more than one second of filming the object. Then you have to take samples of the object’s texture with a haptic device so that its feel can be recreated at the viewing end. This is not much information, maybe equivalent to doubling the amount of data used for the visual part.
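As a back-of-envelope check on these figures, here is a small sketch; the per-image size and the factor by which haptic data doubles the visual data are assumptions chosen to match the reasoning above, not measured values.

```python
# Rough estimate of the data needed for a single haptic object model.
# The per-image size is an assumption; the 32-view count and the
# "haptic data roughly doubles the visual data" factor come from the text.

VIEWS = 8 * 4            # 45-degree steps: 8 horizontal x 4 vertical = 32 images
MB_PER_IMAGE = 0.15      # assumed compressed size of one high-quality view, in MB
HAPTIC_FACTOR = 2.0      # texture/touch samples assumed to double the visual data

visual_mb = VIEWS * MB_PER_IMAGE
total_mb = visual_mb * HAPTIC_FACTOR

print(f"visual data: {visual_mb:.1f} MB")
print(f"full model:  {total_mb:.1f} MB")  # lands inside the 4-10 MB range below
```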
Let’s say, then, that in order to create this model (and transfer it) you need the equivalent of less than three seconds of high-definition movie, that is, something like 4-10 MB of data (depending on the quality you desire). That is fine for creating a virtual representation on the web. Now, this is not a tremendous amount of data per se, but if we imagine that MMSs will morph into HMMSs (Haptic MMSs), it means multiplying the amount of data created today by MMSs by 30 times, on average.
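To get a feel for what that multiplier would mean at network scale, the sketch below puts hypothetical numbers on it; the baseline MMS size and the daily message volume are invented for illustration, only the 30x factor comes from the estimate above.

```python
# Hypothetical illustration of the HMMS traffic multiplier. The average MMS
# size and the daily message count are invented numbers; only the 30x factor
# comes from the text above.

AVG_MMS_MB = 0.25             # assumed average MMS payload today, in MB
HMMS_FACTOR = 30              # an HMMS assumed to carry ~30x the data of an MMS
MESSAGES_PER_DAY = 1_000_000  # assumed daily volume on one operator's network

today_gb = AVG_MMS_MB * MESSAGES_PER_DAY / 1024
haptic_gb = today_gb * HMMS_FACTOR

print(f"MMS traffic today:     {today_gb:,.0f} GB/day")
print(f"same messages as HMMS: {haptic_gb:,.0f} GB/day")
```

Note that 0.25 MB times 30 gives 7.5 MB per message, consistent with the 4-10 MB model size above.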
On the receiving side there are two options: transfer the model to the terminal device being used and have it turn the model into a perception, or have some application in the network (or connected to the network) take care of this. The former requires high processing power in the terminal; the latter, a high-bandwidth channel to the terminal. The problem is that our sense of touch needs to be stimulated about 1,000 times per second (compare this with the 16-20 still images per second needed to create the perception of movement in movies and television), and this requires high bandwidth and low latency.
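A quick sketch makes the asymmetry concrete; the 1,000 Hz and 16-20 fps figures are the ones quoted above, while the bytes carried per haptic update are an assumption of mine.

```python
# Comparing the haptic update rate with the video frame rate, and what a
# network-rendered haptic loop would demand. The payload size per update is
# an assumption; the rates come from the text.

HAPTIC_HZ = 1000          # touch must be refreshed ~1,000 times per second
VIDEO_FPS = 20            # still images per second that read as motion

HAPTIC_BYTES = 48         # assumed force/torque + position sample per update
RTT_BUDGET_MS = 1000 / HAPTIC_HZ  # time available before the next update is due

haptic_kbps = HAPTIC_HZ * HAPTIC_BYTES * 8 / 1000
print(f"haptic rate is {HAPTIC_HZ // VIDEO_FPS}x the video frame rate")
print(f"network-rendered haptics: ~{haptic_kbps:.0f} kbit/s per touch point")
print(f"round-trip latency budget: {RTT_BUDGET_MS:.0f} ms")
```

The bandwidth figure is modest, but the one-millisecond round-trip budget shows why latency, even more than raw capacity, is what makes the network-based option hard.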
It is likely that a mixture of network-based and terminal-based processing will take place. Processing is not really a limiting factor, since we know it keeps increasing at Moore’s law rate. Trickier is the demand for power, which is the real limiting factor in mobile, battery-powered terminals. This is the reason why network-based computation may be required. But, alas, that brings along the requirement for bandwidth. It will be interesting to watch the evolution in the second half of this decade as a variety of applications (the one discussed here is just one of them) steadily increases the demand for bandwidth.