Posts Tagged ‘wireless’

Wireless in 2025

Friday, July 26th, 2013 by Roberto Saracco
Peering into the crystal ball

Yesterday I had a meeting with some Huawei researchers to brainstorm on the future of wireless with a horizon of 2025. Some interesting points came up.

Technology now supports 100 Mbps downstream and 50 Mbps upstream. There is already a defined roadmap to multiply this capacity tenfold by the end of this decade.

One should expect Gbps capacity to be available in many places by 2025, supported by 5G (Future Radio Access – FRA) and beyond.

However, it is not just about capacity and performance. It is about scaling from 10 billion users to 100+ billion users in 2025 (with the lion's share clearly taken by "things").

Achieving this performance requires the availability of larger chunks of spectrum and different ways of using it, but more than that it requires economic sustainability.

The latter forces architectures that mix private investment in micro networks (WiFi and smaller) with investment in backbones and radio cells similar to what we have today.

In turn, this requires the capability to manage both horizontal roaming (moving from one cell to the next with the same technology, as is done today) and vertical roaming, moving from a small cell to a bigger one once you step out of the small cell (it is unlikely that small cells can be made to overlap), and the other way round. It was noted that with present technology moving from a small cell to a larger overlapping one is feasible, but the other way round requires more research. It should be achievable in the next decade.

One way to manage this complexity could be C-RAN, the Cloud Radio Access Network.

Here the issues are both technological and organisational: who is going to own the cloud, and who will operate it? Will it be an integral part of the infrastructure, or will it be a service provided to decouple infrastructure from services?

Another interesting question is: by 2025, are we going to have cell phones? Or will they have morphed into something different, like wristwatches, table tops, shirts, purses…?

And if they have morphed into something else, wouldn't it mean that we ourselves are becoming terminals, each of us with our own IP address? Communications will take place across the ambient, and we will be the ones defining the ambient by the very fact of being in a given place. Different people in the same environment will likely have different communications ambients.

Just by moving my eyes I will be able to direct an incoming video to a specific display surface available in this ambient.

Filling the gap between radio and optical transmission …

Thursday, May 30th, 2013 by Roberto Saracco

Optical transmission operates in the THz range, and photons interact neither with each other nor with atoms arranged in a specific way, like the ones forming an optical fibre. Electrical transmission, on the contrary, operates up to a few GHz, and electrons interact with one another and with those in the medium – the copper wire. A fibre doesn't get warm when you use it; a copper wire does!

The "beamer" transmitting at 200-280 GHz to a receiver on the skyscraper in the background, achieving 40Gbps bit rate.

The “beamer” transmitting at 200-280 GHz to a receiver on the skyscraper in the background, achieving 40Gbps bit rate.

Radio waves sit somewhere in between. The waves that can be conveniently used in transmission range from MHz up to a few GHz and do not interact with one another, but they can be absorbed by atoms (that is how a microwave oven cooks the chicken, operating at about 2.45 GHz). The problem with radio waves is that it gets difficult to separate one from another (interference) and from the background radiation (noise). Another, more general, problem is that you can only cram so many bits per Hz and no more (Shannon's theorem); therefore, if you want to transmit more bits per unit of time you need a broader spectrum, and spectrum is scarce at radio frequencies.
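For reference, the Shannon limit ties the achievable bit rate to bandwidth and signal-to-noise ratio. A minimal sketch (my own illustration; the example numbers are made up):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: maximum error-free bit rate for a channel
    of the given bandwidth and (linear, not dB) signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 20 MHz channel at 20 dB SNR (linear SNR = 100) caps out near 133 Mbps.
print(shannon_capacity_bps(20e6, 100) / 1e6)  # ~133.2
```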

Well, researchers at the Fraunhofer Institute for Applied Solid State Physics and the Karlsruhe Institute of Technology have found a way to use radio waves in the hundreds of GHz, more precisely the spectrum between 200 and 280 GHz, to transmit information. Those 80 GHz of spectrum allow for a 40 Gbps transmission (and if they were used at the efficiency level we have reached today in cell phone networks, they could sustain up to 160 Gbps!), a capacity comparable to that provided by an optical fibre with a single "channel".
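The numbers pass a simple spectral-efficiency check (my back-of-the-envelope, not from the article):

```python
bandwidth_hz = 80e9      # the 200-280 GHz band
demonstrated_bps = 40e9  # reported link rate

print(demonstrated_bps / bandwidth_hz)  # 0.5 bit/s/Hz achieved

# At roughly 2 bit/s/Hz, in the ballpark of today's cellular links,
# the same band would carry 160 Gbps, matching the article's figure.
print(bandwidth_hz * 2 / 1e9)  # 160.0 Gbps
```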

This capacity has been sustained over a 1 km link and, given the frequencies used, the transmission is less sensitive to atmospheric conditions than optical communication over air ("fibre over air"), which suffers from fog and rain. The researchers actually expect to extend both the range and the capacity over the next few years.

This kind of transmission is point to point, so it is quite different from a radio cell covering a broad area. It can be used as a replacement for an optical fibre, getting rid of the deployment cost; one application could be serving rural areas, where the receiving point would create a radio cell to provide access over a broad area.

The high frequency chip only measures 4 x 1.5 mm², as the size of electronic devices scales with frequency / wavelength. Photo: Sandra Iselin / Fraunhofer IAF

The results of the Fraunhofer researchers represent, so far, the record for radio transmission capacity.

The researchers have created a chip able to sustain this high capacity transmission. Given the high frequency being used, the antenna can be very small (the rule of thumb puts the antenna dimension at half the wavelength, and the wavelength at 200 GHz is 1.5 mm), and indeed the chip, shown in the photo on the left, is very tiny.
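A quick sanity check on that rule of thumb (my own sketch):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength for a given frequency."""
    return C / freq_hz

lam = wavelength_m(200e9)
print(f"wavelength at 200 GHz: {lam * 1e3:.2f} mm")  # ~1.50 mm
print(f"half-wave antenna: {lam / 2 * 1e3:.2f} mm")  # ~0.75 mm
```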

I guess that in the future we are going to see many more applications of very high radio frequencies to connect objects in a small ambient.

Solving the Cocktail Party Problem…

Thursday, March 21st, 2013 by Roberto Saracco

I am not an expert in cocktail parties, nor in discos, but I have noticed that whenever I am in a crowd I have problems hearing the person I am talking with because of the overwhelming noise. However, after a while, the noise seems to disappear and I can focus on my conversation.

Location of sites with significant LF phase-ITC (left) and HG power-ITC (right) in both conditions. The colors of the dots represent the ITC value at each site

This empirical "sensation" has now found an explanation in studies by researchers at Columbia University and other universities, who managed to look "inside" people's brains to see what happens when one wishes to concentrate on a sound in a noisy environment.

All sounds in the environment clearly reach our ears (we do not have the option of some dogs, which can move their ears in the direction of a sound to better capture it and single it out from other sources).
This shows up in the activity of the brain, which receives all sounds. By experimenting with people suffering from epilepsy, scientists have discovered that there are two regions in the brain involved with sound. One basically captures any sound detected by the ears; the other focusses on the specific sound the person is paying attention to.

What they have further discovered is that it is this second area that connects to the cortical neurones where perception of the sound arises. Hence, we perceive what we pay attention to. They also showed that the process of selectivity is a dynamic one. As our brain works out the meaning of a discourse, it creates a representation that in turn further focusses attention. Part of this representation fills in those sounds that get lost in the noise. In a way, the brain reconstructs the meaning of a conversation by filling in what it expects to hear. This may of course generate false understanding from time to time, but in general it works pretty well.

It is just another example of the creation of a semantic network.

The research aims at helping people suffering from epilepsy by decreasing the burden of sounds that can trigger an epileptic episode as signals spread through the brain. A surgical intervention may interrupt the fibres spreading the signals, thus decreasing the number of episodes.

When reading this news I was attracted by its relation to semantic networks, and by the fact that we can really learn a lot by looking at how Nature solves problems by working around them.

I was also interested in the connection I can see with the issues we have in wireless networks, where the level of noise, more and more of it created by our own networks, is decreasing the amount of signal that can be transmitted.

In transmission we are constrained by Shannon's theorem, which defines the ratio between signal and noise that allows a certain number of bits to be transmitted. With a semantic network we are not invalidating Shannon's theorem, but we can find a reasonable work-around: by introducing an understanding of the message we can use fewer bits for the message, still work out its meaning, and then reconstruct it.

It is like saying in a discourse: "I will be going tomorrow from Rome to Milan" and, because of the noise, being unable to transmit the whole message. What if I just send "I tomorrow from Rome Milan"? Well, if you place this in a context you will probably be able to work out the meaning and reconstruct the full sentence (this is just an example to give the basic idea; in reality it is much more complex, as is the approach followed by the brain).

Connections in our brain, from the Connectome Project

In the example given, the first sentence would have required 43 basic pieces of information (characters) to be sent; the second one achieves the same result with 26. And what if, from the context, you can tell that I am in Rome and that I am talking about myself, so that you just say "tomorrow Milan"? You would cut the information transfer down to 14 characters, roughly a third of the original sentence. In other words, you might claim that using a semantic network you would be able to multiply by 3 the amount of information sent!
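A minimal sketch of that arithmetic (counting characters, spaces included):

```python
full = "I will be going tomorrow from Rome to Milan"
reduced = "I tomorrow from Rome Milan"
contextual = "tomorrow Milan"

for sentence in (full, reduced, contextual):
    print(len(sentence), repr(sentence))
# 43 'I will be going tomorrow from Rome to Milan'
# 26 'I tomorrow from Rome Milan'
# 14 'tomorrow Milan'

print(f"compression: {len(full) / len(contextual):.1f}x")  # ~3.1x
```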

Once you move into the "semantic" space, you find yourself in a domain where correlations are as important as, or even more important than, the single pieces of information. In a future where we will have pervasive networks and thousands of objects creating connectivity, not just using it, you will have a structure that more and more resembles the wiring in the brain, with continuous alteration in the strength of each link, which in turn represents meta-information that can be used to assess meaning. We will change from communications based on signals to communications where signals change the states of the network, and this leads to new emerging properties (meaning) that can be perceived and that create the "message", as happens in the brain, where our consciousness and perception are more a consequence of a brain state change than a response to external stimuli.

Broadcasting from the brain

Tuesday, March 12th, 2013 by Roberto Saracco

As you may have noticed, I often post news related to advances in the "brain" area. This is in synch with the feeling of many scientists that the coming years will be remembered as the years of the "brain". Advances in technologies for observing and analysing brain structures are opening the way to a greater understanding of its working and to the possibility of mimicking it inside artificial structures.

Indeed, the European flagship project "The Human Brain" has among its goals the creation of new computation paradigms based on the understanding of the brain.

There are of course other goals, very important ones, like helping people with disabilities, congenital and acquired.
So far, communication with (and from) the brain has required bulky equipment and wires connecting the patient to a computer, as with Braingate.

Now scientists, neuro-engineers at Brown University, have developed the first implantable wireless brain-computer interface.

As shown in the photo on the left, the implant is the size of a matchbox (and, as we all know, it is going to shrink in the next few months), can be placed under the scalp, becoming almost invisible, and connects to the brain, capturing signals from as many as a hundred neurones.

That may not seem like many, particularly if you consider that there are billions of them, but it is all about getting the sensor in the right place on the cortex. In this sense its performance is equivalent to that of Braingate, which required a wired connection to a computer and stuck out of the patient's head, as shown in the figure below, where a paralysed patient instructs a robot to bring her a drink by thinking about it:


In the coming years we may expect progress in three directions:

1. miniaturisation and lower power requirements (and this we may take for granted),

2. a greater number of sensors to pick up signals from more neurones, also located in different parts of the brain cortex (more complex than No. 1 but still within reach in the next 5 years),

3. higher sophistication of software to decode the signals generated by the neurones (progress in the understanding of the brain is instrumental in this evolution).

I consider all this fantastic progress in our capability to understand ourselves, getting very close to a thinking machine that is able to think in a scientific way, through experiment, about "itself", going beyond what philosophers have tried to do in the last 25 centuries.

Communications beyond 2020: towards a fibre radio infrastructure

Thursday, November 8th, 2012 by Roberto Saracco

Radio will progress significantly, and the huge investment required for massive fibre deployment will create a situation where the performance supported by the radio infrastructure approaches that of the existing fixed infrastructure (which by the end of this decade will still contain a significant percentage of copper).

For the last 20 years we have seen a five-year gap between the performance offered by the wireline and wireless infrastructures; that is, the wireless infrastructure can match the performance the fixed one provided five years earlier. As investment in the fixed infrastructure slows down with respect to investment in wireless, we see this gap getting smaller. And it might tend to zero in the next decade.

As a matter of fact, the gap in terms of voice transmission capacity is already, to all practical purposes, zero, and we see that in many developing countries this has brought the evolution of the fixed network to a halt. Something similar might happen in developed countries, where there will not be significant economic pressure to enhance the fixed infrastructure.

It is more than likely that radio performance in the next decade will be able to meet most needs of residential customers, and that the tiny part that cannot be met will not provide sufficient revenues to convince Network Operators to invest massively in fibre.

In the next decade we can expect small cells where frequencies of 10 GHz (and even 50 GHz indoors) might be used to provide 5-10 Gbps of capacity per cell (in the 50 m range).

The use of multichannel, multicarrier and CoMP (Coordinated MultiPoint) techniques will contribute to increasing the capacity of the cell.

One of the drivers pushing for ever greater wireless performance is users' expectation of having the same service whether they are indoors or outdoors.

Clearly, it is unlikely that the wireless infrastructure can support UHDTV (Ultra High Definition Television), 4k and 8k, but it is likely that the pressure of consumer electronics manufacturers to push their new products onto the market will have resulted in alternative ways of delivering content (4k televisions are a reality today, and broadcasting systems are not going to be ready to transmit at that level of definition/bandwidth for quite a few years), possibly based on local rendering and local media buoys that accrue content in bulk rather than in streaming mode.

If one looks much further down the road, let's say beyond 2030, the need to substitute the copper infrastructure because of "copper aging" will result in a fibre-based fixed infrastructure, but by that time a significant portion of the distribution network will be radio based for the last tens of metres.

And in that time frame wireless will have progressed even further, with computation at the edges making it possible to solve interference and hence multiply the capacity of every cell.

Soft spectrum ahead…

Sunday, June 17th, 2012 by Roberto Saracco

The use of radio spectrum continues to rise: according to Cisco, the amount of data transmitted over mobile wireless is expected to increase 18-fold between now and 2016 (faster than Moore's law: 18 times versus 6), while the available spectrum is not going to increase. Hence we need to be more efficient in using it.
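The comparison works out if you read Moore's law as a doubling every two years over the same five-year forecast window (my reading of the figures, not Cisco's):

```python
years = 5                        # the 2011-2016 forecast window
moore_growth = 2 ** (years / 2)  # doubling every ~2 years
print(f"Moore's law: ~{moore_growth:.1f}x vs forecast traffic: 18x")  # ~5.7x
```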

This is something that has kept researchers busy over the last decade, and as a matter of fact we have multiplied the number of bits we can squeeze into each Hz. But we are now approaching the Shannon limit, so we need to find ways to circumvent this barrier.

There are two ways of doing this: dynamically changing the frequency used for transmitting data as a frequency becomes available (frequency hopping), or decreasing the impact of noise (solving the interference problem). Work is going on along both lines.

For the latter we already have the theoretical foundation, and we are waiting for chips with much lower energy consumption, so that cell phones can talk to one another to solve interference. It may take until the end of this decade to get a practical solution (although the MIMO systems already in use are a step in this direction).

Smart cognitive radio made easy...

For the former, frequency hopping, we are seeing good progress. At the core is the idea of "cognitive radio": a radio that knows which frequencies are in use at any given instant and can hop to one that is free at that particular moment. Since most of the work is done in software, this is also referred to as "Software Defined Radio". Statistically speaking, there is always some part of the spectrum that is not being used, but which part changes over time, and quite rapidly. So the trick is to find ways to hop instantaneously from one frequency to another as "holes" appear.

Meet Radio Technology Systems, a small company in New Jersey, that is doing just that.

In the photo you can see their product. It is still quite expensive, around $6,000, but it is capable of transmitting up to 400 Mbps, enough to carry 20 HD movies at the same time.

Most importantly, it does so by looking for available (unused) frequencies between 100 MHz and 7.5 GHz, and it can switch (hop) from one frequency to another in a microsecond (although they claim a maximum of 50 microseconds to sense an available frequency and switch).
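A toy sketch of the hop decision (illustrative only: the names, the threshold and the channel grid are my assumptions, and real sensing runs in microseconds in dedicated hardware):

```python
import random

SENSE_THRESHOLD_DBM = -85  # assumption: below this power a channel counts as idle

def sense_spectrum(channels_hz):
    """Stand-in for real spectrum sensing: returns a measured power (dBm)
    for each candidate channel."""
    return {f: random.uniform(-110, -40) for f in channels_hz}

def pick_idle_channel(measurements):
    """Choose the quietest channel among those below the idle threshold."""
    idle = {f: p for f, p in measurements.items() if p < SENSE_THRESHOLD_DBM}
    return min(idle, key=idle.get) if idle else None

# A coarse grid of candidate channels spanning 100 MHz to ~7.4 GHz.
channels = [100e6 + i * 250e6 for i in range(30)]
target = pick_idle_channel(sense_spectrum(channels))
print(f"hop to {target / 1e6:.0f} MHz" if target else "no hole found, stay put")
```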

These approaches are at the core of the National Science Foundation initiative to create a mobile-radio-centric Internet, with experiments going on at several universities in the USA.

Clearly, this evolution is creating ripples in the Mobile Operators' world, since it will lead to a decreased value of spectrum (and licences). At the same time, it is bound to increase the demand for wireless capacity and the offering of services. Hence, Operators will need to be in synch with this evolution to be ready to exploit the growing usage of wireless in terms of services.

Real simultaneous wireless communication

Tuesday, April 5th, 2011 by Eduardo Mucelli R. Oliveira

Any book that addresses the topic of wireless communication could use the "one-way street" metaphor to describe radio communication: at any given instant the flow occurs in only one direction, either transmitting or receiving. This is the basic form of communication we know, used in products such as Nextel handsets or walkie-talkies. We talk and listen "at the same time" on a mobile phone only through a workaround that requires expensive infrastructure, which would not be appropriate for a wireless network in our homes, for example. Recently, researchers from Computer Science and Electrical Engineering at Stanford University developed the first wireless radio that can send and receive signals at the same time: the Full-Duplex Wireless Design project.

The most obvious and direct impact of this result is the doubling of transmission capacity, since transmission and reception can occur at the same time. The full scope of the impact of this new approach is hard to predict, because it ranges from mobile networks to, for example, air traffic control. One thing is certain: the future is promising, once software and hardware are designed to take advantage of simultaneous communication.
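The core idea behind such full-duplex radios is self-interference cancellation: the receiver knows exactly what its own transmitter is sending, so it can subtract that known signal from what it hears. A minimal numerical sketch (my illustration, not the Stanford implementation, which also cancels in analogue hardware):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

remote = rng.standard_normal(n)   # the signal we actually want to receive
own_tx = rng.standard_normal(n)   # what our own transmitter is sending
leak_gain = 50.0                  # self-interference dwarfs the remote signal

# What the antenna hears: the remote signal buried under our own transmission.
received = remote + leak_gain * own_tx

# Because own_tx is known exactly, estimate the leakage gain and subtract it.
est_gain = np.dot(received, own_tx) / np.dot(own_tx, own_tx)
recovered = received - est_gain * own_tx

print(f"residual error: {np.mean((recovered - remote) ** 2):.4f}")  # close to 0
```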

Leapfrogging LTE…

Monday, February 14th, 2011 by Roberto Saracco

LTE is now being deployed in some parts of the world, and in the coming 2-3 years it will be the technology of choice for upgrading present 3G networks, particularly in areas with strong demand for data communication.

Wireless data growth estimated by Cisco

In a recent report (February 1st, 2011) Cisco predicts tremendous wireless data growth over the coming five years, reaching 75 EB worldwide by 2015, with two thirds contributed by video. Another interesting projection is that 788 million people in 2015 will use wireless Internet access only (they will have no wireline connection).

However, in the same report, Cisco forecasts that a significant portion of Internet browsing on smartphones and mobile devices (like tablets) will not use the wireless network but the wireline network, connecting via femtocells and WiFi areas.

So now it is time to turn to InStat and read what they say about WiFi evolution.

Today's fastest WiFi is provided by the 802.11n standard, but researchers are working to standardise, by the end of this year, the new 802.11ac, a flavour of WiFi designed to provide up to 1 Gbps of bandwidth, thus leapfrogging LTE capacity.

Chipsets supporting this new standard should be on the market at the end of 2012, and devices embedding them should become commonplace in the following years.

By 2015 InStat expects over 800 million cell phones equipped with WiFi access, and by that time all new hot spots will be 802.11ac enabled.

We can expect WiFi to play a most significant role in connectivity this decade, and its pervasiveness will likely condemn mobile Telco Operators to providing flat-rate bundles and will contribute to the push towards business models beyond connectivity.

This may be bad news, but at the same time the pervasiveness of very low cost wireless access is bound to stimulate the aggregation of a lively ecosystem that can be mined for revenues.

Going wireless still requires wires…

Thursday, January 6th, 2011 by Roberto Saracco

This year has seen a tremendous expansion of wireless, both in public networks and in local networks, with many products creating home wireless networks to connect various devices, from computers to televisions. This is why, in the annual ranking of technologies that made an impact on the year published by Technology Review, wireless is at the top of the list:

http://www.technologyreview.com/computing/26986/page1/

However, one cable is still required: the one powering the equipment. This is where PowerMat comes in handy.

The PowerMat recharges your cell phones through induction

Unfortunately it is not all that easy. To power your "stuff" you need to equip each item with a receiver, and this requires extra "stuff"! The hope is that device manufacturers will soon embrace the wireless powering revolution and start embedding these receivers in their devices. That will really change the game!


Let me also mention that, at a macro level, we are going to see in this decade a tremendous growth of wireless connectivity, with a proliferation of antennas (which is also good from the electromagnetic pollution point of view, since the more antennas you use to cover a territory, the lower the electromagnetic field per square metre), and that will require an even more pervasive fibre infrastructure.

In turn, this will decrease the need for energy, since it takes 1,000 times less energy to transport bits over a fibre than to beam them wirelessly over great distances.

Towards infrastructure-less communications?

Friday, July 23rd, 2010 by Roberto Saracco

Well, it looks highly unlikely. You need roads to travel, you need wires and towers to communicate.

Suppose you have a really good off-road SUV, one of those behemoths that can also cross rivers. Maybe you could go from A to B without having to use a road.

Ah! I hear you saying: you might be able to do so, but it would take much longer than using a nice paved road and it would cost you a fortune in gasoline. Right. But what if you are in the middle of nowhere, let's say the Australian outback? What would be quicker AND cheaper: getting the behemoth or building a road?

I guess I have made my point. With a sufficiently smart and flexible car you can make do without the road infrastructure.

Let's turn to communications. The very first telephones were sold without any infrastructure to plug them into. You would buy a (pair of) phones and ask somebody to lay a wire (actually four of them) to establish the connection. Luckily, a full-blown infrastructure was put in place, with a lot of investment and over a long period of time, and today we can easily connect our phones. Then cell phones were invented, and the perception of connectivity moved away from users, since the last part was via "air" and most of the time the towers were hidden from the user's sight.

Nowadays, and even more so tomorrow, terminals like cell phones are very powerful devices that can morph into network nodes. In perspective, they can become the network. Clearly I am not suggesting that the big backbones will disappear. On the contrary, they will become even more capacious and will extend into metropolitan areas, into what today we call backhauling. But the distribution network may fade away under pressure from the extended backbone on one side and the takeover by terminals on the other.

Terminals may communicate with one another whenever they are sufficiently close, at least for data communications. From an energy standpoint, it is cheaper to compute than to transmit, and transmission energy grows at least with the square of the distance. So if two terminals are closer to one another than to the antenna connected to the backbone, it makes more sense, energy-wise, for them to communicate with one another, one terminal acting as a bridge towards another (and so on) or towards the backbone antenna.
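A toy comparison under that square-law assumption (my sketch; real link budgets involve far more than distance):

```python
def tx_energy(distance_m: float, k: float = 1.0) -> float:
    """Toy model: transmission energy grows with the square of distance."""
    return k * distance_m ** 2

# Two terminals 50 m apart, each about 400 m from the backbone antenna.
direct = tx_energy(400)                    # terminal A straight to the antenna
relayed = tx_energy(50) + tx_energy(400)   # A -> B hop, then B to the antenna
print(direct, relayed)  # 160000.0 vs 162500.0: roughly a wash

# If A is 600 m out and B sits between A and the antenna, relaying wins:
print(tx_energy(600), tx_energy(200) + tx_energy(400))  # 360000 vs 200000
```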

In this perspective, the future network will be a composite of dynamically connected nodes, and such a network will also be a set of connected information. Quite a different view from today's hierarchical structure. It will be much more like a "natural creation" than an engineered one.

To make it happen we need a lot of research (and it is going on today) and the vision to let the market evolve.


I should say that this post was prompted by looking at LightSquared, a $7B initiative by Harbinger aiming at creating wireless coverage in the US by deploying 40,000 antennas as access points. I would consider this a first step towards the flat network of terminals I am suggesting. It seems to me that 40,000 access points can really sustain a US-wide network only once terminals also play the role of network nodes, extending the reach of each access point.

A closer example of this vision "in the field" is the Serval project,

http://www.servalproject.org/

being deployed in Australia, where cell phones play the role of towers, providing connectivity in areas not covered and creating a network by themselves, with one (or more) cell phones acting as a gateway when within coverage of a network access point.