Archive for August, 2011

Much more than a screen

Sunday, August 21st, 2011 by Roberto Saracco

Screen evolution is happening before our eyes. Every week some new television goes on display, making us regret having bought one last month. If only we had waited a bit longer…

A filter for extracting energy from one polarization of light

But some evolutions are not that visible, and they will change the way we use screens. One example, which I have already posted about, is the embedding of video cameras in the screen itself (in the plural, since there are hundreds of them in a single screen), so that the screen can see how we are watching and adjust the content to our reaction… (of course through an application running somewhere in the television itself or in the network).

Another one is this news from Technology Review. Two research groups, one from the University of California, Los Angeles, and the other from the University of Michigan, are looking at ways to make use of the backlight panel needed by LCD screens. What we see as an image on an LCD screen uses only 5% of the light provided by the back panel, so it is a very inefficient way of generating an image energy-wise. On the other hand, LCD screen manufacturing is so cheap that these screens dominate the market.

The researchers have invented a filter that intercepts the polarized light (which has to be filtered out to avoid interfering with the LCD image) and, rather than absorbing it as today's polarizers do, converts it back into electricity.

Basically, they are transforming an energy-wasting component (the polarizing filter) into an energy-saving one. So far the energy recovery is very low, on the order of 3-4%, but they expect to be able to increase it to around 10% soon. Since the screen is the part that consumes the most energy in a television, and a good portion in computers, this saving is surely welcome.
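As a rough back-of-envelope check of those percentages, here is a sketch based only on the figures quoted above; the 100 W backlight power is an assumed, hypothetical value, not a number from the article:

```python
# Back-of-envelope estimate of the energy such a polarizer filter hands back.
# The 100 W backlight power is a hypothetical figure for illustration only;
# the 5% image efficiency and 3-10% recovery rates come from the article.

backlight_watts = 100.0          # assumed backlight power draw
image_fraction = 0.05            # ~5% of the light actually forms the image
wasted_watts = backlight_watts * (1 - image_fraction)
print(f"light wasted today: {wasted_watts:.0f} W of {backlight_watts:.0f} W")

for recovery in (0.03, 0.04, 0.10):   # today's 3-4% vs. the hoped-for 10%
    recovered = backlight_watts * recovery
    print(f"{recovery:.0%} recovery -> {recovered:.1f} W handed back")
```

Even at the hoped-for 10%, most of the backlight energy is still lost, which shows how much headroom remains in LCD efficiency.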

The world becomes aware …

Saturday, August 20th, 2011 by Roberto Saracco

The presence of sensors everywhere, in every object, and the connectivity fabric bringing them all together will change our perception of the world. It won't happen in one shot but one step at a time, some steps so small that we might not notice what is happening.

The fun is just starting

This is why I decided to report on the announcement made by Sifteo, a start-up that originated from work at the MIT Media Lab, where I saw the first prototypes of Siftables.

They have announced that a set of Siftables can now be pre-ordered: tiny cubes that embed sensors and a PC, with a screen on one of their faces.

They propose a set of three for $149 (any extra cube comes at a $45 tag); that's not cheap, but if you consider that you get three PCs, three screens and plenty of sensors… well, it is a bargain! By moving the cubes, placing them in specific positions, or juxtaposing them, you have them behave in different ways. This is the consequence of each cube sensing its environment and reacting to it.

Another interesting aspect is that the Siftables are open to third-party programming, so that you can either program them yourself or get applications from the ecosystem they create.
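Sifteo's actual SDK is not shown here; as a purely hypothetical sketch of the programming model described above (each cube sensing its neighbours and reacting to them), one could imagine something like:

```python
# Hypothetical sketch of the "each cube senses and reacts" model described
# above. All class and method names are invented; this is not Sifteo's SDK.

class Cube:
    def __init__(self, name, color):
        self.name = name
        self.color = color
        self.neighbors = []          # cubes currently touching this one

    def juxtapose(self, other):
        """Place two cubes side by side; both react to the new neighbor."""
        self.neighbors.append(other)
        other.neighbors.append(self)
        self.on_neighbor_change()
        other.on_neighbor_change()

    def on_neighbor_change(self):
        # Toy behavior: the screen shows the mix of neighboring colors.
        shown = sorted({c.color for c in self.neighbors} | {self.color})
        print(f"{self.name} displays: {'+'.join(shown)}")

a, b = Cube("cube-a", "red"), Cube("cube-b", "blue")
a.juxtapose(b)   # both cubes change what they display
```

The point is the inversion of control: no central program moves the cubes; each one reacts to what its own sensors report.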

I see this as an example of a change that will characterize this decade: more and more objects will be able to sense what's going on around them and will adjust their behavior accordingly. And, of course, they will progressively be able to sense our presence and communicate with our electronic agents, so that they can merge into our living space.


A new production infrastructure

Friday, August 19th, 2011 by Roberto Saracco

Following up on yesterday's post about the hiring of 1 million robots by Foxconn, I kept brooding on what the future may look like in terms of production plants.

One of the statements I made yesterday was that the robotization of production, by substituting robots for human workers, decreases the importance of the salary factor; hence production is no longer attracted to those areas where salaries are lower. On the other hand, robotic production on a scale like Foxconn's, where millions of robots are involved, is very capital intensive and tends to create an infrastructure that is difficult to duplicate.

Global intermediation portal

Are we going to see the birth of an infrastructure oligopoly similar to what we had in the past in areas like car manufacturing?

In my opinion this is what may happen in the medium term, but the longer-term evolution is possibly different. Production infrastructures may become clusters of less capital-intensive production areas linked by efficient logistics, making it possible to increase competition among the various production islands whilst keeping the advantage of scale.

To see what I mean, think about Alibaba. It is the largest portal for accessing manufacturing capabilities in China. Wherever you are in the world, you can use Alibaba to connect to manufacturers in China, without even knowing their names, place your RFQ and get an answer within a few days, sometimes within a few hours. You can ask for a quotation to develop a custom chip by sending the specs, and you will receive quotations for a prototype and for the final product. It is like owning Intel without the capital needed for it, and being able to produce your own chip.

Now, project this over a twenty-year horizon (though I know that, as Einstein once said, the future comes sooner than you would expect…). You will have billions of robots clustered in many places around the world, highly flexible in terms of what they can do, waiting to receive specs on what to build. These specs may be fragmented into various parts, say one for the mechanics, one for the electronics, one for the bioengineering and so on, and each part can be outsourced. An intermediary can take care of the end-to-end manufacturing needed to deliver the product.
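A toy sketch of that intermediation step, with a spec fragmented into parts and each part awarded to the cheapest bidder; all supplier names and quote figures below are invented for illustration:

```python
# Toy model of the intermediary described above: a product spec is split
# into fragments, each fragment is quoted by several suppliers, and the
# cheapest quote per fragment is selected. Names and numbers are invented.

spec = {
    "mechanics":      {"supplier-a": 120_000, "supplier-b": 95_000},
    "electronics":    {"supplier-c": 210_000, "supplier-d": 240_000},
    "bioengineering": {"supplier-e": 330_000},
}

def award(spec):
    """Pick the cheapest quote for every fragment of the spec."""
    return {part: min(quotes, key=quotes.get) for part, quotes in spec.items()}

orders = award(spec)
total = sum(spec[part][winner] for part, winner in orders.items())
print(orders)            # which supplier builds which fragment
print(f"total: {total}")
```

The intermediary's real job, of course, is everything this sketch leaves out: verifying suppliers, coordinating logistics, and guaranteeing the assembled end product.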

I can see this happening, and I can see the Cloud (in the sense of a borderless coordination infrastructure virtualizing manufacturing capabilities and implementing the required logistics) becoming an important piece of worldwide manufacturing capability.

The interesting thought is that such an approach does not need to be designed top-down; it can become reality through a number of bottom-up innovations knit into a unity by some careful design of interfaces. Also, once you move manufacturing to the Cloud, you find yourself in an environment where customers are also present, with their data and their virtual ambient, and therefore you will see a continuum between manufacturing, sales, customer care and the customized evolution of products.

Besides, the Clouds will become the natural economic ecosystems and marketplaces. Quite a different situation from what we have today, although the Apps world provides a clear indication of what this new manufacturing world may look like.

New hiring strategy: 1 million robots in three years…

Thursday, August 18th, 2011 by Roberto Saracco

One of the Foxconn plants in Shenzhen

According to Xinhuanet, Foxconn is planning to “hire” 1 million robots within the next three years to replace a significant part of its 1.2 million workers.

Currently Foxconn uses 10,000 robots in its production lines and plans to add 300,000 more by next year, reaching 1 million by 2014. They will work on component mounting, welding and varnishing, activities that today are performed by blue-collar workers. Curiously, the announcement was made at a workers’ dance party.

Foxconn is a Taiwanese company with several plants in mainland China, where it produces products for Apple, Sony and Nokia, among others. In recent years it has been harshly criticized for a string of suicides attributed to harsh working conditions.

The announcement may be read as a step towards moving the most repetitive activities to robots. There has been no statement on what will happen to the workers who today perform the activities that robots will take up in the coming years.

Clearly, this is a much broader issue affecting many companies and many workers all around the world. As robots become more and more flexible (so that their cost can be spread over longer periods of time and more product lines), many human activities in the factory will be taken over by them, and this might have the greatest impact on the countries that have absorbed most of the world's production jobs.

Robots are making labour cost an even playing field for every country, so by the end of this decade we might see a reverse trend, with manufacturing plants returning to today's high-labour-cost countries. The robots will require highly skilled personnel, and this, no longer the cost of salaries, will be the deciding factor in selecting a manufacturing plant's location.

Interestingly, robot flexibility will also make it possible to increase customization, and this in turn will change the relation between the point of sale and the manufacturing plant. And what we have in between is the telecommunications network, with the product becoming an integral part of the telecommunications infrastructure and of the pathway from the customer to the service.

After Web 2.0, what will fade out?

Wednesday, August 17th, 2011 by Roberto Saracco

I just read an interesting blog post on Technology Review about the likely demise of the term Web 2.0, and I cannot help wondering what will come next.

Occurrences of the term Web 2.0 on ... the web

As shown in the graph, the number of citations of the term Web 2.0 peaked in 2007 and has been declining ever since.

Of course, any fancy name gets dusty after a while and new ones are needed. So no surprise there. It was inevitable that Web 2.0 would be replaced, sooner or later, by Web 3.0.

On the other hand, Web 2.0 has not been just a name but a concept that we have seen put into practice. Nowadays, most of the time when we look for information we do so by using a service, and the information is encapsulated in the service. Think about looking for the weather forecast ten years ago: remember typing something like www.weatherchannel.com. Compare that with what we do today: just click on a sun icon to get the info we are interested in, because the place we care about is already stored in the icon.

With Web 3.0 we are moving a step forward, embedding our context into the actions performed, so that the results fit the specific interest we have at this particular time, in this particular place.

So welcome Web 3.0.

Now, if I think about other names copycatting Web 2.0, like Enterprise 2.0 or Telco 2.0, I do not see that we made any real “quantum” progress over what we had before (apart from the name and the consultancy fees associated with it…). I guess part of the reason is that in Web 2.0 the change was brought about by the Web ecosystem, through hundreds of thousands of players, each one piggybacking on the others, creating a self-sustaining stream of evolution.

With Telcos and Enterprises what happened was the attempt, sometimes successful, to create some new services or to change some tiny parts of the whole according to the 2.0 concept. But changing the whole of the enterprise to match that would have required reinventing the company, and this was out of the question. You can probably create an Enterprise 2.0 from scratch, but you cannot transform an existing, efficient company into something completely different.

Coming to the headline of this post: are we going to see Enterprise 2.0 and Telco 2.0 fading away soon? I think so. We will be trying to ride the new 3.0 wave, and 2.0 will simply become too old to be pursued.

So, what could an Enterprise 3.0 be like? Personally, I would connect it to the idea of a completely delocalized enterprise, no longer having a physical location but only a connective fabric, a sort of Enterprise in the Cloud: an enterprise without strong centralized control, creating its offer through the harmonized cooperation of independent groups of specialists, much more flexible in responding to market dynamics.

And what about a Telco 3.0? Again, picking my brain, I would say that a Telco 3.0 will be a Telco dealing with data connectivity, not with wire(less) connectivity; it will be much less tied to what we consider infrastructure today (although it will still have plenty of atoms to deal with, in the form of data centers).

Both the Enterprise 3.0 and the Telco 3.0 are such because they focus on semantics rather than on syntax. Syntax remains the asset that needs to be leveraged (infrastructures and processes), but the real competitive advantage will derive from the management of semantics, in particular that of their customers (and users). They will need to know what their customers’ context is, what their background is and what their intentions are, and deliver personalized services and products at mass-market prices.

2041: A pervasive self sustained connectivity fabric

Tuesday, August 16th, 2011 by Roberto Saracco

I have been asked to outline, in 250 words, my views on telecommunications in 2041. Here they are, for you to comment on!

Thirty years is too long a period to think about linearly. In 1981 there were basically no cell phones; now there are close to 5 billion of them.

Everything (information, living beings, objects) is part of a connectivity fabric

Electronics will no longer be the leading technology, although almost everything will embed some electronics. By the middle of the next decade, Moore's law will no longer be sustained by silicon, but optics and bio will ensure its validity. Connectivity will bring online so many devices (and living things, from algae to humans) that the sheer number of connected points will exceed the thresholds of manageability under today's paradigm. More than that: most of these “points” will behave as connection nodes, each creating a connectivity space that, by overlapping with nearby ones, will result in a pervasive, self-sustained fabric.
This will have changed the rules of the game, in terms of regulatory framework and players. From a technical point of view, it will have revolutionized our ideas of network architectures.
Calling it a connectivity fabric can be misleading. It will be connectivity, processing and data all at once, more like a brain than like today's networks, where we can make a clear separation among the three. Autonomics will rule at the local level, and the whole will behave like a dynamically changing ecosystem around stability points that minimize energy consumption.
This fabric, being formed by objects (including living ones), will change the way we look at objects, enterprises and processes. Objects are communicating entities and players in the ecosystem. Services are likely to become ecosystem states, and their contextualization becomes a direct fallout of objects being nodes affecting, and being affected by, the whole.
Get ready for this holistic perception of your world.

Just feel it!

Monday, August 15th, 2011 by Roberto Saracco

Disney and CMU have demonstrated a new tactile technology that is a significant leap forward with respect to present haptic interfaces, where you can feel something with your hand. Now it's your whole body that can sense what's going on!

The name given to this technology is, aptly, Surround Haptics, and it is based on a set of vibrators surrounding your body. The vibration is controlled by a computer, on the basis of perception models, to recreate the feeling of motion, of being hit by objects, and much more.

You can read the details on line.

What matters to me is that once this kind of device becomes mainstream, people will discover a new need for optical fibres. Only with this kind of connectivity will you be able to really feel what's going on, because our sense of touch requires refresh times on the order of 1/1,000 of a second (a millisecond), and that means basically no delay in the telecommunications infrastructure, something that can only be achieved once you have full optical connectivity, end to end.
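The millisecond figure translates directly into a distance budget: light in fibre travels at roughly two thirds of c, so a 1 ms round trip caps how far away the haptic feedback loop can close. A rough sketch, ignoring switching and processing delays:

```python
# Rough distance budget implied by a 1 ms touch-refresh requirement.
# Ignores switching, serialization and processing delays, so real
# distances would be considerably shorter.

SPEED_IN_FIBRE = 2.0e8        # m/s, roughly 2/3 of the speed of light
refresh_budget = 1.0e-3       # 1 ms, the touch refresh time cited above

round_trip_metres = SPEED_IN_FIBRE * refresh_budget
one_way_km = round_trip_metres / 2 / 1000
print(f"max one-way distance: {one_way_km:.0f} km")   # -> 100 km
```

So even in the best case the far end of a haptic session must sit within about a hundred kilometres, which is why the whole path has to be optical: there is no budget left for slow electronic hops.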


ParaSail and beyond

Sunday, August 14th, 2011 by Roberto Saracco

Dual core, multicore, massively distributed processing… just a few words to say that technology has progressed to offer us multiple processors working on a single task. By multiplying the processors you increase the overall computational power without an exponential increase in the energy required, as would be the case if you were to speed up a single processor to match the same performance.

ParaSail ... a new programming language for parallel processors

The challenge with this approach is to be able to write code, a program, that takes effective advantage of the parallel processing: no small feat at all!

Now a new programming language is available to ease the life of programmers: ParaSail (Parallel Specification and Implementation Language).

The language has been designed by Tucker Taft, CTO of SofCheck, a software company based in the Boston area, to overcome the problems programmers run into when dealing with multicore chips. There is a tradeoff when using these chips: you can stick to conservative programming, but then you are not exploiting the parallelism offered by the multiple cores; or you can parallelize everything, but risk creating out-of-sequence operations that lead to errors.

On average, current dual-core chips can increase processing speed by 20-30% depending on the task at hand; they do not double it. ParaSail uses a pico-threading approach, automatically dividing the program into as many elemental operations as possible and threading each one on a different core, unless the programmer blocks parts into sequences. In other words, it assumes that everything can be parallelized unless it is told otherwise.
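The 20-30% figure is consistent with Amdahl's law, which caps the overall speedup when only part of a program can run in parallel. A quick sketch (the 40% parallel fraction below is an illustrative assumption, not a figure from the article):

```python
# Amdahl's law: the overall speedup from n cores when only a fraction p
# of the work can be parallelized. It shows why two cores often yield
# only a 20-30% gain rather than doubling performance.

def amdahl_speedup(p, n):
    """Speedup with parallel fraction p spread over n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# If ~40% of a program parallelizes, two cores give just a 25% speedup:
print(f"{amdahl_speedup(0.40, 2):.2f}x")     # -> 1.25x
# Even with hundreds of cores, the serial 60% caps the gain:
print(f"{amdahl_speedup(0.40, 256):.2f}x")
```

This is exactly the bottleneck a parallel-by-default language attacks: by raising the fraction p that actually runs in parallel, the same cores deliver much more of their theoretical speedup.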

The compiler should be released this September, and it has already attracted interest both in the programmer community and at Intel.

In this decade we will see the number of cores reaching the hundreds. One of the reasons this has not happened so far is that, with present programming languages, the more cores you have the less efficiently they are used, so it does not make sense to increase their number. If ParaSail meets its promises, the situation will change.

What interests me most, however, is that the concept of multicore can be “stretched” to include massively distributed processing. With UBB (UltraBroadBand) and the low latency provided by end-to-end optical connectivity, it may start to make sense to micro-thread (not pico-thread…) computation onto geographically dispersed computers (SETI on steroids…).
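A minimal sketch of that dispatch pattern, using Python's standard concurrent.futures with a local process pool standing in for the geographically dispersed machines; only the fan-out-and-combine shape is the point:

```python
# Minimal sketch of farming independent work units out to parallel workers.
# A local process pool stands in for the geographically dispersed computers
# discussed above; only the dispatch-and-combine pattern is the point.

from concurrent.futures import ProcessPoolExecutor

def work_unit(n):
    """A stand-in for one independent fragment of a larger computation."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    fragments = [10_000, 20_000, 30_000, 40_000]
    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(work_unit, fragments))   # runs in parallel
    print(sum(partials))   # combine the partial results
```

The hard part of doing this over a wide area is everything the sketch hides: the latency of shipping work and results, which is precisely where end-to-end optical connectivity matters.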

Pervasive computing and networking, I am convinced, will deeply change our view of computation. Ambients will become aware and responsive, and this requires distributed computation based both on signal exchange and on edge sensing, just the way it happens in our body, where cells respond to nervous signals and to the chemical environment surrounding them.

I also see a parallel, and support for this vision, in the evolution of networks, where the ones provided by the Operators can be likened to the nervous system and the ones being created bottom-up at the edges (viral networks, sensor networks, ambient communications) can be likened to the local chemical soup conditioning the reaction of the cells.

There is plenty of new research needed in this area, which lies at the crossing point of bio, electronics, cybernetics, autonomics and semantics.

Exploiting the 3rd dimension

Saturday, August 13th, 2011 by Roberto Saracco

Most of our production technology for silicon has been based on two dimensions. We basically print, sometimes we etch, but even when subsequent layers are produced, effectively creating a three-dimensional chip, the approach is still a two-dimensional one.

3D growth on a scaffolding structure

Now researchers at the University of Illinois at Urbana have found a way to create real 3D structures and are building photonic crystals that can be used in photovoltaic solar cell panels and many other applications.

Photonic crystals have been in use since the 1980s, but as two-dimensional structures to control light. Bringing in the third dimension makes it possible to increase that control, managing rays coming in from any direction and thus increasing efficiency, something that is most desirable in solar cells.

Basically, these crystals are made by etching tiny holes in carefully designed patterns, and doing this in the third dimension, using an industrial process to keep costs down, has been impossible so far. Add to this that in several applications you want to use the crystal not just to control light but also to convert light into electricity, or the other way around (as in an LED), and the complexity grows even more.

This is why the achievements of the Urbana researchers are so important.

Rather than taking a crystal and creating the required holes in the appropriate pattern, the researchers created a scaffolding, shown in bronze in the figure on the left, using a sort of marbles of different sizes, and then had the crystal grow inside it. By chemically removing the scaffolding, you are left with the crystal with the desired holes.

The “marbles” are nanospheres stacked one on the other, and the crystal is created by percolating a gas carrying the desired molecules through the scaffolding. The molecules stick in the empty spaces and create the crystal.

Commercialization is probably a few years away, partly because all applications so far have been designed around what is available, that is, 2D crystals, and engineers will need to find ways of using this new technology.

What’s beyond the tablet? The room!

Friday, August 12th, 2011 by Roberto Saracco

I was reading an interview with Microsoft's Chief Research and Strategy Officer, Craig Mundie, on the evolution of the office, and I was struck by his prediction about the future of computing devices. Now the focus is on tablets, but by the end of the decade the room will become our computer.

Any surface is a window into the world of bits

What struck me is not the prediction per se (I started voicing the same vision in 1999) but the fact that Microsoft is approaching the point where they see the PC disappearing into a cloud. Their new Office 365 is clearly a step in this direction.

I posted the nice video from Corning, A Day Made of Glass, some time ago, and there you see exactly this same vision: an environment that becomes a seamless interface to information and services.

According to Mundie:

“We will see a lot more displays in the office, and they will be built into surfaces horizontally and also be on the walls or in the walls. I think that a kind of completely continuous model, where you are using speech, gesture, and touch in a more integrated way, will become more commonplace. There will be a subset of that fixed environment that you will want to take with you, called the portable office, and the evolution of the laptop will be that. And there will be a mobile environment, which is the phone and other devices [including] tablets of certain types.”

This is clearly a vision that extends to any ambient, including the home. There are still significant hurdles in creating an effective “ambient” interface. It is one thing to have a screen on a surface (this is a technology challenge, and we see how to tackle it); it is quite another to transform a complete ambient into a “coherent” interface.

Another point in his interview is the way he refers to the cloud: as an infrastructure and as a commodity. You don't even realize it is there, just as you no longer pay attention to the wires providing energy to your lamps. Now, this is something I resonate with, but it is not something that would create a significant business for the “Manager of the Cloud”. I think that Operators, as they embark on providing Cloud-related business, need to create a portfolio of services that are much more data-related and less infrastructure-related.