Posts Tagged ‘processing’

In search for Cheapium…

Monday, January 27th, 2014 by Roberto Saracco

Performance is great, but unless it is cheap it will remain confined to a niche. The world has changed over the last fifty years because researchers have been able to make their results available for peanuts! We really live in a world where we can get amazing devices at an even more amazingly low cost.

So it comes as no surprise that researchers are looking for even cheaper materials without having to compromise on performance, what researchers at Duke University call the search for Cheapium!

Compound-forming vs non-compound-forming systems. Circles indicate agreement between experiment and computation—green for compound-forming systems, gray for non-compound-forming systems. Yellow circles indicate systems reported in experiment to have disordered phases, for which low-energy compounds were found in this work. Ru-Cr is the only system (yellow square) experimentally reported to include a disordered phase where no low-temperature stable compounds were found. Red squares mark systems for which low-temperature compounds are found in computation but no compounds are reported in experiment. (Adapted from G. Hart et al./Phys. Rev. X)

To do this they are relying on the processing power of a supercomputer to design composite materials with the specific characteristics they are looking for. And, as shown in the figure above, their approach is proving fruitful. There are some compounds that are theoretically possible based on computation but that engineers have not been able to create, the red squares, but in most cases the computation does indeed lead to a compound with the desired characteristics.

In particular, the Duke researchers have been able to identify 37 platinum alloys that can be used in a variety of applications, out of over 40,000 candidate compounds processed by the supercomputer.
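
To give a flavour of what such a computational screening looks like, here is a minimal, hypothetical sketch in Python: the formation_energy() function is a stand-in for the real quantum-mechanical (DFT) calculations run on the supercomputer, and the candidate names are invented.

    import random

    def formation_energy(alloy):
        # Stand-in for the expensive quantum-mechanical (DFT) calculation;
        # we draw a deterministic toy value per composition so the sketch runs.
        random.seed(alloy)
        return random.uniform(-0.5, 0.5)    # hypothetical energy, eV/atom

    def screen(candidates, threshold=0.0):
        # Keep only compositions whose predicted formation energy is negative,
        # i.e. those expected to form a stable compound, sorted by stability.
        scored = [(a, formation_energy(a)) for a in candidates]
        return sorted([(a, e) for a, e in scored if e < threshold], key=lambda p: p[1])

    candidates = [f"Pt-X{i}" for i in range(40000)]     # invented candidate list
    print(len(screen(candidates)), "of", len(candidates), "candidates predicted stable")

The real screening evaluates each candidate with a quantum-mechanical calculation rather than a toy function, which is exactly where the supercomputer's processing power goes.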

What is fascinating to me is the possibility we have to work on bits and then convert them into atoms. And of course working on bits is so much cheaper and faster than working on atoms. Boeing designs its commercial aircraft, starting with the 777, entirely on computers, moving directly from the computer to the manufacturing plant; but here we are seeing the design of materials based on the desired properties, quite a different story and some orders of magnitude more complex. That is why supercomputers are being used to provide the massive processing power needed to simulate interactions among billions of atoms.

Horizon 2020: Data Processing

Sunday, October 23rd, 2011 by Roberto Saracco

A world of data being crunched in many ways and many places

As data become the real raw material of the Information Society they will be used by a variety of players, and there will be forces pushing to share them and others to restrict access. Data will be everywhere and aggregated in different forms. Most of the time processing will happen at each aggregation point. At other times, processing will require data contained in different aggregations, and this requires management of ownership boundaries along with privacy, authentication and much more.

The sheer amount of data demands, in many cases, huge processing capacity; in other cases it will be a matter of coordinating and aggregating several local processing steps. We are heading towards a very complex framework in terms of processing, and processing will be more and more intertwined with networking.

By 2020 we can expect processing to:

❏    Have increased performance 100-fold, using multi-core, massively parallel systems.

❏    Be based on new, widespread distributed and clustered processing architectures (the processing cloud).

❏    Be performed in a variety of objects, as tiny as sensors and tags and as big as supercomputer clusters.

❏    Have seen the flanking of alternative processing paradigms, namely molecular and quantum computing. Whilst it can reasonably be predicted that molecular computing will be used in specific niches (like genomics), it is difficult to make any prediction about quantum computing. If it pans out, issues like new cryptographic systems will have to be addressed.

The cost of processing will continue to decrease in this decade, at the same rate it did in the last four decades. Processing capacity for the mass market will reach a plateau, probably in the first part of the decade, since it exceeds demand. Some mass-market processing needs, however, will continue to put pressure on processing performance, such as the chips for rendering video signals. As video moves in this decade to the 4k standard, higher performance will be required for signal processing in television sets, video cameras and related devices.

Increasing performance, coupled with lower energy demand, will be seen in handheld equipment and sensors. The latter will change some processing architectures (processing is cheaper than transmission in terms of the energy bill). In particular, sensor networks are likely to exploit the sensors' local processing capability to decrease the amount of data transmitted.
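
As a minimal, hypothetical sketch of that trade-off: a node that averages each window of readings locally and transmits only the summary sends far fewer bytes over the radio (names and numbers below are invented for illustration).

    def aggregate_and_send(samples, window=60, send=print):
        # Average each window of raw readings and transmit only the summary,
        # trading cheap local computation for expensive radio transmissions.
        for start in range(0, len(samples), window):
            chunk = samples[start:start + window]
            send(f"avg over {len(chunk)} samples: {sum(chunk) / len(chunk):.2f}")

    # 600 raw readings become 10 transmissions.
    aggregate_and_send([20.0 + 0.01 * i for i in range(600)])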

Also, signal processing in terminals may become much more demanding, particularly towards the end of the decade, once terminals may be asked to employ more sophisticated signal analyses to increase spectrum efficiency. In turn, this will lead to a change in communications protocols and architectures (see 4.4).

Massive distributed processing, where the “cloud” becomes a giant computer, brings to the fore issues of latency, and this in turn may push towards optical network architectures that do not require electronic signal manipulation (passive optical add-drop).

  • Processing and communications impact architectures, and COMSOC should be involved in this.
  • Processing at the network edges displaces intelligence and affects the current network architecture. It can result, as some are claiming, in a transparent network or in diffused network control. The latter may be the case once we consider the network as spanning beyond its present boundaries to include the networks at the edges. The problem with this expansion, of course, is that the ownership domain does not span across these networks.
  • Sensor networks cannot be considered by separating the communications aspects from the processing ones. A unified view is required.
  • The cloud is going to be distributed over the network, over the edge networks, over the terminals (in many cases indistinguishable from edge networks) and over objects. Its processing is coupled with its internal and external communications capabilities (especially when latency is an issue) and should be an important area for COMSOC.

The next decade will eventually see the failure of Moore’s law as applied to silicon. This will create a major earthquake in many industry sectors. It is likely that overall processing power will continue to increase, but such an increase will be based on carbon rather than silicon.

From Moore’s to Koomey’s law

Wednesday, September 14th, 2011 by Roberto Saracco

You should take a look at this article published a few days ago in Technology Review.

The ENIAC computer, mentioned in the TR article, was extremely power hungry

It doesn’t really say anything new, but it shows that as technology progresses, the way we look at it and our way of gauging it change.

Moore’s law has set the pace for the electronics industry for almost 50 years, making it a must to double the processing power (the transistor density on a chip) every 18 months. However, in the last three years the focus has shifted from processing power to energy efficiency.

On the one hand, the processing power delivered by mass-market PCs started to exceed users’ needs, and hence their perception. On the other hand, the tremendous uptake of mobile devices has turned people’s attention to the battery and how long it can last before having to be recharged. Recharging a battery is inconvenient, and running out of juice is even worse.

Hence, manufacturers’ attention has turned to creating low-power chips to decrease the drain on the battery.

Because of this focus on energy saving, researchers have started to look into the history of energy consumption and, as reported in the article, a professor at Stanford, Jonathan Koomey, has found that energy efficiency has followed a similar trend: the energy needed to perform a fixed amount of computation has halved roughly every 18 months.

Actually, this is not surprising, since the increase in density on the chip leads to shorter paths and in turn requires less energy. Still, it is nice to notice. But now comes the interesting part.

As the focus shifts to energy consumption and new architectures are created (like multicore), we are going to see that whilst Moore’s law may start to slow down, Koomey’s law is likely to keep progressing.

In the next few years Moore’s law will keep its validity (possibly with some slowing), but in the next decade it will have to stop (that is, using silicon; the adoption of carbon-based chips could extend it further). The increased pressure to save energy, so that we can continue to increase the performance of our handheld devices and multiply the number of devices off the mains, will push researchers to keep pace with Koomey’s law as they previously did with Moore’s.

An interesting twist, bringing more value to the user. However, we should be aware that overall consumption is likely to keep increasing, since Koomey’s observation (I prefer to call it that) is about a fixed amount of processing and we keep increasing the overall amount of it.
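
As a back-of-the-envelope illustration, assuming the 18-month halving quoted above keeps holding, the energy efficiency of computation would improve roughly a hundredfold over a decade:

    # Projection assuming efficiency doubles (energy per computation halves)
    # every 18 months, as per the Koomey trend quoted above.
    doubling_period_years = 1.5
    for years in (3, 5, 10):
        gain = 2 ** (years / doubling_period_years)
        print(f"after {years:2d} years: ~{gain:.0f}x more computations per joule")
    # after 10 years the gain is ~102x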

3GW vs 30W

Tuesday, September 28th, 2010 by Roberto Saracco

I was at a panel on road-mapping the future of the Digital Society in Brussels at ICT2010,

http://ec.europa.eu/information_society/events/ict/2010/

and I had the pleasure of listening to a fantastic presentation given by Henry Markram, a neuroscientist from the École Polytechnique Fédérale de Lausanne, drawing parallels between the (our) brain and ICT.

One of the points that was raised is that by 2020 we could have computers with the power to simulate a brain (in 2005 we had the power to simulate a single neuron; today we can simulate hundreds of millions of neurons). With 2007 technology that 2020 computer would require 3 GW of power to work. IBM, Cray and SGI are working to develop computation technologies that can achieve that processing power at just 20 MW. That is impressive, a 150-fold decrease in energy consumption. But according to the presentation I heard, our brain only needs some 30 W to crunch information. That is 100 million times less energy than today’s technology, and almost a million times less than what is promised by the technologies we are studying today.
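
The ratios quoted in the talk are easy to check with a little arithmetic (figures as reported in the presentation):

    brain_w      = 30      # watts: the human brain, as quoted
    sim_2007_w   = 3e9     # 3 GW: 2007-technology estimate for a brain-scale machine
    sim_target_w = 20e6    # 20 MW: target of the IBM/Cray/SGI work

    print(sim_2007_w / sim_target_w)   # 150    -> the 150-fold improvement
    print(sim_2007_w / brain_w)        # 1e8    -> 100 million times the brain
    print(sim_target_w / brain_w)      # ~6.7e5 -> still roughly a million times the brain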

An interesting observation made by Henry in the presentation is that the evolution of ICT is leading to performances similar to the brain’s, following an almost linear progression. However, if we manage to really mimic the brain, that would lead to revolutionary progress. In particular, there are some characteristics of the brain’s workings that are not present in today’s ICT, although they are getting more and more desirable and important, such as resilience (a brain can lose up to 50% of its neurons and you hardly notice it), imagery (the brain sees what it is processing and can therefore make smart decisions on what and how to process), storage (the brain stores fragments and reuses what it has already stored, which both saves space and makes information much more robust), and processing (brain computation is a state change, and as such it remembers and builds on all previous computation; it learns).

How likely is it that your stored information is still correct?

Sunday, September 12th, 2010 by Roberto Saracco

Information is stored on magnetic disks or in flash memory as a tiny lump of electrons. The difference between a 0 and a 1 can be as little as 100 electrons. This means that any accidental loss or accumulation of electrons may change the value of your bit. To avoid misinterpretation of values, a sophisticated mechanism (more sophisticated on magnetic disks than in flash memory) checks the information, is able to find out if some bits have flipped, and recovers them. Actually, on average there is 1 faulty bit every 1,000 (and this is quite a lot: in a picture you took with your camera there may be 100,000 “wrong” values!).

This mechanism uses statistical functions to find the bits that went astray, and this process consumes energy and time.
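
Taken at face value, the figures quoted above are mutually consistent: 100,000 wrong values at a raw error rate of one in a thousand corresponds to a file of roughly 12 MB, a plausible photo size.

    raw_error_rate = 1 / 1000          # one faulty bit per thousand, as quoted
    wrong_bits     = 100_000           # "wrong" values in one stored picture
    total_bits     = wrong_bits / raw_error_rate
    print(total_bits / 8 / 1e6, "MB")  # -> 12.5 MB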

The Lyric Semiconductor statistical processor

Now, a US company, Lyric Semiconductor, www.lyricsemiconductor.com, has come up with a new type of processor that can easily perform statistical analyses. It is not based on the classic NAND circuitry but rather on a Bayesian NAND, where the output is based on the probability associated with the inputs.
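
Whatever the actual circuitry looks like, the basic idea of a gate that operates on probabilities rather than on hard 0/1 values can be sketched as follows (a conceptual illustration assuming independent inputs, not Lyric’s actual design):

    def nand(a, b):
        # Classic Boolean NAND: output is 0 only when both inputs are 1.
        return not (a and b)

    def p_nand(p_a, p_b):
        # Probability that NAND(a, b) = 1, given P(a=1) and P(b=1),
        # assuming the two inputs are independent.
        return 1.0 - p_a * p_b

    print(nand(1, 1))        # False
    print(p_nand(0.9, 0.9))  # 0.19 -> the output is probably 0, but not certainly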

With this chip embedded in a hard drive or in a compact flash card, it becomes faster and less power-hungry to analyse the bits and identify those that need repair.

The chip may also be of interest for other applications, such as determining whether an email is legitimate or spam, or providing suggestions based on a profile (and therefore on a probability of interest). However, these application areas are still further down the road. Lyric Semiconductor has just announced the availability of a chip to be used in storage control, whilst for other types of applications it may take three more years to become viable.

The problem is how to program this kind of chip and make it coexist with classic chips.

It is interesting to watch the evolution of non-von Neumann architectures, like quantum and molecular computers, since they may provide some rule-changing scenarios in the future. A future that, anyhow, is not too close. We are probably going to see new architectures beginning to have an impact by the end of this decade, and by that time it remains to be seen whether the evolution of classical architectures will not be such as to further delay the uptake of new ones.

Faster and faster…

Tuesday, September 7th, 2010 by Roberto Saracco

Last February IBM researchers managed to create a graphene transistor (a transistor made of a carbon layer one atom thick) able to switch 100 billion times per second (100 GHz). That was pretty good and made this technology (still on the lab bench) a promising substitute for the bulkier (relatively speaking) and costlier transistors based on gallium arsenide that are used to power mobile devices like our cell phones.

Rendering of a graphene transistor (the real thing would be too small to see...)

Now, researchers at UCLA have created a graphene transistor able to switch at 300 GHz, three times faster, and that just half a year after the IBM achievement!

What is amazing is that Moore’s law keeps its validity, over forty years after Moore made his prediction and after several scientists have said, over the years, that we had reached a limit. Studies, research and ingenuity keep pushing this limit farther away (the ultimate physical limits for Moore’s law are estimated to lie some 400 years from now).

What is also most interesting is that every time a new technology is created, the tag line is not just “a faster one” but also “a cheaper one”, and this is what keeps evolution rolling!

We may expect graphene transistors in our hands probably by the end of this decade but in the meantime present technologies will keep getting better.

Simulating brain on multi-cores: that’s so today, and tomorrow?

Monday, June 14th, 2010 by Antonio Manzalini

Neuroscience is accumulating more and more knowledge about the electrophysiological properties of the brain.

More and more detailed models of neurons and their interconnections lead to high computational complexity when simulating the brain as a network of neurons. For example, one of the most popular techniques in simulations of networks of neurons is compartmental modelling: each neuron’s morphology is represented by a set of cylinders of different lengths and diameters, so-called compartments, which are electrically coupled through axial conductances. These simulations are highly time- (and resource-) consuming.
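
As a deliberately tiny illustration of compartmental modelling (not a realistic simulator: real models use thousands of compartments per neuron, active ion channels and far better integrators), here are two passive compartments coupled by an axial conductance, advanced with a crude Euler step; all parameter values are arbitrary:

    C_M, G_LEAK, E_LEAK = 1.0, 0.1, -65.0   # membrane capacitance, leak conductance, leak reversal
    G_AXIAL = 0.05                          # axial coupling between the two compartments
    DT = 0.1                                # integration step, ms

    def step(v1, v2, i_inject):
        # Advance both compartment voltages by one Euler step of the passive cable equations.
        dv1 = (-G_LEAK * (v1 - E_LEAK) + G_AXIAL * (v2 - v1) + i_inject) / C_M
        dv2 = (-G_LEAK * (v2 - E_LEAK) + G_AXIAL * (v1 - v2)) / C_M
        return v1 + DT * dv1, v2 + DT * dv2

    v1 = v2 = E_LEAK
    for _ in range(1000):                    # 100 ms of simulated time
        v1, v2 = step(v1, v2, i_inject=1.0)  # current injected into compartment 1
    print(round(v1, 2), round(v2, 2))        # compartment 1 depolarises, compartment 2 follows

Multiply this by thousands of compartments per neuron and millions of neurons, and the computational load described above becomes evident.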

 

On the other hand, the evolution of multi-core processors is providing us with several times the previously available computational power. However, it is not that simple: exploiting the potential of multi-cores requires the adaptation of algorithms and models, and parallel programming.

Moreover, splitting and load-balancing algorithms may lead to increased inter-core communication. Inter-core latency is an issue, and memory bandwidth is a limiting factor if all cores share a common front-side bus. One possible development (they say) to overcome this bottleneck is the shift towards NUMA (Non-Uniform Memory Access) multi-core architectures, where several memory controllers are used instead of one central memory controller. Let’s see.

In any case, it should also be clear that current multi-core systems are not mimicking the brain. A brain emulator is something more: it should model the states and functional dynamics of a brain at a relatively fine-grained level of detail, within certain time windows. This is a grand challenge.

Neuroscience claims that the two major organizational principles of the cortex are segregation and integration. These two principles are complementary and interdependent. The two major characteristics of brain processing are rapid extraction of information (elimination of redundancy, efficient coding, maximum information transfer) and coordination of distributed resources to create coherent states: in the brain both problems are solved simultaneously, within a common neural architecture!

This is not the case in current multi-cores… but what about future developments? Any hope of meeting these challenges? The economic impact of brain emulation could have such profound societal consequences that even low-probability events like this merit investigation and discussion.

http://www.youtube.com/watch?v=kRB6Qzx9oXs

Faster and faster … barely enough!

Saturday, January 30th, 2010 by Roberto Saracco

The US Department of Energy and IBM have signed a partnership to build a 20-petaflop machine (that is, a computer crunching 20,000,000,000,000,000 floating-point operations per second) by 2011-2012, and to follow up with an exaflop machine (a 1 followed by 18 zeros), providing the processing power of one billion PCs of today.
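
The orders of magnitude behind that comparison are easy to spell out (taking “a PC of today” as roughly a gigaflop machine, which is what the one-billion-PCs figure implies):

    petaflop = 1e15
    exaflop  = 1e18
    pc_flops = 1e9                    # assumed: ~1 gigaflop per desktop PC

    print(exaflop / pc_flops)         # 1e9 -> one billion PCs
    print(exaflop / (20 * petaflop))  # 50  -> the exaflop machine vs the 20-petaflop one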

This machine will be able to process the exabyte of data expected to be generated every day by the Square Kilometre Array telescope project, www.skatelescope.org. The project includes the development of a new form of solid-state data storage, the “racetrack memory”.

Clearly this is a project at its start, and many obstacles will need to be tackled and solved. Also, it is focussing on highly scientific objectives, and the solutions are likely to be very expensive. Nevertheless, we have learnt that major scientific endeavours generate a fallout of results applicable to the layman’s world. I expect that a project aiming at managing exabytes of data day in, day out will create amazing opportunities for our communities in managing and understanding the data they produce.

In 2008-2009 IBM delivered machines sporting a petaflop of processing power. The short-term target is to multiply that power by 20, and the ten-year target to multiply it by 1,000. This processing power is awesome; however, it is what it takes to process the huge amount of data created by the new telescope (and other physics experiments like the LHC are not joking either). It is definitely too much for our everyday needs, but in this area too we will see tremendous growth in data, and we will surely benefit from more processing power, maybe available on demand in a “cloud” through pervasive, distributed computing. Just think about personalised medicine, where our genome will be used to create the right drug to cure or prevent a disease. What today would take a few months (decoding the genome, analysing its various genes and loci, and creating the right protein) should take only a few hours. Even then, we will still require much less processing power than what is targeted to support the SKA telescope.

Major breakthroughs are needed in power consumption. Today it would require a nuclear plant to power such a computer. Data transfer will also be a major challenge. Moving around an exabyte of data per day is equivalent to all the data moved around the globe through all telecommunications networks in the year 2000 (voice included, of course).

Researchers are looking into stream computing, a technique to analyse and sort data on the fly as they move around the network, storing only what is needed and discarding the rest. Although storage density keeps increasing, a radically different storage technology is needed when you aim at storing exabytes.
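
A minimal sketch of the stream-computing idea, with an invented filtering criterion: records are inspected as they fly by, only the interesting ones are kept, and memory use stays constant because nothing else is retained. (For scale, one exabyte per day is about 1e18 × 8 / 86,400 ≈ 9 × 10^13 bits per second, i.e. on the order of 90 terabits per second sustained.)

    import random

    def interesting(record):
        # Stand-in for whatever on-the-fly test the real pipeline would apply.
        return record["signal"] > 0.9

    def stream_filter(records):
        for record in records:        # records arrive continuously
            if interesting(record):
                yield record          # only these ever reach storage

    # Toy usage: one million synthetic records, only a fraction survive.
    source = ({"signal": random.random()} for _ in range(1_000_000))
    print(sum(1 for _ in stream_filter(source)), "records kept out of 1,000,000")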

A promising option comes from spintronic memories, which are being studied by IBM: http://www.almaden.ibm.com/spinaps/research/sd/?racetrack

For more info on these futuristic computers take a look at:

http://www.computerworld.com.au/article/319128/ska_telescope_provide_billion_pcs_worth_processing

STS: New development of communications

Monday, October 5th, 2009 by Roberto Saracco

Augmented reality and presence are considered by Cisco to be the driving forces in the evolution of telecommunications, and they see themselves as service providers that happen to sell equipment. In the future may lie holographic displays to further enhance the sense of presence. This would also require much more bandwidth.

The present business model for services is gone, and it is likely gone forever. Only 18% of the Apple apps downloaded are paid for (the others are offered for free). Customers are moving beyond the “network-service provider” association and see services as independent of the network (which they no longer see) and of the access, which they increasingly control through their terminal (WiFi, 3G, roaming access…).

For Orange the future will be about making customers’ lives simpler; any service delivered should be seamless from the customer’s point of view. There is no longer such a thing as a single company leading. The world is interconnected: what one company does affects many others. France Telecom has set up research labs around the world (small research groups) that basically interact with the local environment and work to integrate what is coming up in the various areas.

For the president of NanoQuébec, the future will be influenced to a great extent by the pervasiveness of sensors. They will provide data that in principle can influence our lives. The infrastructure is there to bring these distributed data to some place able to make sense of them.

According to the CEO of Creative Commons, Japan, network neutrality and transparency will drive evolution, shifting the focus from the network to open networks. Connectivity will be among data rather than between termination points.

Technology evolution in performance and price (affordability) will steer telecommunications in the next decade:

- storage – as much as desired, 1 TB in the cell phone, 10 TB at home

- processing – low power consumption (enabling increased performance in handhelds) and very low cost with printed electronics (embedding processing, interfaces and radio communications) lead to objects becoming part of the Internet.

- higher screen resolution, 4k and beyond – enabling a sense of presence

- the combination of terminals, image recognition and tagging enables augmented reality

- sensors’ increased flexibility and self-powering through energy scavenging make for ambient awareness

- autonomic systems

- massive databases and statistical data analyses enable new ways to make sense of the world

A low-cost pervasive infrastructure offers low transaction costs to the ecosystem of independent enterprises, the SMEs. This means a wealth of services being developed at the edges of the network.

The network, and the network owners, can do much more than just offer pure low-cost connectivity.

The cost of transactions in service offering and management depends on connectivity cost as well as on other aspects, like authentication, security, billing and the regulatory framework. All of these can be offered by network providers.

But services, as such, are no longer the turf of network providers. iTunes and the iPhone are a case in point. No company that I know of would have been able to deliver 85,000 apps in a year. Are they all sound, interesting apps? Of course not. But they are reaching the long tail, and by doing that they are opening up the marketplace and bringing in more people.

Spectrum efficiency limits have basically been reached:

- densely available bandwidth is needed more than high peak speed

- need for a pervasive optical infrastructure to support 10 times more cells

- this is also going to drive down energy consumption (it does not decrease today’s consumption; it simply does not require the kind of increase that today’s architecture would demand)

- growth of independently managed access cells

And who is qualified to decide whether a service is stupid or makes sense? As operators, we gauge a service based on revenues. If it does not bring in revenues large enough to make margins, we consider that service not worthwhile. People have a completely different gauging system. If they use something, then it makes sense. The more they use it, the more it makes sense.

Connectivity is being commoditised; it suffers from its own success. It is becoming so essential that, like bread, it is considered a right for all, not just for the wealthy or well-to-do.

Companies that have been thriving on the sale of connectivity are suffering and will keep suffering. Making connectivity more valuable is not going to solve the problem; in a way, it is likely to make it worse.

Improving connectivity country-wide requires huge investment, and the returns are not at all in line with it. Improving connectivity in a few spots can make sense from the point of view of a network provider but may not make sense country-wide. Focussed investment is unlikely to create a thriving ecosystem, since the ecosystem thrives in the long tail, not in the wealthy sectors that are all too well commanded by a few big corporations.

This is a challenge for the future, since it takes big investment to provide unlimited communications capability at low cost, and those that may have the funds are not the ones willing to spend them.

Opportunities lie in new revenue sources. These cannot come from the present customer base, which is already paying as much as it can afford and is not looking to pay more but to get more for less.

They come from bridging objects with the web, and from the willingness to pay of companies that want to use those objects as gateways to reach customers. In doing so they see an opportunity to decrease costs in the delivery chain, and that frees up money. Part of it can go into network providers’ coffers.

In the next decade we are going to see more and more objects offered to the market with embedded communications, as is happening today with the Amazon Kindle.

People will take for granted that an object is connected to services and information. They will not know, nor care, how this connectivity is made possible, nor which network provider ensures it.

This embedded connectivity will change the way we look at and use objects. 

More than that, it is going to create a business ecosystem around each type of object, something whose characteristics are very difficult to predict beforehand but that can easily be envisaged in general terms.

Internet 2020 – The Internet with Things: Enabling Factors – Part 1

Sunday, April 5th, 2009 by Roberto Saracco

Embedding processing power and communication capabilities is at the core of the Internet with Things (as it is for the Internet of Things). In general, though, the processing power required on the object is very limited, since it is sufficient to have a unique identity exposed and to work through this identity to provide information and services related to that object. Most objects, indeed, will have just this identity etched or printed on them in the form of an RFID tag. Communication is enabled by the reader, which also provides the required energy. The availability of printed electronics to “print” RFIDs on objects is probably one of the most significant enabling steps in this direction. It is likely to become mass market (e.g. usable in all supermarkets through handheld printers) in the next decade.
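
A hypothetical sketch of the pattern described above, with invented identifiers and services: the object carries only an identity, and everything else is resolved in the network by whoever reads it.

    CATALOGUE = {   # invented example entries; in reality this lives in the network
        "rfid:0451-7788": ("coffee maker", ["user manual", "warranty", "reorder filters"]),
        "rfid:0451-9902": ("paperback book", ["reviews", "author page"]),
    }

    def resolve(tag_id):
        # What a phone or reader would do after reading the tag: look the identity up.
        entry = CATALOGUE.get(tag_id)
        if entry is None:
            return f"{tag_id}: unknown object"
        name, services = entry
        return f"{name}: " + ", ".join(services)

    print(resolve("rfid:0451-7788"))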

It is also likely that electronic identification will overtake bar-code identification in the production and delivery chain, so that any object will be produced with this embedded identification.

Cell phones will be able to read this identification seamlessly.

Another crucial technology is the availability of platforms supporting mash-ups. This is crucial to open up the market and let offers of information and services based on objects bloom.

Access to the information and services associated with an object needs to be a seamless and rewarding experience. This entails reaction times below 0.5 seconds and one-click interaction.

To create a market that stimulates offer creation, the price of engaging in the interaction should be below the user’s perception threshold. In most cases, interaction to access the portfolio of offers should be absolutely free. Connectivity should not be perceived, either in terms of delay or in terms of cost.

The consumption of information and services may of course follow a pay-per-use or subscription scheme, and the price will depend on the value perceived by the user. However, to stimulate the audience, a significant amount of information and some services should be made available for free.