Sunday, September 9, 2007

Volunteer computing using BOINC

Introduction and a Brief History of Volunteer Computing

Scientists have developed accurate mathematical models of the physical universe, and computers programmed with these models can approximate reality at many levels of scale: an atomic nucleus, a protein molecule, the Earth's biosphere, or the entire universe. Using these programs, we can predict the future, validate or disprove theories, and operate "virtual laboratories" that investigate chemical reactions without test-tubes. In general, greater computing power allows a closer approximation of reality, and this has spurred the development of computers that are as fast as possible. One way to speed up a computation is to "parallelize" it, that is, to divide it into pieces that can be worked on by separate processors at the same time.

In the 1990s two important things happened. First, because of Moore's Law, PCs became very fast, as fast as supercomputers only a few years older. Second, the Internet expanded to the consumer market. Suddenly there were millions of fast computers connected by a network, and the idea of using these computers as a parallel supercomputer occurred to many people independently.

In 1999 the SETI@home project was launched, with the goal of detecting radio signals emitted by intelligent civilizations outside Earth. SETI@home acts as a "screensaver", running only when the PC is idle and providing a graphical view of the work being done. SETI@home's appeal extended beyond hobbyists; it attracted millions of participants from all around the world. It inspired a number of other academic projects, as well as several companies that sought to commercialize the public-computing paradigm.

Volunteer Computing Now

BOINC (Berkeley Open Infrastructure for Network Computing) is a software system that makes it easy for scientists to create and operate public-resource computing projects.

It supports diverse applications, including those with large storage or communication requirements. PC owners can participate in multiple BOINC projects, and can specify how their resources are allocated among these projects. It is being used for applications in physics, molecular biology, medicine, chemistry, astronomy, climate dynamics, mathematics, and the study of games. There are currently about 40 BOINC-based projects and about 400,000 volunteer computers performing an average of over 400 TeraFLOPS.

GOALS

BOINC's general goal is to advance the public-resource computing paradigm: to encourage the creation of many projects, and to encourage a large fraction of the world's computer owners to participate in one or more projects. Specific goals include:

Reduce the barriers to entry for public-resource computing.

BOINC allows a research scientist with moderate computer skills to create and operate a large public-resource computing project with about a week of initial work and an hour per week of maintenance. The server for a BOINC-based project can consist of a single machine configured with common open-source software (Linux, Apache, PHP, MySQL, Python).

Share resources among autonomous projects.

BOINC-based projects are autonomous. Projects are not centrally authorized or registered. Each project operates its own servers and stands completely on its own. Nevertheless, PC owners can seamlessly participate in multiple projects, and can assign to each project a resource share determining how scarce resources (such as CPU time and disk space) are divided among projects. If most participants register with multiple projects, then overall resource utilization is improved: while one project is down for repairs, other projects temporarily inherit its computing power. On a particular computer, the CPU might work for one project while the network is transferring files for another.
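As an illustration of how resource shares behave, here is a minimal sketch, not BOINC's actual scheduler, of dividing CPU time among attached projects in proportion to their shares; the project names and share values are invented for the example.

```python
# Hypothetical illustration of BOINC-style resource shares (not the real scheduler).
# Each attached project gets CPU time in proportion to its share; if a project's
# server is unreachable, its share is ignored and the other projects inherit the time.

def allocate_cpu(projects, total_cpu_hours):
    """projects: dict of project name -> (resource_share, is_reachable)."""
    active = {name: share for name, (share, up) in projects.items() if up}
    total_share = sum(active.values())
    return {name: total_cpu_hours * share / total_share
            for name, share in active.items()}

# Example: three projects with shares 100/100/50; one is down for repairs.
projects = {
    "climate": (100, True),
    "seti":    (100, True),
    "biology": (50,  False),   # server offline: the others inherit its time
}
print(allocate_cpu(projects, total_cpu_hours=24.0))
# {'climate': 12.0, 'seti': 12.0}
```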

Support diverse applications.

BOINC accommodates a wide range of applications; it provides flexible and scalable mechanisms for distributing data, and its scheduling algorithms intelligently match requirements with resources. Existing applications in common languages (C, C++, FORTRAN) can run as BOINC applications with little or no modification. An application can consist of several files (e.g. multiple programs and a coordinating script). New versions of applications can be deployed with no participant involvement.

Reward participants.

Public-resource computing projects must provide incentives in order to attract and retain participants. The primary incentive for many participants is credit: a numeric measure of how much computation they have contributed. BOINC provides a credit accounting system that reflects usage of multiple resource types (CPU, network, disk), is common across multiple projects, and is highly resistant to cheating (attempts to gain undeserved credit). BOINC also makes it easy for projects to add visualization graphics to their applications, which can provide screensaver graphics.
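BOINC's actual credit accounting has changed over the years; the toy sketch below only illustrates the general idea of a credit claim derived from benchmarked speed and CPU time, cross-checked against the claims of other hosts that processed the same work unit. The constants and function names here are illustrative assumptions, not BOINC's real formula.

```python
# Toy illustration of credit claiming and cheat-resistant granting (not BOINC's real formula).

def claimed_credit(cpu_seconds, benchmark_gflops, credits_per_gflop_day=100.0):
    # Credit proportional to the estimated floating-point work actually performed.
    gflop_days = benchmark_gflops * cpu_seconds / 86400.0
    return gflop_days * credits_per_gflop_day

def granted_credit(claims):
    # Cheating resistance: grant something like the median of the replicas' claims,
    # so one inflated claim cannot raise the credit everyone receives.
    claims = sorted(claims)
    return claims[len(claims) // 2]

# Three hosts returned the same work unit; one claim is suspiciously high.
claims = [claimed_credit(3600, 2.0), claimed_credit(3500, 2.1), 500.0]
print(round(granted_credit(claims), 2))
```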

DESIGN AND STRUCTURE OF BOINC

BOINC is designed to be a free infrastructure for anyone wishing to start a distributed computing project. BOINC consists of a server system and client software that communicate with each other to distribute, process, and return work units.

Technological innovation

The most recent versions of the BOINC client and server have incorporated BitTorrent file-sharing technology into the application distribution subsystem. The application distribution subsystem is separate from the work-unit distribution subsystem on the server side, but not on the client side. With BitTorrent support fully in place on clients and servers in late 2007, significant savings are expected in the telecommunication costs of the current server user base.

Server structure

A major part of BOINC is the backend server. The server can be run on one or many machines, allowing BOINC to scale easily to projects of any size. BOINC servers run on Linux-based computers and use Apache, PHP, and MySQL as the basis for their web and database systems.

BOINC does no scientific work itself. Rather, it is the infrastructure which downloads distributed applications and input data (work units), manages scheduling of multiple BOINC projects on the same CPU, and provides a user interface to the integrated system.

Scientific computations are run on participants' computers and results are analyzed after they are uploaded from the user PC to a science investigator's database and validated by the backend server. The validation process involves running all tasks on multiple contributor PCs and comparing the results.
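As a sketch of what validation by replication can look like: the real BOINC validator framework is project-specific server code, so the Python below is purely illustrative, with a made-up numerical tolerance and quorum size.

```python
# Illustrative redundancy check: a work unit is sent to several hosts and the
# returned numerical results are compared; agreement within a tolerance is
# required before a result is accepted and credit is granted.

def results_agree(a, b, rel_tol=1e-6):
    return all(abs(x - y) <= rel_tol * max(abs(x), abs(y), 1.0) for x, y in zip(a, b))

def validate(replicas, quorum=2):
    """replicas: list of result vectors from different volunteer hosts.
    Returns the first result that at least `quorum` replicas agree on, else None."""
    for candidate in replicas:
        matching = [r for r in replicas if results_agree(candidate, r)]
        if len(matching) >= quorum:
            return candidate
    return None

replicas = [
    [1.000000, 2.500000],   # host A
    [1.000000, 2.500001],   # host B (tiny floating-point difference)
    [9.999999, 0.000000],   # host C (faulty or malicious)
]
print(validate(replicas))   # -> the agreed result from hosts A and B
```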

Other features provided by these servers are:

  • Homogeneous redundancy: sending the replicas of a work unit only to computers of the same platform (e.g. Windows XP SP2 only), so that their results can be compared directly.
  • Work unit trickling: sending information to the server before the work unit completes.
  • Locality scheduling: sending work units to computers that already have the necessary files, and creating work on demand.
  • Work distribution based on host parameters: a work unit requiring 512 MB of RAM, for example, will only be sent to hosts having at least that much RAM (see the sketch after this list).
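Here is a minimal sketch of host-parameter-based work distribution; the field names are invented for illustration, and BOINC's real scheduler weighs many more factors.

```python
# Illustrative matching of work units to hosts based on host parameters.
# Field names are invented; BOINC's real scheduler is far more elaborate.

def eligible(work_unit, host):
    return (host["ram_mb"] >= work_unit["min_ram_mb"]
            and host["free_disk_gb"] >= work_unit["min_disk_gb"]
            and (work_unit["platform"] is None                       # homogeneous redundancy:
                 or host["platform"] == work_unit["platform"]))      # pin replicas to one platform

work_unit = {"min_ram_mb": 512, "min_disk_gb": 1.0, "platform": "windows_x86"}
hosts = [
    {"name": "laptop",  "ram_mb": 256,  "free_disk_gb": 5.0,  "platform": "windows_x86"},
    {"name": "desktop", "ram_mb": 2048, "free_disk_gb": 20.0, "platform": "windows_x86"},
    {"name": "macmini", "ram_mb": 1024, "free_disk_gb": 10.0, "platform": "darwin_ppc"},
]
print([h["name"] for h in hosts if eligible(work_unit, h)])   # ['desktop']
```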

Client structure

BOINC on the client is structured into a number of separate applications. These intercommunicate using the BOINC remote procedure call (RPC) mechanism.

These component applications are:

  • The program boinc (or boinc.exe) is the core client.

The core client is a process which takes care of communications between the client and the server. The core client also downloads science applications, provides a unified logging mechanism, makes sure science application binaries are up-to-date, and schedules CPU resources between science applications (if several are installed).

Although the core client is capable of downloading new science applications, it does not update itself. BOINC's authors felt that doing so posed an unacceptable security risk, along with all of the other risks that automatic update procedures carry. On Unix, the core client is generally run as a daemon (or occasionally as a cron job). On Windows, BOINC initially was not a Windows service, but an ordinary application. BOINC Client for Windows, versions 5.2.13 and higher, adds an optional "Service Installation" during setup. Depending on how the BOINC client software was installed, it can either run in the background like a daemon, or start when an individual user logs in (and stop when the user logs out). The software version management and work-unit handling provided by the core client greatly simplify the coding of science applications.

  • One or several science applications. Science applications perform the core scientific computation. There is a specific science application for each of the distributed computation projects which use the BOINC framework. Science applications use the BOINC daemon to upload and download work units, and to exchange statistics with the server.

  • boincmgr (or boincmgr.exe), a GUI which communicates with the core client over RPC (remote procedure call). By default a core client only allows connections from the same computer, but it can be configured to allow connections from other computers (optionally using password authentication); this mechanism allows one person to manage a farm of BOINC installations from a single workstation. A drawback of RPC mechanisms is that they are often considered security risks, because they can be a route by which attackers intrude upon targeted computers (even when the client is configured to accept connections only from the same computer).

The GUI is written using the cross-platform WxWidgets toolkit, providing the same user experience on different platforms. Users can connect to BOINC core clients, can instruct those clients to install new science applications, can monitor the progress of ongoing calculations, and can view the BOINC system message logs.
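For a feel of the RPC interface mentioned above, here is a rough sketch of querying a local core client. It assumes the client's XML-over-TCP GUI RPC protocol on its default port 31416, with messages terminated by an 0x03 byte; authentication and error handling are omitted, so treat this as illustrative rather than a complete manager.

```python
# Rough sketch of talking to a local BOINC core client over its GUI RPC port.
# Assumes the default port 31416 and the 0x03-terminated XML message framing;
# authentication (normally required, especially remotely) is omitted for brevity.

import socket

def gui_rpc(request_xml, host="127.0.0.1", port=31416):
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(b"<boinc_gui_rpc_request>" + request_xml + b"</boinc_gui_rpc_request>\x03")
        reply = b""
        while not reply.endswith(b"\x03"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk
    return reply.rstrip(b"\x03").decode()

# Ask the core client for its current state (attached projects, tasks, etc.).
print(gui_rpc(b"<get_state/>"))
```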

  • The BOINC screensaver. This provides a framework whereby science applications can display graphics in the user's screensaver window. BOINC screensavers are coded using the BOINC graphics API, OpenGL, and the GLUT toolkit. Typically BOINC screensavers show animated graphics detailing the work underway, perhaps showing graphs, charts or other data visualisation graphics.

Some science applications do not provide screensaver functionality (or stop providing screensaver images when they are idle). In this circumstance the BOINC screensaver shows a small BOINC logo which bounces around the screen.

On Mac OS X, the program is able to make use of spare processor time while you work, dynamically varying how much processor time BOINC receives based on how intensively the computer is being used.

A BOINC network is similar in structure to a hacker's or spammer's botnet. In BOINC's case, however, it is hoped that the software is installed and operated with the consent of the computer's owner. Since BOINC has features that can render it invisible to the typical user, there is a risk that unauthorized and difficult-to-detect installations may occur. This would aid the accumulation of BOINC credit points by hobbyists who are competing with others for status within the BOINC credit subculture.

PROJECTS USING BOINC FRAMEWORK

The BOINC platform is currently the most popular volunteer-based distributed computing platform. Examples of popular projects include SETI@home, Einstein@Home, Rosetta@home, and ClimatePrediction.net.

Performance of BOINC projects:

  • over 1,021,000 participants
  • over 1,980,000 computers
  • over 550 TeraFLOPS (more than the BlueGene supercomputer)
  • over 12 Petabytes of free disk space
  • SETI@home: 2.7 million years of computer time (2006)

FUTURE OF VOLUNTEER COMPUTING

The majority of the world's computing power is no longer in supercomputer centers and institutional machine rooms. Instead, it is now distributed in the hundreds of millions of personal computers all over the world. This change is critical to scientists whose research requires extreme computing power. The number of Internet-connected PCs is growing rapidly, and is projected to reach 1 billion by 2015. Together, these PCs could provide many PetaFLOPS of computing power. The public-resource approach applies to storage as well as computing. If 100 million computer users each provide 10 Gigabytes of storage, the total (one Exabyte, or 10^18 bytes) would exceed the capacity of any centralized storage system.

REFERENCES

http://boinc.berkeley.edu/

http://en.wikipedia.org/wiki/Boinc

http://en.wikipedia.org/wiki/List_of_distributed_computing_projects

http://www.boincstats.com/

Thursday, July 26, 2007

SINGLE ELECTRON TUNNELING TRANSISTOR

Contributor : Midhun M.K.

The chief problems faced by chip designers concern the size of the chip. According to Moore's Law, the number of transistors on a chip will approximately double every 18 to 24 months. Moore's Law works largely through shrinking transistors, the circuits that carry electrical signals. By shrinking transistors, designers can squeeze more transistors into a chip. However, more transistors mean more electricity and heat compressed into an even smaller space. Furthermore, smaller chips increase performance but also compound the problem of complexity. To address this problem, the single-electron tunneling transistor (SET) was devised: a device that exploits the quantum effect of tunneling to control and measure the movement of single electrons. Experiments have shown that charge does not flow continuously in these devices but in a quantized way.

This paper discusses the principle of operation of the SET, its fabrication and its applications. It also deals with the merits and demerits of the SET compared to the MOSFET. Although it is unlikely that SETs will replace FETs in conventional electronics, they should prove useful in ultra-low-noise analog applications. Moreover, because it is not affected by the same technological limitations as the FET, the SET can approach the quantum limit of sensitivity very closely. It might also be a useful read-out device for a solid-state quantum computer. In the future, when quantum technology replaces current computer technology, the SET will find immense applications.

Single Electron Tunneling transistors (SETs) are three-terminal switching devices that can transfer electrons from source to drain one by one. The structure of SETs is similar to that of FETs. The important difference, however, is that in an SET the channel is separated from source and drain by tunneling junctions, and the role of the channel is played by an "island". The particular advantage of SETs is that they require only one electron to toggle between ON and OFF states, so the transistor generates much less heat and requires less power to move electrons around, a feature that is very important in battery-powered mobile devices such as cell phones. We know that Pentium chips become much too hot and require massive fans to cool them. This wouldn't happen with single-electron transistors, which use much less energy and so can be packed much closer together.
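A one-line piece of standard single-electron physics, not specific to this paper, explains why the charge flow is quantized: tunneling of individual electrons onto the island can only be controlled when the electrostatic cost of adding one electron exceeds the thermal energy,

$$E_C = \frac{e^2}{2C_\Sigma} \gg k_B T,$$

where $C_\Sigma$ is the total capacitance of the island. This is why the island must be made extremely small, and why many SETs are operated at low temperature.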

Reference:

• http://physicsweb.org/articles/world/11/9/7/1
• http://emtech.boulder.nist.gov/div817b/whatwedo/set/set.htm

COLLECTIVE INTELLIGENCE BRICKS

Contributor: Anish Samuel

INTRODUCTION:

Collective Intelligent Bricks (CIB) deals with massive automated storage systems. It is the future of data storage. In the earlier days of computer development, data storage systems were not highly developed. In those days computers were not commonly used, owing to the high degree of technological knowledge and experience one needed in order to deal with the storage system, and the absence of high-density storage systems added to the problem. The situation in the computer field in those early days can be seen from the words of the computer giants:

• 1943 - IBM Chairman Thomas Watson predicts, "There is a world market for maybe five computers." (In 1951 there were 10 computers in the U.S.)

• 1977 -Kenneth Olson, President of Digital Equipment: “There is no reason for anyone to have a computer in their home.”

• 1981 –Bill Gates: “640K ought to be enough for anybody.”


These statements emphasize that it is difficult to predict future technology trends and industry needs. Another good example is the current situation in the IT world: the amazing growth in digital data stored in special storage systems has led to high administration costs and has made storage administrators' work a real problem. Nowadays storage systems contain several terabytes of data, but in the near future, at the current pace of growth, their capacities will increase to petabytes. To lower the cost of administration and to help create easy-to-manage storage systems, vendors are working on intelligent storage bricks. These bricks will consist of off-the-shelf components, thus lowering costs, and will have the intelligence to manage the storage by themselves with no human assistance. The bricks are the future of the IT world; without them, storing and managing data in the near future will be impossible.

Let us now briefly describe the various sections dealt with in this seminar topic.

AUTONOMIC STORAGE

The basic goal of autonomic storage is to significantly reduce the cost of ownership and improve the reliability and ease of use of information technologies. As explained, the main problem of information technology is the cost and ease of administration. Nowadays a storage administrator must have wide knowledge not only of disk technology, but also of several network protocols and architectures (such as TCP/IP and Fibre Channel), file systems and system architecture. This administrator faces various problems: the installation of new storage components and/or systems, the configuration of these components and systems, the reconfiguration and adjustment of the entire system, the upgrading of existing systems and components, and the monitoring and tracking of problems. Another perspective on autonomic storage is the engineering challenge: virtually every aspect of autonomic storage poses significant engineering problems, from the testing and verification of such systems to helping the storage administrator by easing the installation, configuration and monitoring of those systems.
To fulfil the promise of autonomic storage, systems need to become more self-configuring, self-healing and self-protecting, and, during operation, more self-optimizing.

The following contrasts current storage with autonomic storage, concept by concept:

  • Self-configuration. Current storage: corporate data centers have multiple vendors and platforms; installing, configuring and integrating systems is time consuming and error prone. Autonomic storage: automated configuration of components and systems follows high-level policies, and the rest of the system adjusts automatically and seamlessly.
  • Self-optimization. Current storage: systems have hundreds of manually set, nonlinear tuning parameters, and their number increases with each release. Autonomic storage: components and systems continually seek opportunities to improve their own performance and efficiency.
  • Self-healing. Current storage: problem determination in large, complex systems can take a team of programmers weeks. Autonomic storage: the system automatically detects, diagnoses, and repairs localized software and hardware problems.
  • Self-protection. Current storage: detection of and recovery from attacks and cascading failures is manual. Autonomic storage: the system automatically defends against malicious attacks or cascading failures, using early warning to anticipate and prevent system-wide failures.

The idea of moving computing ability to the disk is not new; it was already introduced in the active-disks concept. Nevertheless, autonomic storage is a new approach with far-reaching consequences, and its aspects will set the next storage trends. Obviously, it will take several years until all the challenges of autonomic computing are met, but meanwhile storage systems already incorporate autonomic computing features at several levels. The first level is the component level, in which individual components contain autonomic features. At the next level, homogeneous or heterogeneous systems work together to achieve autonomic behavior. At the third level, heterogeneous systems work together towards a goal specified by the managing authority. An example of the second level is the cooperation of several storage bricks in a collective intelligent storage system.


STORAGE BRICKS

The storage-brick concept bundles the use of off-the-shelf components, such as hard disks, processor, memory and network, together with the autonomic strategy aimed at easing administrative work. Building bricks from these components provides a combination of several disks together with intelligent control and management and network connectivity, while keeping the cost low. The figures illustrate the basic structure of a storage brick.

Storage bricks can be stacked in a rack, creating a storage system with large capacity, as shown in the figure. The adding procedure is very easy, plug and play, without special configuration and without interrupting other bricks or ongoing background work.

Several vendors already provide storage bricks. The bricks are built from 8-12 hard disks, a 200 MIPS (or faster) processor and dual Ethernet ports, and run a proprietary OS; their cost ranges from $10,000/TB to $50,000/TB. These bricks can run various applications, such as SQL databases and mail servers. The table below shows some of the available storage bricks.

  • Snap Appliance (Snap Server): 80 GB – 2.16 TB
  • NetApp (NetApp Server/Filer): 50 GB – 48 TB

Still, these bricks do not fulfill all the aspects of autonomic storage; in fact they are non-intelligent bricks that need administration and supervision. Currently only two vendors supply intelligent bricks: EquaLogic and LeftHand Networks. Both companies supply storage bricks with 2 TB capacity and automatic scaling features. Adding a new brick to a working storage-brick system has no effect on the other bricks, and the only work needed is to plug the brick in. The new brick is automatically recognized by all the other bricks and added to the storage pool. Another self-management feature is the well-known load balancing, done on both disks and network interfaces. Of course, the bricks have other sophisticated features such as replication, snapshots, disaster recovery and fail-over. EquaLogic's self-managing architecture is called Peer Storage; in this architecture not only is adding a brick easy, but so is the management of the entire system, which may contain numerous bricks. The entire management is automated and the administrator does not have to configure and provision the system himself; he just has to describe his needs and the bricks will cooperate to fulfil them.

COLLECTIVE INTELLIGENT BRICKS

To overcome this challenging problem of floor space and to create an autonomic storage brick that minimizes floor-space consumption, IBM launched the IceCube project (now named CIB, Collective Intelligent Bricks). The purpose of this project is to create a highly scalable, 3-dimensional pile of intelligent bricks with self-management features. The use of the 3-dimensional pile, as illustrated in Figure 4.1, enables an extreme reduction of physical size (a tenfold reduction). Because the pile consumes a lot of power, a thermal problem is inevitable; therefore, IBM has used a water-cooling system instead of an air-cooling system. This way even more floor space can be saved and the total power of the system is decreased. Another side effect of the water-cooling system is reduced noise.

IBM's brick consists of twelve hard disks (a total capacity of 1.2 TB), managed by three controllers tied to a strong microprocessor and connected to an Ethernet switch (future implementations will use InfiniBand). A coupler is located on each side of the brick, so that the brick can communicate at a rate of 10 GB/sec with adjoining bricks. The total throughput of a brick is 60 GB/sec, and the total throughput of a cube can rise to several terabits per second, depending on how many of the external-facing couplers are linked to a wire interface. IBM's future goal is to create a cube with up to 700 bricks. These goals are achieved by simple and common concepts, such as RAID and copies, and by intelligent software that automatically moves, copies and spreads data from one brick to another to eliminate hot spots and to enable load balancing. After a new brick is added, the configuration procedure is carried out automatically and the other bricks transfer data to it. Another self-management feature implemented by IBM is the fail-in-place concept: when a brick malfunctions, no repair action is taken and the faulty brick is left in place. All the other bricks learn about the problem and work around the faulty brick. Because the data is scattered among several bricks, the data remains available. Thus, no human action is needed except adding bricks as the system needs more storage. The construction of a Collective Intelligent Brick is shown below.
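The fail-in-place idea described above can be illustrated with a small sketch; the replication scheme, class names and numbers are invented for illustration and are not IBM's actual placement algorithm.

```python
# Illustrative fail-in-place placement: each chunk of data is stored on several
# bricks, so when a brick dies it is simply marked dead and left in the cube;
# surviving replicas keep the data available and new copies are made elsewhere.

class Cube:
    def __init__(self, n_bricks, replicas=3):
        self.replicas = replicas
        self.alive = set(range(n_bricks))
        self.placement = {}            # chunk id -> set of brick ids holding it

    def store(self, chunk):
        # Spread replicas over the least-loaded live bricks (crude load balancing).
        load = {b: sum(b in s for s in self.placement.values()) for b in self.alive}
        targets = sorted(self.alive, key=lambda b: load[b])[: self.replicas]
        self.placement[chunk] = set(targets)

    def brick_failed(self, brick):
        self.alive.discard(brick)      # fail in place: no repair, just stop using it
        for chunk, bricks in self.placement.items():
            bricks.discard(brick)
            if len(bricks) < self.replicas:          # re-replicate onto another live brick
                spare = next(b for b in self.alive if b not in bricks)
                bricks.add(spare)

cube = Cube(n_bricks=8)
for chunk in range(10):
    cube.store(chunk)
cube.brick_failed(3)
print(all(len(b) == 3 and 3 not in b for b in cube.placement.values()))   # True
```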

Saturday, July 21, 2007

COMPUTER VISION FOR INTELLIGENT VEHICLES

Contributor : Cinsu Thomas

ABSTRACT

Vision is the main sense that we use to perceive the structure of the surrounding environment. Because of the large amount of information that an image carries, artificial vision is also an extremely powerful way for autonomous robots to sense their surroundings.

In many indoor applications, such as the navigation of autonomous robots in both structured and unknown settings, vision and active sensors can perform complementary tasks for the recognition of objects, the detection of free space, or checking for specific object characteristics. Recent advances in computational hardware, such as a higher degree of integration, make it possible to have machines that deliver high computational power, with fast networking facilities, at an affordable price. In addition, current cameras include important new features that allow some basic problems to be addressed and solved directly at the sensor level. The resolution of the sensors has been drastically enhanced. In order to decrease acquisition and transfer time, new technological solutions can be found in CMOS cameras, which have important advantages: the pixels can be addressed independently, as in traditional memories, and their integration on the processing chip appears to be straightforward.

The success of computational approaches to perception is demonstrated by the increasing number of autonomous systems that are now being used in structured and controlled industrial environments, and that are now being studied and implemented to work in more complex and unknown settings. In particular, recent years have witnessed increasing interest in the use of vision techniques for perceiving automotive environments, in both highway and urban scenarios, and such systems are expected to become a reality in the coming decades. Besides the obvious advantages of increasing road safety and improving the quality and efficiency of the mobility of people and goods, the integration of intelligent features and autonomous functionalities in vehicles will lead to major economic benefits, such as reductions in fuel consumption and more efficient exploitation of the road network. Furthermore, it is not only the automotive field that is interested in these new technologies; other sectors are as well, each with its own target (industrial vehicles, military systems, mission-critical and unmanned rescue robots).

Thursday, July 19, 2007

Digital Light Processing

Contributor : Nitin S. Madhu

Digital Light Processing, a technology that led to the miniaturisation of projectors, has the potential to replace the ageing film-based projection of movies in theatres.
Although initially developed for projection, it has over the years spawned a new category of small, ultra-portable mobile projectors. It is based around a specialised optical semiconductor chip, the Digital Micromirror Device (DMD).

Being mirror-based, DLP systems use light very efficiently. That helps DLP images achieve outstanding brightness, contrast ratio and black levels. DLP images are also known for their crisp detail, smooth motion, and excellent color accuracy and consistency. Also, DLP-based TVs are probably the most resistant to screen burn-in.

The DLP chip is probably the world's most sophisticated light switch. It contains a rectangular array of up to 2 million hinge-mounted microscopic mirrors; each of these micromirrors measures less than one-fifth the width of a human hair. The 3-chip system found in DLP Cinema projection systems is capable of producing no fewer than 35 trillion colours.
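As a rough sanity check on that figure, assuming (this assumption is ours, not the article's) roughly 15 bits of grey-scale modulation per primary colour across the three chips:

$$2^{15} \times 2^{15} \times 2^{15} = 2^{45} \approx 3.5 \times 10^{13},$$

that is, about 35 trillion distinguishable colours.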

Although initially developed for projection, DLP is now used in telecommunications, scientific instrumentation, volumetric displays, holographic data storage, lithography and medical imaging.

Reference:

http://en.wikipedia.org/wiki/Digital_Light_Processing

http://www.answers.com/topic/dlp?cat=technology

ZIGBEE

Contributor : Veena C.

Introduction:

ZigBee is a wireless protocol that allows small, low-cost devices to quickly transmit small amounts of data, such as temperature readings for thermostats, on/off requests for light switches, or keystrokes for a wireless keyboard. It is a global specification for reliable, cost-effective, low-power wireless applications based on the IEEE 802.15.4 standard for wireless personal area networks (WPANs). ZigBee is targeted at RF applications that require a low data rate, long battery life, and secure networking. ZigBee is a rather new wireless technology that looks to have applications in a variety of fields. Its low-data-rate technology allows devices to communicate with one another with very low power consumption, allowing them to run on simple batteries for several years. ZigBee is targeting various forms of automation, as low-data-rate communication is ideal for sensors, monitors, and the like. Home automation, with its simple networks, is one of the key market areas for ZigBee.

ZigBee is designed for wireless controls and sensors. It could be built into just about anything you have around your home or office, including lights, switches, doors and appliances. These devices can then interact without wires, and you can control them all from a remote control or even your mobile phone. It allows wireless two-way communications between lights and switches, thermostats and furnaces, hotel-room air-conditioners and the front desk, and central command posts. It travels across greater distances and handles many sensors that can be linked to perform different tasks.

ZigBee works well because it aims low. Controls and sensors don't need to send and receive much data, and ZigBee has been designed to transmit slowly: it has a data rate of 250 kbps (kilobits per second). Because ZigBee transmits slowly, it doesn't need much power, so batteries will last up to 10 years. Because ZigBee consumes very little power, a sensor and transmitter that reports whether a door is open or closed, for example, can run for up to five years on a single double-A battery. Also, operators are much happier about adding ZigBee to their phones than faster technologies such as Wi-Fi; therefore, a phone will be able to act as a remote control for all the ZigBee devices it encounters.

ZigBee basically uses digital radios to allow devices to communicate with one another. A typical ZigBee network consists of several types of devices. A network coordinator is a device that sets up the network, is aware of all the nodes within its network, and manages both the information about each node and the information being transmitted and received within the network. Every ZigBee network must contain a network coordinator. Other Full Function Devices (FFDs) may be found in the network; these devices support all of the 802.15.4 functions and can serve as network coordinators, network routers, or as devices that interact with the physical world. The final device type found in these networks is the Reduced Function Device (RFD), which usually serves only as a device that interacts with the physical world.
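A toy model of these device roles may make the structure clearer; this is purely illustrative and is not an implementation of the 802.15.4 or ZigBee protocol stacks, and all names are invented.

```python
# Toy model of ZigBee device roles: one coordinator, full-function devices (FFDs)
# that can route and accept children, and reduced-function devices (RFDs) that are leaves.
# Purely illustrative; real ZigBee networking is defined by the 802.15.4/ZigBee specs.

class Node:
    def __init__(self, name, role):
        assert role in ("coordinator", "ffd", "rfd")
        self.name, self.role, self.parent = name, role, None

class Network:
    def __init__(self, coordinator_name):
        self.coordinator = Node(coordinator_name, "coordinator")
        self.nodes = {coordinator_name: self.coordinator}

    def join(self, name, role, parent_name):
        parent = self.nodes[parent_name]
        # Only the coordinator and FFDs can accept children; RFDs are leaf devices.
        assert parent.role in ("coordinator", "ffd"), "RFDs cannot route or parent"
        node = Node(name, role)
        node.parent = parent
        self.nodes[name] = node
        return node

net = Network("front_desk")                      # network coordinator
net.join("hallway_router", "ffd", "front_desk")  # full-function device, can route
net.join("door_sensor", "rfd", "hallway_router") # reduced-function leaf device
net.join("thermostat", "rfd", "front_desk")
print(sorted(n.name for n in net.nodes.values() if n.role == "rfd"))
```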

References:

http://en.wikipedia.org/wiki/ZigBee

http://seminarsonly.com/Zigbee

http://ZigBee Tutorial-Reports_Com.htm

CELL MICROPROCESSOR

Contributor: Nithin S. Madhu

Cell is shorthand for Cell Broadband Engine Architecture, commonly abbreviated as CBEA.

The Cell microprocessor is the result of a US$400 million joint effort by STI, the formal alliance formed by Sony, Toshiba and IBM, over a period of four years.

Cell is a heterogeneous chip multiprocessor that consists of an IBM 64-bit Power Architecture core augmented with eight specialized co-processors based on a novel single-instruction multiple-data (SIMD) architecture called the Synergistic Processor Unit (SPU), designed for data-intensive processing such as that found in cryptography, media and scientific applications. The system is integrated by a coherent on-chip bus.

The basic architecture of the Cell is described by IBM as a "system on a chip" (SoC).

One Cell working alone has the potential of reaching 256 GFLOPS (floating-point operations per second); a home PC can reach about 6 GFLOPS with a good graphics card.
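That figure can be reconstructed from the statistics listed further below, assuming (as is usually quoted, though not stated in this article) that each SPU performs a 4-wide single-precision fused multiply-add per cycle, i.e. 8 floating-point operations per cycle:

$$8\ \text{SPUs} \times 8\ \frac{\text{flops}}{\text{cycle}} \times 4\ \text{GHz} = 256\ \text{GFLOPS (single precision)}.$$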

The potential processing power of Cell blows away existing processors, even supercomputers.


The Cell architecture is based on the new thinking that is emerging in the world of multiprocessing. The industry focus has shifted from maximizing performance to maximizing performance per watt. This is achieved by putting more than one processor on a single chip and running all of them well below their top speed. Because the transistors are switching less frequently, the processors generate less heat, and since there are at least two hotspots on each chip, the heat is spread more evenly over it and is thus less damaging to the circuitry. The Cell architecture breaks ground in combining a lightweight general-purpose processor with multiple GPU-like coprocessors into a coordinated whole. Software adoption remains a key issue in whether Cell ultimately delivers on its performance potential.
Some Cell statistics:
  • Observed clock speed: > 4 GHz

  • Peak performance (single precision): > 256 GFlops

  • Peak performance (double precision): >26 GFlops

  • Local storage size per SPU: 256KB

  • Area: 221 mm²

  • Technology: 90nm SOI

  • Total number of transistors: 234M

APPLICATION

Cell is optimized for compute-intensive workloads and broadband rich-media applications, including computer entertainment, movies and other forms of digital content.

The first major commercial application of Cell was in Sony's PlayStation 3 game console

Toshiba has announced plans to incorporate Cell in high definition television sets.

IBM announced April 25, 2007 that it will begin integrating its Cell Broadband Engine Architecture microprocessors into the company's line of mainframes

In the fall of 2006 IBM released the QS20 blade server, using two Cell BE processors for tremendous performance in certain applications and reaching a peak of 410 gigaflops per module.

Mercury Computer Systems, Inc. has released blades, conventional rack servers and PCI Express accelerator boards with Cell processors


Reference:

http://en.wikipedia.org/wiki/Cell_microprocessor

http://www.sony.net/SonyInfo/News/Press/200502/05-0208BE/index.html