Sunday, September 9, 2007

Volunteer computing using BOINC

Introduction and A Brief history of Volunteer Computing

Scientists have developed accurate mathematical models of the physical universe, and computers programmed with these models can approximate reality at many levels of scale: an atomic nucleus, a protein molecule, the Earth's biosphere, or the entire universe. Using these programs, we can predict the future, validate or disprove theories, and operate "virtual laboratories" that investigate chemical reactions without test tubes. In general, greater computing power allows a closer approximation of reality. This has spurred the development of computers that are as fast as possible. One way to speed up a computation is to "parallelize" it - to divide it into pieces that can be worked on by separate processors at the same time.

In the 1990s two important things happened. First, because of Moore's Law, PCs became very fast - as fast as supercomputers only a few years older. Second, the Internet expanded to the consumer market. Suddenly there were millions of fast computers, connected by a network. The idea of using these computers as a parallel supercomputer occurred to many people independently.

In 1999 the SETI@home project was launched, with the goal of detecting radio signals emitted by intelligent civilizations outside Earth. SETI@home acts as a "screensaver", running only when the PC is idle, and providing a graphical view of the work being done. SETI@home's appeal extended beyond hobbyists; it attracted millions of participants from all around the world. It inspired a number of other academic projects, as well as several companies that sought to commercialize the public-resource computing paradigm.

Volunteer Computing Now

BOINC (Berkeley Open Infrastructure for Network Computing) is a software system that makes it easy for scientists to create and operate public-resource computing projects. It supports diverse applications, including those with large storage or communication requirements. PC owners can participate in multiple BOINC projects, and can specify how their resources are allocated among these projects. It is being used for applications in physics, molecular biology, medicine, chemistry, astronomy, climate dynamics, mathematics, and the study of games. There are currently about 40 BOINC-based projects and about 400,000 volunteer computers performing an average of over 400 TeraFLOPS.

GOALS

BOINC's general goal is to advance the public-resource computing paradigm: to encourage the creation of many projects, and to encourage a large fraction of the world's computer owners to participate in one or more projects. Specific goals include:

Reduce the barriers of entry to public-resource computing.

BOINC allows a research scientist with moderate computer skills to create and operate a large public-resource computing project with about a week of initial work and an hour per week of maintenance. The server for a BOINC-based project can consist of a single machine configured with common open-source software (Linux, Apache, PHP, MySQL, Python).

Share resources among autonomous projects.

BOINC-based projects are autonomous. Projects are not centrally authorized or registered. Each project operates its own servers and stands completely on its own. Nevertheless, PC owners can seamlessly participate in multiple projects, and can assign to each project a resource share determining how scarce resources (such as CPU time and disk space) are divided among projects. If most participants register with multiple projects, then overall resource utilization is improved: while one project is down for repairs, other projects temporarily inherit its computing power. On a particular computer, the CPU might work for one project while the network is transferring files for another.
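
The resource-share mechanism can be pictured with a small sketch (a simplified illustration in Python, not BOINC's actual scheduler; the project names and share values are invented): each attached project receives a fraction of the machine's resources proportional to its share, and a project that is temporarily unreachable simply drops out of the denominator, so its share flows to the others.

# Simplified sketch of share-based allocation among attached projects.
# Project names and share values are hypothetical; BOINC's real scheduler
# also accounts for debts, deadlines and other factors.

def allocate_cpu(projects, total_cpu_hours):
    """Split available CPU time among reachable projects by resource share."""
    reachable = {name: share for name, (share, up) in projects.items() if up}
    total_share = sum(reachable.values())
    return {name: total_cpu_hours * share / total_share
            for name, share in reachable.items()}

projects = {
    # name: (resource share, currently reachable?)
    "project_a": (100, True),
    "project_b": (50, True),
    "project_c": (50, False),   # down for repairs: its share goes to the others
}

print(allocate_cpu(projects, total_cpu_hours=24.0))
# {'project_a': 16.0, 'project_b': 8.0}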

Support diverse applications.

BOINC accommodates a wide range of applications; it provides flexible and scalable mechanisms for distributing data, and its scheduling algorithms intelligently match requirements with resources. Existing applications in common languages (C, C++, FORTRAN) can run as BOINC applications with little or no modification. An application can consist of several files (e.g. multiple programs and a coordinating script). New versions of applications can be deployed with no participant involvement.

Reward participants.

Public-resource computing projects must provide incentives in order to attract and retain participants. The primary incentive for many participants is credit: a numeric measure of how much computation they have contributed. BOINC provides a credit accounting system that reflects usage of multiple resource types (CPU, network, disk), is common across multiple projects, and is highly resistant to cheating (attempts to gain undeserved credit). BOINC also makes it easy for projects to add visualization graphics to their applications, which can provide screensaver graphics.

DESIGN AND STRUCTURE OF BOINC

BOINC is designed as a free, open framework for anyone wishing to start a distributed computing project. BOINC consists of a server system and client software that communicate with each other to distribute, process, and return work units.

Technological innovation

The most recent versions of the BOINC client and server have incorporated BitTorrent file-sharing technology into the application distribution subsystem. The application distribution subsystem is distinct from the work-unit distribution subsystem on the server side, but not on the client side. With BitTorrent support fully in place in clients and servers by late 2007, substantial savings are expected in the telecommunication costs of the projects currently running servers.

Server structure

A major part of BOINC is the backend server. The server can be run on one or many machines, allowing BOINC to scale easily to projects of any size. BOINC servers run on Linux-based computers and use Apache, PHP, and MySQL as the basis for their web and database systems.

BOINC does no scientific work itself. Rather, it is the infrastructure which downloads distributed applications and input data (work units), manages scheduling of multiple BOINC projects on the same CPU, and provides a user interface to the integrated system.

Scientific computations are run on participants' computers and results are analyzed after they are uploaded from the user PC to a science investigator's database and validated by the backend server. The validation process involves running all tasks on multiple contributor PCs and comparing the results.
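
The redundancy-based validation can be illustrated with a toy sketch (not BOINC's validator code; the quorum size and the exact-equality comparison are simplifying assumptions, since real projects compare numerical results with project-specific tolerances):

from collections import Counter

def validate(results, quorum=3):
    """Grant credit only if at least `quorum` replicas agree on the result.

    `results` maps a host id to the value that host returned for one work unit.
    Returns the canonical result, or None if no quorum has been reached yet.
    """
    if len(results) < quorum:
        return None                      # wait for more replicas
    counts = Counter(results.values())
    best, votes = counts.most_common(1)[0]
    return best if votes >= quorum else None

# One work unit sent to four volunteer hosts; one host returns a bad value.
replies = {"host1": 42.0, "host2": 42.0, "host3": 99.9, "host4": 42.0}
print(validate(replies))   # 42.0 -> hosts that returned it would be credited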

Other features provided by these servers include the following (a small scheduling sketch follows the list):

  • Homogeneous redundancy: sending work units only to computers of the same platform (e.g. Windows XP SP2 only)
  • Work unit trickling: sending information to the server before the work unit completes
  • Locality scheduling: sending work units to computers that already have the necessary files, and creating work on demand
  • Work distribution based on host parameters: work units requiring 512 MB of RAM, for example, will only be sent to hosts with at least that much RAM
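
A rough sketch of how the last two features combine (purely illustrative; the field names are invented and the real scheduler is far more involved):

# Illustrative matching of work units to hosts by memory requirement and by
# locality (preferring hosts that already hold the needed input file).
# All field names are invented for the example.

def eligible_hosts(workunit, hosts):
    """Hosts that satisfy the work unit's hardware requirements."""
    return [h for h in hosts if h["ram_mb"] >= workunit["min_ram_mb"]]

def pick_host(workunit, hosts):
    """Prefer an eligible host that already has the input file (locality)."""
    candidates = eligible_hosts(workunit, hosts)
    local = [h for h in candidates if workunit["input_file"] in h["files"]]
    pool = local or candidates
    return pool[0]["name"] if pool else None

hosts = [
    {"name": "laptop",  "ram_mb": 256,  "files": {"chunk_17.dat"}},
    {"name": "desktop", "ram_mb": 1024, "files": set()},
    {"name": "server",  "ram_mb": 2048, "files": {"chunk_17.dat"}},
]
wu = {"min_ram_mb": 512, "input_file": "chunk_17.dat"}
print(pick_host(wu, hosts))   # 'server': enough RAM and already has the file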

Client structure

BOINC on the client is structured into a number of separate applications. These intercommunicate using the BOINC remote procedure call (RPC) mechanism.

These component applications are:

  • The program boinc (or boinc.exe) is the core client.

The core client is a process which takes care of communications between the client and the server. The core client also downloads science applications, provides a unified logging mechanism, makes sure science application binaries are up-to-date, and schedules CPU resources between science applications (if several are installed).

Although the core client is capable of downloading new science applications, it does not update itself. BOINC's authors felt that doing so posed an unacceptable security risk, along with all of the other risks inherent in automatic update procedures. On Unix, the core client is generally run as a daemon (or occasionally as a cron job). On Windows, BOINC was initially an ordinary application rather than a Windows service; versions 5.2.13 and higher of the Windows client add an optional "Service Installation" during setup. Depending on how the BOINC client software was installed, it can either run in the background like a daemon, or start when an individual user logs in (and stop when the user logs out). The software version management and work-unit handling provided by the core client greatly simplify the coding of science applications.

  • One or several science applications. Science applications perform the core scientific computation. There is a specific science application for each of the distributed computation projects which use the BOINC framework. Science applications use the BOINC daemon to upload and download work units, and to exchange statistics with the server.

  • boincmgr (or boincmgr.exe), a GUI which communicates with the core client over RPC (remote procedure call). By default a core client only allows connections from the same computer, but it can be configured to allow connections from other computers (optionally using password authentication); this mechanism allows one person to manage a farm of BOINC installations from a single workstation. A drawback of RPC mechanisms is that they are often considered security risks, because they can provide a route for attackers to intrude upon targeted computers (even when the client is configured to accept connections only from the same machine).

The GUI is written using the cross-platform WxWidgets toolkit, providing the same user experience on different platforms. Users can connect to BOINC core clients, can instruct those clients to install new science applications, can monitor the progress of ongoing calculations, and can view the BOINC system message logs.

  • The BOINC screensaver. This provides a framework whereby science applications can display graphics in the user's screensaver window. BOINC screensavers are coded using the BOINC graphics API, OpenGL, and the GLUT toolkit. Typically BOINC screensavers show animated graphics detailing the work underway, perhaps showing graphs or charts or other data visualisation graphics.

Some science applications do not provide screensaver functionality (or stop providing screensaver images when they are idle). In this circumstance the BOINC screensaver shows a small BOINC logo which bounces around the screen.

In Mac OS X, the program can dynamically use spare processor capacity while you work, varying how much processor time BOINC receives based on how intensively the computer is being used.

A BOINC network is superficially similar to a hacker's or spammer's botnet. In BOINC's case, however, the intent is that the software is installed and operated with the consent of the computer's owner. Since BOINC has features that can render it invisible to the typical user, there is a risk of unauthorized and difficult-to-detect installations, which would aid the accumulation of BOINC credit by hobbyists competing with others for status within the BOINC credit subculture.

PROJECTS USING BOINC FRAMEWORK

The BOINC platform is currently the most popular volunteer-based distributed computing platform; SETI@home is the best-known example, and a fuller list of projects appears in the references below.

Performance of BOINC projects:

  • over 1,021,000 participants
  • over 1,980,000 computers
  • over 550 TeraFLOPS (more than the BlueGene supercomputer)
  • over 12 Petabytes of free disk space
  • SETI@home: 2.7 million years of computer time (2006)

FUTURE OF VOLUNTEER COMPUTING

The majority of the world's computing power is no longer in supercomputer centers and institutional machine rooms. Instead, it is now distributed among the hundreds of millions of personal computers all over the world. This change is critical to scientists whose research requires extreme computing power. The number of Internet-connected PCs is growing rapidly, and is projected to reach 1 billion by 2015. Together, these PCs could provide many PetaFLOPS of computing power. The public-resource approach applies to storage as well as computing. If 100 million computer users each provide 10 Gigabytes of storage, the total (one Exabyte, or 10^18 bytes) would exceed the capacity of any centralized storage system.
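
The storage figure is a quick back-of-the-envelope calculation:

# Check of the storage claim above.
users = 100_000_000          # 100 million participants
per_user = 10 * 10**9        # 10 gigabytes each, in bytes
total = users * per_user
print(total)                 # 1000000000000000000 bytes
print(total == 10**18)       # True: one exabyte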

REFERENCES

http://boinc.berkeley.edu/

http://en.wikipedia.org/wiki/Boinc

http://en.wikipedia.org/wiki/List_of_distributed_computing_projects

http://www.boincstats.com/

Thursday, July 26, 2007

SINGLE ELECTRON TUNNELING TRANSISTOR

Contributor : Midhun M.K.

The chief problems faced by chip designers concern the size of the chip. According to Moore's Law, the number of transistors on a chip will approximately double every 18 to 24 months. Moore's Law works largely through shrinking transistors, the circuits that carry electrical signals. By shrinking transistors, designers can squeeze more transistors into a chip. However, more transistors mean more electricity and heat compressed into an even smaller space. Furthermore, smaller chips increase performance but also compound the problem of complexity. To solve this problem, the single-electron tunneling transistor (SET), a device that exploits the quantum effect of tunneling to control and measure the movement of single electrons, was devised. Experiments have shown that charge does not flow continuously in these devices but in a quantized way.

This paper discusses the principle of operation of the SET, its fabrication and its applications. It also deals with the merits and demerits of the SET compared to the MOSFET. Although it is unlikely that SETs will replace FETs in conventional electronics, they should prove useful in ultra-low-noise analog applications. Moreover, because it is not affected by the same technological limitations as the FET, the SET can closely approach the quantum limit of sensitivity. It might also be a useful read-out device for a solid-state quantum computer. In future, when quantum technology replaces current computer technology, the SET will find immense applications.

Single Electron Tunneling transistors (SETs) are three-terminal switching devices that can transfer electrons from source to drain one by one. The structure of SETs is similar to that of FETs. The important difference, however, is that in an SET the channel is separated from source and drain by tunneling junctions, and the role of the channel is played by an "island". The particular advantage of SETs is that they require only one electron to toggle between ON and OFF states, so the transistor generates much less heat and requires less power to move the electrons around - a feature very important in battery-powered mobile devices, such as cell phones. We know that Pentium chips become much too hot and require massive fans to cool them. This wouldn't happen with a single-electron transistor, which uses much less energy, so such transistors can be packed much closer together.
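
The quantized charge flow mentioned above comes from the Coulomb blockade: an extra electron can tunnel onto the island only when the charging energy e^2/2C dominates the thermal energy kT. A quick numerical check in Python (the 1 aF island capacitance is an assumed, illustrative value, not a figure from the text):

# Coulomb blockade condition for a single-electron transistor:
# the charging energy e^2 / (2C) must be much larger than k*T.
e = 1.602e-19        # electron charge, C
k = 1.381e-23        # Boltzmann constant, J/K
C = 1e-18            # assumed island capacitance of 1 aF (illustrative)
T = 300              # room temperature, K

E_charging = e**2 / (2 * C)            # ~1.3e-20 J, i.e. ~0.08 eV
E_thermal = k * T                      # ~4.1e-21 J, i.e. ~0.026 eV
print(E_charging / e, E_thermal / e)   # energies expressed in eV
print(E_charging / E_thermal)          # only ~3x at room temperature, so a
                                       # practical SET needs an even smaller
                                       # island or cryogenic operation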

Reference:

• http://physicsweb.org/articles/world/11/9/7/1
• http://emtech.boulder.nist.gov/div817b/whatwedo/set/set.htm

COLLECTIVE INTELLIGENCE BRICKS

Contributor: Anish Samuel

INTRODUCTION:

Collective Intelligent Bricks (CIB) deals with massive automated storage systems. It is the future of data storage systems. In the early days of computer development, data storage systems were not highly developed. In those days computers were not commonly used, owing to the technological knowledge and experience one had to have in order to deal with the storage system. The absence of high-density storage systems also added to the problem. The situation in the computer field in those days can be seen from the words of the computing giants:

• 1943 - IBM Chairman Thomas Watson predicts, "There is a world market for maybe five computers." (In 1951 there were 10 computers in the U.S.)

• 1977 - Kenneth Olson, President of Digital Equipment: "There is no reason for anyone to have a computer in their home."

• 1981 - Bill Gates: "640K ought to be enough for anybody."


These statements emphasize that it is difficult to predict future technology trends and industry needs. Another good example is the current situation in the IT world: the amazing growth in digital data stored in special storage systems has led to high administration costs and a real burden on storage administrators. Nowadays storage systems contain several terabytes of data, but in the near future, at the current pace of growth, their capacities will increase to petabytes. To lower the cost of administration and to help create easy-to-manage storage systems, vendors are working on intelligent storage bricks. These bricks will consist of off-the-shelf components, thus lowering costs, and will have the intelligence to manage the storage by themselves with no human assistance. The bricks are the future of the IT world; without them, storing and managing data in the near future will be impossible.

Let us now briefly describe the various sections dealt with in this seminar topic.

AUTONOMIC STORAGE

The basic goal of autonomic storage is to significantly improve the cost of ownership, reliability, and ease of use of information technologies. As explained, the main problem of information technology is the cost and ease of administration. Nowadays a storage administrator must have wide knowledge not only of disk technology, but also of several network protocols and architectures (like TCP/IP and Fibre Channel), file systems and system architecture. This administrator faces various problems: installation of new storage components and/or systems, the configuration of these components and systems, the reconfiguration and adjustment of the entire system, upgrading of existing systems and components, and monitoring and tracking of problems. Another perspective on autonomic storage is the engineering challenge. Virtually every aspect of autonomic storage offers significant engineering challenges: testing and verification of such systems, and helping the storage administrator by easing installation, configuration and monitoring of those systems.
To achieve the promises of autonomic storage systems, systems need to become more self-configuring, self-healing and self-protecting, and during operation, more self-optimizing.

Concept | Current Storage | Autonomic Storage
Self-configuration | Corporate data centers have multiple vendors and platforms. Installing, configuring and integrating systems is time consuming and error prone. | Automated configuration of components and systems follows high-level policies. The rest of the system adjusts automatically and seamlessly.
Self-optimization | Systems have hundreds of manually set, nonlinear tuning parameters, and their number increases with each release. | Components and systems continually seek opportunities to improve their own performance and efficiency.
Self-healing | Problem determination in large, complex systems can take a team of programmers weeks. | The system automatically detects, diagnoses, and repairs localized software and hardware problems.
Self-protection | Detection of and recovery from attacks and cascading failures is manual. | The system automatically defends against malicious attacks or cascading failures. It uses early warning to anticipate and prevent system-wide failures.

The idea of moving computing ability to the disk is not new; it was already introduced in the active-disks concept. Nevertheless, autonomic storage is a new approach with far-reaching consequences, and its aspects will shape the next storage trends. Obviously, it will take several years until all the challenges of autonomic computing are met, but meanwhile storage systems already incorporate autonomic computing features at several levels. The first level is the component level, in which components contain features that are autonomic. At the next level, homogeneous or heterogeneous systems work together to achieve autonomic behavior. At the third level, heterogeneous systems work together towards a goal specified by the managing authority. An example of the second level is the cooperation of several storage bricks in a collective intelligent storage system.


STORAGE BRICKS

The storage-brick concept bundles the use of off-the-shelf components, such as hard disks, processors, memory and network interfaces, together with the autonomic strategy aimed at easing administrative work. Building bricks from these components provides a combination of several disks together with intelligent control and management and network connectivity, while keeping the cost low. The figures illustrate the basic structure of a storage brick.

Storage bricks can be stacked in a rack, creating a storage system with large capacity, as shown in the figure. The adding procedure is very easy (plug and play), without special configuration and without interrupting other bricks or ongoing background work.

Several vendors already provide storage bricks. These bricks are built from 8-12 hard disks, a 200 (or more) MIPS processor, dual Ethernet ports and a proprietary OS; their cost ranges from $10,000/TB to $50,000/TB. These bricks can run various applications, such as SQL databases and mail servers. The table shows the available storage bricks.

Company | Product Name | Capacity
Snap Appliance | Snap Server | 80 GB – 2.16 TB
NetApp | NetApp Server/Filer | 50 GB – 48 TB

Still, these bricks do not fulfill all autonomic storage aspects; in fact they are non-intelligent bricks that need administration and supervision. Currently only two vendors supply intelligent bricks: EquaLogic and LeftHand Networks. Both companies supply storage bricks with 2 TB capacity and automatic scaling features. Adding a new brick to a working storage-brick system has no effect on other bricks, and the only work needed is to plug the brick in. The new brick will be automatically recognized by all the other bricks and added to the storage pool. Another self-management feature is the well-known load balancing done on both disks and network interfaces. Of course, the bricks have other sophisticated features such as replication, snapshots, disaster recovery and fail-over. EquaLogic's self-managing architecture is called Peer Storage; in this architecture not only is adding a brick easy, but managing the entire system, which can contain numerous bricks, is easy as well. The entire management is automated; the administrator does not have to configure and provision the system himself, he just has to describe his needs and the bricks will work together to meet them.

COLLECTIVE INTELLIGENT BRICKS

To overcome the challenging problem of floor space and to create an autonomic storage brick that minimizes floor-space consumption, IBM launched the IceCube project (now named CIB, Collective Intelligent Bricks). The purpose of this project is to create a highly scalable, 3-dimensional pile of intelligent bricks with self-management features. The use of the 3-dimensional pile, as illustrated in Figure 4.1, enables an extreme (tenfold) reduction of physical size. Because the pile consumes a lot of power, a thermal problem is inevitable; therefore, IBM has used a water-cooling system instead of an air-cooling system. This way even more floor space can be saved and the total power of the system is decreased. Another side effect of the water-cooling system is reduced noise.

IBM's brick consists of twelve hard disks (total capacity of 1.2 TB), managed by three controllers tied to a strong microprocessor and connected to an Ethernet switch (future implementations will use InfiniBand). A coupler is located on each side of the brick, so the brick can communicate with adjoining bricks at a rate of 10 GB/sec. The total throughput of a brick is 60 GB/sec, and the total throughput of a cube can rise to several terabits per second, depending on how many of the externally facing couplers are linked to a wire interface. IBM's future goal is to create a cube with up to 700 bricks. These goals are achieved through simple and common concepts such as RAID and copies, and through intelligent software that automatically moves, copies and spreads data from one brick to another to eliminate hot spots and to enable load balancing. After a new brick is added, the configuration procedure runs automatically and other bricks transfer data to it. Another self-management feature implemented by IBM is the fail-in-place concept: when a brick malfunctions, no repair action is taken and the faulty brick is left in place. All other bricks learn of the problem and work around the faulty brick. Because the data is scattered among several bricks, the data remains available. Thus, no human action is needed, except for adding bricks as the system needs more storage. The construction of a Collective Intelligent Brick is shown below.
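
The fail-in-place idea can be sketched with a toy model (the names and the replication factor are invented; the real IceCube software is of course far more elaborate): each piece of data is copied onto several bricks, so a dead brick can be left in place while every piece stays readable from a surviving copy.

import random

def place_replicas(objects, bricks, copies=3):
    """Scatter each object onto `copies` distinct bricks (toy placement)."""
    return {obj: random.sample(bricks, copies) for obj in objects}

def readable(obj, placement, failed):
    """Fail-in-place: an object stays available while any replica brick lives."""
    return any(b not in failed for b in placement[obj])

bricks = ["brick%d" % i for i in range(8)]
placement = place_replicas(["blockA", "blockB", "blockC"], bricks, copies=3)
failed = {"brick3"}                      # faulty brick, left in place unrepaired
print(all(readable(o, placement, failed) for o in placement))   # True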

Saturday, July 21, 2007

COMPUTER VISION FOR INTELLIGENT VEHICLES

Contributor : Cinsu Thomas

ABSTRACT

Vision is the main sense that we use to perceive the structure of the surrounding environment. Due to the large amount of information that an image carries, artificial vision is an extremely powerful way for autonomous robots, too, to sense their surroundings.

In many indoor applications, such as the navigation of autonomous robots in both structured and unknown settings, vision and active sensors can perform complementary tasks for the recognition of objects, the detection of free space, or checks for specific object characteristics. Recent advances in computational hardware, such as a higher degree of integration, allow machines that deliver high computational power, with fast networking facilities, at an affordable price. In addition to this, current cameras include important new features that allow some basic problems to be addressed and solved directly at the sensor level. The resolution of the sensors has been drastically enhanced. To decrease acquisition and transfer time, new technological solutions can be found in CMOS cameras, with important advantages: the pixels can be addressed independently, as in traditional memories, and their integration on the processing chip appears straightforward.

The success of computational approaches to perception is demonstrated by the increasing number of autonomous systems that are now being used in structured and controlled industrial environments, and that are now being studied and implemented to work in more complex and unknown settings. In particular, recent years have witnessed an increasing interest in the use of vision techniques for perceiving automotive environments, in both highway and urban scenarios, and such systems are expected to become a reality in the coming decades. Besides the obvious advantages of increasing road safety and improving the quality and efficiency of mobility for people and goods, the integration of intelligent features and autonomous functionalities on vehicles will lead to major economic benefits, such as reductions in fuel consumption and more efficient exploitation of the road network. Furthermore, not only the automotive field is interested in these new technologies, but other sectors as well, each with its own target (industrial vehicles, military systems, mission-critical and unmanned rescue robots).

Thursday, July 19, 2007

Digital Light Processing

Contributor : Nitin S. Madhu

Digital Light Processing, a technology that led to the miniaturisation of projectors, has the potential to replace the ageing film-based projection of movies in theatres.
Although initially developed for projection, it has over the years spawned a new category of small, ultra-portable mobile projectors. It is based around a specialised optical semiconductor chip, the Digital Micromirror Device (DMD).

Being mirror-based, DLP systems use light very efficiently. That helps DLP images achieve outstanding brightness, contrast ratio and black levels. DLP images are also known for their crisp detail, smooth motion, and excellent color accuracy and consistency. Also, DLP-based TVs are probably the most resistant to screen burn-in.

The DLP chip is probably the world's most sophisticated light switch. It contains a rectangular array of up to 2 million hinge-mounted microscopic mirrors; each of these micro mirrors measures less than one-fifth the width of a human hair. The 3-chip system found in DLP Cinema projection systems is capable of producing no fewer than 35 trillion colours.

Although initially developed for projection processes, DLP is now used in telecommunications, scientific instrumentation, volumetric displays, holographic data storages, lithography and medical imaging.

Reference:

http://en.wikipedia.org/wiki/Digital_Light_Processing

http://www.answers.com/topic/dlp?cat=technology

ZIGBEE

Contributor : Veena C.

Introduction:

ZigBee is a wireless protocol that allows small, low-cost devices to quickly transmit small amounts of data, like temperature readings for thermostats, on/off requests for light switches, or keystrokes for a wireless keyboard. It is a global specification for reliable, cost-effective, low-power wireless applications based on the IEEE 802.15.4 standard for wireless personal area networks (WPANs). ZigBee is targeted at RF applications that require a low data rate, long battery life, and secure networking. ZigBee is a rather new wireless technology that looks to have applications in a variety of fields. Its low-data-rate technology allows devices to communicate with one another with very low power consumption, allowing the devices to run on simple batteries for several years. ZigBee is targeting various forms of automation, as the low-data-rate communication is ideal for sensors, monitors, and the like. Home automation is one of the key market areas for ZigBee.

ZigBee is designed for wireless controls and sensors. It could be built into just about anything you have around your home or office, including lights, switches, doors and appliances. These devices can then interact without wires, and you can control them all from a remote control or even your mobile phone. It allows wireless two-way communications between lights and switches, thermostats and furnaces, hotel-room air-conditioners and the front desk, and central command posts. It travels across greater distances and handles many sensors that can be linked to perform different tasks.

ZigBee works well because it aims low. Controls and sensors don't need to send and receive much data. ZigBee has been designed to transmit slowly; it has a data rate of 250 kbps (kilobits per second). Because ZigBee transmits slowly, it doesn't need much power, so batteries will last up to 10 years. Because ZigBee consumes very little power, a sensor and transmitter that reports whether a door is open or closed, for example, can run for up to five years on a single double-A battery. Also, operators are much happier about adding ZigBee to their phones than faster technologies such as Wi-Fi; therefore, the phone will be able to act as a remote control for all the ZigBee devices it encounters.

ZigBee basically uses digital radios to allow devices to communicate with one another. A typical ZigBee network consists of several types of devices. A network coordinator is a device that sets up the network, is aware of all the nodes within its network, and manages both the information about each node and the information that is being transmitted or received within the network. Every ZigBee network must contain a network coordinator. Other Full Function Devices (FFDs) may be found in the network, and these devices support all of the 802.15.4 functions. They can serve as network coordinators, network routers, or as devices that interact with the physical world. The final device found in these networks is the Reduced Function Device (RFD), which usually only serves as a device that interacts with the physical world.
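
A minimal way to picture these device roles (an illustrative data model only, not the 802.15.4 protocol itself; all names are made up):

from dataclasses import dataclass

# Toy model of the three ZigBee device types described above.
@dataclass
class Device:
    name: str
    role: str          # "coordinator", "router" (both FFDs) or "end_device" (RFD)

def valid_network(devices):
    """Every ZigBee network needs exactly one coordinator."""
    return sum(d.role == "coordinator" for d in devices) == 1

network = [
    Device("hub", "coordinator"),          # FFD that set up the network
    Device("hall_light", "router"),        # FFD: can also relay traffic
    Device("door_sensor", "end_device"),   # RFD: interacts with the physical world
]
print(valid_network(network))   # True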

References:

http://en.wikipedia.org/wiki/ZigBee

http://seminarsonly.com/Zigbee

http://ZigBee Tutorial-Reports_Com.htm

CELL MICROPROCESSOR

Contributor: Nitin S. Madhu

Cell is shorthand for Cell Broadband Engine Architecture commonly abbreviated as CBEA.

Cell microprocessor is the result of a US $400 million joint effort by STI, the formal alliance formed by Sony, Toshiba and IBM, over a period of four years.

Cell is a heterogeneous chip multiprocessor that consists of an IBM 64-bit Power Architecture core, augmented with eight specialized co-processors based on a novel single-instruction multiple-data (SIMD) architecture called Synergistic Processor Unit (SPU), which is for data-intensive processing, like that found in cryptography, media and scientific applications. The system is integrated by a coherent on-chip bus.

The basic architecture of the Cell is described by IBM as a "system on a chip" (SoC)

One Cell working alone has the potential of reaching 256 GFLOPS (floating-point operations per second); a home PC can reach around 6 GFLOPS with a good graphics card.

The potential processing power of the Cell blows away existing processors, even supercomputers.


The Cell architecture is based on the new thinking that is emerging in the world of multiprocessing. The industry focus has shifted from maximizing performance to maximizing performance per watt. This is achieved by putting more than one processor on a single chip and running all of them well below their top speed. Because the transistors are switching less frequently, the processors generate less heat, and since there are at least two hotspots on each chip, the heat is spread more evenly over it and thus is less damaging to the circuitry. The Cell architecture breaks ground in combining a light-weight general-purpose processor with multiple GPU-like coprocessors into a coordinated whole. Software adoption remains a key issue in whether Cell ultimately delivers on its performance potential.
Some Cell statistics:
  • Observed clock speed: > 4 GHz

  • Peak performance (single precision): > 256 GFlops

  • Peak performance (double precision): >26 GFlops

  • Local storage size per SPU: 256KB

  • Area: 221 mm²

  • Technology: 90nm SOI

  • Total number of transistors: 234M

APPLICATION

Cell is optimized for compute-intensive workloads and broadband rich-media applications, including computer entertainment, movies and other forms of digital content.

The first major commercial application of Cell was in Sony's PlayStation 3 game console

Toshiba has announced plans to incorporate Cell in high definition television sets.

IBM announced April 25, 2007 that it will begin integrating its Cell Broadband Engine Architecture microprocessors into the company's line of mainframes

In the fall of 2006 IBM released the QS20 blade server using double Cell BE processors for tremendous performance in certain applications, reaching a peak of 410 gigaflops per module

Mercury Computer Systems, Inc. has released blades, conventional rack servers and PCI Express accelerator boards with Cell processors


Reference:

http://en.wikipedia.org/wiki/Cell_microprocessor

http://www.sony.net/SonyInfo/News/Press/200502/05-0208BE/index.html

Sunday, July 15, 2007

iMouse

INTRODUCTION
iMouse is an integrated mobile surveillance and wireless sensor system. Wireless sensor networks (WSNs) provide an inexpensive and convenient way to monitor physical environments. Traditional surveillance systems typically collect a large volume of video from wallboard cameras, which requires huge computation or manpower to analyze. Incorporating the environment-sensing capability of wireless sensor networks into video-based surveillance systems provides advanced services at a lower cost than traditional systems. iMouse's integrated mobile surveillance and easy-to-deploy wireless sensor system uses static and mobile wireless sensors to detect and then analyze unusual events in the environment. The iMouse system consists of a large number of inexpensive static sensors and a small number of more expensive mobile sensors. The former monitor the environment, while the latter can move to certain locations and gather more detailed data. The iMouse system is a mobile, context-aware surveillance system.
The three main components of the iMouse system architecture are (1) the static sensors, (2) the mobile sensors and (3) an external server. The system is set up so that the user can issue commands to the network through the server, at which point the static sensors monitor the environment and report events. When notified of an unusual event or change in behavior, the server notifies the user and dispatches the mobile sensors to move to the emergency sites, collect data and report back.
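
The event-driven interaction between the three components can be sketched as follows (a simplified illustration; the sensor names, readings and threshold are invented):

# Toy sketch of the iMouse control flow: static sensors report readings,
# the server flags unusual events and dispatches mobile sensors to the sites.
THRESHOLD = 60.0   # e.g. an abnormally high temperature reading (invented)

def detect_events(readings):
    """Static sensors / server: return locations whose reading looks unusual."""
    return [loc for loc, value in readings.items() if value > THRESHOLD]

def dispatch(events, mobile_sensors):
    """Server: send one available mobile sensor to each emergency site."""
    plan = {}
    for site, robot in zip(events, mobile_sensors):
        plan[robot] = site        # robot moves there, captures images, reports
    return plan

readings = {"room101": 22.5, "room102": 71.3, "lobby": 23.0}
print(dispatch(detect_events(readings), ["mobile1", "mobile2"]))
# {'mobile1': 'room102'}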

SCOPE
iMouse can be used in various home security applications, surveillance, biological detection, and emergency situations. The iMouse system combines two areas, WSNs and surveillance technology, to support intelligent mobile surveillance services. The mobile sensors can help overcome the weaknesses of traditional WSNs. On the other hand, the WSN provides context awareness and intelligence to the surveillance system. Therefore, the weakness of traditional "dumb" surveillance systems is greatly reduced, because the really critical images and video segments can be retrieved and sent to users.

Botnets

Contributor: Prithi Anand

The term “botnet” is used to refer to any group of bots. It is generally a collection of compromised computers (called zombie computers) running programs under a common command and control infrastructure. A botnet’s originator can control the group remotely, usually through means such as IRC, for various purposes.

The establishment of a botnet involves the following:

Exploitation: Typical ways of exploitation are through social engineering. Actions such as phishing, email, buffer overflow and instant-messaging scams are common ways of infecting a user's computer.

Infection: After successful exploitation, a bot uses Trivial File Transfer Protocol (TFTP), File Transfer Protocol (FTP), HyperText Transfer Protocol (HTTP) or IRC channel to transfer itself to the compromised host.

Control: After successful infection, the botnet’s author uses various commands to make the compromised computer do what he wants it to do.

Spreading: Bots can automatically scan their environment and propagate themselves using vulnerabilities. Therefore, each bot that is created can infect other computers on the network by scanning IP ranges or port scanning.

Scope

A botnet is nothing more than a tool, and there are many different motives for using one. Botnets are used in computer surveillance: a surveillance program installed on a computer can search the contents of the hard drive for suspicious data, monitor computer use, collect passwords, and even report back to its operator through the Internet connection. Such tools are used widely by law enforcement agencies armed with search warrants, and there is also warrantless surveillance by organizations such as the NSA. Packet sniffing is the monitoring of data traffic into and out of a computer or network. Other uses may be criminally motivated (e.g. denial-of-service attacks, key logging, packet sniffing, disabling security applications) or undertaken for monetary purposes (click fraud).

EDGE

Contributor:Sreejith M.S.

Introduction
EDGE is the next step in the evolution of GSM and IS-136. The objective of the new technology is to increase data transmission rates and spectrum efficiency and to facilitate new applications and increased capacity for mobile use. With the introduction of EDGE in GSM phase 2+, existing services such as GPRS and high-speed circuit switched data (HSCSD) are enhanced by offering a new physical layer. The services themselves are not modified. EDGE is introduced within existing specifications and descriptions rather than by creating new ones. This paper focuses on the packet-switched enhancement for GPRS, called EGPRS. GPRS allows data rates of 115 kbps and, theoretically, of up to 160 kbps on the physical layer. EGPRS is capable of offering data rates of 384 kbps and, theoretically, of up to 473.6 kbps.

A new modulation technique and error-tolerant transmission methods, combined with improved link adaptation mechanisms, make these EGPRS rates possible. This is the key to increased spectrum efficiency and enhanced applications, such as wireless Internet access, e-mail and file transfers.

GPRS/EGPRS will be one of the pacesetters in the overall wireless technology evolution in conjunction with WCDMA. Higher transmission rates for specific radio resources enhance capacity by enabling more traffic for both circuit- and packet-switched services. As the Third-generation Partnership Project (3GPP) continues standardization toward the GSM/EDGE radio access network (GERAN), GERAN will be able to offer the same services as WCDMA by connecting to the same core network. This is done in parallel with means to increase the spectral efficiency. The goal is to boost system capacity, both for real- time and best-effort services, and to compete effectively with other third-generation radio access networks such as WCDMA and cdma2000.

Technical differences between GPRS and EGPRS

Introduction
Regarded as a subsystem within the GSM standard, GPRS has introduced packet-switched data into GSM networks. Many new protocols and new nodes have been introduced to make this possible. EDGE is a method to increase the data rates on the radio link for GSM. Basically, EDGE only introduces a new modulation technique and new channel coding that can be used to transmit both packet-switched and circuit-switched voice and data services. EDGE is therefore an add-on to GPRS and cannot work alone. GPRS has a greater impact on the GSM system than EDGE has. By adding the new modulation and coding to GPRS and by making adjustments to the radio link protocols, EGPRS offers significantly higher throughput and capacity.

GPRS and EGPRS have different protocols and different behavior on the base station system side. However, on the core network side, GPRS and EGPRS share the same packet-handling protocols and therefore behave in the same way. Reuse of the existing GPRS core infrastructure (serving GPRS support node/gateway GPRS support node) emphasizes the fact that EGPRS is only an "add-on" to the base station system and is therefore much easier to introduce than GPRS. In addition to enhancing the throughput for each data user, EDGE also increases capacity. With EDGE, the same time slot can support more users. This decreases the number of radio resources required to support the same traffic, thus freeing up capacity for more data or voice services. EDGE makes it easier for circuit-switched and packet-switched traffic to coexist, while making more efficient use of the same radio resources. Thus in tightly planned networks with limited spectrum, EDGE may also be seen as a capacity booster for the data traffic.

EDGE technology
EDGE leverages the knowledge gained through use of the existing GPRS standard to deliver significant technical improvements. Figure 2 compares the basic technical data of GPRS and EDGE. Although GPRS and EDGE share the same symbol rate, the modulation bit rate differs. EDGE can transmit three times as many bits as GPRS during the same period of time. This is the main reason for the higher EDGE bit rates. The differences between the radio and user data rates are the result of whether or not the packet headers are taken into consideration. These different ways of calculating throughput often cause misunderstanding within the industry about actual throughput figures for GPRS and EGPRS. The data rate of 384 kbps is often used in relation to EDGE. The International Telecommunications Union (ITU) has defined 384 kbps as the data rate limit required for a service to fulfill the International Mobile Telecommunications-2000 (IMT-2000) standard in a pedestrian environment. This 384 kbps data rate corresponds to 48 kbps per time slot, assuming an eight-time slot terminal.
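
The figures quoted above are easy to reproduce: GSM's carrier symbol rate is about 270.8 ksymbols/s, GPRS's GMSK modulation carries 1 bit per symbol while EDGE's 8PSK carries 3, and each carrier is divided into 8 time slots. A quick check (the per-slot figures are derived directly from the numbers already given in the text):

# Quick check of the throughput figures quoted above.
symbol_rate = 270.833               # GSM symbol rate, ksymbols/s per carrier
gmsk_bits = 1                       # bits per symbol with GPRS's GMSK
psk8_bits = 3                       # bits per symbol with EDGE's 8PSK
print(psk8_bits / gmsk_bits)        # 3.0 -> EDGE carries 3x the bits per symbol
print(symbol_rate * psk8_bits)      # ~812.5 kbps gross modulation rate per
                                    # carrier, before channel coding and overhead
slots = 8                           # time slots per carrier
print(473.6 / slots)                # 59.2 kbps: theoretical EGPRS maximum per slot
print(48 * slots)                   # 384 kbps: the IMT-2000 pedestrian-rate target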

DIGITAL CINEMA

Contributor:Jyothimon C.

Definition
Digital cinema encompasses every aspect of the movie making process, from production and post-production to distribution and projection. A digitally produced or digitally converted movie can be distributed to theaters via satellite, physical media, or fiber optic networks. The digitized movie is stored by a computer/server which "serves" it to a digital projector for each screening of the movie. Projectors based on DLP Cinema® technology are currently installed in over 1,195 theaters in 30 countries worldwide - and remain the first and only commercially available digital cinema projectors.

When you see a movie digitally, you see that movie the way its creators intended you to see it: with incredible clarity and detail, in a range of up to 35 trillion colors. And whether you're catching that movie on opening night or months after, it will always look its best, because digital movies are immune to the scratches, fading, pops and jitter that film is prone to with repeated screenings. The main advantage of digital movies is that expensive film rolls and post-processing expenses can be done away with. The movie can be transmitted to computers in movie theatres, so it can be released in a larger number of theatres.

Digital technology has already taken over much of the home entertainment market. It seems strange, then, that the vast majority of theatrical motion pictures are shot and distributed on celluloid film, just like they were more than a century ago. Of course, the technology has improved over the years, but it's still based on the same basic principles. The reason is simple: up until recently, nothing could come close to the image quality of projected film. Digital cinema is simply a new approach to making and showing movies. The basic idea is to use bits and bytes (strings of 1s and 0s) to record, transmit and replay images, rather than using chemicals on film.

The main advantage of digital technology (such as a CD) is that it can store, transmit and retrieve a huge amount of information exactly as it was originally recorded. Analog technology (such as an audio tape) loses information in transmission, and generally degrades with each viewing. Digital information is also a lot more flexible than analog information. A computer can manipulate bytes of data very easily, but it can't do much with a streaming analog signal; it's a completely different language.

Digital cinema affects three major areas of movie-making:
  • Production - how the movie is actually made
  • Distribution - how the movie gets from the production company to movie theaters
  • Projection - how the theater presents the movie

Production

With an $800 consumer digital camcorder, a stack of tapes, a computer and some video-editing software, you could make a digital movie. But there are a couple of problems with this approach. First, your image resolution won't be that great on a big movie screen. Second, your movie will look like news footage, not a normal theatrical film. Conventional video has a completely different look from film, and just about anybody can tell the difference in a second. Film and video differ a lot in image clarity, depth of focus and color range, but the biggest contrast is frame rate. Film cameras normally shoot at 24 frames per second, while most U.S. television video cameras shoot at 30 frames per second (29.97 per second, to be exact).

MOBILE AD HOC NETWORK

Contributor:Anoop V.M.

Definition
In recent years communication technology and services have advanced. Mobility has become very important, as people want to communicate anytime, from and to anywhere. In areas where little or no infrastructure is available, or where the existing wireless infrastructure is expensive and inconvenient to use, Mobile Ad hoc Networks, called MANETs, are becoming useful. They are going to become an integral part of next-generation mobile services.
A MANET is a collection of wireless nodes that can dynamically form a network to exchange information without using any pre-existing fixed network infrastructure. The special features of MANETs bring this technology great opportunities together with severe challenges.

Military tactical and other security-sensitive operations are still the main applications of ad hoc networks, although there is a trend to adopt ad hoc networks for commercial uses due to their unique properties. However, they face a number of problems. Some of the technical challenges MANET poses are presented, based on which the paper points out the related core barriers, and some of the key research issues that are expected to promote the development and accelerate the commercial application of MANET technology are discussed in detail. During the last decade, advances in both hardware and software techniques have made mobile hosts and wireless networking common and ubiquitous. Generally there are two distinct approaches for enabling wireless mobile units to communicate with each other:

Infrastructured
Wireless mobile networks have traditionally been based on the cellular concept and relied on good infrastructure support, in which mobile devices communicate with access points like base stations connected to the fixed network infrastructure. Typical examples of this kind of wireless networks are GSM, UMTS, WLL, WLAN, etc.

Infrastructureless
As to the infrastructureless approach, the mobile wireless network is commonly known as a mobile ad hoc network (MANET) [1, 2]. A MANET is a collection of wireless nodes that can dynamically form a network to exchange information without using any pre-existing fixed network infrastructure. It has many important applications, because in many contexts information exchange between mobile units cannot rely on any fixed network infrastructure, but on rapid configuration of wireless connections on the fly. Wireless ad hoc networks are themselves an independent, wide area of research and applications, instead of being only a complement to the cellular system. In this paper, we describe the fundamental problems of ad hoc networking by giving its related research background, including the concept, features, status, and applications of MANET.

Reference:

http://www.antd.nist.gov/wahn_mahn.shtml

http://www.mcs.vuw.ac.nz/cgi-bin/wiki/dsrg?MeshNetworking

4G

Contributor: Nitin S. Madhu

4G is the short term for fourth-generation wireless, the stage of broadband mobile communications that will supersede the third generation (3G). While neither standards bodies nor carriers have concretely defined or agreed upon what exactly 4G will be, it is expected that end-to-end IP and high-quality streaming video will be among 4G's distinguishing features. Fourth generation networks are likely to use a combination of WiMAX and WiFi.

Technologies employed by 4G may include SDR (software-defined radio) receivers, OFDM (Orthogonal Frequency Division Multiplexing), OFDMA (Orthogonal Frequency Division Multiple Access), MIMO (multiple input/multiple output) technologies, UMTS and TD-SCDMA. All of these delivery methods are typified by high rates of data transmission and packet-switched transmission protocols. 3G technologies, by contrast, are a mix of packet- and circuit-switched networks.

A Japanese company, NTT DoCoMo, is testing 4G communication at 100 Mbps for mobile users and up to 1 Gbps while stationary, whereas typical 3G communication allows the transmission of 384 kbps for mobile systems and 2 Mbps for stationary systems.

The high speeds offered by 4G will create new markets and opportunities for both traditional and startup telecommunications companies. 4G networks, when coupled with cellular phones equipped with higher-quality digital cameras and even HD capabilities, will enable vlogs to go mobile, as has already occurred with text-based moblogs. New models for collaborative citizen journalism are likely to emerge as well in areas with 4G connectivity.

Reference :

http://searchmobilecomputing.techtarget.com/sDefinition/0,,sid40_gci749934,00.html
http://en.wikipedia.org/wiki/4G

Saturday, July 14, 2007

Smart Dust

Contributor:Saritha Mary Zachariah

Picture being able to scatter hundreds of tiny sensors around a building to monitor temperature or humidity. Or deploying, like pixie dust, a network of minuscule, remote sensor chips to track enemy movements in a military operation.

"Smart dust" devices are tiny wireless microelectromechanical sensors (MEMS) that can detect everything from light to vibrations. Thanks to recent breakthroughs in silicon and fabrication techniques, these "motes" could eventually be the size of a grain of sand, though each would contain sensors, computing circuits, bidirectional wireless communications technology and a power supply. Motes would gather scads of data, run computations and communicate that information using two-way band radio between motes at distances approaching 1,000 feet.

Smartdust is a term used to describe groups of very small robots which may be used for monitoring and detection. Currently, the scale of smartdust is rather small, with single sensors the size of a deck of playing cards, but the hope is to eventually have robots as small as a speck of dust. Individual sensors of smartdust are often referred to as motes because of their small size. These devices are also known as MEMS.

MEMS stands for Micro Electro-Mechanical Systems, referring to functional machine systems with components measured in micrometers. MEMS is often viewed as a stepping stone between conventional macroscale machinery and futuristic nanomachinery. MEMS-precursors have been around for a while in the form of microelectronics, but these systems are purely electronic, incapable of processing or outputting anything but a series of electrical impulses. However, modern MEMS-fabrication techniques are largely based upon the same technology used to manufacture integrated circuits, that is, film-deposition techniques which employ photolithography.

Largely considered an enabling technology rather than an end in itself, the fabrication of MEMS is seen by engineers and technologists as another welcome advance in our ability to synthesize a wider range of physical structures designed to perform useful tasks. Most often mentioned in conjunction with MEMS is the idea of a "lab-on-a-chip," a device that processes tiny samples of a chemical and returns useful results. This could prove quite revolutionary in the area of medical diagnosis, where lab analysis results in added costs for medical coverage, delays in diagnosis and inconvenient paperwork.

Brief description of the operation of the mote:

The Smart Dust mote is run by a microcontroller that not only determines the tasks performed by the mote, but controls power to the various components of the system to conserve energy. Periodically the microcontroller gets a reading from one of the sensors, which measure one of a number of physical or chemical stimuli such as temperature, ambient light, vibration, acceleration, or air pressure, processes the data, and stores it in memory. It also occasionally turns on the optical receiver to see if anyone is trying to communicate with it. This communication may include new programs or messages from other motes. In response to a message or upon its own initiative the microcontroller will use the corner cube retroreflector or laser to transmit sensor data or a message to a base station or another mote.

Longer description of the operation of the mote:

The primary constraint in the design of the Smart Dust motes is volume, which in turn puts a severe constraint on energy since we do not have much room for batteries or large solar cells. Thus, the motes must operate efficiently and conserve energy whenever possible. Most of the time, the majority of the mote is powered off with only a clock and a few timers running. When a timer expires, it powers up a part of the mote to carry out a job, then powers off. A few of the timers control the sensors that measure one of a number of physical or chemical stimuli such as temperature, ambient light, vibration, acceleration, or air pressure. When one of these timers expires, it powers up the corresponding sensor, takes a sample, and converts it to a digital word. If the data is interesting, it may either be stored directly in the SRAM or the microcontroller is powered up to perform more complex operations with it. When this task is complete, everything is again powered down and the timer begins counting again.

Another timer controls the receiver. When that timer expires, the receiver powers up and looks for an incoming packet. If it does not see one within a certain length of time, it is powered down again. The mote can receive several types of packets, including packets containing new program code, which is stored in program memory; this allows the user to change the behavior of the mote remotely. Packets may also include messages from the base station or from other motes. When one of these is received, the microcontroller is powered up and used to interpret the contents of the message. The message may tell the mote to do something in particular, or it may simply be in transit from one mote to another on its way to a particular destination. In response to a message, or to another timer expiring, the microcontroller assembles a packet containing sensor data or a message and transmits it using either the corner cube retroreflector or the laser diode, depending on which the mote has. The corner cube retroreflector transmits information simply by moving a mirror, thereby changing the reflection of a laser beam sent from the base station; this technique is substantially more energy efficient than generating radiation on the mote itself. With the laser diode and a set of beam-scanning mirrors, data can be transmitted in any desired direction, allowing the mote to communicate with other Smart Dust motes.
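The packet handling can likewise be sketched in Python. This is not the mote's actual firmware; the packet format, the Mote class, and the "ccr"/"laser" labels are assumptions made for illustration, but the dispatch logic mirrors the description above: program packets are stored, messages addressed to the mote are handled, and everything else is forwarded.

PROGRAM, MESSAGE = "program", "message"

class Mote:
    def __init__(self, mote_id, transmitter="ccr"):
        self.mote_id = mote_id
        self.transmitter = transmitter    # "ccr" (passive) or "laser" (active)
        self.program_memory = b""
        self.outbox = []                  # packets queued for transmission

    def on_receive(self, packet):
        kind, dest, payload = packet
        if kind == PROGRAM:
            self.program_memory = payload          # remote reprogramming
        elif kind == MESSAGE and dest == self.mote_id:
            self.handle(payload)                   # message is for this mote
        elif kind == MESSAGE:
            self.transmit(packet)                  # forward toward its destination

    def handle(self, payload):
        # Reply to the base station, e.g. with sensor data or an acknowledgement.
        self.transmit((MESSAGE, "base-station", b"ack:" + payload))

    def transmit(self, packet):
        # A CCR only modulates a beam arriving from the base station, while a
        # laser diode with scanning mirrors can also aim at other motes.
        self.outbox.append((self.transmitter, packet))

m = Mote("mote-7")
m.on_receive((MESSAGE, "mote-7", b"report"))
m.on_receive((PROGRAM, None, b"new firmware image"))
print(m.outbox)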

Smartdust has potential applications in virtually every field of science and industry. Research in the underlying technologies is well funded and on a solid footing, and it is generally accepted that it is only a matter of time before smartdust exists in a functional form.

The Defense Advanced Research Projects Agency (DARPA) has funded smartdust research heavily since the late 1990s, seeing virtually limitless applications in modern warfare. So far the research has been promising, with prototype smartdust sensors as small as 5 mm across. Costs have been dropping rapidly with technological innovation, bringing individual motes down to as little as $50 each, with hopes of dropping below $1 per mote in the near future.

Applications

  • Defense-related sensor networks
    • battlefield surveillance, treaty monitoring, transportation monitoring, scud hunting, ...
  • Virtual keyboard
    • Glue a dust mote on each of your fingernails. Accelerometers will sense the orientation and motion of each of your fingertips and talk to the computer in your watch. QWERTY is the first step to proving the concept, but you can imagine much more useful and creative ways to interface with your computer if it knows where your fingers are: sculpt 3D shapes in virtual clay, play the piano, gesture in sign language and have the computer translate, ...
    • Combined with a MEMS augmented-reality heads-up display, your entire computer I/O would be invisible to the people around you. Couple that with wireless access and you need never be bored in a meeting again! Surf the web while the boss rambles on and on.
  • Inventory Control
    • The carton talks to the box, the box talks to the pallet, the pallet talks to the truck, the truck talks to the warehouse, and the truck and the warehouse talk to the Internet. Know where your products are and what shape they're in, any time, anywhere. Sort of like FedEx tracking on steroids, for all products in your production stream from raw materials to delivered goods.
  • Product quality monitoring
    • temperature, humidity monitoring of meat, produce, dairy products
    • impact, vibration, temp monitoring of consumer electronics
      • failure analysis and diagnostic information, e.g. monitoring vibration of bearings for frequency signatures indicating imminent failure (back up that hard drive now!)
  • Smart office spaces
    • The Center for the Built Environment has fabulous plans for the office of the future in which environmental conditions are tailored to the desires of every individual. Maybe soon we'll all be wearing temperature, humidity, and environmental comfort sensors sewn into our clothes, continuously talking to our workspaces which will deliver conditions tailored to our needs. No more fighting with your office mates over the thermostat.

Energy use is a major area of research in the field of smartdust. With devices so small, batteries add a great deal of weight and volume, so it is important to use an absolute minimum of energy when communicating the collected data to the central hubs where humans can access it.
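A rough, hypothetical calculation illustrates the point. All of the current and battery figures below are invented for the sake of the arithmetic; they show only how strongly the average current, and hence battery life, depends on the duty cycle.

# Back-of-the-envelope arithmetic with hypothetical figures, showing why
# aggressive duty cycling matters for a battery-powered mote.
sleep_current_ua  = 1.0       # microamps while only the clock and timers run
active_current_ua = 1000.0    # microamps while sampling or communicating
duty_cycle        = 0.001     # fraction of time the mote is active (0.1%)

avg_current_ua = (1 - duty_cycle) * sleep_current_ua + duty_cycle * active_current_ua

battery_uah = 1000.0          # a hypothetical 1 mAh thin-film battery, in microamp-hours
lifetime_days = battery_uah / avg_current_ua / 24

print(f"average current: {avg_current_ua:.2f} uA")    # about 2 uA
print(f"battery lifetime: {lifetime_days:.0f} days")  # roughly three weeks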

Development of smartdust continues at breakneck speed, and it will no doubt soon be commonplace to have vast armies of thousands or millions of nearly invisible sensors monitoring our environment to ensure our safety and the efficiency of the machines around us.

REFERENCES

1. www.computerworld.com

2. www.robotics.eecs.berkeley.edu

3. www.bsac.eecs.berkeley.edu

4. www.nanotech-now.com

5. www.wikipedia.org

Blade servers

Contributor: George Mamman Koshy


Blade servers are self-contained computer servers designed for high density. Whereas a standard rack-mount server can function with (at least) a power cord and a network cable, blade servers have many components removed for space, power, and other considerations while still having all the functional components needed to be considered a computer. A blade enclosure provides services such as power, cooling, networking, various interconnects and management, though different blade vendors have differing principles about what should and should not be included in the blade itself (and sometimes in the enclosure at all). Together, the blades and the enclosure form the blade system.

In a standard server-rack configuration, 1U (one rack unit, 19" wide and 1.75" tall) is the minimum possible size of any equipment. The principal benefit of, and the reason behind the push towards, blade computing is that components are no longer restricted to these minimum size requirements. With the most common rack form factor being 42U high, the number of discrete computer devices that can be mounted directly in a rack is limited to 42. Blades do not have this limitation; densities of 100 computers per rack and more are achievable with the current generation of blade systems.
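As a rough illustration of the density argument, assume a hypothetical 10U enclosure holding 16 blades (actual enclosure sizes and blade counts vary by vendor):

# A quick, illustrative density comparison. The enclosure height and the number
# of blades per enclosure are hypothetical; real figures vary by vendor.
rack_height_u        = 42    # a common full-height rack
enclosure_height_u   = 10    # hypothetical blade enclosure height
blades_per_enclosure = 16    # hypothetical blade count per enclosure

servers_1u  = rack_height_u // 1                                             # 42
blade_count = (rack_height_u // enclosure_height_u) * blades_per_enclosure   # 4 x 16 = 64

print(f"1U servers per rack: {servers_1u}")
print(f"blades per rack:     {blade_count}")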

Server blade

In the purest definition of computing (a Turing machine, simplified here), a computer requires only:

1. memory to read input commands and data

2. a processor to perform commands manipulating that data, and

3. memory to store the results.

Today (in contrast with the first general-purpose computers) these are implemented as electrical components requiring (DC) power, which produces heat. Other components such as hard drives, power supplies, storage and network connections, and basic I/O (keyboard, video, mouse, and serial ports) only support the basic computing function, yet they add bulk, heat, and complexity, not to mention moving parts that are more prone to failure than solid-state components.

In practice, these components are all required if the computer is to perform real-world work. In the blade paradigm, most of these functions are removed from the blade computer, being either provided by the blade enclosure (e.g. DC power supply), virtualised (e.g. iSCSI storage, remote console over IP) or discarded entirely (e.g. serial ports). The blade itself becomes vastly simpler, hence smaller and (in theory) cheaper to manufacture.

Blade enclosure

The enclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade computers require components that are bulky, hot and space-inefficient, and duplicated across many computers that may or may not be performing at capacity. By locating these services in one place and sharing them between the blade computers, the overall utilization is more efficient. The specifics of which services are provided and how vary by vendor.

Power

Computers operate over a range of DC voltages, yet power is delivered from utilities as AC, and at higher voltages than required within the computer. Converting this supply requires one or more power supply units (PSUs). To ensure that the failure of one power source does not affect the operation of the computer, even entry-level servers often have redundant power supplies, which again adds to the bulk and heat output of the design.

The blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may be in the form of a power supply in the enclosure or a dedicated separate PSU supplying DC to multiple enclosures. This setup not only reduces the number of PSUs required to provide a resilient power supply, but it also improves efficiency because it reduces the number of idle PSUs. In the event of a PSU failure the blade chassis throttles down individual blade server performance until it matches the available power. This is carried out in steps of 12.5% per CPU until power balance is achieved.
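The throttling described above can be sketched as a simple loop. This is only an illustration of the stated behaviour, with hypothetical power figures, not any vendor's actual firmware:

# After a PSU failure, per-blade performance is stepped down in 12.5% increments
# until the total draw fits within the remaining power budget.
STEP = 0.125

def throttle_level(blade_full_power_w, available_power_w):
    """Return the performance level (1.0 down to 0.0) applied to every blade."""
    level = 1.0
    while level > 0 and sum(p * level for p in blade_full_power_w) > available_power_w:
        level -= STEP
    return max(level, 0.0)

blades = [250.0] * 16                       # sixteen blades drawing 250 W at full speed
print(throttle_level(blades, 4000.0))       # full power available -> 1.0
print(throttle_level(blades, 3000.0))       # degraded supply      -> 0.75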

Cooling

During operation, electrical and mechanical components produce heat, which must be removed to ensure the proper functioning of the components. In blade enclosures, as in most computing systems, heat is removed with fans.

A frequently underestimated problem when designing high-performance computer systems is the conflict between the amount of heat a system generates and the ability of its fans to remove that heat. Because the blades share the enclosure's power and cooling, a blade does not generate as much heat as a traditional server. Newer blade enclosure designs feature high-speed, adjustable fans and control logic that tunes the cooling to the system's requirements.
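As a sketch of what such control logic might look like, the following maps temperature to fan speed with a simple proportional rule; the constants are hypothetical, and real enclosures use vendor-specific algorithms:

def fan_speed_percent(temp_c, target_c=25.0, gain=5.0, minimum=20.0):
    """Map an enclosure temperature reading to a fan duty cycle (20%..100%)."""
    speed = minimum + gain * max(temp_c - target_c, 0.0)
    return min(speed, 100.0)

for t in (24, 30, 38, 45):
    print(f"{t} C -> {fan_speed_percent(t):.0f}% fan")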

At the same time, the increased density of blade server configurations can still result in higher overall demands for cooling when a rack is populated at over 50%. This is especially true with early generation blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling capacity than a fully populated rack of standard 1U servers.

Networking

Computers are increasingly produced with high-speed, integrated network interfaces, and most are expandable to allow the addition of connections that are faster, more resilient, and run over different media (copper and fibre). These additions require extra engineering effort in the design and manufacture of the blade, consume space both as installed hardware and as capacity for installation (empty expansion slots), and hence add complexity. High-speed network topologies require expensive, high-speed integrated circuits and media, while most computers do not utilise all the available bandwidth.

The blade enclosure provides one or more network buses to which the blade will connect, and either presents these ports individually in a single location (versus one in each computer chassis), or aggregates them into fewer ports, reducing the cost of connecting the individual devices. These may be presented in the chassis itself, or in networking blades.

Storage

While computers typically need hard-disks to store the operating system, application and data for the computer, these are not necessarily required locally. Many storage connection methods (e.g. FireWire, SATA, SCSI, DAS, Fibre Channel and iSCSI) are readily moved outside the server, though not all are used in enterprise-level installations. Implementing these connection interfaces within the computer presents similar challenges to the networking interfaces (indeed iSCSI runs over the network interface), and similarly these can be removed from the blade and presented individually or aggregated either on the chassis or through other blades.

The ability to boot the blade from a storage area network (SAN) allows for an entirely disk-free blade. This may have higher processor density or better reliability than systems having individual disks on each blade.

Uses

Blade servers are ideal for specific purposes such as web hosting and cluster computing. Individual blades are typically hot-swappable. As more processing power, memory and I/O bandwidth are added to blade servers, they are being used for larger and more diverse workloads.

Although blade server technology in theory allows for open, cross-vendor solutions, at this stage of development of the technology, users find there are fewer problems when using blades, racks and blade management tools from the same vendor.

Eventual standardization of the technology might result in more choices for consumers; increasing numbers of third-party software vendors are now entering this growing field.

Blade servers are not, however, the answer to every computing problem. They may best be viewed as a form of productized server farm that borrows from mainframe packaging, cooling, and power supply technology. For large problems, server farms of blade servers are still necessary, and because of blade servers' high power density, can suffer even more acutely from the HVAC problems that affect large conventional server farms.


REFERENCES

  1. www.wikipedia.org
  2. www.SearchDataCenter.com
  3. www.serverwatch.com