Thursday, July 26, 2007

SINGLE ELECTRON TUNNELING TRANSISTOR

Contributor : Midhun M.K.

The chief problem faced by chip designers concerns the size of the chip. According to Moore's Law, the number of transistors on a chip will approximately double every 18 to 24 months. Moore's Law works largely through shrinking transistors, the circuits that carry electrical signals. By shrinking transistors, designers can squeeze more of them onto a chip. However, more transistors mean more electricity and heat compressed into an ever smaller space. Furthermore, smaller chips increase performance but also compound the problem of complexity.

To address this, the single-electron tunneling transistor (SET), a device that exploits the quantum effect of tunneling to control and measure the movement of single electrons, was devised. Experiments have shown that charge does not flow continuously in these devices but in a quantized way. This paper discusses the principle of operation of the SET, its fabrication and its applications. It also deals with the merits and demerits of the SET compared to the MOSFET. Although it is unlikely that SETs will replace FETs in conventional electronics, they should prove useful in ultra-low-noise analog applications. Moreover, because it is not affected by the same technological limitations as the FET, the SET can closely approach the quantum limit of sensitivity. It might also be a useful read-out device for a solid-state quantum computer. When quantum technology eventually replaces current computer technology, the SET will find immense applications.

Single-electron tunneling transistors (SETs) are three-terminal switching devices that can transfer electrons from source to drain one by one. The structure of an SET is similar to that of an FET. The important difference, however, is that in an SET the channel is separated from the source and drain by tunneling junctions, and the role of the channel is played by an "island". The particular advantage of SETs is that they require only one electron to toggle between ON and OFF states, so the transistor generates much less heat and requires far less power to move electrons around, a feature that is very important in battery-powered mobile devices such as cell phones. Pentium chips, by contrast, become much too hot and require massive fans to cool them. This would not happen with single-electron transistors, which use much less energy and can therefore be packed much closer together.
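
As a rough illustration of why charge moves through such a device one electron at a time, the sketch below (not part of the paper; the 1 aF island capacitance is an assumed, illustrative value) computes the single-electron charging energy e²/2C of the island and the temperature scale below which Coulomb blockade, and hence single-electron behaviour, is observable.

    # Illustrative sketch: single-electron charging energy of a SET island and the
    # temperature below which Coulomb blockade is visible. The island capacitance
    # is a hypothetical example value, not a figure from this article.
    e = 1.602e-19      # elementary charge, C
    k_B = 1.381e-23    # Boltzmann constant, J/K

    C_island = 1e-18   # assumed total island capacitance: 1 aF

    E_C = e**2 / (2 * C_island)   # charging energy in joules
    T_scale = E_C / k_B           # temperature at which k_B*T ~ E_C

    print(f"Charging energy: {E_C / e * 1000:.0f} meV")        # ~80 meV
    print(f"Blockade requires T well below ~{T_scale:.0f} K")  # ~930 K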

Reference:

• http://physicsweb.org/articles/world/11/9/7/1
• http://emtech.boulder.nist.gov/div817b/whatwedo/set/set.htm

COLLECTIVE INTELLIGENCE BRICKS

Contributor: Anish Samuel

INTRODUCTION:

Collective Intelligent Bricks (CIB) deals with massive automated storage systems. It is the future of data storage. In the early days of computer development, storage systems were not highly developed. Computers were not commonly used then, owing to the technological knowledge and experience one needed in order to deal with the storage system, and the absence of high-density storage only added to the problem. The situation in the computing field in those days can be seen from the words of the industry's giants.

• 1943 -IBM Chairman Thomas Watson predicts, "There is a world market for maybe five computers". (In 1951 there were 10 computers in the U.S)

• 1977 -Kenneth Olson, President of Digital Equipment: “There is no reason for anyone to have a computer in their home.”

• 1981 –Bill Gates: “640K ought to be enough for anybody.”


These statements emphasize how difficult it is to predict future technology trends and industry needs. Another good example is the current situation in the IT world: the amazing growth of digital data stored in special storage systems has led to high administration costs and real problems for storage administrators. Today's storage systems contain several terabytes of data, but in the near future, at the current pace of growth, their capacities will increase to petabytes. To lower the cost of administration and to help create easy-to-manage storage systems, vendors are working on intelligent storage bricks. These bricks will consist of off-the-shelf components, thus lowering costs, and will have the intelligence to self-manage the storage with no human assistance. The bricks are the future of the IT world; without them, storing and managing data in the near future will be impossible.

Let us now briefly describe the various sections to be dealt with in this seminar topic.

AUTONOMIC STORAGE

The basic goal of autonomic storage is to significantly improve the cost of ownership, reliability and ease-of-use of information technologies. As explained, the main problem of information technology is the cost and ease of administration. Nowadays a storage administrator must have wide knowledge not only of disk technology, but also of several network protocols and architectures (such as TCP/IP and Fibre Channel), file systems and system architecture. This administrator faces various tasks: installing new storage components and systems, configuring those components and systems, reconfiguring and adjusting the entire system, upgrading existing systems and components, and monitoring and tracking problems. Another perspective on autonomic storage is the engineering challenge: virtually every aspect of autonomic storage poses significant engineering challenges, from testing and verifying such systems to helping the storage administrator by easing installation, configuration and monitoring.
To achieve the promises of autonomic storage systems, systems need to become more self-configuring, self-healing and self-protecting, and during operation, more self-optimizing.

• Self-configuration. Current storage: corporate data centers have multiple vendors and platforms; installing, configuring and integrating systems is time-consuming and error-prone. Autonomic storage: automated configuration of components and systems follows high-level policies; the rest of the system adjusts automatically and seamlessly.
• Self-optimization. Current storage: systems have hundreds of manually set, nonlinear tuning parameters, and their number increases with each release. Autonomic storage: components and systems continually seek opportunities to improve their own performance and efficiency.
• Self-healing. Current storage: problem determination in large, complex systems can take a team of programmers weeks. Autonomic storage: the system automatically detects, diagnoses and repairs localized software and hardware problems.
• Self-protection. Current storage: detection of and recovery from attacks and cascading failures is manual. Autonomic storage: the system automatically defends against malicious attacks or cascading failures, using early warning to anticipate and prevent system-wide failures.

The idea of moving computing ability to the disk is not new; it was already introduced in the active-disks concept. Nevertheless, autonomic storage is a new approach with far-reaching consequences, and its aspects will define the next storage trends. Obviously, it will take several years until all the challenges of autonomic computing are met, but meanwhile storage systems incorporate autonomic computing features at several levels. The first level is the component level, in which individual components contain autonomic features. At the next level, homogeneous or heterogeneous systems work together to achieve autonomic behaviour. At the third level, heterogeneous systems work together towards a goal specified by the managing authority. An example of the second level is several storage bricks working together as a collective intelligent storage system.


STORAGE BRICKS

The storage-brick concept bundles the use of off-the-shelf components, such as hard disks, processors, memory and networking, together with the autonomic strategy aimed at easing administrative work. Building bricks from these components provides a combination of several disks together with intelligent control and management and network connectivity, while keeping the cost low. The figures illustrate the basic structure of a storage brick.

Storage bricks can be stacked in a rack, creating a storage system with large capacity, as shown in the figure. The adding procedure is very easy, plug and play, without special configuration and without interrupting other bricks or ongoing background work.

Several vendors already provide storage bricks. The bricks are built from 8-12 hard disks, a 200 (or more) MIPS processor and dual Ethernet ports, and run a proprietary OS; their cost ranges from $10,000/TB to $50,000/TB. These bricks can run various applications, such as SQL databases and mail. The table shows the available storage bricks.

Company          Product Name          Capacity
Snap Appliance   Snap Server           80 GB – 2.16 TB
NetApp           NetApp Server/Filer   50 GB – 48 TB

Still, these bricks do not fulfill all autonomic storage aspects; in fact they are non-intelligent bricks that need administration and supervision. Currently only two vendors supply intelligent bricks: EqualLogic and LeftHand Networks. Both companies supply storage bricks with 2 TB capacity and automatic scaling features. Adding a new brick to a working storage-brick system has no effect on the other bricks; the only work needed is to plug the brick in. The new brick is automatically recognized by all the other bricks and added to the storage pool. Another self-management feature is the well-known load balancing performed on both disks and network interfaces. Of course, the bricks have other sophisticated features such as replication, snapshots, disaster recovery and fail-over. EqualLogic's self-managing architecture is called Peer Storage; in this architecture not only is adding a brick easy, but managing the entire system, which may contain numerous bricks, is easy as well. The entire management is automated and the administrator does not have to configure and provision the system by hand; he just has to describe his needs and the bricks will work together to meet them.

COLLECTIVE INTELLIGENT BRICKS

To overcome the challenging problem of floor space and to create an autonomic storage brick that minimizes floor-space consumption, IBM launched the IceCube project (now named CIB, Collective Intelligent Bricks). The purpose of this project is to create a highly scalable, 3-dimensional pile of intelligent bricks with self-management features. The use of the 3-dimensional pile, as illustrated in Figure 4.1, enables an extreme (roughly tenfold) reduction in physical size. Because the pile consumes a lot of power, a thermal problem is inevitable; therefore IBM has used a water-cooling system instead of an air-cooling system. This way even more floor space is saved and the total power of the system is decreased. Another side effect of the water-cooling system is reduced noise.

IBM's brick consists of twelve hard disks (a total capacity of 1.2 TB), managed by three controllers tied to a powerful microprocessor and connected to an Ethernet switch (future implementations will use InfiniBand). A coupler is located on each side of the brick, so a brick can communicate at 10 GB/sec with each adjoining brick. The total throughput of a brick is 60 GB/sec, and the total throughput of a cube can rise to several terabits per second, depending on how many of the outward-facing couplers are linked to a wire interface. IBM's future goal is to create a cube with up to 700 bricks. These goals are achieved through simple and common concepts such as RAID and copies, and through intelligent software that automatically moves, copies and spreads data from one brick to another to eliminate hot spots and enable load balancing. After a new brick is added, the configuration procedure is carried out automatically and the other bricks transfer data to it. Another self-management feature implemented by IBM is the fail-in-place concept: when a brick malfunctions, no repair action is taken and the faulty brick is left in place. All the other bricks learn about the problem and work around the faulty brick. Because the data is scattered among several bricks, it remains available. Thus no human action is needed, except for adding bricks as the system needs more storage. The construction of a Collective Intelligent Brick is shown below.
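
The fail-in-place behaviour can be pictured with a small sketch like the one below (an assumed replication scheme for illustration only, not IBM's actual placement algorithm): every block is stored on two different bricks, so when one brick dies its data is still reachable on the surviving copies.

    # Minimal sketch of fail-in-place with replicated data blocks (assumed
    # behaviour for illustration; brick count and replication factor are arbitrary).
    import itertools, random

    NUM_BRICKS = 9     # hypothetical small cube
    REPLICAS = 2       # assumed replication factor

    def place_blocks(num_blocks):
        """Assign each block to REPLICAS distinct bricks, round-robin style."""
        placement = {}
        brick_cycle = itertools.cycle(range(NUM_BRICKS))
        for block in range(num_blocks):
            bricks = set()
            while len(bricks) < REPLICAS:
                bricks.add(next(brick_cycle))
            placement[block] = bricks
        return placement

    placement = place_blocks(1000)
    failed_brick = random.randrange(NUM_BRICKS)   # one brick malfunctions

    # Fail-in-place: no repair action; other bricks simply serve the surviving copy.
    unavailable = [b for b, bricks in placement.items() if bricks <= {failed_brick}]
    print(f"Brick {failed_brick} failed; unreachable blocks: {len(unavailable)}")  # 0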

Saturday, July 21, 2007

COMPUTER VISION FOR INTELLIGENT VEHICLES

Contributor : Cinsu Thomas

ABSTRACT

Vision is the main sense that we use to perceive the structure of the surrounding environment. Because of the large amount of information an image carries, artificial vision is also an extremely powerful way for autonomous robots to sense their surroundings.

In many indoor applications, such as the navigation of autonomous robots in both structured and unknown settings, vision and active sensors can perform complementary tasks for recognizing objects, detecting free space, or checking for specific object characteristics. Recent advances in computational hardware, such as a higher degree of integration, make it possible to have machines that deliver high computational power, with fast networking facilities, at an affordable price. In addition, current cameras include important new features that allow some basic problems to be addressed and solved directly at the sensor level. The resolution of the sensors has been drastically enhanced. To decrease acquisition and transfer time, new technological solutions can be found in CMOS cameras, with important advantages: pixels can be addressed independently, as in traditional memories, and their integration onto the processing chip appears to be straightforward.

The success of computational approaches to perception is demonstrated by the increasing number of autonomous systems that are now being used in structured and controlled industrial environments, and that are now being studied and implemented to work in more complex and unknown settings. In particular, recent years have witnessed an increasing interest in vision techniques for perceiving automotive environments, in both highway and urban scenarios, which should become a reality in the coming decades. Besides the obvious advantages of increasing road safety and improving the quality and efficiency of mobility for people and goods, the integration of intelligent features and autonomous functionalities in vehicles will lead to major economic benefits, such as reductions in fuel consumption and efficient exploitation of the road network. Furthermore, the automotive field is not the only one interested in these new technologies; other sectors are as well, each with its own target (industrial vehicles, military systems, mission-critical and unmanned rescue robots).

Thursday, July 19, 2007

Digital Light Processing

Contributor : Nitin S. Madhu

Digital Light Processing, a technology that led to the miniaturisation of projectors, has the potential to replace the ageing film-based projection of movies in theatres.
Although initially developed for projection, DLP has over the years spawned a new category of small, ultra-portable mobile projectors. It is based around a specialised optical semiconductor chip, the Digital Micromirror Device (DMD).

Being mirror-based, DLP systems use light very efficiently. That helps DLP images achieve outstanding brightness, contrast ratio and black levels. DLP images are also known for their crisp detail, smooth motion, and excellent color accuracy and consistency. Also, DLP-based TVs are probably the most resistant to screen burn-in.

The DLP chip is probably the world's most sophisticated light switch. It contains a rectangular array of up to 2 million hinge-mounted microscopic mirrors; each of these micromirrors measures less than one-fifth the width of a human hair. The 3-chip system found in DLP Cinema projection systems is capable of producing no fewer than 35 trillion colours.
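
The 35-trillion figure is consistent with roughly 15 bits of grey scale per primary colour across the three chips; the one-liner below is a back-of-the-envelope check of that assumption, not a specification from the DLP documentation.

    # Back-of-the-envelope check (assumes 15 bits of grey scale per primary colour,
    # one DMD chip per primary in a 3-chip system).
    bits_per_channel = 15
    colours = 2 ** (3 * bits_per_channel)
    print(f"{colours:,} colours")   # 35,184,372,088,832, i.e. about 35 trillion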

Although initially developed for projection processes, DLP is now used in telecommunications, scientific instrumentation, volumetric displays, holographic data storages, lithography and medical imaging.

Reference:

http://en.wikipedia.org/wiki/Digital_Light_Processing

http://www.answers.com/topic/dlp?cat=technology

ZIGBEE

Contributor : Veena C.

Introduction:

Zigbee is a wireless protocol that allows small, low-cost devices to quickly transmit small amounts of data, such as temperature readings for thermostats, on/off requests for light switches, or keystrokes for a wireless keyboard. It is a global specification for reliable, cost-effective, low-power wireless applications based on the IEEE 802.15.4 standard for wireless personal area networks (WPANs). ZigBee is targeted at RF applications that require a low data rate, long battery life and secure networking. It is a rather new wireless technology with applications in a variety of fields. Its low-data-rate technology allows devices to communicate with one another with very low power consumption, so the devices can run on simple batteries for several years. ZigBee is targeting various forms of automation, as low-data-rate communication is ideal for sensors, monitors and the like. Home automation is one of the key market areas for ZigBee.

ZigBee is designed for wireless controls and sensors. It could be built into just about anything you have around your home or office, including lights, switches, doors and appliances. These devices can then interact without wires, and you can control them all from a remote control or even your mobile phone. It allows wireless two-way communications between lights and switches, thermostats and furnaces, hotel-room air-conditioners and the front desk, and central command posts. It travels across greater distances and handles many sensors that can be linked to perform different tasks.

ZigBee works well because it aims low. Controls and sensors don't need to send and receive much data, and ZigBee has been designed to transmit slowly: it has a data rate of 250 kbps (kilobits per second). Because ZigBee transmits slowly, it doesn't need much power, so batteries can last up to 10 years. Because ZigBee consumes very little power, a sensor and transmitter that reports whether a door is open or closed, for example, can run for up to five years on a single AA battery. Also, operators are much happier about adding ZigBee to their phones than faster technologies such as Wi-Fi; the phone will therefore be able to act as a remote control for all the ZigBee devices it encounters.

ZigBee basically uses digital radios to allow devices to communicate with one another. A typical ZigBee network consists of several types of devices. A network coordinator is a device that sets up the network, is aware of all the nodes within its network, and manages both the information about each node and the information being transmitted and received within the network. Every ZigBee network must contain a network coordinator. Other Full Function Devices (FFDs) may be found in the network; these devices support all of the 802.15.4 functions and can serve as network coordinators, network routers, or devices that interact with the physical world. The final device type found in these networks is the Reduced Function Device (RFD), which usually serves only as a device that interacts with the physical world.
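
The device roles described above can be summarised in a small data model such as the sketch below (an illustrative model only, with made-up names; it is not the 802.15.4 protocol stack itself).

    # Conceptual sketch of ZigBee device roles (illustrative data model only).
    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class Node:
        node_id: int
        role: str                     # "coordinator", "router" (FFD) or "end_device" (RFD)
        parent: Optional[int] = None  # RFDs and routers associate with a parent node

    @dataclass
    class ZigbeeNetwork:
        coordinator: Node
        nodes: Dict[int, Node] = field(default_factory=dict)

        def join(self, node: Node, parent_id: int) -> None:
            """Record a newly associated node and its parent in the network."""
            node.parent = parent_id
            self.nodes[node.node_id] = node

    # Every ZigBee network has exactly one coordinator; FFDs can act as routers,
    # while RFDs only talk to their parent.
    net = ZigbeeNetwork(coordinator=Node(0, "coordinator"))
    net.join(Node(1, "router"), parent_id=0)        # FFD acting as a router
    net.join(Node(2, "end_device"), parent_id=1)    # RFD, e.g. a light switch
    print([n.role for n in net.nodes.values()])     # ['router', 'end_device']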

References:

http://en.wikipedia.org/wiki/ZigBee

http://seminarsonly.com/Zigbee

http://ZigBee Tutorial-Reports_Com.htm

CELL MICROPROCESSOR

Contributor: Nithin S. Madhu

Cell is shorthand for Cell Broadband Engine Architecture commonly abbreviated as CBEA.

The Cell microprocessor is the result of a US $400 million joint effort, over a period of four years, by STI, the formal alliance formed by Sony, Toshiba and IBM.

Cell is a heterogeneous chip multiprocessor that consists of an IBM 64-bit Power Architecture core, augmented with eight specialized co-processors based on a novel single-instruction multiple-data (SIMD) architecture called Synergistic Processor Unit (SPU), which is for data-intensive processing, like that found in cryptography, media and scientific applications. The system is integrated by a coherent on-chip bus.

The basic architecture of the Cell is described by IBM as a "system on a chip" (SoC)

One Cell working alone has the potential to reach 256 GFLOPS (floating-point operations per second); a home PC can reach around 6 GFLOPS with a good graphics card.
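
That 256 GFLOPS figure follows from simple arithmetic, sketched below under the usual assumption that each of the eight SPUs retires one 4-wide single-precision fused multiply-add per cycle.

    # Sanity check of the 256 GFLOPS peak (assuming 8 SPUs, each completing one
    # 4-wide single-precision fused multiply-add per cycle = 8 flops/cycle).
    spus = 8
    flops_per_cycle_per_spu = 4 * 2   # 4 SIMD lanes x (multiply + add)
    clock_ghz = 4.0
    print(spus * flops_per_cycle_per_spu * clock_ghz)   # 256.0 GFLOPS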

The potential processing power of Cell blows away existing processors and even rivals supercomputers.


The Cell architecture is based on new thinking that is emerging in the world of multiprocessing. The industry focus has shifted from maximizing performance to maximizing performance per watt. This is achieved by putting more than one processor on a single chip and running all of them well below their top speed. Because the transistors are switching less frequently, the processors generate less heat, and since there are at least two hotspots on each chip, the heat is spread more evenly over it and is thus less damaging to the circuitry. The Cell architecture breaks ground in combining a lightweight general-purpose processor with multiple GPU-like coprocessors into a coordinated whole. Software adoption remains a key issue in whether Cell ultimately delivers on its performance potential.
Some Cell statistics:
  • Observed clock speed: > 4 GHz

  • Peak performance (single precision): > 256 GFlops

  • Peak performance (double precision): >26 GFlops

  • Local storage size per SPU: 256KB

  • Area: 221 mm²

  • Technology: 90nm SOI

  • Total number of transistors: 234M

APPLICATION

Cell is optimized for compute-intensive workloads and broadband rich media applications, including computer entertainment, movies and other forms of digital content.

The first major commercial application of Cell was in Sony's PlayStation 3 game console

Toshiba has announced plans to incorporate Cell in high definition television sets.

IBM announced on April 25, 2007 that it would begin integrating its Cell Broadband Engine Architecture microprocessors into the company's line of mainframes.

In the fall of 2006, IBM released the QS20 blade server, which uses two Cell BE processors for tremendous performance in certain applications, reaching a peak of 410 gigaflops per module.

Mercury Computer Systems, Inc. has released blades, conventional rack servers and PCI Express accelerator boards with Cell processors.


Reference:

http://en.wikipedia.org/wiki/Cell_microprocessor

http://www.sony.net/SonyInfo/News/Press/200502/05-0208BE/index.html

Sunday, July 15, 2007

iMouse

INTRODUCTION
iMouse is an integrated mobile surveillance and wireless sensor system. Wireless sensor networks (WSNs) provide an inexpensive and convenient way to monitor physical environments. Traditional surveillance systems typically collect a large volume of video from wallboard cameras, which requires huge computation or manpower to analyze. Incorporating the environment-sensing capability of wireless sensor networks into video-based surveillance systems provides advanced services at a lower cost than traditional systems. iMouse's integrated, easy-to-deploy mobile surveillance and wireless sensor system uses static and mobile wireless sensors to detect and then analyze unusual events in the environment. The iMouse system consists of a large number of inexpensive static sensors and a small number of more expensive mobile sensors. The former monitor the environment, while the latter can move to particular locations and gather more detailed data. The iMouse system is a mobile, context-aware surveillance system.
The three main components of the iMouse system architecture are (1) the static sensors, (2) the mobile sensors and (3) an external server. The system is set up so that the user can issue commands to the network through the server, at which point the static sensors monitor the environment and report events. When notified of an unusual event or change in behavior, the server notifies the user and dispatches the mobile sensors to move to the event sites, collect data and report back.
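
The event flow can be pictured with a short sketch like the one below (all names and the nearest-free-sensor policy are hypothetical simplifications, not the authors' implementation): static sensors report an event, the server logs an alert for the user and dispatches the closest idle mobile sensor to the site.

    # Illustrative sketch of the iMouse event flow (hypothetical names and policy).
    def nearest_free_mobile(mobiles, site):
        """Pick the closest idle mobile sensor to the event site (grid coordinates)."""
        free = [m for m in mobiles if m["idle"]]
        return min(free, key=lambda m: abs(m["pos"][0] - site[0]) + abs(m["pos"][1] - site[1]))

    def handle_event(server, event_site):
        server["alerts"].append(event_site)            # notify the user
        mobile = nearest_free_mobile(server["mobiles"], event_site)
        mobile["idle"] = False                         # sensor is now busy
        mobile["target"] = event_site                  # dispatch it to gather images
        return mobile

    server = {"alerts": [],
              "mobiles": [{"pos": (0, 0), "idle": True},
                          {"pos": (5, 5), "idle": True}]}
    print(handle_event(server, event_site=(4, 6))["pos"])   # (5, 5) is dispatched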

SCOPE
iMouse can be used in various home security applications, surveillance, biological detection and emergency situations. The iMouse system combines two areas, WSNs and surveillance technology, to support intelligent mobile surveillance services. The mobile sensors help overcome the weaknesses of a traditional WSN; the WSN, in turn, provides context awareness and intelligence to the surveillance system. The weakness of a traditional "dumb" surveillance system is thereby greatly reduced, because the truly critical images and video sections can be retrieved and sent to users.

Botnets

Contributor: Prithi Anand

The term “botnet” is used to refer to any group of bots. It is generally a collection of compromised computers (called zombie computers) running programs under a common command and control infrastructure. A botnet’s originator can control the group remotely, usually through means such as IRC, for various purposes.

The establishment of a botnet involves the following:

Exploitation: Typical means of exploitation involve social engineering. Actions such as phishing, malicious email, buffer overflows and instant-messaging scams are common ways of infecting a user's computer.

Infection: After successful exploitation, a bot uses Trivial File Transfer Protocol (TFTP), File Transfer Protocol (FTP), HyperText Transfer Protocol (HTTP) or IRC channel to transfer itself to the compromised host.

Control: After successful infection, the botnet’s author uses various commands to make the compromised computer do what he wants it to do.

Spreading: Bots can automatically scan their environment and propagate themselves using vulnerabilities. Therefore, each bot that is created can infect other computers on the network by scanning IP ranges or port scanning.

Scope

A botnet is nothing more than a tool, and there are many different motives for using one. Botnets are used in computer surveillance: a surveillance program installed on a computer can search the contents of the hard drive for suspicious data, monitor computer use, collect passwords, and even report back to its operator through the Internet connection. Such programs are used widely by law enforcement agencies armed with search warrants, and there is also warrantless surveillance by organizations such as the NSA. Packet sniffing is the monitoring of data traffic into and out of a computer or network. Other uses may be criminally motivated (e.g. denial-of-service attacks, key logging, packet sniffing, disabling security applications) or for monetary purposes (click fraud).

EDGE

Contributor:Sreejith M.S.

Introduction
EDGE is the next step in the evolution of GSM and IS-136. The objective of the new technology is to increase data transmission rates and spectrum efficiency and to facilitate new applications and increased capacity for mobile use. With the introduction of EDGE in GSM Phase 2+, existing services such as GPRS and high-speed circuit-switched data (HSCSD) are enhanced by offering a new physical layer. The services themselves are not modified. EDGE is introduced within existing specifications and descriptions rather than by creating new ones. This paper focuses on the packet-switched enhancement of GPRS, called EGPRS. GPRS allows data rates of 115 kbps and, theoretically, of up to 160 kbps on the physical layer. EGPRS is capable of offering data rates of 384 kbps and, theoretically, of up to 473.6 kbps.

A new modulation technique and error-tolerant transmission methods, combined with improved link adaptation mechanisms, make these EGPRS rates possible. This is the key to increased spectrum efficiency and enhanced applications, such as wireless Internet access, e-mail and file transfers.

GPRS/EGPRS will be one of the pacesetters in the overall wireless technology evolution in conjunction with WCDMA. Higher transmission rates for specific radio resources enhance capacity by enabling more traffic for both circuit- and packet-switched services. As the Third Generation Partnership Project (3GPP) continues standardization toward the GSM/EDGE radio access network (GERAN), GERAN will be able to offer the same services as WCDMA by connecting to the same core network. This is done in parallel with means to increase spectral efficiency. The goal is to boost system capacity, both for real-time and best-effort services, and to compete effectively with other third-generation radio access networks such as WCDMA and cdma2000.

Technical differences between GPRS and EGPRS

Introduction
Regarded as a subsystem within the GSM standard, GPRS has introduced packet-switched data into GSM networks. Many new protocols and new nodes have been introduced to make this possible. EDGE is a method to increase the data rates on the radio link for GSM. Basically, EDGE only introduces a new modulation technique and new channel coding that can be used to transmit both packet-switched and circuit-switched voice and data services. EDGE is therefore an add-on to GPRS and cannot work alone. GPRS has a greater impact on the GSM system than EDGE has. By adding the new modulation and coding to GPRS and by making adjustments to the radio link protocols, EGPRS offers significantly higher throughput and capacity.

GPRS and EGPRS have different protocols and different behavior on the base station system side. However, on the core network side, GPRS and EGPRS share the same packet-handling protocols and therefore behave in the same way. Reuse of the existing GPRS core infrastructure (serving GPRS support node/gateway GPRS support node) emphasizes the fact that EGPRS is only an "add-on" to the base station system and is therefore much easier to introduce than GPRS. In addition to enhancing the throughput for each data user, EDGE also increases capacity. With EDGE, the same time slot can support more users. This decreases the number of radio resources required to support the same traffic, thus freeing up capacity for more data or voice services. EDGE makes it easier for circuit-switched and packet-switched traffic to coexist, while making more efficient use of the same radio resources. Thus, in tightly planned networks with limited spectrum, EDGE may also be seen as a capacity booster for the data traffic.

EDGE technology
EDGE leverages the knowledge gained through use of the existing GPRS standard to deliver significant technical improvements. Figure 2 compares the basic technical data of GPRS and EDGE. Although GPRS and EDGE share the same symbol rate, the modulation bit rate differs: EDGE can transmit three times as many bits as GPRS during the same period of time. This is the main reason for the higher EDGE bit rates. The differences between the radio and user data rates are the result of whether or not the packet headers are taken into consideration. These different ways of calculating throughput often cause misunderstanding within the industry about actual throughput figures for GPRS and EGPRS. The data rate of 384 kbps is often used in relation to EDGE: the International Telecommunication Union (ITU) has defined 384 kbps as the data rate limit required for a service to fulfill the International Mobile Telecommunications-2000 (IMT-2000) standard in a pedestrian environment. This 384 kbps data rate corresponds to 48 kbps per time slot, assuming an eight-timeslot terminal.
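
A quick back-of-the-envelope check of these figures is sketched below; the 59.2 kbps/slot value corresponds to the highest EGPRS coding scheme (MCS-9), and the symbol rate is GSM's standard 270.833 ksymbols/s.

    # GSM symbol rate is 270.833 ksymbols/s; GMSK (GPRS) carries 1 bit/symbol,
    # while EDGE's 8PSK carries 3 bits/symbol, hence "three times as many bits".
    symbol_rate_ksym = 270.833
    print(symbol_rate_ksym * 1)   # ~270.8 kbps gross per carrier with GMSK
    print(symbol_rate_ksym * 3)   # ~812.5 kbps gross per carrier with 8PSK

    # User data rates per time slot (after coding overhead), times 8 slots:
    print(8 * 48.0)               # 384 kbps, the IMT-2000 pedestrian target
    print(8 * 59.2)               # 473.6 kbps, EGPRS peak with MCS-9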

DIGITAL CINEMA

Contributor:Jyothimon C.

Definition
Digital cinema encompasses every aspect of the movie making process, from production and post-production to distribution and projection. A digitally produced or digitally converted movie can be distributed to theaters via satellite, physical media, or fiber optic networks. The digitized movie is stored by a computer/server which "serves" it to a digital projector for each screening of the movie. Projectors based on DLP Cinema® technology are currently installed in over 1,195 theaters in 30 countries worldwide - and remain the first and only commercially available digital cinema projectors.

When you see a movie digitally, you see it the way its creators intended: with incredible clarity and detail, in a range of up to 35 trillion colors. And whether you're catching that movie on opening night or months later, it will always look its best, because digital movies are immune to the scratches, fading, pops and jitter that film is prone to with repeated screenings. The main advantages of digital movies are that expensive film rolls and post-processing expenses can be done away with, and that a movie can be transmitted to computers in movie theatres and hence released in a larger number of theatres.

Digital technology has already taken over much of the home entertainment market. It seems strange, then, that the vast majority of theatrical motion pictures are shot and distributed on celluloid film, just as they were more than a century ago. Of course, the technology has improved over the years, but it's still based on the same basic principles. The reason is simple: up until recently, nothing could come close to the image quality of projected film. Digital cinema is simply a new approach to making and showing movies. The basic idea is to use bits and bytes (strings of 1s and 0s) to record, transmit and replay images, rather than using chemicals on film.

The main advantage of digital technology (such as a CD) is that it can store, transmit and retrieve a huge amount of information exactly as it was originally recorded. Analog technology (such as an audio tape) loses information in transmission, and generally degrades with each viewing. Digital information is also a lot more flexible than analog information. A computer can manipulate bytes of data very easily, but it can't do much with a streaming analog signal. It's a completely different language.

Digital cinema affects three major areas of movie-making:
• Production - how the movie is actually made
• Distribution - how the movie gets from the production company to movie theaters
• Projection - how the theater presents the movie

Production

With an $800 consumer digital camcorder, a stack of tapes, a computer and some video-editing software, you could make a digital movie. But there are a couple of problems with this approach. First, your image resolution won't be that great on a big movie screen. Second, your movie will look like news footage, not a normal theatrical film. Conventional video has a completely different look from film, and just about anybody can tell the difference in a second. Film and video differ a lot in image clarity, depth of focus and color range, but the biggest contrast is frame rate. Film cameras normally shoot at 24 frames per second, while most U.S. television video cameras shoot at 30 frames per second (29.97 per second, to be exact).

MOBILE AD HOC NETWORK

Contributor:Anoop V.M.

Definition
In recent years communication technology and services have advanced. Mobility has become very important, as people want to communicate anytime, from and to anywhere. In areas where little or no infrastructure is available, or where the existing wireless infrastructure is expensive and inconvenient to use, Mobile Ad hoc Networks (MANETs) are becoming useful. They are going to become an integral part of next-generation mobile services.
A MANET is a collection of wireless nodes that can dynamically form a network to exchange information without using any pre-existing fixed network infrastructure. The special features of MANET bring this technology great opportunity together with severe challenges.

The military, tactical and other security-sensitive operations are still the main applications of ad hoc networks, although there is a trend to adopt ad hoc networks for commercial uses because of their unique properties. However, they face a number of problems. During the last decade, advances in both hardware and software techniques have made mobile hosts and wireless networking common and ubiquitous. Generally there are two distinct approaches for enabling wireless mobile units to communicate with each other:

Infrastructured
Wireless mobile networks have traditionally been based on the cellular concept and relied on good infrastructure support, in which mobile devices communicate with access points like base stations connected to the fixed network infrastructure. Typical examples of this kind of wireless networks are GSM, UMTS, WLL, WLAN, etc.

Infrastructureless
In the infrastructureless approach, the mobile wireless network is commonly known as a mobile ad hoc network (MANET) [1, 2]. A MANET has many important applications, because in many contexts information exchange between mobile units cannot rely on any fixed network infrastructure, but only on the rapid configuration of wireless connections on the fly. Wireless ad hoc networks are an independent, wide area of research and applications in their own right, rather than merely a complement to the cellular system. In this paper, we describe the fundamental problems of ad hoc networking by giving its research background, including the concept, features, status and applications of MANET. Some of the technical challenges MANET poses are also presented, on the basis of which the paper points out the related core barriers. Some of the key research issues for ad hoc networking technology are then discussed in detail, issues that are expected to promote the development and accelerate the commercial application of MANET technology.
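
As an illustration of how nodes can find each other without any infrastructure, the sketch below shows on-demand route discovery by flooding, in the spirit of protocols such as AODV and DSR (heavily simplified; the topology and the breadth-first search are illustrative, not a full protocol implementation).

    # Minimal sketch of on-demand route discovery by flooding (simplified).
    # Nodes only know their current radio neighbours; there is no infrastructure.
    from collections import deque

    def discover_route(neighbours, source, destination):
        """Breadth-first flood of route requests; returns the first route found."""
        queue = deque([[source]])
        visited = {source}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == destination:
                return path                       # a route reply would retrace this path
            for nxt in neighbours.get(node, []):  # rebroadcast to current neighbours
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(path + [nxt])
        return None                               # destination currently unreachable

    # Hypothetical topology: A-B-C-E and A-D-E
    neighbours = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "E"],
                  "D": ["A", "E"], "E": ["C", "D"]}
    print(discover_route(neighbours, "A", "E"))   # ['A', 'D', 'E']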

Reference:

http://www.antd.nist.gov/wahn_mahn.shtml

http://www.mcs.vuw.ac.nz/cgi-bin/wiki/dsrg?MeshNetworking

4G

Contributor: Nitin S. Madhu

4G is short for fourth-generation wireless, the stage of broadband mobile communications that will supersede the third generation (3G). While neither standards bodies nor carriers have concretely defined or agreed upon what exactly 4G will be, it is expected that end-to-end IP and high-quality streaming video will be among 4G's distinguishing features. Fourth-generation networks are likely to use a combination of WiMAX and WiFi.

Technologies employed by 4G may include SDR (software-defined radio) receivers, OFDM (Orthogonal Frequency Division Multiplexing), OFDMA (Orthogonal Frequency Division Multiple Access), MIMO (multiple-input/multiple-output) technologies, UMTS and TD-SCDMA. All of these delivery methods are typified by high rates of data transmission and packet-switched transmission protocols. 3G technologies, by contrast, are a mix of packet- and circuit-switched networks.
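
Of the techniques listed above, OFDM is easy to illustrate: data symbols are placed on orthogonal subcarriers and combined with an inverse FFT, and a cyclic prefix is prepended to absorb multipath delay. The sketch below is a conceptual toy model; the subcarrier count and prefix length are arbitrary choices, not 4G parameters.

    # Conceptual OFDM sketch (illustrative parameters only).
    import numpy as np

    num_subcarriers = 64
    cyclic_prefix = 16

    # Random QPSK symbols, one per subcarrier
    bits = np.random.randint(0, 2, (num_subcarriers, 2))
    symbols = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

    time_domain = np.fft.ifft(symbols)                       # multiplex onto subcarriers
    ofdm_symbol = np.concatenate([time_domain[-cyclic_prefix:], time_domain])

    # The receiver strips the prefix and recovers the symbols with a forward FFT.
    recovered = np.fft.fft(ofdm_symbol[cyclic_prefix:])
    print(np.allclose(recovered, symbols))                   # True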

A Japanese company, NTT DoCoMo, is testing 4G communication at 100 Mbps for mobile users and up to 1 Gbps while stationary. By comparison, a typical 3G connection allows transmission at 384 kbps for mobile systems and 2 Mbps for stationary systems.

The high speeds offered by 4G will create new markets and opportunities for both traditional and startup telecommunications companies. 4G networks, when coupled with cellular phones equipped with higher-quality digital cameras and even HD capabilities, will enable vlogs to go mobile, as has already occurred with text-based moblogs. New models for collaborative citizen journalism are likely to emerge as well in areas with 4G connectivity.

Reference :

http://searchmobilecomputing.techtarget.com/sDefinition/0,,sid40_gci749934,00.html
http://en.wikipedia.org/wiki/4G

Saturday, July 14, 2007

Smart Dust

Contributor:Saritha Mary Zachariah

Picture being able to scatter hundreds of tiny sensors around a building to monitor temperature or humidity. Or deploying, like pixie dust, a network of minuscule, remote sensor chips to track enemy movements in a military operation.

"Smart dust" devices are tiny wireless microelectromechanical sensors (MEMS) that can detect everything from light to vibrations. Thanks to recent breakthroughs in silicon and fabrication techniques, these "motes" could eventually be the size of a grain of sand, though each would contain sensors, computing circuits, bidirectional wireless communications technology and a power supply. Motes would gather scads of data, run computations and communicate that information using two-way band radio between motes at distances approaching 1,000 feet.

Smartdust is a term used to describe groups of very small robots which may be used for monitoring and detection. Currently, the scale of smartdust is rather small, with single sensors the size of a deck of playing cards, but the hope is to eventually have robots as small as a speck of dust. Individual sensors of smartdust are often referred to as motes because of their small size. These devices are also known as MEMS.

MEMS stands for Micro Electro-Mechanical Systems, referring to functional machine systems with components measured in micrometers. MEMS is often viewed as a stepping stone between conventional macroscale machinery and futuristic nanomachinery. MEMS-precursors have been around for a while in the form of microelectronics, but these systems are purely electronic, incapable of processing or outputting anything but a series of electrical impulses. However, modern MEMS-fabrication techniques are largely based upon the same technology used to manufacture integrated circuits, that is, film-deposition techniques which employ photolithography.

Largely considered an enabling technology rather than an end in itself, the fabrication of MEMS is seen by engineers and technologists as another welcome advance in our ability to synthesize a wider range of physical structures designed to perform useful tasks. Most often mentioned in conjunction with MEMS is the idea of a "lab-on-a-chip," a device that processes tiny samples of a chemical and returns useful results. This could prove quite revolutionary in the area of medical diagnosis, where lab analysis results in added costs for medical coverage, delays in diagnosis and inconvenient paperwork.

Brief description of the operation of the mote:

The Smart Dust mote is run by a microcontroller that not only determines the tasks performed by the mote, but controls power to the various components of the system to conserve energy. Periodically the microcontroller gets a reading from one of the sensors, which measure one of a number of physical or chemical stimuli such as temperature, ambient light, vibration, acceleration, or air pressure, processes the data, and stores it in memory. It also occasionally turns on the optical receiver to see if anyone is trying to communicate with it. This communication may include new programs or messages from other motes. In response to a message or upon its own initiative the microcontroller will use the corner cube retroreflector or laser to transmit sensor data or a message to a base station or another mote.

Longer description of the operation of the mote:

The primary constraint in the design of the Smart Dust motes is volume, which in turn puts a severe constraint on energy since we do not have much room for batteries or large solar cells. Thus, the motes must operate efficiently and conserve energy whenever possible. Most of the time, the majority of the mote is powered off with only a clock and a few timers running. When a timer expires, it powers up a part of the mote to carry out a job, then powers off. A few of the timers control the sensors that measure one of a number of physical or chemical stimuli such as temperature, ambient light, vibration, acceleration, or air pressure. When one of these timers expires, it powers up the corresponding sensor, takes a sample, and converts it to a digital word. If the data is interesting, it may either be stored directly in the SRAM or the microcontroller is powered up to perform more complex operations with it. When this task is complete, everything is again powered down and the timer begins counting again.

Another timer controls the receiver. When that timer expires, the receiver powers up and looks for an incoming packet. If it doesn't see one after a certain length of time, it is powered down again. The mote can receive several types of packets, including ones that are new program code that is stored in the program memory. This allows the user to change the behavior of the mote remotely. Packets may also include messages from the base station or other motes. When one of these is received, the microcontroller is powered up and used to interpret the contents of the message. The message may tell the mote to do something in particular, or it may be a message that is just being passed from one mote to another on its way to a particular destination. In response to a message or to another timer expiring, the microcontroller will assemble a packet containing sensor data or a message and transmit it using either the corner cube retroreflector or the laser diode, depending on which it has. The corner cube retroreflector transmits information just by moving a mirror and thus changing the reflection of a laser beam from the base station. This technique is substantially more energy efficient than actually generating some radiation. With the laser diode and a set of beam scanning mirrors, we can transmit data in any direction desired, allowing the mote to communicate with other Smart Dust motes.
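
The timer-driven, mostly-powered-off behaviour described above can be captured in a tiny event-driven model like the sketch below (the intervals and task names are made-up placeholders, not measured Smart Dust figures).

    # Simplified model of a duty-cycled mote: wake on a timer, do one job, power down.
    import heapq

    def simulate(duration_s):
        """Event-driven loop over (next_wakeup_time, task) timers."""
        timers = [(0.0, "sample_sensor"), (5.0, "listen_for_packet")]
        heapq.heapify(timers)
        log = []
        while timers and timers[0][0] < duration_s:
            now, task = heapq.heappop(timers)
            if task == "sample_sensor":
                log.append((now, "sensor powered up, sample taken, stored in SRAM"))
                heapq.heappush(timers, (now + 10.0, "sample_sensor"))
            elif task == "listen_for_packet":
                log.append((now, "receiver powered up briefly, no packet, powered down"))
                heapq.heappush(timers, (now + 30.0, "listen_for_packet"))
        return log

    for t, event in simulate(60.0):
        print(f"t={t:5.1f}s  {event}")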

Smartdust has theoretical applications in virtually every field of science and industry. Research in the technologies is well-funded and sturdily based, and it is generally accepted that it is simply a matter of time before smartdust exists in a functional manner.

The Defense Advanced Research Projects Agency (DARPA) has been funding smartdust research heavily since the late 1990s, seeing virtually limitless applications in the sphere of modern warfare. So far the research has been promising, with prototype smartdust sensors as small as 5mm. Costs have been dropping rapidly with technological innovations, bringing individual motes down to as little as $50 each, with hopes of dropping below $1 per mote in the near future.

Applications

  • Defense-related sensor networks
    • battlefield surveillance, treaty monitoring, transportation monitoring, scud hunting, ...
  • Virtual keyboard
    • Glue a dust mote on each of your fingernails. Accelerometers will sense the orientation and motion of each of your fingertips, and talk to the computer in your watch. QWERTY is the first step to proving the concept, but you can imagine much more useful and creative ways to interface with your computer if it knows where your fingers are: sculpt 3D shapes in virtual clay, play the piano, gesture in sign language and have the computer translate, ...
    • Combined with a MEMS augmented-reality heads-up display, your entire computer I/O would be invisible to the people around you. Couple that with wireless access and you need never be bored in a meeting again! Surf the web while the boss rambles on and on.
  • Inventory Control
    • The carton talks to the box, the box talks to the palette, the palette talks to the truck, and the truck talks to the warehouse, and the truck and the warehouse talk to the internet. Know where your products are and what shape they're in any time, anywhere. Sort of like FedEx tracking on steroids for all products in your production stream from raw materials to delivered goods.
  • Product quality monitoring
    • temperature, humidity monitoring of meat, produce, dairy products
    • impact, vibration, temp monitoring of consumer electronics
      • failure analysis and diagnostic information, e.g. monitoring vibration of bearings for frequency signatures indicating imminent failure (back up that hard drive now!)
  • Smart office spaces
    • The Center for the Built Environment has fabulous plans for the office of the future in which environmental conditions are tailored to the desires of every individual. Maybe soon we'll all be wearing temperature, humidity, and environmental comfort sensors sewn into our clothes, continuously talking to our workspaces which will deliver conditions tailored to our needs. No more fighting with your office mates over the thermostat.

Energy use is a major area of research in the field of smartdust. With devices so small, batteries present a massive addition of weight. It is therefore important to use absolutely minimal amounts of energy in communicating the data they collect to central hubs where it can be accessed by humans.

Development of smartdust continues at a breakneck speed, and it will no doubt soon be commonplace to have a vast army of thousands or millions of nearly invisible sensors monitoring our environment to ensure our safety and the efficiency of the machines around us.

REFERENCES

1. www.computerworld.com

2. www.robotics.eecs.berkeley.edu

3. www.bsac.eecs.berkeley.edu

4. www.nanotech-now.com

5. www.wikipedia.org

Blade servers

Contributor: George Mamman Koshy


Blade servers are self-contained computer servers, designed for high density. Whereas a standard rack-mount server can exist with (at least) a power cord and network cable, blade servers have many components removed for space, power and other considerations while still having all the functional components to be considered a computer. A blade enclosure provides services such as power, cooling, networking, various interconnects and management - though different blade providers have differing principles around what should and should not be included in the blade itself (and sometimes in the enclosure altogether). Together these form the blade system.

In a standard server-rack configuration, 1U (one rack unit, 19" wide and 1.75" tall) is the minimum possible size of any equipment. The principal benefit of, and the reason behind the push towards, blade computing is that components are no longer restricted to these minimum size requirements. Since the most common computer rack form-factor is 42U high, the number of discrete computer devices that can be directly mounted in a rack is limited to 42. Blades do not have this limitation; densities of 100 computers per rack and more are achievable with the current generation of blade systems.

Server blade

In the purest definition of computing (a Turing machine, simplified here), a computer requires only;

1. memory to read input commands and data

2. a processor to perform commands manipulating that data, and

3. memory to store the results.

Today (contrast with the first general-purpose computer) these are implemented as electrical components requiring (DC) power, which produces heat. Other components such as hard drives, power supplies, storage and network connections, basic IO (such as Keyboard, Video and Mouse and serial) etc. only support the basic computing function, yet add bulk, heat and complexity, not to mention moving parts that are more prone to failure than solid-state components.

In practice, these components are all required if the computer is to perform real-world work. In the blade paradigm, most of these functions are removed from the blade computer, being either provided by the blade enclosure (e.g. DC power supply), virtualised (e.g. iSCSI storage, remote console over IP) or discarded entirely (e.g. serial ports). The blade itself becomes vastly simpler, hence smaller and (in theory) cheaper to manufacture.

Blade enclosure

The enclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade computers require components that are bulky, hot and space-inefficient, and duplicated across many computers that may or may not be performing at capacity. By locating these services in one place and sharing them between the blade computers, the overall utilization is more efficient. The specifics of which services are provided and how vary by vendor.

Power

Computers operate over a range of DC voltages, yet power is delivered from utilities as AC, and at higher voltages than required within the computer. Converting this current requires power supply units (or PSUs). To ensure that the failure of one power source does not affect the operation of the computer, even entry-level servers have redundant power supplies, again adding to the bulk and heat output of the design.

The blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may be in the form of a power supply in the enclosure or a dedicated separate PSU supplying DC to multiple enclosures. This setup not only reduces the number of PSUs required to provide a resilient power supply, but it also improves efficiency because it reduces the number of idle PSUs. In the event of a PSU failure the blade chassis throttles down individual blade server performance until it matches the available power. This is carried out in steps of 12.5% per CPU until power balance is achieved.
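
The throttling rule lends itself to a one-function sketch, shown below; only the 12.5%-per-step figure comes from the text, while the wattage numbers are invented for illustration.

    # Sketch of stepwise throttling after a PSU failure (illustrative wattages).
    def throttle_until_balanced(blade_draw_watts, available_watts):
        """Reduce every blade's performance in 12.5% steps until total draw fits."""
        scale = 1.0
        while sum(w * scale for w in blade_draw_watts) > available_watts and scale > 0:
            scale -= 0.125                       # one 12.5% step per CPU
        return scale

    blades = [350, 350, 350, 350]                # hypothetical per-blade draw
    print(throttle_until_balanced(blades, available_watts=1100))   # 0.75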

Cooling

During operation, electrical and mechanical components produce heat, which must be displaced to ensure the proper functioning of the components. In blade enclosures, as in most computing systems, heat is removed with fans.

A frequently underestimated problem when designing high-performance computer systems is the conflict between the amount of heat a system generates and the ability of its fans to remove that heat. The blade's shared power and cooling means that it does not generate as much heat as traditional servers. Newer blade enclosure designs feature high-speed, adjustable fans and control logic that tune the cooling to the system's requirements.

At the same time, the increased density of blade server configurations can still result in higher overall demands for cooling when a rack is populated at over 50%. This is especially true with early generation blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling capacity than a fully populated rack of standard 1U servers.

Networking

Computers are increasingly being produced with high-speed, integrated network interfaces, and most are expandable to allow for the addition of connections that are faster, more resilient and run over different media (copper and fiber). These may require extra engineering effort in the design and manufacture of the blade, consume space in both the installation and capacity for installation (empty expansion slots) and hence more complexity. High-speed network topologies require expensive, high-speed integrated circuits and media, while most computers do not utilise all the bandwidth available.

The blade enclosure provides one or more network buses to which the blade will connect, and either presents these ports individually in a single location (versus one in each computer chassis), or aggregates them into fewer ports, reducing the cost of connecting the individual devices. These may be presented in the chassis itself, or in networking blades.

Storage

While computers typically need hard-disks to store the operating system, application and data for the computer, these are not necessarily required locally. Many storage connection methods (e.g. FireWire, SATA, SCSI, DAS, Fibre Channel and iSCSI) are readily moved outside the server, though not all are used in enterprise-level installations. Implementing these connection interfaces within the computer presents similar challenges to the networking interfaces (indeed iSCSI runs over the network interface), and similarly these can be removed from the blade and presented individually or aggregated either on the chassis or through other blades.

The ability to boot the blade from a storage area network (SAN) allows for an entirely disk-free blade. This may have higher processor density or better reliability than systems having individual disks on each blade.

Uses

Blade servers are ideal for specific purposes such as web hosting and cluster computing. Individual blades are typically hot-swappable. As more processing power, memory and I/O bandwidth are added to blade servers, they are being used for larger and more diverse workloads.

Although blade server technology in theory allows for open, cross-vendor solutions, at this stage of development of the technology, users find there are fewer problems when using blades, racks and blade management tools from the same vendor.

Eventual standardization of the technology might result in more choices for consumers; increasing numbers of third-party software vendors are now entering this growing field.

Blade servers are not, however, the answer to every computing problem. They may best be viewed as a form of productized server farm that borrows from mainframe packaging, cooling, and power supply technology. For large problems, server farms of blade servers are still necessary, and, because of blade servers' high power density, these farms can suffer even more acutely from the HVAC problems that affect large conventional server farms.


REFERENCES

  1. www.wikipedia.org
  2. www.SearchDataCenter.com
  3. www.serverwatch.com

Configurable processors

Contributor: Thomas Uthupp Koshy

ABSTRACT

Configurable processors first appeared in the late 1990s with the promise that they would change the design and development of systems-on-chip (SoCs). The fundamental premise was a methodology that would relieve developers of being stuck with processors carrying features they didn't need, and make it easier for them to add the features they wanted, without the prohibitive cost and lengthy development time associated with fixed processor architectures. Instead, they could design exactly the processor dictated by their system requirements, with only the features and functions they chose. This would lead to a better product, better product differentiation and faster time-to-market.

A configurable processor core allows the system designer to custom-tailor a microprocessor to fit the intended application (or set of applications) on the SoC more closely. A "closer fit" means that the processor's register set is sized appropriately for the intended task and that the processor's instructions also closely fit that task. For example, a processor tailored to execute digital audio applications efficiently may need a set of 24-bit registers for the audio data and a set of specialized instructions that operate on 24-bit audio data in a minimum number of clock cycles.
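
To make the benefit concrete, here is a purely illustrative cycle-count comparison; the cycle figures are assumptions chosen for the example, not measurements from any real core:

    # Illustrative back-of-the-envelope model of why a tailored instruction helps.
    # All cycle counts are assumed for the sake of the example.

    SAMPLES = 48_000             # one second of 48 kHz audio
    GENERIC_CYCLES_PER_MAC = 4   # load, multiply, add, store on a generic core (assumed)
    FUSED_CYCLES_PER_MAC = 1     # single fused 24-bit MAC on a tailored core (assumed)

    generic_total = SAMPLES * GENERIC_CYCLES_PER_MAC
    fused_total = SAMPLES * FUSED_CYCLES_PER_MAC

    print(f"generic: {generic_total} cycles, tailored: {fused_total} cycles")
    print(f"speed-up: {generic_total / fused_total:.0f}x on this (assumed) workload")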

However, there were several issues to be resolved before this new approach was ready for use in large-scale SoC designs. These included design problems, development tools issues and the fact that systems built with configurable processors were difficult to verify. Over the last five years, these issues have been addressed, and the promise of configurability is being realized by developers with the design expertise and understanding of target application requirements needed to take on this new design approach. The methodology has clear benefits for designers in today’s environment of extreme competitive pressures—including performance increases and decreases in power consumption and area—as well as certain tradeoffs.

What is a Configurable Processor?

A configurable processor is one that can be modified or extended to address specific design issues by changing the processor's feature set. Developers can add their product's differentiating "secret sauce" to perform a task much faster, in a much smaller area or with less power consumption. Speed, area and power optimizations can be traded off as the designer chooses an optimal balance among these opposing factors. Other major benefits of configurable processors are flexibility, the absence of a fixed architectural framework and the use of standard hardware description languages (HDLs) rather than proprietary languages.

Configurability and extendibility are two distinct benefits of configurable processors. Configurability lets the designer change the processor’s predefined architectural framework to meet design requirements. Examples of configurability include altering cache sizes or the number of registers in a register file, or deciding whether to include a multiplier or barrel shifter. Extendibility means that additions can be made to the processor.
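
One hypothetical way to picture configurability is as a set of build-time parameters chosen by the designer; the option names below are invented for illustration and do not correspond to any vendor's tool:

    # Hypothetical build-time configuration record for a configurable core.
    # Field names are invented for illustration only.

    from dataclasses import dataclass

    @dataclass
    class CoreConfig:
        icache_kb: int = 16        # instruction-cache size
        dcache_kb: int = 16        # data-cache size
        register_count: int = 32   # size of the general register file
        has_multiplier: bool = True
        has_barrel_shifter: bool = False

    # An audio-oriented variant: bigger register file, barrel shifter included.
    audio_core = CoreConfig(register_count=64, has_barrel_shifter=True)
    print(audio_core)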

There are several methodologies for finding the ideal mix of features in a configurable processor design. At one extreme is the point-and-click method, which is easy to use. At the other end are methods that require designers to learn new proprietary languages. The middle ground is a point-and-click method that also enables designers to use Verilog or VHDL (Very high speed integrated circuit Hardware Description Language) to specify extension logic, allowing the work to be done quickly and with conventional EDA (Electronic Design Automation) tools.

The point-and-click methodology fits nicely into existing EDA design flows. Unfortunately, such easy-to-use methodologies can be restrictive. It might seem that using a proprietary language would provide all the flexibility needed, but that’s not actually the case. Processors are designed with a fixed framework in place within which a proprietary language must remain. As a result, constraints are imposed on the designer, leaving some types of configurability impossible to accomplish.

However, with the middle-of-the-road approach, there’s no limit on what designers can do to the processor, such as changing the bus structure or the bus interfaces, or extending the architecture to perform special tasks never contemplated by the original processor architects. The architect of a configurable processor designed using this approach does not need to consider every possible way in which the core may be used. With a proprietary language, however, designers are constrained by the limits of the language and the surrounding framework as designed in advance by the architects.

Realizing the Promise

Typically, designing with a configurable processor is a straightforward process. Using the vendor's default configuration, plus some educated guesses about how that configuration should be modified, the designer generates the processor's cycle-accurate simulator (CAS). An un-optimized version of the application is then simulated to determine whether it meets performance, area and power requirements. If not, the designer uses a profiling tool to identify cache misses, pipeline stalls and hot spots. The configuration is then changed or extension instructions are added, and the process is repeated.

This is an iterative and often creative process for finding the proper balance among speed, area and power. Fundamentally, this process generates empirical data used to make key architectural decisions. It is therefore useful to keep a spreadsheet of results so that, during iterations, one can determine the impacts of configuration decisions on speed, area and power.
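
The loop described above can be sketched roughly as follows; the stand-in simulator and its numbers are invented for illustration, with the vendor's cycle-accurate simulator and profiler filling those roles in practice:

    # Sketch of the iterative configure-simulate-evaluate loop described above.
    # fake_simulate() is a stand-in returning made-up figures; in practice the
    # vendor's cycle-accurate simulator and profiler would supply them.

    import csv

    def fake_simulate(config):
        """Placeholder for the cycle-accurate simulator: invent speed/area/power
        figures that improve as the cache grows (purely illustrative)."""
        return {
            "cycles": 1_000_000 // config["icache_kb"],
            "area_mm2": 1.0 + 0.02 * config["icache_kb"],
            "power_mw": 40 + 0.5 * config["icache_kb"],
        }

    def tune(goals, max_iterations=6):
        config = {"icache_kb": 4}
        history = []
        for i in range(max_iterations):
            results = fake_simulate(config)
            history.append({"iteration": i, **config, **results})
            if results["cycles"] <= goals["cycles"]:
                break                      # performance goal met
            config = {**config, "icache_kb": config["icache_kb"] * 2}  # try a bigger cache

        # Keep the 'spreadsheet' of results that the text recommends.
        with open("tuning_log.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=history[0].keys())
            writer.writeheader()
            writer.writerows(history)
        return config, history

    print(tune({"cycles": 50_000}))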

Processor tailoring offers several benefits. Tailored instructions perform their assigned tasks in fewer clock cycles. For real-time applications such as audio processing, the reduction in clock cycles directly lowers the required operating clock rate, which in turn cuts power dissipation. Lower power dissipation extends battery life for portable systems and reduces the cooling costs of all systems. Lower processor clock rates also allow the SoC to be fabricated in slower IC-fabrication technologies that are less expensive and dissipate less static power.
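
A simple worked example of the power argument, using the usual dynamic-power relation P ≈ C·V²·f with assumed capacitance and voltage values:

    # Worked example of the dynamic-power argument, P_dyn ≈ C * V^2 * f.
    # Capacitance and voltage values are assumed purely for illustration.

    C = 1.0e-9      # effective switched capacitance, farads (assumed)
    V = 1.2         # supply voltage, volts (assumed)

    def dynamic_power(freq_hz):
        return C * V * V * freq_hz

    # If tailored instructions finish the audio task in half the cycles,
    # the same real-time deadline can be met at half the clock rate.
    p_before = dynamic_power(400e6)   # 400 MHz generic core (assumed)
    p_after = dynamic_power(200e6)    # 200 MHz tailored core (assumed)
    print(f"{p_before*1e3:.1f} mW -> {p_after*1e3:.1f} mW")  # 576.0 mW -> 288.0 mW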

The bottom line is that configurable processors will become increasingly important tools for designers and their companies in meeting the extreme competitive pressure that their new products will face in the marketplace. Despite their complexity, they can deliver impressive results. With proper training, their use can deliver a significantly differentiated product that cannot be matched using conventional methodologies. For designers who are in the game to win, configurable processors are well worth the effort.

REFERENCES

  1. www.rtcmagazine.com
  2. www.tensilica.com
  3. www.cera2.com

Flŭd backup

Contributor: Aarti S. Nair

INTRODUCTION

Flŭd backup is a completely decentralized system for creating and maintaining online backups of data. Its architecture allows it to provide effectively unlimited backup resources at no cost, in a way that is highly resistant to failure.

flŭd is completely decentralized: there is no central server or authority, and no controlling company, organization or individual. All participants in the flŭd network share resources using an enforced fairness mechanism; in order to gain storage resources, a participant must provide resources. These sharing relationships are symmetrical and cheat-resilient, and are sometimes referred to as smart contracts. flŭd backs up your data by pushing encrypted pieces of each file to a multitude of other computers running the flŭd protocol, using techniques intended to produce a highly resilient and secure data-archival mechanism at no cost.

To tolerate a massive failure of participating nodes in the flŭd network, erasure coding techniques are used to protect data. As of January 2007, these techniques allow virtually all data to be recovered from the network even when nearly half of the nodes have failed; the generous erasure coding employed by each flŭd node should allow even participants whose storing peers suffer such failures to recover all of their data.
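
As a rough illustration of the k-of-n property behind erasure coding (the parameters and failure rate below are invented for the example, not flŭd's actual settings):

    # Rough illustration of the k-of-n property of erasure coding.
    # Parameters are invented for the example, not flŭd's actual settings.

    import random

    K, N = 20, 60            # file split into K data blocks, expanded to N coded blocks
    NODE_FAILURE_RATE = 0.45 # assume nearly half the storing nodes are lost

    def file_recoverable():
        """A file is recoverable as long as any K of its N coded blocks survive."""
        surviving = sum(1 for _ in range(N) if random.random() > NODE_FAILURE_RATE)
        return surviving >= K

    trials = 10_000
    recovered = sum(file_recoverable() for _ in range(trials))
    print(f"recovered in {100 * recovered / trials:.2f}% of trials")
    # with this much redundancy the printed rate is typically well above 99%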

Individual nodes in the flŭd network are allowed to use any method they choose to decide where to store data and with whom to enter into resource-sharing relationships. This agency allows for diversity in storage strategy. The flŭd backup prototype maintains localized trust records, which serve as a history of the actions of other known agents. These trust metrics are used to decide which nodes are most reliable, and each node tries to maximize its interactions with highly reliable peers while minimizing those with less reliable ones.
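
A hypothetical sketch of such a localized trust record follows; the scoring scheme is invented for illustration and is not flŭd's actual metric:

    # Hypothetical local trust ledger of the kind described above.
    # The scoring scheme is invented for illustration; flŭd's real metrics differ.

    from collections import defaultdict

    class TrustLedger:
        def __init__(self):
            # peer_id -> [successful interactions, failed interactions]
            self.records = defaultdict(lambda: [0, 0])

        def record(self, peer_id, success):
            self.records[peer_id][0 if success else 1] += 1

        def score(self, peer_id):
            ok, bad = self.records[peer_id]
            return (ok + 1) / (ok + bad + 2)   # Laplace-smoothed success ratio

        def preferred_peers(self, k=3):
            return sorted(self.records, key=self.score, reverse=True)[:k]

    ledger = TrustLedger()
    for outcome in [("nodeA", True), ("nodeA", True), ("nodeB", False), ("nodeB", True)]:
        ledger.record(*outcome)
    print(ledger.preferred_peers())   # nodeA ranks above nodeB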

The main attractions of flŭd backup are:

  1. Free: flŭd is decentralized, and one of the main benefits of this is that there are no central operating costs.

  2. Resilient: flŭd is designed to survive not only hardware failures, network glitches and malicious software, but also correlated failures and natural catastrophes.

  3. Indestructible: flŭd is immune to the human operator error present in centralized backup services.

  4. Secure and private: All data encryption is done before data leaves your computer, and only you know the credentials necessary to restore your data.

  5. Easy: flŭd provides reasonable default settings and a very simple set-it-and-forget-it interface. Recovering data requires only a single identification credential.

SCOPE & UTILITY

    • The center of gravity of computing has been moving away from centralization for several decades. All of these decentralized computing resources provide an incredibly diverse and resilient platform for creating the next generation of software and services.
    • flŭd is targeted squarely at individual end-users and the small office / home office. As such, the software is very simple to install and run: no technical expertise is needed, no servers need to be installed or maintained, and no tech support staff is required.
    • Indestructible backup: neither flood, famine, hurricane, a nuclear strike affecting many nodes in the flŭd network, nor the complete failure or death of the flŭd backup software team (or supporting companies) will render flŭd backup unusable.
    • Out-of-the-box operation: if the user does nothing more than install the software, it should have reasonable defaults for backing up important data (home directories, My Documents, etc.). No configuration is required (though customized configuration is possible).
    • flŭd has been designed to encourage the richness of diversity. The software is open.

Multiple Graphics Processing Unit

Contributor: Aravind S.K.

In order to increase graphics performance, two or more GPUs can be used to render graphics simultaneously. Scan-Line Interleave (SLI) from 3dfx was a method for linking two (or more) video cards or chips together to produce a single output. It is an application of parallel processing to computer graphics, meant to increase the processing power available for rendering. 3dfx introduced SLI in 1998, but the company has since left the scene, and the two major players, NVIDIA and ATI Technologies, now have their own multi-GPU solutions.

NVIDIA Corporation reintroduced the name SLI (renamed Scalable Link Interface) and intends it for modern computer systems based on the PCI Express bus. In essence, SLI is two graphics processors doing the work of one: each graphics card is assigned roughly 50% of the visual workload for a given scene, and both GPUs render their share concurrently, effectively doubling rendering throughput. SLI offers two rendering methods and one anti-aliasing method for splitting the work between the video cards (a simple sketch of the two splitting schemes follows the list below):

* Split Frame Rendering (SFR): The frame to be rendered is analyzed in order to split the workload roughly 50/50 between the two GPUs. To do this, the frame is divided horizontally, in varying ratios depending on scene geometry.

* Alternate Frame Rendering (AFR): Here, each GPU renders entire frames in alternation: one GPU processes the even-numbered frames and the second processes the odd-numbered frames, one after the other.

* SLI Antialiasing: This is a standalone rendering mode that offers up to double the antialiasing performance by splitting the antialiasing workload between the two graphics cards, giving superior image quality. One GPU renders with a sample pattern slightly offset from the usual one (for example, slightly up and to the right), and the second GPU uses a pattern offset by an equal amount in the opposite direction (down and to the left). Compositing the two results gives higher image quality than is normally possible.
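
Returning to the two rendering methods above, here is a toy sketch of how SFR and AFR divide the work; the split ratio and frame numbering are simplified assumptions:

    # Toy illustration of how SFR and AFR divide work between two GPUs.
    # The split ratio and frame numbering are simplified assumptions.

    def assign_afr(frame_number):
        """Alternate Frame Rendering: even frames go to GPU 0, odd frames to GPU 1."""
        return frame_number % 2

    def assign_sfr(frame_height, split_ratio=0.5):
        """Split Frame Rendering: the top portion of each frame goes to GPU 0,
        the rest to GPU 1.  In a real driver the ratio shifts with scene geometry."""
        split_line = int(frame_height * split_ratio)
        return (0, split_line), (split_line, frame_height)

    print([assign_afr(f) for f in range(6)])   # [0, 1, 0, 1, 0, 1]
    print(assign_sfr(1080, split_ratio=0.45))  # ((0, 486), (486, 1080))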

ATI Technologies has named its multi-GPU solution CrossFire. This technology also makes use of two PCI Express cards. The CrossFire system supports four different rendering modes, each with its own specific advantages and disadvantages:

* SuperTiling: It divides the screen up like a checkerboard, allocating adjacent squares ('quads') to alternate GPUs; one card renders the white squares and the other the black (a small sketch of this assignment follows the list below).

* Scissor: Divides the screen into two rectangles, one above the other. This render mode is more commonly known as Split Frame Rendering (SFR), which is how NVIDIA refers to it in SLI. Using Scissor mode means that the system has to choose the "cutting point" carefully in order to balance the load.

* Alternate Frame Rendering: Alternate Frame Rendering (as the name suggests) sets one GPU to render the odd frames and the other the even frames. While this produces a high performance boost, it is incompatible with games that use render-to-texture functions, because one card does not have direct access to the texture buffer of the other.

* CrossFire Super Anti-aliasing: This mode is intended to improve the quality of the rendered frames. Super AA can double the anti-aliasing factor with little or no drop in frame rate.
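
As referenced above, here is a toy sketch of SuperTiling's checkerboard assignment of quads to the two GPUs; the tile size and pixel coordinates are assumed for the example:

    # Toy sketch of SuperTiling's checkerboard assignment of screen 'quads'.
    # Tile size and coordinates are assumed for the example.

    TILE = 32  # pixels per tile edge (assumed)

    def supertile_owner(x, y):
        """Return which GPU (0 or 1) renders the tile containing pixel (x, y):
        adjacent tiles alternate like the colours of a checkerboard."""
        return ((x // TILE) + (y // TILE)) % 2

    # Owners of the first few tiles in the top-left corner of a frame:
    for row in range(4):
        print([supertile_owner(col * TILE, row * TILE) for col in range(8)])
    # prints alternating 0s and 1s in a checkerboard pattern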

The scope for multi-GPU systems lies in:

· Higher-performance gaming computers

· Better physics processing (in the field of animation)

to name just a few areas, along with many other uses in improved graphics rendering.

References:

· www.slizone.com

· http://www.bit-tech.net/hardware/2004/06/30/multi_gpu_tech/1

· http://en.wikipedia.org/wiki/Scalable_Link_Interface

· http://www.extremetech.com/article2/0,1697,2136956,00.asp