Wednesday, March 26, 2008

Universal Mobile Telecommunications System - UMTS

UMTS
Universal Mobile Telecommunications System (UMTS) is a third-generation (3G) technology that is evolving toward 4G. It is standardized by the 3GPP and is the European answer to the requirements for 3G cellular radio systems.

For effective, efficient communications, standardization is critical, and nowhere is this more evident than in the areas of mobile computing and cellular telephony.

If you need data access or e-mail through your cell phone, you are likely to be using one of two different technologies. In the US the main approach for voice communication is called Code Division Multiple Access (CDMA), and it is the basis for the major network services offered by Verizon Wireless and Sprint Nextel Corp., among others.

In Europe and most of the rest of the world, however, a very different technology called the Global System for Mobile Communications (GSM) has dominated the market. GSM uses a Time Division Multiple Access (TDMA) approach to frame structure. GSM service is available in the US primarily through T-Mobile USA Inc. and Cingular Wireless LLC. These carriers maintain GSM networks that are distinct from their other digital networks. Both CDMA and GSM are 2G technologies, and they have co-existed for several years. Each technology has its supporters. CDMA phones are engineered specifically for an individual carrier, whereas GSM phones make use of a removable memory card called the Subscriber Identity Module (SIM). Physically smaller than a secure digital flash memory card, a SIM card contains all the key information required to activate a phone, including the user’s telephone number, personal identification number, address book and encoded network identification details. A user can easily move a SIM from one phone to another.
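
The SIM's role is easy to picture as a simple data structure. Here's a minimal sketch in Python; the field names and values are invented for illustration, and a real SIM stores these items in a standardized card file system rather than as a record like this:

    # Illustrative only: real SIMs hold these values in a standardized
    # file system on the card, not as a simple record.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class SimCard:
        phone_number: str                  # the subscriber's number
        pin: str                           # personal identification number
        network_id: str                    # encoded network identification
        address_book: List[str] = field(default_factory=list)

    @dataclass
    class Phone:
        model: str
        sim: Optional[SimCard] = None      # no SIM, no service

    # Moving the SIM moves the subscriber's identity between handsets.
    sim = SimCard("+1-555-0100", "1234", "310-260", ["Alice", "Bob"])
    old_phone, new_phone = Phone("old GSM handset"), Phone("new GSM handset")
    old_phone.sim = sim
    old_phone.sim, new_phone.sim = None, sim   # swap the card over
    print(new_phone)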

Though GSM phones are interoperable with one another, different countries use different parts of the frequency spectrum, so “world phones” typically must be capable of using several frequency bands.

Today, the fastest-growing use of cellular networks is the transmission of all kinds of data and rich media, including Web sites, video, music, images, and maps and driving directions. The older 2G networks simply couldn’t handle the volume of traffic, and they couldn’t offer the speed needed for transmitting large files. The answer was to make the services faster and build out the networks to deal with more traffic.

Here, too, the CDMA and GSM paths continued their separate but parallel development. CDMA brought us CDMA 2000 and 1xRTT networks. The most recent developments are 1x Evolution Data Optimized, or EV-DO, and 1x Evolution Data/Voice, or EV-DV.

Similarly, GSM begat General Packet Radio Service, or GPRS, which begat Enhanced Data Rates for GSM Evolution, or EDGE. EDGE was developed to enable the transmission of large amounts of data at a high speed, 384Kbit/sec. The latest generation is called Wideband Code Division Multiple Access (WCDMA).

And this finally brings us to the Universal Mobile Telecommunications System. The International Telecommunication Union (ITU), a specialized agency of the United Nations, has attempted to coordinate these competing technologies to improve throughput and increase interoperability. The International Mobile Telecommunications 2000 (IMT-2000) standard is a 3G digital communications specification from the ITU, and the European implementation of IMT-2000 is UMTS, which is based on WCDMA. Previous cellular telephone data systems were mostly circuit-switched, requiring a dedicated connection. WCDMA is packet-switched, using Internet Protocol. The first commercial WCDMA network was launched in Japan in 2001.

Technical Details
UMTS has been specified as an integrated application for mobile voice and data systems with wide-area coverage. Using globally harmonized spectrum in paired and unpaired bands, early implementations of UMTS offer theoretical bit rates of up to 384Kbit/sec in situations where the mobile device is actually moving. The current goal is to achieve 2Mbit/sec when both ends of the connection are stationary.
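
To put those numbers in perspective, a quick back-of-the-envelope calculation shows how long a 1MB file transfer would take at each rate, ignoring protocol overhead and radio conditions; a sketch in Python:

    # Rough transfer time: file size in bits divided by link rate.
    # Ignores protocol overhead, contention and radio conditions.
    def transfer_seconds(size_megabytes, rate_kbit_per_sec):
        bits = size_megabytes * 8_000_000      # 1MB = 8,000,000 bits (decimal)
        return bits / (rate_kbit_per_sec * 1_000)

    print(transfer_seconds(1, 384))    # ~20.8 seconds at 384Kbit/sec (moving)
    print(transfer_seconds(1, 2_000))  # ~4.0 seconds at 2Mbit/sec (stationary)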

UMTS operates on radio frequencies identified by the ITU IMT-2000 specification document and licensed to operators, using a 5MHz-wide channel that simplifies deployment for network providers that have been granted large, contiguous blocks of spectrum. Most UMTS systems use frequencies between 1885 and 2025 MHz.

UMTS assigns separate carrier frequencies to incoming and outbound signals, a process called frequency division duplexing (FDD). This allows for symmetric traffic, with uplink and downlink data rates equal to each other, in contrast to technologies such as Asymmetric Digital Subscriber Line service.
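
For illustration, UMTS Band I, the band commonly deployed in Europe, pairs a 1920-1980MHz uplink range with a 2110-2170MHz downlink range. The sketch below shows the pairing idea in Python; the carrier arithmetic is simplified, since real assignments follow the 3GPP channel-numbering scheme:

    # FDD: uplink and downlink travel on separate, paired 5MHz carriers.
    # Figures are UMTS Band I; the pairing arithmetic here is simplified.
    CHANNEL_WIDTH_MHZ = 5

    def paired_carriers(uplink_start, downlink_start, count):
        """Yield (uplink, downlink) center frequencies, one per 5MHz channel."""
        for i in range(count):
            center = i * CHANNEL_WIDTH_MHZ + CHANNEL_WIDTH_MHZ / 2
            yield uplink_start + center, downlink_start + center

    for up, down in paired_carriers(1920.0, 2110.0, 3):
        print(f"uplink {up} MHz <-> downlink {down} MHz")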

Ongoing work within the 3rd Generation Partnership Project promises increased throughput speeds over the WCDMA Radio Access Network. High-Speed Downlink Packet Access and High-Speed Uplink Packet Access technologies are already standardized, and commercial operators in Asia and North America are putting them through network trials. With theoretical download speeds as high as 14.4Mbit/sec and uplink speeds of up to 5.8Mbit/sec, these technologies will make it possible for UMTS to offer data transmission speeds comparable to those of hard-wired Ethernet-based networks.
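
Repeating the earlier back-of-the-envelope math for these newer rates shows why the Ethernet comparison holds up: at HSDPA's theoretical peak, a 1MB transfer finishes faster than on classic 10Mbit/sec Ethernet:

    # Same rough math as before: 1MB file, theoretical peak rates only.
    RATES_KBIT = {
        "UMTS, moving (384Kbit/sec)":    384,
        "UMTS, stationary (2Mbit/sec)":  2_000,
        "HSUPA uplink (5.8Mbit/sec)":    5_800,
        "HSDPA downlink (14.4Mbit/sec)": 14_400,
        "Classic 10Mbit/sec Ethernet":   10_000,
    }
    for name, rate_kbit in RATES_KBIT.items():
        seconds = 8_000_000 / (rate_kbit * 1_000)
        print(f"{name:32s} {seconds:5.2f} s")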

Virtual Machine

Virtual Machines
A virtual machine (VM) is a software implementation of a machine that executes programs like a real machine. It is a program running on a computer that creates a self-contained operating environment and presents the appearance to the user of a different computer.

At the simplest level, computing environments are thought to consist of hardware, an operating system that runs on the hardware and applications that run on the OS (though in embedded systems, the operating system is sometimes eliminated and applications run directly on the hardware). The OS is aware of all the capacity and capability of the underlying hardware and controls it directly.

If another layer of software were placed between the OS and the CPU, then the OS would know only what that extra layer of software told it. Its understanding of the capacity and capability of the underlying hardware would depend on the intervening software layer, and it would be able to control the underlying hardware only in ways the intervening layer of software allowed it to.

The intervening layer of software could tell the OS everything there was to know about the hardware and simply pass through control directives without translation. But it also might not reveal everything about the underlying hardware and might add some control of its own as it passes on the OS’s control directives. In either case, the configuration would no longer be the standard tripartite configuration. It would be one of the many possible configurations called virtual machines.
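
A toy sketch makes the distinction concrete. The “hardware” below exposes a single query, and the intervening layer either passes it through untouched or mediates it. All the names are invented for illustration, and real hypervisors intervene at the instruction level rather than through method calls:

    # Toy model of an intervening software layer; invented for illustration.
    class Hardware:
        def __init__(self, memory_mb):
            self.memory_mb = memory_mb

        def read_memory_size(self):
            return self.memory_mb

    class PassThroughLayer:
        """Reveals everything and forwards directives without translation."""
        def __init__(self, hw):
            self._hw = hw

        def read_memory_size(self):
            return self._hw.read_memory_size()

    class MediatingLayer:
        """Hides part of the hardware: the OS above sees only its share."""
        def __init__(self, hw, share_mb):
            self._hw, self._share_mb = hw, share_mb

        def read_memory_size(self):
            return min(self._share_mb, self._hw.read_memory_size())

    hw = Hardware(memory_mb=4096)
    print(PassThroughLayer(hw).read_memory_size())     # 4096: OS sees it all
    print(MediatingLayer(hw, 1024).read_memory_size()) # 1024: OS sees a VM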

Of course, there are servers, networks and Web interfaces, as well as other devices and interfaces that add nuance and complexity to computing environments. But using a software layer to package a set of computing resources and behaviors and to present it as an available computing environment is at the core of what it means to create a virtual machine.

A virtual machine is a computing environment whose set of resources and behaviors is built through software on top of some other computing environment.

Hypervisor VMs
Virtual machines are at the core of server technologies like VMware Inc.’s ESX Server and the open-source Xen virtual machine monitor. Both of these products offer servers that run multiple x86-based OSs simultaneously. Their approaches are slightly different variations of what are called hardware-level, bare-metal or hypervisor virtual machines. The intermediary software layer, called the virtual machine monitor or hypervisor, sits between the OSs and the hardware. The hypervisor gives each running OS the illusion that it is the only OS on the hardware.

Running multiple OSs on one server platform offers several advantages. It makes it possible to more fully use the resources of very powerful servers, provide backward compatibility for legacy programs and partition applications to different OSs so they can’t corrupt one another.

VMware uses transparent virtualization, which means that the OSs that run on the hypervisor do not need to be modified. Xen uses paravirtualization, which means that it needs to modify the OSs to make them run simultaneously on the hardware. Xen claims that paravirtualization increases speed and efficiency.
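
The contrast can be caricatured in the same toy style: under transparent virtualization the monitor silently intercepts the unmodified guest’s privileged operations, while a paravirtualized guest is modified to call the hypervisor directly. The names below are invented and bear no relation to VMware’s or Xen’s actual interfaces:

    # Toy contrast between the two approaches; invented for illustration.
    class Hypervisor:
        def trap_privileged_op(self, op):
            # Transparent virtualization: the unmodified guest thinks it is
            # talking to hardware; the monitor catches and emulates the op.
            return f"emulated '{op}' on behalf of an unmodified guest"

        def hypercall(self, op):
            # Paravirtualization: the guest was modified to ask directly,
            # skipping the cost of trapping and emulating.
            return f"performed '{op}' requested by a modified guest"

    hv = Hypervisor()
    print(hv.trap_privileged_op("disable interrupts"))  # VMware-style guest
    print(hv.hypercall("disable interrupts"))           # Xen-style guest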

Hosted VMs
Microsoft Corp.’s Virtual PC and VMware’s GSX Server and Workstation are called hosted virtual machines. In these products, the VM is like any other application running on an OS. The VM application is divided into an intermediary software layer, an OS and an application running on that OS.

This scheme is less efficient and less powerful than that used for hypervisor servers, but it provides the same kind of advantages, allowing a user to run legacy programs and to partition applications from the rest of the system. A user who wants to visit dangerous Web sites, for example, could add a layer of protection by doing his surfing via a virtual machine.

Application-level VMs
Application-level VMs, such as the Java virtual machine, are similar to the hosted model in that they run as applications. These VMs, however, combine the intermediary software layer with the OS. The Java VM runs like an application on the native OS, and the Java application runs on the VM.

One of the advantages claimed for this programming paradigm is that a Java program will run on any Java VM without recompilation. Porting is left to the provider of the Java VM, which must make it run on a variety of native OSs.
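
A tiny interpreter shows the principle at work: the “bytecode” below runs unchanged wherever the VM itself runs, so only the VM needs porting. This is a deliberately minimal stand-in, not real Java bytecode:

    # Minimal application-level VM: a stack machine with three instructions.
    # The program (the "bytecode") never changes; only this interpreter
    # needs to be ported to each native OS.
    def run(bytecode):
        stack = []
        for op, *args in bytecode:
            if op == "push":
                stack.append(args[0])
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "print":
                print(stack.pop())

    program = [("push", 2), ("push", 3), ("add",), ("print",)]
    run(program)   # prints 5 on any OS where run() itself has been ported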

Parallel Virtual Machine
The parallel VM is a slightly different approach to creating a virtual machine. In this case, the intermediary software layer exists as a daemon, or server program, along with a set of library calls that must be compiled into the application that is going to run on the parallel VM. The library calls, which interact with the daemons, make a network of computers appear to be a single computer with parallel processors.
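
The flavor of that library-call interface can be approximated locally. Real PVM programs are typically C or Fortran code compiled against the PVM library, with a daemon on each host; the Python sketch below fakes the “single computer with parallel processors” illusion on one machine using the standard multiprocessing module:

    # Local analogue of a parallel VM: work is farmed out to what looks
    # like one machine with several processors. Real PVM spreads this
    # across a network of computers via per-host daemons.
    from multiprocessing import Pool

    def crunch(chunk):
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        chunks = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
        with Pool() as pool:                     # the "virtual parallel machine"
            partials = pool.map(crunch, chunks)  # library call hides the workers
        print(sum(partials))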