Wednesday, January 27, 2010
First 100 Gbps Ethernet backbone link
Designed mainly for long-haul spans of around 1000 kilometres.
THERE are few facets of society that have remained untouched by the internet. From business communication to leisure activity, the net has transformed the way we behave.
Yet at its heart the internet has stagnated. As a slew of bandwidth-hungry services come on-stream, the fibre-optic backbone that forms its trunk routes is at risk of being overwhelmed by too much data. It's due for an upgrade.
The first inklings of what the upgrade might look like can be seen in an ultra-fast 900-kilometre fibre-optic link between Paris in France and Frankfurt in Germany installed by telecoms firm Verizon. It is a foretaste of a high-speed internet backbone with enough capacity to satisfy bandwidth-hungry applications well into the future.
Today, the fastest throughput on most of the global telecommunications network is 10 billion bits (gigabits) per second - so sending the contents of a full DVD would keep a link tied up for around 4 seconds. It has been that way since 1996 - an era when users stepped onto the information superhighway via dial-up modems and the original Netscape Navigator browser.
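As a rough check on that figure, here is a back-of-the-envelope calculation, assuming a 4.7-gigabyte single-layer DVD and an ideal 10-gigabit link with no protocol overhead:

```python
# Rough transfer-time estimate for a single-layer DVD over a 10 Gbps link.
# Assumptions: 4.7 GB disc, ideal link, no protocol or coding overhead.
dvd_bytes = 4.7e9            # 4.7 gigabytes on a single-layer disc (assumed)
dvd_bits = dvd_bytes * 8     # convert to bits
link_rate = 10e9             # 10 gigabits per second

transfer_time = dvd_bits / link_rate
print(f"Transfer time: {transfer_time:.1f} seconds")   # ~3.8 s, i.e. around 4 seconds
```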
Masses of optical fibre cables were added to the backbone during the dotcom boom a decade ago, initially producing a huge glut in capacity. Now new users and new services - social media, video downloads, streaming audio and video, file sharing and cloud computing - are filling up those fibre pipes. More capacity will soon be needed, but providing it poses considerable challenges.
In today's fibre-optic backbone, digital 1s and 0s are represented by switching a laser beam on and off. Lasers send dozens of separate signals down each optical fibre at slightly different wavelengths, each of which can convey 10 gigabits of data per second. But this technique has its limitations: trying to raise the data rate for each wavelength won't work, as the signals start to blur together. The problems of signal integrity are "100 times worse at 100 gigabits than they are at 10", says Dimple Amin of network equipment maker Ciena of Linthicum, Maryland.
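To get a feel for the aggregate capacity this wavelength-division multiplexing gives a single fibre, here is a minimal sketch; the channel count of 40 is an assumed figure standing in for "dozens":

```python
# Aggregate capacity of one fibre using wavelength-division multiplexing.
# The channel count of 40 is an assumed figure standing in for "dozens".
channels = 40             # separate wavelengths sharing one fibre (assumed)
rate_per_channel = 10e9   # 10 gigabits per second on each wavelength

fibre_capacity = channels * rate_per_channel
print(f"Capacity per fibre: {fibre_capacity / 1e9:.0f} Gbps")   # 400 Gbps under these assumptions
```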
The starting point for the new 100-gigabit technology was to ditch the off-and-on switching, and instead modulate the phase of the light waves - moving them ahead or behind by a fixed increment. The simplest approach is to shift the phase by 180 degrees - half a wavelength - to distinguish a 0 from a 1. Higher data rates require a more elaborate process, called quadrature phase-shift keying, which has four possible shifts, +135, +45, -45 and -135 degrees, each representing a different pair of bits: 00, 01, 10 or 11.
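As an illustration, here is a minimal sketch of a quadrature phase-shift keying modulator that maps pairs of bits to those four phases. The exact pairing of bit patterns to phases is assumed for illustration; real systems may use a different (for example, Gray-coded) assignment.

```python
import cmath
import math

# Illustrative QPSK modulator: each pair of bits picks one of four phase shifts.
# The pairing of bit patterns to phases below is assumed for illustration.
PHASES_DEG = {"00": 135, "01": 45, "10": -45, "11": -135}

def qpsk_modulate(bits: str) -> list:
    """Turn a bit string into a sequence of unit-amplitude QPSK symbols."""
    assert len(bits) % 2 == 0, "QPSK carries two bits per symbol"
    symbols = []
    for i in range(0, len(bits), 2):
        phase = math.radians(PHASES_DEG[bits[i:i + 2]])
        symbols.append(cmath.exp(1j * phase))   # a point on the unit circle
    return symbols

print(qpsk_modulate("0011"))   # two symbols, at +135 and -135 degrees
```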
The 100-gigabit system abandons on-off switching in favour of changing the phase of the light waves
That alone isn't enough to reach 100 gigabits. Getting there requires signals with two different polarisations, which can be separated at the receiver, each carrying 50 gigabits per second.
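Put together, the arithmetic looks like the sketch below. The 25-gigabaud symbol rate is an assumed round figure chosen so the numbers come out at 100 gigabits; deployed systems run somewhat faster to leave room for error-correction overhead.

```python
# How dual-polarisation QPSK adds up to 100 gigabits per second.
# The 25-gigabaud symbol rate is an assumed round figure for illustration.
symbol_rate = 25e9        # symbols per second on each polarisation (assumed)
bits_per_symbol = 2       # QPSK: one of four phases encodes two bits
polarisations = 2         # two polarisations, separated at the receiver

per_polarisation = symbol_rate * bits_per_symbol       # 50 Gbps on each polarisation
total = per_polarisation * polarisations               # 100 Gbps in all
print(f"{per_polarisation / 1e9:.0f} Gbps per polarisation, {total / 1e9:.0f} Gbps total")
```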
Even then, after passing through hundreds of kilometres of fibre, the input signal must be processed with light from an internal laser to extract a clear signal. The receivers are equipped with powerful electronic circuits, which analyse the signal and minimise noise added along the cable, says Amin. "The end points got a lot smarter and can deal with everything in between."
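The sketch below gives a toy flavour of that clean-up step: deciding which of the four QPSK phases a noisy received symbol is closest to. Real coherent receivers do far more, mixing the signal with a local laser and undoing dispersion and polarisation drift, so this is only an illustrative fragment of the final decision stage.

```python
import cmath
import math
import random

# Toy sketch of receiver-side clean-up: pick the QPSK phase nearest to a
# noisy received symbol. Real coherent receivers do far more than this.
PHASES_DEG = [135, 45, -45, -135]

def nearest_phase(received: complex) -> int:
    """Return the ideal phase (in degrees) closest to the received symbol."""
    return min(PHASES_DEG,
               key=lambda p: abs(received - cmath.exp(1j * math.radians(p))))

sent = cmath.exp(1j * math.radians(45))
noisy = sent + complex(random.gauss(0, 0.2), random.gauss(0, 0.2))
print(nearest_phase(noisy))   # almost always recovers 45 despite the added noise
```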
Without this, "we could never have gotten into the ultra long haul" of 1000 to 1500 kilometres, says Glenn Wellbrock, Verizon's director of network backbone architectures.
The Canadian telecoms equipment company Nortel, which built the Verizon system, has shown it can transmit signals more than 2000 kilometres in a test on an Australian network owned by Telstra. "The 2000 kilometres was a bit of heroism. For most applications we're saying it's more like 1000 kilometres," says John Sitch, senior adviser on optical R&D at Nortel.
There are still some problems facing the ultra-fast backbone. Noise can be a killer if 10 and 100-gigabit channels are transmitted through the same fibre at closely spaced wavelengths. And the first generation of 100-gigabit systems can only stretch half as far as today's 10-gigabit systems before signals are lost, Wellbrock says.
"But you don't need to try 4000 kilometres," Wellbrock points out. "The majority of traffic in the US is 1500 kilometres or less, and it's less in Europe." As first steps go, a near 900-kilometre link isn't a bad effort.
When you can't afford to wait
The latest fibre-optic links boost internet speeds in more ways than one. As well as raising data capacity, they improve a facet of data transmission that for some applications is even more important: the link's "latency", the time lag between sending a command to a remote server and getting a response.
The round trip from your computer to a remote server takes time, and although light travels at 200,000 kilometres per second in an optical fibre, the delays can add up. For example, if the page you are accessing includes 100 discrete elements, retrieving each one is a separate operation. For a server 1000 kilometres away, the 100 round trips would add up to a full second's delay.
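That one-second figure follows directly from the geometry, as this sketch shows, assuming the requests are made one after another and ignoring server processing time:

```python
# Round-trip delay for fetching 100 page elements one after another.
# Assumes the requests are serial and ignores server processing time.
speed_in_fibre = 200_000     # kilometres per second
distance = 1_000             # kilometres to the server
elements = 100               # separate items making up the page

round_trip = 2 * distance / speed_in_fibre    # 0.01 s, i.e. 10 ms per element
total_delay = elements * round_trip           # 1.0 s for the whole page
print(f"{round_trip * 1000:.0f} ms per round trip, {total_delay:.1f} s in total")
```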
Existing 10-gigabit systems make matters worse by forcing the signals to travel a longer distance than they need to. To minimise the blurring caused by the interaction of light with long stretches of glass (see main story), additional lengths of fibre with subtly different optical properties have to be added - typically 15 to 20 kilometres for every 100 kilometres of transmission fibre. The new 100-gigabit technology does away with the need for this extra fibre, which is one reason it is well suited to latency-sensitive links.
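A quick sketch of what that compensating fibre costs in latency on a long route; the 1000-kilometre route length and the use of the midpoint of the 15 to 20 per cent range are illustrative assumptions:

```python
# Extra one-way delay from compensating fibre on a 10-gigabit route.
# Uses the 15-20 km of added fibre per 100 km quoted above; the 1000 km
# route and the midpoint of that range are illustrative assumptions.
speed_in_fibre = 200_000      # kilometres per second
route_km = 1_000              # illustrative long-haul route (assumed)
extra_fraction = 0.175        # midpoint of the 15-20 per cent range (assumed)

extra_km = route_km * extra_fraction
extra_delay_ms = extra_km / speed_in_fibre * 1000
print(f"{extra_km:.0f} km of extra fibre adds about {extra_delay_ms:.2f} ms one way")
```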
Where time is money, latency matters. Last year, the New York Stock Exchange bought a 100-gigabit system to transmit stock data in the New York and London areas, which it hopes will reduce latency by 60 to 70 milliseconds. It reckons this investment is worth making because the improvement will enable its staff to make trades ahead of competitors.