Latency!
The answer to the cloud threat may lie in lightning speed

The need for low latency has given rise to an entire set of cloud providers that are willing to have their data centres located close to clients' own data centres.
Latency is a term with a special meaning in the world of information technology. It refers to the delay between the moment an instruction is sent to a computer and the moment that computer receives it and can act on it. While most of us think of computing speed in terms of the power of the processor or chip, we are also painfully aware that a slow network connection can cause a large lag in the response from the computer we are connecting with. You just have to watch your phone suddenly drop from 4G speeds to 2G speeds, which seems to happen with alarming regularity with many telcos, to know what I am talking about.
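To make the idea concrete, here is a minimal Python sketch that measures latency as the round-trip time of a TCP handshake. The hosts below are purely illustrative, and real figures vary with network conditions.

```python
# Minimal sketch: measure network latency as the time to complete a
# TCP handshake with a remote server. Hosts are illustrative only.
import socket
import time

def tcp_handshake_latency_ms(host: str, port: int = 443) -> float:
    """Return the time, in milliseconds, to complete a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # connection established; we only care about the delay
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    for host in ("www.google.com", "www.amazon.com"):
        print(f"{host}: {tcp_handshake_latency_ms(host):.1f} ms")
```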
So is higher bandwidth the answer? Only partially. It turns out that the physical distance between you and the server you are communicating with also matters a great deal, despite the fact that the Internet Protocol is supposed to work seamlessly by finding the most efficient route from one computing device to another. This means that companies which need the best performance from the cloud (i.e. having their hardware owned and maintained by someone else, such as Google or Amazon) also look to keep the physical distance between their on-site servers and the cloud machines they communicate with as short as possible. This is especially important for certain mission-critical applications that run and manage the corporation.
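Why does distance matter so much? Signals in optical fibre travel at roughly two-thirds the speed of light, about 200 km per millisecond, which puts a hard physical floor under latency regardless of bandwidth. A back-of-the-envelope sketch, assuming fibre-only propagation with no routing, queueing or processing overhead:

```python
# Back-of-the-envelope: the physical floor on round-trip latency imposed
# by distance alone. Light in optical fibre travels at roughly two-thirds
# of its vacuum speed, about 200,000 km/s, i.e. ~200 km per millisecond.
FIBRE_SPEED_KM_PER_MS = 200_000 / 1000  # ~200 km per millisecond

def min_round_trip_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time over fibre for a given distance."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

for km in (1, 50, 1_000, 10_000):
    print(f"{km:>6} km -> at least {min_round_trip_ms(km):.2f} ms round trip")
```

At a colocated distance of a kilometre the floor is a hundredth of a millisecond; across a continent it is tens of milliseconds, which no amount of extra bandwidth can buy back.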
Nowhere is this more evident than among stock exchanges and their primary clients, the investment banks that trade securities both for their clients and on their own behalf. Many investment banks have made an absolute science of what is called "program trading": computer programs trained to look for small anomalies between stock exchange prices that sometimes last only a few milliseconds. The moment such an anomaly is noticed by an investment bank's computers (which in many instances are situated bang next door to the stock exchanges), the program takes over and executes a trade in the bank's favour, exploiting the arbitrage opportunity created by these millisecond disparities in price. Trades executed at lightning speed are key to profits. In 2010, when HCL Technologies Ltd signed a deal with Singapore Exchange to manage the latter's hardware infrastructure, the deal was the first of its kind. Though some of the infrastructure was remotely managed from India, the stock exchange's data centres stayed put to make sure that latency issues did not crop up for its many clients (the investment banks).
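The article does not describe any bank's actual trading systems, but the core idea can be illustrated with a toy Python sketch: compare quotes for the same security on two venues and act when the price gap exceeds trading costs. All venue names, prices and the cost figure below are hypothetical; real program-trading systems are vastly more sophisticated, which is precisely why every millisecond of latency counts.

```python
# Toy illustration of the arbitrage idea: watch the same security on two
# venues and flag a trade when the price gap exceeds costs. All names,
# prices and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Quote:
    venue: str
    bid: float   # best price a buyer will pay
    ask: float   # best price a seller will accept

def arbitrage_signal(a: Quote, b: Quote, cost_per_share: float = 0.01):
    """Return (buy_venue, sell_venue, profit) if a profitable gap exists."""
    # Buy on the venue with the lower ask, sell where the bid is higher.
    for buy, sell in ((a, b), (b, a)):
        profit = sell.bid - buy.ask - cost_per_share
        if profit > 0:
            return buy.venue, sell.venue, profit
    return None

signal = arbitrage_signal(Quote("NYSE", bid=100.02, ask=100.03),
                          Quote("BATS", bid=100.06, ask=100.07))
if signal:
    buy_at, sell_at, profit = signal
    print(f"Buy on {buy_at}, sell on {sell_at}: +{profit:.2f} per share")
```

The window in which such a gap exists may last only milliseconds, so whichever bank's computers sit closest to the exchange sees and captures it first.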
So as the cloud takes over the realm of hardware, many firms will still choose to keep at least some of their mission-critical systems with them, while migrating others to the cloud. This need for low latency has given rise to an entire set of cloud providers which, unlike Google and Amazon, are willing to have their data centres located in close physical proximity to their clients' own data centres.
Equinix Inc., the market's largest colocation firm by revenue, says it provides a 42% drop in latency, which, in the world of millisecond program trading, can be crucial. Digital Realty Inc. is another such colocation firm. Colocation firms allow customer-owned computers and servers to sit in close proximity to the cages that host the Amazon and Google servers serving that same customer. This colocation market is expected to exceed $50 billion within the next few years, according to research firms familiar with this market. Such shared facilities are valuable because they can accommodate companies that want to move more nimbly between the cloud and their own data centres.
As the digital revolution takes place in the manufacturing and financial sectors, many of the new systems spawned by such efforts will lead to growth in the "hybrid cloud" market. The good news for Indian IT service providers is that the barrier to entry for building and managing client-specific data centres that are colocated (or closely located) is low. This means they can add such capacity every time they win a new client or expand at an existing one. And each of these clients presents a large revenue stream, from the design and build of new systems geared to being hosted in the cloud, to the actual hosting and managing of a low-latency "hybrid cloud" solution. The key decision for the provider is whether to make the capital investment in multiple small data centres or to collaborate with the Equinixes of this world, who already provide such colocation services but not necessarily the sophisticated engineering help their client corporations need to re-architect and design their systems to fit this new "hybrid cloud" paradigm.
The rise of the Amazons, Googles and Microsofts in the cloud space, with their highly automated hardware maintenance systems, still poses a significant threat to the traditional "remote infrastructure maintenance" practised by both Indian and non-Indian firms out of India. But the need for low latency may still provide this cloud with a silver "lightning".
Source | Mint | 3 June 2016
Regards
Pralhad Jadhav
Senior Manager @ Library
Khaitan & Co

Best Paper Award | Received the Best Paper Award at the TIFR-BOSLA National Conference on Future Librarianship: Innovation for Excellence (NCFL 2016) on April 23, 2016. The title of the paper is "Removing Barriers to Literacy: Marrakesh VIP Treaty".
Note | If anybody forwards this post on social media or covers it in a newsletter, please give due credit to those who are taking efforts for the same.