We are constantly reminded of the vulnerability of data that flies through the Internet, of personal and other sensitive information sitting on file servers (physical storage devices… offsite, we call them the “cloud”) that seem to get hacked despite the best pledges of security, and of that annoying feeling you get – totally justified – when something you just wrote in a personal text or email somehow provokes a slew of online ads about that same subject. Privacy? Forgetaboutit!
We have solid examples confirming that savvy hackers can ferret out our governmental secrets – thank you, WikiLeaks – disrupt our election processes and undermine our financial systems, which have become fully dependent on Internet communications. We’ve even detected little experimental “hacks” that let us know how easy it would be to trigger a power grid failure.
What’s worse, there is no completely hack-proof system, since even a slightly infected computer or flash drive, connected to an otherwise secure system, can introduce back-door access and keystroke-logging software, on top of the ransomware and malware that already cost us billions every year. But with an austerity-driven Congress, the money needed to upgrade our cyber-security systems – beyond military and governmental areas and into the financial sector and the power grid alone – is simply not a major priority. The information gleaned from hacked personal sites also opens those with compromised accounts to blackmail and extortion. Yet threaten to take mobile and online access away from almost any American consumer and you would get screams of outrage. And then there are the mistakes, sometimes human error, sometimes simple overload, where systems just shut down and websites crash.
So it becomes interesting to examine the assumptions that were made when the Internet was originally envisioned and then created. It was initially contemplated as a military network. The underlying fears, the concerns that motivated the early architects of such a network, were born in the Cold War, particularly during the 1950s and 1960s… the “duck and cover” years, when schoolchildren were run through “nuclear blast reaction drills” that clearly would not have done much to save their lives in a real nuclear attack. The notion of creating a communications system that could survive a nuclear attack was at the core of the design.
On its website, the RAND Corporation explains: “US authorities considered ways to communicate in the aftermath of a
nuclear attack. How could any sort of ‘command and control network’ survive? Paul
Baran, a researcher at RAND, offered a solution: design a more robust
communications network using ‘redundancy’ and ‘digital’ technology.
“At the time, naysayers
dismissed Baran's idea as unfeasible. But working with colleagues at RAND,
Baran persisted. This effort would eventually become the foundation for the
World Wide Web…
“[When Baran looked at the variables in the 1960s,] At that time, RAND focused mostly on Cold War-related
military issues. A looming concern was that neither the long-distance telephone
plant, nor the basic military command and control network would survive a
nuclear attack. Although most of the links would be undamaged, the centralized
switching facilities would be destroyed by enemy weapons. Consequently, Baran
conceived a system that had no centralized switches and could operate even if
many of its links and switching nodes had been destroyed.
“Baran envisioned a
network of unmanned nodes that would act as switches, routing information from
one node to another to their final destinations. The nodes would use a scheme
Baran called ‘hot-potato routing’ or distributed
communications.
“Baran also developed the
concept of dividing information into ‘message blocks’ before sending them out
across the network. Each block would be sent separately and rejoined into a
whole when they were received at their destination. A British man named Donald
Davies independently devised a very similar system, but he called the message
blocks ‘packets,’ a term that was eventually adopted instead of Baran's message
blocks.
“This method of ‘packet switching’ is a rapid
store-and-forward design. When a node receives a packet it stores it,
determines the best route to its destination, and sends it to the next node on
that path. If there was a problem with a node (or if it had been destroyed)
packets would simply be routed around it.”
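To make the store-and-forward idea a little more concrete, here is a minimal sketch in Python (a toy illustration of my own, not Baran’s actual design): a message is divided into blocks, and a route across a small mesh of nodes is simply recomputed around any node that has been destroyed.

```python
# Toy illustration of distributed, store-and-forward routing: a message is split
# into blocks, blocks hop node to node, and destroyed nodes are routed around.
from collections import deque

# A small mesh of unmanned "switching nodes" and the links between them.
LINKS = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D", "E"},
    "D": {"B", "C", "F"},
    "E": {"C", "F"},
    "F": {"D", "E"},
}

def split_into_blocks(message, block_size=4):
    """Divide a message into numbered 'message blocks' (packets)."""
    return [(i, message[i:i + block_size]) for i in range(0, len(message), block_size)]

def reassemble(blocks):
    """Rejoin the blocks into the original message, whatever order they arrive in."""
    return "".join(text for _, text in sorted(blocks))

def route(source, destination, dead_nodes=frozenset()):
    """Find a hop-by-hop path that avoids destroyed nodes; None if the network is cut."""
    if source in dead_nodes or destination in dead_nodes:
        return None
    queue, visited = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path
        for neighbor in LINKS[node] - dead_nodes:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # every surviving route has been severed

packets = split_into_blocks("ATTACK SURVIVED")
print(route("A", "F"))                    # a healthy route, e.g. ['A', 'B', 'D', 'F']
print(route("A", "F", dead_nodes={"D"}))  # node D destroyed: ['A', 'C', 'E', 'F']
print(reassemble(reversed(packets)))      # blocks rejoin no matter their arrival order
```

The point is the one Baran was making: no single switching node is indispensable, so the loss of a few of them does not silence the network.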
According to a March 5th article on FastCompany.com, “The packetized technology that underlies most of the internet
was created by Paul Baran as part of an
effort to protect communications by moving from a centralized model of
communication to a distributed one. While the Internet Society questions
whether the creation of the internet was in direct response to concerns about
nuclear threat, it clearly agrees that ‘later
work on Internetting did emphasize robustness and survivability, including the
capability to withstand losses of large portions of the underlying networks.’
“From there, the
foundation was laid for an internet that treated the distributed model as a key component
to ensuring reliability. Almost 50 years later, consolidation around
hosting and mobile and the development of the cloud have created a
model that increases concentration on top of a few key players: Amazon,
Microsoft, and Google now host a large number of sites across the web. Many of
those companies’ customers have opted to host their infrastructure in a single set of data centers, potentially increasing the
frailty of the web by re-centralizing large portions of the net.
“That’s what happened
when Amazon’s S3 service, essentially a large hard drive used by companies like
Spotify, Pinterest, Dropbox, Trello, Quora, and many others, lost one of its data centers on Tuesday morning. The problem
began around 9:37 a.m. Pacific, the company later explained, after an employee tried to fix a
problem with S3’s billing system: ‘an authorized S3 team member using an
established playbook executed a command which was intended to remove a small
number of servers… Unfortunately, one of the inputs to the command was entered
incorrectly and a larger set of servers was removed than intended.’
“Companies that had
content stored in those sets of servers, located in Northern Virginia,
essentially stopped functioning properly, prompting experts to recommend that companies look at storing
data across multiple data centers to increase reliability. The failure
rippled across Amazon’s other services, many of which depend upon S3, leading to
‘increased error rates’ for sites that rely on AWS, and making engineers’
efforts at recovery that much more difficult. Even the webpage Amazon uses to
alert customers to outages was affected.” So, in effect, over time and through common practice, we have outsourced so much of that data storage and security to a few big players that we are actually backing into the very centralized model the original architects wanted to avoid. It is clearly a lot cheaper to do it that way, but the risks are obvious.
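For what it is worth, the advice those experts give is simple in principle. Here is a minimal sketch (hypothetical bucket names, using the standard boto3 library for AWS) of storing every object in two separate regions, so that losing one set of data centers, say Northern Virginia, does not take the data offline:

```python
# A minimal sketch, not a production design: every object is written to buckets
# in two different AWS regions, so losing one region's data centers does not
# take the data offline. The bucket names below are hypothetical.
import boto3

PRIMARY = boto3.client("s3", region_name="us-east-1")    # Northern Virginia
SECONDARY = boto3.client("s3", region_name="us-west-2")  # Oregon

def put_redundantly(key, body):
    """Store the same object in both regions; an exception means one copy failed."""
    PRIMARY.put_object(Bucket="example-primary-bucket", Key=key, Body=body)
    SECONDARY.put_object(Bucket="example-secondary-bucket", Key=key, Body=body)

# Usage: put_redundantly("reports/quarterly.pdf", open("quarterly.pdf", "rb").read())
```

The big providers do offer managed versions of this idea (Amazon’s own cross-region replication, for instance), but the principle is the same either way: the second copy has to live somewhere the first failure cannot reach.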
“With every new largely
centralized system that comes online, the internet becomes more brittle, as
centralization creates an increased number of single points of failure. In a
world where hackers are looking for new ways to take down infrastructures,
those centralized services must double down on increasing security and
reliability if we want the internet to survive.
“Startups relying on
standardized infrastructures can go to market faster and more cheaply, but
complete reliance on a single set of servers is akin to building a castle on a swamp.
While companies like Amazon, Microsoft, Google, and others have a
responsibility to ensure the infrastructures they provide remain stable, it is
important for any company to consider how to best balance their offerings
across different data centers and how to adapt in case of failures.
“Unfortunately, that is
not what the large cloud providers want you to do. While Adrian Cockroft, vice
president for cloud architecture strategy at Amazon Web Services, acknowledged
that many big corporate customers like to split their business among multiple
cloud providers, as a risk mitigation strategy, he encouraged them to steer most of their business to a single
favorite (such as AWS), in order to obtain the best discounts and minimize the
need for duplicate training of their own information-technology employees. In a world where Amazon is increasingly becoming a core part of
the internet’s infrastructure, it makes sense for them to push for centralization on their own servers, but such an effort could lead to further problems.” FastCompany.com.
Aside from that “doubling down” on cyber-security – probably a necessity no matter what – bigger companies need to start storing their data in unlinked but mirrored sites in other venues, an expensive but seemingly necessary step. Amazon and some of the other big outsourcing players are working on internal partitions and mirroring structures of their own, designed to protect their customers from malware or the physical destruction of their file servers. But as you can guess… not only do we have a long way to go, but as quickly as we create protective barriers and anti-malware, hackers are busy trying to get around those walls. It is all increasingly complex, with too many moving parts.
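To put those “mirrored sites” in slightly more concrete terms, here is a hedged little sketch (hypothetical endpoints, nothing but the Python standard library) of the failover logic involved: try the primary data center, and if it does not answer, quietly move on to the mirror.

```python
# A hedged sketch of failover: the same content is mirrored at two hypothetical
# endpoints in different data centers, and reads try them in order rather than
# depending on a single set of servers.
import urllib.request

ENDPOINTS = [
    "https://us-east.example.com/status",  # primary data center (hypothetical)
    "https://us-west.example.com/status",  # mirror in another region (hypothetical)
]

def fetch_with_failover(endpoints=ENDPOINTS, timeout=3):
    """Try each data center in turn; fail only if every one of them is unreachable."""
    last_error = None
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except OSError as error:
            last_error = error  # this site is down or slow; try the next one
    raise RuntimeError("all mirrored data centers are unreachable") from last_error
```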
I’m
Peter Dekom, and sometimes, when I just fear what will happen if I look too
deeply into my concerns, I know I just have to grit my teeth and find out.