Entries in security (6)

Wednesday
Sep 19, 2018

How do you explain the unreasonable effectiveness of cloud security?

With the enormous attack surface of cloud providers like AWS, Azure, and GCP, why aren't there more security problems? Data breaches and cyber attacks occur daily. How do you explain the unreasonable effectiveness of cloud security?

Google has an ebook on their security approach; Microsoft has some web pages. Both are the equivalent of that person who is disgustingly healthy and you ask them how they do it and they say "I don't know. I just eat right, exercise, and get plenty of sleep." Not all that useful. Most of us want a hack, a trick to good health. Who wants to eat right? 

I'm sure Amazon also eats right, exercises, and gets plenty of sleep (probably not the people who work there), but AWS also has a secret, one that, when that disgustingly healthy person starts talking about it at a party, you just can't help leaning in and listening to.

What's the trick to 6-pack security? Proving systems correct. Does your datacenter do that? I didn't think so. AWS does. 

Dr. Byron Cook gave an enthusiastic talk on Formal Reasoning about the Security of Amazon Web Services. He's clearly excited about finally applying his research in a real-world setting. This is the trojan horse the FLoC (Federated Logic Conference) community has been waiting for. It's almost as if he's a FLoC guy working at AWS rather than an AWS guy giving a FLoC talk.
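To make "proving systems correct" a little more concrete: one publicly described piece of AWS's formal-reasoning work is SMT-based analysis of access policies (the Zelkova work), which turns questions like "is this policy more permissive than that one?" into satisfiability queries. Here's a toy sketch of that idea using the Z3 solver's Python bindings; the three request attributes and both policies are invented for illustration and are not AWS's actual tooling or policy language.

```python
# pip install z3-solver
from z3 import And, Bools, Not, Or, Solver, unsat

# Three made-up boolean request attributes.
is_admin, from_vpn, bucket_public = Bools("is_admin from_vpn bucket_public")

# Hypothetical policy A: allow admins, or VPN traffic to non-public buckets.
policy_a = Or(is_admin, And(from_vpn, Not(bucket_public)))

# Hypothetical policy B, the intended tightening: only admins on the VPN.
policy_b = And(is_admin, from_vpn)

# "Is A more permissive than B?" becomes: is there any request A allows but B denies?
solver = Solver()
solver.add(And(policy_a, Not(policy_b)))

if solver.check() == unsat:
    print("policy A allows nothing that policy B denies")
else:
    print("counterexample request:", solver.model())
```

If the check comes back unsat, no possible request is allowed by the first policy and denied by the second; otherwise the solver hands back a concrete counterexample request, which is the kind of answer a machine can produce over and over without getting tired.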

The main take-aways for me were:

Click to read more ...

Tuesday
Nov 18, 2008

Scalability Perspectives #2: Van Jacobson – Content-Centric Networking

Scalability Perspectives is a series of posts that highlights the ideas that will shape the next decade of IT architecture. Each post is dedicated to a thought leader of the information age and his vision of the future. Be warned though – the journey into the minds and perspectives of these people requires an open mind.

Van Jacobson

Van Jacobson is a Research Fellow at PARC. Prior to that he was Chief Scientist and co-founder of Packet Design. Prior to that he was Chief Scientist at Cisco. Prior to that he was head of the Network Research group at Lawrence Berkeley National Laboratory. He's been studying networking since 1969. He still hopes that someday something will start to make sense.

Scaling the Internet – Does the Net need an upgrade?

As the Internet is being overrun with video traffic, many wonder if it can survive. With challenges being thrown down over the imbalances that have been created and their impact on the viability of monopolistic business models, the Internet is under constant scrutiny. Will it survive, or will it succumb to the burden of the billion-plus community that is constantly demanding more and more? Does the Net need an upgrade? To answer this question, a distinguished panel of Van Jacobson, Rick Hutley, Norman Lewis, and David S. Isenberg discussed the issue at the Supernova conference. In this compelling debate, available on IT Conversations, the panel addresses the question and offers some differing perspectives. One of them is content-centric networking, described by Van Jacobson.

A New Way to Look at Networking

Today's research community congratulates itself for the success of the internet and passionately argues whether circuits or datagrams are the One True Way. Meanwhile the list of unsolved problems grows. Security, mobility, ubiquitous computing, wireless, autonomous sensors, content distribution, the digital divide, third-world infrastructure, etc., are all poorly served by what's available from either the research community or the marketplace. In this amazing Google Tech Talk, Van Jacobson uses various strained analogies and contrived examples to argue that network research is moribund because the only thing it knows how to do is fill in the details of a conversation between two applications. Today, as in the 60s, problems go unsolved because of our tunnel vision and not because of their intrinsic difficulty. And now, like then, simply changing our point of view may make many hard things easy.

Content-centric networking

The founding principle of content-centric networking is that a communication network should allow a user to focus on the data he or she needs, rather than having to reference a specific, physical location the data must be retrieved from. This stems from the fact that the vast majority of current Internet usage (a "high 90% level of traffic") consists of data being disseminated from a source to a number of users. The current architecture of the Internet revolves around a conversation model, created in the 1970s to allow geographically distributed users to use a few big, immobile computers. The content-centric approach seeks to fit the basic architecture of the network to current usage patterns. The new approach comes with a wide range of benefits, one of which is building security (both authentication and encryption) into the network at the data level. Despite all its advantages, the idea doesn't map very well to some current uses of the Web (like web applications, where data is generated on the fly in response to user actions) or to real-time applications like VoIP and instant messaging. But one can envision an Internet where content-centric protocols handle the dissemination-based uses of the network as an overlay, while genuine conversation-centric protocols stay on the current infrastructure.
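To make the data-level security point concrete, here is a toy sketch of mine, not Van Jacobson's CCN protocol (which uses signed Interest/Data packets): if content is named by something derived from the content itself, a consumer can verify what it received no matter which cache or peer supplied it, so trust attaches to the data rather than to the connection it arrived over.

```python
import hashlib

# A toy "content store": any node can hold chunks, indexed by the name of the
# data itself (here simply its SHA-256 digest) rather than by a host address.
store = {}

def publish(data: bytes) -> str:
    """Name the data by its digest and place it in the shared store."""
    name = hashlib.sha256(data).hexdigest()
    store[name] = data
    return name

def fetch(name: str) -> bytes:
    """Retrieve by name from wherever a copy happens to live, then verify the
    content against the name -- the check works no matter who served it."""
    data = store[name]
    if hashlib.sha256(data).hexdigest() != name:
        raise ValueError("content does not match its name")
    return data

name = publish(b"breaking news: video traffic is eating the internet")
print(name[:16], fetch(name))
```

Real content-centric designs use publisher signatures rather than bare hashes, so names can stay human-readable and content can be updated, but the property illustrated here is the same: the verification travels with the data.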

Solutions or workarounds?

There are many solutions or workarounds for the problems posed by traditional conversation-based networking, such as Content Delivery Networks, caching, distributed filesystems, P2P, and PKI. By taking Van Jacobson's perspective we can investigate new dimensions of these problems. What could be the impact of this perspective on the future of the Internet architecture? What do you think? I recommend Van Jacobson's A New Way to Look at Networking video. He walks through a brief history of networking, from the phone system to the Internet, and lays out his vision for dissemination networking.

Information Sources

Click to read more ...

Friday
Nov 14, 2008

Private/Public Cloud

Data centers are reshaping themselves by taking ideas from public cloud providers such as Amazon and Google. The idea is to make the data center more cost-effective by enabling on-demand, utility-based computing rather than dedicated machines. At the same time, it is clear that to make IT operations more effective, it doesn't make sense to run all the applications currently hosted in a company's data center in the private cloud. This calls for an integration between private and public clouds. In this post I discuss some of the challenges involved in making that happen:

  1. How do we design applications to be cloud-agnostic? (see the sketch below)
  2. How do we enable seamless fail-over to a public cloud?
  3. Future-proofing: in many cases we can't make a clear decision, at the time the application is written, as to where it should run. We would like to be able to change that decision even after the application has been completely developed.
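On the cloud-agnostic question, one common tactic is to hide anything provider-specific behind a narrow interface the application programs against, so the choice of private or public backend becomes configuration rather than code. The sketch below is a minimal illustration of that idea; the class and method names are made up for the example, and a public-cloud implementation (say, one wrapping S3) would simply implement the same interface.

```python
from abc import ABC, abstractmethod
from pathlib import Path

class BlobStore(ABC):
    """Tiny storage interface the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalBlobStore(BlobStore):
    """Private-cloud / on-premises backend: plain files on local disk."""
    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

# Application code only ever sees the interface, never the backend.
def archive_report(store: BlobStore, report_id: str, body: bytes) -> None:
    store.put(f"reports/{report_id}", body)

store = LocalBlobStore("/tmp/private-cloud-demo")
archive_report(store, "2008-11-14", b"utilization was 42%")
print(store.get("reports/2008-11-14"))
```

The same seam is what makes the second and third challenges tractable: failing over to a public cloud, or deferring the placement decision, becomes a deployment choice rather than a rewrite.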

Click to read more ...

Sunday
Aug 17, 2008

Wuala - P2P Online Storage Cloud

How do you design a reliable distributed file system when the expected availability of the individual nodes is only ~1/5? That is the case for P2P systems. Dominik Grolimund, the founder of the Swiss startup Caleido, will show you how! They have launched Wuala, a social online storage service that scales as new nodes join the P2P network. The goal of Wuala is to provide distributed online storage that is:

  • large
  • scalable
  • reliable
  • secure
by harnessing the idle resources of participating computers. This challenge is an old dream of computer science. In fact, as Andrew Tanenbaum wrote in 1995: "The design of a world-wide, fully transparent distributed filesystem for simultaneous use by millions of mobile and frequently disconnected users is left as an exercise for the reader." After three years of research and development at ETH Zurich, the Swiss Federal Institute of Technology, Caleido is ready to unveil the result: Wuala.

Wuala is a new way of storing, sharing, and publishing files on the internet. It enables its users to trade parts of their local storage for online storage, which allows Caleido to provide a better service for free. In this Google Tech Talk, Dominik explains what Wuala is and how it works, and he also shows a demo.

The availability problem is solved by redundancy (just like in the Google File System). However, simple replication would incur too much overhead because of the low availability of the nodes. Instead, Wuala employs erasure coding and splits the data into small pieces: an optimal erasure code turns n message fragments into n/r coded fragments (for a rate r < 1), any n of which are sufficient to recover the original message. These pieces are then distributed in the P2P network, providing good availability at a reasonable overhead. The P2P network consists of client, storage, and routing nodes, and the Wuala architecture uses a mix of regular and random graphs to optimize routing.

Dominik also explains how the Wuala architecture is designed to provide security and fairness. Wuala employs the 128-bit AES algorithm for encryption and the 2048-bit RSA algorithm for authentication. If you're interested in how Wuala manages encryption, have a look at their publication on Cryptree. They have also implemented distributed reputation, audit, and maintenance functions. Check out the Tech Talk! It is worth the time!
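To see why the erasure-coding route wins at ~1/5 node availability, it helps to run the numbers: data stays retrievable as long as enough of its stored fragments are online. The sketch below uses the more common k-of-n notation (k original pieces, n stored fragments, any k of which reconstruct the data), and the parameters are illustrative, not Wuala's actual ones.

```python
# Rough availability math for fragments stored on nodes that are each online
# with probability p, independently. Parameters are illustrative only.
from math import comb

def retrievable(n: int, k: int, p: float) -> float:
    """P(at least k of n independently stored fragments are online)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p, target = 0.2, 0.999  # ~1/5 node availability, desired retrievability

# Plain replication: each copy is the whole file (k = 1); overhead = n copies.
n_rep = 1
while retrievable(n_rep, 1, p) < target:
    n_rep += 1

# Erasure coding: split the file into k pieces, store n coded fragments,
# any k of which reconstruct it; storage overhead = n / k.
k = 20
n_ec = k
while retrievable(n_ec, k, p) < target:
    n_ec += 1

print(f"replication needs about {n_rep}x storage overhead")
print(f"erasure coding (k={k}) needs about {n_ec / k:.1f}x storage overhead")
```

With these toy numbers, replication needs dozens of full copies to reach three nines of retrievability, while the coded scheme needs far less storage for the same guarantee, and larger k improves the ratio further: that is the "reasonable overhead" the talk refers to.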

Click to read more ...

Thursday
Jul 10, 2008

Can cloud computing smite down evil zombie botnet armies?

In the more cool stuff I've never heard of before department is something called Self Cleansing Intrusion Tolerance (SCIT). Botnets are created when vulnerable computers live long enough to become infected with the will to do the evil bidding of their evil masters. Security is almost always about removing vulnerabilities (a process which to outside observers often looks like a dog chasing its tail). SCIT takes a different approach: it works on the availability angle. Something I never thought of before, but which makes a great deal of sense once I thought about it.

With SCIT you stop and restart VM instances every minute (or whatever, depending on your desired vulnerability window). This short exposure window means worms and viruses do not have long enough to fully infect a machine and carry out a coordinated attack. A machine is up for a while. Does work. And then is torn down again, only to be reborn as a clean VM with no possibility of infection (unless of course the VM mechanisms themselves become infected). It's like curing cancer by constantly moving your consciousness to new blemish-free bodies. Hmmm...

SCIT is really a genius approach to scalable (I have to work in scalability somewhere) security and fits perfectly with cloud computing and swarm (cloud of clouds) computing. Clouds provide plenty of VMs, so there is a constant ready supply of new hosts. From a software design perspective EC2 has been training us to expect failures and build Crash Only Software. We've gone stateless where we can, so load balancing to a new VM is no problem. Where we can't go stateless we use work queues and clusters, so again, reincarnating onto new VMs is not a problem. So purposefully restarting VMs to starve zombie networks was born for cloud computing.

If a wider move could be made to cloud-backed thin clients the internet might be a safer place to live, play, and work. Imagine being free(er) from spam blasts and DDOS attacks. Oh what a wonderful world it would be...
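For flavor, here is a minimal sketch of the rotation loop the post describes, with the window set to a minute. The provision/drain/terminate functions are hypothetical placeholders standing in for whatever cloud API actually boots and destroys instances; the point is simply that an infection can never outlive one rotation window.

```python
import itertools
import time

ROTATION_WINDOW_SECONDS = 60   # the post's "every minute (or whatever)"

def provision_clean_vm(generation: int) -> str:
    """Boot a fresh VM from a known-good image (placeholder, not a real API)."""
    vm_id = f"vm-{generation}"
    print(f"provisioned {vm_id} from clean image")
    return vm_id

def drain_and_terminate(vm_id: str) -> None:
    """Stop routing work to the VM, then destroy it (placeholder)."""
    print(f"drained and terminated {vm_id}")

def rotate_forever() -> None:
    current = provision_clean_vm(0)
    for generation in itertools.count(1):
        time.sleep(ROTATION_WINDOW_SECONDS)      # the exposure window
        replacement = provision_clean_vm(generation)
        # Retire the old instance only after the clean replacement exists,
        # so capacity never dips during the swap.
        drain_and_terminate(current)
        current = replacement

if __name__ == "__main__":
    rotate_forever()
```

A real controller would also register the replacement with the load balancer and health-check it before retiring the old instance, which is the same drain-and-replace dance the stateless-plus-work-queues design already assumes.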

Click to read more ...

Tuesday
May 27, 2008

Secure Remote Administration for Large-Scale Networks

This website has been a great resource for helping me understand the successful (and failed) scalable network designs from organizations that have actually done it, but I haven't seen any explicit explanations about secure remote administration of these systems. I understand that the *nix people love their SSH and the Windows gang has its RDP, but how does one go about creating a network architecture that both lets you manage your systems and does its best to avoid hacker interest? I imagine no big website will have the SSH/RDP/FTP ports open on its web servers, so how do they go about remotely administering their geographically diverse groups of servers securely?

Click to read more ...