Entries in perspectives (5)

Tuesday
Dec 30, 2008

Scalability Perspectives #5: Werner Vogels – The Amazon Technology Platform

Scalability Perspectives is a series of posts that highlights the ideas that will shape the next decade of IT architecture. Each post is dedicated to a thought leader of the information age and his vision of the future. Be warned though – the journey into the minds and perspectives of these people requires an open mind.

Werner Vogels

Dr. Werner Vogels is Vice President & Chief Technology Officer at Amazon.com, where he is responsible for driving the company's technology vision, which is to continuously enhance innovation on behalf of Amazon's customers at a global scale. Prior to joining Amazon, he worked as a researcher at Cornell University, where he was a principal investigator on several research projects targeting the scalability and robustness of mission-critical enterprise computing systems. He is regarded as one of the world's top experts on ultra-scalable systems, and he uses his weblog to educate the community about issues such as eventual consistency. InformationWeek recently recognized Vogels for his educational and promotional role in Cloud Computing with the 2008 CIO/CTO of the Year award.

Service-Oriented Architecture, Utility Computing and Internet Level 3 Platform in practice

Amazon has built a loosely coupled service-oriented architecture on an inter-planetary scale. They are the pioneers of the Utility Computing and Internet Platform models discussed earlier in Scalability Perspectives. Amazon's CTO, Werner Vogels, is undoubtedly a thought leader for the coming age of cloud computing.

Cloud Computing CTO or Chief Cloud Officer?

Vogels' name and face are often associated with Amazon's cloud, but Amazon Web Services isn't a one-man show; it is teamwork. Amazon's CTO has emerged as the right person at the right time and place to guide cloud computing, until now an emerging technology for early adopters, into the mainstream. He not only understands how to architect a global computing cloud consisting of tens of thousands of servers, but also how to engage CTOs, CIOs, and other professionals at customer companies in a discussion of how that architecture could change the way they approach IT. If all goes as planned, Amazon's cloud will serve as an extension of corporate data centers for new applications and overflow capacity, so-called cloud bursting. Over time, Amazon will then take on more and more of the IT workload from businesses that see value in the model. Customer-centric? What Amazon is doing goes beyond that. Amazon's cloud becomes their cloud; its CTO, their CTO. As an expert in distributed systems, Vogels shares interesting insights on scalability-related issues on his blog, such as eventual consistency.
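
To make the cloud-bursting idea a bit more concrete, here is a minimal sketch of an overflow dispatcher. It assumes a hypothetical fixed on-premise capacity, and the names are illustrative rather than part of any Amazon API:

```python
# Illustrative sketch of "cloud bursting": run work on in-house capacity first,
# then overflow to rented cloud capacity. All names here are hypothetical and
# not part of any Amazon API.

LOCAL_CAPACITY = 100  # jobs the corporate data center can run at once (assumed)

def dispatch(jobs, run_locally, run_in_cloud):
    """Send jobs to the local data center until it is full, then burst to the cloud."""
    local_slots = LOCAL_CAPACITY
    for job in jobs:
        if local_slots > 0:
            local_slots -= 1
            run_locally(job)    # normal case: in-house servers absorb the load
        else:
            run_in_cloud(job)   # overflow: pay-per-use capacity out in the cloud

# Usage (hypothetical callables): dispatch(job_queue, cluster.submit, cloud_pool.submit)
```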

The Amazon Technology Platform

In this QCon presentation, Werner Vogels explains how Amazon has become a platform provider and how an increasing number of diverse businesses are built on the Amazon.com platform. The most important thing to understand is that Amazon is a Technology Platform, with the emphasis on Technology. The scalable and reliable platform is the main enabler of Amazon's business model. Vogels also describes Amazon's platform business model and its ‘flywheel’ for growth in the latest episode of the Telco 2.0 ‘executive brainstorm’ series on Telecom TV. Amazon has many platforms that fuel growth, such as:
  • Amazon Merchants
  • Amazon Associates
  • Amazon E-Commerce Platform
  • Web Scale Computing Platform
  • Amazon Kindle
  • Telecommunications platforms using Amazon's platform?
Werner provides interesting insights about these platforms in his presentation. Check out his blog and other resources to learn more about his vision and Amazon's future.


Friday
Dec 5, 2008

Scalability Perspectives #4: Kevin Kelly – One Machine

Scalability Perspectives is a series of posts that highlights the ideas that will shape the next decade of IT architecture. Each post is dedicated to a thought leader of the information age and his vision of the future. Be warned though – the journey into the minds and perspectives of these people requires an open mind. Warning #2: this post is wild.

Kevin Kelly

Kevin Kelly is Senior Maverick at Wired magazine. He helped launch Wired in 1993, and served as its Executive Editor until January 1999. He co-founded the ongoing Hackers' Conference, and was involved with the launch of the WELL, a pioneering online service started in 1985. He authored the best-selling New Rules for the New Economy and the classic book on decentralized emergent systems, Out of Control.

One Machine

There is only one time in the history of each planet when its inhabitants first wire up its innumerable parts to make one large Machine. Later that Machine may run faster, but there is only one time when it is born. You and I are alive at this moment. Is this global web of computers, servers and trunk lines a mere mechanical circuit, a very large tool, or does it reach a threshold where something, well, different happens? Kevin Kelly's hypothesis is this: "The rapidly increasing sum of all computational devices in the world connected online, including wirelessly, forms a superorganism of computation with its own emergent behaviors. I define the One Machine as the emerging superorganism of computers. It is a megasupercomputer composed of billions of sub computers. The sub computers can compute individually on their own, and from most perspectives these units are distinct complete pieces of gear. But there is an emerging smartness in their collective that is smarter than any individual computer. We could say learning (or smartness) occurs at the level of the superorganism."

The Next 6500 Days of the Web

Kevin Kelly recently gave a short talk on the upcoming Web 10.0 at the Web 2.0 Summit in San Francisco. It is like an update to his previous TED talk on Predicting the next 5000 days of the web. He makes us realize that the Web is only around 6500 days old and argues that the next 6500 days will be something entirely different.

Dimensions of the One Machine

Kevin Kelly's post on his blog The Technium, back from 2007, shows us the dimensions of the One Machine: the next stage in human technological evolution is a single thinking/web/computer that is planetary in dimensions. This planetary computer will be the largest, most complex and most dependable machine we have ever built. It will also be the platform that most business and culture will run on. Today it contains approximately 1.2 billion personal computers, 2.7 billion cell phones, 1.3 billion land phones, 27 million data servers, and 80 million wireless PDAs. The processor chips of all these parts are feeding the computation of the internet/web/telecommunications system. A very rough estimate of the computing power of this Machine is that it contains a billion times a billion, or one quintillion (10^18), transistors.

There are about 100 billion neurons in the human brain, so today the Machine has about 5 orders of magnitude more transistors than you have neurons in your head. And the Machine, unlike your brain, is doubling in power every couple of years at the minimum.

If the Machine has 100 quadrillion transistors, how fast is it running? If we include spam, there are 196 billion emails sent every day. That's 2.2 million per second, or 2 megahertz. Every year 1 trillion text messages are sent. That works out to 31,000 per second, or 31 kilohertz. Each day 14 billion instant messages are sent, at 162 kilohertz. The number of searches runs at 14 kilohertz. Links are clicked at the rate of 520,000 per second, or 0.5 megahertz.

There are 20 billion visible, searchable web pages and another 900 billion dark, unsearchable, or deep web pages. The average number of links found on each searchable web page is 62. Assuming the same count for the dark pages, that means there are 55 trillion links in the full web. We could think of each link as a synapse, a potential connection waiting to be made. There are roughly between 100 billion and 100 trillion synapses in the human brain, which puts the Machine in the same neighborhood as our brains. We could start by saying the Machine currently has 1 HB (Human Brain) equivalent. That measure might hold up for a decade or so, but after it gets to 100 HB, or 10,000 HB, it begins to feel like using inches to measure galactic space. Check out Kevin Kelly's blog for the conclusions and more (wild?) ideas. How do you see the future of the Web?
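
The rate figures above are simple unit conversions. A back-of-envelope sketch, reusing the counts quoted from Kelly's post (the rounding differs slightly from his), looks like this:

```python
# Back-of-envelope reproduction of the rate arithmetic quoted above.
# Input counts come from Kelly's post; results are rough orders of magnitude.

SECONDS_PER_DAY = 24 * 60 * 60              # 86,400
SECONDS_PER_YEAR = 365 * SECONDS_PER_DAY

emails_per_sec = 196e9 / SECONDS_PER_DAY    # ~2.3 million/s, i.e. roughly 2 "MHz"
texts_per_sec = 1e12 / SECONDS_PER_YEAR     # ~32,000/s, i.e. roughly 31 "kHz"
ims_per_sec = 14e9 / SECONDS_PER_DAY        # ~162,000/s, i.e. roughly 162 "kHz"

# Links as "synapses": ~20 billion searchable pages plus ~900 billion deep-web
# pages, at ~62 links per page, gives on the order of 55 trillion links.
total_links = (20e9 + 900e9) * 62           # ~5.7e13

print(f"{emails_per_sec:,.0f} emails/s, {total_links:.1e} links in the full web")
```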


Monday
Nov 24, 2008

Scalability Perspectives #3: Marc Andreessen – Internet Platforms

Scalability Perspectives is a series of posts that highlights the ideas that will shape the next decade of IT architecture. Each post is dedicated to a thought leader of the information age and his vision of the future. Be warned though – the journey into the minds and perspectives of these people requires an open mind.

Marc Andreessen

Marc Andreessen is an Internet pioneer, entrepreneur, investor, startup coach, blogger, and multi-millionaire software engineer, best known as co-author of Mosaic, the first widely used web browser, and as founder of Netscape Communications Corporation. He was the chairman of Opsware, a software company he founded originally as Loudcloud, when it was acquired by Hewlett-Packard. He is also a co-founder of Ning, a company which provides a platform for social-networking websites. He recently joined the boards of directors of Facebook and eBay.

Marc is an investor in several startups including Digg, Metaplace, Plazes, Qik, and Twitter. His passion is to create new technologies, to start new companies, and to scale them up.

Internet Platforms Rule the Cloud

From Marc's Blog Post on the Three Kinds of Platforms:

One of the hottest of hot topics these days is the topic of Internet platforms, or platforms on the Internet. However, the concept of "platform" is also the focus of a swirling vortex of confusion -- lots of platform-related concepts, many of them highly technical, bleeding together; lots of people harboring various incompatible mental images of what's about to happen in our industry as a consequence of various platforms. I think this confusion is due in part to the term "platform" being overloaded and being used to mean many different things, and in part because there truly are a lot of moving parts at play that intersect in fascinating but complex ways.

Marc attempts to disentangle and examine the topic of "Internet platform" in detail. He has identified three distinct approaches to providing an Internet platform and shows us where each of the three approaches could go.

Internet Platforms Defined

A "platform" is a system that can be programmed and therefore customized by outside developers -- users -- and in that way, adapted to countless needs and niches that the platform's original developers could not have possibly contemplated, much less had time to accommodate.
...
The key term in the definition of platform is "programmed". If you can program it, then it's a platform. If you can't, then it's not.

The Internet gives rise to three new models of platform that are playing out in the Internet industry today. Marc calls these Internet platform models "levels", because as you go from Level 1 to Level 2 to Level 3, each kind of platform is harder to build, but much better for the developer. Further, each level typically supersets the levels below.

Level 1 Platform - "Access API"

This is the kind of Internet platform that is most common today. It is typically provided in the form of a web services API, accessed using a protocol such as REST or SOAP.

Architecturally, the key thing to understand about this kind of platform is that the developer's application code lives outside the platform -- the code executes somewhere else, on a server elsewhere on the Internet that is provided by the developer.

Examples: eBay, PayPal, Flickr, Delicious

  • The entire burden of building and running the application itself is left entirely to the developer
  • The easiest kind of Internet platform to create
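
As an illustration of the Level 1 model, here is a minimal sketch of a developer's own server calling a REST-style web services API. The endpoint URL and response fields are hypothetical, not any real eBay, PayPal, or Flickr API:

```python
# Level 1 ("Access API") sketch: the platform exposes only a web services API;
# this code, and the application around it, run on the developer's own servers.
# The endpoint URL and response fields below are hypothetical.

import json
import urllib.request

def fetch_photos(user_id: str):
    """Call a REST-style API over plain HTTP and return the decoded JSON result."""
    url = f"https://api.example.com/v1/photos?user={user_id}"
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

# The developer still has to host, scale, and operate everything that calls this.
```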

Level 2 Platform - "Plug-In API"

This is the kind of platform approach that historically has been used in end-user applications to let developers build new functions that can be injected, or "plug in", to the core system and its user interface.

In the Internet realm, the first Level 2 platform that I'm aware of is the Facebook platform.

When you develop a Facebook app, you are not developing an app that simply draws on data or services from Facebook, as you would with a Level 1 platform. Instead, you are building an app that acts like a "plug-in" into Facebook -- your app literally shows up within the Facebook user experience, often as a box in the middle of a page that Facebook otherwise defines, such as a user profile page.

  • The third-party app itself lives outside the platform
  • The entire burden of building and running a Level 2 platform-based app is left entirely to the developer
  • Unlike a Level 1 platform where the burden of exposing the app to users is also placed on the developer, Level 2 Internet platforms -- as demonstrated by Facebook -- will be able to directly help their developers get users for their apps
  • Level 2 platforms are significantly harder to create than Level 1 platforms
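
A Level 2 platform can be pictured as a host application that composes third-party "boxes" into its own pages. The sketch below is a generic, in-process caricature of that idea, with invented registry and hook names; it is not Facebook's actual plug-in API:

```python
# Level 2 ("Plug-In API") sketch: third-party code still runs on the developer's
# servers, but its output is injected into the host platform's own pages.
# The registry and hook names here are invented for illustration.

PLUGINS = {}

def register_plugin(name, render_box):
    """A third-party developer registers a callback that renders one box of markup."""
    PLUGINS[name] = render_box

def render_profile_page(user):
    """The host platform composes its own page plus every registered plug-in's box."""
    boxes = "".join(render(user) for render in PLUGINS.values())
    return f"<html><h1>{user}</h1>{boxes}</html>"

# In practice the platform would call back to the developer's server over HTTP
# to obtain the box; here a lambda stands in for that remote call.
register_plugin("greeter", lambda user: f"<div>Hello, {user}!</div>")
print(render_profile_page("alice"))
```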

Level 3 Platform - "Runtime Environment"

In a Level 3 platform, the huge difference is that the third-party application code actually runs inside the platform -- developer code is uploaded and runs online, inside the core system. For this reason, in casual conversation I refer to Level 3 platforms as "online platforms".

In addition, it is highly likely that a Level 3 platform will also superset Level 2 and Level 1 -- i.e., a Level 3 platform will typically also have some kind of plug-in API and some kind of access API.

Put in plain English? A Level 3 platform's developers upload their code into the platform itself, which is where that code runs. As a developer on a Level 3 platform, you don't need your own servers, your own storage, your own database, your own bandwidth, nothing... in fact, often, all you will really need is a browser. The platform itself handles everything required to run your application on your behalf.

Obviously this is a huge difference from Level 2. And this difference -- and what makes it possible -- is why I think Level 3 platforms are the future.

  • Level 3 platforms are much harder to build than Level 2 platforms.
  • The level of technical expertise required of someone to develop on your platform drops by at least 90%, and the level of money they need drops to $0
  • The Level 3 Internet platform approach is much more like the computer industry's typical platform (PC) model than Levels 2 or 1.
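
A Level 3 platform, by contrast, takes the developer's code itself and runs it. The toy sketch below shows the shape of that contract, an upload step followed by the platform routing requests into the uploaded code; it is purely illustrative and not Ning's or any other provider's real API:

```python
# Level 3 ("Runtime Environment") sketch: the developer uploads code and the
# platform stores it, hosts it, and runs it; the developer needs no servers,
# storage, or bandwidth of their own. A toy model, not any real provider's API.

UPLOADED_APPS = {}

def upload_app(app_name, handler):
    """The developer hands code to the platform; the platform keeps and hosts it."""
    UPLOADED_APPS[app_name] = handler

def handle_request(app_name, request):
    """The platform routes incoming traffic to the uploaded code and runs it in place."""
    return UPLOADED_APPS[app_name](request)

# "Deploying" is just an upload; the platform supplies everything else.
upload_app("guestbook", lambda req: f"Hi {req.get('name', 'stranger')}, welcome!")
print(handle_request("guestbook", {"name": "Bob"}))
```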

Who is building Level 3 Internet platforms?

Marc has built one - Ning has been built from the start to be a Level 3 platform.

Other Level 3 platforms are emerging as well; Marc's post names several of them.

How will we see the new platforms of the future?

We are used to seeing platforms ship as products -- you buy and install a PC or a server and you build an app that runs on it, or equivalently you download and install an open source platform such as Perl or Ruby and you build an app that runs on it.

The platforms of the future won't be like that. The platforms of the future will be online services that you will tap into over the Internet, perhaps with nothing more running locally than a browser. They won't have anything you download, or even an SDK. They will look more like services than software. To paraphrase the Book of Matthew, "you will know them by their URLs".

Read Marc's full blog post for more details! Can you add more Level 3 Platforms?


Tuesday
Nov 18, 2008

Scalability Perspectives #2: Van Jacobson – Content-Centric Networking

Scalability Perspectives is a series of posts that highlights the ideas that will shape the next decade of IT architecture. Each post is dedicated to a thought leader of the information age and his vision of the future. Be warned though – the journey into the minds and perspectives of these people requires an open mind.

Van Jacobson

Van Jacobson is a Research Fellow at PARC. Prior to that he was Chief Scientist and co-founder of Packet Design. Prior to that he was Chief Scientist at Cisco. Prior to that he was head of the Network Research group at Lawrence Berkeley National Laboratory. He's been studying networking since 1969. He still hopes that someday something will start to make sense.

Scaling the Internet – Does the Net need an upgrade?

As the Internet is being overrun with video traffic, many wonder if it can survive. With challenges being thrown down over the imbalances that have been created and their impact on the viability of monopolistic business models, the Internet is under constant scrutiny. Will it survive? Or will it succumb to the burden of the billion-plus community that is constantly demanding more and more? Does the Net need an upgrade? To answer this question, a distinguished panel of Van Jacobson, Rick Hutley, Norman Lewis, and David S. Isenberg discussed the issue at the Supernova conference. In this compelling debate, available on IT Conversations, the panel addresses the question and provides some differing perspectives. One of those perspectives is content-centric networking, described by Van Jacobson.

A New Way to Look at Networking

Today's research community congratulates itself for the success of the internet and passionately argues whether circuits or datagrams are the One True Way. Meanwhile the list of unsolved problems grows. Security, mobility, ubiquitous computing, wireless, autonomous sensors, content distribution, digital divide, third world infrastructure, etc., are all poorly served by what's available from either the research community or the marketplace. In this amazing Google Tech Talk, Van Jacobson uses various strained analogies and contrived examples to argue that network research is moribund because the only thing it knows how to do is fill in the details of a conversation between two applications. Today, as in the '60s, problems go unsolved due to our tunnel vision and not because of their intrinsic difficulty. And now, like then, simply changing our point of view may make many hard things easy.

Content-centric networking

The founding principle of content-centric networking is that a communication network should allow a user to focus on the data he or she needs, rather than having to reference a specific, physical location from which that data is to be retrieved. This stems from the fact that the vast majority of current Internet usage (a "high 90% level of traffic") consists of data being disseminated from a source to a number of users. The current architecture of the Internet revolves around a conversation model, created in the 1970s to allow geographically distributed users to use a few big, immobile computers. The content-centric approach seeks to adapt the basic architecture of the network to current usage patterns. The new approach comes with a wide range of benefits, one of which is building security (both authentication and encryption) into the network, at the data level. Despite all its advantages, this idea doesn't seem to map very well to some of the current uses of the Web (like web applications, where data is generated on the fly according to user actions) or to real-time applications like VoIP and instant messaging. But one can envision an Internet where content-centric protocols take care of the diffusion-based uses of the network, creating an overlay network, while genuine conversation-centric protocols stay on the current infrastructure.
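
To contrast the two models in code: a host-centric fetch asks one specific machine for bytes, while a content-centric fetch names the data and accepts a verified copy from whichever node happens to have it. The sketch below is a toy illustration of that idea, not the actual CCN/NDN protocol:

```python
# Content-centric networking sketch: a request names the data rather than the
# host that serves it, and integrity is checked on the data itself. A toy model
# for illustration, not the actual CCN/NDN wire protocol.

import hashlib

# Any node (origin server, router cache, a neighbor's machine) may hold named content.
CACHES = [
    {},                                              # our local cache
    {"/videos/talk.mp4": b"...video bytes..."},      # a nearby cache that already has it
]

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def fetch(name: str, expected_hash: str) -> bytes:
    """Ask for content by name and accept it from whoever answers, if it verifies."""
    for cache in CACHES:                             # where the copy lives is irrelevant
        data = cache.get(name)
        if data is not None and content_hash(data) == expected_hash:
            CACHES[0][name] = data                   # popular data migrates toward consumers
            return data
    raise LookupError(f"no verified copy of {name} found")
```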

Solutions or workarounds?

There are many solutions or workarounds for the problems posed by traditional conversation-based networking, such as content delivery networks, caching, distributed filesystems, P2P, and PKI. By taking Van Jacobson's perspective we can investigate new dimensions of these problems. What could be the impact of this perspective on the future of the Internet's architecture? What do you think? I recommend the A New Way to Look at Networking video by Van Jacobson, in which he tells the brief history of networking from the phone system to the Internet and lays out his vision for dissemination networking.


Monday
Nov 10, 2008

Scalability Perspectives #1: Nicholas Carr – The Big Switch

Scalability Perspectives is a series of posts that highlights the ideas that will shape the next decade of IT architecture. Each post is dedicated to a thought leader of the information age and his vision of the future. Be warned though – the journey into the minds and perspectives of these people requires an open mind.

Nicholas Carr

A former executive editor of the Harvard Business Review, Nicholas Carr writes and speaks on technology, business, and culture. His provocative 2004 book Does IT Matter? set off a worldwide debate about the role of computers in business.

The Big Switch – Rewiring the World, From Edison to Google

Carr's core insight is that the development of the computer and the Internet remarkably parallels that of the last radically disruptive technology, electricity. He traces the rapid morphing of electrification from an in-house competitive advantage to a ubiquitous utility, and how the business advantage rapidly shifted from the innovators and early adopters to corporate titans who made their fortunes from controlling a commodity essential to everyday life. He envisions a similar future for the IT utility in his new book.

"... and likewise all parts of the system must be constructed with reference to all other parts, since, in one sense, all the parts form one machine." - Thomas Edison

Carr's vision is that IT services delivered over the Internet are replacing traditional software applications from our hard drives. We rely on the new utility grid to connect with friends at social networks, track business opportunities, manage photo collections or stock portfolios, watch videos, and write blogs or business documents online. All these services hint at the revolutionary potential of the new computing grid and the information utilities that run on it. In the years ahead, more and more of the information-processing tasks that we rely on, at home and at work, will be handled by big data centers located out on the Internet. The nature and economics of computing will change as dramatically as the nature and economics of mechanical power changed with the rise of electric utilities in the early years of the last century. The consequences for society - for the way we live, work, learn, communicate, entertain ourselves, and even think - promise to be equally profound. If the electric dynamo was the machine that fashioned twentieth-century society - that made us who we are - then the information dynamo is the machine that will fashion the new society of the twenty-first century.

The "utilitarians", as Carr calls them, can deliver breakthrough IT economics through the use of highly efficient data centers and scalable, distributed computing, networking and storage architectures. There's a new breed of Internet company on the loose. They grow like weeds, serve millions of customers a day, and operate globally. And they have very, very few employees. Look at YouTube, the video network. When it was bought by Google in 2006, for more than $1 billion, it was one of the most popular and fastest growing sites on the Net, broadcasting more than 100 million clips a day. Yet it employed a grand total of 60 people. Compare that to a traditional TV network like CBS, which has more than 23,000 employees.

Goodbye, Mr. Gates

So goes the title of Chapter 4 of the book. “The next sea change is upon us.” Those words appeared in an extraordinary memorandum that Bill Gates sent to Microsoft's top managers and engineers on October 30, 2005. “Services designed to scale to tens or hundreds of millions [of users] will dramatically change the nature and cost of solutions deliverable to enterprise or small businesses.” This new wave, he concluded, “will be very disruptive.”

IT in 2018: From Turing’s Machine to the Computing Cloud

Carr's new internet.com eBook concludes that, thanks to Alan Turing's theory of the Universal Computing Machine and the rise of modern virtualization technologies:
  • With enough memory and enough speed, Turing’s work implies, a single computer could be programmed, with software code, to do all the work that is today done by all the other physical computers in the world.
  • Once you virtualize the computing infrastructure, you can run any application, including a custom-coded one, on an external computing grid.
  • In other words: Software (coding) can always be substituted for hardware (switching), as the sketch below illustrates.
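
As a toy illustration of that last point, here is a one-bit full adder, a circuit that is ordinarily built from transistor switches, emulated entirely in software. Pile up enough of these substitutions and a general-purpose computer can stand in for special-purpose hardware:

```python
# Toy illustration of "software substituted for hardware": a one-bit full adder,
# ordinarily a handful of transistor switches, emulated as a pure function.

def full_adder(a, b, carry_in):
    """Return (sum_bit, carry_out) exactly as the hardware circuit would."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add_4bit(x, y):
    """Chain the software 'circuit' four times, as a ripple-carry adder does in silicon."""
    carry, result = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert add_4bit(0b0101, 0b0011) == 0b1000   # 5 + 3 = 8
```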

Into the Cloud

Carr demonstrates the power of the cloud through the example of the answering machine, which has been vaporized into the cloud. The same is happening to our e-mails, documents, photo albums, movies, friends, and even the world (Google Earth?), too. If you’re of a certain age, you’ll probably remember that the first telephone answering machine you used was a bulky, cumbersome device. It recorded voices as analog signals on spools of tape that required frequent rewinding and replacing. But it wasn’t long before you replaced that machine with a streamlined digital answering machine that recorded messages as strings of binary code, allowing all sorts of new features to be incorporated into the device through software programming. But the virtualization of telephone messaging didn’t end there. Once the device became digital, it didn’t have to be a device anymore – it could turn into a service running purely as code out in the telephone company’s network. And so you threw out your answering machine and subscribed to a service. The physical device vaporized into the “cloud” of the network.

The Great Enterprise of the 21st Century

Carr considers building scalable web sites and services a great opportunity for this century. Good news for highscalability.com :-) Just as the last century’s electric utilities spurred the development of thousands of new consumer appliances and services, so the new computing utilities will shake up many markets and open myriad opportunities for innovation. Harnessing the power of the computing grid may be the great enterprise of the twenty-first century.
