Entries in google (39)

Wednesday
Nov 19, 2008

High Definition Video Delivery on the Web?

How would you architect and implement an SD and HD internet video delivery system such as the BBC iPlayer or Recast Digital's RDV1? What do you need to consider on top of the Lessons Learned section in the YouTube Architecture post? How is it possible to compete with the big players like Google? Can you just use a CDN and scale efficiently? Would Amazon's cloud services be a viable platform for high-definition video streaming?
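One building block nearly every HTTP-based video delivery stack shares is an origin server that honours byte-range requests, so that players can seek inside large files and a CDN can cache pieces of them close to viewers. A minimal sketch of such an origin (the file name, port and handler are illustrative assumptions, not anything from iPlayer, RDV1 or YouTube):

```python
# Minimal origin sketch: serve a video file with HTTP Range support so a
# player can seek and a CDN can cache byte ranges. Illustrative only.
import os
import re
from http.server import BaseHTTPRequestHandler, HTTPServer

VIDEO_PATH = "sample_hd.mp4"   # hypothetical local file acting as the origin copy

class RangeVideoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        size = os.path.getsize(VIDEO_PATH)
        start, end = 0, size - 1
        range_header = self.headers.get("Range")
        if range_header:
            m = re.match(r"bytes=(\d*)-(\d*)", range_header)
            if m:
                if m.group(1):
                    start = int(m.group(1))
                if m.group(2):
                    end = int(m.group(2))
            self.send_response(206)  # Partial Content
            self.send_header("Content-Range", f"bytes {start}-{end}/{size}")
        else:
            self.send_response(200)
        length = end - start + 1
        self.send_header("Content-Type", "video/mp4")
        self.send_header("Accept-Ranges", "bytes")
        self.send_header("Content-Length", str(length))
        self.end_headers()
        with open(VIDEO_PATH, "rb") as f:
            f.seek(start)
            self.wfile.write(f.read(length))  # fine for a sketch; stream in chunks for real use

if __name__ == "__main__":
    HTTPServer(("", 8080), RangeVideoHandler).serve_forever()
```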

Monday
Nov 10, 2008

Scalability Perspectives #1: Nicholas Carr – The Big Switch

Scalability Perspectives is a series of posts that highlights the ideas that will shape the next decade of IT architecture. Each post is dedicated to a thought leader of the information age and his vision of the future. Be warned though – the journey into the minds and perspectives of these people requires an open mind.

Nicholas Carr

A former executive editor of the Harvard Business Review, Nicholas Carr writes and speaks on technology, business, and culture. His provocative 2004 book Does IT Matter? set off a worldwide debate about the role of computers in business.

The Big Switch – Rewiring the World, From Edison to Google

Carr's core insight is that the development of the computer and the Internet remarkably parallels that of the last radically disruptive technology, electricity. He traces the rapid morphing of electrification from an in-house competitive advantage to a ubiquitous utility, and how the business advantage rapidly shifted from the innovators and early adopters to the corporate titans who made their fortunes from controlling a commodity essential to everyday life. He envisions a similar future for the IT utility in his new book.

"... and likewise all parts of the system must be constructed with reference to all other parts, since, in one sense, all the parts form one machine." - Thomas Edison

Carr's vision is that IT services delivered over the Internet are replacing traditional software applications from our hard drives. We rely on the new utility grid to connect with friends on social networks, track business opportunities, manage photo collections or stock portfolios, watch videos and write blogs or business documents online. All these services hint at the revolutionary potential of the new computing grid and the information utilities that run on it.

"In the years ahead, more and more of the information-processing tasks that we rely on, at home and at work, will be handled by big data centers located out on the Internet. The nature and economics of computing will change as dramatically as the nature and economics of mechanical power changed with the rise of electric utilities in the early years of the last century. The consequences for society - for the way we live, work, learn, communicate, entertain ourselves, and even think - promise to be equally profound. If the electric dynamo was the machine that fashioned twentieth-century society - that made us who we are - the information dynamo is the machine that will fashion the new society of the twenty-first century."

The "utilitarians", as Carr calls them, can deliver breakthrough IT economics through the use of highly efficient data centers and scalable, distributed computing, networking and storage architectures.

"There's a new breed of Internet company on the loose. They grow like weeds, serve millions of customers a day and operate globally. And they have very, very few employees. Look at YouTube, the video network. When it was bought by Google in 2006, for more than $1 billion, it was one of the most popular and fastest growing sites on the Net, broadcasting more than 100 million clips a day. Yet it employed a grand total of 60 people. Compare that to a traditional TV network like CBS, which has more than 23,000 employees."

Goodbye, Mr. Gates

That is also the title of Chapter 4 of the book. "The next sea change is upon us." Those words appeared in an extraordinary memorandum that Bill Gates sent to Microsoft's top managers and engineers on October 30, 2005. "Services designed to scale to tens or hundreds of millions [of users] will dramatically change the nature and cost of solutions deliverable to enterprise or small businesses." This new wave, he concluded, "will be very disruptive."

IT in 2018: From Turing’s Machine to the Computing Cloud

Carr's new internet.com eBook concludes that, thanks to Alan Turing's theory of the Universal Computing Machine and the rise of modern virtualization technologies:
  • With enough memory and enough speed, Turing’s work implies, a single computer could be programmed, with software code, to do all the work that is today done by all the other physical computers in the world.
  • Once you virtualize the computing infrastructure, you can run any application, including a custom-coded one, on an external computing grid.
  • In other words: software (coding) can always be substituted for hardware (switching); the toy simulator sketched below illustrates the idea.
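To make Turing's point concrete, here is a toy sketch (mine, not Carr's): a single general-purpose simulator plus a rule table supplied as data does the work of a special-purpose bit-flipping machine; swap in a different table and the same simulator does a different job.

```python
# Toy illustration of Turing's universal machine idea: the simulator is the
# "hardware", the rule table is the "software". This table flips every bit.
def run_turing_machine(tape, rules, state="scan", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# "Software" for a bit-flipping machine: (state, symbol) -> (write, move, next state)
FLIP_RULES = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("101100", FLIP_RULES))  # -> 010011_
```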

Into the Cloud

Carr demonstrates the power of the cloud through the example of the answering machine, which has been vaporized into the cloud. The same thing is happening to our e-mails, documents, photo albums, movies, friends and world (Google Earth?), too.

If you’re of a certain age, you’ll probably remember that the first telephone answering machine you used was a bulky, cumbersome device. It recorded voices as analog signals on spools of tape that required frequent rewinding and replacing. But it wasn’t long before you replaced that machine with a streamlined digital answering machine that recorded messages as strings of binary code, allowing all sorts of new features to be incorporated into the device through software programming. But the virtualization of telephone messaging didn’t end there. Once the device became digital, it didn’t have to be a device anymore – it could turn into a service running purely as code out in the telephone company’s network. And so you threw out your answering machine and subscribed to a service. The physical device vaporized into the “cloud” of the network.

The Great Enterprise of the 21st Century

Carr considers building scalable web sites and services a great opportunity for this century. Good news for highscalability.com :-) "Just as the last century’s electric utilities spurred the development of thousands of new consumer appliances and services, so the new computing utilities will shake up many markets and open myriad opportunities for innovation. Harnessing the power of the computing grid may be the great enterprise of the twenty-first century."

Sunday
Oct 19, 2008

Alternatives to Google App Engine

One particularly interesting EC2 third-party provider is GigaSpaces, with their XAP platform that provides in-memory transactions backed up to a database. The in-memory transactions appear to scale linearly across machines, thus providing a distributed in-memory datastore that gets backed up to persistent storage.
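The GigaSpaces API itself isn't shown here, but the general pattern behind "in-memory, backed up to a database", a write-behind store where writes are acknowledged from memory and drained to durable storage in the background, can be sketched roughly like this (class names and the SQLite stand-in are my assumptions, not XAP):

```python
# Rough write-behind sketch: writes hit an in-memory map and return at once;
# a background thread persists them to a database (SQLite as a stand-in).
import queue
import sqlite3
import threading

class WriteBehindStore:
    def __init__(self, db_path="backup.db"):
        self.cache = {}                    # authoritative in-memory copy
        self.pending = queue.Queue()       # keys waiting to be persisted
        self.db_path = db_path
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
        conn.commit()
        conn.close()
        threading.Thread(target=self._flusher, daemon=True).start()

    def put(self, key, value):
        self.cache[key] = value            # fast, memory-speed write
        self.pending.put(key)              # durability happens later

    def get(self, key):
        return self.cache.get(key)         # reads never touch the database

    def _flusher(self):
        conn = sqlite3.connect(self.db_path)   # its own connection, own thread
        while True:
            key = self.pending.get()           # blocks until there is work
            conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)",
                         (key, self.cache[key]))
            conn.commit()

store = WriteBehindStore()
store.put("user:42", "alice")
print(store.get("user:42"))                # served from memory: alice
```

Scaling this linearly across machines then comes down to partitioning the key space across many such nodes, which is where a product like XAP earns its keep.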

Monday
Oct 13, 2008

Challenges from large scale computing at Google

From Greg Linden, on a talk Google Fellow Jeff Dean gave last week at University of Washington Computer Science, titled "Research Challenges Inspired by Large-Scale Computing at Google": Coming away from the talk, the biggest points for me were the considerable interest in reducing costs (especially power costs), the suggestion that the Google cluster may eventually contain 10M machines at 1k locations, and the call to action for researchers on distributed systems and databases to think orders of magnitude bigger than they often do: not about running on hundreds of machines in one location, but on hundreds of thousands of machines across many locations.

Monday
Aug 11, 2008

Distributed Computing & Google Infrastructure

A couple of videos about distributed computing, with direct reference to Google's infrastructure. You will get acquainted with:
  • MapReduce, the software framework implemented by Google to support parallel computation over large (greater than 100 terabyte) data sets on commodity hardware
  • GFS and the way it stores its data in 64 MB chunks
  • Bigtable, Google's simple implementation of a non-relational database
Cluster Computing and MapReduce, Lectures 1-5.
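For a feel of the programming model the lectures cover, here is a single-machine word-count sketch of MapReduce (illustrative only; the real framework partitions input across GFS chunks and thousands of workers):

```python
# MapReduce in miniature: map emits key/value pairs, the framework groups
# them by key, and reduce folds each group into a result.
from collections import defaultdict

def map_fn(document):
    for word in document.split():
        yield word.lower(), 1              # emit (word, 1) per occurrence

def reduce_fn(word, counts):
    return word, sum(counts)               # fold the per-word counts

def map_reduce(documents, map_fn, reduce_fn):
    groups = defaultdict(list)
    for doc in documents:                  # "map" phase
        for key, value in map_fn(doc):
            groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())   # "reduce" phase

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(map_reduce(docs, map_fn, reduce_fn))  # {'the': 3, 'quick': 1, ...}
```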

Wednesday
Apr 23, 2008

Behind The Scenes of Google Scalability

The recent Data-Intensive Computing Symposium brought together experts in system design, programming, parallel algorithms, data management, scientific applications, and information-based applications to better understand existing capabilities in the development and application of large-scale computing systems, and to explore future opportunities. Google Fellow Jeff Dean gave a very interesting presentation on Handling Large Datasets at Google: Current Systems and Future Directions. He discussed:
  • Hardware infrastructure
  • Distributed systems infrastructure: the scheduling system, GFS, BigTable, and MapReduce
  • Challenges and future directions: infrastructure that spans all datacenters, and more automation
It is really a "How does Google work" presentation in about 60 slides. Check out the slides and the video!
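One of those pieces, BigTable, is at heart a sparse, sorted, multidimensional map indexed by row key, column key and timestamp. A toy, single-machine model of just that data model (nothing here is distributed or persistent, and none of it is Google's code) might look like this:

```python
# Toy model of the BigTable data model: (row, column, timestamp) -> value,
# with reads returning the newest version of a cell.
from collections import defaultdict
import time

class ToyBigtable:
    def __init__(self):
        # row key -> column key -> list of (timestamp, value), newest first
        self.rows = defaultdict(lambda: defaultdict(list))

    def put(self, row, column, value, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()
        cell = self.rows[row][column]
        cell.append((ts, value))
        cell.sort(reverse=True)            # keep newest version first

    def get(self, row, column):
        cell = self.rows[row][column]
        return cell[0][1] if cell else None   # newest version wins

t = ToyBigtable()
t.put("com.cnn.www", "contents:", "<html>v1</html>", timestamp=1)
t.put("com.cnn.www", "contents:", "<html>v2</html>", timestamp=2)
print(t.get("com.cnn.www", "contents:"))   # <html>v2</html>
```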

Monday
Apr 7, 2008

Rumors of Signs and Portents Concerning Freeish Google Cloud

Update 2: Rumor no more. Google Jumps Head First Into Web Services With Google App Engine. The quick and dirty of it: developers simply upload their Python code to Google, launch the application, and can monitor usage and other metrics via a multi-platform desktop application. There were 10,000 developer slots open and of course I was too late. More as the cobra strikes.

Update: TechCrunch reports Google To Launch BigTable As Web Service next week. It competes with Amazon's SimpleDB. Though it won't be truly comparable until they also release an EC2 and S3 equivalent. An internet hit for each data access is a little painful. As Jimmy says in Goodfellas, "That's the way. You don't take no sh*t from nobody."

First Dave Winer hallucinates a pig on the mean streets of Walnut Creek that told him Google's long-foretold cloud offering will be free for bloggers of "modest needs." GigaOM then says a free cloud service is how Google could eat Amazon's bacon for lunch. The reason for this free cloud buffet is said to be the easier integration of acquisitions, which presumably must be in the Google cloud to be taken out. All the free stuff Google offers earns almost no money. They make money on search. Hosting every last CPU cycle on earth has to be costly. What's the return? Cheaper integration of new startups that will also provide no new revenue? Perhaps I am simply not clever enough to see the revolutionary brilliance in this line of thought. Though I would be quite pleased to have Google shareholders subsidize my projects.

Folknologist thinks Google may keep costs down by requiring developers to code to a Cloud Virtual Machine based on Java byte codes. Applications would be built using G-ROR, a JavaScript-style RoR framework. Revenue generation would come from an upsell of more memory and CPU. But aren't VMs already the perfect encapsulation from the cloud provider's perspective? They just load 'em and run 'em. Seems cost effective enough. For the developer, VMs also allow all the required flexibility. You don't need to be locked into one environment. You can pick from a large number of operating systems and an even wider variety of frameworks. Why lock in?

If the model is to treat the cloud like one giant Tomcat application server so you can squeeze more users onto the same amount of hardware, then Google would just be the world's largest shared hosting company. Not a cloud at all. And multi-tenant execution of applications in the same application server was always a really bad idea, given how one badly programmed app can bring down the whole bunch. Not to mention security concerns. VMs offer better control, manageability, and security.

I could see an Adoption Led market angle for Google. You could start small in a shared container and then, as you grow, move your app without change to a larger, more powerful, unshared container. We certainly do need a better way to create, deploy, and manage applications across VMs and data centers, but I don't quite see how this allows Google to make money offering an expensive service any better than the current VM approach. Though with all their cash, maybe they plan to just wait it out until all the others bash themselves apart on the rocky shores of free. Just in case this is an April Fools' joke, I already know I am an idiot, so no harm done.

Friday
Jan 25, 2008

Google: Introduction to Distributed System Design

Update: Google added videos on Cluster Computing and MapReduce. There are five lectures: Introduction, MapReduce, Distributed File Systems, Clustering Algorithms, and Graph Algorithms. Advanced website design depends on deep distributed system design knowledge. Where do you get this knowledge? Try Google. They have a whole Code for Educators program with tutorials and lectures on AJAX programming, distributed systems, and web security. Looks pretty nice.

Sunday
Jan 13, 2008

Google Reveals New MapReduce Stats

The Google Operating System blog has an interesting post on Google's scale, based on an updated version of Google's paper about MapReduce. The input data for the MapReduce jobs run in September 2007 was 403,152 TB (terabytes), the average number of machines allocated to a MapReduce job was 394, and the average completion time was six and a half minutes. The paper mentions that Google's indexing system processes more than 20 TB of raw data. Niall Kennedy calculates that the average MapReduce job runs across roughly $1 million of hardware infrastructure, assuming that Google still uses the same cluster configuration as in 2004: two 2 GHz Intel Xeon processors with Hyper-Threading enabled, 4 GB of memory, two 160 GB IDE hard drives and a gigabit Ethernet link. Greg Linden notes that Google's infrastructure is an important competitive advantage: "Anyone at Google can process terabytes of data. And they can get their results back in about 10 minutes, so they can iterate on it and try something else if they didn't get what they wanted the first time." It is interesting to compare this to Amazon EC2:

  • $0.40 Large Instance price per hour x 400 instances x 10 minutes = $26.7
  • 1 TB data transfer in at $0.10 per GB = $100
For a bit over a hundred bucks you could also process a TB of data!
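Taking the post's per-minute proration and the 2008 list prices at face value, the back-of-the-envelope arithmetic works out like this:

```python
# Reproducing the numbers above (2008 EC2 list prices, per-minute proration
# taken at face value; in practice EC2 billed by the instance-hour).
instances     = 400    # about the average number of machines per MapReduce job
hourly_rate   = 0.40   # $ per hour for an EC2 Large instance
job_minutes   = 10
transfer_gb   = 1000   # 1 TB of input data
transfer_rate = 0.10   # $ per GB transferred in

compute_cost = instances * hourly_rate * job_minutes / 60.0
transfer_cost = transfer_gb * transfer_rate
print("compute: $%.2f, transfer: $%.2f, total: $%.2f"
      % (compute_cost, transfer_cost, compute_cost + transfer_cost))
# compute: $26.67, transfer: $100.00, total: $126.67
```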
