Entries in cloud (63)

Tuesday
Jul 25, 2017

7 Interesting Parallels Between the Invention of Tiny Satellites and Cloud Computing 

 

CubeSats are revolutionizing space exploration because they are small, modular, and inexpensive to build and launch. On embedded.fm episode 195: A BUNCH OF SPUTNIKS, Professor Jordi Puig-Suari gives a fascinating interview on the invention of the CubeSat.

What struck me in the interview is how closely the process by which the CubeSat was invented parallels how the cloud developed. Both followed a very similar path, driven by many of the same forces and ideas.

Just what is a CubeSat? It's a "type of miniaturized satellite for space research that is made up of multiples of 10×10×10 cm cubic units. CubeSats have a mass of no more than 1.33 kilograms per unit, and often use commercial off-the-shelf (COTS) components for their electronics and structure."

Parallel #1:  University as Startup Incubator

Click to read more ...

Wednesday
Apr 27, 2016

The Platform Advantage of Amazon, Facebook, and Google

Where’s the magic? [Amazon] The databasing and streaming and syncing infrastructure we build on is pretty slick, but that’s not the secret. The management tools are nifty, too; but that’s not it either. It’s the tribal knowledge: How to build Cloud infrastructure that works in a fallible, messy, unstable world.

Tim Bray, Senior Principal Engineer at Amazon, in Cloud Eventing

Ben Thompson makes the case in Apple's Organizational Crossroads and in a recent episode of Exponent that Apple has a services problem. Having reached peak iPhone, Apple naturally wants to turn to services as a way to expand revenue. The problem is that Apple has a mixed history of delivering services at scale, and Ben suggests that Apple's great strength, its functional organization, is a weakness when it comes to making services. The skill set you need to create great devices is not the skill set you need to create great services. He suggests: “Apple’s services need to be separated from the devices that are core to the company, and the managers of those services need to be held accountable via dollars and cents.”

If Apple has this problem, they are not the only ones. Only a few companies seem to have crossed the chasm of learning how to deliver a stream of new features at worldwide scale: Amazon, Facebook, and Google. And of these, Amazon is the clear winner.

This is the Amazon Web Services console. It shows the amazing number of services Amazon produces, and it doesn’t even include whole new platforms like the Echo:

Click to read more ...

Wednesday
Mar 30, 2016

Should Apple Build their Own Cloud?

This is one of the most interesting build or buy questions of all time: should Apple build their own cloud? Or should Apple concentrate on what they do best and buy cloud services from the likes of Amazon, Microsoft, and Google?

It’s a decision a lot of companies have to make, just a lot bigger, and, because it’s Apple, more fraught with an underlying need to make a big deal out of it.

This build or buy question was raised and thoroughly discussed across two episodes of the Exponent podcast, Low Hanging Fruit and Pickaxe Retailers, with hosts Ben Thompson and James Allworth, who regularly talk about business strategy with an emphasis on tech. A great podcast, highly recommended. There’s occasional wit and much wisdom.

Dark Clouds Over Apple’s Infrastructure Efforts

Click to read more ...

Wednesday
Jan 6, 2016

Let's Donate Our Organs and Unused Cloud Cycles to Science

There’s a long history of donating spare compute cycles for worthy causes. Most of those efforts were started in the Desktop Age. Now, in the Cloud Age, how can we donate spare compute capacity? How about through a private spot market?

There are cycles to spare. Public Cloud Usage trends:

  • Instances are underutilized, with average utilization rates of 8-9%

  • 24% of instance reservations are unused

Maybe all that CapEx sunk into Reserved Instances can be put to some use? Maybe over-provisioned instances could be added to the resource pool as well? That’s a lot of power, Captain. How could it be put to good use?

There is a need to crunch data. For science. Here’s a great example, as described in This is how you count all the trees on Earth. The idea is simple: from satellite pictures, count the number of trees. It’s an embarrassingly parallel problem, perfect for the cloud. NASA had a problem. Their cloud is embarrassingly tiny: 400 hypervisors shared amongst many projects. Analyzing all the data would take 10 months. An unthinkable amount of time in this Real-time Age. So they used the spot market on AWS.
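To make the mechanics concrete, here is a minimal sketch of how such an embarrassingly parallel run might bid for spot capacity with boto3. Everything below is an assumption for illustration only: the region, AMI, instance type, count, bid price, and key pair are placeholders, not details from the NASA project.

    import boto3

    # Bid for a batch of spot instances to fan out an embarrassingly
    # parallel job. Each worker is assumed to pull its slice of work
    # (e.g., one satellite tile) from a queue baked into the image.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.request_spot_instances(
        SpotPrice="0.10",      # max hourly bid in USD (placeholder)
        InstanceCount=100,     # one instance per chunk of tiles
        Type="one-time",       # let instances die when the work is done
        LaunchSpecification={
            "ImageId": "ami-12345678",    # hypothetical worker image
            "InstanceType": "c4.xlarge",  # placeholder type
            "KeyName": "science-key",     # hypothetical key pair
        },
    )

    for req in response["SpotInstanceRequests"]:
        print(req["SpotInstanceRequestId"], req["State"])

If the market price stays under the bid, the fleet runs at a fraction of on-demand cost; if it spikes, instances are reclaimed, which is exactly why this fits embarrassingly parallel, restartable work.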

The upshot? The test run cost a measly $80, which means that NASA can process data collected for an entire UTM zone for just $250. The cost for all 11 UTM zones in sub-Saharan Africa, using all four satellites, comes in at just $11,000 (11 zones × 4 satellites × $250 per zone-satellite run).

“We have turned what was a $200,000 job into a $10,000 job and we went from 100 days to 10 days [to complete],” said Hoot. “That is something scientists can build easily into their budget proposals.”

That last quote, That is something scientists can build easily into their budget proposals, stuck in my craw.

Imagine how much science could get done if the budget proposal process weren’t slowing down the future. Especially when we know there are so many free cycles available that are already attached to well supported data processing pipelines. How could those cycles be freed up to serve a higher purpose?

Netflix shows the way with their internal spot market. Netflix has so many cloud resources at their disposal, a pool of 12,000 unused reserved instances at peak times, that they created their own internal spot market to drive better utilization. The whole beautiful setup is described in Creating Your Own EC2 Spot Market, Creating Your Own EC2 Spot Market -- Part 2, and High Quality Video Encoding at Scale.

The win: By leveraging the internal spot market Netflix measured the equivalent of a 210% increase in encoding capacity.
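The core idea is simple enough to sketch. This is an illustration of the concept only, not Netflix's actual implementation (that is covered in the linked posts): periodically compare reserved capacity against what is actually running, and advertise the surplus as a borrowable pool that batch work like video encoding can drain.

    from dataclasses import dataclass

    @dataclass
    class Capacity:
        instance_type: str
        reserved: int   # reserved instances purchased
        running: int    # instances currently in use by their owners

    def internal_spot_pool(fleet):
        """Unused reservations, by instance type, available to borrow."""
        return {c.instance_type: c.reserved - c.running
                for c in fleet if c.reserved > c.running}

    # Hypothetical fleet numbers, purely for illustration.
    fleet = [Capacity("r3.4xlarge", reserved=500, running=350),
             Capacity("m2.4xlarge", reserved=200, running=210)]
    print(internal_spot_pool(fleet))  # {'r3.4xlarge': 150}

The hard part Netflix describes is the borrow-and-return protocol: borrowed capacity has to be handed back quickly when the owning services scale up toward peak traffic.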

Netflix has a long and glorious history of sharing and open sourcing their tools. It seems likely that once they perfect their spot market infrastructure, it could be made generally available.

Perhaps the Netflix spot market could be extended so unused resources across the Clouds could advertise themselves for automatic integration into a spot market usable by scientists to crunch data and solve important world problems.

Perhaps donated cycles could even be charitable contributions that could help offset the cost of the resource? My wife is a tax accountant and she says this is actually true, under the right circumstances.

This kind of idea has a long history with me. When AWS first started, I, like a lot of people, wondered: how can I make money off this gold rush? That was before we knew Amazon was going to make most of the tools to sell to the miners themselves. The idea of exploiting underutilized resources fascinated me for some reason. That is, after all, what VMs do for physical hardware: exploit the underutilized resources of powerful machines. And it is in some ways the idea behind our modern economy. Yet even today software architectures aren’t such that we reach anything close to full utilization of our hardware resources.

What I wanted to do was create a memcached system that allowed developers to sell their unused memory capacity (and later CPU, network, and storage) to other developers as cheap dynamic pools of memcached storage. Developers could get their cache dirt cheap and make some money back on underused resources. A very similar idea to the spot market notion. But without homomorphic encryption the security issues were daunting, even assuming Amazon would allow it. With the advent of the Container Age, sharing a VM is now far more secure, and Amazon shouldn’t have a problem with the idea if it’s for science. I hope.
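For flavor, here is roughly what the consumer side of that old idea could look like, sketched with pymemcache's consistent-hashing client. The list of donated nodes is entirely hypothetical; in reality a registry would have to discover donated capacity, meter usage, and enforce isolation between tenants, which is exactly where the hard problems live.

    from pymemcache.client.hash import HashClient

    # Hypothetical: addresses of memcached nodes donated by other
    # developers, as returned by some (nonexistent) registry service.
    donated_nodes = [("203.0.113.10", 11211), ("203.0.113.11", 11211)]

    # HashClient consistently hashes keys across the pool, so donated
    # nodes can join and leave with minimal cache churn.
    cache = HashClient(donated_nodes)
    cache.set("user:42:profile", b"serialized profile", expire=300)
    print(cache.get("user:42:profile"))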

Monday
Mar 3, 2014

The “Four Hamiltons” Framework for Mitigating Faults in the Cloud: Avoid it, Mask it, Bound it, Fix it Fast

This is a guest post by Patrick Eaton, Software Engineer and Distributed Systems Architect at Stackdriver.

Stackdriver provides intelligent monitoring-as-a-service for cloud hosted applications.  Behind this easy-to-use service is a large distributed system for collecting and storing metrics and events, monitoring and alerting on them, analyzing them, and serving up all the results in a web UI.  Because we ourselves run in the cloud (mostly on AWS), we spend a lot of time thinking about how to deal with faults in the cloud.  We have developed a framework for thinking about fault mitigation for large, cloud-hosted systems.  We endearingly call this framework the “Four Hamiltons” because it is inspired by an article from James Hamilton, the Vice President and Distinguished Engineer at Amazon Web Services.

The article that led to this framework is called “The Power Failure Seen Around the World.”  Hamilton analyzes the causes of the power outage that affected Super Bowl XLVII in early 2013.  In the article, Hamilton writes:

As when looking at any system faults, the tools we have to mitigate the impact are: 1) avoid the fault entirely, 2) protect against the fault with redundancy, 3) minimize the impact of the fault through small fault zones, and 4) minimize the impact through fast recovery.

The mitigation options are roughly ordered by increasing impact to the customer.  In this article, we will refer to these strategies, in order, as “Avoid it”, “Mask it”, “Bound it”, and “Fix it fast”...
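As an illustration of how three of the four strategies can show up in ordinary service code, consider the sketch below (mine, not Stackdriver's; "Avoid it" is mostly about careful engineering and change management, so it does not appear in a read path). Each key is homed in one small fault zone, replica redundancy masks single failures, and a dead zone fails fast instead of hanging.

    class Replica:
        """Stand-in for a storage client; can be marked down for the demo."""
        def __init__(self, data, healthy=True):
            self.data, self.healthy = data, healthy

        def get(self, key):
            if not self.healthy:
                raise ConnectionError("replica down")
            return self.data[key]

    class ZoneDown(Exception):
        """Raised when every replica in one fault zone has failed."""

    def read(key, zones):
        # Bound it: each key lives in a single small fault zone, so a
        # zone-wide failure only affects the keys homed there.
        names = sorted(zones)
        home = names[hash(key) % len(names)]
        # Mask it: redundant replicas within the zone hide single faults.
        for replica in zones[home]:
            try:
                return replica.get(key)
            except ConnectionError:
                continue
        # Fix it fast: fail loudly and immediately rather than hanging,
        # so alerting and recovery can start right away.
        raise ZoneDown(f"all replicas in {home} failed for {key!r}")

    zones = {
        "zone-a": [Replica({"answer": 42}, healthy=False),  # failed
                   Replica({"answer": 42})],                # healthy twin
        "zone-b": [Replica({"answer": 42})],
    }
    print(read("answer", zones))  # masks the dead replica, prints 42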

Click to read more ...

Tuesday
Jan 14, 2014

SharePoint VPS solution

Microsoft SharePoint is an ideal solution for companies that have multiple offices and staff members who are on the move. Using SharePoint, documents and other materials can be easily shared with both colleagues and managers. Other features include advanced document management, which allows users to virtually check out a document, modify it or just read it, and then check the document in again. This allows managers and company owners to see exactly when their staff members are working and what they are doing. When combined with a highly customizable workflow management system and group calendars, SharePoint can improve the way in which your company functions and operates.
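As a rough illustration of that check-out/check-in cycle, here is a sketch against SharePoint's REST API from Python. The site URL, file path, and credentials are placeholders, and on-premises NTLM authentication is assumed; your environment may use different auth entirely.

    import requests
    from requests_ntlm import HttpNtlmAuth  # assumes NTLM is enabled

    SITE = "https://sharepoint.example.com/sites/team"  # hypothetical
    DOC = "/sites/team/Shared Documents/report.docx"    # hypothetical
    auth = HttpNtlmAuth("DOMAIN\\user", "password")
    headers = {"Accept": "application/json;odata=verbose"}

    # SharePoint requires a form digest token on every POST.
    info = requests.post(f"{SITE}/_api/contextinfo",
                         auth=auth, headers=headers)
    headers["X-RequestDigest"] = (
        info.json()["d"]["GetContextWebInformation"]["FormDigestValue"])

    # Check the document out so no one else can modify it...
    requests.post(
        f"{SITE}/_api/web/GetFileByServerRelativeUrl('{DOC}')/CheckOut()",
        auth=auth, headers=headers)

    # ...and check it back in when done (checkintype=0 = minor version).
    requests.post(
        f"{SITE}/_api/web/GetFileByServerRelativeUrl('{DOC}')"
        "/CheckIn(comment='updated figures',checkintype=0)",
        auth=auth, headers=headers)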

However, many organizations fail in their SharePoint implementations. So with this article, we are trying to make it simpler for organizations' in-house IT administrators to implement SharePoint in a virtual server environment.

Here we are going to look at the following key points:

Click to read more ...

Wednesday
Jan 8, 2014

Under Snowden's Light Software Architecture Choices Become Murky

Adrian Cockcroft on the future of Cloud, Open Source, SaaS and the End of Enterprise Computing:

Most big enterprise companies are actively working on their AWS rollout now. Most of them are also trying to get an in-house cloud to work, with varying amounts of success, but even the best private clouds are still years behind the feature set of public clouds, which has a big impact on the agility and speed of product development

While the Snowden revelations have tattered the thin veil of trust secreting Big Brother from We the People, they may also be driving a fascinating new tension in architecture choices between Cloud Native (scale-out, IaaS), Amazon Native (rich service dependencies), and Enterprise Native (raw hardware, scale-up).

This tension became evident in a recent interview with HipChat, makers of an AWS-based SaaS chat product, who were busy creating an on-premises version of their product that could operate behind the firewall in enterprise datacenters. This is consistent with other products from Atlassian, which are offered both as hosted services and as installable software, but it is also an indication of customer concerns over privacy and security.

The result is a potential shattering of backend architectures into many fragments, like we’ve seen on the front-end. On the front-end you can develop for iOS, Android, HTML5, Windows, OSX, and so on. Any choice you make is like declaring for a Cold War power in a winner-take-all battle for your development resources. Unifying this mess has been the refuge of cloud-based services over HTTP. Now that safe place is threatened.

To see why...

Click to read more ...

Wednesday
Jun 26, 2013

Leveraging Cloud Computing at Yelp - 102 Million Monthly Visitors and 39 Million Reviews

This is a guest post by Yelp's Jim Blomo. Jim manages a growing data mining team that uses Hadoop, mrjob, and oddjob to process TBs of data. Before Yelp, he built infrastructure for startups and Amazon. Check out his upcoming talk at OSCON 2013 on Building a Cloud Culture at Yelp.

In Q1 2013, Yelp had 102 million unique visitors (source: Google Analytics) including approximately 10 million unique mobile devices using the Yelp app on a monthly average basis. Yelpers have written more than 39 million rich, local reviews, making Yelp the leading local guide on everything from boutiques and mechanics to restaurants and dentists. With respect to data, one of the most unique things about Yelp is the variety of data: reviews, user profiles, business descriptions, menus, check-ins, food photos... the list goes on.  We have many ways to deal with data, but today I’ll focus on how we handle offline data processing and analytics.

In late 2009, Yelp investigated using Amazon’s Elastic MapReduce (EMR) as an alternative to an in-house cluster built from spare computers.  By mid 2010, we had moved production processing completely to EMR and turned off our Hadoop cluster.  Today we run over 500 jobs a day, from integration tests to advertising metrics.  We’ve learned a few lessons along the way that can hopefully benefit you as well.
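For a sense of what these jobs look like, here is a minimal mrjob job in the style Yelp describes. This is my sketch, not one of Yelp's actual 500 daily jobs, and the tab-separated input format is an assumption.

    from mrjob.job import MRJob

    class MRReviewsPerBusiness(MRJob):
        """Count reviews per business from hypothetical input lines
        of the form: business_id<TAB>review_text."""

        def mapper(self, _, line):
            business_id, _review = line.split("\t", 1)
            yield business_id, 1

        def reducer(self, business_id, counts):
            yield business_id, sum(counts)

    if __name__ == "__main__":
        MRReviewsPerBusiness.run()

The same script runs locally with python reviews.py input.tsv or on Elastic MapReduce with python reviews.py -r emr input.tsv, which is much of mrjob's appeal: develop against a sample on your laptop, then point the exact same job at a pay-as-you-go cluster.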

Job Flow Pooling

Click to read more ...

Monday
Nov 5, 2012

Are we seeing the renaissance of enterprises in the cloud?

A series of recent surveys on the subject seems to indicate that this is indeed the case:

Research conducted by HP found that the majority of businesses in the EMEA region are planning to move their mission-critical apps to the cloud. Of the 940 respondents, 80 percent revealed plans to move mission-critical apps at some point over the next two to five years.

A more recent survey, by research firm MeriTalk and sponsored by VMware and EMC (NYSE:EMC), showed that one-third of respondents say they plan to move some mission-critical applications to the cloud in the next year. Within two years, the IT managers said they will move 26 percent of their mission-critical apps to the cloud, and in five years, they expect 44 percent of their mission-critical apps to run in the cloud.

The Challenge - How to Bring Hundreds of Enterprise Apps to the Cloud

The reality is that cloud economics only start making sense when there are real workloads utilizing the cloud infrastructure.

If the large majority of your apps fall outside of this category, then you’re not going to benefit much from the cloud. In fact, you’re probably going to lose money, rather than save money.

The Current Approach

  • Focus on building IaaS - The current cloud strategies of many enterprises have been centered on making the infrastructure cloud-ready. This basically means ensuring that they are able to spawn machines more easily than before. A quick look at many initiatives of this nature shows that there is still only a small portion of enterprises whose applications run on such new systems.
  • Build a new PaaS - PaaS has been touted as the answer for running apps on the cloud. The reality, however, is that most of the existing PaaS solutions only cater to new apps, and quite often to the small and “non” mission-critical share of our enterprise applications, which still leaves the majority of our enterprise workload outside of our cloud infrastructure.
  • App Migration as a One-Off Project - The other approach for migrating applications to the cloud has been to select a small group of applications and then migrate them one by one. Quite often the thought behind this approach has been that application migration is a one-off project. The reality is that applications are more like living organisms: things fail, are moved, or need to be added and removed over time. Therefore it’s not enough to move apps to the cloud using some sort of virtualization technique; it’s critical that the way they’re run and maintained also fits the dynamic nature of the cloud.

Why is This not Going to Work?

Simple math shows that if you apply this model to the rest of your apps, it’s probably going to take years of effort to migrate them all to the cloud. The cost of doing so is going to be extremely high, not to mention the time-to-market issue, which can be an even greater risk in the end: if migration takes too long, it will reflect on the cost of operations, profit margins, and even the ability to survive in an extremely competitive market.

What's missing?

What we’re missing is a simple and systematic way to bring all these hundreds and thousands of apps to the cloud.

Moving Enterprise Workloads to the Cloud at a Massive Scale

Instead of thinking of cloud migration as a one-off thing, we need to think of cloud migration on a massive scale.

Thinking in such terms drives a fairly different approach.

In this post, I outlined what I believe should be the main principles for moving enterprise applications at such a scale.

Read full post: http://www.cloudifysource.org/2012/10/30/moving_enterprise_workloads_to_the_cloud_on_a_massive_scale.html

Wednesday
Oct 17, 2012

World of Warcraft's Lead Designer Rob Pardo on the Role of the Cloud in Games

In a really far-ranging and insightful interview by Steve Peterson, Game Industry Legends: Rob Pardo, where the future of gaming is discussed, there was a section on how the cloud might be used in games. I know there are a lot of game developers in my audience, so I thought it might be useful:

Q. If the game is free-to-play but I have to download 10 gigabytes to try it out, that can keep me from trying it. That's part of what cloud gaming is trying to overcome; do you think cloud gaming is going to make some inroads because of those technical issues?

Click to read more ...