Entries in cloud computing (40)

Thursday
Jun272019

2019 Open Source Database Report: Top Databases, Public Cloud vs. On-Premise, Polyglot Persistence

Ready to transition from a commercial database to open source, and want to know which databases are most popular in 2019? Wondering whether an on-premise vs. public cloud vs. hybrid cloud infrastructure is best for your database strategy? Or, considering adding a new database to your application and want to see which combinations are most popular? We found all the answers you need at the Percona Live event last month, and broke down the insights into the following free trends reports:

Click to read more ...

Tuesday
Apr162019

MySQL High Availability Framework Explained – Part III: Failover Scenarios

In this three-part blog series, we introduced a High Availability (HA) Framework for MySQL hosting in Part I, and discussed the details of MySQL semisynchronous replication in Part II. Now in Part III, we review how the framework handles some of the important MySQL failure scenarios and recovers to ensure high availability.
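As a quick refresher on the semisynchronous setup covered in Part II, the master- and slave-side plugins are enabled roughly as follows. This is a minimal sketch with illustrative values; the exact timeout ScaleGrid configures is not stated in this series:

# On the master: load the semisync plugin and require a slave ACK before commit
mysql -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so'"
mysql -e "SET GLOBAL rpl_semi_sync_master_enabled = 1"
# A very large timeout keeps the master blocking on ACKs rather than silently
# falling back to asynchronous replication (illustrative value, 24 hours in ms)
mysql -e "SET GLOBAL rpl_semi_sync_master_timeout = 86400000"

# On each slave: load and enable the slave-side plugin
mysql -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so'"
mysql -e "SET GLOBAL rpl_semi_sync_slave_enabled = 1"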

MySQL Failover Scenarios

Scenario 1 – Master MySQL Goes Down

  • The Corosync and Pacemaker framework detects that the master MySQL is no longer available. Pacemaker demotes the master resource and tries to recover with a restart of the MySQL service, if possible.
  • At this point, due to the semisynchronous nature of the replication, all transactions committed on the master have been received by at least one of the slaves.
  • Pacemaker waits until all the received transactions are applied on the slaves and lets the slaves report their promotion scores. The score calculation is done in such a way that the score is ‘0’ if a slave is completely in sync with the master, and is a negative number otherwise.
  • Pacemaker picks the slave that has reported a score of ‘0’ and promotes it; the promoted slave now assumes the role of master MySQL, on which writes are allowed.
  • After slave promotion, the Resource Agent triggers a DNS rerouting module. The module updates the proxy DNS entry with the IP address of the new master, thus redirecting all application writes to the new master.
  • Pacemaker also sets up the available slaves to start replicating from this new master.

Thus, whenever a master MySQL goes down (whether due to a MySQL crash, OS crash, system reboot, etc.), our HA framework detects it and promotes a suitable slave to take over the role of the master. This ensures that the system continues to be available to the applications.
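To make the mechanics above concrete, here is a minimal sketch of how such a promotable MySQL resource might be defined with pcs and the stock ocf:heartbeat:mysql resource agent. ScaleGrid’s actual resource agent and parameter values are not public, so the names and values below are illustrative only:

# Sketch: a promotable (master/slave) MySQL resource under Pacemaker control;
# the mysql resource agent requires notify=true on the clone for master mode
pcs resource create p_mysql ocf:heartbeat:mysql \
    binary=/usr/sbin/mysqld config=/etc/my.cnf \
    replication_user=repl replication_passwd=repl_password \
    op monitor interval=20s role=Master \
    op monitor interval=30s role=Slave \
    promotable notify=true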

Scenario 2 – Slave MySQL Goes Down

  • The Corosync and Pacemaker framework detects that the slave MySQL is no longer available.
  • Pacemaker tries to recover the resource by restarting MySQL on the node. If it comes up, it is added back to the current master as a slave and replication continues.
  • If recovery fails, Pacemaker reports that resource as down – based on which alerts or notifications can be generated. If necessary, the ScaleGrid support team will handle the recovery of this node.
  • In this case, there is no impact on the availability of MySQL services.

Scenario 3 – Network Partition – Network Connectivity Breaks Down Between Master and Slave Nodes

This is a classical problem in any distributed system where each node thinks the other nodes are down, while in reality, only the network communication between the nodes is broken. This scenario is more commonly known as a split-brain scenario, and if not handled properly, it can lead to more than one node claiming to be the master MySQL, which in turn leads to data inconsistencies and corruption.

Let’s use an example to review how our framework deals with split-brain scenarios in the cluster. We assume that due to network issues, the cluster has partitioned into two groups – master in one group and 2 slaves in the other group, and we will denote this as [(M), (S1,S2)].

  • Corosync detects that the master node is not able to communicate with the slave nodes, and the slave nodes can communicate with each other, but not with the master.
  • The master node will not be able to commit any transactions as the semisynchronous replication expects acknowledgement from at least one of the slaves before the master can commit. At the same time, Pacemaker shuts down MySQL on the master node due to lack of quorum based on the Pacemaker setting ‘no-quorum-policy = stop’. Quorum here means a majority of the nodes, or two out of three in a 3-node cluster setup. Since there is only one master node running in this partition of the cluster, the no-quorum-policy setting is triggered leading to the shutdown of the MySQL master.
  • Now, Pacemaker on the (S1, S2) partition detects that there is no master available in the cluster and initiates a promotion process. Assuming that S1 is up to date with the master (as guaranteed by semisynchronous replication), it is promoted as the new master.
  • Application traffic will be redirected to this new master MySQL node and the slave S2 will start replicating from the new master.

Thus, we see that the MySQL HA framework handles split-brain scenarios effectively, ensuring both data consistency and availability in the event the network connectivity breaks between master and slave nodes.
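The quorum behavior that drives Scenario 3 comes down to a single Pacemaker cluster property. A brief sketch using standard pcs and corosync tooling:

# Stop resources in any partition that has lost quorum, preventing split-brain
pcs property set no-quorum-policy=stop

# Inspect the quorum state; a 3-node cluster needs 2 votes for quorum
corosync-quorumtool -s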

This concludes our 3-part blog series on the MySQL High Availability (HA) framework using semisynchronous replication and the Corosync plus Pacemaker stack. At ScaleGrid, we offer highly available hosting for MySQL on AWS and MySQL on Azure that is implemented based on the concepts explained in this blog series. Please visit the ScaleGrid Console for a free trial of our solutions.

Wednesday
Apr032019

2019 PostgreSQL Trends Report: Private vs. Public Cloud, Migrations, Database Combinations & Top Reasons Used

PostgreSQL is an open source object-relational database system that has soared in popularity over the past 30 years thanks to its active, loyal, and growing community. For the 2nd year in a row, PostgreSQL has kept the title of the #1 fastest growing database in the world according to the DBMS of the Year report by the experts at DB-Engines. So what makes PostgreSQL so special, and how is it being used today? We found the answers at the Postgres Conference in March, where we surveyed PostgreSQL users, contributors, and SQL and NoSQL database administrators alike. In this free PostgreSQL Trends Report, we break down PostgreSQL hosting use across public cloud vs. private cloud vs. hybrid cloud, the most popular cloud providers, migration trends, database combinations with Postgres, and why PostgreSQL is preferred over popular RDBMS alternatives.

Private Cloud vs. Public Cloud vs. Hybrid Cloud

Click to read more ...

Tuesday
Feb192019

Intro to Redis Cluster Sharding – Advantages, Limitations, Deploying & Client Connections

Redis Cluster is the native sharding implementation available within Redis that allows you to automatically distribute your data across multiple nodes without having to rely on external tools and utilities. At ScaleGrid, we recently added support for Redis Clusters on our platform through our fully managed Redis hosting plans. In this post, we’re going to introduce you to Redis Cluster sharding, discuss its advantages and limitations, explain when you should deploy it, and show how to connect to your Redis Cluster.
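For a concrete sense of what deploying and connecting looks like, here is a minimal sketch using redis-cli (built-in cluster support requires Redis 5+; the host names and ports are placeholders):

# Create a 6-node cluster: 3 masters, each with 1 replica
redis-cli --cluster create host1:6379 host2:6379 host3:6379 \
    host4:6379 host5:6379 host6:6379 --cluster-replicas 1

# Verify that all 16384 hash slots are assigned
redis-cli --cluster check host1:6379

# Connect in cluster mode; -c makes the client follow MOVED/ASK redirects
redis-cli -c -h host1 -p 6379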

Sharding with Redis Cluster

Click to read more ...

Thursday
Jun052014

Cloud Architecture Revolution

The introduction of cloud technologies is not a simple evolution of existing ones, but a real revolution. Like all revolutions, it changes points of view and redefines meanings; nothing is as it was before. This post analyzes some key words and concepts usually used in traditional architectures, redefining them from the standpoint of the cloud. Understanding the meaning of these redefined words is crucial to grasping the essence of a pure cloud architecture.

“There is no greater impediment to the advancement of knowledge than the ambiguity of words.”
THOMAS REID, Essays on the Intellectual Powers of Man

Nowadays, we are required to challenge the limits of traditional architectures and go beyond the normal concepts of scalability to support millions of users (WhatsApp: 500 million), billions of transactions per day (Salesforce: 1.3 billion), and five nines of availability (AOL: 99.999%). I wish all of you the success of the examples cited above; do not think it is impossible to reach such mind-boggling numbers. Using cloud technology, anyone can create a service with a small investment and immediately have a world stage. If the service succeeds, its architecture must be able to scale appropriately.

Using the same design criteria, or simply moving the current configuration to the cloud, does not work and could reveal unpleasant surprises.

Infrastructure - commodity HW instead of high-end HW

Click to read more ...

Tuesday
Jan142014

SharePoint VPS solution

Microsoft SharePoint is an ideal solution for companies that have multiple offices and staff members who are on the move. Using SharePoint, documents and other materials can be easily shared with both colleagues and managers. Other features include advanced document management, which allows users to virtually check out a document, modify it or just read it, and then check the document in again. This allows managers and company owners to see exactly when their staff members are working and what they are doing. When combined with a highly customizable workflow management system and group calendars, SharePoint can improve the way your company functions and operates.

However, many organizations fail in their SharePoint implementations. With this article, we are trying to make it simpler for organizations’ in-house IT administrators to implement SharePoint in a virtual server environment.

Here we are going to cover the following key points:

Click to read more ...

Tuesday
Nov052013

10 Things You Should Know About AWS

Authored by Chris Fregly:  Former Netflix Streaming Platform Engineer, AWS Certified Solution Architect and Purveyor of fluxcapacitor.com.

Ahead of the upcoming 2nd annual re:Invent conference, inspired by Simone Brunozzi’s recent presentation at an AWS Meetup in San Francisco, and drawing on a few of my recent Fluxcapacitor.com consulting engagements, I’ve compiled a list of 10 useful time- and clock-tick-saving tips about AWS.

1) Query AWS resource metadata


Can’t remember the EBS-Optimized IO throughput of your c1.xlarge instance? How about the size limit of an S3 object on a single PUT? awsnow.info is the answer to all of your AWS-resource metadata questions. Interested in integrating awsnow.info with your application? You’re in luck. There’s now a REST API, as well!

Note:  These are default soft limits and will vary by account.

2) Tame your S3 buckets


Delete an entire S3 bucket with a single CLI command:  

aws s3 rb s3://<bucket-name> --force

Recursively copy a local directory to S3:

aws s3 cp <local-dir-name> s3://<bucket-name> --region <region-name> --recursive
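Since rb --force permanently deletes the bucket and everything in it, it can be worth previewing the damage first. The high-level s3 commands support a --dryrun flag (the bucket name is a placeholder):

# List the objects that would be deleted, without actually deleting them
aws s3 rm s3://<bucket-name> --recursive --dryrun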

3) Understand AWS cross-region dependencies

Click to read more ...

Monday
Apr082013

NuoDB's First Experience: Google Compute Engine - 1.8 Million Transactions Per Second

This is a repost of the blog entry written by NuoDB's Tommy Reilly.  

We at NuoDB were recently given the opportunity to kick the tires on Google Compute Engine by our friends over at Google. You can watch the entire Google Developer Live session by clicking here. To assess the capabilities of GCE, we decided to run the same YCSB-based benchmark we ran at our General Availability launch back in January. For those of you who missed it, we demonstrated the YCSB benchmark running on a 24-machine cluster on our private cloud in the NuoDB datacenter. The salient results were 1.7 million transactions per second with sub-millisecond latencies...
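For readers new to YCSB, a benchmark run is a two-phase load/run sequence driven from the command line. A minimal sketch using YCSB’s bundled ‘basic’ binding; the workload and parameter values are illustrative, not the ones NuoDB used:

# Phase 1: load the dataset defined by the workload file
bin/ycsb load basic -P workloads/workloada -p recordcount=1000000

# Phase 2: run the transaction mix and report throughput and latency
bin/ycsb run basic -P workloads/workloada -p operationcount=1000000 -threads 32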

Click to read more ...

Monday
Nov052012

Are we seeing the renaissance of enterprises in the cloud?

A series of recent surveys on the subject seems to indicate that this is indeed the case:

Research conducted by HP found that the majority of businesses in the EMEA region are planning to move their mission-critical apps to the cloud. Of the 940 respondents, 80 percent revealed plans to move mission-critical apps at some point over the next two to five years.

A more recent survey, by research firm MeriTalk and sponsored by VMware and EMC (NYSE:EMC), showed that one-third of respondents say they plan to move some mission-critical applications to the cloud in the next year. Within two years, the IT managers said they will move 26 percent of their mission-critical apps to the cloud, and in five years, they expect 44 percent of their mission-critical apps to run in the cloud.

The Challenge - How to Bring Hundreds of Enterprise Apps to the Cloud

The reality is that cloud economics only start making sense when there are real workloads utilizing the cloud infrastructure.

If the large majority of your apps fall outside of this category, then you’re not going to benefit much from the cloud. In fact, you’re probably going to lose money, rather than save money.

The Current Approach

  • Focus on building IaaS - The cloud strategies of many enterprises have centered on making the infrastructure cloud ready. This basically means ensuring that they are able to spawn machines more easily than before. A quick look at many initiatives of this nature shows that still only a small portion of enterprise applications run on such new systems.
  • Build a new PaaS - PaaS has been touted as the answer for running apps on the cloud. The reality, however, is that most of the existing PaaS solutions cater only to new apps, and quite often to the small, non-mission-critical share of our enterprise applications, which still leaves the majority of our enterprise workload outside of our cloud infrastructure.
  • App Migration as a One-Off Project - The other approach for migrating applications to the cloud has been to select a small group of applications, and then migrate these one by one to the cloud. Quite often the thought behind this approach has been that application migration is a one-off project. The reality is that applications are more of a living organism – things fail, are moved, or need to be added and removed over time. Therefore, it’s not enough to move apps to the cloud using some sort of virtualization technique; it’s critical that the way they’re run and maintained also fits the dynamic nature of the cloud.

Why is This not Going to Work?

Simple math shows that if you apply this model to the rest of your apps, it’s probably going to take years of effort to migrate them all to the cloud. The cost of doing so is going to be extremely high, not to mention the time-to-market issue, which can be an even greater risk in the end: if migration takes too long, it will reflect on the cost of operation, profit margins, and even the ability to survive in an extremely competitive market.

What's missing?

What we’re missing is a simple and systematic way to bring all these hundreds and thousands of apps to the cloud.

Moving Enterprise Workloads to the Cloud at a Massive Scale

Instead of thinking of cloud migration as a one-off thing, we need to think of cloud migration on a massive scale.

Thinking in such terms drives a fairly different approach.

In this post, I outlined what I believe should be the main principles for moving enterprise applications at such a scale.

Read full post: http://www.cloudifysource.org/2012/10/30/moving_enterprise_workloads_to_the_cloud_on_a_massive_scale.html

Wednesday
Aug222012

Cloud Deployment: It’s All About Cloud Automation

Many organizations are facing the challenge of migrating their IT to the cloud, but not many know how to actually approach this undertaking. In my previous post, I took a hands-on example of SpringSource’s PetClinic reference application with a Tomcat web server front-end and a Cassandra NoSQL database backend and showed how to onboard it to the cloud in a manageable fashion. But many think this methodology is only good for modern applications that were built with some dynamic/cloud orientation in mind. For example, how different would the cloud on-boarding process be if I modified my PetClinic example to use a MySQL relational database instead of the modern Cassandra NoSQL clustered database? In this blog post, I intend to show that cloud on-boarding of brownfield applications doesn’t have to be a huge monolithic migration project with high risk. Cloud on-boarding can take a pragmatic approach and can be performed as a gradual process that both mitigates the risk and enables you to enjoy the immediate benefits of automation and easier management of your application’s operational lifecycle even before moving to the cloud...

Click to read more ...