Entries in Monitoring (14)

Wednesday
Jan 22, 2020

Follower Clusters – 3 Major Use Cases for Syncing SQL & NoSQL Deployments

Follower clusters are a ScaleGrid feature that allows you to keep two independent database systems (of the same type) in sync. Unlike cloning or replication, this allows you to maintain an active, point-in-time copy of your production data. This extra cluster, known as a follower cluster, can be leveraged for multiple use cases, including analyzing, optimizing, and testing your application performance for MongoDB, MySQL and PostgreSQL. In this blog post, we will cover the top three scenarios for leveraging follower clusters in your application.

How Do Follower Clusters Differ From Replication?

Unlike a static clone, the data is imported on a set schedule, so your follower cluster is always in sync with your production cluster. Here are a few critical ways in which it differs from replication:

Click to read more ...

Tuesday
Apr 16, 2019

MySQL High Availability Framework Explained – Part III: Failover Scenarios

In this three-part blog series, we introduced a High Availability (HA) Framework for MySQL hosting in Part I, and discussed the details of MySQL semisynchronous replication in Part II. Now in Part III, we review how the framework handles some of the important MySQL failure scenarios and recovers to ensure high availability.

MySQL Failover Scenarios

Scenario 1 – Master MySQL Goes Down

  • The Corosync and Pacemaker framework detects that the master MySQL is no longer available. Pacemaker demotes the master resource and tries to recover with a restart of the MySQL service, if possible.
  • At this point, due to the semisynchronous nature of the replication, all transactions committed on the master have been received by at least one of the slaves.
  • Pacemaker waits until all the received transactions are applied on the slaves and lets the slaves report their promotion scores. The score calculation is done in such a way that the score is ‘0’ if a slave is completely in sync with the master, and is a negative number otherwise.
  • Pacemaker picks the slave that reported a score of 0 and promotes it. That slave now assumes the role of the master MySQL node, and writes are allowed on it.
  • After the slave promotion, the Resource Agent triggers a DNS rerouting module. The module updates the proxy DNS entry with the IP address of the new master, so all application writes are redirected to the new master.
  • Pacemaker also sets up the available slaves to start replicating from this new master.

Thus, whenever a master MySQL goes down (whether due to a MySQL crash, OS crash, system reboot, etc.), our HA framework detects it and promotes a suitable slave to take over the role of the master. This ensures that the system continues to be available to the applications.
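To make the promotion-score rule in the steps above concrete, here is a minimal, hypothetical sketch of how such a score could be derived from replication positions. The reduction of GTID sets to simple sequence numbers, the function names, and the scoring scale are illustrative assumptions, not ScaleGrid's actual Resource Agent code; the only point is that a fully caught-up slave scores 0 and anything behind scores negative.

```python
# Hypothetical illustration of a promotion score: 0 when a slave has applied
# everything it received from the failed master, negative otherwise.
# The inputs (GTID sets reduced to sequence numbers) are assumptions made
# to keep the sketch self-contained.

def promotion_score(received_seq: int, applied_seq: int) -> int:
    """Return 0 if the slave is fully caught up, else a negative score
    proportional to how far behind it is."""
    lag = received_seq - applied_seq
    return 0 if lag <= 0 else -lag

def pick_new_master(slaves: dict) -> str:
    """Pick the slave with the highest (least negative) score; a score of 0
    means it can be promoted without losing acknowledged transactions."""
    return max(slaves, key=lambda name: promotion_score(*slaves[name]))

# Example: S1 has applied all 120 received transactions, S2 is 3 behind.
slaves = {"S1": (120, 120), "S2": (120, 117)}
print(pick_new_master(slaves))  # -> "S1"
```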

Scenario 2 – Slave MySQL Goes Down

  • The Corosync and Pacemaker framework detects that the slave MySQL is no longer available.
  • Pacemaker tries to recover the resource by restarting MySQL on the node. If the node comes up, it is re-attached to the current master as a slave and replication continues (a minimal sketch of this re-attach step follows this list).
  • If recovery fails, Pacemaker reports that resource as down – based on which alerts or notifications can be generated. If necessary, the ScaleGrid support team will handle the recovery of this node.
  • In this case, there is no impact on the availability of MySQL services.
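As a rough illustration of the re-attach step, the sketch below issues the standard MySQL replication statements against the recovered node. It assumes GTID-based replication (MASTER_AUTO_POSITION), the mysql-connector-python driver, and placeholder hostnames and credentials; in the real framework this is automated by the Pacemaker resource agent, so the sketch only shows the underlying statements.

```python
# Hedged sketch: point a recovered MySQL node at the current master and
# resume replication. Hostnames, credentials and GTID auto-positioning are
# assumptions for illustration only.
import mysql.connector  # assumes mysql-connector-python is installed

def rejoin_as_slave(node_host, master_host, repl_user, repl_password):
    # Connect to the recovered node with placeholder admin credentials.
    conn = mysql.connector.connect(host=node_host, user="admin", password="admin-password")
    cur = conn.cursor()
    cur.execute("STOP SLAVE")
    cur.execute(
        "CHANGE MASTER TO MASTER_HOST=%s, MASTER_USER=%s, "
        "MASTER_PASSWORD=%s, MASTER_AUTO_POSITION=1",
        (master_host, repl_user, repl_password),
    )
    cur.execute("START SLAVE")
    conn.close()

rejoin_as_slave("mysql-node-2", "mysql-node-1", "repl", "repl-password")
```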

Scenario 3 – Network Partition – Network Connectivity Breaks Down Between Master and Slave Nodes

This is a classical problem in any distributed system where each node thinks the other nodes are down, while in reality only the network communication between the nodes is broken. This scenario is more commonly known as a split-brain scenario, and if not handled properly, it can lead to more than one node claiming to be the master MySQL, which in turn leads to data inconsistencies and corruption.

Let’s use an example to review how our framework deals with split-brain scenarios in the cluster. We assume that due to network issues, the cluster has partitioned into two groups – master in one group and 2 slaves in the other group, and we will denote this as [(M), (S1,S2)].

  • Corosync detects that the master node is not able to communicate with the slave nodes, and the slave nodes can communicate with each other, but not with the master.
  • The master node will not be able to commit any transactions as the semisynchronous replication expects acknowledgement from at least one of the slaves before the master can commit. At the same time, Pacemaker shuts down MySQL on the master node due to lack of quorum based on the Pacemaker setting ‘no-quorum-policy = stop’. Quorum here means a majority of the nodes, or two out of three in a 3-node cluster setup. Since there is only one master node running in this partition of the cluster, the no-quorum-policy setting is triggered leading to the shutdown of the MySQL master.
  • Now, Pacemaker on the (S1, S2) partition detects that there is no master available in the cluster and initiates a promotion process. Assuming that S1 is up to date with the master (as guaranteed by semisynchronous replication), it is promoted as the new master.
  • Application traffic will be redirected to this new master MySQL node and the slave S2 will start replicating from the new master.

Thus, we see that the MySQL HA framework handles split-brain scenarios effectively, ensuring both data consistency and availability in the event the network connectivity breaks between master and slave nodes.
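To make the quorum arithmetic above concrete, here is a tiny, hypothetical sketch of the majority check that the 'no-quorum-policy = stop' behavior relies on. The function and variable names are illustrative assumptions, not Pacemaker internals.

```python
# Illustrative majority-quorum check for a 3-node cluster partitioned as
# [(M), (S1, S2)]. Names and values are assumptions for the sketch.

def has_quorum(reachable_nodes: int, cluster_size: int) -> bool:
    """A partition has quorum only if it can see a strict majority of nodes."""
    return reachable_nodes > cluster_size // 2

CLUSTER_SIZE = 3

# The master's partition sees only itself: no quorum, so with
# no-quorum-policy=stop Pacemaker stops MySQL there.
print(has_quorum(1, CLUSTER_SIZE))  # False -> stop local resources

# The slave partition sees two nodes: quorum, so it may promote a slave.
print(has_quorum(2, CLUSTER_SIZE))  # True -> safe to elect a new master
```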

This concludes our 3-part blog series on the MySQL High Availability (HA) framework using semisynchronous replication and the Corosync plus Pacemaker stack. At ScaleGrid, we offer highly available hosting for MySQL on AWS and MySQL on Azure that is implemented based on the concepts explained in this blog series. Please visit the ScaleGrid Console for a free trial of our solutions.

Wednesday
Apr 3, 2019

2019 PostgreSQL Trends Report: Private vs. Public Cloud, Migrations, Database Combinations & Top Reasons Used

PostgreSQL is an open source object-relational database system whose popularity has soared over the past 30 years, driven by its active, loyal, and growing community. For the 2nd year in a row, PostgreSQL has kept the title of #1 fastest-growing database in the world according to the DBMS of the Year report by the experts at DB-Engines. So what makes PostgreSQL so special, and how is it being used today? We found the answers at the Postgres Conference in March, where we surveyed PostgreSQL users, contributors, and SQL and NoSQL database administrators alike. In this free PostgreSQL Trends Report, we break down PostgreSQL hosting use across public cloud vs. private cloud vs. hybrid cloud, the most popular cloud providers, migration trends, database combinations with Postgres, and why PostgreSQL is preferred over popular RDBMS alternatives.

Private Cloud vs. Public Cloud vs. Hybrid Cloud

Click to read more ...

Monday
Mar 3, 2014

The “Four Hamiltons” Framework for Mitigating Faults in the Cloud: Avoid it, Mask it, Bound it, Fix it Fast

This is a guest post by Patrick Eaton, Software Engineer and Distributed Systems Architect at Stackdriver.

Stackdriver provides intelligent monitoring-as-a-service for cloud hosted applications.  Behind this easy-to-use service is a large distributed system for collecting and storing metrics and events, monitoring and alerting on them, analyzing them, and serving up all the results in a web UI.  Because we ourselves run in the cloud (mostly on AWS), we spend a lot of time thinking about how to deal with faults in the cloud.  We have developed a framework for thinking about fault mitigation for large, cloud-hosted systems.  We endearingly call this framework the “Four Hamiltons” because it is inspired by an article from James Hamilton, the Vice President and Distinguished Engineer at Amazon Web Services.

The article that led to this framework is called “The Power Failure Seen Around the World”, in which Hamilton analyzes the causes of the power outage that affected Super Bowl XLVII in early 2013. In the article, Hamilton writes:

As when looking at any system faults, the tools we have to mitigate the impact are: 1) avoid the fault entirely, 2) protect against the fault with redundancy, 3) minimize the impact of the fault through small fault zones, and 4) minimize the impact through fast recovery.

The mitigation options are roughly ordered by increasing impact to the customer.  In this article, we will refer to these strategies, in order, as “Avoid it”, “Mask it”, “Bound it”, and “Fix it fast”...

Click to read more ...

Tuesday
Feb 19, 2013

Puppet monitoring: how to monitor the success or failure of Puppet runs  

This is a guest post by LogicMonitor's Director of Tech Ops, Jesse Aukeman, about the different ways they're monitoring the success or failure of Puppet runs.

If you are like us, you are running some type of Linux configuration management tool. The value of centralized configuration and deployment is well known and hard to overstate. Puppet is our tool of choice. It is powerful and works well for us, except when things don't go as planned. Failures of Puppet can be innocuous and cosmetic, or they can cause production issues, for example when crucial updates do not get properly propagated.

Why?

In the most innocuous cases, the Puppet agent craps out (we run the Puppet agent via cron). As nice as Puppet is, we still need to goose it from time to time to get past some sort of network or host resource issue. A more dangerous case is when an administrator temporarily disables Puppet runs on a host in order to perform some test or administrative task and then forgets to re-enable them. In either case, it's easy to see how a host may stop receiving new Puppet updates. The danger here is that this may not be noticed until that crucial update doesn't get pushed, production is impacted, and it's the client who notices.
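One simple way to catch a silently stalled agent is to alert when the agent's last-run state file has not been updated recently. The sketch below is a minimal illustration under stated assumptions, not LogicMonitor's implementation: the state-file path varies by Puppet version, and the 2-hour threshold is arbitrary.

```python
# Hedged sketch: flag a host whose Puppet agent has not completed a run
# recently, using the mtime of the agent's last-run summary file.
# The path and the 2-hour threshold are assumptions for illustration.
import os
import sys
import time

LAST_RUN_SUMMARY = "/var/lib/puppet/state/last_run_summary.yaml"  # older Puppet layout
MAX_AGE_SECONDS = 2 * 60 * 60  # alert if no run has finished in the last 2 hours

def puppet_run_age():
    """Seconds since the agent last wrote its run summary."""
    return time.time() - os.path.getmtime(LAST_RUN_SUMMARY)

if __name__ == "__main__":
    try:
        age = puppet_run_age()
    except OSError:
        print("CRITICAL: no Puppet last-run summary found")
        sys.exit(2)
    if age > MAX_AGE_SECONDS:
        print("WARNING: last Puppet run was %.0f minutes ago" % (age / 60))
        sys.exit(1)
    print("OK: Puppet ran %.0f minutes ago" % (age / 60))
```

A check like this can run from cron or be wrapped as a data source in whatever monitoring system you already use.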

How to implement monitoring?

Click to read more ...

Wednesday
May 11, 2011

Troubleshooting response time problems – why you cannot trust your system metrics

Production Monitoring is about ensuring the stability and health of our system, and that also includes the application. A lot of times we encounter production systems that concentrate on System Monitoring, under the assumption that a stable system leads to stable and healthy applications. So let’s see what System Monitoring can tell us about our Application.

Let’s take a very simple two tier Web Application:

A simple two tier web application 

This is a simple multi-tier eCommerce solution. Users are concerned about bad performance when they do a search. Let's see what we can find out about it if performance is not satisfactory. We start by looking at a couple of simple metrics.

CPU Utilization

Click to read more ...

Thursday
Aug 13, 2009

Reconnoiter - Large-Scale Trending and Fault-Detection

One of the top recommendations from the collective wisdom contained in Real Life Architectures is to add monitoring to your system. Now! Loud is the lament for not adding monitoring early and often. The reason is easy to understand. Without monitoring you don't know what your system is doing which means you can't fix it and you can't improve it. Feedback loops require data.

Some popular monitoring options are Munin, Nagios, Cacti and Hyperic. A relatively new entrant is a product called Reconnoiter from Theo Schlossnagle, President and CEO of OmniTI, leading consultants on solving problems of scalability, performance, architecture, infrastructure, and data management. Theo's name might sound familiar. He gives lots of talks and is the author of the very influential Scalable Internet Architectures book.

So right away you know Reconnoiter has a good pedigree. As Theo says, their products are born of pain, from the fire of solving real-life problems and that's always a harbinger of good things to come.

The problem Reconnoiter is trying to solve is monitoring thousands of nodes across many datacenters where the nodes can vary widely in power, architecture, and software configuration. With that kind of problem what they really want is the ability to:

 

  • Configure everything from one place.
  • Run cheap checks on a specified time interval that aren't late and don't put a heavy load on the machine.
  • Change the configuration from any datacenter without coordination.
  • Add checks in the field.
  • Separate data collection from visualization and fault-detection.
  • Analyze trends for long-term capacity planning and postmortem analysis.
  • Detect when faults have happened and when they are about to happen.
  • Support trending: intelligent data correlation, regression analysis/curve fitting, and looking into the past to see how you got where you are now so you can do better next time.
  • Create a monitoring system that doesn't require a separate powerful network and its own set of hosts on which to run.

If you've ever used or written a distributed stats collection system, the architecture of Reconnoiter will look somewhat familiar.

Some of the more interesting bits of the architecture are:
  • PostgreSQL stores all the data. The data isn't stuck in funky little files.
  • Fault-detection is based on Esper, a streaming complex event processing system. It's not clear how well this approach will work but the hooks are there.
  • A Comet-style web server is used to feed real-time updates. Much better than your traditional polling cycle.
  • Although the web console is PHP-based, PHP is used mainly to execute JSON calls. Rendering happens in the browser in an AJAX client.
  • Canvas is used for real time graphics. No images are created on the fly.
  • Data is transferred securely over SSL.
  • The system is robust against failures.
  • Data is not thrown away as it is with some systems so you can check against history.

Reconnoiter isn't completely pain free. Lua for an extension language is an interesting choice. The installation and configuration process is very complex. There are a lot of separate steps and bits to configure. Another potential problem is that monitoring produces a lot of real-time data. I have to wonder if PostgreSQL can handle that flow for very large systems. The data is partitioned by month, but a large number of machines and a large number of events can be crushing. And I wasn't sure if graph data could be correlated with released features or other system changes. In the video Theo mentions seeing in the graphs that using deflate improved performance, but I'm not sure, just looking at the graph, how you would be able to correlate system data with system changes.

It's droolingly clear that where Reconnoiter shines is in creating complex graphs, charts, and other visualizations. The graphs look useful and quick to render, and the real-time visualizations are spectacular and extremely difficult to do in other systems.

Related Articles



  • OmniTI Reconnoiter: Web Management and Analysis by Eric J. Bruno
  • Reconnoiter Update by Theo Schlossnagle
  • Reconnoiter Project Home Page
  • Video: Reconnoiter: a whirlwind tour
  • Big Picture of the Overall System
  • Reconnoiter: Monitoring and Trend Analysis from OSCON
  • OmniTI Unveils Open Source Monitoring Tool, Reconnoiter by Jayashree Adkoli
  • The sad state of open source monitoring tools by Grig Gheorghiu
  • How to Succeed at Capacity Planning Without Really Trying : An Interview with Flickr's John Allspaw on His New Book.
  • New open source IT management tool: Lighter-weight than Nagios, more granular than Cacti by Matt Stansberry
Wednesday
Apr 2, 2008

Product: Supervisor - Monitor and Control Your Processes

It's a sad fact of life, but processes die. I know, it's horrible. You start them, send them out into process space, and hope for the best. Yet sometimes, despite your best coding, they core dump, seg fault, or some other calamity befalls them. Unlike our messy biological world so cruelly ruled by entropy, in the digital world processes can be given another chance. They can be restarted. A greater destiny awaits. And hopefully this time the random lottery of unforeseen killing factors will be avoided and a long productive life will be had by all. This is fun code to write because it's a lot more complicated than you might think. And restarting processes is a highly effective high availability strategy. Most faults are transient, caused by an unexpected series of events. Rather than taking drastic action, like taking a node out of production or failing over, transients can be effectively masked by simply restarting failed processes. Though complexity makes it a fun problem, it's also why you may want to "buy" rather than build. If you are in the market, Supervisor looks worth a visit.

Adapted from their website: Supervisor is a Python program that allows you to start, stop, and restart other programs on UNIX systems. It can restart crashed processes.

  • It is often inconvenient to need to write "rc.d" scripts for every single process instance. rc.d scripts are a great lowest-common-denominator form of process initialization/autostart/management, but they can be painful to write and maintain. Additionally, rc.d scripts cannot automatically restart a crashed process and many programs do not restart themselves properly on a crash. Supervisord starts processes as its subprocesses, and can be configured to automatically restart them on a crash. It can also automatically be configured to start processes on its own invocation.
  • It's often difficult to get accurate up/down status on processes on UNIX. Pidfiles often lie. Supervisord starts processes as subprocesses, so it always knows the true up/down status of its children and can be queried conveniently for this data.
  • Users who need to control process state often need only to do that. They don't want or need full-blown shell access to the machine on which the processes are running. Supervisorctl allows a very limited form of access to the machine, essentially allowing users to see process status and control supervisord-controlled subprocesses by emitting "stop", "start", and "restart" commands from a simple shell or web UI.
  • Users often need to control processes on many machines. Supervisor provides a simple, secure, and uniform mechanism for interactively and automatically controlling processes on groups of machines.
  • Processes which listen on "low" TCP ports often need to be started and restarted as the root user (a UNIX misfeature). It's usually the case that it's perfectly fine to allow "normal" people to stop or restart such a process, but providing them with shell access is often impractical, and providing them with root access or sudo access is often impossible. It's also (rightly) difficult to explain to them why this problem exists. If supervisord is started as root, it is possible to allow "normal" users to control such processes without needing to explain the intricacies of the problem to them.
  • Processes often need to be started and stopped in groups, sometimes even in a "priority order". It's often difficult to explain to people how to do this. Supervisor allows you to assign priorities to processes, and allows users to emit commands via the supervisorctl client like "start all" and "restart all", which start them in the preassigned priority order. Additionally, processes can be grouped into "process groups", and a set of logically related processes can be stopped and started as a unit.

Supervisor also has a web interface and an XML-RPC interface:

  • A (sparse) web user interface with functionality comparable to supervisorctl may be accessed via a browser if you start supervisord against an internet socket. Visit the server URL (e.g. http://localhost:9001/) to view and control process status through the web interface after activating the configuration file's [inet_http_server] section.
  • The same HTTP server which serves the web UI serves up an XML-RPC interface that can be used to interrogate and control supervisor and the programs it runs. To use the XML-RPC interface, connect to supervisor's HTTP port with any XML-RPC client library and run commands against it. An example of doing this with Python's xmlrpclib client library is sketched below.
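A minimal sketch along those lines, assuming supervisord is serving its [inet_http_server] endpoint on http://localhost:9001 and manages a program named "myapp" (both the port and the program name are placeholders):

```python
# Minimal sketch: query and control supervisord over its XML-RPC interface.
# Assumes supervisord.conf enables [inet_http_server] on port 9001 and that
# a program named "myapp" is configured; both are placeholders.
import xmlrpclib  # Python 2; on Python 3 use xmlrpc.client

server = xmlrpclib.ServerProxy("http://localhost:9001/RPC2")

# Overall supervisord state and the status of every managed process.
print(server.supervisor.getState())
for proc in server.supervisor.getAllProcessInfo():
    print(proc["name"], proc["statename"])

# Restart a single process by name.
server.supervisor.stopProcess("myapp")
server.supervisor.startProcess("myapp")
```

supervisorctl itself talks to supervisord over this same XML-RPC interface, so anything the shell client can do is also scriptable this way.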

Related Articles

  • PyCon Presentation: Supervisor as a Platform
  • Monitor Pylons application with supervisord
  • Supervisor Manual

Click to read more ...

Monday
Jan 21, 2008

Product: Hyperic

From Wikipedia: Hyperic HQ is a popular open source IT operations application for monitoring computer systems and networks. It auto-discovers all system resources and their metrics, including hardware, operating systems, virtualization, databases, middleware, applications, and services. It watches hosts and services that you specify, alerting you when things go bad and again when they get better. It also provides historical charting and event correlation for faster problem identification. The Hyperic HQ server is a distributed J2EE application that runs on top of the open source JBoss Application Server. It is written in Java and portable C code and runs on Linux, Windows, Solaris, HP-UX and Mac OS X. Hyperic HQ Portal is a Java and AJAX user interface that includes:

  • Inventory/Application Model & host hierarchy
  • Monitoring of network services (SMTP, POP3, HTTP, NNTP, ICMP, SNMP)
  • Monitoring of host resources (processor load, disk usage, system logs)
  • Remote monitoring supported through SSH or SSL encrypted tunnels
  • Continuous auto-discovery of system resources including hardware, software and services
  • Tracking of log & configuration data
  • Remote resource control for corrective actions such as starting and stopping services, vacuuming a database table, or snapshotting a VM
  • Ability to define event handlers to be run during service or host events for proactive problem resolution
  • Problem resource identification & root cause analysis
  • Event correlation
  • Alerting via email, pager, text messaging, or RSS when service or host problems occur or get resolved
  • Security/access control
  • A simple plug-in design that allows users to easily develop their own service checks depending on needs, using the tools of their choice (XML, J2EE, Bash, C++, Perl, Ruby, Python, PHP, C#, etc.)

I met Javier Soltero, the CEO of Hyperic, at the Velocity Web Performance and Operations dinner. I hit him with my best stuff and he didn't flinch a bit. Javier showed a deep understanding of the issues, a real passion for his product and the space, and the knowing good humor of someone who has been through a few wars and learned a little something along the way. I don't know if that translates to an excellent product, but it would at least make me take a look.

Click to read more ...

Monday
Dec 31, 2007

Product: collectd

From http://directory.fsf.org/project/collectd/ : 'collectd' is a small daemon which collects system information every 10 seconds and writes the results to an RRD file. The statistics gathered include CPU and memory usage, system load, network latency (ping), network interface traffic, system temperatures (using lm-sensors), and disk usage. 'collectd' is not a script; it is written in C for performance and portability. It stays in memory, so there is no need to start up a heavy interpreter every time new values should be logged.

From the collectd website: collectd gathers information about the system it is running on and stores this information. The information can then be used to find current performance bottlenecks (i.e. performance analysis) and predict future system load (i.e. capacity planning). Or if you just want pretty graphs of your private server and are fed up with some homegrown solution, you're at the right place, too ;). While collectd can do a lot for you and your administrative needs, there are limits to what it does:

  • It does not generate graphs. It can write to RRD files, but it cannot generate graphs from these files. There's a tiny sample script included in contrib/, though. Also you can have a look at drraw for a generic solution to generate graphs from RRD files.
  • It does not do monitoring. The data is collected and stored, but not interpreted and acted upon. There's a plugin for Nagios, though, so Nagios can use the values collected by collectd.

It's reportedly a reliable product that doesn't put a lot of load on your system. This enables you to collect data at a faster rate so you can detect problems earlier.

Click to read more ...