Entries by Todd Hoff (380)

Thursday
Jul262007

Product: AWStats, a Log Analyzer

AWStats is a free, powerful, and feature-rich tool that graphically generates advanced web, streaming, FTP, or mail server statistics. This log analyzer works as a CGI or from the command line and shows you all the information your logs contain in a few graphical web pages. It uses a partial-information file so it can process large log files quickly and frequently. It can analyze log files from all major server tools, including Apache (NCSA combined/XLF/ELF or common/CLF log format), WebStar, IIS (W3C log format), and many other web, proxy, WAP, and streaming servers, mail servers, and some FTP servers.

Click to read more ...

Wednesday
Jul252007

Product: 3PAR Remote Copy

3PAR Remote Copy is a uniquely simple and efficient replication technology that allows customers to protect and share any application data affordably. Built upon 3PAR Thin Copy technology, Remote Copy lowers the total cost of storage by addressing the cost and complexity of remote replication. Common uses of 3PAR Remote Copy:

* Affordable disaster recovery: Mirror data cost-effectively across town or across the world.
* Centralized archive: Replicate data from multiple 3PAR InServs located in multiple data centers to a centralized archive location.
* Resilient pod architecture: Mutually replicate tier 1 or 2 data to tier 3 capacity between two InServs (application pods).
* Remote data access: Replicate data to a remote location for sharing with remote users.

Click to read more ...

Wednesday
Jul252007

Product: NetApp MetroCluster Software

NetApp MetroCluster is a cost-effective, integrated high-availability storage cluster and site failover capability. It is an integrated high-availability and disaster recovery solution that can reduce system complexity and simplify management while ensuring greater return on investment. MetroCluster uses clustered server technology to replicate data synchronously between sites located miles apart, eliminating data loss in case of a disruption. A simple and powerful recovery process minimizes downtime, with little or no user action required. At one company I worked at, they used the NetApp SnapMirror feature to replicate data across long distances to multiple datacenters. They had a very fast backbone and it worked well. The issue with NetApp is always one of cost, but if you can afford it, it's a good option.

Click to read more ...

Wednesday
Jul252007

Paper: Designing Disaster Tolerant High Availability Clusters

A very detailed (339-page) paper on how to use HP products to create a highly available cluster. It's somewhat dated and obviously concentrates on HP products, but it is still good information.

Table of contents:

1. Disaster Tolerance and Recovery in a Serviceguard Cluster: Evaluating the Need for Disaster Tolerance; What is a Disaster Tolerant Architecture?; Types of Disaster Tolerant Clusters (Extended Distance Clusters, Metropolitan Cluster, Continental Cluster, Continental Cluster with Cascading Failover); Disaster Tolerant Architecture Guidelines (Protecting Nodes through Geographic Dispersion, Protecting Data through Replication, Using Alternative Power Sources, Creating Highly Available Networking); Disaster Tolerant Cluster Limitations; Managing a Disaster Tolerant Environment; Using this Guide with Your Disaster Tolerant Cluster Products.

2. Building an Extended Distance Cluster Using Serviceguard: Types of Data Link for Storage and Networking; Two Data Center Architecture; Two Data Center FibreChannel Implementations; Advantages and Disadvantages of a Two-Data-Center Architecture; Three Data Center Architectures; Rules for Separate Network and Data Links; Guidelines on DWDM Links for Network and Data.

3. Designing a Metropolitan Cluster: Designing a Disaster Tolerant Architecture for Use with Metrocluster Products; Single Data Center; Two Data Centers and Third Location with Arbitrator(s); Additional EMC SRDF Configurations; Setting up Hardware for 1-by-1 and M-by-N Configurations; Worksheets (Disaster Tolerant Checklist, Cluster Configuration Worksheet, Package Configuration Worksheet); Next Steps.

4. Designing a Continental Cluster: Understanding Continental Cluster Concepts (Mutual Recovery Configuration, Application Recovery in a Continental Cluster, Monitoring over a Wide Area Network); Cluster Events (Interpreting the Significance of Cluster Events, How Notifications Work, Alerts, Alarms, Creating Notifications for Failure Events and for Events that Indicate a Return of Service); Performing Cluster Recovery; Notes on Packages in a Continental Cluster; How Serviceguard Commands Work in a Continentalclusters Environment; Designing a Disaster Tolerant Architecture for Use with Continentalclusters (Mutual Recovery, Serviceguard Clusters, Data Replication, Highly Available Wide Area Networking, Data Center Processes); Continentalclusters Worksheets; Preparing the Clusters (Setting up and Testing Data Replication, Configuring a Cluster with and without Recovery Packages); Building the Continentalclusters Configuration (Preparing Security Files, Creating the Monitor Package, Editing, Checking, and Applying the Configuration File, Starting the Monitor Package, Validating the Configuration); Documenting and Reviewing the Recovery Procedure; Testing the Continental Cluster (Individual Packages, Continentalclusters Operations); Switching to the Recovery Packages in Case of Disaster (Receiving Notification, Verifying that Recovery is Needed, Using the cmrecovercl Recovery Command to Switch All Packages, Forcing a Package to Start); Restoring Disaster Tolerance (Restoring Clusters to Their Original Roles, Keeping Primary Packages on the Surviving Cluster, with or without cmswitchconcl, Running Primary Packages on a Newly Created Cluster, Using a Newly Created Cluster as the Recovery Cluster for All Recovery Groups); Maintaining a Continental Cluster (Adding or Removing Nodes, Adding or Removing Packages, Changing Monitoring Definitions, Checking the Status of Clusters, Nodes, and Packages, Reviewing Messages and Log Files, Deleting or Renaming a Continental Cluster Configuration, Checking Java File Versions); Next Steps; Support for Oracle RAC Instances in a Continentalclusters Environment (Configuring the Environment, Initial Startup of an Oracle RAC Instance, Failover of Oracle RAC Instances to the Recovery Site, Failback After a Failover).

5. Building Disaster-Tolerant Serviceguard Solutions Using Metrocluster with Continuous Access XP: Files for Integrating XP Disk Arrays with Serviceguard Clusters; Overview of Continuous Access XP Concepts (PVOLs and SVOLs, Device Groups and Fence Levels); Creating the Cluster; Preparing the Cluster for Data Replication (Creating the RAID Manager Configuration, Defining Storage Units); Configuring Packages for Disaster Recovery; Completing and Running a Metrocluster Solution with Continuous Access XP; Maintaining a Cluster that Uses Metrocluster/CA (XP/CA Device Group Monitor); Completing and Running a Continental Cluster Solution with Continuous Access XP (Setting up Primary and Recovery Packages, Setting up the Continental Cluster Configuration, Switching to the Recovery Cluster in Case of Disaster, Failback Scenarios); Maintaining the Continuous Access XP Data Replication Environment.

6. Building Disaster-Tolerant Serviceguard Solutions Using Metrocluster with EMC SRDF: Files for Integrating Serviceguard with EMC SRDF; Overview of EMC and SRDF Concepts; Preparing the Cluster for Data Replication (Installing the Necessary Software, Building the Symmetrix CLI Database, Determining Symmetrix Device Names on Each Node); Building a Metrocluster Solution with EMC SRDF (Setting up 1-by-1 Configurations, Grouping the Symmetrix Devices at Each Data Center, Setting up M-by-N Configurations); Configuring Serviceguard Packages for Automatic Disaster Recovery; Maintaining a Cluster that Uses Metrocluster/SRDF (Managing Business Continuity Volumes, R1/R2 Swapping); Building a Continental Cluster Solution with EMC SRDF (Setting up Primary and Recovery Packages, Setting up the Continental Cluster Configuration, Switching to the Recovery Cluster in Case of Disaster, Failback Scenarios); Maintaining the EMC SRDF Data Replication Environment; R1/R2 Swapping.

7. Cascading Failover in a Continental Cluster: Overview; Symmetrix Configuration Using Template Files; Data Storage Setup (Setting up Symmetrix Device Groups, Setting up Volume Groups, Testing the Volume Groups); Primary and Recovery Cluster Package Setup; Continental Cluster Configuration; Data Replication Procedures (Data Initialization, Data Refresh in the Steady State, Data Replication in Failover and Failback Scenarios).

Click to read more ...

Wednesday
Jul252007

Product: lighttpd

lighttpd (pronounced "lighty") is a web server designed to be secure, fast, standards-compliant, and flexible, while being optimized for speed-critical environments. Its low memory footprint (compared to other web servers), light CPU load, and speed goals make lighttpd suitable for servers that are suffering load problems, or for serving static media separately from dynamic content. lighttpd is free software / open source, distributed under the BSD license. It runs on GNU/Linux, other Unix-like operating systems, and Microsoft Windows.

* Load-balancing FastCGI, SCGI, and HTTP-proxy support
* chroot support
* select()-/poll()-based web server
* Support for more efficient event notification schemes like kqueue and epoll
* Conditional rewrites (mod_rewrite)
* SSL and TLS support via OpenSSL
* Authentication against an LDAP server
* rrdtool statistics
* Rule-based downloading with the possibility of a script handling only authentication
* Server-side includes support
* Flexible virtual hosting
* Module support
* Cache Meta Language (currently being replaced by mod_magnet)
* Minimal WebDAV support
* Servlet (AJP) support (in versions 1.5.x and up)
* HTTP compression using mod_compress and the newer mod_deflate (1.5.x)
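The select()/poll()/epoll design mentioned above is a single-process event loop: one process watches many sockets and only touches the ones with pending work. A minimal sketch in Python (using the standard selectors module, which picks epoll or kqueue where available) shows the shape of that model. This is an illustration of the readiness-notification approach, not lighttpd's actual code; all names and the port number are made up.

```python
import selectors
import socket

# One selector watches every socket; epoll/kqueue is chosen automatically
# where the OS supports it, the same class of mechanism lighttpd uses.
sel = selectors.DefaultSelector()

def accept(server_sock):
    """Accept a new client and watch it for readable data."""
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    """Echo whatever the client sends; clean up on disconnect."""
    data = conn.recv(4096)
    if data:
        conn.sendall(data)
    else:
        sel.unregister(conn)
        conn.close()

def serve(host="127.0.0.1", port=8080):
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen(128)
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    while True:
        # Block until at least one registered socket is ready, then
        # dispatch to the callback stored at registration time.
        for key, _mask in sel.select():
            key.data(key.fileobj)
```

The point of this design is that thousands of mostly idle connections cost almost nothing: the process sleeps in one kernel call instead of dedicating a thread or process per connection.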

Information Sources

* http://en.wikipedia.org/wiki/Lighttpd
* http://highscalability.com/paper-lightweight-web-servers

Click to read more ...

Wednesday
Jul252007

Paper: Lightweight Web servers

This paper is a great overview of different lightweight web servers. A lot of websites use lightweight web servers to serve images and static content. YouTube is one example: http://highscalability.com/youtube-architecture. So if you need to improve performance, consider switching to a different web server for some types of content. Overview: Recent years have enjoyed a florescence of interesting implementations of Web servers, including lighttpd, litespeed, and mongrel, among others. These Web servers boast different combinations of performance, ease of administration, portability, security, and related values. The following engineering study surveys the field of lightweight Web servers to help you find one likely to meet the technical requirements of your next project. "Lightweight" Web servers like lighttpd, litespeed, and mongrel can offer dramatic benefits for your projects. This article surveys the possibilities and shows how they apply to you. Important dimensions for evaluating a Web server include:

* Performance: How fast does it respond to requests?
* Scalability: Does the server continue to behave reliably when many users access it simultaneously?
* Security: Does the server do only the operations it should? What support does it offer for authenticating users and encrypting its traffic? Does its use make nearby applications or hosts more vulnerable?
* Availability: What are the failure modes and incidences of the server?
* Compliance to standards: Does the server respect the pertinent RFCs?
* Flexibility: Can the server be tuned to accommodate heavy request loads, computationally demanding dynamic pages, expensive authentication, or ...?
* Platform requirements: On what range of platforms is the server available? Does it have specific hardware needs?
* Manageability: Is the server easy to set up and maintain? Is it compatible with organizational standards for logging, auditing, costing, and so on?

Click to read more ...

Tuesday
Jul242007

Major Websites Down: Or Why You Want to Run in Two or More Data Centers.

A lot of sites hosted in San Francisco are down because of at least six back-to-back power outages. More details at Laughing Squid. Sites like Second Life, Craigslist, Technorati, Yelp, and all Six Apart properties (TypePad, LiveJournal, and Vox) are down. The cause was an underground explosion in a transformer vault under a manhole at 560 Mission Street. Flames shot 6 feet out from the manhole cover. Over 30,000 PG&E customers are without power. What's perplexing is that the UPS backup and diesel generators didn't kick in to bring the datacenter back online. I've never toured that datacenter, but they usually have massive backup systems. It's probably one of those multiple-simultaneous-failure situations that you hope never happen in real life, but too often do. Or maybe the infrastructure wasn't rolled out completely. Update: the cause was a cascade of failures in a tightly coupled system that could never happen :-) Details at Failure Happens: A summary of the power outage at 365 Main. It's just these sorts of emergencies that make us think. How would you handle a similar failure? How can you make your website span more than one data center? Is the cost of putting in all this infrastructure worth it compared to your website being down for a day? All good, hard-to-answer, and even harder-to-implement questions. Geo-distributed clustering is not easy, which is why most companies don't do it, but for some help distributing your website take a look at http://highscalability.com/tags/geo-distributed-clusters.

Click to read more ...

Tuesday
Jul242007

Product: Hibernate Shards

If you want to adopt a shard architecture but don't want to start from scratch, you may want to consider Hibernate's sharding system. Hibernate Shards is a framework designed to encapsulate and minimize the complexity of horizontal partitioning by adding support for it to Hibernate Core. Hibernate Shards key features:

* Standard Hibernate programming model - Hibernate Shards allows you to continue using the Hibernate APIs you know and love: SessionFactory, Session, Criteria, Query. If you already know how to use Hibernate, you already know how to use Hibernate Shards.
* Flexible sharding strategies - Distribute data across your shards any way you want. Use one of the default strategies provided or plug in your own application-specific logic.
* Support for virtual shards - Think your sharding strategy is never going to change? Think again. Adding new shards and redistributing your data is one of the toughest operational challenges you will face once you've deployed your shard-aware application. Hibernate Shards supports virtual shards, a feature designed to simplify the process of resharding your data.
* Free/open source - Hibernate Shards is licensed under the LGPL (Lesser GNU Public License).
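Hibernate Shards itself is a Java framework, but the virtual-shard idea it describes can be sketched in a few lines: keys hash onto a fixed set of virtual shards, and only the virtual-to-physical mapping changes during resharding, so no key ever needs rehashing. All names below are illustrative, not the actual Hibernate Shards API.

```python
import hashlib

# Fixed number of virtual shards; chosen larger than any expected
# physical shard count so physical shards can be added later.
NUM_VIRTUAL_SHARDS = 64

def virtual_shard(key: str) -> int:
    """Stable hash of a key onto the fixed set of virtual shards."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_VIRTUAL_SHARDS

# Virtual -> physical mapping. Resharding only edits this table;
# keys never move between virtual shards. Here: 2 physical shards.
virtual_to_physical = {v: v % 2 for v in range(NUM_VIRTUAL_SHARDS)}

def physical_shard(key: str) -> int:
    """Resolve a key to the physical shard currently holding it."""
    return virtual_to_physical[virtual_shard(key)]
```

To add a third physical shard you would reassign some entries of `virtual_to_physical` and migrate only the rows behind those virtual shards, which is the operational simplification the virtual-shards feature is after.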

Click to read more ...

Monday
Jul232007

GoogleTalk Architecture

Google Talk is Google's instant communications service. Interestingly, the IM messages themselves aren't the major architectural challenge; handling user presence indications dominates the design. They also have the challenge of handling small, low-latency messages and integrating with many other systems. How do they do it? Site: http://www.google.com/talk

Information Sources

  • GoogleTalk Architecture

    Platform

  • Linux
  • Java
  • Google Stack
  • Shard

    What's Inside?

    The Stats

  • Support presence and messages for millions of users.
  • Handles billions of packets per day in under 100ms.
  • IM is different from many other applications because the requests are small packets.
  • Routing and application logic are applied per packet for sender and receiver.
  • Messages must be delivered in-order.
  • Architecture extends to new clients and Google services.

    Lessons Learned

  • Measure the right thing. - People ask how many IMs you deliver or how many active users you have. That turns out not to be the right engineering question. - The hard part of IM is showing correct presence to all connected users, because growth is non-linear: ConnectedUsers * BuddyListSize * OnlineStateChanges. - Linear user growth can mean very non-linear server growth, which requires serving many billions of presence packets per day. - Have a large number of friends and presence explodes; the number of IMs is not that big a deal.
  • Real-Life Load Tests - Lab tests are good, but they don't tell you enough. - They did a backend launch before the real product launch. - Simulate presence requests and users going online and offline for weeks and months, even if real data is not returned. It works out many of the kinks in networking, failover, etc.
  • Dynamic Resharding - Divide user data or load across shards. - Google Talk backend servers handle traffic for a subset of users. - Make it easy to change the number of shards with zero downtime. - Don't shard across data centers; try to keep users local. - You can bring down servers and backups take over; then you can bring up new servers, data is migrated automatically, and clients auto-detect and move to the new servers.
  • Add Abstractions to Hide System Complexity - Different systems should have little knowledge of each other, especially when separate groups are working together. - Gmail and Orkut don't know about sharding, load balancing, failover, data center architecture, or the number of servers. These can change at any time without cascading changes throughout the system. - Abstract these complexities into a set of gateways that are discovered at runtime. - The RPC infrastructure should handle rerouting.
  • Understand the Semantics of Lower-Level Libraries - Everything is abstracted, but you must still have enough knowledge of how the libraries work to architect your system. - Does your RPC create TCP connections to all or some of your servers? Very different implications. - Does the library perform health checking? This has architectural implications, as you can have separate systems failing independently. - Which kernel operations should you use? IM requires a lot of connections, but few have any activity, so use epoll instead of poll/select.
  • Protect Against Operational Problems - Smooth out all spikes in server activity graphs. - What happens when servers restart with an empty cache? - What happens if traffic shifts to a new data center? - Limit cascading problems. Back off from busy servers; don't accept work when sick. - Isolate in emergencies; don't infect others with your problems. - Have intelligent retry policies abstracted away. Don't sit in hard 1 msec retry loops, for example.
  • Any Scalable System is a Distributed System - Add fault tolerance to every component of the system; everything fails. - Add the ability to profile live servers without impacting them, which allows continual improvement. - Collect metrics from servers for monitoring. Log everything about your system so you can see patterns in cause and effect. - Log end-to-end so you can reconstruct an entire operation from beginning to end across all machines.
  • Software Development Strategies - Make sure binaries are both backward and forward compatible so old clients can work with new code. - Build an experimentation framework to try new features. - Give engineers access to production machines; it gives them end-to-end ownership. This is very different from many companies, which have completely separate ops teams in their data centers where developers can't touch production machines.
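The presence fan-out math in the lessons above can be made concrete with a quick back-of-the-envelope sketch. All numbers here are invented for illustration; the formula is the one from the talk.

```python
def daily_presence_packets(connected_users, buddy_list_size, state_changes_per_user):
    """Each online/offline change must be fanned out to every buddy,
    so packets scale with ConnectedUsers * BuddyListSize * OnlineStateChanges."""
    return connected_users * buddy_list_size * state_changes_per_user

# 1M users, 10 buddies each, 20 state changes/day -> 200 million packets/day.
small = daily_presence_packets(1_000_000, 10, 20)

# 10M users, 40 buddies each, 20 state changes/day -> 8 billion packets/day.
big = daily_presence_packets(10_000_000, 40, 20)
```

Ten times the users, but forty times the presence traffic once buddy lists also grow, which is why presence rather than message delivery drives server count.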
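The intelligent-retry lesson above can be sketched as exponential backoff with jitter instead of a hard 1 msec loop, so a sick server isn't hammered by every client at once. The function name and parameters here are illustrative, not Google's actual infrastructure.

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Retry operation on IOError, sleeping a jittered, exponentially
    growing delay between attempts; re-raise after the last attempt."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except IOError:
            if attempt == max_attempts - 1:
                raise
            # Full jitter: sleep a random amount up to the exponential cap,
            # so retrying clients spread out instead of retrying in lockstep.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

The jitter matters as much as the exponential growth: without it, all clients that failed together retry together, recreating the spike the backoff was meant to avoid.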

    Click to read more ...

Monday
Jul232007

Weblink Template

Site:

Information Sources

Platform

What's Inside?

The Stats

The Architecture

* Data Center
* Storage
* Development Environment
* OS
* Web Server
* Database
* Database abstraction layer
* Load balancing
* Web Framework
* Real-time messaging
* Identity management
* Distributed job management
* Ad serving
* Standard API to website
* AJAX library
* PHP Cache
* Object and Content Cache
* Client Side Cache
* Monitoring
* Log Analysis
* Testing
* Performance Analysis
* Backup and Restore
* Fault Tolerance
* Scalability Plan
* Business Continuity Plan
* Future Directions

Lessons Learned

To discuss this article please visit the forums at

Click to read more ...