Friday
Mar 28, 2008

How to Get DNS Names of a Web Server

For a somewhat special reason, I'm trying to make a web server able to get all the DNS names mapped to its IP. Let me explain: I'm creating a website that will run in a web farm, and every web server in the farm will have some subdomains mapped to its IP. What I want is for my application, whenever it starts on a web server, to be able to get all the subdomains mapped/assigned to that server, e.g. sub1.mydomain.com, sub2.mydomain.com.

I understand that I have to use a reverse DNS lookup (i.e. given the IP, get the domain name), but I also want to get all the subdomains, not just the first one that maps to that IP. I've been reading about DNS on the internet, but I can't find any information on how to achieve this; normally you use DNS to get the IP of a domain, and I'm not sure that all servers enable reverse lookup.

The problem is that I'm still not sure whether I'll host my own DNS server or use the services of some company (many companies offer DNS hosting services), so my questions are:

- If I host my own DNS server, will it be possible to get all the subdomains using reverse lookup? And if I enable reverse lookup on my DNS server, can it have any negative side effects with regard to security, etc.? Is there any way I can allow only my web servers to do reverse lookups while preventing anybody else on the internet from doing them?
- If I use the DNS hosting services of some company, will I be able to do what I want, i.e. get the subdomains mapped to the IP address of a web server?

Unfortunately I don't have much experience working with web farms, so I would also like to ask whether every web server in a web farm gets its own static IP, or how does that work? I mean, you have the firewall, etc., so I don't know how IP assignment works in a web farm scenario.

Thanks a million in advance, and sorry for my really long post.

Wal
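
As a point of reference, here is a minimal sketch (Python, standard library only) of what a plain reverse lookup gives you. Note that a PTR query normally returns a single canonical name, so getting *all* subdomains usually means keeping your own mapping or querying your DNS provider's API rather than relying on reverse DNS. The IP address below is just a placeholder.

```python
import socket

def reverse_lookup(ip):
    """Return (primary_name, alias_list) for an IP via reverse DNS (PTR).

    Most zones publish a single PTR record per address, so this typically
    yields only one name, not every subdomain pointing at the IP.
    """
    try:
        name, aliases, _ = socket.gethostbyaddr(ip)
        return name, aliases
    except socket.herror:
        return None, []

if __name__ == "__main__":
    # Placeholder address; substitute one of your web servers' IPs.
    print(reverse_lookup("203.0.113.10"))
```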


Thursday
Mar 27, 2008

Amazon Announces Static IP Addresses and Multiple Datacenter Operation

Amazon is fixing two of their major problems: no static IP addresses and single-datacenter operation. With these two new features developers can finally build a no-apology system on Amazon. Before, you always had to throw in an apology or two: no, we don't have low failover times because of the silly DNS games and unacceptable DNS update and propagation times, and no, we don't operate in more than one datacenter. No more. Now Amazon is adding Elastic IP Addresses and Availability Zones.

Elastic IP addresses are far better than normal IP addresses because they are both in tight with Jessica Alba and they are:

Static IP addresses designed for dynamic cloud computing. An Elastic IP address is associated with your account, not a particular instance, and you control that address until you choose to explicitly release it. Unlike traditional static IP addresses, however, Elastic IP addresses allow you to mask instance or availability zone failures by programmatically remapping your public IP addresses to any instance associated with your account. Rather than waiting on a data technician to reconfigure or replace your host, or waiting for DNS to propagate to all of your customers, Amazon EC2 enables you to engineer around problems with your instance or software by programmatically remapping your Elastic IP address to a replacement instance.

About the new feature RightScale says:

Amazon did a very nice job in creating something much more powerful than simply adding "static IPs" to their offering. They are giving us dynamically remappable IP addresses that fit well into the overall cloud computing paradigm that we can use to manage servers better than with traditional hosting solutions.

Mostly good news. It's not great news because RightScale also says "Assigning or reassigning an IP to an instance takes a couple of minutes..." So it's not as speedy as one would hope, but at least you don't have to wait for TTLs to expire and for everyone up and down the stack to pick up new IP addresses. Cached static IP addresses will always be valid, which simplifies and speeds things up considerably, especially when using redundant load balancers as the entry point into your system.

The other power feature added was the ability to specify which datacenter your instances run in. Amazon calls this feature Availability Zones:

Availability Zones provide the ability to place instances in multiple locations. Amazon EC2 locations are composed of regions and availability zones. Regions are geographically dispersed and will be in separate geographic areas or countries. Currently, Amazon EC2 exposes only a single region. Availability zones are distinct locations that are engineered to be insulated from failures in other availability zones and provide inexpensive, low latency network connectivity to other availability zones in the same region. Regions consist of one or more availability zones. By launching instances in separate availability zones, you can protect your applications from failure of a single location.

You might also be wondering how fast connections are between datacenters. They are said to provide "inexpensive, low latency network connectivity to other availability zones in the same region." I tend to believe this. I've been surprised before by how fast datacenter links can be, fast enough that you didn't have to code specially for these configurations. How this will impact S3 and SimpleDB latencies is an interesting question. And how system design will need to change once datacenters are in different regions of the world is another interesting question.

Other services still have more datacenters than Amazon, more geographically dispersed datacenters, and better content migration capabilities, but this is a great first step that allows developers to add another layer of reliability to their systems.

Update: I thought this was a good explanation of using static IPs, from Some more about EC2:

Slashdot hasn't run many stories on EC2 (none that I know of) because until now it's been a niche service. Without a way to guarantee that you can have a static IP, there had been a single point of failure: if your outward-facing VMs all went down, your only recourse was to start up more VMs on new, dynamically-assigned IPs, point your DNS to them, and wait hours for your users' DNS caches to expire. That meant that while it may have been a good service for sites that needed to do massive private computation, it was an unacceptable hosting service. Now with static IPs, you basically set up your service to have several VMs which provide the outward-facing service (maybe running a webserver, or a reverse proxy for your internal webservers), and you point your public, static IPs at those. If one or more of them goes down, you start up new copies of those VMs and repoint the IPs to them. No DNS changes required.
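
To make the remapping idea concrete, here is a minimal failover sketch that moves an Elastic IP to a replacement instance. It uses the Python boto library purely as an illustration (any EC2 API client or the command-line tools would do); the region, instance ID, and address are placeholders.

```python
import boto.ec2  # boto 2.x EC2 client (an assumption; any EC2 API client works similarly)

conn = boto.ec2.connect_to_region("us-east-1")  # credentials come from the environment

ELASTIC_IP = "203.0.113.10"                  # placeholder: your allocated Elastic IP
REPLACEMENT_INSTANCE = "i-0fedcba987654321"  # placeholder: freshly launched copy

# Detach the address from the failed instance (safe even if it's already gone)...
conn.disassociate_address(public_ip=ELASTIC_IP)

# ...and point the same public IP at the replacement. Clients keep using the
# cached address; no DNS change and no TTL wait is involved.
conn.associate_address(instance_id=REPLACEMENT_INSTANCE, public_ip=ELASTIC_IP)
```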

Related Articles

  • Jesse Robbins links to a good explanation by RightScale of Availability Zones: Setting up a fault-tolerant site using Amazon’s Availability Zones.
  • RightScale also writes DNS, Elastic IPs (EIP) and how things fit together when upgrading a server


Tuesday
Mar 25, 2008

    Paper: On Designing and Deploying Internet-Scale Services

Greg Linden links to a heavily lesson-laden LISA 2007 paper titled On Designing and Deploying Internet-Scale Services by James Hamilton of the Windows Live Services Platform group. I know people crave nitty-gritty details, but this isn't a how-to-configure-a-web-server article. It hitches you to a rocket and zooms you up to 50,000 feet so you can take a look at best web operations practices from a broad, yet practical perspective. The author and his team of contributors obviously have a lot of in-the-trenches experience. Many non-obvious topics are covered, and there's a lot to learn from them.

    The paper has too many details to cover here, but the big sections are:

  • Recommendations
  • Automatic Management and Provisioning
  • Dependency Management
  • Release Cycle and Testing
  • Operations and Capacity Planning
  • Graceful Degradation and Admission Control
  • Customer Self-Provisioning and Self-Help
  • Customer and Press Communication Plan

    In the recommendations we see some of our old favorites:
  • Expect failure and design for failure.
  • Implement redundancy and fault recovery.
  • Depend upon a commodity hardware slice.
  • Keep things simple and robust.
  • Automate everything.

    Personally, I'm still trying to figure out how to make something simple.

    Next are some good thoughts on how to design operations friendly software:
  • Quick service health check. This is the services version of a build verification test (a minimal sketch of the idea follows this list).
  • Develop in the full environment.
  • Zero trust of underlying components.
  • Do not build the same functionality in multiple components.
  • One pod or cluster should not affect another pod or cluster.
  • Allow (rare) emergency human intervention.
  • Enforce admission control at all levels.
  • Partition services.
  • Understand the network design.
  • Analyze throughput and latency.
  • Treat operations utilities as part of the service.
  • Understand access patterns.
  • Version everything.
  • Keep the unit/functional tests from the last release.
  • Avoid single points of failure.
  • Support single-version software. Have all your customers run the same version.
  • Implement multi-tenancy. Apparently a lot of software requires cloning hardware installations to support multiple customers. Don't do that. Have your software work for multiple customers all on the same hardware.
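
For the "quick service health check" item above, here is a minimal sketch of the idea: a trivial HTTP endpoint, using only the Python standard library, that a load balancer or monitoring script can poll. The endpoint path and the checks themselves are placeholders; the paper doesn't prescribe any particular implementation.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def service_is_healthy():
    """Placeholder for real checks: database reachable, queues draining, etc."""
    return True

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/healthz":
            self.send_error(404)
            return
        body, status = (b"OK", 200) if service_is_healthy() else (b"UNHEALTHY", 503)
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A monitoring script or load balancer polls http://host:8080/healthz
    HTTPServer(("", 8080), HealthHandler).serve_forever()
```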

    And the paper continues along the same lines in each section. Good detailed advice on lots of different topics.

    You'll undoubtedly agree with some of the advice and disagree with some. Greg wants faster release cycles, thinks having server affinity for some things is OK, and thinks the advice on allowing humans to throttle load won't work in a crisis. Perfectly valid points, but what's fun is to consider them. Some companies, for example, have a dead-man's switch that must be thrown before one master can failover to another in a multi-datacenter situation. Is that wrong or right? Only the shadow knows.

    The advice to "document all conceivable component failures and modes and combinations" sounds good but is truly difficult to do in practice. I went through this process once on a telco project and it took months just to cover all the failure scenarios on a few cards. But the spirit is right I think.

    My favorite part of the whole paper is:
    We have long believed that 80% of operations issues originate in design and development, so this section on overall service design is the largest and most important. When systems fail, there is a natural tendency to look first to operations since that is where the problem actually took place. Most operations issues, however, either have their genesis in design and development or are best solved there.

    Understand this and I think much of the rest follows naturally.
Monday
Mar 24, 2008

    Advertise

    If you would like to advertise on this site please contact me at todd@possibility.com. We can work out the details over email. Thanks


    Thursday
Mar 20, 2008

    Paper: Asynchronous HTTP and Comet architectures

    Comet has popularized asynchronous non-blocking HTTP programming, making it practically indistinguishable from reverse Ajax, also known as server push. This JavaWorld article takes a wider view of asynchronous HTTP, explaining its role in developing high-performance HTTP proxies and non-blocking HTTP clients, as well as the long-lived HTTP connections associated with Comet.
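
As a rough illustration of the long-lived connections the article discusses, here is a minimal client-side long-polling loop in Python (standard library only). The endpoint URL, the cursor parameter, and the JSON shape are placeholders of my own, not taken from the article; real Comet clients and servers add reconnection backoff, message framing, and non-blocking I/O.

```python
import json
import socket
import urllib.error
import urllib.request

POLL_URL = "http://example.com/events?since={cursor}"  # placeholder endpoint

def long_poll(cursor=0):
    """Repeatedly hold a request open until the server has something to report."""
    while True:
        try:
            # The server delays its response until an event arrives (or its own
            # timeout passes), which is what makes this "long" polling.
            with urllib.request.urlopen(POLL_URL.format(cursor=cursor), timeout=90) as resp:
                payload = json.load(resp)
        except (socket.timeout, urllib.error.URLError):
            continue  # quiet period or network hiccup; just reconnect
        for event in payload.get("events", []):
            print("got event:", event)
        cursor = payload.get("cursor", cursor)

if __name__ == "__main__":
    long_poll()
```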


    Wednesday
Mar 19, 2008

    RAD Lab is Creating a Datacenter Operating System

The RAD Lab (Reliable Adaptive Distributed Systems Laboratory) wants to leapfrog the Big Switch and create The Next Big Switch, skipping the cloud/utility evolutionary stage altogether. This hyper-evolutionary niche buster develops technology so advanced the cloud disperses and you can go back to building your own personal datacenters again. Where Google took years to create their datacenters, using a prefab Datacenter Operating System you might create your own over a long holiday weekend. Not St. Patrick's, of course.

Their vision: Enable one person to invent and run the next revolutionary IT service, operationally expressing a new business idea as a multi-million-user service over the course of a long weekend. By doing so we hope to enable an Internet "Fortune 1 million".

How? By wizardry in the form of a "datacenter operating system" created from a pinch of "statistical machine learning (SML)" and a tincture of "recent insights from networking and distributed systems." But like most magic, it's not so outlandish once you understand it:

  • Virtual machines provide the OS mechanism.
  • SML enforces the overarching policy.
  • Tools collect sensor data from all the hardware and software components.
  • Actuators shutdown, reboot, or migrate services inside the datacenter.
  • Workload generators and application simulators record the behaviors of proprietary systems and then recreate them in a research environment.
  • Ruby on Rails is the likely programming language.
  • Chubby and MapReduce are the libraries.
  • Storage is via services like BigTable, Google File System, and Amazon’s Simple Storage Service.
  • Crash-only software design.
  • CAP (consistency, availability, partition-tolerance) based design strategies.
  • Improve the efficiency of power delivery and usage.

The only new part would be the SML. All the rest is fairly standard by now, even if it's not yet available in a nice gift box at a discount store. And I am highly skeptical when people draw a big circle around the really tricky, complex bits and say we'll solve all that with "statistical machine learning", but the idea is intriguing.

The dramatic rise of cloud/utility computing makes the personal datacenter idea less appealing than it otherwise would have been. When datacenters were built from scratch by hardy settlers with nothing but flint knives and bear skins, a Datacenter OS would have been very exciting. But now, isn't leveraging multiple clouds a better strategy? After all, the DC OS really just packages best practices. It won't really innovate for you, so you aren't gaining a competitive advantage or even a lower cost structure. And if that's the case, wouldn't I rather have someone else do all of the work?

But I have high hopes I'll have my own personal power plant in the near future. Maybe one of the things it will power is my own personal datacenter!
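
To make the mechanism/policy split in the list above concrete, here is a purely hypothetical sketch of the kind of control loop such a datacenter OS implies: sensors feed measurements to a policy layer (the SML piece, stubbed out as a threshold rule here), which drives actuators that restart or migrate instances. None of these names are real RAD Lab APIs; it's only an illustration of the architecture described above.

```python
import random
import time

def read_sensors():
    """Stand-in for collecting metrics from every hardware/software component."""
    return {"web-1": {"latency_ms": random.uniform(5, 500), "healthy": True}}

def policy_decide(metrics):
    """Stand-in for the statistical-machine-learning policy layer.

    Here it's a simple threshold rule; the RAD Lab vision is that a learned
    model would make this decision instead."""
    actions = []
    for instance, m in metrics.items():
        if not m["healthy"] or m["latency_ms"] > 300:
            actions.append(("restart", instance))
    return actions

def actuate(action, instance):
    """Stand-in for the actuators: shutdown, reboot, or migrate a VM."""
    print(f"actuator: {action} {instance}")

if __name__ == "__main__":
    for _ in range(3):  # a real control loop would run forever
        for action, instance in policy_decide(read_sensors()):
            actuate(action, instance)
        time.sleep(1)
```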

    Related Articles

  • Home Page for RAD Lab - Reliable Adaptive Distributed Systems Laboratory
  • RADLab Technical Vision (2005)
  • CS 294-23, Software as a Service (Patterson/Fox/Sobel)
  • Internet-scale Computing: The Berkeley RADLab Perspective
  • CS 294-14: Architecture of Internet Datacenters. This is a course at Berkeley and many classes have lecture notes. Very cool. PS: Is it "datacenter" or "data center"? Both are used and it drives me crazy.


Wednesday
Mar 19, 2008

    Serving JavaScript Fast

Cal Henderson writes at thinkvitamin.com: "With our so-called 'Web 2.0' applications and their rich content and interaction, we expect our applications to increasingly make use of CSS and JavaScript. To make sure these applications are nice and snappy to use, we need to optimize the size and nature of content required to render the page, making sure we're delivering the optimum experience. In practice, this means a combination of making our content as small and fast to download as possible, while avoiding unnecessarily refetching unmodified resources." A lot of good comments too.
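
The "avoid refetching unmodified resources" point boils down to cache headers and conditional GETs. Here is a minimal sketch, not taken from Cal's article, of serving a JavaScript file with an ETag so repeat visitors get a 304 instead of re-downloading it; the file path and the server framework (Python's standard library) are just for illustration.

```python
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

JS_PATH = "static/app.js"  # placeholder for your bundled/minified JavaScript

class StaticJSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with open(JS_PATH, "rb") as f:
            body = f.read()
        etag = '"%s"' % hashlib.md5(body).hexdigest()

        # If the browser already holds this exact version, answer 304 and
        # skip resending the body entirely.
        if self.headers.get("If-None-Match") == etag:
            self.send_response(304)
            self.send_header("ETag", etag)
            self.end_headers()
            return

        self.send_response(200)
        self.send_header("Content-Type", "application/javascript")
        self.send_header("ETag", etag)
        self.send_header("Cache-Control", "public, max-age=3600")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), StaticJSHandler).serve_forever()
```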


    Tuesday
Mar 18, 2008

    Shared filesystem on EC2

Hi. I'm looking for a way to share files between EC2 nodes. Currently we are using GlusterFS to do this. It has been reliable recently, but in the past it has crashed under high load and we've had trouble starting it up again. We've only been able to restart it by removing the files, restarting the cluster, and filling it up again with our files from backup. This takes ages, and will take even longer the more files we get.

What worries me is that it seems to make each node a point of failure for the entire system: one node crashes and soon the entire cluster has crashed. The other problem is adding another node. It seems like you have to take down the whole thing, reconfigure it to include the new node, and restart. This kind of defeats the horizontal scaling strategy.

We are using 2 EC2 instances as web servers, 1 as a DB master, and 1 as a slave. GlusterFS is installed on the web server machines as well as the DB slave machine (we back up files to S3 from this machine). The files are mostly thumbnails, but also some larger images and media files.

Does anyone have a good solution for sharing files between EC2 nodes? I like the ThruDB [http://trac.thrudb.org/] concept of using the local filesystem as a cache for S3, but I'm not sure if ThruDB is mature enough yet. Or maybe some kind of distributed filesystem built on top of git would work? Any ideas? Thanks!

~rvr
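
For reference, the "local filesystem as a cache for S3" idea the poster mentions can be sketched in a few lines: check a local path first and fall back to S3 on a miss. This uses the Python boto library with placeholder bucket and cache paths (all assumptions of mine); it only illustrates the caching pattern, not ThruDB's actual implementation.

```python
import os
import boto  # boto 2.x S3 client (an assumption; any S3 client works the same way)

BUCKET_NAME = "my-media-bucket"   # placeholder bucket
CACHE_DIR = "/var/cache/s3files"  # placeholder local cache directory

def get_file(key_name):
    """Return the local path for key_name, fetching it from S3 on a cache miss."""
    local_path = os.path.join(CACHE_DIR, key_name)
    if os.path.exists(local_path):
        return local_path  # cache hit: serve straight from the local filesystem

    # Cache miss: pull the object down from S3 and keep a copy for next time.
    os.makedirs(os.path.dirname(local_path), exist_ok=True)
    bucket = boto.connect_s3().get_bucket(BUCKET_NAME)
    bucket.get_key(key_name).get_contents_to_filename(local_path)
    return local_path
```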


    Tuesday
Mar 18, 2008

    Database War Stories #3: Flickr

    [Tim O'Reilly] Continuing my series of queries about how "Web 2.0" companies used databases, I asked Cal Henderson of Flickr to tell me "how the folksonomy model intersects with the traditional database. How do you manage a tag cloud?"


    Tuesday
Mar 18, 2008

    Database Design 101

I am working on the design for my database and can't seem to come up with a firm schema. I am torn between normalizing the data and dealing with the overhead of joins, and denormalizing it for easy sharding.

The data is essentially music information per user: UserID, Artist, Album, Song. This lends itself nicely to being normalized, with separate User, Artist, Album, and Song tables and a table full of INTs to tie them together. This will be a mostly read-based environment, with about 80% of the load being searches of the data by artist, album, or song. By the time I begin the query for artist, album, or song I will already have a list of UserIDs to limit the search by.

The problem is that the tables can get unmanageably large pretty quickly, and my plan was to shard off users once the tables got too big. Given this simple data relationship, what are the pros and cons of normalizing the data vs. denormalizing it? Should I go with 4 separate, normalized tables or one 4-column table? Perhaps it might be best to write the data in both formats at first and see what query speed is like once the tables fill up.

Another potential issue is that inserts will be coming in batches of about 500 - 2000+ per user at a time, which will be pretty intensive to pull off for the normalized tables: there will need to be quite a few selects for each insert, since the artist, album, or song may already be in the database or it may not, requiring an insert.

What do you all think?
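
For comparison, here is a minimal sketch of the two schema options described above, using SQLite purely for illustration (the table and column names are my own guesses at the poster's model, not anything specified in the post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # illustration only; any SQL database applies

conn.executescript("""
-- Option 1: normalized -- each entity stored once, a junction table of INTs ties them together.
CREATE TABLE artist (artist_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE album  (album_id  INTEGER PRIMARY KEY, artist_id INTEGER, title TEXT);
CREATE TABLE song   (song_id   INTEGER PRIMARY KEY, album_id  INTEGER, title TEXT);
CREATE TABLE user_song (
    user_id INTEGER,
    song_id INTEGER,
    PRIMARY KEY (user_id, song_id)
);

-- Option 2: denormalized -- one wide table, easy to shard by user_id, values repeated per row.
CREATE TABLE user_music (
    user_id INTEGER,
    artist  TEXT,
    album   TEXT,
    song    TEXT
);
CREATE INDEX idx_user_music ON user_music (user_id, artist, album, song);
""")
```

The normalized form needs joins (user_song to song to album to artist) for the 80% read case, while the denormalized table answers it with a single indexed lookup per shard; that trade-off, plus the extra selects on batch insert, is exactly what the post is weighing.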
