Entries in Performance (43)

Tuesday
Feb032009

10 More Rules for Even Faster Websites

Update: How-To Minimize Load Time for Fast User Experiences shows how to analyze the bottlenecks preventing websites and blogs from loading quickly and how to resolve them. 80-90% of the end-user response time is spent on the frontend, so it makes sense to concentrate efforts there before heroically rewriting the backend. Take a shower before buying a Porsche, if you know what I mean. Steve Souders, author of High Performance Web Sites and YSlow, has ten more best practices to speed up your website:

  • Split the initial payload
  • Load scripts without blocking
  • Don’t scatter scripts
  • Split dominant content domains
  • Make static content cookie-free
  • Reduce cookie weight
  • Minify CSS
  • Optimize images
  • Use iframes sparingly
  • To www or not to www

    Sadly, according to String Theory, there are only 26.7 rules left, so get them while they're still in our dimension. Here are slides on the first few rules. Love the speeding dog slide. That's exactly what my dog looks like traveling down the road, head hanging out the window, joyfully battling the wind. Also see 20 New Rules for Faster Web Pages.
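    The "load scripts without blocking" rule usually means injecting a script element dynamically instead of writing a plain parser-blocking `<script>` tag. A minimal sketch of the pattern; the URL and the tiny fake DOM are assumptions so it can run outside a browser:

```javascript
// Dynamic script injection: the browser fetches the file in the
// background instead of halting HTML parsing at a <script> tag.
function loadScriptAsync(src, doc) {
  const s = doc.createElement('script');
  s.src = src;
  s.async = true; // explicit, though dynamically inserted scripts don't block anyway
  doc.head.appendChild(s);
  return s;
}

// Tiny stand-in for the DOM so the sketch is runnable anywhere.
const fakeDoc = {
  head: { children: [], appendChild(el) { this.children.push(el); } },
  createElement: (tag) => ({ tagName: tag.toUpperCase() }),
};

const el = loadScriptAsync('/js/app.js', fakeDoc);
console.log(el.tagName, el.src, el.async); // SCRIPT /js/app.js true
```

    In a real page you would pass `document` instead of `fakeDoc`; the key point is that the script download no longer sits between the parser and the rest of the page.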

    Click to read more ...

    Tuesday
    Dec092008

    Rules of Thumb in Data Engineering

    This is an interesting and still relevant research paper by Jim Gray and Prashant Shenoy of Microsoft Research that examines rules of thumb for the design of data storage systems. It looks at storage, processing, and networking costs, ratios, and trends, with a particular focus on performance and price/performance. Jim Gray has an updated presentation on this topic: Long Term Storage Trends and You. Robin Harris has a great post reflecting on the Rules of Thumb whitepaper on his StorageMojo blog: Architecting the Internet Data Center - Parts I-IV.
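    One of the best-known rules of thumb in this line of work is Gray's break-even interval for caching a disk page in RAM: keep a page in memory if it is re-referenced more often than (pages per MB of RAM / accesses per second per disk) × (price per disk / price per MB of RAM). A sketch with illustrative late-1990s prices (the specific numbers are assumptions for the example):

```javascript
// Gray's break-even interval: cache a page in RAM if it is re-read
// more often than this many seconds, keep it on disk otherwise.
function breakEvenSeconds(pagesPerMB, iopsPerDisk, diskPrice, ramPricePerMB) {
  return (pagesPerMB / iopsPerDisk) * (diskPrice / ramPricePerMB);
}

// Illustrative numbers: 8 KB pages (128 per MB), 64 IOPS per disk,
// a $2000 disk drive, $15 per MB of DRAM.
const t = breakEvenSeconds(128, 64, 2000, 15);
console.log(Math.round(t)); // ~267 seconds, i.e. the famous "five-minute rule"
```

    Plugging in modern prices moves the interval, which is exactly why the paper frames these as ratios to re-derive rather than constants to memorize.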

    Click to read more ...

    Monday
    Dec012008

    Breakthrough Web-Tier Solutions with Record-Breaking Performance

    With the explosive growth of the Internet, increasing complexity of user requirements, and wide choice of hardware, operating systems, and middleware, IT executives are facing new challenges in their application infrastructures. Rapid expansion of the application tier has resulted in significant cost and complexity, and many organizations are simply running out of datacenter space, power, and cooling.

    Click to read more ...

    Monday
    Oct132008

    SQL Server 2008 Database Performance and Scalability

    Microsoft SQL Server 2008 incorporates the tools and technologies that are necessary to implement relational databases, reporting systems, and data warehouses of enterprise scale, and provides optimal performance and responsiveness.
    With SQL Server 2008, you can take advantage of the latest hardware technologies while scaling up your servers to support server consolidation. SQL Server 2008 also enables you to scale out your largest data solutions.

    This white paper describes the performance and scalability capabilities of Microsoft® SQL Server® 2008 and explains how you can use these capabilities to:
    * Optimize performance for any size of database with the tools and features that are available for the database engine, analysis services, reporting services, and integration services.
    * Scale up your servers to take full advantage of new hardware capabilities.
    * Scale out your database environment to optimize responsiveness and to move your data closer to your users.


    Read the entire article about SQL Server 2008 Database Performance and Scalability at MyTestBox.com - web software reviews, news, tips & tricks.

    Click to read more ...

    Thursday
    Sep252008

    Is your cloud as scalable as you think it is?

    An unstated assumption is that clouds are scalable. But are they? Stick thousands upon thousands of machines together and there are a lot of potential bottlenecks just waiting to choke off your scalability supply. And if the cloud is scalable, what are the chances that your application is really linearly scalable? At 10 machines all may be well. Even at 50 machines the seas look calm. But at 100, 200, or 500 machines all hell might break loose. How do you know? You know through real-life testing. These kinds of tests are brutally hard and complicated. Who wants to do all the incredibly precise and difficult work of producing cloud scalability tests? GridDynamics has stepped up to the challenge and has just released their Cloud Performance Reports. The report is quite detailed, so I'll just cover what I found most interesting. In this report GridDynamics tests three configurations:

  • GridGain running a Monte-Carlo simulation on EC2. This is a CPU-only test; a data grid is not accessed. This scenario tests the scalability of EC2 and GridGain.
      * GridGain provided near linear scalability end-to-end on a 512 node EC2 hosted grid.
      * EC2 is ready for production usage on large-scale stateless computations, exhibiting good price for performance and a strong linear scaling curve.
  • GigaSpaces running a risk management simulation on EC2. This is a data-driven test. GigaSpaces is used in a configuration where the compute grid and the data grid are separated, even though GigaSpaces supports an in-memory data grid.
      * GigaSpaces provided near linear scalability from 16 to 256 nodes. There was a 28% degradation from 256 to 512 nodes because only four data grid servers were used; more were needed. The compute grid and the data grid must each be sized independently to scale properly.
      * EC2 is ready for production usage for classes of large-scale data-driven applications.
  • Windows HPC Server and Velocity running an analytics application in Microsoft's grid testbed. This test was more complicated than the others. It tested the performance implications of data "in the cloud" vs. "outside the cloud" for data-intensive analytics applications.
      * Keeping data close to the business logic matters. Performance improved up to 31 times over "outside the cloud."
      * Velocity failed on 50 node clusters with 200 concurrent clients.
      * Local caches provided significant performance gains over distributed caches. The local cache took load off the distributed cache.

    They are currently running more tests with different configurations. Hopefully we'll see those results later. All-in-all a generally optimistic report. EC2 scales. Most of the tested grid frameworks scaled. What's also clear is that it may take a while before deploying cloud-based grids is an easy process. It still takes a lot of work to install, configure, start, stop, monitor, and debug bottlenecks in cloud-based grids. Thanks to GridDynamics for putting in all this work, and I look forward to their next set of reports.
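    Why does the stateless Monte-Carlo case scale so cleanly? Because the work splits into chunks that share no data, so each grid node computes independently and results are just summed. A toy sketch of that workload shape (the chunk/node counts and the simple random generator are assumptions for illustration):

```javascript
// CPU-only Monte-Carlo pi estimate, split into independent chunks the
// way a compute grid farms work out to nodes. Since chunks share no
// state, adding nodes scales the sample rate close to linearly.
function simulate(samples, seed) {
  let inside = 0, s = seed >>> 0;
  // small linear congruential generator so each "node" is deterministic
  const rand = () => ((s = (s * 1664525 + 1013904223) >>> 0) / 2 ** 32);
  for (let i = 0; i < samples; i++) {
    const x = rand(), y = rand();
    if (x * x + y * y <= 1) inside++;
  }
  return inside;
}

const nodes = 8, perNode = 250000;
let inside = 0;
for (let n = 0; n < nodes; n++) inside += simulate(perNode, n + 1); // one chunk per "node"
const pi = (4 * inside) / (nodes * perNode);
console.log(pi.toFixed(2));
```

    The data-driven GigaSpaces and Velocity cases break this independence: nodes contend for the data grid, which is why the data tier has to be sized separately.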

    Click to read more ...

    Tuesday
    Apr292008

    High performance file server

    We have a bunch of applications running on Debian servers that process a huge amount of data stored on a shared NFS drive. Three applications work as a pipeline: the first processes the data and stores its output in a folder on the NFS drive, the second processes the output of the first, and so on. The data load into the pipeline is about 1 GByte per minute. I think the NFS drive is the bottleneck here. Would buying a specialized file server improve the performance of reading and writing data to disk?
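    Before buying hardware it's worth adding up what the shared drive actually sees: each pipeline stage both reads its input from and writes its output to NFS, so the aggregate traffic is a multiple of the 1 GB/min ingest rate. A rough back-of-the-envelope sketch (the assumption that every stage moves roughly the full data volume is mine, not the poster's):

```javascript
// Rough aggregate NFS traffic for an N-stage pipeline where every
// stage reads its input from and writes its output to the same share.
function nfsThroughputMBps(ingestGBPerMin, stages) {
  const ingestMBps = (ingestGBPerMin * 1024) / 60; // ~17 MB/s for 1 GB/min
  // each stage: one read + one write of roughly the ingest volume
  return ingestMBps * stages * 2;
}

console.log(nfsThroughputMBps(1, 3).toFixed(0)); // ~102 MB/s hitting the NFS server
```

    If that figure is near the limit of the NFS server's disks or network link, a faster file server helps; if the stages could pass data locally instead of through the share, most of that traffic disappears entirely.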

    Click to read more ...

    Monday
    Apr072008

    Lazy web sites run faster

    It is fairly obvious that web site performance can be increased by making the code run faster and optimising the response time. But that only scales up to a point. To really take our web sites to the next level, we need to look at the performance problem from a different angle.
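    "Lazy" in code form means deferring expensive work until the result is actually needed, then remembering it. A minimal sketch of the idea (the helper name and the example workload are mine):

```javascript
// Defer an expensive computation until its result is first requested,
// then memoize it so repeat requests cost nothing.
function lazy(compute) {
  let done = false, value;
  return () => {
    if (!done) { value = compute(); done = true; }
    return value;
  };
}

let calls = 0;
const report = lazy(() => { calls++; return 'expensive report'; });

// Nothing computed yet -- the page can respond immediately.
console.log(calls);    // 0
console.log(report()); // computed now, on first use
console.log(report()); // served from the memo
console.log(calls);    // 1
```

    The same principle scales up from a function to a whole site: don't generate a page, a feed, or a report until someone asks for it, and keep the answer around afterwards.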

    Click to read more ...

    Saturday
    Mar292008

    20 New Rules for Faster Web Pages

    Update: Nice explanation in The importance of bandwidth versus latency of how long latencies cause cascading delays in resource loading. Doloto tries to optimize how resources are loaded. Twenty new rules have been added to the original 14 rules for sizzling web performance. Part of scalability is worrying about performance too. The front-end is where 80-90% of end-user response time is spent and following these best practices improved the performance of Yahoo! properties by 25-50%. The rules are divided into server, content, cookie, JavaScript, CSS, images, and mobile categories. The new rules are:

  • Flush the buffer early [server]
  • Use GET for AJAX requests [server]
  • Post-load components [content]
  • Preload components [content]
  • Reduce the number of DOM elements [content]
  • Split components across domains [content]
  • Minimize the number of iframes [content]
  • No 404s [content]
  • Reduce cookie size [cookie]
  • Use cookie-free domains for components [cookie]
  • Minimize DOM access [javascript]
  • Develop smart event handlers [javascript]
  • Choose <link> over @import [css]
  • Avoid filters [css]
  • Optimize images [images]
  • Optimize CSS sprites [images]
  • Don't scale images in HTML [images]
  • Make favicon.ico small and cacheable [images]
  • Keep components under 25K [mobile]
  • Pack components into a multipart document [mobile]

    Thanks to Simon Willison for the link.
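    The two cookie rules are easy to quantify: the Cookie request header rides along on every request to the domain, so its byte weight is a per-request tax. A small helper to measure it (the example cookie names are placeholders):

```javascript
// Measure the byte weight the Cookie header adds to EVERY request
// sent to a domain -- the cost the "reduce cookie size" and
// "cookie-free domains" rules are trying to eliminate.
function cookieHeaderBytes(cookies) {
  const header = Object.entries(cookies)
    .map(([k, v]) => `${k}=${v}`)
    .join('; ');
  return Buffer.byteLength(`Cookie: ${header}\r\n`, 'utf8');
}

const bytes = cookieHeaderBytes({ sid: 'abc123', theme: 'dark' });
console.log(bytes); // 32 bytes added to every image, CSS, and JS request
```

    Multiply that by the dozens of component requests on a typical page and serving static assets from a cookie-free domain starts to pay for itself.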

    Click to read more ...

    Wednesday
    Mar192008

    Serving JavaScript Fast

    Cal Henderson writes at thinkvitamin.com: "With our so-called 'Web 2.0' applications and their rich content and interaction, we expect our applications to increasingly make use of CSS and JavaScript. To make sure these applications are nice and snappy to use, we need to optimize the size and nature of content required to render the page, making sure we're delivering the optimum experience. In practice, this means a combination of making our content as small and fast to download as possible, while avoiding unnecessarily refetching unmodified resources." A lot of good comments too.

    Click to read more ...

    Monday
    Feb252008

    Make Your Site Run 10 Times Faster

    This is what Mike Peters says he can do: make your site run 10 times faster. His test bed is "half a dozen servers parsing 200,000 pages per hour over 40 IP addresses, 24 hours a day." Before optimization CPU spiked to 90% with 50 concurrent connections. After optimization each machine "was effectively handling 500 concurrent connections per second with CPU at 8% and no degradation in performance." Mike identifies six major bottlenecks:

  • Database write access (read is cheaper)
  • Database read access
  • PHP, ASP, JSP and any other server side scripting
  • Client side JavaScript
  • Multiple/Fat Images, scripts or css files from different domains on your page
  • Slow keep-alive client connections, clogging your available sockets

    Mike's solutions:
  • Switch all database writes to offline processing
  • Minimize database reads; no more than two queries per page.
  • Denormalize your database and optimize MySQL tables
  • Implement MemCached and change your database-access layer to fetch information from the in-memory database first.
  • Store all sessions in memory.
  • If your system has high reads, keep MySQL tables as MyISAM. If your system has high writes, switch MySQL tables to InnoDB.
  • Limit server side processing to the minimum.
  • Precompile all php scripts using eAccelerator
  • If you're using WordPress, implement WP-Cache
  • Reduce size of all images by using an image optimizer
  • Merge multiple .css/.js files into one and minify your .js scripts
  • Avoid hardlinking to images or scripts residing on other domains.
  • Put .css references at the top of your page, .js scripts at the bottom.
  • Install Firefox's Firebug and YSlow extensions. YSlow analyzes your web pages on the fly, giving you a performance grade and recommending the changes you need to make.
  • Optimize httpd.conf to kill connections after 5 seconds of inactivity, turn gzip compression on.
  • Configure Apache to add Expire and ETag headers, allowing client web browsers to cache images, .css and .js files
  • Consider dumping Apache and replacing it with Lighttpd or Nginx.

    Find more details in Mike's article.
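    The MemCached advice above is the classic cache-aside read path: check the in-memory cache first and only fall through to the database on a miss. A minimal sketch of that database-access layer, with a plain Map standing in for a real memcached client and a stub standing in for the SQL query (both are assumptions for illustration):

```javascript
// Cache-aside reads: hit the in-memory cache first, touch the
// database only on a miss, then populate the cache for next time.
const cache = new Map(); // stand-in for a real memcached client
let dbQueries = 0;

function dbFetchUser(id) { // stand-in for a real SQL query
  dbQueries++;
  return { id, name: `user-${id}` };
}

function getUser(id) {
  const key = `user:${id}`;
  if (cache.has(key)) return cache.get(key); // cache hit: no DB work at all
  const row = dbFetchUser(id);               // cache miss: query the DB...
  cache.set(key, row);                       // ...and remember the answer
  return row;
}

getUser(42); getUser(42); getUser(42);
console.log(dbQueries); // 1 -- two of the three reads never touched the DB
```

    A production version also needs invalidation (delete or update the key when the row changes) and an expiry time on each entry, which memcached supports natively.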

    Click to read more ...