Stuff The Internet Says On Scalability For October 11th, 2013
Friday, October 11, 2013 at 8:50AM
HighScalability Team in hot links
Hey, it's HighScalability time:
- Quotable Quotes:
- @BrandonBloom: Mutable data structures are only faster b/c they have fewer features: ie no persistence. You must manually recover that feature w/ copying.
- @hayesdrumwright: Scale breaks hardware. Speed breaks software. Scale and speed breaks everything. Adrian Cockcroft - Netflix #TechSummit
- Malcolm Gladwell: Saul thinks of power in terms of physical might. He doesn’t appreciate that power can come in other forms as well — in breaking rules, in substituting speed and surprise for strength.
- Now here's an irony. The East India Company established a major trading post at Bantam in Java. They called their trading posts "factories." Get it? Java. Factories. That's stranger than fiction.
- Cost of Healthcare.gov: $634 Million — So Far. With a modern methodology you hardly ever launch a big site in one big bang; that never works. You start with a small nucleus of functionality, get a few alpha users, get feedback, make changes, and add features slowly. Then you let a few more users into the system and repeat. I don't know how they handled the release, or were forced to handle it, but given the deadline it seems probable they didn't have a chance to do it right.
- Flashcache at Facebook: From 2010 to 2013 and beyond. Fascinating look at how Facebook responded to a variety of changes. Compression increased IOPS demand, cold data moved to different tiers, and disk IO limits were reached, so the software had to be optimized to use system resources more efficiently. They looked at the read-write distribution and how to structure the cache to better match load patterns and avoid hot spots. A very useful example analysis for anyone looking to tune their systems (a rough sketch of that kind of trace analysis appears after the list).
- A Middle Approach to Schema Design in OLTP Applications: At eBay we tried a denormalized approach to improve our Core Cart Service’s DB access performance, specifically for writes. We switched to a BLOB-based cart database representation, combining 14 table updates into a single BLOB column. As a result, the response time for an “add to cart” operation improved on average by 30%, from 1,500 milliseconds to 1,030 milliseconds. And in use cases where this same call is made against a cart that already contains several items (>20), performance improved by 40%, from 3,900 milliseconds to 2,300 milliseconds. (A toy sketch of the single-BLOB write path appears after the list.)
- Online Algorithms in High-frequency Trading. Where we learn that the economic fate of the free world rests on relatively stupid algorithms, encoded in hardware, executing really, really fast. What could go wrong? (A classic example of an online algorithm is sketched after the list.)
- Parallelism in the Cloud by Eric Brewer. Do not fear, it's not a CAP talk. It's similar to the talk Jeff Dean gives on how Google distributes functionality and state across large clusters. Giant-scale services have 3 tiers: front-end load balancers, stateless servers, and distributed storage. Latency matters a lot, which means you must rein in your tail latencies. Latency is reduced via parallelism and caching (one way parallelism cuts tail latency is sketched after the list). Virtualization is a lie revealed by tail latency. Batch computing that fills in the gaps is free. Provisioning and allocation are contrasted. Lots more.
- Optimizing Linux Memory Management for Low-latency / High-throughput Databases. A deep, intense debugging session that traced the performance problems to Linux's NUMA zone reclaim system. Details are given on how to prevent this problem and recognize when it is happening to you. Wonderful. (A snippet for checking the relevant kernel knob appears after the list.)
- The technology behind the website that 30% of Swedes visit every day. 3 million unique visitors each day and 15 TB of data added daily. Sexy metrics display, Debian, MySQL, Tomcat, Intel-based HP servers, VMware, Varnish, CDN, Nagios, Google Analytics, Pingdom, Xymon.
- How to get auto scaling of Google Compute Engine "just right". Google Compute Engine isn't a bear to autoscale at all, and here's how to do it. Excellent details on what factors to consider and how to make the decisions to scale up or down using an orchestration application. (A generic version of that decision loop is sketched after the list.)
- The Long Tail of Hardware. Hardware is cool again. It's easier and cheaper than ever to build systems from off-the-shelf components. People are buying all sorts of crazy Fitbit-like devices. It's not just a phone or tablet; it's a device in every pot. Yet smartphones are the perfect integration engine for personal clouds. And crowdfunded hardware is a thing. Who would have thunk it?
- About face. In this post the counterintuitive finding was that a relational database was many times faster than Neo4j at graph operations. Que? Then, after a few emails and tuning suggestions, we learn in Benchmarking Graph Databases – Updates that sanity is restored and Neo4j acquitted itself with distinction.
- Panel at Library of Congress Storage Architectures meeting: To sum up, commercial cloud services are probably not suitable for ingest, definitely not suitable for preservation, possibly suitable for dissemination to a restricted audience, if suitable charging arrangements are in place, and highly suitable for dissemination of medium-sized public data.
- Here's the key to world domination. Godin on Understanding marginal cost: Someone entering the market, someone with nothing to lose, is happy to wipe out as many fixed costs as he can and then price as close to zero as he can get away with. It's not nice nor does it feel fair, but it's true and it works. The only defense against this race to marginal cost is to have a product that is differentiated, that has no substitute, that is worth asking for by name.
- Big list of 199 Static Site Generators. Sounds like a hit movie idea to me.
- MySQL Performance: The Road to 500K QPS with MySQL 5.7. A series of changes that doubled QPS. The very first improvement introduced in MySQL 5.7 was auto-discovery of read-only (RO) transactions, plus work on mutex contention.
- Mobile First: Lessons Learned from Groupon: Addict users to your app; use context to give users individualized experiences; mobile first; test many variations simultaneously and don't be afraid to make bold changes in design; your development organization should be decentralized, with mobile developers embedded in every team and a core team responsible for overall continuity.
- ZeroMQ: Helping us Block Malicious Domains in Real Time. How systems deal with fraud and abuse is always an interesting problem. The data rates are so high and the attacks so varied that it's certain the AI that comes to rule humanity will arise from this seed. (A bare-bones pub/sub sketch of the pattern appears after the list.)
- AWS Cost Saving Tip 16: Periodically remove your unwanted AWS resources and save costs. Title says it all.
- Haeinsa - linearly scalable transaction library for HBase (slides). It uses a two-phase commit protocol and optimistic concurrency control. Haeinsa now processes more than 300M transactions per day on a single cluster without any consistency problems. (The optimistic-concurrency idea is sketched after the list.)
- The anatomy of successful computational biology software (Nature Biotechnology): BLAST was the first program to assign rigorous statistics to useful scores of local sequence alignments. Before then people had derived many different scoring systems, and it wasn't clear why any should have a particular advantage. I had made a conjecture that every scoring system that people proposed using was implicitly a log-odds scoring system with particular 'target frequencies', and that the best scoring system would be one where the target frequencies were those you observed in accurate alignments of real proteins. (The log-odds formula is sketched after the list.)
- Towards OLAP in Graph Databases (MSc. Thesis): In this project, we study the problem of online analytical processing in graph databases that use the property graph data model, which is a graph with properties attached to both vertices and edges. We use vertex degree analysis as a simple example problem, create a formal definition of vertex degree in a property graph, and develop a theoretical vertex degree cache with constant space and read time complexity, enabled by a cache compaction operation and a property change frequency heuristic. (The incremental-degree idea is sketched after the list.)
- Highly Available Transactions: Virtues and Limitations: To minimize network latency and remain online during server failures and network partitions, many modern distributed data storage systems eschew transactional functionality, which provides strong semantic guarantees for groups of multiple operations over multiple data items. In this work, we consider the problem of providing Highly Available Transactions (HATs): transactional guarantees that do not suffer unavailability during system partitions or incur high network latency. We introduce a taxonomy of highly available systems and analyze existing ACID isolation and distributed data consistency guarantees to identify which can and cannot be achieved in HAT systems. This unifies the literature on weak transactional isolation, replica consistency, and highly available systems. We analytically and experimentally quantify the availability and performance benefits of HATs--often two to three orders of magnitude over wide-area networks--and discuss their necessary semantic compromises.
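A few rough sketches to go with some of the items above. First, on the Flashcache item: a minimal, hypothetical version of the kind of analysis described, replaying an access trace through a fixed-size LRU cache and splitting hit rates by reads vs. writes. The trace format, cache size, and eviction policy here are illustration-only assumptions, not Facebook's actual setup.

```python
from collections import OrderedDict

def replay_trace(trace, capacity):
    """Replay (op, block) events through an LRU cache; report hit rates by op type.

    trace    -- iterable of ("read" | "write", block_id) tuples (hypothetical format)
    capacity -- number of blocks the cache can hold
    """
    cache = OrderedDict()          # block_id -> None, ordered by recency
    hits = {"read": 0, "write": 0}
    total = {"read": 0, "write": 0}

    for op, block in trace:
        total[op] += 1
        if block in cache:
            hits[op] += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            cache[block] = None
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used block

    return {op: hits[op] / total[op] if total[op] else 0.0 for op in total}

# Example: a synthetic trace with a hot read set and scattered writes.
trace = [("read", b % 10) for b in range(100)] + [("write", b) for b in range(50)]
print(replay_trace(trace, capacity=20))
```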
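On the eBay cart item: a toy sketch of the trade-off, writing the whole cart as one serialized BLOB row so an "add to cart" is a single-row write instead of updates across many normalized tables. The table name, JSON encoding, and SQLite backend are assumptions for illustration; eBay's actual representation is described in the linked post.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cart_blob (cart_id TEXT PRIMARY KEY, cart_doc BLOB)")

def save_cart(cart_id, cart):
    # Denormalized path: the entire cart (items, quantities, ...) is one
    # serialized document, written back as a single row.
    doc = json.dumps(cart).encode("utf-8")
    conn.execute("INSERT OR REPLACE INTO cart_blob (cart_id, cart_doc) VALUES (?, ?)",
                 (cart_id, doc))

def add_to_cart(cart_id, item_id, qty):
    row = conn.execute("SELECT cart_doc FROM cart_blob WHERE cart_id = ?",
                       (cart_id,)).fetchone()
    cart = json.loads(row[0]) if row else {"items": {}}
    cart["items"][item_id] = cart["items"].get(item_id, 0) + qty
    save_cart(cart_id, cart)

add_to_cart("u42", "sku-123", 2)
print(json.loads(conn.execute(
    "SELECT cart_doc FROM cart_blob WHERE cart_id = 'u42'").fetchone()[0]))
```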
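On the online-algorithms item: the essence of an online algorithm is updating a statistic per tick in O(1) without storing the stream. A standard textbook example (my choice, not necessarily the article's) is Welford's running mean/variance; a real trading system would do something like this in hardware or heavily optimized code.

```python
class RunningStats:
    """Welford's online algorithm: O(1) update per observation, no history kept."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared deviations from the current mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for price in [100.0, 100.5, 99.8, 101.2, 100.9]:   # e.g. a stream of trade prices
    stats.update(price)
print(stats.mean, stats.variance)
```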
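On the Eric Brewer talk: one common way parallelism cuts tail latency is hedged requests, i.e., send the same read to more than one replica and take whichever answers first. The sketch below is an illustration of that idea with made-up replica calls, not the talk's implementation.

```python
import concurrent.futures
import random
import time

def query_replica(replica_id, key):
    # Stand-in for a network call: most replies are fast, a few land in the tail.
    time.sleep(random.choice([0.01, 0.01, 0.01, 0.5]))
    return f"value-of-{key} from replica {replica_id}"

def hedged_get(key, replicas=(0, 1)):
    """Send the same read to several replicas and return the first response.

    The stragglers still run to completion in the background; the point is that
    the caller's latency tracks the fastest replica, not the slowest one.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(replicas))
    futures = [pool.submit(query_replica, r, key) for r in replicas]
    done, _ = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    pool.shutdown(wait=False)   # don't block on the slow replicas
    return next(iter(done)).result()

print(hedged_get("user:42"))
```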
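On the Linux memory-management item: the culprit in the post is the kernel's NUMA zone reclaim behavior, controlled by the vm.zone_reclaim_mode sysctl. A tiny snippet that only reports the current setting is below; whether to disable it for your workload is the article's argument to read, not this sketch's claim.

```python
from pathlib import Path

ZONE_RECLAIM = Path("/proc/sys/vm/zone_reclaim_mode")

def zone_reclaim_mode():
    """Return the current vm.zone_reclaim_mode (0 means zone reclaim is off)."""
    try:
        return int(ZONE_RECLAIM.read_text().strip())
    except FileNotFoundError:
        return None   # not a Linux box, or the knob isn't exposed

mode = zone_reclaim_mode()
if mode is None:
    print("vm.zone_reclaim_mode not available on this system")
elif mode == 0:
    print("zone reclaim is disabled")
else:
    print(f"zone reclaim is enabled (mode={mode}); see the article before changing it")
```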
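On the Compute Engine autoscaling item: the heart of such an orchestration application is a control loop that compares a load metric against scale-up/scale-down thresholds and adjusts the instance count within bounds. The sketch below is a generic, made-up version of that decision (metric, thresholds, and step size are all placeholders), not the article's code.

```python
def desired_instances(current, avg_cpu, scale_up_at=0.70, scale_down_at=0.30,
                      min_instances=2, max_instances=20, step=1):
    """Decide the next instance count from the current average CPU utilization.

    A real controller would also smooth the metric over a window and enforce a
    cooldown period so it doesn't flap between scaling up and down.
    """
    if avg_cpu > scale_up_at and current < max_instances:
        return min(current + step, max_instances)
    if avg_cpu < scale_down_at and current > min_instances:
        return max(current - step, min_instances)
    return current

# Example decisions:
print(desired_instances(current=4, avg_cpu=0.85))  # -> 5 (scale up)
print(desired_instances(current=4, avg_cpu=0.10))  # -> 3 (scale down)
print(desired_instances(current=4, avg_cpu=0.50))  # -> 4 (hold)
```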
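On the ZeroMQ item: one plausible shape for "blocking malicious domains in real time" is a detector publishing newly flagged domains on a PUB socket while filters subscribe and update their local blocklists. The sketch below (pyzmq, with a made-up port and message format) shows only that pub/sub pattern; the actual pipeline is described in the linked post.

```python
# pip install pyzmq
import threading
import time
import zmq

PORT = 5556   # arbitrary port for this sketch

def filter_node(stop_after=2):
    """Filter side: subscribe to block messages and keep a local blocklist."""
    sub = zmq.Context.instance().socket(zmq.SUB)
    sub.connect(f"tcp://localhost:{PORT}")
    sub.setsockopt_string(zmq.SUBSCRIBE, "block ")   # only care about block messages
    blocklist = set()
    for _ in range(stop_after):
        _, domain = sub.recv_string().split(" ", 1)
        blocklist.add(domain)
        print(f"now blocking {domain} ({len(blocklist)} total)")

def detector():
    """Detector side: publish newly flagged domains to all subscribers."""
    pub = zmq.Context.instance().socket(zmq.PUB)
    pub.bind(f"tcp://*:{PORT}")
    time.sleep(0.5)   # give subscribers time to connect (PUB/SUB slow-joiner)
    for domain in ["evil-example.com", "phish-example.net"]:
        pub.send_string(f"block {domain}")

t = threading.Thread(target=filter_node)
t.start()
detector()
t.join()
```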
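On the Haeinsa item: the optimistic-concurrency half of the approach fits in a few lines: a transaction records the version of every row it reads, and at commit time the writes are applied only if none of those versions have changed. The in-memory sketch below illustrates that idea only; Haeinsa itself layers it, plus two-phase commit, over HBase rows.

```python
class VersionedStore:
    """A toy key-value store where every key carries a version number."""

    def __init__(self):
        self.data = {}      # key -> (value, version)

    def read(self, key):
        return self.data.get(key, (None, 0))

    def commit(self, read_set, writes):
        """Apply writes only if every key read still has the version we saw."""
        for key, seen_version in read_set.items():
            if self.read(key)[1] != seen_version:
                return False                      # conflict: someone committed first
        for key, value in writes.items():
            _, version = self.read(key)
            self.data[key] = (value, version + 1)
        return True

store = VersionedStore()
store.data["balance"] = (100, 1)

# Transaction: read, compute, then try to commit against the versions it saw.
value, version = store.read("balance")
read_set, writes = {"balance": version}, {"balance": value - 30}

store.data["balance"] = (90, 2)                  # a concurrent writer sneaks in
print(store.commit(read_set, writes))            # -> False, transaction must retry
```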
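On the computational-biology item: the log-odds idea is that the score for aligning two letters is the log of the ratio between how often that pair shows up in trusted alignments (the target frequency) and how often it would pair up by chance. The sketch below just states that formula, with made-up numbers standing in for real amino-acid statistics.

```python
import math

def log_odds_score(q_ab, p_a, p_b, base=2):
    """Score for aligning letters a and b.

    q_ab     -- target frequency: how often the pair a,b appears in trusted alignments
    p_a, p_b -- background frequencies of a and b occurring independently
    """
    return math.log(q_ab / (p_a * p_b), base)

# Toy numbers, not real data: this pair occurs 4x more often in real alignments
# than chance would predict, so it gets a positive score.
print(log_odds_score(q_ab=0.02, p_a=0.1, p_b=0.05))   # log2(0.02 / 0.005) = 2.0
```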
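On the graph-OLAP thesis: the "constant read time" part of a vertex degree cache amounts to maintaining degree counters incrementally as edges change, so a degree query never scans adjacency data. The sketch below shows only that incremental idea; the thesis's cache compaction operation and property-change-frequency heuristic are not reproduced here.

```python
from collections import defaultdict

class DegreeCache:
    """Keep per-vertex in/out degree counters updated on every edge change,
    so degree reads are O(1) instead of scanning the graph."""

    def __init__(self):
        self.out_degree = defaultdict(int)
        self.in_degree = defaultdict(int)

    def edge_added(self, src, dst):
        self.out_degree[src] += 1
        self.in_degree[dst] += 1

    def edge_removed(self, src, dst):
        self.out_degree[src] -= 1
        self.in_degree[dst] -= 1

    def degree(self, vertex):
        return self.in_degree[vertex] + self.out_degree[vertex]

cache = DegreeCache()
cache.edge_added("alice", "bob")
cache.edge_added("alice", "carol")
print(cache.degree("alice"), cache.degree("bob"))   # -> 2 1
```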
Article originally appeared on (http://highscalability.com/).
See website for complete article licensing information.