This is a guest post by Greg Luck, Founder and CTO of Ehcache, Terracotta Inc. Note: this article contains a bit too much of a product pitch, but the points are still generally valid and useful.
The legendary Moore’s Law, which states that the number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years, has held true since 1965. It follows that integrated circuits will continue to get smaller, with chip fabrication currently at a minuscule 22nm process (1). Users of big iron hardware, or servers that are dense in terms of CPU power and memory capacity, benefit from this trend as their hardware becomes cheaper and more powerful over time. At some point soon, however, limits imposed by quantum mechanics will preclude further increases in density.
At the same time, low-cost commodity hardware encourages enterprise architects to scale their applications horizontally, spreading processing across clusters of inexpensive servers. These clusters present a new set of challenges for architects, such as tricky distributed computing problems, added complexity and increased management costs. Users of big iron and users of commodity servers are on a collision course that is only now becoming apparent.
Until recently, Moore’s Law translated into faster CPUs, but physical constraints (heat dissipation, for example) and power requirements have forced manufacturers to place multiple cores on a single CPU die instead. Increases in memory capacity, however, are not constrained in the same way. For instance, today you can purchase standard von Neumann servers from Oracle, Dell and HP with up to 2TB of physical RAM and 64 cores. Servers with 32 cores and 512GB of RAM are certainly more typical, but it’s clear that today’s commodity servers are “big iron” in their own right.
This trend raises the question: can you now avoid the complexity of scaling out across large clusters of machines and instead do all of your processing on a single, more powerful node? The answer, in theory, is yes, until you take into account the problem with large amounts of memory in Java.
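To make that problem concrete, here is a minimal sketch of my own (it is not code from the article; the class name HeapPauseDemo, the 64-byte payload and the entry counts are arbitrary illustrations). It fills the heap with millions of small, long-lived objects, as a large on-heap cache would, then times a forced full collection, showing that garbage collection work grows with the amount of live data on the heap.

// Illustrative only: simulates a large on-heap cache and measures a forced full GC.
// Run with a big heap, e.g.: java -Xmx8g HeapPauseDemo 50000000
import java.util.HashMap;
import java.util.Map;

public class HeapPauseDemo {
    public static void main(String[] args) {
        int entries = args.length > 0 ? Integer.parseInt(args[0]) : 10_000_000;

        // Millions of small, long-lived objects that every full GC must trace.
        Map<Integer, byte[]> cache = new HashMap<>(entries);
        for (int i = 0; i < entries; i++) {
            cache.put(i, new byte[64]);
        }

        long start = System.nanoTime();
        System.gc(); // request a full collection (the JVM may choose to ignore this)
        long pauseMillis = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Live entries: " + cache.size()
                + ", forced full GC took roughly " + pauseMillis + " ms");
    }
}

With tens of millions of entries on a multi-gigabyte heap, the measured time typically runs into seconds rather than milliseconds.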