
Stuff The Internet Says On Scalability For September 22nd, 2017

Hey, it's HighScalability time: 


Ever feel like howling at the universe? (Greg Rakozy)


If you like this sort of Stuff then please support me on Patreon.


  • 10 billion: API calls made every second in Google datacenters; $767,758,000,000: collected by Apple on iPhones sold to the end of June; 20: watts of power consumed by human brain, autonomous vehicles peak at 3000 watts; 59%: drop in leads using AMP; 27%: success rate of AIs guessing passwords; 2.8 kilometers: distance devices running on almost zero power can xmit using backscatter; 96: age at which Lotfi Zadeh, inventor of Fuzzy Logic, passed away; 35%: store time series data in a RDBMS; $1.1 billion: Google's spend on self-driving tech;  $5.1 billion: Slack valuation; 15%: bugs reduced by strong typing; ~1 ft: new smartphone GPS accuracy; 

  • Quotable Quotes:
    • Napoleon: [Sir Hudson Lowe] was a man wanting in education and judgment. He was a stupid man, he knew nothing at all of the world, and like all men who knew nothing of the world, he was suspicious and jealous.
    • Rich Werner: Data center operations, to me, is 362 days of boredom. And then you get these hurricanes coming through, and it’s three days of pulling your hair out.
    • @pacoid: @kenneth0stanley #TheAIConf "We're not interested in complexity for its own sake" -- ref. operational closure in second-order cybernetics
    • Animats: Much as I like Rust, I have to agree. When you have to get it done, use Go. When you want to explore advanced experimental programming constructs, use Rust. The Go guys knew when to stop. Arguably they stopped too early, before generics or parameterized types, and "interface[]" is used too much. But that doesn't seem to be a big overhead item. Rust has the complexity of C++ plus additional complexity from the functional community. Plus lots of fancy template cruft.
    • @mims: People who say data is the new oil are wrong. Non-volatile flash memory is the new oil.
    • @cmeik: As former member of a NoSQL startup, "Safety, reliability [as well as pay up front, save later] doesn't sell" sure sounds familiar.
    • @PaulDJohnston: ... simply because we've had 20 years of "servers" and 10 years of "instances" and now "containers"... they are all the same...
    • Venkatraman Ramakrishnan: Inventions in one discipline can build on—and spur—basic research in many others, often unwittingly. It’s a virtuous cycle, and scientists take joy in exploiting all of it. Scientists are very promiscuous and the good ones are the most promiscuous.
    • @rightfold: Most of programmers learn early to avoid premature optimization. Next step: teach people about premature distributed computing.
    • @indievc: “Not heroine…Not cocaine….But Venture Capital is the drug flowing through the veins of most Silicon Valley startups”
    • @skamille: Editing is a different profession than writing, but code review and programming are both performed by the same people
    • XNormal: Mainframes had the reputation of being very expensive. But this is misleading. In terms of cost per processing task they were much more efficient than mini and microcomputers.
    • @swardley: "Culture eats strategy for breakfast" is code for "I don't know what the heck I'm talking about but this meme sounds smart"
    • James Glanz: Yet another data center, west of Houston, was so well prepared for the storm — with backup generators, bunks and showers — that employees’ displaced family members took up residence and United States marshals used it as a headquarters until the weather passed.
    • @pacoid: Neuroevolution talk @kenneth0stanley #theaiconf -- "exact gradient is not always the best move"; evolution uses fitness fn, not objective fn
    • @GossiTheDog: Holy crap. CCleaner trojan 1st stage payload is on 700k PCs, with these orgs targeted for 2nd stage (successfully) 
    • Scott Aaronson: In the meantime, the survival of the human race might hinge on people’s ability to understand much smaller numbers than 10^122: for example, a billion, a trillion, and other numbers that characterize the exponential growth of our civilization and the limits that we’re now running up against.
    • Timothy Morgan: Compute and networking could hit the Moore’s Law wall at about the same time, and that is precisely what we expect.
    • ralmidani: As I've said before, universal web components are a pipe dream. Developers disagree on even the most trivial things, like the best way to parse a query string. What makes anyone think those disagreements will magically disappear once web components become a standard?
    • Mallory Locklear: [Virus] Jumps between species have driven most major evolutionary innovations in the viruses. Meanwhile, co-divergence has been less common than was assumed and has mostly caused incremental changes.
    • @SwiftOnSecurity: Linux is like if the creator of git wrote an operating system.
    • Morning Paper: Let’s quickly rehash the main arguments for partitioning a network between an end device and the cloud: often the end device isn’t powerful enough to run the full model, yet transmitting all of the raw sensor data to the cloud incurs significant overhead. If we can run the lower layers of the network on an end device then we only need to send the outputs of those layers up to the cloud, normally a significantly smaller data transmission. 
    • Sean Gallagher: Because the human in the loop was a thinking human—Stanislav Petrov—Andropov was never alerted, and there was no response to a falsely detected attack. And because of that, we are all still here today. Покойся с миром (Pokoysya s mirom), Colonel Petrov. Rest in peace.
    • Florian Mormann: Within the MTL are what are known as ‘concept cells,’ neurons that can represent either an abstract or concrete concept. A single cell can, for example, fire in response to an image of Jennifer Aniston, her written name, or hearing her name said out loud
    • @dormando: "memcached memory gets fragmented" -> wrong "memcached provides lazy eviction only" -> wrong-ish, and weird. shill article written by redis
    • @CodeWisdom: "Telling a programmer there's already a library to do X is like telling a songwriter there's already a song about love." - Pete Cordell
    • @cpuguy83: Sometimes I feel like 90% of the battle is working out which technically equivalent method of performing X is the least offensive to a group
    • @LukeYoungblood: Customers are already doing it; you can switch EC2 EBS GP2 volumes to IO1 on the fly with Elastic Volumes, then switch back later to save.
    • @ericschmidt: Electric motors are the unsung hero of clean energy - the latest are 97% efficient, vs. 45% for internal combustion.
    • @bcantrill: OH: "I don't know if I would call it a rewrite, but every line might change..."
    • A Mind at Play: If there can be said to have been an old boys’ club of Silicon Valley in its initial days, then Claude Shannon was a card-carrying member—and he benefited from all the privileges therein...By his own admission, Shannon had been fortunate in his timing, and privileged in knowing certain company founders and securing early investments. The bulk of his wealth was concentrated in Teledyne, Motorola, and HP stock; after getting in on the ground floor, the smartest thing Shannon did was hold on.
    • sonink: The author might be missing the woods for the trees. ICO's are at heart a rethink on the angel-vc-ipo funding cycle. ICO's offer very clear advantages in terms of increasing distribution, decreasing friction, aligning investors to product success and builds upon emerging ideas around equity/control structuring. That they offer a diversification of bitcoin is merely an oversight in the grand scheme of things.
    • Small Datum: tl;dr: Compression doesn't hurt MyRocks performance; MyRocks matches or beats InnoDB performance with much better efficiency.
    • Jonathan Beard: there is still another 60-70% performance left on devices for many applications since so much time is spent moving useless data around the system. The goal is to take that 60-70% of “packaging” and make it do something more useful.
    • Rainer Klose: [The Komatsu dumper truck] will be the largest electric vehicle in the world: weighing in at 45 tons when empty plus 65 tons of loading capacity – and a battery pack boasting 700 kWh of storage capacity. That’s as much as eight Tesla Model S vehicles. The driver has to climb nine stairs to get to work and the e-mobile’s tires measure almost two meters in diameter. 
    • Pedro Ramalhete: The fact that the memory reclamation technique needs to have the same progress (or better) as the data structure where you're using it, means that for fully lock-free or wait-free data structures the only true solution is to use pointer-based memory reclamation. Moreover, if we want the data structure to be fast, we need the memory reclamation to be just as fast (on top of having the same progress).
    • dwild: I feel like you are from an older generation, not that there's anything wrong with it, but you probably no longer have the same needs as the younger generation. When I organize stuff, I don't always do it between 1 or 2 people. Once we were a dozen that went to an Escape Room, even more people were interested and even more were invited. It was barely organized days before. Sure it works coordinating through more direct means and people did that for a pretty long time, I feel like you had to when you had similar needs, but the thing is, there's a better tool for it nowadays which is Facebook. It's like saying you don't want to use the all good new framework because of the good old way that worked.
    • monkmartinez: I would argue that real infrastructure or vendor "lock in" is due to the friction of learning the API's associated with each cloud platform. Obviously, one can choose to use all the services of the cloud platform they will deploy to. However, and in most cases, that is not a strict requirement to use the cloud platform to begin with. "Lock in" seems more like a self-created problem than anything.
    • KirinDave: Eth hasn't been hacked, but the technical parties developing it have yet to deliver a smart contract platform which is reasonable to write a verifiable contract on. And as a consequence: contracts are being exploited either due to implementation bugs in the language or code that has no obvious testing or verification plan. In many ways this is worse. There is a lack of competence in executing on the single most important feature of Etherium: it's ability to home other cryptocurrency. And without this, what exactly is the value eth adds? To me, this is an attempt to control and force the ecosystem. It's there to facilitate overvalued and underveted ICos and sidecar it as a revenue stream. It's not there to build something sustainable, it's there to extract value. If they cared about sustainability, the lack of rigor and correctness in eth contract building tools would be seen as a crisis and it would be much closer to resolution.
    • davidbrin1: Nature does use zero sum competition to craft gradual improvement in species.  But that is very very slow and happens upon piles and mountains of corpses.  The same was true for nearly all past human societies, in which you won mostly by causing others to lose.  The Western Enlightenment's positive sum systems DO utilize the creative power of competition! The five great arenas - science, democracy, markets, courts and sports - are very competitive.  But the arenas are carefully regulated so that losers can keep coming back next year and winners are (mostly) stymied from cheating, so that the competition remains positive sum.
      The result has been prodigious creativity and production, making us wealthy enough to then take on old wasteful injustices like racism, sexism and environmental neglect. Alas, Cheaters abound and try to turn Pos sum games into zero sum ones.  Today's oligarchic putsch is an effort to restore zero sum feudalism.

  • Episode 794: How To Make It In The Music Business : Planet Money : NPR. The story of how Illmind helped create an industry to supply producers with digital music components. Interesting in relation to Have You Noticed There's A Lot More Collaboration Going On These Days? Why? While individual songs have an impoverished business model compared to books, music itself is compositional in nature, and there's a whole industry supplying drum kits and sample packs, which are lego blocks—snares, snaps, cow bells, drums, and voices—producers can use to create songs. It's like reusing libraries in code, but most libraries these days are open source. Songs now are composed of digital components. This is how music gets made now. Somebody somewhere comes up with a little riff and thinks: who do I know that might be able to work this? Illmind says in a day he can create 10 tracks, which he thinks of as 10 lottery tickets. Each has the potential to be used in a song and make money. He says relationships are key. Put in the work to get to know people so they like being around you. Those people, even if their beats aren't as good, will get the opportunities. If you are the guy who knows a guy then you are exposed to opportunities. Like programmers creating software, Illmind has been working on this his entire adult life learning how to create beats. Though it takes 20 minutes to create a beat, there's a lifetime of work in the relationships.

  • Stunning. 30 Days Timelapse at Sea. Watching the unloading of container ship by giant cranes is mesmerizing. Containers is the link here, in case you were wondering. An awesome book on the subject is The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger, Second Edition with a new chapter by the author. While reading it I kept searching for some way to tie it back to HS, but it's really its own thing. Perhaps one metaphor could be found in plummeting shipping prices. Too many ships were built, and as you learn in macroeconomics class, increased supply lowers prices. Will  that happen in the cloud? Unlikely. Shipping container prices seem to be going back up as global trade increases. Since cloud demand is still high, and it seems cloud demand and supply closely track each other, it's unlikely cloud prices will soon face much in the way of downward pressure. 

  • "Overall, using request queuing as the input to our service scaling has gotten us to a place where we feel confident in our ability to respond to change. We’ve been running with some form of autoscaling based on request queuing for over a year, and the effort was far and away worth it." Autoscaling based on request queuing: The tool that we reach for first to solve the problem [being able to do a ton of work, for a significant number of requests] is hardware...An alternative is to use horizontal scaling. Instead of adding CPU, RAM, and disk space to a single server, we add more servers...One of the great upsides of the rise of cloud computing is that clouds are very good at horizontal scaling...Putting horizontal scaling into practice comes in many flavors and one of the easiest approaches to start is to over provision...The alternative is to scale with your load...We switched to another method using request queuing...At startup, we start a new thread that will report statistics on a looping basis up to our metric collector (CloudWatch in our case). Every 30 seconds, we query the puma master process to fetch the stats for each worker, including the number of working threads and the current backlog (queue) count. We report those metrics to our metric collector. We then aggregate the queue across all of the running services to get an idea of how backed up we are. If we are backed up for a sustained period of time, we add services to clear that queue. If we are clear for a longer period of time, we remove services from the stack (down to some minimum, we always have at least 3 for redundancy). This system allows us to respond both quicker and with a great deal of clarity to our production load. In addition, the same scaling system works exactly the same for background workers. Instead of using puma queues, we use Sidekiq queues and scale on similar parameters.
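The policy described above (sample the aggregated backlog every 30 seconds, act only when the backlog or the all-clear is sustained, never drop below three services) is easy to sketch. This is a hypothetical, simplified model: the class name and thresholds are invented, and the real system reports metrics to CloudWatch rather than deciding locally.

```python
from collections import deque

# Invented thresholds for illustration; the article doesn't publish its own.
MIN_SERVICES = 3          # always keep at least 3 for redundancy
SCALE_UP_SAMPLES = 4      # ~2 minutes of sustained backlog before adding
SCALE_DOWN_SAMPLES = 20   # ~10 minutes of sustained calm before removing

class QueueAutoscaler:
    """Toy version of the scale-up/scale-down decision on queue depth."""

    def __init__(self, services=MIN_SERVICES):
        self.services = services
        self.samples = deque(maxlen=max(SCALE_UP_SAMPLES, SCALE_DOWN_SAMPLES))

    def observe(self, total_backlog):
        """Record one 30-second sample of the queue aggregated across services."""
        self.samples.append(total_backlog)
        recent = list(self.samples)
        if len(recent) >= SCALE_UP_SAMPLES and all(
            s > 0 for s in recent[-SCALE_UP_SAMPLES:]
        ):
            self.services += 1    # backed up for a sustained period: add capacity
            self.samples.clear()  # measure fresh after any scaling action
        elif len(recent) >= SCALE_DOWN_SAMPLES and all(s == 0 for s in recent):
            self.services = max(MIN_SERVICES, self.services - 1)
            self.samples.clear()
        return self.services
```

Requiring a *sustained* signal in both directions is what keeps the system from flapping on a single noisy sample, and clearing the window after each action gives the new capacity time to show its effect before the next decision.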

  • You know what neural networks suck at? Generating Harry Potter spell names. Harry Potter spells, generated by neural network. Anti-Dining Charm? Corn jinx? Spell to pug? Animato Animagus? Jend-curse of the Bogies? Must be a muggle.

  • It's always funny how, when you pick a language that automatically manages memory for you, you spend a heck of a lot of time trying to outsmart all the "help" you get. Allocation Efficiency in High-Performance Go Services.

  • Have you ever wondered how Enigma would be cracked today using modern technology? Here you go. How we cracked the Enigma code using Artificial Intelligence: The Polish applied a different strategy - they hired young, smart mathematicians, who came up with the idea of building automatons that mimicked the letter substitutions and tried all the possible variants of the password...First, we taught the artificial intelligence to recognize the German language - we fed it Grimm’s fairytales...we chose the most sophisticated version of Enigma (4 rotors navy type, 1 pair of plugs, which gave us a whopping 15,354,393,600 password variants) and wrote a simulator of its behavior...The good news was, the project was working like a charm; the Enigma simulator was testing the combinations, and the artificial intelligence was classifying the decrypted messages. The bad news was, it would have taken 2 weeks to find the password...We contacted DigitalOcean and asked if we could spin our project on 1000 of their virtual servers. They gave us the green light, we prepared a parallel version of our bombe, and... it worked! The whole thing finished in 19 minutes, resulting in 13 million combinations tested per second...[the cost was] $7. In comparison, it's estimated that the British bombe cost more than half a million pounds (in today’s money). Long live parallel computing!
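The job above is embarrassingly parallel: the keyspace can be cut into contiguous ranges, one per server, with no coordination needed, and the quoted throughput can be sanity-checked from the article's own numbers. A minimal sketch (only the keyspace size, server count, and wall-clock time are from the article; the partitioning helper is illustrative):

```python
KEYSPACE = 15_354_393_600      # password variants quoted in the article
SERVERS = 1000                 # DigitalOcean virtual servers
WALL_CLOCK_SECONDS = 19 * 60   # "finished in 19 minutes"

def partition(keyspace, workers):
    """Split [0, keyspace) into `workers` contiguous, near-equal ranges."""
    base, extra = divmod(keyspace, workers)
    ranges, start = [], 0
    for i in range(workers):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

ranges = partition(KEYSPACE, SERVERS)

# Sanity check on the quoted rate of "13 million combinations tested per second":
fleet_rate = KEYSPACE / WALL_CLOCK_SECONDS   # ~13.5 million keys/sec fleet-wide
```

The fleet-wide rate works out to roughly 13.5 million keys per second, in line with the article's figure, or about 13,500 keys per second per server.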

  • The Finnish Experiment. The interesting part of this is not the universal basic income experiment, it's how the government is adopting software development techniques. Instead of a big design up-front method of creating legislation, the government is able to try experiments, get feedback, make improvements, and try again. And if the experiment fails they can move on and try something different. 

  • I imagine neighbors in the past sharing stories about why they were dumping their horse and getting an automobile. Why we [@codeenigma] don’t Private Cloud any more: we’ve discovered the hard way that to make a go of vSphere you need to be the right kind of outfit. In short, you need to have pots and pots of money...we made a private cloud service transition from Rackspace to Pulsant in the summer of 2016...While everything was highly available on paper, the reality was we were hit by outages of one form or another at least once a month...We decided to take a step back and look at AWS services for a second time...We could move to the AWS public cloud services and have NAS...Then there was another service in Dublin which didn’t exist last time we looked — Direct Connect...we went on to look more closely at the Virtual Private Cloud (VPC)...But the icing on the cake is HashiCorp’s Terraform. Infrastructure in code...So on reflection, we realised AWS has come a long way and we can now do (almost**) everything we wanted to do with the private cloud product within the AWS suite of public cloud services. So why bother running a private cloud, with all the risk and management that it entails? The answer is “don’t”. So we won’t any more. Bye bye private cloud, hello AWS managed infrastructure, and very happy we are too.

  • How do you convince people your ICO hasn't been hacked from inception? How do you make sure evil can't enter the world when there's free will? That's a different question. The Ceremony. When trust needs to be engineered into a system, even for something as ethereal as an ICO, humans still turn to ritual. Though this one seems more of an elaborate magic act conceived to deceive. But the forms were obeyed and money was made.

  • ICO as art form. The Tulip Individual Coin Offering: An Early Harvest. There is only one indivisible Tulip Token (TLP) of which 100% ownership will exchange hands. In the aftermath of the ICO, initially funds will be used for the creation and shipping of the commemorative physical artwork. Surplus funds will then be divided 80/20 between The Artist and the Hand in Hand Hurricane Relief Fund respectively. The Artist will use these funds to produce future works.

  • Bacteria Use Brainlike Bursts of Electricity to Communicate: But Süel and other scientists are now finding that bacteria in biofilms can also talk to one another electrically. Biofilms appear to use electrically charged particles to organize and synchronize activities across large expanses. This electrical exchange has proved so powerful that biofilms even use it to recruit new bacteria from their surroundings, and to negotiate with neighboring biofilms for their mutual well-being...Basically, we’re observing a primitive form of action potential in these biofilms.

  • You can now get a prescription for an app. First Prescription App for Substance Abuse Approved by FDA: Developed by Pear Therapeutics in Boston and San Francisco, the app helps people recovering from addiction stay on track while participating in outpatient treatment. The U.S. Food and Drug Administration (FDA) last week approved the prescription-only software for the American market. 

  • Oracle as the low cost leader? Talk about going against type. But that's what Larry Ellison is promising. Oracle promises SLAs that halve Amazon's cloud costs. How? By automation: "the primary cost of running platform-as-a-service (PaaS) is labour and opined that human involvement in running databases or middleware is best avoided." Larry must think AWS is not highly automated and simply couldn't lower prices in response in the unlikely event Oracle cloud ever becomes a threat.

  • Why GCP: Compute: No noisy neighbours impacting your resource availability; Network throughput that cannot be matched; High performance disks that don’t cost you a small fortune; Flexible pricing that still benefits you when committing to use resource longer term

  • Not a game changer, but it will save some money and make simpler architectures possible. It's not enough, though; more granularity is needed. New – Per-Second Billing for EC2 Instances and EBS Volumes. aidos: This is one of the better things to happen in ec2 in years for me. We have a bunch of scripts so a spot instance can track when it came online and shut itself down effectively. It took far too much fiddling around to work around aws autoscale and get efficient billing with the per hour model. In the end we came up with a model where we protect the instances for scale in and then at the end of each hour, we have a cron that tries to shut all the worker services down, and if it can't it spins them all up again to run for another hour. If it can, then it shuts the machine down (which we have set on terminate to stop). The whole thing feels like a big kludge and for our workload we still have a load of wasted resources. We end up balancing not bringing up machines too fast during a spike against the long tail of wasted resource afterwards. This change by ec2 is going to make it all much easier. andrewstuart: Really welcome, although per millisecond would be better. It's now possible to boot operating systems in milliseconds and have them carry out a task (for example respond to a web request) and disappear again. Trouble is the clouds (AWS, Google, Azure, Digital Ocean) do not have the ability to support such fast OS boot times. Per second billing is a step in the right direction but needs to go further to millisecond billing, and clouds need to support millisecond boot times. grzm: The important and very real distinction is the change from per-hour to per-second. If you're going to make it more granular (which from per-hour is a good thing), why would AWS stop at per-minute if it's the same or only marginally more difficult to make it per-second, particularly when they have the added benefit of being more granular than GCP? 
In other words, I don't see the reduction as primarily a marketing move on AWS's part. I'm sure they felt pressure to make it more granular. Stopping at parity doesn't necessarily make sense, nor should they be called out for doing more purely for marketing reasons. andrewstuart: The lifetime of a web request, for example, can be measured in milliseconds. It is now possible, technically anyway, for operating systems to boot, service the request and disappear. There needs to be pricing models that reflect computing models like this.
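The savings from finer billing granularity are pure arithmetic. Here is a hedged sketch: the hourly rate is made up, and the one-minute minimum reflects my understanding of AWS's per-second billing announcement (modeled explicitly as a parameter so it's easy to adjust if that detail differs).

```python
import math

def billed_seconds(runtime_seconds, granularity_seconds, minimum_seconds=60):
    """Round a runtime up to the billing granularity, after a per-run minimum.

    AWS's per-second EC2 billing is generally described as keeping a
    one-minute minimum per instance run; that assumption lives in
    `minimum_seconds` so it can be changed.
    """
    billable = max(runtime_seconds, minimum_seconds)
    return math.ceil(billable / granularity_seconds) * granularity_seconds

# A 90-second batch job at a hypothetical $0.10/hour rate:
RATE_PER_HOUR = 0.10
runtime = 90

per_hour_cost = billed_seconds(runtime, 3600, 0) / 3600 * RATE_PER_HOUR   # $0.10
per_second_cost = billed_seconds(runtime, 1) / 3600 * RATE_PER_HOUR       # $0.0025
```

For short-lived workers like aidos's spot instances, the 90-second job costs 40x less under per-second billing, which is exactly why the end-of-hour shutdown kludge described above stops being necessary.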

  • Looks like a good on-line tutorial. Cassandra Architecture & Replication Factor Strategy Tutorial.

  • Managing Peak Power: Alongside 5G are demands to move more computing into the cloud, ubiquitous connectivity, and a much more flexible and robust infrastructure. To make these things a reality, however, infrastructure improvements associated with a shift from 4G to 5G will likely include a hike in cell edge data rates from 10 Mbits/sec to more than 1 Gbit/s, a trebling of spectral efficiency, a 50% gain in energy efficiency, and an increase in mobility from 350 km/hour to 500 km/hour...The same kinds of issues that are being dealt with at advanced nodes are showing up inside racks of servers in a data center, too. There is much more data that needs to be processed than ever before, but there is only so fast that servers can run and turn on because of the thermal limits on the server racks.

  • Facebook is forking PHP. The Future of HHVM: "PHP7 is charting a new course away from PHP5, and we want to do the same, via a renewed focus on Hack. Consequently, HHVM will not aim to target PHP7. The HHVM team believes that we have a clear path toward making Hack a fantastic language for web development, untethered from its PHP origins." Makes sense if you are Facebook. You have a lot of PHP code. PHP7 won't be backwards compatible. You have the ability to create a language that does exactly what you need done. So why not?

  • How do you build a data processing pipeline using Lambda? Game of Lambdas. The picture of the resulting system is a complex graph of interactions. They did a great job explaining the process, but they didn't say if they like the resulting system. It seems complex for what it does. Also, Building Serverless SaaS Applications on AWS.

  • This is why nobody really likes competition. Lower profits. The indie games industry is perfect — and that’s the problem!: The independent games industry is currently in a state of quiet chaos. Indie games cost less than a hamburger on average, at their real world selling price. So much less in fact, that it’s more like they cost about as much as a packet of ketchup. You know the ones you get for free when you buy a hamburger? That’s about right. Why is this so? Because the independent gaming industry is perfect. Perfectly competitive. It is the exact opposite of a monopoly. It is an industry that has many suppliers competing for many buyers. So many suppliers in fact, that if it weren’t for the fact that most gamers play more than one video game, we would likely have more suppliers than buyers if it came down to ratio.

  • Impressive demo. ATHENA Laser Weapon System Defeats Unmanned Aerial Systems. But laser blasts are kind of boring. You can't see anything. Oh, Han shot first. 

  • Shared state. Just don't. The Worst Computer Bugs in History: Race conditions in Therac-25: The software consisted of several routines running concurrently. Both the Data Entry and Keyboard Handler routines shared a single variable, which recorded whether the technician had completed entering commands. Once the data entry phase was marked complete, the magnet setting phase began. However, if a specific sequence of edits was applied in the Data Entry phase during the 8 second magnet setting phase, the setting was not applied to the machine hardware, due to the value of the completion variable. The UI would then display the wrong mode to the user, who would confirm the potentially lethal treatment...An additional concurrency bug caused the last known incident, which was due to overflow in a one-byte shared variable.
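The failure pattern is worth seeing without the radiation hardware. Below is a deliberately simplified, single-threaded replay of the bug class described above; the class and field names are invented, and real Therac-25 behavior was far more involved, but the essence is the same: two routines share one completion flag, so an edit that lands during the settling phase is silently ignored by the hardware while the UI shows the edited value.

```python
class Console:
    """Toy model of a data-entry routine and a magnet-setting routine
    coordinating through a single shared completion flag."""

    def __init__(self):
        self.entry_complete = False
        self.pending = {}   # what the technician has typed (shown in the UI)
        self.applied = {}   # what the hardware will actually execute

    def finish_entry(self):
        self.entry_complete = True

    def edit(self, field, value):
        """Keyboard handler: records the keystroke...
        BUG: it does not clear entry_complete, so an in-flight settling
        phase never re-reads the edited parameters."""
        self.pending[field] = value

    def settle_magnets(self):
        """Magnet-setting phase: snapshots the parameters once, when it runs."""
        if self.entry_complete:
            self.applied = dict(self.pending)

console = Console()
console.edit("mode", "electron")   # technician enters a treatment
console.finish_entry()
console.settle_magnets()           # settling phase snapshots the parameters
console.edit("mode", "x-ray")      # edit arrives after the snapshot was taken
# The UI (pending) now shows "x-ray", but the hardware (applied) got "electron".
```

The fix is the one any shared-state bug needs: make the edit invalidate the flag (and re-run the settling phase), or better, don't let two routines communicate through an unguarded shared variable at all.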

  • Migrating from RethinkDB to Postgres — An Experience Report: we switched from RethinkDB and ElasticSearch to Postgres, leaning heavily on Haskell in order to fill in some of the gaps quickly. The project was a success, and we’re very happy with the switch. Haskell has been invaluable for refactoring safely and confidently.

  • Well, that's comforting. IoT Edge Attacks II!: The team investigated if they could find a way to gain control over the MEMS sensor output using sound waves...The team found that 75% of the 25 accelerometer devices tested were vulnerable to taking control of the sensor output...Some scary implications of this experiment should be jumping to mind at this point. For example, what if this was a military drone?...What we learn from this experiment is that IoT edge device designers do have control over security at the hardware level.

  • DigitalOcean is expanding their services. Introducing Spaces: Scalable Object Storage on DigitalOcean: "simple, standalone object storage service that enables developers to store and serve any amount of data with automatic scalability, performance, and reliability." Seems a lot of people are unhappy with the $5 minimum charge, but you do get 250GB of storage and 1TB of outbound bandwidth. SynerG: S3 does not impose a minimum, and DO droplets can be run by hours without a monthly minimum. As it has already been pointed out, many (small) projects never reach 5$ in service use, even if paid as extra costs ($0.01 GB of transfer, $0.02 GB of storage) or even at S3 prices. If you want to prevent many bills of only a few cents each, you could disable the minimum only for accounts that already have Droplets running. The $5 minimum discourage me to start using Spaces (with small projects) as they run actually cheaper at S3
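The complaint in the quoted comment is easy to quantify. Using only the per-GB figures the commenter cites ($0.02/GB-month storage, $0.01/GB transfer; these are the comment's numbers, not official pricing), a small project sits far below the $5 floor, while the bundled allowance priced at those same rates would cost three times the flat fee:

```python
# Per-GB rates as quoted in the comment above (not verified against
# official DigitalOcean or S3 price lists).
STORAGE_PER_GB = 0.02    # $/GB-month stored
TRANSFER_PER_GB = 0.01   # $/GB outbound
INCLUDED_STORAGE_GB = 250
INCLUDED_TRANSFER_GB = 1000
FLAT_FEE = 5.00

def usage_cost(storage_gb, transfer_gb):
    """Monthly cost if usage were metered at the quoted per-GB rates."""
    return storage_gb * STORAGE_PER_GB + transfer_gb * TRANSFER_PER_GB

# A small project: 10 GB stored, 50 GB served per month.
small = usage_cost(10, 50)                                  # $0.70, well under $5
# The full bundle, priced at the same per-GB rates:
bundle = usage_cost(INCLUDED_STORAGE_GB, INCLUDED_TRANSFER_GB)  # $15.00
```

So the $5 plan is a discount for anyone who actually uses the bundle, and a surcharge for the many small projects that never would, which is precisely the commenter's objection.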

  • oxford-cs-deepnlp-2017/lectures: This repository contains the lecture slides and course description for the Deep Natural Language Processing course offered in Hilary Term 2017 at the University of Oxford.

  • codahale/usl4j: usl4j is Java modeler for Dr. Neil Gunther's Universal Scalability Law as described by Baron Schwartz in his book Practical Scalability Analysis with the Universal Scalability Law.
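The Universal Scalability Law that usl4j models has a simple closed form: throughput at concurrency N is λN / (1 + σ(N−1) + κN(N−1)), where σ captures contention (serialization) and κ captures coherency cost (crosstalk). A minimal rendering in Python, with illustrative coefficient values:

```python
def usl_throughput(n, lam, sigma, kappa):
    """Universal Scalability Law: predicted throughput at concurrency n.

    lam   -- ideal throughput of a single worker (lambda)
    sigma -- contention penalty (fraction of work that serializes)
    kappa -- coherency penalty (pairwise crosstalk between workers)
    """
    return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# With no contention and no crosstalk, scaling is perfectly linear:
assert usl_throughput(10, 100.0, 0.0, 0.0) == 1000.0

# With crosstalk (kappa > 0), throughput eventually goes *retrograde*:
# adding workers past the peak makes the system slower, not just flatter.
points = [usl_throughput(n, 100.0, 0.02, 0.001) for n in (1, 16, 64)]
```

The quadratic κN(N−1) term is what distinguishes the USL from plain Amdahl's Law (σ only) and is why fitting it to measured throughput, as usl4j does, can predict the concurrency at which a system stops benefiting from more hardware.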

  • The Sparse Data Reduction Engine: This paper presents a general solution for a programmable data rearrangement/reduction engine near-memory to deliver bulk byte-addressable data access. The key technology presented in this paper is the Sparse Data Reduction Engine (SPDRE), which builds on previous similar efforts to provide a practical near-memory reorganization engine.

Hey, just letting you know I've written a new book: A Short Explanation of the Cloud that Will Make You Feel Smarter: Tech For Mature Adults. It's pretty much exactly what the title says it is. If you've ever tried to explain the cloud to someone, but had no idea what to say, send them this book.

I've also written a novella: The Strange Trial of Ciri: The First Sentient AI. It explores the idea of how a sentient AI might arise as ripped-from-the-headlines deep learning techniques are applied to large social networks. Anyway, I like the story. If you do too please consider giving it a review on Amazon.

Thanks for your support!

Reader Comments (1)

> Believe me, people don't mind being in touch and coordinating through more direct means

I feel like you are from an older generation, not that there's anything wrong with it, but you probably no longer have the same needs as the younger generation. When I organize stuff, I don't always do it between 1 or 2 people. Once we were a dozen that went to an Escape Room, even more people were interested and even more were invited. It was barely organized days before. Sure it works coordinating through more direct means and people did that for a pretty long time, I feel like you had to when you had similar needs, but the thing is, there's a better tool for it nowadays which is Facebook. It's like saying you don't want to use the all good new framework because of the good old way that worked.

What is the cost of the new framework versus the old?
I spend 0.0 hours per week on facebook, and 0.25 hours per week coordinating activities directly.
How much time do you spend on facebook?

September 22, 2017 | Unregistered Commenterclewis
