Entries in graph (11)

Thursday
Feb 14, 2013

When all the Program's a Graph - Prismatic's Plumbing Library

At some point as a programmer you might have the insight/fear that all programming is just doing stuff to other stuff.

Then, after coding the same stuff over and over again, you may observe that stuff in a program often takes the form of interacting patterns of flows.

Then you may think: hey, a program isn't just useful for coding data structures, a program is itself a kind of data structure, and with a meta-level jump you could write a program in terms of flows over data and flows over other flows.

That's the kind of stuff Prismatic is making available in the Graph extension to their plumbing package (code examples), which is described in an excellent post: Graph: Abstractions for Structured Computation.

You may remember Prismatic from a previous profile we did on HighScalability: Prismatic Architecture - Using Machine Learning On Social Networks To Figure Out What You Should Read On The Web. We learned how Prismatic, an interest driven content suggestion service, builds programs in terms of graph stuff:
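
Graph is a Clojure library, but the core idea fits in a few lines of any language. Here's a hedged Python sketch (a toy illustration, not Prismatic's actual API): a computation is declared as a map from output names to functions whose parameter names refer to other outputs, and a tiny executor derives the execution order from those dependencies, so declaration order no longer matters.

    import inspect

    def run_graph(graph, **inputs):
        """Toy 'Graph' executor: each entry is a function whose parameter
        names refer either to inputs or to other entries; dependencies are
        resolved by name, so declaration order doesn't matter."""
        results = dict(inputs)

        def compute(key, seen=()):
            if key in results:
                return results[key]
            if key in seen:
                raise ValueError(f"cycle involving {key!r}")
            fn = graph[key]
            deps = inspect.signature(fn).parameters
            args = {d: compute(d, seen + (key,)) for d in deps}
            results[key] = fn(**args)
            return results[key]

        for key in graph:
            compute(key)
        return results

    # The classic plumbing-style example: simple statistics over a list of numbers.
    stats = {
        "n":    lambda xs: len(xs),
        "mean": lambda xs, n: sum(xs) / n,
        "var":  lambda xs, n, mean: sum((x - mean) ** 2 for x in xs) / n,
    }

    print(run_graph(stats, xs=[1, 2, 3, 6]))
    # {'xs': [1, 2, 3, 6], 'n': 4, 'mean': 3.0, 'var': 3.5}

Because the computation is just data, the same declaration can be compiled, profiled, parallelized, or partially overridden, which is the point of the library.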

Click to read more ...

Wednesday
Jul 18, 2012

Disks Ain't Dead Yet: GraphChi - a disk-based large-scale graph computation

GraphChi uses a Parallel Sliding Windows method which can process a graph with mutable edge values efficiently from disk, with only a small number of non-sequential disk accesses, while supporting the asynchronous model of computation.

The result is that graphs with billions of edges can be processed on just a single machine. It uses a vertex-centric computation model similar to Pregel, which supports iterative algorithms as opposed to the batch style of MapReduce. Streaming graph updates are supported.
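
As a rough illustration of the programming model, here is a toy, in-memory Python caricature (hypothetical code that ignores the real Parallel Sliding Windows disk machinery): vertices are split into intervals, each interval's in-edges are grouped into a shard that can be scanned sequentially, and updates made within a pass are visible to later updates in the same pass, which is the asynchronous model referred to above.

    # Toy caricature of GraphChi's vertex-centric model (hypothetical code,
    # nothing like the real Parallel Sliding Windows implementation).

    def build_shards(edges, num_vertices, num_intervals):
        """Group undirected edges by the interval of their destination vertex."""
        shards = [dict() for _ in range(num_intervals)]
        for u, v in edges:
            for src, dst in ((u, v), (v, u)):
                i = dst * num_intervals // num_vertices
                shards[i].setdefault(dst, []).append(src)
        return shards

    def connected_components(edges, num_vertices, num_intervals=2, passes=10):
        labels = list(range(num_vertices))        # every vertex starts as its own component
        shards = build_shards(edges, num_vertices, num_intervals)
        for _ in range(passes):
            changed = False
            for shard in shards:                  # one interval at a time,
                for dst, in_nbrs in shard.items():  # scanning its shard sequentially
                    new = min([labels[dst]] + [labels[s] for s in in_nbrs])
                    if new != labels[dst]:
                        labels[dst], changed = new, True
            if not changed:
                break
        return labels

    print(connected_components([(0, 1), (1, 2), (3, 4)], num_vertices=5))
    # -> [0, 0, 0, 3, 3]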

About GraphChi, Carlos Guestrin, codirector of Carnegie Mellon's Select Lab, says:

A Mac Mini running GraphChi can analyze Twitter's social graph from 2010—which contains 40 million users and 1.2 billion connections—in 59 minutes. "The previous published result on this problem took 400 minutes using a cluster of about 1,000 computers."

Click to read more ...

Wednesday
Jul 27, 2011

Making Hadoop 1000x Faster for Graph Problems

Dr. Daniel Abadi, author of the DBMS Musings blog and co-founder of Hadapt, which offers a product improving Hadoop performance by 50x on relational data, is now turning his talents to graph data in a post, Hadoop's tremendous inefficiency on graph data management (and how to avoid it), which shares the secrets of getting Hadoop to perform 1000x better on graph data.

TL;DR:

Click to read more ...

Wednesday
Jun 30, 2010

Paper: GraphLab: A New Framework For Parallel Machine Learning

In the never ending quest to figure out how to do something useful with never ending streams of data, GraphLab: A New Framework For Parallel Machine Learning wants to go beyond low-level programming, MapReduce, and dataflow languages with a new parallel framework for ML (machine learning) which exploits the sparse structure and common computational patterns of ML algorithms. GraphLab enables ML experts to easily design and implement efficient scalable parallel algorithms by composing problem specific computation, data-dependencies, and scheduling.  Our main contributions include: 

  • A graph-based data model which simultaneously represents data and computational dependencies. 
  • A set of concurrent access models which provide a range of sequential-consistency guarantees. 
  • A sophisticated modular scheduling mechanism. 
  • An aggregation framework to manage global state. 
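
To make the model a bit more concrete, here is a hedged Python sketch of the vertex-scope-plus-scheduler idea (hypothetical code, not the real GraphLab API, and it ignores the consistency models mentioned in the second bullet): an update function sees one vertex and its neighborhood and can ask the scheduler to revisit the neighbors whose values it changed, giving dynamic, asynchronous scheduling rather than fixed MapReduce-style passes.

    from collections import deque

    def run(graph, data, update, initial):
        """graph: vertex -> [(neighbor, edge_weight)]; data: vertex -> value."""
        queue, queued = deque(initial), set(initial)
        while queue:
            v = queue.popleft()
            queued.discard(v)
            to_reschedule = update(v, data, graph[v])   # may mutate data
            for u in to_reschedule:
                if u not in queued:
                    queue.append(u)
                    queued.add(u)
        return data

    # Example update: single-source shortest paths by relaxation.
    def relax(v, dist, neighbors):
        changed = []
        for u, w in neighbors:
            if dist[v] + w < dist[u]:
                dist[u] = dist[v] + w
                changed.append(u)        # reschedule the vertices we improved
        return changed

    graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
    dist = {"a": 0, "b": float("inf"), "c": float("inf")}
    print(run(graph, dist, relax, initial=["a"]))   # {'a': 0, 'b': 1, 'c': 2}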

Click to read more ...

Tuesday
Apr 6, 2010

Strategy: Make it Really Fast vs Do the Work Up Front

In Cool spatial algos with Neo4j: Part 1 - Routing with A* in Ruby, Peter Neubauer not only does a fantastic job explaining a complicated routing algorithm using the graph database Neo4j, he also surfaces an interesting architectural conundrum: make the engine really fast so the work can be done on reads, or do all the work on writes so the reads are really fast.

The money quote pointing out the competing options is:

[Being] able to do these calculations in sub-second speeds on graphs of millions of roads and waypoints makes it possible in many cases to abandon the normal approach of precomputing indexes with K/V stores and be able to put routing into the critical path with the possibility to adapt to the live conditions and build highly personalized and dynamic spatial services.
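
The computation being pushed onto the read path here is plain A*: a best-first search that ranks each candidate node by cost-so-far plus a heuristic estimate of the remaining cost. A minimal sketch over a toy in-memory road graph (hypothetical Python, not Peter's Neo4j/Ruby code):

    import heapq, math

    def a_star(graph, coords, start, goal):
        """graph: node -> [(neighbor, edge_cost)], coords: node -> (x, y).
        Straight-line distance to the goal is the (admissible) heuristic."""
        def h(n):
            (x1, y1), (x2, y2) = coords[n], coords[goal]
            return math.hypot(x1 - x2, y1 - y2)

        open_set = [(h(start), 0.0, start, [start])]   # (f, g, node, path)
        best_g = {start: 0.0}
        while open_set:
            f, g, node, path = heapq.heappop(open_set)
            if node == goal:
                return g, path
            for nbr, cost in graph[node]:
                ng = g + cost
                if ng < best_g.get(nbr, float("inf")):
                    best_g[nbr] = ng
                    heapq.heappush(open_set, (ng + h(nbr), ng, nbr, path + [nbr]))
        return None

    # Toy road network: coordinates in arbitrary units, costs are road lengths.
    coords = {"A": (0, 0), "B": (1, 1), "C": (2, 0), "D": (3, 1)}
    graph = {"A": [("B", 1.5), ("C", 2.1)],
             "B": [("D", 2.2)],
             "C": [("D", 1.5)],
             "D": []}
    print(a_star(graph, coords, "A", "D"))   # -> (3.6, ['A', 'C', 'D'])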

The poster boy for the precompute strategy is SimpleGeo, a startup that is building a "scaling infrastructure for geodata." Their strategy for handling geodata is to use Cassandra and build two clusters: one for indexes and one for records. The records cluster is a simple data lookup. The index cluster has a carefully constructed key for every lookup scenario. The indexes are computed on the write, so reads are very fast. Ad hoc queries are not allowed. You can only search on what has been precomputed.
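
A hedged sketch of that write-time strategy (hypothetical Python, with plain dicts standing in for the two Cassandra clusters, not SimpleGeo's actual schema): every write also updates an index row whose key is precomputed for the one query it is meant to serve, so the read is a single key lookup.

    records = {}   # record "cluster": id -> record
    index = {}     # index "cluster": precomputed key -> list of record ids

    def grid_key(lat, lon, precision=2):
        # stand-in for a real geohash: a coarse grid cell as the index key
        return f"{round(lat, precision)}:{round(lon, precision)}"

    def put_place(place_id, name, lat, lon):
        records[place_id] = {"name": name, "lat": lat, "lon": lon}
        index.setdefault(grid_key(lat, lon), []).append(place_id)   # index built on write

    def places_near(lat, lon):
        # the read is one key lookup; only precomputed queries are possible
        return [records[i]["name"] for i in index.get(grid_key(lat, lon), [])]

    put_place("p1", "Cafe", 37.7812, -122.4072)
    put_place("p2", "Bar",  37.7808, -122.4069)
    put_place("p3", "Pier", 37.8083, -122.4098)
    print(places_near(37.78, -122.41))   # ['Cafe', 'Bar'] — p3 falls in another cell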

What I think Peter is saying is because a graph database represents the problem in such a natural way and graph navigation is so fast, it becomes possible to run even large complex queries in real-time. No special infrastructure is needed.

If you are creating a geo service, which approach would you choose? Before you answer, let's first ponder: is the graph database solution really solving the same problem as SimpleGeo is solving?

Click to read more ...

Tuesday
Mar 30, 2010

Running Large Graph Algorithms - Evaluation of Current State-of-the-Art and Lessons Learned

On the surface nothing appears more different than soft data and hard raw materials like iron. Then isn’t it ironic, in the Alanis Morissette sense, that in this Age of Information, great wealth still lies hidden deep beneath piles of stuff? It's so strange how directly digging for dollars in data parallels the great wealth producing models of the Industrial Revolution.

The piles of stuff are the Internet. It takes lots of prospecting to find the right stuff. Mighty web crawling machines tirelessly collect stuff, bringing it into their huge maws, then depositing load after load into rack after rack of distributed file system machines. Then armies of still other machines take this stuff and strip out the valuable raw materials, which in the Information Age, are endless bytes of raw data. Link clicks, likes, page views, content, headlines, searches, inbound links, outbound links, search clicks, hashtags, friends, purchases: anything and everything you do on the Internet is a valuable raw material.

By itself data is no more useful than a truckload of iron ore. Data must be brought to a factory. It must be purified, processed, and formed. That’s the job for a new field of science called Data Science. Yes, while you weren't looking a whole new branch of science was created. It makes sense in a way. Since data is a new kind of material we need a new profession paralleling that of the Material Scientist, someone who seeks to deeply understand data: the Data Scientist. We aren't so much in the age of data, as the age of data inference.

Click to read more ...

Tuesday
Jan 26, 2010

Product: HyperGraphDB - A Graph Database

With the success of Neo4j as a graph database in the NoSQL revolution, it's interesting to see another graph database, HyperGraphDB, in the mix. Their quick blurb says HyperGraphDB is a general purpose, extensible, portable, distributed, embeddable, open-source data storage mechanism. It is a graph database designed specifically for artificial intelligence and semantic web projects, but it can also be used as an embedded object-oriented database for projects of all sizes.

From the NoSQL Archive the summary on HyperGraphDB is: API: Java (and Java Langs), Written in: Java, Query Method: Java or P2P, Replication: P2P, Concurrency: STM, Misc: Open-Source, Especially for AI and Semantic Web.

So it has some interesting features, like software transactional memory and P2P  for data distribution, but I found that my first and most obvious question was not answered: what the heck is a hypergraph and why do I care? Buried in the tutorial was:

A HyperGraphDB database is a generalized graph of entities. The generalization is two-fold:

  1. Links/edges "point to" an arbitrary number of elements instead of just two as in regular graphs 
  2. Links can be pointed to by other links as well.

OK, but I wish there was some explanation of why this is valuable. What can I do with it that I can't do with normal graphs? Given that there have been concerns over the complexity of the API this would seem a natural topic to cover. I assume it's cool, it sounds cool, but I would like to know why :-)
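
For what it's worth, here is a rough illustration of what the generalization buys you (a hypothetical Python sketch, not HyperGraphDB's API): a single link can relate three or more atoms at once, such as "Alice introduced Bob to Carol", and because links are themselves atoms, other links can point at them, for example to attach provenance to the relationship itself.

    # Hypothetical sketch of the hypergraph idea, not HyperGraphDB's API.
    import itertools

    _ids = itertools.count()

    class Atom:
        """Everything in the store is an atom with a handle, including links."""
        def __init__(self, value, targets=()):
            self.handle = next(_ids)
            self.value = value
            self.targets = tuple(targets)   # 0, 2, 3, ... other atoms, not just 2

    alice, bob, carol = Atom("Alice"), Atom("Bob"), Atom("Carol")

    # A 3-ary link: one edge relating three nodes, awkward to model as binary edges.
    intro = Atom("introduced", targets=(alice, bob, carol))

    # A link that points at another link: metadata about the relationship itself.
    source = Atom(("asserted-by", "linkedin-import"), targets=(intro,))

    for atom in (intro, source):
        print(atom.value, "->", [t.value for t in atom.targets])
    # introduced -> ['Alice', 'Bob', 'Carol']
    # ('asserted-by', 'linkedin-import') -> ['introduced']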

In any case it looks like an interesting product to take a look at. Database options are expanding fast.

Click to read more ...

Monday
Jun 15, 2009

Large-scale Graph Computing at Google

To continue the graph theme, Google has got into the act and released information on Pregel. Pregel does not appear to be a new type of potato chip. Pregel is instead a scalable infrastructure...

...to mine a wide range of graphs. In Pregel, programs are expressed as a sequence of iterations. In each iteration, a vertex can, independently of other vertices, receive messages sent to it in the previous iteration, send messages to other vertices, modify its own and its outgoing edges' states, and mutate the graph's topology.

Currently, Pregel scales to billions of vertices and edges, but this limit will keep expanding. Pregel's applicability is harder to quantify, but so far we haven't come across a type of graph or a practical graph computing problem which is not solvable with Pregel. It computes over large graphs much faster than alternatives, and the application programming interface is easy to use. Implementing PageRank, for example, takes only about 15 lines of code. Developers of dozens of Pregel applications within Google have found that "thinking like a vertex," which is the essence of programming in Pregel, is intuitive.
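
The announcement doesn't include code, but the "think like a vertex" style is easy to caricature. Below is a minimal superstep loop in Python (hypothetical code, not Google's API), with PageRank as the per-vertex program, echoing the roughly-15-lines claim above.

    def pregel(vertices, edges, compute, max_supersteps):
        """vertices: id -> value; edges: id -> [out-neighbor ids].
        compute(id, value, messages, out_edges) -> (new_value, outgoing, halt)."""
        messages = {v: [] for v in vertices}
        active = set(vertices)
        for _ in range(max_supersteps):
            if not active:
                break
            next_messages = {v: [] for v in vertices}
            for v in list(active):
                value, outgoing, halt = compute(v, vertices[v], messages[v], edges[v])
                vertices[v] = value
                for target, msg in outgoing:
                    next_messages[target].append(msg)
                if halt:
                    active.discard(v)
            # a vertex that receives a message is reactivated next superstep
            active |= {v for v, msgs in next_messages.items() if msgs}
            messages = next_messages
        return vertices

    def pagerank(v, rank, messages, out_edges, n=3, d=0.85):   # n hardcoded for the 3-vertex toy
        if messages:
            rank = (1 - d) / n + d * sum(messages)
        share = rank / len(out_edges) if out_edges else 0.0
        return rank, [(t, share) for t in out_edges], False

    edges = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    ranks = {v: 1.0 / 3 for v in edges}
    print(pregel(ranks, edges, pagerank, max_supersteps=20))
    # converges to roughly {'a': 0.39, 'b': 0.21, 'c': 0.40}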

Pregel does not appear to be publicly available, so it's not clear what the purpose of the announcement could be. Maybe it will be a new gmail extension :-)

Saturday
Jun 13, 2009

Neo4j - a Graph Database that Kicks Buttox

Update: Social networks in the database: using a graph database. A nice post on representing, traversing, and performing other common social network operations using a graph database.

If you are Digg or LinkedIn you can build your own speedy graph database to represent your complex social network relationships. For those of more modest means, Neo4j, a graph database, is a good alternative.

A graph is a collection of nodes (things) and edges (relationships) that connect pairs of nodes. Slap properties (key-value pairs) on nodes and relationships and you have a surprisingly powerful way to represent most anything you can think of. In a graph database "relationships are first-class citizens. They connect two nodes and both nodes and relationships can hold an arbitrary amount of key-value pairs. So you can look at a graph database as a key-value store, with full support for relationships."

A graph looks something like: [example graph image in the original post]

For more lovely examples take a look at the Graph Image Gallery.
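
As a minimal sketch of that model (hypothetical Python, not Neo4j's API): nodes and relationships both carry arbitrary key-value properties, and relationships are stored as first-class objects reachable by pointer chasing rather than reconstructed through joins.

    class Node:
        def __init__(self, **props):
            self.props = props
            self.rels = []                       # outgoing relationships, kept on the node

    class Relationship:
        def __init__(self, start, rel_type, end, **props):
            self.start, self.type, self.end, self.props = start, rel_type, end, props
            start.rels.append(self)              # a first-class edge, not a join table row

    alice = Node(name="Alice")
    bob = Node(name="Bob")
    neo = Node(name="Neo4j", kind="graph database")

    Relationship(alice, "KNOWS", bob, since=2007)
    Relationship(alice, "USES", neo)

    # Traversal is just pointer chasing over the relationship objects.
    for rel in alice.rels:
        print(alice.props["name"], rel.type, rel.end.props["name"], rel.props)
    # Alice KNOWS Bob {'since': 2007}
    # Alice USES Neo4j {}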

Here's a good summary by Emil Eifrem, founder of Neo4j, making the case for why graph databases rule:

Most applications today handle data that is deeply associative, i.e. structured as graphs (networks). The most obvious example of this is social networking sites, but even tagging systems, content management systems and wikis deal with inherently hierarchical or graph-shaped data.

This turns out to be a problem because it’s difficult to deal with recursive data structures in traditional relational databases. In essence, each traversal along a link in a graph is a join, and joins are known to be very expensive. Furthermore, with user-driven content, it is difficult to pre-conceive the exact schema of the data that will be handled. Unfortunately, the relational model requires upfront schemas and makes it difficult to fit this more dynamic and ad-hoc data.

A graph database uses nodes, relationships between nodes and key-value properties instead of tables to represent information. This model is typically substantially faster for associative data sets and uses a schema-less, bottoms-up model that is ideal for capturing ad-hoc and rapidly changing data.

So relational databases can't handle complex relationships. Graph systems are opaque, unmaintainable, and inflexible. OO databases lose flexibility by combining logic and data. Key-value stores require the programmer to maintain all relationships. There, everybody sucks :-)

Neo4j's Key Characteristics

  • Dual license: open source and commercial.
  • Well suited for many web use cases such as tagging, metadata annotations, social networks, wikis and other network-shaped or hierarchical data sets.
  • An intuitive graph-oriented model for data representation. Instead of static and rigid tables, rows and columns, you work with a flexible graph network consisting of nodes, relationships and properties.
  • Decent documentation, active and responsive email list, a few releases, good buzz. All a good sign for something that has a chance to last a while.
  • Has bindings for a number of languages: Python, Jython, Ruby, and Clojure. No binding for .Net yet. The recommendation is to access it using a REST interface.
  • Disk-based, native storage manager completely optimized for storing graph structures for maximum performance and scalability. SSD ready.
  • Massive scalability. Neo4j can handle graphs of several billion nodes/relationships/properties on a single machine.
  • Frequently outperforms relational backends by >1000x for many increasingly important use cases.
  • Powerful traversal framework for high-speed traversals in the node space.
  • Small footprint. Neo4j is a single <500k jar with one dependency (the Java Transaction API).
  • Simple and convenient object-oriented API.
  • Retrieving children is trivial in a graph database.
  • No need to flatten and serialize an object graph as graphs are native to a graph database.
  • Fully transactional like a real database. Supports JTA/JTS, XA, 2PC, Tx recovery, deadlock detection, etc.
  • Current implementation is built to handle large graphs that don't fit in memory with durability. It's not a cache, it's a fully persistent transactional store.
  • No events or triggers. Planned in a future release.
  • No sharding. A suggestion for how one might shard is here.
  • Some common graph calculations are missing. For example, in a social network finding a common friend for a set of users.
  • Separates data and logic with a more "natural" representation than tables. This makes it easy to use Neo4j as the storage tier for OO code while keeping behaviour and state separate.
  • Neo4j traverses depths of 1000 levels and beyond at millisecond speed. That's many orders of magnitude faster than relational systems.

Neo4j vs Hadoop

This post makes an illuminating comparison between Neo4j and Hadoop:

    In principle, Hadoop and other Key-Value stores are mostly concerned with relatively flat data structures. That is, they are extremely fast and scalable regarding retrieval of simple objects, like values, documents or even objects.

    However, if you want to do deeper traversal of e.g. a graph, you will have to retrieve the nodes for every traversal step (very fast) and then match them yourself in some manner (e.g. in Java or so) - slow.

    Neo4j in contrast is built around the concept of "deep" data structures. This gives you almost unlimited flexibility regarding the layout of your data and domain object graph and very fast deep traversals (hops over several nodes) since they are handled natively by the Neo4j engine down to the storage layer and not your client code. The drawback is that for huge data amounts (>1 billion nodes) the clustering and partitioning of the graph becomes non-trivial, which is one of the areas we are working on.

    Then of course there are differences in the transaction models, consistency and others, but I hope this gives you a very short philosophical answer :)

It would never have occurred to me to compare the two, but the comparison shows why we need multiple complementary views of data. Hadoop scales the data grid and the compute grid and is more flexible in how data are queried and combined. Neo4j has far lower latencies for complex navigation problems. It's not a zero-sum game.
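
To make the contrast concrete, here is a hedged sketch (hypothetical Python, with a plain dict standing in for a remote key-value store) of the "retrieve the nodes for every traversal step and then match them yourself" pattern: every hop costs another batch of lookups in client code, which is exactly where deep traversals over a flat store get slow.

    # Hypothetical sketch: a plain dict stands in for a remote key-value store,
    # where every kv_store[key] access would really be a network round trip.
    kv_store = {
        "user:1": {"name": "Ann", "friends": ["user:2", "user:3"]},
        "user:2": {"name": "Bob", "friends": ["user:4"]},
        "user:3": {"name": "Cid", "friends": []},
        "user:4": {"name": "Dee", "friends": []},
    }

    def friends_at_depth(start, depth):
        """Client-side traversal: fetch each frontier node, then match/expand
        in application code — one batch of lookups per hop."""
        frontier, seen = {start}, {start}
        for _ in range(depth):
            next_frontier = set()
            for key in frontier:
                record = kv_store[key]                  # one lookup (round trip) per node
                next_frontier.update(f for f in record["friends"] if f not in seen)
            seen |= next_frontier
            frontier = next_frontier
        return {kv_store[k]["name"] for k in frontier}

    print(friends_at_depth("user:1", 2))   # {'Dee'} — friends-of-friends of Ann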

Related Articles

  • Neo4j -- or why graph dbs kick ass
  • The current database debate and graph databases by Anders Nawroth
  • On Building a Stupidly Fast Graph Database by Scott Wheeler and the Hacker News Thread
  • Network Model from wikipedia
  • Databases as a service: FathomDB
  • Using Neo4J to load and query OWL ontologies by Sujit Pal
  • Graph Databases and the Future of Large-Scale Knowledge Management by Marko A. Rodriguez
  • Memo To The Semantic Web: Drop “Semantic” And Become The “Graph Web” by Hank Williams
  • Is the Relational Database Doomed? by Tony Bain
  • Neo Database Introduction
  • Neo4j Email List
  • flare Data Visualization for the Web
  • Giant Global Graph by Tim Berners-Lee
  • Tim Berners-Lee -- Linked Data at TED
  • Drop ACID and Think About Data by Bob Ippolito
  • Analyzing and adapting graph algorithms for large persistent graphs by Patrik Larsson
Wednesday
Jun 10, 2009

Paper: Graph Databases and the Future of Large-Scale Knowledge Management

Relational databases, document databases, and distributed hash tables get most of the hype these days, but there's another option: graph databases. Back to the future it seems. Here's a really interesting paper by Marko A. Rodriguez introducing the graph model and its extension to representing the world wide web of data.

Modern day open source and commercial graph databases can store on the order of 1 billion relationships with some databases reaching the 10 billion mark. These developments are making the graph database practical for applications that require large-scale knowledge structures. Moreover, with the Web of Data standards set forth by the Linked Data community, it is possible to interlink graph databases across the web into a giant global knowledge structure. This talk will discuss graph databases, their underlying data model, their querying mechanisms, and the benefits of the graph data structure for modeling and analysis.