New Book Review: "Designing Data-Intensive Applications"
New book review for Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems, by Martin Kleppmann, O'Reilly, 2017, reposted here:
Kleppmann mentioned during his "Turning the Database Inside Out with Apache Samza" talk at Strange Loop 2014 (see my notes) that he was on sabbatical working on this book, and while waiting quite some time for it to be published, I ended up experimenting with his Bottled Water project as well as Apache Kafka (which was only at release 0.8.2.2 at that point in time). Other reviewers are correct that much of the material included in this book is available elsewhere, but this book is packaged well (although still a heavyweight at 550 pages), with most of the key topics associated with data-intensive applications under one roof, good explanations, and numerous footnotes that point to resources providing additional detail.
Content is broken down into 3 sections and 12 chapters: (1) foundations of data systems, which covers reliable, scalable, and maintainable applications, data models and query languages, storage and retrieval, and encoding and evolution; (2) distributed data, which covers replication, partitioning, transactions, the trouble with distributed systems, and consistency and consensus; and (3) derived data, which covers batch processing, stream processing, and the future of data systems. The latter 6 chapters are weighted more heavily, with chapter 9 on consistency and consensus and chapter 12 on the future of data systems the lengthiest, each comprising about 12% of the book.
Some potential readers might be disappointed that this book is all theory, but while the author does not provide any code, he discusses practical implementations and specific details when applicable for comparisons within a product category. In my opinion, the last chapter is probably the most abstract simply because it explores ideas about how the tools covered in the prior two chapters might be used in the future to build reliable, scalable, and maintainable applications. Similarly, the chapter on the opposite end of this book sets the stage well for any developer of nontrivial applications with its section on thinking about database systems and the concerns around reliability, scalability, and maintainability.
About a year ago, I recall an executive colleague responding to me with a quizzical look when I mentioned that tooling for data and application development is converging over time. Just a few months prior, I had mentioned in a presentation to developers that transactional and analytical capabilities are increasingly being provided by single database products, with one executive in the audience shaking his head in disagreement at my claim that kappa rather than lambda architectures are the way to go. Kleppmann mentions that we typically think of databases, message brokers, caches, etc. as residing in very different categories of tooling because each of these has very different access patterns, meaning different performance characteristics and therefore different implementations.
So why should all of this tooling not be lumped together under an umbrella term such as 'data systems'? Many products for data storage and processing have emerged in recent years, optimized for a variety of use cases and no longer fitting neatly into traditional categories: the boundaries between categories are simply becoming blurred. And since a single tool can no longer satisfy the data processing and storage needs of many applications, work is broken down into tasks that can be performed efficiently on a single system, which is often composed of different tooling stitched together by application code under the covers.
In addition to the author's abundant and effective simple line diagrams, reminiscent of (although more sophisticated than) his earlier diagrams, one aspect that I especially appreciate is the nomenclature comparison between products when walking through terminology. For example, at the beginning of chapter 6, the author specifically calls out the terminological confusion that exists with respect to partitioning. "What we call a 'partition' here is called a 'shard' in MongoDB, Elasticsearch, and SolrCloud; it's known as a 'region' in HBase, a 'tablet' in Bigtable, a 'vnode' in Cassandra and Riak, and a 'vBucket' in Couchbase. However, partitioning is the most established term, so we'll stick to that."
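Whatever a product calls it, the underlying idea is the same: deterministically map each key to one of several partitions. A minimal sketch of hash partitioning (the function and partition count are my own illustration, not from the book):

```python
import hashlib

NUM_PARTITIONS = 4  # illustrative; real systems rebalance partitions dynamically


def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a key to a partition by hashing (simple hash partitioning)."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % num_partitions


# Because every node computes the same mapping, any node can route
# a request for a given key to the partition that owns it.
keys = ["user:1001", "user:1002", "order:42"]
placement = {k: partition_for(k) for k in keys}
```

This is only the simplest scheme; chapter 6 goes on to compare key-range versus hash partitioning and the rebalancing strategies each implies.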
In addition, Kleppmann walks through differences between products when the same terminology is being used, which can also lead to confusion. For example, in chapter 7 the author provides a great 5-page discussion on the meaning of "ACID" (atomicity, consistency, isolation, and durability), which was an effective reminder to me that while this term was coined in 1983 in an effort to establish precise terminology for fault-tolerance mechanisms in databases, in practice one database's implementation of ACID does not equal another's implementation. "Today, when a system claims to be 'ACID compliant', it's unclear what guarantees you can actually expect. ACID has unfortunately become mostly a marketing term."
If you've ever found yourself confused about the concept of "consistency", the author offers a sanity check that your confusion is warranted, not only because the term is "terribly overloaded" with at least four different meanings, but because "the letter C doesn't really belong in ACID" since it was "tossed in to make the acronym work" in the original paper, and that "it wasn't considered important at the time." The reality is that "atomicity, isolation, and durability are properties of the database, whereas consistency (in the ACID sense) is a property of the application. The application may rely on the database's atomicity and isolation properties in order to achieve consistency, but it's not up to the database alone."
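That division of labor can be sketched with a toy example (the class, accounts, and balances are my own invention, not the author's): the database supplies atomicity by applying all writes or none, while the invariant that money is conserved lives entirely in the application's choice of matching debit/credit pairs.

```python
class TinyDB:
    """Toy store offering atomicity: a transaction applies all writes or none."""

    def __init__(self):
        self.balances = {"alice": 100, "bob": 50}

    def transact(self, writes):
        snapshot = dict(self.balances)
        try:
            for account, delta in writes:
                self.balances[account] += delta
                if self.balances[account] < 0:
                    raise ValueError("overdraft")
        except Exception:
            self.balances = snapshot  # atomicity: roll back on any failure
            raise


db = TinyDB()
# The *application* preserves consistency (total money unchanged) by
# always issuing a matching debit and credit; the database merely
# guarantees the pair commits atomically.
db.transact([("alice", -30), ("bob", +30)])
```

If the application issued a lone debit instead, the database would happily commit it atomically; nothing in the database alone enforces the invariant.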
And later in chapter 9, where he discusses consistency and consensus, the author provides a great sidebar on "the unhelpful CAP theorem". As Kleppmann later comments, "the CAP theorem as formally defined is of very narrow scope: it only considers one consistency model (namely linearizability) and one kind of fault (network partitions, or nodes that are alive but disconnected from each other). It doesn't say anything about network delays, dead nodes, or other trade-offs. Thus, although CAP has been historically influential, it has little practical value for designing systems."
The author concludes in a sidebar by commenting that "all in all, there is a lot of misunderstanding and confusion around CAP, and it does not help us understand systems better, so CAP is best avoided." This is because "CAP is sometimes presented as 'Consistency, Availability, Partition tolerance: pick 2 out of 3'. Unfortunately, putting it this way is misleading because network partitions are a kind of fault, so they aren't something about which you have a choice: they will happen whether you like it or not…A better way of phrasing CAP would be 'either Consistent or Available when Partitioned'. A more reliable network needs to make this choice less often, but at some point the choice is inevitable."
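The "either Consistent or Available when Partitioned" phrasing can be illustrated with a toy replica (the class and attribute names here are my own sketch): once the node cannot reach its peers, it must either refuse the read or answer from possibly stale local state.

```python
class Replica:
    """Toy replica forced to choose between consistency and availability."""

    def __init__(self, prefer_consistency: bool):
        self.prefer_consistency = prefer_consistency
        self.local_value = "v1"   # last value this node saw
        self.partitioned = False  # can we reach the other replicas?

    def read(self):
        if self.partitioned and self.prefer_consistency:
            # CP choice: refuse rather than risk returning stale data
            raise TimeoutError("partitioned; cannot confirm latest value")
        # AP choice (or a healthy network): serve what we have locally
        return self.local_value


cp, ap = Replica(prefer_consistency=True), Replica(prefer_consistency=False)
cp.partitioned = ap.partitioned = True
ap.read()    # answers from local state, which may be stale
# cp.read()  # would raise TimeoutError until the partition heals
```

While the network is healthy, both replicas behave identically; the choice only surfaces during a partition, which is exactly Kleppmann's point about the "pick 2 of 3" framing being misleading.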
While the second section of this text on distributed data was most beneficial to me, the third section on derived data was least beneficial, mainly because I'm already familiar with these topics from recent readings and experience, and because I needed to refamiliarize myself with the content discussed in the second section. However, the author presents derived data well, and I certainly do not recommend skipping this section. As Kleppmann comments, the issues around integrating multiple different data systems into one coherent application architecture are often overlooked by vendors who claim that their product can satisfy all of your needs. In reality, integrating disparate systems (which can be grouped into the two broad categories of "systems of record" and "derived data systems") is one of the most important things that needs to be done in a nontrivial application. I highly recommend this text.