CMSC 818e Day 7


These are notes taken during CMSC 818e: Distributed and Cloud-Based Storage Systems. Course webpage and syllabus here.

Day Seven

Discussion of the Bayou system paper; my write-ups of the paper are here.

Novel Design Choices

  • The topology of the distributed system is visible to the application; high performance relies on exposing this topology. (Note: this can also be handled with another layer between the network and the application, e.g. an overlay network that sets up a spanning tree, finds neighbors that are actually close to you, or probes several different replicas with a latency test to find out which one is best.)
  • Application-specific conflict detection and resolution. Before this, most systems were pessimistic: assume a conflict may occur and ensure that it never happens. This is bad for distributed systems because it slows things down and performs poorly over low-bandwidth links; the conservative approach requires full connectivity between all replicas at the same time. The purely optimistic approach is also a bit naive. Getting information from the application means that you can identify and react to conflicts more effectively. (In the case of Bayou, conflict is defined by the application as a SQL dependency check; see the sketch after this list.)
  • Merge procs: environment-aware code that runs even while disconnected. (Note: these were written in Tcl.)
  • Partial, multi-object updates (we’re assuming one write at a time – which is slow, but will ensure consistency, eventually)
  • “Stable” vs. “tentative” writes: a write is tentative until the primary commits it, and tentative writes may be rolled back and reapplied in a different order.
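To make the conflict machinery concrete, here is a minimal Go sketch of a Bayou-style write as a (dependency check, update, merge procedure) triple, using the paper's meeting-room scheduling example. The Store type and all names are illustrative stand-ins, not Bayou's actual API (Bayou's dependency checks were SQL queries and its merge procs were Tcl):

```go
package main

import "fmt"

// Store is a stand-in for Bayou's replicated database (hypothetical).
type Store map[string]string

// Write bundles an update with its conflict machinery, mirroring the
// (dependency check, update, merge procedure) triple from the paper.
type Write struct {
	DepCheck  func(s Store) bool // application-defined conflict test
	Update    func(s Store)      // the intended change
	MergeProc func(s Store)      // runs only if DepCheck fails
}

// Apply executes a write the way a replica would: run the dependency
// check first, and fall back to the merge procedure on conflict.
func Apply(s Store, w Write) {
	if w.DepCheck(s) {
		w.Update(s)
		return
	}
	w.MergeProc(s) // the application, not the user, resolves the conflict
}

func main() {
	s := Store{}
	// Book room A only if it is still free; otherwise fall back to room B.
	book := Write{
		DepCheck:  func(s Store) bool { _, taken := s["roomA"]; return !taken },
		Update:    func(s Store) { s["roomA"] = "alice" },
		MergeProc: func(s Store) { s["roomB"] = "alice" },
	}
	Apply(s, book)
	fmt.Println(s) // map[roomA:alice]
}
```

The design point is that the merge procedure runs automatically, with no user in the loop, so every replica that applies the same log in the same order reaches the same state.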

Problems

  • De-partitioning could be a problem: there will be big logs that have to be re-written, with many operations rolled back (see the sketch after this list).
  • It’s sort of bogus as a peer-to-peer system: there’s actually a primary master (they call it the “primary”), and nothing is committed until it reaches the primary.
  • Was abandoned because they couldn’t get it to work with any real applications.
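As a toy illustration of the de-partitioning cost, here is a hedged Go sketch of folding writes learned during anti-entropy into a timestamp-ordered tentative log. The sketch only shows the reordering; in Bayou, every write after the insertion point would also have to be undone and reapplied:

```go
package main

import (
	"fmt"
	"sort"
)

// A tentative write, ordered by logical timestamp as in Bayou's write log.
type Write struct {
	TS int    // logical timestamp
	Op string // placeholder for the actual operation
}

// merge folds writes learned during anti-entropy into the tentative log
// in timestamp order. In Bayou, every write after the earliest insertion
// point would then be undone and reapplied, which is why healing a long
// partition is expensive.
func merge(log, incoming []Write) []Write {
	log = append(log, incoming...)
	sort.Slice(log, func(i, j int) bool { return log[i].TS < log[j].TS })
	return log
}

func main() {
	local := []Write{{TS: 1, Op: "a"}, {TS: 5, Op: "c"}, {TS: 9, Op: "d"}}
	remote := []Write{{TS: 2, Op: "b"}} // learned after a partition heals
	// Writes c and d now sit after b, so both would be rolled back and replayed.
	fmt.Println(merge(local, remote))
}
```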

Orthogonal Points

  • Gossip protocols (aka “epidemic algorithms”) can be slow, but are topology-independent. The strategy is durable and survives partitions. It’s good at dissemination, and you don’t have to reconfigure if one of the servers goes down (which isn’t the case for spanning trees). You sacrifice speed but get resiliency. All decisions are made locally, so there’s no centralized decision making. A minimal sketch follows this list.
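A minimal Go sketch of the idea, assuming random pairwise exchange of a toy update set (the Replica type and antiEntropy helper are hypothetical, not from any paper):

```go
package main

import (
	"fmt"
	"math/rand"
)

// Replica holds the set of updates it has seen (a toy stand-in for a log).
type Replica struct {
	ID      int
	Updates map[string]bool
}

// antiEntropy copies over any updates the peer has that we lack. Each
// replica picks its own partner locally; there is no coordinator and no
// fixed topology.
func antiEntropy(a, b *Replica) {
	for u := range b.Updates {
		a.Updates[u] = true
	}
}

func main() {
	const n = 8
	replicas := make([]*Replica, n)
	for i := range replicas {
		replicas[i] = &Replica{ID: i, Updates: map[string]bool{}}
	}
	replicas[0].Updates["write-1"] = true // seed a single update

	// Rounds of random pairwise exchange; coverage spreads epidemically.
	for round := 1; ; round++ {
		for _, r := range replicas {
			antiEntropy(r, replicas[rand.Intn(n)])
		}
		have := 0
		for _, r := range replicas {
			if r.Updates["write-1"] {
				have++
			}
		}
		fmt.Printf("round %d: %d/%d replicas have the update\n", round, have, n)
		if have == n {
			break
		}
	}
}
```

Because each replica chooses its partner locally, the protocol keeps working across partitions and server failures; the fraction of replicas holding an update grows roughly exponentially per round, which is the “epidemic” in the name.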

Takeaways

  • Anti-entropy is useful, as long as the system doesn’t depend on it as the only path for updates.
  • Eventual consistency
  • Real applications > toy applications
  • Cloud providers (AWS, Google Cloud, MS Azure, etc.) expose the availability zone of the servers to applications.
  • Non-determinism based on resource constraints: you have to make sure that if one replica fails, all of them fail (e.g. by setting identical memory ceilings; a sketch follows this list).
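One way to read that last point: give every replica the same hard ceiling so that an out-of-memory failure depends only on the workload, not on which machine it runs on. A hedged Go sketch, with a hypothetical ceiling value and check placement:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

// memoryCeiling is a cluster-wide limit (hypothetical value). Enforcing
// the same ceiling on every replica makes out-of-memory failures
// deterministic: a workload that kills one replica kills all of them,
// instead of failing only on the smallest machine.
const memoryCeiling = 512 << 20 // 512 MiB

// checkCeiling aborts the process if the heap has grown past the ceiling.
func checkCeiling() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	if m.HeapAlloc > memoryCeiling {
		fmt.Fprintln(os.Stderr, "memory ceiling exceeded; aborting")
		os.Exit(1)
	}
}

func main() {
	checkCeiling() // call at operation boundaries, e.g. after each write
	fmt.Println("under the ceiling")
}
```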
