As currently implemented, Diskv.ReadStream is not a purely streaming reader: it buffers a complete copy of the value (in the siphon) before attempting to insert it into the cache. With large values this is a recipe for memory exhaustion.
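For context, the buffering pattern is roughly this (a simplified sketch, not the actual diskv source; the type and the cache hook here are approximations):

```go
package sketch

import (
	"bytes"
	"io"
)

// siphonReader approximates what ReadStream hands back today: every byte
// served to the caller is also copied into buf, and only when the underlying
// file hits EOF is the complete buffer offered to the cache. For a very large
// value, buf alone is enough to exhaust memory.
type siphonReader struct {
	f     io.ReadCloser
	buf   bytes.Buffer
	key   string
	cache func(key string, val []byte) // stand-in for the cache insert
}

func (s *siphonReader) Read(p []byte) (int, error) {
	n, err := s.f.Read(p)
	if n > 0 {
		s.buf.Write(p[:n]) // a full in-memory copy of the value accumulates here
	}
	if err == io.EOF {
		s.cache(s.key, s.buf.Bytes()) // cached only after the whole value is buffered
	}
	return n, err
}

func (s *siphonReader) Close() error { return s.f.Close() }
```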
Ideally we would stream the value directly into the cache as the io.ReadCloser returned by ReadStream is consumed, checking the cache size as we go. I started down this path, but it creates another race condition, because the write into the cache is no longer atomic: we cannot know when the ReadCloser will finish consuming the entry, and it is entirely possible for other callers to begin Reading the same key-value pair while we are still writing it. The next step down that road is for readers to take a lock per cache entry, released only once the caller Closes the ReadCloser. This quickly became a web of mutexes and synchronisation hacks that felt very unidiomatic Go.
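To make the race concrete, a truly streaming variant would have to look something like the sketch below. This is hypothetical code, not something I'm proposing to merge; keyMu, commit, and abort are made-up hooks, not part of the diskv API. The per-key lock can only be released in Close, which is exactly where the web of mutexes starts:

```go
package sketch

import (
	"io"
	"os"
	"sync"
)

// streamingCacheReader builds the cache entry incrementally as the caller
// reads, checking a size budget as it goes, and holds a per-key lock until
// Close so no other goroutine can read or write the same key while the entry
// is half-written.
type streamingCacheReader struct {
	f       *os.File
	keyMu   *sync.Mutex      // per-key lock, acquired before this reader is returned
	entry   []byte           // cache entry under construction
	budget  int              // remaining cache capacity when the read began
	skipped bool             // set once the value proves too big to cache
	done    bool
	commit  func(val []byte) // finalize the cache entry
	abort   func()           // discard the partial entry
}

func (r *streamingCacheReader) Read(p []byte) (int, error) {
	n, err := r.f.Read(p)
	if n > 0 && !r.skipped {
		if len(r.entry)+n > r.budget {
			r.entry, r.skipped = nil, true // won't fit: stop caching, keep streaming
		} else {
			r.entry = append(r.entry, p[:n]...) // entry grows as the caller consumes
		}
	}
	if err == io.EOF && !r.done && !r.skipped {
		r.commit(r.entry)
		r.done = true
	}
	return n, err
}

func (r *streamingCacheReader) Close() error {
	if !r.done {
		r.abort() // caller bailed out early: throw away the partial entry
	}
	r.keyMu.Unlock() // only now may other readers/writers touch this key
	return r.f.Close()
}
```

(And any caller that forgets to Close would block every other reader of that key indefinitely, which is part of why this felt so fragile.)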
Various simple "solutions" exist (e.g. prechecking the size of the file against the cache size before starting to siphon), but they are all inherently racy and could still lead to memory exhaustion under stressful conditions. (We could also just take a global write lock during reads, but that wouldn't be very nice to other readers, would it?)
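For completeness, the precheck I have in mind is roughly this (again hypothetical; cacheFree is an assumed snapshot of remaining cache capacity), with the race marked in a comment:

```go
package sketch

import (
	"io"
	"os"
)

// readStreamWithPrecheck is the "simple solution" variant: Stat the file and
// only attach the buffering siphon when the value would fit in the cache.
// The check and the subsequent reads are not atomic, so concurrent readers
// can each pass the check and then collectively blow past the budget anyway.
func readStreamWithPrecheck(path string, cacheFree uint64) (io.ReadCloser, bool, error) {
	fi, err := os.Stat(path)
	if err != nil {
		return nil, false, err
	}
	if uint64(fi.Size()) > cacheFree {
		f, err := os.Open(path)
		return f, false, err // too big: stream straight from disk, skip the cache
	}
	// RACE: between the Stat above and the moment the caller finishes reading,
	// other reads may have consumed the free space we just measured.
	f, err := os.Open(path)
	return f, true, err // second return value: "buffer and cache this one"
}
```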
@peterbourgon @philips thoughts?