
Conversation

abjeni (Collaborator) commented May 7, 2025

This PR replaces QuorumSpec with Responses.
Instead of a quorum call returning the result of a quorum function, it returns an iterator called Responses, which can be used to compute a quorum result in much the same way a quorum function would.

Example Comparison

With QuorumSpec:

type qspec struct {
	cfgSize int
}

// ReadQCQF is the quorum function for the ReadQC
// ordered quorum call method. The in parameter is the request object
// supplied to the ReadQC method at call time, and may or may not
// be used by the quorum function. If the in parameter is not needed
// you should implement your quorum function with '_ *ReadRequest'.
func (q qspec) ReadQCQF(_ *proto.ReadRequest, replies map[uint32]*proto.ReadResponse) (*proto.ReadResponse, bool) {
	// wait until at least half of the replicas have responded
	if len(replies) <= q.cfgSize/2 {
		return nil, false
	}
	// return the value with the most recent timestamp
	return newestValue(replies), true
}

func (repl) readQC(args []string, cfg *pb.Configuration) {
	...
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	resp, err := cfg.ReadQC(ctx, pb.ReadRequest_builder{Key: args[0]}.Build())
	cancel()
	...
}
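
The newestValue helper used by ReadQCQF is not shown in this PR description. A minimal sketch, assuming the response exposes its timestamp through a GetTime() accessor (an assumed name, not part of this PR), could look like:

// newestValue picks the reply with the largest timestamp from the map of
// replies. NOTE: GetTime() is an assumed accessor for the response's
// timestamp field; adapt it to the actual proto schema.
func newestValue(replies map[uint32]*proto.ReadResponse) *proto.ReadResponse {
	var newest *proto.ReadResponse
	for _, r := range replies {
		if newest == nil || r.GetTime() > newest.GetTime() {
			newest = r
		}
	}
	return newest
}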

With the Responses iterator:

// readQF folds the replies from the iterator into the newest value and
// returns it once more than half of the replicas have responded.
func readQF(replies gorums.Responses[*pb.ReadResponse], quorum int) (*pb.ReadResponse, error) {
	var newest *pb.ReadResponse
	replyCount := 0
	for reply := range replies.IgnoreErrors() {
		newest = newestValueOfTwo(newest, reply.Msg)
		replyCount++
		if replyCount > quorum {
			return newest, nil
		}
	}
	return nil, errors.New("storage.readqc: quorum not found")
}

func (repl) readQC(args []string, cfg *pb.Configuration) {
	...
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	req := pb.ReadRequest_builder{Key: args[0]}.Build()
	resp, err := readQF(cfg.ReadQC(ctx, req), cfg.Size()/2)
	cancel()
	...
}
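
The corresponding newestValueOfTwo helper is also not shown; a minimal sketch under the same assumption about a GetTime() accessor:

// newestValueOfTwo returns whichever of the two responses has the larger
// timestamp. nil arguments are tolerated so the helper can seed the fold
// in readQF. NOTE: GetTime() is an assumed accessor; adapt it to the
// actual proto schema.
func newestValueOfTwo(a, b *pb.ReadResponse) *pb.ReadResponse {
	if a == nil {
		return b
	}
	if b == nil {
		return a
	}
	if b.GetTime() > a.GetTime() {
		return b
	}
	return a
}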

Benchmark results

gorums v0.9.0 benchmark
7 servers
client flags: -config-size 0 -max-async 10 -time 100s -warmup 10s
server flags: -server-buffer 4

Benchmark          Throughput          Latency     Std.dev    Client+Servers                     
AsyncQuorumCall    45121.40 ops/sec    0.59 ms     3.99 ms    23168 B/op        451 allocs/op    
Multicast          37550.46 ops/sec    0.51 ms     8.72 ms    45844 B/op        727 allocs/op    
QuorumCall         5806.83 ops/sec     0.17 ms     0.03 ms    23556 B/op        546 allocs/op    
SlowServer         87.26 ops/sec       11.46 ms    0.09 ms    24656 B/op        578 allocs/op    

Benchmark          Throughput          Latency     Std.dev    Client+Servers                     
AsyncQuorumCall    44780.16 ops/sec    0.50 ms     3.59 ms    23173 B/op        452 allocs/op    
Multicast          35870.83 ops/sec    0.66 ms     9.16 ms    45830 B/op        728 allocs/op    
QuorumCall         4895.39 ops/sec     0.20 ms     0.29 ms    23630 B/op        548 allocs/op    
SlowServer         87.25 ops/sec       11.46 ms    0.09 ms    24663 B/op        578 allocs/op    

Benchmark          Throughput          Latency     Std.dev    Client+Servers                     
AsyncQuorumCall    44220.72 ops/sec    0.53 ms     4.23 ms    23171 B/op        453 allocs/op    
Multicast          36867.41 ops/sec    0.41 ms     6.37 ms    45808 B/op        728 allocs/op    
QuorumCall         5811.10 ops/sec     0.17 ms     0.06 ms    23602 B/op        549 allocs/op    
SlowServer         87.24 ops/sec       11.46 ms    0.09 ms    24659 B/op        578 allocs/op    

Benchmark          Throughput          Latency     Std.dev     Client+Servers                     
AsyncQuorumCall    45508.26 ops/sec    0.53 ms     3.44 ms     23162 B/op        452 allocs/op    
Multicast          35690.53 ops/sec    0.83 ms     13.79 ms    45817 B/op        733 allocs/op    
QuorumCall         4841.58 ops/sec     0.21 ms     0.32 ms     23641 B/op        549 allocs/op    
SlowServer         87.27 ops/sec       11.46 ms    0.10 ms     24659 B/op        578 allocs/op 

Iterator benchmark
7 servers
client flags: -config-size 0 -max-async 10 -time 100s -warmup 10s
server flags: -server-buffer 4

Benchmark          Throughput          Latency     Std.dev     Client+Servers
AsyncQuorumCall    74103.21 ops/sec    12.71 ms    32.62 ms    22637 B/op        445 allocs/op
Multicast          37793.45 ops/sec    0.44 ms     7.25 ms     47439 B/op        766 allocs/op
QuorumCall         5689.57 ops/sec     0.18 ms     0.04 ms     23244 B/op        556 allocs/op
SlowServer         87.33 ops/sec       11.45 ms    0.10 ms     24264 B/op        585 allocs/op

Benchmark          Throughput          Latency     Std.dev     Client+Servers
AsyncQuorumCall    76608.30 ops/sec    13.98 ms    33.92 ms    22642 B/op        445 allocs/op
Multicast          38652.91 ops/sec    0.44 ms     6.30 ms     45802 B/op        728 allocs/op
QuorumCall         5785.90 ops/sec     0.17 ms     0.05 ms     23199 B/op        553 allocs/op
SlowServer         87.35 ops/sec       11.45 ms    0.09 ms     24262 B/op        585 allocs/op

Benchmark          Throughput          Latency     Std.dev     Client+Servers
AsyncQuorumCall    78888.81 ops/sec    9.67 ms     25.13 ms    22643 B/op        445 allocs/op
Multicast          37392.28 ops/sec    0.68 ms     8.62 ms     45769 B/op        728 allocs/op
QuorumCall         5580.44 ops/sec     0.18 ms     0.10 ms     23231 B/op        556 allocs/op
SlowServer         87.55 ops/sec       11.42 ms    0.13 ms     24265 B/op        585 allocs/op

Benchmark          Throughput          Latency     Std.dev     Client+Servers
AsyncQuorumCall    71089.62 ops/sec    8.25 ms     23.04 ms    22614 B/op        446 allocs/op
Multicast          36917.05 ops/sec    0.79 ms     8.70 ms     45792 B/op        728 allocs/op
QuorumCall         5773.34 ops/sec     0.17 ms     0.32 ms     23240 B/op        556 allocs/op
SlowServer         87.35 ops/sec       11.45 ms    0.11 ms     24260 B/op        585 allocs/op

Issues

An example of how to use async/correctable calls could be added.
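
Until such an example lands, here is a rough sketch of async-style usage built only from the calls already shown above (readQF, cfg.ReadQC) plus standard Go concurrency. The readQCAsync and readResult names are hypothetical, and this is not the gorums async/correctable API, just an illustration:

type readResult struct {
	resp *pb.ReadResponse
	err  error
}

// readQCAsync runs the quorum call in a background goroutine and delivers
// the outcome on a buffered channel, so the caller can continue other work
// and select on the result (or on ctx.Done()) later.
func readQCAsync(ctx context.Context, cfg *pb.Configuration, key string) <-chan readResult {
	out := make(chan readResult, 1)
	go func() {
		req := pb.ReadRequest_builder{Key: key}.Build()
		resp, err := readQF(cfg.ReadQC(ctx, req), cfg.Size()/2)
		out <- readResult{resp: resp, err: err}
	}()
	return out
}

A correctable-style variant could instead stream each intermediate newest value on the channel as replies arrive.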

deepsource-io bot (Contributor) commented May 7, 2025

Here's the code health analysis summary for commits e1cadd6..f25c47d.

Analysis Summary

Analyzer    Status        Summary
Go          ❌ Failure    4 occurrences introduced, 4 occurrences resolved
Shell       ✅ Success

meling added a commit that referenced this pull request Nov 27, 2025
This changes the iterator API from iter.Seq2[uint32, Result[T]] to a
cleaner iter.Seq[Result[T]] pattern, and a type alias Results[T] which
can serve as receiver for methods on said iterator. This simplifies
the API by consolidating node ID and result information into a single
Result[T] value, making iteration more ergonomic, despite not following
Go's function-based iterator patterns.

Key changes:

- Introduce Results[T] type alias for iter.Seq[Result[T]]
- Change ClientCtx.Responses() return type from iter.Seq2 to Results[T]
- Update iterator helper methods to be methods on Results[T]:
  * IgnoreErrors() now returns Results[T] instead of iter.Seq2[uint32, T]
  * Add Filter() method for generic result filtering
  * CollectN() and CollectAll() now methods on Results[T]
- Update all iterator consumers to use single-value iteration pattern
- Constrain ClientCtx type parameters to msg (proto.Message) type

Benefits:

- Simpler iteration: `for result := range ctx.Responses()` vs
  `for nodeID, result := range ctx.Responses()`
- More composable: method chaining like `ctx.Responses().IgnoreErrors().CollectAll()`
- Consistent: Result[T] already contains NodeID, no need to pass separately
- Cleaner: Filter() operates on complete Result[T] values

This borrows from Asbjørn Salhus's design in PR #230, which I now
agree is better than Go's function-based iterator pattern because
of its significantly better composability. That is, you avoid
composing with functions that would look like:

 gorums.IgnoreErrors(ctx.Responses())...

and even worse when there are many iterators being composed.
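
For reference, a minimal self-contained sketch of what such a Results[T] type with chainable helpers might look like. The names follow the commit message above; a defined type is used rather than a pure alias, since Go does not allow methods on alias types, so the actual gorums implementation may differ:

package gorums

import "iter"

// Result pairs a node ID with that node's reply or error (illustrative).
type Result[T any] struct {
	NodeID uint32
	Msg    T
	Err    error
}

// Results is a defined type over iter.Seq[Result[T]] so that helper
// methods can be declared on it.
type Results[T any] iter.Seq[Result[T]]

// IgnoreErrors yields only the results whose Err is nil.
func (r Results[T]) IgnoreErrors() Results[T] {
	return func(yield func(Result[T]) bool) {
		for res := range r {
			if res.Err == nil && !yield(res) {
				return
			}
		}
	}
}

// CollectAll drains the iterator and returns every result.
func (r Results[T]) CollectAll() []Result[T] {
	var all []Result[T]
	for res := range r {
		all = append(all, res)
	}
	return all
}

With this shape, composition reads as ctx.Responses().IgnoreErrors().CollectAll(), as described in the commit message.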

meling (Member) commented Dec 26, 2025

A variant of this design was reimplemented and merged in #245

meling closed this pull request Dec 26, 2025