
Conversation

@alexandru alexandru commented Feb 27, 2017

Changes in version 2:

  • change the in-memory cache implementation:
    • provide an immutable cache data-structure
    • the mutable cache should be based on the immutable one
      • also make it distribute the load over a configurable number of atomic references, for performance (see the sketch after this list)
  • SpyMemcached futures now expose addListener, so we no longer have to use its dirty internals for exposing futures and tasks
  • provide integration with Monix's Task, in addition to the methods exposing CancelableFuture
  • split the project into shade-local and shade-memcached, the idea being that Shade should be a project about "caching" in general, so it could eventually provide other caching integrations as well (e.g. Redis?)
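
To illustrate the new cache design, here is a hedged sketch with hypothetical names and signatures (not the actual shade-local API): an immutable cache that keeps an expiry timestamp per entry, plus a mutable facade that stripes updates across several atomic references keyed by the entry's hash.

import java.util.concurrent.atomic.AtomicReference
import scala.concurrent.duration._

// Hypothetical immutable cache: `set` returns an updated copy, `get` filters
// out entries whose expiry timestamp has passed.
final case class ImmutableCache[K, V](entries: Map[K, (V, Long)]) {
  def set(key: K, value: V, exp: FiniteDuration, now: Long): ImmutableCache[K, V] =
    copy(entries = entries.updated(key, (value, now + exp.toMillis)))
  def get(key: K, now: Long): Option[V] =
    entries.get(key).collect { case (v, expiresAt) if expiresAt > now => v }
}

// Mutable facade built on top of the immutable one: each key maps to one of
// `stripes` atomic references, so concurrent writers rarely contend on the
// same reference.
final class StripedCache[K, V](stripes: Int) {
  private[this] val refs =
    Array.fill(stripes)(new AtomicReference(ImmutableCache[K, V](Map.empty)))

  private def stripeFor(key: K): AtomicReference[ImmutableCache[K, V]] =
    refs(math.abs(key.hashCode % stripes))

  @annotation.tailrec
  def set(key: K, value: V, exp: FiniteDuration): Unit = {
    val ref = stripeFor(key)
    val current = ref.get()
    val updated = current.set(key, value, exp, System.currentTimeMillis())
    // Classic lock-free update: retry only if another writer raced us on this stripe.
    if (!ref.compareAndSet(current, updated)) set(key, value, exp)
  }

  def get(key: K): Option[V] =
    stripeFor(key).get().get(key, System.currentTimeMillis())
}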

For review, it might be easier to just review the files in the version2 branch.

@alexandru alexandru added this to the 2.0 milestone Feb 27, 2017
@alexandru alexandru requested a review from lloydmeta February 27, 2017 11:53
@alexandru alexandru requested a review from aoprisan February 27, 2017 13:20

codecov bot commented Feb 27, 2017

Codecov Report

❗ No coverage uploaded for pull request base (master@22c216b).
The diff coverage is 34.96%.

@@            Coverage Diff            @@
##             master      #50   +/-   ##
=========================================
  Coverage          ?   34.69%           
=========================================
  Files             ?        8           
  Lines             ?      392           
  Branches          ?       48           
=========================================
  Hits              ?      136           
  Misses            ?      256           
  Partials          ?        0
Impacted Files Coverage Δ
...src/main/scala/shade/memcached/Configuration.scala 0% <ø> (ø)
...mcached/src/main/scala/shade/memcached/Codec.scala 0% <0%> (ø)
.../src/main/scala/shade/memcached/SpyMemcached.scala 0% <0%> (ø)
...hed/src/main/scala/shade/memcached/Memcached.scala 0% <0%> (ø)
...ocal/jvm/src/main/scala/shade/local/Platform.scala 0% <0%> (ø)
...local/js/src/main/scala/shade/local/Platform.scala 0% <0%> (ø)
...ain/scala/shade/local/mutable/TimeBasedCache.scala 61.22% <61.22%> (ø)
...n/scala/shade/local/immutable/TimeBasedCache.scala 92.68% <92.68%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 22c216b...3b5c1bb

@lloydmeta
Collaborator

Just skimmed over and it looks promising 👍 I'll have a look over the weekend.

Any idea what kind of performance impact this has?

@lloydmeta lloydmeta left a comment

Overall, looks pretty good. I really like the improvements to CI and the build.

As I mentioned before, I'm a bit concerned about not knowing the performance impact of this upgrade, since a Memcached client should be extremely performant (in #33 you mentioned performance should be a goal of this project).

Just left a few comments/questions.

</developer>
</developers>
lazy val scalaJSSettings = Seq(
coverageExcludedFiles := ".*"
Collaborator

Any particular reason for this? Minitest works cross-platform right?

Member Author

This isn't about Minitest. I've set up shade-local to cross-compile to Scala.js, since there's nothing JVM-specific in it and, who knows, it might be useful to somebody in the browser :-)

The problem is that sbt-scoverage doesn't work with Scala.js. So we have to exclude all Scala.js files from coverage reports.
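
For illustration, roughly how that exclusion gets wired into the build; the module and value names here are illustrative and may not match the PR's exact layout, but coverageExcludedFiles is the real sbt-scoverage setting shown in the diff above.

lazy val scalaJSSettings = Seq(
  // sbt-scoverage can't instrument Scala.js output, so exclude every file
  // from coverage reports on the JS side.
  coverageExcludedFiles := ".*"
)

lazy val localJS = project.in(file("shade-local/js"))
  .enablePlugins(ScalaJSPlugin)
  .settings(scalaJSSettings)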

import scala.collection.immutable.SortedMap
import scala.concurrent.duration._

/** Describes an immutable cache data-structure.
Collaborator

Is there a use case for this cache outside of, say, testing?

Member Author

The immutable cache thing would be useful in combination with streaming:

observable.scan((cache, elem) => cache.set(elem.key, elem.value))

People working with Observable have done this many times, aggregating data in simple Map references. Adding expiry logic for those values can also be very useful.
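
To make that concrete, a short sketch using Monix's Observable.scan, reusing the hypothetical ImmutableCache shape sketched in the PR description above (the real shade-local API may differ):

import monix.reactive.Observable
import scala.concurrent.duration._

final case class Event(key: String, value: Int)

// Fold the stream into successive immutable cache snapshots; entries that
// have expired simply stop being visible on reads.
def aggregate(events: Observable[Event]): Observable[ImmutableCache[String, Int]] =
  events.scan(ImmutableCache[String, Int](Map.empty)) { (cache, elem) =>
    cache.set(elem.key, elem.value, 10.minutes, System.currentTimeMillis())
  }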

* https://github.com/monix/shade/blob/master/LICENSE.txt
*/

package shade.memcached
Collaborator

I suppose going forward Codecs will be strictly tied to being used with Memcached?

Member Author

I don't know how to handle it otherwise. Because we are now using the high-level API, we need to work with Transcoder, which isn't a bad thing because we can now also work with flags.

Bringing in SpyMemcached's Transcoder implementations has these benefits:

  1. data persisted by Shade can be read by SpyMemcached; the problem is that SpyMemcached's Transcoder implementations reject values that aren't stored with the proper flags, so with the current version you'd have to implement your own Transcoder that just deals with Array[Byte], like the current Codec does (see the sketch below)
  2. SpyMemcached's SerializingTranscoder starts compressing data once it goes over a certain size, which is cool

I'm not sure that making Codec specific to SpyMemcached is a bad thing: once we rely on those flags, they become an implementation detail that doesn't seem reusable elsewhere.
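
For reference, a minimal sketch of the raw-bytes Transcoder mentioned in point 1; this is not code from the PR, just an illustration of the idea. It ignores SpyMemcached's flag conventions and passes Array[Byte] straight through, roughly what the old Codec-based format amounts to.

import net.spy.memcached.CachedData
import net.spy.memcached.transcoders.Transcoder

final class RawByteTranscoder extends Transcoder[Array[Byte]] {
  // No asynchronous decoding needed for a plain byte pass-through.
  def asyncDecode(d: CachedData): Boolean = false
  // Flags are left at 0, so SpyMemcached's built-in transcoders won't
  // interpret these values the same way.
  def encode(o: Array[Byte]): CachedData =
    new CachedData(0, o, CachedData.MAX_SIZE)
  def decode(d: CachedData): Array[Byte] = d.getData
  def getMaxSize(): Int = CachedData.MAX_SIZE
}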

client.add(withPrefix(key), expSecs, value, codec)
},
(async, callback) => {
async.addListener(new OperationCompletionListener {
Collaborator

👍
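
For readers skimming the diff, a hedged sketch of the listener-based bridging this excerpt relies on: register a listener on SpyMemcached's OperationFuture and complete a Scala Promise once the operation finishes. The helper name is made up; the PR's actual wiring goes through its own Future/Task builders.

import scala.concurrent.{Future, Promise}
import scala.util.Try
import net.spy.memcached.internal.{OperationCompletionListener, OperationFuture}

def toScalaFuture[A](async: OperationFuture[A]): Future[A] = {
  val p = Promise[A]()
  async.addListener(new OperationCompletionListener {
    def onComplete(f: OperationFuture[_]): Unit = {
      // The operation has already completed, so `get` does not block here.
      p.complete(Try(async.get()))
      ()
    }
  })
  p.future
}

// A Monix Task can then be built lazily on top of this,
// e.g. Task.deferFuture(toScalaFuture(op)).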

}
}
}
//
Collaborator

There are a bunch of commented-out tests here. TODOs?

Member Author

Yes 😄 Tests need to be fixed.

SerializingCodecAnyRef.asInstanceOf[Codec[A]]

/** Reusable reference for [[serializingCodec]]. */
private object SerializingCodecAnyRef extends Codec[Any] {
Collaborator

I wonder if it might be worthwhile to add Shapeless to build an automatically derived generic codec that can be tried before we fall back to something that relies on Java serialisation.

Member Author

Not sure, never did that, can you share a sample? Wouldn't that also involve macros?
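
For context, here is a minimal sketch of what such a Shapeless-based derivation could look like. Shapeless' Generic is macro-backed, but no macros need to be written by hand. The BCodec type class, the primitive instance and the length-prefixed encoding are hypothetical stand-ins, not part of this PR.

import java.nio.ByteBuffer
import shapeless.{::, Generic, HList, HNil, Lazy}

// Hypothetical byte-level codec type class, standing in for shade's Codec.
trait BCodec[A] {
  def serialize(a: A): Array[Byte]
  def deserialize(bytes: Array[Byte]): A
}

object BCodec {
  def apply[A](implicit c: BCodec[A]): BCodec[A] = c

  // One primitive instance, purely for illustration.
  implicit val intCodec: BCodec[Int] = new BCodec[Int] {
    def serialize(a: Int): Array[Byte] = ByteBuffer.allocate(4).putInt(a).array()
    def deserialize(bytes: Array[Byte]): Int = ByteBuffer.wrap(bytes).getInt
  }

  // HList instances: each field is length-prefixed so fields can be split apart again.
  implicit val hnilCodec: BCodec[HNil] = new BCodec[HNil] {
    def serialize(a: HNil): Array[Byte] = Array.empty
    def deserialize(bytes: Array[Byte]): HNil = HNil
  }

  implicit def hconsCodec[H, T <: HList](
    implicit head: Lazy[BCodec[H]], tail: BCodec[T]): BCodec[H :: T] =
    new BCodec[H :: T] {
      def serialize(a: H :: T): Array[Byte] = {
        val h = head.value.serialize(a.head)
        ByteBuffer.allocate(4 + h.length).putInt(h.length).put(h).array() ++
          tail.serialize(a.tail)
      }
      def deserialize(bytes: Array[Byte]): H :: T = {
        val len = ByteBuffer.wrap(bytes).getInt
        val (h, t) = bytes.drop(4).splitAt(len)
        head.value.deserialize(h) :: tail.deserialize(t)
      }
    }

  // Case classes get a codec through their Generic (HList) representation.
  implicit def genericCodec[A, R](
    implicit gen: Generic.Aux[A, R], repr: Lazy[BCodec[R]]): BCodec[A] =
    new BCodec[A] {
      def serialize(a: A): Array[Byte] = repr.value.serialize(gen.to(a))
      def deserialize(bytes: Array[Byte]): A = gen.from(repr.value.deserialize(bytes))
    }
}

// Usage, e.g. in a REPL:
case class User(id: Int, age: Int)
val bytes = BCodec[User].serialize(User(1, 42))
val user  = BCodec[User].deserialize(bytes) // User(1, 42)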

@alexandru alexandru force-pushed the master branch 3 times, most recently from d97b560 to 18da705 on August 16, 2017 15:32