Plans Design
SPerformance is meant to be my means of writing lots of code that gets performance tested. I want to build a whole set of comparisons, e.g. a Scala Collections shootout, Scala vs. Java, or “my new idea vs. the old way”. I’m not quite sure which graphs will be useful, so I want some automated mechanism for generating every possible graph that could be useful. They should also look nice, and each one should be worth no less than 1000 words, give or take 6 parsecs (in the location relativity of word-value/language).
- Generators
  - Generators are performance test generators: they generate individual performance tests (see the sketch after this list).
    - Performance test “runs” are the smallest quantum.
      - These runs contain a mechanism to set up the run (provide initial state) and then actually run the test. The final run is measured; the setup is not.
    - Generators are also monads, so they can be combined and nested.
    - Generators need to provide a “warmup” case: a single test that can be run a ridiculous number of times to warm up the HotSpot optimizer.
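To make the shape concrete, here is a minimal sketch of how runs and monadic generators could fit together. All names here (`PerformanceTest`, `Generator`, `setUp`, `warmup`) are illustrative assumptions, not SPerformance’s actual API:

```scala
// Hypothetical sketch -- names and signatures are assumptions, not the real API.

// A performance test "run" is the smallest quantum: setUp builds the
// initial state (unmeasured), run is the measured portion.
trait PerformanceTest[S] {
  def setUp(): S            // provide initial state; not timed
  def run(state: S): Unit   // the part that gets measured
}

// Generators produce individual performance tests and form a monad,
// so they can be combined and nested via map/flatMap.
trait Generator[A] { self =>
  def tests: Iterator[A]
  def warmup: A   // one test that can be run many times to warm up HotSpot

  def map[B](f: A => B): Generator[B] = new Generator[B] {
    def tests = self.tests.map(f)
    def warmup = f(self.warmup)
  }

  def flatMap[B](f: A => Generator[B]): Generator[B] = new Generator[B] {
    def tests = self.tests.flatMap(a => f(a).tests)
    def warmup = f(self.warmup).warmup
  }
}
```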
- Intelligence
  - This is the section where we analyze test results and try to cluster them together in meaningful ways to produce fun graphs.
    - Test Results – it’s hard to talk about the design without talking about test results. A test result contains three pieces of information (a possible shape is sketched after this list):
      - Random attributes (e.g. originating filename, class/object under test, etc.)
      - The measured time for this test to run (note: how this is calculated is TBD; currently it’s the best of 10 runs)
      - Axis values for axes besides time (e.g. size, index, etc.)
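A test result along these lines might look like the following sketch. The field names and the `cluster` helper are my assumptions, not the actual types:

```scala
// Hypothetical shape for a test result, following the three pieces above.
case class TestResult(
  attributes: Map[String, String],  // random attributes, e.g. "file" -> "Lists.scala"
  time: Long,                       // measured time (currently best of 10 runs), in nanoseconds
  axisData: Map[String, Double]     // other axes, e.g. "size" -> 1000.0
)

// Clustering could be as simple as grouping on a shared attribute:
def cluster(results: Seq[TestResult], key: String): Map[Option[String], Seq[TestResult]] =
  results.groupBy(_.attributes.get(key))
```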
- Reporting
  - ChartGenerators are responsible for making charts from clusters (see the sketch after this list). They will:
    - Analyze the cluster to see if it meets the criteria for an ideal chart
    - Combine the data in the cluster into meaningful DataSeries
    - Produce a JFreeChart and pass it to the master report engine (which creates a website)
    - There should also be some kind of textual output
      - Compare tests to each other and display, e.g., “List.foreach 1.1x Array.foreach”
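Here is one way a ChartGenerator could look, reusing the `TestResult` sketch from above and JFreeChart’s stock XY line chart. The trait and object names are assumptions; only the JFreeChart calls are the library’s real API:

```scala
import org.jfree.chart.{ChartFactory, JFreeChart}
import org.jfree.chart.plot.PlotOrientation
import org.jfree.data.xy.{XYSeries, XYSeriesCollection}

// Hypothetical trait: a ChartGenerator checks whether a cluster fits the
// kind of chart it knows how to draw, then builds that chart.
trait ChartGenerator {
  def canHandle(cluster: Seq[TestResult]): Boolean
  def generate(cluster: Seq[TestResult]): JFreeChart
}

// Example: time vs. the "size" axis, one series per class under test.
object TimeVsSizeChart extends ChartGenerator {
  def canHandle(cluster: Seq[TestResult]) =
    cluster.forall(_.axisData.contains("size"))

  def generate(cluster: Seq[TestResult]): JFreeChart = {
    val dataset = new XYSeriesCollection
    for ((clazz, rs) <- cluster.groupBy(_.attributes.getOrElse("class", "?"))) {
      val series = new XYSeries(clazz)
      rs.foreach(r => series.add(r.axisData("size"), r.time.toDouble))
      dataset.addSeries(series)
    }
    ChartFactory.createXYLineChart(
      "time vs. size", "size", "time (ns)", dataset,
      PlotOrientation.VERTICAL, /*legend*/ true, /*tooltips*/ false, /*urls*/ false)
  }
}
```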
- A DSL
  - All tests can be generated with or without a DSL.
  - The DSL should be sufficiently readable that you can tell what’s going on (one possible shape is sketched after this list).
  - The DSL should support the common use cases, or 80% of use cases, whichever is less.
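Purely as an illustration of readability, a DSL along these lines might read as follows. This is invented syntax and a deliberately naive runner, not SPerformance’s actual DSL:

```scala
// Invented DSL sketch -- a tiny, self-contained base trait plus a usage example.
trait PerformanceDSL {
  private var tests = List.empty[(String, Int => Unit)]

  // Register a named test body that takes a "size" axis value.
  def performance(name: String)(body: Int => Unit): Unit =
    tests ::= (name -> body)

  // Naive runner: times each body once per size (no warmup, no best-of-10).
  def runAll(sizes: Seq[Int]): Unit =
    for ((name, body) <- tests.reverse; size <- sizes) {
      val start = System.nanoTime
      body(size)
      println(name + " @ " + size + ": " + (System.nanoTime - start) + " ns")
    }
}

object CollectionsShootout extends PerformanceDSL {
  performance("List.foreach")  { size => List.range(0, size).foreach(_ => ()) }
  performance("Array.foreach") { size => Array.range(0, size).foreach(_ => ()) }

  def main(args: Array[String]): Unit = runAll(Seq(100, 1000, 10000))
}
```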
- Historical Data
  - There should be a mechanism for saving performance results (for the current computer); see the sketch after this list.
    - There should be a mechanism to retrieve previous performance results by some kind of attribute (“version” → “VERSION_STRING”).
    - The currently generated results would then have version => current.
    - The framework could optionally fail if new results do not exceed old results in terms of performance.
    - Need some mechanism to configure which clusters are important and will be saved.
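A minimal sketch of how per-machine history could work, assuming a trivial CSV file per version. The file layout, paths, and all names here are assumptions:

```scala
import java.io.{File, PrintWriter}
import scala.io.Source

// Hypothetical per-machine result history, keyed by a "version" attribute.
object History {
  private def file(version: String) = new File("perf-history/" + version + ".csv")

  // Save (test name, time) pairs under a version label, e.g. "VERSION_STRING"
  // or "current" for the results just generated.
  def save(version: String, results: Seq[(String, Long)]): Unit = {
    val f = file(version)
    f.getParentFile.mkdirs()
    val out = new PrintWriter(f)
    try results.foreach { case (name, time) => out.println(name + "," + time) }
    finally out.close()
  }

  def load(version: String): Map[String, Long] =
    if (!file(version).exists) Map.empty
    else Source.fromFile(file(version)).getLines().map { line =>
      val Array(name, time) = line.split(",")
      name -> time.toLong
    }.toMap

  // Optional failure mode: new results must not be slower than the old ones.
  def assertNoRegression(oldVersion: String, current: Seq[(String, Long)]): Unit = {
    val baseline = load(oldVersion)
    for ((name, time) <- current; oldTime <- baseline.get(name))
      require(time <= oldTime, name + " regressed: " + time + " ns vs " + oldTime + " ns")
  }
}
```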
- A UI
  - Perhaps in the distant future we’ll provide some kind of Swing UI to view tests as they run and inspect them.
  - Drill down and mark clusters as important for historical data.
I’m only completing what I need to support scala-io and my book. If you’re interested in helping, please contact me!