In-process benchmarking #370
Merged
Changes from all commits (73 commits)
c3e5f38
in-process-benching: Add Clojure fibonacci POC
PEZ 5ee3f83
in-process-benching: Clojure: Extract benchmark tool
PEZ b0c280c
in-process-benching: Clojure: Fix mean time calc
PEZ 9a80ecb
in-process-benching: WIP: add run and compile scripts
PEZ 6b38539
in-process-benching: make run script append to a result file in /tmp
PEZ 7ab1559
in-process-benching: Add Clojure Native
PEZ 709632a
in-process-benching: Update Clojure Native compile-benchmark.sh to us…
PEZ a558c2f
in-process-benching: Update run-benchmark.sh to output more meta
PEZ 4d4e254
in-process-benching: Update clojure-fibonacci to match output from ru…
PEZ f39b7fa
in-process-benching: Add check-output.sh for loops
PEZ 5bf8c50
in-process-benching: Add Clojure and Clojure Native loops
PEZ 7a54d6c
in-process-benching: Clean up run-benchmark.sh some
PEZ b62100e
in-process-benching: Add is_checked to result printout
PEZ ddab3be
in-process-benching: Add only_langs option
PEZ ca67fa6
in-process-benching: Update only_langs file name slug
PEZ e6c980b
in-process-benching: Make Clojure benchmark runner use sum of executi…
PEZ a2b6071
in-process-benching: Make Clojure benchmark runner return more statis…
PEZ c737013
in-process-benching: Refactor Clojure benchmark runner some
PEZ 032ced7
in-process-benching: Update implementations for new stats output
PEZ 8fab7b9
in-process-benching: Rename compile script
PEZ 9c49a19
in-process-benching: Clojure, move formatting to benchmark util
PEZ 19d5265
in-process-benching: Clojure add Levenshtein
PEZ cf90ee7
in-process-benching: Clojure add hello-world, introduce hyperfine-ben…
PEZ 890616d
in-process-benching: Change to comma for csv
PEZ 46a8667
in-process-benching: update gitignore
PEZ 271c7cf
in-process-benching: Reorder the benchmark/run args
PEZ 34434e5
in-process-benching: Update README
PEZ d15e53c
in-process-benching: Add Java Benchmark utility
PEZ ff3aaee
in-process-benching: Add Java Fibonacci
PEZ 761826b
in-process-benching: Add compile and run for Java Native Image
PEZ 91a8e6d
in-process-benching: Update clean.sh
PEZ e8f020f
in-process-benching: Add Java loops
PEZ 76064d8
in-process-benching: Add Java levenshtein
PEZ f5e6771
in-process-benching: Add Java hello-world
PEZ 699e59d
in-process-benching: Add some documentation to the Java Benchmark uti…
PEZ 235ab7d
in-process-benching: Update README with Java reference
PEZ 720f63e
in-process-benching: Babashka: Add loops
PEZ 3c6682e
in-process-benching: Babashka: Add fibonacci
PEZ daf08ee
in-process-benching: Update README with Babashka results included
PEZ acbf530
in-process-benching: Babashka: Add levenshtein
PEZ 6f6a3d2
in-process-benching: Babashka: Add hello-world
PEZ ab70c3f
in-process-benching: Add C benchmark utility
PEZ 5f598db
in-process-benching: Add C compile and run commands
PEZ c36ab47
in-process-benching: Add `run` to gitignore
PEZ 72a9c7e
in-process-benching: Add C levenshtein
PEZ b3830a4
in-process-benching: Clean C `run`
PEZ ee8d6cf
in-process-benching: Update README with C references
PEZ a746c8d
in-process-benching: Add C hello-world
PEZ 7c7cab3
in-process-benching: Add C loops
PEZ baba47a
in-process-benching: Add C fibonacci
PEZ ef733e9
in-process-benching: Add timestamp and RAM to results CSV
PEZ a1bfbbf
in-process-benching: Clojure benchmark utility prints status
PEZ a07d426
in-process-benching: Java benchmark utility prints status
PEZ 9c27cda
in-process-benching: Clojure benchmark utility prints status dot at s…
PEZ ae51787
in-process-benching: C benchmark utility prints status
PEZ 99791db
in-process-benching: C benchmark utility responsible for formatting r…
PEZ 667ff38
in-process-benching: Fibonacci benchmark now about `fib(n)` (skipping…
PEZ 1b063b1
in-process-benching: Add missing update of output check for fibonacci
PEZ 9e4739f
in-process-benching: Levenshtein update input to use one word per line
PEZ 172f64a
in-process-benching: Update benchmark READMEs
PEZ 6675214
in-process-benching: Update project README to reflect that the old ru…
PEZ dccf3aa
in-process-benching: Add notes to reference benchmark utilities about…
PEZ 52ab160
in-process-benching: Replace compile and run scripts
PEZ ae01829
in-process-benching: Update README:s
PEZ b9bcc69
in-process-benching: Add option to compile.sh to compile only some la…
PEZ 5ebaa16
in-process-benching: Inform about -l option to compile.sh and run.sh
PEZ 8ca156f
in-process-benching: Add warmup arg to benchmark runs
PEZ d63b7f6
in-process-benching: Make reference fibonacci implementations use war…
PEZ 06b314d
in-process-benching: Make reference loops implementations use warmup arg
PEZ c0b0bc7
Ignore Clojure .nrepl-port files
PEZ b511823
in-process-benching: Fix Clojure benchmark tool not skipping run when…
PEZ b6d6892
in-process-benching: Make reference levenshtein implementations use w…
PEZ 035aa81
in-process-benching: Update readme about the warmup arg
PEZ File filter
Filter by extension
Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
There are no files selected for viewing
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -1,76 +1,207 @@ | ||
|
|
||
| # Languages | ||
|
|
||
| A repo for collaboratively building small benchmarks to compare languages. | ||
| If you have a suggestion for improvement: PR! | ||
| If you want to add a language: PR! | ||
| Having fun together, learning about programming languages, compilers, interpreters, and toolchains, by way of microbenchmarks. | ||
|
|
||
| ## Running | ||
| > [!NOTE] | ||
| > We are in the process of replacing our previous benchmark runner with one that relies on in-process measurements, *removing the influence of start/setup times from the results*. Please help in this transitioning by adding the necessary tooling to languages that lack it. See below, under [The Runner](#the-runner). | ||
|
|
||
| To run one of the benchmarks: | ||
| We're learning together here. | ||
|
|
||
| 1. `cd` into desired benchmark directory (EG `$ cd loops`) | ||
| 2. Compile by running `$ ../compile.sh` | ||
| 3. Run via `$ ../run.sh`. | ||
| You should see output something like: | ||
|
|
||
| ``` | ||
| $ ../run.sh | ||
| * If you have a suggestion for improvement, want to add a language or a benchmark, or want to fix a bug or typo: Issues and PRs. Which one to use depends, but generally it is first Issue, then PR: use the Issue to formulate the problem statement, and the PR to address it. Use your judgement and we will succeed. | ||
| * Have a question? -> Issue. | ||
|
|
||
| ## Running the benchmarks | ||
|
|
||
| To run benchmarks you need toolchains to run (and often to compile) the programs for the languages you want to benchmark. The scripts are written so that benchmarks are compiled and run for any language for which you have a working toolchain. | ||
|
|
||
| The steps are performed in a per-benchmark fashion by doing `cd` to the benchmark directory and then: | ||
|
|
||
| 1. Compile the programs that need compiling: | ||
|
|
||
| ``` | ||
| $ ./compile.sh | ||
| ``` | ||
| 1. Run, providing your GitHub user handle, e.g.: | ||
|
|
||
| ``` | ||
| $ ./run.sh -u PEZ | ||
| ``` | ||
|
|
||
| (This is what we refer to as [The Runner](#the-runner)) | ||
| 1. Clean build files: | ||
|
|
||
| ``` | ||
| $ ./clean.sh | ||
| ``` | ||
|
|
||
| ## The Runner | ||
|
|
||
| The general strategy for benchmarking only the targeted function is that the programs being benchmarked do the benchmarking themselves, in-process. They measure only around the single piece of work that the benchmark is about. So for **fibonacci**, only the call to the function calculating `fibonacci(n)` should be measured. For **levenshtein**, a function that collects all pairwise distances is measured, because we use the sum of the distances for the [correctness check](#correctness-check). | ||
|
|
||
| Each program (language) will be allowed the same amount of time to complete the benchmark work (as many times as it can). | ||
|
|
||
| Because of the above, each language needs some minimal utility/tooling for running the function-under-benchmark as many times as a timeout allows, and for reporting the measurements and the result. Here are three implementations that we can regard as reference: | ||
|
|
||
| * [benchmark.clj](lib/clojure/src/languages/benchmark.clj) | ||
| * [benchmark.java](lib/java/languages/Benchmark.java) | ||
| * [benchmark.c](lib/c/benchmark.c) (This one may need some scrutiny from C experts before we fully label it as *reference*.) | ||
|
|
||
| You'll see that the `benchmark/run` function takes two arguments: | ||
|
|
||
| 1. `f`: A function (a thunk) | ||
| 1. `run-ms`: A total time in milliseconds within which the function should be run as many times as possible | ||
|
|
||
| To make the overhead of running and measuring as small as possible, the runner takes a delta time for each time it calls `f`. It is when the sum of these deltas, `total-elapsed-time`, exceeds the `run-ms` time that we stop calling `f`. So, for a `run-ms` of `1000`, the total runtime will always be longer than a second: we will almost always “overshoot” with the last run, and the overhead of running and keeping tally, even if tiny, is always _something_. | ||
|
|
||
| The `benchmark/run` function is responsible for reporting back the result/answer of the task being benchmarked, as well as some stats: mean run time, standard deviation, min and max times, and how many runs were completed. | ||
|
|
||
| ### Running a benchmark | ||
|
|
||
| The new run script is named [run.sh](run.sh). Let's say we run it in the **levenshtein** directory: | ||
|
|
||
| ```sh | ||
| ../run.sh -u PEZ | ||
| ``` | ||
|
|
||
| Benchmarking Zig | ||
| Benchmark 1: ./zig/code 40 | ||
| Time (mean ± σ): 513.9 ms ± 2.9 ms [User: 504.5 ms, System: 2.6 ms] | ||
| Range (min … max): 510.6 ms … 516.2 ms 3 runs | ||
| The default run time is `10000` ms. `-u` sets the user name (preferably your GitHub handle). The output was this: | ||
|
|
||
| ```csv | ||
| benchmark,timestamp,commit_sha,is_checked,user,model,ram,os,arch,language,run_ms,mean_ms,std-dev-ms,min_ms,max_ms,runs | ||
| levenshtein,2025-01-18T23:32:41Z,8e63938,true,PEZ,Apple M4 Max,64GB,darwin24,arm64,Babashka,10000,23376.012916,0.0,23376.012916,23376.012916,1 | ||
| levenshtein,2025-01-18T23:32:41Z,8e63938,true,PEZ,Apple M4 Max,64GB,darwin24,arm64,C,10000,31.874277,0.448673,31.286000,35.599000,314 | ||
| levenshtein,2025-01-18T23:32:41Z,8e63938,true,PEZ,Apple M4 Max,64GB,darwin24,arm64,Clojure,10000,57.27048066857143,2.210445845051782,55.554958,75.566792,175 | ||
| levenshtein,2025-01-18T23:32:41Z,8e63938,true,PEZ,Apple M4 Max,64GB,darwin24,arm64,Clojure Native,10000,59.95592388622754,0.8493245545620596,58.963833,62.897834,167 | ||
| levenshtein,2025-01-18T23:32:41Z,8e63938,true,PEZ,Apple M4 Max,64GB,darwin24,arm64,Java,10000,55.194704,1.624322,52.463125,63.390833,182 | ||
| levenshtein,2025-01-18T23:32:41Z,8e63938,true,PEZ,Apple M4 Max,64GB,darwin24,arm64,Java Native,10000,60.704966,6.579482,51.807750,96.343541,165 | ||
| ``` | ||
|
|
||
| Benchmarking C | ||
| Benchmark 1: ./c/code 40 | ||
| Time (mean ± σ): 514.0 ms ± 1.1 ms [User: 505.6 ms, System: 2.8 ms] | ||
| Range (min … max): 513.2 ms … 515.2 ms 3 runs | ||
| It's a CSV file you can open in something Excel-ish, or consume with your favorite programming language. | ||
|
|
||
|  | ||
|
|
||
| Benchmarking Rust | ||
| Benchmark 1: ./rust/target/release/code 40 | ||
| Time (mean ± σ): 514.1 ms ± 2.0 ms [User: 504.6 ms, System: 3.1 ms] | ||
| Range (min … max): 512.4 ms … 516.3 ms 3 runs | ||
| As you can see, it has some metadata about the run, in addition to the benchmark results. **Clojure** ran the benchmark 175 times, with a mean time of **57.3 ms**, which shows the point of the new runner, considering that Clojure takes **300 ms** (on the same machine) just to start. | ||
|
|
||
| ... | ||
| ``` | ||
| See [run.sh](run.sh) for some more command line options it accepts. Let's note one of them: `-l`, which takes a string of comma-separated language names; only those languages will be run. Good for when contributing a new language or updates to a language. E.g.: | ||
|
|
||
| 4. For good measure, execute `$ ../clean.sh` when finished. | ||
| ``` | ||
| ~/Projects/languages/levenshtein ❯ ../run.sh -u PEZ -l Clojure | ||
| Running levenshtein benchmark... | ||
| Results will be written to: /tmp/languages-benchmark/levenshtein_PEZ_10000_5bb1995_only_langs.csv | ||
|
|
||
| Hyperfine is used to warm, execute, and time the runs of the programs. | ||
| Checking levenshtein Clojure | ||
| Check passed | ||
| Benchmarking levenshtein Clojure | ||
| java -cp clojure/classes:src:/Users/pez/.m2/repository/org/clojure/clojure/1.12.0/clojure-1.12.0.jar:/Users/pez/.m2/repository/org/clojure/core.specs.alpha/0.4.74/core.specs.alpha-0.4.74.jar:/Users/pez/.m2/repository/org/clojure/spec.alpha/0.5.238/spec.alpha-0.5.238.jar run 10000 levenshtein-words.txt | ||
| levenshtein,5bb1995,true,PEZ,Apple M4 Max,darwin24,arm64,Clojure,10000,56.84122918181818,0.8759056030546785,55.214541,59.573,176 | ||
|
|
||
| ## Adding | ||
| Done running levenshtein benchmark | ||
| Results were written to: /tmp/languages-benchmark/levenshtein_PEZ_10000_5bb1995_only_langs.csv | ||
| ``` | ||
|
|
||
| To add a language: | ||
| ### Compiling a benchmark | ||
|
|
||
| 1. Select the benchmark directory you want to add to (EG `$ cd loops`) | ||
| 2. Create a new subdirectory for the language (EG `$ mkdir rust`) | ||
| 3. Implement the code in the appropriately named file (EG: `code.rs`) | ||
| 4. If the language is compiled, add appropriate command to `../compile.sh` and `../clean.sh` | ||
| 5. Add appropriate line to `../run.sh` | ||
| This works as before, but since the new programs are named `run` instead of `code`, we need a new script. Meet: [compile.sh](compile.sh) | ||
|
|
||
| You are also welcome to add new top-level benchmarks dirs | ||
| ```sh | ||
| ../compile.sh | ||
| ``` | ||
|
|
||
| # Available Benchmarks | ||
| ### Adding a language | ||
|
|
||
| ### [hello-world](./hello-world/README.md) | ||
| To add (or port) a language for a benchmark to the new runner you'll need to add: | ||
|
|
||
| ### [loops](./loops/README.md) | ||
| 1. A benchmarking utility in `lib/<language>` | ||
| 1. Code in `<benchmark>/<language>/run.<language-extension>` (plus whatever extra project files) | ||
| - If you are porting from the legacy runner, copy the corresponding `code.<language-extension>` and start from there. See about [benchmark changes](#changes-to-the-benchmarks-compared-to-legacy-runner) below. | ||
| 1. An entry in `compile.sh` (copy from `compile-legacy.sh` if you are porting) | ||
| 1. An entry in `run.sh` (copy from `compile-legacy.sh` if you are porting) | ||
| 1. Maybe some code in `clean.sh` (All temporary/build files should be cleaned.) | ||
| 1. Maybe some entries in `.gitignore` (All build files, and temporary toolchain files should be added here.) | ||
|
|
||
| ### [fibonacci](./fibonacci/README.md) | ||
| The `main` function of the program provided should take three arguments: | ||
|
|
||
| ### [levenshtein](./levenshtein/README.md) | ||
| 1. The run time in milliseconds | ||
| 1. The warmup time in milliseconds | ||
| 1. The input to the function | ||
| - There is only one input argument, unlike before. How this input argument should be interpreted depends on the benchmark. For **levenshtein** it is a file path to the file containing the words to use for the test. | ||
|
|
||
| # Corresponding visuals | ||
| As noted before, the program should run the function-under-benchmark as many times as it can, following the example of the reference implementations mentioned above. The program is allowed warmup runs before the actual benchmark run, e.g. so that a JIT compiler has had some chance to optimize. It should then pass the warmup time to its benchmark runner. | ||
|
|
||
| The program should output a csv row with: | ||
|
|
||
| ```csv | ||
| mean_ms,std-dev-ms,min_ms,max_ms,times,result | ||
| ``` | ||
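For illustration, the row could be produced like this in C. The helper name `format_results` is hypothetical, not the repo's API; the run script presumably prepends the metadata columns (timestamp, language, etc.) seen in the full results file:

```c
#include <stdio.h>

/* Renders one data row (no header) in the column order above.
   %f gives the six decimal places seen in the example results. */
static int format_results(char *buf, size_t size, double mean_ms,
                          double std_dev_ms, double min_ms, double max_ms,
                          int times, long result) {
    return snprintf(buf, size, "%f,%f,%f,%f,%d,%ld",
                    mean_ms, std_dev_ms, min_ms, max_ms, times, result);
}
```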
|
|
||
| Before a PR with a new or ported language contribution is merged, you should provide output (text) from a benchmark run. To facilitate this, both `compile.sh` and `run.sh` take a `-l <languages>` argument, where `<languages>` is a comma-separated list of language names. E.g.: | ||
|
|
||
| ```sh | ||
| $ ../compile.sh -l 'C,Clojure' | ||
| $ ../run.sh -u PEZ -l 'C,Clojure' | ||
| ``` | ||
|
|
||
| Please provide output from all benchmark contributions you have added/touched. | ||
|
|
||
| ### Changes to the benchmarks compared to legacy runner | ||
|
|
||
| When adapting a language implementation of some benchmark, consider these differences: | ||
|
|
||
| * **fibonacci**: | ||
| * The program should return the result of `fib(n)`. This is to keep the benchmark focused on one thing. | ||
| * Early exit for `n < 2` is now allowed, again to keep the benchmark focused. | ||
| * The input is now `37`, to allow slower languages to complete more runs. | ||
| * **loops**: The inner loop is now 10k, again to allow slower languages to complete more runs. | ||
| * **levenshtein**: | ||
| 1. Smaller input (slower languages...) | ||
| 1. The input is provided via a file (pointed at by the input argument) | ||
| 1. We only calculate each word pairing distance once (A is as far from B as B is from A) | ||
| 1. There is a single result, the sum of the distances. | ||
| * **hello-world**: No changes. | ||
| * It needs to accept and ignore the two arguments (There is no benchmarking code in there, because it will be benchmarked out-of-process, using **hyperfine**) | ||
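The new **levenshtein** shape can be sketched in C. This is an illustrative sketch, not the repo's `levenshtein/c/run.c`: a two-row DP distance function, with each unordered pair computed once (`j > i`) and the distances summed into the single result:

```c
#include <stdlib.h>
#include <string.h>

/* Classic two-row dynamic-programming Levenshtein distance. */
static int lev(const char *a, const char *b) {
    size_t la = strlen(a), lb = strlen(b);
    int *prev = malloc((lb + 1) * sizeof *prev);
    int *curr = malloc((lb + 1) * sizeof *curr);
    for (size_t j = 0; j <= lb; j++) prev[j] = (int)j;
    for (size_t i = 1; i <= la; i++) {
        curr[0] = (int)i;
        for (size_t j = 1; j <= lb; j++) {
            int cost = a[i - 1] == b[j - 1] ? 0 : 1;
            int del = prev[j] + 1;
            int ins = curr[j - 1] + 1;
            int sub = prev[j - 1] + cost;
            int m = del < ins ? del : ins;
            curr[j] = m < sub ? m : sub;
        }
        int *tmp = prev; prev = curr; curr = tmp;
    }
    int d = prev[lb];
    free(prev);
    free(curr);
    return d;
}

/* d(A,B) == d(B,A), so each pair is visited only once (j > i),
   and the single result is the sum of all distances. */
static long sum_distances(const char **words, int n) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            sum += lev(words[i], words[j]);
    return sum;
}
```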
|
|
||
| Let's look at the `-main` function for the Clojure **levenshtein** contribution: | ||
|
|
||
| ```clojure | ||
| (defn -main [& args] | ||
| (let [run-ms (parse-long (first args)) | ||
| warmup-ms (parse-long (second args)) | ||
| input-path (nth args 2) | ||
| strings (-> (slurp input-path) | ||
| (string/split-lines)) | ||
| _warmup (benchmark/run #(levenshtein-distances strings) warmup-ms) | ||
| results (benchmark/run #(levenshtein-distances strings) run-ms)] | ||
| (-> results | ||
| (update :result (partial reduce +)) | ||
| benchmark/format-results | ||
| println))) | ||
| ``` | ||
|
|
||
| The `benchmark/run` function returns a map with the measurements and the result keyed on `:result`. *This result is a sequence of all the distances.* Outside the benchmarked function we sum the distances, and then format the output with this sum. It's done this way to minimize the impact that the benchmarking machinery has on the benchmarked work. (See [levenshtein/jvm/run.java](levenshtein/jvm/run.java) or [levenshtein/c/run.c](levenshtein/c/run.c) if the Lisp is tricky to read for you.) | ||
|
|
||
| ## Available Benchmarks | ||
|
|
||
| #### [hello-world](./hello-world/README.md) | ||
|
|
||
| #### [loops](./loops/README.md) | ||
|
|
||
| #### [fibonacci](./fibonacci/README.md) | ||
|
|
||
| #### [levenshtein](./levenshtein/README.md) | ||
|
|
||
| ## Corresponding visuals | ||
|
|
||
| Here's a visualization of a run using the languages ported to the in-process runner as of January 23, 2025: | ||
|
|
||
| - https://pez.github.io/languages-visualizations/#https://gist.github.com/PEZ/411e2da1af3bbe21c4ad1d626451ec1d | ||
| - The https://pez.github.io/languages-visualizations/ page will soon be defaulting to the in-process runs | ||
|
|
||
| ### Legacy visuals | ||
|
|
||
| Several visuals have been published based on the work here. | ||
| More will likely be added in the future, as this repository improves: | ||
|
|
||
| - https://benjdd.com/languages | ||
| - https://benjdd.com/languages2 | ||
| - https://benjdd.com/languages3 | ||
| - https://pez.github.io/languages-visualizations/ | ||
| - check https://github.com/PEZ/languages-visualizations/tags for tags, which correspond to a snapshot of some particular benchmark run: e.g: | ||
| - https://pez.github.io/languages-visualizations/v2024.12.31/ | ||
|
|
||
| - https://pez.github.io/languages-visualizations/v2025.01.21/ | ||
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,66 @@ | ||
| function compile { | ||
| if [ -d ${1} ]; then | ||
| echo "" | ||
| echo "Compiling $1" | ||
| ${2} 2>/dev/null | ||
| result=$? | ||
| if [ $result -ne 0 ]; then | ||
| echo "Failed to compile ${1} with command: ${2}" | ||
| fi | ||
| fi | ||
| } | ||
|
|
||
| compile 'c3' 'c3c compile c3/code.c3 -o c3/code' | ||
| compile 'c' 'gcc -O3 c/code.c -o c/code' | ||
| compile 'cpp' 'g++ -std=c++23 -march=native -O3 -Ofast -o cpp/code cpp/code.cpp' | ||
| #compile 'go' 'go build -ldflags "-s -w" -o go/code go/code.go' | ||
| go build -ldflags "-s -w" -o go/code go/code.go | ||
| hare build -R -o hare/code hare/code.ha | ||
| compile 'jvm' 'javac jvm/code.java' | ||
| compile 'js' 'bun build --bytecode --compile js/code.js --outfile js/bun' | ||
| # The compile function can't cope with the java-native-image compile | ||
| (cd java-native-image && native-image -cp .. -O3 --pgo-instrument -march=native jvm.code && ./jvm.code $(cat input.txt) && native-image -cp .. -O3 --pgo -march=native jvm.code -o code) | ||
| compile 'rust' 'cargo build --manifest-path rust/Cargo.toml --release' | ||
| compile 'kotlin' 'kotlinc -include-runtime kotlin/code.kt -d kotlin/code.jar' | ||
| compile 'kotlin' 'kotlinc-native kotlin/code.kt -o kotlin/code -opt' | ||
| compile 'dart' 'dart compile exe dart/code.dart -o dart/code --target-os=macos' | ||
| compile 'inko' '(cd inko && inko build --opt=aggressive code.inko -o code)' | ||
| compile 'nim' 'nim c -d:danger --opt:speed -d:passC -x:off -a:off nim/code.nim' | ||
| compile 'nim' 'nim -d:release --threads:off --stackTrace:off --lineTrace:off --opt:speed -x:off -o:nim/code c nim/code.nim' | ||
| compile 'sbcl' 'sbcl --noinform --non-interactive --load "common-lisp/code.lisp" --build' | ||
| compile 'fpc' 'fpc -O3 fpc/code.pas' | ||
| compile 'modula2' 'gm2 -O3 modula2/code.mod -o modula2/code' | ||
| compile 'crystal' 'crystal build -o crystal/code --release crystal/code.cr' | ||
| compile 'scala' 'scala-cli --power package --assembly scala/code.scala -f -o scala/code' | ||
| compile 'scala' 'scala-cli --power package --native scala/code.scala -f -o scala/code-native --native-mode release-full' | ||
| compile 'scala' 'scala-cli --power package --js scala/codeJS.scala -f -o scala/code.js --js-module-kind commonjs --js-mode fullLinkJS' | ||
| compile 'scala' 'bun build --bytecode --compile scala/code.js --outfile scala/bun' | ||
| compile 'ldc2' 'ldc2 -O3 -release -boundscheck=off -mcpu=native -flto=thin d/code.d' | ||
| compile 'odin' 'odin build odin/code.odin -o:speed -file -out:odin/code' | ||
| compile 'objc' 'clang -O3 -framework Foundation objc/code.m -o objc/code' | ||
| compile 'fortran' 'gfortran -O3 fortran/code.f90 -o fortran/code' | ||
| compile 'zig' 'zig build-exe -O ReleaseFast -femit-bin=zig/code zig/code.zig' | ||
| compile 'lua' 'luajit -b lua/code.lua lua/code' | ||
| compile 'swift' 'swiftc -O -parse-as-library -Xcc -funroll-loops -Xcc -march=native -Xcc -ftree-vectorize -Xcc -ffast-math swift/code.swift -o swift/code' | ||
| compile 'csharp' 'dotnet publish csharp -o csharp/code' | ||
| compile 'csharp' 'dotnet publish csharp -o csharp/code-aot /p:PublishAot=true /p:OptimizationPreference=Speed' | ||
| compile 'fsharp' 'dotnet publish fsharp -o fsharp/code' | ||
| compile 'fsharp' 'dotnet publish fsharp -o fsharp/code-aot /p:PublishAot=true /p:OptimizationPreference=Speed' | ||
| compile 'haskell' 'ghc -O2 -fllvm haskell/code.hs -o haskell/code || { echo "ghc: cannot compile with llvm backend; fallback to use default backend"; ghc -O2 haskell/code.hs -o haskell/code; }' | ||
| compile 'v' 'v -prod -cc clang -cflags -march=native -d no_backtrace -o v/code v/code.v' | ||
| compile 'emojicode' 'emojicodec emojicode/code.emojic' | ||
| compile 'chez' "echo '(compile-program \"chez/code.ss\")' | chez --optimize-level 3 -q" | ||
| #compile 'clojure' "(cd clojure && mkdir -p classes && clojure -Sdeps '{:paths [\".\"]}' -M -e \"(compile 'code)\")" | ||
| (cd clojure && mkdir -p classes && clojure -Sdeps '{:paths ["."]}' -M -e "(compile 'code)") | ||
| #compile 'clojure-native-image' "(cd clojure-native-image && clojure -M:native-image)" | ||
| #Using `compile` for clojure-native-image silently fails | ||
| (cd clojure-native-image && clojure -M:native-image --pgo-instrument -march=native && ./code $(cat input.txt) && clojure -M:native-image --pgo -march=native) | ||
| compile 'cobol' 'cobc -I /opt/homebrew/include/ -O -O2 -O3 -Os -x -o cobol/main cobol/main.cbl' | ||
| compile 'lean4' 'lake build --dir lean4 ' | ||
| # compile 'java' 'haxe --class-path haxe -main Code --jvm haxe/code.jar # was getting errors running `haxelib install hxjava`' | ||
| # compile 'ada' 'gnatmake -O3 -gnat2022 -gnatp -flto ada/code.adb -D ada -o ada/code' | ||
| #Using `compile` for Emacs Lisp silently fails | ||
| (cd emacs-lisp && emacs -Q --batch --eval '(byte-compile-file "code.el")') | ||
| (cd emacs-lisp && emacs -Q --batch --eval '(native-compile "code.el" (expand-file-name "code.eln"))') | ||
| (cd racket && raco make code.rkt && raco demod -o code.zo code.rkt && raco exe -o code code.zo) | ||
| pip3.12 install numba --break-system-packages |
Please make sure you're using a monotonic clock for these measurements (wall clock time can jump around).
I don't know how this actually works in other languages, but in Racket you can provide a time-limit for a body of execution (see `call-with-time-limit`).
Good point. I made sure it was a monotonic clock for Java and Clojure, but I shall now check the other language implementations. It's probably fine, but I didn't pay attention to that so could have slipped in some wall clocking.