diff --git a/.github/workflows/codespell.yml b/.github/workflows/codespell.yml
new file mode 100644
index 00000000..b2316674
--- /dev/null
+++ b/.github/workflows/codespell.yml
@@ -0,0 +1,25 @@
+# Codespell configuration is within pyproject.toml
+---
+name: Codespell
+
+on:
+  push:
+    branches: [main]
+  pull_request:
+    branches: [main]
+
+permissions:
+  contents: read
+
+jobs:
+  codespell:
+    name: Check for spelling errors
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+      - name: Annotate locations with typos
+        uses: codespell-project/codespell-problem-matcher@v1
+      - name: Codespell
+        uses: codespell-project/actions-codespell@v2
diff --git a/brainscore_core/submission/developers_guide.md b/brainscore_core/submission/developers_guide.md
index b5c1c7cd..141ed8ea 100644
--- a/brainscore_core/submission/developers_guide.md
+++ b/brainscore_core/submission/developers_guide.md
@@ -1,7 +1,7 @@
 ## Submission system
 
 ### Components
-To provide an automatical scoring mechanism for artificial models of the ventral stream, Brain-Score has implemented a whole system, which is explained in the follows. The system consists of following components:
+To provide an automatic scoring mechanism for artificial models of the ventral stream, Brain-Score has implemented a system, which is explained in the following. The system consists of the following components:
 ![](submission_system.png)
 
 - **Brain-Score Website:**
@@ -12,12 +12,12 @@ To provide an automatical scoring mechanism for artificial models of the ventral
 - **[Jenkins](http://braintree.mit.edu:8080/):**
   [Jenkins](http://braintree.mit.edu:8080/) is a continuous integration tool, which we use to automatically run project unittests and the scoring process for brain models.
-  Jenkins is running on Braintree, the lab's internal server. Jenkins defines different jobs, executing different taks. 
-  The task for a new submission is triggered by the website, the unittest tasks are triggerd by GitHub web hooks.
+  Jenkins is running on Braintree, the lab's internal server. Jenkins defines different jobs, executing different tasks.
+  The task for a new submission is triggered by the website; the unittest tasks are triggered by GitHub web hooks.
   Once the jobs are triggered, jenkins runs a procedure to execute the tests or scoring and communicate the results back to the user or back to GitHub.
 
 - **Openmind**
   Scoring submissions is a computation and memory expensive process, we cannot execute model scoring on small machines. Because we do not want to execute the jobs on Braintree, we submit jobs to Openmind, the department cluster system.
-  The big advantage of Openmind is its queuing system, which allows to define detailed ressource requirements and jobs are executed, once their requested ressources are available. The jenkins related contents are stored on ``/om5/group/dicarlo/jenkins``.
+  The big advantage of Openmind is its queuing system, which allows defining detailed resource requirements; jobs are executed once their requested resources are available. The Jenkins-related contents are stored on ``/om5/group/dicarlo/jenkins``.
   This directory contains a script for model submission (`score_model.sh`) and for unittests (`unittests_brainscore.sh`).
   The scripts are executed in an openmind job and are responsible for fully installing a conda environment, executing the process, shutting everything down again.
   Results are stored in the database or copied to amazon S3 cloud file system. From there jenkins reports the results back to its caller.
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 6f033415..56cb9aa6 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -6,7 +6,7 @@ benchmarks combine neural/behavioral data with a metric to score models on their
 and models are evaluated as computational hypotheses of natural intelligence.
 
 This repository implements core functionality including a plugin system to manage data assemblies and models,
-as well as metrics to compare e.g. neural recordings or behavioral mesurements.
+as well as metrics to compare e.g. neural recordings or behavioral measurements.
 Data assemblies and model predictions are organized in BrainIO_.
 
 .. _BrainIO: https://github.com/brain-score/brainio
diff --git a/pyproject.toml b/pyproject.toml
index c84c6611..f2efcf36 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -73,3 +73,11 @@ filterwarnings = [
 [tool.setuptools.package-data]
 # include bash files (e.g. 'test_plugin.sh') in package install
 "brainscore_core.plugin_management" = ["**"]
+
+[tool.codespell]
+# Ref: https://github.com/codespell-project/codespell#using-a-config-file
+skip = '.git*'
+check-hidden = true
+# ignore-regex = ''
+# 'requeried' is intentional in tests (the metadata is queried again), so do not "fix" it:
+ignore-words-list = 'requeried'