Batch Audio File Inferencing

A platform for processing and analyzing acoustic monitoring data from environmental and wildlife recordings. Use these scripts to analyze large collections of recordings.

Setup

Setup Web Service with docker-compose

  1. Clone the repository
  2. Copy .env-default to .env and set the following variables
    MARIADB_PORT=3306                 # port of the database
    MARIADB_DATABASE=DATABASE_NAME    # database name of the service; it will be created in the Docker container
    MARIADB_USER=DB_USER              # database user
    MARIADB_PASSWORD=dev_password     # password of the database user
    MARIADB_ROOT_PASSWORD=root_password # root password of the database
    DATA_DIRECTORY=/net/              # root directory of the monitoring recordings
    TMP_DIRECTORY=./runtime-data/backend/tmp # directory for temporary files used while packaging result zip files
    SAMPLE_FILES_DIRECTORY=./runtime-data/backend/files # directory for files created by the service
  3. Adjust the environment variables in .env to your setup
  4. Start the service with docker-compose up -d (see the sketch after this list)
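
Taken together, the steps above amount to the following shell session. A minimal sketch: the clone URL assumes the aot29/ecomon fork, and any editor can stand in for nano.

git clone https://github.com/aot29/ecomon.git   # clone the repository
cd ecomon
cp .env-default .env                            # create the local environment file from the template
nano .env                                       # set MARIADB_*, DATA_DIRECTORY, TMP_DIRECTORY, SAMPLE_FILES_DIRECTORY
docker-compose up -d                            # build and start the service in the background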

How to start dev environment

  1. Copy .env to the backend folder
  2. Create a development Python environment
  3. Install the requirements
pip install --no-cache-dir --upgrade -r /code/requirements.txt
pip install pytz
pip install pyyaml
  4. Start the development database with docker-compose -f docker-compose.dev.yaml up -d
  5. Activate your Python environment and start the dev server with ./run.dev.sh (see the sketch after this list)
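
As one shell session, the development setup could look like this. A minimal sketch: it assumes a venv-based environment and that requirements.txt sits in the backend folder on the host (the /code/ path above is the one used inside the Docker image).

cp .env backend/                 # share the environment file with the backend
cd backend
python3 -m venv .venv            # create an isolated Python environment
source .venv/bin/activate
pip install --no-cache-dir --upgrade -r requirements.txt
pip install pytz pyyaml
docker-compose -f docker-compose.dev.yaml up -d   # start the development database
./run.dev.sh                     # start the dev server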

Import a collection

A collection should not be too big; one collection is one station for one year.

  1. Requirements: FFmpeg has to be installed
sudo apt install ffmpeg
  2. Add the following variable to the .env file
RESULT_DIRECTORY=/folder/results # folder where the result pickle files of the classifier container will be stored
  3. Adapt the docker-compose file to your needs and to the hardware of the computer. Start the containers with docker-compose -f filename.yaml up -d
  4. Create a yaml config file with the following schema

# Folder paths are relative to the DATA_DIRECTORY environment variable
recordFolder: # List of directories containing the recordings
   - folder1
   - folder2
fileEnding: # List of file extensions to be analyzed
   - wav
   - WAV
resultFolder: folder # Subdirectory in the result folder
indexToNameFile: ./classifier_index_to_name.json # Index-to-name file of the classifier you use
filenameParsing: "ammod" # | "inpediv" # Method to parse the recording time from the file name
prefix: Classifier_Station_Year # Prefix of the tables created in the database. Also shows up in the frontend. The name is parsed for classifier, station name, and year.
testRun: False # If True, no results are saved to the database
timezone: CET # Timezone of the datetime information in the filenames of the recordings
speciesIndexList: # List of species for whose columns an index will be created
   - grus_grus
   - ardea_cinerea
basePort: 9000 # Lowest port of the classifier containers
analyzeThreads: 6 # Number of analysis threads to instantiate
transformModelOutput: False # Transform the output if the dimensions of the classifier output and speciesIndexList do not match
allThreadsUseSamePort: False # If False, each thread gets its own port number; otherwise all threads use the same port
repeats: 10 # How many times analysis of all files is retried
modelOutputStyle: resultDict # If not none, this is added to the classifier request
  5. Run the import script import_records.py with the following parameters (see the example after this list):

    • filepath of the config yaml file
    • --create_index if you want to create the database species column index
    • --drop_index if you want to drop the index before creating a new one
    • --create_report if you want to create a JSON report
    • --generate_events if you want to generate events
    • --generate_histograms if you want to generate histograms
    • --all if you want to run all of the above
    • --config_includes to only process config files that include this string in their filename
  6. If you want to run more than one import, use batch_import.py
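
A full import of one station-year might then look like this. A minimal sketch: the config filename station_2023.yaml is a hypothetical placeholder, and the flags are the ones listed above.

python import_records.py station_2023.yaml --all          # run the import with all post-processing steps
python import_records.py station_2023.yaml --create_index # or pick individual steps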

MariaDB - MySQL Workbench

If you have problems connecting to MariaDB because SSL is required, go to the Advanced tab in the connection dialog and add

useSSL=0
