
Jon/gaze - Head solver updates & Gaze reconstruction (not geometrically validated) #23

Open
jonmatthis wants to merge 47 commits into main from jon/gaze

Conversation

@jonmatthis
Member

@schollben @philipqueen @OptogeneticsandNeuralEngineeringCore

We should have a meeting to go over this PR some time later this week or next. There's a lot to it, and there are some core data objects in here that will form the basis of future analysis pipelines.

It's NOT fully complete yet - I think there are still a few foibles somewhere in the pipeline that make the final data not (necessarily) correct. All the correct pieces are attached to the correct parts, but I think there may be a sign flip or L/R eye swap or something somewhere in the pipeline that might be borking up the final output.

However, the pipeline is sound at the structural level, so I think it's a good time to slot it into the full pipeline and then break out the fine-toothed combs to look for the little snarls that may still exist.

Installation

  • Should run from a uv environment built from the pyproject.toml at the top level (run uv sync from a terminal in the top-level bs/ folder)

Entry points

Skull Solver

  • The skull solver runs from the bs/python_code/rigid_body_solver/ferret_skull_solver.py script.
  • It takes a long time for a ~5000 frame clip, so it might be prohibitive for a full recording. We'll see.
  • Dumps data into the {recording/clip_folder}/mocap_data/output_data/solver_output/ folder.
  • Run this script BEFORE the gaze pipeline (below)

Gaze Pipeline

  • Runs from python_code/ferret_gaze/run_gaze_pipeline.py
  • Gaze Pipeline steps:
    • Calculate eye kinematics from pupil data and save them to {recording/clip_folder}/eye_data/output_data/eye_kinematics
    • Resample everything (trajectories and kinematics) to the same framerate (matching the fastest, with the first frame timestamp set to zero), and also create 'display_videos/'
    • Calculate gaze by combining the resampled skull and eye kinematics into gaze_kinematics (note: gaze kinematics are similar to eye kinematics, but in a global/world-centered reference frame; eye kinematics are in an eye-socket-centered/local reference frame). A minimal sketch of this composition step is below this list.
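
For intuition, the gaze composition step amounts to multiplying rotations. This is a minimal sketch assuming scipy-style (x, y, z, w) quaternions; the real pipeline's frame conventions and API may differ:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def compose_gaze(skull_quats_xyzw: np.ndarray,
                 eye_quats_xyzw: np.ndarray) -> np.ndarray:
    """World-frame gaze = skull (world frame) orientation composed with
    the eye-in-socket (local frame) orientation, vectorized over frames."""
    skull = R.from_quat(skull_quats_xyzw)  # (n_frames, 4)
    eye = R.from_quat(eye_quats_xyzw)      # (n_frames, 4)
    return (skull * eye).as_quat()         # (n_frames, 4)
```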

Output data

  • All data from the gaze pipeline goes to an analyzable_data folder in the top level of the recording/clip folder
  • All data in that folder is in the same timebase and precisely sampled, so that 'frame_number' always refers to the same timestamp in all data sources (see the sketch after this list).
  • The pipeline creates a ferret_full_gaze_blender_viz.py script in the analyzable_data folder - open this script in the Blender scripting tab and run it to load the data into a Blender scene (NOTE - open the System Console first to see output as the script runs)
  • The pipeline also creates a display_videos folder, which has resampled, annotated videos that match the timestamps of the analysis data. The original frame number is written in the corner of each frame. These videos are for convenience when making visuals such as the Blender animation
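
As a concrete illustration of the shared timebase, something like the following should hold; the file names here are placeholders, not the pipeline's actual output names:

```python
import pandas as pd

# File names are hypothetical - substitute whatever lands in
# analyzable_data/ for your recording.
gaze = pd.read_csv("analyzable_data/gaze_kinematics.csv")
skull = pd.read_csv("analyzable_data/skull_kinematics.csv")

# Everything shares one resampled timebase, so a frame_number indexes
# the same instant in every data source.
frame_number = 100
assert gaze["timestamp"].iloc[frame_number] == skull["timestamp"].iloc[frame_number]
```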

NOTE - For analysis, you should NEVER need to look anywhere but the analyzable_data folder. If you find there is something in the 'reconstruction' folders that you need for your analysis work, let us know and we'll tweak the pipeline to pull it up to the analysis folder. We really want to keep the 'Reconstruction' pipeline separate from the 'Analysis' pipelines (it's the S in S.O.L.I.D.)

Key objects are:

```python
from python_code.kinematics_core.rigid_body_kinematics_model import RigidBodyKinematics
from python_code.kinematics_core.reference_geometry_model import ReferenceGeometry
from python_code.ferret_gaze.eye_kinematics.ferret_eye_kinematics_models import FerretEyeKinematics
```
  • The 'FerretEyeKinematics' contains an 'eyeball' object which is precisely the same RigidBodyKinematics object as the Skull
  • The basic strategy for the main Pydantic data objects is to hold the big data blob as a numpy array and expose accessor and mutator methods. That way we get the validation of the Pydantic object plus the speed of vectorized numpy operations (like, 1000x faster). A sketch of the pattern is below this list.
  • All orientations are managed and stored as Quaternions. If you want another rotation formalism (rotation matrices, Euler angles, etc.), use the properties of the Quaternion object
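
Here is a minimal sketch of that blob-plus-accessors pattern; the class, field, and method names are hypothetical stand-ins, not the repo's real API:

```python
import numpy as np
from pydantic import BaseModel, ConfigDict

class RigidBodyKinematicsSketch(BaseModel):
    """Hypothetical model illustrating the pattern: Pydantic validates
    the object, while the heavy data lives in plain numpy arrays."""
    model_config = ConfigDict(arbitrary_types_allowed=True)

    quaternions_xyzw: np.ndarray  # (n_frames, 4) orientations
    positions_xyz: np.ndarray     # (n_frames, 3) world-frame positions

    def position_at(self, frame_number: int) -> np.ndarray:
        """Accessor: one frame's position, no copy of the blob."""
        return self.positions_xyz[frame_number]

    def translate(self, offset_xyz: np.ndarray) -> None:
        """Mutator: vectorized across all frames at once."""
        self.positions_xyz += offset_xyz
```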

AI Slop

  • This PR was not exactly vibe coded, but AI played a heavy role in it. I validated the important parts, but expect to run across some AI slop from time to time. This is usually harmless loop-di-loop nonsense, but keep an eye out for more egregious "lemme turn this error into a warning" stuff that may still be hiding in there.
  • Some of the models/code in this PR may not be used, and may or may not be compatible with the current data model. In particular, I didn't keep the rerun code up to date as the data models evolved.
  • There is a Torsion Estimator in this thing that theoretically uses the axis tilt of the ferret's oval eyeball to estimate torsion, but I'm like 90% sure it's broken and worthless. I kept it in for funzies, but most likely we should just assume Listing's Law and zero torsion on each frame until we have a better option (a sketch of that zeroing is below).
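
For reference, zeroing torsion per frame could look roughly like this; the Fick-style axis conventions and (x, y, z, w) quaternion order are assumptions, not the repo's confirmed conventions:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def zero_torsion(eye_quats_xyzw: np.ndarray) -> np.ndarray:
    """Zero the torsional component of eye orientations, frame by frame.

    Assumes a Fick-style gimbal in the socket frame: horizontal about Z,
    vertical about Y, torsion about X (the gaze axis)."""
    angles = R.from_quat(eye_quats_xyzw).as_euler("ZYX")  # (n_frames, 3)
    angles[:, 2] = 0.0  # kill the roll-about-gaze (torsion) angle
    return R.from_euler("ZYX", angles).as_quat()
```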

Data Model Primer Script

  • Speaking of AI Slop - there's a Primer script at python_code/ferret_gaze/ferret_kinematics_primer.py that is designed to give you an intro to the basic data models and relevant geometry.

Note for Analysis

  • For the most part, my suggestion is to use those models to load in the raw csv/json data, and then use the accessor methods to get what you need for analysis (a hypothetical usage sketch is below this list).
  • Try not to rewrite code that already exists (keep it DRY)! It's really easy to mess up the geometry for stuff like this, so let's not introduce more opportunities for that to happen.
  • If you find yourself writing any code that feels like it should be included as a method in the main data model, we can fold it in easily.
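
For example, an analysis script might look roughly like this; from_json and angular_velocity are placeholder names for whatever loaders/accessors the real models expose (check the primer script for the real ones):

```python
from pathlib import Path

from python_code.ferret_gaze.eye_kinematics.ferret_eye_kinematics_models import (
    FerretEyeKinematics,
)

# Hypothetical snippet - the loader and accessor names below are
# placeholders, NOT the repo's confirmed API.
clip_folder = Path("my_recording/analyzable_data")
eye = FerretEyeKinematics.from_json(clip_folder / "eye_kinematics.json")
saccade_speeds = eye.angular_velocity()  # vectorized over all frames
```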

Marching Orders

@philip

  • Start integrating the new skull solver and gaze calculator into the main pipeline. The content of the output files may change if we fix bugs, but the basic file structure should be pretty close to what we're looking for (but feel free to have opinions!)
  • Start trying to wrap your head around the nitty gritty of what's happening here; we can talk it over in detail in the future!
  • We will also need an estimate of the Enclosure Geometry - you should be able to calculate that from the calibration data, using the extent of the charuco detections to define the edges, the ground-plane estimator to get the origin/orientation, and the world-centered camera data to get the ceiling/camera positions (a rough sketch is below). Try to save it into the same ReferenceGeometry json object as is used in the Skull/Eye Kinematics objects, if you can
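
A rough sketch of that estimate; all names here are hypothetical, and the returned dict is a guess at shape rather than the real ReferenceGeometry schema:

```python
import numpy as np

def estimate_enclosure_geometry(charuco_xyz: np.ndarray,
                                camera_xyz: np.ndarray) -> dict:
    """Hypothetical enclosure estimate from world-frame charuco corner
    detections (n, 3) and camera positions (n_cams, 3), assuming the
    data is already aligned so the ground plane is z=0."""
    # Floor footprint: axis-aligned extent of the charuco detections.
    floor_min_xy = charuco_xyz[:, :2].min(axis=0)
    floor_max_xy = charuco_xyz[:, :2].max(axis=0)
    # Ceiling height: cameras sit at (roughly) the ceiling plane.
    ceiling_z = float(camera_xyz[:, 2].max())
    return {
        "floor_min_xy": floor_min_xy.tolist(),
        "floor_max_xy": floor_max_xy.tolist(),
        "ceiling_z": ceiling_z,
    }
```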

@schollben and whatever Em's GH handle is

  • Probably spend most of your time wrapping your head around the data models via the primer script I mentioned.
  • There are some stubs of visualization scripts in the python_code/ferret_gaze/visualization folder. You can try starting from there if that's helpful. If there are bugs when running them, drop the script into an AI chat alongside the relevant data object, and it can probably fix the error (especially useful for small path/syntax errors; most of these scripts will be pretty close to right, so most AIs will be able to handle such errors)

gl; hf!

@philipqueen
Collaborator

I've gone ahead and merged the general processing changes into this. I'm going to add it into the processing pipeline and recording folder steps, then merge it. We will likely need to make other changes as well, but those can come in a new PR. At the very least, the solver output seems to be an improvement over the old method; the gaze stuff will still need some work, though.
