
eelcovv (Owner) commented Oct 22, 2025

feat: Improve CLI commands and fix simulation engine

This commit introduces significant enhancements to the list and view commands and resolves several critical bugs in the simulation engine.

  • The list command can now display detailed case summaries.
  • The view command has a more intuitive interface and can resolve meshes from case names.
  • The simulation engine now correctly handles mesh transformations, symmetry, and temporary files on Windows.
  • All unit tests and static analysis checks are passing.
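The Windows temporary-file fix presumably follows the standard pattern sketched below (the helper name is hypothetical, and the real code exports trimesh geometry rather than raw bytes): on Windows a `NamedTemporaryFile` cannot be reopened by a second reader while the creating handle is open, so the file is created with `delete=False`, closed, and removed in a `finally` block.

```python
import tempfile
from pathlib import Path


def roundtrip_temp_stl(stl_bytes: bytes) -> bytes:
    """Write bytes to a temp .stl and read them back via a second handle.

    Windows refuses to reopen a NamedTemporaryFile while the creating
    handle is still open, so we close it first (delete=False) and take
    care of cleanup ourselves in a finally block.
    """
    tmp_path: Path | None = None
    try:
        with tempfile.NamedTemporaryFile(suffix=".stl", delete=False) as f:
            tmp_path = Path(f.name)
            f.write(stl_bytes)
        # The handle is closed here, so a second reader (e.g. a mesh
        # loader) can open tmp_path even on Windows.
        return tmp_path.read_bytes()
    finally:
        if tmp_path is not None and tmp_path.exists():
            tmp_path.unlink()
```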

Summary by Sourcery

Add new 'list' and 'view' commands to the CLI and overhaul mesh processing with translation support, robust temporary-file handling, and improved database storage. Include cross-platform fixes, updated tests, and changelog documentation for the 0.2.0 release.

New Features:

  • Add 'list' CLI command to display available meshes or simulation cases from HDF5 database files
  • Add 'view' CLI command to load and visualize meshes or cases via trimesh viewer or optional VTK pipeline
  • Support mesh translations through new translation_x, translation_y, and translation_z settings
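A minimal sketch of how the new translation settings could map onto a mesh transform, assuming the field names from this summary; the real pipeline applies the offset to a `trimesh.Trimesh` via `apply_translation` rather than to a raw vertex array:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class TranslationSettings:
    # Field names mirror the new SimulationSettings options.
    translation_x: float = 0.0
    translation_y: float = 0.0
    translation_z: float = 0.0


def translate_vertices(vertices: np.ndarray, s: TranslationSettings) -> np.ndarray:
    """Shift every (n, 3) vertex by the configured offset."""
    offset = np.array([s.translation_x, s.translation_y, s.translation_z])
    return vertices + offset
```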

Bug Fixes:

  • Resolve cross-platform temporary file handling and cleanup errors in mesh export and Capytaine loading
  • Ensure symmetric meshes are converted back to full geometry before database storage
  • Fix unit tests and static analysis errors related to mocked objects and type checks

Enhancements:

  • Refactor mesh processing into separate geometry loading and Capytaine body configuration functions
  • Improve add_mesh_to_database to accept in-memory trimesh objects, compute hashes from memory exports, and store STL content as opaque binary data
  • Simplify processing pipeline to propagate transformations and mesh name overrides without temporary directories
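The hashing and storage change above can be sketched as follows (the helper name is hypothetical): the SHA-256 is computed from the in-memory STL export, and the raw bytes are wrapped in `np.void` so that h5py stores them as an opaque binary scalar; a fixed-length string dtype would truncate STL content at the first NUL byte.

```python
import hashlib

import numpy as np


def mesh_record(stl_bytes: bytes) -> dict:
    """Build the hash and payload stored per mesh in the HDF5 database.

    np.void wraps arbitrary bytes as an opaque scalar, which h5py writes
    verbatim; storing the bytes as a string would break on embedded NULs.
    """
    return {
        "sha256": hashlib.sha256(stl_bytes).hexdigest(),
        "stl_content": np.void(stl_bytes),
    }
```

Skipping an unchanged mesh then reduces to comparing the stored `sha256` attribute against the newly computed hash.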

Build:

  • Update pyproject.toml to require pyglet<2 and adjust VTK dependency specifications for new visualization commands

Documentation:

  • Update CHANGELOG.md for version 0.2.0 with details on new CLI commands, breaking interface change for 'view', and bug fixes

Tests:

  • Update unit tests to reflect new function signatures, mocking patterns, and translation settings in processing pipeline

sourcery-ai bot (Contributor) commented Oct 22, 2025

Reviewer's Guide

This PR overhauls the mesh processing pipeline by splitting geometry loading and Capytaine setup into discrete functions with explicit translation support and robust tempfile handling. It rewrites mesh ingestion to operate on in-memory STL exports, enhances the simulation routines to propagate translation and mesh-naming parameters (including draft workflows), and adds two new CLI commands (list and view) along with their tests and dependency updates.

Sequence diagram for the new mesh processing pipeline in simulation engine

sequenceDiagram
    participant CLI
    participant Engine
    participant Trimesh
    participant Capytaine
    participant HDF5_DB
    CLI->>Engine: run_simulation_batch(settings)
    Engine->>Trimesh: _prepare_trimesh_geometry(stl_file, translation_x, translation_y, translation_z)
    Trimesh-->>Engine: Transformed mesh
    Engine->>Capytaine: _prepare_capytaine_body(transformed_mesh, mesh_name, ...)
    Capytaine-->>Engine: Capytaine body, final mesh
    Engine->>HDF5_DB: add_mesh_to_database(output_file, final_mesh, mesh_name)
    HDF5_DB-->>Engine: Mesh stored
    Engine-->>CLI: Simulation results saved

ER diagram for mesh and case storage in HDF5 database

erDiagram
    MESHES {
        string mesh_name PK
        float volume
        float cog_x
        float cog_y
        float cog_z
        float bbox_lx
        float bbox_ly
        float bbox_lz
        binary stl_content
        float[3][3] inertia_tensor
        string sha256
    }
    CASES {
        string case_name PK
        string stl_mesh_name FK
    }
    MESHES ||--o{ CASES : "referenced by"

Class diagram for updated mesh processing and CLI command structure

classDiagram
    class SimulationSettings {
        output_hdf5_file: str
        wave_periods: float | list[float]
        wave_directions: float | list[float]
        translation_x: float
        translation_y: float
        translation_z: float
        forward_speed: float | list[float]
        lid: bool
        add_center_of_mass: bool
    }
    class Engine {
        +_prepare_trimesh_geometry(stl_file, translation_x, translation_y, translation_z)
        +_prepare_capytaine_body(source_mesh, mesh_name, lid, grid_symmetry, add_center_of_mass)
        +add_mesh_to_database(output_file, mesh_to_add, mesh_name, overwrite)
        +_process_single_stl(stl_file, settings, output_file, mesh_name_override)
        +process_all_cases_for_one_stl(...)
        +run_simulation_batch(settings)
    }
    class ListCommand {
        +list_command(files, option_files, cases)
    }
    class ViewCommand {
        +view(hdf5_file, mesh_names, vtk, show_all)
    }
    Engine ..> SimulationSettings : uses
    CLI ..> ListCommand : registers
    CLI ..> ViewCommand : registers

File-Level Changes

Change Details Files
Refactor mesh geometry preparation and Capytaine integration
  • Introduced _prepare_trimesh_geometry for STL loading and optional X/Y/Z translations
  • Reworked _prepare_capytaine_body to accept a Trimesh, handle temp STL files reliably, apply symmetry and extract final mesh
  • Ensured symmetric meshes are expanded before database storage
src/fleetmaster/core/engine.py
Revise mesh database ingestion to use in-memory STL exports
  • Changed add_mesh_to_database signature to accept Trimesh and mesh_name
  • Exported mesh to bytes for SHA256 hashing and stored as np.void to avoid null-byte issues
  • Populated volume, center_of_mass, bounding_box, and inertia_tensor directly from the mesh object
src/fleetmaster/core/engine.py
Enhance simulation pipeline for translations and draft cases
  • Extended _process_single_stl and process_all_cases_for_one_stl to pass translation_x/y/z and mesh_name_override
  • Moved mesh addition to after final mesh creation, removed early file-based additions
  • Refactored run_simulation_batch to apply draft offsets via model_copy instead of TemporaryDirectory loops
src/fleetmaster/core/engine.py
Add and register new CLI commands: list and view
  • Created list_command and view modules with click interfaces for meshes and cases
  • Registered new subcommands in cli.py and show help when none invoked
  • Updated commands/__init__.py, adjusted pyproject.toml to include pyglet and vtk.* dependencies
src/fleetmaster/cli.py
src/fleetmaster/commands/list.py
src/fleetmaster/commands/view.py
src/fleetmaster/commands/__init__.py
pyproject.toml
Extend simulation settings with translation parameters
  • Added translation_x, translation_y, translation_z fields to SimulationSettings with defaults
  • Updated run command and processing routines to read and use these new fields
src/fleetmaster/core/settings.py
Update tests to align with refactored engine and CLI
  • Modified test_prepare_capytaine_body, test_add_mesh_to_database to mock Trimesh and tempfile behavior
  • Adjusted test_process_single_stl and test_run_simulation_batch_drafts to assert translation offsets and mesh_name_override
  • Ensured all unit tests pass with new mesh-handling logic
tests/test_engine.py
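Reading the draft refactor together with the reviewer note about translation_z, the per-draft settings copies can be sketched like this; the real code uses pydantic's `model_copy(update=...)` where this standalone sketch uses `dataclasses.replace`, and the field names are assumptions taken from the summary above:

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class DraftSettings:
    translation_z: float = 0.0
    drafts: tuple[float, ...] = ()


def settings_per_draft(base: DraftSettings) -> list[DraftSettings]:
    """One settings copy per draft, with the draft depth subtracted from
    translation_z (the interaction the review asks to document)."""
    if not base.drafts:
        return [base]
    return [
        replace(base, translation_z=base.translation_z - draft)
        for draft in base.drafts
    ]
```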



sourcery-ai bot left a comment

Hey there - I've reviewed your changes - here's some feedback:

  • Consider removing or consolidating src/fleetmaster/core/visualize_db_mesh.py since its functionality largely duplicates the new view command in src/fleetmaster/commands/view.py to avoid code duplication.
  • Centralize hardcoded HDF5 group names (e.g. "meshes", case group prefixes) into shared constants to prevent typos and make maintenance across engine, list, and view modules easier.
  • The draft-generation logic subtracts the draft depth from translation_z; please clarify this behavior in the CLI help and docstrings so users understand how translation and draft offset interact.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Consider removing or consolidating src/fleetmaster/core/visualize_db_mesh.py since its functionality largely duplicates the new view command in src/fleetmaster/commands/view.py to avoid code duplication.
- Centralize hardcoded HDF5 group names (e.g. "meshes", case group prefixes) into shared constants to prevent typos and make maintenance across engine, list, and view modules easier.
- The draft-generation logic subtracts the draft depth from translation_z; please clarify this behavior in the CLI help and docstrings so users understand how translation and draft offset interact.

## Individual Comments

### Comment 1
<location> `tests/test_engine.py:109-118` </location>
<code_context>
+def test_prepare_capytaine_body(mock_tempfile, mock_cpt, tmp_path: Path):
</code_context>

<issue_to_address>
**suggestion (testing):** Edge case: test does not cover failure to delete temporary file.

Add a test to confirm temporary file cleanup when exceptions occur during mesh loading or Capytaine body creation, to ensure resources are properly released.
</issue_to_address>

### Comment 2
<location> `tests/test_engine.py:167-168` </location>
<code_context>
 def test_add_mesh_to_database_new(tmp_path):
     """Test adding a new mesh to the HDF5 database."""
     output_file = tmp_path / "db.h5"
-    stl_file = tmp_path / "mesh.stl"
-    stl_content = b"This is a dummy stl file"
-    stl_file.write_bytes(stl_content)
+    stl_content = b"This is a dummy stl file content"

-    mock_mesh = MagicMock()
+    mock_mesh = MagicMock(spec=trimesh.Trimesh)
     mock_mesh.volume = 1.0
     mock_mesh.center_mass = [0.1, 0.2, 0.3]
     mock_mesh.bounding_box.extents = [1.0, 2.0, 3.0]
     mock_mesh.moment_inertia = np.eye(3)
+    mock_mesh.export.return_value = stl_content

-    with patch("fleetmaster.core.engine.trimesh.load_mesh", return_value=mock_mesh):
-        add_mesh_to_database(output_file, str(stl_file), overwrite=False)
+    add_mesh_to_database(output_file, mock_mesh, "mesh", overwrite=False)

     with h5py.File(output_file, "r") as f:
         group = f["meshes/mesh"]
</code_context>

<issue_to_address>
**suggestion (testing):** Test does not verify all mesh attributes and datasets written to HDF5.

Add assertions for all expected mesh attributes (e.g., bbox_lx, bbox_ly, bbox_lz, inertia_tensor) and verify that the stl_content dataset is stored as np.void. This will improve test coverage and robustness.

```suggestion
    with h5py.File(output_file, "r") as f:
        group = f["meshes/mesh"]
        # Assert mesh attributes
        assert group.attrs["volume"] == 1.0
        assert np.allclose(group.attrs["center_mass"], [0.1, 0.2, 0.3])
        assert group.attrs["bbox_lx"] == 1.0
        assert group.attrs["bbox_ly"] == 2.0
        assert group.attrs["bbox_lz"] == 3.0
        assert np.allclose(group.attrs["inertia_tensor"], np.eye(3))
        # Assert STL content dataset
        assert "stl" in group
        stl_data = group["stl"][()]
        assert isinstance(stl_data, np.void)
        assert bytes(stl_data) == stl_content
```
</issue_to_address>

### Comment 3
<location> `tests/test_engine.py:165-168` </location>
<code_context>
 def test_add_mesh_to_database_skip_existing(tmp_path, caplog):
     """Test that an existing mesh with the same hash is skipped."""
     output_file = tmp_path / "db.h5"
-    stl_file = tmp_path / "mesh.stl"
     stl_content = b"dummy stl"
-    stl_file.write_bytes(stl_content)
     file_hash = hashlib.sha256(stl_content).hexdigest()

+    mock_mesh = MagicMock(spec=trimesh.Trimesh)
+    mock_mesh.export.return_value = stl_content
+
     with h5py.File(output_file, "w") as f:
         group = f.create_group("meshes/mesh")
         group.attrs["sha256"] = file_hash

     with caplog.at_level(logging.INFO):
-        add_mesh_to_database(output_file, str(stl_file), overwrite=False)
+        add_mesh_to_database(output_file, mock_mesh, "mesh", overwrite=False)

     assert "has the same SHA256 hash. Skipping." in caplog.text

</code_context>

<issue_to_address>
**suggestion (testing):** Test does not check that mesh attributes remain unchanged when skipping.

Add assertions to confirm that mesh attributes and datasets in the HDF5 file are unchanged when a mesh is skipped due to a matching hash.

```suggestion
    add_mesh_to_database(output_file, mock_mesh, "mesh", overwrite=False)

    # Assert that mesh attributes and datasets remain unchanged
    with h5py.File(output_file, "r") as f:
        group = f["meshes/mesh"]
        # Check that the sha256 attribute is unchanged
        assert group.attrs["sha256"] == file_hash
        # Check that no new attributes have been added
        assert len(group.attrs) == 1
        # Check that no datasets have been added to the group
        assert list(group.keys()) == []
```
</issue_to_address>

### Comment 4
<location> `tests/test_engine.py:186-193` </location>
<code_context>
 def test_add_mesh_to_database_overwrite_warning(tmp_path, caplog):
     """Test warning when mesh is different and overwrite is False."""
     output_file = tmp_path / "db.h5"
-    stl_file = tmp_path / "mesh.stl"
-    stl_file.write_bytes(b"new content")
+    mock_mesh = MagicMock(spec=trimesh.Trimesh)
+    mock_mesh.export.return_value = b"new content"

     with h5py.File(output_file, "w") as f:
         group = f.create_group("meshes/mesh")
         group.attrs["sha256"] = "old_hash"

     with caplog.at_level(logging.WARNING):
-        add_mesh_to_database(output_file, str(stl_file), overwrite=False)
+        add_mesh_to_database(output_file, mock_mesh, "mesh", overwrite=False)

     assert "is different from the one in the database" in caplog.text

</code_context>

<issue_to_address>
**suggestion (testing):** Missing test for overwrite=True scenario.

Add a test for add_mesh_to_database with overwrite=True to confirm that mesh data and attributes are correctly updated in the HDF5 file and the expected log message is generated.

```suggestion
    with h5py.File(output_file, "w") as f:
        group = f.create_group("meshes/mesh")
        group.attrs["sha256"] = file_hash

    with caplog.at_level(logging.INFO):
        add_mesh_to_database(output_file, mock_mesh, "mesh", overwrite=False)

    assert "has the same SHA256 hash. Skipping." in caplog.text


def test_add_mesh_to_database_overwrite_true(tmp_path, caplog):
    """Test that mesh data and attributes are updated when overwrite=True."""
    output_file = tmp_path / "db.h5"
    mock_mesh = MagicMock(spec=trimesh.Trimesh)
    mock_mesh.export.return_value = b"new content"
    new_hash = "new_hash"

    # Create initial mesh group with old hash and dummy data
    with h5py.File(output_file, "w") as f:
        group = f.create_group("meshes/mesh")
        group.attrs["sha256"] = "old_hash"
        group.create_dataset("mesh_data", data=b"old content")

    with patch("fleetmaster.core.engine._compute_sha256", return_value=new_hash):
        with caplog.at_level(logging.WARNING):
            add_mesh_to_database(output_file, mock_mesh, "mesh", overwrite=True)

    # Check that the mesh group has the new hash and new data
    with h5py.File(output_file, "r") as f:
        group = f["meshes/mesh"]
        assert group.attrs["sha256"] == new_hash
        assert group["mesh_data"][()] == b"new content"

    assert "Overwriting mesh data in the database" in caplog.text
```
</issue_to_address>

### Comment 5
<location> `tests/test_engine.py:232-241` </location>
<code_context>
 @patch("fleetmaster.core.engine.process_all_cases_for_one_stl")
-@patch("fleetmaster.core.engine.add_mesh_to_database")
-def test_process_single_stl(mock_add_mesh, mock_process_all, mock_settings):
+def test_process_single_stl(mock_process_all, mock_settings):
     """Test the main processing pipeline for a single STL file."""
     stl_file = "/path/to/dummy.stl"
     output_file = Path("/fake/output.hdf5")

     _process_single_stl(stl_file, mock_settings, output_file)

-    mock_add_mesh.assert_called_once_with(output_file, stl_file, mock_settings.overwrite_meshes)
     mock_process_all.assert_called_once()
     _, kwargs = mock_process_all.call_args
     assert kwargs["stl_file"] == stl_file
</code_context>

<issue_to_address>
**suggestion (testing):** Test does not cover mesh_name_override argument.

Please add a test for _process_single_stl with mesh_name_override set, verifying it is passed to process_all_cases_for_one_stl and affects downstream behavior as expected.

Suggested implementation:

```python
@patch("fleetmaster.core.engine.process_all_cases_for_one_stl")
def test_process_single_stl(mock_process_all, mock_settings):
    """Test the main processing pipeline for a single STL file."""
    stl_file = "/path/to/dummy.stl"
    output_file = Path("/fake/output.hdf5")

    _process_single_stl(stl_file, mock_settings, output_file)

    mock_process_all.assert_called_once()
    _, kwargs = mock_process_all.call_args
    assert kwargs["stl_file"] == stl_file

@patch("fleetmaster.core.engine.process_all_cases_for_one_stl")
def test_process_single_stl_with_mesh_name_override(mock_process_all, mock_settings):
    """Test _process_single_stl with mesh_name_override set."""
    stl_file = "/path/to/dummy.stl"
    output_file = Path("/fake/output.hdf5")
    mesh_name_override = "custom_mesh_name"

    _process_single_stl(stl_file, mock_settings, output_file, mesh_name_override=mesh_name_override)

    mock_process_all.assert_called_once()
    _, kwargs = mock_process_all.call_args
    assert kwargs["stl_file"] == stl_file
    assert kwargs["mesh_name_override"] == mesh_name_override

```

Make sure that `_process_single_stl` accepts the `mesh_name_override` argument and passes it to `process_all_cases_for_one_stl`. If it does not, you will need to update its signature and implementation accordingly.
</issue_to_address>

### Comment 6
<location> `tests/test_engine.py:273-282` </location>
<code_context>
+def test_run_simulation_batch_drafts(mock_setup, mock_process, mock_settings, tmp_path: Path):
</code_context>

<issue_to_address>
**suggestion (testing):** Edge case: test does not cover when drafts list is empty.

Add a test for run_simulation_batch with settings.drafts as an empty list to verify correct fallback behavior.
</issue_to_address>

### Comment 7
<location> `src/fleetmaster/core/engine.py:106` </location>
<code_context>


-def _prepare_capytaine_body(stl_file: str, lid: bool, grid_symmetry: bool, add_centre_of_mass: bool = False) -> Any:
+def _prepare_trimesh_geometry(
+    stl_file: str,
+    translation_x: float = 0.0,
</code_context>

<issue_to_address>
**issue (complexity):** Consider consolidating geometry preparation and database addition into unified functions using a config object to reduce boilerplate and simplify the API.

There’s a lot of boilerplate just to thread translations, mesh names and load/​export dances through two separate functions. You can collapse both `_prepare_trimesh_geometry` and `_prepare_capytaine_body` into a single “prepare‐and‐load” step by bundling all of the geometry parameters into one small config object. Likewise unify your two `add_mesh_to_database` variants into one:

```python
from dataclasses import dataclass, field
import hashlib
import pathlib
import tempfile

import capytaine as cpt
import h5py
import numpy as np
import trimesh

@dataclass
class BodyConfig:
    # default_factory avoids sharing one mutable array across instances
    translation: np.ndarray = field(default_factory=lambda: np.zeros(3))  # [x, y, z]
    lid: bool = False
    grid_symmetry: bool = False
    add_center_of_mass: bool = False
    mesh_name: str | None = None

def prepare_body(
    stl_file: str,
    cfg: BodyConfig
) -> tuple[cpt.FloatingBody, trimesh.Trimesh]:
    # 1) load + apply translation
    mesh = trimesh.load_mesh(stl_file)
    if cfg.translation.any():
        mesh.apply_translation(cfg.translation)

    # 2) write to temp + load by Capytaine in one try/finally
    path = None
    try:
        with tempfile.NamedTemporaryFile(suffix=".stl", delete=False) as f:
            path = pathlib.Path(f.name)
            mesh.export(f, file_type="stl")
        hull = cpt.load_mesh(str(path), name=cfg.mesh_name or "")
    finally:
        if path and path.exists():
            path.unlink()

    # 3) configure body
    cog = mesh.center_mass if cfg.add_center_of_mass else None
    boat = cpt.FloatingBody(mesh=hull, lid_mesh=(hull.generate_lid(z=-0.01) if cfg.lid else None), center_of_mass=cog)
    if cfg.grid_symmetry:
        boat.mesh = cpt.ReflectionSymmetricMesh(boat.mesh, plane=cpt.xOz_Plane).to_mesh()
    boat.add_all_rigid_body_dofs()
    boat.keep_immersed_part()

    # 4) extract final trimesh for the DB
    final = trimesh.Trimesh(vertices=boat.mesh.vertices, faces=boat.mesh.faces)
    return boat, final
```

And unify your `add_mesh_to_database` into one that accepts either a filename or a `trimesh.Trimesh`:

```python
def add_mesh_to_database(
    output_file: pathlib.Path,
    mesh_or_path: str | trimesh.Trimesh,
    mesh_name: str,
    overwrite: bool = False,
):
    # load or use the provided mesh directly
    mesh = trimesh.load_mesh(mesh_or_path) if isinstance(mesh_or_path, str) else mesh_or_path
    new_stl = mesh.export(file_type="stl")
    new_hash = hashlib.sha256(new_stl).hexdigest()

    with h5py.File(output_file, "a") as f:
        grp = f.require_group(f"{MESH_GROUP_NAME}/{mesh_name}")
        if grp.attrs.get("sha256") == new_hash and not overwrite:
            return
        grp.attrs.update({
            "sha256": new_hash,
            "volume": mesh.volume,
            "cog_x": mesh.center_mass[0],
            "cog_y": mesh.center_mass[1],
            "cog_z": mesh.center_mass[2],
        })
        grp.create_dataset("inertia_tensor", data=mesh.moment_inertia)
        grp.create_dataset("stl_content", data=np.void(new_stl))
```

This:

- Removes the temp‐file and geometry dance from two separate functions into one place.
- Bundles translation/lid/symmetry/mesh_name into a single config.
- Stops propagating three coords + an override string through every call site.
- Unifies your two `add_mesh_to_database` variants into one.

All functionality is identical, but the surface API is much simpler, the boilerplate is reduced, and you only have one place to change if the STL → Capytaine workflow evolves.
</issue_to_address>

### Comment 8
<location> `src/fleetmaster/commands/view.py:157` </location>
<code_context>
+@click.argument("mesh_names", nargs=-1)
+@click.option("--vtk", is_flag=True, help="Use the VTK viewer instead of the default trimesh viewer.")
+@click.option("--show-all", is_flag=True, help="Visualize all meshes found in the specified files.")
+def view(hdf5_file: str, mesh_names: tuple[str, ...], vtk: bool, show_all: bool) -> None:
+    """
+    CLI command to load and visualize meshes from HDF5 databases.
</code_context>

<issue_to_address>
**issue (complexity):** Consider refactoring the mesh loading and VTK actor creation logic into separate helper functions to make the CLI command slimmer and more testable.

Here are two small refactorings that peel off the nested bits into testable helpers and collapse duplication with your existing core routines. You can keep all features intact, but your `view` command becomes much slimmer.

1) **Extract HDF5‐to‐Trimesh loading**  
   Move this into `core/visualize_db_mesh.py` (or a new `core/io.py`):

```python
# core/visualize_db_mesh.py

import io
import h5py
import trimesh
from pathlib import Path
import logging

logger = logging.getLogger(__name__)

def load_meshes_from_hdf5(
    hdf5_path: Path,
    mesh_names: list[str],
) -> list[trimesh.Trimesh]:
    """Load and return trimesh objects for the given names from HDF5."""
    meshes: list[trimesh.Trimesh] = []
    if not hdf5_path.exists():
        raise FileNotFoundError(f"{hdf5_path} not found")

    with h5py.File(hdf5_path, "r") as f:
        for name in mesh_names:
            group = f.get(f"meshes/{name}")
            if not group:
                logger.warning("Mesh %r not found", name)
                continue
            raw = group["stl_content"][()]
            try:
                mesh = trimesh.load_mesh(io.BytesIO(raw.tobytes()), file_type="stl")
                meshes.append(mesh)
            except Exception:
                logger.exception("Failed to parse mesh %r", name)
    return meshes
```

2) **Extract VTK–actor builder**  
   In your CLI module (e.g. `cli/view.py`), factor out the conversion loop:

```python
# cli/view.py

from itertools import cycle
import vtk
import numpy as np
from vtk.util.numpy_support import numpy_to_vtk
import trimesh

VTK_COLORS = [
    (0.8,0.8,1.0), (1.0,0.8,0.8),
    (0.8,1.0,0.8), (1.0,1.0,0.8),
]

def vtk_actor_from_trimesh(mesh: trimesh.Trimesh, color: tuple[float, float, float]):
    pts = vtk.vtkPoints()
    pts.SetData(numpy_to_vtk(mesh.vertices, deep=True))

    faces = np.hstack((np.full((len(mesh.faces),1),3), mesh.faces))
    cells = vtk.vtkCellArray()
    cells.SetCells(len(mesh.faces),
                   numpy_to_vtk(faces.flatten(), array_type=vtk.VTK_ID_TYPE))

    poly = vtk.vtkPolyData()
    poly.SetPoints(pts)
    poly.SetPolys(cells)

    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputData(poly)

    actor = vtk.vtkActor()
    actor.SetMapper(mapper)
    actor.GetProperty().SetColor(color)
    return actor

def show_with_vtk(meshes: list[trimesh.Trimesh]):
    renderer = vtk.vtkRenderer()
    renderer.SetBackground(0.1,0.2,0.3)

    for mesh, color in zip(meshes, cycle(VTK_COLORS)):
        renderer.AddActor(vtk_actor_from_trimesh(mesh, color))

    # axes, window, interactor setup ...
```

3) **Slim down your CLI**  
   Finally your `view` handler just does:

```python
from core.visualize_db_mesh import load_meshes_from_hdf5

@click.command(...)
def view(...):
    # resolve `resolved_mesh_names` as before...
    meshes = load_meshes_from_hdf5(Path(hdf5_file), sorted(resolved_mesh_names))
    if not meshes:
        click.echo("No meshes to display", err=True); return

    if use_vtk:
        show_with_vtk(meshes)
    else:
        # trimesh viewer path...
```

This removes the deep nesting, reuses the shared loader, and keeps your CLI definition focused.
</issue_to_address>

### Comment 9
<location> `src/fleetmaster/commands/list.py:127` </location>
<code_context>
    final_files = ["results.hdf5"] if not all_files else list(all_files)

</code_context>

<issue_to_address>
**suggestion (code-quality):** Swap if/else branches of if expression to remove negation ([`swap-if-expression`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/swap-if-expression))

```suggestion
    final_files = list(all_files) if all_files else ["results.hdf5"]
```

<br/><details><summary>Explanation</summary>Negated conditions are more difficult to read than positive ones, so it is best
to avoid them where we can. By swapping the `if` and `else` conditions around we
can invert the condition and make it positive.
</details>
</issue_to_address>

### Comment 10
<location> `src/fleetmaster/commands/view.py:126-127` </location>
<code_context>
def visualize_meshes_from_db(hdf5_path: str, mesh_names_to_show: list[str], use_vtk: bool) -> None:
    """Loads one or more meshes from HDF5 databases and visualizes them in a single scene."""
    loaded_meshes = []

    db_file = Path(hdf5_path)
    if not db_file.exists():
        click.echo(f"❌ Error: Database file '{hdf5_path}' not found.", err=True)
        return

    for mesh_name in mesh_names_to_show:
        found_mesh_data = None
        mesh_group_path = f"meshes/{mesh_name}"
        try:
            logger.debug(f"Opening database {db_file}")
            with h5py.File(db_file, "r") as f:
                if mesh_group_path in f:
                    click.echo(f"📦 Loading mesh '{mesh_name}' from '{hdf5_path}'...")
                    found_mesh_data = f[mesh_group_path]["stl_content"][()]
        except Exception as e:
            logger.exception(f"Error reading mesh {mesh_group_path}' from '{hdf5_path}'")
            click.echo(f"❌ Error reading '{hdf5_path}': {e}", err=True)
            continue

        if found_mesh_data:  # A non-empty numpy.void object evaluates to True.
            try:
                # The data is stored as a numpy.void object, which must be converted to bytes.
                stl_bytes = found_mesh_data.tobytes()
                mesh = trimesh.load_mesh(io.BytesIO(stl_bytes), file_type="stl")
                if mesh:
                    loaded_meshes.append(mesh)
            except Exception as e:
                logger.exception(f"Failed to parse mesh '{mesh_name}'")
                click.echo(f"❌ Error parsing mesh '{mesh_name}': {e}", err=True)
        else:
            click.echo(f"❌ Warning: Mesh '{mesh_name}' not found in any of the specified files.", err=True)

    if not loaded_meshes:
        click.echo("No meshes were loaded. Nothing to display.", err=True)
        return

    if use_vtk:
        show_with_vtk(loaded_meshes)
    else:
        click.echo(f"🎨 Displaying {len(loaded_meshes)} mesh(es) with trimesh viewer. Close the window to continue.")
        # To avoid potential rendering glitches with the scene object,
        # we create a scene with an axis and pass the meshes to show directly.
        axis = trimesh.creation.axis(origin_size=0.1)
        scene = trimesh.Scene([axis, *loaded_meshes])

        logger.debug("Showing with solid mode. Toggle with w/s to go to wireframe")
        scene.show()

</code_context>

<issue_to_address>
**suggestion (code-quality):** Use named expression to simplify assignment and conditional ([`use-named-expression`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/use-named-expression/))

```suggestion
                if mesh := trimesh.load_mesh(
                    io.BytesIO(stl_bytes), file_type="stl"
                ):
```
</issue_to_address>

### Comment 11
<location> `src/fleetmaster/core/visualize_db_mesh.py:109-110` </location>
<code_context>
def visualize_mesh_from_db(hdf5_path: str, mesh_name: str, use_vtk: bool) -> None:
    """
    Loads a specific mesh from the HDF5 database and visualizes it.
    """
    db_file = Path(hdf5_path)
    if not db_file.exists():
        print(f"❌ Error: Database file '{hdf5_path}' not found.")
        return

    mesh_group_path = f"meshes/{mesh_name}"

    with h5py.File(db_file, "r") as f:
        if mesh_group_path not in f:
            print(f"❌ Error: Mesh '{mesh_name}' not found in {hdf5_path}.")
            available_meshes = list(f.get("meshes", {}).keys())
            if available_meshes:
                print("\nAvailable meshes are:")
                for name in available_meshes:
                    print(f"  - {name}")
            return

        print(f"📦 Loading mesh '{mesh_name}' from the database...")
        stl_binary_content = f[mesh_group_path]["stl_content"][()]

    stl_file_in_memory = io.BytesIO(stl_binary_content)
    mesh = trimesh.load_mesh(stl_file_in_memory, file_type="stl")

    if use_vtk:
        show_with_vtk(mesh)
    else:
        show_with_trimesh(mesh)

</code_context>

<issue_to_address>
**suggestion (code-quality):** Use named expression to simplify assignment and conditional ([`use-named-expression`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/use-named-expression/))

```suggestion
            if available_meshes := list(f.get("meshes", {}).keys()):
```
</issue_to_address>


Comment on lines +109 to +118
def test_prepare_capytaine_body(mock_tempfile, mock_cpt, tmp_path: Path):
"""Test _prepare_capytaine_body configures the body correctly."""
# Arrange
mock_source_mesh = MagicMock(spec=trimesh.Trimesh)
mock_source_mesh.center_mass = [1, 2, 3]

# Create a real temporary file path for the test to use
temp_file_path = tmp_path / "temp.stl"
mock_tempfile.mkstemp.return_value = (123, str(temp_file_path))


suggestion (testing): Edge case: test does not cover failure to delete temporary file.

Add a test to confirm temporary file cleanup when exceptions occur during mesh loading or Capytaine body creation, to ensure resources are properly released.
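One way to exercise that edge case is sketched below as a standalone snippet. The `load_with_cleanup` helper is illustrative only (it mimics the `mkstemp` / `try` / `finally` pattern, not FleetMaster's actual `_prepare_capytaine_body`), and the failing loader stands in for Capytaine raising on a malformed mesh:

```python
import os
import tempfile
from pathlib import Path


def load_with_cleanup(mesh_bytes: bytes, loader) -> object:
    # Write the mesh to a temp STL, hand the path to a loader,
    # and always remove the temp file, even if the loader raises.
    fd, name = tempfile.mkstemp(suffix=".stl")
    path = Path(name)
    try:
        with os.fdopen(fd, "wb") as fh:
            fh.write(mesh_bytes)
        return loader(str(path))
    finally:
        if path.exists():
            path.unlink()


# Simulate a loader that fails, as Capytaine might on a malformed mesh.
failed_path = {}


def bad_loader(p):
    failed_path["p"] = Path(p)
    raise ValueError("malformed mesh")


try:
    load_with_cleanup(b"solid x\nendsolid x\n", bad_loader)
except ValueError:
    pass

assert not failed_path["p"].exists()  # temp file removed despite the error
```

A real test would patch the loader inside the module under test and assert the same post-condition on the mocked temp path.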

Comment on lines 167 to 168
```python
with h5py.File(output_file, "r") as f:
    group = f["meshes/mesh"]
```

suggestion (testing): Test does not verify all mesh attributes and datasets written to HDF5.

Add assertions for all expected mesh attributes (e.g., bbox_lx, bbox_ly, bbox_lz, inertia_tensor) and verify that the stl_content dataset is stored as np.void. This will improve test coverage and robustness.

Suggested change

```diff
 with h5py.File(output_file, "r") as f:
     group = f["meshes/mesh"]
+    # Assert mesh attributes
+    assert group.attrs["volume"] == 1.0
+    assert np.allclose(group.attrs["center_mass"], [0.1, 0.2, 0.3])
+    assert group.attrs["bbox_lx"] == 1.0
+    assert group.attrs["bbox_ly"] == 2.0
+    assert group.attrs["bbox_lz"] == 3.0
+    assert np.allclose(group.attrs["inertia_tensor"], np.eye(3))
+    # Assert STL content dataset
+    assert "stl" in group
+    stl_data = group["stl"][()]
+    assert isinstance(stl_data, np.void)
+    assert bytes(stl_data) == stl_content
```
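The `np.void` assertions rely on a round-trip property that can be checked in isolation, without h5py:

```python
import numpy as np

stl_content = b"solid cube\nendsolid cube\n"
opaque = np.void(stl_content)  # the opaque binary scalar h5py stores for raw bytes
assert isinstance(opaque, np.void)
assert opaque.tobytes() == stl_content  # round-trips back to the original bytes
```

This is why asserting both the type and the byte content catches accidental string or array coercion on the way into the HDF5 file.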

Comment on lines 186 to 193
```diff
 with h5py.File(output_file, "w") as f:
     group = f.create_group("meshes/mesh")
     group.attrs["sha256"] = file_hash

 with caplog.at_level(logging.INFO):
-    add_mesh_to_database(output_file, str(stl_file), overwrite=False)
+    add_mesh_to_database(output_file, mock_mesh, "mesh", overwrite=False)

 assert "has the same SHA256 hash. Skipping." in caplog.text
```

suggestion (testing): Missing test for overwrite=True scenario.

Add a test for add_mesh_to_database with overwrite=True to confirm that mesh data and attributes are correctly updated in the HDF5 file and the expected log message is generated.

Suggested change

```diff
 with h5py.File(output_file, "w") as f:
     group = f.create_group("meshes/mesh")
     group.attrs["sha256"] = file_hash

 with caplog.at_level(logging.INFO):
     add_mesh_to_database(output_file, mock_mesh, "mesh", overwrite=False)

 assert "has the same SHA256 hash. Skipping." in caplog.text
+
+
+def test_add_mesh_to_database_overwrite_true(tmp_path, caplog):
+    """Test that mesh data and attributes are updated when overwrite=True."""
+    output_file = tmp_path / "db.h5"
+    mock_mesh = MagicMock(spec=trimesh.Trimesh)
+    mock_mesh.export.return_value = b"new content"
+    new_hash = "new_hash"
+    # Create initial mesh group with old hash and dummy data
+    with h5py.File(output_file, "w") as f:
+        group = f.create_group("meshes/mesh")
+        group.attrs["sha256"] = "old_hash"
+        group.create_dataset("mesh_data", data=b"old content")
+    with patch("fleetmaster.core.engine._compute_sha256", return_value=new_hash):
+        with caplog.at_level(logging.WARNING):
+            add_mesh_to_database(output_file, mock_mesh, "mesh", overwrite=True)
+    # Check that the mesh group has the new hash and new data
+    with h5py.File(output_file, "r") as f:
+        group = f["meshes/mesh"]
+        assert group.attrs["sha256"] == new_hash
+        assert group["mesh_data"][()] == b"new content"
+    assert "Overwriting mesh data in the database" in caplog.text
```

Comment on lines +232 to 241
```python
def test_process_single_stl(mock_process_all, mock_settings):
    """Test the main processing pipeline for a single STL file."""
    stl_file = "/path/to/dummy.stl"
    output_file = Path("/fake/output.hdf5")

    _process_single_stl(stl_file, mock_settings, output_file)

    mock_add_mesh.assert_called_once_with(output_file, stl_file, mock_settings.overwrite_meshes)
    mock_process_all.assert_called_once()
    _, kwargs = mock_process_all.call_args
    assert kwargs["stl_file"] == stl_file
```

suggestion (testing): Test does not cover mesh_name_override argument.

Please add a test for _process_single_stl with mesh_name_override set, verifying it is passed to process_all_cases_for_one_stl and affects downstream behavior as expected.

Suggested implementation:

```python
@patch("fleetmaster.core.engine.process_all_cases_for_one_stl")
def test_process_single_stl(mock_process_all, mock_settings):
    """Test the main processing pipeline for a single STL file."""
    stl_file = "/path/to/dummy.stl"
    output_file = Path("/fake/output.hdf5")

    _process_single_stl(stl_file, mock_settings, output_file)

    mock_process_all.assert_called_once()
    _, kwargs = mock_process_all.call_args
    assert kwargs["stl_file"] == stl_file


@patch("fleetmaster.core.engine.process_all_cases_for_one_stl")
def test_process_single_stl_with_mesh_name_override(mock_process_all, mock_settings):
    """Test _process_single_stl with mesh_name_override set."""
    stl_file = "/path/to/dummy.stl"
    output_file = Path("/fake/output.hdf5")
    mesh_name_override = "custom_mesh_name"

    _process_single_stl(stl_file, mock_settings, output_file, mesh_name_override=mesh_name_override)

    mock_process_all.assert_called_once()
    _, kwargs = mock_process_all.call_args
    assert kwargs["stl_file"] == stl_file
    assert kwargs["mesh_name_override"] == mesh_name_override
```

Make sure that `_process_single_stl` accepts the `mesh_name_override` argument and passes it to `process_all_cases_for_one_stl`. If it does not, you will need to update its signature and implementation accordingly.



```diff
-def _prepare_capytaine_body(stl_file: str, lid: bool, grid_symmetry: bool, add_centre_of_mass: bool = False) -> Any:
+def _prepare_trimesh_geometry(
```

issue (complexity): Consider consolidating geometry preparation and database addition into unified functions using a config object to reduce boilerplate and simplify the API.

There's a lot of boilerplate just to thread translations, mesh names and load/export dances through two separate functions. You can collapse both `_prepare_trimesh_geometry` and `_prepare_capytaine_body` into a single "prepare-and-load" step by bundling all of the geometry parameters into one small config object. Likewise, unify your two `add_mesh_to_database` variants into one:

```python
from dataclasses import dataclass, field
import hashlib
import pathlib
import tempfile

import h5py
import numpy as np
import trimesh
import capytaine as cpt

@dataclass
class BodyConfig:
    # default_factory avoids a single shared ndarray across instances
    translation: np.ndarray = field(default_factory=lambda: np.zeros(3))  # [x, y, z]
    lid: bool = False
    grid_symmetry: bool = False
    add_center_of_mass: bool = False
    mesh_name: str | None = None

def prepare_body(
    stl_file: str,
    cfg: BodyConfig
) -> tuple[cpt.FloatingBody, trimesh.Trimesh]:
    # 1) load + apply translation
    mesh = trimesh.load_mesh(stl_file)
    if cfg.translation.any():
        mesh.apply_translation(cfg.translation)

    # 2) write to temp + load by Capytaine in one try/finally
    path = None
    try:
        with tempfile.NamedTemporaryFile(suffix=".stl", delete=False) as f:
            path = pathlib.Path(f.name)
            mesh.export(f, file_type="stl")
        hull = cpt.load_mesh(str(path), name=cfg.mesh_name or "")
    finally:
        if path and path.exists():
            path.unlink()

    # 3) configure body
    cog = mesh.center_mass if cfg.add_center_of_mass else None
    boat = cpt.FloatingBody(mesh=hull, lid_mesh=(hull.generate_lid(z=-0.01) if cfg.lid else None), center_of_mass=cog)
    if cfg.grid_symmetry:
        boat.mesh = cpt.ReflectionSymmetricMesh(boat.mesh, plane=cpt.xOz_Plane).to_mesh()
    boat.add_all_rigid_body_dofs()
    boat.keep_immersed_part()

    # 4) extract final trimesh for the DB
    final = trimesh.Trimesh(vertices=boat.mesh.vertices, faces=boat.mesh.faces)
    return boat, final
```

And unify your `add_mesh_to_database` variants into one function that accepts either a filename or a `trimesh.Trimesh`:

```python
def add_mesh_to_database(
    output_file: pathlib.Path,
    mesh_or_path: str | trimesh.Trimesh,
    mesh_name: str,
    overwrite: bool = False,
):
    # load or use the provided mesh directly
    mesh = trimesh.load_mesh(mesh_or_path) if isinstance(mesh_or_path, str) else mesh_or_path
    new_stl = mesh.export(file_type="stl")
    new_hash = hashlib.sha256(new_stl).hexdigest()

    with h5py.File(output_file, "a") as f:
        grp = f.require_group(f"{MESH_GROUP_NAME}/{mesh_name}")
        if grp.attrs.get("sha256") == new_hash and not overwrite:
            return
        grp.attrs.update({
            "sha256": new_hash,
            "volume": mesh.volume,
            "cog_x": mesh.center_mass[0],
            "cog_y": mesh.center_mass[1],
            "cog_z": mesh.center_mass[2],
        })
        grp.create_dataset("inertia_tensor", data=mesh.moment_inertia)
        grp.create_dataset("stl_content", data=np.void(new_stl))
```

This:

  • Removes the temp‐file and geometry dance from two separate functions into one place.
  • Bundles translation/lid/symmetry/mesh_name into a single config.
  • Stops propagating three coords + an override string through every call site.
  • Unifies your two add_mesh_to_database variants into one.

All functionality is identical, but the surface API is much simpler, the boilerplate is reduced, and you only have one place to change if the STL → Capytaine workflow evolves.

eelcovv and others added 3 commits October 23, 2025 01:10
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
eelcovv and others added 6 commits October 23, 2025 01:10
@eelcovv eelcovv merged commit 265cdf0 into main Oct 22, 2025
7 checks passed