A collection of python threaded camera support routines for
- USB and laptop internal webcams
- RTSP streams
- MIPI CSI cameras (Raspberry Pi, Jetson Nano)
- Teledyne/FLIR blackfly (USB)
- Basler (not implemented yet)
Also support to save as
- hdf5
- tiff
- avi
- mkv
Supported OS
- Windows
- MacOS
- Unix
The routines use OpenCV, gstreamer, libcamera, PiCamera2 or PySpin to interface with the camera. The image acquisition runs in a background thread to achieve maximal frame rate and minimal latency.
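The background-thread pattern can be sketched as follows. This is an illustrative, self-contained example with a synthetic frame source standing in for a camera; the class and attribute names are not the library's API:

```python
import itertools
import queue
import threading
import time

class ThreadedCapture:
    """Background-capture sketch: a worker thread reads frames as fast as
    the source allows and pushes them into a bounded queue, so a slow
    consumer never stalls acquisition (frames are dropped instead)."""

    def __init__(self, read_frame, queue_size=32):
        self.read_frame = read_frame              # callable returning one frame
        self.capture_queue = queue.Queue(maxsize=queue_size)
        self.stopped = True

    def start(self):
        self.stopped = False
        self.thread = threading.Thread(target=self._update, daemon=True)
        self.thread.start()

    def _update(self):
        while not self.stopped:
            frame = self.read_frame()
            if not self.capture_queue.full():     # drop frame rather than block
                self.capture_queue.put(frame)
            else:
                time.sleep(0.001)                 # consumer is behind; back off

    def stop(self):
        self.stopped = True
        self.thread.join()

# Usage with a synthetic source standing in for a camera:
frames = itertools.count()                        # 0, 1, 2, ...
cap = ThreadedCapture(lambda: next(frames))
cap.start()
print(cap.capture_queue.get(timeout=1.0))         # first frame: 0
cap.stop()
```

The consumer pulls from `capture_queue` at its own pace; acquisition rate is decoupled from display or storage rate.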
Simplifies the settings needed for the Blackfly camera. You still need the PySpin and Spinnaker software installed. Supports trigger out during frame exposure and trigger in for frame start. Settings are optimized to achieve full frame rate with the BFS-U3-04S2M.
GStreamer pipeline for NVIDIA conversion and nvarguscamerasrc capture. Settings optimized for the Sony IMX219 Raspi v2 module.
- Examples (auto-selected on Jetson where applicable): examples/cv2_capture_display.py, examples/cv2_capture_display_send2rtp.py
cv2 capture architecture. The video subsystem is chosen based on the operating system.
RTSP capture architecture. RTSP camera can stream to multiple clients.
- If Python GI is available: uses GStreamer appsink (low latency, configurable).
- Otherwise (default on Windows/macOS with pip-installed OpenCV): uses OpenCV `VideoCapture(rtsp_url)` (typically FFmpeg backend).
- Examples: examples/rtsp_display.py
RTP (UDP) stream is usually single-client.
- If Python GI is available: uses GStreamer appsink.
- Otherwise: uses FFmpeg via `imageio-ffmpeg` and requires an SDP file (`rtp_sdp`) to describe the RTP payload.
- Examples: examples/rtp_display.py
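For the FFmpeg fallback, an SDP file for an H264 stream on payload type 96 might look like this (host and port are placeholders; see examples/rtp_h264_pt96.sdp for the template shipped with the examples):

```
v=0
o=- 0 0 IN IP4 127.0.0.1
s=RTP H264 stream
c=IN IP4 127.0.0.1
t=0 0
m=video 5000 RTP/AVP 96
a=rtpmap:96 H264/90000
```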
Interface for legacy picamera module.
- Example configs: examples/configs/all_capture_configs.py
Interface for PiCamera2 module.
- Examples: examples/raspi_capture_display.py
GStreamer appsink-based capture. Useful when you need a custom GStreamer pipeline (e.g., hardware decode/convert, CSI sources, RTSP).
Sends frames as an RTP/UDP H264 stream.
- Examples: examples/cv2_capture_display_send2rtp.py, examples/gcapture_display_send2rtp.py
- Receiver example: examples/rtp_display.py
- SDP template (for FFmpeg fallback): examples/rtp_h264_pt96.sdp
RTSP server for streaming frames to multiple clients.
- Server example: examples/rtsp_server.py
- Receiver example: examples/rtsp_display.py
Background writer threads for saving frames/cubes to disk.
- AVI (MJPG): camera/streamer/avistorageserver.py
- MKV (mp4v): camera/streamer/mkvstorageserver.py
- HDF5 arrays: camera/streamer/h5storageserver.py
- TIFF stacks: camera/streamer/tiffstorageserver.py
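The writer-thread pattern these servers share can be sketched as below. The class name and raw-bytes format are illustrative only; the real servers write HDF5/TIFF/AVI/MKV:

```python
import os
import queue
import tempfile
import threading

class StorageServer:
    """Sketch of a background writer: the capture side enqueues frames,
    a writer thread drains the queue to disk, so slow I/O never stalls
    acquisition."""

    def __init__(self, path):
        self.path = path
        self.write_queue = queue.Queue(maxsize=128)
        self.thread = threading.Thread(target=self._writer, daemon=True)
        self.thread.start()

    def _writer(self):
        with open(self.path, "wb") as f:
            while True:
                frame = self.write_queue.get()
                if frame is None:          # sentinel: flush and finish
                    break
                f.write(frame)

    def put(self, frame_bytes):
        self.write_queue.put(frame_bytes)

    def close(self):
        self.write_queue.put(None)
        self.thread.join()

# Usage: write three fake 4-byte "frames"
path = os.path.join(tempfile.gettempdir(), "frames_demo.bin")
server = StorageServer(path)
for i in range(3):
    server.put(bytes([i] * 4))
server.close()
print(os.path.getsize(path))  # 12
```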
- `PySpin` for FLIR cameras
- OpenCV for USB and CSI cameras
  - Windows uses cv2.CAP_MSMF
  - Darwin uses cv2.CAP_AVFOUNDATION
  - Linux uses cv2.CAP_V4L2
  - Jetson Nano uses cv2.CAP_GSTREAMER
  - RTSP typically uses the FFmpeg backend in OpenCV wheels
- GStreamer for RTSP network streams
- `imageio-ffmpeg` for RTP receive on Windows/macOS
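The per-OS defaults above can be expressed as a small helper. The function name is hypothetical, and it returns the constant's *name* so this sketch runs without OpenCV installed; with OpenCV you would pass `getattr(cv2, name)` to `cv2.VideoCapture`:

```python
import platform

def default_cv2_backend(system=None, jetson=False):
    """Return the cv2 backend constant name for the given OS,
    mirroring the choices listed above."""
    system = system or platform.system()
    if system == "Linux":
        return "CAP_GSTREAMER" if jetson else "CAP_V4L2"
    if system == "Darwin":
        return "CAP_AVFOUNDATION"
    if system == "Windows":
        return "CAP_MSMF"
    return "CAP_ANY"  # unknown OS: let OpenCV pick

print(default_cv2_backend("Windows"))             # CAP_MSMF
print(default_cv2_backend("Linux", jetson=True))  # CAP_GSTREAMER
```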
```
cd "folder where you have this Readme.md file"
pip install .
```
or
```
python setup.py bdist_wheel
pip3 install .\dist\*.whl
```
Used for RTSP/RTP pipelines and the gCapture appsink backend.
Linux / Ubuntu:
```
sudo apt install gstreamer1.0-tools gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-rtsp gstreamer1.0-alsa gstreamer1.0-pulseaudio v4l-utils
```
Linux (Python bindings for direct GStreamer use, needed by gCapture):
```
sudo apt install python3-gi python3-gst-1.0 gir1.2-gstreamer-1.0
```
Verify Python GI works:
```
python3 -c "import gi; gi.require_version('Gst','1.0'); from gi.repository import Gst; Gst.init(None); print(Gst.version_string())"
```
Windows (GStreamer runtime):
Install GStreamer from the GStreamer website (typically the MSVC x86_64 installer). Add the GStreamer bin folder (e.g. C:\gstreamer\1.0\msvc_x86_64\bin or C:\gstreamer\1.0\x86_64\bin, depending on the installer) to your PATH.
Verify GStreamer works:
```
gst-inspect-1.0 --version
gst-device-monitor-1.0 Video/Source
gst-inspect-1.0 ksvideosrc
gst-inspect-1.0 dshowvideosrc
```
gCapture (and the GI backend of rtpCapture/rtspCapture) depends on the Python GI bindings (`import gi`), which are not available by default on Windows. Options:
- MSYS2 Python + PyGObject + GStreamer (then run your scripts from that MSYS2 environment)
- Conda / Miniconda with conda-forge packages (example):
```
conda install -c conda-forge pygobject gstreamer gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-ugly
```
If you do not want MSYS2/Conda on Windows:
- Use `rtspCapture` via OpenCV (FFmpeg backend) for RTSP cameras.
- For raw RTP/UDP (`rtpCapture`), use the FFmpeg fallback and provide `rtp_sdp`.
Used for many cameras.
Unix: sudo apt install libopencv-dev python3-opencv for a system-level installation, or pip3 install opencv-contrib-python for the latest pip version, or follow the instructions from Qengineering (Ubuntu/Jetson) and/or Qengineering (Raspberry Pi) to compile your own. For Ubuntu you can also use my install script: configure_opencv.sh
Windows: pip3 install opencv-contrib-python
If you want rtpCapture to work on Windows/macOS without Python GI/GStreamer, install the bundled FFmpeg executable:
```
pip install imageio-ffmpeg
```
- The RTP fallback uses FFmpeg and requires an SDP file describing the RTP stream (set `rtp_sdp` in the capture config).
- The sender in camera/streamer/rtpserver.py can generate a matching SDP file when it uses the FFmpeg backend (pass `sdp_path=...`).
- RTSP does not need this; `rtspCapture` can usually use OpenCV+FFmpeg directly on an `rtsp://...` URL.
Raspberry Pi: use piCamera2Capture (Picamera2/libcamera).
```
sudo apt install -y python3-picamera2
```
For the Teledyne Blackfly, obtain the Spinnaker SDK and PySpin.
Extract the files; on Windows run the installer exe. In a Unix shell: ./install_spinnaker.sh followed by pip install spinnaker_python...
Please note, this usually requires an older version of Python and will not run with the latest Python.
For recording and storing image data.
On Unix sudo apt install libhdf5-dev libtiff-dev
All systems: pip install tifffile imagecodecs h5py
For better TIFF performance on Windows, building libtiff yourself is advised.
On Windows, the Windows Camera utility shows the available resolutions and frame rates. To investigate other options, use OBS Studio (or any other capture program): add a camera capture device and inspect its video options.
When using OpenCV as the camera interface, python3 list_cv2CameraProperties.py shows all camera options the video system offers. When an option reports -1, it is likely not available for that camera. The program is located at camera/examples/list_cv2CameraProperties.py.
Use one of the existing camera configurations in examples/configs/all_capture_configs.py or create your own. 1) Set the appropriate resolution and frames per second. 2) Figure out the exposure and autoexposure settings.
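As a sketch, a capture config is a plain Python dict. The key names below are illustrative assumptions; check examples/configs/all_capture_configs.py for the authoritative set:

```python
# Illustrative capture config. Key names are assumptions; the authoritative
# configs live in examples/configs/all_capture_configs.py.
config = {
    "camera_res": (1280, 720),  # step 1: a resolution the camera supports...
    "fps": 30,                  # ...at a frame rate it can sustain
    "fourcc": "MJPG",           # compressed readout is often needed for high fps
    "buffersize": 4,            # small driver buffer keeps latency low
    "autoexposure": 1.0,        # step 2: start with auto exposure...
    "exposure": -1,             # ...then switch to a manual value once tuned
}
print(sorted(config))
```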
| method | name | description |
|---|---|---|
| 0 | none | Identity (no rotation) |
| 1 | clockwise | Rotate clockwise 90 degrees |
| 2 | rotate-180 | Rotate 180 degrees |
| 3 | counterclockwise | Rotate counter-clockwise 90 degrees |
| 4 | horizontal-flip | Flip horizontally |
| 5 | vertical-flip | Flip vertically |
| 6 | upper-left-diagonal | Flip across upper left/lower right diagonal |
| 7 | upper-right-diagonal | Flip across upper right/lower left diagonal |
| 8 | automatic | Select flip method based on image-orientation tag |
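The table above maps directly to simple array operations. Below is a hypothetical pure-Python sketch of methods 0 through 7 on an image given as a list of rows (method 8 needs orientation metadata; real code would use cv2 or numpy):

```python
def flip_image(img, method):
    """Apply a GStreamer-style videoflip method (0-7) to a 2-D image
    represented as a list of rows. Illustrative sketch only."""
    if method == 0:                               # none (identity)
        return [row[:] for row in img]
    if method == 1:                               # clockwise 90
        return [list(col) for col in zip(*img[::-1])]
    if method == 2:                               # rotate 180
        return [row[::-1] for row in img[::-1]]
    if method == 3:                               # counter-clockwise 90
        return [list(col) for col in zip(*img)][::-1]
    if method == 4:                               # horizontal flip
        return [row[::-1] for row in img]
    if method == 5:                               # vertical flip
        return [row[:] for row in img[::-1]]
    if method == 6:                               # upper-left diagonal (transpose)
        return [list(col) for col in zip(*img)]
    if method == 7:                               # upper-right diagonal
        return [list(col) for col in zip(*img[::-1])][::-1]
    raise ValueError("method 8 (automatic) needs an orientation tag")

img = [[1, 2],
       [3, 4]]
print(flip_image(img, 1))  # [[3, 1], [4, 2]]
```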
The config keys exposure and autoexposure are interpreted differently depending on the capture backend (OpenCV, GStreamer, Picamera2/libcamera, legacy picamera).
| key | what it does | typical values |
|---|---|---|
| `exposure` | Manual exposure request. | `>0`: time/value (often µs on PiCamera2/libcamera and Jetson nvargus). Some OpenCV webcam drivers use negative "stops-like" values (e.g. -5 ≈ 2^-5 s). `0`/`-1`: often means "auto/leave default", depending on backend. |
| `autoexposure` | Auto-exposure (AE) on/off request. | OpenCV: `-1` keep, `0` manual, `1` auto; many V4L2 cams also accept `0.25` (manual) / `0.75` (auto). Other backends either ignore this key or map it differently. |
Backend details:
- OpenCV (`cv2Capture`):
  - `autoexposure = -1` leaves AE unchanged; `0` requests manual AE mode; `1` requests auto AE mode. Some Linux/V4L2 drivers commonly accept `0.25` (manual) and `0.75` (auto). Windows/macOS backends can differ.
  - `exposure > 0` requests manual exposure and the code attempts to disable AE first. The numeric meaning of `CAP_PROP_EXPOSURE` is driver-specific (some cameras use negative values, others use milliseconds/microseconds-like scales).
  - To inspect what your OpenCV backend/camera supports, run `python3 examples/list_cv2CameraProperties.py`.
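As a sketch of the OpenCV convention above, a capture class might interpret the `autoexposure` key like this (the helper name is hypothetical, not the library's API):

```python
def interpret_autoexposure(value):
    """Return 'keep', 'manual', or 'auto' for an OpenCV-style
    autoexposure config value, per the conventions described above."""
    if value == -1:
        return "keep"            # leave AE unchanged
    if value in (0, 0.25):
        return "manual"          # V4L2 drivers often use 0.25 for manual
    if value in (1, 0.75):
        return "auto"            # ...and 0.75 for auto
    raise ValueError(f"unrecognized autoexposure value: {value}")

print(interpret_autoexposure(0.75))  # auto
```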
- GStreamer (`gCapture`): builds a pipeline, and the meaning of "exposure" depends on the selected source element:
  - Jetson `nvarguscamerasrc`: `exposure` is treated as microseconds (converted internally) and applied via `exposuretimerange` when supported.
  - `libcamerasrc`: exposure/AE are controlled via libcamera controls, but the exact property interface is element-version specific. Use `gst-inspect-1.0 libcamerasrc` and set the element's control-related properties via `gst_source_props_str`, or provide a complete custom source with `gst_source_str`.
  - `v4l2src`: exposure is typically controlled via V4L2 controls (recommended: `v4l2-ctl` from `v4l-utils`). Some GStreamer builds expose an `extra-controls`-style property on `v4l2src`, but it is not consistent across platforms/drivers.
  - Windows `ksvideosrc`/`dshowvideosrc`: many cameras do not expose manual exposure controls through these elements. If the element exposes a property for it, you can set it via `gst_source_props_str`; otherwise use vendor tools / OS camera settings (or configure exposure through a separate API).
  - Use `gst-inspect-1.0 <element>` to discover supported properties and set them via `gst_source_props_str` or a fully custom `gst_source_str`.
- FLIR/Teledyne Blackfly (`blackflyCapture`, PySpin):
  - `exposure` is in microseconds and maps to `ExposureTime` (requires `ExposureAuto` off and `ExposureMode=Timed`).
  - `autoexposure`: `0` off, `1` on (continuous/once depending on camera settings).
- Raspberry Pi Picamera2/libcamera (`piCamera2Capture`):
  - `exposure` is interpreted as microseconds and mapped to libcamera `ExposureTime`.
  - `autoexposure` maps to libcamera `AeEnable` (`>0` enables, `0` disables, `-1` leaves unchanged).
- Raspberry Pi legacy picamera (`piCapture`):
  - Exposure is controlled via `shutter_speed` in microseconds.
  - `exposure <= 0` uses automatic exposure (`exposure_mode='auto'`); `exposure > 0` requests manual exposure (`exposure_mode='off'` plus `shutter_speed`).
Maximum usable exposure time is typically limited by the frame interval. Roughly, max exposure is about 1/fps (e.g. about 33 ms at 30 fps).
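The frame-interval bound can be computed directly; `max_exposure_us` is an illustrative helper, not part of the library:

```python
def max_exposure_us(fps):
    """Rough upper bound on usable exposure time: one frame interval,
    expressed in microseconds."""
    return 1_000_000 / fps

print(round(max_exposure_us(30)))   # 33333 (about 33 ms at 30 fps)
```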
Try cv2_capture_display.py from ./examples.
You need to set the proper config file in the program. You should not need to edit the Python files in the capture or streamer folders.
Naming convention: examples use backend-explicit names like cv2_*, gcapture_*, raspi_*, and blackfly_capture_*.
Capture + display:
- `cv2_capture_display.py` OpenCV capture + display.
- `gcapture_display.py` GStreamer appsink capture (gCapture) + display.
- `raspi_capture_display.py` Raspberry Pi capture + display (prefers Picamera2, falls back to OpenCV).
- `blackfly_capture_display.py` Blackfly capture + display + FPS reporting.
- `rtsp_display.py` RTSP receive + display.
- `rtp_display.py` RTP receive + display.
Streaming (send RTP):
- `cv2_capture_display_send2rtp.py` OpenCV capture + display + send RTP.
- `cv2_capture_display_send2rtp_process.py` like above, but uses processes.
- `gcapture_display_send2rtp.py` gCapture + display + send RTP.
Recording (HDF5 / TIFF / video):
- `cv2_capture_savehdf5_display.py` OpenCV capture + display + store to HDF5.
- `cv2_capture_proc_savehdf5_display.py` OpenCV capture + simple processing + display + HDF5.
- `cv2_capture_savemkv_display.py` OpenCV capture + display + store to MKV.
- `blackfly_capture_savehdf5.py` Blackfly capture + store to HDF5.
- `blackfly_capture_savehdf5_display.py` Blackfly capture + display + store to HDF5.
- `blackfly_capture_proc_savehdf5_display.py` Blackfly capture + processing + display + store.
- `blackfly_capture_savetiff_display.py` Blackfly capture + display + store to TIFF.
Benchmarks / tests:
- `test_display.py` display framerate test (no camera).
- `test_savehdf5.py` disk throughput test for HDF5 (no camera).
- `test_savetiff.py` disk throughput test for TIFF (no camera).
- `test_saveavi.py` disk throughput test for AVI (no camera; 3 color planes per image).
- `test_savemkv.py` disk throughput test for MKV/MP4V (no camera; 3 color planes per image).
- `test_blackfly.py` Blackfly capture FPS test (no display).
Blackfly BFS-U3-04S2M:
- 720x540 524fps
- auto_exposure off
- auto_exposure 0: auto, 1:manual
- exposure in microseconds
Raspi Camera v1 (OV5647):
- Max Resolution 2592x1944
- YU12, (YUYV, RGB3, JPEG, H264, YVYU, VYUY, UYVY, NV12, BGR3, YV12, NV21, BGR4)
- 320x240 90fps
- 640x480 90fps
- 1280x720 60fps
- 1920x1080 6.4fps
- 2592x1944 6.4fps
Raspi Camera v2 (IMX219):
- auto_exposure 0: auto, 1:manual
- exposure in microseconds
- Max Resolution 3280x2464
- YU12, (YUYV, RGB3, JPEG, H264, YVYU, VYUY, UYVY, NV12, BGR4)
- 320x240 90fps
- 640x480 90fps
- 1280x720 60fps
- 1920x1080 4.4fps
- 3280x2464 2.8fps
- MJPG
- 320x240 and 640/480, 120fps
- auto_exposure: could not be figured out in MJPG mode
- auto_exposure = 0 -> static exposure
- exposure is about (exposure value / 10) in ms
- WB_TEMP 6500
- 320x240, 30fps
- YUY2
- autoexposure: unclear; accepts 0.25, 0.75, -1, 0
- WB_TEMP 4600
- 1280x720, 30fps
- 640x480, 30fps
- 960x540, 30fps
python3 -m pip install --upgrade pip build twine
rm -rf dist build *.egg-info
python3 -m build
python3 -m twine check dist/*
python3 -m pip install dist/*.whl
export TWINE_USERNAME=__token__
export TWINE_PASSWORD=<your-pypi-token>
python -m twine upload dist/*
2025 - Code review, gcapture module, GI and FFMPEG fallbacks, and standardized naming
2022 - February added libcamera capture for Raspbian Bullseye
2022 - January added queue as initialization option, updated cv2Capture
2021 - November moved queue into class
2021 - November added rtp server and client
2021 - November added mkvServer, wheel installation, cleanup
2021 - October added aviServer and multicamera example, PySpin trigger fix
2021 - September updated PySpin trigger out polarity setting
2020 - Initial release