This commit introduces the necessary infrastructure for integrating pupil segmentation into the mono camera pipelines. Key changes include:

- Modifying `gstreamer_pipeline.py` to add a tee element to split mono camera streams, creating a dedicated branch for segmentation output with a placeholder `videoconvert` element and `appsink`. This also includes new callbacks and data structures to handle the segmentation frames.
- Adding a new Flask route `/segmentation_feed/<int:stream_id>` to `app.py` to serve the segmentation video stream to the frontend.
- Updating `index.html` to display the new segmentation feed and implementing cache-busting for all video streams.
- Introducing `test_segmentation.py` to verify the functionality of the new segmentation feed.
- Refining existing UI and visual tests by updating locators and fixing indentation errors to accommodate the new segmentation feature and maintain test stability.
Pupil Segmentation Integration
- Objective: Integrate pupil segmentation into the mono camera pipelines.
- Key Changes:
  - Modified `src/unified_web_ui/gstreamer_pipeline.py` to:
    - Add a `tee` element for mono camera streams to split the video feed.
    - Create a new branch for pupil segmentation with a `videoconvert` placeholder and a dedicated `appsink` (`seg_sink_{i}`).
    - Implement an `on_new_seg_sample_factory` callback to handle segmentation data.
    - Add `seg_frame_buffers` and `seg_buffer_locks` for segmentation output.
    - Introduce `get_seg_frame_by_id` to retrieve segmentation frames.
    - Ensure unique naming for `tee` elements (`t_{i}`) in the GStreamer pipeline to prevent linking errors.
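The tee/branch wiring and the frame-buffer plumbing can be sketched as below. This is a minimal illustration, not the actual module: `build_mono_pipeline` is a hypothetical helper, the callback here receives raw bytes directly (the real `on_new_seg_sample_factory` callback would pull a `Gst.Sample` from the `appsink` and map its buffer), and the source/caps elements are assumptions.

```python
import threading

# Hypothetical module-level stores mirroring seg_frame_buffers / seg_buffer_locks.
seg_frame_buffers = {}
seg_buffer_locks = {}


def build_mono_pipeline(i, device):
    """Build a gst-launch style description for mono camera i.

    The tee (t_{i}) splits the stream: one branch feeds the normal preview
    appsink, the other a placeholder videoconvert ahead of the segmentation
    appsink (seg_sink_{i}). Unique tee names prevent linking errors when
    several mono pipelines are built.
    """
    return (
        f"v4l2src device={device} ! videoconvert ! tee name=t_{i} "
        f"t_{i}. ! queue ! videoconvert ! appsink name=sink_{i} "
        f"t_{i}. ! queue ! videoconvert ! appsink name=seg_sink_{i}"
    )


def on_new_seg_sample_factory(stream_id):
    """Return a per-stream callback that stores the latest segmentation frame."""
    seg_frame_buffers.setdefault(stream_id, None)
    seg_buffer_locks.setdefault(stream_id, threading.Lock())

    def on_new_seg_sample(frame_bytes):
        # The real callback would map the Gst buffer; here we store bytes as-is.
        with seg_buffer_locks[stream_id]:
            seg_frame_buffers[stream_id] = frame_bytes

    return on_new_seg_sample


def get_seg_frame_by_id(stream_id):
    """Retrieve the most recent segmentation frame for a stream, or None."""
    lock = seg_buffer_locks.get(stream_id)
    if lock is None:
        return None
    with lock:
        return seg_frame_buffers.get(stream_id)
```

The per-stream lock keeps the GStreamer callback thread and the Flask request thread from tearing a frame mid-write.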
  - Modified `src/unified_web_ui/app.py` to:
    - Add a new Flask route `/segmentation_feed/<int:stream_id>` to serve the segmentation video stream.
    - Add `datetime.utcnow` to the Jinja2 context for cache-busting in templates.
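A common way to serve such a feed is multipart MJPEG; the sketch below shows the frame-wrapping logic, with the Flask route itself left as a comment since the actual `app.py` implementation is not shown here. `mjpeg_part`, `segmentation_stream`, and the `max_frames` parameter are hypothetical names for illustration.

```python
def mjpeg_part(jpeg_bytes):
    """Wrap one JPEG frame as a multipart/x-mixed-replace part."""
    return (b"--frame\r\n"
            b"Content-Type: image/jpeg\r\n\r\n" + jpeg_bytes + b"\r\n")


def segmentation_stream(get_frame, stream_id, max_frames=None):
    """Yield MJPEG parts for stream_id.

    get_frame is a callable like get_seg_frame_by_id; max_frames bounds the
    generator for testing. Real code would sleep or wait on a condition when
    no new frame is ready instead of spinning.
    """
    sent = 0
    while max_frames is None or sent < max_frames:
        frame = get_frame(stream_id)
        if frame is not None:
            yield mjpeg_part(frame)
            sent += 1


# In app.py the route would look roughly like:
# @app.route("/segmentation_feed/<int:stream_id>")
# def segmentation_feed(stream_id):
#     return Response(segmentation_stream(get_seg_frame_by_id, stream_id),
#                     mimetype="multipart/x-mixed-replace; boundary=frame")
```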
  - Modified `src/unified_web_ui/templates/index.html` to:
    - Include a new "Segmentation Feed" section displaying the segmentation video streams, sourcing from `/segmentation_feed/` with cache-busting timestamps.
    - Update existing video feeds (`video_feed`) with cache-busting timestamps for consistency.
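Cache-busting here means appending a timestamp query parameter so the browser refetches the stream instead of reusing a stale cached response. A minimal sketch of the idea, with `bust` as a hypothetical helper (the templates use the `datetime.utcnow` exposed to Jinja2 directly, roughly `<img src="/segmentation_feed/0?t={{ ... }}">`):

```python
from datetime import datetime, timezone


def bust(url, now=None):
    """Append a timestamp query parameter to defeat browser caching.

    now is injectable for testing; by default the current UTC time is used.
    """
    ts = (now or datetime.now(timezone.utc)).timestamp()
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}t={ts:.0f}"
```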
- Testing:
  - Created `tests/test_segmentation.py` to verify the segmentation feed is visible and updating.
  - Updated `src/unified_web_ui/tests/test_ui.py` to refine locators (`#camera .camera-streams-grid .camera-container-individual`) for camera stream elements, resolving conflicts with segmentation feeds.
  - Updated `src/unified_web_ui/tests/test_visual.py` to refine locators (`#camera .camera-mono-row`, `#camera .camera-color-row`, `#camera .camera-mono`) to prevent strict mode violations and ensure accurate targeting of camera layout elements.
  - Fixed indentation errors in `src/unified_web_ui/tests/test_visual.py`.
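The "updating" half of the segmentation-feed check can be reduced to grabbing two frames a moment apart and confirming they differ. A hedged sketch of that comparison logic (`frames_differ` is a hypothetical helper, not the actual test code, which would fetch the frames via the browser or the feed endpoint):

```python
import hashlib


def frames_differ(frame_a, frame_b):
    """Cheap 'feed is updating' check for two frame snapshots.

    Compares digests rather than full payloads; a missing frame on either
    side counts as not updating.
    """
    if frame_a is None or frame_b is None:
        return False
    return hashlib.sha256(frame_a).digest() != hashlib.sha256(frame_b).digest()
```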
- Status: All tests are passing, and the infrastructure for pupil segmentation is in place, awaiting the integration of a DeepStream model.