Compare commits


No commits in common. "1f8da0017c142ebc2c6b1ebc51cebd023a3d51af" and "17d691173bc323b2d9bb11a4fdf09fe54cd12d8e" have entirely different histories.

29 changed files with 1318 additions and 3052 deletions

.gitignore
View File

@@ -1,6 +1,5 @@
# Virtual Environment
.venv/
init/
# Python cache
__pycache__/
@@ -10,4 +9,4 @@ __pycache__/
app_stdout.log
app_stderr.log
screenshots/
.DS_Store

View File

@@ -1,23 +0,0 @@
### Pupil Segmentation Integration
- **Objective:** Integrate pupil segmentation into the mono camera pipelines.
- **Key Changes:**
- Modified `src/unified_web_ui/gstreamer_pipeline.py` to:
- Add a `tee` element for mono camera streams to split the video feed.
- Create a new branch for pupil segmentation with a `videoconvert` placeholder and a dedicated `appsink` (`seg_sink_{i}`).
- Implement `on_new_seg_sample_factory` callback to handle segmentation data.
- Added `seg_frame_buffers` and `seg_buffer_locks` for segmentation output.
- Introduced `get_seg_frame_by_id` to retrieve segmentation frames.
- Ensured unique naming for `tee` elements (`t_{i}`) in the GStreamer pipeline to prevent linking errors.
- Modified `src/unified_web_ui/app.py` to:
- Add a new Flask route `/segmentation_feed/<int:stream_id>` to serve the segmentation video stream.
- Added `datetime.utcnow` to the Jinja2 context for cache-busting in templates.
- Modified `src/unified_web_ui/templates/index.html` to:
- Include a new "Segmentation Feed" section displaying the segmentation video streams, sourcing from `/segmentation_feed/` with cache-busting timestamps.
- Updated existing video feeds (`video_feed`) with cache-busting timestamps for consistency.
- **Testing:**
- Created `tests/test_segmentation.py` to verify the segmentation feed is visible and updating.
- Updated `src/unified_web_ui/tests/test_ui.py` to refine locators (`#camera .camera-streams-grid .camera-container-individual`) for camera stream elements, resolving conflicts with segmentation feeds.
- Updated `src/unified_web_ui/tests/test_visual.py` to refine locators (`#camera .camera-mono-row`, `#camera .camera-color-row`, `#camera .camera-mono`) to prevent strict mode violations and ensure accurate targeting of camera layout elements.
- Fixed indentation errors in `src/unified_web_ui/tests/test_visual.py`.
- **Status:** All tests are passing, and the infrastructure for pupil segmentation is in place, awaiting the integration of a DeepStream model.
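The tee/appsink layout described above can be sketched as a pipeline-description builder. This is a minimal illustration, not the project's actual pipeline string: the element names `t_{i}` and `seg_sink_{i}` come from the changelog, while the source element (`pylonsrc`) and the rest of each branch are assumptions for the example.

```python
def build_mono_branch(i: int) -> str:
    # Each mono camera i gets a uniquely named tee (t_{i}) splitting the feed
    # into the original display appsink and a segmentation-branch appsink
    # (seg_sink_{i}); unique tee names avoid GStreamer linking errors.
    return (
        f"pylonsrc camera={i} ! videoconvert ! tee name=t_{i} "
        f"t_{i}. ! queue ! appsink name=sink_{i} "
        f"t_{i}. ! queue ! videoconvert ! appsink name=seg_sink_{i}"
    )

pipeline_desc = " ".join(build_mono_branch(i) for i in range(2))
```

A description like this would typically be handed to `Gst.parse_launch`, with the per-camera `seg_sink_{i}` appsinks looked up by name to attach the `on_new_seg_sample_factory` callbacks.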

View File

@@ -1,146 +1,65 @@
appdirs==1.4.4
apturl==0.5.2
async-timeout==5.0.1
attrs==21.2.0
bcrypt==3.2.0
beniget==0.4.1
bleak==2.0.0
blinker==1.9.0
Brlapi==0.8.3
Brotli==1.0.9
certifi==2020.6.20
chardet==4.0.0
certifi==2025.11.12
charset-normalizer==3.4.4
click==8.3.1
colorama==0.4.4
colorama==0.4.6
coloredlogs==15.0.1
contourpy==1.3.2
cpuset==1.6
cryptography==3.4.8
cupshelpers==1.0
cycler==0.11.0
dbus-fast==3.1.2
dbus-python==1.2.18
decorator==4.4.2
defer==1.0.6
distro==1.7.0
distro-info==1.1+ubuntu0.2
duplicity==0.8.21
exceptiongroup==1.3.1
fasteners==0.14.1
contourpy==1.3.3
cycler==0.12.1
filelock==3.20.0
Flask==3.1.2
flatbuffers==25.9.23
fonttools==4.29.1
fs==2.4.12
fonttools==4.60.1
fsspec==2025.10.0
future==0.18.2
gast==0.5.2
greenlet==3.2.4
httplib2==0.20.2
humanfriendly==10.0
idna==3.3
importlib-metadata==4.6.4
idna==3.11
iniconfig==2.3.0
itsdangerous==2.2.0
jeepney==0.7.1
Jetson.GPIO==2.1.7
Jinja2==3.1.6
keyring==23.5.0
kiwisolver==1.3.2
language-selector==0.1
launchpadlib==1.10.16
lazr.restfulclient==0.14.4
lazr.uri==1.0.6
lockfile==0.12.2
louis==3.20.0
lxml==4.8.0
lz4==3.1.3+dfsg
macaroonbakery==1.3.1
Mako==1.1.3
kiwisolver==1.4.9
MarkupSafe==3.0.3
matplotlib==3.5.1
meson==1.9.1
matplotlib==3.10.7
ml_dtypes==0.5.4
monotonic==1.6
more-itertools==8.10.0
mpmath==1.3.0
networkx==3.4.2
ninja==1.13.0
numpy==2.2.6
oauthlib==3.2.0
olefile==0.46
onboard==1.4.1
onnx==1.20.0
networkx==3.6
numpy==1.26.4
onnx==1.19.1
onnxruntime==1.23.2
onnxslim==0.1.77
opencv-python==4.12.0.88
packaging==25.0
pandas==1.3.5
paramiko==2.9.3
pexpect==4.8.0
Pillow==9.0.1
pillow==12.0.0
playwright==1.56.0
pluggy==1.6.0
ply==3.11
polars==1.35.2
polars-runtime-32==1.35.2
protobuf==6.33.1
psutil==7.1.3
ptyprocess==0.7.0
pycairo==1.20.1
pycups==2.0.1
pyee==13.0.0
Pygments==2.19.2
PyGObject==3.42.1
PyJWT==2.3.0
pymacaroons==0.13.0
PyNaCl==1.5.0
PyOpenGL==3.1.5
pyparsing==2.4.7
pyobjc-core==12.1
pyobjc-framework-Cocoa==12.1
pyobjc-framework-CoreBluetooth==12.1
pyobjc-framework-libdispatch==12.1
pyparsing==3.2.5
pypylon==4.2.0
pyRFC3339==1.1
pyservicemaker @ file:///opt/nvidia/deepstream/deepstream-7.1/service-maker/python/pyservicemaker-0.0.1-py3-none-linux_aarch64.whl
pytest==9.0.1
pytest-base-url==2.1.0
pytest-playwright==0.7.2
python-apt==2.4.0+ubuntu4
python-dateutil==2.8.1
python-dbusmock==0.27.5
python-debian==0.1.43+ubuntu1.1
python-dateutil==2.9.0.post0
python-slugify==8.0.4
pythran==0.10.0
pytz==2022.1
pyxdg==0.27
PyYAML==6.0.3
requests==2.25.1
scipy==1.8.0
seaborn==0.13.2
SecretStorage==3.3.1
six==1.16.0
SQLAlchemy==2.0.44
requests==2.32.5
scipy==1.16.3
six==1.17.0
sympy==1.14.0
systemd-python==234
text-unidecode==1.3
thop==0.1.1.post2209072238
tomli==2.3.0
torch==2.9.1
torchaudio==2.9.1
torchvision==0.24.1
tqdm==4.67.1
torch==2.2.2
torchvision==0.17.2
typing_extensions==4.15.0
ubuntu-advantage-tools==8001
ubuntu-drivers-common==0.0.0
ufoLib2==0.13.1
ultralytics==8.3.233
ultralytics-thop==2.0.18
unicodedata2==14.0.0
urllib3==1.26.5
urwid==2.1.2
uv==0.9.13
wadllib==1.3.6
websockets==15.0.1
Werkzeug==3.1.4
xdg==5
xkit==0.0.0
zipp==1.0.0
urllib3==2.5.0
Werkzeug==3.1.3

View File

@@ -1,59 +0,0 @@
#!/bin/bash
# Start the Flask application in the background
python src/unified_web_ui/app.py &
APP_PID=$!
# Wait for the application to start
echo "Waiting for application to start..."
sleep 10
# Check if the application is running
if ! ps -p $APP_PID > /dev/null
then
echo "Application failed to start."
exit 1
fi
# Run the curl tests
echo "Running curl tests..."
http_code=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:5000/)
echo "Main page status code: $http_code"
if [ "$http_code" != "200" ]; then
echo "Main page test failed."
fi
http_code=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:5000/get_fps)
echo "get_fps status code: $http_code"
if [ "$http_code" != "200" ]; then
echo "get_fps test failed."
fi
matrix_data='{"matrix":['
for i in {1..5}; do
matrix_data+='['
for j in {1..5}; do
matrix_data+='{"ww":0,"cw":0,"blue":0}'
if [ $j -lt 5 ]; then
matrix_data+=','
fi
done
matrix_data+=']'
if [ $i -lt 5 ]; then
matrix_data+=','
fi
done
matrix_data+=']}'
http_code=$(curl -s -o /dev/null -w "%{http_code}" -X POST -H "Content-Type: application/json" -d "$matrix_data" http://localhost:5000/set_matrix)
echo "set_matrix status code: $http_code"
if [ "$http_code" != "200" ]; then
echo "set_matrix test failed."
fi
# Run the pytest tests
echo "Running pytest tests..."
pytest src/unified_web_ui/tests/
# Kill the Flask application
kill $APP_PID
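The bash loops above assemble the 5x5 `set_matrix` JSON payload by string concatenation. As a sketch, the same payload can be built with `json.dumps`, which avoids manual comma and bracket bookkeeping (the endpoint URL and field names are taken from the script above).

```python
import json

# Build the same 5x5 matrix payload the shell loops concatenate by hand:
# 25 lamps, each with warm-white, cool-white, and blue channels zeroed.
matrix = [[{"ww": 0, "cw": 0, "blue": 0} for _ in range(5)] for _ in range(5)]
payload = json.dumps({"matrix": matrix})
```

The resulting string could then be POSTed to `/set_matrix` with `curl -d "$payload"` or `requests.post`.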

View File

@@ -1,4 +1,4 @@
from flask import Flask, render_template, request, jsonify
from flask import Flask, render_template, request, jsonify, Response
import asyncio
from bleak import BleakScanner, BleakClient
import threading
@@ -7,6 +7,8 @@ import json
import sys
import signal
import os
import cv2
from vision import VisionSystem
# =================================================================================================
# APP CONFIGURATION
@@ -14,15 +16,17 @@ import os
# Set to True to run without a physical BLE device for testing purposes.
# Set to False to connect to the actual lamp matrix.
DEBUG_MODE = False
DEBUG_MODE = True
# --- BLE Device Configuration (Ignored in DEBUG_MODE) ---
DEVICE_NAME = "Pupilometer LED Billboard"
global ble_client
global ble_characteristics
global ble_connection_status
ble_client = None
ble_characteristics = None
ble_event_loop = None # Will be initialized if not in debug mode
ble_connection_status = False
# =================================================================================================
# BLE HELPER FUNCTIONS (Used in LIVE mode)
@@ -71,6 +75,7 @@ SPIRAL_MAP_5x5 = create_spiral_map(5)
async def set_full_matrix_on_ble(colorSeries):
global ble_client
global ble_characteristics
global ble_connection_status
if not ble_client or not ble_client.is_connected:
print("BLE client not connected. Attempting to reconnect...")
@@ -120,6 +125,7 @@ async def set_full_matrix_on_ble(colorSeries):
async def connect_to_ble_device():
global ble_client
global ble_characteristics
global ble_connection_status
print(f"Scanning for device: {DEVICE_NAME}...")
devices = await BleakScanner.discover()
@@ -127,6 +133,7 @@ async def connect_to_ble_device():
if not target_device:
print(f"Device '{DEVICE_NAME}' not found.")
ble_connection_status = False
return False
print(f"Found device: {target_device.name} ({target_device.address})")
@@ -144,12 +151,15 @@
]
ble_characteristics = sorted(characteristics, key=lambda char: char.handle)
print(f"Found {len(ble_characteristics)} characteristics for lamps.")
ble_connection_status = True
return True
else:
print(f"Failed to connect to {target_device.name}")
ble_connection_status = False
return False
except Exception as e:
print(f"An error occurred during BLE connection: {e}")
ble_connection_status = False
return False
# =================================================================================================
# COLOR MIXING
@@ -255,14 +265,58 @@ def set_matrix():
print(f"Getting current lamp matrix info: {lamp_matrix}")
@app.route('/ble_status')
def ble_status():
global ble_connection_status
if DEBUG_MODE:
return jsonify(connected=True)
return jsonify(connected=ble_connection_status)
@app.route('/vision/pupil_data')
def get_pupil_data():
"""
Endpoint to get the latest pupil segmentation data from the vision system.
"""
if vision_system:
data = vision_system.get_pupil_data()
return jsonify(success=True, data=data)
return jsonify(success=False, message="Vision system not initialized"), 500
def gen_frames():
"""Generator function for video streaming."""
while True:
frame = vision_system.get_annotated_frame()
if frame is not None:
ret, buffer = cv2.imencode('.jpg', frame)
frame = buffer.tobytes()
yield (b'--frame\r\n'
b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
@app.route('/video_feed')
def video_feed():
"""Video streaming route."""
return Response(gen_frames(),
mimetype='multipart/x-mixed-replace; boundary=frame')
# =================================================================================================
# APP STARTUP
# =================================================================================================
vision_system = None
def signal_handler(signum, frame):
print("Received shutdown signal, gracefully shutting down...")
global ble_connection_status
# Stop the vision system
if vision_system:
print("Stopping vision system...")
vision_system.stop()
print("Vision system stopped.")
if not DEBUG_MODE and ble_client and ble_client.is_connected:
print("Disconnecting BLE client...")
ble_connection_status = False
disconnect_future = asyncio.run_coroutine_threadsafe(ble_client.disconnect(), ble_event_loop)
try:
# Wait for the disconnect to complete with a timeout
@@ -285,6 +339,16 @@ if __name__ == '__main__':
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
# Initialize and start the Vision System
try:
vision_config = {"camera_id": 0, "model_name": "yolov8n-seg.pt"}
vision_system = VisionSystem(config=vision_config)
vision_system.start()
except Exception as e:
print(f"Failed to initialize or start Vision System: {e}")
vision_system = None
if not DEBUG_MODE:
print("Starting BLE event loop in background thread...")
ble_event_loop = asyncio.new_event_loop()
@@ -295,4 +359,4 @@ if __name__ == '__main__':
future = asyncio.run_coroutine_threadsafe(connect_to_ble_device(), ble_event_loop)
future.result(timeout=10) # Wait up to 10 seconds for connection
app.run(debug=True, use_reloader=False, host="0.0.0.0")
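The `gen_frames` generator above streams MJPEG by yielding multipart chunks with the boundary `frame`. A minimal sketch of that framing, using a fake JPEG payload so neither OpenCV nor a camera is needed:

```python
def mjpeg_chunk(jpeg_bytes: bytes) -> bytes:
    # One part of a multipart/x-mixed-replace stream with boundary "frame",
    # matching the byte layout yielded by gen_frames above.
    return (b"--frame\r\n"
            b"Content-Type: image/jpeg\r\n\r\n" + jpeg_bytes + b"\r\n")

# Placeholder bytes standing in for cv2.imencode('.jpg', frame) output.
chunk = mjpeg_chunk(b"\xff\xd8fakejpeg\xff\xd9")
```

A browser `<img>` pointed at `/video_feed` keeps the connection open and replaces the image each time a new `--frame` part arrives, which is why the route returns a `Response` with `mimetype='multipart/x-mixed-replace; boundary=frame'`.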

View File

@@ -0,0 +1,250 @@
import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib
import pyds
import threading
import numpy as np
try:
from pypylon import pylon
except ImportError:
print("pypylon is not installed. DeepStreamBackend will not be able to get frames from Basler camera.")
pylon = None
class DeepStreamPipeline:
"""
A class to manage the DeepStream pipeline for pupil segmentation.
"""
def __init__(self, config):
self.config = config
Gst.init(None)
self.pipeline = None
self.loop = GLib.MainLoop()
self.pupil_data = None
self.annotated_frame = None
self.camera = None
self.frame_feeder_thread = None
self.is_running = False
print("DeepStreamPipeline initialized.")
def _frame_feeder_thread(self, appsrc):
"""
Thread function to feed frames from the Basler camera to the appsrc element.
"""
while self.is_running:
if not self.camera or not self.camera.IsGrabbing():
print("Camera not ready, stopping frame feeder.")
break
try:
grab_result = self.camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
if grab_result.GrabSucceeded():
frame = grab_result.Array
# Create a Gst.Buffer
buf = Gst.Buffer.new_allocate(None, len(frame), None)
buf.fill(0, frame)
# Push the buffer into the appsrc
appsrc.emit('push-buffer', buf)
else:
print(f"Error grabbing frame: {grab_result.ErrorCode}")
except Exception as e:
print(f"An error occurred in frame feeder thread: {e}")
break
finally:
if 'grab_result' in locals() and grab_result:
grab_result.Release()
def bus_call(self, bus, message, loop):
"""
Callback function for handling messages from the GStreamer bus.
"""
t = message.type
if t == Gst.MessageType.EOS:
sys.stdout.write("End-of-stream\n")
self.is_running = False
loop.quit()
elif t == Gst.MessageType.WARNING:
err, debug = message.parse_warning()
sys.stderr.write("Warning: %s: %s\n" % (err, debug))
elif t == Gst.MessageType.ERROR:
err, debug = message.parse_error()
sys.stderr.write("Error: %s: %s\n" % (err, debug))
self.is_running = False
loop.quit()
return True
def pgie_sink_pad_buffer_probe(self, pad, info, u_data):
"""
Probe callback function for the sink pad of the pgie element.
"""
gst_buffer = info.get_buffer()
if not gst_buffer:
print("Unable to get GstBuffer ")
return Gst.PadProbeReturn.OK
# Retrieve batch metadata from the gst_buffer
# Note that pyds.gst_buffer_get_nvds_batch_meta() expects the address of gst_buffer as input, which is a ptr.
batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
l_frame = batch_meta.frame_meta_list
while l_frame is not None:
try:
# Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
frame_meta = pyds.glist_get_data(l_frame)
except StopIteration:
break
# Get frame as numpy array
self.annotated_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
l_obj = frame_meta.obj_meta_list
while l_obj is not None:
try:
# Casting l_obj.data to pyds.NvDsObjectMeta
obj_meta = pyds.glist_get_data(l_obj)
except StopIteration:
break
# Access and process object metadata
rect_params = obj_meta.rect_params
top = rect_params.top
left = rect_params.left
width = rect_params.width
height = rect_params.height
self.pupil_data = {
"bounding_box": [left, top, left + width, top + height],
"confidence": obj_meta.confidence
}
print(f"Pupil detected: {self.pupil_data}")
try:
l_obj = l_obj.next
except StopIteration:
break
try:
l_frame = l_frame.next
except StopIteration:
break
return Gst.PadProbeReturn.OK
def start(self):
"""
Builds and starts the DeepStream pipeline.
"""
if not pylon:
raise ImportError("pypylon is not installed. Cannot start DeepStreamPipeline with Basler camera.")
# Initialize camera
try:
self.camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
self.camera.Open()
self.camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)
print("DeepStreamPipeline: Basler camera opened and started grabbing.")
except Exception as e:
print(f"DeepStreamPipeline: Error opening Basler camera: {e}")
return
self.pipeline = Gst.Pipeline()
if not self.pipeline:
sys.stderr.write(" Unable to create Pipeline \n")
return
source = Gst.ElementFactory.make("appsrc", "app-source")
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
sink = Gst.ElementFactory.make("appsink", "app-sink")
videoconvert = Gst.ElementFactory.make("nvvideoconvert", "nv-videoconvert")
# Set appsrc properties
# TODO: Set caps based on camera properties
caps = Gst.Caps.from_string("video/x-raw,format=GRAY8,width=1280,height=720,framerate=30/1")
source.set_property("caps", caps)
source.set_property("format", "time")
pgie.set_property('config-file-path', "pgie_yolov10_config.txt")
# Set appsink properties
sink.set_property("emit-signals", True)
sink.set_property("max-buffers", 1)
sink.set_property("drop", True)
self.pipeline.add(source)
self.pipeline.add(videoconvert)
self.pipeline.add(pgie)
self.pipeline.add(sink)
if not source.link(videoconvert):
sys.stderr.write(" Unable to link source to videoconvert \n")
return
if not videoconvert.link(pgie):
sys.stderr.write(" Unable to link videoconvert to pgie \n")
return
if not pgie.link(sink):
sys.stderr.write(" Unable to link pgie to sink \n")
return
pgie_sink_pad = pgie.get_static_pad("sink")
if not pgie_sink_pad:
sys.stderr.write(" Unable to get sink pad of pgie \n")
return
pgie_sink_pad.add_probe(Gst.PadProbeType.BUFFER, self.pgie_sink_pad_buffer_probe, 0)
bus = self.pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", self.bus_call, self.loop)
self.is_running = True
self.frame_feeder_thread = threading.Thread(target=self._frame_feeder_thread, args=(source,))
self.frame_feeder_thread.start()
print("Starting pipeline...")
self.pipeline.set_state(Gst.State.PLAYING)
print("DeepStreamPipeline started.")
def stop(self):
"""
Stops the DeepStream pipeline.
"""
self.is_running = False
if self.frame_feeder_thread:
self.frame_feeder_thread.join()
if self.pipeline:
self.pipeline.set_state(Gst.State.NULL)
print("DeepStreamPipeline stopped.")
if self.camera and self.camera.IsGrabbing():
self.camera.StopGrabbing()
if self.camera and self.camera.IsOpen():
self.camera.Close()
print("DeepStreamPipeline: Basler camera closed.")
def get_data(self):
"""
Retrieves data from the pipeline.
"""
return self.pupil_data
def get_annotated_frame(self):
"""
Retrieves the annotated frame from the pipeline.
"""
return self.annotated_frame
if __name__ == '__main__':
config = {}
pipeline = DeepStreamPipeline(config)
pipeline.start()
# Run the GLib main loop in the main thread
try:
pipeline.loop.run()
except KeyboardInterrupt:
print("Interrupted by user.")
pipeline.stop()
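The `pgie_sink_pad_buffer_probe` above reduces each detected object to a `pupil_data` dict. The metadata-to-dict conversion can be isolated as a pure function (a sketch; the field names match the probe, while the standalone signature is an assumption for illustration):

```python
def to_pupil_data(left, top, width, height, confidence):
    # Mirrors the probe above: NvDsObjectMeta.rect_params (left, top, width,
    # height) become an [x1, y1, x2, y2] bounding box plus the confidence.
    return {
        "bounding_box": [left, top, left + width, top + height],
        "confidence": confidence,
    }
```

Keeping this conversion separate from the probe makes it testable without a running pipeline or `pyds`.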

View File

@@ -0,0 +1,18 @@
[property]
gpu-id=0
net-scale-factor=0.00392156862745098
#onnx-file=yolov10.onnx
model-engine-file=model.engine
#labelfile-path=labels.txt
batch-size=1
process-mode=1
model-color-format=0
network-mode=0
num-detected-classes=1
gie-unique-id=1
output-blob-names=output0
[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
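The `net-scale-factor` in this config is 1/255: nvinfer multiplies each 8-bit pixel by it, so the network receives inputs in [0.0, 1.0]. A quick sanity check:

```python
# net-scale-factor from the nvinfer config above is exactly 1/255.
net_scale_factor = 1 / 255
assert abs(net_scale_factor - 0.00392156862745098) < 1e-15

# An 8-bit white pixel (255) maps to 1.0 at the network input.
scaled_white = 255 * net_scale_factor
```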

View File

@@ -0,0 +1,314 @@
// State for the entire 5x5 matrix, storing {ww, cw, blue} for each lamp
var lampMatrixState = Array(5).fill(null).map(() => Array(5).fill({ww: 0, cw: 0, blue: 0}));
var selectedLamps = [];
// Function to calculate a visual RGB color from the three light values using a proper additive model
function calculateRgb(ww, cw, blue) {
// Define the RGB components for each light source based on slider track colors
const warmWhiteR = 255;
const warmWhiteG = 192;
const warmWhiteB = 128;
const coolWhiteR = 192;
const coolWhiteG = 224;
const coolWhiteB = 255;
const blueR = 0;
const blueG = 0;
const blueB = 255;
// Normalize the slider values (0-255) and apply them to the base colors
var r = (ww / 255) * warmWhiteR + (cw / 255) * coolWhiteR + (blue / 255) * blueR;
var g = (ww / 255) * warmWhiteG + (cw / 255) * coolWhiteG + (blue / 255) * blueG;
var b = (ww / 255) * warmWhiteB + (cw / 255) * coolWhiteB + (blue / 255) * blueB;
// Clamp the values to 255 and convert to integer
r = Math.min(255, Math.round(r));
g = Math.min(255, Math.round(g));
b = Math.min(255, Math.round(b));
// Convert to hex string
var toHex = (c) => ('0' + c.toString(16)).slice(-2);
return '#' + toHex(r) + toHex(g) + toHex(b);
}
function updateLampUI(lamp, colorState) {
var newColor = calculateRgb(colorState.ww, colorState.cw, colorState.blue);
var lampElement = $(`.lamp[data-row="${lamp.row}"][data-col="${lamp.col}"]`);
lampElement.css('background-color', newColor);
if (newColor === '#000000') {
lampElement.removeClass('on');
lampElement.css('box-shadow', `inset 0 0 5px rgba(0,0,0,0.5)`);
} else {
lampElement.addClass('on');
lampElement.css('box-shadow', `0 0 15px ${newColor}, 0 0 25px ${newColor}`);
}
}
// Function to update the UI and send the full matrix state to the backend
function sendFullMatrixUpdate(lampsToUpdate, isRegionUpdate = false) {
var fullMatrixData = lampMatrixState.map(row => row.map(lamp => ({
ww: lamp.ww,
cw: lamp.cw,
blue: lamp.blue
})));
$.ajax({
url: '/set_matrix',
type: 'POST',
contentType: 'application/json',
data: JSON.stringify({ matrix: fullMatrixData }),
success: function(response) {
if (response.success) {
if (isRegionUpdate) {
// On a region button click, update the entire matrix UI
for (var r = 0; r < 5; r++) {
for (var c = 0; c < 5; c++) {
updateLampUI({row: r, col: c}, lampMatrixState[r][c]);
}
}
} else {
// Otherwise, just update the lamps that changed
lampsToUpdate.forEach(function(lamp) {
updateLampUI(lamp, lampMatrixState[lamp.row][lamp.col]);
});
}
}
}
});
}
function updateSliders(ww, cw, blue, prefix = '') {
$(`#${prefix}ww-slider`).val(ww);
$(`#${prefix}cw-slider`).val(cw);
$(`#${prefix}blue-slider`).val(blue);
$(`#${prefix}ww-number`).val(ww);
$(`#${prefix}cw-number`).val(cw);
$(`#${prefix}blue-number`).val(blue);
}
$(document).ready(function() {
var regionMaps = {
'Upper': [
{row: 0, col: 0}, {row: 0, col: 1}, {row: 0, col: 2}, {row: 0, col: 3}, {row: 0, col: 4},
{row: 1, col: 0}, {row: 1, col: 1}, {row: 1, col: 2}, {row: 1, col: 3}, {row: 1, col: 4},
],
'Lower': [
{row: 3, col: 0}, {row: 3, col: 1}, {row: 3, col: 2}, {row: 3, col: 3}, {row: 3, col: 4},
{row: 4, col: 0}, {row: 4, col: 1}, {row: 4, col: 2}, {row: 4, col: 3}, {row: 4, col: 4},
],
'Left': [
{row: 0, col: 0}, {row: 1, col: 0}, {row: 2, col: 0}, {row: 3, col: 0}, {row: 4, col: 0},
{row: 0, col: 1}, {row: 1, col: 1}, {row: 2, col: 1}, {row: 3, col: 1}, {row: 4, col: 1},
],
'Right': [
{row: 0, col: 3}, {row: 1, col: 3}, {row: 2, col: 3}, {row: 3, col: 3}, {row: 4, col: 3},
{row: 0, col: 4}, {row: 1, col: 4}, {row: 2, col: 4}, {row: 3, col: 4}, {row: 4, col: 4},
],
'Inner ring': [
{row: 1, col: 1}, {row: 1, col: 2}, {row: 1, col: 3},
{row: 2, col: 1}, {row: 2, col: 3},
{row: 3, col: 1}, {row: 3, col: 2}, {row: 3, col: 3}
],
'Outer ring': [
{row: 0, col: 0}, {row: 0, col: 1}, {row: 0, col: 2}, {row: 0, col: 3}, {row: 0, col: 4},
{row: 1, col: 0}, {row: 1, col: 4},
{row: 2, col: 0}, {row: 2, col: 4},
{row: 3, col: 0}, {row: 3, col: 4},
{row: 4, col: 0}, {row: 4, col: 1}, {row: 4, col: 2}, {row: 4, col: 3}, {row: 4, col: 4},
],
'All': [
{row: 0, col: 0}, {row: 0, col: 1}, {row: 0, col: 2}, {row: 0, col: 3}, {row: 0, col: 4},
{row: 1, col: 0}, {row: 1, col: 1}, {row: 1, col: 2}, {row: 1, col: 3}, {row: 1, col: 4},
{row: 2, col: 0}, {row: 2, col: 1}, {row: 2, col: 3}, {row: 2, col: 4},
{row: 3, col: 0}, {row: 3, col: 1}, {row: 3, col: 2}, {row: 3, col: 3}, {row: 3, col: 4},
{row: 4, col: 0}, {row: 4, col: 1}, {row: 4, col: 2}, {row: 4, col: 3}, {row: 4, col: 4},
]
};
// Exclude the center lamp from the 'All' region
var allRegionWithoutCenter = regionMaps['All'].filter(lamp => !(lamp.row === 2 && lamp.col === 2));
regionMaps['All'] = allRegionWithoutCenter;
// Initialize lampMatrixState from the initial HTML colors
$('.lamp').each(function() {
var row = $(this).data('row');
var col = $(this).data('col');
var color = $(this).css('background-color');
var rgb = color.match(/\d+/g);
lampMatrixState[row][col] = {
ww: rgb[0], cw: rgb[1], blue: rgb[2]
};
});
$('#region-select').on('change', function() {
var region = $(this).val();
// Toggle the inactive state of the control panel based on selection
if (region) {
$('.control-panel').removeClass('inactive-control');
} else {
$('.control-panel').addClass('inactive-control');
}
var newlySelectedLamps = regionMaps[region];
// Clear selected class from all lamps
$('.lamp').removeClass('selected');
// Get the current slider values to use as the new default
var ww = parseInt($('#ww-slider').val());
var cw = parseInt($('#cw-slider').val());
var blue = parseInt($('#blue-slider').val());
// Reset all lamps except the center to black in our state
var lampsToUpdate = [];
var centerLampState = lampMatrixState[2][2];
lampMatrixState = Array(5).fill(null).map(() => Array(5).fill({ww: 0, cw: 0, blue: 0}));
lampMatrixState[2][2] = centerLampState; // Preserve center lamp state
// Set newly selected lamps to the current slider values
selectedLamps = newlySelectedLamps;
selectedLamps.forEach(function(lamp) {
$(`.lamp[data-row="${lamp.row}"][data-col="${lamp.col}"]`).addClass('selected');
lampMatrixState[lamp.row][lamp.col] = {ww: ww, cw: cw, blue: blue};
});
if (selectedLamps.length > 0) {
// Update sliders to reflect the state of the first selected lamp
var firstLamp = selectedLamps[0];
var firstLampState = lampMatrixState[firstLamp.row][firstLamp.col];
updateSliders(firstLampState.ww, firstLampState.cw, firstLampState.blue, '');
}
// Send the full matrix state
sendFullMatrixUpdate(lampsToUpdate, true);
});
// Event listener for the region sliders and number inputs
$('.region-slider-group input').on('input', function() {
if (selectedLamps.length === 0) return;
var target = $(this);
var originalVal = target.val();
var value = parseInt(originalVal, 10);
// Clamp value
if (isNaN(value) || value < 0) { value = 0; }
if (value > 255) { value = 255; }
if (target.is('[type="number"]') && value.toString() !== originalVal) {
target.val(value);
}
var id = target.attr('id');
if (target.is('[type="range"]')) {
$(`#${id.replace('-slider', '-number')}`).val(value);
} else if (target.is('[type="number"]')) {
$(`#${id.replace('-number', '-slider')}`).val(value);
}
var ww = parseInt($('#ww-slider').val());
var cw = parseInt($('#cw-slider').val());
var blue = parseInt($('#blue-slider').val());
var lampsToUpdate = [];
selectedLamps.forEach(function(lamp) {
lampMatrixState[lamp.row][lamp.col] = {ww: ww, cw: cw, blue: blue};
lampsToUpdate.push(lamp);
});
sendFullMatrixUpdate(lampsToUpdate);
});
// Event listener for the center lamp sliders and number inputs
$('.center-slider-group input').on('input', function() {
var target = $(this);
var originalVal = target.val();
var value = parseInt(originalVal, 10);
// Clamp value
if (isNaN(value) || value < 0) { value = 0; }
if (value > 255) { value = 255; }
if (target.is('[type="number"]') && value.toString() !== originalVal) {
target.val(value);
}
var id = target.attr('id');
if (target.is('[type="range"]')) {
$(`#${id.replace('-slider', '-number')}`).val(value);
} else if (target.is('[type="number"]')) {
$(`#${id.replace('-number', '-slider')}`).val(value);
}
var ww = parseInt($('#center-ww-slider').val());
var cw = parseInt($('#center-cw-slider').val());
var blue = parseInt($('#center-blue-slider').val());
var centerLamp = {row: 2, col: 2};
lampMatrixState[centerLamp.row][centerLamp.col] = {ww: ww, cw: cw, blue: blue};
sendFullMatrixUpdate([centerLamp]);
});
// Initial check to set the inactive state
if (!$('#region-select').val()) {
$('.control-panel').addClass('inactive-control');
}
function checkBleStatus() {
$.ajax({
url: '/ble_status',
type: 'GET',
success: function(response) {
var statusElement = $('#ble-status');
if (response.connected) {
statusElement.text('BLE Connected');
statusElement.css('color', 'lightgreen');
} else {
statusElement.text('BLE Disconnected');
statusElement.css('color', 'red');
}
},
error: function() {
var statusElement = $('#ble-status');
statusElement.text('Reconnecting...');
statusElement.css('color', 'orange');
}
});
}
setInterval(checkBleStatus, 2000);
checkBleStatus(); // Initial check
function getPupilData() {
$.ajax({
url: '/vision/pupil_data',
type: 'GET',
success: function(response) {
if (response.success && response.data) {
var pupilData = response.data;
var pupilPosition = pupilData.pupil_position;
var pupilDiameter = pupilData.pupil_diameter;
// Update text fields
$('#pupil-center').text(`(${pupilPosition[0]}, ${pupilPosition[1]})`);
$('#pupil-area').text(pupilDiameter);
// Draw on canvas
var canvas = $('#pupil-canvas')[0];
var ctx = canvas.getContext('2d');
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.beginPath();
ctx.arc(pupilPosition[0] / 2, pupilPosition[1] / 2, pupilDiameter / 2, 0, 2 * Math.PI);
ctx.fillStyle = 'red';
ctx.fill();
}
}
});
}
setInterval(getPupilData, 500); // Fetch data every 500ms
});
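The `calculateRgb` function above mixes the three light sources additively: each source contributes its base RGB color scaled by its 0-255 slider value, and channels are clamped at 255. A Python port, useful for checking expected hex values off the page (base colors copied from the JS constants):

```python
def calculate_rgb(ww, cw, blue):
    # Python port of calculateRgb above: base colors per light source, scaled
    # by the normalized slider value, summed per channel, clamped to 255.
    bases = {"ww": (255, 192, 128), "cw": (192, 224, 255), "blue": (0, 0, 255)}
    levels = {"ww": ww, "cw": cw, "blue": blue}
    channels = []
    for ch in range(3):
        total = sum(levels[k] / 255 * bases[k][ch] for k in bases)
        channels.append(min(255, round(total)))
    return "#" + "".join(f"{c:02x}" for c in channels)
```

For example, full warm white alone reproduces its base color `#ffc080`, and all sliders at zero give `#000000`, the value `updateLampUI` treats as "lamp off".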

View File

@@ -1,151 +1,191 @@
:root {
--matrix-width: calc(5 * 70px + 4 * 20px);
}
body {
font-family: Arial, sans-serif;
display: flex;
flex-direction: column;
align-items: center;
margin: 0;
background-color: #f0f0f0;
min-height: 100vh;
}
.container {
display: flex;
flex-direction: column;
align-items: center;
position: relative;
}
.main-content {
display: flex;
flex-direction: row;
align-items: flex-start;
gap: 40px;
}
.matrix-grid {
display: grid;
grid-template-columns: repeat(5, 70px);
grid-template-rows: repeat(5, 70px);
gap: 20px;
padding: 20px;
background-color: #333;
border-radius: 10px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.2);
margin-bottom: 20px;
}
.lamp {
width: 70px;
height: 70px;
border-radius: 10%;
background-color: #000;
transition: box-shadow 0.2s, transform 0.1s;
cursor: pointer;
border: 2px solid transparent;
}
.lamp.on {
box-shadow: 0 0 15px currentColor, 0 0 25px currentColor;
}
.lamp.selected {
border: 2px solid #fff;
transform: scale(1.1);
}
h1 {
color: #333;
margin-bottom: 20px;
}
.region-control {
margin-bottom: 20px;
text-align: center;
}
.region-control select {
padding: 10px 15px;
font-size: 14px;
cursor: pointer;
border: 1px solid #ccc;
border-radius: 5px;
background-color: #fff;
width: 200px;
}
.control-panel, .center-lamp-control {
background-color: #444;
padding: 20px;
border-radius: 10px;
width: var(--matrix-width); /* Fixed width for consistency */
max-width: var(--matrix-width);
margin-bottom: 20px;
}
.control-panel.inactive-control {
background-color: #333;
filter: saturate(0.2);
}
.control-panel.inactive-control .slider-row {
pointer-events: none;
}
.control-panel h2, .center-lamp-control h2 {
color: #fff;
font-size: 16px;
margin-bottom: 10px;
text-align: center;
}
.slider-group {
width: 100%;
display: flex;
flex-direction: column;
gap: 5px;
}
.slider-row {
display: grid;
grid-template-columns: 150px 1fr 50px;
gap: 10px;
align-items: center;
}
.slider-group input[type="range"] {
-webkit-appearance: none;
height: 8px;
border-radius: 5px;
outline: none;
cursor: pointer;
}
.slider-group input[type="number"] {
width: 100%;
font-size: 14px;
text-align: center;
border: none;
border-radius: 5px;
padding: 5px;
}
.slider-group input[type="range"]::-webkit-slider-thumb {
-webkit-appearance: none;
height: 20px;
width: 20px;
border-radius: 50%;
background: #fff;
cursor: pointer;
box-shadow: 0 0 5px rgba(0,0,0,0.5);
margin-top: 2px;
}
.slider-group input[type="range"]::-webkit-slider-runnable-track {
height: 24px;
border-radius: 12px;
}
input.white-3000k::-webkit-slider-runnable-track { background: linear-gradient(to right, #000, #ffc080); }
input.white-6500k::-webkit-slider-runnable-track { background: linear-gradient(to right, #000, #c0e0ff); }
input.blue::-webkit-slider-runnable-track { background: linear-gradient(to right, #000, #00f); }
.slider-label {
color: #fff;
font-size: 14px;
text-align: left;
white-space: nowrap;
width: 120px;
}
.inactive-control .slider-label {
color: #888;
}
@media (max-width: 1000px) {
.main-content {
flex-direction: column;
align-items: center;
}
}
:root {
--matrix-width: calc(5 * 70px + 4 * 20px);
}
body {
font-family: Arial, sans-serif;
display: flex;
flex-direction: column;
align-items: center;
margin: 0;
background-color: #f0f0f0;
min-height: 100vh;
}
.container {
display: flex;
flex-direction: column;
align-items: center;
position: relative;
}
.main-content {
display: flex;
flex-direction: row;
align-items: flex-start;
gap: 40px;
}
#vision-system {
display: flex;
flex-direction: column;
align-items: center;
}
#pupil-detection {
margin-bottom: 20px;
text-align: center;
}
#pupil-canvas {
border: 1px solid #ccc;
background-color: #f0f0f0;
}
#pupil-data p {
margin: 5px 0;
}
#video-feed {
text-align: center;
}
#video-feed img {
border: 1px solid #ccc;
}
.matrix-grid {
display: grid;
grid-template-columns: repeat(5, 70px);
grid-template-rows: repeat(5, 70px);
gap: 20px;
padding: 20px;
background-color: #333;
border-radius: 10px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.2);
margin-bottom: 20px;
}
.lamp {
width: 70px;
height: 70px;
border-radius: 10%;
background-color: #000;
transition: box-shadow 0.2s, transform 0.1s;
cursor: pointer;
border: 2px solid transparent;
}
.lamp.on {
box-shadow: 0 0 15px currentColor, 0 0 25px currentColor;
}
.lamp.selected {
border: 2px solid #fff;
transform: scale(1.1);
}
h1 {
color: #333;
margin-bottom: 20px;
}
.region-control {
margin-bottom: 20px;
text-align: center;
}
.region-control select {
padding: 10px 15px;
font-size: 14px;
cursor: pointer;
border: 1px solid #ccc;
border-radius: 5px;
background-color: #fff;
width: 200px;
}
.control-panel, .center-lamp-control {
background-color: #444;
padding: 20px;
border-radius: 10px;
width: var(--matrix-width); /* Fixed width for consistency */
max-width: var(--matrix-width);
margin-bottom: 20px;
}
.control-panel.inactive-control {
background-color: #333;
filter: saturate(0.2);
}
.control-panel.inactive-control .slider-row {
pointer-events: none;
}
.control-panel h2, .center-lamp-control h2 {
color: #fff;
font-size: 16px;
margin-bottom: 10px;
text-align: center;
}
.slider-group {
width: 100%;
display: flex;
flex-direction: column;
gap: 5px;
}
.slider-row {
display: grid;
grid-template-columns: 150px 1fr 50px;
gap: 10px;
align-items: center;
}
.slider-group input[type="range"] {
-webkit-appearance: none;
height: 8px;
border-radius: 5px;
outline: none;
cursor: pointer;
}
.slider-group input[type="number"] {
width: 100%;
font-size: 14px;
text-align: center;
border: none;
border-radius: 5px;
padding: 5px;
}
.slider-group input[type="range"]::-webkit-slider-thumb {
-webkit-appearance: none;
height: 20px;
width: 20px;
border-radius: 50%;
background: #fff;
cursor: pointer;
box-shadow: 0 0 5px rgba(0,0,0,0.5);
margin-top: 2px;
}
.slider-group input[type="range"]::-webkit-slider-runnable-track {
height: 24px;
border-radius: 12px;
}
input.white-3000k::-webkit-slider-runnable-track { background: linear-gradient(to right, #000, #ffc080); }
input.white-6500k::-webkit-slider-runnable-track { background: linear-gradient(to right, #000, #c0e0ff); }
input.blue::-webkit-slider-runnable-track { background: linear-gradient(to right, #000, #00f); }
.slider-label {
color: #fff;
font-size: 14px;
text-align: left;
white-space: nowrap;
width: 120px;
}
.inactive-control .slider-label {
color: #888;
}
@media (max-width: 1000px) {
.main-content {
flex-direction: column;
align-items: center;
}
}
#ble-status {
position: fixed;
top: 10px;
right: 10px;
font-size: 16px;
color: #fff;
background-color: #333;
padding: 5px 10px;
border-radius: 5px;
}

View File

@@ -1,342 +1,96 @@
<!DOCTYPE html>
<html>
<head>
<title>Lamp Matrix Control</title>
<link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script>
// State for the entire 5x5 matrix, storing {ww, cw, blue} for each lamp
var lampMatrixState = Array(5).fill(null).map(() => Array(5).fill({ww: 0, cw: 0, blue: 0}));
var selectedLamps = [];
// Function to calculate a visual RGB color from the three light values using a proper additive model
function calculateRgb(ww, cw, blue) {
// Define the RGB components for each light source based on slider track colors
const warmWhiteR = 255;
const warmWhiteG = 192;
const warmWhiteB = 128;
const coolWhiteR = 192;
const coolWhiteG = 224;
const coolWhiteB = 255;
const blueR = 0;
const blueG = 0;
const blueB = 255;
// Normalize the slider values (0-255) and apply them to the base colors
var r = (ww / 255) * warmWhiteR + (cw / 255) * coolWhiteR + (blue / 255) * blueR;
var g = (ww / 255) * warmWhiteG + (cw / 255) * coolWhiteG + (blue / 255) * blueG;
var b = (ww / 255) * warmWhiteB + (cw / 255) * coolWhiteB + (blue / 255) * blueB;
// Clamp the values to 255 and convert to integer
r = Math.min(255, Math.round(r));
g = Math.min(255, Math.round(g));
b = Math.min(255, Math.round(b));
// Convert to hex string
var toHex = (c) => ('0' + c.toString(16)).slice(-2);
return '#' + toHex(r) + toHex(g) + toHex(b);
}
function updateLampUI(lamp, colorState) {
var newColor = calculateRgb(colorState.ww, colorState.cw, colorState.blue);
var lampElement = $(`.lamp[data-row="${lamp.row}"][data-col="${lamp.col}"]`);
lampElement.css('background-color', newColor);
if (newColor === '#000000') {
lampElement.removeClass('on');
lampElement.css('box-shadow', `inset 0 0 5px rgba(0,0,0,0.5)`);
} else {
lampElement.addClass('on');
lampElement.css('box-shadow', `0 0 15px ${newColor}, 0 0 25px ${newColor}`);
}
}
// Function to update the UI and send the full matrix state to the backend
function sendFullMatrixUpdate(lampsToUpdate, isRegionUpdate = false) {
var fullMatrixData = lampMatrixState.map(row => row.map(lamp => ({
ww: lamp.ww,
cw: lamp.cw,
blue: lamp.blue
})));
$.ajax({
url: '/set_matrix',
type: 'POST',
contentType: 'application/json',
data: JSON.stringify({ matrix: fullMatrixData }),
success: function(response) {
if (response.success) {
if (isRegionUpdate) {
// On a region button click, update the entire matrix UI
for (var r = 0; r < 5; r++) {
for (var c = 0; c < 5; c++) {
updateLampUI({row: r, col: c}, lampMatrixState[r][c]);
}
}
} else {
// Otherwise, just update the lamps that changed
lampsToUpdate.forEach(function(lamp) {
updateLampUI(lamp, lampMatrixState[lamp.row][lamp.col]);
});
}
}
}
});
}
function updateSliders(ww, cw, blue, prefix = '') {
$(`#${prefix}ww-slider`).val(ww);
$(`#${prefix}cw-slider`).val(cw);
$(`#${prefix}blue-slider`).val(blue);
$(`#${prefix}ww-number`).val(ww);
$(`#${prefix}cw-number`).val(cw);
$(`#${prefix}blue-number`).val(blue);
}
$(document).ready(function() {
var regionMaps = {
'Upper': [
{row: 0, col: 0}, {row: 0, col: 1}, {row: 0, col: 2}, {row: 0, col: 3}, {row: 0, col: 4},
{row: 1, col: 0}, {row: 1, col: 1}, {row: 1, col: 2}, {row: 1, col: 3}, {row: 1, col: 4},
],
'Lower': [
{row: 3, col: 0}, {row: 3, col: 1}, {row: 3, col: 2}, {row: 3, col: 3}, {row: 3, col: 4},
{row: 4, col: 0}, {row: 4, col: 1}, {row: 4, col: 2}, {row: 4, col: 3}, {row: 4, col: 4},
],
'Left': [
{row: 0, col: 0}, {row: 1, col: 0}, {row: 2, col: 0}, {row: 3, col: 0}, {row: 4, col: 0},
{row: 0, col: 1}, {row: 1, col: 1}, {row: 2, col: 1}, {row: 3, col: 1}, {row: 4, col: 1},
],
'Right': [
{row: 0, col: 3}, {row: 1, col: 3}, {row: 2, col: 3}, {row: 3, col: 3}, {row: 4, col: 3},
{row: 0, col: 4}, {row: 1, col: 4}, {row: 2, col: 4}, {row: 3, col: 4}, {row: 4, col: 4},
],
'Inner ring': [
{row: 1, col: 1}, {row: 1, col: 2}, {row: 1, col: 3},
{row: 2, col: 1}, {row: 2, col: 3},
{row: 3, col: 1}, {row: 3, col: 2}, {row: 3, col: 3}
],
'Outer ring': [
{row: 0, col: 0}, {row: 0, col: 1}, {row: 0, col: 2}, {row: 0, col: 3}, {row: 0, col: 4},
{row: 1, col: 0}, {row: 1, col: 4},
{row: 2, col: 0}, {row: 2, col: 4},
{row: 3, col: 0}, {row: 3, col: 4},
{row: 4, col: 0}, {row: 4, col: 1}, {row: 4, col: 2}, {row: 4, col: 3}, {row: 4, col: 4},
],
'All': [
{row: 0, col: 0}, {row: 0, col: 1}, {row: 0, col: 2}, {row: 0, col: 3}, {row: 0, col: 4},
{row: 1, col: 0}, {row: 1, col: 1}, {row: 1, col: 2}, {row: 1, col: 3}, {row: 1, col: 4},
{row: 2, col: 0}, {row: 2, col: 1}, {row: 2, col: 3}, {row: 2, col: 4},
{row: 3, col: 0}, {row: 3, col: 1}, {row: 3, col: 2}, {row: 3, col: 3}, {row: 3, col: 4},
{row: 4, col: 0}, {row: 4, col: 1}, {row: 4, col: 2}, {row: 4, col: 3}, {row: 4, col: 4},
]
};
// Exclude the center lamp from the 'All' region
var allRegionWithoutCenter = regionMaps['All'].filter(lamp => !(lamp.row === 2 && lamp.col === 2));
regionMaps['All'] = allRegionWithoutCenter;
// Initialize lampMatrixState from the initial HTML colors
$('.lamp').each(function() {
var row = $(this).data('row');
var col = $(this).data('col');
var color = $(this).css('background-color');
var rgb = color.match(/\d+/g);
lampMatrixState[row][col] = {
ww: parseInt(rgb[0], 10), cw: parseInt(rgb[1], 10), blue: parseInt(rgb[2], 10)
};
});
$('#region-select').on('change', function() {
var region = $(this).val();
// Toggle the inactive state of the control panel based on selection
if (region) {
$('.control-panel').removeClass('inactive-control');
} else {
$('.control-panel').addClass('inactive-control');
}
var newlySelectedLamps = regionMaps[region];
// Clear selected class from all lamps
$('.lamp').removeClass('selected');
// Get the current slider values to use as the new default
var ww = parseInt($('#ww-slider').val());
var cw = parseInt($('#cw-slider').val());
var blue = parseInt($('#blue-slider').val());
// Reset all lamps except the center to black in our state
var lampsToUpdate = [];
var centerLampState = lampMatrixState[2][2];
lampMatrixState = Array(5).fill(null).map(() => Array(5).fill({ww: 0, cw: 0, blue: 0}));
lampMatrixState[2][2] = centerLampState; // Preserve center lamp state
// Set newly selected lamps to the current slider values
selectedLamps = newlySelectedLamps;
selectedLamps.forEach(function(lamp) {
$(`.lamp[data-row="${lamp.row}"][data-col="${lamp.col}"]`).addClass('selected');
lampMatrixState[lamp.row][lamp.col] = {ww: ww, cw: cw, blue: blue};
});
if (selectedLamps.length > 0) {
// Update sliders to reflect the state of the first selected lamp
var firstLamp = selectedLamps[0];
var firstLampState = lampMatrixState[firstLamp.row][firstLamp.col];
updateSliders(firstLampState.ww, firstLampState.cw, firstLampState.blue, '');
}
// Send the full matrix state
sendFullMatrixUpdate(lampsToUpdate, true);
});
// Event listener for the region sliders and number inputs
$('.region-slider-group input').on('input', function() {
if (selectedLamps.length === 0) return;
var target = $(this);
var originalVal = target.val();
var value = parseInt(originalVal, 10);
// Clamp value
if (isNaN(value) || value < 0) { value = 0; }
if (value > 255) { value = 255; }
if (target.is('[type="number"]') && value.toString() !== originalVal) {
target.val(value);
}
var id = target.attr('id');
if (target.is('[type="range"]')) {
$(`#${id.replace('-slider', '-number')}`).val(value);
} else if (target.is('[type="number"]')) {
$(`#${id.replace('-number', '-slider')}`).val(value);
}
var ww = parseInt($('#ww-slider').val());
var cw = parseInt($('#cw-slider').val());
var blue = parseInt($('#blue-slider').val());
var lampsToUpdate = [];
selectedLamps.forEach(function(lamp) {
lampMatrixState[lamp.row][lamp.col] = {ww: ww, cw: cw, blue: blue};
lampsToUpdate.push(lamp);
});
sendFullMatrixUpdate(lampsToUpdate);
});
// Event listener for the center lamp sliders and number inputs
$('.center-slider-group input').on('input', function() {
var target = $(this);
var originalVal = target.val();
var value = parseInt(originalVal, 10);
// Clamp value
if (isNaN(value) || value < 0) { value = 0; }
if (value > 255) { value = 255; }
if (target.is('[type="number"]') && value.toString() !== originalVal) {
target.val(value);
}
var id = target.attr('id');
if (target.is('[type="range"]')) {
$(`#${id.replace('-slider', '-number')}`).val(value);
} else if (target.is('[type="number"]')) {
$(`#${id.replace('-number', '-slider')}`).val(value);
}
var ww = parseInt($('#center-ww-slider').val());
var cw = parseInt($('#center-cw-slider').val());
var blue = parseInt($('#center-blue-slider').val());
var centerLamp = {row: 2, col: 2};
lampMatrixState[centerLamp.row][centerLamp.col] = {ww: ww, cw: cw, blue: blue};
sendFullMatrixUpdate([centerLamp]);
});
// Initial check to set the inactive state
if (!$('#region-select').val()) {
$('.control-panel').addClass('inactive-control');
}
});
</script>
</head>
<body>
<div class="container">
<h1>Lamp Matrix Control</h1>
<div class="region-control">
<label for="region-select">Select Region:</label>
<select id="region-select">
<option value="" disabled selected>-- Select a region --</option>
<option value="Upper">Upper</option>
<option value="Lower">Lower</option>
<option value="Left">Left</option>
<option value="Right">Right</option>
<option value="Inner ring">Inner ring</option>
<option value="Outer ring">Outer ring</option>
<option value="All">All</option>
</select>
</div>
<div class="main-content">
<div class="matrix-grid">
{% for row in range(5) %}
{% for col in range(5) %}
<div class="lamp" data-row="{{ row }}" data-col="{{ col }}" style="background-color: {{ matrix[row][col] }}; box-shadow: {{ '0 0 15px ' + matrix[row][col] + ', 0 0 25px ' + matrix[row][col] if matrix[row][col] != '#000000' else 'inset 0 0 5px rgba(0,0,0,0.5)' }}"></div>
{% endfor %}
{% endfor %}
</div>
<div class="slider-controls">
<div class="center-lamp-control">
<h2>Center Lamp</h2>
<div class="slider-group center-slider-group">
<div class="slider-row">
<span class="slider-label">Warm White (3000K)</span>
<input type="range" id="center-ww-slider" min="0" max="255" value="0" class="white-3000k">
<input type="number" id="center-ww-number" min="0" max="255" value="0">
</div>
<div class="slider-row">
<span class="slider-label">Cool White (6500K)</span>
<input type="range" id="center-cw-slider" min="0" max="255" value="0" class="white-6500k">
<input type="number" id="center-cw-number" min="0" max="255" value="0">
</div>
<div class="slider-row">
<span class="slider-label">Blue</span>
<input type="range" id="center-blue-slider" min="0" max="255" value="0" class="blue">
<input type="number" id="center-blue-number" min="0" max="255" value="0">
</div>
</div>
</div>
<div class="control-panel">
<h2>Selected Region</h2>
<div class="slider-group region-slider-group">
<div class="slider-row">
<span class="slider-label">Warm White (3000K)</span>
<input type="range" id="ww-slider" min="0" max="255" value="0" class="white-3000k">
<input type="number" id="ww-number" min="0" max="255" value="0">
</div>
<div class="slider-row">
<span class="slider-label">Cool White (6500K)</span>
<input type="range" id="cw-slider" min="0" max="255" value="0" class="white-6500k">
<input type="number" id="cw-number" min="0" max="255" value="0">
</div>
<div class="slider-row">
<span class="slider-label">Blue</span>
<input type="range" id="blue-slider" min="0" max="255" value="0" class="blue">
<input type="number" id="blue-number" min="0" max="255" value="0">
</div>
</div>
</div>
</div>
</div>
</div>
</body>
</html>
<!DOCTYPE html>
<html>
<head>
<title>Lamp Matrix Control</title>
<link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script src="{{ url_for('static', filename='script.js') }}"></script>
</head>
<body>
<div class="container">
<div id="ble-status"></div>
<h1>Lamp Matrix Control</h1>
<div class="region-control">
<label for="region-select">Select Region:</label>
<select id="region-select">
<option value="" disabled selected>-- Select a region --</option>
<option value="Upper">Upper</option>
<option value="Lower">Lower</option>
<option value="Left">Left</option>
<option value="Right">Right</option>
<option value="Inner ring">Inner ring</option>
<option value="Outer ring">Outer ring</option>
<option value="All">All</option>
</select>
</div>
<div class="main-content">
<div class="matrix-grid">
{% for row in range(5) %}
{% for col in range(5) %}
<div class="lamp" data-row="{{ row }}" data-col="{{ col }}" style="background-color: {{ matrix[row][col] }}; box-shadow: {{ '0 0 15px ' + matrix[row][col] + ', 0 0 25px ' + matrix[row][col] if matrix[row][col] != '#000000' else 'inset 0 0 5px rgba(0,0,0,0.5)' }}"></div>
{% endfor %}
{% endfor %}
</div>
<div class="slider-controls">
<div class="center-lamp-control">
<h2>Center Lamp</h2>
<div class="slider-group center-slider-group">
<div class="slider-row">
<span class="slider-label">Warm White (3000K)</span>
<input type="range" id="center-ww-slider" min="0" max="255" value="0" class="white-3000k">
<input type="number" id="center-ww-number" min="0" max="255" value="0">
</div>
<div class="slider-row">
<span class="slider-label">Cool White (6500K)</span>
<input type="range" id="center-cw-slider" min="0" max="255" value="0" class="white-6500k">
<input type="number" id="center-cw-number" min="0" max="255" value="0">
</div>
<div class="slider-row">
<span class="slider-label">Blue</span>
<input type="range" id="center-blue-slider" min="0" max="255" value="0" class="blue">
<input type="number" id="center-blue-number" min="0" max="255" value="0">
</div>
</div>
</div>
<div class="control-panel">
<h2>Selected Region</h2>
<div class="slider-group region-slider-group">
<div class="slider-row">
<span class="slider-label">Warm White (3000K)</span>
<input type="range" id="ww-slider" min="0" max="255" value="0" class="white-3000k">
<input type="number" id="ww-number" min="0" max="255" value="0">
</div>
<div class="slider-row">
<span class="slider-label">Cool White (6500K)</span>
<input type="range" id="cw-slider" min="0" max="255" value="0" class="white-6500k">
<input type="number" id="cw-number" min="0" max="255" value="0">
</div>
<div class="slider-row">
<span class="slider-label">Blue</span>
<input type="range" id="blue-slider" min="0" max="255" value="0" class="blue">
<input type="number" id="blue-number" min="0" max="255" value="0">
</div>
</div>
</div>
</div>
<div id="vision-system">
<div id="pupil-detection">
<h2>Pupil Detection</h2>
<canvas id="pupil-canvas" width="300" height="300"></canvas>
<div id="pupil-data">
<p>Center: <span id="pupil-center">(x, y)</span></p>
<p>Area: <span id="pupil-area">0</span></p>
</div>
</div>
<div id="video-feed">
<h2>Camera Feed</h2>
<img src="{{ url_for('video_feed') }}" width="640" height="480">
</div>
</div>
</div>
</div>
</body>
</html>
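The additive color model implemented by `calculateRgb` in the template above can be checked in isolation. The following is a standalone Python re-implementation for illustration only (the helper name `mix_to_hex` is hypothetical and not part of the repository); it mirrors the JavaScript arithmetic, including the clamp-and-round step:

```python
def mix_to_hex(ww, cw, blue):
    """Mix warm-white, cool-white and blue channel values (0-255) into a hex color."""
    # Per-channel RGB contributions matching the slider track colors.
    warm = (255, 192, 128)   # 3000K warm white
    cool = (192, 224, 255)   # 6500K cool white
    blu = (0, 0, 255)
    rgb = []
    for i in range(3):
        v = (ww / 255) * warm[i] + (cw / 255) * cool[i] + (blue / 255) * blu[i]
        rgb.append(min(255, round(v)))  # clamp to 255, as in the JS version
    return '#' + ''.join(f'{c:02x}' for c in rgb)
```

With all channels at zero this yields `#000000`, which is what `updateLampUI` relies on to decide whether a lamp is "off".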

View File

@@ -0,0 +1,358 @@
import sys
import platform
import os
import numpy as np
import cv2
import logging
from ultralytics import YOLO # New import
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
class VisionSystem:
"""
The main class for the vision system, responsible for pupil segmentation.
It uses a platform-specific backend for the actual implementation.
"""
def __init__(self, config):
self.config = config.copy()
self.config.setdefault('model_name', 'yolov8n-seg.pt') # Set default model
# Ensure model_path in config points to the selected model_name
self.config['model_path'] = self.config['model_name']
self._backend = self._initialize_backend()
def _initialize_backend(self):
"""
Initializes the appropriate backend based on the environment and OS.
"""
# If in a test environment, use the MockBackend
if os.environ.get("PUPILOMETER_ENV") == "test":
logging.info("PUPILOMETER_ENV is set to 'test'. Initializing Mock backend.")
return MockBackend(self.config)
os_name = platform.system()
if os_name == "Linux" or os_name == "Windows":
logging.info(f"Operating system is {os_name}. Attempting to initialize DeepStream backend.")
try:
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
Gst.init(None)
logging.info("DeepStream (GStreamer) is available.")
return DeepStreamBackend(self.config)
except (ImportError, ValueError) as e:
logging.warning(f"Could not initialize DeepStreamBackend: {e}. Falling back to PythonBackend.")
return PythonBackend(self.config)
elif os_name == "Darwin":
logging.info("Operating system is macOS. Initializing Python backend.")
return PythonBackend(self.config)
else:
logging.error(f"Unsupported operating system: {os_name}")
raise NotImplementedError(f"Unsupported operating system: {os_name}")
def start(self):
"""
Starts the vision system.
"""
self._backend.start()
def stop(self):
"""
Stops the vision system.
"""
self._backend.stop()
def get_pupil_data(self):
"""
Returns the latest pupil segmentation data.
"""
return self._backend.get_pupil_data()
def get_annotated_frame(self):
"""
Returns the latest annotated frame.
"""
return self._backend.get_annotated_frame()
class MockBackend:
"""
A mock backend for testing purposes.
"""
def __init__(self, config):
self.config = config
logging.info("MockBackend initialized.")
def start(self):
logging.info("MockBackend started.")
pass
def stop(self):
logging.info("MockBackend stopped.")
pass
def get_pupil_data(self):
logging.info("Getting pupil data from MockBackend.")
return {
"pupil_position": (123, 456),
"pupil_diameter": 789,
"info": "mock_data"
}
def get_annotated_frame(self):
"""
Returns a placeholder image.
"""
frame = np.zeros((480, 640, 3), np.uint8)
cv2.putText(frame, "Mock Camera Feed", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
return frame
class DeepStreamBackend:
"""
A class to handle pupil segmentation on Jetson/Windows using DeepStream.
"""
def __init__(self, config):
"""
Initializes the DeepStreamBackend.
Args:
config (dict): A dictionary containing configuration parameters.
"""
from deepstream_pipeline import DeepStreamPipeline
self.config = config
self.pipeline = DeepStreamPipeline(config)
logging.info("DeepStreamBackend initialized.")
def start(self):
"""
Starts the DeepStream pipeline.
"""
self.pipeline.start()
logging.info("DeepStreamBackend started.")
def stop(self):
"""
Stops the DeepStream pipeline.
"""
self.pipeline.stop()
logging.info("DeepStreamBackend stopped.")
def get_pupil_data(self):
"""
Retrieves pupil data from the DeepStream pipeline.
"""
return self.pipeline.get_data()
def get_annotated_frame(self):
"""
Retrieves the annotated frame from the DeepStream pipeline.
"""
return self.pipeline.get_annotated_frame()
class PythonBackend:
"""
A class to handle pupil segmentation on macOS using pypylon and Ultralytics YOLO models.
"""
def __init__(self, config):
"""
Initializes the PythonBackend.
Args:
config (dict): A dictionary containing configuration parameters
such as 'model_path'.
"""
self.config = config
self.camera = None
self.model = None # Ultralytics YOLO model
self.annotated_frame = None
self.conf_threshold = 0.25 # Confidence threshold for object detection
self.iou_threshold = 0.45 # IoU threshold for Non-Maximum Suppression
# Load the YOLO model (e.g., yolov8n-seg.pt)
try:
model_full_path = os.path.join(os.path.dirname(__file__), self.config['model_path'])
self.model = YOLO(model_full_path)
logging.info(f"PythonBackend: Ultralytics YOLO model loaded from {model_full_path}.")
# Dynamically get class names from the model
self.class_names = self.model.names
except Exception as e:
logging.error(f"PythonBackend: Error loading Ultralytics YOLO model: {e}")
self.model = None
self.class_names = [] # Fallback to empty list
logging.info("PythonBackend initialized.")
def start(self):
"""
Initializes the Basler camera.
"""
try:
from pypylon import pylon
except ImportError:
raise ImportError("pypylon is not installed. Cannot start PythonBackend.")
try:
# Initialize the camera
self.camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
self.camera.Open()
# Start grabbing continuously
self.camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)
logging.info("PythonBackend: Basler camera opened and started grabbing.")
except Exception as e:
logging.error(f"PythonBackend: Error opening Basler camera: {e}")
self.camera = None
logging.info("PythonBackend started.")
def stop(self):
"""
Releases the camera resources.
"""
if self.camera and self.camera.IsGrabbing():
self.camera.StopGrabbing()
logging.info("PythonBackend: Basler camera stopped grabbing.")
if self.camera and self.camera.IsOpen():
self.camera.Close()
logging.info("PythonBackend: Basler camera closed.")
logging.info("PythonBackend stopped.")
def get_pupil_data(self):
"""
Grabs a frame from the camera, runs inference using Ultralytics YOLO, and returns pupil data.
"""
if not self.camera or not self.camera.IsGrabbing():
logging.warning("PythonBackend: Camera not ready.")
return None
if not self.model:
logging.warning("PythonBackend: YOLO model not loaded.")
return None
grab_result = None
try:
from pypylon import pylon
import cv2
import numpy as np
grab_result = self.camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
if grab_result.GrabSucceeded():
image_np = grab_result.Array # This is typically a grayscale image from Basler
# Convert grayscale to BGR if necessary for YOLO (YOLO expects 3 channels)
if len(image_np.shape) == 2:
image_bgr = cv2.cvtColor(image_np, cv2.COLOR_GRAY2BGR)
else:
image_bgr = image_np
# Run inference with Ultralytics YOLO
results = self.model.predict(source=image_bgr, conf=self.conf_threshold, iou=self.iou_threshold, verbose=False)
pupil_data = {}
self.annotated_frame = image_bgr.copy() # Start with original image for annotation
if results and len(results[0].boxes) > 0: # Check if any detections are made
# Assuming we are interested in the largest or most confident pupil
# For simplicity, let's process the first detection
result = results[0] # Results for the first (and only) image
# Extract bounding box
box = result.boxes.xyxy[0].cpu().numpy().astype(int) # xyxy format
x1, y1, x2, y2 = box
# Extract confidence and class ID
confidence = result.boxes.conf[0].cpu().numpy().item()
class_id = int(result.boxes.cls[0].cpu().numpy().item())
class_name = self.class_names[class_id]
# Calculate pupil position (center of bounding box)
pupil_center_x = (x1 + x2) // 2
pupil_center_y = (y1 + y2) // 2
# Calculate pupil diameter (average of width and height of bounding box)
pupil_diameter = (x2 - x1 + y2 - y1) // 2
pupil_data = {
"pupil_position": (pupil_center_x, pupil_center_y),
"pupil_diameter": pupil_diameter,
"class_name": class_name,
"confidence": confidence,
"bounding_box": box.tolist() # Convert numpy array to list for JSON serialization
}
# Extract and draw segmentation mask
if result.masks:
# Get the mask for the first detection, upsampled to original image size
mask_np = result.masks.data[0].cpu().numpy() # Raw mask data
# Resize mask to original image dimensions if necessary (ultralytics usually returns scaled masks)
mask_resized = cv2.resize(mask_np, (image_bgr.shape[1], image_bgr.shape[0]), interpolation=cv2.INTER_LINEAR)
binary_mask = (mask_resized > 0.5).astype(np.uint8) * 255 # Threshold to binary
# Draw bounding box
color = (0, 255, 0) # Green for pupil detection
cv2.rectangle(self.annotated_frame, (x1, y1), (x2, y2), color, 2)
# Create a colored mask overlay
mask_color = np.array([0, 255, 0], dtype=np.uint8) # Green color for mask
colored_mask_overlay = np.zeros_like(self.annotated_frame, dtype=np.uint8)
colored_mask_overlay[binary_mask > 0] = mask_color
self.annotated_frame = cv2.addWeighted(self.annotated_frame, 1, colored_mask_overlay, 0.5, 0)
# Draw label
label = f"{class_name}: {confidence:.2f}"
cv2.putText(self.annotated_frame, label, (x1, y1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
else:
logging.info("No objects detected by YOLO model.")
return pupil_data
else:
logging.error(f"PythonBackend: Error grabbing frame: {grab_result.ErrorCode} {grab_result.ErrorDescription}")
return None
except Exception as e:
logging.error(f"PythonBackend: An error occurred during frame grabbing or inference: {e}")
return None
finally:
if grab_result:
grab_result.Release()
def get_annotated_frame(self):
"""
Returns the latest annotated frame.
"""
return self.annotated_frame
if __name__ == '__main__':
# Example usage
# Ensure 'yolov8n-seg.pt' is in src/controllerSoftware for this example to run
config = {"camera_id": 0, "model_path": "yolov8n-seg.pt"}
try:
vision_system = VisionSystem(config)
vision_system.start()
# In a real application, this would run in a loop
pupil_data = vision_system.get_pupil_data()
if pupil_data:
logging.info(f"Received pupil data: {pupil_data}")
else:
logging.info("No pupil data received.")
# Get and show the annotated frame
annotated_frame = vision_system.get_annotated_frame()
if annotated_frame is not None:
cv2.imshow("Annotated Frame", annotated_frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
vision_system.stop()
except NotImplementedError as e:
logging.error(e)
except Exception as e:
logging.error(f"An error occurred: {e}")
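The bounding-box geometry in `get_pupil_data` (center as the box midpoint, diameter as the mean of box width and height) can be verified in isolation. This is a hypothetical standalone helper mirroring that arithmetic, not code from the repository:

```python
def box_center_and_diameter(x1, y1, x2, y2):
    """Center and approximate pupil diameter from an xyxy bounding box,
    mirroring the arithmetic in PythonBackend.get_pupil_data."""
    center = ((x1 + x2) // 2, (y1 + y2) // 2)
    diameter = (x2 - x1 + y2 - y1) // 2  # mean of box width and height
    return center, diameter
```

Note that using the mean of width and height assumes a roughly circular pupil; an elongated mask would make this estimate diverge from the true diameter.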

View File

@@ -1,267 +0,0 @@
import sys
import subprocess
import threading
import time
import gc
import json
from flask import Flask, Response, render_template_string, jsonify
# --- CONFIGURATION ---
TARGET_NUM_CAMS = 3
DEFAULT_W = 1280
DEFAULT_H = 720
# --- PART 1: DETECTION ---
def scan_connected_cameras():
print("--- Scanning for Basler Cameras ---")
detection_script = """
import sys
try:
from pypylon import pylon
tl_factory = pylon.TlFactory.GetInstance()
devices = tl_factory.EnumerateDevices()
if not devices:
print("NONE")
else:
serials = [d.GetSerialNumber() for d in devices]
cam = pylon.InstantCamera(tl_factory.CreateDevice(devices[0]))
cam.Open()
try:
cam.BinningHorizontal.Value = 2
cam.BinningVertical.Value = 2
w = cam.Width.GetValue()
h = cam.Height.GetValue()
cam.BinningHorizontal.Value = 1
cam.BinningVertical.Value = 1
supported = 1
except Exception:
w = cam.Width.GetValue()
h = cam.Height.GetValue()
supported = 0
cam.Close()
print(f"{','.join(serials)}|{w}|{h}|{supported}")
except Exception:
print("NONE")
"""
try:
result = subprocess.run([sys.executable, "-c", detection_script], capture_output=True, text=True)
output = result.stdout.strip()
if "NONE" in output or not output:
return [], DEFAULT_W, DEFAULT_H, False
parts = output.split('|')
return parts[0].split(','), int(parts[1]), int(parts[2]), (parts[3] == '1')
except Exception: return [], DEFAULT_W, DEFAULT_H, False
DETECTED_SERIALS, CAM_W, CAM_H, BINNING_SUPPORTED = scan_connected_cameras()
ACTUAL_CAMS_COUNT = len(DETECTED_SERIALS)
# --- RESOLUTION & LAYOUT ---
INTERNAL_WIDTH = 1280
if ACTUAL_CAMS_COUNT > 0:
scale = INTERNAL_WIDTH / CAM_W
INTERNAL_HEIGHT = int(CAM_H * scale)
else:
INTERNAL_HEIGHT = 720
if INTERNAL_HEIGHT % 2 != 0: INTERNAL_HEIGHT += 1
WEB_WIDTH = 1280
total_source_width = INTERNAL_WIDTH * TARGET_NUM_CAMS
scale_tiled = WEB_WIDTH / total_source_width
WEB_HEIGHT = int(INTERNAL_HEIGHT * scale_tiled)
if WEB_HEIGHT % 2 != 0: WEB_HEIGHT += 1
print(f"LAYOUT: {TARGET_NUM_CAMS} Slots | Detected: {ACTUAL_CAMS_COUNT} Cams")
# --- FLASK & GSTREAMER ---
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib
app = Flask(__name__)
frame_buffer = None
buffer_lock = threading.Lock()
current_fps = 0.0
frame_count = 0
start_time = time.time()
class GStreamerPipeline(threading.Thread):
def __init__(self):
super().__init__()
self.loop = GLib.MainLoop()
self.pipeline = None
def run(self):
Gst.init(None)
self.build_pipeline()
self.pipeline.set_state(Gst.State.PLAYING)
try:
self.loop.run()
except Exception as e:
print(f"Error: {e}")
finally:
self.pipeline.set_state(Gst.State.NULL)
def on_new_sample(self, sink):
global frame_count, start_time, current_fps
sample = sink.emit("pull-sample")
if not sample: return Gst.FlowReturn.ERROR
frame_count += 1
# Calculate FPS every 30 frames
if frame_count % 30 == 0:
elapsed = time.time() - start_time
current_fps = 30 / elapsed if elapsed > 0 else 0
start_time = time.time()
buffer = sample.get_buffer()
success, map_info = buffer.map(Gst.MapFlags.READ)
if not success: return Gst.FlowReturn.ERROR
global frame_buffer
with buffer_lock:
frame_buffer = bytes(map_info.data)
buffer.unmap(map_info)
return Gst.FlowReturn.OK
def build_pipeline(self):
# 1. CAMERA SETTINGS
# Note: We run cameras at 60 FPS for internal stability
cam_settings = (
"cam::TriggerMode=Off "
"cam::AcquisitionFrameRateEnable=true cam::AcquisitionFrameRate=60.0 "
"cam::ExposureAuto=Off "
"cam::ExposureTime=20000.0 "
"cam::GainAuto=Continuous "
"cam::DeviceLinkThroughputLimitMode=Off "
)
if BINNING_SUPPORTED:
cam_settings += "cam::BinningHorizontal=2 cam::BinningVertical=2 "
sources_str = ""
for i in range(TARGET_NUM_CAMS):
if i < len(DETECTED_SERIALS):
# --- REAL CAMERA SOURCE ---
serial = DETECTED_SERIALS[i]
print(f"Slot {i}: Linking Camera {serial}")
pre_scale = (
"nvvideoconvert compute-hw=1 ! "
f"video/x-raw(memory:NVMM), format=NV12, width={INTERNAL_WIDTH}, height={INTERNAL_HEIGHT}, framerate=60/1 ! "
)
source = (
f"pylonsrc device-serial-number={serial} {cam_settings} ! "
"video/x-raw,format=GRAY8 ! "
"videoconvert ! "
"video/x-raw,format=I420 ! "
"nvvideoconvert compute-hw=1 ! "
"video/x-raw(memory:NVMM) ! "
f"{pre_scale}"
f"m.sink_{i} "
)
else:
# --- DISCONNECTED PLACEHOLDER ---
print(f"Slot {i}: Creating Placeholder (Synchronized)")
# FIX 1: Add 'videorate' to enforce strict timing on the fake source
# This prevents the placeholder from running too fast/slow and jittering the muxer
source = (
f"videotestsrc pattern=black is-live=true ! "
f"videorate ! " # <--- TIMING ENFORCER
f"video/x-raw,width={INTERNAL_WIDTH},height={INTERNAL_HEIGHT},format=I420,framerate=60/1 ! "
f"textoverlay text=\"DISCONNECTED\" valignment=center halignment=center font-desc=\"Sans, 48\" ! "
"nvvideoconvert compute-hw=1 ! "
f"video/x-raw(memory:NVMM),format=NV12,width={INTERNAL_WIDTH},height={INTERNAL_HEIGHT},framerate=60/1 ! "
f"m.sink_{i} "
)
sources_str += source
# 3. MUXER & PROCESSING
# FIX 2: batched-push-timeout=33000
# This tells the muxer: "If you have data, send it every 33ms (30fps). Don't wait forever."
# FIX 3: Output Videorate
# We process internally at 60fps (best for camera driver), but we DROP to 30fps
# for the web stream. This makes the network stream buttery smooth and consistent.
processing = (
f"nvstreammux name=m batch-size={TARGET_NUM_CAMS} width={INTERNAL_WIDTH} height={INTERNAL_HEIGHT} "
f"live-source=1 batched-push-timeout=33000 ! " # <--- TIMEOUT FIX
f"nvmultistreamtiler width={WEB_WIDTH} height={WEB_HEIGHT} rows=1 columns={TARGET_NUM_CAMS} ! "
"nvvideoconvert compute-hw=1 ! "
"video/x-raw(memory:NVMM) ! "
"videorate drop-only=true ! " # <--- DROPPING FRAMES CLEANLY
"video/x-raw(memory:NVMM), framerate=30/1 ! " # <--- Force 30 FPS Output
f"nvjpegenc quality=60 ! "
"appsink name=sink emit-signals=True sync=False max-buffers=1 drop=True"
)
pipeline_str = f"{sources_str} {processing}"
print("Launching SMOOTH Pipeline...")
self.pipeline = Gst.parse_launch(pipeline_str)
appsink = self.pipeline.get_by_name("sink")
appsink.connect("new-sample", self.on_new_sample)
# --- FLASK ---
@app.route('/')
def index():
return render_template_string('''
<html>
<head>
<style>
body { background-color: #111; color: white; text-align: center; font-family: monospace; margin: 0; padding: 20px; }
.container { position: relative; display: inline-block; border: 3px solid #4CAF50; }
img { display: block; max-width: 100%; height: auto; }
.hud {
position: absolute; top: 10px; left: 10px;
background: rgba(0, 0, 0, 0.6); color: #00FF00;
padding: 5px 10px; font-weight: bold; pointer-events: none;
}
</style>
</head>
<body>
<h1>Basler 3-Cam (Smooth)</h1>
<div class="container">
<div class="hud" id="fps-counter">FPS: --</div>
<img src="{{ url_for('video_feed') }}">
</div>
<script>
setInterval(function() {
fetch('/get_fps').then(r => r.json()).then(d => {
document.getElementById('fps-counter').innerText = "FPS: " + d.fps;
});
}, 500);
</script>
</body>
</html>
''')
@app.route('/video_feed')
def video_feed():
def generate():
count = 0
while True:
# Copy the frame under the lock, but yield it outside the lock so a
# slow client cannot block the GStreamer callback.
with buffer_lock:
    frame = frame_buffer
if frame:
    yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
# Sleep 33ms (30 FPS)
time.sleep(0.033)
count += 1
if count % 200 == 0: gc.collect()
return Response(generate(), mimetype='multipart/x-mixed-replace; boundary=frame')
@app.route('/get_fps')
def get_fps():
return jsonify(fps=round(current_fps, 1))
if __name__ == "__main__":
gc.collect()  # Collect import-time garbage in this process; running gc in a subprocess would free nothing here
gst_thread = GStreamerPipeline()
gst_thread.daemon = True
gst_thread.start()
app.run(host='0.0.0.0', port=5000, debug=False, threaded=True)


@@ -1,58 +0,0 @@
from pypylon import pylon
import time
import sys
try:
# Get the Transport Layer Factory
tl_factory = pylon.TlFactory.GetInstance()
devices = tl_factory.EnumerateDevices()
if not devices:
print("No cameras found!")
sys.exit(1)
print(f"Found {len(devices)} cameras. Checking Camera 1...")
# Connect to first camera
cam = pylon.InstantCamera(tl_factory.CreateDevice(devices[0]))
cam.Open()
# 1. Reset to Defaults
print("Resetting to Defaults...")
cam.UserSetSelector.Value = "Default"
cam.UserSetLoad.Execute()
# 2. Enable Auto Exposure/Gain
print("Enabling Auto Exposure & Gain...")
cam.ExposureAuto.Value = "Continuous"
cam.GainAuto.Value = "Continuous"
# 3. Wait for it to settle (Camera adjusts to light)
print("Waiting 3 seconds for auto-adjustment...")
for i in range(3):
print(f"{3-i}...")
time.sleep(1)
# 4. READ VALUES
current_exposure = cam.ExposureTime.GetValue() # In Microseconds (us)
current_fps_readout = cam.ResultingFrameRate.GetValue()
print("-" * 30)
print(f"REPORT FOR SERIAL: {cam.GetDeviceInfo().GetSerialNumber()}")
print("-" * 30)
print(f"Current Exposure Time: {current_exposure:.1f} us ({current_exposure/1000:.1f} ms)")
print(f"Theoretical Max FPS: {1000000 / current_exposure:.1f} FPS")
print(f"Camera Internal FPS: {current_fps_readout:.1f} FPS")
print("-" * 30)
if current_exposure > 33000:
print("⚠️ PROBLEM FOUND: Exposure is > 33ms.")
print(" This physically prevents the camera from reaching 30 FPS.")
print(" Solution: Add more light or limit AutoExposureUpperLimit.")
else:
print("✅ Exposure looks fast enough for 30 FPS.")
cam.Close()
except Exception as e:
print(f"Error: {e}")
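The 33 ms warning threshold above follows directly from the exposure/frame-rate arithmetic; a standalone sketch (the helper name is illustrative, not part of the repo):

```python
def theoretical_max_fps(exposure_us: float) -> float:
    """Exposure alone caps the frame rate: a new frame cannot start
    before the previous exposure ends, so FPS <= 1e6 / exposure_us."""
    return 1_000_000.0 / exposure_us

print(theoretical_max_fps(20_000.0))  # 50.0 -> plenty of headroom for 30 FPS
print(theoretical_max_fps(40_000.0))  # 25.0 -> physically cannot reach 30 FPS
```

At exactly 33 ms the bound is still ~30.3 FPS, which is why the script flags only exposures above that value.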


@@ -1,33 +0,0 @@
# Unified WebUI
This application combines the functionality of the `detectionSoftware` and `controllerSoftware` into a single, unified web interface.
## Features
- **Camera View:** Displays a tiled video stream from multiple Basler cameras.
- **Lamp Control:** Provides a web interface to control a 5x5 LED matrix via Bluetooth Low Energy (BLE).
- **Responsive UI:** The UI is designed to work on both desktop and mobile devices. On desktop, the lamp control and camera view are displayed side-by-side. On mobile, they are in separate tabs.
## Setup
1. **Install dependencies:**
```bash
pip install -r requirements.txt
```
2. **Run the application:**
```bash
python src/unified_web_ui/app.py
```
3. **Open the web interface:**
Open a web browser and navigate to `http://<your-ip-address>:5000`.
## Modules
- **`app.py`:** The main Flask application file.
- **`ble_controller.py`:** Handles the BLE communication with the lamp matrix.
- **`camera_scanner.py`:** Scans for connected Basler cameras.
- **`gstreamer_pipeline.py`:** Creates and manages the GStreamer pipeline for video processing.
- **`templates/index.html`:** The main HTML template for the web interface.
- **`static/style.css`:** The CSS file for styling the web interface.
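For a quick smoke test of the lamp endpoint without the browser, the `/set_matrix` route accepts a 5x5 JSON matrix of `ww`/`cw`/`blue` values. A minimal sketch (actually sending it requires the app to be running and the `requests` package installed, so the POST is shown commented out):

```python
# Build the payload shape the /set_matrix route validates: 5 rows x 5 lamps.
matrix = [[{"ww": 128, "cw": 0, "blue": 0} for _ in range(5)] for _ in range(5)]
payload = {"matrix": matrix}

# With the server running:
# import requests
# requests.post("http://localhost:5000/set_matrix", json=payload, timeout=5)

print(len(payload["matrix"]), len(payload["matrix"][0]))  # 5 5
```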


@@ -1,226 +0,0 @@
import sys
import subprocess
import threading
import time
import asyncio
import json
import signal
import os
from flask import Flask, Response, render_template, request, jsonify, g
from camera_scanner import scan_connected_cameras
from gstreamer_pipeline import GStreamerPipeline
from ble_controller import BLEController, get_spiral_address, SPIRAL_MAP_5x5, lampAmount
# =================================================================================================
# APP CONFIGURATION
# =================================================================================================
# --- Camera Configuration ---
TARGET_NUM_CAMS = 3
DEFAULT_W = 1280
DEFAULT_H = 720
# --- BLE Device Configuration ---
DEVICE_NAME = "Pupilometer LED Billboard"
DEBUG_MODE = False # Set to True to run without a physical BLE device
# =================================================================================================
# INITIALIZATION
# =================================================================================================
# --- Camera Initialization ---
DETECTED_CAMS = scan_connected_cameras()
ACTUAL_CAMS_COUNT = len(DETECTED_CAMS)
# Sort cameras: color camera first, then mono cameras
# Assuming 'is_color' is a reliable flag
# If no color camera exists, the first mono will be at index 0.
detected_cams_sorted = sorted(DETECTED_CAMS, key=lambda x: x['is_color'], reverse=True)
if ACTUAL_CAMS_COUNT > 0:
MASTER_W = detected_cams_sorted[0]['width']
MASTER_H = detected_cams_sorted[0]['height']
else:
MASTER_W = DEFAULT_W
MASTER_H = DEFAULT_H
INTERNAL_WIDTH = 1280
scale = INTERNAL_WIDTH / MASTER_W
INTERNAL_HEIGHT = int(MASTER_H * scale)
if INTERNAL_HEIGHT % 2 != 0: INTERNAL_HEIGHT += 1
WEB_WIDTH = 1280
total_source_width = INTERNAL_WIDTH * TARGET_NUM_CAMS
scale_tiled = WEB_WIDTH / total_source_width
WEB_HEIGHT = int(INTERNAL_HEIGHT * scale_tiled)
if WEB_HEIGHT % 2 != 0: WEB_HEIGHT += 1 # Ensure even dimensions for some GStreamer elements
print(f"LAYOUT: {TARGET_NUM_CAMS} Slots | Detected: {ACTUAL_CAMS_COUNT}")
for c in detected_cams_sorted:
print(f" - Cam {c['serial']} ({c['model']}): {'COLOR' if c['is_color'] else 'MONO'}")
# --- Flask App Initialization ---
app = Flask(__name__)
# --- GStreamer Initialization ---
gst_thread = GStreamerPipeline(detected_cams_sorted, TARGET_NUM_CAMS, INTERNAL_WIDTH, INTERNAL_HEIGHT, WEB_WIDTH, WEB_HEIGHT)
gst_thread.daemon = True
gst_thread.start()
# --- BLE Initialization ---
ble_controller = BLEController(DEVICE_NAME, DEBUG_MODE)
ble_thread = None
if not DEBUG_MODE:
ble_controller.ble_event_loop = asyncio.new_event_loop()
ble_thread = threading.Thread(target=ble_controller.ble_event_loop.run_forever, daemon=True)
ble_thread.start()
future = asyncio.run_coroutine_threadsafe(ble_controller.connect(), ble_controller.ble_event_loop)
try:
future.result(timeout=10)
except Exception as e:
print(f"Failed to connect to BLE device: {e}")
# Optionally, set DEBUG_MODE to True here if BLE connection is critical
# DEBUG_MODE = True
# --- In-memory matrix for DEBUG_MODE ---
lamp_matrix = [['#000000' for _ in range(5)] for _ in range(5)]
# =================================================================================================
# COLOR MIXING
# =================================================================================================
def calculate_rgb(ww, cw, blue):
warm_white_r, warm_white_g, warm_white_b = 255, 192, 128
cool_white_r, cool_white_g, cool_white_b = 192, 224, 255
blue_r, blue_g, blue_b = 0, 0, 255
r = (ww / 255) * warm_white_r + (cw / 255) * cool_white_r + (blue / 255) * blue_r
g = (ww / 255) * warm_white_g + (cw / 255) * cool_white_g + (blue / 255) * blue_g
b = (ww / 255) * warm_white_b + (cw / 255) * cool_white_b + (blue / 255) * blue_b
r = int(min(255, round(r)))
g = int(min(255, round(g)))
b = int(min(255, round(b)))
return r, g, b
def rgb_to_hex(r, g, b):
r = int(max(0, min(255, r)))
g = int(max(0, min(255, g)))
b = int(max(0, min(255, b)))
return f'#{r:02x}{g:02x}{b:02x}'
# =================================================================================================
# FLASK ROUTES
# =================================================================================================
from datetime import datetime
@app.context_processor
def inject_now():
return {'now': datetime.utcnow}
@app.before_request
def before_request():
g.detected_cams_info = []
for cam in gst_thread.sorted_cams:
cam_copy = cam.copy()
if cam_copy['height'] > 0:
cam_copy['aspect_ratio'] = cam_copy['width'] / cam_copy['height']
else:
cam_copy['aspect_ratio'] = 16 / 9 # Default aspect ratio
g.detected_cams_info.append(cam_copy)
@app.route('/')
def index():
return render_template('index.html', matrix=lamp_matrix, detected_cams_info=g.detected_cams_info)
@app.route('/video_feed/<int:stream_id>')
def video_feed(stream_id):
def generate(stream_id):
while True:
frame = gst_thread.get_frame_by_id(stream_id)
if frame:
yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
time.sleep(0.016) # Roughly 60 fps
return Response(generate(stream_id), mimetype='multipart/x-mixed-replace; boundary=frame')
@app.route('/segmentation_feed/<int:stream_id>')
def segmentation_feed(stream_id):
def generate(stream_id):
while True:
frame = gst_thread.get_seg_frame_by_id(stream_id)
if frame:
yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
time.sleep(0.016) # Roughly 60 fps
return Response(generate(stream_id), mimetype='multipart/x-mixed-replace; boundary=frame')
@app.route('/get_fps')
def get_fps():
return jsonify(fps=gst_thread.get_fps())
@app.route('/set_matrix', methods=['POST'])
def set_matrix():
data = request.get_json()
full_matrix = data.get('matrix', [])
if len(full_matrix) != 5 or any(len(row) != 5 for row in full_matrix):
    return jsonify(success=False, message="Invalid matrix data received"), 400
serial_colors = [b'\x00\x00\x00'] * lampAmount
try:
for row in range(5):
for col in range(5):
lamp_data = full_matrix[row][col]
ww = int(lamp_data['ww'])
cw = int(lamp_data['cw'])
blue = int(lamp_data['blue'])
color_bytes = bytes([ww, cw, blue])
spiral_pos = get_spiral_address(row, col, SPIRAL_MAP_5x5)
if spiral_pos != -1:
serial_colors[spiral_pos] = color_bytes
lampColorR, lampColorG, lampColorB = calculate_rgb(ww,cw,blue)
lamp_matrix[row][col] = rgb_to_hex(lampColorR, lampColorG, lampColorB)
if DEBUG_MODE:
return jsonify(success=True)
else:
asyncio.run_coroutine_threadsafe(
ble_controller.set_full_matrix(serial_colors),
ble_controller.ble_event_loop
)
return jsonify(success=True)
except Exception as e:
print(f"Error in set_matrix route: {e}")
return jsonify(success=False, message=str(e)), 500
# =================================================================================================
# APP SHUTDOWN
# =================================================================================================
def signal_handler(signum, frame):
print("Received shutdown signal, gracefully shutting down...")
if not DEBUG_MODE:
disconnect_future = asyncio.run_coroutine_threadsafe(ble_controller.disconnect(), ble_controller.ble_event_loop)
try:
disconnect_future.result(timeout=5)
except Exception as e:
print(f"Error during BLE disconnect: {e}")
if not DEBUG_MODE and ble_controller.ble_event_loop and ble_controller.ble_event_loop.is_running():
    ble_controller.ble_event_loop.call_soon_threadsafe(ble_controller.ble_event_loop.stop)
if ble_thread is not None:  # ble_thread is None in DEBUG_MODE
    ble_thread.join(timeout=1)
os._exit(0)
# =================================================================================================
# APP STARTUP
# =================================================================================================
if __name__ == '__main__':
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
app.run(host='0.0.0.0', port=5000, debug=False, threaded=True, use_reloader=False)
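The additive mix in `calculate_rgb` can be sanity-checked by hand; a self-contained sketch using the same reference tints as the app code above:

```python
def calculate_rgb(ww, cw, blue):
    # Same constants as app.py: each channel contributes its reference
    # tint scaled by intensity; the per-component sum is clamped to 255.
    warm, cool, blu = (255, 192, 128), (192, 224, 255), (0, 0, 255)
    mix = [(ww / 255) * warm[i] + (cw / 255) * cool[i] + (blue / 255) * blu[i] for i in range(3)]
    return tuple(int(min(255, round(v))) for v in mix)

print(calculate_rgb(255, 0, 0))    # (255, 192, 128) - pure warm white
print(calculate_rgb(0, 0, 255))    # (0, 0, 255)     - pure blue
print(calculate_rgb(255, 255, 0))  # (255, 255, 255) - both whites saturate
```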


@@ -1,108 +0,0 @@
import asyncio
from bleak import BleakScanner, BleakClient
# =================================================================================================
# BLE HELPER FUNCTIONS (Used in LIVE mode)
# =================================================================================================
lampAmount = 25
def create_spiral_map(n=5):
if n % 2 == 0:
raise ValueError("Matrix size must be odd for a unique center point.")
spiral_map = [[0] * n for _ in range(n)]
r, c = n // 2, n // 2
address = 0
spiral_map[r][c] = address
dr = [-1, 0, 1, 0]
dc = [0, 1, 0, -1]
direction = 0
segment_length = 1
steps = 0
while address < n * n - 1:
for _ in range(segment_length):
address += 1
r += dr[direction]
c += dc[direction]
if 0 <= r < n and 0 <= c < n:
spiral_map[r][c] = address
direction = (direction + 1) % 4
steps += 1
if steps % 2 == 0:
segment_length += 1
return spiral_map
def get_spiral_address(row, col, spiral_map):
n = len(spiral_map)
if 0 <= row < n and 0 <= col < n:
return spiral_map[row][col]
else:
return -1
SPIRAL_MAP_5x5 = create_spiral_map(5)
class BLEController:
def __init__(self, device_name, debug_mode=False):
self.device_name = device_name
self.debug_mode = debug_mode
self.ble_client = None
self.ble_characteristics = None
self.ble_event_loop = None
async def connect(self):
print(f"Scanning for device: {self.device_name}...")
devices = await BleakScanner.discover()
target_device = next((d for d in devices if d.name == self.device_name), None)
if not target_device:
print(f"Device '{self.device_name}' not found.")
return False
print(f"Found device: {target_device.name} ({target_device.address})")
try:
self.ble_client = BleakClient(target_device.address)
await self.ble_client.connect()
if self.ble_client.is_connected:
print(f"Connected to {target_device.name}")
services = [service for service in self.ble_client.services if service.handle != 1]
characteristics = [
char for service in services for char in service.characteristics
]
self.ble_characteristics = sorted(characteristics, key=lambda char: char.handle)
print(f"Found {len(self.ble_characteristics)} characteristics for lamps.")
return True
else:
print(f"Failed to connect to {target_device.name}")
return False
except Exception as e:
print(f"An error occurred during BLE connection: {e}")
return False
async def disconnect(self):
if self.ble_client and self.ble_client.is_connected:
await self.ble_client.disconnect()
print("BLE client disconnected.")
async def set_full_matrix(self, color_series):
if not self.ble_client or not self.ble_client.is_connected:
print("BLE client not connected. Attempting to reconnect...")
await self.connect()
if not self.ble_client or not self.ble_client.is_connected:
print("Failed to reconnect to BLE client.")
return
if self.debug_mode:
print(f"Constructed the following matrix data: {color_series}")
for i, char in enumerate(self.ble_characteristics):
value_to_write = color_series[i]
print(f"Setting Lamp {i} ({char.uuid}) to {value_to_write.hex()}")
await self.ble_client.write_gatt_char(char.uuid, value_to_write)
else:
value_to_write = b"".join(color_series)
print(f"Setting lamps to {value_to_write.hex()}")
await self.ble_client.write_gatt_char(self.ble_characteristics[0].uuid, value_to_write)
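The center-out spiral addressing is easiest to verify on a 3x3 grid. A self-contained check (the function body mirrors `create_spiral_map` above):

```python
def create_spiral_map(n=5):
    # Address 0 sits at the center; the walk then goes up, right, down,
    # left with segment lengths 1, 1, 2, 2, 3, 3, ...
    if n % 2 == 0:
        raise ValueError("Matrix size must be odd for a unique center point.")
    spiral_map = [[0] * n for _ in range(n)]
    r, c = n // 2, n // 2
    address = 0
    spiral_map[r][c] = address
    dr, dc = [-1, 0, 1, 0], [0, 1, 0, -1]
    direction, segment_length, steps = 0, 1, 0
    while address < n * n - 1:
        for _ in range(segment_length):
            address += 1
            r += dr[direction]
            c += dc[direction]
            if 0 <= r < n and 0 <= c < n:
                spiral_map[r][c] = address
        direction = (direction + 1) % 4
        steps += 1
        if steps % 2 == 0:
            segment_length += 1
    return spiral_map

print(create_spiral_map(3))  # [[8, 1, 2], [7, 0, 3], [6, 5, 4]]
```

Every cell receives a unique address, so a 5x5 map covers exactly 0 through 24 — the precondition for indexing `serial_colors` in app.py.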


@@ -1,51 +0,0 @@
import sys
import subprocess
def scan_connected_cameras():
print("--- Scanning for Basler Cameras ---")
detection_script = """
import sys
try:
from pypylon import pylon
tl_factory = pylon.TlFactory.GetInstance()
devices = tl_factory.EnumerateDevices()
if not devices:
print("NONE")
else:
results = []
for i in range(len(devices)):
cam = pylon.InstantCamera(tl_factory.CreateDevice(devices[i]))
cam.Open()
serial = cam.GetDeviceInfo().GetSerialNumber()
model = cam.GetDeviceInfo().GetModelName()
is_color = model.endswith("c") or "Color" in model
w = cam.Width.GetValue()
h = cam.Height.GetValue()
binning = 0
try:
cam.BinningHorizontal.Value = 2
cam.BinningVertical.Value = 2
cam.BinningHorizontal.Value = 1
cam.BinningVertical.Value = 1
binning = 1
except Exception: pass
current_fmt = cam.PixelFormat.GetValue()
cam.Close()
results.append(f"{serial}:{w}:{h}:{binning}:{1 if is_color else 0}:{model}:{current_fmt}")
print("|".join(results))
except Exception: print("NONE")
"""
try:
result = subprocess.run([sys.executable, "-c", detection_script], capture_output=True, text=True)
output = result.stdout.strip()
if "NONE" in output or not output: return []
camera_list = []
entries = output.split('|')
for entry in entries:
parts = entry.split(':')
camera_list.append({
"serial": parts[0], "width": int(parts[1]), "height": int(parts[2]),
"binning": (parts[3] == '1'), "is_color": (parts[4] == '1'), "model": parts[5]
})
return camera_list
except Exception: return []
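The subprocess prints one colon-separated record per camera, joined by `|`; note that the pixel-format field (`parts[6]`) is emitted but ignored by the parser above. A round-trip sketch with a made-up record (the serial and model here are hypothetical):

```python
# Hypothetical record in the scanner's wire format:
# serial:width:height:binning:is_color:model:pixel_format
sample = "40123456:1920:1200:1:0:acA1920-40um:Mono8"
parts = sample.split(":")
cam = {
    "serial": parts[0], "width": int(parts[1]), "height": int(parts[2]),
    "binning": parts[3] == "1", "is_color": parts[4] == "1", "model": parts[5],
}
print(cam)
```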


@@ -1,195 +0,0 @@
import threading
import time
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib, GObject
class GStreamerPipeline(threading.Thread):
def __init__(self, detected_cams, target_num_cams, internal_width, internal_height, web_width, web_height):
super().__init__()
self.loop = GLib.MainLoop()
self.pipeline = None
self.target_num_cams = target_num_cams
self.internal_width = internal_width
self.internal_height = internal_height
self.web_width = web_width
self.web_height = web_height
self.frame_buffers = [None] * self.target_num_cams
self.buffer_locks = [threading.Lock() for _ in range(self.target_num_cams)]
self.seg_frame_buffers = [None] * self.target_num_cams
self.seg_buffer_locks = [threading.Lock() for _ in range(self.target_num_cams)]
self.current_fps = 0.0 # Will still report overall FPS, not per stream
self.frame_count = 0
self.start_time = time.time()
# Cameras are expected to arrive pre-sorted by the caller (app.py): color camera first, then mono
self.sorted_cams = detected_cams
print(f"Sorted cameras for GStreamer: {self.sorted_cams}")
def run(self):
Gst.init(None)
self.build_pipeline()
if self.pipeline:
self.pipeline.set_state(Gst.State.PLAYING)
try:
self.loop.run()
except Exception as e:
print(f"Error: {e}")
finally:
self.pipeline.set_state(Gst.State.NULL)
else:
print("GStreamer pipeline failed to build.")
def on_new_seg_sample_factory(self, stream_id):
def on_new_sample(sink):
sample = sink.emit("pull-sample")
if not sample: return Gst.FlowReturn.ERROR
buffer = sample.get_buffer()
success, map_info = buffer.map(Gst.MapFlags.READ)
if not success: return Gst.FlowReturn.ERROR
with self.seg_buffer_locks[stream_id]:
self.seg_frame_buffers[stream_id] = bytes(map_info.data)
buffer.unmap(map_info)
return Gst.FlowReturn.OK
return on_new_sample
def on_new_sample_factory(self, stream_id):
def on_new_sample(sink):
sample = sink.emit("pull-sample")
if not sample: return Gst.FlowReturn.ERROR
# Update overall FPS counter from the first stream
if stream_id == 0:
self.frame_count += 1
if self.frame_count % 30 == 0:
elapsed = time.time() - self.start_time
self.current_fps = 30 / float(elapsed) if elapsed > 0 else 0
self.start_time = time.time()
buffer = sample.get_buffer()
success, map_info = buffer.map(Gst.MapFlags.READ)
if not success: return Gst.FlowReturn.ERROR
with self.buffer_locks[stream_id]:
self.frame_buffers[stream_id] = bytes(map_info.data)
buffer.unmap(map_info)
return Gst.FlowReturn.OK
return on_new_sample
def build_pipeline(self):
sources_and_sinks_str = []
for i in range(self.target_num_cams):
if i < len(self.sorted_cams):
cam_info = self.sorted_cams[i]
serial = cam_info['serial']
is_color = cam_info['is_color']
print(f"Setting up pipeline for Stream {i}: {serial} [{'Color' if is_color else 'Mono'}]")
base_settings = f"pylonsrc device-serial-number={serial} " \
"cam::TriggerMode=Off " \
"cam::AcquisitionFrameRateEnable=true cam::AcquisitionFrameRate=60.0 " \
"cam::DeviceLinkThroughputLimitMode=Off "
if is_color:
color_settings = f"{base_settings} " \
"cam::ExposureAuto=Off cam::ExposureTime=20000.0 " \
"cam::GainAuto=Continuous " \
"cam::Width=1920 cam::Height=1080 " \
"cam::PixelFormat=BayerBG8 "
source_and_sink = (
f"{color_settings} ! "
"bayer2rgb ! " # Debayer
"videoconvert ! "
"video/x-raw,format=RGBA ! "
"nvvideoconvert compute-hw=1 ! "
f"video/x-raw(memory:NVMM), format=NV12, width={self.internal_width}, height={self.internal_height}, framerate=60/1 ! "
f"nvjpegenc quality=60 ! "
f"appsink name=sink_{i} emit-signals=True sync=False max-buffers=1 drop=True"
)
else:
mono_settings = f"{base_settings} " \
"cam::ExposureAuto=Off cam::ExposureTime=20000.0 " \
"cam::GainAuto=Continuous "
if cam_info['binning']:
mono_settings += "cam::BinningHorizontal=2 cam::BinningVertical=2 "
source_and_sink = (
f"{mono_settings} ! "
"video/x-raw,format=GRAY8 ! "
"videoconvert ! "
f"tee name=t_{i} ! "
"queue ! "
"video/x-raw,format=I420 ! "
"nvvideoconvert compute-hw=1 ! "
f"video/x-raw(memory:NVMM), format=NV12, width={self.internal_width}, height={self.internal_height}, framerate=60/1 ! "
f"nvjpegenc quality=60 ! "
f"appsink name=sink_{i} emit-signals=True sync=False max-buffers=1 drop=True "
f"t_{i}. ! queue ! "
"videoconvert ! " # Placeholder for DeepStream segmentation
"jpegenc quality=60 ! " # Encode to JPEG so the /segmentation_feed MJPEG route serves valid frames
f"appsink name=seg_sink_{i} emit-signals=True sync=False max-buffers=1 drop=True"
)
else:
# Placeholder for disconnected cameras
source_and_sink = (
"videotestsrc pattern=black is-live=true ! "
f"videorate ! "
f"video/x-raw,width={self.internal_width},height={self.internal_height},format=I420,framerate=60/1 ! "
f"textoverlay text=\"DISCONNECTED\" valignment=center halignment=center font-desc=\"Sans, 48\" ! "
"nvvideoconvert compute-hw=1 ! "
f"video/x-raw(memory:NVMM),format=NV12,width={self.internal_width},height={self.internal_height},framerate=60/1 ! "
f"nvjpegenc quality=60 ! "
f"appsink name=sink_{i} emit-signals=True sync=False max-buffers=1 drop=True"
)
sources_and_sinks_str.append(source_and_sink)
pipeline_str = " ".join(sources_and_sinks_str)
print("\n--- GStreamer Pipeline String ---")
print(pipeline_str)
print("---------------------------------\n")
self.pipeline = Gst.parse_launch(pipeline_str)
if self.pipeline is None:
print("ERROR: GStreamer pipeline failed to parse. Check pipeline string for errors.")
return
for i in range(self.target_num_cams):
appsink = self.pipeline.get_by_name(f"sink_{i}")
if appsink:
# Set caps on appsink to ensure it's negotiating JPEG
appsink.set_property("caps", Gst.Caps.from_string("image/jpeg,width=(int)[1, 2147483647],height=(int)[1, 2147483647]"))
appsink.connect("new-sample", self.on_new_sample_factory(i))
else:
print(f"Error: appsink_{i} not found in pipeline.")
segsink = self.pipeline.get_by_name(f"seg_sink_{i}")
if segsink:
segsink.connect("new-sample", self.on_new_seg_sample_factory(i))
def get_frame_by_id(self, stream_id):
if 0 <= stream_id < self.target_num_cams:
with self.buffer_locks[stream_id]:
return self.frame_buffers[stream_id]
return None
def get_seg_frame_by_id(self, stream_id):
if 0 <= stream_id < self.target_num_cams:
with self.seg_buffer_locks[stream_id]:
return self.seg_frame_buffers[stream_id]
return None
def get_fps(self):
return round(self.current_fps, 1)
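Each appsink buffer above is a complete JPEG; the Flask routes wrap it in a multipart part whose boundary (`frame`) matches the one declared in the Response mimetype. A minimal sketch of that framing (the helper name is illustrative):

```python
def mjpeg_part(jpeg: bytes) -> bytes:
    # One part of a multipart/x-mixed-replace stream with boundary "frame",
    # as yielded by the video_feed/segmentation_feed generators.
    return b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpeg + b"\r\n"

part = mjpeg_part(b"\xff\xd8\xff\xd9")  # placeholder payload: JPEG SOI+EOI markers only
print(part.startswith(b"--frame\r\n"))  # True
```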


@@ -1,301 +0,0 @@
import sys
import subprocess
import threading
import time
import gc
import json
from flask import Flask, Response, render_template_string, jsonify
# --- CONFIGURATION ---
TARGET_NUM_CAMS = 3
DEFAULT_W = 1280
DEFAULT_H = 720
# --- PART 1: DETECTION (Unchanged) ---
def scan_connected_cameras():
print("--- Scanning for Basler Cameras ---")
detection_script = """
import sys
try:
from pypylon import pylon
tl_factory = pylon.TlFactory.GetInstance()
devices = tl_factory.EnumerateDevices()
if not devices:
print("NONE")
else:
results = []
for i in range(len(devices)):
cam = pylon.InstantCamera(tl_factory.CreateDevice(devices[i]))
cam.Open()
serial = cam.GetDeviceInfo().GetSerialNumber()
model = cam.GetDeviceInfo().GetModelName()
is_color = model.endswith("c") or "Color" in model
w = cam.Width.GetValue()
h = cam.Height.GetValue()
binning = 0
try:
cam.BinningHorizontal.Value = 2
cam.BinningVertical.Value = 2
cam.BinningHorizontal.Value = 1
cam.BinningVertical.Value = 1
binning = 1
except Exception: pass
current_fmt = cam.PixelFormat.GetValue()
cam.Close()
results.append(f"{serial}:{w}:{h}:{binning}:{1 if is_color else 0}:{model}:{current_fmt}")
print("|".join(results))
except Exception: print("NONE")
"""
try:
result = subprocess.run([sys.executable, "-c", detection_script], capture_output=True, text=True)
output = result.stdout.strip()
if "NONE" in output or not output: return []
camera_list = []
entries = output.split('|')
for entry in entries:
parts = entry.split(':')
camera_list.append({
"serial": parts[0], "width": int(parts[1]), "height": int(parts[2]),
"binning": (parts[3] == '1'), "is_color": (parts[4] == '1'), "model": parts[5]
})
return camera_list
except Exception: return []
DETECTED_CAMS = scan_connected_cameras()
ACTUAL_CAMS_COUNT = len(DETECTED_CAMS)
# --- RESOLUTION LOGIC ---
if ACTUAL_CAMS_COUNT > 0:
MASTER_W = DETECTED_CAMS[0]['width']
MASTER_H = DETECTED_CAMS[0]['height']
else:
MASTER_W = DEFAULT_W
MASTER_H = DEFAULT_H
INTERNAL_WIDTH = 1280
scale = INTERNAL_WIDTH / MASTER_W
INTERNAL_HEIGHT = int(MASTER_H * scale)
if INTERNAL_HEIGHT % 2 != 0: INTERNAL_HEIGHT += 1
WEB_WIDTH = 1280
total_source_width = INTERNAL_WIDTH * TARGET_NUM_CAMS
scale_tiled = WEB_WIDTH / total_source_width
WEB_HEIGHT = int(INTERNAL_HEIGHT * scale_tiled)
if WEB_HEIGHT % 2 != 0: WEB_HEIGHT += 1
print(f"LAYOUT: {TARGET_NUM_CAMS} Slots | Detected: {ACTUAL_CAMS_COUNT}")
for c in DETECTED_CAMS:
print(f" - Cam {c['serial']} ({c['model']}): {'COLOR' if c['is_color'] else 'MONO'}")
# --- FLASK & GSTREAMER ---
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib
app = Flask(__name__)
frame_buffer = None
buffer_lock = threading.Lock()
current_fps = 0.0
frame_count = 0
start_time = time.time()
class GStreamerPipeline(threading.Thread):
def __init__(self):
super().__init__()
self.loop = GLib.MainLoop()
self.pipeline = None
def run(self):
Gst.init(None)
self.build_pipeline()
self.pipeline.set_state(Gst.State.PLAYING)
try:
self.loop.run()
except Exception as e:
print(f"Error: {e}")
finally:
self.pipeline.set_state(Gst.State.NULL)
def on_new_sample(self, sink):
global frame_count, start_time, current_fps
sample = sink.emit("pull-sample")
if not sample: return Gst.FlowReturn.ERROR
frame_count += 1
if frame_count % 30 == 0:
elapsed = time.time() - start_time
current_fps = 30 / elapsed if elapsed > 0 else 0
start_time = time.time()
buffer = sample.get_buffer()
success, map_info = buffer.map(Gst.MapFlags.READ)
if not success: return Gst.FlowReturn.ERROR
global frame_buffer
with buffer_lock:
frame_buffer = bytes(map_info.data)
buffer.unmap(map_info)
return Gst.FlowReturn.OK
def build_pipeline(self):
sources_str = ""
for i in range(TARGET_NUM_CAMS):
if i < len(DETECTED_CAMS):
cam_info = DETECTED_CAMS[i]
serial = cam_info['serial']
is_color = cam_info['is_color']
print(f"Slot {i}: Linking {serial} [{'Color' if is_color else 'Mono'}]")
# --- 1. BASE SETTINGS (Common) ---
# We DISABLE Throughput Limit to allow high bandwidth
base_settings = (
f"pylonsrc device-serial-number={serial} "
"cam::TriggerMode=Off "
"cam::AcquisitionFrameRateEnable=true cam::AcquisitionFrameRate=60.0 "
"cam::DeviceLinkThroughputLimitMode=Off "
)
# Pre-scaler
pre_scale = (
"nvvideoconvert compute-hw=1 ! "
f"video/x-raw(memory:NVMM), format=NV12, width={INTERNAL_WIDTH}, height={INTERNAL_HEIGHT}, framerate=60/1 ! "
)
if is_color:
# --- 2A. COLOR SETTINGS (High Speed) ---
# FIX: Force ExposureTime=20000.0 (20ms) even for Color.
# If we leave it on Auto, it will slow down the Mono cameras.
# We rely on 'GainAuto' to make the image bright enough.
color_settings = (
f"{base_settings} "
"cam::ExposureAuto=Off cam::ExposureTime=20000.0 "
"cam::GainAuto=Continuous "
"cam::Width=1920 cam::Height=1080 cam::OffsetX=336 cam::OffsetY=484 "
"cam::PixelFormat=BayerBG8 " # Force Format
)
source = (
f"{color_settings} ! "
"bayer2rgb ! " # Debayer
"videoconvert ! "
"video/x-raw,format=RGBA ! "
"nvvideoconvert compute-hw=1 ! "
f"video/x-raw(memory:NVMM), format=NV12 ! "
f"{pre_scale}"
f"m.sink_{i} "
)
else:
# --- 2B. MONO SETTINGS (High Speed) ---
# Force ExposureTime=20000.0
mono_settings = (
f"{base_settings} "
"cam::ExposureAuto=Off cam::ExposureTime=20000.0 "
"cam::GainAuto=Continuous "
)
if cam_info['binning']:
mono_settings += "cam::BinningHorizontal=2 cam::BinningVertical=2 "
source = (
f"{mono_settings} ! "
"video/x-raw,format=GRAY8 ! "
"videoconvert ! "
"video/x-raw,format=I420 ! "
"nvvideoconvert compute-hw=1 ! "
f"video/x-raw(memory:NVMM), format=NV12 ! "
f"{pre_scale}"
f"m.sink_{i} "
)
else:
# --- DISCONNECTED PLACEHOLDER ---
source = (
f"videotestsrc pattern=black is-live=true ! "
f"videorate ! "
f"video/x-raw,width={INTERNAL_WIDTH},height={INTERNAL_HEIGHT},format=I420,framerate=60/1 ! "
f"textoverlay text=\"DISCONNECTED\" valignment=center halignment=center font-desc=\"Sans, 48\" ! "
"nvvideoconvert compute-hw=1 ! "
f"video/x-raw(memory:NVMM),format=NV12,width={INTERNAL_WIDTH},height={INTERNAL_HEIGHT},framerate=60/1 ! "
f"m.sink_{i} "
)
sources_str += source
# 3. MUXER & PROCESSING
processing = (
f"nvstreammux name=m batch-size={TARGET_NUM_CAMS} width={INTERNAL_WIDTH} height={INTERNAL_HEIGHT} "
f"live-source=1 batched-push-timeout=33000 ! "
f"nvmultistreamtiler width={WEB_WIDTH} height={WEB_HEIGHT} rows=1 columns={TARGET_NUM_CAMS} ! "
"nvvideoconvert compute-hw=1 ! "
"video/x-raw(memory:NVMM) ! "
"videorate drop-only=true ! "
"video/x-raw(memory:NVMM), framerate=30/1 ! "
f"nvjpegenc quality=60 ! "
"appsink name=sink emit-signals=True sync=False max-buffers=1 drop=True"
)
pipeline_str = f"{sources_str} {processing}"
print("Launching Optimized Pipeline (All Cams Forced to 20ms Shutter)...")
self.pipeline = Gst.parse_launch(pipeline_str)
appsink = self.pipeline.get_by_name("sink")
appsink.connect("new-sample", self.on_new_sample)
# --- FLASK ---
@app.route('/')
def index():
return render_template_string('''
<html>
<head>
<style>
body { background-color: #111; color: white; text-align: center; font-family: monospace; margin: 0; padding: 20px; }
.container { position: relative; display: inline-block; border: 3px solid #4CAF50; }
img { display: block; max-width: 100%; height: auto; }
.hud {
position: absolute; top: 10px; left: 10px;
background: rgba(0, 0, 0, 0.6); color: #00FF00;
padding: 5px 10px; font-weight: bold; pointer-events: none;
}
</style>
</head>
<body>
<h1>Basler Final Feed</h1>
<div class="container">
<div class="hud" id="fps-counter">FPS: --</div>
<img src="{{ url_for('video_feed') }}">
</div>
<script>
setInterval(function() {
fetch('/get_fps').then(r => r.json()).then(d => {
document.getElementById('fps-counter').innerText = "FPS: " + d.fps;
});
}, 500);
</script>
</body>
</html>
''')
@app.route('/video_feed')
def video_feed():
def generate():
count = 0
while True:
with buffer_lock:
if frame_buffer:
yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + frame_buffer + b'\r\n')
time.sleep(0.016)
count += 1
if count % 200 == 0: gc.collect()
return Response(generate(), mimetype='multipart/x-mixed-replace; boundary=frame')
@app.route('/get_fps')
def get_fps():
return jsonify(fps=round(current_fps, 1))
if __name__ == "__main__":
subprocess.run([sys.executable, "-c", "import gc; gc.collect()"])
gst_thread = GStreamerPipeline()
gst_thread.daemon = True
gst_thread.start()
app.run(host='0.0.0.0', port=5000, debug=False, threaded=True)
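The multipart framing emitted by the `/video_feed` generator above is easy to get subtly wrong; a minimal standalone sketch of the same framing (the helper name `mjpeg_part` is ours, not from this codebase):

```python
def mjpeg_part(jpeg_bytes: bytes) -> bytes:
    """Wrap a single JPEG frame in the multipart/x-mixed-replace
    framing used by the /video_feed generator: boundary line,
    Content-Type header, blank line, payload, trailing CRLF."""
    return (b'--frame\r\n'
            b'Content-Type: image/jpeg\r\n\r\n'
            + jpeg_bytes + b'\r\n')
```

The boundary token must match the `boundary=frame` declared in the Response mimetype, or browsers will render nothing.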


@@ -1,436 +0,0 @@
body {
background-color: #1a1a1a; /* Darker gray */
color: #ffffff;
font-family: Arial, sans-serif; /* Reverted to original font */
margin: 0;
padding-top: 20px; /* Added padding to top for overall spacing */
padding-bottom: 20px; /* Added padding to bottom for overall spacing */
box-sizing: border-box; /* Ensure padding is included in height */
display: flex; /* Changed to flex */
flex-direction: column; /* Set flex direction to column */
height: 100vh; /* Make body fill viewport height */
gap: 20px; /* Added gap between flex items (h1 and main-container) */
}
h1 {
color: #64ffda; /* Kept existing color */
text-align: center;
margin: 0; /* Removed explicit margins */
}
.main-container {
display: flex; /* Desktop default */
flex-direction: row;
flex-grow: 1; /* Make main-container fill remaining vertical space */
width: 100%;
/* Removed max-width to allow full screen utilization */
margin: 0 auto;
/* Removed height: calc(100vh - 80px); */
/* Removed padding: 20px; */
box-sizing: border-box; /* Ensure padding is included in element's total width and height */
gap: 20px; /* Added spacing between the two main sections */
}
/* Tabs are hidden by default on desktop, dynamically added for mobile */
.tabs {
display: none;
}
.content-section {
display: block; /* Desktop default */
padding: 5px; /* Reduced padding further */
overflow-y: auto;
}
/* --- Lamp View (Original styles adapted to dark theme) --- */
.lamp-view {
flex: 0 0 auto; /* Allow content to determine width, do not shrink */
/* Removed min-width as padding will affect total width */
padding-left: 2vw; /* Added 2vw padding on the left side */
padding-right: 2vw; /* Added 2vw padding on the right side */
border-right: 1px solid #333; /* Reintroduced the line separating the sections */
display: flex;
flex-direction: column;
align-items: center;
overflow-y: auto; /* Added to allow vertical scrolling if its content is too tall */
}
.lamp-view .container { /* Added for original styling effect */
display: flex;
flex-direction: column;
align-items: center;
position: relative;
width: 100%;
}
.lamp-view .main-content { /* Added for original styling effect */
display: flex;
flex-direction: column; /* Changed to column to stack matrix and controls vertically */
align-items: center; /* Changed to center to horizontally center its children */
gap: 20px; /* Adjusted gap for vertical stacking */
flex-wrap: wrap; /* Allow wrapping for responsiveness - not strictly needed for column but kept for safety */
justify-content: center; /* This will center the column within the lamp-view if its width allows */
width: 100%; /* Ensure main-content fills lamp-view's width */
}
.matrix-grid {
display: grid;
grid-template-columns: repeat(5, 70px); /* Fixed 5-column grid */
grid-template-rows: repeat(5, 70px);
gap: 20px;
padding: 20px;
background-color: #333;
border-radius: 10px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.2);
margin-bottom: 20px; /* Kept margin-bottom for spacing below grid */
/* Removed width: 100%; to let grid determine its own width */
box-sizing: border-box; /* Account for padding */
}
.lamp {
width: 70px;
height: 70px;
border-radius: 10%; /* Reverted to original square with rounded corners */
background-color: #000;
transition: box-shadow 0.2s, transform 0.1s;
cursor: pointer;
border: 2px solid transparent;
}
.lamp.on {
box-shadow: 0 0 15px currentColor, 0 0 25px currentColor;
}
.lamp.selected {
border: 2px solid #fff;
transform: scale(1.1);
}
.region-control {
margin-bottom: 20px; /* Kept margin-bottom for spacing below region-control */
/* Removed text-align: center; as parent's align-items will handle centering */
width: 470px; /* Explicitly set width to match matrix grid */
box-sizing: border-box; /* Ensure padding/border included in width */
}
.region-control select {
padding: 10px 15px;
font-size: 14px;
cursor: pointer;
border: 1px solid #64ffda; /* Adapted to theme */
border-radius: 5px;
background-color: #333; /* Adapted to theme */
color: #ffffff;
width: 100%; /* Fill parent's width */
box-sizing: border-box; /* Include padding in width */
}
.control-panel, .center-lamp-control {
background-color: #444; /* Adapted to theme */
padding: 20px;
border-radius: 10px;
width: 470px; /* Explicitly set width to match matrix grid */
margin-bottom: 20px; /* Kept margin-bottom for spacing below control panel */
box-sizing: border-box; /* Account for padding */
}
.control-panel.inactive-control {
background-color: #333;
filter: saturate(0.2);
}
.control-panel.inactive-control .slider-row {
pointer-events: none;
}
.control-panel h2, .center-lamp-control h2 {
color: #64ffda; /* Adapted to theme */
font-size: 16px;
margin-bottom: 10px;
text-align: center;
}
.slider-group {
width: 100%;
display: flex;
flex-direction: column;
gap: 5px;
}
.slider-row {
display: grid;
grid-template-columns: 150px 1fr 50px; /* Adjusted last column for number input buttons */
gap: 10px;
align-items: center;
}
.slider-group input[type="range"] {
-webkit-appearance: none;
height: 8px;
border-radius: 5px;
outline: none;
cursor: pointer;
background: #555; /* Adapted to theme */
}
.slider-group input[type="number"] {
-webkit-appearance: none; /* Hide default spinner for Chrome, Safari */
-moz-appearance: textfield; /* Hide default spinner for Firefox */
text-align: center; /* Center the number */
width: auto; /* Allow flex-grow to manage width */
font-size: 14px;
border: none; /* Will be part of the new control's border */
border-radius: 0; /* No radius on its own if part of a group */
padding: 5px;
background-color: #333; /* Adapted to theme */
color: #ffffff;
}
/* Specifically hide number input spinner buttons */
.slider-group input[type="number"]::-webkit-inner-spin-button,
.slider-group input[type="number"]::-webkit-outer-spin-button {
-webkit-appearance: none;
margin: 0;
}
.slider-group input[type="range"]::-webkit-slider-thumb {
-webkit-appearance: none;
height: 20px;
width: 20px;
border-radius: 50%;
background: #64ffda; /* Adapted to theme */
cursor: pointer;
box-shadow: 0 0 5px rgba(0,0,0,0.5);
margin-top: 2px;
}
.slider-group input[type="range"]::-webkit-slider-runnable-track {
height: 24px;
border-radius: 12px;
}
input.white-3000k::-webkit-slider-runnable-track { background: linear-gradient(to right, #000, #ffc080); }
input.white-6500k::-webkit-slider-runnable-track { background: linear-gradient(to right, #000, #c0e0ff); }
input.blue::-webkit-slider-runnable-track { background: linear-gradient(to right, #000, #00f); }
.slider-label {
color: #ffffff; /* Adapted to theme */
font-size: 14px;
text-align: left;
white-space: nowrap;
width: 120px;
}
.inactive-control .slider-label {
color: #888;
}
/* --- New styles for number input controls --- */
.number-input-controls {
display: flex;
align-items: stretch; /* Stretch children to fill container height */
gap: 2px; /* Small gap between buttons and input */
flex-shrink: 0; /* Prevent the control group from shrinking in the grid */
}
.number-input-controls input[type="number"] {
flex-grow: 1; /* Make it fill available space */
text-align: center;
border: 1px solid #64ffda; /* Border for the number input */
border-radius: 5px;
background-color: #333;
color: #ffffff;
min-width: 40px; /* Ensure it doesn't get too small */
}
.number-input-controls button {
width: 30px; /* Fixed width */
background-color: #64ffda; /* Accent color */
color: #1a1a1a; /* Dark text */
border: none;
border-radius: 5px;
font-size: 16px;
font-weight: bold;
cursor: pointer;
transition: background-color 0.2s;
display: flex; /* Center content */
justify-content: center;
align-items: center;
line-height: 1; /* Prevent extra height from line-height */
padding: 0; /* Remove default button padding */
}
.number-input-controls button:hover {
background-color: #4ed8bd; /* Lighter accent on hover */
}
.number-input-controls button:active {
background-color: #3cb89f; /* Darker accent on click */
}
/* Adjust slider-row grid to accommodate new number input controls */
.slider-row {
grid-template-columns: 150px 1fr 100px; /* Label, Range, NumberInputGroup(approx 30+30+2+40=102px) */
}
/* --- Camera View (Individual streams) --- */
.camera-view {
flex: 1; /* Allow it to grow and shrink to fill available space */
height: 100%; /* Added to make it fill the height of its parent */
overflow-y: auto; /* Added to allow vertical scrolling if content exceeds height */
/* Removed width: 75%; */
display: flex;
flex-direction: column;
align-items: center;
justify-content: flex-start; /* Align items to start for title */
position: relative;
gap: 10px; /* Space between elements */
}
.camera-streams-grid {
display: grid; /* Use CSS Grid */
/* Removed width: 100%; */
/* Removed height: 100%; */
flex-grow: 1; /* Allow it to grow to fill available space */
grid-template-rows: 1fr 2fr; /* 1/3 for color, 2/3 for monos */
grid-template-columns: 1fr; /* Single column for the main layout */
gap: 10px;
padding: 0 5px; /* Reduced horizontal padding */
}
.camera-color-row {
grid-row: 1;
grid-column: 1;
display: flex;
justify-content: center;
align-items: center;
overflow: hidden; /* Ensure content is clipped */
height: 100%; /* Explicitly set height to fill grid cell */
}
.camera-mono-row {
grid-row: 2;
grid-column: 1;
display: grid;
grid-template-columns: 1fr 1fr; /* Two columns for the mono cameras */
gap: 10px;
overflow: hidden; /* Ensure content is clipped */
height: 100%; /* Explicitly set height to fill grid cell */
}
.camera-container-individual {
position: relative;
border: 1px solid #333;
display: flex; /* Changed to flex for centering image */
justify-content: center;
align-items: center;
background-color: transparent;
aspect-ratio: var(--aspect-ratio); /* Keep aspect-ratio on container */
max-width: 100%; /* Re-added max-width */
/* Removed height: 100%; */
max-height: 100%; /* Ensure it doesn't exceed the boundaries of its parent */
overflow: hidden; /* Ensure image fits and is clipped if necessary */
box-sizing: border-box; /* Include padding and border in the element's total width and height */
border-radius: 10px; /* Added corner radius */
}
.camera-stream-individual {
max-width: 100%;
max-height: 100%;
object-fit: contain;
border-radius: 10px; /* Added corner radius to the image itself */
}
.camera-label {
position: absolute;
bottom: 5px;
left: 5px;
background: rgba(0, 0, 0, 0.6);
color: #fff;
padding: 3px 6px;
font-size: 12px;
border-radius: 3px;
}
.hud {
position: absolute; /* Kept existing position for FPS counter */
top: 10px;
right: 10px; /* Moved to right for better placement in new layout */
background: rgba(0, 0, 0, 0.6);
color: #00FF00;
padding: 5px 10px;
font-weight: bold;
pointer-events: none;
}
/* --- Responsive Design --- */
@media (max-width: 768px) {
.main-container {
flex-direction: column;
height: auto;
max-width: 100%;
}
.tabs {
display: flex; /* Show tabs on mobile */
justify-content: space-around;
background-color: #333;
padding: 10px 0;
}
.tab-link {
background-color: #333;
color: #ffffff;
border: none;
padding: 10px 15px;
cursor: pointer;
transition: background-color 0.3s;
}
.tab-link.active {
background-color: #64ffda;
color: #1a1a1a;
}
.lamp-view, .camera-view {
width: 100%;
border: none;
}
.content-section {
display: none; /* Hide tab content by default on mobile */
}
.content-section.active {
display: block; /* Show active tab content on mobile */
}
.lamp-view .main-content {
flex-direction: column;
align-items: center;
}
.control-panel, .center-lamp-control {
width: 100%;
max-width: none;
}
.camera-streams-grid {
/* On mobile, stack cameras */
grid-template-rows: auto; /* Revert to auto rows */
grid-template-columns: 1fr; /* Single column */
padding: 0;
}
.camera-color-row, .camera-mono-row {
grid-row: auto;
grid-column: auto;
display: flex; /* Change mono-row to flex for stacking vertically on mobile */
flex-direction: column;
gap: 10px;
}
.camera-container-individual {
width: 100%;
height: auto; /* Let aspect-ratio define height */
}
}


@@ -1,423 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<title>Pupilometer Unified Control</title>
<link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
</head>
<body>
<h1>Pupilometer Unified Control</h1>
<div class="main-container">
<!-- The content sections will be populated based on the view -->
<div id="lamp" class="content-section lamp-view">
<!-- Lamp Control UI goes here -->
<div class="container">
<h2>Lamp Matrix Control</h2>
<div class="region-control">
<label for="region-select">Select Region:</label>
<select id="region-select">
<option value="" disabled selected>-- Select a region --</option>
<option value="Upper">Upper</option>
<option value="Lower">Lower</option>
<option value="Left">Left</option>
<option value="Right">Right</option>
<option value="Inner ring">Inner ring</option>
<option value="Outer ring">Outer ring</option>
<option value="All">All</option>
</select>
</div>
<div class="main-content">
<div class="matrix-grid">
{% for row in range(5) %}
{% for col in range(5) %}
<div class="lamp" data-row="{{ row }}" data-col="{{ col }}" style="background-color: {{ matrix[row][col] }};"></div>
{% endfor %}
{% endfor %}
</div>
<div class="slider-controls">
<div class="center-lamp-control">
<h2>Center Lamp</h2>
<div class="slider-group center-slider-group">
<div class="slider-row">
<span class="slider-label">Warm White (3000K)</span>
<input type="range" id="center-ww-slider" min="0" max="255" value="0" class="white-3000k">
<div class="number-input-controls">
<button type="button" class="decrement-btn">-</button>
<input type="number" id="center-ww-number" min="0" max="255" value="0">
<button type="button" class="increment-btn">+</button>
</div>
</div>
<div class="slider-row">
<span class="slider-label">Cool White (6500K)</span>
<input type="range" id="center-cw-slider" min="0" max="255" value="0" class="white-6500k">
<div class="number-input-controls">
<button type="button" class="decrement-btn">-</button>
<input type="number" id="center-cw-number" min="0" max="255" value="0">
<button type="button" class="increment-btn">+</button>
</div>
</div>
<div class="slider-row">
<span class="slider-label">Blue</span>
<input type="range" id="center-blue-slider" min="0" max="255" value="0" class="blue">
<div class="number-input-controls">
<button type="button" class="decrement-btn">-</button>
<input type="number" id="center-blue-number" min="0" max="255" value="0">
<button type="button" class="increment-btn">+</button>
</div>
</div>
</div>
</div>
<div class="control-panel">
<h2>Selected Region</h2>
<div class="slider-group region-slider-group">
<div class="slider-row">
<span class="slider-label">Warm White (3000K)</span>
<input type="range" id="ww-slider" min="0" max="255" value="0" class="white-3000k">
<div class="number-input-controls">
<button type="button" class="decrement-btn">-</button>
<input type="number" id="ww-number" min="0" max="255" value="0">
<button type="button" class="increment-btn">+</button>
</div>
</div>
<div class="slider-row">
<span class="slider-label">Cool White (6500K)</span>
<input type="range" id="cw-slider" min="0" max="255" value="0" class="white-6500k">
<div class="number-input-controls">
<button type="button" class="decrement-btn">-</button>
<input type="number" id="cw-number" min="0" max="255" value="0">
<button type="button" class="increment-btn">+</button>
</div>
</div>
<div class="slider-row">
<span class="slider-label">Blue</span>
<input type="range" id="blue-slider" min="0" max="255" value="0" class="blue">
<div class="number-input-controls">
<button type="button" class="decrement-btn">-</button>
<input type="number" id="blue-number" min="0" max="255" value="0">
<button type="button" class="increment-btn">+</button>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div id="camera" class="content-section camera-view">
<h2>Basler Final Feed</h2>
<div class="camera-streams-grid">
<div class="camera-color-row">
{% for cam_index in range(detected_cams_info|length) %}
{% set cam_info = detected_cams_info[cam_index] %}
{% if cam_info.is_color %}
<div class="camera-container-individual {% if cam_info.is_color %}camera-color{% else %}camera-mono{% endif %}" style="--aspect-ratio: {{ cam_info.aspect_ratio }};">
<img src="{{ url_for('video_feed', stream_id=cam_index) }}?t={{ now().timestamp() }}" class="camera-stream-individual">
<div class="camera-label">{{ cam_info.model }} ({{ 'Color' if cam_info.is_color else 'Mono' }})</div>
</div>
{% endif %}
{% endfor %}
</div>
<div class="camera-mono-row">
{% for cam_index in range(detected_cams_info|length) %}
{% set cam_info = detected_cams_info[cam_index] %}
{% if not cam_info.is_color %}
<div class="camera-container-individual {% if cam_info.is_color %}camera-color{% else %}camera-mono{% endif %}" style="--aspect-ratio: {{ cam_info.aspect_ratio }};">
<img src="{{ url_for('video_feed', stream_id=cam_index) }}?t={{ now().timestamp() }}" class="camera-stream-individual">
<div class="camera-label">{{ cam_info.model }} ({{ 'Color' if cam_info.is_color else 'Mono' }})</div>
</div>
{% endif %}
{% endfor %}
</div>
</div>
<div class="hud" id="fps-counter">FPS: --</div>
</div>
<div id="segmentation" class="content-section camera-view">
<h2>Segmentation Feed</h2>
<div class="camera-streams-grid">
<div class="camera-mono-row">
{% for cam_index in range(detected_cams_info|length) %}
{% set cam_info = detected_cams_info[cam_index] %}
{% if not cam_info.is_color %}
<div class="camera-container-individual camera-mono" style="--aspect-ratio: {{ cam_info.aspect_ratio }};">
<img src="{{ url_for('segmentation_feed', stream_id=cam_index) }}?t={{ now().timestamp() }}" class="camera-stream-individual" id="segmentation-feed-{{- cam_index -}}">
<div class="camera-label">{{ cam_info.model }} (Segmentation)</div>
</div>
{% endif %}
{% endfor %}
</div>
</div>
</div>
</div>
<script>
// FPS counter
setInterval(function() {
fetch('/get_fps').then(r => r.json()).then(d => {
document.getElementById('fps-counter').innerText = "FPS: " + d.fps;
});
}, 500);
// State for the entire 5x5 matrix, storing {ww, cw, blue} for each lamp
var lampMatrixState = Array(5).fill(null).map(() => Array.from({length: 5}, () => ({ww: 0, cw: 0, blue: 0}))); // fresh object per cell, not a shared reference
var selectedLamps = [];
// Function to calculate a visual RGB color from the three light values using a proper additive model
function calculateRgb(ww, cw, blue) {
const warmWhiteR = 255, warmWhiteG = 192, warmWhiteB = 128;
const coolWhiteR = 192, coolWhiteG = 224, coolWhiteB = 255;
const blueR = 0, blueG = 0, blueB = 255;
var r = (ww / 255) * warmWhiteR + (cw / 255) * coolWhiteR + (blue / 255) * blueR;
var g = (ww / 255) * warmWhiteG + (cw / 255) * coolWhiteG + (blue / 255) * blueG;
var b = (ww / 255) * warmWhiteB + (cw / 255) * coolWhiteB + (blue / 255) * blueB;
r = Math.min(255, Math.round(r));
g = Math.min(255, Math.round(g));
b = Math.min(255, Math.round(b));
var toHex = (c) => ('0' + c.toString(16)).slice(-2);
return '#' + toHex(r) + toHex(g) + toHex(b);
}
function updateLampUI(lamp, colorState) {
var newColor = calculateRgb(colorState.ww, colorState.cw, colorState.blue);
var lampElement = $(`.lamp[data-row="${lamp.row}"][data-col="${lamp.col}"]`);
lampElement.css('background-color', newColor);
if (newColor === '#000000') {
lampElement.removeClass('on');
lampElement.css('box-shadow', `inset 0 0 5px rgba(0,0,0,0.5)`);
} else {
lampElement.addClass('on');
lampElement.css('box-shadow', `0 0 15px ${newColor}, 0 0 25px ${newColor}`);
}
}
function sendFullMatrixUpdate(lampsToUpdate, isRegionUpdate = false) {
var fullMatrixData = lampMatrixState.map(row => row.map(lamp => ({
ww: lamp.ww,
cw: lamp.cw,
blue: lamp.blue
})));
$.ajax({
url: '/set_matrix',
type: 'POST',
contentType: 'application/json',
data: JSON.stringify({ matrix: fullMatrixData }),
success: function(response) {
if (response.success) {
if (isRegionUpdate) {
for (var r = 0; r < 5; r++) {
for (var c = 0; c < 5; c++) {
updateLampUI({row: r, col: c}, lampMatrixState[r][c]);
}
}
} else {
lampsToUpdate.forEach(function(lamp) {
updateLampUI(lamp, lampMatrixState[lamp.row][lamp.col]);
});
}
}
}
});
}
function updateSliders(ww, cw, blue, prefix = '') {
$(`#${prefix}ww-slider`).val(ww);
$(`#${prefix}cw-slider`).val(cw);
$(`#${prefix}blue-slider`).val(blue);
$(`#${prefix}ww-number`).val(ww);
$(`#${prefix}cw-number`).val(cw);
$(`#${prefix}blue-number`).val(blue);
}
$(document).ready(function() {
var regionMaps = {
'Upper': [
{row: 0, col: 0}, {row: 0, col: 1}, {row: 0, col: 2}, {row: 0, col: 3}, {row: 0, col: 4},
{row: 1, col: 0}, {row: 1, col: 1}, {row: 1, col: 2}, {row: 1, col: 3}, {row: 1, col: 4},
],
'Lower': [
{row: 3, col: 0}, {row: 3, col: 1}, {row: 3, col: 2}, {row: 3, col: 3}, {row: 3, col: 4},
{row: 4, col: 0}, {row: 4, col: 1}, {row: 4, col: 2}, {row: 4, col: 3}, {row: 4, col: 4},
],
'Left': [
{row: 0, col: 0}, {row: 1, col: 0}, {row: 2, col: 0}, {row: 3, col: 0}, {row: 4, col: 0},
{row: 0, col: 1}, {row: 1, col: 1}, {row: 2, col: 1}, {row: 3, col: 1}, {row: 4, col: 1},
],
'Right': [
{row: 0, col: 3}, {row: 1, col: 3}, {row: 2, col: 3}, {row: 3, col: 3}, {row: 4, col: 3},
{row: 0, col: 4}, {row: 1, col: 4}, {row: 2, col: 4}, {row: 3, col: 4}, {row: 4, col: 4},
],
'Inner ring': [
{row: 1, col: 1}, {row: 1, col: 2}, {row: 1, col: 3},
{row: 2, col: 1}, {row: 2, col: 3},
{row: 3, col: 1}, {row: 3, col: 2}, {row: 3, col: 3}
],
'Outer ring': [
{row: 0, col: 0}, {row: 0, col: 1}, {row: 0, col: 2}, {row: 0, col: 3}, {row: 0, col: 4},
{row: 1, col: 0}, {row: 1, col: 4},
{row: 2, col: 0}, {row: 2, col: 4},
{row: 3, col: 0}, {row: 3, col: 4},
{row: 4, col: 0}, {row: 4, col: 1}, {row: 4, col: 2}, {row: 4, col: 3}, {row: 4, col: 4},
],
'All': [
{row: 0, col: 0}, {row: 0, col: 1}, {row: 0, col: 2}, {row: 0, col: 3}, {row: 0, col: 4},
{row: 1, col: 0}, {row: 1, col: 1}, {row: 1, col: 2}, {row: 1, col: 3}, {row: 1, col: 4},
{row: 2, col: 0}, {row: 2, col: 1}, {row: 2, col: 3}, {row: 2, col: 4},
{row: 3, col: 0}, {row: 3, col: 1}, {row: 3, col: 2}, {row: 3, col: 3}, {row: 3, col: 4},
{row: 4, col: 0}, {row: 4, col: 1}, {row: 4, col: 2}, {row: 4, col: 3}, {row: 4, col: 4},
]
};
var allRegionWithoutCenter = regionMaps['All'].filter(lamp => !(lamp.row === 2 && lamp.col === 2));
regionMaps['All'] = allRegionWithoutCenter;
$('.lamp').each(function() {
var row = $(this).data('row');
var col = $(this).data('col');
var color = $(this).css('background-color');
var rgb = color.match(/\d+/g);
lampMatrixState[row][col] = {
ww: rgb[0], cw: rgb[1], blue: rgb[2]
};
});
$('#region-select').on('change', function() {
var region = $(this).val();
if (region) {
$('.control-panel').removeClass('inactive-control');
} else {
$('.control-panel').addClass('inactive-control');
}
var newlySelectedLamps = regionMaps[region];
$('.lamp').removeClass('selected');
var ww = parseInt($('#ww-slider').val());
var cw = parseInt($('#cw-slider').val());
var blue = parseInt($('#blue-slider').val());
var lampsToUpdate = [];
var centerLampState = lampMatrixState[2][2];
lampMatrixState = Array(5).fill(null).map(() => Array.from({length: 5}, () => ({ww: 0, cw: 0, blue: 0}))); // fresh object per cell, not a shared reference
lampMatrixState[2][2] = centerLampState;
selectedLamps = newlySelectedLamps;
selectedLamps.forEach(function(lamp) {
$(`.lamp[data-row="${lamp.row}"][data-col="${lamp.col}"]`).addClass('selected');
lampMatrixState[lamp.row][lamp.col] = {ww: ww, cw: cw, blue: blue};
});
if (selectedLamps.length > 0) {
var firstLamp = selectedLamps[0];
var firstLampState = lampMatrixState[firstLamp.row][firstLamp.col];
updateSliders(firstLampState.ww, firstLampState.cw, firstLampState.blue, '');
}
sendFullMatrixUpdate(lampsToUpdate, true);
});
$('.region-slider-group input').on('input', function() {
if (selectedLamps.length === 0) return;
var target = $(this);
var originalVal = target.val();
var value = parseInt(originalVal, 10);
if (isNaN(value) || value < 0) { value = 0; }
if (value > 255) { value = 255; }
if (target.is('[type="number"]') && value.toString() !== originalVal) {
target.val(value);
}
var id = target.attr('id');
if (target.is('[type="range"]')) {
$(`#${id.replace('-slider', '-number')}`).val(value);
} else if (target.is('[type="number"]')) {
$(`#${id.replace('-number', '-slider')}`).val(value);
}
var ww = parseInt($('#ww-slider').val());
var cw = parseInt($('#cw-slider').val());
var blue = parseInt($('#blue-slider').val());
var lampsToUpdate = [];
selectedLamps.forEach(function(lamp) {
lampMatrixState[lamp.row][lamp.col] = {ww: ww, cw: cw, blue: blue};
lampsToUpdate.push(lamp);
});
sendFullMatrixUpdate(lampsToUpdate);
});
$('.center-slider-group input').on('input', function() {
var target = $(this);
var originalVal = target.val();
var value = parseInt(originalVal, 10);
if (isNaN(value) || value < 0) { value = 0; }
if (value > 255) { value = 255; }
if (target.is('[type="number"]') && value.toString() !== originalVal) {
target.val(value);
}
var id = target.attr('id');
if (target.is('[type="range"]')) {
$(`#${id.replace('-slider', '-number')}`).val(value);
} else if (target.is('[type="number"]')) {
$(`#${id.replace('-number', '-slider')}`).val(value);
}
var ww = parseInt($('#center-ww-slider').val());
var cw = parseInt($('#center-cw-slider').val());
var blue = parseInt($('#center-blue-slider').val());
var centerLamp = {row: 2, col: 2};
lampMatrixState[centerLamp.row][centerLamp.col] = {ww: ww, cw: cw, blue: blue};
sendFullMatrixUpdate([centerLamp]);
});
// Handle increment/decrement buttons
$('.number-input-controls button').on('click', function() {
var btn = $(this);
var numberInput = btn.siblings('input[type="number"]');
var currentVal = parseInt(numberInput.val());
var min = parseInt(numberInput.attr('min'));
var max = parseInt(numberInput.attr('max'));
if (btn.hasClass('decrement-btn')) {
currentVal = Math.max(min, currentVal - 1);
} else if (btn.hasClass('increment-btn')) {
currentVal = Math.min(max, currentVal + 1);
}
numberInput.val(currentVal);
// Trigger the 'input' event to propagate the change to the slider and matrix update logic
numberInput.trigger('input');
});
if (!$('#region-select').val()) {
$('.control-panel').addClass('inactive-control');
}
// Mobile tab handling
if (window.innerWidth <= 768) {
// Dynamically add tab buttons
const tabsDiv = $('<div class="tabs"></div>');
tabsDiv.append('<button class="tab-link" data-tab="camera">Camera</button>');
tabsDiv.append('<button class="tab-link" data-tab="lamp">Lamp Control</button>');
// Prepend tabsDiv to .main-container
$('.main-container').prepend(tabsDiv);
// Hide all content sections initially
$('.content-section').hide();
// Show the camera section by default
$('#camera').show();
// Make the Camera tab active
$('.tab-link[data-tab="camera"]').addClass('active');
// Add click handlers for tab buttons
$('.tab-link').on('click', function() {
$('.tab-link').removeClass('active');
$(this).addClass('active');
$('.content-section').hide();
$(`#${$(this).data('tab')}`).show();
});
}
});
</script>
</body>
</html>
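The additive color model in `calculateRgb` above can be mirrored in Python for offline checks of lamp colors (a sketch; `calculate_rgb` and the tint tuples are transcribed from the JS, not an API of this repo):

```python
def calculate_rgb(ww: int, cw: int, blue: int) -> str:
    """Additive mix of warm-white, cool-white and blue channels
    (0-255 each) into a #rrggbb hex color, mirroring the JS
    calculateRgb used to paint the lamp matrix."""
    warm = (255, 192, 128)   # warm white (3000K) tint
    cool = (192, 224, 255)   # cool white (6500K) tint
    blu = (0, 0, 255)
    channels = []
    for w_c, c_c, b_c in zip(warm, cool, blu):
        v = (ww / 255) * w_c + (cw / 255) * c_c + (blue / 255) * b_c
        channels.append(min(255, round(v)))  # clamp after additive mix
    return '#' + ''.join(f'{c:02x}' for c in channels)
```

E.g. a full warm-white lamp (255, 0, 0) maps to the warm tint `#ffc080`.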


@@ -1,58 +0,0 @@
from pypylon import pylon
import time
import sys
try:
# Get the Transport Layer Factory
tl_factory = pylon.TlFactory.GetInstance()
devices = tl_factory.EnumerateDevices()
if not devices:
print("No cameras found!")
sys.exit(1)
print(f"Found {len(devices)} cameras. Checking Camera 1...")
# Connect to first camera
cam = pylon.InstantCamera(tl_factory.CreateDevice(devices[0]))
cam.Open()
# 1. Reset to Defaults
print("Resetting to Defaults...")
cam.UserSetSelector.Value = "Default"
cam.UserSetLoad.Execute()
# 2. Enable Auto Exposure/Gain
print("Enabling Auto Exposure & Gain...")
cam.ExposureAuto.Value = "Continuous"
cam.GainAuto.Value = "Continuous"
# 3. Wait for it to settle (Camera adjusts to light)
print("Waiting 3 seconds for auto-adjustment...")
for i in range(3):
print(f"{3-i}...")
time.sleep(1)
# 4. READ VALUES
current_exposure = cam.ExposureTime.GetValue() # In Microseconds (us)
current_fps_readout = cam.ResultingFrameRate.GetValue()
print("-" * 30)
print(f"REPORT FOR SERIAL: {cam.GetDeviceInfo().GetSerialNumber()}")
print("-" * 30)
print(f"Current Exposure Time: {current_exposure:.1f} us ({current_exposure/1000:.1f} ms)")
print(f"Theoretical Max FPS: {1000000 / current_exposure:.1f} FPS")
print(f"Camera Internal FPS: {current_fps_readout:.1f} FPS")
print("-" * 30)
if current_exposure > 33000:
print("⚠️ PROBLEM FOUND: Exposure is > 33ms.")
print(" This physically prevents the camera from reaching 30 FPS.")
print(" Solution: Add more light or limit AutoExposureUpperLimit.")
else:
print("✅ Exposure looks fast enough for 30 FPS.")
cam.Close()
except Exception as e:
print(f"Error: {e}")
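The 33 ms warning above follows from the hard bound exposure time places on frame rate; as a standalone helper (the name `max_fps_for_exposure` is ours):

```python
def max_fps_for_exposure(exposure_us: float) -> float:
    """Upper bound on achievable frame rate for a given exposure time in
    microseconds: a new frame cannot start before the current exposure
    ends, so fps <= 1e6 / exposure_us (sensor readout overhead ignored)."""
    return 1_000_000 / exposure_us
```

At the forced 20 ms shutter used by the pipeline, the bound is 50 FPS, comfortably above the 30 FPS web target.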


@@ -1,16 +0,0 @@
#!/bin/bash
# Test the main page
echo "Testing main page..."
curl -s -o /dev/null -w "%{http_code}" http://localhost:5000/
echo ""
# Test the get_fps endpoint
echo "Testing get_fps endpoint..."
curl -s -o /dev/null -w "%{http_code}" http://localhost:5000/get_fps
echo ""
# Test the set_matrix endpoint
echo "Testing set_matrix endpoint..."
curl -s -o /dev/null -w "%{http_code}" -X POST -H "Content-Type: application/json" -d '{"matrix": [[{"ww":0,"cw":0,"blue":0}]]}' http://localhost:5000/set_matrix
echo ""


@@ -1,52 +0,0 @@
import re
from playwright.sync_api import Page, expect


def test_ui_elements_mobile(page: Page):
    page.set_viewport_size({"width": 375, "height": 667})
    page.goto("http://localhost:5000/")

    # Check for main title
    expect(page).to_have_title("Pupilometer Unified Control")

    # Wait for dynamically added tabs to be attached to the DOM
    page.wait_for_selector(".tabs", state="attached")

    # Check for dynamically added tabs visibility on mobile
    expect(page.locator(".tabs")).to_be_visible()
    expect(page.locator(".tab-link[data-tab='camera']")).to_be_visible()
    expect(page.locator(".tab-link[data-tab='lamp']")).to_be_visible()

    # Check for camera view content
    expect(page.locator("#camera h2")).to_contain_text("Basler Final Feed")
    expect(page.locator("#fps-counter")).to_be_visible()
    expect(page.locator("#camera .camera-streams-grid .camera-container-individual")).to_have_count(3)
    expect(page.locator(".camera-streams-grid .camera-label").first).to_be_visible()

    # Check for lamp view content
    page.locator(".tab-link[data-tab='lamp']").click()
    expect(page.locator("#lamp .container > h2")).to_contain_text("Lamp Matrix Control")
    expect(page.locator("#region-select")).to_be_visible()
    expect(page.locator(".center-lamp-control h2")).to_contain_text("Center Lamp")
    expect(page.locator(".control-panel h2")).to_contain_text("Selected Region")


def test_ui_elements_desktop(page: Page):
    page.set_viewport_size({"width": 1280, "height": 720})
    page.goto("http://localhost:5000/")

    # Check for main title
    expect(page).to_have_title("Pupilometer Unified Control")

    # Check that tabs are NOT visible on desktop
    expect(page.locator(".tabs")).not_to_be_visible()

    # Check for camera view content
    expect(page.locator("#camera h2")).to_contain_text("Basler Final Feed")
    expect(page.locator("#fps-counter")).to_be_visible()
    expect(page.locator("#camera .camera-streams-grid .camera-container-individual")).to_have_count(3)
    expect(page.locator(".camera-streams-grid .camera-label").first).to_be_visible()

    # Check for lamp view content
    expect(page.locator("#lamp .container > h2")).to_contain_text("Lamp Matrix Control")
    expect(page.locator("#region-select")).to_be_visible()
    expect(page.locator(".center-lamp-control h2")).to_contain_text("Center Lamp")
    expect(page.locator(".control-panel h2")).to_contain_text("Selected Region")


@ -1,126 +0,0 @@
import re
from playwright.sync_api import Page, expect


def test_visual_regression_desktop(page: Page):
    page.set_viewport_size({"width": 1280, "height": 720})
    page.goto("http://localhost:5000/")
    page.screenshot(path="src/unified_web_ui/tests/screenshots/screenshot_desktop.png")


def test_visual_regression_tablet(page: Page):
    page.set_viewport_size({"width": 768, "height": 1024})  # Common tablet size
    page.goto("http://localhost:5000/")
    page.screenshot(path="src/unified_web_ui/tests/screenshots/screenshot_tablet.png")


def test_visual_regression_mobile(page: Page):
    page.set_viewport_size({"width": 375, "height": 667})
    page.goto("http://localhost:5000/")
    page.screenshot(path="src/unified_web_ui/tests/screenshots/screenshot_mobile.png")


def test_camera_layout_dimensions(page: Page):
    page.set_viewport_size({"width": 1280, "height": 720})
    page.goto("http://localhost:5000/")

    # Wait for camera streams to load
    page.wait_for_selector('img[src*="video_feed"]')

    # Get bounding boxes for the key layout elements
    camera_streams_grid_box = page.locator('#camera .camera-streams-grid').bounding_box()
    color_camera_row_box = page.locator('#camera .camera-color-row').bounding_box()
    mono_camera_row_box = page.locator('#camera .camera-mono-row').bounding_box()

    assert camera_streams_grid_box is not None, "Camera streams grid not found"
    assert color_camera_row_box is not None, "Color camera row not found"
    assert mono_camera_row_box is not None, "Mono camera row not found"

    # Define a small tolerance for floating point comparisons
    tolerance = 7  # pixels, increased slightly for robust testing across browsers/OS

    # 1. Check vertical positioning and 1/3, 2/3 height distribution.
    # The grid's 1fr 2fr distribution applies to the space *after* accounting for gaps.
    grid_internal_gap_height = 10  # Defined in the .camera-streams-grid gap property
    total_distributable_height = camera_streams_grid_box['height'] - grid_internal_gap_height
    expected_color_row_height = total_distributable_height / 3
    expected_mono_row_height = total_distributable_height * 2 / 3

    assert abs(color_camera_row_box['height'] - expected_color_row_height) < tolerance, \
        f"Color camera row height is {color_camera_row_box['height']}, expected {expected_color_row_height} (1/3 of distributable height)"
    assert abs(mono_camera_row_box['height'] - expected_mono_row_height) < tolerance, \
        f"Mono camera row height is {mono_camera_row_box['height']}, expected {expected_mono_row_height} (2/3 of distributable height)"

    # Check vertical stacking - top of mono row should be roughly at bottom of color row + gap
    assert abs(mono_camera_row_box['y'] - (color_camera_row_box['y'] + color_camera_row_box['height'] + grid_internal_gap_height)) < tolerance, \
        "Mono camera row is not positioned correctly below the color camera row with the expected gap."

    # 2. Check horizontal padding (5px on each side of .camera-streams-grid)
    grid_left_edge = camera_streams_grid_box['x']
    grid_right_edge = camera_streams_grid_box['x'] + camera_streams_grid_box['width']
    color_row_left_edge = color_camera_row_box['x']
    color_row_right_edge = color_camera_row_box['x'] + color_camera_row_box['width']
    mono_row_left_edge = mono_camera_row_box['x']
    mono_row_right_edge = mono_camera_row_box['x'] + mono_camera_row_box['width']

    # The content rows should align with the grid's padding
    assert abs(color_row_left_edge - (grid_left_edge + 5)) < tolerance, \
        f"Color camera row left edge is {color_row_left_edge}, expected {grid_left_edge + 5} (grid left + 5px padding)"
    assert abs(grid_right_edge - color_row_right_edge - 5) < tolerance, \
        f"Color camera row right edge is {color_row_right_edge}, expected {grid_right_edge - 5} (grid right - 5px padding)"
    assert abs(mono_row_left_edge - (grid_left_edge + 5)) < tolerance, \
        f"Mono camera row left edge is {mono_row_left_edge}, expected {grid_left_edge + 5} (grid left + 5px padding)"
    assert abs(grid_right_edge - mono_row_right_edge - 5) < tolerance, \
        f"Mono camera row right edge is {mono_row_right_edge}, expected {grid_right_edge - 5} (grid right - 5px padding)"

    # 3. Verify no "behind" effect - the mono row's top must sit below the color row's bottom.
    # This is implicitly covered by the vertical stacking check, but is made explicit for clarity.
    assert mono_camera_row_box['y'] > color_camera_row_box['y'] + color_camera_row_box['height'], \
        "Mono camera row is visually overlapping the color camera row."

    # 4. Check that individual camera containers tightly wrap their images
    color_cam_container = page.locator('.camera-color-row .camera-container-individual')
    color_cam_img = color_cam_container.locator('.camera-stream-individual')
    if color_cam_container.count() > 0:
        color_container_box = color_cam_container.bounding_box()
        color_img_box = color_cam_img.bounding_box()
        assert color_container_box is not None, "Color camera container not found for image fit check"
        assert color_img_box is not None, "Color camera image not found for image fit check"
        assert abs(color_container_box['width'] - color_img_box['width']) < tolerance, \
            f"Color camera container width ({color_container_box['width']}) does not match image width ({color_img_box['width']})"
        assert abs(color_container_box['height'] - color_img_box['height']) < tolerance, \
            f"Color camera container height ({color_container_box['height']}) does not match image height ({color_img_box['height']})"

    mono_cam_containers = page.locator('#camera .camera-mono-row .camera-container-individual').all()
    for i, mono_cam_container in enumerate(mono_cam_containers):
        mono_cam_img = mono_cam_container.locator('.camera-stream-individual')
        mono_container_box = mono_cam_container.bounding_box()
        mono_img_box = mono_cam_img.bounding_box()
        assert mono_container_box is not None, f"Mono camera container {i} not found for image fit check"
        assert mono_img_box is not None, f"Mono camera image {i} not found for image fit check"
        assert abs(mono_container_box['width'] - mono_img_box['width']) < tolerance, \
            f"Mono camera container {i} width ({mono_container_box['width']}) does not match image width ({mono_img_box['width']})"
        assert abs(mono_container_box['height'] - mono_img_box['height']) < tolerance, \
            f"Mono camera container {i} height ({mono_container_box['height']}) does not match image height ({mono_img_box['height']})"

    # Optionally, check that individual mono cameras are side-by-side within their row
    mono_cams = page.locator('#camera .camera-mono').all()
    assert len(mono_cams) == 2, "Expected two mono cameras"
    if len(mono_cams) == 2:
        mono_cam_1_box = mono_cams[0].bounding_box()
        mono_cam_2_box = mono_cams[1].bounding_box()
        assert mono_cam_1_box is not None and mono_cam_2_box is not None, "Mono camera boxes not found"

        # Check horizontal alignment (both rows should share the same y coordinate)
        assert abs(mono_cam_1_box['y'] - mono_cam_2_box['y']) < tolerance, \
            "Mono cameras are not horizontally aligned."

        # Check side-by-side positioning (cam 2 should be to the right of cam 1)
        assert mono_cam_2_box['x'] > mono_cam_1_box['x'] + mono_cam_1_box['width'] - tolerance, \
            "Mono cameras are not side-by-side as expected."
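
The 1fr:2fr expectation used in the layout assertions above reduces to a small pure function: subtract the internal gap from the grid height, then split the remainder one third / two thirds. A sketch of that computation (the helper name is hypothetical; the gap value mirrors the `.camera-streams-grid` CSS assumed by the test):

```python
def expected_row_heights(grid_height: float, gap: float = 10) -> tuple[float, float]:
    """Split the grid height remaining after the internal gap in a 1fr:2fr
    ratio, as a CSS grid with `grid-template-rows: 1fr 2fr; gap: 10px` does."""
    distributable = grid_height - gap
    return distributable / 3, distributable * 2 / 3


# e.g. a 730px grid with a 10px gap -> 240px color row, 480px mono row
```

Computing the expectation once and comparing both rows against it (within `tolerance`) keeps the test honest if the grid height ever changes.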


@ -1,18 +0,0 @@
import cv2
import time
import pytest
from playwright.sync_api import Page, expect


def test_segmentation_output(page: Page):
    page.goto("http://localhost:5000/")

    # Check for the presence of a segmentation feed for the first mono camera (stream 1)
    segmentation_feed = page.locator("#segmentation-feed-1")
    expect(segmentation_feed).to_be_visible()

    # Verify that the segmentation feed is updating
    initial_src = segmentation_feed.get_attribute("src")
    page.reload()
    page.wait_for_selector("#segmentation-feed-1")
    new_src = segmentation_feed.get_attribute("src")
    assert initial_src != new_src, "Segmentation feed is not updating"