
Overview

The --test-image flag allows you to test person detection on a static image file before processing live RTSP streams. This is essential for:
  • Validating detection setup – Verify your model files are loaded correctly
  • Tuning parameters – Find optimal confidence and area thresholds
  • Debugging issues – Understand why detections may be failing
  • Testing hardware – Confirm GPU acceleration is working
Test image mode runs a single detection pass and exits. It does not connect to any RTSP streams.

Basic Usage

uv run main.py --test-image photo.jpg --save image
Output:
Config loaded from: config.cfg
  model_dir   = model
  output_dir  = output
Loading person detection model...
Model loaded: YOLOv4
Confidence threshold: 0.5
Person area threshold: 1000 pixels
Testing with image: photo.jpg
Persons detected: 2
Bounding boxes: [(145, 89, 234, 456, 0.87), (456, 102, 189, 423, 0.72)]
Annotated result saved to: test_result_1741528222.jpg

Step-by-Step Guide

Step 1: Prepare a test image

Use any JPEG or PNG image containing people. For best results:
  • Resolution: Similar to your RTSP stream resolution
  • Lighting: Similar conditions to your deployment environment
  • Distance: People at similar distances as your camera setup
Example test images:
# Download a sample image
wget https://example.com/sample-crowd.jpg -O test.jpg

# Or use a frame from your stream
ffmpeg -i rtsp://camera.local/stream -frames:v 1 test.jpg
Step 2: Run detection with default settings

Test with default configuration:
uv run main.py --test-image test.jpg --save image
This uses default thresholds from config.cfg:
  • confidence_threshold = 0.5
  • person_area_threshold = 1000
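If you want to double-check which defaults are in play, a config in this shape can be read with Python's configparser. This is a standalone sketch — the [detection] section and key names are taken from the production.cfg example later on this page and may differ from how main.py actually parses its config:

```python
import configparser

# Parse a config in the same shape as config.cfg / production.cfg.
# The section and key names are assumptions based on this page's examples.
cfg = configparser.ConfigParser(inline_comment_prefixes=("#",))
cfg.read_string("""
[detection]
confidence_threshold = 0.5
person_area_threshold = 1000
""")

confidence = cfg.getfloat("detection", "confidence_threshold")
area = cfg.getint("detection", "person_area_threshold")
print(confidence, area)  # -> 0.5 1000
```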
Step 3: Review the annotated output

Open the generated test_result_*.jpg file:
  • Green bounding boxes around detected persons
  • Confidence scores displayed above each box (e.g., “Person 1: 0.87”)
  • Box dimensions correspond to bounding box size in pixels
Check the console output:
Persons detected: 2
Bounding boxes: [(145, 89, 234, 456, 0.87), (456, 102, 189, 423, 0.72)]
Format: (x, y, width, height, confidence)
Step 4: Adjust thresholds if needed

If results aren’t as expected, tune parameters:
# Lower confidence to detect more persons
uv run main.py --test-image test.jpg --save image --confidence 0.3

# Higher confidence for fewer false positives
uv run main.py --test-image test.jpg --save image --confidence 0.7

# Filter small detections
uv run main.py --test-image test.jpg --save image --area-threshold 5000
Step 5: Test with your production settings

Once you find optimal parameters, test them:
uv run main.py --test-image test.jpg --save image \
  --confidence 0.6 \
  --area-threshold 2500 \
  --config production.cfg
If results look good, use the same settings for live streams.

Understanding the Output

Console Output Breakdown

Persons detected: 2
Bounding boxes: [(145, 89, 234, 456, 0.87), (456, 102, 189, 423, 0.72)]
Annotated result saved to: test_result_1741528222.jpg
Bounding box format: (x, y, width, height, confidence)
| Field | Description | Example |
| --- | --- | --- |
| x | Left edge position (pixels) | 145 |
| y | Top edge position (pixels) | 89 |
| width | Box width (pixels) | 234 |
| height | Box height (pixels) | 456 |
| confidence | Detection confidence (0.0–1.0) | 0.87 (87%) |
Box area: width × height = 234 × 456 = 106,704 pixels
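To check by hand whether a detection would survive the area filter, multiply the width and height fields of the console tuple — a standalone helper, not the tool's own code:

```python
def box_area(box):
    """Area in pixels of an (x, y, width, height, confidence) tuple."""
    x, y, w, h, conf = box
    return w * h

first = (145, 89, 234, 456, 0.87)
print(box_area(first))          # -> 106704
print(box_area(first) >= 1000)  # survives the default area threshold -> True
```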

Annotated Image

The output image test_result_*.jpg shows:
  • Green rectangles – Bounding boxes around detected persons
  • Labels – “Person 1: 0.87”, “Person 2: 0.72”, etc.
  • Original image – Background preserved, detections overlaid
The annotated image is saved with a Unix timestamp in the filename (e.g., test_result_1741528222.jpg) to prevent overwriting previous test runs.
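The naming scheme is easy to reproduce — a one-line sketch of the presumed pattern, not necessarily the tool's exact code:

```python
import time

# Unix-timestamped output name, e.g. test_result_1741528222.jpg
filename = f"test_result_{int(time.time())}.jpg"
print(filename)
```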

Tuning Detection Parameters

Confidence Threshold Examples

uv run main.py --test-image test.jpg --save image --confidence 0.3
Result:
Persons detected: 5
Bounding boxes: [(145, 89, 234, 456, 0.87), (456, 102, 189, 423, 0.72), 
                 (234, 156, 123, 289, 0.45), (678, 234, 98, 234, 0.38), 
                 (890, 123, 156, 345, 0.32)]
Interpretation: More detections, including lower-confidence ones. May include false positives (non-person objects).
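You can reproduce the effect of both thresholds offline by filtering the reported tuples — a standalone illustration using the five sample boxes above (the filtering rule is an assumption based on this page's description, not the detector's actual code path):

```python
boxes = [
    (145, 89, 234, 456, 0.87), (456, 102, 189, 423, 0.72),
    (234, 156, 123, 289, 0.45), (678, 234, 98, 234, 0.38),
    (890, 123, 156, 345, 0.32),
]

def keep(boxes, confidence=0.5, area_threshold=1000):
    """Mimic the confidence + area filtering described on this page."""
    return [b for b in boxes
            if b[4] >= confidence and b[2] * b[3] >= area_threshold]

print(len(keep(boxes, confidence=0.3)))  # -> 5 (all boxes survive)
print(len(keep(boxes, confidence=0.5)))  # -> 2 (only the two strong detections)
```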

Area Threshold Examples

uv run main.py --test-image test.jpg --save image --area-threshold 500
Captures small bounding boxes (distant persons, children, partial views).

Troubleshooting Detection Issues

No persons detected

Possible causes:
  1. Thresholds too high – Try lowering them:
    uv run main.py --test-image test.jpg --save image \
      --confidence 0.2 \
      --area-threshold 100
    
  2. Model files missing – Check for HOG fallback warning:
    Warning: YOLO weights not found. Using OpenCV's built-in HOG person detector as fallback.
    
    HOG is less accurate than YOLO. Download proper model files.
  3. Image issues – Verify image loaded correctly:
    file test.jpg  # Should show: JPEG image data
    
  4. Poor image quality – Try with a clearer image or better lighting.
Too many false positives

Solution: Increase thresholds:
uv run main.py --test-image test.jpg --save image \
  --confidence 0.7 \
  --area-threshold 5000
Also check the annotated image – false positives may be objects that resemble humans (mannequins, posters, etc.).
Some persons not detected

Possible causes:
  1. Confidence too low – Person detected but below threshold
  2. Area too small – Person detected but filtered by area threshold
  3. Model limitations – YOLO/HOG struggle with certain poses or occlusions
Solution: Lower both thresholds:
uv run main.py --test-image test.jpg --save image \
  --confidence 0.3 \
  --area-threshold 500
Bounding boxes in the wrong places

This shouldn’t happen with valid test images. If you see incorrect boxes:
  1. Check image format – Use standard JPEG/PNG
  2. Verify model files – Re-download YOLO weights
  3. Check OpenCV version – Ensure opencv-contrib-python is installed
python -c "import cv2; print(cv2.__version__)"
Image file not found

Output:
Error: Image file test.jpg not found
Solution: Provide the full or correct relative path:
uv run main.py --test-image /full/path/to/test.jpg --save image
uv run main.py --test-image ./images/test.jpg --save image
Image could not be loaded

Output:
Error: Could not load image test.jpg
Possible causes:
  • Corrupted image file
  • Unsupported format
  • File permissions
Solution:
# Check file integrity
file test.jpg

# Convert to standard JPEG
convert test.png test.jpg

# Fix permissions
chmod 644 test.jpg

Comparing Detection Methods

YOLOv4 vs YOLOv3 vs HOG

In broad terms: YOLOv4 is the most accurate of the three and is preferred when its weights are available; YOLOv3 is slightly less accurate but uses the same workflow; HOG is OpenCV’s built-in classical detector, requires no model download, and is the automatic fallback when YOLO weights are missing, at a noticeable cost in accuracy (see the fallback warning under Troubleshooting).

Using Test Results for Production

Finding Optimal Settings

Step 1: Test with multiple images

Capture diverse scenarios from your deployment:
# Different times of day
uv run main.py --test-image morning.jpg --save image
uv run main.py --test-image afternoon.jpg --save image
uv run main.py --test-image evening.jpg --save image

# Different distances
uv run main.py --test-image close-up.jpg --save image
uv run main.py --test-image medium-range.jpg --save image
uv run main.py --test-image far-away.jpg --save image
Step 2: Tune for your use case

High-security (minimize false negatives):
uv run main.py --test-image test.jpg --save image \
  --confidence 0.4 \
  --area-threshold 800
High-precision (minimize false positives):
uv run main.py --test-image test.jpg --save image \
  --confidence 0.7 \
  --area-threshold 3000
Step 3: Document your findings

Record optimal settings for your environment:
# production.cfg
[paths]
model_dir = model
output_dir = /data/surveillance

[detection]
confidence_threshold = 0.6  # Tuned from test results
person_area_threshold = 2500  # Filters distant persons
frame_skip = 15
Step 4: Apply to live streams

Use tested settings with RTSP streams:
uv run main.py --config production.cfg \
  --rtsp "rtsp://camera.local/stream" \
  --save image

Advanced Testing

Batch Testing Multiple Images

#!/bin/bash
# test-batch.sh

for img in test_images/*.jpg; do
  echo "Testing: $img"
  uv run main.py --test-image "$img" --save image \
    --confidence 0.5 \
    --area-threshold 2000
  echo "---"
done
Run:
chmod +x test-batch.sh
./test-batch.sh
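If you prefer Python over bash, the same batch can be driven with glob and subprocess — a sketch that mirrors the flags used on this page (the test_images/ layout is an assumption; adjust the pattern to your setup):

```python
import glob
import subprocess

def batch_commands(pattern="test_images/*.jpg",
                   confidence=0.5, area_threshold=2000):
    """Build one uv invocation per matching image."""
    return [
        ["uv", "run", "main.py", "--test-image", img, "--save", "image",
         "--confidence", str(confidence),
         "--area-threshold", str(area_threshold)]
        for img in sorted(glob.glob(pattern))
    ]

for cmd in batch_commands():
    print("Testing:", cmd[4])
    subprocess.run(cmd, check=True)
```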

Comparing Threshold Ranges

#!/bin/bash
# compare-thresholds.sh

for conf in 0.3 0.4 0.5 0.6 0.7; do
  echo "Testing confidence: $conf"
  uv run main.py --test-image test.jpg --save image --confidence $conf
  mv test_result_*.jpg "result_conf_${conf}.jpg"
done
Review all result_conf_*.jpg files to compare detection results.

Testing GPU Acceleration

Verify CUDA is being used:
uv run main.py --test-image test.jpg --save image 2>&1 | grep CUDA
Expected output:
CUDA available, using GPU for inference
If you see:
CUDA not available, using CPU for inference
Note that the prebuilt opencv-contrib-python wheels on PyPI do not include CUDA support; GPU inference requires an OpenCV build compiled with CUDA enabled (typically built from source with -DWITH_CUDA=ON for your GPU).
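A quick way to probe the same thing from Python is OpenCV’s cv2.cuda.getCudaEnabledDeviceCount(). The status strings below mirror the log lines quoted above; whether main.py uses exactly this check is an assumption:

```python
def cuda_status():
    """Best-effort check of whether this OpenCV build can see a CUDA GPU."""
    try:
        import cv2
    except ImportError:
        return "OpenCV not installed"
    try:
        # Standard PyPI wheels expose cv2.cuda but report 0 devices.
        if cv2.cuda.getCudaEnabledDeviceCount() > 0:
            return "CUDA available, using GPU for inference"
    except (AttributeError, cv2.error):
        pass
    return "CUDA not available, using CPU for inference"

print(cuda_status())
```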