Smart Parking Spot Detection System

Parking Slot Occupancy Detection
Using Classical Computer Vision

Project Identity
  • Project: Smart Parking Spot Detection System
  • Subject: Computer Vision (UCS532P)
  • Faculty: Dr. Raghav B. Venkataramaiyer
  • Team: Team Insight
  • Members: Aditi Dilip Kumar Gupta, Arin Goyal
The Gap

Urban parking facilities lack real-time, cost-effective occupancy monitoring — traditional sensor-based systems require expensive per-slot hardware installation and constant maintenance, leaving most lots without intelligent space management.

The Solution

Smart Parking Spot Detection System uses a single fixed CCTV camera and classical image processing — frame differencing, CLAHE contrast enhancement, and polygon-masked ROI analysis — to detect slot occupancy in real time without any deep learning or specialized hardware.

The Impact

By eliminating per-sensor costs and ML model dependencies, this system can be deployed in any camera-equipped parking facility, reducing urban traffic congestion from lot-searching by up to 30% and cutting infrastructure costs by over 90%.

The Urban Parking Crisis

Background Context

Urban traffic congestion is a globally recognized challenge. Studies from the INRIX Global Traffic Scorecard report that drivers in major cities spend an average of 17 minutes per trip searching for parking — accounting for approximately 30% of total urban traffic. As vehicle ownership continues to grow, this problem compounds at scale.

Modern parking facilities typically rely on one of two approaches: (1) physical loop-detector sensors embedded in each bay costing $300–$800 per space, or (2) manual attendant monitoring with zero real-time data. Neither approach is scalable, cost-effective, or easily retrofittable to existing infrastructure.

Problem Gap
No lightweight, camera-only, infrastructure-free solution exists that achieves reliable occupancy detection without deep learning model dependencies, GPU requirements, or labelled training datasets.

Academic Relevance

This project applies fundamental computer vision principles — spatial filtering, morphological operations, ROI masking, and statistical thresholding — studied in image processing coursework. It bridges theory and practical deployment in a real urban context.

Research Motivation

The absence of a dataset-free, zero-GPU baseline system represents an unexplored niche. Establishing reliable accuracy benchmarks for classical CV approaches informs future hybrid systems and validates lightweight deployment in resource-constrained environments.

30% of urban traffic from parking search
Up to $800 per-bay sensor cost
0 training data required by our system
1 camera for the entire lot

Project Objectives

Primary Objective

To design, implement, and evaluate a real-time parking slot occupancy detection system using classical computer vision techniques that achieves ≥90% classification accuracy on a standard parking lot video dataset without deep learning models or per-slot sensors.

01
Specific Measurable

Slot Annotation System

Develop an interactive polygon annotation tool that allows a user to label all parking slots in a camera view within 10 minutes, saving persistent slot geometry to JSON.

⏱ Week 2 ✓ Attainable ↗ Relevant
02
Measurable Attainable

Preprocessing Pipeline

Implement a frame preprocessing pipeline (grayscale → CLAHE → Gaussian blur) that reduces false-positive rate by ≥15% compared to raw frame comparison on test footage.

⏱ Week 3–4 ✓ Attainable ↗ Relevant
03
Specific Time-bound

Occupancy Detection Core

Implement frame-differencing occupancy logic achieving ≥90% slot-level accuracy (F1-score) on a 30-minute evaluation video by Week 7, validated against manual ground-truth annotations.

⏱ Week 5–7 ✓ Attainable ↗ Relevant
04
Relevant Measurable

Debug Visualization Tool

Build a multi-window debug runtime displaying reference frames and amplified pixel-diff maps per slot, enabling threshold tuning within 2 iterations of initial calibration.

⏱ Week 6–8 ✓ Attainable ↗ Relevant
05
Time-bound Specific

Evaluation & Documentation

Produce a final technical report with accuracy benchmarks, performance tables, known limitations, and reproducible setup guide by Week 12 for academic submission.

⏱ Week 10–12 ✓ Attainable ↗ Relevant

System Design & Implementation

System Architecture

The system is composed of four independent Python modules that form a linear processing pipeline. Each module has a single responsibility and communicates via standardized data contracts (NumPy arrays, JSON files).

Video Input (video1.mp4) → Preprocessor (preprocess.py) → Slot Masking (slots.json) → Diff Engine (main.py) → HUD Display (cv2.imshow)

Tech Stack

  • Python 3.x (Runtime)
  • OpenCV 4.x (Vision Core)
  • NumPy (Array Ops)
  • MOG2 (BG Subtraction)
  • CLAHE (Contrast Enhance)
  • JSON (Slot Storage)

Module Development

preprocess.py Core

Converts each BGR frame into a normalized single-channel representation suitable for stable comparison.

BGR → Grayscale → CLAHE → Gaussian Blur (5×5)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # BGR → single channel
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
gray = clahe.apply(gray)                             # local contrast equalization
blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress sensor noise
background_subtraction.py Auxiliary

MOG2-based foreground extraction with shadow removal (threshold >200) and morphological cleanup. Currently implemented but not in the primary decision path — reserved for future hybrid scoring.

MOG2 Threshold MORPH_OPEN MORPH_DILATE
slot_annotation.py Tool

Interactive 4-point polygon annotation GUI. Click-to-place corners, press 1/0 for occupancy state, S to persist to slots.json. Supports undo, delete, and full reset.
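The persistence logic behind the tool can be sketched without the GUI layer. `add_point` and `save` are hypothetical names standing in for the real mouse-callback wiring, and the JSON schema (a `points` list plus an `occupied` flag per slot) is inferred from the runtime description:

```python
import json

# In-memory model of an annotation session; the real slot_annotation.py
# drives this from a cv2.setMouseCallback handler
slots, current = [], []

def add_point(x, y):
    """Append one polygon corner; close the quad after 4 clicks."""
    current.append([x, y])
    if len(current) == 4:
        slots.append({"points": current.copy(), "occupied": 0})
        current.clear()

def save(path):
    """Persist slot geometry; the real tool writes to slots.json."""
    with open(path, "w") as f:
        json.dump(slots, f, indent=2)

for x, y in [(10, 10), (50, 10), (50, 40), (10, 40)]:
    add_point(x, y)
save("slots_demo.json")
```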

main.py / main_ref.py Runtime

Primary occupancy detection loop. For each frame: loads slot polygons, applies preprocessor, diffs each slot against its stored empty reference, counts changed pixels above threshold, classifies as occupied/free.

# Occupancy decision rule
diff = cv2.absdiff(processed, slot_references[i])      # per-pixel change vs empty reference
diff_masked = cv2.bitwise_and(diff, diff, mask=mask)   # restrict to this slot's polygon
changed_pixels = np.sum(diff_masked > DIFF_THRESHOLD)  # DIFF_THRESHOLD = 30
ratio = changed_pixels / (slot_area + 1)               # +1 guards against zero-area slots
is_occupied = ratio > OCCUPANCY_RATIO                  # OCCUPANCY_RATIO = 0.25

Execution Workflow

1
Initialization

Load video, read first frame, preprocess it. For all slots marked occupied=0 in JSON, store first processed frame as empty reference. Slots marked occupied=1 start with None reference.

2
Per-Frame Processing

Read frame → preprocess → for each slot: create polygon mask, compute absdiff vs reference, mask diff to slot region, compute changed pixel ratio.

3
State Transition & Reference Update

If slot transitions from occupied→free, capture current processed frame as the new empty reference for that slot. This enables dynamic adaptation when vehicles depart.

4
Visualization

Draw color-coded slot polygons (green=free, red=occupied), numbered labels, and HUD overlay showing total/occupied/free counts.

Testing & Validation

Metric | Definition | Target | Method
Accuracy | Correct slot classifications / total slots per frame | ≥ 90% | Manual ground-truth labels
Precision | True occupied / all predicted occupied | ≥ 88% | Frame-by-frame audit
Recall | True occupied / all actual occupied | ≥ 85% | Frame-by-frame audit
F1-Score | Harmonic mean of precision & recall | ≥ 0.87 | Computed metric
Latency | Processing time per frame | < 33 ms (30 fps) | Python time module
Reference Stability | False transitions on parked vehicle | 0 per 5 min | Continuous monitoring
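Per-frame latency can be measured with the standard time module; a small illustrative helper:

```python
import time

def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed milliseconds),
    for checking the < 33 ms per-frame budget."""
    t0 = time.perf_counter()
    result = fn(*args)
    return result, (time.perf_counter() - t0) * 1000.0
```

Wrapping the per-frame processing call in `timed` gives the latency figure reported in the table.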

Constraints & Safety

Camera Static

System assumes a fixed overhead/angled CCTV perspective. No camera movement compensation implemented.

No PII

Video is processed locally, no face/plate data extracted or stored. Privacy-safe by design.

Lighting Sensitivity

Sudden lighting changes (shadows, headlights) can trigger false occupancy transitions. CLAHE partially mitigates this.

Initial Occupied Slots

Slots marked occupied at start have no empty reference until a vehicle departs; the JSON annotation state is used as a fallback during this period.

Project Roadmap

Phase 01

Research & Setup

  • Literature review and problem framing
  • Environment setup and toolchain preparation
  • Dataset acquisition and slot planning
Phase 02

Core Development

  • Annotation workflow and slot geometry capture
  • Preprocessing pipeline and contrast enhancement
  • Occupancy detection core and slot scoring
Phase 03

Validation

  • Threshold tuning and debug visualization
  • Accuracy checks against manual labels
  • Stability testing across lighting conditions
Phase 04

Delivery

  • Code cleanup and final integration
  • Report writing and documentation polish
  • Submission-ready project packaging

Tools, Dependencies & Team

Software

  • Python 3.10+ (Free)
  • OpenCV 4.x (Open Source)
  • NumPy (Open Source)
  • VS Code (Free)
  • Git / GitHub (Free)

Hardware

  • Dev Laptop, i5/i7 (Existing)
  • Webcam / CCTV (Existing)
  • 8 GB RAM minimum (Existing)
  • GPU (Not Required)

Team Roles

  • Aditi Dilip Kumar Gupta (Team Insight)
  • Arin Goyal (Team Insight)
  • Dr. Raghav B. Venkataramaiyer (Faculty)
  • Computer Vision, UCS532P (Subject)
Item | Category | Estimated Cost | Status
Python + OpenCV | Software | $0 | ✓ Free
Development Machine | Hardware | $0 | ✓ Available
Test Video Footage | Dataset | $0 | ✓ Self-recorded
GitHub Pages Hosting | Deployment | $0 | ✓ Free
Cloud GPU | Compute | $0 | ✓ Not needed
Total Project Cost | | $0 | ✓ Funded
No External Funding Required
This project is fully self-sufficient using open-source tools and existing hardware. All dependencies are freely available under permissive licenses (Apache 2.0, BSD, MIT).

Technical Risks & Mitigation

High

Lighting Variation

Sudden daylight changes or artificial light flickers cause frame-wide intensity shifts, producing mass false-positive occupancy detections.

Mitigation: CLAHE normalization reduces global illumination effects. Temporal smoothing (multi-frame confirmation) to be added in refinement phase.
Fallback: Increase DIFF_THRESHOLD dynamically based on frame mean intensity; add per-slot adaptive thresholding.
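The dynamic-threshold fallback could look like this hypothetical helper; `ref_mean` and `gain` are illustrative calibration values, not part of the current code:

```python
import numpy as np

def adaptive_threshold(frame_gray, base=30, gain=0.2, ref_mean=128.0):
    """Hypothetical fallback: raise DIFF_THRESHOLD as frame brightness
    drifts from a calibration mean, so a global lighting shift does not
    flip every slot at once."""
    shift = abs(float(frame_gray.mean()) - ref_mean)
    return base + gain * shift

# e.g. a frame 100 gray levels brighter than calibration
bright = np.full((480, 640), 228, dtype=np.uint8)
```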
Medium

No Initial Reference for Occupied Slots

Slots annotated as occupied at video start have no empty reference image, causing fallback to annotation state for potentially long periods.

Mitigation: Auto-update reference when slot first transitions to free state. Log reference update events to console.
Fallback: Pre-capture an empty-lot reference frame before vehicle arrival and inject into slot_references at startup.
Medium

Camera Shake / Drift

Minor camera movement (wind, vibration) shifts the entire frame, causing reference mismatch across all slots simultaneously.

Mitigation: Mount camera securely. Use larger polygon ROIs to tolerate minor alignment drift.
Fallback: Add global frame alignment (ECC/ORB homography) as preprocessing step before diff computation.
Low

Point Annotation Order

Non-clockwise polygon annotation creates self-intersecting quads, causing incorrect ROI masking and degraded detection quality.

Mitigation: Add convex hull auto-sorting of annotation points before saving to slots.json.
Fallback: Visual preview of polygon in annotation tool allows user to verify shape before confirming.
Low

Slow Frame Rate on Low-end Hardware

High-resolution video or a large slot count may push per-frame latency beyond the real-time budget on underpowered machines.

Mitigation: Resize input frames to 720p before processing. Profile with cProfile and optimize tight loops.
Fallback: Process every Nth frame (e.g. every 3rd frame) to maintain real-time display at reduced detection frequency.

Expected Outputs

Source Code Repository

Fully documented Python codebase with all modules, annotation tool, and slots data — hosted on GitHub with MIT license.

✓ In Progress

Working Prototype

Real-time demo video showing occupancy detection on parking lot footage with HUD overlay and debug visualization mode.

✓ In Progress

Documentation Website

This GitHub Pages site — fully documenting design decisions, architecture, and results for public reference.

✓ Complete