Smart Parking Spot Detection System
Parking Slot Occupancy Detection
Using Classical Computer Vision
Urban parking facilities lack real-time, cost-effective occupancy monitoring — traditional sensor-based systems require expensive hardware per-slot installation and constant maintenance, leaving most lots without intelligent space management.
Smart Parking Spot Detection System uses a single fixed CCTV camera and classical image processing — frame differencing, CLAHE contrast enhancement, and polygon-masked ROI analysis — to detect slot occupancy in real time without any deep learning or specialized hardware.
By eliminating per-slot sensor costs and ML model dependencies, this system can be deployed in any camera-equipped parking facility, targeting the roughly 30% of urban traffic attributed to lot-searching and cutting per-space infrastructure costs by over 90%.
The Urban Parking Crisis
Background Context
Urban traffic congestion is a globally recognized challenge. Parking research from INRIX (publisher of the Global Traffic Scorecard) estimates that drivers in the U.S. spend an average of 17 hours per year searching for parking, and cruising for spaces is estimated to account for up to 30% of traffic in congested urban districts. As vehicle ownership continues to grow, this problem compounds at scale.
Modern parking facilities typically rely on one of two approaches: (1) physical loop-detector sensors embedded in each bay costing $300–$800 per space, or (2) manual attendant monitoring with zero real-time data. Neither approach is scalable, cost-effective, or easily retrofittable to existing infrastructure.
No lightweight, camera-only, infrastructure-free solution exists that achieves reliable occupancy detection without deep learning model dependencies, GPU requirements, or labelled training datasets.
Academic Relevance
This project applies fundamental computer vision principles — spatial filtering, morphological operations, ROI masking, and statistical thresholding — studied in image processing coursework. It bridges theory and practical deployment in a real urban context.
Research Motivation
The absence of a dataset-free, zero-GPU baseline system represents an unexplored niche. Establishing reliable accuracy benchmarks for classical CV approaches informs future hybrid systems and validates lightweight deployment in resource-constrained environments.
Project Objectives
To design, implement, and evaluate a real-time parking slot occupancy detection system using classical computer vision techniques that achieves ≥90% classification accuracy on a standard parking lot video dataset without deep learning models or per-slot sensors.
Slot Annotation System
Develop an interactive polygon annotation tool that allows a user to label all parking slots in a camera view within 10 minutes, saving persistent slot geometry to JSON.
Preprocessing Pipeline
Implement a frame preprocessing pipeline (grayscale → CLAHE → Gaussian blur) that reduces false-positive rate by ≥15% compared to raw frame comparison on test footage.
Occupancy Detection Core
Implement frame-differencing occupancy logic achieving ≥90% slot-level accuracy (F1-score) on a 30-minute evaluation video by Week 7, validated against manual ground-truth annotations.
Debug Visualization Tool
Build a multi-window debug runtime displaying reference frames and amplified pixel-diff maps per slot, enabling threshold tuning within 2 iterations of initial calibration.
Evaluation & Documentation
Produce a final technical report with accuracy benchmarks, performance tables, known limitations, and reproducible setup guide by Week 12 for academic submission.
System Design & Implementation
System Architecture
The system is composed of four independent Python modules that form a linear processing pipeline. Each module has a single responsibility and communicates via standardized data contracts (NumPy arrays, JSON files).
Data flow: video1.mp4 → preprocess.py → slots.json → main.py → cv2.imshow

Tech Stack
Module Development
preprocess.py
Core
Converts each BGR frame into a normalized single-channel representation suitable for stable comparison.
# Convert the BGR frame to grayscale before contrast enhancement
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
gray = clahe.apply(gray)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
background_subtraction.py
Auxiliary
MOG2-based foreground extraction with shadow removal (threshold >200) and morphological cleanup. Currently implemented but not in the primary decision path — reserved for future hybrid scoring.
slot_annotation.py
Tool
Interactive 4-point polygon annotation GUI. Click-to-place corners, press 1/0 for occupancy state, S to persist to slots.json. Supports undo, delete, and full reset.
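The persistence half of this tool is simple; a minimal sketch, assuming a JSON schema of `{"points": [[x, y] * 4], "occupied": 0 or 1}` per slot (the exact keys are an assumption, not confirmed from the repository):

```python
import json

def save_slots(slots, path="slots.json"):
    """Persist slot records: each is {"points": [[x, y]*4], "occupied": 0 or 1}."""
    with open(path, "w") as f:
        json.dump(slots, f, indent=2)

def load_slots(path="slots.json"):
    """Reload slot geometry saved by the annotation tool."""
    with open(path) as f:
        return json.load(f)
```

The GUI layer would call `save_slots` when the user presses S, after each slot's four corners have been clicked and its initial occupancy toggled with 1/0.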
main.py / main_ref.py
Runtime
Primary occupancy detection loop. For each frame: loads slot polygons, applies preprocessor, diffs each slot against its stored empty reference, counts changed pixels above threshold, classifies as occupied/free.
# Occupancy decision rule
diff = cv2.absdiff(processed, slot_references[i])
diff_masked = cv2.bitwise_and(diff, diff, mask=mask)
changed_pixels = np.sum(diff_masked > DIFF_THRESHOLD) # 30
ratio = changed_pixels / (slot_area + 1)
is_occupied = ratio > OCCUPANCY_RATIO # 0.25
Execution Workflow
1. Initialization: load the video, read the first frame, and preprocess it. For every slot marked occupied=0 in the JSON, store the first processed frame as its empty reference; slots marked occupied=1 start with a None reference.
2. Per-frame loop: read frame → preprocess → for each slot: create its polygon mask, compute absdiff against the reference, mask the diff to the slot region, and compute the changed-pixel ratio.
3. Reference update: if a slot transitions from occupied→free, capture the current processed frame as that slot's new empty reference. This enables dynamic adaptation when vehicles depart.
4. Rendering: draw color-coded slot polygons (green = free, red = occupied), numbered labels, and a HUD overlay showing total/occupied/free counts.
Testing & Validation
| Metric | Definition | Target | Method |
|---|---|---|---|
| Accuracy | Correct slot classifications / total slots per frame | ≥ 90% | Manual ground-truth labels |
| Precision | True occupied / all predicted occupied | ≥ 88% | Frame-by-frame audit |
| Recall | True occupied / all actual occupied | ≥ 85% | Frame-by-frame audit |
| F1-Score | Harmonic mean of precision & recall | ≥ 0.87 | Computed metric |
| Latency | Processing time per frame | < 33 ms (30 fps) | Python time module |
| Reference Stability | False transitions on parked vehicle | 0 per 5 min | Continuous monitoring |
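The derived metrics in this table follow directly from slot-level confusion counts; a small helper (the name `f1_metrics` is illustrative) makes the computation explicit:

```python
def f1_metrics(tp, fp, fn):
    """Precision, recall, and F1 from slot-level confusion counts.

    tp: true occupied predictions, fp: false occupied, fn: missed occupied.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, 90 true positives with 10 false positives and 10 misses yields precision 0.90, recall 0.90, F1 0.90, meeting the targets above.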
Constraints & Safety
System assumes a fixed overhead/angled CCTV perspective. No camera movement compensation implemented.
Video is processed locally; no face or license-plate data is extracted or stored. Privacy-safe by design.
Sudden lighting changes (shadows, headlights) can trigger false occupancy transitions. CLAHE partially mitigates this.
Slots marked occupied at start have no empty reference until a vehicle departs. Fallback to JSON annotation state used during this period.
Project Roadmap
Research & Setup
- Literature review and problem framing
- Environment setup and toolchain preparation
- Dataset acquisition and slot planning
Core Development
- Annotation workflow and slot geometry capture
- Preprocessing pipeline and contrast enhancement
- Occupancy detection core and slot scoring
Validation
- Threshold tuning and debug visualization
- Accuracy checks against manual labels
- Stability testing across lighting conditions
Delivery
- Code cleanup and final integration
- Report writing and documentation polish
- Submission-ready project packaging
Tools, Dependencies & Team
Software
- Python 3.10+ (Free)
- OpenCV 4.x (Open Source)
- NumPy (Open Source)
- VS Code (Free)
- Git / GitHub (Free)
Hardware
- Dev Laptop, i5/i7 (Existing)
- Webcam / CCTV (Existing)
- 8 GB RAM minimum (Existing)
- GPU (Not Required)
Team Roles
- Aditi Dilip Kumar Gupta (Team Insight)
- Arin Goyal (Team Insight)
- Dr. Raghav B. Venkataramaiyer (Faculty)
- Computer Vision, UCS532P (Subject)
| Item | Category | Estimated Cost | Status |
|---|---|---|---|
| Python + OpenCV | Software | $0 | ✓ Free |
| Development Machine | Hardware | $0 | ✓ Available |
| Test Video Footage | Dataset | $0 | ✓ Self-recorded |
| GitHub Pages Hosting | Deployment | $0 | ✓ Free |
| Cloud GPU | Compute | $0 | ✓ Not needed |
| Total Project Cost |  | $0 | ✓ Funded |
This project is fully self-sufficient using open-source tools and existing hardware. All dependencies are freely available under permissive licenses (Apache 2.0, BSD, MIT).
Technical Risks & Mitigation
Lighting Variation
Sudden daylight changes or artificial light flickers cause frame-wide intensity shifts, producing mass false-positive occupancy detections.
No Initial Reference for Occupied Slots
Slots annotated as occupied at video start have no empty reference image, causing fallback to annotation state for potentially long periods.
Camera Shake / Drift
Minor camera movement (wind, vibration) shifts the entire frame, causing reference mismatch across all slots simultaneously.
Point Annotation Order
Non-clockwise polygon annotation creates self-intersecting quads, causing incorrect ROI masking and degraded detection quality.
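A defensive fix, sketched here as an assumption rather than existing tool behavior, is to re-order the four clicked corners by angle around their centroid before saving, which guarantees a simple (non-self-intersecting) quad:

```python
import numpy as np

def order_clockwise(points):
    """Sort four corner points by angle around their centroid.

    In image coordinates (y grows downward) ascending angle traverses
    the corners clockwise on screen, yielding a simple polygon.
    """
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    angles = np.arctan2(pts[:, 1] - center[1], pts[:, 0] - center[0])
    return pts[np.argsort(angles)].astype(int)
```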
Slow Frame Rate on Low-end Hardware
High-resolution video or many slots may cause processing latency exceeding real-time on underpowered machines.
Expected Outputs
Source Code Repository
Fully documented Python codebase with all modules, the annotation tool, and slot geometry data — hosted on GitHub under the MIT license.
✓ In Progress
Working Prototype
Real-time demo video showing occupancy detection on parking lot footage with HUD overlay and debug visualization mode.
✓ In Progress
Documentation Website
This GitHub Pages site — fully documenting design decisions, architecture, and results for public reference.
✓ Complete