
# Traffic Management Implementation Guide

2025-01-06 · Faisal-writeups

## Project Overview

This comprehensive guide provides a fully functional implementation of an intelligent traffic management system using image-based object detection with YOLOv5, OpenCV, and Python. The system automatically detects vehicles, counts them, and dynamically adjusts traffic signal timings to optimize traffic flow.

## 1. System Architecture

### Core Components

The system consists of four main modules:

1. Vehicle Detection Module - YOLOv5-based object detection
2. Signal Control Module - dynamic traffic signal timing algorithm
3. Tracking Module - vehicle tracking across frames (optional)
4. Main System - coordinates all components and handles video processing

### Technology Stack

- Deep Learning Framework: PyTorch with YOLOv5
- Computer Vision: OpenCV
- Programming Language: Python 3.8+
- Additional Libraries: NumPy, Pandas, Matplotlib

## 2. Installation & Setup

### Step 1: Environment Setup

Create a virtual environment and install dependencies:

```bash
# Create virtual environment
python -m venv traffic_env
source traffic_env/bin/activate
# On Windows: traffic_env\Scripts\activate

# Install required packages
pip install torch torchvision opencv-python numpy pandas matplotlib PyYAML tqdm scipy
```

### Step 2: Project Directory Structure

```
traffic_management_system/
├── data/
│   ├── raw/                   # Raw traffic videos/images
│   ├── processed/             # Preprocessed data
│   ├── annotations/           # YOLO format annotations
│   └── models/                # Trained model weights
├── src/
│   ├── vehicle_detection.py   # YOLOv5 detection module
│   ├── signal_control.py      # Dynamic signal timing algorithm
│   ├── tracking.py            # DeepSORT tracking module
│   ├── utils.py               # Helper functions
│   └── config.py              # Configuration parameters
├── output/                    # Output videos and logs
├── requirements.txt           # Python dependencies
├── README.md                  # Project documentation
└── main.py                    # Main execution script
```

## 3. Core Module Implementation

### Module 1: Configuration (config.py)

This file contains all system parameters and settings. Key configuration parameters:

**YOLO model settings**

- Model type: yolov5s (small), yolov5m (medium), yolov5l (large)
- Confidence threshold: 0.4
- NMS threshold: 0.45
- Image size: 640x640

**Traffic signal parameters**

- Number of lanes: 4
- Min green time: 10 seconds
- Max green time: 90 seconds
- Yellow time: 3 seconds
- All-red transition: 2 seconds

**Vehicle weights (for priority calculation)**

- Car: 1.0
- Motorcycle/Bicycle: 0.5
- Bus/Truck: 2.0
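As a concrete reference, the parameters above might live in config.py along these lines (a sketch only — the constant names here are illustrative, not necessarily the project's actual identifiers):

```python
# config.py -- illustrative parameter set matching the values listed above.
# All names are assumptions, not the author's actual identifiers.

# YOLO model settings
MODEL_NAME = "yolov5s"      # yolov5s / yolov5m / yolov5l
CONF_THRESHOLD = 0.4        # minimum detection confidence
NMS_THRESHOLD = 0.45        # non-maximum suppression IoU threshold
IMG_SIZE = 640              # inference resolution (640x640)

# Traffic signal parameters
NUM_LANES = 4
MIN_GREEN = 10              # seconds
MAX_GREEN = 90              # seconds
YELLOW_TIME = 3             # seconds
ALL_RED_TIME = 2            # seconds

# Vehicle weights for priority calculation
VEHICLE_WEIGHTS = {
    "car": 1.0,
    "motorcycle": 0.5,
    "bicycle": 0.5,
    "bus": 2.0,
    "truck": 2.0,
}
```

Keeping every tunable in one module makes it easy to adjust thresholds and timings without touching the detection or control code.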

### Module 2: Vehicle Detection (vehicle_detection.py)

Purpose: detect and classify vehicles in traffic footage.

Key functions:

1. `__init__(model_path, device)` - initializes the YOLOv5 model: loads pre-trained or custom weights, sets up the GPU/CPU device, and configures detection parameters.
2. `detect(frame)` - performs vehicle detection. Input: BGR image frame. Output: array of `[x1, y1, x2, y2, confidence, class_id]` rows, filtered to vehicle classes only (car, bus, truck, motorcycle, bicycle).
3. `count_vehicles(detections)` - counts vehicles by type and returns a dictionary with counts per vehicle class, e.g. `{'car': 15, 'bus': 2, 'motorcycle': 5}`.
4. `draw_detections(frame, detections)` - draws bounding boxes around detected vehicles, adds labels with class name and confidence, and returns the annotated frame.

Example usage:

```python
import cv2
from vehicle_detection import VehicleDetector

# Initialize detector
detector = VehicleDetector()

# Read image
frame = cv2.imread('traffic_image.jpg')

# Detect vehicles
detections = detector.detect(frame)

# Count vehicles
counts = detector.count_vehicles(detections)
print(f"Vehicle counts: {counts}")

# Visualize
result = detector.draw_detections(frame, detections)
cv2.imshow('Detection Result', result)
cv2.waitKey(0)
```
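The counting step can be sketched as a pure function over detection rows. This sketch assumes the `[x1, y1, x2, y2, confidence, class_id]` layout described above and the COCO class ids used by the pre-trained YOLOv5 models (1 bicycle, 2 car, 3 motorcycle, 5 bus, 7 truck); a custom-trained model would use its own id mapping:

```python
from collections import Counter

# COCO class ids used by the pre-trained YOLOv5 models
VEHICLE_CLASSES = {1: "bicycle", 2: "car", 3: "motorcycle", 5: "bus", 7: "truck"}

def count_vehicles(detections):
    """Count detections per vehicle type.

    detections: iterable of [x1, y1, x2, y2, confidence, class_id] rows.
    Returns e.g. {'car': 15, 'bus': 2}.
    """
    counts = Counter()
    for det in detections:
        cls = int(det[5])
        if cls in VEHICLE_CLASSES:          # ignore non-vehicle classes
            counts[VEHICLE_CLASSES[cls]] += 1
    return dict(counts)

dets = [
    [100, 150, 200, 250, 0.91, 2],   # car
    [300, 120, 420, 260, 0.88, 2],   # car
    [500, 140, 560, 220, 0.73, 3],   # motorcycle
]
print(count_vehicles(dets))          # → {'car': 2, 'motorcycle': 1}
```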

### Module 3: Signal Control Algorithm (signal_control.py)

Purpose: dynamically calculate optimal traffic signal timings.

TrafficSignalController class:

1. `__init__(num_lanes)` - initializes the controller: sets up data structures for each lane, initializes vehicle counts and weights, and creates historical data buffers.
2. `update_vehicle_counts(lane_id, vehicle_counts_dict)` - updates vehicle counts for a specific lane, calculates the weighted count based on vehicle types, and stores the data in a rolling history buffer.
3. `calculate_green_time(lane_id)` - calculates the optimal green time using the formula `green_time = MIN_GREEN + (weighted_count × time_per_vehicle)`, smooths it against the historical average, applies min/max bounds, and returns the calculated time in seconds.
4. `get_next_green_lane()` - selects the lane with the highest weighted vehicle count, returns the lane ID and calculated green duration, and updates the current green-lane status.
5. `get_signal_sequence()` - generates a complete cycle: an ordered sequence of green, yellow, and all-red phases for all lanes, prioritized by traffic density.

### Signal Timing Algorithm Details

The algorithm uses a weighted priority system:

green_time = min(MAX_GREEN, MIN_GREEN + weighted_count × time_per_vehicle)

Where:

- MIN_GREEN = 10 seconds (configurable)
- MAX_GREEN = 90 seconds (configurable)
- time_per_vehicle = 2.5 seconds

Example: a lane with 10 cars, 2 buses, and 3 motorcycles:

Weighted count = (10 × 1.0) + (2 × 2.0) + (3 × 0.5) = 15.5
Green time = 10 + (15.5 × 2.5) = 48.75 ≈ 49 seconds
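The worked example can be checked directly. A minimal sketch of the timing formula (a hypothetical helper using the configured weights and bounds, not the actual `calculate_green_time` implementation):

```python
# Configuration values from the guide
MIN_GREEN, MAX_GREEN, TIME_PER_VEHICLE = 10, 90, 2.5
WEIGHTS = {"car": 1.0, "bus": 2.0, "motorcycle": 0.5}

def green_time(counts):
    """Green time in whole seconds for a dict of {vehicle_type: count}."""
    weighted = sum(WEIGHTS[v] * n for v, n in counts.items())
    return round(min(MAX_GREEN, MIN_GREEN + weighted * TIME_PER_VEHICLE))

# Lane with 10 cars, 2 buses, 3 motorcycles:
print(green_time({"car": 10, "bus": 2, "motorcycle": 3}))  # → 49
```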

### Module 4: Main System (main.py)

Purpose: orchestrates the entire traffic management system.

TrafficManagementSystem class:

1. `__init__(video_sources, model_path)` - initializes the detector and controller, sets up video captures for all lanes, and configures the output video writer.
2. `process_lane(lane_id, frame)` - detects, counts, and classifies vehicles in a frame, draws annotations, and returns the annotated frame and counts.
3. `create_display_grid(frames)` - resizes the lane views to a uniform size and combines them into a 2×2 grid for 4 lanes.
4. `add_signal_info(frame)` - overlays signal status information: current signal state, vehicle counts, and green times for each lane.
5. `run(duration)` - main execution loop: processes frames from all lanes, updates vehicle counts every second, recalculates signal timings every 3 seconds, displays real-time output, and saves the output video and logs.

## 4. Running the System

### Basic Usage

Option 1: with video files

```bash
python main.py --videos lane1.mp4 lane2.mp4 lane3.mp4 lane4.mp4
```

Option 2: with live cameras

```bash
python main.py --videos 0 1 2 3
```

Option 3: with a custom model

```bash
python main.py --videos lane1.mp4 lane2.mp4 --model models/custom_best.pt --duration 120
```

### Command Line Arguments

- `--videos`: video file paths or camera indices (required)
- `--model`: path to custom YOLO model weights (optional)
- `--duration`: maximum duration in seconds (optional)

### Keyboard Controls During Execution

- `q` - quit the system
- `s` - save a screenshot of the current frame
- `ESC` - stop processing
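A possible argument parser for the flags above (a sketch — the real main.py may wire this differently):

```python
import argparse

def build_parser():
    """Build a parser matching the command-line flags described above."""
    p = argparse.ArgumentParser(description="Smart traffic management system")
    p.add_argument("--videos", nargs="+", required=True,
                   help="video file paths or camera indices, one per lane")
    p.add_argument("--model", default=None,
                   help="path to custom YOLO model weights (optional)")
    p.add_argument("--duration", type=int, default=None,
                   help="maximum run time in seconds (optional)")
    return p

args = build_parser().parse_args(
    ["--videos", "lane1.mp4", "lane2.mp4", "--duration", "120"])
print(args.videos)    # → ['lane1.mp4', 'lane2.mp4']
```

Camera indices arrive as strings here; the main script would convert numeric entries to `int` before passing them to `cv2.VideoCapture`.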

## 5. Training a Custom Model (Optional)

If you want to train a custom model on your specific traffic conditions:

### Step 1: Collect Data

- Record traffic videos from your target location
- Extract frames at regular intervals (e.g., every 30 frames)
- Aim for at least 500-1000 images

### Step 2: Annotate Images

Use LabelImg or Roboflow to annotate:

```bash
# Install LabelImg
pip install labelImg

# Run LabelImg
labelImg
```

Annotation format (YOLO):

```
class_id x_center y_center width height
```

Example annotation file (image.txt):

```
2 0.515 0.376 0.284 0.418
2 0.735 0.298 0.193 0.337
7 0.821 0.421 0.156 0.289
```
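Annotation tools emit this format automatically, but the conversion from pixel boxes is simple enough to sketch. All coordinates are normalized to [0, 1] by the image dimensions (hypothetical helper, assuming the image size is known):

```python
def to_yolo(box, img_w, img_h):
    """Convert a pixel box (x1, y1, x2, y2) to normalized YOLO format
    (x_center, y_center, width, height), each in [0, 1]."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2 / img_w,
            (y1 + y2) / 2 / img_h,
            (x2 - x1) / img_w,
            (y2 - y1) / img_h)

# A 100x50 pixel box at (300, 200) in a 1280x720 frame:
xc, yc, w, h = to_yolo((300, 200, 400, 250), 1280, 720)
print(f"2 {xc:.3f} {yc:.3f} {w:.3f} {h:.3f}")  # class 2 (car) plus the box
```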

### Step 3: Organize Dataset

```
dataset/
├── images/
│   ├── train/
│   │   ├── img001.jpg
│   │   ├── img002.jpg
│   │   └── ...
│   └── val/
│       ├── img101.jpg
│       └── ...
└── labels/
    ├── train/
    │   ├── img001.txt
    │   ├── img002.txt
    │   └── ...
    └── val/
        ├── img101.txt
        └── ...
```
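The train/val split can be scripted rather than done by hand. A minimal sketch (hypothetical helper; 80/20 split with a fixed seed so the split is reproducible — the matching label files would be moved alongside each image):

```python
import random

def split_dataset(filenames, val_fraction=0.2, seed=42):
    """Shuffle and split a list of image filenames into train/val sets."""
    names = list(filenames)
    random.Random(seed).shuffle(names)
    n_val = max(1, int(len(names) * val_fraction))
    return names[n_val:], names[:n_val]   # (train, val)

images = [f"img{i:03d}.jpg" for i in range(1, 11)]
train, val = split_dataset(images)
print(len(train), len(val))   # → 8 2
```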

### Step 4: Create Dataset Configuration

Create dataset.yaml:

```yaml
path: ./dataset
train: images/train
val: images/val

nc: 5  # number of classes
names: ['car', 'motorcycle', 'bus', 'truck', 'bicycle']
```

### Step 5: Train Model

From inside the cloned yolov5 repository:

```bash
python train.py --img 640 --batch 16 --epochs 100 --data dataset.yaml --weights yolov5s.pt
```

Training parameters:

- `--img`: input image size (640 recommended)
- `--batch`: batch size (adjust based on GPU memory)
- `--epochs`: number of training epochs (50-100 typical)
- `--data`: path to the dataset yaml
- `--weights`: pre-trained weights to start from

### Step 6: Evaluate Model

After training, evaluate on the validation set:

```bash
python val.py --weights runs/train/exp/weights/best.pt --data dataset.yaml
```

## 6. System Workflow

### Complete Processing Pipeline

Frame processing loop (30 FPS):

1. Capture frame (each lane): read the frame from the video source and resize if necessary.
2. Detect vehicles (YOLOv5): run inference on the frame, filter detections by confidence, and extract bounding boxes.
3. Count & classify (every second): count vehicles by type, calculate the weighted count, and update lane statistics.
4. Update signal timings (every 3 seconds): calculate optimal green times, determine the priority lane, and generate the signal sequence.
5. Display & log: show annotated frames, display signal status, and log traffic data to CSV.

### Signal Cycle Example

Initial state:

- Lane 1: 15 cars, 2 buses → weighted count = 19 → green time = 57s
- Lane 2: 8 cars, 1 truck → weighted count = 10 → green time = 35s
- Lane 3: 20 cars, 3 buses → weighted count = 26 → green time = 75s
- Lane 4: 5 cars → weighted count = 5 → green time = 22s

Priority order:

1. Lane 3 (75s green) - highest density
2. Lane 1 (57s green)
3. Lane 2 (35s green)
4. Lane 4 (22s green) - lowest density

Total cycle time: 75 + 57 + 35 + 22 + (4 × 3s yellow) + (4 × 2s all-red) = 209 seconds
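The cycle-time arithmetic above can be expressed as a small helper (a sketch using the yellow and all-red durations from the configuration; each lane's green phase is followed by one yellow and one all-red transition):

```python
YELLOW_TIME, ALL_RED_TIME = 3, 2   # seconds, per transition

def total_cycle_time(green_times):
    """Total cycle length: all green phases plus one yellow and one
    all-red transition after each lane's green."""
    n = len(green_times)
    return sum(green_times) + n * (YELLOW_TIME + ALL_RED_TIME)

print(total_cycle_time([75, 57, 35, 22]))   # → 209
```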

## 7. Advanced Features

### Feature 1: Vehicle Tracking with DeepSORT

Add tracking to maintain vehicle IDs across frames:

```python
from deep_sort_realtime.deepsort_tracker import DeepSort

tracker = DeepSort(max_age=30, n_init=3, nms_max_overlap=1.0)

# In the detection loop
tracks = tracker.update_tracks(detections, frame=frame)
for track in tracks:
    if not track.is_confirmed():
        continue
    track_id = track.track_id
    bbox = track.to_ltrb()
    # Draw the tracked vehicle using track_id and bbox
```

### Feature 2: Emergency Vehicle Detection

Prioritize emergency vehicles (pseudocode):

```python
def detect_emergency_vehicle(frame, audio=None):
    # Visual detection (strobe lights)
    # or audio detection (siren)
    if emergency_detected:
        return True
    return False

# In signal control
if emergency_detected:
    # Override normal timing
    switch_to_green_immediately(emergency_lane)
```

### Feature 3: Pedestrian Crossing Integration

Add pedestrian detection:

```python
# In the YOLO classes, include 'person' (COCO class 0)
PEDESTRIAN_CLASS = 0

def check_pedestrian_crossing(detections):
    pedestrian_count = sum(1 for det in detections
                           if int(det[5]) == PEDESTRIAN_CLASS)
    return pedestrian_count > 0

# In signal logic (pseudocode)
if pedestrian_crossing_requested:
    all_signals_red()
    pedestrian_signal_green(duration=15)
```

## 8. Performance Optimization

### GPU Acceleration

Ensure CUDA is properly configured:

```python
import torch

print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA device: {torch.cuda.get_device_name(0)}")

# Force GPU usage
device = torch.device('cuda:0')
model.to(device)
```

### Model Selection Trade-offs

| Model   | Speed (FPS) | Accuracy (mAP) | Use Case                |
|---------|-------------|----------------|-------------------------|
| YOLOv5n | 100+        | ~40%           | Edge devices, real-time |
| YOLOv5s | 60-80       | ~55%           | Recommended balance     |
| YOLOv5m | 40-50       | ~65%           | Higher accuracy needed  |
| YOLOv5l | 25-35       | ~69%           | High-end systems        |
| YOLOv5x | 15-20       | ~71%           | Maximum accuracy        |

### Optimization Tips

1. Reduce input size: use 416×416 instead of 640×640
2. Batch processing: process multiple frames together
3. Model quantization: convert to FP16 or INT8
4. Region of interest: process only relevant image areas
5. Frame skipping: process every 2nd or 3rd frame
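Tip 5 (frame skipping) is easy to sketch as a generator that passes through every Nth frame (hypothetical helper; it works on any iterable of frames, such as a `cv2.VideoCapture` read loop):

```python
def skip_frames(frames, step=2):
    """Yield every `step`-th frame, dropping the rest."""
    for i, frame in enumerate(frames):
        if i % step == 0:
            yield frame

# With step=3, a 30 FPS stream is effectively processed at 10 FPS.
# Demonstrated here with frame indices standing in for frames:
kept = list(skip_frames(range(10), step=3))
print(kept)   # → [0, 3, 6, 9]
```

Because signal timings only update every few seconds, skipping frames rarely hurts the counts while substantially reducing inference load.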

## 9. Troubleshooting

### Common Issues & Solutions

Issue 1: Low FPS / slow processing

- Use a smaller YOLO model (yolov5n or yolov5s)
- Reduce the image size in config
- Enable the GPU if available
- Process fewer lanes simultaneously

Issue 2: Poor detection accuracy

- Adjust the confidence threshold (try 0.3-0.5)
- Train a custom model on your specific footage
- Improve camera positioning and lighting
- Clean the camera lens

Issue 3: Signal timing too short/long

- Adjust MIN_GREEN and MAX_GREEN in config
- Modify the time_per_vehicle parameter
- Change the vehicle weight values
- Increase the update frequency

Issue 4: Memory issues

- Reduce the batch size
- Use a smaller model
- Process frames in chunks
- Clear the GPU cache: torch.cuda.empty_cache()

Issue 5: Video not opening

- Check video codec compatibility
- Install ffmpeg support: pip install ffmpeg-python
- Try a different video format
- Verify the file path is correct

## 10. Data Logging & Analysis

### Traffic Data Logging

The system automatically logs data to CSV.

Log file format (traffic_log.csv):

```
timestamp,lane_id,total_vehicles,car,motorcycle,bus,truck,bicycle
2025-10-19 15:30:00,0,15,12,2,1,0,0
2025-10-19 15:30:01,1,8,6,1,0,1,0
2025-10-19 15:30:02,2,20,15,3,2,0,0
```
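A minimal sketch of a logger producing this row format (hypothetical helper — the actual system may write rows differently; shown here against an in-memory buffer rather than a file):

```python
import csv
import io
from datetime import datetime

FIELDS = ["timestamp", "lane_id", "total_vehicles",
          "car", "motorcycle", "bus", "truck", "bicycle"]

def log_row(writer, lane_id, counts, ts=None):
    """Append one lane's counts as a CSV row in the log format above."""
    ts = ts or datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    row = {"timestamp": ts, "lane_id": lane_id,
           "total_vehicles": sum(counts.values())}
    for cls in FIELDS[3:]:
        row[cls] = counts.get(cls, 0)   # absent classes logged as 0
    writer.writerow(row)

buf = io.StringIO()
w = csv.DictWriter(buf, fieldnames=FIELDS)
w.writeheader()
log_row(w, 0, {"car": 12, "motorcycle": 2, "bus": 1},
        ts="2025-10-19 15:30:00")
print(buf.getvalue().splitlines()[1])
# → 2025-10-19 15:30:00,0,15,12,2,1,0,0
```

In production the same writer would wrap an open file handle (`open('output/traffic_log.csv', 'a', newline='')`) instead of a StringIO buffer.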

### Data Analysis Examples

Python analysis script:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load data
df = pd.read_csv('output/traffic_log.csv')
df['timestamp'] = pd.to_datetime(df['timestamp'])

# Plot traffic by lane over time
plt.figure(figsize=(12, 6))
for lane in df['lane_id'].unique():
    lane_data = df[df['lane_id'] == lane]
    plt.plot(lane_data['timestamp'], lane_data['total_vehicles'],
             label=f'Lane {lane+1}')
plt.xlabel('Time')
plt.ylabel('Number of Vehicles')
plt.title('Traffic Density Over Time')
plt.legend()
plt.xticks(rotation=45)
plt.tight_layout()
plt.savefig('traffic_analysis.png')
plt.show()

# Calculate the average vehicle count per lane
avg_vehicles = df.groupby('lane_id')['total_vehicles'].mean()
print("Average vehicles per lane:")
print(avg_vehicles)
```

## 11. Deployment Considerations

### Hardware Requirements

Minimum:

- CPU: Intel i5 or equivalent
- RAM: 8 GB
- Storage: 50 GB
- Camera: 720p resolution

Recommended:

- CPU: Intel i7 or AMD Ryzen 7
- GPU: NVIDIA GTX 1660 or better (6 GB VRAM)
- RAM: 16 GB
- Storage: 100 GB SSD
- Camera: 1080p resolution, 30 FPS

### Production Deployment

1. Edge computing setup (for on-site deployment)
   - Use an NVIDIA Jetson Nano/Xavier
   - Install Jetson-optimized PyTorch
   - Convert the model to TensorRT for speed
2. Cloud deployment (for centralized processing)
   - Deploy on AWS/GCP/Azure
   - Use GPU instances (e.g., AWS p3.2xlarge)
   - Set up a streaming pipeline
   - Implement load balancing
3. Security considerations
   - Encrypt video streams
   - Secure API endpoints
   - Implement authentication
   - Run regular security audits
   - Comply with privacy regulations (GDPR, etc.)

## 12. Testing & Validation

### Unit Tests

```python
# test_detection.py
import unittest

import cv2

from vehicle_detection import VehicleDetector


class TestVehicleDetection(unittest.TestCase):
    def setUp(self):
        self.detector = VehicleDetector()

    def test_model_loading(self):
        self.assertIsNotNone(self.detector.model)

    def test_detection(self):
        # Load a test image and assert detections exist
        frame = cv2.imread('test_images/traffic_test.jpg')
        detections = self.detector.detect(frame)
        self.assertGreater(len(detections), 0)

    def test_vehicle_counting(self):
        detections = [[100, 150, 200, 250, 0.9, 2]]  # one car
        counts = self.detector.count_vehicles(detections)
        self.assertEqual(counts['car'], 1)


if __name__ == '__main__':
    unittest.main()
```

### Integration Testing

Test the complete workflow:

1. Load test videos
2. Process them through the system
3. Verify signal timing calculations
4. Check output video generation
5. Validate log files

## 13. Future Enhancements

Planned features:

1. Machine learning improvements
   - Traffic flow prediction using LSTM
   - Anomaly detection for accidents
   - Adaptive learning from historical data
2. IoT integration
   - Connect to smart city infrastructure
   - V2X communication (Vehicle-to-Everything)
   - Real-time weather data integration
3. Mobile application
   - Real-time traffic updates
   - Route optimization suggestions
   - Crowdsourced incident reporting
4. Advanced analytics
   - Traffic pattern analysis
   - Peak hour prediction
   - Congestion forecasting

## 14. References & Resources

### Academic Papers

1. "Image processing based Tracking and Counting Vehicles" - DOI: 10.1109/ICECA.2019.8822070
2. "Real-time traffic monitoring system using IoT-aided robotics" - ScienceDirect, 2024

### Online Resources

1. YOLOv5 documentation: https://github.com/ultralytics/yolov5
2. OpenCV tutorials: https://docs.opencv.org/
3. DeepSORT implementation: https://github.com/nwojke/deep_sort
4. Roboflow datasets: https://universe.roboflow.com/

### Datasets for Training

1. KITTI: autonomous driving dataset
2. UA-DETRAC: traffic detection dataset
3. Cityscapes: urban scene understanding
4. COCO: common objects in context

## 15. Support & Contact

### Getting Help

- GitHub Issues: report bugs and request features
- Documentation: refer to inline code comments
- Community: join discussions on the project forums

### Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Submit a pull request

## Conclusion

This implementation guide provides everything needed to build a fully functional smart traffic management system. The modular design allows easy customization and extension. Start with the basic implementation, then gradually add advanced features based on your specific requirements.

Key takeaways:

- YOLOv5 provides accurate real-time vehicle detection
- Dynamic signal timing adapts to traffic conditions
- The system scales from a single intersection to city-wide deployment
- Regular updates and retraining improve accuracy over time

Good luck with your traffic management project! 🚦🚗



Faisal

Cybersecurity graduate focused on SOC operations, threat intelligence, and defensive security. Writing practical, no-BS explanations based on real learning and hands-on analysis.