

High-Level Feature Extraction for Crowd Behaviour Analysis: A Computer Vision Approach

Alessandro Bruno
2022-01-01

Abstract

The advent of deep learning has introduced disruptive techniques with unprecedented accuracy across many fields and scenarios. Tasks such as detecting regions of interest and semantic features in images and video sequences are now tackled effectively, thanks to the availability of publicly accessible, adequately annotated datasets. This paper describes a use case in which a stack of deep learning models is applied to crowd behaviour analysis. The stack consists of two main modules preceded by a pre-processing step. The first module integrates YOLOv5 and DeepSORT to detect and track pedestrians in video sequences from CCTV cameras. The second module ingests each pedestrian's spatial coordinates, velocity, and trajectory to cluster groups of people using the Coherent Neighbor Invariance technique. The method acquires video sequences from cameras overlooking pedestrian areas, such as public parks or squares, to spot possible anomalies in crowd behaviour. By design, the system first checks whether anomalies are underway at the microscale level; it then returns clusters of people at the mesoscale level based on velocity and trajectories. This work is part of the physical behaviour detection module developed for the S4AllCities H2020 project.
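To make the second module's role concrete, below is a minimal sketch of a coherent-neighbour-style grouping step, assuming tracked positions over two consecutive frames (here synthetic). The function names (knn_sets, coherent_groups) and thresholds (K, MIN_COS) are illustrative assumptions, not the paper's actual Coherent Neighbor Invariance implementation; in the described pipeline the per-frame positions would come from the YOLOv5 + DeepSORT module.

```python
# Illustrative sketch (not the paper's implementation): tracked pedestrians
# are merged into one group when their k-nearest-neighbour sets persist
# across two frames and their velocities point the same way -- a simplified
# reading of Coherent Neighbor Invariance. K, MIN_COS, the union-find
# grouping, and the synthetic trajectories are assumptions for this example.
import numpy as np

K = 2          # neighbourhood size (assumed value)
MIN_COS = 0.9  # minimum cosine similarity between velocities (assumed)

def knn_sets(points, k):
    """Indices of the k nearest neighbours of each point."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # a point is not its own neighbour
    return [set(np.argsort(row)[:k]) for row in d]

def coherent_groups(prev_pos, curr_pos):
    """Group track indices by neighbour invariance and velocity alignment."""
    n = len(curr_pos)
    vel = curr_pos - prev_pos                  # displacement per frame step
    nn_prev = knn_sets(prev_pos, K)
    nn_curr = knn_sets(curr_pos, K)

    parent = list(range(n))                    # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in nn_prev[i] & nn_curr[i]:      # neighbours that persisted
            ni, nj = np.linalg.norm(vel[i]), np.linalg.norm(vel[j])
            if ni > 0 and nj > 0 and vel[i] @ vel[j] / (ni * nj) >= MIN_COS:
                parent[find(i)] = find(j)      # merge coherent pair

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) > 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for tracker output: tracks 0-2 walk right together,
    # tracks 3-5 walk up together.
    prev = np.vstack([rng.normal(2, 0.4, (3, 2)), rng.normal(8, 0.4, (3, 2))])
    step = np.array([[1.0, 0.0]] * 3 + [[0.0, 1.0]] * 3)
    curr = prev + step + rng.normal(0, 0.05, (6, 2))
    print(coherent_groups(prev, curr))         # expected: [[0, 1, 2], [3, 4, 5]]
```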
English
2022
https://link.springer.com/chapter/10.1007/978-3-031-13324-4_6
HBAxSCES (Human Behaviour Analysis for Smart City Environment Safety), held in conjunction with ICIAP 2022 (International Conference on Image Analysis and Processing).
Lecce/Online
2022
International
Conference contribution
ICIAP 2022. Lecture Notes in Computer Science
Pages: 59-70
978-3-031-13323-7
978-3-031-13324-4
Switzerland
GEWERBESTRASSE 11, CHAM, CH-6330, SWITZERLAND
SPRINGER INTERNATIONAL PUBLISHING AG
Anonymous experts
Online
Sector INF/01 - Computer Science
I served as Co-Investigator for the S4AllCities research project, which funded the experimental research that led to this publication.
   Smart Spaces Safety and Security for All Cities
   S4AllCities
   European Commission
   Horizon 2020 Framework Programme
   883522

8
Files in this product:

File: HBAxSCES1_paper_6.pdf
Access: Open Access
Type: Pre-print document
Size: 1.18 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10808/50244
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science (ISI): 1