Challenge and Dataset on Large-scale Human-centric Video Analysis in Complex Events (HiEve)


Introduction

The development of modern intelligent cities relies heavily on advances in human-centric analysis technologies. Intelligent multimedia understanding is one of the essential technologies for visual analysis, and it requires many human-centered, event-driven visual understanding tasks such as human pose estimation, pedestrian tracking, and action recognition.

In this grand challenge, we focus on very challenging and realistic tasks of human-centric analysis in various crowded and complex events, including getting on/off a subway, collisions, fighting, and earthquake escape (cf. Figure 1). To the best of our knowledge, few existing human analysis approaches report their performance under such complex events. With this consideration, we further propose a dataset (named Human-in-Events, or HiEve) with large-scale and densely-annotated labels covering a wide range of tasks in human-centric analysis.

Our HiEve dataset includes the currently largest number of poses (>1M), the largest number of complex-event action labels (>56k), and one of the largest numbers of long-duration trajectories (average trajectory length >480). More information and details about our dataset can be found here.

Four challenging tasks are established on our dataset. The challenge aims to bring together researchers in the multimedia and computer vision communities to improve human motion, pose, and action analysis methods in three aspects:

Organize challenges on our large-scale dataset with a comprehensive set of human-centric analysis tasks, and facilitate multimedia & AI research and applications in human-centric understanding.

Encourage and accelerate the development of new techniques in the areas of human-centric analysis and understanding in complex events.

Foster new ideas and directions on “Large-scale human-centric visual analysis in complex events”.

Figure 1: Samples of various complex events such as (a) dining in canteen, (b) earthquake, (c) getting-off train, and (d) bike collision.
           

Organizers

Weiyao Lin

Shanghai Jiao Tong University, China

Guojun Qi

Machine Perception and Learning Lab, USA

Nicu Sebe

University of Trento, Italy

Ning Xu

Adobe Research, USA

Hongkai Xiong

Shanghai Jiao Tong University, China

Mubarak Shah

University of Central Florida, USA

Download

Get all data
Get the paper for our dataset

News

[TOP] June 3, 2022: We have included additional dataset and challenge links for Group Re-ID, In-scene Video Re-ID, and Video Object Segmentation.

[TOP] July 23, 2021: We have added two new tracks on pedestrian detection and object re-identification. See here for the evaluation metrics and submission format of Track-5: Pedestrian Detection in Complex and Crowded Events. See here and here for detailed information on the Re-ID tracks. We welcome researchers to use our dataset to evaluate pedestrian detection and Re-ID algorithms.

[TOP] June 22, 2021: We have simplified the user registration process by removing the administrator activation step. Users can now log in directly after submitting their registration information (no need to wait for administrator activation). See here for the new registration page.

[TOP] August 15, 2020: Our dataset and challenge system have been re-opened to the public as a long-term challenge. We welcome researchers to use our dataset to evaluate their research work. Please refer to the How To section for how to register and use our dataset.

October 15, 2020: Our grand challenge won the Best Grand Challenge Organization Award at ACM Multimedia 2020!

October 13, 2020: Our GC session will take place at ACM MM'20 on Oct. 13 and Oct. 15. The session times are listed here, and the detailed login information for attending can be found here. Please feel free to attend our session.

August 1, 2020: Our challenge at ACM Multimedia 2020 will take place on October 12-16. The top-3 teams in each track will present their methods during the challenge. Please see our challenge page for details.

June 30, 2020: The ACM MM challenge results have been released; please refer to the Top-5 teams or the complete LeaderBoard for detailed information.

March 10, 2020: Website online.

Tracks

  • Track-1: Multi-person Motion Tracking in Complex Events
  • Track-2: Crowd Pose Estimation in Complex Events
  • Track-3: Crowd Pose Tracking in Complex Events
  • Track-4: Person-level Action Recognition in Complex Events
  • Track-5: Pedestrian Detection in Complex and Crowded Events
  • Track-6: Object Re-Identification in Complex and Crowded Events

  Please refer to the Tracks page for detailed information about datasets and tracks.

How To

• Register an account for evaluation and refer to the Tracks page to learn about the datasets and tracks of the challenge.

• Download the datasets from the Data page to train your models; code for evaluating your model's performance on the training set can be obtained here (a minimal local-evaluation sketch is also shown at the end of this section).

• Submit results on the test set to our evaluation server.

• After a while, check your evaluation results on the LeaderBoard page.

More information and rules can be found here.
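As an illustration only, the sketch below shows how one might sanity-check Track-1 tracking results locally against the training-set ground truth before submitting. It assumes MOT-style annotation files (one line per detection: frame, track_id, x, y, w, h, ...) and uses the open-source py-motmetrics package; the file layout and paths are assumptions, and this is not the official evaluation code linked above.

```python
# Minimal local sanity-check for multi-person tracking results, assuming
# MOT-style text files: frame, track_id, x, y, w, h, conf, ...
# (an assumed layout; rely on the official evaluation code for final metrics).
import numpy as np
import motmetrics as mm  # pip install motmetrics


def load_mot(path):
    """Return {frame: (ids, boxes)} parsed from a MOT-style CSV file."""
    data = np.atleast_2d(np.loadtxt(path, delimiter=","))
    frames = {}
    for row in data:
        frame, tid = int(row[0]), int(row[1])
        ids, boxes = frames.setdefault(frame, ([], []))
        ids.append(tid)
        boxes.append(row[2:6])  # x, y, w, h
    return frames


def evaluate(gt_file, result_file):
    gt, hyp = load_mot(gt_file), load_mot(result_file)
    acc = mm.MOTAccumulator(auto_id=True)
    for frame in sorted(gt):
        gt_ids, gt_boxes = gt[frame]
        hyp_ids, hyp_boxes = hyp.get(frame, ([], []))
        # IoU-based distance matrix; pairs with IoU below 0.5 cannot be matched.
        dist = mm.distances.iou_matrix(np.array(gt_boxes),
                                       np.array(hyp_boxes), max_iou=0.5)
        acc.update(gt_ids, hyp_ids, dist)

    mh = mm.metrics.create()
    summary = mh.compute(acc, metrics=["mota", "motp", "idf1"], name="train-seq")
    print(mm.io.render_summary(summary, formatters=mh.formatters,
                               namemap=mm.io.motchallenge_metric_names))


if __name__ == "__main__":
    # Hypothetical file names; replace with your own training-set paths.
    evaluate("gt/1.txt", "results/1.txt")
```

Per-sequence scores computed this way only serve as a rough local check; the official server aggregates over all test sequences and may use additional metrics.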

Paper bib

The paper for our dataset has been released. Please click here to download it. Note that Track-5 and Track-6 are not included in the paper.