Publications

Explore our research publications: papers, articles, and conference proceedings from AImageLab.

Fast Run-Based Connected Components Labeling for Bitonal Images

Authors: Lee, Wonsang; Allegretti, Stefano; Bolelli, Federico; Grana, Costantino

Connected Components Labeling (CCL) is a fundamental task in binary image processing. Since its introduction in the sixties, several algorithmic strategies have been proposed to optimize its execution time. Most CCL algorithms in the literature, including the current state of the art, are designed to work on an input stored with 1 byte per pixel, even though the most memory-efficient format for a binary input uses only 1 bit per pixel. This paper deals with connected components labeling on 1-bit-per-pixel images, also known as 1bpp or bitonal images. An existing run-based CCL strategy is adapted to this input format and optimized with Find First Set hardware operations and a smart management of provisional labels, yielding an efficient solution called Bit-Run Two Scan (BRTS). BRTS is then further optimized by merging pairs of consecutive lines through a bitwise OR and finding runs on this reduced data. This modification is the basis for another new algorithm for bitonal images, Bit-Merge-Run Scan (BMRS). When evaluated on a public benchmark, the two proposals outperform all the fastest competitors in the literature, and therefore represent the new state of the art for connected components labeling on bitonal images.
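
The run-finding step described above is simple enough to illustrate. Below is a minimal Python sketch, not the authors' implementation, of locating foreground runs in a bit-packed row with find-first-set arithmetic; the function names, the integer row encoding, and the final bitwise-OR merge of two rows (the BMRS idea) are illustrative assumptions.

```python
def ffs(x: int) -> int:
    """Index of the least-significant set bit (find-first-set); x must be > 0."""
    return (x & -x).bit_length() - 1

def bit_runs(row: int, width: int):
    """Return (start, end) column intervals of foreground runs in a row
    stored as a bit-packed integer (bit i = pixel in column i)."""
    runs = []
    mask = (1 << width) - 1
    row &= mask
    col = 0
    while (row >> col) != 0:
        start = col + ffs(row >> col)       # first foreground pixel from col
        after = (~row & mask) >> start      # background bits past the run start
        end = width if after == 0 else start + ffs(after)
        runs.append((start, end))           # run covers columns [start, end)
        col = end
    return runs

# BMRS-style reduction: OR two consecutive rows, then find runs on the merge.
upper, lower = 0b00111010, 0b01100010
print(bit_runs(upper | lower, width=8))     # [(1, 2), (3, 7)]
```

A real implementation would map `ffs` to a single hardware instruction and keep the per-run label bookkeeping that this sketch omits.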

2021 Conference proceedings paper

Foreword by general chairs

Authors: Cucchiara, R.; Del Bimbo, A.; Sclaroff, S.

Published in: LECTURE NOTES IN COMPUTER SCIENCE

2021 Conference proceedings paper

FUNGI: FUsioN Gene Integration toolset

Authors: Cervera, Alejandra; Rausio, Heidi; Kähkönen, Tiia; Andersson, Noora; Partel, Gabriele; Rantanen, Ville; Paciello, Giulia; Ficarra, Elisa; Hynninen, Johanna; Hietanen, Sakari; Carpén, Olli; Lehtonen, Rainer; Hautaniemi, Sampsa; Huhtinen, Kaisa

Published in: BIOINFORMATICS

2021 Journal article

Future Urban Scenes Generation Through Vehicles Synthesis

Authors: Simoni, Alessandro; Bergamini, Luca; Palazzi, Andrea; Calderara, Simone; Cucchiara, Rita

Published in: INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION

In this work we propose a deep learning pipeline to predict the visual future appearance of an urban scene. Despite recent advances, generating the entire scene in an end-to-end fashion is still far from being achieved. Instead, here we follow a two-stage approach, where interpretable information is included in the loop and each actor is modelled independently. We leverage a per-object novel view synthesis paradigm, i.e., generating a synthetic representation of an object undergoing a geometrical roto-translation in 3D space. Our model can be easily conditioned with constraints (e.g. input trajectories) provided by state-of-the-art tracking methods or by the user. This allows us to generate a set of diverse, realistic futures starting from the same input in a multi-modal fashion. We visually and quantitatively show the superiority of this approach over traditional end-to-end scene-generation methods on CityFlow, a challenging real-world dataset.
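
Purely for orientation, here is a loose Python sketch of the second, compositional stage described above; the names (`novel_view_model`, `future_top_left`) and the mask convention are assumptions for illustration, not the paper's interface, and the learned novel view synthesis network itself is only stubbed.

```python
import numpy as np

def compose_future_frame(background, actors, novel_view_model):
    """Paste independently synthesized future actors onto an (assumed static)
    background. `novel_view_model(crop, rototranslation)` stands in for any
    learned per-object novel view synthesis network and is expected to return
    an RGB crop plus a same-sized foreground mask."""
    frame = background.copy()
    for actor in actors:
        crop, mask = novel_view_model(actor["crop"], actor["rototranslation"])
        y, x = actor["future_top_left"]        # e.g. from a tracker or the user
        h, w = crop.shape[:2]
        region = frame[y:y + h, x:x + w]
        # Keep background pixels where the mask is empty, actor pixels elsewhere.
        frame[y:y + h, x:x + w] = np.where(mask[..., None] > 0, crop, region)
    return frame
```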

2021 Conference proceedings paper
