Publications by Monica Millunzi

Explore our research publications: papers, articles, and conference proceedings from AImageLab.



A Second-Order Perspective on Model Compositionality and Incremental Learning

Authors: Porrello, Angelo; Bonicelli, Lorenzo; Buzzega, Pietro; Millunzi, Monica; Calderara, Simone; Cucchiara, Rita

2025 Paper in Conference Proceedings

May the Forgetting Be with You: Alternate Replay for Learning with Noisy Labels

Authors: Millunzi, Monica; Bonicelli, Lorenzo; Porrello, Angelo; Credi, Jacopo; Kolm, Petter N.; Calderara, Simone


Forgetting presents a significant challenge during incremental training, making it particularly demanding for contemporary AI systems to assimilate new knowledge in streaming data environments. To address this issue, most approaches in Continual Learning (CL) rely on the replay of a restricted buffer of past data. However, the presence of noise in real-world scenarios, where human annotation is constrained by time limitations or where data is automatically gathered from the web, frequently renders these strategies vulnerable. In this study, we address the problem of CL under Noisy Labels (CLN) by introducing Alternate Experience Replay (AER), which takes advantage of forgetting to maintain a clear distinction between clean, complex, and noisy samples in the memory buffer. The idea is that complex or mislabeled examples, which hardly fit the previously learned data distribution, are most likely to be forgotten. To grasp the benefits of such a separation, we equip AER with Asymmetric Balanced Sampling (ABS): a new sample selection strategy that prioritizes purity on the current task while retaining relevant samples from the past. Through extensive computational comparisons, we demonstrate the effectiveness of our approach in terms of both accuracy and purity of the obtained buffer, resulting in a remarkable average gain of 4.71 percentage points in accuracy with respect to existing loss-based purification strategies. Code is available at https://github.com/aimagelab/mammoth
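The core intuition of AER, that samples forgotten across training phases are likely complex or mislabeled, can be illustrated with a minimal sketch. This is an illustrative toy example, not the authors' implementation (see the linked mammoth repository for that); the array names and the threshold are assumptions.

```python
import numpy as np

# Hypothetical per-phase correctness log for 8 buffered samples:
# correctness[t, i] == 1 if sample i was classified correctly after phase t.
correctness = np.array([
    [1, 1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 0, 0, 0],
])

# A "forgetting event" for sample i: correct at phase t, incorrect at t+1.
forgetting_events = np.maximum(correctness[:-1] - correctness[1:], 0).sum(axis=0)

# Samples forgotten at least once are flagged as likely complex/noisy and
# can be deprioritized when drawing from the replay buffer.
likely_noisy = forgetting_events >= 1
print(forgetting_events.tolist())  # per-sample forgetting counts
print(likely_noisy.tolist())
```

In the paper, this separation is then exploited by Asymmetric Balanced Sampling, which favors samples judged clean on the current task while still retaining informative past samples.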

2024 Paper in Conference Proceedings

Novel continual learning techniques on noisy label datasets

Authors: Millunzi, M.; Bonicelli, L.; Zurli, A.; Salman, A.; Credi, J.; Calderara, S.

Published in: CEUR WORKSHOP PROCEEDINGS


Many Machine Learning and Deep Learning algorithms are widely used with remarkable success in scenarios whose benchmark datasets consist of reliable data. However, they often struggle to handle realistic scenarios, particularly those in the financial sector, where available data constantly vary, increase daily, and may contain noise. As a result, we present an overview of the ongoing research at the AImageLab research laboratory of the University of Modena and Reggio Emilia, in collaboration with AxyonAI, focused on exploring Continual Learning methods in the presence of noisy data, with a special focus on noisy labels. To the best of our knowledge, this is a problem that has received limited attention from the scientific community thus far.

2023 Paper in Conference Proceedings