
Research Activity

The RELATE research group conducts advanced research in machine learning and explainable artificial intelligence, with the goal of developing transparent, secure, and responsible systems aligned with the internationally promoted principles of Trustworthy AI. The group’s research focuses on the analysis, design, and development of models and methodologies that integrate reliability, interpretability, and robustness, addressing both theoretical aspects and practical implications in complex and sensitive domains.

The group’s expertise extends to Computational Science, which combines modeling, parallel computing, optimization, and complex systems analysis, applying High Performance Computing (HPC) techniques to the simulation of physical and geological processes related to natural and environmental risks. In this context, the activities aim to develop a scalable, AI-driven platform architecture that integrates machine learning, HPC, IoT, and agentic reasoning for predictive risk modeling, real-time monitoring, and early-warning analysis, thereby promoting a data-driven and explainable approach to the sustainable management of complex systems.

RELATE investigates explainable-by-design approaches, including interpretable symbolic models, decision trees, rule-based systems, and sparse regression techniques, as well as post-hoc methods (e.g., LIME, SHAP, and counterfactual explanations) designed to make the decision processes of complex models understandable. The research pays particular attention to the analysis of complex data through representation learning, graph neural networks, and interpretable autoencoders, with the aim of extracting meaningful and comprehensible information structures even in the presence of uncertainty and noise.
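The flavor of model-agnostic post-hoc explanation mentioned above can be illustrated with a minimal permutation-based attribution sketch. The toy "black-box" model, its weights, and the input grid are all invented for the example; the flip-counting score is a crude stand-in for LIME/SHAP-style importance, not either method itself.

```python
import random

# Toy "black-box" classifier (an invented example): a weighted sum
# with a decision threshold, treated as opaque by the explainer.
def black_box(x):
    weights = [0.8, 0.1, 0.5]
    return sum(w * xi for w, xi in zip(weights, x)) > 1.0

# Model-agnostic post-hoc attribution by permutation: shuffle one
# feature at a time and count how often the prediction flips.
def permutation_importance(model, data, n_features, seed=0):
    rng = random.Random(seed)
    base = [model(x) for x in data]
    scores = []
    for j in range(n_features):
        shuffled = [x[j] for x in data]
        rng.shuffle(shuffled)
        flips = 0
        for i, x in enumerate(data):
            perturbed = list(x)
            perturbed[j] = shuffled[i]
            if model(perturbed) != base[i]:
                flips += 1
        scores.append(flips / len(data))
    return scores

# Dense grid of inputs in [0, 1]^3 used as the evaluation sample.
grid = [(a / 4, b / 4, c / 4)
        for a in range(5) for b in range(5) for c in range(5)]
importance = permutation_importance(black_box, grid, 3)
```

On this toy model the feature with the largest weight dominates the scores, which is exactly the sanity check one expects from a faithful attribution method.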

In parallel, the group explores emerging paradigms such as self-supervised learning, meta-learning, and neuro-symbolic approaches, to construct models capable of learning from unlabeled data, rapidly adapting to new tasks, and combining the flexibility of statistical learning with the formal structure of symbolic knowledge. Within this framework, research also addresses Agentic AI and Retrieval-Augmented Generation (RAG) systems, oriented toward the development of autonomous, context-aware agents capable of interacting with complex environments. These systems integrate explicit reasoning mechanisms, contextual memory, and dynamic knowledge retrieval, enhancing the ability of intelligent systems to explain, justify, and adapt their decisions while maintaining information traceability and epistemic coherence.
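The retrieval step that grounds a RAG system can be sketched in a few lines. This is a deliberately minimal bag-of-words retriever with an invented corpus and query; a production pipeline would use learned dense embeddings and feed the retrieved passages to a generator.

```python
import math
from collections import Counter

# Embed text as a bag-of-words vector (minimal sketch; real RAG
# systems use learned dense embeddings instead).
def embed(text):
    return Counter(text.lower().split())

# Cosine similarity between two sparse word-count vectors.
def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Retrieval: rank the corpus against the query and keep the top-k
# passages, which would then be prepended to the generator's prompt.
def retrieve(query, corpus, k=2):
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

corpus = [
    "Physical Unclonable Functions provide hardware fingerprints",
    "Graph neural networks learn on relational data",
    "Retrieval augmented generation grounds answers in documents",
]
context = retrieve("how does retrieval augmented generation work", corpus, k=1)
```

Keeping the retrieved passages alongside the generated answer is what preserves the information traceability the text refers to: every claim can be tied back to a source document.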

Another research area concerns Security Intelligence Techniques, aimed at designing and developing intelligent methodologies for data security, privacy, and integrity, with particular attention to secure identification, robust authentication, anti-counterfeiting, and end-to-end supply-chain traceability. These activities leverage cyber-physical security technologies based on Physical Unclonable Functions (PUFs) and distributed Blockchain/DLT solutions.
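The authentication protocol behind PUFs can be illustrated with a toy enrollment/verification flow. A real PUF derives its responses from unclonable physical variations in silicon; in this sketch a per-device secret byte string (invented here) merely stands in for that physical entropy so the protocol steps are visible.

```python
import hashlib

# Stand-in for a PUF: a real one computes the response from physical
# device variations; a hash of a device-specific secret models the
# unique, repeatable challenge-response behavior for illustration only.
def puf_response(device_entropy, challenge):
    return hashlib.sha256(device_entropy + challenge).hexdigest()

# Enrollment: the verifier records challenge-response pairs (CRPs)
# while it still has trusted access to the genuine device.
device = b"\x13\x37\xab\xcd"  # stands in for physical variation
crp_table = {c: puf_response(device, c) for c in (b"c1", b"c2")}

# Verification: the device later proves its identity by answering an
# enrolled challenge; a clone without the physical entropy fails.
def verify(challenge, response):
    return crp_table.get(challenge) == response
```

Each challenge should be used only once in practice, since replaying observed responses would otherwise allow impersonation.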

In parallel, RELATE develops advanced solutions for Deep Image Analysis, aimed at the automatic, semantic, and contextual analysis of images through transparent, reliable, and explainable AI models. The activities include the design of deep neural networks, generative models, semantic segmentation systems, multimodal recognition architectures, and interpretable algorithms capable of providing visual and descriptive explanations of automated decisions, with applications in security, medical diagnostics, and environmental monitoring.

Complementary to interpretable approaches, RELATE also explores Bayesian probabilistic modeling, with emphasis on hierarchical latent factor models that, although not explainable-by-design, allow for the explicit representation of underlying assumptions, the modeling of data variability, and the quantification of uncertainty associated with inference processes, thereby providing an advanced form of epistemic transparency.
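The uncertainty quantification described above can be made concrete with the simplest Bayesian example, a conjugate Beta-Binomial update; the prior and the observed counts are invented for illustration. The point is that inference returns a full posterior, whose variance measures the remaining uncertainty, rather than a bare point estimate.

```python
# Conjugate Beta-Binomial update: observing s successes and f failures
# turns a Beta(alpha, beta) prior over an unknown success rate into a
# Beta(alpha + s, beta + f) posterior.
def beta_posterior(alpha, beta, successes, failures):
    return alpha + successes, beta + failures

# Mean and variance of a Beta(alpha, beta) distribution; the variance
# is the model's quantified uncertainty about the rate.
def beta_mean_var(alpha, beta):
    mean = alpha / (alpha + beta)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return mean, var

# Start from a uniform prior Beta(1, 1) and observe 7 successes in 10
# trials (toy data): the posterior concentrates and variance shrinks.
a, b = beta_posterior(1, 1, 7, 3)
mean, var = beta_mean_var(a, b)
```

The same principle, with latent variables and hierarchical priors, is what makes richer Bayesian models epistemically transparent: every assumption is an explicit prior, and every conclusion carries its uncertainty.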

Goals

RELATE is a multidisciplinary group of researchers and technologists engaged in both basic and applied research in the field of reliable, explainable, and interpretable artificial intelligence. The research activities are developed through synergistic cooperation among several areas, including data mining, (deep) machine learning, computational intelligence, and high-performance computing (HPC). In particular, the group focuses on designing knowledge-based intelligent systems capable of learning from large volumes of data, adapting to complex contexts, and providing meaningful interpretations of models, predictions, and decision-making processes, with applications ranging from scientific research to industrial innovation.

Application Fields

The solutions developed by RELATE find application in highly complex and strategically relevant domains such as High-Performance Computing (HPC), Industry 4.0, Cybersecurity, and Healthcare, addressing challenges related to scalability, data security, model explainability, and computational efficiency.

