
Demolition and Reinforcement of Memories in Spin-Glass-like Neural Networks

Author:
Enrico Ventura
Keywords:
Condensed Matter; Disordered Systems and Neural Networks (cond-mat.dis-nn); Machine Learning (stat.ML)
Journal:
--
Date:
2024-03-04
Abstract
Statistical mechanics has made significant contributions to the study of biological neural systems by modeling them as recurrent networks of interconnected units with adjustable interactions. Several algorithms have been proposed to optimize the neural connections so that the network can perform tasks such as information storage (i.e. associative memory) and learning probability distributions from data (i.e. generative modeling). Among these methods, the Unlearning algorithm, aligned with emerging theories of synaptic plasticity, was introduced by John Hopfield and collaborators. The primary objective of this thesis is to understand the effectiveness of Unlearning in both associative memory models and generative models. First, we demonstrate that the Unlearning algorithm can be reduced to a linear perceptron model that learns from noisy examples featuring specific internal correlations. The choice of structured training data enables an associative memory model to retrieve concepts as attractors of the neural dynamics with sizeable basins of attraction. Subsequently, a novel regularization technique for Boltzmann Machines is presented, which is shown to outperform previously developed methods in learning hidden probability distributions from datasets. The Unlearning rule is derived from this new regularized algorithm and is shown to be comparable, in terms of inferential performance, to traditional Boltzmann Machine learning.
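For readers unfamiliar with the procedure named in the abstract, the following is a minimal Python sketch of classical Hopfield-style unlearning on a toy network: patterns are stored with a Hebbian rule, the network is relaxed from random initial states, and the fixed points reached are weakly subtracted from the couplings. The sizes N and P, the rate eps, and the number of unlearning steps are illustrative choices, not parameters taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    N, P = 100, 10            # neurons and stored patterns (toy sizes)
    eps, n_dreams = 0.01, 50  # unlearning rate and number of unlearning steps

    # Hebbian storage of P random binary patterns
    xi = rng.choice([-1.0, 1.0], size=(P, N))
    J = (xi.T @ xi) / N
    np.fill_diagonal(J, 0.0)

    def relax(J, s, max_sweeps=100):
        # Zero-temperature asynchronous dynamics until a fixed point.
        for _ in range(max_sweeps):
            changed = False
            for i in rng.permutation(len(s)):
                h = J[i] @ s
                if h != 0 and np.sign(h) != s[i]:
                    s[i] = np.sign(h)
                    changed = True
            if not changed:
                break
        return s

    # Unlearning: relax from random states and weaken the attractors reached,
    # which preferentially suppresses spurious (spin-glass-like) minima.
    for _ in range(n_dreams):
        s = relax(J, rng.choice([-1.0, 1.0], size=N))
        J -= (eps / N) * np.outer(s, s)
        np.fill_diagonal(J, 0.0)

Because random initial states fall most often into spurious minima, the subtraction tends to erode those states faster than the stored patterns, enlarging the patterns' basins of attraction.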
PDF: Demolition and Reinforcement of Memories in Spin-Glass-like Neural Networks.pdf