
Working Memory Capacity of ChatGPT: An Empirical Study

Authors:
Dongyu Gong, Xingchen Wan, Dingmin Wang
Keywords:
Computer Science, Artificial Intelligence, Artificial Intelligence (cs.AI), Computation and Language (cs.CL), Neurons and Cognition (q-bio.NC)
Journal:
--
Date:
2023-04-29 16:00:00
Abstract
Working memory is a critical aspect of both human intelligence and artificial intelligence, serving as a workspace for the temporary storage and manipulation of information. In this paper, we systematically assess the working memory capacity of ChatGPT, a large language model developed by OpenAI, by examining its performance in verbal and spatial n-back tasks under various conditions. Our experiments reveal that ChatGPT has a working memory capacity limit strikingly similar to that of humans. Furthermore, we investigate the impact of different instruction strategies on ChatGPT's performance and observe that the fundamental patterns of a capacity limit persist. From our empirical findings, we propose that n-back tasks may serve as tools for benchmarking the working memory capacity of large language models and hold potential for informing future efforts aimed at enhancing AI working memory.
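To illustrate the verbal n-back paradigm the abstract refers to, the sketch below generates a letter sequence with a controlled proportion of n-back matches and scores trial-by-trial match/non-match responses. This is a minimal, hypothetical example of how such stimuli could be constructed and evaluated; the function names, trial count, match rate, and scoring scheme are assumptions for illustration and are not taken from the paper's actual protocol.

```python
import random
import string


def generate_verbal_nback_trials(n: int, num_trials: int = 24, match_rate: float = 0.33):
    """Generate a letter sequence for a verbal n-back task (illustrative only).

    Each trial is one lowercase letter. After the first n trials, roughly
    `match_rate` of trials repeat the letter shown n positions earlier.
    The expected response is "m" for a match and "-" for a non-match.
    """
    letters, answers = [], []
    for i in range(num_trials):
        if i >= n and random.random() < match_rate:
            letters.append(letters[i - n])  # deliberate n-back match
            answers.append("m")
        else:
            # Pick a letter that does NOT match the one n positions back.
            choices = [c for c in string.ascii_lowercase
                       if i < n or c != letters[i - n]]
            letters.append(random.choice(choices))
            answers.append("-")
    return letters, answers


def score_responses(responses, answers):
    """Compute hit rate and false-alarm rate for trial-by-trial responses."""
    hits = sum(r == "m" and a == "m" for r, a in zip(responses, answers))
    false_alarms = sum(r == "m" and a == "-" for r, a in zip(responses, answers))
    n_match = answers.count("m")
    n_nonmatch = answers.count("-")
    hit_rate = hits / n_match if n_match else 0.0
    fa_rate = false_alarms / n_nonmatch if n_nonmatch else 0.0
    return hit_rate, fa_rate


if __name__ == "__main__":
    letters, answers = generate_verbal_nback_trials(n=2)
    print("stimuli:", " ".join(letters))
    print("answers:", " ".join(answers))
```

In a study like the one described, each stimulus letter would be sent to the model one trial at a time (so it cannot look ahead), and hit and false-alarm rates would be compared across increasing values of n to probe the capacity limit.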