Abstract:
Large language models (LLMs) exhibit unprecedentedly rich scaling behaviors. In physics, scaling behavior is closely related to phase transitions, critical phenomena, and field theory. To investigate the phase transition phenomena in LLMs, we reformulated the Transformer architecture as an O(N) model. Our study reveals two distinct phase transitions corresponding to the temperature used in text generation and the model's parameter size, respectively. The first phase transition enables us to estimate the internal dimension of the model, while the second phase transition is of higher depth and signals the emergence of new capabilities. As an application, the energy of the O(N) model can be used to evaluate whether an LLM's parameters are sufficient to learn the training data.
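As background for two quantities named in the abstract, the sketch below illustrates (i) standard temperature-scaled next-token sampling and (ii) the textbook energy of a classical O(N) spin chain. This is a minimal illustration only: the function names, the nearest-neighbor chain geometry, and the random inputs are my own assumptions, and the talk's actual Transformer-to-O(N) reformulation may define the energy differently.

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float, rng=None) -> int:
    """Temperature-scaled softmax sampling over next-token logits (standard practice)."""
    rng = rng or np.random.default_rng()
    scaled = logits / max(temperature, 1e-8)   # T -> 0 approaches argmax; large T flattens the distribution
    probs = np.exp(scaled - scaled.max())      # subtract the max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

def on_chain_energy(spins: np.ndarray, J: float = 1.0) -> float:
    """Energy of a classical O(N) chain, H = -J * sum_i S_i . S_{i+1}, with N-component
    unit vectors S_i (textbook definition; the talk's mapping may differ)."""
    spins = spins / np.linalg.norm(spins, axis=1, keepdims=True)  # enforce |S_i| = 1
    return float(-J * np.sum(spins[:-1] * spins[1:]))

# Illustrative usage with random data (not outputs of any model from the talk)
rng = np.random.default_rng(0)
logits = rng.normal(size=50_000)                        # vocabulary-sized logits, hypothetical
print(sample_next_token(logits, temperature=0.7, rng=rng))
print(on_chain_energy(rng.normal(size=(128, 16))))      # 128 "spins" with N = 16 components
```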
Speaker Bio:
Youran Sun is a Ph.D. student at the Yau Mathematical Sciences Center, Tsinghua University, and received his bachelor's degree from the School of Physics, Peking University. Research interests: large models, machine learning, optimization theory, and the physics therein (AI is science).