The talk discusses vintage large language models (LLMs): models trained exclusively on historical data up to a chosen cutoff date. It explores the challenges of keeping such a corpus free of contamination by later material and of assembling historical datasets large enough for training. Vintage models offer potential for scientific forecasting and for humanistic interaction, giving insight into the state of knowledge at a point in the past; the chief remaining obstacles are the scale of data required and the cost of training.
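The contamination concern above amounts to a temporal-cutoff constraint on the training corpus: no document published after the cutoff may enter it. A minimal sketch of such a filter is shown below; the document structure, field names, and cutoff date are illustrative assumptions, not details from the talk.

```python
from datetime import date

# Assumed cutoff for a hypothetical pre-1900 "vintage" model.
CUTOFF = date(1900, 1, 1)

# Toy corpus; the dict schema ("text", "published") is an assumption.
corpus = [
    {"text": "On the Origin of Species ...", "published": date(1859, 11, 24)},
    {"text": "Annalen der Physik, 1905 ...", "published": date(1905, 6, 30)},
    {"text": "Philosophiae Naturalis Principia ...", "published": date(1687, 7, 5)},
]

def filter_vintage(docs, cutoff):
    """Keep only documents published strictly before the cutoff date."""
    return [d for d in docs if d["published"] < cutoff]

vintage_corpus = filter_vintage(corpus, CUTOFF)
print(len(vintage_corpus))  # the 1905 document is excluded
```

In practice the hard part is not the filter itself but obtaining reliable publication dates at scale, which is one face of the data-requirement obstacle the talk mentions.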












