Reference
Zeng et al. (2024b) introduced "Memorize Step by Step", a method for efficient long-context prefilling in large language models using incremental memory and decremental chunking, published in the Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024).
Authors
Sources
- A Survey of Incorporating Psychological Theories in LLMs - arXiv arxiv.org via serper
Referenced by nodes (1)
- Large Language Models concept