LAMBDA CALCULUS TERM REDUCTION: EVALUATING LLMS' PREDICTIVE CAPABILITIES

Authors

Deineha, O.
DOI:

https://doi.org/10.32689/maup.it.2024.1.7

Keywords:

Lambda Calculus, Large Language Model, reduction process, prompt engineering

Abstract

This study is part of a research series on optimizing compilers and interpreters for functional programming languages. Lambda Calculus was chosen as the most straightforward functional programming language: it can express any operation available in other functional languages while having the simplest syntax. Machine learning methods can uncover relations inside lambda terms that indicate which reduction strategy suits a given term better. Finding such techniques makes it possible to optimize not only lambda term reduction but also interpreters and compilers of functional programming languages. This research aims to scrutinize LLMs' understanding of lambda term reduction by having them predict reduction steps and evaluating the accuracy of those predictions. Artificially generated lambda terms were evaluated using OpenAI's GPT-4 and GPT-3.5 models; due to model constraints and cost considerations, experiments were limited to terms with specific token counts. Results revealed that, despite its larger size, the GPT-4 model did not significantly outperform GPT-3.5 in understanding reduction procedures. Moreover, while the GPT-3.5 model exhibited improved accuracy with reduced token counts, its performance with more complex prompts was suboptimal. This underscores the LLMs' limitations in grasping lambda terms and reduction strategies, especially with larger and more intricate terms. Conclusions. The research concludes that general-purpose LLMs like GPT-3.5 and GPT-4 are inadequate for accurately predicting lambda term reductions and distinguishing between reduction strategies, particularly with larger terms. While fine-tuning may enhance model performance, the current findings highlight the need for further exploration and alternative approaches to achieve a deeper understanding of lambda term reduction using LLMs.
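For illustration only (this sketch is not taken from the paper), the following minimal Haskell program shows what "predicting reduction steps" means operationally: an untyped lambda term datatype, one leftmost-outermost (normal-order) beta-reduction step, and a counter for the number of steps to normal form. All names (Term, step, normalize) are illustrative assumptions, and substitution is simplified by assuming distinct bound-variable names.

-- Minimal illustrative sketch (not from the paper): untyped lambda terms,
-- leftmost-outermost (normal-order) beta-reduction, and a step counter.
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- Substitution, simplified by assuming all bound-variable names are distinct
-- (full capture-avoiding substitution with fresh names is omitted for brevity).
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One leftmost-outermost reduction step, if a redex exists.
step :: Term -> Maybe Term
step (App (Lam x b) a) = Just (subst x a b)
step (App f a)         = case step f of
                           Just f' -> Just (App f' a)
                           Nothing -> App f <$> step a
step (Lam x b)         = Lam x <$> step b
step (Var _)           = Nothing

-- Reduce to normal form, counting beta steps (may diverge if no normal form exists).
normalize :: Term -> (Int, Term)
normalize t = case step t of
  Nothing -> (0, t)
  Just t' -> let (n, nf) = normalize t' in (n + 1, nf)

-- Example: (\x. x x) (\y. y) normalizes in 2 steps to \y. y.
main :: IO ()
main = print (normalize (App (Lam "x" (App (Var "x") (Var "x"))) (Lam "y" (Var "y"))))

Under a different strategy (e.g. applicative-order, innermost-first), the step count for the same term can differ; this strategy-dependent quantity is what the models in the study were asked to predict.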

References

Cummins, C., Seeker, V., Grubisic, D., Elhoushi, M., Liang, Y., Roziere, B., Gehring, J., Gloeckle, F., Hazelwood, K., Synnaeve, G., Leather, H. Large Language Models for Compiler Optimization. ArXiv, 2023. URL: https://arxiv.org/abs/2309.07062.

Dal Lago, U., and Martini, S. On Constructor Rewrite Systems and the Lambda Calculus. Logical Methods in Computer Science, 2012, Volume 8. DOI: 10.2168/LMCS-8(3:12)2012.

Deineha, O. Supervised data extraction from Transformer representation of lambda-terms. Radioelectronic and Computer Systems, 2024. In press.

Deineha, O., Donets, V., & Zholtkevych, G. Deep Learning Models for Estimating Number of Lambda-Term Reduction Steps. ProfIT AI 2023: 3rd International Workshop of IT-professionals on Artificial Intelligence (ProfIT AI 2023), 2023, vol. 3641, pp. 147-156. URL: https://ceur-ws.org/Vol-3641/paper12.pdf.

Deineha, O., Donets, V., & Zholtkevych, G. Estimating Lambda-Term Reduction Complexity with Regression Methods. International Conference "Information Technology and Interactions", 2023, vol. 3624, pp. 147-156. URL: https://ceur-ws.org/Vol-3624/Paper_13.pdf.

Deineha, O., Donets, V., & Zholtkevych, G. Unsupervised Data Extraction from Transformer Representation of Lambda-Terms. Eastern European Journal of Enterprise Technology, 2024. In press.

Deliyannis, E.P., Paul, N., Patel, P.U., & Papanikolaou, M. A comparative performance analysis of ChatGPT 3.5, ChatGPT 4.0 and Bard in answering common patient questions on melanoma. Clinical and Experimental Dermatology, 2023. DOI: 10.1093/ced/llad409.

Dezani-Ciancaglini, M., Ronchi Della Rocca, S., & Saitta, L. Complexity of lambda-term reductions. RAIRO Theor. Informatics Appl., 1979, 13: 257-287. DOI: 10.1051/ita/1979130302571.

Feng, Z., Guo, D., Tang, D., Duan, N., Feng, X., Gong, M., Shou, L., Qin, B., Liu, T., Jiang, D., & Zhou, M. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. Findings of the Association for Computational Linguistics: EMNLP 2020, 2020, pp 1536–1547. DOI: 10.18653/v1/2020.findings-emnlp.139.

Grabmayer, C. Linear Depth Increase of Lambda Terms along Leftmost-Outermost Beta-Reduction. ArXiv, 2019. URL: https://doi.org/10.48550/arXiv.1604.07030.

Liu, C., Lu, S., Chen, W., Jiang, D., Svyatkovskiy, A., Fu, S., Sundaresan, N., & Duan, N. Code Execution with Pre-trained Language Models. Findings of ACL 2023, 2023. URL: https://arxiv.org/abs/2305.05383.

Miranda, B., Shinnar, A., Pestun, V., & Trager, B. Transformer Models for Type Inference in the Simply Typed Lambda Calculus: A Case Study in Deep Learning for Code. ArXiv, 2023. URL: https://arxiv.org/abs/2304.10500.

Pollak, D., Layka, V., & Sacco, A. Functional Programming. Beginning Scala 3, 2020. DOI: 10.1007/978-1-4842-7422-4_4.

Qi, X. Reduction Strategies in Lambda Term Normalization and their Effects on Heap Usage. ArXiv, 2004. URL: https://arxiv.org/abs/cs/0405075.

White, J., Hays, S., Fu, Q., Spencer-Smith, J., & Schmidt, D.C. ChatGPT Prompt Patterns for Improving Code Quality, Refactoring, Requirements Elicitation, and Software Design. ArXiv, 2023. DOI: 10.48550/arXiv.2303.07839.

Yang, Z., Ding, M., Lv, Q., Jiang, Z., He, Z., Guo, Y., Bai, J., & Tang, J. GPT Can Solve Mathematical Problems Without a Calculator. ArXiv, 2023. URL: https://arxiv.org/abs/2309.03241.

New embedding models and API updates. OpenAI Blog, 2024. URL: https://openai.com/blog/new-embedding-models-and-api-updates (accessed 24.04.2024).

Published

2024-07-01

How to Cite

Deineha, O. (2024). LAMBDA CALCULUS TERM REDUCTION: EVALUATING LLMS’ PREDICTIVE CAPABILITIES. Information Technology and Society, (1(12)), 51-55. https://doi.org/10.32689/maup.it.2024.1.7