PROGRAMMING WITH THE HELP OF GENERATIVE ARTIFICIAL INTELLIGENCE SYSTEMS: RISKS AND CHALLENGES

Authors

БОЙКО В., ВАСИЛЕНКО М., СЛАТВІНСЬКА В.

DOI:

https://doi.org/10.32689/maup.it.2023.2.2

Keywords:

artificial intelligence, programming, risk, generative artificial intelligence system, machine learning

Abstract

The article examines the risks associated with the use of generative artificial intelligence (GenAI) systems in programming. The authors note that countries with technologically advanced legal systems, such as Italy and Switzerland, already regulate the use of GenAI with respect to data protection and cybersecurity, and they point to a draft regulation on the administration of generative services in China that makes GenAI service providers responsible for the security and accuracy of generated content. The authors then turn to the risks arising in software and IT product development, in particular the use of LLMAP (Large Language Models for Application Programming). The proposed classification distinguishes passive risks, which arise in the ordinary course of working with GenAI, from active risks, which stem from deliberate misuse. Challenging advertising claims about generative systems, the paper points to their potential incompleteness and the unpredictable quality of the code they produce. Passive risks include errors and hallucinations in GenAI output, difficulties in generating complex code, and uncontrolled dissemination of generated results. Active risks include reverse engineering of databases, hacking of the system, and extraction of "forbidden" data. The authors recommend strict control over the use of GenAI in critical industries that require uninterrupted operation and a low probability of error, and they stress the need for a conscious approach to GenAI, supported by improved technical, organizational, and legislative measures such as database quality control, open access to source code, and the development of audit and control systems.

References

Gray N. A. B. Dendral and meta-dendral – the myth and the reality. Chemometrics and Intelligent Laboratory Systems. Elsevier BV, 1988. Vol. 5, no. 1. P. 11–32. https://doi.org/10.1016/0169-7439(88)80122-9

Murillo A., D’Angelo S. An Engineering Perspective on Writing Assistants for Productivity and Creative Code. 2023. URL: https://cdn.glitch.global/d058c114-3406-43be-8a3c-d3afff35eda2/paper1_2023.pdf

Oppenlaender J. A Taxonomy of Prompt Modifiers for Text-To-Image Generation. arXiv. 2023. 18 p. https://doi.org/10.48550/arXiv.2204.13988 URL: https://arxiv.org/pdf/2204.13988.pdf

Noy S., Zhang W. Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence. SSRN Electronic Journal. Elsevier BV. (March 1, 2023). 15 p. Available at SSRN: https://ssrn.com/abstract=4375283 or http://dx.doi.org/10.2139/ssrn.4375283

Sarkar A., Gordon A. D., Negreanu C., Poelitz C., Ragavan S. S., Zorn B. What is it like to program with artificial intelligence? 2022.

Mettler T. The Road to Digital and Smart Government in Switzerland. Governance and Public Management. Springer International Publishing. 2019. P. 175–186. https://doi.org/10.1007/978-3-319-92381-9_10

Ye J. China proposes measures to manage generative AI services. Reuters. 2023. URL: https://www.reuters.com/technology/china-releases-draft-measures-managing-generative-artificial-intelligence-2023-04-11/ (accessed: 2023-07-06).

Li H., Love P. E. D. Combining rule-based expert systems and artificial neural networks for mark-up estimation. Construction Management and Economics. Informa UK Limited, 1999. Vol. 17, no. 2. P. 169–176.

GitHub Copilot · Your AI pair programmer. URL: https://github.com/features/copilot/ (accessed: 2023-07-06).

Xia X., Bao L., Lo D., Xing Z., Hassan A. E., Li S. Measuring Program Comprehension: A Large-Scale Field Study with Professionals. IEEE Transactions on Software Engineering. 2017. Vol. 44. No. 10. P. 951–976. doi: 10.1109/TSE.2017.2734091.

Athaluri S. A., Manthena S. V., Kesapragada V. S. R. K. M., Yarlagadda V., Dave T., Duddumpudi R. T. S. Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References. Cureus. Springer Science+Business Media LLC. April 11, 2023. doi: 10.7759/cureus.37432

Pangakis N., Wolken S., Fasching N. Automated Annotation with Generative AI Requires Validation. arXiv. 2023. URL: https://arxiv.org/pdf/2306.00176v1.pdf (accessed: 2023-07-09).

Environment Variables in Apache – Apache HTTP Server Version 2.4. URL: https://httpd.apache.org/docs/2.4/env.html (accessed: 2023-07-09).

A Man Sued Avianca Airline. His Lawyer Used ChatGPT. The New York Times. 2023. URL: https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html?smid=tw-nytimes&smtyp=cur (accessed: 2023-07-09).

Lidstone H. K. Ethical Pitfalls When Lawyers are Using Artificial Intelligence. SSRN Electronic Journal. Elsevier BV, 2023. 12 p. Available at SSRN: https://ssrn.com/abstract=4457790 or http://dx.doi.org/10.2139/ssrn.4457790

Klee M. Texas A&M Professor Wrongly Accuses Class of Cheating With ChatGPT. Rolling Stone. 2023. URL: https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-falseclaims-1234736601/ (accessed: 2023-07-09).

Professor Fails Half His Class After ChatGPT Falsely Said It Wrote Their Papers. Business Insider. 2023. URL: https://www.businessinsider.com/professor-fails-students-after-chatgpt-falsely-said-it-wrote-papers-2023-5 (accessed: 2023-07-09).

Lyell D., Coiera E. Automation bias and verification complexity: a systematic review. Journal of the American Medical Informatics Association. Oxford University Press (OUP), 2017. Vol. 24, no. 2. P. 423–431. https://doi.org/10.1093/jamia/ocw105

Clancy J. Breakdowns in Human-AI Partnership: Revelatory Cases of Automation Bias in Autonomous Vehicle Accidents: MA thesis. The University of North Carolina at Chapel Hill University Libraries, 2019. https://doi.org/10.17615/jpah-hc02

Posey B. M. The final destination: Incorporating ’Death by GPS’ into forensic and legal sciences. Science and Justice. Elsevier BV, 2023. Vol. 63, no. 3. P. 421–426. https://doi.org/10.1016/j.scijus.2023.04.005

Lin A. Y., Kuehl K., Schöning J., Hecht B. Understanding "Death by GPS". Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 2017. P. 1154–1166. https://doi.org/10.1145/3025453.3025737

How Hackers Are Using Google To Pwn Your Site – ShoeMoney. 2006. URL: https://www.shoemoney.com/2006/12/26/how-hackers-are-using-google-to-pwn-your-site/ (accessed: 2023-07-06).

Schwartz B. Using Google Code Search To Find Vulnerable Sites. 2006. URL: https://searchengineland.com/usinggoogle-code-search-to-find-vulnerable-sites-10146 (accessed: 2023-07-06).

Liu Y., Deng G., Xu Z., Li Y., Zheng Y., Zhang Y., Zhao L., Zhang T., Liu Y. Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv, 2023. 27 p. https://doi.org/10.48550/arXiv.2305.13860 URL: https://arxiv.org/pdf/2305.13860.pdf (accessed: 2023-07-06).

Published

2023-09-08

How to Cite

БОЙКО, В., ВАСИЛЕНКО, М., & СЛАТВІНСЬКА, В. (2023). PROGRAMMING WITH THE HELP OF GENERATIVE ARTIFICIAL INTELLIGENCE SYSTEMS: RISKS AND CHALLENGES. Information Technology and Society, 2(8), 18–26. https://doi.org/10.32689/maup.it.2023.2.2
