Artificial Intelligence in the Work of a Psychologist: An Example of Analysis of Verbal Forecasts. Part I
- Authors: Kornilov S.A.¹, Kornilova T.V.¹, Ziyi W.¹
- Affiliations:
  1. Moscow State University named after M.V. Lomonosov
- Issue: Vol. 46, No. 4 (2025)
- Pages: 85–98
- Section: Methods and procedures
- URL: https://rjraap.com/0205-9592/article/view/691068
- DOI: https://doi.org/10.31857/S0205959225040084
- ID: 691068
Abstract
The relevance of the research is determined by the need to develop a methodology for using artificial intelligence systems to expand human capabilities, in particular for classifying qualitative data with Large Language Models (LLMs). The purpose of the research described in this first article (of a two-part cycle) was to present a methodological approach that uses large language models to classify individual verbal predictions about possible and impossible events in the future. Chinese participants (n = 149, providing 447 predictions) generated descriptions of events in the Wenciuxin (SoJump) system under three specified conditions: possible, improbable, and impossible. The resulting corpus of open responses in Chinese was analyzed with five modern LLMs: GPT-4, Claude 3.5 Sonnet, Qwen 2.5-72B, Gemini Pro 1.5, and Llama 3.1-70B. After the training procedure, each model was tasked with analyzing the responses along several semantic parameters. An analysis of the responses of these five models revealed both significant similarities and noticeable differences in their approaches to analyzing the participants' verbal predictions. An integrative meta-model synthesized the results into a unified rubric, which allowed reliability testing within each model and comparative analysis between models. The differences revealed in the analytical approaches implemented by the models indicate that, although the models share the ability to identify the main themes and patterns of semantic units represented in the participants' verbal forecasts, they show different strengths in the aspects of text analysis they capture, which significantly expands the capabilities of a research psychologist. The high reliability indicators obtained (consistency both between and within the models) point to the potential of artificial intelligence technologies for applied scientific work with "mixed" methods.
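The article itself contains no code, but the workflow the abstract outlines (several LLMs independently labeling the same open responses, followed by between-model consistency checks and aggregation into a meta-model) can be sketched in a few lines. The sketch below is a hypothetical Python illustration, not the authors' procedure: the classify_with_model() stub, the model identifiers, and the three-category label set are assumptions standing in for the real API calls and coding scheme.

from collections import Counter
from itertools import combinations

CATEGORIES = ["possible", "improbable", "impossible"]  # assumed label set
MODELS = ["gpt-4", "claude-3.5-sonnet", "qwen-2.5-72b", "gemini-1.5-pro", "llama-3.1-70b"]

def classify_with_model(model: str, text: str) -> str:
    """Placeholder for an API call that returns one CATEGORIES label per response."""
    raise NotImplementedError("Wire this to the corresponding LLM provider.")

def collect_labels(responses: list[str]) -> dict[str, list[str]]:
    """Have every model label every response; returns one label list per model."""
    return {m: [classify_with_model(m, r) for r in responses] for m in MODELS}

def pairwise_agreement(labels: dict[str, list[str]]) -> dict[tuple[str, str], float]:
    """Raw percent agreement for every pair of models (between-model consistency)."""
    out = {}
    for a, b in combinations(labels, 2):
        matches = sum(x == y for x, y in zip(labels[a], labels[b]))
        out[(a, b)] = matches / len(labels[a])
    return out

def meta_labels(labels: dict[str, list[str]]) -> list[str]:
    """Majority vote across models: a crude stand-in for the integrative meta-model."""
    n = len(next(iter(labels.values())))
    return [Counter(labels[m][i] for m in labels).most_common(1)[0][0] for i in range(n)]

In practice, raw percent agreement would typically be replaced by a chance-corrected coefficient such as Krippendorff's alpha (see Krippendorff, 2018, in the reference list) when reporting between- and within-model reliability.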
About the authors
S. Kornilov
Moscow State University named after M.V. Lomonosov
Corresponding author.
Email: sa.kornilov@gmail.com
125009, Moscow, Mokhovaya St., 11, bldg. 9, Russia
T. Kornilova
Moscow State University named after M.V. Lomonosov
Email: tvkornilova@mail.ru
125009, Moscow, Mokhovaya St., 11, bldg. 9, Russia
W. Ziyi
Moscow State University named after M.V. Lomonosov
Email: ziyiw480@gmail.com
125009, Moscow, Mokhovaya St., 11, bldg. 9, Russia
References
- Vzorin G.D., Ushakov D.V. Obrazy Cheloveka: ot “fasetochnogo videnija” – k Homo Complexus. Obrazovatel’naja politika. 2023. V. 94, № 2. P. 8–19. doi: 10.22394/2078-838X-2023-2-8-18. (In Russian)
- Znakov V.V. Teoreticheskie osnovanija psihologii vozmozhnogo. Vestnik Sankt-Peterburgskogo universiteta. Psihologija. 2022. V. 12, № 2. P. 122–131. doi: 10.21638/spbu16.2022.202 (In Russian)
- Kornilova T.V. Intellektual’no-lichnostnyj potencial cheloveka v uslovijah neopredelennosti i riska. Sankt-Peterburg: Nestor-Istorija, 2016. (In Russian)
- Kornilova T.V., Tihomirov O.K. Prinjatie intellektual’nyh reshenij v dialoge s komp’juterom. Moscow: Izdatel’stvo MGU, 1990. (In Russian)
- Bender E.M., Koller A. Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020. P. 5185–5198. Association for Computational Linguistics.
- Bran A.M., Cox S., Schilter O. et al. Augmenting large language models with chemistry tools. Nature Machine Intelligence. 2024. V. 6(5). P. 525–535. doi: 10.1038/s42256-024-00832-8
- Brown T., Mann B., Ryder N. et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems. 2020. V. 33. P. 1877–1901.
- Buhr C.R., Smith H., Huppertz T. et al. ChatGPT Versus Consultants: Blinded Evaluation on Answering Otorhinolaryngology Case-Based Questions. Journal of Medical Internet Research Medical Education. 2023. V. 9. P. e49183. doi: 10.2196/49183
- Engelbart D.C. Augmenting Human Intellect: A conceptual framework. Summary Report AFOSR-3233. Stanford Research Institute, 1962.
- Hutchins E. Cognition in the Wild. MIT Press, 1995.
- Krippendorff K. Content analysis: An introduction to its methodology (4th ed.). Sage Publications, 2018.
- Liu S., Fang Y. Use Large Language Models for Named Entity Disambiguation in Academic Knowledge Graphs. In Proceedings of the 2023 3rd International Conference on Education, Information Management and Service Science (EIMSS 2023). New York: Atlantis Press, 2023. P. 681–691.
- Liu M.D., Salganik M.J. Successes and Struggles with Computational Reproducibility: Lessons from the Fragile Families Challenge. Technical Report. OSF, 2019. https://osf.io/preprints/socarxiv/g3pdb/
- Miles M.B., Huberman A.M., Saldaña J. Qualitative data analysis: A methods sourcebook (3rd ed.). Sage Publications, 2014.
- Nasution A.H., Onan A. ChatGPT Label: Comparing the Quality of Human-Generated and LLM-Generated Annotations in Low-Resource Language NLP Tasks. IEEE Access. 2024. V. 12. P. 71876–71900.
- Newell A., Simon H.A. Human problem solving. Prentice-Hall, 1972.
- Ouyang L., Wu J., Jiang X. et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022.
- Saldaña J. The Coding Manual for Qualitative Researchers (3rd ed.). Sage Publications, 2016.
- Wong M.-F. EuclidNet: Deep Visual Reasoning for Constructible Problems in Geometry. Advances in Artificial Intelligence and Machine Learning. 2023. V. 3(1). P. 839–852.
- Zhou R., Chen L.Y.K. Is LLM a Reliable Reviewer? A Comprehensive Evaluation of LLM on Automatic Paper Reviewing Tasks. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). 2024. P. 9340–9351.