Revealing Interaction Patterns in Concept Map Construction Using Deep Learning and Machine Learning Models
Abstract
Concept maps are educational tools for organizing and representing knowledge, enhancing comprehension and memory retention. Constructing a concept map draws on a great deal of knowledge, yet the process is complex, involving actions that reflect a user's thinking and problem-solving strategies. Traditional methods struggle to analyze large datasets and to capture temporal dependencies in these actions. To address this, the study applies deep learning and machine learning techniques. This research evaluates and compares the performance of Long Short-Term Memory (LSTM), K-Nearest Neighbors (K-NN), and Random Forest algorithms in predicting user actions and uncovering interaction patterns in concept map construction. Interaction log data from concept map activities were collected and analyzed, and the three models were evaluated and compared on them. LSTM achieved the highest accuracy (83.91%) owing to its capacity to model temporal dependencies. Random Forest reached 80.53% accuracy, excelling in structured-data scenarios. K-NN offered the fastest performance because of its simplicity, though its reliance on distance-based metrics limited its accuracy to 70.53%. In conclusion, these findings underscore practical considerations in selecting models for concept map applications: LSTM demonstrates effectiveness in predicting user actions and excels on temporal tasks, while Random Forest and K-NN offer computationally more efficient alternatives.
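The comparison described in the abstract can be sketched as a next-action prediction task: encode each user action as a category, use a short window of preceding actions as features, and score classifiers on held-out data. The snippet below is a minimal illustration with a synthetic interaction log; the action names, window size, and data generator are assumptions for demonstration, not the study's dataset, and the LSTM is omitted because it requires a deep-learning framework.

```python
# Illustrative sketch (not the paper's pipeline): predict a user's next
# concept-map action from the preceding actions using scikit-learn models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Hypothetical action vocabulary for a concept-map editor.
ACTIONS = ["add_node", "add_link", "move_node", "edit_label", "delete"]

def make_log(n=2000):
    """Synthetic interaction log: a biased Markov chain over action indices,
    standing in for real concept-map interaction logs."""
    log, state = [], 0
    for _ in range(n):
        probs = np.full(len(ACTIONS), 0.1)
        probs[(state + 1) % len(ACTIONS)] = 0.6  # likely "next" action type
        state = rng.choice(len(ACTIONS), p=probs)
        log.append(state)
    return log

log = make_log()
WINDOW = 3  # number of preceding actions used as features
X = np.array([log[i:i + WINDOW] for i in range(len(log) - WINDOW)])
y = np.array([log[i + WINDOW] for i in range(len(log) - WINDOW)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("K-NN", KNeighborsClassifier(n_neighbors=5)),
                    ("Random Forest", RandomForestClassifier(random_state=0))]:
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {acc:.2%}")
```

An LSTM would instead consume the raw (variable-length) action sequences, which is what lets it model the longer-range temporal dependencies the abstract credits for its higher accuracy.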

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.