MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer https://journal.universitasbumigora.ac.id/index.php/matrik <p style="text-align: justify;"><strong>Matrik : Jurnal Manajemen, Teknik Informatika, dan Rekayasa Komputer</strong>&nbsp;is a peer-reviewed journal dedicated to the exchange of high-quality research results in all aspects of science, engineering, and information technology. The journal publishes state-of-the-art papers in fundamental theory, experiments, and simulation, as well as applications, with a systematically proposed method, a sufficient review of previous works, an expanded discussion, and a concise conclusion. Matrik follows an open access policy that makes published articles freely available online without any subscription.</p> <p style="text-align: justify;">ISSN (Print)&nbsp;1858-4144 || ISSN (Online)&nbsp;2476-9843</p> LPPM Universitas Bumigora en-US MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer 1858-4144 Reducing Transmission Signal Collisions on Optimized Link State Routing Protocol Using Dynamic Power Transmission https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/3899 <p style="text-align: justify;">The many devices connected to a network inevitably produce collisions between communication signals. These collisions are an important cause of degraded network performance, particularly of Quality of Service (QoS) metrics such as throughput, Packet Delivery Ratio (PDR), and end-to-end delay, and they directly affect the success of data transmission by potentially causing data loss or corruption. The aim of this research is to integrate the Dynamic Power Transmission (DPT) algorithm into the Optimized Link State Routing (OLSR) protocol to regulate the communication signal range. The DPT algorithm dynamically adapts the signal coverage distance based on the density of neighboring nodes to reduce signal collisions.
In our protocol, the basic mechanism of the DPT algorithm consists of four steps. The Hello message structure of OLSR is modified to incorporate an "x-y position" coordinate field. Nodes calculate distances to neighbors using these coordinates, which is crucial for route discovery, where all nearby nodes can process route requests. The results show that DPT-OLSR improves network efficiency in busy areas. In particular, the DPT-OLSR routing protocol achieves an average throughput enhancement of 0.93%, a 94.79% rise in PDR, and a 45.69% reduction in end-to-end delay across various node densities. The implication of these results is that the proposed algorithm automatically adapts the transmission power of individual nodes to control the number of neighboring nodes within a defined range. This effectively avoids unwanted interference, unnecessary overhearing, and excessive processing by other nodes, ultimately boosting the network's overall throughput.</p> Lathifatul Mahabbati Andy Hidayat Jatmika Raphael Bianco Huwae ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-09-09 2024-09-09 24 1 1 10 10.30812/matrik.v24i1.3899 Development of Smart Charity Box Monitoring Robot in Mosque with Internet of Things and Firebase using Raspberry Pi https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/4209 <p style="text-align: justify;">Mosques are the center of Muslim communities' spiritual and communal life, thus requiring effective financial management. The purpose of this study was to develop a smart donation box robot that utilizes Internet of Things technology to improve efficiency and transparency in managing donations. The methodology used a prototyping method consisting of Rapid Planning, Rapid Modeling, Construction, and Evaluation stages, which aimed to develop a functional prototype quickly.
The results showed that the smart donation box robot detected and counted banknote denominations with varying degrees of success, achieving a detection success rate of 100% for all tested denominations at an optimal sensor distance of 1 cm. However, the detection rate dropped to 42.86% at 0.5 cm and 28.57% at 1.5 cm, highlighting the significant impact of sensor placement on performance. Coin detection was performed accurately, correctly identifying and sorting denominations without error. This enabled real-time financial monitoring via the Telegram application, significantly increasing transparency for mosque administrators and congregants. The conclusion of this study confirms that IoT technology can substantially improve mosque donation management by automating the collection process and providing real-time monitoring.</p> Nenny Anggraini Zulkifli Zulkifli Nashrul Hakiem ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-01 2024-11-01 24 1 11 24 10.30812/matrik.v24i1.4209 Characterizing Hardware Utilization on Edge Devices when Inferring Compressed Deep Learning Models https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/3938 <p style="text-align: justify;">Implementing edge AI involves running AI algorithms near the sensors. Deep Learning (DL) models have successfully tackled image classification tasks with remarkable performance. However, their demand for substantial computing resources hinders implementation on edge devices. Compressing the model is therefore essential to allow DL models to run on edge devices. Post-training quantization (PTQ) is a compression technique that reduces the bit representation of the model's weight parameters. This study looks at the impact of memory allocation on the latency of compressed DL models on the Raspberry Pi 4 Model B (RPi4B) and NVIDIA Jetson Nano (J. Nano).
This research aims to understand hardware utilization of the central processing unit (CPU), graphics processing unit (GPU), and memory. The study used a quantitative method that controls memory allocation; measures warm-up time, latency, and CPU and GPU utilization; and compares inference speeds of the DL models on the RPi4B and J. Nano. This paper observes the correlation between hardware utilization and the various DL inference latencies. According to our experiments, smaller memory allocations led to higher latency on both the RPi4B and J. Nano. CPU utilization on the RPi4B increases along with the memory allocation; the opposite is observed on the J. Nano, since the GPU carries out the main computation on that device. Regarding computation, a smaller DL model size and smaller bit representation lead to faster inference (lower latency), while a bigger bit representation of the same DL model leads to higher latency.</p> Ahmad Naufal Labiib Nabhaan Rakandhiya Daanii Rachmanto Arief Setyanto ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-06 2024-11-06 24 1 25 38 10.30812/matrik.v24i1.3938 Variation of Distributed Power Control Algorithm in Co-Tier Femtocell Network https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/3992 <p style="text-align: justify;">The wireless communication network has seen rapid growth, especially with the widespread use of smartphones, but resources are increasingly limited, especially indoors. Femtocell, a spectrum-efficient small cellular network solution, faces challenges in distributed power control (DPC) when deployed with distributed users, which impacts power levels and causes interference in the main network. <strong>The aim of this research </strong>is to optimize user power consumption in co-tier femtocell networks through user power treatment.
<strong>This study proposed</strong> Distributed Power Control (DPC) variation methods, namely Distributed Constrained Power Control (DCPC), Half Distributed Constrained Power Control (HDCPC), and Generalized Distributed Constrained Power Control (GDCPC), in a co-tier femtocell network. The research examines scenarios where user power converges but exceeds the maximum threshold or remains semi-feasible, considering factors such as the number of users, distance, channel usage, maximum power values, non-negative power vectors, Signal-to-Interference-plus-Noise Ratio (SINR), and link gain matrix values. In DPC, distance and channel utilization determine the feasibility condition: feasible, semi-feasible, or non-feasible. <strong>The result shows that</strong> HDCPC is more effective than DCPC in semi-feasible conditions due to its efficient power usage and similar SINR. HDCPC is also easier to implement than GDCPC, as it does not require user deactivation when the maximum power limit is exceeded. DPC variations can shift the power and SINR conditions from non-convergence to convergence at or below the maximum power level.
<strong>We concluded</strong> that, among the DPC variations, Half Distributed Constrained Power Control (HDCPC) delivers the best performance.</p> Fatur Rahman Harahap Anggun Fitrian Isnawati Khoirun Ni'amah ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-06 2024-11-06 24 1 39 60 10.30812/matrik.v24i1.3992 Cluster Validity for Optimizing Classification Model: Davies Bouldin Index – Random Forest Algorithm https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/4043 <p style="text-align: justify;">Several factors impact pregnant women's health and mortality rates. The symptoms of disease in pregnant women are often similar, which makes it difficult to evaluate which factors contribute to a low, medium, or high risk of mortality among pregnant women. The purpose of this research is to generate classification rules for maternal health risk using optimal clusters. The optimal cluster is obtained through the cluster validity process. The methods used are K-Means clustering, the Davies Bouldin Index (DBI), and the Random Forest algorithm. These methods build optimal clusters from a set of k-tests to produce the best classification. The optimal clusters, whose members share strong similarities, consist of high-dimensional data; therefore, the Principal Component Analysis (PCA) technique is required to evaluate attribute values. The result of the research is that the best classification rule was obtained from k-tests = 22 on the 20th cluster, with an accuracy of 97% across the low-, mid-, and high-risk classes. The novelty lies in using DBI to prepare the data that the Random Forest will classify. According to the research findings, the classification rules created through optimal clusters are 9.7% better than those obtained without the clustering process.
This demonstrates that optimizing the data groups has implications for enhancing the classification algorithm's performance.</p> Prihandoko Prihandoko Deny Jollyta Gusrianty Gusrianty Muhammad Siddik Johan Johan ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-06 2024-11-06 24 1 61 72 10.30812/matrik.v24i1.4043 Optimizing Currency Circulation Forecasts in Indonesia: A Hybrid Prophet-Long Short Term Memory Model with Hyperparameter Tuning https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/4052 <p style="text-align: justify;">The core problem for decision-makers lies in selecting an effective forecasting method, particularly when faced with the challenges of nonlinearity and nonstationarity in time series data. To address this, hybrid models are increasingly employed to enhance forecasting accuracy. In Indonesia and other Muslim countries, monthly economic and business time series data often include trends, seasonality, and calendar variations. This study compares the performance of the hybrid Prophet-Long Short-Term Memory (LSTM) model with that of its individual components in forecasting such patterned time series. The aim is to identify the best model through a hybrid approach for forecasting time series data exhibiting trend, seasonality, and calendar variations, using the real-life case of currency circulation in South Sulawesi. The goodness of the models is evaluated using the smallest Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) values. The results indicate that the hybrid Prophet-LSTM model demonstrates superior accuracy, especially for predicting currency outflow, with lower MAPE and RMSE values than the standalone models. The LSTM model shows excellent performance for currency inflow, while the Prophet model lags in both inflow and outflow accuracy.
This insight is valuable for Bank Indonesia's strategic planning, aiding in better cash flow prediction and currency stock management.</p> Vivin Nur Aziza Utami Dyah Syafitri Anwar Fitrianto ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-06 2024-11-06 24 1 73 84 10.30812/matrik.v24i1.4052 Enhancing Multiple Linear Regression with Stacking Ensemble for Dissolved Oxygen Estimation https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/4280 <p style="text-align: justify;">Maintaining optimal dissolved oxygen levels is essential for aquatic ecosystems, yet industrial and domestic waste has led to a global decline in dissolved oxygen. Traditional measurement methods, such as oxygen meters and Winkler titration, are often costly or time-consuming. This study aims to improve the Root Mean Square Error, Mean Absolute Error, and R<sup>2</sup> values for estimating dissolved oxygen levels. The research method uses Multiple Linear Regression with various training and testing data splits, both before and after applying polynomial features. The model is further optimized using a stacking technique, with Random Forest Regressor and Gradient Boosting Regressor as base models. The results show that the best model was achieved using the stacking ensemble technique with a 90:10 data split and polynomial features, yielding a Root Mean Square Error of 1.206, a Mean Absolute Error of 0.990, and an R<sup>2</sup> of 0.670. This model also met the assumptions of linear regression, such as residual normality, homoscedasticity, and no autocorrelation of residuals. This study concluded that the ensemble stacking technique and the addition of polynomial features could improve the model for estimating dissolved oxygen values; it also contributes an accessible user interface built with the Gradio Framework, allowing users to estimate dissolved oxygen levels effectively.</p> Rahmaddeni Rahmaddeni M.
Teguh Wicaksono Denok Wulandari Agustriono Agustriono Sang Adji Ibrahim ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-06 2024-11-06 24 1 85 94 10.30812/matrik.v24i1.4280 Optimizing Hotel Room Occupancy Prediction Using an Enhanced Linear Regression Algorithm https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/4254 <p style="text-align: justify;">Predicting the hotel occupancy rate correctly is important in the tourism industry because it has a major impact on revenue and on maintaining a hotel's reputation. With accurate predictions, hotel performance can be optimized regarding resources, staff, and hotel facilities. The linear regression method has been proven to perform causal predictions well. However, this method has several weaknesses, such as assuming a linear functional relationship between the dependent and independent variables, and susceptibility to overfitting or underfitting when building the prediction model. The purpose of this study was to optimize the linear regression model for predicting hotel occupancy rates. The method used was Linear Regression optimized with Polynomial Regression and with the regularization techniques Ridge Regression and Lasso Regression to reduce overfitting. The model evaluation showed that linear regression optimized with Polynomial Regression and Ridge Regression, using the historical occupancy data of the Adiwana Unagi, the historical hotel occupancy rate in Bali, and the number of tourist visits in Bali, gave the best performance, with a mean absolute error of 1.0648, a root mean square error of 2.1036, and an R-squared of 0.9953.
The conclusion of this research was that optimization using Polynomial and Ridge Regression achieved the best evaluation scores, and the prediction model indicates that variable X7 (the number of tourist visits) strongly influences the occupancy-rate prediction.</p> Dewa Ayu Kadek Pramita Ni Wayan Sumartini Saraswati I Putu Dedy Sandana Poria Pirozmand I Kadek Agus Bisena ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-06 2024-11-06 24 1 95 104 10.30812/matrik.v24i1.4254 Blockchain-Based Traditional Weaving Certification and Elliptic Curve Digital Signature https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/4337 <p style="text-align: justify;">Traditional weaving in West Nusa Tenggara is essential to the region's cultural heritage. Many local micro, small, and medium enterprises continue to practice traditional weaving using natural materials. However, the rise of synthetic materials threatens this tradition, making it difficult to distinguish between natural and synthetic woven fabrics. This study aimed to develop a blockchain-based self-certification system to enhance traceability, security, and efficiency using Non-Fungible Tokens. The research method leveraged the Elliptic Curve Digital Signature Algorithm for user authentication and smart contracts to mint Non-Fungible Tokens, ensuring the authenticity and origin of each product. Each product's metadata was signed with a digital signature that anyone could verify, and this signature together with the product metadata formed a certificate. This study resulted in a web prototype with an easy-to-use interface that allows artisans to create certificates and sell their registered works.
This solution aimed to ensure the authenticity of traditional woven products by offering secure and transparent blockchain technology.</p> Pradita Dwi Rahman Heri Wijayanto Royana Afwani Wirarama Wesdawara Ahmad Zafrullah Mardiansyah ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-08 2024-11-08 24 1 105 116 10.30812/matrik.v24i1.4337 Implementation of The Extreme Gradient Boosting Algorithm with Hyperparameter Tuning in Celiac Disease Classification https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/4031 <p style="text-align: justify;">Celiac Disease (CeD) is an autoimmune disorder triggered by gluten consumption that involves the immune system and HLA in the intestine. The global incidence ranges from 0.5% to 1%, with only 30% of cases correctly diagnosed. Diagnosis remains challenging, requiring complex tests such as blood tests, small bowel biopsy, and elimination of gluten from the diet; therefore, a faster and more efficient alternative is needed. Extreme Gradient Boosting (XGBoost), an ensemble machine learning technique that utilizes decision trees, was used to aid in the classification of Celiac disease. The aim of this study was to classify patients into six classes, namely potential, atypical, silent, typical, latent, and none (no disease), based on attributes such as blood test results, clinical symptoms, and medical history. The research method employs 5-fold cross-validation to optimize four parameters: max depth, n estimators, gamma, and learning rate. Experiments were conducted 96 times to find the best combination of parameters. The results show an improvement of 0.45% over the 98.19% accuracy obtained with the default XGBoost parameters.
The best model was obtained in the trial with a max depth of 3, n estimators of 100, a gamma of 0, and learning rates of 0.3 and 0.5, yielding an accuracy of 98.64%, a sensitivity of 98.43%, and a specificity of 99.72%. This research shows that tuning the XGBoost parameters for Celiac disease classification improves classification performance.</p> Roudlotul Jannah Alfirdausy Nurissaidah Ulinnuha Wika Dianita Utami ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-08 2024-11-08 24 1 117 128 10.30812/matrik.v24i1.4031 Population Prediction Using Multiple Regression and Geometry Models Based on Demographic Data https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/4121 <p style="text-align: justify;">Population growth is an important issue because it significantly impacts a country's growth and development. Large population growth can provide potential resources that drive the pace of the economy and national development. On the other hand, it can also create problems of poverty, hunger, unemployment, education, health, and more. The government needs to control population growth to balance it with good population quality. According to data from the Population and Civil Registration Office of Simalungun Regency, the Tanah Jawa sub-district has a high population that continues to increase every year. This population increase affects the population's welfare, most of whom work as laborers and farmers. To overcome this problem, it is necessary to predict the future population so that the government can make the right decisions and policies for controlling it. This study aims to make predictions using two models: Multiple Linear Regression, to find linear equations, and a Geometry Model, for population growth projections.
This study utilizes multiple regression analysis and geometric models with three independent variables, namely birth rate (X1), migration rate (X2), and death rate (X3), and one dependent variable, population size (Y). The results show that the Tanah Jawa sub-district population is expected to increase over the next five years (2024-2028). Predictions show that by 2024 the population is expected to reach 61,178 people, up from 59,589 in 2023. In conclusion, this study can serve as a guide for the authorities in planning strategies and allocating resources, and it makes a significant contribution to estimating population development in the Tanah Jawa region, helping to avert a future population explosion and its negative impacts.</p> M Safii Rika Setiana ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-14 2024-11-14 24 1 129 140 10.30812/matrik.v24i1.4121 Higher Education Institution Clustering Based on Key Performance Indicators using Quartile Binning Method https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/4244 <p style="text-align: justify;">The Key Performance Indicators of Higher Education Institutions (KPI-HEIs) are a crucial component of the internal quality assurance system that supports the achievement of excellence status for higher education institutions. Many private higher education institutions face challenges in independently analyzing the key performance assessment indicators of Private Higher Education Institutions (PHEIs), which often require complex methodological approaches and specialized expertise. This research aims to cluster PHEIs based on their achievement of key performance indicators (KPIs). The research method used descriptive statistics and quartile binning techniques to analyze and cluster data based on the achievement of the KPI-HEIs.
The research results, based on descriptive statistical analysis, identified outliers in eight KPI-HEIs, along with a dominance of zero values in KPI 1, KPI 2, KPI 6, KPI 7, and KPI 8, with the highest proportion reaching 90.91% for KPI 8. Based on these findings, clustering using the quartile binning method produced four clusters of PHEIs based on KPIs: Cluster 1 consists of 19 institutions with poor achievement, Cluster 2 consists of 14 institutions with fair achievement, Cluster 3 consists of 16 institutions with good achievement, and Cluster 4 consists of 17 institutions with very good achievement, which can serve as examples for other institutions. This research concludes that the quartile binning method successfully categorized private higher education institutions into four clusters based on their achievement of KPIs: poor, fair, good, and very good. This outcome demonstrates the effectiveness of the method in understanding the performance distribution of these institutions, and it provides valuable insights for stakeholders to develop data-driven strategies aimed at enhancing educational quality.</p> Virdiana Sriviana Fatmawaty Imam Riadi Herman Herman ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-14 2024-11-14 24 1 141 154 10.30812/matrik.v24i1.4244 Segmentation and Classification of Breast Cancer Histopathological Image Utilizing U-Net and Transfer Learning ResNet50 https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/4186 <p style="text-align: justify;">Breast cancer is the most common type of cancer among the various types of cancer. Approximately 1 in 8 women in the United States die from breast cancer. Early screening and accurate diagnosis are essential for prevention and accelerated treatment intervention. Several artificial intelligence methods have emerged to develop effective segmentation, detection, and classification to determine cancer types.
Although there has been progress in automated algorithms for breast cancer histopathology image analysis, many of these approaches still face several challenges, which this study aims to address. The research method develops a U-Net architecture combined with Transfer Learning using ResNet50 as the encoder path, which aims to improve the model's sensitivity in the segmentation and classification of cancer areas by utilizing the deep hierarchical features extracted by ResNet50. In addition, data augmentation techniques are used to create a diverse and comprehensive training dataset, which improves the model's ability to distinguish between different tissue types and cancer areas. The resulting U-Net with ResNet50 model shows an average IoU of 0.482 and a Dice coefficient of 0.916. This study concludes that integrating U-Net with Transfer Learning ResNet50 improves segmentation and classification accuracy on breast cancer histopathology images and mitigates the problem of high computational requirements. This approach shows significant potential for improving early breast cancer detection and diagnosis.</p> Nella Rosa Sudianjaya Chastine Fatichah ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-15 2024-11-15 24 1 155 166 10.30812/matrik.v24i1.4186 Multi-Algorithm Approach to Enhancing Social Assistance Efficiency Through Accurate Poverty Classification https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/4275 <p style="text-align: justify;">The determination of poverty status in Lombok Utara district depends on criteria such as income, access to health and education services, and housing conditions. These factors are crucial for assessing the level of community welfare and guiding the allocation of social assistance by the district government.
<strong>The purpose</strong> of this study is to address this gap by utilizing advanced data mining techniques to improve the accuracy of poverty status classification in North Lombok, thereby informing more effective social assistance policies. <strong>The method used</strong> in this research comprises the Random Forest (RF), K-Nearest Neighbor (KNN), and Naïve Bayes algorithms, with an 80%/20% training/testing data split. <strong>The findings indicated</strong> that the RF model, which achieved an accuracy of 82.56%, played an important role in this process by effectively distinguishing between different categories of poverty based on these criteria. In comparison, the KNN algorithm achieved an accuracy of 70.94%, and the Naïve Bayes model achieved an accuracy of 53.47%. This means that the RF model is more accurate than the KNN and Naïve Bayes algorithms in predicting or recommending recipients of social assistance from the district government. <strong>The implication</strong> is that RF machine learning can support social service officers in predicting the economic status of the community. The high accuracy of the RF algorithm enhances its role in informing targeted policy decisions and optimizing the effectiveness of social assistance programs. Nonetheless, continuous improvement is essential to refine the model's predictive capabilities and ensure the accuracy and reliability of poverty assessments.
These continuous improvements are essential to effectively alleviate poverty and break the cycle of socio-economic disparities in the region.</p> Christofer Satria Peter Wijaya Sugijanto Anthony Anggrawan I Nyoman Yoga Sumadewa Aprilia Dwi Dayani Rini Anggriani ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-15 2024-11-15 24 1 167 178 10.30812/matrik.v24i1.4275 Integration of Deep Learning and Autoregressive Models for Marine Data Prediction https://journal.universitasbumigora.ac.id/index.php/matrik/article/view/4032 <p style="text-align: justify;">Climate change and human activities significantly affect the dynamics of the marine environment, making accurate predictions essential for resource management and disaster mitigation. Deep learning models such as Long Short-Term Memory excel at capturing non-linear temporal patterns, while autoregressive models handle linear trends to improve prediction accuracy. This study aims to predict sea surface temperature, height, and salinity using deep learning compared with Moving Average and Autoregressive Integrated Moving Average methods. The research methods include spatial gap analysis, temporal variability modeling, and oceanographic parameter prediction. The relationship between parameters is analyzed using the Pearson Correlation method. The dataset is divided into 80% training and 20% test data, with prediction results compared between Long Short-Term Memory, Moving Average, and Autoregressive models. The results show that Long Short-Term Memory performs best, with a Root Mean Squared Error of 0.1096 and a Mean Absolute Error of 0.0982 for salinity at 13 sample points. In contrast, the Autoregressive models produce a Root Mean Squared Error of 0.193 for salinity, 0.055 for sea surface height, and 2.504 for sea surface temperature, with a correlation coefficient of 0.6 between temperature and sea surface height.
In conclusion, the Long Short-Term Memory model excels at predicting salinity because it is able to capture complex non-linear patterns, while Autoregressive models are more suitable for linear data trends and for explaining the relationships between parameters, although their accuracy is lower for salinity prediction. This approach supports accurate marine data prediction for resource management and disaster mitigation.</p> Mukhlis Mukhlis Puput Yuniar Maulidia Achmad Mujib Adi Muhajirin Alpi Surya Perdana ##submission.copyrightStatement## http://creativecommons.org/licenses/by-sa/4.0 2024-11-23 2024-11-23 24 1 179 194 10.30812/matrik.v24i1.4032
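As a rough illustration of the autoregressive baseline used in the final article above, the following sketch fits an AR(p) model by ordinary least squares and rolls it forward to forecast. This is a minimal, self-contained example under our own assumptions (the function names `fit_ar` and `forecast_ar` and the toy series are hypothetical illustrations, not the authors' implementation or data):

```python
import numpy as np

def fit_ar(series, p):
    """Fit an AR(p) model y_t = c + a_1*y_{t-1} + ... + a_p*y_{t-p}
    by ordinary least squares; returns (c, coefficients)."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    # Column k holds lag k+1: y[t-k-1] for each predictable index t = p..n-1.
    lags = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(lags)), lags])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return beta[0], beta[1:]

def forecast_ar(history, c, coeffs, steps):
    """Roll the fitted AR model forward `steps` points beyond `history`."""
    h = list(history)
    for _ in range(steps):
        # zip stops after len(coeffs) items, so only the p most recent
        # values (newest first) enter the prediction.
        h.append(c + sum(a * v for a, v in zip(coeffs, h[::-1])))
    return h[len(history):]
```

For example, fitting `fit_ar(range(10), 1)` on the exactly linear series 0..9 recovers the intercept 1 and lag-1 coefficient 1, and `forecast_ar` then continues the series with 10, 11, 12; on real oceanographic data the residual noise would make the recovered coefficients approximate rather than exact.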