https://e-journal.unair.ac.id/JISEBI/issue/feedJournal of Information Systems Engineering and Business Intelligence2025-03-28T14:03:32+07:00JISEBI Editorial Officejisebi@journal.unair.ac.idOpen Journal Systems<p>Journal of Information Systems Engineering and Business Intelligence (JISEBI) aims to promote high-quality Information Systems (IS) research among academics and practitioners alike, including computer scientists, IS professionals, business managers and other stakeholders in the industry. The journal publishes research articles and systematic reviews in the areas of Information System Engineering and Business Intelligence. The former refers to a multidisciplinary approach to all activities in the development and management of information systems aiming to achieve organizational goals; whereas the latter focuses on techniques to transform raw data into meaningful information for business analysis purposes to achieve sustainable competitive advantage.</p>https://e-journal.unair.ac.id/JISEBI/article/view/56303Multi-task Learning for Named Entity Recognition and Intent Classification in Natural Language Understanding Applications2024-08-02T03:11:13+07:00Rizal Setya Perdanarizalespe@ub.ac.idPutra Pandu Adikaraadikara.putra@ub.ac.id<p><strong>Background:</strong> Understanding human language is the part of research in Natural Language Processing (NLP) known as Natural Language Understanding (NLU). It is a crucial component of NLP applications such as chatbots, which interpret user intent and extract important entities. NLU systems depend on intent classification and named entity recognition (NER), which are crucial for understanding user input and extracting meaningful information.
Beyond chatbots, NLU also plays a pivotal role in other applications that require efficient and precise text understanding.</p> <p><strong>Objective:</strong> This study introduces multi-task learning techniques to improve performance on NLU tasks, especially intent classification and NER in specific domains.</p> <p><strong>Methods:</strong> To achieve this language understanding capability, the strategy is to combine the intent classification and entity recognition tasks in a shared model that exploits shared representations and task dependencies. This approach, known as multi-task learning, leverages the collaborative interaction between these related tasks to enhance performance. The proposed learning architecture is designed to be adaptable to various NLU-based applications, but this work discusses chatbot use cases.</p> <p><strong>Results:</strong> Several experiments demonstrate the effectiveness of the proposed approach on both intent classification and named entity recognition. The results highlight the potential of multi-task learning in chatbot systems for closed domains. The optimal hyperparameters consist of a warm-up step of 60, an early stopping patience of 10, a weight decay of 0.001, a Named Entity Recognition (NER) loss weight of 0.58, and an intent classification loss weight of 0.4.</p> <p><strong>Conclusion:</strong> The performance of the Dual Intent and Entity Transformer (DIET) for both tasks—intent classification and named entity recognition—is highly dependent on the data. This leads to varying performance across hyperparameter combinations.
Our proposed model architecture significantly outperforms those of previous studies on common evaluation metrics.</p> <p><strong><em>Keywords:</em></strong> Natural Language Understanding, Chatbot, Multi-task Learning, Named Entity Recognition</p>2025-03-28T00:00:00+07:00Copyright (c) 2025 The Authors. Published by Universitas Airlangga.https://e-journal.unair.ac.id/JISEBI/article/view/60300Dynamic Sign Language Recognition in Bahasa using MediaPipe, Long Short-Term Memory, and Convolutional Neural Network2025-01-10T09:30:18+07:00Ivana Valentina Lemmuela ivanavlemmuela@gmail.comMewati Ayubmewati.ayub@maranatha.ac.idOscar Karnalimoscar.karnalim@it.maranatha.edu<p><strong>Background:</strong> Communication is important for everyone, including individuals with hearing and speech impairments. For this demographic, sign language is widely used as the primary medium of communication with others who share similar conditions or with hearing individuals who understand sign language. However, communication difficulties arise when individuals with these impairments attempt to interact with those who do not understand sign language.</p> <p><strong>Objective:</strong> This research aims to develop models capable of recognizing sign language movements in Bahasa and converting the detected gestures into corresponding words, with a focus on vocabulary related to religious activities. Specifically, the research examined dynamic sign language in Bahasa, which comprises gestures requiring motion for proper demonstration.</p> <p><strong>Methods:</strong> In accordance with the research objective, a sign language recognition model was developed using a MediaPipe-assisted feature extraction process.
Recognition of dynamic sign language in Bahasa was achieved through the application of Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) methods.</p> <p><strong>Results:</strong> The sign language recognition model developed using a bidirectional LSTM showed the best result, with a testing accuracy of 100%. However, the best result for the CNN alone was 86.67%. The integration of CNN and LSTM improved performance over CNN alone, with the best CNN-LSTM model achieving an accuracy of 95.24%.</p> <p><strong>Conclusion:</strong> The bidirectional LSTM model outperformed the unidirectional LSTM by capturing richer temporal information, considering both past and future time steps. Based on these observations, CNN alone could not match the effectiveness of the bidirectional LSTM, but a combination of CNN with LSTM produced better results. Normalized landmark data was also found to significantly improve accuracy. Accuracy within this context was further influenced by shot type variability and the specific landmark coordinates used. The dataset containing straight-shot videos with x and y coordinates provided more accurate results, unlike datasets comprising videos with shot variation, which typically require x, y, and z coordinates for optimal accuracy.</p> <p><strong><em>Keywords:</em></strong> Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), MediaPipe, Sign Language</p>2025-03-28T00:00:00+07:00Copyright (c) 2025 The Authors.
Published by Universitas Airlangga.https://e-journal.unair.ac.id/JISEBI/article/view/63660Domain-Specific Fine-Tuning of IndoBERT for Aspect-Based Sentiment Analysis in Indonesian Travel User-Generated Content2024-11-22T09:12:13+07:00Rifki Indra Perwirarifki@upnyk.ac.idVynska Amalia Permadivynspermadi@upnyk.ac.idDian Indri Purnamasari dian_indri@upnyk.ac.idRiza Prapascatama Agusdin rizapra@upnyk.ac.id<p><strong>Background:</strong> Aspect-based sentiment analysis (ABSA) is essential in extracting meaningful insights from user-generated content (UGC) in various domains. In tourism, UGC such as Google Reviews offers essential feedback, but the challenges of processing the Indonesian language, including its unique linguistic characteristics, pose difficulties for automatic sentiment and aspect detection. Recent advancements in transformer-based models, such as BERT, have shown great potential in addressing these challenges by providing context-aware embeddings.</p> <p><strong>Objective:</strong> This research aimed to fine-tune IndoBERT, a pre-trained Indonesian language model, to perform information extraction and key aspect detection from tourism-related UGC. The objective was to identify critical aspects of tourism reviews and classify their sentiments.</p> <p><strong>Methods:</strong> A dataset of 20,000 Google Reviews, focusing on 20 tourism destinations in DI Yogyakarta and Jawa Tengah, was collected and preprocessed. Multiple fine-tuning experiments were conducted using a layer-freezing method, adjusting only the top layers of IndoBERT while freezing the others, to determine the optimal configuration. The model's performance was evaluated based on validation loss, precision, recall, and F1-score in aspect detection, as well as overall sentiment classification accuracy.</p> <p><strong>Results:</strong> The best-performing configuration involved freezing the last six layers and fine-tuning the top six layers of IndoBERT, yielding a validation loss of 0.324.
The model achieved precision scores between 0.85 and 0.89 in aspect detection and an overall sentiment classification accuracy of 0.84. Error analysis revealed challenges in distinguishing neutral and negative sentiments and in handling reviews with multiple aspects or mixed sentiments.</p> <p><strong>Conclusion:</strong> The fine-tuned IndoBERT model effectively extracted key tourism aspects and classified sentiments from Indonesian UGC. While the model performed well in detecting strong sentiments, improvements are needed to handle neutral and mixed sentiments better. Future work will explore sentiment intensity analysis and aspect segmentation methods to enhance the model's performance.</p> <p><strong><em>Keywords:</em></strong> Aspect-Based Sentiment Analysis, Fine-tuning, IndoBERT, Sentiment Classification, Tourism Reviews, User-Generated Content</p>2025-03-28T00:00:00+07:00Copyright (c) 2025 The Authors. Published by Universitas Airlangga.https://e-journal.unair.ac.id/JISEBI/article/view/61312BloodCell-YOLO: Efficient Detection of Blood Cell Types Using Modified YOLOv8 with GhostBottleneck and C3Ghost Modules2025-01-10T09:20:58+07:00Mohammad Farid Naufalfaridnaufal@staff.ubaya.ac.idSelvia Ferdiana Kusumaselvia@pens.ac.id<p><strong>Background</strong><strong>: </strong>Diagnosing many medical ailments, including infections, immunological problems, and hematological diseases, depends on the precise and quick identification of blood cells. Conventional methods of blood cell identification may involve skilled pathologists visually inspecting the cells under a microscope, which is a time-consuming procedure. This approach is not appropriate for processing vast amounts of data, because it is slow and prone to human error.</p> <p><strong>Objective</strong>: This study aimed to improve the YOLOv8 architecture, offering a more efficient and simplified model for blood cell identification.
In addition, the main objective was to reduce the computational load and the number of parameters while maintaining or improving detection performance.</p> <p><strong>Methods:</strong> The GhostBottleneck and C3Ghost modules were incorporated into the head and backbone of the YOLOv8 architecture. All versions of YOLOv8 were subjected to the changes, including n, s, m, l, and x. The efficacy of the recommended method was evaluated using a dataset of seven kinds of blood cells, namely basophils, eosinophils, lymphocytes, monocytes, neutrophils, platelets, and red blood cells (RBCs). The proposed method was also tested on the well-known Blood Cell Count and Detection (BCCD) dataset, a common benchmark in this field, to compare its performance with that of past studies.</p> <p><strong>Results:</strong> The use of the GhostBottleneck and C3Ghost modules reduced GFLOPS by 45.56% and the number of parameters by 76.55%. The recommended method achieved a mean average precision (mAP50) of 0.984 on the new cell dataset and 0.94 on the BCCD dataset.</p> <p><strong>Conclusion:</strong> The modifications to the YOLOv8 design significantly increased its blood cell detection efficiency and effectiveness. The improvements showed that the modified model is suitable for real-time use in settings with constrained resources.</p> <p><strong><em>Keywords:</em></strong> Blood Cell Detection, C3Ghost, Ghostbottleneck, YOLOv8</p>2025-03-28T00:00:00+07:00Copyright (c) 2025 The Authors.
Published by Universitas Airlangga.https://e-journal.unair.ac.id/JISEBI/article/view/59873Optimizing Convolutional Neural Networks with Particle Swarm Optimization for Enhanced Hoax News Detection2025-01-07T20:26:59+07:00Aditiya Hermawanaditiya.hermawan@ubd.ac.idLidya Lunardilidya.lunardi@ubd.ac.idYusuf Kurniayusuf.kurnia@ubd.ac.idBenny Daniawanbenny.daniawan@ubd.ac.idJunaedijunaedi@ubd.ac.id<p><strong>Background:</strong> The global spread of hoax news poses significant challenges by misleading the public and undermining trust in media and institutions. This issue is worsened by the rapid spread of misinformation facilitated by digital platforms, which triggers social unrest and threatens national security. To overcome this problem, a reliable and robust method is essential, one that can adapt to the evolving tactics of misinformation.</p> <p><strong>Objective:</strong> This study aimed to improve the accuracy of hoax news detection tools by evaluating the effectiveness of Deep Learning methods, specifically Convolutional Neural Networks (CNNs) enhanced with Particle Swarm Optimization (PSO).</p> <p><strong>Methods:</strong> The dataset was processed by tokenization, stopword removal, and stemming. CNNs were first trained with default parameters, given their effectiveness for text classification. PSO was then used to optimize the main parameters, such as the number of filters, kernel sizes, and learning rate, which were refined iteratively based on validation accuracy.</p> <p><strong>Results:</strong> The trained and optimized CNNs+PSO model was evaluated on its effectiveness in detecting hoax news and misleading articles. The results showed that the optimized CNNs+PSO model was highly effective, achieving an accuracy of 92.06%, a precision of 91.6%, and a recall of 96.19%.
These values validated the model’s ability to accurately classify hoax news in Indonesian.</p> <p><strong>Conclusion:</strong> This study showed that the optimized CNNs+PSO method was highly effective in detecting hoax news and misleading articles, achieving impressive accuracy, precision, and recall rates. The integration demonstrated the potential of CNNs+PSO to mitigate the impact of hoax news, enhance public awareness, and encourage people to evaluate news critically.</p> <p><strong><em>Keywords:</em></strong> Convolutional Neural Networks, Deep Learning, Hoax, Particle Swarm Optimization, Text Mining</p>2025-03-28T00:00:00+07:00Copyright (c) 2025 The Authors. Published by Universitas Airlangga.https://e-journal.unair.ac.id/JISEBI/article/view/60789ChatGPT and Its Impact on Students Assessment Practices in the Higher Educational Sector: A Systematic Review2024-12-27T09:29:11+07:00Lizzy Oluwatoyin Ofusorilizzyofusori@yahoo.co.ukRimuljo Hendradirimuljohendradi@fst.unair.ac.id<p><strong>Background:</strong> Artificial Intelligence (AI) tools are proliferating at a rapid pace, sparing no sector. One tool whose use has grown across sectors is ChatGPT, which mimics the human-like capability of producing ideas. However, there are many concerns about how ChatGPT will change higher education institutions. More worrisome is the risk it poses to the integrity of academic outputs if left unregulated.</p> <p><strong>Objective:</strong> This study examines the influence of ChatGPT on students’ assessment practices in the higher educational sector.</p> <p><strong>Methods:</strong> The study carried out a systematic literature review by gathering data from peer-reviewed academic papers. Initially, 140 research papers were identified.
Thereafter, these papers went through further filtering, and 35 usable papers were selected and included in the study.</p> <p><strong>Results:</strong> This study highlighted the importance of using AI tools such as ChatGPT in the higher education sector, underscoring their advantages and the threats they pose to the sector if their use remains unregulated. The study recommends that institutional policies on the use of AI tools be put in place to guide academic staff, researchers, and learners in the responsible use of ChatGPT for academic work.</p> <p><strong>Conclusion:</strong> While the widespread adoption of ChatGPT is undeniable, there is an urgent need for well-balanced regulation regarding its use within Higher Education Institutions (HEIs). Thus, future research should focus on examining the existing policies and practices related to ChatGPT ethics, privacy, and security in education, and identify gaps and areas for improvement.</p> <p><strong><em>Keywords:</em></strong> ChatGPT, Artificial Intelligence, Chatbot, OpenAI, Higher Education</p>2025-03-28T00:00:00+07:00Copyright (c) 2025 The Authors. Published by Universitas Airlangga.https://e-journal.unair.ac.id/JISEBI/article/view/59700PlatFab: A Platform Engineering Approach to Improve Developer Productivity2024-09-13T09:45:15+07:00Vaishnavi Srinivasanvaishnavi1731@gmail.comManimegalai Rajkumardrrm@psgitech.ac.inSrivatsan Santhanamsrivatsan.santhanam@sap.comArjit Gargarjit.garg@sap.com<p><strong>Background:</strong> Software developers are key players in the IT/ITES business, driving software development by writing high-quality code quickly. Based on user needs, they must adopt evolving technologies and tools to produce efficient and successful software using Software Development Life Cycle (SDLC) principles.
Platform Engineering comprises a set of activities to design, develop and maintain software code, making it a foundation for building software applications.</p> <p><strong>Objective:</strong> This work focuses on reducing the time and effort needed to execute these tasks, boosting software developer productivity through software development workflow automation. The main objectives of the proposed work are to lower the total cost of ownership, standardize software development practices, support cost optimization, and reduce production incidents.</p> <p><strong>Methods:</strong> PlatFab, a Platform Engineering service implemented in an Industrial Budgeting System, is presented in this work. The methodology involves a custom developer portal with a Continuous Integration and Continuous Delivery/Continuous Deployment (CI/CD) pipeline to automate financial workflows and streamline collaborative development. It provides developers with architectural components, containers, infrastructure automation, and service orchestration, helping them concentrate on quality code regardless of implementation effort.</p> <p><strong>Results:</strong> After deploying PlatFab in the organization's software development, build time was reduced by one minute for each service, and 60 MB of storage space was saved for each service. Developers can now handle vulnerability attacks in one day. Before PlatFab, build time was five minutes, 2 GB of storage was used for each service, and vulnerability handling required five days to resolve. Production downtime incidents numbered 12 before PlatFab and were reduced to almost zero after integrating PlatFab.</p> <p><strong>Conclusion:</strong> The results of implementing PlatFab for a Budgeting System service in an IT organization show that it helps developers reduce build time, the number of days to fix vulnerabilities, and the storage requirements of the service.
PlatFab helps the developers complete their projects with quality code in a shorter time, improving their productivity.</p> <p><strong><em>Keywords:</em></strong> Agile Methodology, Budgeting Service, Platform Engineering, Software Development Life Cycle, Service Oriented Architecture.</p>2025-03-28T00:00:00+07:00Copyright (c) 2025 The Authors. Published by Universitas Airlangga.https://e-journal.unair.ac.id/JISEBI/article/view/63521Improving Café Reputation: Machine Learning Analytics for Predicting Customer Engagement on Google Maps2024-11-08T09:34:59+07:00Siti Anisahsiti_anisah@sbm-itb.ac.idMeditya Wasesameditya.wasesa@itb.ac.id<p><strong>Background:</strong> Online reviews are a powerful tool in shaping customer decisions, as they significantly influence a business’s reputation and its ability to attract new customers. Given the growing reliance on digital platforms, understanding engagement levels is crucial for businesses that want to enhance their online presence. By analyzing these customer activities, business owners can leverage Machine Learning (ML) analytics to predict engagement with Google Maps reviews.</p> <p><strong>Objective:</strong> This study aimed to develop the most suitable ML model to predict customer engagement levels for café businesses on Google Maps and to determine the online review features with the greatest impact on engagement. Additionally, the analysis aimed to provide actionable recommendations to help business owners improve their online reputation and engagement strategies.</p> <p><strong>Method:</strong> A total of 5,626 online reviews were collected using web scraping. The data were then preprocessed by extracting major review features, calculating engagement levels, and addressing class imbalance with the SMOTE method. K-Means clustering was used to segment engagement levels, while sentiment analysis with the VADER lexicon was applied to measure sentiment content.
Various ML models were trained and validated using 10-fold cross-validation. Finally, Spearman's correlation analysis was conducted to identify relationships among the features derived from the best-performing ML model.</p> <p><strong>Results:</strong> The analysis showed that the Random Forest model achieved the highest accuracy and PR AUC in predicting engagement levels. The four most influential factors were review length (16.23%), photos (15.57%), total rating (12.35%), and author review count (10.19%). Spearman's correlation analysis showed a positive relationship among review length, photos, and author review count, signifying their combined impact on engagement levels.</p> <p><strong>Conclusion:</strong> This study demonstrated the effectiveness of the Random Forest model in predicting engagement levels of Google Maps reviews. Specifically, the model identified review length, photos, total rating, and author review count as the key factors influencing engagement. These results provide valuable guidance for business owners who want to improve customer engagement and online reputation. Building on this, future studies should explore larger datasets, integrate additional features, and examine how engagement contributes to long-term customer retention.</p> <p><strong><em>Keywords:</em></strong> Online Reputation Management, Customer Engagement, Behavior, Machine Learning, Google Maps Review, Predictive Analytics</p>2025-03-28T00:00:00+07:00Copyright (c) 2025 The Authors. Published by Universitas Airlangga.
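As an illustration of the Spearman's correlation analysis used in the final study above, the following is a minimal pure-Python sketch, not code from the paper: tied values receive the average of their ranks, and the coefficient is the Pearson correlation computed on those ranks. Function names and sample values are illustrative only.

```python
def average_ranks(values):
    """Rank values from 1..n, giving tied values the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)  # undefined if either variable is constant

# A monotonic (but nonlinear) relationship still yields rho = 1:
print(round(spearman([1, 2, 3, 4], [1, 4, 9, 16]), 6))
```

Because rho depends only on ranks, it captures monotonic relationships between features such as review length and photo count without assuming linearity, which is why it suits skewed review data.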