<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>PPG Computação Aplicada</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/1678</link>
<description>PPG Computação Aplicada</description>
<pubDate>Wed, 15 Apr 2026 12:45:12 GMT</pubDate>
<dc:date>2026-04-15T12:45:12Z</dc:date>
<item>
<title>Odisseu: um modelo para serviços inteligentes na indústria 4.0 baseado em análise de históricos de contextos</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13950</link>
<description>Odisseu: um modelo para serviços inteligentes na indústria 4.0 baseado em análise de históricos de contextos
Arruda, Helder Moreira
The Fourth Industrial Revolution, also called Industry 4.0, has been leveraging&#13;
many fields of computing today. Industry 4.0 comprises automated tasks in the manufacturing industry that generate large amounts of data obtained through sensors.&#13;
These data contribute to the interpretation of industrial operations in favor of managerial and technical decision-making. Data Science supports this interpretation due&#13;
to significant technological advances, particularly data processing methods and software tools. In this sense, this thesis presents a model entitled Odisseu that focuses&#13;
on supporting the development of intelligent services aimed at Industry 4.0, using context histories, which represent data from a given entity over a certain period of time.&#13;
The model proposes an ontology that acts as a link between data science methods&#13;
and smart services. Compared to other models, Odisseu seeks to fill a gap that involves monitoring data from input to storage in context histories format, in addition to&#13;
proposing an ontology and a model for generic support to intelligent services in the&#13;
industry. To evaluate the model, two intelligent services are proposed, the first aimed&#13;
at locating people in an industrial plant and the second aiming to estimate the subjective well-being of employees. The services used data from mobile and fixed beacons, vital signs
such as blood volume pulse and electrodermal activity, as well as data from self-report&#13;
questions focused on well-being. The location service achieved 100% accuracy with&#13;
both the Random Forest and Multilayer Perceptron algorithms. The well-being service achieved the best performance with the Random Forest algorithm, reaching 74%&#13;
accuracy.
</description>
<pubDate>Thu, 04 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13950</guid>
<dc:date>2024-04-04T00:00:00Z</dc:date>
</item>
<item>
<title>Estudos empíricos sobre o uso de gamificação em modelagem de software com UML</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13945</link>
<description>Estudos empíricos sobre o uso de gamificação em modelagem de software com UML
Silva Júnior, Ed Wilson Rodrigues
Gamification has emerged as a promising strategy to increase engagement and motivation&#13;
across different domains, including Software Engineering, where UML modeling remains a&#13;
central activity but faces recurring challenges related to adoption and effective use. Despite its potential, little is known about how gamified elements can support the modeling process or improve the quality of the produced models. In this context, this thesis investigates three main gaps: the lack of empirical knowledge on the use of UML in industry, the absence of a quality model to evaluate gamified modeling activities, and the scarcity of evidence regarding the effects of gamification on the quality of UML models. The general objective is to produce empirical knowledge on the use of gamification in software modeling, proposing mechanisms to evaluate the models generated and analyzing the effects of gamified techniques on learning and on the quality of the artifacts. To achieve this, a survey with IT professionals was conducted, followed by the development and evaluation of a gamified quality model, and a series&#13;
of empirical studies—including controlled experiments and a case study—aimed at investigating attributes such as completeness, consistency, motivation, and analytical depth. The results indicate that, although well known, UML still encounters adoption barriers associated with organizational culture and language complexity; they also show that the proposed quality model is perceived as useful to support learning; and they provide evidence that gamified elements can improve engagement, the diversity of the produced artifacts, and the accuracy in detecting inconsistencies. It is concluded that gamification has the potential to enhance both the practice and teaching of software modeling, supporting the creation of more complete and robust UML models and contributing to increased participant motivation.
</description>
<pubDate>Thu, 18 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13945</guid>
<dc:date>2025-12-18T00:00:00Z</dc:date>
</item>
<item>
<title>Developing effective AI and law applications: a methodological proposal for developing natural language processing applications based on transformers, pre-trained models and transfer learning</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13869</link>
<description>Developing effective AI and law applications: a methodological proposal for developing natural language processing applications based on transformers, pre-trained models and transfer learning
Zanuz, Luciano
CONTEXT: The intersection between Artificial Intelligence (AI) and Law has been explored since the early days of AI research. In recent years, both judicial institutions and&#13;
legal professionals have increasingly adopted AI technologies to streamline processes, support legal decision-making, automate repetitive tasks, and extract structured information from legal texts. These applications are predominantly powered by Natural Language Processing (NLP), given the textual nature of legal proceedings. Advances in deep learning, particularly transformer-based architectures and pre-trained language models, have significantly reshaped the development of NLP systems and achieved state-of-the-art results across many downstream&#13;
tasks. PROBLEM: Despite the potential of these advances, there remains a lack of a unified, domain-specific methodology for developing AI applications in the legal field. This gap limits the effective adoption of NLP-based AI solutions in real-world legal contexts. Additionally, most cutting-edge resources are available primarily in English, creating a barrier for Portuguese-speaking legal systems. SOLUTION: This thesis proposes a structured methodology for developing AI and Law applications, grounded in modern NLP paradigms such as transformers, transfer learning, and domain adaptation. The methodology addresses both technical and domain-specific challenges and includes practical components, such as datasets, fine-tuned&#13;
models, and evaluation tools, tailored to the Portuguese legal context. PROPOSED METHOD: The proposed methodology is composed of four main steps and emphasizes interdisciplinary collaboration between legal and technical teams. It defines a clear development flow, from problem definition to deployment, incorporating iterative validation, the creation of application-level datasets, and the use of Explainable AI mechanisms where applicable. RESULTS: The methodology was validated through multiple experiments, including the development of Legal Named Entity Recognition (NER) models that achieved new state-of-the-art results on the LeNER-Br dataset, alongside experiments with context-specific and parameter-efficient fine-tuning techniques to enhance model performance and adaptability. A real-world application for AI-generated judgment reports was implemented, supported by a novel evaluation framework combining automated metrics and human assessment. This work also provides practical resources in Portuguese and insights into the correlation between human and automated evaluations of AI outputs in the legal domain, demonstrating the feasibility and benefits of a structured, domain-adapted approach to AI development in legal applications.
</description>
<pubDate>Fri, 06 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13869</guid>
<dc:date>2025-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>Detecção da autoeficácia em ambientes computacionais de aprendizagem</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13868</link>
<description>Detecção da autoeficácia em ambientes computacionais de aprendizagem
Campos, Kelis Estatiane de
In recent years, interest has grown in studies focused on the area of Affective Computing, with emphasis on affective states, comprising studies that relate emotion and learning. Emotions can influence learning both positively and negatively. This study is therefore directed to the belief of self-efficacy, which concerns the individual's ability to produce certain levels of performance and implies how the person feels, thinks, is motivated, and behaves. In making use of Computer Learning Environments (ACA) to assist in the learning process, studies have shown how emotional factors and the relationship between internal (psychic) and external (environmental) variables are fundamental in the teaching-learning process. A preliminary study, carried out in the form of a systematic mapping, identified a gap pointing to the need for research aimed at detecting self-efficacy or other socio-affective phenomena, since the use of ACA is growing significantly and is a reality present in the different teaching modalities. From this perspective, this study proposed a model for the diagnosis, instruction, and monitoring of academic self-efficacy in Computer Learning Environments, in order to promote self-knowledge and contribute to better learning strategies for students. It is hypothesized that diagnosis, instruction on self-efficacy, and follow-up actions when using ACAs can encourage students to modify their study attitudes and change their learning path, enabling the improvement of academic performance. The study included exploratory and descriptive research, based on quantitative and qualitative techniques. The proposed model, called the Self-efficacy Diagnosis, Prevention and Follow-up Model (MDPAA), involved three stages: diagnosis of the level of self-efficacy, through the creation of the student's self-efficacy form; analysis of student behavior patterns using the Orange tool; and elaboration and evaluation of the Orientation Guide, instructional material created to instruct students about self-efficacy. Based on the results of the experiments performed, it is possible to obtain indications of the applicability and effectiveness of the MDPAA; the model can enable more assertive choices in relation to the academic trajectory, contributing to the teaching-learning process, improving academic performance, and reducing dropout rates in education.
</description>
<pubDate>Mon, 28 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13868</guid>
<dc:date>2025-04-28T00:00:00Z</dc:date>
</item>
<item>
<title>VSAC: um framework de compressão adaptativa para dados de saúde em cidades inteligentes</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13866</link>
<description>VSAC: um framework de compressão adaptativa para dados de saúde em cidades inteligentes
Andrade, Alexandre Luis de
The increasing adoption of wearable devices and smart health technologies has highlighted the need for efficient transmission and storage of physiological data in urban-scale health monitoring systems. In this context, one of the major challenges is the ability to manage large volumes of vital-sign data—such as heart rate, respiratory rate, body temperature, and blood pressure—generated continuously by heterogeneous devices, while ensuring clinical integrity, low latency, and optimized network usage. Current solutions often treat compression techniques in isolation and lack adaptive mechanisms responsive to the clinical condition of each monitored individual. This research addresses that gap by proposing an integrated and context-aware approach for compressing vital-sign data in smart city infrastructures. This study aimed to develop and validate the VSAC (Vital Sign Adaptive Compressor), an adaptive framework for managing the transmission of physiological signals using a combination of lossy and lossless data compression techniques. The framework was designed to dynamically adjust its compression parameters based on the signal type and clinical priority, ensuring a balance between data fidelity and transmission efficiency. The methodology consisted of designing and implementing a two-stage compression prototype in Python, applied to real-world datasets of heart rate collected from wearable devices with different sampling intervals. The evaluation included three operational scenarios—lossless-only, lossy-only, and hybrid compression—tested across datasets of varying sizes and densities. Performance was measured using standard metrics: compression rate, compression time, and
distortion. The tests were repeated ten times per scenario to ensure measurement stability. The results confirmed that the hybrid and adaptive strategy proposed by VSAC outperformed traditional static methods. The framework achieved compression rates up to 46.3% higher than lossless-only approaches while maintaining distortion levels below 10% in most cases, especially in medium and large datasets. The compression time was significantly reduced with the use of LZW in the final stage, particularly for large files.&#13;
The study concludes that the adaptive and hybrid compression approach embodied in VSAC offers a robust and scalable solution for the efficient transmission of vital-sign data in smart healthcare environments. Its context-aware mechanism enables real-time prioritization and efficient resource usage, making it a promising tool for public health policies that rely on connected infrastructures and timely clinical decision-making in smart cities.
</description>
<pubDate>Tue, 23 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13866</guid>
<dc:date>2025-09-23T00:00:00Z</dc:date>
</item>
<item>
<title>MEPCA: a technical model to improve on-chain electronic health records processing</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13695</link>
<description>MEPCA: a technical model to improve on-chain electronic health records processing
Vanin, Fausto Neri da Silva
The integration of blockchain technology within the healthcare industry has garnered significant attention due to its potential to address critical challenges such as data privacy, interoperability, and the integrity of health records. Although electronic health record (EHR) standards such as HL7 FHIR and OpenEHR have established frameworks for data consistency and system interoperability, concerns remain about the privacy and security of sensitive patient information, particularly in light of regulations such as the Health Insurance Portability and Accountability Act (HIPAA), the General Data Protection Regulation (GDPR), and the Lei Geral de Proteção de Dados (LGPD). Most related work stores only the data hash on blockchain nodes, making data validation impossible from a blockchain perspective, which raises the risk of invalid or malicious data being provided. This work introduces the MEPCA model, a novel framework grounded in five core principles that explore the application of blockchain and cryptographic technologies
in the management of health records, focusing on maximizing the use of on-chain resources for the processing of EHR data. Our main contribution is to provide guidance and techniques to maximize the adoption of decentralized solutions in the healthcare industry, with practical use cases and technical analysis. Our model introduces novel elements for secure data sharing, called Data Steward and Shared Data Vault, and proposes an innovative method that generates Zero-Knowledge Proofs of HL7 FHIR required fields for hash digests. We ran technical experiments with Fully Homomorphic Encryption (FHE) algorithms to evaluate on-chain data analysis using a dataset with 1.3 million records, and evaluated on-chain data processing and storage with a dataset of 10 thousand HL7 FHIR records in plain and hash representations. Our findings suggest that maximizing on-chain processing can improve the security and reliability of
health records, offering a robust alternative to traditional off-chain data processing approaches. The adoption of the MEPCA model can bring an evolution to the healthcare industry, allowing society and institutions to have a more secure and efficient digital infrastructure for EHR.
</description>
<pubDate>Sun, 29 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13695</guid>
<dc:date>2024-09-29T00:00:00Z</dc:date>
</item>
<item>
<title>A semantic interoperability model based on NLP for nonstructured health data</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13608</link>
<description>A semantic interoperability model based on NLP for nonstructured health data
Mello, Blanda Helena de
The healthcare domain faces significant challenges in managing the rapidly growing&#13;
volume of data generated daily, particularly in the collection and sharing of this&#13;
information. Healthcare professionals such as physicians, nurses, radiologists,&#13;
cardiologists, surgeons, and other specialists frequently enter patient data into electronic&#13;
systems, often in an open, unstructured textual format. We conducted a literature&#13;
review that reveals several challenges in processing real-world data, with one critical&#13;
issue being the scarcity of tools and dictionaries available in Portuguese for the&#13;
healthcare sector. This gap, coupled with the unique challenges inherent in healthcare&#13;
data processing, adds considerable complexity to extracting and structuring essential&#13;
information from clinical records. Additionally, ensuring data interoperability between&#13;
different healthcare providers becomes challenging when these providers do not initially&#13;
aim for interoperability during data input. Observing these challenges, this research
proposed a model to enable semantic interoperability of clinical notes from electronic&#13;
health record systems. The methodology used in this research has an applied and&#13;
exploratory character, and it has been evaluated through the development of a&#13;
prototype. This approach aims to address some of the current limitations in data&#13;
processing and integration, specifically within the Portuguese healthcare context, and to&#13;
create a flexible model that can handle real-world data more effectively when structuring and sharing it. This research is part of the MyDigitalHealth project, a collaboration
between the university and six hospitals in Porto Alegre, which provided data from&#13;
hospitalized patients who tested positive for COVID-19, ensuring a real-world context&#13;
for data issues. We analyzed the characteristics of the data with respect to&#13;
interoperability between providers and proposed a model that involves hybrid techniques&#13;
for information extraction, lexical normalization, and structure for standard&#13;
harmonization. Thus, we defined a set of experiments using machine learning,&#13;
combining the Transformers architecture for entity recognition with natural language&#13;
processing for lexical normalization and semantic matching and adopting OWL&#13;
ontologies as an intermediary representation structure. The experiments revealed three&#13;
main contributions. First, we developed a specialized annotated dataset, classifying six&#13;
entities with 18,666 validated annotations by specialists in 314 documents. Second, we&#13;
conducted experiments using BERT models fine-tuned on our small dataset for entity&#13;
recognition, achieving 95% accuracy, with precision rates of 90% for classifying entities&#13;
related to Invasive or Therapeutic Procedures and 89% for Disease or Syndrome and&#13;
Diagnostic Procedures. These results demonstrate the model’s effectiveness in extracting&#13;
relevant information from unstructured clinical notes. Third, ontologies as intermediary&#13;
representation structures ensured semantic consistency and enhanced interoperability in&#13;
an independent format. The limitations and opportunities for future studies from this&#13;
research include applying the model to data from different domains, such as nursing&#13;
notes, dentistry, clinical contexts, and accountability records. Another topic is the gap in
term disambiguation and semantic alignment in healthcare data, focusing on linking&#13;
terminologies to structured data, ensuring international coding for clinical data, and&#13;
enabling interoperability across borders. Finally, this research aims to contribute to the&#13;
continuity of citizen healthcare and guide developers and providers in building robust&#13;
and complex platforms that implement the use of healthcare standards. We also expect&#13;
more and more professionals and health managers to improve healthcare worldwide&#13;
through the adoption of international standards within electronic health record systems.
</description>
<pubDate>Wed, 13 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13608</guid>
<dc:date>2024-11-13T00:00:00Z</dc:date>
</item>
<item>
<title>Um modelo preditivo com base na integração de dados numéricos e textuais: um estudo de caso no mercado acionário brasileiro</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13565</link>
<description>Um modelo preditivo com base na integração de dados numéricos e textuais: um estudo de caso no mercado acionário brasileiro
Rosa, Michele Jackeline Andressa
The analysis of movements and prices in the Brazilian stock market has been widely studied, with a recent increase in the use of Artificial Intelligence for this purpose.&#13;
Traditionally, predictive approaches rely on historical numerical data, with an emphasis on&#13;
graphical analysis. However, these techniques have not fully explored the potential of&#13;
fundamental data extracted from technical reports and financial statements, nor have they&#13;
taken advantage of the vast amount of real-time information available through social media and news portals. This study aimed to identify the most effective approach to improving the accuracy of stock price predictions by integrating numerical and textual data, applied to a set of assets in the Brazilian stock market. Various deep learning techniques and models were employed, and the literature review revealed gaps in integrating heterogeneous data. To address these limitations, an approach was proposed that combines numerical and textual data, assessing the impact of this integration on stock price and movement predictions. The textual data includes financial statement information, posts on X (formerly Twitter), and financial and economic news published online. The numerical data consists of historical stock price and volume series, macroeconomic variables, and the Google Trends search index. The proposed model allows for an evaluation of advancements in the processing and integration of numerical and textual data to identify stock price movements in the Brazilian market. Studies were conducted to explore the behavior of numerical and textual data. Additionally, experiments implementing the proposed approach demonstrated a percentage gain in prediction accuracy compared to purely numerical analysis. The results revealed that the inclusion of tweets, news&#13;
(Google News), and technical indicators, along with stock price and volume data, improved forecasting accuracy. When comparing the tested models, the LSTM outperformed the DNN. The collected RMSE values were: PETR4 (0.0114; 0.0111; 0.0210), VALE3 (0.0106; 0.0128; 0.0452), BBDC4 (0.0119; 0.0112; 0.0234), and ITUB4 (0.0117; 0.0119). It is concluded that the integration of heterogeneous data can significantly enhance stock price predictions, contributing to the development of more effective strategies in the financial market.
</description>
<pubDate>Thu, 06 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13565</guid>
<dc:date>2025-02-06T00:00:00Z</dc:date>
</item>
<item>
<title>AIDA: uma arquitetura inteligente para gerenciamento de diabetes</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13564</link>
<description>AIDA: uma arquitetura inteligente para gerenciamento de diabetes
Gubert, Luis Claudio
The incidence of type 2 diabetes mellitus has increased significantly in recent years, emerging as a major cause of morbidity and mortality. Current disease management systems focus on identifying a disease and the methods to cure it. This approach is not suitable for patients with DM2, as the disease is chronic, and care must address the long course of the disease and everyone involved in its follow-up (the patient, the primary care service, doctors, and hospitals). The evolution of portable devices and sensors and the increased use of electronic health records contribute to the growth in the volume of data that can be used to monitor, improve, and individualize treatment. The extensive data availability for each patient makes the analysis process by health services challenging. This fact encourages the use of artificial intelligence (AI) techniques to extract knowledge that can support healthcare professionals' decisions. Considering that care for chronic diseases is long-term and requires continuous monitoring, we propose as a contribution of this work the development of a computational architecture that, based on data collected by sensors and data from the patient's
electronic health record, uses machine learning to find individualized patterns of the course of the disease. The aim is to detect the emergence of comorbidities early and the consequent decline in the patient's health. For this thesis, we propose developing, implementing, and evaluating an intelligent architecture called AIDA for monitoring patients with type 2 diabetes mellitus. We evaluated the model through technology acceptance assessment studies based on the TAM (Technology Acceptance Model) and the application of the SUS (System Usability Scale) in interviews with experts and through the use of the technology by patients. Regarding classification and prediction evaluations, we built a dataset with data from a national clinical center. To evaluate the results, several machine learning models for classification were compared, such as Random Forest, Decision Tree, Logistic Regression, Gradient Boosting, XGBoost, and LightGBM, with hyperparameters tuned to obtain the best results. LSTM (Long Short-Term Memory) networks were used for the prediction model with data collected in the patient's context. The results were evaluated visually and by metrics such as accuracy, precision, sensitivity, specificity, F1-score, and AUROC for classification, and Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Poisson Deviance (MPD) for prediction. The acceptance and usability results showed that both groups, professionals and patients, have positive perceptions about applying technology in the care of chronic diseases. In turn, the best results for classification presented AUC of 0.85, specificity of 0.92, sensitivity of 0.52, precision of 0.46, accuracy of 0.87, and F1-score of 0.49. For prediction, the best results presented RMSE of 38.74, MAE of 31.41, and MPD of 8.82. The results reinforce the hypothesis that it is possible to define a computational model to support and monitor patients with type 2 diabetes mellitus.
</description>
<pubDate>Mon, 01 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13564</guid>
<dc:date>2024-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Método evolutivo para reconfiguração de sistemas de distribuição de energia elétrica com foco em redução nos indicadores coletivos de continuidade em casos de intervenção</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13561</link>
<description>Método evolutivo para reconfiguração de sistemas de distribuição de energia elétrica com foco em redução nos indicadores coletivos de continuidade em casos de intervenção
Keller, Armando Leopoldo
Interventions on power distribution systems are needed for maintenance and expansion; in some cases, interruptions are required, causing power outages. Those interruptions can bring losses to consumers and are monitored by ANEEL through continuity indicators. Through network reconfiguration, it is possible to establish new routes that serve a larger number of consumers, reducing the indicators. Since this is an NP-hard problem, evaluating all possible configurations is infeasible, requiring metaheuristic approaches such as genetic algorithms. This thesis presents methods to model distribution networks using graphs that can be simplified, a graph simplification algorithm, and a modified version of the genetic algorithm that aims to minimize continuity indicators by proposing new configurations that preserve the radiality of the distribution network. Experiments were performed to verify the operation of each step, other algorithms were compared, and the results show that it is possible to achieve adequate configurations. Compared with projects designed by an experienced engineer, the configurations proposed by the system result in indicators equal to or smaller than those obtained by the engineer. A new algorithm for positioning new maneuverable elements in a network, and its impact on network reconfiguration and continuity indicators, is also presented. Compared with the traditional method, the method proposed here, by performing topological analysis and reducing the number of power-flow runs, can speed up the optimizer runtime by up to 138 times, making its application viable to assist the designer. In tests with real networks with a feeder, carried out on a conventional computer, solutions were found in less than 30 seconds. In comparison with the experienced designer, the tool found solutions with indicators equal to or smaller than the designer's.
</description>
<pubDate>Fri, 20 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13561</guid>
<dc:date>2024-12-20T00:00:00Z</dc:date>
</item>
<item>
<title>Athena: um modelo computacional para serviços inteligentes na educação a distância usando históricos de contextos</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13423</link>
<description>Athena: um modelo computacional para serviços inteligentes na educação a distância usando históricos de contextos
Silva, Lídia Martins da
The educational environment has undergone numerous transformations, among them the expansion and democratization of teaching. Distance Education aims to offer a complete, dynamic, and efficient teaching and learning process mediated by technological resources. It has been growing and assuming an important role in the educational environment, and it has been of paramount importance in expanding access to higher education, driven by the use of New Information and Communication Technologies (NICT) and new behaviors in the teaching and learning process. However, the high dropout rate in distance courses has caused many concerns, as not all students who enroll in a course manage to reach the end, for different reasons. Dropout occurs in all teaching modalities, whether face-to-face, blended, or distance learning, in both public and private institutions. Still, distance education needs a deeper look, since it is mediated by technologies. Given this context, this thesis proposes the development of a computational model for intelligent services focused on distance education based on context histories. The model aims to assist managers and teachers in strategic planning, allowing them to monitor the student's academic progress, and to assist students with difficulties in the learning process through recommendations of websites, complementary teaching materials, videos, and other resources. The aim of offering intelligent services is to provide solutions such as monitoring, intervention, recommendation, formation of study groups, motivation, and improvement in the students' learning process, avoiding failure and consequently reducing the dropout rate. The model uses an ontology to represent knowledge in the field of distance education. Furthermore, it explores elements of the students' context that are used in composing context histories. Context history analysis is used to personalize services that deliver useful information for distance learning.
An Athena prototype was created and tested with information from 25 students enrolled in the Technology in Systems Analysis and Development course, supporting the implementation and evaluation of two intelligent services, Forecasting to predict students' academic performance and Grouping for training study group. The results obtained reinforce the hypothesis that it is possible to develop a computational model that generically supports the creation and use of intelligent services for distance education based on students' context histories.
</description>
<pubDate>Fri, 02 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13423</guid>
<dc:date>2024-08-02T00:00:00Z</dc:date>
</item>
<item>
<title>Freya: an event prediction model for power distribution networks</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13422</link>
<description>Freya: an event prediction model for power distribution networks
Aranda, Jorge Arthur Schneider
In the Smart Grids (SGs) context, energy utilities manage extensive data volumes to monitor and optimize distribution networks. Key parameters include voltage, current, and power levels, as well as fault indicators such as short-circuit currents and voltage sags. Detecting and predicting technical failures is essential for preventing power shortages that affect residential and industrial consumers. Predicting future conditions of the SG is essential for identifying potential technical failures. SGs feature hierarchically distributed equipment across large geographical areas, influencing the overall network according to their hierarchical importance and individual historical contexts. Continuous communication between these devices and monitoring centers is vital for effective SG operation. Integrating concepts such as Edge Computing (EC), the Internet of Things (IoT), and Machine Learning (ML) enhances event prediction and operational&#13;
efficiency in SGs. This thesis introduces the Freya model, an intelligent computational framework designed for event prediction in SGs, focusing on energy distribution. Freya’s scientific contribution lies in event prediction at both the equipment and network levels. Comparative analysis shows that Freya uniquely addresses three aspects: (1) operation of remote equipment, (2) context awareness in SGs, and (3) hierarchical importance within the network. These aspects serve as inputs for predictive modeling. Event prediction in Freya consists of three steps. Initially, ML models are applied to individual SG devices. Subsequently, a stacked ML model consolidates these device-level predictions to forecast the network’s overall state. Finally, inferences are made through OntoFreya, the ontology proposed in this thesis. OntoFreya classifies network and equipment events in compliance with energy utility regulations and regulatory standards, enabling proactive maneuvers to mitigate potential issues. The model’s validation uses real-world data from distribution feeders, voltage regulators, reclosers, and various applied scenarios, demonstrating the capability of the Freya model. The Freya model for distribution networks achieved an accuracy of 99.73%, recall of 99.75%, and F1-Score of 99.73%, compared to commonly used models in this type of task, which reached an accuracy of 83.36%, recall of 82.91%, and F1-Score of 83.36%, demonstrating the superiority&#13;
of the Freya model in terms of event prediction metrics.
</description>
<pubDate>Fri, 16 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13422</guid>
<dc:date>2024-08-16T00:00:00Z</dc:date>
</item>
<item>
<title>Um modelo multinível para estruturação de informações contidas em evoluções de prontuários escritos em texto livre</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13421</link>
<description>Um modelo multinível para estruturação de informações contidas em evoluções de prontuários escritos em texto livre
Oliveira, Jezer Machado de
The medical field has undergone a series of transformations with the adoption of new&#13;
technologies. One aspect that has seen significant changes is how patient information is stored. Electronic health records have brought a series of advantages, but they still present some issues. One of these issues is the structuring of the information contained in clinical notes. These notes can be stored in free text, that is, in an unstructured form; in a semi-structured form, containing a list of free-text fields to categorize each piece of information; in a structured form, where each piece of information has a series of specific fields; or a combination of these forms. Greater structuring brings a wealth of information and ease of automated consultation. On the other hand, the medical staff must dedicate more attention when managing clinical notes due to the rules required to maintain their structure. This problem is more evident when migrating from a less structured record to a more structured record, given the impracticality of direct migration. Considering these aspects, this study arises from a concrete need related to the migration of an electronic health record software from a company that used unstructured clinical notes to an electronic health record with structured and semi-structured records. For this migration to be effective, the following software requirements are imperative: that all relevant information is maintained and that it is at least semi-structured and, when possible, fully structured. A systematic review of the state of the art in the field found no proposal that satisfactorily meets these requirements. Considering this context, this work proposes a multi-level model for structuring progress notes written in free text in the Portuguese language. The main requirements of the model are that, in the structuring process, all relevant information from the clinical notes is maintained, that the information is structured at least at the sentence level, and that, when possible, each entity in the sentence is also structured. 
The model consists of a pipeline with two main components: the first is responsible for structuring the information at the sentence level, by dividing the text and individually classifying each sentence in the SOAP notes. At the second level, if possible, structuring is done at the level of its entities, identifying and relating them. To evaluate the viability of the model, a prototype of the pipeline was implemented, using natural language processing and machine learning techniques, such as BERT models, associating its subcomponents with classic NLP tasks such as sentence boundary detection, sentence classification, named entity recognition, relation extraction, and ontology matching. For training and evaluating the pipeline and networks, a database provided by the company&#13;
that motivated the study was used. The database contains 10,000 records and 234,673 clinical notes, of which 15,883 were divided into 100,021 sentences, classified, and structured through a Graphical User Interface (GUI) developed for this task, forming the gold standard for network training. After a series of training and evaluations, the best networks were selected, and the pipeline was implemented. For the final evaluation of this pipeline, 721 records with a total of 10,013 sentences were used, which were also classified using the GUI, forming the gold standard for the final evaluation. The results obtained were compared with those of the pipeline, achieving an accuracy of 0.8641, precision of 0.9493, and F-score of 0.9029 for the first level of structuring, and an accuracy of 0.8354, precision of 0.9382, and F-score of 0.8815 for the second level of&#13;
structuring.
</description>
<pubDate>Wed, 31 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13421</guid>
<dc:date>2024-07-31T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive network management in 6G O-RAN: a framework for dynamic user demands</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13420</link>
<description>Adaptive network management in 6G O-RAN: a framework for dynamic user demands
Bruno, Gustavo Zanatta
The advent of Sixth Generation (6G) mobile networks heralds a transformative era in wireless communication, demanding unprecedented adaptability and energy efficiency to meet burgeoning dynamic user demands. This thesis introduces an innovative framework for adaptive network management within the Open Radio Access Network (O-RAN) paradigm, focusing on the integration of its dynamic architecture. The framework emphasizes the exploration of integration components to manage and optimize the Radio Access Network (RAN). Central to this framework are the architectural components of O-RAN: the Service Management and Orchestration (SMO), the Near-Real-Time RAN Intelligent Controller (Near-RT RIC), and the Non-Real-Time RAN Intelligent Controller (Non-RT RIC), each playing a crucial role in enhancing network adaptability and operational efficiency. The proposed framework leverages the open and intelligent architecture of O-RAN, deploying various applications such as the rApp Energy Savings to optimize energy consumption and data flow management. It aims to establish a new benchmark for network management in the 6G era by integrating real-time data analytics, intelligent policy implementation, and adaptive energy management strategies. While energy savings is a primary use case for validating our dynamic architecture, it is important to note that the framework’s flexibility allows for the integration of other applications as well. The dynamic clustering mechanism for radio nodes, coupled with the RIC, facilitates efficient resource management by adjusting network configurations based on current and predicted traffic loads, significantly improving resource utilization and energy efficiency. A prototype implementation validates the framework under various network conditions and user demands, demonstrating substantial improvements in resource utilization and energy efficiency compared to traditional static network management approaches. 
The adaptive framework significantly reduces energy consumption during low-demand periods while maintaining high performance during peak times. The effectiveness of the proposed framework is demonstrated through its dynamic management of network resources, facilitated by specialized rApps and xApps. These applications interface with the SMO, Near-RT RIC, and Non-RT RIC, contributing to substantial improvements in network performance. Key components such as the modified VespaMgr and A1 Mediator within the Near-RT RIC, along with custom SMO elements, ensure seamless communication and integration across the network. This dynamic architecture has shown significant enhancements in resource utilization and energy efficiency. The results indicate efficient data handling and communication through the O1 interface for VES, validating the framework’s capability to adapt to varying network conditions and demands, thereby establishing new benchmarks in 6G network management.
</description>
<pubDate>Mon, 01 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13420</guid>
<dc:date>2024-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>HealCity: usando dados de sinais vitais dos cidadãos e a técnica de elasticidade para gerência de ambientes de saúde no contexto de cidades inteligentes</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13419</link>
<description>HealCity: usando dados de sinais vitais dos cidadãos e a técnica de elasticidade para gerência de ambientes de saúde no contexto de cidades inteligentes
Fischer, Gabriel Souto
Smart cities can improve the quality of life of citizens by optimizing the utilization of resources. In an IoT-connected environment, people’s health can be constantly monitored, which can help identify medical problems before they become serious. However, overcrowded healthcare environments can lead to long waiting times for patients to receive treatment. The global COVID-19 pandemic has exacerbated this problem. In an increasingly connected environment, such as smart cities, people’s health can be monitored at all times so that situations requiring medical support can be identified in advance. Thus, smart cities would adapt the hospital allocation of healthcare professionals to patient demands based on previously collected data. The literature presents alternatives to address this problem, such as sharing human resources between hospitals, adjustments to work shifts and human resources on-the-fly in order to adjust the capacity to demand. However, there is still a need for a solution that can adjust human resources on-the-fly in multiple healthcare settings, which is the reality of cities. In this context,&#13;
this work presents HealCity, a smart city-focused model for human resources optimization that can monitor patients’ use of healthcare settings and adapt the allocation of health professionals to meet their needs. HealCity uses vital signs data and prediction techniques to anticipate when the demand for a given environment will exceed its capacity and suggests actions to allocate health professionals accordingly. Additionally, HealCity introduces the concept of Multilevel Human Resources Elasticity in Smart Cities, an extension of the concept of resource elasticity in Cloud computing to manage human resources at different levels of a smart city. An algorithm was also developed to manage future patients in the smart city, automatically identifying the appropriate hospital for each prospective patient. HealCity was evaluated by simulating a smart city composed of four healthcare environments and obtained promising results: compared to medical environments with static professional allocation, the model was able to reduce waiting&#13;
time for care by up to 87.62%, with an increase in cost of only 9.68%. In this context, it is&#13;
believed that flexibility in human resources management can be a good alternative to mitigate health problems, which appear to a greater or lesser extent in practically every country in the world.
</description>
<pubDate>Wed, 25 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13419</guid>
<dc:date>2024-09-25T00:00:00Z</dc:date>
</item>
<item>
<title>Multsurv: a multimodal deep learning model for hospitalized patients survival analysis in the contexto of a pandemic</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13418</link>
<description>Multsurv: a multimodal deep learning model for hospitalized patients survival analysis in the contexto of a pandemic
Zeiser, Felipe André
BACKGROUND: Respiratory infectious diseases represent a major challenge in modern society. We recently faced the most significant public health challenge of the last century. Severe Acute Respiratory Syndrome Coronavirus 2 has overwhelmed almost all health systems worldwide, highlighting pre-existing weaknesses. The heterogeneity of COVID-19 clinical manifestations has made it challenging to manage hospitalized patients, making it crucial to identify those at greatest risk, especially for efficiently allocating vital resources. Unlike past pandemics, hospitalized patients are currently monitored continuously and through different modalities. These data generate large longitudinal and multimodal datasets in health institutions. In this context, data-driven solutions can support clinical decisions and provide new tools for risk management of hospitalized patients during pandemics. OBJECTIVE: Therefore, we propose integrating clinical, laboratory, and chest X-ray imaging features into a survival analysis model for hospitalized patients with COVID-19. With the model, we aim to combine multimodal and longitudinal data to&#13;
capture the dynamic nature of COVID-19 and provide an explainable hazard function. METHODOLOGY: The methodology involves the proposition and development of the model. The model is divided into five main components: (i) pre-processing; (ii) feature encoders; (iii) temporal attention; (iv) CheXReport; and (v) multitask networks. The pre-processing component is responsible for data cleaning, outlier removal, variable selection, and image processing. In the feature encoders, the categorical and continuous data are transformed into a vector of embeddings that capture the complex and non-linear relationships between the variables. Then, based on the embeddings up to the current time instant, we extract a temporal context vector using temporal attention. The CheXReport component processes the patient’s X-ray images using a fully-transformers architecture, which integrates visual features with the textual elements of the reports. Finally, all feature vectors are concatenated to be processed in the multitask networks, a set of neural networks that allow the model to capture the specific characteristics of each risk. RESULTS: To evaluate the model performance, we used an incremental ablation study. We use the public datasets PBC2, MIMIC-CXR, Curated Dataset for COVID-19, and a private dataset. Then, we compare the results of the MultSurv model with the state of the art. The results obtained demonstrate that the MultSurv outperforms all reference&#13;
architectures, with a C-index of 0.723 ± 0.008 for t = 1 and Δt = 1, and 0.695 ± 0.003 for t = 7 and Δt = 7. CONCLUSION: The main scientific contribution of this study is the proposal of a multimodal model for processing dynamic and longitudinal data in survival analysis in the context of COVID-19. Furthermore, the MultSurv model offers a tool to support patient prioritization in pandemic scenarios. Finally, the application of the model can be adapted to different clinical contexts, extending beyond COVID-19.
</description>
<pubDate>Wed, 02 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13418</guid>
<dc:date>2024-10-02T00:00:00Z</dc:date>
</item>
<item>
<title>A value-based approach for information classification</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13146</link>
<description>A value-based approach for information classification
Ignaczak, Luciano
The digital transformation has revamped how products and services are produced and traded in the digital world. This innovation, bolstered by emerging technologies and evolving business models, underscores the growing importance of information to organizations and amplifies the significance of protecting it. However, a current challenge faced by information security teams is identifying which information requires safeguarding. Today, securing all information collected and produced by an organization is complex due to several limitations, such as budget constraints and understaffed security teams. Furthermore, organizations hold much information that does not require protection. Information classification is the cornerstone process to deal with this challenge in an organization. This process distinguishes confidential from non-confidential information and defines different sensitivity levels. Information classification is a previously introduced research topic, but its real-world application encounters several difficulties due to its manual nature. In order to overcome real-world barriers, scientific research has evaluated the application of natural language processing to automate the process. Most scientific studies proposed supervised learning approaches, which also present drawbacks, such as the significant effort to annotate sensitive labels and the limited flexibility for changes in the information classification scheme. Thus, this study proposes a new information classification model based on the information value. To the best of our knowledge, this is the first attempt to estimate the information value using textual features in the information classification context. The model assesses document value from two perspectives: (i) personal information associated with laws and regulations and (ii) confidential information related to the organizational context. 
The model applies information extraction and topic modeling to acquire document features and a regression model to estimate information value. We evaluated the proposed model by designing three experiments. The first experiment assessed the performance of two named entity recognition approaches and a&#13;
relation extraction technique for identifying personal and sensitive personal data. We also&#13;
implemented an experiment to evaluate the bag-of-words approach to classify documents into four departments. The third experiment assessed the model implementation using a corpus comprising 197 documents from an organization related to the educational sector. The proposed model evaluation implemented six experimental scenarios comprising three-, four-, and five-level information classification schemes. The model implementation using a Decision Tree regressor achieved an accuracy higher than 80% in the six scenarios. The study also showed that the BERT model outperformed the LSTM neural network in discovering personal data entities. Finally, the study demonstrated the feasibility of implementing a specific model to extract topics from each organization department, since the text classification task achieved an accuracy that did not significantly impact the proposed information classification model.
</description>
<pubDate>Thu, 18 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13146</guid>
<dc:date>2024-04-18T00:00:00Z</dc:date>
</item>
<item>
<title>MILPDM: an architecture for predictive maintenance of assets in the military domain</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13145</link>
<description>MILPDM: an architecture for predictive maintenance of assets in the military domain
Dalzochio, Jovani
Predictive maintenance is a topic addressed in different contexts such as industry, logistics, and healthcare, where sensing equipment parameters allows monitoring health degradation and anticipating failures. This same approach has been used in the military, monitoring assets such as vehicles. Monitoring asset degradation generates economic benefits similar to those observed in other areas, reducing costs by optimizing the use of monitored assets. However, assets operating in the military domain perform critical tasks, where failures can generate a high material and human cost. Applying predictive maintenance to military equipment is challenging. Vehicles and equipment are sent on missions in already known scenarios with high availability of collected data. However, this equipment can also be sent to new environments where there is no data from previous operations to evaluate the degradation of the equipment’s health. The approaches in the literature for failure prediction applied to the military domain focus on equipment monitoring. This work presents a broader approach through the use of MILPdM. This architecture aims to predict failures in the military domain, considering the dynamic scenario in which military equipment operates. Our approach has two distinct fronts. First, we verify the possibility of using machine learning models to predict failures; from that point on, we verify the prediction capacity in new scenarios, where we test new foundation models for predicting lifespan in new scenarios and compare the results with traditional models. We propose four use cases to test the architecture. Two use cases validate traditional failure prediction models using long short-term memory and random forest machine learning algorithms. Two other use cases evaluate the use of foundation models in new scenarios. The results of the trained models’ predictions show that MILPdM can anticipate failures with high accuracy. 
As for the prediction capacity in new scenarios, using the foundation model proved promising, surpassing traditional learning models. These results show the&#13;
potential of using the foundation model in predictive maintenance.
</description>
<pubDate>Tue, 28 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13145</guid>
<dc:date>2024-05-28T00:00:00Z</dc:date>
</item>
<item>
<title>Reconhecimento de entidades nomeadas e extração de relações de registros de prontuários médicos para população de ontologia</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12911</link>
<description>Reconhecimento de entidades nomeadas e extração de relações de registros de prontuários médicos para população de ontologia
Silva, Diego Pinheiro da
There has been a significant increase in the number of Electronic Health Records (EHRs)&#13;
that accommodate unstructured data, such as text and natural language observations. Consequently, there is a growing interest in using this data to promote improvements in health. Manual analysis of these data is not feasible due to the large volume, which continues to increase. Therefore, there is a need for an approach that automatically structures this information, enabling it to assist health professionals in data analysis, treatment recommendations, disease diagnoses, among other applications. An evaluation of the literature in this area has identified demands for addressing this problem in Portuguese. However, there is still a limited number of studies with real data from the health sector. A research opportunity identified is the use of resources based on the Transformers architecture and the application of the results for data structuring in ontologies. In this context, this work aims to develop a model for processing unstructured data from EHRs to support the activity of updating an ontology. The contributions of this research are present in two related aspects. Firstly, it aims to support the development of&#13;
applications in EHR systems for oncology by enhancing their capacity to utilize unstructured data. Secondly, the research focuses on experimenting and proposing advances in computing approaches for entity recognition and relations extraction, as well as integrating them with an ontology. The study was carried out as a case study in a company operating in the field of Oncology. Detailed analyses of a widely used system in EHRs of oncology clinics were conducted. As a result of this analysis, one of the distinctive features of the work is the creation of unpublished datasets of entities and relations of medical evolutions, containing 1,622 annotated documents, comprising 146,769 entities and 111,716 relations. Another unique aspect of the work is the adaptation of a domain ontology to represent the structured data of this case study. Finally, experiments were conducted with approaches to extract entities and relations in text, achieving results such as 78.24% accuracy in the exams domain and 72.87% in the diagnostics domain. In addition, an ontology focused on oncology was built and integrated into the model, encompassing approximately 181 classes, 14 data properties, 12 object properties, and over 200 individuals. Healthcare specialists evaluated the model, obtaining a 73.52% accuracy rate in relation to their analysis, and the usability research showed excellent acceptance. The training of models using real oncology data and the construction of a knowledge base through ontology stands out as a differential of the work.
</description>
<pubDate>Thu, 09 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12911</guid>
<dc:date>2023-11-09T00:00:00Z</dc:date>
</item>
<item>
<title>Metodologias ativas integradas a um sistema de recomendação com suporte à mineração de dados educacionais e learning analytics para a mitigação de evasão da educação a distância</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12804</link>
<description>Metodologias ativas integradas a um sistema de recomendação com suporte à mineração de dados educacionais e learning analytics para a mitigação de evasão da educação a distância
Andrade, Tiago Luís de
Distance Education enabled educational practices based on digital platforms. Despite the widespread adoption of this teaching modality, the high dropout rates are a matter of concern for teachers and institutional managers. There are initiatives to mitigate this situation, such as the application of Educational Data Mining (EDM) and Learning Analytics (LA) techniques and the use of Recommender Systems (RS). Despite being effective in identifying students prone to dropping out and recommending complementary learning materials, these lack mechanisms for student motivation and teachers' pedagogical intervention, as they do not present methodological proposals to encourage the learning of those identified as at risk of dropout. Given this, this work aims to develop an RS model whose differential is the integration of the pedagogical strategy of Active Methodologies, Problem-Based Learning (PBL), with EDM and LA techniques, capable of identifying students at risk of failure and dropout and of promoting their retention through a collaborative and interactive problem-solving process. In the studies found in the literature, no evidence of this integration was identified, with the development of the model being the main scientific contribution of this work, since recent studies point to the importance of improving the ways of teaching. In this sense, a prototype was developed, called Éforo-SR, and a Case Study was carried out with 3 evaluations: (i) verification of the functionalities and interfaces through use by a teacher; (ii) the evaluation by 13 professors from different areas of knowledge based on acceptance and perceived usefulness, according to the TAM Model - Technology Acceptance Model; (iii) the evaluation of the practical application in a discipline offered via Distance Education, by 1 teacher to 89 students. Data were collected through questionnaires and submitted for quantitative and qualitative analysis. 
The results indicated that teachers and students attested to the correct functioning of Éforo-SR, and, according to the TAM Model, more than 87% of teachers and 90% of students agreed with its ease of use. Furthermore, 77% of teachers and more than 88% of students agreed that RS could be helpful in the teaching and learning process. Such results are confirmed when more than 84% of the teachers and 87% of the students answered that they would recommend the prototype of the RS model developed since, in the subject studied, there was an increase in the class average and effective participation of students in an interactive and collaborative process, considered positive and, at the same time, promising given the scientific contribution of the integration of Active Methodologies with the recommendation of materials, EDM and LA for Distance Education. Thus, the results of the evaluations suggested that the model can contribute to improving teaching practices since it helps the teacher recommend complementary materials, encourages collaborative learning, and favors the monitoring of this process and the activities developed by the students.
</description>
<pubDate>Thu, 24 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12804</guid>
<dc:date>2023-08-24T00:00:00Z</dc:date>
</item>
<item>
<title>Smart Monitoring Tool: intelligent model for monitoring colorectal  cancer patients in the active phase of treatment</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12750</link>
<description>Smart Monitoring Tool: intelligent model for monitoring colorectal  cancer patients in the active phase of treatment
Queiroz, Diogo Albino de
Colorectal cancer is one of the most prevalent cancers in men and women, and its development is associated with several risk factors, such as a sedentary lifestyle and eating habits. In addition, it directly impacts the individual's quality of life and daily routine (work, study, leisure, among others), especially when diagnosed at advanced stages. Currently, during the period between chemotherapy sessions, there is no follow-up to verify whether the patient is following the treatment as instructed by the medical team, which contributes to low engagement in actions to improve their clinical condition and self-manage the adverse effects of treatment. This work aimed to develop a computational model, based on Artificial Intelligence and the Internet of Things, for monitoring cancer patients undergoing active treatment, in order to ensure greater patient engagement through individualized and automated interactions and feedback between the patient and the virtual assistant and/or the multidisciplinary team responsible for their treatment. Data were stored in a database, and the multidisciplinary team was notified when the patient's clinical condition indicated deterioration. The model worked both passively and actively, and the study was carried out in three phases. The first phase was carried out in December 2021, when the Sinop Cancer Center team evaluated one of the computational model's tools. In the second phase, the model was applied to colorectal cancer patients undergoing active treatment from July to December 2022. All patients who met the inclusion criteria were invited to participate. For 8 weeks, patients were encouraged to self-report symptoms and adverse effects related to treatment, physical activity, and data about their diet. The outcome assessment was based on the comparison between the intervention and control groups. The patients evaluated the model through the User Experience Questionnaire (UEQ) and System Usability Scale (SUS) surveys. 
In the third phase, the application of a recommendation system integrated into the proposed model was evaluated. The results of the first phase showed that the model was effective in terms of usability and user experience: the UEQ attractiveness and efficiency scales were rated as excellent and the others as good, and the usability evaluated by the SUS obtained a mean of 75 ± 7.14 and a median of 72.5 (70-77.5). In the second phase, patients who participated in the model reported signs and symptoms more accurately (control: 64.7%; intervention: 92.3%; p=0.1038). In the intervention group, the practice of physical activity was more effective, and most patients (61.5%) interacted with the chatbot for at least 62.5% of the period. There was also a statistically significant reduction in the consumption of alcoholic beverages and fast food, and a statistically significant increase in fruit consumption, in the intervention group. Finally, in the third phase, the results suggest that the recommender system can positively meet user expectations. Therefore, the results indicate that the model contributed to more accurate data collection and greater patient engagement in the self-management of symptoms and adverse effects of treatment and cancer. Moreover, the model contributed to increasing the practice of light physical activity. UEQ and SUS scores indicate that the model met users' expectations and had acceptable usability.
</description>
<pubDate>Tue, 22 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12750</guid>
<dc:date>2023-08-22T00:00:00Z</dc:date>
</item>
<item>
<title>Reconhecimento de emoções acadêmicas por face através de aprendizagem profunda: considerando a sequência de emoções e a personalidade do estudante</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12444</link>
<description>Reconhecimento de emoções acadêmicas por face através de aprendizagem profunda: considerando a sequência de emoções e a personalidade do estudante
Werlang, Pablo Santos
Affective computing aims to improve human-machine interaction by developing tools and techniques that enable a system's decision-making processes to adjust to human affective states. Automatic facial recognition of emotions is a relatively recent area that has the potential to turn human-computer interaction into an increasingly natural experience. Especially in intelligent learning environments, emotion detection benefits students by directly using their affective information to perceive their difficulties, adapt the pedagogical intervention, and engage them. The present work created a model capable of recognizing from the face the emotions commonly experienced by students in interaction sessions with learning environments: engagement, confusion, frustration, and boredom. The proposed model used deep neural networks to classify one of these emotions, extracting statistical, temporal, and spatial features from the videos provided for training, including eye movement and Action Units. Considering the psychological model of affect dynamics proposed by D’Mello, which states that in learning situations the experience of each emotion is tied to the others and their presence is determined by the order in which they appear, this work’s main contribution is to take into account the flow of emotions, as well as the learner’s personality traits, as a means of increasing emotion detection accuracy. We tested several model configurations and compared their efficiency to recently developed models. Results show that considering the sequence of learning emotions and the personality as model inputs improves those algorithms’ effectiveness. Training the model on the DAiSEE dataset, we achieved a 26.27% F1 improvement (from 0.5122 to 0.6468) when including the emotions’ history in the model, while we achieved a 1.48% F1 improvement on the model trained on the PAT2Math dataset (from 0.8741 to 0.8871) when including the subject’s personality traits. Compared to the state of the art, the model was 5.6% superior on the F1 metric, although its accuracy was 4.7% lower.
</description>
<pubDate>Mon, 31 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12444</guid>
<dc:date>2022-10-31T00:00:00Z</dc:date>
</item>
<item>
<title>Prognosis &amp; Health Management System (PHMS): a machine learning framework to support decision-making in predictive maintenance in a production system</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12443</link>
<description>Prognosis &amp; Health Management System (PHMS): a machine learning framework to support decision-making in predictive maintenance in a production system
Souza, Marcos Leandro Hoffmann
The search for the effective use of production assets has been constant, mainly in industries with evolving mechanization. In this context, maintenance management gains visibility as it ensures asset availability. Predictive maintenance (PDM) is one of the main maintenance management strategies: it allows early detection of failures, avoiding unscheduled downtime and unnecessary costs. As technologies advance, predictive maintenance evolves toward Prognosis and Health Management (PHM), which provides the means to recognize patterns, understand anomalies, and estimate the equipment’s remaining useful life (RUL). At the same time, technologies such as the Internet of Things (IoT), machine learning (ML), and cloud computing enable the digitization of assets, providing intelligent manufacturing. However, this scenario makes PDM a complex and expensive task when applied to systems with equipment connected in series. On the one hand, data is abundantly generated, collected, and stored; on the other hand, it is difficult to convert data into useful information to support PDM and PHM. Given the gaps related to PDM and reliability, in this thesis we propose the Prognosis and Health Management System (PHMS), which is supported by an analytical framework that uses a set of ML techniques. First, we performed a case study to evaluate the proposition with real data from the process industry. In developing the framework, we used semi-supervised ML with an Autoencoder (AE) to build the operational threshold and identify anomalies. For the feature identification step, we applied XGBoost and the SHAP method. Next, we tested different deep learning architectures to predict the RUL of the system; in this sense, we highlight the N-BEATS deep learning architecture as an essential alternative to traditional architectures such as Recurrent Neural Networks (RNN). Through the framework applied to the case study, it was possible to identify an anomaly and the behavior of the most relevant variables for the failure, and to predict the RUL of the equipment with R² greater than 90% using N-BEATS. In this way, according to the results presented, the operation and maintenance teams can carry out preventive actions, avoiding unscheduled stops of the production system. The development of the framework thus contributes to the adoption of emerging technologies in real processes. In addition to the benefits presented, we highlight the development of PDM studies on real data previously unavailable to the academic environment. We draw attention to this point, as most reliability studies are based on widely known and pre-treated data.
</description>
<pubDate>Thu, 23 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12443</guid>
<dc:date>2023-02-23T00:00:00Z</dc:date>
</item>
<item>
<title>FoG-Care: a fog computing and blockchain architecture for global sharing of healthcare data</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12152</link>
<description>FoG-Care: a fog computing and blockchain architecture for global sharing of healthcare data
Costa, Humberto Jorge de Moura
Due to recent advances in distributed systems and healthcare, patient data can be dispersed across distant locations. However, processing and transmission errors are more likely to occur as data sets become larger and more complex. Several solutions based on Cloud Computing have been proposed to manage health data, but they present many healthcare implementation challenges, such as scalability, data privacy, and global patient identification. Thus, Fog Computing and Blockchain present themselves as an alternative to reduce the complexity of managing health data and increase its reliability. Therefore, the main challenge to be faced is how health services can benefit from a computational architecture that supports standards for the global identification of assets and the sharing of geographically distributed information, considering scalability, latency, and privacy. The scientific contribution is to propose an architectural model based on Blockchain and Fog Computing that meets these requirements and addresses their eventual limitations. The methodology consists of proposing and implementing a prototype of a healthcare software architecture called Fog-Care, evaluating performance metrics such as latency, throughput, and the sending rate of blockchain smart contracts in the healthcare scenario of a global vaccination campaign. This software includes a globally unique identity model called ID-Care, which supports the global identification of unique individuals through various combinations of documents, biometrics, and the GS1 healthcare industry standard. The assessment is a use-case scenario based on an integrated vaccination campaign across the top 5 most visited tourist destinations globally. The performance evaluation demonstrated that the minimum latency is less than 1 second and that this metric’s average grows linearly. Also, the average latency of transactions is just a few seconds, even when 100 simultaneous requests per peer are considered. 
Thus, addressing data-sharing issues of privacy and identification, together with a model for a global healthcare ID, can help reduce costs, time, and effort, especially in the context of health threats, where agility and financial support must be prioritized. From the results, it is crucial to add more fog nodes (for example, one per state) to support scalability as the demand for transactions grows in a blockchain with widely dispersed nodes. According to the throughput results, as the send rate increases, approximately half of the transactions are processed in that time. Privacy can be supported and treated globally with blockchain through the writing of smart contracts that represent these features, and the immutability and integrity of the ledger in a global healthcare environment can help protect the privacy of patients. The unique and global identification of persons and resources is necessary and can be properly achieved with GS1 standards. Finally, a global identification architecture for health can generate valuable suggestions for public health policies, depending on the specifics of each country and the health data shared with the participants, making possible better political decision-making and a more globally coordinated healthcare strategy with faster and earlier results available.
</description>
<pubDate>Mon, 05 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12152</guid>
<dc:date>2022-12-05T00:00:00Z</dc:date>
</item>
<item>
<title>Detecção e análise de redes de fraturas em afloramentos por métodos de visão computacional adaptativos</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11889</link>
<description>Detecção e análise de redes de fraturas em afloramentos por métodos de visão computacional adaptativos
Marques Junior, Ademir
The identification of fractures and discontinuities is of great importance in the estimation of fluid flow in hydrocarbon reservoirs, as they influence porosity and permeability properties. Due to the inaccessibility and scarcity of reservoir data, fracture characterization is usually evaluated by studying outcrop analogues, either by remote sensing or by in-situ observations by an expert. Among remote sensing methods, acquisition with Unmanned Aerial Vehicles (UAV) combined with Structure from Motion (SfM) photogrammetry is a low-cost way to generate products such as orthorectified images, allowing manual and automated methods of detecting fracture traces and discontinuities to obtain discrete fracture network models (Discrete Fracture Networks, DFN). Computer vision and image processing approaches that segment the areas of interest by semantic segmentation or by edge and valley detection, commonly used to detect and characterize the fracture network, have been used in the literature, but they are typically optimized for each outcrop type and its peculiarities. Outcrops that have undergone a karstification process, in particular, show a high level of fracturing due to the dissolution caused by weathering and the subsequent breakage and erosion of the rocky medium. This scenario, together with the presence of vegetation and areas with irregular lighting or shade, contributes to the challenge of automatic fracture detection in outcrop images. The segmentation techniques by thresholding or binarization employed by previous works in fracture segmentation bring the difficulty of establishing a global threshold applicable to the entire image without generating a large number of false positives and negatives in the detection. An alternative already used in biomedicine and character recognition is local adaptive thresholding segmentation, which is the focus of this work. To optimize the detection of fractures in highly fractured karst regions, we propose the use and evaluation of these adaptive methods. In preliminary tests, Sauvola local adaptive segmentation presented the best result when compared to the manually annotated ground truth. This work also proposes the use of binary noise reduction techniques to create the presented fracture segmentation method, which is complemented by a fracture segment detection method that identifies topological fracture data such as nodes and terminations. The results also combine UAV acquisitions at different times of the day to evaluate the influence of the position of the sun on fracture detection and on interpretation bias. This analysis is carried out on orthophotos of the outcrop of karstified carbonate rocks from Lajedo do Rosário, belonging to the Jandaíra formation, in Rio Grande do Norte. With the proposed methodology, we acquired more accurate fracture data over the study area, consistent with directional statistics from previous works carried out in the region. In addition to the directional analysis, the DFN model and its length and aperture statistics follow the expected distributions for this type of outcrop, and the fracture network connectivity is also analyzed. It was thus possible to generate DFN models more faithful to the field truth, reducing the impact of agents external to the rocky environment, such as the solar position and the presence of vegetation, and providing higher-quality data for stochastic modeling and reservoir modeling.
</description>
<pubDate>Fri, 12 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11889</guid>
<dc:date>2022-08-12T00:00:00Z</dc:date>
</item>
<item>
<title>Os efeitos de usar estimativas de conhecimento do aluno em programação de computadores em modelos livres de sensores de detecção da emoção confusão</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11830</link>
<description>Os efeitos de usar estimativas de conhecimento do aluno em programação de computadores em modelos livres de sensores de detecção da emoção confusão
Kautzmann, Tiago Roberto
The research area of Affective Computing has been looking for ways to improve the detection of student confusion in computer-based learning environments. Environments capable of detecting student confusion can use different pedagogical strategies, such as intervening and helping students resolve their confusion, or controlling it to benefit their learning. The author is interested in contributing to the state of the art in detecting confusion without physical sensors (sensor-free) in the context of learning computer programming. The Thesis hypothesized that using data on student knowledge estimates together with data on student interaction with the computer-based learning environment can improve the performance of sensor-free machine learning models in detecting student confusion in programming learning tasks, compared to baseline models. Baseline models represent related works, which developed their models using only student-environment interaction data. The Thesis hypothesis is grounded in cognitive theories of emotions, which relate confusion to appraisals of incompatibility between the information that reaches the student and the student's mental model, such as the mental model of prior knowledge. To verify the hypothesis, the Thesis generated several machine learning models representing the Thesis approach and the baseline approach (related works) for different configurations of observation time windows (5, 10, 20, 40, 60, 90, 120, 180, 240, and 360 seconds, and variable) and different algorithms. Statistical tests compared the results of each approach (Thesis and baseline). Methods were also applied to identify the models' most relevant data and to verify the generalization performance for students with heterogeneous characteristics. 
The machine learning models were trained and tested with samples formed by data collected from 62 technical and higher education students over five months while they solved exercises in programming software adapted for the Thesis. Statistical tests showed that the best models of the Thesis approach presented superior and statistically significant predictive accuracy compared to the best baseline models in all observation windows. In a list of the ten most relevant data attributes for the best models of the Thesis approach, five were attributes about interaction with the environment and the other five were attributes about estimates of student knowledge. Regarding generalization performance for students with heterogeneous characteristics, significant differences between the approaches were found only in observation windows of 5, 10, and 20 seconds; in these windows, the best models of the Thesis approach outperformed the best baseline models. The results presented positive evidence supporting the hypothesis that estimates of student knowledge can improve the performance of sensor-free confusion detection models in computer programming tasks. The Thesis also discusses several other intermediate results and the scenarios where the Thesis approach is most advantageous.
</description>
<pubDate>Mon, 06 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11830</guid>
<dc:date>2022-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>Um modelo de machine-learning para predição do tempo de colheita de árvores macieiras com base em dados fenológicos e parâmetros climáticos</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11777</link>
<description>Um modelo de machine-learning para predição do tempo de colheita de árvores macieiras com base em dados fenológicos e parâmetros climáticos
Boechel, Tiago
Machine learning approaches have been used in several areas. In agricultural research, machine learning has been used to increase productivity and minimize environmental impact, proving to be an important tool to support decision making. Different strategies are found in the literature to predict phenological stages of different crops. From the current state of the art, we observed few works that address the prediction of the harvest date, and we did not find works with an approach similar to the one proposed. Forecasting the time of harvest is a challenge for developing fruit production sustainably and reducing food waste. Fruits are perishable, of high value, and seasonal, and sales prices are generally time sensitive, which makes harvest forecasts extremely valuable to growers. This study proposes the PredHarv model, a machine learning model that uses recurrent neural networks to predict the start date of the apple harvest, given the temperature-related weather conditions expected for the period. Predictions are made from the phenological stage of full bloom, based on historical series of phenology and meteorological data. The computational model contributes the ability to anticipate information about the harvest date, enabling the producer to better plan activities, avoiding costs and improving productivity. The use of ML methods aims to make the predictive capacity of thermal-summation models used in fruit growing more effective, allowing for the simulation of climate changes in the period. The PredHarv model is based on thermal sum models but uses a multivariate approach: we use the thermal sum, relating it to period length and other variables related to period temperature, and we explore the potential of LSTM networks to deal with problems involving time series. The model output returns the period length in calendar days, given the expected temperature-related weather conditions for the period. Additionally, a methodology for using the model is proposed in order to expand its predictive capacity and reduce the uncertainty implicit in the information provided by the user, which is necessary for calculating the forecast. We developed a prototype of the PredHarv model and performed experiments with real data from agricultural institutions. The combination of variables used in the model demonstrated an effective prediction strategy. The results obtained in the evaluation scenarios demonstrate that the model is efficient, generalizes well, and achieves better accuracy than the linear model based on thermal accumulation.
</description>
<pubDate>Fri, 22 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11777</guid>
<dc:date>2022-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Hope: a conversational agent based-model for pregnant health literacy</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11553</link>
<description>Hope: a conversational agent based-model for pregnant health literacy
Montenegro, João Luis Zeni
The gestational period is a time of great expectation, and critical for pregnant women due to the many uncertainties and doubts that affect them. The first months of pregnancy and motherhood, known as the baby’s thousand days, include the prenatal and postnatal stages, with pregnant women investigating many topics, including risk management, physical activity, nutrition, and other issues that cause uncertainty and anxiety. Conversational agents have played a role over the years as engagement, support, and information tools in different areas of the health field, enabling collaborative action between patients and doctors. In this thesis, we propose the development, implementation, and evaluation of conversational agents based on the HoPE (Help in Obstetrician for PrEgnant) architecture model, which aims to promote literacy in pregnant women through reliable information. We evaluated this model through clinical trials and experiments involving information retrieval. Studies involving clinical trials with health professionals and pregnant women are still scarce and need further investigation. The strategies we adopted for managing dialogue and retrieving information are unprecedented in the scientific context, and we have not found any proposal that promotes models with similar concepts. The architecture developed has as its main pillars the ability to retrieve and disambiguate information using ontologies, with Transformer-based architectures at its center. We carried out five assessments that provided numerous insights for studies in this field. Initially, we applied a survey to get a general picture of the subject in question. In a quantitative study using semi-structured questionnaires, pregnant women and health professionals interacted with conversational agents trained on nutritional data. 
The results showed that both groups have positive perceptions of the experience with the conversational agent, and statistically the null hypothesis was accepted (P-value = 0.713). A second evaluation, with a sample formed by different pregnant women and doctors, verified through a mixed analysis that the perceptions of these groups are complementary and positive regarding the use of conversational health agents trained on general thousand-days pregnancy content. The new sample of pregnant women again showed a positive perception in general of the new constructs evaluated (Overall Mean = 4.0; Mean Deviation = 1.1). Also, insights generated by doctors through qualitative analysis indicated some improvements, such as the inclusion of COVID-19 content and family behavior, as well as adjustments in the approach and language of the conversational agent. We evaluated the pre-trained Sentence-BERT models in Portuguese, fine-tuned on health protocol data that we extracted from official protocols of the Brazilian Government. The BERTimbau model, trained with data augmentation strategies, obtained the highest correlation with embeddings generated by the health data corpus (Spearman: 95.55) and was selected as the winning model in our experiments. Using this model, we performed a second study that evaluated the performance of the HoPE architecture for conversational agents. Three main metrics were evaluated in this study: information retrieval efficacy, the architecture’s ability to identify composite intents, and the architecture’s inference speed. For the information retrieval task, the HoPE architecture obtained an F1-Score of 0.89 on the test data, a hit score of 90% in the identification of composite/unique intents on a set of 10 sentences, and regular performance in information retrieval speed (CPU=2.223, GPU=0.222). 
Future studies will evaluate, through clinical studies, the hybrid HoPE architecture for information retrieval, validate it for groups of pregnant women from different demographic strata, and deepen the study of mechanisms for identifying multiple intentions in dialogues.
</description>
<pubDate>Thu, 31 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11553</guid>
<dc:date>2022-03-31T00:00:00Z</dc:date>
</item>
<item>
<title>CogEff: uma abordagem para mensurar a carga cognitiva de desenvolvedores em tarefas de compreensão de código</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11418</link>
<description>CogEff: uma abordagem para mensurar a carga cognitiva de desenvolvedores em tarefas de compreensão de código
Gonçales, Lucian José
Cognitive load refers to the mental effort that users expend when performing cognitive tasks such as interpreting software artifacts at different levels of abstraction. Cognitive load in software engineering is important due to the potential for considering human factors through physiological signals while users are in their work routine. For instance, developers spend most of their working time understanding code. Measuring cognitive load using psychophysiological indicators would enable a coherent correlation between the developers’ perception and code comprehension tasks. While psychophysiological indicators instantly reflect user stimuli, traditional metrics are only consolidated after comprehension tasks finish. Therefore, the literature recognizes that cognitive load approaches have high predictive potential in software engineering tasks. Researchers have investigated cognitive load in software engineering by combining various approaches from different psychophysiological devices. For example, from the electroencephalogram (EEG), researchers have already directly used approaches such as the Asymmetry Ratio (ASR), Event-Related Desynchronization (ERD), and the band power of the alpha (α), beta (β), delta (δ), and theta (θ) waves. Using the fMRI sensor, researchers have also used the Blood Oxygen Level Dependent (BOLD) signal to highlight which brain areas are active during code comprehension tasks. However, despite these techniques’ relation to cognitive load, an approach to measure cognitive load in code comprehension tasks is still lacking. 
Thus, a series of problems were identified in this area: (1) absence of a state-of-the-art classification of measures of cognitive load in software engineering; (2) lack of an approach to measure cognitive load in software engineering; (3) absence of correlation analysis of the approaches used to measure cognitive load in code comprehension tasks; (4) lack of an evaluation of the effectiveness of using EEG data to classify code comprehension with machine learning techniques. Therefore, this research aims to: (1) conduct a systematic mapping study to classify research on measures of cognitive load in software engineering; (2) develop a technique, named CogEff, to measure cognitive load through EEG data in software comprehension tasks; (3) analyze the correlation between traditional EEG approaches, as well as the CogEff approach, and code comprehension tasks; (4) analyze the effectiveness of classifying code comprehension with models trained on traditional EEG approaches. The main results are: (1) based on the classification, 37% (23/63) of the studies adopted multimodal devices, and 59% (37/63) of the studies analyzed cognitive load in programming tasks, such as code comprehension; (2) the CogEff approach has the potential to measure cognitive load through EEG channel connectivity; (3) correlation tests showed that traditional EEG approaches and CogEff correlate with code comprehension; (4) the K-Nearest Neighbors classifier obtained an average f-measure of 86% in classifying code comprehension based on EEG data.
</description>
<pubDate>Thu, 31 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11418</guid>
<dc:date>2022-03-31T00:00:00Z</dc:date>
</item>
<item>
<title>SURYA: um modelo para serviços inteligentes em ecossistemas de mobilidade baseado em históricos de contextos</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11392</link>
<description>SURYA: um modelo para serviços inteligentes em ecossistemas de mobilidade baseado em históricos de contextos
Gomes, Joneval Zanella
Since vehicles became consumer goods, people and vehicles have shared spaces. If, on the one hand, the use of vehicles brings benefits such as agility, safety, and comfort, on the other hand, it is responsible for part of the stress of large urban centers. With the passing of generations and the evolution of computer systems, the amount of data collected and available in a mobility ecosystem is vast; however, few studies were identified that seek to understand the interactions of people, and other agents, in these ecosystems. The mobility ecosystem is the context in which this study proposes the Surya model, a generic model aimed at understanding the interactions among its agents in order to offer them intelligent services based on context histories. The Surya model was evaluated in a computational environment that simulates the morning complexity of a large urban center. Context histories are generated over 75 simulation cycles and support the provision of&#13;
two intelligent services. In addition to the services, this study brings an update of the systematic mapping of service provision in vehicles, an ontology for the knowledge domain of services in mobility ecosystems, and the Surya model.
</description>
<pubDate>Tue, 22 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11392</guid>
<dc:date>2022-03-22T00:00:00Z</dc:date>
</item>
<item>
<title>Tellus: um modelo computacional para análise de solo na agricultura ubíqua baseado em históricos de contextos</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11263</link>
<description>Tellus: um modelo computacional para análise de solo na agricultura ubíqua baseado em históricos de contextos
Helfer, Gilson Augusto
The applications of ubiquitous computing have increased in recent years, mainly due to the development of technologies such as mobile computing and its integration with the real world. One of the challenges in this area is the use of context awareness. In agriculture, the context related to the environment can be considered, such as the chemical and physical aspects that characterize the different types of soil. This scenario changes periodically due to factors such as climate, type of cultivar, and soil management technique used, among other aspects. This thesis presents a computational model called Tellus, applied in precision agriculture, that uses context histories to predict soil physicochemical properties. A prototype was created to evaluate the model, based on a telemetry station installed in the field, as well as a mobile application for information management. The Prediction Agent training used 43 soil samples from different collection points in Vale do Rio Pardo, whose concentrations of organic matter varied between 0.6% and 5.9% and of clay between 8% and 60%, respectively. For the prediction of organic matter and clay in the soil, coefficients of determination (R2) of 0.9738 and 0.9536 were obtained, and root mean square errors of calibration (RMSEC) of 0.26% and 2.95%, respectively. For the irrigation recommendation, 192 images were used for training and an accuracy of 82.55% was achieved. In addition, an AgroXML-based ontology called Tellus-Onto was proposed that extends the state of the art in the classification of Brazilian soils according to organic and textural composition. A series of axioms and semantic rules were used to provide queries and inferences over its instantiated base. In addition, from the soil analysis information, the ontology infers recommendations for fertilization and liming.
To test the ontology, 98 soil sample results were instantiated and their classifications were inferred in a precise and automatic way. The computational model and its prediction agents, together with the ontology, are the contributions of Tellus in ubiquitous agriculture applied to soil analysis.
</description>
<pubDate>Fri, 25 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11263</guid>
<dc:date>2022-03-25T00:00:00Z</dc:date>
</item>
<item>
<title>Apollo: um modelo para predição de acidentes por causas externas em ambientes inteligentes</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11258</link>
<description>Apollo: um modelo para predição de acidentes por causas externas em ambientes inteligentes
Tavares, João Elison da Rosa
According to data from the World Health Organization (WHO), around 8% of all deaths in the world, approximately 5 million per year, result from external causes. These causes can be intentional, such as homicide with a firearm, or unintentional, with domestic accidents such as falls or electric shock being the most frequent. These accidents mainly affect People with Reduced Autonomy (PRA), such as the elderly, children, and People with Disabilities (PWD). Although protocols and standards in the medical field have evolved to assist in the diagnosis and mapping of these accidents, gaps in effective support for the prevention of these health incidents are still observed. From a technological perspective, the accelerated development of the last decades has enabled the application of the Internet of Things, the use of wearables, and the development of intelligent environments that contribute to monitoring people's activities, identifying patterns, or detecting accidents such as falls. However, although the detection of events can help to expedite medical care and minimize the consequences of trauma, this approach follows a reactive, post-trauma model. In contrast, this thesis presents the Apollo model, which predicts accidents based on the context histories of PRA in intelligent environments. Apollo contributes scientifically to the prevention of external causes by identifying risks and predicting accidents, applying the ubiquitous care approach in intelligent environments with the support of service robots. The Apollo model employs supervised machine learning algorithms for the detection and classification of risks, based on the context histories of the PRA. Furthermore, it uses a Hidden Markov Model (HMM) for accident prediction. In addition, the ApolloOnto ontology was designed to formalize the application domain and structure the processed contexts.
The Apollo Simulator was implemented to generate synthetic datasets that made the experiments possible. To evaluate Apollo's accuracy, 15 scenarios were modeled based on heuristics and validated by 5 experts. The scenarios evaluated considered the prediction of falls of the elderly, burns of the deaf, electric shock, and drowning of children. Risk detection reached an average F1-score of 97.9%, while accident prediction achieved an average accuracy of 100%. The results indicate the feasibility and effectiveness of Apollo in supporting accident prediction.
</description>
<pubDate>Mon, 28 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11258</guid>
<dc:date>2022-03-28T00:00:00Z</dc:date>
</item>
<item>
<title>Salus: um modelo para assistência educacional ubíqua em doenças crônicas não transmissíveis</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11257</link>
<description>Salus: um modelo para assistência educacional ubíqua em doenças crônicas não transmissíveis
Larentis, Andrêsa Vargas
Noncommunicable Chronic Diseases (NCDs) are the leading cause of death worldwide: 41 million people die from them each year, accounting for 74% of all deaths. These chronic diseases need long-term, continuous treatment that requires knowledge about the diseases and adaptation to treatment needs, incurs costs, and delays or hinders development, especially for low- and middle-income families. Actions for the prevention and monitoring of NCDs should be promoted through ubiquitous computing technologies, providing services that assist individuals in health education, including self-management and self-care of their health conditions. Through ubiquitous computing it is possible to integrate technologies into the daily life of individuals, making use of smartphones and tablets. In turn, ubiquitous education makes possible the integration of individuals with their context, helping in continuous and contextualized learning. In this sense, this thesis proposes SALUS, a computational model that uses individuals' context histories as a mechanism to assist in ubiquitous educational assistance for the prevention and monitoring of NCDs. These technologies allow SALUS to adapt to available resources and provide personalized services to assist individuals in improving their specific health conditions. The model uses an ontology to represent knowledge in the domain of ubiquitous educational assistance in NCDs. In addition, it explores elements of the individuals' context that are used in the composition of context histories. Finally, the analysis of context histories is used to customize services that deliver useful information for the health education of individuals. These characteristics are the contributions of SALUS in the area of ubiquitous education applied to NCDs. In order to evaluate the model, a prototype with services for recommending content and places for individuals with or without a diagnosis of cardiovascular disease was created.
The evaluation aimed to assess the correctness of the content and place recommendations indicated by the service. A public database containing data from 4239 individuals was used in the evaluation. The results show that in 28.8% of the records the content recommendation score is between the high (score &gt;60 and ≤80) and very high (score &gt;80) ranges, and 96.18% of these records have a nominal cardiovascular disease (CVD) risk score between elevated and risk &gt;30% (a classification indicated by calculating the 10-year risk score for CVD, defined in the Framingham Heart Study). Regarding the place recommendation score, 25.4% of the records had a value between high and very high, and 100% of these records had a nominal CVD risk score between elevated and risk &gt;30%. The results obtained reinforce the hypothesis that it is possible to define a computational model to support ubiquitous assistance in the education of individuals aimed at the prevention and monitoring of NCDs.
</description>
<pubDate>Wed, 30 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11257</guid>
<dc:date>2022-03-30T00:00:00Z</dc:date>
</item>
<item>
<title>STEAM: um modelo para processamento de eventos e enriquecimento de fluxos de dados IoT na borda da rede</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/10918</link>
<description>STEAM: um modelo para processamento de eventos e enriquecimento de fluxos de dados IoT na borda da rede
Gomes, Márcio Miguel
CONTEXT: The Internet of Things is a fast-expanding environment in which objects, animals, or people are equipped with the most diverse sensors and can automatically transfer their data through a network. Due to their limited nature, sensors and edge devices usually only relay the collected data to be processed by centralized systems in the cloud and, in many cases, wait for a response. This transfer from local to remote processing results in critical issues such as loss of connection, high response time, and computer system overhead, in addition to requiring a robust and scalable structure for data communication and centralized processing. OBJECTIVE: Thus, we identified two challenges. First, devise a model capable of bringing data processing from the cloud to the network edge. Second, implement a solution that meets the constraints and heterogeneity of the IoT environment, from both a hardware and a software perspective. The scientific contribution consists in the proposal of a model containing several layers, from data collection, processing, and evaluation to the publication of results, in addition to the implementation of a set of classes and functions that facilitate the development of IoT applications executed by devices with few computational resources at the edge of the network. The main practical results are the optional use of the cloud, near real-time processing, and simplicity in application development. METHODOLOGY: The methodology consists of proposing a model and implementing a framework called STEAM. The validation of the model takes place through the implementation of applications built with the STEAM framework, besides the evaluation of performance metrics and the usage of computational resources such as CPU, memory, and network. RESULTS: The experiments carried out in a semiconductor industry through the implementation of 2 applications and 4 test scenarios demonstrated the viability of both the model and the STEAM framework.
Since one of the goals was to build lightweight applications in edge computing, we achieved an average of less than 1.0% CPU load and less than 436 KB of memory consumption on a Raspberry Pi 3 Model B+. In addition, we reached fast response times, processing up to 239 data packets per second, reducing the size of the output data to 14% of the size of the raw input data when notifying events, and integrating with a remote control panel application. CONCLUSION: The proposal proved to be viable, with promising results, presenting the STEAM framework as a lightweight, fast, and accurate alternative for the development of IoT applications with data processing at the edge of the network, eliminating the processing dependency on the cloud.
</description>
<pubDate>Wed, 23 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/10918</guid>
<dc:date>2022-02-23T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive Maintenance &amp; Schedule (PdMS): um novo processo de fabricação que integra manutenção preditiva e programação de produção</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/10901</link>
<description>Predictive Maintenance &amp; Schedule (PdMS): um novo processo de fabricação que integra manutenção preditiva e programação de produção
Zonta, Tiago
CONTEXT: Industry 4.0 (I4.0) provides connectivity, data volume, new devices, miniaturization, inventory reduction, personalization, and controlled production. In this new era, production customization and data availability are essential to generate information that allows decision-making. The possibility of predicting the need for maintenance in the future and using this information in other processes is one of the challenges of the manufacturing process. In this context, this thesis proposal goes beyond the specific application of predictive maintenance (PdM) and suggests ways to integrate processes, focusing on maintenance and&#13;
production schedules. OBJECTIVE: The objective is to create Predictive Maintenance &amp; Schedule (PdMS) to integrate maintenance and production schedules in a predictive way. At each reading of sensor data and operational information, the machine's remaining useful life (RUL) is predicted, deciding whether or not the machine will be part of the production process. Reinforcing that, this new industry scenario allows computing applications, together with artificial intelligence and distributed computing, to become more effective in manufacturing processes. With the creation of PdMS, the idea is to reduce downtime, improve communication between the maintenance and production sectors, and allow future integration with the production, storage, and logistics sectors. METHODOLOGY: The PdMS creation process was divided into two phases: (i) related to PdM, which creates and combines degradation indices using similarity patterns and applies Savitzky-Golay and Kalman smoothing filters, allowing time-based failures to be identified in noisy data; (ii) related to the scheduling problem and the integration with the results generated by the PdM, which covers schedule generation, maintenance verification, and the generation of graphics to control and follow up the production schedule. To evaluate PdMS, a sample predictive maintenance dataset provided by Microsoft was used. We searched for data with characteristics that could contribute to defining an approach that encourages the adoption of predictive maintenance in factories that already have telemetry in their assets but still perform corrective or preventive maintenance. RESULTS: To evaluate the results, we compared several models based on Deep Neural Networks (DNN) and Recurrent Neural Networks (RNN).
Regression Random Forest (RRF) was used to contribute to feature selection, and a comparison was performed between Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), recurrent networks, and Deep Feed Forward (DFF) networks. The results were evaluated visually and by error-based criteria: Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Squared Error (MSE), coefficient of determination (R2), and Mean Absolute Percentage Error (MAPE). The best results presented RMSE = 8.789, MSE = 77.253, MAE = 2.262, R2 = 0.848, and MAPE = 92.22. CONCLUSION: As a contribution, this work brings a systematic review with a taxonomy proposal, the identification of challenges, and open questions regarding I4.0 with a focus on PdM. The PdMS model was created from the challenges presented, covering the decisions, strategies, and architecture that resulted in the prediction of failures in noisy data five days in advance in the dataset used for the experiment, thus enabling the intended simulation of outcome integration.
</description>
<pubDate>Tue, 11 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/10901</guid>
<dc:date>2022-01-11T00:00:00Z</dc:date>
</item>
<item>
<title>HealthStack: providing an IoT middleware for malleable QoS service stacking for Healthcare 4.0</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/10859</link>
<description>HealthStack: providing an IoT middleware for malleable QoS service stacking for Healthcare 4.0
Rodrigues, Vinicius Facco
</description>
<pubDate>Tue, 08 Dec 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/10859</guid>
<dc:date>2020-12-08T00:00:00Z</dc:date>
</item>
<item>
<title>Deepsigns: a predictive model based on deep learning for the early detection of patient health deterioration</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/10773</link>
<description>Deepsigns: a predictive model based on deep learning for the early detection of patient health deterioration
Silva, Denise Bandeira da
CONTEXT: The accurate and early diagnosis of critically ill patients depends on the medical staff's attention and on the observation of different variables, such as vital signs and laboratory test results, among others. Seriously ill patients usually have changes in their vital signs before worsening. Monitoring these changes is essential to anticipate the diagnosis and initiate patient care. Prognostic indexes play a fundamental role in this context, since they allow us to estimate the patients' health status. Besides, the adoption of Electronic Health Records (EHR) improved data availability, and this data can be processed by machine learning techniques to extract information that supports clinical decisions. The volume and variety of data stored in the EHR make it possible to carry out more accurate analyses that allow different types of health care assessments. Nevertheless, as the amount of available data is vast and complex, there is a need for new methods to analyze that data and explore significant patterns. The use of Machine Learning (ML) techniques to generate knowledge, search for information patterns, and support clinical decisions is one of the possibilities to address this problem. OBJECTIVE: This work aims to create a computational model able to predict the deterioration of patients' health status in such a way that it is possible to start the appropriate treatment as soon as possible. The model was developed based on a Deep Learning technique, a Recurrent Neural Network, the Long Short-Term Memory (LSTM), to predict patients' vital signs and subsequently evaluate the severity of the patient's health status through Prognostic Indexes commonly used in the health area. METHOD: The methodology of this work consists of the following steps, carried out in sequence.
The definition of the data source to be used in the creation of the model and the selection of the data, the pre-processing to create a database for the development of the model, the definition and implementation of the model, and its evaluation through comparison with other models. RESULTS: Experiments showed that it is possible to predict vital signs with good precision (accuracy &gt; 80%) and, consequently, predict the Prognostic Indexes in advance so as to treat patients before deterioration. Predicting the patient's vital signs for the future and using them for the Prognostic Index calculation allows clinical teams to anticipate future severe diagnoses that would not be possible using only the current vital signs (50%-60% of cases would not be identified). CONCLUSION: This work's main scientific contribution is the creation of a method for predicting vital signs based on historical data with low Mean Squared Error and its subsequent application in the calculation of prognostic indexes with effectiveness (50%-60% of cases that would not otherwise be identified as severe). The differential presented by this proposal stems from the fact that few works predict vital signs. Most works focus on predicting specific health outcomes, such as specific diagnoses, considering the current vital signs. In this work, the proposal is to predict the evolution of vital signs in the future and use these predicted signs to calculate prognostic indexes.
</description>
<pubDate>Tue, 29 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/10773</guid>
<dc:date>2020-09-29T00:00:00Z</dc:date>
</item>
<item>
<title>Investigação de diferentes métodos e recursos para controle de prótese de mão através da classificação de sinais EMG via aprendizado de máquina</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/10571</link>
<description>Investigação de diferentes métodos e recursos para controle de prótese de mão através da classificação de sinais EMG via aprendizado de máquina
Souza, João Olegário de Oliveira de
The technological advances of recent years have allowed the development of hand prostheses with more movement precision, reduced weight, and the use of bioelectric signals in their operation. Nowadays, prostheses with myoelectric control are considered the state of the art in this segment; they represent a great tool in the restoration of part of daily tasks and in the improvement of quality of life for upper limb amputees. However, the control of these devices is not intuitive, because users of myoelectric prostheses need to perform complex sequences of muscle contraction impulses to change the type of movement. The goal of this thesis was the development of real-time myoelectric control of a hand prosthesis using Machine Learning. The system architecture includes the integration of the electromyographic (EMG) signal acquisition devices, the platform for the implementation of the real-time classifier, and the interface for the servomotor driver of an open source hand prosthesis. The following classifier models were implemented and compared: Multilayer Neural Network, Convolutional Neural Network, Recurrent Neural Network using LSTM units, and Random Forest. First, experiments were performed on offline systems involving the processing of three databases, incrementally incorporating and evaluating different features and sensors until the implementation of the online system. A Multilayer Perceptron (MLP) classifier was implemented on a rapid prototyping platform (Raspberry Pi 3 Model B+), obtaining average accuracies of 96.3% (offline) and 87.2% (online) and real-time responses (10.3 ms) for 11 hand gestures.
</description>
<pubDate>Fri, 08 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/10571</guid>
<dc:date>2021-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Um novo método para avaliar os coeficientes de rugosidade e áreas de vale de superfícies adquiridas por scanner a laser</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/10046</link>
<description>Um novo método para avaliar os coeficientes de rugosidade e áreas de vale de superfícies adquiridas por scanner a laser
Tonietto, Leandro
Quality evaluation of a material’s surface is performed through roughness analysis of surface samples. Several techniques have been presented to achieve this goal, including geometric analysis and surface roughness analysis. Geometric analysis allows a visual and subjective assessment of roughness (a qualitative assessment), whereas computation of the roughness parameters is a quantitative assessment and allows a standardized analysis of the surfaces. This work proposes a new method to evaluate surface roughness, starting from the generation of a visual surface roughness signature, which is calculated through the roughness parameters computed in hierarchically organized regions. New parameters for the analysis of adhesion by contact are also proposed: the valley area rate and the average valley area. The proposed method is compared with the conventional (2D) roughness determination method to demonstrate its advantage; it presents results with higher resolution and accuracy. The evaluation tools presented in this new method provide a local and more accurate evaluation of the computed coefficients, which benefits the evaluation and comparison of the sampled surfaces in relation to other roughness determination methods. The results presented using the new parameters demonstrate that the method is effective for analyzing the extent of adhesion by contact area.
</description>
<pubDate>Wed, 05 May 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/10046</guid>
<dc:date>2021-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>AdaptThing: modelo computacional para gerenciamento dinâmico e adaptativo de objetos da IoT utilizando histórico de contextos</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/9676</link>
<description>AdaptThing: modelo computacional para gerenciamento dinâmico e adaptativo de objetos da IoT utilizando histórico de contextos
Wolf, Alexandre Stürmer
It is currently expected that any environment may become an intelligent environment, capable of responding to events, especially in unexpected situations. Intelligent environments are favored by technological evolution, which has enabled the emergence of the Internet of Things (IoT) paradigm, allowing connectivity between different systems and devices, whether physical or virtual, through the Internet. Both systems and devices must adapt to the needs of environments, responding dynamically to changes. This requires sensory elements, called sensory objects, used to collect information from the environment. In addition to collecting and obtaining data from heterogeneous sources, it is necessary to store the data in such a way as to constitute a historical basis for consultation and inference, considering the characteristics of the event, the location, and the moment in which it occurred, thus generating a context history. Based on the context history, coupled with new events, it is possible to infer the need to reconfigure the operational behavior of sensory objects, even relocating mobile resources to less densely monitored areas, thus enabling more reliable and detailed data. Thus, this thesis proposes the AdaptThing computational model, which supports heterogeneous sensory object networks with the ability to dynamically adapt the operational behavior of the elements involved in order to improve data resolution and detail. The computational model was implemented and evaluated in two application scenarios. One scenario was educational, where the system provided questions according to the average knowledge of the class to which it was applied, reducing the number of questions on the same subject by 33.3%. The other scenario was applied to a set of 14 professional weather stations, where one of the stations had its operation adapted based on contextual information, reducing its computational consumption by 67%.
Thus, it is considered that the AdaptThing computational model can manage IoT sensory objects, dynamically adapting their operational behavior and allowing for more detailed information.
</description>
<pubDate>Wed, 11 Dec 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/9676</guid>
<dc:date>2019-12-11T00:00:00Z</dc:date>
</item>
</channel>
</rss>
