<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>PPG Computação Aplicada</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/1686</link>
<description>PPG Computação Aplicada</description>
<pubDate>Fri, 10 Apr 2026 04:03:35 GMT</pubDate>
<dc:date>2026-04-10T04:03:35Z</dc:date>
<item>
<title>Oraculum: a model for self-adaptive system optimization in smart environments</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13867</link>
<description>Oraculum: a model for self-adaptive system optimization in smart environments
Noetzold, Darlan
This dissertation introduces Oraculum, a modular self-adaptive framework designed to support the monitoring, prediction, reasoning, and adaptation of distributed systems operating in smart environments. Many existing solutions treat these tasks as disconnected components, relying on static training phases, fixed adaptation logic, and reactive decision-making triggered only after system degradation is detected. Oraculum proposes an integrated approach in which monitored metrics are continuously collected and processed to generate predictions and select actions in advance of performance failures. The framework consists of three key components. SHiELD is a sensor data simulator that generates synthetic time-series data using ARIMA models and applies heuristic methods (such as filtering, aggregation, and compression) to simulate realistic variability and reduce processing overhead. OntOraculum is a semantic ontology that formalizes performance metrics into five categories and enables the system to classify and validate alerts through rule-based reasoning and SPARQL queries. The adaptation engine uses regression and classification models to forecast short-term metric behavior and integrates a reinforcement learning agent based on a Markov Decision Process (MDP), which receives contextual states and selects actions such as resource scaling, scheduling adjustment, or service reconfiguration. The RL engine also includes a retraining mechanism that periodically updates policies using new data. The entire architecture operates in a closed feedback loop, using predictions and inferred knowledge to support earlier and more informed decisions. The model includes automated pipelines for dataset creation, model training, hyperparameter tuning, and continuous learning, covering both predictive models and RL agents. Experimental validation was conducted in a containerized testbed with simulated load variation. Results were collected
across multiple performance indicators, including CPU, memory, latency, and model accuracy. The contributions of this work are: (i) the proposal of an integrated framework that combines monitoring, forecasting, semantic validation, and adaptation; (ii) the development of SHiELD for synthetic data generation and heuristic preprocessing; (iii) the design of OntOraculum for metric classification and rule-based inference; (iv) the implementation of a prediction-based strategy for early alert generation to reduce adaptation delay; and (v) the modeling of an RL engine with configurable actions and scheduled policy retraining.
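As a hedged illustration of the SHiELD idea described above, the sketch below generates an autocorrelated synthetic metric stream and applies a window-aggregation heuristic; the process, parameter values, and window size are illustrative assumptions, not SHiELD's actual implementation.

import numpy as np

def simulate_metric(n=1440, phi=0.9, mu=50.0, sigma=5.0, seed=0):
    """Generate an autocorrelated series resembling a monitored metric (e.g., CPU %)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = mu
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.normal(0.0, sigma)
    return np.clip(x, 0.0, 100.0)

def aggregate(series, window=10):
    """Aggregation heuristic: one mean per window, reducing downstream load."""
    trimmed = series[: len(series) // window * window]
    return trimmed.reshape(-1, window).mean(axis=1)

cpu = simulate_metric()
print(aggregate(cpu)[:5])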
</description>
<pubDate>Wed, 16 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13867</guid>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</item>
<item>
<title>Heimdall: an architecture for online machine learning through imbalanced data</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13844</link>
<description>Heimdall: an architecture for online machine learning through imbalanced data
Vargas, Vitor Werner de
Machine Learning (ML) algorithms have been increasingly applied to domain areas where data is available for process automation. However, in the case of imbalanced data applications, the training process is challenging, since ML algorithms intrinsically learn from balanced distributions. This research proposes Heimdall, a resourceful architecture for online ML through imbalanced data. Designed as a service for prediction and analysis requests, Heimdall serves existing applications from external systems, extending artificial intelligence capabilities and automated processes to traditional applications supervised by experts. The architecture focuses on efficiently solving imbalance and improving performance through a set of good practices compiled from mapped studies, such as probability threshold optimization, high-performance sampling, and ensemble learning. Furthermore, Heimdall proposes and evaluates the efficiency of novel functionalities. Firstly, a new performance metric corrects the precision-recall balance according to the application's needs, enhancing probability threshold optimization. Secondly, the architecture independently automates data management and training pipelines through two rule-based reactive agents that constantly monitor data changes and model degradation to trigger processes. These reactive agents compose a strategy for adaptive efficiency, enabling better and more stable performance by sacrificing efficiency in warm-up conditions and maintaining excellent performance and efficiency in hot conditions. To adequately evaluate the architecture, this study implemented a prototype for one well-studied and severely imbalanced application, Credit Card Fraud Detection (CCFD). Isolating the improvement of each proposed functionality, the analysis evaluated performance over time and overall performance against related works through five scenarios. The results indicated that the prototype achieved excellent performance even with few anomalies and improved systemic efficiency over time. Finally, the overall performance achieved results comparable to the best-performing related works.
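A minimal sketch of the probability-threshold optimization idea described above, using a generic F-beta score as a stand-in for Heimdall's custom precision-recall metric; the beta weight and the toy data are assumptions.

import numpy as np

def f_beta(p, r, beta):
    denom = beta * beta * p + r
    return (1 + beta * beta) * p * r / denom if denom > 0 else 0.0

def best_threshold(y_true, y_prob, beta=2.0):
    """Sweep candidate thresholds; keep the one maximizing the weighted score."""
    best = (0.5, 0.0)
    for t in np.linspace(0.01, 0.99, 99):
        pred = (y_prob >= t).astype(int)
        tp = np.sum(pred * y_true)
        fp = np.sum(pred * (1 - y_true))
        fn = np.sum((1 - pred) * y_true)
        p = tp / (tp + fp) if tp + fp > 0 else 0.0
        r = tp / (tp + fn) if tp + fn > 0 else 0.0
        score = f_beta(p, r, beta)
        if score > best[1]:
            best = (t, score)
    return best

y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.3, 0.4, 0.2, 0.8, 0.9, 0.35, 0.55])
print(best_threshold(y_true, y_prob))  # beta=2 favors recall over precision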
</description>
<pubDate>Tue, 26 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13844</guid>
<dc:date>2023-09-26T00:00:00Z</dc:date>
</item>
<item>
<title>Tekohá: um ambiente virtual para o ensino da história das missões jesuíticas no Rio Grande do Sul</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13694</link>
<description>Tekohá: um ambiente virtual para o ensino da história das missões jesuíticas no Rio Grande do Sul
Martins Júnior, Renato da Veiga
Virtual Reality (VR) is defined as a technology that creates three-dimensional digital environments that require physical immersion and sensory stimulation, are interactive, and can mentally transport the user to another place. It is a computational technology that has grown in many forms, being increasingly used in different contexts. Numerous studies on the use of VR in adult education have produced positive results, providing students with an authentic context where they can develop their scope of learning, visualize situations and concepts in a unique way, increase time on task and enjoyment of learning, increase motivation, deepen learning, and improve long-term retention. However, little is known about the use of VR in children's education, but the few results already obtained point to its stimulating potential. In this work, a systematic review on the use of VR in children's education was carried out and a project for the use of VR to support history teaching was developed. A virtual environment, called Tekohá, was specially designed for use by children in the fourth and fifth grades of Elementary School. The virtual environment presents students with an introduction to the Guarani Jesuit Mission, now ruins in São Miguel das Missões in Rio Grande do Sul/Brazil, considered a UNESCO World
Heritage Site and part of the curricular requirements for Elementary School. For the evaluation, a controlled experiment with 130 students from a private school in Porto Alegre was conducted. The results indicate that the Tekohá virtual environment is easy and fun to use and that the use of VR motivates and engages students in the learning process, enabling a sense of presence in situations from the past in an immersive and realistic way. Regarding learning, it was possible to statistically confirm that the group that used Tekohá performed better in the knowledge test about the Jesuit Mission of São Miguel Arcanjo.
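As a hedged illustration of the kind of statistical check behind the learning result reported above, a two-sample Welch t-test on knowledge-test scores; the score arrays are hypothetical, and the dissertation's actual statistical procedure may differ.

from scipy import stats

tekoha_group = [8.5, 9.0, 7.5, 8.0, 9.5, 8.5, 7.0, 9.0]   # hypothetical test scores
control_group = [6.5, 7.0, 8.0, 6.0, 7.5, 6.5, 7.0, 6.0]  # hypothetical test scores

t_stat, p_value = stats.ttest_ind(tekoha_group, control_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p below 0.05 suggests a real group difference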
</description>
<pubDate>Tue, 29 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13694</guid>
<dc:date>2025-04-29T00:00:00Z</dc:date>
</item>
<item>
<title>Continual knowledge distillation for histopathology</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13563</link>
<description>Continual knowledge distillation for histopathology
Rodrigues Neto, João Batista
With the emergence of computational pathology, many datasets were made public and challenges were published to encourage researchers to develop assistant frameworks for pathology tasks. The analysis of histopathological slides, made by pathologists to detect tumorous cells or metastases in tissue images, is one such task, to which computer vision has been successfully applied, even outperforming human experts. Despite the excellent results in the literature, the majority of approaches are dataset-dependent and lack generalization, making even the best-documented models perform poorly when presented with different tissues. In this work, we designed a novel continual learning method that leverages enhanced knowledge distillation to improve model generalization across datasets. We verified, through deep and extensive experimentation on 19 datasets, an overall improvement of 15.66% in comparison to common literature methods, and superior metrics in relation to models with full dataset availability. Also, our method was the only one to achieve positive forward (FWT) and backward (BWT) knowledge transfer indexes, considerably mitigating the catastrophic forgetting effect.
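A minimal sketch of the classic knowledge-distillation loss such methods build on (temperature-softened teacher and student distributions); the dissertation's enhanced variant is not specified in the abstract, so only the standard form is shown.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened distributions, scaled by T^2."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

student = torch.randn(4, 2)  # e.g., tumor vs. normal patch logits
teacher = torch.randn(4, 2)
print(distillation_loss(student, teacher))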
</description>
<pubDate>Mon, 07 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13563</guid>
<dc:date>2024-10-07T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Artificial Intelligence methods for the automatic measurement of a new biomarker aiming at glaucoma diagnosis</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13562</link>
<description>Exploring Artificial Intelligence methods for the automatic measurement of a new biomarker aiming at glaucoma diagnosis
Fernandes, Gabriel Castro
</description>
<pubDate>Fri, 26 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13562</guid>
<dc:date>2024-07-26T00:00:00Z</dc:date>
</item>
<item>
<title>NASP: Network Slice as a Service Platform for new-generation networks beyond 5G</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13334</link>
<description>NASP: Network Slice as a Service Platform for new-generation networks beyond 5G
Grings, Felipe Hauschild
The 5th generation of mobile telecommunications (5G) is being rapidly adopted worldwide, accelerating the demand for highly flexible private networks. In this context, 5G has mobile network slicing as one of its main features, for which the 3rd Generation Partnership Project (3GPP) defines three main use cases: massive Internet of Things (mIoT), enhanced Mobile BroadBand (eMBB), and Ultra Reliable Low Latency Communications (URLLC), along with their management functions. Moreover, the European Telecommunications Standards Institute (ETSI) defines standards for Zero-touch network &amp; Service Management (ZSM) without human intervention. However, the technical documents of these institutes fail to define End-to-End (E2E) management and integration among different domains and subnet instances. This work presents NASP, a Network Slice as a Service Platform that is agnostic to 3GPP and non-3GPP networks. The NASP architecture is based on three main components: (i) onboarding of requests for new slices at the business level, fulfilling their translation into definitions of physical instances, distributions, and interfaces among domains; (ii) a hierarchical orchestrator working among management functions; and (iii) communication interfaces with network controllers. These configurations are based on the technical documents of entities such as 3GPP, ETSI, and O-RAN, following a study of overlapping designs and gaps among the different views. A NASP prototype was developed based on the proposed architecture, bringing implementations and solutions for an agnostic platform that provides end-to-end Network Slice as a Service. The tests were analyzed using two use cases (3GPP and Non-3GPP) with four different scenarios, i.e., mIoT, URLLC, 3GPP Shared, and Non-3GPP. The results pointed out the platform's adaptability in serving different requests received by the Communication Service Management Function. Moreover, the evaluation showed the time to create a Network Slice Instance, of which 68% is dedicated to the Core configuration. The tests also presented a 93% reduction in data session establishment time when comparing the URLLC and Shared scenarios. Finally, the study presents the cost variation for operating the platform with the orchestration of 5 and 10 slices, presenting a variation of 112% between Edge and Central.
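A hedged sketch of component (i), the translation of a business-level slice request into per-domain configurations; the class, field names, and mapping rules are illustrative assumptions rather than the actual NASP interfaces.

from dataclasses import dataclass

@dataclass
class SliceRequest:
    use_case: str        # "mIoT", "eMBB", or "URLLC"
    max_latency_ms: int
    devices: int

def translate(req: SliceRequest) -> dict:
    """Map the business-level request to RAN/Transport/Core domain settings."""
    return {
        "ran": {"scheduling": "low_latency" if req.use_case == "URLLC" else "best_effort"},
        "transport": {"qos_latency_ms": req.max_latency_ms},
        "core": {"upf_placement": "central" if req.max_latency_ms > 10 else "edge"},
    }

print(translate(SliceRequest("URLLC", 5, 100)))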
</description>
<pubDate>Wed, 24 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13334</guid>
<dc:date>2024-04-24T00:00:00Z</dc:date>
</item>
<item>
<title>ProMerge: uma abordagem para auxiliar desenvolvedores na detecção e resolução proativa de conflitos de integração de código fontes</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/13144</link>
<description>ProMerge: uma abordagem para auxiliar desenvolvedores na detecção e resolução proativa de conflitos de integração de código fontes
Carbonera, Carlos Eduardo
Source file integration plays a fundamental role in various software development tasks, for example, when new functionality is accommodated or when conflicting code snippets changed in parallel by distributed development teams must be reconciled. Conflicts are detected when code snippets receive divergent modifications implemented in parallel by different developers, affecting the structure and/or semantics of source code files. These modifications can affect the same code section (direct conflict) or different sections (indirect conflict). Although source file integration has been widely investigated and explored by industry and academia in recent decades, conflict detection and resolution are still considered highly error-prone tasks that require considerable effort from developers. This research proposes ProMerge, an approach to help developers proactively detect and resolve direct and indirect conflicts generated as code snippets are modified in parallel. ProMerge introduces the concept of a context history of conflicts between code snippets, detects conflicts within and across branches, helps developers evaluate conflict severity, and supports the concept of committing time. ProMerge was designed based on the results of a systematic literature mapping that investigated work on software integration published over the last two decades, addressing nine research questions. ProMerge was implemented as a plug-in for the Eclipse platform. The proposed approach was evaluated through a controlled experiment with thirty-two industry professionals, who performed ten experimental tasks divided into two scenarios evaluating integration effort, integration correctness, and the error rate of the performed integrations, generating three hundred and twenty evaluation scenarios. The results, supported by statistical tests, indicate that the accuracy rate found was higher than with the traditional approach; the error rate found in the evaluation tasks that were part of the experiment was higher than with the traditional approach. Furthermore, a qualitative assessment was conducted by applying the TAM questionnaire to understand the degree of acceptance of the proposed approach. In total, thirty-one participants who answered the questionnaire are industry professionals. The results indicated that using ProMerge significantly reduced the effort (time) needed to complete the tasks. The contextual information generated during the experiments helped developers better understand the error and correctness rates. Finally, using ProMerge contributed to improving developer performance, as well as to understanding and applying the newly implemented concepts and generating performance and productivity indicators.
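A minimal sketch of direct-conflict detection as characterized above: two branches conflict when their modified line ranges in the same file overlap. The data structures are illustrative, not ProMerge's implementation.

def direct_conflicts(edits_branch_a, edits_branch_b):
    """Edits are lists of (file, start_line, end_line) tuples changed in each branch."""
    conflicts = []
    for fa, sa, ea in edits_branch_a:
        for fb, sb, eb in edits_branch_b:
            # inclusive ranges overlap when the smaller end meets the larger start
            if fa == fb and min(ea, eb) >= max(sa, sb):
                conflicts.append((fa, max(sa, sb), min(ea, eb)))
    return conflicts

print(direct_conflicts([("Main.java", 10, 20)], [("Main.java", 18, 25)]))
# -> [('Main.java', 18, 20)]: both branches touched lines 18-20 (direct conflict)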
</description>
<pubDate>Fri, 22 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/13144</guid>
<dc:date>2024-03-22T00:00:00Z</dc:date>
</item>
<item>
<title>Healthtranslator: a model for integration between IoT devices and healthcare systems</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12910</link>
<description>Healthtranslator: a model for integration between IoT devices and healthcare systems
Cabral, Arthur Tassinari
</description>
<pubDate>Mon, 04 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12910</guid>
<dc:date>2023-09-04T00:00:00Z</dc:date>
</item>
<item>
<title>Federated hospital: a multilevel federated learning architecture for dealing with heterogeneous data distribution in the context of smart hospitals services</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12803</link>
<description>Federated hospital: a multilevel federated learning architecture for dealing with heterogeneous data distribution in the context of smart hospitals services
Policarpo, Lucas Micol
The integration of artificial intelligence (AI) and machine learning (ML) services in healthcare has revolutionized patient care, ranging from real-time health monitoring to complex medical image analysis. However, deploying these ML services in the context of smart hospitals poses significant challenges due to varying data demands and privacy concerns. Federated Learning (FL) emerges as a promising solution by allowing data to remain with users while training ML models collaboratively. FL ensures data privacy and offers scalability by enabling distributed learning across multiple users. In this research, we extend the FL paradigm to the domain of smart hospitals and propose the "Federated Hospital" model to address the challenges posed by heterogeneity among different hospital departments. By leveraging multi-level aggregation, the Federated Hospital architecture is designed to accommodate the diverse demands and health situations within individual departments, providing personalized and accurate ML models for each user. Through extensive experimentation and evaluation in distinct scenarios, including homogeneous and heterogeneous data distributions, we compare the performance of the Federated Hospital model against standard ML and FL approaches. The results confirm the effectiveness of our proposal in terms of accuracy, efficiency, and convergence speed. Moreover, the multi-level aggregation process in the smart hospital architecture enhances model performance, ensuring the generation of tailored ML models specific to each department's unique characteristics. The Federated Hospital model demonstrates its potential to improve the execution of ML-oriented services in smart hospitals. By optimizing the accuracy and performance of ML models for diverse healthcare departments, our proposal aims to revolutionize data-driven decision-making, promoting personalized patient care and efficient healthcare services. The next step of this research is to execute Federated Hospital in real hospitals in the metropolitan area of Porto Alegre, Rio Grande do Sul.
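A minimal sketch of the multi-level aggregation idea, with FedAvg-style weighted means applied first within each department and then across departments; the actual Federated Hospital aggregation rules may differ.

import numpy as np

def fedavg(models, sizes):
    """Weighted average of parameter vectors by local dataset size."""
    w = np.asarray(sizes, dtype=float)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, models))

# Level 1: aggregate clients inside each department.
cardiology = fedavg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [100, 300])
oncology = fedavg([np.array([0.0, 1.0]), np.array([2.0, 2.0])], [200, 200])
# Level 2: aggregate department models into the hospital-level model.
hospital = fedavg([cardiology, oncology], [400, 400])
print(hospital)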
</description>
<pubDate>Thu, 10 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12803</guid>
<dc:date>2023-08-10T00:00:00Z</dc:date>
</item>
<item>
<title>Aperfeiçoamento do treinamento de redes de super-resolução deep learning a partir de imagens hiperespectrais aprimoradas</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12596</link>
<description>Aperfeiçoamento do treinamento de redes de super-resolução deep learning a partir de imagens hiperespectrais aprimoradas
Sales, Vinicius Ferreira
Inserted in the context of Visual Computing and Remote Sensing, Super-resolution consists of the process of restoring high-frequency information in low spatial resolution images. Traditionally, this type of technique seeks to overcome the physical limitations of some imaging sensors in identifying and analyzing specific targets. With the increasing use of Deep Learning methods, more robust Super-resolution approaches have been gaining ground, as is the case of Super-resolution networks based on convolutional neural networks. Although such approaches have proved superior to traditional digital image processing techniques, mainly in RGB scenes, multispectral and especially hyperspectral images need more attention from the method, since, when improving their spatial resolution, spectral consistency must be maintained, a fact considered one of the great challenges within Super-resolution. Still in this context, due to the difficulty of obtaining hyperspectral images of low and high spatial resolution properly registered, the low spatial resolution scenes are synthesized from their high spatial resolution counterparts through a process of degradation, resampling, and noise addition. Although this flow is commonly adopted, the real influence that resampling techniques have on intelligent Super-resolution methods has not yet been evaluated, since there is no consensus on the best technique to be used. Thus, the hypothesis is that identifying the best resampling function for low-resolution (LR) hyperspectral images (HIs) enables Deep Learning Super-resolution models to generate high-resolution (HR) HIs of better quality. Therefore, this work aims to evaluate resampling functions and identify the best function for improving the training of Deep Learning super-resolution networks. As a proposal for this work, two different hyperspectral image datasets well established in the literature were chosen and used in the process of synthesizing low spatial resolution images. Subsequently, with the generated data, their behaviors were evaluated within the best Super-resolution model selected from the related works. This evaluation was performed with different hyperspectral image comparison metrics, especially Peak Signal-to-Noise Ratio, Spectral Angle Mapper, and Structural Similarity Index Measure. From the values obtained, hypothesis tests such as the Friedman and Nemenyi tests were applied in order to identify statistically which technique performed best. Finally, the results were compared and evaluated on a new dataset obtained in a controlled way, so that spectral consistency could be evaluated based on predicted high-resolution spectral images and point readings from a non-imaging spectrometer. From the results obtained, the Lanczos and Cubic resampling types presented the best results in relation to the others, thus confirming the evaluated hypothesis.
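A hedged sketch of the LR-synthesis step under comparison: downsampling an HR band with different resampling filters and scoring the round trip with PSNR. Illustrative only (random data, single band); the study also uses SAM and SSIM.

import numpy as np
from PIL import Image

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(peak * peak / mse)

hr = Image.fromarray(np.random.randint(0, 256, (128, 128), dtype=np.uint8))
for name, flt in [("lanczos", Image.Resampling.LANCZOS),
                  ("cubic", Image.Resampling.BICUBIC),
                  ("nearest", Image.Resampling.NEAREST)]:
    lr = hr.resize((32, 32), flt)      # synthesize the LR scene
    back = lr.resize((128, 128), flt)  # naive upsampling as a stand-in for the SR model
    print(name, round(psnr(np.asarray(hr), np.asarray(back)), 2))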
</description>
<pubDate>Fri, 26 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12596</guid>
<dc:date>2023-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>Associação das queimadas com doenças respiratórias e complicações da COVID-19 no Estado do Pará, Brasil</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12442</link>
<description>Associação das queimadas com doenças respiratórias e complicações da COVID-19 no Estado do Pará, Brasil
Schroeder, Lucas
Brazil has faced two simultaneous problems related to respiratory health: forest fires and the high mortality rate due to the COVID-19 pandemic. The Amazon rain forest is one of the Brazilian biomes that suffers most from fires caused by droughts and illegal deforestation. These fires can bring respiratory diseases associated with air pollution, and the State of Pará in Brazil is the most affected. The COVID-19 pandemic associated with air pollution can potentially increase hospitalizations and deaths related to respiratory diseases. Here, we aimed to evaluate the association of fire occurrences with COVID-19 mortality rates and general respiratory disease hospitalizations in the State of Pará, Brazil. We employed the k-means clustering algorithm, accompanied by the elbow method to identify the ideal number of clusters, grouping the cities of the State of Pará into 10 clusters, from which we selected the clusters with the highest and lowest fire occurrence from 2015 to 2019. Next, an Auto-Regressive Integrated Moving Average with Exogenous inputs (ARIMAX) model was proposed to study the serial correlation of respiratory disease hospitalizations and their association with fire occurrences. Regarding the COVID-19 analysis, we computed the mortality risk and its confidence level considering the quarterly incidence rate ratio in clusters with high and low exposure to fires. Using the k-means algorithm, we identified two clusters with similar HDI (Human Development Index) and GDP (Gross Domestic Product) from the group of ten clusters that divided the State of Pará, but with diverse behavior considering hospitalizations and forest fires in the Amazon biome. From the ARIMAX model, it was possible to show that, besides the serial correlation, fire occurrences contribute to the increase in respiratory diseases, with an observed lag of six months after the fires in the case of high exposure to fires. A highlight that deserves attention concerns the relationship between fire occurrences and deaths. Historically, the risk of mortality from respiratory diseases is higher (about double) in regions and periods with high exposure to fires than in those with low exposure. The same pattern remains in the period of the COVID-19 pandemic, where the risk of mortality from COVID-19 was 80% higher in the region and period with high exposure to fires. Regarding the SARS-CoV-2 analysis, the risk of mortality related to COVID-19 is higher in the period with high exposure to fires than in the period with low exposure. Another highlight concerns the relationship between fire occurrences and COVID-19 deaths. The results show that regions with high fire occurrences are associated with more COVID-19 deaths. Decision-making is a critical problem, mainly when it involves environmental and health control policies. Environmental policies are often more cost-effective as health measures than the use of public health services. This highlights the importance of data analyses to support decision-making and to identify populations in need of better infrastructure due to historical environmental factors and the associated health risks. The results suggest that fire occurrences contribute to the increase in respiratory disease hospitalizations. The COVID-19 mortality rate was higher in the period with high exposure to fires than in the period with low exposure. Regions with high fire occurrences are associated with more COVID-19 deaths, mainly in months with a high number of fires.
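A hedged sketch of the clustering step: k-means over per-city features, with the elbow method (inertia versus k) guiding the choice of cluster count. The feature matrix here is random, whereas the study used real fire and hospitalization data for Pará.

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(144, 4)  # e.g., one row per city: fires, HDI, GDP, hospitalizations
inertias = []
for k in range(1, 15):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias.append(km.inertia_)
# The "elbow" is where inertia stops dropping sharply; the study settled on 10 clusters.
print([round(i, 1) for i in inertias])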
</description>
<pubDate>Fri, 16 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12442</guid>
<dc:date>2022-12-16T00:00:00Z</dc:date>
</item>
<item>
<title>Identificação de falhas geológicas em sísmicas usando Redes Neurais Convolucionais</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12441</link>
<description>Identificação de falhas geológicas em sísmicas usando Redes Neurais Convolucionais
Alves, Lucas Gabriel Ferreira
Approaches using machine learning are being used to support activities in Geoscience. Among the possible applications, some are aimed at interpreting seismic data in tasks such as identifying features or identifying faults. In particular, this work assists seismic interpretation and can bring gains by reducing manual work and the time spent studying a geological area. This dissertation describes how a tool capable of selecting points representing geometric sequences in seismic data, and discontinuities in these sequences, can be developed. Thus, in this work, a study of types of deep neural networks applied to seismic geological data was carried out, from which the identification of 2D faults or fractures was addressed. Experiments with deep neural network training on seismic data were also carried out to serve as the basis for the proposed work. With this study and these experiments, a new encoder-decoder network architecture was proposed and evaluated, performing image segmentation to identify faults. This architecture is based on the DNFS, StNet, and FaultNet networks. The work also contributed a dataset with annotated seismic fault data, made available for access and used in the experiments. Our future steps include fostering solutions to identify faults or critically stressed fractures according to the stress field.
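A toy encoder-decoder of the general kind the proposed architecture builds on; it is not the DNFS/StNet/FaultNet-based network itself, whose exact layers the abstract does not specify.

import torch
import torch.nn as nn

class TinyFaultSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # downsample the amplitude patch
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # upsample back to a per-pixel fault map
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, seismic):  # seismic: (batch, 1, H, W)
        return torch.sigmoid(self.decoder(self.encoder(seismic)))

print(TinyFaultSegmenter()(torch.randn(1, 1, 64, 64)).shape)  # (1, 1, 64, 64)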
</description>
<pubDate>Thu, 01 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12441</guid>
<dc:date>2022-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical fog-cloud architecture to process priority-oriented health services with serverless computing</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12440</link>
<description>Hierarchical fog-cloud architecture to process priority-oriented health services with serverless computing
Cassel, Gustavo André Setti
Smart cities and healthcare services have been gaining much attention in recent years, as the benefits provided by this field of research are significant and improve quality of life. Systems can proactively detect health problems by monitoring a person's vital signs and making automated decisions in order to prevent these problems from worsening. Examples include health services sending notifications to the user's smartphone when a health problem is detected, or automatically calling an ambulance when vital signs indicate that a severe problem is about to happen in the next minutes. With this context in mind, we highlight two essential requirements that architectures for smart cities should consider to achieve high quality of experience in the field of health. The first is to execute health services with short response times when ingesting high-priority vital signs, so people with comorbidities can have health problems identified as soon as possible. The second is to employ scalability techniques to deal with high usage peaks caused by people concentrating in specific city neighborhoods. Related works already propose solutions to minimize response time, but we argue that considering the semantics of user priority and service priority in the field of health is essential to ensure the appropriate quality of experience. Our understanding is that users with comorbidities should have more priority than healthy users when computing resources are scarce, and specific health services should have higher priority than others. With this in mind, this thesis contributes to this field of research by proposing SmartVSO, a computational model of a hierarchical, scalable fog-cloud architecture that executes health services with optimized execution throughput and minimized response time for critical vital signs. We employ fog computing to achieve short response times and cloud computing to achieve virtually infinite computing resources. A first heuristic favors critical vital signs when competing for scarce, low-latency resources during high usage peaks. This is accomplished by calculating a ranking for the incoming vital sign, which considers both user and service priorities that semantically represent the vital sign's importance. When vital signs collide with the same calculated ranking, a second heuristic uses forecasting techniques to favor health services that will complete faster, with the goal of optimizing execution throughput. We consider serverless computing as the primary technology for deploying and running health services because this allows authorized third parties to implement their own health services in a distributed and pluggable approach, without recompiling the proposed decision-making modules. Finally, we introduce a recursive mechanism that offloads vital signs to parent fog nodes when local computing resources are overloaded, until the vital sign can be processed on a fog node with available computing resources, or is offloaded to the cloud as the last resort. An experiment with 80,000 vital signs indicates that our solution processes 60% of critical vital signs in no more than 5.3 seconds, while a naive architecture that does not employ fog computing and does not favor critical vital signs takes up to 231 minutes (around 3 hours and 51 minutes) to process 60% of critical vital signs.
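A minimal sketch of the first heuristic: rank incoming vital signs by combined user and service priority and serve the most critical first when fog resources are scarce; the weighting scheme is an illustrative assumption.

import heapq

def ranking(user_priority, service_priority, w_user=0.6, w_service=0.4):
    """Higher rank = more critical (e.g., comorbidity patient plus urgent service)."""
    return w_user * user_priority + w_service * service_priority

queue = []
for vs_id, up, sp in [("vs1", 0.9, 0.8), ("vs2", 0.2, 0.5), ("vs3", 0.9, 0.9)]:
    # heapq is a min-heap, so push the negated rank to pop the most critical first
    heapq.heappush(queue, (-ranking(up, sp), vs_id))

while queue:
    rank, vs_id = heapq.heappop(queue)
    print(vs_id, -rank)  # vs3 first, then vs1, then vs2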
</description>
<pubDate>Fri, 24 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12440</guid>
<dc:date>2023-02-24T00:00:00Z</dc:date>
</item>
<item>
<title>Aprendizado profundo para assistência histopatológica: um modelo computacional para detectar micrometástases em câncer de mama</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12439</link>
<description>Aprendizado profundo para assistência histopatológica: um modelo computacional para detectar micrometástases em câncer de mama
Kuhn, Gabriela
CONTEXT: Cancer is nowadays one of the leading public health problems worldwide, and breast cancer is one of the most common in women. The prognosis and overall patient survival decrease significantly when breast cancer metastasizes. The evaluation of the presence of metastatic cells in the sentinel lymph node is currently the gold standard for the diagnosis of metastases, but the examination process is time-consuming for the pathologist and susceptible to failures, especially in the detection of micrometastases. Advances in histopathological image digitalization and deep learning have brought attention to models that can assist in tasks related to the diagnosis of micrometastases in breast cancer. OBJECTIVE: This research aims to contribute to this area through the investigation of a deep learning model capable of detecting breast cancer micrometastases with an efficiency comparable to pathologists. METHODOLOGY: To achieve this objective, our architecture is divided into two main tasks. The first consists of a convolutional neural network that performs patch-level classification on fragments of the full-resolution original image (which in this work we refer to as patches). Afterwards, the second task performs pixel-level segmentation to extract the metastatic areas of the images and measure them, in order to identify micrometastases. For training, we use the Camelyon16 challenge dataset; therefore, the evaluation metrics of our model are based on the baselines evaluated in this challenge. RESULTS: As partial results, our classification task achieved AUC = 0.998 in the isolated tests carried out at the fragment level of the slide, resulting in an F1-score of 1.00 for the negative class and 0.99 for the positive class, generating no false negatives in the partial steps. Our segmentation task reached an IoU score of 0.5434 and an F1 score of 0.64818. The final results were obtained through the reconstruction of the segmented images. Although we obtained good results in the partial and isolated tests of each task on the slide fragments, they did not carry over to the final slide-level results produced at the end of the framework, which did not reproduce the metrics found in the partial tests, making it impossible to locate the regions of metastasis precisely. However, there is still room for improvement in the model, and the experimental results indicate that the method can contribute to the proposed study. As we chose to work with the images at the highest resolution, dividing them into patches and using a two-stage neural network model (classification and segmentation), the processing time of a single slide by the proposed framework is up to 2 hours. CONCLUSION: Our results indicate that, with implementation improvements, this model has the potential to meet the
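A hedged sketch of the measurement step after segmentation: converting the largest connected metastatic region to millimetres and applying the commonly used micrometastasis size band (taken here as 0.2-2.0 mm, an assumption, as is the pixel spacing).

import numpy as np
from scipy import ndimage

def classify_lesion(mask, microns_per_pixel=0.25):
    """mask: binary array produced by the pixel-level segmentation task."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return "negative"
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    # approximate the largest lesion's diameter from its pixel area
    diameter_mm = 2 * np.sqrt(sizes.max() / np.pi) * microns_per_pixel / 1000
    if diameter_mm > 2.0:
        return "macrometastasis"
    return "micrometastasis" if diameter_mm > 0.2 else "isolated tumor cells"

print(classify_lesion(np.ones((3000, 3000), dtype=np.uint8)))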
</description>
<pubDate>Tue, 18 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12439</guid>
<dc:date>2023-04-18T00:00:00Z</dc:date>
</item>
<item>
<title>MoStress: um modelo de aprendizado profundo para detecção de estresse a partir de sinais fisiológicos</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12438</link>
<description>MoStress: um modelo de aprendizado profundo para detecção de estresse a partir de sinais fisiológicos
Souza, Arturo de
The COVID-19 pandemic showed how preparing for and fighting such diseases plays a crucial role in modern society. However, the coronavirus is not the only disease with pandemic reach afflicting the globe: mental illnesses also afflict a large share of the world population. Nowadays, stress, anxiety, and depression are classified as mental illnesses, and approximately 10.7% of the world population suffers from one of these diseases; therefore, mental illness has high pandemic potential and should be treated with the necessary urgency. One approach to dealing with mental illness is to use machine learning algorithms that take time series as input to detect these diseases. Considering the huge variety of physiological signals measured by modern sensors, such as temperature, heart rate, and others, and also considering the increasing popularity of those sensors in our society, the use of such signals to monitor all kinds of diseases gains relevance. In that sense, processing time series that represent physiological signals with modern machine learning techniques may result in a substantial improvement in the population's quality of life, because with those algorithms several diseases might be classified more quickly and efficiently, easing healthcare professionals' diagnoses and preventing diseases from reaching a worse stage. This work introduces MoStress, a deep learning model that takes as input time series representing physiological signals and performs stress classification. MoStress comprises a pre-processing step, which consists of using the Fourier Transform to remove noise, a Rolling Z-Score to normalize the data, windowing by class frequency for window classification, and weight calculation to deal with imbalanced data. Besides that, MoStress also has a deep neural network that performs the classification using the pre-processed data, where this neural network consists of one of the following models: a recurrent neural network, an Echo State Network, or a combination of the NBeats and a Multi-Layer Perceptron network. MoStress used public physiological data collected by the University of Siegen, Germany (the WESAD dataset), which comprises 3 different classes: baseline, stress, and amusement. Using physiological signals of respiration, temperature, electrocardiogram, electromyogram, and electrodermal activity collected via a chest sensor, after pre-processing these data and using a recurrent neural network, MoStress achieved an accuracy of 96.5% on the 3-class classification problem, and also achieved recall, F1-score, and precision of 96%, 93%, and 94%, respectively, for the stress class, showing good performance on the classification problem with pre-processed data and a recurrent neural network.
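A minimal sketch of the Rolling Z-Score normalization step from the pre-processing pipeline described above; the window length is an illustrative assumption.

import numpy as np
import pandas as pd

def rolling_zscore(signal, window=256):
    """Normalize each sample against the mean/std of a trailing window."""
    s = pd.Series(signal)
    mean = s.rolling(window, min_periods=1).mean()
    std = s.rolling(window, min_periods=1).std().replace(0, np.nan)
    return ((s - mean) / std).fillna(0.0).to_numpy()

ecg = np.sin(np.linspace(0, 60, 4096)) + np.random.normal(0, 0.1, 4096)
print(rolling_zscore(ecg)[:5])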
</description>
<pubDate>Tue, 28 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12438</guid>
<dc:date>2023-02-28T00:00:00Z</dc:date>
</item>
<item>
<title>Uso de NFT no receituário digital: um mecanismo de segurança médica e farmacêutica no sistema de saúde pública</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12217</link>
<description>Uso de NFT no receituário digital: um mecanismo de segurança médica e farmacêutica no sistema de saúde pública
Gomes, Christopher de Paula
The adoption of crypto assets is increasing, and new uses for them are emerging every day. Crypto asset networks enable high scalability, provide privacy to their users, and offer reliable registration of virtual objects and transactions. Crypto assets can be non-fungible tokens (NFTs), generated by a specific contract but distinct from each other. NFTs can represent real-world objects and provide traceability guarantees that are compatible with the requirements of a drug sales control mechanism. Given this context, this work analyzes the application of NFTs in the representation of purchase permits for restricted drugs. It proposes an architectural model based on Blockchains and NFTs that allows the storage of medical prescriptions and the respective permissions for the acquisition of controlled drugs. This model was implemented and evaluated using a second-layer Ethereum test network called Mumbai. On this network, the cost of NFT creation requests is less than R$0.01 and the transaction time is less than 15 seconds, which makes the implementation viable. These results show that it is possible to implement a medical prescription system through a scalable, cheap, and reliable network.
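A hedged sketch of what a prescription NFT's off-chain record and on-chain fingerprint could look like; the field names are illustrative assumptions, not the dissertation's contract schema.

import hashlib
import json

prescription = {
    "patient_id_hash": hashlib.sha256(b"patient-registry-id").hexdigest(),
    "drug": "controlled-drug-code-B1",   # hypothetical identifier
    "quantity": 30,
    "prescriber_crm": "CRM/RS 12345",    # hypothetical physician registration
    "valid_until": "2022-12-31",
}
# The token would carry only this digest; the full record stays off-chain.
token_digest = hashlib.sha256(json.dumps(prescription, sort_keys=True).encode()).hexdigest()
print(token_digest)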
</description>
<pubDate>Wed, 28 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12217</guid>
<dc:date>2022-09-28T00:00:00Z</dc:date>
</item>
<item>
<title>BeeBr: uma proposta de arquitetura computacional na apicultura, para a predição de problemas na colmeia</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12154</link>
<description>BeeBr: uma proposta de arquitetura computacional na apicultura, para a predição de problemas na colmeia
Soares, Marcelo Barbosa
Brazilian beekeeping is a segment formed mainly by small family producers, for whom the activity supplements income. The sector relies on obsolete working methods, performed entirely by hand and without any technological intervention; as a consequence, the process is exhausting for the beekeeper and has negative effects on the health of the bees. Given this scenario, this study proposes a computational architecture model that aims to contribute to beekeeping by minimizing interventions in the hives and thus ensuring the well-being of the insects. In contrast to other research, its scientific contribution lies in the elaboration and training of a new machine learning model aimed at predicting swarming, as well as in energy-efficient and sustainable solutions regarding the energy consumption of IoT equipment. On the technological side, BeeBr offers a complete low-cost solution for the beekeeping segment. As a result, BeeBr enabled readings of six hive variables over a period of 20 days. The collected data allowed a statistical analysis and the design of three experiments to evaluate the machine learning model; in final numbers, it was possible to reach accuracy above 93% in swarm prediction and gains of 16.67% in energy efficiency.
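A hedged sketch of the prediction step: a classifier over the six monitored hive variables. The variable set, labels, and model choice are assumptions, since the abstract does not name the exact algorithm.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# columns (assumed set): internal temp, external temp, humidity, weight, sound level, CO2
X = rng.random((500, 6))
y = (X[:, 0] + X[:, 4] > 1.2).astype(int)  # synthetic "swarming" label for illustration

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
print("holdout accuracy:", model.score(X[400:], y[400:]))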
</description>
<pubDate>Fri, 18 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12154</guid>
<dc:date>2022-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>Sistema baseado em tolerância a falha para o monitoramento de práticas esportivas de alto desempenho</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12153</link>
<description>Sistema baseado em tolerância a falha para o monitoramento de práticas esportivas de alto desempenho
Nascimento Júnior, Josué Francisco do
Currently, high-performance sport demands a lot from athletes; therefore, it is essential to have constant monitoring of training, so the athlete's performance can be analyzed day after day until competition. The IoT (Internet of Things), through wearables for data collection, is the best option for such monitoring, quickly offering the coach better decision-making based on what is perceived in training. More and more data is generated by IoT devices and wearables in sports, and much of this data is lost or misused. In this context, fault tolerance is the set of techniques used to detect, mask, and tolerate system failures, allowing better management of the collection process from the sensors. In this way, with the system functioning close to 100% of the time, the athlete will have access to detailed and relevant performance information from training and competitions. Therefore, the present work proposes a system capable of monitoring the performance of athletes in different environments, managing the collection of data from the sensors through a system that is tolerant to both hardware and software faults. In particular, the system carries out a survey of which sensors are capable of monitoring and, in this way, provides accurate detailing of the information collected, standardizing the data according to the modality being monitored. During the entire monitoring process, the system checks the availability of the hardware in real time. The scientific contribution of this work arises from the growing demand for highly reliable systems: the management of the monitoring of high-performance athletes in different environments, with the main objective of offering detailed data from an athlete's training for more assertive decision-making by the coach. The proposed system was evaluated through a case study carried out with a prototype applied to athletics training. Several sensors were used along the way, and a fault-tolerant system was responsible for keeping the system as available as possible. The results showed that the proposed fault-tolerant computational system was successful in the collections. In particular, the evaluated system was able to obtain data from athletes during training in an acceptable time, even with the occurrence of partial failures.
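A minimal sketch of the fault-tolerance idea: detect a silent sensor through a heartbeat timeout and mask the failure by failing over to a redundant sensor. The timeout value and sensor names are illustrative assumptions.

import time

HEARTBEAT_TIMEOUT_S = 2.0
last_seen = {"gps_primary": time.monotonic(), "gps_backup": time.monotonic()}

def on_reading(sensor_id):
    """Called by the collection layer whenever a sensor delivers a sample."""
    last_seen[sensor_id] = time.monotonic()

def pick_available(primary, backup):
    """Mask a primary-sensor failure by switching to the redundant sensor."""
    age = time.monotonic() - last_seen[primary]
    return primary if HEARTBEAT_TIMEOUT_S > age else backup

last_seen["gps_primary"] = time.monotonic() - 5.0  # simulate a silent primary
print(pick_available("gps_primary", "gps_backup"))  # -> gps_backup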
</description>
<pubDate>Wed, 21 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12153</guid>
<dc:date>2022-09-21T00:00:00Z</dc:date>
</item>
<item>
<title>Um modelo de rede neural para estimar a produtividade de milho a partir de imagens de satélite</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12151</link>
<description>Um modelo de rede neural para estimar a produtividade de milho a partir de imagens de satélite
Apoitia, Carlos Eduardo de Moura
The soy moratorium has been an important instrument for reducing deforestation in the Amazon biome. However, illegal deforestation grows in proportion to the number of embargoed lands, since the moratorium has loopholes that have been exploited by producers linked to deforestation, enabling practices such as grain laundering, which makes it difficult for analysts and control agencies to identify the real productivity of a property. This work proposes an artificial neural network model, called Deep Yield Prediction (DYP), capable of decoding the information learned by convolutional neural networks (CNNs) into corn crop yield predictions, helping to identify how much a property will actually produce and thus contributing to breaking the cycle of grain laundering and illegal deforestation in the Amazon biome.
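A toy stand-in for the DYP idea, mapping a multispectral satellite patch to a single yield estimate; the abstract gives no layer details, so the architecture below is purely illustrative.

import torch
import torch.nn as nn

class YieldRegressor(nn.Module):
    def __init__(self, bands=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # regression output, e.g., predicted t/ha

    def forward(self, patch):  # patch: (batch, bands, H, W)
        return self.head(self.features(patch).flatten(1))

print(YieldRegressor()(torch.randn(2, 6, 64, 64)).shape)  # (2, 1)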
</description>
<pubDate>Mon, 26 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12151</guid>
<dc:date>2022-09-26T00:00:00Z</dc:date>
</item>
<item>
<title>SmellGuru: a machine learning-based approach to predict design problems</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12013</link>
<description>SmellGuru: a machine learning-based approach to predict design problems
Silva, Robson Keemps da
Nowadays, the prediction of source code design problems plays an essential role in the software development industry, identifying defective architectural modules in advance. For this reason, some studies have explored this subject in the last decade due to its relation to maintenance and modularity. Unfortunately, the current literature lacks (1) a generic workflow approach that contains the key steps to predict design problems, (2) a language that allows developers to specify design problems, and (3) a machine learning model to generate predictions of design problems. Therefore, this dissertation proposes SmellGuru, a machine learning-based approach to predict design problems. In particular, this study (1) introduces an intelligible workflow that provides clear guidance to users and facilitates the inclusion of new strategies or steps to improve predictions; (2) proposes a domain-specific language (DSL) to specify bad smells, along with tool support; and (3) proposes a machine learning model to support the prediction of design problems. In addition, this study carried out a systematic review of the literature that allowed creating an overview of the current literature on predicting design problems. An exploratory study was carried out to understand the impact of the proposed DSL on three variables: the correctness rate of the created specifications, the error rate, and the time invested to elaborate the specifications of design problems. The initial results, supported by statistical tests, are encouraging, revealing a correctness rate above 50%, an error rate below 30%, and an effort of less than 15 minutes to specify a bad smell. The evaluation of the proposed SmellGuru approach was carried out with 23 participants, students and professionals from Brazilian companies with professional experience in software development. It was possible to assess the perceived ease of use, perceived usefulness, and behavioral intention to use the proposed SmellGuru approach. Respondents agreed that SmellGuru is easy to interpret (43.47%), innovative (60.86%), and would make software easier to maintain (78.26%). Finally, this study draws some implications and shows the potential of adopting the proposed approach to support the specification and prediction of design problems.
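A hedged sketch of what a bad-smell specification and its evaluation could look like; the rule syntax, metric names, and thresholds are assumptions, not the dissertation's actual DSL.

RULES = {
    # smell name -> predicate over per-class code metrics
    "GodClass": lambda m: m["methods"] > 20 and m["loc"] > 500,
    "LongMethod": lambda m: m["max_method_loc"] > 80,
}

def detect_smells(class_metrics):
    """Return the names of all rules that the given metrics satisfy."""
    return [name for name, rule in RULES.items() if rule(class_metrics)]

print(detect_smells({"methods": 35, "loc": 900, "max_method_loc": 40}))
# -> ['GodClass']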
</description>
<pubDate>Fri, 16 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12013</guid>
<dc:date>2022-09-16T00:00:00Z</dc:date>
</item>
<item>
<title>Avaliação de uma interface do usuário (UI) incluída no software de realidade virtual imersiva (RVi) mosis LAB</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/12011</link>
<description>Avaliação de uma interface do usuário (UI) incluída no software de realidade virtual imersiva (RVi) mosis LAB
Weppo, Branda Eloá
Discomfort and eye strain are common problems during prolonged use of Immersive Virtual Reality. Due to the lack of protocols for the development of interfaces in this environment, proposing convenient solutions for this task can be a complex challenge. Semiotics, the study of the construction of meaning from the senses, has been seen as an alternative to improve User Experience (UX) in environments designed with a 3D interface. The present research proposes the definition of analytical guidelines to evaluate the UX of the Mosis LAB software, considering the adjustment of the environment according to semiotics and UX concepts. Related works containing ideas with the potential to improve UX in the virtual environment targeted by the study were surveyed. The research includes simulations with each element of the Mosis LAB to assess the user's perception from a semiotic perspective. Theoretical support from research on Semiotics and User Perception has considerable potential to improve UX in the virtual environment targeted by the study. Therefore, it tends to improve the efficiency of the interaction, reducing potential physical and mental harm and thus enabling longer continuous use of the system.
</description>
<pubDate>Wed, 14 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/12011</guid>
<dc:date>2022-09-14T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-LoRa - Rede de Comunicação com LoRa multi-rádio e multi-hop para IoT em larga escala</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11890</link>
<description>Multi-LoRa - Rede de Comunicação com LoRa multi-rádio e multi-hop para IoT em larga escala
Albuquerque, Eliel de
IoT (Internet of Things) devices must cover large areas of arable land to collect vital information, avoiding financial losses and improving efficiency in agriculture. It is critical to perform data collection on farms where network connectivity is a limitation. LoRaWAN (Long Range Wide Area Network) provides a wide coverage area for IoT applications using unlicensed frequency bands with low power consumption and low throughput. However, LoRaWAN is not suitable for data acquisition on farms with thousands of hectares, with coverage requirements equal to or greater than the dimensions reported in the literature, or in regions with sloping terrain. A multi-hop LoRa (Long Range) network has emerged as a promising solution for applications that require deployments over extensions equal to or greater than the recommended area. However, a multi-hop architecture must be designed to deal with the limitations of deploying LoRa in large-scale scenarios, such as its half-duplex nature. This work presents Multi-LoRa, a multi-radio and multi-hop LoRa communication architecture to improve coverage and service for large-scale IoT deployment in rural areas. Furthermore, we present a hardware prototype that physically implements the Multi-LoRa architecture. The results show that Multi-LoRa effectively mitigates the difficulties of multi-hop communication over LoRa for large-scale IoT deployment. Multi-LoRa reduced delay by 60% and packet loss by 2.9% compared to different Multi-LoRa configurations in a small-scale physical test environment and a large-scale emulation environment.
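A back-of-envelope sketch of why multiple radios help: a half-duplex relay cannot receive while retransmitting, so packets cannot be pipelined through the path, whereas separate rx/tx radios allow pipelining. The airtime and path length are illustrative assumptions.

AIRTIME_S = 0.37  # assumed airtime of one LoRa packet at a high spreading factor
HOPS, PACKETS = 3, 20

# Half-duplex relays alternate between receiving and retransmitting each packet.
half_duplex = PACKETS * HOPS * AIRTIME_S
# With one rx radio and one tx radio per relay, forwarding is pipelined.
multi_radio = (PACKETS + HOPS - 1) * AIRTIME_S
print(round(half_duplex, 1), "s vs", round(multi_radio, 1), "s")  # 22.2 s vs 8.1 s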
</description>
<pubDate>Tue, 26 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11890</guid>
<dc:date>2022-07-26T00:00:00Z</dc:date>
</item>
<item>
<title>DeepCADD: a deep neural network for automatic detection of coronary artery disease</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11829</link>
<description>DeepCADD: a deep neural network for automatic detection of coronary artery disease
Freitas, Samuel Armbrust
CONTEXT: Cardiovascular diseases are the leading cause of death globally and include the most common heart disorders, notably coronary artery disease (CAD). CAD is mainly caused by fat accumulating on the internal walls of the arteries, creating an atherosclerotic plaque that impacts the functional behavior of blood flow. Anatomical plaque characteristics are essential for a complete functional assessment of CAD; in fact, no single method assesses all coronary artery segments with high accuracy. OBJECTIVE: This panorama evidences the need for new techniques applied to imaging exams to improve the functional assessment of cardiovascular diseases by replacing manual activities with automated segment selection. METHODOLOGY: This study presents a deep object-detection neural network architecture, called DeepCADD, to determine lesion locations in right coronary artery (RCA) angiography exams. Using a Mask Region-Based Convolutional Neural Network (Mask R-CNN), we expect to reach precision comparable to the gold standard, automating one step of the current protocol. We replace the Mask R-CNN backbone with a ResNet-50 trained on coronary artery segments to improve the detection of small features. We also train the whole DeepCADD architecture with angiographies collected at a local institution. RESULTS: DeepCADD outperformed similar networks in terms of sensitivity and showed a significant correlation with specialists during validation, which suggests that DeepCADD can be used in the current angiography protocol. CONCLUSION: DeepCADD increases agreement among specialists and provides visual CAD suggestions, especially in multi-vessel lesions, which differentiates it from the current literature. DeepCADD detects a high number of true-positive candidates for lesion quantification, which we expect to extend to further arteries and dynamic evaluation in future research.
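For readers wanting to reproduce the general setup, a minimal sketch (assumptions: torchvision 0.13 or later, two classes, background and RCA lesion, and untrained weights; this is not the dissertation's code) of instantiating a Mask R-CNN with a ResNet-50 FPN backbone:

    import torch
    from torchvision.models.detection import maskrcnn_resnet50_fpn

    # Two classes assumed: background + lesion; weights left untrained here.
    model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)
    model.eval()
    with torch.no_grad():
        image = torch.rand(3, 512, 512)      # stand-in angiography frame
        prediction = model([image])[0]       # dict: boxes, labels, scores, masks
    print(prediction["boxes"].shape)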
</description>
<pubDate>Fri, 18 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11829</guid>
<dc:date>2022-02-18T00:00:00Z</dc:date>
</item>
<item>
<title>ATHENA I: an architecture for real-time monitoring of physiological signals supported by artificial intelligence</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11778</link>
<description>ATHENA I: an architecture for real-time monitoring of physiological signals supported by artificial intelligence
Fröhlich, William da Rosa
Wearable sensors can obtain reliable physiological signals to diagnose diseases and detect changes. Wearables can measure signals such as electrocardiogram, heart rate, electroencephalogram, electromyogram, or galvanic skin response. All these signals have intrinsic characteristics in a normal state and change when associated with illness. The literature presents machine learning approaches and deep learning models as alternatives for pattern detection in physiological signals. The state of the art in this area indicates a trend toward using wearables for continuous monitoring of patients, whether in a hospital or home environment, as they are a portable and noninvasive option. In addition, many studies point to the low cost of wearable sensors as another advantage over traditional hospital medical equipment. Other studies highlight the possibility of supporting automatic disease detection, especially for chronic diseases, by applying artificial intelligence to physiological signals. Based on the review carried out, it is possible to conclude that development opportunities remain: the studied papers do not simultaneously address lower cost, greater flexibility, wide use of machine learning resources, and communication of results. This work's main objective is to develop an architecture for multi-signal acquisition with wearable sensors for continuous monitoring and stress detection. The architecture comprises wearable sensors and single-board computers: the wearable sensors handle data acquisition and processing, and the single-board computer communicates the results to other platforms. The differentials of this architecture are the integration of resources for multi-signal acquisition for continuous patient monitoring with low implementation cost, flexibility, and ease of use. We developed a prototype in a modular way and tested each module of the architecture. These tests aimed to guarantee the independence of the components, carefully evaluating the stability and plausibility of the data. We also carried out two practical stress-inducing experiments. The first produced a proprietary dataset used to train a machine learning model, and the second allowed a full architecture assessment focused on real-time detection. The training and classification results of the machine learning model were promising, with accuracy above 98.72% for binary classification and 92.72% for three-class classification. For real-time classification, we obtained an accuracy of 69.00% for participants in the first round of experiments. The architecture presented excellent communication and operational stability, performing short and long acquisitions efficiently during the experiments. The acquired data showed plausible and justifiable values within the context of the experiment. The classification results obtained when testing the model with participants who had taken part in the training experiments were comparatively high.
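A minimal sketch of the communication split the architecture describes (the wearable module computes features, the single-board computer forwards results); the transport, address, and payload below are invented for illustration:

    import json
    import socket

    # Toy feature vector the wearable module might produce (invented fields).
    payload = {"hr": 88, "eda_mean": 0.42, "stress": 1}

    # Single-board computer side: forward the result to another platform
    # over UDP (hypothetical address and port).
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(json.dumps(payload).encode(), ("127.0.0.1", 9000))
    sock.close()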
</description>
<pubDate>Tue, 22 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11778</guid>
<dc:date>2022-03-22T00:00:00Z</dc:date>
</item>
<item>
<title>Avaliação da integração de abordagens de verificação de fake news</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11775</link>
<description>Avaliação da integração de abordagens de verificação de fake news
Camargo, Jonathan Pereira
The agility with which information can be produced, disseminated, and consumed via the internet has not only democratized access to information but also generated an abundance of it, with the side effect of facilitating the propagation of false or biased information, popularly known as fake news. Due to the high volume and ephemerality of the data, automated methods based on artificial intelligence techniques become essential in the fake news verification process. A review of the state of the art shows that existing approaches have limited applicability in specific contexts and that no approach can handle different contexts without compromising its results. Based on this, this work evaluates the integration of two fake news verification methods in order to expand their scope of application across different contexts. The text classification method obtained an accuracy of 95.33% using Random Forest, while the fact-checking method based on question answering was able to adequately answer the elaborated questions. A comparison methodology was proposed to qualitatively analyze the results of the experiments, which allowed the identification of contributions and future work. The texts classified as false negatives in the classification experiment served as input for elaborating the questions tested in the fact-checking experiment with question answering, validating the complementarity between the methods.
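A minimal sketch of the text-classification arm (TF-IDF features into a Random Forest, as the abstract describes; the corpus and labels below are toy stand-ins):

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline

    texts = ["miracle cure hidden by the media, share now",
             "ministry publishes official health data",
             "celebrity reveals what they do not want you to know",
             "university releases its annual research report"]
    labels = [1, 0, 1, 0]     # 1 = fake, 0 = legitimate (toy labels)

    clf = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=100))
    clf.fit(texts, labels)
    print(clf.predict(["official report on the miracle cure"]))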
</description>
<pubDate>Thu, 07 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11775</guid>
<dc:date>2022-04-07T00:00:00Z</dc:date>
</item>
<item>
<title>Extração de informações em imagens de tráfego: uma abordagem com aprendizado profundo</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11556</link>
<description>Extração de informações em imagens de tráfego: uma abordagem com aprendizado profundo
Fraga, Vitor Augusto
Traffic systems are fundamental to the development of cities. However, these systems increasingly suffer from problems such as congestion, which can increase fuel consumption and air pollution and directly affect people's health; for example, studies indicate that exposure to traffic is a contributing factor in the early stages of myocardial infarction. An efficient way to reduce this category of problem is to control traffic lights intelligently, through reinforcement learning or traffic management algorithms. However, implementing this category of solution requires extracting information from the environment. The advent of digital image processing and convolutional neural networks made it possible to extract such data less intrusively than with more traditional methods, such as installing sensors in the lanes. From images, it is possible to extract different categories of data, such as the number of vehicles in a lane, the time they remain stopped, and, as this work proposes, the origins and destinations of vehicles at intersections. Motivated by the need for data to solve traffic-related problems, this work contributes a complete pipeline for processing images of traffic intersections filmed from an aerial angle. The pipeline detects vehicles, identifies their trajectories, and quantifies origins and destinations, thus differentiating itself from the works surveyed in the literature. The pipeline consists of three main blocks: a custom YOLO (You Only Look Once) convolutional neural network capable of detecting vehicles in aerial footage; a tracking method referenced in the literature whose objective is to assign an identity to vehicles across all frames; and a block called origins and destinations, whose objective is to count the vehicles that pass through a given location in the scene and extract the number of vehicles per route. As an evaluation method, each block of the pipeline was measured. The detector model achieved an IDP of 77.5% and an IDR of 95.8%. The tracking algorithm obtained a MOTA of 72.6% and a MOTP of 74.4%. Since each block of the pipeline depends on the previous one, the overall result is seen through the measurement of the third block, "origins and destinations". This step is evaluated in two phases: counting vehicles that pass through a single point in the scene, where the average OD error is 1.80%, and counting per route, where the average OD error is 7.53%.
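A minimal sketch of the third block's core operation, counting tracked vehicles that cross a virtual line (the track centroids and line position are invented; the dissertation's detector and tracker are not reproduced here):

    from collections import defaultdict

    LINE_Y = 300                  # hypothetical counting line (pixels)
    tracks = {                    # track_id -> successive (x, y) centroids
        1: [(120, 280), (122, 295), (125, 310)],
        2: [(395, 320), (398, 305), (400, 290)],
    }

    counts = defaultdict(int)
    for tid, pts in tracks.items():
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if y1 > LINE_Y >= y0:        # crossed the line moving down-image
                counts["down"] += 1
            elif y0 > LINE_Y >= y1:      # crossed the line moving up-image
                counts["up"] += 1
    print(dict(counts))                  # per-direction flow at this point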
</description>
<pubDate>Wed, 06 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11556</guid>
<dc:date>2022-04-06T00:00:00Z</dc:date>
</item>
<item>
<title>Graduation Mentoring Recommender: um modelo de sistema de recomendação de atividades complementares para capacitação profissional do aluno de graduação</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11548</link>
<description>Graduation Mentoring Recommender: um modelo de sistema de recomendação de atividades complementares para capacitação profissional do aluno de graduação
Marques, Gerson Adriano
The search for personalized education has been a subject of study for many years. Through digital platforms, educational institutions can offer support so that students' educational paths are flexible, with greater focus on their areas of interest. As a contribution to personalized learning, this work proposes a recommendation system model that recommends complementary activities to undergraduate students according to their professional and personal goals, complementing or extending their current educational path and bringing the student closer to professional areas and complementary activities, requirements contained in the MEC guidelines. As part of the construction of the model, experiments were conducted to better understand the recommendation scenario and obtain information. The first experiment used Collaborative Filtering (FC) techniques, with the objective of generating recommendations based on the student's access history. The second experiment used a Content-Based (BC) technique, whose goal was to find similar activities based on their contents. The third experiment combined the techniques of the previous experiments, FC and BC, into a hybrid recommendation approach, and the last experiment used a Graph-Based (BG) technique. The main contributions of this work are: evaluating the benefits that recommendation systems composed of multiple techniques can offer to the student's formative path; and proposing a recommendation system model for expanding the formative path of undergraduate students through complementary activities according to their professional preferences.
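A minimal sketch of the hybrid step, blending a collaborative-filtering score with a content-based score for the same candidate activities (the scores and the blending weight are invented):

    import numpy as np

    def hybrid_score(cf, cb, alpha=0.5):
        """Convex combination of normalized CF and content-based scores."""
        return alpha * cf + (1 - alpha) * cb

    cf = np.array([0.9, 0.2, 0.5])   # from the student's access history (toy)
    cb = np.array([0.4, 0.8, 0.6])   # from activity-content similarity (toy)
    ranking = np.argsort(-hybrid_score(cf, cb))
    print(ranking)                   # candidate activities, best first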
</description>
<pubDate>Tue, 05 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11548</guid>
<dc:date>2022-04-05T00:00:00Z</dc:date>
</item>
<item>
<title>Utilização da busca tabu para a geração de um modelo aplicado ao Job-shop scheduling problem considerando um sistema de manufatura flexível</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11466</link>
<description>Utilização da busca tabu para a geração de um modelo aplicado ao Job-shop scheduling problem considerando um sistema de manufatura flexível
Müller, Gilberto Irajá
This work aims to generate a scheduling model applied to the Job-shop Scheduling Problem in a Flexible Manufacturing System, considering makespan, total tardiness time, total stop time, and total idle time. The proposed model is composed of: (a) an objective function that reflects, through its decision variables and their weights, the optimization strategies, and (b) an architecture divided into five phases. The model uses the Tabu Search algorithm, which, through two neighborhood-generation strategies, searches for the optimization of the objective function. The model architecture is based on the extraction of production demand, Group Technology, Dispatching Rules, the Tabu Search algorithm, and the saving of the production plan, in order to handle part selection (part families) and scheduling problems. Through a case study, several experiments were carried out, enabling the comparison of optimization strategies with real schedules and revealing conflicts among the decision variables. For model validation, classic works that address the solution of the Job-shop Scheduling Problem were used.
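A skeleton of the Tabu Search loop the model relies on (a sketch, not the dissertation's implementation: the toy objective and adjacent-swap neighborhood stand in for the weighted objective function and the two neighborhood-generation strategies):

    import random

    def tabu_search(init, objective, neighbors, iters=200, tenure=10):
        best = current = init
        best_cost = objective(init)
        tabu = []                                # recently applied moves
        for _ in range(iters):
            moves = [(objective(s), m, s) for m, s in neighbors(current)
                     if m not in tabu]
            if not moves:
                break
            cost, move, current = min(moves)     # best admissible neighbor
            tabu = (tabu + [move])[-tenure:]     # fixed tabu tenure
            if best_cost > cost:                 # improved incumbent
                best, best_cost = current, cost
        return best, best_cost

    def neighbors(p):                            # adjacent swaps as moves
        for i in range(len(p) - 1):
            q = list(p); q[i], q[i + 1] = q[i + 1], q[i]
            yield i, tuple(q)

    def objective(p):                            # toy weighted-completion proxy
        return sum((k + 1) * job for k, job in enumerate(p))

    print(tabu_search(tuple(random.sample(range(6), 6)), objective, neighbors))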
</description>
<pubDate>Mon, 20 Feb 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11466</guid>
<dc:date>2006-02-20T00:00:00Z</dc:date>
</item>
<item>
<title>BGNDL: arquitetura de deep learning para diferenciação da proteína biglycan em tecido mamário com e sem câncer</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11265</link>
<description>BGNDL: arquitetura de deep learning para diferenciação da proteína biglycan em tecido mamário com e sem câncer
Silva Neto, Pedro Clarindo da
Artificial intelligence and machine learning have become important allies in healthcare. In this context, deep learning has provided support for critical medical tasks, including diagnosis, outcome prediction, and treatment response. Histological images, the focus of this work, come from the tissues of the human body, and the diagnosis of many diseases, especially malignant ones, depends on the evaluation of histological sections. Within diagnostic imaging evaluations, variation exists: the literature shows that, despite consistent results for the same rater, there are differences between raters. According to the literature, differences in visual perception and clinical training can lead to inconsistencies in diagnostic and prognostic opinions, since pathological analysis is naturally subjective. Routine staining of tissues for microscopic study is not always sufficient; in these cases, biological markers (biomarkers) are used as complements. Interest in biomarker research has grown due to rising research costs and the time required to develop a new compound. For these biomarkers to be used in research, they must go through a validation process in which they are measured in a test system; one of the evaluated properties, the sensitivity of the biomarker, is assessed in this work. Given this scenario, this work used deep learning to create a CNN architecture that checks, from histological images stained with the biomarker Biglycan, whether Biglycan expression differs between tissues with and without breast cancer. The association of deep learning with Biglycan protein expression measured by DAB staining intensity using color deconvolution is new and necessary for biomarker validation. In this sense, the main contributions of this work are: the creation of an original dataset of histological images with and without breast cancer, subjected to the immunohistochemistry technique to determine Biglycan protein expression; the automation of the color deconvolution model to analyze only images with DAB expression; and the development of a CNN architecture that can determine whether Biglycan expression differs between tissues with and without breast cancer. The breast histology images were classified with an average accuracy greater than 93%, indicating that Biglycan biomarker expression does differ between tissues with and without breast cancer.
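A minimal sketch of the color-deconvolution step (assuming scikit-image's H-DAB separation; the random tile stands in for a real histology image):

    import numpy as np
    from skimage.color import rgb2hed

    rgb = np.random.rand(64, 64, 3)     # stand-in for an H-DAB stained tile
    hed = rgb2hed(rgb)                  # channels: hematoxylin, eosin, DAB
    dab = hed[:, :, 2]                  # DAB optical-density map
    print(f"mean DAB signal: {dab.mean():.3f}")   # proxy for Biglycan expression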
</description>
<pubDate>Tue, 05 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11265</guid>
<dc:date>2022-04-05T00:00:00Z</dc:date>
</item>
<item>
<title>Aprendizado por reforço profundo explicável: um estudo com controle semafórico inteligente</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/11078</link>
<description>Aprendizado por reforço profundo explicável: um estudo com controle semafórico inteligente
Schreiber, Lincoln Vinicius
With the fast increase in urbanization levels, the problem of congestion has become even more evident for society, the environment, and the economy. One practical approach to alleviating this problem is adaptive traffic signal control (ATSC). Deep reinforcement learning algorithms have shown great potential for such control. However, these methods can be viewed as black boxes, since their learned policies are not easily understood or explained, and this lack of explainability may be limiting their use in real-world conditions. One framework that can provide explanations for any deep learning model is SHAP. It treats models as black boxes and explains them using post-hoc techniques, deriving explanations from the model's responses to different inputs, without analyzing internal details (such as parameters and architecture). The state of the art in using SHAP with a deep reinforcement learning algorithm to control traffic lights can demonstrate consistency in the logic of the agent's decision making, also presenting its reaction to the traffic in each lane. However, it could not intuitively demonstrate the relation of some sensors to the chosen action and needed several figures to convey the impact of the state on the action. This work presents two approaches based on the Deep Q-Network algorithm to explain the learned policy through the SHAP framework. The first uses the XGBoost algorithm as a function approximator, and the second uses a neural network. Each approach went through a process of studying and optimizing its hyperparameters. The environment was characterized as an MDP and modeled in two different ways, namely Cyclic MDP and Selector MDP, which allowed us to choose different actions and obtain different representations of the environment. Both approaches presented the impact of features on each action through the SHAP framework, which promotes understanding of how the agent behaves under different traffic conditions. This work also describes the application of explainable AI to intelligent traffic signal control, demonstrating how to interpret the model and the limitations of the approach. Furthermore, as a final result, our methods improved travel time, speed, and throughput in two different scenarios, outperforming the FixedTime, SOTL, and MaxPressure baselines.
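A minimal sketch of the first approach's explanation step (an XGBoost function approximator explained with SHAP's TreeExplainer; the state features and value targets are toy data, not the traffic agent's):

    import numpy as np
    import shap
    import xgboost as xgb

    X = np.random.rand(200, 4)                 # toy state features (lane queues)
    y = X @ np.array([2.0, -1.0, 0.5, 0.0])    # toy action-value targets
    model = xgb.XGBRegressor(n_estimators=50).fit(X, y)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:5]) # per-feature impact on the value
    print(shap_values.shape)                   # (5 states, 4 features)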
</description>
<pubDate>Fri, 18 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/11078</guid>
<dc:date>2022-02-18T00:00:00Z</dc:date>
</item>
<item>
<title>Arterial: um modelo inteligente para a prevenção ao vazamento de informações de prontuários eletrônicos utilizando processamento de linguagem natural</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/10900</link>
<description>Arterial: um modelo inteligente para a prevenção ao vazamento de informações de prontuários eletrônicos utilizando processamento de linguagem natural
Goldschmidt, Guilherme
Over the past decade, there has been a steady increase in healthcare security breaches. A study on patient privacy and data security showed that 94% of hospitals had at least one security breach in the past two years, and in most cases the attacks originated from internal actors. It is therefore essential that healthcare organizations protect sensitive information such as test results, diagnoses, prescriptions, surveys, and personal customer information. A leak of sensitive data can result in great economic loss and/or damage to the organization's image. In Brazil, the General Law for the Protection of Personal Data (LGPD) also regulates various aspects of personal information protection. Information protection systems have been taking shape over the last few years, such as firewalls, intrusion detection and prevention systems (IDS/IPS), and virtual private networks (VPNs). However, these technologies work well on well-defined, structured, and constant data, unlike medical records, which have free-text fields. Complementing these technologies are Data Leakage Prevention Systems (DLPS), which help to identify, monitor, protect, and reduce the risk of leaking sensitive data. Conventional DLP solutions, however, use only signature-based and/or static comparisons. Thus, we propose a model based on technologies such as Natural Language Processing (NLP), Named Entity Recognition (NER), and Artificial Neural Networks (ANN) to be more assertive in extracting information and recognizing entities, contributing new perspectives to the literature and the scientific community. Three approaches were implemented and tested, two based on ANNs and one based on classic machine learning algorithms. As a result, the approach implemented with a classic machine learning algorithm reached 98.0% accuracy, 86.0% recall, and a 91.0% F1-score.
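To illustrate the DLP idea on free-text records, a deliberately simplified sketch that masks one kind of sensitive entity (Brazilian CPF numbers) with a regex; the dissertation's model uses NLP/NER rather than patterns, so this shows only the surrounding plumbing:

    import re

    CPF = re.compile(r"\b\d{3}\.\d{3}\.\d{3}-\d{2}\b")
    record = "Paciente Joao, CPF 123.456.789-09, diagnostico: hipertensao."
    masked = CPF.sub("[CPF-REDACTED]", record)
    print(masked)    # the record is now safer to leave the trusted boundary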
</description>
<pubDate>Tue, 21 Dec 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/10900</guid>
<dc:date>2021-12-21T00:00:00Z</dc:date>
</item>
<item>
<title>EventChain: uma proposta de estilo arquitetural para sistemas orientados a cadeia de eventos na área financeira</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/10575</link>
<description>EventChain: uma proposta de estilo arquitetural para sistemas orientados a cadeia de eventos na área financeira
Luz, Maicon Azevedo da
Architectural styles are important for software engineering because they bridge the gap between requirements and implementation design. Their function is to express a set of features of a software architecture, providing a broad view of the communication between its components, facilitating reuse, and reducing complexity. With the growth of the financial sector, companies have employed different architectural styles in software development to increase reusability, performance, and security. The literature on the topic, however, lacks studies investigating modern architectural styles that focus on the specific needs of software architectures for applications in the financial area, such as scalability, high availability, and consistency and integrity of information, and that, given the recent growth of this area, make the development of new applications simple and robust. This dissertation therefore presents EventChain, an architectural style oriented to chains of events, which employs asynchronous communication and blockchain for the development of applications in the financial area. The proposed architectural style was evaluated in two ways: first, through the construction of a prototype to assess feasibility and demonstrate its operation; and second, through a technology acceptance questionnaire to assess the style's acceptance by industry professionals. The results show that the proposed architectural style is a viable, functional approach that meets the requirements of systems in the financial area. Finally, it is concluded that the architectural style represents a new approach with great potential to facilitate the development of new systems in the financial area, addressing specific requirements and making the implementation of new applications flexible.
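A minimal sketch of the style's core invariant as the abstract suggests it (each event records the hash of its predecessor, making the chain tamper-evident; the event fields are invented):

    import hashlib
    import json

    def append_event(chain, payload):
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        body = {"payload": payload, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode())
        body["hash"] = digest.hexdigest()
        chain.append(body)

    chain = []
    append_event(chain, {"op": "deposit", "amount": 100})
    append_event(chain, {"op": "withdraw", "amount": 30})
    print(chain[1]["prev"] == chain[0]["hash"])   # True: events are chained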
</description>
<pubDate>Thu, 09 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/10575</guid>
<dc:date>2021-09-09T00:00:00Z</dc:date>
</item>
<item>
<title>Personalidade e redes sociais: agrupando e analisando características comportamentais de usuários de redes sociais a partir da combinação de traços de personalidade, dados demográficos e pegadas digitais</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/10092</link>
<description>Personalidade e redes sociais: agrupando e analisando características comportamentais de usuários de redes sociais a partir da combinação de traços de personalidade, dados demográficos e pegadas digitais
Tamiosso, Daniel
Digital social networks are becoming mainstream, offering a massive platform for analyzing human behavior in computer-mediated contexts. Algorithms can explore human behavior by analyzing the digital footprints people leave when interacting with social networks. Digital footprints can be produced actively (with consent) or passively (unintentionally), and it is through them that studies can explore behavior and social interaction on a large scale. The discovery of essential and valuable information from digital footprints left on social networks is carried out using pattern recognition technologies and statistical and mathematical techniques, a discipline referred to as data mining. This research seeks to identify user profiles in social networks by grouping behavioral data from social networks (digital footprints), demographic data, and socio-affective profiles (personality traits). More specifically, unsupervised machine learning (clustering) algorithms such as K-means and Spectral Clustering are applied. Unlike other works on personality detection in social networks, this work explores clustering techniques to group users with similar profiles by collecting their digital footprints, demographic data, and personality traits. From there, it aims to understand the personality manifestations of social network users through their behavior, i.e., the role that different personalities play in users' behavior on social networks. Although this work analyzes a small group of users (157 participants), some correlations observed in the related bibliography could be confirmed. This work is a first step toward future incremental works aimed at raising awareness of the relationship between social networks, Personality Computing, and the several underlying fields related to strictly personal and sensitive data. This research also contributes a new labeled, high-dimensional dataset, which combines behavioral data and characteristics extracted from active and passive digital footprints with personality and demographic information from a social network in Portuguese.
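A minimal sketch of the clustering step (synthetic features stand in for the combined footprint, demographic, and personality data; the cluster count is an arbitrary choice here):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    X = rng.normal(size=(157, 8))             # 157 users, 8 mixed features (toy)
    X_std = StandardScaler().fit_transform(X) # mixed scales need normalizing
    labels = KMeans(n_clusters=4, n_init=10).fit_predict(X_std)
    print(np.bincount(labels))                # users per behavioral profile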
</description>
<pubDate>Tue, 17 Aug 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/10092</guid>
<dc:date>2021-08-17T00:00:00Z</dc:date>
</item>
<item>
<title>Formulação de um novo índice espectral para identificação de rochas carbonáticas</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/9885</link>
<description>Formulação de um novo índice espectral para identificação de rochas carbonáticas
Müller, Marianne
Carbonate outcrops are one of the focuses of research in the oil industry. In situ research is the most efficient way to characterize an outcrop, though it requires time and high investment. Analysis using remote sensing data converted into indicators that attest to the presence of certain materials at the study site, such as spectral indexes or fraction images, represents a viable alternative. Although spectral indexes for carbonate rocks already exist, they present limitations mostly related to the resolution of the input data and to extending the indicator to other environments. This master's thesis approached this problem in a structured way to formulate a new index that outperforms those existing in the literature. A data mining procedure was applied to the available data to understand how decisions are made to separate carbonate targets from other materials. Afterward, data scatter plots were produced to locate the region of the multispectral space that concentrates the targets of interest, as well as to visualize how they differ from other targets. Finally, a versatile spectral index for carbonate rocks was formulated, with parameters adjusted according to the sensor in use (in this case, the OLI-Landsat 8 sensor), offering users the possibility of adapting the index to their scenario. The index was calculated for an image and compared with a Ground Truth (GT) produced from visual interpretation followed by data degradation to reach the appropriate scale in each situation. The proposed index followed all the premises indicated by the literature and achieved a global accuracy of 83.2%, relatively higher than the other indexes referenced in the current literature. These results suggest the potential for further research testing the index's application to other study areas and other sensors' data (with the necessary adjustments).
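The thesis's actual band combination is not reproduced here, so the sketch below shows only the generic normalized-difference form such indexes usually take, with gain and offset standing in for the per-sensor adjustment described (the bands are random stand-ins for OLI reflectance):

    import numpy as np

    def carbonate_index(band_a, band_b, gain=1.0, offset=0.0):
        """Generic normalized-difference form; gain/offset are placeholders
        for the sensor-specific tuning the thesis describes."""
        eps = 1e-9                 # guard against division by zero
        nd = (band_a - band_b) / (band_a + band_b + eps)
        return gain * nd + offset

    swir = np.random.rand(100, 100)   # stand-ins for two OLI bands
    nir = np.random.rand(100, 100)
    print(carbonate_index(swir, nir).mean())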
</description>
<pubDate>Thu, 29 Apr 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/9885</guid>
<dc:date>2021-04-29T00:00:00Z</dc:date>
</item>
<item>
<title>Hathor: um modelo computacional para cuidado ubíquo de crianças</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/9853</link>
<description>Hathor: um modelo computacional para cuidado ubíquo de crianças
Santos, Nícolas Bordignon dos
Recent research indicates that the number of accidents involving children grows every year and that families feel responsible for maintaining their children's safety full time, generating frustration and guilt when an accident happens. The home environment is one of the main sources of concern for caregivers regarding accidents with young children. The differential of this work is that the Hathor model performs ubiquitous monitoring of children in the home environment for their families, in addition to detecting and avoiding accident risks based on historical contexts, facilitating parents' control and monitoring of children. The model includes an application that captures data on the child's eating, sleeping, bathroom, and activity routines and notifies parents about risks or an unbalanced routine. The implemented prototype consists of a neural network for identifying children in real-time images using the YOLO version 5 network, and a risk identification module that detects the child's proximity to a predefined risk area and predicts the child's encounter with the risk based on their speed of movement through the images. As contributions, this work brings a framework for integration between triggers and monitoring systems, in order to supervise a child in the home environment, predicting and reacting to risks identified by the system. It also presents a systematic mapping of the area of assistive robotics and its integration with intelligent environments, categorizing the studies found through proposed taxonomies regarding their intended use, their technologies for integration with intelligent environments, their technologies for interaction with human beings, and their target audience. This mapping shows the trend of technologies used in the area, where studies can be found, and the growth of the research area in recent years.
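A minimal sketch of the risk-prediction logic (the geometry is assumed: a circular risk zone in image coordinates, with time-to-risk estimated from the child's position and speed; all values are invented):

    import math

    RISK_CENTER, RISK_RADIUS = (320, 240), 50   # hypothetical zone (pixels)

    def seconds_to_risk(pos, velocity):
        """Estimated time until the child reaches the zone boundary."""
        dist = math.dist(pos, RISK_CENTER) - RISK_RADIUS
        if dist > 0:
            speed = math.hypot(*velocity)
            return dist / speed if speed > 0 else math.inf
        return 0.0                               # already inside the zone

    print(seconds_to_risk((100, 240), (20, 0)))  # 170 px away at 20 px/s -> 8.5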
</description>
<pubDate>Tue, 30 Mar 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/9853</guid>
<dc:date>2021-03-30T00:00:00Z</dc:date>
</item>
<item>
<title>CognIDE: Uma abordagem para integração de dados psicofisiológicos em ambientes integrados de desenvolvimento</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/9838</link>
<description>CognIDE: Uma abordagem para integração de dados psicofisiológicos em ambientes integrados de desenvolvimento
Vieira, Roger Denis
The search for an understanding of how the developer's brain behaves during software development has been an object of growing interest in recent years. Despite new studies collecting psychophysiological data from developers and using them in experiments, little has been explored about the applicability of such data in software engineering. This work therefore proposes a tool for integrating psychophysiological data into integrated development environments (IDEs), presenting them to developers, and verifying their impact on the software development process. The CognIDE tool was developed based on opportunities identified through a systematic mapping of the literature, in which 2084 studies were identified and 27 selected as primary studies. To evaluate the proposed tool, a controlled experiment was designed and executed with 61 individuals from the technology area, assessing the impact of presenting the Cognitive Load metric in the IDE on their perception of anomalies and levels of refactoring intention. As a result, it was observed that presenting the Cognitive Load metric, when its value is High, combined with the number of code anomalies, can assist developers in identifying anomalies, in addition to informing decisions to refactor excerpts of source code. The main contributions of this work are: (1) expansion of the state of the art regarding the integration of psychophysiological data in IDEs and its applicability in software engineering; (2) the implementation of the CognIDE tool and its data-integration approach; (3) empirical knowledge about the impact of displaying developers' psychophysiological data in IDEs.
</description>
<pubDate>Wed, 14 Apr 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/9838</guid>
<dc:date>2021-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>Segmentação de fácies sísmicas com redes neurais</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/9808</link>
<description>Segmentação de fácies sísmicas com redes neurais
Lima, Gefersom Cardoso
The interpretation of seismic data is important for characterizing the shape of the sediments in a geological study area. Traditionally, this work is carried out by visually choosing points that represent the limits of seismic facies and running a tool to infer the remaining limit points. This process requires a lot of manual labor and can let some facies go unidentified, making the resulting work less detailed than it could be. With the increasing use of deep learning for image segmentation, its application to seismic interpretation can bring gains by decreasing manual work and the time spent studying a geological area. Thus, in this work we studied the application of deep neural networks of the encoder-decoder type for identifying the lines that separate seismic facies. As a result, we created a neural network called DNFS, which is based on U-Net and StNet, has fewer parameters than both, and is aimed at binary segmentation of seismic data. This type of segmentation allowed us to segment an arbitrary number of seismic facies by focusing only on the transitions between them. To apply binary segmentation, we used a simple method of adapting the datasets used in the experiments: black lines mark the intersections between seismic facies, and white marks the rest of the labeled image. For the loss we used a function composed of a linear combination of the cross-entropy and Jaccard loss functions. To optimize the coefficient weighing cross-entropy against Jaccard loss in the loss value, we performed several experiments, finding that when cross-entropy contributes 75% and Jaccard loss 25%, we obtain predictions with high fidelity to the lines separating the seismic facies. We also carried out an extensive experimental evaluation with hyperparameter adjustments and compared the results with the base networks U-Net and StNet on the same datasets. In the end, we obtained a neural network that can be trained in approximately 15 minutes and achieves above 95% on the IoU metric on the StData-12 and Facies-Mark datasets.
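A minimal sketch of the composite loss with the weights reported in the text (75% cross-entropy, 25% Jaccard); the soft-Jaccard formulation below is a common choice and is assumed, not taken from the dissertation:

    import torch

    def dnfs_loss(pred, target, w_ce=0.75, w_jac=0.25, eps=1e-7):
        """pred: sigmoid probabilities; target: binary masks; both (N,1,H,W)."""
        bce = torch.nn.functional.binary_cross_entropy(pred, target)
        inter = (pred * target).sum()
        union = pred.sum() + target.sum() - inter
        jaccard = 1 - (inter + eps) / (union + eps)   # soft Jaccard loss
        return w_ce * bce + w_jac * jaccard

    pred = torch.rand(2, 1, 64, 64) * 0.98 + 0.01     # keep probs inside (0, 1)
    target = (torch.rand(2, 1, 64, 64) > 0.5).float()
    print(dnfs_loss(pred, target).item())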
</description>
<pubDate>Wed, 07 Apr 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/9808</guid>
<dc:date>2021-04-07T00:00:00Z</dc:date>
</item>
<item>
<title>Modelo de classificação automática de sinais fisiológicos para identificação de estresse</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/9807</link>
<description>Modelo de classificação automática de sinais fisiológicos para identificação de estresse
Rodrigues, Clarissa Almeida
Stress has become a relevant illness in today's society due to a number of factors linked to contemporary life. This imbalance impacts both the personal and professional spheres of individuals because it is associated with the development of several pathologies. The state of stress can be identified through different physiological changes, and wearable sensors can be used to measure these signals automatically. Machine learning approaches have been used for the automatic identification of stress patterns based on data generated by wearable sensors monitoring physiological signals. Despite positive results, these initiatives present a gap in the combined use of several physiological signals and in the use of biological markers for data annotation. To explore possibilities for a stress classification model with multiple physiological signals, experiments were developed with different signal combinations (EMG, EDA, and ECG) and different machine learning algorithms, using three datasets (BeWell, WESAD, and Training2017). According to the multi-signal experiments, the best result used ECG and EMG processed with Gaussian Naïve Bayes, obtaining an accuracy of 90%.
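A minimal sketch of the best-performing configuration (Gaussian Naive Bayes over combined ECG and EMG features); the feature vectors here are synthetic stand-ins for the extracted signal features:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 6))        # toy ECG+EMG feature vectors
    y = rng.integers(0, 2, size=300)     # 1 = stress, 0 = baseline (toy)
    print(cross_val_score(GaussianNB(), X, y, cv=5).mean())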
</description>
<pubDate>Fri, 09 Apr 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/9807</guid>
<dc:date>2021-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>PlaceRAN - Uma solução de posicionamento das funções de rádio de acesso móveis virtualizadas de quinta geração</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/9782</link>
<description>PlaceRAN - Uma solução de posicionamento das funções de rádio de acesso móveis virtualizadas de quinta geração
Morais, Fernando Zanferrari
To achieve the digital transformations envisioned and driven by society's demand for services, the advances in fifth-generation mobile networks, specified by the standardization organizations jointly with industry, are largely disruptive compared to previous networks. To meet this scale of demand, the next-generation radio access network (NG-RAN) is guided by two concepts: (i) decoupling and disaggregating the radio functions into up to three units, and (ii) immersion in software concepts, mainly virtualization, resulting in the vNG-RAN architecture. In this context, the placement of radio functions across three units over the transport network and the computational resources is an NP-hard problem, and deciding jointly on RAN disaggregation, routing in the transport network, and strategies for allocating computational resources is an unprecedented research challenge of high interest to industry. Therefore, this dissertation presents the PlaceRAN solution: a placement optimization solution focused on network planning, combined with an orchestrator, to develop the vNG-RAN, i.e., a virtualized RAN. The placement optimization solution has three stages, with the objectives of: (i) maximizing the aggregation of radio functions and minimizing the use of computational resources; (ii) minimizing the number of Disaggregated RAN Combinations (DRCs); and (iii) prioritizing DRCs according to the chosen placement strategy. The orchestrator is aligned with the Network Function Virtualization (NFV) architecture and aims to: (i) allocate virtualized radio functions based on the placement optimization solution and (ii) be aware of the network topology and computational resources. The evaluation was conducted with two real networks and parameter sets for the optimization solution; for the orchestrator, an experiment based on one of these real networks was emulated. The results show that the placement solutions reach up to 80% aggregation of the virtualized radio units (vRU, vDU, and vCU) using under 20% of the computational resources, reducing the number of DRCs guided by the placement strategy. Likewise, the orchestrator fully allocates the demands produced by the optimization solution on a production container orchestration platform, demonstrating the viability of developing the vNG-RAN. Finally, the PlaceRAN solution contributes to the scientific field by advancing the virtualization and disaggregation of the NG-RAN architecture and aligns with the main industry initiatives.
</description>
<pubDate>Tue, 23 Mar 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/9782</guid>
<dc:date>2021-03-23T00:00:00Z</dc:date>
</item>
<item>
<title>ELASTIC5GC- elasticidade proativa no Core 5G para melhorar a utilização de recursos e a capacidade de atendimento</title>
<link>http://repositorio.jesuita.org.br/handle/UNISINOS/9750</link>
<description>ELASTIC5GC- elasticidade proativa no Core 5G para melhorar a utilização de recursos e a capacidade de atendimento
Cunha, Luiz Felipe da Silva
The next generation of mobile telecommunications (5G) is close to being deployed around the world; however, many aspects of its implementation remain open. The main goal of this generation is to serve a future reality driven by IoT, with a prediction of more than 79.4 zettabytes of traffic per year and about 41.6 billion connected devices. Accordingly, the 3GPP launched Release 15, one of whose items is a service-based architecture for the core that, among other characteristics, decouples services so that each service has an exclusive responsibility, making it easy to replicate services to serve more dynamic and diverse scenarios. This work presents a model to increase the device-serving capacity of the core and improve the allocation of its computational resources. To that end, an architecture is proposed to provide proactive horizontal elasticity in the core. Its main components are (i) a load balancer, which balances all communications between the access network and the core related to user equipment across all available mobility management functions, and (ii) an elasticity manager, which allocates and deallocates these functions using predictions of their processing load. This prediction is computed from the trend of historical measurements using time series, specifically an autoregressive integrated moving average (ARIMA) model. To evaluate the model, it was subjected to three load patterns. With this model it was possible to reduce the allocation of computational resources by up to 38.38% and to increase service capacity by up to 33.22%, showing that replicating network functions makes it feasible to increase service capacity and that proactive elasticity drastically decreases computational resource usage.
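A minimal sketch of the proactive trigger (forecast the load of a mobility management function with an ARIMA model and scale out when the prediction crosses a threshold; the history, model order, and threshold are invented):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    load = np.sin(np.linspace(0, 6, 120)) * 30 + 50   # toy CPU-load history (%)
    forecast = ARIMA(load, order=(2, 1, 2)).fit().forecast(steps=5)

    THRESHOLD = 75.0
    if forecast.max() > THRESHOLD:
        print("scale out: add a mobility-management function replica")
    else:
        print("keep the current replica count")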
</description>
<pubDate>Fri, 05 Mar 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repositorio.jesuita.org.br/handle/UNISINOS/9750</guid>
<dc:date>2021-03-05T00:00:00Z</dc:date>
</item>
</channel>
</rss>
