Project description:Debilitating hearing loss (HL) affects ~6% of the human population. Only 20% of the people in need of a hearing assistive device will eventually seek and acquire one, and the number of people who are satisfied with their Hearing Aids (HAids) and continue using them in the long term is even lower. Understanding the personal, behavioral, environmental, or other factors that correlate with optimal HAid fitting and with users' experience of HAids is a significant step toward improving patient satisfaction and quality of life, while reducing the societal and financial burden. In SMART BEAR we are addressing this need by making use of the capacity of modern HAids to provide dynamic logging of their operation and by combining this information with a large amount of information about the medical, environmental, and social context of each HAid user. We are studying hearing rehabilitation through a 12-month continuous monitoring of HL patients, collecting data such as participants' demographics, audiometric and medical data, cognitive and mental status, habits, and preferences, through a set of medical devices and wearables, as well as through face-to-face and remote clinical assessments and fitting/fine-tuning sessions. Descriptive, AI-based analysis and assessment of the relationships between heterogeneous data and HL-related parameters will help clinical researchers better understand the overall health profiles of HL patients and identify patterns or relations that may prove essential for future clinical trials. In addition, the future state and behavior of the patients (e.g., HAid satisfaction and HAid usage) will be predicted with time-dependent machine learning models to assist clinical researchers in deciding on the nature of interventions. Explainable Artificial Intelligence (XAI) techniques will be leveraged to better understand the factors that play a significant role in the success of a hearing rehabilitation program, constructing patient profiles. This is a conceptual paper aiming to describe the upcoming data collection process and the proposed framework for providing a comprehensive profile of patients with HL in the context of the EU-funded SMART BEAR project. Such patient profiles can be invaluable in HL treatment, as they can help identify the characteristics that make patients more prone to dropping out and abandoning their HAids, more likely to use their HAids for sufficiently long periods during the day, and more satisfied with their HAid experience. They can also help decrease the number of remote sessions needed with an audiologist for counseling and/or HAid fine-tuning, as well as the number of manual changes of the HAid program (an indication of poor sound quality and poor adaptation of the HAid configuration to patients' real needs and daily challenges), leading to reduced healthcare costs.
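The abstract does not specify a concrete modeling pipeline, so the following is only a minimal sketch, under illustrative assumptions about the logged variables, of how monthly HAid monitoring data could feed a time-dependent classifier of future usage, with permutation importance standing in for the XAI techniques mentioned above. All column names and the synthetic data are hypothetical, not part of the SMART BEAR design.

```python
# Hypothetical sketch: rolling monthly monitoring features -> classifier predicting
# whether HAid usage stays adequate next month -> permutation importance as a
# simple explainability proxy. Data and feature names are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "mean_daily_usage_h": rng.normal(6, 2, n).clip(0, 16),
    "program_changes_per_day": rng.poisson(2, n),
    "pta_db": rng.normal(55, 12, n),          # pure-tone average (assumed feature)
    "mood_score": rng.normal(0, 1, n),
})
# Synthetic target for illustration: adequate usage with few manual program changes.
y = ((df["mean_daily_usage_h"] > 4) & (df["program_changes_per_day"] < 4)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(df, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Which monitored factors drive the prediction (explainability proxy).
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(df.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```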
Project description:Kidney diseases are among the major health burdens experienced all over the world and are linked to high economic costs, mortality, and morbidity. The importance of collecting large quantities of health-related data in human cohorts, what scholars refer to as "big data", has been increasingly recognized, with the establishment of large cohorts and the use of electronic health records (EHRs) in nephrology and transplantation. These data are valuable and can potentially be utilized by researchers to advance knowledge in the field. Furthermore, progress in big data is stimulating the flourishing of artificial intelligence (AI), which is an excellent tool for handling, and subsequently processing, large amounts of data and may be applied to extract more information on the effectiveness of medicine in kidney-related complications, enabling more precise phenotype and outcome prediction. In this article, we discuss the advances and challenges in big data and the use of EHRs and AI, with particular emphasis on their application in nephrology and transplantation.
Project description:With the continuous development of computer technology, the application of artificial intelligence (AI) and big data is becoming increasingly extensive. With the help of powerful computer and network technology, the art of visual communication (VISCOM) has ushered in a new chapter of digitalization and intelligence. How to achieve interdisciplinary artistic expression between art and technology, and how to use more novel technology, richer forms, and more appropriate ways to express art, have become new problems in visual art creation. This essay aims to investigate and apply VISCOM art through big data and AI methods. It applies the STING algorithm to big data for multi-resolution information clustering in VISCOM art. In addition, a convolutional neural network (CNN) from AI technology was used to identify the conveyed objects or scenes, in order to design art with different characteristics for different scenes and groups of people. STING is a multi-resolution clustering technique for big data, with the advantage of efficient data processing. In the experimental part, this essay selected a variety of design content in VISCOM art, including logo design, text design, scene design, packaging design, and poster design. The STING and CNN algorithms were used to cluster and identify the design elements that the 16 design projects might contain. The results showed that the overall average clustering accuracy was above 82%, the accuracy of scene element recognition was mainly above 80%, and the accuracy of facial recognition was above 80%, showing that applying AI and big data to VISCOM design has a good effect on the clustering and identification of design elements. According to expert scores, the reliability and practicality scores of these applications were above 70 points, with an average of about 80 points. Therefore, applying big data and AI to VISCOM as done in this essay is reliable and feasible.
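To make the grid-based idea behind STING concrete, here is a heavily simplified, illustrative sketch: 2-D descriptors of design elements are counted into hierarchical grid cells, and adjacent dense cells at the finest resolution are merged into clusters. This is not the essay's implementation; the thresholds, resolutions, feature meanings, and synthetic data are all assumptions.

```python
# STING-style sketch: multi-resolution cell statistics, then connected dense
# cells at the finest level become clusters. Illustrative only.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(42)
# Hypothetical 2-D descriptors of design elements (e.g., color warmth vs. complexity).
points = np.vstack([
    rng.normal([0.2, 0.3], 0.05, size=(200, 2)),
    rng.normal([0.7, 0.8], 0.05, size=(200, 2)),
])

def sting_like_clusters(points, levels=(8, 16, 32), density_threshold=3):
    """Count points per cell at several resolutions, then label connected
    dense regions at the finest resolution as clusters."""
    stats = {}
    for res in levels:
        counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                      bins=res, range=[[0, 1], [0, 1]])
        stats[res] = counts                      # coarser levels could prune the search
    finest = stats[max(levels)]
    dense = finest >= density_threshold          # relevant (dense) cells
    labels, n_clusters = ndimage.label(dense)    # merge adjacent dense cells
    return labels, n_clusters

labels, n_clusters = sting_like_clusters(points)
print("clusters found:", n_clusters)
```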
Project description:Technological advances in big data (large amounts of highly varied data from many different sources that may be processed rapidly), data sciences and artificial intelligence can improve health-system functions and promote personalized care and public good. However, these technologies will not replace the fundamental components of the health system, such as ethical leadership and governance, or avoid the need for a robust ethical and regulatory environment. In this paper, we discuss what a robust ethical and regulatory environment might look like for big data analytics in health insurance, and describe examples of safeguards and participatory mechanisms that should be established. First, a clear and effective data governance framework is critical. Legal standards need to be enacted and insurers should be encouraged and given incentives to adopt a human-centred approach in the design and use of big data analytics and artificial intelligence. Second, a clear and accountable process is necessary to explain what information can be used and how it can be used. Third, people whose data may be used should be empowered through their active involvement in determining how their personal data may be managed and governed. Fourth, insurers and governance bodies, including regulators and policy-makers, need to work together to ensure that the big data analytics based on artificial intelligence that are developed are transparent and accurate. Unless an enabling ethical environment is in place, the use of such analytics will likely contribute to the proliferation of unconnected data systems, worsen existing inequalities, and erode trustworthiness and trust.
Project description:Realty management relies on data from previous successful and failed purchase and utilization outcomes. Cumulative data from different stages are used to improve utilization efficacy. The key problem is selecting the data for analyzing value-increment sequences and profitable utilization. This article proposes a knowledge-dependent data processing scheme (KDPS) to support precise data analysis. The scheme operates on two levels: in the first level, data are selected based on previous stagnancy outcomes (see the sketch below); in the second level, further data processing is performed to remedy the shortcomings of the first level. Data processing uses knowledge acquired from the sales process, amenities, and market value. Based on the knowledge derived from successful realty sales and value-increment features, further processing is recommended for new improvements and for mitigating existing stagnancy. Stagnancy and realty values are used as knowledge for training the data processing system, ensuring that profitable features meet amenity requirements with reduced stagnancy time. The proposed scheme improves the processing rate, stagnancy detection, success rate, and training ratio by 8.2%, 10.25%, 10.28%, and 7%, respectively. It reduces the processing time by 8.56% compared to the existing methods.
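The abstract gives no implementation details, so the following is a purely illustrative sketch of the two-level idea: level 1 filters listings using prior stagnancy outcomes, level 2 re-scores the survivors with simple "knowledge" derived from sales, amenities, and market value. Every field, threshold, and weight here is an assumption, not the KDPS method itself.

```python
# Two-level filtering/scoring sketch loosely following the KDPS description.
from dataclasses import dataclass

@dataclass
class Listing:
    price: float
    amenities: int           # count of amenities
    days_on_market: int      # proxy for stagnancy
    prior_sold: bool         # whether a comparable listing previously sold

def level1_select(listings, max_stagnant_days=180):
    """Level 1: drop records whose prior outcomes indicate stagnancy."""
    return [l for l in listings if l.prior_sold or l.days_on_market <= max_stagnant_days]

def level2_score(listing, market_value):
    """Level 2: knowledge-based score from the market-value gap and amenities."""
    value_gap = (market_value - listing.price) / market_value
    return 0.7 * value_gap + 0.3 * min(listing.amenities / 10, 1.0)

listings = [
    Listing(price=240_000, amenities=6, days_on_market=40, prior_sold=True),
    Listing(price=310_000, amenities=3, days_on_market=400, prior_sold=False),
]
selected = level1_select(listings)
ranked = sorted(selected, key=lambda l: level2_score(l, market_value=300_000), reverse=True)
print([round(level2_score(l, 300_000), 3) for l in ranked])
```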
Project description:SARS-CoV-2 is a novel coronavirus responsible for the COVID-19 pandemic declared by the World Health Organization. Thanks to the latest advancements in molecular and computational techniques and in information and communication technologies (ICTs), artificial intelligence (AI) and Big Data can help in handling the huge, unprecedented amount of data derived from public health surveillance, real-time epidemic outbreak monitoring, trend now-casting/forecasting, regular situation briefings and updates from governmental institutions and bodies, and health facility utilization information. The present review aims to provide an overview of the potential applications of AI and Big Data in the global effort to manage the pandemic.
Project description:Artificial intelligence (AI) is expected to support clinical judgement in medicine. We constructed a new predictive model for diabetic kidney disease (DKD) using AI, processing natural language and longitudinal data with big data machine learning, based on the electronic medical records (EMR) of 64,059 diabetes patients. AI extracted raw features from the previous 6 months as the reference period and selected 24 factors to find time-series patterns relating to 6-month DKD aggravation, using a convolutional autoencoder. The predictive model was then constructed with 3,073 features, including time-series data, using logistic regression analysis. AI could predict DKD aggravation with 71% accuracy. Furthermore, the group with DKD aggravation had a significantly higher incidence of hemodialysis than the non-aggravation group over 10 years (N = 2,900). The new AI-based predictive model could detect the progression of DKD and may contribute to more effective and accurate interventions to reduce hemodialysis.
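As a rough illustration of the pipeline outlined above, the sketch below compresses 6-month longitudinal trajectories with a 1-D convolutional autoencoder and feeds the latent features to logistic regression predicting aggravation. The original study's architecture and data are not reproduced here; shapes, variable names, and the synthetic data are assumptions.

```python
# Hypothetical conv-autoencoder + logistic-regression sketch for time-series EMR features.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

n_patients, n_channels, n_months = 512, 24, 6   # 24 longitudinal factors, 6-month window
X = torch.randn(n_patients, n_channels, n_months)           # synthetic trajectories
y = (X[:, 0, :].mean(dim=1) > 0).long().numpy()             # synthetic label for illustration

class ConvAE(nn.Module):
    def __init__(self, channels=24, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * n_months, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 32 * n_months), nn.ReLU(),
            nn.Unflatten(1, (32, n_months)),
            nn.Conv1d(32, channels, kernel_size=3, padding=1),
        )
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = ConvAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                                          # short reconstruction training
    recon, _ = model(X)
    loss = nn.functional.mse_loss(recon, X)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    _, Z = model(X)                                          # latent time-series features
clf = LogisticRegression(max_iter=1000).fit(Z.numpy(), y)    # downstream aggravation classifier
print("training accuracy:", clf.score(Z.numpy(), y))
```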
Project description:Prevention and treatment of hypertension (HTN) remain a challenging public health problem. Recent evidence suggests that artificial intelligence (AI) has the potential to be a promising tool for reducing the global burden of HTN and for furthering precision medicine related to cardiovascular (CV) diseases, including HTN. Since AI can simulate human thought processes and learning with complex algorithms and advanced computational power, it can be applied to multimodal and big data, including genetics, epigenetics, proteomics, metabolomics, CV imaging, and socioeconomic, behavioral, and environmental factors. AI demonstrates the ability to identify risk factors and phenotypes of HTN, predict the risk of incident HTN, diagnose HTN, estimate blood pressure (BP), develop novel cuffless methods for BP measurement, and comprehensively identify factors associated with treatment adherence and success. Moreover, AI has also been used to analyze data from major randomized controlled trials exploring different BP targets to uncover previously undescribed factors associated with CV outcomes. Therefore, AI-integrated HTN care has the potential to transform clinical practice by incorporating personalized prevention and treatment approaches, such as determining optimal and patient-specific BP goals, identifying the most effective antihypertensive medication regimen for an individual, and developing interventions targeting modifiable risk factors. Although the role of AI in HTN has been increasingly recognized over the past decade, it remains in its infancy, and future studies with big data analysis and N-of-1 study designs are needed to further demonstrate its applicability in HTN prevention and treatment.