Research Topics

Summary


In order to analyze and control systems and to estimate quantities that cannot be observed directly, a model that mathematically describes the properties of the target system is required. However, building physical models of complex real-world phenomena is often difficult. In our laboratory, we therefore analyze data measured and collected with various sensors, or obtained from a small number of experiments, and develop technology for constructing statistical models that describe the phenomena, as well as data clustering technology that facilitates their interpretation. Using these models, we tackle a variety of real-world problems.
 
Specifically, with small data analysis as a key concept, we conduct research on human systems, such as medical AI and medical devices, and on process systems, such as the control and optimization of production processes. Our laboratory carries out not only data analysis and algorithm design but also data collection, animal experiments, human subject experiments, software development, and regulatory affairs in a coherent manner.
 
Here are some of the themes. Please contact us for further details or regarding other themes.

 

Human Systems


Epilepsy is a disease, or symptom, in which seizures such as convulsions or loss of consciousness are caused by abnormal neural activity in the network of brain cells. In the spring of 2012, a runaway car caused a heart-wrenching accident with many casualties in the Gion district of Kyoto. It was reported that an epileptic seizure of the driver caused this accident, which led to a revision of the law in 2013 placing restrictions on the acquisition of a driver's license by patients with epilepsy. Since license acquisition by patients with epilepsy had been conditionally permitted under the revised Road Traffic Act of 2002, legally restricting the behavior of epilepsy patients in this way runs counter to that earlier trend.

Accidents associated with epileptic seizures can lead to serious injury or death; in addition to traffic accidents, many deaths by drowning in the bath and burns from cooking stoves caused by seizures have been reported. However, if patients could detect signs of a seizure even a few seconds before it occurs, they could move to safety beforehand, which would improve their quality of life (QoL).

It is known that heart rate variability (HRV), which reflects autonomic nervous function, changes before the onset of epileptic seizures. We showed, for the first time in the world, that epileptic seizures can be predicted by continuously monitoring heart rate data with a wearable sensor and analyzing the data in real time with a smartphone app.
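The specific features and prediction algorithm are not described here; purely as an illustration of the kind of HRV feature extraction such a system relies on, the sketch below computes common time- and frequency-domain HRV features from a series of R-R intervals. The function name, resampling rate, and frequency bands are illustrative choices, not the actual implementation.

```python
import numpy as np
from scipy.signal import welch

def hrv_features(rr_ms, fs_resample=4.0):
    """Compute basic HRV features from R-R intervals in milliseconds.

    Illustrative sketch only; assumes several minutes of beats."""
    rr = np.asarray(rr_ms, dtype=float)

    # Time-domain features
    mean_rr = rr.mean()
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # beat-to-beat variability

    # Frequency-domain features: resample the RR series onto a uniform grid
    t = np.cumsum(rr) / 1000.0                   # beat times [s]
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_uniform = np.interp(t_uniform, t, rr)
    f, pxx = welch(rr_uniform - rr_uniform.mean(), fs=fs_resample,
                   nperseg=min(256, len(rr_uniform)))

    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return {"meanRR": mean_rr, "RMSSD": rmssd,
            "LF/HF": lf / hf if hf > 0 else np.nan}
```

In a real-time system, features of this kind would typically be computed over a sliding window of recent beats and passed to a prediction model running on the smartphone.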

In this research, we are developing a seizure prediction system in cooperation with more than 20 epilepsy specialists at 13 hospitals and facilities nationwide. Among these facilities, Nagoya University is in charge of developing the algorithms and apps that analyze data from wearable heart rate sensors and predict seizures.

In fiscal 2017, the program was selected for the Elemental Technology Development Course of the AMED Advanced Measurement Program, and development is under way with the aim of initiating clinical trials in 2020.

It is known that localized cooling of the brain can suppress abnormal brain activity caused by epilepsy and stroke. Cooling is thought to constrict cerebral blood vessels locally and reduce cerebral blood flow, so that nerve cells in the region are deprived of their supply and their firing is suppressed.

Therefore, in this study, we are developing an intracranially implanted device that uses a Peltier element to cool the brain locally as a new treatment for intractable epilepsy. Our laboratory is in charge of device design and analysis by means of computational fluid dynamics (CFD) simulation, as well as the design of the control system.
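As a rough illustration of the control-design side only (not the actual device, its model, or its parameters), the sketch below simulates a lumped-parameter thermal model of a cooled brain region regulated by a PID controller driving the Peltier element; every constant is a hypothetical placeholder.

```python
import numpy as np

# Toy lumped-parameter model of focal brain cooling with PID control.
# All physical parameters are hypothetical placeholders, not device values.
C = 50.0        # thermal capacitance of the cooled region [J/K]
G = 1.0         # heat conduction back from surrounding tissue [W/K]
T_body = 37.0   # core temperature [degC]
T_set = 25.0    # target temperature of the cooled region [degC]

Kp, Ki, Kd = 8.0, 0.5, 1.0     # PID gains hand-tuned for this toy model
dt, T = 0.1, 37.0              # time step [s], current regional temperature
integral, prev_err = 0.0, 0.0

for step in range(6000):       # simulate 10 minutes
    err = T - T_set                           # positive when region is too warm
    integral += err * dt
    derivative = (err - prev_err) / dt
    # Peltier heat removal, saturated to a plausible actuator range [W]
    q_cool = np.clip(Kp * err + Ki * integral + Kd * derivative, 0.0, 20.0)
    # Energy balance: conduction from the body minus Peltier cooling
    T += dt * (G * (T_body - T) - q_cool) / C
    prev_err = err

print(f"temperature after 10 min: {T:.2f} degC")
```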

Although approximately one million epilepsy cases have been diagnosed in Japan, many patients are treated by non-specialists because epilepsy specialists are few and unevenly distributed. Epilepsy care requires appropriate drug selection based on the diagnosis of the syndrome and seizure type, and referral to an epilepsy surgeon at the appropriate time when surgery is indicated. Improving the patient's QoL also requires drug choices that take into account the patient's life plans and living conditions, such as physical complications, working conditions, and preferences regarding raising children. However, it is difficult for non-specialists to keep up with epilepsy treatment, which is constantly being updated, so it has not always been possible to provide high-quality treatment to epilepsy patients. Because training epilepsy specialists takes time, there is an urgent need to improve the quality of epilepsy treatment provided by non-specialists.
In this research, which aims to improve the quality of epilepsy care provided by physicians other than epilepsy specialists and to improve the QoL of epilepsy patients, we are developing devices, collecting clinical data, and designing algorithms in order to build an AI-based foundation for supporting the medical treatment of epilepsy.

In 2018, this research was selected for JST PRESTO's "Creating Information Infrastructure Technology for New Social System Design" program.

Dementia with Lewy bodies (DLB) is one of the three major types of dementia, the second most common after Alzheimer's disease and accounting for 20% of all dementias. However, early-stage DLB does not frequently display symptoms common to other dementias, such as forgetfulness, and is therefore less likely to be recognized as dementia.

The importance of preventing and detecting dementia early is drawing attention: it has been reported that the number of patients would decrease by nearly 50% if onset were delayed by five years, and that the related medical expenses would fall to approximately 1/20. Appropriate care and drug treatment based on early detection are expected to maintain the quality of life of dementia patients and prolong their healthy life span. The Strategy to Accelerate Dementia Measures (commonly known as the New Orange Plan), formulated by the Ministry of Health, Labour and Welfare in 2015, emphasizes the importance of early diagnosis of DLB for the purpose of early intervention, and the development of an objective early screening system for DLB has been in demand.
In DLB, Lewy bodies accumulate in nerve cells and neurodegeneration progresses long before clinical onset; as a result, prodromal symptoms such as autonomic disorder, olfactory disorder, and REM sleep behavior disorder (RBD) are known to appear before the onset of DLB. Accordingly, if the autonomic nervous function of the elderly could be continuously monitored through HRV analysis, it may be possible to diagnose DLB early.

In this study, we are developing a system for the early diagnosis of DLB by extending the wearable sensors and HRV analysis algorithms that have so far been developed for the seizure prediction system.

Medical artificial intelligence (AI) is expected to cut medical expenses, reduce medical errors, and enable the early detection of diseases, and the application of AI in the medical field has long been called for; nevertheless, the incorporation of AI into medical practice has not progressed as expected.

There are various reasons, one of which is the problem of data collection. Collecting data efficiently requires ethics review, collaboration among multiple institutions, and measures for protecting personal information; yet even when these hurdles are cleared and the data can be accessed, analysis is difficult at many clinical sites because the data are not stored with subsequent analysis in mind. Engineers, for their part, often lack a sufficient understanding of medical practice; for example, a problem in the ethics review procedure can hold up an entire project.

We are fostering AI personnel among both medical professionals and engineers as part of the Ministry of Health, Labour and Welfare's Grants-in-Aid for Scientific Research project "Research on technological innovation of health care artificial intelligence and improvement of international competitiveness."

A large amount of medical data, such as data from medical checkups, routine medical examinations, and patient registries, accumulates daily. In particular, data from physical checkups, in which many examination items are carried out simultaneously and routinely, are extremely attractive because they have the potential to reveal causes of diseases that have so far been overlooked.

We use an analysis method called causal reasoning to identify causes of disease from these data, rather than mere correlations, while also taking physiological and pathological viewpoints into account.
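The specific methods and tools used in this research are not stated here; as one hedged illustration of causal discovery on checkup-style data, the sketch below applies DirectLiNGAM from the open-source lingam package to a small synthetic table. The variables, causal links, and choice of package are assumptions made only for this example.

```python
import numpy as np
import pandas as pd
import lingam  # pip install lingam; one of several causal-discovery toolkits

# Hypothetical checkup table: rows = subjects, columns = examination items.
rng = np.random.default_rng(0)
n = 500
bmi = rng.normal(23, 3, n)
sbp = 2.0 * bmi + rng.normal(0, 5, n)          # synthetic causal link BMI -> SBP
hba1c = 0.05 * bmi + rng.normal(5.5, 0.3, n)   # synthetic causal link BMI -> HbA1c
X = pd.DataFrame({"BMI": bmi, "SBP": sbp, "HbA1c": hba1c})

model = lingam.DirectLiNGAM()
model.fit(X)
# adjacency_matrix_[i, j] is the estimated effect of variable j on variable i
print(pd.DataFrame(model.adjacency_matrix_, index=X.columns, columns=X.columns))
```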

It is said that about 30% of all driver's license holders have experienced drowsy driving, and 17.6% of driver fatality accidents are caused by careless driving, including drowsy driving (National Police Agency traffic accident statistics, 2013), so accidents caused by drowsy driving are a societal problem. In this research, we developed technology for monitoring the driver's mental and physical state with a wearable sensor in order to detect sleepiness and issue a warning in real time.

Specifically, a state of lowered arousal is detected by treating the heartbeat pattern when the driver is fully awake as the normal pattern and monitoring deviations from this normal pattern with a machine learning algorithm. In an experiment with subjects using a driving simulator, we were able to detect lowered arousal 30 seconds before the occurrence of a collision caused by drowsy driving. This result was validated against sleep as determined by EEG.
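The machine learning algorithm itself is not specified above; purely as an illustration of the idea of learning the awake pattern and flagging deviations from it, the sketch below trains a one-class SVM on hypothetical HRV feature vectors. The feature values, model choice, and thresholds are invented for the example and are not the algorithm used in the study.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical HRV feature vectors (e.g., meanRR, RMSSD, LF/HF) while fully awake
awake_features = rng.normal(loc=[850.0, 35.0, 2.0],
                            scale=[40.0, 5.0, 0.3], size=(300, 3))

# Learn the driver's "awake" feature distribution as the normal pattern
scaler = StandardScaler().fit(awake_features)
detector = OneClassSVM(nu=0.05, gamma="scale").fit(scaler.transform(awake_features))

# New feature vector computed from the latest heartbeat window
new_window = np.array([[900.0, 55.0, 1.1]])    # hypothetical drowsy-like pattern
is_drowsy = detector.predict(scaler.transform(new_window))[0] == -1
print("warning: possible low-arousal state" if is_drowsy else "normal")
```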

In 2017, this research was put into practical use as a service using NTT DATA MSE and NTT DOCOMO's wearable sensor "hitoe".

Sleep apnea syndrome (SAS) is a disease in which breathing repeatedly stops or becomes shallow during sleep. It is a serious condition that causes not only daytime sleepiness and impaired performance but also various lifestyle-related diseases. It is said that there are potentially more than 2 million patients in Japan, but because it is difficult for people to recognize that they have SAS, only approximately 120,000 patients are seen at hospitals. The conventional SAS diagnostic method, overnight polysomnography (PSG), can be performed only at a limited number of facilities, so an appropriate combination of PSG and a simple monitor is desirable. However, proper use of a simple monitor requires the guidance and supervision of a doctor, which makes it difficult to apply to potential SAS patients who do not realize that they have SAS.

In this study, we developed an SAS screening method based on HRV monitoring. By measuring the heartbeat during sleep with a wearable sensor, a person can easily screen for the possibility of SAS at home. The method developed in this study could be of great social significance, as it provides an opportunity to screen potential patients for SAS, which was not possible with conventional methods.

Stroke is a general term for cerebrovascular disorders such as cerebral infarction, cerebral hemorrhage, and subarachnoid hemorrhage, in which the success or failure of treatment depends on time. Starting treatment as early as possible not only improves survival rates but also alleviates aftereffects. In cerebral infarction, a thrombus can be dissolved with a drug by thrombolytic (t-PA) therapy if treatment begins in the acute phase, within 4.5 hours of onset, so it is important to recognize the onset as quickly as possible. However, the main symptoms of stroke are disturbance of consciousness, paralysis, and the like, and onset frequently occurs at night, so it is difficult even for patients themselves to recognize the onset and call for emergency services. In fact, the indication rate for t-PA therapy is only around 10%.

Therefore, in this study, we have been developing a system that detects stroke in the acute phase by utilizing heart rate variability (HRV) analysis. Because it is difficult to acquire large amounts of clinical data in the acute phase of stroke, we have so far conducted a physiological study of the relationship between HRV and the location and volume of the infarct using the rat middle cerebral artery occlusion (MCAO) model, and we have shown through HRV analysis that an infarct can be detected early.

In the field of psychology, it is well known that a subject's experience changes when a heart rate different from the subject's actual heart rate is presented. This phenomenon is called pseudo (false) heart rate (fHR) feedback.

Our aim is to construct new game experiences, develop content using fHR feedback, and investigate the mechanism of fHR feedback.

Process Systems


In order to control a system appropriately or to detect anomalies promptly, it is necessary to measure important variables in real time.

In production processes, it is desirable to measure important variables related to product quality and safety online in order to improve production efficiency and prevent the release of defective products. In chemical processes, for example, temperature and flow rate can be measured online, but product composition is analyzed by chromatography, so composition measurement involves a long lead time and high cost; as a result, many processes are forced to operate conservatively. If such variables could be measured online, more efficient operation would be possible.

Thus, it would be of great benefit if variables that cannot be measured online with hardware sensors, or whose online measurement is costly, could instead be measured online by means of software. In other words, we can construct a mathematical model that estimates hard-to-measure variables from variables that are easy to measure online. Such mathematical models are called soft sensors and are used in various settings in industry.
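As a minimal sketch of the soft sensor idea, the example below fits a partial least squares (PLS) regression model, a common choice in this field, to synthetic data; the data and variable roles are invented for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Minimal soft-sensor sketch: estimate a hard-to-measure quality variable y
# from easily measured process variables X. Data here are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                # e.g., temperatures, flow rates, pressures
y = 1.5 * X[:, 0] - 0.8 * X[:, 3] + rng.normal(scale=0.1, size=500)  # e.g., composition

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
soft_sensor = PLSRegression(n_components=3).fit(X_tr, y_tr)
print("R^2 on held-out data:", soft_sensor.score(X_te, y_te))
```

Once such a model is built, the estimated value can be used online in place of a slow or costly laboratory analysis.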

However, the characteristics of plants and equipment change over time due to aging and maintenance, which degrades the performance of a soft sensor in operation. We have developed a soft sensor design method that follows such process changes, named Correlated Just-In-Time (CoJIT) modeling.
CoJIT modeling was validated using data from an actual chemical process, and it was confirmed to achieve higher prediction performance than conventional methods.
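The CoJIT sample-selection criterion itself is not reproduced here; for background, the sketch below shows plain Just-In-Time modeling, in which a local model is fitted to the stored samples nearest to each query. The process database and model choice are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def jit_predict(X_db, y_db, x_query, k=50):
    """Plain Just-In-Time modeling: fit a local model on the k samples nearest
    to the query. CoJIT additionally selects samples based on the correlation
    among variables, which is not reproduced in this sketch."""
    d = np.linalg.norm(X_db - x_query, axis=1)
    idx = np.argsort(d)[:k]                           # nearest neighbours in the database
    local_model = LinearRegression().fit(X_db[idx], y_db[idx])
    return local_model.predict(x_query.reshape(1, -1))[0]

# Hypothetical database of past process data
rng = np.random.default_rng(0)
X_db = rng.normal(size=(2000, 5))
y_db = np.sin(X_db[:, 0]) + 0.5 * X_db[:, 1] + rng.normal(scale=0.05, size=2000)
print(jit_predict(X_db, y_db, X_db[0]))
```

Because a fresh local model is built for every query, this family of methods can follow gradual changes in the process as long as the database is kept up to date.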

 
Mass-produced machines differ slightly in their characteristics even when their catalog specifications are identical. This makes it difficult to design mathematical models and control systems that apply to every device. In semiconductor processes, for example, identical manufacturing devices are operated in parallel, yet differences between the devices remain, so a single model or set of control parameters cannot always be applied to all of them; on the other hand, building a model and tuning parameters for each device individually requires considerable time and effort.

Therefore, clustering device characteristics based on the data measured from each device and constructing one model per cluster greatly reduces the modeling effort compared with constructing a model for every device.

In this research, we focused on the fact that device characteristics appear as differences in the correlations among the variables measured on each device, and developed a method that clusters data based on these correlation differences, which we call NC spectral clustering (NCSC). This makes it possible to introduce statistical models at low cost even when there are many devices. Furthermore, we have developed a soft sensor design method and an anomaly detection system based on NCSC, and confirmed their performance through chemical process simulations.
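NCSC itself is not reproduced here; as a rough sketch of the underlying idea, the example below clusters hypothetical devices by the similarity of the correlation structure among their measured variables, using standard spectral clustering with a precomputed affinity. All data and parameters are invented for illustration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Illustrative sketch (not NCSC itself): cluster devices by the similarity of
# the correlation structure among their measured variables.
rng = np.random.default_rng(0)

def simulate_device(slope):
    """Hypothetical device data: three variables, two of which are correlated
    with a sign determined by `slope`."""
    x = rng.normal(size=(200, 3))
    x[:, 1] = slope * x[:, 0] + 0.2 * rng.normal(size=200)
    return x

devices = [simulate_device(1.0) for _ in range(5)] + [simulate_device(-1.0) for _ in range(5)]
corr_vecs = np.array([np.corrcoef(d, rowvar=False).ravel() for d in devices])

# Affinity = Gaussian kernel on distances between correlation matrices
dist = np.linalg.norm(corr_vecs[:, None, :] - corr_vecs[None, :, :], axis=2)
affinity = np.exp(-dist ** 2)

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels)   # devices with similar correlation structure share a label
```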

Generally, when constructing soft sensors, the fit to the model-construction samples improves as the number of input variables increases; however, if variables that are not physically related to the output variable are used as inputs, the prediction performance for unknown samples deteriorates. Input variables must therefore be selected appropriately. Because input variable selection often relies on trial and error, it is a burdensome task for practitioners, and a systematic input variable selection method has been in demand to improve both the prediction accuracy of soft sensors and the efficiency of their design.

In this research, we used NC spectral clustering (NCSC) to classify variables into groups according to the correlations among them, and developed a variable selection method that decides whether each variable group should be adopted as input variables. We call this NCSC-type variable selection (NCSC-VS).
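To illustrate the idea of selecting variable groups rather than individual variables, the sketch below groups inputs by correlation (ordinary hierarchical clustering stands in for NCSC) and then decides, group by group, whether inclusion improves cross-validated accuracy. It is a simplified stand-in on synthetic data, not the actual NCSC-VS algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Group-wise variable selection sketch (hierarchical clustering stands in for NCSC).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=300)      # correlated pair -> one group
y = 2.0 * X[:, 0] - X[:, 4] + rng.normal(scale=0.1, size=300)

# 1. Group variables by the correlation among them
corr_dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
condensed = corr_dist[np.triu_indices(8, 1)]        # condensed distance form
groups = fcluster(linkage(condensed, method="average"), t=0.5, criterion="distance")

# 2. Greedily decide, per group, whether including it improves CV accuracy
selected, best = [], -np.inf
for g in np.unique(groups):
    cols = selected + list(np.where(groups == g)[0])
    model = PLSRegression(n_components=min(2, len(cols)))
    score = cross_val_score(model, X[:, cols], y, cv=5).mean()
    if score > best:
        best, selected = score, cols
print("selected variable indices:", selected)
```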

NCSC-VS has been demonstrated, through soft sensor design for chemical processes and calibration curve design for pharmaceutical processes, to construct soft sensors with higher estimation accuracy than conventional variable selection methods.

 
In industry, mass production is shifting toward high-mix, low-volume production of high-performance, high-added-value fine chemicals.

Batch processes, which are well suited to such high-mix, low-volume production, have therefore been attracting attention. A batch process is operated according to preset operation profiles, so optimizing these profiles to improve quality and yield is an indispensable technology for achieving high productivity.

In this research, we developed a technology that models and optimizes batch process operation profiles by means of wavelet analysis, in order to improve quality and yield.
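As a hedged sketch of the idea of optimizing an operation profile in wavelet-coefficient space (the profile, wavelet, and objective function below are all hypothetical, not the formulation actually developed), the example parameterizes a temperature profile with PyWavelets and tunes its coarse coefficients with a general-purpose optimizer.

```python
import numpy as np
import pywt
from scipy.optimize import minimize

# Sketch: optimize a batch operation profile in wavelet-coefficient space.
# The "quality" objective below is purely hypothetical.
t = np.linspace(0.0, 1.0, 128)
base_profile = 60.0 + 20.0 * t                      # nominal temperature profile [degC]
coeffs = pywt.wavedec(base_profile, "db4", level=3)
approx0 = coeffs[0].copy()                          # coarse coefficients to be optimized

def quality_loss(approx):
    """Hypothetical objective: reach a target end temperature while penalizing
    aggressive profiles (a proxy for energy use or product degradation)."""
    profile = pywt.waverec([approx] + coeffs[1:], "db4")[: len(t)]
    return (profile[-1] - 85.0) ** 2 + 0.01 * np.sum(np.diff(profile) ** 2)

res = minimize(quality_loss, approx0, method="Nelder-Mead")
optimized_profile = pywt.waverec([res.x] + coeffs[1:], "db4")[: len(t)]
print("end temperature:", optimized_profile[-1])
```

Working with the coarse wavelet coefficients keeps the number of decision variables small while preserving the overall shape of the profile, which is the appeal of wavelet-based profile parameterization.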

CONTACT




Please use the form below to inquire about our research, to request a laboratory visit, or to consult about admissions.