In RL5 we will develop innovative methodologies for knowledge extraction from biosignals and imaging data sensed from the patient, providing cues about the patient's state. The fusion of this patient-specific knowledge with the evidence gathered from the population in RL4 will enable a better quantification of risk and a more accurate diagnosis.
Under this principled GER vision we will guide our research activity towards application-independent methodologies, pursuing the following main scientific objectives:
- Develop methodologies for signal denoising and reconstruction that are able to learn different noise models from different modalities or applications;
- Assess signal and image quality using learning methodologies, allowing the identification of signals/images (or local patches) that are not adequate for further analysis;
- Develop feature extraction and detection methodologies adaptable to different signal/image acquisition modalities and different medical problems;
- Develop learning methodologies for pathology detection in a screening context, and in particular explore how to optimally design decision systems for partially annotated data;
- Translate the decision process based on low-level features from physiological signals/images/voice/video into a high-level self-explanatory model, enabling the identification and representation of relevant clinical information;
- Obtain decisions supported by a small subset of explanatory variables/cues, without resorting to restrictive sparse models, which always impose the same small subset of explanatory variables;
- Evaluate the gain obtained by fusing information from multiple radiological modalities to detect tumorous masses and suspicious regions, over single-modality methods;
- Create methodologies to improve the visualisation and interaction of medical information with direct participation of the patient;
- Develop methodologies to predict the evolution of tumors;
- Design methodologies for morbidity assessment and prevention after surgery.
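To make the first objective concrete, the following is a minimal sketch, not the project's actual method: it assumes additive zero-mean Gaussian noise, learns a per-modality noise level from paired clean/noisy reference recordings, and plugs it into a simple soft-thresholding denoiser. All function names and the two "modalities" are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_noise_sigma(clean, noisy):
    # The "noise model" here is just the std of additive zero-mean noise,
    # estimated from paired clean/noisy recordings of one modality.
    return float(np.std(noisy - clean))

def soft_threshold_denoise(x, sigma, win=5):
    # Smooth baseline via a moving average, then shrink the residual
    # toward zero by 2*sigma (a classic soft-thresholding rule).
    kernel = np.ones(win) / win
    baseline = np.convolve(x, kernel, mode="same")
    resid = x - baseline
    shrunk = np.sign(resid) * np.maximum(np.abs(resid) - 2 * sigma, 0.0)
    return baseline + shrunk

# Two hypothetical modalities differing only in noise level
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
for name, true_sigma in {"ecg-like": 0.05, "emg-like": 0.3}.items():
    noisy_ref = clean + rng.normal(0, true_sigma, clean.shape)
    sigma_hat = learn_noise_sigma(clean, noisy_ref)   # learned per modality
    test_noisy = clean + rng.normal(0, true_sigma, clean.shape)
    denoised = soft_threshold_denoise(test_noisy, sigma_hat)
    err_before = float(np.mean((test_noisy - clean) ** 2))
    err_after = float(np.mean((denoised - clean) ** 2))
    print(f"{name}: MSE {err_before:.4f} -> {err_after:.4f}")
```

The same denoiser adapts to each modality only through the learned noise level; the research objective generalizes this idea to richer, learned noise models.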
The emergent clinical scenarios envisioned in this project will give rise to large amounts of data coming from different sources, including ubiquitous sensing (RL1), clinical data from electronic health records (RL3), and the diverse devices increasingly used for medical imaging and biomedical signal monitoring (RL5). This information will go to the cloud (RL3) – a link that companies are already facilitating – where analytics software will let specialists study the patient's body and determine which treatments are likely to work on it. The analysis of patient-specific risk and the doctors' recommendations will be backed by the analysis of countless other patients' bodies and treatments (RL4) and by the information collected from the patient him- or herself (RL5).
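The expected gain from combining evidence sources can be illustrated with a toy late-fusion sketch on synthetic data; averaging per-modality scores is only one possible fusion rule, and the two modality names in the comments are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
y = rng.integers(0, 2, n)                # synthetic ground truth (e.g. lesion present)

# Two synthetic per-modality scores, each only weakly informative on its own
score_a = y + rng.normal(0, 1.0, n)      # e.g. a mammography-based score
score_b = y + rng.normal(0, 1.0, n)      # e.g. an ultrasound-based score
fused = (score_a + score_b) / 2          # simple late fusion by averaging

def accuracy(score):
    # Threshold at the midpoint between the two class means (0 and 1)
    return float(np.mean((score > 0.5) == y))

acc_a, acc_b, acc_f = accuracy(score_a), accuracy(score_b), accuracy(fused)
print(f"modality A: {acc_a:.3f}, modality B: {acc_b:.3f}, fused: {acc_f:.3f}")
```

Because the two noise sources are independent, the averaged score has lower variance than either input, so the fused decision is more accurate than either single modality; quantifying this gain on real radiological data is precisely the fusion objective above.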
In order to tackle the specific objectives of this research line, the activities are organized into four inter-related work packages (WP):
- WP 1: Bio-signal and image modeling and analysis
- WP 2: Machine learning for computer aided diagnosis
- WP 3: Multimodal integration of patient data
- WP 4: Novel approaches for medical information communication