Current projects

SmartCheck : Study of posture, locomotion and running (2012-)

This interdisciplinary work gathers a team of mathematicians, biomedical engineers, computer scientists, clinicians, and neurophysiologists from two CNRS labs and several hospital divisions, involving the COGNAC G research team (University Paris Descartes & CNRS) and the CMLA (ENS Paris Saclay & CNRS).

Since the breakthrough of self-quantified applications, extracting health-related data from wearable sensors has become a real subject of interest. Many software applications on the market use the sensors embedded in smartphones to compute the number of steps, the travelled distance, the average speed, etc., and to provide all kinds of statistics on weight loss, energy expenditure, and so on. Our first aim is to develop algorithms and technologies to perform such tasks in a medical context, by extracting robust and reliable parameters that can be used for longitudinal follow-up and diagnostic assistance. Our second aim is to design, create and study large databases of physiological signals that can be used, for example, to conduct medical research.
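As an illustration, the step-counting task mentioned above can be sketched with simple peak picking on the acceleration magnitude. This is a toy example on a synthetic signal, not one of the SmartCheck algorithms; all names and parameter values are hypothetical:

```python
import numpy as np

def count_steps(acc_norm, fs, min_step_interval=0.3, threshold=None):
    """Count steps in an accelerometer-norm signal by simple peak picking.

    acc_norm: 1-D array of acceleration magnitudes (m/s^2)
    fs: sampling frequency (Hz)
    min_step_interval: minimum time between two consecutive steps (s)
    threshold: peaks below this value are ignored (default: signal mean)
    """
    if threshold is None:
        threshold = acc_norm.mean()
    min_gap = int(min_step_interval * fs)
    peaks = []
    for i in range(1, len(acc_norm) - 1):
        # local maximum above threshold, far enough from the previous peak
        if (acc_norm[i] > threshold
                and acc_norm[i] >= acc_norm[i - 1]
                and acc_norm[i] > acc_norm[i + 1]):
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return len(peaks)

# Synthetic walk: 2 steps per second for 10 s, oscillating around gravity
fs = 50
t = np.arange(0, 10, 1 / fs)
signal = 9.81 + 2.0 * np.sin(2 * np.pi * 2.0 * t)
print(count_steps(signal, fs))  # → 20
```

Real accelerometer data is far noisier than this synthetic sine, which is precisely why robust, medically reliable parameter extraction is a research question.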

In particular, the SmartCheck project focuses on the study of posture, locomotion and running, and involves several signal processing and machine learning tasks such as segmentation, classification and representation learning.
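One of the segmentation tasks above, change-point detection, can be illustrated with a minimal least-squares split of a signal into two constant-mean segments. This is a toy version for illustration, not one of the methods actually developed in the project:

```python
import numpy as np

def best_split(x):
    """Return the index that best splits x into two constant-mean segments,
    by minimizing the total within-segment sum of squared errors."""
    n = len(x)
    best_i, best_cost = None, np.inf
    for i in range(1, n):
        left, right = x[:i], x[i:]
        cost = (((left - left.mean()) ** 2).sum()
                + ((right - right.mean()) ** 2).sum())
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

# Piecewise-constant signal with a mean shift at index 100
x = np.concatenate([np.zeros(100), np.ones(80)])
print(best_split(x))  # → 100
```

Applied recursively, this single-split criterion becomes binary segmentation, one of the classical baselines for multiple change-point detection.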

Other current projects are linked to tensor-based approaches for the study of states of consciousness during anesthesia and to the study of arm movements for physical rehabilitation.

Keywords : biomedical signal processing, pattern recognition, machine learning, change-point detection, dictionary learning, feature extraction, tensor data processing, graph signal processing...

Detection, estimation and compensation of involuntary camera movements in videos (2015-)

This work is conducted within the L2TI at University Paris 13.

Hand-held video cameras often produce shaky videos with low visual quality. Video stabilization is a technique that aims to reduce or remove the unwanted camera motion to obtain more stable videos. This step plays a crucial role in many areas of computer vision, such as scene structure recovery, augmented reality, action recognition, and object recognition and tracking.
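A rough sketch of the stabilization idea, assuming per-frame translations have already been estimated (e.g. by feature matching): accumulate them into a camera path, smooth that path, and apply the difference as a per-frame correction. This toy example handles only 1-D translations and is not the method developed in this project:

```python
import numpy as np

def stabilize_translations(dx, window=5):
    """Compute per-frame corrections that make the camera path follow
    a moving-average (smoothed) version of itself.

    dx: per-frame inter-frame translation estimates
    """
    traj = np.cumsum(dx)                      # raw camera path
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(traj, pad, mode='edge')   # edge-pad to keep the length
    smooth = np.convolve(padded, kernel, mode='valid')
    return smooth - traj                      # corrective shift per frame

rng = np.random.default_rng(0)
jitter = rng.normal(0, 2, size=200)           # shaky hand-held motion
corrections = stabilize_translations(jitter)
print(corrections.shape)  # → (200,)
```

In practice the motion model is richer (affine or homographic), and distinguishing intentional camera movement from involuntary shake is the hard part.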

Keywords : video processing, video stabilization, quality enhancement

Former projects

Sound processing and reproducibility (Postdoc, ENS Cachan, 2012-2013)

This work aimed at creating a journal publishing reproducible algorithms for sound processing, and was led in collaboration with Jean-Michel Morel at ENS Cachan.

Every publication contained a detailed description and implementation of the algorithm together with a demo program. Every submission was peer-reviewed (including source code and demo program) so as to certify that the implementation matches the description. Several algorithms were submitted and implemented in this framework, focusing on low-level sound processing methods such as interpolation, denoising, and sound representations.
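As an example of the kind of low-level method such a framework could publish, here is a toy spectral-thresholding denoiser. It is a simplified illustration, not one of the journal's actual algorithms, and the threshold value is hypothetical:

```python
import numpy as np

def denoise_fft(x, threshold):
    """Toy spectral denoiser: zero out FFT coefficients whose magnitude
    falls below a threshold, keeping only the dominant components."""
    X = np.fft.rfft(x)
    X[np.abs(X) < threshold] = 0
    return np.fft.irfft(X, n=len(x))

fs = 8000
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 440 * t)               # 440 Hz tone
rng = np.random.default_rng(1)
noisy = clean + 0.3 * rng.normal(size=len(t))
denoised = denoise_fft(noisy, threshold=500)

# The denoised signal is closer to the clean tone than the noisy one
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))  # → True
```

A reproducible-journal submission would pair such code with a full mathematical description and an online demo, so reviewers can certify that the two match.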

Keywords : sound processing, reproducible science, audio denoising, signal representation

Segmentation and classification of accelerometer signals (Postdoc, TELECOM ParisTech, 2010-2012)

This work focused on the processing of accelerometer signals, in association with Pascal Bianchi and Jérémie Jakubowicz at TELECOM ParisTech, and was funded by the ANR project SVELTE.

The monitoring of energy expenditure (EE) can be useful in the prevention and treatment of obesity and in the care of elderly people. Although some reliable methods exist to evaluate the level of physical activity (such as oxygen uptake measurement or doubly labeled water), they are often expensive and intrusive, and therefore not suited for daily use. An alternative approach for the assessment of EE involves unconstrained portable systems such as accelerometers. Our aim was to segment and classify accelerometer data recorded on the waist and shin in order to evaluate the EE associated with each activity as precisely as possible.

We developed dedicated methods for three different tasks. Classification of static activities (standing, sitting, lying down, etc.) was performed in the time domain, while classification of periodic activities (walking, running, biking) was performed in the frequency domain using the Wasserstein distance.
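For two discrete distributions on a common grid, such as normalized spectra, the 1-D Wasserstein distance reduces to the L1 distance between their cumulative distribution functions. A minimal sketch, with two hypothetical toy "spectra":

```python
import numpy as np

def wasserstein_1d(p, q):
    """W1 distance between two discrete distributions on the same grid
    (unit bin spacing): the L1 distance between their CDFs."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

# Two normalized toy spectra whose mass sits one frequency bin apart
walk = np.array([0., 1., 0., 0.])
run  = np.array([0., 0., 1., 0.])
print(wasserstein_1d(walk, run))  # → 1.0
```

Unlike a bin-wise distance, this measure accounts for how far spectral mass has to move, which makes it well suited to comparing activities whose fundamental frequencies shift (e.g. slow versus fast walking).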

Keywords : biomedical signal processing, change-point detection, classification, pattern recognition, machine learning, Wasserstein distances

Template-based chord recognition from audio signal (PhD, TELECOM ParisTech, 2007-2010)

My PhD work focused on automatic chord recognition from audio signals, under the supervision of Yves Grenier and Cédric Févotte at TELECOM ParisTech. You can find the slides of the PhD defense and the PhD thesis in the Publications section.

This thesis falls within the field of music signal processing and focuses in particular on automatic chord transcription from audio signals. Over the past ten years, numerous works have aimed at representing music signals in a compact and relevant way, for example for indexing or music similarity search. Chord transcription constitutes a simple and robust way of extracting harmonic and rhythmic information from songs, and can notably be used by musicians to play back musical pieces.

We propose here two approaches for automatic chord recognition from audio signals, based only on theoretical chord templates, that is to say on the chord definitions. In particular, our systems require neither extensive musical knowledge nor training. Our first approach is deterministic and relies on the joint use of chord templates, measures of fit and post-processing filtering. We first extract from the signal a succession of chroma vectors, which are then compared to chord templates through several measures of fit. The recognition criterion thus defined is then filtered so as to take into account the temporal aspect of the task. The chord detected for each frame is finally the one minimizing the recognition criterion. This method notably entered an international evaluation campaign (MIREX 2009) and obtained good results.
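The template-matching idea can be sketched as follows, using binary major/minor chord templates and Euclidean distance as one possible measure of fit. This is a toy illustration: the thesis studies several measures of fit and adds post-processing filtering, both omitted here:

```python
import numpy as np

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']

def chord_templates():
    """12 major and 12 minor binary chord templates over chroma bins."""
    major = np.zeros(12); major[[0, 4, 7]] = 1   # root, major third, fifth
    minor = np.zeros(12); minor[[0, 3, 7]] = 1   # root, minor third, fifth
    names, templates = [], []
    for root in range(12):
        names.append(NOTE_NAMES[root])
        templates.append(np.roll(major, root))
        names.append(NOTE_NAMES[root] + 'm')
        templates.append(np.roll(minor, root))
    return names, np.array(templates)

def recognize(chroma):
    """Pick the chord whose normalized template is closest (Euclidean)
    to the normalized chroma vector -- one possible measure of fit."""
    names, templates = chord_templates()
    templates = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    chroma = chroma / np.linalg.norm(chroma)
    dists = np.linalg.norm(templates - chroma, axis=1)
    return names[int(np.argmin(dists))]

# A chroma vector with energy on C, E and G should map to C major
chroma = np.zeros(12); chroma[[0, 4, 7]] = 1.0
print(recognize(chroma))  # → C
```

Real chroma vectors spread energy over harmonics and noise, which is why the choice of measure of fit and the temporal filtering matter so much in practice.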

Our second approach is probabilistic and builds on components introduced in our deterministic method. By drawing a parallel between measures of fit and probability models, we define a novel probabilistic framework for chord recognition. The probability of each chord in a song is learned from the song itself through an Expectation-Maximization (EM) algorithm. As a result, a relevant and sparse chord vocabulary is extracted for every song, which in turn leads to better chord transcriptions. This method is compared to numerous state-of-the-art systems on several corpora and with several metrics, allowing a complete and multi-faceted evaluation.

Keywords : music signal processing, chord recognition, music information retrieval, pattern recognition

Image fusion using optimization of statistical measurements (MSc, Imperial College London, 2007)

My MSc work focused on image fusion, under the supervision of Tania Stathaki and Nikolaos Mitianoudis at Imperial College London. You can find the MSc thesis in the Publications section.

The purpose of image fusion is to create a perceptually enhanced image from K multi-focus or multi-sensor images. In the methods described here, the ground-truth image is not known a priori: these are blind fusion methods. Fusion methods mostly fall into two groups depending on the way they are applied: transform-domain methods and spatial-domain methods. The Dispersion Minimisation (DMF) and Kurtosis Maximisation (KMF) techniques discussed here are spatial-domain methods, that is to say that the fusion is performed directly on the image itself by combining the K input images using appropriate weights for the different pixels.

Our aim is therefore to estimate the weights that optimally measure the contribution of each pixel of the source images to the fused one. The key issue is to improve visual perception by summing the weighted source images. To evaluate the weights, we use iterative methods whose cost functions are based on two statistics: the dispersion (for the DMF method) and the kurtosis (for the KMF method). Optimising these cost functions yields a fused image which is expected to be less distorted than the input ones.

We also introduce some improvements to these methods such as optimal learning rates or the use of the notion of neighbourhood.
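The kurtosis-maximisation idea can be caricatured with a single global weight found by grid search over the kurtosis of the fused image. The actual KMF method optimises per-pixel weights iteratively via a cost function; the images below are synthetic and the whole example is only a sketch of the principle:

```python
import numpy as np

def kurtosis(y):
    """Sample kurtosis (non-excess) of a flattened image."""
    y = y - y.mean()
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2

def fuse_two(img_a, img_b, n_grid=101):
    """Toy spatial-domain fusion of two images: search for the single
    global weight w in [0, 1] maximising the kurtosis of w*A + (1-w)*B."""
    ws = np.linspace(0, 1, n_grid)
    kurts = [kurtosis(w * img_a + (1 - w) * img_b) for w in ws]
    w_best = ws[int(np.argmax(kurts))]
    return w_best * img_a + (1 - w_best) * img_b, w_best

rng = np.random.default_rng(2)
img_a = rng.laplace(size=(64, 64))   # heavy-tailed, high kurtosis
img_b = rng.normal(size=(64, 64))    # Gaussian, kurtosis near 3
fused, w = fuse_two(img_a, img_b)
print(fused.shape)  # → (64, 64)
```

With per-pixel weights, a cost-function gradient replaces the grid search, which is where the learning rates and neighbourhood notions mentioned above come into play.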

Keywords : image processing, image fusion