Sensing

Pervasive sensing is the primary focus of our research group. We sense both humans and the environment, using different modalities, such as locomotion, acoustic, RF, and medical sensing, under different infrastructures, including active and passive sensing, wearables, and fixed-infrastructure deployments. The objective is to develop ubiquitous applications that can be delivered through low-cost sensing infrastructure. On the algorithmic side, we use techniques such as signal processing, machine learning, and deep learning to meet our goals. Some of our recent exciting projects are as follows.

Smart Transportation: Sensing the Road, the Vehicle, and the Driver

Road travel in developing countries, particularly in the Indian subcontunent, is highly unpredictable because of multiple socio-economic factors: roads are bumpy in many places, infrastructure is often poor, and streets are congested with heavy traffic. We develop pervasive sensing modalities to sense the road, the transport infrastructure, and the driver. One critical issue is understanding the various points of interest (PoIs) on the road that affect travel. We use smartphone-based crowdsensing, leveraging the sensors embedded in today's smartphones, such as the IMU and GPS, to capture these PoIs and tag them on the map. Another crucial issue is monitoring a driver's driving behavior, how the driver interacts with various landmarks on the road, like speed breakers, potholes, and turns, and how that behavior shapes the maneuvers the driver takes. Collectively, we target the on-road lifestyle of citizens to develop assistive technologies that support them during their daily commute.
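To make the crowdsensing idea concrete, the sketch below shows one minimal way such PoI detection could work: flag windows of vertical smartphone acceleration whose variance is anomalously high and tag them with the nearest GPS fix. This is an illustrative baseline only, not our deployed pipeline; the window length, threshold, and the `detect_road_anomalies` function are assumptions for the example.

```python
import numpy as np

def detect_road_anomalies(accel_z, gps, fs=50, win_s=2.0, z_thresh=3.0):
    """Flag high-variance windows of vertical acceleration as candidate PoIs.

    accel_z : 1-D NumPy array of vertical acceleration (m/s^2), gravity removed
    gps     : list of (lat, lon) fixes, one per accelerometer sample
    fs      : sampling rate in Hz
    """
    win = int(win_s * fs)
    n_win = len(accel_z) // win
    variances = np.array([accel_z[i * win:(i + 1) * win].var() for i in range(n_win)])

    # Robust z-score of per-window variance; large values suggest a bump or pothole.
    med = np.median(variances)
    mad = np.median(np.abs(variances - med)) + 1e-9
    scores = (variances - med) / (1.4826 * mad)

    pois = []
    for i, s in enumerate(scores):
        if s > z_thresh:
            centre = i * win + win // 2          # tag the window centre with its GPS fix
            pois.append({"lat": gps[centre][0], "lon": gps[centre][1], "score": float(s)})
    return pois
```

In practice, such per-phone detections would be aggregated across many commuters (the crowdsensing step) before a PoI is placed on the map.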

Human Sensing: Activity Recognition and Annotation

Human activity recognition (HAR) is one of the essential building blocks behind various pervasive applications. We are developing lightweight and cost-effective approaches for HAR, ranging from macro-activities, like walking, running, writing on a board, and meeting-group detection, to micro and fine-grained activities, like cooking in a smart-home scenario. We use various modalities, like IMU, acoustic, and RF, to infer the activities. Another prime focus is activity annotation. Typical HAR models work in a supervised setting and therefore need a massive amount of labeled data for training. The question is, how do we label or annotate this data? We are developing a robust, automated approach that uses auxiliary sensing modalities, like acoustics, to label the IMU data for activity recognition. This is a challenging problem because the method has no prior knowledge and therefore must work in an unsupervised way. We also work on understanding the granularity and informativeness of these labels and how the generated labels can contribute to developing large-scale activity recognition models.
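The dependence of supervised HAR on labeled data is easiest to see in a standard pipeline: segment the IMU stream into fixed windows, extract simple statistics per window, and train a classifier on labeled windows. The sketch below is a generic baseline of that kind, assuming synthetic data and simple per-axis features; it is not our method, and the feature set and classifier choice are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(windows):
    """windows: array of shape (n_windows, win_len, 3) with tri-axial
    accelerometer segments. Returns simple per-axis statistics."""
    feats = [windows.mean(axis=1), windows.std(axis=1),
             windows.min(axis=1), windows.max(axis=1)]
    return np.concatenate(feats, axis=1)

# Hypothetical labeled data: 200 two-second windows at 50 Hz, 3 activity classes.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 100, 3))
y = rng.integers(0, 3, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(extract_features(X_raw), y)
print(clf.predict(extract_features(X_raw[:5])))
```

Every row of `y` here is a manually supplied label; our annotation work targets exactly this bottleneck by generating such labels automatically from auxiliary modalities.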

Environment Sensing: Making the World a Smart Place to Live

Of late, we have started working on sensing the environment. Six of the ten most polluted cities in the world are in India. We aim to develop a low-cost, portable pollution monitoring device that helps individuals sense the presence of different pollutants in outdoor and indoor environments. We also work on designing an efficient mechanism for deciding where to place pollution monitoring devices in an indoor setup, on understanding the impact of pollutants on the cognitive and behavioral aspects of humans, and on understanding the impact of several indoor activities on the pollution level of a room.
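One common way to frame the placement question is as a coverage-maximization problem: given candidate locations and the zones each would monitor, pick a small set of locations that together cover as much as possible. The greedy sketch below illustrates that framing under the assumption that per-location coverage sets are already known (e.g., from a simple room-adjacency or dispersion model); it is an illustration, not our actual placement mechanism.

```python
def greedy_placement(candidate_coverage, budget):
    """Pick up to `budget` candidate locations that together cover the most zones.

    candidate_coverage: dict mapping a candidate location id to the set of
    zone ids a monitor placed there would cover.
    """
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(candidate_coverage,
                   key=lambda c: len(candidate_coverage[c] - covered),
                   default=None)
        if best is None or not (candidate_coverage[best] - covered):
            break  # no candidate adds new coverage
        chosen.append(best)
        covered |= candidate_coverage[best]
    return chosen, covered

# Toy example with three candidate spots in a small flat.
rooms = {"door": {"hall", "kitchen"}, "window": {"bedroom"},
         "corner": {"kitchen", "bedroom", "study"}}
print(greedy_placement(rooms, budget=2))
```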

Affective Sensing: Understanding Human Behavior through Passive Sensing

There have been recent developments in designing applications based on behavioral HCI, for example, emotion-aware music players and facial expression-based device control. Recently, we have started exploring passive sensing technologies, such as mmWave sensing and acoustic sensing, to understand a user's facial expressions, emotions, and behavioral traits. Typically, different facial expressions result from a set of Action Units (AUs), i.e., facial muscle movements. Such facial muscle movements are the building blocks of expressions and can be categorized into ocular (around the eye region), nasal (around the nose region), and oral (around the mouth area) groups. For example, a particular expression, say "Happiness," is a combination of facial muscle movements, primarily around the oral region and subtly around the ocular and nasal areas. We aim to use various passive sensing technologies, such as acoustic sensing, mmWave radar-based sensing, and smartphone-embedded LiDAR, to sense the user's facial expression.
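The AU-to-region grouping can be captured in a small lookup structure, and a coarse rule base can then map detected AU combinations to expressions. The sketch below uses a small, illustrative subset of FACS Action Units and simplified expression rules (e.g., happiness roughly as AU6 plus AU12); both the subset and the rules are assumptions for the example, not an exhaustive or authoritative FACS mapping.

```python
# Region grouping for a few FACS Action Units (illustrative subset).
AU_REGION = {
    "AU1": "ocular",   # inner brow raiser
    "AU2": "ocular",   # outer brow raiser
    "AU6": "ocular",   # cheek raiser
    "AU9": "nasal",    # nose wrinkler
    "AU12": "oral",    # lip corner puller
    "AU15": "oral",    # lip corner depressor
    "AU25": "oral",    # lips part
}

# Coarse expression lexicon: which AU combinations suggest which expression.
EXPRESSION_RULES = {
    "happiness": {"AU6", "AU12"},
    "sadness": {"AU1", "AU15"},
    "disgust": {"AU9", "AU15"},
}

def infer_expression(detected_aus):
    """Return the expression whose rule set is best covered by the detected AUs."""
    detected = set(detected_aus)
    best, best_overlap = None, 0.0
    for expr, rule in EXPRESSION_RULES.items():
        overlap = len(rule & detected) / len(rule)
        if overlap > best_overlap:
            best, best_overlap = expr, overlap
    return best, best_overlap

print(infer_expression(["AU6", "AU12", "AU25"]))  # -> ('happiness', 1.0)
```

In our setting, the interesting challenge is upstream of this mapping: recovering the AU-level (or region-level) muscle movements themselves from passive signals such as mmWave reflections or acoustic echoes, rather than from a camera.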