Computer-Human Interaction

The prime focus of our lab is to develop pervasive and ubiquitous systems for computer-human interaction. Towards this goal, we use interactive sensing modalities, such as video, audio, RF sensing, smartphone-based sensing, and smart wearables, to design assistive systems for the general population, particularly in underdeveloped and developing regions. Some of our current projects are as follows.

Smart Interface for Virtual Meeting Apps

For over two decades, video conferencing has been a productive way for multiple participants to hold conversations online. During the COVID-19 pandemic and beyond, it became a necessity rather than an option, as almost every meeting, be it a classroom lecture or a business discussion, moved to virtual mode on various online video conferencing platforms. Nevertheless, there has been serious concern about the quality of these meetings due to a lack of engagement from the participants, particularly in business meetings, classroom teaching, and educational seminars. Many participants tend to be passive during the sessions, especially when they find more exciting activities at hand, such as reading a storybook or an article on the Internet or browsing their social networking feeds. Consequently, attending the meeting becomes merely a proof of participation, like marking attendance in a class without following the lectures!
We work on developing intelligent interfaces that monitor the cognitive involvement of online meeting attendees. We explore the behavioral patterns of individuals during online discussions and then use active and passive sensing modalities, such as video, RF, acoustics, and wearables, to infer whether a participant is cognitively engaged in the meeting's discussions.
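
As a minimal illustration of the passive video channel, the sketch below treats face presence in the webcam feed over a sliding window as a crude proxy for visual attention. The detector (OpenCV's stock Haar cascade), the window length, and the threshold are all illustrative assumptions, not our deployed pipeline:

    import time
    from collections import deque

    import cv2

    # Hypothetical sketch: fraction of recent frames containing a face,
    # used as a crude visual-attention score. Parameters are assumptions.
    WINDOW_SECONDS = 30   # length of the sliding attention window
    THRESHOLD = 0.6       # face-presence fraction that counts as "attentive"

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    def frame_has_face(frame):
        """Return True if at least one frontal face is detected."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return len(faces) > 0

    def monitor(camera_index=0):
        cap = cv2.VideoCapture(camera_index)
        history = deque()  # (timestamp, face_present) samples
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            now = time.time()
            history.append((now, frame_has_face(frame)))
            # Drop samples that have fallen out of the window.
            while history and now - history[0][0] > WINDOW_SECONDS:
                history.popleft()
            score = sum(hit for _, hit in history) / len(history)
            state = "attentive" if score >= THRESHOLD else "possibly disengaged"
            print(f"attention score {score:.2f} -> {state}", end="\r")
        cap.release()

    if __name__ == "__main__":
        monitor()

In practice, face presence alone is a weak signal; the point of the sketch is only the sliding-window structure, into which richer cues (gaze, audio activity, RF, wearable readings) can be fused.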

Mobile Interface Design for Challenged and Disabled People

Today's smartphones are getting smarter; however, they might not be entirely friendly for challenged and disabled people. For example, people with medical conditions like dactylitis, sarcopenia, and joint pain might have difficulty typing on a smartphone's conventional QWERTY soft keyboard. Existing gaze-based approaches do not work well without commercial trackers, and voice-based approaches fail in noisy environments. We develop intelligent interfaces that help such people interact with the smartphone seamlessly, using alternate modalities such as head gestures and visual cues. This is particularly challenging, as the method must be efficient enough to run on a smartphone. We develop lightweight tracking techniques that leverage online learning to solve this problem; a simplified sketch of the gesture-detection idea appears below.
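
The sketch below illustrates only the general idea: it classifies coarse head gestures (left, right, up, down) from the dominant direction of optical flow over tracked feature points. It swaps our online-learning tracker for off-the-shelf Lucas-Kanade optical flow in OpenCV, and the threshold and feature-seeding choices are assumptions made for illustration:

    import cv2
    import numpy as np

    MOVE_THRESHOLD = 15.0  # mean displacement (pixels) that counts as a gesture

    def classify(dx, dy):
        """Map a mean flow vector to a coarse head-gesture label."""
        if max(abs(dx), abs(dy)) < MOVE_THRESHOLD:
            return "none"
        if abs(dx) > abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"

    def seed_points(gray):
        # Seed trackable corners; a real system would restrict these to
        # the detected face region and refresh them adaptively.
        return cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                       qualityLevel=0.3, minDistance=7)

    def main():
        cap = cv2.VideoCapture(0)
        ok, frame = cap.read()
        prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        points = seed_points(prev_gray)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if points is None or len(points) == 0:
                points, prev_gray = seed_points(gray), gray
                continue
            new_points, status, _ = cv2.calcOpticalFlowPyrLK(
                prev_gray, gray, points, None)
            keep = status.flatten() == 1
            flow = (new_points[keep] - points[keep]).reshape(-1, 2)
            if len(flow) > 0:
                dx, dy = np.mean(flow, axis=0)
                gesture = classify(float(dx), float(dy))
                if gesture != "none":
                    print("head gesture:", gesture)
            prev_gray = gray
            points = new_points[keep].reshape(-1, 1, 2)
        cap.release()

    if __name__ == "__main__":
        main()

On a smartphone, the same loop would run on the front-camera stream, with the gesture labels mapped to keyboard or cursor actions; the lightweight, model-free tracking is what makes on-device operation feasible.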