
EU Grant “Artificial Intelligence for the Deaf”

We have recently been awarded an EU grant on the topic “Artificial Intelligence for the Deaf”.
The project pursues cross-disciplinary breakthrough innovation to offer a comprehensive suite of solutions catering to the communication needs of deaf people. This is done by creating multi-modal, high-accuracy deep learning models for interactive communication between deaf and hearing people.
The project involves partners from 6 countries, including University of Kent, Cyprus University of Technology, European Union of the Deaf, and Georgia Tech.
Lots of travelling to come. 🙂

Paper accepted in VR and Dementia

We are very excited to announce that our paper, in collaboration with St Andrew’s Healthcare, has been accepted for publication and presentation at the ACM Conference on Human Factors in Computing Systems (CHI 2019). This is one of the few studies investigating the use of VR among people with dementia in a psychiatric hospital.

Title: Bring the Outside In: Providing Accessible Experiences Through VR for People with Dementia in Locked Psychiatric Hospitals

Abstract: Many people with dementia (PWD) residing in long-term care may face barriers in accessing experiences beyond their physical premises; this may be due to location, mobility constraints, legal mental health act restrictions, or offence-related restrictions. In recent years, there has been research interest in designing non-pharmacological interventions aiming to improve the Quality of Life (QoL) for PWD within long-term care. We explored the use of Virtual Reality (VR) as a tool to provide 360° video-based experiences for individuals with moderate to severe dementia residing in a locked psychiatric hospital. We discuss in depth the appeal of using VR for PWD, and the observed impact of such interaction. We also present the design opportunities, pitfalls, and recommendations for future deployment in healthcare services. This paper demonstrates the potential of VR as a virtual alternative to experiences that may be difficult to reach for PWD residing within a locked setting.

This is a research area we have been working on for the past few years, and we are very glad to know that hospitals are now keen to adopt VR in their clinical practice. More information to follow regarding the development of this work.


EPSRC PhD Studentship in VR and Dementia

Title: Semi-automated personalised VR experience for patients with dementia within a secure psychiatric setting

University of Kent and St Andrew’s Healthcare 

We have a PhD studentship jointly funded by EPSRC and St Andrew’s Healthcare, investigating the design and deployment of personalised VR in a locked hospital context for people with dementia (PWD).

Many PWD residing in long-term care may face barriers in accessing experiences beyond their physical premises; this may be due to location, mobility constraints, legal mental health act restrictions, or offence-related restrictions. There has been research interest in designing non-pharmacological interventions aiming to improve the Quality of Life for PWD within long-term care, including the use of digital technologies such as Virtual Reality (VR). A pilot study undertaken by St Andrew’s and the University of Kent has recently demonstrated the potential of using 360° video-based VR experiences for individuals with moderate to severe dementia residing in a locked psychiatric hospital.

In this PhD, you will work alongside clinical researchers, clinicians and patients to help design a technological system that allows users to generate personalised VR apps which are easy to deploy in a restricted clinical setting. You will also study patients’ preferences for VR content through automated behavioural/emotional sensing integrated within the VR. This will help clinicians to better understand how patients interact with VR and to suggest content likely to have the optimal impact on patients’ wellbeing. Algorithms will be developed to study such behaviours, which could lead to an automated system for selecting and generating relevant VR content that patients find engaging and interesting, as sketched below.
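As a rough illustration of the kind of system the studentship envisages, the Python sketch below aggregates simple behavioural signals logged during VR sessions per content item and ranks candidate 360° videos by a predicted engagement score. All of the signal names, weights and data structures here are illustrative assumptions, not part of the project specification; in the actual PhD, such hand-crafted weights would be replaced by models learned from the behavioural/emotional sensing data described above.

from dataclasses import dataclass
from statistics import mean

@dataclass
class SessionLog:
    """One viewing session of a given 360° video (all fields hypothetical)."""
    content_id: str
    gaze_dwell_s: float     # seconds spent attending to the main scene region
    smiles_detected: int    # count from an assumed facial-expression classifier
    agitation_events: int   # count from an assumed movement/agitation detector

def engagement_score(logs: list[SessionLog]) -> float:
    # Hand-weighted score for illustration; a real system would learn these
    # weights from clinician ratings and patient outcomes.
    return (0.5 * mean(l.gaze_dwell_s for l in logs)
            + 0.4 * mean(l.smiles_detected for l in logs)
            - 0.3 * mean(l.agitation_events for l in logs))

def rank_content(history: dict[str, list[SessionLog]]) -> list[str]:
    """Return content ids ordered from most to least engaging for this patient."""
    return sorted(history, key=lambda cid: engagement_score(history[cid]), reverse=True)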

We are looking for an excellent candidate with a relevant first degree and a strong Master’s degree in computer science, electronic engineering or related areas relevant to the PhD topic. Experience in VR and machine learning is advantageous.

While principally based at the University of Kent campus, the student is also expected to develop a close working relationship with the hospital partner and to conduct experimental work on their sites from time to time.

Funding Details: Funding is available at the current home fee rate of £4,195.00 (rate for 2018/19) together with a maintenance grant of £14,777.00 (rate for 2018/19).

Length of Award: 3 years (PhD)

Eligibility: UK domiciled students

Enquiries:  Any enquiries relating to the project should be directed to Chee Siang (Jim) Ang (csa8@kent.ac.uk)

Application:  Apply online at https://www.kent.ac.uk/courses/postgraduate/how-to-apply/ and select the following:

Programme – PhD in Electronic Engineering

School – School of Engineering and Digital Arts

Research topic – Semi-automated personalised VR experience for patients with dementia within a secure psychiatric setting

Deadline:  8 February 2019


Paper on VR interpretation bias training for anxiety

Some more news on VR work. A new paper, “Believing is Seeing: A proof-of-concept study on using mobile Virtual Reality to boost the effects of Interpretation Bias Modification for anxiety”, has been accepted to JMIR Mental Health.

I am fortunate to be working with many passionate PhD students interested in VR work 🙂

Abstract of the paper:

Background: Cognitive Bias Modification of Interpretations (CBM-I) is a computerized intervention designed to change negatively biased interpretations of ambiguous information, which underlie and reinforce anxiety. The repetitive and monotonous features of CBM-I can negatively impact on training adherence and learning processes.

Objectives: This proof-of-concept study examined whether performing a CBM-I training using mobile Virtual Reality technology (VR-CBM-I) improves training experience and effectiveness.

Methods: Forty-two students high in trait anxiety completed one session of either VR-CBM-I or standard CBM-I training for performance anxiety. Participants’ feelings of immersion and presence, emotional reactivity to a stressor, and changes in interpretation bias and state anxiety were assessed.

Results: The VR-CBM-I resulted in greater feelings of presence (P < 0.001, d = 1.47) and immersion (P < 0.001, ηp² = 0.74) in the training scenarios and outperformed the standard training in effects on state anxiety (P < 0.001, ηp² = 0.3) and emotional reactivity to a stressor (P = 0.027, ηp² = 0.12). Both training varieties successfully increased the endorsement of positive interpretations (P < 0.001, repeated-measures d (drm) = 0.79) and decreased negative ones (P < 0.001, drm = 0.72). In addition, changes in the emotional outcomes were correlated with greater feelings of immersion and presence.

Conclusions: This study provided first evidence that (1) the putative working principles underlying CBM-I training can be translated into a virtual environment, and (2) VR holds promise as a tool to boost the effects of CBM-I training for highly anxious individuals, whilst enhancing users’ experience of the training application.


Paper published on Deep Learning and Emotion Detection

After several rounds of reviews and revisions, our paper, entitled “Deep Learning Analysis of Mobile Physiological, Environmental and Location Sensor Data for Emotion Detection”, is now published in Information Fusion. The full paper has been made “gold open access” and can be read here.

Abstract:

The detection and monitoring of emotions are important in various applications, e.g. to enable naturalistic and personalised human-robot interaction. Emotion detection often requires modelling of various data inputs from multiple modalities, including physiological signals (e.g. EEG and GSR), environmental data (e.g. audio and weather), videos (e.g. for capturing facial expressions and gestures) and, more recently, motion and location data. Many traditional machine learning algorithms have been utilised to capture the diversity of multimodal data at the sensor and feature levels for human emotion classification. While the feature engineering processes often embedded in these algorithms are beneficial for emotion modelling, they inherit some critical limitations which may hinder the development of reliable and accurate models. In this work, we adopt a deep learning approach for emotion classification through an iterative process of adding and removing large numbers of sensor signals from different modalities. Our dataset was collected in a real-world study from smartphones and wearable devices. It merges the local interactions of three sensor modalities (on-body, environmental and location) into a global model that represents signal dynamics along with the temporal relationships of each modality. Our approach employs a series of learning algorithms, including a hybrid approach using a Convolutional Neural Network and a Long Short-Term Memory Recurrent Neural Network (CNN-LSTM) on the raw sensor data, eliminating the need for manual feature extraction and engineering. The results show that the adoption of deep learning approaches is effective in human emotion classification when a large number of sensor inputs is utilised (average accuracy 95% and F-measure 95%), and that the hybrid models outperform traditional fully connected deep neural networks (average accuracy 73% and F-measure 73%). Furthermore, the hybrid models outperform previously developed ensemble algorithms that utilise feature engineering to train the model (average accuracy 83% and F-measure 82%).
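For readers curious about what a hybrid CNN-LSTM over raw, windowed sensor data looks like in practice, here is a minimal Keras/TensorFlow sketch in Python. The window length, channel count, layer sizes and three-class output are assumptions for illustration only and do not reproduce the architecture reported in the paper.

from tensorflow.keras import layers, models

WINDOW_LEN = 128   # samples per window (assumed)
N_CHANNELS = 32    # concatenated on-body + environmental + location channels (assumed)
N_CLASSES = 3      # e.g. negative / neutral / positive emotion (assumed)

model = models.Sequential([
    # Convolutional layers learn local temporal features directly from the raw
    # signals, replacing manual feature extraction and engineering.
    layers.Conv1D(64, kernel_size=5, activation="relu",
                  input_shape=(WINDOW_LEN, N_CHANNELS)),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # The LSTM captures longer-range temporal dependencies across the window.
    layers.LSTM(64),
    layers.Dropout(0.5),
    layers.Dense(N_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()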


Paper on Sensor Tech for Education in Rural Thailand

We have recently had a paper accepted in PLOS ONE, titled “Investigating the use of sensor-based IoET to facilitate learning for children in rural Thailand”.

Abstract

A sensor-based Internet of Educational Things (IoET) platform named OBSY was iteratively designed, developed and evaluated to support education in rural regions of Thailand. To assess the effectiveness of this platform, a study was carried out at four primary schools located near the northern Thai border with 244 students and 8 teachers. Participants were asked to carry out three science-based learning activities and were assessed for improvements in learning outcomes and learning engagement. Overall, the results showed that students in the IoET group, who had used OBSY to learn, showed significantly higher learning outcomes and better learning engagement than those in the control condition. In addition, for those in the IoET group, there was no significant effect of gender, home location (urban or rural), age, prior experience with technology or ethnicity on learning outcomes. For learning engagement, only age was found to influence interest/enjoyment.


Another VR paper accepted

Our paper, “Virtual Reality: The Effect of Body Consciousness on the Experience of Exercise Sensations”, has been accepted for publication in Psychology of Sport & Exercise. This is a study in collaboration with our Sport Science and Psychology colleagues.

Abstract:

Objectives: Past research has shown that Virtual Reality (VR) is an effective method for reducing the perception of pain and effort associated with exercise. As pain and effort are subjective feelings, they are influenced by a variety of psychological factors, including one’s awareness of internal body sensations, known as Private Body Consciousness (PBC). The goal of the present study was to investigate whether the effectiveness of VR in reducing the feeling of exercise pain and effort is moderated by PBC.

Design and Methods: Eighty participants were recruited to this study and were randomly assigned to a VR or a non-VR control group. All participants were required to maintain a 20% 1RM isometric bicep curl, whilst reporting ratings of pain intensity and perception of effort. Participants in the VR group completed the isometric bicep curl task whilst wearing a VR device which simulated an exercising environment. Participants in the non-VR group completed a conventional isometric bicep curl exercise without VR. Participants’ heart rate was continuously monitored along with time to exhaustion. A questionnaire was used to assess PBC.

Results: Participants in the VR group reported significantly lower pain and effort and exhibited longer time to exhaustion compared to the non-VR group. Notably, PBC had no effect on these measures and did not interact with the VR manipulation.

Conclusions: Results verified that VR during exercise could reduce negative sensations associated with exercise regardless of the levels of PBC.

 


Paper on “Can the crowd tell how I feel?”

Our paper:

Can the crowd tell how I feel? Trait empathy and ethnic background in a visual pain judgment task 

has been published in “Universal Access in the Information Society” journal.

Abstract: Many advocate for artificial agents to be empathic. Crowdsourcing could help, by facilitating human-in-the-loop approaches and data set creation for visual emotion recognition algorithms. Although crowdsourcing has been employed successfully for a range of tasks, it is not clear how effective crowdsourcing is when the task involves subjective rating of emotions. We examined relationships between demographics, empathy, and ethnic identity in pain emotion recognition tasks. Amazon MTurkers viewed images of strangers in painful settings, and tagged subjects’ emotions. They rated their level of pain arousal and confidence in their responses, and completed tests to gauge trait empathy and ethnic identity. We found that Caucasian participants were less confident than others, even when viewing other Caucasians in pain. Gender correlated to word choices for describing images, though not to pain arousal or confidence. The results underscore the need for verified information on crowdworkers, to harness diversity effectively for metadata generation tasks.


Paper accepted in IEEE Access

Right before the start of the new academic year, I am pleased to announce that our paper, “NotiMind: Utilizing Responses to Smart Phone Notifications as Affective Sensors”, has been accepted by the IEEE Access journal. The funny thing about this journal is that they ask us to submit a “Multimedia Abstract” along with the paper!

There you go, my very first “Multimedia Abstract”. It does the job, I suppose.

And the “traditional” abstract:

Today’s mobile phone users are faced with large numbers of notifications on social media, ranging from new followers on Twitter and emails to messages received from WhatsApp and Facebook. These digital alerts continuously disrupt activities through instant calls for attention. This paper examines closely the way everyday users interact with notifications and their impact on users’ emotion. Fifty users were recruited to download our application NotiMind and use it over a five-week period. Users’ phones collected thousands of social and system notifications along with affect data collected via self-reported PANAS tests three times a day. Results showed a noticeable correlation between positive affective measures and keyboard activities. When large numbers of Post and Remove notifications occur, a corresponding increase in negative affective measures is detected. Our predictive model has achieved a good accuracy level using three different “in the wild” classifiers (F-measure 74-78% within-subject model, 72-76% global model). Our findings show that it is possible to automatically predict when people are experiencing positive, neutral or negative affective states based on interactions with notifications.  We also show how our findings open the door to a wide range of applications in relation to emotion awareness on social and mobile communication.
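The contrast between the within-subject and global models can be illustrated with a small scikit-learn sketch in Python. The notification-derived features, the random forest classifier and the synthetic data below are assumptions for illustration and are not the paper’s actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users, samples_per_user = 10, 60

# Hypothetical features per sample: [posted notifications, removed notifications,
# keyboard events], predicting self-reported affect (0 = negative, 1 = neutral, 2 = positive).
X = rng.poisson(lam=(5.0, 3.0, 50.0), size=(n_users * samples_per_user, 3)).astype(float)
y = rng.integers(0, 3, size=n_users * samples_per_user)
user_ids = np.repeat(np.arange(n_users), samples_per_user)

# Global model: a single classifier trained across all users' data.
global_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=3).mean()

# Within-subject models: one classifier per user, evaluated on that user's own data.
within_acc = np.mean([
    cross_val_score(RandomForestClassifier(random_state=0),
                    X[user_ids == u], y[user_ids == u], cv=3).mean()
    for u in range(n_users)
])

print(f"global accuracy: {global_acc:.2f}, within-subject accuracy: {within_acc:.2f}")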


Paper accepted in MobileHCI 2017

Our paper, entitled “Designing a ubiquitous sensor-based platform to facilitate learning for young children in Thailand”, has been accepted at the MobileHCI 2017 conference.

Abstract

Education plays an important role in helping developing nations reduce poverty and improve quality of life. Ubiquitous and mobile technologies could greatly enhance education in such regions by providing augmented access to learning. This paper presents a three-year iterative study in which a ubiquitous sensor-based learning platform was designed, developed and tested to support science learning among primary school students in underprivileged Northern Thailand. The platform is built upon the school’s existing mobile devices and was expanded to include sensor-based technology. Throughout the iterative design process, observations, interviews and group discussions were carried out with stakeholders. This led to key reflections and design concepts, such as the value of injecting anthropomorphic qualities into the learning device and of providing personally and culturally relevant learning experiences through technology. Overall, the results outlined in this paper contribute to knowledge regarding the design, development and implementation of ubiquitous sensor-based technology to support learning.