
Paper on VR interpretation bias training for anxiety

Some more news on VR work. A new paper, “Believing is Seeing: A proof-of-concept study on using mobile Virtual Reality to boost the effects of Interpretation Bias Modification for anxiety” has been accepted to JMIR Mental Health.

I am fortunate to be working with many passionate PhD students interested in VR work 🙂

Abstract of the paper:

Background: Cognitive Bias Modification of Interpretations (CBM-I) is a computerized intervention designed to change the negatively biased interpretations of ambiguous information that underlie and reinforce anxiety. The repetitive and monotonous features of CBM-I can negatively impact training adherence and learning processes.

Objectives: This proof-of-concept study examined whether delivering CBM-I training using mobile Virtual Reality technology (VR-CBM-I) improves training experience and effectiveness.

Methods: Forty-two students high in trait anxiety completed one session of either VR-CBM-I or standard CBM-I training for performance anxiety. Participants' feelings of immersion and presence, emotional reactivity to a stressor, and changes in interpretation bias and state anxiety were assessed.

Results: The VR-CBM-I resulted in greater feelings of presence (P < 0.001, d = 1.47) and immersion (P < 0.001, ηp² = 0.74) in the training scenarios, and outperformed the standard training in its effects on state anxiety (P < 0.001, ηp² = 0.3) and emotional reactivity to a stressor (P = 0.027, ηp² = 0.12). Both training varieties successfully increased the endorsement of positive interpretations (P < 0.001, d for repeated measures (drm) = 0.79) and decreased negative ones (P < 0.001, drm = 0.72). In addition, changes in the emotional outcomes were correlated with greater feelings of immersion and presence.

Conclusions: This study provided first evidence that (1) the putative working principles underlying CBM-I training can be translated into a virtual environment, and (2) VR holds promise as a tool to boost the effects of CBM-I training for highly anxious individuals, whilst improving users' experience with the training application.
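As an aside for readers less familiar with the drm statistic reported above: it is a repeated-measures variant of Cohen's d, computed from the pre/post difference scores. Here is a minimal Python sketch of one common formulation, using made-up numbers purely for illustration (this is not our analysis code):

```python
# Minimal sketch: a repeated-measures effect size (drm) from pre/post ratings.
# The data below are invented; only the computation is of interest.
import numpy as np
from scipy import stats

pre = np.array([3.1, 2.8, 3.5, 2.9, 3.3])   # hypothetical pre-training ratings
post = np.array([3.9, 3.6, 4.2, 3.4, 4.0])  # hypothetical post-training ratings

t, p = stats.ttest_rel(post, pre)            # paired t-test
r = np.corrcoef(pre, post)[0, 1]             # pre/post correlation

# drm: mean difference standardised by the SD of the difference scores,
# corrected for the pre/post correlation so it is comparable to Cohen's d.
sd_diff = np.std(post - pre, ddof=1)
d_rm = (post.mean() - pre.mean()) / sd_diff * np.sqrt(2 * (1 - r))
print(f"t = {t:.2f}, p = {p:.4f}, drm = {d_rm:.2f}")
```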


Paper published on Deep Learning and Emotion Detection

After several rounds of reviews and revisions, our paper, entitled "Deep Learning Analysis of Mobile Physiological, Environmental and Location Sensor Data for Emotion Detection", is now published in Information Fusion. The full paper has been made "gold open access" and can be read here.

Abstract:

The detection and monitoring of emotions are important in various applications, e.g. to enable naturalistic and personalised human-robot interaction. Emotion detection often requires modelling of various data inputs from multiple modalities, including physiological signals (e.g. EEG and GSR), environmental data (e.g. audio and weather), videos (e.g. for capturing facial expressions and gestures) and, more recently, motion and location data. Many traditional machine learning algorithms have been utilised to capture the diversity of multimodal data at the sensor and feature levels for human emotion classification. While the feature engineering processes often embedded in these algorithms are beneficial for emotion modelling, they inherit some critical limitations which may hinder the development of reliable and accurate models. In this work, we adopt a deep learning approach for emotion classification through an iterative process of adding and removing large numbers of sensor signals from different modalities. Our dataset was collected in a real-world study from smartphones and wearable devices. It merges the local interactions of three sensor modalities (on-body, environmental and location) into a global model that represents signal dynamics along with the temporal relationships within each modality. Our approach employs a series of learning algorithms, including a hybrid approach using a Convolutional Neural Network and a Long Short-Term Memory Recurrent Neural Network (CNN-LSTM), on the raw sensor data, eliminating the need for manual feature extraction and engineering. The results show that the adoption of deep learning approaches is effective for human emotion classification when a large number of sensor inputs is utilised (average accuracy 95% and F-measure 95%), and that the hybrid models outperform traditional fully connected deep neural networks (average accuracy 73% and F-measure 73%). Furthermore, the hybrid models outperform previously developed ensemble algorithms that utilise feature engineering to train the model (average accuracy 83% and F-measure 82%).
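For readers curious what such a hybrid model looks like in practice, here is a minimal, self-contained PyTorch sketch. The channel counts, window length and layer sizes are illustrative assumptions, not the architecture from the paper:

```python
# Illustrative sketch of a hybrid CNN-LSTM for windows of raw multimodal
# sensor data. All sizes below are assumptions for demonstration only.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=20, n_classes=5):
        super().__init__()
        # 1D convolutions learn local patterns across the raw sensor channels,
        # standing in for manual feature extraction.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM models temporal relationships across the convolved sequence.
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        z = self.conv(x)             # (batch, 64, time/2)
        z = z.permute(0, 2, 1)       # (batch, time/2, 64) for the LSTM
        _, (h, _) = self.lstm(z)
        return self.fc(h[-1])        # class logits

# A forward pass on a dummy batch: 8 windows, 20 sensor channels, 128 samples.
logits = CNNLSTM()(torch.randn(8, 20, 128))
print(logits.shape)  # torch.Size([8, 5])
```

The convolutional front end does the automatic feature extraction over the raw signals, while the LSTM captures the temporal relationships the abstract refers to.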


Paper on Sensor Tech for Education in Rural Thailand

We have recently had a paper accepted in PLOS ONE, titled "Investigating the use of sensor-based IoET to facilitate learning for children in rural Thailand".

Abstract

A sensor-based Internet of Educational Things (IoET) platform named OBSY was iteratively designed, developed and evaluated to support education in rural regions of Thailand. To assess the effectiveness of this platform, a study was carried out at four primary schools located near Thailand's northern border, with 244 students and 8 teachers. Participants were asked to carry out three science-based learning activities and were measured for improvements in learning outcomes and learning engagement. Overall, the results showed that students in the IoET group, who had used OBSY to learn, showed significantly higher learning outcomes and better learning engagement than those in the control condition. In addition, for those in the IoET group, gender, home location (urban or rural), age, prior experience with technology and ethnicity had no significant effect on learning outcomes. For learning engagement, only age was found to influence interest/enjoyment.


Another VR paper accepted

Our paper, "Virtual Reality: The Effect of Body Consciousness on the Experience of Exercise Sensations", has been accepted for publication in Psychology of Sport & Exercise. This is a study in collaboration with our Sport Science and Psychology colleagues.

Abstract:

Objectives: Past research has shown that Virtual Reality (VR) is an effective method for reducing the perception of pain and effort associated with exercise. As pain and effort are subjective feelings, they are influenced by a variety of psychological factors, including one's awareness of internal body sensations, known as Private Body Consciousness (PBC). The goal of the present study was to investigate whether the effectiveness of VR in reducing feelings of pain and effort during exercise is moderated by PBC.

Design and Methods: Eighty participants were recruited for this study and randomly assigned to a VR or a non-VR control group. All participants were required to maintain a 20% 1RM isometric bicep curl whilst reporting ratings of pain intensity and perception of effort. Participants in the VR group completed the isometric bicep curl task whilst wearing a VR device which simulated an exercising environment. Participants in the non-VR group completed a conventional isometric bicep curl exercise without VR. Participants' heart rate was continuously monitored, along with time to exhaustion. A questionnaire was used to assess PBC.

Results: Participants in the VR group reported significantly lower pain and effort and exhibited longer time to exhaustion compared to the non-VR group. Notably, PBC had no effect on these measures and did not interact with the VR manipulation.

Conclusions: Results verified that VR during exercise could reduce negative sensations associated with exercise regardless of the levels of PBC.

 


Paper on “Can the crowd tell how I feel?”

Our paper:

Can the crowd tell how I feel? Trait empathy and ethnic background in a visual pain judgment task 

has been published in “Universal Access in the Information Society” journal.

Abstract: Many advocate for artificial agents to be empathic. Crowdsourcing could help, by facilitating human-in-the-loop approaches and dataset creation for visual emotion recognition algorithms. Although crowdsourcing has been employed successfully for a range of tasks, it is not clear how effective crowdsourcing is when the task involves subjective rating of emotions. We examined relationships between demographics, empathy and ethnic identity in pain emotion recognition tasks. Amazon MTurkers viewed images of strangers in painful settings and tagged the subjects' emotions. They rated their level of pain arousal and confidence in their responses, and completed tests to gauge trait empathy and ethnic identity. We found that Caucasian participants were less confident than others, even when viewing other Caucasians in pain. Gender correlated with word choices for describing images, though not with pain arousal or confidence. The results underscore the need for verified information on crowdworkers in order to harness diversity effectively for metadata generation tasks.


Paper accepted in IEEE Access

Right before the start of the new academic year, I am pleased to announce that our paper, "NotiMind: Utilizing Responses to Smart Phone Notifications as Affective Sensors", has been accepted in the IEEE Access journal. The funny thing about this journal is that they ask us to submit a "Multimedia Abstract" along with the paper!

There you go, my very first “Multimedia Abstract”. It does the job I suppose:

And the “traditional” abstract:

Today's mobile phone users are faced with large numbers of notifications on social media, ranging from new followers on Twitter and emails to messages received from WhatsApp and Facebook. These digital alerts continuously disrupt activities through instant calls for attention. This paper closely examines the way everyday users interact with notifications and the impact of these notifications on users' emotions. Fifty users were recruited to download our application, NotiMind, and use it over a five-week period. Users' phones collected thousands of social and system notifications, along with affect data collected via self-reported PANAS tests three times a day. Results showed a noticeable correlation between positive affective measures and keyboard activities. When large numbers of Post and Remove notifications occurred, a corresponding increase in negative affective measures was detected. Our predictive model achieved a good level of accuracy using three different "in the wild" classifiers (F-measure 74-78% for the within-subject model, 72-76% for the global model). Our findings show that it is possible to automatically predict when people are experiencing positive, neutral or negative affective states based on their interactions with notifications. We also show how our findings open the door to a wide range of applications in relation to emotion awareness in social and mobile communication.
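To illustrate the difference between the two evaluation settings mentioned above, here is a small scikit-learn sketch. The data are entirely synthetic and the classifier is arbitrary (this is not the NotiMind pipeline): a within-subject model is trained and scored per user, while a global model pools all users' data.

```python
# Hedged sketch: within-subject vs global affect classifiers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical data: notification-interaction features and affect labels per user.
users = {u: (rng.normal(size=(120, 10)), rng.integers(0, 3, 120)) for u in range(5)}

# Within-subject: one model per user, evaluated on that user's own data.
within = [cross_val_score(RandomForestClassifier(), X, y, cv=5,
                          scoring="f1_macro").mean()
          for X, y in users.values()]

# Global: all users pooled into a single model.
X_all = np.vstack([X for X, _ in users.values()])
y_all = np.concatenate([y for _, y in users.values()])
global_f1 = cross_val_score(RandomForestClassifier(), X_all, y_all, cv=5,
                            scoring="f1_macro").mean()

print(f"within-subject F1: {np.mean(within):.2f}, global F1: {global_f1:.2f}")
```

A stricter global evaluation would hold out whole users (e.g. leave-one-user-out) rather than random folds, so the model is always tested on people it has never seen.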


Paper accepted in MobileHCI 2017

Our paper, entitled "Designing a ubiquitous sensor-based platform to facilitate learning for young children in Thailand", has been accepted at the MobileHCI 2017 conference.

Abstract

Education plays an important role in helping developing nations reduce poverty and improve quality of life. Ubiquitous and mobile technologies could greatly enhance education in such regions by providing augmented access to learning. This paper presents a three-year iterative study in which a ubiquitous sensor-based learning platform was designed, developed and tested to support science learning among primary school students in underprivileged Northern Thailand. The platform is built upon the school's existing mobile devices and was expanded to include sensor-based technology. Throughout the iterative design process, observations, interviews and group discussions were carried out with stakeholders. This led to key reflections and design concepts, such as the value of injecting anthropomorphic qualities into the learning device and of providing personally and culturally relevant learning experiences through technology. Overall, the results outlined in this paper contribute to knowledge regarding the design, development and implementation of ubiquitous sensor-based technology to support learning.


VR paper accepted in INTERACT conference

Some more good news to share regarding publications. In collaboration with Dr Lex Mauger from Kent Sport and Exercise Sciences, we have been working on how virtual reality (VR) can be used to reduce the pain people feel when exercising. More information about the project can be found here.

Title: How Real is Unreal? Virtual Reality and the Impact of Visual Imagery on the Experience of Exercise-Induced Pain

Abstract. As a consequence of prolonged muscle contraction, acute pain arises during exercise due to a build-up of noxious biochemicals in and around the muscle. Specific visual cues, e.g. the size of the object in weight-lifting exercises, may reduce the acute pain experienced during exercise. In this study, we examined how Virtual Reality (VR) can facilitate this "material-weight illusion", influencing the perception of task difficulty and thus reducing perceived pain. We found that when vision understated the real weight, time to exhaustion was 2 minutes longer. Furthermore, participants' heart rate was significantly lower, by 5-7 bpm, in the understated session. We concluded that visual-proprioceptive information modulated individuals' willingness to continue exercising for longer, primarily by reducing the intensity of the negative perceptions of pain and effort associated with exercise. This result could inform the design of VR applications aimed at increasing levels of physical activity and thus promoting a healthier lifestyle.


Paper accepted in Nature Scientific Reports

After two years of collaboration with Dr Yeo from VCU, our work on swallowing sensing with skin-like electronics has finally been accepted for publication in Scientific Reports, an open access Nature journal. More information about the project can be found here.

We are currently working on a second project on wireless chewing monitoring through a mobile app! Watch this space for more updates.

Title: Soft Electronics Enabled Ergonomic Human-Computer Interaction for Swallowing Training

Abstract: We introduce a skin-friendly electronic system that enables human-computer interaction (HCI) for swallowing training in dysphagia rehabilitation. For ergonomic HCI, we utilize a soft, highly compliant ("skin-like") electrode, which addresses critical issues of the existing rigid and planar electrodes that rely on problematic conductive electrolytes and adhesive pads. The skin-like electrode offers highly conformal, user-comfortable interaction with the skin for long-term wearable, high-fidelity recording of swallowing electromyograms on the chin. Mechanics modeling and experimental quantification capture the ultra-elastic mechanical characteristics of an open-mesh, microstructured sensor conjugated with an elastomeric membrane. Systematic in vivo studies investigate the functionality of the soft electronics for HCI-enabled swallowing training, including the application of a biofeedback system to detect swallowing behavior. The collection of results demonstrates the clinical feasibility of the ergonomic electronics in HCI-driven rehabilitation for patients with swallowing disorders.
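As a loose illustration of how swallowing events might be picked out of a surface EMG trace, here is a minimal Python sketch using a standard rectify-smooth-threshold pipeline. Every parameter below (sampling rate, cutoff, threshold factor) is an assumption for demonstration; this is not the detection method from the paper:

```python
# Sketch: flag candidate swallowing events by thresholding an EMG envelope.
import numpy as np
from scipy.signal import butter, filtfilt

def swallow_events(emg, fs=1000, cutoff=5, k=2.0):
    """Return sample indices where the EMG envelope crosses a threshold."""
    rectified = np.abs(emg - np.mean(emg))            # remove offset, rectify
    b, a = butter(4, cutoff / (fs / 2))               # low-pass for the envelope
    envelope = filtfilt(b, a, rectified)              # zero-phase smoothing
    threshold = envelope.mean() + k * envelope.std()  # data-driven threshold
    above = envelope > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges

# Demo on synthetic data: baseline noise plus one burst of muscle activity.
fs = 1000
emg = np.random.default_rng(1).normal(0, 0.05, 5 * fs)
emg[2 * fs:2 * fs + 500] += np.sin(np.linspace(0, 50 * np.pi, 500)) * 0.5
print(swallow_events(emg, fs))  # expect one onset near sample 2000
```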


Industry-Sponsored PhD Studentship

Title: Water leakage detection and wastage monitoring through advanced sensing and data modelling

University of Kent – School of Engineering and Digital Arts

Supervisors:  Professor Yong Yan, Dr Jim Ang and Dr Anthony Emeakaroha

Leakage and inadequate use of water cost some organisations significant amounts of money that could be put to better use. For instance, a public service organisation in England consumes an estimated 38.8 million cubic metres of water and generates approximately 26.3 million cubic metres of sewage per year. The use of water is also linked to the overall carbon footprint of an organisation, through the heating of hot water and the energy consumed to pump water to the taps. This industry-sponsored PhD research programme aims to apply state-of-the-art leak detection and data modelling techniques to minimise water leaks and wastage. Two key challenges in water management are unnoticed leakage and human behaviour around usage. There are various methods for detecting leaks in a water distribution system; despite their availability, many leaks, including slow leaks via dripping taps and small holes in underground pipelines, still go undetected. New techniques based on advanced signal processing algorithms, precision mass flow balancing and communication techniques have the potential to overcome the limitations of the current methods. In terms of user behaviour, machine learning techniques have in recent years been applied to model human behaviour and predict the efficiency of resource usage. The applicability of such techniques to predicting water usage and minimising water wastage in the service industry will be investigated. It is proposed to combine the improved leak detection techniques, data modelling and human behaviour analytics to minimise leaks and wastage of water.
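For the technically curious, the mass flow balancing idea mentioned above can be sketched in a few lines of Python. Everything below (flow rates, meter layout, tolerance) is hypothetical and purely illustrative of the principle, not of the project's methods:

```python
# Sketch: if metered inflow persistently exceeds the sum of metered outflows
# beyond sensor tolerance, flag a likely leak. All numbers are hypothetical.
import numpy as np

def leak_alarm(inflow, outflows, tolerance=0.05, window=24):
    """Flag hours where the rolling mean imbalance exceeds the tolerance.

    inflow:   hourly flow into the network (m^3/h)
    outflows: hourly flows at each metered outlet, shape (n_meters, hours)
    """
    imbalance = inflow - outflows.sum(axis=0)               # unaccounted-for water
    smoothed = np.convolve(imbalance, np.ones(window) / window,
                           mode="same")                     # rolling mean
    return smoothed > tolerance * inflow                    # relative threshold

rng = np.random.default_rng(2)
hours = 240
inflow = 10 + rng.normal(0, 0.1, hours)
outflows = np.stack([inflow * w + rng.normal(0, 0.1, hours)
                     for w in (0.5, 0.3, 0.2)])
outflows[:, 120:] *= 0.92          # from hour 120, 8% of the water goes missing
print(np.flatnonzero(leak_alarm(inflow, outflows))[:5])  # alarms soon after hour 120
```

The rolling mean is what lets slow, persistent leaks stand out from metering noise; the same balance computed hour by hour would be dominated by sensor error.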

We are looking for an excellent candidate with a relevant first degree and a strong Master's degree in electrical/electronic engineering, the physical sciences or related areas relevant to the PhD topic. Experience in sensors, instrumentation and machine learning is advantageous.

The successful applicant will be expected to undertake some teaching commensurate with his/her experience. The student will be principally based at the Kent campus but is also expected to develop a close working relationship with industrial partners and to conduct experimental work on their sites from time to time.

Funding Details:  Funding is available at the home/EU fee rate of £4,121.  There will also be combined maintenance funding of £14,296.

Length of Award: 3 years (PhD)

Eligibility: Open to all applicants (UK students, EU students and international students)

Enquiries: Any enquiries relating to the project should be directed to Professor Yong Yan (y.yan@kent.ac.uk).

Application:  Apply online at https://www.kent.ac.uk/courses/postgraduate/apply-online/262 and select the following:

Programme – PhD in Electronic Engineering

School – School of Engineering and Digital Arts

Research topic – Water leakage detection and wastage monitoring through advanced sensing and data modelling

Deadline:  26 February 2017