
VR paper accepted at the INTERACT conference

Some more good news to share regarding publications. In collaboration with Dr Lex Mauger from Kent Sport and Exercise Sciences, we have been working on how virtual reality (VR) can be used to reduce the pain people feel when exercising. More information about the project can be found here.

Title: How Real is Unreal? Virtual Reality and the Impact of Visual Imagery on the Experience of Exercise-Induced Pain

Abstract. As a consequence of prolonged muscle contraction, acute pain arises during exercise due to a build-up of noxious biochemicals in and around the muscle. Specific visual cues, e.g. the size of the object in weight-lifting exercises, may reduce the acute pain experienced during exercise. In this study, we examined how Virtual Reality (VR) can facilitate this “material-weight illusion”, influencing the perception of task difficulty and thus reducing perceived pain. We found that when vision understated the real weight, the time to exhaustion was 2 minutes longer. Furthermore, participants’ heart rate was significantly lower, by 5-7 bpm, in the understated session. We concluded that visual-proprioceptive information modulated individuals’ willingness to continue exercising for longer, primarily by reducing the intensity of the negative perceptions of pain and effort associated with exercise. This result could inform the design of VR applications aimed at increasing levels of physical activity and thus promoting a healthier lifestyle.

Paper accepted in Nature Scientific Reports

After two years of collaboration with Dr Yeo from VCU, our work on swallowing sensing with skin-like electronics has finally been accepted for publication in Scientific Reports, an open-access Nature journal. More information about the project can be found here.

We are currently working on a second project on wireless chewing monitoring through a mobile app! Watch this space for more updates.

Title: Soft Electronics Enabled Ergonomic Human-Computer Interaction for Swallowing Training

Abstract: We introduce a skin-friendly electronic system that enables human-computer interaction (HCI) for swallowing training in dysphagia rehabilitation. For an ergonomic HCI, we utilize a soft, highly compliant (“skin-like”) electrode, which addresses critical issues of an existing rigid and planar electrode combined with a problematic conductive electrolyte and adhesive pad. The skin-like electrode offers a highly conformal, user-comfortable interaction with the skin for long-term wearable, high-fidelity recording of swallowing electromyograms on the chin. Mechanics modeling and experimental quantification captures the ultra-elastic mechanical characteristics of an open mesh microstructured sensor, conjugated with an elastomeric membrane. Systematic in vivo studies investigate the functionality of the soft electronics for HCI-enabled swallowing training, which includes the application of a biofeedback system to detect swallowing behavior. The collection of results demonstrates clinical feasibility of the ergonomic electronics in HCI-driven rehabilitation for patients with swallowing disorders.

Industry-Sponsored PhD Studentship

Title: Water leakage detection and wastage monitoring through advanced sensing and data modelling

University of Kent – School of Engineering and Digital Arts

Supervisors:  Professor Yong Yan, Dr Jim Ang and Dr Anthony Emeakaroha

Leakage and inadequate use of water cost some organisations a significant amount of money that could be put to better use. For instance, a public service organisation in England consumes an estimated 38.8 million cubic metres of water and generates approximately 26.3 million cubic metres of sewage per year. Water use is also linked to an organisation’s overall carbon footprint through the heating of hot water and the energy consumed in pumping water to the taps. This industry-sponsored PhD research programme aims to apply state-of-the-art leak detection and data modelling techniques to minimise water leaks and wastage.

Two key challenges in water management are unnoticed leakages and human behaviour towards usage. There are various methods for detecting leaks in a water distribution system, yet many leaks, including slow leaks via dripping taps and small holes in underground pipelines, still go undetected. New techniques based on advanced signal processing algorithms, precision mass flow balancing and communication techniques have the potential to overcome the limitations of current methods.

In terms of user behaviour, machine learning techniques have in recent years been applied to model human behaviour and predict the efficiency of resource usage. The applicability of such techniques to predicting water usage and minimising water wastage in the service industry will be investigated. It is proposed to combine improved leak detection techniques, data modelling and human behaviour analytics to minimise leaks and wastage of water.
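To give a flavour of one of the ideas above, precision mass flow balancing boils down to comparing the metered flow into a zone against the sum of the sub-metered flows out of it; a persistent unexplained residual suggests a leak. The sketch below is a deliberately simplified toy (the zone, numbers and tolerance are all hypothetical, not from the project):

```python
# Toy illustration of mass-flow balancing for leak detection.
# All flows are in cubic metres per hour; values are made up.

def leak_residual(inflow, outflows):
    """Unaccounted-for flow: metered inflow minus the sum of sub-metered outflows."""
    return inflow - sum(outflows)

def flag_leak(inflow, outflows, tolerance=0.05):
    """Flag a possible leak when the residual exceeds `tolerance`
    expressed as a fraction of the inflow."""
    return leak_residual(inflow, outflows) > tolerance * inflow

# 100 m^3/h enters a zone, but sub-meters only account for 92 m^3/h:
print(flag_leak(100.0, [40.0, 30.0, 22.0]))  # 8% unaccounted for -> True
```

In practice the residual is noisy, so real systems apply the signal-processing techniques mentioned above rather than a fixed threshold.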

We are looking for an excellent candidate with a relevant first degree and a strong Master’s degree in electrical/electronic engineering, physical sciences or a related area relevant to the PhD topic. Experience in sensors, instrumentation and machine learning is advantageous.

The successful applicant will be expected to undertake some teaching commensurate with his/her experience. The student will be principally based at the Kent campus but is also expected to develop a close working relationship with the industrial partners and to conduct experimental work on their sites from time to time.

Funding Details:  Funding is available at the home/EU fee rate of £4,121.  There will also be combined maintenance funding of £14,296.

Length of Award: 3 years (PhD)

Eligibility: Open to all applicants (UK students, EU students and international students)

Enquiries:  Any enquiries relating to the project should be directed to Professor Yong Yan y.yan@kent.ac.uk.

Application:  Apply online at https://www.kent.ac.uk/courses/postgraduate/apply-online/262 and select the following:

Programme – PhD in Electronic Engineering

School – School of Engineering and Digital Arts

Research topic – Water leakage detection and wastage monitoring through advanced sensing and data modelling

Deadline:  26 February 2017

Hololens and its future

Whilst in Parma giving a talk at the Spatial Audio research meeting, I had a chance to play with the Microsoft Hololens! In case you have not been following the tech news in the past few years, Hololens is an Augmented Reality (AR) headset (not to be confused with Virtual Reality) which blends digital information (e.g. virtual objects, graphical user interfaces, etc.) into the physical environment, allowing users to experience a “mixed reality” world.

The idea of AR is not new, but two things really got me excited about Hololens: 1) the hands-free AR experience, allowing users to use both hands to interact with the mixed reality world, and 2) its ability to scan and understand the physical space.

Having tried it, I can say something about these two points. The hands-free experience is well executed, to a certain extent: I could point, pinch and swipe digital information quite smoothly. However, the trade-off is that you now have to wear a whole computer on your face, and it is not pleasant at all! In fact, even within a few minutes my face and head hurt, and I cannot imagine using it for more than 30 minutes.

The second point, Hololens’s “spatial understanding” of the physical environment, is what sets Hololens apart from other AR applications (e.g. Pokemon Go), and it is what makes AR more technologically challenging than VR. Hololens is able to scan a physical room and map that information digitally in 3D. This is very important, as having this digital spatial information allows Hololens to superimpose digital content more precisely onto the physical world. For instance, if we were to play Pokemon Go on Hololens, the Pokemon would appear on a surface in the room, e.g. on top of a table, instead of just at the centre of the screen regardless of where in the room the user is looking!
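To make this concrete, here is a toy sketch of what a spatial map enables: the system casts a ray along the user’s gaze and anchors a virtual object where the ray first hits a real surface. The coordinates below are made up, and a real headset intersects against a full scanned mesh rather than a single plane — this is just the geometry at its simplest:

```python
# Toy sketch: anchoring a virtual object on a real surface by
# intersecting the user's gaze ray with a plane from the spatial map.
# All positions are hypothetical, in metres.

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the 3D point where a gaze ray hits a plane, or None if it misses."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None  # gaze is parallel to the surface
    t = sum((p - o) * n for p, o, n in zip(plane_point, origin, plane_normal)) / denom
    if t < 0:
        return None  # surface is behind the user
    return tuple(o + t * d for o, d in zip(origin, direction))

# Head at 1.6 m height, looking forward and slightly down;
# a table top (facing up) sits at 0.8 m height.
hit = ray_plane_intersection(
    origin=(0.0, 1.6, 0.0),
    direction=(0.0, -0.5, 1.0),
    plane_point=(0.0, 0.8, 0.0),
    plane_normal=(0.0, 1.0, 0.0),
)
print(hit)  # the virtual Pokemon would be anchored at this point on the table
```

Without the spatial map there is no plane to intersect with, which is exactly why screen-overlay AR like Pokemon Go cannot pin content to the room.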

At its current price (US$3,000), I don’t imagine any casual consumer jumping on it just yet, especially given other well-known problems such as the small field of view and short battery life.

I am, however, very excited by the future of this kind of wearable, spatially-aware AR! In fact, we are currently working with a company called Daqri, which manufactures a similar AR wearable device used in industrial environments.

End of 2016

Let’s just say 2016 has been an eventful year, both in my personal and professional life and in the global political sphere, and I don’t mean that in a good way! In any case, I am glad that we are now bidding 2016 goodbye. Never before have I felt so eager to be rid of a year, but 2017, here I come.

Well, at least the NIHR (National Institute for Health Research) project I was involved in some years ago has finally produced some results. This is the abstract of the newly published open-access paper in Health Technology Assessment:

Background

This report details the development of the Men’s Safer Sex website and the results of a feasibility randomised controlled trial (RCT), health economic assessment and qualitative evaluation.

Objectives

(1) Develop the Men’s Safer Sex website to address barriers to condom use; (2) determine the best design for an online RCT; (3) inform the methods for collecting and analysing health economic data; (4) assess the Sexual Quality of Life (SQoL) questionnaire and European Quality of Life-5 Dimensions, three-level version (EQ-5D-3L) to calculate quality-adjusted life-years (QALYs); and (5) explore clinic staff and men’s views of online research methodology.

Methods

(1) Website development: we combined evidence from research literature and the views of experts (n = 18) and male clinic users (n = 43); (2) feasibility RCT: 159 heterosexually active men were recruited from three sexual health clinics and were randomised by computer to the Men’s Safer Sex website plus usual care (n = 84) or usual clinic care only (n = 75). Men were invited to complete online questionnaires at 3, 6, 9 and 12 months, and sexually transmitted infection (STI) diagnoses were recorded from clinic notes at 12 months; (3) health economic evaluation: we investigated the impact of using different questionnaires to calculate utilities and QALYs (the EQ-5D-3L and SQoL questionnaire), and compared different methods to collect resource use; and (4) qualitative evaluation: thematic analysis of interviews with 11 male trial participants and nine clinic staff, as well as free-text comments from online outcome questionnaires.

Results

(1) Software errors and clinic Wi-Fi access presented significant challenges. Response rates for online questionnaires were poor but improved with larger vouchers (from 36% with £10 to 50% with £30). Clinical records were located for 94% of participants for STI diagnoses. There were no group differences in condomless sex with female partners [incidence rate ratio (IRR) 1.01, 95% confidence interval (CI) 0.52 to 1.96]. New STI diagnoses were recorded for 8.8% (7/80) of the intervention group and 13.0% (9/69) of the control group (IRR 0.75, 95% CI 0.29 to 1.89). (2) Health-care resource data were more complete using patient files than questionnaires. The probability that the intervention is cost-effective is sensitive to the source of data used and whether or not data on intended pregnancies are included. (3) The pilot RCT fitted well around clinical activities but 37% of the intervention group did not see the Men’s Safer Sex website and technical problems were frustrating. Men’s views of the Men’s Safer Sex website and research procedures were largely positive.

Conclusions

It would be feasible to conduct a large-scale RCT using clinic STI diagnoses as a primary outcome; however, technical errors and a poor response rate limited the collection of online self-reported outcomes. The next steps are (1) to optimise software for online trials, (2) to find the best ways to integrate digital health promotion with clinical services, (3) to develop more precise methods for collecting resource use data and (4) to work out how to overcome barriers to digital intervention testing and implementation in the NHS.

https://www.journalslibrary.nihr.ac.uk/hta/hta20910/#/abstract
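As a quick sanity check on the raw STI counts quoted above (7/80 in the intervention group vs 9/69 in the control group), the snippet below computes a crude risk ratio with a log-scale Wald confidence interval. Note this is a back-of-the-envelope calculation only: the paper reports an incidence rate ratio (0.75, 95% CI 0.29 to 1.89) estimated differently, so the figures will not match exactly.

```python
# Crude (unadjusted) risk ratio with a log-scale Wald 95% CI,
# computed from the raw event counts quoted in the abstract.
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs group B, with a Wald CI on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

rr, lo, hi = risk_ratio(7, 80, 9, 69)
print(f"crude RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

As with the published estimate, the wide interval straddles 1, which is why the trial was a feasibility study rather than a definitive test of effectiveness.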

Holiday over, and newly accepted paper

So the summer holiday is over, and I am finally back to the normal routine of teaching, publishing and proposal-ing.

To start the new term with something positive, we recently had a paper accepted in ACM Transactions on Internet Technology. It is a side project I have been working on for the past two years, and I am glad that it has finally been accepted for publication!

Show Me You Care: Trait Empathy, Linguistic Style and Mimicry on Facebook

Linguistic mimicry, the adoption of another’s language patterns, is a subconscious behavior with pro-social benefits. However, some professions advocate its conscious use in empathic communication. This involves mutual mimicry; effective communicators mimic their interlocutors, who also mimic them back. Since mimicry has often been studied in face-to-face contexts, we ask whether individuals with empathic dispositions have unique communication styles and/or elicit mimicry in mediated communication on Facebook. Participants completed Davis’ Interpersonal Reactivity Index and provided access to Facebook activity. We confirm that dispositional empathy is correlated to the use of particular stylistic features. In addition, we identify four empathy profiles and find correlations to writing style. When a linguistic feature is used, this often “triggers” use by friends. However, the presence of particular features, rather than participant disposition, best predicts mimicry. This suggests that machine-human communications could be enhanced based on recently used features, without extensive user profiling.

Summer travel plan 2016

Whew, the 2015-16 academic year has finally come to an end, which means summer is upon us! As usual, students ask me: so where are you going on holiday for the whole summer? Well, no, academics do not get summer holidays the way students do, and we really need to explain this, lest they think we are just off somewhere sunny doing nothing for two months! …. unless we are?

Anyway, unusually for me, I will be travelling quite a bit this summer, mainly (ehem) for work!

a) June: Royal College of Surgeons in London to discuss possible collaboration on the use of an Augmented Reality helmet for field surgery

b) July: UCL, London to discuss Internet of Educational Things

c) July: Dubrovnik, Croatia to attend a summer school in HCI methods

d) July/August: Limassol, Cyprus to give a seminar and to write a grant proposal

e) Sep: Malaysia (yeah) for a holiday

f) Sep: Ghent University, Belgium to discuss potential collaboration in computational photonics

I am already dreading the start of the new academic year 🙂

Machine learning in high-throughput imaging enabled label-free cell screening

Recently, I have been exploring potential collaborations with my colleagues at Kent, Dr Chao Wang and Dr Farzin Deravi, in the area of bio-photonics. At the same time, a friend of mine took up a job as manager of the Centre for Nano- and Biophotonics at Ghent University. He pointed out that there is a travel grant scheme to encourage Kent-Ghent collaboration. One thing led to another, and we have now been awarded a small travel grant to explore how machine learning techniques can be used to analyse cell images.

The proposed project aims to integrate machine learning based feature extraction with high-throughput cell imaging technologies for high-precision, label-free cell screening. The project will bring together complementary expertise and allow both the Kent and Ghent teams to explore new research opportunities in the interdisciplinary fields of biophotonics, big data photonics and computational photonics. It is anticipated that this collaborative project will lead to a long-term collaboration between the groups, underpinned by Kent’s policy as the European University.
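For readers unfamiliar with the idea, “feature extraction plus classification” on cell images can be sketched in a few lines. The tiny example below (pure Python, with a made-up 3×3 “image”, invented features and invented class centroids — nothing here is from the actual project pipeline) extracts two simple morphological features and assigns the nearest class:

```python
# Toy sketch of label-free cell screening: extract simple features from
# an intensity image, then classify by nearest feature centroid.
# Image, features and classes are all illustrative.

def features(image, threshold=0.5):
    """Return (area, mean intensity) of pixels brighter than `threshold`."""
    bright = [px for row in image for px in row if px > threshold]
    if not bright:
        return (0, 0.0)
    return (len(bright), sum(bright) / len(bright))

def classify(feat, centroids):
    """Return the class label whose centroid is nearest in feature space."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(feat, centroids[label]))

# Hypothetical centroids learned from labelled example cells:
centroids = {"healthy": (4, 0.7), "abnormal": (9, 0.9)}

cell = [[0.1, 0.8, 0.9],
        [0.2, 0.7, 0.6],
        [0.1, 0.2, 0.1]]

print(classify(features(cell), centroids))
```

Real high-throughput pipelines work with far richer features (and often learned ones), but the extract-then-classify structure is the same.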

I realise this is not the usual kind of research I do, but I am pretty excited to learn something new and potentially fun!

How effective is VR in eliciting human emotions?

A psychology colleague, Dr Mario Weick, and I have just been awarded a small grant (~£5,000) from the Faculty of Social Sciences to carry out some pilot work using virtual reality (VR) to elicit human emotions. Specifically, we aim to develop a repository of VR stimuli that researchers can use to elicit different affective experiences (feelings and emotions) in people. Currently, the main source of such stimuli is the International Affective Picture System (IAPS). We believe that 360-degree VR videos could be a more effective way to elicit emotions.

This cross-School project will develop 360-degree videos designed and validated to elicit affective experiences, covering all four quadrants of the valence-arousal space.
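For anyone unfamiliar with the four quadrants: affective stimuli are conventionally placed in a two-dimensional valence-arousal (circumplex) space. A minimal helper for tagging stimuli might look like the sketch below (the quadrant labels are illustrative, not the project’s actual coding scheme):

```python
# Minimal sketch of tagging stimuli by affective quadrant in the
# standard valence-arousal (circumplex) space; labels are illustrative.

def affect_quadrant(valence, arousal):
    """Map (valence, arousal), each in [-1, 1], to one of four quadrants."""
    if valence >= 0:
        return "excited/happy" if arousal >= 0 else "calm/relaxed"
    return "angry/afraid" if arousal >= 0 else "sad/bored"

print(affect_quadrant(0.7, 0.8))    # positive valence, high arousal
print(affect_quadrant(-0.6, -0.4))  # negative valence, low arousal
```

A validated repository would attach mean valence and arousal ratings to each 360-degree video, much as IAPS does for its pictures.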

Currently, I have a visiting researcher from Malaysia (UiTM), Dr Emma Nuraihan Mior Ibrahim, who is interested in using 360-degree VR video to elicit inter-group empathic feelings.

More information to follow as the studies in VR progress.

Virtual Reality coming of age?

[Figure: Samsung Gear VR headset]

Recently, there has been a lot of buzz in the tech world about a decades-old technology: Virtual Reality (VR). Big players the likes of Facebook, Google and Samsung are trying to convince us that VR is finally ready for the masses. But is it? I have a few PhD/MSc students wanting to do their research on VR and healthcare, so I have been exploring a few popular options, including the Oculus Rift (developer versions 1 & 2), Google Cardboard and Samsung Gear VR. So how good are they?

Let’s start with Oculus. Oculus Dev version 1 was good for what it was – one of the first low-cost consumer VR headsets. The resolution was not great, and one could literally count the pixels on screen. It also had high latency, meaning there was a noticeable delay between controller input (e.g. rotating your head) and the screen update. This results in dizziness, especially in VR experiences with fast motion.
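To put rough numbers on why latency makes people dizzy: the apparent misalignment is roughly the head’s angular velocity multiplied by the motion-to-photon latency, and headsets hide it by predicting the head pose slightly ahead of time. The figures below are illustrative, not measurements of any particular headset:

```python
# Toy illustration of motion-to-photon latency in VR.
# If the display lags the head by `latency_ms` milliseconds while the
# head turns at `omega_deg_s` degrees/second, the image is misaligned
# by their product; constant-velocity prediction can cancel most of it.

def angular_error(omega_deg_s, latency_ms):
    """Apparent misalignment caused by latency, in degrees."""
    return omega_deg_s * latency_ms / 1000

def predicted_yaw(yaw_deg, omega_deg_s, latency_ms):
    """Simple constant-velocity head-pose prediction used to hide latency."""
    return yaw_deg + omega_deg_s * latency_ms / 1000

# A brisk 120 deg/s head turn at two hypothetical latencies:
print(angular_error(120, 60))  # a sluggish 60 ms pipeline
print(angular_error(120, 20))  # a snappier 20 ms pipeline
```

Several degrees of error on every head turn is very noticeable, which is why shaving latency from tens of milliseconds down to around 20 ms makes such a difference to comfort.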

Oculus Dev version 2 is a significant improvement. The resolution is much better, and they seem to have brought the latency down to an acceptable level. Even when playing a relatively fast-paced game, I didn’t feel dizzy at all. However, the setup is a pain, with lots of cables and an external head tracker. Like version 1, the headset needs to be attached to a desktop/laptop.

Google Cardboard is probably the easiest one to get into. You only need a smartphone (iOS or Android) and the cheap-looking cardboard headset (costing around £4-£15). It works quite well, but it is not as immersive as the Oculus or the Samsung Gear VR. It is good if you want to experience what the VR hype is about but are not willing to shell out big money.

I received a Samsung Gear VR (see figure) just yesterday and have been experimenting with it. I am quite impressed by the ergonomics and ease of use; Samsung really got almost everything right for a mobile VR headset. It works pretty much like Cardboard: you slot in your Samsung phone (only certain high-end Samsung phones are supported currently) and it just works right away – no cables! The resolution is as good as Oculus ver 2! The user interface is beautifully designed, and I am blown away by some of the 360 stereoscopic videos one can get for free from the VR store. There is, however, a huge problem with latency. Whilst the resolution is as good as Oculus ver 2, the latency is as bad as Oculus ver 1! I got very sick after 1-2 minutes of playing an action game where one runs around inside human cells. It is a real shame. Perhaps with newer phones (like the S7) this problem will go away? We will see. For now, the Gear VR is still good for 360 videos and other VR experiences with slow or no motion.