Paper on VR and Brain Machine Interaction accepted

Our latest paper “Wireless Soft Scalp Electronics and Virtual Reality System for Motor Imagery-based Brain-Machine Interfaces” has been accepted to Advanced Science. This is the output of an ongoing collaboration with Prof Yeo from Georgia Tech.

Abstract: Motor imagery offers an excellent opportunity as a stimulus-free paradigm for brain-machine interfaces. Conventional electroencephalography (EEG) for motor imagery requires a hair cap with multiple wired electrodes and messy gels, causing motion artifacts. Here, we introduce a wireless scalp electronic system with virtual reality for real-time, continuous classification of motor imagery brain signals. This low-profile, portable system integrates imperceptible microneedle electrodes and soft wireless circuits. Virtual reality addresses subject variance in detectable EEG response to motor imagery by providing clear, consistent visuals and instant biofeedback. The wearable soft system offers advantageous contact surface area and reduced electrode impedance density, resulting in significantly enhanced EEG signals and classification accuracy. The combination with convolutional neural network-machine learning provides a real-time, continuous motor imagery-based brain-machine interface. With four human subjects, the scalp electronic system offers a high classification accuracy (93.22±1.33% for four classes), allowing wireless, real-time control of a virtual reality game. 
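
The real-time, continuous classification described in the abstract can be pictured as a sliding-window loop over a stream of multi-channel EEG samples. Below is a minimal sketch in plain Python; the window and hop sizes are illustrative, and `classify` is a stand-in for the paper's convolutional neural network, which is not reproduced here:

```python
from collections import deque

# Hypothetical parameters -- the paper's actual window length,
# sampling rate and CNN are not reproduced here.
WINDOW = 8          # samples per classification window (illustrative)
STEP = 4            # hop between consecutive classifications
CLASSES = ["left hand", "right hand", "both hands", "feet"]

def classify(window):
    """Stand-in for the CNN: picks the class whose index matches the
    channel with the largest mean amplitude (illustrative only)."""
    means = [sum(ch) / len(ch) for ch in zip(*window)]
    return CLASSES[means.index(max(means)) % len(CLASSES)]

def stream_classify(samples):
    """Slide a window over streaming samples, emitting one label
    every STEP samples once the buffer is full."""
    buf, labels = deque(maxlen=WINDOW), []
    for i, sample in enumerate(samples, 1):
        buf.append(sample)
        if len(buf) == WINDOW and i % STEP == 0:
            labels.append(classify(list(buf)))
    return labels
```

The key design point is that classification is continuous: each new label uses the most recent window of samples, which is what allows the system to drive a VR game in real time.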

In the press:

  • Georgia Tech News
  • Psychology Today

    Collaborator in an ERC grant on conspiracy theory social media analysis

    I am pleased to be a collaborator on a recently funded ERC grant led by Prof Karen Douglas (Kent Psychology) on the topic of conspiracy theories. The project will focus on when and how conspiracy theories affect the decisions and wellbeing of individuals and society, and what people can gain and lose from spreading conspiracy theories.

    My contribution will be in using a deep learning approach to natural language processing to automatically analyse social media datasets to better understand the relationships between conspiracy communication patterns and the longevity of conspiracy theories.

    Paper in CHI 2021

    Some more good news for 2021! Our paper, “Living Memory Home: Understanding Continuing Bond in the Digital Age through Backstage Grieving”, has been accepted for CHI 2021. The paper focuses on how people cope with the passing of a loved one via a private digital platform, as opposed to a public space like social media. It is a collaboration with Cornell University and Kyoto Institute of Technology.

    Abstract: Prolonged Grief Disorder (PGD) is a condition in which mourners are stuck in the grief process for a prolonged period and continue to suffer from an intense, maladaptive level of grief. Despite the increased popularity of virtual mourning practices, and subsequently the emergence of HCI research in this area, there is little research looking into how continuing bonds maintained digitally promote or impede bereavement adjustment. Through a one-month diary study and in-depth interviews with 17 participants who had recently lost their loved ones, we identified four broad mechanisms of how grievers engage in what we call “backstage” grieving (as opposed to bereavement through digital public spaces like social media). We further discuss how this personal and private grieving is important in maintaining emotional well-being and hence in avoiding the development of PGD, as well as possible design opportunities and challenges for future digital tools to support grieving.

    Press release:

    First paper in 2021

    Our first paper of 2021 is out! We developed a web-based platform to enable crowdsourced annotation of cell biology image data; the platform is powered by semi-automated AI assistance that helps non-expert annotators improve annotation efficiency in a specialised domain.

    The paper, entitled “A crowdsourcing semi-automatic image segmentation platform for cell biology”, has been published in the journal Computers in Biology and Medicine.

    Abstract: State-of-the-art computer-vision algorithms rely on big and accurately annotated data, which are expensive, laborious and time-consuming to generate. This task is even more challenging when it comes to microbiological images, because they require specialized expertise for accurate annotation. Previous studies show that crowdsourcing and assistive-annotation tools are two potential solutions to address this challenge. In this work, we have developed a web-based platform to enable crowdsourcing annotation of image data; the platform is powered by a semi-automated assistive tool to support non-expert annotators and improve annotation efficiency. The behavior of annotators with and without the assistive tool is analyzed, using biological images of different complexity. More specifically, non-experts have been asked to use the platform to annotate microbiological images of gut parasites, and their annotations are compared with annotations by experts. A quantitative evaluation of the results confirms that the assistive tool can noticeably decrease the cost of non-expert annotation (time, clicks, interactions, etc.) while preserving or even improving annotation quality. The annotation quality of non-experts has been investigated using IoU (intersection over union), precision and recall; based on this analysis we propose some ideas on how to better design similar crowdsourcing and assistive platforms.
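
    The annotation-quality metrics named in the abstract (IoU, precision, recall) can be computed directly from binary segmentation masks. This is a minimal sketch, not the paper's actual evaluation pipeline:

```python
def mask_metrics(pred, truth):
    """Compute IoU, precision and recall for two binary masks,
    given as flat sequences of 0/1 pixel labels."""
    tp = sum(p and t for p, t in zip(pred, truth))      # pixels both marked
    fp = sum(p and not t for p, t in zip(pred, truth))  # predicted only
    fn = sum(t and not p for p, t in zip(pred, truth))  # ground truth only
    iou = tp / (tp + fp + fn) if tp + fp + fn else 1.0
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return iou, precision, recall
```

    For example, comparing a non-expert mask against an expert mask pixel by pixel gives one (IoU, precision, recall) triple per image, which can then be averaged across annotators.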

    Paper: Towards image‑based cancer cell lines authentication using deep neural networks

    Our work on using deep learning for cancer cell line authentication has been published in Scientific Reports. This is a collaboration with Kent Biosciences. We used transfer learning to train a convolutional neural network to classify different cancer cell lines. Our model is able to differentiate the cell lines from their drug-treated counterparts, a task even human biologists are unable to do. The paper is available online.

    cancer cell images
    Fig 1. Sample cancer cell images: top row, parental cells; bottom row, drug-treated counterparts.

    A 3-minute video summary of the paper:

    Abstract: Although short tandem repeat (STR) analysis is available as a reliable method for the determination of the genetic origin of cell lines, the occurrence of misauthenticated cell lines remains an important issue. Reasons include the cost, effort and time associated with STR analysis. Moreover, there are currently no methods for the discrimination between isogenic cell lines (cell lines of the same genetic origin, e.g. different cell lines derived from the same organism, clonal sublines, or sublines adapted to grow under certain conditions). Hence, additional complementary, ideally low‑cost and low‑effort methods are required that enable (1) the monitoring of cell line identity as part of the daily laboratory routine and (2) the authentication of isogenic cell lines. In this research, we automate the process of cell line identification by image‑based analysis using deep convolutional neural networks. Two different convolutional neural network models (MobileNet and InceptionResNet V2) were trained to automatically identify four parental cancer cell lines (COLO 704, EFO‑21, EFO‑27 and UKF‑NB‑3) and their sublines adapted to the anti‑cancer drugs cisplatin (COLO‑704rCDDP1000, EFO‑21rCDDP2000, EFO‑27rCDDP2000) or oxaliplatin (UKF‑NB‑3rOXALI2000), resulting in an eight‑class problem. Our best performing model, InceptionResNet V2, achieved an average F1‑score of 0.91 under tenfold cross-validation, with an average area under the curve (AUC) of 0.95, on the eight‑class problem. On two separate four‑class problems, authenticating the four parental cell lines and the respective drug‑adapted cells through classification, our best model achieved average F1‑scores of 0.94 and 0.96, respectively.
    These findings provide the basis for further development of image-based deep learning into a readily available, easy‑to‑use methodology that enables routine monitoring of the identity of cell lines, including isogenic cell lines. It should be noted that this is a proof of principle that images can also be used for the authentication of cancer cell lines, not a replacement for the STR method.
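
    The average F1-scores reported in the abstract are derived from per-class F1-scores. As a minimal sketch of how a macro-averaged F1 is computed from a confusion matrix (the matrix below is made up, not the paper's data):

```python
def macro_f1(confusion):
    """Macro-averaged F1 from a square confusion matrix, where
    confusion[i][j] = count of class-i samples predicted as class j."""
    n = len(confusion)
    f1s = []
    for c in range(n):
        tp = confusion[c][c]                              # correct predictions
        fp = sum(confusion[r][c] for r in range(n)) - tp  # column minus diagonal
        fn = sum(confusion[c]) - tp                       # row minus diagonal
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / n  # unweighted mean over classes
```

    For the paper's setting, the confusion matrix would be 8×8 (four parental lines plus four drug-adapted sublines), with one such score computed per fold of the tenfold cross-validation.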

    Paper accepted: design reflection on VR for healthcare

    Our paper, “A Reflection on Virtual Reality Design for Psychological, Cognitive & Behavioral Interventions: Design Needs, Opportunities & Challenges”, has been accepted in the International Journal of Human-Computer Interaction. It is a collaboration with Kyoto Institute of Technology and Cornell University. We reflected on the various VR healthcare projects we have carried out over the past few years and discussed their co-design processes, highlighting design challenges and opportunities for future work.

    Abstract: Despite the substantial research interest in using Virtual Reality (VR) in healthcare in general and in Psychological, Cognitive, and Behavioral (PC&B) interventions in particular, as well as emerging research supporting the efficacy of VR in healthcare, the design process of translating therapies into VR to meet the needs of critical stakeholders such as users and clinicians is rarely addressed. In this paper, we aim to shed light on the design needs, opportunities and challenges in designing efficient and effective PC&B-VR interventions. Through analyzing the co-design processes of four user-centered PC&B-VR interventions, we examined how therapies were adapted into VR to meet stakeholders’ requirements, explored design elements for meaningful experiences, and investigated how the understanding of healthcare contexts contributes to VR intervention design. This paper presents the HCI research community with design opportunities and challenges as well as future directions for PC&B-VR intervention design.

    Call for papers: IJHCS Special Issue on Advances in Human-Centred Dementia Technology

    [Deadlines extended due to difficulty in carrying out research in dementia during the pandemic]

    My collaborators and I will be editing a special issue for the International Journal of Human-Computer Studies on Advances in Human-Centred Dementia Technology. We welcome papers on novel and emerging technologies with a human-centred perspective.


    There are approximately 46 million people living with dementia worldwide, and this number is expected to rise to 102 million by 2040. Dementia is a syndrome of progressive and irreversible cognitive decline affecting daily functioning, commonly caused by Alzheimer’s disease or vascular dementia. With a complex array of symptoms, people with dementia progressively lose their sense of autonomy, including engagement in activities of daily living and the capacity to make decisions. Emerging research in HCI has begun to examine various uses of technology to aid and assist people with dementia and improve their wellbeing, such as interventions for training, assistive technology, cognitive assessment, or reminiscence. For instance, immersive technologies such as virtual reality have been explored as tools for training and rehabilitation for individuals with early or mild dementia. There is also emerging interest in using smart, AI-powered devices to help better manage the condition.

    This Special Issue calls for high-quality, multidisciplinary research papers in the fields of human-centred computing, engineering and the life sciences which aim to address issues associated with dementia through novel technologies. We are interested in submissions focusing on emerging technologies to support dementia care, diagnosis, early intervention, prevention, and monitoring, among others. Submissions must address these issues from a human-centred perspective. Manuscripts must be original, but significant revisions of papers recently presented at conferences and workshops will be considered (at least 50% new material). Topics of interest include but are not limited to the following:

    • Wearable and sensing technologies for monitoring and timely intervention.
    • Artificial intelligence and machine learning for data analytics to improve care and management. 
    • Novel applications of virtual and augmented reality for dementia care. 
    • Technology for cognitive training and prevention. 
    • Communication technologies to alleviate social isolation and loneliness. 
    • Robots for assistive care and companionship.

    Revised Schedule

    Manuscript submission due: 15 September, 2021

    First round notification: 15 November, 2021

    Revised manuscript due: 15 January, 2022

    Final decision made: 15 March, 2022

    Final revision due: 15 April, 2022

    Expected publication date: Throughout 2021-22

    Guest editors

    Chee Siang (Jim) Ang, University of Kent, UK

    Panote Siriaraya, Kyoto Institute of Technology, Japan

    Eiman Kanjo, Nottingham Trent University, UK

    Francesca Falzarano, Cornell University, USA

    Holly Prigerson, Cornell University, USA

    Submission instructions

    Please refer to the following for submission guides:

    When submitting your manuscript please select the article type “HCDT”. Please submit your manuscript before the submission deadline (15 September 2021). All submissions deemed suitable to be sent for peer review will be reviewed by at least two independent reviewers. Once your manuscript is accepted, it will go into production and will be simultaneously published in the current regular issue and pulled into the online Special Issue.

    Paper on Multi-user VR for remote eating disorder therapy

    Our paper, “‘Now I can see me’: designing a multi-user virtual reality remote psychotherapy for body weight and shape concerns”, has been published in the Journal of Human Computer Interaction. In this study, we iteratively co-designed a multi-user virtual reality system with therapists and representative users, and tested the system with 14 young women at risk of eating disorders and 7 therapists. In the post-COVID-19 world, perhaps this form of remote immersive system might offer a way to deliver therapy for mental health problems?

    Screenshots of the VR system

    Abstract: Recent years have seen a growing research interest in designing computer-assisted health interventions aiming to improve mental health services. Digital technologies are becoming common methods for diagnosis, therapy, and training. With the advent of lower-cost VR head-mounted displays (HMDs) and high internet data-transfer capacity, there is a new opportunity for applying immersive VR tools to augment existing interventions. This study is among the first to explore the use of a Multi-User Virtual Reality (MUVR) system as a therapeutic medium for participants at high risk of developing Eating Disorders. This paper demonstrates the positive effect of using MUVR remote psychotherapy to enhance traditional therapeutic practices. The study capitalises on the opportunities offered by a MUVR remote psychotherapeutic session to enhance the outcome of Acceptance and Commitment Therapy, Play Therapy and Exposure Therapy for sufferers with body shape and weight concerns. Moreover, the study presents the design opportunities and challenges of such technology, while highlighting strengths in the feasibility and the positive user acceptability of introducing MUVR to facilitate remote psychotherapy. Finally, the appeal of using VR for remote psychotherapy and its observed positive impact on both therapists and participants is discussed.