Improving doctor-patient communication using VR

Introduction

Communication is the most important part of the doctor-patient relationship. Patients need not only medication but also empathy from their doctors to heal quickly. With human contact minimized during pandemic lockdowns, doctors in training, i.e., MBBS students, face a major challenge in learning to communicate with patients effectively and with empathy.

This can be alleviated by developing responsive, interactive agents that are visually simulated in VR and serve as counterparts on which trainee doctors can practise their communication skills.

The agents should be able to see and hear the doctor, and respond (in synthesized speech and behaviour) according to programmed personality types and maladies. The agent behaviour can be modelled using various behavioural animation and machine learning techniques. The interaction of the doctor with the virtual agent can be monitored/recorded, and this can be analysed by experienced doctors to give the trainee doctor feedback on how to improve their communication skills. Such agents are known as Emotional Conversational Agents (ECAs) in the literature.
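The perceive-respond-record loop described above can be sketched minimally as follows. All class names, field names, and canned behaviours here are hypothetical illustrations, not part of the call; a real system would use speech recognition, dialogue management, and speech synthesis in place of the stubs.

```python
from dataclasses import dataclass, field

@dataclass
class PatientProfile:
    """Hypothetical personality/malady parameters for a virtual patient."""
    name: str
    personality: str                       # e.g. "anxious", "withdrawn"
    complaint: str                         # presenting complaint
    transcript: list = field(default_factory=list)

class VirtualPatient:
    """Toy ECA: perceives the doctor's utterance, responds according to
    its profile, and records the exchange for later expert review."""

    def __init__(self, profile: PatientProfile):
        self.profile = profile

    def respond(self, doctor_utterance: str) -> str:
        # Stand-in for the real dialogue/behaviour model.
        if self.profile.personality == "anxious":
            reply = f"I'm worried... my {self.profile.complaint} won't go away."
        else:
            reply = f"It's my {self.profile.complaint}, doctor."
        # Record the exchange so experienced doctors can review it offline.
        self.profile.transcript.append((doctor_utterance, reply))
        return reply

patient = VirtualPatient(PatientProfile("P1", "anxious", "chest pain"))
print(patient.respond("Hello, what brings you in today?"))
```

The recorded transcript is what would later be archived for expert feedback to the trainee.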

Since a project like this can address many kinds of situations and doctors at many different stages of medical education, we need to narrow the scope of the problem to begin with.

 

Scope of the problem

The scope of the doctor-patient communication problem that this project will aim to address is the following:

  1. The simulation will centre around a first interaction between a doctor and a patient, in which the doctor asks the patient for consent to an examination.
  2. Variety in this situation can be created by varying the complaints the patient presents with, as well as the patient's personality and behaviour.
  3. Preloaded disease modules mandated by the NMC UG curriculum will be available.
  4. To begin with, the simulations will be restricted to adult patients (not children) of both genders.
  5. The virtual patients will be visually simulated with appropriate environments. The system will monitor the doctor’s speech, facial expressions and interactive responses within the system and adapt accordingly.
  6. The aim should be to develop the backend agent simulation framework that responds to various inputs and layer the consent scenario on it as a frontend starting example.
  7. Extensions to the backend include modeling memory for the agents to simulate repeated visits to the doctor, and modeling child agents.
  8. Extensions to the frontend would entail more complicated scenarios with the addition of more complex symptoms, diagnosis, and medical conditions and interactions.

 

Technical Challenges

The project will have two broad categories of challenges:

  1. Medical challenges
    • Creating patient types, with their associated symptoms, physiological conditions, medication details and case histories. This will require inputs from collaborating medical teams and doctors to tailor the solution to medical students in the early stages of their education.
    • Developing checklists for quick evaluation of the actions performed by doctors during interactions with the virtual patient agents.
    • Developing feedback mechanisms for trainee doctors based on their recorded interactions with the agents.
    • Test-running the developed solution and giving constructive feedback to the engineering teams for improvements and changes. Institutional-level, short-term research projects may be conducted to gather constructive feedback from the target end users.
  2. Engineering challenges
    • Develop ECAs to simulate the various patient types
      • Facial and full-body animation to simulate emotion and behaviour. Customise the modelled behaviours to Indian settings.
      • Parse and respond to the doctor’s speech and actions.
      • Speech synthesis for the patient agents.
      • Ability to support communication in local languages.
    • Model assets and environments to populate these simulations.
    • Implement and support the scenarios envisioned by the collaborating medical teams.
      • A (visual) scripting interface can be built that can enable the scripting of these scenarios by changing the various parameters offered by the system.
      • Give the trainee doctor visual feedback during the simulation on how they are performing.
      • Ability to record and archive the interaction for offline feedback from experts.
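The (visual) scripting interface mentioned above could, for example, emit a declarative scenario description that the simulation backend consumes. The sketch below assumes such a design; every field name (patient attributes, module identifiers, language codes) is illustrative only and not prescribed by this call.

```python
import json

# Hypothetical scenario description a scripting interface might produce.
scenario = {
    "scenario": "first-visit-consent",
    "patient": {"age": 42, "gender": "female", "personality": "withdrawn"},
    "disease_module": "NMC-UG/hypertension",   # preloaded module id (assumed)
    "language": "hi-IN",                        # local-language support
    "monitoring": {"record_session": True, "live_feedback": True},
}

def validate(cfg: dict) -> bool:
    """Check that the minimal fields a simulation backend would need
    are present in the scenario description."""
    required = {"scenario", "patient", "disease_module", "language"}
    return required.issubset(cfg)

assert validate(scenario)
print(json.dumps(scenario, indent=2))
```

Keeping scenarios declarative like this would let medical partners vary the patient, complaint, and language without touching the agent simulation code.
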
Participation Requirement

Participation is invited from entities with technical competence and experience in anatomy, image segmentation, modeling and rendering. Here are specific details:

 

Who can submit a proposal under the call?
  • Proposals can be submitted by principal investigators (PIs) and co-investigators (co-PIs) belonging to:
    • Indian industry (big industry or startups)
    • Indian academia
    • Indian hospitals
      Teams may include investigators from one, two, or all three categories above. Expertise in all parts of the proposed project should be represented in the Team.
    • Faculty of academic institutes
    • Scientists from Research Laboratories
    • Individual experts who can implement the project in an institution/SIRO-recognized lab, etc.
    • Startups & MSMEs with established credibility
    • The CoP can also be addressed to a person/entity of acknowledged success in the problem area (based on past work)
  • All PI and Co-PI details have to be present in the proposal at the time of submission, and the choice of each partner in the project must be justified by their experience and the work that they propose to undertake in the project.

 

Project specifics and deliverables
  1. The proposal must include a detailed work division and plan, including final integration and delivery.
  2. The proposal must include six-monthly detailed budget estimates for the entire duration of the project.
  3. The proposal must aim to create a framework for the simulation of doctor-patient communication as explained in this call. The specific focus should be on the kind of scenarios mentioned in this call; the exact scenario should be crystallized with the help of the medical partners submitting the proposal, and included as part of the proposal.
  4. The proposal must include a description of how the ECAs will be implemented, what capabilities they will have and how they will interact with the doctor user. The ECAs can be based on available open source backends.
  5. The proposal must separate the backend that deals with the simulation of the character representing the patient and the interaction with the doctor, from the frontend that builds knowledge about the scenario being simulated and allows customizing the patient and disease/symptom specifics.
  6. Further details as mentioned in the scope of the project should be incorporated, and clarified in the proposal that is submitted.
  7. All software created during the project must be well documented as per community standards, unit tested, and maintained using version control tools such as Git.
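The backend/frontend separation required in item 5 above might be structured as in the following sketch. All class and method names are assumed for illustration; a real backend could wrap an available open-source dialogue engine, as the call permits.

```python
from abc import ABC, abstractmethod

class AgentBackend(ABC):
    """Backend: simulates the patient character and its interaction
    with the doctor, independent of any particular scenario."""
    context: str = ""

    @abstractmethod
    def step(self, doctor_input: str) -> str: ...

class ScenarioFrontend:
    """Frontend: holds knowledge about the scenario being simulated and
    customizes the patient and disease/symptom specifics."""
    def __init__(self, disease: str, personality: str):
        self.disease = disease
        self.personality = personality

    def configure(self, backend: AgentBackend) -> None:
        # Push scenario-specific knowledge into the generic backend.
        backend.context = f"{self.personality} patient with {self.disease}"

class EchoBackend(AgentBackend):
    """Stand-in backend used only to demonstrate the interface."""
    def step(self, doctor_input: str) -> str:
        return f"[{self.context}] responds to: {doctor_input}"

frontend = ScenarioFrontend("migraine", "irritable")
backend = EchoBackend()
frontend.configure(backend)
print(backend.step("Can I examine you?"))
```

This separation lets new scenarios (the frontend) be layered onto the same agent simulation framework (the backend), as the scope of the project envisions.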

 

Project Budget and Duration
  1. The proposal must contain a detailed description of the budget required for the overall project, and its division by partner and budget head.
  2. The project will be awarded for two or three years, with regular monitoring as outlined below.

 

Project Monitoring
  1. A project advisory and monitoring sub-committee will be formed from the TIH AR/VR vertical core committee. They will review the submitted proposals, and monitor the project once it is awarded.
  2. The project will be subject to regular review by this committee every six months. The project budget for the subsequent half-year will not be released if the project fails to meet the milestones and deliverables of the current half-year.

 

Project Submission

Deadline of submission of proposal: 15 January 2022.