Protocol
Abstract
Background: Usability tests provide important insight into user preferences, functional issues, and differences between target groups for health interventions and products. However, there is limited guidance on how to adapt the usability testing approach for a youth audience, especially for digital health interventions.
Objective: This protocol paper outlines a novel approach for conducting usability tests with a diverse audience of youth, parents, and clinicians in the development of 2 digital health tools for the pediatric emergency department (ED) setting.
Methods: This paper outlines a protocol for usability testing as part of a broader study aimed at co-designing ED discharge communication tools with youth, parents, and clinicians. The broader study involved co-designing 2 digital tools: one for asthma and one for concussions. A multimethods approach to usability testing was used to assess the functionality of these tools through 2 rounds of testing. A mix of youth, parents, and ED clinicians was invited to participate in each round of usability testing. Participants were asked to provide feedback on the tools through quantitative surveys and open-ended qualitative questions. The usability testing approach was adapted to suit each target group (eg, by including a youth in the data collection process) to enhance the quality of the data. The severity of usability problems was analyzed following the first round of testing, and each tool was refined based on this feedback. The second round of usability tests involved collecting both qualitative and quantitative feedback on the revised tools.
Results: All usability data have been collected and are being analyzed. Outcomes will be disseminated through a subsequent publication. Results will include demographic characteristics of each user group from both rounds of testing, usability severity scores, qualitative and quantitative feedback, and differences in test outcomes between the target groups.
Conclusions: This paper provides novel guidance for conducting usability tests with youth participants when designing digital health tools. By using a comprehensive co-design and usability testing approach, we anticipate that final tools will be highly relevant to the end users and will lead to better uptake and patient outcomes when pilot-tested in future studies. The outlined approach may be adapted to different health care contexts for other youth participants. Further research should continue to explore ways to design usability tests that are suitable for youth audiences, as there is still a significant gap in the literature around this topic.
International Registered Report Identifier (IRRID): DERR1-10.2196/64350
doi:10.2196/64350
Introduction
The International Organization for Standardization describes usability as “the extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” [
]. Usability testing is a method in which a product is evaluated by users as they perform tasks and may include formative or summative testing [ ]. Usability testing is considered a cornerstone of user-centered design, valuable for capturing user preferences, identifying functional issues, and determining differences in how certain demographic groups use a product or tool [ ]. This information helps both designers and researchers ensure that a product suits the needs of the end user.

In the context of eHealth (ie, digital health tools and health information technologies), applying usability testing methods and approaches could improve the design and implementation of new interventions. This includes digital tools such as mobile apps, kiosks, virtual care, and electronic health records [
]. The World Health Organization recognizes eHealth interventions as a global priority, as part of the 2020-2025 global strategy on digital health, to create more efficient and effective health care systems [ ]. Despite this global health priority setting, it is unclear how often eHealth interventions undergo usability testing prior to implementation. One systematic review reported limited or poor-quality usability testing of electronic health records prior to implementation [ ], while another review of 104 eHealth interventions reported that only 38% of them included an aspect of usability testing [ ]. Often, eHealth researchers rely on industry-focused protocols, which may lack transferability to complex health care contexts. To strengthen the rigor of usability testing in the development of eHealth interventions, more diverse testing protocols applied in specific health contexts (eg, emergency departments [EDs]), for specific populations (eg, youth, their caregivers, and clinicians), or for specific types of eHealth interventions (eg, websites versus kiosks) are needed.

Further, eHealth researchers should develop usability protocols that are developmentally appropriate for the target users (ie, age and condition) and relevant to the specific context (ie, hospital or outpatient clinic) where they may be deployed. A scoping review by Maramba et al [
] identified 133 studies where usability testing informed the development of eHealth interventions. However, no studies in the review reported on the usability testing of eHealth interventions in the ED setting, and despite 9 studies being related to child health [ ], only 2 studies included youth (aged 14-21 years) as usability testing participants [ , ]. This represents a significant research-to-practice gap, as youth and their parents (including caregivers and legal guardians) are typically early adopters of eHealth interventions, and their insights could benefit broader adoption. Health services researchers need guidance on how to conduct or adapt usability tests to ensure that end users are appropriately involved in design. The aim of this protocol paper is to describe a youth-, parent-, and clinician-focused approach to usability testing of 2 eHealth interventions for pediatric EDs. This paper will highlight key testing session logistics, considerations for test user eligibility, testing activities and scenarios, adaptations for youth test users, and approaches to synthesizing multimethods usability data.

Methods
Study Design
This protocol paper describes one component of the emergency department discharge communication strategies (EDUCATE) study, which aimed to evaluate a co-design method for discharge communication tools for use in the pediatric ED [
]. Based on methodological guidance from Barnum [ ] and calls for a more comprehensive approach to usability testing, as reported in previous literature [ ], a multimethods approach including quantitative (ie, surveys and severity scoring) and qualitative (ie, open-ended interviews) data was used. Using a multimethods approach allowed us to view usability data through different lenses, with count and frequency data from surveys and experiential data from qualitative sources. The protocol was designed to support remote, synchronous usability testing, which has previously been shown to be as effective as in-person usability testing of eHealth interventions among both adults and youth [ ]. Formative usability testing was used, where tools were evaluated through 2 iterative cycles with a small number of participants [ ] to identify any errors prior to implementation. The Template for Intervention Description and Replication (TIDieR) checklist was used to guide the reporting of this protocol paper [ ] ( ).

Ethical Considerations
The study received ethical approval from the institutional review board at IWK Health (#1024004). All participants provided written informed consent prior to each usability test. Additional consent was obtained from a research team member whose image is included in the study materials. Following each usability test, participants received a unique identifier, and their data remained anonymous and confidential. Upon completion of the usability test, all participants received a CAD $30 (US $20.96) gift voucher as a reimbursement for their time.
Tool Development
During the first phase of the EDUCATE study, 2 electronic discharge communication tools were co-designed by parents, youth, and ED clinicians (ie, nurses and physicians) [
]. A full description of the co-design process will be reported in a future publication and is briefly described here. Two co-design teams were established, one for asthma and one for concussion, and each met 8 times over a 2-year period between 2020 and 2022. Each co-design team worked together to develop an interactive web-based tool that would address a key discharge communication issue for youth and families visiting the ED. One tool was co-designed to help parents and youth decide whether to visit the ED during an asthma attack, while the second was co-designed to help parents and youth navigate the postconcussion recovery journey after leaving the ED. Two user design experts integrated the co-design teams' ideas into 2 digital tools, which were then assessed for usability.

Usability Testing Steps
The usability testing process involved four steps, based on usability testing literature [
]: (1) defining the user profiles, (2) the think-aloud process, (3) task-based scenarios, and (4) refining and retesting. Each usability test was facilitated by a researcher trained in mixed and multimethods health services research (MS). Previous literature shows that usability tests often use one approach (ie, quantitative, qualitative, or heuristic methods) to collect usability data, but few use multiple methods [ ]. Therefore, a combination of quantitative and qualitative methods was used to gather comprehensive usability data and to identify as many usability issues as possible. The usability tests included a combination of quantitative self-report survey questions, qualitative think-aloud processes, observations, and open-ended interview questions, and were planned to last approximately 60 minutes. This protocol paper focuses on the usability testing process, while more details on recruitment, study setting, and outcome data will be reported in a future publication. The summary below and the following sections describe each of the four usability steps in detail, including how each step was adapted for each target population (ie, youth, parents, and ED clinicians).
Steps and items in each usability testing session:

Usability step 1: defining the user profiles
- Presession: screening questionnaire; informed consent
- Round 1 (total session time: 60 minutes): opening script (5 minutes); demographic survey (5 minutes)

Usability step 2: think-aloud process
- Overview of the "think-aloud" process (5 minutes)

Usability step 3: task-based scenarios
- Task 1: first impressions (20 minutes)
- Task 2: scenario-based activities (×2; 15 minutes)
- Gibson survey (5 minutes)
- Thank you and closing remarks (5 minutes)
- Postsession: quantitative data analysis; qualitative data analysis

Usability step 4: refining and retesting
- Presession: screening questionnaire; informed consent
- Round 2: demographic survey; posttask questionnaire
Step 1: Defining the User Profiles
Overview
Barnum [
] proposes that defining the user profile is an important first step in usability testing. As our tools were codeveloped by and for ED clinicians, parents, and youth, these 3 target groups were chosen as the user profiles. For this study, youth included any individual aged between 12 and 19 years who had visited the ED for either asthma or concussion in the past year. Parent users included any adult (>18 years) who visited the ED with their child for asthma or concussion presentations in the past year, and the clinician profile included any nurse or physician employed in a pediatric ED setting. Nielsen and Landauer [ ] argue that 85% of usability issues could be identified with as few as 5 participants, and Barnum [ ] proposes that formative usability testing is better suited to a smaller number of participants. Therefore, we aimed to include 2 to 3 participants from each user group (youth, parents, ED nurses, and ED physicians) for each tool across 2 study sites for a proposed sample of 16 to 24 participants from each site in each round of usability testing ( ).
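As a quick arithmetic check of this sampling plan, the short sketch below works through the proposed numbers (the per-group range, user groups, tools, and sites follow the text above; the cross-site total is simply derived from them and is not stated in the protocol):

```python
# Sketch of the proposed sample size arithmetic; not study code.
per_group = (2, 3)      # participants aimed for per user group
user_groups = 4         # youth, parents, ED nurses, ED physicians
tools = 2               # asthma tool and concussion tool
sites = 2               # study sites

per_site = tuple(n * user_groups * tools for n in per_group)   # (16, 24) per site, per round
overall = tuple(n * sites for n in per_site)                    # (32, 48) across both sites, per round
print(f"Per site, per round: {per_site[0]}-{per_site[1]} participants")
print(f"Both sites, per round: {overall[0]}-{overall[1]} participants")
```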
Eligibility and Pretest Survey
To determine eligibility, a screening survey was administered to interested participants through the REDCap (Research Electronic Data Capture; Vanderbilt University) platform [
]. Since participants would be testing a digital tool, it was important to screen for health literacy level and for access to a computer with audio and video capabilities. To assess health literacy, the REDCap survey included branching so that parents were directed to the METER health literacy test [ ], which has been shown to be a quick and valid measure of health literacy among adults. Youth were directed to the Health Literacy Assessment Tool 8 test [ ], a quick, feasible, and accurate health literacy assessment tool for youth [ ]. Clinicians were not required to complete a health literacy test. The screening survey can be found in . Participants provided written informed consent and agreed on a day and time to complete a test session with the facilitator (MS).
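The screening flow can be summarized as a simple decision rule. The sketch below (Python, purely illustrative; the role labels, the device check, and the provisional eligibility flag are our assumptions, not the study's actual REDCap configuration) expresses the branching described above:

```python
# Illustrative sketch of the screening branching logic; not the study's REDCap survey.
def assign_screening(role: str, has_av_computer: bool) -> dict:
    """Map a participant role to the health literacy test they should complete."""
    if not has_av_computer:
        # Participants need a computer with audio/video capabilities to join the remote session.
        return {"eligible": False, "literacy_test": None}

    if role == "parent":
        test = "METER"      # adult health literacy measure
    elif role == "youth":
        test = "HLAT-8"     # Health Literacy Assessment Tool 8
    else:
        test = None         # clinicians skip the literacy assessment

    return {"eligible": True, "literacy_test": test}

print(assign_screening("youth", has_av_computer=True))
```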
Test Session Setup
Participants joined the session remotely via Zoom (Zoom Video Communications) using their own computer and webcam. The facilitator provided a brief overview of the study and allowed participants to ask questions to ensure that they understood the expectations of the usability test. The facilitator explained that the aim of the usability test was to find problems with the tool and assured participants that their skills and abilities were not being evaluated. This was important for creating a safe testing space, particularly for youth participants. Participants were then asked to complete a demographic questionnaire on REDCap prior to the start of the test. The facilitator shared a link to the survey and waited while the participant completed it in real time to ensure a higher completion rate. Sessions were video recorded.
Step 2: The Think-Aloud Process
The think-aloud process involves participants talking through their thought process as they complete a task or solve a problem [
]. This approach is common in usability tests of eHealth interventions [ , ] and is valuable for understanding participants’ decision-making processes rather than strictly observing their behaviors [ , ]. Following the completion of the demographic survey, and immediately prior to the start of the usability test, the facilitator explained to participants how to use a think-aloud approach during the usability test. A mock example was used, which involved navigating a popular Canadian department store’s website so that test users could become familiar with the think-aloud process in a web-based environment they recognized. While it was important to demonstrate the think-aloud process with all participants, the co-design team suggested that a second think-aloud example featuring a youth should be modeled for youth participants prior to their usability test. Therefore, a youth member of the co-design team created a 1-minute video of themselves using the think-aloud process to find their way on a popular theme park’s map ( ). This was played for all youth participants prior to the start of the usability test. Examples of the think-aloud process were shared with the co-design teams and refined based on their feedback prior to starting the usability tests with participants.
Step 3: Task-Based Scenarios
Overview
Once participants confirmed that they were comfortable with the think-aloud process, the facilitator started video-recording the usability testing session. The facilitator used a variety of techniques to fully evaluate the usability of the eHealth intervention by adapting traditional usability methods for the end user population (ie, youth, parents, or clinicians). Before sharing a link to the tool, the facilitator shared a link to a web-based Microsoft Word document to guide the user through tasks and included visual prompts to support the scenario-based exercises. This approach was used to eliminate the need for alternating screen sharing by the facilitator and participant and to reduce complexity in the remote testing environment [
]. Six documents were developed, one for each of the 6 unique user groups (ie, youth with asthma, their parents, and their clinicians; and youth with concussions, their parents, and their clinicians).

Task 1: First Impressions
The first task in the usability test was designed to gather participants’ first impressions of the tool. This crucial step in usability testing can quickly determine whether users like or dislike a tool in about 80% of cases [
]. After opening the tool, users were invited to click around, using the think-aloud process to describe their initial reactions to the tool. The facilitator used open-ended prompts to encourage verbalizations of what the participant was thinking; these included the following: “What are your first impressions of the tool?” “What do you think the purpose of the tool is?” Participants were then asked to choose 5-10 words from an adapted list of the 118 desirability reaction words of Benedek and Miner [ ]. For usability tests, the Nielsen Norman Group [ ] suggests adapting the original list to approximately 25 words that are appropriate for the user interface being evaluated, with at least 40% of the included words having a negative connotation. This activity aimed to gather additional user satisfaction data while helping participants, particularly youth, feel more comfortable sharing their honest thoughts about the tool. This approach has been successfully used in previous usability tests [ ]. The list of desirability words can be found in .
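To illustrate how selections from the word list might be summarized across participants, the following sketch (Python; the word categorization and selections are hypothetical examples, not study data) tallies word choices and the share of negatively connoted selections:

```python
# Hypothetical sketch of summarizing desirability word selections; not study data.
from collections import Counter

negative_words = {"confusing", "overwhelming", "slow", "frustrating",
                  "cluttered", "boring", "intimidating", "unclear"}

selections = [
    ["helpful", "clear", "slow"],           # example youth participant
    ["confusing", "useful", "friendly"],    # example parent participant
    ["clear", "trustworthy", "cluttered"],  # example clinician participant
]

counts = Counter(word for chosen in selections for word in chosen)
total = sum(counts.values())
negative_share = sum(n for word, n in counts.items() if word in negative_words) / total

print(counts.most_common(5))
print(f"Negative selections: {negative_share:.0%} of all words chosen")
```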
Task 2: Scenario-Based Questions
Next, participants were presented with a scenario relevant to their user identity (ie, youth, parent, or clinician) and medical condition (ie, asthma or concussion). Each scenario was designed by a research team member (MS) based on the user’s persona, as outlined by Quesenbery and Brooks [
]. This involved crafting a situation with the user as the main character, in which they must achieve a specific goal by using the eHealth intervention being tested. The facilitator used a visual guide and a predetermined script to describe a scenario and then asked participants to complete 2 tasks. The tasks were designed to walk participants through key features of the tool so that additional navigation and usability errors could be easily identified. Participants were asked to use the think-aloud process to describe their thoughts and decisions as they completed each task. Participants were asked open-ended questions about the scenario-based activities, such as the following: “How did you find using the tool to complete that task?” “Is there anything you would change about the tool to make that task easier?” The user-specific tasks are outlined in .
Global Feedback
Following the scenario-based tasks, participants were asked open-ended questions about the tool, such as the following: “Is there anything else you would change about the tool to make it better?” “On what device/format would you most likely use this tool if you were to use it in the future?” Quantitative data about the functionality and satisfaction of the tool were captured through a REDCap survey, which was administered to participants at the end of the usability test. The facilitator shared a link to the survey using the chat feature of the web-based meeting platform and waited while participants completed it in real time to ensure a high completion rate. This posttest was adapted from a survey by Gibson et al [
], which is a validated tool for collecting patient and provider satisfaction data on educational resources. The survey by Gibson et al [ ] aims to gather information on visual appeal, functionality, content, and intended use. Branching was used to direct participants to the correct survey in REDCap, as the youth survey also included a question to understand the impact of seeing a youth-led example of the think-aloud process. The posttest surveys adapted from Gibson et al [ ] can be found in [ ].
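Once exported from REDCap, the posttest ratings could be compared across user groups with a few lines of code. The sketch below (Python with pandas; the column names, rating scale, and values are hypothetical, not the actual survey export) shows one way such a summary might look:

```python
# Illustrative sketch of summarizing posttest satisfaction ratings by user group; not study data.
import pandas as pd

responses = pd.DataFrame({
    "user_group": ["youth", "youth", "parent", "clinician"],
    "visual_appeal": [4, 5, 3, 4],   # assumed 1 (poor) to 5 (excellent) scale
    "functionality": [3, 4, 4, 5],
    "content":       [5, 4, 4, 4],
    "intended_use":  [4, 5, 3, 4],
})

# Mean rating per survey domain within each user group, to compare target populations.
summary = responses.groupby("user_group").mean(numeric_only=True).round(2)
print(summary)
```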
Step 4: Refining and Retesting
Following each usability testing session, the video recording was uploaded to a secure, password-protected internet server. Four coders (MS, JC, AG, LW) watched the videos and independently scored the usability issues using a combination of Nielsen’s [
] scoring system and qualitative analysis. The Nielsen scoring system for severity of usability issues is based on a 5-point scale, ranging from 0 (no usability problem) to 4 (catastrophic usability problem) [ ]. Nielsen [ ] proposes 3 factors associated with a usability issue: frequency of the problem, impact of the problem, and persistence of the problem. If an eHealth intervention is evaluated to include only minor usability issues (score of 0-2), then the tool may be released without further refinement, while an eHealth intervention with major or catastrophic issues (score of 3-4) should undergo alterations before another round of testing and/or public release [ ]. Each coder scored the recorded usability sessions using a deductive approach, following Nielsen’s [ ] scoring system. A numerical value and descriptive details were entered into an Excel (Microsoft Corp) sheet to explain the reasoning underlying each score. Coders then met to discuss their scores and reach a consensus on final severity scores for each usability test.
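The scoring and decision rule described above can be sketched as follows (Python; the issue names, scores, and the median shortcut are illustrative assumptions — in the study, final scores were reached through coder discussion rather than any automated rule):

```python
# Illustrative sketch of aggregating severity ratings on Nielsen's 0-4 scale; not study code.
from statistics import median

coder_scores = {
    "navigation: back button hard to find": [3, 4, 3, 3],
    "wording: medical term unclear to youth": [2, 2, 1, 2],
    "layout: button overlaps text on small screens": [1, 0, 1, 1],
}

for issue, scores in coder_scores.items():
    consensus = round(median(scores))  # stand-in for the coders' discussed consensus
    action = "refine before retesting" if consensus >= 3 else "minor; may release"
    print(f"{issue}: severity {consensus} -> {action}")
```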
In conjunction with Nielsen’s [ ] severity scoring step, each reviewer made notes about users’ open-ended responses or comments during the usability session. For example, if users described their dislike of a certain feature of the eHealth intervention, the reviewer documented this during the qualitative analysis. Directed content analysis was used to understand the qualitative data and identify the most commonly reported user issues [ ]. This type of qualitative analysis allows for a deeper interpretation of qualitative data, often informed by a theory or previous research, and allows for the quantification of the data [ ]. In this study, the quantitative findings informed the qualitative data analysis and allowed researchers to count user issues with additional context. Following qualitative analysis and severity scoring of each usability test, a description of the most severe usability issues and a list of proposed changes were sent to the design team. The developers then refined the tools by addressing severe usability issues and the most common cosmetic concerns.

The refined tools were brought to each co-design team for further input before undergoing a second round of usability testing with another sample of target users. As the first round aimed to identify catastrophic usability issues, the second round was intended to identify additional, minor issues. This second round therefore used a modified, remote, asynchronous approach to quickly gather usability information without placing unnecessary burden on participants. To capture remote usability data, images and video clips were embedded in a new REDCap survey to demonstrate the main functions and features of the tools (
). Participants then completed posttask questionnaires using a Likert scale, informed by Nielsen’s [ ] methods, with additional free-text boxes to capture qualitative data. Basic demographic questions were also included in the survey.

The final version of each tool was then presented to the co-design team members for their feedback. Each team made final decisions about what refinements should be incorporated into each tool, signaling the end of the usability testing process.

Results
The first round of usability testing was conducted between December 2021 and July 2022, while the second round was conducted between November 2022 and March 2023. Data analysis for round 1 took place in July and August 2022 and informed the second round of testing. The final results from both rounds of usability testing will be shared in a future publication. Outcome data will include an overview of usability severity scores from round 1, qualitative feedback on tool usability and satisfaction from rounds 1 and 2, and demographic details about study participants. Details about the changes made between rounds 1 and 2 of usability testing will also be presented and may include changes such as button size or location, colors, and new navigation pathways. We will describe any observations related to user characteristics and feedback and identify opportunities for future usability testing and implementation.
Discussion
Anticipated Findings
This paper addresses an important gap in the academic usability literature by detailing a co-designed approach to usability testing that was adapted for youth, parents, and clinicians. In particular, this paper describes multiple adaptations that were made to the testing procedures to address the developmental stages and comfort levels of youth participants. These adaptations included modeling the think-aloud technique by other youth; allowing a test-and-try before initiating the recording; screening for the appropriate level of health literacy so participants would be able to complete tasks; using multiple methods for soliciting feedback (self-report survey, observation, and interviews) so participants had varied opportunities to express opinions and suggestions; keeping testing sessions short (<60 minutes) and accessible offsite (ie, via Zoom); using branching logic in data collection methods so participants only accessed information relevant to them; and including less cognitively demanding activities (eg, the word desirability activity) to solicit feedback. By applying these approaches to usability testing, it is anticipated that the feedback will be highly relevant, leading to a more user-centered product. We expect that the first round of usability testing will lead to several changes to the tools, while the second round may result in fewer or more minor changes. By using a co-design approach and bringing the usability feedback to each co-design team for further consideration, we anticipate that the next step of piloting the 2 tools in ED settings will lead to positive uptake and outcomes. Previous studies have indicated the benefits of using a co-design approach to engage more end users [
]. However, few usability studies include the youth perspective, even when youth are the target audience [ ]; hence, we expect this paper to be a significant and beneficial contribution to the usability literature.

Strengths and Limitations
While this paper provides a comprehensive overview of an approach to usability testing for youth, there are several limitations to consider. Due to the remote nature of the usability tests, participants require internet and computer access. Further, among individuals who do complete a remote usability test, the differences in home environments and technical equipment may affect the quality of the testing process [
]. Future work may focus on offering technical support or requesting a specific technology setup, as these concerns may have limited participation for some individuals, particularly those from lower socioeconomic backgrounds. Additionally, while it was not a requirement to speak English as a first language to participate in the study, the digital tools were designed only in English, and therefore non–English speakers may have been unable to complete the usability tests. Although a small sample of participants is needed to identify most usability issues, a small sample may reduce the generalizability of the findings, which may be seen as a limitation. We have future research planned to mitigate both of these concerns by co-designing multilingual digital tools with broader populations to ensure that the specific needs of clinicians, parents, and youth from varied backgrounds are met. Finally, while the findings of the usability tests may not be generalizable to non-ED health care contexts or to individuals presenting with medical conditions other than concussion and asthma, the techniques used to engage youth may be applied to any usability testing setting.

Key Recommendations and Conclusion
Youth provide valuable perspectives on eHealth intervention design and therefore should be included in the usability testing process; however, there is a significant gap in the literature around usability testing with youth in health services. Therefore, researchers may find the methods described in this paper helpful for guiding usability tests with youth participants in other health care contexts. Further outcome data are needed to determine what works well in youth-based usability studies, some of which will be shared in a future publication presenting the outcomes of the approach detailed here.
Acknowledgments
We would like to acknowledge the members of each co-design team who designed the digital health tools assessed through usability testing.
Data Availability
The datasets generated during this study are not publicly available because the results have not yet been analyzed but are available from the corresponding author on reasonable request.
Conflicts of Interest
None declared.
TIDieR (Template for Intervention Description and Replication) reporting guidelines.
DOCX File, 495 KB

Screening survey for usability participants.
DOCX File, 209 KB

The list of desirability words.
DOCX File, 14 KB

An overview of the user-specific tasks.
DOCX File, 20 KB

Posttest surveys adapted from Gibson et al [ ].
DOCX File, 279 KB

References
- Bevan N, Carter J, Earthy J, Geis T, Harker S. New ISO Standards for Usability, Usability Reports and Usability Measures. 2016. Presented at: Human-Computer Interaction. Theory, Design, Development and Practice (HCI 2016); July 17-22, 2016; Toronto, ON. [CrossRef]
- Barnum CM. Usability Testing Essentials. Burlington, MA. Morgan Kaufmann Publishers; 2010.
- Albert B, Tullis T. Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics. Oxford. Newnes; 2013.
- Gentles SJ, Lokker C, McKibbon KA. Health information technology to facilitate communication involving health care providers, caregivers, and pediatric patients: a scoping review. J Med Internet Res. Jun 18, 2010;12(2):e22. [FREE Full text] [CrossRef] [Medline]
- Global strategy on digital health 2020-2025. World Health Organization. 2021. URL: https://www.who.int/docs/default-source/documents/gs4dhdaa2a9f352b0445bafbc79ca799dce4d.pdf [accessed 2025-03-13]
- Ellsworth MA, Dziadzko M, O'Horo JC, Farrell AM, Zhang J, Herasevich V. An appraisal of published usability evaluations of electronic health records via systematic review. J Am Med Inform Assoc. Jan 2017;24(1):218-226. [FREE Full text] [CrossRef] [Medline]
- Maramba I, Chatterjee A, Newman C. Methods of usability testing in the development of eHealth applications: a scoping review. Int J Med Inform. Jun 2019;126:95-104. [CrossRef] [Medline]
- Fiks AG, Fleisher L, Berrigan L, Sykes E, Mayne SL, Gruver R, et al. Usability, acceptability, and impact of a pediatric teledermatology mobile health application. Telemed J E Health. Mar 2018;24(3):236-245. [CrossRef] [Medline]
- Webb MJ, Wadley G, Sanci LA. Improving patient-centered care for young people in general practice with a codesigned screening app: mixed methods study. JMIR Mhealth Uhealth. Aug 11, 2017;5(8):e118. [FREE Full text] [CrossRef] [Medline]
- Curran JA, Bishop A, Plint A, MacPhee S, Zemek R, Chorney J, et al. Understanding discharge communication behaviours in a pediatric emergency care context: a mixed methods observation study protocol. BMC Health Serv Res. Apr 17, 2017;17(1):276. [FREE Full text] [CrossRef] [Medline]
- Wozney L, Baxter P, Newton AS. Usability evaluation with mental health professionals and young people to develop an internet-based cognitive-behaviour therapy program for adolescents with anxiety disorders. BMC Pediatr. Dec 16, 2015;15(1):213. [FREE Full text] [CrossRef] [Medline]
- Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. Mar 07, 2014;348(mar07 3):g1687-g1687. [FREE Full text] [CrossRef] [Medline]
- Curran JA, Cassidy C, Bishop A, Wozney L, Plint AC, Ritchie K, et al. Pediatric Emergency Research Canada (PERC). Codesigning discharge communication interventions with healthcare providers, youth and parents for emergency practice settings: EDUCATE study protocol. BMJ Open. May 11, 2020;10(5):e038314. [FREE Full text] [CrossRef] [Medline]
- Nielsen J, Landauer TK. A mathematical model of the finding of usability problems. 1993. Presented at: INTERCHI93: Conference on Human Factors in Computing; April 24-29, 1993; Amsterdam. [CrossRef]
- Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. Apr 2009;42(2):377-381. [FREE Full text] [CrossRef] [Medline]
- Rawson KA, Gunstad J, Hughes J, Spitznagel MB, Potter V, Waechter D, et al. The METER: a brief, self-administered measure of health literacy. J Gen Intern Med. Jan 3, 2010;25(1):67-71. [FREE Full text] [CrossRef] [Medline]
- Abel T, Hofmann K, Ackermann S, Bucher S, Sakarya S. Health literacy among young adults: a short survey tool for public health and health promotion research. Health Promot Int. Sep 30, 2015;30(3):725-735. [FREE Full text] [CrossRef] [Medline]
- van Someren MW, Barnard YF, Sandberg JAC. The Think Aloud Method - A Practical Guide to Modelling Cognitive Processes. London. Academic Press; 1994.
- Jaspers MWM, Steen T, van den Bos C, Geenen M. The think aloud method: a guide to user interface design. Int J Med Inform. Nov 2004;73(11-12):781-795. [CrossRef] [Medline]
- Nielsen J. Getting Usability Used. In: Nordby K, Helmersen P, Gilmore DJ, Arnesen SA, editors. Human—Computer Interaction. IFIP Advances in Information and Communication Technology. Boston, MA. Springer; 1995.
- Wozney LM, Baxter P, Fast H, Cleghorn L, Hundert AS, Newton AS. Sociotechnical human factors involved in remote online usability testing of two eHealth interventions. JMIR Hum Factors. Feb 03, 2016;3(1):e6. [FREE Full text] [CrossRef] [Medline]
- Benedek J, Miner T. Measuring Desirability: New methods for evaluating desirability in a usability lab setting. 2010. Presented at: Usability Professionals’ Association Conference; May 2010; Munich, Germany.
- Moran K. Using the Microsoft Desirability Toolkit to Test Visual Appeal. Nielsen Norman Group. 2016. URL: https://www.nngroup.com/articles/microsoft-desirability-toolkit/ [accessed 2025-07-11]
- Hundert AS, Campbell-Yeo M, Brook HR, Wozney LM, O'Connor K. Development and usability evaluation of a desktop software application for pain assessment in infants. Can J Pain. Nov 14, 2018;2(1):302-314. [FREE Full text] [CrossRef] [Medline]
- Quesenbery W, Brooks K. Storytelling for User Experience: Crafting Stories for Better Design. New York, NY. Rosenfeld Media; 2010.
- Gibson PA, Ruby C, Craig MD. A health/patient education database for family practice. Bull Med Libr Assoc. Oct 1991;79(4):357-369. [FREE Full text] [Medline]
- Nielsen J. Severity Ratings for Usability Problems. Nielsen Norman Group. 1994. URL: https://www.nngroup.com/articles/how-to-rate-the-severity-of-usability-problems/ [accessed 2024-05-06]
- Hsieh H, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. Nov 2005;15(9):1277-1288. [CrossRef] [Medline]
- Domecq JP, Prutsky G, Elraiyah T, Wang Z, Nabhan M, Shippee N, et al. Patient engagement in research: a systematic review. BMC Health Serv Res. Feb 26, 2014;14:89. [FREE Full text] [CrossRef] [Medline]
Abbreviations
ED: emergency department
EDUCATE: emergency department discharge communication strategies
REDCap: Research Electronic Data Capture
TIDieR: Template for Intervention Description and Replication
Edited by A Schwartz; submitted 16.07.24; peer-reviewed by E Bai; comments to author 13.09.24; revised version received 08.11.24; accepted 27.02.25; published 14.04.25.
Copyright©Mari Somerville, Lori Wozney, Allyson Gallant, Janet A Curran. Originally published in JMIR Research Protocols (https://www.researchprotocols.org), 14.04.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on https://www.researchprotocols.org, as well as this copyright and license information must be included.