The Ready to Reduce Risk (3R) Study for a Group Educational Intervention With Telephone and Text Messaging Support to Improve Medication Adherence for the Primary Prevention of Cardiovascular Disease: Protocol for a Randomized Controlled Trial

Background: Poor adherence to cardiovascular medications is associated with worse clinical outcomes. Evidence for effective educational interventions that address medication adherence for the primary prevention of cardiovascular disease is lacking. The Ready to Reduce Risk (3R) study aims to investigate whether a complex intervention, involving group education plus telephone and text messaging follow-up support, can improve medication adherence and reduce cardiovascular risk.

Objective: This protocol paper details the design and rationale for the development of the 3R intervention and the study methods used.

Methods: This is an open, pragmatic randomized controlled trial with 12 months of follow-up. We recruited participants from primary care and randomly assigned them in a 1:1 ratio, stratified by sex and age, to either a control group (usual care from a general practitioner) or an intervention group involving 2 facilitated group education sessions with telephone and text messaging follow-up support, with a theoretical underpinning and using recognized behavioral change techniques. The primary outcome was medication adherence to statins, assessed primarily with an objective, novel, urine-based biochemical measure of adherence; we also used the 8-item Morisky Medication Adherence Scale. Secondary outcomes were changes in total cholesterol, blood pressure, high-density lipoprotein, total cholesterol to high-density lipoprotein ratio, body mass index, waist to hip ratio, waist circumference, smoking behavior, physical activity, fruit and vegetable intake, patient activation level, quality of life, health status, health and medication beliefs, and overall cardiovascular disease risk score. We also considered process outcomes relating to the acceptability and feasibility of the 3R intervention.

Results: We recruited 212 participants between May 2015 and March 2017. The 12-month follow-up data collection clinics were completed in April 2018, and data analysis will commence once all study data have been collected and verified.

Conclusions: This study will identify a potentially clinically useful and effective educational intervention for the primary prevention of cardiovascular disease. Medication adherence to statins is being assessed using a novel urine assay as an objective measure, in conjunction with other validated measures.

Trial Registration: International Standard Randomized Controlled Trial Number ISRCTN16863160; http://www.isrctn.com/ISRCTN16863160 (Archived by WebCite at http://www.webcitation.org/734PqfdQw)

International Registered Report Identifier (IRRID): DERR1-10.2196/11289

Highly recommended v) Conclusions/Discussions in abstract for negative trials: Discuss the primary outcome. If the trial is negative (primary outcome not changed) and the intervention was not used, discuss whether the negative results are attributable to lack of uptake, and discuss reasons.

Background and objectives
2a Scientific background and explanation of rationale i) Describe the problem and the type of system/solution that is the object of the study: Is it intended as a stand-alone intervention or incorporated in a broader health care program? [1] Is it intended for a particular patient population? [1] What are the goals of the intervention, e.g., being more cost-effective than other interventions [1], or replacing or complementing other solutions? (Note: Details about the intervention are provided in "Methods" under item 5.)

Essential
ii) Scientific background, rationale: What is known about the (type of) system that is the object of the study (be sure to discuss the use of similar systems for other conditions/diagnoses, if appropriate); the motivation for the study, i.e., the reasons for and the context of this specific study; from which stakeholder viewpoint the study is performed; and the potential impact of the findings [2]. Briefly justify the choice of the comparator [6].

Mention the theories and principles used to design them (instructional strategy [1], behaviour change techniques, persuasive features, etc.; see, e.g., [7,8] for terminology). This includes an in-depth description of the content (including where it is coming from and who developed it) [1], and "whether [and how] it is tailored to individual circumstances and allows users to track their progress and receive feedback" [6]. This also includes a description of communication delivery channels and, if computer-mediated communication is a component, whether communication was synchronous or asynchronous [6]. It also includes information on presentation strategies [1], including page design principles, average amount of text on pages, presence of hyperlinks to other resources, etc. [1].

Essential
Essential ix) Describe use parameters (e.g., intended "doses" and optimal timing for use) [1]. Clarify what instructions or recommendations were given to the user, e.g., regarding timing, frequency, or heaviness of use [1], if any, or whether the intervention was used ad libitum.

Highly Recommended
x) Clarify the level of human involvement (care providers or health professionals, including technical assistance) in the e-intervention or as a cointervention. Detail the number and expertise of professionals involved, if any, as well as the "type of assistance offered, the timing and frequency of the support, how it is initiated, and the medium by which the assistance is delivered" [6]. It may be necessary to distinguish between the level of human involvement required for the trial and the level of human involvement required for a routine application outside of an RCT setting (discuss under item 21 - generalizability). See [6] for some items to be included in informed consent documents.

Highly recommended xi) Report any prompts/reminders used:
Highly Recommended iii) Safety and security procedures, incl. privacy considerations, and "any steps taken to reduce the likelihood or detection of harm (e.g., education and training, availability of a hotline)" [1].

Participant flow (a diagram is strongly recommended)
13a For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome
NPT: The number of care providers or centers performing the intervention in each group and the number of patients treated by each care provider in each center
No EHEALTH-specific additions here
13b For each group, losses and exclusions after randomisation, together with reasons
i) Strongly recommended: An attrition diagram (e.g., proportion of participants still logging in or using the intervention/comparator in each group plotted over time, similar to a survival curve) [5], or other figures or tables demonstrating usage/dose/engagement.
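The attrition diagram suggested above can be derived directly from usage logs. A minimal sketch, assuming hypothetical data where each participant is represented by the week of their last recorded activity (field names and numbers are illustrative, not from the 3R study):

```python
# Sketch: compute an attrition curve (proportion of participants still
# "active" at each follow-up week) from hypothetical last-activity data.
# Input: one last-active week number per participant.

def attrition_curve(last_active_weeks, total_weeks):
    """For each week 0..total_weeks, return the fraction of participants
    whose last recorded activity is at or after that week."""
    n = len(last_active_weeks)
    return [sum(1 for w in last_active_weeks if w >= week) / n
            for week in range(total_weeks + 1)]

# Example: 5 participants whose last logins were at weeks 2, 2, 5, 8, 12.
curve = attrition_curve([2, 2, 5, 8, 12], 12)
```

Plotting one such curve per trial arm gives the survival-curve-style figure the checklist recommends.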

Recruitment
14a Dates defining the periods of recruitment and follow-up i) Indicate if critical "secular events" [1] fell into the study period, e.g., significant changes in Internet resources available or "changes in computer hardware or Internet delivery resources" [1].

Highly Recommended 14b Why the trial ended or was stopped [early]
No EHEALTH-specific additions here
Report the numbers of participants who met usage "thresholds" [1], e.g., N exposed, N consented, N used more than x times, N used more than y weeks, N participants who "used" the intervention/comparator at specific pre-defined time points of interest (in absolute and relative numbers per group). Always clearly define "use" of the intervention.

Essential
ii) Primary analysis should be intent-to-treat; secondary analyses could include comparing only "users", with the appropriate caveats that this is no longer a randomized sample (see 18-i).

Highly Recommended
Outcomes and estimation
17a For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval)
i) In addition to primary/secondary (clinical) outcomes, the presentation of process outcomes such as metrics of use and intensity of use (dose, exposure) and their operational definitions is critical. This does not only refer to metrics of attrition (13-b) (often a binary variable), but also to more continuous exposure metrics such as "average session length". These must be accompanied by a technical description of how a metric like a "session" is defined (e.g., timeout after idle time) [1].
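The idle-timeout definition of a "session" mentioned above can be made operational in a few lines. A sketch under assumed conventions (30-minute timeout, event timestamps in seconds; both are hypothetical choices, not prescribed by the checklist):

```python
# Sketch: derive "sessions" from raw event timestamps using an idle
# timeout. The 30-minute timeout is an illustrative choice; whatever
# value is used must be reported, per the checklist.

IDLE_TIMEOUT = 30 * 60  # seconds of inactivity that ends a session

def count_sessions(timestamps, idle_timeout=IDLE_TIMEOUT):
    """Timestamps in seconds, sorted ascending; any gap longer than
    idle_timeout starts a new session."""
    if not timestamps:
        return 0
    sessions = 1
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > idle_timeout:
            sessions += 1
    return sessions

# Example: events at 0 s, 600 s, and 5000 s -> two sessions,
# because the 4400 s gap exceeds the 1800 s timeout.
n_sessions = count_sessions([0, 600, 5000])
```

Reporting the timeout value alongside the metric is exactly the "technical description" the item asks for.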

No EHEALTH-specific additions here
Competing interests
X27 (not a CONSORT item) i) In addition to the usual declaration of interests (financial or otherwise), also state the "relation of the study team towards the system being evaluated" [2], i.e., state whether the authors/evaluators are distinct from or identical with the developers/sponsors of the intervention.

i) Report key features/functionalities/components of the intervention and comparator in the abstract. (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it.)
ii) Clarify the level of human involvement in the abstract, e.g., use phrases like "fully automated" vs. "therapist/nurse/care provider/physician-assisted" (mention the number and expertise of providers involved, if any). (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it.) If possible, also mention theories and principles used for designing the site. Keep in mind the needs of systematic reviewers and indexers by including important synonyms.

Open vs. closed, web-based (self-assessment) vs. face-to-face assessments in the abstract: Mention how participants were recruited (online vs. offline), e.g., from an open access website or from a clinic or a closed online user group (closed user group trial), and clarify if this was a purely web-based trial or whether there were face-to-face components (as part of the intervention or for assessment). Clearly say if outcomes were self-assessed through questionnaires (as common in web-based trials). Note: In traditional offline trials, an open trial (open-label trial) is a type of clinical trial in which both the researchers and participants know which treatment is being administered. To avoid confusion, use "blinded" or "unblinded" to indicate the level of blinding instead of "open", as "open" in web-based trials usually refers to "open access" (i.e., participants can self-enrol). (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it.)

Information given during recruitment
Clearly mention the date and/or version number of the application/intervention (and comparator, if applicable) evaluated, or describe whether the intervention underwent major changes during the evaluation process, or whether the development and/or content was "frozen" during the trial. Describe dynamic components such as news feeds or changing content which may have an impact on the replicability of the intervention (for unexpected events see item 3b).
Specify how participants were briefed for recruitment and in the informed consent procedures [6] (e.g., publish the informed consent documentation as an appendix; see also item X26), as this information may have an effect on user self-selection and user expectations and may also bias results.
i) Clearly report if outcomes were (self-)assessed through online questionnaires (as common in web-based trials) or otherwise.
Essential ii) Report how institutional affiliations are displayed to potential participants [on ehealth media], as "affiliations with prestigious hospitals or universities may affect volunteer rates, use, and reactions with regards to an intervention" [1]. (Not a required item - describe only if this may bias results.)
i) Mention the names, credentials, and affiliations of the developers, sponsors, and owners [6] (if the authors/evaluators are owners or developers of the software, this needs to be declared in a "Conflict of interest" section or mentioned elsewhere in the manuscript).
Highly Recommended ii) Describe the history/development process of the application and previous formative evaluations (e.g., focus groups, usability testing), as these will have an impact on adoption/use rates and help with interpreting results.

Ensure replicability by publishing the source code, and/or providing screenshots/screen-capture video, and/or providing flowcharts of the algorithms used. Replicability (i.e., other researchers should in principle be able to replicate the study) is a hallmark of scientific reporting.
Highly Recommended vi) Digital preservation: Provide the URL of the application, but as the intervention is likely to change or disappear over the course of the years, also make sure the intervention is archived (Internet Archive, webcitation.org, and/or publishing the source code or screenshots/videos alongside the article). As pages behind login screens cannot be archived, consider creating demo pages which are accessible without login, or a demo mode for reviewers/readers to explore the application (also important for archiving purposes, see vi).
Essential viii) Describe mode

Describe whether and how "use" (including intensity of use/dosage) was defined/measured/monitored (logins, logfile analysis, etc.). Use/adoption metrics are important process outcomes that should be reported in any ehealth trial.

Describe whether, how, and when qualitative feedback was obtained from participants (e.g., through emails, feedback forms, interviews, focus groups).

Describe whether and how expected attrition was taken into account when calculating the sample size
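One common way expected attrition is taken into account is to inflate the per-group sample size so that the required number of completers remains after dropout. A sketch of that arithmetic (the 85-completer/20%-attrition figures are hypothetical, not from any particular trial):

```python
import math

def inflate_for_attrition(n_required, attrition_rate):
    """Inflate a per-group sample size so that, after the expected
    proportion of dropouts, n_required completers remain:
    n_adjusted = ceil(n_required / (1 - attrition_rate))."""
    if not 0 <= attrition_rate < 1:
        raise ValueError("attrition_rate must be in [0, 1)")
    return math.ceil(n_required / (1 - attrition_rate))

# Example: 85 completers needed per arm, 20% expected attrition
# -> recruit ceil(85 / 0.8) = 107 per arm.
n_per_arm = inflate_for_attrition(85, 0.2)
```

Whatever adjustment is used, both the assumed attrition rate and the inflated target should be reported.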
Specify who was blinded, and who was not [1,3]. Usually, in web-based trials it is not possible to blind the participants [1,3] (this should be clearly acknowledged), but it may be possible to blind outcome assessors, those doing data analysis, or those administering co-interventions (if any).
Essential ii) Informed consent procedures (4a-ii) can create biases and certain expectations - discuss, e.g.,

Describe techniques to deal with attrition / missing values: Not all participants will use the intervention/comparator as intended [4], and attrition is typically high in ehealth trials. Specify how participants who did not use the application or dropped out from the trial were treated in the statistical analysis (a complete case analysis is strongly discouraged, and simple imputation techniques such as LOCF may also be problematic [4]).
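To make the LOCF caveat concrete, here is the technique itself in miniature: it freezes a dropout's outcome at the last observed value, which can be optimistic when, e.g., adherence typically declines after disengagement (the adherence values below are invented for illustration):

```python
# Sketch: last-observation-carried-forward (LOCF) imputation, shown
# only to illustrate the technique the text flags as problematic.

def locf(values):
    """Replace None (missing) entries with the most recent observed
    value; leading missing entries remain None."""
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# A participant measured at 4 visits who drops out after visit 2:
# LOCF assumes adherence stayed at 0.9 for visits 3 and 4, which may
# overstate adherence if dropouts tend to stop taking medication.
filled = locf([0.95, 0.9, None, None])
```

This is why the checklist points to more principled approaches than complete-case analysis or LOCF for high-attrition ehealth trials.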

Report qualitative feedback from participants or observations from staff/researchers, if available, on strengths and shortcomings of the application, especially if they point to unintended/unexpected effects or uses. This includes (if available) reasons why people did or did not use the application as intended by the developers.
Include privacy breaches and technical problems [2]. This does not only include physical "harm" to participants, but also incidents such as perceived or real privacy breaches [1], technical problems, and other unexpected/unintended incidents. "Unintended effects" also includes unintended positive effects [2].

Restate the study questions and summarize the answers suggested by the data [2], starting with primary outcomes and process outcomes (use).
Typical limitations in ehealth trials [2]: Participants in ehealth trials are rarely blinded. Ehealth trials often look at a multiplicity of outcomes, increasing the risk for a Type I error. Discuss biases due to non-use of the intervention/usability issues, biases through informed consent procedures, and unexpected events. In particular, discuss generalizability to a general Internet population outside of an RCT setting, and to a general patient population, including the applicability of the study results for other organizations [2].

Discuss if there were elements in the RCT that would be different in a routine application setting (e.g., prompts/reminders, more human involvement, training sessions, or other co-interventions) and what impact the omission of these elements could have on use, adoption, or outcomes if the intervention is applied outside of an RCT setting.