[This week’s objectives as per professor: Compare and contrast differences and similarities of qualitative, quantitative, and mixed-method designs. We discovered that, more and more, nurses are using qualitative data to provide a holistic understanding of a phenomenon. As a practice scholar, you’ll use qualitative inquiry and data to enrich your understanding of a practice problem. Good qualitative research uses a systematic and rigorous approach that aims to answer questions that address what something is like (such as a patient experience), what people think or feel about something that has occurred, or why something has happened. Unlike numbers, qualitative data often take the form of words, text, and images. Nurses with advanced education are called to transform the health of our nation. I have no doubt that you’ll answer the call by translating best evidence to gain new insights on needed change.]
[Professor comments about week objectives: This week you have Appraising Quantitative Research using the Johns Hopkins Research Evidence Appraisal Tool and the Johns Hopkins Individual Evidence Summary Tool. Let’s recap what you learned this week. We continued our dialogue about research designs by examining quantitative inquiry and data. You appraised a quantitative research study with my guidance, and reflected on how quantitative methods differ from qualitative methods. You continued building an evidence summary table, adding two quantitative research studies. That’s a lot of progress! I know you gained an understanding of how the research question drives the selection and appraisal of the design. What’s more, you explored how the roles of quantitative inquiry and data can be used by the practice scholar to influence patient outcomes.]
[Professor comment about week objectives: Hello, and welcome to Week 3! Last week, you explored quantitative inquiry and data. This week, we’ll take that knowledge a step further—you’ll have the opportunity to investigate summaries of multiple research studies, including systematic reviews. In the hierarchy of evidence quality, systematic reviews of randomized controlled trials are considered the gold standard of evidence. During this week’s leg of the journey, you’ll learn how to appraise systematic reviews and other summaries of multiple studies using the Johns Hopkins Nursing Evidence-Based Practice Research Evidence Appraisal Tool. I bet you’re getting familiar with this tool after applying it in Weeks 1 and 2. Using this industry standard provides a systematic process of appraisal and assists you in locating high-quality evidence. Besides systematic reviews, we’ll examine other summaries of multiple studies, including an integrative review, meta-analysis, and meta-synthesis. How do an integrative review, a meta-analysis, a systematic review, and a meta-synthesis differ? I’m sure you’re ready to find answers to these questions, so let’s get started! This week’s focus was summaries of multiple studies. You examined how the integrative review, meta-analysis, systematic review, and meta-synthesis differ and how they are similar. More importantly, you learned the value and importance these research summaries play in informing practice change. Research is a critical component in solving practice problems. As a DNP-prepared nurse, your critical appraisal of research study articles includes reviewing research to locate high-quality evidence. After just three weeks, look how far you’ve come already! Let’s move on to Week 4.]
[Professor comment about week objectives: Week 4 marks the midpoint of your journey in NR714! You’re making tremendous progress in learning analytic methods! This leg of the journey involves the exploration of data analysis. If you’re at all like me, you might feel anxious about analyzing qualitative and quantitative data. Don’t worry! You’ll have lots of support from me and your course faculty. Most likely, you’re already familiar with some aspects of data analysis, and this week, we’ll explore some of the most widely recognized statistical procedures. I think you’ll enjoy exploring the learning activity about descriptive statistics, where you’ll analyze data from a real scenario to see how statistics inform practice change. As you review weekly content, consider how research study data management and statistical analysis directly influence the conclusions made by researchers. In a scientific study, the concepts in which a researcher is interested are referred to as variables. One distinction relates to independent and dependent variables. Another distinction that has relevance for statistical analysis concerns discrete and continuous variables. We learn about these distinctions this week.
A power analysis is a statistical procedure used to determine the sample size needed to draw a reasonable conclusion.
Power Analyses (Ali & Bhaskar, 2016; Polit & Beck, 2017)
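As a rough illustration of the idea (not part of the course materials), the sample size for comparing two group means can be approximated in plain Python using the normal approximation to the t-test. The function name, default values, and effect size below are illustrative assumptions, not a prescribed procedure:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample comparison
    of means, using the normal approximation to the t-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the alpha level
    z_beta = NormalDist().inv_cdf(power)           # value corresponding to desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return ceil(n)  # round up: you cannot enroll a fraction of a participant

# A medium effect (Cohen's d = 0.5) at alpha = .05 and 80% power
# requires roughly 63 participants per group under this approximation.
print(sample_size_per_group(0.5))
```

Note how a larger expected effect size requires fewer participants, which is why underpowered studies are those looking for small effects with small samples.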
The Chi-Square test is powerful for what it is intended to do: determine whether variables are associated in any way. Chi-Square Analysis (Ali & Bhaskar, 2016; Polit & Beck, 2017)
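To make the mechanics concrete, here is a minimal sketch (my own illustration, with made-up counts) of the Pearson chi-square statistic for a contingency table of observed counts. The function name and example data are assumptions for illustration:

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a contingency table
    (a list of rows, each a list of observed counts)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table: treatment vs. control against improved vs. not improved.
observed = [[30, 10],
            [10, 30]]
stat = chi_square_statistic(observed)
# df = (2-1)*(2-1) = 1; the critical value at alpha = .05 is 3.841,
# so a statistic of 20.0 suggests the variables are associated.
print(stat)
```

If the observed counts matched the expected counts exactly, the statistic would be 0, indicating no evidence of association.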
The null hypothesis states that there is no relationship between variables. As such, if you fail to reject the null, you are concluding that the evidence does not support a relationship between the variables.
Null Hypothesis Testing (Polit & Beck, 2017)
Null Hypothesis – suggests there is no relationship between variables, populations, etc.; any apparent relationship is attributed to sampling error.
Alternative Hypothesis – suggests there is a relationship between variables, populations, etc.
Rejecting the Null Hypothesis – indicates support for the alternative hypothesis.
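The decision rule above can be sketched in a few lines of Python. This is my own illustration of a two-sided test decision, assuming a hypothetical standard-normal test statistic of 2.5; it is not drawn from the course materials:

```python
from statistics import NormalDist

def two_sided_p_value(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

alpha = 0.05        # conventional significance level
z = 2.5             # hypothetical test statistic from a study
p = two_sided_p_value(z)
reject_null = p < alpha
# p ≈ 0.0124, which is below .05, so we reject the null hypothesis
# and conclude the data support the alternative hypothesis.
print(round(p, 4), reject_null)
```

A p-value above alpha would mean we fail to reject the null; it would not prove the null is true, only that the data do not provide sufficient evidence against it.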
Week 4’s concepts certainly offered some great information! Let’s take some time to go over everything you learned this week. This part of your journey focused on analyzing qualitative and quantitative data. You’ve probably realized that as you explored some of the most commonly used statistical procedures, you’re already familiar with several of them. But did you know that the Chi-Square Test determines whether two variables are independent or related? You do now! By using your knowledge of this and other statistical tests, you’ll be able to select high-quality evidence to lead successful change in practice. And with that understanding under your belt, you’re ready to move forward to Week 5!
Let’s review everything you learned in Week 5. You now know that data analysis plays an essential role in your ability to translate best available evidence to a practice problem. After many years of chaotic attempts at progress, nurses have come to realize that, to be useful, evidence from data analysis must be carefully examined, organized, and given meaning. Evaluating the meaning of the study results and forecasting the usefulness of the findings to unique practice settings is the hallmark of the practice scholar. Now that you’re here, you understand not only the importance of data analysis, but also the skill needed to determine whether to use evidence in practice. This knowledge is vital to your role as a practice scholar! I can tell that you’re ready to move onward. See you in Week 6!
Evidence synthesis plays a critical role in translation science. Many nurses confuse an evidence synthesis with an evidence summary. You now know an important difference. An evidence synthesis brings together information from different sources in order to interpret existing knowledge, identify knowledge gaps, and inform decision making.
PICOT is a mnemonic for a formatting tool that serves two purposes. Each purpose is of equal importance to the practice scholar as the practice change project is designed.
P Patient/Population: The practice scholar states the relevant patients/individuals. What does the research literature say about the age, gender, and other specific characteristics, including social factors, that would be important in this practice problem? The concise articulation of the population is supported by a synthesis of research evidence.
I Intervention: The practice scholar states the research-evidence-based intervention to be translated to practice. The intervention is supported by research evidence. The practice scholar translates/transfers the intervention as it was tested and empirically proven across research studies. Note: The practice scholar does not alter the research-evidence-based intervention. Implementation fidelity requires that no changes be made to the empirically proven research intervention.
C Comparison: The practice scholar describes current practice. In a practice change project, the comparison is practice as usual.
O Outcome: The practice scholar’s intention is to learn whether the translation of a research-evidence-based intervention influences local practice outcomes. Practice change projects are not intended to design or test an intervention; therefore, a prediction is not made as outcomes are identified.
There may be a single practice outcome or several stated outcomes. The practice scholar returns to the research studies in which the research-evidence-based intervention was tested. These studies describe how study outcomes were measured. The practice scholar confirms that the outcomes identified in the practice change project are numerical and are captured using reliable and valid measures.
Reliability in statistics and psychometrics is the overall consistency of a measure. A measure is said to have a high reliability if it produces similar results under consistent conditions. For example, measurements of people’s height and weight are often extremely reliable.
Validity in statistics and psychometrics refers to the extent to which an assessment accurately measures what it is intended to measure.
T Timeframe: The timeframe for the DNP Practice Change Project is 8–10 weeks.