Prevalence of Psychiatric Illness in Adolescents after Detention
This is the Topic
Complete a PICO(T) search on a topic that pertains to your practice setting. Select one of the articles from your search. Identify the descriptive statistics. Then describe the inferential tests that were used in the article (in other words, t-tests and chi-squares). Given the p-values related to the tests, how do you interpret the results? Are they statistically significant? Are they also clinically significant? What are the recommendations based on this paper? Share some alternate explanations (mediating or intervening variables) for the results of the study. If your chosen study does not contain inferential tests, then choose a different article that does contain inferential tests so you can participate in the discussion.
Write a minimum of 350-450 words in Times New Roman. Include 2-3 references that are no older than 5 years. I prefer the topic to be on adolescents with psychiatric illness, but if that is too difficult to find, you can write about adults. Only use scholarly articles. Do not use any references from .com sites. Follow the instructions above, please. This is course work; it does not require a title page. Thank you.
This is the Reading Assignment
Introduction
Healthcare disciplines use statistical methods to analyze research while implementing best practices to improve patient outcomes. Statistics are the essence of the content published in major peer-reviewed journals; of reports prepared for government, industry, and the general public through web resources; and of the reported outcomes of private healthcare organizations. Therefore, it is critical for advanced-practice nurses to understand how to interpret and analyze statistical reports and research data.
In this lesson, we will review the statistics most commonly used in research. Recall that a statistic is a measurement, such as blood pressure, or a calculation, such as mean blood pressure in a sample. Statistics help us understand quantities and are useful in making decisions. It is crucial for the nurse leader and educator to be able to accurately interpret and critique the results of statistical testing. Several types of software are available to conduct statistical analyses.
Descriptive Statistics
The results of quantitative research are divided into two categories of statistics: descriptive statistics and inferential statistics.
Descriptive statistics provide a picture of the population of interest in a study by gathering numerical information from a representative sample, particularly if the sample is randomly selected. There are several types of descriptive statistics.
• Demographic parameters (variables)
o Age
o Gender
o Race/ethnicity
o Educational level
o Religion
o Socioeconomic status or income
• Other variables of interest in a study, based on the research question
o Physiological parameters, such as pulse, blood pressure, serum glucose level
o Attitudes, feelings, or perceptions, such as depression, self-efficacy
o Behaviors, such as physical activity, dietary habits, or hours devoted to homework
Descriptive variables are reflected in research questions and the purposes of research.
In strictly descriptive research, demographic variables are important. For example, “What is the typical age, race, and educational level of the people in Ironridge?”
In explanatory research, researchers seek to elucidate relationships and differences in variables. Therefore, we might ask, “How are educational level and income related in the city of Summerville?” In another example, “What is the difference in income level between Ironridge and Summerville?”
In predictive research, the researcher seeks the combination of variables that best predicts a dependent variable. These relationships are often confounded by mediating or more obscure variables, typically demographic or medical variables such as age or comorbidities.
In prescriptive research, we are testing the effect of an intervention on a dependent variable, such as, “Does a weekly exercise class improve mood in elderly women?” In this example, we are controlling for age and gender.
Levels of Measurement and Frequency Tables
The most common descriptive statistics are frequencies, means, modes, and medians. Understanding these statistics first requires a discussion of the levels of measurement. Note the following levels of measurement, from the one providing the most information at the top to the one providing the least information on the bottom.
[Figure: levels of measurement, from ratio (most information) through interval and ordinal to nominal (least information)]
Frequencies, which are simply counts, are typically reported for nominal and ordinal variables. Frequencies can be displayed in tables or graphs. An example of a frequency distribution table follows.
Table 1
Educational Levels of the Sample, Displayed by Gender
Educational Level	Female n (%)	Male n (%)	Total n (%)
Less Than High School 5 (5%) 7 (7%) 12 (12%)
High School 12 (12%) 14 (14%) 26 (26%)
Some College 13 (13%) 9 (9%) 22 (22%)
Four Year Degree 15 (15%) 13 (13%) 28 (28%)
Graduate Degree 5 (5%) 7 (7%) 12 (12%)
Total 50 (50%) 50 (50%) 100 (100%)
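Frequency tables like Table 1 are usually produced with statistical software. The following is a minimal sketch, not part of the original lesson, showing how a similar cross-tabulation could be built in Python with pandas; the variable names and the small data set are hypothetical.

import pandas as pd

# Hypothetical sample data: one row per participant.
df = pd.DataFrame({
    "gender": ["Female", "Male", "Female", "Male", "Female", "Male"],
    "education": ["High School", "Some College", "Four Year Degree",
                  "High School", "Graduate Degree", "Less Than High School"],
})

# Cross-tabulate counts of educational level by gender, with totals.
counts = pd.crosstab(df["education"], df["gender"], margins=True, margins_name="Total")

# Express each cell as a percentage of the whole sample.
percentages = (counts / len(df) * 100).round(1)

print(counts)
print(percentages)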
Measures of Central Tendency and Variation and the Normal Probability Distribution
Frequency counts and frequency tables could be developed on all variables but are not as useful as means, modes, and medians when describing continuous variables, such as age or blood pressure. These measures of central tendency help the consumer of research understand the patterns in the data.
The mean of a distribution is simply the average. The standard deviation, a measure of the variation in scores, represents the average amount by which scores or values deviate from the mean. Both the mean and the standard deviation should be reported in statistical results, but the range of scores is also useful in giving the consumer or researcher information about the distribution. The mode of a distribution is the most common score. The median is the midpoint of a distribution: the point where 50% of the scores lie below and 50% lie above.
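As a brief illustration (not part of the original lesson), these measures of central tendency and variation can be computed with Python's standard statistics module; the blood pressure readings below are hypothetical.

import statistics

systolic_bp = [118, 122, 130, 118, 141, 126, 118, 135]  # hypothetical readings

print("mean:", statistics.mean(systolic_bp))
print("median:", statistics.median(systolic_bp))
print("mode:", statistics.mode(systolic_bp))
# stdev() gives the sample standard deviation (divides by n - 1).
print("standard deviation:", round(statistics.stdev(systolic_bp), 1))
print("range:", max(systolic_bp) - min(systolic_bp))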
Calculating probabilities depends primarily on the mean and standard deviation of a collection of scores, which together describe the probability distribution. The probability of a specific outcome is the proportion of times that outcome would occur over many repeated observations. A simple example is tossing a fair coin. In only 10 tosses, the observed result might be four heads and six tails. As the number of tosses grows, however, the proportion of heads approaches 50% and the proportion of tails approaches 50%. This is the law of large numbers.
A probability distribution lists the possible outcomes of a variable together with their probabilities (in the coin example, the possible numbers of heads). A probability distribution has parameters describing its central tendency (mean) and variability (standard deviation). When the values of many naturally occurring continuous variables are graphed, the result approximates a normal probability distribution. The properties and characteristics of the normal probability distribution are the following; a short sketch after the list verifies the empirical-rule percentages.
• Bell-shaped and symmetrical
• The empirical rule for normal distribution consists of the following:
o 68.2% of the population measurements lie within one standard deviation of the mean.
o 95.4% of the population measurements lie within two standard deviations of the mean.
o 99.7% of the population measurements lie within three standard deviations of the mean.
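Assuming SciPy is available, the following minimal sketch (not part of the original lesson) verifies the empirical-rule percentages directly from the standard normal distribution.

from scipy.stats import norm

for k in (1, 2, 3):
    # Probability of a value falling within k standard deviations of the mean.
    coverage = norm.cdf(k) - norm.cdf(-k)
    print(f"within {k} SD: {coverage:.1%}")
# Prints approximately 68.3%, 95.4%, and 99.7%.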
Confidence Intervals
Although descriptive statistics are very useful because they show the structure and shape of the findings from a research study and illustrate trends over time or differences among groups, these statistics only describe the sample. By themselves, descriptive statistics are point estimates of values in the population; they do not indicate how closely a point estimate is likely to reflect the true population value.
In an effort to indicate how precise a point estimate such as a mean value is, interval estimation can be used by constructing a confidence interval (CI) around the point estimate. A range is calculated around a mean value or odds ratio. The two most common CIs are 95% and 99%. A 95% CI means that if the study were repeated 100 times, approximately 95 of the resulting intervals would contain the true population value; a 99% CI means that approximately 99 of 100 such intervals would contain it.
Another aspect of interpreting the confidence interval is that the wider the interval, the less precise, and therefore the less useful, the point estimate is. In addition, if the CI for a mean difference between groups contains zero (0), then the results will probably not be statistically significant (a mean difference of zero is consistent with the null hypothesis of no difference). If the interval for an odds ratio contains one (1), then the results will probably not be statistically significant (an odds ratio of one means either event is equally likely, which is consistent with the null hypothesis of no difference).
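As a hedged example (not from the lesson), a 95% confidence interval for a sample mean can be computed with SciPy's t distribution; the scores below are hypothetical.

import numpy as np
from scipy import stats

scores = np.array([72, 78, 81, 69, 85, 77, 74, 80, 79, 76])  # hypothetical scores

mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, len(scores) - 1, loc=mean, scale=sem)

# If the interval is wide, the point estimate (the mean) is less precise.
print(f"mean = {mean:.1f}, 95% CI = ({ci_low:.1f}, {ci_high:.1f})")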
Risk-Related Statistics
When reading healthcare statistics, you will also read about various statistics related to the probability of a medical event occurring. These are most helpful when selecting a medical intervention.
Absolute risk (AR)–This is the probability that an event will occur in a particular population. For example, “What is the absolute risk of people in Massachusetts developing lung cancer?”
Relative risk (RR)–This is the probability that a medical event will occur in people who have been exposed to a risk in a relationship compared to those who have not been exposed. For example, “What is the relative risk that people who smoke will develop lung cancer in comparison with people who do not smoke?”
Relative Risk Reduction (RRR)–This is the proportion of risk that is lessened with the introduction of an intervention.
Odds Ratio (OR)–This is the odds of experiencing the disease or event among people who have been exposed to a risk divided by the odds among those who have not been exposed.
Number Needed to Treat (NNT)–This is the number of people who must be treated for one person to benefit from an intervention. For example, “How many people need to receive a new chemotherapy drug before one person benefits?” The NNT is the inverse of the absolute risk reduction achieved when an intervention is used. When the NNT is small, the intervention is probably more effective than when the NNT is large.
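The following minimal sketch (not part of the lesson) computes these risk statistics from a hypothetical two-group trial in which 10 of 100 treated people and 30 of 100 control people experience the event.

# Hypothetical counts: event vs. no event in each group.
treated_event, treated_no_event = 10, 90
control_event, control_no_event = 30, 70

ar_treated = treated_event / (treated_event + treated_no_event)   # absolute risk, treated
ar_control = control_event / (control_event + control_no_event)   # absolute risk, control

rr = ar_treated / ar_control                                       # relative risk
rrr = (ar_control - ar_treated) / ar_control                       # relative risk reduction
odds_ratio = (treated_event / treated_no_event) / (control_event / control_no_event)
arr = ar_control - ar_treated                                      # absolute risk reduction
nnt = 1 / arr                                                      # number needed to treat

print(f"AR treated = {ar_treated:.2f}, AR control = {ar_control:.2f}")
print(f"RR = {rr:.2f}, RRR = {rrr:.2f}, OR = {odds_ratio:.2f}, NNT = {nnt:.1f}")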
Inferential Statistics
Inferential statistics are used to determine how confident we can be that the descriptive statistics obtained from the sample can be inferred to the population. It usually is not practical to study an entire population. As a result, inferential statistical tests were developed to determine the probability that the findings from the sample in a study can be inferred to the population. In other words, inferential statistical tests determine whether the same differences or similarities in descriptive statistics obtained from the sample would be found in the population if the entire population were studied. Thus, inferential statistics help us infer from the sample to the population.
All significance tests have five components: assumptions, hypothesis, p-value, level of significance, and test statistics.
Assumptions refer to suppositions about the type of data included in a study, the population distribution, characteristics of the population, the randomness of the sample, sample size, and the underlying theory being tested. We tend to assume that the sample represents the population in inferential statistics.
A hypothesis is a tentative prediction about a population parameter. A parameter can be a mean, median, or proportion. The prediction is tested using the measure of the variable obtained from a sample. Once the hypothesis is stated, the researcher collects data to test it; the evidence can support or contradict the hypothesis, but it cannot prove it.
The null hypothesis is symbolized by Ho. The null hypothesis is the hypothesis that an intervention does not affect an outcome or that a relationship does not exist. The decision based on inferential testing is either to reject the null hypothesis or fail to reject the null hypothesis. An example of a null hypothesis is, “Nurses working at Magnet hospitals do not score higher on job satisfaction than nurses working at non-Magnet hospitals.” Researchers typically do not believe their null hypotheses but state their hypotheses negatively because proving that something is true is never possible.
The alternative hypothesis is symbolized by Ha. It is the hypothesis that contradicts the null hypothesis and is also known as the research hypothesis. An example of an alternative hypothesis is, “Nurses working at Magnet hospitals score higher on job satisfaction than nurses working at non-Magnet hospitals.” The decision as to whether to use a null or an alternative hypothesis, or both, belongs to the researcher.
The level of significance is represented by the Greek letter alpha (α). The two most common alpha levels are 0.05 and 0.01. Of these alpha levels, 0.05 is the more commonly used. If an alpha level is not specified in a published research article, then it is assumed to be 0.05.
Going back to the normal distribution, the area under the curve of a probability distribution is the probability of any value falling in that area. If the test statistic falls in the critical region in the tails of the distribution (p ≤ 0.05 or 0.01), the probability that the result is due to chance is acceptably small and the findings of the analysis are statistically significant (see the properties of the normal distribution described above).
The p-value summarizes the evidence in the data about the null hypothesis. It is the probability, assuming Ho is true, of obtaining a test statistic at least as extreme as the one observed.
• For example, a p-value of 0.26 indicates that the observed data would not be unusual if Ho were true. However, if the p-value equaled .01, then data this extreme would be very unlikely under Ho and would provide strong evidence against it.
Thus, if the p-value in the hypothesis example is .01, the null hypothesis would be rejected in favor of the alternative hypothesis. Using the previous example, with a p-value of .01, nurses at Magnet hospitals would score higher on a job satisfaction survey than nurses at non-Magnet hospitals, and the higher scores are not likely due to chance.
The test statistic is the statistical calculation from the sample data used to test the null hypothesis (e.g., t-tests, chi-square tests). Researchers have developed hundreds of test statistics designed to detect relationships or differences in their data. We will cover the most common test statistics in the following section.
Common Test Statistics
The most common statistics rely on determining correlations or differences. The type of test depends on the research question or hypothesis and the levels of measurement of the variables.
Measurement of Correlation and Regression
Pearson’s r
The Pearson’s r statistic, also known as the product-moment correlation coefficient, is the most common measurement of correlation and is used to relate two interval or ratio variables. Pearson’s r ranges from -1 to +1; the sign indicates the direction of the relationship, and the absolute value indicates its strength. Examining a correlation table is interesting to the research consumer, but it has its limitations. Knowing that two variables are strongly related does not tell you which variable came first or whether one causes the other.
Tip
Establishing causation requires three things:
One variable must precede the other, a relationship must exist, and no alternative variable can explain the relationship.
Spearman’s rho (ρ)
Spearman’s rho is similar to Pearson’s r but is calculated with ordinal-level variables. For both correlation statistics, an absolute value of 0.0–.30 indicates no relationship or a very weak relationship, .31–.69 is considered a moderate relationship, and .70 or above is considered a strong relationship.
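As a small illustration (not part of the lesson), both coefficients can be obtained with SciPy; the paired measurements below are hypothetical.

from scipy import stats

hours_of_exercise = [0, 1, 2, 3, 4, 5, 6, 7]
depression_score = [28, 26, 25, 20, 18, 15, 14, 10]  # hypothetical values

r, p_r = stats.pearsonr(hours_of_exercise, depression_score)
rho, p_rho = stats.spearmanr(hours_of_exercise, depression_score)

# Both coefficients range from -1 to +1; here the relationship is strongly negative.
print(f"Pearson r = {r:.2f} (p = {p_r:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")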
Multiple Regression
Multiple regression is one of several multivariate procedures used to describe complex predictive relationships. It models the relationship between a dependent variable and several proposed predictor variables. What variables best predict an undergraduate student’s grade-point average? Which patients are most vulnerable to falls? What combination of preventive measures will prevent pressure sores?
Calculation of the multiple regression statistic, R², is complex and requires special knowledge and skill, as well as appropriate software. The statistician enters a set of variables into a regression equation to discover the best predictors of an outcome. Multiple regression is very helpful in identifying risk factors for a disease or other outcome when the dependent variable is an interval or ratio variable; when the dependent variable is nominal, two other procedures, logistic regression and discriminant function analysis, are used instead.
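The sketch below (not part of the lesson) fits a small multiple regression with the statsmodels library, assuming it is installed; the predictors and outcome are hypothetical.

import pandas as pd
import statsmodels.api as sm

data = pd.DataFrame({
    "study_hours": [5, 10, 8, 2, 12, 7, 9, 4],
    "sleep_hours": [6, 7, 8, 5, 7, 6, 8, 5],
    "gpa":         [2.8, 3.5, 3.4, 2.4, 3.8, 3.1, 3.6, 2.7],
})

X = sm.add_constant(data[["study_hours", "sleep_hours"]])  # add the intercept term
model = sm.OLS(data["gpa"], X).fit()

print("R-squared:", round(model.rsquared, 2))  # proportion of variance explained
print(model.params)                            # regression coefficients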
Measurement of Difference
Chi Square Test
The chi-square test is one of the most widely used statistical calculations in inferential statistics. The chi-square test investigates differences in the frequencies of nominal or ordinal variables between two or more independent groups. When the chi-square statistic has a p-value ≤ 0.05, we conclude that the groups differ significantly in those frequencies.
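A minimal sketch (not part of the lesson) of a chi-square test of independence with SciPy follows; the counts of male and female nurses in each hospital type are hypothetical.

from scipy.stats import chi2_contingency

#                Magnet  non-Magnet (hypothetical counts)
observed = [[40,     25],    # male nurses
            [160,    175]]   # female nurses

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
# A p-value <= 0.05 would indicate the gender distribution differs between hospital types.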
The Question
A researcher is curious about whether there is a difference in the number of male nurses and female nurses between Magnet and non-Magnet hospitals. The chi-square = 13.2, p = .35. Is there a significant difference in the number of male and female nurses in Magnet versus non-Magnet hospitals?
T-tests
The t-test determines if there is a statistically significant difference between the means of two groups. It is frequently used when comparing two groups, specifically in a randomized experimental design in which one group receives an intervention and the other does not. It can also be used to examine pretest and posttest scores within one group (a paired t-test). For example, in the following illustration, a job satisfaction score is compared between nurses working for a Magnet hospital versus a non-Magnet hospital.
[Figure: job satisfaction scores compared between nurses at a Magnet hospital and nurses at a non-Magnet hospital]
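The comparison can be run as an independent-samples t-test; the following is a minimal sketch with SciPy (not part of the lesson), using hypothetical satisfaction scores.

from scipy import stats

magnet_scores = [78, 82, 88, 75, 90, 84, 80, 86]        # hypothetical scores
non_magnet_scores = [70, 74, 79, 68, 81, 72, 76, 73]

t_stat, p_value = stats.ttest_ind(magnet_scores, non_magnet_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# If p <= 0.05, the difference in mean satisfaction is statistically significant.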
The Question
A researcher conducts a t-test to determine whether nurses at Magnet hospitals are more satisfied with their work than nurses at non-Magnet hospitals. He or she sets the significance level of the test at .05. The t = 4.23, p = .034. Assuming that he or she has set up a quality study, is the result evidence that nurses at Magnet hospitals are significantly more satisfied with their work than nurses at non-Magnet hospitals? Why or why not?
Analysis of Variance
Analysis of variance (ANOVA) is a more complicated procedure in which the means of three or more groups are compared. In experimental research, the three groups might represent an experimental treatment group, a usual treatment group, and a placebo group. When neither the subjects nor the researchers know who is in which group, the study is said to be double-blinded. The means that are compared at the end of the study would be some type of interval or ratio variable, such as a score on a depression scale or a mean blood pressure. The illustration below shows the difference in study groups in an ANOVA.
[Figure: three study groups (experimental treatment, usual treatment, and placebo) compared in an ANOVA]
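A minimal one-way ANOVA sketch with SciPy (not part of the lesson) follows; the depression scores for the three groups are hypothetical.

from scipy import stats

experimental = [12, 10, 9, 14, 11, 8]    # hypothetical depression scores
usual_care = [15, 17, 14, 16, 18, 15]
placebo = [19, 21, 18, 22, 20, 19]

f_stat, p_value = stats.f_oneway(experimental, usual_care, placebo)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A significant F says only that the means differ somewhere; post-hoc tests
# are needed to identify which pairs of groups differ.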
The following are other analysis-of-variance procedures that you will see.
• Repeated measures ANOVA, comparing three or more repeated measurements of the same dependent variable on each participant
• Analysis of covariance (ANCOVA), in which covariates or mediating variables are included in the analysis, thus accounting for their influence on the dependent variable among the study groups
• Multivariate analysis of variance (MANOVA), in which the researcher is interested in more than one dependent variable among three or more groups (an example is a study with intervention-for-infection, usual treatment, and placebo groups, with both leukocyte count and temperature as dependent variables)
A Note on Clinical Versus Statistical Significance
When reading research reports, finding statistically significant results is exciting. Does a study showing a statistically significant result mean that a tested intervention is promising? Recall that statistical significance is the confidence we have that the results from a study did not happen by chance or error. Statistical significance is the result of the magnitude of difference between measures of central tendency (for example, two mean values), the standard deviation, and the sample size.
With a large enough sample size, a very small difference in mean values can be significant. In a study of medication effectiveness, for example, a difference of 1.5% (3% had relief of symptoms in the experimental group versus 1.5% in the control group) can be statistically significant in a sample of 3,000 people. However, is that result clinically significant? Should clinical practice be changed due to this result?
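To illustrate that point, the hedged sketch below (not part of the lesson) assumes roughly 1,500 people per group and uses SciPy's chi-square test; with these hypothetical counts, a 3% versus 1.5% relief rate is statistically significant even though the absolute difference is small.

from scipy.stats import chi2_contingency

#            relief  no relief (hypothetical counts, ~1,500 per group)
observed = [[45,     1455],   # experimental group (3% relief)
            [22,     1478]]   # control group (about 1.5% relief)

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")  # p falls well below 0.05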
Clinical significance is based on a broader examination of research results and several questions (Peake, 2013).
• Large sample questions—Is the difference in outcomes small but statistically significant, as occurs with very large sample sizes? Note that the larger the sample, the more likely that even a trivially small difference will reach statistical significance, so small but significant results may be an artifact of sample size.
• Use of graphs and tables—Do the graphs and tables displayed in a study demonstrate meaningful results, or are they misleading? Small, statistically significant results can be made to look large and meaningful.
• Clinician factors—How much should clinicians also rely on their experience and clinical expertise? Does the clinician’s knowledge of different cultures affect clinician choice?
• Patient and family concerns—What effect would a new treatment have on patients’ lifestyles and their ability to comply with treatment?
• Small sample questions—Does the study show large differences in outcomes that are not statistically significant? If so, was the sample size small, indicating the need for further study?
Summary
Understanding the statistics covered in this lesson provides a basis for reading research articles and is an important foundation for evidence-based practice. Examining studies in the aggregate to determine the body of evidence on a research question is especially important, because individual studies sometimes conflict with each other.
Clinical practice guidelines and best-practice documents help us examine the findings of research in the aggregate. In Week 6, we will discuss how these particular documents help us translate research into practice through the development of care protocols and other types of care plans.
References
Peake, R. W. (2013). Significance for the sake of significance: The relevance of statistical data. Clinical Chemistry, 59, 1002.