MPA625 M7D1: Henry Ford Jones & Universal Principles Part 1

Each part must be at least 500 words.

Use a question/answer format.

In this activity, you will critically evaluate the information provided in the Hubbard & Meyer (2013) article. It is important to remember to provide objective responses that include evidence to support your rationale.

For this activity, first read the following article:

Hubbard, R., & Meyer, C. K. (2013). The rise of statistical significance testing in public administration research and why this is a mistake. Journal of Business & Behavioral Sciences, 25(1), 4–20. Retrieved from http://vlib.excelsior.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=90141348&site=eds-live&scope=site

Consider the following statement, as cited in Hubbard and Meyer (2013, p. 12): “Thus, for example, Henry Ford Jones wrote in the early twentieth century that a goal of political science should be to provide ‘universal principles permanent in their applicability’ (Ross, 1991, p. 288).”

Respond to the following:

• What is wrong with this statement?

• Apart from what you have identified as wrong with the statement, determine and discuss why Henry Ford Jones’s views are inherently biased.

M7D2: P-Values or Not? Part 2

Each part must be at least 500 words.

Use a question/answer format.

In this activity, you will provide a critical analysis of the Hubbard and Meyer (2013) article concerning the usefulness of P-values in the public sector, applying what you have learned about data analysis and drawing conclusions. It is important to remember to provide objective responses that include evidence to support your rationale.

For this activity, first read the following article:

Hubbard, R., & Meyer, C. K. (2013). The rise of statistical significance testing in public administration research and why this is a mistake. Journal of Business & Behavioral Sciences, 25(1), 4–20. Retrieved from http://vlib.excelsior.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=90141348&site=eds-live&scope=site

Respond to the following:

• Provide a critical analysis of this article. Determine whether you agree or disagree with Hubbard & Meyer’s theory about the P-value. Identify and explain any gaps in the research or data contained in the article.

• Suggest ways to improve/disprove this argument, according to your position. Identify information that would strengthen or weaken the argument. Include evidence to support your rationale.

Resources and Notes:

Trochim, W. M. K. (2006). Analysis. Research Methods Knowledge Base. Retrieved from http://www.socialresearchmethods.net/kb/analysis.php

Module 7: Module Notes: Statistics & Decision-Making

Let’s return to the science fair. If you recall, there was a student who was interested in learning more about the relationship between plant growth and sunlight exposure. This student manipulated the amount of sunlight the plants received each day to identify the conditions under which the plants were more or less likely to grow. Because variables are manipulated, this is known as a true experiment.

Let us look at different aspects of the experiment.

Select each tab to learn more.

• Aspect 1: Chi-Square to Analyze Data

We know that the Chi-Square test generates a P-value. If the student measuring plant growth treated at least one variable as nominal, as in plants that received some sunlight compared to plants that received no sunlight, then it is acceptable to use a Chi-Square test to analyze the data. Suppose the test between these two variables yields P = .05. This means there is only a 5% probability that a relationship this strong between sunlight exposure and plant growth would appear by chance if there were no real relationship. Because we know that sunlight is a major factor in plant growth, we would have anticipated rejecting the null hypothesis and accepting the alternative hypothesis.
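As a sketch, the Chi-Square statistic for a 2×2 table like this can be computed by hand. The counts below are hypothetical, invented purely for illustration:

```python
def chi_square_2x2(table):
    """Chi-Square statistic for a 2x2 contingency table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for observed, row, col in [(a, row1, col1), (b, row1, col2),
                               (c, row2, col1), (d, row2, col2)]:
        expected = row * col / n  # expected count under the null hypothesis
        stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: grew vs. did not grow, with and without sunlight.
table = [[40, 10],   # sunlight:    40 grew, 10 did not
         [15, 35]]   # no sunlight: 15 grew, 35 did not
print(round(chi_square_2x2(table), 2))  # 25.25
```

With 1 degree of freedom, the critical value at the .05 level is 3.841, so any statistic above that corresponds to P < .05.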

• Aspect 2: Alternative Analysis

Suppose the student was also interested in how many plants died without exposure to sunlight compared to those that merely suffered but survived. The student might calculate a confidence interval and estimate that 56%–66% of the plants actually died without exposure to sunlight. Again, based on what we know about plant growth, we might have anticipated this outcome as well.
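A minimal sketch of how such an interval could be computed, using a Wald confidence interval for a proportion. The counts (223 of 366 plants) are hypothetical, chosen so the result lands near the 56%–66% figure above:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Wald confidence interval for a sample proportion (z = 1.96 for ~95%)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Hypothetical data: 223 of 366 plants died without exposure to sunlight.
low, high = proportion_ci(223, 366)
print(f"{low:.0%} to {high:.0%}")  # 56% to 66%
```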

• Aspect 3: Analyzing Accuracy

Now that we have generated a P-value and a Confidence Interval, we must consider which of the two is more accurate. Based on the findings from each of these tests, can we conclude that saying 56%–66% of the plants died without sunlight is the same as saying that most of the plants exposed to sunlight showed more growth? Keep in mind that, to some degree, we expected these outcomes. However, in the real world, we will be required to address all assumptions before executing a study in order to minimize bias in the analysis. In this example, we are not concerned about ethics or accuracy. In the real world, however, public administrators must place ethics at the forefront of all decisions, and the public expects transparency and accurate facts to inform administrative decisions.

• Aspect 4: Measures to Guide Decision-Making

Earlier in the course, we discussed the importance of a well-rounded statistical analysis because we know that people are not numbers. In the social sciences, administrators often rely on the P-value to inform decision-making. However, some researchers argue that the P-value, even when paired with a qualitative analysis, is a poor measure of statistical significance. Research indicates that a better guide for decision-making can be obtained from the interpretation of sample statistics, confidence intervals, and effect sizes.

Module 7: Module Notes: Considering Alternatives

At the science fair, and even in the scientific community itself, there is perhaps some room for failure. Scientists have the option to run an analysis multiple times to ensure its accuracy before it impacts society or any living beings. Social scientists, however, are often not given this opportunity. If we wanted to evaluate the number of traffic accidents at an intersection, we could not execute a true experiment that might impact human life.

Select each tab to learn more.

• Pareto Optimality & Effect Size

Even when we have executed a successful study of an issue, we still have to be careful to make responsible decisions that do more good than harm. This is known as Pareto Optimality. It is the careful balance between choosing policies that do the most good for the collective while only negatively impacting a few. In statistical terms, this is similar to evaluating the Effect Size, which is the size of the effect a variable or phenomenon has on a population. The questions for the public administrator become:

How much of an Effect Size is acceptable?

Are there populations within the sample that can be excluded from the Effect Size, and if so, why?

This is no easy task. The notion of improving and harming individuals is subjective and can vary according to an individual’s sociocultural background. Like statistical analysis, all things must be considered within a given context.
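One common effect-size measure is Cohen's d, the standardized difference between two group means. A minimal sketch, with hypothetical growth figures:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: standardized difference between two group means,
    scaled by the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical growth in cm: plants with sunlight vs. without.
d = cohens_d(mean1=12.0, sd1=3.0, n1=50, mean2=9.0, sd2=3.0, n2=50)
print(round(d, 2))  # 1.0
```

By the conventional rule of thumb, d around 0.2 is a small effect, 0.5 medium, and 0.8 or more large, which gives the administrator a concrete sense of how many people an intervention would meaningfully affect.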

• P-Values and Confidence Intervals

Administrators using an analysis that generates a P-value at the conventional .05 threshold are essentially stating that there is only a 5% probability that the findings occurred by chance. But whether this value can be applied to the total population depends upon the sample size and other characteristics. You learned earlier that in a population of 5,000 or more, a sample of about 400 people is generally sufficient. You must ask yourself:

Is it better to know that you could expect a particular outcome 95% of the time if the same test were repeated (P-value)?

Is it better to know that a particular variable is impacted in a particular way, say, 55% of the time (Confidence Interval)?

If we agree that the P-value is the most significant measure, then we agree that a sample size of 400 in a population of 5,000 or more is usually acceptable, and that these findings can be applied to the general population. If we agree that the Confidence Interval is the more accurate measure, then we have to determine the percentage that would be acceptable for decision-making. One could argue as little as 51% because it is more than half.

However, giving consideration to Effect Size and Pareto Optimality, we know that the general population cannot mean every citizen in the United States. Even though we have the right population and sample-size numbers, this does not mean that making a decision for all 5,000 people, or even 51% of them, is Pareto Optimal. Odds are there are hugely important differences within this population of 5,000 people, and what improves conditions for one person may worsen them for another.
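The "sample of 400 for a population of 5,000 or more" rule of thumb can be sketched with the standard sample-size formula for estimating a proportion, using the conservative assumption p = 0.5 and a 5% margin of error:

```python
import math

def sample_size(margin_of_error, z=1.96, p=0.5):
    """Minimum sample size to estimate a proportion within a margin of error.
    p = 0.5 is the most conservative (largest-n) assumption."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(sample_size(0.05))  # 385 -- close to the "sample of 400" rule of thumb
print(sample_size(0.10))  # 97 -- a looser margin needs far fewer people
```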

• Sample Statistics

This is where sample statistics become important. Sample statistics include the mean, variance, and proportion of the sample itself; examining them is essentially an evaluation of central tendency within the collected sample, rather than the full data set. You learned that the mean is often the best measure of the points within a data set because it includes every point. But you also learned that including every point can distort the picture and require you to use moving averages to get a more accurate view of the data over a period of time.

Hubbard and Meyer (2013) assert that the P-value is the least precise measure of all of the options presented here. They suggest that each of the other measures offers a more accurate view than the P-value and that the use of the P-value is arbitrary, mindless, and habitual at best (Hubbard & Meyer, 2013). What do you think?
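The sample statistics discussed above (mean, variance, and a moving average to smooth the series) can be sketched in a few lines. The growth figures are hypothetical, with one outlier included to show how it distorts the mean:

```python
def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def moving_average(xs, window):
    """Average of each consecutive window, smoothing the series over time."""
    return [mean(xs[i:i + window]) for i in range(len(xs) - window + 1)]

growth = [2, 3, 2, 40, 3, 4, 3]     # one outlier (40) drags the mean upward
print(round(mean(growth), 2))       # 8.14 -- distorted by the outlier
print(round(variance(growth), 2))
print(moving_average(growth, 3))    # local averages expose the typical level
```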

Reference

Hubbard, R., & Meyer, C. K. (2013). The rise of statistical significance testing in public administration research and why this is a mistake. Journal of Business & Behavioral Sciences, 25(1), 4–20. Retrieved from http://vlib.excelsior.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=90141348&site=eds-live&scope=site

Module 7: Module Notes: Expert Talk

You have been provided with a great deal of information in this module. It may seem like a bit of an overload. Take a moment to step away from statistics to make sense of what has been presented.

You know that the American government is a federalist structure. The Founding Fathers specifically designed the government this way because they wanted each state to have the freedom to maintain its own customs and cultures through governance. This means that every government decision, whether made at the state or local level, must consider the diverse populations it impacts.

Does that sound similar to our course theme? That is because there is not really a right or wrong answer. Sometimes it is best to use one test, sometimes you may need a mixture of different tests, or you may need more than one set of variables. It can become very complex, but in the end, it all depends upon context. It depends on the populations, the variables, and the data. Choosing the incorrect statistical test can yield incorrect results and lead to bad decisions. However, assuming that one test or one set of numbers can be used to make a decision without considering the context may be considerably more dangerous.

The following is from:

Trochim, W. M. K. (2006). Analysis. Research Methods Knowledge Base. Retrieved from http://www.socialresearchmethods.net/kb/analysis.php

Analysis

By the time you get to the analysis of your data, most of the really difficult work has been done. It’s much more difficult to: define the research problem; develop and implement a sampling plan; conceptualize, operationalize and test your measures; and develop a design structure. If you have done this work well, the analysis of the data is usually a fairly straightforward affair.

In most social research the data analysis involves three major steps, done in roughly this order:

• Cleaning and organizing the data for analysis (Data Preparation)

• Describing the data (Descriptive Statistics)

• Testing Hypotheses and Models (Inferential Statistics)

Data Preparation involves checking or logging the data in; checking the data for accuracy; entering the data into the computer; transforming the data; and developing and documenting a database structure that integrates the various measures.

Descriptive Statistics are used to describe the basic features of the data in a study. They provide simple summaries about the sample and the measures. Together with simple graphics analysis, they form the basis of virtually every quantitative analysis of data. With descriptive statistics you are simply describing what is, what the data shows.

Inferential Statistics investigate questions, models and hypotheses. In many cases, the conclusions from inferential statistics extend beyond the immediate data alone. For instance, we use inferential statistics to try to infer from the sample data what the population thinks. Or, we use inferential statistics to make judgments of the probability that an observed difference between groups is a dependable one or one that might have happened by chance in this study. Thus, we use inferential statistics to make inferences from our data to more general conditions; we use descriptive statistics simply to describe what’s going on in our data.

In most research studies, the analysis section follows these three phases of analysis. Descriptions of how the data were prepared tend to be brief and to focus on only the more unique aspects to your study, such as specific data transformations that are performed. The descriptive statistics that you actually look at can be voluminous. In most write-ups, these are carefully selected and organized into summary tables and graphs that only show the most relevant or important information. Usually, the researcher links each of the inferential analyses to specific research questions or hypotheses that were raised in the introduction, or notes any models that were tested that emerged as part of the analysis. In most analysis write-ups it’s especially critical to not “miss the forest for the trees.” If you present too much detail, the reader may not be able to follow the central line of the results. Often extensive analysis details are appropriately relegated to appendices, reserving only the most critical analysis summaries for the body of the report itself.
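Trochim's three phases can be illustrated on toy data. The survey responses below are hypothetical; this is a sketch of the sequence, not a full analysis:

```python
# Hypothetical raw survey responses, some invalid.
raw = ["4", "5", "", "3", "five", "4", "2", "5"]

# 1. Data Preparation: check the data for accuracy and clean it.
clean = [int(x) for x in raw if x.isdigit()]

# 2. Descriptive Statistics: summarize what the sample shows.
n = len(clean)
sample_mean = sum(clean) / n

# 3. Inferential Statistics: reach beyond the sample, here via the
#    standard error of the mean, the basis for interval estimates.
sample_var = sum((x - sample_mean) ** 2 for x in clean) / (n - 1)
std_error = (sample_var / n) ** 0.5

print(n, round(sample_mean, 2), round(std_error, 2))  # 6 3.83 0.48
```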

Resource

Trochim, W. M. K. (2006). Analysis. Research Methods Knowledge Base. Retrieved from http://www.socialresearchmethods.net/kb/analysis.php