First Glossary Example:

Research: "a systematic investigation, including research development, testing and evaluation, designed to contribute to generalizable knowledge. When evaluating a specific project, it is helpful to focus on two key elements:
  1. the project involves a systematic investigation, and
  2. the design – meaning the goal, purpose, or intent – of the investigation is to develop or contribute to generalizable knowledge. Having only one of these properties means that the activity is not “research” and should not be handled as such.
One helpful question to aid in determining whether a project is "research", is whether the investigators desire or may desire to publish the results of their project in a journal, or present some aspect of the project at an academic meeting. While this is often a practical key factor in determining intent involving a project, there is a distinction between publication that is merely educational in intent, such as a medical journal that may contain an article that discusses information that is not the result of a research activity versus an article that addresses results of a research activity."
  • Alysia--Research on human subjects requires //IRB approval//--we will be covering this in class later in the semester.


Literature Review: (Chapter 4, pg. 89 in the textbook)


Definition: A way for a researcher to organize the literature they have read. This is a running document in which articles, books, or other information are summarized and grouped by topic, or by how the research relates to their study in some way. It may aid researchers in finding the “gaps” in their research area, and will later help them find the information they would like to reference in their studies. This is generally the first step in the research process.

As the researchers were looking at their “literature review”, they realized that more research was needed in the behavioral area of RtI, as their lit review was sparse in that area; therefore, they decided to design a study focusing on behavioral change using RtI. (Mary Kate Watters)

Sampling Error: Wiersma, W. (2000). Sampling designs. In Research methods in education: An introduction (7th ed., pp. 269-294).

Definition: Basically, sampling error occurs because you are looking at a sample of the population rather than the entire population. A smaller sample drawn from a large population will produce larger error; the greater your sample size, the more closely the sample will resemble the whole population. This is one source of variation in quantitative data.

Example: Let’s say our population is all 2nd graders at Tallahassee Elementary (234 students). We select a random sample of 75 of those students to give a gifted screener, the Kaufman Brief Intelligence Test (KBIT). The mean IQ score is 98. We know that the mean would change if we had screened every student, but we can be confident that it will be around the 98 we got. The difference between what we got and what we’d get if we had screened the entire population would be considered the sampling error.
-- Taryn McCormick
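The Tallahassee Elementary example can be simulated with a short Python sketch. The population scores below are randomly generated stand-ins (not real KBIT data), but the logic is the same: the sample mean will land near, not exactly on, the population mean.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population: IQ-like scores for 234 second graders
# (randomly generated stand-ins, not real KBIT data)
population = [round(random.gauss(98, 15)) for _ in range(234)]
population_mean = statistics.mean(population)

# Draw a random sample of 75 students, as in the example
sample = random.sample(population, 75)
sample_mean = statistics.mean(sample)

# Sampling error: the gap between the sample estimate and the
# value we would get by screening the entire population
sampling_error = sample_mean - population_mean
print(population_mean, sample_mean, sampling_error)
```

Re-running with a different seed gives a different sampling error each time, which is exactly the point: the error comes from which 75 students happened to be drawn.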

Single-Subject Design or Research
Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (chapter 11, pg. 320)
Maxwell, J. (2004). Causal Explanation, Qualitative Research, and Scientific Inquiry in Education. Educational Research, 33(2), 3-11.

Definition: A research design that looks at individual versus group effects. Participants in the study are compared to themselves. All designs of this type have a phase where the participant is observed and no treatment is given, and a phase where treatment is given. The no-treatment and treatment phases are repeatedly replicated and analyzed to identify effective treatments.

Example: A school psychologist may use this design in the schools to determine an effective intervention for a student. For example, a teacher has a student who engages in off-task behavior. The school psychologist suggests that the teacher use differential reinforcement to encourage the child to display on-task behavior. The school psychologist may then observe the classroom and conduct a single-subject design to see if the intervention is effective.
-Sadé Tate


Purpose Statement: Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (chapter 5, pg. 121)

Definition: A concise statement (including one or more complete sentences) explaining the focus, direction, or reason for a research study. A purpose statement is used in both qualitative and quantitative research.

Example: When completing assignment 4 in this class, we were asked to determine the purpose of the study of the article we selected. When reading through the article we chose, we all looked for the purpose statement, to find out what the author was hoping to accomplish in their research, or what the focus of the article was. This was most likely one or a few sentences describing the purpose of the research. When we found the author's purpose statement, we were able to complete that part of the assignment.
-Mary Kate Watters

Nomothetic: Selected Readings from the Research Methods Knowledge Base Website (pg. 7)

Definition: Derived from the Greek, meaning enacting laws. The laws are universal, or general, as opposed to laws formed for individuals. In psychology, the term nomothetic is often used to describe social research, which is concerned with generalizing findings to a population or group, as opposed to one individual. Social psychologists are mostly interested in studying a cohort or class of people. The opposite of nomothetic is idiographic, meaning to study the individual.

Example: In studying the effects of group therapy for the treatment of anxiety, researchers decided to use a nomothetic approach in order to generalize to the broader population and identify a possible evidence-based treatment modality.

-Katie Lazarus

Maximal Variation Sampling: Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (chapter 8, pg. 214)

Definition: A sampling procedure in qualitative research in which the researcher specifically selects individuals that vary based on a particular characteristic.

Example: As a qualitative researcher you may identify education level as the characteristic for your cases. Using maximal variation sampling you may select an individual with a high school degree, an individual with a 2-year degree and an individual with a 4-year degree.
-Marisa Warner

Comparison Question: Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (chapter 5, pg. 136)

Definition: These are questions that researchers may ask when conducting a study in which there are two or more groups, where at least one group receives the intervention and one does not, in order to determine how the groups (the levels of the independent variable) differ on the dependent variable.

Example: “How do adolescent boys compare to adolescent girls in their perceptions of body image?”
-Marisa Warner


Confounding Variables (Chapter 5, pg. 130 in the textbook)

Definition: Confounding variables (also called spurious variables) are traits or factors that cannot be directly measured or controlled, because the specific elements and how they affect the dependent variable are interrelated among the other variables. Since it is difficult to discriminate these spurious variables from the independent variables, the independent variables’ level of influence on the outcome and whether the independent variables caused the effect are unknown.

Example: An experiment evaluates the effects of CBT in group counseling versus CBT in individual counseling on symptoms of anxiety in adolescents. Since anxiety and counseling involve many complex components, there are many potential confounding variables in this study, such as the participants’ personality traits, counselor’s style of counseling, initial severity of anxiety, and level of social support. It is difficult to determine whether the effects were caused by the type of treatment the participants received, or by the individual differences among the participants.
-Priscilla Berardo


Null Hypothesis (Chapter 5, pg. 137 in the textbook)

Definition: The null hypothesis (H0) predicts that there is no statistically significant difference, relationship, or effect between independent variables and dependent variables. Basically, the null hypothesis is the opposite of the alternative hypothesis. A null hypothesis can be true or false, and is either rejected or not rejected.

Example: A study is examining the effects of Zoloft versus placebo on symptoms of depression in women ages 18 to 25, at Florida State University. The null hypothesis predicts there is no difference between the Zoloft treatment and placebo in terms of depression symptoms for women ages 18 to 25, at Florida State University.
-Priscilla Berardo


Snowball Sampling (Chapter 6, p. 155-156)

Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey: Pearson Education, Inc.

Definition: Snowball Sampling is essentially asking the participants you have chosen to select other participants for your study. The benefit is that you can accumulate a large number of people for your sample. However, this also creates the problem that you don’t always know exactly who will be part of your sample, and therefore you cannot be sure they represent your designated population.

Example: In order to collect data for my study on the onset of underage drinking among college students, I sent an email with the link to my survey to members of the freshman class at FSU. I attempted to accumulate a snowball sample by asking each student to whom I sent the email to forward it to three other FSU freshmen they knew.
-Deanna Allen

Quasi-experimental research design:

A quasi-experimental research design is used when the researcher has little or no control over the assignment of groups to specific treatments or other aspects of the study, that is, when researchers cannot randomly assign participants to treatment or control groups. There are fewer threats to external validity, since natural environments are not as highly controlled as a lab environment; however, internal validity is threatened because of the lack of control researchers have over certain aspects of the study. This makes it more likely that results could be due to a third variable.

Example: Researchers want to study the effect of maternal alcohol use on growing embryos. A true experimental design would randomly assign pregnant women to groups in which they would drink alcohol. This would be highly unethical, so instead researchers ask mothers how much alcohol they used during pregnancy and assign them to groups accordingly.


Shuttleworth, Martyn (2008). Quasi-Experimental Design. Retrieved [Date of Retrieval] from Experiment Resources: http://www.experiment-resources.com/quasi-experimental-design.html


-Kerrie Donk

Narrative Discussion: (Chapter 9, pg. 262-263)
Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc.
Definition: A part of qualitative research where the researcher thoroughly discusses and writes about the findings of their study. Although the length and amount of information included in this section will vary from study to study, it should provide detailed descriptions, themes, or any other findings in a narrative (written) format.
Example: The author summarized their findings in the narrative discussion of their research article. In addition to summarizing their research, the author also raised additional questions, included personal reflections, and challenged assumptions about the research.
-Mary Kate Watters


Continuous Variable: (Chapter 5, pg. 125) Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc.

Definition: A type of variable measured by researchers that is based on a continuum, or a continuing scale. The scale may have limits, but any value in between the limits may be possible.

Example: Time to complete something is a typical “continuous variable”. It may take one person 4:56 (hours:minutes) to run a marathon, while it may take a different individual 6:27 to run the same distance. The time it takes to run a marathon is a continuous variable because it could be any amount of time.

Non-Example: The number of questions out of 10 that someone answered correctly. This number is not continuous because someone can only score 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10. When each question is scored either right or wrong, someone could not score 6.24.
-Mary Kate Watters

Interrater reliability: Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (chapter 6, pg. 171)

Definition: The degree of agreement among raters. It is used when two or more people make observations of another's behavior.

Example: Perhaps two school psychologists perform a structured behavior observation (BOSS) on a second grade student. Following their observation they compare their results to determine if they are similar or different. This is looking at interrater reliability.

-Mary Wilcox


Ratio scale: Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (chapter 6, pg. 176)


Definition: A type of scale used in research, although not frequently in education research, also known as a true zero scale. This is a response scale that allows participants to choose an option with a true (natural) zero, and there are equal distances between the units.



Example: In a study, researchers asked participants to check a response regarding their current weight. Weight is considered a ratio scale of measurement.
-Mary Wilcox


Control variable: Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (chapter 5, pg. 128)

Definition: A variable that the researcher holds constant, or statistically accounts for, because it might otherwise influence the results. It is necessary for researchers to consider control variables in their experiments. These are often variables such as personal demographics or characteristics.

Example: A control variable might be gender. In an experiment looking at helping behavior, researchers may control for gender through statistical procedures.
-Mary Wilcox



Open-ended questions: Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (chapter 8, p. 225)

Definition: A type of interview question that may be asked when conducting a qualitative study, designed to encourage a full and meaningful answer, allowing the responder to voice their experiences and feelings without the constraint of the interviewer's perspective.

Example: Tell me about your relationship with the student. What do you think about these two students in your class?
-Mary Wilcox


Extreme Case Sampling: Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (Chapter 8, p. 215)

Definition: Form of sampling in which the researcher uses a low-incidence or special case that displays special characteristics that are not typical in the rest of the sample/population.

Example: A case-study on a child who was the sole survivor of a massacre in his hometown.
-Mariana Diaz

Abstract: Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (Chapter 4, p. 105)

Definition: The abstract is a quick overview of the study/research being presented. It typically provides reasoning for the study, what was done, results, and future direction. The length may vary depending on the type of journal.

Example: A journal article on a reading intervention would have a small paragraph before the introduction. There it would state the problem being addressed (e.g. difficulties with reading comprehension), research method used (i.e. experimental design testing the particular intervention), results/findings (i.e. intervention effective with only 2nd graders), and conclusions (i.e. further research needed with other age groups).
-Mariana Diaz

Convenience Sampling (Chapter 6, pg. 155)

Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey: Pearson Education, Inc.

Definition: Convenience Sampling is a non-probability sampling procedure in which the researcher selects participants for the study based on availability. An issue with using convenience sampling is that the sample is not necessarily an accurate representation of your population.

Example: When wanting to collect data on EPLS student opinions on educational quality of the EPLS department of The College of Education at FSU, the researcher decided to use convenience sampling by using people she knew within the department to conduct her study.
-Deanna Allen

Contrary Evidence (Chapter 9, pg. 257)

Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey: Pearson Education, Inc.

Definition: Contrary evidence is information that does not support or confirm the themes, and instead provides contradictory evidence for a theme.

Example: A research experiment trying to prove the negative effects of alcohol consumption displaying data that shows alcohol consumption to have positive effects on one's well-being.
- Hanz Medard

Mean (Chapter 7, pg. 192)

Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey: Pearson Education, Inc.

Definition: The mean is the total of the scores divided by the number of scores.

Example: Three students took an exam for their child psychology class: Tony scored 92/100, Albert scored 84/100, and Kelly scored 91/100. The mean score of the students who took the exam was 89 out of 100.
- Hanz Medard
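The arithmetic in the example can be checked in a couple of lines of Python, using the scores above:

```python
import statistics

# Exam scores from the example above
scores = [92, 84, 91]  # Tony, Albert, Kelly

# The mean is the total of the scores divided by the number of scores
mean = sum(scores) / len(scores)
print(mean)  # → 89.0

# The statistics module computes the same thing
mean_via_library = statistics.mean(scores)
```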


Nominal Scales (Chapter 6, pg 175)

Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research.New Jersey: Pearson Education, Inc.

Definition: The name for measures in which the sample or grouping is categorical. The categories do not have numerical value and cannot be ranked.

Example: There are several breeds of dogs. The breeds are categorical and do not have any numerical value assigned to them. Even though the breeds vary in size, the breed name in itself does not.
~Ruth Arnold

Standard Deviation (SD) (Chapter 7, pg 194)
Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research.New Jersey: Pearson Education, Inc.

Definition: The SD tells us, with one number, about the spread of a distribution of scores. A large standard deviation means the data points are far from the mean, while a small standard deviation indicates they are clustered closely around the mean. In short, it tells us how far scores tend to fall from the average. Whether a large or small SD is desirable depends on your hypothesis.

I was taught a great example of SD. If you compare two cities (one by the ocean and one inland), they may have the same average temperature BUT the actual recorded temperatures may vary greatly. The inland city's temperatures vary more drastically than those of the city by the water, which is moderated by the ocean. So the inland city will have a greater standard deviation and the city by the water will have a smaller standard deviation.
~Ruth Arnold
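The two-cities idea can be sketched in Python with made-up temperatures (hypothetical numbers, not real climate data): two lists with the same mean but very different spread.

```python
import statistics

# Hypothetical daily highs (°F) for two cities -- invented so that
# both lists have exactly the same mean but different spread
coastal = [70, 72, 71, 69, 70, 72, 71]
inland  = [55, 85, 60, 80, 65, 75, 75]

coastal_mean = statistics.mean(coastal)
inland_mean = statistics.mean(inland)
print(coastal_mean, inland_mean)  # same average temperature

# ...but the standard deviations differ sharply
coastal_sd = statistics.stdev(coastal)
inland_sd = statistics.stdev(inland)
print(coastal_sd, inland_sd)  # inland SD is much larger
```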

Homogeneous Sampling (Chapter 8, pg 216)
Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research.New Jersey: Pearson Education, Inc.

Definition: This is when researchers pick out one group to investigate traits that may be common or uncommon among the members.

Example: In the movie Freakonomics, the filmmakers wanted to know about the SES of people with distinctive African names living in America. They interviewed and surveyed the African American population (a homogeneous sample) to understand the significance, if any, of their names and how the names impact their lives.
~Ruth Arnold

Effect Size (Chapter 7, pg 196)
Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research.New Jersey: Pearson Education, Inc.

Definition: Effect size provides a number to show how different two groups are (usually the control group and the experimental group). It is an easy way to see whether the treatment really had any effect. It is calculated by subtracting the mean of the control group from the mean of the experimental group and dividing by the standard deviation.

(mean of experimental group – mean of control group)/ Standard Deviation = Effect Size.

A larger effect size indicates that the treatment is more likely to work for others.

Example: The effect size for an experiment in which breathing is the treatment and staying alive is the outcome would be very large. On the other hand, when the treatment does not work (or there is no effect) for the experimental group, the effect size will be small.
~Ruth Arnold
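The formula above can be sketched in Python with hypothetical scores (the numbers are invented for illustration, and the control group's SD stands in for the standard deviation in the formula):

```python
import statistics

# Hypothetical posttest scores -- invented for illustration
experimental = [85, 90, 88, 92, 87, 91]
control = [78, 82, 80, 79, 83, 81]

# (mean of experimental group - mean of control group) / SD
sd = statistics.stdev(control)
effect_size = (statistics.mean(experimental) - statistics.mean(control)) / sd
print(round(effect_size, 2))  # a large value here suggests a strong treatment effect
```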

Construct Validity:
Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (p. 173)

Definition: Relates to generalizing. In a research study, it is how well we can make inferences from the way the aspects of the study were operationalized to the construct or concept of the study; in other words, how well the variables measured in the study actually measure what the study says they are measuring.

Example: Psychologists who are trying to create a new IQ measure may try to operationally define something like “intelligence”. They could then choose questions that are intended to measure their definition of intelligence and increase the construct validity.
-- Taryn McCormick


Threats to Internal Validity:
Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (p. 308)

Definition: Various elements within an experimental research study that interfere with whether there is a true "cause and effect" relationship between the independent and dependent variables. They can involve either the participants or the procedures of the study.

Example: Participants maturing throughout the study could affect the behavior or characteristics being measured. If a longitudinal study measured the effects of sports on kids' self-esteem over several years, the natural development that takes place as they get older and wiser might have more of an effect on their self-esteem than sports did.
Devin Jordan

Posttest
Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (p. 301)

Definition: When you measure some quality about a participant after they have received treatment.

Example: You want to see if exercise has an effect on symptoms of depression, and you measure depressive symptoms of people in the treatment group after they have gone through the treatment of exercise.
Devin Jordan

Mixed Method Research:
Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (p. 552)

Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1(2), 112-133.

Definition: A method of conducting research that combines elements of quantitative research, which involves measurement of quantity, and qualitative research, which involves open-ended procedures of inquiry. The purpose of this type of design is to understand the constructs being measured more in-depth than one could by using a single methodology.

Example: If I wanted to study pregnancy in women over 40, I could use a mixed method research design. By measuring their anxiety level, before and after an intervention was put in place, I would be conducting a quantitative research design. In addition, I could also assess factors that increase and decrease their anxiety levels through a qualitative research design.

-Sadé Tate


One-tailed test of significance: (Chapter 7, page 197). Looking at a normal curve, the areas on either end are known as the “tails”. When conducting an experiment, if the data fall in the “critical region”, the null hypothesis is rejected (thus a difference is suspected between the groups being studied). In a two-tailed test of significance, the critical region appears on BOTH ends of the curve, whereas a one-tailed test of significance has the critical region on only one end. A one-tailed test is generally used when prior research has already established a likely direction. A one-tailed test is said to have more power because we are more likely to reject the null hypothesis.

Example: Kellogg's claims that there are two scoops of raisins in every box of Raisin Bran cereal. Testing this claim would be a one-tailed test if you already have a reason to expect a particular direction (i.e., you suspect there are fewer than two scoops). ~Brittany Brown
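A sketch of how the Raisin Bran test might be run as a one-tailed one-sample t-test. The scoop counts are invented, and the critical value is read from a standard t table (df = 9, alpha = .05, one-tailed):

```python
import math
import statistics

# Hypothetical scoop counts from 10 boxes -- invented for illustration.
# Claim: the mean is 2 scoops; we suspect fewer (one-tailed, lower tail).
scoops = [1.8, 1.9, 2.0, 1.7, 1.9, 1.8, 2.1, 1.6, 1.9, 1.8]

n = len(scoops)
mean = statistics.mean(scoops)
sd = statistics.stdev(scoops)

# One-sample t statistic against the claimed mean of 2
t = (mean - 2) / (sd / math.sqrt(n))

# One-tailed critical value for df = 9 at alpha = .05 (from a t table)
critical = -1.833

# Reject the null only if t falls in the single lower tail
reject_null = t < critical
print(round(t, 2), reject_null)
```

Because the entire critical region sits in one tail, a t statistic of, say, -1.9 would reject here but not in a two-tailed test at the same alpha, which is the "more power" point made above.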

Positive/Negative Correlations: (Chapter 12, pg. 363). To say that two variables are correlated is to say that a change in one variable is associated with a change in the other. A positive correlation indicates that both variables change in the same direction (one goes up, the other goes up; one goes down, the other goes down). Conversely, if two variables are negatively correlated, a change in one is associated with the opposite change in the other: as one goes up, the other goes down.

Example: My energy level is positively correlated with caffeine consumption (i.e., the more caffeine I drink, the higher my energy level; the less caffeine I drink, the lower my energy level). On the other hand, the more my mood improves, the less ice cream I eat (and the worse my mood gets, the more ice cream I eat), indicating a negative correlation.

Brittany Brown
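Both relationships can be illustrated with a short Python sketch. The data are invented, and the `pearson_r` helper is simply the definition of the correlation coefficient written out:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from the definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data for illustration
caffeine  = [0, 1, 2, 3, 4]   # cups per day
energy    = [2, 4, 5, 7, 9]   # self-rated energy level
mood      = [2, 4, 5, 7, 9]   # self-rated mood
ice_cream = [5, 4, 3, 2, 1]   # scoops eaten

print(pearson_r(caffeine, energy))   # positive (near +1)
print(pearson_r(mood, ice_cream))    # negative (near -1)
```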
Stratified Sampling- (p. 154-155) a type of sampling procedure used in quantitative research where the population is organized into groups according to a particular characteristic (ex: age or gender) and then random sampling is used after the initial groups are formed
Example: Stratified sampling may be used in a situation where the population is not balanced. For example, perhaps the population consists of counseling graduate students, and there are significantly more women than men in these programs. A researcher might stratify the sample by gender to ensure that males are represented, randomly sampling from each group.
-Hilary Bornstein
Convenience Sampling-(p. 155) a researcher chooses a sample based on those who want/are willing/are available to participate in the setting
Example: This may occur in an inpatient setting, where a certain pathology/population is readily available to the researcher.
-Hilary Bornstein

Variables- a trait or attribute of a person or group that can be measured or observed
Example: Reading achievement in 3rd graders
-Hilary Bornstein

Ordinal Scales: Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (chapter 6, pg. 176)

Definition: An ordinal scale is one in which researchers have participants rank characteristics, traits, or attributes from best to worst. There is an inherent order to these scales, in which participants rank traits from most important to least important. Ordinal scales are often called ranking scales.

Example: The researcher used an ordinal scale when measuring participants' attitudes toward religion. Participants were asked to rate whether they thought religion was "highly important", "somewhat important", or "of no importance" to them.
-Katie Lazarus

Grounded Theory
Creswell, J. (2008). Educational Research, Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey. Pearson Education, Inc. (Chapter 14, pg. 432-434).

“Student Friendly Definition”: Grounded theory designs help to form a theory that explains a process, action, or interaction. Grounded theory designs are to be used when you need to develop a new theory about a process, because pre-existing theories are not sufficient for addressing the topic or problem. Grounded theory design provides rigorous, detailed, systematic procedures for analyzing data, making it helpful to beginning researchers.
Example of Grounded Theory: An example of when you would use a grounded theory design is if you were interested in looking at the career decision making process in adults with spinal cord injuries.
-Amber O'Shea
Realist Ethnography
Creswell, J. (2008). Educational Research, Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey. Pearson Education, Inc. (Chapter 15, pgs. 475-476).

“Student Friendly Definition”: Realist Ethnography is an approach where the researcher reports a detailed and third-person account of a situation. The report is delivered in an objective tone free of personal bias and includes information about the setting, situation, and population, which was directly observed at a field site. Direct quotes are sometimes used in Realist Ethnography to accurately report on the studied population.
Example of Realist Ethnography: Realist Ethnography is frequently used in the field of cultural anthropology. For example, if a researcher were interested in learning more about the daily activities, traditions, values, and beliefs of a different culture, they may use this technique by staying with the selected population and recording objective data and cultural descriptions (i.e. vocational activities, social status systems, family dynamics).
-Amber O'Shea
Institutional Review Board:
Creswell, J. (2008). Educational Research, Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey. Pearson Education, Inc. (Chapter 6, pgs. 157-158).

“Student Friendly Definition”: A committee made up of faculty members that review research proposals that will use human subjects. The purpose of the Institutional Review Board (IRB) is to evaluate and approve research proposals that preserve and protect the rights of human subjects.
Example: If you were working with a team of faculty members at FSU to conduct an experiment on the effectiveness of a new treatment program on improving symptoms of ADHD in college students, you would need to identify the individuals that sit on the FSU IRB, become familiar with the requirements for the review process, determine what information the review board needs about your study, and submit a description of your proposed study to the IRB.
-Amber O'Shea

History
Creswell, J. W. (2008). Educational Research. Planning, conducting, and evaluating quantitative and qualitative research. New Jersey. Pearson Education, Inc. (chapter 6, pg. 308)
Definition: one of the potential threats to internal validity in an experiment, in which time passes between the beginning of the experiment and the end, and events that occur during that time may affect the outcome of the experiment.
Example: While a health program is being implemented at a high school to decrease substance use, one student at the high school dies as a result of a drug overdose. This event will have an effect on the overall awareness of the students at the high school and would impact the measured effectiveness of the health program in decreasing substance use.
- Jackie Berry

Maturation
Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey: Pearson Education, Inc. (Chapter 11, pg. 308).
Definition: A potential threat to internal validity in an experiment in which participants develop or change during the experiment. These changes have the ability to affect the participants' scores between the pretest and the posttest.
Example: When looking at the effectiveness of an intervention to help high school sprinters breathe more effectively, the participants' natural improvement through practice over the course of the study could impact the posttest results, apart from the treatment itself.
- Jackie Berry

Purposeful Sampling (Chapter 8, p. 214)
Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey: Pearson Education, Inc.

Definition: Purposeful Sampling is conducted when a researcher wants to understand a central phenomenon or trend and therefore selects the participants or sites of the study with intention. These particular participants or sites are chosen because they are likely to be rich in data for the particular topic of research. Purposeful Sampling serves as an umbrella under which there are nine types of sampling strategies. These strategies are categorized first by when data are collected and then by the intent of the study.

Example: The researcher decided that in order to study underage college drinking he would need to use purposeful sampling and focus his study on participants between the ages of 18 and 20 who were attending college.
-Deanna Allen

Typical Sampling (Chapter 8, p. 216)
Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey: Pearson Education, Inc.

Definition: Though what counts as ‘typical’ is often debatable depending on whom you ask, typical sampling is a subcategory of purposeful sampling that refers to collecting data on what might be a typical person or site in a situation. The data are collected for the benefit of persons unfamiliar with that type of situation. One way to determine what is typical is to collect data on all cases and then decide, from that full set of data, what constitutes a typical case.

Example: One might use typical sampling with a college freshman who lives on campus towards the end of their freshman year in order to collect data on what college dorm living is like.
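The "look at all the data, then pick the typical case" step described above can be sketched in code. This is only a minimal illustration under an assumed setup: the measure (hours spent in the dorm per week), the names, and the numbers are all hypothetical, and "closest to the mean" is just one simple way to operationalize "typical."

```python
def most_typical(cases):
    """Return the case whose value is closest to the mean of all cases.

    cases: dict mapping a case name to a numeric measure
    (hypothetical example: hours spent in the dorm per week).
    """
    mean = sum(cases.values()) / len(cases)
    # Pick the case with the smallest distance from the group mean.
    return min(cases, key=lambda name: abs(cases[name] - mean))

# Hypothetical data on four freshmen living on campus
dorm_hours = {"Ana": 80, "Ben": 95, "Cai": 110, "Dee": 60}
typical_case = most_typical(dorm_hours)  # Ana is closest to the mean of 86.25
```

In practice a researcher would weigh many qualitative factors, not a single number, but the sketch shows why all cases must be examined before a typical one can be named.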
-Deanna Allen


Quantitative Research
Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey: Pearson Education, Inc. (Chapter 2, pg. 46).

Definition: A type of research that involves measurable/quantifiable data and is commonly used in educational research; a type of research that involves data that can be easily quantified (transformed into numbers) and analyzed using statistics.

Example: Some psychologists prefer quantitative research because it is more objective than qualitative research.

-Alyssa Fredricks


Qualitative Research
Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey: Pearson Education, Inc. (Chapter 2, pg. 46).

Definition: A type of educational research that involves subjective opinions about behaviors; data collection involves words that highlight common themes and trends in the research.

Example: The book Dibs in Search of Self is a case study that uses a qualitative research design to examine the effects of play therapy.

-Alyssa Fredricks

1. Random Assignment - Jenny High
2. Chapter 11 - Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey: Pearson Education, Inc.
3. This is a way of assigning participants of a research study to its different groups so that there is no systematic pattern to how the participants are divided, in order to minimize confounding variables and bias. Random assignment also distinguishes a true experiment (participants are randomly assigned to groups) from a quasi-experiment (participants are not randomly assigned to groups). It should also be noted that random assignment takes place after the participants have already been selected for the research study.
4. The researchers used “random assignment” to place the participants in either the treatment or the control group of the research study once informed consent was obtained. The researchers used a random number generator to assign a number to each participant. The odd-numbered participants were placed in the treatment group, and the even-numbered participants were placed in the control group.
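The odd/even procedure in the example above can be sketched in code. This is a hypothetical illustration (the participant IDs are invented): shuffling the consented participants and alternating positions is equivalent to numbering them randomly and splitting odd from even.

```python
import random

def random_assignment(participants, seed=None):
    """Randomly split participants into a treatment group and a control group.

    Mirrors the odd/even procedure described above: each participant
    receives a random position, and positions alternate between groups,
    so no systematic pattern determines group membership.
    """
    rng = random.Random(seed)
    shuffled = list(participants)  # copy so the original list is untouched
    rng.shuffle(shuffled)          # remove any systematic ordering
    treatment = shuffled[0::2]     # "odd-numbered" positions
    control = shuffled[1::2]       # "even-numbered" positions
    return treatment, control

# Hypothetical participants who have already given informed consent
people = ["P01", "P02", "P03", "P04", "P05", "P06"]
treatment, control = random_assignment(people, seed=42)
```

Note that the function only divides people who are already in the study; selecting who enters the study in the first place is random selection, defined next.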
1. Random Selection - Jenny High
2. Chapter 11 - Creswell, J. W. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. New Jersey: Pearson Education, Inc.
3. This is a way of selecting participants for a research study from a particular population so that there is no systematic pattern to how the participants are chosen. This should not be confused with random assignment, as random selection specifically deals with the process of randomly recruiting participants for the research study.
4. The researchers used “random selection” when recruiting participants for their research study to see if iPads would improve the learning of middle school students. All of the middle school students in the particular area were entered in a database. A random number generator was used to assign a number to each student. The students numbered 1-100 were then selected to be participants in the research study.
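The recruitment procedure in the example above can likewise be sketched. The student names and population size here are invented; drawing a simple random sample from the database gives every student an equal chance of being recruited, which is the defining property of the procedure.

```python
import random

def random_selection(population, sample_size, seed=None):
    """Randomly select participants from a population database.

    Every member of the population has an equal chance of being
    chosen, so no systematic pattern determines who is recruited.
    """
    rng = random.Random(seed)
    return rng.sample(population, sample_size)  # sample without replacement

# Hypothetical database of 500 middle school students in the area
students = [f"Student{i:03d}" for i in range(1, 501)]
sample = random_selection(students, 100, seed=7)
```

The sampled 100 students would then, if the design also called for random assignment, be split into treatment and control groups as a separate second step.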



Etic Data: (Chapter 15, pg. 482 in the textbook)

Definition: Raw data that ethnographers gather while doing fieldwork, which is based on the interpretations that these trained professionals make to explain phenomena that participants in the field discuss in their own way.

The ethnographer compiled "etic data" reflecting his own assessment of the day-to-day activities he witnessed the teenage homeless population he was following engage in.

-Brett Gellers


Resentful demoralization: (Chapter 11, pg. 309 in the textbook)

Definition: A phenomenon that occurs in a controlled experiment when participants in the control group become resentful of not receiving the experimental treatment, or become resentful that the treatment they are receiving is inadequate compared to the other group's treatment. This can threaten the validity of the results if the control group becomes uncooperative or non-compliant.

It appears that feelings of resentful demoralization were present in the control group in this particular study: the participants in the control group stopped trying to do well on the achievement test that was administered, thus nullifying the observed results of the experimental intervention that had been applied to the treatment group before the test.

-Brett Gellers

Action Research Designs (Chapter 18, pg. 597- this is for everyone who took NCE last weekend!)
Definition: This is a type of design that is similar to mixed methods research; the procedures are typically carried out by teachers or educators (often in teams) to gather information about the educational setting. The information is gathered to (hopefully) improve the actual setting, teaching methods, and/or student learning. Data may be quantitative, qualitative, or both.

Example: The school principal invited a group of educators to conduct action research in order to effectively assess the problem of low reading scores at the school, and to gather qualitative data regarding teaching styles used in reading classes.

-Emily Kennelly

Reflexivity (Chapter 2, pg. 58)
Definition: In completing a research study, a qualitative researcher will reflect on their own biases, values, and assumptions in order to discuss their role or position in the research. It can be referred to as "being reflexive".

Example: When the qualitative researcher failed to disclose any personal biases or values which may have affected his research study, his colleagues said that he was not being reflexive.

-Emily Kennelly

theoretical sampling (Ch. 14, pg. 442)
Definition: data collection that is focused on developing a theory about a certain phenomenon; data collected from several individuals that may have different perspectives to shed light on the phenomenon

Example: If a researcher wants to develop a theory about how parents select a school for their child to attend, they would interview the parents (who have firsthand experience) as well as teachers, and the principal to get a broad idea of their views on what it means for a parent to select a school for their child to attend.

-Kerrie Donk

critical ethnographies (Ch. 15, pg. 478)
Definition: research aimed at advocating for groups that are marginalized in our society; speaking for those that may not have a voice

Example: A researcher wants to study schools that provide privileges to certain types of students, create inequalities among students of different SES classes, and encourage boys to talk during class while encouraging girls to stay silent.

-Kerrie Donk

practical action research (Ch. 18, pg. 599)
Definition: teachers, students, counselors, and administrators come together to do research on a problem that is impacting their school; this means systematically studying a local problem, and its purpose is to improve practice

Example: An elementary school teacher studies the disruptive behavior of a child in her classroom.

-Kerrie Donk

personal experience story (Ch. 16, pg. 514)
Definition: a narrative of an individual's story about a certain topic or broad range of topics; these can be both personal and social in nature and they usually just focus on a single episode or event in a person's life rather than being about the entire life of a person

Example: A teacher conveys his stance on the Response to Intervention process and his feelings regarding the RTI training he received.

-Kerrie Donk

collaboration (Ch. 16, pg. 522)
Definition: refers to the process of including the participant, as the researcher writes his or her story, in the research practice; the participant is able to see how the researcher used his or her story in the research and has a voice as to whether it was portrayed correctly

Example: A researcher doing qualitative research on teachers' job satisfaction collects narrative stories from 3 teachers and lets them see the manuscripts and how she incorporated their stories into her research; she also receives feedback from them on how she used their stories in her research.

-Kerrie Donk

embedded design (Ch. 17, pg. 558)
Definition: collecting qualitative and quantitative data together, with one form of data providing support that the other lacks

Example: During an experiment, the researcher collects qualitative data during a trial of an intervention to examine how participants were feeling during the intervention. The researcher also goes on to collect quantitative data to find out whether or not the participants are improving as a result of the intervention.

-Kerrie Donk

Interconnecting Themes: (page 317-318)

Student Friendly Definition: the researcher relates different themes together and finds connections between them to build a timeline sequence. This is often used in qualitative research.

Example: using a geriatric interview to understand a client's past for explorations into lifespan development theories. The researcher will most likely be given many events and will need to use interconnecting themes for proper analysis of the interview.

-Jenn Petersen

Qualitative Research Question: (page 54-55)
Definition: In qualitative research, the questions tend to be broad and focus on the client's experience.

Example: Interviewing different clients using open ended questions and allowing for lengthy explanations. Later the researcher interprets these responses and draws conclusions.

-Jenn Petersen