Educational Research: Chapter 10
Purpose of experimental research
The researcher manipulates at least one independent variable, controls other relevant variables, and observes the effect on one or more dependent variables.
It is the only type of research that can test hypotheses to establish cause-effect relations, giving the most confidence that A causes B.
How experimental differs from causal-comparative research
Experimental research has both random selection and random assignment.
Causal-comparative research has only random selection, not random assignment, because the groups are preexisting.
How experimental differs from correlational research
A correlational study predicts a particular score for a particular individual. Experimental research is more “global” (e.g., if you use Approach X you will probably get different results than if you use Approach Y).

Difference between “random selection” and “random assignment”

Random selection: randomly selecting the participants for the research study. Random selection of subjects is the best way to ensure they are representative of the population.
Random assignment: randomly assigning the selected participants to the different groups in the study.
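The distinction can be sketched in a few lines of Python; the population of student IDs and the sample and group sizes here are hypothetical, chosen only to illustrate the two steps:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population of 100 student IDs.
population = [f"student_{i}" for i in range(100)]

# Random selection: draw a representative sample from the population.
sample = random.sample(population, 20)

# Random assignment: shuffle the sample, then split it into two groups.
random.shuffle(sample)
treatment_group = sample[:10]
control_group = sample[10:]

print(len(treatment_group), len(control_group))  # 10 10
```

Note that the two steps are independent: a study can randomly assign participants to groups even when the sample itself was not randomly selected (and vice versa).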
Difference between “control group” and “comparison group”
Control group: the group that receives no treatment or is treated as usual (e.g., it would keep the same method of instruction).
Comparison group: a group that receives a different treatment; the groups are equated on all other variables that influence performance on the dependent variable. The group receiving the new treatment is called the experimental group.
Control in experimental research
The efforts to remove the influence of any variable, other than the independent variable (IV), that may affect subjects’ performance on the dependent variable (DV).
Control in experimental research is best described as an attempt to limit threats to internal validity.
Two kinds of variables must be controlled: subject variables and environmental variables (e.g., in a study asking who are more effective tutors, parents or students, make sure both groups receive the same amount of time with the tutor).
Uncontrolled extraneous variables affecting performance on the dependent variable are threats to the validity of the experiment.

Two kinds of variables that need to be controlled in experimental research
Subject/attribute variables: variables on which participants in the different groups may differ (e.g., gender, age, race).
Environmental variables: variables in the setting that may cause unwanted differences between groups (e.g., time of day, years of experience, physical setting).

Difference between “actively manipulated” and “assigned/attribute” variables
Actively manipulated variables: the independent variables that are manipulated by the experimenter (e.g., instructional method, time of day, behavior management procedures). You can assign, or randomly assign, people to groups; the researcher selects the treatments and decides which group will get which treatment.
Assigned/attribute variables (subject variables): characteristics intrinsic to the subject that you cannot assign (e.g., gender, age, race, SES, creativity).

Validity in Experimental Research
Validity concerns whether the observed effect on the outcome is actually due to the independent variable (A caused B).

Internal validity: observed differences on the DV are a direct result of manipulation of the IV, not of some other variable.
External validity: the extent to which results are generalizable to groups and environments outside the experimental setting.

Difference between “internal” and “external” threats to validity
Internal validity (within the study) concerns the threats, or rival explanations, that influence the outcomes of an experimental study but are not due to the independent variable.
Definition: the control of extraneous variables to ensure that the treatment alone causes the effect (e.g., a difference in the amount of tutoring time given by parent versus student tutors).
External validity concerns the threats, or rival explanations, that prevent the results of a study from being generalized to other settings or groups (a synonym for external validity is generalizability).
Definition: the ability to generalize the results of a study (e.g., results should be applicable to other groups of parent versus student tutors).
Threats to external validity
Pretest-treatment interaction, multiple-treatment interference, specificity of variables, treatment diffusion, experimenter effects, reactive arrangements.
o   Pretest-treatment interaction: the mere use of a pretest makes results less generalizable to unpretested groups; there is no control for this.
o   Specificity of variables: because the study is so unique and specific, the ability to generalize is lessened.
o   Reactive arrangements: subjects know they are in an experiment, so they act differently.
o   Multiple-treatment interference (only a factor in multiple-treatment studies): carryover effects make the situation so artificial that it is very hard to generalize.
o   Experimenter bias: an example of an active experimenter effect.
o   Treatment diffusion: participants in different treatments talk to one another about the study and borrow parts of the other treatment, causing the treatments to overlap.
Ecological validity refers to the context to which results generalize; if results cannot be replicated in other settings by other researchers, the study has low external, or ecological, validity.

Threats to internal validity
History, maturation, testing, instrumentation, statistical regression, differential selection of participants, mortality, selection-maturation interaction.
o   History: events other than the independent variable occur between the pretest and posttest and may have caused the effect. A way to prevent this extraneous variable is to establish a control group.
o   Maturation: as students mature they get stronger, faster, and more agile. A way to prevent this extraneous variable is a control group.
o   Testing (practice effects): you do better simply because you have already practiced the test. A way to prevent this extraneous variable is a control group.
o   Instrumentation: something is wrong with your instrument (unequal validity). A way to prevent this extraneous variable is a control group.
o   Statistical regression: using extreme groups (gifted kids, remedial kids); kids at the bottom will usually score higher the second time because they cannot get any lower. A way to prevent this extraneous variable is a control group.
o   Selection: unequal groups that differ beforehand because there was no random assignment. A way to control this is random assignment.
o   Mortality: loss of subjects who drop out. The problem is not the loss of numbers but that those who drop out may be systematically different from those who stay in, leaving unequal groups. A way to prevent this extraneous variable is adding a pretest.

Five ways to control extraneous variables
1.       Randomization: subjects are assigned at random (by chance) to groups, so groups are the same on participant variables such as gender, ability, or prior experience. The best way to control: random selection of participants and random assignment to groups.
2.       Matching: a technique for equating groups on one or more variables (e.g., random assignment of the members of matched pairs); it attempts to make group membership equal.
3.       Comparing homogeneous groups or subgroups (e.g., only groups of the same IQ). The cost of using homogeneous groups is a loss in external validity.
4.       Participants as their own controls: a single group of participants is exposed to multiple treatments, one at a time. Helpful because the same participants get both treatments.
5.       Analysis of covariance (ANCOVA): a statistical method for equating randomly formed groups on one or more variables. ANCOVA is a statistical procedure that statistically controls for pretest differences between groups.
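Method 2 (matching, with random assignment within each matched pair) can be sketched in Python; the participant names and pretest scores below are hypothetical:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical participants with pretest scores on the matching variable.
participants = [("p1", 55), ("p2", 90), ("p3", 60), ("p4", 88),
                ("p5", 72), ("p6", 70), ("p7", 81), ("p8", 79)]

# Sort by the matching variable so adjacent participants form matched pairs.
ranked = sorted(participants, key=lambda p: p[1])

group_a, group_b = [], []
for i in range(0, len(ranked), 2):
    pair = [ranked[i], ranked[i + 1]]
    random.shuffle(pair)          # randomly assign within each matched pair
    group_a.append(pair[0])
    group_b.append(pair[1])

print(len(group_a), len(group_b))  # 4 4
```

Because each pair is split between the groups, the two groups end up with nearly identical distributions on the matching variable, while the coin flip within each pair preserves random assignment.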

Single-variable designs
Investigates one independent variable
Pre-experimental group designs
Do not do a very good job of controlling threats to validity and should be avoided. These are the weakest designs, not useful for most purposes except, perhaps, to provide a preliminary investigation of a problem.
One-shot case study
Involves a single group that is exposed to a treatment (X) and then posttested (O). Threats not controlled: history, maturation, mortality.
One-group pretest-posttest design
Involves a single group that is pretested (O), exposed to a treatment (X), and then tested again (O). Threats not controlled: history and maturation.
Static-group comparison
Involves at least two nonrandomly formed groups, one of which receives a new or unusual treatment, and both groups are posttested. It is difficult to determine whether the treatment groups were equivalent to begin with.
True experimental group designs
Have random assignment of participants and control for nearly all threats to internal and external validity. They have one characteristic in common that the other designs do not: random assignment of participants to groups. They provide a very high degree of control and are always to be preferred over pre-experimental and quasi-experimental designs.
Solomon Four-Group Design
Involves random assignment of subjects to one of four groups. Two of the groups are pretested and two are not; one of the pretested groups and one of the unpretested groups receives the experimental treatment. All four groups are posttested. Often described as the strongest experimental design.

Pretest-posttest control group design
Involves at least two groups, both of which are formed by random assignment. Both groups are administered a pretest, one group receives a new or unusual treatment, and both groups are posttested. A variation of this design seeks to control extraneous variables more closely by randomly assigning the members of matched pairs to the treatment groups.
Posttest-only control group design
Same as the pretest-posttest control group design except there is no pretest. Participants are randomly assigned to at least two groups, exposed to the independent variable, and posttested to determine the effectiveness of the treatment. A variation of this design is random assignment of matched pairs.

Quasi-experimental group designs
When random assignment is not possible, a quasi-experimental design may provide adequate control. These designs do not control as well as true experimental designs but do a much better job than the pre-experimental designs.
Nonequivalent control group design
Like the pretest-posttest control group design except that the nonequivalent control group design does not involve random assignment. If differences between the groups on any major variable are identified, analysis of covariance can be used to statistically equate the groups.
Multiple time-series design
A variation of the basic time-series design that involves adding a control group. This variation strengthens the design by helping to eliminate threats to internal validity such as history.
Counterbalanced design
All groups receive all treatments but in a different order, the number of groups equals the number of treatments, and groups are posttested after each treatment. This design is usually employed when administration of a pretest is not possible.
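The rotation of treatment orders in a counterbalanced design can be sketched as a simple Latin square; the treatment labels X1–X3 are hypothetical, following the X notation used at the end of these notes:

```python
# Counterbalanced design sketch: each group receives every treatment,
# but in a different order (a simple Latin-square rotation).
treatments = ["X1", "X2", "X3"]

orders = []
for g in range(len(treatments)):
    # Rotate the treatment list by g positions for group g.
    order = treatments[g:] + treatments[:g]
    orders.append(order)

for g, order in enumerate(orders, start=1):
    print(f"Group {g}: {' -> '.join(order)}")
# Group 1: X1 -> X2 -> X3
# Group 2: X2 -> X3 -> X1
# Group 3: X3 -> X1 -> X2
```

Note that the number of groups equals the number of treatments, as the design requires, and each treatment appears once in every ordinal position across the groups.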
Potential threats to validity in a experimental design
Any uncontrolled extraneous variables that affect performance on the dependent variable are threats to the validity of an experiment.
Threats to internal validity: history, maturation, testing, instrumentation, statistical regression, differential selection, mortality.
Threats to external validity: pretest-treatment interaction, multiple-treatment interference, selection-treatment interaction, specificity of variables.
Factorial design
Involves two or more independent variables, at least one of which is manipulated by the researcher.

It is used to test whether the effects of an independent variable are generalizable across all levels of another variable or whether the effects are specific to particular levels (i.e., whether there is an interaction between the variables). A 2×2 design is the simplest factorial design; factorial designs rarely include more than three factors.

Interaction: scores for the levels of a first independent variable change depending on the levels of a second independent variable.
Difference between single variable design groups and factorial design groups
A single-variable design involves one manipulated independent variable; a factorial design is any design that involves two or more independent variables, at least one of which is manipulated.
Factorial designs can demonstrate relations that a single-variable design cannot. For example, a variable found not to be effective in a single-variable study may interact significantly with another variable. The interaction effect allows the researcher to see how the variables work together.
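A minimal sketch of how an interaction shows up in a 2×2 factorial design, using hypothetical cell means (the factor names and scores are invented for illustration):

```python
# 2x2 factorial design: factor A = instructional method,
# factor B = ability level. Values are hypothetical cell means.
cell_means = {
    ("method_1", "high"): 85.0,
    ("method_1", "low"):  70.0,
    ("method_2", "high"): 80.0,
    ("method_2", "low"):  78.0,
}

# Simple effect of method at each ability level.
effect_high = cell_means[("method_1", "high")] - cell_means[("method_2", "high")]
effect_low  = cell_means[("method_1", "low")]  - cell_means[("method_2", "low")]

# If the simple effects differ, the method's effect depends on ability level;
# that difference of differences is the interaction.
interaction = effect_high - effect_low
print(effect_high, effect_low, interaction)  # 5.0 -8.0 13.0
```

Here method 1 is better for high-ability students but worse for low-ability students, so a single-variable study averaging over ability could miss the effect entirely; the nonzero interaction is what the factorial design reveals.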


X= treatment
O = test (pre-test or post-test)
R = random assignment of Ss to groups