Term | Definition |
Experimental Design | A study in which subjects are randomly assigned to groups so that there are no systematic differences between the treatment and control groups. Key elements: the environment, sample assignment (random assignment to groups), and the treatment/intervention |
Interval Variable | A variable in which both order of data points and distance between data points can be determined, e.g., percentage scores and distances |
External Validity | The extent to which the results of a study are generalizable or transferable. |
Face Validity | The extent to which a measure or procedure appears, on its surface, to assess what it is intended to assess. |
Factor Analysis | A statistical test that explores relationships among data. The test explores which variables in a data set are most related to each other. |
Fidelity | The extent to which an intervention is implemented as intended by its designers. Refers not only to whether all the intervention components and activities were actually implemented, but also to whether they were implemented in the proper manner. |
Generalizability | The extent to which research findings and conclusions from a study conducted on a sample population can be applied to the population at large. |
Grounded Theory | The practice of developing theories that emerge from observing a group. Theories are grounded in the group's observable experiences, but researchers add their own insight into why those experiences exist. |
Historical Research | The systematic collection and evaluation of data related to past occurrences in order to describe the causes, effects, and trends of those events, which may help to explain present events and anticipate future ones. |
Holistic Perspective | A research approach that takes into account nearly every action or communication within the whole phenomenon of a given community or culture |
Hypothesis | A tentative explanation, based on theory, for certain behaviors, events, or phenomena that have occurred or will occur, or a prediction about the outcome of an experiment. In experimental research, the prediction is about how the treatment/program will affect the outcomes. |
Implementation & Replicability Studies | Studies designed to explain the conditions under which a program or practice was implemented and/or replicated |
Randomized Controlled Trials (RCTs) | Trials in which participants are randomly assigned to receive either an intervention or a control treatment (often usual care services). This allows the effect of the intervention to be studied in groups of people who are: (1) the same at the outset and (2) treated the same way, except for the intervention(s) being studied. Any differences seen in the groups at the end can then be attributed to the difference in treatment alone, and not to bias or chance. |
Independent Variable | A variable that precedes, influences, or predicts the dependent variable. Examples include a treatment or a state variable such as age, size, or weight. |
Inductive | A form of reasoning in which a generalized conclusion is formulated from particular instances |
Inductive Analysis | A form of analysis based on inductive reasoning; a researcher using inductive analysis starts with answers, but forms questions throughout the research process. |
Internal Consistency | The extent to which all questions or items assess the same characteristic, skill, or quality. |
Internal Validity | (1) The rigor with which the study was conducted (e.g., the study's design, the care taken to conduct measurements, and decisions concerning what was and wasn't measured) and (2) the extent to which the designers of a study have taken into account alternative explanations for any causal relationships they explore. |
Interrater Reliability | The extent to which two or more individuals agree. It addresses the consistency of the implementation of a rating system. |
Experimental Group | The group in an experimental design that receives the treatment under investigation; it should be matched with the control group in terms of age, abilities, race, etc. |
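Two of the terms above, interrater reliability and internal consistency, are commonly estimated with simple statistics: Cohen's kappa (agreement between two raters, corrected for chance) and Cronbach's alpha (the extent to which items measure the same quality). As an illustrative sketch only (the function names and sample data here are invented for the example, not drawn from the glossary):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Interrater reliability: agreement between two raters, corrected for
    the agreement expected by chance alone."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal category frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

def cronbachs_alpha(item_scores):
    """Internal consistency of k items; item_scores is a list of k lists,
    each holding one score per subject."""
    k = len(item_scores)

    def variance(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(variance(item) for item in item_scores)
    totals = [sum(subject) for subject in zip(*item_scores)]  # per-subject total score
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical data: two raters coding four observations
print(cohens_kappa(["yes", "yes", "no", "no"], ["yes", "no", "no", "no"]))  # 0.5
# Hypothetical data: two items scored by three subjects, perfectly consistent
print(cronbachs_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```

Kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; alpha near 1 suggests the items assess the same characteristic.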