Frequently Asked Questions on the DJUSD AIM Study

The Vanguard on Tuesday met with several of the researchers on the DJUSD study: How Does the AIM Program Affect Student Outcomes in the Davis Joint Unified School District? Following the report, the authors were asked to submit answers to frequently asked questions about the AIM study, but the follow-up does not appear to have been widely distributed.

The Vanguard is publishing the questions and answers in their entirety.

Carrell, Kurlaender, Page, & Kramer, University of California, Davis. How Does the AIM Program Affect Student Outcomes in the Davis Joint Unified School District? A report submitted to the Davis Joint Unified School District.

Frequently asked questions about our study:

  1. Did the study only compare STAR test scores among students scoring at the 96th percentile vs. the 95th percentile on OLSAT screenings (the student groups just above and below the AIM admission cutoff)?

As described in the report’s appendix, we did not confine our analyses to students at the 95th/96th percentile break. We examined students with OLSAT scores from the 80th to the 99th percentile to uncover which scores were associated with large jumps in participation in GATE/AIM self-contained classrooms.

These empirically estimated jumps occurred at scores ranging from the 89th to the 96th percentile, and our estimates do not change when we examine years with jumps occurring at relatively low scores vs. years with jumps occurring at relatively high scores. Our results also do not change when we confine our analysis to students with scores that are very close to the discontinuities (jumps) vs. when we include students with scores that are relatively far from the discontinuity. Most of the analyses, including the results we presented, include students with scores on the OLSAT at the 97th, 98th, and 99th percentiles, even when the score at which the jump occurred was lower than the 96th percentile. (A toy illustration of this jump-detection idea appears below.)
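To make the jump-detection step concrete, here is a minimal sketch using simulated data; the cutoff of 93, the sample size, and the enrollment rates are invented for illustration and are not the report’s figures. It estimates the enrollment rate at each OLSAT percentile and flags the score where that rate jumps most sharply.

```python
# Illustrative only: locate the OLSAT percentile at which the probability of
# enrolling in a self-contained classroom "jumps". All numbers are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Simulated cohort: OLSAT percentiles 80-99 with an invented cutoff of 93,
# above which enrollment is far more likely.
scores = rng.integers(80, 100, size=2000)
enrolled = rng.random(2000) < np.where(scores >= 93, 0.65, 0.15)

# Enrollment rate at each score, then the score with the largest jump
# relative to the score just below it.
levels = np.arange(80, 100)
rates = np.array([enrolled[scores == s].mean() for s in levels])
jumps = np.diff(rates)                 # rate at s minus rate at s - 1
cutoff = levels[1:][np.argmax(jumps)]  # score where the largest jump occurs
print(f"Estimated cutoff: {cutoff} (jump of {jumps.max():.2f})")
```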

  2. Is it true that before 2013 (when the lottery was implemented) virtually no students scoring below the 98th percentile on the OLSAT were actually in self-contained GATE classrooms?

No. In 2007, 71% of students in fourth grade GATE/AIM self-contained classrooms had an OLSAT score at the 97th percentile or lower. From 2008 to 2013, the percentages were 64%, 84%, 83%, 80%, 81%, and 84%, respectively. On average, from 2007-2013, 78% of students scored at the 97th percentile or lower. This means that most students in GATE/AIM self-contained classrooms had OLSAT scores lower than the 98th percentile. In fact, almost half of students in GATE/AIM self-contained classrooms from 2007-2013 (46%) scored below the 90th percentile on the OLSAT. These students largely entered the program through other means, such as re-testing through the TONI or through private testing.

  3. How many of the kids who scored at the 96th percentile were in self-contained AIM classrooms vs. opted out?

Over the period of our study (2007 to 2013), 54% of fourth grade students who scored at the 96th percentile on the OLSAT enrolled in GATE/AIM self-contained classrooms. Additionally, 65% of students who scored in the 97th through 99th percentiles were enrolled in GATE/AIM self-contained classrooms in the fourth grade.

  4. Did your study include only children on the margin of qualifying for AIM classrooms, or did it also include children in self-contained classrooms?

Our analyses include students who entered self-contained AIM classrooms.

  5. Why not compare students who enroll in an AIM self-contained classroom to qualified students who choose not to enroll?

As described in our report, comparisons of students who choose to enroll in GATE/AIM vs. those who choose not to enroll will not produce estimates of the program’s true effect. Instead, these estimates will also reflect the effect of all the unmeasured differences between the two groups, such as motivation, parental support, and parents’ access to information about the program.

To produce accurate estimates of the effect of the GATE/AIM program, we instead use a regression discontinuity design (RDD). This methodology is recognized by the U.S. Department of Education’s Institute of Education Sciences (IES) What Works Clearinghouse as a “best practices” research design for evaluating the impact of education programs. The methodology relies on the jumps in the probability of GATE/AIM qualification, which correspond to jumps in the probability of enrollment in a GATE/AIM self-contained classroom.

However, less sophisticated methods, like multivariate regression, also produce estimates of the GATE/AIM program’s effect that are not statistically distinguishable from zero. This is striking because we would expect the unmeasured factors that these less sophisticated methods cannot control for to generate estimates that overstate the effect of the GATE/AIM program. (A toy contrast between the two approaches is sketched below.)
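To illustrate why a naive comparison can be biased while the RDD isolates the jump at the cutoff, here is a minimal sketch on simulated data; the cutoff of 96, the bandwidth, and a true program effect of zero are all assumptions for illustration, not the report’s code or figures.

```python
# Toy comparison: naive above/below difference vs. a local linear RDD
# estimate at an assumed cutoff. All data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n, cutoff = 5000, 96

olsat = rng.uniform(80, 100, n)          # running variable (percentile)
above = (olsat >= cutoff).astype(float)  # indicator for crossing the cutoff
# Outcome rises smoothly with OLSAT; the true program effect at the cutoff
# is set to zero here, mirroring a null finding.
star = 300 + 5 * olsat + 0.0 * above + rng.normal(0, 20, n)

# Naive comparison: mean outcome above vs. below the cutoff. This mixes the
# (zero) program effect with the smooth OLSAT-outcome relationship.
naive = star[above == 1].mean() - star[above == 0].mean()

# RDD: within a bandwidth of the cutoff, regress the outcome on the centered
# running variable, the treatment indicator, and their interaction; the
# indicator's coefficient is the estimated jump (program effect) at the cutoff.
h = 3.0
near = np.abs(olsat - cutoff) <= h
x = olsat[near] - cutoff
X = np.column_stack([np.ones(near.sum()), x, above[near], x * above[near]])
beta, *_ = np.linalg.lstsq(X, star[near], rcond=None)

print(f"Naive difference: {naive:.1f} | RDD estimate at cutoff: {beta[2]:.1f}")
```

On this simulated draw, the naive difference is large (roughly the slope times the gap in average OLSAT scores between the two groups) while the RDD estimate is near zero, matching the direction of bias described above.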

  6. Do your results apply to higher-scoring students in the 98th/99th percentile?

Our estimated effects are most applicable to those students whose OLSAT scores are close to the cutoffs (which varied between the 89th and 96th percentiles). This limitation of our study is driven by the nature of the program implemented by the DJUSD, which primarily serves students who are not at the 98th and 99th percentiles. The average OLSAT score of students who qualified from 2007-2013 was at the 83rd percentile. We note, however, that our estimates are consistent when we limit our analyses to years during which the cutoff was the 96th percentile. For those years, students at the 98th and 99th percentiles were very close to the cutoff.

Author

  • David Greenwald

    Greenwald is the founder, editor, and executive director of the Davis Vanguard, which he founded in 2006. He moved to Davis in 1996 to attend graduate school in political science at UC Davis. He lives in South Davis with his wife Cecilia Escamilla Greenwald and three children.

Categories: Breaking News, DJUSD, Education

8 comments

  1. There seems to be a key question here – (a) do supporters of GATE buy these results, and (b) can the results be improved with tweaks to the program, or do they necessitate massive change?

  2. I’m missing something. What “student outcome” was measured by this study?

    Jumps in program participation rates? Jumps in STAR test results?

    What do they mean by jumps? Increases?

    What do the researchers mean by “true effect” of the program? Do they really mean the effect on the one variable they evaluated? Is it that the more students who qualify, the more students participate?

    What do they mean by “estimates of the GATE/AIM program that are not statistically distinguishable from zero”? Seems like some words are missing “estimates of the GATE/Aim program[‘s effects on XXXX] that are not…”

    Increasing, decreasing, or affecting STAR test results of participants has never been a goal of the program. I’m sure someone has told us before, but why was that the metric chosen for evaluating the program’s effects (if that is, indeed, what they’re talking about)? At information nights provided to parents, the GATE/AIM program sells itself based on other metrics, including improving graduation rates and decreasing depression in this population.

        1. Was the information gathered in this way to reassure one group of parents that being in a regular classroom wouldn’t affect their child’s STAR test scores, as a surrogate for achievement?
