Analysis: Data Can Answer Key Questions About AIM Identification But District Has to Be Transparent

On Monday, based on conversations the Vanguard had with key members of the community and a review of the reports by Tobin White and Scott Carrell, the Vanguard noted, among other things, that 331 of 492 students retested through the TONI (Test of Nonverbal Intelligence) had no risk factors at all.

Since then, the Vanguard has been told that those numbers are inaccurate. But the district has thus far withheld those data, citing confidentiality law regarding access to student-level data.

The Vanguard was told that a number of citizens filed a Public Records Act request asking for, among other things, access to the data provided to Tobin White and Scott Carrell. The district apparently reached some sort of understanding with the researchers to provide the university with student data that is otherwise private.

However, there are questions as to what data the researchers actually acquired. The Vanguard has been told that the researchers never talked to the former director of the AIM program, Deanne Quinn, and never had access to her data.

While the district is certainly within its legal rights to withhold private student records from the public, there are clearly ways the district could address such concerns. The district maintained that the data was released to the researchers pursuant to a written agreement for limited research purposes.

There appears to be nothing to prevent the district from releasing the data more generally by simply redacting the names. It is not as though the records – detached from the names – contain significant identifying information.

The issue of GATE identification is among the key issues facing the district. Critics maintain that the integrity of the program is undermined by the loose and capricious way in which retesting is administered.

The OLSAT (Otis-Lennon School Ability Test) is used as a general means to screen GATE-identified students. About 25 percent of those who qualify do so on the OLSAT alone. However, critics point to the 27 percent identified through private testing and 49 percent through the TONI as evidence that the identification process is fatally flawed.

Tobin White noted that just three percent of students score at the 99th-percentile through the OLSAT, while 28 percent do so through TONI. He found that students administered the TONI were six times more likely to qualify than those taking only the OLSAT and nine times more likely to score in the 99th percentile.

However, defenders of the program point out that a key part of the analysis was missed by critics – the search and serve process.

The main instrument that the district uses is the OLSAT, which attempts to assess “examinees’ ability to cope with school learning tasks, to suggest their possible placement for school learning functions, and to evaluate their achievement in relation to the talents they bring to school learning situations.”

If the student does not qualify based on the minimum OLSAT scores at the 96th percentile, there are a set of specific criteria for triggering the search and serve process for re-screening. Yesterday, we wrote that the district is not “enforcing the rule that retesting for the TONI occur with students within five standard errors of the 96th percentile.”

But if you read the GATE Master Plan, you see that this is only one criterion for retesting. As was pointed out to me, if you believe that the OLSAT is problematic for low-SES (socioeconomic status) kids and kids with other learning impairments, then you wouldn’t want to rely simply on near-misses to fill the program.

Instead, there are several assessments that trigger the search and serve process. These include risk factors such as: socioeconomic status, language, health, designated special education, etc. There are also work sample assessments and parent or teacher indicators of gifted characteristics.

While Tobin White found that 331 of 492 students retested through the TONI had no risk factors at all, defenders point out that Professor White is drawing conclusions from information that was not available to him.

The district did not keep records on the kids who did not qualify for the AIM program, due to legal prohibitions. Moreover, the state cannot keep records of those who do qualify without the permission of their parents.

People familiar with the search and serve process believe that the number of those who retest with no risk factors is far closer to 15 to 20 percent. There are a variety of reasons why students may retest. Some may be ethnic minorities who do not have identified risk factors but otherwise meet the criteria.

But there are other reasons as well. Sometimes students fill out the Scantron incorrectly and it does not record their information. Sometimes they skip sections of the test for no apparent reason. Finally, sometimes they are given incorrect instructions.

One of the reasons that TONI test takers end up qualifying at a higher rate than the general population is that the retesting is based on a full assessment of the student’s abilities – the gap between math and verbal scores, performance on the STAR tests, teacher and parent evaluations, and the student’s entire profile.

The argument that has been made to the Vanguard by both sides of this debate is that we need to move from the realm of anecdote to the realm of data analysis. However, that data analysis cannot take place without access to good and reliable data.

The Vanguard is going to pursue obtaining the kind of data that can verify or refute the claims by the researchers who have written reports. If we wish to assess the AIM program, we should do so based on the best possible data, and there are ways to protect the privacy of children while allowing the public a full range of information to scrutinize claims made on both sides.

—David M. Greenwald reporting

Author

  • David Greenwald

    Greenwald is the founder, editor, and executive director of the Davis Vanguard. He founded the Vanguard in 2006. David Greenwald moved to Davis in 1996 to attend Graduate School at UC Davis in Political Science. He lives in South Davis with his wife Cecilia Escamilla Greenwald and three children.

Categories:

Breaking News DJUSD Education

10 comments

  1. It doesn’t matter whether the students had risk factors or not; when three fourths of the students in a selective program fail to qualify on the primary qualifier (district OLSAT), it’s a non-starter. Requiring a score of 96 for admittance, but actually admitting down to 91 is a non-starter. Instituting a search and serve to identify under-represented minorities who may have been at a disadvantage on the OLSAT, but using it to admit mostly white and Asian students (and then being sued for putting the minority students at the end of the wait list) is a non-starter. Our selection process has morphed into something very different from what it was 10 years ago. How we fix it is a legitimate discussion…that it needs fixing is not.

    1. first, i don’t read anyone arguing that it doesn’t need changing, but in order to determine that we need adequate data.

      “It doesn’t matter whether the students had risk factors or not; when three fourths of the students in a selective program fail to qualify on the primary qualifier (district OLSAT), it’s a non-starter.”

      that’s not exactly true.  first, if you accept that the olsat is a flawed measure especially for at-risk kids, then you realize that a relatively low number will qualify.  second, one-quarter of the students qualifying through private testing might or might not be a problem, depending on why they are taking the private testing.  but it seems like this is false: “Instituting a search and serve to identify under-represented minorities who may have been at a disadvantage on the OLSAT, but using it to admit mostly white and Asian students” – who is taking the toni?  and is the problem who is getting identified through the toni or who is actually going into the program?  those are different issues, no?

There is a letter in the Enterprise from Debbie Nichol Poulos that talks about a test that has never come up in any discussion – CogAT – and then goes on to describe problems with the test.

        She also states:

        If it eliminates private testing, the district must provide an alternative way to identify children with learning disabilities or language barriers, or those from diverse ethnic backgrounds or low socioeconomic status.

        What she fails to note is that students who are identified through private testing have been white and Asian, the average OLSAT scores are in the 70th percentile, can be done without any qualifying risk factor, can be tested multiple times starting in kindergarten before the required District OLSAT test, and now form the largest percentage of students identified as GATE in the District.

        She makes the accusation that the elimination of private testing is discriminatory.

        So there are people who are arguing that no change is needed.

        1. “If it eliminates private testing, the district must provide an alternative way to identify children with learning disabilities or language barriers, or those from diverse ethnic backgrounds or low socioeconomic status.”

          and what you fail to note is that the district once had a gate counselor who could have done much of this without the need for private testing, but that position was cut in 2008 due to budget cuts.  there are also the students who come to the district after the olsat is administered, and there are no current provisions for dealing with them.

        2. …and what you fail to note is that the district once had a gate counselor who could have done much of this without the need for private testing, but that position was cut…. 

          Thought this warranted highlighting.

2. The OLSAT actually works quite well if you are selecting for the very high achieving student (and where you set the cut-off will define how high achieving) and, to a lesser extent, the “gifted” student. The issue in using it for classic “gifted” identification is that serious anxiety problems (sensory, social, processing) can be a factor in test taking. The OLSAT in combination with teacher evaluations improves “gifted” evaluation. The bottom line is that the OLSAT isn’t the problem; our use (or misuse) of the OLSAT is the problem. We are no longer trying to identify just “gifted” or even very high achievers. By lowering the admission criterion to 91, you select for a very different student than a criterion of 96 or 97. By admitting a large number of much lower scoring students based on vague cause and effect criteria, you select for a very different student than the students who met the 91 criterion. The end result is a program that no one can define (or agree) who it is for, or why they warrant a separate classroom. Rather than debate what Ms Quinn did or didn’t do, the board directed the district to bring back a clearly defined AIM program…and an admission process that does better than a 75% failure rate.

  2. Great reporting, Vanguard.  Keep it up.

    “there are questions as to what data the researchers actually acquired. The Vanguard has been told that the researchers never talked to the former director of the AIM program”

    Wow–if this is true. Wouldn’t talking to the program coordinator and asking her to explain why the retesting was done have been a key step in researching the program? How can one be sure why students were retested without asking the person who did the retesting?

  3. David, thank you and Don Shor for your efforts to be objective arbiters of this discussion.  The lack of transparency in the board and administration’s actions has been troubling to say the least.  This is as much about ethics as it is the data.
