Sunday Commentary II: Is the District’s AIM Revamp a Failure?

One of the big debates at Thursday night’s school board meeting was the extent to which the AIM changes can be considered a failure.

In the course of her discussion, some of which her colleagues challenged as off-topic, Board President Madhavi Sunder suggested, “I do not feel that our identification process is complete yet.  I believe that there are students that we have not given fair opportunity to see whether or not they actually have gifted potential.”

This is a critical point, especially for a program that has greatly reduced the number of qualified black and Hispanic students.

But Madhavi Sunder raised the ire of some of her colleagues when she suggested that the Naglieri Nonverbal Ability Test and other new tests they used have “failed.”

Ms. Sunder added that we need to make sure we are giving fair access to English Language Learners, low-income students, learning-disabled students, and racial minorities, “that we know are often unfairly disadvantaged on the OLSAT [Otis-Lennon School Ability Test].”  She said they retested these groups “and we failed to identify almost any.”

But there was push-back here. Susan Lovenburg stated, “I don’t agree that the process that the board put in place is a failure.  It functioned as it was intended to and if not, we need to make changes going forward.”

Ms. Archer would later add, “I do agree with President Sunder that we have to address [it] this year, but I really object to calling our program a failure when you don’t know the percentages of third graders in the different minority groups.”

Alan Fernandes said he “wouldn’t call it a failure,” but “I would also not say it’s a total success.”

But Madhavi Sunder offered critical statistics that suggested that the use of the Naglieri was inappropriate.

She stated that “we (have) three percent African American in this district and the number that is identified is zero percent.”

She pointed out that there was only a three percent success rate on the Naglieri and a 32 percent success rate for the CogAT (Cognitive Abilities Test).

“That was the test (CogAT) that we gave to more advantaged students,” she argued.  “What upsets me is that we gave the disadvantaged students a much harder to succeed on test.”  In the past, they were given the TONI (Test of Nonverbal Intelligence), which had a 14.6 percent success rate.  “We didn’t give the TONI to a single low income student this year.”

Under the new guidelines, for students with risk factors related to language or culture, the TONI may be administered. For students with economic risk factors, the Naglieri may be administered. The CogAT, on the other hand, is administered to those who scored within the standard error of measurement on the OLSAT.

But does this make sense?  In the fall of 2012, the New York City School District overhauled its gifted program, announcing that “the Naglieri Nonverbal Ability Test, also known as the NNAT, will count for two-thirds of a student’s score.”  A year later, it altered that to a 50-50 split between the NNAT and the OLSAT.

The problem that NYC was trying to fix is the same problem we face – the overrepresentation of white and Asian students in gifted programs and the underrepresentation of black and Hispanic students.

As Slate wrote in an article this year, “In New York City elementary schools, according to a local newspaper, G&T programs are approximately 70 percent white and Asian while the public school population is 70 percent black and Hispanic. In 2012, New York revamped the test, partially in an effort to make it more inclusive, but so far enrollment statistics have hardly budged.”

In other words, the Naglieri failed to change the demographic distribution of students identified for gifted classes in New York. So why would we expect this to be the appropriate test in Davis?  That Slate article came out in September 2015, before the board voted on final approvals.

When Madhavi Sunder suggested going back to the TONI, which had a 14.6 percent success rate as opposed to 3 percent for the Naglieri, Tom Adams was quick to jump in.

Tom Adams stated, “The use of the TONI before was not appropriate – it’s a test intended for English Learners (and) we were using it as a second test for a lot of groups for whom it’s really not intended.”

But, despite Mr. Adams’ credentials in educational research, his assessment does not square with the literature.

A 2011 review notes, “The TONI-4 has several strengths. It decreases cultural and language factors that often influence verbal-based intelligence tests.”

The issue here is that, for many low-SES (socioeconomic status) students, and especially for students who are either English learners or children of English learners, language issues may block the accurate assessment of their cognitive abilities.

As Madhavi Sunder pointed out in her response, the company that develops and distributes the TONI-4 notes, “Designed for both children and adults, the language-free format makes it ideal for testing those who have previously been difficult to assess, including people with communication disorders, learning disabilities, or problems caused by intellectual disability, autism, stroke, head injury, neuropsychological impairment, or disease.”

But they add, “It also accommodates the needs of individuals who are not proficient in English.”

And that covers not just English learners, but also disadvantaged populations whose verbal skills lag behind those of their cohorts.

In other words, Mr. Adams is completely wrong that the TONI was an inappropriate test, though he may have been correct in questioning the process in which it was used.

In conclusion here, while I might stop short of using the term “fail” in a past-tense, done-deal sort of way, I would agree that the test is failing to identify black and Hispanic students – and likely other students from disadvantaged backgrounds.

I would also argue that, with more careful research, the district could have and should have avoided using the Naglieri, as there is no evidence that it has worked as the district attempted to utilize it.

Finally, given the concerns about getting the testing right, the district should delay implementing the 98th percentile threshold until it has fixed the identification process.  Pushing forward at this point would be a strong signal that the skeptics and critics are correct – that this is simply a way to pare down the program until it is reduced to a single strand or the self-contained program disappears altogether.

—David M. Greenwald reporting

Author

  • David Greenwald

    Greenwald is the founder, editor, and executive director of the Davis Vanguard. He founded the Vanguard in 2006. David Greenwald moved to Davis in 1996 to attend graduate school in political science at UC Davis. He lives in South Davis with his wife Cecilia Escamilla Greenwald and three children.

30 comments

  1. Are you implying that black students and low SES students are not proficient in English and that puts them at a disadvantage when taking the OLSAT and other tests?  That we should be administering a test designed for students that have been difficult to assess because of communication problems?  Again, I believe that these assertions need to be reviewed through some sort of peer review process.

    1. You’re really misusing the term “peer review,” which refers to the process for academic research. What you are calling for here is more study, which I would support. I’m simply pointing out problems with the current process.

      1. There have already been two studies on the use or misuse of the TONI by DJUSD, but you and others here have rejected them.

        What you seem to be saying is that we have a GATE program that identifies students from educated, privileged backgrounds who have a large enough vocabulary to score high on the admissions tests and this has shut out students of color.  Is that really what’s happening – not just here but across the nation?

        1. Actually, I don’t disagree that the identification system needed to be fixed. But that doesn’t mean that the TONI is an inappropriate test under some circumstances. Also, you seem to be ignoring that they implemented the Naglieri when an examination of New York’s experience would have shown it to be ineffective.

        2. ryankelly:  So we are identifying the already well educated, but not really the gifted student?

          DG:  I don’t think we have data to answer that question

          I think there’s already strong evidence to answer that question.

          Don Shor and Eric Hays wrote public pieces for the Vanguard and the Enterprise respectively, probably intending to make different points, but they cited evidence for higher rates of GATE identification in school districts affiliated with UC campus communities (Goleta, La Jolla, Irvine, Berkeley), or with communities that would have higher education levels.  Although high rates of GATE identification in those districts don’t directly say what the parent education levels of the GATE-identified students are, they do point to a correlation.

          I made a public information request of the Davis district asking for the parent education levels of AIM/GATE identified students in Davis for the current school year, and the results I received said that about 82% of students come from families with graduate or professional (for instance, law, medical, or veterinary) education, and about 2% of students came from families with a high school diploma or less.  That is under the old system of AIM/GATE identification.  The numbers for the district at large are about 57% with graduate/professional education, and about 8% with a high school diploma or less.

          As I understand it, Deanne Quinn’s screening criteria identified alternative identification measures for indicators of race, ELL status, and income status, but not parent education level.  It does make sense that highly educated parents would provide their children with a more cognitively enriching environment (introducing a larger vocabulary to their kids, more varied cognitive opportunities and experiences), and that this would result in higher scores on standardized tests measuring cognitive ability.  So her results indicate diversity in the AIM program for race/ethnicity, ELL status, and income status, but not for parent education level.  But is this actually measuring giftedness or only high academic/cognitive achievement?

          There are non-cognitive components to giftedness and to foundational (grade school) education that are not being addressed.

    2. She stated that “we (have) three percent African American in this district and the number that is identified is zero percent.”

      With such a small number of African American students, I would expect that some years ALL African American students would qualify for AIM, other years zero would qualify, still other years one student would qualify, and so forth.  I wish we had real data for our district and our long-time coordinator hadn’t manipulated the data.

      1. Please provide the evidence to back up that outrageous allegation regarding the former coordinator. Sounds like slander to me.

        The coordinator only carried out the Board’s and administration’s orders. She did not write the former identification procedures that now seem to be attributed to her.

  2. We can debate testing methodology all we want, but the truth is that we already know that there were kids from underrepresented minorities who tested near the arbitrary cutoff score for admittance. The important question is would allowing some of these kids in to fill a third section have been the correct choice? When answering that question, I think the answer is an overwhelmingly obvious “of course.”

    I can think of four reasons why following the course suggested by Fernandes and Sunder would have been a better outcome. First, it would have allowed a third section to be housed at the preferred location of the majority of students who were admitted and led to believe that there would be a section at NDE. Second, it would have added diversity to the program, so that the kids are exposed to peers who come from a more diverse set of backgrounds. Third, it would have granted access to kids from the underrepresented minorities we are most concerned about falling behind. Fourth, there is evidence that these same additional underrepresented minority kids thrive when given the opportunity to participate.

    For all these reasons it seems to me that taking a step back from the dogmatism of testing methodology and hard cut off scores could easily have resulted in a better outcome.

    1. Misanthrop

      “The important question is would allowing some of these kids in to fill a third section have been the correct choice? When answering that question, I think the answer is an overwhelmingly obvious ‘of course.’”

      When posing the question and answer as you have done, I agree that this would have been the best course. However, I do believe that this opens the door to the unintended consequence of attempting to pad the program. I certainly see (on the basis of past personal experience) the possibility that affluent parents would see the open strand as an invitation to try to get their own child a slot even if the child did not quite meet the testing criteria, and regardless of whether or not the child would, in reality, benefit from this placement in any way other than the perceived “bragging rights” of a child in the AIM program, much the same way some parents display their child’s academic successes on bumper stickers.

    2. Do away with trying to identify giftedness or high cognitive achievement, and just let parents decide if their child belongs in a GATE/AIM program?

       

      1. I think the test should be one of the factors but not the deciding factor. I think you give a test and, with the results of the test, you allow the parents to be advised by the school counselors as to the suitability of a child for the program. Then you let the parents decide. You could even offer a second test for parents who wanted more information or a different kind of analysis. The question I would pose back to you is how does a test score identify who will be successful or benefit from being in the program? Tobin White showed that with private testing the average OLSAT scores were much lower than the cutoff, yet nobody was complaining that the kids in the program were misplaced; rather, it was argued only that they were being over-identified.

        The one caveat would be that, for kids who were not being successful, the parents would need to be told that they should seriously consider a different placement that would be more appropriate.

        1. I would be open to more students trying the program and seeing if it was a good fit. The important thing is that it would need to stay a GATE program and not just modified or watered down when parents complain it’s too demanding or doesn’t interest their student.

        2. Misanthrop:  The question I would pose back to you is how does a test score identify who will be  successful or benefit from being in the program?

          I don’t know.  I think a correlation between a high test score and a good fit in the program is probably crude at best.  I’ve posted more than once that when I read the district’s response to “How do I know if my child is ‘gifted’,” I’m not sure how cognitive performance on the OLSAT or equivalent alternate tests measures things like sensitivity to issues of morality and justice, having varied and multiple interests, demonstrating a sophisticated sense of humor, or being curious.  The caveat frequently mentioned is that no test is perfect or should be absolutely relied upon, but it seems that we absolutely are relying upon standardized tests (I don’t know of other ‘deciding factors’), and maybe we’re forgetting to question how valid the test is for measuring true giftedness by this definition.  We’re definitely measuring certain cognitive performance, but there is more to maturing than cognitive achievement.

           

  3. GATE/AIM starts to look like a private academy for the academically-advanced that gets to justify its existence riding on the coattails of a small minority of students with specific learning disabilities.

    1. “Starts to look?” Where have you been, Frankly? This is the core conflict that has never been resolved: you have a large contingent of kids ready for a more challenging curriculum in a well-educated community, and you also have a subset that are ill-served in a regular classroom. One thing is certain, however: the current selection process does not resolve the policy conflict at the heart of the issue.

      For me the bottom line is choice: if families choose the program, why not let demand be met by supply? In this case, a third strand could easily have been filled by people choosing to opt in. Since there is no definitive quantitative way of measuring giftedness, why not simply give a test and let families, in consultation with school officials, decide what is the best placement for a child?

      It’s so simple that the failure to implement such a program is astounding. Until the district gets to this solution, the school board is going to be forever locked in a Groundhog Day-like repetition of the same painful, unresolved discussion it spent four hours on the other night. The elements of the argument will change depending on the immediate issue, but the underlying debate will remain.

        1. Frankly

          “As long as the district provides adequate choice for the entire population of students, I am in full support of it.”

          Watch out. It would appear that we are in agreement on yet another issue.

  4. Misanthrop

    “For me the bottom line is choice: if families choose the program, why not let demand be met by supply?”

    I would agree if the program had been objectively demonstrated to be of benefit (not anecdotally for some students) and objectively demonstrated not to do any harm (also not by anecdotal accounts as seen here in posts). It would also need to be true that all choices were objectively as advantageous. This does not appear to me to be the case. So each side is left arguing based on their own preferences rather than any objective criteria. It seems to me that we are substituting passionate belief for objective information on both sides.

    I do not pretend to know what the best course of action is. What is clear to me is that a “right” solution has not been definitively established. It is also clear that some are so convinced of their own “rightness” that they are willing to vilify those who do not agree with their position. I do not see this as constructive engagement.

  5. “I do not pretend to know what the best course of action is.”

    The answer is to let families decide in consultation with school officials. I think one assumption that most would agree upon is that, in general, parents will do what they believe to be in the best interest of their children. In some cases this may prove to be the incorrect decision. Of course most parents struggle with this sort of reflection about many decisions they make regarding their children. At the point at which it becomes obvious that a child is misplaced in a program it would be incumbent upon school officials to try to intervene with the parents for a more appropriate placement. Perhaps a form parents sign upon admittance should include a warning that if the student is not happy or successful in the program parents should seriously consider a different placement.

    This is, by the way, what counselors do every day: figure out the best placement for each child based upon the entire set of data.

  6. Since the OLSAT scores are the best predictor of success in the AIM curriculum (based on research performed on other districts in other places at other times) and the AIM curriculum is a verbally rich curriculum (i.e. nonverbal entrance tests do not predict success with the curriculum), it seems to me that the next rational step is to provide the OLSAT in Spanish to students who speak Spanish at home and in other native languages for students who speak other languages at home.

  7. MrsW

    Such a simple, straightforward suggestion. Has this not already been done? I guess this just illustrates the problem with assumptions. It never occurred to me that any primarily verbal test looking for giftedness, as opposed to language acquisition, would not have been administered in the speaker’s primary language!

  8. It seems to me that we are having difficulty identifying truly gifted students in the District and, instead, are primarily labeling students who are educated and privileged as gifted, with all the entitlements and opportunities that this labeling offers.  If AIM were not a GATE program, but just an honors program for high-achieving students, then maybe the District could control admissions more.  The application process may include testing, but could also include demonstrated academic excellence and consider risk factors that may affect performance.  Because it stops being an entitlement based on the results of a test score, admission would look more like admission to a university or private school.  It would be a competitive process, and the District could decide how large the program could be and ensure that students of color or low-SES students were admitted.  Students who do not appear to be keeping up could be directed to leave the program for a more suitable educational setting.  This would satisfy educated families with high-achieving students, I think.

    The District would have to have a separate GATE program for exceptional students who are not doing well in a normal classroom setting.  The qualifications for this program would be very narrow, very high, and very selective.

  9. This claim that minority kids are being disproportionately affected by the new tests is a bit of a red herring. According to the results released by the district, only 12 white kids would be in GATE this year if the scoring threshold were 98. That’s 26 fewer white kids. Only 30 Asians would qualify. That’s 8 fewer Asians. Latinos would drop by one, from 4 to 3. Blacks would drop from 1 to zero. White students would be losing a pretty big percentage of slots, while the effect on minorities is pretty de minimis.
