You should have your child’s score today or tomorrow. Every parent will look and say: “WTF kid, you’ve gone down!”

Blame not your child. Blame not his teacher… Instead, blame Jack Markell… (Mark Murphy leaves in 9 days, so all the blame on him will be sent there simply to shift it from where it deserves to lie, which is:) Blame Dave Sokola. Blame Earl Jacques… These three destroyed your child’s score….

The achievement level setting process was run by two groups: an online panel that allowed broad stakeholder participation and provided a wide data set, and a more traditional in-person workshop that provided focused judgment from a representative stakeholder panel… Both panels used the “Bookmark Procedure” (Lewis, Mitzel, Mercado, & Schultz, 2012).

In technical speak, each multiple-choice item was mapped in accordance with its “probability of correct response” at each scale score, and each constructed-response item was mapped once for each score point, that is, for the probability of obtaining a score of 1, 2, 3, or higher at each scale score point.

The standard Bookmark procedure (Mitzel et al., 2001) is a complete set of activities designed to yield cut scores on the basis of participants’ reviews of collections of test items. The Bookmark procedure is so named because participants express their judgments by entering markers in a specially designed booklet consisting of a set of items placed in difficulty order, from easiest to hardest.

This method is versatile in mixed testing environments, and with the surge of new testing it has acquired new popularity. The participant reviews the booklet of items, already ordered from easiest to hardest, and places markers separating them into one of four categories (Below Basic, Basic, Proficient, and Advanced), recording those judgments in that little book.

The Bookmark procedure relies on the basic relationship between person ability (θ) and item difficulty (b), discrimination (a), and pseudo-guessing (c), where the probability of answering a dichotomously scored item correctly (P) can be expressed as shown in the following equation:

**Pj(X = 1 | θ) = cj + (1 – cj) / {1 + exp[–1.7aj(θ – bj)]}**

where Pj is the probability of answering item j correctly, θ is the examinee’s ability, aj is the item discrimination index, bj is the item difficulty index, cj is the pseudo-guessing parameter, and exp is the exponential function.
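For the curious, the formula above (the three-parameter logistic, or 3PL, model) can be sketched in a few lines of Python. The item parameters in the example are made up purely for illustration:

```python
import math

def p_correct(theta, a, b, c):
    """Three-parameter logistic (3PL) probability of a correct response.

    theta: examinee ability
    a: item discrimination index
    b: item difficulty index
    c: pseudo-guessing parameter (lower asymptote)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

# Hypothetical item: average difficulty, moderate discrimination,
# 20% chance of a correct random guess.
print(round(p_correct(theta=0.0, a=1.0, b=0.0, c=0.2), 3))  # → 0.6
```

Note how the guessing parameter sets a floor: even a very low-ability examinee answers correctly about 20% of the time on this item, which matters later when the panels decide what “likely” means.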

With each question in the Bookmark procedure, the basic question participants must answer is: **“Is it likely that the minimally qualified or borderline examinee will answer this SR item correctly?”**

Which means we have to define “Likely”….

Smarter Balanced chose .67 (short for .666…, or 2 times out of 3). In other methods, such as those built on the Rasch model’s binary framing, “likely” is defined as 1 out of 2, or .50. Why Smarter Balanced chose the 2/3 standard they neglect to say; they just mention that the originators of the Bookmark pointed to 2/3 and that their own experience aligned closer to the 2/3 assessment.
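The practical effect of choosing 2/3 over 1/2 can be shown by inverting the 3PL formula to ask: what ability does the borderline examinee need for a given response probability? This is a sketch under assumed, illustrative item parameters, not Smarter Balanced’s actual computation:

```python
import math

def theta_at_rp(p, a, b, c):
    """Ability at which the 3PL model yields response probability p.

    Inverts p = c + (1 - c) / (1 + exp(-1.7 * a * (theta - b))).
    Requires p > c (you can't demand a probability below the guessing floor).
    """
    return b - math.log((1.0 - c) / (p - c) - 1.0) / (1.7 * a)

# With no guessing (c = 0), a .50 standard needs ability equal to the
# item difficulty b; the 2/3 standard demands more ability for the same item.
print(round(theta_at_rp(0.50, a=1.0, b=0.0, c=0.0), 3))   # → 0.0
print(round(theta_at_rp(2 / 3, a=1.0, b=0.0, c=0.0), 3))  # → 0.408
```

In other words, raising “likely” from .50 to .67 pushes the required ability upward on every item, which in turn pushes every cut score upward.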

In the Bookmark procedure, panelists are typically asked to find an item that a certain percentage of examinees at a critical threshold will be able to answer correctly. The cut score is identified at the point in an ordered item booklet beyond which panelists can no longer say that the target group would have the specified likelihood of answering correctly.
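Mechanically, the bookmark placement works like this: walk through the ordered booklet and mark the page just past the last item the borderline examinee would “likely” (probability ≥ 2/3) get right. The five-item booklet below is entirely hypothetical:

```python
import math

def p_correct(theta, a, b, c):
    # 3PL probability of a correct response (see the formula earlier).
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def bookmark_page(items, theta_borderline, rp=2 / 3):
    """Return the 1-based page just past the last item the borderline
    examinee answers correctly with probability >= rp.  `items` must be
    in ordered-item-booklet order (easiest to hardest), as (a, b, c)."""
    page = 0
    for i, (a, b, c) in enumerate(items, start=1):
        if p_correct(theta_borderline, a, b, c) >= rp:
            page = i
    return page + 1

# Hypothetical booklet of five items, easy to hard (b increasing).
booklet = [(1.0, -1.5, 0.2), (1.0, -0.5, 0.2), (1.0, 0.0, 0.2),
           (1.0, 0.8, 0.2), (1.0, 1.6, 0.2)]
print(bookmark_page(booklet, theta_borderline=0.5))  # → 4
```

A borderline examinee at ability 0.5 clears the first three items at the 2/3 standard but not the fourth, so the bookmark lands on page 4, and the scale score tied to that page becomes the cut.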

So this translates directly to parents in words like this… The passing level for your child was determined by panels who judged that, at the level picked, a group lacking a certain brightness about them would not be able to answer the question correctly other than by randomly guessing.

The score for Basic is set where that subgroup would be able to answer the questions, but those not yet in that subgroup would get them wrong. The same was done between Basic and Proficient, and again between Proficient and Advanced.

Parents should note this is completely different from how you were graded… When you were graded by your teacher, you studied a book or paper, or your notes, and then took a test to see how many of those things you could remember. If your memory was excellent, you got an A. If not, you got a score based on the number you got right. Those scores were preset: 93% to 100% was an A; 85% to 92% a B; 77% to 84% a C; 69% to 76% a D. Below that was an F.

Your score was solely determined by how many you got right. Strictly mathematical. No one was trying to determine if you were “proficient” or not. You were ranked by how many answers you remembered correctly. If the test was easy, or if the teacher was great, everyone got an A because they all got more than 93% correct….

Where the Smarter Balanced Assessment differs is that not everyone can do the work correctly and get an A. The cut percentages are set ahead of time…

The standards are based on predetermining the failure rate first, and working backwards from that… We start by saying: we want 30% to pass and be called “proficient.” One could just as arbitrarily say 50%, or 60%, or 70%. That is the first step… picking a guideline… Then the participants go through the test, marking their books and mapping out questions from easy to tough.

The score setters look at the participants’ mapped data and see from those scores where that marker-cutting line should be set… Here is an example… Our 1,500-person field test shows us that scores ranged from 2000 to 2800. Working down from the top, the student at the 30% mark of 1,500, number 450, got 2554… That becomes the cut score between Proficient and Basic….

2554 and above will be proficient. 2553 and below will be basic….
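That 30%-from-the-top arithmetic is simple enough to sketch. The field-test scores below are a toy uniform spread from 2000 to 2800, not real data, so the resulting cut (2561) only roughly echoes the article’s illustrative 2554:

```python
def cut_score(scores, top_fraction=0.30):
    """Pick the cut so that `top_fraction` of field-test takers land at
    or above it — the 30%-from-the-top illustration described above."""
    ranked = sorted(scores, reverse=True)   # highest score first
    k = int(len(ranked) * top_fraction)     # e.g. roughly 450 of 1,500 takers
    return ranked[k - 1]                    # score of the k-th taker from the top

# Toy field test: one taker at every score from 2000 through 2800.
field = list(range(2000, 2801))
print(cut_score(field))  # → 2561
```

Everything at or above the returned score is labeled Proficient; everything below it is not. Notice the cut depends entirely on who happened to sit the field test.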

The problem with this approach is that if all the field-test students tend to be good students and score between 2600 and 2800, the cut score gets set somewhere 30% down from the top of that group… which bodes ill when other students take it. So the composition of the field group is very important to the credibility of these field scores.
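The composition effect is easy to demonstrate with two invented groups of test-takers: the same 30%-from-the-top rule lands in very different places depending on who showed up.

```python
def cut_score(scores, top_fraction=0.30):
    # Same rule as before: the score of the taker 30% down from the top.
    ranked = sorted(scores, reverse=True)
    k = int(len(ranked) * top_fraction)
    return ranked[k - 1]

# Hypothetical field groups (made-up scores, one taker per score):
broad_group = list(range(2000, 2801, 2))   # spread across 2000-2800
strong_group = list(range(2600, 2801))     # clustered between 2600 and 2800

print(cut_score(broad_group), cut_score(strong_group))  # → 2562 2741
```

A field group of strong students drags the cut up by nearly 180 points here; a burned-out or random-guessing group would drag it down the same way, which is exactly the Delaware story below.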

In the Delaware pilot year (2014), many teachers told their kids not to worry about the Smarter Balanced Field tests. They were already burned out from the DCAS; they had already gotten their scores and everything, and this was after all… just a field test of no consequence for the kids. Most kids did not take it seriously. Many guessed randomly.

So when the cut scores were devised off those field tests, the 30% line drawn through those depressed scores inaccurately predicted where 30% of students would be proficient and all others would not.

When our children took the test in earnest this past spring, the lowly “guessed-at” cut score was so low that instead of 30% being deemed proficient, roughly 50% were. Many said that was due to the success of the DOE, but if you listen carefully to the DOE, they are backpedaling furiously to avoid claiming that. The reason: this year’s scores have now set the benchmark more accurately, and when next year’s 30% cutoff is devised, it will fall much closer to where only 30% will actually succeed. Your child may be deemed proficient this year… but by next year may score below proficient if this time he fell anywhere between the 50%-mark and the 30%-mark scores across the state…

When you have a floating scale set by people theorizing and not teaching, a scale based allegedly on the formulas mentioned above but whose inputs into those formulas are as subjective as hell (required ability, discrimination index, guessing parameter, etc.), it becomes very obvious that, whether such methods are right or wrong, this in no way benefits your student.

When you get your results you will ask: What questions did he get wrong? How can he study to become better? Or was the question itself wrong, and your very smart little student answered correctly but was mis-scored by the machine?

From this test there is no feedback. No one can find out. Teachers cannot find out. No one is ever allowed to see the actual test. Your child therefore continues to make the same mistake a year later, and a year later, and a year later.

This is why parents today are asking their principals to opt out their child… This test does your child no good, and it takes away so much from his overall educational experience. He is better off with one teacher, one set of rules, and the grade he gets, he gets because he got answers right…. When his test is returned, he looks and instantly knows the right answer going forward….

For your child’s sake, join us other intelligent parents and opt your child out of taking the Smarter Balanced Assessment….
