My district is in transition regarding beginning-of-the-year assessments. In fact, in an effort to retire the old acronym BOYA, which had become ubiquitous, we are now calling them “assessments at the beginning of the year.” While we are in transition, there has been some confusion about what teachers are expected to administer. “What test should I do? The old BOYA? Something new? What?”

And I ask them: **“Well, what do you want to know about your students?”**

For context…

**I hope my wonderful colleagues will forgive me for being somewhat flippant about the work on the old Beginning of Year Assessments (BOYAs), which I do think has had a great deal of value over the years, at the individual, school, and district levels.**

About 10 years ago, a group of math specialists sat in a room and designed 20 to 25 questions assessing some basic skills at each grade level. The fourth grade BOYA addressed fourth grade content per the Common Core standards, the third grade assessment addressed third grade standards, etc., and the questions were closely matched and/or duplicated in an End of Year Assessment (EOYA). This meant that the questions included content that students may not have had any exposure to in September. In general, the questions were DOK 1.

In at least grades 2-5, the test was almost always administered in complete silence within the first week or two of school. Teachers then entered the data into spreadsheets.

Many students in every class scored 100% or close to it, leaving us to wonder what the data meant. Had they really mastered fourth grade content on a deep level? Meanwhile, other kids were frustrated and had trouble answering the first page.

Here are two sample questions from the 4th grade BOYA:

As a specialist, I always felt like this was dead data from day one. How was I supposed to even use it? The questions were all over the place, representing many different domains. I could not get a sense for any student thinking. And what, were we supposed to pull it back out in February when we were teaching about perimeter and area? Who’s to say the original data was valid, much less still accurate?

Then I read Tracy Zager’s (@tracyzager) blog post about the messages assessments like the BOYA communicate. (“How Not To Start the Year in Math,” September 1, 2016) This left me feeling even more sour on the BOYA. Not only did the data feel unhelpful to me and the classroom teachers, but the experience of taking the BOYA may indeed communicate all the wrong messages about math class. **What do we value?** Compliance? Silence? Knowing all of the many ways to classify a square?

I started reviewing BOYA data only with teachers who specifically requested it, or to confirm some vague concerns about outlier underperforming students. “Oh! Hmm. Let’s learn more about them. Shall we pick a clinical interview to do?” No item analysis. No labels.

This year, it looks like teachers have more choice in the matter: K-2 are being encouraged to do clinical interviews (Kathy Richardson), and grades 3-8 are being given formative assessment probes that offer a deeper look into a narrower piece of content, though they’re also welcome to use other assessments, including the old BOYAs. A number of my colleagues have administered the old BOYAs already. Middle school will be trying out the new probes shortly, and I’m excited to hear what we learn.

Because the choices can seem overwhelming, colleagues are asking me more and more for my opinion on what to use. I respond: **“What do you want to know about your students? What messages do you want to communicate about math?”**

It’s the beginning of the year, so I’m hoping that any goal is at least partially formative! Whatever assessments we do should be helping us plan deliberately and joyfully.

Some teachers want to use the old BOYAs to screen students, to “get a sense for the kids.”

“Tell me more,” I probe, clearly to their frustration. No one wants to say that they’re looking to sort kids into boxes — but I think that is an unfair oversimplification of the agenda, anyway. I think the teachers might be looking to see which students may need more frequent check-ins. Who may need some guided math groups at the beginning of the year? How should we plan to differentiate and accommodate for some of their content gaps (or strengths)?

I guess a 25-question test on grade level content can give you that information — I *am* worried about fifth graders who lack accurate and efficient strategies for 3-digit addition — but I also wonder if there are other ways to determine this, without subjecting an entire class to the BOYA. To me, the BOYAs feel less diagnostic and more silently judgmental — but I’m projecting.

I would be guilty of omission if I did not confess that the old BOYAs felt especially biased. Many of our higher SES students, particularly from white and Asian families, take outside math classes. These students are much more likely to perform well on a test assessing FUTURE content than our other students. What was this communicating to our students: that we place a premium value on the work of students who choose to/*can afford* to take outside math classes?

> On day one, I really don’t care if my students know the vocabulary word for a five-sided polygon, can tell time to the half hour, and can calculate perimeter accurately. I’d much rather know how they attack a worthy problem, how they work with one another, and how they feel about the subject of mathematics. I am much more interested in the mathematical practice standards than the content standards in the fall.
>
> – Tracy Zager, “How Not To Start the Year In Math” (9/1/2016)

When I was a classroom teacher, I would have loved a BOYA. I did end up creating my own, which I can’t find and was probably created with some obsolete version of Microsoft Word, anyway. (And it would probably be a bit embarrassing.) I do remember this: there were only 4 computational questions, and the rest was all less-routine problem solving. While most of the students did it independently, they were allowed to ask me questions, or join a small group I led on the front rug to talk about the problems. I took extensive notes. I didn’t allow students to collaborate except when in my small group, mostly out of a desperate need for control. I imagine it was like two steps forward and one step back. It didn’t exactly communicate the feel that I would now say I want for the year, but it was a start, and even as an early-career teacher I knew that **silent, independent data isn’t always the best data**.

I am fortunate to work in a building full of dynamic, excited educators who think about their practice and want what’s best for the kids. It’s been fun to watch them make decisions: do I want to interview students? Do I want to give a probe? …a traditional paper and pencil screening assessment? How will we use the data? That act — determining how THEY want to use data, even data from the old BOYA, to help themselves — goes so far beyond mere compliance with a district mandate. It gives us the power to transform. I guess I’m hopeful that we’ll make it through this transition focused on kids and learning and understanding.

My school has been struggling with figuring out beginning of the year assessments, as well. And I very much agree that the information teachers gather is most powerful when they have the opportunity to make considered decisions about what they care about the most and then find out about it – a mini research project!

We have just decided to experiment in our K-2 band with some number sense screeners developed by the Boulder Valley School District. So far, they have been fascinating to do with the students. If you haven’t looked at them, I encourage you to check them out. They are a free, teacher-developed resource: https://sites.google.com/site/mathscreeners/home.


How fantastic would it be if everyone could treat the “beginning of the year” assessments as little action research projects? (…and how great would it be if we could have time built into our days to collaborate and learn from one another’s projects? 🙂 )

Thank you for the link to the Boulder Valley screeners. I hadn’t seen them before! From a quick glance, it looks like all of the beginning of year screeners are interview-based? I love that! I will have to spend some time next week looking more at these to see how they relate to some of the other tools I have used (Kathy Richardson interviews, New Zealand interviews like the GLoSS and JAM, CGI work, the Cognitive-Based Assessments by Michael Battista… it’s less putting together a puzzle, which has a finished product in mind, and more figuring out how to tell the story of mathematical development in a way that helps us improve student learning experiences. Or something like that!)

Belated l’shana tova!


It’s been really interesting doing the beginning of the year BVSD interview screeners. They are aligned with Math Recovery materials, by the way. Thanks for blogging and launching these conversations!
