
Monday, March 15, 2010

"Educator: 'Race to the Top's' 10 false assumptions," an article by Marion Brady. My reflection is below.

"Race to the Top? National standards for math, science, and other school subjects? The high-powered push to put them in place makes it clear that the politicians, business leaders, and wealthy philanthropists who’ve run America’s education show for the last two decades are as clueless about educating as they’ve always been.
If they weren’t, they’d know that adopting national standards will be counterproductive, and that the 'Race to the Top' will fail for the same reason 'No Child Left Behind' failed—because it’s based on false assumptions."
This article was written six months ago, but I watched it blow up when I posted it to Twitter last week. That's partly because we are now down to the "final 16" states, and with that, many feel jaded, underappreciated, or undervalued. There is an undercurrent of skepticism about Race to the Top funding and selection, which makes sense in an environment where it feels like there are winners and losers. I've heard the terms "race to the middle" and "race to nowhere" in recent weeks, signs that opinions on the entire process are divided. I haven't dug as deep as I would like, but I wonder why certain states weren't chosen, and why Colorado is the westernmost state that was.
After reading this article my thoughts are that...
  • More work doesn't mean better work.
  • How we measure teacher effectiveness is just as important as measuring student proficiency.
  • Teaching kids is more important than teaching curriculum.
  • Assigning blame for the ills of our schools does little more than create a finger-pointing circus.
  • If we can set aside our own biases and agendas (government, unions, community), we can come together, because our goals are the same: we share a vision of creating successful environments and experiences.
  • We continue using terms like incentives, transparency, and accountability, but what do those truly look like?
With the availability of online PD, and with teachers looking to apply these learnings in their own settings and labs, why aren't there better examples of measuring teacher effectiveness in place? How do we measure the true value of professional development without losing the ability to meet teachers where they are and deliver just-in-time learning opportunities that are relevant and useful?
