
Leading with Context

By Jason Leahy posted 01-17-2012 07:28 AM

  
Happy New Year! I hope the holiday season brought you rest and the opportunity to rejuvenate.  Thank you for the tireless and committed effort you provide your students, teachers and entire learning community. 

As you press ahead into a new year, I am sure the thought of goals or resolutions has come to mind.  I must confess that losing a few pounds and exercising a bit more has run across the ticker of my brain.  (Having four, soon to be five, children at home should help my exercise regimen.)  As you consider both your personal and professional goals for this year, one bit of encouragement I have for you is to be sure to lead with context when it comes to educating kids.  2012 will bring an increased focus (if that's possible) on student performance (a.k.a. student achievement on various assessments) with the implementation of performance evaluations happening in many places.  The Common Core Standards will also continue to raise the level of concern placed on student performance on assessments.  If led and managed with the right context, much good can come as a result, but again, it must be led and managed with the right context.

What is context?  For the purposes of this writing, I define context as ensuring that we view the importance placed on assessments with the proper perspective relative to everything else we should be sure to teach kids: emotional intelligence, teamwork, problem solving, life-long learning, etc.  Yong Zhao (http://zhaolearning.com/) provided terrific context for assessments during the IPA's last Principals Professional Conference.  Here's a snapshot of what he had to share.

The United States has a long history of being bad test-takers:
  • In the 1960s, we ranked 12th out of 12 countries tested in the First International Math Study (FIMS).  We were 14th out of 18 countries in the First International Science Study (FISS).
  • The U.S. didn't do a whole lot better in the 70s or 80s.  On the Second International Math Study (SIMS), we ranked 12th out of 15 on number systems, 14th out of 15 on algebra, 12th out of 15 on geometry and 12th out of 15 on calculus.  On the Second International Science Study (SISS), we were 14th out of 14 on biology, 12th out of 14 on chemistry, and 10th out of 14 on physics.
  • Between 1995 and 2007, 8th graders did begin to improve on the Third International Math and Science Study (TIMSS), ranking 28th out of 42 countries in 1995, 15th in 2003, and 9th in 2007.

Conversely, let's look at some stats that Dr. Zhao shared that I believe are more telling and reinforce why context related to assessments is so important:

  • FIMS scores in 1964 correlate at r = -0.48 with 2002 PPP-GDP (Purchasing Power Parity-Gross Domestic Product).  In short, the higher a nation's test score 40 years ago, the worse its economic performance on this measure of national wealth.
  • The nations that scored better than the U.S. in 1964 had an average economic growth rate for the decade of 1992-2002 of 2.5%; the growth rate for the U.S. during that decade was 3.3%.  The average economic growth rate for the decade 1992-2002 correlates with FIMS at r = -0.24.  Like the generation of wealth, the rate of economic growth for nations improved as test scores dropped.
  • The average rank on the Quality of Life Index for nations that scored above the U.S. on FIMS was 10.8.  The U.S. ranked seventh (lower numbers are better).  FIMS scores correlated with Quality of Life at r = -0.57.
  • On the Economist Intelligence Unit's Index of Democracy, the nations that scored below the median on FIMS have a better average rank on achieving democracy (9.8) than the nations that scored above the median (18).  Once again, the U.S. ranked better on attaining democracy than did nations with higher 1964 test scores.
  • An alternative to the Quality of Life Index, the Most Livable Countries Index, shows that six of the nine countries that scored higher on FIMS than the U.S. are worse places to live.  Livability correlates with FIMS scores at r = -0.49.
  • The number of patents issued in 2004 is one indicator of how creative the generation of students tested in 1964 turned out to be.  The average number of patents per million people for the nations with FIMS scores higher than the U.S. is 127.  The U.S. clobbered the world on creativity with 326 patents per million people.  However, FIMS scores do correlate with the number of patents issued: r = 0.13 with the U.S. and r = 0.49 without the U.S.
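For readers less familiar with the r values cited above: Pearson's correlation coefficient runs from -1 to +1, and a negative r means that higher values of one variable tend to go with lower values of the other.  Here is a minimal Python sketch of how such a coefficient is computed.  The numbers in it are made-up illustrative figures, not Dr. Zhao's actual data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's r: cov(x, y) / (std(x) * std(y)), ranging from -1 to +1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical test scores vs. later growth rates (illustration only):
# as scores rise, growth falls, so r comes out strongly negative.
test_scores  = [70, 65, 60, 55, 50]
growth_rates = [2.1, 2.4, 2.9, 3.0, 3.4]

print(round(pearson_r(test_scores, growth_rates), 2))  # prints -0.99
```

A negative r like the -0.48 between FIMS scores and PPP-GDP is exactly this pattern: across the nations measured, higher 1964 test scores tended to accompany lower later economic performance.  Correlation, of course, says nothing about causation; the point is only that high test scores did not predict the outcomes we actually care about.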

Daniel Goleman may sum up all of this best in his book Emotional Intelligence: Why It Can Matter More Than IQ (1997) by stating, "One of psychology's open secrets is the relative inability of grades, IQ or SAT scores, despite their popular mystique, to predict unerringly who will succeed in life... At best, IQ contributes about 20 percent to the factors that determine life success, which leaves 80 percent to other forces."

So, what can we do to ensure assessments and test taking are put in the right context?  Here are a few thoughts:

  1. Mitigate and cushion the blow of negative consequences brought on by policies and legislation drafted and deployed far from your school door.  Considering the last decade living under NCLB, we've gotten pretty good at this.
  2. Continually focus your learning community on the proper use of assessments, which is to improve the instruction delivered to students.
  3. Constantly communicate with your students, teachers, parents and greater community about what is most important for your students to be able to know and do.

I am interested in learning what you think on this topic and how we "lead with context" regarding assessments.  Please share your thoughts.
