My January column referred to a growing trend among brave school leaders who are placing a hold on all technology purchases until someone can tell them what benefit they will get in return (in student performance) for those expenditures. I even suggested that technology purchases should not occur unless someone provides supporting data and proof of a specific instructional outcome.

That column generated a few responses asking how we can isolate one factor, technology, and measure its impact on student achievement.

One of the best demonstrations of "curriculum-driven technology" that I have observed exists at Leadmine Elementary School, Wake County Public School System, Raleigh, NC. The school has 550 students, more than half of whom are on the free lunch program, and the students speak 20-30 languages. Yet, 98.7 percent of all fifth graders (including the challenged students) test at or above their grade level. The school has developed a data-driven decision making program for student instruction that is the brainchild of the principal, Dr. Gregory S. Decker, and it is highly technology dependent.

Even though the program is highly technology dependent, the school started with "Curriculum Alignment — Creating a Curriculum Design Framework," which outlined what they taught at each grade level and aligned that with the state's Course of Study. They also identified the student skills that demonstrate mastery in each subject area and grade level. They then placed that framework and those skills into a curriculum map, drawing on current research on best-practice teaching methodology, cognitive learning theories and brain research, and indicating when they would teach specific items. The third step was to create curriculum benchmarks for quarterly assessment and minimum standards of achievement, which provide quality control by measuring what and how much a student has learned rather than what was taught. These assessments, correlated to the curriculum, determine where children are versus where they should be and enable timely intervention wherever it is needed.

They then wanted to provide differentiation — differentiated instruction for students based on their individual needs. They wanted to allow for compacting the curriculum for gifted students or direct intervention for students who needed focus on specific curriculum concepts. Unfortunately, they found that different software provided different solutions. At least they were driving the software purchases from their curriculum alignment, mapping and benchmarking, which made the process much simpler than it was before they developed those tools. One of the primary requirements for all software integrated into their curriculum design was that it have a management system providing real-time data on student assessment.

Additionally, they agreed with the ISTE (International Society for Technology in Education) concept of building technology literacy through weekly hands-on time with technology. They determined each student would have a minimum of 90 minutes per week on the computer (three 30-minute sessions), and would use that time to strengthen skills in mathematics, reading, writing and language arts.

I have talked to teachers whose school districts have implemented some variation of this concept, and it seems two flavors are rising to the top. One environment is centrally driven: the instructional delivery method and course content are predefined, and the teacher is trained and expected to execute a specific instructional program in step with all of the other teachers at that grade level or content area. The other environment allows for individual teacher creativity in how the content is delivered, but still holds the teacher accountable for student achievement, based on the common curriculum and benchmarks.

Regardless of which flavor suits your district best, there are profound and broad implications. The data can be mapped by total school, grade level, curriculum area, individual classroom, groups of students or individual students. All performance parameters are there in black and white, and no one can hide. Once a baseline of performance is established, imagine what happens to your assessment of new technology purchases. Every time you change your resources, within a very short amount of time you will have real-time data indicating whether students gained those skills faster or slower than the prior group who used the old resource. Hokey Smokes, Bullwinkle! That's technology purchases driven by student performance.

About the Author

Glenn Meeks is president of Meeks Educational Technology located in Cary, N.C. He can be reached at