Author: Marr, Mary Beth
Date published: July 1, 2010
Reading is the most critical academic skill students will learn and one of the best predictors of overall success in school...and society... There is no question that teaching all of our students to read by the end of third grade, as mandated under NCLB, is a challenge (Hosp & Fuchs, 2005, p. 9).
Promoting reading success and/or preventing or reducing reading failures among low-responding students is not the responsibility of any one group or individual (Algozzine, Daunic, & Smith, 2009; Kavale & Spaulding, 2008; Therrien, Wickstrom, & Jones, 2006; Vaughn, 2003; Vaughn, Wanzek, Murray, Scammacca, Linan-Thompson, & Woodruff, 2009). Administrators need assistance identifying, implementing, and evaluating effective interventions. Teachers need help identifying reading problems as well as efficiently and effectively teaching and monitoring progress. Students need to be taught appropriate literacy skills and supervised in their learning and demonstration of them. Parents need assistance participating as partners in making schools safe and effective places to send their children. In this context, despite its "still experimental" status (cf. Kavale & Spaulding, 2008), Response to Intervention (RtI) has emerged as the new way to think about identification, prevention, and treatment for the "most vulnerable, academically unresponsive children" in schools and school districts (Fuchs & Deshler, 2007, p. 131).
RtI " ...integrates high quality teaching and assessment methods in a systematic way so that students who are not successful when presented with one set of instructional methods can be given the chance to succeed with the use of other practices" (Brown-Chidsey & Steege, 2005, p. 3). RtI is based on the critical , but simple concept that ". . .quality instruction must be in place for all before it can be said that some have [special problems]" (Sailor, Doolittle, Bradley, and Danielson, 2009, p. 734). According to Bradley, Danielson, and Doolittle (2007), the popularity of RtI is partly grounded in the promise that "teachers no longer would have to wait for students to fail before the students could receive services" and partly in the pledge of change at the first indication of unresponsiveness to classroom implementations of scientifically-based interventions (p. 8). Those who promise potential payoff from RtI see it coming from early identification of and strong preventive intervention for academic problems (Mesmer & Mesmer, 2008).
RtI is "...a multitier prevention model that has at least three tiers" (Bradley, et al, 2007, p. 9, emphasis added). In this context, a "tier" refers to intervention provided in response to increasing needs of students. A three-tier prevention model is aimed at catching students eany-before they fall significantly behind-and providing the supports they need throughout their early years of schooling (Vaughn, 2003). Primary (Tier I) interventions are designed to address the majority of students' instructional needs. Once a student has been identified as needing additional support, RtI directs the use of Secondary (Tier II) intervention for students for whom Tier I instruction was insufficient (i.e., students who fall behind on benchmarks skills and require additional intervention to achieve expectations). Tier II is small-group supplemental instruction. When students fail to profit from high-quality implementation of Tier II interventions, RtI proponents direct that additional support is provided. Tertiary (Tier III) intervention is specifically designed and customized instruction that is extended beyond the time allocated for Tiers I and II.
In this classroom-based study, we examined the effectiveness of systematic, tiered interventions on children referred to as "difficult-to-remediate," "treatment resisters," and/or "low responders." In the emerging world of RtI, our work focused on students scoring low on academic measures who were likely to need additional instruction to achieve expected and acceptable levels of academic performance (Vaughn et al., 2009). Our interventions involved increasingly more intensive presentation and modeling, direct explanation, and guided practice of key early literacy skills within general education classroom settings prior to recommending or providing special education services.
We completed our study as part of the Behavior and Reading Improvement Center's (BRIC) ongoing research. The Center was federally supported, and its work focused on prevention of behavior and reading problems for children in K-3 classrooms.
Participants and Setting
The children in the initial comparison study (N = 541) were in second grade in 14 elementary schools randomly selected from those serving large numbers of students at risk for failure in a large urban/suburban district in the southeastern region of the United States. Two groups were represented: Students in schools participating in a multiyear research project investigating a three-tier intervention model (n = 219) and students in demographically-matched comparison schools (n = 322).
Girls and boys were similarly (χ² = 0.146, df = 1, p > .01) represented across treatment (59%:41%) and comparison (60%:40%) groups. Students from African-American and Hispanic ethnic and cultural backgrounds were also similarly (χ² = 0.671, df = 1, p > .01) represented in our treatment (62%:38%) and comparison (66%:34%) groups. All subsequent analyses were completed using combined gender and ethnic groups. Scores on fall oral reading fluency benchmark assessments for children in our study all fell in the "at-risk" (0-25) or "some risk" (26-43) categories designated by the test developers and were used to establish statistical equivalence in our causal-comparative, quasi-experimental design.
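For readers who wish to reproduce these equivalence checks, the chi-square test of independence for a 2 × 2 table can be computed directly from the group counts. The sketch below is our own illustration (not the software used in the study) and omits a continuity correction:

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Chi-square statistic (df = 1) for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    # Shortcut formula: N(ad - bc)^2 divided by the product of the marginal totals.
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

With df = 1 and the significance level set at .01, the statistic must exceed roughly 6.63 before group composition would be judged dissimilar; the small values reported above fall well below that criterion.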
All children in the district participated in 90 minutes of classroom-based core reading instruction and 30 minutes of individual work time (IWT) as part of the comprehensive literacy model implemented in the participating district. Using fall benchmark screening scores, children in treatment schools were assigned to one of three groups based on the level and types of supplemental instruction provided during IWT: primary intervention only (PCFB-I: n = 91), primary plus commercially-available secondary intervention (PCFB-II: n = 83), and primary plus project-developed tertiary intervention (PCFB-III: n = 45).
Primary intervention only. Peer-mediated learning groups in which children work together to support each other represent "...powerful academic interventions that can prevent and/or remediate reading failure before it leads to even more devastating outcomes..." (Maheady, Mallette, & Harper, 2006, p. 66). In these approaches, classes are divided into groups or competing teams in which students engage in a structured learning activity, evaluate each other's performance and provide immediate feedback, and assume reciprocal roles of tutor and tutee. We were interested in applying key features of peer-mediated learning as a small-group targeted intervention for students who were failing to make adequate progress (i.e., "treatment resisters" or non-responders) while participating in whole-group teacher-directed instruction.
If you were to walk into one of the classrooms during the intervention phase of our research, you would see a student passing out the fluency folders and the student coaches (strong readers) partnering with struggling readers at desks or in locations throughout the classroom. The teacher would start the activity with a direction to find partners and read the identified passage chorally. Each student would be reading a different passage matched to his or her independent reading level. When students finished the first read of the passage, they would raise their hands. When everyone was finished, the partners would read the selection aloud a second time, alternating sentences between "coach" and "partner." This phase of the reading was intended to provide a fluent model for the struggling reader as well as an opportunity for support as needed. A third reading would occur with the struggling reader reading the passage aloud and the "coach" providing additional help with unknown words if needed. Usually by this time the reader would readily read the selection. Meanwhile, the teacher would be monitoring how well students followed directions. After this guided and independent practice, the teacher would have students time their reading of the passage. The teacher would encourage students to read fluently and with expression. He or she would start a timer (for one minute) while the "coach" monitored his or her reader and noted how far he or she read in the passage. At the end of one minute, the reader would mark where he or she stopped reading, locate the number of words read, and record this on the chart in his or her folder. The coach also might assist with this summative activity. Later in the day, the teacher would check folders to monitor progress and move students forward to a higher-level passage for the next fluency practice session.
The district fluency benchmark for second grade was 90 words-per-minute. If a student reached benchmark, the teacher placed a star on a chart and noted the next story to read. If students did not reach benchmark, they would practice reading the same story for up to three fluency sessions before moving on to a new story. Because the fluency folder contained roughly three stories at each of levels 6-31, students could continue practicing at their level while avoiding the problem of memorizing a story if they reached a plateau in progress.
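The progression rule described above can be summarized in a few lines. The sketch below is illustrative only; the names and structure are ours, not part of the intervention materials:

```python
BENCHMARK_WPM = 90          # district second-grade fluency benchmark
MAX_SESSIONS_PER_STORY = 3  # practice cap before moving on regardless of score

def next_story(wpm: int, sessions_on_story: int) -> bool:
    """Return True when a reader should move on to a new practice passage."""
    if wpm >= BENCHMARK_WPM:
        return True  # benchmark met: star on the chart, assign the next story
    # After three sessions on one story, move on anyway to avoid memorization.
    return sessions_on_story >= MAX_SESSIONS_PER_STORY
```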
The fluency practice took roughly 10-12 minutes once students had learned the routine. Our treatment classrooms used this intervention at least three times a week, and teachers found that beginning the literacy period with the intervention reduced interruptions and set the stage for the literacy period to follow. We provide additional information for "getting started" with PCFB in Appendix A.
To create the reading passages, we selected stories of general interest to our students that were leveled in difficulty by both publishers (Wright Group, Rigby, Sunshine) and by Fountas and Pinnell (1996), and prepared a set of 46 practice passages of 120-150 words each. When possible, we placed complete sentences on each line and replaced pronouns or inserted terms for clarity, since pictures were not included with the reading passages. We also created sentences or inserted sentences from the end of the passage to close it off so that it made sense to the reader. A sample story is shown in Figure 1.
Primary plus commercially-available secondary intervention. We used Reading Mastery materials (Engelmann & Bruner, 1995a, b) as our secondary-level intervention in second grade because it addressed the phonemic awareness, word attack skills, and comprehension strategies needed to bring students to an appropriate level in reading. The approach is based on Direct Instruction techniques, providing the kind of careful instruction that is needed to teach these skills. It is a program that (a) emphasizes giving students strategies for word attack and comprehension, (b) provides instruction in all subskills that lead to mastery of those strategies, (c) provides a carefully controlled sequence in which those subskills are introduced and reinforced, (d) gives children adequate and realistic practice geared to their needs, and (e) provides individualizing provisions (i.e., placement, skipping). It was used during the IWT period for students with benchmark scores below those of their peers using only PCFB and above those of peers needing more intensive intervention.
Primary plus project-developed tertiary intervention. We developed a supplemental intervention designed to increase phonemic awareness, alphabetic understanding, decoding skills, and fluency of students who were not progressing at the expected rate for their grade level or who, through screening, were shown to be at-risk for failure in reading. The scripted lessons followed formats and a sequence of skills recommended in Direct Instruction Reading (Carnine, Silbert, Kame'enui, & Tarver, 2004) and incorporated Simmons and Kame'enui's (1998) six principles of instructional design. The lessons were brief (i.e., approximately 10 min), designed to be taught to 1-3 students at a time, and provided instruction in the following areas: (a) auditory skills of blending and segmenting (i.e., phonemic awareness); (b) letter-sound correspondences (i.e., alphabetic understanding); (c) reading phonetically regular words (i.e., decoding); (d) fluency building with connected text; and (e) sight word practice. The 110 lessons covered all of the decoding skills addressed through first grade and served as a tertiary intervention for second grade students demonstrating the greatest evidence of treatment resistance. The intervention was used during the IWT portion of the literacy period with project-trained assistants serving as the interventionists.
The district used the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good & Kaminski, 2002a) for monitoring early literacy progress of students in kindergarten through second grade. The DIBELS measures include subtests of letter recognition and tasks related to phonological awareness, the alphabetic principle, and reading aloud. Similar assessments are widely used and accepted as valid and reliable curriculum-based measures of reading (cf. Wayman, Wallace, Wiley, Tichá, & Espin, 2007).
The DIBELS Oral Reading Fluency (ORF) passages were developed primarily to have appropriate and consistent readability for a grade level and benchmark expectations are available (see Table 1) for use in making instructional planning and progress monitoring decisions (Good, Wallin, Simmons, Kame'enui, & Kaminski, 2002; Kaminski & Good, 1998; Good, Kaminski, Shinn, Braaten, Shinn, & Laimon, 2003; Good, Simmons, & Kaminski, 2001). It is a brief, individually administered, standardized, timed assessment that teachers were trained to use as part of the district's comprehensive reading model. The ORF subtest was used in the district's RtI implementation to identify students at risk for reading problems.
During the ORF assessment, students read a passage aloud for 1 min. Words omitted, words substituted, and hesitations of more than 3 seconds are scored as errors. Words self-corrected within 3 seconds are scored as accurate. We used the median score of 3 passage readings as the pre- and post-test measures in our study. The alternate-form reliability for ORF is reported as ranging from .89 to .96. The concurrent validity of second-grade ORF with other similar measures ranges from .91 to .96. The reported average Spache readability estimate for first grade benchmark passages was 2.2 (Good & Kaminski, 2002b). In another study of the reliability and validity of the second grade passages, the median alternate-form reliability coefficient was .94 and concurrent validity estimates ranged from .92 to .96 with a median of .95 (Good, Kaminski, Smith, & Bratten, 2001).
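The scoring procedure above reduces to words read minus errors in one minute, with the median taken across three passages. A minimal sketch, with function names of our own choosing:

```python
from statistics import median

def words_correct_per_minute(words_read: int, errors: int) -> int:
    """Words correct in a single 1-minute passage reading."""
    return max(words_read - errors, 0)

def orf_score(passages):
    """Median words-correct-per-minute across (words_read, errors) pairs."""
    return median(words_correct_per_minute(w, e) for w, e in passages)

# e.g., readings of (55, 3), (61, 5), and (48, 2) give 52, 56, and 46,
# so the student's ORF score is the median, 52.
```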
To control for possible contamination resulting from shared ideas and extant teaching methods, teachers participated in individualized professional development prior to and during the intervention phase of our study. During the initial sessions, teachers were provided with a brief overview of the importance of oral reading fluency as well as the materials and passages needed for planning and implementing PCFB in their classrooms. They also received regular "check-support-connect" visits from researchers and graduate assistants to ensure questions regarding implementation expectations and efforts were addressed in a timely manner. Teachers were also "monitored" on an unannounced basis to assess treatment fidelity.
We used a checklist of the essential steps for implementing the PCFB program to assess the extent to which it was being implemented as intended (e.g., the teacher times the students as a group; the coach helps with difficult words, points out errors, and helps the reader correct them; the reader reads at an appropriate pace, not racing through the text). Each of the 18 items was scored during unannounced observations using the following scale: 0 = not met, .5 = initiating, 1 = met expectations, and 2 = mastery plus; we averaged the scores on the items to obtain a marker for the integrity with which the intervention was implemented. The average treatment fidelity for teachers participating in this study was 1.00 (SD = 0.19), which we considered acceptable evidence that the intervention was implemented as intended.
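The fidelity marker described above is simply the mean of the 18 item scores; the sketch below (our own illustration, not the project's scoring materials) makes the computation explicit:

```python
VALID_SCORES = {0, 0.5, 1, 2}  # not met / initiating / met expectations / mastery plus

def fidelity(item_scores):
    """Mean score across the 18 checklist items for one observation."""
    assert len(item_scores) == 18, "the PCFB checklist has 18 items"
    assert all(s in VALID_SCORES for s in item_scores), "invalid item score"
    return sum(item_scores) / len(item_scores)
```

On this 0-2 scale, the reported mean of 1.00 corresponds to items meeting expectations on average.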
We also monitored implementation of the secondary and tertiary interventions that were coupled with PCFB. We used similar checklists and regularly conducted announced and unannounced observations to assess levels of implementation fidelity. Implementation problems were addressed on an individual basis as needed. The overall treatment fidelity for secondary and tertiary interventions was considered adequate and acceptable across interventionists, classrooms, and schools.
Teachers in all classrooms provided the district's regular literacy program, which included 90 minutes of daily language arts instruction using a variety of reading materials (e.g., easy story books, popular children's literature, and basal readers) and a 30-minute independent work time (IWT) that followed the group instruction block each day. Children in treatment schools participated in the peer coach fluency building activities for varying numbers of weeks during the school year during IWT, when children in control classrooms were given opportunities to reread familiar stories and text, read independently, and write independently. Time allocated to reading instruction was the same for all children; however, IWT activities varied as the "intervention conditions" of the study.
Research Questions and Data Analysis
Oral reading fluency scores were compared using descriptive and graphic displays and two-factor analyses of variance with repeated measures. The level of statistical significance for all tests was set at 0.01 and effect sizes were calculated (using Hedges' g) for all comparisons of interest. Two general research questions were addressed:
* To what extent does oral reading fluency performance improve for students participating in Peer Coaching Fluency Building compared to peers participating in district-directed core reading activities?
* To what extent does oral reading fluency performance improve for students participating in Peer Coaching Fluency Building compared to peers also participating in targeted, small group supplemental instruction or individualized supplemental instruction?
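The Hedges' g effect sizes referred to above can be computed from each pair of groups' summary statistics. The following is a minimal sketch using the standard small-sample correction, not the authors' analysis code:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) from two groups' means, SDs, and ns."""
    # Pooled standard deviation across the two groups.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp               # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample bias correction factor
    return d * j
```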
Means, standard deviations, and analysis of variance summary statistics for students with complete data (n = 515) participating in PCFB and their comparison group peers are in Table 2. Statistically significant main effects for Group, F(1, 513) = 13.70, p < .01, and Occasion, F(2, 1026) = 1895.11, p < .01, and a significant interaction of Group and Occasion, F(2, 1026) = 9.13, p < .01, were observed. While fall oral reading fluency scores were similar (t = 2.34, df = 513, p > .01), winter scores (t = 3.89, df = 513, p < .01), spring scores (t = 3.51, df = 513, p < .01), and improvements were different across treatment and comparison groups (see Figure 2). Fall to spring oral reading fluency improved 47 words per minute for students in the treatment group compared to 41 words per minute for their peers in the comparison group.
Means, standard deviations, and analysis of variance summary statistics for students (N = 219) participating in the PCFB-Only Intervention, PCFB plus Secondary Intervention, and PCFB plus Tertiary Intervention are in Table 3. Statistically significant main effects were observed for Group, F(2, 216) = 26.22, p < .01, and Occasion, F(1, 216) = 1526.50, p < .01; the interaction of Group and Occasion, F(2, 216) = 1.13, p > .01, was not significant. Follow-up analyses of the Group main effect indicated that oral reading fluency scores for students participating in PCFB-Only (M = 63.72) and PCFB plus Secondary Intervention (M = 57.37) were similar to each other and statistically significantly different from those for students participating in the PCFB plus Tertiary Intervention (M = 42.76). Follow-up analyses of the Occasion main effect indicated that spring oral reading fluency scores for all students (M = 75.93) were statistically significantly higher than winter scores (M = 57.98), which were statistically significantly higher than fall scores (M = 29.94). These outcomes are displayed in Figure 3.
Professional wisdom and research have consistently supported the importance of fluency in the development of reading proficiency, and a variety of effective methods for assessment and instruction of reading fluency have been developed (Allington, 1977, 1983, 2001; Cunningham, 2005; Dudley, 2005; Dudley & Mather, 2005; Hasbrouck & Tindal, 1992; Hudson, Lane, & Pullen, 2005; Rasinski, 2000, 2003, 2004). Opportunities to develop reading fluency are important for all readers, but teachers of struggling readers in particular must recognize the importance of incorporating explicit fluency-based instruction into their reading programs (Allington, 1977; Cunningham, 2005; Hasbrouck & Tindal, 1992; Hudson, Lane, & Pullen, 2005; Larson, 2002; National Reading Panel, 2000; Snow, Burns, & Griffin, 1998).
The children who participated in PCFB showed statistically significant growth in reading fluency as compared with their peers in "control" second grade classrooms. These outcomes support those of a pilot study in which students in the fluency practice class improved from a winter pretest ORF mean score of 51 to a May posttest ORF mean score of 91, while students in a control classroom obtained a pretest ORF mean of 50 and a posttest ORF mean of 70 (Marr, Nicholson, & Felker, 2004). We also found differences in response to intervention outcomes for students who participated in additional, more intensive instructional opportunities (i.e., PCFB-II and PCFB-III).
Response to Intervention models emphasize increasingly differentiated and intensified instruction in efforts to help "classroom teachers, reading/literacy specialists, speech language pathologists, teachers of English learners, special educators, administrators, and others as they work toward goals of preventing language and literacy difficulties among America's children and improving instruction in these areas for all students" (International Reading Association, 2009, p. 1). In this study, students received different levels and amounts of intervention based on their benchmark oral reading performance. Procedures were standardized and fidelity of implementation was regularly monitored to ensure that all treatment group participants received similar primary and/or supplementary instruction. Our documenting that additional instruction did not result in differential outcomes prompted two additional post-hoc analyses.
First, the finding that participating in Peer Coaching Fluency Building alone (PCFB-I) and Peer Coaching Fluency Building with additional, more intensive, intervention (PCFB-II) resulted in similar outcomes was surprising. The logic of RtI directs that additional exposure to high quality "secondary" intervention improves opportunities to learn (i.e., more instruction, better outcomes). Fall oral reading fluency scores for students who participated in PCFB-I (M = 37.25) were slightly above, but statistically similar (t = 2.39, df = 172, p > .01) to, those for students who participated in PCFB-II (M = 32.46). The overall time (39.14 v. 44.39, respectively) and rate of engagement (41.17 v. 43.04, respectively) of these students in the peer coaching intervention were also statistically equivalent, suggesting that the addition of more intensive intervention may have had little or no effect on the overall performance of students who received it. Interestingly, students who received the most intensive combination of interventions evidenced less growth in oral reading fluency despite participating in PCFB for more time (51.49 v. 39.14 and 44.39, respectively) and at higher rates of engagement (49.47 v. 41.17 and 43.04, respectively) than their peers. Further investigation of the potentially reductive effects of "too much" intervention is clearly warranted. Similarly, the need remains for establishing the long-term resilience or sustainability of the effects of PCFB observed in our study.
Implications for Improvement of Practice
The oral reading fluency intervention that we evaluated is practical and manageable in the classroom. It is easy to implement because students learn to monitor and document their own progress and improvements. After initial implementation, students take ownership of the practice. As with other peer-mediated and repeated reading interventions, when given the opportunity to read stories on their own level, the students gain confidence in their ability and become excited about reading. They also enjoy working with a peer and celebrating improvements and growth in their oral reading fluency.
In our review of research on fluency, we found that there are specific features that need to be a part of instruction to improve oral reading fluency skills. Our intervention (Peer Coaching Fluency Building) incorporates these elements of successful practice and effective instruction, including: modeling fluent reading for the student, providing support or feedback with difficult words, providing opportunities to read a text more than once to gain confidence and control over the reading (e.g., repeated readings), charting student progress, and identifying a benchmark or target the student needs to achieve with each reading (Chard, Vaughn, & Tyler, 2002; Fuchs, Fuchs, & Burish, 2000; Maheady, Mallette, & Harper, 2006; Reutzel, Fawson, & Smith, 2008; Reutzel & Hollingsworth, 1993; Reutzel, Jones, Fawson, & Smith, 2008; Reutzel & Smith, 2004; Samuels, 1979; Therrien, 2004; Therrien, Wickstrom, & Jones, 2006).
While the likelihood of implementing all of the above features into an existing literacy program may seem overwhelming, our intervention can be readily adapted to most classrooms (see Appendix A). To begin, using existing oral reading fluency data, peer partners are selected to "coach" and support the struggling readers in the classroom. Coaches assist with modeling fluent reading, providing feedback, timing, and charting fluency progress. Each coach is given a list of explicit directions to guide him or her through each fluency session. The directions are kept in the reader's fluency folder and are ready for each practice session.
A second feature of this model is the reading material. Each reader has a folder containing a series of short passages leveled in difficulty (see Figure 1). Passages range from a Reading Recovery level 6 (early first grade) to a level 31 (3rd grade) (Fountas & Pinnell, 1996). These leveled passages allow the teacher to match what is being read to the child's independent reading level, thus individualizing the fluency practice for each student. The passages gradually increase in difficulty to scaffold and support the student as he or she reaches a fluency benchmark and then moves up to slightly more difficult material before eventually reaching grade-level material. The passages are short (120-150 words) and are slightly modified versions of real stories from the Wright Group, Rigby, and Sunshine publishers and other rich selections of children's literature. The text is meaningful and fun to read, and it engages the students while they practice their fluency.
Charting progress motivates the students to practice to meet their goal. Each reader has a chart in his or her fluency folder that allows him or her to record how many words per minute he or she read during each fluency session, and to see his or her progress on a regular basis. This almost daily feedback encourages the student to continue to practice and improve. It also provides important assessment information that "informs" the teacher as he/she plans literacy instruction to target students' specific literacy needs (e.g., word analysis instruction).
Limitations and Need for Future Research
Our study combined extant data and quasi-experimental comparisons; and, while we did not randomly assign participants to treatment conditions, we grounded participation in criteria currently advocated as best RtI practice and we established the statistical equivalence of our groups when it was appropriate to do so. Similar methods have been used by other researchers investigating comparable interventions and practices (cf. Denton, Fletcher, Anthony, & Francis, 2006; Vaughn et al., 2009) and in the context of conducting research in authentic educational environments, we considered this to be a reasonable rather than confining limit in our work.
We also focused on a singular definition and measure of oral reading fluency. The assessments we used were those represented in the formative and summative evaluation data compiled in the comprehensive reading model being implemented in the district. Documenting the extent to which similar outcomes would be evident on different measures (e.g., high-stakes outcome tests) remains for future research.
Adams, M. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press.
Algozzine, B., Daunic, A., & Smith, S. (2009). Preventing problem behavior. Thousand Oaks, CA: Corwin.
Allington, R. L. (1977). If they don't read much, how they ever gonna get good? Journal of Reading, 21, 57-61.
Allington, R. L. (1983). Fluency: The neglected goal. Reading Teacher, 36, 556-561.
Allington, R.L. (2001). Kids need to learn to read fluently. In R.L. Allington (Ed.), What really matters for struggling readers (pp. 70-86). New York: Addison Wesley.
Bradley, R., Danielson, L., & Doolittle, J. (2007). Responsiveness to intervention: 1997 to 2007. Teaching Exceptional Children, 39(5), 8-12.
Brown-Chidsey, R., & Steege, M. W. (2005). Response to intervention: Principles and strategies for effective practice. New York: Guilford.
Carnine, D. W., Silbert, J., Kame'enui, E. J., & Tarver, S. G. (2004). Direct instruction reading (4th ed.). Upper Saddle River, NJ: Pearson/Merrill/Prentice Hall.
Chard, D. J., Vaughn, S., & Tyler, B. J. (2002). A synthesis of research on effective interventions for building reading fluency with elementary students with learning disabilities. Journal of Learning Disabilities, 35(5), 386-406.
Cunningham, P. (2005). Struggling readers: "If they don't read much, how they ever gonna get good?" Reading Teacher, 59, 88-90.
Denton, C. A., Fletcher, J. M., Anthony, J. L., & Francis, D. J. (2006). An evaluation of intensive intervention for students with persistent reading difficulties. Journal of Learning Disabilities, 39, 447-466.
Dudley, A. M. (2005). Rethinking reading fluency for struggling adolescent readers. Beyond Behavior, 16-22.
Dudley, A. M., & Mather, N. (2005). Getting up to speed on reading fluency. New England Reading Association Journal, 41(1).
Engelmann, S., & Bruner, E. C. (1995a). Reading mastery classic II. Columbus, OH: SRA/Macmillan/McGraw-Hill.
Engelmann, S., & Bruner, E. C. (1995b). Reading mastery: Fast cycle. Columbus, OH: SRA/Macmillan/McGraw-Hill.
Foorman, B., Francis, D., Fletcher, J., Schatschneider, C., & Mehta, P. (1998). The role of instruction in learning to read: Preventing reading failure in at-risk children. Journal of Educational Psychology, 90, 37-55.
Fountas, I. C., & Pinnell, G. S. (1996). Guided reading: Good first teaching for all children. Portsmouth, NH: Heinemann.
Fuchs, D., & Deshler, D. D. (2007). What we need to know about responsiveness to intervention (and shouldn't be afraid to ask). Learning Disabilities Research & Practice, 22, 129-136.
Fuchs, D., Fuchs, L. S., & Burish, P. (2000). Peer-assisted learning strategies: An evidence-based practice to promote reading achievement. Learning Disabilities Research & Practice, 15(2), 85-91.
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239-256.
Good, R. H., & Jefferson, G. (1998). Contemporary perspectives on curriculum-based measurement validity. In M. R. Shinn (Ed.), Advanced applications of Curriculum-Based Measurement (pp. 61-88). New York: Guilford.
Good, R. H., & Kaminski, R. A. (Eds.). (2002a). Dynamic indicators of basic early literacy skills (6th ed.). Eugene, OR: Institute for the Development of Educational Achievement. Available: http://dibels.uoregon.edu/
Good, R. H., & Kaminski, R. A. (2002b). DIBELS oral reading fluency passages for first through third grades (Tech. Rep. No. 10). Eugene, OR: University of Oregon.
Good, R. H., Kaminski, R. A., Shinn, M., Bratten, J., Shinn, M., & Laimon, L. (2003). Technical adequacy and decision making utility of DIBELS (Technical Report). Eugene, OR: University of Oregon.
Good, R. H., Kaminski, R. A., Smith, S., & Bratten, J. (2001). Technical adequacy of second grade DIBELS oral reading fluency passages (Technical Report No. 8). Eugene, OR: University of Oregon.
Good, R. H., Simmons, D.C., & Kame'enui, E. (2001). The importance and decision-making utility of a continuum of fluency-based indicators of foundational reading skills for third-grade high-stakes outcomes. Scientific Studies of Reading, 5, 257-288.
Good, R. H., Wallin, J., Simmons, D. C., Kame'enui, E. J., & Kaminski, R. A. (2002). System-wide percentile ranks for DIBELS benchmark assessment (Technical Report No. 9). Eugene, OR: University of Oregon.
Hasbrouck, J. E., & Tindal, G. (1992). Curriculum-based oral reading fluency norms for students in grades 2 through 5. Teaching Exceptional Children, 24(3), 41-44.
Hosp, M. K., & Fuchs, L. S. (2005). Using CBM as an indicator of decoding, word reading, and comprehension: Do the relations change with grade? School Psychology Review, 34, 9-26.
Hudson, R. F., Lane, H. B., & Pullen, P. C. (2005). Reading fluency assessment and instruction: What, why, and how? The Reading Teacher, 58, 702-714.
International Reading Association. (2009). IRA Commission on RTI: Working draft of guiding principles. Reading Today, 26(4), 1, 4-6.
Kaminski, R. A., & Good, R. H. (1996). Toward a technology for assessing basic early literacy skills. School Psychology Review, 25, 215-227.
Kaminski, R. A., & Good, R. H. (1998). Assessing early literacy skills in a problem-solving model: Dynamic Indicators of Basic Early Literacy Skills. In M. R. Shinn (Ed.), Advanced applications of Curriculum-Based Measurement (pp. 113-142). New York: Guilford.
Kavale, K. A., & Spaulding, L. S. (2008). Is response to intervention good policy for specific learning disability? Learning Disabilities Research and Practice, 23, 169-179.
Kuhn, M. R., & Stahl, S. A. (2003). Fluency: A review of developmental and remedial practices. Journal of Educational Psychology, 95, 3-21.
Larson, J. (2002). Literacy as snake oil: Beyond the quick fix (New literacies and digital epistemologies, Vol. I). New York: Peter Lang Publishing.
Maheady, L., Mallette, B., & Harper, G. F. (2006). Four classwide peer tutoring models: Similarities, differences, and implications for research and practice. Reading & Writing Quarterly, 22, 65-89.
Marr, M. B., Nicholson, K., & Felker, R. (2004, November). Using partners to build reading fluency. Paper presented at the S. E. regional conference of the International Reading Association, Savannah, Ga.
Mesmer, E. M., & Mesmer, H. A. E. (2008). Response to intervention (RTI): What teachers of reading need to know. The Reading Teacher, 62, 280-290.
National Institute of Child Health and Human Development (2000a). Report of the National Reading Panel. Teaching Children to Read: An Evidence-Based Assessment of the Scientific Research Literature on Reading and Its Implications for Reading Instruction (NIH Publication No. 00-4769). Washington, DC: U.S. Government Printing Office.
National Institute of Child Health and Human Development (2000b). Report of the National Reading Panel. Teaching Children to Read: An Evidence-Based Assessment of the Scientific Research Literature on Reading and Its Implications for Reading Instruction [Online]. Retrieved June 26, 2006, from http://www.nichd.nih.gov/publications/nrp/smallbook.htm.
National Institute of Child Health and Human Development (2000c). Report of the National Reading Panel. Teaching Children to Read: An Evidence-Based Assessment of the Scientific Research Literature on Reading and Its Implications for Reading Instruction: Reports of the Subgroups (NIH Publication No. 00-4754). Washington, DC: U.S. Government Printing Office.
National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: U.S. Department of Health and Human Services.
Rasinski, T. (2000). Speed does matter in reading. The Reading Teacher, 54(2), 146-151.
Rasinski, T. (2003). The Fluent Reader: Oral reading strategies for building word recognition, fluency and comprehension. New York, NY: Scholastic Professional Books.
Rasinski, T. (2004). Creating fluent readers. Educational Leadership, 61, 46-51.
Reutzel, D. R., & Hollingsworth, P. M. (1993). Effects of fluency training on second graders' reading comprehension. Journal of Educational Research, 86, 325-331.
Reutzel, D. R., & Smith, J. A. (2004). Accelerating struggling readers' progress: A comparative analysis of expert opinion and current research recommendations. Reading & Writing Quarterly, 20, 63-89.
Reutzel, D. R., Fawson, P. C., & Smith, J. A. (2008). Reconsidering silent sustained reading: An exploratory study of scaffolded silent reading. Journal of Educational Research, 102, 37-50.
Reutzel, D. R., Jones, C. D., Fawson, P. C., & Smith, J. A. (2008). Scaffolded silent reading: A complement to guided repeated reading that works. The Reading Teacher, 62, 194-207.
Sailor, W., Doolittle, J., Bradley, R., & Danielson, L. (2009). Response to intervention and positive behavior support. In W. Sailor, G. Dunlap, G. Sugai, & R. Horner (Eds.), Handbook of positive behavior support (pp. 729-753). New York: Springer.
Samuels, S. J. (1979). The method of repeated reading. The Reading Teacher, 32, 403-408.
Samuels, S. J. (2006). Reading fluency: Its past, present, and future. In T. Rasinski, C. Blachowicz, & K. Lems (Eds.), Fluency instruction: Research-based best practices (pp. 7-20). New York: The Guilford Press.
Shanahan, T. (2006). Developing fluency in the context of effective literacy instruction. In T. Rasinski, C. Blachowicz, & K. Lems (Eds.), Fluency instruction: Research-based best practices (pp. 21-38). New York: The Guilford Press.
Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press.
Therrien, W. J. (2004). Fluency and comprehension gains as a result of repeated reading. Remedial and Special Education, 25(4), 252-261.
Therrien, W. J., Wickstrom, K., & Jones, K. (2006). Effect of a combined repeated reading and question generation intervention on reading achievement. Learning Disabilities Research and Practice, 21, 89-97.
Vaughn, S. (2003, December). How many tiers are needed for response to intervention to achieve acceptable prevention outcomes? Paper presented at the National Research Center on Learning Disabilities Responsiveness-to-Intervention Symposium, Kansas City, MO.
Vaughn, S., Wanzek, J., Murray, C. S., Scammacca, N., Linan-Thompson, S., & Woodruff, A. L. (2009). Response to early reading intervention: Examining higher and lower responders. Exceptional Children, 75, 165-183.
Walker, B. J., Mokhtari, K., & Sargent, S. (2006). Reading fluency: More than fast and accurate reading. In T. Rasinski, C. Blachowicz, & K. Lems (Eds.), Fluency instruction: Research-based best practices (pp. 86-105). New York: The Guilford Press.
Wayman, M. M., Wallace, T., Wiley, H. I., Tichá, R., & Espin, C. A. (2007). Literature synthesis on curriculum-based measurement in reading. The Journal of Special Education, 41, 85-120.
Zutell, J., & Rasinski, T. V. (1991). Training teachers to attend to their students' oral reading fluency. Theory into Practice, 30, 211-217.
MARY BETH MARR
University of North Carolina at Charlotte
REBECCA L. KAVEL
Charlotte Mecklenburg Schools
KATHERINE KELLER DUGAN
Behavior and Reading Improvement Center
Support for this research was provided in part by Grant No. H238X00001 from the U.S. Department of Education, Office of Special Education Programs, awarded to the University of North Carolina at Charlotte. The opinions expressed do not necessarily reflect the position or policy of the Department of Education, and no official endorsement should be inferred. Correspondence concerning this article should be addressed to Bob Algozzine, BRIC/EDLD/COED, University of North Carolina at Charlotte, 9201 University City Blvd, Charlotte, NC 28223, United States of America, [email@example.com]. The efforts of teachers at Albemarle Road, Druid Hills, Idlewild, Montclaire, Piney Grove, Thomasboro, and Walter G. Byers Elementary Schools in Charlotte, NC are gratefully acknowledged.
Once reading materials are compiled, using this partner fluency model involves a few simple steps, including selecting target students and coaches, creating clear directions to eliminate variation in implementation across students and coaches, and modeling and monitoring activities and performance.
Selecting target students. Setting up this practice in the classroom is not difficult, and it fits readily within the structure of the school day. In our example, children who needed fluency practice were identified using Oral Reading Fluency (ORF) scores and benchmarks from the DIBELS assessments (Good & Kaminski, 2002a); no additional testing was necessary. Our students were second graders reading below 44 words per minute at the beginning of the school year but scoring at 50 or above on the Nonsense Word Fluency (NWF) subtest of DIBELS. The NWF scores indicate that these students have some phonics knowledge and can decode simple three- to four-letter words, so their fluency difficulty is not due to a lack of decoding skill.
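The two-part screening rule above can be stated compactly. The sketch below is illustrative only, not part of the study's materials: the 44-wpm ORF cutoff and the NWF benchmark of 50 come from the article, but the function name and the sample data are hypothetical.

```python
def needs_fluency_practice(orf_wpm, nwf_score, orf_cutoff=44, nwf_benchmark=50):
    """Return True for a second grader who reads below the beginning-of-year
    ORF cutoff but meets the NWF benchmark, i.e., a student whose fluency
    difficulty is not attributable to weak decoding."""
    return orf_wpm < orf_cutoff and nwf_score >= nwf_benchmark

# Hypothetical students: (ORF words per minute, NWF score)
students = {"A": (38, 55), "B": (52, 60), "C": (40, 30)}
targets = [name for name, (orf, nwf) in students.items()
           if needs_fluency_practice(orf, nwf)]
# Only student A is both below the ORF cutoff and at or above the NWF benchmark.
```

Student B reads fluently enough to serve as a coach, and student C's low NWF score suggests a decoding problem that this fluency intervention is not designed to address.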
Selecting coaches and creating directions. Each target student was paired with a classroom peer who scored above the ORF and NWF benchmarks; this peer became the fluency "coach." Typically, we had 10-12 students who needed fluency practice and 10-12 peer coaches, allowing most of the class to participate in the fluency sessions.
A list of explicit directions was created for the coaches to follow and placed in each fluency folder. The directions are as follows:
1. Sit with your partner and find the story to read. (Stories are numbered; each student is assigned a story at his or her independent reading level.)
2. You and your partner read the story together out loud. (Choral reading)
3. Now you and your partner will alternate reading each sentence in the story. You read the first sentence and your partner will read the next. (Modeling)
4. Now, your partner (target student) reads the story by himself. You will help him with any words he can't read. (Support)
5. Have your partner read the story one more time. (Optional)
6. Now you can help time your partner, who will read for one minute. The teacher will tell you when to begin and when to stop. (With practice, students can learn to do this independently.)
7. Help your partner find the number of words that he/she read and write it on the chart.
Modeling and monitoring instruction. Once the students are identified and the folders created, the teacher begins with a few short mini-lessons to model fluent reading of one of the passages in the folder and to engage the students in the reading through echo reading and choral reading of the selection. The teacher takes care to describe and model smooth, fluent, expressive reading. She then demonstrates how to count the number of words read, using the numbers listed at the end of each sentence, and how to record that number on the chart. In a second mini-lesson, the teacher has the students practice following the "coach" directions step by step.
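The counting convention described above, cumulative word totals printed at the end of each sentence, can be sketched as follows. This is a hypothetical illustration: the sample counts and function name are not from the article, only the counting procedure is.

```python
# Cumulative word counts printed at the end of each sentence in a
# hypothetical four-sentence passage (9, 8, 9, and 8 words per sentence).
sentence_end_counts = [9, 17, 26, 34]

def words_read(last_full_sentence, extra_words):
    """Words read when the one-minute timer stops.
    last_full_sentence: 1-based index of the last sentence the reader
    finished (0 if the timer stopped mid-first-sentence);
    extra_words: words read into the following, unfinished sentence."""
    base = sentence_end_counts[last_full_sentence - 1] if last_full_sentence else 0
    return base + extra_words

# Reader finished sentence 2 (17 words) plus 5 words of sentence 3:
print(words_read(2, 5))  # 22
```

The coach reads the printed total at the last completed sentence and counts forward only the few remaining words, which keeps the task manageable for a second grader.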