Building the capacity of state early childhood administrators: CEELO FY2015

February 10, 2016


At CEELO we believe all organizations benefit from a continuous improvement process based on evaluation. That’s why we not only provide Technical Assistance (TA) to states across the country, but also evaluate our own work and act on feedback to enhance our services and outreach.

The Center on Enhancing Early Learning Outcomes (CEELO) is one of 22 comprehensive centers funded by the U.S. Department of Education, Office of Elementary and Secondary Education. CEELO is designed to increase the capacity of State Education Agencies (SEAs) to implement comprehensive, aligned early learning systems so that more children from birth through third grade are prepared to succeed in school. The annual report, a requirement of our funding, outlines the impact of the technical assistance provided in the third year of the five-year project. Between October 1, 2014 and September 30, 2015, CEELO provided three types of technical assistance: (1) Responsive TA; (2) Strategic TA; and (3) Information Resources and Technology-Supported TA.

I. Responsive Technical Assistance to States: CEELO provides targeted support and consultation to states to address policy issues impacting children birth through third grade.


The Range of TA Provided by CEELO in Year Three

General: CEELO co-developed a national webinar on NAESP P-3 standards and the role of state education agencies in supporting principal leadership to implement a Birth to Third Grade framework. It was designed as a follow-up to a summer 2014 introduction by NAESP that was requested by NAECS-SDE members. Follow-up evaluations were favorable: one individual reported that it was “outstanding in every way! Your experience and expertise in the field comes through loud and clear in the content and delivery of these webinars. Thank you!” Another reported that the webinar provided “deep and detailed information.”

Targeted: In May and July 2015, CEELO convened early childhood specialists from Northeast state departments of education for two in-person meetings in Waltham, MA. The first meeting focused on PDG start-up activities and gave participants an opportunity to network and learn from one another about promising practices and challenges in early implementation. The second meeting, held in collaboration with the Regional Educational Laboratory Northeast and Islands (REL-NEI), focused on Kindergarten Entry Assessment design and implementation. Participants heard key findings from researchers in selected states and engaged in conversations about how to address common design and implementation challenges. Participants requested ongoing follow-up conversations with one another; as a result, CEELO has facilitated monthly peer exchange calls among state specialists in the Northeast. The meetings were favorably evaluated. One respondent reported that “person-to-person consultation has been helpful to a very large degree.” Another stated, “I loved meeting all the folks from the New England states.”

Intensive: CEELO supported the development of Nevada’s Office of Early Learning strategic plan. Beginning in Year 2 and continuing in Year 3, through a series of intensive meetings, the CEELO co-director convened key stakeholders who articulated the vision for the new office, developed a strategic plan, and crafted an operational plan that has guided the office’s ongoing operations. Direct results include improved internal and external communications, as well as staffing and professional development plans for new office staff. One key informant noted that the CEELO TA provider was “fabulous in helping prepare, organize and facilitate our strategic planning meeting for our new Office of Early Learning and Development in the Nevada Department of Education. It was very helpful to have someone with outside expertise and such great experience working with other states help us think through the planning and organizing of our new office to hopefully help shape and provide guidance to our agency leadership, restructuring and organizing of our office.”

II. Strategic TA: CEELO engages in multiple efforts supporting all 50 states and territories in sustained initiatives addressing CEELO’s five focus areas. All activities are designed to build capacity and promote SEA policy and leadership development.


Selected Examples of Strategic TA

Building Leadership Skills of Early Childhood Administrators: During Year 3, the first cohort of the Leadership Academy was implemented. The Fellows met 4 times, engaged with their coaches, and completed their Job Embedded Projects.

During the final months of Year 3, CEELO prepared for the second cohort, accepting applications and selecting Fellows who will engage in the Leadership Academy during Year 4. For in-depth information on the design and structure, as well as participant feedback, see State Early Education Leadership Academy: Report on Year 1, 2014-2015, as well as the online Leadership Academy page.

CEELO, in collaboration with the BUILD initiative, conducted the first cohort of the Learning Table in CEELO Year 3. A report documenting state policies to promote effective teaching and learning was produced.

CEELO will continue to support the Learning Table with a second cohort in CEELO Year 4.

Building Capacity of States to Access Research and Best Practice: The 2015 National Roundtable, held on the theme of “Leading for Excellence,” was a success. Of the 150 attendees, 88 represented 41 state agencies, and 25 states brought a team.

Building Capacity to Access Research and Information to Inform Policy: CEELO sponsored or co-sponsored 13 webinars. CEELO TA staff also presented at 18 national and regional meetings sponsored by other organizations on topics of relevance to SEAs and CEELO priorities.

Building Capacity of Preschool Development Grant (PDG) Expansion States to Implement a High-Quality Preschool Program: CEELO provided TA on 23 requests for support on PDG-related topics. These are described in the responsive technical assistance portion of the full report, with links to relevant resources. CEELO also convened PDG staff from multiple states in three peer exchanges in 2015.

III. Information Resources: CEELO produces numerous publications aimed at encouraging best practices and enhancing child outcomes.

CEELO responded to 100% of the 50 information requests made across the range of CEELO priority topics. Requesters were interested both in research on a topic and in information about how other states were addressing critical questions related to our core objectives, including assessment, workforce, systems, data, and birth through third grade. CEELO develops different types of resources, including Policy Briefs, Fast Facts, Annotated Bibliographies, and Tools. Selected examples are outlined below, along with links to resources developed from those queries:

  • Bachelor’s degree requirements for pre-K lead teachers
  • Funding (e.g., funding formulas for per-student expenditures, funding formulas for pre-K)
  • Child assessment
  • Research on high quality pre-K and child outcomes
  • Retention
  • Teacher evaluation and student growth objectives
  • Quality Rating and Improvement Systems


IV. Data on Impact of CEELO TA: Building capacity in SEAs is a primary aim of the TA CEELO provides. CEELO surveyed SEA staff about the ways in which the TA has affected SEA capacity. Survey results reveal that respondents most often reported using the TA to share ideas and lessons learned with colleagues, provide authoritative support to advance their SEA work, deepen their understanding of a topic, and develop relationships. Many used the TA in multiple ways.


CEELO TA to a State in Transition

What CEELO Did: Coordinating closely with the liaison from the Northeast Comprehensive Center, CEELO provided technical assistance to support the development of a strategic plan to implement a system of professional development for early childhood educators in one state. CEELO facilitated a full-day meeting that brought together stakeholders from state agencies, regional offices within the state, and professional development providers.

Shortly after the meeting, the newly elected governor placed restrictions on state spending, offered early retirement options for state employees, and changed strategic direction for early education in the state. To respond to these changes, the state education agency asked CEELO to meet with a team of state staff to translate the strategic plan into an operational plan that could provide a useful guide for state work for the upcoming year.

How the Assistance Impacted the State: Independent evaluations reveal that stakeholders reported the assistance helped with longer-term planning and provided state employees with needed support during a time of staffing challenges. One individual who participated in the longer-term strategic planning process, as well as the process of developing an operational plan, reported that CEELO, “Facilitated discussion of relevant issues and resulted in concrete action.” Another comment was, “I really appreciated the paper on research of best practices — this is something I have been wanting since we cannot use our grant funds to travel out of state to conferences. The session seemed responsive to the needs we verbalized at our meetings.”

What Challenges and Issues Exist for the State: As the state seeks to implement its strategic plan for a system of tiered professional development supports for early education teachers, the state education agency will continue to work with CEELO. The agency has asked CEELO to provide TA in Year 4 to ensure the courses offered are aligned with the state’s broader education goals. Specifically, the state is seeking to support the effective implementation of formative and summative assessments and is in the process of implementing a B-3rd Grade framework of supports. The SEA is eager to align the professional development strategic plan with the ongoing work on assessment and the B-3rd Grade framework so that educators can see how these activities fit together, rather than viewing each separately.

Conclusion and Recommendations


As designed, the annual evaluation identified several areas for improvement and continued focus in Year 4, both in TA delivery and in relationship building among CEELO TA liaisons, state administrators, and Comprehensive Center staff. These are:

  • Expand opportunities for states to learn from one another and tailor experiences to meet participants’ needs.
  • Provide information in formats that can be directly used to inform policy and procedure.
  • Engage state personnel in designing strategic technical assistance.
  • Proactively lead state education agencies in advancing an early learning agenda.

Please see the Annual Report section of our website for the full report from Year 3 and previous years, and explore the CEELO website for more information on our ongoing technical assistance and resources.

Lessons learned from Vanderbilt’s study of Tennessee Pre-K

October 2, 2015

Newly released findings from Vanderbilt’s rigorous study of Tennessee’s state-funded pre-K program are a needed tonic for overly optimistic views. No study stands alone, but in the context of the larger literature the Tennessee study is a clear warning against complacency, wishful thinking, and easy promises. Much hard work is required if high quality preschool programs are to be the norm rather than the exception, and substantive long-term gains will not be produced if programs are not overwhelmingly good to excellent. However, the Vanderbilt study also leaves researchers with a number of puzzles and a similar warning: researchers, too, must not become complacent, and they have hard work ahead.

Let’s review the study’s findings regarding child outcomes. Moderate advantages in literacy and math achievement were found for the pre-K group at the end of the pre-K year and on teacher ratings of behavior at the beginning of kindergarten. However, by the end of kindergarten these were no longer evident and on one measure the no-pre-K group had already surpassed those who had attended pre-K. The pre-K children were less likely to have been retained in kindergarten (4% v. 6%) but were much more likely to receive special education services in kindergarten than the no-pre-K group (12% v. 6%). The pre-K group’s advantage in grade repetition did not continue, but it did continue to have a higher rate of special education services (14% v. 9%) in first grade.

By the end of second grade, the no-pre-K group was significantly ahead of the pre-K group in literacy and math achievement. The most recent report shows essentially the same results, though fewer are statistically significant. Teacher ratings of behavior essentially show no differences between groups in grades 2 and 3. Oddly, special education is not even mentioned in the third grade report. This is puzzling since prior reports emphasized that it would be important to determine whether the higher rate of special education services for the pre-K group persisted. It is also odd that no results are reported for grade retention.

If we are to really understand the Tennessee results, we need to know more than simply what the outcomes were. We need information on the quality of the pre-K program, subsequent educational experiences, and the study itself. It has been widely noted that Tennessee’s program met 9 of 10 benchmarks for quality standards in our annual State of Preschool report, but this should not be taken as evidence that Tennessee had a high quality program. Anyone who has read the State of Preschool knows better. The report (p. 10) specifies that the benchmarks “are not, in themselves, guarantees of quality.” Arguably, some of them are quite low (e.g., hours of professional development), even though many states do not meet them. Moreover, they are primarily indicators of the resources available to programs, not of whether those resources are used well. In addition to high standards, effective pre-K programs require adequate funding and the continuous improvement of strong practices.

The State of Preschool reported that Tennessee’s state funding was nearly $2,300 per child short of the amount needed to implement the benchmarks. More importantly, the Vanderbilt researchers found that only 15% of the classrooms rated good or better on the ECERS-R. They also found that only 9% of time was spent in small groups; the vast majority was spent in transitions, meals, and whole-group activities. This contrasts sharply with the high quality and the focus on intentional teaching in small groups and one-on-one found in programs with long-term gains (Camilli et al., 2010; Barnett, 2011). The Tennessee program was evaluated just after a major expansion, and it is possible that quality was lowered as a result.

Less seems to be known about subsequent educational experiences. Tennessee is among the lowest-ranking states for K-12 expenditures (Quality Counts), which is suggestive but far from definitive regarding experiences in K-3. We can speculate that kindergarten and first grade help those who did not attend pre-K catch up, perhaps at the expense of those who did, and fail to build on early advantages. However, these are hypotheses that need rigorous investigation. Vanderbilt did find that the pre-K group was more likely to receive special education services. Perhaps this lowered achievement expectations and the level of instruction for enough of the pre-K group to tilt results in favor of the no-pre-K group. Such an iatrogenic effect of pre-K would be unprecedented, but it is not impossible. There are, however, other potential explanations.

Much has been made of this study being a randomized trial, but that point is not as important as might be thought. One reason is that across the whole literature, randomized trials do not yield findings that are particularly different from strong quasi-experimental studies. The Head Start National Impact Study and rigorous evaluations of Head Start nationally using ECLS-K yield nearly identical estimates of impacts in the first years of school. Another reason is that the new Vanderbilt study has more in common with rigorous quasi-experimental studies than “gold standard” randomized trials. Two waves were randomly assigned. In the first wave, just 46% of families assigned to pre-K and 32% assigned to the control group agreed to be in the study. In the second wave, the researchers were able to increase these figures to 74% and 68%, respectively. These low rates of participation that differ between pre-K and no-pre-K groups raise the same selection bias threat faced by quasi-experimental studies. And, uncorrected selection bias is the simplest explanation for both the higher special education rate for the pre-K group and the very small later achievement advantage of the no-pre-K group. I don’t think the bias could be nearly strong enough to have overturned large persistent gains for the pre-K group.
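The differential participation described above can be made concrete with a quick calculation (a sketch only; the percentages are the consent rates as reported in the study):

```python
# Consent rates in the Vanderbilt study, by wave, as cited above.
consent = {
    "wave 1": {"pre-K": 0.46, "no-pre-K": 0.32},
    "wave 2": {"pre-K": 0.74, "no-pre-K": 0.68},
}

for wave, rates in consent.items():
    gap = rates["pre-K"] - rates["no-pre-K"]
    print(f"{wave}: {rates['pre-K']:.0%} vs {rates['no-pre-K']:.0%} "
          f"({gap * 100:.0f} percentage-point gap)")
# → wave 1: 46% vs 32% (14 percentage-point gap)
# → wave 2: 74% vs 68% (6 percentage-point gap)
```

When the groups retained in a study differ this much in how they entered it, the comparison inherits the same selection-bias concerns that quasi-experimental designs face.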

Even a “perfect” randomized trial has weaknesses. Compensatory rivalry has long been recognized as a threat to the validity of randomized trials. In Tennessee one group got pre-K; the other sought it but was refused. It appears that some went away angry. Families who agreed to stay in the study could have worked very hard to help their children catch up and eventually surpass their peers who had the advantage of pre-K. Alternatively, families who received the advantage of pre-K could have relaxed their efforts to support their children’s learning. Similar behavior has been suggested by other studies, including a preschool randomized trial I conducted years ago for children with language delays. Such behaviors could also occur without a randomized trial, but they seem less likely.

Randomized trials of individual children also create artificial situations for subsequent schooling. If only some eligible children receive the program, do kindergarten teachers spend more time helping those who did not attend catch up, and “neglect” those who had preschool? Would kindergarten teachers change their practices to build on pre-K if the vast majority of their children had attended pre-K, not just some? Perhaps they would change only with support and professional development.

Clearly, the Vanderbilt study has given the early childhood field much to think about. I am reminded of Don Campbell’s admonition not to evaluate a program until it is proud. However, programs may also be in the habit of becoming proud a bit too easily. We have a great deal of hard work in front of us to produce more programs that might be expected to produce long-term results and are therefore worth evaluating. Researchers also would do well to design studies that would illuminate the features of subsequent education that best build upon gains from preschool.

What we should not do is despair of progress. The media tend to focus on just the latest study, especially if it seems to give bad news. They present a distorted view of the world. Early childhood has a large evidence base that is on balance more positive than negative. There is a consensus that programs can be effective and that high quality is a key to success. Research does help us move forward. Head Start responded to the National Impact study with reforms that produced major improvements. Some states and cities have developed even stronger programs. Tennessee can learn much from those that could turn its program around. If it integrates change with evaluation in a continuous improvement system, Tennessee’s program could in turn become a model for others over the next 5 to 10 years.

–Steve Barnett, Director, NIEER

When Research and Emotions Collide

May 20, 2015

Certain practices evoke strong reactions among early educators. Kindergarten “red-shirting” (Katz, 2000), academic “hothousing” (Hills, 1987), and developmentally inappropriate practice raise ire, yet they pale in comparison to the issue of retaining children early in their school careers. As an increasing number of states adopt policies supporting, even requiring, retention, emotions run high among early educators, policymakers, and parents.

Retention has been common practice for many decades, but is it the right way to go? Everyone agrees that a student is well served by possessing the skills necessary to learn and apply new information, yet we recognize that not all children grasp new information and skills at the same level or at the same time. Thus, the debate over the merits and faults of retention persists.

So what does research have to say about retention? Among many in my generation, retention of young children was seen as bad practice and policy, shaped years ago by research conducted by Shepard and Smith (1987) and reinforced by Jimerson (2001) and others. But as a scientist I know research contributes to understanding, and I strive to let emerging research inform my opinion rather than the reverse. So I hit the journals to review the literature, learning the issue is more nuanced than one might imagine.

You can simply ask, “Does retention work?” but the answer may be hidden behind several doors, not all of which lead to the same conclusion. The answer you get depends on the questions you ask, such as:

  • Does the design of the research influence results?
  • What criteria do states and schools use to make retention decisions, and do different criteria yield different research findings?
  • What does research say about the short- and long-term academic and social/emotional/behavioral effects of retention?
  • Does the age or grade at which retention occurs make a difference in student outcomes?
  • Is retention an effective educational strategy for young children below third grade?
  • Does retention affect certain groups of students differently?
  • Are there effective alternatives to retention?

These questions were among those examined by the Southeast Regional Comprehensive Center Early Childhood Community of Practice and CEELO, when early education leaders from several state departments of education were invited to explore retention as an effective education strategy for young children.

I’ll spare you the details of the research shared in this “succinct” blog, but here are a couple of my research-informed takeaways about a practice that affects nearly 450,000 elementary school children annually, a quarter of whom are kindergartners and 60% of whom are boys. Both teacher- and test-based methods for determining retention are associated with short-term academic gains (typically restricted to literacy) that fade, even disappear, over several years. Research shows mixed results on the impact of retention on short-term social/emotional/behavioral development, while there is evidence of adverse long-term effects, including dropping out: retained children are 20–30% more likely to drop out of school. The fairness of retention policy has also been called into question, fueled by a recent report from the Office for Civil Rights confirming that retention disproportionately affects children of color, children from low-income families, and children with diagnosed learning difficulties, with wide variation in rates across states. Additional research shared with the Community of Practice about retention’s complexities can be found here.

I came away further convinced that the decision to retain a young child, while well-intentioned, is an important, potentially life-changing event; one that should include consideration of multiple factors as to its advisability for a particular child. Inflexible policies based on a single point-in-time assessment, on a single topic or skill (e.g., literacy), may be politically popular, expedient, and, as some would argue, fair, but the research doesn’t convincingly support the practice to ensure intended short- and long-term outcomes for all students.

Further, the costs associated with retention are typically absent from policy discussions. We know significant numbers of children are retained in the early years, including kindergarten (Table 1), and average K-12 per-student costs hover around $12,000 per year. The cost of retention, and the lack of comparison to less costly, effective alternatives such as remediation or peer tutoring, should cause staunch proponents to rethink their position. Combined with the long-term costs associated with drop-out, crime, and unemployment, retention makes little cents or sense when the signs point to supplemental interventions, not to sitting through another year repeating every subject in the same grade, as having the greater impact.
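To make the scale concrete, here is a rough back-of-the-envelope calculation using the two figures cited above (both are approximations, so the result is only illustrative):

```python
retained_per_year = 450_000   # elementary students retained annually (approx., per the figure above)
cost_per_pupil = 12_000       # average K-12 spending per student-year, in dollars (approx.)

# Each retained child adds roughly one extra funded year of schooling.
extra_cost = retained_per_year * cost_per_pupil
print(f"about ${extra_cost / 1e9:.1f} billion in added schooling costs per year")
# → about $5.4 billion in added schooling costs per year
```

That figure covers only the direct extra year of schooling; it excludes the long-term costs of drop-out, crime, and unemployment mentioned above.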

While some encouraging short-term results have been associated with retention, policymakers shouldn’t wave the checkered flag just yet. We would be wise to examine the full body of research evidence, considering both short- and long-term consequences and the critical importance of providing children, parents, and teachers with timely educational and emotional support throughout a student’s career. Layer in the evidence questioning retention as a cost-effective use of resources, and the caution flag should be brought out. When it comes to declaring victory through retention, too much contrary evidence exists and too many important questions remain to allow our emotions to set policy in stone.

Table 1. Retention rates by race/ethnicity and state
(All = all students; AI/AN = American Indian/Alaska Native; NH/PI = Native Hawaiian/Other Pacific Islander; Hisp. = Hispanic/Latino of any race; Two+ = two or more races)

State   All    AI/AN  Asian  NH/PI  Black  Hisp.  Two+   White
US      4%     7%     2%     8%     5%     4%     5%     4%
AL      6%     8%     5%     14%    5%     9%     9%     5%
AK      4%     6%     4%     8%     2%     4%     3%     3%
AZ      3%     5%     2%     7%     4%     3%     3%     3%
AR      12%    11%    13%    14%    26%    13%    11%    8%
CA      3%     9%     2%     5%     5%     3%     4%     4%
CO      2%     5%     2%     4%     2%     2%     3%     2%
CT      5%     12%    3%     16%    8%     8%     8%     3%
DE      3%     5%     2%     0%     4%     4%     3%     2%
DC      3%     33%    2%     0%     4%     4%     3%     1%
FL      5%     9%     3%     4%     7%     5%     7%     4%
GA      6%     4%     3%     11%    5%     7%     8%     5%
HI      12%    21%    7%     13%    11%    14%    12%    13%
ID      2%     3%     3%     3%     1%     3%     1%     1%
IL      2%     2%     1%     2%     2%     1%     3%     2%
IN      5%     5%     3%     0%     6%     6%     6%     4%
IA      2%     11%    2%     3%     3%     4%     3%     2%
KS      2%     4%     2%     0%     2%     3%     2%     2%
KY      4%     8%     3%     5%     2%     5%     5%     4%
LA      4%     3%     2%     0%     5%     4%     4%     4%
ME      4%     5%     4%     14%    6%     5%     5%     4%
MD      2%     0%     2%     27%    3%     4%     2%     2%
MA      3%     5%     3%     8%     5%     5%     7%     2%
MI      7%     12%    5%     7%     6%     9%     11%    6%
MN      2%     7%     1%     11%    4%     3%     2%     2%
MS      8%     10%    7%     5%     8%     14%    1%     8%
MO      3%     5%     2%     6%     4%     4%     4%     3%
MT      4%     6%     0%     6%     4%     6%     4%     4%
NE      4%     9%     2%     19%    3%     4%     4%     3%
NC      5%     9%     3%     5%     6%     5%     6%     4%
ND      5%     8%     14%    27%    13%    10%    3%     4%
NV      2%     3%     1%     2%     4%     2%     1%     2%
NH      3%     0%     1%     0%     5%     5%     0%     3%
NJ      3%     6%     1%     3%     5%     4%     5%     2%
NM      4%     6%     2%     0%     5%     4%     3%     4%
NY      3%     4%     2%     4%     4%     3%     3%     2%
OH      4%     6%     5%     6%     7%     7%     7%     3%
OK      7%     9%     5%     8%     8%     8%     6%     7%
OR      2%     7%     1%     2%     2%     2%     2%     2%
PA      2%     0%     1%     0%     3%     2%     2%     2%
RI      2%     16%    1%     0%     4%     3%     5%     1%
SC      5%     6%     2%     3%     5%     5%     7%     4%
SD      4%     12%    4%     0%     6%     7%     5%     3%
TN      5%     3%     2%     15%    4%     5%     7%     5%
TX      4%     6%     3%     8%     3%     4%     7%     5%
UT      1%     1%     0%     1%     1%     1%     1%     1%
VT      3%     0%     2%     0%     6%     0%     1%     3%
VA      4%     4%     2%     4%     5%     5%     4%     3%
WA      2%     6%     1%     4%     2%     3%     2%     2%
WV      6%     0%     3%     0%     7%     7%     7%     6%
WI      2%     2%     2%     6%     3%     2%     2%     2%
WY      5%     10%    4%     33%    17%    7%     3%     4%

SOURCE: U.S. Department of Education, Office for Civil Rights, Civil Rights Data Collection, 2011–12.

–Jim Squires, Senior Research Fellow

Young immigrants and dual language learners: Participation in pre-K and Kindergarten entry gaps

February 18, 2015

In a recent webinar, NIEER discussed what it means to be Hispanic and a dual language learner (DLL), or Hispanic with immigrant parents. We showed that Hispanic children, DLLs, and children from immigrant backgrounds have lower rates of participation in center-based care (including Head Start and pre-K programs) than White non-Hispanic children. We considered the impacts on enrollment of home language and of varied immigrant backgrounds, which make this group quite heterogeneous. While non-DLL and native-born Hispanic children had enrollment rates above 60 percent, much like White children, only about 45-50 percent of DLL and immigrant-background Hispanic children were enrolled in center-based care.

Pre-K participation of Hispanics in center-based care

That is, only one in two DLL or immigrant Hispanic children attends a center-based program. This suggests that language and immigration status play a defining role in whether children participate.

We then wondered about similarities between these enrollment patterns and kindergarten entry gaps. Using Whites as the reference group, it turns out that Hispanic DLLs and Hispanic immigrant children have very large performance gaps in reading, math, and language; these two groups largely drive the overall Hispanic gaps observed at kindergarten entry. What about Hispanic children who are both DLLs and from an immigrant background? They show very large performance gaps, unlike native-born English-speaking Hispanics, who fare quite well relative to Whites. It appears we are failing this group.

Kindergarten gaps for Hispanic students, math, reading, and language

Patterns are somewhat different when we look at socio-emotional developmental gaps, which do not resemble those for reading, math, and language. While most Hispanic children differ little from Whites in approaches to learning, self-control, or problems with externalizing and internalizing behaviors, native-born Hispanic DLL children show large gaps across all of these domains except internalizing behaviors.

Kindergarten gaps for Hispanic children, social-emotional skills

Putting this all together, policy makers clearly should focus on increasing access, outreach, and participation in high-quality early education for all Hispanic children, but especially for Hispanic DLL children and children whose parents are immigrants. Moreover, policy makers and practitioners alike should recognize how diverse Hispanics are as a group, and how the needs of Hispanic DLL children differ depending on their family histories.

Addressing these issues in early care and education begins with a better understanding of who our children are and whom we are (and are not) serving, including:

  • screening language abilities;
  • developing guidelines and standards that address the needs of these groups;
  • promoting the proliferation of bilingual programs; and
  • planning ways to engage and effectively work with diverse groups of Hispanic children.

How well we do this in the first years of their lives will have important consequences for their developmental pathways and opportunities, and it will be reflected in our society 15-20 years from now.

–Milagros Nores, PhD, Associate Director of Research

The research says high quality preschool does benefit kids

October 21, 2014

In a response for the Washington Post Answer Sheet, Steve Barnett, director of the National Institute for Early Education Research, deconstructs a new Cato Institute policy brief by David J. Armor, professor emeritus of public policy at George Mason University, who also argues his position in a piece under the headline “We have no idea if universal preschool actually helps kids.” We do know. It does. Here are some excerpts from the post, which can be read in its entirety here, outlining what the research really says:

First, if one really believes that today’s preschool programs are much less effective than the Perry Preschool and Abecedarian programs because those programs were so much more costly and intensive, and started earlier, then the logical conclusion is that today’s programs should be better funded, more intensive, and start earlier. I would agree. Head Start needs to be put on steroids. New Jersey’s Abbott pre-K model (discussed later) starts at 3 and provides a guide as it has been found to have solid long-term effects on achievement and school success. Given the high rates of return estimated for the Perry and Abecedarian programs, it is economically foolish not to move ahead with stronger programs.

Second, Armor’s claims regarding flaws in the regression discontinuity (RD) studies of pre-K programs in New Jersey, Tulsa, Boston, and elsewhere are purely hypothetical and unsubstantiated. Every research study has limitations and potential weaknesses, including experiments. It is not enough to simply speculate about possible flaws; one must assess how likely they are to matter. (See the extended post for more details.)

Third, the evidence that Armor relies on to argue that Head Start and Tennessee pre-K have no long-term effects is not experimental. It’s akin to the evidence from the Chicago Longitudinal Study and other quasi-experimental studies that he disregards when they find persistent impacts. Bartik points to serious methodological concerns with this research. Even more disconcerting is Armor’s failure to recognize the import of all the evidence he cites from the Tennessee study. Tennessee has both a larger experimental study and a smaller quasi-experimental substudy. The larger experiment finds that pre-K reduces subsequent grade retention, from 8% to 4%. The smaller quasi-experimental substudy Armor cites as proof of fade-out finds a much smaller reduction from 6% to 4%. Armor fails to grasp that this indicates serious downward bias in the quasi-experimental substudy or that both approaches find a large subsequent impact on grade retention, contradicting his claim of fade-out.

Among the many additional errors in Armor’s review, I address three that I find particularly egregious. First, he miscalculates cost. Second, he misses much of the most rigorous evidence. And third, he misrepresents the New Jersey Abbott pre-K program and its impacts. (See the extended post for more details.)

When a reviewer calls for policy makers to hold off on a policy decision because more research is needed, one might assume that he had considered all the relevant research. However, Armor’s review omits much of the relevant research. (See the extended post for more details.)

Those who want an even more comprehensive assessment of the flaws in Armor’s review can turn to Tim Bartik’s blog post and a paper NIEER released last year, as little of Armor’s argument is new. For a more thorough review of the evidence regarding the benefits of preschool I recommend the NIEER papers and WSIPP papers already cited and a recent review by an array of distinguished researchers in child development policy.

If all the evidence is taken into account, I believe that policy makers from across the political spectrum will come to the conclusion that high-quality pre-K is indeed a sound public investment.

–Steve Barnett, NIEER Director

Is New York City Mayor Bill De Blasio’s method for expanding Pre-K a model for other cities?

September 19, 2014

In this week’s edition of The Weekly Wonk, the weekly online magazine of the New America Foundation, experts were asked: Is New York City Mayor Bill De Blasio’s method for expanding Pre-K a model for other cities? NIEER Director Steve Barnett and Policy Research Coordinator Megan Carolan were among those who weighed in. Their responses can be read below. Please visit the original post here to see all responses.

Steve Barnett, NIEER Director:

Whether NYC offers a good model for other cities to follow in expanding pre-K is something that we will only know after some years.  However, it is not too soon to say that NYC offers one important lesson for other cities.  When adequate funding is available, cities (and states) can expand enrollment quickly on a large scale at high standards.

A key reason is that there is a substantial pool of well-qualified early childhood teachers who do not teach because of the field’s abysmally low financial compensation and poor working conditions. When we offer a decent salary, benefits, and a professional working environment, many more teachers become available. Of course, NYC also put a lot of hard and smart work into finding suitable space and recruiting families to participate. Whether NYC achieves its ultimate goal of offering a high-quality education to every child will not be known for some time, but this will depend on the extent to which NYC has put into place a continuous improvement system to build quality over time.

It would be a mistake to assume that high quality can be achieved at scale anywhere from the very beginning no matter how slow the expansion. Excellence in practice must be developed on the job through peer learning, coaching and other supports.  If NYC successfully puts a continuous improvement system in place and quality steadily improves over the next several years, then it will have much to offer as a model for the rest of the nation.

Megan Carolan, Policy Research Coordinator

When New York City opened the doors to expanded pre-K for thousands of 4-year-olds earlier this month, it marked a huge departure from the scene just a year ago, when Mayor de Blasio was still seen as a longshot candidate and Christine Quinn was focusing on preschool loans. Other cities looking to expand their early childhood offerings may wonder how New York changed so quickly.

Pre-K wasn’t a new cause for de Blasio: expanding it was a hugely personal priority for the Mayor and his wife, and he had been highlighting the shortage of seats since serving as Public Advocate from 2010 until his mayoral election. The de Blasio camp built partnerships both at a personal and political level from the start; the public debate with Governor Andrew Cuomo was never over whether to fund preschool, but how to fund it to balance the needs of the state and the city. Coalition-building didn’t stop there. In order both to solidify political support for this endeavor and to build on existing capacity, the Mayor was clear about including community- and faith-based providers.

Despite the image of tough-talking New York swagger, what really aided the rapid expansion was compromise and building partnerships (some of the very social skills kids will learn in pre-K!). Bringing together diverse stakeholders as well as local and state officials in an effort so clearly supported by residents put pre-K in the fast lane. No two cities will have the same mix of existing systems and political ideologies, but collaboration and compromise are key to meeting the needs of young learners across the country.

Formative Assessment:  Points to Consider for Policy Makers, Teachers, and Researchers

April 16, 2014

Formative assessment is one area in early childhood education where policy is moving at lightning speed. There’s been a lot of support for the appropriateness of this approach to assessment for young learners. Many policy makers and data users have “talked the talk,” perfecting the lingo and pushing the implementation of policies for this approach. Yet there are essential questions to consider when rolling out a plan or process for a state. In the brief released by the Center on Enhancing Early Learning Outcomes (CEELO), I outline several considerations for policy makers in moving such initiatives forward. They’re briefly outlined below, along with considerations for teachers and researchers.

For Policy Makers

Policies around formative assessment in early childhood education will be most successful when the “top 10” items below are considered thoughtfully before implementation.

Overall Considerations for Policymakers Responsible for Formative Assessment Systems

  1. Does the purpose of the assessment match the intended use of the assessment? Is it appropriate for the age and background of the children who will be assessed?
  2. Does the assessment combine information from multiple sources/caregivers?
  3. Are the necessary contextual supports in place to roll out the assessment and use data effectively? (e.g., training, time, ongoing support)
  4. Does the assessment have a base or trajectory/continuum aligned to child developmental expectations, standards, and curricula?  Does it include all key domains?
  5. Does the assessment have a systematic approach and acceptable reliability and validity data?   Has it been used successfully with similar children?
  6. Are the data easily collected and interpreted to effectively inform teaching and learning?
  7. What technology is necessary to gather data?
  8. Are the data useful to teachers and other stakeholders?
  9. What are the policies for implementation and what is the roll-out plan for the assessment?
  10.  Will data be gathered and maintained within FERPA and other security guidelines? Are there processes in place to inform stakeholders about how data are being gathered and held securely to allay concerns?

I encourage all stakeholders in assessment (policy makers, administrators, parents/caregivers, etc.) to exercise patience with teachers learning the science of this process and perfecting the art of implementing such an approach. Although many effective teachers across the decades have been doing this instinctively, as we make the approach more systematic, explicit, and transparent, teachers may have a steep learning curve. However, with the considerations above as a part of the decision-making process, teachers will find it easier to be successful. This policy report provides a guide and framework for early childhood policymakers considering formative assessment. The report defines formative assessment and outlines its process and application in the context of early childhood, focusing on the issues to consider in implementing the formative assessment process. It offers a practical roadmap for decision-makers through several key questions to consider when selecting and supporting assessments and using data to inform and improve instruction.

For Teachers

The intent of formative assessment is to implement the process of using data (observation or other) to inform individualized instruction. The link between this type of embedded assessment and instruction should be seamless. Teachers work with great effort at this on several different levels. Effective early childhood teachers:

  • use immediate feedback from children in the moment and adjust the interaction based on this feedback.
  • collect evidence over time to evaluate the child’s growth and to plan long-term learning goals. These goals are reviewed periodically and adjusted based on new evidence.
  • look at aggregate data across their classrooms.  They examine the data for trends and self-reflect on their teaching practices based on what the data are showing.

For Researchers

We must move forward by setting a strong research agenda on the effects of formative assessment in early childhood classrooms, and not allow policy to outpace research. We need further research on using formative assessment processes to collect, analyze, and use data to improve teaching and learning in the early childhood classroom. This must first include randomized trials of formative assessment to examine its impact on classroom quality and child outcomes. The field needs a clear understanding of how teachers are trained and supported in collecting and using the data, and just what supports are needed for success. This should be coupled with a qualitative understanding of how teachers are using data in their classrooms. Finally, an understanding of who is using the data, in what capacity, and how it fits within the larger assessment system should be a component of any examination of formative assessment.

–Shannon Riley-Ayers, Assistant Research Professor, NIEER and CEELO
