Time to flip the ‘majority minority’ terminology in schools?

You may have missed it but, back in June, the U.S. Department of Education released a report called The State of School Diversity in the United States. Page 6 of the report noted that White students now make up less than half of all students enrolled in American public schools. In other words, they are now the minority. Here are a couple of relevant paragraphs from page 6:

In the 1950s, before the Brown decision, White students made up 9 in 10 students enrolled in public schools. Enrollment data from the National Center for Education Statistics (NCES) in 2022 indicate that White students now make up less than half (45 percent) of all students enrolled in public schools. While the overall school population has become more racially and ethnically diverse, some research suggests that, between 1991 and 2000, segregation between White students and Black students increased and, between 2000 and 2020, remained unchanged, and that socioeconomic isolation is likely to have increased between 1998 and 2020.


According to federal data, nearly one-third of students attend public schools in which the vast majority of enrolled students (75 percent or more) are students of color (Figure 4). Students of color disproportionately attend schools with a vast majority of students of color: 3 in 5 Black and Latino students and 2 in 5 American Indian/Alaska Native students attend schools where at least 75 percent of students are students of color (Figure 4), whereas about half of White students (46 percent) attend schools in which students of color make up less than 25 percent of the student population.

Racial isolation in schools generally results in a number of inequities, including reduced access to learning resources and qualified teachers. The report noted that the greatest driver of school segregation continues to be between-district segregation.

I encourage you to read the report to understand where American schools stand these days regarding desegregation. It’s not pretty. Also, language matters. Perhaps we now should be calling schools that are predominantly White ‘majority minority’ schools?

Thinking about NAEP in Colorado and the Denver Public Schools

[this blog post is a follow-up response to the Twitter exchange with Van Schoales posted below]

2022 10 26 Van Schoales 01

2022 10 26 Van Schoales 02

2022 10 26 Van Schoales 03

Hi Van.

Although I’ve admired your work for years, you and I have never met, which means that we don’t have a relationship to lean into. Seth Godin reminds me that ‘if your audience isn’t listening, it’s your fault, not theirs.’ So please take this post as a very public apology for whatever anger or defensiveness I sparked with my tweet. I’m sure that it could have been worded more artfully, and I regret not phrasing it in a way that might have been received better. I offer this longer-form blog post as an attempt to bypass the lack of nuance available in 280 Twitter characters. I tend to do much of my thinking out loud in front of others, because their feedback makes me smarter.

I don’t know if you had a chance to read my previous blog post, Much ado about NAEP, but I tried to make two key points. The first was that the timing of the March 2022 NAEP tests should give us pause when interpreting the results. Most educators in America probably would tell us that, as difficult as the 2020-2021 school year was, the 2021-2022 year was even tougher. Kids and families still were dealing with incredible trauma, children showed us daily in classrooms (or by their absences) that they needed more support, and we didn’t do a great job of effectively serving large numbers of our young people. Then we had another big COVID spike right before the NAEP administration, and we have absolutely no idea how that impacted student test-takers except that probably a whole bunch of them (and their families) were ill, absent, struggling, etc. during the months that immediately preceded the test. We also know that these things were most true for our least-resourced children. All of this together feels like a reason to take the 2022 NAEP results not just with a grain but a giant boulder of salt. It’s difficult enough in normal times to help our students feel motivated for standardized tests. It’s even tougher given the March 2022 context and when NAEP results don’t mean anything to students personally.

The second point that I tried to make was that ‘Proficient’ on NAEP doesn’t mean what most folks think it does. To quote Tom Loveless, former director of the Brown Center on Education Policy at the Brookings Institution, in most states the Proficient label is “significantly above” what most state policymakers and assessments deem as ‘grade level,’ and the more accurate NAEP proficiency level for most states would be closer to Basic. This is rarely if ever mentioned in media coverage of NAEP results. Accordingly, most people tend to interpret ‘proficient’ in the ordinary sense of the word rather than in an aspirational sense, which is why I mention it whenever I think it is relevant (e.g., in my tweet). As I said in my blog post, we can have some interesting discussions about whether Basic or Proficient is the right NAEP target for states, but we should at least recognize that Proficient is very aspirational in most parts of the country.

Here in Colorado, about 75% of 4th graders met the Basic NAEP standard in math for 2022, and about 36% met the Proficient standard. In 8th grade, about 63% of Colorado students met the Basic NAEP standard in math in 2022, and about 28% of students met the Proficient standard. Those results are essentially equivalent to the 2022 national NAEP averages for math. For reading, about 68% of Colorado 4th graders met the Basic NAEP standard in 2022, and about 38% met the Proficient standard. For 8th grade, about 73% of students met the Basic NAEP standard for 2022 in reading, and about 34% met the Proficient standard. Colorado students’ NAEP results in reading were a few percentage points higher than the 2022 national averages. As many have noted, all of the scores for both Colorado and the nation are down from 2019, which is to be expected.

In my blog post, I also quoted Loveless’ statement that, because the NAEP Proficient standard is so aspirational, “If high school students are required to meet NAEP proficient to graduate from high school, large numbers will fail. If middle and elementary school students are forced to repeat grades because they fall short of a standard anchored to NAEP proficient, vast numbers will repeat grades.” [emphasis added]. We also have evidence that similar percentages of students in nominally higher-performing countries also would have trouble meeting the NAEP Proficient mark. NCES has done the work of mapping Colorado’s state standards for proficiency to NAEP equivalent scores. For math, Colorado’s standard is well above NAEP Proficient in 4th grade and close to Proficient in 8th grade. For reading, Colorado’s standard is much closer to Proficient than Basic in both 4th and 8th grade. Colorado’s standards clearly are more aspirational than those of most other states. Accordingly, fewer Colorado students will be deemed ‘at grade level’ than if our benchmarks were set closer to those elsewhere.

All of which brings us to the concerns that you note in the Denver Public Schools (DPS). DPS is one of 26 urban districts that were sampled in 2022 and, as you stated at DPS Boardhawk, results were worse than for Colorado and for the nation as a whole. DPS’ 4th grade math, 8th grade math, and 8th grade reading results all were essentially equivalent to the large city averages. DPS’ 4th grade reading results were generally a few percentage points higher than the large city averages. Enormous equity gaps exist across student subcategories and, unfortunately, NAEP noted that the 2022 performance gaps are not significantly different from those of 2017. Here is a more detailed breakdown for DPS in a few categories (click on the links to see these tables):

  • 4th grade math
    • White students: 92% Basic, 62% Proficient
    • Black students: 49% Basic, 12% Proficient
    • Hispanic students: 48% Basic, 13% Proficient
    • School lunch not eligible: 77% Basic, 47% Proficient
    • School lunch eligible: 48% Basic, 10% Proficient
  • 8th grade math
    • White students: 81% Basic, 53% Proficient
    • Black students: 41% Basic, 11% Proficient
    • Hispanic students: 40% Basic, 10% Proficient
    • School lunch not eligible: 66% Basic, 35% Proficient
    • School lunch eligible: 38% Basic, 10% Proficient
  • 4th grade reading
    • White students: 84% Basic, 63% Proficient
    • Black students: 46% Basic, 14% Proficient
    • Hispanic students: 37% Basic, 14% Proficient
    • School lunch not eligible: 72% Basic, 48% Proficient
    • School lunch eligible: 36% Basic, 11% Proficient
  • 8th grade reading
    • White students: 88% Basic, 58% Proficient
    • Black students: 54% Basic, 15% Proficient
    • Hispanic students: 52% Basic, 16% Proficient
    • School lunch not eligible: 76% Basic, 42% Proficient
    • School lunch eligible: 51% Basic, 16% Proficient

Yikes! Those performance gaps are both troubling and persistent! They’re also similar to the other large city districts that were sampled, better than some and worse than others. Denver basically is in the middle of the pack for the 2022 NAEP sample of large city school districts.

To quote my own tweet, these performance “divides continue to be of concern.” Like you, I believe that DPS should be extremely transparent about those performance gaps. Also like you, I hope that DPS publicly identifies some concrete plans and actions to remedy its existing equity issues. Additionally, I’m cognizant of the difference between NAEP performance gaps and NAEP performance levels (which is what I was trying to say, albeit artlessly, in my tweet). If Colorado makes it harder for students to be deemed ‘proficient’ than in most other states, of course we’re going to say, “look, fewer kids are proficient!” That’s how we set up the system in the first place (and, once again, we can have a rich discussion about where the line should be set for proficiency). We’re also probably going to say that fewer students are ‘proficient’ in a large urban school system because, sadly, that’s basically the pattern that we see in big city school districts all across the country. Equity gaps in America are large and persistent for students of color, students who live in poverty, students whose primary language isn’t English, and students who have a disability, and the past couple of decades of school reform haven’t done much to alter those gaps. DPS isn’t doing great on these fronts, but it’s not an outlier either.

You said in your tweet that I should be “outraged by Black and Latinx proficiency levels.” Am I – and, without speaking for them, probably my colleagues at the University of Colorado Denver – outraged about proficiency gaps? Yes, of course. Just like you, we also care about equity and we all are fighting for historically-marginalized children across a variety of fronts. Am I personally outraged about proficiency levels? Less so, given the fact that Colorado decided to set a much higher bar than most other states. ‘Proficiency’ is a politically-determined label, not a context-free indicator. If DPS were located in most other states in America, we would say that 41% of its Black students were ‘at grade level’ in 8th grade math instead of 11%. That number is still terrible, particularly compared to their White student peers, but it’s not “1 in 10” either. Again, if Colorado sets the bar higher, by definition fewer students will be proficient. For me, the gaps are much more alarming than whatever level we apply to children’s performance. The real concern is the inequity, not the label.

Van, this is a long post. You may disagree with much of what I said here, and who knows if you even read through to the end or not. But if you did, let me close with this: I think that you and I both have a similar passion for equity in schools, and I also think that we both have a passion for making school different, particularly for historically-marginalized children and families. I might be wrong, but I don’t think so. Given your work with DSST and The Odyssey School of Denver and my work around instructional redesign and leadership for deeper learning, I think that we might have a really interesting and productive conversation together. Let me know if you ever want to have a meetup. I’m happy to join you for lunch or coffee at whatever location is easy for you.

Thanks in advance if you actually read through all of this. Hope we get a chance to talk sometime.


Much ado about NAEP

Scores on the National Assessment of Educational Progress (NAEP) are down after the pandemic. Surprise!

Four big thoughts on all of this…

1. Below is the Centers for Disease Control and Prevention (CDC) graph of daily COVID cases in the U.S. Note the huge spike in January 2022 due to the Omicron variant. Also note that the National Center for Education Statistics (NCES) chose to administer the NAEP tests in March 2022, during the downswing of that huge spike in cases and after two years of COVID trauma (six weeks later America hit the 1 million dead mark). How many kids, families, and educators were ill, recovering from being ill, or still traumatized from loved ones’ deaths, illnesses, or long recoveries? We’ll never know.


2. Always remember that the labels for NAEP ‘proficiency’ levels are confusing. Journalists (and others) are failing us when they don’t report out what NAEP levels mean. For instance, the New York Times reported this graph today from NCES:

2022 10 24 NCES NAEP scores

“Appalling,” right?! That’s what the U.S. Secretary of Education, Miguel Cardona, said about these results. Just look at those low numbers in blue!

BUT… ‘Proficient’ on NAEP doesn’t mean what most folks assume it does. NAEP itself says that ‘Proficient’ does not mean ‘at grade level.’ Instead, the label Proficient is more aspirational. Indeed, it’s so aspirational that most states are not trying to reach that level with their annual assessments. See the map below from NCES (or make your own), which shows that most states are trying for their children to achieve NAEP’s Basic level, not Proficient:

2019 Grade 4 Reading NAEP and state standards

Once again, in the words of Tom Loveless, former director of the Brown Center on Education Policy at the Brookings Institution, “Proficient on NAEP does not mean grade level performance. It’s significantly above that.” So essentially the New York Times and others are reporting that “only one-fourth of 8th graders performed significantly above grade level in math.” Does that result surprise anyone?

Loveless noted in 2016 that:

Equating NAEP proficiency with grade level is bogus. Indeed, the validity of the achievement levels themselves is questionable. They immediately came under fire in reviews by the U.S. Government Accountability Office, the National Academy of Sciences, and the National Academy of Education. The National Academy of Sciences report was particularly scathing, labeling NAEP’s achievement levels as “fundamentally flawed.”

Loveless also stated:

The National Center for Education Statistics warns that federal law requires that NAEP achievement levels be used on a trial basis until the Commissioner of Education Statistics determines that the achievement levels are “reasonable, valid, and informative to the public.” As the NCES website states, “So far, no Commissioner has made such a determination, and the achievement levels remain in a trial status. The achievement levels should continue to be interpreted and used with caution.”


Confounding NAEP proficient with grade-level is uninformed. Designating NAEP proficient as the achievement benchmark for accountability systems is certainly not cautious use. If high school students are required to meet NAEP proficient to graduate from high school, large numbers will fail. If middle and elementary school students are forced to repeat grades because they fall short of a standard anchored to NAEP proficient, vast numbers will repeat grades. [emphasis added]

In 2009, Gerald Bracey, one of our nation’s foremost experts on educational assessment, stated:

In its prescriptive aspect, the NAEP reports the percentage of students reaching various achievement levels—Basic, Proficient, and Advanced. The achievement levels have been roundly criticized by many, including the U.S. Government Accounting Office (1993), the National Academy of Sciences (Pellegrino, Jones, & Mitchell, 1999); and the National Academy of Education (Shepard, 1993). These critiques point out that the methods for constructing the levels are flawed, that the levels demand unreasonably high performance, and that they yield results that are not corroborated by other measures.


In spite of the criticisms, the U.S. Department of Education permitted the flawed levels to be used until something better was developed. Unfortunately, no one has ever worked on developing anything better—perhaps because the apparently low student performance indicated by the small percentage of test-takers reaching Proficient has proven too politically useful to school critics.


For instance, education reformers and politicians have lamented that only about one-third of 8th graders read at the Proficient level. On the surface, this does seem awful. Yet, if students in other nations took the NAEP, only about one-third of them would also score Proficient—even in the nations scoring highest on international reading comparisons (Rothstein, Jacobsen, & Wilder, 2006).

Similarly, James Harvey, executive director of the National Superintendents Roundtable (he also helped write A Nation at Risk), noted:

The NAEP benchmarks might be more convincing if most students elsewhere could handily meet them. But that’s a hard case to make, judging by a 2007 analysis from Gary Phillips, former acting commissioner of NCES. Phillips set out to map NAEP benchmarks onto international assessments in science and mathematics.


Only Taipei and Singapore have a significantly higher percentage of “proficient” students in eighth grade science (by the NAEP benchmark) than the United States. In math, the average performance of eighth-grade students could be classified as “proficient” in [only] six jurisdictions: Singapore, Korea, Taipei, Hong Kong, Japan, and Flemish Belgium. It seems that when average results by jurisdiction place typical students at the NAEP proficient level, the jurisdictions involved are typically wealthy.

We can argue about whether the correct benchmark is Basic or whether we should be striving for Proficient, and we all can agree that more kids need more support to reach desired academic benchmarks. But let’s not pretend that ‘Proficient’ on NAEP aligns with most people’s common understanding of that term. We should be especially wary of those educational ‘reformers’ who use the NAEP Proficient benchmark to cudgel schools and educators.

3. Lest we think that these NAEP results are new and surprising, it should be noted that scores on NAEP already were stagnant. Achievement gaps already were widening. After nearly two decades of the No Child Left Behind Act and standards-based, testing-oriented educational reform – and almost 40 years after the A Nation at Risk report – the 2018 and 2019 NAEP results showed that the bifurcation of American student performance remained “stubbornly wide.” We continue to do the same things while expecting different results, instead of fundamentally rethinking how we do school.

4. The pundits already are chiming in on the 2022 NAEP results. They’re blaming overly-cautious superintendents and school boards, “woke” educators, teacher unions, parents, online learning, video games, social media, screen addiction, “kids these days who don’t want to work,” state governors, and anything else they can point a finger at. As I said yesterday, it’s fascinating how many people were prescient and omniscient during unprecedented times, when extremely challenging decisions needed to be made with little historical guidance, in an environment of conflicting opinions about what was right. Despite the massive swirl of disagreement about what should have occurred during the pandemic, many folks are righteously certain that they have the correct answer and everyone else is wrong. The lack of grace, understanding, and humility is staggering. 

Also, look again at the graph above. One way for journalists, commentators, and policymakers to frame those results is to call them ‘appalling.’ Another way is to say:

Scores are down but, even during a deadly global pandemic that shut down schools and traumatized families, the math and reading achievement of about two-thirds of our students stayed at grade level or above. How do we help the rest?

Always consider how an issue is framed and whose interests it serves to frame it that way (and why).

We can whirl ourselves into a tizzy of righteous finger-pointing, which is what many folks will do because it serves their agenda to do so. Or we can take this moment to fundamentally rethink how we do school.

I think that it’s unlikely that many states, schools, and communities will actually do this because of the fragility and brittleness of our school structures. But I’m pretty sure that the path forward is not simply doubling down on more math, reading, and testing, and it sure isn’t uncritically accepting NAEP results.

Your thoughts?

2022 10 23 mcleod tweet

Doing the same thing over and over again…

The Hechinger Report just published an article on how having teachers study student data doesn’t actually result in better student learning outcomes.

Think about that for a minute. That finding is pretty counterintuitive, right? For at least two decades now we have been asking teachers to take summative and formative data and analyze the heck out of them. We create data teams and data walls. We implement benchmarking assessments and professional learning communities (PLCs). We make graphs and charts and tables. We sort and rank students and we flag and color code their data… And yet, research study after research study confirms that all of it has no positive impact on student learning:

[Heather Hill, professor at the Harvard Graduate School of Education] “reviewed 23 student outcomes from 10 different data programs used in schools and found that the majority showed no benefits for students” . . . . Similarly, “another pair of researchers also reviewed studies on the use of data analysis in schools, much of which is produced by assessments throughout the school year, and reached the same conclusion. ‘Research does not show that using interim assessments improves student learning,’ said Susan Brookhart, professor emerita at Duquesne University and associate editor of the journal Applied Measurement in Education.” 

All of that time. All of that energy. All of that effort. Most of it for nothing. NOTHING.

No wonder the long-term reviews of standards-, testing-, and data-oriented educational policy and reform efforts have concluded that they are mostly a complete waste. We’re not closing gaps with other countries on international assessments. Instead, our own country’s achievement gaps are widening. The same patterns are occurring with our own national assessments here in the United States. Similarly, our efforts to ‘toughen’ teacher evaluations also show no positive impact on students. It’s all pointless. POINTLESS.

The past two decades have been incredibly maddening and demoralizing for millions of educators and students. And for what? NOTHING.

Are school administrators even paying attention? Or are they still leaning into outdated, unproductive paradigms of school reform?

This was the line in the article that really stood out for me:

Most commonly, teachers review or re-teach the topic the way they did the first time or they give a student a worksheet for more practice drills.

In other words, in school after school, across all of these different studies, our response to students who are struggling is to… do the same thing again. Good grief.

Make school different.


Here are some additional paragraphs from the Hill article:

Goertz and colleagues also observed that rather than dig into student misunderstandings, teachers often proposed non-mathematical reasons for students’ failure, then moved on. In other words, the teachers mostly didn’t seem to use student test-score data to deepen their understanding of how students learn, to think about what drives student misconceptions, or to modify instructional techniques.


Field notes from teacher data-team meetings suggest a heavy focus on “watch list” students—those predicted to barely pass or to fail the annual state reading assessment. Teachers reported on each student, celebrating learning gains or giving reasons for poor performance—a bad week at home, students’ failure to study, or poor test-taking skills. Occasionally, other teachers chimed in with advice about how to help a student over a reading trouble spot—for instance, helping students develop reading fluency by breaking down words or sorting words by long or short vowel sounds. But this focus on instruction proved fleeting, more about suggesting short-term tasks or activities than improving instruction as a whole.


Common goals for improving reading instruction, such as how to ask more complex questions or encourage students to use more evidence in their explanations, did not surface in these meetings. Rather, teachers focused on students’ progress or lack of it. That could result in extra attention for a watch-list student, to the individual student’s benefit, but it was unlikely to improve instruction or boost learning for the class as a whole.

I think my takeaways from all of this are that:

  1. As would be expected, analyzing student data alone doesn’t do much for us. We also need to have effective interventions.
  2. Despite our best intentions and rhetoric, the research indicates that most schools don’t actually engage in effective interventions.

So all of our data-driven, PLC, RTI, etc. work isn’t actually doing much for us, at least in terms of student learning outcomes. Learning gaps continue to persist. Teacher instruction isn’t changing. And so on…

Image credit: Wincing, Frédéric Poirot

The importance of social studies and information literacy

As someone who grew up in the Washington, D.C. suburbs and whose parents worked for the federal government, today’s events have been… challenging.

I think that what I will say here is:

  1. Policymakers, you know how you’ve minimized the importance of history, government, and civics in all of your education reform efforts over the past couple of decades? Yeah, that was probably a big mistake…
  2. Superintendents and principals, are you ready yet to pay more attention to information literacy throughout your P-12 curricula?

Mandates that are bad for kids

Tom Dunn said:

As a former school superintendent . . . . I felt perpetually conflicted about being forced to implement mandates that were, frankly, bad for kids. The irony is how often the very politicians who denounce bullying use their power to beat adults into submission with their ill-conceived laws. In education, they do this through threats of financial penalty against districts that dare disobey them, by threatening the professional licensure of educators who don’t do as they are told, and/or through character assassination of those who dare question them.

via Ohio’s Aggressive School Vouchers Set to Cripple Even High-Scoring Public Schools

2 questions about cheating, copying, and student ‘integrity’

We’re so quick to bemoan the lack of ethics in our students. They cheat. They copy. They take shortcuts on the work. We complain incessantly about their work ethic, their commitment to their classwork and homework, and their failure to find interest or meaning in the learning tasks we put before them.

Lost in these laments is any recognition that a vast amount of what we ask our students to do in school is indeed actually meaningless. From a life success standpoint. From a future relevance standpoint. From a ‘you can look this up in Google in 3 seconds so why am I spending days on this?’ standpoint. From a ‘why on earth would a [x]-year-old care about this at all?’ standpoint.


1. If we repeatedly put meaningless work in front of students – and, in turn, they repeatedly do whatever it takes to get that work out of the way as quickly as possible so they can get back to something more meaningful in their lives – whose ‘integrity’ is the real concern?

2. If our responses to the first question are along the lines of ‘we know better than they do what they need’ or ‘there are things students have to learn in this class (and that might mean we have to force students to do them),’ is that a sign of…  [select all that apply]

a) our keen judgment and ultimate wisdom as educators?

b) our arrogance?

c) our need for control?

d) our unwillingness to let children actually own their learning?

e) our complicity in the district, state, federal, and corporate curriculum / assessment machinery?

f) our own helplessness as educators?

g) something else?

Those in glass houses should not throw stones. – European proverb

Great marketing [or forced compliance] won’t be enough to boost sales of your junk product. – Seth Godin

Meaning is in the eye of the beholder.

Image credit: Scolding, Louis Ressel

The surveillance of our youth

Like many school districts, the Southeast Polk School District in Pleasant Hill, Iowa, monitors the Web usage of its students on district-provided computers for inappropriate activity. And like some school districts, Southeast Polk also uses a monitoring service that sends weekly emails to parents summarizing their students’ Internet search history. This raises some difficult issues because we know that young people need space away from the heavy thumb of adults for healthy identity formation and the development of self.

Why do teenagers go to the mall, or congregate at the park, or cruise the strip, or gravitate toward the online spaces where adults aren’t? Because they need spaces that are separate from us. Should we monitor every single book or online resource that our children read? Should we use biometric school lunch checkout systems so that we can see exactly what our children eat for lunch each day? Should we dig through our children’s belongings and rooms every morning after they leave for school to see if they’re doing something that they shouldn’t? Should we install RFID and GPS tags into our children’s clothing and backpacks so that we can track them in real time? Should we slap lifelogging cameras on our kids and review them every evening? Should we install keystroke logging software or monitor everything that youth search for on the Internet? Which of these makes you uncomfortable and which doesn’t?

We can think of numerous reasons why students might search the Internet for things that they don’t want their parents to know about, just like they talk daily about things that they don’t want their parents to know about. For instance, perhaps there is a gay boy who’s struggling to make sense of things but is not ready to come out to his family yet. Or a teenage girl with liberal politics in an ultraconservative family. Or a young couple that is pregnant and searching for information and options before they tell their parents. Or a teen who’s in a spat with a peer but doesn’t want clueless adults stepping in and creating more drama. Or any teen or tween with normal adolescent concerns who just needs some information, resources, or nonlocal empathy and connection. Do these students deserve some space? Do they deserve a presumption of privacy? Or should they immediately and automatically be outed by school software?

danah boyd asks some important questions about youth privacy, including “Who has the right to monitor youth?” and “Which actors continue to assert power over youth?” She also notes that:

Just because teens’ content is publicly accessible does not mean that it is intended for universal audiences nor does it mean that the onlooker understands what they see. . . . How do we leverage the visibility of online content to see and hear youth in a healthy way? How do we use the technologies that we have to protect them rather than focusing on punishing them? . . . How do we create eyes on the digital street? How do we do so in a way that’s not creepy?

Similarly, First Monday notes:

The right to privacy is stipulated in Article 12 of the Universal Declaration of Human Rights [and] Article 17 of the International Covenant on Civil and Political Rights, as well as numerous international and regional human rights treaties and conventions [and has been found to be a protected Constitutional right by the U.S. Supreme Court]. The right to privacy essentially protects the integrity of the individual and his or her home, family, and correspondence. A common denominator for the different areas of privacy is access control: thus control over what others know about us; control over private decisions and actions; and control over a physical space. The right to privacy builds on the presumption that a zone of autonomy around the individual is central to individual freedom and self-determination.

Should school districts be complicit in the hypersurveillance of our young people? What messages do we send our students when we monitor their every action and send out weekly reports? Are we creating digital social graphs for our children and then placing them in the hands of commercial vendors? Are we intentionally instituting oppositional and distrustful stances against our own students? Are we fostering the creation of graduates who will shrug at the infringement of their civil liberties as adults because their families and educators have done so for years?

I wonder if there’s an opt-out for families that don’t want to Big Brother or helicopter parent their children…

Image credit: Big Brother is watching you, Photon

Be proud of your pockets of innovation. AND…

Every school system has pockets of innovation. Those three forward-thinking teachers in the elementary school, that one grade-level team in the middle school, the department that’s really trying to do something different at the high school, that amazing principal over there, and so on. As school leaders we’re proud of – and point to – that cutting-edge work, and rightfully so.

But we also have to recognize that pockets of innovation mean that inequities exist. What if you’re a student who doesn’t have one of those forward-thinking elementary teachers, who isn’t on that middle school team, who has minimal exposure to that innovative high school department, or who doesn’t attend that principal’s building? You’re out of luck.

We always will have educators who are ahead of others. That’s inevitable. What’s not inevitable is our lack of a plan to scale desired innovations. What’s not inevitable is our lack of a guaranteed and viable curriculum that strives for every student to accomplish more than mastery of factual recall and procedural regurgitation. If we want our pockets of innovation to ever be more than just pockets, we have to intentionally and purposefully scaffold, design, and support efforts to move the entire system to something greater. We also have to be smart about the design choices that we make. For instance, that intervention / remediation / extension time block that you created in your school schedule? During that time, who suffers through low-level thinking work in order to ‘catch up’ and who’s building robots or rockets? The very mechanisms that we create to close achievement gaps often intensify life success gaps.

Who in your schools gets to become future-ready and who doesn’t? Are you remedying traditional inequities or exacerbating them? What’s your plan to scale your innovations so that every student has opportunities to be prepared for life success, not just a few?

Image credit: Pockets, Astera Schneeweisz

It’s 2018, not 1918. Basic skills are not enough.

Here is a quote from Kentucky Education Commissioner Dr. Wayne Lewis (who is a friend of mine). The context is a statewide conversation about higher education standards for Kentucky high school graduates.

On Tuesday, Lewis said the current system already penalizes students by not actually preparing them for success.

“When we give them a diploma without ensuring that they have basic skills and they go to post-secondary education and they hit a brick wall – when they get into those English and math gateway courses, when they don’t have the necessary basic skills or preparation to get a job and take care of themselves,” he said. “Those kids are held accountable right now.”

Compare this with the following quote from Dr. Marc Tucker, outgoing CEO of the National Center on Education and the Economy:

The jobs that were lost to globalization are not coming back. They are being automated. American manufacturing is actually doing very well, but much of the manufacturing work that was done by people a few years ago is now being done by machines. The same thing is true of mining and steel making. The jobs of gas station attendants were automated years ago. The jobs of retail clerks are ebbing fast. Even the jobs in Amazon’s warehouses are being automated. AI-powered systems are doing legal research, diagnosing cancer, writing music, serving as network newscasters, and doing surgery.

The thing that unites the “left behind,” whether they are rural whites in communities with boarded-up storefronts and peeling paint on their homes or urban African-Americans without jobs or any prospect of getting them, is lack of the kind of education and skills that employers are willing to pay decent wages for. . . . The difference between the young people that Facebook is hiring at $140,000 per year for their first jobs and the UBER drivers in the same cities for $10 an hour is their education and skill levels.

With due respect to Wayne, I think Tucker is right. Basic skills aren’t enough these days for many/most American high school graduates to succeed in postsecondary education and/or ‘get a job and take care of themselves.’ Basic skills are necessary but insufficient. If we don’t frame future readiness and life success as more than basic skills, we’re doing our students and graduates a grave disservice. As Tucker notes,

What unites the first phase of globalization with the second phase of globalization is the fact that, whether the work is manufacturing or services, whether it is highly skilled or low-skill work, the employer can look for people with the requisite skills anywhere. Whatever your skill level, you are now in competition with people all over the world who have similar skills and who are willing to work for less.

That is bad news for Americans because we charge a lot for our labor. That is especially true for our low-skill and semi-skilled people – people who have basic literacy, but little more. Many nations that were largely illiterate in the 1970s have now built education systems that are capable of producing levels of basic literacy equal to those in the United States, and those newly literate people are now competing directly with the workers in the United States who have only basic literacy, which is roughly half of our workforce. The cruel fact is that our low- and semi-skilled workers – roughly half of our workforce – are very high priced in the global market for labor. That is why their real wages have not gone up in decades. They are a commodity, and the price they charge at the minimum wage level for that commodity is more than they are worth on the global market.

Neither state nor federal policymakers can change that fact. And it is that fact, not unfair trade practices, that is leading ultimately to the kind of anger and despair that is corroding our politics. I refer here not only to the anger and despair of rural and urban working-class whites, but also to the despair of inner-city African-Americans and many Latinos who are also trapped by the dynamics I have just described.

We must have a bigger vision for our graduates than basic skills. And we need to stop using this term as if it were enough.

SIDE NOTE: While we’re at it, we also know that 3rd grade retention is one of the dumbest things we can do in school. As a researcher and former university educational leadership faculty member, Wayne should know better than this.

Image credit: N.Y. schools opening, Library of Congress