Kids These Days

posted in: Race and Gender

“Kids these days….” The phrase doesn’t need to be completed and usually isn’t. But the meaning is clear. Children have disappointed their parents. Or, if not their own, other parents’ kids have, with their ignorance, rudeness, and general misbehavior. The older generation too easily believes that things were different in their youth. Their parents would never have allowed them to get away with x. They could never have said y without a rebuke too withering to contemplate. Fear kept them–and their entire generation, apparently–in line. The evidence, as presented in Steven Mintz’s Huck’s Raft: A History of American Childhood (2004), does not bear out this belief.

Science has taught us that memory is fallible. It is a creative process, not a passive one; more story-telling than video recording. Our memories of childhood are especially unreliable. Mintz’s book shows that American childhood has always been hard; parenting has always been mostly about muddling through. The general process of “civilizing the barbarians,” in Thomas Sowell‘s memorable phrasing, may not change, but the particulars do, with each child, family, and historical moment. The ebbs and flows of these particulars are the subject of this post, in six areas of interest: parenting, work and play, school, girls and boys, marriage, and the so-called generation gap.


Cult of Domesticity

The story of parenting in America is not one of unbroken progress toward more enlightened views of childrearing. Still, we have come a long way, baby, since the Puritans. For our New England ancestors, children were miniature adults, fully capable of sin but lacking the adult’s ability to resist temptation. Mintz cites Cotton Mather declaiming, “Are they Young? Yet the Devil has been with them already… They go astray as soon as they are born. They no sooner step than stray, they no sooner lisp than they ly.” [11] Far from shielding children from life’s horrors, Puritans made those horrors central to their children’s curriculum. At least John Norris did when he counselled children to “be much in contemplation of the last four things, Heaven, Hell, Death and Judgment. Place yourself frequently on your deathbeds, in your Coffins, and in your Graves. Act over frequently in your Minds, the solemnity of your own funerals; and entertain your Imaginations with all the lively scenes of Mortality.” [20]

Interestingly, Anne Bradstreet‘s words written in the seventeenth century sound almost modern by comparison: “Diverse children have their different natures; some are like flesh which nothing but salt will keep from putrefaction; some again like tender fruits which are best preserved with sugar; those parents are wise that can fit their nurture according to the Nature.” [19]

For all that we Americans look to our Puritan heritage, where childrearing is concerned we would do better to consider the Quakers of Pennsylvania. In them we find the birth of what Mintz calls the Private Family, “bound by common ties of affection.” [49] The intellectual ferment of the eighteenth century was upending the traditional childrearing model in any case. Enlightenment writers were repudiating coercive parenting and urging a more nurturing model based on teaching by example and the cultivation of a child’s natural talents. By 1800 the groundwork had been laid for the romantic conception of childhood as a precious time of development and children themselves as a precious gift–at least among the growing middle class. [51]

Accompanying the changes, of course, we hear critical comments that have resounded from various quarters over the last two centuries. Mintz cites an early republic clergyman grousing, “Fathers, mothers, sons & daughters, young & old, all mix together, & talk & joke alike so that you cannot discover any distinction made or any respect shewn one more than to the another.” [74] Two hundred pages and a hundred and fifty years later in his narrative, Mintz cites a sociologist expressing similar qualms about what he called America’s “filiarchy.”  “It is the children who set the basic design. Their friendships are translated into the mother’s friendships, and these, in turn, to the family’s.” [277] Continuity as well as change.

The Romantic view of childhood grew alongside the American middle class, and the protected childhood became the ideal for almost two centuries, at least for those who did not rely on their children’s labor for their well-being. The ideal reached its apotheosis in the mid-20th century when middle class status was a fact of life for a majority of Americans. At about the same time, it became more vulnerable to critique from a growing class of professional psychologists. “For every problem child is a problem parent,” became a common trope. Smothering mothers, domineering mothers, weak fathers, absent fathers, permissive parenting: all came under scrutiny.

The 1970s brought the beginning of the end of the protected childhood as dominant ideal. Stagflation, gas lines, “malaise,” and, in 1983, the publication of A Nation At Risk, paved the way for the prepared childhood to take its place. Mintz’s book, published in 2004, ends with a discussion of school shootings, but does not consider changes that have occurred in the wake of the War on Terror, the Great Recession, the iPhone, social media, deepening political polarization, or the pandemic. We do know that children’s literature authors and publishers are tackling weightier social and psychological issues, A Series of Unfortunate Events (1999) being one possible marker for the beginning of a trend. We still protect our children from adult concerns, but we are more willing–even driven–to be frank about them in a way that the Puritans might recognize.


Work and Play

Before the middle class Private Family, Mintz avers, there were three family structures in colonial America. By far the most common was the Farm/Artisanal Family, which included servants, apprentices, and possibly other wage laborers sharing a roof with parents and children. Children in this familial “unit of production” lent their labor as much as they were able. Work, what schooling could be had, and simple play were the three legs of their stool of life. Puritans may have been dismayed by their children’s “inordinate love of play” [19], but other immigrant groups had no such qualms.

Whatever happened to the apprentice system we all read about in Johnny Tremain? Mintz teaches us that the Revolution itself had a great deal to do with its demise. The revolutionary ideology–distrust of (patriarchal) authority and a jealous regard for liberty–made fewer young men willing to sell their service to a possibly tyrannical master for seven years. (See: Benjamin Franklin.) Economic factors contributed, too, but the end result was a steep drop in (white, male) youth entering the skilled trades. Instead, they became assistants and errand boys, or entered a life of factory work. Or, those who could afford it stayed in school longer with the expectation of a clerkship or management job on the other side. The experience of work and play was diverging for children of the middle class and those of the working poor.


Middle class children were the beneficiaries of protected childhood, as discussed above. Yet Mintz argues that freedom from work came with a loss of personal freedom. Farm and urban working class children had less privacy in the home but more freedom to roam outside of it. They had more responsibility to help support the family but less parental intrusiveness within it. Among the working poor, Mintz informs us, children were often responsible for bringing in 20% or more of the family income. We learn that many wage labor jobs were seasonal and, into the twentieth century, fathers might be unemployed an average of three months of the year. [136, 204] Post-emancipation African American children’s labor was possibly their parents’ only asset, forty acres and a mule notwithstanding. They were put to work, sometimes contracting with a former planter in slave-like conditions.

Mintz cites a few memories of former farm children. They seem to speak to a recognizable truth: growing up on a pioneer farm was lonely and hard. Hamlin Garland’s family moved around the plains in very much the random pattern of the Ingalls family. Garland uses a tone not unknown in Laura Ingalls Wilder’s writing but, here at least, without a hint of nostalgia: “It was lonely work…. There is a certain pathos in the sight of that small boy tugging and kicking at the stubborn turf in the effort to free his plow. Such misfortunes loom large in a lad’s horizon.” [151]

We (I!) tend to think of child labor laws coming out of the Progressive Era. In fact, the (minimally restrictive) Keating-Owen Act of 1916 was struck down by the Supreme Court after just two hundred and seventy-three days in operation and was followed by Congressional dithering throughout the 1920s. It took the Depression and the Fair Labor Standards Act of 1938 to outlaw child labor below age sixteen. There weren’t enough jobs for grown men, so it couldn’t have been too hard a sell to American businessmen. Children stayed in school longer, and the high school population grew through the 1930s. For the first time, a majority of American seventeen-year-olds would graduate from high school. The trend would abruptly reverse during the war years when two million more teenagers entered the labor force and more than a million left high schools. [259]

In 1953 twenty-nine percent of sixteen- and seventeen-year-old boys and eighteen percent of girls held down jobs. By 2004 these percentages had risen to forty-four and forty-three, respectively. But Mintz points out this back-to-work trend is largely a middle class phenomenon. Mintz cites a figure of forty to sixty percent youth unemployment in poor urban neighborhoods. [353] This flips the situation of a hundred years ago (of Eddie Rickenbacker and his dozen jobs by age 16) on its head. (Sort of.) Today’s middle class kids work, not because they have to supplement their family’s income, but for spending money (OK, Eddie’s mama would sometimes allow him a quarter to spend a Saturday at Olentangy Park), for big ticket items like cars or college, and for the work experience they need to sell themselves to future employers. Too few of today’s urban poor benefit from any of this. In today’s world, the rich keep getting richer.


School

The Puritans were not our ideal of sensitive parents, but they were progressive childrearers in at least two ways. First, Mintz tells us, they were among the first societies to criminalize child abuse. Second, they made the establishment of schools a legal requirement. If children were to be saved from perdition, they had to be educated. If the colony was to survive as a community of believers, education was paramount. Though the requirement was not universally enforced, the foundation of American public schooling had been laid.


In the early years of the Republic, only about a quarter of children, ages five to nineteen, attended school. (I don’t believe this datum includes African American children, for whom schooling was actively forbidden.) But by mid-century the fraction had risen to one-half, though regional differences could be pronounced. The South was still under fifty percent attendance at the end of the century. By the turn of the twentieth century, the public school had become the great assimilator for the rising tide of immigrant children flooding American urban centers, especially. Some New York City teachers faced classes of sixty or seventy children, many of them speaking little or no English. Mintz says one school had to find classroom placements for a hundred and twenty-five new students admitted on a single day in 1905.

High school took much longer to become a normative experience for American children. In 1915, only about twenty percent of teenagers attended high school, more of them girls than boys. These secondary schools were already becoming more adult-controlled than they had been. In the nineteenth century, sports teams, debating clubs, and school newspapers had been largely student organized and run. In the early twentieth century, high school students began trading autonomy for better facilities and coaches, a trend that would never be reversed. High schools reached half of American teenagers beginning in 1928, though Mintz stresses that it wasn’t until the Depression years that high school became truly normative for American adolescents. The dramatic rise of the American high school is made clear by this datum provided by Mintz: Between 1900 and 1930 about 11,000 new high schools were opened, an average of one every day of the year for three decades. Just as quickly, the transformation was (temporarily) reversed by the country’s entry into the war. A million and a quarter teenagers dropped out of school and entered the labor force. [175, 199]

Today we are used to perennial hand-wringing over the state of American public schools. Life magazine’s lament from the 1950s sounds like it could have been written last year: “United States high school students are…ignorant of things [elementary] school students would have known a generation ago.” The complaints and recriminations go back many decades, but not indefinitely. Mintz made me think our goals for education were much more modest in the first part of the twentieth century. A 1918 report on secondary schools contended that practical and vocational training were all that was needed for the great bulk of high school students. A consensus toward tracked secondary schooling prevailed through the war years. Even after Sputnik, the Conant Report of 1959 essentially endorsed a meritocratic, tracked basis for the American high school, along with other, more novel recommendations.

The eighty percent might have been deemed unfit for post-secondary education, yet college attendance quadrupled in the years between 1946 and 1970. Here is the genesis of some of our educational woes in the United States. We fail at two ends, doing both things badly. We have made college and college-prep the one track to get ahead, “the surest ticket to the middle class,” in President Obama’s words. At the same time, we track our students into those we believe can succeed and those we believe cannot but pretend otherwise. We beat the latter down with twelve years of schooling, rubbing their noses in the fact that they are the losers in the lottery of school ability, completely overlooking the skills and talents they have that might not include the analysis of Shakespearean plays. We have to provide multiple viable paths through adolescence to adulthood and the world of work.

Girls and Boys

Early American infants and toddlers were not distinguished by sex. Both boys and girls wore long tresses and ankle-length dresses. It wasn’t until the 1920s, Mintz says, that babies were color-coded at birth: blue for boys and white for girls. (I’m not sure if he says when white was replaced by pink. The 1940s?)

In the colonial era, girls were given limited access to education. They were taught to read, but not even a third of women, in 1700, could write their own names.


As parenting customs liberalized in the next century and more (see: “Parenting” above), many girls enjoyed carefree childhoods but could look forward only to a straitened and staid motherhood after marriage. In the early 1800s, when the industrial revolution forced families to send their daughters to work for wages, some of those girls found a certain liberation being on their own among peers, even in the rigid structure and long hours of a textile mill. At least some girls of all classes began to speak of the “marriage crisis.” Would they or would they not marry? Mintz quotes a brief diary entry that speaks poignantly to the quandary girls faced: “And now these pages must come to a close, for the romance ends when the heroine marries.” [84] Frances Willard, future founder of the Women’s Christian Temperance Union, described her passage from girl- to womanhood no less affectingly: “My ‘back’ hair is twisted up like a corkscrew: I carry eighteen hairpins; my head aches miserably; my feet are entangled in the skirt of my hateful new gown. I can never jump over a fence again, so long as I live.” [86]

Meanwhile, Mintz says, boys’ education stressed “physicality, dirt, and violence.” Even so, the American male was so poorly trained in marksmanship that the National Rifle Association was founded after the Civil War to ensure the men of the future would never again be caught unprepared for war. As the Frontier was declared closed in the 1890s, many American men (See: Theodore Roosevelt) wrung their hands over what the loss would mean for the education of the American boy. Athletics arose to allow boys to prove themselves on less deadly fields of “battle.” The Boy Scouts was established in 1910 to teach the wilderness skills that had been lost by the previous generation. Even so, some Americans continued to worry that boys were being “feminized,” according to Mintz. Young men were no better prepared physically and mentally for the armed forces in 1941 than they were in 1917 or 1861. More than five million prospective enlistees were rejected for service in World War II.

If the possibilities for women remained largely circumscribed through the 1800s, those for girls opened new vistas, at least in the literary imagination. In the flowering of children’s literature that closed the century, girl heroines were spunky and bold, but also ingenuous and without guile. Their power, Mintz says, was their ability to redeem curmudgeonly men and women. The theme continued well into the twentieth century. Think: Marilla Cuthbert in Anne of Green Gables. Think, too, of the classic roles of Mary Pickford on the silver screen. The reality for girls was almost certainly less exciting. Mintz cites a thirteen-year-old’s diary entry from 1890: “We cannot do anything in this house[;] as soon as we start to have any fun we are stopped. …It seems as though we are kept in a glass case.” [223]

Girlish innocence gave way to a certain decadence in the 1920s. Girls and young women became more active in asserting their independence, notably by drinking and smoking. Mintz points out that many of the trends we associate with the Roaring Twenties had begun twenty years earlier: short hair, slim figure, even the term flapper. The 1920s brought a new independence but also new tyrannies to live under. Many would argue the same for our girls in our own era. Mintz’s book comes too early to address the effects of social media, especially on girls’ mental health. Nor does he address the new focus on boys’ special developmental needs that has cropped up among both male and female cultural observers, or the yet more recent awareness of transgender children and teens.


Courtship and Marriage

The median age of marriage has risen and fallen throughout American history. In Puritan New England, a religious community trying to survive in a wilderness, women typically married at about twenty, five years earlier than their compatriots back in England. As a result, they bore more children, four or five more than they would have had they not emigrated. The average age of marriage rose steadily through 1910, when it passed twenty-four years of age. At the same time, the fertility rate fell from eight to two. World War II initiated a steep decline in marriage age when young couples rushed to the altar before the men were deployed overseas. The trend continued through the 1950s and into the 1960s, accompanied by the Baby Boom of lore when fertility approached four births per woman (and by a boom in divorces in the 1960s and 1970s). Since then, the age of first marriage has risen steeply, and the birth rate dropped quickly to a level below replacement where it has fluctuated for the past forty years.

Like flapper, dating was a term coined in the 1890s that came into full usage in the 1920s, enabled especially by the automobile. Unlike Victorian courtship, in which a male caller visited a young woman in her parents’ parlor, dating meant leaving home, unchaperoned, usually to a commercial amusement of some kind (moving pictures, the town fair, a vaudeville show), paid for by the young man. Dating, Mintz says, introduced a new double standard, in which young men expected sexual favors (often just kissing!) in return for their financial investment and young women were responsible for setting the limits on how far sexual intimacy advanced.

Going steady was a new thing in post-war America. The way Mintz tells it, it was a fraught concept. Parents scorned it. Children were probably ambivalent. By then, they could expect to be married shortly after their twentieth birthday, if not before, so dating held a “special urgency,” according to Mintz. [286] In that light, going steady was akin to a trial run, testing the waters of an impending monogamy. I can only imagine this would have led others to shun the practice, just as their parents wanted them to.

Dating went on the endangered species list by 1970, and was more or less extinct by the 1980s. In its place: hanging out and hooking up. The age of first sexual activity dropped sharply through the 1970s but has risen ever since. Mintz’s book was published too early to discuss the role of texting in children’s courtship behaviors, or remark on the overall drop in sexual activity among the young.


Generation Gap

I come to the raison d’être of this entire post. Adults decrying children is a story as old as time, to borrow Mrs. Potts’ memorable phrase. Here, Ezekiel Rogers in 1657: “I find the Greatest Trouble and Grief about the Rising Generation. Young People are stirred here; but they strengthen one another in Evil, by Example, by Counsel.” [25] There, the Atlantic Monthly in 1865: “What shall we do with our children? …The Slaveholder’s Rebellion is put down; but how shall we deal with the never-ceasing revolt of the new generation against the old? And how to keep our Young American under the thumb of his mother and father without breaking his spirit?” [186]

But Mintz doesn’t discern a true split in the interests of the young and their elders until the mid-twentieth century when youth became its own demographic for commercial marketers. A discomfiting thought, that. We saw above how parents were increasingly concerned to provide a nurturing environment for their children and give them the material comforts denied to themselves in the Depression years. Spaces for youth (drive-in theaters, roller rinks, diners), time to hang out in them, and youth products (toys, TV shows, and records) gave youth their own culture for the first time. We have never looked back.

National print media, also on the rise, took note and editorialized. “Are We Trapped in a Child-Centered World?” blared a Newsweek headline in the 1950s. “Is the Younger Generation Soft and Spoiled?” trumpeted another. [315-316] It wasn’t just that this latest crop of American youngsters was overindulged; they were unruly. “Let’s Face It: Our Teenagers Are Out of Control.” [291] Juvenile delinquency and gang violence became common concerns in the allegedly staid 1950s. In fact, the number of gangs in cities “exploded,” according to Mintz, and their activities became more racially charged. There is a reason West Side Story was produced in 1957. Yet Mintz also makes clear that evidence of urban youth gangs goes back to 1807, and the gangs of the early twentieth century were also numerous and virulent.

Juvenile delinquency has had its ups and downs, as well. In the wake of the Civil War it rose steeply, as orphaned children often roamed the streets, uncared for. Eddie Rickenbacker bemoaned the rise of delinquency beginning in the war years. In 1944, he noted a thirty percent increase in youth crime since Pearl Harbor. Six years later he called the current trend “the greatest crime wave of young people in [American] history.” But Mintz cautions that the apparent surge in juvenile delinquency is as likely a result of an increased number of psychologists and social workers, as well as their heightened attention to such problems. Besides, Captain Eddie would have done well to recall his own Horsehead Gang from turn-of-the-century Columbus. Breaking streetlamps up and down Miller Avenue definitely counts as delinquency.

And we haven’t even talked about the Sixties yet.


Jean Twenge, professor of psychology at San Diego State University, has built a career on the study of twenty-first century youth and the young adults they are becoming. The generation she describes is less sexually active, less into drugs, and less intent on asserting independence than their Baby Boom or Gen X parents. They are also more programmed, more goal oriented, and more comfortable with adults. In short, they are far from the little hellions we Boomers and Gen-Xers were when growing up. Even so, their parents have undoubtedly uttered those time-honored words at least once in their upbringing: “Kids these days….”



A (Perennial) Conflict of Visions

posted in: Political Economy

Thomas Sowell published A Conflict of Visions thirty-five years ago, the first volume in a trilogy on the nature of political struggle. It was received with a yawn by the establishment, at least. Sowell seemed to speak for no one but himself (and the Hoover Institution that employed him), so he could be safely ignored. In 1987, the forward progress of the Civil Rights movement had been halted. Reagan’s two elections signified an end to New Deal/Great Society government, but Newt Gingrich Republicans had yet to take control of Congress. The liberal order, decades in ascendance, was wounded but not yet felled. Certainly not in the academy, where Sowell had tried to make his mark before seeking refuge within the friendlier halls of Hoover.

I read Sowell’s book, in part, as an apologia for his apostasy. Here was a black man in America unwilling to travel the path of Civil Rights orthodoxy–and he paid for it with a kind of excommunication. He had a different vision of the way the world worked, and what freedom and equality looked like. It was a vision, he showed in his book, that was part of a long and respected tradition, just as the more liberal vision of his detractors was. The strength of the book resides in its unwillingness to assert the truth of one vision over the other. For that reason, if for no other, A Conflict of Visions is a book for our time of “political polarization.” The most strident voices today speak as if the current situation is “unprecedented” and the threat of opposition victory is “existential”–as if the present crisis weren’t always the most important to be faced, at least by those living through it.


Still, as I settled into the book, I was concerned it might be foisting a false dichotomy on the reader, an oversimplification of the world into two opposing views into which all disagreements fall. Sowell calls them the constrained and the unconstrained visions and anticipates this concern by addressing it directly early in the book. “Virtually no one believes that man is 100 percent unconstrained and virtually no one believes that man is 100 percent constrained.” [39] He revisits the concern during the body of the work and again in the concluding chapter: “In much the same way, believers in an unconstrained vision do not deny that man has any limitations. They simply do not treat these limitations as decisive in theories of social phenomena….” [214] Sowell seems primarily an idealist–he is dealing in ideas, after all–but his inclinations are balanced by a consistent realism. He uses a dichotomy of two visions to understand the world, yet is mindful to avoid either/or thinking.

But what does Sowell mean by constrained and unconstrained visions? First, visions. A vision is neither a theory nor an ideology, he says. It precedes both, precedes conscious argument. He says it is first and foremost a “sense of causation” and “more like a hunch or a ‘gut feeling’ than it is like an exercise in logic or factual verification.” [16] The constrained vision finds causation in human nature, incentives, and social processes. The unconstrained vision locates it in reason, knowledge, and human perfectibility. It is essentially a dichotomy of conservative vs. liberal: a “nasty, brutish, and short” view of the world vs. a “noble savage” view. But Sowell takes pains throughout the text to show that such oversimplification is misplaced. The world is not so neat. A vision is a model, after all, which, by definition, “must leave many important phenomena unexplained….” [15]

Nevertheless, knowing Sowell shares a constrained vision with the likes of Adam Smith, Alexander Hamilton, and Friedrich Hayek, I couldn’t help asking myself throughout, Is he stacking the deck? Is he putting the unconstrained vision of the likes of William Godwin, G. B. Shaw, and Ronald Dworkin in the least attractive light? Some of the citations from the unconstrained camp–Godwin’s, in particular–seemed starry-eyed and woefully misguided to a steely-eyed realist like myself. Was that intentional? Actually, the razor cut both ways. Sowell quotes Smith saying, “The peace and order of society is of more importance than even the relief of the miserable.” [35] Besides sounding cold-hearted, it is no more self-evident or provable than the unconstrained quotation that accompanies it: “[Revolutionary chaos and violence] is the price we pay for freedom.”


And yet, the book confirmed my self-conception as a small-c conservative, while giving me a new way of understanding my thinking. Instead of a word that comes with baggage and requires the lower-case qualification, Sowell’s term is tied to a long tradition and has more explanatory power. I have developed a constrained vision (though he says a vision is pre-rational, so has my predisposition always been there?) because of my understanding of human nature as fixed (evolving, perhaps, but at the glacial pace of evolution). For years, in political discussions (often in my own head) I have asked my interlocutor, But where are the incentives? Though I don’t use Smith’s term in my discourse, I fundamentally believe that humans act in their self-interest. I believe their actions have as many, if not more, unintended as intended consequences. The intention to do good does not equate with doing unalloyed good. According to Sowell’s index, unintended consequences appears on seven pages. Incentives is found on seventeen. Human nature appears or is discussed on thirty-two. I surely hold a constrained vision of the world.


Yet there were times, while reading the book, when I felt myself not fully within the camp. Though I feel affinity for the constrained vision’s epistemic humility–another phrase I have thought or uttered consistently in recent years, though it is not used by Sowell–I found I resisted the conclusions those assumptions implied. I can follow Hayek when he objects to what we might today call social justice warriors: “The particulars of a spontaneous order cannot be just or unjust [when] the results are not intended or foreseen in their totality by anybody.” [196] Radicals today, as ever, argue as if there were a conspiracy of “them” for a very specific result. This makes no more sense to me than it did to Hayek, yet from that premise I cannot conclude that we cannot or should not try for certain democratically agreed-upon results, using democratically agreed-upon policies. Government exists for just such purposes. (I am reminded of Alexander Hamilton, a paragon of constrained thinking in Sowell’s book who was also the Founding Fathers’ greatest advocate for a strong executive.) “A mastery of social details” is not, as Hayek would have it, “inherently ‘beyond our ken.'” At least, we cannot give up trying, with appropriate humility and a pragmatist’s reliance on trial and error.

John Maynard Keynes came to mind, too. He wrote, somewhat cheekily, to Friedrich Hayek after the publication and popular reception of the Austrian economist's The Road to Serfdom:

You agree that the line has to be drawn somewhere, and that the logical extreme is not possible. But you give us no guidance whatever as to where to draw it. …you are, on your own argument, done for, since you are trying to persuade us that so soon as one moves an inch in the planned direction you are necessarily launched on the slippery slope….

We must be mindful of the limits of our knowledge and the boundlessness of unintended consequences, but we must not use that outlook–that vision–as a dogmatism to tie our hands. We should be tackling issues of childcare and preschool, healthcare and climate change, just not all at once in spitball, see-what-sticks fashion. Build Back Better is a travesty to those of the constrained vision.


In its epistemological skepticism, the constrained vision also puts little faith in the ability of individual knowledge to save humanity, as it were. Wisdom and truth come from "the experience of the many, rather than the articulation of the few." [152] "Solutions" to old problems create new ones. (See: Steve Jobs and the iPhone.) They imply trade-offs. I have said as much many times in recent years. Yet the deep systemic knowledge Sowell describes, sometimes called tradition, has never been completely reliable. There is a reason new generations react against it, creating new syntheses, new "traditions." I guess that is Sowell's point about society-wide knowledge. Science, too, is a social process. Albert Einstein is not as important as the enterprise he was a part of. And yet, who is to deny that Einstein (and other great scientific minds) are crucial in advancing our society, for good and bad? Godwin may be naïve in his faith in the power of reason: "Truth, and above all political truth, is not hard of acquisition," requiring merely "independent and impartial discussion." Yet Edmund Burke protests too insistently when he contends, "We know that we have made no discoveries, and we think that no discoveries are to be made in morality; nor many in the great principles of government, nor in the idea of liberty…." [72, 78]

Sowell rarely uses the term, but I kept thinking of the struggle over the value of elites in today's political climate. Today's Trumpians would surely endorse Hobbes's sentiments, if with updated terminology and orthography: "A plain husband-man is more Prudent in the affaires of his own house, than a Privy Counselor in the affaires of other men." [66] The truth of the statement is undeniable on its face, though not so much when used rhetorically. The Privy Counselor (read: senator, representative, assemblyman, council member) has it as her task to attempt to know the affairs of as many of her constituents and constituent groups as she can. We want her to succeed in this task, while recognizing that it will always remain substantially outside her grasp. (And, yes, that some politicians will be influenced by venal motives to begin with.) I, too, doubt that highly educated, cultivated, civilized citizens–self-described or otherwise–are sufficient to make heaven on earth, as it were. But I do think they are necessary. The highly skilled and knowledgeable in all domains–elites–are desperately needed. They need to be respected, if not worshipped on pedestals. They also need to respect the limits of their expertise. (The case of Hamilton is again apropos. He was, according to Sowell, "suspicious of skilled articulation, which could be 'mere painting and exaggeration.'" Yet it took one to know one: he was the Founding Fathers' greatest and most prolific rhetorician.) [65]

In fact, Sowell grants that not all visions–or holders of visions–fall neatly into one side or the other of his dichotomy, as he has defined it. Marxism takes a highly constrained view of the world until, following the dictatorship of the proletariat, it adopts a highly unconstrained one. John Stuart Mill's thinking was even more flexibly hybrid. Thomas Malthus's views were so constrained he was anathema to men such as Godwin and Condorcet. Yet, in the 1980s, when Sowell was writing, environmentalists arguing from within their unconstrained vision found much in Malthus to utilize. American supporters of Soviet communism abandoned its vision at once when the Molotov-Ribbentrop Pact with Nazi Germany was announced. New Deal liberalism, a product of the unconstrained vision, was sometimes defended in later years with a more constrained-vision justification: The welfare state is "here to stay." [117] Nor are advocates of the constrained vision blindly in favor of the status quo. Smith, Burke, and Hamilton were all outspoken opponents of slavery and supporters of colonial independence.


The elephant in the room, as it were, is the behavioral economics revolution, which has occurred mostly in the years since the publication of A Conflict of Visions. Often, as I read, I would sense Sowell nosing up to the idea of confirmation bias without naming it, the term not yet being in common currency: "Evidence for or against one's own vision can be weighed differently, and being convinced is ultimately a subjective process." [206] Sowell's overall exposition is extremely logical and language dependent (as in the unconstrained vision!). His dependence on citations from economists and social thinkers relies on the "articulation of the few" (as in the unconstrained vision!). There are no psycho-social experimental studies, as from a Daniel Kahneman or a Richard Thaler. Yet A Conflict of Visions is entirely about pre-rational thinking, for that is Sowell's very definition of visions. He may not have won a Nobel prize, but he does provide a great service by tying all of us into a larger dialogue that has been going on for at least two centuries.

And this is the crux of Sowell's book. It exhorts us to respect the motives and sincerity of our political opponents. Liberals and conservatives can both have strong moral senses, even if they disagree about how to achieve moral ends, or even what those ends should be. It reminds us, too, that we can be guilty of ignoring inconvenient facts, of constructing intellectual Rube Goldberg devices in an effort to maintain our convictions. On the other hand, Sowell makes clear that no vision can fit the facts completely. Contradictions will always arise. "Efforts to adjust and modify visions to accommodate discordant evidence are not inherently self-deception." [214] A descent into nihilism (or critical theory!) does not follow. Nor does a lazy middle-of-the-road-ism: "It is no less arbitrary and dogmatic to declare a priori that 'the truth lies somewhere in between.' It may. It may not. On some highly specific issue, it might lie entirely on one side–and on another issue, with the other side." [215]


No, Sowell is very clear that facts matter. He is also clear that facts by themselves do not take sides. The fight over their meaning will endure in perpetuity. And this is Sowell’s most important insight. We should accept that fact and accommodate ourselves to it.

It is…necessary to understand that a very fundamental conflict between two visions has persisted as a dominant ideological phenomenon for centuries, and shows no signs of disappearing. The inevitable compromises of practical day-to-day politics are more in the nature of truces than of peace treaties. Like other truces, they break down from time to time in various parts of the world amid bitter recriminations or even bloodshed. [117-118]

We must live with each other and our different visions, even as we struggle, by all means, over their meaning and their implementation. With that understanding, by keeping them in creative suspension, we can minimize both the bitterness and the bloodshed.

Forgotten Radical

“He was buried in Hyde Park, his impact largely forgotten in the city and region that he loved enough to want to change.” [352]

Thus ends Kerri Greenidge’s 2020 biography of the–yes–much neglected William Monroe Trotter. He was born in 1872 on his grandparents’ farm outside Chillicothe, Ohio. His father, James, was born into slavery to a Charlottesville, Virginia-born mother named Letitia and a Mississippi planter named Richard Trotter. As the master’s children, James and his mulatto siblings enjoyed privileges denied other slaves, including formal education. Using the extra freedom she had grown accustomed to, Letitia escaped north with her three children to Ohio in 1849.

In Cincinnati, James attended the abolitionist-founded Gilmore’s School, where he studied Latin, Greek, and algebra, among other things. After teaching for several years, he served in the Civil War, earning the rank of lieutenant, a title he went by for the rest of his life. James Trotter’s example showed Monroe he need not be intimidated by powerful white men. [12] The father’s words guided the son’s activism throughout his life: “Only the colored people themselves can deliver us from the wilderness.” [33]

James’s money aided him, too. The lieutenant gained a post office job during Reconstruction and, settling in suburban Boston, put his family in the middle class. He didn’t pay for his son’s Harvard education, but he did leave him a substantial sum at his death, while Monroe was still a student. There was no trodden path for the Negro Ivy Leaguer to follow after graduation in 1895. There were, rather, many obstacles deliberately placed in his way. Not even the banks with purportedly progressive leadership would hire him. He went into real estate with his inheritance as seed money and made a modest fortune that would carry him–mostly–through his storied, if forgotten, career as the Guardian of Boston.

Here is a timeline of his accomplishments:


1901: Founds the Guardian, Boston’s colored weekly, the self-proclaimed “greatest race paper in the country”

  • The paper serves as tribune for the colored “genteel poor” of Boston and New England for three decades
  • Subscriptions never numbered much more than two thousand; the paper never paid for itself
  • Quality declined: an erstwhile Trotter supporter mocked it as “the worst-run colored paper in America,” yet it remained a sentimental favorite throughout its run

1902: Organizes Guardian-sponsored rally at Faneuil Hall in support of the 14th Amendment

  • Emerges as a militant voice for colored Boston in opposition to the deferential appeasement of Booker Washington and the Tuskegee Machine.

1903: Helps found the National Suffrage League, which begins to galvanize New Negro political activity.

  • Later merges with the National Equal Rights League, which Trotter leads for the next three decades.
  • He helped form or lead other organizations as well, but they are an alphabet soup, hard to keep track of

1905: Organizes, with W.E.B. Du Bois, the Niagara Movement to push the Roosevelt administration to adopt civil rights policies:

  1. Enforcement of 15th Amendment and investigation of southern state constitutions
  2. Support for desegregated interstate travel
  3. Federal support for southern black education

(Trotter’s indulgence in petty rivalries and “immature vindictiveness” [128] torpedoed any progress that the convention might have made.)

1907: Organizes “Remember Brownsville” campaign for colored Boston

  • Members of the all-black Twenty-fifth Infantry defended themselves against white mob violence in south Texas and were dishonorably discharged for doing so

1910: Eschews involvement in the newly-formed National Association for the Advancement of Colored People

  • The NAACP is heavy on white leadership and does not advocate the “centrality of black people to civil rights activism” [Greenidge, 180]

1912: Helps Woodrow Wilson win presidency with black votes

  • Believes blacks will gain political power by voting “race first” rather than give blind support to Republicans
  • Meets with Wilson, gains a pledge of support for Negro civil rights, convinces significant numbers of black electorate to vote for the Democrat

1913: Leads vocal protest when Wilson immediately reneges on pledge; allows segregation of federal jobs and dismissal of large numbers of civil service employees

  • Earns notoriety among whites and distinction among blacks for standing up to the president in White House meeting
  • Wilson: “You are the only American citizen that has ever come into this office who has talked to me with a tone and a background of passion that was evident.” (Read: “uppity Negro”)

1915: Leads successful protest against the release of KKK-glorifying film, The Birth of a Nation

  • Greenidge: “Trotter’s very public and extremely popular Birth of a Nation protest was less concerned with changing white minds than it was with igniting black consciousness.” [212]

1917: Agitates for rights of Black soldiers after US declaration of war

  • Supported Wilson’s War for Democracy as long as it “vouchsaf[ed] freedom and equality of rights to all citizens of the United States regardless of the incidence of race or color over which they have no control.” [241]

1919: Travels to Paris Peace Conference to voice Liberty League’s demands for post-war colonial settlement and Black civil rights

  • The federal government denied passports to all non-conciliatory Negroes
  • Trotter dressed as a cook for the ship’s crew and sneaked off in disguise to get to Paris (perhaps Trotter’s most celebrated action in his storied career)

1919: Co-founds the African Blood Brotherhood with Cyril Briggs

  • Membership is small yet its promotion of black armed self-defense is significant
  • Played a role in preparations made by Black Tulsans of Greenwood in their self-defense

1921: Trotter makes forceful yet politic statement to President Harding in wake of Tulsa Massacre, compared to the NAACP’s more guarded one

  • James Weldon Johnson: “…an utterance from [the President] at this time on the violence and reign of terror at Tulsa, Oklahoma, would have an inestimable effect.”
  • Trotter: “…the citizens of Massachusetts look to you in giving aid to the afflicted, and they will stand behind you in any endeavor to punish the guilty and to make such inhuman and barbaric crimes forever impossible in this land of freedom and justice.”

For the next decade, William Monroe Trotter continued his struggle on behalf of Black Americans and their civil rights, against the growth of the Ku Klux Klan, for the passage of an anti-lynching bill in Congress, and to desegregate institutions in Boston and beyond. The intractability of the issue (the possibility of Negro equal rights seemed as elusive as ever), his aging body and mind (a younger generation of activists made him feel increasingly out of touch), the deepening of the Depression (sinking personal finances, and those of the Boston colored community, made the demise of the Guardian, his life’s work, unavoidable): the weight of these facts led Trotter to the ultimate act of self-destruction. He jumped from the roof of his three-story apartment building.

It was a shocking end to a man who had galvanized so many in the Black community, in Boston, New England, the United States, and beyond. William Monroe Trotter had literally given his life to the cause of Negro equality in the United States. The intransigence and apathy of his white countrymen meant he would not live to see the fruits of his life’s work.


Words from the Baltimore Afro-American, in late 1926 after the Old Mon’s meeting with President Coolidge in the White House, when Trotter was already a tired, rumpled, “eccentric old man,” capture the significance of a civil rights paladin who had outlived his time but could not live long enough to win the changes he sought:

If Trotter did no more than let the President know that the Negro is not blind to the injustice heaped upon him, no more than remind him that black men consider themselves just as much a part of these United States as any other race; no more than let him see that there are still men in the race with backbone enough to tell him that we are not satisfied with existing conditions, that we are not asleep–his mission was a success. [334]

A fitting tribute for a largely forgotten radical and humanist.


Greenidge, Kerri K. Black Radical: The Life and Times of William Monroe Trotter. New York: Liveright Publishing Corporation, 2020.


Confirmation Bias


We attend to data that support (confirm) our existing beliefs (biases). The phenomenon is so common it has a name: confirmation bias.


I was aware of this process as I read Kerri Greenidge’s important new biography, Black Radical: The Life and Times of William Monroe Trotter. For, in the wake of the murder of George Floyd, I have consistently objected to voices (at least three in print) who find the solution in “the work that needs to be done amongst white people in white communities across the United States.”


My understandings are informed by my experiences on the playground, as a teacher. I have learned to counsel students that they can’t control what a friend does, let alone says or thinks. What they can do is tell the friend how they feel, move away if she continues to annoy, and call in the teacher when words or actions break school rules, i.e., harassment. The same holds in the larger society: we all need to speak up, firmly but respectfully, when we feel mistreated. Working for fair enforcement of existing laws and the passage of fairer, more effective ones seems a more productive use of time and effort than simply expecting others to change their hearts and minds.


So I took note when reading the following passage from the introduction of the book:


Monroe Trotter challenged the lie at the heart of American arguments over racism that persist to this day: that anti-blackness is a feeling rather than a persistent, defining force in the country’s political, social, and economic life; and northern white progressives, innately less “racist” than their counterparts in the conservative South, are the moral arbiters of a more racially just future. As Trotter’s life of activism indicates, only black people can define what racial justice looks like, and they can only do this through constant agitation for the political, economic, and civil rights enshrined in the Constitution during Reconstruction, yet denied through violent resistance, anti-black policies, and general white apathy. [xvi, italics mine]


A careful reader will see that this passage isn’t a slam dunk for my position, as described above. For another reader of Greenidge might rather find support for her belief in institutional racism. (Trotter was astute and, perhaps, ahead of his time in perceiving it.) A third might feel confirmed in his anti-racist conviction that white liberals are self-deluded and as much a part of the problem as more overt racists. A fourth might focus on Trotter’s example of radical action. (The paragraph is a slam dunk for the relevance of confirmation bias itself.)


Greenidge’s title makes clear she sees Trotter as a radical. His contemporaries did, too, black as well as white. Readers of this space know that I have little affinity for radicals or faith in the fruits of revolution. Yet I do for Trotter. Inconsistency? Not necessarily. Nothing that Trotter said or did was either destructive or unwarranted. In Greenidge’s telling, he was relentless but also principled. He was a tribune for the “genteel poor” (Greenidge’s term) even as he habitually sported a suit and tie and preferred Mozart to ragtime. Trotter was a man of integrity.


Confirmation bias notwithstanding, I can hardly turn the following passage (near the end of the book and the tragic end of Trotter’s life) to my purposes, no matter how hard I might try. Pushing to desegregate Boston City Hospital, the “Old Mon” of Boston met with hospital trustee Carl Dreyfus, a progressive by all accounts, who nevertheless thought Trotter’s aims misguided. He blithely explained that “most white people got along well with individual colored people, but they did not get along with masses of colored people generally.” To which Greenidge speculates:


If there was a single moment in Trotter’s life that precipitated his emotional decline, perhaps it was this–a devastating confirmation that whiteness itself supported and maintained institutionalized racism, and perhaps, that only whiteness could eventually destroy it. [344]


Perhaps, too, Greenidge agrees with the woman who so exercised me last year when she said, “If racism is going to end, it’s because white people end it.” But, again, readers of this space will know I have no difficulty holding contradictions in creative opposition. For, of course, white people’s racial attitudes have to change (and Black people’s, too). The question is what are we going to do about it while we slowly do (and don’t). Trotter’s response was to ignite Negro political consciousness and unite Negro political action. He succeeded in the former; his failure in the latter slowly destroyed him from the inside out. But if William Monroe Trotter is inspirational–and he is–it is because he didn’t let white America either define him or dictate his actions.


That’s my view. But, then again, I’m biased.