To Spend Time Is To Explore Time: Giorgio Morandi and Edmund de Waal at Stockholm’s Artipelag

For someone like me, of a generation brought to visual consciousness amidst gallery-goers who understand paintings through a camera lens and a Google search, a first reaction to the work of Giorgio Morandi is often not dissimilar to helplessness. “Is that it?”, one wonders on seeing one of his still life paintings of a few vases and vessels. Then, when one sees a few more: Is this some kind of trick? And, upon seeing room after room of Morandi’s vases and jugs, bottles and biscuit tins: Did he really have nothing more to say?

But that won’t happen at Artipelag, a new modern museum set gently upon an island in Stockholm’s archipelago. For when you finally arrive at some of Morandi’s best work, collected from private and public collections around the world, you will have been primed to slow down, to forget the phone, to forget whatever tasks you had to get done today and whatever He Who Must Not Be Named said on Twitter—and to simply look.

Edmund de Waal is responsible for this, as his work occupies the first half of the exhibition in a cavernous space, more glass than wall, with slices of view out to pine trees and Sweden’s grey, brackish water (these views are still lifes too, if not natura morta). Depending on how you first come across de Waal you may know him either as a writer or a potter. He is both, and he does both very well from his London studio. The Hare with Amber Eyes is his family memoir, published in 2010.


At Artipelag de Waal is a potter. He has created hundreds of small, cylindrical porcelain vases and vessels, finished in various glazes of whites and blacks, and has set them inside cases and vitrines. Some of these cases are set upon slabs of transparent plexiglass so that they appear to hover in space; some are transparent, inviting close analysis of de Waal’s vessels, while others are opaque, so that only the shadows of the vessels can be seen. One vitrine, Epyllion (2013), is entirely opaque when you stand close to it, and entirely transparent when viewed from a distance.

De Waal’s vases are all empty, of course—empty of matter. But over time it began to seem as if they held time and presence, since the more I looked the more I became aware of myself standing in the gallery. Or rather, lying in the gallery, as his stunning Atmosphere (2014) pieces hang from the ceiling with conveniently placed long couches below. “My cloudscapes”, de Waal calls them—catching “the fugitive movement of the sky”.

It is particularly true in both de Waal’s and Morandi’s work that the more you put in the more you get out—spend time with these vessels, walk in between them and around them, lie beneath them and in front of them, and they just continue to give. “To spend time is to explore time”, de Waal writes in the exhibition catalogue—“This process is not a means to an end.” Here our modern conception of time is flipped upside down, and we find ourselves not investing it or even spending it—not aiming for some future, cultured return—but simply observing it.

Where the Minimalist art of the 70s and 80s makes us conscious of space, de Waal and Morandi’s work makes us conscious of time. If you happen to see Richard Serra’s massive steel sculptures at the Guggenheim Bilbao shortly before or after this exhibition in Stockholm, you will notice curious resonances. Each exhibition occupies a similarly massive space; each artist dominates the space he occupies, though they do so in different ways; and each is similarly resistant to movements and labels, though America’s need for labels has forced Serra into the Minimalism box. Perhaps most significantly, each artist suspends himself in time and space: de Waal quite literally, with his hanging and floating, transparent and opaque vessels; Serra, by changing our path through space and in his use of a timeless material; Morandi, by being a modern Old Master—by painting still lifes in oil on canvas during Duchamp’s lifetime, by ignoring in turn Futurism and the return to order despite living in Italy, and by rejecting all modern demands of change and progress in his own work. And all of them, by returning us to a time when art was free of avant-gardist teleology—“You feel suddenly free”, Robert Hughes wrote of being inside Serra’s Guggenheim sculptures, “far from the dead zone of mass-media quotation, released from all that vulgar, tedious postmodernist litter and twitter…” This art aspires not to newness, but to timelessness.

Jeff Koons created The New, and contrasted it with The Old. In de Waal and Morandi’s work no such opposition exists. Enter this gallery and you exit Modernism’s conception of time and progress, leaving behind along with it all that is pre and post, avant-garde and rear-guard, money and metrics, fame and fortune. “What matters is to touch the core, the essence of things”, said Morandi of his art—and here one gets close to that, finding oneself held still in reality. And after all, Morandi added, “Nothing is more abstract than reality.”

Summer With Picasso and Giacometti

2017 seems an appropriate year for two big shows: the Reina Sofia’s Pity and Terror: Picasso’s Path to Guernica and Tate Modern’s Giacometti. As the White House talks of showing us “fire and fury like the world has never seen”, gallery-goers in Madrid and London can look back to the future and see both how we got where we are and where we might be going. This is particularly useful for those of us born after the Cold War and the age of existentialism, who might otherwise believe that everything going on in global politics in 2017 is somehow new.

Here’s what you know before going to either of these exhibitions: Picasso’s Guernica is about the suffering of war, and Giacometti’s sculptures are symbols of existentialism. A woman with her head thrown back, screaming, a dead child in her arms; a gaunt, emaciated man walking onwards, to work or to the grave, but ever going nowhere. The twentieth century saw a lot of both war and existentialism, the latter likely growing out of the former. We know that, but because we’ve seen Guernica endlessly reproduced, and because Giacometti’s Walking Man is little more than a substitute for the word existentialism, it has become difficult to see anything more. The question each of these exhibitions asks is: can we un-see the images and artists we think we know, and re-see them in all their depth and relevance? By contextualising the artists’ work chronologically, can we see and feel afresh the stirrings and lessons the images contain; and might we then see more clearly this familiar world that suddenly seems so strange?

To each question, an emphatic “yes”.

Picasso’s Guernica, painted in 1937 for a giant wall in the Spanish Pavilion at the Paris World’s Fair, is, of course, a reaction to the blanket bombing of the Basque town of Guernica by the German Luftwaffe just two months earlier. Yet before the bombing happened, Picasso had already been granted the commission to produce a painting of Guernica’s size for the Pavilion, and he had been experimenting with ideas for a few months. The original plan was to do a version of one of his favourite themes, “The Artist’s Studio”, and he had completed 12 sketches on that idea (he had even planned to place busts of his mistress at the time, Marie-Therese Walter, on either side of the painting once it hung at the Pavilion). As Anne Wagner, one of the exhibition’s co-curators, notes in an accompanying article: “It is as if, in his [Picasso’s] eyes, the end was in sight.”

Then the bombing happened, on the 26th of April. What do you do, if you’re Picasso? You’re already recognised as perhaps the greatest living painter, you’ve been granted a huge commission by your home country (though you’ve lived in France since your youth), you’re on the home stretch with ideas for this, the biggest work you’ll ever produce, and then—bombs are dropped. Hundreds, even thousands, are killed (it speaks volumes that death estimates vary so greatly). A town in your home country is literally levelled, and by a foreign power merely practising bombing techniques, showing the world what it’s got, that it’s the most powerful country on earth… It’s a single event, of which you’re only reading news reports (which are vague at this stage—some argue the bombing was done by Basque anarchists—and you aren’t even seeing actual photographs, because journalists can’t get in or out of the town for some time), and yet—it changes everything. “The Studio: The Painter and His Model” suddenly seems quaint, to put it mildly, and the planned busts of your mistress now seem just daft, self-gratifying in the extreme. Events outside of your control force you to respond. And just five weeks later, you’ve declared Guernica complete.

In getting at the work’s dualities, the other co-curator of the exhibition, T. J. Clark, quotes A. C. Bradley’s description of the Greeks:

“Everywhere, from the crushed rocks beneath our feet to the soul of man, we see power, intelligence, life and glory, which astound us and seem to call for our worship. And everywhere we see them perishing, devouring one another and destroying themselves, often with dreadful pain, as though they came into being for no other end.”

It’s exactly that kind of duality that we see in all Picasso’s work. Picasso famously said that he never painted subjects, but only themes. Anne Wagner explains in an LRB article how Picasso told Andre Malraux, after he had completed Guernica, that he had painted death “as a skull, not a car crash”. His other themes included birth, pregnancy, suffering, murder, the couple, rebellion… For Picasso, here were hideous contradictions he needed to respond to. Just days before, he had been working on a painting glorifying art and artists—himself, essentially—as well as his lover, Marie-Therese. He had been glorifying themes like birth and pregnancy (Marie-Therese had given birth to his daughter two years earlier)—themes which for him were always associated with his own pregnancy of ideas, and his giving birth to them in painting. He had been glorifying power, intelligence, life and glory, beauty—themes called into question by a senseless, needless bombing. How do you respond, if you’re Picasso?

It’s this kind of narrative that the Reina Sofia exhibition captures so well. We see not just his sketches for Guernica, but his paintings going all the way back to the post-Cubist still lifes inside his studio, and to his disturbing depictions of tangled lovers kissing/penetrating/attacking one another on a beach. We see his changing depictions of the (many) women in his life, and the themes of pregnancy, birth and suffering. And when we do finally see Guernica (albeit, perhaps appropriately, from behind crowds), we can’t help but see it differently. Even though we’ve grown up seeing the mural on coffee cups and t-shirts, it has now regained its poignancy and power, and it seems to contain both sides of Bradley’s description of the Greeks. All the death, murder and suffering, yes—but also all the things that cause us to suffer, like our children and our lovers, our ideas (is it Edison’s illuminating bulb that sits above the wails of terror, or Goya’s lantern from the Third of May?) and our religion (a candle is thrust into the scene through a window, giving us light in darkness and inviting us somewhere else—or is this merely a vigil?).

Swollen breasts, life-giving, hang over the child and point at the man pinned to the ground by the spooked horse. But look again: are not the breasts shaped like bombs, the areolae like explosive casings, the nipples like triggers? For after all, even the dictator, now dropping bombs, did place his mouth here, did gain sustenance here.

— — — —

In London, at Tate Modern, Giacometti shows us what humanity looks like when the themes of death, murder and suffering win out over their counterparts: life, birth and hope. Giacometti was twenty years Picasso’s junior: is this a generational difference, or merely a response to the post-war world? (The two were known to holiday together on the French Riviera until a falling out, but they remained great admirers of each other’s work.)

Walk into the exhibition and in the first room you will be greeted by perhaps fifty busts produced across Giacometti’s lifetime, from the first in 1917 (when he was just 16). The earliest are done in plaster, the face a flat disc with features painted on. In the middle period they are tiny—truly, tiny—because Giacometti was forced to spend the war holed up in a small hotel room in his native Switzerland, producing sculptures only as large as he could carry back to France in a matchbox. I think the results are some of his best: a head or body so small you must strain to see its features, yet attached to a base so large that you can almost feel the unbearable weight of being (I almost felt Kundera had been responding to these busts). And in his later period comes the characteristic elongation from nose to nape, where the face meets in a sharp point before the eyes and extends backwards, all the wrinkles and gauntness exaggerated and emphasised.

If you’re familiar with Giacometti, it shouldn’t be a surprise that these aren’t happy faces. But then again, what sculptor ever depicted happy subjects? Bronze and marble (though Giacometti did not use the latter, and preferred to display the plaster originals before they were cast in bronze) are mediums that say permanence, timelessness, greatness—and so we expect to see grand subjects. Think of Rodin; in non-figurative sculpture, even Brancusi’s objects have a kind of grand sincerity to them. But what struck me in this first room of the exhibition, which I thought the best of all, was how decidedly ordinary and routine all these faces look, how haggard and temporary. To put it more bluntly, even the youngest subjects look not so far from the grave. And yet—they are for the most part cast in bronze, and will outlast us all, thanks to the artistic efforts of one of our species’ members.

The exhibition is large. We see Giacometti as a struggling Surrealist—struggling because he seemed interested only in the human form (he was later expelled from the group for continuing to create representational works). We see his Chariot, and The Dog (“It’s me”, Giacometti is reported to have said. “One day I saw myself in the street like that. I was the dog.”). We see the Women of Venice, the plaster originals brought almost all together for the first time since they were displayed at the 1956 Venice Biennale. There are Giacometti’s oil paintings, with their cage-like interiors which so influenced Francis Bacon. And in the final room we see his towering tall women, double our height, so that we are left with the sense of having walked for a while among giants.

In Pity and Terror we saw the coupling of all that is best in the world with all that we would rather not think about—death and suffering with birth and joy. In Giacometti we see the duality of temporality: our being in human time on the one hand, and the permanence of humankind’s endeavours on the other. We trudge onwards to work and to the grave, and we have our crises along the way, but all that haggardness is outlasted by the permanence of what we accomplish. It’s a strange duality—and strange too is the sight of Giacometti’s brother Diego throughout the exhibition, looking always so tired of it all, yet being looked at tirelessly by thousands each day, thousands who somehow come out of the gallery with a new vitality.

— — — —

I might be too frank here, but 2017 has had me thinking a lot more about death. Just a few days before I saw Giacometti in London there had been the terrible terrorist attacks on a nearby bridge and at Borough Market. Then there is the talk of war, maybe of the nuclear kind, on the Korean Peninsula—and bombs continue to drop in Syria. More personally, I’ve had to attend the funerals of close family members, and bad bicycle crashes have left me feeling very mortal. (Days after I wrote this, Las Ramblas in Barcelona became the site of another terrorist attack—the city of Picasso’s youth given a new relationship to Guernica.)

These threats and worries aren’t out of the ordinary. Certainly my awareness of them only at this age indicates the relative comfort of my upbringing. The second half of the twentieth century has been called the nuclear peace for a reason, and deaths from terrorist attacks have statistically never been less likely. Maybe all that is unique in my thinking about death is the combination of a growing individual awareness of it—call that growing up—with a politics that, for the first time in my conscious life, seems to present death as a possible outcome.

High on a wall in the Guernica exhibition was printed a quote of Hannah Arendt’s, getting at the heart of the idea of death in both Picasso and Giacometti:

“Death, whether faced in actual dying or in the inner awareness of one’s own mortality, is perhaps the most anti-political experience there is. It signifies that we shall disappear from the world of appearances and leave the company of our fellow-men, which are the conditions of all politics. As far as human experience is concerned, death indicates an extreme of loneliness and impotence. But faced collectively and in action, death changes its countenance; now nothing seems more likely to intensify our vitality than its proximity. Something we are usually hardly aware of, namely, that our own death is accompanied by the potential immortality of the group we belong to and, in the final analysis, of the species, moves into the centre of our experience. It is as though life itself, the immortal life of the species, nourished, as it were, by the sempiternal dying of its individual members, is ‘surging upward’, actualised in the practice of violence.”

What I’m talking about is becoming aware of both of these kinds of death, at the same time: death as the most non-political event there is, and death as an ultimate outcome of politics. That’s why 2017 was an appropriate year for these two big shows. In Giacometti we see not death itself, but the non-political awareness of it: the slow decline towards an unobserved, solitary, nighttime departure. And in Pity and Terror: Picasso’s Path to Guernica we see all the contradictory vitality of political death. Mortality, solitariness, ageing and sickness in Giacometti; terrorism, war, energy, and pain in Picasso.

Summer in London was bookended this year by two very different kinds of events. Maybe it’s crude to speak of them in the same paragraph, but it’s the kind of opposition we’ve seen throughout these two exhibitions, the kind that Picasso spent his life depicting. Summer began with terrorism, and it ended with Wimbledon—and at Wimbledon, with an old genius, facing ageing, mortality and decline, showing it’s not all over yet. “Genius is not replicable”, David Foster Wallace wrote of Roger Federer (and I read him as writing of genius everywhere, including that of Picasso and Giacometti): “Inspiration, though, is contagious, and multiform — and even just to see, close up, power and aggression made vulnerable to beauty is to feel inspired and (in a fleeting, mortal way) reconciled.”

It may be fleeting indeed, but that reconciliation is what art—and especially that of Pablo Picasso and Alberto Giacometti—finally offers us.


Note: As if to emphasise the point, from late 2016 to early 2017 the Musee Picasso in Paris curated an exhibition called Picasso-Giacometti, putting together the work of these two artists for the first time. I didn’t get to see the exhibition, but it seems only fitting that the two artists met in Paris, halfway between their later stand-alone exhibitions in Madrid and London.

A Guide for Non-Americans: What is “college” and how does it differ from university?

An introduction to American higher education and the liberal arts for New Zealanders and Australians considering tertiary study in the US

Applying as a New Zealander to study at a university in the United States was a daunting process. There were so many things I’d never heard of: the SATs, the Common App, “safety schools”, “reach schools”, financial aid, liberal arts, to name just a few. But I think what made the process most daunting right from the start—and what made it difficult to find the right information—was that I didn’t understand the fundamental terms used to describe the university itself in the American system.

Americans do not go to university for undergraduate study. They go to “college”. In New Zealand everyone talks about going to “uni” straight out of high school, and we’ll be choosing between doing law, medicine, commerce, engineering, architecture, and other vocational degrees. But when you go to American universities’ websites to look at studying those things, you’ll soon see that all of them require a “college” degree as a prerequisite. And no, that doesn’t mean a high school diploma, despite us in NZ using “college” to refer to high school. This can all be overwhelming and can make little sense for those of us in countries that follow a roughly “British” model of education, and it’s something I get asked about often. So here’s a brief guide.

The first thing to know is that in the American system of higher education (what we call “tertiary” education), you can only complete a Bachelor of Arts or a Bachelor of Science for your undergraduate degree. That’s right—no BCom, MB ChB, LLB, BAS, BE and so on. In the United States, your undergraduate degree will be a BA or a BSc with honours, and it will take you four years to complete. (There are some exceptions to this rule, but they are a very small minority.) Unlike in Australia or New Zealand, the “honours” part is usually not optional—if you get into the college, you’re going to do the honours part of the degree. This is why you’ll often hear many top schools in the United States called “honours colleges”: simply by being admitted, every student is doing an honours program. It’s not something you apply for in your third year of study depending on how well you’ve done so far, as it is in New Zealand and Australia.

The term “college” itself comes from the fact that, traditionally, higher education in America was conducted at liberal arts colleges. There were no universities. Yale was Yale College, not Yale University. Harvard was likewise Harvard College. These were usually small institutions at which students spent four years (as they do today) and completed a BA(Hons) (as they do today), and what made them truly unique was that all students lived in campus accommodation for all four years of their education (as they do today). Very often, professors lived and dined with students as well (as they often still do today!). So “college” is a physical place where you live and study for four years while completing your undergraduate education.

Universities later grew up around the colleges and began to offer professional or vocational degrees, which in the American system can only be studied at the postgraduate level. Yale University today, for instance, comprises fourteen “schools”—one of which is Yale College, the only part of Yale to offer an undergraduate education. The other thirteen schools all offer postgraduate degrees, and twelve of them offer professional/vocational degrees—law, medicine, architecture, business, and so on. And yes, you first need an undergraduate “college” degree (or an equivalent undergraduate degree from another country) to gain admission to one of the “schools”.

So “colleges” are now often small parts of larger universities in the United States, but they remain an important centre for the whole institution. Some of the original colleges declined to build larger universities around themselves—these are now usually referred to as “small liberal arts colleges”, institutions that still offer only a BA or a BSc with Honours. Examples include Pomona College, Swarthmore College, Amherst, Williams, Wellesley, Haverford, Middlebury and Carleton, to name some of the better known.

One other thing to note is that because you live on campus at an American college, you will be placed in a specific residential college for your four years of study. While studying at Yale College, for example (itself one of the fourteen schools of Yale University), I was placed into Branford College, one of the fourteen constituent residential colleges that make up Yale College. Each of these fourteen residential colleges is its own community, with student rooms, a dining hall, a courtyard, and other facilities. (Fourteen is not a significant number; it’s mere coincidence that Yale has fourteen schools and fourteen residential colleges.) Each residential college has a “Rector” or a “Head”, who is responsible for your residential life, as well as numerous other staff. Residential colleges truly are like smaller families or communities within the larger college, which is itself a smaller part of the overall university. (That’s a lot of uses of the word “college”—I hope it’s all clear!)

In New Zealand and Australia, as in many other countries, we have a certain disdain for “arts degrees”. The implication is that students who do an arts degree either weren’t smart enough to get into a program like law or medicine, or simply wanted an easy time at university. I got a great deal of flak from friends and friends’ parents when I was applying to US universities and they heard I would be doing an arts degree—they assumed I was giving up and wanted to party at university, as the stereotype goes here in New Zealand. But in the United States this couldn’t be further from the reality, since all undergraduate programs are what we would call “arts degrees”. The American model of higher education is built entirely around the arts degree, and it is virtually impossible to study anything else for your undergraduate education. Even doing a BSc for your undergraduate years will require you to take many courses in the humanities and social sciences—the BSc is essentially an indication that you majored in a science subject at college and intend to pursue some kind of science-related study for your postgraduate degree.

This emphasis on the arts, and the impossibility of professional or vocational study at the undergraduate level, get to the heart of what makes higher education in the US truly unique. It all comes down to “liberal education”, or the “liberal arts degree”. The term is widely misunderstood, even in the United States, and yet it’s critical to understanding US colleges and to determining whether study in the US might be for you.

Here’s how I explain liberal education: whereas university study in Australia and New Zealand is concerned with learning how to do things—how to be a lawyer, or a doctor, or a businessperson, and so on—undergraduate college education in the United States is intended to be about learning what you should do. 

Let’s delve into this a bit more. Undergraduate liberal arts colleges in the US—and those based on the US model, like Yale-NUS College in Singapore, where I study—often sell themselves on the breadth of education you’ll receive. The idea is that whereas in Australia or NZ we immediately do highly specialised professional degrees and learn little outside that subject area, in the US undergraduate education is about broadening one’s mind, trying a whole range of subjects, and essentially having four years to explore intellectually before committing to a “vocation” which you will study at the postgraduate level. You’ll choose a “major”, the area you think you’re most interested in, but often there is a “common curriculum” or “distribution requirements” that forces you to take a range of subjects and classes outside that area. At Yale-NUS, for instance, one’s major (mine is Philosophy, Politics and Economics) makes up only 30% of the courses one takes across the four years.

Breadth is an important characteristic of the liberal arts, but it is not the defining one. What breadth achieves—and the reason all liberal arts colleges offer it—is the chance to figure out what you want in life. Liberal education gives you four years not to learn how to do things, though that may indeed be part of your education, but to explore, so that you can work out what you should want to do. Instead of asking during your undergraduate years, “How do I become a doctor, or a lawyer, or a journalist?”, liberal education asks, “Should I want to be a doctor? Or should I be a writer instead?” It’s about using classes, and professors, and books, and mealtimes every day in the dining hall with friends and teachers, to learn about yourself for four years before committing to a profession or a vocation. With those four years of exploration, you should be much more confident in the decisions you make about what to do with the rest of your life. And, of course, you then specialise to become a lawyer or a doctor during your postgraduate study.

Another way of explaining it is that university in Australia or New Zealand trains you to be a specialist in a certain subject area in which you’ll work for life; college in the US educates you on what it means to be human, so you’re more sure of what you should later train to be.

There are, of course, advantages and disadvantages to each system. In the US you’re more likely to become a well-rounded human being with wide-ranging knowledge and interests, and more likely to be confident and sure of the vocation you commit to by the time you think about postgraduate education and beyond. The downside is that you spend an extra four years doing this, whereas in Australia and New Zealand (as well as South Africa, India, and anywhere else that has developed its education system from the British model) you would spend that time specialising and beginning your career earlier. You can therefore find yourself further behind in a career than those who studied their vocation as undergraduates. A college education, then, is a luxury; not everyone can afford it, and we should remember that this kind of choice in education is a huge privilege.

Which system better suits you is therefore an individual choice, though I personally am a strong proponent of taking the extra time the American system gives you to learn about yourself and the rest of your life. I think it encourages people to discover what truly matters to them—what kind of interests and work they’re willing to devote their lives to. Without being very sure of the decision you make in the Australian and New Zealand systems, you might find yourself waking up in your mid-twenties realising that the degree you’ve spent five or more years training for is in fact not what you want to do with the rest of your life. That’s a costly and disappointing realisation to have. Studying at an American college, by contrast, should give you a foundational understanding of yourself—should, if it lives up to its ideal, have helped you answer the question of what you should do with your life, rather than how to do it—and that will help you be a good person, whatever you later go on to do. Of course, you can give yourself this kind of education at an Australian or New Zealand university by structuring it yourself: you can do a BA(Hons) as you would in the United States. This is a great option, but it requires a degree of self-motivation and determination that you might not need at a US college; here, you’ll face the stigma attached to “arts degrees” and won’t have the encouragement to explore intellectually that you would have there.

I’ll leave it there, and further note only that the picture I’ve painted of US colleges is a kind of ideal type. The degree to which colleges live up to that ideal will depend on the institution, and even down to the classes you take and professors you get—but nonetheless the fundamentals stand. That’s a brief primer on colleges vs universities, and what truly makes the American “liberal arts college” unique. Feel free to leave a comment or contact me if you have other questions, and I can always delve into specifics on other college-related questions in another blog post.

To end, here’s some additional reading I strongly recommend to anyone who is interested in the US higher education system and the difference between education and training:

Why Teach?, by Mark Edmundson

The Voice of Liberal Learning, by Michael Oakeshott

College: What it Was, Is and Should Be, by Andrew Delbanco

In Defense of a Liberal Education, by Fareed Zakaria

What It Means to be Against Everything: A Brief Review of Mark Greif’s Book

“We have no language but health. Those who criticise dieting as unhealthy operate in the same field as those who criticise overweight as unhealthy. Even those who think we overfixate on the health of our food call it an unhealthy fixation. But choosing another reason for living, as things now stand, seems to be choosing death. Is the trouble that there seems to be no other reason for living that isn’t a joke, or that isn’t dangerous for everyone–like the zealot’s will to die for God or the nation? Or is the problem that any other system than this one involves a death-seeking nihilism about knowledge and modernity, a refusal to admit what scientists, or researchers, or nutritionists, or the newest diet faddists, have turned up? As their researches narrow the boundaries of life. 

Health is our model of all things invisible and unfelt. If, in this day and age, we rejected the need to live longer, what would rich Westerners live for instead?”

Greif’s overarching criticism, across many of his essays, is that we live as if the point of living were to extend life. In “Against Exercise” he criticises our spending of time on mere self-maintenance and self-prolongation, whereby we give up life in order, supposedly, to extend it. The same applies to food: we spend our days thinking and worrying about what to eat, restricting what we eat, so that we may be “healthy”, as if health were the point of life rather than its means. As soon as we became secure in our food supply, we began restricting our diets, in a kind of confusion over what to do with our newfound freedom.

Individual phenomena are used in Greif’s work as examples of his overarching critique: that we value the wrong things without realising it. “I had to show”, Greif writes in the introduction to his collection of essays, Against Everything, “how every commonplace thing might be a compromise. The standards universally supposed might not be “universal.” Or they simply might not suit a universe in which my mother and I could happily live.” ‘Foodieism’ and exercise are where he deconstructs most destructively the ends towards which we direct our lives.

Health—through food, and exercise—is precisely the area where we feel, as a society, that we are making progress. The prevailing narrative is that we’ve seen through the destructiveness and dangers of large-scale food capitalism, and are now aware enough to ‘do the right thing’—buying local and organic, for a start. To critique that improvement can seem curmudgeonly, perhaps rash. We improve ourselves, and try to improve the planet, and yet here Greif is to criticise, to tell us we’re mistaken. Would there ever be a world in which he wouldn’t find something to criticise, even his own utopia?

And yet he manages to criticise gracefully. Tactfully, even, so as to avoid knee-jerk anger at his own naysaying. I read Greif as a countervailing voice, someone who knows (and maybe even hopes) he won’t be taken fully seriously, yet who hopes that by arguing “against everything” we will be able to find a middle way through our problems, avoiding the worst of the dangers. It is hard to believe he wants to be taken seriously—he is arguing, essentially, that we are all mistaken in our thinking about food, the logical conclusion of which is that we simply should not think about it, eating whatever we want whenever we want. But by reminding us of the dangers of the path we are on, he helps us improve that path and avoid its pitfalls.

Greif acknowledges the endlessness, and even the destructiveness, of being “against everything”. But for him it is not a negative attitude towards modern society; it seems more a state of being in which one always maintains the belief that things can be improved. “I knew a ‘philosopher’ to be a mind that was unafraid to be against everything”, Greif says; “Against everything, if it was corrupt, dubious, enervating, untrue to us, false to happiness… To wish to be against everything is to want the world to be bigger than all of it, disposed to dissolve rules and compromises in a gallon or a drop, while an ocean of possibility rolls around us.”

So when he is against exercise, and against modern food, and against “the concept of experience”, reality television, YouTube and the hipster, Greif at his core merely wants to show that modern life need not be all-encompassing. The ocean of possibility rolls all around, and ultimately, “No matter what you are supposed to do, you can prove the supposition wrong, just by doing something else.”

Greif’s essays shed light on that “something else”, cutting through prevailing narratives and showing that the very things life seems to demand of us are what we should be most sceptical of.

Seneca on the true purpose of philosophy

Seneca diagnosed the problem with philosophy two thousand years ago. In one of the letters that make up the Epistulae morales ad Lucilium (often published in translation as Letters from a Stoic), he writes: “What I should like those subtle teachers (philosophers)… to teach me is this: what my duties are to a friend and to a man, rather than the number of senses in which the expression ‘friend’ is used and how many different meanings the word ‘man’ has.” He goes on:

“One is led to believe that unless one has constructed syllogisms of the craftiest kind, and has reduced fallacies to a compact form in which a false conclusion is derived from a true premise, one will not be in a position to distinguish what one should aim at and what one should avoid. It makes one ashamed—that men of our advanced years should turn a thing as serious as this (philosophy) into a game.”

There are some of us who have had a strong gut reaction against every formal philosophy class we’ve ever taken, yet have been quite unable to say why. Was it a certain professor or teacher? No, because my reaction has been the same across every class and every professor. Was it a certain period of philosophy, a certain philosopher? It can’t be, because I’ve tried such a range, each time thinking it was just that class I didn’t like, then trying another and finding it exactly the same. Just what is it that repels us so? Philosophy is meant to help us live an examined life, and yet in class all we examine are the constructions of sentences and arcane arguments.

Seneca mocks precisely these kinds of things in philosophy:

“‘Mouse is a syllable, and a mouse nibbles cheese; therefore, a syllable nibbles cheese.’ Suppose for the moment I can’t detect the fallacy in that. What danger am I placed in by such lack of insight? What serious consequences are there in it for me? What I have to fear, no doubt, is the possibility, one of these days, of my catching a syllable in a mousetrap or even having my cheese eaten up by a book if I’m not careful… What childish fatuities these are! Is this what we philosophers acquire wrinkles in our brow for?… Is this what we teach with faces grave and pale?”

I criticise my philosophy classes at the same time as I read philosophy each day in my spare time. The two are not the same. I read philosophy, and do not know what I’d do without it; I study philosophy, and wonder what the point of it is. Maybe the difference is that I enjoy philosophy but do not enjoy the study of philosophising, which often seems to be what we do at university—examining the constructions a thinker used to make a point, rather than asking whether and how that point can help us live our lives.

When I read philosophy, I love it for its practicality. It’s often like having a chat about the important things in life with an old friend. In your head, you argue back and forth, put one philosopher’s argument up against another’s that you’ve read, and listen while they debate what you should do in a given situation. There are no rules, no rights and wrongs, though the philosophers can help you discover what you believe to be right and wrong, good and bad, wise and stupid. When studying philosophy at school and university, however, there are rules: it’s all about the precise meaning of words, the structure of your sentences, the strictness of your prose. These things become so important in this kind of philosophy—and your professor always demands them—that the real purpose of reading philosophy is completely forgotten.

Seneca tells us exactly what philosophy is for, what it should aim at:

“Shall I tell you what philosophy holds out to humanity? Counsel. One person is facing death, another is vexed by poverty, while another is tormented by wealth—whether his own or someone else’s; one man is appalled by his misfortunes while another longs to get away from his own prosperity; one man is suffering at the hands of men, another at the hands of the gods. What’s the point of concocting whimsies for me of the sort I’ve just been mentioning (the mouse trap example)? This isn’t the place for fun—you’re called in to help the unhappy… All right if you can point out to me where those puzzles are likely to bring such people relief. Which of them removes cravings or brings them under control? If only they were simply unhelpful! They’re actually harmful.”

I think we all understand, at some deep level, the real kind of philosophy that Seneca describes; it’s just a shame that philosophy departments in universities, developing as they have along the analytic tradition, have become focussed on exactly the kind he writes against. It’s easier, after all, for a teacher to grade a paper on logical fallacies or the mechanics of argument than to grade a paper on how philosophy can help us live. But when it comes to our lives—and that’s what education is for—the former matters very little, and the latter a great deal. So it’s up to us to find a teacher who understands this (and they do exist, don’t get me wrong!), or to see whether we can learn from university philosophy while working around its frustrating requirements. Whatever the case, philosophy is too important to ignore entirely; let’s hope studying it at university hasn’t put some people off forever.

Dubious Lessons of a Well-Intended Education: On Fakework

Let’s be honest: education teaches us some truly dubious life lessons.

A friend of mine recently took a class in which the sole assignment for the whole semester was a single 6,000 word research paper on a topic of one’s choice. Despite giving her professor assurances to the contrary, she began the assignment the night before it was due. She wrote the entire paper in one sitting, editing as she went, and submitted without proofreading.

She said she deserved a bad grade, and would have accepted one with resolve. She’d been unengaged by the class and was planning to take it pass/fail. And yet—when she received the graded paper back a few weeks later, it had received an A, and her professor was effusive in his praise. He wrote to her in an email something along the lines of: this is one of the best undergraduate papers I’ve ever read, and I can tell how much effort you’ve put into it. Keep up the hard work, and may your successes continue.

The lesson my friend learned was one of smart work, as opposed to hard work. Pretend to work hard, put in the minimal amount of effort necessary, confuse with big words, elegant sentences and a complex thesis, and the rewards will follow. Success depends as much on impression as on reality—the impression of hard work, the impression of intelligence.

The kind of ‘smart work’ I’m talking about is more than the “hack” mentality put forward by blogs like Lifehacker, and more than the productivity mantra of Silicon Valley. Where those look to reduce the time it takes to carry out a given task (and that is, after all, the idea of technological progress), the smart work taught by our schools and universities changes what it means to complete a task. A task is complete so long as it gives the impression of completion, no matter the thought, detail, care, conscience or morality behind it. Perhaps a better term is fakework.

“Yes, and?”, some will ask. “The activity is still complete. What’s it to others how it was completed? And besides, they’ll never know.”

Modern culture itself seems built on a similar kind of impressionism. It is probably a result of modern advertising, the ever-increasing fight by companies for our attention, the ever-decreasing time we feel we have. Politics is now the competition of the sound-bite. Advertising gives the impression of life transformation through the purchase of a product, when of course the underlying product can never live up to the impression that was sold.

We are taught the lesson in our schools and universities because everyone—teachers and professors included—is subject to the same laws of impressionism. Teachers face the same constraints on their time as students, if not greater ones, and it seems the trick, for many (though by no means all!), is to give the impression of having thoughtfully read and graded a paper without having truly done so. Because both students and teachers engage in it, fakework becomes one of the unspoken myths of one’s education. So long as you give the impression of hard work—and don’t call others out on theirs—all will be fine.

We take the lesson with us to the workplace, and it moves us onwards, forwards, upwards.

The problem is, we come to believe it. Fakework becomes not just an unspoken reality of our education systems, but a rule of modern life. If once we could switch fakework on and off depending on the activity, soon we forget that it underlies our actions. And for some things in life, hard work is the only solution. In those moments, the mere impression of it counts for absolutely nothing.

Like when your doctor tells you you’re at risk of a heart attack, and that you urgently need to get fit to improve your heart.

Like when you’re about to become a mother or a father and have just a few months to learn everything you need to know to keep your child safe and healthy and to give them the right start in life.

Like when you’re laid off at 55 and decide to write the novel you always wanted to write.

Like when your father has a stroke and you’re his sole caregiver.

In these situations, and so many more where the only one watching is our own conscience and the only people affected are the ones we care most about, hard work is all there is.

Education is so all-encompassing, so all-consuming, that we fail to see how the lessons we learn, no matter how broken the incentives through which we learn them, are lessons we take with us through life. Our views, habits and approaches to life are formed when we aren’t watching; they’re formed when we’re looking the other way, trying to get an assignment done the night before it’s due. I suppose one should try always to keep a watchful eye turned in this direction, and to see every assignment and task as an opportunity to practise the habits and approaches we’ll need when life most tests us. We don’t want to be left floundering, wondering why fakework isn’t working exactly when we need it most.

Transforming the Gold of Our Lives into the Base Lead of Commerce

Recently I’ve quoted, perhaps too often, Annie Dillard’s slap-in-the-face line: “How we spend our days is, of course, how we spend our lives”. It’s a slap in the face because of its simplicity and its great importance. And the “of course” is tucked so effortlessly in the middle because, OF COURSE, it’s true, though we forget it every day.

But I’ve wondered too about the choice of the verb “spend”. It’s not something I noticed on first reading, and yet having since discovered Mark Slouka’s line about any “loathsome platitude” that compares time to money—“the very alchemy by which the very gold of our lives is transformed into the base lead of commerce”—I can’t un-see it. (Slouka, of course, also tucks his line inside brackets halfway through a paragraph, as if it were too obvious to mention.)

What does it mean to “spend our days”, and to “spend our lives”? It’s as if we have a savings account, and the trick to life is not to deplete the account too quickly. The commerce metaphor conjures subconscious ideas of frugality and the time value of money: save today, for every dollar saved today will be a dollar and a bit tomorrow. Be smart with your account, because you’ll need to support yourself for years to come.

Metaphors are dangerous, especially those that enter into daily usage. Rarely do we reflect on how they might shape our thinking, the ways in which our minds come to take on the ideas embedded within them. And the greatest risk of all is that we do not ask whether the metaphor is apt; whether by analogising the most important thing we have—time—we are losing sight of what is really at stake.

We cannot “spend our days” in the way we spend money; we do not know how many days we have, our days are not comparable to one another in objective quantities, and we cannot save a day today to get a day and a bit tomorrow. Time and money stand opposed: to get one, we must deplete the other. And yet by saying that time is money, and that we spend our days, we come to treat the two as tradeable, as if we were merely trading apples for oranges; to think that way is to remain stuck in the realm of commerce, where decisions are merely orderings of preferences. We come to think that the only thing we really have, the very thing we cannot count on, is merely a kind of purchasing power. But time is outside the realm of commerce entirely, for it cannot be purchased. It cannot factor into preference orderings the way an iPhone can. It’s the most crucial thing Michael Sandel forgot to include in his book What Money Can’t Buy.

Let us say instead, how we live our days is how we live our lives. Living is what takes up time. One lives life and spends money; one cannot live money, nor spend time, though for too long we’ve pretended we could.