Bulletpoint Philosophy

“The man whose horse trots a mile in a minute does not carry the most important message.” — Henry David Thoreau

“Do you think we spend too much time thinking about life instead of living it?”

I’ve always wanted to begin an essay that way. The question raises all kinds of paradoxes. In asking it, am I already guilty of an excess of reflection? In reading my asking it, are you too prioritising thought and reflection over action? For that matter, why do we dichotomise thinking and living—as if to think were to be frozen in time, like Rodin’s sculpture?

I was at a Starbucks in New Haven working on an essay (“Describe the two-component account of moral weakness, and explain what you think is the most serious objection facing that theory.”). It was early morning there, and therefore early evening on the other side of the world, when one of my closest friends sent me a text asking that troubling question. She gave no context and said nothing else; she simply asked the question, and it appeared on my screen in a little blue bubble designed by Apple in California, accompanied by a childish sound suggesting that a pigeon had just flown the question halfway around the world.

The short answer—the tl;dr—is: no. No, I don’t think we spend too much time thinking about life instead of living it. The slightly longer version: that seems to be our problem, really. 

This essay is, in its entirety, the answer I wanted to give my friend, but felt at the time incapable of giving. For as much as I wanted to answer her there and then, I felt that to do so—to send messages in glossy bubbles and to fill her ears with those tinny pigeon-noises—would be to betray a lack of thought, whatever I happened to say. I believed then, and I believe more strongly now, that the very form in which we present our thoughts can say almost as much as the thoughts themselves. A truism, perhaps proven by modernism itself. But invert this, and we get another truth: that what we say—and, prior to that, how we think—depends upon the forms we have available to say it. Technology has changed these forms more in the past decade than in the few centuries before, so now seems a good time to stop, to assess. Just what are we saying, and how are we saying it?

I propose we call ours the age of the bulletpoint philosophy. It is a time of quick fixes and strange philosophical mixes for life’s pressing problems. I could cherry-pick examples, but I scarcely need to. Websites like LifeHack, LifeHacker, Study Hacks, Zen Habits: as I check them while writing this, I get articles ranging from “Get a Better You: Powerful workouts, easy recipes and wellness tips for an awesome life” to “The Life-Changing Magic of the Inbox Sort Folder”; “How To Write Every Day” and “All The Passive Aggressive Stuff You Should Never Do In A Relationship”. In such philosophy (and I am one who believes philosophy is naught but counsel in the problems of living) we find Stoicism meeting Buddhism, hippie culture meeting the corporate incarnation of the Protestant ethic. Google is even more helpful. I ask for the meaning of life and get 86 million answers in 0.76 seconds, with the best answer highlighted in a box at the top, lest I feel overwhelmed. (1. Stop Playing by the Rules; 2. Step Outside of Your Comfort Zone; 3. Find Your Joy; 4. Listen to Your Intuition; 5. Appreciate the Individual Moments.)

For some years I have been simultaneously attracted to and revolted by this kind of writing. On the one hand, it seems to help. I’m inspired to change my life, to find my joy. It has persuaded me to be an early riser and to become vegetarian; to get rid of some of my possessions, and to try meditating. Admitting this, I’m horrified. Surely someone who attends a so-called elite university should be more discerning, taking life lessons from Shakespeare rather than Tim Ferriss?

Like an anonymous street artist whose work is soon framed and placed in bourgeois living rooms, this writing first appeared on personal blogs but before long became its own genre, with a proud place on major media websites. It has so far remained nameless as a genre. But “to name a sensibility,” wrote Sontag, “to draw its contours and to recount its history, requires a deep sympathy modified by revulsion.”

I have both. Let us examine.

— — — — 

A simple chronology: first there were philosophers; then came professors of philosophy; now we have the bulletpoint philosophers and those who love to live. First there were those who loved to examine life; then came those who loved to reflect on those who reflected on life; then came those who said screw it all, get on with living by the most simple and immediate means—“Stop Playing By The Rules.”

The shift, in other words, has been away from thought and towards action. It mimics the decline of the public intellectual and the rise of the “hustle”, the latter growing up in Silicon Valley among coders in garages and venture capitalists on Sand Hill Road. Hustling: from the world of gangs and live fast, die young, to the world of t-shirts, computers and “fail fast”. The “hustle” is a response to a world seen as too focused on thought; it is a backlash against a world too intellectual, the world of professors of philosophy who spend their lives reflecting upon others’ reflections upon life. Far better, the hustle imagines, to do, to act, and to “make a difference”. Change the world. In this conception, progress comes exclusively from action, not thought—if you’re talking you’re not walking, if you’re thinking you’re not winning.

The term “hack”, which has now entered daily language and the titles of numerous blogs I read, has clear origins. Urban Dictionary, ever-accurate, suggests “a clever solution to a tricky problem”. A coder in a garage gets stuck on a tricky problem in an algorithm, but sits up with it long enough, drinks enough Red Bull, and develops a clever solution. The next day his mother dies; but he knows that if he looks for it (or cogitates on it for a moment) there will be a clever solution somewhere, an “elegant” way of dealing with his feelings. LifeHacker matches the coder’s website Hacker News: there is no longer anything to separate life’s problems from those of web development. Two professors recently brought out a book based on their popular Stanford course: Designing Your Life: How to Build a Well-Lived, Joyful Life. Build it, as Jobs and Wozniak once built the Apple I; design it, as Facebook designs its icons.

Productivity cults sprang up to match the new hustle mentality with the technologies that Silicon Valley was creating. In 2001 we get the book Getting Things Done: The Art of Stress-Free Productivity, as if the markets’ failures a year before were simply a failure of productivity. David Allen’s book soon creates “GTD” cults of those committed to squeezing every bit of “value” out of their minutes and seconds (the author boasts of having had some 35 different jobs by age 35—a fact to be obscured at all costs anywhere other than in this brave new world).

A drive towards productivity was hardly unique to that era. But what made this something different, something more far-reaching, was how the idea of the “hustle” developed among precisely those people who were building technologies that the rest of the world would soon use. Consumer technologies were developed in their creators’ image—an image of productivity, efficiency and action. 

It is tempting to speak of big-brother-like powers and the forces of authority. But the deadening of the philosophical imagination is far more innocent than all that. Paul Starr’s important book The Creation of the Media: Political Origins of Modern Communications shows, very effectively, that “The constraints in the architecture of technical systems and social institutions are rarely so clear and overpowering as to compel a single design.” The technologies we end up with are not at all inevitable—they could have taken on multitudes of other forms. And yet “At times of decision—constitutive moments, if you will—ideas and culture come into play…”

Those directly involved in the creation of technologies are rarely aware of the ideas and culture that constrain their work, nor do they see how those constraints will extend, through their devices, apps and websites, to the minds of all who use them. Yet through these constitutive moments, Silicon Valley’s hackers have inadvertently shaped our present philosophical imagination.

Why say in 1000 words what you can say in 140 characters? Why keep a commonplace book when you can save everything into Evernote and search it in an instant? Why send letters when email, and nowadays Facebook Messenger, are so freakishly efficient? In a world that believes in action over thought, life over reflection, brevity is the order of the day. Eloquence is for professors condemned to reflect on others who once did.

As Facebook and Twitter became mainstream, so too did the concepts of life that undergird them. We never saw it happen, but in beginning to think in 140 characters the public took on the hustle mindset. In writing emails instead of letters, we too came to favour brevity over eloquence. In using an iPhone, we made productivity and efficiency our ends rather than our means.

— — — — 

There will be no women or men of letters in the age of action. The mundanity of email precludes their existence. 

The term always meant something more expansive than the actual letters that those men and women wrote. But their correspondence was symptomatic of the minds behind it. To read the letters themselves is to watch the development of a philosophy—the disagreements a mind had with itself, the examination of ideas from many angles, the contradiction of oneself through dialogue. The Waste Land came to us fully formed, and it was only years after Eliot’s death that we could see the fraught years behind the poem. For a finished work shows none of its process, none of the internal wrangling and grappling that genuine thought requires—“A line will take us hours maybe; / Yet if it does not seem a moment’s thought, / Our stitching and unstitching has been naught,” as Yeats put it. The paradox of all genuine philosophy: it must appear fully formed and complete, yet behind it must stand insecurity and hesitancy.

None of that today; no hesitancy. Philosophies on life’s greatest problems are formed in an instant, and no years of struggle or dialogue stand behind them. The technologies of craft and communication discourage such struggle, if not outright prevent it.

Email, Twitter, Facebook—technologies created by those who came of intellectual age (to give them the benefit of the doubt) during the time and in the place of the hustle, of productivity, of action. As a car drives on roads so our minds move along the tracks society lays. The car can turn any direction we wish—over there, over that grassy field, quick, round the corner, through the first gate, into our first world. But turn that direction it does not; on asphalt it stays. Our minds too are free to leave the tracks laid for us, but they do not. It does not occur to us to turn off, and even if it did, we wouldn’t know where to turn.

The three parts of the development of bulletpoint philosophy: 1. The development of communities and a culture that favour action over thought, productivity over reflection, hustle over cogitation. 2. The extension of that culture into consumer technologies through Starr’s “constitutive choices”. 3. The restriction of minds to those roads that technology lays. 

Write me a profound email and I will post you a letter. Profundity seems not to occur to one on a busy screen, sent from an “address” containing the @ of the internet and a corporate logo (I use Gmail). Insight seems difficult when one is interrupted, constantly, by the dings of incoming messages; or when the ‘note’ you are about to send will soon make a swooshing, whooshing noise from your computer’s speakers, as those messenger pigeons take flight. There is something childlike and innocent about these technologies, whether it be the colourful, playful letters of Google staring at you from the upper left-hand corner, or the conversion of brackets and colons into smiling yellow faces. Everywhere there are reminders that the ‘hacker’ who made this tool is the same person who last week published on Medium, “I Lost an Argument with a Vegan. Here’s what I Learned.”

There will be no collected letters published, no record of the growth of great minds. If emails are kept at all, their content will match the mundanity of their form. Instead we get 86 million answers to the meaning of life, each posted immediately and without reflection or correspondence.

Sometimes, it seems, everyone is a philosopher and no one a thinker. 

— — — — 

Perhaps sensing the debasement of philosophy in the public realm, professors of philosophy have turned inwards. They have tried to re-intellectualise a discipline that now seems so un-intellectual. It is an honest response, perhaps even a noble one. But of course they pulled the pendulum back slightly too firmly. They overcompensated.

Professors of philosophy killed philosophy, as Thoreau told us, but they now find themselves in the position of trying to resuscitate it. Yet as in politics, so in the academy: as bulletpoint philosophy took over the middle ground, professors of philosophy retreated ever further to the wings, back to a kind of “core base”. If the public’s philosophy was now too easy to understand, the professors’ certainly became less so. Now no one understands them.

One article, written upon the death of Derek Parfit, put it thus: “Parfit was an outstanding philosopher. However, few people outside academic philosophy could name one of his books.” Tellingly, the same article identifies the 1950s to the 1990s as the “golden age” of academic philosophy. I have dated the growth of bulletpoint philosophies to the late 1990s.

Academic philosophy is now more impenetrable than ever before. Parfit’s On What Matters was published in 2011, and readers are presented with two volumes of clearly rigorous thinking on… what? Moral philosophy, but what more can a lay-reader say? Academic philosophy has always been written for small circles in the thought that it would in its own way “trickle down”, through those educated at universities, into organisations and public debate. But when academic philosophers write more than ever for themselves, and when the public has shifted away from public intellectuals towards bulletpoint philosophy, those who can stand with a foot in both worlds, able to translate one for the other, are few. (Sontag, where are you?)

We’re now in the old high-brow, low-brow binary. Professors of philosophy don’t read bulletpoint philosophy, and dismiss all those who do, retreating into their own circles of self-satisfied work. Those who read bulletpoint philosophy don’t understand a word of what those professors write, and so cannot “lift” their intellectual sights. As with politics, so with philosophy: the area of intersection of views is now nowhere to be found. Left and right have never been further apart, never less able to reconcile differences, and philosophy has never before been wrenched to such extremes. This state of affairs is self-perpetuating. Oil and water do not mix.

I hold my iPhone close to my chest when reading bulletpoint philosophy because I do not want to be seen in public reading such stuff. It is a high-brow response to lower-brow work; a philosophical equivalent of Clement Greenberg being seen reading The New Yorker. I am blameworthy for this, I’m sure. Better, these days, to be like Sontag, to embrace all culture. But even she recanted that view. Culture must have some moral depth, she seemed to say in her later work.

— — — — 

I should be pleased by the simplification of philosophy, by its return to more direct intervention in people’s lives. The academicism of philosophy has frustrated me deeply—my college philosophy classes were notable only for how removed they were from anything resembling wisdom and life (I’ve not had a Cavell). I’ve been drawn to Stoicism for its directness, its sincerity in helping with the problems of life. Stoicism was (is) my youthful overcompensation for what I saw as the irrelevance of academic philosophy. Bulletpoint philosophy is similarly direct, and similarly earnest. (Seneca would write on Medium were he writing today.) So why do I resist it?

Partly it is having read enough philosophy to know the difference, and to know that bulletpoint philosophies have no claim to the name of philosophy at all. But again, that is just a high-brow response. The contradiction in it is that if the test of real philosophy is its helpfulness in living life, then bulletpoint philosophies can indeed claim that. And yet still I resist; still I look for some genuine reasons to justify my aversion. I shall hazard some:

  • Form, more than content, contains powerful lessons. And life is shown to be simple by the simplicity of the bulletpoint form. (The bulletpoint is the essence of simple form: it merely posits, while eschewing any regard for order or argumentative development.) We are therefore led astray, simplifying life when what we need most is to understand its complexity—to understand that we may not understand it all. 
  • The subtextual lesson we learn from bulletpoint philosophies is that there is a simple, external answer for all of our problems. That the sole difficulty is in finding the right answer; as if application did not matter.
  • Philosophy is turned into statements of certainty via bulletpoints. It comes here strangely close to science. In a weird way this is continuous, at the other extreme, with the pre-eminence of analytic philosophy (intended to produce rigour and a degree of certainty unknown to the continental tradition—to move philosophy closer to science, in other words). But philosophy’s necessity lies in answering all those human concerns that science can never answer. Science tells us how to do things, not what should be done. Our age is in dire need of the latter; our problem is too much of the former.
  • Value is placed on information over understanding. Find the bulletpoints, the logic says, and your problems will be solved. Philosophy of old knew that the challenge lay in understanding philosophy in terms of one’s own life. Its form, leading us along in prose and metaphor and ideas, aids the business of understanding. It thereby leads to real wisdom—wisdom being applied knowledge. Bulletpoint philosophy is readily understood at the level of its words, but this paradoxically hinders application.
  • There is no philosophical dialogue. We do not enter into the great debate. We simply consume tenuous self-help, as we consume the news, needing our next fix the following morning.
  • Complexity is seen as deficiency. Simplicity is the order of the day. But some ideas are complex, and can only be expressed as such. 
  • We are given no sense of philosophical categories or oppositions. We are simply given a worldview without any understanding of what else might exist, or what the counterarguments are. 
  • Genuine philosophy is often complex because human lives are so complex. We cannot solve the problems of life like we can solve a blocked drain, by searching Google for a local plumber.

— — — — 

Bulletpoint philosophy proves this, if little else: that life’s problems are keenly felt. In its proliferation, questions of the soul have been directly asked, and directly answered. 

Throughout history, political turbulence has often been met with a turning inwards, and the inwards-looking world of philosophy is both a challenge and an answer to dictators and fascists. The period of the Warring States in China gave us Confucianism and Mohism, Legalism and Daoism. French existentialism grew out of the carnage of the Second World War. Philosophy says to tyrants: you can challenge my possessions and my material existence, but I have another life which you can neither see nor understand nor diminish.

What I’m getting at is that philosophy laughs at Trump. Bulletpoint philosophy, however, is understood by him. 

Suddenly, cynicism about sincerity seems outdated. Postmodernism mocked questions of the meaning of life, but Trump mocks the postmodern. If it was a politics that took itself too seriously that led to the ironic mode, it is a politics that embarrasses itself that draws us back to earnestness.

Academic philosophy takes itself seriously to the point of impenetrability. Bulletpoint philosophy sees itself with an ironic expression, and thinks that more than 1000 words on the meaning of life would be to risk sincerity. An age as fraught as ours will turn back to philosophy as a kind of spirituality. Let us hope, then, that the philosophy it encounters is a genuine philosophy, and not one built on the flawed poles of equations and bulletpoints.

What can I say? Alain de Botton seems to have it right after all.


This essay was completed in March 2017.

The Shallows by Nicholas Carr: A Summary

Note: This is a book review of Nicholas Carr’s The Shallows that I originally published in September of 2011 on this blog. Republishing after being asked by someone for the link. 

I’ve just finished reading The Shallows, a book by Nicholas Carr. It’s a reasonably technical book that goes in-depth into the workings of our brains to look at how the Internet is affecting the way we “think, read, and remember”.

Carr starts off by explaining how he’s been having trouble focussing recently. He says that he sits down to read a book but finds himself unable to get through a page without looking up, and his mind often wanders off on tangents. He also has trouble focussing on other tasks, and can’t remember things as well as he used to. I have the same problems, and Carr reckons most people who use the Internet these days suffer the same.

From there, he goes on to describe in detail why it is that we’re finding ourselves so distracted nowadays. In essence, his thesis is that new media change the way our brains work, and that this change has many side effects. One side effect of the Internet is that we find it harder to focus.

To show this, Carr describes how Nietzsche’s writing style changed when he began using a typewriter. He started using shorter, choppier sentences, and this was a direct result of simply changing the medium he used to write.

When the wristwatch was invented, people found themselves more efficient but also a lot more tired, as they were now living by rhythms that other people had set for them, instead of by their natural body clocks.

All these technological changes, Carr argues, have side effects that mostly affect our deep thinking. Here are a few examples.

Carr comes to the conclusion that there are generally two types of knowledge: deep domain expertise, and knowing where to find relevant information. While the Internet gives us access to all relevant information, it reduces our deep domain expertise as we no longer need to store as much information in our brains.

The Windows operating system was the birth of true multitasking. Before this, people did one thing at a time on computers. They would word process, or they would email. There was no capacity to do both at the same time. Therefore there were no distractions to what people were working on. But with Windows, people suddenly had distractions, as different applications would run at the same time. People thought this would lead to an increase in productivity, but in many ways productivity has decreased because people are now no longer as focussed on what they are working on.

The part of The Shallows that got me thinking most was the very last chapter. Carr describes how new technologies make us lose part of ourselves. Clocks made us lose our natural rhythm. Maps made us lose our spatial recognition capacities. He gives many more examples. But the Internet, unlike most of these other technologies, is perhaps making us lose touch with the real world. Our brains jump around constantly as if we are browsing websites. We are constantly pressured to be looking at our phones and computers and replying to messages. The end result is that we live more and more inside the Internet, and when we need to leave it, we can’t work as well as we previously could.

It’s not like we can change the course of technology and reverse these negative effects. But it’s worth thinking about how to mitigate them, and to that end, Carr’s The Shallows is an excellent place to start.

A Future Without Personal History

Note: In 2011 I wrote this article for ReadWrite, a widely-read blog covering the technology industry, on what would happen if we didn’t make an effort to store our communication history. I lamented how older generations could look back through letters, physical records of their lives with one another, and yet we would seemingly be left with nothing. The article inspired an impassioned response from journalist Paul Carr at the blog TechCrunch and a lively online debate. I ultimately ended up founding a company based on the premise I wrote about.

I thought I’d re-share the article as I rediscovered it. I was sixteen at the time—things have certainly changed, and you’ll have to excuse the writing. And, irony of ironies, I now rather enjoy writing letters.


Remember those pieces of paper with handwritten words on them that you used to post to people? “Letters” I think they’re called. To be honest though, I wouldn’t have a clue, as I’ve neither sent nor received one in my 16-year-old life.

I’m sure the majority of readers here have at least sent a personal letter to friends or family in their lifetime. However, the same cannot be said about my generation. I’ve sent tens of thousands of emails, Facebook messages, SMSs, and IMs – but never a single letter.

More than solely being a form of communication, letters are a very effective historical record. Think about letters sent home to families from soldiers on the battlefields of both world wars. Letters were kept because they had a perceived value – it took time and effort to send a letter, and people therefore viewed them as much more valuable.

My parents still have letters that they received more than 30 years ago, and when they read them now they say that they detail entire relationships and friendships. They have vast amounts of information about their own history stored inside the letters that they sent and received. It goes even further than that. My grandmother still has letters she received from her grandmother. If it weren’t for those letters, all that information about my own family history would have been lost, or confined to memory (which, as my parents are discovering, fails us all eventually).

And yet, I can’t tell anyone what I was discussing with someone a month ago. That’s a testament to the digital age of which I, and everyone in my generation, am a native. I find myself feeling incredibly guilty that my parents and grandparents went to so much effort to ensure that our family history was kept, and here I am frequently losing information about my life.

The frequency and brevity of messages sent today, combined with the numerous mediums used, mean that this personal information now has a much lower perceived value: Your email storage fills up – you delete all your messages. You get a new mobile phone – all of your SMSs are lost.

Some people are already worrying about what may happen if we continue to throw away our information. For example, the U.S. Library of Congress announced in April last year that it would be archiving every Twitter message ever sent. Sure it’s a phenomenal undertaking, but in no way is it enough. Think about all the different mediums of communication you use.

For example, today alone I have communicated with people via SMS, email, Facebook messages, Facebook chat, Whatsapp Messenger, Skype chat, and Twitter. Out of those, only my public Twitter updates are being stored. There are other efforts like the Library of Congress’ undertaking, but mass archiving won’t help us store our individual histories in a way that we can access.

What happens if, in three years, I want to go back through all my communications with my girlfriend? I may not be using an iPhone in three years, so all of my messages on Whatsapp Messenger will be gone. I definitely won’t be using the same mobile phone, so all of my SMSs will be gone. My Gmail storage will have filled up, so I won’t have any of our emails any more. I doubt I’ll even still be using Facebook – there’s all of that communication gone.

All of this information that is so important and so relevant to me personally is just disappearing, and I won’t be able to track the relationships and friendships that I have had.

Personally, I am now backing up my computer daily, and copying and pasting communication from all the different formats into documents stored both on a hard drive and in the cloud. While it’s a start, it’s an absolutely horrific task, and it doesn’t completely work (I’m not going to be transcribing my SMSs into a document).
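For what it’s worth, part of that chore can be scripted. Here’s a minimal sketch of the consolidation step: gather whichever exported message files exist into one folder dated today, ready to copy to a hard drive or the cloud. All file names and folder paths here are hypothetical, purely for illustration.

```python
import shutil
from datetime import date
from pathlib import Path

def archive_exports(exports, archive_root):
    """Copy whichever exported message files exist into a folder dated today."""
    dest = Path(archive_root) / date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for source in map(Path, exports):
        if source.exists():  # skip formats that weren't exported this time
            shutil.copy2(source, dest / source.name)
            copied.append(source.name)
    return dest, copied

# e.g. archive_exports(["Downloads/facebook_messages.json",
#                       "Downloads/gmail_export.mbox"], "MessageArchive")
```

It doesn’t solve the real problem (getting the exports out of each service in the first place), but it at least makes the daily copy-and-date routine one command instead of twenty.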

The abundance of technology is severely devaluing information. Do we go on ignoring this fact, and losing the details of our lives? Or do we do the hard work, and attempt to effectively store our communications? I know that I’ll be putting in the hard work – at least until the magicians in Silicon Valley come up with a better solution.

The Future of Social Networks

Note: I wrote this article in 2011, looking at how social networks could more accurately mimic real life societies. It ended up being the single most-read and most-commented-on piece on my blog. I was sixteen at the time, so excuse the writing. Interesting to see both how the numbers have changed since 2011 (600 million users! One and a half years!), as well as how Facebook has and has not moved closer to the vision I outlined.


So Facebook has 600 million users. Many people are saying that Facebook will now be here for ever, and the entire planet will eventually be on Facebook. The same people are saying it will grow to be the biggest company in history, and that it’ll make a killing for investors. I disagree. This article explains why I disagree, and discusses what social networks should look like to succeed.

Social networks are still in their early days. I don’t think they’ve really matured in any way, because they are still built on false assumptions made by the first few mainstream social networks. The system of “friending” is completely broken, and yet many people don’t realize it because they don’t stop to ask why it is that way.

Facebook says that all my friends and contacts are of equal importance to me. It knows this isn’t true, but there is no way for me to distinguish between friends I am truly close with and contacts I met at a conference and felt obliged to accept on Facebook. In real life, we rank our connections in order of how important they are to us and how close we are with them. But on Facebook, this system has gone out the window because that functionality is not built into the social network.

But there is more about Facebook that is broken. Facebook is a “one-size-fits-all” social network. In other words, it thinks that everyone will find use in Facebook as long as they are on it with their friends. It believes that the higher the number of users it has, the more likely it is that people will keep joining. But this view goes against societal laws.

We live in societies in real life because we surround ourselves with people who share similar values, beliefs, and interests. Sure, the fact that I support one political party over another says that I have slightly different values to the person next to me, but fundamentally our values and beliefs are very similar. And living in a society allows me to know that anybody I meet will have fundamentally the same mindset as me. People who share similar religions live in the same societies, because they understand each other. This means that I can meet new people, and be social with a group outside of my existing close friends, with the knowledge that anybody I meet will be essentially similar to me.

Think about the term social network for a moment. When we hear it, we think of online social networks, like Facebook, with a system of “friending” where we communicate only with our existing contacts. But social network is a broad term. It really describes how we relate to our contacts in real life. We each have our own social network offline, and you know what? It works. It’s called our society, and it has been around for millennia.

My question is: why aren’t online social networks built like physical societies?

Imagine this model as three circles, one inside the other. The inner circle holds your core group of friends and family – you share everything with them. There may be only 25 people in there, but these are the people you would call to tell that something important has just happened. They mean a lot to you. You connect with these people by “friending” them – i.e. mutual designation.

The next circle, several times larger than the inner one, is made up of your connections. These are the people you’ve met at conferences or know from school – you’re not close with them, but you’d talk to them if you saw them on the street. To connect with these people, you simply designate them as a connection. It’s more like “following” them, except that they can see you have designated them and can designate you back.

The third and final circle is outer society: people you don’t know, but whom you may meet someday. You cross paths with these people every day, but haven’t yet taken the time to stop and talk to them. This final circle is huge – many, many times bigger than the previous two – and you have no direct link to anyone in it unless you choose to create one.

What this model allows is for us to differentiate between true “friends” and mere “connections”. With a clear distinction between the two, you know exactly who will see what you share. It gives you the ability to share more with those you really care about without annoying your connections – and, likewise, to share things with connections that you wouldn’t share with your family. And what about outer society? Well, you can interact with it as much or as little as you want.
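To make the idea concrete, the three-circle model could be sketched as a small data structure. This is a minimal, hypothetical sketch – the names (User, friend, connect, audience) are illustrative, not taken from any real network; friending is mutual, connecting is one-directional, and outer society is simply everyone not in either set.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    friends: set = field(default_factory=set)      # inner circle: mutual designation
    connections: set = field(default_factory=set)  # middle circle: one-way, visible

    def friend(self, other: "User") -> None:
        """Friending is mutual: both users must hold the link."""
        self.friends.add(other.name)
        other.friends.add(self.name)

    def connect(self, other: "User") -> None:
        """Connecting is like 'following': one-directional, but the
        other party can see it and may connect back."""
        self.connections.add(other.name)

    def audience(self, scope: str) -> set:
        """Who sees a post: the inner circle only, or both circles.
        Outer society (everyone else) sees nothing by default."""
        if scope == "friends":
            return set(self.friends)
        return self.friends | self.connections
```

For example, if Alice friends Bob and merely connects to Carol, a post scoped to “friends” reaches only Bob, while a broader post reaches both – the distinction the model is meant to provide.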

The beauty of this model is that it allows us to choose how we want to use our social network. If we want to use it like Facebook, we can do that – the choice is entirely up to us.

But there will not be just one social network that looks like this. There will be tens, if not hundreds, of them – each with millions of users. The social network you belong to will be a representation of who you are as a person. It will signify your values, beliefs, and interests.

When will this shift in the model of social networks occur? I believe it will start in a year and a half, and reach the mainstream in about three years. That allows time for these new social networks to be built and perfected.

In any case, the battle of the social networks is far from over. Facebook hasn’t won, and there are plenty of genius programmers at colleges around the world. Good luck.

The Liberal Arts and Two Visions of the Future

There are two separate and entirely incompatible strands of thought about liberal education passing through public discourse at present.

The first argues that liberal education is a solution to the increasing mechanisation of the workforce, an antidote to feelings of alienation and loss of meaning, and the way to produce broad-minded, deep-hearted leaders. As Asia invests in the liberal arts, and as a new public narrative along these lines becomes more common in the United States, the liberal arts appear to be experiencing a resurgence.

The second narrative argues that the liberal arts, and more specifically the humanities at their centre, are worthless in a world where value is created digitally. This view is summarised succinctly by Silicon Valley venture capitalist Vinod Khosla, who writes in one of his polemics that “Little of the material taught in Liberal Arts programs today is relevant to the future.” On this view, science and technology are the paths to advancing humanity and improving the world.

The inability of these two strands of thought to connect or engage with one another points to the central issue: they each have incompatible visions of the future.

One imagines a world where morals, character, public service and living well are the purpose of education. The other imagines a world where humanity is advanced by technology, and education must focus on preparing the minds necessary for this advancement.

Recognising which vision for the future we hold dear is the start of knowing what education means to us individually. And by acknowledging that those who disagree with us about the value of liberal education do so not out of ignorance but from a different vision of a noble future, perhaps for the first time these narratives may engage with one another.