When Work Didn’t Follow You Home

In a recent article written for Slate, journalist Dan Kois recounts the shock his younger coworkers expressed when they discovered that he had, earlier in his career, earned a master’s degree while working a full-time job. “It was easy,” he explained:

“I worked at a literary agency during the day, I got off work at 5 p.m., and I studied at night. The key was that this was just after the turn of the millennium. ‘But what would you do when you had work emails?’ these coworkers asked. ‘I didn’t get work emails,’ I said. ‘I barely had the internet in my apartment.'”

In his article, Kois goes on to interview other members of Generation X about their lives in the early 2000s, before the arrival of smartphones or even widely available internet. They shared tales of coming home and just watching whatever show happened to be on TV (maybe “Seventh Heaven,” or “Law and Order”). They also talked about going to the movies on a random weekday evening because they had nothing else to do, or just heading to a bar where they hoped to run into friends, and often would.

The threads that kept catching my attention, however, were about work communication. “The very idea that, once work hours were over, no one could get hold of you—via email, text, Slack, whatever—is completely alien to contemporary young people,” Kois explained. But this reality made a huge difference when it came to the perception of busyness and exhaustion. When work was done at work, and there was no chance of continuing your labors at home, your job didn’t seem nearly as all-consuming or onerous.

There’s a lot about early 2000s culture I’m not eager to excavate, but this idea of the constrained workday certainly seems worthy of nostalgia.

The post When Work Didn’t Follow You Home appeared first on Cal Newport.

On the Slow Productivity of John Wick

I found myself recently, as one does, watching the mini-documentary featurettes included on the DVD for the popular 2014 Keanu Reeves movie, John Wick — an enjoyably self-aware neon noir revenge-o-matic, filmed cinematically on anamorphic lenses.

At the core of John Wick’s success are the action sequences. The movie’s director, Chad Stahelski, is a former stuntman who played Reeves’s double in The Matrix trilogy and subsequently made a name for himself as a second unit director specializing in filming fights. When Reeves asked Stahelski to helm Wick, he had exactly this experience in mind. Stahelski rose to the challenge, making the ambitious choice to feature a visually-arresting blend of judo, jiu-jitsu, and tactical 3-gun shooting. In contrast to the hand-held, chaotic, quick-cutting style that defines the Bourne and Taken franchises, Stahelski decided to capture his sequences in long takes that emphasized the balletic precision of the fighting.

The problem with this plan, of course, is that it required Keanu Reeves to become sufficiently good at judo, jiu-jitsu, and tactical 3-gun shooting so as not to look clumsy for Stahelski’s stable camera. Reeves was game. According to the featurette I watched, to prepare for production, he trained eight hours a day, four months in a row. The effort paid off. The action set pieces in the movie were show-stopping, and after initially struggling to find a distributor, the film, made on a modest budget, went on to earn $86 million, kicking off a franchise that has since brought in hundreds of millions more.

What struck me as I watched this behind-the-scenes feature is how differently creatives who work in the arts think about productivity as compared to creatives who work in office jobs. For Keanu Reeves, it was obvious that the most productive path was to focus all of his attention on a single goal: becoming really good at Stahelski’s innovative brand of gun fu. Doing this, and basically only this, month after month, materialized hundreds of millions of dollars of profit out of the entertainment ether.

In office jobs, by contrast, productivity remains rooted in notions of busyness and multi-faceted activity. The most productive knowledge workers are those who stay on top of their inboxes and somehow juggle the dozens of obligations, from small tasks to major projects, hurled in their direction every week. Movie-making is of course different from, say, being a marketing executive, or professor, or project manager, but creating things that are too good to be ignored, regardless of the setting, is an activity that almost without exception requires undivided attention. Are we so sure that the definition of “productive” that dominates knowledge work really is the most profitable use of our talents?

John Wick may be shallow entertainment, but the story of its success highlights some deep lessons about what the rest of us might be missing in our pursuit of a job well done.

The post On the Slow Productivity of John Wick appeared first on Cal Newport.

The End of Screens?

Image by Sightful

Believe it or not, one of the most important technology announcements of the past few months had nothing to do with artificial intelligence. While critics and boosters continue to stir and fret over the latest capabilities of ChatGPT, a largely unknown 60-person start-up, based out of Tel Aviv, quietly began demoing a product that might foretell an equally impactful economic disruption.

The company is named Sightful and their new offering is Spacetop: “the world’s first augmented reality laptop.” Spacetop consists of a standard computer keyboard tethered to a pair of goggles, styled like an unusually chunky pair of sport sunglasses. When you put on the goggles, the Spacetop technology inserts multiple large virtual computer screens into your visual field, floating above the keyboard as if you were using a computer connected to large external monitors.

As opposed to virtual reality technology, which places you into an entirely artificial setting, Spacetop is an example of augmented reality (AR), which places virtual elements into the real world. The goggles are transparent: when you put them on at your table in Starbucks you still see the coffee shop all around you. The difference is that now there are also virtual computer screens floating above your macchiato.

To be clear, I don’t believe that this specific product, which is just now entering a limited, 1000-person beta testing phase, will imminently upend the technology industry. The goggles are still too big and unwieldy (more Google Glass than Ray Ban), and the field of vision for their virtual projections remains too limited to fully support the illusion of screens that exist in real space.

But I increasingly believe that Sightful may have stumbled into the right strategy for finally pushing AR into the mainstream. Unlike Magic Leap, the over-hyped Google-backed start-up that burned through $4 billion trying to develop a general-purpose AR device that could do all things for all people, Sightful has remained much more focused with their initial product.

Spacetop solves a narrow problem that’s perfectly suited for AR: limited screen space for mobile computing. Their initial audience will likely be power users who desperately crave monitor real estate. (As I learned researching a 2021 New Yorker article about working in virtual reality, computer programmers, in particular, will happily embrace even the wonkiest of cutting-edge technologies if it allows them to use more windows simultaneously.)

This narrowness simplifies many of the technical issues that afflicted the general-purpose AR technologies developed by companies like Magic Leap. Projecting virtual screens is much easier than trying to render arbitrary 3D objects in a real space, as you don’t have to worry about matching the ambient lighting. Furthermore, the keyboard base provides a familiar user interface and vastly simplifies the process of tracking head movements.

In other words, this is a problem that AR has a chance to convincingly solve. And once this door is open, and AR emerges as a legitimate, profitable consumer technology, significant disruption might soon follow.

Imagine the following scenario:

  • In the third generation of their technology, Sightful achieves a small enough form-factor and large enough field of vision for their AR goggles to appeal to the much broader market segment of business users looking for more screen space when working away from the office.
  • Seeing the potential, Apple invests several hundred million dollars to develop the iGlass: a pair of fashion-forward AR goggles, connected wirelessly to an elegant, foldable base on which you can touch or type, marketed as a replacement for the iPad and MacBook that can fit in your pocket while still providing you a screen bigger than their biggest studio monitors.
  • Spooked, Samsung scrambles to release a high-end AR television experience that allows you to enjoy a virtual 200-inch television in any room.
  • Apple smells blood and adds television functionality as a software update to iGlass. Soon Samsung’s market drastically shrinks. This sets off the first of multiple cataclysmic consolidations in the consumer electronics sector.
  • Within a decade, we find ourselves in a world largely devoid of screens. Computation unfolds in the cloud and is presented to us as digital projections on thin plastic optical wave-guides positioned inches from our eyes.

I don’t, at this point, mean this prognostication to be either optimistic or dystopian. I want only to emphasize that in a moment in which we’re all so enthralled with the question of whether or not autoregressive token predictors might take our jobs, there are some other major technological fault lines that are beginning to rumble and might very well be close to radically shifting.

#####

In other news:

  • Speaking of a potential AR revolution, I talked about Apple’s upcoming splashy entrance into this space during the final segment of Episode 249 of my podcast, Deep Questions.
  • My friend Adam Alter, who I quoted extensively in Digital Minimalism, has a fantastic new book out titled Anatomy of a Breakthrough. Here’s my blurb from the back cover: “A deeply researched and compelling guide to breaking through the inevitable obstacles on the path to meaningful accomplishment.” Check it out!

The post The End of Screens? appeared first on Cal Newport.

On Kids and Smartphones

Not long ago, my kids’ school asked me to give a talk to middle school students and their parents about smartphones. I’ve written extensively on the intersection of technology and society in both my books and New Yorker articles, but the specific issue of young people and phones is one I’ve only tackled on a small number of occasions (e.g., here and here). This invited lecture therefore provided me a great opportunity to bring myself up to speed on the research relevant to this topic.

I was fascinated by what I discovered.

In my talk, I ended up not only summarizing the current state-of-the-art thinking about kids and phones, but also diving into the history of this literature, including how it got started, evolved, adjusted to criticism, and, over the last handful of years, ultimately coalesced around a rough consensus.

Assuming that other people might find this story interesting, I recorded a version of this talk for Episode 246 of my podcast, Deep Questions. Earlier today, I also released it as a standalone video. If you’re concerned about, or even just interested in, what researchers currently believe to be true about the dangers involved in giving a phone to a kid before they’re ready, I humbly suggest watching my presentation.

In the meantime, I thought it might be useful to summarize a few of the more interesting observations that I uncovered:

  • Concern that young people were becoming more anxious, and that smartphones might be playing a role, began to bubble up among mental health professionals and educators starting around 2012. It was, as much as anything else, Jean Twenge’s 2017 cover story for The Atlantic, titled “Have Smartphones Destroyed a Generation?”, that subsequently shoved this concern into the broader cultural conversation.
  • Between 2017 and 2020, a period I call The Data Wars, there were many back-and-forth fights in the research literature, in which harms would be identified, followed by critics pushing back and arguing that the harms were exaggerated, followed then by responses to these critiques. This was normal and healthy: exactly the empirical thrust and parry you want to see in the early stages of an emerging scientific hypothesis.
  • Over the last few years, a rough consensus has emerged that there really are significant harms in giving young people unrestricted access to the internet through smartphones. This is particularly true for pre-pubescent girls. This consensus arose in part because the main critiques raised during The Data Wars were resoundingly answered, and because, more recently, multiple independent threads of inquiry (including natural experiments, randomized controlled trials, and self-report data) all pointed toward the same indications of harm.
  • The research community concerned about these issues is converging on the idea that the safe age to give a kid unrestricted access to a smartphone is 16. (The Surgeon General recently suggested something similar.)
  • You might guess that the middle school students who attended my talk balked at this conclusion, but the reality is more complicated. They didn’t fully embrace my presentation, but they didn’t reject it either. Many professed to recognize the harms of unrestricted internet access at their age and to be wary of it. (My oldest son, by contrast, who is 10, is decidedly not happy with me for spreading these vile lies at his school.)

This is clearly a fascinating and complicated topic that seems to be rapidly evolving. If you’re struggling with these developments, I hope you find my talk somewhat useful. I’m convinced that our culture will eventually adapt to these issues. Ten years from now, there won’t be much debate about what’s appropriate when it comes to kids and these technologies. Until then, however, we’re all sort of on our own, so the more we know, the better off we’ll be.

The post On Kids and Smartphones appeared first on Cal Newport.

Danielle Steel and the Tragic Appeal of Overwork

Based on a tip from a reader, I recently tumbled down an esoteric rabbit hole centered on the writing habits of the novelist Danielle Steel. Even if you don’t read Steel, you’ve almost certainly heard of her work. One of the best-selling authors of all time, Steel has written more than 190 books that have cumulatively sold over 800 million copies. She publishes multiple titles per year, often juggling up to five projects simultaneously. Unlike James Patterson, however, who also pushes out multiple books per year, Steel writes every word of every manuscript by herself.

How does she pull this off? She works all the time. According to a 2019 Glamour profile, Steel starts writing at 8:30 am and will continue all day and into the night. It’s not unusual for her to spend 20 to 22 hours at her desk. She eats one piece of toast for breakfast and nibbles on bittersweet chocolate bars for lunch. A sign in her office reads: “There are no miracles. There is only discipline.”

These details fascinate me. Steel is phenomenally successful, but her story reads like a Greek tragedy. She could, of course, decide to only write a single book per year, and still be a fabulously bestselling author, while also, you know, sleeping. Indeed, her cultural impact might even increase if she slowed down, as this extra breathing room might allow her to more carefully apply her abundant talent.

But there’s a primal action-reward feedback loop embedded into the experience of disciplined effort leading to success. Once you experience its pleasures it’s natural to crave more. For Steel, this dynamic seems to have spiraled out of control. Like King Midas, lost in his gilded loneliness, Steel cannot leave the typewriter. She earned everything she hoped for, but in the process she lost the ability to step away and enjoy it.

I think this dynamic, to one degree or another, impacts anyone who has been fortunate enough to experience some success in their field. Doing important work matters and sometimes this requires sacrifices. But there’s also a deep part of our humanity that responds to these successes — and the positive feedback they generate — by pushing us to seek this high at ever-increasing frequencies.

One of the keys to cultivating a deep life seems to be figuring out how to ride this razor’s edge; to avoid the easy cynicism of dismissing effort altogether, while also avoiding Steel’s 20-hour days. This is an incredibly hard challenge, yet it’s one that receives limited attention and generates almost no formal instruction. I don’t have a simple solution, but I thought it was worth emphasizing. For a notable subset of talented individuals, burnout is less about exploitation by others than about their uneasy dialogue with themselves.

The post Danielle Steel and the Tragic Appeal of Overwork appeared first on Cal Newport.

My Thoughts on ChatGPT

In recent months, I’ve received quite a few emails from readers expressing concerns about ChatGPT. I remained quiet on this topic, however, as I was writing a big New Yorker piece on this technology and didn’t want to scoop my own work. Earlier today, my article was finally published, so now I’m free to share my thoughts.

If you’ve been following the online discussion about these new tools you might have noticed that the rhetoric about their impact has been intensifying. What started as bemused wonder about ChatGPT’s clever answers to esoteric questions moved to fears about how it could be used to cheat on tests or eliminate jobs before finally landing on calls, in the pages of the New York Times, for world leaders to “respond to this moment at the level of challenge it presents,” buying us time to “learn to master AI before it masters us.”

The motivating premise of my New Yorker article is the belief that this cycle of increasing concern is being fueled, in part, by a lack of deep understanding of how this latest generation of chatbots actually operates. As I write:

“Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we’re dealing with. We send messages into the electronic void, and receive surprising replies. But what, exactly, is writing back?”

I then spend several thousand words trying to detail the key ideas that explain how the large language models that drive tools like ChatGPT really function. I’m not, of course, going to replicate all of that exposition here, but I do want to briefly summarize two relevant conclusions:

  • ChatGPT is almost certainly not going to take your job. Once you understand how it works, it becomes clear that ChatGPT’s functionality is crudely reducible to the following: it can write grammatically-correct text about an arbitrary combination of known subjects in an arbitrary combination of known styles, where “known” means it encountered it sufficiently many times in its training data. This ability can produce impressive chat transcripts that spread virally on Twitter, but it’s not useful enough to disrupt most existing jobs. The bulk of the writing that knowledge workers actually perform tends to involve bespoke information about their specific organization and field. ChatGPT can write a funny poem about a peanut butter sandwich, but it doesn’t know how to write an effective email to the Dean’s office at my university with a subtle question about our hiring policies.
  • ChatGPT is absolutely not self-aware, conscious, or alive in any reasonable definition of these terms. The large language model that drives ChatGPT is static. Once it’s trained, it does not change; it’s a collection of simply-structured (though massive in size) feed-forward neural networks that do nothing but take in text as input and spit out new words as output. It has no malleable state, no updating sense of self, no incentives, no memory. It’s possible that we might one day create a self-aware AI (keep an eye on this guy), but if such an intelligence does arise, it will not be in the form of a large language model. (For the technically inclined, a toy sketch of this stateless loop follows below.)
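
For the technically inclined, here is a minimal sketch in Python of that stateless, autoregressive loop. To be clear, this is my own illustration, not OpenAI’s actual code: the trained model acts as a frozen function from a transcript of words to next-word probabilities, and the only thing that changes between steps is the transcript itself.

```python
import random

def frozen_model(transcript):
    # Stand-in for a trained feed-forward network. Its "weights" are
    # fixed: nothing about the model changes, no matter what it reads.
    # It maps the transcript so far to next-word probabilities
    # (uniform here, purely for illustration).
    vocab = ["deep", "work", "matters", "a", "lot", "<end>"]
    return {word: 1.0 / len(vocab) for word in vocab}

def generate(prompt, max_new_tokens=10):
    transcript = list(prompt)
    for _ in range(max_new_tokens):
        probs = frozen_model(transcript)  # text goes in...
        next_word = random.choices(list(probs),
                                   weights=list(probs.values()))[0]  # ...one new word comes out
        if next_word == "<end>":
            break
        transcript.append(next_word)  # the only "memory" is the growing transcript
    return " ".join(transcript)

print(generate(["the", "key", "to"]))
```

Nothing outside the transcript persists between steps, which is why a system shaped like this cannot want things, remember you, or update its sense of self.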

I’m sure that I will have more thoughts to share on AI going forward. In the meantime, I recommend that you check out my article, if you’re able. For now, however, I’ll leave you with some concluding thoughts from my essay.

“It’s hard to predict exactly how these large language models will end up integrated into our lives going forward, but we can be assured that they’re incapable of hatching diabolical plans, and are unlikely to undermine our economy,” I wrote. “ChatGPT is amazing, but in the final accounting it’s clear that what’s been unleashed is more automaton than golem.”

The post My Thoughts on ChatGPT appeared first on Cal Newport.

On Taylor Koekkoek’s Defiant Disconnection

An article appearing last month in the Los Angeles Times book section opens with a nondescript picture of a young man in a Hawaiian shirt standing in front of a brick wall. The caption is arresting: “Taylor Koekkoek is one of the best short-story writers of his (young) generation. So why haven’t you heard of him?”

On March 21st, Koekkoek (pronounced “cook-cook”) published his debut short story collection, Thrillville, USA. Those who have read it seem to love it. The Paris Review called it a “raw and remarkable debut story collection.” The author of the LA Times piece braved a blizzard in a rental car just for the chance to interview Koekkoek at his Oregon house. And yet, the book has so far escaped wide notice: at the time of this writing, its Amazon rank is around 175,000.

The LA Times provides some insight into this state of affairs:

“A Google search reveals very little about the writer: a few published stories, no social media trail, author bios at a handful of universities that feature the same photo of an amiable-looking young white man in a Hawaiian shirt. If one were to make up an identity for a fictitious writer, the results would resemble something like the sum total of Koekkoek’s online experience.” [emphasis mine]

It’s possible that Koekkoek will go on to make the standard moves for someone his age: engaging in social media, creating waves online, brashly carving out an audience. (Indeed, since his book came out, he seems to have started an Instagram account that currently features three posts.) But there’s a part of me that hopes he resists this well-worn path; that he continues to let his soulful words speak for themselves, and that, ultimately, the sheer quality of what he’s doing wins him grand recognition.

This would be a nice counterpoint to our current moment of instinctive self-promotion. A reminder for the rest of us, nervous about slipping into digital oblivion, that what ultimately matters is the fundamental value of what we produce. Everything else is distraction.

In other news…

  • My apologies for my recent radio silence on this newsletter. In a coincidence of scheduling, of the type that happens now and again, I had an academic, magazine, and book-related deadline all fall into the same three-week period, so something had to give. I should be back to a more normal pace of posting now.
  • I suppose I should mention that, a few weeks ago, I was profiled by the Financial Times Weekend Magazine. Believe it or not, I was the cover story (!?). You can read it here.

The post On Taylor Koekkoek’s Defiant Disconnection appeared first on Cal Newport.

Guillermo del Toro’s Inspiration Machine

When the Academy Award-winning director Guillermo del Toro was a boy growing up in Guadalajara, his mother bought him a Victorian-style writing desk. “I kept my comic books in the drawers, my books and horror action figures on the shelves, and my writing and drawing stuff on the desk,” Del Toro recalled in a 2016 profile. “I guess that was the first, smallest version of my collection.”

As the director began to find success as an adult with his beautifully imagined, macabre fantasies, like Hellboy, Pan’s Labyrinth and Nightmare Alley, he was able to indulge his collecting instinct more seriously, amassing “a vast physical collection of strange and wonderful memorabilia.” Eventually, Del Toro’s objects became too much to manage.

As he explained in an NPR interview:

“We were living in a three-bedroom house and I magically had occupied four spaces. So it came to a point where the collection was much bigger than the family life. I was hanging up a picture, a really creepy painting by Richard Corben. My wife says, ‘That’s too close to the kitchen, the kids are gonna be freaked out.'”

So Del Toro took the natural next step: he bought a second house in the same neighborhood. His plan was to use the new residence to organize and store his growing collection and provide a quiet place for him to work. As an homage to Charles Dickens, he called it Bleak House.

By 2016, Bleak House contained over 10,000 items, including artwork, sculptures, artifacts and movies. It also featured thirteen different reference libraries. Housed in a room dedicated to a haunted mansion theme, for example, are Del Toro’s books on mythology, folklore, and fairy tales. The screening room boasts over 7,000 DVDs. One space includes a simulated rain storm that pours outside a fake window. This latter location is one of Del Toro’s favorite places to write.

What interests me about this story is less its eccentricity than its pragmatism.  As Del Toro explained in a video tour of the house, he was inspired by the original research library built at Disney Studios, and in particular, its philosophy that “when you create a group of extraordinary artists, you should definitely feed their imagination with all sorts of images.”

Del Toro designed Bleak House to fuel the creativity on which his career depends. “It’s here to try to provoke a sort of a shock to the system,” he said, “and aid in circulation of the lifeblood of imagination, which is curiosity.”

Truly deep work — the type that redefines genres — is truly hard. In such efforts, our brain needs all the help it can get.

#####

In other news: in the most recent episode of my podcast, Deep Questions, I tackled thirteen questions in a row, including one on developing discipline and another on planning projects with unpredictable time demands. Are you listening to Deep Questions yet? If not, you should be!

The post Guillermo del Toro’s Inspiration Machine appeared first on Cal Newport.

On Teenage Luddites

Back in 2019, when I was on tour for my book, Digital Minimalism, I chatted with more than a few parents. I was surprised by how many told me a similar story: their teenage children had become fed up with the shallowness of online life and decided, all on their own, to deactivate their social media accounts, and in some cases, abandon their smartphones altogether.

Ever since then, when an interviewer asks me about youth and technology addiction, I tend to adopt an optimistic tone.  “We’re approaching a moment in which not using these apps will be seen as the authentic, counter-cultural move,” I’ll explain. “We don’t need to convince teenagers to stop using their phones, we just need them to discover on their own just how uncool these online media conglomerates, with their creepy geek overlords, really are.”

According to a recent New York Times article that many of my readers sent me, we might finally be seeing evidence that this shift is beginning to pick up speed. The piece, written by Alex Vadukul, and titled “‘Luddite’ Teens Don’t Want Your Likes,” chronicles a group of Brooklyn high school students who formed what they call the Luddite Club, an informal organization dedicated to promoting “a lifestyle of self-liberation from social media and technology.”

The article opens on a meeting of the Luddite Club being held on a dirt mound in a tucked-away corner of Prospect Park. According to Vadukul, some of the members drew in sketchpads or worked on watercolor painting. Books were common, with one particularly precocious teen reading “The Consolation of Philosophy,” while another was — honest-to-God — whittling a stick. Kurt Vonnegut is popular in the club. As is Jon Krakauer’s Into the Wild.

The group’s founder, a 17-year-old named Logan Lane, said she had a hard time recruiting members. But the word seems to be spreading. The crew gathering in Prospect Park had heard of three different nearby high schools that were rumored to be starting their own chapters.  Lane showed up to her interview with Vadukul wearing quilted jeans she had sewed herself. She explained that once she was freed from her phone, she had started learning what life as a teenager in the city used to be like. She took to borrowing books from the library to read in the park. For a while, she fell in with a crew that taught her how to graffiti subway cars. Her parents were upset they didn’t always know where she was at night.

Here’s my carefully considered response to all of this: Yes. Very much, yes.

This is exactly what teenagers in Brooklyn should be doing: Reading books they don’t understand, getting into trouble, trying on intellectual identities without worrying about widespread scrutiny, sewing their own jeans, and yes, if they want, whittling sticks. How long did we really think young people would be willing to give up all of this wonderful mess in exchange for monotonously boosting the value of Meta stock?

There was, however, one passage in particular that gave me the most hope that a shift in teenagers’ relationships with their phones might actually be imminent. “My parents are so addicted,” said Lane. “My mom got on Twitter, and I’ve seen it tear her apart. But I guess I also like [being offline], because I get to feel a little superior to them.”

Zuckerberg is screwed.

The post On Teenage Luddites appeared first on Cal Newport.

Ann Patchett on Scheduling Creativity

In a recent interview for the BBC podcast Spark & Fire, the novelist Ann Patchett discusses some of the difficulties that come along with finding success as a writer.

“It used to be a novel lived very nicely in my head as a constant companion,” she explains. “As time goes on and I now have this other thing which is my career, and all the things that people want me to do, that is very distracting to day dreaming and working in your head.”

As a result, Patchett finds herself needing to specifically put aside time just to think. As she elaborates:

“Sometimes I sit down in my office on my meditation cushion. Not to meditate, but just to sit as if meditating. I start the timer, I light a candle, I sit down on my little green poof and I say to myself: ‘Now you have twenty minutes to think about your novel. Namaste.'”

She goes on to say that she finds it “pathetic” that she has to “block out time for thinking.” Patchett is not alone in this dismay:  many authors share a similar despair. (I remember my friend Ryan Holiday once putting it this way in an interview: “The better you become at writing, the more the world conspires to prevent you from writing.”)

It occurred to me, however, as I listened to this interview, that Patchett’s concerns provide a warning that applies well beyond the rarified world of professional authors. Creative insight of any type — be it business strategy, an ad campaign, or computer code — requires cognitive space to emerge. It doesn’t take much daily activity before original thought is starved of the neuronal nutriments required to grow.

Modern knowledge work, if anything, is a machine for generating shallow distraction. A professional schedule riddled with email, Slack, and calendar invites is one that cannot also support whatever form of inspired thought moves the needle in your particular field.

And yet, how many of us are serious about blocking off and protecting significant amounts of time to do nothing but think? To act, in other words, like Ann Patchett on her green meditation poof? It is perhaps pathetic that we’ve come to a point where something as natural as creativity requires artificial support, but it is where we are. We should start acknowledging this reality.

#####

Speaking of podcasts, it was brought to my attention recently that I should provide more updates here about what I’m up to on my own podcast, Deep Questions with Cal Newport. On Monday’s episode (#225), I talked about my recent appearance on Sam Harris’s podcast, and then chatted with a New York Times bestselling thriller writer about the reality of her profession.

Author photo credit: Heidi Ross

The post Ann Patchett on Scheduling Creativity appeared first on Cal Newport.

Pliny the Younger on Happy and Honorable Seclusion

A reader recently pointed me toward an intriguing letter, reproduced a few weeks ago in the always-impressive Areopagus newsletter, that was originally sent from Pliny the Younger to his friend Minicius Fundanus around 100 AD. Among other topics, the letter touches on the difficulty of completing meaningful work in a distracted world.

As Pliny writes:

“I always realize [that city life is distracting] when I am at Laurentum, reading and writing and finding time to take the exercise which keeps my mind fit for work. There is nothing there for me to say or hear which I would afterwards regret, no one disturbs me with malicious gossip, and I have no one to blame — except myself — when writing doesn’t come easily. Hopes and fears do not worry me, and my time is not wasted in idle talk; I share my thoughts with no one but my books. It is a good life and a genuine one, a seclusion which is happy and honorable, more rewarding than any “business” can ever be. The sea and shore are my private Helicon, an endless source of inspiration.”
 
Pliny’s advice led me to do some more digging on what exactly he meant when he quipped: “when I am at Laurentum.” It turns out that Pliny maintained a rambling villa on the sea, southwest of Rome. According to an article I found, written by a British architect, Pliny’s property had been specifically configured to support focus:

“Away from the main body of the Villa, but connected to it by means of a covered arcade, is Pliny’s Retreat – a place where he can write in peace away from all distractions…I’ve shown the Retreat as a circular room with arms making the shape of a Greek cross. Pliny could then position himself wherever he liked to catch the light and the view from sunrise to sunset as he wrote. He valued writing above everything else; to him it was the most important part of the Villa.”

We shouldn’t, of course, be too literal in extracting practical advice from the life of Pliny the Younger. As a member of a lower aristocratic order in Classical Antiquity, Pliny led a life whose details differ greatly from those of the standard middle-class twenty-first-century knowledge worker. In other words, me telling you to build an outbuilding modeled after the Greek cross, away from the main structures of your seaside villa, might not prove especially actionable.

But I did find it fascinating that even as far back as two thousand years ago, those who made a living with their mind (Pliny was a magistrate and lawyer) struggled with distraction, and found solace in the pursuit of something deeper.

#####

In other news…

In the most recent episode of my podcast, Deep Questions, I tackle the tension between ambition and burnout, describing a model for pursuing the former while avoiding the latter.

YouTube superstar (and former doctor) Ali Abdaal also released an interview with me in which we discussed the challenges of leaving a well-worn path to pursue something new.

The post Pliny the Younger on Happy and Honorable Seclusion appeared first on Cal Newport.
