What we can learn from Seneca about dying well
As I’ve shared before, next to my bed at home I have a memento mori, an object to remind me before I go to sleep and when I get up that one day I will die. It kind of puts things in perspective.
“Study death always,” Seneca counseled his friend Lucilius, and he took his own advice. From what is likely his earliest work, the Consolation to Marcia (written around AD 40), to the magnum opus of his last years (63–65), the Moral Epistles, Seneca returned again and again to this theme. It crops up in the midst of unrelated discussions, as though never far from his mind; a ringing endorsement of rational suicide, for example, intrudes without warning into advice about keeping one’s temper, in On Anger. Examined together, Seneca’s thoughts organize themselves around a few key themes: the universality of death; its importance as life’s final and most defining rite of passage; its part in purely natural cycles and processes; and its ability to liberate us, by freeing souls from bodies or, in the case of suicide, to give us an escape from pain, from the degradation of enslavement, or from cruel kings and tyrants who might otherwise destroy our moral integrity.
Seneca was forced to take his own life by his own pupil, the more-than-a-little-crazy Roman Emperor Nero. However, his whole life had been a preparation for such an eventuality.
Seneca, like many leading Romans of his day, found that larger moral framework in Stoicism, a Greek school of thought that had been imported to Rome in the preceding century and had begun to flourish there. The Stoics taught their followers to seek an inner kingdom, the kingdom of the mind, where adherence to virtue and contemplation of nature could bring happiness even to an abused slave, an impoverished exile, or a prisoner on the rack. Wealth and position were regarded by the Stoics as adiaphora, “indifferents,” conducing neither to happiness nor to its opposite. Freedom and health were desirable only in that they allowed one to keep one’s thoughts and ethical choices in harmony with Logos, the divine Reason that, in the Stoic view, ruled the cosmos and gave rise to all true happiness. If freedom were destroyed by a tyrant or health were forever compromised, such that the promptings of Reason could no longer be obeyed, then death might be preferable to life, and suicide, or self-euthanasia, might be justified.
Given that death is the last taboo in our society, it's an interesting way to live your life. Being ready at any time to die, having lived a life that you're satisfied with, seems like the right approach to me.
“Study death,” “rehearse for death,” “practice death”—this constant refrain in his writings did not, in Seneca’s eyes, spring from a morbid fixation but rather from a recognition of how much was at stake in navigating this essential, and final, rite of passage. As he wrote in On the Shortness of Life, “A whole lifetime is needed to learn how to live, and—perhaps you’ll find this more surprising—a whole lifetime is needed to learn how to die.”
Source: Lapham's Quarterly
Anonymity vs accountability
As this article points out, organisational culture is a delicate balance between many things, including accountability and anonymity:
Though some assurance of anonymity is necessary in a few sensitive and exceptional scenarios, dependence on anonymous feedback channels within an organization may stunt the normalization of a culture that encourages diversity and community.
Anonymity can be helpful and positive:
For example, an anonymous suggestion program to garner ideas from members or employees in an organization may strengthen inclusivity and enhance the diversity of suggestions the organization receives. It would also make for a more meritocratic decision-making process, as anonymity would ensure that the quality of the articulated idea, rather than the rank and reputation of the articulator, is what's under evaluation. Allowing members to anonymously vote for anonymously-submitted ideas would help curb the influence of office politics in decisions affecting the organization's growth.
...but also problematic:
Reliance on anonymous speech for serious organizational decision-making may also contribute to complacency in an organizational culture that falls short of openness. Outlets for anonymous speech may be as similar to open as crowdsourcing is—or rather, is not. Like efforts to crowdsource creative ideas, anonymous suggestion programs may create an organizational environment in which diverse perspectives are only valued when an organization's leaders find it convenient to take advantage of members' ideas.
The author gives some advice to leaders under five sub-headings:
- Availability of additional communication mechanisms
- Failure of other communication avenues
- Consequences of anonymity
- Designing the anonymous communication channel
- Long-term considerations
There's some great advice in here, and I'll certainly be reflecting on it with the organisations of which I'm part.
Source: opensource.com
On your deathbed, you're not going to wish that you'd spent more time on Facebook
As many readers of my work will know, I don’t have a Facebook account. This article uses Facebook as a proxy for something that, whether you’ve got an account on the world’s largest social network or not, will be familiar:
An increasing number of us are coming to realize that our relationships with our phones are not exactly what a couples therapist would describe as “healthy.” According to data from Moment, a time-tracking app with nearly five million users, the average person spends four hours a day interacting with his or her phone.
The trick, like anything to which you're psychologically addicted, is to reframe what you're doing:
Many people equate spending less time on their phones with denying themselves pleasure — and who likes to do that? Instead, think of it this way: The time you spend on your phone is time you’re not spending doing other pleasurable things, like hanging out with a friend or pursuing a hobby. Instead of thinking of it as “spending less time on your phone,” think of it as “spending more time on your life.”
The thing I find hardest is to leave my phone in a different room, or not take it with me when I go out. There's always a reason for this (usually 'being contactable') but not having it constantly alongside you is probably a good idea:
Leave your phone at home while you go for a walk. Stare out of a window during your commute instead of checking your email. At first, you may be surprised by how powerfully you crave your phone. Pay attention to your craving. What does it feel like in your body? What’s happening in your mind? Keep observing it, and eventually, you may find that it fades away on its own.
There's a great re-adjustment happening with our attitude towards devices and the services we use on them. In a separate BBC News article, Amol Rajan outlines some reasons why Facebook usage may have actually peaked:
- A drop in users
- A drop in engagement
- Advertiser enmity
- Disinformation and fake news
- Former executives speak out
- Regulatory mood is hardening
- GDPR
- Antagonism with the news industry
Interesting times.
Source: The New York Times / BBC News
The Goldilocks Rule
In this article from 2016, James Clear investigates motivation:
Why do we stay motivated to reach some goals, but not others? Why do we say we want something, but give up on it after a few days? What is the difference between the areas where we naturally stay motivated and those where we give up?
The answer, which is obvious when we think about it, is that we need appropriate challenges in our lives:
Tasks that are significantly below your current abilities are boring. Tasks that are significantly beyond your current abilities are discouraging. But tasks that are right on the border of success and failure are incredibly motivating to our human brains. We want nothing more than to master a skill just beyond our current horizon.
We can call this phenomenon The Goldilocks Rule. The Goldilocks Rule states that humans experience peak motivation when working on tasks that are right on the edge of their current abilities. Not too hard. Not too easy. Just right.
But he doesn’t stop there. He goes on to talk about Mihaly Csikszentmihalyi’s notion of peak performance, or ‘flow’ states:
In order to reach this state of peak performance... you not only need to work on challenges at the right degree of difficulty, but also measure your immediate progress. As psychologist Jonathan Haidt explains, one of the keys to reaching a flow state is that “you get immediate feedback about how you are doing at each step.”
Video games are great at inducing flow states; traditional classroom-based learning experiences, not so much. The key is to create these experiences yourself by finding optimum challenge and immediate feedback.
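To make the Goldilocks Rule concrete, here's a minimal sketch of a difficulty-adjustment loop in that spirit: keep each task near the edge of current ability and give feedback after every attempt. The success-probability model, the numbers, and the 50–85% target band are my own illustrative assumptions rather than anything from Clear's article.

```python
import random

def attempt(skill: float, difficulty: float) -> bool:
    """Succeed with higher probability when skill exceeds difficulty (Elo-style logistic)."""
    p_success = 1 / (1 + 10 ** (difficulty - skill))
    return random.random() < p_success

def practice(sessions: int = 20) -> None:
    """Keep difficulty 'just right': raise it when things get too easy, ease off when too hard."""
    skill, difficulty = 1.0, 1.0
    for session in range(1, sessions + 1):
        results = [attempt(skill, difficulty) for _ in range(10)]
        success_rate = sum(results) / len(results)
        skill += 0.05 * success_rate          # practice slowly improves ability
        if success_rate > 0.85:               # too easy: boredom, so raise the bar
            difficulty += 0.1
        elif success_rate < 0.50:             # too hard: discouragement, so ease off
            difficulty -= 0.1
        # immediate feedback at every step, as Haidt suggests
        print(f"session {session:2d}: success {success_rate:.0%}, "
              f"skill {skill:.2f}, difficulty {difficulty:.2f}")

if __name__ == "__main__":
    practice()
```

Video games do something very like this automatically, which is part of why they are so good at inducing flow.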
Source: Lifehacker
On the death of Google/Apache Wave (and the lessons we can learn from it)
This article is entitled ‘How not to replace email’ and details both the demise of Google Wave and its open source continuation, Apache Wave:
As of a month ago, the Apache Wave project is “retired”. Few people noticed; in the seven years that Wave was an Apache Incubator open source project, it never had an official release, and was stuck at version 0.4-rc10 for the last three years.
Yes, I know! There have been a couple of times over the last few years when I've thought that Wave would have been perfect for a project I was working on. But the open source version never seemed to be 'ready'.
The world wasn't ready for it in 2010, but now would seem to be the perfect time for something like Wave:
2017 was a year of rapidly growing interest in federated communications tools such as Mastodon, which is an alternative to Twitter that doesn’t rely on a single central corporation. So this seems like a good time to revisit an early federated attempt to reinvent how we use the internet to communicate with each other.
As the author notes, the problem was the overblown hype around it, causing Google to pull it after just three months. He quoted a friend of his who at one time was an active user:
We’d start sending messages with lots of diagrams, sketches, and stuff cribbed from Google Images, and then be able to turn those sort of longer-than-IM-shorter-than-email messages into actual design documents gradually.
I feel this too, and it’s actually something we’ve been talking about for internal communications at Moodle. Telegram (which we use kind of like Slack) is good for short, sharp communication, but there’s a gulf between that and, say, an email conversation or threaded forum discussion.
In fact, I’d argue that even having a system that’s a messaging system designed for “a paragraph or two” was on its own worthwhile: even Slack isn’t quite geared toward that, and contrariwise, email […] felt more heavyweight than that. Wave felt like it encouraged the right amount of information per message.
Perhaps this is the sweet spot for the ‘social networking’ aspect of Project MoodleNet?
Helpfully, the author outlines some projects he’s been part of, after stating (my emphasis):
Wave’s failure didn’t have anything to do with the ideas that went into it. Those ideas and goals are sound, and this failure even provided good evidence that there’s a real need for something kind of like Wave: fifty thousand people signed a petition to “Save Google Wave” after Google announced they were shutting Wave down. Like so many petitions, it didn’t help (obviously), but if a mediocre implementation got tens of thousands of passionate fans, what could a good implementation do?
I’d say the single most important lesson to take away here, for a technology project at least, is that interoperability is key.
- Assume that no matter how amazing your new tech is, people are going to adopt it slowly.
- Give your early adopters every chance you can to use your offering together with the existing tools that they will continue to need in order to work with people who haven’t caught up yet.
- And if you’re building a communication tool, make it as simple as possible for others to build compatible tools, because they will expand the network of people your users can communicate with to populations you haven’t thought of and probably don’t understand.
It's a really useful article with many practical applications (well, for me at least...)
Source: Jamey Sharp
To lose old styles of reading is to lose a part of ourselves
Sometimes I think we’re living in the end times:
Out for dinner with another writer, I said, "I think I've forgotten how to read."
"Yes!" he replied, pointing his knife. "Everybody has."
"No, really," I said. "I mean I actually can't do it any more."
He nodded: "Nobody can read like they used to. But nobody wants to talk about it."
I wrote my doctoral thesis on digital literacies. There was a real sense in the 1990s that reading on screen was very different to reading on paper. We've kind of lost that sense of difference, and I think perhaps we need to regain it:
For most of modern life, printed matter was, as the media critic Neil Postman put it, "the model, the metaphor, and the measure of all discourse." The resonance of printed books – their lineal structure, the demands they make on our attention – touches every corner of the world we've inherited. But online life makes me into a different kind of reader – a cynical one. I scrounge, now, for the useful fact; I zero in on the shareable link. My attention – and thus my experience – fractures. Online reading is about clicks, and comments, and points. When I take that mindset and try to apply it to a beaten-up paperback, my mind bucks.
We don't really talk about 'hypertext' any more, as it's almost the default type of text that we read. As such, reading on paper doesn't really prepare us for it:
For a long time, I convinced myself that a childhood spent immersed in old-fashioned books would insulate me somehow from our new media climate – that I could keep on reading and writing in the old way because my mind was formed in pre-internet days. But the mind is plastic – and I have changed. I'm not the reader I was.
Me too. I train myself to read longer articles through mechanisms such as writing Thought Shrapnel posts and newsletters each week. But I don't read like I used to; I read for utility rather than pleasure and just for the sake of it.
The suggestion that, in a few generations, our experience of media will be reinvented shouldn't surprise us. We should, instead, marvel at the fact we ever read books at all. Great researchers such as Maryanne Wolf and Alison Gopnik remind us that the human brain was never designed to read. Rather, elements of the visual cortex – which evolved for other purposes – were hijacked in order to pull off the trick. The deep reading that a novel demands doesn't come easy and it was never "natural." Our default state is, if anything, one of distractedness. The gaze shifts, the attention flits; we scour the environment for clues. (Otherwise, that predator in the shadows might eat us.) How primed are we for distraction? One famous study found humans would rather give themselves electric shocks than sit alone with their thoughts for 10 minutes. We disobey those instincts every time we get lost in a book.
It's funny. We've such a connection with books, but for most of human history we've done without them:
Literacy has only been common (outside the elite) since the 19th century. And it's hardly been crystallized since then. Our habits of reading could easily become antiquated. The writer Clay Shirky even suggests that we've lately been "emptily praising" Tolstoy and Proust. Those old, solitary experiences with literature were "just a side-effect of living in an environment of impoverished access." In our online world, we can move on. And our brains – only temporarily hijacked by books – will now be hijacked by whatever comes next.
There are several theses in all of this around fake news, the role of reading in a democracy, and how information spreads. For now, I continue to be amazed at the power of the web on the fabric of societies.
Source: The Globe and Mail
Does the world need interactive emails?
I’m on the fence on this as, on the one hand, email is an absolute bedrock of the internet, a common federated standard that we can rely upon independent of technological factionalism. On the other hand, so long as it’s built into a standard others can adopt, it could be pretty cool.
The author of this article really doesn’t like Google’s idea of extending AMP (Accelerated Mobile Pages) to the inbox:
See, email belongs to a special class. Nobody really likes it, but it’s the way nobody really likes sidewalks, or electrical outlets, or forks. It’s not that there’s something wrong with them. It’s that they’re mature, useful items that do exactly what they need to do. They’ve transcended the world of likes and dislikes.
Fair enough, but as a total convert to Google's 'Inbox' app both on the web and on mobile, I don't think we can stop innovation in this area:
Emails are static because messages are meant to be static. The entire concept of communication via the internet is based around the telegraphic model of exchanging one-way packets with static payloads, the way the entire concept of a fork is based around piercing a piece of food and allowing friction to hold it in place during transit.
Are messages 'meant to be static'? I'm not so sure. Books were 'meant to' be paper-based until ebooks came along, and now there's all kinds of things we can do with ebooks that we can't do with their dead-tree equivalents.
Why do this? Are we running out of tabs? Were people complaining that clicking “yes” on an RSVP email took them to the invitation site? Were they asking to have a video chat window open inside the email with the link? No. No one cares. No one is being inconvenienced by this aspect of email (inbox overload is a different problem), and no one will gain anything by changing it.
Although it's an entertaining read, if 'why do this?' is the only argument the author, Devin Coldewey, has got against an attempted innovation in this space, then my answer would be why not? Although Coldewey points to the shutdown of Google Reader as an example of Google 'forcing' everyone to move to algorithmic news feeds, I'm not sure things are, and were, as simple as that.
It sounds a little simplistic to say so, but people either like and value something and therefore use it, or they don’t. We who like and uphold standards need to remember that, instead of thinking about what people and organisations should and shouldn’t do.
Source: TechCrunch
The Kano model
Using the example of the innovation of a customised home page from the early days of Flickr, this article helps break down how to delight users:
Years ago, we came across the work of Noriaki Kano, a Japanese expert in customer satisfaction and quality management. In studying his writing, we learned about a model he created in the 1980s, known as the Kano Model.
The article does a great job of explaining how you can implement great features that still don't particularly excite users:
Capabilities that users expect will frustrate those users when they don’t work. However, when they work well, they don’t delight those users. A basic expectation, at best, can reach a neutral satisfaction, a point where it, in essence, becomes invisible to the user.
So it’s a process of continual improvement, and marginal gains in some areas:
Try as it might, Google’s development team can only reduce the file-save problems to the point of it working 100% of the time. However, users will never say, “Google Docs is an awesome product because it saves my documents so well.” They just expect files to always be saved correctly.
One of the predictions that the Kano Model makes is that once customers become accustomed to excitement generator features, those features are not as delightful. The features initially become part of the performance payoff and then eventually migrate to basic expectations.
Lots to think about here, particularly with Project MoodleNet.
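To make the model concrete, here's a minimal sketch of the three categories the article names (basic expectations, performance payoffs, excitement generators) and the migration it predicts. The category names come from the article; the mapping to user reactions and the migration function are my own toy simplification.

```python
from enum import Enum

class KanoCategory(Enum):
    """The three Kano categories discussed in the article."""
    BASIC_EXPECTATION = "basic expectation"        # frustrating when absent, invisible when present
    PERFORMANCE_PAYOFF = "performance payoff"      # more is better, less is worse
    EXCITEMENT_GENERATOR = "excitement generator"  # delightful when present, not missed when absent

def satisfaction(category: KanoCategory, works_well: bool) -> str:
    """Rough mapping from category and execution quality to user reaction."""
    if category is KanoCategory.BASIC_EXPECTATION:
        return "neutral (invisible)" if works_well else "frustrated"
    if category is KanoCategory.PERFORMANCE_PAYOFF:
        return "satisfied" if works_well else "dissatisfied"
    return "delighted" if works_well else "indifferent"

def migrate(category: KanoCategory) -> KanoCategory:
    """The Kano Model's prediction: delighters drift towards basic expectations over time."""
    order = [KanoCategory.EXCITEMENT_GENERATOR,
             KanoCategory.PERFORMANCE_PAYOFF,
             KanoCategory.BASIC_EXPECTATION]
    return order[min(order.index(category) + 1, len(order) - 1)]

# Example: a customised home page starts out as a delighter...
feature = KanoCategory.EXCITEMENT_GENERATOR
print(satisfaction(feature, works_well=True))                    # delighted
# ...but a few product generations later it is simply expected.
print(satisfaction(migrate(migrate(feature)), works_well=True))  # neutral (invisible)
```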
Source: UIE
Is the gig economy the mass exploitation of millennials?
The answer is, “yes, probably”.
The 'sharing economy' and 'gig economy' are nothing of the sort. They're a problematic and highly disingenuous way for employers to not care about the people who create value in their business.
If the living wage is a pay scale calculated to be that of an appropriate amount of money to pay a worker so they can live, how is it possible, in a legal or moral sense to pay someone less? We are witnessing a concerted effort to devalue labour, where the primary concern of business is profit, not the economic wellbeing of its employees.
The problem, of course, is late-stage capitalism:
The employer washes their hands of the worker. Their immediate utility is the sole concern. From a profit point of view, absolutely we can appreciate the logic. However, we forget that the worker also exists as a member of society, and when business is allowed to use and exploit people in this manner, we endanger societal cohesiveness.
And the alternative? Co-operation.
The neoliberal project has encouraged us to adopt a hyper-individualistic approach to life and work. For all the speak of teamwork, in this economy the individual reigns supreme and it is destroying young workers. The present system has become unfeasible. The neoliberal project needs to be reeled back in. The free market needs a firm hand because the invisible one has lost its grip.
Source: The Irish Times
Humans are not machines
Can we teach machines to be ‘fully human’? It’s a fascinating question, as it makes us think carefully about what it actually means to be a human being.
Humans aren’t just about inputs and outputs. There are some things that we ‘know’ in different ways. Take music, for example.
In philosophy, it’s common to describe the mind as a kind of machine that operates on a set of representations, which serve as proxies for worldly states of affairs, and get recombined ‘offline’ in a manner that’s not dictated by what’s happening in the immediate environment. So if you can’t consciously represent the finer details of a guitar solo, the way is surely barred to having any grasp of its nuances. Claiming that you have a ‘merely visceral’ grasp of music really amounts to saying that you don’t understand it at all. Right?
There are activities we do and actions we perform that aren't the result of conscious thought. What status do we give them?
Getting swept up in a musical performance is just one among a whole host of familiar activities that seem less about computing information, and more about feeling our way as we go: selecting an outfit that’s chic without being fussy, avoiding collisions with other pedestrians on the pavement, or adding just a pinch of salt to the casserole. If we sometimes live in the world in a thoughtful and considered way, we go with the flow a lot, too.
What sets humans apart from animals is the ability to plan and to pay attention to abstract things and ideas:
Now, the world contains many things that we can’t perceive. I am unlikely to find a square root in my sock drawer, or to spot the categorical imperative lurking behind the couch. I can, however, perceive concrete things, and work out their approximate size, shape and colour just by paying attention to them. I can also perceive events occurring around me, and get a rough idea of their duration and how they relate to each other in time. I hear that the knock at the door came just before the cat leapt off the couch, and I have a sense of how long it took for the cat to sidle out of the room.
Time is one of the most abstract of the day-to-day things we deal with as humans:
Our conscious experience of time is philosophically puzzling. On the one hand, it’s intuitive to suppose that we perceive only what’s happening right now. But on the other, we seem to have immediate perceptual experiences of motion and change: I don’t need to infer from a series of ‘still’ impressions of your hand that it is waving, or work out a connection between isolated tones in order to hear a melody. These intuitions seem to contradict each other: how can I perceive motion and change if I am only really conscious of what’s occurring now? We face a choice: either we don’t really perceive motion and change, or the now of our perception encompasses more than the present instant – each of which seems problematic in its own way. Philosophers such as Franz Brentano and Edmund Husserl, as well as a host of more recent commentators, have debated how best to solve the dilemma.
So where does that leave us in terms of the differences between humans and machines?
Human attempts at making sense of the world often involve representing, calculating and deliberating. This isn’t the kind of thing that typically goes on in the 55 Bar, nor is it necessarily happening in the Lutheran church just down the block, or on a muddy football pitch in a remote Irish village. But gathering to make music, play games or engage in religious worship are far from being mindless activities. And making sense of the world is not necessarily just a matter of representing it.
To me, that last sentence is key: the world isn't just representations. It's deeper and more visceral than that.
Source: Aeon
Legislating against manipulated 'facts' is a slippery slope
In this day and age it’s hard to know who to trust. I was raised to trust in authority, but I was particularly struck, when I did a deep-dive into Vinay Gupta’s blog, by the idea that the state is special only because it holds a monopoly on (legal) violence.
As an historian, I’m all too aware of the times that the state (usually represented by a monarch) has served to repress its citizens/subjects. Even then, it could at least pretend that it was protecting the majority of the people. As this article states:
Lies masquerading as news are as old as news itself. What is new today is not fake news but the purveyors of such news. In the past, only governments and powerful figures could manipulate public opinion. Today, it’s anyone with internet access. Just as elite institutions have lost their grip over the electorate, so their ability to act as gatekeepers to news, defining what is and is not true, has also been eroded.
So in the interaction between social networks such as Facebook, Twitter, and Instagram on the one hand, and various governments on the other, both sides are interested in power, not the people. Or even any notion of truth, it would seem:
This is why we should be wary of many of the solutions to fake news proposed by European politicians. Such solutions do little to challenge the culture of fragmented truths. They seek, rather, to restore more acceptable gatekeepers – for Facebook or governments to define what is and isn’t true. In Germany, a new law forces social media sites to take down posts spreading fake news or hate speech within 24 hours or face fines of up to €50m. The French president, Emmanuel Macron, has promised to ban fake news on the internet during election campaigns. Do we really want to rid ourselves of today’s fake news by returning to the days when the only fake news was official fake news?
We need to be vigilant. Those we trust today may not be trustworthy tomorrow.
Source: The Guardian
Why we forget most of what we read
I read a lot of stuff, and I remember random bits of it. I used to be reasonably disciplined about bookmarking stuff, but then realised I hardly ever went back through my bookmarks. So, instead, I try to use what I read, which is kind of the reason for Thought Shrapnel…
Surely some people can read a book or watch a movie once and retain the plot perfectly. But for many, the experience of consuming culture is like filling up a bathtub, soaking in it, and then watching the water run down the drain. It might leave a film in the tub, but the rest is gone.
Well, indeed. Nice metaphor.
In the internet age, recall memory—the ability to spontaneously call information up in your mind—has become less necessary. It’s still good for bar trivia, or remembering your to-do list, but largely, [Jared Horvath, a research fellow at the University of Melbourne] says, what’s called recognition memory is more important. “So long as you know where that information is at and how to access it, then you don’t really need to recall it,” he says.
Exactly. You need to know how to find that article you read that backs up the argument you're making. You don't need to remember all of the details. Search skills are really important.
One study showed that recall of details about episodes was much lower for those bingeing on Netflix series than for those who spaced them out. I guess that’s unsurprising.
People are binging on the written word, too. In 2009, the average American encountered 100,000 words a day, even if they didn’t “read” all of them. It’s hard to imagine that’s decreased in the nine years since. In “Binge-Reading Disorder,” an article for The Morning News, Nikkitha Bakshani analyzes the meaning of this statistic. “Reading is a nuanced word,” she writes, “but the most common kind of reading is likely reading as consumption: where we read, especially on the internet, merely to acquire information. Information that stands no chance of becoming knowledge unless it ‘sticks.’”
For anyone who knows about spaced learning, the conclusions are pretty obvious:
The lesson from his binge-watching study is that if you want to remember the things you watch and read, space them out. I used to get irritated in school when an English-class syllabus would have us read only three chapters a week, but there was a good reason for that. Memories get reinforced the more you recall them, Horvath says. If you read a book all in one stretch—on an airplane, say—you’re just holding the story in your working memory that whole time. “You’re never actually reaccessing it,” he says.
So if you apply what you learn, you're putting it to work. Hence this post!
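The article doesn't prescribe a schedule, but the spacing principle it describes is easy to sketch: revisit material after a gap, then after progressively longer gaps, rather than in one binge. The doubling intervals below are my own illustrative assumption, not something Horvath specifies.

```python
from datetime import date, timedelta

def spaced_review_dates(start: date, reviews: int = 5, first_gap_days: int = 1) -> list[date]:
    """Schedule reviews at roughly doubling gaps (1, 2, 4, 8... days after finishing).

    The doubling schedule is only an illustration; the point is that each recall
    happens after a gap rather than in one long stretch.
    """
    dates, gap, current = [], first_gap_days, start
    for _ in range(reviews):
        current += timedelta(days=gap)
        dates.append(current)
        gap *= 2
    return dates

# Example: finish a chapter today, then revisit it on these dates.
for review_date in spaced_review_dates(date.today()):
    print(review_date.isoformat())
```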
Source: The Atlantic (via e180)
Should you lower your expectations?
“Aim for the stars and maybe you’ll hit the treetops” was always the kind of advice I was given when I was younger. But extremely high expectations of oneself are not always a great thing. We have to learn that we’ve got limits. Some are physical, some are mental, and some are cultural:
The problem with placing too much emphasis on your expectations—especially when they are exceedingly high—is that if you don’t meet them, you’re liable to feel sad, perhaps even burned out. This isn’t to say that you shouldn’t strive for excellence, but there’s wisdom in not letting perfect be the enemy of good.
A (now famous) 2006 study found that people in Denmark are the happiest in the world. Researchers also found that they have remarkably low expectations. And then:
In a more recent study that included more than 18,000 participants and was published in 2014 in the Proceedings of the National Academy of Sciences, researchers from University College in London examined people’s happiness from moment to moment. They found that “momentary happiness in response to outcomes of a probabilistic reward task is not explained by current task earnings, but by the combined influence of the recent reward expectations and prediction errors arising from those expectations.” In other words: Happiness at any given moment equals reality minus expectations.
So if you've always got very high expectations that aren't being met, that's not a great situation to be in.
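As a rough, numbers-only illustration of that 'reality minus expectations' summary (the function and the example figures are mine, not the study's):

```python
def momentary_happiness(reality: float, expectation: float) -> float:
    """Crude version of the article's summary: happiness = reality minus expectations."""
    return reality - expectation

# The same outcome feels different depending on what you expected going in.
print(momentary_happiness(reality=7, expectation=9))   # -2: a 'good' day still disappoints
print(momentary_happiness(reality=7, expectation=6))   #  1: the same day is a pleasant surprise
```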
In the words of Jason Fried, founder and CEO of software company Basecamp and author of multiple books on workplace performance: “I used to set expectations in my head all day long. But constantly measuring reality against an imagined reality is taxing and tiring, [and] often wrings the joy out of experiencing something for what it is.”
Source: Outside
Why do some things go viral?
I love internet memes and included a few in my TEDx talk a few years ago. The term ‘meme’ comes from Richard Dawkins, who coined it in the 1970s:
But trawling the Internet, I found a strange paradox: While memes were everywhere, serious meme theory was almost nowhere. Richard Dawkins, the famous evolutionary biologist who coined the word “meme” in his classic 1976 book, The Selfish Gene, seemed bent on disowning the Internet variety, calling it a “hijacking” of the original term. The peer-reviewed Journal of Memetics folded in 2005. “The term has moved away from its theoretical beginnings, and a lot of people don’t know or care about its theoretical use,” philosopher and meme theorist Daniel Dennett told me. What has happened to the idea of the meme, and what does that evolution reveal about its usefulness as a concept?
Memes aren't things that you necessarily want to find engaging or persuasive. They're kind of parasitic on the human mind:
Dawkins’ memes include everything from ideas, songs, and religious ideals to pottery fads. Like genes, memes mutate and evolve, competing for a limited resource—namely, our attention. Memes are, in Dawkins’ view, viruses of the mind—infectious. The successful ones grow exponentially, like a super flu. While memes are sometimes malignant (hellfire and faith, for atheist Dawkins), sometimes benign (catchy songs), and sometimes terrible for our genes (abstinence), memes do not have conscious motives. But still, he claims, memes parasitize us and drive us.
Dawkins doesn't like the use of the word 'meme' to refer to what we see on the internet:
According to Dawkins, what sets Internet memes apart is how they are created. “Instead of mutating by random chance before spreading by a form of Darwinian selection, Internet memes are altered deliberately by human creativity,” he explained in a recent video released by the advertising agency Saatchi & Saatchi. He seems to think that the fact that Internet memes are engineered to go viral, rather than evolving by way of natural selection, is a salient difference that distinguishes them from other memes—which is arguable, since what catches fire on the Internet can be as much a product of luck as any unexpected mutation.
So... why should we care?
While entertaining bored office workers seems harmless enough, there is something troubling about a multi-million dollar company using our minds as petri dishes in which to grow its ideas. I began to wonder if Dawkins was right—if the term meme is really being hijacked, rather than mindlessly evolving like bacteria. The idea of memes “forces you to recognize that we humans are not entirely the center of the universe where information is concerned—we’re vehicles and not necessarily in charge,” said James Gleick, author of The Information: A History, A Theory, A Flood, when I spoke to him on the phone. “It’s a humbling thing.”
It is indeed a humbling thing, but one that the study of Philosophy prepares you for, particularly Stoicism. Your mind is the one thing you can control, so be careful out there on the internet, reader.
Source: Nautilus