Trust and the cult of your PLN

This is a long article with a philosophical take on one of my favourite subjects: social networks and the flow of information. The author, C Thi Nguyen, is an assistant professor of philosophy at Utah Valley University and distinguishes between two things that he thinks have been conflated:

Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.

Teasing things apart a bit, Nguyen gives some definitions:

Current usage has blurred this crucial distinction, so let me introduce a somewhat artificial taxonomy. An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission.

[…]

An ‘echo chamber’ is a social structure from which other relevant voices have been actively discredited.

[…]

In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined. The way to break an echo chamber is not to wave “the facts” in the faces of its members. It is to attack the echo chamber at its root and repair that broken trust.

It feels like, towards the end of my decade as an active user of Twitter, there was a definite shift from it being an ‘epistemic bubble’ towards being an ‘echo chamber’. My ‘Personal Learning Network’ (or ‘PLN’) seemed to become a bit more militant in its beliefs.

Nguyen goes on to talk at length about fake news, sociological theories, and Cartesian epistemology. Where he ends up, however, is where I would: trust.

As Elijah Millgram argues in The Great Endarkenment (2015), modern knowledge depends on trusting long chains of experts. And no single person is in the position to check up on the reliability of every member of that chain. Ask yourself: could you tell a good statistician from an incompetent one? A good biologist from a bad one? A good nuclear engineer, or radiologist, or macro-economist, from a bad one? Any particular reader might, of course, be able to answer positively to one or two such questions, but nobody can really assess such a long chain for herself. Instead, we depend on a vastly complicated social structure of trust. We must trust each other, but, as the philosopher Annette Baier says, that trust makes us vulnerable. Echo chambers operate as a kind of social parasite on that vulnerability, taking advantage of our epistemic condition and social dependency.

That puts us in a double bind. We need to make ourselves vulnerable in order to participate in a society built on trust, but that very vulnerability puts us in danger of being manipulated.

I see this in the fanatical evangelism of blockchain solutions, which treat trust as a ‘problem’ to be engineered away by operating in a trustless environment. To my mind, we need to be trusting people more, not less. Of course, there are obvious exceptions, but breaches of trust are near the top of the list of things we should punish most severely in a society.

Is there anything we can do, then, to help an echo-chamber member to reboot? We’ve already discovered that direct assault tactics – bombarding the echo-chamber member with ‘evidence’ – won’t work. Echo-chamber members are not only protected from such attacks, but their belief systems will judo such attacks into further reinforcement of the echo chamber’s worldview. Instead, we need to attack the root, the systems of discredit themselves, and restore trust in some outside voices.

So the way forward is for people to develop empathy and to show trust, not to present people with evidence that they’re wrong. That’s never worked in the past, and it won’t work now. Our problem isn’t a deficit in access to information; it’s a deficit in trust.

Source: Aeon (via Ian O’Byrne)

Conversational implicature

In references for jobs, former employers are required to be positive. Therefore, a reference that focuses on how polite and punctual someone is could actually be a damning indictment of their ability. Such ‘conversational implicature’ is the focus of this article:

When we convey a message indirectly like this, linguists say that we implicate the meaning, and they refer to the meaning implicated as an implicature. These terms were coined by the British philosopher Paul Grice (1913-88), who proposed an influential account of implicature in his classic paper ‘Logic and Conversation’ (1975), reprinted in his book Studies in the Way of Words (1989). Grice distinguished several forms of implicature, the most important being conversational implicature. A conversational implicature, Grice held, depends, not on the meaning of the words employed (their semantics), but on the way that the words are used and interpreted (their pragmatics).

From my point of view, this is similar to the difference between productive and unproductive ambiguity.

The distinction between what is said and what is conversationally implicated isn’t just a technical philosophical one. It highlights the extent to which human communication is pragmatic and non-literal. We routinely rely on conversational implicature to supplement and enrich our utterances, thus saving time and providing a discreet way of conveying sensitive information. But this convenience also creates ethical and legal problems. Are we responsible for what we implicate as well as for what we actually say?

For example, and as the article notes, “shall we go upstairs?” can mean a sexual invitation, which may or may not later imply consent. It’s a tricky area.

I’ve noted that the more technically minded a person is, the less they use conversational implicature. In addition, and I’m not sure if this is true or just my own experience, I’ve found that Americans tend to be more literal in their communication than Europeans.

To avoid disputes and confusion, perhaps we should use implicature less and communicate more explicitly? But is that recommendation feasible, given the extent to which human communication relies on pragmatics?

To use conversational implicature is human. It can be annoying. It can turn political. But it’s an extremely useful tool, and it certainly lubricates the business of us all rubbing along together.

Source: Aeon

The tenets of ‘Slow Thought’

The slow movement began with ‘slow food’, which arose in opposition to, unsurprisingly, ‘fast food’. Since then there have been, with greater and lesser success, ‘slow’ versions of many things: education, cinema, religion… you name it.

In this article, the author suggests ‘slow thought’. Unfortunately, the connotations around ‘slow thinking’ are already negative, so I don’t think the manifesto they provide will catch on. They also quote French philosophers…

In the tradition of the Slow Movement, I hereby declare my manifesto for ‘Slow Thought’. This is the first step toward a psychiatry of the event, based on the French philosopher Alain Badiou’s central notion of the event, a new foundation for ontology – how we think of being or existence. An event is an unpredictable break in our everyday worlds that opens new possibilities. The three conditions for an event are: that something happens to us (by pure accident, no destiny, no determinism), that we name what happens, and that we remain faithful to it. In Badiou’s philosophy, we become subjects through the event. By naming it and maintaining fidelity to the event, the subject emerges as a subject to its truth. ‘Being there,’ as traditional phenomenology would have it, is not enough. My proposal for ‘evental psychiatry’ will describe both how we get stuck in our everyday worlds, and what makes change and new things possible for us.

That being said, if only the author could state them more simply and let them stand alone, I think the ‘seven proclamations’ do have value:

  1. Slow Thought is marked by peripatetic Socratic walks, the face-to-face encounter of Levinas, and Bakhtin’s dialogic conversations
  2. Slow Thought creates its own time and place
  3. Slow Thought has no other object than itself
  4. Slow Thought is porous
  5. Slow Thought is playful
  6. Slow Thought is a counter-method, rather than a method, for thinking as it relaxes, releases and liberates thought from its constraints and the trauma of tradition
  7. Slow Thought is deliberate

Isn’t this just Philosophy? In any case, my favourite paragraph is probably this one:

Slow Thought is a porous way of thinking that is non-categorical, open to contingency, allowing people to adapt spontaneously to the exigencies and vicissitudes of life. Italians have a name for this: arrangiarsi – more than ‘making do’ or ‘getting by’, it is the art of improvisation, a way of using the resources at hand to forge solutions. The porosity of Slow Thought opens the way for potential responses to human predicaments.

We definitely need more ‘arrangiarsi’ in the world.

Source: Aeon

Memento mori

As I’ve mentioned before on Thought Shrapnel, next to my bed I have a memento mori, an object that reminds me that one day I will die.

My friend Ian O’Byrne had some sad news last week: his grandmother died. However, in an absolutely fantastic and very well-written post he wrote in the aftermath, he mentioned how meditating regularly on death, and having a memento mori has really helped him to live his life to the fullest.

I believe that it is reminders like this one that we desperately need in our own lives. It seems like a normal practice that many of us would rather ignore death, or do everything to avoid it and pretend it is not true. It may be the root of ego that causes us to run away from anything that reminds us of this reality. As a safety mechanism, we build this comfortable narrative that avoids this tough subject.

We also at times simply refuse to look at life as it is. We’re scared to meditate and reflect on the fact that we are all going to die. Just the fact that I wrote this post, and you’re reading it, may strike you as a bit dark and macabre.

With all of our technological, surgical, and pharmaceutical inventions and devices, we expect, almost demand, to live a long life, live it in good health and look good doing it. We live in denial that we will die. But previous civilizations were acutely aware of their own mortality. Memento mori was the philosophy of reflecting on your own death as a form of spiritual improvement, and rejecting earthly vanities.

So having a memento mori isn’t morbid; it’s actually a symbol that you’re looking to maximise your time here on earth. When I used a Mac, I had a skull icon at the top of the dock on the left-hand side of my screen.

Ian suggests some alternatives:

There are multiple ways to include this process of memento mori in your life. For some, it is as simple as including artwork and symbols in your home and daily interactions. These may be symbols of mortality which encourage reflection on the meaning and fleetingness of life. In my home we have skulls in various pieces of art and sculptures that help serve as a reminder.

I had the opportunity last week to revisit Buster Benson’s influential 2013 post Live Like a Hydra. In it, he references an experiment he called If I Lived 100 Times, whereby he modelled life expectancy data for someone his age. How many books will you read before you die? How many new countries will you travel to? It’s interesting reading, and it certainly makes you think.
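If you fancy trying something similar yourself, here’s a minimal sketch in Python of what ‘living 100 times’ might look like. To be clear, this is my own toy illustration, not Benson’s actual method: the flat death probability, starting age, and reading rate below are made-up placeholders standing in for real actuarial data.

    import random

    # Toy model of 'living 100 times' -- hypothetical numbers throughout.
    # A serious version would use age-specific actuarial life tables.
    ANNUAL_DEATH_PROB = 0.02  # assumed flat yearly chance of dying
    CURRENT_AGE = 40          # assumed current age
    MAX_AGE = 110             # hard cap on the simulation
    BOOKS_PER_YEAR = 12       # assumed reading rate

    def one_life() -> int:
        """Simulate a single lifetime; return age at death."""
        age = CURRENT_AGE
        while age < MAX_AGE and random.random() > ANNUAL_DEATH_PROB:
            age += 1
        return age

    # 'Live' 100 times and summarise what's left.
    lifetimes = [one_life() for _ in range(100)]
    avg_age = sum(lifetimes) / len(lifetimes)
    years_left = avg_age - CURRENT_AGE
    print(f"Average age at death across 100 lives: {avg_age:.1f}")
    print(f"Books left to read, on average: {years_left * BOOKS_PER_YEAR:.0f}")

Even with invented numbers, the output makes the point of the exercise: the books, trips, and summers you have left are unsettlingly countable.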

Back to Ian’s article, where he turns to the Stoic philosopher Epictetus for some advice:

Memento mori is an opportunity, should you take it, to reflect on the invigorating and humbling aspects of life. By no means am I an expert on this. I still struggle daily with understanding my role and mission in life. In these struggles, I also need to remember that I may not wake up tomorrow. As stated by Epictetus, “Keep death and exile before your eyes each day, along with everything that seems terrible— by doing so, you’ll never have a base thought nor will you have excessive desire.” These opportunities to reflect and meditate provide an opportunity to create and enjoy the life you want.

Wise words indeed.

Source: W. Ian O’Byrne

Archives of Radical Philosophy

A quick one to note that the entire archive (1972-2018) of Radical Philosophy is now online. It describes itself as a “UK-based journal of socialist and feminist philosophy” and there are articles in there from Pierre Bourdieu, Judith Butler, and Richard Rorty.

If nothing else, these essays and many others should upend facile notions of leftist academic philosophy as dominated by “postmodern” denials of truth, morality, freedom, and Enlightenment thought, as doctrinaire Stalinism, or little more than thought policing through dogmatic political correctness. For every argument in the pages of Radical Philosophy that might confirm certain readers’ biases, there are dozens more that will challenge their assumptions, bearing out Foucault’s observation that “philosophy cannot be an endless scrutiny of its own propositions.”

That’s my bedtime reading sorted for the foreseeable, then…

Source: Open Culture

Is your smartphone a very real part of who you are?

I really enjoy Aeon’s articles, and probably should think about becoming a paying subscriber. They make me think.

This one is about your identity and how much of it is bound up with your smartphone:

After all, your smartphone is much more than just a phone. It can tell a more intimate story about you than your best friend. No other piece of hardware in history, not even your brain, contains the quality or quantity of information held on your phone: it ‘knows’ whom you speak to, when you speak to them, what you said, where you have been, your purchases, photos, biometric data, even your notes to yourself – and all this dating back years.

I did some work on mind, brain, and personal identity as part of my undergraduate studies in Philosophy. I’m certainly sympathetic to the argument that things outside our body can become part of who we are:

Andy Clark and David Chalmers… argued in ‘The Extended Mind’ (1998) that technology is actually part of us. According to traditional cognitive science, ‘thinking’ is a process of symbol manipulation or neural computation, which gets carried out by the brain. Clark and Chalmers broadly accept this computational theory of mind, but claim that tools can become seamlessly integrated into how we think. Objects such as smartphones or notepads are often just as functionally essential to our cognition as the synapses firing in our heads. They augment and extend our minds by increasing our cognitive power and freeing up internal resources.

So if you’ve always got your smartphone with you, it’s possible to outsource things to it. For example, you don’t have to remember so many things; you just need to know how to retrieve them. In the age of voice assistants, that becomes ever easier.

This is known as the ‘extended mind thesis’.

This line of reasoning leads to some potentially radical conclusions. Some philosophers have argued that when we die, our digital devices should be handled as remains: if your smartphone is a part of who you are, then perhaps it should be treated more like your corpse than your couch. Similarly, one might argue that trashing someone’s smartphone should be seen as a form of ‘extended’ assault, equivalent to a blow to the head, rather than just destruction of property. If your memories are erased because someone attacks you with a club, a court would have no trouble characterising the episode as a violent incident. So if someone breaks your smartphone and wipes its contents, perhaps the perpetrator should be punished as they would be if they had caused a head trauma.

These are certainly questions I’m interested in. I’ve seen some predictions that Philosophy graduates are going to be earning more than Computer Science graduates in a decade’s time. I can see why (and I certainly hope so!)

Source: Aeon

The ‘loudness’ of our thoughts affects how we judge external sounds

This is really interesting:

The “loudness” of our thoughts — or how we imagine saying something — influences how we judge the loudness of real, external sounds, a team of researchers from NYU Shanghai and NYU has found.

No-one but you knows what it’s like to be inside your head and be subject to the constant barrage of hopes, fears, dreams — and thoughts:

“Our ‘thoughts’ are silent to others — but not to ourselves, in our own heads — so the loudness in our thoughts influences the loudness of what we hear,” says Poeppel, a professor of psychology and neural science.

Using an imagery-perception repetition paradigm, the team found that auditory imagery will decrease the sensitivity of actual loudness perception, with support from both behavioural loudness ratings and human electrophysiological (EEG and MEG) results.

“That is, after imagined speaking in your mind, the actual sounds you hear will become softer — the louder the volume during imagery, the softer perception will be,” explains Tian, assistant professor of neural and cognitive sciences at NYU Shanghai. “This is because imagery and perception activate the same auditory brain areas. The preceding imagery already activates the auditory areas once, and when the same brain regions are needed for perception, they are ‘tired’ and will respond less.”

This is why meditation, whether stilling your mind or reflecting on positive things you’ve read, is such a useful activity.

As anyone who’s studied philosophy, psychology, and/or neuroscience knows, we don’t experience the world directly, but find ways to interpret the “bloomin’ buzzin’ confusion”:

According to Tian, the study demonstrates that perception is a result of interaction between top-down (e.g. our cognition) and bottom-up (e.g. sensory processing of external stimulation) processes. This is because human beings not only receive and analyze upcoming external signals passively, but also interpret and manipulate them actively to form perception.

Source: Science Daily

What we can learn from Seneca about dying well

As I’ve shared before, next to my bed at home I have a memento mori, an object to remind me before I go to sleep and when I get up that one day I will die. It kind of puts things in perspective.

“Study death always,” Seneca counseled his friend Lucilius, and he took his own advice. From what is likely his earliest work, the Consolation to Marcia (written around AD 40), to the magnum opus of his last years (63–65), the Moral Epistles, Seneca returned again and again to this theme. It crops up in the midst of unrelated discussions, as though never far from his mind; a ringing endorsement of rational suicide, for example, intrudes without warning into advice about keeping one’s temper, in On Anger. Examined together, Seneca’s thoughts organize themselves around a few key themes: the universality of death; its importance as life’s final and most defining rite of passage; its part in purely natural cycles and processes; and its ability to liberate us, by freeing souls from bodies or, in the case of suicide, to give us an escape from pain, from the degradation of enslavement, or from cruel kings and tyrants who might otherwise destroy our moral integrity.

Seneca was forced to take his own life by his own pupil, the more-than-a-little-crazy Roman Emperor, Nero. However, his whole life had been a preparation for such an eventuality.

Seneca, like many leading Romans of his day, found that larger moral framework in Stoicism, a Greek school of thought that had been imported to Rome in the preceding century and had begun to flourish there. The Stoics taught their followers to seek an inner kingdom, the kingdom of the mind, where adherence to virtue and contemplation of nature could bring happiness even to an abused slave, an impoverished exile, or a prisoner on the rack. Wealth and position were regarded by the Stoics as adiaphora, “indifferents,” conducing neither to happiness nor to its opposite. Freedom and health were desirable only in that they allowed one to keep one’s thoughts and ethical choices in harmony with Logos, the divine Reason that, in the Stoic view, ruled the cosmos and gave rise to all true happiness. If freedom were destroyed by a tyrant or health were forever compromised, such that the promptings of Reason could no longer be obeyed, then death might be preferable to life, and suicide, or self-euthanasia, might be justified.

Given that death is the last taboo in our society, it’s an interesting way to live your life. Being ready at any time to die, having lived a life that you’re satisfied with, seems like the right approach to me.

“Study death,” “rehearse for death,” “practice death”—this constant refrain in his writings did not, in Seneca’s eyes, spring from a morbid fixation but rather from a recognition of how much was at stake in navigating this essential, and final, rite of passage. As he wrote in On the Shortness of Life, “A whole lifetime is needed to learn how to live, and—perhaps you’ll find this more surprising—a whole lifetime is needed to learn how to die.”

Source: Lapham’s Quarterly

Humans are not machines

Can we teach machines to be ‘fully human’? It’s a fascinating question, as it makes us think carefully about what it actually means to be a human being.

Humans aren’t just about inputs and outputs. There are some things that we ‘know’ in different ways. Take music, for example.

In philosophy, it’s common to describe the mind as a kind of machine that operates on a set of representations, which serve as proxies for worldly states of affairs, and get recombined ‘offline’ in a manner that’s not dictated by what’s happening in the immediate environment. So if you can’t consciously represent the finer details of a guitar solo, the way is surely barred to having any grasp of its nuances. Claiming that you have a ‘merely visceral’ grasp of music really amounts to saying that you don’t understand it at all. Right?

There are activities we do and actions we perform that aren’t the result of conscious thought. What status do we give them?

Getting swept up in a musical performance is just one among a whole host of familiar activities that seem less about computing information, and more about feeling our way as we go: selecting an outfit that’s chic without being fussy, avoiding collisions with other pedestrians on the pavement, or adding just a pinch of salt to the casserole. If we sometimes live in the world in a thoughtful and considered way, we go with the flow a lot, too.

What sets humans apart from animals is the ability to plan and to pay attention to abstract things and ideas:

Now, the world contains many things that we can’t perceive. I am unlikely to find a square root in my sock drawer, or to spot the categorical imperative lurking behind the couch. I can, however, perceive concrete things, and work out their approximate size, shape and colour just by paying attention to them. I can also perceive events occurring around me, and get a rough idea of their duration and how they relate to each other in time. I hear that the knock at the door came just before the cat leapt off the couch, and I have a sense of how long it took for the cat to sidle out of the room.

Time is one of the most abstract of the day-to-day things we deal with as humans:

Our conscious experience of time is philosophically puzzling. On the one hand, it’s intuitive to suppose that we perceive only what’s happening right now. But on the other, we seem to have immediate perceptual experiences of motion and change: I don’t need to infer from a series of ‘still’ impressions of your hand that it is waving, or work out a connection between isolated tones in order to hear a melody. These intuitions seem to contradict each other: how can I perceive motion and change if I am only really conscious of what’s occurring now? We face a choice: either we don’t really perceive motion and change, or the now of our perception encompasses more than the present instant – each of which seems problematic in its own way. Philosophers such as Franz Brentano and Edmund Husserl, as well as a host of more recent commentators, have debated how best to solve the dilemma.

So where does that leave us in terms of the differences between humans and machines?

Human attempts at making sense of the world often involve representing, calculating and deliberating. This isn’t the kind of thing that typically goes on in the 55 Bar, nor is it necessarily happening in the Lutheran church just down the block, or on a muddy football pitch in a remote Irish village. But gathering to make music, play games or engage in religious worship are far from being mindless activities. And making sense of the world is not necessarily just a matter of representing it.

To me, that last sentence is key: the world isn’t just representations. It’s deeper and more visceral than that.

Source: Aeon

Are cows less valuable than wolves?

When debating with people, one of my go-to approaches is getting them to think through the logical consequences of their actions. Effectively, I’m a serial invoker of Kant’s categorical imperative: what would happen if everyone acted like this?

This article gets people to think about a world full of vegans:

Vegetarianism and veganism are becoming more popular. Alternative sources of protein, including lab-grown meat, are becoming available. This trend away from farmed meat-eating looks set to continue. From an environmental perspective and a welfare perspective, that’s a good thing. But how far should we go? Would it be good if the last cow died?

Well, let’s think it through…

There is a distinct difference between cattle on the one hand, and pandas and wolves on the other. Modern cattle owe their existence to selective breeding by human beings: they are very different animals from the wild oxen from which they are descended. We might think that this difference is relevant to their moral value. We might think, that is, along the following lines: we have a duty to preserve the natural world as far as we can. Wolves and pandas belong to that natural world; they occupy their place in it due to the mechanisms of evolution. So we have a duty to preserve them (not an absolute duty of course: rather one duty among many others – to our children, to each other, and so on – each of which makes different and sometimes conflicting demands on us).

Right, so that’s quite complex.

If we think, as I do, that being cultural is itself an adaptation, a natural feature of human beings, then we shouldn’t think that the ways in which we are cultural exempt us from nature, or that the products of our culture are themselves unnatural.

In other words, we should put to one side our status as mammals at the top of the food chain when thinking about this stuff. Fascinating.

Source: Aeon