Origami unicorn

Photo of origami unicorn by Jo Nakashima

Erin Kissane wrote a long essay about Threads and the Fediverse. It’s worth a read in its own right, but the thing that really stood out to me for some reason was a random-ish link to instructions for making an origami unicorn.

There is zero chance of me ever making this, but I’m passing it on in case you’re less bad at this kind of thing. For me, it’s not the folding that I find difficult, it’s the rotational 3D stuff. I even find it difficult putting the duvet cover on the right way round (much to my wife’s amusement/dismay).

This model was first designed in 2014, but this is an updated version with some “bug fixes” (legs are properly locked) and a color-changed horn.

Source: Jo Nakashima

The best antidote for the tendency to caricature one’s opponent

Daniel Dennett sitting in the woods, cleaning his glasses

Daniel Dennett is a philosopher whom I enjoyed reading as an undergraduate studying towards a Philosophy degree. I don’t think I’ve read him since, although his book Intuition Pumps and Other Tools for Thinking is on my list of books I’d like to read.

Maria Popova has extracted four rules that Dennett cites in Intuition Pumps, which originally come from the game theorist Anatol Rapoport. Sounds like good advice to me, especially in this fractured, fragmented world.

How to compose a successful critical commentary:

  1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

  2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).

  3. You should mention anything you have learned from your target.

  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

Source: The Marginalian

Image: Daniel Dennett (via The New Yorker)

Endlessly clever

Yellow side of a solved Rubik's cube on a yellow background

Ethan Marcotte takes a phrase used in passing by a friend and applies it to his own career. He makes a good point.

(I noticed that Marcotte’s logo resembles the Firefox imagery that was used while I was at Mozilla. I typed that organisation and his name into a search engine and serendipitously discovered With Great Tech Comes Great Responsibility, which I don’t think I’ve seen before?)

As tech workers, we’re expected to constantly adapt — to be, well, endlessly clever. We’re asked to learn the latest framework, the newest design tool, the latest research methodology. Our tools keep getting updated, processes become more complex, and the simple act of just doing work seems to get redefined overnight.

And crucially, we’re not the ones who get to redefine how we work. Most recently, our industry’s relentless investment in “artificial intelligence” means that every time a new Devin or Firefly or Sora announces itself, the rest of us have to ask how we’ll adapt this time.

Dunno. Maybe it’s time we step out of that negotiation cycle, and start deciding what we want our work to look like.

Source: Ethan Marcotte

Image: Daniele Franchi

When should you replace running shoes?

Photograph of back half of a running shoe showing midsole

John Sutton knows more about this area than I do. Not only is he an ultramarathon runner, but he also works in the area of ‘carbon literacy’ and sustainability. I’m also sure that he’s correct that the claim that you need to replace your running shoes after a certain number of miles is driven by marketing departments.

Still, I’ve definitely experienced creeping lower-back pain when getting to around 650 miles in a pair of running shoes. Of course, now I’m wondering whether it’s all psychosomatic…

With age and high mileage, it is said that the midsole no longer provides the cushioning that you need to prevent injury. This is cited as the main reason that shoes need replacing on a regular basis. Again, looking at the Lightboost midsole on these shoes, I see no evidence of crushing or squashing and I certainly don’t think I can feel any difference to the foot strike than when they were new. Obviously, any change in perceived cushioning is likely to be imperceptibly gradual and I could only really confirm that the cushioning was no longer up to snuff by comparing them directly with a new pair. These shoes are at a premium price (£170) and as such, I would expect them to be made of premium materials and built to last. My visual inspection of them suggests that they are still in excellent condition.

On the face of it, I see no obvious reason why I should retire these Ultraboost Lights any time soon. However, that seems to go against industry recommendations. What if invisible midsole damage has been so gradual that I haven’t noticed it? Now that I’ve reached 500 miles, am I likely to injure myself through continued usage? As a triathlete, I know from years of bitter experience that I am far more likely to injure myself on a run than I am cycling or swimming. So, anything I can do to improve my chances of not getting injured would be a powerful incentive to act. Thus, if it could be proven scientifically that buying a new pair of trainers every 300 – 500 miles would lessen my chances of injury, then I would take that evidence very seriously indeed.

[…]

In a previous blog post I discussed the carbon footprint of a pair of running shoes (usually between 8kg and 16kg of CO2 per pair). In the great scheme of things, this is not a huge figure (until you scale up to the billions of pairs of trainers sold each year and the realisation that virtually all of these are destined for landfill at end of life). My Ultraboosts have a significant content made from ocean plastic and recycled plastic which reduces their carbon footprint by 10% compared to the previous model made with non-recycled materials. 10% is better than nothing, and the use of some ocean plastic is much better than taking plastic bottles out of the recycling loop and spinning them into polyester. But, I can do a lot better than 10% by not swapping my shoes for a new pair until they are properly worn out. Simply by deciding to double the mileage and aiming for at least 1000 miles out of these shoes (hopefully more) I can at least halve the carbon footprint of my running shoe consumption.

Source: Irontwit

Human agency in a world of AI

An equilateral triangle with most of it shaded red except the top (pointy) bit which is shaded yellow. The red part is labelled 'The bit technology can do' and the yellow part is labelled 'The human bit'

Dave White, Head of Digital Education and Academic Practice at the University of the Arts London, reflects on a recent conference he attended where the tone seemed to be somewhat ‘defensive’. Rather than cheerleading for tech, the opening video and keynote focused instead on human agency.

White notes that while this may be heartening, it’s an overly simplistic narrative. The creative process involves technology of all types and descriptions; it’s not simply the case that humans “get inspired” and then use technology to achieve their ends.

The downside of these triangles is that they imply ‘development’ is a kind of ladder. You climb your way to the top where the best stuff happens. Anyone who has ever undertaken a creative process will know that it involves repeatedly moving up and down that ladder or rather, it involves iterating research, experimentation, analysis, reflection and creating (making). Every iteration is an authentic part of the process, every rung of the ladder is repeatedly required, so when I say technology allows us to spend more time at the ‘top’ of these diagrams, I’m not suggesting that we should try and avoid the rest.

I’d argue that attempting to erase the rest of the process with technology is missing the point(y). However, a positive reading would be that, as opposed to the zero-sum-gain notion, a well-informed incorporation of technology could make the pointy bit a bit bigger (or more pointy). The tech could support us to explore a constantly shifting and, I hope, expanding, notion of humanness. This idea is very much in tension with the Surveillance Capitalism, Silicon Valley, reading of our times. I’m not saying that the tech does support us to explore our humanity, I’m saying it could and what is involved in that ‘could’ is worth thinking about.

Source: David White

On preparing, issuing, and claiming badges

I attended a Navigatr webinar at lunchtime today where they shared this graphic which underscores the importance of encouraging badge earners to share their achievements on social networks.

What I appreciated about the webinar was the way in which the team explained the importance of preparing for, and then following up on, the issuing of a badge to ensure that it’s claimed.

Our study of several recently shared digital badges on social media as shown below showed that on average, a posted badge received 500-1k impressions and 25 interactions, of which, 4-5 were actual comments.  We found that the number of connections and days since posted lead to increases in the number of interactions.  Engagement seemed to plateau around 4-5 days and those with several hundred to 500+ connections were most likely to receive numerous interactions.  Location – whether the US or abroad did not seem to matter, suggesting the power of social media is universal when it comes to engagement.

Source: Improve Brand Engagement with Digital Badges | BadgeCert

Telling stories using cartoons

Liza Donnelly is a cartoonist for the New Yorker. In this article, which came out of some preparatory work for an upcoming talk, she talks about how the best cartoons work.

I’ve had the privilege of working with Bryan Mathers over the last decade and it really is a fascinating process. In fact, he’s just delivered a bunch of artwork for the work we’re doing around Open Recognition. Check it out here!

New Yorker cartoon
Story is everywhere. In single panel cartoons, they have to be kept in one image. It’s tricky and challenging and I love it. I like to say that a single panel cartoon is like a mini stage. The artist is a set designer, choreographer, script writer, costume designer, casting director. Each element in the drawing needs to be necessary for the idea, no more, no less; there are exceptions of course. Some creators are known for a style that is overly detailed and complicated, and that is part of the voice of the artist and contributes to the story. The image is a moment in time, and you have to feel that there is time before the moment you see, and a continuation after that moment. And the characters are well “described” in the execution.

[…]

Bottom line: story in the best New Yorker cartoons tell us a story about the characters that are in the drawing, and about ourselves. This is why we love them so much—they are fun, entertaining and are about us.

Source: Storytelling In Drawing | Seeing Things

AI = surveillance

Social networks are surveillance systems. Loyalty cards are surveillance systems. AI language models are surveillance systems.

We live in a panopticon.

Why is it that so many companies that rely on monetizing the data of their users seem to be extremely hot on AI? If you ask Signal president Meredith Whittaker (and I did), she’ll tell you it’s simply because “AI is a surveillance technology.”

Onstage at TechCrunch Disrupt 2023, Whittaker explained her perspective that AI is largely inseparable from the big data and targeting industry perpetuated by the likes of Google and Meta, as well as less consumer-focused but equally prominent enterprise and defense companies. (Her remarks lightly edited for clarity.)

“It requires the surveillance business model; it’s an exacerbation of what we’ve seen since the late ’90s and the development of surveillance advertising. AI is a way, I think, to entrench and expand the surveillance business model,” she said. “The Venn diagram is a circle.”

“And the use of AI is also surveillant, right?” she continued. “You know, you walk past a facial recognition camera that’s instrumented with pseudo-scientific emotion recognition, and it produces data about you, right or wrong, that says ‘you are happy, you are sad, you have a bad character, you’re a liar, whatever.’ These are ultimately surveillance systems that are being marketed to those who have power over us generally: our employers, governments, border control, etc., to make determinations and predictions that will shape our access to resources and opportunities.”

Source: Signal’s Meredith Whittaker: AI is fundamentally ‘a surveillance technology’ | TechCrunch

Screens, addiction, and parenting

I spent my lunchtime packaging up my beloved PlayStation 5. I’m going to send it to my brother-in-law and his family until my son heads off to university. This directly impacts me and my extra-curricular activities, but I’m at my wits’ end.

He can’t control his use of it, sadly. Combined with his smartphone use, it makes me feel like I’ve failed as a parent despite all of the things I’ve tried. I wrote my doctoral thesis on digital literacies, for goodness’ sake.

Ben Werdmuller’s at the other end of the spectrum with his son. I wish him the best of luck.

Kid under chair looking at screens
We walk our son to daycare via the local elementary school. This morning, as we wheeled his empty stroller back past the building, a school bus pulled up outside and a stream of eight-year-olds came tumbling out in front of us. As we stood there and watched them walk one by one into the building, I saw iPhone after iPhone after iPhone clutched in chubby little hands. Instagram; YouTube; texting.

It’s obvious that he’ll get into computers early: he’s the son of someone who learned to write code at the same time as writing English and a cognitive scientist who does research for a big FAANG company. Give him half a chance and he’ll already grab someone’s phone or laptop and find modes none of us knew existed — and he’s barely a year old. The only question is how he’ll get into computers.

[…]

He’s entering a very different cultural landscape where computers occupy a very different space. Those early 8-bit machines were, by necessity, all about creation: you often had to type in a BASIC script before you could use any software at all. In contrast, today’s devices are optimized to keep you consuming, and to capture your engagement at all costs. Those iPhones those kids were holding are designed to be addiction machines.

Source: Parenting in the age of the internet | Ben Werdmuller

Conspicuously sesquipedalian communication

Getting people to understand your ideas is a difficult thing. That’s why it’s been so gratifying to work at various times with Bryan Mathers over the last decade. We humans are much better at processing visual inputs than deciphering text.

That being said, as Derek Thompson shows in this article, you have to begin with the realisation that simple is smart. It’s much easier to just write down what’s in your head than to do so in a way that’s easy for others to understand.

In some ways, this reminds me of my work on ambiguity, which was a side-product of the work I did on my doctoral thesis. It’s also a good reminder that one of the best uses that most people can make of AI tools such as ChatGPT is to simplify their work.

Shadow of person typing
High school taught me big words. College rewarded me for using big words. Then I graduated and realized that intelligent readers outside the classroom don’t want big words. They want complex ideas made simple.  If you don’t believe it from a journalist, believe it from an academic: “When people feel insecure about their social standing in a group, they are more likely to use jargon in an attempt to be admired and respected,” the Columbia University psychologist Adam Galinsky told me. His study and other research found that when people use complicated language, they tend to come across as low-status or less intelligent. Why? It’s the complexity trap: Complicated language and jargon offer writers the illusion of sophistication, but jargon can send a signal to some readers that the writer is dense or overcompensating. Conspicuously sesquipedalian communication can signal compensatory behavior resulting from suboptimal perspective-taking strategies. What? Exactly; never write like that. Smart people respect simple language not because simple words are easy, but because expressing interesting ideas in small words takes a lot of work.

Source: Why Simple Is Smart | The Atlantic

What people are really using generative AI for

As I’ve written several times before here on Thought Shrapnel, society seems to act as though the giant, monolithic, hugely profitable porn industry just doesn’t… exist? This despite the fact it tends to be a driver of technical innovation. I won’t get into details, but feel free to search for phrases such as ‘teledildonics’.

So this article from the new (and absolutely excellent) 404 Media on a venture capital firm’s overview of the emerging generative AI industry shouldn’t come as too much of a surprise. As a society and as an industry, we don’t make progress on policy, ethics, and safety by pretending things aren’t happening.

As a father, I find this kind of news more than a little disturbing. And we don’t deal with any of it by burying our heads in the sand, shaking our heads, or crossing our fingers.

The Andreessen Horowitz (also called a16z) analysis is derived from crude but telling data—internet traffic. Using website traffic tracking company Similarweb, a16z ranks the top 50 generative AI websites on the internet by monthly visits, as of June 2023. This data provides an incomplete picture of what people are doing with AI because it’s not tracking use of popular AI apps like Replika (where people sext with virtual companions) or Telegram chatbots like Forever Companion, which allows users to talk to chatbots trained on the voices of influencers like Amouranth and Caryn Marjorie (who just want to talk about sex).

[…]

What I can tell you without a doubt by looking at this list of the top 50 generative AI websites is that, as has always been the case online and with technology generally, porn is a major driving force in how people use generative AI in their day to day lives.

[…]

Even if we put ethical questions aside, it is absurd that a tech industry kingmaker like a16z can look at this data, write a blog titled “How Are Consumers Using Generative AI?” and not come to the obvious conclusion that people are using it to jerk off. If you are actually interested in the generative AI boom and you are not identifying porn as core use for the technology, you are either not paying attention or intentionally pretending it’s not happening.

Source: 404 Media Generative AI Market Analysis: People Love to Cum

Oh great, another skills passport

I’ve spent the last 12 years working in the ecosystem around Open Badges, which provides an alternative accreditation system. It didn’t come out of thin air, and before this there was plenty of work around e-portfolios. Next up we’ve got Verifiable Credentials which allow for lots of things, including endorsement.

Frustratingly, over the past couple of decades, people several steps removed from actual job markets and education systems have decided to weigh in. Inevitably, they use the metaphor closest to hand, which tends to be a ‘passport’.

Not only is this the wrong metaphor, but it also diverts money and attention from fixing some of the real issues in the system. I’d suggest that these are at least threefold:

  1. Taxonomic straitjackets — we don't tend to recognise everything that makes for a valuable employee or colleague. There are behaviours that are valuable, as well as esoteric knowledge and skills that don't fit into pre-defined taxonomies.
  2. Hiring is broken — this deserves a whole other blog post, but current systems tend to automate the very things that need a human touch. Hence, applicants spend an inordinate amount of time searching for and applying for jobs, while algorithms reject people who would be a perfectly good fit.
  3. References are outdated — one organisation I used to work for stopped taking references because a) in most jurisdictions, it's against the law to make negative comments, and b) they're generally unreliable. Yet the whole system is predicated on them. Endorsements and recommendations based on network relationships are much more valuable.

I could go on, and probably will over at my personal blog. Or perhaps the Australian government can give me $9.1 million to point them in the right direction.

The passport system is intended to help workers advertise their full range of qualifications, micro-credentials, prior learning, workplace experience and general capabilities.

Businesses, unions, tertiary institutions and students are among those the federal government says will be consulted about the initiative.

Treasurer Jim Chalmers said the goal was to make it easier for employers to find highly-qualified staff and for workers to have their qualifications recognised.

“We want to make it easier for more workers in more industries to adapt and adopt new technology and to grab the opportunities on offer in the defining decade ahead of us,” Chalmers said.

Source: National Skills Passport: Government aims to connect workers and employers | SBS News

If your heart isn’t in it, it’s probably because there’s no heart anywhere in the process

One thing I’ve learned spending over a decade thinking about Open Badges and alternative credentials is that hiring is broken. Although there are mitigations and workarounds — some of which I’ve implemented when hiring a team and helping others do so — the whole thing is a dumpster fire.

This article by Paul Fuhr discusses the horror show that is job hunting in the age of platforms such as Indeed. He does a great job of showing how automated and dehumanising the whole hiring process is. Platforms are more focused on user engagement than genuinely aiding job seekers; applicants are reduced to mere data points.

Not only that, but the lack of human-centricity in the whole process fails to accommodate those with non-linear careers while simultaneously trivialising the job search. Unsurprisingly, he’s calling for root-and-branch reform of the current job market. I can’t help but think that badges and alternative credentials can make the whole thing more transparent and fair, moving away from automated metrics.

I’ve applied for (quite literally) thousands of jobs. Very quickly, I went from being surgically precise about job applications to taking a shotgun-blast approach to it all, spraying applications out in every direction. I’ve clicked the “Submit” button on countless career sites. I’ve created four different versions of my resume. I’ve spent more time on LinkedIn than any other site, too, though I suspect Reddit is happy to have some server bandwidth back.

Searching for a steady job is a disheartening and depressingly tedious affair, but it doesn’t have to be. If I’m qualified for anything at the moment, though, it’s being qualified to weigh in on the contemporary job-search experience. I know what it is, what it isn’t, what it pretends to be, why it no longer works, and what needs to change. And thanks to a year-plus of trying to find consistent work, it’s no longer about connecting me with the job of my dreams — it’s about connecting me with my dream of simply having a job.

[…]

Machine learning, AI, automation, yadda yadda yadda. I get it. I understand the “why” of automating the hiring process; I even think it can be a helpful (jargon alert) “arrow in the quiver” for HR. I can’t even imagine a single HR specialist being tasked to locate the right candidate from a huge field of applicants for one job, let alone fifteen jobs at once. That’s like finding a needle in a stack of needles. It’d be paralyzing.

That said, hiring managers and job seekers have arrived at a truly dangerous intersection. Employers have allowed automation to creep in and govern so much of the HR process that it threatens to ignore the whole…well, you know, human part of it all. And some companies insist on doubling-down on this façade; I’ve visited a shocking number of sites that pretend to have an actual human person ready to chat with you (certainly not a bot!), as if they’re impossibly waiting 24/7 to answer your questions.

We’re at a maddeningly mindless moment when it comes to finding employment, but it’s one that could be repaired with some maddeningly simple ideas. For starters, just bring back some humans. Robots can parse your past and distill you down into data, but they’ll never make a genuine connection or get a sense of who you are. Also, simplicity works both ways: it benefits the applicant as much as an HR specialist.

Source: Why Resumes Are Dead & How Indeed.com Keeps Killing the Job Market | Paul Fuhr

A trickle, a ripple, a slow rush

This article by Antonia Malchik reflects on her personal journey moving back to her hometown in Montana. It focuses on her deep sense of gratitude for the natural environment and community. She discusses the annual Gathering of the Glacier-Two Medicine Alliance, celebrating the retirement of the last remaining oil lease in the area, which is significant for the Blackfeet Nation.

The part of the article in which I’m most interested is towards the end: a reflective moment by a creek. She writes about the importance of being present in nature and contemplating one’s place and responsibilities in the world. I know that feeling of being in and of nature after a day’s walking, of feeling quite emotional. It stirs my soul just thinking about it.

On my way home, I stopped at a creek I’m fond of, near a trailhead leading into the Bob Marshall Wilderness. The parking lot was empty of other cars or people. Last year when I’d camped there, the creek had held a delightful number of cylindrical caddisfly shells constructed from gravel about the size of a sesame seed. I looked for them but it was too late in the year.

The creek ran cold across my bare feet, its sound and movement and chilly reminders of snowmelt all I really need in this world to ground myself in what’s real, and what matters. I sat there letting my feet go numb and the sound run through me, September’s late afternoon sunlight filtering through the aspen trees to glance off the water.

I don’t even know what to call that sound—a trickle, a ripple, a slow rush?

Sometimes the right answer is an action. Sometimes it’s a change in policy, or in culture. And sometimes it’s simply being, sitting there by a creek reminding yourself what it feels like to be alive, in a place you love. It’s asking questions of belonging and responsibility, and struggling with your own place in the world.

That sound is all of life to me. I could have sat there forever, grown cold and hungry, but I never for a moment would have felt alone.

Source: Sometimes there’s a right answer, sometimes you sit by a creek, and sometimes they’re the same thing | On The Commons

If LLMs are puppets, who's pulling the strings?

This article from the Mozilla Foundation surfaces the human decisions that shape generative AI. It highlights the ethical and regulatory implications of these decisions, such as data sourcing, model objectives, and the treatment of data workers.

What gets me about all of this is the ‘black box’ nature of it. Ideally, for example, I want it to be super-easy to train an LLM on a defined corpus of data — such as all Thought Shrapnel posts. Asking questions of that dataset would be really useful, as would an emergent taxonomy.
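
One pragmatic way to ask questions of a corpus like that, without training anything, is retrieval: pull out the posts most relevant to a question and hand them to whichever LLM you prefer as context. Here’s a minimal sketch of that idea in Python; the folder name and the simple TF-IDF-style scoring are illustrative assumptions, not anything Thought Shrapnel actually runs.

```python
import math
import re
from collections import Counter
from pathlib import Path


def tokenise(text: str) -> list[str]:
    """Lower-case a string and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())


def build_index(posts_dir: str) -> dict[Path, Counter]:
    """Map each Markdown post in a folder to a bag-of-words term count."""
    return {
        path: Counter(tokenise(path.read_text(encoding="utf-8")))
        for path in Path(posts_dir).glob("**/*.md")
    }


def top_posts(question: str, index: dict[Path, Counter], k: int = 3) -> list[Path]:
    """Rank posts by a simple TF-IDF-style overlap with the question."""
    n_docs = len(index)
    query_terms = tokenise(question)
    doc_freq = Counter(term for counts in index.values() for term in set(counts))

    def score(counts: Counter) -> float:
        return sum(
            counts[term] * math.log(n_docs / (1 + doc_freq[term]))
            for term in query_terms
        )

    return sorted(index, key=lambda p: score(index[p]), reverse=True)[:k]


if __name__ == "__main__":
    # "thought-shrapnel-posts" is a hypothetical local folder of exported posts.
    index = build_index("thought-shrapnel-posts")
    for post in top_posts("What has been written here about Open Badges?", index):
        print(post)  # these posts would then be passed to an LLM as context
```

Swapping the word-overlap scoring for proper embeddings, or feeding the retrieved posts into a fine-tuning pipeline, would be the obvious next steps; the point is just that the ‘black box’ only opens up once you can see and choose the corpus yourself.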

Generative AI products can only be trustworthy if their entire production process is conducted in a trustworthy manner. Considering how pre-trained models are meant to be fine-tuned for various end products, and how many pre-trained models rely on the same data sources, it’s helpful to understand the production of generative AI products in terms of infrastructure. As media studies scholar Luke Munn put it, infrastructures “privilege certain logics and then operationalize them”. They make certain actions and modes of thinking possible ahead of others. The decisions of the creators of pre-training datasets have downstream effects on what LLMs are good or bad at, just as the training of the reward model directly affects the fine-tuned end product.

Therefore, questions of accountability and regulation need to take both phases seriously and employ different approaches for each phase. To further engage in discussion about these questions, we are conducting a study about the decisions and values that shape the data used for pre-training: Who are the creators of popular pre-training datasets, and what values guide their work? Why and how did they create these datasets? What decisions guided the filtering of that data? We will focus on the experiences and objectives of builders of the technology rather than the technology itself with interviews and an analysis of public statements. Stay tuned!

Source: The human decisions that shape generative AI: Who is accountable for what? | Mozilla Foundation

Bad historical maps

Like the author of this article, I love a good map. Whether it’s trekking across hills and mountains with an OS map, or looking through historical maps, there’s something enchanting about understanding territories.

The thing is, though, that maps are literally projections. They leave things out and therefore have to be interpreted. If the maps are out of date, or are being used in a way that’s anachronistic, that leads to a huge problem.

As a History teacher, I used to teach WWI but didn’t know that General von Schlieffen, the Chief of the German General Staff, was obsessed with Hannibal and the Battle of Cannae. Apparently he used stories and maps of how it played out to inform his strategy. The problem was that, not only did it happen a couple of millennia beforehand, but it probably didn’t even play out that way.

Maps like this are a big part of why I became a historian. I probably spent more time looking through the volumes of Colin McEvedy’s Penguin Atlas of History series than any other book when I was a kid.... There’s something beguiling about the thought that a simple arrangement of lines might explain the world — like seeing human history as an enormous game of Civilization 6. But of course, that’s also the problem with using maps as a way of understanding history. If you’re not careful, they go from being helpful tools to misleading simplifications.

[…]

In her book The Guns of August, Barbara Tuchman argues that the memory of Cannae, which was passed down through a succession of military histories until it became a virtual obsession of strategists in the 19th century, helped push the world into an unimaginable catastrophe.

It did so by offering up a model of a “battle of annihilation” that Germany’s war planners believed they could unleash on France. At the head of these planners was General von Schlieffen, the Chief of the German General Staff. The map of Cannae haunted Schlieffen’s dreams.

[…]

Cannae was no vague inspiration. It was a direct model for Germany’s invasion of Belgium and France.

[…]

As the historian Martin Samuels pointed out in his article “the Reality of Cannae,” there is no archaeological evidence for the battle. Nor are there first-hand sources of any kind. Everything we know derives from accounts written sixty years or more after Cannae itself. Suffice to say, when Samuels dug into these sources, he found as many questions as answers. The detailed maps of movements at Cannae that decorated military strategy manuals for hundreds of years, in other words, were largely fanciful. Samuels calls Cannae “the most quoted and least understood battle” in history.

The simplicity of a historical map — the clear labels, the sharp edges, and above all the reduction of thousands or millions of people into abstract symbols — is a big part of why they’re so beguiling. But it’s also why they lead us astray.

[…]

It is sometimes said that the map is not the territory. The map is not the historical argument, either.

Instead, maps are a great way to pose questions about history. They are best approached as a way in: an entry-point rather than an ending. They offer one path toward confronting the enormous complexity of “real” history — the kind made by individual people, on the decidedly imperfect and unmap-like terrain of the world.

Source: Historical maps probably helped cause World War I | Res Obscura

More treasures and secrets from ancient Egypt

Underwater archaeologists have discovered a sunken temple off Egypt’s Mediterranean coast, filled with artefacts related to the god Amun and the goddess Aphrodite. The temple was part of the ancient port city of Thonis-Heracleion, which sank due to a major earthquake and tidal waves.

When we discover the remnants of civilisations buried under the sea and in other places, it makes me think about humans in the future discovering what we leave behind. What will they think?

While exploring a canal off the Mediterranean coast of Egypt, underwater archaeologists discovered a sunken temple and a sanctuary brimming with ancient treasures linked to the god Amun and the goddess Aphrodite, respectively.

The temple, which partially collapsed “during a cataclysmic event” during the mid-second century B.C., was originally built for the god Amun; it was so important, pharaohs went to the temple “to receive from the supreme god of the ancient Egyptian pantheon the titles of their power as universal kings,” according to a statement from the European Institute for Underwater Archaeology (IEASM).

[…]

Also at the site divers found underground structures supported by “well-preserved wooden posts and beams” that dated to the fifth century B.C., they wrote in the statement.

[…]

The sanctuary also held a cache of Greek weapons, which could indicate that Greek mercenaries were in the region at one time “defending the access to the Kingdom” at the mouth of the Nile’s westernmost, or Canopic, branch, the researchers said in the statement.

Source: Sunken temple and sanctuary from ancient Egypt found brimming with ‘treasures and secrets’ | Live Science

Death, wrecks, and harsh weather

There was a time, about a decade ago, when, although I was based at home, I’d be travelling pretty much every week for work. I was abroad at least once a month.

These days, perhaps with the pandemic as a catalyst, I’m slightly more wary of travelling. It’s probably also a function of age and awareness of how routines affect my body. As an historian, though, I’ve always been amazed by those people who journeyed long distances.

This post by an academic historian of medicine and the body outlines some of the dangers such travellers faced. Pretty amazing, when you think about it.

Unlike today, when it’s entirely possible to have breakfast in London, lunch in Milan and be back at home in time for supper, travel in the early modern period was no easy undertaking. More than this, it was widely acknowledged to be inherently dangerous. What, then, were the perceived risks? Even a brief survey tells us a lot about how travel was regarded in health terms.

First was the risk of accident or death on the journey. In the seventeenth century even relatively short distances on horseback or in a carriage carried dangers. Falls from horses were common, causing injury or even death.

[…]

Travel by sea, even around local coasts, carried its own obvious risks of storm and wreck. So common and widely acknowledged were the vagaries of sea travel that a common reason for making a will in the early modern period was just before embarking on a voyage.

[…]

Once abroad, too, travellers were at the mercy of a bevy of dangers, from unfamiliar territories and extreme landscapes to harsh weather and climate, their safety contingent on the quality of their transport and the reliability of their guides.

[…]

Even ‘foreign’ food and drink could be risky. Thomas Tryon’s Miscellania (1696) noted the dangers of ‘intemperance’ and of misjudging the effects of climate upon the body in regard to drinking alchohol [sic]

Source: The Health Risks of Travel in Early-Modern Britain | Dr Alun Withey

Microcast #98 — Endorsement


The introduction to some thoughts on endorsement using Open Badges and Verifiable Credentials within networks of trust.

Show notes


Image: Unsplash

Virtual spaces for learning and collaboration

Today, I’ve been doing a UCL short course. As we were coming back from a break, we were discussing the lack of ‘embodiedness’ in virtual interactions. This reminded me of experiments with different platforms that WAO did during the pandemic.

This post by Alja from Tethix was prompted by a challenge-based learning pilot that I took part in last year. They’re focusing on tech ethics (hence the name) and their approach was great. It was just that the tools got in the way to some extent.

I think, after reading this, it’s time to experiment again with some of the tools mentioned in the post. Sometimes you do need a sense of play, and to feel connected in ways that go beyond small boxes on a screen.

The Tethix Archipelago emerged from the Challenge Based Learning pilot we did in March last year. For the pilot, we designed a unique collaborative online learning experience in tech ethics and used Mural collaborative whiteboards and visual storytelling to situate the learning journey in a fictional world: the Tethix Archipelago. The Archipelago consists of four islands that emerged from the four essential skills of the Challenge Based Learning journey: collaboration, exploration, practice, and reflection.

Mural turned out to be a great tool for collaboration and live session guidance, but it didn’t really convey a sense of place. Clicking on a link in a Mural to visit the next leg of your journey just doesn’t feel like traveling, especially when you’re trapped in the same little Zoom box during every live session.

So we started exploring tools that could help us convey a sense of space and discovered Gather and WorkAdventure, among others. These tools offer two-dimensional virtual collaborative spaces where you can walk around a space with your avatar and have proximity-based conversations by using your microphone and camera.

[…]

You might be thinking: this is cute and all, but is this Archipelago all games and play? Well, playfulness is a big part of why we’re experimenting with these game-like worlds; we know that play helps us learn better and can unlock our imagination. But there’s much more to it than just millennial nostalgia for pixel graphics.

As already mentioned, Gather allows us to build a sense of place, both inside rooms and between them. And a sense of place helps with learning and memory encoding. Historical records show ancient Greeks using the method of loci or memory palaces, a technique for improving memory encoding and retrieval, and humans have been developing other mnemonic techniques based on spatial relationship for much longer than that. We’re physical beings, uniquely equipped to understand space, whether physical or represented by pixels.

Source: Welcome to the Tethix Archipelago | Tethix