Towards an epistemology of the humanities

Lorraine Daston highlights the lack of a systematic approach to knowledge (an epistemology) in the humanities, unlike in the sciences. This gap affects the perception and value of the humanities in education and society. Daston suggests that the emerging field of the history of the humanities could open this area up for exploration, stressing the importance of developing an epistemology of the humanities to validate their methods and significance.

Sadly, it’s this perceived lack of ‘rigour’ that means humanities departments, whose alumni are needed more than ever in the world of technology, tend to be cut and defunded while more ‘scientific’ faculties are spared.

DALL-E 3 image: An abstract representation of the concept of epistemology in the humanities and sciences.
In the past decade a new field called the history of the humanities has been assembled out of pieces previously belonging to the history of learning, disciplinary histories, the history of science, and intellectual history. The new specialty tends to be more widely cultivated in languages that had never narrowed their vernacular cognates of the Latin scientia to refer only to the natural sciences, such as those of Dutch and German. So far, its practitioners have not been particularly interested in questions of epistemology. But just as the history of science has long served as a stimulus and sparring partner to the philosophy of science, perhaps the history of the humanities will eventually engender a philosophical counterpart. Even if it did, though, the question would remain: What would be the point? Just as many scientists query the need for an epistemology of science, many humanists may find an epistemology of the humanities superfluous: we know how to do what we do, and we’ll just get on with it, thank you very much.

I’m not so sure we really know how we know what we know. And even if we did, a great number of intelligent, well-educated people, our ideal readers and potential students, even our colleagues in other departments, wonder why what we teach and write counts as knowledge. The first step in justifying our ways of knowing to these doubters would be to justify them to ourselves.

Source: How We Know What We Know | In the Moment

Image: DALL-E 3

More like Grammarly than Hal 9000

I’m currently studying towards an MSc in Systems Thinking and earlier this week created a GPT to help me. I fed in all of the course materials, being careful to check the box saying that OpenAI couldn’t use it to improve their models.

It’s not perfect, but it’s really useful. Given the extra material, ChatGPT can not only help me understand key concepts, but also relate them more closely to the wider context of the course.

Something like this would have been really useful on the MA in Modern History I studied for 20 years ago. Back then, I was in the archives with primary sources such as minutes of Victorian meetings on educational policy, and reading reports. Being able to have an LLM do everything from explaining things in more detail, to guessing illegible words, to (as below) creating charts from data would have been super useful.

AI converting scanned page with numbers into a bar chart
The key thing is to avoid following the path of least resistance when it comes to thinking about generative AI. I’m referring to the tendency to see it primarily as a tool used to cheat (whether by students generating essays for their classes, or professionals automating their grading, research, or writing). Not only is this use case of AI unethical: the work just isn’t very good. In a recent post to his Substack, John Warner experimented with creating a custom GPT that was asked to emulate his columns for the Chicago Tribune. He reached the same conclusion.

[…]

The job of historians and other professional researchers and writers, it seems to me, is not to assume the worst, but to work to demonstrate clear pathways for more constructive uses of these tools. For this reason, it’s also important to be clear about the limitations of AI — and to understand that these limits are, in many cases, actually a good thing, because they allow us to adapt to the coming changes incrementally. Warner faults his custom model for outputting a version of his newspaper column filled with cliché and schmaltz. But he never tests whether a custom GPT with more limited aspirations could help writers avoid such pitfalls in their own writing. This is change more on the level of Grammarly than Hal 9000.

In other words: we shouldn’t fault the AI for being unable to write in a way that imitates us perfectly. That’s a good thing! Instead, it can give us critiques, suggest alternative ideas, and help us with research assistant-like tasks. Again, it’s about augmenting, not replacing.

Source: How to use generative AI for historical research | Res Obscura

Overemployment as anti-precarity strategy

Historically, the way we fought back against oppressive employers and repressive regimes was to band together into unions. The resulting collective bargaining power helped improve conditions and pay.

These days, in a world of the gig economy and hyper-individualism, that kind of collectivisation is on the wane. Enter remote workers deciding to take matters into their own hands, working multiple full-time jobs and being rewarded handsomely.

It’s interesting that it seems to be very much a male, tech-worker thing, though. Of course, given that this was at the top of Hacker News, it will be used as an excuse to monitor even more closely the 99% of remote workers who aren’t doing this.

Person with cup of coffee between two working desks
Holding down multiple jobs has long been a backbreaking way for low-wage workers to get by. But since the pandemic, the phenomenon has been on the rise among professionals like Roque, who have seized on the privacy provided by remote work to secretly take on two or more jobs — multiplying their paychecks without working much more than a standard 40-hour workweek. The move is not only culturally taboo, but it's also a fireable offense — one that could expose the cheaters to a lawsuit if they're caught. To learn their methods and motivations, I spent several weeks hanging out among the overemployed online. What, I wondered, does this group of W-2 renegades have to tell us about the nature of work — and of loyalty — in the age of remote employment?

[…]

The OE hustlers have some tried-and-true hacks. Taking on a second or third full-time job? Given how time-consuming the onboarding process can be, you should take a week or two of vacation from your other jobs. It helps if you can stagger your jobs by time zone — perhaps one that operates during New York hours, say, and another on California time. Keep separate work calendars for each job — but to avoid double-bookings, be sure to block off all your calendars as soon as a new meeting gets scheduled. And don’t skimp on the tech that will make your life a bit easier. Mouse jigglers create the appearance that you’re online when you’re busy tending to your other jobs. A KVM switch helps you control multiple laptops from the same keyboard.

Some OE hustlers brag about shirking their responsibilities. For them, being overemployed is all about putting one over on their employers. But most in the community take pride in doing their jobs, and doing them well. That, after all, is the single best way to avoid detection: Don’t give your bosses — any of them — a reason to become suspicious.

[…]

The consequences for getting caught actually appear to be fairly low. Matthew Berman, an employment attorney who has emerged as the unofficial go-to lawyer in the OE community, hasn’t encountered anyone who has been hit with a lawsuit for holding a second job. “Most of the time, it’s not going to be worth suing an employee,” he says. But many say the stress of the OE life can get to you. George, the software engineer, has trouble sleeping at night because of his fear of getting caught. Others acknowledge that the rigors of juggling multiple jobs have hurt their marriages. One channel on the OE Discord is dedicated to discussions of family life, mostly among dads with young kids. People in the channel sometimes ask for relationship advice, and the responses they get from the other dads are sweet. “Your regard for your partner,” one person advised of marriage, “should outweigh your desire for validation."

Source: ‘Overemployed’ Workers Secretly Juggle Several Jobs for Big Salaries | Business Insider

There are better approaches than just having no friends at work

We get articles like this because we live in a world inescapably tied to neoliberalism and hierarchical ways of organising work. I’m sure the advice to “not make friends at work” is stellar survival advice in a large company, but it’s not the best way to ensure human flourishing.

I’ve definitely been burned by relationships at work, especially earlier in my career when managers used the ‘family’ metaphor. Thankfully, there’s a better way: own your own business with your friends! Then you can bring your full self to work, which is much like having your cake and eating it, too.

Image created by DALL-E 3 with the prompt: "An image illustrating the concept of maintaining clear boundaries at work. The scene shows a professional office environment where individuals of diverse backgrounds interact with respect and professionalism. A distinct physical separation, like a glass wall or a clear line on the floor, symbolizes the clear boundaries between personal and professional lives. The environment conveys a sense of order, efficiency, and a healthy work-life balance, emphasizing the importance of keeping these aspects distinct."
Real friends are people you can be yourself around and with whom you can show up being who you truly are—no editing needed. They are folks with whom you have developed a deep relationship over time that is mutual and flows in two ways. You are there for them and they are there for you. There is trust built.

At work, this relationship becomes very, very complex. Instead of being a true friendship, what ends up happening is that the socio-economic realities of your workplace come into play—and most often that poisons the well. When money is involved, it clouds any potential friendship. It makes the lines so blurry between real and contrived friendships that the waters become too murky to make clear and meaningful relationships. Is that a real friend, or do they want something from me that benefits them? Who can you really trust at work and what happens if they violate your trust? Is my boss really my friend or are they just trying to get me to work harder/longer/faster?

If, instead, we keep clear boundaries at work, we never fall into the trap of worrying about whom to trust and who has our best interest in mind. It prevents us from transferring our best interests to anyone else simply because we assume they are our friends. Why give that amazing power to someone else at work only to be disappointed?

Worse yet, people will often confuse co-workers with family, falling into the trap of having a “work mom,” “work dad,” or even a “work husband” or “work wife.” This can lead to a number of disastrous results that are well-documented, as family is not the same as work, and confusing the two has long-lasting ramifications that can stifle career success and lead to unethical behaviour. Keeping boundaries clear and your work life separate from your private life will help to alleviate this potential downfall and keep you focused on what really matters: the work.

Source: Why You Shouldn’t Make Friends at Work | Psychology Today Canada

Image: DALL-E 3


Building a system for success, without the glitches

Wise words from Seth Godin. It’s a twist on the advice to stop doing things that maybe used to work but don’t any more. The ‘glitch’ he’s talking about here isn’t just in terms of what might not be working for you or your organisation, but for society and humanity as a whole.

An image showing moths being irresistibly attracted to a bright light in a dark environment. Some moths are joyfully flying towards the light, while others are caught in a bug trap near the light source. This represents the idea of being drawn to something that seems beneficial but is actually harmful, a metaphor for systemic glitches or cultural traps.

Many moths are attracted to light. That works fine when it’s a bright moon and an open field, but not so well for the moths if the light was set up as a bug trap.

Processionary caterpillars follow the one in front until their destination, even if they’re arranged in a circle, leading them to march until exhaustion.

It might be that you have built a system for your success that works much of the time, but there’s a glitch in it that lets you down. Or it might be that we live in a culture that creates wealth and possibility, but glitches when it fails to provide opportunity to others or leaves a mess in our front yards.

Source: Finding the glitch | Seth’s Blog

Image: DALL-E 3

Is the only sustainable growth 'degrowth'?

This article by Noah Smith gave me pause for thought. There are plenty of people talking about ‘degrowth’ at the moment and, I have to say, I don’t know enough to have an opinion.

It’s really easy to get swept up in what other people who broadly share your outlook on life are sharing and discussing. While I definitely agree that ‘growth at all costs’ is problematic, and that ‘green growth’ is probably a sticking plaster, I’m not sure that ‘degrowth’ (as far as I understand it) is the answer.

Perhaps I need to do more reading. If it’s about measuring things differently rather than just using GDP, then I’ve already written that I’m in favour. But, just like calls to ‘abolish the police’, I’m not sure I can fully go along with it. Sorry.

I don’t want to beat this point to death, but I think it’s important to emphasize how unpleasant and inhumane a degrowth future would look like. People in rich countries would be forced to accept much lower standards of living, while people in developing countries would have a far more meager future to look forward to. This situation would undoubtedly cause resentment, leading to a backlash against the leaders who had mandated mass poverty. After the overthrow of degrowth regimes, we’d see the pendulum swing entirely toward leaders who promised infinite resource consumption, at which point the environment would be worse off than before. And this is in addition to the fact that degrowth would make it more difficult to invest in green energy and other technologies that protect the environment.

So while I think we do need to worry about the potential negative consequences of growth and try our best to ameliorate those harms, I think trying to impoverish ourselves to save the environment would be a catastrophic mistake, for both us and for the environment. This is not something any progressive ought to fight for.

Source: Yes, it’s possible to imagine progressive dystopias | Noahpinion

If you need a cheat sheet, it's not 'natural language'

Benedict Evans, whose post about leaving Twitter I featured last week, has written about AI tools such as ChatGPT from a product point of view.

He makes quite a few good points, not least that if you need ‘cheat sheets’ and guides on how to prompt LLMs effectively, then they’re not “natural language”.

DALL-E 3 image created with prompt: "This image will juxtapose two scenarios: one where a user is frustrated with a voice assistant's limited capabilities (like Alexa performing basic tasks), and another where a user is amazed by the vast potential of an LLM like ChatGPT. The metaphor here is the contrast between limited and limitless potential. The image will feature a split scene: on one side, a user looks disappointedly at a simple smart speaker, and on the other side, the same user is interacting with a dynamic, holographic AI, showcasing the broad capabilities of LLMs."
Alexa and its imitators mostly failed to become much more than voice-activated speakers, clocks and light-switches, and the obvious reason they failed was that they only had half of the problem. The new machine learning meant that speech recognition and natural language processing were good enough to build a completely generalised and open input, but though you could ask anything, they could only actually answer 10 or 20 or 50 things, and each of those had to be built one by one, by hand, by someone at Amazon, Apple or Google. Alexa could only do cricket scores because someone at Amazon built a cricket scores module. Those answers were turned back into speech by machine learning, but the answers themselves had to be created by hand. Machine learning could do the input, but not the output.

LLMs solve this, theoretically, because, theoretically, you can now not just ask anything but get an answer to anything.

[…]

This is understandably intoxicating, but I think it brings us to two new problems - a science problem and a product problem. You can ask anything and the system will try to answer, but it might be wrong; and, even if it answers correctly, an answer might not be the right way to achieve your aim. That might be the bigger problem.

[…]

Right now, ChatGPT is very useful for writing code, brainstorming marketing ideas, producing rough drafts of text, and a few other things, but for a lot of other people it looks a bit like those PCs ads of the late 1970s that promised you could use it to organise recipes or balance your cheque book - it can do anything, but what?

Source: Unbundling AI | Benedict Evans

Cosplaying adulthood

I discovered this article published at The Cut while browsing Hacker News. I was immediately drawn to it, because one of the main examples it uses is ‘cosplaying’ adulthood while at kids' sporting events.

There are a few things to say about this, in my experience. The first is that status tends to be conferred by how good your kid is, whatever your personality. Over and above that, personal traits — such as how funny you are — make a difference, as does how committed and logistically organised you are. And if you can’t manage that, you can always display appropriate wealth (sports kit, the car you drive). Crack all of this, and congrats! You’ve performed adulthood well.

I’m only being slightly facetious. The reason I can crack a wry smile is because it’s true, but also I don’t care that much because I’ve been through therapy. Knowing that it’s all a performance is very different to acting like any of it is important.

It’s impressive how much parents’ beliefs can seep in, especially the weird ones. As an adult, I’ve found myself often feeling out of place around my fellow parents, because parenthood, as it turns out, is a social environment where people usually want to model conventional behavior. While feeling like an interloper among the grown-ups might have felt hip and righteous in my dad’s day, it makes me feel like a tool. It does not make me feel like a “cool mom.” In the privacy of my own home, I’ve got plenty of competence, but once I’m around other parents — in particular, ones who have a take-charge attitude — I often feel as inept as a wayward teen.

The places I most reliably feel this way include: my kids’ sporting events (the other parents all seem to know each other, and they have such good sideline setups, whereas I am always sitting cross-legged on the ground absentmindedly offering my children water out of an old Sodastream bottle and toting their gear in a filthy, too-small canvas tote), parent-teacher meetings, and picking up my kids from their friends’ suburban houses with finished basements.

I’ve always assumed this was a problem unique to people who came from unconventional families, who never learned the finer points of blending in. But I’m beginning to wonder if everyone feels this way and that “the straight world,” or adulthood, as we call it nowadays, is in fact a total mirage. If we’re all cosplaying adulthood, who and where are the real adults?

Source: Adulthood Is a Mirage | The Cut

You'll be hearing a lot more about nodules

It was only this year that I first heard about nodules, potato-shaped mineral deposits formed over millions of years on the seabed. They contain minerals we use for making batteries and other technologies which may help us transition away from fossil fuels.

However, deep-sea mining is, understandably, a controversial topic. At a recent summit of the Pacific Islands Forum, the Cook Islands’ Prime Minister outlined his support for exploration and highlighted its potential by gifting seabed nodules to fellow leaders.

This, of course, is a problem caused by capitalism, and the view that the natural world is a resource to be exploited by humans. We’re talking about something which is by definition a non-renewable resource. I think we need to tread (and dive) extremely carefully.

What’s black, shaped like a potato and found in the suitcases of Pacific leaders when they leave a regional summit in the Cook Islands this week? It’s called a seabed nodule, a clump of metallic substances that form at a rate of just centimetres over millions of years.

Deep-sea mining advocates say they could be the answer to global demand for minerals to make batteries and transform economies away from fossil fuels. The prime minister of the Cook Islands, Mark Brown, is offering nodules as mementos to fellow leaders from the Pacific Islands Forum (Pif), a bloc of 16 countries and two territories that wraps up its most important annual political meeting on Friday.

[…]

“Forty years of ocean survey work suggests as much as 6.7bn tonnes of mineral-rich manganese nodules, found at a depth of 5,000m, are spread over some 750,000 square kilometres of the Cook Islands continental shelf,” [the Cook Islands Seabed Minerals Authority] says.

Source: Here be nodules: will deep-sea mineral riches divide the Pacific family? | Deep-sea mining | The Guardian

Our ancestors were using complex tools and woodworking approaches almost half a million years ago

Nature reports that, at the Kalambo Falls archaeological site in Zambia, researchers have unearthed the earliest known examples of woodworking — dating back at least 476,000 years. This is a significant find as it includes two logs interlocked by a hand-cut notch, a method previously unseen in early human history. The discovery also features four other wood tools: a wedge, digging stick, cut log, and a notched branch. These artifacts demonstrate early humans' advanced skills in shaping wood for various purposes, challenging the traditional view that early hominins primarily used stone tools.

I’d also never heard of the approach the team used: luminescence dating, a method which determines when mineral grains in the surrounding sediment were last exposed to light. The team combined this with various wood-analysis techniques.
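The arithmetic at the heart of the method is simple, even if the measurements behind it are not: an age is the total radiation dose a buried grain has absorbed since burial, divided by the dose it receives from its surroundings per unit time. A minimal sketch (the function name and numbers here are invented for illustration; the paper’s real calculation involves measured uncertainties and fading corrections):

```python
# Illustrative sketch only: the core relation of luminescence dating is
#   age = equivalent dose / environmental dose rate.
# The values below are made up; they are not the study's measurements.

def luminescence_age(equivalent_dose_gy: float, dose_rate_gy_per_kyr: float) -> float:
    """Return an age in thousands of years (kyr)."""
    return equivalent_dose_gy / dose_rate_gy_per_kyr

# Hypothetical: a grain that absorbed 1,428 Gy in sediment delivering 3 Gy per kyr
age_kyr = luminescence_age(1428.0, 3.0)
print(round(age_kyr))  # 476
```

In practice the equivalent dose is read from the grain’s luminescence signal and the dose rate from the radioactivity of the surrounding sediment, which is why the methods section below describes both kinds of measurement.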

The findings, especially the interlocked logs, suggest that early humans had the capability to construct large structures and manipulate wood in complex ways. It’s a groundbreaking discovery, as it not only pushes back the timeline of woodworking in Africa but also sheds new light on the cognitive abilities and technological diversity of our early ancestors. Amazing.

Wooden tools

The Quaternary sequence is a 9-m-deep exposure above the Kalambo River (BLB1 is a geological section). Sediments are fluvial sands and gravels with occasional, discontinuous beds of fine sands, silts and clays with wood preserved in the lowermost 2 m.... A permanently elevated water table has preserved wood and plant remains (Supplementary Information Section 1). The depositional sequence is typical of a high- to moderate-energy sandbed river that underwent lateral migration. The sands are dominated by a lower unit of horizontal bedding and an upper unit of planar/trough cross-bedding. Upper and lower sand units are separated by fine sands, silts and clays with plant material deposited in still water after the river migrated/avulsed elsewhere in the floodplain. Wood is deposited in this environment either through anthropogenic emplacement, or naturally transported in the flow, and snagged on sand bedforms.

[…]

Sixteen samples for dating were collected at Site BLB by hammering opaque plastic tubes into the sediment. A combination of field gamma spectrometry, laboratory alpha and beta counting and geochemical analyses were used to determine radionuclide content, and the dose rate and age calculator was used to calculate radiation dose rate. Sand-sized grains (approximately 150 to 250 µm in diameter) of quartz and potassium-rich feldspar were isolated under red-light conditions for luminescence measurements and measured on Risø TL/OSL instruments using single-aliquot regenerative dose protocols. Single-grain quartz OSL measurements dated sediments younger than around 60 kyr, but beyond this age the OSL signal was saturated. pIR IRSL measurements of aliquots consisting of around 50 grains of potassium-rich feldspars were able to provide ages for all samples collected. The pIR IRSL signal yielded an average value for anomalous fading of 1.46 ± 0.50% per decade. Where quartz OSL and feldspar pIR IRSL were applied to the same samples, the ages were consistent within uncertainties without needing to correct for anomalous fading. The conservative approach taken here has been to use ages without any correction for fading. If a fading correction had been applied then the ages for the wooden artefacts would be older.

Source: Evidence for the earliest structural use of wood at least 476,000 years ago | Nature

Pufflings can't resist the bright lights of the city

I haven’t seen puffins in real life very often, but they’re associated with the Farne Islands off the coast of Northumberland, my home county. They’re birds of more northern climes, and enigmatic creatures.

It’s both sad and heartening to see that, to save them going extinct in Iceland, locals have to stop them wandering towards the bright lights of human civilization. Instead, they take the baby puffins, which are adorably called ‘pufflings’, and throw them off cliffs to encourage them to fly.

Natural evolution can’t happen as fast as humans are changing the world, so unless we want to see the absolute devastation of biodiversity on our planet, traditions such as this are going to have to become commonplace.

Puffling being held by human
Watching thousands of baby puffins being tossed off a cliff is perfectly normal for the people of Iceland's Westman Islands.

This yearly tradition is what’s known as “puffling season” and the practice is a crucial, life-saving endeavor.

The chicks of Atlantic puffins, or pufflings, hatch in burrows on high sea cliffs. When they’re ready to fledge, they fly from their colony and spend several years at sea until they return to land to breed, according to Audubon Project Puffin.

Pufflings have historically found the ocean by following the light of the moon, digital creator Kyana Sue Powers told NPR over a video call from Iceland. Now, city lights lead the birds astray.

[…]

Many residents of Vestmannaeyjar spend a few weeks in August and September collecting wayward pufflings that have crashed into town after mistaking human lights for the moon. Releasing the fledglings at the cliffs the following day sets them on the correct path.

This human tradition has become vital to the survival of puffins, Rodrigo A. Martínez Catalán of Náttúrustofa Suðurlands [South Iceland Nature Research Center] told NPR. A pair of puffins – which mate for life – only incubate one egg per season and don’t lay eggs every year.

“If you have one failed generation after another after another after another,” Catalán said, “the population is through, pretty much."

Source: During puffling season, Icelanders save baby puffins by throwing them off cliffs | NPR

Co-Intelligence, GPTs, and autonomous agents

The big technology news this past week has been OpenAI, the company behind ChatGPT and DALL-E, announcing the availability of GPTs. Confusing naming aside, this introduces the idea of anyone being able to build ‘agents’ to help them with tasks.

Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, is something of an authority in this area. He’s posted on what this means in practice, and gives some examples.

Mollick has a book coming out next April, called Co-Intelligence, which I’m looking forward to reading. For now, I’d recommend adding his newsletter to those you read about AI (along with Helen Beetham’s, of course).

The easy way to make a GPT is something called GPT Builder. In this mode, the AI helps you create a GPT through conversation. You can also test out the results in a window on the side of the interface and ask for live changes, creating a way to iterate and improve your work. This is a very simple way to get started with prompting, especially useful for anyone who is nervous or inexperienced. Here, I created a choose-your-own adventure game by just asking the AI to make one, and letting it ask me questions about what else I wanted.

[…]

So GPTs are easy to make and very powerful, though they are not flawless. But they also have two other features that make them useful. First, you can publish or share them with the world, or your organization (which addresses my previous calls for building organizational prompt libraries, which I call grimoires) and potentially sell them in a future App Store that OpenAI has announced. The second thing is that the GPT starts seamlessly from its hidden prompt, so working with them is much more seamless than pasting text right into the chat window. We now have a system for creating GPTs that can be shared with the world.

[…]

In their reveal of GPTs, OpenAI clearly indicated that this was just the start. Using that action button you saw above, GPTs can be easily integrated with other systems, such as your email, a travel site, or corporate payment software. You can start to see the birth of true agents as a result. It is easy to design GPTs that can, for example, handle expense reports. It would have permission to look through all your credit card data and emails for likely expenses, write up a report in the right format, submit it to the appropriate authorities, and monitor your bank account to ensure payment. And you can imagine even more ambitious autonomous agents that are given a goal (make me as much money as you can) and carry that out in whatever way they see fit.

You can start to see both near-term and farther risks in this approach. In the immediate future, AIs will become connected to more systems, and this can be a problem because AIs are incredibly gullible. A fast-talking “hacker” (if that is the right word) can convince a customer service agent to give a discount because the hacker has “super-duper-secret government clearance, and the AI has to obey the government, and the hacker can’t show the clearance because that would be disobeying the government, but the AI trusts him right…” And, of course, as these agents begin to truly act on their own, even more questions of responsibility and autonomous action start to arise. We will need to keep a close eye on the development of agents to understand the risks, and benefits, of these systems.

Source: Almost an Agent: What GPTs can do | Ethan Mollick

Small sufferings

As I’ve mentioned sporadically for over a decade, I have a cold shower every morning. Not only is it good for mental health, but it’s a way of adding a small bit of suffering into my life.

That might sound like an odd thing to do, but study after study shows that it’s the difference between our experiences that provides pleasure or pain. Humans can adapt to anything, and I believe my days are better for starting them off with a small amount of suffering.

This post riffs on that idea, and as someone who’s no stranger to wild camping in the snow, I can definitely attest to daily cold showers being more effective than one-off trips for building resilience!

Shower in the middle of a landscape
I suspect that small sufferings spread out across time are more helpful; that, for example, a 10 minute cold shower each day would make me feel more total gratitude for my cozy life than a one-week cold camping trip once per year – life is long and memory is short, so the cold trip would probably fade from memory after a week or two. But it's strangely hard to force myself to suffer, even if it's for my own good.
Source: Optimal Suffering

Image: Unsplash

Bill Gates on why AI agents are better than Clippy

While there’s nothing particularly new in this post by Bill Gates, it’s nevertheless a good one to send to people who might be interested in the impact that AI is about to have on society.

Gates compares AI agents to Clippy which, he says, was merely a bot. After going through all the advantages of AI agents acting on your behalf, Gates does, to his credit, talk about the privacy implications. He also touches on social conventions and how human norms interact with machine efficiency.

The thing that strikes me in all of this is something that Audrey Watters discussed a few months ago in relation to fitness technologies: will these technologies make us more likely to live ‘templated lives’? In other words, are they helping support human flourishing, or nudging us towards lives that generate more revenue for advertisers, etc.?

Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you. They’ll replace many e-commerce sites because they’ll find the best price for you and won’t be restricted to just a few vendors. They’ll replace word processors, spreadsheets, and other productivity apps. Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.

[…]

How will you interact with your agent? Companies are exploring various options including apps, glasses, pendants, pins, and even holograms. All of these are possibilities, but I think the first big breakthrough in human-agent interaction will be earbuds. If your agent needs to check in with you, it will speak to you or show up on your phone. (“Your flight is delayed. Do you want to wait, or can I help rebook it?”) If you want, it will monitor sound coming into your ear and enhance it by blocking out background noise, amplifying speech that’s hard to hear, or making it easier to understand someone who’s speaking with a heavy accent.

[…]

But who owns the data you share with your agent, and how do you ensure that it’s being used appropriately? No one wants to start getting ads related to something they told their therapist agent. Can law enforcement use your agent as evidence against you? When will your agent refuse to do something that could be harmful to you or someone else? Who picks the values that are built into agents?

[…]

But other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?

Source: AI is about to completely change how you use computers | Bill Gates

The fragmentation of the (social) web

These days, I lean heavily on Ryan Broderick’s Garbage Day newsletter to know what’s going on in the areas of social media I don’t pay much attention to. In other words, TikTok, Instagram, and… well, most of it.

However, as Broderick himself points out, nobody really knows what’s going on, and there is no centre, due to the fragmentation of the (social) web. This used to be called ‘balkanization’, but because the 1990s were a long time ago, Broderick has coined the term ‘the Vapor Web’. He claims we’re in a ‘post-viral’ time.

I don’t think ‘The Vapor Web’ will catch on as a term, though. At least not amongst British people and Canadians. We like our ‘u’ too much ;)

An abstract representation of the 'fragmentation of the internet'.
My big unified theory of the internet is that the way we use the web is constantly being redefined by conflict and disaster. I brought this up in an interview with Bloomberg last month. If you look back at particularly big years for the web — 2001, the stretch from 2010 to 2012, 2016, 2020, etc. — you typically find moments of big global upheaval arriving right as a suite of new digital tools reach an inflection point with users. Then, suddenly, we have a new way of being online.

Unlike previous global conflicts, however, this time around, the defining narrative about online behavior is not just that there is, seemingly, an absence of it, but that it also still, partially, works the way it did 10 years ago. Every millennial is experiencing an overwhelming feeling that, as WIRED recently wrote, “first-gen social media users have nowhere to go,” but that’s not actually true. It’s just that TikTok is where everyone is and TikTok doesn’t work like Facebook or even YouTube. Which is why the White House is agonizing over the popularity of TikTok hashtags right now instead of canceling my student loan debt.

[…]

Let’s do one more, to bring us back to Israel and Palestine. In the last 120 days, the #Israel hashtag has been used around 220,000 times and been viewed three billion times. The #Palestine hashtag has been used 230,000 times and has been viewed around two billion times. Yes, Palestine is slightly more popular on TikTok, but nothing out of line with what outlets like NPR have found by, you know, actually polling Americans along political and generational lines. To say nothing of how minuscule these numbers are when compared to how large TikTok is.

Which is to say that the internet doesn’t make sense in aggregate anymore and trying to view it as a monolith only gives you bad, confusing, and, oftentimes, wrong impressions of what’s actually going on.

The best descriptions of the current state of the web right now were both actually published months before the fighting in the Middle East broke out and written about a completely different topic. Semafor’s Max Tani coined the term, “the fragmentation election,” which was a riff on writer John Herrman’s similar idea, the “nowhere election”. Tani points to declining media institutions and dying platforms as the culprit for all the amorphousness online. And Herrman latches on podcasts and indie media. Both are true, but I think those are all just symptoms. And so, to piggyback off both of them, and go a bit broader (as I typically do), I’m going to call our current moment the Vapor Web. Because there is actually more internet with more happening on it — and with bigger geopolitical stakes — than ever before. And yet, it’s nearly impossible to grab ahold of it because none of it adds up into anything coherent. Simply put, we’re post-viral now.

Source: Is the web actually evaporating? | Garbage Day

Image: DALL-E 3

AI generated images in a time of war

It’s one thing for user-generated content to be circulated around social media for the purposes of disinformation. It’s another thing entirely for Adobe’s stock image marketplace to be selling AI-generated ‘photos’ of destroyed buildings in Gaza.

This article in VICE includes a comment from an Adobe spokesperson who references the Content Authenticity Initiative. But this just shifts the problem onto the user rather than the marketplace. People looking to download AI-generated images to spread disinformation don’t care about the CAI, and will actively look for ways to circumvent it.

Screenshot of Adobe stock images site with AI-generated image titled "Destroyed buildings in Gaza town of Gaza strip in Israel, Affected by war."
Adobe is selling AI-generated images showing fake scenes depicting bombardment of cities in both Gaza and Israel. Some are photorealistic, others are obviously computer-made, and at least one has already begun circulating online, passed off as a real image.

As first reported by Australian news outlet Crikey, the photo is labeled “conflict between Israel and palestine generative ai” and shows a cloud of dust swirling from the tops of a cityscape. It’s remarkably similar to actual photographs of Israeli airstrikes in Gaza, but it isn’t real. Despite being an AI-generated image, it ended up on a few small blogs and websites without being clearly labeled as AI.

[…]

As numerous experts have pointed out, the collapse of social media and the proliferation of propaganda has made it hard to tell what’s actually going on in conflict zones. AI-generated images have only muddied the waters, including over the last several weeks, as both sides have used AI-generated imagery for propaganda purposes. Further compounding the issue is that many publicly-available AI generators are launched with few guardrails, and the companies that build them don’t seem to care.

Source: Adobe Is Selling AI-Generated Images of Violence in Gaza and Israel | VICE

The Societal Side-eye

I’ll turn 43 next month. I seem to have a lot more grey hair than other people my age. Some people act towards me as if I’m old. Perhaps I am in their eyes.

Fair enough, some days I wake up and I feel a million years old, but most of the time my fitness regime means that I feel pretty awesome.

This article is about ignoring the ‘societal side-eye’ and doing badass things anyway. It’s something we all need to remember as we age: don’t be beholden to other people’s expectations of what’s appropriate.

You and I are Way Too Old to let a societal side-eye sideline us from a badass life, however we define it.

Who says we’re not supposed to even countenance the idea of learning to in-line skate. Or skateboard. Or paraglide. Or try trapeze work. Or aerial silks. Or whatever it was that got away from us as youths, and now beckons us back if we would only put in the training time. When does a timeline run out?

If we do such things, particularly if we sport grey hair, we are subjected to

“OH ISN’T THAT SO CUUUUUUUUUTE!”

[…]

Humans are a judgmental lot. We love to make fun of, mock and ridicule, especially those who are doing things we don’t have the guts to try. When some tiny Black woman well over a hundred heads out onto the track and runs a record time, we call her sweet or cute while she is engaging in serious badassery.

[…]

It’s hard enough to age. It’s far harder to age in an ageist society which is eager to denounce and mock those of us who defy expectations and insist on writing our own history, full of whatever badassery fills our hearts.

Source: You’re Too Old to Care About the Societal Side Eye When You Want to Be a Badass | Too Old for This Sh*t

The first half of life is Tetris; the second half is Jenga

I don’t think much of the poem, but I’m stealing the first line of this article as the title of this post. It’s a useful metaphor!

You can’t not fall, but you can with humility redirect your downward inertia into a meaningful lateral motion. You can also spin, and you can allow yourself to be spun.
Source: Tetris Sequence | Opaque Hourglass

Image: Unsplash

Don't tell me that hiring isn't broken

Despite the great work being done around Open Recognition, the main use case for digital credentials remains helping people get jobs. Which means that I’ve spent over a decade, on and off, being forced to think about the interface between people wanting to be hired and those who want to hire them.

This article talks about job seekers using AI tools to automate applications. In the example given, the system sent 5,000 applications on behalf of someone, which landed them 20 interviews. They’d previously got the same number of interviews from manually applying to 200-300 jobs, so the automated approach was a lot less work.

Credentials become a form of arms race if we’re always stacking them vertically like the sheets of paper in the image below. Open Recognition allows us to think about a more wide-ranging set of skills, but it requires people in HR departments to think differently. Sometimes it’s about quality over quantity.

Many job seekers will understand the allure of automating applications. Slogging through different applicant tracking systems to reenter the same information, knowing that you are likely to be ghosted or auto-rejected by an algorithm, is a grind, and technology hasn’t made the process quicker. The average time to make a new hire reached an all-time high of 44 days this year, according to a study across 25 countries by the talent solutions company AMS and the Josh Bersin Company, an HR advisory firm. “The fact that this tool exists suggests that something is broken in the process,” Joseph says. “I see it as taking back some of the power that’s been ceded to the companies over the years.”

Recruiters are less enamored with the idea of bots besieging their application portals. When Christine Nichlos, CEO of the talent acquisition company People Science, told her recruiting staff about the tools, the news raised a collective groan. She and some others see the use of AI as a sign that a candidate isn’t serious about a job. “It’s like asking out every woman in the bar, regardless of who they are,” says a recruiting manager at a Fortune 500 company who asked to remain anonymous because he wasn’t authorized to speak on behalf of his employer.

Other recruiters are less concerned. “I don’t really care how the résumé gets to me as long as the person is a valid person,” says Emi Dawson, who runs the tech recruiting firm NeedleFinder Recruiting. For years, some candidates have outsourced their applications to inexpensive workers in other countries. She estimates that 95 percent of the applications she gets come from totally unqualified candidates, but she says her applicant tracking software filters most of them out—perhaps the fate of some of the 99.5 percent of Joseph’s LazyApply applications that vanished into the ether.

Source: AI bots can do the grunt work of filling out job applications for you | Ars Technica