More like Grammarly than HAL 9000

    I’m currently studying towards an MSc in Systems Thinking and earlier this week created a GPT to help me. I fed in all of the course materials, being careful to check the box saying that OpenAI couldn’t use it to improve their models.

    It’s not perfect, but it’s really useful. Given the extra context, ChatGPT can not only help me understand key concepts, but also relate them more closely to the course as a whole.

    This would have been really useful during the MA in Modern History I did 20 years ago. Back then, I was in the archives reading reports and primary sources such as minutes from the meetings of Victorians discussing educational policy. Being able to have an LLM do everything from explaining things in more detail, to guessing illegible words, to (as below) creating charts from data would have been super useful.

    [Image: AI converting a scanned page of numbers into a bar chart]
    The key thing is to avoid following the path of least resistance when it comes to thinking about generative AI. I’m referring to the tendency to see it primarily as a tool used to cheat (whether by students generating essays for their classes, or professionals automating their grading, research, or writing). Not only is this use case of AI unethical: the work just isn’t very good. In a recent post to his Substack, John Warner experimented with creating a custom GPT that was asked to emulate his columns for the Chicago Tribune. He reached the same conclusion.

    […]

    The job of historians and other professional researchers and writers, it seems to me, is not to assume the worst, but to work to demonstrate clear pathways for more constructive uses of these tools. For this reason, it’s also important to be clear about the limitations of AI — and to understand that these limits are, in many cases, actually a good thing, because they allow us to adapt to the coming changes incrementally. Warner faults his custom model for outputting a version of his newspaper column filled with cliché and schmaltz. But he never tests whether a custom GPT with more limited aspirations could help writers avoid such pitfalls in their own writing. This is change more on the level of Grammarly than HAL 9000.

    In other words: we shouldn’t fault the AI for being unable to write in a way that imitates us perfectly. That’s a good thing! Instead, it can give us critiques, suggest alternative ideas, and help us with research assistant-like tasks. Again, it’s about augmenting, not replacing.

    Source: How to use generative AI for historical research | Res Obscura

    An end to rabbit hole radicalization?

    A new peer-reviewed study suggests that YouTube’s efforts to stop people being radicalized through its recommendation algorithm have been effective. The study monitored 1,181 people’s YouTube activity and found that only 6% watched extremist videos, with most of these deliberately subscribing to extremist channels.

    Interestingly, though, the study cannot account for user behaviour prior to YouTube’s 2019 algorithm changes, which means we can only wonder how influential the platform was in terms of radicalization in the years before then, which included some pretty significant elections.

    Around the time of the 2016 election, YouTube became known as a home to the rising alt-right and to massively popular conspiracy theorists. The Google-owned site had more than 1 billion users and was playing host to charismatic personalities who had developed intimate relationships with their audiences, potentially making it a powerful vector for political influence. At the time, Alex Jones’s channel, Infowars, had more than 2 million subscribers. And YouTube’s recommendation algorithm, which accounted for the majority of what people watched on the platform, looked to be pulling people deeper and deeper into dangerous delusions.

    The process of “falling down the rabbit hole” was memorably illustrated by personal accounts of people who had ended up on strange paths into the dark heart of the platform, where they were intrigued and then convinced by extremist rhetoric—an interest in critiques of feminism could lead to men’s rights and then white supremacy and then calls for violence. Most troubling is that a person who was not necessarily looking for extreme content could end up watching it because the algorithm noticed a whisper of something in their previous choices. It could exacerbate a person’s worst impulses and take them to a place they wouldn’t have chosen, but would have trouble getting out of.

    […]

    The… research is… important, in part because it proposes a specific, technical definition of ‘rabbit hole’. The term has been used in different ways in common speech and even in academic research. Nyhan’s team defined a “rabbit hole event” as one in which a person follows a recommendation to get to a more extreme type of video than they were previously watching. They can’t have been subscribing to the channel they end up on, or to similarly extreme channels, before the recommendation pushed them. This mechanism wasn’t common in their findings at all. They saw it act on only 1 percent of participants, accounting for only 0.002 percent of all views of extremist-channel videos.

    Nyhan was careful not to say that this paper represents a total exoneration of YouTube. The platform hasn’t stopped letting its subscription feature drive traffic to extremists. It also continues to allow users to publish extremist videos. And learning that only a tiny percentage of users stumble across extremist content isn’t the same as learning that no one does; a tiny percentage of a gargantuan user base still represents a large number of people.

    Source: The World Will Never Know the Truth About YouTube’s Rabbit Holes | The Atlantic
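
    As an aside, the definition quoted above is mechanical enough to sketch in a few lines of code. What follows is purely illustrative and assumes details the excerpt doesn’t give (the tier names, their ordering, and the function itself are mine, not the researchers’); it just captures the rule that a view only counts as a rabbit hole event if a recommendation escalates someone to more extreme content from channels they weren’t already subscribed to.

        # Illustrative sketch only: my own naming, not the study's code or data.
        # A "rabbit hole event": a recommendation leads to a video from a more
        # extreme tier of channel than the viewer was previously watching, and
        # the viewer wasn't already subscribed to that tier (or a similarly
        # extreme one) beforehand.

        EXTREMITY = {"mainstream": 0, "alternative": 1, "extremist": 2}  # assumed ordering

        def is_rabbit_hole_event(arrived_via_recommendation, previous_tier,
                                 new_tier, subscribed_tiers):
            if not arrived_via_recommendation:
                return False
            escalated = EXTREMITY[new_tier] > EXTREMITY[previous_tier]
            already_subscribed = any(
                EXTREMITY[t] >= EXTREMITY[new_tier] for t in subscribed_tiers
            )
            return escalated and not already_subscribed

        # Example: someone watching mainstream channels, subscribed to nothing
        # more extreme, follows a recommendation to an extremist-channel video.
        print(is_rabbit_hole_event(True, "mainstream", "extremist", set()))  # True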

    Study shows no link between age at getting first smartphone and mental health issues

    Where we live is unusual for the UK: we have first, middle, and high schools. The knock-on effect of this in the 21st century is that kids aged nine are walking to school and, often, taking a smartphone with them.

    This study shows that the average age children were given a phone by parents was 11.6 years old, which meshes with the ‘norm’ (I would argue) in the UK of giving kids one when they go to secondary school.

    What I like about these findings is that parents overall seem to do a pretty good job. It’s been a constant battle with our eldest, who is almost 16, to be honest, but I think he’s developed some useful habits around technology.

    Parents fretting over when to get their children a cell phone can take heart: A rigorous new study from Stanford Medicine did not find a meaningful association between the age at which kids received their first phones and their well-being, as measured by grades, sleep habits and depression symptoms.

    […]

    The research team followed a group of low-income Latino children in Northern California as part of a larger project aimed at preventing childhood obesity. Little prior research has focused on technology acquisition in non-white or low-income populations, the researchers said.

    The average age at which children received their first phones was 11.6 years old, with phone acquisition climbing steeply between 10.7 and 12.5 years of age, a period during which half of the children acquired their first phones. According to the researchers, the results may suggest that each family timed the decision to what they thought was best for their child.

    “One possible explanation for these results is that parents are doing a good job matching their decisions to give their kids phones to their child’s and family’s needs,” Robinson said. “These results should be seen as empowering parents to do what they think is right for their family.”

    Source: Age that kids acquire mobile phones not linked to well-being, says Stanford Medicine study | Stanford Medicine

    Reading is useless

    I like this post by graduate student Beck Tench. Reading is useless, she says, in the same way that meditation is useless. It’s for its own sake, not for something else.

    When I titled this post “reading is useless,” I was referring to a Zen saying that goes, “Meditation is useless.” It means that you meditate to meditate, not to use it for something. And like the saying, I’m being provocative. Of course reading is not useless. We read in useful ways all the time and for good reason. Reading expands our horizons, it helps us understand things, it complicates, it validates, it clarifies. There’s nothing wrong with reading (or meditating for that matter) with a goal in mind, but maybe there is something wrong if we feel we can’t read unless it’s good for something.

    This quarter’s experiment was an effort to allow myself space to “read to read,” nothing more and certainly nothing less. With more time and fewer expectations, I realized that so much happens while I read, the most important of which are the moments and hours of my life. I am smelling, hearing, seeing, feeling, even tasting. What I read takes up space in my thoughts, yes, and also in my heart and bones. My body, which includes my brain, reads along with me and holds the ideas I encounter.

    This suggests to me that reading isn’t just about knowing in an intellectual way, it’s also about holding what I read. The things I read this quarter were held by my body, my dreams, my conversations with others, my drawings and journal entries. I mean holding in an active way, like holding something in your hands in front of you. It takes endurance and patience to actively hold something for very long. As scholars, we need to cultivate patience and endurance for what we read. We need to hold it without doing something with it right away, without having to know.

    Source: Reading is Useless: A 10-Week Experiment in Contemplative Reading | Beck Tench