If your heart isn’t in it, it’s probably because there’s no heart anywhere in the process

One thing I’ve learned spending over a decade thinking about Open Badges and alternative credentials is that hiring is broken. Although there are mitigations and workarounds — some of which I’ve implemented when hiring a team and helping others do so — the whole thing is a dumpster fire.

This article by Paul Fuhr discusses the horror show that is job hunting in the age of platforms such as Indeed. He does a great job of showing how automated and dehumanising the whole hiring process is. Platforms are more focused on user engagement than genuinely aiding job seekers; applicants are reduced to mere data points.

Not only that, but the lack of human-centricity in the whole thing fails to accommodate those with non-linear careers while simultaneously trivialising the job search process. Unsurprisingly, he’s calling for root-and-branch reform of the current job market. I can’t help but think that badges and alternative credentials could make the whole thing more transparent and fair, moving away from automated metrics.

I’ve applied for (quite literally) thousands of jobs. Very quickly, I went from being surgically precise about job applications to taking a shotgun-blast approach to it all, spraying applications out in every direction. I’ve clicked the “Submit” button on countless career sites. I’ve created four different versions of my resume. I’ve spent more time on LinkedIn than any other site, too, though I suspect Reddit is happy to have some server bandwidth back.

Searching for a steady job is a disheartening and depressingly tedious affair, but it doesn’t have to be. If I’m qualified for anything at the moment, though, it’s being qualified to weigh in on the contemporary job-search experience. I know what it is, what it isn’t, what it pretends to be, why it no longer works, and what needs to change. And thanks to a year-plus of trying to find consistent work, it’s no longer about connecting me with the job of my dreams — it’s about connecting me with my dream of simply having a job.


Machine learning, AI, automation, yadda yadda yadda. I get it. I understand the “why” of automating the hiring process; I even think it can be a helpful (jargon alert) “arrow in the quiver” for HR. I can’t even imagine a single HR specialist being tasked with locating the right candidate from a huge field of applicants for one job, let alone fifteen jobs at once. That’s like finding a needle in a stack of needles. It’d be paralyzing.

That said, hiring managers and job seekers have arrived at a truly dangerous intersection. Employers have allowed automation to creep in and govern so much of the HR process that it threatens to ignore the whole…well, you know, human part of it all. And some companies insist on doubling down on this façade; I’ve visited a shocking number of sites that pretend to have an actual human person ready to chat with you (certainly not a bot!), as if they’re impossibly waiting 24/7 to answer your questions.

We’re at a maddeningly mindless moment when it comes to finding employment, but it’s one that could be repaired with some maddeningly simple ideas. For starters, just bring back some humans. Robots can parse your past and distill you down into data, but they’ll never make a genuine connection or get a sense of who you are. Also, simplicity works both ways: it benefits the applicant as much as an HR specialist.

Source: Why Resumes Are Dead & How Indeed.com Keeps Killing the Job Market | Paul Fuhr

A trickle, a ripple, a slow rush

This article by Antonia Malchik reflects on her personal journey moving back to her hometown in Montana. It focuses on her deep sense of gratitude for the natural environment and community. She discusses the annual Gathering of the Glacier-Two Medicine Alliance, celebrating the retirement of the last remaining oil lease in the area, which is significant for the Blackfeet Nation.

The part of the article in which I’m most interested is towards the end: a reflective moment by a creek. She writes about the importance of being present in nature and contemplating one’s place and responsibilities in the world. She captures that feeling of being in and of nature after a day’s walking, when everything feels quite emotional. It stirs my soul just thinking about it.

On my way home, I stopped at a creek I’m fond of, near a trailhead leading into the Bob Marshall Wilderness. The parking lot was empty of other cars or people. Last year when I’d camped there, the creek had held a delightful number of cylindrical caddisfly shells constructed from gravel about the size of a sesame seed. I looked for them but it was too late in the year.

The creek ran cold across my bare feet, its sound and movement and chilly reminders of snowmelt all I really need in this world to ground myself in what’s real, and what matters. I sat there letting my feet go numb and the sound run through me, September’s late afternoon sunlight filtering through the aspen trees to glance off the water.

I don’t even know what to call that sound—a trickle, a ripple, a slow rush?

Sometimes the right answer is an action. Sometimes it’s a change in policy, or in culture. And sometimes it’s simply being, sitting there by a creek reminding yourself what it feels like to be alive, in a place you love. It’s asking questions of belonging and responsibility, and struggling with your own place in the world.

That sound is all of life to me. I could have sat there forever, grown cold and hungry, but I never for a moment would have felt alone.

Source: Sometimes there’s a right answer, sometimes you sit by a creek, and sometimes they’re the same thing | On The Commons

If LLMs are puppets, who’s pulling the strings?

This article from the Mozilla Foundation digs into the human decisions that shape generative AI. It highlights the ethical and regulatory implications of these decisions, such as data sourcing, model objectives, and the treatment of data workers.

What gets me about all of this is the ‘black box’ nature of it. Ideally, for example, I want it to be super-easy to train an LLM on a defined corpus of data — such as all Thought Shrapnel posts. Asking questions of that dataset would be really useful, as would an emergent taxonomy.
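In practice, the “asking questions of a defined corpus” part is usually done not by training a model from scratch but by retrieval: index the posts, find the ones most relevant to a question, and hand those to an LLM as context. As a toy sketch of just the retrieval half (the corpus, post names, and text below are all made up for illustration), here is a minimal TF-IDF search over an in-memory set of posts:

```python
import math
from collections import Counter

# Hypothetical stand-in for a corpus of blog posts (names and text invented).
CORPUS = {
    "badges": "open badges and alternative credentials could make hiring fairer",
    "creek": "sitting by a creek in montana reminds you what it feels like to be alive",
    "llms": "generative ai and llms depend on human decisions about pre-training data",
}

def tokenise(text):
    return text.lower().split()

def build_index(docs):
    """Compute smoothed IDF weights and a TF-IDF vector per document."""
    n = len(docs)
    df = Counter()
    for text in docs.values():
        df.update(set(tokenise(text)))
    idf = {term: math.log((1 + n) / (1 + count)) for term, count in df.items()}
    vectors = {
        name: {t: c * idf[t] for t, c in Counter(tokenise(text)).items()}
        for name, text in docs.items()
    }
    return idf, vectors

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    """Rank document names by similarity between the query and each post."""
    idf, vectors = build_index(docs)
    q_vec = {t: c * idf.get(t, 0.0) for t, c in Counter(tokenise(query)).items()}
    return sorted(docs, key=lambda name: cosine(q_vec, vectors[name]), reverse=True)
```

A real setup would swap the TF-IDF vectors for embeddings and pass the top-ranked posts to a model as context, but the shape of the pipeline is the same: the corpus stays under your control, and the “emergent taxonomy” falls out of which terms cluster together.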

Generative AI products can only be trustworthy if their entire production process is conducted in a trustworthy manner. Considering how pre-trained models are meant to be fine-tuned for various end products, and how many pre-trained models rely on the same data sources, it’s helpful to understand the production of generative AI products in terms of infrastructure. As media studies scholar Luke Munn put it, infrastructures “privilege certain logics and then operationalize them”. They make certain actions and modes of thinking possible ahead of others. The decisions of the creators of pre-training datasets have downstream effects on what LLMs are good or bad at, just as the training of the reward model directly affects the fine-tuned end product.

Therefore, questions of accountability and regulation need to take both phases seriously and employ different approaches for each phase. To further engage in discussion about these questions, we are conducting a study about the decisions and values that shape the data used for pre-training: Who are the creators of popular pre-training datasets, and what values guide their work? Why and how did they create these datasets? What decisions guided the filtering of that data? We will focus on the experiences and objectives of builders of the technology rather than the technology itself with interviews and an analysis of public statements. Stay tuned!

Source: The human decisions that shape generative AI: Who is accountable for what? | Mozilla Foundation