Playing the right game
Thanks to Laura for pointing me towards this post by Simone Stolzoff. There’s so much to unpack, which perhaps I’ll do in a separate post. It touches on reputation and credentialing, but also motivation, gamification, and “value self-determination”.
Extracting yourself from the false gods of vanity metrics is hard, but massively liberating. It starts with realising small things, like the fact that you don’t actually need to keep up a ‘streak’ on Duolingo to learn a language. But there’s a through line from that to realising that you don’t need to win awards for your work, or the status symbol of a fancy car or house.
I interviewed over 100 workers—from kayak guides in Alaska to Wall Street bankers in Manhattan—and met several people who achieved nearly every goal set out for them, only to realize they were winning a game they didn’t enjoy playing.

Source: Playing a Career Game You Actually Want to Win | Every

How do so many of us find ourselves in this position, climbing ladders we don’t truly want to be on? C. Thi Nguyen, a philosopher and game design researcher at the University of Utah, has some answers. Nguyen coined the term “value capture,” a phenomenon that I came to see all around me after I learned about it. Here’s how it works.
Most games establish a world with a clear goal and rankable achievements: Pac-Man must eat all the dots; Mario must save the princess. Video games offer what Nguyen calls “a seductive level of value clarity.” Get points, defeat the boss, win. In many ways, video games are the only true meritocratic games people can play. Everyone plays within clearly defined boundaries, with the same set of inputs. The most skilled wins.
Our careers are different. The games we play with our working hours also come with their own values and metrics that matter. Success is measured by how much money you make—for your company and for yourself. Promotions, bonuses, and raises mark the path to success, like dots along the Pac-Man maze.
These metrics are seductive because of their simplicity. “You might have a nuanced personal definition of success,” Nguyen told me, “but once someone presents you with these simple quantified representations of a value—especially ones that are shared across a company—that clarity trumps your subtler values.” In other words, it is easier to adopt the values of the game than to determine your own. That’s value capture.
There are countless examples of value capture in daily life. You get a Fitbit because you want to improve your health but become obsessed with maximizing your steps. You become a professor in order to inspire students but become fixated on how often your research is cited. You join Twitter because you want to connect with others but become preoccupied by the virality of your content. Naturally, maximizing your steps or citations or retweets is good for the platforms on which these status games are played.
Bad work
Not just artists - we all go through life’s ups and downs, good periods and bad. Right now is the least tolerant time since I’ve been alive. Everyone’s supposed to be on it 24/7.
Viewed in the context of the episode, Sylvester is talking, specifically, about the “professionalization” and “commercialization” of art, and basically the hype machine of the art world.

Source: Artists must be allowed to make bad work | Austin Kleon
Digital wallets for verifiable credentials
Purdue University had something like this almost a decade ago, but there’s even more call for this kind of thing now, post-pandemic and in a Verifiable Credentials landscape.
Everyone’s addicted to marrying ‘skills’ with ‘jobs’, but I think there’s definitely an Open Recognition aspect to all of this.
ASU Pocket captures students’ traditional and non-traditional educational credentials, which are now, with the emergence of verifiable credentials, more portable than ever before. This gives students the autonomy to securely own, control and share their holistic evidence of learning with employers.

Source: ASU Pocket: A digital wallet to capture learners’ real-time achievements

A digital wallet, like ASU Pocket, holds verifiable credentials – which are digital representations of real-world credentials like government-issued IDs, passports, driver’s licenses, birth certificates, educational degrees, professional certifications, awards, and so on. In the past, these credentials have been stored in physical form, making them susceptible to fraud and loss. However, with advances in technology, these credentials can be stored electronically, using cryptographic techniques to ensure their authenticity. This makes it possible to verify the credential without revealing sensitive information, such as a social security number.
[…]
At ASU Pocket, we also view verifiable credentials as an important tool for social impact. They provide a way for people to document their skills and accomplishments, which can be used to gain new opportunities. For example, someone with a verifiable skill credential for customer service might be able to use it to get a job in a call center. Likewise, someone with a verifiable credential for computer programming might be able to use it to get a job as a software developer.
In both cases, the verifiable credential provides a way for the individual to demonstrate their skills and qualifications gained through or outside of traditional learning pathways. This is especially impactful for marginalized groups who may have difficulty obtaining traditional credentials, such as degrees or certifications.
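To make the mechanics a little more concrete, here’s a minimal sketch of roughly what a verifiable credential looks like under the W3C Verifiable Credentials data model, written out as a Python dict. Every identifier, date, and proof value below is invented for illustration:

```python
# A rough sketch of a W3C-style Verifiable Credential as a Python dict.
# All DIDs, dates, and proof values are invented for illustration; a real
# credential would be generated and signed by the issuer's software.
skill_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "SkillCredential"],  # hypothetical credential type
    "issuer": "did:example:university",         # who is making the claim
    "issuanceDate": "2023-05-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:learner-123",        # who the claim is about
        "skill": "Customer service",            # the claim itself
    },
    # The cryptographic proof is what makes the credential verifiable
    # without contacting the issuer or exposing unrelated personal data.
    "proof": {
        "type": "Ed25519Signature2020",
        "created": "2023-05-01T00:00:00Z",
        "verificationMethod": "did:example:university#key-1",
        "proofValue": "z3FXQ…",                 # signature value (elided)
    },
}
```

A wallet like ASU Pocket holds documents along these lines and presents them on the learner’s behalf; a verifier checks the proof against the issuer’s published key rather than phoning the issuer or rummaging through the holder’s other personal data.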
AI generated art aesthetic
Yes, it’s “just typing prompts” but then drawing is “just making marks on paper”. Love this aesthetic.
Source: An Improbable Future
Bad coffee
I love this essay, not because I necessarily agree with it, but because I agree with the vibe of it. It’s from 2019, so it must have come via my social feeds.
Keith Pandolfi used to own a coffee shop which served the best barista-crafted flat whites, etc. in the area. These days he drinks Maxwell House. Likewise, there are areas of my life in which I’ve gone from being very fussy to not really caring. It’s the letting go that matters.
The best cup of coffee I ever had was the dirty Viennese blend my teenage friends and I would sip out of chipped ceramic mugs at a cafe near the University of Cincinnati while smoking clove cigarettes and listening to Sisters of Mercy records, imagining what it would be like to be older than we were. The best cup of coffee was the one I enjoyed alone each morning during my freshman year at Ohio State, huddled in the back of a Rax restaurant reading the college paper and dealing with the onset of an anxiety disorder that would never quite be cured.

Source: The Case for Bad Coffee | Serious Eats

[…]
I don’t have memories of… bonding experiences taking place over a flat white at a Manhattan coffee shop or a $5 cup of nitro iced coffee at a Brooklyn cafe. High-end coffee doesn’t usually lend itself to such moments. Instead, it’s something to be fussed over and praised; you talk more about its origin and its roaster, its flavor notes and its brewing method than you talk to the person you’re enjoying it with. Bad coffee is the stuff you make a full pot of on the weekends just in case some friends stop by. It’s what you sip when you’re alone at the mechanic’s shop getting your oil changed, thinking about where your life has taken you; what you nurse as you wait for a loved one to get through a tough surgery. It’s the Sanka you share with an elderly great aunt while listening to her tell stories you’ve heard a thousand times before. Bad coffee is there for you. It is bottomless. It is perfect.
Ungrading the university experience
There’s some discussion of students ‘gaming the system’ in this article about ungrading university courses, but nothing much about AI tools like ChatGPT. This movement has been gathering pace for years, and I think that we’re at a tipping point.
Hopefully, this will lead to more Open Recognition practices rather than just breaking down chunky credentials into microcredentials.
[A]dvocates say the most important reason to adopt un-grading is that students have become so preoccupied with grades, they aren't actually learning.

Source: Some colleges are eliminating freshman grades by ‘ungrading’ | NPR

“Grades are not a representation of student learning, as hard as it is for us to break the mindset that if the student got an A it means they learned,” said Jody Greene, special adviser to the provost for educational equity and academic success at UCSC, where several faculty are experimenting with various forms of un-grading.
If a student already knew the material before taking the class and got that A, “they didn’t learn anything,” said Greene. And “if the student came in and struggled to get a C-plus, they may have learned a lot.”
[…]
[S]everal colleges and universities… already practice unconventional forms of grading. At Reed College in Oregon, students aren’t shown their grades so that they can “focus on learning, not on grades,” the college says. Students at New College of Florida complete contracts establishing their goals, then get written evaluations about how they’re doing. And students at Brown University in Rhode Island have a choice among written evaluations that only they see, results of “satisfactory” or “no credit,” and letter grades — A, B or C, but no D or F.
MIT has what it calls “ramp-up grading” for first-year students. In their first semesters, they get only a “pass,” without a letter; if they don’t pass, no grade is recorded at all. In their second semesters, they get letter grades, but grades of D and F are not recorded on their transcripts.
Reducing website carbon emissions by blocking ads
Blocking advertising on the web is not only good for increasing the speed and privacy of your own web browsing, but also good for the planet.
What is the environmental impact of visiting the homepage of a media site? What part do advertising, and analytics, play when it comes to the carbon footprint? We tried to answer these questions using GreenFrame, a solution we developed to measure the footprint of our own developments.

Source: Media Websites: 70% of the Carbon Footprint Caused by Ads and Stats | Marmelab

The results are insightful: up to 70% of the electricity consumption (and therefore carbon emissions) caused by visiting a French media site is triggered by advertisements and stats. Therefore, using an ad blocker even becomes an ecological gesture.
[…]
Overall we observe the same thing: the carbon footprint of a website decreases if there are no ads or trackers on the website. The difference is significant: Between 32% and 70% of the energy consumed by the browser and the network is due to monetization.
The websites analyzed generate between 70 and 130 million visits per month, and their work has therefore a real impact on the environment.
Reducing the consumption of one of these sites by only 10% (20 mWh per visit), for a site with 100 million monthly visitors, is equivalent to saving 24,000 kWh per year.
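The arithmetic holds up. As a quick sanity check, here’s the calculation as a short Python sketch (my own working, not from the article; it assumes the 20 mWh saving applies uniformly to every visit):

```python
# Back-of-the-envelope check of the article's 24,000 kWh/year figure.
saving_per_visit_mwh = 20             # milliwatt-hours saved per visit (the quoted 10%)
visits_per_year = 100_000_000 * 12    # 100 million visits per month

annual_saving_mwh = saving_per_visit_mwh * visits_per_year
annual_saving_kwh = annual_saving_mwh / 1_000 / 1_000   # mWh -> Wh -> kWh

print(f"{annual_saving_kwh:,.0f} kWh saved per year")   # prints: 24,000 kWh saved per year
```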
Switching to Arc
It’s not often I’ll post tools here, but after a few days of using it, I’m sold on the Arc browser.
My web browser history over the last quarter of a century goes something like: Netscape Navigator –> Internet Explorer –> Firefox –> Chrome –> Brave –> Arc.
Perhaps I should record a screencast, but the three things I like most about Arc are:
- Built-in 'Spaces' (for client projects, etc.)
- Split screen view
- Easel (clip *live* parts of web pages)
Experience a calmer, more personal internet in this browser designed for you. Let go of the clicks, the clutter, the distractions.

Source: Arc | The Browser Company
The sleight of hand of crypto
Cory Doctorow is doing the rounds for his new book at the moment. But because he’s Cory, he’s not just phoning it in, or parroting the same lines.
Take this interview in Jacobin, for example. Yes, he’s talking about why he decided to write a story about crypto, but he’s so well informed about this stuff on a technical level that it’s a joy to read the way he explains things.
There’s this kind of performative complexity in a lot of the wickedness in our world — things are made complex so they’ll be hard to understand. The pretense is they’re hard to understand because they’re intrinsically complex. And there’s a term in the finance sector for this, which is “MEGO”: My Eyes Glaze Over. It’s a trick.

Source: Cory Doctorow Explains Why Big Tech Is Making the Internet Terrible | Jacobin

[…]
A lot of the crypto stuff starts with what a sleight-of-hand artist would do. “Alright, we know that cryptography works and can keep secrets and we know that money is just an agreement among people to treat something as valuable. What if we could use that secrecy when processing payments and in so doing prevent governments from interrupting payments?”
After this setup, the con artist can get the mark to pick his or her poison: “It will stop big government from interfering with the free market” or “It will stop US hegemony from interdicting individuals who are hostile to American interests in other countries and allow them to make transactions” or “It will let you send money to dissident whistleblowers who are being blocked by Visa and American Express.” These are all applications that, depending on the mark’s political views, will affirm the rightness of the endeavor. The mark will think, that is a totally legitimate application.
It starts with a sleight of hand because all the premises that the mark is agreeing with are actually only sort of right. It’s a first approximation of right and there are a lot of devils in the details. And understanding those details requires a pretty sophisticated technical understanding.
AI writing, thinking, and human laziness
In a Twitter thread by Paul Graham, which I came across via Hacker News, he discusses how it’s always safe to bet on human laziness. Ergo, most writing will be AI-generated in a year’s time.
However, as he says, to write is to think. So while it’s important to learn how to use AI tools, it’s also important to learn how to write.
In this post, Alan Levine complains about ChatGPT’s inability to write good code. But the most interesting paragraph (cited below) is the last one, in which we, consciously or unconsciously, put the machine on a pedestal and try to cajole it into doing something we can already do.
I’m reading Humanly Possible by Sarah Bakewell at the moment, so I feel like all of this links to humanism in some way. But I’ll save those thoughts until I’ve finished the book.
ChatGPT is not lying or really hallucinating, it is just statistically wrong.

Source: Lying, Hallucinating? I, MuddGPT | CogDogBlog

And the thing I am worried about is that in this process, knowing I was likely getting wrong results, I clung to hope it would work. I also found myself skipping my own reasoning and thinking, in the rush to refine my prompts.
Taxing land rather than labour
I think I’ve always been somewhat of a Georgist, but perhaps didn’t know the name for it. The central tenet is that governments should be funded by a tax on land rather than labour.
There’s also the idea that this tax would replace all other taxes, which I guess is kind of the mirror of Universal Basic Income replacing all other benefits. I’m happy to be convinced on that, but already sold on the land tax idea.
Georgism, in some sense, is the idea that no one really owns land, but instead, you rent its exclusive use from everyone else through Land Value Taxes.

Source: Developing an intuition for Georgism | Atoms vs Bits

[…]
If I claimed to own a 1-dimensional line that ran on the ground, and that you need to step over it, or that I owned a 6-inch cube floating off the ground, and you needed to duck under it, you’d rightly think I was insane.
However, if I own a plot of land, i.e. a 2D space on the surface of the earth, it’s considered either insane (or tragically primitive) to not believe in this.
(Yes, through air rights you can own 3D space, but it generally has to be above 2D land; floating cubes still seem nonsensical.)
Image: Gautier Pfeiffer
AI and work socialisation
I've bolded what I consider to be the most important part of this article by danah boyd. It's a reflection on two different 'camps' when it comes to AI and jobs, but she surfaces an important change that's already happened in society when it comes to the workforce: we just don't train people any more.
Couple this with AI potentially replacing lower-paid jobs (where people might 'learn the ropes' while working) and... well, it's going to be interesting.
Source: Deskilling on the Job | danah boyd

While getting into what it means to be human is likely to be a topic of a later blog post, I want to take a moment to think about the future of work. Camp Automation sees the sky as falling. Camp Augmentation is more focused on how things will just change. If we take Camp Augmentation’s stance, the next question is: what changes should we interrogate more deeply? The first instinct is to focus on how changes can lead to an increase in inequality. This is indeed the most important kinds of analysis to be done. But I want to noodle around for a moment with a different issue: deskilling.
[...]
Today, you are expected to come to most jobs with skills because employers don’t see the point of training you on the job. This helps explain a lot of places where we have serious gaps in talent and opportunity. No one can imagine a nurse trained on the job. But sadly, we don’t even build many structures to create software engineers on the job.
However, there are plenty of places where you are socialized into a profession through menial labor. Consider the legal profession. The work that young lawyers do is junk labor. It is dreadfully boring and doesn’t require a law degree. Moreover, a lot of it is automate-able in ways that would reduce the need for young lawyers. But what does it do to the legal field to not have that training? What do new training pipelines look like? We may be fine with deskilling junior lawyers now, but how do we generate future legal professionals who do the work that machines can’t do?
This is also a challenge in education. Congratulations, students: you now have tools at your disposal that can help you cut corners in new ways (or outright cheat). But what if we deskill young people through technology? How do we help them make the leap into professions that require more advanced skills?
[...]
Whether you are in Camp Augmentation or Camp Automation, it’s really important to look holistically about how skills and jobs fit into society. Even if you dream of automating away all of the jobs, consider what happens on the other side. How do you ensure a future with highly skilled people? This is a lesson that too many war-torn countries have learned the hard way. I’m not worried about the coming dawn of the Terminator, but I am worried that we will use AI to wage war on our own labor forces in pursuit of efficiency. As with all wars, it’s the unintended consequences that will matter most. Who is thinking about the ripple effects of those choices?
Attempting to quantify the unquantifiable
This article, which I discovered via Sentiers, discusses the rise of ‘Quantitative Aesthetics’, or putting numbers on things you like to prove other people wrong. It’s basically numbers as a shorthand for status, and once you realise it, you see it everywhere. It’s the social media-ification of all of the things.
[T]here’s something called the McNamara Fallacy, a.k.a. the Quantitative Fallacy. It is summarized as “if it cannot be measured, it is not important.” The Heller article made me reflect on how a version of it is now very present, and growing, at the grassroots of taste.

Source: How We Ended Up in the Era of ‘Quantitative Aesthetics,’ Where Data Points Dictate Taste | Artnet

On one level, this is seen in a rise of a kind of wonky obsession with business stats in fandoms, invoked as a way to convey the rightness of artistic opinions—what I want to call Quantitative Aesthetics. (There are actually scientists who study aesthetic preference in labs and use the term “quantitative aesthetics.” I am using it in a more diffuse way.)
It manifests in music. As the New York Times wrote in 2020 of the new age of pop fandom, “devotees compare No. 1s and streaming statistics like sports fans do batting averages, championship wins and shooting percentages.” Last year, another music writer talked about fans internalizing the number-as-proof-of-value mindset to extreme levels: “I see people forcing themselves to listen to certain songs or albums over and over and over just to raise those numbers, to the point they don’t even get enjoyment out of it anymore.”
The same goes for film lovers, who now seem to strangely know a lot about opening-day grosses and foreign box office, and use the stats to argue for the merits of their preferred product. There was an entire campaign by Marvel super-fans to get Avengers: Endgame to outgross Avatar, as if that would prove that comic-book movies really were the best thing in the world.
You can’t ruminate and listen at the same time
David Cain at Raptitude has a post which is somewhat bizarrely entitled 10 Things I Want to Communicate to the Human Species Before I Die. The first point is about shopping trolleys, so I’m not sure how tongue-in-cheek it all is.
Anyway, without saying whether I agree or disagree with any of the other statements, I want to draw attention to the last one. Ruminating is a complete waste of time, and as someone susceptible to it I want to +1 the advice to get out of your head and listen if you’re succumbing to it.
For me, that often means listening to my iPod in the early hours of the morning while lying sleepless in bed. But it can mean listening to other people, or just your surroundings.
The tendency of the modern human is to live in their head — almost perpetually monologuing and forecasting and rehashing. This is a seldom-helpful habit most of us reinforce constantly by tumbling along with its momentum. You can weaken the grip of the ruminative mind by frequently taking a few seconds to be quiet and listen to your surroundings. Doing this reveals something interesting: when you actively use your attention for listening (or in any other intentional way) it cannot be used for more rumination. Each time you do this, the gravity of the monologuing mind weakens. If even a fraction of the population learned how to perforate their ongoing ruminative thought-mill like this, it might be a different world.

Source: 10 Things I Want to Communicate to the Human Species Before I Die | Raptitude
Arc browser is pretty nifty
I’m not going to gush as I’ve had it installed mere hours, but this article persuaded me to actually use the invite code I’d got for the Arc browser. First impressions were good enough for it to replace Brave as my default, for the time being, on my Mac Studio.
My colleague Laura always has tabs for client projects to hand, as she has a Firefox extension which separates tab groups. Arc does this quickly, seamlessly, and by default. Also, I used to have my tabs at the side of my browser and I’m not sure why or how I got out of the habit of doing so.
There are lots of other nice things about Arc which are mentioned in the review. It’s Chromium-based, so everything just works, including bringing across your bookmarks, saved passwords, and browsing history.
I realize calling Arc “the most transformative app I’ve used in decades” is a bold statement that requires a lot of support. I won’t skimp on words in this article telling you why—it’s that important and requires new ways of thinking about how you work on the Web.

Source: Arc Will Change the Way You Work on the Web | TidBITS

[…]
If the sidebar is Arc’s most prominent interface element, Spaces is the feature that leverages it more than anything else in Arc. A Space is a collection of tabs in the sidebar. It’s easy to switch between them using keyboard shortcuts (Control-1, Control-2, etc., or Command-Option-Left/Right Arrow) or by clicking little icons at the bottom of the sidebar.
You can assign each Space a color, providing an instant visual clue for what Space you’re in. For me, Personal is a green/yellow/teal gradient, TidBITS is purple, and FLRC is blue, while my fourth space—set to hold FLRC tabs for Google Docs and Google Sheets—is yellow. Each Space can also have a custom emoji or icon that identifies it in the switcher at the bottom of the sidebar.
[…]
The most obvious part of Arc’s visual interface is its sidebar. As I said earlier, the sidebar provides access to multiple color-coded Spaces, each with its own collection of tabs. It’s easy to gloss over the importance of putting tabs in a sidebar, but that would be a mistake. Sidebar tabs aren’t simply a vertical version of tabs across the top of the browser window, they’re substantively better.
[…]
But what the sidebar really provides is a sense of comfort, of familiarity. There’s a French phrase, mise en place, that refers to setting out all your ingredients and tools before cooking so everything you need is at hand when you need it. Arc’s sidebar, when populated with the pinned tabs you use and arranged the way you think, provides that sense of mise en place. I actually want to sit down at Arc because it helps me channel my thoughts and actions toward my goals for the day.
Kanban > Scrum
I spend most of my time coordinating with one other human being at work. After that, I’m coordinating with a maximum of three other people internally, and then with clients.
So take what I’ve got to say about Kanban with a pinch of salt. But I’ve worked at bigger organisations, with fancier methodologies. Still, nothing beats having a board which shows what’s to do, doing, and done (with some tweaks, perhaps, for ‘feedback needed’ and ‘undead’!).
I’m not saying Scrum doesn’t work. I’m saying the exact opposite. Scrum does work, but it works for the same reasons Kanban does. The difference between them is that Scrum is slower and more prescriptive, and thus less adaptable (or “agile”, whatever you wanna call it).

Source: You don’t need Scrum. You just need to do Kanban right. | Lucas F. Costa

[…]
[B]ecause Kanban focuses on tasks rather than sprint-sized batches, it pushes responsibility to the edges of the team, meaning engineers are responsible for going after the pieces of information they need to move forward.
When that happens, instead of designing features by committee, which demands a significant amount of back-and-forth discussions, decisions happen locally, and thus are easier to make.
Additionally, fewer people making decisions lead to fewer assumptions. Fewer assumptions, in turn, lead to shipping smaller pieces of software more quickly, allowing teams to truncate bad paths earlier.
Image: Visual Thinkery for WAO
Just this cold beach that nourishes you
I’ve come across so much great art, by artists either directly or obliquely protesting the coronation, the monarchy, and everything the Tories stand for. Here’s one from Robert Montgomery which, I think, is actually from the Queen’s jubilee.
Source: BILLBOARDS — ROBERT MONTGOMERY
On co-operative dynamics
Abi Handley (second from the left in this photo) is an inspiration to me and others in the co-op movement. It was a little surprising, therefore, when she told me a couple of months ago that she was stepping down from being a member of Outlandish. After all, she’s been with them for 12 years, almost since the beginning.
As this blog post explains, however, part of understanding the dynamics within a co-operative is knowing when to take the reins, and when to step back. She’ll continue to collaborate with Outlandish, but also more with others. This is great for us, and in fact she joined WAO during our last monthly co-op day to explore opportunities.
Congrats, Abi! Onwards and upwards 💪
What I’m trying to achieve in life has changed quite a lot since I started in Outlandish at 29. I’m now 41 and got two kids, a house, built a successful business with people whom I love and respect. So what’s my next challenge? How can I push myself next?

Source: Abi is stepping down from being a member of Outlandish | Outlandish blog

[…]
Integrity for me is one of the most important principles I try and live by. Modelling the behaviours and values I hold dear for me is the only true way I want to be. Being able to genuinely live and breathe what I support my clients to try in terms of ways of working is essential for me to be an authentic and valuable coach & facilitator. I need to do what I say to others to try (because I believe and see that it works when everyone opts in), always and in every team I work in.
I have struggled in letting go of the role of ‘Mum’ in Outlandish, despite desperately wanting to, and that is not a solo challenge – it is also incredibly difficult to change those kinds of dynamics within any group. By me stepping down, I am modelling the need to not be at the centre of all things Outlandish, because despite trying with all our might, it is so easy to step into the safe role of looking after things when it’s not going so well or a challenge comes up for us. I don’t think that serves my goals, nor Outlandish’s. I think the best way we can achieve Outlandish being even more co-operative than it already is, is by me stepping back. That’s a scary thing to do for us all, but I’m going to take the risk and be excited about what might happen, for all of us.
Comportamento Geral
As part of the #NotMyKing protests, I came across a printmaker and artist whose work I explored further. Highly discouraged by my wife from putting up something explicitly anti-monarchy, I instead placed this from Katherine Anteney on view through the window of my home office.
It’s from a Brazilian anti-dictatorship protest song from the 1970s, and roughly translates as: “Everything is good, everything is great, but what happens tomorrow, mate, when they take your carnival away?"
Seems appropriate for this weekend, anyway.
The internet should be a place for connection, surprise, and delight
As new platforms try to imitate existing ones, it becomes more challenging for users to find unique and diverse voices (and content).
So it’s important for users, developers, and investors to encourage innovation and diversity in online spaces, instead of solely focusing on creating platforms that trap users and prioritise profit.
You know, the internet still has the potential to be a place for connection, surprise, and delight. But it requires a collective effort to resist the monopolistic tendencies of a few dominant players.
This kind of duplication isn't just a clear failure of imagination; it is the kind of innovation that capitalism rewards. Don't make something new, make the same thing that someone else made very successful, but slightly better. To have a proven concept, after all, is to plagiarize. It's annoying to see millions of dollars thrown at making more-or-less literal dupes of internet companies that everyone is already using begrudgingly and with diminishing emotional returns. It's maybe more frustrating to realize that the goals of these companies are the same as their predecessors', which is to make the internet smaller.

[…]
The death of Google Reader is much bemoaned by bloggers like myself, many of whom believe that its end was why blogs died. That’s a beautiful revisionist history that I won’t be taking part in here. Google Reader, which was essentially a very well-designed RSS feed with a mild interactive component, died because Google decided they didn’t want to play the game in the way that its founders had said they’d play it. Those ethical foundations proved extremely easy to discard once some shiny new companies, most notably Facebook and Twitter, began raking in billions of dollars.
[…]
The reason the death of Google Reader matters, here, is that it marks a pivotal moment in the deliberate and engineered shrinking of the internet. When Google Reader died, article discovery shifted. People were no longer reading RSS feeds, finding new sites, following them, and being updated when those sites posted. Instead, they were scrolling on the endless feed of Twitter, and (at the time) Facebook, and they got whatever they got.
[…]
It is worth remembering that the internet wasn’t supposed to be like this. It wasn’t supposed to be six boring men with too much money creating spaces that no one likes but everyone is forced to use because those men have driven every other form of online existence into the ground. The internet was supposed to have pockets, to have enchanting forests you could stumble into and dark ravines you knew better than to enter. The internet was supposed to be a place of opportunity, not just for profit but for surprise and connection and delight. Instead, like most everything American enterprise has promised held some new dream, it has turned out to be the same old thing—a dream for a few, and something much more confining for everyone else.