Designing social systems

This article is too long and could be more direct, but it still makes some good points. Perhaps the best bit is the comparison of the iOS lockscreen with a redesigned one.

Most platforms encourage us to act against our values: less humbly, less honestly, less thoughtfully, and so on. Using these platforms while sticking to our values would mean constantly fighting their design. Unless we’re prepared for that fight, we’ll regret our choices.

When we join conversations online we're not always part of a group; sometimes we're part of a network. It seems to me that most of the author's points pertain to social networks like Facebook, as opposed to those like Twitter and Mastodon.

He does, however, make a good point about a shift towards people feeling they have to act in a particular way:

Groups are held together by a particular kind of conversation, which I’ll call wisdom. It’s a kind of conversation that people are starved for right now—even amidst nonstop communication, amidst a torrent of articles, videos, and posts.

When this type of conversation is missing, people feel that no one understands or cares about what’s important to them. People feel their values are unheeded and unrecognized.

[T]his situation is easy to exploit, and the media and fake news ecosystems have done just that. As a result, conversations become ideological and polarized, and elections are manipulated.

Tribal politics in social networks are caused by people not having strong offline affinity groups, so they seek their 'tribe' online.

If social platforms can make it easier to share our personal values (like small town living) directly, and to acknowledge one another and rally around them, we won’t need to turn them into ideologies or articles. This would do more to heal politics and media than any “fake news” initiative. To do this, designers need to know what this kind of conversation sounds like, how to encourage it, and how to avoid drowning it out.

Ultimately, the author has no answer and (wisely) turns to the community for help. I like the way he points to exercises we can do and groups we can form. I'm not sure it'll scale, though...

Source: Human Systems

Irony doesn't scale

Paul Ford is venerated in Silicon Valley and, based on what I’ve read of his, for good reason. He describes himself as a ‘reluctant capitalist’.

In this post from last year, he discusses building a positive organisational culture:

A lot of businesses, especially agencies, are sick systems. They make a cult of their “visionary” founders. And they keep going but never seem to thrive — they always need just one more lucky break before things improve. Payments are late. Projects are late. The phone rings all weekend. That’s not what we wanted to build. We wanted to thrive.
He sets out characteristics of a 'well system':
  • Hire people who like to work hard and who have something to prove.
  • Encourage people to own and manage large blocks of their own time, and give people time to think and make thinking part of the job—not extra.
  • Let people rest. Encourage them to go home at sensible times. If they work late give them time off to make up for it.
  • Aim for consistency. Set emotional boundaries and expectations, be clear about rewards, and protect people where possible from crises so they can plan their time.
  • Make their success their own and credit them for it.
  • Don’t promise happiness. Promise fair pay and good work.
Ford makes the important point that leaders need to be seen to do and say the right things:
I’m not a robot by any means. But I’ve learned to watch what I say. If there’s one rule that applies everywhere, it’s that Irony Doesn’t Scale. Jokes and asides can be taken out of context; witty complaints can be read as lack of enthusiasm. People are watching closely for clues to their future. Your dry little bon mot can be read as “He’s joking but maybe we are doomed!” You are always just one hilarious joke away from a sick system.
It's a useful post, particularly for anyone in a leadership position.

Source: Track Changes (via Offscreen newsletter /48)

Web Trends Map 2018 (or 'why we can't have nice things')

My son, who’s now 11 years old, used to have iA’s Web Trends Map v4 on his wall. It was produced in 2009, when he was two:

iA Web Trends Map 4 (2009)

I used it to explain the web to him, as the subway map was a metaphor he could grasp. I’d wondered why iA hadn’t produced more in subsequent years.

Well, the answer is clear in a recent post:

Don’t get too excited. We don’t have it. We tried. We really tried. Many times. The most important ingredient for a Web Trend Map is missing: The Web. Time to bring some of it back.
Basically, the web has been taken over by capitalist interests:
The Web has lost its spirit. The Web is no longer a distributed Web. It is, ironically, a couple of big tubes that belong to a handful of companies. Mainly Google (search), Facebook (social) and Amazon (e-commerce). There is an impressive Chinese line and there are some local players in Russia, Japan, here and there. Overall it has become monotonous and dull. What can we do?
It's difficult. Although I support the aims, objectives, and ideals of the IndieWeb, I can't help but think it's looking backwards instead of forwards. I'm hoping that newer approaches such as federated social networks, distributed ledgers and databases, and regulation such as GDPR have some impact.

Source: iA

So, what do you do?

Say what you want about teaching: it makes answering the above question extremely easy.

But that question might not be the best way to build rapport with someone else. In fact, it may be best to avoid talking about work entirely.
It's better, apparently, to find common ground around shared goals and interests:
Research findings from the world of network science and psychology suggest that we tend to prefer and seek out relationships where there is more than one context for connecting with the other person. Sociologists refer to these as multiplex ties, connections where there is an overlap of roles or affiliations from a different social context. If a colleague at work sits on the same nonprofit board as you, or sits next to you in spin class at the local gym, then you two share a multiplex tie. We may prefer relationships with multiplex ties because research suggests that relationships built on multiplex ties tend to be richer, more trusting, and longer lasting.
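In data terms, a multiplex tie is just an overlap of two or more membership sets. A toy sketch in Python (the names and contexts are invented for the example):

```python
# Toy illustration of "multiplex ties": a tie is multiplex when two
# people share more than one social context (work, a board, a gym...).
# The names and contexts are invented for the example.
contexts = {
    "work":       {"alice", "bob", "carol"},
    "nonprofit":  {"alice", "bob"},
    "spin_class": {"bob", "carol"},
}

def shared_contexts(a: str, b: str) -> set:
    """All contexts that both a and b belong to."""
    return {c for c, members in contexts.items() if a in members and b in members}

def is_multiplex(a: str, b: str) -> bool:
    """A tie is multiplex if it spans two or more contexts."""
    return len(shared_contexts(a, b)) >= 2

print(is_multiplex("alice", "bob"))    # True: work + nonprofit
print(is_multiplex("alice", "carol"))  # False: work only
```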
The author of this article suggests you can ask the following questions instead:
  • What excites you right now?
  • What are you looking forward to?
  • What’s the best thing that happened to you this year?
  • Where did you grow up?
  • What do you do for fun?
  • Who is your favorite superhero?
  • Is there a charitable cause you support?
  • What’s the most important thing I should know about you?

Unfortunately, unlike the ubiquitous “So, what do you do?”, none of these are useful as conversation-starters. And then, after I’ve corrected for Britishness, there are exactly zero I’d use in the course of serious adult conversation…

Source: Harvard Business Review

The military implications of fitness tech

I was talking about this last night with a guy who used to be in the army. It’s a BFD.

In March 2017, a member of the Royal Navy ran around HMNB Clyde, the high-security military base that's home to Trident, the UK's nuclear deterrent. His pace wasn't exceptional, but it wasn't leisurely either.

His run, like millions of others around the world, was recorded through the Strava app. A heatmap of more than one billion activities – comprising 13 billion GPS data points – has been criticised for showing the locations of supposedly secretive military bases. It was thought that, at the very least, the data was totally anonymised. It isn't.

Oops.

The fitness app – which can record a person's GPS location and also host data from devices such as Fitbits and Garmin watches – allows users to create segments and leaderboards. These are areas where a run, swim, or bike ride can be timed and compared. Segments can be seen on the Strava website, rather than on the heatmap.

Computer scientist and developer Steve Loughran detailed how to create a GPS segment and upload it to Strava as an activity. Once uploaded, a segment shows the top times of people running in an area. Which is how it's possible to see the running routes of people inside the high-security walls of HMNB Clyde.
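I haven't verified the detail of Loughran's write-up, but the general shape is clear enough: a GPX activity file is just XML full of timestamped trackpoints, and nothing stops you fabricating one. Here's a minimal sketch in Python (the coordinates and timings are arbitrary placeholders, not his actual method):

```python
# Minimal sketch of fabricating a GPX activity file -- the kind of
# synthetic, timestamped track that can be uploaded to a fitness service.
# Coordinates and timings are arbitrary placeholders, not Loughran's method.
from datetime import datetime, timedelta, timezone

start = datetime(2018, 1, 29, 9, 0, tzinfo=timezone.utc)
lat, lon = 55.000000, -4.000000  # arbitrary starting point

points = []
for i in range(60):  # one fake trackpoint every 10 seconds
    t = (start + timedelta(seconds=10 * i)).strftime("%Y-%m-%dT%H:%M:%SZ")
    points.append(
        f'    <trkpt lat="{lat + i * 1e-4:.6f}" lon="{lon:.6f}"><time>{t}</time></trkpt>'
    )

gpx = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<gpx version="1.1" creator="sketch" xmlns="http://www.topografix.com/GPX/1/1">\n'
    '  <trk><name>Fabricated run</name><trkseg>\n'
    + "\n".join(points)
    + "\n  </trkseg></trk>\n</gpx>\n"
)

with open("fabricated_run.gpx", "w") as f:
    f.write(gpx)
```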

Of course, this is an operational security issue. The military personnel shouldn't really be using Strava while they're living/working on bases.

"The underlying problem is that the devices we wear, carry and drive are now continually reporting information about where and how they are used 'somewhere'," Loughran said. "In comparison to the datasets which the largest web companies have, Strava's is a small set of files, voluntarily uploaded by active users."

Source: WIRED

Audrey Watters on technology addiction

Audrey Watters answers the question of whether we’re ‘addicted’ to technology:

I am hesitant to make any clinical diagnosis about technology and addiction – I’m not a medical professional. But I’ll readily make some cultural observations, first and foremost, about how our notions of “addiction” have changed over time. “Addiction” is a medical concept but it’s also a cultural one, and it’s long been one tied up in condemning addicts for some sort of moral failure. That is to say, we have labeled certain behaviors as “addictive” when they involve things society doesn’t condone. Watching TV. Using opium. Reading novels. And I think some of what we hear in discussions today about technology usage – particularly about usage among children and teens – is that we don’t like how people act with their phones. They’re on them all the time. They don’t make eye contact. They don’t talk at the dinner table. They eat while staring at their phones. They sleep with their phones. They’re constantly checking them.
The problem is that our devices are designed to be addictive, much like casinos. The apps on our phones are designed to increase certain metrics:
I think we’re starting to realize – or I hope we’re starting to realize – that those metrics might conflict with other values. Privacy, sure. But also etiquette. Autonomy. Personal agency. Free will.
Ultimately, she thinks, this isn't a question of addiction. It's much wider than that:
How are our minds – our sense of well-being, our knowledge of the world – being shaped and mis-shaped by technology? Is “addiction” really the right framework for this discussion? What steps are we going to take to resist the nudges of the tech industry – individually and socially and yes maybe even politically?
Good stuff.

Source: Audrey Watters

No cash, no freedom?

The ‘cashless’ society, eh?

Every time someone talks about getting rid of cash, they are talking about getting rid of your freedom. Every time they actually limit cash, they are limiting your freedom. It does not matter if the people doing it are wonderful Scandinavians or Hindu supremacist Indians, they are people who want to know and control what you do to an unprecedentedly fine-grained scale.
Yep, just because someone cool is doing it doesn't mean it won't have bad consequences. In the rush to add technology to things, we create future dystopias.
Cash isn’t completely anonymous. There’s a reason why old fashioned crooks with huge cash flows had to money-launder: Governments are actually pretty good at saying, “Where’d you get that from?” and getting an explanation. Still, it offers freedom, and the poorer you are, the more freedom it offers. It also is very hard to track specifically, i.e., who made what purchase.

Blockchains won’t be untaxable. The ones which truly are unbreakable will be made illegal; the ones that remain, well, it’s a ledger with every transaction on it, for goodness sakes.
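As a reminder of why: the whole design of a public blockchain is that anyone holding a copy can read every transaction in it. A toy sketch (all data invented):

```python
# Toy reminder that a public ledger is, by design, readable by anyone
# with a copy of it. All data below is invented.
import hashlib
import json

chain = []

def add_block(transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "transactions": transactions}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    chain.append(block)

add_block([{"from": "alice", "to": "bob", "amount": 5}])
add_block([{"from": "bob", "to": "carol", "amount": 2}])

# A tax authority holding a copy of the chain just... reads it:
for block in chain:
    for tx in block["transactions"]:
        print(tx)
```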

It’s this bit that concerns me:

We are creating a society where even much of what you say, will be knowable and indeed, may eventually be tracked and stored permanently.

If you do not understand why this is not just bad, but terrible, I cannot explain it to you. You have some sort of mental impairment of imagination and ethics.

Source: Ian Welsh

Depression as an evolutionary advantage?

It’s been almost 15 years since I suffered from depression. Since that time, I’ve learned to look after myself mentally and physically to resist whatever natural tendency I have towards spiralling downwards.

I found this article fascinating.

Some psychologists... have argued that depression is not a dysfunction at all, but an evolved mechanism designed to achieve a particular set of benefits.
The dominant popular view seems to be that there's something wrong with your brain chemistry, so exercise, antidepressants and counselling can fix it.
Paul Andrews, an evolutionary psychologist now at McMaster University... noted that the physical and mental symptoms of depression appeared to form an organized system. There is anhedonia, the lack of pleasure or interest in most activities. There’s an increase in rumination, the obsessing over the source of one’s pain. There’s an increase in certain types of analytical ability. And there’s an uptick in REM sleep, a time when the brain consolidates memories.
However, for me, the fix was to get out of the terrible situation I was in, a teaching job in a very tough school.
If something is broken in your life, you need to bear down and mend it. In this view, the disordered and extreme thinking that accompanies depression, which can leave you feeling worthless and make you catastrophize your circumstances, is needed to punch through everyday positive illusions and focus you on your problems. In a study of 61 depressed subjects, 4 out of 5 reported at least one upside to their rumination, including self-insight, problem solving, and the prevention of future mistakes.
I suffer from migraines, which are bizarre episodes. They're difficult to explain to people as they're a whole-body response. Changing my lifestyle so I don't get migraines is a micro-version of the kind of lifestyle changes you need to make to stave off depression.
These theories do cast some of our traditional responses to depression in a new light, however. If depression is a strategic response that we are programmed to carry out, consciously or unconsciously, does it make sense to try to suppress its symptoms through, say, the use of antidepressants? [Edward] Hagen [an anthropologist at Washington State University] describes antidepressants as painkillers, arguing that it would be unethical for a doctor to treat a broken ankle with Percocet and no cast. You need to fix the underlying problem.
I can't imagine being on antidepressants for any more than a few weeks (as I was). They dull your mind, which allows you to cope with the world as it is, but they don't (in my experience) allow you to lead a flourishing human life.
Even if depression evolved as a useful tool over the eons, that doesn’t make it useful today. We’ve evolved to crave sugar and fat, but that adaptation is mismatched with our modern environment of caloric abundance, leading to an epidemic of obesity. Depression could be a mismatched condition. Hagen concedes that for most of evolution, we lived with relatives and spent all day with people ready to intervene in our lives, so that episodes of depression might have led to quick solutions. Today, we’re isolated, and we move from city to city, engaging with people less invested in our reproductive fitness. So depressive signals may go unheeded and then compound, leading to consistent, severe dysfunction. A Finnish study found that as urbanization and modernization have increased over the last two centuries, so have suicide rates. That doesn’t mean depression is no longer functional (if indeed it ever was), just that in the modern world it may misfire more than we’d like.
Source: Nautilus

Product managers as knowledge centralisers

If you asked me what I do for a living, I’d probably respond that I work for Moodle, am co-founder of a co-op, and also do some consultancy. What I probably wouldn’t say, although it would be true, is that I’m a product manager.

I’m not particularly focused on ‘commercial success’ but the following section of this article certainly resonates:

When I think of what a great product manager’s qualities should be, I find myself considering where the presence of this role is felt the most. When successful, the outside world perceives commercial success but internally, over the course of building the product, a team would gain a sense of confidence, rooted in a better understanding of the problem being addressed, a higher level of focus and an overall higher level of aptitude. If I were to summarize what I feel a great product manager’s qualities are, it would be the constant dedication to centralizing knowledge for a team in all aspects of the role — the UX, the technology and the strategy.

We haven't got all of the resourcing in place for Project MoodleNet yet, so I'm spending my time making sure the project is set up for success: sorting out how we communicate and how we signal that things are blocked, finished, or need checking; ensuring the project will be GDPR-compliant; keeping the risk register complete; and logging decisions.
Product management has been popularized as a role that unified the business, technology and UX/Design demands of a software team. Many of the more established product managers have often noted that they “stumbled” into the role without knowing what their sandbox was and more often than not, they did not even hold the title itself.
Being a product manager is an interdisciplinary role, and I should imagine that most have had varied careers to date. I certainly have.
There is a lot of thinking done around what the ideal product manager should have the power to do and it often hinges around locking down a vision and seeing it through to its execution and data collection. However, this portrayal of a product manager as an island of synergy, knowledge and the perfect intersection of business, tech and design is not where the meaty value of the role lies.

[…]

A sense of discipline in the daily tasks such as sprint planning and retrospectives, collecting feedback from users, stand up meetings and such can be seen as something that is not just done for the purpose of order and structure, but as a way of reinforcing and democratizing the institutional knowledge between members of a team. The ability for a team to pivot, the ability to reach consensus, is a byproduct of common, centralized knowledge that is built up from daily actions and maintained and kept alive by the product manager. In the rush of a delivery and of creative chaos, this sense of structure and order has to be lovingly maintained by someone in order for a team to really internally benefit from the fruits of their labour over time.

It’s a great article, and well worth a read.

Source: We Seek

Using VR with kids

I’ve seen conflicting advice regarding using Virtual Reality (VR) with kids, so it’s good to see this from the LSE:

Children are becoming aware of virtual reality (VR) in increasing numbers: in autumn 2016, 40% of those aged 2-15 surveyed in the US had never heard of VR, and this number was halved less than one year later. While the technology is appealing and exciting to children, its potential health and safety issues remain questionable, as there is, to date, limited research into its long-term effects.

I have given my two children (six and nine at the time) experience of VR — albeit in limited bursts. My concern is mainly about eyesight.

As a young technology there are still many unknowns about the long-term risks and effects of VR gaming, although Dubit found no negative effects from short-term play for children’s visual acuity, and little difference between pre- and post-VR play in stereoacuity (which relies on good eyesight for both eyes and good coordination between the two) and balance tests. Only 2 of the 15 children who used the fully immersive head-mounted display showed some stereoacuity after-effects, and none of those using the low-cost Google Cardboard headset showed any. Similarly, a few seemed to be at risk of negative after-effects to their balance after using VR, but most showed no problems.

There's some good advice in this post for VR games/experience designers, and for parents. I'll quote the latter:

While much of a child’s experience with VR may still be in museums, schools or other educational spaces under the guidance of trained adults, as the technology becomes more available in domestic settings, to ensure health and safety at home, parents and carers need to:

  • Allow children to preview the game on YouTube, if available.
  • Provide children with time to readjust to the real world after playing, and give them a break before engaging with activities like crossing roads, climbing stairs or riding bikes, to ensure that balance is restored.
  • Check on the child’s physical and emotional wellbeing after they play.
There's a surprising lack of regulation and guidance in this space, so it's good to see the LSE taking the initiative!

Source: Parenting for a Digital Future

Augmented and Virtual Reality on the web

There were a couple of exciting announcements last week about web technologies being used for Augmented Reality (AR) and Virtual Reality (VR). Standard technologies that work across a range of devices are a game-changer.

First off, Google announced ‘Article’, which provides a straightforward way to add virtual objects to physical spaces.

Google AR

Mozilla, meanwhile, directed attention towards A-Frame, which they’ve been supporting for a while. This allows VR experiences to be created using web technologies, including networking users together in-world.

Mozilla VR

Although each has its uses, I think AR is going to be a much bigger deal than VR for most people, mainly because it adds to an experience we’re used to (i.e. the world around us) rather than replacing it.

Sources: Google blog / A-Frame

The horror of the Bett Show

I’ve been to the Bett Show (formerly known as BETT, which is how the author refers to it in this article) in many different guises. I’ve been as a classroom teacher, school senior leader, researcher in Higher Education, in my different roles at Mozilla, as a consultant, and now in my role at Moodle.

I go because it’s free, and because it’s a good place to meet up with people I see rarely. While I’ve changed and grown up, the Bett Show is still much the same. As Junaid Mubeen, the author of this article, notes:  

The BETT show is emblematic of much that EdTech gets wrong. No show captures the hype of educational technology quite like the world’s largest education trade show. This week marked my fifth visit to BETT at London’s Excel arena. True to form, my two days at the show left me feeling overwhelmed with the number of products now available in the EdTech market, yet utterly underwhelmed with the educational value on offer.

It's laughable, it really is. I saw all sorts of tat while I was there. I heard that a decent sized stand can set you back around a million pounds.

One senses from these shows that exhibitors are floating from one fad to the next, desperately hoping to attach their technological innovations to education. In this sense, the EdTech world is hopelessly predictable; expect blockchain applications to emerge in not-too-distant future BETT shows.

But of course. I felt particularly sorry this year for educators I know who were effectively sales reps for the companies they've gone to work for. I spent about five hours there, wandering, talking, and catching up with people. I can only imagine the horror of being stuck there for four days straight.

I like the questions Mubeen comes up with. However, the edtech companies are playing a different game. While there’s some interest in pedagogical development, for most of them education is just another vertical market.

In the meantime, there are four simple questions every self-professed education innovator should demand of themselves:

  • What is your pedagogy? At the very least, can you list your educational goals?
  • What does it mean for your solution to work and how will this be measured in a way that is meaningful and reliable?
  • How are your users supported to achieve their educational goals after the point of sale?
  • How do your solutions interact with other offerings in the marketplace?
Somewhat naïvely, the author says that he looks forward to the day when exhibitors are selected "not on their wallet size but on their ability to address these foundational questions". As there's a for-profit company behind Bett, I think he'd better not hold his breath.

Source: Junaid Mubeen

Issue #289: Loooooong week

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

More haste, less speed

In the last couple of years, there’s been a move to give names to security vulnerabilities that would be otherwise too arcane to discuss in the mainstream media. For example, back in 2014, Heartbleed, “a security bug in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol”, had not only a name but a logo.

The recent media storm around the so-called ‘Spectre’ and ‘Meltdown’ vulnerabilities shows how effective this approach is. It also helps that the names sound a little like James Bond science fiction.

In this article, Zeynep Tufekci argues that the security vulnerabilities are built on our collective desire for speed:

We have built the digital world too rapidly. It was constructed layer upon layer, and many of the early layers were never meant to guard so many valuable things: our personal correspondence, our finances, the very infrastructure of our lives. Design shortcuts and other techniques for optimization — in particular, sacrificing security for speed or memory space — may have made sense when computers played a relatively small role in our lives. But those early layers are now emerging as enormous liabilities. The vulnerabilities announced last week have been around for decades, perhaps lurking unnoticed by anyone or perhaps long exploited.
Helpfully, she gives a layperson's explanation of what went wrong with these two security vulnerabilities:

Almost all modern microprocessors employ tricks to squeeze more performance out of a computer program. A common trick involves having the microprocessor predict what the program is about to do and start doing it before it has been asked to do it — say, fetching data from memory. In a way, modern microprocessors act like attentive butlers, pouring that second glass of wine before you knew you were going to ask for it.

But what if you weren’t going to ask for that wine? What if you were going to switch to port? No problem: The butler just dumps the mistaken glass and gets the port. Yes, some time has been wasted. But in the long run, as long as the overall amount of time gained by anticipating your needs exceeds the time lost, all is well.

Except all is not well. Imagine that you don’t want others to know about the details of the wine cellar. It turns out that by watching your butler’s movements, other people can infer a lot about the cellar. Information is revealed that would not have been had the butler patiently waited for each of your commands, rather than anticipating them. Almost all modern microprocessors make these butler movements, with their revealing traces, and hackers can take advantage.
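Tufekci's butler analogy maps onto the Spectre pattern surprisingly directly. Here's a toy Python simulation of that logic; emphatically not a real exploit, which would be native code measuring cache timings, and every name below is invented for illustration:

```python
# Toy simulation of the Spectre-style pattern. A real attack is native
# code that measures cache timings; this only models the logic: work done
# speculatively leaves a trace (a warm "cache line") even after the
# mis-predicted result itself is discarded.
secret = b"SW0RDF1SH"   # data the victim never returns directly
public = bytes(16)      # the only array callers are allowed to index
cache = set()           # which "cache lines" have been touched

def victim(i: int) -> int:
    # Model of speculation: the load happens (and warms the cache)
    # before the bounds check is resolved.
    value = (public + secret)[i]  # speculative, possibly out-of-bounds read
    cache.add(value)              # side effect: this cache line is now warm
    if i < len(public):
        return value              # in bounds: result is kept
    return 0                      # out of bounds: result discarded, but
                                  # the cache side effect remains

def attacker() -> bytes:
    recovered = []
    for i in range(len(public), len(public) + len(secret)):
        cache.clear()                        # "flush" the cache
        victim(i)                            # trigger a mis-speculated read
        recovered.append(next(iter(cache)))  # "reload": see what got warmed
    return bytes(recovered)

print(attacker())  # b'SW0RDF1SH' -- leaked without any in-bounds access
```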

Right now, she argues, systems have to employ more and more tricks to squeeze performance out of hardware because the software we use is riddled with surveillance and spyware.

But the truth is that our computers are already quite fast. When they are slow for the end-user, it is often because of “bloatware”: badly written programs or advertising scripts that wreak havoc as they try to track your activity online. If we were to fix that problem, we would gain speed (and avoid threatening and needless surveillance of our behavior).

As things stand, we suffer through hack after hack, security failure after security failure. If commercial airplanes fell out of the sky regularly, we wouldn’t just shrug. We would invest in understanding flight dynamics, hold companies accountable that did not use established safety procedures, and dissect and learn from new incidents that caught us by surprise.

And indeed, with airplanes, we did all that. There is no reason we cannot do the same for safety and security of our digital systems.

Major vendors have been pushing out patches in the weeks since the vulnerabilities came to light. For-profit companies have limited resources, of course, and proprietary, closed-source code. This means there'll be some devices that won't get the security updates at all, leaving end users in a tricky situation: their hardware is now almost worthless. So do they (a) keep on using it, crossing their fingers that nothing bad happens, or (b) bite the bullet and upgrade?

What I think the communities I’m part of could have done better at is shout loudly that there’s an option (c): open source software. No matter how old your hardware, the chances are that someone, somewhere, with the requisite skills will want to fix the vulnerabilities on that device.

Source: The New York Times

Ethical design in social networks

I’m thinking a lot about privacy and ethical design at the moment as part of my role leading Project MoodleNet. This article gives a short but useful overview of the Ethical Design Manifesto, along with some links for further reading:

There is often a disconnect between what digital designers originally intend with a product or feature, and how consumers use or interpret it.

Ethical user experience design – meaning, for example, designing technologies in ways that promote good online behaviour and intuit how they might be used – may help bridge that gap.

There are already people (like me) making choices about the technology and social networks they use based on ethics:

User experience design and research has so far mainly been applied to designing tech that is responsive to user needs and locations. For example, commercial and digital assistants that intuit what you will buy at a local store based on your previous purchases.

However, digital designers and tech companies are beginning to recognise that there is an ethical dimension to their work, and that they have some social responsibility for the well-being of their users.

Meeting this responsibility requires designers to anticipate the meanings people might create around a particular technology.

In addition to ethical design, there are other elements to take into consideration:

Contextually aware design is capable of understanding the different meanings that a particular technology may have, and adapting in a way that is socially and ethically responsible. For example, smart cars that prevent mobile phone use while driving.

Emotional design refers to technology that elicits appropriate emotional responses to create positive user experiences. It takes into account the connections people form with the objects they use, from pleasure and trust to fear and anxiety.

This includes the look and feel of a product, how easy it is to use and how we feel after we have used it.

Anticipatory design allows technology to predict the most useful interaction within a sea of options and make a decision for the user, thus “simplifying” the experience. Some companies may use anticipatory design in unethical ways that trick users into selecting an option that benefits the company.

Source: The Conversation

Reading the web on your own terms

Although the demise of the wonderful, simple, much-loved Google Reader was less than a decade ago, it seems like a different age entirely.

Subscribing to news feeds and blogs via RSS was never as widespread as it could or should have been, but there was something magical about that period of time.
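For anyone who never tried it, the mechanics of a feed reader are refreshingly simple: a feed is a structured XML file, and the reader just fetches and parses it. A minimal sketch using Python's feedparser library (the feed URL is only an example):

```python
# Minimal sketch of what a feed reader does, using the feedparser
# library (pip install feedparser). The URL is only an example feed.
import feedparser

feed = feedparser.parse("https://daringfireball.net/feeds/main")
print(feed.feed.title)
for entry in feed.entries[:5]:
    print(f"- {entry.title} ({entry.link})")
```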

In this article, the author reflects on that era and suggests that we might want to give it another try:

Well, I believe that RSS was much more than just a fad. It made blogging possible for the first time because you could follow dozens of writers at the same time and attract a considerably large audience if you were the writer. There were no ads (except for the high-quality Daring Fireball kind), no one could slow down your feed with third party scripts, it had a good baseline of typographic standards and, most of all, it was quiet. There were no comments, no likes or retweets. Just the writer’s thoughts and you.
I was a happy user of Google Reader until they pulled the plug. It was a bit more interactive than other feed readers, somehow, in a way I can't quite recall. Everyone used it until they didn't.
The unhealthy bond between RSS and Google Reader is proof of how fragile the web truly is, and it reveals that those communities can disappear just as quickly as they bloom.
Since that time I've been an intermittent user of Feedly. Everyone else, it seems, succumbed to the algorithmic news feeds provided by Facebook, Twitter, and the like.
A friend of mine the other day said that “maybe Medium only exists because Google Reader died — Reader left a vacuum, and the social network filled it.” I’m not entirely sure I agree with that, but it sure seems likely. And if that’s the case then the death of Google Reader probably led to the emergence of email newsletters, too.

[…]

On a similar note, many believe that blogging is making a return. Folks now seem to recognize the value of having your own little plot of land on the web and, although it’s still pretty complex to make your own website and control all that content, it’s worth it in the long run. No one can run ads against your thing. No one can mess with the styles. No one can censor or sunset your writing.

Not only that but when you finish making your website you will have gained superpowers: you now have an independent voice, a URL, and a home on the open web.

I don’t think we can turn the clock back, but it does feel like there might be positive, future-focused ways of improving things through, for example, decentralisation.

Source: Robin Rendle

The NSA (and GCHQ) can find you by your 'voiceprint' even if you're speaking a foreign language on a burner phone

This is pretty incredible:

Americans most regularly encounter this technology, known as speaker recognition, or speaker identification, when they wake up Amazon’s Alexa or call their bank. But a decade before voice commands like “Hello Siri” and “OK Google” became common household phrases, the NSA was using speaker recognition to monitor terrorists, politicians, drug lords, spies, and even agency employees.

The technology works by analyzing the physical and behavioral features that make each person’s voice distinctive, such as the pitch, shape of the mouth, and length of the larynx. An algorithm then creates a dynamic computer model of the individual’s vocal characteristics. This is what’s popularly referred to as a “voiceprint.” The entire process — capturing a few spoken words, turning those words into a voiceprint, and comparing that representation to other “voiceprints” already stored in the database — can happen almost instantaneously. Although the NSA is known to rely on finger and face prints to identify targets, voiceprints, according to a 2008 agency document, are “where NSA reigns supreme.”

Hmmm….

The voice is a unique and readily accessible biometric: Unlike DNA, it can be collected passively and from a great distance, without a subject’s knowledge or consent. Accuracy varies considerably depending on how closely the conditions of the collected voice match those of previous recordings. But in controlled settings — with low background noise, a familiar acoustic environment, and good signal quality — the technology can use a few spoken sentences to precisely match individuals. And the more samples of a given voice that are fed into the computer’s model, the stronger and more “mature” that model becomes.
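The quoted description stays high-level, but the classic first step in speaker recognition is to summarise a recording as spectral features, commonly MFCCs, and compare them against an enrolled model. A deliberately crude sketch using the librosa library (real systems use far richer models, and the file names here are placeholders):

```python
# Deliberately crude illustration of a "voiceprint": summarise each
# recording as averaged spectral features (MFCCs) and compare summaries.
# Real systems use far richer models; file names are placeholders.
# Requires librosa and numpy (pip install librosa).
import numpy as np
import librosa

def voiceprint(path: str) -> np.ndarray:
    """A crude voiceprint: the mean MFCC vector of a recording."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)  # shape (20, frames)
    return mfcc.mean(axis=1)                                # shape (20,)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voiceprints (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled = voiceprint("known_speaker.wav")      # placeholder files
unknown = voiceprint("intercepted_call.wav")
print(f"similarity: {similarity(enrolled, unknown):.3f}")
```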
So yeah, let's put a microphone in every room of our house so that we can tell Alexa to turn off the lights. What could possibly go wrong?

Source: The Intercept

Favourable winds

“If a man does not know to what port he is steering, no wind is favourable to him.”

(Seneca)

Listening to video game soundtracks can improve your productivity

I can attest to the power of this, particularly the Halo soundtrack:

As I write these words, a triumphant horn is erupting in my ear over the rhythmic bowing of violins. In fact, as you read, I would encourage you to listen along—just search “Battlefield One.” I bet you'll focus just a bit better with it playing in the background. After all, as a video game soundtrack it's designed to have exactly that effect.

This is, by far, the best Life Pro Tip I’ve ever gotten or given: Listen to music from video games when you need to focus. It’s a whole genre designed to simultaneously stimulate your senses and blend into the background of your brain, because that’s the point of the soundtrack. It has to engage you, the player, in a task without distracting from it. In fact, the best music would actually direct the listener to the task.

These days I prefer to listen to Brain.fm after I got a lifetime deal via AppSumo a year or so ago. I enjoy music as an art form, but I also appreciate it for the effect it can have on my brain.

Source: Popular Science


Technology to connect and communicate

People going to work in factories and offices is a relatively recent invention. For most of human history, people have worked from, or very near to, their home.

But working from home these days is qualitatively different, because we have the internet, as Sarah Jaffe points out in a recent newsletter:

Freelancing is a strange way to work, not because self-supervised labor in the home doesn't have a long history that well predates leaving your house to go to a workplace, but because it relies so much on communication with the outside. I'm waiting on emails from editors and so I am writing to you, my virtual water-cooler companions.

[…]

The internet, then, serves to make work less isolated. I have chats going a lot of the day, unless I’m in super drill-down writing mode, which is less of my job than many people probably expect. My friends have helped me figure out thorny issues in a piece I’m writing and helped me figure out what to write in an email to an editor who’s dropped off the face of the earth and advised me on how much money to ask for. It’s funny, there are so many stories about the way the internet is making us lonely and isolated, and it is sometimes my only human contact. My voice creaked when I answered the phone this morning because I hadn’t yet used it today.

The problem is that capitalism forces us into a situation where we’re competing with others rather than collaborating with them:

How do we use technology to connect and communicate rather than compete? How do we have conversations that further our understandings of things?
I don't actually think it's solely a technology problem, although every technology has inbuilt biases. It's also a problem to be solved at the societal 'operating system' level through, for example, co-owning the organisation for which you work.

Source: Sarah Jaffe