
Childhood amnesia

My kids will often ask me about what I was like at their age. It might be about how fast I swam a couple of lengths of freestyle, what music I was into, or when I went on a particular holiday I mentioned in passing. Of course, as I didn’t keep a diary as a child, these questions are almost impossible to answer. I simply can’t remember how old I was when certain things happened.

Over and above that, though, there are some things that I’ve just completely forgotten. I only realise this when I see, hear, or perhaps smell something that reminds me of a thing my conscious mind had chosen to leave behind. It’s particularly true of experiences from when we are very young. This phenomenon is known as ‘childhood amnesia’, as an article in Nautilus explains:

On average, people’s memories stretch no farther than age three and a half. Everything before then is a dark abyss. “This is a phenomenon of longstanding focus,” says Patricia Bauer of Emory University, a leading expert on memory development. “It demands our attention because it’s a paradox: Very young children show evidence of memory for events in their lives, yet as adults we have relatively few of these memories.”

In the last few years, scientists have finally started to unravel precisely what is happening in the brain around the time that we forsake recollection of our earliest years. “What we are adding to the story now is the biological basis,” says Paul Frankland, a neuroscientist at the Hospital for Sick Children in Toronto. This new science suggests that as a necessary part of the passage into adulthood, the brain must let go of much of our childhood.

Interestingly, our seven-year-old daughter is on the cusp of this forgetting. She’s slowly forgetting things that she had no problem recalling even last year, and has to be prompted by photographs of the event or experience.

One experiment after another revealed that the memories of children 3 and younger do in fact persist, albeit with limitations. At 6 months of age, infants’ memories last for at least a day; at 9 months, for a month; by age 2, for a year. And in a landmark 1991 study, researchers discovered that four-and-a-half-year-olds could recall detailed memories from a trip to Disney World 18 months prior. Around age 6, however, children begin to forget many of these earliest memories. In a 2005 experiment by Bauer and her colleagues, five-and-a-half-year-olds remembered more than 80 percent of experiences they had at age 3, whereas seven-and-a-half-year-olds remembered less than 40 percent.

It’s fascinating, and also true of later experiences, although to a lesser extent. Our brains conceal some of our memories by rewiring themselves. This is all part of growing up.

This restructuring of memory circuits means that, while some of our childhood memories are truly gone, others persist in a scrambled, refracted way. Studies have shown that people can retrieve at least some childhood memories by responding to specific prompts—dredging up the earliest recollection associated with the word “milk,” for example—or by imagining a house, school, or specific location tied to a certain age and allowing the relevant memories to bubble up on their own.

So we shouldn’t worry too much about remembering childhood experiences in high fidelity. After all, it’s important to be able to tell new stories to ourselves and to other people, casting prior experiences in a new light.

Source: Nautilus

On ‘instagrammability’

“We shape our tools and thereafter our tools shape us.” (John M. Culkin)

I choose not to use or link to Facebook services, and that includes Instagram and WhatsApp. I do, however, recognise the huge power that Instagram has over some people’s lives which, of course, trickles down to businesses and those looking to “live the Instagram lifestyle”.

The design blog Dezeen picks up on a report from an Australian firm of architects, demonstrating that ‘Instagrammable moments’ are now part of their brief.


I’m all for user stories and creating personas, but one case study looks like grounds for divorce: Bob is cast as the servant of Michelle, who wants to be photographed doing things she’s seen others doing.

One case study features Bob and Michelle, a couple with “very different ideas about what their holiday should look like.”

While Bob wants to surf, drink beer and spend quality time with Michelle, she wants to “be pampered and live the Instagram life of fresh coconuts and lounging by the pool.”

In response to this type of user, designers should focus on providing what Michelle wants, since “Bob’s main job this holiday is to take pictures of Michelle.”

“Michelle wants pictures of herself in the pool, of bright colours, and of fresh attractive food,” the report says. “You’ll also find her taking pictures of remarkable indoor and outdoor artwork like murals or inspirational signage.”

It’s easy to roll your eyes at this (and trust me, mine are almost rotating out of their sockets), but the historian in me finds it fascinating. I wonder whether future generations will realise that certain architectural details were the result of photos being taken for a particular service.

Other designers taking users’ Instagram preferences into account include Coordination Asia, whose recent project for restaurant chain Gaga in Shanghai has been optimised so design elements fit in a photo frame and maximise the potential for selfies.

Instagram co-founder Mike Krieger told Dezeen that he had noticed that the platform was influencing interior design.

Of course, architects and designers have to start somewhere and perhaps ‘instagrammability’ is a useful creative constraint.

“Hopefully it leads to a creative spark and things feeling different over time,” [Krieger] said. “I think a bad effect would be that same definition of instagrammability in every single space. But instead, if you can make it yours, it can add something to the building.”

Instagram was placed at number 66 in the latest Dezeen Hot List of the most newsworthy forces in world design.

Now that I’ve read this, I’ll be noticing this everywhere, no doubt.

Source: Dezeen

Where memes come from

In my TEDx talk six years ago, I explained how the understanding and remixing of memes was a great way to develop digital literacies. At that time, they were beginning to be used in advertisements. Now, as we saw with Brexit and the most recent US Presidential election, they’ve become weaponised.

This article in the MIT Technology Review references one of my favourite websites, knowyourmeme.com, which tracks the origin and influence of various memes across the web. Researchers have taken 700,000 images from this site and used an algorithm to track their spread and development. In addition, they gathered 100 million images from other sources.

Spotting visually similar images is relatively straightforward with a technique known as perceptual hashing, or pHashing. This uses an algorithm to convert an image into a set of vectors that describe it in numbers. Visually similar images have similar sets of vectors or pHashes.
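To make that concrete, here’s a minimal sketch of comparing two images by perceptual hash using the Python imagehash library. The filenames are placeholders, and the similarity threshold is a common rule of thumb rather than anything from the paper:

```python
import imagehash
from PIL import Image

# A perceptual hash summarises an image's visual structure as a short
# bitstring, so near-duplicate memes end up with near-identical hashes.
h1 = imagehash.phash(Image.open("meme_original.png"))
h2 = imagehash.phash(Image.open("meme_variant.png"))

# Subtracting two ImageHash objects gives the Hamming distance between them.
distance = h1 - h2
print(f"Hamming distance: {distance}")
if distance <= 10:  # rule-of-thumb cut-off for the default 64-bit pHash
    print("Visually similar: probably variants of the same meme")
```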

The team let their algorithm loose on a database of over 100 million images gathered from communities known to generate memes, such as Reddit and its subgroup The_Donald, Twitter, 4chan’s politically incorrect forum known as /pol/, and a relatively new social network called Gab that was set up to accommodate users who had been banned from other communities.

Whereas some things ‘go viral’ by accident and catch the original author(s) off-guard, some communities are very good at making memes that spread quickly.

Two relatively small communities stand out as being particularly effective at spreading memes. “We find that /pol/ substantially influences the meme ecosystem by posting a large number of memes, while The Donald is the most efficient community in pushing memes to both fringe and mainstream Web communities,” say Stringhini and co.

They also point out that “/pol/ and Gab share hateful and racist memes at a higher rate than mainstream communities,” including large numbers of anti-Semitic and pro-Nazi memes.

Seemingly neutral memes can also be “weaponized” by mixing them with other messages. For example, the “Pepe the Frog” meme has been used in this way to create politically active, racist, and anti-Semitic messages.

It turns out that, just like in evolutionary biology, creating a large number of variants is likely to lead to an optimal solution for a given environment.

The researchers, who have made their technique available to others to promote further analysis, are even able to throw light on the question of why some memes spread widely while others quickly die away. “One of the key components to ensuring they are disseminated is ensuring that new ‘offspring’ are continuously produced,” they say.

That immediately suggests a strategy for anybody wanting to become more influential: set up a meme factory that produces large numbers of variants of other memes. Every now and again, this process is bound to produce a hit.

For any evolutionary biologist, that may sound familiar. Indeed, it’s not hard to imagine a process that treats pHashes like genomes and allows them to evolve through mutation, reproduction, and selection.
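It’s not hard to sketch, either. The toy loop below treats a 64-bit hash as a genome and evolves it through mutation, reproduction, and selection. To be clear, this is my own illustration rather than the researchers’ method, and the fitness function (a stand-in for ‘engagement’) is entirely made up:

```python
import random

def mutate(genome: str, rate: float = 0.02) -> str:
    """Flip each bit of a hash-like bitstring with a small probability."""
    return "".join(str(1 - int(b)) if random.random() < rate else b for b in genome)

def evolve(population: list, fitness, generations: int = 50) -> list:
    """Reproduce with variation, then select: keep only the fittest 'memes'."""
    for _ in range(generations):
        offspring = [mutate(g) for g in population for _ in range(10)]
        population = sorted(population + offspring, key=fitness, reverse=True)[:len(population)]
    return population

# Made-up fitness: pretend memes with more 1-bits get more engagement.
seed = ["".join(random.choice("01") for _ in range(64)) for _ in range(20)]
best = evolve(seed, fitness=lambda g: g.count("1"))
```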

As the article states, right now it’s humans creating these memes. However, it won’t be long until we have machines doing this automatically. After all, it’s been five years since the controversy about the algorithmically-created “Keep Calm and…” t-shirts for sale on Amazon.

It’s an interesting space to watch, particularly for those interested in digital literacies (and democracy).

Source: MIT Technology Review

Why NASA is better than Facebook at writing software

Facebook’s motto, until recently, was “move fast and break things”. This chimed with a wider Silicon Valley brogrammer mentality of “f*ck it, ship it”.

NASA’s approach, as this (long-ish) Fast Company article explains, couldn’t be more different to the Silicon Valley narrative. The author, Charles Fishman, explains that the group who write the software for space shuttles are exceptional at what they do. And they don’t even start writing code until they’ve got a complete plan in place.

This software is the work of 260 women and men based in an anonymous office building across the street from the Johnson Space Center in Clear Lake, Texas, southeast of Houston. They work for the “on-board shuttle group,” a branch of Lockheed Martin Corp.’s space mission systems division, and their prowess is world renowned: the shuttle software group is one of just four outfits in the world to win the coveted Level 5 ranking of the federal government’s Software Engineering Institute (SEI), a measure of the sophistication and reliability of the way they do their work. In fact, the SEI based its standards in part on watching the on-board shuttle group do its work.

There’s an obvious impact, both in terms of financial and human cost, if something goes wrong with a shuttle. Imagine if we had these kinds of standards for the impact of social networks on the psychological health of citizens and democratic health of nations!

NASA knows how good the software has to be. Before every flight, Ted Keller, the senior technical manager of the on-board shuttle group, flies to Florida where he signs a document certifying that the software will not endanger the shuttle. If Keller can’t go, a formal line of succession dictates who can sign in his place.

Bill Pate, who’s worked on the space flight software over the last 22 years, says the group understands the stakes: “If the software isn’t perfect, some of the people we go to meetings with might die.”

Software powers everything. It’s in your watch, your television, and your car. Yet the quality of most software is pretty poor.

“It’s like pre-Sumerian civilization,” says Brad Cox, who wrote the software for Steve Jobs’ NeXT computer and is a professor at George Mason University. “The way we build software is in the hunter-gatherer stage.”

John Munson, a software engineer and professor of computer science at the University of Idaho, is not quite so generous. “Cave art,” he says. “It’s primitive. We supposedly teach computer science. There’s no science here at all.”

The NASA team can sum up their process in four propositions:

  1. The product is only as good as the plan for the product.
  2. The best teamwork is a healthy rivalry.
  3. The database is the software base.
  4. Don’t just fix the mistakes — fix whatever permitted the mistake in the first place.

They don’t pull all-nighters. They don’t switch to the latest JavaScript library because it’s all over Hacker News. Everything is documented, and the genealogy of all the code is available to everyone working on it.

The most important things the shuttle group does — carefully planning the software in advance, writing no code until the design is complete, making no changes without supporting blueprints, keeping a completely accurate record of the code — are not expensive. The process isn’t even rocket science. It’s standard practice in almost every engineering discipline except software engineering.

I’m going to be bearing this in mind as we build MoodleNet. We’ll have to be a bit more agile than NASA, of course. But planning and process is important stuff.

 

Source: Fast Company

Protocols for the free web

If there’s one thing I’ve learned in my time at the intersection of education and technology, it’s that nobody cares about the important stuff, but people will go crazy if you make a small tweak to an emoji icon. 🙄

The reason you can use any web browser you want to access this website is down to standards. These are collections of protocols that define expected behaviours when you use a web browser to read what I’ve written. There are organisations and working groups ensuring that the internet doesn’t devolve into the Wild West.

This post on the We Distribute blog is an interview with Mike Macgirvin who has spent much of his adult life working on the protocols that enable social interaction on the web to happen. It’s an important read, even for less-than-technical people, as it serves to explain some of the very human decisions that shape the technology that mediates our lives.

There’s nothing magic about a protocol. It’s basically just a gentleman’s agreement about how to implement something. There are a number of levels or grades of protocols from simple in-house conventions all the way to internet specifications. The higher quality protocols have some interesting characteristics. Most importantly, these are intended as actual technical blueprints so that if two independent developers in isolated labs follow the specifications accurately, their implementations should interact together perfectly. This is an important concept.

The level of specification needed to produce this higher quality protocol is a double-edged sword. If you specify things too rigidly, projects using this protocol cannot grow or extend beyond the limits and restrictions you have specified. If you do not specify the implementation rules tightly enough, you will end up with competing products or projects that can both claim to implement the specification, yet are unable to interoperate at a basic level.

For-profit companies, and in particular those backed by venture capitalists, are very fond of what’s known as vendor lock-in. While there are moves afoot to limit this, including provisions in the GDPR, it’s a game of cat and mouse.

The free web, on the other hand, is different. It’s a place where, instead of being beholden to people trying to commodify and intermediate your interactions with other human beings, there is the free exchange of data and ideas.

Unfortunately, as Macgirvin points out, it’s much easier to enclose something than to ‘lock it open’:

In 2010–2012, the free web lost *hundreds of thousands* of early adopters because we had no way to easily migrate from server to server; and lots of early server administrators closed down with little or no warning. This set the free web back at least five years, because you couldn’t trust your account and identity and friendships and content to exist tomorrow. Most of the other free web projects decided that this problem should be solved by import/export tools (which we’re still waiting for in some cases).

I saw an even bigger problem. Twitter at the time was over capacity and often would be shut down for hours or a few days. What if you didn’t really want to permanently move to another server, but you just wanted to post something and stay in touch with friends/family when your server was having a bad day? This was the impetus for nomadic identity. You could take a thumbdrive and load it into any other server; and your identity is intact and you still have all your friends. Then we allowed you to “clone” your identity so you could have these backup accounts available at any time you needed them. Then we started syncing stuff between your clones so that on server ‘A’ you still have the same exact content and friends that you do on server ‘B’. They’re clones. You can post from either. If one shuts down forever, no big deal. If it has a cert issue that takes 24 hours to fix, no big deal. Your online life can continue, uninterrupted — no matter what happens to individual servers.
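To make ‘nomadic identity’ a little more concrete, here’s a purely conceptual sketch. This is not the actual Zot protocol, which handles cryptographic identity, permissions, and much more; it just illustrates the same identity living on several servers and staying in sync:

```python
from dataclasses import dataclass, field

@dataclass
class IdentityClone:
    """One server's copy of a nomadic identity (illustrative only)."""
    server: str
    friends: set = field(default_factory=set)
    posts: dict = field(default_factory=dict)  # post id -> content

def sync(clones):
    """Merge friends and posts so every clone holds the same state.
    If one server dies or goes down for a day, any clone can carry on."""
    friends = set().union(*(c.friends for c in clones))
    posts = {pid: body for c in clones for pid, body in c.posts.items()}
    for c in clones:
        c.friends, c.posts = set(friends), dict(posts)

home = IdentityClone("server-a.example", {"alice"}, {"1": "Hello from A"})
backup = IdentityClone("server-b.example")
sync([home, backup])  # the backup now mirrors the home server
```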

The trouble, of course, with all of this, is that things aren’t important until they are. So if you’re using Twitter to share photos of what you had for breakfast or status updates about the facial expressions of your cat, you’re not so bothered if the service experiences some downtime. Fast forward a couple of years and emergency services are using it to reassure the citizenry in the face of impending doom.

Those out to make a profit from commodifying social interaction are like those on the political right; they’re more likely to rally behind one another in the name of capital. The left, in this case represented by the free web, is prone to internecine conflict due to their motivation being more ideological than financial.

The way I look at it is that the free web is like family. Everybody has a dysfunctional family. You have black sheep and relatives you really just want to strangle sometimes. Thanksgiving dinner always turns into a shitfight. They’re all fundamentalist Christians and you’re more Zen Buddhist. You can’t carry on a conversation without arguing about who has the more successful career or chastising cousin Harry for his drug use.

But when you get right down to it — none of this matters. They’re family. We’re all in this together. That’s how it is with the free web, even if some projects like to think that they are the only ones that matter. Everybody matters. Each of our projects brings a unique value proposition to the table, and provides a different set of solutions and decentralised services. You can’t ignore any of them or leave any of them behind. We’re one family and we’re all busy creating something incredible. If you look at only one member of this family, you might be disappointed in the range of services that are being offered. You’re probably missing out completely on what the rest of the family is doing. Together we’re all creating a new and improved social web. There are some awesome projects tackling completely different aspects of decentralisation and offering completely different services. If we could all work together we could probably conquer the world — though that’s unlikely to happen any time soon. The first step is just to all sit down at Thanksgiving dinner without killing each other.

We get to choose the technologies we use in our lives. And those decisions matter. Decentralisation is important, particularly in regards to the social web, because no government or organisation should be given the power to mediate our interactions.

Source: We Distribute

Nobody is ready for GDPR

As a small business owner and co-op founder, GDPR applies to me as much as everyone else. It’s a massive ballache, but I support the philosophy behind what it’s trying to achieve.

After four years of deliberation, the General Data Protection Regulation (GDPR) was officially adopted by the European Union in 2016. The regulation gave companies a two-year runway to get compliant, which is theoretically plenty of time to get shipshape. The reality is messier. Like term papers and tax returns, there are people who get it done early, and then there’s the rest of us.

I’m definitely in “the rest of us” camp, meaning that, over the last week or so, my wife and I have spent time figuring stuff out. The main thing is getting things in order so that you’ve got a process in place. Different things are going to affect different organisations, well, differently.

But perhaps the GDPR requirement that has everyone tearing their hair out the most is the data subject access request. EU residents have the right to request access to review personal information gathered by companies. Those users — called “data subjects” in GDPR parlance — can ask for their information to be deleted, to be corrected if it’s incorrect, and even get delivered to them in a portable form. But that data might be on five different servers and in god knows how many formats. (This is assuming the company even knows that the data exists in the first place.) A big part of becoming GDPR compliant is setting up internal infrastructures so that these requests can be responded to.

A data subject access request isn’t going to affect our size of business very much. If someone does make a request, we’ve got a list of places from which to manually export the data. That’s obviously not a viable option for larger enterprises, who need to automate.
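For larger organisations, automation might look something like the sketch below: one function that fans a subject access request out across every known data store and returns a portable export. The store names and lookup functions here are hypothetical, purely for illustration:

```python
import json

def export_subject_data(email, stores):
    """Fan a data subject access request out to every registered store
    and bundle the results into one portable JSON document."""
    export = {label: lookup(email) for label, lookup in stores.items()}
    return json.dumps({"subject": email, "data": export}, indent=2, default=str)

# Hypothetical lookups; in reality these would query a CRM, mailing list, etc.
stores = {
    "crm": lambda email: {"signed_up": "2017-03-01"},
    "newsletter": lambda email: {"subscribed": True},
}
print(export_subject_data("person@example.com", stores))
```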

To be fair, GDPR as a whole is a bit complicated. Alison Cool, a professor of anthropology and information science at the University of Colorado, Boulder, writes in The New York Times that the law is “staggeringly complex” and practically incomprehensible to the people who are trying to comply with it. Scientists and data managers she spoke to “doubted that absolute compliance was even possible.”

To my mind, GDPR is like a much more far-reaching version of the Freedom of Information Act 2000. That changed the nature of what citizens could expect from public bodies. I hope that GDPR similarly changes what we can all expect from organisations that process our personal data.

Source: The Verge

The toughest smartphones on the market

I found this interesting:

To help you avoid finding out the horrifying truth when your phone goes clattering to the ground, we tested all of the major smartphones by dropping them over the course of four rounds from 4 feet and 6 feet onto wood and concrete — and even into a toilet — to see which handset is the toughest.

The results?

While the result wasn’t completely unexpected — after all, the phone has a ShatterShield display, which the company guarantees against cracks — the Moto Z2 Force survived drops from 6 feet onto concrete, with barely a scratch.

Apple’s least-expensive phone didn’t prove very tough at all. In fact, the $399 iPhone SE was rendered unusable before all of the others. However, this was not a big surprise, as the newer iPhone 8 and iPhone X are made with much stronger glass than the iPhone SE’s from 2016.

Summary:

  • Motorola Moto Z2 Force – Toughness score: 8.5/10
  • LG X Venture – Toughness score: 6.6/10
  • Apple iPhone X – Toughness score: 6.2/10
  • LG V30 – Toughness score: 6/10
  • Samsung Galaxy S9 – Toughness score: 6/10
  • Motorola Moto G5 Plus – Toughness score: 5.1/10
  • Apple iPhone 8 – Toughness score: 4.9/10
  • Samsung Galaxy Note 8 – Toughness score: 4.3/10
  • OnePlus 5T – Toughness score: 4.3/10
  • Huawei Mate 10 Pro – Toughness score: 4.3/10
  • Google Pixel 2 XL – Toughness score: 4.3/10
  • iPhone SE – Toughness score: 3.9/10

Source: Tom’s Guide

Craig Mod’s subtle redesign of the hardware Kindle

I like Craig Mod’s writing. He’s the guy who’s written about his need to walk, drawing his own calendar, and getting his attention back.

This article is about hardware Kindle devices — the distinction being important given that you can read your books via the Kindle Cloud Reader or, indeed, via an app on pretty much any platform.

As he points out, the user interface remains sub-optimal:

Tap most of the screen to go forward a page. Tap the left edge to go back. Tap the top-ish area to open the menu. Tap yet another secret top-right area to bookmark. This model co-opts the physical space of the page to do too much.

The problem is that the text is also an interface element. But it’s a lower-level element. Activated through a longer tap. In essence, the Kindle hardware and software team has decided to “function stack” multiple layers of interface onto the same plane.

And so this model has never felt right.

He suggests an alternative to this which involves physical buttons on the device itself:

Hardware buttons:

  • Page forward
  • Page back
  • Menu
  • (Power/Sleep)

What does this get us?

It means we can now assume that — when inside of a book — any tap on the screen is explicitly to interact with content: text or images within the text. This makes the content a first-class object in the interaction model. Right now it’s secondary, engaged only if you tap and hold long enough on the screen. Otherwise, page turn and menu invocations take precedence.
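One way to see the difference is as two event dispatchers. In the sketch below, the first function caricatures the current overloaded touch model and the second follows Mod’s button-first proposal; the region boundaries and button names are my own invention:

```python
def dispatch_current(x: float, y: float, long_press: bool) -> str:
    """Today's model: screen regions double as navigation controls.
    Coordinates are normalised to 0..1; the region sizes are guesses."""
    if long_press:
        return "interact_with_text"  # content is the second-class citizen
    if y < 0.1:
        return "open_menu"
    if x < 0.15:
        return "page_back"
    return "page_forward"

def dispatch_proposed(event: str) -> str:
    """Mod's proposal: hardware buttons own navigation, so any touch
    on the screen unambiguously means 'interact with content'."""
    buttons = {"BTN_FORWARD": "page_forward",
               "BTN_BACK": "page_back",
               "BTN_MENU": "open_menu"}
    return buttons.get(event, "interact_with_content")
```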

I can see why he proposes this, but I’m not so sure about physical buttons for page turns. The reason I say that is that, although I now use a Linux-based bq Cervantes e-reader, before 2015 I had almost every iteration of the hardware Kindle. There’s a reason Amazon removed hardware buttons for page turns.

I read in lots of places, but I read in bed with my wife every day and if there’s one thing she couldn’t stand, it was the clicking noise of me turning the page on my Kindle. Even if I tried to press it quietly, it annoyed her. Touchscreen page turns are much better.

The e-reader I use has a similar touch interaction to the Kindle, so I see where Craig Mod is coming from when he says:

When content becomes the first-class object, every interaction is suddenly bounded and clear. Want the menu? Press the (currently non-existent) menu button towards the top of the Kindle. Want to turn the page? Press the page turn button. Want to interact with the text? Touch it. Nothing is “hidden.” There is no need to discover interactions. And because each interaction is clear, it invites more exploration and play without worrying about losing your place.

This, if you haven’t come across it before, is user interface design, or UI design for short. It’s important stuff, for as Steve Jobs famously said: “Everything in this world… was created by people no smarter than you” — and that’s particularly true in tech.

Source: Craig Mod

The death of the newsfeed (is much exaggerated)

Benedict Evans is a venture capitalist who focuses on technology companies. He’s a smart guy with some important insights, and I thought his recent post about the ‘death of the newsfeed’ on social networks was particularly useful.

He points out that it’s pretty inevitable that the average person will, over the course of a few years, add a few hundred ‘friends’ to their connections on any given social network. Let’s say you’re connected with 300 people, and they all share five things each day. That’s 1,500 things you’ll be bombarded with, unless the social network does something about it.

This overload means it now makes little sense to ask for the ‘chronological feed’ back. If you have 1,500 or 3,000 items a day, then the chronological feed is actually just the items you can be bothered to scroll through before giving up, which can only be 10% or 20% of what’s actually there. This will be sorted by no logical order at all except whether your friends happened to post them within the last hour. It’s not so much chronological in any useful sense as a random sample, where the randomizer is simply whatever time you yourself happen to open the app. ’What did any of the 300 people that I friended in the last 5 years post between 16:32 and 17:03?’ Meanwhile, giving us detailed manual controls and filters makes little more sense – the entire history of the tech industry tells us that actual normal people would never use them, even if they worked. People don’t file.

So we end up with algorithmic feeds, which is an attempt by social networks to ensure that you see the stuff that you deem important. It is, of course, an almost impossible mission.
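Stripped of the machine learning, the two approaches differ mainly in the sort key. Here’s a toy contrast, with ‘predicted_interest’ standing in for whatever signals a real network infers about you:

```python
from datetime import datetime

def chronological(posts, budget=150):
    """'Chronological' in practice: the newest slice you can be
    bothered to scroll through before giving up."""
    return sorted(posts, key=lambda p: p["ts"], reverse=True)[:budget]

def algorithmic(posts, budget=150):
    """Toy relevance ranking: predicted interest, decayed by age."""
    now = datetime.now()
    def score(p):
        hours_old = (now - p["ts"]).total_seconds() / 3600
        return p["predicted_interest"] / (1 + hours_old)
    return sorted(posts, key=score, reverse=True)[:budget]
```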

[T]here are a bunch of problems around getting the algorithmic newsfeed sample ‘right’, most of which have been discussed at length in the last few years. There are lots of incentives for people (Russians, game developers) to try to manipulate the feed. Using signals of what people seem to want to see risks over-fitting, circularity and filter bubbles. People’s desires change, and they get bored of things, so Facebook has to keep changing the mix to try to reflect that, and this has made it an unreliable partner for everyone from Zynga to newspapers. Facebook has to make subjective judgements about what it seems that people want, and about what metrics seem to capture that, and none of this is static or even in principle perfectible. Facebook surfs user behaviour.

Evans then goes on to raise the problem that what you want to see may be different from what your friends want you to see. So people solve the problem of algorithmic feeds not showing them what they really want by using messaging apps such as WhatsApp and Telegram to interact with individuals or small groups.

The problem with that, though?

The catch is that though these systems look like they reduce sharing overload, you really want group chats. And lots of groups. And when you have 10 WhatsApp groups with 50 people in each, then people will share to them pretty freely. And then you think ‘maybe there should be a screen with a feed of the new posts in all of my groups. You could call it a ‘news feed’. And maybe it should get some intelligence, to show the posts you care about most…

So, to Evans’ mind (and I’m tempted to agree with him), we’re in a never-ending spiral. The only way I can see out of it is user education, particularly around owning one’s own data and IndieWeb approaches.

Source: Benedict Evans

How to get hired

A great short post from Seth Godin, who explains how things work in the real world when you’re looking for a job or your next gig:

You meet someone. You do a small project. You write an article. It leads to another meeting. You do a slightly bigger project for someone else. You make a short film. That leads to a speaking gig. Which leads to a consulting contract. And then you get the gig.

These ‘hops’, as he calls them, are important as they affect the mindset we should adopt:

If you’re walking around with a quid pro quo mindset, giving only enough to get what you need right now, and walking away from anyone or anything that isn’t the destination—not only are you eliminating all the possible multi-hop options, you’re probably not having as much fun or contributing as much as you could either.

Amen to that.

Source: Seth Godin