Category: How stuff works

Are tiny conferences and meetups better than big ones?

Over a decade ago, a few Scottish educators got together in a pub for a meetup. This spawned something that is still going to this day: the TeachMeet. I’ve been to a fair few in my time and, particularly in the early days, found them the perfect mix of camaraderie and professional learning.

Does the size of the event matter? I think it probably does. While you can absolutely learn a lot at much larger, carefully curated events such as MoodleMoots, there’s nothing like events of fewer than one hundred people getting together. If it’s fewer than fifty, even better.

I’ve been reminded of this thanks to a post on ‘tiny conferences’ that I found via Hacker News:

I find that I get so much more value and enjoyment from conferences with less than 30 people than I do from most of the 200+ attendee conferences I’ve been to. Don’t get me wrong, there are some excellent, well-run, “real” business conferences with plenty value.

But if I compare and evaluate them based on this criteria: “Did I get what I wanted out of this trip?” … “Will my business benefit because I went?” … “Did I have fun and enjoy my time there?” … “Would I go again?”, then I choose Tiny Confs every time.

The author of the post gives eight pointers for running a successful ‘Tiny Conf’:

  1. Keep it ‘tiny’
  2. Make it application and invite-only
  3. Pick a fun location with an activity
  4. ‘Sessions’ not ‘talks’
  5. Plan everything in advance
  6. Manage the money
  7. Keep in touch before, during and after the trip
  8. You do you!

There’s some solid advice in there. It actually reminded me of the MountainMoot I went to earlier this year, which ticked all of these boxes. It was a great event, and one that I’ll remember for a long time!

At this time of political upheaval and social media burnout, it might even be nice to call this kind of thing a ‘retreat’? I’d certainly be attracted to going to something like that.

Source: Brian Casel


Update: Thanks to Mags Amond who mentioned CongRegation which looks excellent!

How do people learn?

I was looking forward to digging into a new book from the US National Academies Press, which is freely downloadable in return for a (fake?) email address:

There are many reasons to be curious about the way people learn, and the past several decades have seen an explosion of research that has important implications for individual learning, schooling, workforce training, and policy.

In 2000, How People Learn: Brain, Mind, Experience, and School: Expanded Edition was published and its influence has been wide and deep. The report summarized insights on the nature of learning in school-aged children; described principles for the design of effective learning environments; and provided examples of how that could be implemented in the classroom.

Since then, researchers have continued to investigate the nature of learning and have generated new findings related to the neurological processes involved in learning, individual and cultural variability related to learning, and educational technologies. In addition to expanding scientific understanding of the mechanisms of learning and how the brain adapts throughout the lifespan, there have been important discoveries about influences on learning, particularly sociocultural factors and the structure of learning environments.

How People Learn II: Learners, Contexts, and Cultures provides a much-needed update incorporating insights gained from this research over the past decade. The book expands on the foundation laid out in the 2000 report and takes an in-depth look at the constellation of influences that affect individual learning. How People Learn II will become an indispensable resource to understand learning throughout the lifespan for educators of students and adults.

Thankfully, Stephen Downes has created a slide-based overview of the key points for easier consumption!

How People Learn from Stephen Downes

It would have been great if he’d used different images rather than the same one on every slide, but it’s still helpful.
 
Source: National Academies / OLDaily

An incorrect approach to teaching History

My thanks to Amy Burvall for bringing to my attention this article about how we’re teaching History incorrectly. Its focus is on how ‘fact-checking’ with the internet is so different from what came before. There are a lot of similarities between what the interviewee, Sam Wineburg, has to say and what Mike Caulfield has been working on with Web Literacy for Student Fact-Checkers:

Fact-checkers know that in a digital medium, the web is a web. It’s not just a metaphor. You understand a particular node by its relationship in a web. So the smartest thing to do is to consult the web to understand any particular node. That is very different from reading Thucydides, where you look at internal criticism and consistency because there really isn’t a documentary record beyond Thucydides.

Source: Slate

Childhood amnesia

My kids will often ask me about what I was like at their age. It might be about how fast I swam a couple of lengths of freestyle, it could be what music I was into, or when I went on a particular holiday I mentioned in passing. Of course, as I didn’t keep a diary as a child, these questions are almost impossible to answer. I simply can’t remember how old I was when certain things happened.

Over and above that, though, there are some things that I’ve just completely forgotten. I only realise this when I see, hear, or perhaps smell something that reminds me of a thing that my conscious mind had chosen to leave behind. It’s particularly true of experiences from when we are very young. This phenomenon is known as ‘childhood amnesia’, as an article in Nautilus explains:

On average, people’s memories stretch no farther than age three and a half. Everything before then is a dark abyss. “This is a phenomenon of longstanding focus,” says Patricia Bauer of Emory University, a leading expert on memory development. “It demands our attention because it’s a paradox: Very young children show evidence of memory for events in their lives, yet as adults we have relatively few of these memories.”

In the last few years, scientists have finally started to unravel precisely what is happening in the brain around the time that we forsake recollection of our earliest years. “What we are adding to the story now is the biological basis,” says Paul Frankland, a neuroscientist at the Hospital for Sick Children in Toronto. This new science suggests that as a necessary part of the passage into adulthood, the brain must let go of much of our childhood.

Interestingly, our seven-year-old daughter is on the cusp of this forgetting. She’s slowly forgetting things that she had no problem recalling even last year, and has to be prompted by photographs of the event or experience.

One experiment after another revealed that the memories of children 3 and younger do in fact persist, albeit with limitations. At 6 months of age, infants’ memories last for at least a day; at 9 months, for a month; by age 2, for a year. And in a landmark 1991 study, researchers discovered that four-and-a-half-year-olds could recall detailed memories from a trip to Disney World 18 months prior. Around age 6, however, children begin to forget many of these earliest memories. In a 2005 experiment by Bauer and her colleagues, five-and-a-half-year-olds remembered more than 80 percent of experiences they had at age 3, whereas seven-and-a-half-year-olds remembered less than 40 percent.

It’s fascinating, and also true of later experiences, although to a lesser extent. Our brains conceal some of our memories by rewiring themselves. This is all part of growing up.

This restructuring of memory circuits means that, while some of our childhood memories are truly gone, others persist in a scrambled, refracted way. Studies have shown that people can retrieve at least some childhood memories by responding to specific prompts—dredging up the earliest recollection associated with the word “milk,” for example—or by imagining a house, school, or specific location tied to a certain age and allowing the relevant memories to bubble up on their own.

So we shouldn’t worry too much about remembering childhood experiences in high-fidelity. After all, it’s important to be able to tell new stories to both ourselves and other people, casting prior experiences in a new light.

Source: Nautilus

On ‘instagrammability’

“We shape our tools and thereafter our tools shape us.” (John M. Culkin)

I choose not to use or link to Facebook services, and that includes Instagram and WhatsApp. I do, however, recognise the huge power that Instagram has over some people’s lives which, of course, trickles down to businesses and those looking to “live the Instagram lifestyle”.

The design blog Dezeen picks up on a report from an Australian firm of architects, demonstrating that ‘Instagrammable moments’ are now part of their brief.

The Six Universal Truths of Influence

I’m all for user stories and creating personas, but one case looks like grounds for divorce: Bob is cast as the servant of Michelle, who wants to be photographed doing things she’s seen others doing.

One case study features Bob and Michelle, a couple with “very different ideas about what their holiday should look like.”

While Bob wants to surf, drink beer and spend quality time with Michelle, she wants to “be pampered and live the Instagram life of fresh coconuts and lounging by the pool.”

In response to this type of user, designers should focus on providing what Michelle wants, since “Bob’s main job this holiday is to take pictures of Michelle.”

“Michelle wants pictures of herself in the pool, of bright colours, and of fresh attractive food,” the report says. “You’ll also find her taking pictures of remarkable indoor and outdoor artwork like murals or inspirational signage.”

It’s easy to roll your eyes at this (and trust me, mine are almost rotating out of their sockets) but the historian in me finds this fascinating. I wonder if future generations will realise that architectural details were a result of photos being taken for a particular service?

Other designers taking users’ Instagram preferences into account include Coordination Asia, whose recent project for restaurant chain Gaga in Shanghai has been optimised so design elements fit in a photo frame and maximise the potential for selfies.

Instagram co-founder Mike Krieger told Dezeen that he had noticed that the platform was influencing interior design.

Of course, architects and designers have to start somewhere and perhaps ‘instagrammability’ is a useful creative constraint.

“Hopefully it leads to a creative spark and things feeling different over time,” [Krieger] said. “I think a bad effect would be that same definition of instagrammability in every single space. But instead, if you can make it yours, it can add something to the building.”

Instagram was placed at number 66 in the latest Dezeen Hot List of the most newsworthy forces in world design.

Now that I’ve read this, I’ll be noticing this everywhere, no doubt.

Source: Dezeen

Where memes come from

In my TEDx talk six years ago, I explained how the understanding and remixing of memes was a great way to develop digital literacies. At that time, they were beginning to be used in advertisements. Now, as we saw with Brexit and the most recent US Presidential election, they’ve become weaponised.

This article in the MIT Technology Review references one of my favourite websites, knowyourmeme.com, which tracks the origin and influence of various memes across the web. Researchers have taken 700,000 images from this site and used an algorithm to track their spread and development. In addition, they gathered 100 million images from other sources.

Spotting visually similar images is relatively straightforward with a technique known as perceptual hashing, or pHashing. This uses an algorithm to convert an image into a set of vectors that describe it in numbers. Visually similar images have similar sets of vectors or pHashes.

The team let their algorithm loose on a database of over 100 million images gathered from communities known to generate memes, such as Reddit and its subgroup The_Donald, Twitter, 4chan’s politically incorrect forum known as /pol/, and a relatively new social network called Gab that was set up to accommodate users who had been banned from other communities.
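For the technically curious, the difference-hash flavour of this idea fits in a few lines of Python. This is purely an illustrative sketch, not the researchers’ pipeline: real implementations (such as the ImageHash library) resize the image first and often work in the frequency domain, and the tiny pixel grids below are invented.

```python
# Toy difference hash: one simple member of the perceptual-hashing family.
# Each bit records whether a pixel is brighter than its right-hand
# neighbour, so small brightness shifts barely change the hash.

def dhash(pixels):
    """Hash a 2D grid of grayscale values (rows of equal length)."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits: small distance = visually similar."""
    return sum(x != y for x, y in zip(a, b))

img_a = [[10, 200, 30, 40],
         [90, 80, 70, 60]]
img_b = [[12, 198, 33, 41],   # img_a again, with slight brightness noise
         [88, 82, 69, 62]]
img_c = [[255, 0, 255, 0],    # a completely different image
         [0, 255, 0, 255]]

print(hamming(dhash(img_a), dhash(img_b)))  # 0 -- near-duplicates collide
print(hamming(dhash(img_a), dhash(img_c)))  # 5 -- different images don't
```

Clustering 100 million images then reduces to grouping hashes by small Hamming distance, which is far cheaper than comparing raw pixels.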

Whereas some things ‘go viral’ by accident and catch the original author(s) off-guard, some communities are very good at making memes that spread quickly.

Two relatively small communities stand out as being particularly effective at spreading memes. “We find that /pol/ substantially influences the meme ecosystem by posting a large number of memes, while The Donald is the most efficient community in pushing memes to both fringe and mainstream Web communities,” say Stringhini and co.

They also point out that “/pol/ and Gab share hateful and racist memes at a higher rate than mainstream communities,” including large numbers of anti-Semitic and pro-Nazi memes.

Seemingly neutral memes can also be “weaponized” by mixing them with other messages. For example, the “Pepe the Frog” meme has been used in this way to create politically active, racist, and anti-Semitic messages.

It turns out that, just like in evolutionary biology, creating a large number of variants is likely to lead to an optimal solution for a given environment.

The researchers, who have made their technique available to others to promote further analysis, are even able to throw light on the question of why some memes spread widely while others quickly die away. “One of the key components to ensuring they are disseminated is ensuring that new ‘offspring’ are continuously produced,” they say.

That immediately suggests a strategy for anybody wanting to become more influential: set up a meme factory that produces large numbers of variants of other memes. Every now and again, this process is bound to produce a hit.

For any evolutionary biologist, that may sound familiar. Indeed, it’s not hard to imagine a process that treats pHashes like genomes and allows them to evolve through mutation, reproduction, and selection.
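That biological framing is easy to sketch. Everything in the snippet below is invented for illustration (the bitstring ‘genomes’, the fitness function, the mutation rate); it just shows mutation, reproduction, and selection gradually producing better-adapted variants:

```python
import random

random.seed(42)  # make the toy run reproducible

# The 'environment': variants closer to this bitstring spread better.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    # Higher = fewer bits differing from what the environment rewards.
    return -sum(g != t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.2):
    # Each offspring flips each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[0] * 8]  # start from a single 'ancestor' meme
for generation in range(30):
    offspring = [mutate(g) for g in population for _ in range(5)]
    # Selection: keep only the ten fittest variants each generation.
    population = sorted(population + offspring, key=fitness, reverse=True)[:10]

best = population[0]
print(best, fitness(best))  # the surviving variant after selection
```

The mechanics mirror the researchers’ point: churn out offspring continuously and let the environment pick the winners.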

As the article states, right now it’s humans creating these memes. However, it won’t be long until we have machines doing this automatically. After all, it’s been five years since the controversy about the algorithmically-created “Keep Calm and…” t-shirts for sale on Amazon.

It’s an interesting space to watch, particularly for those interested in digital literacies (and democracy).

Source: MIT Technology Review

Why NASA is better than Facebook at writing software

Facebook’s motto, until recently, was “move fast and break things”. This chimed with a wider Silicon Valley brogrammer mentality of “f*ck it, ship it”.

NASA’s approach, as this (long-ish) Fast Company article explains, couldn’t be more different to the Silicon Valley narrative. The author, Charles Fishman, explains that the group who write the software for space shuttles are exceptional at what they do. And they don’t even start writing code until they’ve got a complete plan in place.

This software is the work of 260 women and men based in an anonymous office building across the street from the Johnson Space Center in Clear Lake, Texas, southeast of Houston. They work for the “on-board shuttle group,” a branch of Lockheed Martin Corp.’s space mission systems division, and their prowess is world renowned: the shuttle software group is one of just four outfits in the world to win the coveted Level 5 ranking of the federal government’s Software Engineering Institute (SEI), a measure of the sophistication and reliability of the way they do their work. In fact, the SEI based its standards in part on watching the on-board shuttle group do its work.

There’s an obvious impact, both in terms of financial and human cost, if something goes wrong with a shuttle. Imagine if we had these kinds of standards for the impact of social networks on the psychological health of citizens and democratic health of nations!

NASA knows how good the software has to be. Before every flight, Ted Keller, the senior technical manager of the on-board shuttle group, flies to Florida where he signs a document certifying that the software will not endanger the shuttle. If Keller can’t go, a formal line of succession dictates who can sign in his place.

Bill Pate, who’s worked on the space flight software over the last 22 years, says the group understands the stakes: “If the software isn’t perfect, some of the people we go to meetings with might die.”

Software powers everything. It’s in your watch, your television, and your car. Yet the quality of most software is pretty poor.

“It’s like pre-Sumerian civilization,” says Brad Cox, who wrote the software for Steve Jobs’ NeXT computer and is a professor at George Mason University. “The way we build software is in the hunter-gatherer stage.”

John Munson, a software engineer and professor of computer science at the University of Idaho, is not quite so generous. “Cave art,” he says. “It’s primitive. We supposedly teach computer science. There’s no science here at all.”

The NASA team can sum-up their process in four propositions:

  1. The product is only as good as the plan for the product.
  2. The best teamwork is a healthy rivalry.
  3. The database is the software base.
  4. Don’t just fix the mistakes — fix whatever permitted the mistake in the first place.

They don’t pull all-nighters. They don’t switch to the latest JavaScript library because it’s all over Hacker News. Everything is documented, and genealogy of the whole code is available to everyone working on it.

The most important things the shuttle group does — carefully planning the software in advance, writing no code until the design is complete, making no changes without supporting blueprints, keeping a completely accurate record of the code — are not expensive. The process isn’t even rocket science. It’s standard practice in almost every engineering discipline except software engineering.

I’m going to be bearing this in mind as we build MoodleNet. We’ll have to be a bit more agile than NASA, of course. But planning and process is important stuff.

 

Source: Fast Company

Protocols for the free web

If there’s one thing I’ve learned in my time at the intersection of education and technology, it’s that nobody cares about the important stuff, but people will go crazy if you make a small tweak to an emoji icon. 🙄

The reason you can use any web browser you want to access this website is down to standards. These are collections of protocols that define expected behaviours when you use a web browser to read what I’ve written. There are organisations and working groups ensuring that the internet doesn’t devolve into the Wild West.

This post on the We Distribute blog is an interview with Mike Macgirvin who has spent much of his adult life working on the protocols that enable social interaction on the web to happen. It’s an important read, even for less-than-technical people, as it serves to explain some of the very human decisions that shape the technology that mediates our lives.

There’s nothing magic about a protocol. It’s basically just a gentleman’s agreement about how to implement something. There are a number of levels or grades of protocols from simple in-house conventions all the way to internet specifications. The higher quality protocols have some interesting characteristics. Most importantly, these are intended as actual technical blueprints so that if two independent developers in isolated labs follow the specifications accurately, their implementations should interact together perfectly. This is an important concept.

The level of specification needed to produce this higher quality protocol is a double-edged sword. If you specify things too rigidly, projects using this protocol cannot grow or extend beyond the limits and restrictions you have specified. If you do not specify the implementation rules tightly enough, you will end up with competing products or projects that can both claim to implement the specification, yet are unable to interoperate at a basic level.

For-profit companies, and in particular those who are backed by venture capitalists, are very fond of what’s known as vendor lock-in. While there are moves afoot seeking to limit this, including those provided by GDPR, it’s a game of cat-and-mouse.

The free web, on the other hand, is different. It’s a place where, instead of being beholden to people trying to commodify and intermediate your interactions with other human beings, there is the free exchange of data and ideas.

Unfortunately, as Macgirvin points out, it’s much easier to enclose something than to ‘lock it open’:

In 2010–2012, the free web lost *hundreds of thousands* of early adopters because we had no way to easily migrate from server to server; and lots of early server administrators closed down with little or no warning. This set the free web back at least five years, because you couldn’t trust your account and identity and friendships and content to exist tomorrow. Most of the other free web projects decided that this problem should be solved by import/export tools (which we’re still waiting for in some cases).

I saw an even bigger problem. Twitter at the time was over capacity and often would be shut down for hours or a few days. What if you didn’t really want to permanently move to another server, but you just wanted to post something and stay in touch with friends/family when your server was having a bad day? This was the impetus for nomadic identity. You could take a thumbdrive and load it into any other server; and your identity is intact and you still have all your friends. Then we allowed you to “clone” your identity so you could have these backup accounts available at any time you needed them. Then we started syncing stuff between your clones so that on server ‘A’ you still have the same exact content and friends that you do on server ‘B’. They’re clones. You can post from either. If one shuts down forever, no big deal. If it has a cert issue that takes 24 hours to fix, no big deal. Your online life can continue, uninterrupted — no matter what happens to individual servers.

The trouble, of course, with all of this, is that things aren’t important until they are. So if you’re using Twitter to share photos of what you had for breakfast or status updates about the facial expressions of your cat, you’re not so bothered if the service experiences some downtime. Fast forward a couple of years and emergency services are using it to reassure the citizenry in the face of impending doom.

Those out to make a profit from commodifying social interaction are like those on the political right; they’re more likely to rally behind one another in the name of capital. The left, in this case represented by the free web, is prone to internecine conflict due to their motivation being more ideological than financial.

The way I look at it is that the free web is like family. Everybody has a dysfunctional family. You have black sheep and relatives you really just want to strangle sometimes. Thanksgiving dinner always turns into a shitfight. They’re all fundamentalist Christians and you’re more Zen Buddhist. You can’t carry on a conversation without arguing about who has the more successful career or chastising cousin Harry for his drug use.

But when you get right down to it — none of this matters. They’re family. We’re all in this together. That’s how it is with the free web, even if some projects like to think that they are the only ones that matter. Everybody matters. Each of our projects brings a unique value proposition to the table, and provides a different set of solutions and decentralised services. You can’t ignore any of them or leave any of them behind. We’re one family and we’re all busy creating something incredible. If you look at only one member of this family, you might be disappointed in the range of services that are being offered. You’re probably missing out completely on what the rest of the family is doing. Together we’re all creating a new and improved social web. There are some awesome projects tackling completely different aspects of decentralisation and offering completely different services. If we could all work together we could probably conquer the world — though that’s unlikely to happen any time soon. The first step is just to all sit down at Thanksgiving dinner without killing each other.

We get to choose the technologies we use in our lives. And those decisions matter. Decentralisation is important, particularly in regards to the social web, because no government or organisation should be given the power to mediate our interactions.

Source: We Distribute

Nobody is ready for GDPR

As a small business owner and co-op founder, GDPR applies to me as much as everyone else. It’s a massive ballache, but I support the philosophy behind what it’s trying to achieve.

After four years of deliberation, the General Data Protection Regulation (GDPR) was officially adopted by the European Union in 2016. The regulation gave companies a two-year runway to get compliant, which is theoretically plenty of time to get shipshape. The reality is messier. Like term papers and tax returns, there are people who get it done early, and then there’s the rest of us.

I’m definitely in “the rest of us” camp, meaning that, over the last week or so, my wife and I have spent time figuring stuff out. The main thing is getting things in order so that you’ve got a process in place. Different things are going to affect different organisations, well, differently.

But perhaps the GDPR requirement that has everyone tearing their hair out the most is the data subject access request. EU residents have the right to request access to review personal information gathered by companies. Those users — called “data subjects” in GDPR parlance — can ask for their information to be deleted, to be corrected if it’s incorrect, and even get delivered to them in a portable form. But that data might be on five different servers and in god knows how many formats. (This is assuming the company even knows that the data exists in the first place.) A big part of becoming GDPR compliant is setting up internal infrastructures so that these requests can be responded to.

A data subject access request isn’t going to affect our size of business very much. If someone does make a request, we’ve got a list of places from which to manually export the data. That’s obviously not a viable option for larger enterprises, who need to automate.
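To make that concrete, here’s a minimal sketch of what automating a subject access request might look like. Every detail is invented (the store names, the lookup functions, the example address); a real system would query live databases and third-party services:

```python
import json

# Hypothetical data stores -- in reality these would be database queries
# or API calls to each place personal data is held.
def from_mailing_list(email):
    data = {"alice@example.com": {"subscribed": True}}
    return data.get(email)

def from_crm(email):
    data = {"alice@example.com": {"name": "Alice", "last_contact": "2018-04-02"}}
    return data.get(email)

SOURCES = {"mailing_list": from_mailing_list, "crm": from_crm}

def subject_access_request(email):
    """Gather everything held on one person into a portable JSON export."""
    export = {source: lookup(email) for source, lookup in SOURCES.items()}
    return json.dumps({"subject": email, "data": export}, indent=2)

print(subject_access_request("alice@example.com"))
```

The hard part for a large organisation isn’t this loop; it’s knowing that `SOURCES` is actually complete.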

To be fair, GDPR as a whole is a bit complicated. Alison Cool, a professor of anthropology and information science at the University of Colorado, Boulder, writes in The New York Times that the law is “staggeringly complex” and practically incomprehensible to the people who are trying to comply with it. Scientists and data managers she spoke to “doubted that absolute compliance was even possible.”

To my mind, GDPR is like a much more far-reaching version of the Freedom of Information Act that came into force in the year 2000. That changed the nature of what citizens could expect from public bodies. I hope that the GDPR similarly changes what we all can expect from organisations who process our personal data.

Source: The Verge

The toughest smartphones on the market

I found this interesting:

To help you avoid finding out the horrifying truth when your phone goes clattering to the ground, we tested all of the major smartphones by dropping them over the course of four rounds from 4 feet and 6 feet onto wood and concrete — and even into a toilet — to see which handset is the toughest.

The results?

While the result wasn’t completely unexpected — after all, the phone has a ShatterShield display, which the company guarantees against cracks — the Moto Z2 Force survived drops from 6 feet onto concrete, with barely a scratch.

Apple’s least-expensive phone didn’t prove very tough at all. In fact, the $399 iPhone SE was rendered unusable before all of the others. However, this was not a big surprise, as the newer iPhone 8 and iPhone X are made with much stronger glass than the iPhone SE’s from 2016.

Summary:

  • Motorola Moto Z2 Force – Toughness score: 8.5/10
  • LG X Venture – Toughness score: 6.6/10
  • Apple iPhone X – Toughness score: 6.2/10
  • LG V30 – Toughness score: 6/10
  • Samsung Galaxy S9 – Toughness score: 6/10
  • Motorola Moto G5 Plus – Toughness score: 5.1/10
  • Apple iPhone 8 – Toughness score: 4.9/10
  • Samsung Galaxy Note 8 – Toughness score: 4.3/10
  • OnePlus 5T – Toughness score: 4.3/10
  • Huawei Mate 10 Pro – Toughness score: 4.3/10
  • Google Pixel 2 XL – Toughness score: 4.3/10
  • iPhone SE – Toughness score: 3.9/10

Source: Tom’s Guide