Bridging technologies

    When you go deep enough into philosophy or religion, one of the key insights is that everything is temporary. Success is temporary. Suffering is temporary. Your time on earth is temporary.

    One way of thinking about this on a day-to-day basis is that everything is a bridge to something else. So that technology that I’ve been excited about since 2011? Yep, it’s a bridge (or perhaps a raft) to get to something else.

    Benedict Evans, who works for the VC firm Andreessen Horowitz, sends out a great, short newsletter every week to around 95,000 people. I’m one of them. In this week’s missive, he linked to a blog post he wrote about bridging technologies.

    A bridge product says 'of course x is the right way to do this, but the technology or market environment to deliver x is not available yet, or is too expensive, and so here is something that gives some of the same benefits but works now.'
    As with anything, there are good and bad bridging technologies. At the time, it can be hard to spot the difference:

    In hindsight, though, not just WAP but the entire feature-phone mobile internet prior to 2007, including i-mode, with cut-down pages and cut-down browsers and nav keys to scroll from link to link, was a bridge. The 'right' way was a real computer with a real operating system and the real internet. But we couldn't build phones that could do that in 1999, even in Japan, and i-mode worked really well in Japan for a decade.

    It's all obvious in retrospect, as with the example of Firefox OS, which was developed at the same time I was at Mozilla:
    [T]he problem with the Firefox phone project was that even if you liked the experience proposition - 'almost as good as Android but works on much cheaper phones' - the window of time before low-end Android phones closed the price gap was too short.
    Usually, cheap things add more features until people just 'make do' with 80-90% of the full feature set. However, that's not always the case:
    Sometimes the ‘right’ way to do it just doesn’t exist yet, but often it does exist but is very expensive. So, the question is whether the ‘cheap, bad’ solution gets better faster than the ‘expensive, good’ solution gets cheap. In the broader tech industry (as described in the ‘disruption’ concept), generally the cheap product gets good. The way that the PC grew and killed specialized professional hardware vendors like Sun and SGi is a good example. However, in mobile it has tended to be the other way around - the expensive good product gets cheaper faster than the cheap bad product can get good.
    Evans goes on to talk about autonomous vehicles, something that he's heavily invested in (financially and intellectually) with his VC firm.

    In the world of open source, however, it’s a slightly different process. Instead of thinking about the ‘runway’ of capital that you’ve got before you have to give up and go home, it’s about deciding when it no longer makes sense to maintain the project you’re working on. In some cases, the answer to that is ‘never’ which means that the project keeps going and going and going.

    It can be good to have a forcing function to focus people’s minds. I’m thinking, for example, of Steve Jobs declaring war on Flash. The reasons he gave were disingenuous (accusing Adobe of not being ‘open’!), but Apple declaring Flash dead to them turned the entire industry upside down. In effect, Flash was a ‘bridge’ to the full web on mobile devices.

    Using the idea of technology ‘bridges’ in my own work can lead to some interesting conclusions. For example, the Project MoodleNet work that I’m beginning will ultimately be a bridge to something else for Moodle. Thinking about my own career, each step has been a bridge to something else; the most interesting bridges have been those where I haven’t been quite sure what was on the other side. Or, indeed, whether there even was another side…

    Source: Benedict Evans

    Tech will eat itself

    Mike Murphy has been travelling to tech conferences: CES, MWC, and SXSW. He hasn’t been overly impressed by what he’s seen:

    The role of technology should be to improve the quality of our lives in some meaningful way, or at least change our behavior. In years past, these conferences have seen the launch of technologies that have indeed impacted our lives to varying degrees, from the launch of Twitter to car stereos and video games.
    However, it's all been a little underwhelming:
    People always ask me what trends I see at these events. There are the usual words I can throw out—VR, AR, blockchain, AI, big data, autonomy, automation, voice assistants, 3D-printing, drones—the list is endless, and invariably someone will write some piece on each of these at every event. But it’s rare to see something truly novel, impressive, or even more than mildly interesting at these events anymore. The blockchain has not revolutionized society, no matter what some bros would have you believe, nor has 3D-printing. Self-driving cars are still years away, AI is still mainly theoretical, and no one buys VR headsets. But these are the terms you’ll find associated with these events if you Google them.
    There's nothing of any real substance being launched at these big, shiny events:
    The biggest thing people will remember from this year’s CES is that it rained the first few days and then the power went out. From MWC, it’ll be that it snowed for the first time in years in Barcelona, and from SXSW, it’ll be the Westworld in the desert (which was pretty cool). Quickly forgotten are the second-tier phones, dating apps, and robots that do absolutely nothing useful. I saw a few things of note that point toward the future—a 3D-printed house that could actually better lives in developing nations; robots that could crush us at Scrabble—but obviously, the opportunity for a nascent startup to get its name in front of thousands of techies, influential people, and potential investors can be huge. Even if it’s just an app for threesomes.
    As Murphy points out, the more important the destination (i.e. where the event is held), the less important the content (i.e. what is being announced):
    When real technology is involved, the destinations aren’t as important as the substance of the events. But in the case of many of these conferences, the substance is the destinations themselves.

    However, that shouldn’t necessarily be cause for concern: There is still much to be excited about in technology. You just won’t find much of it at the biggest conferences of the year, which are basically spring breaks for nerds. But there is value in bringing so many similarly interested people together.

    […]

    Just don’t expect the world of tomorrow to look like the marketing stunts of today.

    I see these events as a way to catch up the mainstream with what’s been happening in pockets of innovation over the past year or so. Unfortunately, this is increasingly being covered in a layer of marketing spin and hype so that it’s difficult to separate the useful from the trite.

    Source: Quartz

    Teaching kids about computers and coding

    Not only is Hacker News a great place to find the latest news about tech-related stuff, it’s also got some interesting ‘Ask HN’ threads sourcing recommendations from the community.

    This particular one starts with a user posing the question:

    Ask HN: How do you teach your kids about computers and coding?

    Please share what tools & approaches you use - it may Scratch, Python, any kids specific like Linux distros, Raspberry Pi or recent products like Lego Boost… Or your experiences with them.. thanks.

    Like sites such as Reddit and Stack Overflow, responses are voted up based on their usefulness. The most-upvoted response was this one:

    My daughter is almost 5 and she picked up Scratch Jr in ten minutes. I am writing my suggestions mostly from the context of a younger child.

    I approached it this way, I bought a book on Scratch Jr so I could get up to speed on it. I walked her through a few of the basics, and then I just let her take over after that.

    One other programming-related activity we have done is the Learning Resources Code & Go Robot Mouse Activity. She has a lot of fun with this as you have a small mouse you program with simple directions to navigate a maze to find the cheese. It uses a set of cards to help them grasp the steps needed. I switched to not using the cards after a while. We now just step the mouse through the maze manually, adding steps as we go.

    One other activity to consider is the robot turtles board game. This teaches some basic logic concepts needed in programming.

    For an older child, I did help my nephew to learn programming in Python when he was a freshman in high school. I took the approach of having him type in games from the free Python book. I have always thought this was a good approach for older kids to get them familiar with the syntax.

    Something else I would consider would be a robot that can be programmed with Scratch. While I have not done this yet, I think that for a kid, seeing the physical results of programming via a robot is a powerful way to capture interest.
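
    The ‘typing in games’ approach mentioned above tends to start small. Purely as an illustration (this isn’t taken from the book the commenter mentions), a guess-the-number game is the sort of first program an older child might copy out and then tinker with:

```python
# A classic beginner's game: the computer picks a number, you guess it.
import random

secret = random.randint(1, 100)
guesses = 0

print("I'm thinking of a number between 1 and 100.")
while True:
    guess = int(input("Your guess: "))
    guesses += 1
    if guess < secret:
        print("Too low!")
    elif guess > secret:
        print("Too high!")
    else:
        print(f"Got it in {guesses} guesses!")
        break
```

    The value is less in the code itself than in the tinkering it invites: change the range, add a guess limit, keep score.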

    But I think my favourite response is this one:

    What age range are we talking about? For most kids aged 6-12 writing code is too abstract to start with. For my kids, I started making really simple projects with a Makey Makey. After that, I taught them the basics with Scratch, since there are tons of fun tutorials for kids. Right now, I'm building a Raspberry Pi-powered robot with my 10yo (basically it's a poor man's Lego Mindstorm).

    The key is fun. The focus is much more on ‘building something together’ than ‘I’ll learn you how to code’. I’m pretty sure that if I were to press them into learning how to code it will only put them off. Sometimes we go for weeks without building on the robot, and all of the sudden she will ask me to work on it with her again.

    My son is sailing through his Computer Science classes at school because of some webmaking and ‘coding’ stuff we did when he was younger. He’s seldom interested, however, if I want to break out the Raspberry Pi and have a play.

    At the end of the day, it’s meeting them where they’re at. If they show an interest, run with it!

    Source: Hacker News

    10 breakthrough technologies for 2018

    I do like MIT’s Technology Review. It gives a glimpse of cool future uses of technology, while retaining a critical lens.

    Every year since 2001 we’ve picked what we call the 10 Breakthrough Technologies. People often ask, what exactly do you mean by “breakthrough”? It’s a reasonable question—some of our picks haven’t yet reached widespread use, while others may be on the cusp of becoming commercially available. What we’re really looking for is a technology, or perhaps even a collection of technologies, that will have a profound effect on our lives.
    Here's the list of their 'breakthrough technologies' for 2018:
    1. 3D metal printing
    2. Artificial embryos
    3. Sensing city
    4. AI for everybody
    5. Dueling neural networks
    6. Babel-fish earbuds
    7. Zero-carbon natural gas
    8. Perfect online privacy
    9. Genetic fortune-telling
    10. Materials' quantum leap
    It's a fascinating list, partly because of the names they've given ('genetic fortune telling'!) to things which haven't really been given a mainstream label yet. Worth exploring in more detail, as they flesh out each one of these in what is a reasonably lengthy article.

    Source: MIT Technology Review

    Firefox OS lives on in The Matrix

    I still have a couple of Firefox OS phones from my time at Mozilla. The idea was brilliant: using the web as the platform for smartphones. The execution, in terms of the partnership and messaging to the market… not so great.

    Last weekend, I actually booted up a device as my daughter was asking about ‘that orange phone you used to let me play with sometimes’. I noticed that Mozilla are discontinuing the app marketplace next month.

    All is not lost, however, as open source projects can never truly die. This article reports on a ‘fork’ of Firefox OS being used to resurrect one of my favourite-ever phones, which was used in the film The Matrix:

    Quietly, a company called KaiOS, built on a fork of Firefox OS, launched a new version of the OS built specifically for feature phones, and today at MWC in Barcelona the company announced a new wave of milestones around the effort that includes access to apps from Facebook, Twitter and Google in the form of its Voice Assistant, Google Maps, and Google Search; as well as a list of handset makers who will be using the OS in their phones, including HMD/Nokia (which announced its 8110 yesterday), Bullitt, Doro and Micromax; and Qualcomm and Spreadtrum for processing on the inside.
    I think I'm going to have to buy the new version of the Nokia 8110 just... because.

    Source: TechCrunch

     

    The punk rock internet

    This kind of article is useful in that it shows a mainstream audience the benefits of a redecentralised web and resistance to Big Tech.

    Balkan and Kalbag form one small part of a fragmented rebellion whose prime movers tend to be located a long way from Silicon Valley. These people often talk in withering terms about Big Tech titans such as Mark Zuckerberg, and pay glowing tribute to Edward Snowden. Their politics vary, but they all have a deep dislike of large concentrations of power and a belief in the kind of egalitarian, pluralistic ideas they say the internet initially embodied.

    What they are doing could be seen as the online world’s equivalent of punk rock: a scattered revolt against an industry that many now think has grown greedy, intrusive and arrogant – as well as governments whose surveillance programmes have fuelled the same anxieties. As concerns grow about an online realm dominated by a few huge corporations, everyone involved shares one common goal: a comprehensively decentralised internet.

    However, these kinds of articles are very personality-driven, and the little asides made by the article’s author paint those featured as a bit crazy and the whole idea as a bit far-fetched.

    For example, here’s the section on a project which is doing some pretty advanced tech while avoiding venture capitalist money:

    In the Scottish coastal town of Ayr, where a company called MaidSafe works out of a silver-grey office on an industrial estate tucked behind a branch of Topps Tiles, another version of this dream seems more advanced. MaidSafe’s first HQ, in nearby Troon, was an ocean-going boat. The company moved to an office above a bridal shop, and then to an unheated boatshed, where the staff sometimes spent the working day wearing woolly hats. It has been in its new home for three months: 10 people work here, with three in a newly opened office in Chennai, India, and others working remotely in Australia, Slovakia, Spain and China.
    I get the need to bring technology alive for the reader, but what difference does it make that their office is behind Topps Tiles? So what if the staff sometimes wear woolly hats? It just makes the whole thing seem farcical. Which, of course, it's not.

    Source: The Guardian

    The Project Design Tetrahedron

    I had reason this week to revisit Dorian Taylor’s interview on Uses This. I fell into a rabbit hole of his work, and came across a lengthy post he wrote back in 2014.

    I've given considerable thought throughout my career to the problem of resource management as it pertains to the development of software, and I believe my conclusions are generalizable to all forms of work which is dominated by the gathering, concentration, and representation of information, rather than the transportation and arrangement of physical stuff. This includes creative work like writing a novel, painting a picture, or crafting a brand or marketing message. Work like this is heavy on design or problem-solving, with negligible physical implementation overhead. Stuff-based work, by contrast, has copious examples in mature industries like construction, manufacturing, resource extraction and logistics.

    As you can see in the image above, he argues that the traditional engineering approach, where you can only ever have one of:

    • Fast and Good
    • Cheap and Fast
    • Good and Cheap

    ...is wrong, given a lean and iterative design process. You can actually make things that are immediately useful (i.e. 'Good'), relatively Cheap, and do so Fast. The thing you sacrifice in those situations (hence the 'tetrahedron') is Predictable Results.

    If you can reduce a process to an algorithm, then you can make extremely accurate predictions about the performance of that algorithm. Considerably more difficult, however, is defining an algorithm for defining algorithms. Sure, every real-world process has well-defined parts, and those can indeed be subjected to this kind of treatment. There is still, however, that unknown factor that makes problem-solving processes unpredictable.

    In other words, we live in an unpredictable world, but we can still do awesome stuff. Nassim Nicholas Taleb would be proud.

    Source: Dorian Taylor

    Designing social systems

    This article is too long and written in a way that could be more direct, but it still makes some good points. Perhaps the best bit is the comparison of the standard iOS lock screen with a redesigned one.

    Most platforms encourage us to act against our values: less humbly, less honestly, less thoughtfully, and so on. Using these platforms while sticking to our values would mean constantly fighting their design. Unless we’re prepared for that fight, we’ll regret our choices.

    When we join in with conversations online, we're not always part of a group; sometimes we're part of a network. It seems to me that most of the points the author makes pertain to social networks like Facebook, as opposed to those like Twitter and Mastodon.

    He does, however, make a good point about a shift towards people feeling they have to act in a particular way:

    Groups are held together by a particular kind of conversation, which I’ll call wisdom. It’s a kind of conversation that people are starved for right now—even amidst nonstop communication, amidst a torrent of articles, videos, and posts.

    When this type of conversation is missing, people feel that no one understands or cares about what’s important to them. People feel their values are unheeded and unrecognized.

    [T]his situation is easy to exploit, and the media and fake news ecosystems have done just that. As a result, conversations become ideological and polarized, and elections are manipulated.

    Tribal politics in social networks are caused by people not having strong offline affinity groups, so they seek their 'tribe' online.

    If social platforms can make it easier to share our personal values (like small town living) directly, and to acknowledge one another and rally around them, we won’t need to turn them into ideologies or articles. This would do more to heal politics and media than any “fake news” initiative. To do this, designers need to know what this kind of conversation sounds like, how to encourage it, and how to avoid drowning it out.

    Ultimately, the author has no answer and (wisely) turns to the community for help. I like the way he points to exercises we can do and groups we can form. I'm not sure it'll scale, though...

    Source: Human Systems

    The military implications of fitness tech

    I was talking about this last night with a guy who used to be in the army. It’s a BFD.

    In March 2017, a member of the Royal Navy ran around HMNB Clyde, the high-security military base that's home to Trident, the UK's nuclear deterrent. His pace wasn't exceptional, but it wasn't leisurely either.

    His run, like millions of others around the world, was recorded through the Strava app. A heatmap of more than one billion activities – comprising of 13 billion GPS data points – has been criticised for showing the locations of supposedly secretive military bases. It was thought that, at the very least, the data was totally anonymised. It isn't.

    Oops.

    The fitness app – which can record a person's GPS location and also host data from devices such as Fitbits and Garmin watches – allows users to create segments and leaderboards. These are areas where a run, swim, or bike ride can be timed and compared. Segments can be seen on the Strava website, rather than on the heatmap.

    Computer scientist and developer Steve Loughran detailed how to create a GPS segment and upload it to Strava as an activity. Once uploaded, a segment shows the top times of people running in an area. Which is how it's possible to see the running routes of people inside the high-security walls of HMNB Clyde.
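
    To make that concrete: a recorded activity is, underneath it all, just a file of timestamped GPS points. Here’s a minimal sketch in Python (the coordinates, times and filename are entirely invented) of what such a track boils down to; the striking thing is how little it takes to place a runner on a map:

```python
# Invented coordinates and times: the aim is only to show that a recorded
# run is just a list of timestamped lat/lon points in an XML (GPX) file.
from datetime import datetime, timedelta, timezone

start = datetime(2017, 3, 1, 7, 30, tzinfo=timezone.utc)
points = [
    (56.0000 + i * 0.0002, -4.8000 + i * 0.0001, start + timedelta(seconds=10 * i))
    for i in range(60)  # a ten-minute jog, one point every ten seconds
]

lines = [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<gpx version="1.1" creator="example">',
    "  <trk><name>Morning run</name><trkseg>",
]
for lat, lon, t in points:
    stamp = t.strftime("%Y-%m-%dT%H:%M:%SZ")
    lines.append(f'    <trkpt lat="{lat:.6f}" lon="{lon:.6f}"><time>{stamp}</time></trkpt>')
lines += ["  </trkseg></trk>", "</gpx>"]

with open("morning_run.gpx", "w") as f:
    f.write("\n".join(lines))
```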

    Of course, this is an operational security issue. Military personnel shouldn't really be using Strava while they're living/working on bases.

    "The underlying problem is that the devices we wear, carry and drive are now continually reporting information about where and how they are used 'somewhere'," Loughran said. "In comparison to the datasets which the largest web companies have, Strava's is a small set of files, voluntarily uploaded by active users."

    Source: WIRED

    Audrey Watters on technology addiction

    Audrey Watters answers the question whether we’re ‘addicted’ to technology:

    I am hesitant to make any clinical diagnosis about technology and addiction – I’m not a medical professional. But I’ll readily make some cultural observations, first and foremost, about how our notions of “addiction” have changed over time. “Addiction” is a medical concept but it’s also a cultural one, and it’s long been one tied up in condemning addicts for some sort of moral failure. That is to say, we have labeled certain behaviors as “addictive” when they involve things society doesn’t condone. Watching TV. Using opium. Reading novels. And I think some of what we hear in discussions today about technology usage – particularly about usage among children and teens – is that we don’t like how people act with their phones. They’re on them all the time. They don’t make eye contact. They don’t talk at the dinner table. They eat while staring at their phones. They sleep with their phones. They’re constantly checking them.
    The problem is that our devices are designed to be addictive, much like casinos. The apps on our phones are designed to increase certain metrics:
    I think we’re starting to realize – or I hope we’re starting to realize – that those metrics might conflict with other values. Privacy, sure. But also etiquette. Autonomy. Personal agency. Free will.
    Ultimately, she thinks, this isn't a question of addiction. It's much wider than that:
    How are our minds – our sense of well-being, our knowledge of the world – being shaped and mis-shaped by technology? Is “addiction” really the right framework for this discussion? What steps are we going to take to resist the nudges of the tech industry – individually and socially and yes maybe even politically?
    Good stuff.

    Source: Audrey Watters

    No cash, no freedom?

    The ‘cashless’ society, eh?

    Every time someone talks about getting rid of cash, they are talking about getting rid of your freedom. Every time they actually limit cash, they are limiting your freedom. It does not matter if the people doing it are wonderful Scandinavians or Hindu supremacist Indians, they are people who want to know and control what you do to an unprecedentedly fine-grained scale.
    Yep, just because someone cool is doing it doesn't mean it won't have bad consequences. In the rush to add technology to things, we create future dystopias.
    Cash isn’t completely anonymous. There’s a reason why old fashioned crooks with huge cash flows had to money-launder: Governments are actually pretty good at saying, “Where’d you get that from?” and getting an explanation. Still, it offers freedom, and the poorer you are, the more freedom it offers. It also is very hard to track specifically, i.e., who made what purchase.

    Blockchains won’t be untaxable. The ones which truly are unbreakable will be made illegal; the ones that remain, well, it’s a ledger with every transaction on it, for goodness sakes.

    It’s this bit that concerns me:

    We are creating a society where even much of what you say, will be knowable and indeed, may eventually be tracked and stored permanently.

    If you do not understand why this is not just bad, but terrible, I cannot explain it to you. You have some sort of mental impairment of imagination and ethics.

    Source: Ian Welsh

    Using VR with kids

    I’ve seen conflicting advice regarding using Virtual Reality (VR) with kids, so it’s good to see this from the LSE:

    Children are becoming aware of virtual reality (VR) in increasing numbers: in autumn 2016, 40% of those aged 2-15 surveyed in the US had never heard of VR, and this number was halved less than one year later. While the technology is appealing and exciting to children, its potential health and safety issues remain questionable, as there is, to date, limited research into its long-term effects.

    I have given my two children (six and nine at the time) experience of VR — albeit in limited bursts. The concern I have is about eyesight, mainly.

    As a young technology there are still many unknowns about the long-term risks and effects of VR gaming, although Dubit found no negative effects from short-term play for children’s visual acuity, and little difference between pre- and post-VR play in stereoacuity (which relies on good eyesight for both eyes and good coordination between the two) and balance tests. Only 2 of the 15 children who used the fully immersive head-mounted display showed some stereoacuity after-effects, and none of those using the low-cost Google Cardboard headset showed any. Similarly, a few seemed to be at risk of negative after-effects to their balance after using VR, but most showed no problems.

    There's some good advice in this post for VR games/experience designers, and for parents. I'll quote the latter:

    While much of a child’s experience with VR may still be in museums, schools or other educational spaces under the guidance of trained adults, as the technology becomes more available in domestic settings, to ensure health and safety at home, parents and carers need to:

    • Allow children to preview the game on YouTube, if available.
    • Provide children with time to readjust to the real world after playing, and give them a break before engaging with activities like crossing roads, climbing stairs or riding bikes, to ensure that balance is restored.
    • Check on the child’s physical and emotional wellbeing after they play.
    There's a surprising lack of regulation and guidance in this space, so it's good to see the LSE taking the initiative!

    Source: Parenting for a Digital Future

    Augmented and Virtual Reality on the web

    There were a couple of exciting announcements last week about web technologies being used for Augmented Reality (AR) and Virtual Reality (VR). Using standard technologies that work across a range of devices is a game-changer.

    First off, Google announced ‘Article’, which provides a straightforward way to add virtual objects to physical spaces.

    Google AR

    Mozilla, meanwhile, directed attention towards A-Frame, which they’ve been supporting for a while. This allows VR experiences to be created using web technologies, including networking users together in-world.

    Mozilla VR

    Although each has its uses, I think AR is going to be a much bigger deal than VR for most people, mainly because it adds to an experience we’re used to (i.e. the world around us) rather than replacing it.

    Sources: Google blog / A-Frame

    The horror of the Bett Show

    I’ve been to the Bett Show (formerly known as BETT, which is how the author refers to it in this article) in many different guises: as a classroom teacher, as a school senior leader, as a researcher in Higher Education, in various roles at Mozilla, as a consultant, and now in my role at Moodle.

    I go because it’s free, and because it’s a good place to meet up with people I see rarely. While I’ve changed and grown up, the Bett Show is still much the same. As Junaid Mubeen, the author of this article, notes:  

    The BETT show is emblematic of much that EdTech gets wrong. No show captures the hype of educational technology quite like the world’s largest education trade show. This week marked my fifth visit to BETT at London’s Excel arena. True to form, my two days at the show left me feeling overwhelmed with the number of products now available in the EdTech market, yet utterly underwhelmed with the educational value on offer.

    It's laughable, it really is. I saw all sorts of tat while I was there. I heard that a decent-sized stand can set you back around a million pounds.

    One senses from these shows that exhibitors are floating from one fad to the next, desperately hoping to attach their technological innovations to education. In this sense, the EdTech world is hopelessly predictable; expect blockchain applications to emerge in not-too-distant future BETT shows.

    But of course. I felt particularly sorry this year for educators I know who were effectively sales reps for the companies they've gone to work for. I spent about five hours there, wandering, talking, and catching up with people. I can only imagine the horror of being stuck there for four days straight.

    I like the questions Mubeen comes up with. However, the edtech companies are playing a different game. While there’s some interest in pedagogical development, for most of them it’s just another vertical market.

    In the meantime, there are four simple questions every self-professed education innovator should demand of themselves:

    • What is your pedagogy? At the very least, can you list your educational goals?
    • What does it mean for your solution to work and how will this be measured in a way that is meaningful and reliable?
    • How are your users supported to achieve their educational goals after the point of sale?
    • How do your solutions interact with other offerings in the marketplace?
    Somewhat naïvely, the author says that he looks forward to the day when exhibitors are selected "not on their wallet size but on their ability to address these foundational questions". As there's a for-profit company behind Bett, I think he'd better not hold his breath.

    Source: Junaid Mubeen

    More haste, less speed

    In the last couple of years, there’s been a move to give names to security vulnerabilities that would be otherwise too arcane to discuss in the mainstream media. For example, back in 2014, Heartbleed, “a security bug in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol”, had not only a name but a logo.

    The recent media storm around the so-called ‘Spectre’ and ‘Meltdown’ vulnerabilities shows how effective this approach is. It also helps that they sound a little like James Bond science fiction.

    In this article, Zeynep Tufekci argues that the security vulnerabilities are built on our collective desire for speed:

    We have built the digital world too rapidly. It was constructed layer upon layer, and many of the early layers were never meant to guard so many valuable things: our personal correspondence, our finances, the very infrastructure of our lives. Design shortcuts and other techniques for optimization — in particular, sacrificing security for speed or memory space — may have made sense when computers played a relatively small role in our lives. But those early layers are now emerging as enormous liabilities. The vulnerabilities announced last week have been around for decades, perhaps lurking unnoticed by anyone or perhaps long exploited.
    Helpfully, she gives a layperson's explanation of what went wrong with these two security vulnerabilities:

    Almost all modern microprocessors employ tricks to squeeze more performance out of a computer program. A common trick involves having the microprocessor predict what the program is about to do and start doing it before it has been asked to do it — say, fetching data from memory. In a way, modern microprocessors act like attentive butlers, pouring that second glass of wine before you knew you were going to ask for it.

    But what if you weren’t going to ask for that wine? What if you were going to switch to port? No problem: The butler just dumps the mistaken glass and gets the port. Yes, some time has been wasted. But in the long run, as long as the overall amount of time gained by anticipating your needs exceeds the time lost, all is well.

    Except all is not well. Imagine that you don’t want others to know about the details of the wine cellar. It turns out that by watching your butler’s movements, other people can infer a lot about the cellar. Information is revealed that would not have been had the butler patiently waited for each of your commands, rather than anticipating them. Almost all modern microprocessors make these butler movements, with their revealing traces, and hackers can take advantage.
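
    Tufekci’s butler analogy is describing speculative execution: work done ‘just in case’ leaves observable traces. As a toy illustration of the underlying principle (deliberately not Spectre or Meltdown themselves, which exploit CPU caches and branch prediction), here’s a sketch of a timing side channel in Python: when how long an operation takes depends on a secret, simply measuring the time leaks the secret.

```python
import time

SECRET = "hunter2"  # a made-up secret known only to the "server" side

def insecure_compare(guess: str, secret: str) -> bool:
    """Naive comparison that bails out at the first mismatch.

    The tiny sleep exaggerates the per-character cost so the effect is
    visible in a toy setting; real attacks measure far subtler signals
    (cache hits, mispredicted branches) but exploit the same idea.
    """
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        time.sleep(0.001)  # stand-in for work done per matching character
        if g != s:
            return False
    return True

def time_guess(guess: str) -> float:
    start = time.perf_counter()
    insecure_compare(guess, SECRET)
    return time.perf_counter() - start

# Guesses sharing a longer prefix with the secret take measurably longer,
# so an attacker can recover it one character at a time.
for guess in ["zzzzzzz", "hzzzzzz", "huzzzzz", "hunzzzz"]:
    print(f"{guess!r}: {time_guess(guess):.4f}s")
```

    The early exit exists as an optimisation, and the optimisation is exactly what leaks; the same trade-off, at CPU scale, is what Spectre and Meltdown exploit.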

    Right now, she argues, systems have to employ more and more tricks to squeeze performance out of hardware because the software we use is riddled with surveillance and spyware.

    But the truth is that our computers are already quite fast. When they are slow for the end-user, it is often because of “bloatware”: badly written programs or advertising scripts that wreak havoc as they try to track your activity online. If we were to fix that problem, we would gain speed (and avoid threatening and needless surveillance of our behavior).

    As things stand, we suffer through hack after hack, security failure after security failure. If commercial airplanes fell out of the sky regularly, we wouldn’t just shrug. We would invest in understanding flight dynamics, hold companies accountable that did not use established safety procedures, and dissect and learn from new incidents that caught us by surprise.

    And indeed, with airplanes, we did all that. There is no reason we cannot do the same for safety and security of our digital systems.

    Major vendors have been pushing out patches in the weeks since the vulnerabilities came to light. For-profit companies have limited resources, of course, and proprietary, closed-source code. This means there'll be some devices that won't get the security updates at all, leaving end users in a tricky situation: their hardware is now almost worthless. So do they (a) keep on using it, crossing their fingers that nothing bad happens, or (b) bite the bullet and upgrade?

    What I think the communities I’m part of could have done better is to shout loudly that there’s an option (c): open source software. No matter how old your hardware, the chances are that someone, somewhere, with the requisite skills will want to fix the vulnerabilities on that device.

    Source: The New York Times

    Ethical design in social networks

    I’m thinking a lot about privacy and ethical design at the moment as part of my role leading Project MoodleNet. This article gives a short but useful overview of the Ethical Design Manifesto, along with some links for further reading:

    There is often a disconnect between what digital designers originally intend with a product or feature, and how consumers use or interpret it.

    Ethical user experience design – meaning, for example, designing technologies in ways that promote good online behaviour and intuit how they might be used – may help bridge that gap.

    There’s already people (like me) making choices about the technology and social networks they used based on ethics:

    User experience design and research has so far mainly been applied to designing tech that is responsive to user needs and locations. For example, commercial and digital assistants that intuit what you will buy at a local store based on your previous purchases.

    However, digital designers and tech companies are beginning to recognise that there is an ethical dimension to their work, and that they have some social responsibility for the well-being of their users.

    Meeting this responsibility requires designers to anticipate the meanings people might create around a particular technology.

    In addition to ethical design, there are other elements to take into consideration:

    Contextually aware design is capable of understanding the different meanings that a particular technology may have, and adapting in a way that is socially and ethically responsible. For example, smart cars that prevent mobile phone use while driving.

    Emotional design refers to technology that elicits appropriate emotional responses to create positive user experiences. It takes into account the connections people form with the objects they use, from pleasure and trust to fear and anxiety.

    This includes the look and feel of a product, how easy it is to use and how we feel after we have used it.

    Anticipatory design allows technology to predict the most useful interaction within a sea of options and make a decision for the user, thus “simplifying” the experience. Some companies may use anticipatory design in unethical ways that trick users into selecting an option that benefits the company.

    Source: The Conversation

    Reading the web on your own terms

    Although it's been less than a decade since the demise of the wonderful, simple, much-loved Google Reader, it seems like a different age entirely.

    Subscribing to news feeds and blogs via RSS wasn’t as widely used as it could/should have been, but there was something magical about that period of time.

    In this article, the author reflects on that era and suggests that we might want to give it another try:

    Well, I believe that RSS was much more than just a fad. It made blogging possible for the first time because you could follow dozens of writers at the same time and attract a considerably large audience if you were the writer. There were no ads (except for the high-quality Daring Fireball kind), no one could slow down your feed with third party scripts, it had a good baseline of typographic standards and, most of all, it was quiet. There were no comments, no likes or retweets. Just the writer’s thoughts and you.
    I was a happy user of Google Reader until they pulled the plug. It was a bit more interactive than other feed readers, somehow, in a way I can't quite recall. Everyone used it until they didn't.
    The unhealthy bond between RSS and Google Reader is proof of how fragile the web truly is, and it reveals that those communities can disappear just as quickly as they bloom.
    Since that time I've been an intermittent user of Feedly. Everyone else, it seems, succumbed to the algorithmic news feeds provided by Facebook, Twitter, and the like.
    A friend of mine the other day said that “maybe Medium only exists because Google Reader died — Reader left a vacuum, and the social network filled it.” I’m not entirely sure I agree with that, but it sure seems likely. And if that’s the case then the death of Google Reader probably led to the emergence of email newsletters, too.

    […]

    On a similar note, many believe that blogging is making a return. Folks now seem to recognize the value of having your own little plot of land on the web and, although it’s still pretty complex to make your own website and control all that content, it’s worth it in the long run. No one can run ads against your thing. No one can mess with the styles. No one can censor or sunset your writing.

    Not only that but when you finish making your website you will have gained superpowers: you now have an independent voice, a URL, and a home on the open web.

    I don’t think we can turn the clock back, but it does feel like there might be positive, future-focused ways of improving things through, for example, decentralisation.
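
    For anyone tempted to give feeds another try, the barrier to entry remains pleasingly low. Here’s a minimal sketch using Python’s feedparser library (the feed URLs are placeholders; substitute the blogs you actually want to follow):

```python
import feedparser  # pip install feedparser

# Placeholder feed URLs: swap in the RSS/Atom feeds of sites you care about.
FEEDS = [
    "https://example.com/feed.xml",
    "https://example.org/atom.xml",
]

for url in FEEDS:
    parsed = feedparser.parse(url)
    print(f"\n## {parsed.feed.get('title', url)}")
    for entry in parsed.entries[:5]:  # the five most recent items
        print(f"- {entry.get('title', 'Untitled')}")
        print(f"  {entry.get('link', '')}")
```

    No algorithm decides what you see: the reader simply shows what the writers you chose have published, in order.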

    Source: Robin Rendle

    The NSA (and GCHQ) can find you by your 'voiceprint' even if you're speaking a foreign language on a burner phone

    This is pretty incredible:

    Americans most regularly encounter this technology, known as speaker recognition, or speaker identification, when they wake up Amazon’s Alexa or call their bank. But a decade before voice commands like “Hello Siri” and “OK Google” became common household phrases, the NSA was using speaker recognition to monitor terrorists, politicians, drug lords, spies, and even agency employees.

    The technology works by analyzing the physical and behavioral features that make each person’s voice distinctive, such as the pitch, shape of the mouth, and length of the larynx. An algorithm then creates a dynamic computer model of the individual’s vocal characteristics. This is what’s popularly referred to as a “voiceprint.” The entire process — capturing a few spoken words, turning those words into a voiceprint, and comparing that representation to other “voiceprints” already stored in the database — can happen almost instantaneously. Although the NSA is known to rely on finger and face prints to identify targets, voiceprints, according to a 2008 agency document, are “where NSA reigns supreme.”

    Hmmm….

    The voice is a unique and readily accessible biometric: Unlike DNA, it can be collected passively and from a great distance, without a subject’s knowledge or consent. Accuracy varies considerably depending on how closely the conditions of the collected voice match those of previous recordings. But in controlled settings — with low background noise, a familiar acoustic environment, and good signal quality — the technology can use a few spoken sentences to precisely match individuals. And the more samples of a given voice that are fed into the computer’s model, the stronger and more “mature” that model becomes.
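
    To get a feel for what a ‘voiceprint’ is, here’s a deliberately crude sketch in Python: it reduces a recording to a fixed-length vector of MFCC statistics and compares two recordings by cosine similarity. Real speaker-recognition systems use far richer models and far more data, so treat this purely as a shape-of-the-idea illustration; the filenames are placeholders.

```python
import numpy as np
import librosa  # pip install librosa

def voice_embedding(path: str) -> np.ndarray:
    """Crude 'voiceprint': mean and standard deviation of MFCCs.

    Audio in, fixed-length vector out. Serious systems use trained
    speaker-embedding models rather than raw MFCC statistics.
    """
    y, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embeddings (closer to 1.0 = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder filenames: any two short speech recordings will do.
enrolled = voice_embedding("enrolled_speaker.wav")
unknown = voice_embedding("unknown_caller.wav")
print(f"similarity: {similarity(enrolled, unknown):.3f}")
```

    The unnerving part, as the article notes, is that the enrolment side of this can happen passively, without the speaker ever knowing.
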
    So yeah, let's put a microphone in every room of our house so that we can tell Alexa to turn off the lights. What could possibly go wrong?

    Source: The Intercept
