Product managers as knowledge centralisers

    If you asked me what I do for a living, I’d probably respond that I work for Moodle, am co-founder of a co-op, and also do some consultancy. What I probably wouldn’t say, although it would be true, is that I’m a product manager.

    I’m not particularly focused on ‘commercial success’ but the following section of this article certainly resonates:

    When I think of what a great product manager’s qualities should be, I find myself considering where the presence of this role is felt the most. When successful, the outside world perceives commercial success but internally, over the course of building the product, a team would gain a sense of confidence, rooted in a better understanding of the problem being addressed, a higher level of focus and an overall higher level of aptitude. If I were to summarize what I feel a great product manager’s qualities are, it would be the constant dedication to centralizing knowledge for a team in all aspects of the role — the UX, the technology and the strategy.

    We haven't got all of the resourcing in place for Project MoodleNet yet, so I'm spending my time making sure the project is set up for success: sorting out how we communicate and signal that things are blocked/finished/need checking, making sure the project will be GDPR-compliant, completing the risk register, and logging decisions.
    Product management has been popularized as a role that unified the business, technology and UX/Design demands of a software team. Many of the more established product managers have often noted that they “stumbled” into the role without knowing what their sandbox was and more often than not, they did not even hold the title itself.
    Being a product manager is an interdisciplinary role, and I should imagine that most have had varied careers to date. I certainly have.
    There is a lot of thinking done around what the ideal product manager should have the power to do and it often hinges around locking down a vision and seeing it through to its execution and data collection. However, this portrayal of a product manager as an island of synergy, knowledge and the perfect intersection of business, tech and design is not where the meaty value of the role lies.

    […]

    A sense of discipline in the daily tasks such as sprint planning and retrospectives, collecting feedback from users, stand up meetings and such can be seen as something that is not just done for the purpose of order and structure, but as a way of reinforcing and democratizing the institutional knowledge between members of a team. The ability for a team to pivot, the ability to reach consensus, is a byproduct of common, centralized knowledge that is built up from daily actions and maintained and kept alive by the product manager. In the rush of a delivery and of creative chaos, this sense of structure and order has to be lovingly maintained by someone in order for a team to really internally benefit from the fruits of their labour over time.

    It’s a great article, and well worth a read.

    Source: We Seek

    Using VR with kids

    I’ve seen conflicting advice regarding using Virtual Reality (VR) with kids, so it’s good to see this from the LSE:

    Children are becoming aware of virtual reality (VR) in increasing numbers: in autumn 2016, 40% of those aged 2-15 surveyed in the US had never heard of VR, and this number was halved less than one year later. While the technology is appealing and exciting to children, its potential health and safety issues remain questionable, as there is, to date, limited research into its long-term effects.

    I have given my two children (six and nine at the time) experience of VR — albeit in limited bursts. The concern I have is about eyesight, mainly.

    As a young technology there are still many unknowns about the long-term risks and effects of VR gaming, although Dubit found no negative effects from short-term play for children’s visual acuity, and little difference between pre- and post-VR play in stereoacuity (which relies on good eyesight for both eyes and good coordination between the two) and balance tests. Only 2 of the 15 children who used the fully immersive head-mounted display showed some stereoacuity after-effects, and none of those using the low-cost Google Cardboard headset showed any. Similarly, a few seemed to be at risk of negative after-effects to their balance after using VR, but most showed no problems.

    There's some good advice in this post for VR games/experience designers, and for parents. I'll quote the latter:

    While much of a child’s experience with VR may still be in museums, schools or other educational spaces under the guidance of trained adults, as the technology becomes more available in domestic settings, to ensure health and safety at home, parents and carers need to:

    • Allow children to preview the game on YouTube, if available.
    • Provide children with time to readjust to the real world after playing, and give them a break before engaging with activities like crossing roads, climbing stairs or riding bikes, to ensure that balance is restored.
    • Check on the child’s physical and emotional wellbeing after they play.
    There's a surprising lack of regulation and guidance in this space, so it's good to see the LSE taking the initiative!

    Source: Parenting for a Digital Future

    Augmented and Virtual Reality on the web

    There were a couple of exciting announcements last week about web technologies being used for Augmented Reality (AR) and Virtual Reality (VR). Using standard technologies that can be used across a range of devices is a game-changer.

    First off, Google announced ‘Article’ which provides a straightforward way to add virtual objects to physical spaces.

    Google AR

    Mozilla, meanwhile, directed attention towards A-Frame, which they’ve been supporting for a while. This allows VR experiences to be created using web technologies, including networking users together in-world.

    Mozilla VR

    Although each have their uses, I think AR is going to be a much bigger deal than Virtual Reality (VR) for most people, mainly because it adds to an experience we’re used to (i.e. the world around us) rather than replacing it.

    Sources: Google blog / A-Frame

    The horror of the Bett Show

    I’ve been to the Bett Show (formerly known as BETT, which is how the author refers to it in this article) in many different guises. I’ve been as a classroom teacher, school senior leader, researcher in Higher Education, when I was working in different roles at Mozilla, as a consultant, and now in my role at Moodle.

    I go because it’s free, and because it’s a good place to meet up with people I see rarely. While I’ve changed and grown up, the Bett Show is still much the same. As Junaid Mubeen, the author of this article, notes:  

    The BETT show is emblematic of much that EdTech gets wrong. No show captures the hype of educational technology quite like the world’s largest education trade show. This week marked my fifth visit to BETT at London’s Excel arena. True to form, my two days at the show left me feeling overwhelmed with the number of products now available in the EdTech market, yet utterly underwhelmed with the educational value on offer.

    It's laughable, it really is. I saw all sorts of tat while I was there. I heard that a decent-sized stand can set you back around a million pounds.

    One senses from these shows that exhibitors are floating from one fad to the next, desperately hoping to attach their technological innovations to education. In this sense, the EdTech world is hopelessly predictable; expect blockchain applications to emerge in not-too-distant future BETT shows.

    But of course. I felt particularly sorry this year for educators I know who were effectively sales reps for the companies they've gone to work for. I spent about five hours there, wandering, talking, and catching up with people. I can only imagine the horror of being stuck there for four days straight.

    I like the questions Mubeen comes up with. However, the edtech companies are playing a different game. While there’s some interest in pedagogical development, for most of them it’s just another vertical market.

    In the meantime, there are four simple questions every self-professed education innovator should demand of themselves:

    • What is your pedagogy? At the very least, can you list your educational goals?
    • What does it mean for your solution to work and how will this be measured in a way that is meaningful and reliable?
    • How are your users supported to achieve their educational goals after the point of sale?
    • How do your solutions interact with other offerings in the marketplace?
    Somewhat naïvely, the author says that he looks forward to the day when exhibitors are selected "not on their wallet size but on their ability to address these foundational questions". As there's a for-profit company behind Bett, I think he'd better not hold his breath.

    Source: Junaid Mubeen

    Issue #289: Loooooong week

    The latest issue of the newsletter hit inboxes earlier today!

    💥 Read

    🔗 Subscribe

    More haste, less speed

    In the last couple of years, there’s been a move to give names to security vulnerabilities that would be otherwise too arcane to discuss in the mainstream media. For example, back in 2014, Heartbleed, “a security bug in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol”, had not only a name but a logo.

    The recent media storm around the so-called ‘Spectre’ and ‘Meltdown’ shows how effective this approach is. It also helps that they sound a little like James Bond science fiction.

    In this article, Zeynep Tufekci argues that the security vulnerabilities are built on our collective desire for speed:

    We have built the digital world too rapidly. It was constructed layer upon layer, and many of the early layers were never meant to guard so many valuable things: our personal correspondence, our finances, the very infrastructure of our lives. Design shortcuts and other techniques for optimization — in particular, sacrificing security for speed or memory space — may have made sense when computers played a relatively small role in our lives. But those early layers are now emerging as enormous liabilities. The vulnerabilities announced last week have been around for decades, perhaps lurking unnoticed by anyone or perhaps long exploited.
    Helpfully, she gives a layperson's explanation of what went wrong with these two security vulnerabilities:

    Almost all modern microprocessors employ tricks to squeeze more performance out of a computer program. A common trick involves having the microprocessor predict what the program is about to do and start doing it before it has been asked to do it — say, fetching data from memory. In a way, modern microprocessors act like attentive butlers, pouring that second glass of wine before you knew you were going to ask for it.

    But what if you weren’t going to ask for that wine? What if you were going to switch to port? No problem: The butler just dumps the mistaken glass and gets the port. Yes, some time has been wasted. But in the long run, as long as the overall amount of time gained by anticipating your needs exceeds the time lost, all is well.

    Except all is not well. Imagine that you don’t want others to know about the details of the wine cellar. It turns out that by watching your butler’s movements, other people can infer a lot about the cellar. Information is revealed that would not have been had the butler patiently waited for each of your commands, rather than anticipating them. Almost all modern microprocessors make these butler movements, with their revealing traces, and hackers can take advantage.
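    The butler analogy maps onto what security researchers call a cache timing side channel: the speculative work is discarded, but its side effects (which data is "warm" in the cache) can be measured. Below is a deliberately simplified Python sketch, not real Spectre exploit code; the "cache" and the timings are simulated values I've invented purely to illustrate how timing alone can reveal a secret.

```python
# Toy model of a cache timing side channel (simulated, not real hardware).
# A "victim" speculatively touches the array index equal to a secret byte;
# the "attacker" then times accesses to every index and spots the warm one.

SECRET = 42  # the value the attacker wants to recover

cache = set()  # indices that have been loaded into our pretend cache

def victim_speculative_access():
    # Speculative execution: the computed result is thrown away, but the
    # side effect (populating the cache) remains observable.
    cache.add(SECRET)

def timed_access(index):
    # Cached reads are fast (1 "cycle"), uncached reads slow (100).
    return 1 if index in cache else 100

victim_speculative_access()

# The attacker probes all 256 possible byte values and keeps the fastest.
timings = {i: timed_access(i) for i in range(256)}
recovered = min(timings, key=timings.get)
print(recovered)  # the secret leaks without ever being read directly
```

The real attacks are far subtler (they must trick the branch predictor and measure nanosecond-scale differences), but the principle is the same: the butler's movements give the wine cellar away.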

    Right now, she argues, systems have to employ more and more tricks to squeeze performance out of hardware because the software we use is riddled with surveillance and spyware.

    But the truth is that our computers are already quite fast. When they are slow for the end-user, it is often because of “bloatware”: badly written programs or advertising scripts that wreak havoc as they try to track your activity online. If we were to fix that problem, we would gain speed (and avoid threatening and needless surveillance of our behavior).

    As things stand, we suffer through hack after hack, security failure after security failure. If commercial airplanes fell out of the sky regularly, we wouldn’t just shrug. We would invest in understanding flight dynamics, hold companies accountable that did not use established safety procedures, and dissect and learn from new incidents that caught us by surprise.

    And indeed, with airplanes, we did all that. There is no reason we cannot do the same for safety and security of our digital systems.

    Major vendors have been pushing out patches over the past few weeks since the vulnerabilities came to light. For-profit companies have limited resources, of course, and proprietary, closed-source code. This means there'll be some devices that won't get security updates at all, leaving end users in a tricky situation: their hardware is now almost worthless. So do they (a) keep on using it, crossing their fingers that nothing bad happens, or (b) bite the bullet and upgrade?

    What I think the communities I’m part of could have done better at is shout loudly that there’s an option (c): open source software. No matter how old your hardware, the chances are that someone, somewhere, with the requisite skills will want to fix the vulnerabilities on that device.

    Source: The New York Times

    Ethical design in social networks

    I’m thinking a lot about privacy and ethical design at the moment as part of my role leading Project MoodleNet. This article gives a short but useful overview of the Ethical Design Manifesto, along with some links for further reading:

    There is often a disconnect between what digital designers originally intend with a product or feature, and how consumers use or interpret it.

    Ethical user experience design – meaning, for example, designing technologies in ways that promote good online behaviour and intuit how they might be used – may help bridge that gap.

    There are already people (like me) making choices about the technology and social networks they use based on ethics:

    User experience design and research has so far mainly been applied to designing tech that is responsive to user needs and locations. For example, commercial and digital assistants that intuit what you will buy at a local store based on your previous purchases.

    However, digital designers and tech companies are beginning to recognise that there is an ethical dimension to their work, and that they have some social responsibility for the well-being of their users.

    Meeting this responsibility requires designers to anticipate the meanings people might create around a particular technology.

    In addition to ethical design, there are other elements to take into consideration:

    Contextually aware design is capable of understanding the different meanings that a particular technology may have, and adapting in a way that is socially and ethically responsible. For example, smart cars that prevent mobile phone use while driving.

    Emotional design refers to technology that elicits appropriate emotional responses to create positive user experiences. It takes into account the connections people form with the objects they use, from pleasure and trust to fear and anxiety.

    This includes the look and feel of a product, how easy it is to use and how we feel after we have used it.

    Anticipatory design allows technology to predict the most useful interaction within a sea of options and make a decision for the user, thus “simplifying” the experience. Some companies may use anticipatory design in unethical ways that trick users into selecting an option that benefits the company.

    Source: The Conversation

    Reading the web on your own terms

    Although it’s been less than a decade since the demise of the wonderful, simple, much-loved Google Reader, it seems like it was a different age entirely.

    Subscribing to news feeds and blogs via RSS wasn’t as widely used as it could/should have been, but there was something magical about that period of time.

    In this article, the author reflects on that era and suggests that we might want to give it another try:

    Well, I believe that RSS was much more than just a fad. It made blogging possible for the first time because you could follow dozens of writers at the same time and attract a considerably large audience if you were the writer. There were no ads (except for the high-quality Daring Fireball kind), no one could slow down your feed with third party scripts, it had a good baseline of typographic standards and, most of all, it was quiet. There were no comments, no likes or retweets. Just the writer’s thoughts and you.
    I was a happy user of Google Reader until they pulled the plug. It was a bit more interactive than other feed readers, somehow, in a way I can't quite recall. Everyone used it until they didn't.
    The unhealthy bond between RSS and Google Reader is proof of how fragile the web truly is, and it reveals that those communities can disappear just as quickly as they bloom.
    Since that time I've been an intermittent user of Feedly. Everyone else, it seems, succumbed to the algorithmic news feeds provided by Facebook, Twitter, and the like.
    A friend of mine the other day said that “maybe Medium only exists because Google Reader died — Reader left a vacuum, and the social network filled it.” I’m not entirely sure I agree with that, but it sure seems likely. And if that’s the case then the death of Google Reader probably led to the emergence of email newsletters, too.

    […]

    On a similar note, many believe that blogging is making a return. Folks now seem to recognize the value of having your own little plot of land on the web and, although it’s still pretty complex to make your own website and control all that content, it’s worth it in the long run. No one can run ads against your thing. No one can mess with the styles. No one can censor or sunset your writing.

    Not only that but when you finish making your website you will have gained superpowers: you now have an independent voice, a URL, and a home on the open web.

    I don’t think we can turn the clock back, but it does feel like there might be positive, future-focused ways of improving things through, for example, decentralisation.

    Source: Robin Rendle

    The NSA (and GCHQ) can find you by your 'voiceprint' even if you're speaking a foreign language on a burner phone

    This is pretty incredible:

    Americans most regularly encounter this technology, known as speaker recognition, or speaker identification, when they wake up Amazon’s Alexa or call their bank. But a decade before voice commands like “Hello Siri” and “OK Google” became common household phrases, the NSA was using speaker recognition to monitor terrorists, politicians, drug lords, spies, and even agency employees.

    The technology works by analyzing the physical and behavioral features that make each person’s voice distinctive, such as the pitch, shape of the mouth, and length of the larynx. An algorithm then creates a dynamic computer model of the individual’s vocal characteristics. This is what’s popularly referred to as a “voiceprint.” The entire process — capturing a few spoken words, turning those words into a voiceprint, and comparing that representation to other “voiceprints” already stored in the database — can happen almost instantaneously. Although the NSA is known to rely on finger and face prints to identify targets, voiceprints, according to a 2008 agency document, are “where NSA reigns supreme.”

    Hmmm….

    The voice is a unique and readily accessible biometric: Unlike DNA, it can be collected passively and from a great distance, without a subject’s knowledge or consent. Accuracy varies considerably depending on how closely the conditions of the collected voice match those of previous recordings. But in controlled settings — with low background noise, a familiar acoustic environment, and good signal quality — the technology can use a few spoken sentences to precisely match individuals. And the more samples of a given voice that are fed into the computer’s model, the stronger and more “mature” that model becomes.
    So yeah, let's put a microphone in every room of our house so that we can tell Alexa to turn off the lights. What could possibly go wrong?

    Source: The Intercept

    Favourable winds

    “If a man does not know to what port he is steering, no wind is favourable to him.”

    (Seneca)

    Listening to video game soundtracks can improve your productivity

    I can attest to the power of this, particularly the Halo soundtrack:

    As I write these words, a triumphant horn is erupting in my ear over the rhythmic bowing of violins. In fact, as you read, I would encourage you to listen along—just search “Battlefield One.” I bet you'll focus just a bit better with it playing in the background. After all, as a video game soundtrack it's designed to have exactly that effect.

    This is, by far, the best Life Pro Tip I’ve ever gotten or given: Listen to music from video games when you need to focus. It’s a whole genre designed to simultaneously stimulate your senses and blend into the background of your brain, because that’s the point of the soundtrack. It has to engage you, the player, in a task without distracting from it. In fact, the best music would actually direct the listener to the task.

    These days I prefer to listen to Brain.fm after I got a lifetime deal via AppSumo a year or so ago. I enjoy music as an art form, but I also appreciate it for the effect it can have on my brain.

    Source: Popular Science

     

    Technology to connect and communicate

    People going to work in factories and offices is a relatively recent invention. For most of human history, people have worked from, or very near to, their home.

    But working from home these days is qualitatively different, because we have the internet, as Sarah Jaffe points out in a recent newsletter:

    Freelancing is a strange way to work, not because self-supervised labor in the home doesn't have a long history that well predates leaving your house to go to a workplace, but because it relies so much on communication with the outside. I'm waiting on emails from editors and so I am writing to you, my virtual water-cooler companions.

    […]

    The internet, then, serves to make work less isolated. I have chats going a lot of the day, unless I’m in super drill-down writing mode, which is less of my job than many people probably expect. My friends have helped me figure out thorny issues in a piece I’m writing and helped me figure out what to write in an email to an editor who’s dropped off the face of the earth and advised me on how much money to ask for. It’s funny, there are so many stories about the way the internet is making us lonely and isolated, and it is sometimes my only human contact. My voice creaked when I answered the phone this morning because I hadn’t yet used it today.

    The problem is that capitalism forces us into a situation where we’re competing with others rather than collaborating with them:

    How do we use technology to connect and communicate rather than compete? How do we have conversations that further our understandings of things?
    I don't actually think it's solely a technology problem, although every technology has inbuilt biases. It's also a problem to be solved at the societal 'operating system' level through, for example, co-owning the organisation for which you work.

    Source: Sarah Jaffe

    Are conferences a vestige of a bygone era?

    I’m certainly attending fewer conferences than I used to, but I thought that was just the changing nature of my work and ways of making a living.

    Marco Arment makes some important points in this post about how conferences are just kind of outdated as a concept:

    • Cost: With flights, lodging, and the ticket adding up to thousands of dollars per conference, most people are priced out. The vast majority of attendees’ money isn’t even going to the conference organizers or speakers — it’s going to venues, hotels, and airlines.
    • Size: There’s no good size for a conference. Small conferences exclude too many people; big conferences impede socialization and logistics.
    • Logistics: Planning and executing a conference takes such a toll on the organizers that few of them have ever lasted more than a few years.
    • Format: Preparing formal talks with slide decks is a massively inefficient use of the speakers’ time compared to other modern methods of communicating ideas, and sitting there listening to blocks of talks for long stretches while you’re trying to stay awake after lunch is a pretty inefficient way to hear ideas.
    This has always been the case, of course. It's just that technology-mediated ways of connecting, both synchronously and asynchronously, have improved:
    Podcasts are a vastly more time-efficient way for people to communicate ideas than writing conference talks, and people who prefer crafting their message as a produced piece or with multimedia can do the same thing (and more) on YouTube. Both are much easier and more versatile for people to consume than conference talks, and they can reach and benefit far more people.
    Conferences are by their very nature exclusive and take up a lot of people's time. There's still space for them, but I think time is up for the low-quality, just-for-the-sake-of-it conference.

    Source: Marco.org

    A useful IndieWeb primer

    I’ve followed the IndieWeb movement since its inception, but it’s always seemed a bit niche. I love (and use) the POSSE model, for example, but expecting everyone to have a domain of their own, stacked with open source software, seems a bit utopian right now.

    I was surprised and delighted, therefore, to see a post on the GoDaddy blog extolling the virtues of the IndieWeb for business owners. The author explains that the IndieWeb movement was born of frustration:

    Frustration from software developers who like the idea of social media, but who do not want to hand over their content to some big, unaccountable internet company that unilaterally decides who gets to see what.

    Frustration from writers and content creators who do not want a third party between them and the people they want to reach.

    Frustration from researchers and journalists who need a way to get their message out without depending on the whim of a big company that monitors, and sometimes censors, what they have to say.

    He does a great job of explaining, with an appropriate level of technical detail, how to get started. The thing I'd really like to see in particular is people publishing details of events at a public URL instead of (just) on Facebook:
    Importantly, with IndieAuth, you can log into third-party websites using your own domain name. And your visitors can log into your website with their domain name. Or, if you organize events, you can post your event announcement right on your website, and have attendees RSVP either from their own IndieWeb sites, or natively on a social site.
    A recommended read. I'll be pointing people to this in future!

    Source: GoDaddy

    Three most harmful addictions

    “The three most harmful addictions are heroin, carbohydrates, and a monthly salary.”

    (Nassim Nicholas Taleb)

    More on Facebook's 'trusted news' system

    Mike Caulfield reflects on Facebook’s announcement that they’re going to allow users to rate the sources of news in terms of trustworthiness. Like me, and most people who have thought about this for more than two seconds, he thinks it’s a bad idea.

    Instead, he thinks Facebook should try Google’s approach:

    Most people misunderstand what the Google system looks like (misreporting on it is rife) but the way it works is this. Google produces guidance docs for paid search raters who use them to rate search results (not individual sites). These documents are public, and people can argue about whether Google’s take on what constitutes authoritative sources is right — because they are public.
    Facebook's algorithms are opaque by design, whereas, Caulfield argues, Google's approach is documented:
    I’m not saying it doesn’t have problems — it does. It has taken Google some time to understand the implications of some of their decisions and I’ve been critical of them in the past. But I am able to be critical partially because we can reference a common understanding of what Google is trying to accomplish and see how it was falling short, or see how guidance in the rater docs may be having unintended consequences.
    This is one of the major issues of our time, particularly now that people have access to the kind of CGI previously only available to Hollywood. And what are they using this AI-powered technology for? Fake celebrity (and revenge) porn, of course.

    Source: Hapgood

    Living in capitalism

    “We live in capitalism, its power seems inescapable – but then, so did the divine right of kings. Any human power can be resisted and changed by human beings.”

    (Ursula Le Guin)

    Anxiety is the price of convenience

    Remote working, which I’ve done for over five years now, sounds awesome, doesn’t it? Open your laptop while still in bed, raid the biscuit barrel at every opportunity, spend more time with your family…

    Don’t get me wrong, it is great and I don’t think I could ever go back to working full-time in an office. That being said, there’s a hidden side to remote working which no-one ever tells you about: anxiety.

    Every interaction when you’re working remotely is an intentional act. You either have to schedule a meeting with someone, or ‘ping’ them to see if they’re available. You can’t see that they’re free, wander over to talk to them, or bump into them in the corridor, as you could if you were physically co-located.

    When people don’t respond in a timely fashion, or within the time frame you were expecting, it’s unclear why that happened. This article picks up on that:

    In recent decades, written communication has caught up—or at least come as close as it’s likely to get to mimicking the speed of regular conversation (until they implant thought-to-text microchips in our brains). It takes more than 200 milliseconds to compose a text, but it’s not called “instant” messaging for nothing: There is an understanding that any message you send can be replied to more or less immediately.

    But there is also an understanding that you don’t have to reply to any message you receive immediately. As much as these communication tools are designed to be instant, they are also easily ignored. And ignore them we do. Texts go unanswered for hours or days, emails sit in inboxes for so long that “Sorry for the delayed response” has gone from earnest apology to punchline.

    It’s not just work, either. Because we carry our smartphones with us everywhere, my wife expects almost an instantaneous response on even the most trivial matters. I’ve come back to my phone with a stream of ‘oi’ messages before…

    It’s anxiety-inducing because written communication is now designed to mimic conversation—but only when it comes to timing. It allows for a fast back-and-forth dialogue, but without any of the additional context of body language, facial expression, and intonation. It’s harder, for example, to tell that someone found your word choice off-putting, and thus to correct it in real-time, or try to explain yourself better. When someone’s in front of you, “you do get to see the shadow of your words across someone else’s face,” [Sherry] Turkle says.

    Lots to ponder here. A lot of it has to do with the culture of your organisation / family, at the end of the day.

    Source: The Atlantic (via Hurry Slowly)

    Different sorts of time

    Growing up, I always thought I’d write for a living. Initially, I wanted to be a journalist but, as it turns out, thinking and writing make up about 75% of what I do on a weekly basis anyway.

    I’m always interested in how people who write full-time structure the process. This, from Jon McGregor, struck a chord with me:

    There are other sorts of time, besides the writing time. There is thinking time, reading time, research time and sketching out ideas time. There is working on the first page over and over again until you find the tone you’re looking for time. There is spending just five minutes catching up on email time. There is spending five minutes more on Twitter because, in a way, that is part of the research process time. There is writing time, somewhere in there. There is making the coffee and clearing away the coffee and thinking about lunch and making the lunch and clearing away the lunch time. There is stretching the legs time. There is going for a long walk because all the great writers always talk about walking time being the best thinking time, and then there is getting back from that walk and realising what the hell the time is now time. There’s looking back over what you’ve written so far and deciding it is all a load of awkwardly phrased bobbins time; there is wondering what kind of a way this is to make a living at all time. There is finding the tail-end of an idea that might just work and trying to get that down on the page before you run out of time time. There is answering emails that just can’t be put off any longer time. There is moving to another table and setting a timer and refusing to look up from the page until you’ve written for 40 minutes solid time. There is reading that back and crossing it out time. And then there is running out of the door and trying to get to the school gates at anything like a decent time time.

    I've written before, elsewhere, about how difficult it is for knowledge workers such as writers to quantify what counts as 'work'. Does a walk in the park while thinking about what you're going to write count? What about when you're in the shower planning something out?

    It’s complicated.

    Source: The Guardian

    Some podcast recommendations

    Despite no longer having a commute, I still find time to listen to podcasts. They’re useful for a variety of reasons: I can be doing something else while listening to them such as walking, going to the gym, or boring admin, and they don’t require me to look at a screen (which I do most of the day).

    So it’s very useful that Bryan Alexander has shared the podcasts he’s listening to at present. Here are a couple that were new to me:

    Beyond the Book – a look into the book publishing industry. It’s clearly biased in favor of strong copyright policies and practices, a bias I don’t share, but the program is also very informative.

    Very Bad Wizards – two thinkers and, sometimes, a guest brood about deep questions concerning human psychology, philosophy, and ethics. It’s not my usual fare, so I enjoy learning.

    Podcasts are basically RSS feeds with audio enclosures; as such, your list of subscriptions can be exported as an OPML file. Most podcast clients, including AntennaPod (which I use), allow you to do this.
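
    To illustrate, here's a minimal sketch of what such an exported subscription list looks like, and how you might read one with Python's standard library. The feed names and URLs are invented for the example (not from my actual OPML file); real exports follow the same shape, with each subscription as an `<outline>` element carrying an `xmlUrl` attribute.

```python
import xml.etree.ElementTree as ET

# A hypothetical minimal OPML subscription list, of the kind most
# podcast clients (including AntennaPod) can export and import.
opml = """<opml version="2.0">
  <head><title>Podcast subscriptions</title></head>
  <body>
    <outline type="rss" text="Beyond the Book"
             xmlUrl="https://example.com/beyond-the-book/feed.xml"/>
    <outline type="rss" text="Very Bad Wizards"
             xmlUrl="https://example.com/very-bad-wizards/feed.xml"/>
  </body>
</opml>"""

root = ET.fromstring(opml)

# Each subscription is an <outline> element; the feed URL lives in
# the xmlUrl attribute, and the human-readable name in text.
feeds = {o.get("text"): o.get("xmlUrl")
         for o in root.iter("outline")
         if o.get("xmlUrl")}

for name, url in feeds.items():
    print(f"{name}: {url}")
```

    Because OPML is just XML, moving your subscriptions between clients is a matter of exporting from one and importing into the other.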

    Here’s my OPML file, as of today. I don’t listen to all of these podcasts regularly, just dipping in and out of them. My top five favourites are:

    There's also, obviously, Today In Digital Education (TIDE), which I record with Dai Barnes. We'll be releasing our first episode of 2018 later this week!

    Source: Bryan Alexander
