Speeding up a Chromebook by allocating zram

    Pixelbook

    Oddly enough, in the few days since I've bookmarked this URL, it's disappeared. Thank goodness for the Internet Archive!

    I'll post the main details below, which are instructions for making Chromebooks run faster by allocating compressed cache. Note that on my Google Pixelbook (2017) I used '4000' instead of the recommended '2000', and it's really made a difference.

    Also see: Cog - System Info Viewer

    You use zram (otherwise known as compressed cache - compcache). With a single command you can create enough zram to compensate for your device's lack of physical RAM. You can create as much compcache as you need; but remember, most Chromebooks contain smaller internal drives, so create a swap space that doesn't gobble up too much of your physical drive (as swap is created using your Chromebook internal, physical drive).

    To create compcache, you must work within Crosh (Chromebook shell), aka the command line. Believe it or not, the command used for this is incredibly simple; but the results are significant (especially in cases where you're frequently running out of memory).

    [...]

    The first thing you must do is open a Crosh tab. This is simple and doesn't require anything more than hitting the key combination [Ctrl]+[Alt]+[t]. When you find yourself at crosh> you know you're ready to go.

    The command to create swap space is very simple and comes in the form of:

    swap enable SIZE

    Where SIZE is the size of the swap space you wish to create. The ChromeOS developers suggest adding a swap of 2GB, which means the command would be:

    swap enable 2000

    Once you've run the command, you must then reboot your Chromebook for the change to take effect. The swap will remain persistent until you run the disable command (again, from Crosh), like so:

    swap disable

    No matter how many times you reboot, the swap will remain until you issue the disable command.
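    Crosh itself only exposes the `swap` command, but if you have access to a full Linux shell (for example the Crostini container, or any Linux machine) you can sketch a quick check that the swap is actually active. A minimal example, assuming a standard `/proc` layout:

    ```shell
    # Print the kernel's swap accounting; on a Chromebook with zram enabled,
    # SwapTotal should roughly match the size passed to `swap enable` (in MB).
    grep -i '^Swap' /proc/meminfo

    # List active swap areas; a zram device typically appears as /dev/zram0.
    cat /proc/swaps
    ```

    If the sizes don't look right, rerun `swap enable` with a different value and reboot again.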

    How to prevent a Chromebook from running out of memory | TechRepublic (archive.org link)

    Human and computer memory

    There are some good points made in this article about ‘desktop’ operating systems but it’s a bit Mac-centric for my liking. I’m pretty sure, for example, the author would love ChromeOS or another Linux-based operating system.

    One really interesting point is the difference between human memory and computer memory. In my own life and experience, I use the latter to augment the former by not even trying to remember anything that computers can store and retrieve more quickly. Kind of like Cory Doctorow’s Memex Method.

    Maciej Cegłowski’s powerful “The Internet With A Human Face” highlights the cognitive dissonance between human memory (gradated and complex and eventually faulty) and computer memory (binary: flawless or nonexistent). We should model fragment search and access after human memory, using access patterns and usage patterns as rich metadata to help the computer understand what is important and what is relevant. And what is related to what. That doesn’t mean auto-deleting documents after some period of time, but just as it’s a lot harder to Google something generic that happened a decade ago and garnered little attention since, it doesn’t need to be “easy” to find the untitled scratch spreadsheet we cooked up to check the car payment budget in 2013 (but we should be able to find it if we need to).
    Source: Why We Need to Rethink the Computer ‘Desktop’ as a Concept | by Ben Zotto | May, 2021 | OneZero

    Reafferent loops

    In Peter Godfrey-Smith's book Other Minds, he cites work from 1950 by the German physiologists Erich von Holst and Horst Mittelstaedt.

    They used the term afference to refer to everything you take in through the senses. Some of what comes in is due to the changes in the objects around you — that is exafference... — and some of what comes in is due to your own actions: that is reafference.

    Peter Godfrey-Smith, Other Minds, p.154

    Godfrey-Smith is talking about octopuses and other cephalopods, but I think what he's discussing is interesting from a digital note-taking point of view.

    To write a note and read it is to create a reafferent loop. Rather than wanting to perceive only the things that are not due to you — finding the exafferent among the noise in the senses — you want what you read to be entirely due to your previous action. You want the contents of the note to be due to your acts rather than someone else's meddling, or the natural decay of the notepad. You want the loop between present action and future perception to be firm. This enables you to create a form of external memory — as was, almost certainly, the role of much early writing (which is full of records of goods and transactions), and perhaps also the role of some early pictures, though that is much less clear.

    When a written message is directed at others, it's ordinary communication. When you write something for yourself to read, there's usually an essential role for time — the goal is memory, in a broad sense. But memory like this is a communicative phenomenon; it is communication between your present self and a future self. Diaries and notes-to-self are embedded in a sender/receiver system just like more standard forms of communication.

    Peter Godfrey-Smith, Other Minds, p.154-155

    Some people talk about digital note-taking as a form of 'second brain'. Given the type of distributed cognition that Godfrey-Smith highlights in Other Minds, it would appear that creating reafferent loops is exactly the kind of thing that's happening.

    Very interesting.

    If you have been put in your place long enough, you begin to act like the place

    Remembering the past through photos

    A few weeks ago, I bought a Google Assistant-powered smart display and put it in our kitchen in place of the DAB radio. It has the added bonus of cycling through all of my Google Photos, which stretch back as far as when my wife and I were married, 15 years ago.

    This part of its functionality makes it, of course, just a cloud-powered digital photo frame. But I think it’s easy to underestimate the power that these things have. About an hour before composing this post, for example, my wife took a photo of a photo(!) that appeared on the display showing me on the beach with our two children when they were very small.

    An article by Giuliana Mazzoni in The Conversation points out that our ability to whip out a smartphone at any given moment and take a photo changes our relationship to the past:

    We use smart phones and new technologies as memory repositories. This is nothing new – humans have always used external devices as an aid when acquiring knowledge and remembering.

    […]

    Nowadays we tend to commit very little to memory – we entrust a huge amount to the cloud. Not only is it almost unheard of to recite poems, even the most personal events are generally recorded on our cellphones. Rather than remembering what we ate at someone’s wedding, we scroll back to look at all the images we took of the food.

    Mazzoni points out that this can be problematic, as memory is important for learning. However, there may be a “silver lining”:

    Even if some studies claim that all this makes us more stupid, what happens is actually shifting skills from purely being able to remember to being able to manage the way we remember more efficiently. This is called metacognition, and it is an overarching skill that is also essential for students – for example when planning what and how to study. There is also substantial and reliable evidence that external memories, selfies included, can help individuals with memory impairments.

    But while photos can in some instances help people to remember, the quality of the memories may be limited. We may remember what something looked like more clearly, but this could be at the expense of other types of information. One study showed that while photos could help people remember what they saw during some event, they reduced their memory of what was said.

    She goes on to discuss the impact that viewing many photos from your past has on a malleable sense of self:

    Research shows that we often create false memories about the past. We do this in order to maintain the identity that we want to have over time – and avoid conflicting narratives about who we are. So if you have always been rather soft and kind – but through some significant life experience decide you are tough – you may dig up memories of being aggressive in the past or even completely make them up.
    I'm not so sure that it's a good thing to tell yourself the wrong story about who you are. For example, although I grew up in, and identified with, a macho ex-mining town environment, I've become happier by realising that my identity is separate from that.

    I suppose it’s a bit different for me, as most of the photos I’m looking at are of me with my children and/or my wife. However, I still have to tell myself a story of who I am as a husband and a father, so in many ways it’s the same.

    All in all, I love the fact that we can take photos anywhere and at any time. We may need to evolve social norms around the most appropriate ways of capturing images in crowded situations, but that’s separate from the very great benefit which I believe they bring us.

    Source: The Conversation