
Friday fizzles

I head off on holiday tomorrow! Before I go, check out these highlights from this week’s reading and research:

  • “Things that were considered worthless are redeemed” (Ira David Socol) — “Empathy plus Making must be what education right now is about. We are at both a point of learning crisis and a point of moral crisis. We see today what happens — in the US, in the UK, in Brasil — when empathy is lost — and it is a frightening sight. We see today what happens — in graduates from our schools who do not know how to navigate their world — when the learning in our schools is irrelevant in content and/or delivery.”
  • Voice assistants are going to make our work lives better—and noisier (Quartz) — “Active noise cancellation and AI-powered sound settings could help to tackle these issues head on (or ear on). As the AI in noise cancellation headphones becomes better and better, we’ll potentially be able to enhance additional layers of desirable audio, while blocking out sounds that distract. Audio will adapt contextually, and we’ll be empowered to fully manage and control our soundscapes.”
  • We Aren’t Here to Learn What We Already Know (LA Review of Books) — “A good question, in short, is an honest question, one that, like good theory, dances on the edge of what is knowable, what it is possible to speculate on, what is available to our immediate grasp of what we are reading, or what it is possible to say. A good question, that is, like good theory, might be quite unlovely to read, particularly in its earliest iterations. And sometimes it fails or has to be abandoned.”
  • The runner who makes elaborate artwork with his feet and a map (The Guardian) — “The tracking process is high-tech, but the whole thing starts with just a pen and paper. “When I was a kid everyone thought I’d be an artist when I grew up – I was always drawing things,” he said. He was a particular fan of the Etch-a-Sketch, which has something in common with his current work: both require creating images in an unbroken line.”
  • What I Do When it Feels Like My Work Isn’t Good Enough (James Clear) — “Release the desire to define yourself as good or bad. Release the attachment to any individual outcome. If you haven’t reached a particular point yet, there is no need to judge yourself because of it. You can’t make time go faster and you can’t change the number of repetitions you have put in before today. The only thing you can control is the next repetition.”
  • Online porn and our kids: It’s time for an uncomfortable conversation (The Irish Times) — “Now when we talk about sex, we need to talk about porn, respect, consent, sexuality, body image and boundaries. We don’t need to terrify them into believing watching porn will ruin their lives, destroy their relationships and warp their libidos, maybe, but we do need to talk about it.”
  • Drones will fly for days with new photovoltaic engine (Tech Xplore) — “[T]his finding builds on work… published in 2011, which found that the key to boosting solar cell efficiency was not by absorbing more photons (light) but emitting them. By adding a highly reflective mirror on the back of a photovoltaic cell, they broke efficiency records at the time and have continued to do so with subsequent research.”
  • Twitter won’t ruin the world. But constraining democracy would (The Guardian) — “The problems of Twitter mobs and fake news are real. As are the issues raised by populism and anti-migrant hostility. But neither in technology nor in society will we solve any problem by beginning with the thought: “Oh no, we put power into the hands of people.” Retweeting won’t ruin the world. Constraining democracy may well do.”
  • The Encryption Debate Is Over – Dead At The Hands Of Facebook (Forbes) — “Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.”
  • Living in surplus (Seth Godin) — “When you live in surplus, you can choose to produce because of generosity and wonder, not because you’re drowning.”

Image from Dilbert. Shared to make the (hopefully self-evident) counterpoint that not everything of value has an economic value. There’s more to life than accumulation.

Anything invented after you’re thirty-five is against the natural order of things

I’m fond of the above quotation by Douglas Adams that I’ve used for the title of this article. It serves as a reminder to myself that I’ve now reached an age when I’ll look at a technology and wonder: why?

Despite this, I’m quite excited about the potential of two technologies that will revolutionise our digital world both in our homes and offices and when we’re out-and-about. Those technologies? Wi-Fi 6, as it’s known colloquially, and 5G networks.

Let’s take Wi-Fi 6 first. As Chuong Nguyen explains in an article for Digital Trends, it isn’t just about faster speeds:

A significant advantage for Wi-Fi 6 devices is better battery life. Though the standard promotes Internet of Things (IoT) devices being able to last for weeks, instead of days, on a single charge as a major benefit, the technology could even prove to be beneficial for computers, especially since Intel’s latest 9th-generation processors for laptops come with Wi-Fi 6 support.

Likewise, Alexis Madrigal, writing in The Atlantic, explains that mobile 5G networks bring benefits beyond streaming YouTube videos at ever-higher resolutions, but also present quite a technological hurdle:

The fantastic 5G speeds require higher-frequency, shorter-wavelength signals. And the shorter the wavelength, the more likely it is to be blocked by obstacles in the world.

[…]

Ideally, [mobile-associated companies] would like a broader set of customers than smartphone users. So the companies behind 5G are also flaunting many other applications for these networks, from emergency services to autonomous vehicles to every kind of “internet of things” gadget.
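That point about shorter wavelengths being more easily blocked is worth pausing on. Wavelength is just the speed of light divided by frequency, and a quick back-of-the-envelope calculation (the bands below are my own illustrative assumptions, not figures from Madrigal’s article) shows the scale involved:

```python
# Back-of-the-envelope: wavelength = speed of light / frequency.
# The bands below are illustrative assumptions, not figures from the article.
c = 3e8  # speed of light in metres per second

for label, freq_hz in [("typical 4G band (1.8 GHz)", 1.8e9),
                       ("5G millimetre-wave band (28 GHz)", 28e9)]:
    wavelength_cm = (c / freq_hz) * 100
    print(f"{label}: wavelength of roughly {wavelength_cm:.1f} cm")

# typical 4G band (1.8 GHz): wavelength of roughly 16.7 cm
# 5G millimetre-wave band (28 GHz): wavelength of roughly 1.1 cm
```

A signal with a wavelength of around a centimetre is readily absorbed or reflected by walls, foliage and even a hand in front of the phone, which is why 5G coverage at those frequencies needs a dense network of small relays.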

If you’ve been following the kerfuffle around the UK using Huawei’s technology for its 5G infrastructure, you’ll already know about the politics and security issues at stake here.

Sue Halpern, writing in The New Yorker, outlines the claimed benefits:

Two words explain the difference between our current wireless networks and 5G: speed and latency. 5G—if you believe the hype—is expected to be up to a hundred times faster. (A two-hour movie could be downloaded in less than four seconds.) That speed will reduce, and possibly eliminate, the delay—the latency—between instructing a computer to perform a command and its execution. This, again, if you believe the hype, will lead to a whole new Internet of Things, where everything from toasters to dog collars to dialysis pumps to running shoes will be connected. Remote robotic surgery will be routine, the military will develop hypersonic weapons, and autonomous vehicles will cruise safely along smart highways. The claims are extravagant, and the stakes are high. One estimate projects that 5G will pump twelve trillion dollars into the global economy by 2035, and add twenty-two million new jobs in the United States alone. This 5G world, we are told, will usher in a fourth industrial revolution.

But greater speeds and lower latency aren’t all upside for all members of society, as I learned in this BBC Beyond Today podcast episode about Korean spy cam porn. Halpern explains:

In China, which has installed three hundred and fifty thousand 5G relays—about ten times more than the United States—enhanced geolocation, coupled with an expansive network of surveillance cameras, each equipped with facial-recognition technology, has enabled authorities to track and subordinate the country’s eleven million Uighur Muslims. According to the Times, “the practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.”

Automated racism, now there’s a thing. It turns out that technologies amplify our existing prejudices. Perhaps we should be a bit more careful and ask more questions before we march down the road of technological improvements? Especially given 5G could affect our ability to predict major storms. I’m reading Low-tech Magazine: The Printed Website at the moment, and it’s pretty eye-opening about what we could be doing instead.



The smallest deed is better than the greatest intention

Thanks to John Burroughs for today’s title. For me, it’s an oblique reference to some of the situations I find myself in, both in my professional and personal life. After all, words are cheap and actions are difficult.

I’m going to take the unusual step of quoting someone who’s quoting me. In this case, it’s Stephen Downes picking up on a comment I made in the cc-openedu Google Group. I’d link directly to my comments, but for some reason a group about open education is… closed?

I’d like to echo a point David Kernohan made when I worked with him on the Jisc OER programme. He said: “OER is a supply-side term”. Let’s face it, there are very few educators specifically going out and looking for “Openly Licensed Resources”. What they actually want are resources that they can access for free (or at a low cost) and that they can legally use. We’ve invented OER as a term to describe that, but it may actually be unhelpfully ambiguous.

Shortly after posting that, I read this post from Sarah Lambert on the GO-GN (Global OER Graduate Network) blog. She says:

[W]hile we’re being all inclusive and expanding our “open” to encompass any collaborative digital practice, then our “open” seems to be getting less and less distinctive. To the point where it’s getting quite easily absorbed by the mainstream higher education digital learning (eLearning, Technology Enhanced Learning, ODL, call it what you will). Is it a win for higher education to absorb and assimilate “open” (and our gift labour) as the latest innovation feeding the hungry marketised university that Kate Bowles spoke so eloquently about? Is it a problem if not only the practice, but the research field of open education becomes inseparable with mainstream higher education digital learning research?

My gloss on this is that ‘open education’ may finally have moved into the area of productive ambiguity. I talked about this back in 2016 in a post on a blog I post to only very infrequently, so I might as well quote myself again:

Ideally, I’d like to see ‘open education’ move into the realm of what I term productive ambiguity. That is to say, we can do some work with the idea and start growing the movement beyond small pockets here and there. I’m greatly inspired by Douglas Rushkoff’s new Team Human podcast at the moment, feeling that it’s justified the stance that I and others have taken for using technology to make us more human (e.g. setting up a co-operative) and against the reverse (e.g. blockchain).

That’s going to make a lot of people uncomfortable, and hopefully uncomfortable enough to start exploring new, even better areas. ‘Open Education’ now belongs, for better or for worse, to the majority. Whether that’s ‘Early majority’ or ‘Late majority’ on the innovation adoption lifecycle curve probably depends where in the world you live.

Diffusion of innovation curve
CC BY Pnautilus (Wikipedia)

Things change and things move on. The reason I used that xkcd cartoon about IRC at the top of this post is because there has been much (OK, some) talk about Mozilla ending its use of IRC.

While we still use it heavily, IRC is an ongoing source of abuse and harassment for many of our colleagues and getting connected to this now-obscure forum is an unnecessary technical barrier for anyone finding their way to Mozilla via the web. Available interfaces really haven’t kept up with modern expectations, spambots and harassment are endemic to the platform, and in light of that it’s no coincidence that people trying to get in touch with us from inside schools, colleges or corporate networks are finding that often as not IRC traffic isn’t allowed past institutional firewalls at all.

Cue much hand-wringing from the die-hards in the Mozilla community. Unfortunately, Slack, which originally had a bridge/gateway for IRC, has since pulled up the drawbridge on that front. Mozilla could go with something like Mattermost, but given recent history I bet they go with Discord (or similar).

As Seth Godin points out in his most recent podcast episode, everyone wants to be described as ‘supple’; nobody wants to be described as ‘brittle’. Yet the actions we take suggest otherwise. We expect that, just because the change we see in the world isn’t convenient, we can somehow slow it down. Nope, you just have to roll with it, whether that’s changing technologies or different approaches to organising ideas and people.


Also check out:

  • Do Experts Listen to Other Experts? (Marginal Revolution) — “very little is known about how experts influence each others’ opinions, and how that influence affects final evaluations.”
  • Why Symbols Aren’t Forever (Sapiens) — “The shifting status of cultural symbols reveals a lot about who we are and what we value.”
  • Balanced Anarchy or Open Society? (Kottke.org) — “Personal computing and the internet changed (and continues to change) the balance of power in the world so much and with such speed that we still can’t comprehend it.”

Why the internet is less weird these days

I can remember sneakily accessing the web when I was about fifteen. It was a pretty crazy place, the likes of which you only really see these days in the far-flung corners of the regular internet or on the dark web.

Back then, there were conspiracy theories, there was porn, and there was all kinds of weirdness and wonderfulness that I wouldn’t otherwise have experienced growing up in a northern mining town. Some of it may have been inappropriate, but in the main it opened my eyes to the wider world.

In this Engadget article, Violet Blue points out that the demise of the open web means we’ve also lost meaningful free speech:

It’s critical… to understand that apps won, and the open internet lost. In 2013, most users accessing the internet went to mobile and stayed that way. People don’t actually browse the internet anymore, and we are in a free-speech nightmare.

Because of Steve Jobs, adult and sex apps are super-banned from Apple’s conservative walled garden. This, combined with Google’s censorious push to purge its Play Store of sex has quietly, insidiously formed a censored duopoly controlled by two companies that make Morality in Media very, very happy. Facebook, even though technically a darknet, rounded it out.

A very real problem for society at the moment is that we simultaneously want to encourage free-thinking and diversity while at the same time protecting people from distasteful content. I’m not sure what the answer is, but outsourcing the decision to tech companies probably isn’t it.

In 1997, Ann Powers wrote an essay called “In Defense of Nasty Art.” It took progressives to task for not defending rap music because it was “obscene” and sexually graphic. Powers puts it mildly when she states, “Their apprehension makes the fight to preserve freedom of expression seem hollow.” This is an old problem. So it’s no surprise that the same websites forbidding, banning, and blocking “sexually suggestive” art content also claim to care about free speech.

As a parent of a 12-year-old boy and an eight-year-old girl, I check the PEGI age ratings for the games they play. I also trust Common Sense Media to tell me about the content of films they want to watch, and I’m careful about what they can and can’t access on the web.

Violet Blue’s article is a short one, so it focuses on the tech companies, but the real issue here is one level down. The problem is neoliberalism. As Byung-Chul Han comments in Psychopolitics: Neoliberalism and New Technologies of Power, which I’m reading at the moment:

Neoliberalism represents a highly efficient, indeed an intelligent, system for exploiting freedom. Everything that belongs to practices and expressive forms of liberty – emotion, play and communication – comes to be exploited.

Almost everything is free at the point of access these days, which means, in the oft-repeated phrase, that we are the product. This means that in order to extract maximum value, nobody can be offended. I’m not so sure that I want to live in an inoffensive future.

Source: Engadget (via Noticing)

Why it’s so hard to quit Big Tech

I’m writing this on a Google Pixelbook. Earlier this evening I wiped it, fully intending to install Linux on it, and then… meh. Partly, that’s because the Pixelbook now supports Linux apps in a sandboxed environment (which is great!) but mostly because using ChromeOS on decent hardware is just a lovely user experience.

Writing for TechCrunch, Danny Crichton notes:

Privacy advocates will tell you that the lack of a wide boycott against Google and particularly Facebook is symptomatic of a lack of information: if people really understood what was happening with their data, they would galvanize immediately for other platforms. Indeed, this is the very foundation for the GDPR policy in Europe: users should have a choice about how their data is used, and be fully-informed on its uses in order to make the right decision for them.

This is true for all kinds of things. If people only knew about the real cost of Brexit, about what Donald Trump was really like, about the facts of global warming… and on, and on.

I think it’s interesting to compare climate change and Big Tech. We all know that we should probably change our actions, but the symptoms only affect us directly very occasionally. I’m just pleased that I’ve been able to stay off Facebook for the last nine years…

Alternatives exist for every feature and app offered by these companies, and they are not hard to find. You can use Signal for chatting, DuckDuckGo for search, FastMail for email, 500px or Flickr for photos, and on and on. Far from being shameless clones of their competitors, in many cases these products are even superior to their originals, with better designs and novel features.

It’s not good enough just to create a moral choice and talk about privacy. Just look at the Firefox web browser from Mozilla, which now stands at less than 5% market share. That’s why I think that we need to be thinking about regulation (like GDPR!) to change things, not expect individual users to make some kind of stand.

I mean, just look at things like this recent article that talks about building your own computer, sideloading APK files onto an Android device with a modified bootloader, and setting up your own ‘cloud’ service. It’s do-able, and I’ve done it in the past, but it’s not fun. And it’s not a sustainable solution for 99% of the population.

Source: TechCrunch

Cal Newport on the dangers of ‘techno-maximalism’

I have to say that I was not expecting to enjoy Cal Newport’s book Deep Work when I read it a couple of years ago. As someone who’s always been fascinated by technology, and who has spent most of his career working in and around it, I assumed it was going to take the approach of a Luddite working from his academic ivory tower.

It turns out I was completely wrong in this assumption, and the book was one of the best I read in 2017. Newport is back with a new book that I’ve eagerly pre-ordered called Digital Minimalism: On Living Better with Less Technology. It comes out next week. Again, the title is something that would usually be off-putting to me, but it’s hard to argue with the points that he makes in his blog posts since Deep Work.

As you would expect with a new book coming out, Newport is doing the rounds of interviews. In one with GQ magazine, he talks about the dangers of ‘digital maximalism’, which he defines in the following way:

The basic idea is that technological innovations can bring value and convenience into your life. So, you assess new technological tools with respect to what value or convenience it can bring into your life. And if you can find one, then the conclusion is, “If I can afford it, I should probably have this.” It just looks at the positives. And its view is “more is better than less,” because more things that bring you benefits means more total benefits. This is what maximalism is: “If there’s something that brings value, you should get it.”

That type of thinking is dangerous, as:

We see these tools, and we have this narrative that, “You can do this on Facebook,” or “This new feature on this device means you can do this, which would be convenient.” What you don’t factor in is, “Okay, well what’s the cost in terms of my time attention required to have this device in my life?” Facebook might have some particular thing that’s valuable, but then you have the average U.S. user spending something like 50 minutes a day on Facebook products. That’s actually a pretty big [amount of life] that you’re now trading in order to get whatever the potential small benefit is.

[Maximalism] ignores the opportunity cost. And as Thoreau pointed out hundreds of years ago, it’s actually in the opportunity cost that all the interesting math happens.

Newport calls for a new philosophy of technology which includes things like ‘digital minimalism’ (the subject of his new book):

Digital minimalism is a clear philosophy: you figure out what’s valuable to you. For each of these things you say, “What’s the best way I need to use technology to support that value?” And then you happily miss out on everything else. It’s about additively building up a digital life from scratch to be very specifically, intentionally designed to make your life much better.

There might be other philosophies, just like in health and fitness. More important to me than everyone becoming a digital minimalist, is people in general getting used to this idea that, “I have a philosophy that’s really clear and grounded in my values that tells me how I approach technology.” Moving past this ad-hoc stage of like, “Whatever, I just kind of signed up for maximalist stage,” and into something a little bit more intentional.

I’ve never really been the type of person to go to a book club, but what with this coming out and Company of One by Paul Jarvis arriving yesterday, perhaps I need to set up a virtual one?

Source: GQ

Through the looking-glass

Earlier this month, George Dyson, historian of technology and author of books including Darwin Among the Machines, published an article at Edge.org.

In it, he cites Childhood’s End, a story by Arthur C. Clarke in which benevolent overlords arrive on earth. “It does not end well”, he says. There’s lots of scaremongering in the world at the moment and, indeed, some people have said for a few years now that software is eating the world.

Dyson comments:

The genius — sometimes deliberate, sometimes accidental — of the enterprises now on such a steep ascent is that they have found their way through the looking-glass and emerged as something else. Their models are no longer models. The search engine is no longer a model of human knowledge, it is human knowledge. What began as a mapping of human meaning now defines human meaning, and has begun to control, rather than simply catalog or index, human thought. No one is at the controls. If enough drivers subscribe to a real-time map, traffic is controlled, with no central model except the traffic itself. The successful social network is no longer a model of the social graph, it is the social graph. This is why it is a winner-take-all game. Governments, with an allegiance to antiquated models and control systems, are being left behind.

I think that’s an insightful point: human knowledge is seen to be that indexed by Google, friendships are mediated by Facebook, Twitter and Instagram, and to some extent what is possible/desirable/interesting is dictated to us rather than originating from us.

We imagine that individuals, or individual algorithms, are still behind the curtain somewhere, in control. We are fooling ourselves. The new gatekeepers, by controlling the flow of information, rule a growing sector of the world.

What deserves our full attention is not the success of a few companies that have harnessed the powers of hybrid analog/digital computing, but what is happening as these powers escape into the wild and consume the rest of the world.

Indeed. We need to raise our sights a little here and start asking governments to use their dwindling powers to break up mega corporations before Google, Amazon, Microsoft and Facebook are too powerful to stop. However, given how enmeshed they are in everyday life, I’m not sure at this point it’s reasonable to ask the general population to stop using their products and services.

Source: Edge.org

Exit option democracy

This week saw the launch of a new book by Shoshana Zuboff entitled The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. It was featured in two of my favourite newspapers, The Observer and The New York Times, and is the kind of book I would have lapped up this time last year.

In 2019, though, I’m being a bit more pragmatic, taking heed of Stoic advice to focus on the things that you can change. Chiefly, that’s your own perceptions about the world. I can’t change the fact that, despite the Snowden revelations and everything that has come afterwards, most people don’t care one bit that they’re trading privacy for convenience.

That puts those who care about privacy in a bit of a predicament. You can use the most privacy-respecting email service in the world, but as soon as you communicate with someone using Gmail, Google has the entire conversation. Chances are, the organisation you work for has ‘gone Google’ too.

Then there are Facebook shadow profiles. You don’t even have to have an account on that platform for the company behind it to know all about you. The same goes for companies knowing who’s in your friendship group if your friends upload their contacts to WhatsApp. It makes no difference whether you use ridiculous third-party gadgets or not.

In short, if you want to live in modern society, your privacy depends on your family and friends. Of course you have the option to choose not to participate in certain platforms (I don’t use Facebook products) but that comes at a significant cost. It’s the digital equivalent of Thoreau taking himself off to Walden pond.

In a post from last month that I stumbled across this weekend, Nate Matias reflects on a talk he attended by Janet Vertesi at Princeton University’s Center for Information Technology Policy. Vertesi, says Matias, tried four different ways of opting out of technology companies gathering data on her:

  • Platform avoidance
  • Infrastructural avoidance
  • Hardware experiments
  • Digital homesteading

Interestingly, the starting point is Vertesi’s rejection of ‘exit option democracy’:

The basic assumption of markets is that people have choices. This idea that “you can just vote with your feet” is called an “exit option democracy” in organizational sociology (Weeks, 2004). Opt-out democracy is not really much of a democracy, says Janet. She should know–she’s been opting out of tech products for years.

The option Vertesi advocates for going Google-free is a pain in the backside. I know, because I’ve tried it:

To prevent Google from accessing her data, Janet practices “data balkanization,” spreading her traces across multiple systems. She’s used DuckDuckGo, sandstorm.io, ResilioSync, and youtube-dl to access key services. She’s used other services occasionally and non-exclusively, and varied it with open source alternatives like etherpad and open street map. It’s also important to pay attention to who is talking to whom and sharing data with whom. Data balkanization relies on knowing what companies hate each other and who’s about to get in bed with whom.

The time I’ve spent doing these things was time I was not being productive, nor was it time I was spending with my wife and kids. It’s easy to roll your eyes at people “trading privacy for convenience” but it all adds up.

Talking of family, straying too far from societal norms has, for better or worse, negative consequences. Just as Linux users were targeted for surveillance, so Vertesi and her husband were suspected of fraud for browsing the web using Tor and using cash for transactions:

Trying to de-link your identity from data storage has consequences. For example, when Janet and her husband tried to use cash for their purchases, they faced risks of being reported to the authorities for fraud, even though their actions were legal.

And then, of course, there’s the tinfoil hat options:

…Janet used parts from electronics kits to make her own 2g phone. After making the phone Janet quickly realized even a privacy-protecting phone can’t connect to the network without identifying the user to companies through the network itself.

I’m rolling my eyes at this point. The farthest I’ve gone down this route is to use the now-defunct Firefox OS and LineageOS for microG. Although both had their upsides, they were too annoying to use for extended periods of time.

Finally, Vertesi goes down the route of trying to own all your own data. I’ll just point out that there’s a reason those of us who had huge CD and MP3 collections switched to Spotify. Looking after any collection takes time and effort. It’s also a lot more cost effective for someone like me to ‘rent’ my music instead of own it. The same goes for Netflix.

What I do accept, though, is that Vertesi’s findings show that ‘exit option democracy’ isn’t really an option here, so the world of technology isn’t really democratic. My takeaway from all this, and the reason for my pragmatic approach this year, is that it’s up to governments to do something about it.

Western society teaches us that empowered individuals can change the world. But if you take a closer look, whether it’s surveillance capitalism or climate change, it’s legislation that’s going to make the biggest difference here. Just look at the shift that took place because of GDPR.

So whether or not I read Zuboff’s new book, I’m going to continue my pragmatic approach this year. Meanwhile, I’ll continue to mute the microphone on the smart speakers in our house when they’re not being used, block trackers on my Android smartphone, and continue my monthly donations to the work of the Electronic Frontier Foundation and the Open Rights Group.

Source: J. Nathan Matias

Noise cancelling for cars is a no-brainer

We’re all familiar with noise cancelling headphones. I’ve got some that I use for transatlantic trips, and they’re great for minimising any repeating background noise.

Twenty years ago, when I was studying A-Level Physics, I was also building a new PC. I realised that, if I placed a microphone inside the computer case, and fed that into the audio input on the soundcard, I could use software to invert the sound wave and thus virtually eliminate fan noise. It worked a treat.

It doesn’t surprise me, therefore, to find that Bose, best known for its headphones, is offering car manufacturers something similar with “road noise control”:

With accelerometers, multiple microphones, and algorithms, it’s much more complicated than what I rigged up in my bedroom as a teenager. But the principle remains the same.
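As a rough illustration of that principle (a toy numpy sketch of phase inversion, not anything like Bose’s actual road noise control system, which adapts in real time to what its microphones and accelerometers pick up), here’s how inverting a waveform and summing it with the original cancels it almost completely:

```python
import numpy as np

# Toy demonstration of the core idea behind active noise cancellation:
# play back a copy of the noise with its phase inverted, so the two
# waveforms cancel when they combine.

sample_rate = 44_100                                 # CD-quality sampling rate
t = np.linspace(0, 1, sample_rate, endpoint=False)   # one second of samples

# Stand-in for a steady fan or road hum picked up by a microphone:
# a 120 Hz tone plus a quieter harmonic at 240 Hz.
noise = 0.8 * np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)

# The cancellation signal is simply the noise, inverted.
anti_noise = -noise

# What reaches the listener is the sum of the two.
residual = noise + anti_noise

print(f"Peak noise amplitude:    {np.max(np.abs(noise)):.3f}")
print(f"Peak residual amplitude: {np.max(np.abs(residual)):.3f}")  # effectively zero
```

The hard part in a car is doing this fast enough, and for noise that isn’t a neat repeating hum, which is where the accelerometers and adaptive algorithms come in.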

Source: The Next Web

Acoustic mirrors

On the beach at Druridge Bay in Northumberland, near where I live, there are large blocks at various intervals. These hulking pieces of concrete, now half-submerged, were deployed on seafronts up and down England to prevent the enemy from successfully landing tanks during the Second World War.

I was fascinated to find out that these aren’t the only concrete blocks that protected Britain. BBC News reports that ‘acoustic mirrors’ were installed for a very specific purpose:

More than 100 years ago acoustic mirrors along the coast of England were built with the intention of using them to detect the sound of approaching German zeppelins.

The concave concrete structures were designed to pick up sound waves from enemy aircraft, making it possible to predict their flight trajectory, giving enough time for ground forces to be alerted to defend the towns and cities of Britain.

Some of these, which vary in size, still exist, and have been photographed by Joe Pettet-Smith.

The reason most of us haven’t heard of them is that the technology improved so quickly. Pettet-Smith comments:

The sound mirror experiment, this idea of having a chain of concrete structures facing the Channel using sound to detect the flight trajectory of enemy aircraft, was just that – an experiment. They tried many different sizes and designs before the project was scrapped when radar was introduced.

The science was solid, but aircraft kept getting faster and quieter, which made them obsolete.

Fascinating. The historian (and technologist) within me loves this.

Source: BBC News