Confusing tech questions

    Today is the first day of the Consumer Electronics Show, or CES, in Las Vegas. Each year, tech companies showcase their latest offerings and concepts. Nilay Patel, Editor-in-Chief for The Verge, comments that, increasingly, the tech industry is built on a number of assumptions about consumers and human behaviour:

    [T]hink of the tech industry as being built on an ever-increasing number of assumptions: that you know what a computer is, that saying “enter your Wi-Fi password” means something to you, that you understand what an app is, that you have the desire to manage your Bluetooth device list, that you’ll figure out what USB-C dongles you need, and on and on.

    Lately, the tech industry is starting to make these assumptions faster than anyone can be expected to keep up. And after waves of privacy-related scandals in tech, the misconceptions and confusion about how things work are both greater and more reasonable than ever.

    I think this is spot-on. At Mozilla, and now at Moodle, I spend a good deal of my time among people who are more technically-minded than me. And, in turn, I’m more technically-minded than the general population. So what’s ‘obvious’ or ‘easy’ to developers feels like magic to the man or woman on the street.

    Patel keeps track of the questions his friends and family ask him, and has listed them in the post. The number one thing, he says, is that people assume their phones are listening to them and then serving up advertising based on what they ‘hear’. They don’t get that Facebook (and other platforms) use multiple data points to make inferences.

    I’ll not reproduce his list here, but here are three questions which I, too, get a lot from friends and family:

    “How do I make sure deleting photos from my iPhone won’t delete them from my computer?”

    “How do I keep track of what my kid is watching on YouTube?”

    “Why do I need to make another username and password?”

    As I was discussing with the MoodleNet team just yesterday, there’s a difference between treating users as ‘stupid’ (which they’re not) and ensuring that they don’t have to think too much when they’re using your product.

    Source: The Verge (via Orbital Operations)

    Configuring your iPhone for productivity (and privacy, security?)

    At an estimated read time of 70 minutes, this article is the longest I’ve seen on Medium! It includes a bunch of advice from ‘Coach Tony’, the CEO of Coach.me, about how he uses his iPhone, and perhaps how you should too:

    The iPhone could be an incredible tool, but most people use their phone as a life-shortening distraction device.

    However, if you take the time to follow the steps in this article you will be more productive, more focused, and — I’m not joking at all — live longer.

    Practically every iPhone setup decision has tradeoffs. I will give you optimal defaults and then trust you to make an adult decision about whether that default is right for you.

    As an aside, I appreciate the way he sets up different ways to read the post, from skimming the headlines through to reading the whole thing in-depth.

    The problem, though, is that for a post the author describes as a ‘very very complete’ guide to configuring your iPhone to ‘work for you, not against you’, it doesn’t go into enough depth about privacy and security for my liking. I’m kind of tired of people thinking that using a password manager and increasing your lockscreen password length is enough.

    For example, Coach Tony talks about basically going all-in on Google Cloud. When people point out the privacy concerns of doing this, he basically uses the tinfoil hat defence in response:

    Moving to the Google cloud does trade privacy for productivity. Google will use your data to advertise to you. However, this is a productivity article. If you wish it were a privacy article, then use Protonmail. Last, it’s not consistent that I have you turn off Apple’s ad tracking while then making yourself fully available to Google’s ad tracking. This is a tradeoff. You can turn off Apple’s tracking with zero downside, so do it. With Google, I think it’s worthwhile to use their services and then fight ads in other places. The Reader feature in Safari basically hides most Google ads that you’d see on your phone. On your computer, try an ad blocker.

    It's all very well saying that it's a productivity article rather than a privacy article, but it's 2018: you need to do both. Don't recommend things to people that give them gains in one area but cause them new problems in another.

    That being said, I appreciate Coach Tony’s focus on what I would call ‘notification literacy’. Perhaps read his article, ignore the bits where he suggests compromising your privacy, and follow his advice on configuring your device for a calmer existence.

     

    Source: Better Humans

    Is Google becoming more like Facebook?

    I’m composing this post on ChromeOS, which is a little bit hypocritical, but yesterday I was shocked to discover how much data I was ‘accidentally’ sharing with Google. Check it out for yourself by going to your Google account’s activity controls page.

    This article talks about how Google have become less trustworthy of late:

    [Google] announced a forthcoming update last Wednesday: Chrome’s auto-sign-in feature will still be the default behavior of Chrome. But you’ll be able to turn it off through an optional switch buried in Chrome’s settings.

    This pattern of behavior by tech companies is so routine that we take it for granted. Let’s call it “pulling a Facebook” in honor of the many times that Facebook has “accidentally” relaxed the privacy settings for user profile data, and then—following a bout of bad press coverage—apologized and quietly reversed course. A key feature of these episodes is that management rarely takes the blame: It’s usually laid at the feet of some anonymous engineer moving fast and breaking things. Maybe it’s just a coincidence that these changes consistently err in the direction of increasing “user engagement” and never make your experience more private.

    What’s new here, and is a very recent development indeed, is that we’re finally starting to see that this approach has costs. For example, it now seems like Facebook executives spend an awful lot of time answering questions in front of Congress. In 2017, when Facebook announced it had handed more than 80 million user profiles to the sketchy election strategy firm Cambridge Analytica, Facebook received surprisingly little sympathy and a notable stock drop. Losing the trust of your users, we’re learning, does not immediately make them flee your business. But it does matter. It’s just that the consequences are cumulative, like spending too much time in the sun.

    I'm certainly questioning my tech choices. And I've (re-)locked down my Google account.

    Source: Slate

    Tracking vs advertising

    We tend to use a word to denote something right up to the moment the term becomes untenable and someone has to invent a better one. Take mobile phones, for example. They’re literally named after the least-used app on there, so we’re crying out for a different way to refer to them. Perhaps a better name would be ‘trackers’.

    These days, most people use mobile devices for social networking. These are available free at the point of access, funded by what we’re currently calling ‘advertising’. However, as this author notes, it’s nothing of the sort:

    What we have today is not advertising. The amount of personally identifiable information companies have about their customers is absolutely perverse. Some of the world’s largest companies are in the business of selling your personal information for use in advertising. This might sound innocuous but the tracking efforts of these companies are so accurate that many people believe that Facebook listens to their conversations to serve them relevant ads. Even if it’s true that the microphone is not used, the sum of all other data collected is still enough to show creepily relevant advertising.

    Unfortunately, the author doesn’t seem to have come to the conclusion that it’s the logic of capitalism that got us here. Instead, he just points out that people’s privacy is being abused.

    [P]eople now get most of their information from social networks yet these networks dictate the order in which content is served to the user. Google makes the worlds most popular mobile operating system and it’s purpose is drive the company’s bottom line (ad blocking is forbidden). “Smart” devices are everywhere and companies are jumping over each other to put more shit in your house so they can record your movements and sell the information to advertisers. This is all a blatant abuse of privacy that is completely toxic to society.

    Agreed, and it's easy to feel a little helpless against this onslaught. While it's great to have a list of things that users can do, if those things are difficult to implement and/or hard to understand, then it's an uphill battle.

    That being said, he makes three concrete suggestions:

    To combat this trend, I have taken the following steps and I think others should join the movement:
    • Aggressively block all online advertisements
    • Don’t succumb to the “curated” feeds
    • Not every device needs to be “smart”
    I feel I'm already way ahead of the author in this regard:
    • Aggressively block all online advertisements
    • Don’t succumb to the “curated” feeds
      • I quit Facebook years ago, haven't got an Instagram account, and pretty much only post links to my own spaces on Twitter and LinkedIn.
    • Not every device needs to be “smart”
      • I don't really use my Philips Hue lights, and don't have an Amazon Alexa — or even the Google Assistant on my phone.
    It's not easy to stand up to Big Tech. The amount of money they pour into things makes their 'innovations' seem inevitable. They can afford to make things cheap and frictionless so you get hooked.

    As an aside, it’s interesting to note that those that previously defended Apple as somehow ‘different’ on privacy, despite being the world’s most profitable company, are starting to backtrack.

    Source: Nicholas Rempel

    Nobody is ready for GDPR

    As a small business owner and co-op founder, GDPR applies to me as much as everyone else. It’s a massive ballache, but I support the philosophy behind what it’s trying to achieve.

    After four years of deliberation, the General Data Protection Regulation (GDPR) was officially adopted by the European Union in 2016. The regulation gave companies a two-year runway to get compliant, which is theoretically plenty of time to get shipshape. The reality is messier. Like term papers and tax returns, there are people who get it done early, and then there’s the rest of us.

    I'm definitely in "the rest of us" camp, meaning that, over the last week or so, my wife and I have spent time figuring stuff out. The main thing is getting a process in place. Different things are going to affect different organisations, well, differently.

    But perhaps the GDPR requirement that has everyone tearing their hair out the most is the data subject access request. EU residents have the right to request access to review personal information gathered by companies. Those users — called “data subjects” in GDPR parlance — can ask for their information to be deleted, to be corrected if it’s incorrect, and even get delivered to them in a portable form. But that data might be on five different servers and in god knows how many formats. (This is assuming the company even knows that the data exists in the first place.) A big part of becoming GDPR compliant is setting up internal infrastructures so that these requests can be responded to.

    A data subject access request isn't going to affect our size of business very much. If someone does make a request, we've got a list of places from which to manually export the data. That's obviously not a viable option for larger enterprises, who need to automate.
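    For a business our size, that manual export process amounts to little more than a checklist. Here's a minimal sketch of bundling one person's data into a portable format; the service names and export helpers are hypothetical, invented purely for illustration:

```python
import json

# Hypothetical list of the places where we hold personal data, each with
# a function that exports one person's records as a plain dict.
DATA_STORES = {
    "mailing_list": lambda email: {"subscribed": True, "email": email},
    "crm": lambda email: {"notes": "met at conference", "email": email},
    "invoicing": lambda email: {"invoices": [], "email": email},
}

def subject_access_export(email: str) -> str:
    """Gather one data subject's records from every store into a
    single portable JSON document (GDPR's data-portability right)."""
    bundle = {name: export(email) for name, export in DATA_STORES.items()}
    return json.dumps(bundle, indent=2)

print(subject_access_export("alice@example.com"))
```

    The point is less the code than the discipline: knowing every store you hold data in, so a request can be answered completely.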

    To be fair, GDPR as a whole is a bit complicated. Alison Cool, a professor of anthropology and information science at the University of Colorado, Boulder, writes in The New York Times that the law is “staggeringly complex” and practically incomprehensible to the people who are trying to comply with it. Scientists and data managers she spoke to “doubted that absolute compliance was even possible.”

    To my mind, GDPR is like a much more far-reaching version of the Freedom of Information Act 2000. That changed the nature of what citizens could expect from public bodies. I hope that the GDPR similarly changes what we all can expect from organisations that process our personal data.

    Source: The Verge

    Designing for privacy

    Someone described the act of watching Mark Zuckerberg, CEO of Facebook, testifying before Congress as “low level self-harm”. In this post, Joe Edelman explains why:

    Zuckerberg and the politicians—they imagine privacy as if it were a software feature. They imagine a system has “good privacy” if it’s consensual and configurable; that is, if people explicitly agree to something, and understand what they agree to, that’s somehow “good for privacy”. Even usually-sophisticated-analysts like Zeynep Tufekci are missing all the nuance here.

    Giving the example of a cocktail party where you're talking to a friend about something confidential and someone else you don't know comes along, Edelman introduces this definition of privacy:
    Privacy, n. Maintaining a sense of what to show in each environment; Locating social spaces for aspects of yourself which aren’t ready for public display, where you can grow those parts of yourself until they can be more public.
    I really like this definition, especially the part around "locating social spaces for aspects of yourself which aren't ready for public display". I think educators in particular should note this.

    Referencing his HSC1 Curriculum which is the basis for workshops he runs for staff from major tech companies, Edelman includes a graphic on the structural features of privacy. I’ll type this out here for the sake of legibility:

    • Relational depth (close friends / acquaintances / strangers / anonymous / mixed)
    • Presentation (crafted / basic / disheveled)
    • Connectivity (transient / pairwise / whole-group)
    • Stakes (high / low)
    • Status levels (celebrities / rank / flat)
    • Reliance (interdependent / independent)
    • Time together (none / brief / slow)
    • Audience size (big / small / unclear)
    • Audience loyalty (loyal / transient / unclear)
    • Participation (invited / uninvited)
    • Pretext (shared goal / shared values / shared topic / many goals (exchange) / emergent)
    • Social Gestures (like / friend / follow / thank / review / comment / join / commit / request / buy)
    The post is, of course, both an expert response to the zeitgeist, and a not-too-subtle hint that people should take his course. I'm sure Edelman goes into more depth about each of these structural features in his workshops.

    Nevertheless, and even without attending his sessions (which I’m sure are great) there’s value in thinking through each of these elements for the work I’m doing around the MoodleNet project. I’ve probably done some thinking around 70% of these, but it’s great to have a list that helps me organise my thinking a little more.

    Source: Joe Edelman

    Every part of your digital life is being tracked, packaged up, and sold

    I’ve just installed Lumen Privacy Monitor on my Android smartphone after reading this blog post from Mozilla:

    New research co-authored by Mozilla Fellow Rishab Nithyanand explores just this: The opaque realm of third-party trackers and what they know about us. The research is titled “Apps, Trackers, Privacy, and Regulators: A Global Study of the Mobile Tracking Ecosystem,” and is authored by researchers at Stony Brook University, Data & Society, IMDEA Networks, ICSI, Princeton University, Corelight, and the University of Massachusetts Amherst.

    [...]

    In all, the team identified 2,121 trackers — 233 of which were previously unknown to popular advertising and tracking blacklists. These trackers collected personal data like Android IDs, phone numbers, device fingerprints, and MAC addresses.

    The full report is linked in the quotation above, but the high-level findings were:

    » Most trackers are owned by just a few parent organizations. The authors report that sixteen of the 20 most pervasive trackers are owned by Alphabet. Other parent organizations include Facebook and Verizon. “There is a clear oligopoly happening in the ecosystem,” Nithyanand says.

    » Mobile games and educational apps are the two categories with the highest number of trackers. Users of news and entertainment apps are also exposed to a wide range of trackers. In a separate paper co-authored by Vallina-Rodriguez, he explores the intersection of mobile tracking and apps for youngsters: “Is Our Children’s Apps Learning?”

    » Cross-device tracking is widespread. The vast majority of mobile trackers are also active on the desktop web, allowing companies to link together personal data produced in both ecosystems. “Cross-platform tracking is already happening everywhere,” Nithyanand says. “Fifteen of the top 20 organizations active in the mobile advertising space also have a presence in the web advertising space.”
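    The cross-device linking described above can be illustrated with a toy example: two event logs, one from a mobile SDK and one from web cookies, joined on a shared identifier such as a hashed email address. All the records here are invented:

```python
import hashlib

def hashed_id(email: str) -> str:
    # A truncated hash of an email makes a stable cross-platform identifier.
    return hashlib.sha256(email.encode()).hexdigest()[:12]

mobile_events = [
    {"id": hashed_id("alice@example.com"), "device": "android", "app": "news"},
    {"id": hashed_id("bob@example.com"), "device": "ios", "app": "game"},
]
web_events = [
    {"id": hashed_id("alice@example.com"), "browser": "firefox", "site": "shop.example"},
]

# Group every event by identifier; anyone appearing in both logs is
# 'linked' across devices.
profiles = {}
for event in mobile_events + web_events:
    profiles.setdefault(event["id"], []).append(event)

linked = {uid: events for uid, events in profiles.items() if len(events) > 1}
```

    Real trackers use far messier signals (IP addresses, device fingerprints, login state), but the join itself is this simple.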

    We're finally getting to the stage where a large portion of the population can't really ignore the fact that they're using free services in return for pervasive, always-on surveillance.

    Source: Mozilla: Read, Write, Participate

    Survival in the age of surveillance

    The Guardian has a list of 18 tips to ‘survive’ (i.e. be safe) in an age where everyone wants to know everything about you — so that they can package up your data and sell it to the highest bidder.

    On the internet, the adage goes, nobody knows you’re a dog. That joke is only 15 years old, but seems as if it is from an entirely different era. Once upon a time the internet was associated with anonymity; today it is synonymous with surveillance. Not only do modern technology companies know full well you’re not a dog (not even an extremely precocious poodle), they know whether you own a dog and what sort of dog it is. And, based on your preferred category of canine, they can go a long way to inferring – and influencing – your political views.

    Mozilla has pointed out in a recent blog post that the containers feature in Firefox can increase your privacy and prevent 'leakage' between tabs as you navigate the web. But there's more to privacy and security than just that.

    Here’s the Guardian’s list:

    1. Download all the information Google has on you.
    2. Try not to let your smart toaster take down the internet.
    3. Ensure your AirDrop settings are dick-pic-proof.
    4. Secure your old Yahoo account.
    5. 1234 is not an acceptable password.
    6. Check if you have been pwned.
    7. Be aware of personalised pricing.
    8. Say hi to the NSA guy spying on you via your webcam.
    9. Turn off notifications for anything that’s not another person speaking directly to you.
    10. Never put your kids on the public internet.
    11. Leave your phone in your pocket or face down on the table when you’re with friends.
    12. Sometimes it’s worth just wiping everything and starting over.
    13. An Echo is fine, but don’t put a camera in your bedroom.
    14. Have as many social-media-free days in the week as you have alcohol-free days.
    15. Retrain your brain to focus.
    16. Don’t let the algorithms pick what you do.
    17. Do what you want with your data, but guard your friends’ info with your life.
    18. Finally, remember your privacy is worth protecting.
    A bit of a random list in places, but useful all the same.

    Source: The Guardian

    The only privacy policy that matters is your own

    Dave Pell writes NextDraft, a daily newsletter that’s one of the most popular on the web. I used to subscribe, and it’s undeniably brilliant, but a little US-centric for my liking.

    My newsletter, Thought Shrapnel, doesn’t track you. In fact, I have to keep battling MailChimp (the platform I use to send it out) as it thinks I’ve made a mistake. Tracking is so pervasive but I have no need to know exactly how many people clicked on a particular link. It’s an inexact science, anyway.

    Pell has written a great post about online privacy:

    The story of Cambridge Analytica accessing your personal data on Facebook, supposedly creating a spot-on psychographic profile, and then weaponizing your own personality against you with a series of well-worded messages is now sweeping the media. And it will get louder. And it will pass. And then, I promise, there will be another story about your data being stolen, borrowed, hacked, misused, shared, bought, sold and on and on.

    He points out the disconnect between rich people such as Mark Zuckerberg, CEO of Facebook, going to "great lengths" to protect his privacy, whilst simultaneously depriving Facebook users of theirs.

    They are right to want privacy. They are right to want to keep their personal lives walled off from anyone from nosy neighbors to potential thieves to, well, Matt Richtel. They should lock their doors and lock down their information. They are right not to want you to know where they live, with whom they live, or how much they spend. They’re right to want to plug a cork in the social media champagne bottle we’ve shaken up in our blind celebration of glass houses.

    They are right not to want to toss the floor planks that represent their last hint of personal privacy into the social media wood chipper. They are right in their unwillingness to give in to the seeming inevitability of the internet sharing machine. Do you really think it’s a coincidence that most of the buttons you press on the web are labeled with the word submit?

    A Non-Disclosure Agreement (NDA) is something that's been in the news recently as Donald Trump has taken his shady business practices to the White House. Pell notes that the principle behind NDAs is nevertheless sound: you don't get to divulge my personal details without my permission.

    So you should follow their lead. Don’t do what they say. Do what they do. Better yet, do what they NDA.

    [...]

    There’s a pretty simple rule: never share anything on any site anywhere on the internet regardless of any privacy settings unless you are willing to accept that the data might one day be public.

    The only privacy policy that matters is your own.

    Source: Dave Pell

    GDPR, blockchain, and privacy

    I’m taking an online course about the impending General Data Protection Regulation (GDPR), which I’ve been writing about on my personal blog. An article in WIRED talks about the potential impact it will have, along with technologies such as blockchain.

    People have talked about everyone having ‘private data accounts’ which they then choose to hook up to service providers for years. GDPR might just force that to happen:

    A new generation of apps and websites will arise that use private-data accounts instead of conventional user accounts. Internet applications in 2018 will attach themselves to these, gaining access to a smart data account rich with privately held contextual information such as stress levels (combining sleep patterns, for example, with how busy a user's calendar is) or motivation to exercise (comparing historical exercise patterns to infer about the day ahead). All of this will be possible without the burden on the app supplier of undue sensitive data liability or any violation of consumers' personal rights.
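    The kind of contextual inference the quotation describes (combining sleep patterns with calendar busyness) can be sketched as a toy scoring function; the weights and thresholds here are entirely invented:

```python
def stress_score(hours_slept: float, meetings_today: int) -> float:
    """Return a 0-1 stress estimate: less sleep and more meetings -> higher."""
    sleep_deficit = max(0.0, 8.0 - hours_slept) / 8.0   # 0 when fully rested
    calendar_load = min(meetings_today, 10) / 10.0      # saturates at 10 meetings
    return round(0.6 * sleep_deficit + 0.4 * calendar_load, 2)

assert stress_score(8.0, 0) == 0.0
assert stress_score(4.0, 10) > stress_score(7.0, 2)
```

    The privacy point is that, under the private-data-account model, a computation like this runs against data the user controls, rather than data the app provider has hoovered up.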

    As the article points out, when we know what's going to happen with our data, we're probably more likely to share it. For example, I'm much more likely to invest in voice-assisted technologies once GDPR hits in May:

    Paradoxically, the internet will become more private at a moment when we individuals begin to exchange more data. We will then wield a collective economic power that could make 2018 the year we rebalance the digital economy.

    This will have a huge effect on our everyday information landscape:

    The more we share data on our terms, the more the internet will evolve to emulate the physical domain where private spaces, commercial spaces and community spaces can exist separately, but side by side. Indeed, private-data accounts may be the first step towards the internet as a civil society, paving the way for a governing system where digital citizens, in the form of their private micro-server data account, do not merely have to depend on legislation to champion their private rights, but also have the economic power to enforce them as well.

    I have to say, the more I discover about the provisions of GDPR, the more excited and optimistic I am about the future.

    Source: WIRED

    No cash, no freedom?

    The ‘cashless’ society, eh?

    Every time someone talks about getting rid of cash, they are talking about getting rid of your freedom. Every time they actually limit cash, they are limiting your freedom. It does not matter if the people doing it are wonderful Scandinavians or Hindu supremacist Indians, they are people who want to know and control what you do to an unprecedentedly fine-grained scale.

    Yep, just because someone cool is doing it doesn't mean it won't have bad consequences. In the rush to add technology to things, we create future dystopias.

    Cash isn’t completely anonymous. There’s a reason why old fashioned crooks with huge cash flows had to money-launder: Governments are actually pretty good at saying, “Where’d you get that from?” and getting an explanation. Still, it offers freedom, and the poorer you are, the more freedom it offers. It also is very hard to track specifically, i.e., who made what purchase.

    Blockchains won’t be untaxable. The ones which truly are unbreakable will be made illegal; the ones that remain, well, it’s a ledger with every transaction on it, for goodness sakes.

    It’s this bit that concerns me:

    We are creating a society where even much of what you say, will be knowable and indeed, may eventually be tracked and stored permanently.

    If you do not understand why this is not just bad, but terrible, I cannot explain it to you. You have some sort of mental impairment of imagination and ethics.

    Source: Ian Welsh

    The NSA (and GCHQ) can find you by your 'voiceprint' even if you're speaking a foreign language on a burner phone

    This is pretty incredible:

    Americans most regularly encounter this technology, known as speaker recognition, or speaker identification, when they wake up Amazon’s Alexa or call their bank. But a decade before voice commands like “Hello Siri” and “OK Google” became common household phrases, the NSA was using speaker recognition to monitor terrorists, politicians, drug lords, spies, and even agency employees.

    The technology works by analyzing the physical and behavioral features that make each person’s voice distinctive, such as the pitch, shape of the mouth, and length of the larynx. An algorithm then creates a dynamic computer model of the individual’s vocal characteristics. This is what’s popularly referred to as a “voiceprint.” The entire process — capturing a few spoken words, turning those words into a voiceprint, and comparing that representation to other “voiceprints” already stored in the database — can happen almost instantaneously. Although the NSA is known to rely on finger and face prints to identify targets, voiceprints, according to a 2008 agency document, are “where NSA reigns supreme.”

    Hmmm….

    The voice is a unique and readily accessible biometric: Unlike DNA, it can be collected passively and from a great distance, without a subject’s knowledge or consent. Accuracy varies considerably depending on how closely the conditions of the collected voice match those of previous recordings. But in controlled settings — with low background noise, a familiar acoustic environment, and good signal quality — the technology can use a few spoken sentences to precisely match individuals. And the more samples of a given voice that are fed into the computer’s model, the stronger and more “mature” that model becomes.
    So yeah, let's put a microphone in every room of our house so that we can tell Alexa to turn off the lights. What could possibly go wrong?
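    The matching step described in the quotation (turn speech into a representation, compare it against stored voiceprints) can be illustrated with a toy cosine-similarity matcher. The feature vectors below are made up; real systems derive them from acoustic models of pitch, vocal tract shape, and so on:

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, 0 means unrelated.
    return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

# Hypothetical enrolled 'voiceprints', one vector per known speaker.
ENROLLED = {
    "speaker_a": [0.9, 0.1, 0.3],
    "speaker_b": [0.2, 0.8, 0.5],
}

def identify(sample, threshold=0.95):
    """Return the best-matching enrolled speaker, or None if nothing
    is similar enough."""
    best = max(ENROLLED, key=lambda name: cosine(sample, ENROLLED[name]))
    return best if cosine(sample, ENROLLED[best]) >= threshold else None
```

    Note that the phone number, language, or channel doesn't matter here: only the voice's characteristics do, which is exactly why burner phones offer no protection against it.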

    Source: The Intercept

    DuckDuckGo moves beyond search

    This is excellent news:

    Today we’re taking a major step to simplify online privacy with the launch of fully revamped versions of our browser extension and mobile app, now with built-in tracker network blocking, smarter encryption, and, of course, private search – all designed to operate seamlessly together while you search and browse the web. Our updated app and extension are now available across all major platforms – Firefox, Safari, Chrome, iOS, and Android – so that you can easily get all the privacy essentials you need on any device with just one download.
    I have a multitude of blockers installed, which makes it difficult to recommend just one to people. Hopefully this will simplify things:

    For the last decade, DuckDuckGo has been giving you the ability to search privately, but that privacy was only limited to our search box. Now, when you also use the DuckDuckGo browser extension or mobile app, we will provide you with seamless privacy protection on the websites you visit. Our goal is to expand this privacy protection over time by adding even more privacy features into this single package. While not all privacy protection can be as seamless, the essentials available today and those that we will be adding will go a long way to protecting your privacy online, without compromising your Internet experience.

    It looks like the code is all open source, too! 👏 👏 👏
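    The core of tracker blocking is conceptually simple: match each outgoing request's hostname against a blocklist, treating subdomains of a blocked domain as blocked too. A minimal sketch, using placeholder domains rather than any real blocklist:

```python
# Placeholder blocklist of tracker domains (not DuckDuckGo's actual list).
BLOCKLIST = {"tracker.example", "ads.example", "metrics.example"}

def is_blocked(hostname: str) -> bool:
    """True if the hostname, or any parent domain of it, is on the list."""
    parts = hostname.lower().split(".")
    # Check the hostname itself and every suffix: cdn.tracker.example
    # matches via its parent domain tracker.example.
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))
```

    The hard part in practice isn't the matching, it's curating the list, which is why having it maintained and bundled for you is genuinely useful.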

    Source: DuckDuckGo blog

    WTF is GDPR?

    I have to say, I was quite dismissive of the impact of the EU’s General Data Protection Regulation (GDPR) when I first heard about it. I thought it was going to be another debacle like the ‘this website uses cookies’ thing.

    However, I have to say I’m impressed with what’s going to happen in May. It’s going to have a worldwide impact, too — as this article explains:

    For an even shorter tl;dr the [European Commission's] theory is that consumer trust is essential to fostering growth in the digital economy. And it thinks trust can be won by giving users of digital services more information and greater control over how their data is used. Which is — frankly speaking — a pretty refreshing idea when you consider the clandestine data brokering that pervades the tech industry. Mass surveillance isn’t just something governments do.

    It’s a big deal:

    [GDPR is] set to apply across the 28-Member State bloc as of May 25, 2018. That means EU countries are busy transposing it into national law via their own legislative updates (such as the UK’s new Data Protection Bill — yes, despite the fact the country is currently in the process of (br)exiting the EU, the government has nonetheless committed to implementing the regulation because it needs to keep EU-UK data flowing freely in the post-brexit future). Which gives an early indication of the pulling power of GDPR.

    …and, unlike other regulations, it actually has some teeth:

    The maximum fine that organizations can be hit with for the most serious infringements of the regulation is 4% of their global annual turnover (or €20M, whichever is greater). Though data protection agencies will of course be able to impose smaller fines too. And, indeed, there’s a tiered system of fines — with a lower level of penalties of up to 2% of global turnover (or €10M).
    I'm having conversations about it wherever I go, from my work at Moodle (an company headquartered in Australia) to the local Scouts.
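    That tiered fine structure boils down to simple arithmetic. A minimal sketch (the function name is mine, purely for illustration, and this is obviously not legal advice):

    ```python
    # Illustrative sketch of GDPR's two fine tiers as described above:
    # upper tier: 4% of global annual turnover or €20M, whichever is greater;
    # lower tier: 2% of global annual turnover or €10M, whichever is greater.

    def max_gdpr_fine(global_turnover_eur: float, serious: bool = True) -> float:
        """Return the maximum possible fine for an organisation, in euros."""
        if serious:
            return max(0.04 * global_turnover_eur, 20_000_000)
        return max(0.02 * global_turnover_eur, 10_000_000)

    # A company turning over €1bn faces up to €40M for a serious infringement:
    print(max_gdpr_fine(1_000_000_000))  # 40000000.0
    ```

    Note that the flat €20M/€10M floors mean even a small organisation can, in principle, face a very large maximum fine.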

    Source: TechCrunch

    Choose your connected silo

    The Verge reports back from CES, the yearly gathering where people usually get excited about shiny things. This year, however, people are a bit more wary…

    And it’s not just privacy and security that people need to think about. There’s also lock-in. You can’t just buy a connected gadget, you have to choose an ecosystem to live in. Does it work with HomeKit? Will it work with Alexa? Will some tech company get into a spat with another tech company and pull its services from that hardware thing you just bought?
    In other words, the kind of digital literacies required by the average consumer just went up a notch.

    Here’s the thing: it’s unlikely that the connected toothpaste will go back in the tube at this point. Consumer products will be more connected, not less. Some day not long from now, the average person’s stroll down the aisle at Target or Best Buy will be just like our experiences at futuristic trade shows: everything is connected, and not all of it makes sense.

    It won't be long before we'll be inviting techies around to debug our houses...

    Source: The Verge

    It doesn't matter if you don't use AI assistants if everyone else does

    Email is an awesome system. It’s open, decentralised, and you can pick whichever provider you want. The trouble is, of course, that if you decide you don’t want a certain company, say Google, to read your emails, you only control half of the equation. In other words, it doesn’t matter that you don’t use GMail if most of your contacts do.

    The same is true of AI assistants. You might not want an Amazon Echo device in your house, but you don’t spend all your life at home:

    Amazon wants to bring Alexa to more devices than smart speakers, Fire TV and various other consumer electronics for the home, like alarm clocks. The company yesterday announced developer tools that would allow Alexa to be used in microwave ovens, for example – so you could just tell the oven what to do. Today, Amazon is rolling out a new set of developer tools, including one called the “Alexa Mobile Accessory Kit,” that would allow Alexa to work with Bluetooth products in the wearable space, like headphones, smartwatches, fitness trackers, other audio devices, and more.
    The future isn't pre-ordained. We get to choose the society and culture in which we'd like to live. Huge, for-profit companies having listening devices everywhere sounds dystopian to me.

    Source: TechCrunch

    Privacy-based browser extensions

    I visit Product Hunt on a regular basis. While there are plenty of examples of hyped apps and services that don’t last six months, there are also some gems in there, especially in the Open Source section!

    The site has a Q&A section where, this week, I unearthed a great thread about privacy-based browser extensions. The top ones were:

    The comments and shared experiences are particularly useful. Remember, the argument that you don’t need privacy because you’ve got nothing to hide is like saying you don’t need free speech because you’ve got nothing to say…

    Source: Product Hunt

    Data-driven society: utopia or dystopia?

    Good stuff from (Lord) Jim Knight, who cites part of his speech in the House of Lords about data privacy:

    The use of data to fuel our economy is critical. The technology and artificial intelligence it generates has a huge power to enhance us as humans and to do good. That is the utopia we must pursue. Doing nothing heralds a dystopian outcome, but the pace of change is too fast for us legislators, and too complex for most of us to fathom. We therefore need to devise a catch-all for automated or intelligent decisioning by future data systems. Ethical and moral clauses could and should, I argue, be forced into terms of use and privacy policies.

    Jim’s a great guy, and went out of his way to help me in 2017. It’s great to have someone with his ethics and clout in a position of influence.

    Source: Medium

    It's called Echo for a reason

    That last-minute Christmas gift sounds like nothing but unadulterated fun after reading this, doesn’t it?

    It is a significant thing to allow a live microphone in your private space (just as it is to allow them in our public spaces). Once the hardware is in place, and receiving electricity, and connected to the Internet, then you’re reduced to placing your trust in the hands of two things that unfortunately are less than reliable these days: 1) software, and 2) policy.

    Software, once a mic is in place, governs when that microphone is live, when the audio it captures is transmitted over the Internet, and to whom it goes. Many devices are programmed to keep their microphones on at all times but only record and transmit audio after hearing a trigger phrase—in the case of the Echo, for example, “Alexa.” Any device that is to be activated by voice alone must work this way. There are a range of other systems. Samsung, after a privacy dust-up, assured the public that its smart televisions (like others) only record and transmit audio after the user presses a button on its remote control. The Hello Barbie toy only picks up and transmits audio when its user presses a button on the doll.

    Software is invisible, however. Most companies do not make their code available for public inspection, and it can be hacked, or unscrupulous executives can lie about what it does (think Volkswagen), or government agencies might try to order companies to activate them as a surveillance device.
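    The trigger-phrase gating the article describes — a microphone that hears everything but only records and transmits after the wake word — can be sketched roughly like this (the names and the word-based simplification are mine, purely for illustration):

    ```python
    # Rough sketch of wake-word gating: the device processes all incoming
    # audio, but only captures for transmission after hearing the trigger.
    # Real devices work on audio frames; words stand in for them here.

    WAKE_WORD = "alexa"

    def gated_capture(audio_words, window=3):
        """Yield only the short utterance spoken after the wake word."""
        captured = []
        triggered = False
        for word in audio_words:
            if triggered:
                captured.append(word)
                if len(captured) == window:   # stop after a short utterance
                    yield list(captured)
                    captured.clear()
                    triggered = False
            elif word.lower() == WAKE_WORD:   # the mic hears everything,
                triggered = True              # but acts only on the trigger

    stream = ["what", "a", "day", "alexa", "play", "some", "music"]
    for utterance in gated_capture(stream):
        print(utterance)  # ['play', 'some', 'music']
    ```

    The point the ACLU piece makes is that the `if triggered` branch is pure software: nothing in the hardware stops that condition from being changed, remotely or otherwise.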

    I sincerely hope that policy makers pay heed to the recommendations section, especially given the current ‘Wild West’ state of affairs described in the article.

    Source: ACLU
