Does the world need interactive emails?
I’m on the fence about this. On the one hand, email is an absolute bedrock of the internet, a common federated standard that we can rely upon independent of technological factionalism. On the other hand, so long as interactivity is built into a standard others can adopt, it could be pretty cool.
The author of this article really doesn’t like Google’s idea of extending AMP (Accelerated Mobile Pages) to the inbox:
See, email belongs to a special class. Nobody really likes it, but it’s the way nobody really likes sidewalks, or electrical outlets, or forks. It’s not that there’s something wrong with them. It’s that they’re mature, useful items that do exactly what they need to do. They’ve transcended the world of likes and dislikes.

Fair enough, but as a total convert to Google's 'Inbox' app both on the web and on mobile, I don't think we can stop innovation in this area:
Emails are static because messages are meant to be static. The entire concept of communication via the internet is based around the telegraphic model of exchanging one-way packets with static payloads, the way the entire concept of a fork is based around piercing a piece of food and allowing friction to hold it in place during transit.

Are messages 'meant to be static'? I'm not so sure. Books were 'meant to' be paper-based until ebooks came along, and now there's all kinds of things we can do with ebooks that we can't do with their dead-tree equivalents.
Why do this? Are we running out of tabs? Were people complaining that clicking “yes” on an RSVP email took them to the invitation site? Were they asking to have a video chat window open inside the email with the link? No. No one cares. No one is being inconvenienced by this aspect of email (inbox overload is a different problem), and no one will gain anything by changing it.

Although it's an entertaining read, if 'why do this?' is the only argument the author, Devin Coldewey, has got against an attempted innovation in this space, then my answer would be why not? Although Coldewey points to the shutdown of Google Reader as an example of Google 'forcing' everyone to move to algorithmic news feeds, I'm not sure things are, and were, as simple as that.
It sounds a little simplistic to say so, but people either like and value something and therefore use it, or they don’t. We who like and uphold standards need to remember that, instead of thinking about what people and organisations should and shouldn’t do.
Source: TechCrunch
The Kano model
Using the example of the customised home page that Flickr introduced in its early days, this article helps break down how to delight users:
Years ago, we came across the work of Noriaki Kano, a Japanese expert in customer satisfaction and quality management. In studying his writing, we learned about a model he created in the 1980s, known as the Kano Model.

The article does a great job of explaining how you can implement great features that nevertheless don't get users particularly excited:
Capabilities that users expect will frustrate those users when they don’t work. However, when they work well, they don’t delight those users. A basic expectation, at best, can reach a neutral satisfaction point where it, in essence, becomes invisible to the user.

So it’s a process of continual improvement, and marginal gains in some areas:

Try as it might, Google’s development team can only reduce the file-save problems to the point of it working 100% of the time. However, users will never say, “Google Docs is an awesome product because it saves my documents so well.” They just expect files to always be saved correctly.
One of the predictions that the Kano Model makes is that once customers become accustomed to excitement generator features, those features are not as delightful. The features initially become part of the performance payoff and then eventually migrate to basic expectations.

Lots to think about here, particularly with Project MoodleNet.
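The three categories are easier to hold in mind with a toy sketch. The curve shapes below follow the article's descriptions; the function, numbers, and scale are my own invention, not anything from Kano or the article:

```python
# Toy illustration of the Kano Model's three feature categories.
# Curve shapes follow the article's descriptions; numbers are invented.

def satisfaction(category: str, execution: float) -> float:
    """Rough satisfaction score in [-1, 1] for a feature, given how
    well it is executed (0.0 = absent/broken, 1.0 = flawless)."""
    if category == "basic":
        # Basic expectations: failure frustrates; flawless execution is
        # merely neutral ("invisible to the user"), never delightful.
        return execution - 1.0
    if category == "performance":
        # Performance payoffs: satisfaction scales with execution.
        return 2.0 * execution - 1.0
    if category == "excitement":
        # Excitement generators: absence costs nothing, presence delights.
        return execution ** 0.5
    raise ValueError(f"unknown category: {category}")

# The migration effect: yesterday's delighter (Flickr's custom home page,
# Google Docs autosave) drifts down this list towards a basic expectation.
for category in ("excitement", "performance", "basic"):
    print(f"{category:11s} broken={satisfaction(category, 0.0):+.1f} "
          f"flawless={satisfaction(category, 1.0):+.1f}")
```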
Source: UIE
Is the gig economy the mass exploitation of millennials?
The answer is, “yes, probably”.
The 'sharing economy' and 'gig economy' are nothing of the sort. They're a problematic and highly disingenuous way for employers to not care about the people who create value in their business.

If the living wage is a pay scale calculated to be that of an appropriate amount of money to pay a worker so they can live, how is it possible, in a legal or moral sense to pay someone less? We are witnessing a concerted effort to devalue labour, where the primary concern of business is profit, not the economic wellbeing of its employees.
The problem, of course, is late-stage capitalism:

The employer washes their hands of the worker. Their immediate utility is the sole concern. From a profit point of view, absolutely we can appreciate the logic. However, we forget that the worker also exists as a member of society, and when business is allowed to use and exploit people in this manner, we endanger societal cohesiveness.
And the alternative? Co-operation.

The neoliberal project has encouraged us to adopt a hyper-individualistic approach to life and work. For all the speak of teamwork, in this economy the individual reigns supreme and it is destroying young workers. The present system has become unfeasible. The neoliberal project needs to be reeled back in. The free market needs a firm hand because the invisible one has lost its grip.
Source: The Irish Times
Humans are not machines
Can we teach machines to be ‘fully human’? It’s a fascinating question, as it makes us think carefully about what it actually means to be a human being.
Humans aren’t just about inputs and outputs. There are some things that we ‘know’ in different ways. Take music, for example.
In philosophy, it’s common to describe the mind as a kind of machine that operates on a set of representations, which serve as proxies for worldly states of affairs, and get recombined ‘offline’ in a manner that’s not dictated by what’s happening in the immediate environment. So if you can’t consciously represent the finer details of a guitar solo, the way is surely barred to having any grasp of its nuances. Claiming that you have a ‘merely visceral’ grasp of music really amounts to saying that you don’t understand it at all. Right?

There are activities we do and actions we perform that aren't the result of conscious thought. What status do we give them?
Getting swept up in a musical performance is just one among a whole host of familiar activities that seem less about computing information, and more about feeling our way as we go: selecting an outfit that’s chic without being fussy, avoiding collisions with other pedestrians on the pavement, or adding just a pinch of salt to the casserole. If we sometimes live in the world in a thoughtful and considered way, we go with the flow a lot, too.

What sets humans apart from animals is the ability to plan and to pay attention to abstract things and ideas:
Now, the world contains many things that we can’t perceive. I am unlikely to find a square root in my sock drawer, or to spot the categorical imperative lurking behind the couch. I can, however, perceive concrete things, and work out their approximate size, shape and colour just by paying attention to them. I can also perceive events occurring around me, and get a rough idea of their duration and how they relate to each other in time. I hear that the knock at the door came just before the cat leapt off the couch, and I have a sense of how long it took for the cat to sidle out of the room.

Time is one of the most abstract of the day-to-day things we deal with as humans:
Our conscious experience of time is philosophically puzzling. On the one hand, it’s intuitive to suppose that we perceive only what’s happening right now. But on the other, we seem to have immediate perceptual experiences of motion and change: I don’t need to infer from a series of ‘still’ impressions of your hand that it is waving, or work out a connection between isolated tones in order to hear a melody. These intuitions seem to contradict each other: how can I perceive motion and change if I am only really conscious of what’s occurring now? We face a choice: either we don’t really perceive motion and change, or the now of our perception encompasses more than the present instant – each of which seems problematic in its own way. Philosophers such as Franz Brentano and Edmund Husserl, as well as a host of more recent commentators, have debated how best to solve the dilemma.

So where does that leave us in terms of the differences between humans and machines?
Human attempts at making sense of the world often involve representing, calculating and deliberating. This isn’t the kind of thing that typically goes on in the 55 Bar, nor is it necessarily happening in the Lutheran church just down the block, or on a muddy football pitch in a remote Irish village. But gathering to make music, play games or engage in religious worship are far from being mindless activities. And making sense of the world is not necessarily just a matter of representing it.

To me, that last sentence is key: the world isn't just representations. It's deeper and more visceral than that.
Source: Aeon
Legislating against manipulated 'facts' is a slippery slope
In this day and age it’s hard to know who to trust. I was raised to trust in authority, but was particularly struck, when I did a deep-dive into Vinay Gupta’s blog, by the idea that the state is special only because it holds a monopoly on (legal) violence.
As an historian, I’m all too aware of the times that the state (usually represented by a monarch) has served to repress its citizens/subjects. At least then the state could pretend that it was protecting the majority of the people. As this article states:
Lies masquerading as news are as old as news itself. What is new today is not fake news but the purveyors of such news. In the past, only governments and powerful figures could manipulate public opinion. Today, it’s anyone with internet access. Just as elite institutions have lost their grip over the electorate, so their ability to act as gatekeepers to news, defining what is and is not true, has also been eroded.

So in the interaction between social networks such as Facebook, Twitter, and Instagram on the one hand, and various governments on the other hand, both are interested in power, not the people. Or even any notion of truth, it would seem:
This is why we should be wary of many of the solutions to fake news proposed by European politicians. Such solutions do little to challenge the culture of fragmented truths. They seek, rather, to restore more acceptable gatekeepers – for Facebook or governments to define what is and isn’t true. In Germany, a new law forces social media sites to take down posts spreading fake news or hate speech within 24 hours or face fines of up to €50m. The French president, Emmanuel Macron, has promised to ban fake news on the internet during election campaigns. Do we really want to rid ourselves of today’s fake news by returning to the days when the only fake news was official fake news?

We need to be vigilant. Those we trust today may not be trustworthy tomorrow.
Source: The Guardian
Why we forget most of what we read
I read a lot of stuff, and I remember random bits of it. I used to be reasonably disciplined about bookmarking stuff, but then realised I hardly ever went back through my bookmarks. So, instead, I try to use what I read, which is kind of the reason for Thought Shrapnel…
Surely some people can read a book or watch a movie once and retain the plot perfectly. But for many, the experience of consuming culture is like filling up a bathtub, soaking in it, and then watching the water run down the drain. It might leave a film in the tub, but the rest is gone.

Well, indeed. Nice metaphor.
In the internet age, recall memory—the ability to spontaneously call information up in your mind—has become less necessary. It’s still good for bar trivia, or remembering your to-do list, but largely, [Jared Horvath, a research fellow at the University of Melbourne] says, what’s called recognition memory is more important. “So long as you know where that information is at and how to access it, then you don’t really need to recall it,” he says.

Exactly. You need to know how to find that article you read that backs up the argument you're making. You don't need to remember all of the details. Search skills are really important.
One study showed that recall of episode details was much lower for people who binged a Netflix series than for those who spaced the episodes out. I guess that’s unsurprising.
People are binging on the written word, too. In 2009, the average American encountered 100,000 words a day, even if they didn’t “read” all of them. It’s hard to imagine that’s decreased in the nine years since. In “Binge-Reading Disorder,” an article for The Morning News, Nikkitha Bakshani analyzes the meaning of this statistic. “Reading is a nuanced word,” she writes, “but the most common kind of reading is likely reading as consumption: where we read, especially on the internet, merely to acquire information. Information that stands no chance of becoming knowledge unless it ‘sticks.’”

For anyone who knows about spaced learning, the conclusions are pretty obvious:
The lesson from his binge-watching study is that if you want to remember the things you watch and read, space them out. I used to get irritated in school when an English-class syllabus would have us read only three chapters a week, but there was a good reason for that. Memories get reinforced the more you recall them, Horvath says. If you read a book all in one stretch—on an airplane, say—you’re just holding the story in your working memory that whole time. “You’re never actually reaccessing it,” he says.

So if you apply what you learn, you're putting it to work. Hence this post!
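To make the spacing effect concrete, here's a toy model. The exponential forgetting curve is the standard Ebbinghaus approximation, and the 'reviewing when you've partly forgotten strengthens memory more' rule is my own stand-in for the desirable-difficulty idea; none of the numbers come from Horvath's study:

```python
import math

def retention(hours_elapsed: float, stability: float) -> float:
    """Ebbinghaus-style forgetting curve: fraction still recallable."""
    return math.exp(-hours_elapsed / stability)

def retention_after(review_times: list[float], horizon: float) -> float:
    """Retention at `horizon` hours, given reviews at `review_times`.
    Invented rule of thumb: the more you'd forgotten before a review,
    the more that review strengthens the memory."""
    stability, last = 24.0, 0.0   # initial stability: about a day
    for t in review_times:
        forgotten = 1.0 - retention(t - last, stability)
        stability *= 1.0 + 4.0 * forgotten
        last = t
    return retention(horizon - last, stability)

# Same three sittings, binged versus spaced; retention a month out.
print("binged:", round(retention_after([0.0, 1.0, 2.0], 720.0), 3))
print("spaced:", round(retention_after([0.0, 72.0, 144.0], 720.0), 3))
```

Even in this crude form, the spaced schedule comes out far ahead at the one-month mark, which matches the syllabus-three-chapters-a-week intuition above.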
Source: The Atlantic (via e180)
Should you lower your expectations?
“Aim for the stars and maybe you’ll hit the treetops” was always the kind of advice I was given when I was younger. But extremely high expectations of oneself are not always a great thing. We have to learn that we’ve got limits. Some are physical, some are mental, and some are cultural:
The problem with placing too much emphasis on your expectations—especially when they are exceedingly high—is that if you don’t meet them, you’re liable to feel sad, perhaps even burned out. This isn’t to say that you shouldn’t strive for excellence, but there’s wisdom in not letting perfect be the enemy of good.

A (now famous) 2006 study found that people in Denmark are the happiest in the world. Researchers also found that they have remarkably low expectations. And then:
In a more recent study that included more than 18,000 participants and was published in 2014 in the Proceedings of the National Academy of Sciences, researchers from University College in London examined people’s happiness from moment to moment. They found that “momentary happiness in response to outcomes of a probabilistic reward task is not explained by current task earnings, but by the combined influence of the recent reward expectations and prediction errors arising from those expectations.” In other words: Happiness at any given moment equals reality minus expectations.

So if you've always got very high expectations that aren't being met, that's not a great situation to be in.
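The study's equation can be sketched in code. As I understand the paper, momentary happiness is modelled as a decaying weighted sum of certain rewards (CR), gamble expectations (EV), and reward prediction errors (RPE); the weights and numbers below are invented, since the paper fits them per participant:

```python
def momentary_happiness(history, w0=0.0, w_cr=0.5, w_ev=0.4, w_rpe=0.6,
                        gamma=0.7):
    """Sketch of the PNAS happiness model, as I understand it: a
    decaying weighted sum of certain rewards (CR), gamble expectations
    (EV), and reward prediction errors (RPE). Weights here are invented.

    `history` is a list of (cr, ev, rpe) tuples, oldest first.
    """
    t = len(history)
    h = w0
    for j, (cr, ev, rpe) in enumerate(history, start=1):
        decay = gamma ** (t - j)   # older events count for less
        h += decay * (w_cr * cr + w_ev * ev + w_rpe * rpe)
    return h

# Two trials with the identical outcome (+50), but different expectations.
expected_win   = [(0, 50, 0)]    # expected 50, got 50 -> RPE 0
surprising_win = [(0, 10, 40)]   # expected 10, got 50 -> RPE +40
print("expected win  :", momentary_happiness(expected_win))
print("surprising win:", momentary_happiness(surprising_win))
```

With identical outcomes, the lower-expectation trial comes out happier: the 'reality minus expectations' intuition in miniature.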
In the words of Jason Fried, founder and CEO of software company Basecamp and author of multiple books on workplace performance: “I used to set expectations in my head all day long. But constantly measuring reality against an imagined reality is taxing and tiring, [and] often wrings the joy out of experiencing something for what it is.”

Source: Outside
Why do some things go viral?
I love internet memes and included a few in my TEDx talk a few years ago. The term ‘meme’ comes from Richard Dawkins, who coined it in the 1970s:
But trawling the Internet, I found a strange paradox: While memes were everywhere, serious meme theory was almost nowhere. Richard Dawkins, the famous evolutionary biologist who coined the word “meme” in his classic 1976 book, The Selfish Gene, seemed bent on disowning the Internet variety, calling it a “hijacking” of the original term. The peer-reviewed Journal of Memetics folded in 2005. “The term has moved away from its theoretical beginnings, and a lot of people don’t know or care about its theoretical use,” philosopher and meme theorist Daniel Dennett told me. What has happened to the idea of the meme, and what does that evolution reveal about its usefulness as a concept?

Memes aren't things that you necessarily want to find engaging or persuasive. They're kind of parasitic on the human mind:
Dawkins’ memes include everything from ideas, songs, and religious ideals to pottery fads. Like genes, memes mutate and evolve, competing for a limited resource—namely, our attention. Memes are, in Dawkins’ view, viruses of the mind—infectious. The successful ones grow exponentially, like a super flu. While memes are sometimes malignant (hellfire and faith, for atheist Dawkins), sometimes benign (catchy songs), and sometimes terrible for our genes (abstinence), memes do not have conscious motives. But still, he claims, memes parasitize us and drive us.

Dawkins doesn't like the use of the word 'meme' to refer to what we see on the internet:
According to Dawkins, what sets Internet memes apart is how they are created. “Instead of mutating by random chance before spreading by a form of Darwinian selection, Internet memes are altered deliberately by human creativity,” he explained in a recent video released by the advertising agency Saatchi & Saatchi. He seems to think that the fact that Internet memes are engineered to go viral, rather than evolving by way of natural selection, is a salient difference that distinguishes them from other memes—which is arguable, since what catches fire on the Internet can be as much a product of luck as any unexpected mutation.

So... why should we care?
While entertaining bored office workers seems harmless enough, there is something troubling about a multi-million dollar company using our minds as petri dishes in which to grow its ideas. I began to wonder if Dawkins was right—if the term meme is really being hijacked, rather than mindlessly evolving like bacteria. The idea of memes “forces you to recognize that we humans are not entirely the center of the universe where information is concerned—we’re vehicles and not necessarily in charge,” said James Gleick, author of The Information: A History, A Theory, A Flood, when I spoke to him on the phone. “It’s a humbling thing.”

It is indeed a humbling thing, but one that the study of Philosophy prepares you for, particularly Stoicism. Your mind is the one thing you can control, so be careful out there on the internet, reader.
Source: Nautilus
Humans responsible for the Black Death
I taught History for years, and when I was teaching the Black Death, I inculcated the received wisdom that it was rats that were responsible for the spread of disease.
But a team from the universities of Oslo and Ferrara now says the first, the Black Death, can be "largely ascribed to human fleas and body lice".

There are three candidates for the spread of the Black Death: rats, air, and lice/fleas:

The study, in the Proceedings of the National Academy of Science, uses records of its pattern and scale. [Prof Nils Stenseth, from the University of Oslo] and his colleagues... simulated disease outbreaks in [nine European] cities, creating three models where the disease was spread by:

- rats
- airborne transmission
- fleas and lice that live on humans and their clothes

In seven out of the nine cities studied, the "human parasite model" was a much better match for the pattern of the outbreak. It mirrored how quickly it spread and how many people it affected.

“The conclusion was very clear,” said Prof Stenseth. “The lice model fits best."

Apologies to all those I taught the incorrect cause! I hope it hasn’t affected you too much in later life…
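The model-comparison step is easy to picture in code. This is only a toy sketch of the idea, fitting three candidate epidemic curves to an observed one; the real study fit proper transmission models to weekly mortality records, and every number below is invented:

```python
def sir_curve(beta: float, gamma: float, days: int) -> list[float]:
    """Daily new infections from a toy discrete-time SIR model."""
    s, i = 0.999, 0.001          # susceptible / infected fractions
    curve = []
    for _ in range(days):
        new = beta * s * i       # new infections today
        s, i = s - new, i + new - gamma * i
        curve.append(new)
    return curve

def fit_error(model: list[float], observed: list[float]) -> float:
    """Squared error between a simulated curve and the record."""
    return sum((m - o) ** 2 for m, o in zip(model, observed))

# Invented stand-ins: each transmission route implies a different shape.
candidates = {
    "rats":            sir_curve(beta=0.30, gamma=0.10, days=120),
    "airborne":        sir_curve(beta=0.60, gamma=0.25, days=120),
    "human parasites": sir_curve(beta=0.45, gamma=0.15, days=120),
}
observed = sir_curve(beta=0.44, gamma=0.15, days=120)  # pretend record
best = min(candidates, key=lambda k: fit_error(candidates[k], observed))
print("best-fitting model:", best)
```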
Source: BBC News
The world's most nutritious foods
The older I get, the more important (and the more immediately apparent) the health benefits of eating and exercising well become.
This article reports on scientists studying 1,000 different foods for their health benefits:
Scientists studied more than 1,000 foods, assigning each a nutritional score. The higher the score, the more likely each food would meet, but not exceed your daily nutritional needs, when eaten in combination with others.

The top ones?
- Almonds
- Cherimoya
- Ocean perch
- Flatfish
- Chia seeds
- Pumpkin seeds
- Swiss chard
- Pork fat
- Beet greens
- Snapper
Time for some more experimentation…
Source: BBC Future
Audio Adversarial speech-to-text
I don’t usually go in for detailed technical papers on stuff that’s not directly relevant to what I’m working on, but I made an exception for this. Here’s the abstract:
We construct targeted audio adversarial examples on automatic speech recognition. Given any audio waveform, we can produce another that is over 99.9% similar, but transcribes as any phrase we choose (at a rate of up to 50 characters per second). We apply our white-box iterative optimization-based attack to Mozilla’s implementation DeepSpeech end-to-end, and show it has a 100% success rate. The feasibility of this attack introduces a new domain to study adversarial examples.

In other words, the researchers managed to fool a neural network devoted to speech recognition into transcribing a phrase different to that which was uttered.
So how does it work?
By starting with an arbitrary waveform instead of speech (such as music), we can embed speech into audio that should not be recognized as speech; and by choosing silence as the target, we can hide audio from a speech-to-text system.

The authors state that merely changing words so that something different occurs is a standard adversarial attack. But a targeted adversarial attack is different:
Not only are we able to construct adversarial examples converting a person saying one phrase to that of them saying a different phrase, we are also able to begin with arbitrary non-speech audio sample and make that recognize as any target phrase.

This kind of stuff is possible due to open source projects, in particular Mozilla Common Voice. Great stuff.
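Out of curiosity, here's roughly what that kind of white-box, iterative optimisation attack looks like in code. This is a minimal sketch only: the tiny stand-in network, the constants, and the clipping rule are mine, not the paper's, which optimises against DeepSpeech's actual CTC loss over real audio:

```python
# Sketch of a white-box iterative optimisation attack: find a small
# perturbation `delta` so the model transcribes x + delta as a chosen
# target phrase. The stand-in model is a dummy so the sketch runs
# self-contained; it is NOT DeepSpeech.
import torch
import torch.nn as nn

torch.manual_seed(0)
T, C = 100, 28                     # time steps; 26 letters + space + blank
model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, C))
ctc = nn.CTCLoss(blank=0)

x = torch.randn(T, 1)              # stand-in for the original waveform
target = torch.tensor([[8, 9]])    # stand-in encoding of the target phrase

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)
c = 1.0                            # trade-off: distortion vs. success

for step in range(500):
    opt.zero_grad()
    logits = model(x + delta)                          # (T, C)
    log_probs = logits.log_softmax(-1).unsqueeze(1)    # (T, N=1, C)
    attack = ctc(log_probs, target,
                 torch.tensor([T]), torch.tensor([2]))
    distortion = delta.pow(2).sum()      # keep the perturbation small
    loss = distortion + c * attack
    loss.backward()
    opt.step()
    with torch.no_grad():                # crude amplitude bound
        delta.clamp_(-0.05, 0.05)
```

The `c` constant trades off distortion against attack success; the paper describes a more careful loss and an amplitude bound measured in decibels, but the overall shape of the loop is similar.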
Source: Arxiv
Sounds and smells can help reinforce learning while you sleep
Apparently, the idea of learning while you sleep is actually bollocks, at least the way we have come to believe it works:
It wasn’t until the 1950s that researchers discovered the touted effects of hypnopaedia were actually not due to sleep at all. Instead these contraptions were actually awakening people. The debunkers could tell by using a relatively established technique called electroencephalography (EEG), which records the brain’s electrical signals through electrodes placed on the scalp. Using EEG on their participants, researchers could tell that the sleep-learners were actually awake (something we still do in research today), and this all but ended research into sleep as a cognitive tool. 50 years later, we now know it is possible to alter memory during sleep, just in a different way than previously expected.

However, and fascinatingly, sounds (not words) and smells can reinforce learning:
In 2007, the neuroscientist Björn Rasch at Lübeck University and colleagues reported that smells, which were associated with previously learned material, could be used to cue the sleeping brain. The study authors had taught participants the locations of objects on a grid, just like in the game Concentration, and exposed them to the odour of roses as they did so. Next, participants slept in the lab, and the experimenters waited until the deepest stage of sleep (slow-wave sleep) to once again expose them to the odour. Then when they were awake, the participants were significantly better at remembering where the objects were located. This worked only if they had been exposed to the rose odour during learning, and had smelled it during slow-wave sleep. If they were exposed to the odour only while awake or during REM sleep, the cue didn’t work.

Pretty awesome. There are some things still to research:
Outstanding questions that we have yet to address include: does this work for foreign-language learning (ie, grammar learning), or just learning foreign vocabulary? Could it be used to help maintain memory performance in an ageing population? Does reactivating some memories mean that others are wiped away even more quickly?

Worth trying!
Source: Aeon
Every easy thing is hard again
Although he isn’t aware of it, it was Frank Chimero who came up with the name Thought Shrapnel, in a throwaway comment he made on his blog a while back. I immediately registered the domain name.
In this article, a write-up of a talk he’s been giving recently, Chimero talks about getting back into web design after a few years away founding a company.
This past summer, I gave a lecture at a web conference and afterward got into a fascinating conversation with a young digital design student. It was fun to compare where we were in our careers. I had fifteen years of experience designing for web clients, she had one year, and yet somehow, we were in the same situation: we enjoyed the work, but were utterly confused and overwhelmed by the rapidly increasing complexity of it all. What the hell happened? (That’s a rhetorical question, of course.)

Look at the image at the top of this post, one that Chimero uses in his talk. He explains:
There are similar examples of the cycle in other parts of how websites get designed and made. Nothing stays settled, so of course a person with one year of experience and one with fifteen years of experience can both be confused. Things are so often only understood by those who are well-positioned in the middle of the current wave of thought. If you’re before the sweet spot in the wave, your inexperience means you know nothing. If you are after, you will know lots of things that aren’t applicable to that particular way of doing things. I don’t bring this up to imply that the young are dumb or that the inexperienced are inept—of course they’re not. But remember: if you stick around in the industry long enough, you’ll get to feel all three situations.

The current way of working, he suggests, may be powerful, but it's overly complex for most of his work:
It was easy to back away from most of this new stuff when I realized I have alternate ways of managing complexity. Instead of changing my tools or workflow, I change my design. It’s like designing a house so it’s easy to build, instead of setting up cranes typically used for skyscrapers.

Chimero makes an important point about the 'legibility' of web projects, a word I've also been using recently about my own work. I want to make it as understandable as possible:
Illegibility comes from complexity without clarity. I believe that the legibility of the source is one of the most important properties of the web. It’s the main thing that keeps the door open to independent, unmediated contributions to the network. If you can write markup, you don’t need Medium or Twitter or Instagram (though they’re nice to have). And the best way to help someone write markup is to make sure they can read markup.

He includes a great video showing a real life race between a tortoise and a hare. He points out that the tortoise wins because the hare becomes distracted.
He finishes with some powerful words:
As someone who has decades of experience on the web, I hate to compare myself to the tortoise, but hey, if it fits, it fits. Let’s be more like that tortoise: diligent, direct, and purposeful. The web needs pockets of slowness and thoughtfulness as its reach and power continues to increase. What we depend upon must be properly built and intelligently formed. We need to create space for complexity’s important sibling: nuance. Spaces without nuance tend to gravitate towards stupidity. And as an American, I can tell you, there are no limits to the amount of damage that can be inflicted by that dangerous cocktail of fast-moving-stupid.

Source: Frank Chimero
Why good parents have naughty children
This made me smile, then it made me think. Our children are offspring of a current teacher and a former teacher. What difference does our structure and rules make to their happiness?
This article from the ongoing Book of Life compares and contrasts two families. The first is what would generally be regarded as a ‘good’ family, where the children are well-behaved and interactions pleasant. However:
In Family One the so-called good child has inside them a whole range of emotions that they keep out of sight not because they want to but because they don’t feel they have the option to be tolerated as they really are. They feel they can’t let their parents see if they are angry or fed up or bored because it seems as if the parents have no inner resources to cope with their reality; they must repress their bodily, coarser, more volatile selves. Any criticism of a grown up is (they imagine) so wounding and devastating that it can’t be uttered.

The second family is the opposite, but:
In Family Two the so-called bad child knows that things are robust. They feel they can tell their mother she’s a useless idiot because they know in their hearts that she loves them and that they love her and that a bout of irritated rudeness won’t destroy that. They know their father won’t fall apart or take revenge for being mocked. The environment is warm and strong enough to absorb the child’s aggression, anger, dirtiness or disappointment.

As a parent, I'm torn: on the one hand, I want my children to be a bit rebellious; on the other, it's just really inconvenient when they are...
We should learn to see naughty children, a few chaotic scenes and occasional raised voices as belonging to health rather than delinquency – and conversely learn to fear small people who cause no trouble whatsoever. And, if we have occasional moments of happiness and well-being, we should feel especially grateful that there was almost certainly someone out there in the distant past who opted to look through the eyes of love at some deeply unreasonable and patently unpleasant behaviour from us.

Source: The Book of Life
Telegram cryptocurrency
I come across so many interesting links every day that I can only post a handful of them. Right now, and only a couple of months after starting this approach to Thought Shrapnel, I’ve got around 50 draft posts! This was one of them, from early January.
Telegram is great. I’ve been using it for the past couple of years with my wife, for the past year with my son and parents, and for the past three months or so with Moodle. It’s an extremely useful platform, as it’s so quick to send messages. It’s reliable too, something my wife and I found Signal to struggle with at times.
The brothers behind Telegram made their billions from creating VKontakte (usually shortened to ‘VK’ and known as the ‘Russian Facebook’). They’ve announced that Telegram will raise millions of dollars through an ‘ICO’, or Initial Coin Offering. The terminology echoes that of an Initial Public Offering (IPO), which happens when a company lists on a stock exchange, but an ICO is actually more like equity crowdfunding using cryptocurrency:
Encrypted messaging startup Telegram plans to launch its own blockchain platform and native cryptocurrency, powering payments on its chat app and beyond. According to multiple sources which have spoken to TechCrunch, the “Telegram Open Network” (TON) will be a new, ‘third generation’ blockchain with superior capabilities, after Bitcoin and, later, Ethereum paved the way.

It could lead to some quite exciting features:
With cryptocurrency powered payments inside Telegram, users could bypass remittance fees when sending funds across international borders, move sums of money privately thanks to the app’s encryption, deliver micropayments that would incur too high of credit card fees, and more. Telegram is already the de facto communication channel for the global cryptocurrency community, making a natural home to its own coin and Blockchain.

Whereas the major social networks kowtow to governmental demands around censorship, that doesn't seem to be the gameplan for Telegram:
Moving to a decentralized blockchain platform could kill two birds with one stone for Telegram. As well as creating a full-blown cryptocurrency economy inside the app, it would also insulate it against the attacks and accusations of nation-states such as Iran, where it now accounts for 40% of Iran’s internet traffic but was temporarily blocked amongst nationwide protests against the government.

I don't pretend to understand the white paper they've published, but:
The claim is that it will be capable of a vastly superior number of transactions, around 1 million per second. In other words, similar to the ambitions of the Polkadot project out of Berlin — but with an installed base of 180 million people. This makes it an ‘interchain’ with so-called ‘dynamic sharding’.

Exciting times. As I was explaining to someone recently, Telegram are taking a very interesting route into user adoption. They couldn't go with the standard 'social network' approach as Facebook, Instagram, and Twitter mean that market is effectively saturated. Instead, they started with a messaging app, and are building out from there.
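For a sense of scale, here's some back-of-envelope arithmetic on that throughput claim. The per-shard figure is purely my assumption for illustration; nothing here comes from the white paper:

```python
# Rough arithmetic on the 1M tx/s claim: in a sharded chain, total
# capacity scales roughly with the shard count (ignoring cross-shard
# overhead, which real designs must pay for).
tx_per_shard_per_sec = 1_000   # assumption: one shard, invented figure

def shards_for_load(load_tx_per_sec: int, per_shard: int = 1_000) -> int:
    """Shards needed to serve a given load (ceiling division)."""
    return -(-load_tx_per_sec // per_shard)

# 'Dynamic sharding' would presumably add or merge shards with demand:
for load in (5_000, 250_000, 1_000_000):
    print(f"{load:>9} tx/s -> {shards_for_load(load):>5} shards")
```

In other words, the headline number is less a property of any single chain than a bet that the network can coordinate on the order of a thousand shards at once.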
Source: TechCrunch