Tag: Fast Company

Adversarial interoperability to return to a world of ‘fast companies’

Cory Doctorow is one of my favourite people on the entire planet. I’ve heard him speak in person and online on numerous occasions. I met him a couple of times while at Mozilla, and he’s even recommended swimming pools in Toronto to me when I visited. (He’s a daily swimmer due to chronic back pain.)

His new book, which I’m saving to read for my next holiday, is The Internet Con: How to Seize the Means of Computation. In this interview as part of promoting the book, he talks about how we’ve ended up in a world without real competition in the technology marketplace. Essential reading, as ever.

There used to be a time when the tech sector could be described as a bunch of “fast companies,” right? They would use the interoperability that’s latent in all digital technology and they would specifically target whatever pain points the incumbent had introduced. If incumbents were making money by showing you ads, they made an ad blocker. If incumbents were making money by charging gigantic margins on hard drives, they made cheaper hard drives.

Over time, we went from an internet where tech companies more or less had their users’ backs, to an internet where tech companies are colluding to take as big a bite as possible out of those users. We do not have fast companies anymore; we have lumbering behemoths. If you’ve started a fast company, it’s probably just a fake startup that you’re hoping to get acqui-hired by one of the big giants, which is something that used to be illegal.

As these companies grew more concentrated, they were able to collude and convince courts and regulators and lawmakers that it was time to get rid of the kind of interoperability, the reverse engineering that had been a feature of technology since the very beginning, and move into a new era in which no one was allowed to do anything to a tech platform that their shareholders wouldn’t appreciate. And that the government should step in to use the state’s courts to punish anyone who disagrees. That’s how we got to the world that we’re in today.

Source: Cory Doctorow: Silicon Valley is now a world of ‘lumbering behemoths’ | Fast Company

Leadership is contextual

This article feels quite foreign to me as a member of a co-operative, but it contains an important insight. I feel that there’s more nuance than the author provides, in that leadership is contextual.

Some people believe that they are a ‘leader’ because their job title says so. But true leadership comes when people choose to follow you, not when they’re coerced into something because you’re higher up the pyramid than they are.

For as long as I can remember, leadership was the expectation. If you wanted to move up in the world, you had to be a leader: in school, at work, in your extracurriculars. Leadership was the golden ticket, and the more opportunities you took, the closer you’d get to owning the whole chocolate factory.

Source: What to do if you don’t want to be a leader | Fast Company

AI-generated misinformation is getting more believable, even to experts

I’ve been using thispersondoesnotexist.com for projects recently and, honestly, most of the faces it generates each time you hit refresh are indistinguishable from real people.

For every positive use of this kind of technology, there are of course negatives. Misinformation and disinformation are everywhere. This example shows how even experts in critical fields such as cybersecurity, public safety, and medicine can be fooled, too.

If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation—flagged and unflagged—has been aimed at the general public. Imagine the possibility of misinformation—information that is false or misleading—in scientific and technical fields like cybersecurity, public safety, and medicine.

There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it’s possible for artificial intelligence systems to generate false information in critical fields like medicine and defense that is convincing enough to fool experts.

Source: False, AI-generated cybersecurity news was able to fool experts | Fast Company