
Friday feastings

These are things I came across that piqued my interest:

  • What do cats do all day? (The Kid Should See This) — “Catcam footage from collar cameras captured the activities of 16 free-roaming domestic cats in England as they explored, stared, touched noses, hunted, vocalized, and more.”
  • These researchers invented an entirely new way of building with wood (Fast Company) — “Each of the 12 wooden components of the tower was made by laminating two pieces of wood with different levels of moisture. Then, when the laminated pieces of wood dried out, the piece of wood curved naturally–no molds or braces needed.”
  • What Did Old English Sound Like? Hear Reconstructions of Beowulf, The Bible, and Casual Conversations (Open Culture) — “Over the course of 1000 years, the language came together from extensive contact with Anglo-Norman, a dialect of French; then became heavily Latinized and full of Greek roots and endings; then absorbed words from Arabic, Spanish, and dozens of other languages, and with them, arguably, absorbed concepts and pictures of the world that cannot be separated from the language itself.”
  • Adversarial interoperability: reviving an elegant weapon from a more civilized age to slay today’s monopolies (BoingBoing) — “This kind of adversarial interoperability goes beyond the sort of thing envisioned by “data portability,” which usually refers to tools that allow users to make a one-off export of all their data, which they can take with them to rival services. Data portability is important, but it is no substitute for the ability to have ongoing access to a service that you’re in the process of migrating away from.”
  • Fables of School Reform (The Baffler) — “Even pre-internet efforts to upgrade the technological prowess of American schools came swathed in the quasi-millennial promise of complete school transformation.”

Data transfer as a ‘hedge’?

This is an interesting development:

Today, Google, Facebook, Microsoft, and Twitter joined to announce a new standards initiative called the Data Transfer Project, designed as a new way to move data between platforms. In a blog post, Google described the project as letting users “transfer data directly from one service to another, without needing to download and re-upload it.”

This, of course, would probably not have happened without GDPR. So how does it work?

The existing code for the project is available open-source on GitHub, along with a white paper describing its scope. Much of the codebase consists of “adapters” that can translate proprietary APIs into an interoperable transfer, making Instagram data workable for Flickr and vice versa. Between those adapters, engineers have also built a system to encrypt the data in transit, issuing forward-secret keys for each transaction. Notably, that system is focused on one-time transfers rather than the continuous interoperability enabled by many APIs.
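The "adapter" idea in that passage can be sketched in a few lines. This is a hypothetical illustration of the pattern, not the project's actual interfaces (the real codebase is in Java, and all class and method names below are invented): each service translates its proprietary API to and from a shared, service-neutral data model, so any exporter can be paired with any importer for a one-time transfer.

```python
# Hypothetical sketch of the adapter pattern described above.
# Names (Photo, Exporter, InstagramAdapter, etc.) are invented for
# illustration; they are not the Data Transfer Project's real API.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Photo:
    """A common, service-neutral representation of a photo."""
    title: str
    data: bytes


class Exporter(Protocol):
    def export(self) -> list[Photo]: ...


class Importer(Protocol):
    def ingest(self, photos: list[Photo]) -> None: ...


class InstagramAdapter:
    """Translates a proprietary 'Instagram-like' API into the common model."""
    def export(self) -> list[Photo]:
        # A real adapter would call the service's API; stubbed here.
        return [Photo(title="beach", data=b"...jpeg bytes...")]


class FlickrAdapter:
    """Translates the common model into a proprietary 'Flickr-like' API."""
    def __init__(self) -> None:
        self.library: list[Photo] = []

    def ingest(self, photos: list[Photo]) -> None:
        # A real adapter would upload via the service's API; stubbed here.
        self.library.extend(photos)


def transfer(source: Exporter, destination: Importer) -> int:
    """One-time transfer: export from one service, ingest into another."""
    photos = source.export()
    destination.ingest(photos)
    return len(photos)


flickr = FlickrAdapter()
moved = transfer(InstagramAdapter(), flickr)
print(f"{moved} photo(s) transferred")
```

The point of the pattern is that adding one new adapter makes a service interoperable with every existing one, rather than requiring a bespoke bridge for each pair. Note, as the article says, this models a one-off export/import, not the continuous synchronisation many APIs provide.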

Call me cynical, but just because something is open source doesn't mean it's a level playing field for everyone. In fact, I'd wager this is large companies hedging against new entrants to the market.

The project was envisioned as an open-source standard, and many of the engineers involved say a broader shift in governance will be necessary if the standard is successful. “In the long term, we want there to be a consortium of industry leaders, consumer groups, government groups,” says Fair. “But until we have a reasonable critical mass, it’s not an interesting conversation.”

This would be great if it pans out in the way it's presented in the article. My 20+ years of experience on the web, however, would suggest otherwise.

Source: The Verge
