
The New Octopus: going beyond managerial interventions for internet giants

This article in Logic magazine was brought to my attention by a recent issue of Ian O’Byrne’s excellent TL;DR newsletter. It’s a long read, focusing on the structural power of internet giants such as Amazon, Facebook, and Google.

The author, K. Sabeel Rahman, is an assistant professor of law at Brooklyn Law School and a fellow at the Roosevelt Institute. He uses historical analogues to make his points, while noting how different the current state of affairs is from a century ago.

As in the Progressive Era, technological revolutions have radically transformed our social, economic, and political life. Technology platforms, big data, AI—these are the modern infrastructures for today’s economy. And yet the question of what to do about technology is fraught, for these technological systems paradoxically evoke both bigness and diffusion: firms like Amazon and Alphabet and Apple are dominant, yet the internet and big data and AI are technologies that are by their very nature diffuse.

The problem, however, is not bigness per se. Even for Brandeisians, the central concern was power: the ability to arbitrarily influence the decisions and opportunities available to others. Such unchecked power represented a threat to liberty. Therefore, just as the power of the state had to be tamed through institutional checks and balances, so too did this private power have to be contested—controlled, held to account.

This emphasis on power and contestation, rather than literal bigness, helps clarify the ways in which technology’s particular relationship to scale poses a challenge to ideals of democracy, liberty, equality—and what to do about it.

I think this is the thing that concerns me most. Just as the banks were ‘too big to fail’ during the economic crisis and had to be bailed out by the taxpayer, so huge technology companies are increasingly playing that kind of role elsewhere in our society.

The problem of scale, then, has always been a problem of power and contestability. In both our political and our economic life, arbitrary power is a threat to liberty. The remedy is the institutionalization of checks and balances. But where political checks and balances take a common set of forms—elections, the separation of powers—checks and balances for private corporate power have proven trickier to implement.

These various mechanisms—regulatory oversight, antitrust laws, corporate governance, and the countervailing power of organized labor— together helped create a relatively tame, and economically dynamic, twentieth-century economy. But today, as technology creates new kinds of power and new kinds of scale, new variations on these strategies may be needed.

“Arbitrary power is a threat to liberty.” Absolutely, no matter whether the company holding that power has been problematic in the past, has a slogan promising not to do anything wrong, or is well-liked by the public.

We need more than regulatory oversight of such organisations because of how insidious their power can be — much like the image of Luks’ octopus that accompanies this and the original post.

Rahman explains three types of power held by large internet companies:

First, there is transmission power. This is the ability of a firm to control the flow of data or goods. Take Amazon: as a shipping and logistics infrastructure, it can be seen as directly analogous to the railroads of the nineteenth century, which enjoyed monopolized mastery over the circulation of people, information, and commodities. Amazon provides the literal conduits for commerce.


A second type of power arises from what we might think of as a gatekeeping power. Here, the issue is not necessarily that the firm controls the entire infrastructure of transmission, but rather that the firm controls the gateway to an otherwise decentralized and diffuse landscape.

This is one way to understand the Facebook News Feed, or Google Search. Google Search does not literally own and control the entire internet. But it is increasingly true that for most users, access to the internet is mediated through the gateway of Google Search or YouTube’s suggested videos. By controlling the point of entry, Google exercises outsized influence on the kinds of information and commerce that users can ultimately access—a form of control without complete ownership.


A third kind of power is scoring power, exercised by ratings systems, indices, and ranking databases. Increasingly, many business and public policy decisions are based on big data-enabled scoring systems. Thus employers will screen potential applicants for the likelihood that they may quit, be a problematic employee, or participate in criminal activity. Or judges will use predictive risk assessments to inform sentencing and bail decisions.

These scoring systems may seem objective and neutral, but they are built on data and analytics that bake into them existing patterns of racial, gender, and economic bias.


Each of these forms of power is infrastructural. Their impact grows as more and more goods and services are built atop a particular platform. They are also more subtle than explicit control: each of these types of power enable a firm to exercise tremendous influence over what might otherwise look like a decentralized and diffused system.

As Adam Greenfield says (I quote him in Microcast #021, supporters only!), this infrastructural power is less obvious because of the immateriality of the world controlled by internet giants. We need more than managerial approaches to solving the problems posed by their power.

A more radical response, then, would be to impose structural restraints: limits on the structure of technology firms, their powers, and their business models, to forestall the dynamics that lead to the most troubling forms of infrastructural power in the first place.

One solution would be to convert some of these infrastructures into “public options”—publicly managed alternatives to private provision. Run by the state, these public versions could operate on equitable, inclusive, and nondiscriminatory principles. Public provision of these infrastructures would subject them to legal requirements for equal service and due process. Furthermore, supplying a public option would put competitive pressures on private providers.


We can also introduce structural limits on technologies with the goal of precluding dangerous concentrations of power. While much of the debate over big data and privacy has tended to emphasize the concerns of individuals, we might view a robust privacy regime as a kind of structural limit: if firms are precluded from collecting or using certain types of data, that limits the kinds of power they can exercise.

Some of this is already happening, thankfully, through structural limitations such as GDPR. I hope this is the first step in a more coordinated response to internet giants who increasingly have more impact on the day-to-day lives of citizens than their governments.

Moving fast and breaking things is inevitable in moments of change. The issue is which things we are willing to break—and how broken we are willing to let them become. Moving fast may not be worth it if it means breaking the things upon which democracy depends.

It’s a difficult balance. However, just as GDPR has put in place mechanisms to prevent over-reach by governments and companies, we could also think differently about how organisations with non-profit status and community ownership might provide some of the infrastructure currently being built by shareholder-owned companies.

Having just finished reading Utopia for Realists, I definitely think the left needs to think bigger than it currently is, and really push that Overton window.

Source: Logic magazine (via Ian O’Byrne)

Alexa for Kids as babysitter?

I’m just on my way out of the house, heading for Scotland to climb some mountains with my wife.

But while she does (what I call) her ‘last-minute faffing’, I read Dan Hon’s newsletter. I’ll just quote the relevant section without any attempt at comment or analysis.

He includes references in his newsletter, but you’ll just have to click through for those.

Mat Honan reminded me that Amazon have made an Alexa for Kids (during the course of which Tom Simonite had a great story about Alexa diligently and non-plussedly educating a group of preschoolers about the history of FARC after misunderstanding their requests for farts) and Honan has a great article about it. There are now enough Alexa (plural?) out there that the phenomenon of “the funny things kids say to Alexa” is pretty well documented, as is the earlier “Alexa is teaching my kid to be rude” observation. This isn’t to say that Amazon haven’t done *any* work thinking about how Alexa works in a kid context (Honan’s article shows that they’ve demonstrably thought about how Alexa might work and that they’ve made changes to the product to accommodate children as a specific class of user), but the overwhelming impression I had after reading Honan’s piece was that, as a parent, I still don’t think Amazon have gone far enough in making Alexa kid-friendly.

They’ve made some executive decisions like coming down hard on curation versus algorithmic selection of content (see James Bridle’s excellent earlier essay on YouTube, that something is wrong on the internet and recent coverage of YouTube Kids’ content selection method still finding ways to recommend, shall we say, videos espousing extreme views). And Amazon have addressed one of the core reported issues of having an Alexa in the house (the rudeness) by designing in support for a “magic word” Easter Egg that will reward kids for saying “please”. But that seems rather tactical and dealing with a specific issue and not, well, foundational. I think that the foundational issue is something more like this: parenting is a *very* personal subject. As I have become a parent, I have discovered (and validated through experimental data) that parents have very specific views about how to do things! Many parents do not agree with each other! Parents who agree with each other on some things do not agree on other things! In families where there are two parents there is much scope for disagreement on both desired outcome and method!

All of which is to say is that the current design, architecture and strategy of Alexa for Kids indicates one sort of one-size-fits-all method and that there’s not much room for parental customization. This isn’t to say that Amazon are actively preventing it and might not add it down the line – it’s just that it doesn’t really exist right now. Honan’s got a great point that:

“[For example,] take the magic word we mentioned earlier. There is no universal norm when it comes to what’s polite or rude. Manners vary by family, culture, and even region. While “yes, sir” may be de rigueur in Alabama, for example, it might be viewed as an element of the patriarchy in parts of California.”

Some parents may have very specific views on how they want to teach their kids to be polite. This kind of thinking leads me down the path of: well, are we imagining a world where Alexa or something like it is a sort of universal basic babysitter with default norms, while those who can pay get, well, customization? Or what someone else might call: attentive, individualized parenting?

When Alexa for Kids came out, I did about 10 seconds’ worth of thinking and, based on how Alexa gets used in our house (two parents, a five year old and a 19 month old) and how our preschooler is behaving, I was pretty convinced that I’m in no way ready or willing to leave him alone with an Alexa for Kids in his room. My family is, in what some might see as that tedious middle class way, pretty strict about the amount of screen time our kids get (unsupervised and supervised) and suffice it to say that there’s considerable difference of opinion between my wife and myself on what we’re both comfortable with and at what point what level of exposure or usage might be appropriate.

And here’s where I reinforce that point again: are you okay with leaving your kids with a default babysitter, or are you the kind of person who has opinions about how you want your babysitter to act with your kids? (Yes, I imagine people reading this and clutching their pearls at the mere *thought* of an Alexa “babysitting” a kid but need I remind you that books are a technological object too and the issue here is in the degree of interactivity and access). At least with a babysitter I can set some parameters and I’ve got an idea of how the babysitter might interact with the kids because, well, that’s part of the babysitter screening process.

Source: Things That Have Caught My Attention s5e11

Blockchain as a ‘futuristic integrity wand’

I’ve no doubt that blockchain technology is useful for super-boring scenarios and underpinning get-rich-quick schemes, but it has very little value to the scenarios in which I work. I’m trying to build trust, not work in an environment where technology serves as a workaround.

This post by Kai Stinchcombe about the blockchain bubble is a fantastic read. The author’s summary?

Blockchain is not only crappy technology but a bad vision for the future. Its failure to achieve adoption to date is because systems built on trust, norms, and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That’s permanent: no matter how much blockchain improves it is still headed in the wrong direction.

Fair enough, let’s dig in…

People have made a number of implausible claims about the future of blockchain—like that you should use it for AI in place of the type of behavior-tracking that google and facebook do, for example. This is based on a misunderstanding of what a blockchain is. A blockchain isn’t an ethereal thing out there in the universe that you can “put” things into, it’s a specific data structure: a linear transaction log, typically replicated by computers whose owners (called miners) are rewarded for logging new transactions.

It’s funny seeing people who have close to zero understanding of how blockchain works explain how it’s going to ‘revolutionise’ X, Y, or Z. Again, it’s got exciting applicability… for very boring stuff.

[H]ere’s what blockchain-the-technology is: “Let’s create a very long sequence of small files — each one containing a hash of the previous file, some new data, and the answer to a difficult math problem — and divide up some money every hour among anyone willing to certify and store those files for us on their computers.”

Now, here’s what blockchain-the-metaphor is: “What if everyone keeps their records in a tamper-proof repository not owned by anyone?”
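That ‘sequence of small files’ description is concrete enough to sketch in a few lines of Python. This is an illustrative toy (the difficulty, the data, and the two-block chain are all invented), not any real blockchain implementation:

```python
import hashlib
import json

def hash_block(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(prev_hash: str, data: str, difficulty: int = 2) -> dict:
    """Find a nonce so the block's hash starts with `difficulty` zeros
    (the 'answer to a difficult math problem' in the quote above)."""
    nonce = 0
    while True:
        block = {"prev_hash": prev_hash, "data": data, "nonce": nonce}
        if hash_block(block).startswith("0" * difficulty):
            return block
        nonce += 1

def verify(chain: list) -> bool:
    """Check that every block commits to the hash of the previous one."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != hash_block(prev):
            return False
    return True

# Build a tiny two-block chain.
chain = [mine_block("0" * 64, "genesis")]
chain.append(mine_block(hash_block(chain[-1]), "mangoes: organic"))
print(verify(chain))   # True

# Tamper-evidence: changing old data breaks every later link.
chain[0]["data"] = "tampered"
print(verify(chain))   # False: the edit is detectable, but note that
                       # nothing stopped a false claim being recorded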

This is the bit that really grabbed me about the post, the blockchain-as-metaphor section. People are sold on stories, not on technologies. Which is why some people are telling stories that involve magicking away all of their fears and problems with a magic blockchain wand.

People treat blockchain as a “futuristic integrity wand”—wave a blockchain at the problem, and suddenly your data will be valid. For almost anything people want to be valid, blockchain has been proposed as a solution.

It’s true that tampering with data stored on a blockchain is hard, but it’s false that blockchain is a good way to create data that has integrity.


Blockchain systems do not magically make the data in them accurate or the people entering the data trustworthy, they merely enable you to audit whether it has been tampered with. A person who sprayed pesticides on a mango can still enter onto a blockchain system that the mangoes were organic. A corrupt government can create a blockchain system to count the votes and just allocate an extra million addresses to their cronies. An investment fund whose charter is written in software can still misallocate funds.

When, like me, you think that humanity moves forward at the speed of trust and collaboration, blockchain seems like the antithesis of all that.

Projects based on the elimination of trust have failed to capture customers’ interest because trust is actually so damn valuable. A lawless and mistrustful world where self-interest is the only principle and paranoia is the only source of safety is a not a paradise but a crypto-medieval hellhole.

Source: Kai Stinchcombe

Bridging technologies

When you go deep enough into philosophy or religion one of the key insights is that everything is temporary. Success is temporary. Suffering is temporary. Your time on earth is temporary.

One way of thinking about this on a day-to-day basis is that everything is a bridge to something else. So that technology I’ve been excited about since 2011? Yep, it’s a bridge (or perhaps a raft) to get to something else.

Benedict Evans, who works for the VC firm Andreessen Horowitz, sends out a great, short newsletter every week to around 95,000 people. I’m one of them. In this week’s missive, he linked to a blog post he wrote about bridging technologies.

A bridge product says ‘of course x is the right way to do this, but the technology or market environment to deliver x is not available yet, or is too expensive, and so here is something that gives some of the same benefits but works now.’

As with anything, there are good and bad bridging technologies. At the time, it can be hard to spot the difference:

In hindsight, though, not just WAP but the entire feature-phone mobile internet prior to 2007, including i-mode, with cut-down pages and cut-down browsers and nav keys to scroll from link to link, was a bridge. The ‘right’ way was a real computer with a real operating system and the real internet. But we couldn’t build phones that could do that in 1999, even in Japan, and i-mode worked really well in Japan for a decade.

It’s all obvious in retrospect, as with the example of Firefox OS, which was developed at the same time I was at Mozilla:

[T]he problem with the Firefox phone project was that even if you liked the experience proposition – ‘almost as good as Android but works on much cheaper phones’ – the window of time before low-end Android phones closed the price gap was too short.

Usually, cheap things add more features until people just ‘make do’ with 80-90% of the full feature set. However, that’s not always the case:

Sometimes the ‘right’ way to do it just doesn’t exist yet, but often it does exist but is very expensive. So, the question is whether the ‘cheap, bad’ solution gets better faster than the ‘expensive, good’ solution gets cheap. In the broader tech industry (as described in the ‘disruption’ concept), generally the cheap product gets good. The way that the PC grew and killed specialized professional hardware vendors like Sun and SGi is a good example. However, in mobile it has tended to be the other way around – the expensive good product gets cheaper faster than the cheap bad product can get good.

Evans goes on to talk about autonomous vehicles, something that he’s heavily invested in (financially and intellectually) with his VC firm.

In the world of open source, however, it’s a slightly different process. Instead of thinking about the ‘runway’ of capital that you’ve got before you have to give up and go home, it’s about deciding when it no longer makes sense to maintain the project you’re working on. In some cases, the answer to that is ‘never’ which means that the project keeps going and going and going.

It can be good to have a forcing function to focus people’s minds. I’m thinking, for example, of Steve Jobs declaring war on Flash. The reasons he gives are disingenuous (accusing Adobe of not being ‘open’!) but the upshot of Apple declaring Flash as dead to them caused the entire industry to turn upside down. In effect, Flash was a ‘bridge’ to the full web on mobile devices.

Using the idea of technology ‘bridges’ in my own work can lead to some interesting conclusions. For example, the Project MoodleNet work that I’m beginning will ultimately be a bridge to something else for Moodle. Thinking about my own career, each step has been a bridge to something else; the most interesting bridges have been those where I haven’t been quite sure what was on the other side. Or, indeed, whether there even was another side…

Source: Benedict Evans

Tech will eat itself

Mike Murphy has been travelling to tech conferences: CES, MWC, and SXSW. He hasn’t been overly impressed by what he’s seen:

The role of technology should be to improve the quality of our lives in some meaningful way, or at least change our behavior. In years past, these conferences have seen the launch of technologies that have indeed impacted our lives to varying degrees, from the launch of Twitter to car stereos and video games.

However, it’s all been a little underwhelming:

People always ask me what trends I see at these events. There are the usual words I can throw out—VR, AR, blockchain, AI, big data, autonomy, automation, voice assistants, 3D-printing, drones—the list is endless, and invariably someone will write some piece on each of these at every event. But it’s rare to see something truly novel, impressive, or even more than mildly interesting at these events anymore. The blockchain has not revolutionized society, no matter what some bros would have you believe, nor has 3D-printing. Self-driving cars are still years away, AI is still mainly theoretical, and no one buys VR headsets. But these are the terms you’ll find associated with these events if you Google them.

There’s nothing of any real substance being launched at these big, shiny events:

The biggest thing people will remember from this year’s CES is that it rained the first few days and then the power went out. From MWC, it’ll be that it snowed for the first time in years in Barcelona, and from SXSW, it’ll be the Westworld in the desert (which was pretty cool). Quickly forgotten are the second-tier phones, dating apps, and robots that do absolutely nothing useful. I saw a few things of note that point toward the future—a 3D-printed house that could actually better lives in developing nations; robots that could crush us at Scrabble—but obviously, the opportunity for a nascent startup to get its name in front of thousands of techies, influential people, and potential investors can be huge. Even if it’s just an app for threesomes.

As Murphy points out, the more important the destination (i.e. where the event is held) the less important the content (i.e. what is being announced):

When real technology is involved, the destinations aren’t as important as the substance of the events. But in the case of many of these conferences, the substance is the destinations themselves.

However, that shouldn’t necessarily be cause for concern:

There is still much to be excited about in technology. You just won’t find much of it at the biggest conferences of the year, which are basically spring breaks for nerds. But there is value in bringing so many similarly interested people together.


Just don’t expect the world of tomorrow to look like the marketing stunts of today.

I see these events as a way to catch up the mainstream with what’s been happening in pockets of innovation over the past year or so. Unfortunately, this is increasingly being covered in a layer of marketing spin and hype so that it’s difficult to separate the useful from the trite.

Source: Quartz

Teaching kids about computers and coding

Not only is Hacker News a great place to find the latest news about tech-related stuff, it’s also got some interesting ‘Ask HN’ threads sourcing recommendations from the community.

This particular one starts with a user posing the question:

Ask HN: How do you teach your kids about computers and coding?

Please share what tools and approaches you use – it may be Scratch, Python, anything kid-specific like Linux distros, Raspberry Pi, or recent products like Lego Boost… Or your experiences with them… thanks.

Like sites such as Reddit and Stack Overflow, responses are voted up based on their usefulness. The most-upvoted response was this one:

My daughter is almost 5 and she picked up Scratch Jr in ten minutes. I am writing my suggestions mostly from the context of a younger child.

I approached it this way, I bought a book on Scratch Jr so I could get up to speed on it. I walked her through a few of the basics, and then I just let her take over after that.

One other programming-related activity we have done is the Learning Resources Code & Go Robot Mouse Activity. She has a lot of fun with this, as you have a small mouse you program with simple directions to navigate a maze to find the cheese. It uses a set of cards to help them grasp the steps needed. I switched to not using the cards after a while. We now just step the mouse through the maze manually, adding steps as we go.

One other activity to consider is the robot turtles board game. This teaches some basic logic concepts needed in programming.

For an older child: I did help my nephew learn programming in Python when he was a freshman in high school. I took the approach of having him type in games from the free Python book. I have always thought this was a good approach for older kids to get familiar with the syntax.

Something else I would consider is a robot that can be programmed with Scratch. While I have not done this yet, I think that for kids, seeing the physical results of programming via a robot is a powerful way to capture interest.
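The Code & Go mouse activity mentioned above maps neatly onto a first text-based programming exercise: queue up a short sequence of commands, then run them all at once to see where the mouse ends up. A toy sketch in Python (the grid, the command names, and the cheese location are all invented for illustration):

```python
# Directions the mouse can face; turning right cycles forward through this list.
FACINGS = ["north", "east", "south", "west"]
MOVES = {"north": (0, 1), "east": (1, 0), "south": (0, -1), "west": (-1, 0)}

def run_program(commands, start=(0, 0), facing="north"):
    """Execute a list of 'forward' / 'left' / 'right' commands, card by card."""
    x, y = start
    for cmd in commands:
        if cmd == "forward":
            dx, dy = MOVES[facing]
            x, y = x + dx, y + dy
        elif cmd == "right":
            facing = FACINGS[(FACINGS.index(facing) + 1) % 4]
        elif cmd == "left":
            facing = FACINGS[(FACINGS.index(facing) - 1) % 4]
    return (x, y), facing

cheese = (2, 1)
program = ["forward", "right", "forward", "forward", "left"]
position, facing = run_program(program)
print("Found the cheese!" if position == cheese else f"Mouse is at {position}")
```

The point, as with the physical mouse, is that the child plans the whole sequence before pressing ‘go’, which is the core idea of a program.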

But I think my favourite response is this one:

What age range are we talking about? For most kids aged 6-12 writing code is too abstract to start with. For my kids, I started making really simple projects with a Makey Makey. After that, I taught them the basics with Scratch, since there are tons of fun tutorials for kids. Right now, I’m building a Raspberry Pi-powered robot with my 10yo (basically it’s a poor man’s Lego Mindstorm).

The key is fun. The focus is much more on ‘building something together’ than ‘I’ll learn you how to code’. I’m pretty sure that if I were to press them into learning how to code it will only put them off. Sometimes we go for weeks without building on the robot, and all of the sudden she will ask me to work on it with her again.

My son is sailing through his Computer Science classes at school because of some webmaking and ‘coding’ stuff we did when he was younger. He’s seldom interested, however, if I want to break out the Raspberry Pi and have a play.

At the end of the day, it’s meeting them where they’re at. If they show an interest, run with it!

Source: Hacker News

10 breakthrough technologies for 2018

I do like MIT’s Technology Review. It gives a glimpse of cool future uses of technology, while retaining a critical lens.

Every year since 2001 we’ve picked what we call the 10 Breakthrough Technologies. People often ask, what exactly do you mean by “breakthrough”? It’s a reasonable question—some of our picks haven’t yet reached widespread use, while others may be on the cusp of becoming commercially available. What we’re really looking for is a technology, or perhaps even a collection of technologies, that will have a profound effect on our lives.

Here’s the list of their ‘breakthrough technologies’ for 2018:

  1. 3D metal printing
  2. Artificial embryos
  3. Sensing city
  4. AI for everybody
  5. Dueling neural networks
  6. Babel-fish earbuds
  7. Zero-carbon natural gas
  8. Perfect online privacy
  9. Genetic fortune-telling
  10. Materials’ quantum leap

It’s a fascinating list, partly because of the names they’ve given (‘genetic fortune-telling’!) to things which haven’t really acquired mainstream labels yet. Worth exploring in more detail, as they flesh out each one of these in a reasonably lengthy article.

Source: MIT Technology Review

Firefox OS lives on in The Matrix

I still have a couple of Firefox OS phones from my time at Mozilla. The idea was brilliant: using the web as the platform for smartphones. The execution, in terms of the partnership and messaging to the market… not so great.

Last weekend, I actually booted up a device as my daughter was asking about ‘that orange phone you used to let me play with sometimes’. I noticed that Mozilla are discontinuing the app marketplace next month.

All is not lost, however, as open source projects can never truly die. This article reports on a ‘fork’ of Firefox OS being used to resurrect one of my favourite-ever phones, which was used in the film The Matrix:

Quietly, a company called KaiOS, built on a fork of Firefox OS, launched a new version of the OS built specifically for feature phones, and today at MWC in Barcelona the company announced a new wave of milestones around the effort that includes access to apps from Facebook, Twitter and Google in the form of its Voice Assistant, Google Maps, and Google Search; as well as a list of handset makers who will be using the OS in their phones, including HMD/Nokia (which announced its 8110 yesterday), Bullitt, Doro and Micromax; and Qualcomm and Spreadtrum for processing on the inside.

I think I’m going to have to buy the new version of the Nokia 8110 just… because.

Source: TechCrunch


The punk rock internet

This kind of article is useful in that it shows to a mainstream audience the benefits of a redecentralised web and resistance to Big Tech.

Balkan and Kalbag form one small part of a fragmented rebellion whose prime movers tend to be located a long way from Silicon Valley. These people often talk in withering terms about Big Tech titans such as Mark Zuckerberg, and pay glowing tribute to Edward Snowden. Their politics vary, but they all have a deep dislike of large concentrations of power and a belief in the kind of egalitarian, pluralistic ideas they say the internet initially embodied.

What they are doing could be seen as the online world’s equivalent of punk rock: a scattered revolt against an industry that many now think has grown greedy, intrusive and arrogant – as well as governments whose surveillance programmes have fuelled the same anxieties. As concerns grow about an online realm dominated by a few huge corporations, everyone involved shares one common goal: a comprehensively decentralised internet.

However, these kinds of articles are very personality-driven, and the little asides made by the article’s author paint those featured as a bit crazy and the whole idea as a bit far-fetched.

For example, here’s the section on a project which is doing some pretty advanced tech while avoiding venture capitalist money:

In the Scottish coastal town of Ayr, where a company called MaidSafe works out of a silver-grey office on an industrial estate tucked behind a branch of Topps Tiles, another version of this dream seems more advanced. MaidSafe’s first HQ, in nearby Troon, was an ocean-going boat. The company moved to an office above a bridal shop, and then to an unheated boatshed, where the staff sometimes spent the working day wearing woolly hats. It has been in its new home for three months: 10 people work here, with three in a newly opened office in Chennai, India, and others working remotely in Australia, Slovakia, Spain and China.

I get the need to bring technology alive for the reader, but what difference does it make that their office is behind a branch of Topps Tiles? So what if the staff sometimes wear woolly hats? It just makes the whole thing out to be farcical. Which, of course, it’s not.

Source: The Guardian

The Project Design Tetrahedron

I had reason this week to revisit Dorian Taylor’s interview on Uses This. I fell into a rabbithole of his work, and came across a lengthy post he wrote back in 2014.

I’ve given considerable thought throughout my career to the problem of resource management as it pertains to the development of software, and I believe my conclusions are generalizable to all forms of work which is dominated by the gathering, concentration, and representation of information, rather than the transportation and arrangement of physical stuff. This includes creative work like writing a novel, painting a picture, or crafting a brand or marketing message. Work like this is heavy on design or problem-solving, with negligible physical implementation overhead. Stuff-based work, by contrast, has copious examples in mature industries like construction, manufacturing, resource extraction and logistics.

As you can see in the image above, he argues that the traditional engineering approach of having things either:

  • Fast and Good
  • Cheap and Fast
  • Good and Cheap

…is wrong, given a lean and iterative design process. You can actually make things that are immediately useful (i.e. ‘Good’), relatively Cheap, and do so Fast. The thing you sacrifice in those situations (hence the ‘tetrahedron’) is Predictable Results.

If you can reduce a process to an algorithm, then you can make extremely accurate predictions about the performance of that algorithm. Considerably more difficult, however, is defining an algorithm for defining algorithms. Sure, every real-world process has well-defined parts, and those can indeed be subjected to this kind of treatment. There is still, however, that unknown factor that makes problem-solving processes unpredictable.

In other words, we live in an unpredictable world, but we can still do awesome stuff. Nassim Nicholas Taleb would be proud.

Source: Dorian Taylor