Parochial data – the failure to share data or its analysis – can be lethal. In 2008, in my book Relevance (Wiley), I published a case study of the 2005 explosion at BP’s Texas City refinery that killed 15 people and injured 180. One key point was that the firm had recently devolved strategy to its operating units. The other was that refinery workers at another BP unit in Whiting, Indiana, had just reported the very conditions that caused the Texas City explosion: “preventive maintenance was seldom practiced, the refinery had a ‘run until it breaks’ mentality, and the workforce had a great deal of experience running equipment with ‘Band-Aids.’” The US Chemical Safety Board concluded in 2006: “If you’re not learning from near misses, you’re not in a position to prevent major disasters like the one in Texas City.” I had no idea at the time of publication that Deepwater Horizon was just two years away.
If there is one near miss per year across ten independent operating units, a risk manager monitoring all of them will get useful warnings every year. If the units don’t or can’t share data, risk managers at each will get important warnings only once a decade. BP’s decision to devolve strategy to its operating units appears to have had the latter effect despite a number of precautions. Deepwater Horizon happened at a time when one of the largest engineering companies in the world had to some extent reverted to parochial data.
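The arithmetic behind that warning gap can be made concrete with a toy simulation – a minimal sketch, not drawn from BP’s data, assuming each of ten units independently experiences a near miss about once a decade (one per year across the group):

```python
import random

random.seed(0)
UNITS, YEARS, RATE_PER_UNIT = 10, 1000, 0.1  # ~one near miss per decade per unit

pooled_warning_years = 0          # years a central risk manager sees any warning
unit_warning_years = [0] * UNITS  # years each parochial manager sees their own

for _ in range(YEARS):
    misses = [random.random() < RATE_PER_UNIT for _ in range(UNITS)]
    if any(misses):
        pooled_warning_years += 1   # shared data: one near miss anywhere warns everyone
    for u, hit in enumerate(misses):
        unit_warning_years[u] += hit  # parochial data: each unit learns only from itself

print(f"central manager warned in {pooled_warning_years / YEARS:.0%} of years")
print(f"average unit manager warned in {sum(unit_warning_years) / UNITS / YEARS:.0%} of years")
```

Under these assumed rates, the pooled view produces a warning in roughly two of every three years (1 − 0.9¹⁰ ≈ 65%), while each siloed manager sees one in only about one year in ten – the order-of-magnitude difference the paragraph above describes.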
Element AI – Yoshua Bengio’s Montreal-based startup dedicated to leveling the AI playing field – aims to fix this. Tech firms subscribing to Element AI’s network will contribute to and license what is intended to be a state-of-the-art AI platform. Membership will also give them access to a roster of AI experts – which, in turn, will let those experts earn money and equity from project work without abandoning their academic careers, as the roughly 40 Carnegie Mellon engineers Uber recently hired away had to do.
The thought that small tech startups may have access to models trained with as much data as Google’s DeepMind is heartening. Not guaranteed, of course, but heartening. It raises the question, however, of whether open-source and syndicated-data AI could reach a point where the best insights about risks – if not about individual customer behavior – are equally available to all. Might AI-driven technology ever reach a “post-competitive” state in which everyone has access to the best answer to most questions, or at least to questions outside of personalized advertising?
From the perspective of assumption-based metrics and other hypothesis-driven approaches to learning from results, the answer has to be “No.” We will always be free to dream up and test new assumptions – or conjectures, hypotheses, or guesses – about what drives an outcome that matters to us. And if some of those assumptions concern success factors so original that few people have ever measured them, then no neural net will have had a chance to weigh them and use them to improve its predictions. The utopian vision of democratic AI will not keep entrepreneurs from having new ideas and squeezing commercial advantage out of them. It will simply make it harder to compete – at least outside of advertising – on data-gathering alone.