NOW AVAILABLE The draft of my book on Organizational Intelligence is now available on LeanPub. Please support this development by subscribing and commenting. Thanks.

Sunday, March 18, 2018

Security is downstream from strategy

Following @carolecadwalla's latest revelations about the misuse of personal data involving Facebook, she gets a response from Alex Stamos, Facebook's Chief Security Officer.

So let's take a look at some of his hand-wringing Tweets.

I'm sure many security professionals would sympathize with this. Nobody listens to me. Strategy and innovation surge ahead, and security is always an afterthought.

According to his LinkedIn entry, Stamos joined Facebook in June 2015. Before that he had been Chief Security Officer at Yahoo!, which suffered a major breach on his watch in late 2014, affecting over 500 million user accounts. So perhaps a mere 50 million Facebook users having their data used for nefarious purposes doesn't really count as much of a breach in his book.

In a series of tweets he later deleted, Stamos argued that the whole problem was caused by the use of an API that everyone should have known about, because it was well-documented. As if his job was only to control the undocumented stuff.
Or as Andrew Keane Woods glosses the matter, "Don’t worry everyone, Cambridge Analytica didn’t steal the data; we were giving it out". By Monday night, Stamos had resigned.

In one of her articles, Carole Cadwalladr quotes the Breitbart doctrine
"politics is downstream from culture, so to change politics you need to change culture"
And culture eats strategy. And security is downstream from everything else. So much then for "by design and by default".

Carole Cadwalladr, ‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower (Observer, 18 Mar 2018) via @BiellaColeman

Carole Cadwalladr and Emma Graham-Harrison, How Cambridge Analytica turned Facebook ‘likes’ into a lucrative political tool (Guardian, 17 Mar 2018)

Jessica Elgot and Alex Hern, No 10 'very concerned' over Facebook data breach by Cambridge Analytica (Guardian, 19 Mar 2018)

Hannes Grassegger and Mikael Krogerus, The Data That Turned the World Upside Down (Motherboard, 28 Jan 2017) via @BiellaColeman

Justin Hendrix, Follow-Up Questions For Facebook, Cambridge Analytica and Trump Campaign on Massive Breach (Just Security, 17 March 2018)

Casey Johnston, Cambridge Analytica's leak shouldn't surprise you, but it should scare you (The Outline, 19 March 2018)

Nicole Perlroth, Sheera Frenkel and Scott Shane, Facebook Exit Hints at Dissent on Handling of Russian Trolls (New York Times, 19 March 2018)

Mattathias Schwartz, Facebook failed to protect 30 million users from having their data harvested by Trump campaign affiliate (The Intercept, 30 March 2017)

Andrew Keane Woods, The Cambridge Analytica-Facebook Debacle: A Legal Primer (Lawfare, 20 March 2018) via BoingBoing

Wikipedia: Yahoo data breaches

Related post: Making the World more Open and Connected (March 2018)

Updated 20 March 2018 with new developments and additional commentary

Friday, March 9, 2018

Fail Fast - Burger Robotics

As @jjvincent observes, integrating robots into human jobs is tougher than it looks. Four days after it was installed in a Pasadena, CA burger joint, Flippy the robot has been taken out of service for an upgrade. Turns out it wasn't fast enough to handle the demand. Does this count as Fail Fast?

Flippy's human minders have put a positive spin on the failure, crediting the presence of the robot for an unexpected increase in demand. As Vincent wryly suggests, Flippy is primarily earning its keep as a visitor attraction.

If this is a failure at all, what kind of failure is it? Drawing on earlier work by James Reason, Phil Boxer distinguishes between errors of intention, planning and execution.

If the intention for the robot is to improve productivity and throughput at peak periods, then the designers have got more work to do. And the productivity-throughput problem may be broader than just burger flipping: making Flippy faster may simply expose a bottleneck somewhere else in the system. But if the intention for the robot is to attract customers, this is of greatest value at off-peak periods. In which case, perhaps the robot already works perfectly.

Philip Boxer, ‘Unintentional’ errors and unconscious valencies (Asymmetric Leadership, 1 May 2008)

John Donohue, Fail Fast, Fail Often, Fail Everywhere (New Yorker, 31 May 2015)

Lora Kolodny, Meet Flippy, a burger-grilling robot from Miso Robotics and CaliBurger (TechCrunch 7 Mar 2017)

Brian Heater, Flippy, the robot hamburger chef, goes to work (TechCrunch, 5 March 2018)

James Vincent, Burger-flipping robot takes four-day break immediately after landing new job (Verge, 8 March 2018)

Related post Fail Fast - Why did the chicken cross the road? (March 2018)

Monday, January 15, 2018

Carillion Struck By Lightning

@NilsPratley blames delusion in the boardroom (on a grand scale, he says) for Carillion's collapse. "In the end, it comes down to judgments made in the boardroom."

A letter to the editor of the Financial Times agrees.
"This situation has been caused, in part, by the unprofessional, fatalistic and blasé attitude to contract risk management of some senior executives in the UK construction industry."

By no means the first company brought low by delusion (I've talked some about Enron on this blog, as well as in my book on organizational intelligence), and probably not the last.

And given that Carillion was the beneficiary of some very large public sector contracts, we could also talk about delusion and poor risk management in government circles. As @econtratacion points out, "the public sector had had information pointing towards Carillion's increasingly dire financial situation for a while".

As it happens, the Home Secretary was at the London Stock Exchange today, talking to female executives about gender diversity at board level. So I thought I'd just check the gender make-up of the Carillion board. According to the Carillion website, there were two female executives and two female non-executive directors in a board of twelve.

In the future, Amber Rudd would like half of all directors to be female. An earlier Government-backed review had recommended that at least a third should be female by 2020.

But compared to other large UK companies, the Carillion gender ratio wasn't too bad. "On paper, the directors looked well qualified", writes Kate Burgess in the Financial Times, noting that "the board ticked all the boxes in terms of good governance". But now even the Institute of Directors has expressed belated concerns about the effective governance at Carillion, and Burgess says the board fell into what she calls "a series of textbook traps".

So what kind of traps were these? The board paid large dividends to the shareholders and awarded large bonuses to themselves and other top executives, despite the fact that key performance targets were not met, and there was a massive hole in the pension fund. In other words, they looked after themselves first and the shareholders second, and to hell with pensioners and other stakeholders. Meanwhile, Larry Elliott notes that the directors of the company took steps to shield themselves from financial risk. These are not textbook traps, they are not errors of judgement, they are moral failings.

Of course we shouldn't rely solely on the moral integrity of company executives. If there is no regulation or regulator able to prevent a board behaving in this way, this points to a fundamental weakness in the financial system as a whole. As @RSAMatthew writes,
"There are many culprits in this tale. Lazy or ideologically blinkered ministers, incompetent public sector commissioners, cynical private sector providers signing 'suicide bids' on the assumption that they can renegotiate when things go wrong and, as always, a financial sector willing to arbitrage any profit regardless of consequences or ethics."

There is a strong case that diversity mitigates groupthink - but as I've argued in my earlier posts, this needs to be real diversity not just symbolic or imaginary diversity (ticking boxes). And even if having more women or ethnic minorities on the board might possibly reduce errors of judgement, women as well as men can have moral failings. It's as if we imagined that Ivanka Trump was going to be a wise and restraining influence on her father, simply because of her gender.

As it happens, the remuneration director at Carillion was a woman. We may never know whether she was coerced or misled by her fellow directors or whether she participated enthusiastically in the gravy. But we cannot say that having a woman in that position is automatically going to be better than having a man. Women on boards may be a necessary step, but it is not a sufficient one.

Martin Bentham, Amber Rudd: 'It makes no sense to have more men than women in the boardroom' (Evening Standard, 15 January 2018)

Mark Bull, A lesson on risk from Carillion’s collapse (FT Letters to the Editor, 16 January 2018)

Kate Burgess, Carillion’s board: misguided or incompetent? (FT, 17 January 2018) HT @AidanWard3

Larry Elliott, Four lessons the Carillion crisis can teach business, government and us (Guardian, 17 January 2018)

Vanessa Fuhrmans, Companies With Diverse Executive Teams Posted Bigger Profit Margins, Study Shows (WSJ, 18 January 2018)

Simon Goodley, Carillion's 'highly inappropriate' pay packets criticised (Guardian, 15 January 2018)

Nils Pratley, Blame the deluded board members for Carillion's collapse (Guardian, 15 January 2018)

Albert Sánchez-Graells, Some thoughts on Carillion's liquidation and systemic risk management in public procurement (15 January 2018)

Rebecca Smith, Women should hold one third of senior executive jobs at FTSE 100 firms by 2020, says Sir Philip Hampton's review (City A.M., 6 November 2016)

Matthew Taylor, Is Carillion the end for Public Private Partnerships? (RSA, 16th January 2018)

Related posts

Explaining Enron (January 2010)
The Purpose of Diversity (January 2010)
Organizational Intelligence and Gender (October 2010)
Delusion and Diversity (October 2012)
Intelligence and Governance (February 2013)
More on the Purpose of Diversity (December 2014)

Updated 25 January 2018

Friday, November 24, 2017

Pax Technica - The Conference

#paxtechnica Today I was at the @CRASSHlive conference in Cambridge to hear a series of talks and panel discussions on The Implications of the Internet of Things. For a comprehensive account, see @LaurieJ's livenotes.

When I read Philip Howard's book last week, I wondered why he had devoted so much of his book to such internet phenomena as social media and junk news, when the notional topic of the book was the Internet of Things. His keynote address today made the connection much clearer. While social media provides data about attitudes and aspirations, the internet of things provides data about behaviour. When these different types of data are combined, this produces a much richer web of information.

For example, Howard mentioned a certain coffee company that wanted to use IoT sensors to track the entire coffee journey from farm to disposed cup. (Although another speaker expressed scepticism about the value of this data, arguing that most of the added value of IoT came from actuators rather than sensors.)

To the extent that the data involves personal information, this raises political concerns. Some of the speakers today spoke of surveillance capitalism, and there were useful talks on security and privacy. (See separate post on Risk and Security)

In his 2014 essay on the Internet of Things, Bruce Sterling characterizes the Internet of Things as "an epic transformation: all-purpose electronic automation through digital surveillance by wireless broadband". According to Sterling, powerful stakeholders like the slogan 'Internet of Things' "because it sounds peaceable and progressive".

Peaceable? Howard uses the term Pax. This refers to a period in which the centre is stable and relatively peaceful, although the periphery may be marked by local skirmishes and violence (p7). His historical examples are the Pax Romana, the Pax Britannica and the Pax Americana. He argues that we are currently living in a similar period, which he calls Pax Technica.

For Howard, "a pax indicates a moment of agreement between government and the technology industry about a shared project and way of seeing the world" (p6). This seems akin to Gramsci's notion of cultural hegemony, "the idea that the ruling class can manipulate the value system and mores of a society, so that their view becomes the world view or Weltanschauung" (Wikipedia).

But whose tech? Howard has documented significant threats to democracy from foreign governments using social media bots to propagate junk news. There are widespread fears that this propaganda has had a significant effect on several recent elections. And if the Russians are often mentioned in the context of social media bots and junk news, the Chinese are often mentioned in the context of dodgy Internet of Things devices. While some political factions in the West are accused of collaborating with the Russians, and some commercial interests (notably pharma) may be using similar propaganda techniques, it seems odd to frame this as part of a shared project between government and the technology industry. Howard's research indicates a new technological cold war, in which techniques originally developed by the authoritarian regimes to control their own citizens are repurposed to undermine and destabilize democratic regimes.

David Runciman talked provocatively about government of the things, by the things, for the things. (Someone from the audience linked this, perhaps optimistically, to Bruno Latour's Parliament of Things.) But Runciman's formulation foregrounds the devices (the "things") and overlooks the relationships behind the devices (the "internet of"). (This is related to Albert Borgmann's notion of the Device Paradigm.) As consumers we may spend good money on products with embedded internet-enabled devices, then we discover that these devices don't truly belong to us but remain loyal to their manufacturers. They monitor our behaviour, they may refuse to work with non-branded spare parts, or they may terminate service altogether. As Ian Steadman reports, it's becoming more and more common for everyday appliances to have features we don't expect. (Worth reading Steadman's article in full. He also quotes some prescient science fiction from Philip K Dick's 1969 novel Ubik.) "Very soon your house will betray you" warns architect Rem Koolhaas (Guardian 12 March 2014).

There are important ethical questions here, relating to non-human agency and the Principal-Agent problem.

But the invasion of IoT into our lives doesn't stop there. McGuirk worries that "our countless daily actions and choices around the house become what define us", and quotes a line from Dave Eggers' 2013 novel, The Circle

"Having a matrix of preferences presented as your essence, as the whole you? … It was some kind of mirror, but it was incomplete, distorted."
So personal identity and socioeconomic status may become precarious. This needs more thinking about. In the meantime, here is a quote from Teston.

"Wearable technologies ... are non-human actors that interact with other structural conditions to determine whose bodies count."

Related Posts

Witnessing Machines Built in Secret (November 2017)
Pax Technica - The Book (November 2017)
Pax Technica - On Risk and Security (November 2017)


Dan Herman, Dave Eggers' "The Circle" — on tech, big data and the human component (Metaweird, Oct 2013)

Philip Howard, Pax Technica: How The Internet of Things May Set Us Free or Lock Us Up (Yale 2015)

Laura James, Pax Technica Notes (Session 1, Session 2, Session 3, Session 4)

Justin McGuirk, Honeywell, I’m Home! The Internet of Things and the New Domestic Landscape (e-flux #64 April 2015)

John Naughton, 95 Theses about Technology (31 October 2017)

Ian Steadman, Before we give doors and toasters sentience, we should decide what we're comfortable with first (New Statesman, 10 February 2015)

Bruce Sterling, The Epic Struggle of the Internet of Things (2014). Extract via BoingBoing (13 Sept 2014)

Christa Teston, Rhetoric, Precarity, and mHealth Technologies (Rhetoric Society Quarterly, 46:3, 2016) pp 251-268 

Wikipedia: Cultural Hegemony, Device Paradigm, Hegemony, Principal-Agent problem

Sunday, April 9, 2017

Creative Tension in the White House

In his 1967 book on Organizational Intelligence, Harold Wilensky praises President Franklin Roosevelt for his unorthodox but apparently effective management style.
"Roosevelt devised an administrative structure that would baffle any conventional student of public administration." (p53)

In contrast with FDR's approach, Wilensky notes some episodes where White House intelligence systems were not fit for purpose, including Korea (Truman) and the Bay of Pigs (Kennedy).

What about President Trump's approach? @tonyjoyce suggests that Trump is failing FDR's first construct - checking and balancing official intelligence vs unorthodox sources. However, Reuters (via the Guardian) quotes Republican strategist Charlie Black, who believes Trump’s White House reflects his traditional approach to running his business. “He’s always had a spokes-to-the-wheel management style,” said Black. “He wants people with differing views among the spokes.”


Reuters, Kushner and Bannon agree to 'bury the hatchet' after White House peace talks (Guardian, 9 April 2017)

Related posts

Delusion and Diversity (October 2010)
The Art of the New Deal - Trump and Intelligence (February 2017)
Another Update on Deconfliction (April 2017)

Sunday, April 2, 2017

Linear Thought

Various concerns have been raised about Lt. Gen. Michael Flynn, previously described as "disruptive" by a former Pentagon official, and now the subject of heated investigation and speculation around his short-lived role in the Trump administration, his alleged links with Russia and other countries, and his alleged obsessions about various topics.

According to the Guardian, US and UK intelligence officers were also anxious about Flynn's capacity for "linear thought".

I guess most people will interpret this concern as "insufficient capacity". When I searched for "linear thinking" on the internet, I found a number of pages that contrasted linear thinking with various forms of supposedly bad thinking, such as "fragmented thinking". I also found pages that tried to divide people into two camps - the scientific "leftbrain" types who think in straight lines, and the artistic "rightbrain" types who think in circles.

However, systems thinkers might be concerned about someone at that level having too much capacity for linear thought. (As one might be concerned about someone's capacity for gossip or deception.) In a previous post on this blog, I defended Flynn's former boss, Gen. Stanley McChrystal (labelled an "ill-fated iconoclast" by James Kitfield) against the claim that he was not a systems thinker. (This claim was based on a remark McChrystal had made about a subsequently notorious systems dynamics diagram. I argued that McChrystal's remark could have been made either by someone who doesn't get systems thinking, or at the other extreme by someone who really gets systems thinking.)

The question here is about greater or lesser capacity for various kinds of thinking, because I'm trying to avoid the fallacy (identified by @cybersal) of categorizing people as this or that type of thinker. She rightly insists on seeing systems thinking not as an all-or-nothing affair but "as a lens to be applied in a particular type of situation".

By the way, Flynn himself has appeared on this blog before. In January 2010, using the lens of organizational intelligence, I reviewed his report on Fixing Intel. While I was sceptical about some of his recommendations, I can affirm that the report showed considerable capacity for systems (non-linear) thinking. Make of that what you will.


Phillip Carter, What is Michael Flynn's game? (Slate, 31 March 2017)

Luke Harding et al, Michael Flynn: new evidence spy chiefs had concerns about Russian ties (Guardian, 31 March 2017)

James Kitfield, Flynn’s Last Interview: Iconoclast Departs DIA With A Warning (Breaking Defense, 7 August 2014)

Stanley McChrystal, The military case for sharing knowledge (TED2014, March 2014)

Stan McChrystal, Career Curveballs: No Longer A Soldier (22 April 2014)

Greg Miller and Adam Goldman, Head of Pentagon intelligence agency forced out, officials say (Washington Post, 30 April 2014)

Related Blogposts

A Job Description for Systems Thinking (November 2009)
Making Intelligence Relevant (January 2010)
Understanding Complexity (July 2010)

Wednesday, March 1, 2017

Decision-Making Models

In my previous discussion of the ACPO national decision model (May 2014), I promised to return to the methodological question, namely what theories of decision-making would be relevant to NDM and any other decision models. I have just happened upon a doctoral thesis by Maxwell Mclean looking at the decision-making by coroners, which analyses local variation in coronial outcomes at three decision-making stages: whether to report the death, whether to advance to inquest, and the choice of inquest conclusion.

Mclean notes that there is no decision-making model for coroners equivalent to the police national decision model and focussed on standards and consistency of outcome. He finds other examples of decision-making models in nursing (Lewenson and Truglio-Londrigan, 2008; Husted and Husted, 1995; Jasper, Rosser and Mooney, 2013); social work (O’Sullivan, 2011; Taylor, 2010); and probation work (Carter, 1967; Rosecrance, 1985). However, several of these are descriptive models rather than normative models.

Within the professions mentioned by Mclean, I found a lot more work on evidence-based nursing as well as some interesting international discussions on decision-making within offender supervision. Looking further afield, I was interested to find an article about a decision-making model in the US Army, but this turned out to be merely a polemical article by a former Navy Seal advocating the use of Design Thinking.

Rosecrance introduces an interesting concept of the Ball Park, where a professional decision is influenced by the anticipated reaction of a more senior professional. For example, the decisions of a probation officer are not solely designed to achieve the desired outcomes for the client, but also designed to meet the approval of (1) judges, (2) prosecuting attorneys, and (3) probation supervisors. When a recommendation seems likely to meet the approval of these three entities, it is said to be "in the ball park". The "ball park" concept is also used in sales negotiations, and this hints at the idea that the focus here is on "selling" (or at least defending) the decision rather than just making it.
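Rosecrance's ball-park test lends itself to a toy sketch: a recommendation is "in the ball park" only when every anticipated reviewer would be expected to approve it. The reviewers and approval rules below are invented for illustration and are not taken from Rosecrance or Mclean.

```python
# Toy model of Rosecrance's "ball park": a probation officer's
# recommendation stands only if every anticipated reviewer -- judge,
# prosecutor, probation supervisor -- would be expected to approve it.
# All names, fields and rules here are hypothetical.

APPROVAL_RULES = {
    "judge": lambda rec: rec["severity"] >= rec["offence_gravity"] - 1,
    "prosecutor": lambda rec: rec["severity"] >= rec["offence_gravity"],
    "supervisor": lambda rec: rec["follows_guidelines"],
}

def in_ball_park(recommendation):
    """A recommendation is 'in the ball park' when every anticipated
    reviewer is expected to approve it."""
    return all(rule(recommendation) for rule in APPROVAL_RULES.values())

rec = {"severity": 3, "offence_gravity": 3, "follows_guidelines": True}
print(in_ball_park(rec))  # True: all three anticipated approvals are met
```

The point the sketch makes is that the officer is optimizing for anticipated approval, not directly for client outcomes - which is exactly why the concept shades into "selling" the decision.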

Coming back to the police, this frames the NDM not just as a way of making the best decision but also as a way of avoiding censure if anything goes wrong. See my post on the National Decision Model and Lessons Learned (February 2017).

Miranda Boone and Martine Evans, Offender supervision and decision-making in Europe (Offender Supervision in Europe: Decision-Making and Supervision Working Group, 2013)

Jeff Boss, The Army's New Decision-Making Model (Forbes, 8 August 2014)

Carter, R.M. (1967). The presentence report and the decision making process. Journal of Research in Crime and Delinquency, 4, 203-211.

Jasper, M., Rosser, M., Mooney, G. (Eds.) (2013). Professional Development, Reflection and Decision-Making in Nursing and Health Care (2nd ed.). Swansea: Wiley Blackwell.

Husted, G.L. and Husted, I.H. (1995). Ethical decision-making in nursing (2nd ed.). St Louis: Mosby.

Lewenson, S.B. and Truglio-Londrigan, M. (2008). Decision-Making in Nursing, thoughtful approaches for practice. London: Jones and Bartlett Publishers International.

Maxwell Mclean, The Coroner in England and Wales; Coronial Decision-­Making and Local Variation in Case Outcomes (Doctoral Thesis, University of Huddersfield, 2015)

O'Sullivan, T. (2011). Decision making in social work (2nd ed.). Basingstoke: Palgrave.

Rosecrance, J. (1985). The Probation Officers' Search for Credibility: Ball Park Recommendations. Journal of Research in Crime and Delinquency, 31(4), 539-554.

Mooi Standing, Perceptions of clinical decision-making: a matrix model (May 2010). This appears to be a chapter from Mooi Standing (ed) Clinical Judgement and Decision-Making in Nursing and Inter-professional Healthcare (McGraw Hill, 2010)

Taylor, B. (2010). Professional Decision-Making in Social Work. Exeter: Learning Matters.

Carl Thompson et al, Nurses, information use, and clinical decision making—the real world potential for evidence-based decisions in nursing (Evidence-Based Nursing Vol 7 No 3, July 2004)

Related posts
National Decision Model (May 2014)
National Decision Model and Lessons Learned (Feb 2017)

Updated 4 March 2017