What the betting markets are saying …. It’s complicated

I am pathologically fascinated by political probabilities, and have been for easily 20 years: my first big speculative win in life was to “buy” Labour seats in the 1997 General Election at around 350, and I have been hooked ever since.

Here are some of the current odds.

  1. A meaningful vote to pass in 2019  ~31%
  2. No Deal in 2019 ~ 18%
  3. UK to leave the EU by end of October ~28%
  4. UK to leave the EU by the end of December ~34%
  5. Brexit to happen before a General Election ~33%
  6. General Election this year ~68%
  7. Conservatives to win majority in next GE ~ 32% (Labour: 6%)
  8. Conservatives to win most seats ~70%
  9. GNU! Ken Clarke, Harriet Harman or Margaret Beckett to be next PM ~19%

There are a lot of overlapping probabilities in there that I would love to disentangle.  So here I am trying, looking at the possibilities (excluding 5 and 8).

 

What a mess, eh? But I think it gets across a few things. A Government of National Unity means no Brexit before 2020, but does NOT mean no MV passing – it might put through one with a referendum rider, for example. And a GNU might also be followed by a Tory Majority in a 2019 election.  No Deal Brexit might happen before end of October 2019, or it might happen in the months after.  There is a chance it happens even if a Meaningful Vote passes (See Maddy Thimont Jack). And it might happen if the Tories win a working majority – their mandate might fail to win over something workable from the EU.

Here it is with a few of the more salient probabilities given some kind of a name. (It goes without saying that the shape is not built to be proportional to the size of the probabilities.  That would be impossible for my poor brain. )

Anyway, my intention when sitting down to do this was to attempt some smartypants Bayesian logic to work out what one or two of the derived probabilities might be from the above.  For example, what does the Market think is the chance of the Benn Act forcing an extension? What does the market think is the chance of Johnson getting his new deal past the EU and then Parliament? Not the 31% chance above – some of those odds refer to meaningful votes on OTHER scenarios passing (like with a referendum rider). But right now all it has done is just warn me of the multiple different possibilities we as a country currently face.  I labelled 11 scenarios there, and I did not cover all of them by any means.
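To give a flavour of the arithmetic I have in mind, here is a toy sketch in Python using the odds listed above. The subset assumption (that any 2019 No Deal exit happens by the end of December) is mine, purely for illustration; the real scenarios overlap far more messily.

```python
# Toy sketch of the kind of derived probabilities I mean, using the market
# odds listed above. The subset assumption is mine, for illustration only.

odds = {
    "mv_passes_2019": 0.31,
    "no_deal_2019": 0.18,
    "leave_by_oct": 0.28,
    "leave_by_dec": 0.34,
    "ge_in_2019": 0.68,
}

# Chance of leaving in November or December: difference of two cumulative odds.
leave_nov_dec = odds["leave_by_dec"] - odds["leave_by_oct"]

# Chance of NOT leaving by 31 October -- some mix of extension and revocation.
no_exit_by_oct = 1 - odds["leave_by_oct"]

# IF every 2019 No Deal exit happens by end of December (an assumption),
# the implied chance that a 2019 departure is a No Deal one:
no_deal_given_leave = odds["no_deal_2019"] / odds["leave_by_dec"]

print(f"Leave in Nov/Dec:                  {leave_nov_dec:.0%}")
print(f"No exit by 31 October:             {no_exit_by_oct:.0%}")
print(f"No Deal, conditional on 2019 exit: {no_deal_given_leave:.0%}")
```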

Perhaps if I earn myself the brain space at some other point I will try to work out some flow chart of possibilities, but I know others have done this, and better than I can. I will be humble and search them out first.

The defeat of the Treasury must not be final

For most advisers toiling within government, the standard daily routine is simple: “wake up/go to work/try to do things/get told you can’t by the Treasury/grumble a lot/go home”.

OK, I exaggerate: there is the whole maddening business of government by collective agreement to wade through. This means that any other department, from the mighty Home Office down to the, er, plucky Welsh Office might throw a spanner in your works.

But mostly it’s the Treasury telling you No, and then twice a year itself getting to announce whatever it damned well wants under some convention called “we control the purse strings, what you want to do about it?”  Usually it’s the likes of a beer duty cut, to show that They Get People (the omnishambles Budget casts a looong shadow). If you can detect notes of resentment in my preamble, well duh; I even wrote a pamphlet (with Stian) calling for HMT to be broken up.

So I ought to be feeling some smidgeon of schadenfreude at the seeming overthrow of the once mighty HMT, in particular with regard to industrial strategy.  First with Vince Cable, and then later with Theresa May, half of my job for 6 years was building the case for economic intervention, and always against the sharp brains and ruthless negotiators of Horse Guards.

Now it feels like one of those episodes of Game of Thrones where a longstanding warlord is suddenly, ruthlessly murdered. Recent days have seen staggering announcements from my old stamping ground, now under the hitherto dry-ish Andrea Leadsom: a billion for electric vehicles, £200m for Fusion, a few more forests … and all of this against a backdrop of disintegrating fiscal rules. This is all before much of what the PM promised in his leadership charge is even costed.  All of these were ideas being worked up during Theresa May’s time, some amongst those refused by the then-Chancellor in the dying days (hard to believe, but nuclear fusion wheezes don’t just spring up from a few feverish caffeinated hours of pre-conference brainstorming. Forest wheezes, coin wheezes, maybe … ).

This is politics, and it makes sense that the new administration frames everything as a pure consequence of their thrusting optimism about Britain’s future, even if this is somewhat unfair.  Well, forget the “somewhat”: for me this is the most galling paragraph of the weekend so far:

“Mrs Leadsom is a Brexit true believer. While her predecessor Greg Clark agonised about its impact, she sees a path to a shining future in which the UK will lead the world in artificial intelligence, green energy, automation and life sciences.”

Sorry to shout, but WHAT? Greg Clark, with the close and energetic support of Downing Street, led the charge in setting goals and missions for industrial strategy around the life sciences, Artificial Intelligence, the Future of Mobility, green energy, the Ageing Society, and much more. Rather unfair to imply that his widely-shared concerns about damage from the wrong Brexit betray a lack of belief in our technological potential. Greg fought hard for R&D.

But as I said, politics – and there is some cynical truth to the implicit claim, “if it were not for us, these things would not happen”.  The current prime minister appears to be de facto First Lord of the Treasury to a degree that his predecessor wasn’t. Good advisers have a talent for feeling the winds of change; four years ago, the promise-threat of endless austerity was the recipe for winning elections, and now it’s the opposite. Principle should not become blind.  You can see no starker example than the shift of the current Chancellor – today’s Santa Claus was four years ago planning to convert grants to loans, not just for student maintenance but even for innovation, despite being warned of “potentially disastrous consequences”.  People who see George Osborne’s downfall as entirely stemming from the EU referendum forget the knock he had already taken from trying to cut tax credits too hard in 2015. The wind was already changing, and I suspect our current PM had detected it.

So, industrial strategists rejoice? I would throw in a few words of caution. I always felt that a good industrial strategy was one that beat the Treasury on the merits of the argument, not through political force majeure. Principles like:

  • avoid distorting competition
  • respect value for money
  • go with the grain of the country’s long term direction (ageing, services-led, inherently globalist, knowledge economy)
  • be realistic about gaming, crowding out, additionality
  • you are being lobbied, and everything seems more convincing than it really is

are worth heeding, always. There is a huge opportunity between mindless laissez faire and “do whatever the latest overpromoted Spad feels like”, and the processes, arm’s length organisations and continuous tedious challenge are part of finding that space.

I am reasonably sure that these announcements are good ones, because they have been brewing a long time. It makes sense to go for it on electric vehicles; I even like the Labour plan of massive subsidized loans to buy them (though bear in mind most of the cars bought will be foreign-made at first. And building three battery plants in the UK is punchy: I wonder if the Labour leadership realises that this will basically involve a lot of hard bargaining with the rare capitalists who are good at this). Nuclear fusion is a gamble worth taking: even if it is a one in ten shot, it is a technology that just might transform the whole planet’s energy system.

But, for the future, an industrial strategy constructed in a Treasury-free zone is a worrying thought. Its sudden, seeming defeat reminds me, unforgivably, of further Game of Thrones themes, in particular: the nature of the real enemy can shift.* Back in 2015, the people pointlessly urging us towards a fiscal surplus were a threat to national prosperity, in my view.  Now a far worse threat comes from those urging us on to a No Deal Brexit, and pretending it is somehow embracing the future.

Deciding you want a future filled with autonomous electric vehicles, AI-powered care assistants, green energy and quantum mechanics is the easy bit. Actually getting there means two things: staying relentlessly open to all the world’s best ideas, and being prepared to hear the word No a whole lot more. The Treasury may be a pain, but in the more recent chapters of this saga it has weirdly become one of the good guys.

*face it, it would have been helpful for Tywin Lannister to have taken on the Night King

When concentrating your vote flips over into being a disadvantage

There was a fascinating discussion on my Twitter timeline with Rob Ford, Will Jennings, Iron Economist and many other distinguished people, triggered by concerns about the Liberal Democrat revoke A50 policy.  In short: the concerns expressed by some are that the Liberal Democrats might get the total majority they would need to enact this Revoke with a mere 30-35% of the vote, and that would be way short of the 50% endorsement sought by those wanting a referendum.  And the fact that they could get this majority with just 35% was bolstered by the modelling I did ages ago in this popular blog post.

Which blog post I still stand by in outline, but developments since have shown up even more glitches in the system, including this feature: the LibDem seat total climbs very slowly at first, but then at some point it rockets as all sorts of seats fall.  What this highlights is how having a very evenly spread vote across all constituencies is a massive disadvantage below a certain threshold, and only flips over to being an advantage when you hit the 30s in terms of vote share.

There is a corollary: learning how to concentrate your vote share is essential if you want to go above a small number of seats, as a small party: contrast the fortunes of the SNP and UKIP in the 2015 election.

Since that post, I have written a number of others exploring ways in which a model user might concentrate the LibDem vote share and get a different result; generally speaking, the outcome was about 20-30 more seats for the LibDems when their vote share is in the high teens/low 20s. Again, just what you would expect.

What I thought I would also share before heading off for my nighttime cocoa: that same variable becomes a disadvantage for the LibDems if they are looking for a majority. In other words, they begin to pile up pointlessly large majorities rather than gain more seats – just as hit the Tories in 1997, say, or Labour in 2017, when their votes did not go as far as they might have in terms of seats.

Here is a graphical representation: first the behaviour of party seats when there is no use of “historical LibDemmyness” in the machine (solid line), and second the same relationship with a high degree of LibDemmyness and a little tactical voting.

The dotted line suggests a much higher threshold for the LibDems is needed to get a majority – but still in the 30s. Maybe 5 percentage points higher.  And none of this loopy, “450 seats plus” style outcome.  Another reason to doubt whether a purely smooth swing is what we might expect.

 

Trying to start a fight between the Bank of England and Resolution Foundation

It is excellent that the Resolution Foundation has embarked upon serious macro-economic wonkery. Their opening salvo – “Recession Ready?: Assessing the UK’s macroeconomic framework” – is as good an introduction to the state of play as you can find. They call it “the most comprehensive assessment of the UK’s macroeconomic policy framework since the financial crisis”. Damnit, they are right.
You could also argue that it is way overdue. The Resolution Foundation is “an independent think-tank focused on improving the living standards for those on low to middle incomes”. Around three years into its existence, the biggest macroeconomic event of the past 80 years hit the UK, causing Gross National Income to fall some 20% off trend. To calibrate that, any policy intervention that might raise GDP by 1% deserves a serious prize. Twenty times that is epoch-making. Future historians will wonder why we don’t go on about it more.
I have started and restarted this blog many times because a single post cannot assess something so comprehensive, and invariably does the thing a disservice. I will need to pick my target.
Anyway, first a shamefully short summary of the RF position:

  • Government macro support matters – the crisis might have been 16 percentage points worse without any
  • Of that 16 percent, the bulk came from monetary policy. The fiscal lever was maybe responsible for just 3 percentage points
  • Yet for the next recession (which RF thinks has never been more likely), monetary policy will have less juice, “reflecting what appears to be a secular decline in the level of interest rates”
  • Therefore, “fiscal policy needs to play a more active role which necessitates a change in the framework”

That is a lot to unpack. Each superficially reasonable, the positions above might together add up to something revolutionary – a shift in the framework bigger than anything since 1992, or possibly since the Thatcher revolution dethroned fiscal demand management.
Revolutionary conclusions should not go down as easy as buttered toast. Ten or twelve years of argument and reflection have led me to perceive varying positions out there that cannot be adjudicated as settled. I want to address this from the hypothetical point of view of an adviser telling a Chancellor whether they should follow this scheme and pursue “a change in the framework so that fiscal policy can play an explicit stabilisation role within a credible framework for achieving long-run debt sustainability and low and stable inflation.”
Here are some of the more minor objections they might entertain:

Is weak aggregate demand the problem? By assumption it is – the RF is explicit in saying it is discussing policies for a recession. But absent some really sharp events, this will remain a point of contention. Inflation is on target, and by some measures the economy is at capacity (e.g. record employment). Against this, some of us wonder if the capacity of the economy is endogenously determined – could we really have lost 20% of our know-how, irretrievably?

Don’t debt levels matter more? Chris Giles in the FT has reopened this, arguing that the commitment to have Debt/GDP falling is a damaging constraint on investment and should be ditched. But just four years ago, the view of Osborne’s Treasury was that outstanding debt was such a big deal that the government needed to target an absolute surplus by 2018. Relatedly …

Is a sterling crisis possible? Another big limit on the government getting to do what it wants is the ancient risk of the currency being repudiated – Britain as an emerging market country. Absurd, you might say; but that was before Brexit. It was what happened in the mid-1970s: no one was saying “hey, you borrow in your own currency, go nuts!”
I don’t personally think any of these objections should weigh too strongly – but they would all be considered in a traditional Treasury, and that Treasury is always worth bearing in mind.
Here is my larger problem: I am not convinced that fiscal policy would do the job the RF intends it to; more to the point, I don’t think the Bank thinks it would, in which case I am not sure the Bank’s “tacit cooperation” would be achieved. It very much depends on how you answer the question of how monetary and fiscal policy really interact. For me, this is the biggie and needs to be broken into sub-questions. When is fiscal policy effective at boosting demand?

A. Always.

B. During a crisis.

C. Only when the Bank “can’t boost the economy any more” and

D. Never

Why would you answer D? Well, go back to 2005. Rates are 4-5% or so and the government produces an unexpected £20bn fiscal boost, in spending and tax cuts. The Bank forecasts inflation to be on track (because that is its job). What does the Bank do? Not nothing; it already thinks aggregate demand is at the right level. Any uncalled-for increase in AD needs to be battled. So they tighten policy, until demand is basically back where it was. Fiscal policy has changed the composition of demand, but not its level (barring lags).
This is known in places as the Sumner Critique, and is acknowledged by RF, but in my view not emphasised enough. Until the Bank literally does not think it can in good faith forecast CPI inflation landing where it should, it does not think it is out of fuel. And as a result, it ought not to passively accept any fiscal boost.
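Here is the offset logic as a bare-bones sketch – not the Bank’s actual reaction function, just a toy in which a central bank targeting a level of demand leans against any fiscal surprise. The demand index, the £20bn-ish boost and the rate sensitivity are all illustrative numbers.

```python
# Bare-bones sketch of the offset: a central bank that targets a level of
# demand leans against any fiscal surprise. All numbers are illustrative.

def bank_offset(target, baseline, fiscal_boost, rate_sensitivity=10.0):
    """Rate rise needed to push demand back to target, given
    demand = baseline + fiscal_boost - rate_sensitivity * rate_rise."""
    return (baseline + fiscal_boost - target) / rate_sensitivity

baseline = 1000.0          # demand index, already where the Bank wants it
boost = 20.0               # the unexpected fiscal loosening
rate_rise = bank_offset(target=1000.0, baseline=baseline, fiscal_boost=boost)
demand_after = baseline + boost - 10.0 * rate_rise

# Composition of demand changes (more state, less private); the level does not.
print(f"Bank tightens by {rate_rise:.1f} points; demand ends at {demand_after:.0f}")
```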
This is not just a fairy tale, but literally how things turned out for 1994-2007. See the chart; in that period, the UK fiscal position swung from borrowing 7% to a net surplus, and then back to borrowing 2.5%; yet nominal GDP growth stayed absolutely steady. The fiscal stance was trumped by the Bank’s monetary stance. In the game of macro, the Bank moves last.

Are there other reasons to doubt fiscal policy? Here is one that niggles me, when I imagine advising that Chancellor. Suppose GDP growth is weak, and you plan a stimulus package of 1% to boost it. Pretend the Sumner critique does not apply. Fine; next year’s growth is 1% higher than it might have been – but your deficit is permanently higher. If you want to maintain that pace, do you not have to boost the deficit again? But an accelerating deficit is not sustainable. Ultimately, you have to reverse what you did and (ceteris paribus) you are back at the low growth you didn’t want in the first place. AFAIK, the case has to rest on a bunch of other nice things that will happen – perhaps one year of extra government-inspired growth boosts the private sector’s confidence permanently, or helps the supply side, or hits the exact right spot when growth was temporarily weak (assuming there is a predictable cycle). But they are all rather nice assumptions.
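Here is that niggle as crude arithmetic – a toy that assumes a multiplier of one and, as the paragraph does, ignores the Sumner critique:

```python
# The niggle in crude numbers: assume (generously) a multiplier of one and no
# monetary offset. A 1%-of-GDP stimulus buys one year of 1% faster growth; to
# keep growth at that faster pace you must add another 1% of GDP every year.

extra_deficit = 0.0
for year in range(1, 6):
    extra_deficit += 1.0   # a further 1% of GDP of stimulus to sustain the pace
    print(f"Year {year}: extra deficit now {extra_deficit:.0f}% of GDP")

# The deficit has to keep climbing just to stand still on growth -- which is
# why the case ends up leaning on confidence, supply-side or timing effects.
```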
Obviously, this critique falls away if monetary policy is really ineffective, and for some people that is true when rates are low. We are at risk of being in position C. Monetary policy “runs out of fuel”. But at some level that cannot be the case. Someone in possession of a printing press can surely boost nominal (i.e. cash) growth in an economy, at some limit.
What the authors half-assume is that monetary policy is really about lowering interest rates (either spot or long) and that this basically induces people/business to spend money. Other channels are brought up from time to time, such as portfolio rebalancing or expectations or commitment to being lower for longer (it is an excellent, comprehensive document) but the basic stance is that we hit diminishing returns as we get near zero.
I tend not to take this view, being over-influenced by the market-monetarists. This is no time to try to reincarnate all those arguments: I recommend reading Sumner in particular, pieces like this or this. But I am also very influenced by two other things. First, there is the record of Japan since Abenomics, which since 2012 is really remarkable, given the collapsing working age population. It has not been easy, but then Japan has other disadvantages that make it particularly hard – not least, their currency being a safe-haven that is continuously being bolstered in a crisis. And the rebound in NGDP has been incredible, and has happened despite fiscal policy being unhelpful. What Abe has done is painfully reorient expectations – a critical channel for monetary policy.


But, second, I come back to this: the case for fiscal policy “taking over” in particular circumstances, when monetary policy is weak, suffers when you think that monetary policy is in any way really still in control. And we can blog and tweet and argue, but as far as I can see, outside the absolute depths of a financial panic, for over 20 years the Bank of England has forecast CPI inflation returning to exactly where it needs to be, without the help of the Government. It does not itself think it cannot control aggregate demand sufficiently. And that means that, whatever you are doing fiscally, the Bank in some sense does not think it necessary.
It is like a driver determined to drive at 50mph who notices the passenger sneakily leaning on the accelerator. The driver just leans on the brake a little more. The key point is the policy – 50mph. The Bank is the driver. And this is why I think all these arguments should always, always, return to what the Bank of England is actually targeting.

The vast, unknowable potential of tactical voting

TL;DR summary: if you adjust the uniform swing so that voting patterns reflect echoes of past Labour or LibDem strength, the predicted Tory majority vanishes. If you add onto this a measure of tactical voting, their seat share might fall by dozens of seats more.  But detecting whether this is realistic is very, very hard. 


Before launching into this, a recap.

I have been on quite a journey, hopefully towards a decent model for the impending* General Election. It began with a straight arithmetical exercise, intended to turn headline voting numbers into seats, done in a very naive way: take a certain chunk of Conservative and Labour votes, and reassign them uniformly so that you get the national vote share.  Like this:

The result of this kind of exercise was set out in this post which reflected on the sheer volatility and sometimes arbitrariness of the results.  For example, the numbers above produce for me a 33%-26%-18% win for the Conservatives over Labour, but 338-202-35 in terms of seats. Brutal. The method  was destined to deliver a very poor outcome to a split opposition with the Tories in a clear lead.
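For anyone who wants the mechanics spelled out, here is a minimal sketch of that uniform-swing step; the seat shares and the “now” national shares are illustrative numbers, not my actual inputs.

```python
# Minimal sketch of the naive uniform-swing step. The made-up marginal and the
# "now" national shares are illustrative, not the actual inputs to my machine.

def uniform_swing(seat_2017, national_2017, national_now):
    """Shift each party in a seat by the same amount it moved nationally."""
    return {p: seat_2017[p] + (national_now[p] - national_2017[p])
            for p in seat_2017}

national_2017 = {"CON": 42.4, "LAB": 40.0, "LD": 7.4}
national_now  = {"CON": 33.0, "LAB": 26.0, "LD": 18.0}
made_up_marginal = {"CON": 45.0, "LAB": 43.0, "LD": 6.0}

swung = uniform_swing(made_up_marginal, national_2017, national_now)
print(swung, "-> winner:", max(swung, key=swung.get))
```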

However, it also looked naive, in at least two ways.

First, LibDem votes just head heedlessly to every seat in an even manner. This struck me as unlikely: recent European election results showed a much lumpier, more motivated voting surge.  For example, there are Labour seats where the model spits out a significantly better result for the LibDems than they achieved in May: places like Blackpool South were showing a LibDem vote share of 13%, despite their only scoring 9% just four months ago.  And that meant that in other places the LibDem surge was being undermeasured in the model – places like Harrogate that went 28% in EU2019, and 43% as recently as 2010. 

So I designed a factor to reflect this “LibDemmyness”, and found that a modest application of that factor might raise the LibDem gains off the Conservatives by around 20-40 seats.

This also applied to Labour.  The steep fall in their vote share (from 40% in 2017 to 26% now) meant a quite vertiginous fall everywhere, even places that are historically very pro-Labour. Is this realistic? Lord Mandelson in a recent event cited Hartlepool, his old seat, as an example – in each of the past three elections, Labour had beaten the Conservatives by a minimum of 14% – yet my model had that shrinking to 5%.  Now, maybe that is possible: the Brexit Party took 52% of the vote in May, so who knows. Mandelson may be out of touch.  But the Conservative party took just 5% in EU2019 and so a model suggesting they are competitive looks a bit odd.

So I added a “Labourishness” factor, and found that a modest application of this might raise the Labour seat total by 10 – mostly, taken from the Conservatives’ total.  Here are two examples: a seat that stops turning blue, and a seat that goes LibDem, thanks to these factors.

 

To emphasize, this is not a prediction. It merely says that if these older voting propensities come good, then you get results 30-40 seats worse for the Conservatives.  Put another way, the “unfair” luck they enjoy from the voting system is partially eroded.

Now, getting to the point. What about tactical voting? You could argue that these factors already take it into account – they basically instruct voters to emphasize their past Labour and LibDem patterns, which quite inevitably pushes in a tactical direction.  But given the stakes, it is not unreasonable to wonder if voters will think hard about whether their vote will have the effect they want and change accordingly.  Matthew Goodwin, the expert academic, has written about this and modelled a situation where dozens of LibDem and Labour candidates just stand down (and, presumably, just hand their votes to the other one).  The result – the Conservatives collapse from 366 seats to around 100 fewer.

For me, that is too extreme. Candidates don’t stand down, and their voters do not obey like sheep.  Instead, I have set up a milder version like this:

  • Choose four categories of seat where LD-LAB tactical voting may take place.  They are
    1. Labour held, less than 50% of the vote (39 seats)
    2. LibDem held in 2017 (12 – I appreciate I must now update this!)
    3. Conservative held, and even if the LibDems were polling 25% nationally they would still be third (72)
    4. Conservative held, and even if Labour were polling 35% nationally, they would still be third. (53)
  • Then apply a % to the votes that the ‘conceding’ party would pass to the other party.
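Here is a minimal sketch of that concession rule applied to a single invented seat:

```python
# Minimal sketch of the concession rule on one invented seat: a chosen fraction
# of the 'conceding' party's vote is handed to the better-placed challenger.

def apply_tactical(seat, conceding, receiving, fraction):
    """Move `fraction` of the conceding party's share to the receiving party."""
    out = dict(seat)
    moved = out[conceding] * fraction
    out[conceding] -= moved
    out[receiving] += moved
    return out

# e.g. a category-3 seat: Conservative held, LibDems clearly ahead of Labour.
seat = {"CON": 38.0, "LAB": 14.0, "LD": 33.0}
for fraction in (0.1, 0.3, 0.5):
    shifted = apply_tactical(seat, conceding="LAB", receiving="LD", fraction=fraction)
    winner = max(shifted, key=shifted.get)
    print(f"{fraction:.0%} switch -> {shifted} (winner: {winner})")
```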

The result? For every 10% of tactical voting, there is a loss to the Conservatives of around 8-9 seats. Here is a chart:

Incidentally, Labour gain 5 seats for every 1 that the LibDems gain – what you would expect, but still a reason to stop and think about its political saleability as a bargain.

Which brings me to 1997 and 2001.  These are the elections that give us the best sense of what degree of Tactical Voter-iness is possible.

I wanted to work out how much TV went on there, according to my model, and so rebuilt the machine using 1992’s data, and went to work trying to reverse engineer a 1997-style Labour majority. (This is a very ugly way of operating, with all sorts of assumptions – the 1990s were very different from today.) I found that without any tactical voting aspect, the Tories would have won 190 seats. So to deliver them their 165 seat nightmare, you would have needed a 45% tactical voting switch across 200 seats that they won in 1992.
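The reverse engineering itself is nothing cleverer than a crude search – keep raising the tactical fraction until the rebuilt machine returns the Tories’ real 1997 total. The stub below is a hypothetical stand-in for that machine (the real thing re-runs every seat); it is only there to show the shape of the search.

```python
# Crude search for the tactical-voting fraction that drags the modelled Tory
# total down to the real 1997 figure of 165. `run_1992_model` is a hypothetical
# linear stand-in for the machine, purely to show the shape of the search.

def run_1992_model(tactical_fraction):
    return 190 - 56 * tactical_fraction   # 190 Tory seats with no tactical voting

fraction = 0.0
while run_1992_model(fraction) > 165 and fraction < 1.0:
    fraction += 0.05
print(f"Tactical switch needed: about {fraction:.0%}")
```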

Apply that much tactical voting this time round and you obviously produce a very poor result for the Conservatives, even if they gain some of the higher national vote share totals they have recently scored (around 33-34%).  This may explain why the Conservatives’ internal polling was weaker.

Apply it to some of the weaker outcomes recently polled – e.g. 31% Con, 28% Lab – and the result is a total rout against the Conservatives – seat numbers in the low 200s.

Bottom line: it is impossible to predict, but if this highly confrontational behaviour by the Conservatives inspires tactical voting against them anywhere near what we saw in the 1990s, their chance of a majority vanishes. I cannot tell if that is a realistic assumption; I hope to illustrate many more specific seat model-predictions in order that the hive mind can tear it to pieces (or perhaps validate).

*Though the odds of a 2019 vote have slipped to around 65%, at time of writing, down from 90%

Conventional wisdom comes good, with a time fuse

I’ve had this thought for a while, and wanted to get it down in case it proves to be an enduring one. 

We have seen recently – by which I mean, since I have been paying attention – a number of sharp examples of the conventional wisdom being overthrown. By this, I mean suggestions or predictions like these:

The party that promises and delivers austerity is doomed – “out of government for a generation”. That last quote came courtesy of the seldom-reliable Mervyn King, pre-2010, but it felt true enough at the time: governments are popular for spending money, hated for cuts. Gordon Brown really struggled to use the word “cuts” in the months before, and having marmalised the Tories in the mid-1990s during a gentler spell of austerity, you can understand why.

Yet George Osborne et al turned this on its head. Austerity became a dividing line they could actually deploy against Labour in 2015 – the prospect of more cuts to come put the opposition in a worse bind than the Government. 

Electing a far-left Trot spelled doom for Labour at the next election. This felt utterly obvious at the time. I recall, vividly, the FT editorials that came out during that 2015 Labour leadership contest as the impossible became possible, became likely and then inevitable.  Here are some choice picks.

Janan Ganesh, “It’s as simple as it seems: Corbyn spells disaster for Labour”, with this brave complacency: “If a socialist peacenik becomes leader of Britain’s Labour party on September 12, it is not somehow a problem for the Conservatives, too. Tories high-fiving each other at the prospect of facing Jeremy Corbyn should not “be careful what they wish for””. 

Or how about “Labour’s disastrous choice”, the FT editorial lamenting his capturing the leadership, which alongside suggesting Corbyn may be forced to “tack to the centre”, did at least predict that some MPs would break away, and that “with the opposition in turmoil, the risk is that Tory MPs will lose discipline, especially over the neuralgic issue of Europe.” Not ’arf.  But it basically assumed Labour were now unelectable, bad for Labour, bad for the country.

Yet by 2017 Corbyn had seemingly transmuted into a near-election winner, conducting possibly the most successful election campaign (from 25% to 40%) in my memory, and changing history in the process. That manifesto was incredibly popular; every item listed in those disapproving editorials looked like a winner.

An OUT vote will split the Tories. I remember being astonished when Janan revealed that up to one third of Tory MPs might support a Leave vote in an EU referendum. What is with these extremists? Then it happened, Theresa May came in, and the Conservatives enjoyed the happiest conference of the past thirty years (this is what I hear from people who attended: activists who had grizzled under Cameron felt blissfully happy to be Citizens of Somewhere again).

Now here we are.  The Tories are split over Europe, the Labour Party is polling in the low twenties with Corbyn the most unpopular Opposition leader ever, and everyone is competing to see who can spend the most money.  By many accounts austerity played a serious role in GE2017, and I have a view that Sajid Javid’s harsh spending review choices at BIS in 2015 – scrapping maintenance grants, in particular – cost a good dozen seats.

All the conventional wisdom came true, but with a time fuse. Reality can only be defied for so long. 

The latest example of conventional wisdom, temporarily thwarted: the view that you cannot run a government in a hideously partisan way without it horribly fracturing.  This divisive character Cummings will tear them apart; Matthew Parris wrote the best column about the new Cabinet: 

“That he will fall out with his new master within months is almost certain. That, when he does, the world will know about it in coruscating language, equally so. Not least among the compensations for the chaos that awaits us is the anticipation of Mr Cummings’s blogs, once he turns against Mr Johnson.”

Then August happened, gravity defied, all that Quentin-Letts-delighting decisiveness and suddenly the conventional intelligentsia had a loss of nerve, seeing Cummings Plans round every corner, to the point of self-parody. 

The conventional wisdom often rebounds. Not always – we have been waiting a long time for Trump to lose favour with his base, for example. But sometimes with extraordinary rapidity.  Conventional wisdom was that this sort of government cannot go on like this for long.   A general election in 2019 is now an 85% possibility, and Tory private polling suggests they would fail to gain anything close to a majority.  Matthew Parris has not been proven right … yet.

Some recent polling implications

Wild recent polling produces wild results

The columnists had a lovely job this week: the Johnson government in unprecedented meltdown (seemingly owned by the opposition, in possession of a minus-43 majority, a heated debate about what kind of prison food the former PM might expect, etc etc) and yet a swarm of polls suggesting things are not too bad.  A correspondent asked what they mean for the seats outcomes according to my machine; feeling all relaxed after a nice run* and with the model newly re-written to strip out ancient bugs, I decided to oblige.

First, Opinium, which shocked us with a 10 point Tory lead, and the LibDems down at 17.  Even with a moderate degree of “LibDemmyness”, as I have christened my skew on the LibDem voting patterns, you – obviously – get a handsome Tory majority.

Then came YouGov, even more shocking – 14 point lead!

Even fewer surprises there. No one would quibble at this being a deserved victory for a No Deal Brexit – though the result is “unfair” in that the Conservatives would win 11 seats for every percentage point of vote share and Labour just 7, the combined BXP-CON vote is pretty compelling.

But then we got ComRes, and a very different story: a Conservative lead of just 3 (for a situation where Brexit is not delivered as of 31 October).

Here the Conservatives’ governing majority is wiped out, the strategy has failed. There’s an intriguing multi-coloured government somewhere in there. Corbyn has lost seats though; is he under pressure? Esher and Walton falls (Uxbridge does not).

And here, worse (or better: let’s stay neutral) – the Labour lead envisaged if a Brexit Extension is imposed

Total disarray for the Conservatives, a small victory for the Labour party, a big one for the LibDems.

Finally, Delta and ComRes did a couple of similar ones showing the Tories with a small lead, like this

Bottom line: well, it is all obvious.  And as for which of these top line numbers feels right, you tell me. The people I mix with are appalled at what Johnson is doing; the vox pops done in the Observer and BBC appear to cheer him on.

The bulk of the fights are about Conservatives, and that surely matters

Final observation. I was noticing that no matter how much I messed around with my model, the seat switches to the LibDems were nearly all from the Conservatives – even though on most polls the Labour vote is down as much as the Conservatives’, and, if we assume the Conservatives are losing votes to BXP, the LibDems must be getting more from Labour.

For example, in that last Delta-ComRes result, there were 48 Con seats falling to LibDems, 12 to the SNP, and the Conservatives gaining back 28 from Labour. Only 6 Labour ones fall to the LibDems.  Why?

It appears to be because of the 2017 results, where only 7 of Labour’s seats have the LibDems in second place, while 29 of the Conservatives’ do.  And after the swing above, even more are set up that way.   We would move into a situation where a quarter of the House’s seats are Conservative-LibDem fights, but only a tiny percentage are Labour-LibDem fights.

If I have got this right, it feels significant, for tactical voting. At the headline poll level, it looks like the fight is all about who gets to be the Anti No Deal party; at a seat level, the tactical sorting may be a lot easier than you think. It is a fight against Tories in most places.  Next polling model post should be about how on earth to model that ….

*I apologise. It motivates me.

The way Lib Dems vote could take an extra 40 seats off the Tories

Of the many ways First Past the Post fails as a voting system, the way it punishes a split opposition is the most enduring.

To recap: recent Tory polling leads, on a uniform swing, would see the Conservatives returned with a governing majority – quite a hefty one, if the Brexit Party disarms.  But such a result would be brought about by the perversity of the voting system. Throughout the country, you have many situations where the leading party, the Conservatives, win a seat with little more than 30% of the vote – in well over 100 seats.  In half of those, the winning margin is less than 5%.  Such situations are highly sensitive to slight changes in how the opposition behave.

Method (boring bit)

The natural place to look for these variations is in the Lib Dem vote: it is where the biggest bump in support is likely to be, relative to 2017.  Three or four million votes, driven in large part by Brexit sentiment, are not just going to land as evenly as midnight snow.  What I needed was an algorithm for how they might fall more unevenly, and I chose recent history as my guide: Lib Dem performance in the GE2010, GE2015 and GE2017 votes, and the recent EU elections.  Performance in those seats in those situations gives us a clue as to how “LibDemmy” a place might be, on a percentage scale: 100% is set as the average.

For example, Thurrock scored 2% in the last two GEs, 11.7% in 2010, and a mere 9% in EU2019 – these figures are well below the typical LibDem performance, and gave Thurrock an overall LibDemmy Score of 34%. Hornsey and Wood Green, at the other end of the scale, saw LibDems scoring in the 30s and 40s, and made a LibDemmy Score of 257%. 

Using that figure, one can take the overall swing to the LibDems, and disperse it more towards the places where the LibDemmyness was high, and less where it was low. The degree to which I do this can be varied, by applying a variable “power” to that LibDemmy Score. Applying a power of just one, the result is that the vote share in Thurrock rises by just a percentage point, but in Hornsey by more like 16%.
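For the curious, here is a rough sketch of that dispersion step. The Thurrock and Hornsey scores are the ones quoted above, while the 10-point national swing and the third seat are invented, so the exact bumps come out a little differently from the figures in the text.

```python
# Rough sketch of the dispersion step: the national LibDem swing is shared out
# in proportion to each seat's LibDemmy Score raised to a chosen power, then
# rescaled so an average seat still gets the national swing. The 10-point
# swing and the "Average-ish seat" are invented, for illustration.

def disperse_swing(national_swing_pts, scores, power=1.0):
    """Allocate the national swing across seats in proportion to score**power."""
    weights = {seat: (score / 100.0) ** power for seat, score in scores.items()}
    mean_w = sum(weights.values()) / len(weights)
    return {seat: national_swing_pts * w / mean_w for seat, w in weights.items()}

scores = {"Thurrock": 34, "Average-ish seat": 100, "Hornsey and Wood Green": 257}
for seat, bump in disperse_swing(10.0, scores, power=1.0).items():
    print(f"{seat}: +{bump:.1f} points")
```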

What is the result?

Looking at individual seats, it can look dramatic.  For example, this methodology confirms that the LibDems would be right to be looking at Esher and Walton, the seat of Dominic Raab.  This place saw 38% of voters plump for the LibDems in EU2019, and 17% in 2017 – double the party’s national vote share – which gave it a high score:

 

You see a few extra Labour seats fall, too, like Cardiff Central, which went 25% LD in EU2019 and did well for them in 2010 too.

 

And the votes are taken off places like Birmingham Erdington which did not exactly flock to the LibDems recently:

Applying this sort of skew to the LibDem vote means a significant bump in the number of seats they take, principally from the Conservatives – maybe 40 seats more, depending on your starting point.  A whole bunch of seats become competitive for the Lib Dems that were not hitherto – like Wimbledon (the bar chart at the top).

And in aggregate?

There are too many variables in total for a definitive answer, but apply this sort of new skew to recent Conservative poll leads, and you see the number of seats the LibDems win off the Conservatives rise from just over a dozen

to more like 50.

 

At higher LibDem swings, the effect is obviously more dramatic – more like 60 extra seats.

Anyway, this is all highly speculative and rough’n ready. I keep having to check back that my workings are not wrong, because such dramatic results keep coming out.  Yet this is all before the Conservatives lost 20 of their own – I have not taken the time to work out where those seats lie.  There are plenty of subtleties I have not factored in, such as strategic pro-Brexit voting, which may go the other way; will the voters of Esher and Walton really undermine Raab with support for the Brexit Party?

But the bottom line is surely this: you cannot rile up a massive chunk of the population (those against a No Deal Exit) and not expect some real electoral consequences …

Could the voting system be “cruel” to the Tories?

The rumours are of a general election, and the polls bad for the anti-No Deal side. Since the new administration took power, there has been a somewhat-predictable bump in Conservative support, with some polls showing CON ~32 LAB ~ 25, the BXP and LD jostling together in the low- and high-teens. You hardly need my model to demonstrate this, but such a result on a uniform swing would be enough to return Conservatives with an increased, even workable majority.  Remember those?

If you can bear to watch, here is one attempt at replicating those recent polling figures

So the same story as before: thanks to the way First Past the Post scatters the force of the enemy, the winners brutally gain 11 seats for every percentage point of vote share; Labour get 8, the LibDems 2, and so on. This must be what the warlike strategists are aiming for.

Diving in, the Tories lose most of their Scotland seats, and a chunk to the LibDems.  But what is critical is how they get it all back from Labour, mostly in Brexit areas.  Again, hardly a surprise: you can’t lose 16% on your 2017 result and not expect great losses.

In historical terms, it would be the *most* unfair result*, in terms of a governing party winning a majority from so little support, since forever. Here is a chart of those seats-per-vote-share ratios for CON and LAB.

However, the analytical account of certain Tory victory clashes somewhat with another more qualitative story you hear: “The Tories have lost London; there are a bunch of LibDem losses elsewhere in the South West, too; SNP has Scotland; so you need MASSES of gains from Labour to bring about a majority.” My model really doesn’t grasp that, and it could well be a failing of the model, a failing I mean to rectify. Here are the problems:

Tory gains from Labour look surprising

These are the 56 seats that the Conservatives are modelled to be taking off Labour, and they look a bit iffy to me:

The consistent story here is of Conservatives managing to win a three-way race, despite Labour often starting at 50%, because the LibDems rise from nothing-ish to a hefty 15-17% at the expense of Labour.  That just feels odd. If the LibDem rise is driven by anger about Brexit, the Labour incumbents in many cases would surely be able to cauterise it into an anti-Tory vote. Implicit local deals will take a great many of these safe scenarios out of play. Very pro-Remain places like Kensington and Canterbury also fall to the Conservatives for similar reasons.  So I need some way of modelling a more intelligent dispersion of LibDem votes.   Relatedly….

The small number of LibDem gains from the Conservatives look surprising

Given the size of the LibDem surge, the poor return looks like a particular consequence of the uniformity of the swing. It is straightforwardly hard to model the surge from almost nothing, but at the least there ought to be more volatility in the model, which will tend to help the surging party gain seats.  For example, I have heard that the south west London seat where I reside, currently a Con-Labour marginal, could be a LibDem gain.  That would require that I apply some Remain-supercharged variable to LibDems in SW London, and make up the difference by removing votes from Brexity Wales, for example.

I am not quite sure how to do this. I can take clues from the 2017 result, compared to 2015, which may have some Brexit-intentionality. Chris Hanretty has produced the EU 2019 election results by constituency, which may provide another clue.

Kids-willing, I will try to provide another model with lots of charts in the next couple of days!

*though, eyeballing the differences, Blair’s landslides stick out too.

Burke, and being against “the coercive authority of such instructions”

When I first heard the words “MPs don’t get to choose which votes to respect” (repeated loyally by the PM and Party Chairman) my first thought was that someone is going to mention Burke.  No doubt many of you had that same thought.  And Sunder Katwala had it first and best, and wrote a splendid essay on CapX which you should read.

In case you are wondering what I am on about, this refers to a famous letter written by the great Conservative Edmund Burke to his constituents, in response to pressures you can guess at. The MP for Bristol is being asked to respond strictly to the ‘coercive instructions’ of his constituents, and he responds thusly:

To deliver an opinion, is the right of all men; that of constituents is a weighty and respectable opinion, which a representative ought always to rejoice to hear; and which he ought always most seriously to consider. But authoritative instructions; mandates issued, which the member is bound blindly and implicitly to obey, to vote, and to argue for, though contrary to the clearest conviction of his judgment and conscience,–these are things utterly unknown to the laws of this land, and which arise from a fundamental mistake of the whole order and tenor of our constitution.

MPs are not delegates or ambassadors, bearing firm instructions, but representatives.  They are members not of their constituencies, but of Parliament, and should not go there and blindly ignore the wider good of the whole community.  It is not a place for him or her to sacrifice “his unbiased opinion, his mature judgment, his enlightened conscience … to you, to any man, or to any set of men living”.

It is a beautiful letter, though as Sunder observes it provides no slam-dunk for the opponents of Brexit; the decision to have a referendum was clearly decided by a parliament of MPs exercising that judgment and conscience. To ignore the referendum as if it didn’t happen would be an act of bad faith. Moreover, Sunder points out that the representative model argued for by Burke is rather unpopular with the public, who much prefer the “do as you are told” model of democracy.

However, I think Burke’s broader point still stands against the focus-group tested, judgment-lobotomising line, “MPs do not get to choose”. That is precisely what MPs are there for.  More generally, while they are under a duty to follow instructions such as those issuing from that (advisory) referendum, this is one duty only amongst many others.  There are absolutely no “come what may” instructions, no “do or die”s that outweigh all other considerations, no matter how weighty.  In fact, the entire business of deliberative democracy is a matter of weighing dozens of contradictory duties: the duty to keep the government solvent against the need to fund public services; the duty to protect the environment, against our personal freedom to choose how we behave. There are constraints and trade-offs everywhere, from the high abstractions – liberty against security, efficiency against fairness – down to the smallest value for money argument or row about burdensome red tape. It is why we have government by collective agreement – so all the interests can be weighed.

This is staggeringly obvious, but still not appreciated enough. Just because none of the other duties are expressed in as crude a form as a referendum vote, does not mean they suddenly cease to apply. The then-PM in weighing up her withdrawal agreement had to balance the (important) need to pursue that referendum result, against all the others pressing upon a responsible prime minister: to keep the economy working well, to maintain our international standing and friendships, public safety, to provide for an orderly life for the citizens, and many more. She found these were best navigated by the construction of a complex deal likely to keep us close to the European economic sphere but outside its slowly constricting politics.  I think it was ugly, and as good as one can expect given the constraints. In my opinion, it found the unhappy balancing point of that unhappy plebiscite; one that would annoy a lot of people, but basically do the job.

Above all there was never a duty to render realistic the impossible promises made in someone else’s referendum campaign.  There is no “spirit of the vote”, and if some hyperventilating campaigner promised a paradise of zero regulation, fountains of cash for everyone, and trade deals with all of South America, tough.

Absolutely any arrangement that meant the UK no longer featured on this Wikipedia page fulfils the strict requirements of the vote. Beyond that, there is only what the government and Parliament in their mature judgment thinks is wise for the whole community, in light of all possible considerations. If they decide a No Deal Brexit tramples over too many other important duties, that is what you sent them to Parliament to decide.