When concentrating your vote flips over into being a disadvantage

There was a fascinating discussion on my Twitter timeline with Rob Ford, Will Jennings, Iron Economist and many other distinguished people, triggered by concerns about the Liberal Democrat revoke A50 policy. In short: the concern is that the Liberal Democrats might get the outright majority they would need to enact Revoke with a mere 30-35% of the vote, which would be way short of the 50% endorsement sought by those wanting a referendum. And the idea that they could get this majority with just 35% was bolstered by the modelling I did ages ago in this popular blog post.

Which blog post I still stand by in outline, but developments since have shown up even more glitches in the system, including this feature: the LibDem seat total climbs very slowly at first, but then at some point it rockets as all sorts of seats fall.  What this highlights is how having a very evenly spread vote across all constituencies is a massive disadvantage below a certain threshold, and only flips over to being an advantage when you hit the 30s in terms of vote share.
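To see why, here is a toy sketch – emphatically not my actual model – with a hundred invented seats whose local leader polls somewhere between 30% and 45%. An evenly spread challenger wins almost nothing until it starts overtaking those leaders, and then seats cascade.

```python
# Toy illustration of the threshold effect under First Past the Post.
# The seat data are invented; none of this comes from the real model.
import random

random.seed(1)
# 100 imaginary seats where the locally leading party polls between 30% and 45%
local_leaders = [random.uniform(30, 45) for _ in range(100)]

for even_share in range(20, 51, 5):
    seats_won = sum(1 for leader in local_leaders if even_share > leader)
    print(f"{even_share}% everywhere -> {seats_won} seats")
```

Below 30% the evenly spread party wins nothing at all; through the 30s the seat count rockets.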

There is a corollary: learning how to concentrate your vote share is essential if you want to go above a small number of seats, as a small party: contrast the fortunes of the SNP and UKIP in the 2015 election.

Since that post, I have written a number of others exploring methods by which a model user might concentrate the LibDem vote share and get a different result; generally speaking, the outcome was about 20-30 more seats for the LibDems when their vote share is in the high teens/low 20s. Again, just what you would expect.

What I thought I would also share before heading off for my nighttime cocoa: that same variable becomes a disadvantage for the LibDems if they are looking for a majority. In other words, they begin to pile up pointlessly large majorities rather than gain more seats – just as hit the Tories in 1997, say, or Labour in 2017, when their votes did not go as far in terms of seats as they might have.

Here is a graphical representation: first the behaviour of party seats when there is no use of “historical LibDemmyness” in the machine (solid line), and second the same relationship with a high degree of LibDemmyness and a little tactical voting (dotted line).

The dotted line suggests a much higher threshold for the LibDems is needed to get a majority – but still in the 30s. Maybe 5 percentage points higher.  And none of this loopy, “450 seats plus” style outcome.  Another reason to doubt whether a purely smooth swing is what we might expect.

 

Trying to start a fight between the Bank of England and Resolution Foundation

It is excellent that the Resolution Foundation has embarked upon serious macro-economic wonkery. Their opening salvo – “Recession Ready?: Assessing the UK’s macroeconomic framework” – is as good an introduction to the state of play as you can find. They call it “the most comprehensive assessment of the UK’s macroeconomic policy framework since the financial crisis”. Damnit, they are right.
You could also argue that it is way overdue. The Resolution Foundation is “an independent think-tank focused on improving the living standards for those on low to middle incomes”. Around three years into its existence, the biggest macroeconomic event of the past 80 years hit the UK, causing Gross National Income to fall some 20% off trend. To calibrate that: any policy intervention that might raise GDP by 1% deserves a serious prize. Twenty times that is epoch-making. Future historians will wonder why we don’t go on about it more.
I have started and restarted this blog many times because a single post cannot assess something so comprehensive, and invariably does the thing a disservice. I will need to pick my target.
Anyway, first a shamefully short summary of the RF position:

  • Government macro support matters – the crisis might have been 16 percentage points worse without any
  • Of that 16 percent, the bulk came from monetary policy. The fiscal lever was maybe responsible for just 3 percentage points
  • Yet for the next recession (which RF thinks has never been more likely), monetary policy will have less juice, “reflecting what appears to be a secular decline in the level of interest rates”
  • Therefore, “fiscal policy needs to play a more active role which necessitates a change in the framework”

That is a lot to unpack. Each superficially reasonable, the positions above might together add up to something revolutionary – a shift in the framework bigger than anything since 1992, or possibly since the Thatcher revolution dethroned fiscal demand management.
Revolutionary conclusions should not go down as easily as buttered toast. Ten or twelve years of argument and reflection have left me aware of varying positions out there that cannot be treated as settled. I want to address this from the hypothetical point of view of an adviser telling a Chancellor whether they should follow this scheme and pursue “a change in the framework so that fiscal policy can play an explicit stabilisation role within a credible framework for achieving long-run debt sustainability and low and stable inflation.”
Here are some of the minor objections they might entertain:

Is weak aggregate demand the problem? By assumption it is – the RF is explicit in saying it is discussing policies for a recession. But absent some really sharp events, this will remain a point of contention. Inflation is on target, and by some measures the economy is at capacity (e.g. record employment). Against this, some of us wonder if the capacity of the economy is endogenously determined – could we really have lost 20% of our know-how, irretrievably?

Don’t debt levels matter more? Chris Giles in the FT has reopened this, arguing that the commitment to have Debt/GDP falling is a damaging constraint on investment and should be ditched. But just four years ago, the view of Osborne’s Treasury was that outstanding debt was such a big deal that the government needed to target an absolute surplus by 2018. Relatedly …

Is a sterling crisis possible? Another big limit on the government getting to do what it wants is the ancient risk of the currency being repudiated – Britain as an emerging-market country. Absurd, you might say; but that was easier to say before Brexit. It is what happened in the mid-1970s: no one then was saying “hey, you borrow in your own currency, go nuts!”
I don’t personally think any of these objections should weigh too strongly – but they would all be considered in a traditional Treasury, and that Treasury is always worth bearing in mind.
Here is my larger problem: I am not convinced that fiscal policy would do the job the RF intends it to; more to the point, I don’t think the Bank thinks it would, in which case I am not sure the Bank’s “tacit cooperation” would be achieved. It very much depends on how you answer the question of how monetary and fiscal policy really interact. For me, this is the biggie and needs to be broken into sub-questions. When is fiscal policy effective at boosting demand?

A. Always.

B. During a crisis.

C. Only when the Bank “can’t boost the economy any more”; and

D. Never.

Why would you answer D? Well, go back to 2005. Rates are 4-5% or so and the government produces an unexpected £20bn fiscal boost, in spending and tax cuts. The Bank forecasts inflation to be on track (because that is its job). What does the Bank do? Not nothing; it already thinks aggregate demand is at the right level. Any uncalled-for increase in AD needs to be battled. So they tighten policy, until demand is basically back where it was. Fiscal policy has changed the composition of demand, but not its level (barring lags).
This is known in places as the Sumner Critique, and is acknowledged by RF, but in my view not emphasised enough. Until the Bank literally does not think it can in good faith forecast CPI inflation landing where it should, it does not think it is out of fuel. And as a result, it ought not to passively accept any fiscal boost.
This is not just a fairy tale, but literally how things turned out over 1994-2007. See the chart; in that period, the UK fiscal position swung from borrowing 7% to a net surplus, and then back to borrowing 2.5%; yet nominal GDP growth stayed absolutely steady. The fiscal stance was trumped by the Bank’s monetary stance. In the game of macro, the Bank moves last.
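For anyone who wants the offset spelled out mechanically, here is a toy sketch – not a macro model, and every parameter invented – of a Bank that always sets policy so forecast nominal demand growth lands on its implicit target, whatever the fiscal stance does.

```python
# Toy offset arithmetic; all parameters are invented for illustration.
TARGET_DEMAND_GROWTH = 5.0   # % a year: the Bank's implicit nominal target
FISCAL_MULTIPLIER = 0.6      # assumed demand effect of 1% of GDP of stimulus
RATE_SENSITIVITY = 0.5       # assumed demand effect of a 1-point rate change

def bank_offset(fiscal_boost):
    """Rate rise needed to neutralise the extra demand from a fiscal boost."""
    return FISCAL_MULTIPLIER * fiscal_boost / RATE_SENSITIVITY

for boost in (0.0, 1.0, 2.0):   # fiscal boost, % of GDP
    hike = bank_offset(boost)
    demand = TARGET_DEMAND_GROWTH + FISCAL_MULTIPLIER * boost - RATE_SENSITIVITY * hike
    print(f"boost of {boost:.0f}% of GDP -> Bank hikes {hike:.1f} points, "
          f"demand growth still {demand:.1f}%")
```

However large the boost, this toy Bank steers demand growth straight back to its target.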

Are there other reasons to doubt fiscal policy? Here is one that niggles me, when I imagine advising that Chancellor. Suppose GDP growth is weak, and you plan a stimulus package of 1% to boost it. Pretend the Sumner critique does not apply. Fine; next year’s growth is 1% higher than it might have been – but your deficit is permanently higher. If you want to maintain that pace, do you not have to boost the deficit again? But an accelerating deficit is not sustainable. Ultimately, you have to reverse what you did and (ceteris paribus) you are back at the low growth you didn’t want in the first place. AFAIK, the case has to rest on a bunch of other nice things that will happen – perhaps one year of extra government-inspired growth boosts the private sector’s confidence permanently, or helps the supply side, or hits exactly the right spot when growth was only temporarily weak (assuming there is a predictable cycle). But they are all rather nice assumptions.
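To make that ratchet concrete, here is a deliberately crude bit of arithmetic, with the Sumner critique switched off and every figure invented: assume each extra 1% of GDP on the deficit buys one point of growth in that year only.

```python
# Crude sustainability arithmetic; all figures are invented for illustration.
baseline_growth = 1.0   # % a year without any stimulus
target_growth = 2.0     # % a year you want to sustain
deficit = 2.0           # starting deficit, % of GDP

for year in range(1, 6):
    # to hold growth at target, a further 1% of GDP of stimulus is needed each year
    deficit += target_growth - baseline_growth
    print(f"Year {year}: growth {target_growth:.0f}%, deficit {deficit:.0f}% of GDP")
# the deficit ratchets up by a point of GDP every year - the unsustainable bit
```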
Obviously, the Sumner critique falls away if monetary policy is really ineffective, and for some people that is true when rates are low. We are at risk of being in position C: monetary policy “runs out of fuel”. But at some level that cannot be the case. Someone in possession of a printing press can surely, in the limit, boost nominal (i.e. cash) growth in an economy.
What the authors half-assume is that monetary policy is really about lowering interest rates (either spot or long) and that this basically induces people/business to spend money. Other channels are brought up from time to time, such as portfolio rebalancing or expectations or commitment to being lower for longer (it is an excellent, comprehensive document) but the basic stance is that we hit diminishing returns as we get near zero.
I tend not to take this view, being over-influenced by the market monetarists. This is no time to try to reincarnate all those arguments: I recommend reading Sumner in particular, pieces like this or this. But I am also very influenced by two other things. First, there is the record of Japan under Abenomics since 2012, which is really remarkable given the collapsing working-age population. It has not been easy, but then Japan has other disadvantages that make it particularly hard – not least, their currency being a safe haven that is continuously being bolstered in a crisis. And the rebound in NGDP has been incredible, and has happened despite fiscal policy being unhelpful. What Abe has done is painfully reorient expectations – a critical channel for monetary policy.


But, second, I come back to this: the case for fiscal policy “taking over” in particular circumstances, when monetary policy is weak, suffers when you think that monetary policy is in any way really still in control. And we can blog and tweet and argue, but as far as I can see, outside the absolute depths of a financial panic, for over 20 years the Bank of England has forecast CPI inflation returning to exactly where it needs to be, without the help of the Government. It does not itself think it cannot control aggregate demand sufficiently. And that means that whatever you are doing fiscally is something the Bank, in some sense, does not think necessary.
It is like a driver determined to drive at 50mph who notices the passenger sneakily leaning on the accelerator. The driver just leans on the brake a little more. The key point is the policy – 50mph. The Bank is the driver. And this is why I think all these arguments should always, always, return to what the Bank of England is actually targeting.

The vast, unknowable potential of tactical voting

TL;DR summary: if you adjust the uniform swing so that voting patterns reflect echoes of past Labour or LibDem strength, the predicted Tory majority vanishes. If you add onto this a measure of tactical voting, their seat total might fall by dozens more. But judging whether this is realistic is very, very hard.


Before launching into this, a recap.

I have been on quite a journey, hopefully towards a decent model for the impending* General Election. It began with a straight arithmetical exercise, intended to turn headline voting numbers into seats, done in a very naive way: take a certain chunk of Conservative and Labour votes, and reassign them uniformly so that you get the national vote share.  Like this:

The result of this kind of exercise was set out in this post, which reflected on the sheer volatility and occasional arbitrariness of the results. For example, the numbers above produce for me a 33%-26%-18% win for the Conservatives over Labour, but 338-202-35 in terms of seats. Brutal. The method was destined to deliver a very poor outcome to a split opposition with the Tories in a clear lead.
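For anyone wanting the mechanics, here is a minimal sketch of that uniform reassignment, on three invented constituencies – the local shares and national targets are made up purely for illustration, not drawn from the real data.

```python
# Minimal sketch of the uniform-swing exercise; all shares are invented.
seats_2017 = {
    "Seat A": {"CON": 45, "LAB": 40, "LD": 10, "OTH": 5},
    "Seat B": {"CON": 38, "LAB": 42, "LD": 12, "OTH": 8},
    "Seat C": {"CON": 50, "LAB": 25, "LD": 20, "OTH": 5},
}
national_2017 = {"CON": 42, "LAB": 40, "LD": 7, "OTH": 11}   # assumed base shares
national_poll = {"CON": 33, "LAB": 26, "LD": 18, "OTH": 23}  # the poll to replicate

def uniform_swing(local, base, target):
    """Shift each party's local share by the national change, evenly everywhere."""
    return {p: local[p] + (target[p] - base[p]) for p in local}

tally = {}
for name, shares in seats_2017.items():
    swung = uniform_swing(shares, national_2017, national_poll)
    winner = max(swung, key=swung.get)   # first past the post
    tally[winner] = tally.get(winner, 0) + 1
print(tally)
```

Even in this toy, the Conservatives sweep all three invented seats on barely a third of the vote, because the opposition is split.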

However, it also looked naive, in at least two ways.

First, LibDem votes just head heedlessly to every seat in an even manner. This struck me as unlikely: recent European election results showed a much lumpier, more motivated voting surge. For example, there are Labour seats where the model spits out a significantly better result for the LibDems than they achieved in May: places like Blackpool South were showing a LibDem vote share of 13%, despite the party scoring only 9% just four months ago. And that meant that in other places the LibDem surge was being understated in the model – places like Harrogate, which went 28% in EU2019, and 43% as recently as 2010.

So I designed a factor to reflect this “LibDemmyness”, and found that a modest application of that factor might raise the LibDem gains off the Conservatives by somewhere between 20 and 40 seats.

This also applied to Labour. The steep fall in their vote share (from 40% in 2017 to 26% now) meant a quite vertiginous fall everywhere, even in places that are historically very pro-Labour. Is this realistic? Lord Mandelson, at a recent event, cited Hartlepool, his old seat, as an example – in each of the past three elections, Labour had beaten the Conservatives by a minimum of 14% – yet my model had that shrinking to 5%. Now, maybe that is possible: the Brexit Party took 52% of the vote in May, so who knows. Mandelson may be out of touch. But the Conservative party took just 5% in EU2019, so a model suggesting they are competitive looks a bit odd.

So I added a “Labourishness” factor, and found that a modest application of this might raise the Labour seat total by 10 – mostly taken from the Conservatives’ total. Here are two examples: a seat that stops turning blue, and a seat that goes LibDem, thanks to these factors.

 

To emphasize, this is not a prediction. It merely says that if these older voting propensities come good, then you get results 30-40 seats worse for the Conservatives.  Put another way, the “unfair” luck they enjoy from the voting system is partially eroded.

Now, getting to the point. What about tactical voting? You could argue that these factors already take it into account – they basically instruct voters to emphasize their past Labour and LibDem patterns, which quite inevitably pushes in a tactical direction. But given the stakes, it is not unreasonable to wonder if voters will think hard about whether their vote will have the effect they want, and change accordingly. Matthew Goodwin, the expert academic, has written about this and modelled a situation where dozens of LibDem and Labour candidates just stand down (and, presumably, just hand their votes to the other party). The result – the Conservatives collapse from 366 seats to around 100 fewer.

For me, that is too extreme. Candidates don’t stand down, and their voters do not obey like sheep.  Instead, I have set up a milder version like this:

  • Choose four categories of seat where LD-LAB tactical voting may take place.  They are
    1. Labour held, less than 50% of the vote (39 seats)
    2. LibDem held in 2017 (12 – I appreciate I must now update this!)
    3. Conservative held, and even if the LibDems were polling 25% nationally they would still be third (72)
    4. Conservative held, and even if Labour were polling 35% nationally, they would still be third. (53)
  • Then assume a percentage of the votes that the ‘conceding’ party would pass to the other party; a rough sketch of this transfer step follows below.
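Here is that rough sketch, applied to a single invented seat; the ‘conceding’ party and the transfer rates are assumptions for illustration, not outputs of the model.

```python
# Rough sketch of the tactical transfer step; the seat and the rates are invented.
def tactical_transfer(shares, conceding, beneficiary, rate):
    """Move `rate` of the conceding party's vote share to the beneficiary."""
    moved = shares[conceding] * rate
    out = dict(shares)
    out[conceding] -= moved
    out[beneficiary] += moved
    return out

# an invented Conservative-held seat where Labour are a distant third
seat = {"CON": 40.0, "LD": 35.0, "LAB": 18.0, "OTH": 7.0}

for rate in (0.0, 0.1, 0.2, 0.3):
    adjusted = tactical_transfer(seat, "LAB", "LD", rate)
    winner = max(adjusted, key=adjusted.get)
    print(f"{rate:.0%} transfer -> LD {adjusted['LD']:.1f}%, winner {winner}")
```

In this invented seat the switch only pays off once around 30% of Labour voters move over.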

The result? For every 10% of tactical voting, there is a loss to the Conservatives of around 8-9 seats. Here is a chart:

Incidentally, Labour gain 5 seats for every 1 that the LibDems gain – what you would expect, but still a reason to stop and think about its political saleability as a bargain.

Which brings me to 1997 and 2001.  These are the elections that give us the best sense of what degree of Tactical Voter-iness is possible.

I wanted to work out how much tactical voting went on there, according to my model, and so rebuilt the machine using 1992’s data, and went to work trying to reverse-engineer a 1997-style Labour majority. (This is a very ugly way of operating, with all sorts of assumptions – the 1990s were very different from today.) I found that without any tactical voting aspect, the Tories would have won 190 seats. So to deliver them their 165-seat nightmare, you would have needed a 45% tactical voting switch across 200 seats that they won in 1992.

Apply that much tactical voting this time round and you obviously produce a very poor result for the Conservatives, even if they gain some of the higher national vote share totals they have recently scored (around 33-34%). This may explain why the Conservatives’ internal polling was weaker.

Apply it to some of the weaker outcomes recently polled – e.g. 31% Con, 28% Lab – and the result is a total rout against the Conservatives – seat numbers in the low 200s.

Bottom line: it is impossible to predict, but if this highly confrontational behaviour by the Conservatives inspires tactical voting against them anywhere near what we saw in the 1990s, their chance of a majority vanishes. I cannot tell if that is a realistic assumption; I hope to illustrate many more specific seat-level model predictions so that the hive mind can tear them to pieces (or perhaps validate them).

*Though the odds of a 2019 vote have slipped to around 65%, at time of writing, down from 90%

Conventional wisdom comes good, with a time fuse

I’ve had this thought for a while, and wanted to get it down in case it proves to be an enduring one. 

We have seen recently – by which I mean, since I have been paying attention – a number of sharp examples of the conventional wisdom being overthrown. By this, I mean suggestions or predictions like these:

The party that promises and delivers austerity is doomed – “out of government for a generation”. That last quote came courtesy of the seldom-reliable Mervyn King, pre-2010, but it felt true enough at the time: governments are popular for spending money, hated for cuts. Gordon Brown really struggled to use the word “cuts” in the months before, and having marmelized the Tories in the mid 1990s during a gentler spell of austerity, you can understand why.

Yet George Osborne et al turned this on its head. Austerity became a dividing line they could actually deploy against Labour in 2015 – the prospect of more cuts to come put the opposition in a worse bind than the Government. 

Electing a far-left Trot spelled doom for Labour at the next election. This felt utterly obvious at the time. I recall, vividly, the FT editorials put out during that 2015 Labour leadership contest as the impossible became possible, became likely and then inevitable. Here are some choice picks.

Janan Ganesh, “It’s as simple as it seems: Corbyn spells disaster for Labour”, with this brave complacency: “If a socialist peacenik becomes leader of Britain’s Labour party on September 12, it is not somehow a problem for the Conservatives, too. Tories high-fiving each other at the prospect of facing Jeremy Corbyn should not “be careful what they wish for””. 

Or how about “Labour’s disastrous choice”, the FT editorial lamenting his capturing the leadership, which alongside suggesting Corbyn may be forced to “tack to the centre”, did at least predict that some MPs would break away, and that “with the opposition in turmoil, the risk is that Tory MPs will lose discipline, especially over the neuralgic issue of Europe.” Not ’arf. But it basically assumed Labour were now unelectable: bad for Labour, bad for the country.

Yet by 2017 Corbyn had seemingly transmuted into a near-election-winner, conducting possibly the most successful election campaign (from 25% to 40%) in my memory, and changing history in the process. That manifesto was incredibly popular; every item listed in those disapproving editorials looked like a winner.

An OUT vote will split the Tories. I remember being astonished when Janan revealed that up to one third of Tory MPs might support a Leave vote in an EU referendum. What is with these extremists? Then it happened, Theresa May came in, and the Conservatives enjoyed the happiest conference of the past thirty years (this is what I hear from people who attended: activists who had grizzled under Cameron felt blissfully happy to be Citizens of Somewhere again).

Now here we are.  The Tories are split over Europe, Labour Party polling in the low twenties and Corbyn the most unpopular Opposition leader since ever, and everyone competing to see who can spend the most money.  By many accounts austerity played a serious role in GE2017, and I have a view that Sajid Javid’s harsh spending review choices at BIS in 2015 – scrapping maintenance grants, in particular – cost a good dozen seats.  

All the conventional wisdom came true, but with a time fuse. Reality can only be defied for so long. 

The latest example of conventional wisdom, temporarily thwarted: the view that you cannot run a government in a hideously partisan way without it horribly fracturing.  This divisive character Cummings will tear them apart; Matthew Parris wrote the best column about the new Cabinet: 

That he will fall out with his new master within months is almost certain. That, when he does, the world will know about it in coruscating language, equally so. Not least among the compensations for the chaos that awaits us is the anticipation of Mr Cummings’s blogs, once he turns against Mr Johnson.

Then August happened: gravity was defied, all that Quentin-Letts-delighting decisiveness was on display, and suddenly the conventional intelligentsia had a loss of nerve, seeing Cummings Plans round every corner, to the point of self-parody.

The conventional wisdom often rebounds. Not always – we have been waiting a long time for Trump to lose favour with his base, for example. But sometimes it rebounds with extraordinary rapidity. Conventional wisdom was that this sort of government cannot go on like this for long. A general election in 2019 is now an 85% possibility, and Tory private polling suggests they would fail to gain anything close to a majority. Matthew Parris has not been proven right … yet.

Some recent polling implications

Wild recent polling produces wild results

The columnists had a lovely job this week: the Johnson government in unprecedented meltdown (seemingly owned by the opposition, in possession of a minus-43 majority, a heated debate about what kind of prison food the former PM might expect, etc etc) and yet a swarm of polls suggesting things are not too bad. A correspondent asked what they mean for the seat outcomes according to my machine; feeling all relaxed after a nice run* and with the model newly re-written to strip out ancient bugs, I decided to oblige.

First, Opinium, which shocked us with a 10-point Tory lead, and the LibDems down at 17. Even with a moderate degree of “LibDemmyness”, as I have christened my skew on the LibDem voting patterns, you – obviously – get a handsome Tory majority.

Then came YouGov, even more shocking – a 14-point lead!

Even fewer surprises there. No one would quibble at this being a deserved victory for a No Deal Brexit – and though the result is “unfair” (the Conservatives would win 11 seats for every percentage point of vote share, Labour just 7), the combined BXP-CON vote is pretty compelling.

But then we got ComRes, and a very different story: a Conservative lead of just 3 (for a situation where Brexit is not delivered as of 31 October).

Here the Conservatives’ governing majority is wiped out and the strategy has failed. There’s an intriguing multi-coloured government somewhere in there. Corbyn has lost seats though; is he under pressure? Esher and Walton falls (Uxbridge does not).

And here, worse (or better: let’s stay neutral) – the Labour lead envisaged if a Brexit Extension is imposed

Total disarray for the Conservatives, a small victory for the Labour party, a big one for the LibDems.

Finally, Delta and ComRes did a couple of similar ones showing the Tories with a small lead, like this:

Bottom line: well, it is all obvious.  And as for which of these top line numbers feels right, you tell me. The people I mix with are appalled at what Johnson is doing; the vox pops done in the Observer and BBC appear to cheer him on.

The bulk of the fights are about Conservatives, and that surely matters

Final observation. I was noticing that no matter how much I messed around with my model, the seat switches to the LibDems were nearly all from the Conservatives – even though on most polls the Labour vote is down as much as the Conservatives’, and if we assume the Conservatives are losing votes to the BXP, the LibDems must be getting more from Labour.

For example, in that last Delta-ComRes result, there were 48 Con seats falling to the LibDems, 12 to the SNP, and the Conservatives gaining back 28 from Labour. Only 6 Labour seats fall to the LibDems. Why?

It appears to be because of the 2017 results, where only 7 of Labour’s seats are held with the LibDems in second place, while 29 of the Conservatives’ are. And after the swing above, even more are set up that way. We would move into a situation where a quarter of the House’s seats are Conservative-LibDem fights, but only a tiny percentage are Labour-LibDem fights.

If I have got this right, it feels significant for tactical voting. At the headline poll level, it looks like the fight is all about who gets to be the anti-No Deal party; at a seat level, the tactical sorting may be a lot easier than you think. It is a fight against the Tories in most places. The next polling model post should be about how on earth to model that …

*I apologise. It motivates me.

The way Lib Dems vote could take an extra 40 seats off the Tories

Of the many ways First Past the Post fails as a voting system, the way it punishes a split opposition is the most enduring.

To recap: recent Tory polling leads, on a uniform swing, would see the Conservatives returned with a governing majority – quite a hefty one, if the Brexit Party disarms. But such a result would be brought about by the perversity of the voting system. Throughout the country, there are many situations – well over 100 seats – where the leading party, the Conservatives, win with little more than 30% of the vote. In half of those, the winning margin is less than 5%. Such situations are highly sensitive to slight changes in how the opposition behave.

Method (boring bit)

The natural place to look for these variations is in the Lib Dem vote: that is where the biggest bump in support, relative to 2017, is likely to be. Three or four million votes, driven in large part by Brexit sentiment, are not just going to land as evenly as midnight snow. What I needed was an algorithm for how they might fall more unevenly, and I chose recent history as my guide: Lib Dem performance in the GE2010, GE2015 and GE2017 votes, and the recent EU elections. Performance in those contests gives us a clue as to how “LibDemmy” a place might be, on a percentage scale: 100% is set as the average.

For example, Thurrock scored 2% in the last two GEs, 11.7% in 2010, and a mere 9% in EU2019 – these figures are well below the typical LibDem performance, and gave Thurrock an overall LibDemmy Score of 34%. Hornsey and Wood Green, at the other end of the scale, saw LibDems scoring in the 30s and 40s, and made a LibDemmy Score of 257%. 

Using that figure, one can take the overall swing to the LibDems and disperse it more towards the places where the LibDemmyness is high, and less where it is low. The degree of dispersion can be varied by applying a variable “power” to that LibDemmy Score. Applying a power of just one, the result is that the vote share in Thurrock rises by just a percentage point, but in Hornsey by more like 16%.
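For the curious, here is a rough sketch of the shape of this skew. The historic shares, the assumed national average and the assumed swing are illustrative stand-ins, and the real model normalises the scores (100% = average) and rebalances so that the national total still adds up; the exact formula differs from this toy.

```python
# Rough sketch of the LibDemmyness skew; all figures are illustrative stand-ins.
history = {   # LD shares (%) in GE2010, GE2015, GE2017, EU2019 - roughly as quoted / invented
    "Thurrock": [11.7, 2.0, 2.0, 9.0],
    "Hornsey and Wood Green": [46.0, 32.0, 30.0, 38.0],
}
national_avg_share = 15.0   # assumed average LD share over the same contests
national_swing = 11.0       # assumed rise in the national LD share since 2017

def libdemmy_score(shares):
    """Historic LD strength relative to the national average (100% = average)."""
    return (sum(shares) / len(shares)) / national_avg_share

def local_swing(score, power=1.0):
    """Skew the national swing towards high-scoring seats; a higher power skews harder."""
    return national_swing * score ** power

for seat, shares in history.items():
    s = libdemmy_score(shares)
    print(f"{seat}: LibDemmy score {s:.0%}, local swing {local_swing(s):+.1f} points")
```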

What is the result?

Looking at individual seats, it can look dramatic.  For example, this methodology confirms that the LibDems would be right to be looking at Esher and Walton, the seat of Dominic Raab.  This place saw 38% of voters plump for the LibDems in EU2019, and 17% in 2017 – double the party’s national vote share – which gave it a high score:

 

You see a few extra Labour seats fall, too, like Cardiff Central, which went 25% LD in EU2019 and was good territory for them in 2010 as well.

 

And the votes are taken off places like Birmingham Erdington, which has not exactly flocked to the LibDems recently:

Applying this sort of skew to the LibDem vote means a significant bump in the number of seats they take, principally from the Conservatives – maybe 40 seats more, depending on your starting point.  A whole bunch of seats become competitive for the Lib Dems that were not hitherto – like Wimbledon (the bar chart at the top).

And in aggregate?

There are too many variables in total for a definitive answer, but apply this sort of new skew to recent Conservative poll leads and you see the number of seats the LibDems win off the Conservatives rise from just over a dozen

to more like 50.

 

At higher LibDem swings, the effect is obviously more dramatic – more like 60 extra seats.

Anyway, this is all highly speculative and rough’n ready. I keep having to check back that my workings are not wrong, because such dramatic results keep coming out.  Yet this is all before the Conservatives lost 20 of their own – I have not taken the time to work out where those seats lie.  There are plenty of subtleties I have not factored in, such as strategic pro-Brexit voting, which may go the other way; will the voters of Esher and Walton really undermine Raab with support for the Brexit Party?

But the bottom line is surely this: you cannot rile up a massive chunk of the population (those against a No Deal Exit) and not expect some real electoral consequences …

Could the voting system be “cruel” to the Tories?

The rumours are of a general election, and the polls are bad for the anti-No Deal side. Since the new administration took power, there has been a somewhat predictable bump in Conservative support, with some polls showing CON ~32, LAB ~25, and the BXP and LD jostling together in the low and high teens. You hardly need my model to demonstrate this, but such a result on a uniform swing would be enough to return the Conservatives with an increased, even workable majority. Remember those?

If you can bear to watch, here is one attempt at replicating those recent polling figures

So the same story as before: thanks to the way First Past the Post scatters the force of the enemy, the winners brutally gain 11 seats for every percentage point of vote share; Labour get 8, the LibDems 2, and so on. This must be what the warlike strategists are aiming for.

Diving in, the Tories lose most of their Scotland seats, and a chunk to the LibDems. But what is critical is how they get it all back from Labour, mostly in Brexit areas. Again, hardly a surprise: Labour can’t lose 16 points on their 2017 result and not expect great losses.

In historical terms, it would be the *most* unfair result*, in terms of a governing party winning a majority from so little support, since forever. Here is a chart of those seats-per-vote-share ratios for CON and LAB.

However, this analytical account of a certain Tory victory clashes somewhat with another, more qualitative story you hear: “The Tories have lost London; there are a bunch of LibDem losses elsewhere in the South West, too; the SNP has Scotland; so you need MASSES of gains from Labour to bring about a majority.” My model really doesn’t grasp that, and it could well be a failing of the model, a failing I mean to rectify. Here are the problems:

Tory gains from Labour look surprising

These are the 56 seats that the Conservatives are modelled to be taking off Labour, and they look a bit iffy to me:

The consistent story here is of the Conservatives managing to win a three-way race, despite Labour often starting at 50%, because the LibDems rise from nothing-ish to a hefty 15-17% at the expense of Labour. That just feels odd. If the LibDem rise is driven by anger about Brexit, the Labour incumbents in many cases would surely be able to cauterise it into an anti-Tory vote. Implicit local deals will take a great many of these safe scenarios out of play. Very pro-Remain places like Kensington and Canterbury also fall to the Conservatives for similar reasons. So I need some way of modelling a more intelligent dispersion of LibDem votes. Relatedly…

The small number of LibDem gains from the Conservatives look surprising

Given the size of the LibDem surge, the poor return looks like a particular consequence of the uniformity of the swing. It is straightforwardly hard to model a surge from almost nothing, but at the least there ought to be more volatility in the model, which will tend to help the surging party gain seats. For example, I have heard that the south-west London seat where I reside, currently a Con-Labour marginal, could be a LibDem gain. That would require that I apply some Remain-supercharged variable to the LibDems in SW London, and make up the difference by removing votes from Brexity Wales, for example.

I am not quite sure how to do this. I can take clues from the 2017 result, compared to 2015, which may have some Brexit-intentionality. Chris Hanretty has produced the EU 2019 election results by constituency, which may provide another clue.

Kids-willing, I will try to provide another model with lots of charts in the next couple of days!

*though, eyeballing the differences, Blair’s landslides stick out too.

Burke, and being against “the coercive authority of such instructions”

When I first heard the words “MPs don’t get to choose which votes to respect” (repeated loyally by the PM and Party Chairman), my first thought was that someone was going to mention Burke. No doubt many of you had that same thought. And Sunder Katwala had it first and best, and wrote a splendid essay on CapX which you should read.

In case you are wondering what I am on about, this refers to a famous letter written by the great Conservative Edmund Burke to his constituents, in response to pressures you can guess at. The MP for Bristol is being asked to respond strictly to the ‘coercive instructions’ of his constituents, and he responds thusly:

To deliver an opinion, is the right of all men; that of constituents is a weighty and respectable opinion, which a representative ought always to rejoice to hear; and which he ought always most seriously to consider. But authoritative instructions; mandates issued, which the member is bound blindly and implicitly to obey, to vote, and to argue for, though contrary to the clearest conviction of his judgment and conscience,–these are things utterly unknown to the laws of this land, and which arise from a fundamental mistake of the whole order and tenor of our constitution.

MPs are not delegates or ambassadors, bearing firm instructions, but representatives. They are members not of their constituencies, but of Parliament, and should not go there and blindly ignore the wider good of the whole community. It is not a place for them to sacrifice “his unbiased opinion, his mature judgment, his enlightened conscience … to you, to any man, or to any set of men living”.

It is a beautiful letter, though as Sunder observes it provides no slam-dunk for the opponents of Brexit; the decision to have a referendum was clearly taken by a parliament of MPs exercising that judgment and conscience. To ignore the referendum as if it didn’t happen would be an act of bad faith. Moreover, Sunder points out that the representative model argued for by Burke is rather unpopular with the public, who much prefer the “do as you are told” model of democracy.

However, I think Burke’s broader point still stands against the focus-group-tested, judgment-lobotomising line, “MPs do not get to choose”. That is precisely what MPs are there for. More generally, while they are under a duty to follow instructions such as those issuing from that (advisory) referendum, this is only one duty amongst many others. There are absolutely no “come what may” instructions, no “do or die”s that outweigh all other considerations, no matter how weighty. In fact, the entire business of deliberative democracy is a matter of weighing dozens of contradictory duties: the duty to keep the government solvent against the need to fund public services; the duty to protect the environment against our personal freedom to choose how we behave. There are constraints and trade-offs everywhere, from the high abstractions – liberty against security, efficiency against fairness – down to the smallest value-for-money argument or row about burdensome red tape. It is why we have government by collective agreement – so all the interests can be weighed.

This is staggeringly obvious, but still not appreciated enough. Just because none of the other duties is expressed in as crude a form as a referendum vote does not mean they suddenly cease to apply. The then-PM, in weighing up her withdrawal agreement, had to balance the (important) need to pursue that referendum result against all the other duties pressing upon a responsible prime minister: to keep the economy working well, to maintain our international standing and friendships, to protect public safety, to provide for an orderly life for the citizens, and many more. She found these were best navigated by the construction of a complex deal likely to keep us close to the European economic sphere but outside its slowly constricting politics. I think it was ugly, and as good as one can expect given the constraints. In my opinion, it found the unhappy balancing point of that unhappy plebiscite; one that would annoy a lot of people, but basically do the job.

Above all there was never a duty to render realistic the impossible promises made in someone else’s referendum campaign.  There is no “spirit of the vote”, and if some hyperventilating campaigner promised a paradise of zero regulation, fountains of cash for everyone, and trade deals with all of South America, tough.

Absolutely any arrangement that meant the UK no longer featured on this Wikipedia page fulfils the strict requirements of the vote. Beyond that, there is only what the government and Parliament in their mature judgment thinks is wise for the whole community, in light of all possible considerations. If they decide a No Deal Brexit tramples over too many other important duties, that is what you sent them to Parliament to decide.

Would it matter if we built “a million too many houses” anyway?

The previous post about housing supply kicked off quite a debate (by the standards of my Twitter stream). It reminded me of how for 90% of the time I have been arguing against Ian on the whole housing supply issue and probing what will always feel like a counterintuitive thesis – that building many more houses cannot meaningfully address what everyone in the media accepts is “Britain’s Housing Crisis”.

I won’t summarise all the points made; I think Ian will probably address them himself. Those that spring to mind were:

  • Ian has somehow gotten the rental price series wrong. I doubt very much this is the case. He has been obsessively monitoring this and is hardly the sole source. I even noticed it myself in a five-year-old post which itself links through to Bond Vigilantes and Fathom Consulting. Rents just haven’t soared, according to ONS figures.
  • The (low) elasticity of response to more supply of houses means it is a ‘free shot’ – we all just get richer! This doesn’t even work on its own terms – any elasticity around 1.5 will lead the overall value of the housing stock to be lower if you build more (see the quick arithmetic sketch after this list) – but in any case it is not the point of housing policy to hit some asset-value target.
  • Ian must have gotten the household and housing numbers wrong because we cannot find all the empty houses. Here I have to defer to him about how these numbers are compiled. I understand there are real doubts about how local authorities count vacancies, but this is not an area I am expert on.
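On the second bullet (the ‘free shot’), the arithmetic is worth a quick check – a sketch with purely illustrative round numbers. If a 1% rise in the stock cuts prices by more than 1%, the total value of the stock falls.

```python
# Quick check of the 'free shot' arithmetic; figures are purely illustrative.
def change_in_total_value(extra_supply_pct, price_elasticity):
    """Proportional change in (stock x average price) after a supply increase."""
    quantity = 1 + extra_supply_pct / 100
    price = 1 + price_elasticity * extra_supply_pct / 100
    return quantity * price - 1

for elasticity in (-0.5, -1.0, -1.5, -2.0):
    change = change_in_total_value(1.0, elasticity)
    print(f"elasticity {elasticity}: 1% more houses -> total value {change:+.2%}")
```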

But for me the strongest response is best put in a comment left here:

we live in a world of abundance and waste. Why then should our consumption of housing goods and services be running only at the rate of household formation, and not run way way ahead to the point of satiation like other goods and services?

Why not? It is a curiously statist way of looking at things, to “count” the demand we need (household formation) and try thereby to work out what the country needs.  We don’t do that for cars or iPads or cinemas. Why not just let the market do the work? And this is after all what most supply-side advocates are calling for: they notice artificial restrictions (the Green Belt, local planning rules, the hoarding of decent sites, etc) and want them attacked so that whatever demand is latent can be met.  Who cares if the aggregate of all these dispersed decisions adds up to something  that exceeds what an economist thinks is ‘needed’? No one is proposing some state-driven building scheme that will crash through all these market signals.

The obvious rejoinder is that unnecessary investment is waste.  As Ian puts it

greater supply is likely to result in further growth in the number of unoccupied houses, which may not be an efficient use of scarce investment capital.

Imagine we were a village economy of 1000 households that usually built only 10 new houses a year. The village has only a little surplus resource for investment, which it may want to put into flood defences, new fishing boats, or a school; doubling the housing investment rate would hold down these other good things and damage the village’s prospects. In the case of the UK, if we were talking about a million extra, cheap homes then the total cost might be £100bn – a huge amount of diverted investment, if that is how to frame the problem. If it were like investment in our energy infrastructure, we would be employing hundreds of analysts to argue about the right amount.

But is it? We are not a big village collectively choosing how much to sink into R&D, roads, cinemas and shopping centres. Housing is dispersed – it is not like grid infrastructure. When you thwart someone’s attempt to invest in new housing, do you really cause investment funds to flow into better things? There have certainly been claims to that effect – that British banks are only interested in backing (safe) property investments and uninterested in backing good British business. But it would be an odd justification for housebuilding restrictions to argue that this is how we push more finance into (good) SMEs. And remember there really hasn’t been a housebuilding boom this last 30 years – that is the whole point. The charge is generally levelled instead against the business of lending towards the buying and selling of already existing homes.

In short, it would be highly speculative to argue that restrictions on housebuilding are a well-designed effort to improve the allocation of scarce British investment capital.  People might just spend the money on holidays or gambling instead, not smart business investment. And we are in a global market for capital. Good ideas get funded.

Nevertheless, if you accept Ian’s premises, you have to ask what would happen if supply restrictions were loosened.  Here is what I think:

  • Perhaps more houses would be built where people actually need them, and fewer where they don’t. The national surplus does not get worse. If restrictions have held back house supply in Wandsworth, lifting them enables more people to live there. These people come from other areas, leaving behind more vacant properties and less building in those places.
  • Perhaps some more households form in response – though I think Ian has addressed the question of suppressed households and finds it implausible that average household size is going to keep falling at the rate it once did.
  • Perhaps the total number that are vacant, nationally, rises.

If it is the last of these, is it such a terrible thing? Maybe it sounds like it is, from a pure efficiency point of view: vacant houses are like the inert inventory a shop or manufacturer keeps aside ahead of need. Efficiency means cutting down on inventory.  But as my commenter implies, every other decently functioning market has excess capacity. It is anti-competitive to stop a supermarket being built, because we already have ‘enough’ supermarkets. Same too with houses.

Accepting Ian’s premise, a burst of a million houses above trend might lower rents by 5-8%. The effect on total landlord profits might be greater – the same total rental envelope is now spread over more supply. Neither of these strikes me as a particularly bad thing. I haven’t checked for a while (so do correct me) but returns on capital in the property-investment business are pretty good. Cutting down on economic rent is a good object of policy.

So I still think that any feasible level of increased housebuilding is not likely to lower the cost of housing by all that much. Unaffordable houses will still appear very unaffordable. But I cannot see the case for artificial restrictions to supply. Let the market do its work. Just don’t be surprised if the effect is a bit ‘meh’.

 

Ian Mulheirn says UK housing is not a supply problem. No one can prove him wrong

Around three years ago in some bistro in Soho, Ian Mulheirn startled a tableful of economists going at their cassoulet with the bland statement: “Housing? It is not a supply problem”.

To call this heresy is melodramatic. We just thought he was winding us up. To question the mantra “just build more houses already” felt more like the kind of smart-alec model-play economists like to indulge in, such as when Hayek reportedly told Richard Kahn that buying a raincoat would worsen the Great Depression, or when Scott Sumner reminds people that low interest rates can be contractionary (unlike Hayek, he has a point). Surely it is obviously a supply problem? Just look at those easily googled-up graphs of housebuilding, house prices, ownership rates.

It all fits rather neatly into a political narrative, too: around 30 years ago, the wicked Thatcherites sold off the council homes and around the same time NIMBYs seized the planning system. For a while people kept on owning homes because other kinds of wicked capitalists (bankers) lent them money they could not afford, but when that game was up, the whole edifice came crashing down, leaving our young robbed of their future. It is a story that convinces the free market liberals and socialists alike. If this lot ever joined in a Government of National Unity, a Million More Homes might be the only agenda they coalesce around.

But Ian wasn’t trolling, and the next day at the FT I received a rather impressive and weighty report full of careful analysis like this

I proceeded to find some hours in my day for arguing with Ian and proving some embarrassing mistake in his fundamental thinking. Because, of course, he could not be right – it is obviously a supply problem, it must be some trick of his model or failure to consider some obvious factor. Clearly we do not build enough houses – just eyeball those charts! We have had a long boom in net immigration, house supply has not risen in line, house prices have, come on, use Ockham’s razor, it HAS to be supply …
And so a long email chain began. Really long – probably the longest of my life. And every decent point I could think of, Ian patiently rebuffed over the next few months. I learned an embarrassing amount myself. The rental time series do NOT show rents swallowing an ever larger portion of household income. There is a critical difference between Housing as an Asset and Housing as a Service, and for the former you need to think in terms of an asset that has a yield, which must be compared to all the other interest rates available. Yes, Ian had thought about household formation being ‘endogenous’ – translation: the cost of housing possibly suppressing people from going off and setting up homes.

And then Dominic Raab of all people helped to prove Ian right, or at least to show that the Government agrees. In early 2018 he, as housing minister, kicked off some ugly argument about whether immigrants were responsible for our soaring house prices, and as a consequence the official MHCLG thinking on this subject was aired. Here it is. There in Table 1 is a straight acknowledgement of the elasticities that Ian references. 1% more houses means 2% lower prices: over the 1991 to 2016 period, the 20% extra houses (i.e. about 5m, or 200,000 a year) had therefore led to house prices being 40% lower than they would otherwise have been. Sounds a lot? Well, over that period house prices had risen 160%.

To put it another way: if some combination of council house building, green belt destruction and NIMBY-confronting had raised the building rate by 100,000 a year, there might have been 2.5m more houses, about 10% more, and house prices might have risen 20% less. So they might have risen by 140% instead of 160%. And that is one ENORMOUS amount of extra building – and, according to Ian’s figures, it would have hugely outpaced the actual rate of household formation.*

Let me repeat: the Government, through its own figures, agrees with Ian. No feasible amount of house building will make housing markedly more affordable, or the buying of a house much easier.

Ian has now moved to the Tony Blair Institute (he was at Oxford Economics before, and we briefly worked together at the Social Market Foundation). From the TBI he has released his latest attempt to educate the supply-convinced: it can be found here. Please read it and tweet me or, better, him what you think he has missed, because I know you think he must be wrong. Mostly lurking, I have watched him patiently and politely argue with people for three years now. See this Resolution Foundation blog. His ability to keep his temper in the face of so many arguments he has already heard is quite amazing. Do follow him. And read that paper, and reflect on charts like this showing the greater affordability of housing as a service.


You may ask: what WILL make houses more affordable? A massive shift in the interest rate curve might, though it will also push many more owners into distress; houses were more ‘affordable’ in the early 1990s from that point of view, but the rate of repossession was also sky-high from the unaffordability of the monthly mortgage payments. Transfers of buying power from (old) owners to the (young) yet-to-own might, but that is a monumental political challenge. Greater mortgage availability might, but people will shake a withered finger at you and warn of repeating the errors of the 2000s.

The tougher question is: if this is so compelling, why is everyone so sure the problem is supply? Why do we all feel it in our guts? Here I cannot answer, and cannot rule out myself throwing down my pen one day, shouting “it MUST be supply” and restarting one of those enormous email chains. You tell me.

All in all, it is right that the provision of housing is a political issue. But whenever I hear someone profess some obvious-sounding answer, I will always ask: “have they tackled Ian Mulheirn yet?”

*UPDATE. Jonathan Portes has politely pointed out an error in the way MHCLG presents the outcome of elasticities; instead of applying that 20% to the original price (i.e. to 100), it should be applied to 260 – so, in light of that extra supply, the total price would have risen 108%, not 160% (this is his tweet, which Ian acknowledges). However, this error is only so egregious because we are talking about analysis of a very large, 25-year run-up in prices (and 2 is apparently a high estimate for the supply elasticity). It strikes me as still being the case that even adding 300,000 more houses than expected in just one year – about 1% of the stock – would lower the price level, ceteris paribus, by just 2%.
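For clarity, here is the corrected arithmetic as I understand it, using the figures from the worked example above: the 20% price effect of the extra supply should scale the final price level rather than be knocked off the original base.

```python
# The corrected counterfactual arithmetic, using figures quoted in the post.
actual_rise = 1.60      # prices actually rose 160% over the period
supply_effect = 0.20    # 10% more houses x elasticity of 2 = prices 20% lower

wrong = (1 + actual_rise) - supply_effect          # knocked off the base of 1.00
right = (1 + actual_rise) * (1 - supply_effect)    # scaling the final level

print(f"knocked off the base:    prices up {wrong - 1:.0%}")   # 140%
print(f"scaling the final level: prices up {right - 1:.0%}")   # 108%
```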

The best way to put Ian’s point is to quote it directly: this is how he makes it in the document

To put these figures in context, it is worth considering the situation in England. The latest available data suggests that the first half of 2018 England had a total of just under 24.2 million dwellings, 4.9% more than the number of households. The above relationships suggest that, if households in England were to form at a rate of 200,000 per year, net additions of 300,000 per year would cut real terms house prices by 0.8% per year, all else equal. Applying the range of UK estimates above, such rate of additions might therefore be expected to reduce prices by between  7% and 13% over 20 years.