Wednesday, August 12, 2015

Multiplicative Factors in Games and Cause Prioritization

TL;DR: If the impacts of two causes add together, it might make sense to heavily prioritize the one with the higher expected value per dollar.  If they multiply, on the other hand, it makes sense to distribute effort more evenly across the causes.  I think that many causes in the effective altruism sphere interact more multiplicatively than additively, implying that it's important to heavily support multiple causes, not just to focus on the most appealing one.

-----------


Part of the effective altruism movement was founded on the idea that, within public health charities, there is an incredibly wide spread between the most effective and the least effective.  Effective altruists have recently been coming around to the idea that the difference between the most and least effective cause areas is at least as important.  But while most EAs will agree that global public health interventions are generally more effective, or at least have higher potential, than supporting your local opera house, there's a fair bit of disagreement over what the most effective cause area is.  Global poverty, animal welfare, existential risk, and movement building/meta-EA charities are the most popular, but there are also proponents of first world education, prioritization research, economics, life extension, and a whole host of other issues.

Recently there's been a lot of talk about whether one cause is so important that all other causes are rounding errors compared to it (though there's some disagreement over what that cause would be!).  The argument roughly goes: when computing the expected impact of causes, mine is 10^30 times higher than any other, so nothing else matters.  For instance, there are 10^58 future humans, so increasing the odds that they exist by even .0001% is still 10^44 times more important than anything that impacts current humans.  Similar arguments have been made where the "very large number" is the number of animals, or the intractability of a cause, or the moral discounting of some group (often future humans or animals).

This line of thinking implicitly assumes that the impacts of causes add together rather than multiply, and I think that's probably not a very good model.  But first, a foray into games.

Krug Versus Gromp


Imagine that you're playing some game against a friend.  You each have a character--yours is named Krug, and your opponent's is named Gromp.  The characters will eventually battle each other, once, to the death.  Each does some amount of damage per second D and has some amount of health H.  They'll keep attacking each other continuously until one is dead.

If they fight, then Krug will take H_g / D_k seconds to kill Gromp, and Gromp will take H_k / D_g seconds to kill Krug; the winner is the one who lasts longer, i.e. whoever's kill time is shorter.  Multiply that comparison through by D_g*D_k and you get that the winner is the one with the higher D*H--what you're trying to maximize is the product of damage per second and health.  It doesn't matter what your opponent is doing--there's no rock-paper-scissors going on.  You just want to maximize health * damage.
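
Here's a quick sanity check of that rule, sketched in Python with made-up stats (the specific numbers are my own illustration, not from the post):

```python
def winner(d_k, h_k, d_g, h_g):
    # Each character survives (own health) / (opponent's damage per second).
    krug_lasts = h_k / d_g
    gromp_lasts = h_g / d_k
    return "Krug" if krug_lasts > gromp_lasts else "Gromp"

# h_k/d_g > h_g/d_k is the same comparison as d_k*h_k > d_g*h_g
# (multiply both sides by d_k*d_g), so the higher D*H wins.
print(winner(d_k=250, h_k=1000, d_g=300, h_g=800))  # Krug: 250*1000 > 300*800
```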

Now let's say that before this fight, you each get to buy items to equip to your character.  You're buying for Krug.  Krug starts out with no health and no damage.  There are two items you can buy: swords that each give 5 damage per second, and shields that each give 20 health.  They both cost $1 each, and you have $100 to spend.  It turns out that the right way to spend your money is to spend $50 buying 50 swords, and $50 buying 50 shields, ending up with 250 damage per second, and 1,000 health.  (You can play around with other options if you want, but I promise this is the best.)
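
If you don't want to take my word for it, a one-line brute force over every possible split (using the post's prices and stats) confirms the even allocation:

```python
# Try every way to split $100 between swords (+5 dps each) and shields (+20 health each).
best_swords = max(range(101), key=lambda s: (5 * s) * (20 * (100 - s)))
print(best_swords, 5 * best_swords, 20 * (100 - best_swords))  # 50 swords -> 250 dps, 1000 health
```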

The really cool thing is that your money allocation is totally independent of the cost of swords and shields, and of how much damage/health they give.  You should spend half your money on swords and half on shields, no matter what.  If swords cost $10 and gave 1 damage per second, and shields cost $1 and gave 100 health, you should still spend $50 on each.  One way to think about this is: the nth dollar I spend on swords will increase my damage per second by a factor of n/(n-1), and the nth dollar spent on shields will increase my health by a factor of n/(n-1).  Since all I care about is damage * health, I can just pull out these multiplicative factors--the actual scale of the numbers doesn't matter at all.
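
The same point can be checked symbolically; here's a sketch using sympy, where the per-dollar rates a and b are arbitrary placeholders:

```python
import sympy as sp

x, B, a, b = sp.symbols("x B a b", positive=True)
product = (a * x) * (b * (B - x))           # damage * health when $x of budget $B goes to swords
optimum = sp.solve(sp.diff(product, x), x)  # maximize by setting the derivative to zero
print(optimum)  # [B/2] -- the split doesn't depend on the rates a and b at all
```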

This turns out to be a useful way to look at a wide variety of games.  In Magic, 4/4's are better than 2/6's and 6/2's; in League of Legends, bruisers win duels; in Starcraft, Zerglings and Zealots are very strong combat units.  In most games, the most powerful duelers are the units that have comparable amounts of investment in attack and defense.

Sometimes there are other stats that matter, too.  For instance, there might be health, damage per attack, and attacks per second.  In this case your total badassery is the product of all three, and you should spend 1/3 of your money on shields, 1/3 on swords, and 1/3 on caffeine (or whatever makes you attack quickly).  Most combat stats in games are multiplicative, and you're usually best off spending equal amounts of money on all of them, unless you're specifically incentivized not to (e.g. by getting more and more efficient ways to buy swords the more you spend on swords).  In general, when factors each increase linearly in money spent and multiply with each other, you're best off spending equal amounts of money on each of the factors.  Let's call this the Principle of Distributed Power (PDP).
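
A brute-force check of the three-stat case tells the same story (the per-dollar rates below are made up, and as before they don't affect where the optimum lands):

```python
from itertools import product as all_pairs

def badassery(shields, swords, caffeine):
    # health * damage per attack * attacks per second, with made-up per-dollar rates
    return (20 * shields) * (5 * swords) * (2 * caffeine)

# Split $90 across the three stats and keep the best combination.
best = max(
    ((s, w, 90 - s - w) for s, w in all_pairs(range(91), repeat=2) if s + w <= 90),
    key=lambda split: badassery(*split),
)
print(best)  # (30, 30, 30): a third of the budget on each stat
```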


Multiplicative Causes


So, what does this have to do with effective altruism?

I think that, in practice, the impacts of lots of causes multiply instead of adding.  For instance, a plausible way to view the future is that expected utility is X * G, where X is the probability that we avoid existential risk and make it to the far future, and G is the goodness of the world we create, assuming we succeed in avoiding x-risk.  By the Principle of Distributed Power, you'd want to invest equal amounts of resources in X and G.  But within X there are actually lots of different forms of existential risk--AI, global warming, bioterrorism, etc.  And within G there are lots and lots of factors, each of which might multiply with the others--technological advancement, the care with which we treat animals, our ability to effectively govern ourselves, etc.  And the PDP implies that our prior should be to invest comparable resources in each of those terms.

The real world is a lot messier than the battle between Krug and Gromp.  One of the big differences is that the impact of work on most of these causes isn't linear.  If you invest $1M in global warming x-risk maybe you reduce the odds that it destroys us by .01%, but if you invest $10^30 clearly you don't decrease the odds by 10^22%--the odds can't go below 0.  Many of these causes have some best achievable outcome, and so at some point there has to be decreasing marginal utility of resources.
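
As a toy illustration of that boundedness (the exponential form and all the numbers here are my own assumptions, not a claim about real x-risk spending):

```python
import math

# Toy model: residual risk after spending m dollars is p0 * exp(-m / scale),
# so each extra dollar buys less risk reduction and risk never goes below zero.
def residual_risk(m, p0=0.10, scale=1e9):
    return p0 * math.exp(-m / scale)

for m in (0.0, 1e9, 1e10):
    print(f"${m:,.0f} spent -> residual risk {residual_risk(m):.6%}")
```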

Another difference is that we're not starting from zero on all causes.  The world has already invested billions of dollars in fighting global warming, and so that should be subtracted from the amount that's efficient to further spend on it.  (If you start off with $100 already invested in swords, then your next $100 should be invested in shields before you go back to splitting up your investments.)
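
The parenthetical catch-up rule is easy to verify with the same kind of brute force (hypothetical dollar amounts; the per-dollar rates factor out of the product, so they're omitted):

```python
# $100 is already sunk into swords; split a fresh $100 to maximize swords_total * shields_total.
best_shields = max(range(101), key=lambda s: (100 + (100 - s)) * s)
print(best_shields)  # 100 -> put the entire new $100 into shields to even things out
```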

In practice, when considering causes that multiply together, the question of how to divide up resources depends on how much has already been invested, where on the probability distribution for that cause you currently think you are, and lots of other practicalities.  In other words, it depends on how much you think it costs to increase your probability of a desired outcome by 1%.

But as long as there are other factors that multiply with it, a factor's importance transfers to them as well.  In some cases this was figured out long ago: the whole reason x-risk is important is that the future is immensely important, which is equally an argument for improving the future and for getting there.

None of this proves anything.  But it's significantly changed my prior, and I now think it's likely that the EA movement should heavily invest in multiple causes, not just one.

I've spent a lot of time in my life trying to decide what the single most important cause is, and pissing other people off by being an asshole when I think I've found it.  I also like playing AD carries.  But my winrate with them isn't very high.  Maybe it's time to build bruiser.




7 comments:

  1. Great post!!

    Given how large current world investments are in various projects, and given that those investments probably aren't already perfectly optimized for the product that you want to optimize, I would expect that you (and the whole EA movement) could spend all your resources pushing on just one variable, trying to make it more aligned with the others, and still not get to a point where that variable hits diminishing marginal returns relative to pushing on other variables. For example, suppose the world's current distribution between D and H is 48% vs. 52%. Even if EAs generate millions of dollars of donations, they probably wouldn't change the distribution by more than making it something like 48.1% vs. 51.9%.

    As far as "the future is that expected utility is X * G", one has to remember that G could be negative, which changes the calculation a bit. Maybe you mean that G is the expected goodness of the future.

    ReplyDelete
  2. Yup totally agree that G could be negative; by X and G I do mean the expected value of X and G.

    I think that that's a good argument for not spending resources in most causes. But for the causes EAs tend to care about, it's a lot less clear--(copy-pasting what I said on the FB thread)--EA does make up a significant amount of spending in many of the causes EAs tend to care about (e.g. x-risk, farmed animal welfare, EA movement building, prioritization research, wild animal welfare, etc.), and even where it doesn't, it probably has a reasonably large impact on what the future of the field will look like (e.g. GiveWell with global poverty donations).

    ReplyDelete
    Replies
    1. Very thought-provoking post.

      My thoughts, after reading all the above comments:
      - causes add rather than multiply in the general case e.g. economic development + better tech + immigration reform + justice reform.
      - impact of investment in a cause is generally not linear. Rather, you'd want it to have diminishing returns, perhaps described by a log. I'm not so expert at mathematical modelling, but in the more general case, if you want to use multiplicative causes, you have I = f(y)*g(x) where y = 1-x. You want to maximise impact, so you set ∂I/∂x = 0, i.e. f(y)*∂g/∂x - g(x)*∂f/∂y = 0 (the minus sign coming from ∂y/∂x = -1). This will not generally have the solution x = y.
      - these models might apply to how the world's resources should be distributed. But 99.9% of funding of poverty alleviation is going to occur anyway. If we can only move x (the fraction of funds distributed to global economic development, rather than existential risk reduction) from .999 (vs 0.001) to .998 (vs 0.002), then this would be good on the multiplicative cause model.

      Delete
    2. - Yeah I think that a reasonable way to think of causes is probably that there are categories within which things add (e.g. economic development), and then the categories multiply with each other.

      - I also agree that causes probably have decreasing marginal utility. So you probably want to find out which cause, right now, has the greatest % increase per $ spent. I'll maybe write up something mathier on it later, but I think it doesn't change the general notion that you want to split up resources.
      - yeah I think it's a reasonable argument against poverty reduction unless you think it's much much much more effective than most.

      Delete
  3. I'm not swayed much. I don't see any plausible multiplicative effects. From the perspective of reducing animal suffering being the most important thing (assuming you don't think the future is relevant, or that x-risk isn't actually much of a worry), x-risk is irrelevant. And from the perspective of x-risk being the most important thing, what the future looks like is dominated by what future agents do rather than by the suffering we eliminate now. There doesn't seem to be any relevant multiplication going on at all.

    I do think it's valuable to bring up cause prioritization arguments like these, and I appreciate that you're doing so, but I don't think this has the macro-applicability you mention here.

    ReplyDelete
    Replies
    1. Hi jsalvati :) Animal advocacy in the present helps contribute to the values that future (post-)humans hold. Making it more likely future agents care about animals makes it more likely the future will be good. So some sort of multiplication (maybe with a small coefficient) seems appropriate.

      Delete
  4. I don't think that this is going to be a practical concern for the effective altruism movement for some time. As long as you are investing small amounts of money (and on a global scale the amount of money spent by the EA movement is still small), your most effective investment is the one with the highest marginal returns. While global returns to different investments are not always additive, marginal returns are. Thus, unless the amount of money that you are investing is large relative to the ratio of first and second derivatives (i.e. the point at which you reach substantial diminishing/increasing returns), you should basically just pick the single cause with the highest marginal returns and invest in only that. If EAs were the only people in the world thinking about global health or animal welfare or x-risk, we might need to spread our money out more, because the first marginal dollar probably has a pretty substantial impact (if literally nobody else had ever thought about x-risks, it might be super important for you to spend five minutes considering whether it's plausible that we would be wiped out tomorrow). However, none of these seem to be the case, and x-risk is the only cause I can think of where EA dollars might even plausibly account for a large fraction of the worldwide work in the area.

    ReplyDelete