I’m continuing to read my way through Hilary Allen’s Driverless Finance.  The second chapter examines how fintech is changing the way the financial system manages risk.

She begins by breaking down different kinds of risks.  Systemic risk, of course, is the biggest risk of them all.  It’s really about risks to the entire financial system, not just winners and losers within it.  For example, it doesn’t matter how well-diversified your portfolio is if nuclear war breaks out.  That’s systemic risk. Managing systemic risk is primarily a job for regulators and government.

Investors often need to manage other kinds of risks.  Market risk is about what might happen in the marketplace; interest rate risk on bonds, for example, is a market risk.  Credit risk is the risk that a counterparty won’t pay you.  There are many kinds of risk, and substantial research has examined how to build a portfolio that manages them.  Financial firms now construct complex models to game out these risks.  There’s even model risk: the risk that your model doesn’t accurately capture the risks!
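
The diversification logic behind portfolio construction can be sketched with the standard two-asset volatility formula.  This is my own toy illustration, not an example from the book:

```python
from math import sqrt

def portfolio_vol(w1, w2, sigma1, sigma2, rho):
    """Volatility of a two-asset portfolio with weights w1 and w2,
    asset volatilities sigma1 and sigma2, and correlation rho."""
    variance = (w1**2 * sigma1**2 + w2**2 * sigma2**2
                + 2 * w1 * w2 * rho * sigma1 * sigma2)
    return sqrt(variance)

# Two equally risky assets (20% volatility each), held 50/50.
perfectly_correlated = portfolio_vol(0.5, 0.5, 0.20, 0.20, rho=1.0)
uncorrelated = portfolio_vol(0.5, 0.5, 0.20, 0.20, rho=0.0)

print(perfectly_correlated)  # 0.20 -- no diversification benefit
print(uncorrelated)          # ~0.141 -- lower correlation, lower risk
```

The point of the sketch: diversification only reduces risk when assets don’t all move together, which is exactly what fails in a systemic event.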

Allen brings our attention to machine learning and risk management.  We now have rapidly developing artificial intelligence technology working to identify and learn from patterns in data.  This technology has begun to play a role in risk management as well.

Machine learning now happens in a variety of different ways.  Firms using machine learning in risk management still make choices about which type of technology to deploy.  They also make choices about the data they feed into the machine.  How do they acquire it?  How much do they spend?  How do they test whether the data used to train the machine was good or garbage?  These are all choices that people make.  And they may make these choices in ways that increase their ability to make money in the short term.
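
One standard answer to the testing question is to hold out data the machine never saw during training.  A toy sketch (my illustration, not Allen’s) of why evaluating a model on its own training data proves nothing:

```python
import random

random.seed(0)

# Toy dataset: features are random and labels are pure noise, so there
# is nothing real for any model to learn.
train = [(random.random(), random.choice([0, 1])) for _ in range(200)]
holdout = [(random.random(), random.choice([0, 1])) for _ in range(200)]

def predict(x, data):
    """1-nearest-neighbour 'model': it simply memorizes the data."""
    return min(data, key=lambda p: abs(p[0] - x))[1]

def accuracy(dataset):
    hits = sum(predict(x, train) == y for x, y in dataset)
    return hits / len(dataset)

print(accuracy(train))    # 1.0 -- perfect score on memorized data
print(accuracy(holdout))  # ~0.5 -- on fresh data it's a coin flip
```

A firm that only ever scores its model on the data it trained on can convince itself (and its regulator) that garbage data worked.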

We should not be too confident that machine learning can solve risk management.  Machine learning may particularly struggle with low-probability events and unusual circumstances: there simply may not be enough data available, or datasets may exclude unusual circumstances entirely.  How do we manage risk in unusual circumstances?  Do we keep deferring to the algorithms?  We shouldn’t simply trust an algorithmic answer when we can’t understand it.  Allen warns about the role automation bias may play, and about the risk that regulated firms will automate decisions with inscrutable black-box algorithms to ward off regulators.
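
A toy sketch (again mine, not the book’s) of why rare events are so treacherous for data-driven models: when crises are scarce in the data, a model that never predicts one still looks excellent on headline accuracy.

```python
# 1,000 trading days, only 5 of which are crises.
days = [0] * 995 + [1] * 5          # 1 marks a crisis day
predictions = [0] * 1000            # model never predicts a crisis

accuracy = sum(p == d for p, d in zip(predictions, days)) / len(days)
crises_caught = sum(p == 1 and d == 1 for p, d in zip(predictions, days))

print(accuracy)       # 0.995 -- looks great on paper
print(crises_caught)  # 0 -- every event that matters is missed
```

The days that matter most for risk management are exactly the ones the training data barely contains.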

Allen also warns about the possibility of algorithmic decisions leading to asset bubbles.  If everyone uses the same algorithm and bids up the price of the same kinds of assets, enormous and widespread harm could result when the music stops.  This has already happened with credit ratings for mortgage-backed securities.

She also highlights the possibility that widespread deployment of machine learning technology may lead to both increased complexity and more coordinated behavior.  If most market participants are running the same algorithm, they may all behave in the same way, increasing risks overall.  We can see this beginning to happen as robo-advisers, insurers, and risk managers all lean more and more on these tools.
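
The herding concern can be made concrete with a toy simulation (my sketch, not from the book): ten traders each follow a stop-loss rule, and every forced sale pushes the price down further.  When everyone runs the identical rule, a single shock cascades; with heterogeneous rules, the same shock is absorbed.

```python
def simulate(thresholds, start=10000, shock=500, impact=100):
    """Prices in basis points.  Each trader sells (once) when the price
    falls below its stop-loss threshold; every sale knocks the price
    down further, possibly triggering more sales."""
    price = start - shock              # an initial shock hits the market
    sold = [False] * len(thresholds)
    triggered = True
    while triggered:
        triggered = False
        for i, level in enumerate(thresholds):
            if not sold[i] and price < level:
                sold[i] = True
                price -= impact        # fire-sale impact of one trader
                triggered = True
    return price

identical = simulate([9600] * 10)                        # same rule everywhere
diverse = simulate([9600 - 200 * i for i in range(10)])  # varied rules

print(identical)  # 8500 -- one shock cascades into a crash
print(diverse)    # 9400 -- the same shock is absorbed
```

The danger isn’t any one algorithm; it’s the correlation between them.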