Eric Halpern’s Post

Financial modeling | Actuarial | Risk management

Some of what passes for "AI" is really just good data science. Policyholder compression is typically viewed as a data science problem, so actuarial practitioners often revert to their #datascience comfort zone and apply clustering (or heuristic groupings) based on policyholder characteristics. But as our results at Stoch Analytics make clear, it's possible to do a lot better.

Our compression approach is more #AI-like: use a training dataset to establish the pattern, then apply the results to the next problem. When I first used this approach 20 years ago, it compressed a set of a million records down to a thousand and blew me away. Stoch Analytics developed the same approach independently, and, having battle-tested it over the years across a diverse range of products, blocks of business, and use cases, today offers an industrial-grade tool suitable for enterprise use.

Industry surveys show that insurers using other clustering algorithms get compression ratios of 6:1 on average, with "good" ratios of maybe 15:1. With our iReplicate Compression tool, our clients achieve ratios of at least 100:1, and often 200:1 or 400:1. And while clustering methods begin to break down within just a few timesteps into the projection, our training-set approach holds up with fidelity well into the projection horizon.

This isn't just of academic interest to me as a practitioner. As our clients know, this translates to more business insight and, in an age where fewer compute cycles mean lower cloud spend, dramatically lower costs. I'd love to share the benefits. Please feel free to reach out to talk about how to achieve similar results.
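For readers who want a concrete sense of the difference, here is a minimal Python sketch of the general "train on cash flows, then reuse the weights" idea. To be clear, this is my own illustration on synthetic data, not the iReplicate algorithm itself: it uses L1-regularized regression (scikit-learn's Lasso with nonnegative weights) to select a sparse, weighted subset of policies whose combined cash flows replicate the full portfolio, where a characteristics-based clustering step would instead pick cluster representatives.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_policies, n_targets = 10_000, 200  # targets = scenarios x timesteps, flattened

# Synthetic stand-in for per-policy projected cash flows (rows = policies).
X = rng.gamma(shape=2.0, scale=1.0, size=(n_policies, n_targets))
portfolio = X.sum(axis=0)  # full-portfolio cash flow at each target

# L1 regularization drives most weights to exactly zero; the surviving
# policies, with their fitted weights, form the compressed portfolio.
model = Lasso(alpha=1.0, positive=True, fit_intercept=False, max_iter=50_000)
model.fit(X.T, portfolio)  # solve X.T @ w ~= portfolio with sparse w >= 0

weights = model.coef_
kept = np.flatnonzero(weights)
print(f"compression ratio ~ {n_policies / len(kept):.0f}:1")

# Fidelity check on the training targets; in practice you would validate
# on held-out scenarios/timesteps to confirm the fit holds deep into the
# projection, which is where clustering tends to degrade.
rel_err = np.abs(X.T @ weights - portfolio) / portfolio
print(f"max relative error on training targets: {rel_err.max():.2%}")

The exact selection machinery matters less than the framing: fit the compressed portfolio against target cash flows rather than against static policyholder characteristics, and validate on data the fit never saw.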
