Statsig

Software Development

Bellevue, Washington · 19,288 followers

Accelerate product growth with integrated feature flags, experimentation and analytics.

About us

Statsig is the leading product experimentation platform that helps businesses use data to ship fast and build better products. Companies like Microsoft, Figma, Notion, Flipkart, Eventbrite, Ancestry, Headspace, and Univision use Statsig to manage feature rollouts, automate experiments, and make decisions based on performance metrics. Founded in 2021 by former Facebook engineers, Statsig supports thousands of experiments impacting over a billion end users globally. Statsig democratizes experimentation, an essential aspect of reducing development risk that was previously accessible only to big tech companies.

Website
http://www.statsig.com
Industry
Software Development
Company size
51-200 employees
Headquarters
Bellevue, Washington
Type
Privately Held
Founded
2021
Specialties
Experimentation, Data Analytics, Feature Flag, A/B Testing, Product Analytics, Session Replay, and Web Analytics

Updates

  • 2024 was a big year for Statsig. Just how big? Well, let's dig into the data. This year, we...
    - Delivered features and experiments to over 4 billion end users (non-deduplicated)
    - Scaled our infrastructure to process over 1 trillion events per day
    - Crossed 50,000 experiments created on our platform
    - Passed 100,000 total features launched through our platform
    - Released 120 updates to our product
    - Hosted dozens of events with thousands of guests / partners
    - Grew our team to over 100 people
    To learn more about everything we did in 2024 (plus all the stuff that's coming in 2025), check out the blog below. There's a lot more that happened that's not covered in the numbers! Last but not least, thank you to everyone who's been a part of our journey in 2024. We're so excited to keep building in 2025 (and beyond). https://lnkd.in/gRqGh4FS

    Statsig's 2024 year in review

    statsig.com

  • Statsig reposted this

View profile for Allon Korem

    CEO at Bell Statistics - Data-driven growth with advanced A/B testing for your business | Ex-googler | Statistician

    Here's a brain teaser for you: a company tests a new feature at the bottom of its landing page. Since it's out of sight when the page loads, some users in the test group might never scroll down to see it. Could this cause issues in the A/B testing process? If you immediately thought about the allocation vs. exposure gap, you're already ahead of the game! If not, don't worry: this blog is here to help.

    The Allocation vs. Exposure Dilemma
    • Allocation point: when users are assigned to a control or test group (e.g., at page load).
    • Exposure point: when users actually experience the version they were assigned (e.g., by scrolling down).
    When these two points don't align, it creates a challenge: some users in the test group might behave just like control users, diluting the measured impact of your test version.

    Why It Matters
    Imagine this scenario: you're testing a bold new feature expected to drive conversions. If users allocated to the test group never interact with the feature, they won't contribute to the difference you're measuring. As a result:
    • Your statistical power drops (making it harder to detect a real effect).
    • Valuable improvements to your product might be overlooked.
    Take it from the data: our simulation shows that as the percentage of unexposed users in the test group increases, statistical power declines sharply.

    How to Solve It
    (1) Plan proactively: tag exposure events accurately to align allocation and exposure points.
    (2) Estimate non-exposure: if offline allocation is necessary, use historical data to estimate how many users might not engage with the test version.
    (3) Adjust power: account for non-relevant users in test planning by increasing the sample size or adjusting the hypothesized effect size.
    (4) Post-hoc adjustments: filter out non-exposed users (if possible, from all groups) to clean up your data.

    Key Takeaway
    This issue isn't just a minor technical detail; it's a core challenge that can undermine the validity of your A/B test results. Proactively addressing the allocation-exposure gap ensures your tests are robust, reliable, and actionable. Curious about how this plays out in practice? Read the full blog post here - https://lnkd.in/d88vAfyh. Thanks Oryah Lancry-Dayan, our lead statistician, for co-writing this article, and... happy new year everyone! #ABTesting #DataScience #ProductDevelopment #Statistics #Experimentation
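    The power-dilution effect described in the post can be sketched with a small Monte Carlo simulation. This is a hypothetical illustration, not the article's actual simulation; the baseline rate, lift, sample size, and significance threshold are all assumed values:

```python
# Minimal sketch: power of a two-proportion z-test when some fraction of
# the test group is never exposed (and so converts like control users).
# All parameters are illustrative assumptions.
import math
import random

def simulate_power(p_control=0.10, lift=0.03, n=2000,
                   unexposed_frac=0.0, runs=300):
    """Estimate detection power via Monte Carlo (one-sided, alpha = 0.05)."""
    detections = 0
    for _ in range(runs):
        # Control group converts at the baseline rate.
        control = sum(random.random() < p_control for _ in range(n))
        # Unexposed test users behave exactly like control users.
        n_unexposed = int(n * unexposed_frac)
        test = sum(random.random() < p_control for _ in range(n_unexposed))
        test += sum(random.random() < p_control + lift
                    for _ in range(n - n_unexposed))
        # Pooled two-proportion z-test.
        p1, p2 = control / n, test / n
        p_pool = (control + test) / (2 * n)
        se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
        if se > 0 and (p2 - p1) / se > 1.645:
            detections += 1
    return detections / runs
```

    With these assumed numbers, power is high when everyone is exposed and drops sharply at 50% non-exposure, since the effective lift over the whole test group is halved.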

    When allocation point and exposure point differ

    statsig.com

  • Statsig reposted this

View profile for Allon Korem

    CEO at Bell Statistics - Data-driven growth with advanced A/B testing for your business | Ex-googler | Statistician

    Ever run an A/B test and wondered, "Do I need a one-tailed or two-tailed hypothesis?" It's not just a small detail; it's a decision that shapes your test planning, analysis, and results. Let's break it down:
    • A two-tailed test checks for any difference between groups, whether positive or negative.
    • A one-tailed test focuses on one specific direction, like testing whether a new feature increases user engagement.
    Imagine you're running an A/B test to evaluate whether a new homepage design boosts conversion rates. If you hypothesize that the new design will increase conversions, a one-tailed test is your ally. But if you're unsure about the direction (it could improve or hurt conversions), a two-tailed test keeps both possibilities open.
    Here's where it gets interesting:
    • Sample size: one-tailed tests require fewer participants, making them faster and cheaper to run.
    • Power: two-tailed tests distribute the rejection region across both tails, lowering their ability to detect a difference in a single direction.
    When is this particularly important? In sequential testing! With ongoing data analysis, one-tailed tests can significantly reduce test duration, perfect for fast-paced business environments. However, two-tailed tests have their perks, like detecting unexpected negative effects. As one client told us, "A negative significant result still gives me actionable insights: I learn what not to do."
    Takeaways:
    (1) One-tailed tests: best when you're confident about the direction of change.
    (2) Two-tailed tests: ideal for exploratory analysis or when detecting negative effects is valuable.
    (3) Always align your approach with your business goals and test objectives.
    Ready to dive deeper? Read our full blog post here - https://lnkd.in/dstq7ZuU. Special shoutout to Oryah Lancry-Dayan, our lead statistician, for co-writing this article. You're a true professional! #ABTesting #DataScience #Statistics
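    The sample-size gap between the two approaches can be sketched with the standard two-proportion formula. This is an illustrative back-of-envelope calculation; the baseline rate and lift in the usage note are assumed values, not figures from the article:

```python
# Sketch: users needed per group to detect an absolute `lift` over a
# baseline conversion rate `p_base`, one-tailed vs two-tailed.
import math
from statistics import NormalDist

def sample_size_per_group(p_base, lift, alpha=0.05, power=0.80,
                          two_tailed=True):
    """Approximate per-group sample size for a two-proportion test."""
    nd = NormalDist()
    # A two-tailed test splits alpha across both tails, so its critical
    # value z_{alpha/2} exceeds the one-tailed z_{alpha}.
    z_alpha = nd.inv_cdf(1 - alpha / 2) if two_tailed else nd.inv_cdf(1 - alpha)
    z_beta = nd.inv_cdf(power)
    p_avg = p_base + lift / 2
    variance = 2 * p_avg * (1 - p_avg)
    return math.ceil(variance * (z_alpha + z_beta) ** 2 / lift ** 2)
```

    For example, with an assumed 10% baseline and a 2-point lift at alpha = 0.05 and 80% power, the one-tailed test needs roughly a quarter fewer users per group than the two-tailed test, which is exactly why it shortens sequential tests.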

    One-tailed vs. two-tailed tests

    statsig.com

  • Statsig reposted this

View profile for Alex Gutman

    Data Scientist | Author "Becoming a Data Head" | Fulbright Specialist

    Thank you to Statsig for joining our Procter & Gamble Data Science event in Cincinnati last week and teaching a session on *Modern Experimentation Tools and Techniques in Practice*. It was great to welcome Liz Obermaier and Craig Sexauer. Your insights and expertise provided invaluable knowledge to our team. Thank you for helping make our event a success!

  • Statsig reposted this

View profile for Brett Bass

    Senior Data Scientist - P&G

    Huge thank you to Statsig for leading a session for Procter & Gamble data scientists on modern experimentation techniques. Lots of useful information and techniques we will be using going forward! Craig Sexauer Liz Obermaier

  • Feature Friday Week 44: Announcing Angular Support 🅰️ We’re excited to launch out-of-the-box support for Angular with the release of our Angular bindings as a new offering for Statsig’s JavaScript SDK. Brock Lumbard, a Product Manager on our SDK team, walks through the easy setup process to get Statsig running in an Angular app. This update streamlines setup with bindings and suggested patterns for App Config and App Module integrations. And yes, you’ll still find all the features you know and love from Statsig’s SDKs: feature flags, experiments, event logging, and more. 😍 Dive into our docs to explore Angular support and get started: https://bit.ly/4gv4PN2

  • Statsig reposted this

View profile for Asad Awan

    CEO and co-founder potrerolabs.xyz (We're hiring). Ex-FB VP.

    Should you use Gemini vs 4o vs Llama vs Phi, or a smaller model vs a larger model? Is one prompt better than another? Should you use Azure, AWS, GCP, OpenAI, Anthropic? Vijaye's articulation of how to approach these questions with data while moving fast is one of the best I have seen on this topic (I mean, he is kind of the expert on this). In this video chat with me, Vijaye lays out the need and strategy to A/B test LLM integrations to optimize tradeoffs, e.g., relevance goals vs perf (speed) goals vs $ cost goals. It's a must watch. And of course Statsig has an SDK for this :)

  • 1 trillion events a day… and counting! Scaling a system to handle 1 trillion events per day is no small feat—for us, it involved relentless optimization and innovative engineering from our team. We’re excited to be featured in ByteByteGo’s latest System Design Deep Dive, written in collaboration with our engineers, Pablo Beltran and Brent Echols. This piece covers how Statsig is able to scale efficiently while maintaining reliability with strategies like: ✅ Handling massive data volumes with a robust streaming architecture composed of three parts: request recorder, log processing, and routing ✅ Reducing costs with optimized storage, compression, and spot nodes ✅ Ensuring reliability through a persistence layer with multiple fallback options Thank you to ByteByteGo and Alex Xu for the thoughtful deep dive. Check out the full story: https://lnkd.in/guJWCDgD

  • How Bluesky scaled to 25M users with Statsig!🚀 Bluesky Social is building an open social network and scaling fast with Statsig—leveraging product analytics to track key growth metrics, experimentation to guide product decisions, and feature flags for fast, controlled rollouts. 📈 Paul Frazee (CTO) & Rose Wang (COO) have helped spearhead the implementation of Statsig, and in just 7 months have rolled out 25+ features with Statsig integrated, built several custom growth dashboards, and run 30+ experiments. The result? Faster releases, data-informed product decisions, and a 30% reduction in tears cried by the CTO (Paul’s words!). 👉 Learn how Bluesky grows with Statsig: https://lnkd.in/gnJh-teH

    Statsig customer story - How Bluesky reached 25 million users in record time - and keeps growing!

    statsig.com

  • Feature Friday Week 43: Athena Warehouse Native is here!🎉 Daniel West from our Enterprise Engineering team walks through how Athena users can now enjoy access to the same capabilities available on Snowflake, BigQuery, Redshift, and Databricks.⚙️ With Athena Warehouse Native, you can reuse existing events and metrics from Athena in experimental analysis along with typical Statsig features including incremental reloads (to manage costs), power analysis, entity properties, and more.📊 Learn more on how to set up a connection with Athena: https://lnkd.in/eUk8T6GJ

Funding

Statsig: 2 total rounds

Last round: Series B (US$43.0M)