Crafting a modern memory store. What makes our cache unique? Part 4 - Hot Key Propagation: Momento Cache leverages a tiered caching strategy that spans both the caching and gateway layers. Hot data is propagated outward to gateway nodes so that requests can be served quickly, preventing pressure from building up in the caching layer. This can be understood as a unique form of load shedding, in which excess requests are rejected with a value rather than an error. Curious? Read more here about our #serverless cache!
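A back-of-the-envelope sketch of the idea in Python (not Momento's actual implementation; `GatewayHotKeyCache`, `GATEWAY_TTL`, `HOT_THRESHOLD`, and `backend_get` are all hypothetical names for this illustration):

```python
import time

GATEWAY_TTL = 0.5     # seconds a propagated value stays valid at the gateway
HOT_THRESHOLD = 100   # requests before a key is considered hot (hypothetical)

class GatewayHotKeyCache:
    """Tiny TTL cache at the gateway tier: hot keys are answered here,
    shedding load from the caching layer 'with a value, not an error'."""

    def __init__(self):
        self.values = {}  # key -> (value, expiry timestamp)
        self.hits = {}    # key -> request count (a real system would decay this)

    def get(self, key, backend_get):
        now = time.monotonic()
        entry = self.values.get(key)
        if entry and entry[1] > now:
            return entry[0]                 # served at the edge: no backend call
        self.hits[key] = self.hits.get(key, 0) + 1
        value = backend_get(key)            # forward to the caching layer
        if self.hits[key] >= HOT_THRESHOLD:
            # key is hot: propagate its value outward to the gateway tier
            self.values[key] = (value, now + GATEWAY_TTL)
        return value
```

The short TTL bounds staleness while the gateway tier absorbs the bulk of a hot key's traffic.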
-
Technology has a wonderfully circular nature sometimes. Around 10 or so years ago, working on financial risk systems, I was busy implementing in-memory caches to feed the HPC workers. Most risk systems at the time relied on either a basic file share or a database for this, and moving to an in-memory cache was a huge jump in performance.

More recently, when working with clients to migrate their HPC workloads to the cloud, we've come across risk systems that hadn't received the same level of investment over the last decade. They still ran using file systems as the input source for workers. We didn't add an in-memory cache. Why bother? There was more than one filesystem option available that exceeded the demands of the HPC application. The advent of both better hardware (SSDs) and faster distributed filesystems meant that it just isn't necessary. Conversely, many of the systems that had been updated in the last decade to use caches were now significantly more complex to migrate to the cloud! Hide this from your product managers. Letting them know that underinvestment pays off could set a bad precedent :lol

The article below by Behrad looks at the role caching plays in modern software. Though not HPC-specific, it is still an interesting read for HPC practitioners. To further complicate matters, we've also seen what were once volatile, in-memory-only caches like Redis (you should of course switch to Valkey) evolve to include persistence, becoming in-memory databases, while database technologies such as Cassandra and Aerospike have become so performant that the use of an in-memory cache in front of them is questionable. If you were designing an HPC system today, would you bother with a cache?
Last month, Redis' licensing model change sparked widespread dissatisfaction across LinkedIn and other forums. This backlash got me thinking about a critical question: Is caching still necessary in today's technological landscape? 🤔 With a couple of SSDs on a PCIe bus now beating the network on both throughput and latency, it's time to re-evaluate the role of caching in modern data management. #Caching #NoSQL #TechDiscussion
Is Caching Still Necessary?
https://thenewstack.io
-
🔥𝐂𝐚𝐜𝐡𝐢𝐧𝐠 𝐒𝐞𝐫𝐢𝐞𝐬 [3] In the previous post, we discussed cache: https://lnkd.in/dwjhcewv https://lnkd.in/d3xbhDGT

𝐃𝐢𝐬𝐭𝐫𝐢𝐛𝐮𝐭𝐞𝐝 𝐂𝐚𝐜𝐡𝐢𝐧𝐠 refers to caching mechanisms where the cached data is stored in a shared, centralized store, accessible by all instances of your application. This is different from In-Memory Caching, which stores the cache in the memory of individual application instances, leading to potential inconsistency and duplication of cached data in environments with multiple servers or containers.

It is useful in:
- cloud-based environments where applications are deployed across multiple nodes.
- microservice-based architectures where each service can share cached data.
- scenarios requiring cache persistence across application restarts.

Distributed caching allows your application to scale across multiple servers without duplicating data, ensures all instances share the same consistent data, and can be configured for high availability, making it resilient to failures. Some caches, like Redis, can even save data to disk so it persists after restarts.

Disadvantages:
- Network Latency: access goes over the network, whereas in-memory caches are faster since data is stored locally.
- Complex Setup
- Additional Infrastructure Cost
- Concurrency Handling: requires mechanisms to handle concurrency when multiple instances attempt to update the same cached data.

📄 https://lnkd.in/dWMDj78F #dotnet #aspnetcore #systemdesign
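A minimal sketch of the pattern, in Python rather than .NET, assuming a Redis server is reachable on localhost:6379 (the `load_user_from_db` helper is a hypothetical stand-in for a real query):

```python
import json
import redis  # pip install redis

# Every app instance talks to the same shared store, so a value cached by
# one instance is immediately visible to all the others.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_user_from_db(user_id: int) -> dict:
    return {"id": user_id, "name": "example"}  # hypothetical database call

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)             # hit: served from the shared cache
    user = load_user_from_db(user_id)         # miss: load from the source of truth
    r.set(key, json.dumps(user), ex=300)      # share it, expiring after 5 minutes
    return user
```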
-
📢 8 Strategies for Reducing Latency TLDR: High latency can render an application unusable, frustrating users and negatively impacting business outcomes. Developers need to understand low-latency strategies such as caching, using Content Delivery Networks (CDNs), load balancing, asynchronous processing, database indexing, data compression, pre-caching, and keep-alive connections to mitigate these issues and improve performance. #webdev #cloud #performance #architecture
8 Strategies for Reducing Latency
newsletter.systemdesigncodex.com
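To pick one strategy from that list as a concrete example: keep-alive connections avoid paying the TCP (and TLS) handshake on every request. In Python's requests library, a Session reuses the underlying connection across calls (the URL below is just a placeholder):

```python
import requests

# A Session keeps the TCP/TLS connection alive between requests, so
# repeated calls skip the handshake latency paid by one-off requests.get().
session = requests.Session()

for i in range(10):
    # every call after the first reuses the pooled, already-open connection
    resp = session.get(f"https://api.example.com/items/{i}")
    resp.raise_for_status()
```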
-
"Caching is a powerful technique to reduce latency and improve system performance. There are several caching strategies, depending on what a system needs - whether the focus is on optimizing for read-heavy workloads, write-heavy operations, or ensuring data consistency. In this article, we'll cover the 5 most common caching strategies that frequently come up in system design discussions and widely used in real-world applications." https://lnkd.in/eEbRBMwi
Top 5 Caching Strategies Explained
blog.algomaster.io
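As one concrete example from that family of strategies, write-through sends every write to the cache and the database together, so reads almost always find fresh data in the cache; `cache` and `db` below are in-memory stand-ins for real stores:

```python
# Minimal write-through sketch: every write updates the cache and the
# database in the same operation, keeping the two consistent for readers.
cache: dict = {}   # stand-in for Redis/Memcached
db: dict = {}      # stand-in for the real database

def write(key, value):
    db[key] = value        # write to the source of truth...
    cache[key] = value     # ...and to the cache, in the same operation

def read(key):
    if key in cache:
        return cache[key]  # reads are normally served straight from the cache
    value = db.get(key)    # fall back to the database (e.g. after a restart)
    if value is not None:
        cache[key] = value
    return value
```

The trade-off is write latency: each write pays for both stores, which is why write-heavy systems often prefer write-back instead.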
-
Reducing long-term logging expenses by 4,800% with Amazon OpenSearch Service! When you use Amazon OpenSearch Service for time-bound data like server logs, service logs, application logs, clickstreams, or event streams, storage cost is one of the primary drivers for the overall cost of your solution. Over the last year, OpenSearch Service has released features that have opened up new possibilities for storing your log data in various tiers, enabling you to trade off data latency, durability, and availability. This blog post works through an example to help you understand the trade-offs available in cost, latency, throughput, data durability and availability, retention, and data access, so that you can choose the right deployment to maximize the value of your data and minimize the cost.
Reducing long-term logging expenses by 4,800% with Amazon OpenSearch Service | Amazon Web Services
aws.amazon.com
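For context on how such tiering is usually wired up: on OpenSearch, transitions between storage tiers are typically driven by an Index State Management (ISM) policy. A rough sketch of one, expressed as a Python dict (the state names, ages, and `tiered-logs` policy id are assumptions for illustration, not values from the post):

```python
# Illustrative ISM policy: keep indexes on hot storage for 7 days, migrate
# them to the warm tier, then delete after 90 days. All names and ages here
# are assumptions for the sketch.
policy = {
    "policy": {
        "description": "Tier time-bound log indexes to cut storage cost",
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [],
                "transitions": [
                    {"state_name": "warm", "conditions": {"min_index_age": "7d"}}
                ],
            },
            {
                "name": "warm",
                "actions": [{"warm_migration": {}}],  # move to warm storage
                "transitions": [
                    {"state_name": "delete", "conditions": {"min_index_age": "90d"}}
                ],
            },
            {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
        ],
    }
}
# Registered against the domain via: PUT _plugins/_ism/policies/tiered-logs
```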
-
🔍 The Power of Distributed Caching I recently explored the benefits of distributed caching, and it's clear how crucial it is for scaling large applications. By spreading cache data across multiple nodes, distributed caching boosts scalability, fault tolerance, and load balancing. Key Insights: - Scalability: Easily manage higher traffic by adding more cache nodes. - Fault Tolerance: Maintain performance even if a node fails. - Load Balancing: Prevent bottlenecks by distributing load evenly across nodes. Implementing distributed caching effectively, whether through Redis, Memcached, or AWS ElastiCache, is essential for building resilient and scalable systems. #DistributedCaching #Scalability #TechInsights #Algomaster
What is Distributed Caching?
blog.algomaster.io
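A toy illustration of the mechanics behind those properties: most distributed cache clients place keys on a consistent-hash ring, so adding or losing a node only remaps a fraction of the keyspace. This sketch is illustrative, not any particular client's implementation:

```python
import bisect
import hashlib

def _hash(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class HashRing:
    """Toy consistent-hash ring: each key maps to the first node at or after
    its hash, so removing one node only remaps that node's slice of keys."""

    def __init__(self, nodes, replicas=100):
        # virtual nodes smooth out the key distribution across physical nodes
        self.ring = sorted((_hash(f"{n}#{i}"), n)
                           for n in nodes for i in range(replicas))
        self.keys = [h for h, _ in self.ring]

    def node_for(self, key: str) -> str:
        idx = bisect.bisect(self.keys, _hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache-a", "cache-b", "cache-c"])
print(ring.node_for("user:42"))  # deterministic node choice, e.g. 'cache-b'
```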
-
Mastering Caching in Distributed Applications
medium.com
-
Are your caching systems buckling under pressure as you scale? The New Stack highlights the common pitfalls in evolving cache architectures, but Aerospike has the solution. With unmatched scalability and reliability, our platform simplifies complexity while enhancing performance. Visit us at AWS re:Invent to discover how we can help you regain control. Read the full article here: https://lnkd.in/dCEq-4Rd #CacheScaling #Aerospike #AWSreInvent
Scaling From Simple to Complex Cache: Challenges and Solutions
https://thenewstack.io
-
🚀 Maximizing Data Performance with Bare Metal Servers! 🚀 Dive into our latest blog where we explore the impressive benchmark results of ClickHouse on Bare-Metal.io's bare metal servers across various data workloads. Discover how dedicated hardware enhances data ingestion, query performance, and scalability, providing a significant edge over virtualized environments.

📊 Key Highlights:
- Faster Data Ingestion: experience up to 30% faster data loading times.
- Enhanced Query Performance: complex queries perform up to 50% better on bare metal.
- Consistent Performance Under Load: maintain steady performance even with multiple concurrent queries.
- Superior Scalability: handle larger datasets with less performance degradation.

Learn why bare metal is the optimal choice for handling your most demanding data tasks and how it can transform your data analytics capabilities. 🔗 Read the full article here and see how Bare-Metal.io can power your data-driven decisions: https://lnkd.in/ghwkbcZM #DataAnalytics #BareMetalServers #ClickHouse #PerformanceBenchmarking #BareMetalio
Benchmarking ClickHouse on Bare Metal for Different Workloads | Bare-Metal.io
https://bare-metal.io