Mike Dampier, MBA’s Post
Here is my prediction. Several years from now, companies will no longer put their data into proprietary data stores. Companies that want to compete for compute will do so by accessing common table formats like Delta and Iceberg. True separation of compute and storage will be the norm. I'm not talking about paying separately for those services. I mean actually being able to bring any engine you want to the data and let the compute vendors compete for your business. It's not a matter of if; it's a matter of when. So if you know it's coming, do you wait until your competitors do it first? My recommendation: be #DATAFORWARD and embrace the #LAKEHOUSE https://lnkd.in/gQf5PGed
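A hedged illustration of "bring any engine to the data": in the sketch below, two unrelated engines (delta-rs and DuckDB) query the same open-format Delta table directly, with no proprietary store in between. The S3 path is a made-up placeholder, and credentials are assumed to be available in the environment.

```python
# Two independent engines reading one open Delta table: no copies,
# no proprietary store. "s3://acme-lake/sales" is a hypothetical path.
from deltalake import DeltaTable  # delta-rs, an engine-agnostic reader
import duckdb                     # a second, unrelated query engine

path = "s3://acme-lake/sales"

# Engine 1: delta-rs materializes the table as Arrow.
sales = DeltaTable(path).to_pyarrow_table()
print(sales.num_rows)

# Engine 2: DuckDB scans the very same files via its delta extension.
con = duckdb.connect()
con.execute("INSTALL delta")
con.execute("LOAD delta")
print(con.sql(f"SELECT count(*) FROM delta_scan('{path}')").fetchall())
```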
More Relevant Posts
-
Couldn’t agree more with this outlook! The shift toward separating compute from storage and embracing open table formats like Delta and Iceberg is inevitable. At AkashX, we’re not just anticipating this shift — we’re already delivering solutions for it. Our E-S3 Empowered Storage architecture is built to tackle the exact problem of excessive compute costs that plagues modern data warehouses. By pushing partial-SQL execution directly into the storage layer, we accelerate query performance by 4x, slashing cloud costs and enabling true workload-agnostic acceleration across any SQL engine. This results in a 4-10x reduction in your total cost of ownership (TCO) for analytics workloads. In today’s era of cloud-run AI and LLM-scale workloads, performance and cost are more critical than ever. With AkashX, your data ops costs can be reduced by 4x, and we ensure predictable pricing with no runaway bills caused by poorly written queries. The future is here, and it’s about leveraging disaggregated storage to stay ahead of the curve. Don’t wait for your competitors to adopt this model. Be #DataForward and embrace the #Lakehouse today. #DataRevolution #CloudData #AkashXCloud #EmpoweredStorage #CostEfficientAnalytics
Databricks vs. Snowflake
databricks.com
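AkashX's E-S3 engine is proprietary, so the sketch below uses a public API, Amazon S3 Select, purely to illustrate the general pattern the post describes: the SQL filter executes inside the storage layer, and only matching bytes travel to compute. The bucket, key, and columns are hypothetical.

```python
# Storage-layer pushdown, illustrated with Amazon S3 Select: the WHERE
# clause runs inside S3, so the client receives only filtered rows.
# Bucket name, object key, and columns are invented for this sketch.
import boto3

s3 = boto3.client("s3")
resp = s3.select_object_content(
    Bucket="acme-lake",                        # hypothetical bucket
    Key="sales/2024/part-000.parquet",         # hypothetical object
    ExpressionType="SQL",
    Expression="SELECT s.region, s.amount FROM s3object s WHERE s.amount > 1000",
    InputSerialization={"Parquet": {}},
    OutputSerialization={"CSV": {}},
)
for event in resp["Payload"]:                  # stream only filtered bytes
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```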
-
Today at Ignite, Microsoft announced the public preview of SQL database in Fabric (https://lnkd.in/dcwuPYER). This technology is a game-changer as it brings Fabric into the HTAP space (Hybrid transactional/analytical processing) by providing an OLTP layer with automatic replication into OneLake. We at Aimplan are super excited for this technology as it provides the best of both worlds out-of-the-box. By utilizing the transactional layer, you can plan using real-time data, and then analyze it on a much bigger scale using the analytical endpoints. Want to try Aimplan on Fabric SQL? Give us a shout and we'll give you a demo!
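A hedged sketch of the HTAP flow described above, assuming a pyodbc client: transactional writes land in the Fabric SQL database, and analytics read the automatically mirrored copy through the SQL analytics endpoint. Server names, database, table, and auth mode are all placeholders, not the product's exact connection strings.

```python
# HTAP pattern: OLTP writes to the Fabric SQL database, analytics on
# the mirrored OneLake copy via the SQL analytics endpoint.
# All connection details below are hypothetical placeholders.
import pyodbc

# Transactional side: a normal insert, as an app would do it.
oltp = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<workspace>.database.fabric.microsoft.com;DATABASE=PlanningDb;"
    "Authentication=ActiveDirectoryInteractive"     # placeholder auth
)
oltp.execute("INSERT INTO dbo.Forecast (sku, qty) VALUES (?, ?)", "SKU-1", 42)
oltp.commit()

# Analytical side: the same rows, replicated to OneLake, queried at
# scale through the read-only analytics endpoint.
olap = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<workspace>.datawarehouse.fabric.microsoft.com;DATABASE=PlanningDb;"
    "Authentication=ActiveDirectoryInteractive"     # placeholder auth
)
for row in olap.execute("SELECT sku, SUM(qty) FROM dbo.Forecast GROUP BY sku"):
    print(row)
```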
-
🚀 New Blog Post! 🚀 Part 2 of my #Microsoft #Fabric migration series is live, and we’re jumping into Mastering Data #Lakehouse Integration! In this post, I guide you through setting up your #DataLakehouse in #MicrosoftFabric, sharing tips to keep your data organised, your processing efficient, and your headaches to a minimum. Whether you're just getting started or refining your setup, there’s something here for everyone. ✨ Highlights: - Why a Data Lakehouse? 🤔 - Data Ingestion: Getting your data in without the stress. - Schema Design: Structure your data like a pro. - Storage Considerations: Avoid the dreaded “data swamp.” - Monitoring & Maintenance: Keep everything running smoothly. - Troubleshooting: Handle common pitfalls like a champ. Ready to level up your data game? Check out the full post here: https://lnkd.in/gaKYXSnR Let’s keep the conversation going - would love to hear your thoughts and any challenges you’ve faced while integrating Data Lakehouses! 💬 #BI #Analytics #TechBlog
Migrating to Microsoft Fabric Part 2: Optimising Data Lakehouse Integration
datainsightnest.com
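For readers who want a concrete starting point before reading the full post: a minimal PySpark sketch of the ingestion and schema-design ideas, with a made-up landing path and columns. An explicit schema and coarse partitioning are two of the cheapest defenses against the "data swamp."

```python
# Minimal lakehouse ingestion sketch: explicit schema, light hygiene,
# partitioned Delta write. Path and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, DateType)

spark = SparkSession.builder.getOrCreate()  # provided in a Fabric notebook

schema = StructType([                       # explicit schema beats inference
    StructField("order_id", StringType(), False),
    StructField("order_date", DateType(), False),
    StructField("amount", DoubleType(), True),
])

raw = spark.read.schema(schema).json("Files/raw/orders/")  # hypothetical landing zone

(raw.dropDuplicates(["order_id"])           # basic hygiene on the way in
    .write.format("delta")
    .mode("append")
    .partitionBy("order_date")              # coarse partitions, not a swamp
    .saveAsTable("orders"))
```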
-
🚧 Consolidate existing workloads 🏋️‍♀️ Leverage more of your growing data volumes, and 💰 Lower the cost of processing complex data at scale. In a world of growing, large-scale datasets, only solutions built for always-on, compute-intensive analytics will do. Learn more about hyperscale analytics and the Ocient Hyperscale Data Warehouse™️ ⤵️
The Next Generation of Data Warehousing | Ocient Blog
-
Interesting news from the Microsoft Ignite Conference!
Extending Power BI to help organizations reach and exceed their goals with superior planning and reporting
-
Creating a table in a Microsoft Fabric notebook now creates a Delta table, and dealing with its intricacies was quite challenging for me. I have written a bit about Delta tables and their importance. Take a look at the post and subscribe to my Substack to get more updates on data analytics in Microsoft Fabric. #Microsoftfabric #datascience #elearning #substack
Understanding Delta Tables
aayushgupta9125.substack.com
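A small sketch of the behavior the post describes, runnable in a Fabric notebook (table and column names are invented): saveAsTable produces a Delta table by default, which brings the transaction log and time travel along with it.

```python
# In a Fabric notebook, saving a table yields Delta by default.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([(1, "alpha"), (2, "beta")], ["id", "label"])
df.write.mode("overwrite").saveAsTable("demo_table")   # Delta under the hood

spark.sql("DESCRIBE DETAIL demo_table").show()         # format column: delta

# One Delta intricacy worth knowing: every write creates a new version,
# and older versions remain queryable via time travel.
spark.sql("SELECT * FROM demo_table VERSION AS OF 0").show()
```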
-
Databricks CloudFetch is a game-changer for Sigma users, offering significant improvements in data transfer speeds, user experience, and cost efficiency. Read more about how the CloudFetch and Sigma integration improves data retrieval and why it is essential for making timely decisions:
Improved Query Performance: Utilizing Databricks CloudFetch with Sigma
sigmacomputing.com
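CloudFetch itself requires no code changes: with the Databricks SQL connector, large result sets come back as parallel downloads of cloud-storage files rather than rows streamed one by one through the server. A minimal sketch, with placeholder connection parameters:

```python
# Fetching a large result through the Databricks SQL connector; for
# big results the connector retrieves presigned cloud-storage chunks
# (CloudFetch) transparently. Connection values are placeholders.
from databricks import sql

with sql.connect(
    server_hostname="<workspace>.cloud.databricks.com",  # placeholder
    http_path="/sql/1.0/warehouses/<warehouse-id>",      # placeholder
    access_token="<token>",                              # placeholder
) as conn:
    with conn.cursor() as cursor:
        cursor.execute("SELECT * FROM samples.tpch.lineitem LIMIT 1000000")
        rows = cursor.fetchall()   # large fetch: CloudFetch path kicks in
        print(len(rows))
```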
-
Struggling with slow data queries and high costs, we knew there had to be a better way. Enter optimization. By fine-tuning our data processing in Databricks—partitioning, caching, and optimizing queries—we saw incredible results. Faster insights, lower costs, and increased productivity. Check out my latest blog and learn how you can do the same. 💡 #DataOptimization #Databricks #DataAnalytics #BestPractices
Unlocking Peak Performance: Optimizing Data Processing in Databricks
link.medium.com
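A hedged sketch of the three levers the post names (partition pruning, caching, and file optimization), applied to a hypothetical analytics.events Delta table:

```python
# Three common Databricks optimization levers on a made-up table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = spark.read.table("analytics.events")        # hypothetical table

# 1. Partition-aware filter: pruning skips whole partitions on read.
recent = events.where(F.col("event_date") >= "2024-11-01")

# 2. Cache an intermediate that several downstream queries reuse.
daily = recent.groupBy("event_date").count().cache()
daily.count()                                        # materialize the cache

# 3. Compact small files and co-locate rows by a common filter column.
spark.sql("OPTIMIZE analytics.events ZORDER BY (user_id)")
```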
-
Does anyone really think that future data sets will be smaller than what we currently see? I don't recall running into anyone who believes that they will be working with smaller data sets at any time in the future. Many of the current industry providers have difficulty performing analytics on data sets larger than 100TB, especially if they have critical, time-sensitive data that is crucial to their clients' mission. We have multiple proven use cases in both the commercial market and the Federal market. If you are working with multiple large data volumes and you need to reduce costs in time, money, and energy, then we should probably talk. #olap #hyperscale #datawarehouse #dataanalytics #geospatial #aiml #greenerdata
The Next Generation of Data Warehousing | Ocient Blog
-
This article is part 2 of the series on data engineering within Microsoft Fabric, written by Rejaul Islam Royel, covering the features of the Dataflow Gen2 component, its applications, and how to store processed data inside a Lakehouse. Stay tuned for more exciting content on #microsoftfabric
Data Engineering in Microsoft Fabric: Part 2 – Ingestion – Dataflow Gen2
https://datacrafters.io
Senior Consultant Cloud Data & AI at Microsoft Germany
4mo · The day someone from Databricks says something good about Snowflake, and the other way around, on that day I will run a city marathon (I am not sporty at all)