To all who say #ML is #statistics: it is not, and I am tired of reading it. #statistics is a field of #probability, while #fuzzylogic is #deterministic. A probability of 80% means that over many trials we expect roughly 4 answers A for every 1 answer B (like a biased coin toss or quantum probability). Fuzzy logic, on the other hand, IS DETERMINISTIC: a #threshold of 80% will always (100% of the time) yield answer A and never answer B. If you understand this, congratulations, you are probably 1 in a million.
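A minimal Python sketch of the distinction the post draws; the 0.8 value and both function names are purely illustrative:

```python
import random

def probabilistic_answer(p=0.8):
    # Stochastic: each call samples anew; about 80% of calls return "A",
    # but any single call may still return "B".
    return "A" if random.random() < p else "B"

def fuzzy_threshold_answer(membership, threshold=0.8):
    # Deterministic: the same membership degree always maps to the same
    # answer; a membership of 0.8 yields "A" on every single call.
    return "A" if membership >= threshold else "B"

print([probabilistic_answer() for _ in range(10)])       # mix of "A" and "B"
print([fuzzy_threshold_answer(0.8) for _ in range(10)])  # always "A"
```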
-
#ScientificBlunders Ever wondered why the sum of the 𝙨𝙦𝙪𝙖𝙧𝙚𝙨 of the errors is minimized in linear regression and not the absolute value or the fourth power? Check out the video below to find out more!
#38 Why "square" the errors in regression? | Machine Learning | Scientific Blunders
https://www.youtube.com/
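The video itself isn't summarized here, but as a rough numeric illustration of one standard answer (toy data, not from the video): for a constant model, the sum of squared errors is minimized by the mean, while the sum of absolute errors is minimized by the median, so the two losses react very differently to outliers.

```python
import numpy as np

# For a constant model c, least squares picks the mean; least absolute
# deviations picks the median. An outlier drags the mean far more.
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # 100.0 is an outlier

cs = np.linspace(0, 100, 100001)                   # candidate constants
sq_loss = ((y[:, None] - cs) ** 2).sum(axis=0)     # sum of squared errors
abs_loss = np.abs(y[:, None] - cs).sum(axis=0)     # sum of absolute errors

print(cs[sq_loss.argmin()], y.mean())       # ~22.0 == mean
print(cs[abs_loss.argmin()], np.median(y))  # ~3.0  == median
```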
-
"Bayes' theorem visual proof presentation: Imagine a probability space where: Event A and Event B are two overlapping regions. The overlap represents their joint probability, P(A∆B) Bayes' theorem connects conditional probabilities and as: P(A|B) = {P(B|A)P(A)}/{P(B)}. This intuitive relationship updates prior knowledge P(A) with new evidence (B)—a cornerstone of decision-making and machine learning. Visual Tip: Use the overlap to highlight the symmetry , making the theorem easy to grasp! #Probability #Statistics #BayesTheorem #MachineLearning
-
Here's a simplified explanation of the top 5 regression algorithms:
Linear Regression: Best for straight-line relationships between variables.
Polynomial Regression: Adds curved terms to model more complex patterns.
Support Vector Regression (SVR): Good for tricky, non-linear data, using advanced math to find the best fit.
Ridge Regression: Helps prevent the model from becoming too complex by penalizing large coefficients.
Lasso Regression: Reduces complexity and picks out the most important features by setting some coefficients to zero.
#algorithms #ml
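A short scikit-learn sketch of the Ridge/Lasso contrast in the list above (synthetic data; the alpha values are arbitrary): only two of five features matter, so Lasso should zero out the irrelevant coefficients while Ridge only shrinks them.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

# Synthetic data: only features 0 and 1 actually influence y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

for model in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=0.1)):
    model.fit(X, y)
    # Lasso sets irrelevant coefficients exactly to zero; Ridge shrinks them.
    print(type(model).__name__, np.round(model.coef_, 3))
```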
-
Diversifying your trading strategy? Consider probabilistic models such as linear regression: they handle large amounts of data well, unlike classical time-series models built for smaller, noisier data sets. Need a simple benchmark? SVMs are your go-to. And for neural-network testing, don't forget LSTMs 📈💡 #AlgoTrading
-
🚀 Day 32/100: Today's learning journey was dedicated to revising fundamental concepts in machine learning and regression analysis.
1. K-Nearest Neighbors (KNN):
- Explored the KNN algorithm, a simple yet powerful method for classification and regression tasks.
- Revised the concept of KNN, where predictions are made based on the majority vote (for classification) or the average (for regression) of the k nearest data points.
2. Simple and Multiple Linear Regression:
- Reinforced understanding of linear regression, a foundational technique for modeling the relationship between independent and dependent variables.
- Reviewed simple linear regression, which predicts a continuous target variable from a single predictor variable.
- Delved into multiple linear regression, where multiple predictor variables are used to predict the target, allowing more complex relationships to be modeled.
By revisiting these concepts, I solidified my understanding of regression analysis and its application in predictive modeling. Excited to apply these techniques in real-world projects! #MachineLearning #RegressionAnalysis #KNN #LinearRegression #100DaysChallenge 📈🔍
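A minimal KNN sketch of the idea described above (illustrative code, not the poster's): predict by majority vote of the k nearest points for classification, or by their average for regression.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3, task="classification"):
    dists = np.linalg.norm(X_train - x, axis=1)       # Euclidean distances
    nearest = y_train[np.argsort(dists)[:k]]          # labels of k nearest
    if task == "classification":
        return Counter(nearest).most_common(1)[0][0]  # majority vote
    return nearest.mean()                             # average for regression

X = np.array([[0.0], [1.0], [2.0], [10.0]])
y = np.array([0, 0, 0, 1])
print(knn_predict(X, y, np.array([1.5])))  # -> 0 (nearest neighbors agree)
```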
-
Hello connections! 🚀 Excited to share my latest Medium article on linear regression! 📊 In this piece, I explore the inner workings of the linear regression algorithm in detail, covering its mechanics and applications. I also discuss essential regularization techniques like Lasso and Ridge, which help improve model accuracy and prevent overfitting. Additionally, I dive into the error metrics used to evaluate model performance, including MSE, RMSE, and MAE. I'd love for you to check it out and share your thoughts! Let's connect and discuss your experiences with linear regression! #DataScience #MachineLearning #LinearRegression #Regularization #Analytics
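The three metrics mentioned, computed directly from their definitions (toy numbers, purely illustrative, not taken from the article):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

mse = np.mean((y_true - y_pred) ** 2)   # mean squared error
rmse = np.sqrt(mse)                     # root mean squared error
mae = np.mean(np.abs(y_true - y_pred))  # mean absolute error
print(mse, rmse, mae)                   # 0.875 0.935... 0.75
```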
-
🎓 Demystifying Gradient Descent: The Engine of Machine Learning 🚀
Learn the core optimization technique behind ML algorithms!
🔍 Unpack the jargon: GD, SGD, MGD, learning rate, cost function
📊 Explore batch, stochastic & mini-batch variants
🧭 Navigate challenges: local minima, ridges, plateaus
⚖️ Understand the impact of feature scaling
🏁 Master stopping criteria & epoch selection
Key takeaways:
✅ Use cases for each GD variant
✅ Practical implementation tips
✅ Performance optimization strategies
Elevate your ML skills! Watch now 🎥 https://lnkd.in/dftjZ-RD #MachineLearning #DataScience #GradientDescent #AIEducation
Lecture 50: Gradient Descent (Batch - Stochastic - MiniBatch) | Linear Regression
https://www.youtube.com/
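A rough sketch of the batch variant the lecture covers, applied to simple linear regression (synthetic data; the learning rate and epoch count are arbitrary): each step moves the parameters against the gradient of the mean-squared-error cost.

```python
import numpy as np

# Synthetic data: y ≈ 4x + 1 with a little noise.
rng = np.random.default_rng(42)
x = rng.uniform(0, 1, 100)
y = 4.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0
lr = 0.5                                   # learning rate
for epoch in range(500):
    y_hat = w * x + b
    grad_w = 2 * np.mean((y_hat - y) * x)  # dMSE/dw over the full batch
    grad_b = 2 * np.mean(y_hat - y)        # dMSE/db over the full batch
    w -= lr * grad_w
    b -= lr * grad_b
print(w, b)  # should approach ~4.0 and ~1.0
```

Stochastic and mini-batch variants differ only in computing the gradient on one sample or a small subset per step instead of the full batch.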
-
Are you curious about how businesses predict future trends and make data-driven decisions? Multiple linear regression is a fundamental technique that helps decode complex relationships between variables.
What you'll learn:
- What is Multiple Linear Regression?
- How it Works
- The Mathematical Equation
- Finding the Best-Fitting Line
- A Real-World Example
#DataScience #MachineLearning #LinearRegression #DataAnalysis #PredictiveAnalytics
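The equation in question models y = b0 + b1·x1 + … + bp·xp + error. A minimal numpy sketch of finding the best-fitting coefficients by ordinary least squares, on synthetic data with arbitrarily chosen true coefficients:

```python
import numpy as np

# Two predictors; true relationship y = 2.0 + 1.5*x1 - 0.8*x2 + noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.1, size=100)

X1 = np.column_stack([np.ones(len(X)), X])     # prepend intercept column
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)  # least-squares solution
print(beta)  # approximately [2.0, 1.5, -0.8]
```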
-
Naive Bayes is based on Bayes' theorem with an assumption of independence between features. Click for more: https://bsapp.ai/-DuuoosY4 #MachineLearning
Classification: Naive Bayes
kiziridis.com
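An illustrative sketch (not from the linked article): the "naive" independence assumption lets the classifier score each class as the prior times a product of per-feature likelihoods, as in scikit-learn's Gaussian variant below.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Two synthetic clusters; GaussianNB models each feature independently
# per class, so the class score is P(class) * prod_i P(feature_i | class).
rng = np.random.default_rng(7)
X0 = rng.normal(loc=0.0, size=(50, 2))  # class 0 cluster around (0, 0)
X1 = rng.normal(loc=3.0, size=(50, 2))  # class 1 cluster around (3, 3)
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

clf = GaussianNB().fit(X, y)
print(clf.predict([[0.2, -0.1], [2.8, 3.1]]))   # -> [0 1]
print(clf.predict_proba([[1.5, 1.5]]).round(3)) # near the class boundary
```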
-
This computer class covers key machine learning concepts through simulations: logistic regression (using gradient descent and stochastic gradient descent), model extensions with quadratic terms, and misclassification risk evaluation. It includes the logistic Lasso, ROC curve comparisons, and text data preprocessing with Lasso and Ridge regression for spam detection.
The image below was generated by computer experiments aimed at illustrating why independence between random variables is essential for the reliability of the estimate: when the variables are independent, the variation of the sample mean decreases predictably (at the 1/√n rate given by the central limit theorem), allowing it to converge to the true population mean. https://lnkd.in/djVchFti
The computer class is part of the Machine Learning course taught by François Portier at ENSAI. #MachineLearning #GradientDescent #StochasticGradientDescent #Ridge #Lasso
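A sketch of the kind of experiment described (not the class's actual code): compare the spread of the sample mean for independent draws versus positively correlated ones. Independence gives the ~1/√n shrinkage; correlation leaves residual variance that never vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, rho = 1000, 2000, 0.5

# Independent case: Var(sample mean) = 1/n.
means_indep = rng.normal(size=(trials, n)).mean(axis=1)

# Correlated case via a shared common factor: each pair of variables has
# correlation rho, so Var(sample mean) -> rho no matter how large n gets.
common = rng.normal(size=(trials, 1))
noise = rng.normal(size=(trials, n))
means_corr = (np.sqrt(rho) * common + np.sqrt(1 - rho) * noise).mean(axis=1)

print(means_indep.std())  # ~ 1/sqrt(1000) ≈ 0.032
print(means_corr.std())   # ~ sqrt(rho) ≈ 0.71, does not shrink with n
```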