🌟 Exciting News! 🌟 After months of hard work, we are thrilled to announce that we’ve successfully completed the Google Machine Learning Bootcamp 2024 and the Gemma Sprint! 🎉

As part of this journey, we wanted to push our creativity and innovation. Our goal was to design something trendy, unique, and truly representative of today’s culture. That’s how we came up with the Gen Z Slang Generator — a creative model that generates fresh slang based on context!

Built using the Keras API and fine-tuned from the Gemma2 model, this slang generator is designed to produce catchy and relevant slang that resonates with Gen Z. Whether it’s for casual conversations or creative content, users can input specific contexts and instantly generate slang terms that feel authentic and natural.

This is the link to our model -> https://lnkd.in/ePVVnt2h

With this slang generator, you can easily create new and engaging slang terms tailored to various contexts. Feel free to modify the context and conditions to explore different slang outputs!

Last but not least, a huge thanks to my amazing teammates Junghyun Kim and Jiyeong Sim for their hard work and collaboration in making this happen😁 #GemmaSprint
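For readers curious what the fine-tuning loop looks like, here is a minimal LoRA fine-tuning sketch with KerasNLP. The preset name and the two training strings are illustrative placeholders, not our actual dataset or checkpoint.

```python
import os
os.environ["KERAS_BACKEND"] = "jax"  # any Keras 3 backend works (jax, tensorflow, torch)

import keras
import keras_nlp

# Load a pretrained Gemma 2 checkpoint (preset name assumed; check the KerasNLP docs).
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma2_2b_en")

# Enable LoRA so only small adapter matrices are trained instead of all weights.
gemma_lm.backbone.enable_lora(rank=4)
gemma_lm.preprocessor.sequence_length = 128

# Hypothetical instruction-style examples: a context paired with a slang response.
data = [
    "Context: something is extremely good\nSlang: that's bussin",
    "Context: an outfit looks stylish\nSlang: the fit is fire",
]

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(learning_rate=5e-5),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(data, epochs=1, batch_size=1)

# Generate slang for a new context.
print(gemma_lm.generate("Context: a song you can't stop replaying\nSlang:", max_length=64))
```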
-
🚀 Day 2 of #30DaysOfFLCode ! 👉 30DaysOfFLCode.com Federated Learning isn't just a buzzword – it's a game-changing approach tackling privacy, scalability, and heterogeneity head-on. Today, I dove deep into the core challenges (spoiler: communication costs and non-IID data are just the tip of the iceberg) and how they shape the future of decentralized AI. 🤖✨ Curious to know how we can address these challenges? Check out the detailed breakdown in my GitHub repo: https://lnkd.in/e5hDUWra #FederatedLearning #MachineLearning #DataPrivacy #30DaysOfFLCode
Federated-Learning/DaysIntoLearning/Day2/day2.md at main · TensorSpd/Federated-Learning
github.com
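As a concrete reference point for the aggregation step behind those challenges, here is a toy FedAvg simulation in plain NumPy; the two-client split and the logistic-regression model are made up purely to illustrate how non-IID data enters the picture, and are not from the repo linked above.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local SGD steps on a toy logistic-regression model."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # log-loss gradient
        w -= lr * grad
    return w

def fedavg(global_w, client_data):
    """Average client updates weighted by local dataset size (FedAvg)."""
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    updates = [local_update(global_w, X, y) for X, y in client_data]
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Toy simulation: two clients whose data distributions differ (non-IID).
rng = np.random.default_rng(0)
clients = [
    (rng.normal(+1.0, 1.0, (50, 3)), np.ones(50)),   # client A sees mostly positives
    (rng.normal(-1.0, 1.0, (50, 3)), np.zeros(50)),  # client B sees mostly negatives
]
w = np.zeros(3)
for _ in range(10):  # 10 communication rounds
    w = fedavg(w, clients)
print("global weights after 10 rounds:", w)
```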
-
🎉 Hi, we participated in the Google Machine Learning Bootcamp 2024 💻 and worked on the Gemma Sprint project! This project aims to develop an AI-powered system for the automatic classification of emergency calls, using the advanced language understanding of the Gemma language model and the Whisper speech recognition model. By accurately categorizing calls based on the type of emergency (fire, medical, etc.), the system enables faster and more targeted dispatch of emergency services, potentially saving lives. We have written up the details in the tutorial.

Model: https://lnkd.in/gqhhRN-c
Tutorial code: https://lnkd.in/g3K4eURA
Team members: 이현준, 최상헌

Thanks to Google Korea and everyone involved in the MLB for making this great opportunity possible. #GemmaSprint #GoogleMLBootcamp
sanghe0n/Gemma2-multiLabelPredict at main
huggingface.co
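The pipeline described above can be approximated with off-the-shelf components. The sketch below chains the openai-whisper package with a prompted Gemma 2 instruct checkpoint; the model id, category list, and audio path are assumptions for illustration, and this prompting approach is a simplification of the team's released multi-label model.

```python
import torch
import whisper  # openai-whisper package
from transformers import AutoModelForCausalLM, AutoTokenizer

# 1) Transcribe the emergency call audio.
asr = whisper.load_model("base")
transcript = asr.transcribe("call.wav")["text"]

# 2) Prompt a Gemma 2 instruct model to pick a category (gated model; accept the license on HF first).
model_id = "google/gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = (
    "Classify the emergency call into one of: fire, medical, police, other.\n"
    f"Call transcript: {transcript}\n"
    "Category:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```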
-
Discover how Adam uses EduProtocols, and how he organizes and sequences content to make it accessible to all students.
Discover how I used a variety of engaging EduProtocols to bring history to life:
✅ Annotate and Tell
📈 Graph and Tell
🖼️ Sketch and Tell
🎭 2xPOV
🎲 Fast and Curious
🔍 Thick Slide
🔢 Number Mania
🧩 Frayer Model

Plus, learn how I incorporated AI to create a tailored lesson on local history, generating a captivating "Cybersandwich" activity that had my students hooked! 🤖🥪

Don't miss out on this opportunity to elevate your history lessons and inspire your students. Read the full blog post now and join the EduProtocol revolution! 🌟💻🌍 https://bit.ly/3Q3G3Ip
-
Gemma 2 released! Google just released the next iteration of its open LLM! Gemma 2 comes in two sizes, 9B & 27B, trained on 13T tokens. Gemma 2 27B approaches Meta Llama 3 70B performance! First Chatbot Arena evals place Gemma 2 27B around Anthropic Claude 3 Sonnet, Llama 3 70B, and OpenAI GPT-4.

What's new with Gemma 2:
- 9B & 27B instruction-tuned and base versions with an 8192 context window!!
- Trained on 13T tokens (27B) and 8T tokens (9B)
- NEW: sliding window attention, logit soft-capping, and Grouped-Query Attention (GQA)
- 9B scores 71.3 MMLU; 52.8 AGIEval; 40.2 HumanEval
- 27B scores 75.2 MMLU; 55.1 AGIEval; 51.8 HumanEval
- Commercial use allowed!!
- Used SFT, distillation, RLHF & model merging
- Trained on Google TPUv5e

Models: https://lnkd.in/eVW34g55
Gemma 2 Release - a google Collection
huggingface.co
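For anyone who wants to try it immediately, here is a minimal sketch of loading the instruction-tuned 9B checkpoint with Transformers; it assumes you have accepted the gated license on Hugging Face, and the dtype and prompt are just examples.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Gated checkpoint: accept the Gemma license on Hugging Face before downloading.
model_id = "google/gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 9B model on a single large GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain grouped-query attention in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```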
-
💡 This is an excellent free 1-hour course to help you get started with Llama 3.2 and make full use of its multimodal capabilities! Llama 3.2 is Meta's first multimodal LLM, packed with a bunch of new features. This hands-on intro course by DeepLearning.AI is perfect for understanding how to use it for your use cases!

📕 What's covered:
- Details on how the Llama models are trained
- Multimodal prompting for some real-world use cases
- Tool calling and agentic workflows
- Using the open-source Llama Stack API, a standardized interface for building LLM applications

Link: https://lnkd.in/ee_VvJG6
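To give a taste of the multimodal prompting covered in the course, here is a rough sketch using the Transformers Mllama classes. The image URL and question are hypothetical, the checkpoint is gated, and the exact processor call may differ from what the course uses, so treat this as an assumption-laden starting point rather than course material.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # gated; requires accepting Meta's license
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Hypothetical image URL for illustration only.
image = Image.open(requests.get("https://example.com/receipt.jpg", stream=True).raw)
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What is the total amount on this receipt?"},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```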
-
Just finished the course “Advanced Prompt Engineering Techniques” by Morten Rand-Hendriksen! Check it out: https://lnkd.in/dUy4bPaE #artificialintelligence #promptengineering #generativeai
Certificate of Completion
linkedin.com
-
Talking to the AIs in natural language doesn't mean it necessarily comes naturally... Check out the course “Prompt Engineering: How to Talk to the AIs” by Xavier Amatriain, which I just finished, for more insights! https://lnkd.in/dGD2jvbg #generativeai #largelanguagemodels #promptengineering
Certificate of Completion
linkedin.com
-
Google DeepMind just unveiled Code Gemma, a groundbreaking collection of open code models. Gemma can now code! 🤯 Here's what you need to know:
-- Code Gemma comes in 2B & 7B sizes, boasting an impressive 8192 context.
-- These models are initialized from Gemma Base and further trained on a whopping 500B additional Tokens (web, code & math).
-- Fine-tuned with SFT & RLHF for superior performance.
-- The 2B model achieves 27% on HumanEval, while the 7B model surpasses expectations with a staggering 52%.
-- Commercial use is allowed, making it perfect for industry applications.
-- Optimized for on-device code completion, enhancing efficiency and productivity.
-- You can access these cutting-edge models on Hugging Face.

Get all the details:
🔗 Models: https://lnkd.in/gmsKgTWE
🔗 Google Blog: https://lnkd.in/gcFGufaf
Models - Hugging Face
huggingface.co
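For the code-completion use case, here is a hedged sketch of fill-in-the-middle prompting with Transformers. The checkpoint is gated, and the FIM token strings are quoted from the CodeGemma model card as I recall them, so verify them against the tokenizer before relying on this.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The 2B checkpoint is the one aimed at fast, on-device code infilling.
model_id = "google/codegemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Fill-in-the-middle prompt: the model generates the code between prefix and suffix.
prompt = (
    "<|fim_prefix|>def fibonacci(n):\n"
    "    <|fim_suffix|>\n"
    "    return a<|fim_middle|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```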
-
Is Pedro Domingos’ The Master Algorithm still relevant today? I just revisited this fascinating book, where he explores the five tribes of AI: Symbolists, Connectionists, Evolutionaries, Bayesians, and Analogizers. It’s incredible to see how these paradigms shaped AI’s foundation and how they contrast with the LLM-driven boom we’re seeing now. If you’re interested in AI’s evolution, this book is a must-read! What do you think—are these five tribes still relevant in the age of generative AI? https://lnkd.in/e4GEVyeN
The Master Algorithm | Pedro Domingos | Talks at Google
https://www.youtube.com/
-
At Awsome LLC, we're fine-tuning large language models (LLMs) to make our energy utility talkbot smarter and more efficient. This article covers the essential math behind the fine-tuning process, breaking it down into simple terms. Our team at Awsome LLC is dedicated to leveraging these techniques to set new standards in customer service and efficiency. Want to learn more about the process? Check out the article.
🚀 Demystifying the Math Behind Fine-Tuning Large Language Models (LLMs)! 🚀

Fine-tuning LLMs might sound like rocket science, but it doesn't have to be! I've distilled all the key concepts into an easy-to-understand document. Dive into:
🔍 Quantization: Making models lighter without losing their edge.
🎯 Precision: Full vs. half – balancing detail and efficiency.
🔧 Calibration: Fine-tuning models for top performance.
🛠️ Quantization Modes: Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT).

📄 Check out my document and discover the secrets!
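To make the quantization math concrete, here is a small NumPy sketch of asymmetric int8 post-training quantization. It is a toy illustration of how a calibration range turns into a scale and zero point, not the method from the document above.

```python
import numpy as np

def quantize_int8(x):
    """Asymmetric post-training quantization (PTQ) of a tensor to int8.

    scale and zero_point come from the observed min/max range, which is
    exactly what a calibration pass estimates on representative data.
    """
    qmin, qmax = -128, 127
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 codes back to approximate float values."""
    return (q.astype(np.float32) - zero_point) * scale

# Toy weight matrix: quantize, dequantize, and inspect the rounding error.
w = np.random.default_rng(0).normal(0.0, 0.02, (4, 4)).astype(np.float32)
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
print("max abs error:", np.abs(w - w_hat).max())
```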