Our #1 priority today is increasing our capacity. Groqsters are busy building racks on racks on racks, shipped from America. Stay tuned for exciting things to come!
Groq
Semiconductor Manufacturing
Mountain View, California 118,524 followers
Groq builds the world’s fastest AI inference technology.
About us
Groq is fast AI Inference. The Groq LPU™ AI Inference Technology delivers exceptional compute speed, quality, and energy efficiency. Groq, headquartered in Silicon Valley, provides cloud and on-prem inference at scale for AI applications. The LPU and related systems are designed, fabricated, and assembled in North America. Learn more and try Groq speed today at groq.com.
- Website: https://groq.com/
- Industry: Semiconductor Manufacturing
- Company size: 51-200 employees
- Headquarters: Mountain View, California
- Type: Privately Held
- Founded: 2016
- Specialties: AI, ML, artificial intelligence, machine learning, engineering, hiring, compute, innovation, semiconductor, LLM, large language model, gen AI, systems solution, generative AI, inference, LPU, and Language Processing Unit
Locations
- Primary: 400 Castro St, Mountain View, California 94041, US
- Portland, OR 97201, US
Employees at Groq
- Peter Bordes: CEO, Collective Audience (OTC: CAUD); Founder, Board Member, Investor; Managing Partner, Trajectory Ventures & Trajectory Capital
- Ofer Shoshan: Entrepreneur, tech investor
- John Barrus: Product Manager; Entrepreneur in ML, Robotics, Cloud, and IoT
- Bill Lesieur: AI Revolution | Open Innovation | CVC | Future of Work | Equitable Futures | Sustainability | Corporate Innovation | Futurist | Strategic Foresight |…
Updates
-
The rapidly evolving field of Artificial Intelligence (AI) has led to significant advancements in Machine Learning (ML), with "inference" emerging as a crucial concept. But what exactly is inference, and how can you put it to work in your AI-based applications? Find out in our recent blog: https://hubs.la/Q02_DJ-K0
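For readers new to the term, the training/inference split the post alludes to can be illustrated with a toy example. This is a minimal, plain-Python sketch of the general concept (a least-squares line fit), not Groq's LPU or API; the `train`/`infer` names are illustrative only:

```python
# Illustrative sketch of the ML training/inference split (not Groq-specific).
# "Training" fits model parameters once, at high cost; "inference" then
# reuses those frozen parameters to answer new queries cheaply.

def train(xs, ys):
    """One-time, costly phase: fit slope and intercept by least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def infer(params, x):
    """Inference: a cheap forward pass with the parameters held fixed."""
    slope, intercept = params
    return slope * x + intercept

params = train([0, 1, 2, 3], [1, 3, 5, 7])  # learns y = 2x + 1
print(infer(params, 10))                     # → 21.0
```

Production LLM inference works the same way at vastly larger scale, which is why serving speed and cost become the dominant engineering concerns.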
-
Join us for our first meetup in 2025! We're teaming up with Arize AI and LlamaIndex at the GitHub HQ in SF.
Agenda:
6:00 PM | Debugging and Improving AI Agents with Arize AI
6:20 PM | Creating Agent Systems with Fast Inference on Groq
6:40 PM | Agentic Workflows with LlamaIndex
REGISTER: https://hubs.la/Q02_GpRw0
-
Groq reposted this
more power, more racks.
-
Groq reposted this
Love you developers!
-
Congrats to Chris Yoo & Ryan Hay, the Best on Groq 🏆 winners at the Eucalyptus x Build Club AI HealthTech Hackathon. Their app, powered by Groq, uses Llama 3.2 90B Vision to analyze grocery receipts and Bland AI's voice calling agent to help users maintain weight loss goals.
-
Groq reposted this
Groq in TOP 10 charts again and again and again….
Groq offers the best speed, cost, and quality. https://lnkd.in/gad2Vj5D
-
"Interestingly, Ollama and Groq (which both allow users to run open source models, with the former focusing on local execution and the latter on cloud deployment) have accelerated in momentum this year, breaking into the top 5. This shows a growing interest in more flexible deployment options and customizable AI infrastructure."
Another year, another leap in LLM app development! 🪄 Our new 2024 LangChain State of AI report uncovers key trends in how developers are building LLM apps. In 2024, we’ve seen a shift from retrieval workflows to multi-step, agentic applications. 🤖 Open-source models are also gaining traction as devs seek more customizable infrastructure. Here's where the AI ecosystem is headed, based on data from LangSmith.👇 Read the full report: https://lnkd.in/g5yd92QW
-
Exciting news! Our friends at Hunch are revolutionizing content creation by letting you transform content into new assets while maintaining the same voice and style! With their no-code AI workspace, Powered by Groq, teams can now focus on strategy, creativity, and output. Learn how Hunch is making AI accessible in our latest customer use case: https://hubs.la/Q030f5710
Groq Customer Use Case: Hunch - Groq is Fast AI Inference