Predibase’s Post

There's a common belief that #opensource LLMs aren't as performant as #GPT4. Well, oftentimes they're not... unless you #finetune! You can achieve GPT-4 accuracy or better for your use case with the power of fine-tuning. Want to learn how you can fine-tune your own #LLMs that outperform GPT-4? Join our upcoming virtual workshop to learn more: https://pbase.ai/43Yuvfb
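For a concrete picture of what that fine-tuning loop can look like, here is a minimal sketch using the open-source Hugging Face transformers + peft libraries with LoRA adapters. This is an illustration only, not Predibase's actual training stack; the base model name, dataset file, and hyperparameters below are placeholder assumptions.

```python
# Minimal LoRA fine-tuning sketch (Hugging Face transformers + peft).
# Illustrative only, not Predibase's stack. The base model, dataset
# file, and hyperparameters are placeholder assumptions.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # assumed base model; use any open LLM
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of all base
# weights, which is what makes task-specific fine-tuning cheap.
model = get_peft_model(model, LoraConfig(
    task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16,
    lora_dropout=0.05, target_modules=["q_proj", "v_proj"]))
model.print_trainable_parameters()  # typically well under 1% trainable

# Placeholder dataset: one {"text": ...} JSON object per line, drawn
# from the use case you want to match or beat GPT-4 on.
data = load_dataset("json", data_files="my_task_train.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=512), batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=3,
                           per_device_train_batch_size=4,
                           learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
model.save_pretrained("lora-out")  # saves only the small adapter weights
```

Because only the adapter matrices are trained, the checkpoint is a small fraction of the base model's size, which is what keeps task-specific fine-tuning fast and inexpensive.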

  [Image: chart comparing fine-tuned open-source models against GPT-4]
Rohit Saha

Machine Learning Scientist at Georgian

8mo

We at Georgian have taken a similar stance: RAG + fine-tuning gives smaller language models a real shot at competing with the likes of GPT-4! We have an open-source toolkit that lets you do the same: https://github.com/georgian-io/LLM-Finetuning-Toolkit
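To make the RAG half of that recipe concrete, here is a minimal retrieval sketch using sentence-transformers: embed a small corpus, pull the passage most relevant to a question, and prepend it to the prompt that goes to the fine-tuned small model. This is illustrative only and does not use the LLM-Finetuning-Toolkit's own interface (see the repo above for that); the corpus, embedding model, and prompt format are assumptions.

```python
# Minimal RAG sketch: retrieve task-relevant context, then prepend it
# to the prompt for a (fine-tuned) small model. Illustrative only;
# corpus, embedding model, and prompt format are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = ["Invoices are due within 30 days.",          # placeholder corpus
        "Refunds require a signed return form."]
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k docs most similar to the question (cosine similarity)."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # dot product of unit vectors = cosine similarity
    return [docs[i] for i in np.argsort(-scores)[:k]]

question = "When do invoices have to be paid?"
context = "\n".join(retrieve(question))
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
# `prompt` would then be sent to the fine-tuned small model.
print(prompt)
```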

Craig Wiley

Sr. Director of Product for AI/ML @Databricks / Ex-Google / Ex-Amazon

8mo

Let us know how we can get DBRX added to the list above; we designed it to be fast (and inexpensive) to fine-tune.

Devvret Rishi

CEO @ Predibase | Co-Founder

8mo

Nice work, Timothy Wang and Justin Zhao, pulling this chart together! It was one of my favorite takeaways from the experiments!
