There's a common belief that #opensource LLMs aren't as performant as #GPT4. Well, oftentimes they're not... unless you #finetune! You can achieve GPT-4 accuracy or better for your use case with the power of fine-tuning. Want to learn how you can fine-tune your own #LLMs that outperform GPT-4? Join our upcoming virtual workshop to learn more: https://pbase.ai/43Yuvfb
Let us know how we can get DBRX added to the list above; we designed it to be fast (and inexpensive) to fine-tune.
Nice work, Timothy Wang and Justin Zhao, pulling this chart together; it was one of my favorite takeaways from the experiments!
Machine Learning Scientist at Georgian
We at Georgian have held a similar stance: RAG + fine-tuning gives smaller language models a real shot at competing with the likes of GPT-4! We have an open-source toolkit that lets you do the same: https://github.com/georgian-io/LLM-Finetuning-Toolkit