March 26, 2024

OpenPipe: Bridging the Gap Between LLMs and Production with Easier Fine-Tuning

OpenPipe Founders, Kyle Corbitt & David Corbitt

By: Tony Liu & Rebecca Li

As we enter the second year of the ChatGPT era, the developer community has built a vibrant ecosystem around large language models (LLMs) and has extensively explored their potential value in the real world. On one hand, LLMs are powerful, versatile, and easy to experiment with. On the other hand, they require skillful augmentation before they can be put into production reliably and deliver the intended business outcomes.

The two main LLM augmentation techniques are prompting (including retrieval-augmented generation, or RAG) and fine-tuning. There has been endless debate in the AI community about which technique is better.

Kyle Corbitt, who led the development of Startup School at Y Combinator, was deeply embedded in the ecosystem of developers trying to build LLM applications across various industries and observed common failure patterns of LLMs in production. David Corbitt, his younger brother, has tinkered with LLMs obsessively over the past few years. Their conclusions: 

  • RAG and fine-tuning are both useful, and they solve different pain points. In most scenarios, you need both to achieve the best outcome for LLMs in production.
  • Fine-tuning smaller, open-source LLMs is a proven way to reduce latency and cost and improve accuracy, so that LLMs can meet production requirements at scale.
  • Today, the industry is not fine-tuning LLMs enough because fine-tuning knowledge is in the heads of highly specialized ML engineers and researchers.

Their conclusions match well with what we’ve seen happening among the most sophisticated software engineering teams in enterprises. 

Typically, full-stack engineering teams start building LLM apps with big, proprietary models like GPT-4 to see whether they can achieve the target outcome. Along the way, they might implement prompting and RAG systems to provide relevant context to the LLM, improving accuracy and reducing hallucinations. At this point, the engineering teams have LLMs that output results good enough to serve to their users. When they put the models in production, however, they realize that it is very difficult to scale them. The latency is too high, which makes apps slow and painful to use. And the inference cost is too high, which makes deploying apps in production cost prohibitive.

The typical move for these companies is to assemble a specialized ML engineering team to work on scaling and optimization. However, these teams are expensive, siloed from the software engineering teams, and can slow down progress drastically. The scaling problem ends up being solved far away from the teams with the deepest knowledge of the user experience being built.

OpenPipe is here to address that exact problem. The vision is to allow any software engineer, even one without much ML background, to optimize LLM applications through fine-tuning. It abstracts away the ML jargon and lets you focus on what matters: the data you collect, the performance trade-offs, and ultimately the user experience.

OpenPipe’s early market traction speaks for itself. It has enabled its customers to easily translate production data into smaller, faster, and more specialized models across a range of user experiences and business workflows: question answering, structured data extraction, business process automation, service and support chatbots, and more. Yet Kyle and David’s vision is bigger than that. By providing a systematic way to continuously distill insights from production data and feed them back into AI applications, OpenPipe empowers customers to create a great user experience and a unique data moat.
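To make the idea of distilling production data concrete, here is a minimal, hypothetical sketch in Python of the general pattern this kind of tooling enables: capture prompt/completion pairs from a production GPT-4 deployment, convert them into a chat-format JSONL training set, and hand that file to a fine-tuning job for a smaller open-source model. The log structure, file names, and example content below are illustrative assumptions, not OpenPipe’s actual API.

```python
import json
from pathlib import Path

# Hypothetical production log: each record holds the prompt sent to the big
# proprietary model and the completion it returned. In practice these pairs
# would be captured by request logging in the application layer.
production_log = [
    {
        "messages": [
            {"role": "system", "content": "Extract the invoice total as JSON."},
            {"role": "user", "content": "Invoice #42 ... Total due: $1,250.00"},
        ],
        "completion": {"role": "assistant", "content": '{"total": 1250.00}'},
    },
    # ... thousands more rows collected from real traffic
]

def to_training_example(record: dict) -> dict:
    """Convert one logged request/response pair into a chat-format
    fine-tuning example (the shape most SFT pipelines accept)."""
    return {"messages": record["messages"] + [record["completion"]]}

# Write the dataset as JSONL; this file can then feed a fine-tuning job
# targeting a smaller open-source model, whether through OpenPipe, a hosted
# service, or an in-house training pipeline.
out_path = Path("distilled_train.jsonl")
with out_path.open("w") as f:
    for record in production_log:
        f.write(json.dumps(to_training_example(record)) + "\n")

print(f"Wrote {len(production_log)} training examples to {out_path}")
```

The point of the sketch is the feedback loop, not the specifics: production traffic served by an expensive general-purpose model becomes the training data for a cheaper, faster, specialized one.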

We at Costanoa deeply believe that an LLM optimization tool built for all software engineers is the key to unlocking LLMs’ massive potential in real-world applications. We also believe that Kyle and David are uniquely capable of building such a product. They understand developers, have deep expertise in LLMs, and iterate at a rapid pace. We are thrilled to lead OpenPipe’s $6.7 million seed round and to partner closely with Kyle and David as they continue to build the company.

Written by

Tony Liu, Partner
Rebecca Li, Principal