Improve Your Enterprise LLM Code With Fine-Tuning, with Shomron Jacob at AIE 2024

July 3, 2024

Fine-tuned large language models (LLMs) excel at relatively simple code generation tasks but struggle with more intricate code, specialized libraries, and complex application demos. This session addresses hallucinations, a well-known limitation of LLMs, by proposing an innovative dataset representation strategy. Conventional fine-tuning often employs a question-output pair format, but this format can itself lead to undesirable hallucinations. The session will discuss real-world experiences fine-tuning existing code generators to output high-caliber code for newer generative AI libraries such as Langchain, Vertex AI, and others. It will offer a use case illustrating the importance of data representation, as well as a practical strategy for enhancing LLM performance in code generation, particularly for complex and specialized tasks.
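The abstract contrasts the conventional question-output pair format with a richer dataset representation but does not spell out what the latter looks like. As a rough sketch of that contrast only, the example below shows a bare question-output record next to one enriched with grounding context; every field name, prompt, and value here is an illustrative assumption, not a detail taken from the session.

```python
import json

# A minimal sketch contrasting two fine-tuning dataset representations.
# All field names, prompts, and file names are illustrative assumptions.

# Conventional question-output pair: the model trains only on the prompt
# and the target code, with no grounding material -- the format the
# abstract associates with hallucinated output.
pair_record = {
    "question": "Write code that loads a PDF with LangChain.",
    "output": "loader = PyPDFLoader('report.pdf')\ndocs = loader.load()",
}

# One plausible alternative: enrich each record with context (e.g. an
# API-reference excerpt) so the target code is grounded in real
# signatures rather than memorized or invented ones.
enriched_record = {
    "context": "PyPDFLoader(file_path: str) loads a PDF into Documents.",
    "question": "Write code that loads a PDF with LangChain.",
    "output": "loader = PyPDFLoader('report.pdf')\ndocs = loader.load()",
}

# Serialize to JSONL, a common input format for fine-tuning pipelines.
with open("train.jsonl", "w") as f:
    for record in (pair_record, enriched_record):
        f.write(json.dumps(record) + "\n")
```

The design intuition behind the enriched form is that attaching authoritative context to each training example gives the model something concrete to condition on, which is one way a dataset representation might reduce hallucinated APIs on newer libraries.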

Categories: AIE 2024