Empower Functions is a family of large language models (LLMs) that delivers GPT-4-caliber capability for real-world "tool using" applications. Acting as a drop-in replacement for proprietary function-calling APIs, it seamlessly turns user requests into actionable API calls, making it a go-to choice for developers looking to integrate advanced conversational capabilities into their applications.
Key Features
- Real-World Applications: Empower Functions can intelligently interact with external APIs, recognizing when to invoke functions and producing precise JSON outputs based on user inputs. This functionality is crucial for creating effective conversational agents that handle tasks like data retrieval and API calls.
- Latest Release - Version 1.1: The recently launched v1.1 of Empower Functions has been fine-tuned from Llama 3.1 on a curated dataset for enhanced performance, achieving state-of-the-art results on the Berkeley Function Calling Leaderboard.
(Image placeholder for performance data)
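To make the function-calling behavior concrete, here is a minimal sketch of how an application might validate the JSON a tool-using model emits. The tool name (`get_weather`), its arguments, and the `parse_tool_call` helper are illustrative assumptions, not part of the Empower Functions API:

```python
import json

# Hypothetical model output: the JSON shape a tool-using model is
# expected to emit when it decides to invoke a function.
TOOL_CALL = json.dumps({
    "name": "get_weather",                          # assumed tool name
    "arguments": {"city": "Seattle", "unit": "celsius"},
})

def parse_tool_call(raw: str, known_tools: set) -> dict:
    """Parse a model's function-call output and check it names a known tool."""
    call = json.loads(raw)
    if call.get("name") not in known_tools:
        raise ValueError(f"model requested unknown tool: {call.get('name')!r}")
    return call

call = parse_tool_call(TOOL_CALL, known_tools={"get_weather"})
print(call["name"], call["arguments"]["city"])  # get_weather Seattle
```

In practice the raw string would come from the model's response, and the parsed arguments would be dispatched to the real API before feeding the result back into the conversation.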
Understanding "Tool Using"
"Tool using" refers to the LLMs' capability to execute API requests by generating required function calls in response to user queries. This involves:
- Multi-Turn Conversations: Maintaining context across several exchanges to provide meaningful responses based on previous interactions.
- Clarification and Context Integration: Asking for clarifications when necessary and effectively integrating outputs from different tools into cohesive responses.
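The multi-turn flow above can be sketched as a chat-style message history in the OpenAI-compatible format that function-calling models typically consume. The tool name (`lookup_order`) and its result are made up for illustration:

```python
# Sketch of a multi-turn, tool-using conversation. Each turn keeps the
# prior context, and the tool's output is integrated into the final reply.
messages = [
    {"role": "user", "content": "Where is my order #1234?"},
    # Turn 1: the model decides to call a tool instead of answering directly.
    {"role": "assistant", "content": None,
     "tool_calls": [{"id": "call_1", "function": {
         "name": "lookup_order", "arguments": '{"order_id": "1234"}'}}]},
    # Turn 2: the tool's output is fed back so the model can ground its reply.
    {"role": "tool", "tool_call_id": "call_1",
     "content": '{"status": "shipped", "eta": "Friday"}'},
    # Turn 3: the model integrates the tool result into a cohesive answer.
    {"role": "assistant",
     "content": "Your order #1234 has shipped and should arrive Friday."},
]

def last_tool_result(history):
    """Return the most recent tool message, if any (context integration)."""
    return next((m for m in reversed(history) if m["role"] == "tool"), None)

print(last_tool_result(messages)["content"])
```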
Explore further how our models can enhance user interactions by visiting our live demo.
Model Family
Empower Functions offers a variety of models tailored to different contexts and requirements, including:
| Model | Specs | Links | Notes |
|---|---|---|---|
| llama3-empower-functions-small | 128k context, based on Llama 3.1 8B | Model Link | Cost-effective and locally runnable |
| llama3-empower-functions-large | 128k context, based on Llama 3.1 70B | Model Link | Best accuracy |
Innovative Training Approach
Empower Functions uses a unique training methodology that combines Supervised Fine-Tuning (SFT) with Direct Preference Optimization (DPO) to ensure top-notch performance:
- SFT Phase: Involves training on over 100,000 curated conversation examples, enabling the model to handle single-turn and multi-turn scenarios effectively.
- DPO Phase: Focuses on refining function specification handling to avoid generating incorrect or fabricated outputs.
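For readers unfamiliar with the DPO phase, here is a minimal sketch of the standard Direct Preference Optimization loss for a single preference pair. The numeric log-probabilities are illustrative assumptions, not values from Empower Functions' training:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) preference pair.

    Inputs are sequence log-likelihoods under the policy being trained
    and under a frozen reference model; beta controls how far the policy
    may drift from the reference.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Loss is -log(sigmoid(margin)): small when the policy prefers the
    # chosen completion more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The policy favors the chosen completion relative to the reference,
# so the loss is low; flipping the preference raises it.
print(dpo_loss(-10.0, -14.0, -12.0, -12.0))
```

Training on such pairs, where the "rejected" completion contains a fabricated or malformed function call, is one way a model learns to avoid hallucinated outputs.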
We are committed to ongoing enhancements, fine-tuning our models to meet your specific needs while optimizing performance across diverse use cases. Contact us for tailored solutions!