Gembatch is a Python library that simplifies the creation of language chain applications with Gemini. By grouping prompts into batch jobs, it helps developers cut prompt costs significantly while keeping their code clear and maintainable. Transform your prompt chaining workflow effortlessly and unlock the benefits of batch APIs.
Key Features
- Cost Efficiency: Executing prompt chains as a series of individual API calls quickly gets expensive. Gembatch intelligently groups requests so you can take advantage of batch processing discounts, often up to 50%, resulting in substantial cost savings.
- Simplified Prompt Chaining: Complex workflows no longer require convoluted code. With Gembatch, you define your prompt chains sequentially, keeping your code clear and maintainable, while the library manages batch processing in the background so you can focus on higher-level design.
- Asynchronous Handling: Gembatch handles the complexities of asynchronous processing, such as polling for batch completion and handling errors. This helps you write robust applications without getting bogged down in implementation details.
Example Usage
Here’s a simple example showing how to use Gembatch to build a prompt chain in your Firebase environment:
```python
import gembatch
from vertexai import generative_models  # provides the GenerationResponse type


# Task A
def task_a_prompt1():
    gembatch.submit(
        {
            "contents": [
                {
                    "role": "user",
                    "parts": [{"text": "some prompts..."}],
                }
            ],
        },  # prompt 1
        "publishers/google/models/gemini-1.5-pro-002",
        task_a_prompt2,
    )


def task_a_prompt2(response: generative_models.GenerationResponse):
    gembatch.submit(
        {
            "contents": [
                {
                    "role": "model",
                    "parts": [{"text": response.text}],
                },
                {
                    "role": "user",
                    "parts": [{"text": "some prompts..."}],
                },
            ],
        },  # prompt 2
        "publishers/google/models/gemini-1.5-pro-002",
        task_a_output,
    )


def task_a_output(response: generative_models.GenerationResponse):
    print(response.text)


# Start the prompt chain
task_a_prompt1()
```
In this example:
- `task_a_prompt1` initiates the process by submitting the first prompt to the Gemini model.
- `task_a_prompt2` handles the response and creates the next prompt in the sequence.
- `task_a_output` outputs the final result.
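Note that these calls return immediately: `gembatch.submit` adds the prompt to a job queue rather than calling Gemini on the spot, and each handler runs later, once its batch completes. One practical consequence, sketched below (the loop is our own illustration, not part of the library), is that you can kick off many independent chains and let Gembatch group their prompts into shared batches:

```python
# Start five independent copies of the chain. Each submit() call only
# queues a prompt, so Gembatch is free to group the queued prompts from
# all five chains into shared batch jobs.
for _ in range(5):
    task_a_prompt1()
```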
How Gembatch Works
Gembatch efficiently orchestrates multiple prompt chains using batch processing. The workflow breaks down as follows:
- Independent Prompt Chains: Each task runs its own sequence of prompts and responses without interference.
- Gembatch Submission: Prompts are added to a job queue rather than being sent individually to Gemini.
- Batch Formation: Prompts are grouped intelligently into batches for efficient processing.
- Batch Execution: Batches are periodically sent to Gemini, minimizing API interactions.
- Response Handling: Responses are routed to their respective tasks, triggering subsequent steps in each chain.
- Chain Continuation: Tasks continue processing until completion, making efficient use of resources.
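To make this pattern concrete, here is a simplified, self-contained sketch of the queue-and-callback cycle described above. It is not Gembatch's actual implementation; the `JobQueue` class and `fake_batch_call` helper are invented purely for illustration:

```python
from collections import deque
from typing import Callable


def fake_batch_call(requests: list[dict]) -> list[str]:
    # Stand-in for a real batch prediction job (e.g., on Vertex AI),
    # invented here purely so the sketch runs end to end.
    return [f"echo: {r['contents'][-1]['parts'][0]['text']}" for r in requests]


class JobQueue:
    """Toy model of the submit/flush cycle: queue prompts, group them into
    batches, then route each response back to its chain's next handler."""

    def __init__(self, batch_size: int = 4):
        self.batch_size = batch_size
        self.pending: deque[tuple[dict, Callable[[str], None]]] = deque()

    def submit(self, request: dict, handler: Callable[[str], None]) -> None:
        # Enqueue the prompt instead of calling the model immediately.
        self.pending.append((request, handler))

    def flush(self) -> None:
        # Group queued prompts, execute one batch call per group, then hand
        # each response back to its chain; handlers may submit() follow-ups.
        while self.pending:
            batch = [self.pending.popleft()
                     for _ in range(min(self.batch_size, len(self.pending)))]
            responses = fake_batch_call([request for request, _ in batch])
            for (_, handler), response in zip(batch, responses):
                handler(response)


queue = JobQueue()
queue.submit(
    {"contents": [{"role": "user", "parts": [{"text": "hello"}]}]},
    lambda text: print(text),
)
queue.flush()  # prints "echo: hello"
```

The key design choice mirrors the list above: submission never blocks on the model, so many chains can accumulate prompts between flushes, and each flush pays the per-batch overhead once instead of once per prompt.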
Why Choose Gembatch?
- Enhance your productivity with simpler management of complex prompt chains.
- Unlock significant cost savings through efficient batch processing.
- Improve code clarity and maintainability with a structured framework for handling language models.
Get Started
Ready to elevate your language processing projects? Check out our comprehensive Installation Guide to integrate Gembatch into your Firebase environment.