The LaraUtilX package provides a unified interface for interacting with multiple Large Language Model (LLM) providers. You can use OpenAI's GPT models, Google's Gemini models, or Anthropic's Claude models through a single, consistent API.
All providers support:
generateResponse(
    string $modelName,
    array $messages,
    ?float $temperature = null,
    ?int $maxTokens = null,
    ?array $stop = null,
    ?float $topP = null,
    ?float $frequencyPenalty = null,
    ?float $presencePenalty = null,
    ?array $logitBias = null,
    ?string $user = null,
    ?bool $jsonMode = false,
    bool $fullResponse = false
): Response
The provider is auto-bound to the LLMProviderInterface, so you can inject it anywhere in your Laravel app:
use LaraUtilX\LLMProviders\Contracts\LLMProviderInterface;

class MyController extends Controller
{
    // Method injection: Laravel resolves the provider for this action.
    public function ask(LLMProviderInterface $llm)
    {
        $response = $llm->generateResponse(
            'gpt-3.5-turbo', // or 'gemini-2.0-flash' or 'claude-sonnet-4-0'
            [
                ['role' => 'user', 'content' => 'What is Laravel?']
            ]
        );

        return $response->getContent();
    }
}
You can also use constructor injection:

use LaraUtilX\LLMProviders\Contracts\LLMProviderInterface;

class MyController extends Controller
{
    private LLMProviderInterface $llm;

    public function __construct(LLMProviderInterface $llm)
    {
        $this->llm = $llm;
    }

    public function ask()
    {
        $response = $this->llm->generateResponse(
            'gpt-3.5-turbo',
            [
                ['role' => 'user', 'content' => 'What is Laravel?']
            ]
        );

        return $response->getContent();
    }
}
A complete test endpoint that picks the model from configuration based on the default provider (using the constructor-injected $this->llm from the previous example):

public function testLLM(Request $request)
{
    $provider = config('lara-util-x.llm.default_provider', 'openai');
    $model = match ($provider) {
        'gemini' => config('lara-util-x.gemini.default_model', 'gemini-2.0-flash'),
        'claude' => config('lara-util-x.claude.default_model', 'claude-sonnet-4-0'),
        default => config('lara-util-x.openai.default_model', 'gpt-3.5-turbo'),
    };

    $prompt = $request->input('prompt', 'Say hello in one short sentence.');
    $temperature = $request->input('temperature');
    $maxTokens = $request->input('max_tokens');
    $jsonMode = filter_var($request->input('json_mode', false), FILTER_VALIDATE_BOOL);

    $messages = [
        ['role' => 'user', 'content' => $prompt],
    ];

    $response = $this->llm->generateResponse(
        modelName: $model,
        messages: $messages,
        temperature: $temperature !== null ? (float) $temperature : null,
        maxTokens: $maxTokens !== null ? (int) $maxTokens : null,
        jsonMode: $jsonMode,
        fullResponse: true
    );

    return response()->json([
        'provider' => $provider,
        'model' => $response->getModel() ?? $model,
        'content' => $response->getContent(),
        'usage' => $response->getUsage(),
    ]);
}
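To try this endpoint, register a route for it. The path and controller name below are illustrative; point them at wherever your testLLM action actually lives:

use App\Http\Controllers\MyController; // hypothetical controller holding testLLM
use Illuminate\Support\Facades\Route;

// In routes/api.php
Route::post('/llm/test', [MyController::class, 'testLLM']);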
You can pass any of the supported chat parameters:

$response = $llm->generateResponse(
    'gpt-3.5-turbo', // or 'gemini-2.0-flash' or 'claude-sonnet-4-0'
    [
        ['role' => 'user', 'content' => 'Tell me a joke.']
    ],
    temperature: 0.8,
    maxTokens: 100,
    stop: ["\n"], // double quotes so the stop sequence is an actual newline
    topP: 1.0,
    frequencyPenalty: 0,
    presencePenalty: 0,
    logitBias: null,
    user: 'user-123',
    jsonMode: false,
    fullResponse: true
);
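When jsonMode is true, the provider is asked to return a JSON-formatted response, which you can decode yourself. A minimal sketch; the prompt and keys are illustrative:

$response = $llm->generateResponse(
    'gpt-3.5-turbo',
    [
        ['role' => 'user', 'content' => 'Return a JSON object with keys "city" and "country" for Paris.']
    ],
    jsonMode: true
);

// Decode the JSON string returned by the model into an associative array.
$data = json_decode($response->getContent(), true);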
All configuration options live in config/lara-util-x.php:
'llm' => [
    'default_provider' => env('LLM_DEFAULT_PROVIDER', 'openai'),
],

'openai' => [
    'api_key' => env('OPENAI_API_KEY'),
    'max_retries' => env('OPENAI_MAX_RETRIES', 3),
    'retry_delay' => env('OPENAI_RETRY_DELAY', 2),
    'default_model' => env('OPENAI_DEFAULT_MODEL', 'gpt-3.5-turbo'),
    'default_temperature' => env('OPENAI_DEFAULT_TEMPERATURE', 0.7),
    'default_max_tokens' => env('OPENAI_DEFAULT_MAX_TOKENS', 300),
    'default_top_p' => env('OPENAI_DEFAULT_TOP_P', 1.0),
],

'gemini' => [
    'api_key' => env('GEMINI_API_KEY'),
    'max_retries' => env('GEMINI_MAX_RETRIES', 3),
    'retry_delay' => env('GEMINI_RETRY_DELAY', 2),
    'base_url' => env('GEMINI_BASE_URL', 'https://generativelanguage.googleapis.com/v1beta'),
    'default_model' => env('GEMINI_DEFAULT_MODEL', 'gemini-2.0-flash'),
    'default_temperature' => env('GEMINI_DEFAULT_TEMPERATURE', 0.7),
    'default_max_tokens' => env('GEMINI_DEFAULT_MAX_TOKENS', 300),
    'default_top_p' => env('GEMINI_DEFAULT_TOP_P', 1.0),
],

'claude' => [
    'api_key' => env('CLAUDE_API_KEY'),
    'max_retries' => env('CLAUDE_MAX_RETRIES', 3),
    'retry_delay' => env('CLAUDE_RETRY_DELAY', 2),
    'base_url' => env('CLAUDE_BASE_URL', 'https://api.anthropic.com'),
    'default_model' => env('CLAUDE_DEFAULT_MODEL', 'claude-sonnet-4-0'),
    'default_temperature' => env('CLAUDE_DEFAULT_TEMPERATURE', 1.0),
    'default_max_tokens' => env('CLAUDE_DEFAULT_MAX_TOKENS', 1024),
    'default_top_p' => env('CLAUDE_DEFAULT_TOP_P', 1.0),
],
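These defaults can be read back at runtime with the config() helper, for example to reuse the configured model and temperature in your own calls (a sketch using the keys shown above):

$response = $llm->generateResponse(
    config('lara-util-x.openai.default_model', 'gpt-3.5-turbo'),
    [
        ['role' => 'user', 'content' => 'Summarize Laravel in one sentence.']
    ],
    temperature: (float) config('lara-util-x.openai.default_temperature', 0.7),
    maxTokens: (int) config('lara-util-x.openai.default_max_tokens', 300)
);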
Add your API keys to your .env file:
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...
CLAUDE_API_KEY=sk-ant-...
LLM_DEFAULT_PROVIDER=openai
Switch between providers by changing the value in your .env file:

# Use OpenAI
LLM_DEFAULT_PROVIDER=openai

# Use Gemini
LLM_DEFAULT_PROVIDER=gemini

# Use Claude
LLM_DEFAULT_PROVIDER=claude
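To confirm which concrete provider the container resolved, you can inspect the bound instance, for example in php artisan tinker:

use LaraUtilX\LLMProviders\Contracts\LLMProviderInterface;

// Prints the concrete provider class chosen by LLM_DEFAULT_PROVIDER.
echo get_class(app(LLMProviderInterface::class));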
The method returns a response instance (either OpenAIResponse, GeminiResponse, or ClaudeResponse), which provides:
- getContent(): The generated text from the model.
- getModel(): The model used for the response.
- getUsage(): Token usage information.
- getRawResponse(): The full raw response object from the provider.

Example:
$content = $response->getContent();
$model = $response->getModel();
$usage = $response->getUsage();
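For example, you might log token consumption after each call. The exact shape of the usage data is provider-specific, so it is logged here as an opaque value:

use Illuminate\Support\Facades\Log;

Log::info('LLM completion', [
    'model' => $response->getModel(),
    'usage' => $response->getUsage(), // provider-specific token usage details
]);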
Success Result: A response instance with the generated content and, when fullResponse is true, metadata such as model and usage.

Failure Result: Failed requests are retried automatically, up to the configured max_retries.

To add support for other LLM providers, implement the LLMProviderInterface in your own provider class and bind it to the interface, as sketched below.
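A common way to wire this up is a container binding in a service provider; MyCustomProvider here is a hypothetical class implementing LLMProviderInterface with the generateResponse() signature shown earlier:

use App\LLM\MyCustomProvider; // hypothetical custom implementation
use LaraUtilX\LLMProviders\Contracts\LLMProviderInterface;

// In App\Providers\AppServiceProvider, which registers after the package's
// own service provider, so this binding takes precedence:
public function register(): void
{
    $this->app->bind(LLMProviderInterface::class, MyCustomProvider::class);
}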
This unified system simplifies interacting with multiple LLM providers, giving you a convenient, robust, and extensible way to generate AI-powered completions in your Laravel application.