Overview

DeepSeekLLMService provides access to DeepSeek's language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management, along with DeepSeek's reasoning capabilities.

Installation

To use DeepSeek services, install the required dependency:
pip install "pipecat-ai[deepseek]"

Prerequisites

DeepSeek Account Setup

Before using DeepSeek LLM services, you need:
  1. DeepSeek Account: Sign up at DeepSeek Platform
  2. API Key: Generate an API key from your account dashboard
  3. Model Selection: Choose from available DeepSeek models with reasoning capabilities

Required Environment Variables

  • DEEPSEEK_API_KEY: Your DeepSeek API key for authentication
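Reading the key from the environment and failing fast when it is missing surfaces a clear error at startup rather than an authentication failure deep inside a running pipeline. A minimal sketch (the helper name is ours, not part of Pipecat):

```python
import os


def get_deepseek_api_key() -> str:
    """Return the DeepSeek API key from the environment, or fail fast.

    Raising here gives a clear startup error instead of a confusing
    authentication failure once the pipeline is already running.
    """
    key = os.environ.get("DEEPSEEK_API_KEY")
    if not key:
        raise RuntimeError("DEEPSEEK_API_KEY is not set")
    return key
```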

Configuration

api_key (str, required)
    DeepSeek API key for authentication.

base_url (str, default: "https://api.deepseek.com/v1")
    Base URL for the DeepSeek API endpoint.

model (str, default: "deepseek-chat")
    Model identifier to use.

InputParams

This service uses the same input parameters as OpenAILLMService. See OpenAI LLM for details.
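Because the service is OpenAI-compatible, these parameters ultimately land in a standard chat-completion request body. A rough sketch of that mapping (a hypothetical helper for illustration, not Pipecat's actual internals), omitting any option left unset:

```python
def build_chat_request(model, messages, temperature=None, top_p=None, max_tokens=None):
    """Assemble an OpenAI-style chat completion payload.

    Options that are None are left out entirely, so the API's own
    defaults apply.
    """
    payload = {"model": model, "messages": messages}
    options = {"temperature": temperature, "top_p": top_p, "max_tokens": max_tokens}
    payload.update({k: v for k, v in options.items() if v is not None})
    return payload
```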

Usage

Basic Setup

import os
from pipecat.services.deepseek import DeepSeekLLMService

llm = DeepSeekLLMService(
    api_key=os.getenv("DEEPSEEK_API_KEY"),
    model="deepseek-chat",
)

With Custom Parameters

import os
from pipecat.services.deepseek import DeepSeekLLMService

llm = DeepSeekLLMService(
    api_key=os.getenv("DEEPSEEK_API_KEY"),
    model="deepseek-chat",
    params=DeepSeekLLMService.InputParams(
        temperature=0.7,
        top_p=0.9,
        max_tokens=2048,
    ),
)

Notes

  • DeepSeek does not support the seed and max_completion_tokens parameters. Use max_tokens instead.
  • DeepSeek models offer strong reasoning capabilities, particularly the deepseek-reasoner model variant.
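If you are porting configuration from another OpenAI-compatible service, the unsupported options can be translated before they reach the API. A hypothetical helper illustrating the note above (not part of Pipecat):

```python
def adapt_params_for_deepseek(params: dict) -> dict:
    """Translate OpenAI-style options into ones DeepSeek accepts.

    Drops `seed` and renames `max_completion_tokens` to `max_tokens`,
    keeping an existing `max_tokens` value if both are present.
    """
    adapted = dict(params)
    adapted.pop("seed", None)  # DeepSeek does not accept `seed`
    if "max_completion_tokens" in adapted:
        adapted.setdefault("max_tokens", adapted.pop("max_completion_tokens"))
    return adapted
```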