
Integrating DeepSeek Large Model with Spring AI: A Step‑by‑Step Guide

This article shows how to integrate DeepSeek's large language models, both the chat-oriented deepseek-chat and the reasoning-focused deepseek-reasoner, into a Spring AI application. It covers API-key setup, base-URL configuration, and model selection, with complete code for the Maven dependency, the Spring Boot configuration, and a simple chat controller.


DeepSeek is a large language model family developed by the Chinese company DeepSeek, offering two main lines: the V series for dialogue (model name deepseek-chat) and the R series for reasoning (model name deepseek-reasoner).

Spring AI can integrate DeepSeek by reusing the existing OpenAI client, because DeepSeek exposes an OpenAI-compatible API. To get started you need a DeepSeek API key; set the base URL to https://api.deepseek.com and choose the desired model via the spring.ai.openai.chat.options.model property.

Preparation steps:

Create an API key on the DeepSeek portal and configure it with the property spring.ai.openai.api-key .

Set the base URL with spring.ai.openai.base-url=https://api.deepseek.com.

Select the model by setting spring.ai.openai.chat.options.model=<model name> (e.g., deepseek-chat or deepseek-reasoner).
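
If you prefer application.properties over YAML, the three settings above can be sketched like this (note that recent Spring AI releases nest the model name under chat.options):

```properties
spring.ai.openai.api-key=sk-xxx
spring.ai.openai.base-url=https://api.deepseek.com
spring.ai.openai.chat.options.model=deepseek-chat
```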

Example integration:

1. Add the Maven dependency

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
</dependency>
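
The starter is declared without a version, which assumes the Spring AI BOM is imported in dependencyManagement. A sketch (substitute the Spring AI release you actually use for the version shown):

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-bom</artifactId>
      <version>1.0.0-M5</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```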

2. Configure Spring Boot

spring:
  ai:
    openai:
      api-key: sk-xxx   # replace with your own key
      base-url: https://api.deepseek.com
      chat:
        options:
          model: deepseek-chat
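
To keep the key out of version control, the same configuration can read it from an environment variable instead, a standard Spring Boot property-placeholder pattern (DEEPSEEK_API_KEY is an assumed variable name):

```yaml
spring:
  ai:
    openai:
      api-key: ${DEEPSEEK_API_KEY}   # resolved from the environment at startup
      base-url: https://api.deepseek.com
      chat:
        options:
          model: deepseek-chat
```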

3. Simple chat controller (blocking and streaming)

package com.ivy.controller;

import org.springframework.ai.chat.messages.UserMessage;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.openai.OpenAiChatModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
import java.util.Map;

@RestController
public class ChatController {

    private final OpenAiChatModel chatModel;

    public ChatController(OpenAiChatModel chatModel) {
        this.chatModel = chatModel;
    }

    @GetMapping("/ai/generate")
    public Map<String, String> generate(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        return Map.of("generation", this.chatModel.call(message));
    }

    @GetMapping("/ai/generateStream")
    public Flux<ChatResponse> generateStream(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        Prompt prompt = new Prompt(new UserMessage(message));
        return this.chatModel.stream(prompt);
    }
}
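
Under the hood, the OpenAI client sends an OpenAI-style HTTP POST to DeepSeek's /chat/completions endpoint (the path documented in DeepSeek's API reference). A minimal sketch of that wire-level request using only the JDK's HttpClient, with DeepSeekRequestSketch as a hypothetical helper class, may make the base-URL override easier to picture:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class DeepSeekRequestSketch {

    // Build (but do not send) the OpenAI-compatible chat-completion request
    // that the Spring AI starter issues against the configured base URL.
    static HttpRequest buildChatRequest(String apiKey, String model, String userMessage) {
        String body = """
                {"model": "%s", "messages": [{"role": "user", "content": "%s"}]}"""
                .formatted(model, userMessage);
        return HttpRequest.newBuilder()
                .uri(URI.create("https://api.deepseek.com/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)   // from spring.ai.openai.api-key
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildChatRequest("sk-xxx", "deepseek-chat", "Tell me a joke");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Nothing here is DeepSeek-specific except the host and path, which is exactly why the OpenAI starter can be pointed at DeepSeek via a single base-url property.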

Because DeepSeek's public service may be rate-limited or capacity-constrained, you can also deploy the model locally for experimentation.
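
As a sketch of the local route: assuming you serve a distilled DeepSeek model through Ollama and swap the dependency for spring-ai-ollama-spring-boot-starter, the configuration would point at the local daemon instead (the deepseek-r1 tag is an example model name):

```yaml
spring:
  ai:
    ollama:
      base-url: http://localhost:11434   # default Ollama endpoint
      chat:
        options:
          model: deepseek-r1
```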

Conclusion: Integrating DeepSeek with Spring AI is straightforward and supports both synchronous and streaming chat. Function calling, role definitions, and structured outputs follow the same patterns as the other Spring AI tutorials.

Source code repository: https://github.com/Fj-ivy/spring-ai-examples

Tags: Java, integration, AI, DeepSeek, large language model, Spring AI, Chatbot
Written by

Code Ape Tech Column

Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn
