Integrating DeepSeek Large Model with Spring AI: A Step‑by‑Step Guide
This article explains how to obtain a DeepSeek API key and configure Spring AI with the appropriate base URL and model, and provides Java code examples for both synchronous and streaming chat interactions with the DeepSeek large‑language model.
DeepSeek is a Chinese large‑language model offering two series: the V series for chat (model name deepseek-chat) and the R series for reasoning (model name deepseek-reasoner).
To integrate DeepSeek with Spring AI, first create an API key from the DeepSeek portal, then set the base URL property spring.ai.openai.base-url to https://api.deepseek.com and choose the desired model via spring.ai.openai.chat.options.model=<model name>.
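The project also needs a Spring AI OpenAI starter on the classpath. A minimal Maven sketch, assuming the spring-ai-openai-spring-boot-starter artifact name used by pre‑1.0 Spring AI releases (check the Spring AI documentation for the coordinates matching your version):

```xml
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
</dependency>
```

Because DeepSeek exposes an OpenAI‑compatible API, the standard OpenAI starter is reused and only the base URL and model name change.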
Configuration can be added to application.yml (or properties) as follows:
spring:
  ai:
    openai:
      api-key: sk-xxx  # your key
      base-url: https://api.deepseek.com
      chat:
        options:
          model: deepseek-chat

A simple Spring Boot controller demonstrates two endpoints: /ai/generate for synchronous generation and /ai/generateStream for streaming responses. The controller injects OpenAiChatModel, builds a Prompt from a UserMessage, and returns either a map with the generated text or a Flux<ChatResponse> stream.
package com.ivy.controller;

import org.springframework.ai.chat.messages.UserMessage;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.openai.OpenAiChatModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

import java.util.Map;

@RestController
public class ChatController {

    private final OpenAiChatModel chatModel;

    public ChatController(OpenAiChatModel chatModel) {
        this.chatModel = chatModel;
    }

    // Blocking endpoint: returns the full generation in a JSON map.
    @GetMapping("/ai/generate")
    public Map<String, String> generate(
            @RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        return Map.of("generation", this.chatModel.call(message));
    }

    // Streaming endpoint: emits partial ChatResponse chunks as they arrive.
    @GetMapping("/ai/generateStream")
    public Flux<ChatResponse> generateStream(
            @RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        Prompt prompt = new Prompt(new UserMessage(message));
        return this.chatModel.stream(prompt);
    }
}

Because the public DeepSeek service may be subject to resource limits, the article also suggests deploying a DeepSeek model locally for uninterrupted learning and experimentation.
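One common way to do this is to run an OpenAI‑compatible local server and repoint the same Spring AI properties at it. A hedged sketch, assuming an Ollama instance on its default port with a locally pulled DeepSeek model tag (the base URL, port, and model tag are assumptions; adjust them to your local setup):

```yaml
spring:
  ai:
    openai:
      api-key: ollama                   # placeholder; local servers typically ignore the key
      base-url: http://localhost:11434  # assumed Ollama default; adjust for your server
      chat:
        options:
          model: deepseek-r1            # whichever model tag you pulled locally
```

Because only configuration changes, the controller code above works unmodified against the local endpoint.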
In summary, connecting DeepSeek to Spring AI is straightforward and enables both blocking and streaming chat capabilities with minimal configuration.
Architecture Digest
Focusing on Java backend development, covering top-tier internet companies' application architectures (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.