
How to Build a Spring AI Hello World with Ollama and DeepSeek Locally

This step‑by‑step tutorial shows how to install Ollama, pull the DeepSeek‑R1 model, create a Spring Boot project with the Spring AI Ollama starter, code a ChatController, and test a local AI "Hello World" integration, illustrating AI‑enhanced backend development.


Overview

Spring AI 1.0 has been released; this guide demonstrates how to build a simple "Hello World" AI application by installing Ollama, pulling the DeepSeek‑R1 model, creating a Spring Boot project, writing a ChatController, and testing the interaction.

Step 1: Install Ollama

Download Ollama from https://ollama.com/download, install it to a custom directory (e.g., D:\Ollama), and set the model directory environment variable:

<code>setx OLLAMA_MODELS "D:\Ollama\models" /M</code>

Verify the installation with ollama -v, which should print the installed version (0.5.11 at the time of writing).

Ollama download page

Step 2: Install DeepSeek

From the Ollama site, pull the lightweight 1.5b DeepSeek‑R1 model (1.5 billion parameters):

<code>ollama pull deepseek-r1:1.5b</code>

Run the model locally with:

<code>ollama run deepseek-r1:1.5b</code>
DeepSeek model selection

Step 3: Create Spring AI Project

Generate a Spring Boot project with Spring Initializr (Java 17) and replace the default OpenAI starter with the Ollama starter in pom.xml:

<code>&lt;dependency&gt;
    &lt;groupId&gt;org.springframework.ai&lt;/groupId&gt;
    &lt;artifactId&gt;spring-ai-starter-model-ollama&lt;/artifactId&gt;
&lt;/dependency&gt;
</code>
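Rather than hard-coding a version on each Spring AI artifact, you can import the Spring AI BOM in dependencyManagement so all starters resolve to compatible versions (the version shown is illustrative; use the release you target):

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>1.0.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```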

Configure application.yml to point to the local Ollama server and the DeepSeek model:

<code>server:
  servlet:
    encoding:
      charset: UTF-8
      enabled: true
      force: true
spring:
  ai:
    ollama:
      base-url: http://localhost:11434
      chat:
        model: deepseek-r1:1.5b
</code>

Note that on Spring Boot 3 (required for Java 17), response encoding is configured under server.servlet.encoding; the older spring.http.encoding properties no longer exist.
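Generation can also be tuned through Ollama options under the same prefix. A minimal sketch, assuming the spring.ai.ollama.chat.options.* property names from the Spring AI Ollama starter:

```yaml
spring:
  ai:
    ollama:
      base-url: http://localhost:11434
      chat:
        options:
          model: deepseek-r1:1.5b   # model can also be set per-option
          temperature: 0.7          # higher = more creative, lower = more deterministic
```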
Spring Initializr configuration

Step 4: Implement Chat Controller

Use Spring AI’s ChatClient to forward user messages to the model:

<code>package com.myai.demo.controller;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/ai")
public class ChatController {

    private final ChatClient chatClient;

    public ChatController(ChatClient.Builder chatClient) {
        this.chatClient = chatClient.build();
    }

    @GetMapping("/chat")
    public String chat(@RequestParam("message") String message) {
        try {
            return chatClient.prompt().user(message).call().content();
        } catch (Exception e) {
            return "Error calling model: " + e.getMessage();
        }
    }
}
</code>
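One practical wrinkle: DeepSeek-R1 is a reasoning model and typically wraps its chain of thought in &lt;think&gt;…&lt;/think&gt; tags before the final answer. If you only want the answer, a small helper can strip that block before returning it to the caller. This is a sketch; the ThinkTagStripper name and the assumption that the tags appear verbatim in content() are mine:

```java
// DeepSeek-R1 emits its chain of thought wrapped in <think>...</think>.
// This helper removes that block so only the final answer is returned.
public class ThinkTagStripper {

    /** Removes any <think>...</think> block (including newlines) from the model output. */
    public static String stripThink(String modelOutput) {
        return modelOutput
                .replaceAll("(?s)<think>.*?</think>", "") // (?s) lets . match newlines
                .trim();
    }

    public static void main(String[] args) {
        String raw = "<think>The user greeted me.</think>\nHello! How can I help you?";
        System.out.println(stripThink(raw)); // only the greeting remains
    }
}
```

In the controller above you would apply it as stripThink(chatClient.prompt().user(message).call().content()).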

Step 5: Test and Summary

Run the Spring Boot application and send a request to /ai/chat?message=hello. The model replies with a friendly greeting, confirming that the "Hello World" AI integration works.
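Messages containing spaces or non-ASCII characters must be URL-encoded before they go into the query string. A minimal sketch using the JDK's URLEncoder (the localhost:8080 base URL is an assumption matching Spring Boot's default port):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ChatUrlExample {

    // Builds the request URL, encoding the message so spaces and
    // non-ASCII characters survive as a query parameter.
    public static String chatUrl(String message) {
        return "http://localhost:8080/ai/chat?message="
                + URLEncoder.encode(message, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(chatUrl("hello world")); // spaces become '+'
    }
}
```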

Test result screenshot

This tutorial shows that a Java backend can interact with a locally hosted large language model, opening many possibilities for AI‑enhanced applications.

Java · DeepSeek · Spring AI · Tutorial · AI integration · Ollama
Written by Full-Stack Internet Architecture

Introducing full-stack Internet architecture technologies centered on Java