
Building a Local Personal Knowledge Base with Ollama, DeepSeek‑R1, AnythingLLM and Integrating Continue into VSCode

This guide walks through setting up a local personal knowledge base using Ollama, DeepSeek‑R1, and AnythingLLM, and demonstrates how to integrate the Continue AI code assistant into VSCode, covering installation, configuration, and usage tips for efficient, secure development.

JD Tech Talk

Overview

In the current wave of AI and development tools, combining Ollama, DeepSeek‑R1, and AnythingLLM creates a powerful local personal knowledge base that enhances development efficiency and data privacy.

1. Installing Ollama

Ollama is an open‑source platform for running and managing large language models locally. It simplifies model download, installation, and management, enabling private, on‑device inference. Users download the appropriate package from https://ollama.com/ (e.g., the macOS build for an Apple M3 Pro), install it, start the service, and verify it is running at http://localhost:11434/.
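The installation can be verified from the terminal as well as the browser. The following is a minimal sketch, assuming a default installation listening on port 11434:

```shell
# Confirm the CLI is on the PATH and report its version
ollama --version

# The local API answers "Ollama is running" on the root path
curl http://localhost:11434/

# List models that have been pulled so far (empty on a fresh install)
ollama list
```

If the curl request fails, the service has not been started yet; launching the Ollama desktop app (or running `ollama serve`) brings it up.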

2. Installing DeepSeek‑R1

DeepSeek‑R1 can be obtained via Ollama or Hugging Face. For an Apple M3 Pro with 18 GB RAM, the 1.5B model is recommended for lightweight tasks, while the 7B model offers better performance for more demanding workloads. The model is launched with the command:

ollama run deepseek-r1:7b

After the model starts, a simple prompt such as “Who are you?” can be used to confirm it responds correctly.
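Beyond the interactive REPL, the same model can be queried over Ollama's local REST API, which is what AnythingLLM and Continue use under the hood. A minimal sketch, assuming the 7B model has already been pulled:

```shell
# Non-streaming generation request against the local Ollama API;
# the response is a single JSON object containing the model's reply
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Who are you?",
  "stream": false
}'
```

Setting `"stream": false` returns one complete JSON response instead of a stream of partial tokens, which is easier to inspect when testing.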

3. Installing AnythingLLM

AnythingLLM is a zero‑setup, private, integrated AI application that provides Retrieval‑Augmented Generation (RAG) and AI‑agent capabilities without requiring code or infrastructure. It is built with JavaScript, making it familiar to front‑end developers. Installation steps include downloading the desktop client from https://anythingllm.com/desktop, opening the client, selecting Ollama as the LLM provider, and choosing the installed DeepSeek‑R1 model (e.g., 7B). Users can then customize appearance, language, and default messages.

To feed knowledge, local documents (e.g., a Word file) are uploaded via the workspace UI, and browser pages are added using the AnythingLLM Browser Companion extension. An API key is generated in the client settings, copied, and pasted into the browser extension’s connection string to enable seamless syncing.
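The same API key also unlocks AnythingLLM's developer API for scripted access to a workspace. The endpoint path, port, and payload below are illustrative assumptions based on its workspace‑chat interface and may differ between versions; the workspace slug `my-notes` is hypothetical:

```shell
# Hypothetical sketch: chat against a workspace via the AnythingLLM developer API.
# Replace YOUR_API_KEY and the workspace slug with values from your own client;
# the port (3001 here) depends on how AnythingLLM is deployed.
curl http://localhost:3001/api/v1/workspace/my-notes/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"message": "Summarize my uploaded notes", "mode": "chat"}'
```

The exact routes are listed in the client's built‑in API documentation, which is the authoritative reference for your installed version.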

4. Integrating Continue into VSCode

Continue is an open‑source AI coding assistant for VSCode and JetBrains that offers chat‑based code understanding, inline completions, and in‑editor editing. After ensuring VSCode is installed, users add the “Continue” extension from the marketplace, then configure the provider to “Ollama” and select the previously installed DeepSeek‑R1 model. Once connected, a test query such as “What model are you?” demonstrates the integration.
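Continue can also be pointed at Ollama by editing its configuration file directly instead of using the UI. The snippet below follows the classic `~/.continue/config.json` layout; newer Continue releases have moved to a YAML configuration, so treat this as an illustrative sketch rather than the exact schema for every version:

```json
{
  "models": [
    {
      "title": "DeepSeek-R1 7B (local)",
      "provider": "ollama",
      "model": "deepseek-r1:7b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "DeepSeek-R1 1.5B (local)",
    "provider": "ollama",
    "model": "deepseek-r1:1.5b"
  }
}
```

Using the smaller 1.5B model for tab autocompletion keeps inline suggestions responsive, while the 7B model handles chat queries where latency matters less.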

Conclusion

The combined setup provides a secure, locally‑run AI environment that protects sensitive data while boosting coding productivity. The main trade‑off is higher hardware requirements; insufficient resources can lead to slower model responses, but overall the workflow offers a valuable, privacy‑preserving development experience.

Tags: DeepSeek, VSCode, Continue, AI integration, Ollama, AnythingLLM, Local Knowledge Base
Written by

JD Tech Talk

Official JD Tech public account delivering best practices and technology innovation.
