Implementing Vue Server-Side Rendering (SSR) with Koa, Docker, and Kubernetes
This technical guide explains how to build a Vue SSR service using Koa, outlines the project benefits and trade‑offs, details the directory structure, server and client entry files, webpack configurations, and describes a Docker‑based CI/CD pipeline with Kubernetes deployment for scalable production environments.
Reported project benefits include a 20% improvement in overall development efficiency and roughly 40% faster first-screen rendering under weak network conditions.
Trade-offs: SSR requires a Node.js-capable server, carries a higher learning cost, and increases server load because rendering happens per request. It also introduces two execution environments: lifecycle hooks such as beforeCreate and created run on both server and client, which can cause unintended side effects.
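Because beforeCreate and created run in both environments, side effects such as timers or DOM access need an environment guard. A minimal sketch (the isServer helper and the component options are illustrative, not part of the project):

```javascript
// Illustrative guard: detect whether code is executing on the server.
// On the server there is no global `window`; in the browser there is.
function isServer() {
  return typeof window === 'undefined';
}

// Hypothetical component options using the guard: only start the timer
// in the browser, so the server-side render stays side-effect free.
const componentOptions = {
  created() {
    if (isServer()) return; // skip side effects during SSR
    this.timer = setInterval(() => this.refresh(), 5000);
  },
  destroyed() {
    if (this.timer) clearInterval(this.timer);
  }
};
```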
Before implementation, it is recommended to read the official Vue SSR documentation.
Step 1: Install dependencies
yarn add vue vue-server-renderer koa
The vue-server-renderer module is the core of Vue SSR, and Koa is used to build the server.
Step 2: Create a simple SSR server
const Koa = require('koa');
const Vue = require('vue');
const router = require('koa-router')();
const renderer = require('vue-server-renderer').createRenderer();

const server = new Koa();

const app = new Vue({
  data: { msg: 'vue ssr' },
  template: '<div>{{msg}}</div>'
});

router.get('*', async (ctx) => {
  // renderToString returns a promise when called without a callback,
  // so the middleware can await it and Koa sends the rendered HTML.
  ctx.body = await renderer.renderToString(app);
});

server.use(router.routes()).use(router.allowedMethods());
module.exports = server;
This creates a minimal SSR service that renders a Vue instance to HTML.
Step 3: SSR specific implementation
Based on the simple service, the project is expanded into a real application with a proper directory structure:
app
├── src
│   ├── components
│   ├── router
│   ├── store
│   ├── index.js
│   ├── App.vue
│   ├── index.html
│   ├── entry-server.js   // runs on the server
│   └── entry-client.js   // runs in the browser
└── server
    ├── app.js
    └── ssr.js
Two entry files are required: entry-server.js for server-side rendering and entry-client.js for client-side hydration.
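Both entries parse cookies so that asyncData can issue authenticated requests on either side. A rough sketch of what cookie-parse does, reimplemented here purely for illustration:

```javascript
// Minimal cookie-string parser mirroring cookie-parse's behavior.
// On the server the raw header comes from context.cookie;
// in the browser it comes from document.cookie.
function parseCookies(str) {
  return (str || '')
    .split(';')
    .map(pair => pair.trim())
    .filter(Boolean)
    .reduce((acc, pair) => {
      const eq = pair.indexOf('=');
      if (eq === -1) return acc;          // skip malformed pairs
      const key = pair.slice(0, eq).trim();
      acc[key] = decodeURIComponent(pair.slice(eq + 1).trim());
      return acc;
    }, {});
}

// Example: parseCookies('uid=42; theme=dark') → { uid: '42', theme: 'dark' }
```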
Server entry file
import cookieUtils from 'cookie-parse';
import createApp from './index.js';
import createRouter from './router/router';
import createStore from './store/store';

export default context => {
  return new Promise((resolve, reject) => {
    const router = createRouter();
    const app = createApp({ router });
    const store = createStore({ context });
    const cookies = cookieUtils.parse(context.cookie || '');
    router.push(context.url);
    router.onReady(() => {
      const matchedComponents = router.getMatchedComponents();
      if (!matchedComponents.length) { return reject(new Error('404')); }
      // Run each matched component's asyncData hook to prefetch data into the store.
      Promise.all(matchedComponents.map(({ asyncData }) => asyncData && asyncData({
        store,
        route: router.currentRoute,
        cookies,
        context: { ...context }
      })))
        .then(() => {
          context.meta = app.$meta && app.$meta(); // vue-meta exposes $meta as a method
          context.state = store.state; // serialized into window.__INITIAL_STATE__
          resolve(app);
        })
        .catch(reject);
    }, () => { reject(new Error('500 Server Error')); });
  });
};
Client entry file
import cookieUtils from 'cookie-parse';
import createApp from './index.js';
import createRouter from './router/router';
import createStore from './store/store';

export const initClient = () => {
  const router = createRouter();
  const app = createApp({ router });
  const store = createStore({ context: {} });
  const cookies = cookieUtils.parse(document.cookie);
  router.onReady(() => {
    // Rehydrate the store with the state serialized by the server.
    if (window.__INITIAL_STATE__) { store.replaceState(window.__INITIAL_STATE__); }
    router.beforeResolve((to, from, next) => {
      const matched = router.getMatchedComponents(to);
      const prevMatched = router.getMatchedComponents(from);
      // Only run asyncData for components that were not already rendered on the server.
      let diffed = false;
      const activated = matched.filter((c, i) => diffed || (diffed = (prevMatched[i] !== c)));
      if (!activated.length) { return next(); }
      Promise.all(activated.map(c => c.asyncData && c.asyncData({
        store,
        route: to,
        cookies,
        context: {}
      })))
        .then(() => next())
        .catch(next);
    });
    app.$mount('#app');
  });
};
Adapting app.js for SSR
import Vue from 'vue';
import App from './App.vue';

export default function createApp({ router }) {
  const app = new Vue({
    router,
    render: h => h(App),
  });
  return app;
}
To avoid singleton state leaking between requests in a long-running Node process, a factory function creates a fresh Vue instance per request.
Automatic loading of router and store modules
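The idea behind the automatic loading is to derive each Vuex module's namespace from its file path. A small sketch of that derivation (the helper name and exact path shape are assumptions for illustration):

```javascript
// Derive module namespaces from a require.context key such as
// './user/js/store/profile.js' → ['user', 'profile'].
function keyToNamespaces(key) {
  const filePath = key
    .replace(/^\.\//, '')        // drop the leading './'
    .replace(/js\/store\//, '')  // drop the fixed 'js/store/' segment
    .replace(/\.js$/, '');       // drop the extension
  return filePath.split('/');
}

// Example: keyToNamespaces('./user/js/store/profile.js') → ['user', 'profile']
```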
// store implementation (simplified; normalizeModule, getStoreModule,
// mergeProperty and VUEX_PROPERTIES are helpers omitted here)
import Vuex from 'vuex';

const store = { modules: {} };
const storeContext = require.context('../module/', true, /\.(\/.+)\/js\/store(\/.+){1,}\.js/);

const getStore = (context) => {
  storeContext.keys().forEach(key => {
    const filePath = key.replace(/^(.\/)|(js\/store\/)|(.js)$/g, '');
    let moduleData = storeContext(key).default || storeContext(key);
    const namespaces = filePath.split('/');
    moduleData = normalizeModule(moduleData, filePath);
    const storeModule = getStoreModule(store, namespaces);
    VUEX_PROPERTIES.forEach(property => {
      mergeProperty(storeModule, moduleData[property], property);
    });
  });
};

export default ({ context }) => {
  getStore(context);
  return new Vuex.Store({ modules: { ...store.modules } });
};
Webpack configuration
Three webpack configs are used: webpack.base.conf.js (common), webpack.client.conf.js (client bundle) and webpack.server.conf.js (server bundle).
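The shared base config is not shown in the original; the following is a minimal sketch of what webpack.base.conf.js typically contains for a Vue project (the loader set and mode handling are assumptions, not the project's actual file):

```javascript
// webpack.base.conf.js — hypothetical shared config, for illustration only.
const VueLoaderPlugin = require('vue-loader/lib/plugin');

module.exports = {
  mode: process.env.NODE_ENV === 'production' ? 'production' : 'development',
  resolve: { extensions: ['.js', '.vue'] },
  module: {
    rules: [
      { test: /\.vue$/, loader: 'vue-loader' },
      { test: /\.js$/, loader: 'babel-loader', exclude: /node_modules/ }
    ]
  },
  plugins: [new VueLoaderPlugin()]
};
```

Both the client and server configs below merge onto this base, so loader rules stay in one place.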
// webpack.server.conf.js
const merge = require('webpack-merge');
const nodeExternals = require('webpack-node-externals');
const VueSSRServerPlugin = require('vue-server-renderer/server-plugin');
const path = require('path');
const baseConfig = require('./webpack.base.conf.js');
const resolve = src => path.resolve(__dirname, './', src);

module.exports = merge(baseConfig, {
  entry: { app: ['./src/entry-server.js'] },
  target: 'node',
  devtool: 'source-map',
  output: {
    filename: '[name].js',
    publicPath: '',
    path: resolve('./dist'),
    libraryTarget: 'commonjs2' // the server bundle is consumed by Node
  },
  externals: nodeExternals({}),
  plugins: [new VueSSRServerPlugin()]
});

// webpack.client.conf.js
const VueSSRClientPlugin = require('vue-server-renderer/client-plugin');
const merge = require('webpack-merge');
const webpack = require('webpack');
const baseConfig = require('./webpack.base.conf');
const UploadPlugin = require('@q/hj-webpack-upload');
const path = require('path');
const resolve = src => path.resolve(__dirname, './', src);
// cdn, Source, and UglifyJs are project-specific values imported elsewhere.

module.exports = merge(baseConfig, {
  entry: { app: ['./src/entry-client.js'] },
  target: 'web',
  output: { filename: '[name].js', path: resolve('./dist'), publicPath: '', libraryTarget: 'var' },
  plugins: [
    new VueSSRClientPlugin(),
    new webpack.HotModuleReplacementPlugin(),
    new UploadPlugin(cdn, {
      enableCache: true,
      logLocal: false,
      src: path.resolve(__dirname, '..', Source.output),
      dist: path.resolve(__dirname, '..', Source.output),
      beforeUpload: (content, location) => {
        if (path.extname(location) === '.js') {
          return UglifyJs.minify(content, { compress: true, toplevel: true }).code;
        }
        return content;
      },
      compilerHooks: 'done',
      onError(e) { console.log(e); }
    })
  ]
});
SSR server implementation (Koa middleware)
// ssr.js (excerpt)
async render(context) {
  const renderer = await this.getRenderer();
  return new Promise((resolve, reject) => {
    renderer.renderToString(context, (err, html) => {
      if (err) { reject(err); } else { resolve(html); }
    });
  });
}

getRenderer() {
  return new Promise((resolve, reject) => {
    const htmlPath = `${this.base}/index.html`;
    const bundlePath = `${this.base}/vue-ssr-server-bundle.json`;
    const clientPath = `${this.base}/vue-ssr-client-manifest.json`;
    fs.stat(htmlPath, statErr => {
      if (statErr) { return reject(statErr); }
      fs.readFile(htmlPath, 'utf-8', (err, template) => {
        if (err) { return reject(err); }
        const bundle = require(bundlePath);
        const clientManifest = require(clientPath);
        const renderer = createBundleRenderer(bundle, {
          template,
          clientManifest,
          runInNewContext: false, // reuse one context across requests for performance
          shouldPrefetch: () => false,
          shouldPreload: () => false,
        });
        resolve(renderer);
      });
    });
  });
}

// app.js
const Koa = require('koa');
const router = require('koa-router')();
const ssr = require('./ssr');

const server = new Koa();
server.use(router.routes()).use(router.allowedMethods());
server.use(ssr(server));
server.on('error', (err, ctx) => { console.error('server error', err, ctx); });
module.exports = server;
Deployment strategy
To address operational challenges, Docker is used for consistent environments across development, CI, and production, while Kubernetes handles container orchestration.
Local development uses Docker containers for dependency installation and hot‑reload development:
# Dependency installation
docker run -it \
-v $(pwd)/package.json:/opt/work/package.json \
-v $(pwd)/yarn.lock:/opt/work/yarn.lock \
-v $(pwd)/.yarnrc:/opt/work/.yarnrc \
-v mobile_node_modules:/opt/work/node_modules \
--workdir /opt/work \
--rm node:13-alpine \
yarn
# Development mode
docker run -it \
-v $(pwd)/:/opt/work/ \
-v mobile_node_modules:/opt/work/node_modules \
--expose 8081 -p 8081:8081 \
--expose 9229 -p 9229:9229 \
--expose 3003 -p 3003:3003 \
--workdir /opt/work \
node:13-alpine \
./node_modules/.bin/nodemon --inspect=0.0.0.0:9229 --watch server server/bin/www
The CI pipeline builds a Docker image for each commit, pushes it to a private registry, and triggers CD.
# Dockerfile (simplified)
FROM node:13-alpine
COPY package.json /opt/dependencies/package.json
COPY yarn.lock /opt/dependencies/yarn.lock
COPY .yarnrc /opt/dependencies/.yarnrc
RUN cd /opt/dependencies && yarn install --frozen-lockfile && yarn cache clean && mkdir /opt/work && ln -s /opt/dependencies/node_modules /opt/work/node_modules
COPY ci/docker/docker-entrypoint.sh /usr/bin/docker-entrypoint.sh
COPY ./ /opt/work/
RUN cd /opt/work && yarn build
WORKDIR /opt/work
EXPOSE 3003
ENV NODE_ENV production
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["node", "server/bin/www"]
CD uses kubectl to apply Kubernetes manifests. A typical deployment includes a Deployment with modest resource requests (256Mi memory, 250m CPU) and limits (512Mi, 500m) so that pods leaking memory are restarted quickly, a Service exposing container port 3003 as port 8081, and an Ingress routing traffic by host name.
# Deployment example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-mobile
  namespace: mobile
  labels:
    app: frontend-mobile
spec:
  selector:
    matchLabels:
      app: frontend-mobile
  replicas: 8
  template:
    metadata:
      name: frontend-mobile
      labels:
        app: frontend-mobile
    spec:
      containers:
      - name: frontend-mobile
        image: frontend-mobile:latest   # in practice, the CI-built image from the private registry
        ports:
        - containerPort: 3003
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /api/serverCheck
            port: 3003
          initialDelaySeconds: 15
          timeoutSeconds: 1
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-mobile
  namespace: mobile
  labels:
    app: frontend-mobile
spec:
  selector:
    app: frontend-mobile
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 3003
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-mobile
  namespace: mobile
  labels:
    app: frontend-mobile
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: local-deploy.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-mobile
            port:
              number: 8081
By using Docker at every stage, the environment stays consistent, while Kubernetes provides automated scaling, self-healing, and straightforward rollbacks, completing the end-to-end deployment workflow.
360 Tech Engineering
Official tech channel of 360, building the most professional technology aggregation platform for the brand.