Practical Guide to Unit Testing and Mocking in Go Backend Services
This guide explains how to embed robust unit tests in Go backend services, outlines the AIR testing principles and common pitfalls, and demonstrates three practical mocking techniques—database stubs, interface‑based gomock, and runtime monkey‑patching—plus CI integration and coverage reporting.
Unit testing is an effective means to ensure code quality, and it should be integrated into every stage of software development, especially in a DevOps workflow. This article describes a concrete transformation process using a real business application, highlighting common problems encountered during unit test implementation and focusing on several mock techniques for complex dependencies.
Background
Testing guarantees code quality, and unit tests are the smallest verification unit for program modules. Compared with manual testing, unit tests are automated, can be re-executed at will, and detect defects more efficiently. Writing unit tests during development, committing daily, and measuring pass rate and coverage can effectively ensure overall project quality.
Unit Test Principles
The "AIR" criteria from Alibaba's Java Development Manual define a good unit test as:
A (Automatic): fully automated and non‑interactive.
I (Independent): test cases must not call each other or depend on execution order.
R (Repeatable): tests should be repeatable in CI, shielding external dependencies via mocks.
Additional characteristics of good unit tests include being short, simple, fast, and following a strict structure (setup, execution, verification).
Common Pitfalls
No assertions – tests without assertions have no value.
Not integrated into CI – tests should run on every code change.
Too large granularity – tests should focus on a single purpose.
Complex dependencies – external systems (databases, downstream services) make isolation difficult without mocks.
The rest of this article presents three practical mocking solutions for a Go HTTP service built with the Gin framework.
Example Function Under Test
func ListRepoCrAggregateMetrics(c *gin.Context) {
	workNo := c.Query("work_no")
	if workNo == "" {
		c.JSON(http.StatusOK, errors.BuildRsp(errors.ErrorWarpper(errors.ErrParamError.ErrorCode, "work no miss"), nil))
		return
	}
	crCtx := code_review.NewCrCtx(c)
	rsp, err := crCtx.ListRepoCrAggregateMetrics(workNo)
	if err != nil {
		c.JSON(http.StatusOK, errors.BuildRsp(errors.ErrorWarpper(errors.ErrDbQueryError.ErrorCode, err.Error()), rsp))
		return
	}
	c.JSON(http.StatusOK, errors.BuildRsp(errors.ErrSuccess, rsp))
}
Test scenarios include:
Empty workNo should return an error.
Non‑empty workNo with successful downstream call returns aggregated data.
Non‑empty workNo with downstream failure returns an error.
Solution 1: Mock Downstream Storage (Not Recommended)
var db *gorm.DB

func getMetricsRepo() *model.MetricsRepo {
	repo := model.MetricsRepo{ProjectID: 2, RepoPath: "/", FileCount: 5, CodeLineCount: 76, OwnerWorkNo: "999999"}
	return &repo
}

func getTeam() *model.Teams { return &model.Teams{WorkNo: "999999"} }

func init() {
	var err error
	// Assign to the package-level db; `db, err := gorm.Open(...)` would
	// shadow it and leave the package variable nil.
	db, err = gorm.Open("sqlite3", "test.db")
	if err != nil {
		os.Exit(-1)
	}
	db.Debug()
	db.DropTableIfExists(model.MetricsRepo{})
	db.DropTableIfExists(model.Teams{})
	db.CreateTable(model.MetricsRepo{})
	db.CreateTable(model.Teams{})
	db.FirstOrCreate(getMetricsRepo())
	db.FirstOrCreate(getTeam())
}
type RepoMetrics struct {
	CodeReviewRate           float32 `json:"code_review_rate"`
	ThousandCommentCount     uint    `json:"thousand_comment_count"`
	SelfSubmitCodeReviewRate float32 `json:"self_submit_code_review_rate"`
}
// ... (test code omitted for brevity)
Solution 2: Interface-Based Mocking (Recommended)
Using gomock, define an interface for the downstream component and generate a mock implementation with the mockgen tool (e.g. `mockgen -source=cr_ctx.go -destination=mock/cr_ctx_mock.go -package=mock`; file and package names here are illustrative).
type Foo interface{ Bar(x int) int }

func SUT(f Foo) { /* ... */ }

ctrl := gomock.NewController(t)
defer ctrl.Finish()
m := NewMockFoo(ctrl)
m.EXPECT().Bar(gomock.Eq(99)).Return(101)
SUT(m)
Apply this pattern to the controller:
type RepoCrCRController struct {
	c     *gin.Context
	crCtx code_review.CrCtxInterface
}

func NewRepoCrCRController(ctx *gin.Context, cr code_review.CrCtxInterface) *RepoCrCRController {
	return &RepoCrCRController{c: ctx, crCtx: cr}
}

func (ctrl *RepoCrCRController) ListRepoCrAggregateMetrics(c *gin.Context) { /* same logic as before, using ctrl.crCtx */ }
Test example:
func TestListRepoCrAggregateMetrics(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()
	m := mock.NewMockCrCtxInterface(ctrl)
	resp := &code_review.RepoCrMetricsRsp{}
	m.EXPECT().ListRepoCrAggregateMetrics("999999").Return(resp, nil)

	w := httptest.NewRecorder()
	ctx, engine := gin.CreateTestContext(w)
	repoCtrl := NewRepoCrCRController(ctx, m)
	engine.GET("/api/test/code_review/repo", repoCtrl.ListRepoCrAggregateMetrics)
	req, _ := http.NewRequest("GET", "/api/test/code_review/repo?work_no=999999", nil)
	engine.ServeHTTP(w, req)

	// testify takes the expected value first, then the actual one.
	assert.Equal(t, 200, w.Code)
	var got gin.H
	json.NewDecoder(w.Body).Decode(&got)
	assert.EqualValues(t, 0, got["errorCode"])
}
Solution 3: Monkey-Patch Mocking (Recommended)
When the code cannot be refactored to use interfaces, the monkey library (github.com/bouk/monkey) can patch instance methods at runtime. Because monkey rewrites function addresses, tests must be run with inlining disabled (`go test -gcflags=all=-l`), and patches should be undone after each test.
func TestListRepoCrAggregateMetrics(t *testing.T) {
	w := httptest.NewRecorder()
	_, engine := gin.CreateTestContext(w)
	engine.GET("/api/test/code_review/repo", ListRepoCrAggregateMetrics)

	var crCtx *code_review.CrCtx
	repoRet := code_review.RepoCrMetricsRsp{}
	monkey.PatchInstanceMethod(reflect.TypeOf(crCtx), "ListRepoCrAggregateMetrics",
		func(ctx *code_review.CrCtx, workNo string) (*code_review.RepoCrMetricsRsp, error) {
			if workNo == "999999" {
				repoRet.Total = 0
				repoRet.RepoCodeReview = []*code_review.RepoCodeReview{}
			}
			return &repoRet, nil
		})
	// Undo the patch so other tests see the real method.
	defer monkey.UnpatchAll()

	req, _ := http.NewRequest("GET", "/api/test/code_review/repo?work_no=999999", nil)
	engine.ServeHTTP(w, req)

	assert.Equal(t, 200, w.Code)
	var v map[string]code_review.RepoCrMetricsRsp
	json.Unmarshal(w.Body.Bytes(), &v)
	assert.EqualValues(t, 0, v["data"].Total)
	assert.Len(t, v["data"].RepoCodeReview, 0)
}
Database Layer Mocking
go-sqlmock mocks at the database/sql/driver level, enabling table-driven tests of the data layer without a real database.
// The article's full test is omitted for brevity – it demonstrates setting up sqlmock, defining expected queries, and asserting results.
Continuous Integration
These tests are integrated into an internal CI platform (Aone, similar to Travis CI), which runs coverage collection, converts coverage reports with gocov / gocov-xml, and generates incremental coverage reports with diff-cover.
# Execute test command
mkdir -p $sourcepath/cover
RDSC_CONF=$sourcepath/config/config.yaml go test -v -cover=true -coverprofile=$sourcepath/cover/cover.cover ./...
ret=$?; if [[ $ret -ne 0 && $ret -ne 1 ]]; then exit $ret; fi

# Generate incremental coverage report
cp $sourcepath/cover/cover.cover /root/cover/cover.cover
pip install diff-cover==2.6.1
gocov convert cover/cover.cover | gocov-xml > coverage.xml
cd $sourcepath
diff-cover $sourcepath/coverage.xml --compare-branch=remotes/origin/develop > diff.out
References are provided for further reading on unit testing architecture, mocking frameworks, SQL driver interfaces, table-driven tests, and CI tools.
Amap Tech
Official Amap technology account showcasing all of Amap's technical innovations.