Personal Reflections on Algorithms, Data Structures, and Complexity
This article shares the author’s personal insights on a book about algorithms, data structures, and complexity, explaining core concepts such as algorithm definition, common data structures, time and space complexity, and offering intuitive examples to aid understanding.
Chapter 1: Algorithm Overview
1. Algorithms
What is an algorithm? The book uses the example of summing 1+2+3+…+10000 and asks how you would compute it—by adding one by one or by using a trick. The author interprets an algorithm simply as a method of calculation.
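The two ways of computing that sum can be sketched in a few lines. This is a minimal illustration, not code from the book; `sum_loop` and `sum_formula` are names chosen here:

```python
def sum_loop(n):
    # Add one number at a time: n additions in total.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # Gauss's trick: pair 1 with n, 2 with n-1, ... giving n*(n+1)/2,
    # a single arithmetic expression regardless of n.
    return n * (n + 1) // 2

print(sum_loop(10000))     # 50005000
print(sum_formula(10000))  # 50005000
```

Both produce the same answer, but the formula does constant work while the loop's work grows with n, which is exactly the difference the book is pointing at.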
In computer programming, algorithms are applied to operations such as computation, searching, sorting, optimal decision‑making, routing, and interview problems. The key is not just getting the correct result, but doing it faster and more concisely.
Efficiency matters: two solutions may produce the same answer, but the one that uses fewer steps or less time is considered better.
2. Data Structures
Data is what we store; a structure is how we store it. Efficient storage and retrieval require organized structures rather than random placement.
Common data structures include linear structures (arrays, linked lists, stacks, queues), trees, graphs, and many others such as hash tables and bitmaps.
Linear structures are simple sequences; trees have hierarchical branches; graphs capture relationships between items.
3. Complexity
Both algorithms and data structures are evaluated by their efficiency—how fast they run and how much memory they consume.
3.1 Time Complexity
Time complexity is expressed with Big‑O notation (e.g., O(n), O(log n), O(2ⁿ)). It abstracts the growth rate of an algorithm’s running time as the input size increases.
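To make the growth rates concrete, one common comparison (an illustration added here, not taken from the book) is counting the steps of a linear scan versus a binary search over the same sorted data:

```python
def linear_search_steps(data, target):
    # O(n): in the worst case, every element is inspected.
    steps = 0
    for value in data:
        steps += 1
        if value == target:
            break
    return steps

def binary_search_steps(data, target):
    # O(log n): each comparison halves the remaining range.
    steps, lo, hi = 0, 0, len(data) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if data[mid] == target:
            break
        elif data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1_000_000))
print(linear_search_steps(data, 999_999))  # 1,000,000 steps: O(n)
print(binary_search_steps(data, 999_999))  # about 20 steps: O(log n)
```

For a million elements, O(n) means up to a million comparisons while O(log n) needs roughly twenty, and the gap only widens as the input grows.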
Example: eating a 10 cm loaf of bread at 1 cm per 3 minutes takes 30 minutes; for a loaf of length n, the time is 3n minutes, giving a linear time complexity of O(n).
The author emphasizes that even with faster hardware, understanding time complexity remains important because algorithmic growth can dominate performance.
3.2 Space Complexity
Space complexity, also expressed with Big‑O, measures the amount of extra memory an algorithm requires during execution.
Example: finding duplicate numbers in a list can be done by comparing every pair of elements (O(n²) time, but almost no extra space) or by using a dictionary to record occurrences, which cuts the time to O(n) at the cost of O(n) extra space—a classic space-for-time trade-off.
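The two approaches can be sketched as follows; `find_duplicates_pairwise` and `find_duplicates_dict` are illustrative names chosen here, not from the book:

```python
def find_duplicates_pairwise(nums):
    # Compare every pair: O(n^2) time, O(1) extra space (besides the output).
    dupes = []
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] == nums[j] and nums[i] not in dupes:
                dupes.append(nums[i])
    return dupes

def find_duplicates_dict(nums):
    # Record occurrence counts in a dict: O(n) time, O(n) extra space.
    seen = {}
    dupes = []
    for x in nums:
        if seen.get(x) == 1:  # second sighting -> it is a duplicate
            dupes.append(x)
        seen[x] = seen.get(x, 0) + 1
    return dupes

print(find_duplicates_pairwise([1, 2, 3, 2, 4, 1]))
print(find_duplicates_dict([1, 2, 3, 2, 4, 1]))
```

Both report 1 and 2 as duplicates; the dictionary version does one pass instead of n passes, paying for that speed with extra memory.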
Space complexity categories include constant, linear, quadratic, and recursive space usage.
Chapter 2: Basics of Data Structures
1. Arrays
Arrays are ordered collections where each element has an index. Operations such as insertion, deletion, and lookup have different time and space costs.
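A Python list behaves like a dynamic array, so those differing costs can be shown directly (a small sketch added here for illustration):

```python
arr = [10, 20, 30, 40]

# Lookup by index is O(1): the position is computed directly from the index.
assert arr[2] == 30

# Appending at the end is amortized O(1).
arr.append(50)

# Inserting or deleting in the middle is O(n):
# every element after the insertion point must shift by one slot.
arr.insert(1, 15)   # arr is now [10, 15, 20, 30, 40, 50]
arr.pop(1)          # back to [10, 20, 30, 40, 50]
print(arr)
```

The shifting cost on insert and delete is exactly what linked lists, covered next, are designed to avoid.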
2. Linked Lists
Linked lists are non‑contiguous structures where each node points to the next (and optionally the previous) node, allowing flexible insertion and deletion.
2.1 Singly Linked List
Structure: A → B → C → D → NULL. Each node stores data and a pointer to the next node.
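The A → B → C → D → NULL structure can be built in a few lines. This is a minimal sketch; `Node`, `build`, and `traverse` are names chosen here:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None  # Python's None plays the role of NULL

def build(values):
    # Build the chain back-to-front so each new node points at the rest.
    head = None
    for v in reversed(values):
        node = Node(v)
        node.next = head
        head = node
    return head

def traverse(head):
    # Follow .next pointers until NULL is reached.
    out = []
    node = head
    while node is not None:
        out.append(node.data)
        node = node.next
    return out

head = build(["A", "B", "C", "D"])
print(" -> ".join(traverse(head)) + " -> NULL")  # A -> B -> C -> D -> NULL
```

Note that traversal only ever moves forward: with a single `next` pointer there is no way back, which motivates the doubly linked list below.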
2.2 Doubly Linked List
Structure: NULL ← A ↔ B ↔ C ↔ D → NULL. Each node has pointers to both the next and previous nodes, enabling traversal in both directions.
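Adding a `prev` pointer to each node makes both traversal directions possible. A minimal sketch (names such as `DNode` and `build_doubly` are chosen here for illustration):

```python
class DNode:
    def __init__(self, data):
        self.data = data
        self.prev = None  # pointer to the previous node
        self.next = None  # pointer to the next node

def build_doubly(values):
    # Append each node at the tail, wiring both directions as we go.
    head = tail = None
    for v in values:
        node = DNode(v)
        if head is None:
            head = tail = node
        else:
            tail.next = node
            node.prev = tail
            tail = node
    return head, tail

head, tail = build_doubly(["A", "B", "C", "D"])

forward = []                 # walk head -> tail via .next
node = head
while node is not None:
    forward.append(node.data)
    node = node.next

backward = []                # walk tail -> head via .prev
node = tail
while node is not None:
    backward.append(node.data)
    node = node.prev

print(forward)   # ['A', 'B', 'C', 'D']
print(backward)  # ['D', 'C', 'B', 'A']
```

The price of bidirectional traversal is one extra pointer per node and slightly more bookkeeping on every insertion and deletion.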
FunTester, a Tencent Cloud annual author, Boss Direct-Hire author, and GDevOps partner media.