Overview
Linked lists are usually the next basic data structure that algorithm learners study after mastering arrays.
The concept of a linked list is very simple: any "linear structure connected by pointers" can be called a linked list. Linked list problems typically range from easy to medium difficulty. As with arrays, the term "linked list" usually refers to the data structure provided in the problem rather than a specific method for solving it.
However, without a solid foundation, it's easy to make mistakes when writing linked lists by hand during interviews. This isn’t an exaggeration, because most of the time, platforms like LeetCode provide pre-defined linked list node structures for automated scoring, so learners just follow the template when solving problems. As a result, many graduates with significant problem-solving experience may realize during an interview that they have never actually implemented a linked list from scratch.
Linked List Basics
As mentioned above, a solid understanding of linked lists is important, but implementing them is not difficult. Once you have mastered "initialization, insertion, deletion, access, and search," linked lists hold no more surprises.
1. Initializing a Linked List
The building block of a linked list is the node object. Each node contains two pieces of data: the "value" of the node and a "reference" to the next node. Based on this definition, we can write the following code:
```python
class Node:
    def __init__(self, val: int):
        self.val: int = val            # value of the node
        self.next: Node | None = None  # reference to the next node
```
When initializing a linked list, special attention should be given to the head node and the tail node. The head node is the entry point to the entire list, so its pointer should not be modified carelessly during problem-solving. The tail node's pointer must point to null (`None` in Python, `null` in JavaScript, etc.); otherwise, a traversal would not know where to stop.
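Putting the definition into practice, a list such as 1 → 3 → 2 can be initialized by creating each node and then wiring the `next` pointers (a minimal sketch; the variable names are illustrative):

```python
class Node:
    def __init__(self, val: int):
        self.val: int = val              # value of the node
        self.next: "Node | None" = None  # reference to the next node

# Initialize the list 1 -> 3 -> 2: create the nodes, then link them
n0 = Node(1)  # head node: the entry point of the list
n1 = Node(3)
n2 = Node(2)
n0.next = n1
n1.next = n2  # n2 is the tail; its next remains None
```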

2. Deleting a Node
Unlike arrays, where deleting an element requires shifting the remaining elements to fill the gap, deleting a node from a linked list only requires redirecting the `next` pointer of its predecessor to the node after the one being removed. The detached node is no longer part of the list and can be destroyed or repurposed.
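Given a reference to the predecessor, deletion takes O(1) time. A minimal sketch (the helper name `remove_after` is illustrative):

```python
class Node:
    def __init__(self, val: int):
        self.val = val
        self.next = None

def remove_after(prev: Node) -> None:
    """Delete the node immediately after `prev` in O(1) time."""
    if prev.next is None:
        return  # nothing to delete
    # Redirect prev's pointer past the node being removed;
    # the detached node becomes unreachable
    prev.next = prev.next.next

# Example: 1 -> 3 -> 2, delete the node holding 3
n0, n1, n2 = Node(1), Node(3), Node(2)
n0.next, n1.next = n1, n2
remove_after(n0)  # list is now 1 -> 2
```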

3. Inserting a Node
Inserting a node follows a similar logic to deletion. Adding an element to a linked list only requires updating the pointers of the adjacent nodes: the previous node's `next` points to the new node, and the new node's `next` points to the subsequent node.
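The two pointer updates just described can be sketched as follows (the helper name `insert_after` is illustrative; note the order of the assignments, so the old successor is not lost):

```python
class Node:
    def __init__(self, val: int):
        self.val = val
        self.next = None

def insert_after(prev: Node, new: Node) -> None:
    """Insert `new` immediately after `prev` in O(1) time."""
    new.next = prev.next  # new node points to prev's old successor
    prev.next = new       # prev now points to the new node

# Example: 1 -> 2, insert 3 after the head, giving 1 -> 3 -> 2
n0, n2 = Node(1), Node(2)
n0.next = n2
insert_after(n0, Node(3))
```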

4. Accessing a Node
It must be acknowledged that accessing nodes in a linked list is less efficient. Unlike arrays, where any element can be reached in O(1) time via its index, a linked list must be traversed sequentially from the head node, one node at a time, until the target node is reached.
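The O(N) access pattern looks like this (a sketch; the helper name `access` is illustrative):

```python
class Node:
    def __init__(self, val: int):
        self.val = val
        self.next = None

def access(head: "Node | None", index: int) -> "Node | None":
    """Return the node at position `index`, or None if out of range. O(N)."""
    cur = head
    for _ in range(index):
        if cur is None:
            return None  # walked off the end of the list
        cur = cur.next
    return cur

# Build 1 -> 3 -> 2, then access index 2
n0, n1, n2 = Node(1), Node(3), Node(2)
n0.next, n1.next = n1, n2
```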
5. Searching for a Node
Searching for a node in a linked list works the same way as accessing one: traverse the list from the head and compare each node's value against the target. This is a linear search, so it also takes O(N) time.
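The linear search can be sketched as follows (the helper name `find` is illustrative; it returns the index of the first match, or -1):

```python
class Node:
    def __init__(self, val: int):
        self.val = val
        self.next = None

def find(head: "Node | None", target: int) -> int:
    """Linear search: return the index of the first node whose
    value equals `target`, or -1 if no node matches. O(N)."""
    index = 0
    cur = head
    while cur is not None:
        if cur.val == target:
            return index
        cur = cur.next
        index += 1
    return -1

# Build 1 -> 3 -> 2
n0, n1, n2 = Node(1), Node(3), Node(2)
n0.next, n1.next = n1, n2
```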
Linked List vs. Array
For many algorithm learners and interviewees, the sole criterion for using a linked list is whether the problem explicitly requires it. From a learner's perspective, this is understandable. However, as a software engineer designing algorithms from scratch for a given requirement, how should one choose a basic data structure?
With this in mind, some may start to question the existence of linked lists: Arrays are popular because their linear structure aligns well with computer memory architecture, so why create linked lists, which are "discrete" but still "linear" data structures?
In my opinion, reaching this level of questioning marks the true beginning of understanding the essence of linked lists. We can explore the significance of linked lists by comparing them with arrays.
The Survival Edge of Linked Lists: Uniting Fragmented Memory
First, let’s consider the problem from the perspective of computer systems. Memory space is a shared resource for all programs, and in a complex system environment, free memory may be scattered throughout, even with the most advanced and intelligent memory management systems.
As discussed in the first chapter on arrays, memory allocated for an array must be contiguous. When dealing with large arrays, during a resize operation that requests double the space, the system might not have sufficient contiguous space available. This is where the flexibility of linked lists shines: by using pointers to effectively link fragmented memory, linked lists can form a linear data structure without needing contiguous memory.

It’s undeniable that linked list nodes, in addition to storing values, need to store an additional reference (pointer). Thus, with the same amount of data, linked lists occupy more memory than arrays. This leads to our first tradeoff: "Total Memory Usage vs. Memory Flexibility."
Performance Comparison: Array vs. Linked List
Both arrays and linked lists aim to implement a "linear structure," yet they use seemingly opposite storage strategies to achieve this goal, resulting in contrasting performance characteristics.
| | Array | Linked List |
|---|---|---|
| Storage method | Contiguous memory space | Scattered memory space |
| Capacity | Fixed length (unless resized) | Flexibly expandable, no fixed upper limit |
| Memory efficiency | Low memory usage, high cache hit rate | Higher memory usage for the same data |
| Access element (by index) | O(1) | O(N) |
| Search element | O(N) | O(N) |
| Add element | O(N) | O(1) |
| Delete element | O(N) | O(1) |
| Traverse all elements | O(N) | O(N) |
Clearly, due to their scattered structure, linked lists are far less efficient than arrays for element access. However, when inserting and deleting elements, linked lists do not require the manipulation of an entire bulky memory block, as arrays do, to maintain linearity. This leads to our second tradeoff: "Insertion/Deletion Performance vs. Search Performance."
Additional Insight: Physical-Level Tradeoff
For those familiar with database concepts, this isn’t new. In computer architecture, cache is much closer to the CPU, making its read/write efficiency significantly higher than that of RAM, and far higher than that of hard drives. The tradeoff is that cache has a very small capacity and is expensive, so storing large amounts of data in cache is not feasible. If data being read from a data structure happens to be in the cache, the efficiency is astonishingly high, known as a "cache hit".
This brings us to our third tradeoff: "Cache Hit Rate vs. Space Utilization."
Cache considerations are seldom mentioned in algorithm performance assessments, which I believe is reasonable. Theoretical assessments using algorithmic upper bounds for time and space provide a fairer evaluation of intrinsic performance. However, it cannot be denied that "(in real-world applications) physical structure significantly impacts memory and cache usage efficiency," and thus the overall performance of the algorithm.
Arrays and linked lists differ in their cache utilization efficiency:
- Space Usage: Linked list nodes occupy more space than array elements, resulting in less effective data storage per cache line.
- Cache Lines: Linked list data is scattered across memory, and since cache loads data in "lines," the proportion of irrelevant data loaded is higher.
- Prefetching Mechanism: Array data access patterns are more "predictable," making it easier for the system to prefetch upcoming data.
- Spatial Locality: Arrays are stored in contiguous memory, so data loaded nearby is more likely to be accessed soon.
Overall, from the perspective of "physical structure," arrays tend to have a higher "cache hit rate," which often results in better performance in practical applications. However, this does not mean that arrays are always better than linked lists. Most of the time, the choice between arrays and linked lists depends on the first two tradeoffs and specific application requirements (examples will be discussed in the "Stack" section).
Types of Linked Lists
Linked lists can generally be divided into three types:
- Singly Linked List
- Doubly Linked List
- Circular Linked List
Each type is suited for solving different types of problems.
Singly Linked List
This is the commonly referenced standard linked list and the earliest version. A node in a singly linked list contains two pieces of data: a value and a reference to the next node. The first node is called the head, and the last node is called the tail, which points to null (`None`).

Doubly Linked List
Compared to a singly linked list, a doubly linked list keeps track of references in both directions. A node in a doubly linked list includes references (pointers) to both its successor (next node) and predecessor (previous node). While more flexible than a singly linked list and allowing traversal in both directions, it also requires more memory space.
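A doubly linked list node simply adds a second reference. A minimal sketch (the class name `DNode` is illustrative):

```python
class DNode:
    """Doubly linked list node."""
    def __init__(self, val: int):
        self.val = val
        self.next: "DNode | None" = None  # reference to the successor
        self.prev: "DNode | None" = None  # reference to the predecessor

# Link two nodes in both directions
a, b = DNode(1), DNode(2)
a.next, b.prev = b, a
```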

Circular Linked List
If the tail node of a singly linked list points to the head node (forming a loop), it becomes a circular linked list. In a circular linked list, any node can be considered the head node.
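Closing the loop is a single pointer assignment; the important consequence is that traversal can no longer stop at null and must be bounded some other way, e.g. by a step count (a toy sketch):

```python
class Node:
    def __init__(self, val: int):
        self.val = val
        self.next = None

# Build 1 -> 2 -> 3, then close the loop: the tail points back to the head
n0, n1, n2 = Node(1), Node(2), Node(3)
n0.next, n1.next = n1, n2
n2.next = n0  # this single assignment makes the list circular

# Traversal can no longer stop at None; bound it by a step count instead
cur, visited = n0, []
for _ in range(6):  # walk around the cycle twice
    visited.append(cur.val)
    cur = cur.next
# visited == [1, 2, 3, 1, 2, 3]
```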

Typical Applications of Linked Lists
Different types of linked lists are designed to solve different problems. Below are some common applications.
Singly Linked List
- "Stack and Queue": When insertions and deletions occur at one end of the linked list, it behaves as a last-in-first-out (LIFO) structure, like a stack. When insertions occur at one end and deletions at the other, it behaves as a first-in-first-out (FIFO) structure, like a queue.
- "Hash Table": Separate chaining is one of the main solutions for handling hash collisions, where all colliding elements are placed in a linked list.
- "Graph": The adjacency list is a common way to represent a graph, where each vertex of the graph is associated with a linked list, and each element in the list represents other vertices connected to that vertex.
- "React - Hook": In the React source code, the core feature of Hooks is implemented using a linked list, likely because linked lists enable efficient insertion and deletion operations without concern for element lookup.
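To make the stack idea from the list above concrete, here is a minimal LIFO stack backed by a singly linked list; all operations happen at the head, so push and pop are O(1) (a sketch; the class name `LinkedStack` is illustrative):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

class LinkedStack:
    """LIFO stack backed by a singly linked list."""
    def __init__(self):
        self.head = None  # top of the stack

    def push(self, val) -> None:
        node = Node(val)
        node.next = self.head  # new node sits on top of the old head
        self.head = node

    def pop(self):
        if self.head is None:
            raise IndexError("pop from empty stack")
        val = self.head.val
        self.head = self.head.next  # old top becomes unreachable
        return val

s = LinkedStack()
s.push(1)
s.push(2)
s.push(3)
```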
Doubly Linked List
Useful in scenarios where quick access to both the previous and next elements is needed.
- "Advanced Data Structures": In structures like Red-Black Trees and B-Trees, accessing the parent node is often necessary. This can be implemented by storing a reference to the parent node within each node, similar to a doubly linked list.
- "Browser History": In web browsers, when users click the forward or backward buttons, the browser needs to know the previously visited and next pages. The bidirectional nature of a doubly linked list makes these operations simple.
- "LRU Algorithm": In the Least Recently Used (LRU) cache eviction algorithm, quick access to the least recently used data and support for fast insertion and deletion are needed. A doubly linked list is ideal for this purpose.
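As a quick illustration of why a doubly linked list fits LRU: CPython's `collections.OrderedDict` is implemented on top of a doubly linked list, which is what makes its `move_to_end` and `popitem` operations O(1). A minimal LRU cache sketch built on it:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache; OrderedDict's internal doubly linked
    list gives O(1) reordering and eviction."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data: "OrderedDict[int, int]" = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.data:
            return -1
        self.data.move_to_end(key)  # mark key as most recently used
        return self.data[key]

    def put(self, key: int, value: int) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
cache.get(1)     # touches key 1, so key 2 is now least recently used
cache.put(3, 3)  # exceeds capacity: evicts key 2
```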
Circular Linked List
Suitable for scenarios involving cyclic operations, such as resource scheduling in operating systems.
- "Round-Robin Scheduling Algorithm": In operating systems, round-robin scheduling is a common CPU scheduling algorithm that cycles through a set of processes. Each process is given a time slice, and when the time slice is up, the CPU switches to the next process. This cyclical operation can be implemented using a circular linked list.
- "Data Buffers": Circular linked lists may also be used in the implementation of certain data buffers. For instance, in audio or video players, data streams may be divided into buffer blocks placed in a circular linked list to enable seamless playback.
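The round-robin idea above can be sketched with a circular list of process names: the scheduler just keeps following `next` and never needs to reset to a head (a toy simulation; the names and the number of time slices are illustrative):

```python
class Proc:
    def __init__(self, name: str):
        self.name = name
        self.next = None

# Build a circular list of processes: A -> B -> C -> A
a, b, c = Proc("A"), Proc("B"), Proc("C")
a.next, b.next, c.next = b, c, a

# Hand out 7 time slices; the circular structure cycles automatically
schedule = []
cur = a
for _ in range(7):
    schedule.append(cur.name)  # current process runs its time slice
    cur = cur.next             # switch to the next process
# schedule == ["A", "B", "C", "A", "B", "C", "A"]
```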
It’s evident that many applications where linked lists are used, especially those involving "circular linked lists," would be cumbersome to implement using arrays. Therefore, when choosing a data structure, the first consideration should be the "specific application scenario and requirements," followed by performance tradeoffs if multiple data structure options are available.
Problem-Solving Experience Summary
| ID | Name | Difficulty | Linked List Techniques |
|---|---|---|---|
| 203 | Remove Linked List Elements | Easy | "Dummy Head Node" + "In-place Operation" |
| 707 | Design Linked List | Medium | "Dummy Head Node" + "In-place Operation" |
| 206 | Reverse Linked List | Easy | "Dummy Head Node" + ("Recursion" or "In-place Operation" + "Two Pointers") |
| 24 | Swap Nodes in Pairs | Medium | "Dummy Head Node" + "In-place Operation" + "Simulation (can be complex)" |
| 19 | Remove Nth Node From End of List | Medium | "Dummy Head Node" + "Two Pointers" |
| 160 | Intersection of Two Linked Lists | Easy | "Dummy Head Node" + ("Two Pointers" or "Hash Set") |
| 142 | Linked List Cycle II | Medium | "Dummy Head Node" + ("Two Pointers" or "Hash Table") + "Mathematics" (interviewers may ask about the underlying math) |
Technique: "Dummy Head Node"
One major issue with linked lists is that to operate on the current node, you often need to access the previous node. This leads to the problematic situation of the head node, as it has no previous node.
Handling cases involving the head node separately each time can be cumbersome, so using the dummy head node technique can solve this problem.
With a dummy head node, the meaningful part of the list can be processed uniformly.
```python
# Template for the dummy head node method
#
# class ListNode:
#     def __init__(self, val=0, next=None):
#         self.val = val
#         self.next = next

class Solution:
    def removeElements(self, head: Optional[ListNode], val: int) -> Optional[ListNode]:
        # Create a dummy head node to simplify deletion at the head
        dummy_head = ListNode(next=head)
        # Traverse the list and remove nodes whose value equals `val`
        current = dummy_head
        while current.next:
            if current.next.val == val:
                current.next = current.next.next
            else:
                current = current.next
        return dummy_head.next
```