Published: Dec 28, 2025. License: GPL-3.0.
image:https://godoc.org/gitlab.com/jhinrichsen/adventofcode2018?status.svg["godoc", link="https://godoc.org/gitlab.com/jhinrichsen/adventofcode2018"]
image:https://goreportcard.com/badge/gitlab.com/jhinrichsen/adventofcode2018["Go report card", link="https://goreportcard.com/report/gitlab.com/jhinrichsen/adventofcode2018"]
image:https://gitlab.com/jhinrichsen/adventofcode2018/badges/master/pipeline.svg[link="https://gitlab.com/jhinrichsen/adventofcode2018/-/commits/master",title="pipeline status"]
image:https://gitlab.com/jhinrichsen/adventofcode2018/badges/master/coverage.svg[link="https://gitlab.com/jhinrichsen/adventofcode2018/-/commits/master",title="coverage report"]

= Advent Of Code (AOC) 2018
:toc:

My take on AoC 2018, documenting my errors.

Usually, I don't go for implementation speed, because that does not resonate
well with me.

My number one priority is getting it right the first time, and my second
highest priority is runtime performance.
Therefore, I always provide complete coverage via unit tests.
Go just makes this _so_ easy.

This document does not include solutions, so there is no harm in reading it at
any point in time.
Solutions are hardcoded into unit tests, so you won't see any as long as you
avoid looking at `*_test.go` files.

== Environment
- Go 1.17
- vim, vim-go, gopls, fed by an HHKB
- VisualStudio Code for complex debugging scenarios
- Fedora 34
- AMD Ryzen 5 3400G on a Gigabyte B450

== Updates

=== Update 1 (2025)
- Go 1.24
- vim, vim-go, gopls, fed by an HHKB
- Windsurf + GPT-5 low reasoning
- Fedora 42
- Framework Laptop 16"

=== Update 2 (Nov 2025)
- Go 1.25
- Claude Code
- Claude Code Web (Research preview)
- Fedora 43
- Intel i7 14700

The package name is `adventofcode2018`.

[options="header"]
|===
| Day | Tries
|  1  | (no history available. I just do not remember, and haven't taken notes)
|  2  |
|  3  |
|  4  |
|  5  |
|  6  |
|  7  |
|  8  | 1
|  9  |
| 10  |
| 11  | 2
| 12  |
| 13  |
| 14  |
| 15  |
| 16  |
| 17  |
| 18  |
| 19  |
| 20  |
| 21  |
| 22  |
| 23  |
| 24  |
| 25  |
|===

Numbers in (parentheses) indicate tries so far (part 2 not yet solved).

== Day 01: Chronal Calibration

=== Performance Optimization

Day 01 was optimized by removing the separate parser and parsing directly from `[]byte` inline within the solver. This eliminates intermediate allocations and reduces function call overhead.

==== Baseline (b0)
Initial implementation used `NewDay01([]string)` parser that converted strings to integers using `strconv.Atoi()`, then passed the resulting `[]int` slice to the solver.

==== Optimization (b1)
Removed the parser entirely. The solver now takes `[]byte` directly and parses numbers inline by iterating through bytes, handling `+`/`-` signs, and building integers digit by digit.

==== Results

----
goos: linux
goarch: amd64
pkg: gitlab.com/jhinrichsen/adventofcode2018
cpu: Intel(R) Xeon(R) CPU @ 2.60GHz
              │ day01_bench_b0.txt │       day01_bench_b1.txt            │
              │       sec/op       │   sec/op     vs base                │
Day01Part1-16          8.703µ ± 4%   5.292µ ± 1%  -39.20% (p=0.000 n=10)
Day01Part2-16          14.97m ± 6%   15.29m ± 3%        ~ (p=0.280 n=10)

              │ day01_bench_b0.txt │          day01_bench_b1.txt           │
              │        B/op        │     B/op      vs base                 │
Day01Part1-16         8.000Ki ± 0%   0.000Ki ± 0%  -100.00% (p=0.000 n=10)
Day01Part2-16         9.028Mi ± 0%   9.020Mi ± 0%    -0.09% (p=0.000 n=10)

              │ day01_bench_b0.txt │         day01_bench_b1.txt            │
              │     allocs/op      │  allocs/op   vs base                  │
Day01Part1-16           1.000 ± 0%    0.000 ± 0%  -100.00% (p=0.000 n=10)
Day01Part2-16          1.044k ± 0%   1.043k ± 0%    -0.10% (p=0.000 n=10)
----

**Key Improvements:**

* **Part 1**: 39.20% faster (8.7µs → 5.3µs)
* **Part 1**: Zero allocations (was 1 allocation, 8KB)
* **Part 2**: Similar performance, one fewer allocation (1,044 → 1,043)

The optimization is particularly effective for Part 1, achieving zero allocations by eliminating the intermediate string-to-int conversion step. Part 2's performance remains similar because it's dominated by the map operations for detecting duplicate frequencies.

== Day 02: Inventory Management System
== Day 03: No Matter How You Slice It
== Day 04: Repose Record
== Day 05: Alchemical Reduction

=== Performance Optimization

Day 05 was optimized by choosing the reslicing approach over the hole-marking approach for polymer reactions.

==== Baseline (b0)
Initial implementation had two competing algorithms: `reactByCreatingHoles` marked reacted units with '.' and removed them in a second pass (1 allocation, 56KB), while `reactByReslicing` used slice reslicing to remove reacted units immediately (0 allocations).

==== Optimization (b1)
Selected the faster reslicing approach as the single implementation:

- Use `append(polymer[0:i], polymer[i+2:]...)` to remove reacted pairs in-place
- Backtrack by 2 positions after each reaction to check newly adjacent units
- Zero allocations by reusing the input slice

==== Results

----
Comparing reactByCreatingHoles (b0) vs reactByReslicing (b1):
- reactByCreatingHoles: 295.4µs, 56KB, 1 alloc
- reactByReslicing: 59.96µs, 0B, 0 allocs

Improvement: 79.7% faster, zero allocations
----

**Key Improvements:**

* **Part 1**: 79.7% faster (295µs → 60µs)
* **Part 1**: Zero allocations (was 1 allocation, 56KB)

The reslicing approach is faster because it avoids the two-pass overhead (mark holes, then compact). By modifying the slice in-place and backtracking, we eliminate allocations entirely. The hole-marking implementation has been removed.

== Day 06: Chronal Coordinates

=== Performance Optimization

Day 06 was optimized by inline byte parsing and eliminating repeated allocations in the distance calculation loop.

==== Baseline (b0)
Initial implementation used `strings.Split` for parsing and allocated new slices for every grid cell in the `minDist` function. With ~160,000 grid cells and 2 allocations per cell, this resulted in over 250,000 allocations.

==== Optimization (b1)
Complete rewrite following the blueprint pattern:

- Inline byte parsing without string operations
- Single flat grid array instead of 2D slice
- Reusable distance buffer eliminating per-cell allocations
- Optimized manhattan distance calculations inline
- Part 2 achieved zero allocations

==== Results

----
goos: linux
goarch: amd64
pkg: gitlab.com/jhinrichsen/adventofcode2018
cpu: Intel(R) Xeon(R) CPU @ 2.60GHz
              │ day06_bench_b0.txt │          day06_bench_b1.txt          │
              │       sec/op       │    sec/op     vs base                │
Day06Part1-16          45.66m ± 5%    17.31m ± 5%  -62.10% (p=0.000 n=10)
Day06Part2-16          9.685m ± 4%   11.010m ± 3%  +13.68% (p=0.000 n=10)

              │ day06_bench_b0.txt │            day06_bench_b1.txt             │
              │        B/op        │     B/op      vs base                     │
Day06Part1-16       53165.5Ki ± 0%   984.9Ki ± 0%   -98.15% (p=0.000 n=10)
Day06Part2-16         3.547Ki ± 0%   0.000Ki ± 0%  -100.00% (p=0.000 n=10)

              │ day06_bench_b0.txt │           day06_bench_b1.txt            │
              │     allocs/op      │ allocs/op   vs base                     │
Day06Part1-16      255868.000 ± 0%   4.000 ± 0%  -100.00% (p=0.000 n=10)
Day06Part2-16           57.00 ± 0%    0.00 ± 0%  -100.00% (p=0.000 n=10)
----

**Key Improvements:**

* **Part 1**: 62.10% faster (45.7ms → 17.3ms)
* **Part 1**: 98.15% reduction in memory (53MB → 985KB)
* **Part 1**: 99.998% reduction in allocations (255,868 → 4)
* **Part 2**: Zero allocations (was 57 allocations)

The dramatic improvement came from eliminating the per-cell allocations in the distance calculation loop. By reusing a single buffer and using inline calculations, Part 1 went from 255,868 allocations down to just 4 (for the main data structures). Part 2 benefits from inline parsing achieving zero allocations.

== Day 07: The Sum of Its Parts
== Day 08: Memory Maneuver
== Day 09: Marble Mania

=== Performance Optimization

Day 09 was optimized by eliminating millions of pointer allocations in the circular linked list implementation.

==== Baseline (b0)
Initial implementation used pointers (`*marble`) for the circular doubly-linked list, allocating a new marble struct for each game turn. With Part 2 playing 6.8M turns, this resulted in 6.8M allocations and significant GC pressure.

==== Optimization (b1)
Changed from pointer-based to index-based circular linked list:

- Pre-allocate a single `[]marble` array for all marbles
- Use `uint` indices instead of `*marble` pointers
- Store prev/next as array indices rather than memory addresses
- Eliminates per-turn allocations entirely

==== Results

----
goos: linux
goarch: amd64
pkg: gitlab.com/jhinrichsen/adventofcode2018
cpu: Intel(R) Xeon(R) CPU @ 2.60GHz
              │ day09_bench_b0.txt │         day09_bench_b1.txt          │
              │       sec/op       │   sec/op     vs base                │
Day09Part1-16          2.355m ± 3%   1.039m ± 4%  -55.88% (p=0.000 n=10)
Day09Part2-16         338.70m ± 8%   52.30m ± 1%  -84.56% (p=0.000 n=10)
geomean                28.24m        7.372m       -73.90%

              │ day09_bench_b0.txt │         day09_bench_b1.txt          │
              │        B/op        │     B/op      vs base               │
Day09Part1-16         1.563Mi ± 0%   1.637Mi ± 0%  +4.70% (p=0.000 n=10)
Day09Part2-16         155.9Mi ± 0%   163.0Mi ± 0%  +4.55% (p=0.000 n=10)
geomean               15.61Mi        16.33Mi       +4.63%

              │ day09_bench_b0.txt │         day09_bench_b1.txt          │
              │     allocs/op      │ allocs/op   vs base                 │
Day09Part1-16       68129.000 ± 0%   2.000 ± 0%  -100.00% (p=0.000 n=10)
Day09Part2-16     6812637.000 ± 0%   2.000 ± 0%  -100.00% (p=0.000 n=10)
geomean                681.3k        2.000       -100.00%
----

**Key Improvements:**

* **Part 1**: 55.88% faster (2.4ms → 1.0ms)
* **Part 2**: 84.56% faster (339ms → 52ms)
* **Both parts**: 100% reduction in allocations (68k/6.8M → 2 each)
* **Memory**: Slight increase (4.6%) due to pre-allocation, but worthwhile trade-off

The dramatic speedup comes from eliminating GC pressure. By pre-allocating the entire marble array upfront and using indices instead of pointers, we reduce allocations from millions to just 2 (the scores slice and marble array). This optimization saves ~286ms from the total benchmark runtime.

== Day 10: The Stars Align
== Day 11: Chronal Charge

Part 2 required two tries; the initial submission was incorrect.

The summed-area table implementation was correct. The issue was a test failure
caused by an incorrect expected value: I had guessed the answer when writing
the test, before running the actual computation.

After debugging with a standalone program, I verified that the algorithm
produces correct results for both examples and the puzzle input.

== Day 12: Subterranean Sustainability
== Day 13: Mine Cart Madness
== Day 14: Chocolate Charts

=== Performance Optimization

Day 14 was optimized by changing from `[]int` to `[]byte` storage and eliminating repeated function calls.

==== Baseline (b0)
Initial implementation stored recipe scores as `[]int` (4 bytes each) and used `fmt.Sprintf` in a loop for string building. Part 2 called a `matchesEnd` helper function after every append, resulting in millions of function calls.

==== Optimization (b1)
Complete rewrite with memory and computational improvements:

- Use `[]byte` instead of `[]int` for 4x memory savings
- Pre-allocate slice capacity to avoid reallocations
- Inline pattern matching to eliminate function call overhead
- Build result strings directly with byte operations
- Part 1: Build result as `[]byte` instead of repeated `fmt.Sprintf`

==== Results

----
goos: linux
goarch: amd64
pkg: gitlab.com/jhinrichsen/adventofcode2018
cpu: Intel(R) Xeon(R) CPU @ 2.60GHz
              │ day14_bench_b0.txt │         day14_bench_b1.txt          │
              │       sec/op       │   sec/op     vs base                │
Day14Part1-16         7.878m ±  5%   2.846m ± 1%  -63.87% (p=0.000 n=10)
Day14Part2-16         579.2m ± 10%   215.5m ± 2%  -62.79% (p=0.000 n=10)
geomean               67.55m         24.77m       -63.33%

              │ day14_bench_b0.txt │          day14_bench_b1.txt          │
              │        B/op        │     B/op      vs base                │
Day14Part1-16       13007.8Ki ± 0%   288.0Ki ± 0%  -97.79% (p=0.000 n=10)
Day14Part2-16        917.16Mi ± 0%   23.84Mi ± 0%  -97.40% (p=0.000 n=10)
geomean               107.9Mi        2.590Mi       -97.60%

              │ day14_bench_b0.txt │         day14_bench_b1.txt         │
              │     allocs/op      │ allocs/op   vs base                │
Day14Part1-16          44.000 ± 0%   2.000 ± 0%  -95.45% (p=0.000 n=10)
Day14Part2-16          67.000 ± 1%   2.000 ± 0%  -97.01% (p=0.000 n=10)
geomean                 54.30        2.000       -96.32%
----

**Key Improvements:**

* **Part 1**: 63.87% faster (7.9ms → 2.8ms)
* **Part 1**: 97.79% reduction in memory (13MB → 288KB)
* **Part 2**: 62.79% faster (579ms → 215ms)
* **Part 2**: 97.40% reduction in memory (917MB → 24MB)
* **Both parts**: 95%+ reduction in allocations (down to 2 allocations each)

The dramatic improvements come from using byte storage instead of integers (4x memory savings) and pre-allocating capacity to avoid slice reallocations. Part 2's speedup is achieved by inlining the pattern matching logic, eliminating millions of function calls. This optimization saves ~364ms from the total benchmark runtime.

==== Optimization (b2)
Changed return type from `string` to `uint` to follow CLAUDE.md guidelines:

- Part 1: Build result directly as `uint` instead of creating `[]byte` then converting to `string`
- Part 2: Return position as `uint` instead of converting to string with `itoa`
- Eliminates all string allocations in result path

==== Results (b1 → b2)

----
goos: linux
goarch: amd64
pkg: gitlab.com/jhinrichsen/adventofcode2018
cpu: Intel(R) Xeon(R) CPU @ 2.60GHz
              │ day14_bench_b0.txt │         day14_bench_b1.txt         │
              │       sec/op       │   sec/op     vs base               │
Day14Part1-16          2.838m ± 0%   2.793m ± 1%  -1.59% (p=0.000 n=10)
Day14Part2-16          216.3m ± 1%   215.9m ± 1%       ~ (p=0.393 n=10)
geomean                24.77m        24.55m       -0.88%

              │ day14_bench_b0.txt │         day14_bench_b1.txt          │
              │        B/op        │     B/op      vs base               │
Day14Part1-16         288.0Ki ± 0%   288.0Ki ± 0%  -0.01% (p=0.000 n=10)
Day14Part2-16         23.84Mi ± 0%   23.84Mi ± 0%       ~ (p=0.067 n=10)
geomean               2.590Mi        2.590Mi       -0.00%

              │ day14_bench_b0.txt │         day14_bench_b1.txt         │
              │     allocs/op      │ allocs/op   vs base                │
Day14Part1-16           2.000 ± 0%   1.000 ± 0%  -50.00% (p=0.000 n=10)
Day14Part2-16           2.000 ± 0%   1.000 ± 0%  -50.00% (p=0.000 n=10)
geomean                 2.000        1.000       -50.00%
----

**Additional Improvements:**

* **Both parts**: 50% reduction in allocations (2 → 1)
* **Performance**: Minor speedup due to eliminating final string conversion

This final optimization eliminates the string conversion overhead and follows the project guideline of using `uint` for numeric answers.

== Day 15: Beverage Bandits

=== Performance Optimization

Day 15 was optimized by replacing map-based lookups with array-based tracking and eliminating repeated linear searches.

==== Baseline (b0)
Initial implementation used map-based `visited` tracking in BFS, performed O(n) linear searches through all units to check occupancy on every move, and used slice allocations for tracking adjacent enemies. With hundreds of unit turns per simulation and Part 2 running multiple simulations, this resulted in 408k allocations.

==== Optimization (b1)
Complete rewrite with multiple optimizations:

- Replaced map-based `visited` tracking with reusable 2D boolean array
- Added occupancy grid (`[][]bool`) updated incrementally instead of O(n) searches
- Optimized `adjacentEnemyIdx` to find best attack target in single pass
- Changed return type from `string` to `uint` per CLAUDE.md guidelines
- Passed `visited` and `occupied` arrays as parameters to avoid allocations

==== Results

----
goos: linux
goarch: amd64
pkg: gitlab.com/jhinrichsen/adventofcode2018
cpu: Intel(R) Xeon(R) CPU @ 2.60GHz
              │ day15_bench_b0.txt │       day15_bench_b1.txt       │
              │       sec/op       │   sec/op     vs base                │
Day15Part1-16              43.812m ± 3%   7.746m ± 1%  -82.32% (p=0.000 n=10)
Day15Part2-16              401.45m ± 1%   82.03m ± 1%  -79.57% (p=0.000 n=10)
geomean                     132.6m        25.21m       -80.99%

              │ day15_bench_b0.txt │       day15_bench_b1.txt        │
              │        B/op        │     B/op      vs base                │
Day15Part1-16            16666.2Ki ± 0%   995.9Ki ± 0%  -94.02% (p=0.000 n=10)
Day15Part2-16            155.913Mi ± 0%   9.814Mi ± 0%  -93.71% (p=0.000 n=10)
geomean                    50.37Mi        3.089Mi       -93.87%

              │ day15_bench_b0.txt │       day15_bench_b1.txt       │
              │      allocs/op     │  allocs/op   vs base                │
Day15Part1-16              40.692k ± 0%   7.155k ± 0%  -82.42% (p=0.000 n=10)
Day15Part2-16              408.71k ± 0%   75.92k ± 0%  -81.42% (p=0.000 n=10)
geomean                     129.0k        23.31k       -81.93%
----

**Key Improvements:**

* **Part 1**: 82.32% faster (43.8ms → 7.7ms)
* **Part 1**: 94.02% reduction in memory (16.3MB → 996KB)
* **Part 1**: 82.42% reduction in allocations (40,692 → 7,155)
* **Part 2**: 79.57% faster (401ms → 82ms)
* **Part 2**: 93.71% reduction in memory (156MB → 9.8MB)
* **Part 2**: 81.42% reduction in allocations (408k → 76k)

The dramatic improvement came from eliminating the hot-path map allocations in BFS and replacing O(n) unit searches with O(1) grid lookups. The occupancy grid is updated incrementally as units move and die, avoiding repeated linear scans. This optimization saves ~320ms from the total benchmark runtime.

== Day 16: Chronal Classification
== Day 17: Reservoir Research
== Day 18: Settlers of The North Pole

=== iter.Seq Optimization

Refactored to use `C8Indices` iterator from `grid.go` for neighbor iteration without bounds checking. The iterator provides pre-computed 8-connectivity neighbor indices for each cell.

Additional optimization: count trees and lumberyards in a single pass instead of calling `countAdjacent` twice for lumberyard cells.

----
goos: linux
goarch: amd64
pkg: gitlab.com/jhinrichsen/adventofcode2018
cpu: AMD Ryzen 7 7840HS w/ Radeon 780M Graphics
                     │   original   │              C8Indices              │
                     │    sec/op    │   sec/op     vs base                │
Day18Part1-16           441.0µ ± 2%   299.0µ ± 1%  -32.20%
Day18Part2-16           18.56ms ± 0%   8.42ms ± 1%  -54.63%
----

Part 1: 1.5x faster, Part 2: 2.2x faster

== Day 19: Go With The Flow

=== Performance Optimization

Day 19 was optimized by eliminating map-based opcode dispatch and reducing string allocations during parsing.

==== Baseline (b0)
Initial implementation parsed opcodes as strings and used a `map[string]func([6]int, int, int, int) [6]int` for opcode dispatch. Each instruction execution required a map lookup and a function call with array copy semantics. With Part 1 executing 11M+ instructions, the map lookups and string comparisons created significant overhead.

==== Optimization (b1)
Complete rewrite with opcode compilation:

- Converted opcode strings to integer IDs (`uint8`) during parsing
- Replaced map-based dispatch with switch statement on opcode ID
- Inlined opcode execution logic directly in `executeOpcode` function
- Changed register operations to use pointer to avoid array copying
- Eliminated all string allocations by parsing byte-by-byte

==== Results

----
goos: linux
goarch: amd64
pkg: gitlab.com/jhinrichsen/adventofcode2018
cpu: Intel(R) Xeon(R) CPU @ 2.60GHz
              │ day19_bench_b0.txt │       day19_bench_b1.txt       │
              │       sec/op       │   sec/op     vs base                │
Day19Part1-16              231.96m ± 2%   42.02m ± 1%  -81.89% (p=0.000 n=10)
Day19Part2-16               14.59µ ± 2%   11.85µ ± 4%  -18.79% (p=0.000 n=10)
geomean                     1.839m        705.5µ       -61.64%

              │ day19_bench_b0.txt │       day19_bench_b1.txt        │
              │        B/op        │     B/op      vs base                │
Day19Part1-16              6.711Ki ± 0%   3.969Ki ± 0%  -40.86% (p=0.000 n=10)
Day19Part2-16              6.711Ki ± 0%   3.969Ki ± 0%  -40.86% (p=0.000 n=10)
geomean                    6.711Ki        3.969Ki       -40.86%

              │ day19_bench_b0.txt │      day19_bench_b1.txt       │
              │      allocs/op     │ allocs/op   vs base                │
Day19Part1-16               46.000 ± 0%   7.000 ± 0%  -84.78% (p=0.000 n=10)
Day19Part2-16               46.000 ± 0%   7.000 ± 0%  -84.78% (p=0.000 n=10)
geomean                      46.00        7.000       -84.78%
----

**Key Improvements:**

* **Part 1**: 81.89% faster (232ms → 42ms)
* **Part 1**: 40.86% reduction in memory (6.7KB → 4.0KB)
* **Part 1**: 84.78% reduction in allocations (46 → 7)
* **Part 2**: 18.79% faster (14.6µs → 11.9µs)
* **Both parts**: 40.86% reduction in memory, 84.78% reduction in allocations

The massive speedup comes from eliminating map lookups in the instruction execution hot loop. By converting opcodes to integer IDs during parsing and using a switch statement, the Go compiler can optimize the dispatch with a jump table. The switch statement combined with pointer-based register updates eliminates both the map lookup overhead and array copying overhead. This optimization saves ~190ms from the total benchmark runtime.

== Day 20: A Regular Map
== Day 21: Chronal Conversion
== Day 22: Mode Maze

=== Performance Optimization

Day 22 was optimized by replacing map-based data structures with pre-allocated arrays for Dijkstra's pathfinding algorithm.

==== Baseline (b0)
Initial implementation used maps for all data structures: `map[state]int` for distances, `map[state]bool` for visited states, and `map[pos]int` for erosion cache. Part 2 performed Dijkstra's algorithm with 1M+ map operations, resulting in significant overhead.

==== Optimization (b1)
Replaced maps with fixed-size arrays based on bounded search space:

- Pre-compute all erosion levels in a 2D array (`[][]int`)
- Use 3D arrays for distances and visited states (`[][][3]int` and `[][][3]bool`)
- Eliminated state struct and map lookups
- Part 1: Changed from recursive to iterative erosion calculation
- Pre-allocate search space bounds (target * 2 in each dimension)

==== Results

----
goos: linux
goarch: amd64
pkg: gitlab.com/jhinrichsen/adventofcode2018
cpu: Intel(R) Xeon(R) CPU @ 2.60GHz
              │ day22_bench_b0.txt │         day22_bench_b1.txt          │
              │       sec/op       │   sec/op     vs base                │
Day22Part1-16         1906.3µ ± 2%   117.7µ ± 3%  -93.82% (p=0.000 n=10)
Day22Part2-16        2314.04m ± 9%   54.69m ± 3%  -97.64% (p=0.000 n=10)
geomean                66.42m        2.538m       -96.18%

              │ day22_bench_b0.txt │          day22_bench_b1.txt          │
              │        B/op        │     B/op      vs base                │
Day22Part1-16         853.2Ki ± 0%   110.7Ki ± 0%  -87.02% (p=0.000 n=10)
Day22Part2-16        235.06Mi ± 0%   14.27Mi ± 0%  -93.93% (p=0.000 n=10)
geomean               14.00Mi        1.242Mi       -91.12%

              │ day22_bench_b0.txt │          day22_bench_b1.txt          │
              │     allocs/op      │  allocs/op   vs base                 │
Day22Part1-16           81.00 ± 0%   744.00 ± 0%  +818.52% (p=0.000 n=10)
Day22Part2-16         1056.8k ± 0%   193.0k ± 0%   -81.73% (p=0.000 n=10)
geomean                9.252k        11.98k        +29.53%
----

**Key Improvements:**

* **Part 1**: 93.82% faster (1.9ms → 118µs)
* **Part 1**: 87.02% reduction in memory (853KB → 111KB)
* **Part 1**: More allocations (81 → 744) from the pre-computed erosion rows, outweighed by the speedup
* **Part 2**: 97.64% faster (2.31s → 54.7ms)
* **Part 2**: 93.93% reduction in memory (235MB → 14MB)
* **Part 2**: 81.73% reduction in allocations (1.06M → 193k)

The massive speedup comes from eliminating map operations in the hot path. By pre-allocating fixed arrays and using direct indexing instead of hash lookups, Part 2's Dijkstra search runs 40x faster. The bounded search space (target dimensions * 2) allows for efficient array-based storage. This optimization saves ~2.26 seconds from the total benchmark runtime.

== Day 23: Experimental Emergency Teleportation
== Day 24: Immune System Simulator 20XX
== Day 25: Four-Dimensional Adventure

== Benchmarks

Run with `go test -run=^$ -bench=Day..Part.$ -benchmem`

----
goos: linux
goarch: amd64
pkg: gitlab.com/jhinrichsen/adventofcode2018
cpu: Intel(R) Xeon(R) CPU @ 2.60GHz
BenchmarkDay01Part1-16    	  279217	      4179 ns/op	       0 B/op	       0 allocs/op
BenchmarkDay01Part2-16    	      67	  15598267 ns/op	 9458762 B/op	    1043 allocs/op
BenchmarkDay02Part1-16    	    1526	    815387 ns/op	  468001 B/op	    2500 allocs/op
BenchmarkDay02Part2-16    	     385	   3058652 ns/op	 1210822 B/op	   60512 allocs/op
BenchmarkDay03Part1-16    	     194	   6077997 ns/op	 8554224 B/op	    6618 allocs/op
BenchmarkDay03Part2-16    	     144	   8281505 ns/op	  411896 B/op	    5637 allocs/op
BenchmarkDay04Part1-16    	   32767	     36550 ns/op	   10728 B/op	      27 allocs/op
BenchmarkDay04Part2-16    	   33577	     35841 ns/op	   10728 B/op	      27 allocs/op
BenchmarkDay05Part1-16    	   20227	     61017 ns/op	       0 B/op	       0 allocs/op
BenchmarkDay05Part2-16    	       7	 160246428 ns/op	 3834387 B/op	      83 allocs/op
BenchmarkDay06Part1-16    	      76	  15912322 ns/op	 1008513 B/op	       4 allocs/op
BenchmarkDay06Part2-16    	     100	  10806370 ns/op	       0 B/op	       0 allocs/op
BenchmarkDay07Part1-16    	   11476	    102966 ns/op	   24496 B/op	     354 allocs/op
BenchmarkDay07Part2-16    	    1022	   1124302 ns/op	   23408 B/op	     279 allocs/op
BenchmarkDay08Part1-16    	    7977	    130490 ns/op	  155648 B/op	       1 allocs/op
BenchmarkDay08Part2-16    	    5901	    212507 ns/op	  171888 B/op	    1166 allocs/op
BenchmarkDay09Part1-16    	    1112	   1168670 ns/op	 1716231 B/op	       2 allocs/op
BenchmarkDay09Part2-16    	      12	  86096659 ns/op	170946576 B/op	       2 allocs/op
BenchmarkDay10Part1-16    	      44	  25665104 ns/op	  134888 B/op	     707 allocs/op
BenchmarkDay10Part2-16    	      46	  26285860 ns/op	   59010 B/op	      27 allocs/op
BenchmarkDay11Part1-16    	    1404	    788790 ns/op	  809837 B/op	     302 allocs/op
BenchmarkDay11Part2-16    	      50	  23668177 ns/op	  809786 B/op	     303 allocs/op
BenchmarkDay12Part1-16    	    2659	    446044 ns/op	   67008 B/op	    8355 allocs/op
BenchmarkDay12Part2-16    	     544	   2119997 ns/op	  257667 B/op	   47591 allocs/op
BenchmarkDay13Part1-16    	    4459	    276321 ns/op	   77081 B/op	     671 allocs/op
BenchmarkDay13Part2-16    	     142	   8242921 ns/op	 3828976 B/op	   94351 allocs/op
BenchmarkDay14Part1-16    	     424	   2792827 ns/op	  294912 B/op	       1 allocs/op
BenchmarkDay14Part2-16    	       5	 219967526 ns/op	25001984 B/op	       1 allocs/op
BenchmarkDay15Part1-16    	     151	   7882202 ns/op	 1019769 B/op	    7155 allocs/op
BenchmarkDay15Part2-16    	      14	  82625262 ns/op	10291165 B/op	   75921 allocs/op
BenchmarkDay16Part1-16    	    1225	    984709 ns/op	  517904 B/op	    3333 allocs/op
BenchmarkDay16Part2-16    	     874	   1336860 ns/op	  548104 B/op	    3462 allocs/op
BenchmarkDay17Part1-16    	      73	  16520990 ns/op	 5241479 B/op	     421 allocs/op
BenchmarkDay17Part2-16    	      73	  15741445 ns/op	 5241473 B/op	     421 allocs/op
BenchmarkDay18Part1-16    	    1638	    740788 ns/op	   32256 B/op	      12 allocs/op
BenchmarkDay18Part2-16    	      38	  30780724 ns/op	 2577151 B/op	     960 allocs/op
BenchmarkDay19Part1-16    	      28	  41983231 ns/op	    4064 B/op	       7 allocs/op
BenchmarkDay19Part2-16    	   97034	     12316 ns/op	    4064 B/op	       7 allocs/op
BenchmarkDay20Part1-16    	      21	  61985983 ns/op	111571016 B/op	   64799 allocs/op
BenchmarkDay20Part2-16    	      18	  62625383 ns/op	111569716 B/op	   64797 allocs/op
BenchmarkDay21Part1-16    	  455888	      2385 ns/op	    2844 B/op	      37 allocs/op
BenchmarkDay21Part2-16    	    1518	    826546 ns/op	  594326 B/op	     116 allocs/op
BenchmarkDay22Part1-16    	    8629	    128505 ns/op	  113391 B/op	     744 allocs/op
BenchmarkDay22Part2-16    	      20	  54381145 ns/op	14963561 B/op	  193034 allocs/op
BenchmarkDay23Part1-16    	   14577	     82387 ns/op	  100960 B/op	      12 allocs/op
BenchmarkDay23Part2-16    	     266	   4466976 ns/op	  238185 B/op	    1006 allocs/op
BenchmarkDay24Part1-16    	      58	  21290822 ns/op	10108859 B/op	  173606 allocs/op
BenchmarkDay25Part1-16    	     313	   3892091 ns/op	  251649 B/op	      48 allocs/op
PASS
ok  	gitlab.com/jhinrichsen/adventofcode2018	60.239s
----

Total: 1,028,314,426 ns ≈ 1.028 s

== References

https://github.com/Voltara/advent2018-fast[Advent of Code 2018 optimized C++ solutions]

https://www.forrestthewoods.com/blog/solving-advent-of-code-in-under-a-second/[Rust]

== SAST (Static Application Security Testing)

This project uses custom SAST tooling in GitLab CI, optimized for the free tier.

=== GitLab Free Tier Limitations

GitLab's built-in SAST features (Security Dashboard, vulnerability management, merge request security widgets) require the Ultimate tier. On the free tier, SAST scans can run but results are only available as downloadable JSON artifacts.

=== Current Setup

Our CI pipeline uses:

- Code Quality Reports: golangci-lint → JSON → banyansecurity/golint-convert → CodeClimate JSON format
  * Displays findings in merge request Code Quality widget (available in free tier since GitLab 13.2)
  * Shows code quality degradations/improvements directly in MRs

- Test Reports: go-junit-report/v2 → JUnit XML format
  * Integrates test results into GitLab's test report UI

- Coverage Reports: gocover-cobertura → Cobertura XML format  
  * Shows coverage metrics and trends in merge requests

- Vulnerability Scanning: govulncheck (periodic, scheduled pipeline)
  * Scans for known vulnerabilities in Go dependencies
  * Runs on a schedule to catch newly disclosed vulnerabilities
  * Results available as JSON artifacts (no UI on free tier)

=== Note on Deprecation

GitLab deprecated its built-in CodeClimate scanning template in version 17.3 (planned removal in 19.0). This only affects GitLab's bundled scanning engine. Custom pipelines that generate CodeClimate-format JSON (like ours) continue to work and are the recommended approach for free tier users.

The Code Quality widget will continue to display results from custom CodeClimate JSON reports.

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func Day01

func Day01(data []byte, part1 bool) uint

func Day06

func Day06(data []byte, part1 bool) uint

func Day07

func Day07(lines []string) (string, error)

Day07 returns sorted characters on input lines in format "Step C must be finished before step A can begin.".

func Day07Part2

func Day07Part2(lines []string, workers, base int) (int, error)

Day07Part2 computes the total time to complete all steps with the given number of workers and a base duration where A=1+base, B=2+base, ... Z=26+base.
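The duration rule is a one-liner; `stepDuration` below is a hypothetical helper name used for illustration, not part of the package API:

```go
package main

import "fmt"

// stepDuration maps a step letter to its duration in seconds:
// 'A' takes 1+base, 'B' takes 2+base, ..., 'Z' takes 26+base.
func stepDuration(step byte, base int) int {
	return int(step-'A') + 1 + base
}

func main() {
	fmt.Println(stepDuration('A', 60)) // 61, with the puzzle's base of 60
	fmt.Println(stepDuration('Z', 60)) // 86
}
```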

func Day08

func Day08(numbers []uint, part1 bool) uint

Day08 solves day 8 for both parts using an iterative (non-recursive) approach. Part 1: Sum all metadata entries. Part 2: Compute the value of the root node, where metadata entries reference child values.
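The iterative idea can be sketched with an explicit stack of parser frames, each tracking how many children and metadata entries remain. This is an independent reimplementation of part 1 under that assumption, not the package's code; the input is the well-known example tree from the puzzle:

```go
package main

import "fmt"

// metadataSum walks the header-prefixed tree iteratively: each stack
// frame records the remaining child and metadata counts of one node.
func metadataSum(nums []uint) uint {
	type frame struct{ children, meta uint }
	var stack []frame
	var sum uint
	i := 0
	push := func() {
		stack = append(stack, frame{nums[i], nums[i+1]})
		i += 2
	}
	push() // root node header
	for len(stack) > 0 {
		top := &stack[len(stack)-1]
		if top.children > 0 {
			top.children-- // descend into the next child
			push()
			continue
		}
		for m := uint(0); m < top.meta; m++ {
			sum += nums[i] // consume this node's metadata
			i++
		}
		stack = stack[:len(stack)-1]
	}
	return sum
}

func main() {
	nums := []uint{2, 3, 0, 3, 10, 11, 12, 1, 1, 0, 1, 99, 2, 1, 1, 2}
	fmt.Println(metadataSum(nums)) // 138 for the puzzle's example
}
```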

func Day08Part2

func Day08Part2(numbers []int) (sum int)

func Day09

func Day09(puzzle Day09Puzzle, part1 bool) uint

Day09 simulates the marble game and returns the winning score. Part 1: Play the game as described. Part 2: Last marble is 100 times larger.

func Day10

func Day10(puzzle Day10Puzzle, part1 bool) string

Day10 simulates the moving points and finds when they align to form a message. Part 1: Returns the message text using OCR. Part 2: Returns the number of seconds to wait for the message.

func Day11

func Day11(puzzle Day11Puzzle, part1 bool) string

Day11 finds the square with the largest total power. Part 1: Returns the X,Y coordinate of the 3x3 square's top-left corner. Part 2: Returns the X,Y,size identifier of any size square with largest power.

func Day12

func Day12(puzzle Day12Puzzle, part1 bool) string

Day12 simulates plant growth. Part 1: Returns the sum of pot numbers after 20 generations. Part 2: Returns the sum after 50 billion generations by detecting a repeating pattern.

func Day13

func Day13(puzzle Day13Puzzle, part1 bool) string

Day13 simulates mine cart movement. Part 1: Returns X,Y coordinates of first collision. Part 2: Returns X,Y coordinates of last remaining cart.

func Day14

func Day14(puzzle Day14Puzzle, part1 bool) uint

Day14 generates chocolate recipe scores. Part 1: Returns the scores of the ten recipes after the given number of recipes, combined into a single number. Part 2: Returns the number of recipes before the input sequence first appears.

func Day15

func Day15(puzzle Day15Puzzle, part1 bool) uint

Day15 simulates the battle. Part 1: Returns the outcome (rounds * remaining HP). Part 2: Returns the outcome with minimum elf attack power for no elf deaths.

func Day16Part1

func Day16Part1(puzzle *Day16Puzzle) uint

Day16Part1 counts samples that behave like 3 or more opcodes.

func Day16Part2

func Day16Part2(puzzle *Day16Puzzle) int

Day16Part2 deduces opcodes and executes the test program.

func Day17

func Day17(puzzle Day17Puzzle, part1 bool) uint

Day17 simulates water flow. Part 1: Count all tiles reached by water (settled + flowing). Part 2: Count only retained water (settled).

func Day18

func Day18(p Day18Puzzle, part1 bool) uint

Day18 is a unified solver for both parts. Part 1: simulates 10 minutes. Part 2: simulates 1000000000 minutes using cycle detection.

func Day19

func Day19(puzzle Day19Puzzle, part1 bool) uint

Day19 executes the program. Part 1: Returns the value in register 0 when the program halts. Part 2: Same but with register 0 starting at 1; uses optimization to avoid slow simulation.

func Day20

func Day20(puzzle Day20Puzzle, part1 bool) uint

Day20 finds the furthest room. Part 1: Returns the maximum number of doors to reach any room. Part 2: Returns the number of rooms that require at least 1000 doors.

func Day21

func Day21(puzzle Day21Puzzle, part1 bool) uint

Day21 finds the value for register 0 that causes the program to halt. Part 1: Find the value that halts with fewest instructions (first value checked). Part 2: Find the value that halts with most instructions (last unique value before cycle).

func Day22

func Day22(puzzle Day22Puzzle, part1 bool) string

Day22 calculates the cave risk level. Part 1: Sum of risk levels in rectangle from (0,0) to target. Part 2: Shortest time to reach target with tool constraints.

func Day23

func Day23(puzzle Day23Puzzle, part1 bool) uint

Day23 finds nanobots in range of strongest. Part 1: Count nanobots in range of the strongest nanobot. Part 2: Find the coordinate in range of the most nanobots, closest to origin.

func Day24

func Day24(puzzle Day24Puzzle, part1 bool) uint

Day24 simulates the combat. Part 1: Returns the number of units in the winning army. Part 2: Returns the number of units the immune system has after winning with the smallest boost.

func Day25

func Day25(puzzle Day25Puzzle, part1 bool) string

Day25 counts constellations. Part 1: Returns the number of constellations.

func Lines

func Lines(s string) []string

Lines splits one string into lines, named after Haskell's `lines`.

func NewDay08

func NewDay08(data []byte) ([]uint, error)

NewDay08 parses a whitespace-delimited sequence of positive integers from data. It avoids strconv for speed and works directly with []byte.

func Overlap

func Overlap(l1, r1, l2, r2 Point) bool

Overlap returns true if the two rectangles (l1, r1) and (l2, r2) overlap.

func SimulateCombat

func SimulateCombat(puzzle Day24Puzzle, boost int) (string, int)

SimulateCombat runs the combat simulation with the given boost. Returns the winner ("immune" or "infection") and the number of units remaining.

Types

type Day09Puzzle

type Day09Puzzle struct {
	// contains filtered or unexported fields
}

Day09Puzzle represents the marble game configuration.

func NewDay09

func NewDay09(data []byte) (Day09Puzzle, error)

NewDay09 parses the marble game configuration from data. Expected format: "N players; last marble is worth M points"

type Day10Puzzle

type Day10Puzzle struct {
	// contains filtered or unexported fields
}

Day10Puzzle represents the moving points of light.

func NewDay10

func NewDay10(data []byte) (Day10Puzzle, error)

NewDay10 parses the points from data. Expected format: "position=<X, Y> velocity=<VX, VY>"

type Day11Puzzle

type Day11Puzzle struct {
	// contains filtered or unexported fields
}

Day11Puzzle represents the grid serial number.

func NewDay11

func NewDay11(data []byte) (Day11Puzzle, error)

NewDay11 parses the grid serial number from data.

type Day12Puzzle

type Day12Puzzle struct {
	// contains filtered or unexported fields
}

Day12Puzzle represents the plant growth simulation.

func NewDay12

func NewDay12(data []byte) (Day12Puzzle, error)

NewDay12 parses the initial state and rules from data.

type Day13Puzzle

type Day13Puzzle struct {
	// contains filtered or unexported fields
}

Day13Puzzle represents the mine cart track system.

func NewDay13

func NewDay13(data []byte) (Day13Puzzle, error)

NewDay13 parses the track and carts from data.

type Day14Puzzle

type Day14Puzzle struct {
	// contains filtered or unexported fields
}

Day14Puzzle represents the number of recipes to skip.

func NewDay14

func NewDay14(data []byte) (Day14Puzzle, error)

NewDay14 parses the number of recipes from data.

type Day15Puzzle

type Day15Puzzle struct {
	// contains filtered or unexported fields
}

Day15Puzzle represents the battle map.

func NewDay15

func NewDay15(data []byte) (Day15Puzzle, error)

NewDay15 parses the battle map from data.

type Day16Puzzle

type Day16Puzzle struct {
	Samples []Sample
	Program []Instruction
}

Day16Puzzle holds parsed samples and the test program.

func NewDay16

func NewDay16(lines []string) (*Day16Puzzle, error)

NewDay16 parses the input lines into samples and the test program.

type Day17Puzzle

type Day17Puzzle struct {
	// contains filtered or unexported fields
}

Day17Puzzle represents the underground scan.

func NewDay17

func NewDay17(data []byte) (Day17Puzzle, error)

NewDay17 parses the clay positions.

type Day18Puzzle

type Day18Puzzle struct {
	// contains filtered or unexported fields
}

Day18Puzzle represents the lumber collection area grid.

func NewDay18

func NewDay18(data []byte) (Day18Puzzle, error)

NewDay18 parses the lumber collection area grid from raw bytes.

type Day19Puzzle

type Day19Puzzle struct {
	// contains filtered or unexported fields
}

Day19Puzzle represents the device program.

func NewDay19

func NewDay19(data []byte) (Day19Puzzle, error)

NewDay19 parses the program.

type Day20Puzzle

type Day20Puzzle struct {
	// contains filtered or unexported fields
}

Day20Puzzle represents the facility regex.

func NewDay20

func NewDay20(data []byte) (Day20Puzzle, error)

NewDay20 parses the regex.

type Day21Puzzle

type Day21Puzzle struct {
	// contains filtered or unexported fields
}

Day21Puzzle represents the device program.

func NewDay21

func NewDay21(data []byte) (Day21Puzzle, error)

NewDay21 parses the program (same format as day 19).

type Day22Puzzle

type Day22Puzzle struct {
	// contains filtered or unexported fields
}

Day22Puzzle represents the cave system parameters.

func NewDay22

func NewDay22(data []byte) (Day22Puzzle, error)

NewDay22 parses the depth and target.

type Day23Puzzle

type Day23Puzzle struct {
	// contains filtered or unexported fields
}

Day23Puzzle represents the nanobots.

func NewDay23

func NewDay23(data []byte) (Day23Puzzle, error)

NewDay23 parses the nanobots.

type Day24Puzzle

type Day24Puzzle struct {
	// contains filtered or unexported fields
}

Day24Puzzle represents the two armies.

func NewDay24

func NewDay24(data []byte) (Day24Puzzle, error)

NewDay24 parses the army groups.

type Day25Puzzle

type Day25Puzzle struct {
	// contains filtered or unexported fields
}

Day25Puzzle represents 4D points.

func NewDay25

func NewDay25(data []byte) (Day25Puzzle, error)

NewDay25 parses the 4D coordinates.

type Grid

type Grid struct {
	W, H int
}

Grid defines board dimensions.

func (Grid) C4Indices

func (g Grid) C4Indices() iter.Seq2[int, iter.Seq[int]]

C4Indices yields (index, neighbors) for 4-connectivity.

func (Grid) C4Points

func (g Grid) C4Points() iter.Seq2[image.Point, iter.Seq[image.Point]]

C4Points yields (pos, neighbors) for 4-connectivity.

func (Grid) C6Indices

func (g Grid) C6Indices() iter.Seq2[int, iter.Seq[int]]

C6Indices yields (index, neighbors) for 6-connectivity (hexagonal grid). Uses axial coordinates: E(+1), W(-1), NE(+W), SW(-W), NW(+W-1), SE(-W+1). Row 0 is the bottom row, row H-1 the top.

func (Grid) C8Indices

func (g Grid) C8Indices() iter.Seq2[int, iter.Seq[int]]

C8Indices yields (index, neighbors) for 8-connectivity.

func (Grid) C8Points

func (g Grid) C8Points() iter.Seq2[image.Point, iter.Seq[image.Point]]

C8Points yields (pos, neighbors) for 8-connectivity.

type Instruction

type Instruction struct {
	Op [4]int // [opcode, A, B, C]
}

Instruction represents a single instruction in the test program.

type OpFunc

type OpFunc func(reg [4]int, a, b, c int) [4]int

OpFunc is a function that executes an operation.
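For illustration, one of the sixteen device operations written in the OpFunc shape (`addr` is the standard puzzle opcode name; the package's internal implementations may differ):

```go
package main

import "fmt"

// addr implements register addition in the OpFunc shape:
// reg[c] = reg[a] + reg[b]. Arrays are values in Go, so the input
// registers are copied and the modified copy is returned.
func addr(reg [4]int, a, b, c int) [4]int {
	reg[c] = reg[a] + reg[b]
	return reg
}

func main() {
	fmt.Println(addr([4]int{1, 2, 0, 0}, 0, 1, 3)) // [1 2 0 3]
}
```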

type Point

type Point struct {
	// contains filtered or unexported fields
}

Point has two coordinates.

type Sample

type Sample struct {
	Before [4]int
	After  [4]int
	Op     [4]int // [opcode, A, B, C]
}

Sample represents a before/after observation with an instruction.

Directories

cmd/cpuname: command
