AoC 2024 Post-Mortem
I was feeling a bit inspired today, so I thought I’d write about my experience with AoC last year.
Buckle up, this is going to be a big one!
General Thoughts
It was a pretty fun year! Some classics returned, and there were a lot of callbacks to previous years to celebrate the 10-year anniversary of AoC.
I have a few general complaints that I’ll list:
Boring Map
One of the things that became clear as the days progressed was that the map was pretty boring. Last year we got an amazing group of islands with animated lava flow and more! They could’ve been more creative while incorporating the 10-year theme.
Yet Another Path-Finding Problem
This is something that I was discussing with colleagues at work. There were just So. Many. Path. Finding. Problems.
I think we got at least five that reduced to doing a BFS/DFS/A* run.
Annoying Setups
There was one day that just had so much setup that I couldn’t be bothered to even start the problem.
Looking at you, day 21.
Difficulty Peaks
It’s not unusual for difficulty to vary up and down a bit during the month, with a general trend to increase as the days progress.
Some of the later days were extremely simple, however. Day 23 shouldn't have been that easy with only two days left until the end.
The AI Problem
I think most people who participate in AoC completely ignore the global leaderboard. It has always been an unattainable goal for most human beings: you need preparation, you need to wake up at the ungodly hour of 06:00, and you need to be able to write code inhumanly fast.
I've never got a global leaderboard time, and I never will try. I used to be impressed by people who managed to complete these puzzles in under a minute.
But this year it was different. The leaderboard turned from something impressive into a joke. A terrible, disgusting joke. I guess this is a metaphor for the state of software development in general over the last year: talented programmers being overshadowed by low-effort AI slop.
It's as offensive to our profession as GenAI is to artists. Yeah, it works, it can give you usable results, but the whole point of AoC is to solve the puzzle yourself. It's like opening a sudoku board and clicking "solve".
While AI-assisted solutions are technically against the rules of AoC, they're probably too hard to detect consistently to be banned from the leaderboard outright. Still, I think a 10-second solution to a hard puzzle should be an instantly bannable offence.
My Attempt
As with last year, I wrote my solutions in Rust. Being my typical self, I decided to build an overcomplicated "template" to hold the solutions, as opposed to the off-the-shelf template I used last year.
The Template
I don’t consider myself a great Rust programmer. I do my best, and sometimes I end up struggling with some patterns that are a lot easier in other languages.
My problem this year was trying to dynamically detect solved days without having to add functions to a big global list of callables. My solution: use a trait to implement the solution. “It will be easy”, I thought.
Spoilers: you still need a big global list of callables, since I couldn't figure out a way to detect all types that implement a given trait.
The solver trait is straightforward enough:
```rust
pub trait Solver<const D: usize, const P: usize> {
    fn solve(&self) -> Option<impl Display + Debug>;
}
```
I added a macro that implements this trait for a top-level function, which looks a lot better when writing the daily solutions. Internally it expands to:
```rust
impl Solver<$day, 1> for PuzzleInput {
    fn solve(&self) -> Option<impl Display + Debug> {
        Some($part_1_solver(self))
    }
}
```
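The macro itself is just a thin wrapper around that impl block. A minimal sketch of what it could look like (the name `solution!` and the exact invocation syntax here are illustrative, not necessarily what my template uses):

```rust
// Illustrative sketch: wire a plain top-level function into the Solver
// trait for a given day and part. Assumes Solver, PuzzleInput, Display
// and Debug are in scope wherever the macro is expanded.
macro_rules! solution {
    ($day:literal, $part:literal, $solver:path) => {
        impl Solver<$day, $part> for PuzzleInput {
            fn solve(&self) -> Option<impl Display + Debug> {
                Some($solver(self))
            }
        }
    };
}

// Hypothetical usage inside a dayXX.rs module:
// solution!(1, 1, part_1);
// solution!(1, 2, part_2);
```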
The main code uses a magical function to access the solved days:
```rust
pub fn get_solvers() -> HashMap<(Day, Part), Box<dyn Fn(&PuzzleInput) -> Option<String>>> {
    // magic
}
```
To explain how that’s possible, let’s take a look at the tree structure of the project:
```
.
├── src/
│   ├── solutions/
│   │   ├── day01.rs
│   │   ├── day02.rs
│   │   └── ...
│   ├── solutions.rs
│   ├── main.rs
│   └── ...
└── build.rs
```
The build script does a lot of the heavy lifting that allows the rest of the code to iterate over solved days. It works like this:
- Read the files in the solutions directory matching `day\d{2}.rs`.
- Use `quote` and `prettyplease` to generate a callable function that's added to the big map of solvers.
- Put that file in the build directory so it can be included by `solutions.rs`.
I wish I could find an easier way to do this without any unsafe code or some other type of static constructor!
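For the curious, here's a condensed sketch of what such a build script can look like, using `quote`, `syn` (with the `full` feature) and `prettyplease` as build dependencies. The real script is a bit more careful; the generated function body, the plain `(usize, usize)` key, and the assumption that both parts exist for every day are simplifications:

```rust
// build.rs (simplified sketch; the real one detects which parts exist).
use std::{env, fs, path::Path};

use quote::quote;

fn main() {
    // Find every src/solutions/dayXX.rs file.
    let mut days = Vec::new();
    for entry in fs::read_dir("src/solutions").unwrap() {
        let name = entry.unwrap().file_name().into_string().unwrap();
        if let Some(day) = name
            .strip_prefix("day")
            .and_then(|rest| rest.strip_suffix(".rs"))
            .and_then(|n| n.parse::<usize>().ok())
        {
            days.push(day);
        }
    }
    days.sort();

    // One map insertion per (day, part). The real map is keyed by
    // (Day, Part); plain usize keeps the sketch self-contained.
    let inserts = days.iter().flat_map(|day| {
        [1usize, 2].map(move |part| {
            quote! {
                map.insert(
                    (#day, #part),
                    Box::new(|input: &PuzzleInput| {
                        <PuzzleInput as Solver<#day, #part>>::solve(input)
                            .map(|result| format!("{result}"))
                    }) as Box<dyn Fn(&PuzzleInput) -> Option<String>>,
                );
            }
        })
    });

    let generated = quote! {
        pub fn get_solvers() -> ::std::collections::HashMap<
            (usize, usize),
            Box<dyn Fn(&PuzzleInput) -> Option<String>>,
        > {
            let mut map = ::std::collections::HashMap::new();
            #(#inserts)*
            map
        }
    };

    // Pretty-print the generated code and drop it into OUT_DIR so that
    // solutions.rs can include!() it.
    let file: syn::File = syn::parse2(generated).unwrap();
    let out = Path::new(&env::var("OUT_DIR").unwrap()).join("solvers.rs");
    fs::write(out, prettyplease::unparse(&file)).unwrap();

    println!("cargo:rerun-if-changed=src/solutions");
}
```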
Measuring Stats
Last year’s template included the ability to measure the execution time of each day. I replicated this in my template, and added the ability to measure heap usage!
As it turns out, there are several caveats when attempting to measure memory usage.
Some objects are statically allocated when you run the solution for the first time. This includes some internal Rust things, apparently, which was causing my memory usage of part 1 to always be way higher than part 2. I quickly found out that you need to run the solution once to allocate these pesky objects before you do your measurement.
Sadly, this workaround means that you can't count all the heap usage, since you can cheat a bit with `lazy_static!` allocations (which I've done for `Regex` instances).
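For reference, the heap side boils down to wrapping the global allocator and counting bytes, plus the warm-up run mentioned above. A minimal sketch of the idea (not my exact template code; it only counts total allocated bytes and ignores deallocations):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Wrap the system allocator and keep a running total of allocated bytes.
struct CountingAllocator;

static ALLOCATED: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        unsafe { System.alloc(layout) }
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static GLOBAL: CountingAllocator = CountingAllocator;

fn measure<T>(solver: impl Fn() -> T) -> (T, usize) {
    // Warm-up run so one-time static allocations don't get attributed
    // to whichever part happens to run first.
    let _ = solver();

    let before = ALLOCATED.load(Ordering::Relaxed);
    let result = solver();
    let after = ALLOCATED.load(Ordering::Relaxed);
    (result, after - before)
}
```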
Measuring time comes with a host of problems as well, including the code's execution time changing randomly if you run the benchmark three times in a row. I didn't find a good solution for this other than just running each part for a while (~5 seconds) and taking an average.
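The timing loop itself is nothing fancy, roughly this shape (a sketch; the five-second budget is the only number taken from my actual setup):

```rust
use std::time::{Duration, Instant};

// Run the solver repeatedly for roughly five seconds and report the mean.
fn bench<T>(solver: impl Fn() -> T) -> Duration {
    let budget = Duration::from_secs(5);
    let start = Instant::now();
    let mut runs = 0u32;

    while start.elapsed() < budget {
        let _ = solver();
        runs += 1;
    }

    start.elapsed() / runs.max(1)
}
```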
My Results
Here are my (not very impressive) results, benchmarked on an AMD Ryzen 9 7950X3D.
|###########################################-------| 43/50 stars
Day | Part 1 | Part 2 |
---|---|---|
01 | 42.9µs / 31 KiB | 68.7µs / 33 KiB |
02 | 134.0µs / 86 KiB | 302.8µs / 86 KiB |
03 | 89.3µs / 96 bytes | 140.1µs / 192 bytes |
04 | 1.5ms / 109 KiB | 4.2ms / 628 KiB |
05 | 205.3µs / 17 KiB | 203.4µs / 17 KiB |
06 | 317.5µs / 1 MiB | 113.3ms / 1 MiB |
07 | 13.4ms / 696 bytes | 691.0ms / 696 bytes |
08 | 21.7µs / 20 KiB | 164.8µs / 59 KiB |
09 | 928.4µs / 2 MiB | 572.3ms / 1 MiB |
10 | 245.1µs / 32 KiB | 234.0µs / 16 KiB |
11 | 159.6µs / 150 KiB | 11.4ms / 9 MiB |
12 | 61.5ms / 961 KiB | 62.4ms / 961 KiB |
13 | 245.7µs / 224 bytes | 244.0µs / 224 bytes |
14 | 109.9µs / 316 bytes | 137.5ms / 41 KiB |
15 | 2.8ms / 772 KiB | 3.0ms / 776 KiB |
16 | 3.9ms / 2 MiB | 6.9ms / 5 MiB |
17 | 2.3µs / 320 bytes | - |
18 | 1.2ms / 614 KiB | 2.7ms / 350 KiB |
19 | 1.2ms / 504 KiB | 22.9ms / 1 MiB |
20 | 683.4ms / 22 MiB | - |
21 | - | - |
22 | 15.3ms / 11 bytes | 578.5ms / 133 MiB |
23 | 962.6ms / 763 KiB | 2.6ms / 476 KiB |
24 | 216.9µs / 81 KiB | - |
25 | - | - |
Days, Commented
In this section, I'll comment on some specific days that I found interesting, offer some insight into my thought process, and mention a few failed paths I went down before solving things the "correct" way.
Day 01
Iterators to the rescue! I learned about the `unzip` method on iterators. Pretty trivial day otherwise.
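As a reminder, the puzzle is just comparing two columns of numbers, and `unzip` splits the parsed pairs nicely. A sketch of part 1, with parsing elided:

```rust
// Part 1: split the parsed pairs into two lists, sort both, and sum the
// pairwise absolute differences.
fn part_1(pairs: &[(i64, i64)]) -> i64 {
    let (mut left, mut right): (Vec<i64>, Vec<i64>) = pairs.iter().copied().unzip();
    left.sort_unstable();
    right.sort_unstable();

    left.iter()
        .zip(&right)
        .map(|(l, r)| (l - r).abs())
        .sum()
}
```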
Day 03
Part 2 is basically just a `fold` operation with an initial state of `(active, sum) = (true, 0)`.
Bonus points for using only 192 bytes of memory (discounting the `Regex`, which is behind a `lazy_static`).
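A sketch of that fold, assuming the input has already been tokenised into an `Instruction` enum (the enum and the regex parsing that produces it are not shown):

```rust
// Hypothetical token type produced by the regex pass.
enum Instruction {
    Mul(u64, u64),
    Do,
    Dont,
}

// Part 2 as a single fold over the instruction stream.
fn part_2(instructions: &[Instruction]) -> u64 {
    instructions
        .iter()
        .fold((true, 0u64), |(active, sum), instruction| match instruction {
            Instruction::Do => (true, sum),
            Instruction::Dont => (false, sum),
            Instruction::Mul(a, b) if active => (active, sum + a * b),
            Instruction::Mul(..) => (active, sum),
        })
        .1
}
```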
Day 04
Awful. Just plain awful. I haven’t checked other people’s solutions, but from my attempts, the best you can do is brute-force checks in part 1.
Part 2 is a bit easier, and I ended up doing it with a “mask” that is checked against every grid position.
This is a good time to mention that Itertools' `cartesian_product` and `multi_cartesian_product` are two of the best tools in AoC.
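To illustrate, the part 1 brute force is just nested products over grid positions and directions. A sketch (grid representation and bounds handling simplified):

```rust
use itertools::Itertools;

// For every cell and every direction, check whether "XMAS" can be read
// starting from that cell.
fn count_xmas(grid: &[Vec<u8>]) -> usize {
    const WORD: &[u8] = b"XMAS";
    let (rows, cols) = (grid.len() as i64, grid[0].len() as i64);

    (0..rows)
        .cartesian_product(0..cols)
        .cartesian_product((-1i64..=1).cartesian_product(-1i64..=1))
        .filter(|&((row, col), (dr, dc))| {
            (dr, dc) != (0, 0)
                && WORD.iter().enumerate().all(|(i, &letter)| {
                    let (r, c) = (row + dr * i as i64, col + dc * i as i64);
                    (0..rows).contains(&r)
                        && (0..cols).contains(&c)
                        && grid[r as usize][c as usize] == letter
                })
        })
        .count()
}
```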
Day 05
A very trivial use case of `sort_by`. A friend of mine was doing their implementation in C# and complaining about how much harder it is to sort stuff like this in C#.
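The whole trick is turning the ordering rules into a comparator. A sketch, assuming the rules are stored as a `HashSet<(u32, u32)>` of "page A must come before page B" pairs:

```rust
use std::cmp::Ordering;
use std::collections::HashSet;

// Sort an update according to the "A must come before B" rules. Strictly
// speaking this comparator isn't guaranteed to be a total order, but the
// puzzle input is well-behaved enough for it to work.
fn sort_update(update: &mut [u32], rules: &HashSet<(u32, u32)>) {
    update.sort_by(|a, b| {
        if rules.contains(&(*a, *b)) {
            Ordering::Less
        } else if rules.contains(&(*b, *a)) {
            Ordering::Greater
        } else {
            Ordering::Equal
        }
    });
}
```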
Day 06
Another day of brute-forcing in part 2. I just ended up checking every position (that overlaps the original path) to see if it generates a cycle.
Detecting cycles is pretty annoying sometimes, so I cheated a bit by checking if the total number of steps is over 10_000. Definitely a hack, but it works really well.
Day 07
A beautiful use case for `multi_cartesian_product` to generate all the combinations of operators!
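Roughly, for each equation you generate every sequence of operators and evaluate left to right. A sketch of the part 1 check (part 2's concatenation operator would just be a third entry in `OPS`):

```rust
use itertools::Itertools;

// Operators available in part 1.
#[derive(Clone, Copy)]
enum Op {
    Add,
    Mul,
}

const OPS: [Op; 2] = [Op::Add, Op::Mul];

// Check whether some combination of operators makes the operands hit the target.
fn is_solvable(target: u64, operands: &[u64]) -> bool {
    std::iter::repeat(OPS.iter())
        .take(operands.len() - 1)
        .multi_cartesian_product()
        .any(|ops| {
            let result = operands[1..]
                .iter()
                .zip(ops)
                .fold(operands[0], |acc, (value, op)| match op {
                    Op::Add => acc + value,
                    Op::Mul => acc * value,
                });
            result == target
        })
}
```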
Day 09
Another terrible day. Part 2 isn’t challenging per se, but it ends up just being a bunch of code.
Day 10
The funny day when a bug in the solution to part 1 was actually the solution to part 2.
Day 11
For the longest time I refused to consider caching because I thought the cache would just grow too much, but the problem is cleverly designed so that the splits keep the numbers (and therefore the cache) bounded!
A simple recursive problem, otherwise.
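The memoised recursion ends up tiny. A sketch, with the digit splitting done through strings for brevity:

```rust
use std::collections::HashMap;

// Count how many stones a single stone turns into after `blinks` steps,
// memoising on (stone, blinks).
fn count(stone: u64, blinks: u32, cache: &mut HashMap<(u64, u32), u64>) -> u64 {
    if blinks == 0 {
        return 1;
    }
    if let Some(&cached) = cache.get(&(stone, blinks)) {
        return cached;
    }

    let digits = stone.to_string();
    let result = if stone == 0 {
        count(1, blinks - 1, cache)
    } else if digits.len() % 2 == 0 {
        // An even number of digits: split the stone into its two halves.
        let (left, right) = digits.split_at(digits.len() / 2);
        count(left.parse().unwrap(), blinks - 1, cache)
            + count(right.parse().unwrap(), blinks - 1, cache)
    } else {
        count(stone * 2024, blinks - 1, cache)
    };

    cache.insert((stone, blinks), result);
    result
}
```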
Day 12
I never knew someone could fail so many times at counting corners. It ended up being relatively clean!
Day 13
Straightforward day if you solve the system of equations to find the number of steps. Don’t skip your linear algebra classes!
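Each machine is a 2x2 linear system, so Cramer's rule gives the press counts directly. A sketch (the `Machine` struct is illustrative):

```rust
// Hypothetical representation of one claw machine.
struct Machine {
    a: (i64, i64),     // movement per press of button A (costs 3 tokens)
    b: (i64, i64),     // movement per press of button B (costs 1 token)
    prize: (i64, i64), // prize coordinates
}

// Solve the 2x2 system with Cramer's rule and return the token cost,
// if a non-negative integer solution exists.
fn cost(machine: &Machine) -> Option<i64> {
    let Machine { a, b, prize } = machine;
    let det = a.0 * b.1 - a.1 * b.0;
    if det == 0 {
        return None;
    }

    let a_presses = (prize.0 * b.1 - prize.1 * b.0) / det;
    let b_presses = (a.0 * prize.1 - a.1 * prize.0) / det;

    // Integer division may have rounded; verify the solution actually works.
    let valid = a_presses >= 0
        && b_presses >= 0
        && a.0 * a_presses + b.0 * b_presses == prize.0
        && a.1 * a_presses + b.1 * b_presses == prize.1;

    valid.then(|| 3 * a_presses + b_presses)
}
```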
Day 14
The day that made everyone go “a picture of a WHAT???”.
I ended up solving this with a `println!` when a candidate is found. This is done by measuring the "inertia" of the robots and printing whenever a configuration has a small enough inertia, as a measure of "order".
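By "inertia" I mean something like the moment of inertia around the robots' centroid; when it drops below a threshold, the robots are clustered and the picture is probably there. A sketch of the measure (the threshold itself is whatever happened to work for my input):

```rust
// Sum of squared distances of every robot from the centroid: low values
// mean the robots are tightly clustered.
fn inertia(positions: &[(i64, i64)]) -> f64 {
    let n = positions.len() as f64;
    let (sum_x, sum_y) = positions
        .iter()
        .fold((0i64, 0i64), |(x, y), p| (x + p.0, y + p.1));
    let (cx, cy) = (sum_x as f64 / n, sum_y as f64 / n);

    positions
        .iter()
        .map(|&(x, y)| {
            let (dx, dy) = (x as f64 - cx, y as f64 - cy);
            dx * dx + dy * dy
        })
        .sum()
}
```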
Day 15
An interesting extension problem. Depending on how you did your part 1, part 2 would be a complete rewrite.
In my case, I ended up just adding a special type of tile in part 2 that can only be moved if its corresponding neighbour is also movable.
Day 16
I was lucky that the `pathfinding` Rust lib implements an `astar_bag` function that returns all optimal paths! It simplified the whole thing.
Day 17
The first day when I gave up on part 2.
I’m sure there’s some correlation that you can use to extract the value, but I couldn’t be bothered. And this one really can’t be brute-forced (I tried!).
Day 18
Another pathfinding case! I thought part 2 was going to be harder, so I answered part 1 in a way that allowed you to move while the blocks were still falling. That turned out to be unnecessary, so I optimised it later.
Day 21
Too annoying to set up.
Day 22
Brute-force, easy to run in parallel.
Day 24
I didn’t do part 2, but it’s probably something to do with back-propagation.