Advent of Code 2024 - In Rust - Days 6-10
This is the second article in a series covering Advent of Code 2024 in Rust.
Day 6 – Guard Gallivant
This puzzle asks to trace the path taken by a “guard” in a simple grid with obstacles. In the first part of the puzzle, the guard simply turns right at every obstacle, and the goal is to count the number of fields covered until they exit the grid.
I built some basic data structures to represent the grid with marked fields, the guard with a position and direction, and I then implemented the prescribed logic, marking fields as the guard moves.
I chose to run with a similar idea to the crossword puzzle (see Day 4 in the earlier article) and made the grid coordinates signed i64, allowing me to express out-of-bounds coordinates (e.g., when the guard leaves the grid, or when drawing a fixed debug view of the game centered on the guard, which might extend beyond the underlying grid when the guard is near the edges).
// The possible states of a tile on the grid.
#[derive(PartialEq, Debug)]
enum Tile {
    Empty,
    Obstacle,
    Marked,
}
// The grid our guard moves on.
// Tiles are stored in row-major layout.
struct Grid {
    /// Size width x height
    size: (usize, usize),
    tiles: Vec<Tile>,
    marked: usize,
}
impl Grid {
    /// Checks if position is within the bounds of our grid.
    /// Uses a signed type to make it easy to check leaving
    /// the field on the left and top.
    fn within_bounds(&self, position: (i64, i64)) -> bool {
        if position.0 < 0 || position.1 < 0 {
            return false;
        }
        let x: usize = position.0.try_into().unwrap();
        let y: usize = position.1.try_into().unwrap();
        x < self.size.0 && y < self.size.1
    }

    /// Gets the value of a field.
    fn get(&self, position: (i64, i64)) -> Option<&Tile> {
        if !self.within_bounds(position) {
            return None;
        }
        let x: usize = position.0.try_into().unwrap();
        let y: usize = position.1.try_into().unwrap();
        self.tiles.get(y * self.size.0 + x)
    }

    ...
}
enum Direction {
    Up,
    Down,
    Left,
    Right,
}

struct Guard {
    position: (i64, i64),
    direction: Direction,
}
impl Guard {
    /// Returns the next coordinate the guard would walk to.
    fn next(&self) -> (i64, i64) {
        match self.direction {
            Direction::Up => (self.position.0, self.position.1 - 1),
            Direction::Down => (self.position.0, self.position.1 + 1),
            Direction::Left => (self.position.0 - 1, self.position.1),
            Direction::Right => (self.position.0 + 1, self.position.1),
        }
    }

    /// Returns the value of the next field ahead of the guard.
    fn lookahead<'a>(&self, grid: &'a Grid) -> Option<&'a Tile> {
        grid.get(self.next())
    }

    // ... some code omitted ...

    /// Moves the guard one step, marking the grid.
    /// Returns false if the guard has left the grid.
    fn walk(&mut self, grid: &mut Grid) -> bool {
        grid.mark(self.position);
        match self.lookahead(grid) {
            None => false,
            Some(Tile::Empty) | Some(Tile::Marked) => {
                self.position = self.next();
                true
            }
            Some(Tile::Obstacle) => {
                // Rotate 90 degrees to the right and try walking again.
                self.direction = Self::turn_right(&self.direction);
                self.walk(grid)
            }
        }
    }
}
Given how “graphical” this puzzle was, I chose to implement a little animated
console “debug view” that could print out a view of the field centered on
the guard. I wrote a function to render the grid back into the ASCII format,
and used ANSI Escape Codes
to move the cursor so I could overwrite the view in place. I chose to use o instead of X for marked fields as I found it a bit more visually distinct. This worked beautifully.
// Run the game, printing out a debug view.
// Use the ANSI escape code to move the cursor back up, so the
// debug view updates in place.
while game.tick() {
    sleep(Duration::from_millis(100));
    print!("\x1B[{}A", DEBUG_SIZE.1);
    print!("{}", game.view(DEBUG_SIZE));
}
The second part of the puzzle asks to find ways to send the guard into a loop by adding an additional obstacle. My first idea was that we could send the guard into a loop by steering them back onto a previously traversed path. I figured we could keep sending out “rays” towards the right as the guard traveled (as that’s where they would go if we placed an obstacle in their path):
...................
XXXXXXXXXXX>......#
.........|||.......
.........||v.......
.........|v........
.........v.........
We know we can send the guard into a loop if a ray hits a scenario like this, where hitting the obstacle sends the guard off on a path it has traveled before. We would need to keep track of which direction the guard traveled, as we might otherwise send them off in the opposite direction (where we’re not sure about a loop):
...................
XXXXXXXXXXX>......#
.........|.........
.........|.........
.........|.........
.........v.........
XXXXXXXXXXXXXXXXXXX
.........#.........
But I then realized that there might also be “naturally” occurring loops waiting in the input that the guard bypasses, but that we can send it into with a new obstacle, as shown below. O is the new obstacle we put into the guard’s path, and it now runs into a pre-existing loop it would otherwise have bypassed completely:
....#................X
...............#.....X
...#.............<XXXX
..............#......O
So I figured I’d go with a brute force idea instead, putting a possible obstacle at every step in the path, and using a clean copy of the grid to verify if this ends up in a loop. This creates a nice sub-problem of detecting a loop, which of course boils down to visiting a field we visited before. But, similar to before, we hit the issue of needing to know in which direction we visited a tile before. Paths can cross and we can construct scenarios like below, where the same field can be visited in opposite directions (path enters from the top, runs into the “box” on the left and runs back out on the same path as before). So we can only be sure we’re in a loop if we hit a field that we visited in the same direction before.
....#...............X....
....XXXXXX#.........X....
...#XXXXXX***********XX>.
.........#..........$....
To track which direction(s) a tile was visited in, I extended the Marked enum value with an enumset of the directions the field was visited in:
#[derive(EnumSetType, Debug)]
enum Direction {
    ...
}

#[derive(PartialEq, Debug)]
enum Tile {
    Empty,
    Obstacle,
    Marked(EnumSet<Direction>),
}
And with that abstraction it was relatively simple to detect a loop:
fn walk(&mut self, grid: &mut Grid) -> Option<PathResult> {
    grid.mark(self.position, self.direction);
    match self.lookahead(grid) {
        Some(Tile::Marked(directions)) => {
            if directions.contains(self.direction) {
                // Loop detected.
                Some(PathResult::Loop)
            } else {
                self.position = self.next();
                None
            }
        }
        // Other arms omitted for brevity
    }
}
And with that problem solved, the brute-force solution was straightforward to implement: put a possible new obstacle at every position in the guard’s original path (minus the starting one) and run a copy of the game to check whether the guard ends up in a loop. I thought for a moment about adding threading to check these game copies in parallel, but it turns out that even a naive solution checking them sequentially runs in under 3s on my MacBook with an unoptimized build, so I didn’t bother:
% time cat inputs/aoc06.txt | cargo run
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.01s
Running `target/debug/aoc2024`
Guard visits 55XX tiles on the original route
Checked 55XX/55XX options, finding XXXX loops...done
...
cargo run 2.39s user 0.03s system 92% cpu 2.621 total
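As a minimal standalone sketch of that brute-force building block (using a plain char grid and tuple coordinates instead of my Grid and Guard types, and a hypothetical loops helper, so not the article's actual code), the loop check boils down to remembering (position, direction) pairs:

```rust
use std::collections::HashSet;

// Simulate the guard on a char grid ('#' = obstacle) and report whether
// it loops, by tracking which (position, direction) pairs we have seen.
fn loops(grid: &[Vec<char>], start: (i64, i64)) -> bool {
    let (w, h) = (grid[0].len() as i64, grid.len() as i64);
    let mut dir = (0i64, -1i64); // the guard starts facing up
    let mut pos = start;
    let mut seen = HashSet::new();
    loop {
        if !seen.insert((pos, dir)) {
            return true; // same position and direction as before: a loop
        }
        let next = (pos.0 + dir.0, pos.1 + dir.1);
        if next.0 < 0 || next.1 < 0 || next.0 >= w || next.1 >= h {
            return false; // guard left the grid
        }
        if grid[next.1 as usize][next.0 as usize] == '#' {
            dir = (-dir.1, dir.0); // turn 90 degrees to the right
        } else {
            pos = next;
        }
    }
}

fn main() {
    // A little box of obstacles that traps the guard in a loop.
    let looping: Vec<Vec<char>> = [".#..", "...#", "#...", "..#."]
        .iter().map(|s| s.chars().collect()).collect();
    assert!(loops(&looping, (1, 1)));
    // An empty grid: the guard walks straight off the top edge.
    let open: Vec<Vec<char>> = ["....", "....", "....", "...."]
        .iter().map(|s| s.chars().collect()).collect();
    assert!(!loops(&open, (2, 2)));
    println!("ok");
}
```

The part-2 driver then just calls a check like this once per candidate obstacle position along the original path.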
Day 7 – Bridge Repair
The first part of this puzzle asks to insert operators between numbers to create a target number. Evaluation order is strictly left to right with no operator precedence, and the only operators are * and +:
3267: 81 40 27
can be solved as both:
3267 == 81 * 40 + 27
3267 == 81 + 40 * 27 // evaluated as (81 + 40) * 27
For an input with N numbers, there will be N - 1 “slots” for operators, so there are 2 ^ (N-1) possible combinations for the two operators we can choose. Ideally we could prune some of these early, and maybe save some computational effort. I had a nagging suspicion that there might be some clever ways to rule out certain combinations early on, maybe based on prime factoring, but I couldn’t figure out how to work addition into that space.
I thought about dynamic programming approaches, where we keep building on previous results, and the best I could come up with was to iteratively build up sequences from the left, always extending the existing set with both possible operators in a given position. This still runs the risk of building the full set of 2 ^ (N-1) combinations, but it saves some computation for the earlier sequence(s).
3267: 81 40 27
{81}
{81+40 = 121, 81*40 = 3240}
{121+27, 121*27, 3240+27, 3240*27}
One more idea I had was that we could prune the set whenever the value(s) got larger than the target value, as both operators we have available can only grow the value. That wouldn’t hold if there are zeroes in the input, but thankfully that wasn’t the case. There are some 1’s, though, which means we might have to deal with cases like (previous value) * 1, and that means we can only prune values that are strictly greater than the target.
$ grep ' 0' inputs/aoc07.txt | wc
0 0 0
$ grep ' 1 ' inputs/aoc07.txt | wc
242 2348 7604
I suspect there might be some clever techniques to also prune values that are too small and can no longer grow fast enough, but that would require some precomputation of what operands we had “left”, and I couldn’t figure out a good way to do this, so I ran with the set idea, and that worked out nicely.
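The set idea can be sketched as follows (with a hypothetical solvable helper, not my actual code): expand a set of reachable values from left to right, pruning anything strictly greater than the target.

```rust
// Returns true if some placement of `+` and `*` over the operands,
// evaluated left to right, reaches the target.
fn solvable(target: u64, operands: &[u64]) -> bool {
    let mut values = vec![operands[0]];
    for &next in &operands[1..] {
        let mut expanded = Vec::new();
        for v in values {
            for candidate in [v + next, v * next] {
                // Both operators can only grow the value (no zeroes in
                // the input), so anything above the target is a dead end.
                if candidate <= target {
                    expanded.push(candidate);
                }
            }
        }
        values = expanded;
    }
    values.contains(&target)
}

fn main() {
    assert!(solvable(3267, &[81, 40, 27])); // 81 * 40 + 27 and (81 + 40) * 27
    assert!(!solvable(161011, &[16, 10, 13])); // no combination works
    println!("ok");
}
```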
The second part of the puzzle adds a third operator || that “concatenates” the operands:
7290: 6 8 6 15 can be solved as
7290 == 6 * 8 || 6 * 15
48 || 6 * 15
486 * 15
This seemed to still fit into the constraints from earlier, so I just added the third operator to the set implementation, and that worked out fine (although it did bring the runtime up to almost 2 minutes).
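The || operator itself can be done numerically rather than via strings; a small sketch (hypothetical helper name, and it assumes no zero operands, which held for my input as checked above):

```rust
// Numeric "concatenation": shift the left operand left by the number of
// decimal digits in the right operand, then add. E.g. 48 || 6 == 486.
fn concat(a: u64, b: u64) -> u64 {
    let mut shift = 1;
    while shift <= b {
        shift *= 10; // find the smallest power of ten larger than b
    }
    a * shift + b
}

fn main() {
    assert_eq!(concat(48, 6), 486);
    assert_eq!(concat(6, 15), 615);
    assert_eq!(concat(486, 15), 48615);
    println!("ok");
}
```

Like + and *, this operator only ever grows the left operand, so the same pruning rule still applies.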
Interlude – Some refactoring
When I found Day 8 was also a geometry-related puzzle happening on a grid, I decided to start pulling out some common code into a geometry crate, in particular a Point type and two containers for grids using signed integer coordinates, as I had found it useful before to just do some maths on coordinates and have the grid lookup handle out-of-bounds cases. I broke out both a Torus grid - where coordinates wrap around - and a regular Grid, where out-of-bounds accesses are signaled through an Option return value from the get methods. Here are a few of the key signatures, with details omitted:
/// A 2D point.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub struct Point {
    pub x: isize,
    pub y: isize,
}

/// A 2D vector.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub struct Vector {
    pub x: isize,
    pub y: isize,
}

/// Allows adding a Vector to a Point, yielding a new Point.
impl Add<Vector> for Point {
    type Output = Point;
}

/// Allows subtracting two Points, yielding a Vector.
impl Sub<Point> for Point {
    type Output = Vector;
}

/// Allows adding two Vectors, yielding a new Vector.
impl Add<Vector> for Vector {
    type Output = Vector;
}

/// A limited-size grid with values of type V.
///
/// Data is stored in row-major format.
#[derive(Clone, Debug)]
pub struct Grid<V> {
    width: usize,
    height: usize,
    values: Vec<V>,
}

impl<V> Grid<V> {
    pub fn new(width: usize, height: usize, values: Vec<V>) -> Grid<V> {...}

    /// Checks if position is within the bounds of our grid.
    pub fn within_bounds(&self, position: &Point) -> bool {...}

    /// Gets the value of a field.
    pub fn get(&self, position: &Point) -> Option<&V> {...}

    /// Gets a mutable reference to a field.
    pub fn get_mut(&mut self, position: &Point) -> Option<&mut V> {...}
}

/// A torus (infinite) grid with values of type V.
///
/// Data is stored in row-major format.
/// Indices wrap around at both edges.
#[derive(Clone, Debug)]
pub struct Torus<V> {
    width: usize,
    height: usize,
    values: Vec<V>,
}

impl<V> Torus<V> {
    pub fn new(width: usize, height: usize, values: Vec<V>) -> Torus<V> {...}

    /// Wraps a point to within (self.width, self.height).
    fn wrap(&self, position: &Point) -> Point {...}

    /// Gets the value of a field.
    pub fn get(&self, position: &Point) -> &V {...}

    /// Gets a mutable reference to a field.
    pub fn get_mut(&mut self, position: &Point) -> &mut V {...}
}
Day 8 – Resonant Collinearity
This puzzle asks to find “antinodes” for a configuration of “antennas” on a grid. At first, this looks like a grid-shaped problem again, but after thinking about it for a moment, I figured it was probably better approached via coordinates, where we store a set of coordinates for each group of antennas and then compute antinodes for each pair within a group.
Finding the guard’s position in the grid input in Day 6 yielded some
useful code that I could adapt for input handling, and the Point
class I had just factored out proved useful to create a base data structure:
struct AntennaArray {
    width: usize,
    height: usize,
    antennas: HashMap<char, Vec<Point>>,
}
To create the pairs of antennas in each group, I decided to create a simple cartesian_product iterator. A quick search turned up the cartesian_product in itertools, but I had enjoyed thinking about a simple approach, so I coded it up for plain vectors:
struct CartesianIterator<'a, T> {
    a: usize,
    b: usize,
    items: &'a Vec<T>,
}

impl<'a, T> Iterator for CartesianIterator<'a, T> {
    type Item = (&'a T, &'a T);

    fn next(&mut self) -> Option<Self::Item> {
        let (a, b) = (self.a, self.b);
        if a >= self.items.len() - 1 {
            return None;
        }
        if b >= self.items.len() - 1 {
            self.a += 1;
            self.b = self.a;
        }
        self.b += 1;
        Some((self.items.get(a).unwrap(), self.items.get(b).unwrap()))
    }
}

/// Returns an iterator for the cartesian product of a list of items.
fn cartesian_product<'a, T>(items: &'a Vec<T>) -> impl Iterator<Item = (&'a T, &'a T)> {
    CartesianIterator { a: 0, b: 1, items }
}
#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn iterates_cartesian() {
        assert_eq!(
            cartesian_product(&vec![1, 2, 3]).collect::<Vec<_>>(),
            vec![(&1, &2), (&1, &3), (&2, &3)]
        );
        assert_eq!(cartesian_product(&vec![1]).next(), None);
    }
}
The Point and Vector classes I had factored out earlier came in very handy in computing the location(s) of the antinodes. After adding multiplication by a scalar to the Vector, the locations of all the antinodes for an antenna pair of Points a, b could literally be expressed as a + (b - a) * 2 and b + (a - b) * 2 in the code, very neat.
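The same arithmetic can be sketched with plain tuples standing in for the Point/Vector types (hypothetical helper, not the crate code):

```rust
// For antennas a and b, the antinodes lie at a + (b - a) * 2 (one
// "antenna distance" past b) and b + (a - b) * 2 (past a).
fn antinodes(a: (i64, i64), b: (i64, i64)) -> [(i64, i64); 2] {
    let d = (b.0 - a.0, b.1 - a.1); // the vector b - a
    [
        (a.0 + 2 * d.0, a.1 + 2 * d.1), // past b
        (b.0 - 2 * d.0, b.1 - 2 * d.1), // past a
    ]
}

fn main() {
    // Antennas at (4, 3) and (5, 5) produce antinodes at (6, 7) and (3, 1).
    assert_eq!(antinodes((4, 3), (5, 5)), [(6, 7), (3, 1)]);
    println!("ok");
}
```

Out-of-bounds results are then simply dropped by the grid's within_bounds check.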
Day 9 – Disk Fragmenter
This puzzle asks to “defragment” a “disk” by filling in gaps in a string from the back of the string until all values are in a single block near the start of the string. The answer to the puzzle then is the sum of each number multiplied by its index.
0..111....22222
02.111....2222.
022111....222..
0221112...22...
02211122..2....
022111222......
That’s simple enough, we can just use two pointers to move the elements (the reverse pointer needs to skip over any blanks it encounters when moving). We stop when we arrive at the same index:
0..111....22222
 ^ ->      <- ^
02.111....2222.
  ^          ^
02211122..2....
        ^ ^
022111222......
         ^
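On the expanded form, the two-pointer idea is a few lines (a hypothetical helper; as described below, my actual solution works on the compressed format instead). None marks a free block:

```rust
// Move file blocks from the back into the gaps at the front until the
// two pointers meet.
fn compact(disk: &mut Vec<Option<u64>>) {
    let (mut i, mut j) = (0, disk.len() - 1);
    while i < j {
        if disk[i].is_some() {
            i += 1; // forward pointer skips filled blocks
        } else if disk[j].is_none() {
            j -= 1; // reverse pointer skips blanks
        } else {
            disk.swap(i, j); // move the last file block into the gap
        }
    }
}

fn main() {
    // "0..111....22222" from the example, as file ids.
    let mut disk: Vec<Option<u64>> = "0..111....22222"
        .chars()
        .map(|c| c.to_digit(10).map(u64::from))
        .collect();
    compact(&mut disk);
    let rendered: String = disk
        .iter()
        .map(|b| b.map_or('.', |id| char::from_digit(id as u32, 10).unwrap()))
        .collect();
    assert_eq!(rendered, "022111222......");
    println!("{rendered}");
}
```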
However, this string isn’t given directly, but instead in a kind of run-length-encoded format called a “disk map”. In the disk map, alternating numbers express the size of a file and the amount of free space that follows it (so, by construction, files and spaces always alternate in the input). For the above example, the disk map would be 12345:
0..111....22222
|| |  |   |
12 3  4   5
|| |
|| Second file has 3 blocks, ...
|Followed by 2 spaces
First file has 1 block
While I figured the problem was likely small enough to actually expand the input and then run the two-pointer algorithm above, I decided to challenge myself by trying to directly solve the problem on the compact format. My first idea was to solve this with iterators, creating a forward and a reverse iterator that would expand values on the fly from the compressed format by keeping a position and a counter of how many of the elements for a given position we had already returned. So, for example, after returning 0..11, the forward iterator would look like:
12345
  ^ ---> (moves this way)
  |
  +-- Iterator{pos: 2, count: 2}
We are returning elements for index 2 right now, and have
returned 2/3 of the elements at this index.
The reverse iterator is defined in the same fashion, but can skip any empty elements encountered. In my case I solved this by decrementing pos by two each time. For example, the reverse iterator that had returned 2222 so far would look like:
12345
    ^ <--- (moves that way)
    |
    +-- ReverseIterator{pos: 4, count: 4}
With these two iterators, we can solve the problem by iterating forward and, whenever we encounter a free block, filling in values from the reverse iterator.
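The forward direction can be sketched like this (hypothetical types; my real version also distinguishes file and free blocks with a Block enum, and the reverse variant additionally tracks the landing position described below):

```rust
// Expands a disk map like [1, 2, 3, 4, 5] into blocks on the fly:
// Some(file_id) for file blocks, None for free space.
struct Forward<'a> {
    map: &'a [usize], // the digits of the disk map
    pos: usize,       // which run we are currently expanding
    count: usize,     // how many blocks of that run we have returned
}

impl<'a> Iterator for Forward<'a> {
    type Item = Option<usize>;

    fn next(&mut self) -> Option<Self::Item> {
        // Skip to the next run once the current one is exhausted
        // (this also skips zero-length runs).
        while self.pos < self.map.len() && self.count >= self.map[self.pos] {
            self.pos += 1;
            self.count = 0;
        }
        if self.pos >= self.map.len() {
            return None;
        }
        self.count += 1;
        // Even runs are files (id = pos / 2), odd runs are free space.
        Some(if self.pos % 2 == 0 { Some(self.pos / 2) } else { None })
    }
}

fn main() {
    let map = [1usize, 2, 3, 4, 5];
    let rendered: String = Forward { map: &map, pos: 0, count: 0 }
        .map(|b| b.map_or('.', |id| char::from_digit(id as u32, 10).unwrap()))
        .collect();
    assert_eq!(rendered, "0..111....22222");
    println!("{rendered}");
}
```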
There was one interesting wrinkle with that plan, and that was knowing when to stop. Remember that with the pointers, we stop when both pointers have arrived at the same element. I made this work by doing a precomputation that sums up all the numbers in the input, giving us the total length of the output and using that in the reverse iterator to also return a “forward” position of where a given number would land in the output (by counting how many elements we had returned):
/// A disk map.
struct DiskMap {
    values: Vec<usize>,
    /// The total length of the output this DiskMap will _expand_ to.
    output_size: usize,
}

impl DiskMap {
    /// Creates a new DiskMap from a string like '12345'.
    fn new(s: &str) -> DiskMap {
        let mut sum = 0;
        let values = s
            .chars()
            .map(|c| {
                let v = c.to_digit(10).unwrap().try_into().unwrap();
                sum += v;
                v
            })
            .collect();
        DiskMap {
            values,
            output_size: sum,
        }
    }
}
// Backward iterator left as an exercise to the reader :-P
#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn iterates_backwards() {
        assert_eq!(
            DiskMap::new("12345").iter_backward().collect::<Vec<_>>(),
            vec![
                (14, Block::File(2)),
                (13, Block::File(2)),
                (12, Block::File(2)),
                (11, Block::File(2)),
                (10, Block::File(2)),
                // ...
                (5, Block::File(1)),
                (4, Block::File(1)),
                (3, Block::File(1)),
                // ...
                (0, Block::File(0)),
            ]
        );
    }
}
The second part of the puzzle asks to move whole files instead, working from the back, trying exactly once to move each file to the leftmost hole that would accommodate it. The iterators are not very useful here, as they operate on single blocks, and they aren’t very amenable to a solution that needs to search and insert data.
I think this was the first puzzle where the data structures I had picked for the first part weren’t suitable at all for the second part. Interesting.
The second puzzle seems easier to solve if we operate on contiguous runs of blocks, i.e., whole files or gaps. I built new data structures to represent these, and chose to keep spaces in an ordered set so we could efficiently scan for a free space.
/// Represents a file with the given id of size len starting at block begin.
#[derive(PartialEq, Eq, Debug)]
struct File {
    id: usize,
    begin: usize,
    len: usize,
}

/// Represents a run of free space of size len starting at block begin.
#[derive(PartialOrd, Ord, PartialEq, Eq, Clone, Debug)]
struct Free {
    len: usize,
    begin: usize,
}

/// A disk map suitable for compacting files in a contiguous manner.
#[derive(PartialEq, Eq, Debug)]
struct ContiguousDiskMap {
    /// Files, ordered by their id (small to large).
    files: Vec<File>,
    /// Spaces, ordered by size (small to large) and then their position (earlier first).
    spaces: BTreeSet<Free>,
    /// The total # of blocks the map expands to.
    block_size: usize,
}
With these data structures, I could walk backwards through the files, scanning for a matching space for each, and then move the file there (updating the map of spaces to reflect the changes). I hit two interesting cases to think about.
First, when moving a file, I just created a new space in its place. I first thought I needed to merge it with neighboring spaces so other files could later fit there. However, that isn’t actually necessary: files only ever move to the left, and when a file vacates space, the next files we consider are always to the left of that space (because files are in strictly increasing order at first).
Second, by simply scanning for a large enough space from the left and putting the file there, I accidentally moved files to the right sometimes (oops):
If we start out with a naive algorithm that just
scans for a large enough space from the left,
we sometimes move files in the wrong direction.
For example, here we move file 5 into the space
right behind it:
00...111...2...333.44.5555.6666.777.888899
...
0099.1117772...333.44.5555.....6666.8888..
0099.1117772...333.44.....5555.6666.8888..
^^^^^^^^^^
Um, no, that's not right.
After I implemented an extra check that ensured files only ever move to the left, I got the right result.
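The space lookup, including that extra check, can be sketched on the Free ordering above (a hypothetical helper; the derived Ord sorts lexicographically by (len, begin), so a range query bounds the size and we pick the leftmost among the big-enough spaces):

```rust
use std::collections::BTreeSet;

#[derive(PartialOrd, Ord, PartialEq, Eq, Clone, Copy, Debug)]
struct Free {
    len: usize,
    begin: usize,
}

// Finds the leftmost space with at least `len` blocks, but only if it
// lies before the file -- the check that keeps files from moving right.
fn find_space(spaces: &BTreeSet<Free>, len: usize, file_begin: usize) -> Option<Free> {
    spaces
        .range(Free { len, begin: 0 }..) // everything here has >= len blocks
        .min_by_key(|s| s.begin)         // leftmost of the big-enough spaces
        .copied()
        .filter(|s| s.begin < file_begin) // never move a file to the right
}

fn main() {
    let spaces: BTreeSet<Free> = [
        Free { len: 1, begin: 2 },
        Free { len: 3, begin: 11 },
        Free { len: 2, begin: 21 },
    ]
    .into_iter()
    .collect();
    // A 2-block file at position 30 fits the hole at 11 (leftmost of the
    // two holes that are big enough).
    assert_eq!(find_space(&spaces, 2, 30), Some(Free { len: 3, begin: 11 }));
    // The same file at position 5 has no big-enough hole to its left.
    assert_eq!(find_space(&spaces, 2, 5), None);
    println!("ok");
}
```

Note that min_by_key still walks the whole tail of the range; that was fast enough here, since the set shrinks as files move.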
Interlude – More refactoring
Given the next problem had an ASCII grid again, I took another look at how my Grid class got initialized, realizing I often needed to repeat code to turn a sequence of lines into a single array (peeking at the input to determine the width) and mapping the various characters to whatever custom structure I had come up with.
let mut lines = lines.peekable();
let width = lines.peek().unwrap().len();
let values: Vec<_> = lines.map(|s| s.chars().collect::<Vec<_>>()).flatten().collect();
let grid = Grid::new(width, values.len() / width, values);
I added a new method to the Grid to directly initialize from the sequence of lines that I commonly used as the input for each problem by supplying a function to map each of the characters to the desired type:
/// Initializes a Grid from an ASCII representation by reading
/// a sequence of lines (one per row in the grid) and applying
/// the map function to each character. The map function is given
/// the (x,y) coordinates and char value.
pub fn from_ascii<F>(lines: impl Iterator<Item = String>, map: F) -> Grid<V>
where
F: FnMut(usize, usize, char) -> V,
{
let mut lines = lines.peekable();
let width = lines.peek().unwrap().len();
let mut values = vec![];
for line in lines {
assert_eq!(width, line.len());
for char in line.chars() {
values.push(map(values.len() % width, values.len() / width, char));
}
}
Grid { width, height: values.len() / width, values }
}
So the earlier code (creating a grid of chars) now simply becomes:
let grid = Grid::from_ascii(lines, |_, _, c| c);
I decided to also give the map function the position, as some problems, like Day 6’s guard, need to know where in the grid a certain symbol occurs.
Day 10 – Hoof It
Ooh, a graph problem. I like graph problems. This puzzle presents a simple topological map and asks for “hiking paths” that take you from height 0 to height 9 in a steady incline, counting how many 9’s you can reach from each 0 (and the puzzle solution is the sum of these values for all 0’s). For example, this 0 “trailhead” has a score of 2 because two 9’s can be reached from it.
...0...
...1...
...2...
6543456
7.....7
8.....8
9.....9
This can be approached a number of ways. We can treat it as a path-finding problem, using Dijkstra’s algorithm or Floyd-Warshall to find all paths between 0’s and 9’s (and then doing some postprocessing to count them). I also thought about whether this could be treated as a (max-)flow problem using approaches like Ford-Fulkerson, but this seems less attractive as most flow algorithms consider exactly one pair of nodes, so we’d have to run this for every pair of 0’s and 9’s, while Dijkstra only needs to run once for each 0 and Floyd-Warshall only needs to run once. I also thought about a custom solution that tries to re-use computations across paths, and came up with the idea of descending from all mountains at once. That is, we could start at the 9’s and walk down all paths from them until we reach 0’s. Imagine pouring some water on the 9’s and following it down each path, seeing how much arrives at each 0. I figured with at least 3 viable solutions to choose from, I could do a little microbenchmark shootout here to see what works best.
Let’s start with an educated guess. Floyd-Warshall sounds attractive at first here, because it solves all pairs in a single run. However, we will likely have comparatively few 0’s and 9’s compared to other nodes, so we will be wasting a bunch of computation time figuring out pairs we won’t care about. If we assume all digits are roughly equally distributed, we’d have ~10% of 0’s and 9’s each, so the probability of having a pair of those, for which we care about the result, would be ~1%, which means a lot of wasted computation. We can also assume our graph is relatively sparse. Edges mostly form linear paths along our hiking trails, meaning each node would be connected to exactly one neighbor. There will be a few nodes where hiking paths branch off, but there will likely also be a lot of nodes that won’t have any hiking path crossing through them. So let’s squint and assume on average nodes will have about one edge. That is, for simplicity’s sake let’s assume V = E. So, with an input of width W and height H we will have V = WH nodes and as many edges. Based on this, we’d expect:
- Floyd-Warshall would run in O(V^3) time.
- Dijkstra would run in O(0.1V * (V + E) log V) = O(V^2 log V) time.
- Descend all mountains should run in O(V) time, based on the intuition that it needs to evaluate each node at most once and that each node should have ~1 edge on average. We can store nodes in a simple deque as there is no particular order for visiting the nodes of a particular height (and descending by height happens naturally as we append newfound nodes to the deque).
So we’d expect the Descend all mountains idea here to win by a good margin. And with this, the race was on! However, a first pit stop was soon required when I found a problem with my descend idea: I consistently got results that were too high compared to what Dijkstra gave me. After thinking about this for a while, I realized that the issue was that paths might fork and merge again. That is, we could have multiple paths leading to the same summit! In the example below, we have two paths leading from the 0 trailhead to the same 9 summit. The naive idea of just incrementing a counter on each tile as we descend ends up counting this as a +2 to the trailhead, while it should just be a +1.
...........
.32101234..
.45.....5..
..6789876..
But we can amend the idea by labeling tiles with a set of the peaks that can be reached instead of a simple score. Starting at the peaks (where each peak has a set containing just itself), we can descend the mountain and perform a set union each time we visit a new node. This way the above example ends up with the correct score, as we simply propagate the same single-peak set down both paths, which merges into a one-element set as we reach the 0. And with this change, I got the same - correct - answer from all 3 algorithms. Time to compare runtimes, and lo and behold, the predictions weren’t far off the mark:
| Base Algorithm | Runtime  |
| -------------- | -------- |
| “Descend All”  | 0.10s    |
| Dijkstra       | 2.52s    |
| Floyd-Warshall | 1039.21s |
NOTE: There isn’t a lot of science to the benchmarking here. I just ran each algorithm once under the time command on my MacBook Air M1, and I didn’t spend any time optimizing the details of each algorithm. I was surprised how clearly the differences were observable, despite the moderate input. A nice illustration of algorithmic complexity.
The second half of the puzzle asks to count unique paths. Funnily enough, that’s pretty much what I accidentally solved with the first idea of just “descending” with a count of paths feeding into a given node. So I dusted off that idea, and … immediately got numbers that were way larger than the examples. I had failed to consider the edge case where one node might be added to the queue multiple times by different nodes higher up. For example, in this case, the 8 node will be added to the queue twice as we evaluate each of the 9 nodes (and with a naive implementation would then be evaluated twice, leading to double counting down the road):
......
......
......
......
....89
....9.
After adding a check that made sure we only visit each node once, things worked fine.
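That path-counting descent can be sketched like this (again with hypothetical names and a plain grid; the FIFO queue processes nodes in descending height order, so each node's count is complete by the time it is expanded, and the visited flag ensures each node is expanded only once):

```rust
use std::collections::VecDeque;

// Each tile accumulates the number of distinct downhill paths arriving
// from the 9s; the answer is the sum of the counts at the 0s.
fn trailhead_ratings(grid: &[Vec<u32>]) -> u64 {
    let (h, w) = (grid.len(), grid[0].len());
    let mut paths = vec![vec![0u64; w]; h];
    let mut queued = vec![vec![false; w]; h];
    let mut queue = VecDeque::new();
    for y in 0..h {
        for x in 0..w {
            if grid[y][x] == 9 {
                paths[y][x] = 1; // one path starts at each peak
                queued[y][x] = true;
                queue.push_back((x, y));
            }
        }
    }
    while let Some((x, y)) = queue.pop_front() {
        for (dx, dy) in [(0i64, 1i64), (0, -1), (1, 0), (-1, 0)] {
            let (nx, ny) = (x as i64 + dx, y as i64 + dy);
            if nx < 0 || ny < 0 || nx >= w as i64 || ny >= h as i64 {
                continue;
            }
            let (nx, ny) = (nx as usize, ny as usize);
            if grid[ny][nx] + 1 != grid[y][x] {
                continue; // only step down by exactly one
            }
            paths[ny][nx] += paths[y][x];
            if !queued[ny][nx] {
                queued[ny][nx] = true; // enqueue each node at most once
                queue.push_back((nx, ny));
            }
        }
    }
    (0..h)
        .flat_map(|y| (0..w).map(move |x| (x, y)))
        .filter(|&(x, y)| grid[y][x] == 0)
        .map(|(x, y)| paths[y][x])
        .sum()
}

fn main() {
    // A trailhead with three distinct paths to the single peak.
    let grid: Vec<Vec<u32>> = [
        ".....0.", "..4321.", "..5..2.", "..6543.", "..7..4.", "..8765.", "..9....",
    ]
    .iter()
    .map(|r| r.chars().map(|c| c.to_digit(10).unwrap_or(100)).collect())
    .collect();
    assert_eq!(trailhead_ratings(&grid), 3);
    println!("ok");
}
```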