Anybody got some ideas to optimize today? I've got it down to 65 ms (total) on my desktop, using A* with a visitation map. Each cell in the visitation map contains (in part 1) 16 entries: 4 per direction of movement, one for each level of straightaway. For part 2, I use a map with 11 entries per direction.
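
To make that state layout concrete, here's a minimal sketch of how such a visitation map could be stored as one flat array (the struct and names are illustrative, not the actual repo code):

```rust
// One slot per (cell, direction, straightaway) state.
// Part 1: 4 directions * 4 straightaway levels = 16 entries per cell.
// Part 2: 4 directions * 11 straightaway levels = 44 entries per cell.
struct Visited {
    best: Vec<u32>, // best known distance per state; u32::MAX means unvisited
    width: usize,
    combos: usize, // 4 for part 1, 11 for part 2
}

impl Visited {
    fn new(width: usize, height: usize, combos: usize) -> Self {
        Self {
            best: vec![u32::MAX; width * height * 4 * combos],
            width,
            combos,
        }
    }

    // Index by cell position, direction (0..4) and current straightaway length.
    fn index(&self, x: usize, y: usize, dir: usize, combo: usize) -> usize {
        ((y * self.width + x) * 4 + dir) * self.combos + combo
    }

    fn get(&self, x: usize, y: usize, dir: usize, combo: usize) -> u32 {
        self.best[self.index(x, y, dir, combo)]
    }

    fn set(&mut self, x: usize, y: usize, dir: usize, combo: usize, dist: u32) {
        let i = self.index(x, y, dir, combo);
        self.best[i] = dist;
    }
}
```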

Optimizations I’ve implemented:

  • use a 2D array instead of a hashset/map. No idea how much this saves, since I did it that way from the start.
  • for part 1, the minimum distance recorded for a specific cell's direction + combo also applies to any higher combo level. For part 2, we do the same whenever the current combo is greater than 4*. Gains about 70(!!) ms. (There's a sketch of this check after the footnote below.)
  • A* heuristic weighting: overweighting a Manhattan-distance heuristic by about 1% seems to gain about 15 ms (might be specific to my input, though); a sketch follows this list.
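
For what it's worth, here's what that weighting could look like (the function name, goal handling, and integer scaling are my own assumptions, not the repo's code):

```rust
// Overweighting a Manhattan-distance heuristic by ~1%. This makes the search
// greedier and technically breaks admissibility, so optimality is no longer
// guaranteed in general, even if it happens to hold on this input.
fn weighted_heuristic(x: usize, y: usize, goal_x: usize, goal_y: usize) -> u32 {
    // The goal is the bottom-right corner, so no abs() is needed.
    let manhattan = (goal_x - x + goal_y - y) as u32;
    // 1% overweight in integer arithmetic.
    manhattan * 101 / 100
}
```

The priority pushed onto the open set would then be something like dist_so_far + weighted_heuristic(x, y, goal_x, goal_y).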

*Correctness-wise: the reason we split by direction is that there's a real difference between, say, arriving at a cell going up with a 3 combo but a really short path versus arriving going right with a 0 combo but a long path. Within the same direction, though, this is fine: a 3 combo is identical to a 0 combo except that it is strictly more restrictive, so the 0 combo's distance is a valid bound for it.
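
A minimal sketch of that dominance check, under my own assumptions about how the combo counter and the per-cell array are laid out (this is not the repo's code):

```rust
// `best` holds the best known distances for ONE cell + direction, indexed by
// combo level. MIN_STRAIGHT is an assumption about how the combo is counted
// for part 2 (the point at which the crucible is allowed to turn).
const MIN_STRAIGHT: usize = 4;

/// Returns true if a state with this `combo` and `dist` is dominated by an
/// already-recorded state at the same cell and direction: one with a
/// lower-or-equal combo (which can do everything we can) and a distance
/// that is no worse.
fn is_dominated(best: &[u32], combo: usize, dist: u32, part2: bool) -> bool {
    let lo = if !part2 {
        0 // part 1: any lower combo in the same direction dominates
    } else if combo >= MIN_STRAIGHT {
        MIN_STRAIGHT // part 2: only comparable once the state is free to turn
    } else {
        combo // part 2, still forced straight: only the exact combo compares
    };
    best[lo..=combo].iter().any(|&d| d <= dist)
}
```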

Optimizations that could be done, but where I'd need to ensure correctness:

The same dominance idea, but across directions: if I'm at a cell with a specific combo + direction, does that imply anything about the minimum distance for other directions? Simply applying the combo rule to every non-opposite direction isn't correct.

Code: https://codeberg.org/Sekoia/adventofcode/src/branch/main/src/y2023/day17.rs

Warning: it's quite ugly; there are something like 8 copy-pastes of the code that adds to the queue.

  • hades@lemm.ee

    I did a different heuristic for A*: I ran a Dijkstra backwards from the bottom right corner and computed the heat losses for each block if there were no movement restrictions whatsoever. Maybe that’ll help shave more milliseconds :)
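
    Roughly, the idea looks like this (a sketch, not my actual code): run one unconstrained Dijkstra backwards from the goal and use the per-cell result as h. Since constraints are only removed, it never overestimates the true remaining cost.

    ```rust
    use std::cmp::Reverse;
    use std::collections::BinaryHeap;

    /// `grid[y][x]` is the heat loss for entering block (x, y).
    /// Returns `h[y][x]`: the cheapest total heat loss from (x, y) to the
    /// bottom-right corner, ignoring the movement restrictions entirely.
    fn relaxed_costs(grid: &[Vec<u32>]) -> Vec<Vec<u32>> {
        let (rows, cols) = (grid.len(), grid[0].len());
        let mut dist = vec![vec![u32::MAX; cols]; rows];
        let mut heap = BinaryHeap::new();
        dist[rows - 1][cols - 1] = 0;
        heap.push(Reverse((0u32, cols - 1, rows - 1)));

        while let Some(Reverse((d, x, y))) = heap.pop() {
            if d > dist[y][x] {
                continue; // stale heap entry
            }
            for (dx, dy) in [(1i64, 0i64), (-1, 0), (0, 1), (0, -1)] {
                let (nx, ny) = (x as i64 + dx, y as i64 + dy);
                if nx < 0 || ny < 0 || nx as usize >= cols || ny as usize >= rows {
                    continue;
                }
                let (nx, ny) = (nx as usize, ny as usize);
                // A backward step (x, y) -> (nx, ny) mirrors a forward move
                // that enters (x, y), so the edge weight is grid[y][x].
                let nd = d + grid[y][x];
                if nd < dist[ny][nx] {
                    dist[ny][nx] = nd;
                    heap.push(Reverse((nd, nx, ny)));
                }
            }
        }
        dist
    }
    ```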

    • lwhjp@lemmy.sdf.org

      Clever! And removing constraints doesn’t increase the path cost, so it won’t be an overestimate.

  • lwhjp@lemmy.sdf.org

    Some (not very insightful or helpful) observations:

    • The shortest path is likely to be mostly monotonic (it’s quite hard for the “long way round” to be cost-effective), so the Manhattan distance is probably a good metric.
    • The center of the puzzle is expensive, so the straight-line distance is probably not a good metric.
    • I'm pretty sure the shortest route (for part one at least) can't self-intersect. Implementing this as a constraint probably wouldn't speed things up, and there might be some pathological case where it isn't true.

    Not an optimization, but I suspect that a heuristic-based “reasonably good” path such as a human would take will be fairly close to optimal.