See https://github.com/jolod/advent-of-code-2017.
The reason I started doing AoC 2017--I have not done it any other year--was that I could use a one-liner in Perl without leaving the console. I was tempted to use the CPAN module List::Util, which provides a sum0 function, but then and there I set a precedent that I should only use core modules for Perl.
Part two is more of the same, just that the gap is half the length minus one, and the first half of the string is appended to the string.
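As a rough sketch of the same idea in Python (using modular indexing rather than the Perl append-and-gap trick):
def captcha(digits, gap):
    # Sum every digit that matches the digit `gap` positions further on (circularly).
    n = len(digits)
    return sum(int(d) for i, d in enumerate(digits) if d == digits[(i + gap) % n])

# Part 1: captcha(s, 1).  Part 2: captcha(s, len(s) // 2).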
Part 1 was once again a one-liner in Perl, using the -a switch this time.
In my solution to part 2 I decided to interpret the problem as allowing the same number to occur twice in a list. To handle that, I needed to index the numbers. It turned out that for my input there are no duplicates in the lists.
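A sketch of that interpretation in Python: comparing positions rather than values means a value that happens to occur twice can still form a pair.
def divisible_pair_quotient(row):
    # Find the pair where one value evenly divides the other and return the quotient.
    # Comparing indices (i != j) instead of values allows duplicate numbers in the row.
    for i, a in enumerate(row):
        for j, b in enumerate(row):
            if i != j and a % b == 0:
                return a // b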
For part 1 I decided to just do the math. I wrote the formula in Perl just because I didn't have to leave the terminal.
Part 2 however required some programming. The first thing I did was to create a function that gives the coordinates that draw the (empty) square at a distance radius from the center along any of the axes. So the values 2 to 9 lie at radius 1, values 10 to 25 lie at radius 2, etc. Coordinates were generated by starting at the lower right corner, and walking up, left, down, and right.
I then created a map from coordinates to values, initialized with [0 0] mapping to 1. Then I just summed the neighbor values that exist in the map and then added the new entry to the map. I dropped values from this infinite list until I found a value which was larger than the input, and then I had found my answer as the first element in the resulting list.
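A sketch of the same approach in Python, although it walks the spiral with the textbook right/up/left/down runs instead of generating whole rings at a time:
from itertools import count, cycle

def spiral_coords():
    # Walk the spiral: 1 step right, 1 up, 2 left, 2 down, 3 right, 3 up, ...
    x = y = 0
    yield x, y
    directions = cycle([(1, 0), (0, 1), (-1, 0), (0, -1)])
    for steps in count(1):
        for _ in range(2):          # each run length is used for two directions
            dx, dy = next(directions)
            for _ in range(steps):
                x, y = x + dx, y + dy
                yield x, y

def first_spiral_sum_larger_than(target):
    values = {(0, 0): 1}
    coords = spiral_coords()
    next(coords)                    # the origin is already seeded with 1
    for x, y in coords:
        total = sum(values.get((x + dx, y + dy), 0)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
        values[(x, y)] = total
        if total > target:
            return total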
Part 1 was just grep with a Perl regex, piped to wc -l.
Part 2 was similar but I used only Perl to find the invalid passphrases, and made use of Perl's END block and the special variable $. to calculate the number of valid phrases.
Really straightforward. Changed language for part 2 just because.
I really fancy this solution.
(defn reallocation-routine [banks]
  (iterate redistribute banks))
That is just brilliant. Now, you might say that this is just like a loop in an imperative language. But it is not. The difference is how you combine this with other code. An imperative loop solution might look like
banks = ...
seen = set()
seen.add(banks)
while True:
    banks = redistribute(banks)
    if banks in seen:
        break
    seen.add(banks)
print(len(seen))
In contrast the Clojure code is
(print (count (take-while-unique (reallocation-routine banks))))
In the imperative version the loop is not abstracted away and composable with other computations, as it is in the functional code.
The Python equivalent would be to write a generator:
def reallocation_routine(banks):
    while True:
        yield banks
        banks = redistribute(banks)
and this you can use the same way as the Clojure solution--almost. The downside is that generators are consumed, while infinite lists are not. Typically you either just take a finite part, or you drop a finite part and then take a finite part. The dropped part is garbage collected, so you need not worry about memory usage. If you "hold on to the head" as it's usually phrased, then you might run into memory issues:
(let [banks ...
      banks-seq (reallocation-routine banks)
      second-bank (second banks-seq)
      millionth-bank (nth banks-seq (dec 1000000))] ; Will eat up memory.
  ...)
The workaround is to simply call reallocation-routine again to create a new list for which you don't hold on to the head.
(let [banks ...
      mk-banks-seq #(reallocation-routine banks)
      second-bank (second (mk-banks-seq))
      millionth-bank (nth (mk-banks-seq) (dec 1000000))] ; Will not eat up memory.
  ...)
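For completeness, a Python counterpart of the take-while-unique composition could look like this (take_while_unique is my own name for the helper, and redistribute is assumed to return something hashable, such as a tuple):
def take_while_unique(iterable):
    # Yield items until the first repeated item would be yielded.
    seen = set()
    for item in iterable:
        if item in seen:
            return
        seen.add(item)
        yield item

# Part 1: sum(1 for _ in take_while_unique(reallocation_routine(banks)))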
For part 1 my strategy was to walk towards the root from an arbitrary starting node. To achieve that I reversed the arrows in the input, so instead of
a -> b, c, d
I created
b -> a
c -> a
d -> a
and just followed the arrows until I hit a node which pointed nowhere. That was the root.
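A sketch of that walk in Python, with the input already parsed into a dict from each node to its children (the names are my own):
def find_root(children):
    # Reverse the arrows: child -> parent.
    parent_of = {child: parent
                 for parent, kids in children.items()
                 for child in kids}
    node = next(iter(children))     # start anywhere
    while node in parent_of:        # follow the reversed arrows upwards
        node = parent_of[node]
    return node

# find_root({'a': ['b', 'c', 'd'], 'e': ['a']}) == 'e'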
The second part is trickier, because my code does not work in general. It works for my input, which is good enough for AoC, but unsatisfactory still.
TODO: Explain why day 7 part 2 does not work in general.
The interpretation of the question is not totally clear.
I chose to initialize registers with zeros for all registers that were present in the program. The reason is that if we have a program that only decreases values, and some register is only used in a conditional position, or every write to it is guarded by a conditional that evaluates to false, then that register will be left at zero. Zero should then be the largest value, but if the register is never initialized and never written to, we will not see it in registers.
This interpretation of the problem may be questioned. It could be argued that the answer is always at least zero, since there is presumably some register that the program does not use at all, and that register will be zero. That would imply that there are infinitely many registers though, which is why I favor the interpretation above.
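As a minimal sketch of this interpretation (not the code in the repository), assuming the usual "b inc 5 if a > 1" input format:
import operator

CMP = {'>': operator.gt, '<': operator.lt, '>=': operator.ge,
       '<=': operator.le, '==': operator.eq, '!=': operator.ne}

def run(program):
    lines = [line.split() for line in program.strip().splitlines()]
    # Seed every register mentioned anywhere in the program with zero.
    registers = {name: 0 for line in lines for name in (line[0], line[4])}
    highest_ever = 0                # the registers start at zero, so start here
    for reg, op, amount, _if, cond_reg, cmp, rhs in lines:
        if CMP[cmp](registers[cond_reg], int(rhs)):
            registers[reg] += int(amount) if op == 'inc' else -int(amount)
            highest_ever = max(highest_ever, registers[reg])
    return max(registers.values()), highest_ever   # part 1, part 2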
Now this is a fun one. There's the obvious solution (recursion) and there's the regex solution!
First I cleaned the input by removing everything that matched ,?<(?:!.|[^>])*>,?, and then normalized the commas. So, starting with the sample string {{{},{},{{}}}}, the reduction looks like this:
{{{},{},{{},},},}, # } is always followed by a comma.
{{x,x,{x,},},}, # {} counts as one, so replace with one x.
{{x,x,x,xx,},}, # A single x in a group is equivalent to x,xx.
# The inner x is increased and the group itself is represented by the leading x.
{{x,x,xx, xx,},}, # Take the first x and increase it by one, and move it to the end of the group
# The space signifies that it has been increased already.
{{x,xx, xx,xx,},}, # The same for the next x.
{{xx, xx,xx,xx},}, # This is now the last element. Increase by one, add to the end, and insert a new x for the group itself.
{x,xx,xx,xx,xxx,}, # Start iterating over the elements again.
{xx,xx,xx,xxx, xx,}, # ...
{xx,xx,xxx, xx,xxx,}, # ...
{xx,xxx, xx,xxx,xxx,}, # ...
{xxx, xx,xxx,xxx,xxx,}, # ...
x,xx,xxx,xxx,xxx,xxxx, # No more groups to reduce.
Then all the x's were counted.
Part two was again a simple regex solution.
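For reference, the "obvious" non-regex solution would be a single pass over the stream, something along these lines in Python (covering both parts):
def score_stream(stream):
    total = depth = garbage_chars = 0
    in_garbage = False
    chars = iter(stream)
    for ch in chars:
        if in_garbage:
            if ch == '!':
                next(chars, None)        # ! cancels the next character
            elif ch == '>':
                in_garbage = False
            else:
                garbage_chars += 1       # part 2: count garbage characters
        elif ch == '<':
            in_garbage = True
        elif ch == '{':
            depth += 1
            total += depth               # part 1: each group scores its depth
        elif ch == '}':
            depth -= 1
    return total, garbage_chars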
Not much to comment on.
TODO: Write about day 11.
I encoded the connections through a map from nodes to sets of nodes. Finding reachable nodes was just a loop following all connections. The second part was just a matter of repeatedly selecting an arbitrary node and removing all nodes connected to it from the connection map, until the map was empty.
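A sketch of both parts in Python, with the connections already parsed into a dict from each node to the set of nodes it connects to (the names are my own):
def reachable(connections, start):
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node not in seen:
            seen.add(node)
            frontier.extend(connections[node])
    return seen

def count_groups(connections):
    remaining = dict(connections)
    groups = 0
    while remaining:
        # Pick an arbitrary node and drop everything it can reach.
        for node in reachable(remaining, next(iter(remaining))):
            remaining.pop(node, None)
        groups += 1
    return groups

# Part 1: len(reachable(connections, 0)).  Part 2: count_groups(connections).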
This puzzle had a couple of interesting features.
The first is that the text encourages simulation. I can imagine each scanner being an object that moves position and that you have a packet that moves forward. However, it's completely unnecessary. The packet is not moving sideways in any way, so it's only necessary to know whether the scanner will be at position zero when the packet passes it.
The scanner returns to zero every 2 * (range - 1) steps, so if this cycle time evenly divides the time it takes for the packet to get to the scanner, the packet is caught. The time it takes for the packet to get to a scanner is the scanner's depth. So
depth % (2 * (range - 1)) == 0
is the criterion for being caught by that scanner.
For part 2 it would be easy to use a function that calculates the severity and look for a delay where the severity is zero. Now, I have two issues with that. The first issue is that the first scanner always gives a severity of zero even if you are caught, so you need to guard against that somehow. Secondly, it's unnecessary to keep iterating over the rest of the scanners after one scanner has caught the packet; short-circuiting there saves a lot of time.
I also chose to introduce a delay parameter. While not strictly necessary in this case, I opted not to introduce the delay by moving the scanners away from the packet by increasing the depth, since that would also affect the severity. In an imagined third part of the puzzle perhaps the worst-case delay would be asked for. In a less imaginary scenario, anyone reading the code might think it's a bug rather than a hack exploiting that you do not actually use the severity value.
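A sketch of both parts in Python, with the firewall parsed into a dict from depth to range (the values below are the example firewall from the puzzle text):
from itertools import count

def caught(scanners, delay=0):
    # any() short-circuits at the first scanner that catches the packet.
    return any((depth + delay) % (2 * (rng - 1)) == 0
               for depth, rng in scanners.items())

def severity(scanners):
    return sum(depth * rng for depth, rng in scanners.items()
               if depth % (2 * (rng - 1)) == 0)

scanners = {0: 3, 1: 2, 4: 4, 6: 4}
print(severity(scanners))                                    # part 1: 24
print(next(d for d in count() if not caught(scanners, d)))   # part 2: 10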
TODO: Write about day 14.
Since the problem involved generators I used Python and generators!
The part about using binary representation and only looking at the last digits is a red herring. You can use a bitmask instead.
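A sketch of the generators-plus-bitmask idea; the factors, divisor criteria, and pair counts are the ones from the puzzle text, while the starting values come from your input:
def generator(value, factor, multiple=1):
    while True:
        value = value * factor % 2147483647
        if value % multiple == 0:       # part 2 adds the multiple-of criterion
            yield value

def judge(gen_a, gen_b, pairs):
    # Compare the lowest 16 bits with a mask instead of binary strings.
    return sum(((next(gen_a) ^ next(gen_b)) & 0xFFFF) == 0 for _ in range(pairs))

# Part 1: judge(generator(a_start, 16807), generator(b_start, 48271), 40_000_000)
# Part 2: judge(generator(a_start, 16807, 4), generator(b_start, 48271, 8), 5_000_000)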
Again I decided to use regular expressions just because I could.
A spin of 5 was turned into ^(.*)()(.{5})$, an exchange of 3 9 was turned into (?<=^.{3})(.)(.{5})(.), and partner with a and b was turned into ([ab])(.*)([ab]). The first and third capture groups then swap places.
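In Python the same trick would look something along these lines, with every move being a re.sub that swaps the first and third groups:
import re

def spin(order, n):
    # Move the last n programs to the front.
    return re.sub(r'^(.*)()(.{%d})$' % n, r'\3\2\1', order)

def exchange(order, i, j):
    # Swap the programs at positions i and j (i < j).
    return re.sub(r'(?<=^.{%d})(.)(.{%d})(.)' % (i, j - i - 1), r'\3\2\1', order)

def partner(order, a, b):
    # Swap the two named programs, wherever they are.
    return re.sub('([%s%s])(.*)([%s%s])' % (a, b, a, b), r'\3\2\1', order)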
Part 1 was really straightforward. Just add to the buffer according to the instructions.
The trick for part 2 is to realize that the zero does not move; it always stays as the first element in the buffer. So we only need to keep track of the size of the buffer and record the value every time the current position is zero (and that value would have been inserted right after zero).
I did two versions: one naïve version that increases the buffer one by one and calculates the current position for every insertion. That took a few seconds to execute, so if it had been five billion insertions, it would not have been the best of solutions. So I decided to write a fast version, where the runtime scales logarithmically (i.e. sublinearly) with the number of insertions. If the buffer is one thousand long, the current position is in the beginning, and you have a step size of ten, then you have about 100 insertions until you could end up at zero. So I make a "batch" insertion of max(1, (insertion - pos) // step) where insertion is the next value to be inserted. This way, with every loop iteration the buffer grows faster and faster and you reach five million, or five quadrillion, in no time.
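A sketch of the fast version in Python; the batch size calculation here is my own reconstruction of the idea rather than the exact formula in the repository:
def value_after_zero(step, insertions):
    pos, answer, i = 0, None, 1
    while i <= insertions:
        if pos + step < i:
            # No wrap-around for a while: each insertion moves the position
            # forward by step + 1 while the buffer grows by 1, so we can
            # fast-forward a whole batch of insertions at once.
            batch = min((i - pos - step - 1) // step + 1, insertions - i + 1)
            pos += batch * (step + 1)
            i += batch
        else:
            pos = (pos + step) % i + 1
            if pos == 1:              # inserted right after the zero
                answer = i
            i += 1
    return answer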
I particularly found part 2 interesting. But generally I found this piece from part 1 to be pretty nice:
(case op
  :set (update-register a write (value b))
  :add (update-register a + (value b))
  :mul (update-register a * (value b))
  :mod (update-register a mod (value b))
  :snd (set-frequency (value a))
  :rcv (if (zero? (value a))
         (do-nothing)
         (update-register a write current-frequency))
  :jgz (if (pos? (value a))
         (jump (value b))
         (do-nothing))))))
update-register, write, value, set-frequency, do-nothing, and jump are all local functions defined in a surrounding let. It's a very direct translation of the problem statement. They all return a tagged union in the form of a vector where the first element is a keyword (the tag).
Other than that it was basically just a reduction, again, over instructions.
The interesting part about part 2 was the scheduling. In Clojure the communication could have been done through core.async channels, but that would not work for the deadlock detection, since the deadlock would then be a proper deadlock in the Clojure code.
One design could have been a bit more OO in style, I guess, where send would actually write to a mutable queue in the other program (via atoms). I discarded that idea because 1) it uses atoms unnecessarily, and 2) if I wanted to run a whole bunch of instructions before letting the other program execute, I would either have to return an extra flag to indicate whether the first program did a send, so that the next program, if it was waiting for something, actually could execute, or I would need to query the program whether it was waiting and whether it actually had something in its queue. The logic became too convoluted.
I opted to go a bit overkill and solve the problem for an arbitrary number of programs, where the send instruction would pass the value to the next program (and the last would send to the first). I also decided to alternate the programs as often as possible. So I maintained a queue of running programs, and when an instruction was executed for one program it was moved to the end of the queue. The return value indicated whether the program could continue to execute, whether the program sent a value (and that value), whether the program was waiting for more input, or finally whether it was done. If it did send a value, the waiting program was put back at the front of the queue of running programs.
It is not the programs themselves that put values in any queue; that is the job of the operating system, which also does the scheduling. With this design it is trivial to change the scheduling so that it executes as much as it can for one program before handing over to the other. Also, interestingly, it's easy to do that in parallel since no program can affect any other program. So the run-all-you-can bit can be put in futures and actually run in parallel.
When there are no running programs, and no waiting programs, the computer "shuts down". Otherwise, if there are any waiting programs, a deadlock is reported. I guess, technically, it might not be a deadlock if one program terminates while the other is waiting for input from it. It's more like a broken pipe.
(I actually called a program a computer, and the programs, i.e. computers, made up a system. So the analogy is that you have a network of computers that send messages instead of two programs that talk to each other running in the same computer.)
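A sketch of that kind of scheduler in Python; the step contract is an assumption for the sketch, not the actual interface in the repository: step(program, inbox) runs one instruction and returns (status, new-program-state, payload), where status is one of 'ok', 'send', 'wait', or 'done', and payload is the sent value for 'send'.
from collections import deque

def run_system(programs, step):
    # The "operating system" owns the inboxes and the scheduling; the programs
    # themselves never touch each other's queues.
    inboxes = [deque() for _ in programs]
    running = deque(range(len(programs)))
    waiting = set()
    while running:
        i = running.popleft()
        status, programs[i], payload = step(programs[i], inboxes[i])
        if status == 'ok':
            running.append(i)
        elif status == 'send':
            target = (i + 1) % len(programs)   # the last program sends to the first
            inboxes[target].append(payload)
            if target in waiting:              # wake the receiver up...
                waiting.discard(target)
                running.appendleft(target)     # ...at the front of the queue
            running.append(i)
        elif status == 'wait':
            waiting.add(i)
        # 'done': the program is simply never rescheduled.
    return 'deadlock or broken pipe' if waiting else 'clean shutdown'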
Very straightforward. I wrote the first version in Perl, and then I wanted to see how the code would look in Python (see day19-part12.py). It is exactly the same solution, and shows that the difference between "normal" Perl and Python is not that large. The biggest difference is the superficial syntax.
For part 1 I just sorted the particles (using sort -g) by the sum of the magnitudes of the accelerations (which I calculated using a Perl one-liner, see day20-part1.sh), which narrowed it down to two candidates, and I could see which one of them was the answer. For completeness, I have added a solution which fully computes the answer in Python. More on that below.
Part 2 I did in Clojure, but I'm not sure it was the best choice, at least not if I allowed myself to use numpy in Python.
The way I solved it was by writing down the equations. A collision between two particles occurs if they have the same position at any time. Given p0, v0, and a0 we have
p[0] = p0
p[n] = p[n-1] + v[n]
v[0] = v0
v[n] = v[n-1] + a0
We can rewrite v[n] to
v[n] = v0 + a0 * n
Similarly we can rewrite p[n].
p[n] = p0 + sum{k = 1..n} v[k]
p[n] = p0 + sum{k = 1..n} (v0 + a0 * k)
p[n] = p0 + v0 * n + sum{k = 1..n} (a0 * k)
p[n] = p0 + v0 * n + a0 * (sum{k = 1..n} k)
p[n] = p0 + v0 * n + a0 * (n * (n + 1) / 2)
p[n] = p0 + v0 * n + a0 * n / 2 + a0 * n**2 / 2
p[n] = p0 + (v0 + a0 / 2) * n + a0 / 2 * n**2
So p[n] is a second degree polynomial, and two particles collide if there is any n such that p1[n] = p2[n], i.e. p1[n] - p2[n] = 0. So by solving that with the quadratic formula we find zero, one, or two values of n. For a collision, n must be a non-negative integer, so you may still have zero, one, or two values of n that give a collision. If there are two values, only the lowest is of interest.
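A sketch in Python of how the collision time for a pair of particles could be computed from those equations; multiplying through by two keeps all coefficients integral. The data layout is my own: each particle is a (position, velocity, acceleration) triple of 3-tuples.
from math import isqrt

def integer_roots(a, b, c):
    # Non-negative integer solutions of a*n^2 + b*n + c == 0.
    # Returns None to mean "every n is a solution".
    if a == 0:
        if b == 0:
            return None if c == 0 else set()
        n, rem = divmod(-c, b)
        return {n} if rem == 0 and n >= 0 else set()
    disc = b * b - 4 * a * c
    if disc < 0:
        return set()
    root = isqrt(disc)
    if root * root != disc:
        return set()
    solutions = set()
    for numerator in (-b - root, -b + root):
        n, rem = divmod(numerator, 2 * a)
        if rem == 0 and n >= 0:
            solutions.add(n)
    return solutions

def collision_time(particle1, particle2):
    # Earliest non-negative integer time at which the particles coincide, or None.
    candidates = None                     # None = unconstrained so far
    for axis in range(3):
        dp = particle1[0][axis] - particle2[0][axis]
        dv = particle1[1][axis] - particle2[1][axis]
        da = particle1[2][axis] - particle2[2][axis]
        # 2 * (p1[n] - p2[n]) = da*n^2 + (2*dv + da)*n + 2*dp
        solutions = integer_roots(da, 2 * dv + da, 2 * dp)
        if solutions is None:
            continue                      # this axis never separates them
        candidates = solutions if candidates is None else candidates & solutions
        if not candidates:
            return None
    if candidates is None:
        return 0                          # identical motion: collide immediately
    return min(candidates)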
Next, I grouped all pairs of particles by the time n at which they collided. For every such group, I removed the particle pairs from the set of all remaining particles, but only if neither of the particles had already been removed at an earlier time--a particle cannot collide with a particle that has already collided.
For completeness I wrote a proper solution to part 1. The problem can be described as finding the particle with the minimum
dist[n] = abs(p_x[n]) + abs(p_y[n]) + abs(p_z[n])
for arbitrarily large n.
For large n the highest-power term will dominate (typically the acceleration), and if it is negative then we need to negate the whole expression, otherwise it can be kept as it is. That means that we can normalize each of p_x, p_y, and p_z based on the dominant term and then forget about abs.
I used a list of position, speed, acceleration to represent the function p, so it was easy to manipulate and add the polynomial expressions, and finally I just sorted them, first comparing acceleration, then velocity, and lastly position.
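A sketch of that in Python (again, representing each particle as a (position, velocity, acceleration) triple of 3-tuples is my own choice):
def normalize(p0, v0, a0):
    # Flip the sign so that the dominant term for large n is non-negative;
    # then abs() can be dropped for sufficiently large n.
    dominant = a0 or v0 or p0
    return (p0, v0, a0) if dominant >= 0 else (-p0, -v0, -a0)

def closest_in_the_long_run(particles):
    def key(particle):
        p, v, a = particle
        total_p = total_v = total_a = 0
        for axis in range(3):
            np0, nv0, na0 = normalize(p[axis], v[axis], a[axis])
            total_p, total_v, total_a = total_p + np0, total_v + nv0, total_a + na0
        # For large n the highest-degree coefficient decides, then the next, ...
        return (total_a, total_v, total_p)
    return min(range(len(particles)), key=lambda i: key(particles[i]))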
TODO: Write about day 21 part 1.
Part 2 was just part 1 with a different number of iterations. It took a while to run, but still faster than a minute so no need to change any code.
For even more iterations I would instead use an imperative language and use (one-dimensional) arrays to represent the image, and do index calculations. I would use a function along these lines:
(defn ranges [n factor i j]
  (for [k (range factor)]
    (let [idx (+ (* k n)
                 (* i factor n)
                 (* j factor))]
      [idx (+ idx factor)])))
This function calculates the ranges that represent a subsquare of the image. So for a 4 by 4 image with factor 2 and i and j equal to zero you would have [[0 2] [4 6]] (where the end points are not included). This can be used both to read a square from the image and to write a square to the new image.
The algorithm described in part 1 goes under another name: Langton's Ant. It's a bit fascinating; it's believed that it always ends up creating a "highway" after some time of apparent chaos regardless of the starting state, but no one has managed to prove this. It's also Turing complete, which isn't exactly obvious. Another interesting property is that it is completely reversible, even though it overwrites its past. To make the ant erase everything, just turn it around! If you continue when it's back to the start it will recreate the same pattern but in the opposite direction.
I did it in Clojure because I already had a version of Langton's Ant in Clojure, complete with drawing the map and everything. It's a bit mesmerizing watching it, in particular if you have two ants that just happen to interact so that they start reversing and have warping edges. Then they can get stuck doing that forever.
Another fun variation is to add some noise to the ant's movement. For instance, maybe it makes the wrong turn with a certain probability. If the probability is low enough, it will get stuck in highways, but due to random chance it will break out and wreak havoc for a while. It sometimes gets stuck along the edge of a previously created highway and runs against it, then perhaps ends up running inside another old highway in the other direction.
I could watch it for hours. It would be an awesome screen saver, if I used screen savers.
Part 2 was just minor changes. I had to change to counting infections explicitly rather than inferring the count from the number of times a cell was changed.
The first part of day 23 was a fairly straightforward implementation of the instruction set, and rather similar to day 18. The trickiest part was to handle jumps, because there is a potential for an off-by-one error there. The step function returns the operation performed, so that I can count the number of times mul was performed.
The second part was entirely different. There I had to analyze the code, and I reformatted it so that negative jumps indented all the lines they covered (including the jump instruction). Positive jumps cause the skipped lines to be indented (but not the jump instruction). Then it looks a bit like Python code, where negative jumps are loops and positive jumps are ifs.
set b 99
set c b
jnz a 2
    jnz 1 5
    mul b 100
    sub b -100000
    set c b
    sub c -17000
    set f 1
    set d 2 # for d in range(2, b + 1):
        set e 2 # for e in range(2, b + 1):
            set g d
            mul g e
            sub g b
            jnz g 2 # if d * e == b:
                set f 0
            sub e -1
            set g e
            sub g b
            jnz g -8
        sub d -1
        set g d
        sub g b
        jnz g -13
    jnz f 2 # if f == 0:
        sub h -1
    set g b
    sub g c
    jnz g 2
        jnz 1 3
    sub b -17
    jnz 1 -23
Then you had to realize that f is zero unless b is prime, which the double loop checks for. So all values of b which are not prime cause h to increment, and for me b started at 109900 and increased in steps of 17 until it reached 126900.
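Which makes part 2 a few lines of Python (using the bounds from my input quoted above):
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# h counts the composite values that b takes on.
print(sum(1 for b in range(109900, 126901, 17) if not is_prime(b)))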
My solution works under the assumption that there are no duplicate components, and there were none in my input. I made a "double map" index of the components; if I had the connectors 0/2, 2/2, 2/3, 3/4 then I had an index map looking like
{0 #{2}
 2 #{0 2 3}
 3 #{2 4}
 4 #{3}}
In that way I could easily look up what components I could attach to the bridge. Since the bridge must start with a zero, I can immediately see that I need to connect the component 0/2, and the index would be updated to
{0 #{}
 2 #{2 3}
 3 #{2 4}
 4 #{3}}
Now the bridge needs a 2-pin component, and I can choose from the set #{2 3}. And so on.
The solution is recursive, and it is really convenient to have immutable data so that you can effortlessly recurse trying different components. I stored the components of the bridge instead of just storing the strength and open connector, even though part 1 did not require that. However, it was useful to check that the code was working properly, and I saw no reason to remove it. Additionally, I suspected it might be useful for part 2, given the pattern of previous puzzles.
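A sketch of that recursion in Python; it uses a mutable index with explicit backtracking, whereas the Clojure version simply recurses on updated immutable maps:
def strongest_bridge(index, port=0):
    # index: the "double map" above, port -> set of ports it can connect to
    # (assuming no duplicate components).  Returns the best total strength
    # reachable from the open `port`.
    best = 0
    for other in list(index[port]):
        index[port].discard(other)          # use up the component port/other
        index[other].discard(port)
        best = max(best, port + other + strongest_bridge(index, other))
        index[port].add(other)              # put it back before trying the next
        index[other].add(port)
    return best

# strongest_bridge({0: {2}, 2: {0, 2, 3}, 3: {2, 4}, 4: {3}}) == 18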
Now, given that I had the components, it would not be necessary to keep the :connector value around; I could just look at the last component in the bridge. However, then you would have to encode the default value somehow. A hack could be to initialize the bridge with a 0/0 component. However, if the bridge were to start with a different pin then the strength calculation would be wrong. The other alternative could be to default to the start pin if the bridge is empty, but that would also require extra logic with little benefit. Just updating the :connector value seems the simplest, even if it means that you now have an invariant in your data model that needs to be kept in sync and is a potential source of bugs.
I also decided to keep the strength stored in the bridge and update it every time a component was connected. I could just as well have had a function strength that calculated the strength based on the stored components, but that would do a lot of recalculation. And this was noticeable: I did a simple benchmark and the algorithm took about four times longer to run using the ad hoc strength computation (see the code for part 2).
Part 2 was just a matter of keying the bridges by (juxt (comp count :components) :strength) instead of just :strength. For some reason, Clojure's max-key only works on numbers as keys, not on comparable values, so I had to write my own max-by. (Interestingly, sort-by can use any comparable value, and at first I implemented max-by using sort-by, but O(n log n) instead of O(n) for no good reason other than über laziness just doesn't feel good, even if it's just an Advent of Code puzzle.)
This was again a fairly simple implementation of a Turing machine. The infinite tape was modeled using a map. The blueprint was parsed using regexes.
A nice trick was to encode the rules using a map from the state to a vector, where the current tape value was used to look up the rule in that vector. So what value to write, what state to transition to, etc., if the current value was zero was found at index zero of the vector, and similarly for index one.
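As an illustration in Python (with a made-up two-state rule set, not the actual puzzle blueprint), the lookup then becomes:
# state -> [action if current value is 0, action if it is 1],
# where an action is (value to write, move, next state).
rules = {'A': [(1, +1, 'B'), (0, -1, 'B')],
         'B': [(1, -1, 'A'), (1, +1, 'A')]}

def diagnostic_checksum(rules, start_state, steps):
    tape, cursor, state = {}, 0, start_state
    for _ in range(steps):
        write, move, state = rules[state][tape.get(cursor, 0)]
        tape[cursor] = write
        cursor += move
    return sum(tape.values())

print(diagnostic_checksum(rules, 'A', 6))   # prints 3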
Copyright © 2018-2019 Johan Lodin
Distributed under CC BY-NC-SA 4.0.