`mtime`s and whatever the platform remembers from my (precious few) submissions.
Anyway, I have write-ups for the individual problems, that ought to spark something:

Memory aids revisited: it does appear I did them in order.

As the write-up hinted, I had the wrong intuition on *Minimum Sort*, and spent a lot of time trying to prove the solution wrong, without taking a step back to notice it was a single-test fully visible-results problem, with no guesswork needed to ensure validity of a hidden set. Mess-up number 1: assuming too much.^{1} It went ahead pretty fast once I accepted the situation.

*Matrygons* took a bit of time at the start, because my mind didn’t want to interpret the summation at the correct level. Once that got settled, the implementation followed easily. A wee bit of optimization was needed to go from TLE to AC on the large set: migrating from `Data.Array` to `Data.Array.Unboxed`.

I never wrote a bruteforce “test each one of the 13! possibilities” in *Hidden Pancakes*, my first and only contest attempt was already close to the tree-based one I describe in the write-up. I recall it took me some time to add a “closed set” of numbers not to be allocated again if they’re not directly comparable because they’re on separate sub-branches. But my 5-minute-past-the-contest update that passed the large set is still a direct continuation of that code branch.

I don’t remember much from *Retiling*. It’s likely I had my natural revulsion to that sort of problem for some time, then tried going at it anyway. Or maybe I only used my time to struggle with C-large? I don’t remember. I didn’t have a Hungarian algorithm at hand anyway; even if I had understood the problem, I wouldn’t have solved it in time.

This concludes the competing aspect of my CodeJam participation for the year. If motivation keeps up, I’ll try and detail a few other problems. See you soon!

Not exactly a mess-up specific to coding competitions. It applies all over the engineering spectrum.↩︎

We are given two bichromatic grids of squares, and want to find a minimum-cost transition from one to the other. The two available operations are:

- flip a tile (cost *F*)
- exchange two adjacent tiles without flipping them (cost *S* for “swap”)

By repeatedly exchanging adjacent tiles, we can generalize the operation to “exchange any two tiles for cost *S* times their Manhattan distance”.
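In code, that generalized cost is a one-liner (a sketch; tiles addressed by (row,column) coordinates, with the same Manhattan `dist` the cost function relies on later):

```haskell
-- Manhattan distance between two tile coordinates.
dist :: (Int,Int) -> (Int,Int) -> Int
dist (i,j) (k,l) = abs (i-k) + abs (j-l)

-- Cost of exchanging two arbitrary tiles by repeated adjacent swaps,
-- each at cost s.
exchangeCost :: Int -> (Int,Int) -> (Int,Int) -> Int
exchangeCost s a b = s * dist a b

main :: IO ()
main = print (exchangeCost 2 (0,0) (2,3))  -- 2 × (2+3) = 10
```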

Zooming in a bit, any given tile is either solved or in need of flipping. Whether that flipping is realized with an *F* or some combination of *S* operations is what we have to determine.

The *S* operation preserves the color distribution, so if it has to be changed, flips are unavoidable.

For any unsolved tile, we have to choose between flipping it directly, or finding a close-by compatible tile to exchange it with. What’s a compatible tile? One that also needs flipping, but in the other direction.

So we can partition all unsolved tiles by wanted flip direction, and seek a minimal-cost matching between the two. The two sets don’t necessarily have the same cardinality, so we can complete the smaller one with a few “just flip” counterparts.

Actually, if flips are cheap and exchanges expensive, there’s no saying we want to match at all costs. So we either add “just flip” counterparts to both sets, or play on the cost function to decide on its own whether a long exchange or a double flip is preferable. It’s a constant-time decision per pair, it’s affordable.
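That per-pair decision could look like this (a sketch; `f` and `s` are the flip and swap costs, `d` the Manhattan distance between the pair — names are illustrative):

```haskell
-- For a pair of opposite unsolved tiles at Manhattan distance d:
-- either exchange them (s per step) or flip both (2 flips).
pairCost :: Int -> Int -> Int -> Int
pairCost f s d = min (s * d) (2 * f)

main :: IO ()
main = print (pairCost 10 1 3, pairCost 1 10 3)  -- (3,2): swap wins, then flips win
```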

The matching can be performed with our old friend the Hungarian algorithm. Complexity is fuzzy, somewhere between *O*(*N*^{3}) and *O*(*N*^{4}) according to Wikipedia.^{1} Input size happens to be at most 100, so it should be ok.

Let’s implement.

```
main :: IO ()
main = do
  t <- readLn :: IO Int
  forM_ [1..t] $ \i -> do
    [r,_c,f,s] <- map read . words <$> getLine
    src <- replicateM r getLine
    dst <- replicateM r getLine
    putStrLn $ "Case #" ++ show i ++ ": " ++ show (solve f s src dst)
```

The `main` wrapper is nothing too interesting: just read the input, convert the integers and delegate the meat to the `solve` function.

```
solve :: Int -> Int -> [String] -> [String] -> Int
solve f s src dst = cost
  where
```

The `solve` workhorse is a bit of a wrapper itself. First we’ll identify the tiling positions that need some change.

```
delta :: [[Bool]]
delta = (zipWith . zipWith) (/=) src dst
```

Next we’ll partition into those who start as a G and those who start as an M. The pipeline is a bit more convoluted than I’d like because it takes care of assigning coordinates on the fly:

```
gs,ms :: Set (Int,Int)
(gs,ms) = bimap (Set.fromList . map fst) (Set.fromList . map fst) $
  partition ((== 'G') . snd) $ concat $
  zipWith3 (\i srcRow deltaRow ->
              mapMaybe (\(j,se,de) -> ((i,j),se) <$ guard de)
                       (zip3 [0..] srcRow deltaRow))
           [0..] src delta
```

Then construct a cost map of pairs from them and hand it to the Hungarian algorithm. To ensure every tile has a counterpart without too many `if`s no matter how they distribute initially, I’m simply offering a “deactivated” version: the cost function will recognize two “active” opposite tiles and pair them if it makes sense, ignore two “inactive” opposite tiles, and consider an active-inactive match as an unconditional flip.

```
(cost,_) = munkresI (Set.disjointUnion ms gs) (Set.disjointUnion gs ms) $
  curry $ \ case (Right g,Right m) -> min (dist g m * s) (2*f)
                 (Left _,Left _)   -> 0
                 _                 -> f
```

I’m skipping the hungarian algorithm itself, because in its current version it’s quite long, and will actually warrant a post of its own.

So there we have it. A solution to that remaining problem from round 2. It’s there with the others in the usual place on GitHub. See you soon for the round recap!

And my implementation uses maps and sets, so there’s probably a few *O*(log *N*) thrown in for good measure.↩︎

Round 2’s problem C, “Hidden Pancakes”, has us counting pancake stack orderings given an observation of how adding them one by one changed the external view.

Distinct-sized pancakes are piled on a plate, one by one. Each time a pancake is added to the pile, one of two things happens:

- either it is smaller than the topmost pancake, and there is one more pancake visible from above,
- or it is bigger than one or more (possibly all) of the visible pancakes, and obscures them so that there are as many or fewer resulting pancakes visible from above.

Since the pancakes’ sizes are known to be the first \(N\) naturals, we can attempt to assign a size to every one as it is added, check the constraints, and backtrack as necessary. This translates quite naturally as a fold in the Haskell list monad:

```
backtrack :: Int -> [Int] -> Int
backtrack n = length . foldM add (0,[],IntSet.empty)
  where
    add (v,vss,cl) v' = case compare v' (v+1) of
      GT -> mzero
      EQ -> do
        s <- lower vss
        guard (s `IntSet.notMember` cl)
        pure (v',s : vss,IntSet.insert s cl)
      LT -> do
        let vss' = drop (v - v') vss
        s <- higher vss'
        guard (s `IntSet.notMember` cl)
        pure (v',s : tail vss',IntSet.insert s cl)
```

The folding function `add` takes a current state triplet of:

- current number of visible pancakes
- sizes of the currently visible pancakes
- “closed” set of sizes already allocated

At each iteration, it deduces a pancake’s relative size to the previous one’s. If the pancake is smaller, it is allocated a size lower than all visible pancakes, which simplifies as a size lower than the topmost pancake’s. If the pancake is larger, we count how many pancakes are hidden, and allocate a size between the biggest pancake covered’s and the smallest pancake not covered’s.

With easy special cases to account for the stack’s foundation:

```
lower []    = [1..n]
lower (s:_) = [1..s-1]

higher [s]     = [s+1..n]
higher (a:b:_) = [a+1..b-1]
```

You’ll have noticed I’m not even bothering to take the `mod`. We’re counting the possibilities one by one here; if it gets anywhere near \(10^9+7\), it’s out of reach to this algorithm anyway.

But it’s enough^{1} for \(N\le 13\), as in the small set. For the large set, we’ll have to get more crafty.

What’s really killing the complexity is the fact that we commit to a pancake’s size as soon as we see it, and never adjust it in relation to those that pile on it afterwards, short of cancelling the entire covering. So let’s try to refine our knowledge representation.

The exact information we get from an added pancake is always the same: an increase in number of visible pancakes means, in a very “if and only if” sense,^{2} that its size is strictly less than that of the stack top; its conservation or decrease means that its size is between those of the biggest covered and smallest uncovered.

In other words, we’ve got an induced partial order.

The one depicted above isn’t very interesting:

- that \(p_1 \rightarrow p_2\) edge is redundant: it’s implied by those going through \(p_3\).
- it’s got a single topological ordering.

Let’s start again, this time with a test that requires a bit more on the ordering front: 1 2 3 2 3.

The first three pancakes have a trivial ordering: \(p_1 > p_2 > p_3\). Pancake 4 hides two of those, so \(p_4 > p_2 \, (> p_3)\). It failed to cover \(p_1\), so \(p_1 > p_4\). So the \(p_1 > p_2\) relation we had earlier is now redundant.

Redundant edges create cycles in our ordering’s graph, we’d like to avoid that. And we can: by only generating the edges when a “covering” pancake comes up. Oh, and also at algorithm termination, when we know no edge is going to be made redundant anymore.

We can revisit our fold from the previous implementation. This time we aren’t going to store pancake sizes, since we’re not allocating them. Instead we’ll keep the nodes of our graph in construction.

Let’s see how the process would go. We’re maintaining a stack of visible pancakes, represented as double-bordered ellipses. They’re ordered by size when directly on the stack. Sometimes an added pancake would break that invariant. We’ll resolve it by making the part of the stack it hides, in a solid blue box, its children. Those nodes within the box are stack-ordered, so they are aggregated the other way round, by making the top-most element, in a dashed blue box, a child of the element below it.

- We have a new visible pancake. We stack it.
- We have a new visible pancake. We stack it.
- We have a new visible pancake. We stack it.
- We have one fewer visible pancake. This means we’re covering up two stack positions (the blue box). We want to eliminate those from the stack, making them children of the currently inserted pancake. They’re visible on the stack, so their relative ordering is known. The resulting stack is two elements high. (Double-line borders indicate the nodes that form the stack.)
- We have a new visible pancake. We stack it.
- We have no further pancakes, so we can wrap up the stack contents. As previously, its internal ordering is known, and accounted for.

So a solid blue box means “this subtree is to be made a child of the node *above* it in the stack”, and a dashed blue box “this subtree is to be made a child of the node *below* it on the stack”.

Let’s code it.

```
type Stack = (Int,[Tree ()])

process :: Stack -> Int -> Maybe Stack
process (v,s) v' = case compare v' (v+1) of
  GT -> Nothing
  EQ -> Just (v',Node () [] : s)
  LT -> let (covered,s') = splitAt (v - v' + 1) s
        in Just (v',Node () [cover covered] : s')
```

We’ve moved to the `Maybe` monad to handle the case where the input is inconsistent.^{3} The core is still the same: a comparison of two consecutive visible pancake counts. When that count increases by one (the `EQ` case), we pile a new node up. When it remains or decreases, we extract the covered part of the stack, make a comb tree out of it and make it a child of the new node. That last functionality is also needed when we’re done with new input, so it’s factored out to a separate function:

```
cover :: [Tree ()] -> Tree ()
cover (t : Node r f : ts) = cover (Node r (t:f) : ts)
cover [t] = t
```

So we’ve constructed a tree^{4} of all known “greater than” relations from the input. That doesn’t answer the question, though: how many orderings fit?

This can be tackled recursively.

- If the node has no children, there’s only a single ordering.
- If the node has a single child, there are exactly as many orderings as the child has: the parent node will always be in the same relative position to them (namely: above).
- If the node has two children… this is where it gets interesting. We have no way to compare any element of subtree \(T_1\) with one of the other subtree, \(T_2\). So both orderings are possible, and have to be counted in. How many are there, taking *all* elements into account?

The combinatorial vision that works is the following: we take an ordering \(O_1\) of one subtree and \(O_2\) of the other. It results in an ordering \(O\) of both, whose size is the sum. The resulting ordering is determined exactly by where in the resulting ordering the elements of one subtree are. This is because the other elements are necessarily from the other ordering, and the elements’ order is given by the ordering they come from.

So \(O_1\) and \(O_2\) generate \(C_{n_1+n_2}^{n_1}\) orderings of \(O\).^{5}^{6} So \(T_1\) and \(T_2\) generate that, times the number of orderings of \(T_1\), times the number of orderings of \(T_2\).

- If the node has more children than that… well, can it? We have two cases of creating a node child: one with a solid box below, and one with a dashed box above. Both can happen to a single node, but no more.

So our tree is indeed binary.
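That interleaving count can be cross-checked by brute force (hypothetical helpers, not part of the solution): merging two fixed orderings of sizes \(n_1\) and \(n_2\) yields exactly \(C_{n_1+n_2}^{n_1}\) distinct results.

```haskell
-- All ways to interleave two sequences while keeping each one's
-- internal order.
interleavings :: [a] -> [a] -> [[a]]
interleavings [] ys = [ys]
interleavings xs [] = [xs]
interleavings (x:xs) (y:ys) =
  map (x:) (interleavings xs (y:ys)) ++ map (y:) (interleavings (x:xs) ys)

-- Plain binomial coefficient, for comparison.
choose :: Int -> Int -> Int
choose n k = product [n-k+1..n] `div` product [1..k]

main :: IO ()
main = print (length (interleavings "abc" "de"), choose 5 2)  -- (10,10)
```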

We need the tree’s size when computing, so we’ll report it along with the number of orderings. Our complete tree counting function thus looks like this:

```
count :: () -> [(Int,Int)] -> (Int,Int)
count () []                = (1,1)
count () [(c,n)]           = (c,n+1)
count () [(c1,n1),(c2,n2)] = (combinations (n1+n2) n1 *% c1 *% c2,1+n1+n2)
```

We’ve got \(O(N)\) combinations to compute, we do have to take care not to compute them too inefficiently. Precomputing factorials and inverse factorials up to \(N\) modulo \(10^9+7\) achieves that.

```
m :: Int
m = 1000000007

(*%) :: Int -> Int -> Int
(*%) = ((`mod` m) .) . (*)
infixl 8 *%

combinations :: Int -> Int -> Int
combinations n k = fact!n *% invFact!(n-k) *% invFact!k

nFact :: Int
nFact = 100000

fact :: Vector Int
fact = V.scanl (*%) 1 $ V.generate nFact succ

invFact :: Vector Int
invFact = V.scanr (*%) 716327852 $ V.generate nFact succ
```

That magic number in the last line is the inverse factorial of 100 000.^{7}
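For the WolframAlpha-averse, such a constant can also be derived locally with Fermat’s little theorem (a self-contained sketch, assuming the modulus is prime; names are illustrative):

```haskell
m :: Integer
m = 1000000007

-- Fast modular exponentiation by squaring.
powMod :: Integer -> Integer -> Integer
powMod _ 0 = 1
powMod b e
  | even e    = let r = powMod b (e `div` 2) in r * r `mod` m
  | otherwise = b * powMod b (e-1) `mod` m

-- Modular inverse via Fermat: x^(m-2) ≡ x⁻¹ (mod m) for prime m.
inverse :: Integer -> Integer
inverse x = powMod x (m-2)

main :: IO ()
main = print ((123456 * inverse 123456) `mod` m)  -- 1
```

Applying `inverse` to \(100\,000!\) taken modulo \(10^9+7\) is what yields that seed.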

We’ve got all we need to summarize the process with a few folds.

```
solve :: [Int] -> Int
solve =
  maybe 0 (fst . foldTree count . cover . snd) .
  foldM process (0,[])
```

So the entire process is \(2×100\,000\) multiplications and modulos of precomputation; then \(O(N)\) of tree construction and folding. All in all \(O(N)\) time and space.

And that’s all there is to it. Here’s the CodeJam wrapper for completeness.

```
main :: IO ()
main = do
  t <- readLn :: IO Int
  forM_ [1..t] $ \i -> do
    n <- readLn
    vs <- map read . words <$> getLine
    let answer | n <= 13   = backtrack n vs
               | otherwise = solve vs
    putStrLn $ "Case #" ++ show i ++ ": " ++ show answer
```

This concludes today’s problem. GitHub link. See you soon to conclude round 2!

It’s more of a “happens to be” than a “can be shown that it”. The theoretical number of orderings, \(13!\), *is* larger than \(10^9+7\), but not by much. We’re saved by the fact that being provided with a sequence of \(V\)s of length \(N\) reduces it enough to be tractable. Even if I didn’t go as far as to compute by how much.↩︎

It’s also smaller than all other visible pancakes, respectively bigger than all those it covers. But that does not convey any additional information that isn’t transitively there in the other pancakes’ relationships.↩︎

It’s a bad aspect of the puzzle statement IMHO. It’s not hard to handle, it’s actually a part of the visible tests; but it doesn’t bring anything interesting to the table, and most importantly doesn’t really follow from the statement.↩︎

Quite literally, it’s actually a heap. But as we’re not using it in any of the standard ways, I assume it would only confuse things to call it that way.↩︎

\(C_{n_1+n_2}^{n_1} = {n_1+n_2 \choose n_1}\), for those of you who want to see binomials when it’s all about combinations.↩︎

We can verify we’re not really singling out \(T_1\) by picking its combinations instead of \(T_2\)’s: the combinations function is nicely symmetrical and \(C_{n_1+n_2}^{n_1}=C_{n_1+n_2}^{n_2}\).↩︎

If you don’t know how to compute a modular inverse on your own, it can always be obtained from WolframAlpha.↩︎

In round 2’s problem B “Matrygons”, we want to nest regular polygons within each other so that they all have all of their vertices on the same circle without crossing lines.

The puzzle gives a number of edges \(N\) and asks how many polygons we can squeeze in this way such that the sum of their number of edges is \(N\).

Very artificial.

Two regular polygons that nest within the same circle necessarily have a number of edges one a multiple of the other. The minimum number of edges to form a polygon is \(3\). So we are searching for the longest possible sequence of integers \(f_1, f_2 \ldots f_k\) such that:

\[ \begin{cases} f_1\ge 3 \\ \forall i>1, f_i\ge 2 \\ \sum_{j=1}^k \prod_{i=1}^j f_i=N \end{cases} \]
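The system above can be brute-forced directly for small \(N\) (a hypothetical cross-check, not the contest solution): try every first factor \(f_1\ge 3\), then every multiplier \(f_i\ge 2\), and require the running sum of prefix products to land exactly on \(N\).

```haskell
import Data.Maybe (mapMaybe)

-- Longest valid sequence for N, or 0 if N can't be written at all.
-- go carries the current prefix product p and the sum left to consume.
best :: Int -> Int
best n = maximum (0 : mapMaybe (\f -> go f (n - f)) [3..n])
  where
    go _ 0    = Just 1
    go p left = case mapMaybe (\f -> go (p*f) (left - p*f)) [2 .. left `div` p] of
                  [] -> Nothing
                  rs -> Just (1 + maximum rs)

main :: IO ()
main = print (map best [3,21,93])  -- [1,3,5]
```

For instance \(N=93\) admits the length-5 chain 3, 6, 12, 24, 48.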

Solving for a given \(N\) is rather inconvenient. On the other hand, computing all of them once for later querying isn’t too complex. The process is very similar to array-based Eratosthenes sieving, though the complexity is a bit higher since we can’t skip composites anymore. I find \(O(N \log N)\).

```
-- | Compute a solution look-up table up to N.
solve :: Int -> UArray Int Int
solve hi = runSTUArray $ do
  ns <- newArray (1,hi) 0
  let go sides polygons s = do
        let sides'    = sides+s
            polygons' = polygons+1
        when (sides' <= hi) $ do
          n <- readArray ns sides'
          let n' = max polygons' n
          writeArray ns sides' n'
          mapM_ (go sides' polygons') [2*s,3*s..hi-sides']
  mapM_ (go 0 0) [3..hi]
  pure ns
```

The `go` helper recursively computes the matrygon settings for a given inner content, summarized as an (outer polygon edge count, polygon count) pair.

The wrapper follows. It takes advantage of laziness to have the same program able to answer on both test set sizes.

```
main :: IO ()
main = do
  t <- readLn :: IO Int
  let cacheSmall = solve 1000
      cacheLarge = solve 1000000
  forM_ [1..t] $ \i -> do
    n <- readLn
    let cache | n <= 1000 = cacheSmall
              | otherwise = cacheLarge
    putStrLn $ "Case #" ++ show i ++ ": " ++ show (cache ! n)
```

This concludes today’s problem. The same code is on GitHub as per usual. See you soon!

Round 2’s problem A, “Minimum Sort”, is yet another twist on this year’s theme: sorting integers in convoluted ways.

In this instalment, we have two operations:

- swapping two elements, for free
- locating the minimum element of a contiguous range, for a cost inversely proportional to the range length.

I’ll be honest. I could only think of one natural way to sort using those two operations: selection sort. But that’s way too easy for a round 2 problem, isn’t it? So let’s rule it out using math.

Selection sort works like this:

- Identify the smallest element.
- Move it to the front.
- Recursively sort the elements after the front one.

So if \(L\) is the sequence’s length, the minimum operation is carried out for a cost of \(10^8\over L\). A free swap will bring it into place, then we repeat for the remaining \(L-1\)-element long sequence. It will take at most N iterations, so the total cost will be:

\[ \sum_{L=1}^N{10^8\over L} \approx 10^8 \ln N \approx 4.6×10^8 \]

Unfortunately, we’re only allowed \(6×10^8\) coins, so we’re going to have to refine a bit.

Wait, what?

We’ve got enough coins. It’ll pass.
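Indeed, a quick back-of-the-envelope check (assuming \(N=100\) and a per-query cost of \(\left\lceil 10^8/L\right\rceil\) for a range of length \(L\); my reading of the limits, not quoted from the statement) confirms selection sort fits in the budget:

```haskell
-- Total cost of querying ranges of lengths 100 down to 2
-- (a length-1 range needs no query), with ceiling division.
total :: Integer
total = sum [ (10^8 + l - 1) `div` l | l <- [2..100] ]

main :: IO ()
main = print (total <= 6 * 10^8)  -- True: roughly 4.2×10^8
```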

```
import Control.Monad
import System.Exit
import System.IO

main :: IO ()
main = do
  hSetBuffering stdout LineBuffering
  [t,n] <- map read . words <$> getLine
  replicateM_ t $ do
    forM_ [1..n-1] $ \i -> do
      send $ "M " ++ show i ++ " " ++ show n
      m <- get
      when (m > i) $ do
        send $ "S " ++ show i ++ " " ++ show m
        1 <- get
        pure ()
    send "D"
    1 <- get
    pure ()

send :: String -> IO ()
send = putStrLn

get :: IO Int
get = do
  n <- readLn
  when (n < 0) exitSuccess
  pure n
```

That was easy. Too easy. More on that later.

The full code is above, but it’s also on my GitHub if really you like the syntax highlighting there better for some reason.

See you soon for problem B!

I’m not exactly proud of *how* I passed, but not needing to make myself available for round 1C was greatly appreciated for my local lockdown logistics’ planning.

I had the right approach from the start for **Broken Clock**—and think the ones in the analysis are overengineered, FTM—but took way too long to push it past the finish line. Pressure and stupidly ridiculous bugs. Oh well.

I had **Subtransmutation** mostly up in time. Just a wee bit overengineered, which cost me not passing the large set in time. Simplifying binary search down to linear search got it to pass with less than five minutes of thinking past round end. Oh well.

I didn’t get a chance to poke at **Digit Blocks** much further than reading the statement. Hypothetically, assuming I’d tackled the problems in another order, I definitely would have completed the greedy put-nines-on-top strategy, then intuited DP and failed to simplify to the top-N. Would I have fallen back to top-2-rows-based heuristics and succeeded there? I have no idea.

There’s a bit over a week to round 2. Will I manage to carve out some time to upsolve the other round 1 problems before then? There’s only one way to find out!

We have *N* piles. Each pile starts empty and can hold up to *B* blocks. Blocks each have a value of *D* × 10^{H}, where *D* is the digit printed on the block and *H* the height the block ends up at.

*N* × *B* blocks arrive, one by one, each painted with a digit coming from a uniform distribution over [0..9]; for each block we are asked in which pile to put it, with no knowledge of what digits the future blocks will have.

The submission is a success if the summed value over a few attempts is over 90% or 98% of the theoretical maximum. It’s still not totally clear to me whether that’s the theoretical maximum over all lucks (including the one that happens to yield *N* × *B* nines in a row), or the theoretical maximum assuming the perfect strategy. Anyway, we’ll proxy that by doing the best we can at every attempt and not worry so much more about it.

So it’s a classical act of balance between committing score points and creating opportunity.

For knowledge representation, my wording of the statement may have made it a bit more obvious than the original, so I’ll repeat it: it doesn’t matter which pile a block is in; all that matters is its height. So we can represent our state as a map from height to cardinality of the piles who have that height.

As a side note, the puzzle’s I/O protocol does ask us for a column index to place digits. We still don’t really need to track the column contents: we can maintain a decreasing set of piles by merely choosing which of the available heights we’ll fill with a block, and get a valid column index by summing the cardinalities of higher columns.

So here’s a general wrapper.

```
type RemHeight = Int
type Card = Int
type Digit = Int
type PureStrategy = Card -> RemHeight -> St -> Digit -> RemHeight

playStrategy :: MonadIO m => Card -> RemHeight -> PureStrategy -> m ()
playStrategy n b strat = do
  let play = strat n b
  flip evalStateT (st0 n b) $ replicateM_ (n * b) $ do
    d <- readFromJudge
    st <- get
    let rh = play st d
    put (consume st rh)
    writeToJudge (heightToIndex st rh)

type St = IntMap Card

st0 :: Card -> RemHeight -> St
st0 n b = IMap.singleton b n

-- | Place a block at a remaining height of rh. Incorrect (unchecked)
-- to call for rh ≤ 0.
consume :: St -> RemHeight -> St
consume s rh = IMap.unionWith (+) s $ IMap.fromList [(rh,-1),(rh-1,1)]

heightToIndex :: St -> RemHeight -> Int
heightToIndex st rh = 1 + sum (IMap.filterWithKey (\h _ -> h < rh) st)
```

The top spots of every column are worth much more than anything below them, so they’re the ones we ought to optimize first. How do we put the most of the best value, nine, there?

It takes about ten draws to get a nine, so we want a spot for it to be available for as long as possible. So we can try a simple greedy algorithm of always placing digits 0 through 8 in the lower spots, and nines on the top spot.

- We won’t necessarily have a top spot open when a nine comes up. That’s literally why we said we want those open as soon as possible. At any rate, if that happens, well… just treat the nine as a lower digit.
- We won’t necessarily have a lower spot open when a lower digit comes up. It would be real unlucky (and unlikely) most of the time, but it’s extremely likely to happen for the very last spot. So when that happens, well, there’s no choice either.

Here’s that greedy strategy expressed in code.

```
-- | Always place on the leftmost tower that fits. Nines fit
-- anywhere; other digits fit as long as they don't conclude the
-- column. Unless there's no other choice left.
greedy :: PureStrategy
greedy n b st d = fst $ IMap.findMin $
                  if d == 9 || IMap.null safe then valid else safe
  where valid = IMap.filter (> 0) $ IMap.delete 0 st
        safe  = IMap.delete 1 valid
```

This happens to be enough to pass the first test set. The problem analysis gives a much better explanation of why than I could.^{1} Now for the harder set.

Obviously, the greedy strategy described above can’t be optimal. We’ll achieve a better optimum considering the actual expected values of the positions we reach after every choice we make. Each position’s value can be computed recursively from its successors’, so Dynamic Programming fits the bill.

But the number of states to compute is still prohibitive: there are 3 247 943 160 states^{2} for normalized (decreasing) unlabeled block piles only. I’m not aware of a straightforward indexing for them, especially with the 1GB RAM limit this problem has.
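For the curious, that figure can be recovered with a one-liner (assuming the large set’s \(N=20\) piles of height \(B=15\): a normalized state is then a multiset of 20 heights taken in [0..15], of which there are \(C_{20+15}^{15}\)):

```haskell
-- Plain binomial coefficient over Integer to avoid any overflow.
choose :: Integer -> Integer -> Integer
choose n k = product [n-k+1..n] `div` product [1..k]

main :: IO ()
main = print (choose 35 15)  -- 3247943160
```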

But we can reduce the number of states by using a variant of the greedy heuristic and only ever considering placing a digit in the top R positions, or in the leftmost sub-R column.

| R | states | time |
|---|--------|------|
| 0 | 3k     | <1s  |
| 1 | 30k    | 3.5s |
| 2 | 200k   | 30s  |
| 3 | 1M     | 3.5m |

(Those are columns×digits states, so there are about 10× more naturally.)

*R* = 0 is the “always play leftmost nonfull” strategy. It yields about 50%, which is as good as random. Indeed, with no decision taken, it’s as good as the “play a random nonfull column” one.

*R* = 1 is close to the greedy strategy, except special-casing 9 isn’t hard-coded and the chosen digit to top a column may vary depending on how close we are to the end. It returns 92%, so indeed better than greedy but unworthy of the second test set.

*R* = 2 is mostly the sweet spot. It’s computable within problem limits and doesn’t overflow. It returns some yield over 98%^{3} reliably.

*R* = 3 goes overtime, but not by too much. It seems conceivable optimizing it could give even greater results. My implementation is a rather straightforward one, there is definitely room for improvement:

```
-- | Place at the “optimal” height among those <th or on the leftmost
-- among those >= th.
dp :: RemHeight -> PureStrategy
dp th n b = \ st d -> fst (cache ! (st,d)) where
  cache = execState (mapM_ (go (st0 n b)) [0..9]) Map.empty
  go st d = gets (Map.lookup (st,d)) >>= \ case
    Just r -> pure r
    Nothing -> do
      r <- expected th go st d
      modify' (Map.insert (st,d) r)
      pure r

-- | Expected gain of placing a digit.
expected :: Monad m => RemHeight
         -> (St -> Digit -> m (RemHeight,Double))
         -> St -> Digit -> m (RemHeight,Double)
expected th rec st d = do
  candidates <- mapM f (prune th st)
  pure $ if null candidates then (-1,0)
         else maximumBy (comparing snd) candidates
  where f rh = expected' rec (consume st rh) >>= \e ->
               pure (rh,fromIntegral d * 10^^(-rh) + e)

-- | Expected gain of placing any digit.
expected' :: Monad m => (St -> Digit -> m (RemHeight,Double)) -> St -> m Double
expected' rec st = (/ 10) . sum . map snd <$> mapM (rec st) [0..9]

-- | Reduce search space by only allowing one tower that's not close
-- to the max to be filled in. In dual terms, only allow one height
-- among those greater than the threshold.
prune :: RemHeight -> St -> [RemHeight]
prune th st = case r of []    -> l
                        (x:_) -> x:l
  where (l,r) = partition (< th) $ dropWhile (<= 0) $
                IMap.keys $ IMap.filter (> 0) st
```

So yeah, improvement is still feasible. OTOH, it passes the test reliably, so don’t hold your breath waiting for me to do it.

In the meantime, I’ve given out an almost complete program, so here’s the little bit of wrapper and protocols holding it all together.

```
main :: IO ()
main = do
  [t,n,b,p] <- map read . words <$> getLine
  let strategy = case p of
        860939810732536850 -> greedy
        937467793908762347 -> dp 3
  replicateM_ t $ playStrategy n b strategy
  void readFromJudge
  traceM "VICTORY"

readFromJudge :: MonadIO m => m Int
readFromJudge = liftIO $ do
  num <- readLn
  when (num < 0) $ traceM "DEFEAT" *> exitSuccess
  pure num

writeToJudge :: MonadIO m => Int -> m ()
writeToJudge i = liftIO (print i *> hFlush stdout)
```

And… as is too often the case with those DP strategy problems, I’ve now got a close-to-perfect solution while still having very little inner understanding of how to play the game in real life. Oh, well…

This concludes round 1B’s final problem. As usual, the fuller code is on GitHub. Stay tuned for a short round recap before I go back to counting days to round 2!

Round 1B’s problem B, “Subtransmutation”, puts us in the alchemist role once again. This time we’re to find the least mineral such that we can transmute it to a given distribution of other minerals, using repeated application of a simple spell.

For any \(N\), said spell takes a unit of mineral \(N\) and splits it into a unit of two lesser minerals, \(N-A\) and \(N-B\). \(A\) and \(B\) are constant per test set, and notably do not depend on the mineral being transmuted.

The easy test set uses \(A=1\) and \(B=2\). In this restricted case, it’s easy to see solving the problem is always possible: with enough units of mineral \(N\), we’re able to produce as many units of mineral \(N-1\), or double as many units of mineral \(N-2\). Coarsely speaking, minerals “grow” at a geometric rate greater than \(\sqrt 2\) per mineral rank.
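That geometric claim can be sanity-checked numerically (a sketch; `units` is a hypothetical helper counting how many units of mineral \(N-k\) one unit of mineral \(N\) can yield when everything is split all the way down — it follows the Fibonacci recurrence):

```haskell
-- One unit of N splits into one of N-1 and one of N-2, so the amount
-- obtainable at distance k satisfies u(k) = u(k-1) + u(k-2).
units :: Int -> Integer
units 0 = 1
units 1 = 1
units k = units (k-1) + units (k-2)

main :: IO ()
-- The amount at least doubles every two ranks, i.e. rate >= sqrt 2.
main = print (all (\k -> units (k+2) >= 2 * units k) [0..28])  -- True
```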

There may be a clever way to construct the least mineral that works bottom-up from the given distribution. But with that geometric sequence, we’re confident it won’t need to go very high, so we can just search for it, linearly.

This entails implementing a verification function: “can I produce the distribution from this starting mineral?”

A greedy algorithm appears to do the trick: starting with a stock of one, repeatedly transmute the highest ranking element, reserving the amount needed when such ranks are passed.

- It’s never a bad idea to be transmuting a mineral higher than the distribution’s highest: there is literally no other use for it.
- With no reserving, no mineral will be missed, all of them from the starting one down to mineral 1 will be “hit”.
- Starting high enough, we can establish a lower bound of how many units we want to have at any rank thanks to the geometric progression.

So it’s solved, conceptually. The most we can ever need is 20 units of each mineral from 1 to 20. \(20×20=400\) units of mineral 20 alone are (more than) enough to cover that. 1 unit of mineral \(20+\left\lceil\log_\sqrt 2 400\right\rceil+1 = 39\) covers that.
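Checking that arithmetic (names are mine, not the contest code’s):

```haskell
-- ceil(log_{sqrt 2} 400) = 18, so mineral 20 + 18 + 1 = 39 suffices.
bound :: Int
bound = 20 + ceiling (logBase (sqrt 2) 400) + 1

main :: IO ()
main = print bound  -- 39
```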

Really, a quadratic algorithm—linear search with a linear test—on that order of magnitude for a worst case is Very Safe.

Now to the more general case, when \(A\) and \(B\) are free. We have two hurdles:

- Can we still hit every number in the distribution?
- Does the linear approach still work?

It’s kind of obvious it can’t always work. Either by spotting the “IMPOSSIBLE” case in the additional sample output, or by just considering (2,4), which fit the requirements, yet only ever cover numbers that share parity with the starting mineral.

Formally, we can reach any number that can be expressed as \(x - \lambda A - \mu B\), for \(\lambda\) and \(\mu\) positive.^{1} This looks a lot like Bézout’s identity, doesn’t it?

\[\forall (a,b)\in\mathbb N^2,\quad\exists(\lambda,\mu)\in\mathbb Z^2,\quad\lambda a + \mu b = a \wedge b\]

One key difference is that alchemy coefficients must be positive, whereas Bézout’s don’t care. The other is that we’re counting down from the starting mineral, instead of up from zero. Can we still get any number from 1 to 20?

We can easily get any multiple of the GCD (still as a difference from the starting mineral), by simply multiplying the entire equation. But that won’t help if one of the coefficients is negative.^{2}

By adding or subtracting \(ab\over{a\wedge b}\) to both terms, we can fudge the coefficients around until the negative one, say \(\lambda\), is at a distance to zero no more than \(\left|{b\over{a\wedge b}}\right|\). But then by summing \(ab\) to some integer times the GCD, we can get positive coefficients. This works for an integer between 0 and \(b\over{a\wedge b}\). But that upper bound is itself a multiple of \(a\), so we can start over. So above a certain threshold, every multiple of the GCD can be obtained with positive coefficients.
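To make that threshold argument concrete, here’s a quick empirical sketch (illustrative only, not contest code): for a pair such as (2,4) generalized to (4,6), we can enumerate which multiples of the GCD admit nonnegative coefficients.

```
-- Can n be written as λ·a + μ·b with λ, μ ≥ 0?
-- Since n - μ·b ≥ 0 is guaranteed by the range of μ, checking
-- divisibility by a is enough to ensure λ ≥ 0 as well.
reachable :: Int -> Int -> Int -> Bool
reachable a b n = or [ (n - mu * b) `mod` a == 0 | mu <- [0 .. n `div` b] ]

main :: IO ()
main = print [ n | n <- [0, g .. 100], not (reachable 4 6 n) ]
  where g = gcd 4 6
-- prints [2]: every other multiple of the GCD up to 100 is reachable
```

Past the single exception, every multiple of \(\gcd(4,6)=2\) is covered, which is exactly the “above a certain threshold” behavior.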

Are numbers *other* than multiples of the GCD reachable? Bézout’s theorem is more directly applicable here: no, an integer linear combination of \(A\) and \(B\) (of which the positive combinations are a strict subset) is necessarily a multiple of the GCD.

Back to counting down from minerals, this translates as “the distance between any needed mineral and the starting one is a multiple of the GCD.” They’re still obviously of lower rank.

That’s a great step towards a solvability test, but there’s still the small part of proving abundance of minerals. Some form of the geometric argument used for (1,2) probably still applies, but I’m too tired to expand it here. Let’s say it’s left as an exercise to the reader!^{3}

The biggest “stride” we can encounter is the maximum GCD we can have between any integers between 1 and 20, so exactly 20 (between 1 and 20). A quadratic algorithm is still safe.

We can implement in peace.

The core of the matter is the test—the function that verifies a given starting mineral covers a given distribution.

```
valid :: Int -> Int -> IntMap Int -> Int -> Bool
valid a b us n = go us (IMap.singleton n 1) where
  go n s | IMap.null n = True    -- no metal need → success
         | IMap.null s = False   -- no metal in stock → failure
         | w < nmax = False      -- heaviest needed is heavier than stock → failure
         | w > nmax = go n s''   -- heaviest stock isn't needed → transmute it
         | nw < unmax = False    -- not enough heaviest metal produced → failure
         | otherwise = go n' s'' -- reserve and transmute
    where
      (nmax,unmax) = IMap.findMax n            -- heaviest needed metal
      Just ((w,nw),s') = IMap.maxViewWithKey s -- heaviest in stock
      cons | w == nmax = min nw unmax          -- consumed quantity
           | otherwise = 0
      -- quantities after consumption and transmutation
      s'' = IMap.unionWith (+) s' $
            IMap.fromList [ (c,nw-cons) | c <- [w-a,w-b], c > 0 ]
      -- needed quantities after consumption
      n' = IMap.delete nmax n
```

It’s a bit of a mouthful^{4} so here’s a reading guide:

- The core of the algorithm is the two transmuting lines that recurse.
- The rest is essentially “ensuring the variables are well-defined and meaningful *before* they are used”. That’s the part that would be rewritten for a longer-lived application.
- N for needed, S for stock, W for, err… “most Weightful”.

The general solver follows directly:

```
solve :: Int -> Int -> [Int] -> Maybe Int
solve a b us = guard (all ((== 0) . (`mod` gcd a b)) δs) *>
               find (valid a b us') [0..]
  where δs = map =<< subtract . head $ IMap.keys us'
        us' = IMap.filter (> 0) $ IMap.fromList $ zip [1..] us
```

It could be optimized to only seek on multiples of the GCD, but… why bother?

The protocol wrapper for completion:

```
main :: IO ()
main = do
  t <- readLn
  forM_ [1..t] $ \i -> do
    [_n,a,b] <- map read . words <$> getLine
    us <- map read . words <$> getLine
    putStrLn $ "Case #" ++ show i ++ ": " ++
      maybe "IMPOSSIBLE" show (solve a b us)
```

This concludes this puzzle’s solution. The full code is on GitHub. See you soon with this round’s final problem!

Round 1B’s problem A, “Broken Clock”, gives us a clock’s position and asks us for the time.

Catch number 1: it doesn’t tell us which hand is which. Countermeasure number 1: there are only 3 hands, that’s 6 permutations. We can try them all and keep the consistent ones only.

Catch number 2: the clock has no markings, so all hand positions are given with an arbitrary offset. Countermeasure 2: that offset is eliminated by pairwise subtraction.

So let’s get the math out. The clock positions are given in nanoseconds, \(360×12×10^{10}\) per revolution. For obvious reasons, I’ll call that constant \(2\pi\). For a time \(t\) in nanoseconds and hand positions \(h\), \(m\) and \(s\), we’d have:

\[ \left\{ \begin{array}{cclc} h & = & \theta + t & [\bmod 2\pi] \\ m & = & \theta + 12t & [\bmod 2\pi] \\ s & = & \theta + 720t & [\bmod 2\pi] \end{array} \right. \]

Subtracting pairwise,

\[ \left\{ \begin{array}{cclc} m - h & = & 11t & [\bmod 2\pi] \\ s - m & = & 708t & [\bmod 2\pi] \\ s - h & = & 719t & [\bmod 2\pi] \end{array} \right. \]

Three equations, four unknowns. But \(t\)’s range is bounded, so there’s probably a way to get a MIP solver on the task. Luckily for us, that won’t be needed.

The variables are all integers.^{1} So we’re really looking at a modular inverse. The 708 may be a bit of a composite, but 11 and 719 are *bona fide* primes; we can invert them without a second thought using any conventional means: product enumeration, exponentiation, WolframAlpha^{2}, extended Euclid/Bézout.^{3}

I suppose 128-bit integers would be ok, but they’re a bit of a pain to get working in Haskell. Especially when we’ve got BigInt baked right in.

```
twoPi, eleven_ :: Integer
twoPi = 360 * 12 * 10^10
eleven_ = 15709090909091
```
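For reference, here’s how the extended Euclid route mentioned above would produce that magic constant (a standalone sketch, not necessarily how it was computed at the time):

```
-- Extended Euclid: egcd a b = (g, x, y) with a*x + b*y == g.
egcd :: Integer -> Integer -> (Integer, Integer, Integer)
egcd a 0 = (a, 1, 0)
egcd a b = (g, y, x - q * y)
  where (q, r) = a `divMod` b
        (g, x, y) = egcd b r

-- Modular inverse, valid when gcd a m == 1.
modInv :: Integer -> Integer -> Integer
modInv a m = let (_, x, _) = egcd a m in x `mod` m

main :: IO ()
main = print $ modInv 11 (360 * 12 * 10^10)  -- prints 15709090909091
```

Indeed, \(11 × 15709090909091 = 4 × 2\pi + 1\).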

Now given \(h\), \(m\) and \(s\), we can obtain \(t\) directly from just a single difference (the hours-minutes one), there is always one that fits. Then we can trivially verify the seconds hand is indeed in the correct position relative to the others for that time. Any equation involving \(s\) and \(t\) will do, though testing both for acute decision fatigue also works.

```
verify :: [Integer] -> Maybe Integer
verify [h,m,s] = do
  let t = (m - h) * eleven_ `mod` twoPi
  guard $ ((720-1) * t - (s - h)) `mod` twoPi == 0
  guard $ ((720-12) * t - (s - m)) `mod` twoPi == 0
  pure t
```

Solving from \(A\), \(B\), \(C\) is a simple matter of trying the permutations until a fit is found. The judge accepts any solution if there are multiple.^{4}

```
solve :: Integer -> Integer -> Integer -> (Integer,Integer,Integer,Integer)
solve a b c = (h,m,s,n) where
  (t',n) = t `divMod` (10^9)
  (t'',s) = t' `divMod` 60
  (h,m) = t'' `divMod` 60
  t = head $ mapMaybe verify $ permutations [a,b,c]
```

Here’s a fairly standard CodeJam wrapper for completeness.

```
main :: IO ()
main = do
  t <- readLn
  forM_ [1..t] $ \i -> do
    [a,b,c] <- map read . words <$> getLine
    let (h,m,s,n) = solve a b c
    putStrLn $ "Case #" ++ show i ++ ": " ++ unwords (map show [h,m,s,n])
```

This concludes this puzzle’s solution. As usual, the full code is on GitHub. See you soon for the round’s other problems!

In case you missed them, here are my write-ups for the qualifier problems:

- *Reversort*
- *Moons and Umbrellas*
- *Reversort Engineering* (it’s the same as for Reversort)
- *Median Sort*
- *Cheating Detection*

The problems were fun and interesting, on the whole. I do regret a few things.

- It’s really a lot of sorting for a single round.
- (related) It’s a lot of abstract computer science for a qualifier.
- My initial solution to E-large fell in the bad luck territory of the test set. For no good reason. It’s a pity it had so few tests. And I say that as someone who didn’t really attempt it during the challenge. There are probably quite a few poor souls who paid a higher price for it.

On the positive side: they’re now providing the actual test sets in the analyses! That’s the first great news you’re hearing from me since they revamped the platform, take heed!

Here’s my writing prompts list, for later perusal.

- The analysis says *Reversort* is feasible in \(O(N \lg N)\). That might be interesting.
- As mentioned in the post for *Median Sort*, there are at least two other fun classes of ways to solve:
  - an optimal algorithm (non-asymptotically) for the small set
  - mergesort and heapsort for the medium set
  - ternary quicksort for the large set

One hundred contestants answer the same ten thousand questions. They get each one either right or wrong. So we can visualize the entire input set quite easily with a simple two-dimensional bitmap.

It’s apparent that there is some form of structure here: there is a grid of horizontal and vertical patterns of sorts.^{2} This is explained by a careful reading of the statement: each contestant as well as each question has a specific skill rating—a real number between -3 and 3. The probability of a contestant answering a question correctly is given by the sigmoid function applied to the difference of the contestant’s skill to the question’s.
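In formula form, my reading of the statement’s model looks like this (`pCorrect` is an illustrative name of mine, not from the problem):

```
-- Probability model from the statement: the success chance is the sigmoid
-- of (contestant skill − question difficulty), both skills lying in [-3, 3].
sigmoid :: Double -> Double
sigmoid x = 1 / (1 + exp (negate x))

pCorrect :: Double -> Double -> Double
pCorrect skill difficulty = sigmoid (skill - difficulty)
```

Even a top-skill contestant facing the easiest question, `pCorrect 3 (-3)`, only succeeds with probability \(\sigma(6) \approx 0{,}9975\), which is why the black/white border in the picture stays fuzzy.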

So vertical black lines represent hard questions; horizontal black lines represent weak contestants. You can figure out what the white lines mean.

We can estimate how the questions rate with regard to each other by simply tallying their success rate over the contestant set and sorting them. Conversely, we can estimate how skilled the contestants are by tallying their success rate over the question set and sorting them.

The easy questions are to the left: they were answered successfully most of the time. The better contestants are to the bottom: they answered successfully most of the time. There’s still a random aspect: the border between black and white isn’t too clear.

Now a cheater happens to lurk in this data. But the cheater is devious: to throw the jury off scent, they didn’t cheat for *all* of the questions, merely for about half of them, selected at random by an independent coin flip per question. Can you spot the cheater in the above data?

Let’s zoom a bit to have distinguishable contestants.

There’s a crisp line around the 70% mark. An anomaly. That’s our cheater. Still, their line has more white to it than the line before, and less than the line after: that’s why it ended up there in the first place. But it definitely doesn’t appear “in place”.

Applying a simple edge detection convolution^{3} makes it much more apparent.

OK, let’s implement.

I’ll read and store the contest statistics as a contestant-indexed `Vector` of question-indexed `Vector`s of successes represented as a boolean stored in an `Int`.

```
readData :: IO (Vector (Vector Int))
readData = V.replicateM 100 $ V.fromList . map digitToInt <$> getLine
```

Now to identify the cheater. I’ll first rank the players by their success rate.

```
idCheater :: Vector (Vector Int) -> Int
idCheater fs = cheater where
  pEase = V.sum <$> fs
  pIndex = V.fromList $ map fst $ sortOn snd $ V.toList $ V.indexed pEase
```

Then rank the questions by their success rate.

```
qEase = V.foldl1' vAdd fs
qIndex = V.fromList $ map fst $ sortOn snd $ V.toList $ V.indexed qEase
```

Note I didn’t bother to reverse the sort; we’re getting easier first as opposed to the pictures above. It doesn’t change a thing for the rest of the process, so I’m aiming for simplicity.^{4}

We reorder the data to (virtually) get the rectangular diagonal split.

```
ranked = for pIndex $ \i ->
         for qIndex $ \j ->
         fs ! i ! j
```

Now we “zoom”, *i.e.* aggregate nearby questions by batches of 100, to get the square diagonal distribution with a horizontal line.

```
zoom = for ranked $ \qs ->
       V.map V.sum $
       V.unfoldrN 100 (Just . V.splitAt 100) qs
```

Yeah, the GCJ platform is so old it doesn’t have `V.unfoldrExactN`. Moving on.

Edge detection:

```
edges = ifor zoom $ \i z ->
        fmap ((^3) . abs) $
        vAvg $ catMaybes
        [ (z `vSub`) <$> zoom !? (i-1)
        , (z `vSub`) <$> zoom !? (i+1)
        ]
```

The slight complication here is to account for borders: we do want them tested because there *is* a higher chance of the cheater ranking best, but we don’t have a further contestant to compare to. So we just express the computation as a sum of neighborwise differences, and average to keep the order of magnitude the same.

It took a bit of trial and error to find an appropriate value for N.

N | Success Rate
---|---
1 | 88,6%
2 | 91,0%
3 | 93,8%
4 | 92,0%
∞ | 83,2%

So mostly anything passes as long as it doesn’t get out-of-handedly big. I might as well stick with 3 now I’ve identified it. I have no idea if there’s a solid explanation behind that. Or if it could get even better with fractionals.

Anyway, it seems robust enough. Let’s submit.

Infuriating. And doubly so, too.

- It takes three to four minutes per attempt to judge.
- The input set is very precisely specified. My benchmarking generated tests that followed the spec to the letter. My code had acceptable success rates on such test sets, by a margin. I can’t think of a good reason for it to fail on the platform.

Oh well. Bear with me while I go take a peek at the problem analysis.

That was short.

Take-away 1: the test set is actually static. So there’s no need to mass-submit hoping for a luckier draw.

Take-away 2: I fail at 8 cases, which is exactly 1 too many to pass. 50 tests make for 2% success per test, it *is* sensitive. I’m tempted to call that some form of bad meta-luck.

Take-away 3: My edge detection is probably not the perfect approach, statistics-wise. But intuitively, the inversion count metric seems worse to me! I’ll really have to find the time to refresh my statistics. Or ask a friend who’s still involved enough.

This problem will leave a bit of a bitter taste, because it leaves me stranded without a preferable path forward. It doesn’t make sense to optimize for the specific test set GCJ uses. Optimizing for random was already the proper way to do it. Maybe I’ll just try and look for the smallest possible change that passes?

**Update 22:23** well LOL. After a bit of above-comfort statistical hacking, in the end the easy fix was to just dumb it down. I’m now getting 98,3% accuracy on random input by quantizing to 10 times fewer pixels per contestant than what I initially wrote. With a perfect 100% on the test set.

I feel a bit better about my distant statistical abilities.

So now I’ve got a proper conclusion to today’s puzzle! As usual, fuller code is on my github. Stay tuned for a qualifier recap.

Hey, I’ve got a life too, you know!↩︎

The verticals are less visible because the squashing is more pronounced. Click through to see the square pixels.↩︎

Pictured: some form of |−1;2;−1|^{N} for some N lost to the gods of the GIMP’s curves tool.↩︎

Hah! I’m aiming to get the line of code to fit without a scrollbar, is all.↩︎

Median sort is this year’s qualifier’s interactive problem. A rather fun one, IMHO. On the surface, it’s dead simple: we are to sort an array. We’re even given the leeway to pick ascending or descending order per our very own preference!

The catch is: we’re not provided with the most useful operation the common sorting algorithms tend to make use of: actually indexing the array. We’re not even allowed its first contender: comparing two array positions’ values.

The only operation we’re provided, as the title suggests, is “Median”. Median takes three distinct array indices as parameters, and returns the one whose value is the median of the three values.

Let’s start by implementing the querying process. As the GCJ interactive runner has one of the weirdest operating modes out there, it’s where it’s easiest to commit the most deadly blunders.

```
query :: [Int] -> IO Int
query xs = do
  putStrLn $ unwords $ map show xs
  hFlush stdout
  r <- readLn
  when (r < 0) exitSuccess
  pure r
```

I’m flushing `stdout` explicitly after a print instead of changing the stream’s global buffering mode, mostly because I do not want to be writing anything to that stream from anywhere else but this function.

I’m checking for disqualification directly there. If the call to `exitSuccess` in that case reads weird to you, it’s because it really is. But such is the GCJ interactive protocol. Despite numerous complaints from higher-caliber contestants than I, they’ve always stuck to their guns and preserved the stupid parts of it. Oh well.

I love interactive problems in general, but I hate the GCJ interface to them.

I’m not enforcing a list length of 3 because that function is going to serve as the “I’m done” signal as well.

With that out of the way, we can start thinking about the problem *per se*.

As the statement reminds, with only that “median” query available, it’s impossible to determine whether we’re going to be sorting up or down, which is obviously why both are accepted. Now what strategy can we devise to make use of a minimal number of median queries to discover the array’s order? Conversely, what’s the maximum information we can extract out of a median query, and how does it depend on the choice of the input parameters?

I haven’t checked the problem analyses yet, so I’ll just describe my personal train of thought here. Just reading the test sets’ constraints gives a lot of information out already.

Test set 1 allows 300 queries per test, for lists of size 10. There are \({10!\over 2} \approx 1{,}8×10^6\) ordered-yet-undirected permutations of `[1..10]`. Large but not intractable. I suppose it’s feasible to generate all of them, possibly lazily, then interactively filter the ones who match the judge’s answers on median queries. The rest of the problem would be to find queries that restrict the search space by as much as possible at each query. Intuitively it seems like it would be possible to reach the optimum—dividing the search space by two at each step—given the structure of the data. That would need \(\log_2 P\) attempts, so 21 in the worst case, which is a lot better than the 300 we’re allowed. Which is a hint that’s probably overkill for a ridiculous solution, and just random queries might be enough.

I may give it a go for the kicks, but it’s bound to be unproductive for the larger sets, so let’s not waste any more time on that.

My idea went like this. One median query gives a relative order among three elements. If I make a second median query, how can I relate the information it yields to the first one? Well, in the general case I can’t: while it also gives a relative ordering among three elements, there’s no way of knowing whether that relative ordering goes in the same direction as the first one. So I’ll need it to include not only one but two elements whose order is bound by the first query.

This applies to any two pairs of queries, so globally it appears as though I can’t extract any information from a query if it doesn’t fuse two parameters with another one. Inductively, a query can’t ever mention more than one element that wasn’t mentioned before.

What information does a median query yield where two out of three parameters are “known”? Well, it provides some kind of a band filter: if \(a < b\), then \(Median(a,b,x)\) can be either of:

- \(a\), in which case \(x < a < b\)
- \(b\), in which case \(a < b < x\)
- \(x\), in which case \(a < x < b\)

So intuitively, we’ll make the best use out of that information if those bands are of the same size.

This looks like a beginning of a ternary search. What’s missing? The extracted ordering is a bit coarse at first, but we can try to pin it down to a total ordering by iterating on it. Then repeat for the next numbers.

To bootstrap it, we need two elements whose ordering is “known”. Any pair will do, so we’ll just take 1 and 2 and postulate \(a_1 < a_2\).

And we’ve got a convoluted insertion sort:

```
InsertionSort(N) returns S:
  let S := [1,2]
  for i := 3 to N:
    let p := TernarySearch(S,i)
    insert i as S[p]

TernarySearch(S,i):
  let a,b be distinct indices that best subdivide S
  m := Median(S[a],S[b],i)
  if m = S[a] then return TernarySearch(S[..a],i)
  if m = S[b] then return TernarySearch(S[b..],i)
  if m = i then return TernarySearch(S[a..b],i)
```

The ternary search has an upper bound of \(\left\lceil \log_3 N\right\rceil\),^{1} so the overall algorithm would consume fewer than \((N-2)\log_3 N\) queries. ~~That’s about 410, this is bad news, we’re only allowed 300. But it’s an overestimate, so let’s dig deeper.~~

\[ Q \le \sum_{L=3}^N \left\lceil \log_3 L\right\rceil = (\ldots) = 378 \]

~~Still not good enough. But I’m counting the bands’ bounds twice during recursion, perhaps if I refined that?~~ Hah hah no, I got my figures wrong. The maximum array size is 50, not 100. So the first calculation actually results in 171 *overestimated worst-case*. It’s good enough for both test sets 2 and 3.
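A quick back-of-the-envelope script confirms those figures, summing the exact per-insertion ternary-search depths for both array sizes I juggled with:

```
-- ⌈log₃ l⌉ computed exactly: count the powers of 3 strictly below l.
-- This avoids floating-point edge cases at exact powers of 3.
ilog3 :: Int -> Int
ilog3 l = length $ takeWhile (< l) $ iterate (* 3) 1

-- Worst-case total queries: inserting element l into l-1 ranked elements
-- leaves l possible positions, hence ⌈log₃ l⌉ queries, for l = 3..n.
queries :: Int -> Int
queries n = sum $ map ilog3 [3 .. n]

main :: IO ()
main = print (queries 50, queries 100)  -- prints (159,378)
```

The exact worst case for size 50 is even lower than the 171 overestimate.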

Whew. A good thing I didn’t worry about the math during the contest and just implemented it.

Of course, the devil is in the implementation details. So many things could go wrong around the border conditions of a ternary search it’s not even funny. Let’s kick the paranoia level up a notch and define a few `newtype`

s to avoid mixing up the many possible interpretations of `Int`

.

In the course of the algorithm, I’ll be maintaining a sequence of array indices. I’ll keep it such that at any point the indices it holds are ordered in the same direction as [array[1],array[2]]. I’ll call *indices* the indices of the hidden array; I’ll call *ranks* the indices in the index sequence of the array indices. A *ranking* is such a sequence.

Unfortunately the core Haskell `Seq` type enforces `Int` ~~indices~~ ranks, so I’ll wrap the accessors I need.

```
newtype Rank = Rank Int deriving (Eq,Ord,Enum,Num,Real,Integral)
newtype Index = Index Int deriving (Enum)
type Ranking = Seq Index

insertAt :: Rank -> Index -> Ranking -> Ranking
insertAt (Rank i) e q = Q.insertAt i e q

index :: Ranking -> Rank -> Index
index q (Rank r) = Q.index q r

firstRank, lastRank :: Ranking -> Rank
firstRank _ = Rank 0
lastRank q = Rank (Q.length q - 1)
```

Just a word about the last two functions there.

- The useless parameter to `firstRank` is for similarity with `lastRank`. Don’t worry, it won’t kill performance.
- For an N-element sequence, I’m indexing CS-style with rank \(0\) being the first and \(N-1\) the last. But that’s not the whole story. We are to repeatedly insert elements in (remote-)sorted order, so those rankings will double as insertion points. Inserting at \(0\) means a new first element. But inserting at \(N-1\) does **not** mean a new last element: that would be \(N\). Keep that in mind: it *would* be a major source of mistakes if we relaxed our attention.

Bottom-up, let’s now wrap the `query` function to more type-safe variants.

```
submit :: Ranking -> IO ()
submit = void . query . coerce . toList

data Trisection = Before | Between | After

medianOrder :: Ranking -> Rank -> Rank -> Index -> IO Trisection
medianOrder q a b (Index i) = do
  let Index sa = index q a
      Index sb = index q b
  m <- query [sa,sb,i]
  if | m == sa -> pure Before
     | m == sb -> pure After
     | m == i -> pure Between
```

This is more verbose than my submitted code, but it’s *much* better to gain the confidence to move stuff around and experiment.

We now have all the infrastructure we need to tackle the algorithm itself. Proceeding top-down for a change, the sort is self-describing:

```
insertionSort :: Int -> IO Ranking
insertionSort n = foldM insert q0 [Index 3..Index n]
  where q0 = Q.fromList [Index 1,Index 2]
        insert q i = do
          p <- ternarySearch q i
          pure (insertAt p i q)
```

And now the real meat. The place where all the off-by-one errors lurk, ready to strike your solution with a TLE or worse, a WA.

I’ll use the common wrapper/worker idiom to keep in scope both the current ranking and the index to be ranked. So the wrapper reads as such:

```
ternarySearch :: Seq Index -> Index -> IO Rank
ternarySearch q i = ternary (firstRank q) (succ (lastRank q)) where
```

The `ternary` worker is intended to be called with the bounds between which the ranking should land, inclusive. Note how this translates to one past `lastRank`. Let’s also assert the bounds should always be properly ordered, possibly equal.

In the normal case, we can actually split the sought range in three non-null parts, and recurse there. More specifically, and considering it’s always possible to be handed a value that’s going to land out of the current range, we can successfully trisect it if we can find two distinct points in the range to “median” on. So if the ranking spans at least two elements. In that case, we compute said two points as good as we can, query where the index to be ranked places, and refine from there.

```
ternary :: Rank -> Rank -> IO Rank
ternary l r | w >= 2 = do
                let a = l + w `div` 3
                    b = l + w * 2 `div` 3
                medianOrder q a b i >>= \case
                  Before -> ternary l a
                  After -> ternary (b+1) r
                  Between -> ternary (a+1) b
  where w = r - l
```

What do we need to check cautiously?

- Are the trisection points indeed distinct? They are if \(w\) and \(2w\) fall on separate integer multiple bands of 3. It can’t always work: for example for \(w=1\), 1 and 2 both integer-divide to 0. For \(w=3\) it does work: they divide to quotients exactly 1 apart. So the \(w \ge 3\) case is covered as well. As for the middle ground \(w=2\), we can check by hand that \(\left\lfloor{w \over 3}\right\rfloor = 0 \ne 1 = \left\lfloor{2w \over 3}\right\rfloor\). So we’re good, the points *are* distinct for all \(w \ge 2\).
- Are the points within range? We’re allowed \(l\) to \(r\) inclusive. They’re both obviously no less than \(l\). (\(b\) is obviously greater than or equal to \(a\).) To actually reach \(r\) we’d need \(\left\lfloor{2w\over 3}\right\rfloor\ge w\). Which is equivalent to \(w \le 0\). So no, it can’t happen. We’re safe.
- Are the recursion ranges correct? Well, only careful proofreading can prove that.
- We can also verify the recursion ranges do maintain a lower bound less than or equal to the upper bound.
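The first two bullet points are also easy to machine-check over every width we’ll ever encounter, with a throwaway verification using the same `div`-based offsets:

```
-- For width w, the trisection offsets are o_a = w `div` 3 and
-- o_b = 2w `div` 3. Check they're distinct, ordered, and both land
-- strictly inside the range: 0 ≤ o_a < o_b < w.
trisectOK :: Int -> Bool
trisectOK w = 0 <= oa && oa < ob && ob < w
  where oa = w `div` 3
        ob = w * 2 `div` 3

main :: IO ()
main = print $ all trisectOK [2 .. 1000]  -- prints True
```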

So the common case looks solid enough. Moving on.

`| w == 0 = pure l `

When \(l=r\), there’s only a single possible resulting ranking. So there’s no need to query, we have it already.

What’s left? The \(l+1=r\) case.

If the bounds are exactly 1 apart, we can’t trisect using two values in-between, as there is only one. So we’ve “only” got to compare our index to the value we have. Except we don’t know how to do that: all we’ve got is that Median operation.

There are multiple ways out of this. The one I selected^{2} is to welcome in one of the ranking’s extrema. It’s possible because we can’t both still be on a \(w=1\) case **and** on the full ranking range at the same time, since the recursion starts with a sequence of two and always shortens the range on recursion.^{3}

Which extremum should we include? Very pragmatically: the one that isn’t already in. It sounds simple, but it’s the most painful part of the entire algorithm to get right. Not so much because it’s difficult, but because it’s that much more work to do when we feel we should have been done with this for quite some time already.

So here’s my take at it:

```
  | otherwise = binary l

binary :: Rank -> IO Rank
binary a@(Rank 0) = do
  let b = lastRank q
  medianOrder q a b i >>= \case
    Before -> pure a
    Between -> pure (a+1)
    After -> error "Consistency failure"
binary b = do
  let a = Rank 0
  medianOrder q a b i >>= \case
    Before -> error "Consistency failure"
    Between -> pure b
    After -> pure (b+1)
```

It’s a bit verbose, but it’s clear enough. It passes the local judge in “hard” mode:

```
judge: Total Queries Used: 14553/17000
Judge return code: 0
Solution return code: 0
A solution and judge both finishing with exit code 0 (without
exceeding time or memory limits) would be interpreted as Correct
in the system.
```

As expected, we’re well under the query count limit. Yay!

Now those two `binary` implementation cases look awfully similar. Can we merge them? Hey, why not! Let’s have a little equational reasoning.

- \(a\) is always `firstRank q`.
- The error cases can be ignored: they don’t happen.
- The remaining case is `Between`. Unfortunately, we can’t make \(a+1\) coincide with \(b\) in every case. So there’s going to be an `if` expression there too.

We’re down to the following clause:

```
binary :: Rank -> IO Rank
binary x = do
  let a = firstRank q
      b = if x == a then lastRank q else x
  medianOrder q a b i >>= \case
    Before -> pure a
    Between -> pure $ if x == a then a+1 else b
    After -> pure (b+1)
```

Better? More elegant? A strange art-like beauty that should never make it to a production codebase near you? Hey, it’s only Code Jam code!

This concludes today’s problem, namely this year’s qualifier’s problem D. As usual, the complete code is on github.

I’m not done with it, though. There’s still the ridiculous solution to implement for the small test set. And… have you noticed how close that `binary` worker is to an actual comparison function? Insertion sort was a good first fit, but we may be able to do better.^{4}

See you soon for either that or problem E!

It’s almost as easy to solve as the statement’s length makes it seem, so I’ll try and find something to spice it up.

First off, the input string is given as a dumbed-down ASCII representation of Cody-Jamal’s true Artwork. This can be fixed with a simple stream transcoder:

```
fixASCII ∷ String → String
fixASCII = map $ \case 'C' → '🌘'
                       'J' → '🌂'
                       '?' → '❓'
                       x   → x
```

Now for each prefix of the input mural, we’ll want to know the minimal fee for that part of the mural, so we can deduce the minimal fee after adding a glyph to it, and reap the final fee in the end.

But that won’t work: there’s no way to achieve the “deduction” part. It can be fudged to work, though. We can split the minimal fee information in two: the minimal fee assuming the prefix ends in a moon, and the minimal fee assuming the prefix ends in an umbrella.

The natural way to represent this is with a simple record:

`data State = State { (🌘) ∷ Maybe Int, (🌂) ∷ Maybe Int }`

Each `Int` represents the minimal cost of a mural ending in said glyph; `Nothing` if it’s impossible to end as such. But that’s not perfect yet. Such a representation allows for a pair of `Nothing`s, and… what would we answer if we ended on such a pair? Because of the transition function’s structure, such a pair can never occur. So we really need a representation that reflects that.

According to the kind folks on IRC^{1}, the standard library to reach for in this case is `these`.

Except the Haskell setup we’ve got over at GCJ’s is even more primeval than CodinGame’s!^{2} So no libraries for us. Luckily enough, the data type in itself is pretty straightforward.

`data These a b = This a | That b | These a b`

Err… since I’m not using the library, I might as well give those constructors meaningful names.

`data (│) = (:🌘) Int | (:🌂) Int | (:🌘🌂) Int Int`

Much better. I won’t reproduce its standard combinators and `Bifunctor` instances here, they’re just a reproduction of `These`’s.

With the helpers available, here’s the little utility to compute the final minimal cost from the split state.

```
minimal ∷ (│) → Int
minimal = merge min
```

It allows for a delightfully elegant implementation of the transition function.

```
transition ∷ Int → Int → Char → (│) → (│)
transition x y = \case '🌘' → (:🌘) . (<|🌘)
                       '🌂' → (:🌂) . (<|🌂)
                       '❓' → uncurry (:🌘🌂) . ((<|🌘) &&& (<|🌂))
  where (<|🌘) = minimal . (🌂) (+y)
        (<|🌂) = minimal . (🌘) (+x)
```

Maybe inserting a small reading key would be in order? So, assuming we could implement things properly using the `these` package, here’s what the various symbols mean:

- The type name is `(│)`, aiming to evoke horizontal separation, as it is most often used to represent a state of the computation between two mural glyphs.
- `(:🌘)`, `(:🌂)`, `(:🌘🌂)` are `This`, `That` and `These`, the data constructors holding integer values associated to a moon, an umbrella or both, respectively.^{3}
- `(🌘)` and `(🌂)` are `first` and `second` from `Bifunctor`: apply a function to the moon or umbrella aspect of the value.
- `(<|🌘)` and `(<|🌂)` are helpers to perform the “append moon” and “append umbrella” operations.

We need dedicated code to create the initial state, as there’s no proto-state that would result in the correct first one if the transition function were blindly applied to it.

```
start ∷ Char → (│)
start '🌘' = (:🌘) 0
start '🌂' = (:🌂) 0
start '❓' = (:🌘🌂) 0 0
```

And now solving the problem reduces to a simple fold.

```
invoice ∷ Int → Int → String → Int
invoice x y (h:t) = minimal $ foldl' (flip (transition x y)) (start h) t
```

It performs linearly, which is optimal in this case, and solves test sets 1, 2 and 3 without a hitch.
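For readers who’d rather not decode the operator soup, here is the same split-state dynamic program sketched in Python (an illustrative transliteration, not the contest code; `inf` plays the role of the missing `These` branch, and I’m assuming, as the transition function suggests, that `x` is charged for a moon followed by an umbrella and `y` for the reverse):

```python
from math import inf

def invoice(x, y, mural):
    """Minimal decoration cost of a mural of 🌘/🌂/❓ glyphs."""
    def allowed(c, glyph):
        return c == glyph or c == '❓'
    # best cost of a prefix ending in a moon / in an umbrella
    moon = 0 if allowed(mural[0], '🌘') else inf
    umb = 0 if allowed(mural[0], '🌂') else inf
    for c in mural[1:]:
        # appending a moon costs y when coming from an umbrella, and vice versa
        new_moon = min(moon, umb + y) if allowed(c, '🌘') else inf
        new_umb = min(moon + x, umb) if allowed(c, '🌂') else inf
        moon, umb = new_moon, new_umb
    return min(moon, umb)
```

Note that at every step at least one of the two branches is finite, which is exactly the invariant the three-constructor type encodes statically.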

This concludes this qualifier problem B’s solution. The complete code with I/O handling is on my github^{4}.

See you soon for problem D!

Because Covid, the Infinite House of Pancakes seems closed for business this year, so instead of pancake sorting we get reversorting, which is basically the same thing from another angle.

So how does Reversort work? Allow me to paraphrase the pseudocode:

```
Reversort(L):
  for i := 1 to length(L) - 1
    j := argmin L[i..length(L)]
    Reverse(L[i..j])
```

How artificially iterative. Let’s reword for recursion:

As long as the section of the list under scrutiny is not known to be sorted, locate the minimum element and bring it to the front by reversal; then sort the rest of the list.

“Known to be sorted” is code for “of length 1 or less”. It’s really important to note that this specific reversal is never performed. The algorithm would work all the same if we did perform it, and with a simpler definition too. But no.

It’s trivial to prove this is indeed a sorting algorithm. It’s unstable, but that doesn’t matter because we’ll only ever be handling permutations of [1..N].

From this algorithm we define an algorithm **cost**—the number of items that undergo reversal (this is why it was important to note we’re not reversing 1-lists)—and two problems for the qualifier round:

- problem A performs the algorithm: it gives us a list and asks for the cost.
- problem C reverses^{1} the algorithm: it gives us a cost and asks for the list.

What order of magnitude can that cost value take?

The smallest it can be is when all reversals are as small as possible: one element. This happens whenever the smallest sought element is first of its sublist. In other words^{2}, if the list is already sorted. The total cost in that case is \(N-1\).

The largest it can be, since the number of iterations is predetermined, is when all reversals are as large as possible. On the first iteration, that’s \(N\) elements, which happens when the smallest element is at the end. On the second iteration, that’s \(N-1\), which happens when the smallest remaining element is at the end of the remaining (reversed) list, which translates to the beginning of the initial list. Subsequent iterations reverse fewer and fewer elements, down to the last one, which reverses \(2\) elements. The total cost in that case is:

\[ \sum_{i=1}^{N-1} (N-i+1) = \sum_{L=2}^{N} L = {(N-1) (N+2) \over 2} \]

How is this useful? Well, the problem statement gives a handy set of constraints: there are 100 tests or fewer, and the lists are 100 elements or shorter.

So assuming a linear reversal, solving problem A using a straightforward transcription of the algorithm would perform in \(O(TN^2)\) which is perfectly reasonable to have done in ten seconds.

The marginally tricky^{3} part of implementing it in Haskell is to locate the sublist minimum in a single pass. I did it like this:

```
-- | Extract a (minimum,sublist before,sublist after) triplet from a list.
minSplit :: [Int] -> Maybe (Int,[Int],[Int])
minSplit [] = Nothing
minSplit (x:t)
  | Just (m,h',t') <- minSplit t, m < x = Just (m,x:h',t')
  | otherwise = Just (x,[],t)
```

In plain English: an empty list has no minimum; else the result depends on how the list’s head compares to the list’s tail’s minimum. If the tail’s minimum “wins”, the result triplet is that minimum and the same post-minimum sublist, and the current item prepended to the sublist’s pre-minimum sublist. If the list’s head is a better minimum, we return it with an empty known prefix and the tail as a suffix.
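The same single-pass extraction, sketched in Python for comparison (purely illustrative, the helper name is mine):

```python
def min_split(xs):
    """Return (minimum, items before it, items after it), or None if empty."""
    if not xs:
        return None
    x, t = xs[0], xs[1:]
    rest = min_split(t)
    if rest is not None and rest[0] < x:
        m, before, after = rest
        return (m, [x] + before, after)
    # the head is the (leftmost) minimum
    return (x, [], t)
```

It mirrors the guard structure above: recurse on the tail, then decide whether the head beats the tail’s minimum.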

Now we can implement the algorithm almost as specified:

```
-- | Reversort a list of ints.
reversort :: [Int] -> Writer (Sum Int) [Int]
reversort xs = case minSplit xs of
  Just (m,h,t) -> do
    tell (Sum (1 + length h))
    (m:) <$> reversort (reverse h ++ t)
  Nothing -> pure xs
```

I’m instrumenting with a `Writer` monad to count the reversal lengths without disrupting the algorithm flow. So problem A can be solved with a simple `getSum . execWriter . reversort` invocation.
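Stripped of the `Writer` plumbing, the problem A cost computation also fits in a few imperative lines (a Python sketch, same \(O(N^2)\) budget, not the submitted code):

```python
def reversort_cost(xs):
    """Total Reversort cost: the sum of the lengths of the reversed slices."""
    xs = list(xs)
    cost = 0
    for i in range(len(xs) - 1):
        j = xs.index(min(xs[i:]), i)       # position of the suffix minimum
        xs[i:j + 1] = xs[i:j + 1][::-1]    # bring it to the front by reversal
        cost += j - i + 1                  # the slice length is the fee
    return cost
```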

Now for problem C. We’re given a cost, and are tasked with generating a permutation of `[1..N]` that would match costs when reversorted. I’ll call that a (controlled) revershuffle operation. It takes a sorted list and shuffles it, striving to consume an exact reversal budget.

```
Revershuffle(N,C) returns L:
  let L = [1..N]
  for i := N-1 downto 1
    { invariant: L[i] = N }
    let P = some_amount_to_be_determined
    Reverse(L[i..i+P-1])
```

Each reversal is intended to have cost P.^{4} So the total cost is going to be the sum of each step’s costs. Can we pick a distribution of them that sums to C while remaining compatible with the array bounds?

At each step, P has to be 1 or more, since that was the case for reversort: all reversals necessarily reversed at least one element. At step \(i\) (remember they count down), we’re bounded by the end of the list, so \(i+P-1\) cannot be greater than N, so \(P \le N-i+1\).

As discussed above, a list of N elements necessarily reverses \(N-1\) or more elements, so we can trivially dismiss those cases as impossible. We can also virtually discount them from our budget: this way we won’t have to worry too much about keeping enough budget to be able to complete the algorithm. Remove \(N-1\) first, then count all reversals as consuming as much budget as the sublist’s length *without* counting the minimum.

Transcribing that back to P constraints, we’ll have \(0 \le P \le N-i\).

The remaining problem is merely to consume enough of the remaining budget before reaching the end. That’s a rather easy problem to solve: we can simply greedily consume as much as is available per step, until the algorithm completes. This will make for an easy check of success: the remaining budget has fallen to zero. We won’t even need to ensure our earlier computation of maximum cost is correct.

```
-- | Shuffle [1..n] for an exact cost of c, if possible.
revershuffle :: Int -> Int -> Maybe [Int]
revershuffle n c = go (n - 1) [n] (c - n + 1) where
  go _ _ b | b < 0 = mzero
  go 0 l b = guard (b == 0) *> pure l
  go i l b =
    let p = min (n - i) b
        (before,after) = splitAt p l
    in go (i - 1) (reverse (i : before) ++ after) (b - p)
```

It’s expressed recursively, so it reads a bit differently from the pseudocode, but it really works the same. The `go` helper takes and maintains the same i and L parameters, and an additional B one for the remaining budget. The recursion starts with \(i=N-1\) and ends when \(i=0\), either with a successfully revershuffled list, or in error if the whole budget wasn’t consumed. The other error case is overspending the budget.^{5}
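The greedy budget-spending translates almost line for line to Python (an illustrative sketch; `None` plays the role of the `Maybe` failure):

```python
def revershuffle(n, c):
    """A permutation of [1..n] with Reversort cost exactly c, or None."""
    budget = c - (n - 1)       # each of the n-1 steps costs at least 1
    if budget < 0:
        return None
    l = [n]
    for i in range(n - 1, 0, -1):
        p = min(n - i, budget)             # greedily spend what the bounds allow
        before, after = l[:p], l[p:]
        l = ([i] + before)[::-1] + after   # undo one reversort step
        budget -= p
    return l if budget == 0 else None
```

The sorted list comes back only when the budget is exactly the minimum \(N-1\); any leftover budget at the end signals an impossible cost.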

This concludes the qualifier’s problems A and C solutions. The complete code with I/O handling is on my github.

See you soon for problem B!

**Post-scriptum:** the round editorial mentions that

*Reversort Engineering* involved an insight and working out some math.

I beg to differ. I did the bounds analysis and agree it qualifies as math, but implementing the common sense algorithm doesn’t. The upper bound doesn’t need to be pre-checked for feasibility: it’s enough to iterate, pass it, and recognize the failure.

Hah!↩︎

That word is: “inductively”.↩︎

Arguably overengineered. In a way, it’s not tricky enough: using simple linked lists as I did, there’s no way to reverse the list in the same pass as I’m looking for the minimum. In another view, it’s too much refinement when it could be simply split in more phases with the same big-O complexity: locate minimum, extract up to minimum, reverse and concatenate. The balance I struck aims at exposing as much of the original algorithm as possible. The waste here is consing or thunking the entire list in addition to the reversal, depending on what the optimizer decides.↩︎

That’s P for “price”.↩︎

It can actually only happen on the first call, but is more useful in the inner function when still debugging.↩︎

Org doesn’t do much in terms of export when it comes to drawers. The default HTML backend translator just… exports their contents. It even drops their name on the way.

I’ve attended a few MOOCs recently that were well-funded enough to feature transcripts for most of the videos. Now, as such, those don’t exactly qualify as quality note-taking. But they’re very handy to keep around, if only to have plain text available for keyword search. So I usually copy-paste them to an Org drawer.

So yeah, when exported, it sucks by default. Very noisy, kills the flow. The appropriate export would have them invisible by default, and revealed on demand.

So here’s a bit of code to that effect.

```
(defun my-format-drawer (name contents &rest args?)
  (if (string= name "TRANSCRIPTION")
      (let ((h (format "%x" (sxhash contents))))
        (concat
         "<a id=\"show-" h "\" "
         "onclick='"
         "document.getElementById(\"ts-" h "\").style.display = \"block\"; "
         "document.getElementById(\"show-" h "\").style.display = \"none\"; "
         "document.getElementById(\"hide-" h "\").style.display = null; "
         "return false;' "
         "href=\"#ts-" h "\">"
         "[show transcript]"
         "</a>"
         "<a id=\"hide-" h "\" "
         "onclick='"
         "document.getElementById(\"ts-" h "\").style.display = \"none\"; "
         "document.getElementById(\"show-" h "\").style.display = null; "
         "document.getElementById(\"hide-" h "\").style.display = \"none\"; "
         "return false;' "
         "style='display: none;' "
         "href=\"#show-" h "\">"
         "[hide transcript]"
         "</a>"
         "<div id='ts-" h "' style='display: none;'>"
         contents
         "</div>"))
    (apply org-html-format-drawer-function name contents args?)))
```

Nothing too fancy, just two `<a>` elements and a `<div>`. The anchors have an `onclick` event handler to toggle the `div`’s visibility. And hide self, show the other one.

Inline scripts instead of something factored in a separate JavaScript file because… I’m not too sure, actually. It makes for totally localized Elisp, which is something. I’m not sure how a factored-out variant would look.

Activate by customizing `org-html-format-drawer-function`, or by setting the appropriate property at publish-project level:

```
;; I'm not quite sure why this isn't in by default :-/
;; Possibly the ambiguous list-head modifying specification?
(gv-define-simple-setter plist-get plist-put)
(setf (plist-get (cdr zk-notes) :html-format-drawer-function) #'my-format-drawer)
```

I’d really like to get rid of the JavaScript, but I’m not sure it can actually be done in this case. Please tell me!

This is installment 2 in this series.

There’s a nice set of this class of problems over at brainzilla.^{1} Our old friend, Einstein’s, is there too, rated medium. There are five puzzles available in the “very hard” category.

Here’s the plan:

- upgrade my code to be able to read a puzzle in the site’s format
- check it works using Einstein
- proceed to time various ways of solving them on a harder one.

Of course, I may get bored halfway through, or find something else shiny enough to distract me, or it may just plain old be too hard for me. We’ll see how that pans out…

The first thing to notice is Einstein there isn’t the same one as Wikipedia’s. *Sigh*. Oh well. The gist of it is similar enough.

Brainzilla features a nifty PDF export of their puzzles. It’s the cleanest way I’ve found to extract the relevant information out of them. So, what do we have?

```
Color: blue, green, red, white, yellow
Nationality: Brit, Dane, German, Norwegian, Swede
Drink: beer, coffee, milk, tea, water
Cigarette: Blends, Blue Master, Dunhill, Pall Mall, Prince
Pet: birds, cats, dogs, horses, fish

- The Brit lives in the Red house.
- The Swede keeps Dogs as pets.
- The Dane drinks Tea.
- The Green house is exactly to the left of the White house.
- The owner of the Green house drinks Coffee.
- The person who smokes Pall Mall rears Birds.
- The owner of the Yellow house smokes Dunhill.
- The man living in the centre house drinks Milk.
- The Norwegian lives in the first house.
- The man who smokes Blends lives next to the one who keeps Cats.
- The man who keeps Horses lives next to the man who smokes Dunhill.
- The man who smokes Blue Master drinks Beer.
- The German smokes Prince.
- The Norwegian lives next to the Blue house.
- The man who smokes Blends has a neighbour who drinks Water.
```

They’re kind enough to give us the property names and values. Having to guess them is fun when you’re solving by hand; it’s a nuisance when automating. So I’m glad they do. We notice two-word “orange juice” is gone, but there’s still “Pall Mall” and “Blue Master”. I’ll search-and-replace them to single words. I’ll also add a period at the end of the property list lines, so I can mostly use the same word-based tokenizer between both parts. And remove the leading dashes in the constraint listing.

Then I’ll highlight the actually interesting remaining parts of the text, to see what we really need to keep from this.

Color: blue, green, red, white, yellow.

Nationality: Brit, Dane, German, Norwegian, Swede.

Drink: beer, coffee, milk, tea, water.

Cigarette: Blends, BlueMaster, Dunhill, PallMall, Prince.

Pet: birds, cats, dogs, horses, fish.

The Brit lives in the Red house.

The Swede keeps Dogs as pets.

The Dane drinks Tea.

The Green house is exactly to the left of the White house.

The owner of the Green house drinks Coffee.

The person who smokes PallMall rears Birds.

The owner of the Yellow house smokes Dunhill.

The man living in the centre house drinks Milk.

The Norwegian lives in the first house.

The man who smokes Blends lives next to the one who keeps Cats.

The man who keeps Horses lives next to the man who smokes Dunhill.

The man who smokes BlueMaster drinks Beer.

The German smokes Prince.

The Norwegian lives next to the Blue house.

The man who smokes Blends has a neighbour who drinks Water.

That text doesn’t have much efficiency, huh?

I’m keeping the colon and period punctuation marks as individual tokens to help with:

- distinguishing feature lines from constraint lines
- future extensibility when we’ll have more complex sentences

The features’ actual names aren’t needed *per se*, but I do need to know which values go together, so that name is as good as any other to group them under.

Now to implement this. I’ll use the usual Prolog DCGs.

First to split on words.

```
:- [library(dcg/basics)].

words([W|Ws]) --> word(W), blanks, !, words(Ws).
words([]) --> eos.
```

To scan for a word, take anything that’s not ignored. We’re ignoring whitespace as a separator, and some punctuation marks as tokens of their own. I’m defining those lists in separate predicates to help further evolution.

```
punct(".:").
skip(",").
whitespace(" \n").

word_complement(Cs) :-
    punct(P), skip(S), whitespace(W),
    string_concat(P,S,T), string_concat(T,W,Cs).

word(T) -->
    { word_complement(Cp) },
    string_without(Cp, Codes),
    { Codes \= [] },
    { read_codes(Codes,T) }.
word(T) --> [C], { punct(P), string_codes(P,Cs), member(C,Cs), atom_codes(T,[C]) }.
word(T) --> [C], { skip(S), string_codes(S,Cs), member(C,Cs) }, blanks, word(T).

read_codes(Codes,Token) :- atom_codes(Token,Codes).
```

Let’s try it out.

```
?- string_codes(":hello, world.", Codes),
   phrase(words(Words), Codes).
Codes = [58, 104, 101, 108, 108, 111, 44, 32, 119, 111, 114, 108, 100, 46],
Words = [:, hello, world, '.'].
```

It’s extracted the two interesting punctuation marks as individual tokens. It skipped spaces and commas. It converted the two words to atoms. Seems all right for now.
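As an aside, the same tokenization — periods and colons as tokens of their own, commas and whitespace skipped — fits in one regular expression in Python (illustrative only, not part of the Prolog pipeline):

```python
import re

def words(text):
    # '.' and ':' are standalone tokens; ',' and whitespace only separate words
    return re.findall(r"[.:]|[^.:,\s]+", text.lower())
```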

Let’s see what it makes of the entire input file.

```
?- phrase_from_file(words(Words),"einstein.txt").
Words = [color, :, blue, green, red, white, yellow, '.', nationality, :, brit, dane, german, norwegian, swede, '.', drink, :, beer, coffee, milk, tea, water, '.', cigarette, :, blends, bluemaster, dunhill, pallmall, prince, '.', pet, :, birds, cats, dogs, horses, fish, '.', the, brit, lives, in, the, red, house, '.', the, swede, keeps, dogs, as, pets, '.', the, dane, drinks, tea, '.', the, green, house, is, exactly, to, the, left, of, the, white, house, '.', the, owner, of, the, green, house, drinks, coffee, '.', the, person, who, smokes, pallmall, rears, birds, '.', the, owner, of, the, yellow, house, smokes, dunhill, '.', the, man, living, in, the, centre, house, drinks, milk, '.', the, norwegian, lives, in, the, first, house, '.', the, man, who, smokes, blends, lives, next, to, the, one, who, keeps, cats, '.', the, man, who, keeps, horses, lives, next, to, the, man, who, smokes, dunhill, '.', the, man, who, smokes, bluemaster, drinks, beer, '.', the, german, smokes, prince, '.', the, norwegian, lives, next, to, the, blue, house, '.', the, man, who, smokes, blends, has, a, neighbour, who, drinks, water, '.'].
```

Ok.

Now to strip anything we’re not interested in. We’ll need to extract our domain first. The domain is a list of features, where each feature presents as its name, a colon, a list of value words, and a period.

```
domain([F|Fs]) --> feature(F), domain(Fs).
domain([]) --> [].

feature(feature(Name,Vs)) --> [Name,':'], values(Vs).

values([]) --> ['.'], !.
values([V|Vs]) --> [V], values(Vs).
```

```
?- phrase_from_file(words(Words),"einstein.txt"),
   phrase(domain(Dom), Words, _).
Dom = [feature(color, [blue, green, red, white, yellow]), feature(nationality, [brit, dane, german, norwegian, swede]), feature(drink, [beer, coffee, milk, tea, water]), feature(cigarette, [blends, bluemaster, dunhill, pallmall, prince]), feature(pet, [birds, cats, dogs, horses, fish])]
```

Sweet.

Flattening that and referencing a few connector words now lets us filter the constraints.

```
domain_values(D,Vs) :- maplist(feature_values,D,Vss), append(Vss,Vs).
feature_values(feature(_,Vs),Vs).

connectors([left,next,neighbour,'.']).

cleanse(D,Ts,C) :-
    connectors(Cs), domain_values(D,Vs), append(Cs,Vs,K),
    include(member_(K),Ts,C).
member_(L,E) :- member(E,L).
```

```
?- phrase_from_file(words(Words),"einstein.txt"),
   phrase(domain(Dom), Words, Verbose),
   cleanse(Dom, Verbose, Terse).
Terse = [brit, red, '.', swede, dogs, '.', dane, tea, '.', green, left, white, '.', green, coffee, '.', pallmall, birds, '.', yellow, dunhill, '.', milk, '.', norwegian, '.', blends, next, cats, '.', horses, next, dunhill, '.', bluemaster, beer, '.', german, prince, '.', norwegian, next, blue, '.', blends, neighbour, water, '.']
```

It’s reduced to all that’s needed. Parsing is now trivial:

```
constraints(Dom,[C|Cs]) --> constraint(Dom,C), ['.'], constraints(Dom,Cs).
constraints(_,[]) --> [].

constraint(Dom,same(A,B)) --> attribute(Dom,A), attribute(Dom,B).
constraint(Dom,seq([A,B])) --> attribute(Dom,A), [left], attribute(Dom,B).
constraint(Dom,neighbors(A,B)) -->
    attribute(Dom,A),
    [N], { member(N,[next,neighbour]) },
    attribute(Dom,B).
```

This recognizes three kinds of constraints:

- the most common one: “feature A is in the same place as feature B”
- ordered sequence: “feature A directly precedes feature B”
- unordered sequence: “features A and B are directly next to each other”

It needs an attribute parser, recognizing values from the domain:

```
attribute(Dom,attr(F,V)) --> [V], { member(feature(F,Vs),Dom), member(V,Vs) }.
attribute(_,attr(position,P)) --> [P], { positions(Ps), member(P,Ps) }.
```

We can now parse the entire constraint list:

```
?- phrase_from_file(words(Words),"einstein.txt"),
   phrase(domain(Dom), Words, Verbose),
   cleanse(Dom, Verbose, Terse),
   phrase(constraints(Dom,Constraints),Terse).
Constraints = [same(attr(nationality, brit),attr(color, red)),
               same(attr(nationality, swede),attr(pet, dogs)),
               same(attr(nationality, dane),attr(drink, tea)),
               seq([attr(color, green),attr(color, white)]),
               same(attr(color, green),attr(drink, coffee)),
               same(attr(cigarette, pallmall),attr(pet, birds)),
               same(attr(color, yellow),attr(cigarette, dunhill)),
               same(attr(position, centre),attr(drink, milk)),
               same(attr(nationality, norwegian),attr(position, first)),
               neighbors(attr(cigarette, blends),attr(pet, cats)),
               neighbors(attr(pet, horses),attr(cigarette, dunhill)),
               same(attr(cigarette, bluemaster),attr(drink, beer)),
               same(attr(nationality, german),attr(cigarette, prince)),
               neighbors(attr(nationality, norwegian),attr(color, blue)),
               neighbors(attr(cigarette, blends),attr(drink, water))]
```

Let’s package that.

```
read_zebra(FileName,puzzle(Dom,Constraints)) :-
    phrase_from_file(words(Words),FileName),
    phrase(domain(Dom),Words,Verbose),
    cleanse(Dom,Verbose,Terse),
    phrase(constraints(Dom,Constraints),Terse).
```

With a parsed AST, all we have left to do is convert it to a search, in the same form as last installment’s. In a nutshell: give the search space a shape, then unify it with the constraints, in order.
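The shape-then-unify idea can be mimicked in Python with brute-force permutations on a toy three-house instance (purely illustrative; the constraint set below is made up for the example):

```python
from itertools import permutations

def solve_tiny():
    """Brute-force a tiny zebra-style puzzle: pick a value permutation per
    feature, keep the first assignment satisfying every constraint."""
    colors, pets = ['red', 'green', 'blue'], ['dog', 'cat', 'fish']
    for cs in permutations(colors):
        for ps in permutations(pets):
            # made-up constraints: the blue house is first; the dog lives in
            # the red house; the cat is directly left of the green house
            if cs[0] != 'blue':
                continue
            if cs[ps.index('dog')] != 'red':
                continue
            i = ps.index('cat')
            if i + 1 >= len(cs) or cs[i + 1] != 'green':
                continue
            return list(zip(cs, ps))
    return None
```

Prolog gets the same effect for free: `shape` lays out the unbound structure and each constraint application prunes it through unification, instead of enumerating full candidates.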

```
solve(puzzle(Dom,Constraints),Sol) :-
    shape(Dom,Sol),
    maplist(apply(Dom,Sol),Constraints).
```

The shaping phase will unify the solution with a list as long as the domain’s value count per feature. Each list element will be a `place` functor, with as many arguments as there are features in the puzzle’s domain.

```
shape(Dom,Sol) :-
    inner_length(Dom,N),
    length(Sol,N),
    length(Dom,F),
    maplist(place(F),Sol).

place(F,P) :- length(X,F), P =.. [place|X].

inner_length([feature(_,H)|T],N) :-
    length(H,N),
    \+ ( member(feature(_,X),T), length(X,N2), N2 \= N ).
```

It’s probably easier to visualize than decode:

```
?- read_zebra("einstein.txt",puzzle(D,_)), shape(D,S).
S = [place(_2154, _2160, _2166, _2172, _2178), place(_2202, _2208, _2214, _2220, _2226), place(_2250, _2256, _2262, _2268, _2274), place(_2298, _2304, _2310, _2316, _2322), place(_2346, _2352, _2358, _2364, _2370)]
```

Constraint application unifies the places’ attributes, depending on the connector. “Position” attributes are special-cased so that they apply to a specific place instead of trying them all.

```
apply(Dom,Sol,same(attr(position,Pos),A)) :-
    ( Pos = centre -> length(Sol,L), M is L//2 ;
      Pos = first -> M = 0 ;
      throw(invalid_position(Pos)) ),
    nth0(M,Sol,P),
    unify(Dom,P,A).
apply(Dom,Sol,same(A,attr(position,Pos))) :-
    apply(Dom,Sol,same(attr(position,Pos),A)).
apply(Dom,Sol,same(A,B)) :-
    member(P,Sol),
    unify(Dom,P,A),
    unify(Dom,P,B).
apply(Dom,Sol,seq(A,B)) :-
    nextto(Pa,Pb,Sol),
    unify(Dom,Pa,A),
    unify(Dom,Pb,B).
apply(Dom,Sol,neighbors(A,B)) :-
    apply(Dom,Sol,seq(A,B)) ;
    apply(Dom,Sol,seq(B,A)).
```

Unification simply locates the `place` functor argument by cross-referencing the domain.

```
unify(Dom,P,attr(F,V)) :-
    nth0(N,Dom,feature(F,_)), !,
    P =.. [place|Vs],
    nth0(N,Vs,V).
```

And we’re done! Let’s try it out.

```
?- read_zebra("einstein.txt",P), solve(P,Sol).
Sol = [place(yellow, norwegian, water, dunhill, cats), place(blue, dane, tea, blends, horses), place(red, brit, milk, pallmall, birds), place(green, german, coffee, prince, _25176), place(white, swede, beer, bluemaster, dogs)]
```

A solution is found instantly. As was the case previously, the actual pet the statement asks for isn’t reified: it’s still a free variable. As it’s the only one left in the solution space, it’s unambiguous.

With this new generic infrastructure at the ready, let’s try for a harder puzzle: delightful dresses. Parsing difficulties:

- new relations: “somewhere to the right/left”, “somewhere between”
- new positions: “at one of the ends”
- indirect attributes: “oldest”, “youngest”, “smallest discount”

I’ll introduce a new constraint type, `ordered`, to account for the “somewhere” class of new relations. Recognition will consist in adding “right”, “somewhere”, “between” and “and” to the list of connector words. Parsing is a few additional clauses of the `constraint` predicate.

```
connectors([left,right,next,neighbour,'.',somewhere,between,and]).

constraint(Dom,ordered([A,B])) -->
    attribute(Dom,A), [somewhere,left], attribute(Dom,B).
constraint(Dom,ordered([A,B])) -->
    attribute(Dom,B), [somewhere,right], attribute(Dom,A).

constraint(Dom,seq(A,B)) --> attribute(Dom,B), [right], attribute(Dom,A).
```

The “at-end” one will be a new position pseudo-feature value by virtue of adding a keyword to the list.

`positions([centre,first,ends]).`

The indirect attributes are going to be slightly trickier. We know the domain, so we can resolve them at parse time. To do it cleanly, I’ll need to interpret their values as numbers, instead of the symbols I currently have. So I’ll update the tokenizer slightly:

```
skip(",%").

read_codes(Codes,Number) :-
    catch(number_codes(Number,Codes),
          error(syntax_error(illegal_number),_),
          fail).
read_codes(Codes,Token) :- atom_codes(Atom,Codes), downcase_atom(Atom,Token).
```

I added `%` to the list of skipped characters so it wouldn’t confuse the number parser. The puzzle author was nice enough not to make the discounts collide with the ages. Then I inserted a number parsing clause above the regular atom one: if it’s a number it’ll parse as such, else it will be a simple symbol.

I’ll now list the relatives as recognized words, and update the word purge to save those too.

```
relatives([youngest,oldest]).

cleanse(D,Ts,C) :-
    connectors(Cs),
    positions(Ps),
    relatives(Rs),
    domain_values(D,Vs),
    append([Cs,Ps,Rs,Vs],K),
    include(member_(K),Ts,C).
```

I’ll need to read them accurately. “Youngest” and “oldest” can reasonably be assumed to always refer to an “age” feature.

```
attribute(Dom,attr(age,Young)) --> [youngest], !, { feature_min(Dom,age,Young) }.
attribute(Dom,attr(age,Old)) --> [oldest], !, { feature_max(Dom,age,Old) }.

feature_min(Dom,F,Min) :-
    member(feature(F,Vs),Dom),
    min_member(Min,Vs).
feature_max(Dom,F,Max) :-
    member(feature(F,Vs),Dom),
    max_member(Max,Vs).
```

“Smallest discount” is trickier. Interpreting both words in the current framework would be quite nasty. Interpreting “smallest” to imply “discount” would be very oddly specific.

So I’ll take the side road. I’ll introduce an optional, puzzle-specific predicate to substitute a term for a word. It’ll serve as a hatch to help language recognition in *those* cases.

```
:- dynamic subst/2.

cleanse(D,Ts,C) :-
    connectors(Cs),
    positions(Ps),
    relatives(Rs),
    findall(S,subst(S,_),Ss),
    domain_values(D,Vs),
    append([Cs,Ps,Rs,Vs,Ss],K),
    include(member_(K),Ts,Tmp),
    maplist(perform_subst,Tmp,C).
perform_subst(F,T) :- once(subst(F,T)) ; F = T.
```

Here’s how to use it to interpret the smallest discount without leaking too much abstraction away:

```
subst(smallest,min(discount)).

attribute(Dom,attr(F,Min)) --> [min(F)], { feature_min(Dom,F,Min) }.
```

And we’ve got the whole file parsed. Of course, it’s parsed *wrong* because I’m not interpreting “immediately before” correctly yet. But this is trivial now I’ve got a substitution facility.

`subst(before,left).`

```
?- read_zebra("dresses.txt",Puzzle).
Puzzle = puzzle([feature(dress, [black, blue, purple, red, white]),
                 feature(name, [anna, erica, lauren, megan, sara]),
                 feature(profession, [actress, electrician, programmer, psychologist, surgeon]),
                 feature(type, ['a-line', bodycon, sheath, sundress, wrap]),
                 feature(discount, [5, 10, 15, 20, 25]),
                 feature(age, [30, 33, 36, 39, 41])],
                [seq(attr(profession, surgeon), attr(profession, programmer)),
                 ordered([attr(discount, 10), attr(dress, black)]),
                 neighbors(attr(profession, actress), attr(dress, black)),
                 neighbors(attr(profession, psychologist), attr(type, bodycon)),
                 same(attr(name, megan), attr(age, 41)),
                 ordered([attr(dress, white), attr(age, 36)]),
                 neighbors(attr(type, sheath), attr(discount, 15)),
                 ordered([attr(age, 30), attr(dress, purple), attr(dress, white)]),
                 seq(attr(discount, 5), attr(discount, 15)),
                 ordered([attr(dress, white), attr(discount, 20)]),
                 seq(attr(profession, surgeon), attr(type, sundress)),
                 same(attr(name, anna), attr(dress, white)),
                 seq(attr(name, megan), attr(discount, 5)),
                 same(attr(type, sheath), attr(dress, red)),
                 same(attr(position, ends), attr(age, 33)),
                 seq(attr(type, bodycon), attr(name, sara)),
                 same(attr(name, lauren), attr(age, 33)),
                 same(attr(position, ends), attr(type, sheath)),
                 seq(attr(discount, 10), attr(discount, 5)),
                 same(attr(type, 'a-line'), attr(dress, purple))])
```

I love the smell of a freshly ~~roasted~~ parsed puzzle in the morning.

To be able to solve it, we’ll need to implement the new `ordered` subsequence predicate.

```
apply(Dom,Sol,ordered(S)) :- apply_list(Dom,Sol,S).

apply_list(Dom,[Sh|St],[Ah|At]) :- unify(Dom,Sh,Ah), apply_list(Dom,St,At).
apply_list(Dom,[_|St],As) :- apply_list(Dom,St,As).
apply_list(_,_,[]).
```

And we’ll need to implement the new “at one of the ends” position.

```
apply(Dom,Sol,same(attr(position,Pos),A)) :-
    ( Pos = centre -> length(Sol,L), M is (L+1)//2 ;
      Pos = first -> M = 1 ;
      Pos = ends -> ( M = 1 ; length(Sol,M) ) ;
      throw(invalid_position(Pos)) ),
    nth1(M,Sol,P),
    unify(Dom,P,A).
```

And merrily solve our puzzle.

```
?- read_zebra("dresses.txt",Puzzle), solve(Puzzle,Sol).
Sol = [place(red, lauren, surgeon, sheath, 10, 33), place(black, megan, programmer, sundress, 5, 30), place(purple, megan, psychologist, 'a-line', 15, 41), place(white, anna, actress, bodycon, 15, _17180), place(black, sara, _17218, sheath, 20, 36)]
```

Wait a second. This won’t do! Notwithstanding the remaining free variables, this solution is incorrect! Spot the problem yet?

Allow me to reformat to make it more apparent:

- place(red, lauren, surgeon, sheath, 10, 33)
- place(black, megan, programmer, sundress, 5, 30)
- place(purple, megan, psychologist, ‘a-line’, 15, 41)
- place(white, anna, actress, bodycon, 15, Age)
- place(black, sara, Job, sheath, 20, 36)

Yup. Two black dresses, two Megans, two sheathes, two 15% discounts. Contrary to Einstein, this puzzle’s constraints aren’t sufficient to rule out duplication.

Well, at least that isn’t too hard to include.

```
assert_domain(Dom,Sol) :-
    length(Dom,L),
    foreach(between(1,L,X),assert_feature(X,Dom,Sol)).
assert_feature(X,Dom,Sol) :-
    nth1(X,Dom,feature(_,Vs)),
    maplist(assert_value(X,Sol),Vs).
assert_value(X,Sol,V) :-
    member(P,Sol),
    P =.. PVs,
    nth0(X,PVs,V).

solve(puzzle(Dom,Constraints),Sol) :-
    shape(Dom,Sol),
    maplist(apply(Dom,Sol),Constraints),
    assert_domain(Dom,Sol).
```

The astute reader will have noticed I’m actually checking the converse, namely that each feature value is represented. As there are as many women as values, they’re equivalent.
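The check itself is cheap either way; in Python terms it amounts to verifying that no feature column of the solution repeats a value (a hypothetical helper, for illustration):

```python
def all_distinct(places):
    """True when no feature column of the solution repeats a value."""
    # zip(*places) turns the list of place tuples into per-feature columns
    return all(len(set(col)) == len(col) for col in zip(*places))
```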

```
?- read_zebra("dresses.txt",Puzzle), solve(Puzzle,Sol).
Sol = [place(blue, erica, electrician, wrap, 25, 30), place(purple, megan, psychologist, 'a-line', 10, 41), place(white, anna, surgeon, bodycon, 5, 39), place(black, sara, programmer, sundress, 15, 36), place(red, lauren, actress, sheath, 20, 33)] ;
```

Victory!

Let’s implement blood donation, another “very hard” puzzle from the site, to make sure it wasn’t an accident.

I’m not going to delve too much into the details this time. There were two hurdles.

- Donors can be referred to as “universal donor” or “universal recipient”. This is completely out-of-the-box information. I just inserted “O-” and “AB+” in the relevant places in the constraints. This leads to…
- The Rhesus signs in the PDF are U+00AD SOFT HYPHENs instead of the more usual U+002D HYPHEN-MINUS. Don’t get mixed up!^{2}
- Oh, and the usual: “feature names are expected to be single words”.

Barring that, it all runs as smoothly as expected:

```
?- read_zebra("blood-donation.txt",Puzzle), solve(Puzzle,Sol).
Sol = [place(black, brooke, 'b-', 45, 130, actress), place(green, nichole, 'o-', 35, 160, chef), place(purple, andrea, 'ab+', 30, 120, policewoman), place(blue, meghan, 'a+', 25, 150, florist), place(red, kathleen, 'b+', 40, 140, engineer)]
```

Victory again. This will conclude today’s installment.

So where are we, exactly?

We’ve got Prolog code to solve the Einstein riddle. With a bit of careful generalization, we can mock NLP and parse brainzilla’s puzzles, and solve two of its “very hard” ones too. Performance, on the other hand, is debatable: Einstein is instant, but delightful dresses and blood donation take entire minutes.

So that gives us something to work with for next time. Don’t you love it when a plan comes together?

The code is available on my GitHub.

- There are five houses.
- The Englishman lives in the red house.
- The Spaniard owns the dog.
- Coffee is drunk in the green house.
- The Ukrainian drinks tea.
- The green house is immediately to the right of the ivory house.
- The Old Gold smoker owns snails.
- Kools are smoked in the yellow house.
- Milk is drunk in the middle house.
- The Norwegian lives in the first house.
- The man who smokes Chesterfields lives in the house next to the man with the fox.
- Kools are smoked in the house next to the house where the horse is kept.
- The Lucky Strike smoker drinks orange juice.
- The Japanese smokes Parliaments.
- The Norwegian lives next to the blue house.

Now, who drinks water? Who owns the zebra?

In the interest of clarity, it must be added that each of the five houses is painted a different color, and their inhabitants are of different national extractions, own different pets, drink different beverages and smoke different brands of American cigarets [sic]. One other thing: in statement 6, right means your right.

—

Life International, December 17, 1962

Want to give it a go before reading this? Despite the usual clickbaity claims that only such a low proportion of the world population can solve it, usually 2%, it’s not that hard. But it does need paper, pencil, and an eraser.

I won’t give away the answer in this post, so you can read anyway and come back to it later.^{1}

Now, how would you solve this with a computer program? It’s easy to get distracted by details. The easiest reliable implementation is **not** to do it the same way you’d do it by hand. There’s too much bookkeeping to get it right on the first try, especially when there’s a much simpler approach. We can consider it a decision problem on the input space of all (*nation*, *color*, *pet*, *drink*, *cigarette*) permutations: for each input, we’ll verify that all constraints apply. If they do, we can just scan for the water and the zebra.

There are 5! = 120 permutations of each property, so 5!^{5} = 24883200000 ≈ 2.5 × 10^{10} permutations globally. It’s big. But it’s tractable. It’s actually a lot more tractable now than it was back in 2003.
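The arithmetic is quick to double-check (in Python here, rather than the C++ that follows):

```python
# Count the search space: 5! orderings per property, five independent
# properties in total.
from math import factorial

per_property = factorial(5)   # 120 orderings of a single attribute
total = per_property ** 5     # five independent attributes
print(per_property, total)    # 120 24883200000
```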

Here’s a simple C++ program implementing that.^{2}

```
#include <algorithm>
#include <array>
#include <iostream>
using namespace std;

enum nation_t { england, spain, ukraine, norway, japan };
enum color_t { red, green, ivory, yellow, blue };
enum pet_t { dog, snails, fox, horse, zebra };
enum drink_t { coffee, tea, milk, juice, water };
enum cigs_t { old_gold, kool, chesterfield, lucky_strike, parliament };

template <typename T>
static bool next_to(array<T,5> attr, int i, T v)
{
  return (i <= 0 || attr[i-1] != v) && (i >= 4 || attr[i+1] != v);
}

template <typename A,typename B>
static bool both(int i, array<A,5>& ka, const A& va, array<B,5>& kb, const B& vb)
{
  return ka[i] == va && kb[i] != vb;
}

int main()
{
  array<nation_t, 5> nation;
  array<color_t, 5> color;
  array<pet_t, 5> pet;
  array<drink_t, 5> drink;
  array<cigs_t, 5> cigs;
  for (int i = 0; i < 5; i++)
    // This is horrible. So I *LOVE* IT!!!
    // Try doing /that/ with std::iota!
    nation[i] = (nation_t) (
    color[i] = (color_t) (
    pet[i] = (pet_t) (
    drink[i] = (drink_t) (
    cigs[i] = (cigs_t) i))));
  for (;;) {
    int water_drinker, zebra_owner;
    // check the state against all constraints
    for (int i = 0; i < 5; i++) {
      if (both(i, nation, england, color, red)) goto inconsistent;
      if (both(i, pet, dog, nation, spain)) goto inconsistent;
      if (both(i, color, green, drink, coffee)) goto inconsistent;
      if (both(i, nation, ukraine, drink, tea)) goto inconsistent;
      if (color[i] == ivory && (i >= 4 || color[i+1] != green)) goto inconsistent;
      if (both(i, cigs, old_gold, pet, snails)) goto inconsistent;
      if (both(i, cigs, kool, color, yellow)) goto inconsistent;
      if (drink[i] == milk && i != 2) goto inconsistent;
      if (i == 0 && nation[i] != norway) goto inconsistent;
      if (cigs[i] == chesterfield && next_to(pet,i,fox)) goto inconsistent;
      if (pet[i] == horse && next_to(cigs,i,kool)) goto inconsistent;
      if (both(i, cigs, lucky_strike, drink, juice)) goto inconsistent;
      if (both(i, nation, japan, cigs, parliament)) goto inconsistent;
      if (nation[i] == norway && next_to(color,i,blue)) goto inconsistent;
      if (drink[i] == water) water_drinker = nation[i];
      if (pet[i] == zebra) zebra_owner = nation[i];
    }
    cout << "Found solution: " << water_drinker << ", " << zebra_owner << endl;
    /* fall through */
  inconsistent:
    // permute to next state
    if (!next_permutation(nation.begin(), nation.end()) &&
        !next_permutation(color.begin(), color.end()) &&
        !next_permutation(pet.begin(), pet.end()) &&
        !next_permutation(drink.begin(), drink.end()) &&
        !next_permutation(cigs.begin(), cigs.end()))
      goto lose; // no next state :-(
  }
  return 0;
lose:
  cout << "No other solution found.\n";
  return 1;
}
```

Some noteworthy remarks about the translation from statement to code:

- The first constraint is of a different nature. It has implications not only on the number of slots to allocate, but also on the number of elements in the enumerations. That’s why I’m not making it a variable or even a constant. The code works for five, and only five.
- The property sets were constructed by scanning the constraints **and queries**.
- The `both()` helper implements a failure check for the proposition “if this house index verifies property A, then it must also validate property B”. This explains both why the check is `==` on A and `!=` on B, and why A and B are sometimes in a different order than they appeared in the initial text.
- Yes, I’m using `goto`. This kind of constraint satisfaction, where the goal is reached after succeeding at a sieve of tests, is very conducive to short-circuiting in this manner.^{3}

Let’s run it.

```
$ time ./brute
Found solution: 3, 4
No other solution found.

real    1m57,451s
user    1m57,299s
sys     0m0,028s
```

The solution is actually found closer to the beginning of that, approximately forty-five seconds in, but that’s distribution luck. And it *is* nice to check no other solution exists, and to get an idea of the time to exhaustively search the entire space.

Okay, so it’s solvable. That’s what I’ll call the brute-force approach.^{4} It’ll serve as an upper bound to what we’ll consider an acceptable solve time if we start doing better.

Is doing better even possible? What part of the work can be removed? The insight will probably come easier with an example. The way the program is structured, we’re generating a permutation within the search space, and then checking it for all the constraints.

Consider constraint 10: “the Norwegian lives in the first house.” How discriminant is it? It’s not too hard to see the Norwegian guy will be generated in the first house in only one input out of five. When he isn’t, the input is globally invalid since it’s locally invalid, but a set of 4.9 × 10^{9} failing cases will be generated and evaluated anyway. That’s a huge waste! The common kind of constraint that pairs two attributes in a given house is even more discriminating.^{5}
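The one-in-five figure is easy to verify by brute force over the nation permutations alone (a Python sketch, independent of the C++ program):

```python
# How often does a random ordering of the nations put the Norwegian first?
from itertools import permutations

nations = ["england", "spain", "ukraine", "norway", "japan"]
first_house_norway = sum(p[0] == "norway" for p in permutations(nations))
print(first_house_norway)  # 24 of the 120 orderings, i.e. one in five
```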

Generating only cases where the one-house attribute pairing constraints are valid is a bit of a pain in C++. But we can easily implement constraint 10. I’m not reproducing the code, but here’s the result.

```
$ time ./brute-10
Found solution: 0, 4
No other solution found.

real    0m26,434s
user    0m26,416s
sys     0m0,004s
```

(Don’t cringe at the figures being different. I wanted to keep the same simple initialization code, so I shuffled nations around to do it. It’s the same result all right.)

We can indeed observe it’s roughly five times as fast.

How can we generalize short-circuiting to *all* of the constraints? Well, there’s no definitive generic easy way; it’s still an active area of research. A common rule of thumb is to start with the most discriminating ones, but that’s hindsight thinking.

An easy step up from our brute-force will start with expressing the problem domain in a higher-level language. One where a partially assigned state can more easily be represented. I’ll use Prolog.^{6}

The state will be a list of five `house/5` functors. This is enough to represent the problem and unknowns. But it’s a little too much. We’re able to represent a state where each house has the same nationality, color, pet, drink and cigs. We can’t reasonably^{7} encode the extensive cross-house constraining: it’s what we’re solving for, it’s bound to be really unwieldy for a good reason. So we’ll have to remember to enforce that form of constraint explicitly.

Let’s define a few accessors, so as not to mix up the different attributes.

```
nation(house(N,_,_,_,_),N).
color (house(_,C,_,_,_),C).
pet   (house(_,_,P,_,_),P).
drink (house(_,_,_,D,_),D).
cigs  (house(_,_,_,_,C),C).
```

This time, the first constraint isn’t any different, and we can implement it directly on a Prolog list:

`c1(S) :- length(S,5).`

The “common” constraints all follow the same pattern: peek at the list members one by one, looking for a match that satisfies the rest of the clauses.

```
c2(S) :- member(H,S), nation(H,england), color(H,red).
c3(S) :- member(H,S), nation(H,spain), pet(H,dog).
c4(S) :- member(H,S), drink(H,coffee), color(H,green).
c5(S) :- member(H,S), nation(H,ukraine), drink(H,tea).
c7(S) :- member(H,S), cigs(H,old_gold), pet(H,snails).
c8(S) :- member(H,S), cigs(H,kool), color(H,yellow).
c13(S) :- member(H,S), cigs(H,lucky_strike), drink(H,orange_juice).
c14(S) :- member(H,S), nation(H,japan), cigs(H,parliament).
```

Reading the first constraint `c2` out loud for those who are a bit rusty in their Prolog: constraint 2 is verified for state S if there exists a house H in S, and the nation in H is England, and the color in H is red. “`:-`” reads as “if”; “`,`” reads as “and”.

Some constraints need access to the position in the list. Let’s start with the easy one, constraint 10, “the Norwegian lives in the first house.” We can pattern match^{8} on the state directly:

`c10([H|_]) :- nation(H,norway).`

Constraint 9 needs some notion of the “middle”. As I’m not hardcoding 5 this time, we’ll extract it from the list itself.

`c9(S) :- length(S,L), M is L // 2, nth0(M,S,H), drink(H,milk).`

Constraint 6 needs pairwise access to contiguous houses, to call a specific goal on each. We can do that by pattern matching on the list to check the base case, and recursing on the list’s tail for alternatives.

```
c6(S) :- seq(S,ap(color,ivory),ap(color,green)).

seq([L,R|_],ap(A,Va),ap(B,Vb)) :- call(A,L,Va), call(B,R,Vb).
seq([_|T],A,B) :- seq(T,A,B).
```

I’m using `call/3` to keep it generic on the house attribute being tested, so I can re-use it for the “next to” class of constraints. I’ll implement those using a simple disjunction (boolean OR, represented with a semicolon “;” in Prolog) on the `seq/3` basis.

```
next_to(S,A,B) :- seq(S,A,B) ; seq(S,B,A).

c11(S) :- next_to(S,ap(cigs,chesterfield),ap(pet,fox)).
c12(S) :- next_to(S,ap(cigs,kool), ap(pet,horse)).
c15(S) :- next_to(S,ap(nation,norway), ap(color,blue)).
```

All constraints are accounted for. We can now combine them.

```
puzzle(S) :- c1(S), c2(S), c3(S), c4(S), c5(S), c6(S), c7(S),
             c8(S), c9(S), c10(S), c11(S), c12(S), c13(S), c14(S), c15(S).
```

And solve, instantly: (linebreaks mine)

```
?- puzzle(S).
S = [house(norway, yellow, fox, _1220, kool),
     house(ukraine, blue, horse, tea, chesterfield),
     house(england, red, snails, milk, old_gold),
     house(spain, ivory, dog, orange_juice, lucky_strike),
     house(japan, green, _1182, coffee, parliament)] .
```

Don’t peek too deep if you still wanted to solve it on your own!

A few takeaway notes:

- I did not encode the actual puzzle queries this time. So there’s no “water” or “zebras” in the results: we get anonymous free variables instead.
- Remember how I said the data structure wouldn’t take care of ensuring we didn’t have any duplicates? Yet I didn’t implement any specific code to enforce that. How could it work anyway? For most attributes, a simple counting argument suffices: the puzzle knows about five values and needs to place them in somewhere, so it can only succeed if all set values are indeed distinct. For the attributes that are part of the queries, it’s more of structural luck. We can indeed verify we have a solution with the four constrained attributes plus one placeholder, but nothing prevented the inference engine from yielding a solution with a duplicate attribute and no placeholder, provided such a solution was feasible.
- It’s so much faster than the brute-force version I can’t even reliably time it.

The kind of search we’re using here is a simple backtracking search. It applies each constraint in order, backtracking when one fails. If the entire chain succeeds, that’s a solution. The REPL then asks us whether we want to keep searching for other solutions. (There aren’t any.)

I implemented the constraints in statement order, and Prolog followed. But I’m not using any choice-guiding facility. So they could theoretically be ordered in any other way: we’d get the same result set. Having the “count” constraint first is a good heuristic. The predicates I used in this implementation are versatile enough to let Prolog find a solution in any order. Possibly in a much longer time.

State-of-the-art constraint programming engines can reorder constraints’ application in a number of ways to try and improve speed; we’re not going to do that.

What we *are* going to do is explore refining the variables a bit more. In the current state of the implementation, a variable can be in two states: assigned or unassigned. We’re leaving information on the table, right there. When we’re assigning the Norwegian to the first house, it should make it impossible to assign it again anywhere else. It ends up not changing the result, but a lot of futile possibilities are needlessly explored.
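The idea can be sketched with explicit candidate sets per house. This is illustrative Python, not the Prolog we’ll end up writing, and `assign` is a hypothetical helper:

```python
# Hypothetical sketch: each house keeps a set of still-possible nations;
# committing to one assignment deletes that value from all other domains,
# so it can never be futilely re-assigned elsewhere.
NATIONS = {"england", "spain", "ukraine", "norway", "japan"}

def assign(domains, house, value):
    # Pin `value` in `house` and propagate its removal everywhere else.
    return {h: ({value} if h == house else d - {value})
            for h, d in domains.items()}

domains = {h: set(NATIONS) for h in range(5)}
domains = assign(domains, 0, "norway")
print(domains[0])              # {'norway'}
print("norway" in domains[1])  # False
```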

That ought to make the search better. But it’s already instantaneous. So we’ll have to look for a harder problem first. That’ll be the topic of another post.

In the meantime, enjoy this OR joke:

Constraint Programming is satisfying.

And even if I did by accident, the fun in this kind of puzzle is in the solving process, not the solution. Kind of like Sudoku. But I find Sudoku boring, and implementing CSP solvers fun. YMMV.↩︎

C++, on the other hand, wasn’t as good in 2003. `std::array` rocks!↩︎

I know I’ve got anti-goto zealots in my followers. Most of them aren’t even coders. You *are* welcome to improve it on your own. I may even update it here. **If** it doesn’t make the code any more complex.↩︎

I’m aware it’s possible to do worse without randomizing: by enumerating full 5^{5} assignments per feature, for a grand total of 5^{25} ≈ 3 × 10^{17}, which is all the less tractable that it would need the additional checking we’re not duplicating a feature value. Ridiculous. Don’t do that. C++ supports permutations, use that.↩︎

I count 2880, but I’ve been out of school for too long to be confident to state that in the article main text.↩︎

I’m using SWI-Prolog, mostly because that’s what we used in school way back. I have no idea what the popular Prolog implementation is nowadays, but this one happens to still work.↩︎

Must… resist… urge…↩︎

Prolog calls it “unification”; it’s more general than pattern matching. But my readership is mostly Haskellers.↩︎

So I refactored it. Little by little. Using various Haskell common practices to make bug introduction and reappearance less likely.

The biggest game changer is the new “batch” type representation I use for knowledge management, which makes it much clearer where in the amnesia process we are. Next up is implementing query counting as a resource monad with exit by exception.

So it’s now in a state where it’s remarkably overengineered for a throwaway competitive coding problem, yet still (I hope) works on the venerable platform GCJ provides. Also somewhere in that uncanny valley between literate Haskell and a well-documented module. More could be done to make it even safer, but not too easily while keeping that platform requirement. Indexed monads come to mind, also (not as strongly as they’re much easier to reimplement) free monads.

Anyway, now it’s done, it might as well be put out there; by chance it could be of interest to someone, be it on the competitive algorithms or the language side.

~~This file is a Haddock module documentation page. It is not literate Haskell. Come to think of it, it probably should have been. But it currently is not.~~

~~Most of it reads as text, with the relevant function signatures interspersed. That’s most of the content anyway. The source code itself is only a click away: either the “Source” link at the top of this page, or any of the source links on the functions.~~

This code was originally published as Haddock with source, but is now closer to literate Haskell. Starting with a few extensions and imports, as is now tradition.

```
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE TupleSections #-}
{-# LANGUAGE LambdaCase #-}
-- base
import Control.Applicative
import Data.Bits (xor)
import Data.Coerce
import Data.List hiding (insert)
import Data.List.NonEmpty (NonEmpty((:|)),(<|))
import Data.Functor
import System.Exit (exitFailure)
import System.IO
-- mtl
import Control.Monad.Reader
import Control.Monad.State.Strict
import Control.Monad.Except
-- MonadFail not really needed for GCJ's rusty GHC (too old);
-- Control.Monad.Fail not needed with recent GHCs.
-- So this import is just the worst of both worlds, because I didn't
-- figure out a simple enough way of running an old haddock.
import Control.Monad.Fail
```

I won’t reproduce the entire statement; you can find it on the Code Jam site.^{1} For a summary: in this interactive problem, the judge has knowledge of a *B*-bits wide word called “the database” (for *B* among: 10, 20 or 100). It provides us with an operation to query a bit at an index of our choosing. The goal of the problem is to gain enough knowledge about the database’s contents that we can provide the entire bitstring at once.

There’s a catch. We’re only allowed 150 queries. There’s another catch. Querying the database, one time out of ten, will cause the database’s contents to both switch endianness (50% probability) and flip the bits (50% independent probability) before returning the (new) result.

The saving grace is that we know *which* queries trigger that so-called **quantum fluctuation**: they’re the very first one and then repeat with a period of 10.
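In other words, numbering queries from 1, the fluctuating ones are exactly those with q ≡ 1 (mod 10). A one-liner pins down the agenda (`fluctuates` is a hypothetical helper, in Python rather than the post’s Haskell):

```python
def fluctuates(q):
    # The judge shuffles the database on queries 1, 11, 21, ...
    return q % 10 == 1

print([q for q in range(1, 31) if fluctuates(q)])  # [1, 11, 21]
```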

That quantum fluctuation thing is quite the downer. We can only read 10 bits at a time from the database before everything is shuffled!

As it turns out, that’s not too much of a problem for the easy *B* = 10 case: the bits may be fluctuated before we even start, but as long as we only query 10 of them, they won’t be fluctuated again, so our gained knowledge is fresh enough that we can output the result while it’s still correct.

```
-- | Solve a @B=10@ problem case.
solve10 :: ReaderT BitWidth IO ()
solve10 = void $ runExceptT $ flip runStateT (QueryCount 0) $ do
  bits <- mapM (\i -> fmap (i,) (readBit i)) [Index 1 .. Index 10]
  liftIO $ provideAnswer $ reconstruct bits
```

We need to find some way to keep track of the two operations that could happen to a bit in the original database: changes and moves. Changes are what happen when we complement the entire database: the bit’s value flips. Moves are what happen when the database is reversed: the bit’s value doesn’t change, but its index does.

So, how are we to make sense of this? The trick is to always consider a bit as paired with the symmetrical one on the other side of the database, the one it would switch places with if the database were reversed. From a starting position, we can boil it down to two simple cases:

- The bits start equal. In this case, they’ll always remain equal, as both operations preserve that. To track them, all we need is either one’s value, and we automatically know the other one’s without having to query it. As a parallel to analytical functions, we’ll call these ‘Even’ pairs.
- The bits start different. In that case, they’ll always remain different, as both operations preserve that too. (It may take just a little more thinking to convince yourself of this, but all there is to it is realizing that both operations’ start and end pair are the same.) Same as before, all we need to track them is either bit’s value. As a parallel to analytical functions, we’ll call those ‘Odd’ pairs.

What’s interesting is that once we’ve identified a pair’s quality, not only will it never change, but we don’t actually care about what the operations are anymore!
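The invariant can be checked exhaustively on a toy database. This is a Python sketch of the two judge operations, not code from the solution:

```python
# The two fluctuation operations, and the pair-parity observable.
def rev(bits):        # endianness switch
    return bits[::-1]

def comp(bits):       # complement
    return [1 - b for b in bits]

def parity(bits, i):  # True iff pair i is 'Even': bit i equals its mirror
    return bits[i] == bits[-1 - i]

db = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1]
for op in (rev, comp, lambda b: comp(rev(b))):
    assert all(parity(op(db), i) == parity(db, i) for i in range(5))
print("pair parity survives every fluctuation")
```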

So a valid strategy for the medium *B* = 20 case would be to:

- Identify the pair’s quality for pairs 1 to 5 (counterpart indices 16 to 20). One query per bit, two queries per pair, that’s a complete first block of queries before the ‘QuantumFluctuation’.
- Identify the pair’s quality for remaining pairs 6 to 10. That’s another ten queries and another quantum fluctuation.
- Identify each pair’s current fluctuation status. Since we now know each pair’s quality, this can be done in a single query per pair.
- We’re not querying anymore, so there’s no next fluctuation and we can output the complete database contents.

```
-- | Represent a t'Pair'\'s quality,
-- depending on its relationship to its counterpart.
data PairType
  = Even -- ^ pairs are those where the counterpart is equal;
         -- such pairs are unaffected by reversals.
  | Odd  -- ^ pairs are those where the counterpart is the negation;
         -- database reversals and complements have
         -- the same effect on such pairs.

-- | Identify the 'PairType' from a given pair of 'Bool's.
pairType :: Bool -> Bool -> PairType
pairType x y | x == y = Even
             | x /= y = Odd
```

```
-- | The v'Pair' type represents a database bit and its symmetrical
-- counterpart. For the bit in the first half of the database:
data PairF bool = Pair
  { pairIndex :: HalfIndex -- ^ remember its index
  , pairValue :: bool      -- ^ remember its value
  } deriving Functor       -- ^ a hacky derived instance to get 'fmap' at
                           --   little cost
type Pair = PairF Bool
-- ^ I'm lazily using a parametric @bool@ so I can @DeriveFunctor@ and
-- have a free 'fmap' on the relevant payload, namely the 'pairValue'.
-- In this code, it's only ever going to be used as a 'PairF' 'Bool',
-- hence the type synonym.

-- | Return a pair's left index.
pairIndexL :: Pair -> Index
pairIndexL = halfToFull . pairIndex

-- | Return a pair's right index.
pairIndexR :: BitWidth -> Pair -> Index
pairIndexR bw = halfToFullR bw . pairIndex

-- | In addition to `pairValue` which is a record accessor, pairValueR
-- returns the pair's symmetrical counterpart's value.
pairValueR :: PairType -> Pair -> Bool
pairValueR Even = pairValue
pairValueR Odd = not . pairValue

-- | Expand a pair back to its two known indexed bit values.
expand :: BitWidth -> PairType -> Pair -> [(Index,Bool)]
expand bw pt p = [ (pairIndexL p,pairValue p)
                 , (pairIndexR bw p,pairValueR pt p) ]
```

```
-- | Solve the @B=20@ case by qualifying each pair,
-- then probing all of them in a single block.
solve20 :: ReaderT BitWidth IO ()
solve20 = do
  let block = runExceptT . flip evalStateT (QueryCount 0)
  Right half1 <- block $ mapM readPair [HalfIndex 1 .. HalfIndex 5]
  Right half2 <- block $ mapM readPair [HalfIndex 6 .. HalfIndex 10]
  Right bits <- block $ mapM readBit [Index 1 .. Index 10]
  bw <- ask
  let (cs1,ps1) = unzip half1
      (cs2,ps2) = unzip half2
      currentHalf = zipWith ($>) (ps1 ++ ps2) bits
      bits' = concat $ zipWith (expand bw) (cs1 ++ cs2) currentHalf
  liftIO $ provideAnswer $ reconstruct bits'
```

The hard *B* = 100 case is going to require more fine-grained information management.

The insight here is that since the database remains still within a query block, all the pairs read in that timeframe will remain the same with respect to each other, provided they’re of the same quality. For example, with respect to any given bit of a known even pair, the other bits of known even pairs will either have the same value or the opposite value, and that fact *will remain true after quantum fluctuations, independently of the bits’ subsequent value*.

This is better than having to probe every single pair in a single query block, but it still doesn’t cut it for a direct approach: supposing we’d qualified all 50 pairs of the database, we’d still need to identify each group’s fluctuation status within a single query block. But that won’t fit: we could have as many as 20 groups (two qualities per block), while we’ve got the bandwidth to probe only 10 of them.

So instead of dedicating all of our queries per block to pair qualifying, we’ll instead use the first one or two to probe our previous groups’ status right after a quantum fluctuation. This way, our qualified groups won’t be independent from one another anymore, we’ll be able to batch them together into only a single batch per pair quality for the entire database.

Does this fit within the allowable query count? After the initial block, we’d use up 2 queries to probe, and the remaining 8 to qualify new pairs. So we cover 8 bits per block, the full 100 within 13 blocks. That’s 130 queries at most; it fits!
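Spelling the budget out (Python arithmetic mirroring the reasoning above, with ceiling division for the leftover pairs):

```python
# Query budget for B = 100, 10 queries per fluctuation-free block.
B = 100
pairs = B // 2                   # 50 pairs to qualify
first_block = 10 // 2            # 5 pairs fit in the probe-free first block
per_block = (10 - 2) // 2        # 2 probes leave room for 4 pairs (8 bits)
extra = -(-(pairs - first_block) // per_block)  # ceiling division
blocks = 1 + extra
print(blocks, 10 * blocks)       # 13 blocks, at most 130 queries
```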

```
-- | A Batch groups together t'Pair's of a same known (externally)
-- 'PairType'. If we successfully manage to track one of the batch's
-- representatives' value between 'QuantumFluctuation's, we're able to
-- deduce all the batch's other pairs with no further costly
-- information retrieval!
--
-- The @offset@ parameter enables us to explicitly mark a batch's
-- knowledge as outdated, so we don't accidentally miss a
-- 'QuantumFluctuation' and go out of sync.
data Batch offset
= Empty -- ^ An empty batch. Note that those never hold an @offset@.
| Batch offset (NonEmpty Pair) -- ^ A non-empty batch.
deriving Functor -- ^ I use the same @DeriveFunctor@ trick, this
-- time less idiomatically as the @offset@ can't
-- really be considered the payload: this one makes
-- for a very easy batch 'float'ing implementation.
-- I'm ashamed of this one.
```

```
-- | A __floating__ batch is one whose pairs' values we currently
-- don't know, because a 'QuantumFluctuation' happened and we haven't
-- synchronized yet.
type FloatingBatch = Batch ()

-- | A __bound__ batch is one whose pairs' values are currently known.
-- To avoid having to update all of the values at each
-- 'QuantumFluctuation', we store this as a 'Bool' to be 'xor'ed with
-- them.
type BoundBatch = Batch Bool

-- | Bind a floating batch to a specific boolean offset. This
-- consumes up to one query.
bind :: (MonadError QuantumFluctuation m,MonadState QueryCount m,MonadIO m)
     => FloatingBatch -> m BoundBatch
bind Empty = pure Empty
bind (Batch () ps@(p :| _)) = do v <- readBit (halfToFull (pairIndex p))
                                 pure (Batch (v `xor` pairValue p) ps)

-- | Loosen a bound batch back to a floating one. To be used when we
-- know it'll expire before the next query returns.
float :: BoundBatch -> FloatingBatch
float = fmap (const ())
```

```
-- | Insert a pair in a batch.
-- Can only be done if the batch is currently bound.
insert :: Pair -> BoundBatch -> BoundBatch
insert p Empty = Batch False (pure p)
insert p (Batch b ps) = Batch b ((fmap (xor b) p) <| ps)

-- | Expand a batch to a list of @(Index,Bool)@ pairs.
-- Can only be done if the batch is currently bound.
assocs :: MonadReader BitWidth m => PairType -> BoundBatch -> m [(Index,Bool)]
assocs _ Empty = pure []
assocs pt (Batch b ps) = do
  bw <- ask
  pure (concatMap (expand bw pt . fmap (xor b)) ps)
```

```
-- | Read, classify and store pairs from the database until the next
-- query would result in a quantum fluctuation.
readPairs :: (MonadState QueryCount m,MonadReader BitWidth m,MonadIO m)
          => BoundBatch -> BoundBatch -> [HalfIndex]
          -> m (BoundBatch,BoundBatch,Maybe [HalfIndex])
readPairs evens odds [] = pure (evens,odds,Nothing)
readPairs evens odds is@(i:is') = runExceptT (readPair i) >>= \case
  Right (Even,p) -> readPairs (insert p evens) odds is'
  Right (Odd,p) -> readPairs evens (insert p odds) is'
  Left QuantumFluctuation -> pure (evens,odds,Just is)

-- | Perform a block of queries, maintaining a knowledge base of pair
-- batches between two quantum fluctuations.
readBlocks :: (MonadFail m,MonadReader BitWidth m,MonadIO m)
           => FloatingBatch -> FloatingBatch -> [HalfIndex]
           -> m (BoundBatch,BoundBatch)
readBlocks ftEvens ftOdds indices = do
  (bdEvens',bdOdds',mbIndices') <- flip evalStateT (QueryCount 0) $ do
    -- Despite the MonadFail instance, the first two queries can't
    -- fail since the query count is 0 then 1 at this time. Guarding
    -- against this statically without making the code three times as
    -- long reaches beyond what we have available on the GCJ platform.
    Right bdEvens <- runExceptT (bind ftEvens)
    Right bdOdds <- runExceptT (bind ftOdds)
    readPairs bdEvens bdOdds indices
  case mbIndices' of
    Just indices' -> readBlocks (float bdEvens') (float bdOdds') indices'
    Nothing -> pure (bdEvens',bdOdds')

-- | Solve the @B=100@ case. Actually, this would solve any (lower)
-- case, but 'main' currently only calls it in that case.
solve100 :: ReaderT BitWidth IO ()
solve100 = do
  BitWidth bw <- ask
  let pairRange = [HalfIndex 1 .. HalfIndex (bw `div` 2)]
  (evens,odds) <- readBlocks Empty Empty pairRange
  bits <- liftA2 (++) (assocs Even evens) (assocs Odd odds)
  liftIO $ provideAnswer $ reconstruct bits
```

To guard against losing track of where I am between two quantum fluctuations, I’ll wrap the database querying with a basic resource manager that checks whether the requested query would trigger a fluctuation. This is implemented with two monad transformers and associated classes:

- a `MonadState QueryCount` to count how many queries we performed since the last fluctuation.
- a `MonadError QuantumFluctuation` to signal the special condition.

This helps ensure two things: we only ever trigger fluctuation mitigation if we actually need to perform more queries (see the *B* = 10 case); and we don’t accidentally sync at a likely mistake point, after 9 or 11 queries for example.

A lot of this would be better-suited to an effects system, but `mtl` is all we have on the platform. (And let’s consider ourselves lucky. A year or two ago, we only had bare `transformers`!)

```
-- | Safely query a bit from the database. If querying now would
-- cause a 'QuantumFluctuation', report it using the 'MonadError'
-- interface instead.
readBit :: (MonadError QuantumFluctuation m,MonadState QueryCount m,MonadIO m)
        => Index -> m Bool
readBit i = get >>= \case
  QueryCount 10 -> throwError QuantumFluctuation
  _ -> modify succ *> liftIO (rawReadBit i)

-- | The singleton event type to signal when bad things are happening.
data QuantumFluctuation = QuantumFluctuation

-- | Query a pair of bits from the database and classify it.
readPair :: ( MonadError QuantumFluctuation m, MonadState QueryCount m
            , MonadReader BitWidth m, MonadIO m)
         => HalfIndex -> m (PairType,Pair)
readPair i = do
  bw <- ask
  x <- readBit (halfToFull i)
  y <- readBit (halfToFullR bw i)
  pure (if x == y then Even else Odd,Pair { pairIndex = i, pairValue = x })
```

For reference, the other constraints we encounter in the type signatures are:

- `MonadReader BitWidth` stores the global constant *B*.
- `MonadIO` is a telltale indicator that the affected functions end up interacting with the judge. If I find the time and don’t get frustrated with too much fighting against Google Code Jam’s antique GHC version, I’d like to extract the protocol to a higher level of safety, and likely replace this with a free monad.
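For the curious, here’s a rough sketch of what that free-monad extraction could look like. All names here are hypothetical (this is not code from the actual solution): the protocol becomes a pure data structure, and talking to the real judge or to a pure mock database is just a matter of picking an interpreter.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A pure description of the judge protocol (hypothetical sketch).
-- 'ReadBit' carries its continuation; 'Answer' submits the bitstring.
data ProtocolF next = ReadBit Int (Bool -> next)
                    | Answer [Bool] next
  deriving Functor

-- A minimal hand-rolled free monad; the 'free' package provides the
-- same thing, but self-contained is safer on an antique GHC.
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap f (Pure a) = Pure (f a)
  fmap f (Free g) = Free (fmap (fmap f) g)
instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure f <*> x = fmap f x
  Free g <*> x = Free (fmap (<*> x) g)
instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Free g >>= k = Free (fmap (>>= k) g)

type Protocol = Free ProtocolF

readBitF :: Int -> Protocol Bool
readBitF i = Free (ReadBit i Pure)

answerF :: [Bool] -> Protocol ()
answerF bs = Free (Answer bs (Pure ()))

-- Interpret a protocol against a fixed in-memory database (1-based
-- indices, as in the problem). Returns the last submitted answer.
runPure :: [Bool] -> Protocol a -> ([Bool], a)
runPure db = go []
  where go acc (Pure a)             = (acc, a)
        go acc (Free (ReadBit i k)) = go acc (k (db !! (i - 1)))
        go _   (Free (Answer bs k)) = go bs k
```

An `IO` interpreter against the judge would handle the query counting and fluctuations in one place, and the solver logic would stay entirely pure and testable.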

Additionally, some `newtype`s to embellish the type signatures and prevent some classes of variable mixup:

```
-- | A wrapper around problem-global variable @B@. Converting to this
-- earlier would have spared me quite a few mix-ups between identifier
-- @b@ referring to the database width or identifier @b@ referring to
-- a generic bit/boolean value.
--
-- On the one hand I could just use more verbose identifiers. On the
-- other hand, having the typesystem help is always good.
newtype BitWidth = BitWidth Int

-- | A wrapper around an index to the database. Range from @1@ to @B@.
newtype Index = Index Int deriving ( Eq   -- ^ needed for 'Ord'
                                   , Ord  -- ^ needed to sort in 'reconstruct'
                                   , Enum -- ^ needed for the easy case agenda
                                   , Num  -- ^ needed to convert counterparts
                                   )

-- | A wrapper around an index to the first half of the database.
-- Range @1@ to @B/2@.
newtype HalfIndex = HalfIndex Int deriving Enum -- ^ needed for the agenda

-- | Conversion from a half-index to a full one is always safe.
halfToFull :: HalfIndex -> Index
halfToFull = coerce

-- | Conversion from a half-index to the full one of its right part
-- requires knowing @B@.
halfToFullR :: BitWidth -> HalfIndex -> Index
halfToFullR (BitWidth bw) = (Index bw+1 -) . halfToFull

-- | A wrapper around the query count for “managed” querying.
newtype QueryCount = QueryCount Int deriving ( Eq   -- ^ check for limit
                                             , Enum -- ^ increase
                                             )
```

```
-- | Turn an unordered list of indexed booleans from various batches
-- back into a nice bitstring.
reconstruct :: [(Index,Bool)] -> [Bool]
reconstruct = map snd . sort

-- | Query a bit from the database.
-- This is the raw protocol operation.
--
-- I used to label it as ‘unsafe’ to signal not to use it directly,
-- but I've since then written the easier variations of this puzzle,
-- so I'm now going with ‘raw’.
rawReadBit :: Index -> IO Bool
rawReadBit (Index i) = print i *> checkLine >>= \case
  "0" -> pure False
  "1" -> pure True

-- | Provide an answer to the judge.
provideAnswer :: Foldable f => f Bool -> IO ()
provideAnswer answer = do
  putStrLn $ concatMap (show . fromEnum) answer
  "Y" <- checkLine -- still not legal to end a void'ened
  pure undefined   -- block on a monadic pattern bind :-(

-- | Read a line from the judge.
--
-- As per protocol, if the line to be returned is an @\"N\"@, that's
-- an interaction-terminating signal as far as the judge is concerned,
-- whether it's caused by a protocol error or a wrong answer. So
-- exit cleanly ('exitFailure') on those so the judge can return the
-- correct “wrong answer” result instead of “time limit exceeded”.
--
-- IMHO this is a bit lame on the organizers' part; they'd be
-- perfectly able to distinguish those without making the protocol any
-- more cumbersome than it already is.
checkLine :: IO String
checkLine = getLine >>= \case "N" -> exitFailure
                              s   -> pure s

-- | Perform the Code Jam judge I/O and tie the high-level pieces
-- together.
main :: IO ()
main = do
  hSetBuffering stdout LineBuffering
  [t,b] <- map read . words <$> getLine
  let solver = case b of 10  -> solve10
                         20  -> solve20
                         100 -> solve100
  replicateM_ t $ runReaderT solver (BitWidth b)
```

Despite the current GCJ interface and level of Haskell, this problem was a very interesting one to solve.

If I find the motivation to put some more time into this, there’s more to be done on a few fronts:

- The main algorithm handles two batches: one for each pair quality. This is horrible! We could use one instead of the other by accident and come up with wrong results! This could easily be solved by integrating the quality in the pair. But that’s very wasteful, and some kind of an invitation to store pairs in the wrong batch. We could instead store it in the batch itself. But that’s still a bit of a waste, as we are going to have exactly two of them every time anyway. I’m considering a phantom type parameter for that, but I’m still not sure how to integrate it without too much unpleasantness.
- My implementation of the judge I/O protocol still has two weak spots:
  - Some of the calls to `readBit` are marked as failable, yet we know they can’t be since they’re the first of a query block. Solving that would require oh-so-many language extensions, but seems doable. Just really out of touch for competition code. For now.
  - There’s no static limit on the 150 total queries yet.
- The `Index`es are newtyped and some care is taken when converting, but they’re not statically bound by *B* either, and this also could result in protocol failures.
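To make the phantom-type idea concrete, here’s one possible shape. This is a hypothetical sketch with names of my own (`Batch`, `PairV`), not what the solution currently does: the quality lives only at the type level, so a batch of evens can no longer be passed where a batch of odds is expected, at zero runtime cost.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}

-- The pair quality, promoted to the type level with DataKinds.
data Quality = Even | Odd

-- Stand-in for the post's pair type (hypothetical).
newtype PairV = PairV Int deriving (Eq, Show)

-- The phantom parameter @q@ records which quality a batch holds;
-- it has no runtime representation at all.
newtype Batch (q :: Quality) = Batch [PairV] deriving (Eq, Show)

emptyBatch :: Batch q
emptyBatch = Batch []

addPair :: PairV -> Batch q -> Batch q
addPair p (Batch ps) = Batch (p : ps)

-- This function insists on the even batch: passing a @Batch 'Odd@
-- is now a compile-time error instead of a silent wrong answer.
evenPairs :: Batch 'Even -> [PairV]
evenPairs (Batch ps) = ps
```

The remaining unpleasantness is exactly the one the bullet above hints at: functions that must handle “either batch” generically need to be polymorphic in `q`, or carry a singleton witness of the quality.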

Feedback, comments and suggestions welcome. Reasonable improvements too!

I’m never too sure how stable links like that are. If in doubt, find the Google Code Jam home page using a search engine of your choice, search for past problems in the 2020 qualification round of the Code Jam contest, problem D.↩︎

“True, but we’re not quite there yet.”

This got me wondering. What *is* DevOps?

The Wikipedia article is quite bad. Its biggest section is “relationship to other approaches”. Talk about an identity crisis. “Toolchains” is a top-level section. Is that because it has specific discriminating tooling? “Goals” is a top-level section too, and it’s much bigger than “Definition”. It reads more like “Dreams” anyway.

And the money quote near the end:

While DevOps describes an approach to work rather than a distinct role, job advertisements are increasingly using terms like “DevOps Engineer”.

Come to think of it, yes, I too have had to recruit DevOps contractors before. I distinctly remember asking, at the time, what was meant by that. “Well, someone like us, you know, understands his shit and is able to operate less stiffly than the operations’ silo.”

*“You know”*

At least I was able to filter out those who put `bash` on their resume just because someone had told them to.

And by evaluating the line-up, I was bestowed with part of the revelation. The DevOps candidates I was presented were pieces of meat from “lesser” schools, not because they cost less, you pessimist, but because, *you know*, it’s a specific skillset, that happens to be sold to us for more because, *you know*, they’re kind of hard to come by, and from a distinct set of suppliers, because, *you know*, that’s the DevOps way.

Money finds its way.

Speaking of money, let’s peek at Microsoft’s take:

A compound of development (Dev) and operations (Ops), DevOps is the union of people, process, and technology to continually provide value to customers.

What does DevOps mean for teams? DevOps enables formerly siloed roles—development, IT operations, quality engineering, and security—to coordinate and collaborate to produce better, more reliable products. By adopting a DevOps culture along with DevOps practices and tools, teams gain the ability to better respond to customer needs, increase confidence in the applications they build, and achieve business goals faster.

Cue marketing pitch, along the lines of *“teams that adopt DevOps culture, practices and tools become high-performing, building better products faster for greater customer satisfaction.”* Better than your local astrologer or your money back!

The sharp reader will have noticed that despite leading the sentence with “formerly siloed”, nowhere does it say the teams’ structure would actually change.

Let’s try another big name, such as AWS.

DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.

Well at least now it reads like a definition. It’s the set of stuff that lets you deliver fast. If you deliver fast, you’re DevOps.

My team *was* delivering faster than the global organization’s average,^{1} so at least part of that could have rung true, had I sought enlightenment at the time.

Let’s try the open-source world, with RedHat’s view on the matter.

The word “DevOps” is a mashup of “development” and “operations” but it represents a set of ideas and practices much larger than those two terms alone, or together. DevOps includes security, collaborative ways of working, data analytics, and many other things. But what is it?

Pray tell.

I’ll add Atlassian’s view for good measure. Because it was on the search results’ first page, not because I blindly trust their unbiased opinion on the matter.

DevOps is a set of practices that works to automate and integrate the processes between software development and IT teams, so they can build, test, and release software faster and more reliably.

Well at least it’s humble. The rest of the page peddles their software suite as various “stages of DevOps”, so I’ll snip right there.

Yet they do it with that fine infinity-shaped loop diagram. The one all of the others used as well. Maybe there’s something to be found there? Here’s one from Wikimedia Commons.

It’s the first one I see that actually makes sense to display as two loops, since it splits parts of the process between Dev and Ops. The big names above just had the seven phases, mapped to their products.

But hey, at least with this chart, we can completely split the responsibility between Dev and Ops, crystal clear as day. It’s made very close to explicit by the final link on the search page, The Agile Admin.

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support.

DevOps is also characterized by operations staff making use of many of the same techniques as developers for their systems work.

At last, it’s “official”! DevOps can’t be a role, since the chart *and* the Agile Admin mention dedicated developers and operation staff.^{2}

I have no idea what those people I recruited were, then.

I’m not out of options just yet, though. DevOps is taught in schools, now. I’ll simply ask a trainee what it’s all about.

“DevOps? Oh, I know that one, we studied it last semester! It’s, err, GitHub and Jenkins, right?”

You tell me!

It only makes sense. It sounds cool. So branding wants their share. Before you know it, everybody is using it. So everybody has to have it as a listed resume skill. So universities have to provide, and are happy to oblige. Whoever came up with that name is a marketing genius.

Hey, that’s a lead. Who coined the term?

Digging *that* up is harder. But the French Wikipédia article has an answer: Belgian Patrick Debois did. So I’ll assume he’s the French-speaking flavor of Belgian, and community spirit ensured he was properly listed in all the relevant places.

Following the link to his entry, we get a proper explanation from the horse’s mouth: he used the term to organize the first devopsdays in Ghent, because “Agile Administration System” was too long.

**Agile Administration System**

At last!

I’ve also read about Agile Infrastructure, which hits closer to home for me. Hey, I’ve worked with organizations where the hardware to run their applications had to be planned two years in advance if you wanted a fighting chance at having it in time.

Bringing the wonders of agility to infrastructure and deployments! Indeed, using continuous integration and proper version control can only be a step up from where operations used to be. Continuous deployment based on a good implementation of infrastructure-as-code could definitely help too.

This reminds me of what a few friends of mine do at Google. What did they say they called it? Ah, yes, **SRE**.

For those who’ve been out of the computer industry for the last two decades, *Site Reliability Engineering* is Google slang for “automate the shit out of anything that’s in the way of `www.google.com` having five nines of uptime.” I may have gotten some details wrong, I could only get the googlers to talk to me after getting them drunk and playing pretend you’re having your hiring interview again. Secretive bunch.

In contrast to DevOps, SRE appears to be a very explicit role. And organization. It’s kind of funny everybody else has SRE positions open too, given the little power they have to actually keep `www.google.com` up. Or down.

I jest. “S” can stand for whatever your TLD is. It even extends to systems nobody in their right mind would call a site.

Everybody wants to have them because the concept comes from Google, and when they’re not sunsetting Yahoo, everything Google does is cool. So branding wants their share of being cool too. Before you know it, everybody is doing it. So everybody has to have it as a listed resume skill. For some reason, universities don’t seem to provide; they’re still busy churning out DevOps.

I figure there are only two kinds of SRE job openings out there.

- the ones from Google.^{3}
- the ones from companies who want to become Google by walking in their footsteps; but in the meantime you’ll just operate the software their developers throw over the fence whilst refraining from distracting them. At five nines. You’ll fail on both counts.

*What a mess!*

Not being Google doesn’t preclude you from automating the hell out of a lot of shit, though. God knows my team and I have automated^{4} a lot over the years.

Take Hudson, for example. It wasn’t always there.^{5} Countless in-grown alternatives, including two of mine, were proudly triggering the industry’s software builds in response to developer commits. Striving to minimize duplicate work in a cross-platform sea of C++ ABI compatibility brain teasers. Then to build on top of that to minimize developers’ build times.

Then to have the hardest time convincing management we should migrate to Jenkins, because its impetus dwarfs our sunk costs by orders of magnitude. Some things never change.

Or that other DevOps tool, *“GitHub”*.

By pure luck, I happened to be on the lookout for version control software when `git` first came out. So I’ve been using it since week one. Stating it was different then would qualify as understatement of the year. Like, there’s-no-commit-command level of different. That’s how I got my knowledge of the `git` internals since the beginning: back then, there wasn’t anything but. At least it gives me the confidence to do crazy stuff with `git` dependency graphs and have reddit melt my webserver in return. I can also do the normal stuff, now that `git`’s got the normal stuff abilities too.

The organization I contracted for at the time was seeking justifications to spend man-years on in-grown version control, so they launched a survey of existing options. I entered one-month-old `git` in, semi-jokingly, clearly listing its young age as a lack-of-maturity *con*.

Naturally, it got rejected. Not for lack of maturity, though. I had listed its distributed abilities as a *pro*, so…

“It’s distributed whereas we require a centralized system.”

Wow.

Of course they too are using `git` by now. It’s still as distributed as it was back then. It’s even better at being distributed now than it was back then. They’re using it centralized anyway. As was possible back then as well. Some things never change.

What’s left in the DevOps standard toolkit? Chef and Ansible? Nagios and ELK? LXC and Docker? Kubernetes? I’ve hacked on in-grown alternatives, around proprietary ones, and migrated systems to them.

What’s left in the SRE standard mindset? Automating all that can? Engineering continuous-availability releases? Adjusting alerting to keep on-call rotations meaningful and humane? I’ve done plenty of those too. Shortening the release cycle? Sounds more like DevOps, but it’s a process, and I’ve done a fair share of it too.

Does that make me a DevOps-abiding SRE? Answering that would warrant another post. I do think I’ve got the minimal credibility to apply to such a position, though.

“Hello, our standard recruitment pipeline would normally have you pass a test to see whether you qualify as an SRE, but in your case we’re going to skip it altogether: you have too much out-of-domain knowledge for us to consider you can possibly have the advanced skillset we require for this position.”^{6}

Some things never change.^{7} 🙃

And getting berated for it.↩︎

And my logic is flawless *and* noncircular.↩︎

And maybe AWS.↩︎

And gotten berated for it.↩︎

It’s not even there anymore. Just replace with Jenkins around the time the split happened.↩︎

Translation and bad paraphrasing mine, of course.↩︎

I’m not angry. This rejection carries an actual valuable and *actionable* message: my first-impression abilities weren’t adjusted for this match, and will need polishing. If I can find the time. “Some things never change.” 😉↩︎

As a long-time Org Mode user, I was kind of pre-committed to Org-roam for my general notes.^{1} So that’s what I’ve gravitated towards these past months.

But this isn’t a Zettelkasten post. It isn’t even an Org-roam post. This post is about hacking Org Mode.

For I have itches.

I’m not taking notes all the time. Sometimes I’d just like to *read* them. Easy, I’ll just publish as HTML and point my web browser to them, right?

There are quite a few ways to convert Org to HTML. The natural one being, of course, Org Export. It tastes raw out of the box, but it certainly does the job: I get an HTML transcription of my input; external and media links are converted and work fine. Defining the whole Kasten as an Org Publish project, internal links work as well.

But I have Org Attachments. In Org Mode, attachments are files that are virtually copied inside an outline. Physically they’re copied to a UUID-derived subdirectory; a system property coupled with the `attachment:` link scheme maintains the abstraction.

The “default” way to handle them is to set up publishing on the attachment container directory with the `org-publish-attachment` publishing function.^{2} This would locate any file in the attachment subdirectories, and copy them with no transformation to the publication directory. The transcoder for attachment links is in the same mindset, so it all kind of just works.

There’s a catch, of course. What happens when I have multiple attachments, in separate outlines, with the same name?

As far as the Org Mode abstraction goes, that’s not a problem: the UUID directory they’re stored in will differ, and we can indeed access them separately from their respective contexts.

But when published with `org-publish-attachment`, they end up in the same directory. The latest one published clobbers all homonyms. Not too satisfactory. Especially when Org Mode had a solution in place to avoid those conflicts all along.

So let’s port it.

I’d like them copied in the same kind of directory structure as they come from. I suppose I could just copy the entire attachment container directory, by Emacs or non-Emacs means. But that feels dirty: what if I only wanted to publish a single outline? What if I had removed an entire Org file for, say, privacy reasons? I wouldn’t want its attachments to still be published. And I’m going to need the attachment links to follow anyway. So, really, this means I can only reasonably handle them while I’m taking care of the Org files they’re declared in, since they’re the ones who have the knowledge of their container’s UUID.

So here’s how I’m currently doing it.

```
(defun my-org-publish-attachment-relative (plist filepath pub-dir)
  "Like `org-publish-attachment', but keep the attachment's relative path."
  (when (file-name-absolute-p filepath)
    ;; is absolute path, make relative again
    (setq filepath (org-publish-file-relative-name filepath plist)))
  (let* ((path (file-name-directory filepath))
         (pub-dir-deep (concat (file-name-as-directory pub-dir) path)))
    (org-publish-attachment plist filepath pub-dir-deep)))

(defun my-org-attach-file-dir-of (element)
  "Helper: return the attachment directory of a provided Org Element."
  (let ((pos (org-element-property :begin element)))
    (file-name-as-directory
     (save-excursion (goto-char pos) (org-attach-dir)))))

(defun my-org-publish-attachment-filter (tree backend plist)
  "Tree filter to scan for attachments and publish them in the
same relative directory they come from.  Returns the unchanged
tree.
To be used as a :filter-parse-tree in the
`org-publish-project-alist'."
  (let ((pub-dir (plist-get plist :publishing-directory)))
    (org-element-map tree 'headline
      (lambda (hl)
        (and (member "ATTACH" (org-element-property :tags hl))
             (let* ((dir (my-org-attach-file-dir-of hl))
                    (files (org-attach-file-list dir)))
               (dolist (file files)
                 (my-org-publish-attachment-relative
                  plist (expand-file-name file dir) pub-dir))))))
    (org-element-map tree 'link
      (lambda (link)
        (when (string= "attachment" (org-element-property :type link))
          (let* ((dir (my-org-attach-file-dir-of link))
                 (file (org-element-property :path link)))
            (org-element-put-property link :type "file")
            (org-element-put-property link :path (concat dir file)))))))
  tree)
```

A few things to note:

- ~~I’m defining a `drop-prefix` function to ensure relative links. From all the Elisp tracing I’ve been doing since then, I’m pretty sure this duplicates some `org-publish` functionality. I’ll have to revisit this someday.~~ Indeed, it duplicated `org-publish-file-relative-name`.
- I’m handling attachment links by converting them to file links. This was *not* my initial plan. I wanted to refine the `org-html-link` transcoder to recognize attachment links and add the path prefix accordingly. After many failed attempts, I abandoned this path: the way the HTML exporter’s code is now, attachment links are detected by the transcoder as a custom protocol in its first switch, and their conversion is done *in the attachment module* by peeking at the export backend symbol, and having a case to convert HTML attachment exports to `<a>` tags. Seems reasonable? Well, it works. Until I need one of them to convert to `<img>` instead, that is. The image conversion is the next switch in the transcoder, so it never sees any attachment links. Smells like attachments were added to Org core after the HTML export, and that part never really got fused. Anyhow, doing it right would need real patches in Org, so hacking the substitution to file links will do for now.

One double-sized itch scratched.

Next up: I don’t always have inner links to attachments. Within Emacs, they can always be accessed through the attachment dispatcher, with the `o` class of end-functions for files, or `f` for the entire folder. How can I port that to HTML?

There’s always an `ATTACH` tag on headlines with them; maybe I could enhance it to an HTML link?

Say linking directly to the attachment if there’s a single one, and to the entire directory if there’s multiple. Onus on the webserver to serve an index for it.

I’m not confident enough to touch the exporter’s info plist, so I’ll use a dynamic variable instead.

- the proper place to define it is when we have access to the UUID. We’re not guaranteed to have any more contents in the node than its headline, so that’s the place.^{3} It’s the `org-html-headline` transcoder.
- the place to use it is when we’re transcoding the tag name. It’s the `org-html--tags` internal helper.

```
(defvar my-attach-link nil
  "The attachment link target currently in scope, nil when none.")

(defun my-ox-html-attach-headline (old-func headline contents info &rest args)
  "Set `my-attach-link' to the attachment target location for the
scope of `org-html-headline'.
Intended as advice on `org-html-headline'."
  (if (member "ATTACH" (org-element-property :tags headline))
      (let* ((dir (my-org-attach-file-dir-of headline))
             (files (org-attach-file-list dir))
             (my-attach-link (concat "file:"
                                     (org-publish-file-relative-name
                                      (cond ((cdr files) dir)
                                            ((concat dir (car files))))
                                      info))))
        (apply old-func headline contents info args))
    (apply old-func headline contents info args)))
(advice-add 'org-html-headline :around #'my-ox-html-attach-headline)

(defun my-make-link (dest text)
  "Make an Org link element from a destination"
  (with-temp-buffer
    (save-excursion (insert (org-link-make-string dest text)))
    (org-element-link-parser)))

(defun my-ox-html-attach-tag (arglist)
  "Replace an ATTACH tag string with an HTML link.
Intended as advice on `org-html--tags'.
The link is to be setup in dynamic variable `my-attach-link' by
the `my-ox-html-attach-headline' advice."
  (let ((tags (car arglist))
        (info (cadr arglist)))
    (list (mapcar (lambda (tag)
                    (if (string= tag "ATTACH")
                        (org-html-link (my-make-link my-attach-link tag)
                                       "ATTACH"
                                       (cons '(:html-inline-image-rules nil) info))
                      tag))
                  tags)
          info)))
(advice-add 'org-html--tags :filter-args #'my-ox-html-attach-tag)
```

Noteworthy:

- I’m forcing `:html-inline-image-rules` to `nil` when invoking `org-html-link` to generate the link. If I didn’t, Org Export would detect the link as “descriptionless” and convert it to an inline image, which would look real weird. My links aren’t descriptionless, though; I’m going out of my way to ensure they’re tagged “ATTACH” at every level. I suspect that’s a bug in `org-export-inline-image-p`, which advertises only applying to links without a description, but checks `org-element-contents` on them. AFAICT, `org-element-contents` is *always* `nil` on link elements.
- There’s probably a more Emacsy way of updating the information in place. Most of my coding has been in pure functions as of late; I need a bit of an adjustment when I come back to Elisp.

Second itch scratched.

What’s blocking me from Nirvana?

On the Org core front: I’m going to want inline audio pretty soon. I could tackle it from both ends, so I’ll need a bit more reflecting before I start.

On the Org-roam front: I’m obviously going to need the backlinks. So that’s still missing, but I don’t think I’ll need to personally hack as much: a lot has already been done out there and I’ll likely be able to pick one. Neil Mather’s implementation is particularly nifty, I’ll probably take a lot of inspiration there.

In the meantime, the code is available as a gist. Feel free to help improve my rusty Elisp!

**Update 2021-03-23:** It broke with an org-mode update, a good opportunity to sanitize the difference between relative paths and link destinations.

I do use at least two other systems for specialized and/or shared notes: Neuron and the venerable MediaWiki.↩︎

Despite its flaws, there’s really no obligation to do it that way. There’s just no support (that I found) for anything else.↩︎

You may wonder, as I did, why it can’t be done directly at the tag transcoding point. The reason is that the tag transcoding point is not provided with enough information to recover the attachment properties: all it has is the tags *string* and the global export plist; no buffer positioning or parse tree.↩︎

I’ve got a major arrangement in the works, so I didn’t upgrade. Most of the announced improvements were around the new engraving fonts, and since my piece is jazz in MuseJazz style, I didn’t really feel like I was the target audience.

And as a sane software engineer, I fear change, *especially* when I’m using it for something serious at that time.

But then it decided to upgrade itself on its own^{1}.

Well, the new one’s there now, so I might as well try it out. (After backing up my scores, of course.)

…and so far I only have good things to say about it. Of course I didn’t care much for the new dialog that prompted me to convert to the new fonts. But the rest of the interface hasn’t actually changed, so it’s that much muscle memory I won’t have to unlearn. And I discovered a few other improvements, outside the fonts’ scope, that *did* resonate with me.

- The new export dialog is a lot better indeed. The flow now *looks* different between PDF and MP3, which is a *good* thing. So much less expected confusion when it comes to overwriting files with the wrong type.
- This may have been there before, but it’s the first time I’ve tried it: notating a *glissando* results in one being played back. And it sounds reasonably good, too! Nice surprise.
- Chord symbols have an audio representation when moving around! Quite useful to detect typing mistakes. From a quick look in the inspector, it seems they can actually be set to echo at playback time as well. I know people for whom that’s going to be a life changer.

So… nothing harmful detected for now, and more good stuff. I’m almost tempted to migrate to the development version. Upgrade your installations if you haven’t already.

Congratulations on a release well done, MuseScore team!

*“It”* not being MuseScore at all. Rather some artefact of snaps being integrated in Ubuntu in a way I’m not too familiar with yet. Not that I’d want to be, for that matter. But that’s topic enough for another ~~rant~~ post.↩︎

What a year! Between Covid-19, family matters, work matters, the new site and all, not much of that Advent of Code went smoothly. I did manage to solve all the puzzles in time.

This year’s personal challenge was to write about it. So I kept using Haskell—why change if it works so well—and upgraded the programs to full-fledged posts.

*Literate Haskell* is really nice to use. The language’s disregard for order of definitions makes TANGLE mostly redundant, which adds to the cohesiveness. With the notable hitch that imports still have to be on top, so there’s always that skippable heading block.

Not all puzzles were as *post-worthy* as the others. I tried to strike a balance across a broad spectrum of styles: some posts are straightforward commented implementations, others feel more like written live cuts, others go through more reflection on how puzzle and solution match…

This year was a rather easy one. On the math and computer science fronts, there wasn’t much left for me to discover. On the Haskell front, I took the opportunity to explore alleys I don’t take everyday. Most notably this year:

- `recursion-schemes` in day 7
- the ST reference network in day 11
- the lens-like monstrosity that solves day 14

I’m slowly moving to Relude for personal projects and did so too for Advent of Code. It didn’t last half a day. AoC is really more of a *“solve it once”* mindset. I really can’t get myself to give a damn about handling user error properly when there’s only one input, error-free by definition.

I’m not including benchmarking of any kind. Same reasoning: most of this code is optimized for clarity of explanation, and that often means using the native default list structures. Unambiguously suboptimal, demonstrably adequate in the face of the small input sizes. It’d all be dwarfed by the lens monstrosity anyway.

Here’s the full list of posts and summary keywords; the files are cloneable from github.

**Bonus** it’s not Haskell, but it’s literate: solving day 7 with a git commit graph.

And that pretty much wraps up the year.

Thanks again **Eric Wastl** for crafting puzzles of such quality year after year. I know the work that goes into making them so unnoticeably good, and doing so consistently by piles of 50 is superhuman.

I can’t wait ’til next year!

Now the time pressure is off—and I have perfect knowledge of what’s in both parts—let’s take a fresh look at it and attempt to do it all at once. Starting with the standard literate Haskell prelude.

```
{-# LANGUAGE RankNTypes #-}
import Control.Lens
import Control.Lens.Unsound
import Control.Monad
import Data.Bits
import Data.IntMap.Strict (IntMap)
import qualified Data.IntMap.Strict as Map
import Text.Parsec
```

So what’s expected? We’re given a file in the following format:

```
mask = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX1XXXX0X
mem[8] = 11
mem[7] = 101
mem[8] = 0
```

…and we must interpret it in two different ways:

- the addresses being strict, the values being modified by the mask when written
- the values being strict, the addresses being modified *and expanded* by the mask when written to

What makes it intrinsically interesting? Data representation. The addressable RAM space is 36 bits, so just a bit beneath 7 × 10^{10} slots. Slots are 36 bits wide as well, so we’re looking at 36 × 2^{36} bits of addressable data. That’s 288 GiB. It’s more than I have, so **direct simulation is off**.
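To double-check that figure, here’s the arithmetic as a throwaway GHCi-style definition (my own sanity check, not part of the solution):

```haskell
-- 2^36 slots of 36 bits each, converted to GiB: bits / 8 / 1024^3.
ramGiB :: Double
ramGiB = fromIntegral ((2 ^ 36) * 36 :: Integer) / 8 / 1024 ^ 3
-- λ> ramGiB
-- 288.0
```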

My input file is 500 lines, shared between 100 mask changes and 400 memory assignments. So for part 1, I can safely assume at most 400 of these spaces will contain a value, and I can implement the RAM storage with a simple `IntMap`.

For part 2, it’s more subtle. The masks may fan out and interfere a lot with each other, in which case I’d be stuck between the full 288 GiB representation and scratching my head real hard before finding an adequate representation. Or they may remain mostly focused and the part 1 representation still copes.

We can get an idea of the spread by counting the `X`s per mask: each `X` doubles the number of addresses.

```
$ perl -lpe '$_=y/X//' day14.in | sort -n | sed -ne '$p'
9
```

So we’re looking at 400 × 2^{9} ≈ 200 000 36-bit slots in the worst case. That’s still acceptable with the `IntMap`.

What can I share between both parts? Here’s a few.

- input data parsing
- RAM representation (but not contents)
- the general linear processing logic

Let’s start with the real easy one, **RAM representation**. I already decided to go for an `IntMap` in both cases. 36 bits of data need an `Int64` at least. I’ll just use the standard `Int`, as my system is 64-bit.

```
type Address = Int
type Value = Int
type RAM = IntMap Value
```

Ok. Now the general linear **processing logic**. Squinting a bit, I have two operations to define: a mask assignment operation and an addressed memory assignment.

The **mask assignment** is likely going to be different between both parts, as I may not want to represent the mask the same way, since it behaves so differently in both.

Then the **addressed memory assignment** operation. That one might be more shareable. Squinting a bit more, it takes an address and a value, applies a transformation to both, then writes the transformed value to the transformed address. The latter not necessarily remaining singular.

This sounds pretty close to what optics do. Except I’m no good at them.

Yet.

*Watch me suffer.*

From what I know, a lens is a getter/setter couple. Except we don’t ever need to “get” using the masked logic, so I can probably skip the getter if it makes things too complicated. The “multiple slot access” sounds Laarhoven enough to me, but I don’t know for sure if it’s compatible with the lens view or if I have to look for some other glass widget.
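As a quick refresher (my own minimal sketch in the style of the rest of this post; `_fst'` is a made-up name, and `lens` is the combinator from `Control.Lens`):

```haskell
{-# LANGUAGE RankNTypes #-}
import Control.Lens

-- A lens packages a getter and a setter over the same focus.
_fst' :: Lens' (a, b) a
_fst' = lens fst (\(_, y) x -> (x, y))
```

With it, `(1, "hi") ^. _fst'` gets `1`, and `(1, "hi") & _fst' .~ 42` rebuilds the pair as `(42,"hi")`.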

[Cue me scrounging through the documentation]

> You can also write to setters that target multiple parts of a structure,

YES!!! That’s what I had in mind! Now to assemble it properly…

The way to write memory changes each time the mask changes.^{3} That’s a good sign the setter kind of “is” the mask. In other words, I’ll be converting each mask assignment to a lens of some sort, and use it for the subsequent updates.

Let’s start with the multiple write, as I suspect it’s going to be the most complex part of it. Browsing the lens package, many data types from `base` seem supported, but unfortunately not the `IntMap` I had in mind. There *is* a `Data.Map.Lens`, though; maybe if I downgrade my type, I can use it?

> One of most commonly-asked questions about this package is whether it provides lenses for working with Map. It does, but their uses are perhaps obscured by their genericity. This module exists to provide documentation for them.

Oh. This may be a fear for the `Data.Map` users, but it’s definitely reassuring for me and my `IntMap`.

On the other hand, it’s all about modifying pre-existing values, and this doesn’t apply in my case, as the whole point of using a map is that I’m trying to avoid inserting the complete set of 2^{36} keys. So I need to find how to insert before anything else, or this won’t fly.

After some more browsing, I stumble upon `Control.Lens.At.at`, that seems to do exactly that. Ok, the building blocks are here, let’s get our hands dirty!

```
λ> let ram = Map.fromList [(0,0),(1,1),(2,2)]
λ> ram ^. at 1
Just 1
λ> ram & at 5 ?~ 42
fromList [(0,0),(1,1),(2,2),(5,42)]
```

That covers inserting. Now let’s try for multiple values access. After some more searching, the way to go appears to be `Monoid`s.

```
λ> ram ^.. (at 0 <> at 2)
[Just 0,Just 2]
λ> ram & (at 0 <> at 2) ?~ 42
fromList [(0,42),(1,1),(2,2)]
λ> ram & (at 0 <> at 2) .~ Just 42
fromList [(0,42),(1,1),(2,2)]
λ> ram & (at 0 <> at 2) %~ const (Just 42)
fromList [(0,42),(1,1),(2,2)]
```

Mmm. I can manage a multi-value read, but my writes consistently fail to update more than one value. Looks like I’m going to have to seek outside help.

Also, my RAM accesses returning `Nothing` is only an artifact of using a sparse map in place of a full array. I ought to adjust for that.

```
at' :: Address -> Lens' RAM Value
at' a = \f -> at a (fmap Just . f . fromMaybe 0)
```

Except it triples my runtime. Possibly because it loses the “strict” aspect of my `IntMap`.

```
at' :: Address -> Lens' RAM Value
at' a = lens (Map.findWithDefault 0 a) (flip (Map.insert a))
```

In the meantime, back to basics to try and update multiple slots. There’s this promising function in `Control.Lens.Setter` I can likely put to good use.

`setting :: ((a -> b) -> s -> t) -> IndexPreservingSetter s t a b`

```
λ> :t Map.alter
Map.alter :: (Maybe a -> Maybe a) -> Map.Key -> Map.IntMap a -> Map.IntMap a
λ> :t setting Map.alter
setting Map.alter
  :: (Conjoined p, Settable f) =>
     p (Maybe a) (f (Maybe a)) -> p Map.Key (f (Map.IntMap a -> Map.IntMap a))
λ> :t set (setting Map.alter)
set (setting Map.alter) :: Maybe a -> Map.Key -> Map.IntMap a -> Map.IntMap a
λ> :t set (setting Map.alter) (Just 42) 0 ram
set (setting Map.alter) (Just 42) 0 ram :: Num a => Map.IntMap a
λ> set (setting Map.alter) (Just 42) 0 ram
fromList [(0,42),(1,1),(2,2)]
```

Damned. Looks like I reimplemented `at`, but still failed at multi-update. Let’s try again.

```
λ> :t setting
setting
  :: (Conjoined p, Settable f) => ((a -> b) -> s -> t) -> p a (f b) -> p s (f t)
λ> :t setting (\f -> Map.alter f 0)
setting (\f -> Map.alter f 0)
  :: (Conjoined p, Settable f) =>
     p (Maybe a) (f (Maybe a)) -> p (Map.IntMap a) (f (Map.IntMap a))
λ> :t set (setting (\f -> Map.alter f 0))
set (setting (\f -> Map.alter f 0)) :: Maybe a -> Map.IntMap a -> Map.IntMap a
λ> :t set (setting (\f -> Map.alter f 0)) (Just 42)
set (setting (\f -> Map.alter f 0)) (Just 42)
  :: Num a => Map.IntMap a -> Map.IntMap a
λ> :t set (setting (\f -> Map.alter f 0)) (Just 42) ram
set (setting (\f -> Map.alter f 0)) (Just 42) ram :: Num a => Map.IntMap a
λ> set (setting (\f -> Map.alter f 0)) (Just 42) ram
fromList [(0,42),(1,1),(2,2)]
λ> set (setting (\f -> Map.alter f 0 . Map.alter f 2)) (Just 42) ram
fromList [(0,42),(1,1),(2,42)]
```

YES!!!^{4}

Let’s package that into a convenient function while I still remember how I did it.

```
ats :: Foldable l => l Int
    -> Setter' (Map.IntMap a) (Maybe a)
ats is = setting $ \f -> appEndo (foldMap (Endo . Map.alter f) is)
```

Verification before moving on…

```
λ> ram & ats [2..4] ?~ 42
fromList [(0,0),(1,1),(2,42),(3,42),(4,42)]
```

No, wait. The actual “correct” way has just revealed itself to me. After I’ve read most of the `lens` documentation 15×, I just stumbled upon `Control.Lens.Unsound` at last. And it has all the missing bits I wanted. Unsound but correct. At last!

```
ats :: [Address] -> Traversal' RAM Value
ats = foldr1 adjoin . map at'
```

Except it doesn’t work. I’ll spare you the compilation error and some of the detail. It took weeks of trying and a StackOverflow question to get it upright.^{5} But I have it now, I’m holding on to it!

```
ats :: Foldable l => l Int -> Traversal' RAM Value
ats = runTraversal .
      foldr (\e t -> Traversal $ at' e `adjoin` runTraversal t)
            (Traversal ignored)
```

Moving on. I’d like to perform the changes using a **single lens operation**. That ~~`at`~~ `ats` function seems like the right abstraction, except it builds a ~~`Lens`~~ `Traversal` when I only need a `Setter`. ~~As long as I’m writing this on my own, I’d rather skip on the (keyboard) typing.~~ In that vein, I’d like a setter that takes an address as its input, and focuses on the addresses that make sense in the current context.^{6}

The list of addresses to write can’t be known at the time the mask is parsed, as it depends on the given address. What *is* known is the function to convert an address to a list of addresses. So I can construct my setter at parse time, by composing my `ats` combinator with that function. Problem solved! Implementation deferred, as I’d still like to share the parsing with part 1.

Let’s study the specifics of part 1. Here we have direct addressing, so `at` is indeed the perfect match. And I’d need to hack together a setter that modifies the value being written. This shouldn’t be too much of a problem, using `setting` as before, this time without changing the level of the setter’s focus.

Well, well. I do have everything I need as far as lens operations go. Now to put the icing on the cake, I’d like to reduce the processing to a single operation per memory write input instruction. For both parts at once.

Speaking FP jargon, I’d want an endomorphism on a product state of a v1-style RAMfile and a v2-style RAMfile. That I’d build from endomorphisms on the two separate RAMs.

This sounds close to arrow combinators, and indeed the `overA` one hits pretty close. But not perfect. Next hunch is that it’s yet another multisetter, this time deferring to two inner setters, providing them with the same value. So maybe something more like applying a value to two setters in a reader (`->`) monad.

Let’s try to type and implement it.

```
pairSetter :: Setter' a b -> Setter' a b -> ReifiedSetter' (a,a) b
pairSetter s1 s2 = Setter $ setting $ \f (x,y) -> (x & s1 %~ f, y & s2 %~ f)
```

Wow. That compiled. Let’s try it, just to be sure.

```
λ> :t pairSetter (at 0) (at 100)
pairSetter (at 0) (at 100)
  :: (Settable f, At a, Num (Index a)) =>
     (Maybe (IxValue a) -> f (Maybe (IxValue a))) -> (a, a) -> f (a, a)
λ> :t pairSetter (at 0) (at 100) ?~ 42
pairSetter (at 0) (at 100) ?~ 42
  :: (Num (IxValue a), At a, Num (Index a)) => (a, a) -> (a, a)
λ> (ram,ram) & pairSetter (at 0) (at 100) ?~ 42
(fromList [(0,42),(1,1),(2,2)],fromList [(0,0),(1,1),(2,2),(100,42)])
```

Looks all right.

After I found `adjoin` in the unsound subpackage, I realize I could have spared some work with `lensProduct`. But at least this time, I got what I wanted independently, so I’ll keep it and not make this post even more confusing.

I’ve exhausted my entire “what-if” list. There’s nothing left to do but **implement**.

To construct the setter, I’ll convert the parsed mask to two masks for part 1 and more for part 2.

```
type Write = Address -> ReifiedSetter' (RAM,RAM) Value

-- | An assignment is a setter on an address.
address :: (And,Or) -> (And,[Or]) -> Write
address p1 p2 a = Setter $
  runSetter (pairSetter (addrV1 p1 a) (addrV2 p2 a))
```

~~For some reason beyond my understanding by now, that previous definition won’t compile as `liftA2 pairSetter addrV1 addrV2`. More lens magic I’m not worthy of understanding just yet. Oh well.~~ Probably more `Setter` vs `ReifiedSetter` woes. I’m at peace with those now.

```
-- | V1 addresses are used directly; values get applied the two
-- masks before writing. Mask order is irrelevant as the bitsets
-- they operate on are disjoint.
addrV1 :: (And,Or) -> Address -> Lens' RAM Value
addrV1 (And andMask,Or orMask) a = \f ->
  at' a $ fmap ((.|. orMask) . (.&. andMask)) . f

-- | V2 addresses get masked and multiplied per mask; values are
-- used directly. Masking order *is* relevant here as the bitsets
-- overlap totally.
addrV2 :: (And,[Or]) -> Address -> Traversal' RAM Value
addrV2 (And andMask,orMasks) a =
  ats [ a .&. andMask .|. orMask | Or orMask <- orMasks ]
```

I’ll perform all operations directly from the parser, using `Parsec`’s state to store the current `Write`. That state will be updated each time a mask assignment is parsed.

`type Parser = Parsec String Write`

With that state, the program will thread a dual RAM as the monadic return value of each instruction.

```
program :: Parser (Int,Int)
program =
  bimap sum sum . foldl (&) (Map.empty,Map.empty) <$>
  many instruction <* eof
```

So each instruction is expected to return a dual RAM modifier function.

```
instruction :: Parser ((RAM,RAM) -> (RAM,RAM))
instruction = (setMask <|> setMem) <* endOfLine <?> "instruction"
```

Setting a mask doesn’t alter the RAM yet, so it returns `id`.

```
setMask :: Parser ((RAM,RAM) -> (RAM,RAM))
setMask = try (string "mask = ") *> (mask >>= putState) *> pure id
```

The mask is parsed bit by bit.

```
mask :: Parser Write
mask = toWrite . mconcat <$> traverse maskBit [35,34..0]
  where
    toWrite (masks1,(and2,ors2)) =
      address masks1 (and2,foldM (map . mappend) mempty ors2)
```

Each bit contributes some update to the writer’s settings.

- for part 1, `X`s are ignored; known digits contribute either to the “and” mask (zeros) or the “or” mask (ones)
- for part 2, `0`s are ignored; `1`s contribute to the “or” masks (all of them); `X`s contribute to the “or” masks as a duplication: both as a `0` and a `1` bit.

So I’ll use the monadic list transformation to aggregate “or diffs”: lists of masks to or to the ones I already had. A two-element list would duplicate; a one-element list is a form of `fmap (.|.)`. A zero-element list (`[[]]`) would clear all masks, but I’m not going to use that: I’ll use an absence of list (`[]`, *i.e.* `mempty`) instead to do nothing. The monadic expansion is performed in the `mask` function above.

```
maskBit :: Int -> Parser ((And,Or),(And,[[Or]]))
maskBit i =
      ( mempty               , (And (bit' i),[[Or 0,Or (bit i)]]) ) <$ char 'X'
  <|> ( (And (bit' i),mempty), mempty                             ) <$ char '0'
  <|> ( (mempty,Or (bit i))  , ( mempty , [[Or (bit i)]] )        ) <$ char '1'
  <?> "maskBit"
```

I like how `mempty` serves as a perfectly acceptable substitute for `(mempty,mempty)`.
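As an independent sanity check of those part 2 semantics, here’s a small standalone sketch (not the solution’s code; `expandAddresses` is a hypothetical helper): it walks a mask string over an address, keeping `0` bits, forcing `1` bits, and floating `X` bits both ways.

```haskell
import Data.Bits (clearBit, setBit)

expandAddresses :: String -> Int -> [Int]
expandAddresses mask a =
  foldl step [a] (zip [length mask - 1, length mask - 2 .. 0] mask)
  where
    step as (_, '0') = as                          -- keep the address bit
    step as (i, '1') = map (`setBit` i) as         -- force the bit to 1
    step as (i, _  ) = [ op x i | x <- as          -- 'X': both 0 and 1
                       , op <- [clearBit, setBit] ]
```

On the puzzle statement’s example, `expandAddresses "000000000000000000000000000000X1001X" 42` yields `[26,27,58,59]`, matching the expected fan-out.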

With the writer in the parser’s state, I can now write the memory assignment operation’s implementation “naturally”.

```
setMem :: Parser ((RAM,RAM) -> (RAM,RAM))
setMem = do
  void $ string "mem["
  addr <- number
  void $ string "] = "
  value <- number
```

It is now my greatest pleasure, ladies and gentlemen, to present to you the following next two lines. This entire post’s *raison d’être*, seeking, mumbling, exploring, cursing and overengineering, is to put myself in such a position to be able to write them as such.

```
  write <- getState
  pure $ runSetter (write addr) .~ value
```

The main wrapper uses a dummy initial mask. I’d normally use a call to `error` instead, but `Parsec`’s state is unconditionally strict, and it would bring yet more complexity to wrap it lazy again, and for little gain. Just don’t feed the program inputs that don’t start with a mask assignment, ok?

```
main :: IO ()
main =
  print . runParser program (address mempty mempty) "source code"
    =<< readFile "day14.in"
```

The rest is helpers and support.

```
newtype And = And Int
instance Semigroup And where And a <> And b = And (a .&. b)
instance Monoid    And where mempty = And (complement 0)

newtype Or = Or Int
instance Semigroup Or where Or a <> Or b = Or (a .|. b)
instance Monoid    Or where mempty = Or 0

bit' :: Bits b => Int -> b
bit' = complement . bit

number :: Parser Int
number = read <$> many1 digit <?> "number"
```

This concludes “today”’s solution. A month into the making.

It’s brutally inefficient—it solved my case in just over four minutes—yet it worked on the first try. And I got to understand way more than I set out to about `lens`es’ nitty-gritty.

It also concludes the series, being the twenty-fifth solution I *finally* get to publish. I’ll write a recap as time permits.

I hope you enjoyed reading about my lenses journey. See you soon!

It also presents the interesting feature that it’s going to be the first one I publish out-of-order on the blog. Much XML and JSON rejoicing awaits me as I hack at the site and its Atom feed to make everything appear ~~as I want it~~ *as it Should.*↩︎

No, wait, scratch that, nothing went as it should have. It’s going to be the *last* one I publish regardless of ordering, because Doing Things Right had hidden hurdles.↩︎

~~This has been, and remains to date, my biggest blocker before publication. I’ve tried to make a traversal of it in many Many different ways and failed at all. Relieve me by telling me which foundational law doing it would go against. Or ridicule me by solving it in two combinators, that would still be nice.~~ Ok this solved itself in the end. Whew.↩︎

And *that* has been the greatest blocker of them all.↩︎

That, and it couldn’t really be a lens anyway, seeing as it addresses multiple values. ~~I *am* curious as to whether there’s a more direct way of combining all of this. If you know of one, I’d like to hear from you!~~ I’d always like to hear from you, but this part is solved now I got the `Traversal` working.↩︎

In this literate bash^{1} post, I’ll walk you through the proper way to solve the problem.

Really, it was right under our nose. We’re given specifications for bags. They can refer to zero or more other bags. But never—in my input—forming any cycles. Ring a bell yet?

Of course! Git, baby!

For those unfamiliar, Git is a DSL and toolkit dedicated to manipulating directed acyclic graphs. Some people even use it to track files. So why reinvent the wheel when others have solved the problem before? NIH be damned, it’s time to use the proper tool for the job.

I’ll use Git for its graph handling abilities, but I have no need to track any files, let alone directories. So I’ll just set up a temporary directory for the Git metadata and be done with it. With all the proper precautions to avoid being lawyered into oblivion if one of my readers were to encounter a glitch and take out their entire and sole copy of a repository.

```
set -ue
unset GIT_DIR
trap 'rm -rf $GIT_DIR' EXIT
GIT_DIR="$(mktemp -d)"
export GIT_DIR
git init
```

The input format looks something like this.

```
light red bags contain 1 bright white bag, 2 muted yellow bags.
dark orange bags contain 3 bright white bags, 4 muted yellow bags.
bright white bags contain 1 shiny gold bag.
muted yellow bags contain 2 shiny gold bags, 9 faded blue bags.
shiny gold bags contain 1 dark olive bag, 2 vibrant plum bags.
dark olive bags contain 3 faded blue bags, 4 dotted black bags.
vibrant plum bags contain 5 faded blue bags, 6 dotted black bags.
faded blue bags contain no other bags.
dotted black bags contain no other bags.
```

Let’s make it more practical.

```
prepare() {
sed -E \
-e 's/ contain|,|\.| no other bags//g' \
-e 's/(\w+) (\w+) bags?/\1-\2/g'
}
```

The first `-e` line gets rid of anything not data. The second one then normalizes the bag identifiers into some form of string that makes an acceptable Git tag name.

The transformed input now looks like this:

```
light-red 1 bright-white 2 muted-yellow
dark-orange 3 bright-white 4 muted-yellow
bright-white 1 shiny-gold
muted-yellow 2 shiny-gold 9 faded-blue
shiny-gold 1 dark-olive 2 vibrant-plum
dark-olive 3 faded-blue 4 dotted-black
vibrant-plum 5 faded-blue 6 dotted-black
faded-blue
dotted-black
```

In the Git model, taking the first line as an example, I’m going to create a commit to represent the “light red” bag, and give it two parent commits: the “bright white” representative and the “muted yellow” one.

There’s a catch. I don’t have an identifier for either parent yet. So I’m going to have to create my commits in a very specific order, inner bags before outer bags. In graph theory lingo, this is known as a reverse topological sort, since I want the pointy side of the arrow to exist before its non-pointy side comes to life.

Fortunately enough, my GNU coreutils^{2} come with a `tsort` utility. I’ll just reverse the direction of the arrows I provide to it so I get the proper ordering directly.

```
BAGS=""
declare -A LINKS
while read CONTAINER CONTAINED; do
LINKS[$CONTAINER]="$CONTAINED"
set -- $CONTAINED
while [[ $# > 0 ]]; do
BAGS="$BAGS $2 $CONTAINER"
shift 2
done
done < <(prepare < day07.in)
BAGS="$(tsort <<< "$BAGS")"
```

I’m ready for business. I’ll create one commit per bag, with its expected contents as a commit message for easier verification.

```
for BAG in $BAGS; do
parents=""
set -- ${LINKS[$BAG]}
while [[ $# > 0 ]]; do
parents="$parents -p $2"
shift 2
done
git commit-tree \
$parents \
-m "$BAG contains: ${LINKS[$BAG]}" \
$(git write-tree) |
xargs git tag $BAG
done
```

We can check that, indeed, shiny-gold has two links beneath it^{3}, leading to dark-olive and vibrant-plum as expected.

So now I can solve part 1 by simply checking, for each known bag, which ones have shiny-gold in their lineage. `git log` does this out of the box.

```
for BAG in $(git tag); do
  [[ $BAG == shiny-gold ]] && continue
  git log $BAG --pretty=oneline
done |
grep -c 'shiny-gold contains'
```

The correct answer, 4, is returned.

As can be expected from AoC, part 2 is trickier. There are more or less two challenges to overcome.

- Most^{4} `git` operations treat the nodes they follow as a set. They remember the nodes they’ve encountered in the current run and won’t show them twice if they’re reached through different paths. This is obviously an oversight from the Git maintainers; I’ll be sending them a patch to correct this shortly. Yet in the meantime I’ll have to work around.
- I currently store a single bag per contained type, but we want to count them, so that information will have to be rendered in the Git graph.

It’s trickier than it seems. I can’t, for example, include a bag as a parent multiple times: `git` would automatically (and wrongly!) deduplicate it.

The approach I’ll take will be to duplicate the contained bags by giving them a different commit message, thus separating their identifiers. To avoid cross-container naming clashes, I’ll also include a trace of their containing bags, so each bag in the shiny gold one is unique, in Git as well as aboard.

```
expand() {
local TRACK="$1" CONTAINER=$2 CONTAINED=""
set -- ${LINKS[$CONTAINER]}
while [[ $# > 0 ]]; do
for i in $(seq $1); do
CONTAINED="$CONTAINED -p $(expand "$TRACK > $2 $i/$1" $2)"
done
shift 2
done
git commit-tree -m "$TRACK" $CONTAINED $(git write-tree)
}
```

With this recursive function, I can now directly tag the subgraph I’m interested in.

`git tag part-2 $(expand shiny-gold shiny-gold)`

And count the bags.

`git log part-2 --pretty=oneline | grep -c '>'`

The correct answer, 32, is returned.

Unfortunately, my input was a little bit larger than the example, so I had to compress the images a bit before publication. Still, it gives the correct results, with the added satisfaction of actually using the Right Tool For The Job.

~~I’m not including the part 2 full picture for my input because 120 pages to stitch together is a bit over my patience threshold. So until someone finds me a way to automate this, you’ll have to make your own (hey, it’s open source!) using either your own input or mine deduced from the part 1 graph. Or just use your imagination.~~

**Update:** I’ve pushed both the sample graph and the one from my puzzle input to GitHub. Like, in case you found a bug in one of my bags and wanted to send patches. A better use of your time might be to clone it locally and examine it with `gitk`, where at least you’d get proper navigation between child and parent commits.

**Update:** Found the patience to screenshot and stitch—there were only 76 pages and after a bit of fiddling ImageMagick did most of the work—and a kind soul turned up to make the whole thing browsable. So my part 2 input’s graph is now online too.

This concludes this day 7 addendum solution. I hope you enjoyed it. See you soon!

No, that’s not really a thing. That I know of. But `pandoc` was nice enough to let me hack its `literate_haskell` extension with a two-line filter to highlight as bash instead of Haskell on the WEAVE side, and GHC’s `unlit` worked as-is on the TANGLE side. So… I guess it’s a thing now, *de facto*. If you want to reproduce at home, I used this command-line:

```
$ ~/.stack/programs/x86_64-linux/ghc-8.8.4/lib/ghc-8.8.4/bin/unlit day07.lsh /dev/stdout | bash
254
6006
```

But I suppose `sed` would be just as easy.↩︎

It’s not GNU-specific, it seems to apply to any recent enough Unix system. Wikipedia says it’s been added to POSIX in 2017.↩︎

I’d call them children, since they’re “contained”, but for some reason Git wants to refer to them as parents. When people tell you Git is confusing, this is what they’re referring to.↩︎

I’d say “all”, but it’s hard to be sure. Let me know if you can think of one!↩︎

(You’ll need JavaScript activated.)

Thanks!!!

I’ll parse using the standard functions from the Prelude, and find the answers by “brute force” exhaustive search, the simplest idea that could work.

```
main :: IO ()
main = do
  ns <- map read . lines <$> readFile "day01.in"
  print [ x*y   | x <- ns, y <- ns, x+y == 2020 ]
  print [ x*y*z | x <- ns, y <- ns, z <- ns, x+y+z == 2020 ]
```

As you noticed, I avoid neither picking the same number multiple times nor picking a set of numbers in a different order. The puzzle input guarantees the solution is unique anyway, so I can suffer the answers being reported multiple times if it saves a bit of typing up front.

By an accident of life, I was actually up at six, so it was reasonable to attempt speed. As it turned out, I was kind of fast, but the site was kind of down. So no points for me.

Life’s unfair.

Anyway, now the pressure is down, let’s talk algorithms for a bit. The solution presented above is *O*(*N*^{2}) for part 1 and *O*(*N*^{3}) for part 2. My input is 200 lines long, so both are still reasonable. But what if it was longer?

The spirit of AoC is “you solve for your input”, so the canonical answer to that is “your question doesn’t make any sense.” Thus the rest of this post is purely theoretical.

An obvious improvement to the code above would be to actually avoid all the duplication I mentioned. The usual Haskelly way to do that is to use the `tails` function from `Data.List`.

`[ x*y*z | x:xs <- tails ns, y:ys <- tails xs, z <- ys, x+y+z == 2020 ]`

It’s faster for sure. But it’s still *O*(*N*^{3}), so not orders of magnitude faster. We can do better.

A common idea is to give those numbers some structure, so a single number can be queried for being part of a sum that reaches the target. Let’s start with the pair sum problem.

- We could sort the numbers, for a cost of *O*(*N* log *N*) time. The resulting sorted list could then be queried for a specific number in logarithmic time using binary search. We could run over all the numbers and check if their complement to 2020 is in there too, for a total runtime of *O*(*N* log *N*) as well.
- With a sorted list, we could consume it from both ends: depending on the sign of *left* + *right* − 2020, we know exactly which end to trim. This gives a cruise runtime of *O*(*N*), but still a global one of *O*(*N* log *N*) since we have to sort first.
- We could… do nothing, for a cost of zero. The resulting list could then be queried for a specific number in linear time using sequential search. This gives a total runtime of *O*(*N*^{2}). This doesn’t come as a surprise, as this is exactly my original code.
- We could hash the numbers, and query them in constant time. This gives a theoretical^{1} linear runtime of *O*(*N*), the best we could possibly get!

Of course I’m playing fast and loose with the “list” terminology: I’d need constant-time access for the binary search to work fast.
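The “both ends” scan from the second bullet can be sketched as follows (my illustration, not the post’s code; it uses `last`/`init` on a plain list for brevity, where a real implementation would keep two indices into an array):

```haskell
import Data.List (sort)

-- Compare the outermost pair's sum to the target: too small, drop the
-- low end; too large, drop the high end; equal, we're done.
pairSum :: Int -> [Int] -> Maybe (Int, Int)
pairSum target = go . sort
  where
    go xs@(l:_:_)
      | s == target = Just (l, r)
      | s <  target = go (tail xs)
      | otherwise   = go (init xs)
      where r = last xs
            s = l + r
    go _ = Nothing
```

On the puzzle statement’s sample, `pairSum 2020 [1721,979,366,299,675,1456]` returns `Just (299,1721)`.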

How about the triplet sum then?

I could conceive of sorting, then for each number trying to find its 2020-complement as a sum of two numbers using the “both ends” method. That’d be *O*(*N*^{2}). I never remember the optimal, remind me to go check out the reddit thread.

In any case, this concludes today’s solution. See you soon!

Getting to proofs when it involves hashing is always a complex matter. See how I’m wisely dodging it here?↩︎

```
{-# LANGUAGE RecordWildCards #-}
import Data.List
import Text.Parsec
type Parser = Parsec String ()
```

Each record provides two numbers, a character and a password. The interpretation for them varies for both parts, so I’ll store them under generic names.

```
data Entry = Entry
  { a        :: !Int
  , b        :: !Int
  , c        :: !Char
  , password :: String
  }
```

I’ll parse them with a straightforward `Parsec` parser.

```
entry :: Parser Entry
entry = Entry
  <$> number
  <*  char '-'
  <*> number
  <*  char ' '
  <*> anyChar
  <*  string ": "
  <*> many1 anyChar

number :: Parser Int
number = read <$> many1 digit
```

And a little wrapper.

```
parseInput :: String -> [Entry]
parseInput =
  either (error . show) id .
  mapM (parse (entry <* eof) "input") .
  lines
```

The notion of password validity for part 1 is that the character must occur a number of times within the interval.

```
valid1 :: Entry -> Bool
valid1 Entry{..} = a <= numC && numC <= b
  where numC = length (elemIndices c password)
```

For part 2, the logic is a bit more involved: exactly one position indexed by the numbers must be equal to the character. I’ll implement that as reading both, checking for equality to the provided character, and validate by canonicalizing with a sort. Being careful to adjust the indexing’s origin.

```
valid2 :: Entry -> Bool
valid2 Entry{..} =
  sort (map ((== c) . (password !!) . pred) [a,b]) == [False,True]
```

And here’s the `main` wrapper.

```
main :: IO ()
main = do
  entries <- parseInput <$> readFile "day02.in"
  print $ length $ filter valid1 entries
  print $ length $ filter valid2 entries
```

This concludes today’s solution. See you soon!

```
import Data.List.Split

main :: IO ()
main = do
```

The input is a subset of the local geology. The full geology is obtained by repeating the pattern as many times as necessary along the horizontal dimension.

So no need for intricate parsing, I’ll store it just about as is.

`geology <- lines <$> readFile "day03.in"`

Now to count trees along a provided slope.

`let slope right down = `

I’ll generate the list of positions encountered, and count those that are a tree.

`length $ filter (== '#') $ `

To generate the positions, I’ll use a helper function that extracts a position from a pattern line and an X coordinate.

`zipWith (\row i -> row !! (i `mod` length row)) `

The pattern lines are already what’s stored in the `geology` variable, so I’d use it directly. Except for part 2, where one of the slopes skips a line out of two. So I use `chunksOf` to generalize the skipping aspect.

`(map head (chunksOf down geology))`

The X coordinates are a simple arithmetic sequence, generated by built-in syntax for enumerations.

`[0,right..]`

The rest is wrapping: a single count for part 1, the product of multiple counts for part 2.

```
print $ slope 3 1
print $ product $ [ slope 1 1, slope 3 1, slope 5 1, slope 7 1, slope 1 2 ]
```

This concludes today’s solution. See you soon!

```
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE ViewPatterns #-}
import Data.Bifunctor
import Data.Char
import Data.List.Split
```

A passport is a set of key/value pairs. I’ll store it in an association list.

```
type Passport = [Field]
type Field = (FieldId,Value)
type FieldId = String
type Value = String
```

To parse, I’ll split paragraphs, then split by words and break on colon.

```
parsePassport :: String -> Passport
parsePassport = map parseField . words

parseField :: String -> Field
parseField = second tail . break (== ':')
```

In part 1, “valid” means “containing all the required fields”.

```
validate1 :: Passport -> Bool
validate1 passport = all (`elem` map fst passport) fieldIds
```

In part 2, “valid” means “present *and* abiding by some field-specific rules”. I’ll define a type for a function that validates a field value.

`type Validator = Value -> Bool`

Three fields are validated by a numerical interval check.

```
digits :: Int -> Int -> Int -> Validator
digits len low high ds =
     length ds == len
  && read ds >= low
  && read ds <= high
```

My input is very nice to me: it’s well-formed enough that this `read` call never fails.
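If the input were less tame, a total variant (a hypothetical `digitsSafe`, swapping the partial `read` for `readMaybe`) would reject malformed values instead of crashing:

```
import Text.Read (readMaybe)

-- hypothetical total variant of the interval check: same semantics as
-- digits on well-formed input, False instead of a crash otherwise
digitsSafe :: Int -> Int -> Int -> String -> Bool
digitsSafe len low high ds =
  length ds == len &&
  maybe False (\n -> n >= low && n <= high) (readMaybe ds :: Maybe Int)
```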

Now I can define an alist of validators.

```
validators :: [(FieldId,Validator)]
validators =
  [ ("byr",digits 4 1920 2002)
  , ("iyr",digits 4 2010 2020)
  , ("eyr",digits 4 2020 2030)
```

The “eye color” field is a simple membership check from a hardcoded set; I can define it inline.

`, ("ecl",(`elem` ["amb","blu","brn","gry","grn","hzl","oth"]))`

The “passport id” field has two conditions, but a lambda expression fits the bill.^{1}

`, ("pid",\ds -> length ds == 9 && all isDigit ds)`

The last two fields are a bit too complex to fit in a line. I’ll defer to toplevel functions.

```
  , ("hgt",height)
  , ("hcl",hairColor)
  ]
```

Height is a “digits” field with a unit, with differing allowed ranges depending on the unit.

```
height :: Validator
height (break isLetter -> (ds,unit)) =
  case unit of
    "cm" -> digits 3 150 193 ds
    "in" -> digits 2 59 76 ds
    _    -> False
```

Hair color is a 6-digit HTML color code. I’ll check for syntax and length by pattern-matching.

```
hairColor :: Validator
hairColor = \case
  '#':xs@[_,_,_,_,_,_] -> all isHexDigit xs
  _ -> False
```

Now I can wrap it all to check an entire passport.

```
validate2 :: Passport -> Bool
validate2 passport =
  all (\(f,v) -> maybe False v (lookup f passport)) validators
```

Since the field list is the same, I can avoid repeating myself and derive it from the validators’ alist.

```
fieldIds :: [FieldId]
fieldIds = fst <$> validators
```

Obviously, the country id field is a red herring: since it’s valid to have it as well as not have it, and its contents are ignored, it doesn’t ever need to be checked or even appear in my code.

```
main :: IO ()
main = do
  ps <- map parsePassport . splitOn "\n\n" <$> readFile "day04.in"
  print $ length $ filter validate1 ps
  print $ length $ filter validate2 ps
```

This concludes today’s solution. See you soon!

If you follow me on Twitter, you may have noticed I had a little trouble feeling excited about this one. If you want a more interesting way to do it, I recommend reading up on Justin Le’s refinement types approach.

Yes, I know how to do without by lifting `(&&)` in the reader applicative. This is educational code.↩︎

Seats are numbered from front to back, then left to right,^{1} so the decoding for instructions is as follows.

```
toBit :: Char -> Int
toBit 'F' = 0
toBit 'B' = 1
toBit 'L' = 0
toBit 'R' = 1
```

I can then combine them all to a single seat number.

```
passToSeat :: String -> Int
passToSeat = foldl1 (\a b -> 2*a + b) . map toBit
```
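On the statement’s example pass, the fold reads the ten letters as one 10-bit number (restating the two functions above so the check stands alone):

```
-- "FBFBBFFRLR" is the statement's example: row 44, column 5, seat id 357
toBit :: Char -> Int
toBit 'F' = 0
toBit 'B' = 1
toBit 'L' = 0
toBit 'R' = 1

passToSeat :: String -> Int
passToSeat = foldl1 (\a b -> 2*a + b) . map toBit
```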

And apply it to all tickets in input to find the maximum.

```
main = do
  passes <- lines <$> readFile "day05.in"
  let seatIds = map passToSeat passes
  print $ maximum seatIds
```

For part 2, the most reliable way to proceed without doing it by hand is to simply transcribe the statement: I’m looking for a seat number that’s missing with its two neighbors^{2} being present.

```
let seatRange = [minimum seatIds..maximum seatIds]
print $ filter (\s -> s + 1 `elem` seatIds
&& s - 1 `elem` seatIds
&& s `notElem` seatIds) seatRange
```

This concludes today’s solution. See you soon!

```
import Data.List.Split
import Data.List
main = do
```

The data is provided as paragraphs of lines of characters, representing groups of results of questions that were answered positively.

I’ll use the old Perl trick of splitting on `"\n\n"` to extract paragraphs, then split each paragraph into lines.

`gs <- map lines . splitOn "\n\n" <$> readFile "day06.in"`

Part 1 is the number of questions to which someone in the group answered positively, summed across groups. In set parlance, that’s the union of the positive answers. `Data.List` happens to have primitives for set operations.

`print $ sum $ map (length . foldr1 union) gs `

Part 2 is the number of questions to which everyone in the group answered positively, summed across groups. In set parlance, that’s the intersection of the positive answers.

`print $ sum $ map (length . foldr1 intersect) gs `
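On the statement’s sample groups, the two folds give the expected tallies (a self-contained restatement, with the groups inlined):

```
import Data.List (union, intersect)

-- the statement's sample: five groups of answer lines
sampleGroups :: [[String]]
sampleGroups = [["abc"],["a","b","c"],["ab","ac"],["a","a","a","a"],["b"]]

anyoneYes, everyoneYes :: Int
anyoneYes   = sum $ map (length . foldr1 union)     sampleGroups  -- part 1
everyoneYes = sum $ map (length . foldr1 intersect) sampleGroups  -- part 2
```

The statement gives 11 for part 1 and 6 for part 2 on this sample.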

And that’s all there is to it. See you soon for another AoC solution!

For consistency. For a three-liner that could condense to one, it’s not buying much.↩︎

```
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE TypeFamilies #-}
import Control.Applicative (liftA2)
import Data.Char (isSpace)
import Data.Maybe (fromMaybe)
import Data.Semigroup (stimesMonoid,Any(..),Sum(..))
import Data.Foldable (find,fold)
import Data.Functor.Foldable hiding (fold)
import Text.Parsec
type Parser = Parsec String ()
```

The puzzle today defines a directed graph with labeled edges. Nodes are bags, labeled by color; edges define how many bags of each color a bag contains, directed from container to contained. We’re always going to consider the graph from the perspective of a distinguished node: the shiny gold bag.

It follows from the puzzle assignment that there cannot be a circuit in the subgraph generated from the shiny gold bag, as that would mean an infinite number of bags for part 2, which doesn’t quite fit the format. I don’t see anything preventing circuits in the rest of the graph: it wouldn’t invalidate the reasoning for the part of the graph examined in part 1, and if there’s a disconnected subgraph in there somewhere, so be it.^{1}

I don’t recall encountering any in my input though.

Here’s the statement sample’s graph to make things clearer.

Part 1 asks to *count* the number of bags “left” of shiny gold. Part 2 asks to *sum* the bags counts “right” of shiny gold, weighted by the edges’ labels.

All in all, part 2 is actually quite straightforward. It’s part 1 that requires a bit of thinking before diving in. The sane thing to do in the general case is to invert the edges’ direction and DFS. But the graph is quite small—mine is 594 nodes broad. So it’s IMHO easier to just DFS over every node. In the absence of loops, that search doesn’t even need memory: the graph can be considered a tree with no ill effect other than some wasted time. So it’s easier to just tree-fold over every node. This actually extends to part 2.
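To illustrate the “fold over every node” idea on a toy rule set (hypothetical colors, not the statement’s sample), counting the bags that transitively contain a target only takes a plain association list and naive recursion:

```
-- hypothetical rules: which bags each bag directly contains
rules :: [(String,[String])]
rules = [ ("red",["blue"]), ("blue",["gold"]), ("green",["gold"]), ("gold",[]) ]

-- treat the graph as a forest: no memory, just recurse through children
contains :: String -> String -> Bool
contains target bag = target `elem` kids || any (contains target) kids
  where kids = concat (lookup bag rules)

-- part-1-style count: how many bags can hold the target somewhere inside
ancestors :: String -> Int
ancestors target = length [ b | (b,_) <- rules, contains target b ]
```

Here `ancestors "gold"` is 3 (red, blue and green all reach gold); wasteful on shared subtrees, but correct as long as there are no loops.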

Ok, let’s get to it.

My complication of the day, by personal choice, will be to implement the folds using `recursion-schemes`. So I’ll need a base functor.

```
data VertexF a b = VertexF
  { vtxLabel :: a
  , vtxEdges :: [Times b]
  }
  deriving (Show,Functor)
```

My edge labeling is a simple integer with multiplicative semantics, so I implement it with a `Times` type and the appropriate fold.

```
data Times a = Times
  { timesFactor :: Int
  , timesOperand :: a
  }
  deriving (Functor,Show,Eq,Ord)

instance Foldable Times where foldMap f (Times a b) = stimesMonoid a (f b)
```
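A quick check of the multiplicative fold, restating a positional version of the type so the snippet stands alone:

```
import Data.Semigroup (stimesMonoid, Sum(..))

-- positional restatement of Times with the same Foldable semantics:
-- folding Times n x combines n copies of x
data Times a = Times Int a

instance Foldable Times where
  foldMap f (Times a b) = stimesMonoid a (f b)
```

So `foldMap Sum (Times 3 2)` is `Sum 6`, and generic `Foldable` functions such as `length` see `Times 4 'x'` as four elements.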

The input is given as a series of nodes, one per line, with all out edges. This happens to fit the graph’s base functor quite well! I can store it in an intermediate representation using said functor directly.

`type Entry = VertexF Color Color`

Now to parse. This input is in the middle of that gray area where I could use split/list combinators or actual parsers. I’ll settle for `Parsec` this time.

```
parser :: Parser [Entry]
parser = entry `endBy` (string ".\n") <* eof
```

As said earlier, each input line is an entry with:

- a node: the container bag
- edges: the contained bags with the matching weights

```
entry :: Parser Entry
entry =
  VertexF
    <$> bag
    <*  string "contain "
    <*> contents
```

As far as this puzzle is concerned, a bag is isomorphic to its color.

```
type Qualifier = String
type ColorName = String
data Color = Adj Qualifier ColorName deriving (Show,Eq,Ord)
bag :: Parser Color
bag =
  Adj
    <$> (word <?> "qualifier")
    <*> (word <?> "color")
    <*  string "bag"
    <*  optional (char 's')
    <*  spaces
    <?> "bag"
```

The “contained” part of the line has a bit of special casing for when it’s empty.

```
contents :: Parser [Times Color]
contents =
      [] <$ string "no other bags"
  <|> liftA2 Times num bag `sepBy1` string ", "
  <?> "contents"
```

And here are two helpers to conclude the parser.

```
num :: Parser Int
num = read <$> many1 digit <* spaces

word :: Parser String
word = many1 (satisfy (not . isSpace)) <* spaces
```

And an actual wrapper…

```
parseNodes :: String -> [Entry]
parseNodes = either (error . show) id . parse parser "puzzle input"
```

So I’m now able to parse the input to a list of “entries”: colors representing bags pointing to other colors. That’s not a graph yet, I still need to tie the loop from a node to the next.

I don’t have a workable notion of a root here, so I’ll just represent the graph as a list of vertices.

```
type Graph = [Vertex]
type Vertex = Fix (VertexF Color)
```

And start implementing.

```
nodesToGraph :: [Entry] -> Graph
nodesToGraph entries = graph where
```

I can keep the list structure and convert entry-wise.

`graph = map nodeToVertex entries`

Building the graph from a seed node is an anamorphism.

```
nodeToVertex :: VertexF Color Color -> Vertex
nodeToVertex = ana expandNode
```

The seed is already in `VertexF` form, so I can use the functor instance to convert it in-place.

```
expandNode :: VertexF Color Color -> VertexF Color (VertexF Color Color)
expandNode = fmap lookupNode
```

Implementing it by fetching a `VertexF Color Color` from where I have them: the `entries` list.

```
lookupNode :: Color -> VertexF Color Color
lookupNode color =
  fromMaybe (error "Edge to non-existent node") $
  find ((== color) . vtxLabel) entries
```

This works. It’s probably not smart enough to deduplicate the subforests on its own, though. Not that it would be needed to solve the problem.

```
solve :: Graph -> [Int]
solve g = [part1,part2] where
  shinyGold = Adj "shiny" "gold"
  part1 = length (filter (shinyGold `elem'`) g) - 1
  part2 = length' (lookupVertex shinyGold g) - 1
```

Minus one because the folds are inclusive, *i.e.* they count lax containment, whereas the problem asks for strict.

Oh, and I need to specialize the folds for my structure because for some reason there’s no automatic `Foldable` instance.^{2}

```
foldVertexF f (VertexF l rec) = f l <> foldMap fold rec
elem' e = getAny . cata (foldVertexF (Any . (e ==)))
length' = getSum . cata (foldVertexF (const (Sum 1)))
```

I use this helper; I’ve extracted it to toplevel because it’ll come in handy later on.

```
lookupVertex :: Color -> Graph -> Vertex
lookupVertex v =
  fromMaybe (error "Vertex not found") .
  find ((== v) . vtxLabel . unfix)
```

At any rate, it works.

```
λ> solve $ nodesToGraph $ parseNodes sample
[4,32]
```

Here’s my `main` so I can run it on my puzzle input.

```
main :: IO ()
main =
  mapM_ print . solve . nodesToGraph' . parseNodes =<< readFile "day07.in"
```

A bit under a second to solve for my puzzle input. So yes it works, but still rather inefficient. The obvious cause being the fact I treat a dense graph as a forest, taking no advantage of the fact some nodes are encountered numerous times from different paths.

In this post, I’ll only push a little further: constructing the graph with sharing.

Recall that I’m constructing the graph vertex by vertex by anamorphism, where each vertex is recursively expanded by generating its own subforest. For example where a shiny gold bag contains dark olive bags, I’d construct a dark olive node as a descendant to the shiny gold bag, and an additional node when encountered independently in the bag list.

To share the constructed vertices, I’ll store them in a list and fill in the subtrees by looking up in there instead of the original entries list.

The global function signature remains the same.

```
nodesToGraph' :: [Entry] -> Graph
nodesToGraph' entries = graph where
```

The graph is still a list of vertices. I can still construct it by mapping over the entries.

`graph = map nodeToVertex entries`

This time I won’t use the anamorphism; I’ll use the functor directly.

```
nodeToVertex :: Entry -> Vertex
nodeToVertex = Fix . fmap (`lookupVertex` graph)
```

Doing so halves the total runtime. To make any more progress, I’d have to implement the measurements as actual graph operations instead of tree folds. Which is pointless, since this problem is already solved.

But that’s a subject for another post.

This concludes this installment of today’s solution. See you soon!

For example, “an oily blue bag contains 1 true pearly bag; a true pearly bag contains 1 oily blue bag.” It doesn’t invalidate the universe to have those rules in the regulations. It merely makes it very hard for travellers to actually travel with them if they don’t have an infinite supply at hand. But they can still travel with shiny gold bags, so all hope is not lost.↩︎

I’m not exactly sure I’m doing all of this right.↩︎

```
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ViewPatterns #-}
import Control.Applicative
import Control.Monad.Trans.Accum
import Data.Functor.Identity
import Data.List
import qualified Data.Set as S
import Data.Monoid
```

I’ll take the easy route to parsing, and merely split words and convert the argument to an integer.

```
data Instruction = Instruction { operation :: String, δ :: Int }
decode :: String -> Instruction
decode (words -> [operation,argument]) =
  Instruction{operation,δ = readSigned argument}
readSigned :: String -> Int
readSigned ('+':n) = read n
readSigned n = read n
```

I shortened the argument to δ as it’s only ever added to either the instruction pointer or the accumulator.

Speaking of which, I’ll `newtype` them so I don’t mix them up by mistake.

```
newtype Address = Address Int deriving (Eq,Ord,Num,Real,Enum,Integral)
newtype Value = Value Int deriving (Num,Show)
```

Now to simulate. I’ll use an open-out design, where various components are delegated to callbacks that I’ll vary between parts 1 and 2.

```
simulate :: Monad m
=> (Value -> m Value)
-> (Value -> m Value)
-> (Instruction -> m (Address,Value))
-> [Instruction]
-> m Value
simulate loop eof eval is = go S.empty (Address 0) (Value 0) where
  go cl ip acc
    | ip `S.member` cl = loop acc
    | ip == genericLength is = eof acc
    | otherwise = do
        (δ_ip,δ_acc) <- eval (is `genericIndex` ip)
        go (S.insert ip cl)
           (ip + δ_ip)
           (acc + δ_acc)
```

The `generic`-prefixed function names may be a bit verbose, but such is the price of protection from integer mixups.

The components I extracted are:

- what to return when a loop is detected
- what to return when the end of the program is reached
- how to actually interpret the instructions

The first two callbacks take the value of the accumulator as an input.

The last one takes an instruction and returns the values to add to the CPU’s state, namely the instruction pointer and the accumulator. In a monadic context, about which I’ll expand later on.
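Before wiring in real callbacks, the control flow can be sanity-checked on the statement’s sample with a trimmed-down, concretely-typed sketch (no newtypes, no callbacks, part-1 semantics only — this is not the article’s final code):

```
import qualified Data.Set as S

data Instr = Instr String Int

-- run until an instruction is about to execute a second time,
-- then return the accumulator
run :: [Instr] -> Int
run is = go S.empty 0 0 where
  go seen ip acc
    | ip `S.member` seen = acc
    | otherwise = go (S.insert ip seen) (ip + dip) (acc + dacc)
    where Instr op d = is !! ip
          (dip,dacc) = case op of
            "nop" -> (1,0)
            "jmp" -> (d,0)
            "acc" -> (1,d)

-- the statement's sample program
sampleProgram :: [Instr]
sampleProgram =
  [ Instr "nop" 0, Instr "acc" 1, Instr "jmp" 4, Instr "acc" 3
  , Instr "jmp" (-3), Instr "acc" (-99), Instr "acc" 1, Instr "jmp" (-4)
  , Instr "acc" 6 ]
```

On the sample, the accumulator is 5 when the loop is detected, as the statement says.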

Obviously, I’ll need an actual implementation. Transcribing the statement:

```
evaluate :: Applicative m => Instruction -> m (Address,Value)
evaluate Instruction{..} = pure $ case operation of
  "nop" -> (Address 1,Value 0)
  "jmp" -> (Address δ,Value 0)
  "acc" -> (Address 1,Value δ)
```

The simple evaluation model is really monad-agnostic, so all it needs is `pure` from `Applicative` to return a result.

Now I can wire things up to solve part 1.

```
part1 :: [Instruction] -> Value
part1 = runIdentity . simulate pure (error "Reached end") evaluate
```

Part 2 asks to make the single nop/jmp switch that makes the program terminate. As termination isn’t an immediately observable behavior, simulation needs to continue after a choice is made, so I’ll need backtracking. And I’ll need some form of memory to allow myself a single change only.

So I’ll use an `AccumT` monad transformer over a list monad. Evaluation always delegates to the simple model, and then also attempts an instruction flip if memory allows.

```
evaluate' :: Instruction -> AccumT Any [] (Address,Value)
evaluate' i@Instruction{..} = evaluate i <|> do
  Any flipped <- look
  add (Any True)
  case (flipped,operation) of
    (True,_) -> empty
    (False,"acc") -> empty
    (False,"nop") -> pure (Address δ,Value 0)
    (False,"jmp") -> pure (Address 1,Value 0)
```

It may be noteworthy the instruction alteration is never actually recorded: since loops are going to result in immediate failure, just temporarily operating differently is enough.

The wrapper sets up the monad transformer while also taking care of rejecting simulations that lead to loops (the `const empty` component).

```
part2 :: [Instruction] -> [Value]
part2 = flip evalAccumT (Any False) . simulate (const empty) pure evaluate'
```

It so happens my input has a single solution. Here’s the rest of the code for completeness.

```
main :: IO ()
main = do
  code <- map decode . lines <$> readFile "day08.in"
  print $ part1 code
  print $ part2 code
```

This concludes today’s solution. See you soon!

But proving it on such a simplistic architecture *is* feasible.↩︎

```
{-# LANGUAGE NamedFieldPuns #-}
import Control.Monad (guard)
import Data.List (find,tails)
```

Part 1 asks to identify the first number that cannot be expressed as the sum of two distinct numbers taken among its 25 predecessors.

There’s not much to fuss about, just doing it is sufficient. I’ll use a list zipper.

`data Zip a = Zip { trail :: ![a], cursor :: !a, list :: [a] }`

The zipper provides a view on an element of a list in context. It can be moved left or right in constant time/space. For this problem I’ll only need going right.

```
advance :: Zip a -> Zip a
advance (Zip t c (c':l)) = Zip (c:t) c' l
```

I can convert from a list by simply expanding the cursor on the list’s first element.

```
fromList :: [a] -> Zip a
fromList (h:t) = Zip [] h t
```

A cursor position is valid if I can find a pair of distinct^{1} numbers among the 25 previous that sum up to it.

```
valid :: Zip Int -> Bool
valid Zip{trail,cursor} = or $ do
  (x:ys) <- tails (take 25 trail)
  y <- ys
  -- guard (x /= y)
  pure $ cursor == x + y
```

Part 1 is then solved with a straightforward function composition.

```
part1 :: [Int] -> Int
part1 =
  maybe (error "All numbers are valid.") cursor .
  find (not . valid) .
  drop 25 .
  iterate advance .
  fromList
```

Instead of a pair, part 2 asks to find a contiguous subsequence that sums to the number found in part 1. The classical response is to construct a list of partial sums, so the sum can be searched for in quadratic time as a difference of two elements.
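The trick on a toy list (not the puzzle input): the sum of any slice is the difference of two partial sums.

```
-- a toy list and its partial sums
ns :: [Int]
ns = [3,1,4,1,5]

partials :: [Int]
partials = scanl (+) 0 ns  -- [0,3,4,8,9,14]

-- sum of ns[1..3], i.e. 1+4+1, as a difference of two partials
sliceSum :: Int
sliceSum = partials !! 4 - partials !! 1
```

Precomputing `partials` once makes every slice sum a constant-time subtraction, which is what lets the search below stay quadratic instead of cubic.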

```
part2 :: [Int] -> Int -> [Int]
part2 ns =
  let partials = scanl (+) 0 ns
  in \s -> head $ do
       ((a,from):_:bts) <- tails (zip partials [0..])
       (b,to) <- bts
       guard (b - a == s)
       pure $ take (to - from) $ drop from ns
```

Here’s `main` for completeness.

```
main = do
  ns <- map read . lines <$> readFile "day09.in"
  let special = part1 ns
  print special
  let contiguous = part2 ns special
  print $ minimum contiguous + maximum contiguous
```

This concludes today’s solution. See you soon!

I didn’t notice that distinctness constraint when I first solved it. And I didn’t need it to solve. YMMV.↩︎

```
{-# LANGUAGE RecordWildCards #-}
import Control.Arrow ((&&&))
import Data.Function ((&))
import Data.List (sort,group,foldl')
```

Part 1 asks to compute the sum of joltage differentials each adapter would be subject to if we chained all of them from lowest to highest between the outlet and the device.

I’ll use the most underrated algorithm in computer science: sorting.

```
part1 :: [Int] -> Int
part1 ratings = ones * (threes + 1) where
  rs = sort ratings
  [(1,ones),(3,threes)] =
    zipWith (-) rs (0 : rs) &
    sort & group &
    map (head &&& length)
```

Processing after sorting is, from top to bottom: pairwise subtraction, grouping by difference, tallying. The difference between the outlet and the first adapter is given by `zipWith`’s first returned element; the difference between the last adapter and the device is always 3 and is hard-coded as the + 1 in the result.

You may notice I’m not only ignoring `2`s, but actually actively pattern-matching against them. I happen not to have any in my input.
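On the statement’s first sample, the tally comes out to 7 one-jolt gaps and 4 three-jolt gaps, so part 1 yields 7 × (4 + 1) = 35 (restating `part1` so the check stands alone):

```
import Control.Arrow ((&&&))
import Data.Function ((&))
import Data.List (group, sort)

part1 :: [Int] -> Int
part1 ratings = ones * (threes + 1) where
  rs = sort ratings
  [(1,ones),(3,threes)] =
    zipWith (-) rs (0 : rs) &
    sort & group &
    map (head &&& length)

-- the statement's first sample
sample1 :: [Int]
sample1 = [16,10,15,5,1,11,7,19,6,12,4]
```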

Part 2 is noticeably trickier.

Instead of restricting the number of ways the adapters are chained to a single arrangement, we’re counting all of them! And there are *a lot*.

How could I count them? The natural way is to enumerate them with pattern-matching and recursion.

```
combinations :: [Int] -> [[Int]]
combinations [x] = [[x]]
combinations (j1:j2:js)
  | j2 - j1 <= 0 = error "bad joltage ordering"
  | j2 - j1 > 3 = mempty
  | otherwise = takeJ2 <> skipJ2
  where takeJ2 = (j1 :) <$> combinations (j2:js)
        skipJ2 = guard (not (null js)) *> combinations (j1:js)
```

In plain English: the `combinations` function maps a (sorted) joltage list to the ways the matching adapters can be chained. It proceeds recursively:

- if there’s only one joltage (at all, or remaining by recursion), there’s exactly one way to combine it: just using it.
- if the gap between the smallest two joltages is greater than 3, we’re not going to be able to complete the chain at all, so the number of combinations is 0.
- otherwise, there are two possibilities: we either use the second smallest joltage adapter or we don’t. Therefore the number of combinations is the sum of the number of combinations under both hypotheses.
- the null `guard` clause is there to forbid skipping the last adapter: it’s the only valid link to the device, it has to be in.

For non-empty input (which mine is), it’s easy to convince oneself this will indeed terminate: there’s a base case, and each recursive call shortens the input list by one.

Let’s actually try it on the first example.

```
λ> let chains1 = combinations (0 : sort sample1)
λ> mapM_ print chains1
[0,1,4,5,6,7,10,11,12,15,16,19]
[0,1,4,5,6,7,10,12,15,16,19]
[0,1,4,5,7,10,11,12,15,16,19]
[0,1,4,5,7,10,12,15,16,19]
[0,1,4,6,7,10,11,12,15,16,19]
[0,1,4,6,7,10,12,15,16,19]
[0,1,4,7,10,11,12,15,16,19]
[0,1,4,7,10,12,15,16,19]
λ> length chains1
8
```

So far, so good. How about the second example?

```
λ> length (combinations (0 : sort sample2))
19208
```

I’m skipping actually printing them all because it’s starting to get a bit long, but I’ll assume they’re all good.

Unfortunately, it won’t go much further with this approach. The reason is quite simple: at each recursion point, we have a maximal branching factor of two. In broad strokes, this means the total running time for *N* adapters could be the double of the running time for *N* − 1 adapters. This is an exponential algorithm, it will only make sense to use it for very small *N*s.

Let’s refine it.

Since the result set is going to be too big to enumerate, I’ll first convert the algorithm to count the arrangements instead.

```
combinations :: [Int] -> Int
combinations [_] = 1
combinations (j1:j2:js)
  | j2 - j1 > 3 = 0
  | otherwise = takeJ2 + skipJ2
  where takeJ2 = combinations (j2:js)
        skipJ2 = if null js then 0 else combinations (j1:js)
```

I dropped the error checking because I now trust myself enough not to call it with unsorted joltages. I can verify it yields the same results:

```
λ> combinations (0 : sort sample1)
8
λ> combinations (0 : sort sample2)
19208
```

It still takes too long, though. Consider that 19208 result was constructed by summing only ones and zeros.

So how do I make it faster? The key observation is that it’s performing the same work multiple times. It’s not directly apparent with this wording of the definition, but notice that the `takeJ2` recursive call is actually a call on the function argument’s tail.

Let’s rewrite, avoiding to cons any list that isn’t a tail of the original input.

```
combinations :: [Int] -> Int
combinations (h:t) = go h t where
  go _ [] = 1
  go from (to:js)
    | to - from > 3 = 0
    | otherwise = takeIt + skipIt
    where takeIt = go to js
          skipIt = if null js then 0 else go from js
```

There’s still nothing reusable, for now it’s just a different presentation of the function’s calling convention. But I can lift the guard clause up to the recursive callsite by factoring the calls.

```
combinations :: [Int] -> Int
combinations (h:t) = go h t where
  go _ [] = 1
  go from js = sum [ go to js'
                   | (to:js') <- tails js
                   , to - from <= 3 ]
```

This still works, but it’s worse than the original: it traverses the list of tails entirely for each position, whereas the previous implementation short-circuited as soon as the gap got too big. So I’m somewhere between *O*(3^{N}) and *O*(*N*^{N}). But this is fixable.

```
combinations :: [Int] -> Int
combinations (h:t) = go h t where
  go _ [] = 1
  go from js = sum $ whileMaybe recurseIfReachable $ tails js where
    recurseIfReachable (to:js') = go to js' <$ guard (to - from <= 3)
    recurseIfReachable [] = Nothing
```

(`whileMaybe` is a variant of `mapMaybe` that short-circuits on the first `Nothing`.)

```
whileMaybe :: (a -> Maybe b) -> [a] -> [b]
whileMaybe f = foldr (\a b -> maybe [] (: b) (f a)) []
```

Ok, I’m back on track. To optimize this some more, I’ll observe the recursive call is performed on the tails of the current sublist. This is the wasteful operation. Now’s the time to perform the actual dynamic programming transformation.

```
combinations :: [Int] -> Int
combinations = fst . head . foldr go [] where
  go to [] = [(1,[to])]
  go from rjs@((_,js):_) = (sum (whileMaybe lookupIfReachable rjs),from:js) : rjs where
    lookupIfReachable (n,(to:_)) = n <$ guard (to - from <= 3)
    lookupIfReachable (_,[]) = Nothing
```

This was a mechanical conversion. The former return value of the recursion, `n`, is returned as the head of the result of the fold. References to it now point to its place in the fold’s right-hand side. The fold constructs the entire list of results, since they’re needed to compute the result at each point. I needed the tails too, so they’re returned and maintained there as well, as the second part of a pair.

This implementation is *O*(*N*) time, space and stack. It’s enough to pass today’s challenge.

Not too satisfying, though. It’s a bit ridiculous to maintain the list’s entire tail at each point, for one. Remembering the joltage there ought to be enough information.

```
combinations :: [Int] -> Int
combinations = fst . head . foldr go [] where
  go to [] = [(1,to)]
  go from rjs = (sum (whileMaybe lookupIfReachable rjs),from) : rjs where
    lookupIfReachable (n,to) = n <$ guard (to - from <= 3)
```

Much more readable. Now let’s tackle performance.

The quickest win is stack. For now, the computation is constructed as a chain of dependencies from the first element to the last, from which it “bounces” back, constructing the results and tails on the way back. This is because the conversion stems from the recursive calls, which also presented this two-way traversal pattern.

But the computation in itself is symmetrical. Let’s rewrite it as a left fold.

```
combinations :: [Int] -> Int
combinations = fst . head . foldl' go [] where
  go [] from = [(1,from)]
  go rjs to = (sum (whileMaybe lookupIfReaching rjs),to) : rjs where
    lookupIfReaching (n,from) = n <$ guard (to - from <= 3)
```

This was mechanical as well: flip the arguments to `go`

and swap all occurrences of `to`

and `from`

.

Let’s pretend^{1} the `foldl'` strictness extends to both members of the pair: the algorithm is now constant stack, *O*(*N*) time and space.

Can we do better? A bit, yes.

The final observation is that since the joltages are distinct and ordered, the `whileMaybe` call will never need to peek at more than 3 of them for the `guard` to remain valid. So instead of constructing the entire list of values as I go, I can restrict it to three elements.

```
data Combinations = C
  { twoBack :: !Int
  , oneBack :: !Int
  , current :: !Int
  , joltage :: !Int
  }
```

I’m shoehorning the joltage in so I can get rid of the pair and benefit from the record’s strict fields.

```
combinations :: [Int] -> Int
combinations = current . foldl' go (C 0 0 1 0) where
  go C{..} to = case to - joltage of
    1 -> C oneBack current (twoBack + oneBack + current) to
    2 -> C current 0 (oneBack + current) to
    3 -> C 0 0 current to
```

(Note that providing the initial 0 joltage for the outlet is no longer needed: it’s a part of the initial `Combinations` record.)

Ta-da! The code is now constant stack and space, still linear time. It seems unlikely it’s possible to do better than linear time if we want the result to depend on the input.
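Checked against the statement’s first sample, the final version still agrees with the naive enumeration (restated here with its pragma and imports so the snippet stands alone):

```
{-# LANGUAGE RecordWildCards #-}
import Data.List (foldl', sort)

data Combinations = C
  { twoBack :: !Int
  , oneBack :: !Int
  , current :: !Int
  , joltage :: !Int
  }

-- constant-space count of adapter arrangements over a sorted joltage list
combinations :: [Int] -> Int
combinations = current . foldl' go (C 0 0 1 0) where
  go C{..} to = case to - joltage of
    1 -> C oneBack current (twoBack + oneBack + current) to
    2 -> C current 0 (oneBack + current) to
    3 -> C 0 0 current to
```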

It’s debatable whether or not this constitutes an actual improvement: there’s quite a bit more number shuffling going around whereas the previous implementation was closer to very straightforward consing. The cost of latent RAM usage is to be pondered against the increased reliance on the optimizer.

As always for all things optimization: measure!

```
part2 :: [Int] -> Int
part2 = combinations . sort

main :: IO ()
main = do
  joltages <- map read . lines <$> readFile "day10.in"
  print $ part1 joltages
  print $ part2 joltages
```

This concludes today’s solution and improvements. See you soon!

It’s not that adding a `seq` call is hard, it just makes the code less readable when we’re going to trash it anyway.↩︎

```
import Data.Complex
import Data.List
```

Parsing is dead easy. Assuming line-based instructions, I just pass the first character as-is and convert the number to a `Double`.

```
parseInstruction :: String -> (Char,Double)
parseInstruction (i:n) = (i,read n)
```

Why a double? I’m going to use `Complex` numbers, and in Haskell they only have a useful `Num` instance if the underlying type implements `RealFloat`. So `Double` it is, even if the coordinates are only expected to take integer values.

`type C = Complex Double`
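As a reminder of why complex numbers are convenient here: multiplying by `0 :+ 1` (that is, i) rotates a vector 90° counterclockwise, which is exactly what the heading manipulations below rely on.

```
import Data.Complex

-- rotating the vector (3,1) left by 90° gives (-1,3)
left90 :: Complex Double -> Complex Double
left90 = (* (0 :+ 1))
```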

In part 1, `NESW` mean to move the ship by the specified distance, whereas the `LRF` series implements “turtle graphics”. So my step function maintains the ship’s position and direction.

```
go1 :: (C,C) -> (Char,Double) -> (C,C)
go1 (pos,hdg) (i,n) = case i of
  'N' -> (pos + (0 :+ n) ,hdg)
  'S' -> (pos - (0 :+ n) ,hdg)
  'W' -> (pos - (n :+ 0) ,hdg)
  'E' -> (pos + (n :+ 0) ,hdg)
  'F' -> (pos + (n :+ 0)*hdg,hdg)
  'L' -> (pos ,hdg * (0 :+ 1 )^a)
  'R' -> (pos ,hdg * (0 :+ (-1))^a)
  where a | round n `mod` 90 == 0 = round n `div` 90
```

In part 2, a “waypoint” is introduced. Its use looks more like a speed vector than a navigational waypoint, but let’s play along. Here the only way to move is the `F` instruction; `NESW` adjust the vector by addition, `LR` by rotation.

```
go2 :: (C,C) -> (Char,Double) -> (C,C)
go2 (pos,wpt) (i,n) = case i of
  'N' -> (pos ,wpt + (0 :+ n))
  'S' -> (pos ,wpt - (0 :+ n))
  'W' -> (pos ,wpt - (n :+ 0))
  'E' -> (pos ,wpt + (n :+ 0))
  'L' -> (pos ,wpt * (0 :+ 1 )^a)
  'R' -> (pos ,wpt * (0 :+ (-1))^a)
  'F' -> (pos + (n :+ 0)*wpt,wpt)
  where a | round n `mod` 90 == 0 = round n `div` 90
```

All that’s left is to fold the instructions, pairing the appropriate `go` function with its starting vector.

```
main :: IO ()
main = do
  instrs <- map parseInstruction . lines <$> readFile "day12.in"
  let (destination1,_) = foldl' go1 (0 :+ 0, 1 :+ 0) instrs
  print $ round (dist destination1)
  let (destination2,_) = foldl' go2 (0 :+ 0,10 :+ 1) instrs
  print $ round (dist destination2)
```

And measuring the distance to the origin by Manhattan distance, because for some reason that’s what makes sense during a storm.

```
dist :: C -> Double
dist (x :+ y) = abs x + abs y
```

This concludes today’s solution. See you soon!

```
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE TupleSections #-}
import Control.Arrow
import Data.Function
import Data.List
import Data.Maybe
import Data.Ord
import Text.Read
```

The puzzle input is a starting time and a list of bus IDs presented in an unusual manner.

```
data Puzzle = Puzzle
  { earliest :: Int
  , busIds :: [Maybe Int]
  }
```

With `Int`s being used both as timestamps and bus IDs, I’d typically `newtype` them to avoid using one for the other by accident. But the bus IDs double as a time period. Tough luck, it’s all staying undistinguished `Int`s and I’ll just have to be careful.

I’ll parse the CSV with `break` and `unfoldr`.

```
parseInput :: String -> Puzzle
parseInput input = Puzzle
  { earliest = read start
  , busIds = unfoldr go schedule
  }
  where
    [start,schedule] = lines input
    go "" = Nothing
    go s = Just $ readMaybe *** drop 1 $ break (== ',') s
```

Part 1 asks for the first bus to depart the airport after my plane lands. The natural thing to do can’t be beaten: check all busses for the time to wait before their next departure.

```
earliestBus :: Puzzle -> (Int,Int)
earliestBus Puzzle{..} =
  catMaybes busIds &
  map (id &&& timeToNextDeparture earliest) &
  minimumBy (comparing snd)
```

How do I compute the time to wait before a specific bus next departs? Modular arithmetic!

Bus number *m* departs every *m* minutes: *departure* ≡ 0 (mod *m*)

The wait time is the time between the plane’s arrival and the bus’s departure: *wait* = *departure* − *arrival*

Therefore, for a wait time between 0 and *m* − 1: *wait* ≡ −*arrival* (mod *m*)

```
timeToNextDeparture :: Int -> Int -> Int
timeToNextDeparture arrival period = negate arrival `mod` period
```
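As a sanity check, the statement’s sample works out nicely: arriving at t=939, bus 59 next departs at t=944, so the wait should be 5 minutes. (The one-liner is duplicated here so the snippet stands alone.)

```haskell
timeToNextDeparture :: Int -> Int -> Int
timeToNextDeparture arrival period = negate arrival `mod` period

-- Bus 59 departs at 885, 944, …; arriving at 939 leaves a 5-minute wait.
wait :: Int
wait = timeToNextDeparture 939 59  -- 5
```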

In part 2, we are to find which arrival time would generate a wait time for each bus ID equal to its position in the input list.

That’s a direct application of the Chinese Remainder Theorem. When I need it I usually just copy-paste the algorithm from Wikipedia or some other source, but it so happens I actually understood it this time, so I’ll detail a bit more.

The theorem in itself just states a solution exists for some conditions on the chosen moduli, “all being distinct primes” being a strict subset.^{1}

The interesting part is generating a solution. The idea is a close parallel to Lagrange’s interpolation polynomials: we’ll use a linear basis among the moduli. In other words, we’ll generate a solution as a linear combination of numbers whose residue is 1 with regard to a specific modulus and 0 for all others.

How is such a number found? It’s 0 for all moduli but one by virtue of being a multiple of their product. We want it to be 1 for the chosen special modulus. That modulus and the product of all others are coprime, so we can use Bézout’s theorem to solve for it:

∃(*u*, *v*) ∈ ℤ^{2}, *u*·*Π*_{i} + *v*·*m*_{i} = 1

*u* and *v* are the result of the extended Euclid’s algorithm.

```
egcd :: Int -> Int -> (Int,Int)
egcd 1 0 = (1,0)
egcd _ 0 = error "egcd: Not coprime"
egcd a b = let (u,v) = egcd b (a `mod` b)
           in (v,u - a `div` b * v)
```
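A quick check that the returned coefficients do satisfy Bézout’s identity, on a couple of arbitrary coprime pairs (duplicating `egcd` so the example is self-contained):

```haskell
egcd :: Int -> Int -> (Int,Int)
egcd 1 0 = (1,0)
egcd _ 0 = error "egcd: Not coprime"
egcd a b = let (u,v) = egcd b (a `mod` b)
           in (v,u - a `div` b * v)

-- Bézout's identity: u*a + v*b = 1 for coprime a and b.
bezoutCheck :: Int -> Int -> Bool
bezoutCheck a b = let (u,v) = egcd a b in u*a + v*b == 1
```

For instance `bezoutCheck 15 22` and `bezoutCheck 3 35` both hold.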

The construction for the CRT’s solution then follows. (Note how I don’t care about the Bézout coefficient for the product of all “other” moduli.)

```
chinese :: [(Int,Int)] -> Int
chinese divRems = x `mod` π where
  (divisors,remainders) = unzip divRems
  π = product divisors
  factors = map unitFor divisors
  unitFor m = snd (egcd m π_i) * π_i where π_i = π `div` m
  x = sum (zipWith (*) remainders factors)
```
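The classic textbook instance makes a handy test: *x* ≡ 2 (mod 3), *x* ≡ 3 (mod 5), *x* ≡ 2 (mod 7) has the unique solution 23 modulo 105. Here’s a self-contained copy of `egcd` and `chinese` run on it, assuming the same `(divisor, remainder)` pairing as above:

```haskell
egcd :: Int -> Int -> (Int,Int)
egcd 1 0 = (1,0)
egcd _ 0 = error "egcd: Not coprime"
egcd a b = let (u,v) = egcd b (a `mod` b)
           in (v,u - a `div` b * v)

chinese :: [(Int,Int)] -> Int
chinese divRems = x `mod` π where
  (divisors,remainders) = unzip divRems
  π = product divisors
  factors = map unitFor divisors
  unitFor m = snd (egcd m π_i) * π_i where π_i = π `div` m
  x = sum (zipWith (*) remainders factors)

-- Sun Tzu's original puzzle: remainders 2, 3, 2 modulo 3, 5, 7.
sunTzu :: Int
sunTzu = chinese [(3,2),(5,3),(7,2)]  -- 23
```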

The rest is a matter of mapping the input to fit.

```
earliestArrival :: Puzzle -> Int
earliestArrival Puzzle{busIds} =
  chinese $ catMaybes $
  zipWith (\bus wait -> (,negate wait) <$> bus) busIds [0..]
```

The `main` wrapper for completeness.

```
main :: IO ()
main = do
  input <- parseInput <$> readFile "day13.in"
  print $ uncurry (*) $ earliestBus input
  print $ earliestArrival input
```

This concludes today’s solution. See you soon!

It’s the case for mine, and probably yours too.↩︎

```
import Data.List.Split
import qualified Data.Set as Set
```

Space cards have a semi-infinite rank, no suit and no duplicates. I’ll represent a deck of them as a simple list.

`type Deck = [Int]`

Reading the input would probably be easier with Emacs shortcuts, but heck this is a literate Haskell program, I might just as well code it up.

```
parseInput :: String -> (Deck,Deck)
parseInput i = (read <$> d1,read <$> d2)
  where ["Player 1:":d1,"Player 2:":d2] = linesBy null (lines i)
```

Ok, let’s tackle the actual problem. The winner’s score is computed as a weighted sum, implemented with `Prelude` functions.

```
score :: [Int] -> Int
score s = sum $ zipWith (*) s [n,n-1..]
  where n = length s
```

```
λ> score [3,2,10,6,8,5,9,4,7,1]
306
```

Deciding which player wins a round is the determining aspect of the day’s two parts. I’ll extract that as a type and function.

```
data Side = P1 | P2 deriving Show

type Rule = Deck -> Deck -> Side

simpleGameRule :: Rule
simpleGameRule (h1:_) (h2:_) = case compare h1 h2 of
  LT -> P2
  GT -> P1
```

Just as the statement says: winner is the one who shows the highest-ranked card. We take note that this function would be partial if the input could contain duplicates.

Now to implement the full game logic.

```
game :: Rule -> Deck -> Deck -> (Side,Int)
game rule = go Set.empty where
  go cl d1 d2 | (d1,d2) `Set.member` cl = (P1,score d1)
  go _ [] d2 = (P2,score d2)
  go _ d1 [] = (P1,score d1)
  go cl d1@(h1:t1) d2@(h2:t2) = case rule d1 d2 of
      P1 -> go cl' (t1 ++ [h1,h2]) t2
      P2 -> go cl' t1 (t2 ++ [h2,h1])
    where cl' = Set.insert (d1,d2) cl
```

Pretty self-describing. I’ve included the part 2 amendment for game loop avoidance^{1} that’s unneeded in part 1, but doesn’t hurt either. It’s a bit unfortunate that the same full pattern-matching is needed for both the distinctive round winner decision and for moving the cards to their new place, but it’ll have to do.

This solves part 1 flawlessly.

```
λ> uncurry (game simpleGameRule) $ parseInput sample
(P2,306)
```

Part 2 is simply a matter of transcribing the loop detection above, and the new, recursive game rule.

```
recursiveGameRule :: Rule
recursiveGameRule d1@(h1:t1) d2@(h2:t2)
  | length t1 < h1 || length t2 < h2 = simpleGameRule d1 d2
  | otherwise = fst $ game recursiveGameRule (take h1 t1) (take h2 t2)
```

This solves part 2.

```
λ> uncurry (game recursiveGameRule) $ parseInput sample
(P2,291)
```

Here’s `main` for completeness.

```
main :: IO ()
main = do
  (d1,d2) <- parseInput <$> readFile "day22.in"
  print $ game simpleGameRule d1 d2
  print $ game recursiveGameRule d1 d2
```

This concludes today’s puzzle solution.

I call the accumulator `cl` for “closed” as a matter of habit from writing graph search functions with an “open” and a “closed” node set.↩︎

```
{-# LANGUAGE DataKinds #-}
import Data.List (elemIndex)
import Data.Maybe (fromJust)
import Data.Modular
```

Today’s modulus is 20201227. Probably because 20201225 and 20201226 weren’t prime.

`type N = Mod Int 20201227`

Let’s implement the cryptographic protocol, to get the hang of it.

A device has a secret key called “loop size”. This loop size is the parameter to a transformation function, which in effect is a power function.

```
newtype LoopSize = LoopSize N

transform :: N -> LoopSize -> N
transform subject (LoopSize secret) = subject^(unMod secret)
```

This secret can be used to generate a public key, by applying the transformation to number 7.

```
newtype PublicKey = PublicKey N

mkPubKey :: LoopSize -> PublicKey
mkPubKey = PublicKey . transform 7
```

The encryption key to use between card and door is then defined by transforming one device’s public key with the other’s loop size. By mathematical magic^{1}, it happens to produce the same result. Assuming I’m the card, I’d compute it by using my secret loop size of 8 to transform the door’s public key of 17807724.

```
encryptionKey :: LoopSize -> PublicKey -> N
encryptionKey ls (PublicKey pk) = transform pk ls
```

```
λ> encryptionKey (LoopSize 8) (PublicKey 17807724)
14897079
```

Let’s verify the other party agrees.

```
λ> encryptionKey (LoopSize 11) (PublicKey 5764801)
14897079
```

All good.

Oh, but my input’s public keys aren’t the same. How do I extract the loop size from a public key?

All that needs to be done is invert the operation: *public key* = 7^{*loop size*} ⇔ *loop size* = log_{7}(*public key*)

```
crackLoopSize :: PublicKey -> LoopSize
crackLoopSize (PublicKey pk) = LoopSize (logarithm 7 pk)
```

There are a few beautiful algorithms to perform this. Considering the small modulus we have here, I’ll use worthy old brute force.

```
logarithm :: N -> N -> N
logarithm base power =
  toMod $ fromJust $ elemIndex power $ iterate (* base) 1
```
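With the sample keys, 7^8 = 5764801, so cracking that public key should recover loop size 8. To keep the check dependency-free, here’s the same brute-force search over plain `Int`s with an explicit modulus (a sketch equivalent to the `Data.Modular` version above):

```haskell
import Data.List (elemIndex)
import Data.Maybe (fromJust)

modulus :: Int
modulus = 20201227

-- Brute-force discrete logarithm: position of `power` in 1, b, b², … (mod modulus).
logarithm' :: Int -> Int -> Int
logarithm' base power =
  fromJust $ elemIndex power $ iterate (\x -> x * base `mod` modulus) 1

cardLoopSize :: Int
cardLoopSize = logarithm' 7 5764801  -- 8
```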

Here’s my wrapper, that also verifies the results are consistent.

```
main :: IO ()
main = do
  [cardPublicKey,doorPublicKey]
    <- map (PublicKey . read) . lines <$> readFile "day25.in"
  print $ encryptionKey (crackLoopSize cardPublicKey) doorPublicKey
  print $ encryptionKey (crackLoopSize doorPublicKey) cardPublicKey
```

This concludes today’s solution. See you soon!^{2}

```
{-# LANGUAGE LambdaCase #-}
import Data.Foldable (foldl')
import Data.Set (Set,member,insert,delete)
import qualified Data.Set as Set
```

The puzzle input is provided as concatenated hex-grid steps. The first thing to do is split them up for easier consumption. Luckily enough, they form a valid prefix code, so I can greedily process and decode them.

```
splitDirs :: String -> [Dir]
splitDirs [] = []
splitDirs ('e':    ds) = E  : splitDirs ds
splitDirs ('s':'e':ds) = SE : splitDirs ds
splitDirs ('s':'w':ds) = SW : splitDirs ds
splitDirs ('w':    ds) = W  : splitDirs ds
splitDirs ('n':'w':ds) = NW : splitDirs ds
splitDirs ('n':'e':ds) = NE : splitDirs ds
```

Next I need to recognize which paths lead to the same tile, so I can count it as flipped back instead of counting both as flipped once. There are multiple ways to do this. I just introduced a coordinate pair in a system where it uniquely identifies a tile (the “odd-r” one, except I order them row first).

```
type Pos = (Int,Int)
data Dir = E | SE | SW | W | NW | NE deriving (Enum,Bounded)
```

Now to write the walking function once and for all. This is the most likely place to trip. Straight east/west movement is easy enough to write. The others… need a bit of thought. Simple checks include: north/south symmetry says the *j* parameter must be the same between `Nx` and `Sx`; east/west symmetry says `xE` should be one more than `xW` on the *j* dimension; the row offset rule says moving vertically from an even row should result in one less than moving from an odd row.

```
walk :: Pos -> Dir -> Pos
walk (i,j) = \case
  E           -> (i  ,j+1)
  SE | even i -> (i+1,j  )
     | odd  i -> (i+1,j+1)
  SW | even i -> (i+1,j-1)
     | odd  i -> (i+1,j  )
  W           -> (i  ,j-1)
  NW | even i -> (i-1,j-1)
     | odd  i -> (i-1,j  )
  NE | even i -> (i-1,j  )
     | odd  i -> (i-1,j+1)
```
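Those symmetry checks can also be run mechanically: stepping in any direction and then in its opposite should land back on the starting tile, from both even and odd rows. A self-contained sketch (duplicating the post’s `walk` so it compiles on its own):

```haskell
{-# LANGUAGE LambdaCase #-}
type Pos = (Int,Int)
data Dir = E | SE | SW | W | NW | NE deriving (Enum,Bounded,Show)

walk :: Pos -> Dir -> Pos
walk (i,j) = \case
  E           -> (i  ,j+1)
  SE | even i -> (i+1,j  )
     | odd  i -> (i+1,j+1)
  SW | even i -> (i+1,j-1)
     | odd  i -> (i+1,j  )
  W           -> (i  ,j-1)
  NW | even i -> (i-1,j-1)
     | odd  i -> (i-1,j  )
  NE | even i -> (i-1,j  )
     | odd  i -> (i-1,j+1)

-- Directions are laid out so the opposite one is 3 constructors away.
opposite :: Dir -> Dir
opposite d = toEnum ((fromEnum d + 3) `mod` 6)

-- Every direction followed by its opposite must return to the start.
roundTrip :: Pos -> Bool
roundTrip p = all (\d -> walk (walk p d) (opposite d) == p) [minBound..maxBound]
```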

With this out of the way, I can summarize a path as a position.

```
type Path = [Dir]
pathToPos :: Path -> Pos
pathToPos = foldl' walk (0,0)
```

This lets me maintain a set of flipped-to-black tiles.

```
type TileSet = Set Pos
flipTiles :: [Pos] -> TileSet
flipTiles = foldl' xorInsert Set.empty
```

Using a small helper to flip a set element.^{1}

```
xorInsert :: Ord a => Set a -> a -> Set a
xorInsert s e | member e s = delete e s
              | otherwise  = insert e s
```

Surprise! Part 2 is a cellular automaton!

It doesn’t really have anything specific going for it, I can re-use my function from day 17 directly:

```
step :: Conway v => Env v -> Env v
step activeCubes = foldMap life (activeCubes <> fringe) where
  fringe = foldMap neighbors activeCubes
  life cube = case rule (isActive cube) (countNeighbors cube) of
    True -> singleton cube
    False -> mempty
  isActive cube = cube `member` activeCubes
  countNeighbors = length . Set.filter isActive . neighbors
```

Mmm… maybe I’ll edit it *just a bit*.

```
step :: TileSet -> TileSet
step blackTiles = foldMap life (blackTiles <> fringe) where
  fringe = foldMap neighbors blackTiles
  life cube = case rule (isActive cube) (countNeighbors cube) of
    True -> Set.singleton cube
    False -> mempty
  isActive cube = cube `member` blackTiles
  countNeighbors = length . Set.filter isActive . neighbors
```

Much better; I hope you didn’t blink. I’ll still need to transcribe the rule from the statement…

```
rule :: Bool -> Int -> Bool
rule True  = (`elem` [1,2])
rule False = (== 2)
```

…and the neighborhood function,

```
neighbors :: Pos -> Set Pos
neighbors p = Set.fromList $ walk p <$> allDirs
```

…using the usual `universe` helper.

```
allDirs :: [Dir]
allDirs = [minBound..maxBound]
```

And… that’s it! Here’s the `main` wrapper for completeness.

```
main :: IO ()
main = do
  input <- map splitDirs . lines <$> readFile "day24.in"
  let tiling = flipTiles $ pathToPos <$> input
  print $ Set.size tiling
  print $ Set.size $ iterate step tiling !! 100
```

This concludes today’s solution. I’m afraid there wasn’t much to learn from it, but at least I can demonstrate the pieces fit together nicely.

See you soon!

```
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE RankNTypes #-}
import Control.Monad (replicateM_,zipWithM_)
import Control.Monad.ST.Lazy (ST,runST)
import Data.Array.ST (STArray,newArray_,readArray,writeArray,getBounds)
import Data.Char (isDigit,digitToInt)
```

The gist of the procedure isn’t too hard to implement. It could reasonably be done for part 1 using a `Seq`uence of Ints, or even a simple list.

The issue is scaling it up to part 2.

The key observation is that no matter the representation, we need a fast access operation to a move’s destination. This is trickier than it looks.

But first the cause. The procedure involves moving three adjacent cups from a spot on the circle to another. This is a globally logarithmic operation on `Seq`. So our general complexity is *O*(*R* × (cost(access cursor) + cost(access destination) + log *N*))

The cost of accessing the cursor can easily be kept *O*(log *N*) by storing its position alongside the circle, or by always shifting the `Seq` to it and keeping it at index 0. The cost of accessing the destination, on the other hand, is that of a linear scan if we don’t do anything smart about it. That would bring the complexity to *O*(*R* × *N*), which is too much, so we *have* to do something about it.

Any attempt to store indices is going to have to confront itself with the cost of updating them when the strings of 3 are moved around. It’s going to be too much as well.

So an idea would be to store a cup’s destination in a representation that isn’t affected by our shifting stuff around. One way to do this would be with an explicit node representation:

`data Node = Node { nodeLabel :: Int, nodeDestination :: Node }`

(This is isomorphic to the standard Haskell list.)

But that doesn’t solve everything. We still need to keep track of the editing that goes around. And using any container-based approach hits the same class of problem: we either need to update half of the circle at each move or can’t do anything useful with the `Node` accessor.

The solution is to move the structure information to the node *and make it mutable*.

```
data Node s = Node
  { nodeLabel :: Int
  , nodeNextClockwise :: STRef s Node
  , nodeDestination :: Node
  }
```

Now the whole circle can be addressed from just a `Node`. We still need to allocate all of them, and we know how many we have in advance, so we can store them in an array.

`type State s = Array Int (Node s)`

But… if they have a stable offset in the array, like, say, their label, we don’t need to store a destination link at all, we can compute it in constant time given a label!

```
data Node s = Node
  { nodeLabel :: Int
  , nodeNextClockwise :: STRef s Node
  }
```

But then we can just simplify that node type to a simple integer: its clockwise neighbor.

`type State s = STArray s Int Int`

So my state is an array of cups indexed by cups, representing the next one in the circle. With the cup labels being 1-based, this has the added bonus that I can store the link to the current cup right there, at index 0.
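To make the encoding concrete, here’s the successor table for the statement’s sample labeling 389125467, sketched with an immutable `Map` instead of the mutable array (illustration only, using the same index-0-points-at-the-current-cup convention):

```haskell
import qualified Data.Map.Strict as M

-- Successor encoding of the circle 3 8 9 1 2 5 4 6 7:
-- key = cup label (0 stands for "pointer to current"),
-- value = the next cup clockwise.
successors :: M.Map Int Int
successors = M.fromList $ zip (0 : l) (head l : tail l ++ [head l])
  where l = [3,8,9,1,2,5,4,6,7]
```

Looking up 0 yields 3 (the current cup), and looking up 7 wraps back around to 3.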

Now the data type is stable, I can implement the move as a straightforward transcription of the statement to this linked list encoding.

I’ll start by retrieving the current cup’s label/index.

```
move :: State s -> ST s ()
move env = do
  current <- readArray env 0
```

Then the cups are picked up.

`pick@[pickFirst,_,pickLast] <- toListN current 3 env`

For list editing purposes, I only need to know the first and last. Next I cut them out of the circle by having the current cup skip to the first cup after the pickup.

```
  next <- readArray env pickLast
  writeArray env 0 next
```

The destination is usually one less than the current by default. But it does happen that we need to skip a few numbers.

```
  (_,n) <- getBounds env
  let (dest:_) = filter (`notElem` pick) ([current-1,current-2..1] ++ [n,n-1..])
```

I can now insert the segment right next to the destination cup.

```
  suffix <- readArray env dest
  writeArray env dest pickFirst
  writeArray env pickLast suffix
```

There’s no need to rewire the picked up cups internally: they remain in the same order with regard to each other.

Finally I update the link to the current cup.

` writeArray env current next`

I used a `toList`

derivative so I didn’t have to chain too many `readArray`

operations, but it doesn’t really follow the `Foldable`

typeclass signatures. I define it as such:

```
toListN :: Int -> Int -> State s -> ST s [Int]
toListN start count env = go start count where
  go _ 0 = pure []
  go i c = readArray env i >>= \n -> (n :) <$> go n (c-1)
```

Initially I used `unsafeInterleaveST` and a lazy infinite list, but I figured that didn’t buy much when I actually always knew in advance how many items I needed.

Now to generate the starting structure. We’re following a permutation list pattern, so initialization is performed in pseudo-random^{1} order. We know we’re not leaving any holes because it’s the caller’s responsibility not to provide a 0 or any duplicates. We know it’s well-formed because the caller must further include the entire [0, *n*] range.

```
fromList :: [Int] -> ST s (State s)
fromList l = do
  let n = length l
  a <- newArray_ (0,n)
  zipWithM_ (writeArray a)
            ([0] ++ l)
            ([head l] ++ tail l ++ [head l])
  pure a
```

Now playing a game is a simple matter of initializing the structure and repeatedly applying moves. I allow information extraction by taking an `ST`-based continuation as a parameter.

```
play :: [Int] -> Int -> (forall s. State s -> ST s a) -> a
play l n f = runST $ do
  env <- fromList l
  replicateM_ n (move env)
  f env
```

I can now perform the part 1 game.

```
λ> play sample 100 $ toListN 1 8
[6,7,3,8,4,5,2,9]
```

For part 2, I’ll use a helper to expand the initial list.

```
unMistake :: [Int] -> [Int]
unMistake xs = xs ++ [maximum xs+1 .. 1000000]
```

Playing the game takes significant time interpreted. I packaged it with a `main` function for compiled execution. It takes about 10 seconds.

```
main :: IO ()
main = do
  input <- map digitToInt . filter isDigit <$> readFile "day23.in"
  print $ play input 100 $ toListN 1 8
  print $ play (unMistake input) 10000000 $ fmap product . toListN 1 2
```

This concludes today’s solution. Hope you enjoyed it, and see you tomorrow!

As far as the array initializer knows. It’s highly regular for values above 9.↩︎

This post is a literate Haskell program. Please skip over the imports while I go grab myself a cup of coffee.

```
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE TupleSections #-}
import Control.Applicative (liftA2)
import Data.Foldable
import Data.List (permutations)
import Data.Map.Strict (Map,(!))
import Data.Set (Set,unions,intersection,(\\),isSubsetOf)
import Text.Parsec
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set
type Parser = Parsec String ()
```

I’ll use a simple line-based `Parsec` parser. Not that that much power is really needed for the input format, but I’ve been writing them all month, so it’s currently my best choice in terms of whipuptitude.

```
parseRecipes =
  either (error . show) id .
  traverse (parse parseFood "food") .
  lines
  where
    parseFood = liftA2 Recipe ingredients allergens <* eof
    ingredients = Set.fromList <$> many (Ingredient <$> word)
    allergens = Set.fromList <$> between (string "(contains ") (char ')')
                ((Allergen <$> word) `sepBy1` string ", ")
    word = many1 letter <* optional spaces
```

You’ve been reading my parsers all month long as well, so there’s really not much to comment on anymore. For a summary, the input is a list of recipes.

`parseRecipes :: String -> [Recipe]`

A recipe formalizes a relation between a list of ingredients and a list of allergens. There’s no expected repetition, so I’m storing them as sets for later use.

```
data Recipe = Recipe
  { rIngredients :: Set Ingredient
  , rAllergens :: Set Allergen
  }
```

Ingredients and allergens are provided as strings. One set happens to be ~~readable~~ in English and not the other, but that’s storytelling. From a solving point of view, it doesn’t matter. What matters is they’re a different type of string, so I’ll `newtype` them to avoid using one kind instead of the other when I’m not paying attention.

```
newtype Allergen = Allergen String deriving (Eq,Ord)
newtype Ingredient = Ingredient String deriving (Eq,Ord)
```

Poking around the data, my input appears to consist of 39 recipes, listing a total of 8 allergens spread among a list of 200 ingredients.

Now to actually identify guaranteed-safe ingredients as the statement requests.

This is where I initially got stuck. I couldn’t for the life of me figure how an ingredient could ever be ruled out from being allergenic. After all, allergens are never guaranteed to be listed, so what’s preventing an ingredient from containing trace amounts of that spooky allergen, and not being reported in a single food item as the statement explicitly allows? I have no way of telling it apart from an actually safe ingredient, have I? So there can be no safe ingredients at all, this statement doesn’t make any sense!

As the coffee started to kick in, I finally found the key sentence. Hidden in plain sight as the leading one in paragraph 4.

Each allergen is found in exactly one ingredient.

Oooooooh. Well *that*’s better.

Each recipe gives us a bit of information about the allergens. It doesn’t really tell us which is which unless we’re given pure ingredients as foods^{2}, but it does constrain the set of ingredients that are known to carry those allergens. By cross-referencing them with the constraints from other recipes, I can get at least *some* information as to which ingredients could carry each allergen.
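Here’s a toy run of that cross-referencing idea on the statement’s sample, with plain `String`s instead of the newtypes, using `Data.Map`’s `unionsWith` to intersect carrier sets across recipes:

```haskell
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
import Data.Set (Set, intersection)
import qualified Data.Set as Set

-- Two sample recipes both declare "dairy"; its carrier must therefore
-- lie in the intersection of their ingredient lists.
candidates :: [Map String (Set String)]
candidates =
  [ Map.fromList [("dairy", Set.fromList ["mxmxvkd","kfcds","sqjhc","nhms"])]
  , Map.fromList [("dairy", Set.fromList ["trh","fvjkl","sbzzf","mxmxvkd"])]
  ]

-- Only "mxmxvkd" survives both recipes' candidate sets.
dairyCarriers :: Set String
dairyCarriers = Map.unionsWith intersection candidates Map.! "dairy"
```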

I’ll work bottom-up, first extracting a map of possible carriers from a recipe.

```
allergenCandidates :: Recipe -> Map Allergen (Set Ingredient)
allergenCandidates Recipe{..} =
  Map.fromList $ map (,rIngredients) (toList rAllergens)
```

And then I’ll work across recipes to generate the set of ingredients who remain in any allergen’s carrier set.

```
suspiciousIngredients :: [Recipe] -> Set Ingredient
suspiciousIngredients =
  fold . Map.unionsWith intersection . map allergenCandidates
```

```
λ> suspiciousIngredients recipes
fromList [csqc,fnntr,gzvsg,jlsqx,lvv,pmz,tr,xblchx]
```

Wow.

Only 8 suspicious ingredients. For a total of 8 known allergens! That cross-referencing didn’t bring up “a bit more” information, it gave me “just about all” of it.

And definitely enough to complete part 1. I’ll just write a small helper to count how many ingredients from a set occur in a recipe.

```
occurrence :: Set Ingredient -> Recipe -> Int
occurrence is = Set.size . intersection is . rIngredients
```

And I’ve got all I need to complete.

```
λ> let ingredients = unions (rIngredients <$> recipes)
λ> let dangerousIngredients = suspiciousIngredients recipes
λ> let safeIngredients = ingredients \\ dangerousIngredients
λ> sum $ occurrence safeIngredients <$> recipes
2176
```

For part 2, classification isn’t enough anymore, we need to pair allergens and ingredients individually.

I could implement a backtracking search as I did a few times already this month. But the problem space here is really small. An exhaustive search is going to be much easier to write.

We’re looking for a one-to-one mapping from allergen to ingredient. More specifically, we’re asked to provide it in alphabetical order of allergens. I’ll switch to lists so I can introduce that notion of ordering.

`type Matching = [(Allergen,Ingredient)]`

Building a matching from two ordered lists of allergens and ingredients is trivial.

```
match :: [Allergen] -> [Ingredient] -> Matching
match = zip
```

The specific canonical matching we’re looking for is in order of allergens, so I’ll refine.

```
canonMatching :: Set Allergen -> [Ingredient] -> Matching
canonMatching allergens = match (Set.toAscList allergens)
```

A matching is the correct one if it can’t be proved wrong. To take a jab at it, I’ll check its consistency with regard to a provided recipe.

```
consistent :: Matching -> Recipe -> Bool
consistent matching = let m = Map.fromList matching in \Recipe{..} ->
  let dangerousIngredients = Set.fromList (map (m !) (toList rAllergens))
  in dangerousIngredients `isSubsetOf` rIngredients
```

If we know the full allergen-to-ingredient relation list, we know a food item’s actual allergen list exactly. So the consistency check is simply that the reported list of allergens is indeed a subset of the full list of contained allergens.

By checking for consistency across all recipes, I can filter the choices as well as can be.

```
validMatching :: Matching -> [Recipe] -> Bool
validMatching matching = all (consistent matching)
```

```
λ> let validOrdering o = validMatching (canonMatching allergens o) recipes
λ> filter validOrdering (permutations (toList dangerousIngredients))
[[lvv,xblchx,tr,gzvsg,jlsqx,fnntr,pmz,csqc]]
```

And that’s all there is to it! Here’s the end of the program for completeness.

```
main :: IO ()
main = do
  recipes <- parseRecipes <$> readFile "day21.in"
  let ingredients = unions (rIngredients <$> recipes)
      allergens = unions (rAllergens <$> recipes)
  let dangerousIngredients = suspiciousIngredients recipes
      safeIngredients = ingredients \\ dangerousIngredients
  print $ sum $ occurrence safeIngredients <$> recipes
  let validOrdering o = validMatching (canonMatching allergens o) recipes
  let [canonical] = filter validOrdering
                      (permutations (toList dangerousIngredients))
  print canonical

-- small hack to avoid too much editing when copy-pasting the answer
instance Show Ingredient where show (Ingredient i) = i
```

This concludes this day’s solution. Coffee aside, it was actually harder to present than to code. :-)

See you tomorrow!

```
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ViewPatterns #-}
import Control.Applicative (liftA2)
import Control.Lens (view,_1,_2,(+~))
import Control.Monad (guard)
import Data.Bool (bool)
import Data.Char (isDigit)
import Data.Function ((&),on)
import Data.Ix (Ix,range)
import Data.List (delete,tails,transpose,groupBy)
import Data.List.Split (linesBy)
import Data.Map.Strict (Map,(!))
import qualified Data.Map.Strict as Map
import Data.Set (Set)
import qualified Data.Set as Set
import Linear (V2(V2),zero)
```

My input file isn’t too long—144 tiles. I ought to be fine just using lists as a data structure.

`type Image = [[Bool]]`

The monochrome images come with an ID number. To avoid mixing them up, I’ll pack them together.

```
data Tile = Tile
  { tileId :: Int
  , tileImage :: Image
  }
  deriving Eq
```

Now to actually parse the input. I’ll use the same trick as usual to split paragraphs.

```
parseInput :: String -> [Tile]
parseInput = map parseTile . linesBy null . lines
```

To parse the `Tile` structure itself, I’ll try something experimental^{1} to answer the great question: can view patterns be nested?

```
parseTile :: [String] -> Tile
parseTile (  (break (== ' ') ->
               ("Tile",' ' : (span isDigit -> (read -> tileId,":"))))
           : ((map . map) (== '#') -> tileImage) )
  = Tile{..}
```

…it seems like they can. I wouldn’t go as far as calling that a good idea, though. Ye gods this is horrendous! Until someone can show me a visually-pleasing way to format that, I’ll classify this as “never again”.

Ok, so our image was split up in tiles, and the postman tripped on the way so now they’re a mess on the floor. And since they’re square and transparent, there’s no way to know which way to place them so all of them align.

Let’s formalize the various ways it could be messed up.

```
data Setup = Setup
  { sTranspose :: Bool
  , sOuterFlip :: Bool
  , sInnerFlip :: Bool
  }
  deriving (Eq,Ord,Ix,Bounded)
```

In real life I’d call the enumeration “4 right-angle rotations for 2 sides”. But since we’re doing computer processing on lists, I’ll adjust my terminology.

For some reason GHC can’t derive `Enum` on this structure, so I’ll define my “universe” enumerator using `Ix` instead.

```
allSetups :: [Setup]
allSetups = range (minBound,maxBound)
```
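Those three booleans should generate exactly the 8 symmetries of the square (the dihedral group D₄). A quick standalone check on a small grid with distinct entries, reusing the same `bool`-chain as `reorient` below:

```haskell
import Data.Bool (bool)
import Data.List (nub, transpose)

-- All 8 flag combinations as grid transformations.
morphs :: [[[Int]] -> [[Int]]]
morphs = [ bool id transpose t . bool id reverse o . bool id (map reverse) i
         | t <- [False,True], o <- [False,True], i <- [False,True] ]

-- Applied to an asymmetric 2×2 grid, they should yield 8 distinct results.
orientations :: [[[Int]]]
orientations = nub [ m [[1,2],[3,4]] | m <- morphs ]
```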

Now I can transform my tile to bring it to and from all of those orientations.

```
reorient :: Tile -> Setup -> Tile
reorient tile Setup{..} = tile { tileImage = morph (tileImage tile) }
  where morph = bool id transpose sTranspose .
                bool id reverse sOuterFlip .
                bool id (map reverse) sInnerFlip
```

I chose to keep it in the same type, as there’s no information loss. I need the ID number to remain with it if I want to track it back when I’m done shifting bits around.

Now let’s move towards reassembly. The tiles are going to be arranged in a grid, so integral-`V2`-based cartesian coordinates are fine.

```
type Vec = V2 Int
type Pos = Vec
```

On the other hand, trying all permutations of the tiles in a square pattern seems a bit wasteful. I’ll likely get drastically better performance starting with an arbitrary tile and accreting the others on the fly.

But I can’t know in advance where in the final square the starting tile will end. So I’ll use a `Map` instead of a bidimensional array.

`type Assembly = Map Pos Tile`

Today’s neighborhood function only recognizes 4 neighbors per tile. Please indulge me while I try to go creative with `linear`’s `V2` and lenses. I’m attempting to keep up the practice, here.^{2}

```
neighbors :: Pos -> Set Pos
neighbors = Set.fromList . sequence (liftA2 (+~) [_1,_2] [-1,1])
```

No, it’s not shorter,^{3} more efficient or in any way better than a simple list comprehension-based implementation. But I welcomed the fun.

I’d better check it actually works.

```
λ> neighbors (V2 44 55)
fromList [V2 43 55,V2 44 54,V2 44 56,V2 45 55]
```

Looks good :-)

Let’s write a helper to determine whether two tiles can be placed next to each other.

```
compatible :: Tile -> Vec -> Tile -> Bool
compatible t1 delta t2 = edge1 (tileImage t1) == edge2 (tileImage t2)
  where
    (edge1,edge2) = case delta of
      V2 (-1) 0 -> (top,bottom)
      V2 1 0    -> (bottom,top)
      V2 0 (-1) -> (left,right)
      V2 0 1    -> (right,left)
    top = head
    bottom = last
    left = map head
    right = map last
```

So to reassemble the big square from the tiles, I’ll place the tiles one by one, next to each other, starting with an arbitrary one—the first—, always ensuring the borders of the newly placed one are compatible with the previously placed others. Backtracking through the list monad.

I expect there to be only one solution up to orientations, and the input size is small, so I’m not throwing any optimization in. I pick, in order:

- an empty spot with at least one neighboring spot filled
- a loose tile
- an orientation for it

If it’s compatible with its neighbors, I recurse, else I backtrack.

```
reassemble :: [Tile] -> Assembly
reassemble (startingTile : tiles0) =
    head $ go (Map.singleton zero startingTile) tiles0
  where
    go placed [] = pure placed
    go placed tiles = do
      pos <- Map.keysSet placed &
             Set.map neighbors &
             Set.unions &
             Set.filter (`Map.notMember` placed) &
             Set.toList
      let nPoss = neighbors pos &
                  Set.filter (`Map.member` placed) &
                  Set.toList
      tile <- tiles
      setup <- allSetups
      let pl = reorient tile setup
      guard $ all (\nPos -> compatible pl (nPos - pos) (placed ! nPos)) nPoss
      go (Map.insert pos pl placed) (delete tile tiles)
```

If all goes well, this returns a “floating” map of oriented tiles. The following helper extracts the floating square’s corner ID numbers, for the part 1 answer field.

```
cornerIds :: Assembly -> [Int]
cornerIds arr = tileId . (arr !) <$>
    [ V2 top left, V2 top right, V2 bottom left, V2 bottom right ]
  where
    ixs = Map.keys arr
    top = minimum (view _1 <$> ixs)
    left = minimum (view _2 <$> ixs)
    bottom = maximum (view _1 <$> ixs)
    right = maximum (view _2 <$> ixs)
```

You may have noticed I’m using the generic `_1` and `_2` lenses instead of `linear`’s `V2`-specialized `_x` and `_y`, here and for the neighborhood function. It’s the clash of worlds: in a linear algebra world, I’d normally consider X to be the first coordinate, horizontal and going right, and Y the second, vertical going up. In a raster world^{4}, I rather see the first coordinate going down and the second right. I chose to go with the raster vision all the way, and evade that part of the confusion by using the numbered accessors.

λ> let assembly = reassemble (parseInput sample)
λ> cornerIds assembly
[2971,1171,1951,3079]

All good and a gold star!

For part 2 we merge the oriented tiles. I’m somewhat arbitrarily repacking it in a `Tile` so I can reuse the orientation routines later.

```
merge :: Assembly -> Tile
merge =
    Tile undefined .                       -- image to tile
    foldr1 (++) .                          -- fuse vertically
    map (foldr1 (zipWith (++))) .          -- fuse horizontally
    (map . map) snd .                      -- drop coordinates
    groupBy ((==) `on` (view (_1 . _1))) . -- rasterize
    Map.assocs .                           -- map to list
    fmap tileTrimmedImage                  -- trim
```

The tile trimmer is similar in style: composition of simpler functions.

```
tileTrimmedImage :: Tile -> Image
tileTrimmedImage = trimRight . trimLeft . trimBottom . trimTop . tileImage
  where
    trimBottom = init
    trimTop = tail
    trimRight = map init
    trimLeft = map tail
```

Now to locate sea monsters. Let’s make sure we know what we’re looking for.

```
seaMonster :: Image
seaMonster = (map . map) (== '#')
  [ "                  # "
  , "#    ##    ##    ###"
  , " #  #  #  #  #  #   "
  ]
```

To count them, I’ll match by following both lists of lists in lockstep. There are two pitfalls here.

- I’m working on lists of lists of `Bool`s. It’s all too natural to reach for zip-like functions, *e.g.* `zipWith (==)` or similar. But those constructs stop on the shortest of the sequences provided. When matching, it has a very different meaning to exhaust the pattern (success) or the input string (failure). I’ll alleviate with a dedicated helper.
- The comparison semantics are asymmetrical.
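To make the first pitfall concrete, here’s a minimal sketch (hypothetical `naiveMatch`) of how zip-style truncation silently confuses the two cases:

```haskell
-- zipWith stops at the shorter list, so a pattern longer than the
-- input wrongly "matches" on their common prefix.
naiveMatch :: Eq a => [a] -> [a] -> Bool
naiveMatch pat input = and (zipWith (==) pat input)

-- The pattern is longer than the input: this should be a failure,
-- yet the naive version reports success.
truncationBug :: Bool
truncationBug = naiveMatch [1,2,3 :: Int] [1,2]
```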

```
match :: Image -> Image -> Bool
match ref input = allMatch row ref input
  where
    allMatch p (a:as) (b:bs) = p a b && allMatch p as bs
    allMatch _ [] _ = True   -- out of pattern, success
    allMatch _ _ [] = False  -- out of input, failure
    row = allMatch pixel
    pixel False = const True -- blank pattern, always match
    pixel True = id          -- hash pattern, match on hash pixel
```

This only matches if the pattern is found at (0, 0). Here’s a helper to attempt all shifts of the image.

```
lowerRights :: [[a]] -> [[[a]]]
lowerRights = concatMap tails . transpose . map tails
```

And I can now complete the monster counter.

```
countMonsters :: Tile -> Int
countMonsters = length . filter (match seaMonster) . lowerRights . tileImage
```

A final helper before attempting part 2:

```
countHashes :: Image -> Int
countHashes = length . filter id . concat
```

λ> let image = merge assembly
λ> maximum $ map (countMonsters . reorient image) allSetups
2
λ> countHashes (tileImage image) - 2 * countHashes seaMonster
273

Am I playing my luck a bit here? This computation works for the sample, but theoretically fails if a hash can be a part of multiple sea monsters.

Does it happen in my puzzle input? There’s only one way to find out…^{5}

In the meantime, here’s the rest of the code for completeness.

```
main :: IO ()
main = do
  tiles <- parseInput <$> readFile "day20.in"
  let assembly = reassemble tiles
  putStrLn $ "Assembly product: " ++ show (product (cornerIds assembly))
  let image = merge assembly
      monsters = maximum $ map (countMonsters . reorient image) allSetups
      roughness = countHashes (tileImage image)
                - monsters * countHashes seaMonster
  putStrLn $ "Water roughness: " ++ show roughness
```

Well, that turned out to be enough for the second gold star of the day. I’m not sure I can disprove sea monsters overlapping each other by reading the statement again, so I figure it’s just Eric being nice to us.

Or maybe only to me? O:-)

This concludes this day’s solution. Hope you enjoyed it; see you soon!

Experimental *for me*. I expect them to be fully specified.↩︎

I got back to lenses for day 14, but that part of the journey isn’t published yet. I’m on it!↩︎

Well, ok, it *is* shorter, but the point would stand even if it weren’t.↩︎

Matrix addressing, ironically, fits the raster vision.↩︎

Of course there isn’t. I could go the rigorous route and mark the relevant pixels, perform their set union then difference with the rest of the hashes. But that would most likely take me more than a single minute. When the site grants us an attempt per minute, I’d be stupid not to try my luck before implementing the complex stuff.↩︎

```
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ViewPatterns #-}
import Prelude hiding ((+),(*))
import qualified Prelude
import Control.Applicative (liftA2)
import Data.Function (fix)
import Data.Functor (void)
import Data.Set (Set,member)
import qualified Data.Set as Set
import Data.IntMap (IntMap,(!),insert)
import qualified Data.IntMap as Map
import Data.List.Split (linesBy)
import Text.Parsec
import Text.ParserCombinators.ReadP (ReadP,(+++),readP_to_S)
import qualified Text.ParserCombinators.ReadP as ReadP
import Text.Regex.PCRE
type Parser = Parsec String ()
```

We are provided with the language grammar and some messages. Let’s run some statistics so I have a better understanding of where I’m going.

`133 rules, 461 messages Longest message: 96 characters Total messages' length: 17224 characters`

There are many angles of attack to the language decision problem. Getting a feeling of its structure goes a long way towards choosing the right path. (I don’t want to implement CYK! It’s too abstract to be fun.)

Let’s parse and analyze the grammar to get a better feeling of it. Starting with a few type definitions.

```
type Grammar = IntMap Rule
type Rule = Either NonTerminal Terminal
type NonTerminal = [[Int]]
type Terminal = Char
type Message = String
```

That `NonTerminal` type needs commenting. From the statement, and matched by my input file, a nonterminal production is a choice between one or more sequences of recursive productions. I represent it as a list of lists of `Int`egers. The integer is a recursive rule referenced by its index; the inner list is the sequence; the outer list is the choice.
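For instance, a hypothetical rule line `1: 2 3 | 4` maps onto that representation like this:

```haskell
-- "1: 2 3 | 4": the outer list is the choice, each inner list a
-- sequence of rule references.
rule1 :: [[Int]]  -- i.e. a NonTerminal
rule1 = [[2,3],[4]]
```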

Let’s parse. This seems like the simplest grammar I’ve drawn `Parsec` for to date. If you’ve been following this series, you shouldn’t need much hand-holding anymore.

```
rule :: Parser (Int,Rule)
rule = (,)
  <$> (num <* string ": ")
  <*> (Left <$> nonTerminal <|> Right <$> terminal)

nonTerminal :: Parser NonTerminal
nonTerminal = many1 num `sepBy1` string "| "

num :: Parser Int
num = read <$> many1 digit <* spaces

terminal :: Parser Terminal
terminal = between (char '"') (char '"') anyChar
```

For ease of REPL, let’s package it into an environment structure from which I could easily address the fields with `NamedFieldPuns` or `RecordWildCards`.

```
data Env = Env { grammar :: Grammar, messages :: [Message] }

parseInput :: String -> Env
parseInput (linesBy null . lines -> [rawRules,messages]) = Env{..}
  where Right grammar = Map.fromList <$>
          traverse (parse (rule <* eof) "rule") rawRules
```

I^{1} have a small number (461) of queries to perform, so I can probably afford a complex and/or suboptimal per-message algorithm. On the other hand, I have 133 rules, which is large. From a cursory glance at the actual messages, which seem to be binary^{2}, I’m looking at an approximate worst case of 2^{131} ≈ 2.7 × 10^{39} productions. The statement is right in saying it’s finite. It’s still not tractable.

Since we’re given the grammar as a set of productions, we can actually compute its total number of productions efficiently—*i.e.* linearly—working from the terminals up. A terminal’s number of productions is trivially 1. A sequence’s number of productions is the product of the number of productions of its components: we’re enumerating by choosing one per. A choice’s number of productions is the sum of the number of productions of its components: we’re picking from either, but not simultaneously.
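A quick sanity check of those two rules on a toy grammar (hypothetical rules `1: "a" | "b"` and `0: 1 1`, just to fix ideas):

```haskell
-- 1: "a" | "b" — a choice between two terminals: sum of the counts.
count1 :: Int
count1 = 1 + 1

-- 0: 1 1 — a sequence of two references to rule 1: product of the counts.
count0 :: Int
count0 = count1 * count1  -- the four words aa, ab, ba, bb
```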

This gives the grammar a nice ring structure. Let’s make a typeclass out of it, see if I can exploit it later.

```
class GrammarRing r where
  fromChar :: Char -> r
  (+) :: r -> r -> r
  (*) :: r -> r -> r
```

Now let’s express the language word count in terms of it.

```
instance GrammarRing Int where
  fromChar _ = 1
  (+) = (Prelude.+)
  (*) = (Prelude.*)
```

With this setup, counting the words is now only a matter of converting the grammar’s set of rules into this structure, recursively dereferencing grammar rules by index. I’m representing the recursion with an explicit input parameter, so it can be externalized and acted upon depending on context.

```
toRing :: GrammarRing r => Grammar -> IntMap r -> IntMap r
toRing grammar ringMap = fmap ruleToRing grammar
  where
    ruleToRing (Right c) = fromChar c
    ruleToRing (Left refss) = foldr1 (+) (map seqToRing refss)
    seqToRing refs = foldr1 (*) (map refToRing refs)
    refToRing ref = ringMap ! ref
```

And here’s a wrapper to:

- tie the knot around the recursively-defined map of production count by rule
- extract the one for rule 0.

```
rawLanguageSize :: Grammar -> Int
rawLanguageSize grammar = fix (toRing grammar) ! 0
```

Raw language size: 2097152

2 million words… that’s not so bad. It’s small enough that I could just generate them all to put in a set structure to verify each message against.

How will I generate them? By exploiting the ring structure again.

```
instance GrammarRing (Set Message) where
  fromChar c = Set.singleton [c]
  a * b = Set.fromList $ liftA2 (++) (Set.toList a) (Set.toList b)
  (+) = Set.union
```

A terminal symbol represents the word composed of that symbol only. A nonterminal sequence represents any word made by concatenating a word from the first language with one from the second. I use the list ~~monad~~ applicative’s distributive property to implement that. A nonterminal choice represents any word from either of its components, so its language is the union of theirs.
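On a toy pair of languages (hypothetical values), both operations look like this:

```haskell
import qualified Data.Set as Set
import Control.Applicative (liftA2)

l1, l2 :: Set.Set String
l1 = Set.fromList ["a","b"]
l2 = Set.singleton "x"

-- Product: pairwise concatenation of the two languages.
prod :: Set.Set String
prod = Set.fromList $ liftA2 (++) (Set.toList l1) (Set.toList l2)

-- Sum: plain union.
summed :: Set.Set String
summed = Set.union l1 l2
```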

The wrapper helper looks very similar to the counting one.

```
genLanguage :: Grammar -> Set Message
genLanguage grammar = fix (toRing grammar) ! 0
```

Yay genericity!

Actual language size: 2097152

The language set’s size is exactly the same as the number of productions. It’s not too important as both are tractable, but it does mean the grammar is *unambiguous*: any word from the language it produces can only be produced in a single way.
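To see why equality of the two numbers implies unambiguity, consider a hypothetical ambiguous rule `0: "a" | "a"`: it has two productions, but the set union collapses the duplicate word, so the language size drops to one.

```haskell
import qualified Data.Set as Set

-- Two productions for the choice…
ambiguousCount :: Int
ambiguousCount = 1 + 1

-- …but a language of a single word: the union deduplicates.
ambiguousLanguage :: Set.Set String
ambiguousLanguage = Set.union (Set.singleton "a") (Set.singleton "a")
```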

Now I can efficiently count the number of valid messages and glean my gold star.^{3}

```
test :: (Message -> Bool) -> [Message] -> Int
test p = length . filter p

performTest :: [Message] -> String -> (Message -> Bool) -> IO ()
performTest msgs lbl p =
  let n = test p msgs
  in putStrLn $ lbl ++ ": " ++ show n ++ " messages"
```

λ> performTest messages "Finite language" (`member` genLanguage grammar)
Finite language: 156 messages

Part 2 introduces loops in the grammar. What does that change, concretely?

With the path I took for part 1, it changes a lot of things. The language’s cardinality is now infinite. It could not be efficiently computed as I did earlier, since the map’s value at index 8 would be a strict function of itself, which isn’t well-defined for any of the evaluation models GHC provides.

I’m going to have to change my approach.

8: 42 | 42 8

Looking closer at the rule 8 change, we can see it’s a choice of either production 42, or a concatenation of production 42 with production 8. Iterating mentally, this could be summarized as “one or more repetitions of production 42”.

This hints at something. What class of languages can be inductively defined by atomic primitives, concatenation and choice? That’s the definition of regular languages!

Regular languages can be validated by an NFA derived from their grammar. Said NFA can either be simulated, or replaced with an equivalent DFA, with whom validation is even more straightforward.

Or I could just be ~~lazy~~ efficient and express the language as a regular expression to let a specialized library take care of it. The conversion is just as mechanical as the previous two I’ve detailed.

```
instance GrammarRing String where
  fromChar c = [c]
  a + b = a ++ "|" ++ b
  a * b = a ++ b
```

I’m delegating parenthesizing to the wrapper, to ease the bracket load. This makes use of the fact that there’s at most a single level of choice per rule.

```
genRegex :: Grammar -> String
genRegex grammar = anchor $ fix (fmap paren . toRing grammar) ! 0

paren,anchor :: String -> String
paren s = "(" ++ s ++ ")"
anchor s = "^" ++ s ++ "$"
```

Let’s check this is actually equivalent to our previous way of doing things.

156 messages match rule 0 by regex.

Seems good.

So now all I have to do is patch up rules 8 and 11 and launch the generator again, right?

Of course not.

Letting the generator loose on a recursive definition is the same recipe for disaster I would have had if I’d tried to count or enumerate the recursive language: it’s not well-defined either. The GHC runtime can do a decent job of detecting it and aborting when run single-threaded, but it’s still not going to give a useful result.

Instead I’ll patch rule 8 by hand. The PCRE syntax for a nonempty Kleene star is a plus character.

`rule8 g = g!42 ++ "+"`

Oh. Did I just forget about rule 11?

11: 42 31 | 42 11 31

On the surface, it looks kind of similar. Rule 11 generates either the concatenation of productions 42 and 31, or the concatenation of productions 42, recursive production 11, and production 31.

The bad news is, this makes it not a regular language anymore. The good news is, PCRE handles it just fine anyway.

The syntax to invoke a recursive pattern is normally `(?N)`, where *N* is a relative pointer to the targeted capture buffer. But that’s going to be unwieldy here where the regex is generated: I don’t want to have to count the number of opening parentheses that occur between pattern start and recursion. So I’ll use a named pattern and reference instead.^{4}

```
genRegex2 :: Grammar -> String
genRegex2 grammar = anchor $ fix (patch . fmap paren . toRing grammar) ! 0
  where
    patch = (insert 8 =<< rule8) . (insert 11 =<< rule11)
    rule8 g = g!42 ++ "+"
    rule11 g = concat [ "(?<ELEVEN>", g!42, "(?&ELEVEN)?", g!31, ")" ]
```

And this solves part 2.

**Bonus**: I used `Parsec` to parse the grammar, can’t I use it to parse the messages as well?

Well… not really. `Parsec` is a predictive parser: its model of choice is to attempt the next branch only if the first failed without consuming any input. Using `try`, it can be made to attempt the next branch if the first one failed even if it consumed input.

But take a look at patched rule 8 above. With a direct mapping to `Parsec`, its second branch can never succeed: either the first one does and the second is never attempted; or the first one fails and the second is attempted. But since the first is a prefix of the second, the second is bound to fail as well.

I’m not aware of a way to make `Parsec` backtrack over the choice point after a successful first alternative match. I could reorder rule 8, but there’s no saying that sort of situation doesn’t happen elsewhere, or indirectly. Maybe I could rewrite the rules in a way to fit the model, but it would be quite the endeavour for dubious results whose validity I wouldn’t be so sure of.

But there’s a choice-symmetrical parser combinator library in the base distribution that fits: `ReadP`. Its interface is almost the same and I can generate a parser for it as easily as a regex or a full language:

```
instance GrammarRing (ReadP ()) where
  fromChar c = void (ReadP.char c)
  a + b = a +++ b
  a * b = a *> b

genParser :: Grammar -> ReadP ()
genParser grammar = fix (toRing grammar) ! 0

genParser2 :: Grammar -> ReadP ()
genParser2 = genParser
           . insert 8 (Left [[42],[42,8]])
           . insert 11 (Left [[42,31],[42,11,31]])

runReadP :: ReadP () -> Message -> Bool
runReadP p = interpret . filter (null . snd) . readP_to_S p
  where interpret = \case [] -> False
                          [_] -> True
                          _ -> error "Ambiguous parse"
```

Sure enough, it yields the same results as PCRE.
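As a micro-demonstration of that symmetric choice, here’s a self-contained rule-8-like sketch (hypothetical names, independent of the generated parser): both branches of `+++` stay alive, so a failure after the choice can still backtrack into the other branch.

```haskell
import Text.ParserCombinators.ReadP

-- Rule-8 style "one or more": a single 'a', or an 'a' followed by
-- the rule again. (+++) keeps both alternatives alive.
as :: ReadP ()
as = (char 'a' *> pure ()) +++ (char 'a' *> as)

-- Full-input match of "as, then a literal 'b'": eof only succeeds
-- at end of input, so any surviving parse consumed everything.
matches :: String -> Bool
matches = not . null . readP_to_S (as *> char 'b' *> eof)
```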

Here’s the rest of the code for completeness

```
analyzeInput :: Env -> IO ()
analyzeInput Env{..} = do
  putStrLn $ show (Map.size grammar) ++ " rules, "
          ++ show (length messages) ++ " messages"
  putStrLn $ "Longest message: "
          ++ show (maximum (length <$> messages)) ++ " characters"
  putStrLn $ "Total messages' length: "
          ++ show (sum (length <$> messages)) ++ " characters"
  putStrLn $ "Raw language size: " ++ show (rawLanguageSize grammar)
```

```
main :: IO ()
main = do
  Env{..} <- parseInput <$> readFile "day19.in"
  analyzeInput Env{..}
  let language = genLanguage grammar
  putStrLn $ "Actual language size: " ++ show (Set.size language)
  performTest messages "Finite language" (`member` language)
  performTest messages "Regex V1" (=~ genRegex grammar)
  performTest messages "PCRE V2" (=~ genRegex2 grammar)
  performTest messages "ReadP V1" (runReadP (genParser grammar))
  performTest messages "ReadP V2" (runReadP (genParser2 grammar))
```

This concludes today’s solution. Hope you liked it!

I, and most likely the rest of the world too.↩︎

Binary as in: from an alphabet of cardinality 2.↩︎

I actually used the part 2 algorithm for both of my stars, but couldn’t resist doing it the less efficient way to demonstrate the `GrammarRing` abstraction.↩︎

I haven’t delved too much into what ought to happen if production 11 occurred multiple times in the rule 0 productions. It doesn’t in *my* input. I can see four reasonable behaviors: the first named definition wins; the closest previous named definition wins; the last named definition wins; no one wins and an error is raised. By construction, I’d always generate the same regex fragment for rule 11, so any of the first three would be indistinguishable to me. Case 4 would require me to think some more, but at least I wouldn’t blindly get a wrong answer.↩︎

`eval`, but I started this year in Haskell, let’s keep at it. As usual in this series, this post is a literate Haskell program. I’ll use the parser combinator library `Parsec`.

```
import Text.Parsec
type Parser = Parsec String ()
```

I don’t want to get lost in tokenization or risk forgetting to skip spaces, so I’ll just simplify the input before processing it.

```
extractMath :: String -> [String]
extractMath = lines . filter (/= ' ')
```

Now to write the parser. It’s rather straightforward. An expression is a flat chain of operations performed on subexpressions. Those subexpressions are either a literal number or another expression.

`Parsec` has^{2} a useful combinator that will directly perform the operation in a flat, left-associative manner. So we can use that directly.

```
expr,num :: Parser Int
expr = (paren expr <|> num) `chainl1` op <?> "expression"
num = read <$> many1 digit <?> "number"

op,plus,times :: Parser (Int -> Int -> Int)
op = plus <|> times <?> "operation"
plus = (+) <$ char '+'
times = (*) <$ char '*'
```

This makes use of a rather generic helper I extracted for readability.

```
paren :: Parser a -> Parser a
paren rec = between (char '(') (char ')') rec <?> "parenthesized group"
```

And… that’s all there is to it! Parser combinators don’t have an innate preference for the human way of ordering operations, and they’ll provide us with the results we want, unconfused:

λ> traverse (parse (expr <* eof) "expression") $ extractMath sample
Right [51,26,437,12240,13632]
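Incidentally, the same `chainl1` shape exists in base’s `ReadP`, so here’s an equivalent sketch (hypothetical names, not the post’s parser) that needs no external package:

```haskell
import Text.ParserCombinators.ReadP
import Data.Char (isDigit)

-- Same structure as the Parsec version: a flat, left-associative
-- chain of + and * over numbers and parenthesized groups.
exprR :: ReadP Int
exprR = (between (char '(') (char ')') exprR <++ numR) `chainl1` opR

numR :: ReadP Int
numR = read <$> munch1 isDigit

opR :: ReadP (Int -> Int -> Int)
opR = ((+) <$ char '+') <++ ((*) <$ char '*')

-- Keep only the parses that consumed the whole input.
evalR :: String -> Maybe Int
evalR s = case [ v | (v,"") <- readP_to_S exprR s ] of
  [v] -> Just v
  _   -> Nothing
```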

Now for part 2. The operations *are* ordered here. Let’s adjust the parser for that.

```
exprV2 :: Parser Int
exprV2 = (paren exprV2 <|> num) `chainl1` plus `chainl1` times <?> "advanced math"
```

λ> traverse (parse (exprV2 <* eof) "expression") $ extractMath sample
Right [51,46,1445,669060,23340]

The parser was unimpressed by the plot twist. Here’s the end of the code for completeness.

```
main :: IO ()
main = do
  math <- extractMath <$> readFile "day18.in"
  print $ sum <$> traverse (parse (expr <* eof) "basic math") math
  print $ sum <$> traverse (parse (exprV2 <* eof) "advanced math") math
```

This concludes day 18. I hope you enjoyed it. See you tomorrow!

Here are some imports and language extensions to make the introduction spicy.

```
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE TypeApplications #-}
import Prelude hiding ((.))
import Control.Category
import Control.Lens
import Data.Foldable (toList)
import Data.Ix (Ix,range)
import Data.Set as Set (Set,singleton,fromList,filter,member)
import Data.List (elemIndices)
import Linear
```

The sample is the classic 2D glider. This hints at “shifting borders”, so I’ll represent the environment as a simple `Set` of positions in case part 2 is about following it.

Parsing input can be coded as a simple composition of functions.

```
parseInput = lines
         >>> map (elemIndices '#')
         >>> zipWith (\i js -> map (V2 i) js) [0..]
         >>> concat
         >>> fromList
```

What about the return type? It’s obviously a `Set (V2 Int)`. But the puzzle is in 3D, so I’ll make it generic over the vector dimensionality. Well, not really, that would bring us into type-level magic that’s just not warranted here; I’ll settle for making it generic over the vector type.

```
parseInput :: String -> Env V2
type Env v = Set (v Int)
```

The automaton rule is the standard one, with no specific adjustment for dimension.

```
rule :: Bool -> Int -> Bool
rule True n | n `notElem` [2,3] = False
rule False 3 = True
rule active _ = active
```

Now to compute the neighborhood. An easy way to do this is to use the vectors’ `Ix` instance, as I did on day 11. But how am I to generate the bounds? I’ll use the `Applicative` instance!^{1}

λ> pure 42 :: V3 Int
V3 42 42 42

```
neighbors r = fromList
  [ r ^+^ delta | delta <- range (pure (-1),pure 1), delta /= zero ]
```

What’s the typing for this? It needs quite a few constraints to work right.

- The result is a `Set`. This needs an `Ord` instance on the vector instance type.
- It adds (`^+^`) vectors. This needs an `Additive` constraint on the vector type.
- It generates the unit “circle” using `range`. This needs an `Ix` constraint on the vector instance type.
- It uses the `Applicative` trick for reasons detailed above.

That’s a mouthful. Let’s package them for ease of reading.

```
neighbors :: Conway v => v Int -> Env v
type Conway v = (Ord (v Int),Additive v,Ix (v Int),Applicative v)
```

(This needs the `ConstraintKinds` extension.)

Now to implement the generational iteration. With such a setup, it’s rather straightforward.

```
step :: Conway v => Env v -> Env v
step activeCubes = foldMap life (activeCubes <> fringe) where
  fringe = foldMap neighbors activeCubes
  life cube = case rule (isActive cube) (countNeighbors cube) of
    True -> singleton cube
    False -> mempty
  isActive cube = cube `member` activeCubes
  countNeighbors = length . Set.filter isActive . neighbors
```

And… that’s it! All we have to do now is run a few iterations on the sample.

λ> mapM_ print $ Data.List.take 5 $ iterate step sample
fromList [V2 0 1,V2 1 2,V2 2 0,V2 2 1,V2 2 2]
fromList [V2 1 0,V2 1 2,V2 2 1,V2 2 2,V2 3 1]
fromList [V2 1 2,V2 2 0,V2 2 2,V2 3 1,V2 3 2]
fromList [V2 1 1,V2 2 2,V2 2 3,V2 3 1,V2 3 2]
fromList [V2 1 2,V2 2 3,V2 3 1,V2 3 2,V2 3 3]

It’s not too visual, but comparing the first and last line you can verify that our little glider has successfully shifted by `V2 1 1`.

But wait! We’re supposed to operate in 3D!

Well, if my generic style of implementing it worked, it should all be a matter of using the code from a 3D starting point instead of the flat one provided as input. Let’s write a function to convert the input from 2D to any higher dimensionality.

```
upgrade :: (R2 v,Conway v) => Env V2 -> Env v
upgrade = fromList . map ((zero &) . set _xy) . toList
```

The `linear` package kindly provides the `R2`^{2} class to access 2D subsets of vectors, so the conversion simply copies the 2D components of the input to the first two coordinates of the receiving higher-dimensional null vector.

And we can now solve part 1.

```
main :: IO ()
main = do
  input <- parseInput <$> readFile "sample17.in"
  print $ length $ iterate step (upgrade @V3 input) !! 6
```

Part 2 asks us to do the same thing in 4D.

Ok.

`print $ length $ iterate step (upgrade @V4 input) !! 6 `

This concludes this day’s solution. I hope you enjoyed it, and see you soon for a subsequent instalment!

Let’s start with the obligatory literate Haskell header.

```
{-# LANGUAGE RecordWildCards #-}
import Data.List
import Data.List.Split
```

As you can tell from my imports, the parsing isn’t going to be too heavyweight today. The input file is broken into paragraphs, one for fields’ definitions, one for my ticket and one for other people’s tickets. The two last sections have a header line. So here’s a reasonable problem environment definition and high-level parsing.

```
data Problem = Pb
  { fields :: [Field]
  , myTicket :: Ticket
  , otherTickets :: [Ticket]
  }

parseInput :: String -> Problem
parseInput input =
  let [_fields,[_,_myTicket],(_:_otherTickets)] =
        linesBy null (lines input)
  in Pb { fields = parseField <$> _fields
        , myTicket = parseTicket _myTicket
        , otherTickets = parseTicket <$> _otherTickets
        }
```

The `linesBy` trick may require a word. It’s really not operating on lines in the usual sense, hence the confusion. Read it as a form of `sepBy` that doesn’t include the separator in the returned strings and doesn’t require termination. I’ve already split the input by lines, so it’s aggregating sequences of lines separated by an empty one—the `null` test. In other words, paragraphs.
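If you don’t have the `split` package at hand, the behavior I’m relying on can be sketched with a base-only fold (a hypothetical `paragraphs`, ignoring the treatment of trailing separators):

```haskell
-- Split on elements matching the predicate, dropping the separators,
-- the way linesBy null groups lines into paragraphs.
paragraphs :: (a -> Bool) -> [a] -> [[a]]
paragraphs p = foldr step [[]]
  where
    step x acc@(cur:rest)
      | p x       = [] : acc        -- separator: start a new group
      | otherwise = (x:cur) : rest  -- extend the current group
    step _ [] = error "unreachable: seeded with [[]]"
```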

The rest of the parsing is simple recursive descent on `String`s with list operations.

```
type Field = (String,[Range])

parseField :: String -> Field
parseField input =
  let (name,':' : ' ' : ranges) = break (== ':') input
      [r1,"or",r2] = words ranges
  in (name,[parseRange r1,parseRange r2])

type Range = (Int,Int)

parseRange :: String -> Range
parseRange input =
  let (lo,'-' : hi) = break (== '-') input
  in (read lo,read hi)

type Ticket = [Int]

parseTicket :: String -> Ticket
parseTicket = unfoldr toComma
  where toComma "" = Nothing
        toComma x = Just (read l,drop 1 r)
          where (l,r) = break (== ',') x
```

Now I need to identify invalid tickets. A ticket is invalid if it features a value that can’t be mapped to a field. (Side note: to simplify reasoning, even though the statement defines things in terms of negatives, I’ll only use positive predicates, *i.e.* `isTicketValid` instead of `isTicketInvalid`.)

```
isTicketValid :: [Field] -> Ticket -> Bool
isTicketValid fields =
  all (\value -> any (\field -> isFieldValid field value) fields)
```

And recursive descent to further specify what it means for a value to fit in a field.

```
isFieldValid :: Field -> Int -> Bool
isFieldValid (_,ranges) value = any (`isRangeValid` value) ranges

isRangeValid :: Range -> Int -> Bool
isRangeValid (low,high) value = low <= value && value <= high
```

Let’s test it!

λ> let Pb{..} = parseInput sample

λ> filter (not . isTicketValid fields) otherTickets

[[40,4,50],[55,2,20],[38,6,12]]

It reports the same tickets as the statement. So far, so good.

The puzzle wants us to sum the offending values from the tickets. So I’ll slightly alter `isTicketValid` to return those.

```
ticketInvalidRate :: [Field] -> Ticket -> [Int]
ticketInvalidRate fields =
  filter (\value -> all (not . (`isFieldValid` value)) fields)
```

It was just a matter of replacing `all` with `filter` and inverting the inner logic (`any` to `all . not`).

Let’s verify.

λ> map (ticketInvalidRate fields) otherTickets

[[],[4],[55],[12]]

Those are indeed the offending values. I can now package this to a bona-fide function and glean my gold star.

```
ticketScanningErrrorRate :: [Field] -> [Ticket] -> Int
ticketScanningErrrorRate fields tickets =
  sum $ concatMap (ticketInvalidRate fields) tickets
```

Part 2 asks for the grunt work: actually identifying which field is which.

I started with a simple backtracking search, operating on the values in ticket order.

```
orderFields :: [Ticket] -> [Field] -> [[Field]]
orderFields tickets fields0 = go fields0 (transpose tickets)
  where
    go fields (fieldValues:fvs) = do
      validField <- filter (\f -> all (isFieldValid f) fieldValues) fields
      (validField:) <$> go (delete validField fields) fvs
    go [] [] = pure []
```

The `transpose` operation is there to convert a list of tickets to a list of lists of values in ticket order, grouping the tickets’ values by their index within a ticket.

For each value set `fieldValues`, it searches the current list of unplaced fields for one that “fits”. It then recurses down to identifying a field for the next ticket index, until all fields are placed or a dead end is encountered.
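Concretely, on three hypothetical two-field tickets:

```haskell
import Data.List (transpose)

tickets :: [[Int]]
tickets = [[7,1],[3,9],[5,2]]

-- transpose regroups values by their index within a ticket:
-- all first-field values, then all second-field values.
byFieldIndex :: [[Int]]
byFieldIndex = transpose tickets
```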

λ> let validTickets = filter (isTicketValid fields) otherTickets

λ> map fst <$> orderFields validTickets fields

[["row","class","seat"]]

It worked!

Well, it’s an exhaustive search, it’s guaranteed to work. It’s not guaranteed to work *fast*, though. Let’s implement the puzzle validation logic, so I can ask the site whether I got it right.

```
checkField :: Field -> Int -> Int
checkField (name,_) value
  | "departure " `isPrefixOf` name = value
  | otherwise = 1

checkTicket :: [Field] -> Ticket -> Int
checkTicket fields ticket = product (zipWith checkField fields ticket)
```

λ> let [orderedFields] = orderFields validTickets fields

λ> checkTicket orderedFields myTicket

1

Is that answer correct? Hard to say. Well, it obviously *is* for the sample where *no* field starts in “departure”, but what about my input data?

The repl didn’t yield an answer in a reasonable time, so I interrupted it and ran the compiled version. That one answered correctly under ten seconds. So I’m not going to need to optimize that search at all, in the end.

Here’s the end of the program for completeness.

```
main :: IO ()
main = do
  Pb{..} <- parseInput <$> readFile "day16.in"
  print $ ticketScanningErrrorRate fields otherTickets
  let validTickets = filter (null . ticketInvalidRate fields) otherTickets
      [orderedFields] = orderFields validTickets fields
  print $ checkTicket orderedFields myTicket
```

This concludes day 16’s solution. See you around for more advent Haskell!

No footnotes today. What’s *wrong* with me?

As such, it doesn’t ring a bell just yet^{1}, but the definition is clear enough. And I only need to compute 2020 terms, so there’s not much to worry about: a straightforward implementation is a valid way of proceeding for now.

AoC is in the morning for me, so the coffee hasn’t really kicked in yet. I don’t have the perfect data representation that springs to mind by reflex. Not to worry, I can use top-down compile-error-message-driven design.

Let’s write a simple transcription of the game logic using do-notation:

```
game starter = do
  forM_ starter play
  forever $ do
    lastNumberAge >>= \case
      Nothing -> play 0
      Just age -> play age
```

It complains about `LambdaCase` not being active and `forM_` not being defined. That’s easy enough to fix.

```
{-# LANGUAGE LambdaCase #-}
import Control.Monad
```

The remaining errors are:

Variable not in scope: play :: Int -> m b0

Variable not in scope: lastNumberAge :: m (Maybe t0)

Let’s start with the latter. How would I extract the previous number’s age? Let’s start simple.

```
import Control.Monad.State.Class
import qualified Data.IntMap.Strict as Map
lastNumberAge = do
  n <- gets stLastNum
  Map.lookup n <$> gets stAge
```

Ok, that was a cop out. It seems reasonable enough to maintain some state that remembers the last number played as `stLastNum`. It seems a bit less reasonable to remember every number’s age as the game goes, since I’d have to update it all the time. Better keep track of time instead, and infer the age.

```
lastNumberAge = do
  n <- gets stLastNum
  cur <- gets stTurn
  fmap (cur -) . Map.lookup n <$> gets stTurnPlayed
```

Variable not in scope: play :: Int -> m b0

Variable not in scope: stLastNum :: s0 -> Map.Key

Variable not in scope: stTurn :: s0 -> b

Variable not in scope: stTurnPlayed :: s0 -> Map.IntMap b

We’re getting a better understanding of the state’s shape.

Let’s continue with `play`. A reasonable way to “say” a number would be to simply output it to a `Writer` monad instance. The interesting part is how to maintain the state as it goes.

```
{-# LANGUAGE FlexibleContexts,NamedFieldPuns #-}
import Control.Monad.Writer.Class

play n = do
  St{stTurn,stTurnPlayed} <- get
  put St { stTurn = stTurn + 1
         , stLastNum = n
         , stTurnPlayed = Map.insert n stTurn stTurnPlayed
         }
  tell [n]
```

That looks like it could work! What I’m missing now is actually defining the `St` structure with the fields listed in the error message.

```
data St = St { stTurn :: !Int
             , stLastNum :: Int
             , stTurnPlayed :: !(Map.IntMap Int)
             }

st0 :: St
st0 = St { stTurn = 0
         , stLastNum = error "No last number at game start!"
         , stTurnPlayed = Map.empty
         }
```

All of the “variable not in scope” errors are gone. All that remains are complaints about the type being ambiguous. This can be addressed by defining a wrapper to run the game as a pure function.

```
import Control.Monad.RWS.Lazy
runGame :: [Int] -> [Int]
runGame starter = snd $ evalRWS (game starter) () st0
```

Let’s try it on the example!

λ> runGame [0,3,6] !! 2019

1

Something’s wrong. The expected answer is 436!^{2} What did I miss? Let’s trace the actual first few values.

λ> take 10 $ runGame [0,3,6]

[0,3,6,1,1,1,1,1,1,1]

The first three numbers are correct. Whew! But then it seems like I’m not only failing to output the correct next number, namely 0, but additionally the logic is getting stuck in a loop.

The reason for this is actually quite simple if you trace the algorithm. When the third and final starter number, namely 6, is played, it is added to the history map as having been played at turn 3. But when I request its age using the `lastNumberAge` function, it will compute the age difference between the turn being computed, namely 4, and the turn I just stored in the map, resulting in 1.

The next number computed makes the exact same mistake, hence the loop. Oops!
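To make that off-by-one concrete, here’s a standalone arithmetic sketch of the faulty lookup (the map contents below just mirror the trace described above; they’re not the real program state):

```haskell
import qualified Data.IntMap.Strict as Map

-- 0, 3 and 6 were played at turns 1, 2 and 3; 6 was just recorded.
history :: Map.IntMap Int
history = Map.fromList [(0,1),(3,2),(6,3)]

-- Asking for 6's age at turn 4 against its own fresh entry always
-- yields 4 - 3 = 1, whatever the earlier history says.
buggyAge :: Int
buggyAge = 4 - history Map.! 6
```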

So the root cause is that with the current algorithm, I never have the last number played’s previous two dates available at the same time. So I can’t compute a valid difference. To correct this, I’ll shift the computation of the last number’s age to when I still know its previous date.

So I’ll replace the `stLastNum` state field with `stLastNumAge`, and move the computation to the `play` function.

```
st0 :: St
st0 = St { stTurn = 0
         , stLastNumAge = error "No last number at game start!"
         , stTurnPlayed = Map.empty
         }

play n = do
  St{stTurn,stTurnPlayed} <- get
  put St { stTurn = stTurn + 1
         , stLastNumAge = (stTurn -) <$> Map.lookup n stTurnPlayed
         , stTurnPlayed = Map.insert n stTurn stTurnPlayed
         }
  tell [n]

lastNumberAge = gets stLastNumAge
```

λ> take 10 $ runGame [0,3,6]

[0,3,6,0,3,3,1,0,4,0]

Much better. I ran it with my puzzle input, got my gold star and unlocked part 2.

Part 2 was about doing the same thing 30 million times. Well, it’s not that big a number, especially for offline processing. My code is already *O*(*N* log *N*),^{3} so there’s not much point in optimizing it before I at least get an estimate of how long it will take.

As it turned out, it needed less than 45 seconds to compute when compiled. Less than two minutes interpreted.^{4} So really not worth the engineering time to reduce to linear or better. Maybe some other time.

This concludes day 15’s solution. Here’s the full code, with the additional simplifications of:

- directly considering a new number to have an age of 0 and skirting the `Maybe` wrapper.^{5}
- fusing `lastNumberAge` and `play` by having the last number age be the monadic return value of `play`. This lets me drop `stLastNumAge` from the state.

```
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE NamedFieldPuns #-}
import Control.Monad.RWS.Lazy
import qualified Data.IntMap.Strict as Map
import Data.Void
game :: [Int] -> RWS () [Int] St Void
game starter = do
  forM_ starter play
  -- That next 0 is only valid if the last
  -- starter number is unique. Mine is.
  fix (play >=>) 0

data St = St { stTurn :: !Int
             , stTurnPlayed :: !(Map.IntMap Int)
             }

st0 :: St
st0 = St { stTurn = 0, stTurnPlayed = Map.empty }

runGame :: [Int] -> [Int]
runGame starter = snd $ evalRWS (game starter) () st0

play :: Int -> RWS () [Int] St Int
play n = do
  St{stTurn,stTurnPlayed} <- get
  put St { stTurn = stTurn + 1
         , stTurnPlayed = Map.insert n stTurn stTurnPlayed
         }
  tell [n]
  pure $ stTurn - Map.findWithDefault stTurn n stTurnPlayed

main :: IO ()
main = do
  starter <- read . (\s -> "["++s++"]") <$> getContents
  print $ runGame starter !! 2019
  print $ runGame starter !! 29999999
```
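As a cross-check — my own standalone sketch, not part of the original solution — the same numbers can be generated by a direct (quadratic) list definition of the underlying sequence:

```haskell
import Data.List (elemIndex)

-- Each new term is how many turns ago the previous term was last seen,
-- or 0 if it has never been seen before.
vanEckFrom :: [Int] -> [Int]
vanEckFrom starter = starter ++ go (reverse starter)
  where
    go [] = []
    go seen@(prev:earlier) = next : go (next : seen)
      where next = maybe 0 (+ 1) (elemIndex prev earlier)
```

`take 10 (vanEckFrom [0,3,6])` gives back the `[0,3,6,0,3,3,1,0,4,0]` trace from earlier.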

~~But I’ll be sure to check out the subreddit once I’m done.~~ So it’s a Van Eck’s sequence. I’d already implemented them, but they apparently didn’t leave enough of an impression for me to recognize them on sight.↩︎

I’m sorry. The expected answer is actually 436.↩︎

This is probably false. But not so blatantly. I’m using the lazy writer monad on `[Int]`, which likely incurs some concatenation penalty. But then again not that much, considering the use site. I’ve run it using a `DList` instead and it took the same time.↩︎

I expected more difference. I’ll have to measure it more reliably.↩︎

And I don’t feel type-dirty for doing it.↩︎

This post is mostly literate Haskell, except I have no idea what I’m doing.^{1} So… let’s see how this goes!

As is customary, here’s five pages of language extensions and imports that you may freely skip over. I simply haven’t found the proper way to conceal them yet. Not that I’ve tried too hard. Yet.

```
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE TupleSections #-}
{-# LANGUAGE ViewPatterns #-}
import Control.Monad
import Control.Monad.ST
import Data.Array
import Data.Function
import Data.Maybe
import qualified Data.Map.Strict as Map
import Data.STRef
import Linear.V2
```

Ok, we’re on!

The summarized topic of the day is to take a snapshot of a cellular automaton after it converges. The automaton ruleset is the same between both parts; what changes is the topology.

Let’s start by converting the input format to something tangible. I’ll simply represent the input as a two-dimensional array of booleans, telling me whether there’s a seat there. Array dimensions are taken from the number of input lines and the width of the first one. I won’t do any involved error detection as the input is unique and most likely correct.

```
type Pos = V2 Int
type SeatMap = Array Pos Bool
-- | Safe seat map indexing.
isSeat :: SeatMap -> Pos -> Bool
isSeat seatMap pos | inRange (bounds seatMap) pos = seatMap ! pos
                   | otherwise = False

parse :: String -> SeatMap
parse (lines -> input) = listArray (V2 1 1,V2 h w) (map readSeat (concat input))
  where w = length (head input)
        h = length input
        readSeat '.' = False
        readSeat 'L' = True
```

I want iteration to be fast, as there are going to be multiple of them. So I’ll represent the environment as a mesh of `STRef`s, with nodes mapping to seats. Each node will hold the following information:

- an occupied flag. It will change from a generation to the next, so I’ll make it a reference.
- links to the other seats involved in its update. The information I need is the number of occupied ones. So I can afford to skip the neighboring nodes and link directly to the `STRef`s within.

```
data Node s = Node { nodeRef :: STRef s Bool
                   , nodeNeighborRefs :: [STRef s Bool] }
```

With this node type, my environment state is simply a collection of them. A collection that doesn’t really need any structure at all, so I’ll just use a list by default.

`type Env s = [Node s]`

With this set up, I’m ready to write the generation iteration.

This specific cellular automaton doesn’t distinguish among neighbors, so all I need to make available to the rule function is the number of active neighbors.

`type Rule = Bool -> Int -> Bool`

On this year’s automaton, I expect the stabilization to manifest as an absence of change from a generation to the next. An easy way to make that apparent in our “mutable reference mesh” model is to split the traditional update in two steps:

- serialize the changes that occur in this generation
- carry them out

This allows me to peek in-between and verify whether or not there are actually any changes remaining. It has the added advantage of not touching the rule’s input generation before all of the changes have been computed, which avoids *that* class of bug. (The counterpoint is the short-term memory consumed by the change log.)

```
type Change s = ST s ()
scanChanges :: Rule -> Env s -> ST s [Change s]
scanChanges f = mapMaybeM scanNode
  where scanNode Node{..} = do
          cur <- readSTRef nodeRef
          n <- length <$> filterM readSTRef nodeNeighborRefs
          let new = f cur n
          pure (writeSTRef nodeRef new <$ guard (new /= cur))

applyChanges :: [Change s] -> ST s ()
applyChanges = sequence_
```

I still need an actual rule to be able to run this. Let’s write one for both parts 1 and 2.

```
data Flavor = Part1 | Part2
rule :: Flavor -> Rule
rule _ False 0 = True
rule f True n | Part1 <- f, n >= 4 = False
              | Part2 <- f, n >= 5 = False
rule _ occupied _ = occupied
```
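A few standalone spot checks (the small definitions are repeated here so the snippet runs on its own) confirm the rule reads as intended:

```haskell
data Flavor = Part1 | Part2

type Rule = Bool -> Int -> Bool

rule :: Flavor -> Rule
rule _ False 0 = True
rule f True n | Part1 <- f, n >= 4 = False
              | Part2 <- f, n >= 5 = False
rule _ occupied _ = occupied

-- An empty, quiet seat fills; part 1 vacates at 4 neighbors,
-- part 2 only at 5.
spotChecks :: [Bool]
spotChecks = [ rule Part1 False 0
             , not (rule Part1 True 4)
             , rule Part2 True 4
             , not (rule Part2 True 5)
             ]
```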

Oh. There’s one thing I forgot. To run all of this, I also need a starting environment. That’s where we will observe most of the difference between parts 1 and 2.

When I first solved the problem, I deliberately didn’t use any smart, let alone sane datastructure to do so. I used the much-maligned lazy character list-based Haskell I/O as is, resulting in worse-than-suboptimal, quadratic algorithms. I didn’t care because the input size is small, and this is only performed once anyway.^{2}

It’s kind of trivial to port it to arrays and not publish crazy bad code on the interwebz, so I’ll spare your eyes and improve performance in one stone.

The structural difference between parts 1 and 2 is which ~~cells~~ seats qualify as neighbors when computing the rule. In part 1, the seat neighbors are those of the 8 neighboring places that have a seat on them. In part 2, the neighbors are the first seat encountered when radiating from the seat at hand in the 8 cardinal directions, up to one per direction.

```
-- I'm too lazy to define a “Dir” type to be mostly the same thing.
cardinals :: [Pos]
cardinals = filter (/= 0) $ range (V2 (-1) (-1),V2 1 1)

neighbors :: Flavor -> SeatMap -> Pos -> [Pos]
neighbors Part1 sm p = filter (isSeat sm) (map (p +) cardinals)
neighbors Part2 sm p = concatMap (firstSeat . trim . ray) cardinals
  where ray dir = map (\i -> p + fromIntegral i * dir) [1 :: Int ..]
        trim = takeWhile (inRange (bounds sm))
        firstSeat = take 1 . filter (sm !)
```

To construct our mesh, I’ll tie the knot over an internal/temporary map from position to `STRef`. My live solution actually used the RecursiveDo extension, but this doesn’t really win anything meaningful, so I’ll keep it simple here for further readability.

```
environment :: (Pos -> [Pos]) -> [Pos] -> ST s (Env s)
environment nbs seatPoss = do
  pos2ref <- Map.fromList <$> mapM (\p -> (p,) <$> newSTRef False) seatPoss
  pure $
    map ( \(pos,nodeRef) ->
            let nodeNeighborRefs = map (pos2ref Map.!) (nbs pos)
            in Node{..} )
        (Map.assocs pos2ref)
```

Now, what did the puzzle request as an output? The number of occupied seats. That’s rather straightforward to compute.

```
hash :: Env s -> ST s Int
hash = fmap length . filterM (\Node{nodeRef} -> readSTRef nodeRef)
```

And I can now package the complete chain!

```
solve :: Flavor -> SeatMap -> Int
solve flavor seatMap = runST $ do
  -- list of seat positions
  let seatPoss = filter (seatMap !) (indices seatMap)
  -- the two parameters: neighboring rule and threshold
  let nbs = neighbors flavor seatMap
      r = rule flavor
  env <- environment nbs seatPoss
  fix $ \loop ->
    scanChanges r env >>= \case
      [] -> hash env
      changes -> applyChanges changes *> loop
```

The rest is just boilerplate…

```
main :: IO ()
main = do
  seatMap <- parse <$> readFile "day11.in"
  print $ solve Part1 seatMap
  print $ solve Part2 seatMap
```

…and a small helper.

```
mapMaybeM :: Applicative m => (a -> m (Maybe b)) -> [a] -> m [b]
mapMaybeM f = fmap catMaybes . traverse f
```
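As a standalone usage sketch (with the helper restated so the snippet runs on its own), `mapMaybeM` is an effectful map-and-filter in one pass:

```haskell
import Data.Maybe (catMaybes)

mapMaybeM :: Applicative m => (a -> m (Maybe b)) -> [a] -> m [b]
mapMaybeM f = fmap catMaybes . traverse f

-- Keep the even numbers; IO stands in for any Applicative here.
keepEvens :: IO [Int]
keepEvens = mapMaybeM (\x -> pure (if even x then Just x else Nothing)) [1..10]
```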

To summarize: I took a bottom-up approach, building up my cellular automaton environment as a mesh of `STRef`s, over which I computed and serialized a list of changes per generation.

This concludes this day’s solution. I hope you learned something along the way!

Some time ago, I came to the realisation I had published a lot of things over the years, but none of it was too easily browsable. It’s all scattered across many logical and physical places. Time to consolidate!

I’m trying out Slick this time. I’ve maintained jekyll and hakyll sites in the past, older stuff further up, and I feel I needed a change. The “Another site generator?!” paragraph in the README resonated with me, so here we are with a new layout. That drop capital is *clearly* going to need kerning.

I’ll try and keep the authorship dates mostly at when they happened. On the other hand the feed will notify for articles as they are brought in, so it may seem a bit haphazard at first. Until I’m done gathering it all.

Enjoy!
