# Project Euler 60: Surprisingly difficult but solved in 70 milliseconds!

Problem 60 of Project Euler took me a solid three nights of work, totalling roughly 6 to 8 hours. And all of that just to find a single number. That's rough!

## The Problem Statement

The primes 3, 7, 109, and 673, are quite remarkable. By taking any two primes and concatenating them in any order the result will always be prime. For example, taking 7 and 109, both 7109 and 1097 are prime. The sum of these four primes, 792, represents the lowest sum for a set of four primes with this property.

Find the lowest sum for a set of five primes for which any two primes concatenate to produce another prime.

## The Process to arrive at the solution

First, I built a table by hand to look for shortcuts, and observed the following properties:

- The pairwise check is symmetric: the pairs 3,7 and 7,3 both generate the same concatenations (37, 73), so once we have checked 3,7 we never need to check 7,3.
- Any set containing 2 or 5 is automatically disqualified, since any concatenation ending in 2 or 5 is divisible by 2 or 5. Thus, we should start our search at 3.
- For a given prime, we only need to check other primes larger than itself (blue triangle in Fig 1).
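The first two observations can be sketched with a small standalone check. The helpers below (`digits`, `num_concat`, `pair_ok`, and a trial-division `is_prime`) are illustrative stand-ins, not the final implementation:

```rust
// Count the decimal digits of n (assumes n >= 1).
fn digits(mut n: u64) -> u32 {
    let mut d = 0;
    while n > 0 {
        d += 1;
        n /= 10;
    }
    d
}

// Concatenate the digits of m and n, e.g. num_concat(7, 109) == 7109.
fn num_concat(m: u64, n: u64) -> u64 {
    m * 10u64.pow(digits(n)) + n
}

// Simple trial-division primality test; fine for small illustrative numbers.
fn is_prime(x: u64) -> bool {
    if x < 2 {
        return false;
    }
    let mut d = 2;
    while d * d <= x {
        if x % d == 0 {
            return false;
        }
        d += 1;
    }
    true
}

// The symmetric pair check: only ordered pairs with m < n ever need testing.
fn pair_ok(m: u64, n: u64) -> bool {
    is_prime(num_concat(m, n)) && is_prime(num_concat(n, m))
}

fn main() {
    assert!(pair_ok(7, 109)); // 7109 and 1097 are both prime
    assert!(!pair_ok(3, 97)); // 973 = 7 * 139, so this pair fails
    println!("pair checks behave as expected");
}
```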

With these observations in hand, the problem still looked a bit difficult. And I really wanted the algorithm to be scalable, i.e. no hard-coded nested loops: I should be able to set the depth to 3, 4, 5, whatever, and the result would still be computed.

To organize this search and avoid brute force (about which I had my doubts), I came up with the following data structure: a HashMap whose key is a prime `m`, and whose value is a vector of all the larger primes `n` such that both concatenations m#n and n#m are prime (where # means concatenation). We could also use a HashSet for the values, but since prime generators already produce a sorted list, and Rust vectors preserve insertion order, we can simply binary search the vector to test whether it contains a given element.

### Traversing the Data Structure in Rust

The traversal of this HashMap is a slightly modified DFS. We start by defining the inputs: `p_index` is the hashmap, `chain` is the current chain, `matches` collects all the matches, and `idx` is a recursion pointer telling us which index in `chain` we are at.

When `idx` is 0, i.e. at the start of the search, we look at all the keys, sort them, and start a recursive search from each one.

Next, we need to check whether we have reached the desired depth, which is set in the `DEPTH` constant. Whenever a match is found, we clone the current chain and push it into the `matches` vector.

Then comes the bigger part. We assign `index` to the last element placed in the chain, dereferencing `idx` and subtracting 1. We also clone the chain to avoid mutations: thanks to Rust's amazing borrow checker, you can't mutate the chain and then pass it recursively as a mutable reference.

Now we start looping over `index`'s list. On the first level of recursion, i.e. when `idx` is 1, we don't need to test anything further, and we make a recursive call with `idx` incremented. But if `idx` is greater than 1, we need to go through all the earlier elements in the current `chain` and iteratively check that the `candidate` is also in each of their lists.

For example, say 97 and 7 are compatible (both 977 and 797 are prime), so 97 will be part of 7's list; but 97 is definitely not in 3's list, since 973 = 7 × 139.

Once a candidate passes all these tests, we call `dfs_map` recursively, extending `chain` with the candidate and incrementing `idx`.

### Generating the HashMap

In order to generate the HashMap, we use the Rust crate `primal`. It is amazingly fast, and considering others were loading precomputed primes into RAM, I think it's fair. I know how to write prime sieves quite well, but `primal` has caching, and there are only two hard things in computing, **cache invalidation** and naming things. I didn't want to handle the former.

Up next, we go through all primes from 3 up to N. For each prime `m`, we iterate over primes `n` from `m` up to `N`, and for each pair (m, n) we check whether the concatenations are prime in both orders.

The rest of the code is pretty self-explanatory: it takes items from `p_dash`, the list of ordered pairs, and generates the aforementioned HashMap.

## Getting the Results

We then let the algorithm loose on the generated HashMap and find the minimum!

I set `N_LIMIT` to 10000 and `DEPTH` to 5, and it prints every matching set with its sum, followed by the minimum.

I expected it to run in about a second at best, but I was surprised: `70 ms` is crazy fast. I ran this on an i5-9400H in a Lenovo Legion Y540 laptop.

## Complete source

```rust
use itertools::Itertools;
use primal::Sieve;
use std::cmp::min;
use std::collections::HashMap;

const N_LIMIT: usize = 10000;
const DEPTH: usize = 5;

// Concatenate the digits of m and n, e.g. num_concat(7, 109) == 7109.
// n is always prime here, so it is never a power of 10 and ceil(log10) is safe.
fn num_concat(m: usize, n: usize) -> usize {
    let k: u32 = (n as f64).log10().ceil() as u32;
    m * 10_usize.pow(k) + n
}

// Is n in the (sorted) pair-list of prime i?
fn is_in_idx(p_index: &HashMap<usize, Vec<usize>>, i: usize, n: usize) -> bool {
    p_index[&i].binary_search(&n).is_ok()
}

fn dfs_map(
    p_index: &HashMap<usize, Vec<usize>>,
    chain: &mut [usize; DEPTH],
    matches: &mut Vec<[usize; DEPTH]>,
    idx: usize,
) {
    if idx == 0 {
        // start a search rooted at every key, in ascending order
        for i in p_index.keys().sorted() {
            chain[0] = *i;
            dfs_map(p_index, chain, matches, idx + 1);
        }
    } else if idx == DEPTH {
        // the chain is complete: record it
        matches.push(*chain);
    } else {
        let index = chain[idx - 1];
        let mut cloned: [usize; DEPTH] = *chain;
        for &candidate in &p_index[&index] {
            // the candidate must pair with every prime already in the chain
            let valid = (0..idx).all(|j| is_in_idx(p_index, cloned[j], candidate));
            if valid && p_index.contains_key(&candidate) {
                cloned[idx] = candidate;
                dfs_map(p_index, &mut cloned, matches, idx + 1);
            }
        }
    }
}

fn main() {
    // the sieve must cover concatenations of two primes below N_LIMIT
    let sieve: Sieve = Sieve::new(N_LIMIT * N_LIMIT);
    let n_primes: usize = sieve.prime_pi(N_LIMIT);

    // ALGORITHM -> generate the prime index:
    // all ordered pairs of primes (m, n) with 3 <= m < n < N_LIMIT
    // where m#n is prime and n#m is prime (# means concat)
    let mut p_dash: Vec<[usize; 2]> = Vec::new();
    for p in 1..n_primes {
        // p starts at 1, so nth_prime(p + 1) starts at 3, skipping 2
        let m: usize = sieve.nth_prime(p + 1);
        for n in sieve.primes_from(m).take_while(|x| *x < N_LIMIT) {
            if m < n && sieve.is_prime(num_concat(m, n)) && sieve.is_prime(num_concat(n, m)) {
                p_dash.push([m, n]);
            }
        }
    }

    // create the index, preparing for a DFS
    let mut p_index: HashMap<usize, Vec<usize>> = HashMap::new();
    for [m, n] in p_dash {
        p_index.entry(m).or_default().push(n);
    }

    // query p_index
    let mut cur_stack: [usize; DEPTH] = [0; DEPTH];
    let mut matches: Vec<[usize; DEPTH]> = Vec::new();
    dfs_map(&p_index, &mut cur_stack, &mut matches, 0);
    println!("Search Finished");

    let mut mini: u128 = u128::MAX;
    for k in matches {
        let sum: u128 = k.iter().map(|&j| j as u128).sum();
        for j in &k {
            print!("{} ", j);
        }
        mini = min(sum, mini);
        println!(" SUM {}", sum);
    }
    println!("Minimum is {}", mini);
}
```