# Find the minimum of an array with a non-increasing and a non-decreasing part

# 1 Problem

Consider an array $a$ of $n$ entries from a totally ordered set. There exists an index $j$ such that $a[i]\geq a[i+1]$ for all $i<j$, and $a[i]\leq a[i+1]$ for all $i\geq j$. In other words, $a$ is non-increasing up to position $j$ and non-decreasing afterwards, so $a[j]$ is a minimum. For example, $a=(9,7,5,3,4,8)$ has $j=4$.

How fast can we find such a $j$? The worst case is $O(n)$. When all the elements are equal, an adversary could hide a single smaller element at any position we have not inspected, so we must traverse the entire array.

If $m$ is the maximum number of times any value occurs in the array, then there is an algorithm that solves this problem in $O(m+\log n)$ time.

# 2 Algorithm

The idea is to use ternary search, which works well when all values are distinct, and to fall back to linear search once we detect that many elements are repeated.

First, we present a lemma.

If there exist indices $i<j<k$ such that $a[i]=a[j]=a[k]$, then at least $\min(j-i,\,k-j)$ positions in the array hold the same value as $a[i]$. Indeed, $j$ lies either in the non-increasing or the non-decreasing part of the array: in the first case $a[i]\geq a[i+1]\geq\dots\geq a[j]$ together with $a[i]=a[j]$ forces $a[i..j]$ to be constant, and in the second case $a[j..k]$ is constant by the symmetric argument.

*(Figure illustrating the lemma omitted. Image credit: Vanessa Li.)*

Suppose we want to find the minimum from index $l$ to $r$ (not including $r$), with $r-l>3$. Let $m_{1}=l+\lfloor\frac{r-l}{3}\rfloor$ and $m_{2}=r-\lfloor\frac{r-l}{3}\rfloor$, so that $l<m_{1}<m_{2}<r$.

- If $a[m_{1}]<a[m_{2}]$, then we know the minimum is in $a[l..m_{2}]$. Recurse.
- If $a[m_{1}]>a[m_{2}]$, then we know the minimum is in $a[m_{1}..r]$. Recurse.
- Otherwise $a[m_{1}]=a[m_{2}]$. If $a[l]=a[m_{1}]$ or $a[r-1]=a[m_{1}]$, then by the lemma at least $1/3$ of the values between positions $l$ and $r$ equal $a[m_{1}]$. We also test whether $a[\lfloor\frac{m_{1}+m_{2}}{2}\rfloor]=a[m_{1}]$; if so, at least $1/6$ of the values between positions $l$ and $r$ equal $a[m_{1}]$. Since there are so many repeated values, we just do a linear search for the minimum.
- Finally, if all of the above fail, some position strictly between $m_{1}$ and $m_{2}$ must hold a value different from $a[m_{1}]$. Since the array is non-increasing then non-decreasing and $a[m_{1}]=a[m_{2}]$, that value must be smaller, and no value smaller than $a[m_{1}]$ can exist outside $a[m_{1}..m_{2}]$. Recurse on $a[m_{1}..m_{2}]$.

Here is the code in Go:
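A self-contained sketch following the case analysis above; the function names `minIndex` and `linearMin` are illustrative, and the code returns an index of the minimum value, which is always a valid $j$.

```go
package main

import "fmt"

// minIndex returns an index of a minimum element of a, assuming a is
// non-increasing up to some index and non-decreasing from there on.
// Any index of the minimum value is a valid answer.
func minIndex(a []int) int {
	l, r := 0, len(a) // current search range a[l..r), r exclusive
	for r-l > 3 {
		t := (r - l) / 3
		m1, m2 := l+t, r-t // l < m1 < m2 < r
		switch {
		case a[m1] < a[m2]:
			// The minimum lies in a[l..m2).
			r = m2
		case a[m1] > a[m2]:
			// The minimum lies in a[m1..r).
			l = m1
		case a[l] == a[m1] || a[r-1] == a[m1] || a[(m1+m2)/2] == a[m1]:
			// By the lemma, a constant fraction of a[l..r) equals
			// a[m1], so a linear search is now affordable.
			return linearMin(a, l, r)
		default:
			// Some value strictly between m1 and m2 is smaller than
			// a[m1] = a[m2]; the minimum lies in a[m1..m2).
			l, r = m1, m2
		}
	}
	return linearMin(a, l, r)
}

// linearMin returns the index of the minimum of a[l..r) by a linear scan.
func linearMin(a []int, l, r int) int {
	best := l
	for i := l + 1; i < r; i++ {
		if a[i] < a[best] {
			best = i
		}
	}
	return best
}

func main() {
	fmt.Println(minIndex([]int{9, 7, 5, 3, 4, 8}))          // 3
	fmt.Println(minIndex([]int{5, 5, 5, 1, 5, 5, 5, 5, 5})) // 3
}
```

The loop maintains the invariant that $a[l..r)$ contains an index of the global minimum; each iteration either shrinks the range by a constant factor or certifies enough repeats to justify a linear search.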

# 3 Complexity

Let $T(m,n)$ be the time to run the algorithm on an array of length $n$ in which the most frequent value occurs $m$ times.

$$T(m,n)=\begin{cases} T\!\left(m,\tfrac{2}{3}n\right)+O(1) & \text{if a strict inequality holds between } a[m_{1}] \text{ and } a[m_{2}],\\ O(n) & \text{if } n\leq 6m,\\ T\!\left(m,\tfrac{1}{3}n\right)+O(1) & \text{otherwise.}\end{cases}$$

For $n$ larger than $6m$, the algorithm makes $O(\log\frac{n}{m})$ recursive calls, each costing $O(1)$ time. Once it reaches small $n$, it spends $O(n)=O(m)$ time on a linear search. The algorithm spends a total of $O(m+\log\frac{n}{m})=O(m+\log n)$ time.
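To see where the call count comes from, note that every recursive case shrinks the range by a factor of at least $3/2$, so the recursion depth before the range drops below $6m$ is logarithmic:

```latex
% After k recursive calls the range has length at most (2/3)^k n.
% The recursion stops once (2/3)^k n \le 6m, i.e. after
%   k = \lceil \log_{3/2} \tfrac{n}{6m} \rceil
% calls, each costing O(1), followed by one O(6m) linear search.
T(m,n) \le c\,\log_{3/2}\tfrac{n}{6m} + O(6m)
       = O\!\left(m + \log\tfrac{n}{m}\right)
       = O(m + \log n).
```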

# 4 Notes

Brosef Stalin offered an alternative logic that yields cleaner code.

Can we generalize this to arrays that first increase, then decrease, then increase? One can show $O(n)$ is the best possible by considering an array with $a[i]=i$ for all $i\neq j$ and $a[j]=-1$: there is no way to find $j$ without looking at every position.