We examine the non-Markovian nature of human mobility by exposing the inability of Markov models to capture criticality in mobility dynamics. In particular, the assumed Markovian nature of mobility was previously used to derive, via the temporal entropy, an upper bound on the predictability of human mobility. Since its inception, this bound has been widely used to validate the performance of mobility prediction models. We show that variants of recurrent neural network architectures achieve prediction accuracy that significantly surpasses this upper bound. The central objective of our work is to show that human-mobility dynamics exhibit criticality, which contributes to this discrepancy. To explain the anomaly, we examine the underlying assumption that correlations in human mobility decay exponentially, an assumption that has introduced this bias. By evaluating predictability on real-world datasets, we show that human mobility instead exhibits scale-invariant long-distance dependencies, resembling power-law decay and contradicting the original Markovian assumption. We experimentally validate that this assumption inflates the estimated mobility entropy and consequently lowers the upper bound on predictability. We demonstrate that the existing approach to entropy computation overlooks the long-distance dependencies and structural correlations present in human mobility, and we explain why recurrent neural network architectures designed to capture such dependencies surpass the previously computed upper bound on mobility predictability.
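To make the discussion concrete, the entropy-based bound referred to above is conventionally obtained in two steps: a Lempel-Ziv-style estimate of the entropy rate of a user's location sequence, followed by inverting Fano's inequality to get the maximum achievable prediction accuracy. The sketch below is our own minimal illustration of that standard pipeline, not the authors' exact implementation; the function names and the toy trace are hypothetical.

```python
import math

def lz_entropy_rate(seq):
    """Lempel-Ziv-style estimate of the entropy rate (bits/symbol).

    Lambda_i is the length of the shortest substring starting at
    position i that does not appear anywhere in seq[:i]; the
    estimator is S ~ n * log2(n) / sum(Lambda_i).
    """
    def occurs(hay, needle):
        m = len(needle)
        return any(hay[j:j + m] == needle for j in range(len(hay) - m + 1))

    n = len(seq)
    total = 0
    for i in range(n):
        k = 1
        # grow the match length until seq[i:i+k] is unseen in the prefix
        while i + k <= n and occurs(seq[:i], seq[i:i + k]):
            k += 1
        total += k
    return n * math.log2(n) / total

def max_predictability(entropy, n_locations):
    """Upper bound on prediction accuracy from Fano's inequality:
    solve S = H(p) + (1 - p) * log2(N - 1) for p by bisection
    (the right-hand side is decreasing in p on [1/N, 1]); assumes N >= 2."""
    N = n_locations
    S = min(entropy, math.log2(N))  # the bound caps at the uniform entropy

    def gap(p):
        h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        return h + (1 - p) * math.log2(N - 1) - S

    lo, hi = 1.0 / N, 1.0 - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if gap(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# toy location trace: a commuter alternating between two places
trace = list("ab" * 10)
S = lz_entropy_rate(trace)                       # ≈ 0.75 bits/symbol
P_max = max_predictability(S, len(set(trace)))   # ≈ 0.78
```

A memoryless estimator applied to a sequence with long-distance structure reports a higher entropy rate than the true one, which is precisely the mechanism by which the predictability bound can be set too low.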