Consider a scenario where a group of agents, each receiving
partially informative private signals, aim to learn the true
underlying state of the world that explains their collective
observations. These agents might represent a group of individuals
interacting over a social network, a team of autonomous robots
tasked with detection, or even a network of processors trying to
collectively solve a statistical inference problem. To enable such
agents to identify the truth from a finite set of hypotheses, we
propose a distributed learning rule that differs fundamentally from
existing approaches, in that it does not employ any form of
``belief-averaging''. Instead, agents update their beliefs based on
a min-rule. Under standard assumptions on the observation model and
the network structure, we establish that each agent learns the
truth asymptotically almost surely. As our main contribution, we
prove that with probability 1, each false hypothesis is ruled out
by every agent exponentially fast, at a network-independent rate
that strictly improves upon existing rates. We then consider a
scenario where certain agents do not behave as expected, and
deliberately try to spread misinformation. Capturing such
misbehavior via the Byzantine adversary model, we develop a
computationally efficient variant of our learning rule that
provably allows every regular agent to learn the truth
exponentially fast with probability 1.
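For concreteness, one illustrative instance of such a min-rule is sketched below; the notation $\pi_{i,t}$, $\mu_{i,t}$, $\mathcal{N}_i$, and $\Theta$ is introduced here purely for exposition and is not fixed by this abstract. Suppose each agent $i$ maintains a local Bayesian belief $\pi_{i,t}$ computed from its own signals and an actual belief $\mu_{i,t}$ exchanged with its neighbors $\mathcal{N}_i$; a min-based fusion step could then take the form
\[
\mu_{i,t+1}(\theta) \;=\;
\frac{\min\Bigl\{\pi_{i,t+1}(\theta),\; \min_{j \in \mathcal{N}_i} \mu_{j,t}(\theta)\Bigr\}}
{\sum_{\theta' \in \Theta} \min\Bigl\{\pi_{i,t+1}(\theta'),\; \min_{j \in \mathcal{N}_i} \mu_{j,t}(\theta')\Bigr\}},
\]
so that, for every hypothesis $\theta$, an agent retains only the smallest belief among its own local belief and those of its neighbors before renormalizing; no convex combination (averaging) of beliefs is ever formed. Since a single adversarial neighbor could exploit such a minimum by reporting arbitrarily small values on the true hypothesis, any Byzantine-resilient variant of a rule of this type additionally needs a mechanism for screening out extreme neighbor reports before the fusion step.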