Normalizing output of Viterbi algorithm

by Jan Vainer   Last Updated May 15, 2019, 18:19

The Viterbi algorithm can be used to solve problems of the following kind in belief networks:

$$\operatorname{argmax}_{x_{1:t}} P(x_{1:t} \mid e_{1:t})$$

where $e_{1:t} \in E^t$ are evidence variables and $x_{1:t} \in S^t$ are hidden states. The formula can be rewritten in the following way.

\begin{align}
\operatorname{argmax}_{x_{1:t}} P(x_{1:t} \mid e_{1:t}) &= \operatorname{argmax}_{x_{1:t}} \alpha \cdot P(e_{1:t} \mid x_{1:t})\, P(x_{1:t})\\
&= \operatorname{argmax}_{x_{1:t}} \prod_{i=1}^t P(e_i \mid x_i)\, P(x_i \mid x_{i-1})
\end{align}

The first equality follows from Bayes' rule, the second from conditional independence and the fact that the normalization constant $\alpha$ does not affect the argmax. The final expression can be computed efficiently by dynamic programming.
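For concreteness, here is a minimal NumPy sketch of that dynamic program. The model is hypothetical and not from the original post: `trans[i, j]` stands for $P(x_t = j \mid x_{t-1} = i)$, `emit[j, e]` for $P(e_t = e \mid x_t = j)$, and `init[j]` for $P(x_1 = j)$.

```python
import numpy as np

def viterbi(init, trans, emit, evidence):
    """Return the most likely state path and its UNNORMALIZED score,
    i.e. max over x_{1:t} of P(x_{1:t}, e_{1:t})."""
    n_states = trans.shape[0]
    T = len(evidence)
    # delta[t, j] = max over x_{1:t-1} of P(x_{1:t-1}, x_t = j, e_{1:t})
    delta = np.zeros((T, n_states))
    back = np.zeros((T, n_states), dtype=int)  # argmax back-pointers
    delta[0] = init * emit[:, evidence[0]]
    for t in range(1, T):
        # scores[i, j] = delta[t-1, i] * P(x_t = j | x_{t-1} = i)
        scores = delta[t - 1][:, None] * trans
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * emit[:, evidence[t]]
    # Follow back-pointers from the best final state.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    path.reverse()
    return path, delta[-1].max()
```

Note that the returned score is the joint $P(x_{1:t}, e_{1:t})$, not the conditional $P(x_{1:t} \mid e_{1:t})$, which is exactly the gap the question below is about.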

Question: How can we compute $\max_{x_{1:t}} P(x_{1:t} \mid e_{1:t})$ if we do not know the normalizing constant? We can find the optimal path, but can we also recover its normalized probability? With the forward algorithm, for example, we can compute the probability of each possible final state while keeping polynomial complexity. Is anything similar possible for the problem at hand? (It seems like it would be exponential, but maybe there is some clever trick.)
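For reference, here is a minimal sketch of the forward pass mentioned above, under the same hypothetical model matrices as the Viterbi sketch. Summing the final forward message gives the evidence likelihood $P(e_{1:t})$ in $O(t \cdot |S|^2)$ time:

```python
import numpy as np

def forward_likelihood(init, trans, emit, evidence):
    """Return P(e_{1:t}) computed by the forward algorithm."""
    # alpha[j] = P(x_t = j, e_{1:t}), updated one time step at a time
    alpha = init * emit[:, evidence[0]]
    for e in evidence[1:]:
        alpha = (alpha @ trans) * emit[:, e]
    return alpha.sum()
```

Since $\max_{x_{1:t}} P(x_{1:t} \mid e_{1:t}) = \max_{x_{1:t}} P(x_{1:t}, e_{1:t}) / P(e_{1:t})$, this sum is exactly the reciprocal of the normalization constant $\alpha$ from the derivation above.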


