In this final project, we pipeline all the tasks from the previous homeworks. This may sound straightforward at first; however, we soon notice that error propagation between stages is quite significant. Thus, we go back and improve each previous model individually. For Hw1, we try three other models: an Improved Self-implemented DNN, a DNN with the Caffe package, and a CNN with the Caffe package. For Hw2, we also have three alternative solutions: Naive Stitching, a Hidden Markov Model, and n-best RNNLM rescoring. For our last homework, we first use the Lexicon WFST tool to generate an n-best list for each sentence. We then use the RNNLM package to find the candidate with the highest probability. Finally, we convert the most likely sequence into the required output format with the script provided by the TA.
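As a minimal sketch of the n-best rescoring step, the snippet below selects the highest-scoring candidate per sentence once the RNNLM log-probabilities are available. The file name and the tab-separated "sentence id, log-probability, text" format are assumptions for illustration, not the actual interface of the WFST or RNNLM tools used in the project.

```python
# Hypothetical n-best rescoring sketch: assumes each line of the input file is
# "<sentence_id>\t<rnnlm_log_prob>\t<candidate text>" (format is an assumption).
def pick_best_hypotheses(nbest_path):
    """Return, for each sentence id, the candidate with the highest RNNLM log-probability."""
    best = {}  # sentence_id -> (log_prob, candidate_text)
    with open(nbest_path, encoding="utf-8") as f:
        for line in f:
            sent_id, logprob_str, text = line.rstrip("\n").split("\t", 2)
            logprob = float(logprob_str)
            # Keep only the most probable candidate seen so far for this sentence.
            if sent_id not in best or logprob > best[sent_id][0]:
                best[sent_id] = (logprob, text)
    return {sid: text for sid, (_, text) in best.items()}


if __name__ == "__main__":
    # Example usage with a hypothetical file produced by the rescoring step.
    for sid, hyp in pick_best_hypotheses("nbest_with_rnnlm_scores.txt").items():
        print(sid, hyp)
```

The selected hypotheses would then be passed to the TA-provided script to produce the required output format.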