This paper proposes a deep neural network (DNN)-based statistical parametric speech synthesis system using an improved time-frequency trajectory excitation (ITFTE) model. The ITFTE model, which efficiently reduces the parametric redundancy of the TFTE model, improves both the perceptual quality of the vocoding process and the estimation accuracy of the training process. However, training ITFTE parameters within a hidden Markov model (HMM) framework remains problematic: the framework is inefficient at representing cross-dimensional correlations between ITFTE parameters, it produces over-smoothed outputs caused by statistical averaging, and its decision tree-based state clustering paradigm is prone to over-fitting. To alleviate these limitations, a centralized DNN replaces the decision trees of the HMM training process. Analysis of trainability confirms that the DNN training process improves model accuracy, which in turn improves the perceptual quality of the synthesized speech. Objective and subjective test results also verify that the proposed system performs better than the conventional HMM-based system.
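To make the architectural change concrete, the sketch below shows a centralized feedforward regressor that jointly maps linguistic context features to a stacked vector of spectral and excitation (ITFTE) parameters, in contrast to per-stream decision trees. This is a minimal illustration only: the layer sizes, feature dimensions, and plain NumPy gradient-descent training are assumptions for demonstration, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper):
# linguistic context features -> stacked spectral + ITFTE parameters.
D_IN, D_HID, D_OUT = 60, 128, 40

# One-hidden-layer feedforward network standing in for the
# "centralized" DNN that jointly models all output streams.
W1 = rng.normal(0.0, 0.1, (D_IN, D_HID))
b1 = np.zeros(D_HID)
W2 = rng.normal(0.0, 0.1, (D_HID, D_OUT))
b2 = np.zeros(D_OUT)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2  # joint prediction of every parameter stream

def train_step(x, y, lr=0.01):
    # One minimum-mean-squared-error backpropagation step.
    global W1, b1, W2, b2
    h = np.tanh(x @ W1 + b1)
    err = (h @ W2 + b2) - y
    dW2 = h.T @ err / len(x)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)  # tanh derivative
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    return float((err ** 2).mean())

# Synthetic frames: the MSE should decrease over training steps.
X = rng.normal(size=(256, D_IN))
Y = 0.1 * rng.normal(size=(256, D_OUT))
losses = [train_step(X, Y) for _ in range(50)]
```

Because one network predicts all output dimensions at once, cross-dimensional correlations between spectral and excitation parameters can be captured in the shared hidden layer, which is exactly what separate decision-tree-clustered streams cannot do.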