1. Deep Neural Network for Acoustic Modeling (00:02)
2. Bottleneck Features from DNN (00:01)
3. Deep Neural Network for Acoustic Modeling (00:11)
4. RBM Initialization for DNN Training (00:47)
5. Deep Neural Network for Acoustic Modeling (00:25)
6. Bottleneck Features from DNN (00:01)
7. References for DNN (00:13)
8. Convolutional Neural Network (CNN) (00:13)
9. Convolutional Neural Network (CNN) (00:01)
10. Convolutional Neural Network (CNN) (00:02)
11. An example (00:24)
12. Long Short-term Memory (LSTM) (00:23)
13. Long Short-term Memory (LSTM) (00:19)
14. Long Short-term Memory (LSTM) (00:07)
15. Long Short-term Memory (LSTM) (00:05)
16. References (05:34)
17. Neural Network Language Modeling (00:09)
18. Recurrent Neural Network Language Modeling (RNNLM) (00:25)
19. Neural Network Language Modeling (00:50)
20. Recurrent Neural Network Language Modeling (RNNLM) (00:09)
21. RNNLM Structure (00:06)
22. Recurrent Neural Network Language Modeling (RNNLM) (02:24)
23. RNNLM Structure (00:15)
24. Recurrent Neural Network Language Modeling (RNNLM) (00:36)
25. RNNLM Structure (00:03)
26. Recurrent Neural Network Language Modeling (RNNLM) (00:10)
27. Neural Network Language Modeling (00:09)
28. Recurrent Neural Network Language Modeling (RNNLM) (00:06)
29. RNNLM Structure (00:11)
30. Recurrent Neural Network Language Modeling (RNNLM) (00:46)
31. RNNLM Structure (00:15)
32. Back Propagation for RNNLM (00:10)
33. RNNLM Structure (00:46)
34. Back Propagation for RNNLM (00:32)
35. References for RNNLM (08:32)
36. Word Vector Representations (Word Embedding) (00:24)
37. Word Vector Representations – Various Architectures (00:06)
38. Word Vector Representations (Word Embedding) (01:42)
39. Word Vector Representations – Various Architectures (00:24)
40. References for Word Vector Representations (06:50)
41. Weighted Finite State Transducer (WFST) (03:30)
42. WFST Operations (1/2) (02:19)
43. WFST Operations (2/2) (00:52)
44. WFST for ASR (1/6) (00:03)
45. WFST Operations (2/2) (00:03)
46. WFST Operations (1/2) (00:05)
47. WFST Operations (2/2) (00:41)
48. WFST for ASR (1/6) (03:02)
49. WFST for ASR (2/6) (01:02)
50. WFST for ASR (3/6) (00:44)
51. WFST for ASR (4/6) (01:48)
52. WFST for ASR (5/6) (03:09)
53. WFST for ASR (6/6) (00:52)
54. References (07:10)
55. Prosodic Features (I) (04:53)
56. Prosodic Features (II) (00:04)
57. Prosodic Features (II) (00:10)
58. Prosodic Features (I) (01:21)
59. Prosodic Features (II) (01:34)
60. Random Forest for Tone Recognition for Mandarin (00:09)
61. Prosodic Features (II) (00:33)
62. Random Forest for Tone Recognition for Mandarin (04:59)
63. Recognition Framework with Prosodic Modeling (00:01)
64. Random Forest for Tone Recognition for Mandarin (00:09)
65. Prosodic Features (II) (00:05)
66. Prosodic Features (I) (00:02)
67. Prosodic Features (II) (00:01)
68. Random Forest for Tone Recognition for Mandarin (00:14)
69. Recognition Framework with Prosodic Modeling (00:17)
70. Prosody (05:03)
    References on prosody:
    - "Improved Large Vocabulary Mandarin Speech Recognition by Selectively Using Tone Information with a Two-stage Prosodic Model", Interspeech, Brisbane, Australia, Sep 2008, pp. 1137-1140.
    - "Latent Prosodic Modeling (LPM) for Speech with Applications in Recognizing Spontaneous Mandarin Speech with Disfluencies", International Conference on Spoken Language Processing, Pittsburgh, U.S.A., Sep 2006.
    - "Improved Features and Models for Detecting Edit Disfluencies in Transcribing Spontaneous Mandarin Speech", IEEE Transactions on Audio, Speech and Language Processing, Vol. 17, No. 7, Sep 2009, pp. 1263-1278.
    References on random forests:
    - http://stat-www.berkeley.edu/users/breiman/RandomForests/cc_home.htm
    - http://stat-www.berkeley.edu/users/breiman/RandomForests/cc_papers.htm
71. Personalized Recognizer and Social Networks (02:10)
72. Personalized Recognizer and Social Networks (01:20)
73. Language Model Adaptation Framework (00:11)
74. References for Personalized Recognizer (06:59)
75. Recognizing Code-switched Speech (04:11)
76. Recognizing Code-switched Speech (00:18)
77. Recognizing Code-switched Speech (00:12)
78. Recognizing Code-switched Speech (02:06)
79. Recognizing Code-switched Speech (00:48)
80. References for Recognizing Code-switched Speech (07:25)
81. Speech-to-speech Translation (07:56)
82. Speech-to-speech Translation (00:47)
83. Machine Translation — Simplified Formulation (00:09)
84. Speech-to-speech Translation (04:29)
85. Machine Translation — Simplified Formulation (00:41)
86. Generative Models for SMT (00:11)
87. Machine Translation — Simplified Formulation (03:09)
88. Generative Models for SMT