Slide 1 (Lab531, 2:02:20)
 
 
 
  • 1. Deep Neural Network for Acoustic Modeling
  • 2. Bottleneck Features from DNN
  • 3. Deep Neural Network for Acoustic Modeling
  • 4. RBM Initialization for DNN Training
  • 5. Deep Neural Network for Acoustic Modeling
  • 6. Bottleneck Features from DNN
  • 7. References for DNN
  • 8. Convolutional Neural Network (CNN)
  • 9. Convolutional Neural Network (CNN)
  • 10. Convolutional Neural Network (CNN)
  • 11. An example
  • 12. Long Short-term Memory (LSTM)
  • 13. Long Short-term Memory (LSTM)
  • 14. Long Short-term Memory (LSTM)
  • 15. Long Short-term Memory (LSTM)
  • 16. References
  • 17. Neural Network Language Modeling
  • 18. Recurrent Neural Network Language Modeling (RNNLM)
  • 19. Neural Network Language Modeling
  • 20. Recurrent Neural Network Language Modeling (RNNLM)
  • 21. RNNLM Structure
  • 22. Recurrent Neural Network Language Modeling (RNNLM)
  • 23. RNNLM Structure
  • 24. Recurrent Neural Network Language Modeling (RNNLM)
  • 25. RNNLM Structure
  • 26. Recurrent Neural Network Language Modeling (RNNLM)
  • 27. Neural Network Language Modeling
  • 28. Recurrent Neural Network Language Modeling (RNNLM)
  • 29. RNNLM Structure
  • 30. Recurrent Neural Network Language Modeling (RNNLM)
  • 31. RNNLM Structure
  • 32. Backpropagation for RNNLM
  • 33. RNNLM Structure
  • 34. Backpropagation for RNNLM
  • 35. References for RNNLM
  • 36. Word Vector Representations (Word Embedding)
  • 37. Word Vector Representations – Various Architectures
  • 38. Word Vector Representations (Word Embedding)
  • 39. Word Vector Representations – Various Architectures
  • 40. References for Word Vector Representations
  • 41. Weighted Finite State Transducer (WFST)
  • 42. WFST Operations (1/2)
  • 43. WFST Operations (2/2)
  • 44. WFST for ASR (1/6)
  • 45. WFST Operations (2/2)
  • 46. WFST Operations (1/2)
  • 47. WFST Operations (2/2)
  • 48. WFST for ASR (1/6)
  • 49. WFST for ASR (2/6)
  • 50. WFST for ASR (3/6)
  • 51. WFST for ASR (4/6)
  • 52. WFST for ASR (5/6)
  • 53. WFST for ASR (6/6)
  • 54. References
  • 55. Prosodic Features (I)
  • 56. Prosodic Features (II)
  • 57. Prosodic Features (II)
  • 58. Prosodic Features (I)
  • 59. Prosodic Features (II)
  • 60. Random Forest for Tone Recognition for Mandarin
  • 61. Prosodic Features (II)
  • 62. Random Forest for Tone Recognition for Mandarin
  • 63. Recognition Framework with Prosodic Modeling
  • 64. Random Forest for Tone Recognition for Mandarin
  • 65. Prosodic Features (II)
  • 66. Prosodic Features (I)
  • 67. Prosodic Features (II)
  • 68. Random Forest for Tone Recognition for Mandarin
  • 69. Recognition Framework with Prosodic Modeling
  • 70. References for Prosody and Random Forest
      Prosody:
      "Improved Large Vocabulary Mandarin Speech Recognition by Selectively Using Tone Information with a Two-stage Prosodic Model", Interspeech, Brisbane, Australia, Sep 2008, pp. 1137-1140.
      "Latent Prosodic Modeling (LPM) for Speech with Applications in Recognizing Spontaneous Mandarin Speech with Disfluencies", International Conference on Spoken Language Processing, Pittsburgh, U.S.A., Sep 2006.
      "Improved Features and Models for Detecting Edit Disfluencies in Transcribing Spontaneous Mandarin Speech", IEEE Transactions on Audio, Speech and Language Processing, Vol. 17, No. 7, Sep 2009, pp. 1263-1278.
      Random Forest:
      http://stat-www.berkeley.edu/users/breiman/RandomForests/cc_home.htm
      http://stat-www.berkeley.edu/users/breiman/RandomForests/cc_papers.htm
  • 71. Personalized Recognizer and Social Networks
  • 72. Personalized Recognizer and Social Networks
  • 73. Language Model Adaptation Framework
  • 74. References for Personalized Recognizer
  • 75. Recognizing Code-switched Speech
  • 76. Recognizing Code-switched Speech
  • 77. Recognizing Code-switched Speech
  • 78. Recognizing Code-switched Speech
  • 79. Recognizing Code-switched Speech
  • 80. References for Recognizing Code-switched Speech
  • 81. Speech-to-speech Translation
  • 82. Speech-to-speech Translation
  • 83. Machine Translation — Simplified Formulation
  • 84. Speech-to-speech Translation
  • 85. Machine Translation — Simplified Formulation
  • 86. Generative Models for SMT
  • 87. Machine Translation — Simplified Formulation
  • 88. Generative Models for SMT