Machine Learning | mathematicalmonk

mathematicalmonk

Videos about math, at the graduate level or upper-level undergraduate. Tools I use to produce these videos:
  - Wacom Bamboo Fun tablet - medium size (~$150 pen tablet)
  - SmoothDraw 3.2.7 (free drawing program)
  - HyperCam 2 (free screen capture program)
  - Sennheiser ME 3-ew (~$125 headset microphone)

Course Details

  • Course lessons: 160
  • Course duration: 35h 13m
  • Number of students: 1
  • Language: English
  • No prerequisites required
  • (1)
  • Study the course for free

Course Lessons

  1. 1 | (ML 1.1) Machine learning - overview and applications 00:08:56
  2. 2 | (ML 1.2) What is supervised learning? 00:10:26
  3. 3 | (ML 1.3) What is unsupervised learning? 00:08:58
  4. 4 | (ML 1.4) Variations on supervised and unsupervised 00:12:43
  5. 5 | (ML 1.5) Generative vs discriminative models 00:11:00
  6. 6 | (ML 1.6) k-Nearest Neighbor classification algorithm 00:14:19
  7. 7 | (ML 2.1) Classification trees (CART) 00:10:16
  8. 8 | (ML 2.2) Regression trees (CART) 00:09:47
  9. 9 | (ML 2.3) Growing a regression tree (CART) 00:13:44
  10. 10 | (ML 2.4) Growing a classification tree (CART) 00:14:06
  11. 11 | (ML 2.5) Generalizations for trees (CART) 00:13:20
  12. 12 | (ML 2.6) Bootstrap aggregation (Bagging) 00:14:57
  13. 13 | (ML 2.7) Bagging for classification 00:14:56
  14. 14 | (ML 2.8) Random forests 00:09:01
  15. 15 | (ML 3.1) Decision theory (Basic Framework) 00:10:55
  16. 16 | (ML 3.2) Minimizing conditional expected loss 00:11:18
  17. 17 | (ML 3.3) Choosing f to minimize expected loss 00:13:49
  18. 18 | (ML 3.4) Square loss 00:11:44
  19. 19 | (ML 3.5) The Big Picture (part 1) 00:12:25
  20. 20 | (ML 3.6) The Big Picture (part 2) 00:09:58
  21. 21 | (ML 3.7) The Big Picture (part 3) 00:10:54
  22. 22 | (ML 4.1) Maximum Likelihood Estimation (MLE) (part 1) 00:14:47
  23. 23 | (ML 4.2) Maximum Likelihood Estimation (MLE) (part 2) 00:06:56
  24. 24 | (ML 4.3) MLE for univariate Gaussian mean 00:14:31
  25. 25 | (ML 4.4) MLE for a PMF on a finite set (part 1) 00:13:22
  26. 26 | (ML 4.5) MLE for a PMF on a finite set (part 2) 00:11:14
  27. 27 | (ML 5.1) Exponential families (part 1) 00:14:52
  28. 28 | (ML 5.2) Exponential families (part 2) 00:13:35
  29. 29 | (ML 5.3) MLE for an exponential family (part 1) 00:14:55
  30. 30 | (ML 5.4) MLE for an exponential family (part 2) 00:14:42
  31. 31 | (ML 6.1) Maximum a posteriori (MAP) estimation 00:13:31
  32. 32 | (ML 6.2) MAP for univariate Gaussian mean 00:14:54
  33. 33 | (ML 6.3) Interpretation of MAP as convex combination 00:05:54
  34. 34 | (ML 7.1) Bayesian inference - A simple example 00:14:53
  35. 35 | (ML 7.2) Aspects of Bayesian inference 00:14:37
  36. 36 | (ML 7.3) Proportionality 00:05:09
  37. 37 | (ML 7.4) Conjugate priors 00:04:59
  38. 38 | (ML 7.5) Beta-Bernoulli model (part 1) 00:14:26
  39. 39 | (ML 7.6) Beta-Bernoulli model (part 2) 00:13:07
  40. 40 | (ML 7.7.A1) Dirichlet distribution 00:14:32
  41. 41 | (ML 7.7.A2) Expectation of a Dirichlet random variable 00:09:28
  42. 42 | (ML 7.7) Dirichlet-Categorical model (part 1) 00:14:54
  43. 43 | (ML 7.8) Dirichlet-Categorical model (part 2) 00:06:38
  44. 44 | (ML 7.9) Posterior distribution for univariate Gaussian (part 1) 00:14:26
  45. 45 | (ML 7.10) Posterior distribution for univariate Gaussian (part 2) 00:14:51
  46. 46 | (ML 8.1) Naive Bayes classification 00:14:53
  47. 47 | (ML 8.2) More about Naive Bayes 00:14:43
  48. 48 | (ML 8.3) Bayesian Naive Bayes (part 1) 00:14:11
  49. 49 | (ML 8.4) Bayesian Naive Bayes (part 2) 00:14:46
  50. 50 | (ML 8.5) Bayesian Naive Bayes (part 3) 00:14:53
  51. 51 | (ML 8.6) Bayesian Naive Bayes (part 4) 00:12:05
  52. 52 | (ML 9.1) Linear regression - Nonlinearity via basis functions 00:14:56
  53. 53 | (ML 9.2) Linear regression - Definition & Motivation 00:14:56
  54. 54 | (ML 9.3) Choosing f under linear regression 00:14:25
  55. 55 | (ML 9.4) MLE for linear regression (part 1) 00:14:25
  56. 56 | (ML 9.5) MLE for linear regression (part 2) 00:14:32
  57. 57 | (ML 9.6) MLE for linear regression (part 3) 00:14:52
  58. 58 | (ML 9.7) Basis functions MLE 00:06:08
  59. 59 | (ML 10.1) Bayesian Linear Regression 00:11:45
  60. 60 | (ML 10.2) Posterior for linear regression (part 1) 00:14:53
  61. 61 | (ML 10.3) Posterior for linear regression (part 2) 00:14:55
  62. 62 | (ML 10.4) Predictive distribution for linear regression (part 1) 00:14:55
  63. 63 | (ML 10.5) Predictive distribution for linear regression (part 2) 00:14:52
  64. 64 | (ML 10.6) Predictive distribution for linear regression (part 3) 00:14:41
  65. 65 | (ML 10.7) Predictive distribution for linear regression (part 4) 00:13:49
  66. 66 | (ML 11.1) Estimators 00:12:33
  67. 67 | (ML 11.2) Decision theory terminology in different contexts 00:11:17
  68. 68 | (ML 11.3) Frequentist risk, Bayesian expected loss, and Bayes risk 00:14:05
  69. 69 | (ML 11.4) Choosing a decision rule - Bayesian and frequentist 00:10:06
  70. 70 | (ML 11.5) Bias-Variance decomposition 00:13:34
  71. 71 | (ML 11.6) Inadmissibility 00:12:30
  72. 72 | (ML 11.7) A fun exercise on inadmissibility 00:05:05
  73. 73 | (ML 11.8) Bayesian decision theory 00:14:53
  74. 74 | (ML 12.1) Model selection - introduction and examples 00:14:23
  75. 75 | (ML 12.2) Bias-variance in model selection 00:12:35
  76. 76 | (ML 12.3) Model complexity parameters 00:04:47
  77. 77 | (ML 12.4) Bayesian model selection 00:13:17
  78. 78 | (ML 12.5) Cross-validation (part 1) 00:14:29
  79. 79 | (ML 12.6) Cross-validation (part 2) 00:14:12
  80. 80 | (ML 12.7) Cross-validation (part 3) 00:14:40
  81. 81 | (ML 12.8) Other approaches to model selection 00:07:36
  82. 82 | (ML 13.1) Directed graphical models - introductory examples (part 1) 00:14:53
  83. 83 | (ML 13.2) Directed graphical models - introductory examples (part 2) 00:07:23
  84. 84 | (ML 13.3) Directed graphical models - formalism (part 1) 00:14:50
  85. 85 | (ML 13.4) Directed graphical models - formalism (part 2) 00:12:24
  86. 86 | (ML 13.5) Generative process specification 00:10:59
  87. 87 | (ML 13.6) Graphical model for Bayesian linear regression 00:14:46
  88. 88 | (ML 13.7) Graphical model for Bayesian Naive Bayes 00:14:46
  89. 89 | (ML 13.8) Conditional independence in graphical models - basic examples (part 1) 00:14:13
  90. 90 | (ML 13.9) Conditional independence in graphical models - basic examples (part 2) 00:14:25
  91. 91 | (ML 13.10) D-separation (part 1) 00:13:39
  92. 92 | (ML 13.11) D-separation (part 2) 00:09:25
  93. 93 | (ML 13.12) How to use D-separation - illustrative examples (part 1) 00:14:31
  94. 94 | (ML 13.13) How to use D-separation - illustrative examples (part 2) 00:13:36
  95. 95 | (ML 14.1) Markov models - motivating examples 00:13:29
  96. 96 | (ML 14.2) Markov chains (discrete-time) (part 1) 00:14:43
  97. 97 | (ML 14.3) Markov chains (discrete-time) (part 2) 00:08:06
  98. 98 | (ML 14.4) Hidden Markov models (HMMs) (part 1) 00:14:30
  99. 99 | (ML 14.5) Hidden Markov models (HMMs) (part 2) 00:12:35
  100. 100 | (ML 14.6) Forward-Backward algorithm for HMMs 00:14:56
  101. 101 | (ML 14.7) Forward algorithm (part 1) 00:14:51
  102. 102 | (ML 14.8) Forward algorithm (part 2) 00:14:06
  103. 103 | (ML 14.9) Backward algorithm 00:14:47
  104. 104 | (ML 14.10) Underflow and the log-sum-exp trick 00:14:32
  105. 105 | (ML 14.11) Viterbi algorithm (part 1) 00:14:33
  106. 106 | (ML 14.12) Viterbi algorithm (part 2) 00:13:56
  107. 107 | (ML 15.1) Newton's method (for optimization) - intuition 00:11:16
  108. 108 | (ML 15.2) Newton's method (for optimization) in multiple dimensions 00:14:46
  109. 109 | (ML 15.3) Logistic regression (binary) - intuition 00:14:53
  110. 110 | (ML 15.4) Logistic regression (binary) - formalism 00:11:04
  111. 111 | (ML 15.5) Logistic regression (binary) - computing the gradient 00:14:54
  112. 112 | (ML 15.6) Logistic regression (binary) - computing the Hessian 00:13:56
  113. 113 | (ML 15.7) Logistic regression (binary) - applying Newton's method 00:14:30
  114. 114 | (ML 16.1) K-means clustering (part 1) 00:13:33
  115. 115 | (ML 16.2) K-means clustering (part 2) 00:14:17
  116. 116 | (ML 16.3) Expectation-Maximization (EM) algorithm 00:14:37
  117. 117 | (ML 16.4) Why EM makes sense (part 1) 00:14:26
  118. 118 | (ML 16.5) Why EM makes sense (part 2) 00:14:44
  119. 119 | (ML 16.6) Gaussian mixture model (Mixture of Gaussians) 00:14:51
  120. 120 | (ML 16.7) EM for the Gaussian mixture model (part 1) 00:14:51
  121. 121 | (ML 16.8) EM for the Gaussian mixture model (part 2) 00:14:49
  122. 122 | (ML 16.9) EM for the Gaussian mixture model (part 3) 00:14:54
  123. 123 | (ML 16.10) EM for the Gaussian mixture model (part 4) 00:14:56
  124. 124 | (ML 16.11) The likelihood is nondecreasing under EM (part 1) 00:14:46
  125. 125 | (ML 16.12) The likelihood is nondecreasing under EM (part 2) 00:14:45
  126. 126 | (ML 16.13) EM for MAP estimation 00:14:42
  127. 127 | (ML 17.1) Sampling methods - why sampling, pros and cons 00:12:42
  128. 128 | (ML 17.2) Monte Carlo methods - A little history 00:09:09
  129. 129 | (ML 17.3) Monte Carlo approximation 00:14:51
  130. 130 | (ML 17.4) Examples of Monte Carlo approximation 00:14:45
  131. 131 | (ML 17.5) Importance sampling - introduction 00:13:43
  132. 132 | (ML 17.6) Importance sampling - intuition 00:10:41
  133. 133 | (ML 17.7) Importance sampling without normalization constants 00:11:58
  134. 134 | (ML 17.8) Smirnov transform (Inverse transform sampling) - invertible case 00:14:49
  135. 135 | (ML 17.9) Smirnov transform (Inverse transform sampling) - general case 00:14:13
  136. 136 | (ML 17.10) Sampling an exponential using Smirnov 00:09:09
  137. 137 | (ML 17.11) Rejection sampling - uniform case 00:12:50
  138. 138 | (ML 17.12) Rejection sampling - non-uniform case 00:14:54
  139. 139 | (ML 17.13) Proof of rejection sampling (part 1) 00:14:41
  140. 140 | (ML 17.14) Proof of rejection sampling (part 2) 00:10:33
  141. 141 | (ML 18.1) Markov chain Monte Carlo (MCMC) introduction 00:17:04
  142. 142 | (ML 18.2) Ergodic theorem for Markov chains 00:14:48
  143. 143 | (ML 18.3) Stationary distributions, Irreducibility, and Aperiodicity 00:14:53
  144. 144 | (ML 18.4) Examples of Markov chains with various properties (part 1) 00:12:46
  145. 145 | (ML 18.5) Examples of Markov chains with various properties (part 2) 00:14:58
  146. 146 | (ML 18.6) Detailed balance (a.k.a. Reversibility) 00:14:43
  147. 147 | (ML 18.7) Metropolis algorithm for MCMC 00:16:54
  148. 148 | (ML 18.8) Correctness of the Metropolis algorithm 00:19:26
  149. 149 | (ML 18.9) Example illustrating the Metropolis algorithm 00:22:53
  150. 150 | (ML 19.1) Gaussian processes - definition and first examples 00:12:06
  151. 151 | (ML 19.2) Existence of Gaussian processes 00:06:18
  152. 152 | (ML 19.3) Examples of Gaussian processes (part 1) 00:11:47
  153. 153 | (ML 19.4) Examples of Gaussian processes (part 2) 00:13:45
  154. 154 | (ML 19.5) Positive semidefinite kernels (Covariance functions) 00:14:53
  155. 155 | (ML 19.6) Inner products and PSD kernels 00:14:29
  156. 156 | (ML 19.7) Operations preserving positive semidefinite kernels 00:09:40
  157. 157 | (ML 19.8) Proof that a product of PSD kernels is a PSD kernel 00:16:46
  158. 158 | (ML 19.9) GP regression - introduction 00:19:45
  159. 159 | (ML 19.10) GP regression - the key step 00:14:30
  160. 160 | (ML 19.11) GP regression - model and inference 00:19:27
Student Reviews

(5 out of 5)

    1 review
    5 stars: 100%
    4 stars: 0%
    3 stars: 0%
    2 stars: 0%
    1 star: 0%
    Youtube

    29-07-2024