| Title | Fundamentals of Deep Learning (English-language reprint edition) |
| Category | Industrial Technology -> Automation Technology |
| List price | NT$627-908 |
| Sale price | NT$392-568 |
| Medium | book |
| ISBN | 9787564175177 |
| Edition | Genuine new electronic PDF |

About This Book
- Publisher: Southeast University Press
- ISBN: 9787564175177
- Author: Nikhil Buduma (US)
- Pages: 283
- Publication date: 2018-02-01
- Printing date: 2018-02-01
- Binding: Paperback
- Format: 16開 (16K)
- Edition: 1
- Printing: 1
- Word count: 367,000
Table of Contents

Preface

1. The Neural Network
- Building Intelligent Machines
- The Limits of Traditional Computer Programs
- The Mechanics of Machine Learning
- The Neuron
- Expressing Linear Perceptrons as Neurons
- Feed-Forward Neural Networks
- Linear Neurons and Their Limitations
- Sigmoid, Tanh, and ReLU Neurons
- Softmax Output Layers
- Looking Forward

2. Training Feed-Forward Neural Networks
- The Fast-Food Problem
- Gradient Descent
- The Delta Rule and Learning Rates
- Gradient Descent with Sigmoidal Neurons
- The Backpropagation Algorithm
- Stochastic and Minibatch Gradient Descent
- Test Sets, Validation Sets, and Overfitting
- Preventing Overfitting in Deep Neural Networks
- Summary

3. Implementing Neural Networks in TensorFlow
- What Is TensorFlow?
- How Does TensorFlow Compare to Alternatives?
- Installing TensorFlow
- Creating and Manipulating TensorFlow Variables
- TensorFlow Operations
- Placeholder Tensors
- Sessions in TensorFlow
- Navigating Variable Scopes and Sharing Variables
- Managing Models over the CPU and GPU
- Specifying the Logistic Regression Model in TensorFlow
- Logging and Training the Logistic Regression Model
- Leveraging TensorBoard to Visualize Computation Graphs and Learning
- Building a Multilayer Model for MNIST in TensorFlow
- Summary

4. Beyond Gradient Descent
- The Challenges with Gradient Descent
- Local Minima in the Error Surfaces of Deep Networks
- Model Identifiability
- How Pesky Are Spurious Local Minima in Deep Networks?
- Flat Regions in the Error Surface
- When the Gradient Points in the Wrong Direction
- Momentum-Based Optimization
- A Brief View of Second-Order Methods
- Learning Rate Adaptation
- AdaGrad: Accumulating Historical Gradients
- RMSProp: Exponentially Weighted Moving Average of Gradients
- Adam: Combining Momentum and RMSProp