MARC View
LDR05455cmm u2200553Ki 4500
001000000316193
003OCoLC
00520230525175944
006m d
007cr cnu---unuuu
008190413s2019 enk o 000 0 eng d
020 ▼a 1789533449
020 ▼a 9781789533446 ▼q (electronic bk.)
035 ▼a (N$T)2094787
035 ▼a (OCoLC)1096525137
037 ▼a 17D228CA-B9A5-47F9-8400-6F06CA49CCCE ▼b OverDrive, Inc. ▼n http://www.overdrive.com
040 ▼a EBLCP ▼b eng ▼e pn ▼c EBLCP ▼d TEFOD ▼d OCLCF ▼d OCLCQ ▼d UKAHL ▼d OCLCQ ▼d N$T ▼d 248032
049 ▼a MAIN
050 4 ▼a QA76.73.P98
08204 ▼a 005.133 ▼2 23
1001 ▼a Balakrishnan, Kaushik.
24510 ▼a TensorFlow Reinforcement Learning Quick Start Guide : ▼b Get up and Running with Training and Deploying Intelligent, Self-Learning Agents Using Python.
260 ▼a Birmingham : ▼b Packt Publishing Ltd, ▼c 2019.
300 ▼a 1 online resource (175 pages)
336 ▼a text ▼b txt ▼2 rdacontent
337 ▼a computer ▼b c ▼2 rdamedia
338 ▼a online resource ▼b cr ▼2 rdacarrier
500 ▼a The A3C algorithm applied to LunarLander
5050 ▼a Cover; Title Page; Copyright and Credits; Dedication; About Packt; Contributors; Table of Contents; Preface; Chapter 1: Up and Running with Reinforcement Learning; Why RL?; Formulating the RL problem; The relationship between an agent and its environment; Defining the states of the agent; Defining the actions of the agent; Understanding policy, value, and advantage functions; Identifying episodes; Identifying reward functions and the concept of discounted rewards; Rewards; Learning the Markov decision process; Defining the Bellman equation; On-policy versus off-policy learning
5058 ▼a On-policy method; Off-policy method; Model-free and model-based training; Algorithms covered in this book; Summary; Questions; Further reading; Chapter 2: Temporal Difference, SARSA, and Q-Learning; Technical requirements; Understanding TD learning; Relation between the value functions and state; Understanding SARSA and Q-Learning; Learning SARSA; Understanding Q-learning; Cliff walking and grid world problems; Cliff walking with SARSA; Cliff walking with Q-learning; Grid world with SARSA; Summary; Further reading; Chapter 3: Deep Q-Network; Technical requirements
5058 ▼a Learning the theory behind a DQN; Understanding target networks; Learning about replay buffer; Getting introduced to the Atari environment; Summary of Atari games; Pong; Breakout; Space Invaders; LunarLander; The Arcade Learning Environment; Coding a DQN in TensorFlow; Using the model.py file; Using the funcs.py file; Using the dqn.py file; Evaluating the performance of the DQN on Atari Breakout; Summary; Questions; Further reading; Chapter 4: Double DQN, Dueling Architectures, and Rainbow; Technical requirements; Understanding Double DQN; Coding DDQN and training to play Atari Breakout
5058 ▼a Evaluating the performance of DDQN on Atari Breakout; Understanding dueling network architectures; Coding dueling network architecture and training it to play Atari Breakout; Combining V and A to obtain Q; Evaluating the performance of dueling architectures on Atari Breakout; Understanding Rainbow networks; DQN improvements; Prioritized experience replay; Multi-step learning; Distributional RL; Noisy nets; Running a Rainbow network on Dopamine; Rainbow using Dopamine; Summary; Questions; Further reading; Chapter 5: Deep Deterministic Policy Gradient; Technical requirements
5058 ▼a Actor-Critic algorithms and policy gradients; Policy gradient; Deep Deterministic Policy Gradient; Coding ddpg.py; Coding AandC.py; Coding TrainOrTest.py; Coding replay_buffer.py; Training and testing the DDPG on Pendulum-v0; Summary; Questions; Further reading; Chapter 6: Asynchronous Methods -- A3C and A2C; Technical requirements; The A3C algorithm; Loss functions; CartPole and LunarLander; CartPole; LunarLander; The A3C algorithm applied to CartPole; Coding cartpole.py; Coding a3c.py; The AC class; The Worker() class; Coding utils.py; Training on CartPole
520 ▼a This book is an essential guide for anyone interested in Reinforcement Learning. The book provides an actionable reference for Reinforcement Learning algorithms and their applications using TensorFlow and Python. It will help readers leverage the power of algorithms such as Deep Q-Network (DQN), Deep Deterministic Policy Gradients (DDPG), and ...
5880 ▼a Print version record.
590 ▼a Added to collection customer.56279.3
650 0 ▼a Python (Computer program language)
650 0 ▼a Artificial intelligence.
650 0 ▼a Machine learning.
650 7 ▼a Artificial intelligence. ▼2 fast ▼0 (OCoLC)fst00817247
650 7 ▼a Machine learning. ▼2 fast ▼0 (OCoLC)fst01004795
650 7 ▼a Python (Computer program language) ▼2 fast ▼0 (OCoLC)fst01084736
655 4 ▼a Electronic books.
77608 ▼i Print version: ▼a Balakrishnan, Kaushik. ▼t TensorFlow Reinforcement Learning Quick Start Guide : Get up and Running with Training and Deploying Intelligent, Self-Learning Agents Using Python. ▼d Birmingham : Packt Publishing Ltd, ©2019 ▼z 9781789533583
85640 ▼3 EBSCOhost ▼u http://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=2094787
938 ▼a Askews and Holts Library Services ▼b ASKH ▼n AH36155814
938 ▼a ProQuest Ebook Central ▼b EBLB ▼n EBL5744473
938 ▼a EBSCOhost ▼b EBSC ▼n 2094787
990 ▼a Administrator
994 ▼a 92 ▼b N$T