Udacity Deep Reinforcement Learning - p2 & deeprl env setup

πŸ‘‰ check the drlnd_py310 env setup notes
πŸ‘‰ check the p1 env setup notes
πŸ‘‰ course curriculum
πŸ‘‰ Colab notebooks


Windows 11, VSCode, Miniconda, PowerShell

πŸ‘‰ copy from the env where CUDA and PyTorch have been installed
🟒 conda create --name drlnd_p2 --clone drlnd (Python 3.6)

(base) PS D:\github\udacity-deep-reinforcement-learning\python> conda create --name drlnd_p2 --clone drlnd
Source:      D:\Users\*\miniconda3\envs\drlnd
Destination: D:\Users\*\miniconda3\envs\drlnd_p2
Packages: 159
Files: 13970
  • or check how to install cuda + pytorch in windows 11
    conda install cuda --channel "nvidia/label/cuda-12.1.0"
  • or go to https://pytorch.org/, and select the right version to install
    ❌ pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
    🟒 conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
pip install torchmeta
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
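A quick sanity check (my own addition, assuming the CUDA 12.1 build of PyTorch installed above) to confirm the GPU build is actually picked up:

import torch
print(torch.__version__, torch.version.cuda)   ## e.g. "2.x.x 12.1"
print(torch.cuda.is_available())               ## should print True on a CUDA machine
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))       ## your GPU's name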

🟒 Follow these steps to install mujoco-py on Windows

🟒 PowerShell: $env:PATH += ";C:\Users\*\.mujoco\mjpro150\bin"
   PowerShell: $env:PATH -split ";" to display the path variables

🟒 download mujoco-py-1.50.1.68.tar.gz from https://pypi.org/project/mujoco-py/1.50.1.68/#files

pip install "cython<3"  
pip install mujoco-py-1.50.1.68.tar.gz  
python D:\github\udacity-deep-reinforcement-learning\python\mujoco-py\examples\body_interaction.py  
  • you might need to pip install lockfile and some other packages; install them according to the error messages.
  • a worse case is that your Python version is too high (maybe >=3.9?), in which case you might need to install mujoco_py manually.
  • now you should be able to see the body_interaction.py demo running (a quick import check is sketched below).
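A minimal import check of mujoco-py (my own sketch; the humanoid.xml path is an assumption based on the sample models shipped with mjpro150):

import os
import mujoco_py

## load one of the sample models shipped with mjpro150 (assumed location)
model_path = os.path.expanduser(r"~\.mujoco\mjpro150\model\humanoid.xml")
model = mujoco_py.load_model_from_path(model_path)
sim = mujoco_py.MjSim(model)
sim.step()
print(sim.data.qpos)   ## prints joint positions; any output means the build works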

πŸ‘‰ install gym Atari and accept the ROM license
https://stackoverflow.com/a/69602242

pip install -U gym
pip install -U gym[atari,accept-rom-license]
pip install bleach==1.5.0  
pip install --upgrade numpy   
pip install --upgrade tensorboard
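A quick check (my own addition) that the Atari ROMs were actually installed. Depending on the gym version pulled in by pip install -U gym, reset() returns either obs or (obs, info); with very new gym/ale-py versions you may need the "ALE/Breakout-v5" id instead of the classic one used here:

import gym

env = gym.make("BreakoutNoFrameskip-v4")
out = env.reset()                        ## newer gym returns (obs, info); older returns obs
obs = out[0] if isinstance(out, tuple) else out
print(env.action_space, obs.shape)       ## expect Discrete(4) and (210, 160, 3)
env.close()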

πŸ‘‰ install OpenAI Baselines

pip install --upgrade pip setuptools wheel   
pip install opencv-python==4.5.5.64  
git clone https://github.com/openai/baselines.git
cd baselines
pip install -e .
  • for Python 3.11, you can just pip install opencv-python;
    that installed opencv-python-4.9.0.80 for me.
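A quick import check (my own addition) that the editable install works and that the SubprocVecEnv module referenced later by deeprl is importable:

import baselines
from baselines.common.vec_env.subproc_vec_env import SubprocVecEnv

print(baselines.__file__)   ## should point into the cloned baselines folder
print(SubprocVecEnv)        ## the vectorized-env class deeprl's Task relies on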

πŸ‘‰ install the rest of the packages for the deeprl folder.
pip install -r .\deeprl_files\requirements.txt

  • requirements.txt
# torch
# torchvision
# torchmeta 
# gym==0.15.7
# tensorflow==1.15.0
# opencv-python==4.0.0.21
atari-py
scikit-image==0.14.2
tqdm
pandas
pathlib
seaborn
# roboschool==1.0.34
dm-control2gym  
tensorflow-io
  • for Python 3.11, loosen the version requirement for scikit-image.
    I got scikit-image-0.22.0 installed.

πŸ‘‰ test the env setup

  • run notebooks
python -m ipykernel install --user --name=drlnd_p2
jupyter notebook D:\github\udacity-deep-reinforcement-learning\p2_continuous-control\Continuous_Control.ipynb  
jupyter notebook D:\github\udacity-deep-reinforcement-learning\p2_continuous-control\Crawler.ipynb  

🟒 python -m deeprl.component.envs

if __name__ == '__main__':  ## bottom of deeprl/component/envs.py; np and Task are module-level imports
    import time
    ## num_envs=5 will only create 3 envs and cause the error
    ## "results = _flatten_list(results)"
    ## in "baselines\baselines\common\vec_env\subproc_vec_env.py"
    task = Task('Hopper-v2', num_envs=3, single_process=False)
    state = task.reset()

    ## This might be helpful for custom env debugging
    # env_dict = gym.envs.registration.registry.env_specs.copy()
    # for item in env_dict.items():
    #     print(item)

    start_time = time.time()
    while True:
        action = np.random.rand(task.action_space.shape[0])
        next_state, reward, done, _ = task.step(action)
        print(done)
        if time.time()-start_time > 10: ## run about 10s
            break  
    task.close()

🟒 run examples:
D:\github\udacity-deep-reinforcement-learning\python\deeprl_files\examples.py

if __name__ == '__main__':
    mkdir('log')
    mkdir('tf_log')
    set_one_thread()
    random_seed()
    # -1 is CPU, a non-negative integer is the index of a GPU
    # select_device(-1)
    select_device(0) ## GPU
    
    game = 'Reacher-v2'
    # a2c_continuous(game=game)
    # ppo_continuous(game=game)
    ddpg_continuous(game=game)    




folder ./python/deeprl structure

https://github.com/ShangtongZhang/DeepRL
https://github.com/ChalamPVS/Unity-Reacher

🟒 Copied Python files from the repo @ShangtongZhang/DeepRL to the repo @Nov05/udacity-deep-reinforcement-learning under the './python' folder.

DeepRL/template_jobs.py

ddpg_continuous(game='Reacher-v2', run=0, env=env,
                remark=ddpg_continuous.__name__)

DeepRL/examples.py

def ddpg_continuous(**kwargs):
    config.task_fn = lambda: Task(config.game, env=env)
    run_steps(DDPGAgent(config))

deep_rl/utils/config.py

class Config:
    def __init__(self):
        self.task_fn = None

DeepRL/deep_rl/utils/misc.py

def run_steps(agent):
    config = agent.config
    agent.step()

deep_rl/agent/DDPG_agent.py

class DDPGAgent(BaseAgent):
    def __init__(self, config):
        self.task = config.task_fn()
    def step(self):
        ...

deep_rl/component/envs.py

def make_env(env_id, seed, rank, episode_life=True):
class Task:
    def __init__(self,
                 name,
                 num_envs=1,
                 env=env,
                 ...):
if __name__ == '__main__':
    task = Task('Hopper-v2', 5, single_process=False)
Nov05 commented Oct 27, 2024

🟒⚠️ issue solved: Tennis game with more than 1 env for training and testing; reset the envs once they are all done.

  • solution: when `self.states` is None, the agent will reset the envs, hence make sure to set `self.states = None  ## reset` once all envs are done (a minimal sketch of this pattern follows the code below).
Max episodes:  39%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰                                                        | 78/200 [00:31<00:41,  2.91it/s]2024-10-27 02:36:37,299 - root - INFO: Episode 78, Step 1456, 0.04 s/episode
2024-10-27 02:36:37,368 - root - INFO: Episode 78, Step 1483, episodic_return_train 0.05000000074505806
2024-10-27 02:36:37,368 - root - INFO: Episode 79, Step 1484, 0.07 s/episode
Process SpawnProcess-2:
Process SpawnProcess-1:
Traceback (most recent call last):
Traceback (most recent call last):
  File "D:\Users\guido\miniconda3\envs\drlnd_py310\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "D:\Users\guido\miniconda3\envs\drlnd_py310\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "D:\Users\guido\miniconda3\envs\drlnd_py310\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Users\guido\miniconda3\envs\drlnd_py310\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "D:\github\udacity-deep-reinforcement-learning\python\deeprl\component\envs.py", line 372, in unity_worker
    brain_info = env.step(data)[brain_name] ## info type ".unityagents.brain.BrainInfo"
  File "D:\github\udacity-deep-reinforcement-learning\python\deeprl\component\envs.py", line 372, in unity_worker
    brain_info = env.step(data)[brain_name] ## info type ".unityagents.brain.BrainInfo"
  File "D:\github\udacity-deep-reinforcement-learning\python\unityagents\environment.py", line 384, in step
    raise UnityActionException("⚠️ The episode is completed. Reset the environment with 'reset()'")
  File "D:\github\udacity-deep-reinforcement-learning\python\unityagents\environment.py", line 384, in step
    raise UnityActionException("⚠️ The episode is completed. Reset the environment with 'reset()'")
unityagents.exception.UnityActionException: ⚠️ The episode is completed. Reset the environment with 'reset()'
unityagents.exception.UnityActionException: ⚠️ The episode is completed. Reset the environment with 'reset()'
Max episodes:  40%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž                                                       | 79/200 [00:38<00:58,  2.06it/s]
Traceback (most recent call last):
  File "D:\Users\guido\miniconda3\envs\drlnd_py310\lib\multiprocessing\connection.py", line 312, in _recv_bytes
    nread, err = ov.GetOverlappedResult(True)
BrokenPipeError: [WinError 109] The pipe has been ended
  • It seems that for the Unity Reacher game (p2) all episodes take the same number of steps to finish, whereas for the Unity Tennis game episode lengths seem to vary.
  • in python\deeprl\agent\BaseAgent.py:
        if self.config.num_workers > 0:  ## agent could have no task when eval
            self.total_episodic_returns = [None] * self.config.task.num_envs   ## added by nov05
            self.episode_dones = [False] * self.config.task.num_envs  ## added by nov05

and in MADDPGAgent and DDPGAgent, change the logic that decides whether all envs are done; do the same for the eval logic:

        ## check whether the episode is done
        for i,(done,info) in enumerate(zip(dones,infos)):
            if np.any(done):  ## or np.all(done) which should be the same
                self.episode_dones[i] = True
                self.total_episodic_returns[i] = info['episodic_return']
        if all(self.episode_dones): ## all envs finish one episode
            ## reset self.episode_dones in "python\deeprl\utils\misc.py"
            ## log train returns
            self.record_online_return(self.total_episodic_returns, 
                                      by_episode=self.config.by_episode)  
            self.states = None  ## reset
            self.total_episodic_returns = [None] * self.task.num_envs  ## reset
            self.total_episodes += 1
        self.total_steps += 1
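For context, a minimal sketch (not verbatim from this repo) of how the reset-on-None pattern at the top of the agent's step() ties into the bookkeeping above; self.random_process and config.state_normalizer follow the ShangtongZhang DeepRL DDPG agent and may be named slightly differently here:

    def step(self):
        if self.states is None:                      ## set to None above once all envs are done
            self.random_process.reset_states()       ## reset the exploration noise
            self.states = self.task.reset()          ## reset all envs at once
            self.states = self.config.state_normalizer(self.states)
        ## ... choose actions, call self.task.step(actions), then run the
        ## episode-done bookkeeping shown above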

Nov05 commented Oct 28, 2024

⚠️ issue: p3 Unity Tennis game, MADDPG agent. If it uses the PrioritizedReplay buffer, the sampled states etc. will contain NaNs, which causes all the neural network outputs, such as a_target (action), q_target (Q-value), a, q, etc., to become NaNs.

  • debug: the local critic gets NaNs, hence the actor loss is NaN during training. However, the target critic and the previous local critic forward passes seem fine. states_ can range over [-20, 20] or more, and a (actions) over [-1, 1].
    actor_loss = -self.networks[i].critic(states_.reshape(self.config.mini_batch_size,-1), a).mean(dim=0)

  • try to clip the actions to be within the action space, which is [-1,1] for Unity Tennis.

  • try to clip the states to be within the range of [-10,10].
    config.state_normalizer = MeanStdNormalizer()

  • try to clip the gradients before the optimizers step.
    torch.nn.utils.clip_grad_norm_(self.networks[i].critic_body.parameters(), max_norm=1.0)
    torch.nn.utils.clip_grad_norm_(self.networks[i].actor_body.parameters(), max_norm=1.0)

  • debug network parameters:

                  q = self.networks[i].critic(states_.reshape(self.config.mini_batch_size, -1), a)
                  if torch.isnan(q).any():
                      print('πŸ™„ q', q)
                      for param in self.networks[i].critic_body.parameters():
                          if torch.isnan(param).any():
                              print("πŸ™„ NaN found in parameters")
                          if torch.isinf(param).any():
                              print("πŸ™„ Inf found in parameters")
    

    Then I found a bug: I had mistakenly written `sampling_probs_ = tensor(transitions.mask)`; it should use transitions.sampling_prob, as below.

              sampling_probs_ = tensor(transitions.sampling_prob).unsqueeze(-1).transpose(0, 1)
              sample_weights_ = 1.0 / (sampling_probs_ * self.replay.size())  ## Caution: it might create Inf
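For reference, a minimal sketch of the standard prioritized-replay importance-sampling weight, with a clamp that avoids the Inf noted above; the beta exponent and max-normalization are the usual PER formulation and not necessarily exactly what this repo uses:

import torch

def per_importance_weights(sampling_probs: torch.Tensor, buffer_size: int,
                           beta: float = 0.4, eps: float = 1e-8) -> torch.Tensor:
    ## standard PER weights: w_i = (N * P(i)) ** (-beta), normalized by the max weight
    ## clamping P(i) away from zero prevents division by zero / Inf
    weights = (buffer_size * sampling_probs.clamp_min(eps)) ** (-beta)
    return weights / weights.max()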
    

Nov05 commented Nov 2, 2024

🟒⚠️ issue: pip install torchrl was unsuccessful, so I uninstalled it; then I got an error. Ran `` to reinstall torchvision and got another error:

(drlnd_py310) PS D:\github\udacity-deep-reinforcement-learning\python> python -m experiments.deeprl_maddpg_continuous --is_training True                  
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
(drlnd_py310) PS D:\github\udacity-deep-reinforcement-learning> conda deactivate drlnd_py310
(base) PS D:\github\udacity-deep-reinforcement-learning> conda create --name drlnd_py310_backup --clone drlnd_py310
Source:      D:\Users\guido\miniconda3\envs\drlnd_py310
Destination: D:\Users\guido\miniconda3\envs\drlnd_py310_backup
Packages: 115
Files: 40037

Downloading and Extracting Packages:


Downloading and Extracting Packages:

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate drlnd_py310_backup
#
# To deactivate an active environment, use
#
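One way to apply the unsafe workaround named in the OMP hint above is to set the variable from Python before torch is imported (this only suppresses the duplicate-runtime check; it does not fix the underlying conflict):

import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"   ## the workaround mentioned in the OMP hint

import torch   ## import torch (and anything else that loads OpenMP) only after setting it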

Nov05 commented Nov 16, 2024

🟒⚠️ issue solved: google colab, matd3 notebook

!pip install protobuf==3.19.0
!export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
[<ipython-input-5-03a59240fed2>](https://localhost:8080/#) in <cell line: 13>()
     11 import numpy
     12 import torch
---> 13 from unityagents import UnityEnvironment
     14 
     15 import matplotlib.pyplot as plt

4 frames
[/usr/local/lib/python3.10/dist-packages/google/protobuf/descriptor.py](https://localhost:8080/#) in __new__(cls, name, full_name, index, number, type, cpp_type, label, default_value, message_type, enum_type, containing_type, is_extension, extension_scope, options, serialized_options, has_default_value, containing_oneof, json_name, file, create_key)
    551                 has_default_value=True, containing_oneof=None, json_name=None,
    552                 file=None, create_key=None):  # pylint: disable=redefined-builtin
--> 553       _message.Message._CheckCalledFromGeneratedFile()
    554       if is_extension:
    555         return _message.default_pool.FindExtensionByName(full_name)

TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
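Note that in Colab `!export ...` only affects a throwaway subshell; to make the variable visible to the notebook kernel itself, set it from Python (or with the %env magic) before unityagents/protobuf are imported:

import os
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"  ## must be set before protobuf is imported

from unityagents import UnityEnvironment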
