🟒 Different Levels of AWS Resources for Machine Learning Model Training and Deployment

  1. πŸ‘‰ EC2 Instances: Full User Control (Least Pre-built Content)
    With EC2, you have complete control over the entire setup (a boto3 sketch follows this list). You need to:
    • Start an EC2 instance (e.g., GPU-enabled for training deep learning models).
    • Install dependencies manually (e.g., Python, ML libraries like PyTorch or TensorFlow).
    • Copy or configure the training script, and handle training data management (e.g., downloading data from S3 or other sources).
    • Run the training process manually using your own code.
    • Manage all aspects of the environment, scaling, and resource management.
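For illustration, here is a minimal boto3 sketch of that first step (an assumption of mine, not code from this gist): launching a GPU instance for training. The AMI ID, key pair name, and region are placeholders to replace with your own values.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: e.g., an AWS Deep Learning AMI
        InstanceType="g4dn.xlarge",       # GPU-enabled instance type for training
        KeyName="my-key-pair",            # placeholder key pair for SSH access
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])

Everything after launch, from installing PyTorch or TensorFlow to pulling data from S3 and running the job, is on you, which is exactly what "least pre-built content" means.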
nov05 / 20241122_AWS SageMaker JupyterLab (or any other IDE), set up GitHub username and password.md
Last active November 24, 2024 11:03
20241122_AWS SageMaker JupyterLab (or any other IDE), set up GitHub username and password
  • Don't use the email you registered with GitHub for commits. Instead, GitHub provides you with a proxy email for this purpose. Just go to 'Settings - Emails' in your GitHub account, and you'll find the proxy email there.
  • Don't use your GitHub login password for commits. Instead, go to 'Settings - Developer Settings - Personal access tokens', create a token, and use that as your password for commits. Since fine-grained tokens are still in preview, I'm using a classic token for now. (A setup sketch follows below.)
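A minimal sketch of the whole setup from a notebook or terminal (assumed commands, not from this gist; USERNAME, the numeric ID in the proxy email, and REPO are placeholders):

    import subprocess

    def git(*args):
        # run a git command, raising if it fails
        subprocess.run(["git", *args], check=True)

    git("config", "--global", "user.name", "USERNAME")
    # proxy email from GitHub 'Settings - Emails', not your registered email:
    git("config", "--global", "user.email", "12345678+USERNAME@users.noreply.github.com")
    # embed a classic personal access token as the password for HTTPS pushes:
    git("remote", "set-url", "origin",
        "https://USERNAME:ghp_YourTokenHere@github.com/USERNAME/REPO.git")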
  • Local Install Requirements (a quick version check follows this list)
Python 3.7
MXNet 1.8
Pandas >= 1.2.4
AutoGluon 0.2.0
  • πŸ‘‰ create sagemaker base environment
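To confirm the pinned versions above are actually installed, here is a small sketch of mine (not from the gist); pkg_resources ships with setuptools and works on Python 3.7:

    import pkg_resources

    for package in ("mxnet", "pandas", "autogluon"):
        try:
            print(package, pkg_resources.get_distribution(package).version)
        except pkg_resources.DistributionNotFound:
            print(package, "NOT installed")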
nov05 / 20240322_reinforcement learning_neural network soft update.md
Last active March 22, 2024 12:22
20240322_reinforcement learning_neural network soft update

"deeprl/agent/DDPG_agent.py"

  • trg = trg*(1-Ο„) + src*Ο„
  • Ο„ is stored in self.config.target_network_mix
    def soft_update(self, target, source):
        ## trg = trg*(1-Ο„) + src*Ο„
        ## Ο„ is stored in self.config.target_network_mix
        for target_param, source_param in zip(target.parameters(), source.parameters()):
            target_param.detach_()
            target_param.copy_(target_param*(1.0 - self.config.target_network_mix)
                               + source_param*self.config.target_network_mix)
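As a self-contained illustration of the same blend, here is my own sketch with a standalone function and Ο„ passed explicitly, rather than the repo's method:

    import torch
    import torch.nn as nn

    def soft_update(target, source, tau):
        # in-place parameter-wise blend: trg = trg*(1-Ο„) + src*Ο„
        with torch.no_grad():
            for target_param, source_param in zip(target.parameters(), source.parameters()):
                target_param.mul_(1.0 - tau).add_(source_param, alpha=tau)

    target_net, source_net = nn.Linear(4, 2), nn.Linear(4, 2)
    soft_update(target_net, source_net, tau=0.005)  # typical small Ο„ for DDPG

With a small Ο„, the target network trails the source network slowly, which is what stabilizes the bootstrapped targets in DDPG.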

πŸ‘‰ Udacity Deep Reinforcement Learning Python Environment Setup

⚠️ Python 3.11 has to be downgraded to Python 3.10, or multiprocessing will raise `TypeError: code() argument 13 must be str, not int` on both Windows and Linux. Google Colab currently uses Python 3.10 as well.


(drlnd_p2) PS D:\github\udacity-deep-reinforcement-learning\python\mujoco-py> python examples\body_interaction.py

You appear to be missing MuJoCo.  We expected to find the file here: C:\Users\*\.mujoco\mujoco210

This package only provides python bindings, the library must be installed separately.

Please follow the instructions on the README to install MuJoCo

⚠️ issue: `from gym.wrappers import Monitor` caused `ImportError: cannot import name 'Monitor' from 'gym.wrappers'`.

  • solution (2022):
    import gym
    from gym.wrappers.record_video import RecordVideo

    env = gym.make('CartPole-v1', render_mode="rgb_array")
    env = RecordVideo(env, './video', episode_trigger=lambda episode_number: True)
    env.reset()
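Note that RecordVideo only writes the file once the episode finishes and the env is closed. Continuing from the snippet above (assuming the gym β‰₯ 0.26 API, where step returns five values):

    done = False
    while not done:
        obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
        done = terminated or truncated
    env.close()  # flushes the recorded video into ./video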
    

20240218_pong-PPO.ipynb
πŸ‘‰ training log for reference
1000 episodes, T4 GPU, Wall time: 1h 38min 14s

Episode: 20, score: -15.750000
[-16. -16. -16. -16. -16. -16. -16. -14.]
Episode: 40, score: -12.625000
nov05 / 20240218_reinforcement learning_pong training log 1200e.md
Created February 19, 2024 06:00
20240218_reinforcement learning_pong training log 1200e

20240217_pong_REINFORCE.ipynb
πŸ‘‰ training log for reference
1200 episodes on T4 GPU, Wall time: 2h 12min 12s

Episode: 20, score: -14.500000
[-14. -15. -16. -13. -14. -16. -16. -12.]
Episode: 40, score: -14.500000