Hosik Chae (HosikChae)
@DannyQuah
DannyQuah / 2020.06-D.Quah-Pulse-Secure-Client-on-Ubuntu-Linux.md
Last active April 6, 2025 14:32
Pulse Secure Client on Ubuntu Linux

by Danny Quah, June 2020 (revised Jan 2022)

Pulse Secure Client is a VPN client that allows secure connection to a Pulse Connect Secure SSL VPN gateway. Many universities use the latter to give faculty, staff, and students access to their computer systems. However, because Linux comes in many different flavors, the standard Pulse Secure Client installer does not always run to completion. (For one, [UWO.ca][] suggests "PulseSecure's understanding of Linux package managers and distributions in general seems very limited.") The user is then forced either to find a Windows machine somehow, or to go without VPN access when traveling with their Linux notebook.

This Gist describes the steps I took to install Pulse Secure Client on my Ubuntu-based Linux machines, including a Pixelbook running GalliumOS 3.1 and Dell desktops running Ubuntu 18.04 and 20.04. Other writeups elsewhere that I've looked at describe the same problems I encountered, but were either outdated, overly localised,

@dongbum
dongbum / cmake-tutorial.md
Created September 25, 2019 17:38 — forked from luncliff/cmake-tutorial.md
A document that helps juuust a little bit with CMake

Why do we use CMake?
The only good tool is Visual Studio. Everything else is heresy (邪道), heresy! - the author

Caution

  • This document describes CMake from a subjective point of view
  • It is not suitable as a starting point for learning CMake
    It was written as supplementary material to help you pick up the basics quickly after working through https://cgold.readthedocs.io/en/latest/ up to chapter 3.1
@Dulani
Dulani / miniconda_on_rpi.md
Created September 15, 2019 00:43 — forked from simoncos/miniconda_on_rpi.md
Install Miniconda 3 on Raspberry Pi
@luncliff
luncliff / cmake-tutorial.md
Last active July 2, 2025 08:00
A document that helps juuust a little bit with CMake

Why do we use CMake?
The only good tool is Visual Studio. Everything else is heresy (邪道), heresy! - the author

Caution

  • This document describes CMake from a subjective point of view
  • It is not suitable as a starting point for learning CMake
    It was written as supplementary material to help you pick up the basics quickly after working through https://cgold.readthedocs.io/en/latest/ up to chapter 3.1
@t-vi
t-vi / __init__.pyi
Last active July 13, 2023 09:11
PyTorch Type Hints work in progress (put into python3.x/dist-packages/torch/ directory to try)
from typing import List, Tuple, Optional, Union, Any, ContextManager, Callable, overload
import builtins
import math
import pickle
class dtype: ...
_dtype = dtype
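
As a rough illustration of what such stubs buy you (this snippet is my own example, not part of the gist, and it assumes the stubs also cover Tensor and its methods), a type checker such as mypy can then catch mistakes in torch code without running it:

import torch

x: torch.Tensor = torch.zeros(3, 4)   # accepted: zeros(...) is annotated to return a Tensor
n: int = x.dim()                      # accepted: dim() is annotated to return int
bad: str = x.sum()                    # flagged by mypy: sum() returns a Tensor, not a str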
@lethee
lethee / pkg-config-guide.md
Last active December 27, 2022 00:52
Guide to pkg-config (Korean translation)
@anujonthemove
anujonthemove / opencv-videocapture-useful-properties.txt
Last active January 24, 2025 09:04
A handy list of VideoCapture object parameters taken from official OpenCV docs.
CAP_PROP_POS_MSEC =0, //!< Current position of the video file in milliseconds.
CAP_PROP_POS_FRAMES =1, //!< 0-based index of the frame to be decoded/captured next.
CAP_PROP_POS_AVI_RATIO =2, //!< Relative position of the video file: 0=start of the film, 1=end of the film.
CAP_PROP_FRAME_WIDTH =3, //!< Width of the frames in the video stream.
CAP_PROP_FRAME_HEIGHT =4, //!< Height of the frames in the video stream.
CAP_PROP_FPS =5, //!< Frame rate.
CAP_PROP_FOURCC =6, //!< 4-character code of codec. see VideoWriter::fourcc .
CAP_PROP_FRAME_COUNT =7, //!< Number of frames in the video file.
CAP_PROP_FORMAT =8, //!< Format of the %Mat objects returned by VideoCapture::retrieve().
CAP_PROP_MODE =9, //!< Backend-specific value indicating the current capture mode.
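
For context, here is a short example of how these properties are read and set on a cv2.VideoCapture object (the file name below is just a placeholder):

import cv2

cap = cv2.VideoCapture("video.mp4")                 # or a camera index such as 0
fps = cap.get(cv2.CAP_PROP_FPS)                     # frame rate (property 5)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))      # property 3
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))    # property 4
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))   # property 7

cap.set(cv2.CAP_PROP_POS_MSEC, 10_000)              # seek to the 10-second mark
ok, frame = cap.read()                              # grab the frame at that position
cap.release()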
@kashif
kashif / cem.md
Last active September 18, 2024 21:33
Cross Entropy Method

How do we solve the policy optimization problem, that is, how do we maximize the total reward given some parametrized policy?

Discounted future reward

To begin with, for an episode the total reward is the sum of all the rewards. If our environment is stochastic, we can never be sure we will get the same rewards the next time we perform the same actions, so the further we look into the future, the more the total future reward may diverge. For that reason it is common to use the discounted future reward, R_t = r_t + γ r_{t+1} + γ² r_{t+2} + ..., where the parameter γ is called the discount factor and is between 0 and 1.

A good strategy for an agent would be to always choose an action that maximizes the (discounted) future reward. In other words we want to maximize the expected reward per episode.
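
Both pieces are short enough to sketch in NumPy. The snippet below is a minimal illustration of the discounted return and of the cross-entropy method on a toy objective; it is not the code from this gist, and the function and parameter names are placeholders:

import numpy as np

def discounted_returns(rewards, gamma=0.99):
    # Discounted future reward at each step: R_t = sum_k gamma^k * r_{t+k}.
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def cross_entropy_method(f, dim, n_samples=50, elite_frac=0.2, n_iters=30):
    # Maximize f(theta) by repeatedly refitting a Gaussian to the elite samples.
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = int(n_samples * elite_frac)
    for _ in range(n_iters):
        thetas = np.random.randn(n_samples, dim) * std + mean    # sample candidate parameters
        scores = np.array([f(th) for th in thetas])              # evaluate each candidate
        elite = thetas[np.argsort(scores)[-n_elite:]]            # keep the best fraction
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6 # refit the sampling distribution
    return mean

# Toy usage: maximize a simple concave objective in place of an episode's return.
best_theta = cross_entropy_method(lambda th: -np.sum((th - 3.0) ** 2), dim=5)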

@karpathy
karpathy / pg-pong.py
Created May 30, 2016 22:50
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import cPickle as pickle
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
@claymcleod
claymcleod / controller.py
Last active February 3, 2025 15:27
Playstation 4 Controller Python
#! /usr/bin/env python
# -*- coding: utf-8 -*-
#
# This file presents an interface for interacting with the Playstation 4 Controller
# in Python. Simply plug your PS4 controller into your computer using USB and run this
# script!
#
# NOTE: I assume in this script that the only joystick plugged in is the PS4 controller.
# If this is not the case, you will need to change the class accordingly.
#
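#
# A minimal, independent sketch of the same idea (not the class from this gist):
# read controller events with pygame, assuming, as the note above says, that the
# PS4 controller is the only joystick attached (index 0).
import pygame

pygame.init()
pygame.joystick.init()
controller = pygame.joystick.Joystick(0)          # assumes the PS4 pad is joystick 0
controller.init()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.JOYAXISMOTION:    # analog sticks and triggers
            print("axis", event.axis, "->", round(event.value, 3))
        elif event.type == pygame.JOYBUTTONDOWN:  # face buttons, bumpers, etc.
            print("button", event.button, "pressed")
        elif event.type == pygame.QUIT:
            running = False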