Philip p208p2002
Applying the Singleton Pattern in Python

When building a large AI system, the model is often called from many different places, but a model consumes substantial resources and cannot be instantiated without limit.

You could of course pass the model down to each module as an argument, but that can turn into parameter-passing hell; or you could agree on a global variable, which works but is still not ideal.

This is where the singleton pattern comes in. A singleton allows only one instance of a class to exist: if the class is instantiated again, the already-created instance is returned instead. No more passing things around or managing global variables!

The example below uses a decorator to achieve singleton behavior, and the output shows that we successfully avoid instantiating the same class more than once.

Example
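Since the gist's code is not shown in this preview, here is a minimal sketch of such a decorator (the names singleton and Model are illustrative, not necessarily the original gist's):

def singleton(cls):
    instances = {}
    def get_instance(*args, **kwargs):
        # create the instance on the first call only; reuse it afterwards
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return get_instance

@singleton
class Model:
    def __init__(self):
        print('loading model (this runs only once)')

a = Model()
b = Model()
print(a is b)  # True: both names point to the same instance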

@p208p2002
p208p2002 / Docker多階段建構.md
Last active March 15, 2022 03:26
#blog #docker #ssh #docker-multi-stage-build

Docker Multi-Stage Builds

I recently had a project that needed to access several private repositories at build time. Putting the ssh key straight into the image raises security concerns, and the key must also be kept out of every image layer; a multi-stage build can meet both requirements.

Writing the Dockerfile

Use the syntax FROM foo as bar to give an image stage an alias:

FROM python:3.8 as build-system

You can also build a temporary image within the same Dockerfile. We perform the ssh operations in this stage, and this intermediate image is ultimately discarded:

FROM build-system as intermediate
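The final stage then starts from a clean base and copies in only the artifacts it needs, so the intermediate stage, together with any ssh key stored in its layers, never reaches the published image. A hedged sketch of that last step (the /app path is illustrative):

# final stage: keep only the built artifacts, discard the intermediate layers
FROM python:3.8
COPY --from=intermediate /app /app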

@p208p2002
p208p2002 / Dockerfile
Created February 17, 2022 03:45 — forked from knowsuchagency/Dockerfile
Makefile Docker Git GitHub multi-stage build ssh private key recipe
FROM python:3 as build-system
RUN pip install -U pip
COPY requirements.txt requirements.txt
### create temporary image used to download and vendor packages using private key ###
FROM build-system as intermediate
# add credentials on build
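# the preview is truncated here; a hedged sketch of how such a stage usually
# continues (the ARG name and paths are illustrative, not necessarily the fork's)
ARG SSH_PRIVATE_KEY
RUN mkdir -p /root/.ssh && \
    echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa && \
    ssh-keyscan github.com >> /root/.ssh/known_hosts
RUN pip download -r requirements.txt -d /wheels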
@p208p2002
p208p2002 / 在Dockerfile與GitHub Action中安裝私有Python套件.md
Last active January 19, 2022 01:36
#blog #dockerfile #gh-action #ssh #pip-install-private

Installing Private Python Packages in a Dockerfile and GitHub Actions

Those with a more advanced knowledge of Python may know that packages can be installed directly from a GitHub repo. For a public repo this works:

pip install git+https://github.com/USERNAME/REPO.git

What about a private repo? In that case we have to authenticate over ssh:

pip install git+ssh://git@github.com/username/private_repo.git
[{"_id": "0", "_models": ["qmst", "beam_search", "navie_qgg"], "article": "Hi, friends! Welcome to our Family Rock Band. There are four members in our band. They are Wangwang, Mimi, Yingying and I. Wangwang is my dog. It can sing and dance. Mimi is my cat. It can't dance but it can sing and do Chinese kung fu. Yingying is a parrot . It can sing very well. I can play the guitar. When I play the guitar, they sing and dance. We have a show in our home every Sunday evening. Many boys and girls come to my show. They like it very much.", "questionGroups": [["Who can dance?", "Wangwang, Mimi and Yingying can all _ .", "Please come and watch our show on _ .", "Mimi can _ .", "Yingying is a _ ."], ["_ is my dog.", "We have a show in our home every Sunday evening because _ .", "What does Yingying like?", "How many members are in Wangwang's band?", "Who can play the guitar?"], ["We have a show in our home every Sunday evening.", "Wangwang is _ .", "It can sing and do Chinese kung fu.", "What is Wangwang's
@p208p2002
p208p2002 / 用 PyTorch Lighting 拯救你的一天.md
Last active September 29, 2023 22:33
#blog #pytorch-lightning #pytorch

Save Your Day with PyTorch Lightning

While running DL experiments recently, I found that apart from the core research itself, the most effort goes into maintaining your training pipeline: data processing, training, prediction and scoring, adding checkpoint resumption, and managing hyperparameters and model versions. Constantly having to validate and handle all of this is exhausting. Fortunately everyone has the same pain, and so PL appeared. In the official words:

PyTorch Lightning is just organized PyTorch. You do the research. Lightning will do everything else.

It's that simple! Though to really experience the second point, I feel there is still some way to go: besides needing to know the framework well, PL is not yet fully stable (1.2.x) and still has some minor bugs.
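As a taste of the first point, a minimal hedged sketch of a LightningModule (a toy classifier of my own, not code from the gist):

import pytorch_lightning as pl
import torch
from torch import nn

class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(28 * 28, 10)

    def training_step(self, batch, batch_idx):
        # PL calls this for every batch; the loops live in the Trainer
        x, y = batch
        logits = self.net(x.view(x.size(0), -1))
        return nn.functional.cross_entropy(logits, y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# trainer = pl.Trainer(max_epochs=3)
# trainer.fit(LitClassifier(), train_dataloader)  # train_dataloader: your own DataLoader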

@p208p2002
p208p2002 / PyTorch實作NLP中的Self-Attention.md
Last active May 11, 2025 13:22
#blog #nlp #pytorch #self-attention

Implementing Self-Attention for NLP in PyTorch

Preparing the Input

We prepare two sentences for this experiment:

sentences = ['hello attention','have a nice day']

First, build the vocabulary and the corresponding one-hot encoding for each word:

vocabs = ' '.join(sentences).split()
vocabs = list(set(vocabs))
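The preview stops at the vocabulary. A hedged sketch of where this is headed, projecting the one-hot vectors into queries, keys and values and applying scaled dot-product attention (dimensions and variable names are illustrative, not the gist's actual code):

import torch
import torch.nn.functional as F

vocab_size = len(vocabs)  # vocabs comes from the snippet above
d_model = 8

one_hot = torch.eye(vocab_size)  # one-hot rows double as an embedding table

# learnable projections for query, key and value
w_q = torch.nn.Linear(vocab_size, d_model, bias=False)
w_k = torch.nn.Linear(vocab_size, d_model, bias=False)
w_v = torch.nn.Linear(vocab_size, d_model, bias=False)

x = one_hot[:2]                       # a toy "sentence" of two tokens
q, k, v = w_q(x), w_k(x), w_v(x)

scores = q @ k.T / d_model ** 0.5     # scaled dot-product
weights = F.softmax(scores, dim=-1)   # attention weights per token
output = weights @ v                  # weighted sum of value vectors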
@p208p2002
p208p2002 / gradient_accumulation.py
Created March 17, 2021 06:36 — forked from thomwolf/gradient_accumulation.py
PyTorch gradient accumulation training loop
model.zero_grad()                                   # Reset gradients tensors
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                     # Forward pass
    loss = loss_function(predictions, labels)       # Compute loss function
    loss = loss / accumulation_steps                # Normalize our loss (if averaged)
    loss.backward()                                 # Backward pass
    if (i+1) % accumulation_steps == 0:             # Wait for several backward steps
        optimizer.step()                            # Now we can do an optimizer step
        model.zero_grad()                           # Reset gradients tensors
        if (i+1) % evaluation_steps == 0:           # Evaluate the model when we...
@p208p2002
p208p2002 / BART Fine-tuning.ipynb
Last active December 1, 2022 07:38
#blog #bart-model #nlp