Rokas Liuberskis pythonlessons

@pythonlessons
pythonlessons / .md
Created February 1, 2026 12:21
#runSubagent

Please implement this refactor plan: #file:[refactorplan.md]. Analyze the pending tasks & todos listed in the document and plan out how to split them up into subtasks.

For each task, spawn an agent using #runSubagent, and ensure you orchestrate them properly. It is probably necessary to run them sequentially to avoid conflicts, but where possible, you are encouraged to use parallel agents to speed up development. For example, if you need to do research before starting the implementation phase, consider using multiple parallel agents: one to analyze the codebase, one to find best practices, one to read the docs, and so on.

You have explicit instructions to continue development until the entire plan is finished. Do not stop orchestrating subagents until all planned tasks are fully implemented, tested, and verified to be up and running.

Each agent should be roughly prompted like so, adjusted to the selected task:

[TASK DESCRIPTION/INSTRUCTIONS HERE]. Ensure you read the refactor plan & agents.md; keep both
@pythonlessons
pythonlessons / .md
Created February 1, 2026 12:19
.github/copilot-instructions.md

You are an expert senior software engineer and tech lead.

Your task is to analyze this entire repository and produce a highly effective .github/copilot-instructions.md file that will significantly improve future GitHub Copilot suggestions for this project. For each task, spawn an agent using #runSubagent, and ensure you orchestrate them properly.

This is NOT a generic instruction file. It must be grounded in the actual codebase, patterns, and domain.

Step 1: Repository Analysis

@pythonlessons
pythonlessons / transformers_training_14.css
Created September 4, 2023 15:04
transformers_training
__________________________________________________________________________________________________
 Layer (type)            Output Shape         Param #    Connected to
==================================================================================================
 input_1 (InputLayer)    [(None, 502)]        0          []
 input_2 (InputLayer)    [(None, 502)]        0          []
 encoder (Encoder)       (None, 502, 128)     2738688    ['input_1[0][0]']
 decoder (Decoder)       (None, 502, 128)     4884864    ['input_2[0][0]',
@pythonlessons
pythonlessons / transformers_training_13.py
Created September 4, 2023 15:04
transformers_training
# Train the model
transformer.fit(
    train_dataProvider,
    validation_data=val_dataProvider,
    epochs=configs.train_epochs,
    callbacks=[
        warmupCosineDecay,
        checkpoint,
        tb_callback,
        reduceLROnPlat,
@pythonlessons
pythonlessons / transformers_training_12.py
Created September 4, 2023 15:04
transformers_training
# Define callbacks
warmupCosineDecay = WarmupCosineDecay(
    lr_after_warmup=configs.lr_after_warmup,
    final_lr=configs.final_lr,
    warmup_epochs=configs.warmup_epochs,
    decay_epochs=configs.decay_epochs,
    initial_lr=configs.init_lr,
)
earlystopper = EarlyStopping(monitor="val_masked_accuracy", patience=5, verbose=1, mode="max")
checkpoint = ModelCheckpoint(f"{configs.model_path}/model.h5", monitor="val_masked_accuracy", verbose=1, save_best_only=True, mode="max", save_weights_only=False)
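WarmupCosineDecay is a custom callback from the tutorial's own codebase rather than a stock Keras class. The schedule it names can be sketched as a plain function — a sketch under the assumption of a linear warmup to `lr_after_warmup` followed by cosine decay toward `final_lr`; the real callback's exact shape may differ:

```python
import math

def warmup_cosine_decay(epoch, initial_lr=1e-8, lr_after_warmup=1e-4,
                        final_lr=5e-5, warmup_epochs=2, decay_epochs=18):
    """Linear warmup from initial_lr to lr_after_warmup, then cosine decay to final_lr."""
    if epoch < warmup_epochs:
        # Linear ramp during the warmup phase
        return initial_lr + (lr_after_warmup - initial_lr) * epoch / warmup_epochs
    # Cosine decay after warmup, clamped at final_lr once decay_epochs have passed
    progress = min((epoch - warmup_epochs) / decay_epochs, 1.0)
    cosine = 0.5 * (1 + math.cos(math.pi * progress))
    return final_lr + (lr_after_warmup - final_lr) * cosine
```

A function like this could be wrapped in `tf.keras.callbacks.LearningRateScheduler` to get the same effect as a dedicated callback class.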
@pythonlessons
pythonlessons / transformers_training_11.py
Created September 4, 2023 15:04
transformers_training
optimizer = tf.keras.optimizers.Adam(learning_rate=configs.init_lr, beta_1=0.9, beta_2=0.98, epsilon=1e-9)

# Compile the model
transformer.compile(
    loss=MaskedLoss(),
    optimizer=optimizer,
    metrics=[MaskedAccuracy()],
    run_eagerly=False,
)
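MaskedLoss and MaskedAccuracy are custom classes from the tutorial's codebase. Their core idea is to score only the non-padding positions of each target sequence. A minimal NumPy sketch of the accuracy side, assuming token id 0 is the padding token (an assumption, not confirmed by the snippet):

```python
import numpy as np

def masked_accuracy(y_true, y_pred_ids, pad_id=0):
    """Accuracy over non-padding positions only (pad_id is an assumed padding token)."""
    y_true = np.asarray(y_true)
    y_pred_ids = np.asarray(y_pred_ids)
    mask = y_true != pad_id                  # ignore padded timesteps
    matches = (y_true == y_pred_ids) & mask  # correct predictions at real positions
    return matches.sum() / mask.sum()
```

The loss works the same way: compute per-token cross-entropy, zero it out where the mask is false, and average over the masked count instead of the full sequence length.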
@pythonlessons
pythonlessons / transformers_training_10.py
Created September 4, 2023 15:04
transformers_training
# Create TensorFlow Transformer Model
transformer = Transformer(
    num_layers=configs.num_layers,
    d_model=configs.d_model,
    num_heads=configs.num_heads,
    dff=configs.dff,
    input_vocab_size=len(tokenizer)+1,
    target_vocab_size=len(detokenizer)+1,
    dropout_rate=configs.dropout_rate,
    encoder_input_size=tokenizer.max_length,
@pythonlessons
pythonlessons / transformers_training_9.py
Created September 4, 2023 15:04
transformers_training
# Create Training Data Provider
train_dataProvider = DataProvider(
    train_dataset,
    batch_size=configs.batch_size,
    batch_postprocessors=[preprocess_inputs],
    use_cache=True,
)

# Create Validation Data Provider
val_dataProvider = DataProvider(
@pythonlessons
pythonlessons / transformers_training_8.py
Created September 4, 2023 15:04
transformers_training
def preprocess_inputs(data_batch, label_batch):
    encoder_input = np.zeros((len(data_batch), tokenizer.max_length)).astype(np.int64)
    decoder_input = np.zeros((len(label_batch), detokenizer.max_length)).astype(np.int64)
    decoder_output = np.zeros((len(label_batch), detokenizer.max_length)).astype(np.int64)

    data_batch_tokens = tokenizer.texts_to_sequences(data_batch)
    label_batch_tokens = detokenizer.texts_to_sequences(label_batch)

    for index, (data, label) in enumerate(zip(data_batch_tokens, label_batch_tokens)):
        encoder_input[index][:len(data)] = data
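The preview above cuts off before `decoder_input` and `decoder_output` are filled. The standard pattern here is teacher forcing: the decoder input is the target token sequence, and the decoder output is that same sequence shifted left by one position. A self-contained sketch of that target construction (the variable names and pad id 0 are assumptions; the real function works on the tutorial's tokenizer output):

```python
import numpy as np

def teacher_forcing_targets(label_tokens, max_length, pad_id=0):
    """Build decoder input/output pairs: the output is the input shifted left by one token."""
    decoder_input = np.full((len(label_tokens), max_length), pad_id, dtype=np.int64)
    decoder_output = np.full((len(label_tokens), max_length), pad_id, dtype=np.int64)
    for i, label in enumerate(label_tokens):
        decoder_input[i, :len(label)] = label            # full target sequence
        decoder_output[i, :len(label) - 1] = label[1:]   # same sequence shifted left by one
    return decoder_input, decoder_output
```

With this shift, at each timestep the model sees the tokens up to position t and is trained to predict the token at position t+1.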
@pythonlessons
pythonlessons / transformers_training_7.py
Created September 4, 2023 15:04
transformers_training
# Prepare the Spanish tokenizer; this is the input language
tokenizer = CustomTokenizer(char_level=True)
tokenizer.fit_on_texts(es_training_data)
tokenizer.save(configs.model_path + "/tokenizer.json")

# Prepare the English tokenizer; this is the output language
detokenizer = CustomTokenizer(char_level=True)
detokenizer.fit_on_texts(en_training_data)
detokenizer.save(configs.model_path + "/detokenizer.json")
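CustomTokenizer comes from the tutorial's codebase; the snippet only shows it being fitted and saved. The char-level behavior it relies on can be sketched with a plain dict — a minimal, hypothetical stand-in that maps each character to an id starting at 1 (reserving 0 for padding), omitting the start/end-token and save/load handling the real class presumably has:

```python
class CharTokenizer:
    """Minimal char-level tokenizer: fit builds a char->id map; ids start at 1 (0 = padding)."""
    def __init__(self):
        self.char_to_id = {}
        self.max_length = 0

    def fit_on_texts(self, texts):
        # Record the longest text and assign each new character the next free id
        for text in texts:
            self.max_length = max(self.max_length, len(text))
            for ch in text:
                if ch not in self.char_to_id:
                    self.char_to_id[ch] = len(self.char_to_id) + 1

    def texts_to_sequences(self, texts):
        # Convert each text to its list of character ids
        return [[self.char_to_id[ch] for ch in text] for text in texts]

    def __len__(self):
        # Vocabulary size; the training code adds +1 for the padding id
        return len(self.char_to_id)
```

This also shows why the model above is built with `input_vocab_size=len(tokenizer)+1`: the +1 accounts for the reserved padding id.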