
@BMPixel
BMPixel / o1_icl_prompt.txt
Created September 20, 2024 03:50
An ICL prompt that encourages the model to mimic o1's behaviours
The assistant will play the role of an advanced reasoning-based problem-solving expert. The assistant uses a *contemplator* technique to perform extremely detailed and advanced reasoning over problems. Follow these steps to structure your contemplation.
- **Step-by-step reasoning**: Start with the problem and **break it down**, analyzing each detail. Clearly explain your reasoning at each step. Propose hypotheses and test them. If one fails (which is very likely), adjust it and keep exploring. **Break complex steps into simpler ones** to make the process easier to manage.
- **Thought jumps**: If new ideas arise, revisit earlier steps and explain why. When trying different approaches, note why previous hypotheses failed.
- **Heavy self-doubt**: Always assume previous steps contain flaws. Always try your best to spot errors in previous reasoning. NEVER BLINDLY AGREE WITH PREVIOUS REASONING. If a potential solution is found, try your hardest to negate it.
- **Tree-like path search**: Contemplating pro
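One way to use a system prompt like this is to pair it with a user question in an OpenAI-style chat messages list. The sketch below is a minimal, hypothetical illustration of that wiring; the shortened prompt string and the `build_messages` helper are assumptions for the example, not part of the original gist.

```python
# Illustrative sketch: pairing the gist's "contemplator" system prompt
# with a user question, OpenAI chat-messages style. The prompt text here
# is abbreviated; substitute the full prompt from the gist in practice.

CONTEMPLATOR_PROMPT = (
    "The assistant will play the role of an advanced reasoning-based "
    "problem-solving expert, using a *contemplator* technique: "
    "step-by-step reasoning, thought jumps, heavy self-doubt, and "
    "tree-like path search."
)

def build_messages(question: str) -> list[dict]:
    """Pair the contemplator system prompt with a user question."""
    return [
        {"role": "system", "content": CONTEMPLATOR_PROMPT},
        {"role": "user", "content": question},
    ]

messages = build_messages("Is 1009 prime?")
print(messages[0]["role"])  # system
```

The resulting list can be passed as the `messages` argument to any OpenAI-compatible chat completion client.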
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Assistant Generated//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Asia/Hong_Kong
BEGIN:STANDARD
DTSTART:19700101T000000
TZOFFSETFROM:+0800
TZOFFSETTO:+0800
@BMPixel
BMPixel / settings.json
Created June 9, 2024 04:04
Xcode style with APC Custom CSS+ extension
{
  // Remove titlebar
  "apc.header": {
    "height": 37
  },
  "apc.sidebar.titlebar": {
    "height": 37
  },
  "window.titleBarStyle": "native",
  // Transparency
# Main improvements
# - Switched to full finetuning
# Model define
base_model: /cephfs/panwenbo/work/models/Faro-34B
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
# is_qwen_derived_model: true
trust_remote_code: true
chat_template: chatml
# Model define
base_model: 01-ai/Yi-9B-200K
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
trust_remote_code: true
chat_template: chatml
# Data
datasets:
@BMPixel
BMPixel / better_defaults.zsh
Last active October 1, 2024 07:39
This Zsh configuration is designed to be universally adaptable, ready for any further customization.
# ============================================
# Zsh Configuration with better defaults
# Zsh configs that anyone is welcome to incorporate
# No complex prompts, just the essentials:
# - No aliases or keybindings.
# - No need for third-party plugins.
# ============================================
# ========= Completion & History Setup =========
autoload -Uz compinit
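The preview cuts off right after `compinit`. As an illustrative sketch only (these lines are common zsh history options, not the gist's actual truncated content), a "Completion & History Setup" section along these lines typically continues:

```shell
# Hypothetical continuation sketch -- standard zsh history settings,
# not recovered from the truncated gist.
autoload -Uz compinit
compinit

HISTFILE=~/.zsh_history
HISTSIZE=10000
SAVEHIST=10000
setopt HIST_IGNORE_ALL_DUPS   # drop older duplicates of a command
setopt SHARE_HISTORY          # share history across sessions
setopt INC_APPEND_HISTORY     # write each command as it is entered
```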
from collections import Counter
import pickle
from transformers import PreTrainedTokenizerFast
import json
# Load the base tokenizer from the model
base_tokenizer = PreTrainedTokenizerFast.from_pretrained("/cephfs/panwenbo/work/models/Meta-Llama-3-8B")
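The imports above (`Counter`, `pickle`, `json`, `PreTrainedTokenizerFast`) suggest a token-frequency analysis over a corpus. Since the `/cephfs` model path and the rest of the script are unavailable, here is a self-contained sketch of that idea using a whitespace tokenizer stand-in; the `count_token_frequencies` helper and the toy corpus are assumptions for illustration.

```python
# Sketch of a token-frequency count, with str.split standing in for the
# real tokenizer's encode method (the actual model path is unavailable).
from collections import Counter
import json

def count_token_frequencies(texts, tokenize):
    """Count how often each token (or token id) appears in a corpus."""
    freqs = Counter()
    for text in texts:
        freqs.update(tokenize(text))
    return freqs

corpus = ["the cat sat", "the cat ran"]
freqs = count_token_frequencies(corpus, str.split)
print(json.dumps(freqs.most_common(2)))  # [["the", 2], ["cat", 2]]
```

With a real tokenizer, `tokenize` would be something like `base_tokenizer.encode`, and the resulting `Counter` could be serialized with `pickle` or `json` as the original imports hint.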
@BMPixel
BMPixel / fi.yml
Created April 19, 2024 08:45
Faro Yi 9B config
# Main improvements
# - Switched to full finetuning
# Model define
base_model: /cephfs/panwenbo/work/models/Yi-9B-200K
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
# is_qwen_derived_model: true
trust_remote_code: true
chat_template: chatml