ilyar
🖖 this is the way

Tutorial: collect contract gas metrics

PR: ton-community/ton-docs#1294

When writing contracts for the TON blockchain, it's important to consider how efficiently gas is consumed by the logic you implement. In addition, unlike many other blockchains, TON charges not only for computation but also for storing contract data and for forwarding messages between contracts.

Therefore, when developing a contract, pay attention to how the data size and gas consumption change after you modify the contract's behavior or add new functionality.
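As a starting point, gas numbers like the ones in the sample further below can be collected in a @ton/sandbox test. The sketch below is a minimal illustration, not the tutorial's exact tooling: it sends a message from a treasury wallet to itself and reads the compute-phase gas from the resulting transactions; deploying and messaging your own contract wrapper works the same way.

```typescript
// Minimal sketch: read gas usage from @ton/sandbox transactions.
// Sending treasury -> treasury is a placeholder; profile your own
// contract's messages the same way after deploying it.
import { Blockchain } from '@ton/sandbox';
import { toNano } from '@ton/core';

async function measureGas() {
  const blockchain = await Blockchain.create();
  const deployer = await blockchain.treasury('deployer');

  const result = await deployer.send({
    to: deployer.address, // placeholder receiver
    value: toNano('0.05'),
  });

  for (const tx of result.transactions) {
    // gasUsed is only present when the compute phase actually ran the TVM
    if (tx.description.type === 'generic' && tx.description.computePhase.type === 'vm') {
      console.log('gas used:', tx.description.computePhase.gasUsed);
    }
    console.log('total fees:', tx.totalFees.coins);
  }
}

measureGas();
```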

ilyar / tonup.sh
Last active May 23, 2025 07:35
tonup - TON toolchain versions manager
#!/usr/bin/env bash
# tonup - TON toolchain versions manager
# INSTALL
# wget -O tonup.sh https://gist.githubusercontent.com/ilyar/ec7f560d4632cb09a319d60f75a518a2/raw/6ad10f2e33dad7f138cefee441c0ba2dcb83d182/tonup.sh
# cp tonup.sh ~/bin/tonup && chmod +x ~/bin/tonup
# SETUP
# export PATH="$HOME/bin-ton/current:$PATH"
set -eo pipefail # https://vaneyckt.io/posts/safer_bash_scripts_with_set_euxo_pipefail/
{
"comment": "Some label",
"createdAt": "2025-05-07T19:07:22.277Z",
"items": [
{
"address": "EQBiA46W-PQaaZZNFIDglnVknV9CR6J5hs81bSv70FwfNTrD",
"codeHash": "0xd992502b94ea96e7b34e5d62ffb0c6fc73d78b3e61f11f0848fb3a1eb1afc912",
"contractName": "TreasuryContract",
"methodName": "send",
"receiver": "external-in",
ilyar / sample_parsing_a_payload_via_TL-B.md
Last active March 18, 2025 07:07
Sample parsing a payload via TL-B


Short

Create schema.tlb

a a:int256 = A;
b b:MsgAddressInt = B;
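As an illustration of what parsing against this schema can look like in code, here is a sketch with @ton/core, assuming both fields are laid out flat in a single cell in declaration order (the schema itself does not fix the cell layout):

```typescript
// Sketch: build and parse a payload matching the schema above with @ton/core.
// Assumption: a:int256 and b:MsgAddressInt sit in one cell, in that order.
import { Address, beginCell } from '@ton/core';

// Build a sample payload
const payload = beginCell()
  .storeInt(-123n, 256) // a:int256
  .storeAddress(Address.parse('EQBiA46W-PQaaZZNFIDglnVknV9CR6J5hs81bSv70FwfNTrD')) // b:MsgAddressInt
  .endCell();

// Parse it back in the order the schema declares the fields
const slice = payload.beginParse();
const a = slice.loadIntBig(256); // a:int256
const b = slice.loadAddress();   // b:MsgAddressInt
console.log(a, b.toString());
```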
ilyar / tonutils-reverse-proxy.md
Last active November 20, 2024 09:22
System V init script for tonutils-reverse-proxy


1. Install tonutils-reverse-proxy

If you haven't installed tonutils-reverse-proxy yet, follow the instructions at docs.ton.org/develop/dapps/tutorials/how-to-run-ton-site, for example:

curl -fsSL -o tonutils-reverse-proxy https://github.com/ton-utils/reverse-proxy/releases/latest/download/tonutils-reverse-proxy-linux-amd64
chmod +x tonutils-reverse-proxy
mv tonutils-reverse-proxy /usr/local/bin/
{
"p0": "5555555555555555555555555555555555555555555555555555555555555555",
"p1": "3333333333333333333333333333333333333333333333333333333333333333",
"p2": "0000000000000000000000000000000000000000000000000000000000000000",
"p7": [
{
"currency": 239,
"value": "666666666666"
},
{

Mermaid timeline sample:

timeline
    title Roadmap 2023
    section Q3 <br> First Release
        Develop : sub-point 1a : sub-point 1b
                : sub-point 1c
        Bullet 2 : sub-point 2a : sub-point 2b

The COIN Conversation Model

The COIN Conversation Model is a structured approach to feedback and conflict resolution in the workplace. This method was developed to help managers and employees effectively discuss complex issues and reach consensus. The COIN model consists of four key stages:

  1. C (Connect): Start the conversation by establishing rapport and creating an atmosphere of trust. This might include expressing gratitude for the employee's efforts or acknowledging their achievements.
  2. O (Observe): Share your observations without judgment or evaluation. This is factual information about what you've seen or heard.
  3. I (Impact): Explain the implications of your observations for the team, project, or organization. This could include both positive and negative impacts.
  4. N (Next): Discuss the next steps or actions that need to be taken. This might involve corrective actions, new goals, or changes in behavior.

The COIN model can be particularly useful when there's a need to discuss complex issues.

ilyar / llama2-mac-gpu.sh
Created July 19, 2023 19:44 — forked from adrienbrault/llama2-mac-gpu.sh
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
wget "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"