Thalles Rangel (thallesrangel)
:octocat: Dev · Aracruz-ES
@crissilvaeng
crissilvaeng / README.md
Created May 9, 2016 19:40
Commit message standard and guidelines.

Styleguides

Commit message styleguide

  • Use the imperative mood ("Add feature", not "Adding feature" or "Added feature")
  • The first line should be at most 72 characters
  • Consider describing the change in detail in the commit body
  • Consider starting the commit message with an emoji (see the example below)

Emoji | Code | Commit Type
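
A hypothetical commit message that follows these rules; the emoji shortcode and wording are illustrative, not taken from the original gist:

```
:sparkles: Add password reset by email

Allow users to request a password reset link by email.
Links expire after 24 hours and can be used only once.
```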

@jimsrc
jimsrc / gpt4_text_compression.md
Last active July 23, 2025 20:37
Minimizing token usage when interacting with GPT-4.

Overview

I just read about this trick for text compression, meant to save tokens in subsequent interactions during a long conversation, or when summarizing a subsequent long text.

SHORT VERSION:

It's useful to define a mapping from common words (or phrases) in a long text that one intends to pass later to shorter codes, and then pass that text to GPT-4 encoded with that mapping. The idea is that the encoded version contains fewer tokens than the original text. Several techniques can identify frequent words or phrases in a given text, such as raw frequency counts, TF-IDF, named-entity recognition (NER), or part-of-speech (POS) tagging.
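
A minimal sketch of this idea in Python, using a raw frequency count (the simplest of the techniques listed above) to pick candidate words; the function names, thresholds, and the `$0`, `$1`, ... code scheme are my own assumptions for illustration, not from the gist:

```python
import re
from collections import Counter

def build_mapping(text: str, top_n: int = 10, min_len: int = 6) -> dict[str, str]:
    """Map the most frequent long words to short placeholder codes.

    Hypothetical helper: uses a raw frequency count to pick candidates;
    TF-IDF, NER, or POS tagging could be substituted here.
    """
    words = re.findall(r"\b\w+\b", text.lower())
    frequent = [w for w, _ in Counter(words).most_common() if len(w) >= min_len]
    # Codes like $0, $1, ... are short and unlikely to collide with real text.
    return {w: f"${i}" for i, w in enumerate(frequent[:top_n])}

def encode(text: str, mapping: dict[str, str]) -> str:
    """Replace each mapped word with its short code."""
    for word, code in mapping.items():
        text = re.sub(rf"\b{re.escape(word)}\b", code, text, flags=re.IGNORECASE)
    return text

long_text = "The transformer architecture behind GPT-4 stacks transformer blocks..."
mapping = build_mapping(long_text)
compressed = encode(long_text, mapping)
# Send the decoding table once, then the compressed text:
prompt = f"Decoding table: {mapping}\n\n{compressed}"
```

Note that sending the mapping itself costs tokens, so the scheme only pays off when the mapped words or phrases recur often enough in the text that follows.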