@wolfram77
wolfram77 / notes-gpu-accelerated-graph-clustering-via-parallel-label-propagation.md
Last active June 25, 2025 19:41
GPU-Accelerated Graph Clustering via Parallel Label Propagation; Kozawa et al. (2017) : NOTES

My highlighted notes for the following paper:

Kozawa, Y., Amagasa, T., & Kitagawa, H. (2017, November). GPU-accelerated graph clustering via parallel label propagation. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 567-576).

wolfram77 / notes-fast-community-detection-algorithm-with-gpus-and-multicore-architectures.md
Last active June 25, 2025 19:41
Fast Community Detection Algorithm With GPUs and Multicore Architectures; Soman and Narang (2011) : NOTES

My highlighted notes for the following paper:

Soman, J., & Narang, A. (2011, May). Fast community detection algorithm with GPUs and multicore architectures. In 2011 IEEE International Parallel & Distributed Processing Symposium (pp. 568-579). IEEE.

wolfram77 / notes-advances-in-inverse-lithography.md
Last active June 25, 2025 19:41
Advances in Inverse Lithography (2022) : NOTES

My highlighted notes for the following paper:

Cecil, T., Peng, D., Abrams, D., Osher, S. J., & Yablonovitch, E. (2022). Advances in inverse lithography. ACS Photonics, 10(4), 910-918.

wolfram77 / notes-low-latency-graph-streaming-using-compressed-purely-functional-trees.md
Last active June 25, 2025 19:41
Low-Latency Graph Streaming using Compressed Purely-Functional Trees : NOTES

My highlighted notes are below.

wolfram77 / notes-interface-for-sparse-linear-algebra-operations.md
Last active June 25, 2025 19:41
Interface for Sparse Linear Algebra Operations : NOTES

My highlighted notes for the following paper:

Abdelfattah, A., Ahrens, W., Anzt, H., Armstrong, C., Brock, B., Buluc, A., Busato, F., Cojean, T., Davis, T., Demmel, J., & Dinh, G. (2024, November). Interface for sparse linear algebra operations. arXiv preprint arXiv:2411.13259.

wolfram77 / output-pagerank-levelwise-multi-dynamic--8020.log
Last active January 26, 2025 06:34
Comparison of OpenMP- and CUDA-based, Monolithic and Levelwise Dynamic PageRank algorithms : OUTPUT
Loading graph /home/subhajit.sahu/Data/indochina-2004.mtx ...
order: 7414866 size: 194109311 {}
# Batch size 1e-07
- batch update size: 20
- components: 1749035
- blockgraph-levels: 524
- affected-vertices: 7220621
- affected-components: 1721204
order: 7414866 size: 195418449 {} [27803.355 ms; 000 iters.] [0.0000e+00 err.] pagerankMonolithicOmpSplit (static)
wolfram77 / notes-hydetect-a-hybrid-cpu-gpu-algorithm-for-community-detection.md
Last active June 25, 2025 19:41
HyDetect: A Hybrid CPU-GPU Algorithm for Community Detection; Bhowmick and Vadhiyar (2019) : NOTES


  1. The graph is partitioned between the CPU and the GPU.
  2. Louvain is performed independently on each partition to obtain pseudo-communities.
  3. Doubtful vertices, which may not belong to the communities formed on their device, are identified.
  4. Doubtful vertices are exchanged between the devices.
  5. The Louvain algorithm is executed again on each device's subgraph, which now includes the communities formed earlier and the received doubtful vertices.
  6. This yields new communities and a new set of doubtful vertices.
  7. Doubtful vertices are exchanged again.
  8. The graph is coarsened to form a reduced graph of new vertices.
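The partition-and-exchange loop above can be sketched on a toy graph. This is a hypothetical illustration, not the paper's implementation: it substitutes simple label propagation for Louvain, and treats any vertex with a cross-partition neighbor as "doubtful".

```python
# Toy sketch of the HyDetect-style loop (my assumptions, not the paper's code).

def label_propagation(adj, labels, vertices, iters=10):
    """Each vertex adopts the most frequent label among its neighbors."""
    for _ in range(iters):
        for v in vertices:
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            if counts:
                labels[v] = max(counts, key=counts.get)
    return labels

def doubtful(adj, part):
    """Vertices with any neighbor outside their own partition."""
    return {v for v in adj if any(part[u] != part[v] for u in adj[v])}

# Toy graph: two triangles joined by the edge 2--3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
part = {v: 0 if v < 3 else 1 for v in adj}        # step 1: partition
labels = {v: v for v in adj}
for p in (0, 1):                                   # step 2: per-device pass
    vs = [v for v in adj if part[v] == p]
    sub = {v: [u for u in adj[v] if part[u] == p] for v in vs}
    label_propagation(sub, labels, vs)
dv = doubtful(adj, part)                           # steps 3-4: find/exchange
labels = label_propagation(adj, labels, adj)       # step 5: refine with them
```

On this toy input the refinement settles into the two triangles as communities, with the bridge endpoints 2 and 3 being the doubtful vertices.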
wolfram77 / notes-flexgen-high-throughput-generative-inference-of-large-language-models-with-a-single-gpu.md
Last active June 25, 2025 19:41
FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU : NOTES


  1. Motivated by latency-insensitive tasks and the heavy dependence of LLM inference on accelerators.
  2. FlexGen can run on a single commodity system with a CPU, a GPU, and disk.
  3. Solves a linear programming (LP) problem to search for efficient patterns for storing and accessing tensors.
  4. Compresses weights and the attention (KV) cache to 4 bits with minimal accuracy loss, using fine-grained group-wise quantization.
  5. Together, these let FlexGen choose much larger batch sizes, significantly improving throughput.
  6. Running OPT-175B on a 16 GB GPU, FlexGen achieves 1 token/s throughput for the first time.
  7. Runs the HELM benchmark with a 30B model in 21 hours.
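The group-wise quantization in point 4 can be sketched as follows. The group size, min-max rounding scheme, and function names here are my assumptions, not FlexGen's actual kernels; the idea is only that each small group of values gets its own scale, which limits the error of 4-bit storage.

```python
# Hypothetical sketch of fine-grained group-wise 4-bit quantization.

def quantize_group(xs, bits=4):
    """Min-max quantize one group of floats to `bits`-bit integers."""
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / (2**bits - 1) or 1.0   # avoid zero scale
    q = [round((x - lo) / scale) for x in xs]
    return q, lo, scale

def dequantize_group(q, lo, scale):
    return [lo + qi * scale for qi in q]

def quantize(xs, group=4, bits=4):
    """Split into small groups so each gets its own min/scale."""
    return [quantize_group(xs[i:i + group], bits)
            for i in range(0, len(xs), group)]

weights = [0.12, -0.5, 0.33, 0.9, 10.0, 9.5, 9.9, 10.2]
packed = quantize(weights)
restored = [x for g in packed for x in dequantize_group(*g)]
```

Because the second group sits in a narrow range near 10, its per-group scale keeps the reconstruction error small even though the whole tensor spans -0.5 to 10.2.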
wolfram77 / notes-introducing-drift-search-combining-global-and-local-search-methods-to-improve-quality-and-efficiency.md
Last active June 25, 2025 19:42
Introducing DRIFT Search: Combining global and local search methods to improve quality and efficiency : NOTES

It seems DRIFT search uses vector similarity search over the top-k communities at the top-most level of the hierarchy, and then drills down to the lower hierarchy levels. They seem to use follow-up questions for this purpose. The answers are ranked by relevance to the query. Need to check the paper for more details.
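My reading of the drill-down can be sketched as a recursive top-k descent. Everything below is a guess at the mechanism, with made-up names and toy two-dimensional embeddings; the real DRIFT implementation also generates follow-up questions, which this sketch omits.

```python
# Hypothetical sketch: top-k similarity at the top level, then drill down.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def drift_search(query, communities, k=1):
    """communities: list of dicts with 'name', 'vec', optional 'children'."""
    ranked = sorted(communities, key=lambda c: cosine(query, c["vec"]),
                    reverse=True)[:k]
    results = []
    for c in ranked:
        if c.get("children"):                 # drill down one hierarchy level
            results += drift_search(query, c["children"], k)
        else:
            results.append((cosine(query, c["vec"]), c["name"]))
    return sorted(results, reverse=True)      # rank answers by relevance

tree = [
    {"name": "graphs", "vec": [1.0, 0.1], "children": [
        {"name": "clustering", "vec": [0.9, 0.2]},
        {"name": "traversal",  "vec": [0.8, 0.0]},
    ]},
    {"name": "llms", "vec": [0.1, 1.0]},
]
hits = drift_search([1.0, 0.0], tree, k=2)
```

For the query vector `[1.0, 0.0]` the "graphs" community wins at the top level, its children are searched next, and the final ranking is by cosine similarity to the query.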

wolfram77 / notes-graphrag-unlocking-llm-discovery-on-narrative-private-data.md
Last active June 25, 2025 19:42
GraphRAG: Unlocking LLM discovery on narrative private data : NOTES

Unlike Baseline RAG, which uses embedding search from a vector database to find matching query points in the source text, GraphRAG builds a knowledge graph from the text, which is summarized hierarchically based on community clusters.

From what I understand, the knowledge graph is built by extracting entities and relations from the text, and the community clusters are formed based on the similarity of the entities and relations. The hierarchical summarization is done by summarizing the clusters at different levels of abstraction.

GraphRAG seems to use GPT-4-turbo to build the knowledge graph. However, how are the edge weights calculated? Do the summaries generated affect how the weights are calculated in the next hierarchical level?
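One plausible answer to the edge-weight question, sketched below: count entity co-occurrences across text units and use the count as the weight. This is purely my guess, not GraphRAG's confirmed method, and the `extract` function stands in for the LLM entity-extraction step.

```python
# Hypothetical sketch: knowledge graph from per-chunk entity co-occurrence.
from collections import Counter
from itertools import combinations

def build_graph(chunks, extract):
    """extract(chunk) -> set of entity names (an assumed LLM step)."""
    weights = Counter()
    for chunk in chunks:
        ents = sorted(extract(chunk))
        for a, b in combinations(ents, 2):
            weights[(a, b)] += 1          # edge weight = #co-occurrences
    return weights

def summarize(cluster, weights):
    """Stand-in for LLM summarization: report the heaviest edge."""
    edges = [(w, e) for e, w in weights.items()
             if e[0] in cluster and e[1] in cluster]
    w, (a, b) = max(edges)
    return f"{a} relates to {b} (weight {w})"

chunks = ["alice met bob", "alice met bob again", "carol met dave"]
extract = lambda c: {t for t in c.split() if t not in {"met", "again"}}
g = build_graph(chunks, extract)
```

Under this scheme the cluster summaries at one level would not feed back into the weights of the next level, which is exactly the open question above.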
