@m0o0scar
m0o0scar / 📖 Very Large-Scale Multi-Agent Simulation in AgentScope.md
Very Large-Scale Multi-Agent Simulation in AgentScope. Continue this conversation at http://localhost:3000?gist=de7e2d9bfa2f7649c67b9686a4474d0e

[arxiv] Very Large-Scale Multi-Agent Simulation in AgentScope

Source

Xuchen Pan, Dawei Gao, Yuexiang Xie, Zhewei Wei, Yaliang Li, Bolin Ding, Ji-Rong Wen, Jingren Zhou

Recent advances in large language models (LLMs) have opened new avenues for applying multi-agent systems in very large-scale simulations. However, several challenges remain when conducting multi-agent simulations with existing platforms, such as limited scalability and low efficiency, unsatisfactory agent diversity, and effort-intensive management processes. To address these challenges, we develop several new features and components for AgentScope, a user-friendly multi-agent platform, enhancing its convenience and flexibility for supporting very large-scale multi-agent simulations. Specifically, we propose an actor-based distributed mechanism as the underlying technological infrastructure for great scalability and high efficiency, and provide flexible environment support for simulating various real-world scenarios […]

@m0o0scar
m0o0scar / 📖 Very Large-Scale Multi-Agent Simulation in AgentScope.md
Last active July 29, 2024 09:27
Very Large-Scale Multi-Agent Simulation in AgentScope. Continue this conversation at http://localhost:3000?gist=225f94099e4d8ccbbf5c7901fc7f55cd


@m0o0scar
m0o0scar / 📖 pytube_pytube.md
Last active July 29, 2024 08:49
pytube/pytube. Continue this conversation at http://localhost:3000?gist=d406383e28e494c37c05888ba70d2c69

[github] pytube/pytube

Source

Python / 5.1K lines of code. A lightweight, dependency-free Python library (and command-line utility) for downloading YouTube videos.

URL: https://github.com/pytube/pytube

Conversation

[github] opendatalab/MinerU

Source

Python / 25.1K lines of code. A one-stop, open-source, high-quality data extraction tool; supports PDF, webpage, and multi-format e-book extraction.

URL: https://github.com/opendatalab/MinerU

Conversation

@m0o0scar
m0o0scar / 📖 Knowledge Mechanisms in Large Language Models! A Survey and Perspective.md
Created July 26, 2024 09:05
Knowledge Mechanisms in Large Language Models: A Survey and Perspective. Continue this conversation at https://readfm.vercel.app?gist=3e931950d65a4999ad7dd9f2fe1ed5c6

[arxiv] Knowledge Mechanisms in Large Language Models: A Survey and Perspective

Source

Mengru Wang, Yunzhi Yao, Ziwen Xu, Shuofei Qiao, Shumin Deng, Peng Wang, Xiang Chen, Jia-Chen Gu, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen, Ningyu Zhang

Understanding knowledge mechanisms in Large Language Models (LLMs) is crucial for advancing towards trustworthy AGI. This paper reviews knowledge mechanism analysis through a novel taxonomy covering knowledge utilization and evolution. Knowledge utilization delves into the mechanisms of memorization, comprehension and application, and creation. Knowledge evolution focuses on the dynamic progression of knowledge within individual and group LLMs. Moreover, we discuss what knowledge LLMs have learned, the reasons for the fragility of parametric knowledge, and the potential dark knowledge (hypothesis) that will be challenging to address. We hope this work can help understand knowledge in LLMs and provide insights for future research.

URL: https://huggingface.co

@m0o0scar
m0o0scar / 📖 Internal Consistency and Self-Feedback in Large Language Models! A Survey.md
Created July 26, 2024 09:04
Internal Consistency and Self-Feedback in Large Language Models: A Survey. Continue this conversation at https://readfm.vercel.app?gist=77870d6b7221abd339bd623ee1269421

[arxiv] Internal Consistency and Self-Feedback in Large Language Models: A Survey

Source

Xun Liang, Shichao Song, Zifan Zheng, Hanyu Wang, Qingchen Yu, Xunkai Li, Rong-Hua Li, Feiyu Xiong, Zhiyu Li

Large language models (LLMs) are expected to respond accurately but often exhibit deficient reasoning or generate hallucinatory content. To address these issues, studies prefixed with "Self-", such as Self-Consistency, Self-Improve, and Self-Refine, have been initiated. They share a commonality: they involve LLMs evaluating and updating themselves to mitigate the issues. Nonetheless, these efforts lack a unified perspective, as existing surveys predominantly focus on categorization without examining the motivations behind these works. In this paper, we summarize a theoretical framework, termed Internal Consistency, which offers unified explanations for phenomena such as the lack of reasoning and the presence of hallucinations. Internal Consistency assesses the coherence among LLMs' latent layer […]

@m0o0scar
m0o0scar / 📖 Internal Consistency and Self-Feedback in Large Language Models! A Survey.md
Created July 26, 2024 07:50
Internal Consistency and Self-Feedback in Large Language Models: A Survey. Continue this conversation at https://readfm.vercel.app?gist=55451b37cf920cc1f40c3d073e424bd1


@m0o0scar
m0o0scar / 📖 stanford-oval_storm.md
Created July 26, 2024 07:41
stanford-oval/storm. Continue this conversation at http://localhost:3000?gist=1ceffd24c807aec44d2bcc419b42ae73

[github] stanford-oval/storm

Source

Python / 5.5K lines of code. An LLM-powered knowledge curation system that researches a topic and generates a full-length report with citations.

URL: https://github.com/stanford-oval/storm

Conversation