- Never use mocks in tests (see the sketch after this list for testing against a real implementation instead)
- Use IDE diagnostics to find and fix errors
- Always check test coverage after implementation
- Keep track of all tasks in GitHub issues using the gh tool
- Commit every change and keep the GitHub issues updated with progress using the gh tool
- Use tmux to spin off background tasks, read their output, and drive the interaction
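As a concrete illustration of the no-mocks rule, here is a minimal sketch of a test that exercises a real in-memory implementation instead of a mock object. The `Storage` trait and `InMemoryStorage` type are hypothetical names invented for this example only.

```rust
use std::collections::HashMap;

/// Hypothetical storage abstraction used by the code under test.
trait Storage {
    fn put(&mut self, key: &str, value: &str);
    fn get(&self, key: &str) -> Option<String>;
}

/// A real, in-memory implementation used directly in tests instead of a mock.
struct InMemoryStorage {
    items: HashMap<String, String>,
}

impl InMemoryStorage {
    fn new() -> Self {
        Self { items: HashMap::new() }
    }
}

impl Storage for InMemoryStorage {
    fn put(&mut self, key: &str, value: &str) {
        self.items.insert(key.to_string(), value.to_string());
    }

    fn get(&self, key: &str) -> Option<String> {
        self.items.get(key).cloned()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn round_trips_a_value_through_real_storage() {
        // Exercise a real implementation end to end; nothing is mocked.
        let mut storage = InMemoryStorage::new();
        storage.put("answer", "42");
        assert_eq!(storage.get("answer"), Some("42".to_string()));
    }
}
```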
Exported on 20/06/2025 at 8:08:52 BST from Cursor (1.1.3)
User
Using @rust-sdk.md or the client examples at @https://github.com/modelcontextprotocol/rust-sdk/tree/main/examples/clients, create a test for terraphim_mcp_server
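Since the exact client API from the rust-sdk examples is not reproduced here, the following is only a minimal sketch of an integration test that spawns the server binary and speaks raw JSON-RPC over stdio. It assumes the crate builds a binary target named `terraphim_mcp_server`, that the server uses the newline-delimited stdio transport, and that the first line it writes back is the `initialize` response; a real test would more likely drive the typed client from the rust-sdk examples instead.

```rust
// tests/mcp_stdio_smoke.rs — hedged sketch, not the project's actual test.
use std::io::{BufRead, BufReader, Write};
use std::process::{Command, Stdio};

#[test]
fn initialize_handshake_over_stdio() {
    // Assumption: this integration test lives in the package that defines the
    // `terraphim_mcp_server` binary, so Cargo exposes its path via this env var.
    let mut child = Command::new(env!("CARGO_BIN_EXE_terraphim_mcp_server"))
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to spawn terraphim_mcp_server");

    // Standard MCP initialize request, sent as one JSON-RPC message per line.
    let request = r#"{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.1.0"}}}"#;

    {
        let stdin = child.stdin.as_mut().expect("child has no stdin");
        writeln!(stdin, "{request}").expect("failed to write initialize request");
    }

    // Read the first response line and check that it carries a result, not an error.
    let stdout = child.stdout.take().expect("child has no stdout");
    let mut line = String::new();
    BufReader::new(stdout)
        .read_line(&mut line)
        .expect("failed to read initialize response");
    assert!(line.contains("\"result\""), "unexpected response: {line}");

    let _ = child.kill();
}
```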
{
  "id": "Server",
  "global_shortcut": "Ctrl+X",
  "roles": {
    "Engineer": {
      "shortname": "Engineer",
      "name": "Engineer",
      "relevance_function": "terraphim-graph",
      "theme": "lumen",
      "kg": {
fabric -y "https://www.youtube.com/watch?v=JTU8Ha4Jyfc" --stream --pattern write_essay

What Language Models Can and Can't Do
In this fascinating interview, AI researcher François Chollet offers his insights on the capabilities and limitations of modern large language models (LLMs). He argues that while LLMs have achieved impressive performance on many benchmarks, this does not necessarily translate to true intelligence.
Chollet makes the key point that intelligence is fundamentally about the ability to handle novelty - to deal with situations you've never seen before and come up with suitable models on the fly. This is something current LLMs struggle with. If you ask them to solve problems that are significantly different from their training data, they will often fail.
The reason, Chollet explains, is that LLMs are essentially just very sophisticated "interpolative databases." They memorize an enormous number of functions and patterns from their training data, and when queried, they retrieve and combine them.
fabric -y "https://www.youtube.com/watch?v=JTU8Ha4Jyfc" --stream --pattern extract_wisdom

SUMMARY: François Chollet discusses intelligence, the limitations of large language models, and his work on measuring intelligence with the Abstraction and Reasoning Corpus (ARC) in an interview with Tim.
IDEAS:
- Intelligence is the ability to handle novelty and come up with models on the fly.
- Large language models fail at solving problems significantly different from their training data.
- The Abstraction and Reasoning Corpus (ARC) is designed to be resistant to memorization.
- Introspection is effective for understanding how the mind handles system 2 thinking.
- Scale is not all you need in AI; performance increase is orthogonal to intelligence.
{
  "ignored_packages":
  [
    "LSP-rust-analyzer",
    "Rust",
    "Solarized Color Scheme",
    "SublimeLinter",
    "SublimeLinter-flake8",
    "Terminus",
    "Theme - Spacegray",
#!/bin/bash
bucket_name="my-unique-name"

# Create the bucket, discarding the command's JSON output
aws s3api create-bucket --bucket "${bucket_name}" > /dev/null

# Relax the public access block so a public bucket policy can be attached
aws s3api put-public-access-block --bucket "${bucket_name}" --public-access-block-configuration "BlockPublicPolicy=false"

# Attach a policy that grants public read access to objects in the bucket
aws s3api put-bucket-policy --bucket "${bucket_name}" --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
[
  { "keys": ["alt+shift+p"], "command": "show_overlay", "args": {"overlay": "command_palette"} },
  { "keys": ["ctrl+k", "ctrl+a"], "command": "select_all" },
  { "keys": ["ctrl+e"], "command": "move_to", "args": {"to": "eol", "extend": false} },
  { "keys": ["ctrl+a"], "command": "move_to", "args": {"to": "bol", "extend": false} },
  { "keys": ["shift+right"], "command": "lsp_symbol_definition",
    "args": {"side_by_side": false, "force_group": true, "fallback": false, "group": -1},
    "context": [
      {"key": "lsp.session_with_capability", "operand": "definitionProvider"},
      {"key": "auto_complete_visible", "operand": false}
# Editor Config
theme = "catppuccin_macchiato"

[editor]
scrolloff = 6
mouse = true
middle-click-paste = true
scroll-lines = 3
shell = ["zsh", "-c"]
line-number = "absolute"