Note: on legacy Intel systems the path may be /usr/local/etc/clamav/ instead of /opt/homebrew/etc/clamav/.
$ brew install clamav
$ cd /opt/homebrew/etc/clamav/
$ cp freshclam.conf.sample freshclam.conf
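The sample config ships deliberately unusable: freshclam refuses to run until the `Example` line in freshclam.conf is commented out. The usual next steps look something like this (the sed one-liner is just one way to do it, assuming the default Homebrew layout):

$ sed -i '' 's/^Example/#Example/' freshclam.conf
$ freshclam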

import akka.actor.{Actor, FSM}
import java.util.concurrent.TimeUnit

sealed trait Health
case object Stale extends Health
case object Alive extends Health
case object Dead extends Health

case object HeartBeat
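The FSM actor itself is not shown above; here is a minimal sketch of how these states and the HeartBeat message might be wired together (the class name, the Unit state data, and the 5-second timeout are assumptions, not from the original):

import scala.concurrent.duration._

// Hypothetical watchdog: degrades Alive -> Stale -> Dead when no
// HeartBeat arrives within the assumed 5-second window.
class HeartBeatMonitor extends Actor with FSM[Health, Unit] {

  startWith(Alive, ())

  when(Alive, stateTimeout = 5.seconds) {
    case Event(HeartBeat, _)    => stay()
    case Event(StateTimeout, _) => goto(Stale)
  }

  when(Stale, stateTimeout = 5.seconds) {
    case Event(HeartBeat, _)    => goto(Alive)
    case Event(StateTimeout, _) => goto(Dead)
  }

  when(Dead) {
    case Event(HeartBeat, _) => goto(Alive)
  }

  initialize()
}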

/**
 * UPDATE: this trick doesn't work here, because with 'asInstanceOf' the
 * expected type is inferred as Nothing, so the conversion fails at run-time.
 */

/**
 * Let's say you have some ugly casting to do
 */
// this doesn't compile with something like: "expected _, got Any"
function match {
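For context, this is the kind of trick the UPDATE comment refers to (the helper name cast is hypothetical; the point is only the Nothing inference):

// Hypothetical generic cast helper of the kind the UPDATE warns about.
def cast[T](a: Any): T = a.asInstanceOf[T]

val ok = cast[Int](42) // fine: T is explicit
// val ko = cast(42)   // T is inferred as Nothing; asInstanceOf[Nothing]
//                     // throws a ClassCastException at run-time
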
I have been struggling to start a new project with Phoenix 1.3 and the new vue-cli 3 for Vue.js. There are tons of examples already, but none of them suited my needs, because:
Assuming that Elixir and Phoenix 1.3 are both installed, let's build our new app.
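One way the first steps might look (the app name my_app is a placeholder, and --no-brunch is assumed here so that vue-cli can own the asset pipeline instead of Phoenix's default Brunch setup):

$ mix phx.new my_app --no-brunch
$ cd my_app
$ vue create assets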
Thanks to the work of @agraf, @KhaosT, @imbushuo, and others, we have Virtualization.framework working on M1 Macs. These [changes][1] have been merged with QEMU v5.2.0 RC3 (will rebase once the final release is out) and integrated with UTM, a brand new QEMU frontend designed in SwiftUI for iOS 14 and macOS 11.
A pattern for building personal knowledge bases using LLMs.
This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.
Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM rediscovers knowledge from scratch on every question; there is no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time, and nothing it builds is kept. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
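To make that concrete, a minimal sketch of the per-query loop described above (hypothetical types throughout; this illustrates the pattern, not any particular system's API):

// Hypothetical interfaces standing in for a vector store and an LLM.
trait VectorIndex { def search(query: String, k: Int): Seq[String] }
trait Llm         { def generate(prompt: String): String }

// Plain RAG: every call re-retrieves raw chunks and re-synthesizes
// the answer; nothing learned on one call survives to the next.
def answer(question: String, index: VectorIndex, llm: Llm): String = {
  val chunks = index.search(question, k = 5)
  val prompt = s"Context:\n${chunks.mkString("\n")}\n\nQuestion: $question"
  llm.generate(prompt)
}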