April 2026 TLDR setup for Ollama + Gemma 4 on a Mac mini (Apple Silicon) — auto-start, preload, and keep-alive
- Mac mini with Apple Silicon (M1/M2/M3/M4/M5)
- At least 16GB unified memory for Gemma 4 (default 8B)
- macOS with Homebrew installed
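With those prerequisites in place, the auto-start / preload / keep-alive steps can be sketched roughly as below. This is a sketch under assumptions: the exact Gemma model tag is a placeholder (check `ollama list` or the Ollama library for the tag actually available to you), and the keep-alive environment trick may vary by macOS version.

```shell
# Install and register Ollama as a login service (auto-start):
brew install ollama
brew services start ollama

# Preload: pull the model and run it once so the weights are resident
# before the first real request. "gemma3" is a placeholder tag --
# substitute the Gemma tag you actually want.
ollama pull gemma3
ollama run gemma3 ""

# Keep-alive: by default Ollama unloads a model after ~5 minutes of
# inactivity. Setting OLLAMA_KEEP_ALIVE=-1 in the server's environment
# keeps it loaded indefinitely (launchctl setenv is one way to expose
# this to the brew-managed launch agent; restart so it takes effect).
launchctl setenv OLLAMA_KEEP_ALIVE -1
brew services restart ollama
```

The same keep-alive behavior can also be requested per-call via the `keep_alive` field of the Ollama HTTP API.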
This worked on 14/May/23. The instructions will probably require updating in the future.
LLaMA is a text-prediction model, similar to GPT-2 or to a version of GPT-3 that has not yet been fine-tuned. It should also be possible to run fine-tuned variants with this (such as Alpaca or Vicuna, which are more focused on answering questions).
Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.
It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change adds CUDA/cuBLAS support, which lets you pick an arbitrary number of the transformer layers to run on the GPU. This is perfect for low-VRAM setups.
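A hypothetical invocation of the CUDA/cuBLAS build's layer offloading, using llama.cpp's `--n-gpu-layers` flag; the model path and the layer count are placeholders, so tune the number of offloaded layers to whatever fits your VRAM:

```shell
# Offload 32 transformer layers to the GPU; the rest run on the CPU.
# Lower --n-gpu-layers if you run out of VRAM.
./main -m ./models/13B/ggml-model-q4_0.bin \
       --n-gpu-layers 32 \
       -p "Building a website can be done in 10 simple steps:"
```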
| abajgat | |
| abakusz | |
| abál | |
| abált | |
| abaposztó | |
| abárol | |
| abba | |
| abbahagy | |
| abbahagyat | |
{
  "PD-KB401W": {
    "typeNumber": "PD-KB401W",
    "layoutType": 1,
    "colorType": 0,
    "series": 0,
    "layoutTypeName": 1,
    "postfix": "",
    "isKeymapChangeable": true,
    "firmTypeNumber": "AHHX01",
A curated list of AWS resources to prepare for the AWS Certifications
A curated list of awesome AWS resources you need to prepare for all five AWS Certifications. This gist includes: open-source repos, blogs & blog posts, ebooks, PDFs, whitepapers, video courses, free lectures, slides, sample tests, and many other resources.
## VGG19 model for Keras
This is the Keras model of the 19-layer network used by the VGG team in the ILSVRC-2014 competition.
It has been obtained by directly converting the Caffe model provided by the authors.
Details about the network architecture can be found in the following arXiv paper:
Very Deep Convolutional Networks for Large-Scale Image Recognition
K. Simonyan, A. Zisserman
## VGG16 model for Keras
This is the Keras model of the 16-layer network used by the VGG team in the ILSVRC-2014 competition.
It has been obtained by directly converting the Caffe model provided by the authors.
Details about the network architecture can be found in the following arXiv paper:
Very Deep Convolutional Networks for Large-Scale Image Recognition
K. Simonyan, A. Zisserman
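Both networks now ship with `keras.applications`, so a minimal sketch of loading them looks like the following. Note this is the library's bundled port of the VGG weights, not the converted-Caffe weights these gists distribute, and it assumes TensorFlow 2.x with `tf.keras`:

```python
# Build the VGG architectures via tf.keras.applications.
# weights=None builds the network without downloading anything;
# weights="imagenet" fetches the trained ImageNet parameters instead.
from tensorflow.keras.applications import VGG16, VGG19

vgg19 = VGG19(weights=None)   # the 19-layer variant described above
vgg16 = VGG16(weights=None)   # the 16-layer variant
print(vgg19.name, vgg19.output_shape)
```

With `include_top=True` (the default), both models end in the 1000-way ImageNet classification head.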
```julia
function genS_jl(I)
    s0 = 600.0                     # initial price
    r = 0.02                       # risk-free rate
    sigma = 2.0                    # volatility
    T = 1.0                        # horizon in years
    M = 100                        # number of time steps
    dt = T / M
    a = (r - 0.5 * sigma^2) * dt   # per-step drift of log-returns
    b = sigma * sqrt(dt)           # per-step volatility
    # Completion (assumed; the original snippet is truncated here):
    # simulate I geometric Brownian motion paths of M steps each.
    S = s0 .* exp.(cumsum(a .+ b .* randn(M, I); dims=1))
    return vcat(fill(s0, 1, I), S) # (M+1) x I matrix, row 1 = s0
end
```