Pratama Nur Wijaya pratamawijaya

@sezabass
sezabass / Cache-Article-Chunk-Pubspec.yml
Created May 3, 2021 03:50
Cache-Article-Chunk-Pubspec
- name: Cache pubspec dependencies
  uses: actions/cache@v2
  with:
    path: |
      ${{ env.FLUTTER_HOME }}/.pub-cache
      **/.packages
      **/.flutter-plugins
      **/.flutter-plugin-dependencies
      **/.dart_tool/package_config.json
    key: build-pubspec-${{ hashFiles('**/pubspec.lock') }}
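For context, a cache step like this normally sits between Flutter setup and dependency resolution. A minimal sketch of the surrounding steps (these are assumptions for illustration, not part of the gist; subosito/flutter-action is one common way to install Flutter):

- uses: actions/checkout@v2

- name: Set up Flutter
  uses: subosito/flutter-action@v2  # assumed setup action, not from the gist

# ... the "Cache pubspec dependencies" step above goes here ...

- name: Get dependencies
  run: flutter pub get  # close to a no-op on a cache hit

Because the key hashes pubspec.lock, the cache is invalidated exactly when dependencies change.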
@sezabass
sezabass / Cache-Article-Chunk-Build-Runner.yml
Last active April 8, 2023 02:46
Cache-Article-Chunk-Build-Runner
- name: Cache build runner
  uses: actions/cache@v2
  with:
    path: |
      **/.dart_tool
      **/*.g.dart
      **/*.mocks.dart
      **/*.config.dart
    key: build-runner-${{ hashFiles('**/asset_graph.json', '**/*.dart', '**/pubspec.lock', '**/outputs.json') }}
    restore-keys: |
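The restore-keys value is cut off in this gist preview, so the author's exact fallback isn't shown. A common pattern, sketched here as an assumption rather than the author's value, is a key-prefix fallback so a stale cache can still seed code generation, followed by the actual build_runner invocation:

    # hypothetical prefix fallback, not from the gist
    restore-keys: |
      build-runner-

- name: Run build runner
  run: flutter pub run build_runner build --delete-conflicting-outputs

With a prefix fallback, build_runner only regenerates files whose inputs changed since the restored cache was saved.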
@sezabass
sezabass / Cache-Article-Whole-CI-File.yml
Last active April 8, 2023 02:47
Cache-Article-Whole-CI-File
name: PR Verification
on:
  push:
    branches:
      - develop
  pull_request:
jobs:
  pr-verification:
    runs-on: ubuntu-latest
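The preview truncates the workflow after runs-on, so the job body isn't shown. A minimal sketch of how the pieces above typically fit together (step names and actions are assumptions, not the author's file):

    steps:
      - uses: actions/checkout@v2

      - name: Set up Flutter
        uses: subosito/flutter-action@v2  # assumed, not shown in the preview

      # ... the pubspec and build-runner cache steps from the snippets above ...

      - name: Run tests
        run: flutter test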
@othyn
othyn / 00_local_llm_guide.md
Last active August 21, 2025 00:59
Setting up a local only LLM (Qwen/Llama3/etc.) on macOS with Ollama, Continue and VSCode
As with a lot of organisations, the idea of using LLMs is a reasonably frightening concept, as people freely hand over internal IP and sensitive comms to remote entities that are heavily data-bound by nature. I know it was on our minds when deciding on LLMs and their role within the team and wider company. Six months ago, I set out to explore what the offerings were like in the self-hosted and/or OSS space, and whether anything could be achieved locally. After using this setup since then, and after getting a lot of questions about it, I thought I'd share some of the things I've come across and how to get it all set up.

Cue Ollama and Continue. Ollama is an easy way to locally download, manage and run models. It's very similar to Docker in its usage, and can probably be most conceptually aligned with it in how it operates: think images.
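To make the Docker comparison concrete, the basic Ollama workflow looks like this (the model name is just an example):

ollama pull llama3    # download a model locally, analogous to docker pull
ollama run llama3     # chat with it interactively in the terminal
ollama list           # show models available on this machine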