File: gemini_thinking_deep.py
from google import genai
import os

os.environ['GEMINI_API_KEY'] = '<KEY>'

client = genai.Client(
    api_key=os.environ['GEMINI_API_KEY'],
    # the v1alpha API version exposes the experimental "thinking" models
    http_options={'api_version': 'v1alpha'},
)
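# a minimal usage sketch; the "thinking" model name below is an assumption
# based on the experimental models exposed through the v1alpha API
response = client.models.generate_content(
    model='gemini-2.0-flash-thinking-exp',
    contents='In one sentence, why is the sky blue?',
)
print(response.text)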
#uv add browser-use
#playwright install-deps
#playwright install
import asyncio
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from browser_use.browser.browser import Browser, BrowserConfig
from browser_use import Agent
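# a minimal usage sketch; the task string and model name are illustrative
# assumptions, and OPENAI_API_KEY is expected to be provided via .env
load_dotenv()

async def main():
    browser = Browser(config=BrowserConfig(headless=True))
    agent = Agent(
        task="Find the current weather in Berlin",
        llm=ChatOpenAI(model="gpt-4o"),
        browser=browser,
    )
    await agent.run()
    await browser.close()

asyncio.run(main())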
model: mistral:mistral-small-latest
clients:
  - type: gemini
    api_key: <KEY>
  - type: openai
    api_key: <KEY>
  - type: openai-compatible
    name: mistral
    api_base: https://api.mistral.ai/v1
    api_key: <KEY>
#!/bin/bash
# Exit immediately if a command exits with a non-zero status (errexit)
set -e

# Print a message indicating the start of the script
echo "Starting Rust installation script..."

# Download and execute the rustup installer (the official install command)
echo "Downloading and executing rustup installer..."
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
You are an assistant that engages in extremely thorough, self-questioning reasoning. Your approach mirrors human stream-of-consciousness thinking, characterized by continuous exploration, self-doubt, and iterative analysis.

## Core Principles

1. EXPLORATION OVER CONCLUSION
- Never rush to conclusions
- Keep exploring until a solution emerges naturally from the evidence
- If uncertain, continue reasoning indefinitely
- Question every assumption and inference
import asyncio
import base64
import json
import os

import pyaudio
from websockets.asyncio.client import connect

class SimpleGeminiVoice:
    def __init__(self):
        # assumes GEMINI_API_KEY is set in the environment; the model name is
        # an assumption based on the Gemini Live API examples this follows
        self.api_key = os.environ["GEMINI_API_KEY"]
        self.model = "gemini-2.0-flash-exp"
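    async def start(self):
        # a minimal connection sketch; the v1alpha BidiGenerateContent
        # websocket endpoint and setup message follow the public Live API
        # examples, and the exact payload shape here is an assumption
        uri = (
            "wss://generativelanguage.googleapis.com/ws/"
            "google.ai.generativelanguage.v1alpha.GenerativeService."
            f"BidiGenerateContent?key={self.api_key}"
        )
        async with connect(uri) as ws:
            await ws.send(json.dumps({"setup": {"model": f"models/{self.model}"}}))
            print(await ws.recv())  # setup acknowledgement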
This gist demonstrates how to create a custom context provider for the Continue.dev VS Code extension using a FastAPI server. This allows you to integrate external data sources or custom logic into the Continue.dev context.

The provided code consists of two files:

- server.py: a Python file that sets up a FastAPI server with a single endpoint, /check. The endpoint receives data as a JSON object and returns a JSON response with content and description fields.
- requirements.txt: a text file that lists the Python dependencies required to run the FastAPI server.
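A minimal sketch of what server.py might look like; the request field name is an illustrative assumption, since the gist's exact payload isn't shown in this excerpt:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CheckRequest(BaseModel):
    query: str  # hypothetical field name; match the payload Continue.dev sends

@app.post("/check")
def check(req: CheckRequest):
    # return a context item with the content and description fields
    return {
        "content": f"External context for: {req.query}",
        "description": "Custom FastAPI context provider",
    }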
This guide explains how to set Python 3.12 as the default Python version on macOS Sequoia 15.2, so that typing python in the terminal runs Python 3.12. It also covers keeping the ability to call any Python 3.x version using python3.
- /usr/bin/python: the system interpreter, used by the OS; it should not be modified.
- python vs. python3: typically, python points to an older version, while python3 specifically calls Python 3. The goal is to have python point to 3.12 while still being able to use python3 for other 3.x versions, as sketched below.
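A minimal sketch of one common approach, assuming Python 3.12 was installed with Homebrew and zsh is the login shell (the macOS default):

# install Python 3.12 via Homebrew (skip if already installed)
brew install python@3.12

# alias `python` to 3.12, leaving /usr/bin/python and python3 untouched
echo 'alias python="$(brew --prefix)/bin/python3.12"' >> ~/.zshrc
source ~/.zshrc

python --version    # expect: Python 3.12.x
python3 --version   # still whichever python3 is first on PATH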
Before you start, ensure you have installed the Aider-Composer extension from the Visual Studio Code Marketplace:
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    temperature=0.5,
    model='codellama/CodeLlama-34b-Instruct-hf',
    base_url='https://api-inference.huggingface.co/v1/',
    api_key='<KEY>',
)

response = model.invoke(input=[{"role": "user", "content": "What is the color of a flamingo?"}])
print(response.content)