@rajivmehtaflex
Last active December 26, 2024 13:54
Demonstrates the use of Google's Gemini 2.0 Flash model with "thinking" output enabled, showing how the model reveals its thought process while solving a simple math problem.

File: gemini_thinking_deep.py

from google import genai
import os
os.environ['GEMINI_API_KEY'] = '<KEY>'

client = genai.Client(
    api_key=os.environ['GEMINI_API_KEY'],
    http_options={
        'api_version': 'v1alpha',
    })

stream = client.models.generate_content_stream(
    model='gemini-2.0-flash-thinking-exp-1219', 
    contents="What is 2*2-1*4^4"
)
is_thinking = False
print("****** Start thinking... ******")
for chunk in stream:
    for candidate in chunk.candidates:
        for part in candidate.content.parts:
            if part.thought:
                is_thinking = True
            elif is_thinking:  # prints "Finished thinking" when transitioning from thinking to not thinking
                is_thinking = False
                print("\n")
                print("****** Finished thinking... ******")
                print("\n")
        
            print(part.text, end="", flush=True)
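For reference, the expression in the prompt evaluates as follows when `^` is read as exponentiation (Python's `**`; note that `^` itself is bitwise XOR in Python). This is just a local sanity check against the model's answer:

```python
# Standard precedence: ** binds tighter than *, which binds tighter than -,
# so this parses as (2*2) - (1 * (4**4)) = 4 - 256.
result = 2 * 2 - 1 * 4 ** 4
print(result)  # -252
```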

File: gemini_gradio.py

import os
os.environ['GEMINI_API_KEY'] = '<KEY>'  # replace with your own Gemini API key
import gradio as gr
import gemini_gradio

gr.load(
    name='gemini-2.0-flash-exp',
    src=gemini_gradio.registry,
    enable_voice=False,
    title='Gemini 2.0 Flash',
    description='Gemini 2.0 Flash is a model that can understand and generate text, images, and code. It is a large language model trained by Google DeepMind.'
).launch(share=True)

File: gemini_thinking.py

from google import genai
import os
os.environ['GEMINI_API_KEY'] = '<KEY>'  # replace with your own Gemini API key

# create client
client = genai.Client(api_key=os.environ['GEMINI_API_KEY'])

# use Gemini 2.0 with Flash Thinking 
stream = client.models.generate_content_stream(
    model='gemini-2.0-flash-thinking-exp-1219', 
    contents="""Can you crack the code?
9 2 8 5 (One number is correct but in the wrong position)
1 9 3 7 (Two numbers are correct but in the wrong positions)
5 2 0 1 (One number is correct and in the right position)
6 5 0 7 (Nothing is correct)
8 5 2 4 (Two numbers are correct but in the wrong positions)"""
)
for chunk in stream:
    print(chunk.text, end="", flush=True)
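The puzzle itself can be brute-forced locally, which is handy for checking the model's reasoning. A minimal sketch (the clue list mirrors the prompt, assuming the last clue reads "8 5 2 4" and the usual Mastermind-style scoring):

```python
from collections import Counter
from itertools import product

# Each clue: (guess, digits correct AND in position, digits correct but misplaced)
clues = [
    ("9285", 0, 1),
    ("1937", 0, 2),
    ("5201", 1, 0),
    ("6507", 0, 0),
    ("8524", 0, 2),
]

def matches(code, guess, exact, wrong):
    right = sum(c == g for c, g in zip(code, guess))
    # Shared digits as a multiset intersection; misplaced = shared minus exact.
    common = sum((Counter(code) & Counter(guess)).values())
    return right == exact and common - right == wrong

solutions = []
for digits in product("0123456789", repeat=4):
    code = "".join(digits)
    if all(matches(code, g, e, w) for g, e, w in clues):
        solutions.append(code)

print(solutions)  # ['3841'] -- the unique code satisfying all five clues
```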

File: pyproject.toml

[project]
name = "gdata"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
    "gemini-gradio>=0.0.3",
    "google-genai>=0.3.0",
]

How to use these in gist.github.com:

  1. Go to gist.github.com.
  2. Sign in to your GitHub account (anonymous gists are no longer supported); you can then choose between a public and a secret gist.
  3. For each file:
    • Copy the entire code block for the given file.
    • Paste it into the gist editor.
    • Give the file a name that matches the .py or .toml file name (e.g., gemini_thinking_deep.py, pyproject.toml).
  4. Add a description for the gist if desired.
  5. Click "Create public gist" or "Create secret gist".

Now you have your gists hosted on GitHub!
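Gists can also be created programmatically via the GitHub REST API (POST /gists) instead of the web editor. A minimal sketch using the requests library, assuming a personal access token with the gist scope in a hypothetical GITHUB_TOKEN environment variable; the file contents below are placeholders, not the real scripts:

```python
import os

# Placeholder contents for illustration; in practice, read the real files from disk.
files = {
    "gemini_thinking_deep.py": {"content": "# script contents here"},
    "pyproject.toml": {"content": "# project metadata here"},
}

payload = {
    "description": "Gemini 2.0 Flash Thinking examples",
    "public": True,   # False would create a secret gist
    "files": files,
}

token = os.environ.get("GITHUB_TOKEN")
if token:
    import requests  # third-party; pip install requests

    resp = requests.post(
        "https://api.github.com/gists",
        json=payload,
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    resp.raise_for_status()
    print(resp.json()["html_url"])  # URL of the newly created gist
else:
    print("Set GITHUB_TOKEN to create the gist.")
```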
