
@alonsosilvaallende
Last active July 22, 2024 07:04
Ollama-FC
@812781385

I can't get the function call to trigger; the model just answers in plain text:

---- response ---- ChatCompletion(id='chatcmpl-800', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content="I'm sorry, but as an AI language model, I don't have access to real-time information about the current weather. However, you can easily check the current weather in Paris by searching online for a reliable weather website or app, or by checking local news sources. They should be able to provide you with up-to-date information on temperature, precipitation, and other relevant weather conditions.", role='assistant', function_call=None, tool_calls=None))], created=1721613846, model='qwen:32b', object='chat.completion', service_tier=None, system_fingerprint='fp_ollama', usage=CompletionUsage(completion_tokens=78, prompt_tokens=14, total_tokens=92))
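For context, a tool-calling request against a local Ollama server through the OpenAI-compatible API generally looks like the sketch below. This is not the gist's exact code; the `get_current_weather` tool definition and the `base_url` are assumptions based on the usual local Ollama setup.

```python
# Minimal sketch of an OpenAI-compatible tool-calling request against a local
# Ollama server. The tool definition is illustrative, not the gist's code.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # assumes Ollama's default port
    api_key="ollama",  # required by the client, ignored by Ollama
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",  # hypothetical tool for illustration
            "description": "Get the current weather in a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name, e.g. Paris"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="qwen:32b",
    messages=[{"role": "user", "content": "What is the current weather in Paris?"}],
    tools=tools,
)

# When the model's template supports tools, tool_calls is populated;
# in the response above it comes back as None and the model answers in prose.
print(response.choices[0].message.tool_calls)
```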


alonsosilvaallende commented Jul 22, 2024

@812781385 Thank you for trying this gist. It is probably related to the template of the model you are using (qwen:32b?), which has not been updated yet. I have seen three models with updated templates.

You can also try to modify the template yourself (see https://www.gpu-mart.com/blog/custom-llm-models-with-ollama-modelfile), following the templates of the models I mentioned.
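Before writing a custom Modelfile, it can help to inspect the template your local model currently uses and compare it with the templates of the updated, tool-capable models. A minimal sketch, assuming Ollama is running locally on its default port:

```python
# Fetch the prompt template baked into a local Ollama model via the REST API,
# so it can be compared against templates that already support tool calling.
import requests

resp = requests.post(
    "http://localhost:11434/api/show",
    json={"name": "qwen:32b"},
)
resp.raise_for_status()
info = resp.json()

print(info.get("template", ""))  # the model's current prompt template
```

If that template has no tool-handling section, the model will ignore the `tools` parameter and answer in plain text, which matches the output above.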
