#Requires -Module PSAISuite
$feature = 'Implement a GO program to chat with OpenAI models'
$prompt = @'
Based on this engineering spec for `{0}`, write a JIRA ticket that includes:
- problem statement
- context
- goals
- acceptance criteria
- technical notes for implementation.
'@ -f $feature
Write-Host "AI is generating a JIRA ticket for feature: $feature" -ForegroundColor Cyan
$response = Invoke-ChatCompletion -Model openrouter:x-ai/grok-4-fast:free -Message $prompt
$response > jira-ticket.md
Write-Host "JIRA ticket saved to jira-ticket.md`n" -ForegroundColor Green
JIRA Ticket
Summary: Implement a Go Program to Chat with OpenAI Models
Issue Type: Story
Priority: Medium
Assignee: [Unassigned]
Reporter: [Your Name]
Project: [Relevant Project, e.g., AI Integration]
Components: Go Development, AI/ML Integration
Labels: go-lang, openai, chat-interface
Description
Problem Statement
Currently, there is no dedicated Go-based command-line tool or program available in our codebase to interact with OpenAI's chat completion models (e.g., GPT-3.5 or GPT-4). Teams needing to prototype, test, or integrate OpenAI functionalities must rely on manual API calls via tools like curl or use language-specific SDKs in other languages (e.g., Python), which introduces context-switching and inefficiency for Go-centric projects.
Context
This implementation stems from an engineering specification to build a lightweight, reusable Go program for chatting with OpenAI models. It is intended as a foundational tool for internal developers to experiment with AI-driven features, such as natural language processing or automated responses in our applications. The program should support basic conversational interactions via the OpenAI Chat Completions API, allowing for easy extension into larger systems (e.g., integrating with web services or bots). This aligns with our broader goal of enhancing AI capabilities in Go-based microservices without depending on external wrappers.
Key assumptions:
- Access to an OpenAI API key (handled via environment variables for security).
- Focus on the Chat Completions endpoint (`https://api.openai.com/v1/chat/completions`).
- Initial support for text-based models only; future enhancements could include multimodal features.
Goals
- Develop a simple, interactive CLI program in Go that allows users to send messages to OpenAI models and receive streamed or non-streamed responses (see the streaming sketch after this list).
- Ensure the program is secure, efficient, and follows Go best practices (e.g., error handling, modularity).
- Provide a starting point for integration, such as embedding this logic into larger applications or adding features like conversation history.
- Achieve a minimal viable implementation that runs on standard Go environments (Go 1.19+).
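Since the first goal calls out streamed responses, below is a minimal sketch of streaming output, assuming the `github.com/sashabaranov/go-openai` client recommended later in this ticket; names such as `streamReply` are illustrative, not part of the spec.

```go
// stream.go: illustrative streaming output; assumes github.com/sashabaranov/go-openai.
package main

import (
	"context"
	"errors"
	"fmt"
	"io"
	"os"

	openai "github.com/sashabaranov/go-openai"
)

func streamReply(client *openai.Client, model, userMessage string) error {
	stream, err := client.CreateChatCompletionStream(context.Background(), openai.ChatCompletionRequest{
		Model: model,
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: userMessage},
		},
		Stream: true,
	})
	if err != nil {
		return err
	}
	defer stream.Close()

	// Print tokens as they arrive instead of waiting for the full completion.
	for {
		chunk, err := stream.Recv()
		if errors.Is(err, io.EOF) {
			fmt.Println()
			return nil
		}
		if err != nil {
			return err
		}
		if len(chunk.Choices) > 0 {
			fmt.Print(chunk.Choices[0].Delta.Content)
		}
	}
}

func main() {
	client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))
	if err := streamReply(client, openai.GPT3Dot5Turbo, "Hello!"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Non-streamed mode would instead call `client.CreateChatCompletion` and print `resp.Choices[0].Message.Content` once the full reply arrives.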
Acceptance Criteria
- The program compiles successfully using `go build` without external dependencies beyond the standard library and a chosen OpenAI Go client library (e.g., `github.com/sashabaranov/go-openai`).
- The program authenticates with OpenAI using an API key provided via environment variable (`OPENAI_API_KEY`); it fails gracefully with a clear error message if the key is missing or invalid.
- The CLI supports an interactive chat mode: users can input messages (e.g., via stdin), which are sent to a specified model (default: `gpt-3.5-turbo`), and responses are displayed in the terminal.
- Basic error handling is implemented for common issues: network failures, rate limits, invalid inputs, and API errors (e.g., 401 Unauthorized, 429 Too Many Requests).
- The program supports at least one non-interactive mode (e.g., via command-line flags for a single query: `./chatgpt -model gpt-4 -message "Hello, world!"`); see the sketch after this list.
- Unit tests cover core functions (e.g., API client initialization, message sending) with at least 80% coverage using Go's testing package.
- Documentation is included: a README.md with setup instructions, usage examples, and API key configuration; inline comments for complex logic.
- The program handles conversation context: maintains message history across exchanges in interactive mode (up to 10 turns by default).
- Performance: responds to a simple query in under 5 seconds under normal conditions (tested with a valid API key).
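As a rough illustration of the flag-driven non-interactive mode and the graceful `OPENAI_API_KEY` check above, a sketch along these lines might be used; it assumes `github.com/sashabaranov/go-openai`, and the flag names simply mirror the example invocation.

```go
// main.go: minimal single-query mode sketch, not the final implementation.
package main

import (
	"context"
	"flag"
	"fmt"
	"os"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	model := flag.String("model", "gpt-3.5-turbo", "OpenAI model to use")
	message := flag.String("message", "", "single message to send (non-interactive mode)")
	flag.Parse()

	// Fail gracefully if the API key is missing, per the acceptance criteria.
	apiKey := os.Getenv("OPENAI_API_KEY")
	if apiKey == "" {
		fmt.Fprintln(os.Stderr, "error: OPENAI_API_KEY environment variable is not set")
		os.Exit(1)
	}
	if *message == "" {
		fmt.Fprintln(os.Stderr, "error: -message is required in non-interactive mode")
		os.Exit(1)
	}

	client := openai.NewClient(apiKey)
	resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
		Model: *model,
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: *message},
		},
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	if len(resp.Choices) == 0 {
		fmt.Fprintln(os.Stderr, "error: empty response from API")
		os.Exit(1)
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```

Built with `go build -o chatgpt`, this would support the `./chatgpt -model gpt-4 -message "Hello, world!"` invocation from the criteria.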
Technical Notes for Implementation
- Language and Dependencies:
  - Use Go 1.19 or later.
  - Recommended library: `github.com/sashabaranov/go-openai` (add via `go get github.com/sashabaranov/go-openai`). This handles JSON serialization, HTTP requests, and authentication out of the box. If avoiding third-party deps, implement raw HTTP calls using `net/http` and `encoding/json` (see the sketch after this list).
  - For CLI input: use `bufio` for interactive mode or the `flag` package for command-line args.
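If the third-party client is skipped, the raw `net/http` plus `encoding/json` route mentioned above could look roughly like the sketch below; the struct and function names are placeholders, while the JSON field names follow the public Chat Completions API.

```go
// rawchat.go: hedged sketch of a raw Chat Completions call without third-party deps.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string        `json:"model"`
	Messages []chatMessage `json:"messages"`
}

type chatResponse struct {
	Choices []struct {
		Message chatMessage `json:"message"`
	} `json:"choices"`
}

func rawChat(apiKey, model, userMessage string) (string, error) {
	body, err := json.Marshal(chatRequest{
		Model:    model,
		Messages: []chatMessage{{Role: "user", Content: userMessage}},
	})
	if err != nil {
		return "", err
	}

	req, err := http.NewRequest(http.MethodPost,
		"https://api.openai.com/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("OpenAI API returned status %d", resp.StatusCode)
	}

	var parsed chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&parsed); err != nil {
		return "", err
	}
	if len(parsed.Choices) == 0 {
		return "", fmt.Errorf("no choices in response")
	}
	return parsed.Choices[0].Message.Content, nil
}

func main() {
	reply, err := rawChat(os.Getenv("OPENAI_API_KEY"), "gpt-3.5-turbo", "Hello!")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(reply)
}
```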
- Architecture Outline:
  - Main structure: a simple main.go with a CLI entrypoint. Create a `client.go` for OpenAI interactions (initialize the client with the API key from `os.Getenv("OPENAI_API_KEY")`).
  - Chat loop: in interactive mode, use a loop to read user input (`scanner := bufio.NewScanner(os.Stdin)`), append to a message history slice (type: `[]openai.ChatCompletionMessage`), call `client.CreateChatCompletion`, and print the response (handle streaming if enabled via flag); see the sketch after this list.
  - Models: default to `gpt-3.5-turbo`. Support a flag for model selection (e.g., `-model string`).
  - Error Handling: use the `log` package for warnings and `fmt` for user-facing errors. Implement retries for transient errors (e.g., exponential backoff for rate limits).
  - Security: never hardcode API keys; validate key length/format on init. Use HTTPS only.
  - Testing: mock the OpenAI client using interfaces for testability. Use `httptest` if implementing raw HTTP.
  - Edge Cases: empty messages, very long inputs (truncate if over API limits), cancellation via Ctrl+C.
  - Build/Run: ensure cross-platform compatibility (Linux/macOS/Windows). Example build: `go build -o chatgpt`. Run: `./chatgpt` for interactive mode.
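A hedged sketch of the chat loop item above, using `bufio.Scanner`, a `[]openai.ChatCompletionMessage` history, and the 10-turn default cap from the acceptance criteria; the `maxTurns` constant and `runInteractive` function are illustrative names, not part of the spec.

```go
// chatloop.go: illustrative interactive loop, not the final design.
package main

import (
	"bufio"
	"context"
	"fmt"
	"os"

	openai "github.com/sashabaranov/go-openai"
)

const maxTurns = 10 // default history cap from the acceptance criteria

func runInteractive(client *openai.Client, model string) error {
	scanner := bufio.NewScanner(os.Stdin)
	var history []openai.ChatCompletionMessage

	fmt.Println("Type a message and press Enter (Ctrl+C to quit).")
	for {
		fmt.Print("> ")
		if !scanner.Scan() {
			return scanner.Err() // EOF or read error ends the loop
		}
		input := scanner.Text()
		if input == "" {
			continue // skip empty messages (edge case noted above)
		}

		history = append(history, openai.ChatCompletionMessage{
			Role:    openai.ChatMessageRoleUser,
			Content: input,
		})

		resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
			Model:    model,
			Messages: history,
		})
		if err != nil {
			fmt.Fprintln(os.Stderr, "request failed:", err)
			continue
		}
		if len(resp.Choices) == 0 {
			fmt.Fprintln(os.Stderr, "empty response from API")
			continue
		}

		answer := resp.Choices[0].Message.Content
		fmt.Println(answer)
		history = append(history, openai.ChatCompletionMessage{
			Role:    openai.ChatMessageRoleAssistant,
			Content: answer,
		})

		// Keep only the most recent turns (one user plus one assistant message per turn).
		if len(history) > maxTurns*2 {
			history = history[len(history)-maxTurns*2:]
		}
	}
}

func main() {
	client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))
	if err := runInteractive(client, openai.GPT3Dot5Turbo); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```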
- Potential Risks/Considerations:
  - API costs: include a note in the README about token usage.
  - Rate Limits: OpenAI defaults (e.g., 3 RPM for GPT-4); add an optional delay between requests (see the sketch after this list).
  - Extensions: design modularly for future features like temperature control (`-temperature float`), max tokens (`-max-tokens int`), or exporting chat logs.
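For the rate-limit point above, one option is a small exponential-backoff helper like the sketch below. It is a generic wrapper; the retry count and base delay are placeholders, and deciding which errors are actually retryable (e.g., HTTP 429) is left to the real implementation.

```go
// retry.go: illustrative exponential backoff for transient API errors.
package main

import (
	"errors"
	"fmt"
	"time"
)

// withBackoff retries fn up to maxRetries times, doubling the delay after each failure.
func withBackoff(maxRetries int, baseDelay time.Duration, fn func() error) error {
	delay := baseDelay
	var err error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		if attempt == maxRetries {
			break
		}
		fmt.Printf("attempt %d failed (%v); retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // exponential backoff
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxRetries, err)
}

func main() {
	// Dummy call that always fails, just to show the wrapper in action.
	err := withBackoff(3, time.Second, func() error {
		return errors.New("simulated 429 Too Many Requests")
	})
	fmt.Println(err)
}
```

A real call such as `client.CreateChatCompletion` could then be wrapped in `withBackoff(3, time.Second, func() error { ... })`.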
Estimation: 8 story points (4 hours development, 2 hours testing, 2 hours docs/review).
Attachments: [Link to engineering spec document if available].
Linked Issues: None.
In Action
PowerShell-Create-Jira-Tickets.mp4