Maker is an advanced, AI-powered, web-based tool that automates tasks across project creation, building, deployment, and URL mapping. It provides a user-friendly interface for selecting and executing tasks with customizable inputs, uses AI to optimize workflows, and supports community contributions through a marketplace system.
graph TB
User[User] --> |Interacts with| Frontend[Frontend SvelteKit App]
Frontend --> |API Requests| API[API Gateway]
Frontend --> |WebSocket| WS[WebSocket Server]
API --> |Routes requests| Services[Microservices]
WS --> |Real-time updates| Services
Services --> |Read/Write| DB[(PostgreSQL)]
Services --> |Cache| Redis[(Redis)]
Services --> |Executes| Tasks[Task Runner]
Tasks --> |Uses| AI[AI Service]
AI --> |Interfaces with| AIProviders[AI Providers]
Services --> |Manages| Plugins[Plugin System]
Services --> |Integrates| Marketplace[Marketplace]
Tasks --> |Logs| Monitoring[Monitoring System]
- Language: Go (version 1.21 or later)
- Web Framework: Gin (github.com/gin-gonic/gin)
- ORM: GORM (gorm.io/gorm)
- WebSocket: Gorilla WebSocket (github.com/gorilla/websocket)
- Database: PostgreSQL (with Supabase support)
- Logging: Zerolog (github.com/rs/zerolog)
- Configuration: Viper (github.com/spf13/viper)
- AI Integration: tozd/go/fun (gitlab.com/tozd/go/fun)
- Task Runner: Machinery (github.com/RichardKnop/machinery)
- Caching: Redis (github.com/go-redis/redis/v8)
- Framework: SvelteKit
- UI Components: Custom components with Tailwind CSS
- State Management: Built-in Svelte stores
- HTTP Client: Axios
- WebSocket Client: svelte-websocket-store
- Containerization: Docker
- Orchestration: Kubernetes (for scalability)
- CI/CD: GitHub Actions
- Monitoring: Prometheus & Grafana
maker/
├── backend/
│   ├── cmd/
│   │   └── server/
│   │       └── main.go
│   ├── internal/
│   │   ├── api/
│   │   │   ├── handlers/
│   │   │   ├── middleware/
│   │   │   └── routes.go
│   │   ├── config/
│   │   │   └── config.go
│   │   ├── db/
│   │   │   └── db.go
│   │   ├── models/
│   │   ├── services/
│   │   │   ├── ai_service.go
│   │   │   ├── analytics_service.go
│   │   │   ├── cache_service.go
│   │   │   ├── collaboration_service.go
│   │   │   ├── marketplace_service.go
│   │   │   ├── plugin_service.go
│   │   │   ├── task_service.go
│   │   │   └── user_service.go
│   │   └── utils/
│   ├── migrations/
│   ├── plugins/
│   ├── config/
│   │   ├── config.yaml
│   │   └── tasks/
│   │       ├── project.toml
│   │       └── web.toml
│   └── go.mod
├── frontend/
│   ├── src/
│   │   ├── lib/
│   │   ├── routes/
│   │   ├── components/
│   │   └── stores/
│   ├── static/
│   └── svelte.config.js
├── shared/
│   └── api/
│       └── openapi.yaml
└── README.md
- Tasks are defined in backend/config/tasks/*.toml
- Loaded and managed by the backend services
- Each task has a name, description, function, and input fields
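A task file under backend/config/tasks/ might look like the following sketch. The file name, field names, and values are illustrative — they mirror the Task and TaskInput structs, but the actual schema is whatever the backend loader expects:

```toml
# Hypothetical task definition; the schema shown here is an assumption.
name = "create-project"
description = "Scaffold a new project from a template"

[[inputs]]
name = "project_name"
type = "string"
description = "Name of the project to create"
required = true

[[inputs]]
name = "template"
type = "string"
description = "Template to scaffold from"
required = false
```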
type Task struct {
	ID          string
	Name        string
	Description string
	Inputs      []TaskInput
	Execute     func(context.Context, map[string]interface{}) (interface{}, error)
}
type TaskInput struct {
	Name        string
	Type        string
	Description string
	Required    bool
}
type TaskService struct {
	tasks  map[string]*Task
	runner *machinery.Server
}
func (s *TaskService) RegisterTask(task *Task) error {
	s.tasks[task.ID] = task
	// Machinery task functions must use its supported argument types;
	// registration here assumes Execute has been adapted accordingly.
	return s.runner.RegisterTask(task.Name, task.Execute)
}
func (s *TaskService) ExecuteTask(ctx context.Context, taskID string, inputs map[string]interface{}) (interface{}, error) {
	task, exists := s.tasks[taskID]
	if !exists {
		return nil, fmt.Errorf("task not found: %s", taskID)
	}
	// Machinery signatures (machinery/v1/tasks) carry a typed argument
	// list, so the generic input map is marshalled to one JSON string.
	payload, err := json.Marshal(inputs)
	if err != nil {
		return nil, err
	}
	asyncResult, err := s.runner.SendTask(&tasks.Signature{
		Name: task.Name,
		Args: []tasks.Arg{{Type: "string", Value: string(payload)}},
	})
	if err != nil {
		return nil, err
	}
	// Get blocks until the task finishes, polling at the given interval.
	results, err := asyncResult.Get(100 * time.Millisecond)
	if err != nil {
		return nil, err
	}
	return tasks.HumanReadableResults(results), nil
}
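Before a task is dispatched, its inputs should be checked against the declared TaskInput definitions. A minimal, dependency-free sketch — ValidateInputs is an assumed helper, not part of the codebase:

```go
package main

import "fmt"

// TaskInput mirrors the struct used by the task service.
type TaskInput struct {
	Name        string
	Type        string
	Description string
	Required    bool
}

// ValidateInputs checks that every required input has a value.
// Illustrative helper; real validation would also check types.
func ValidateInputs(defs []TaskInput, inputs map[string]interface{}) error {
	for _, def := range defs {
		if _, ok := inputs[def.Name]; def.Required && !ok {
			return fmt.Errorf("missing required input: %s", def.Name)
		}
	}
	return nil
}

func main() {
	defs := []TaskInput{
		{Name: "project_name", Required: true},
		{Name: "template", Required: false},
	}
	err := ValidateInputs(defs, map[string]interface{}{"template": "web"})
	fmt.Println(err) // missing required input: project_name
}
```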
- SvelteKit-based frontend provides a responsive and intuitive interface
- Real-time updates using WebSocket connections
- Task selection, input fields, and execution status displayed in a user-friendly manner
- RESTful API endpoints for task management and execution
- WebSocket support for real-time updates
- Database integration for storing task results and user data
- AI integration for task automation and optimization
- Stored in the database with a flexible schema to support various technologies
- Used for project scaffolding, structure guidance, and documentation
- Easily extensible for adding new technologies or updating existing ones
- JWT-based authentication for secure access to the application
- Role-based access control for managing user permissions
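At the core of JWT-based authentication is an HMAC signature over the token payload. In practice a library such as golang-jwt handles headers, claims, and expiry; the stdlib-only sketch below shows just the signing idea behind HS256 and is not the production implementation:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

// sign produces an HMAC-SHA256 signature over the payload, the core
// mechanism behind JWT's HS256 algorithm.
func sign(payload string, secret []byte) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(payload))
	return base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
}

// verify recomputes the signature and compares in constant time.
func verify(token string, secret []byte) bool {
	parts := strings.SplitN(token, ".", 2)
	if len(parts) != 2 {
		return false
	}
	expected := sign(parts[0], secret)
	return hmac.Equal([]byte(expected), []byte(parts[1]))
}

func main() {
	secret := []byte("dev-only-secret")
	token := `{"sub":"user-42"}` + "." + sign(`{"sub":"user-42"}`, secret)
	fmt.Println(verify(token, secret))     // true
	fmt.Println(verify(token+"x", secret)) // false
}
```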
- Implemented as a backend service with frontend visualization
- Supports concurrent crawling with rate limiting
- Generates JSON output and visualizations of crawled pages
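The crawler's concurrency-with-rate-limiting pattern can be sketched with a shared ticker gating goroutine spawns. The fetch function here is a stand-in for a real HTTP GET, and the whole helper is illustrative rather than the actual crawler:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// crawl visits URLs concurrently while a shared ticker caps the request
// rate. A real crawler would also parse links and emit JSON results.
func crawl(urls []string, perSecond int, fetch func(string) string) []string {
	ticker := time.NewTicker(time.Second / time.Duration(perSecond))
	defer ticker.Stop()

	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		results []string
	)
	for _, u := range urls {
		<-ticker.C // acquire a rate-limit slot before spawning a worker
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			r := fetch(u)
			mu.Lock()
			results = append(results, r)
			mu.Unlock()
		}(u)
	}
	wg.Wait()
	return results
}

func main() {
	urls := []string{"https://a.example", "https://b.example"}
	got := crawl(urls, 100, func(u string) string { return "fetched " + u })
	fmt.Println(len(got)) // 2
}
```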
graph TB
API[API Gateway] --> UserService[User Service]
API --> TaskService[Task Service]
API --> AnalyticsService[Analytics Service]
API --> MarketplaceService[Marketplace Service]
API --> CollaborationService[Collaboration Service]
UserService --> UserDB[(User DB)]
TaskService --> TaskDB[(Task DB)]
AnalyticsService --> AnalyticsDB[(Analytics DB)]
MarketplaceService --> MarketplaceDB[(Marketplace DB)]
TaskService --> MessageQueue[Message Queue]
MessageQueue --> WorkerPool[Worker Pool]
WorkerPool --> AIService[AI Service]
CollaborationService --> Redis[(Redis PubSub)]
- RESTful API Endpoints:
  GET  /api/tasks                     # List all available tasks
  POST /api/tasks/:taskName           # Execute a specific task
  GET  /api/tasks/:taskName/status    # Get the status of a running task
  GET  /api/knowledge-base            # Retrieve knowledge base entries
  POST /api/knowledge-base            # Add a new knowledge base entry
  GET  /api/marketplace               # List marketplace items
  POST /api/marketplace               # Create a new marketplace item
  GET  /api/marketplace/:id           # Get a specific marketplace item
  POST /api/marketplace/:id/install   # Install a marketplace item
- WebSocket Endpoint:
  WS /ws                              # WebSocket connection for real-time updates
- Authentication Endpoints:
  POST /api/auth/login                # User login
  POST /api/auth/refresh              # Refresh authentication token
  POST /api/auth/logout               # User logout
- User interacts with the SvelteKit frontend to select and configure tasks.
- Frontend sends requests to the Gin backend API.
- Backend processes the request, interacts with the database and external services as needed.
- For long-running tasks, status updates are sent via WebSocket to the frontend.
- Results are stored in the database and sent back to the frontend for display.
- Use HTTPS for all communications between frontend and backend.
- Implement rate limiting on the backend to prevent abuse.
- Use prepared statements and parameter binding to prevent SQL injection.
- Sanitize and validate all user inputs on both frontend and backend.
- Implement proper error handling to avoid leaking sensitive information.
- Regularly update dependencies to patch security vulnerabilities.
- Use database indexing and query optimization for faster data retrieval.
- Implement caching mechanisms for frequently accessed data using Redis and in-memory caches.
- Use goroutines for concurrent task execution in the backend.
- Optimize frontend bundle size and implement code splitting in SvelteKit.
- Use a CDN for serving static assets.
- Implement a microservices architecture for better scalability and maintainability.
- Use message queues (e.g., RabbitMQ or Redis) for handling long-running tasks asynchronously.
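The goroutine-based concurrency mentioned above follows the classic worker-pool pattern, the in-process analogue of a message queue feeding workers. A minimal, self-contained sketch:

```go
package main

import (
	"fmt"
	"sync"
)

// workerPool fans jobs out to n goroutines and collects their results:
// the same fan-out/fan-in shape as a message queue feeding a worker pool.
func workerPool(n int, jobs []int, work func(int) int) []int {
	jobCh := make(chan int)
	resCh := make(chan int)

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobCh {
				resCh <- work(j)
			}
		}()
	}
	go func() {
		// Feed jobs, then close so workers can drain and exit.
		for _, j := range jobs {
			jobCh <- j
		}
		close(jobCh)
	}()
	go func() {
		// Close the result channel once all workers are done.
		wg.Wait()
		close(resCh)
	}()

	var results []int
	for r := range resCh {
		results = append(results, r)
	}
	return results
}

func main() {
	results := workerPool(4, []int{1, 2, 3, 4, 5}, func(x int) int { return x * x })
	sum := 0
	for _, r := range results {
		sum += r
	}
	fmt.Println(sum) // 55
}
```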
The plugin system allows for extensibility through dynamically loaded plugins.
graph TB
PluginLoader[Plugin Loader] --> PluginA[Plugin A]
PluginLoader --> PluginB[Plugin B]
PluginLoader --> PluginC[Plugin C]
PluginA --> |Registers Routes| Router[Gin Router]
PluginB --> |Registers Routes| Router
PluginC --> |Registers Routes| Router
PluginA --> |Uses| DB[(Database)]
PluginB --> |Uses| DB
PluginC --> |Uses| DB
PluginA --> |Logs| Logger[Logger]
PluginB --> |Logs| Logger
PluginC --> |Logs| Logger
type Plugin struct {
	Name        string
	Version     string
	Description string
	Init        func(*PluginContext) error
	Cleanup     func() error
	Handlers    map[string]gin.HandlerFunc
}
type PluginContext struct {
	Router *gin.Engine
	DB     *gorm.DB
	Config *viper.Viper
	Logger *zerolog.Logger
}
func LoadPlugins(ctx *PluginContext) error {
	entries, err := os.ReadDir("./plugins")
	if err != nil {
		return err
	}
	for _, entry := range entries {
		if filepath.Ext(entry.Name()) != ".so" {
			continue
		}
		p, err := plugin.Open(filepath.Join("./plugins", entry.Name()))
		if err != nil {
			return err
		}
		sym, err := p.Lookup("Plugin")
		if err != nil {
			return err
		}
		// Use a name that does not shadow the plugin package, and check
		// that the exported symbol has the expected type.
		loaded, ok := sym.(*Plugin)
		if !ok {
			return fmt.Errorf("plugin %s: unexpected type for Plugin symbol", entry.Name())
		}
		if err := loaded.Init(ctx); err != nil {
			return err
		}
		for path, handler := range loaded.Handlers {
			ctx.Router.Any("/plugins/"+loaded.Name+path, handler)
		}
	}
	return nil
}
The AI Service integrates multiple AI providers through the tozd/go/fun package for enhanced task automation.
graph TB
AIService[AI Service] --> OpenAIProvider[OpenAI Provider]
AIService --> CustomProvider[Custom AI Provider]
AIService --> FutureProvider[Future AI Provider]
OpenAIProvider --> |Uses| FunLibrary[tozd/go/fun Library]
CustomProvider --> |Uses| FunLibrary
FutureProvider --> |Uses| FunLibrary
TaskService[Task Service] --> |Requests Completion| AIService
PluginA[Plugin A] --> |Requests Completion| AIService
import (
	"gitlab.com/tozd/go/fun"
	"gitlab.com/tozd/go/fun/openai"
)
type AIService struct {
	providers map[string]fun.LLM
}
func (s *AIService) AddProvider(name string, provider fun.LLM) {
	s.providers[name] = provider
}
func (s *AIService) GetCompletion(ctx context.Context, providerName, prompt string) (string, error) {
	provider, ok := s.providers[providerName]
	if !ok {
		return "", fmt.Errorf("provider not found: %s", providerName)
	}
	response, err := provider.Complete(ctx, fun.ChatRequest{
		Messages: []fun.ChatMessage{
			{
				Role:    fun.ChatMessageRoleUser,
				Content: prompt,
			},
		},
	})
	if err != nil {
		return "", err
	}
	return response.Text(), nil
}
The Marketplace enables sharing and discovery of community-created tasks and workflows.
sequenceDiagram
participant User
participant Frontend
participant MarketplaceService
participant Database
participant PluginSystem
User->>Frontend: Browse Marketplace
Frontend->>MarketplaceService: Request Items
MarketplaceService->>Database: Query Items
Database-->>MarketplaceService: Return Items
MarketplaceService-->>Frontend: Display Items
User->>Frontend: Select Item to Install
Frontend->>MarketplaceService: Request Installation
MarketplaceService->>Database: Retrieve Item Details
Database-->>MarketplaceService: Return Item Details
MarketplaceService->>PluginSystem: Install Plugin
PluginSystem-->>MarketplaceService: Installation Result
MarketplaceService-->>Frontend: Confirm Installation
Frontend-->>User: Display Confirmation
type MarketplaceItem struct {
	ID          uint `gorm:"primaryKey"`
	Name        string
	Description string
	AuthorID    uint
	Category    string
	Version     string
	Content     datatypes.JSON
	CreatedAt   time.Time
	UpdatedAt   time.Time
}
type MarketplaceService struct {
	db *gorm.DB
}
func (s *MarketplaceService) CreateItem(item *MarketplaceItem) error {
	return s.db.Create(item).Error
}
func (s *MarketplaceService) GetItem(id uint) (*MarketplaceItem, error) {
	var item MarketplaceItem
	if err := s.db.First(&item, id).Error; err != nil {
		return nil, err
	}
	return &item, nil
}
func (s *MarketplaceService) ListItems(category string, page, pageSize int) ([]MarketplaceItem, error) {
	var items []MarketplaceItem
	query := s.db.Model(&MarketplaceItem{})
	if category != "" {
		query = query.Where("category = ?", category)
	}
	if err := query.Offset((page - 1) * pageSize).Limit(pageSize).Find(&items).Error; err != nil {
		return nil, err
	}
	return items, nil
}
Implement a comprehensive analytics system using time-series data and real-time processing.
import (
	"github.com/influxdata/influxdb-client-go/v2"
	"github.com/influxdata/influxdb-client-go/v2/api"
)
type AnalyticsService struct {
	influxClient influxdb2.Client
	org          string
	bucket       string
}
func NewAnalyticsService(url, token, org, bucket string) *AnalyticsService {
	client := influxdb2.NewClient(url, token)
	return &AnalyticsService{influxClient: client, org: org, bucket: bucket}
}
func (s *AnalyticsService) TrackEvent(userID string, eventType string, metadata map[string]interface{}) error {
	// WriteAPI is asynchronous: points are batched and flushed in the
	// background. Use WriteAPIBlocking for synchronous error handling.
	writeAPI := s.influxClient.WriteAPI(s.org, s.bucket)
	point := influxdb2.NewPoint(
		eventType,
		map[string]string{"user_id": userID},
		metadata,
		time.Now(),
	)
	writeAPI.WritePoint(point)
	return nil
}
func (s *AnalyticsService) GetUserActivity(userID string, start, end time.Time) (*api.QueryTableResult, error) {
	queryAPI := s.influxClient.QueryAPI(s.org)
	query := fmt.Sprintf(`
		from(bucket:"%s")
			|> range(start: %s, stop: %s)
			|> filter(fn: (r) => r["user_id"] == "%s")
	`, s.bucket, start.Format(time.RFC3339), end.Format(time.RFC3339), userID)
	return queryAPI.Query(context.Background(), query)
}
Implement real-time collaboration using WebSockets and a pub/sub system with Redis.
graph TB
User1[User 1] --> |WebSocket| CollabHub[Collaboration Hub]
User2[User 2] --> |WebSocket| CollabHub
User3[User 3] --> |WebSocket| CollabHub
CollabHub --> |Publish/Subscribe| Redis[Redis PubSub]
CollabHub --> |Store/Retrieve| DB[(Database)]
import (
	"github.com/go-redis/redis/v8"
	"github.com/gorilla/websocket"
)
type CollaborationHub struct {
	clients     map[*websocket.Conn]bool
	broadcast   chan []byte
	register    chan *websocket.Conn
	unregister  chan *websocket.Conn
	redisClient *redis.Client
}
func NewCollaborationHub() *CollaborationHub {
	return &CollaborationHub{
		clients:    make(map[*websocket.Conn]bool),
		broadcast:  make(chan []byte),
		register:   make(chan *websocket.Conn),
		unregister: make(chan *websocket.Conn),
		redisClient: redis.NewClient(&redis.Options{
			Addr: "localhost:6379",
		}),
	}
}
func (h *CollaborationHub) Run() {
	pubsub := h.redisClient.Subscribe(context.Background(), "collaboration")
	defer pubsub.Close()
	go func() {
		for msg := range pubsub.Channel() {
			h.broadcast <- []byte(msg.Payload)
		}
	}()
	for {
		select {
		case client := <-h.register:
			h.clients[client] = true
		case client := <-h.unregister:
			if _, ok := h.clients[client]; ok {
				delete(h.clients, client)
				client.Close()
			}
		case message := <-h.broadcast:
			for client := range h.clients {
				if err := client.WriteMessage(websocket.TextMessage, message); err != nil {
					delete(h.clients, client)
					client.Close()
				}
			}
		}
	}
}
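The hub's fan-out loop can be exercised without a network by putting an interface between the hub and its connections. This is an illustrative refactor for testability, not the shape used in the hub above:

```go
package main

import "fmt"

// conn abstracts the only operation the broadcast loop needs from a
// websocket connection, so the logic can be tested with fakes.
type conn interface {
	Write(msg []byte) error
}

type hub struct {
	clients map[conn]bool
}

// broadcastOnce delivers one message to every client, dropping clients
// whose writes fail, mirroring the loop in CollaborationHub.Run.
func (h *hub) broadcastOnce(msg []byte) {
	for c := range h.clients {
		if err := c.Write(msg); err != nil {
			delete(h.clients, c)
		}
	}
}

// fakeConn records writes, or fails them, for testing.
type fakeConn struct {
	got  [][]byte
	fail bool
}

func (f *fakeConn) Write(msg []byte) error {
	if f.fail {
		return fmt.Errorf("write failed")
	}
	f.got = append(f.got, msg)
	return nil
}

func main() {
	ok, bad := &fakeConn{}, &fakeConn{fail: true}
	h := &hub{clients: map[conn]bool{ok: true, bad: true}}
	h.broadcastOnce([]byte("hello"))
	fmt.Println(len(h.clients), len(ok.got)) // 1 1
}
```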
- Version Control:
  - Use Git for version control
  - Implement a branching strategy (e.g., GitFlow)
  - Main branches: main (production), develop (integration)
  - Feature branches for new features and bug fixes
- Code Quality:
  - Use linters for both Go (e.g., golangci-lint) and JavaScript/TypeScript (ESLint)
  - Enforce code formatting with gofmt for Go and Prettier for JavaScript/TypeScript
  - Implement pre-commit hooks to run linters and formatters
- Testing:
  - Write unit tests for all core functionality
  - Implement integration tests for API endpoints and service interactions
  - Use end-to-end tests for critical user flows
  - Aim for high test coverage (e.g., >80%)
- Continuous Integration (CI):
  - Use GitHub Actions for CI pipelines
  - Run tests, linters, and security scans on every pull request
  - Build and test Docker images as part of CI
- Continuous Deployment (CD):
  - Implement automatic deployment to the staging environment for the develop branch
  - Use a manual approval process for production deployments from the main branch
  - Implement blue-green or canary deployment strategies for zero-downtime updates
- Code Review:
  - Require pull request reviews before merging
  - Use a code review checklist to ensure consistency
  - Encourage pair programming for complex features
- Documentation:
  - Maintain up-to-date API documentation using OpenAPI/Swagger
  - Use inline code comments for complex logic
  - Keep README files updated with setup and contribution guidelines
- Monitoring and Logging:
  - Implement structured logging using Zerolog
  - Set up centralized log aggregation (e.g., the ELK stack)
  - Use Prometheus for metrics collection and Grafana for visualization
- Security:
  - Regularly update dependencies to patch known vulnerabilities
  - Implement security scanning in the CI pipeline (e.g., using Snyk or OWASP ZAP)
  - Conduct periodic security audits
- Performance:
  - Implement performance benchmarks for critical paths
  - Use profiling tools to identify and fix bottlenecks
  - Conduct load testing before major releases
This architecture provides a solid foundation for building Maker, a user-friendly, AI-powered web application for automating development workflows. It leverages the strengths of Go and Gin for the backend, and SvelteKit for a responsive frontend, while maintaining flexibility for future expansions and improvements.
The modular design, with features like the plugin system and marketplace, allows for easy extensibility. The integration of AI services and advanced analytics provides powerful capabilities for task automation and insights.
As the project evolves, this architecture can be adjusted and expanded to meet new requirements and incorporate emerging technologies. Regular reviews and refactoring sessions should be conducted to ensure the architecture remains aligned with the project's goals and maintains its scalability and maintainability.