Clone frappe_docker:

```bash
git clone https://github.com/frappe/frappe_docker.git
cd frappe_docker
```
DevOps Exercise: Scalable Data Pipeline with AWS, Prometheus, Grafana, and SFTP

Objective

Deploy a data ingestion and processing pipeline using Docker Compose, AWS services, and monitoring tools. The pipeline should securely transfer data from an SFTP server, process it, store it in an AWS S3 bucket, and provide real-time monitoring of performance and health. This exercise tests your understanding of containerization, cloud services, monitoring, and secure data handling.

Scenario

You are tasked with building a data pipeline that ingests log files from an external SFTP server, processes them using a custom Python script, and stores the processed data in an AWS S3 bucket. The pipeline needs to be scalable, monitored, and secure.
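The "custom Python script" in the scenario is left open, but its core can be sketched as a small, testable transformation. This is only an illustration: the log format (`timestamp level message`) and function names below are assumptions, not part of the exercise.

```python
import json

def process_log_line(line):
    """Parse one 'timestamp level message' log line into a dict (format is assumed)."""
    timestamp, level, message = line.strip().split(" ", 2)
    return {"timestamp": timestamp, "level": level, "message": message}

def process_log_file(lines):
    """Process an iterable of log lines, skipping blanks, into JSON-ready records."""
    return [process_log_line(line) for line in lines if line.strip()]

# In the full pipeline, the input would arrive over SFTP (e.g. via paramiko)
# and the JSON output would be uploaded to S3 (e.g. via boto3's put_object).
records = process_log_file(["2024-01-01T00:00:00Z INFO service started", ""])
print(json.dumps(records))
```

Keeping the parsing logic in a pure function like this makes it easy to unit-test independently of the SFTP and S3 plumbing.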
import requests
import json

account_id = "1ed--61e"  # redacted in the original
url = f"https://api.cloudflare.com/client/v4/accounts/{account_id}/challenges/widgets"
headers = {
    "X-Auth-Email": "[email protected]",
    "X-Auth-Key": "504--1e6",  # global API key, not an API token (redacted)
}

# List the account's Turnstile widgets
response = requests.get(url, headers=headers)
print(json.dumps(response.json(), indent=2))
DevOps Exercise: Multi-Service Deployment with Docker Compose

Objective

Deploy a scalable, multi-service e-commerce setup using Docker Compose that includes a reverse proxy, product management service, caching, and a database. Your setup should be secure, resilient, and able to handle simulated load. This exercise tests container orchestration, secure configuration management, and performance monitoring.

Scenario

You're setting up the backend infrastructure for a product catalog in an e-commerce platform. The platform includes:

1. API Gateway (Nginx): A reverse proxy that routes requests to the Product Service and caches frequently accessed data.
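The services described above could be wired together in a Docker Compose file along these lines. This is a sketch only: every service name, image, credential, and port here is illustrative and not prescribed by the exercise.

```yaml
version: "3.8"
services:
  gateway:                  # API Gateway (Nginx reverse proxy)
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - product-service
  product-service:          # hypothetical product management service image
    image: example/product-service:latest
    environment:
      - DATABASE_URL=postgres://catalog:catalog@db:5432/catalog
      - CACHE_URL=redis://cache:6379
    depends_on:
      - db
      - cache
  cache:                    # caching layer
    image: redis:7-alpine
  db:                       # database
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=catalog
      - POSTGRES_PASSWORD=catalog   # use Docker secrets instead of plaintext in a real setup
      - POSTGRES_DB=catalog
```

For the "secure configuration management" requirement, the plaintext credentials above would be replaced with Docker secrets or environment files kept out of version control.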
<!DOCTYPE html>
<html>
<head>
<title>Payslip - Consolidated</title>
<style>
table {
    width: 100%;
    border-collapse: collapse;
}
# Define an upstream block for backend services
upstream api_backend {
    server localhost:6565;
}

# HTTP server block to handle requests
server {
    listen 80;
    server_name _;  # catch-all placeholder; the original left the name blank

    location / {
        proxy_pass http://api_backend;  # forward requests to the upstream
    }
}
To set up multiple WordPress instances with separate databases using Docker, you'll need to modify the Docker commands to create additional containers for WordPress and MariaDB. Here's how you can set it up for three websites:

1. Create Docker networks:

```bash
docker network create wordpress-network1
docker network create wordpress-network2
docker network create wordpress-network3
```

2. Create volumes for each WordPress instance and database:
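Continuing the pattern from step 1, the volumes for step 2 might be created like this; the volume names are illustrative, since the original text stops before listing them:

```bash
# One data volume per WordPress instance and one per MariaDB instance (names assumed)
docker volume create wordpress-data1
docker volume create mariadb-data1
docker volume create wordpress-data2
docker volume create mariadb-data2
docker volume create wordpress-data3
docker volume create mariadb-data3
```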
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
)

completion = client.chat.completions.create(  # chat.completions replaces the legacy completion method
    model="gpt-3.5-turbo",
    messages=[  # the messages parameter replaces the old prompt parameter
        {"role": "user", "content": "Hello!"},  # illustrative prompt; the original is truncated here
    ],
)
print(completion.choices[0].message.content)
import React, { useEffect } from 'react';
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls';

function Basic3d() {
  useEffect(() => {
    // Scene
    const scene = new THREE.Scene();
    scene.background = new THREE.Color(0xaaaaaa); // Optional: change scene background
# The script returns a kubeconfig for the service account given
# ref: https://gist.github.com/innovia/fbba8259042f71db98ea8d4ad19bd708
# You need to have kubectl on PATH, with the context set to the cluster you want to create the config for

# Cosmetics for the created config
clusterName=some-cluster

# Your server address goes here; get it via `kubectl cluster-info`
server=https://157.90.17.72:6443

# The Namespace and ServiceAccount name that is used for the config
namespace=kube-system
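The fragment above only sets the variables; in the referenced gist the script goes on to read the ServiceAccount's token Secret, roughly as sketched below. The ServiceAccount name is an assumption, and note that on Kubernetes 1.24+ a token Secret is no longer created automatically, so this lookup assumes one exists:

```bash
serviceAccount=default   # assumption: the ServiceAccount to build the kubeconfig for

# Name of the Secret holding the ServiceAccount's token (assumes a legacy token Secret)
secretName=$(kubectl -n "$namespace" get serviceaccount "$serviceAccount" \
  -o jsonpath='{.secrets[0].name}')

# Bearer token and cluster CA certificate, extracted from the Secret
token=$(kubectl -n "$namespace" get secret "$secretName" \
  -o jsonpath='{.data.token}' | base64 -d)
ca=$(kubectl -n "$namespace" get secret "$secretName" \
  -o jsonpath='{.data.ca\.crt}')
```

These values are then written into the generated kubeconfig's `cluster` and `user` entries.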