Answer:
Use logrotate or a cron job with find + tar. Example:
0 0 * * * /usr/bin/find /var/log/myapp -name '*.log' -mtime +1 -exec gzip {} \;
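A logrotate equivalent could be a drop-in config (a sketch; the log path and retention period are assumptions):
# /etc/logrotate.d/myapp (hypothetical app log path)
/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}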
Scorecard:
- 5 – Understands rotation, retention, cron
- 3 – Suggests manual cleanup only
- 1 – Doesn't understand log management
For: Junior
Answer:
- name: Create user
  hosts: all
  become: yes
  tasks:
    - name: Ensure user exists
      user:
        name: devuser
        state: present
Scorecard:
- 5 – Uses Ansible with the user module
- 3 – Uses a bash loop with SSH
- 1 – Suggests manual creation
For: Junior
Answer:
- Enable encryption, versioning, restrict access via bucket policy.
- Use boto3 or aws s3 cp for upload, e.g.:
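A hedged CLI sketch (bucket and file names are hypothetical):
aws s3api put-bucket-versioning --bucket my-config-bucket --versioning-configuration Status=Enabled
aws s3 cp app.conf s3://my-config-bucket/app.conf --sse AES256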
Scorecard:
- 5 – Mentions bucket policy, encryption
- 3 – Uploads to open bucket
- 1 – Doesn't know IAM or config storage
For: Junior
Answer:
# cron job
0 2 * * * rsync -az /data user@remote:/backups/data
Scorecard:
- 5 – Uses rsync, secure transport
- 3 – Uses scp, less efficient
- 1 – No incremental or schedule awareness
For: Junior
Answer:
#!/bin/bash
pgrep nginx > /dev/null || systemctl restart nginx
Hook into cron every minute.
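A crontab entry for this might be (the script path is an assumption):
* * * * * /usr/local/bin/nginx_watchdog.sh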
Scorecard:
- 5 – Combines pgrep, systemctl, cron
- 3 – Hardcoded restart
- 1 – Doesn't detect failure
For: Junior
Answer:
- Git push triggers pipeline
- Ansible deploys app to VM, installs deps
- systemd unit manages the process (deploy-stage sketch below)
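A minimal sketch of the deploy stage (inventory, playbook, and unit names are assumptions, not a prescribed layout):
# hypothetical CI deploy step: configure the VM, then restart the unit
ansible-playbook -i inventory/prod deploy.yml
ansible all -i inventory/prod -b -m systemd -a 'name=myapp state=restarted daemon_reload=yes'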
Scorecard:
- 5 – Covers Git trigger, Ansible deploy, systemd run
- 3 – Mixes build/deploy logic
- 1 – Doesn't integrate stages
For: Both
Answer:
- Use the ec2 module or Terraform for provisioning (CLI sketch below)
- Ansible installs NGINX and updates the security group
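An equivalent CLI sketch (AMI, key, and security-group IDs are hypothetical placeholders):
aws ec2 run-instances --image-id ami-0abc1234 --count 1 --instance-type t3.micro --key-name deploy --security-group-ids sg-0abc1234
aws ec2 authorize-security-group-ingress --group-id sg-0abc1234 --protocol tcp --port 80 --cidr 0.0.0.0/0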
Scorecard:
- 5 – Understands provisioning and config
- 3 – Only installs package
- 1 – Misses networking/security
For: Both
Answer:
#!/bin/bash
df -h | awk '$5+0 > 80 { print $6 " is above threshold" }' | mail -s "Disk Alert" [email protected]
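Scheduled via cron, e.g. (time and script path are assumptions):
0 8 * * * /usr/local/bin/disk_alert.sh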
Scorecard:
- 5 – Efficient script + mail + cron
- 3 – Alerts without thresholds
- 1 – No script or notification
For: Both
Answer:
- Use ufw or iptables, disable root login, use SSH keys, restrict sudoers.
sudo ufw allow 22
sudo ufw enable
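Disabling root login and password auth might look like this (assumes a stock OpenSSH config; the service name can vary by distro):
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload sshd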
Scorecard:
- 5 – Mentions all areas (SSH, firewall, sudo)
- 3 – Hardens one area only
- 1 – Doesn't mention security features
For: Both
Answer:
import os, boto3

s3 = boto3.client('s3')
log_dir = '/var/log/app'
for name in os.listdir(log_dir):
    path = os.path.join(log_dir, name)
    if os.path.isfile(path):  # skip subdirectories
        s3.upload_file(path, 'my-bucket', f'logs/{name}')
Run it as a cron job, e.g.:
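A hypothetical crontab entry (the script path is an assumption):
0 * * * * /usr/bin/python3 /opt/scripts/upload_logs.py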
Scorecard:
- 5 – Uses boto3, cron, structure
- 3 – Static upload logic
- 1 – Doesn't know how to automate
For: Both
Answer:
- Use EC2 ASG + Launch Template
- ALB routes traffic
- Health checks on /health (CLI sketch below)
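A hedged CLI sketch (group names, target group ARN, and subnet IDs are placeholders):
aws autoscaling create-auto-scaling-group --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-lt,Version='$Latest' \
  --min-size 2 --max-size 6 --health-check-type ELB \
  --target-group-arns arn:aws:elasticloadbalancing:... \
  --vpc-zone-identifier "subnet-aaa,subnet-bbb"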
Scorecard:
- 5 – Mentions ASG, ALB, health check
- 3 – Deploys EC2 only
- 1 – No scaling or load balancer
For: Senior
Answer:
- Upload to primary S3 bucket
- Use cross-region replication
- Sync with aws s3 sync, e.g.:
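For example (bucket name is hypothetical; replication itself is configured once via aws s3api put-bucket-replication):
aws s3 sync /etc/myapp s3://primary-config-bucket/myapp --sse AES256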
Scorecard:
- 5 – Covers sync, IAM, security
- 3 – Uses CLI but no policies
- 1 – Manual copy without safety
For: Senior
Answer:
- Tag small subset of hosts
- Deploy to tagged group
- Verify health, then deploy to the rest (sketch below)
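With Ansible host patterns this might look like (the group name is an assumption):
ansible-playbook site.yml --limit canary          # tagged subset first
ansible-playbook site.yml --limit 'all:!canary'   # remaining hosts after checks pass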
Scorecard:
- 5 – Uses groups, checks, staged rollout
- 3 – Deploys to all at once
- 1 – No rollout logic
For: Senior
Answer:
- Use rsyslog to forward logs (sketch below)
- Central server collects and rotates them
- Secure using TLS or VPN
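A minimal forwarding sketch (central host and port are assumptions; @@ selects TCP):
# /etc/rsyslog.d/90-forward.conf on each node
*.* @@logs.example.com:6514
Then restart rsyslog (sudo systemctl restart rsyslog).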
Scorecard:
- 5 – Uses syslog, rotation, security
- 3 – Hardcodes log copy
- 1 – No centralization logic
For: Senior
Answer:
import psutil, requests

# collect basic host metrics and push them to the monitoring endpoint
data = {'cpu': psutil.cpu_percent(), 'mem': psutil.virtual_memory().percent}
requests.post('https://monitor/api/metrics', json=data)
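Run it on a schedule via cron, or wrap it in a daemon loop (the path is an assumption):
* * * * * /usr/bin/python3 /opt/monitor/push_metrics.py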
Scorecard:
- 5 – Uses psutil, REST, cron or daemon
- 3 – Collects but no push logic
- 1 – Just prints data locally
For: Senior