Answer:
#!/bin/bash
ps aux --sort=-%mem | head -n 6
Scorecard:
- 5 – Uses `ps` with sorting and formatting
- 3 – Uses `top`, can explain idea
- 1 – Doesn’t know `ps` usage
For: Junior
Answer:
#!/bin/bash
DIR="/var/log/myapp"
[ ! -d "$DIR" ] && mkdir -p "$DIR"
Scorecard:
- 5 – Uses `-d`, `mkdir -p` correctly
- 3 – Knows how to create, but misses check
- 1 – Doesn’t know test operators or paths
For: Junior
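When scoring, it helps to know that `mkdir -p` is itself idempotent, so the `-d` test is belt-and-braces rather than required. A quick demonstration (using a hypothetical `/tmp` path as a stand-in for `/var/log/myapp`):

```shell
DIR="/tmp/myapp_demo"   # hypothetical path for illustration
mkdir -p "$DIR"         # creates the directory if missing
mkdir -p "$DIR"         # no-op on an existing directory, still exits 0
echo "exit status: $?"  # exit status: 0
```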
Answer:
find /var/log -name "*.log" -size +10M
Scorecard:
- 5 – Uses `find` with `-size`, `-name`
- 3 – Uses `ls -lh`, less precise
- 1 – Doesn’t know how to check file sizes
For: Junior
Answer:
#!/bin/bash
grep -i "error" "$1" | wc -l
Scorecard:
- 5 – Uses `grep` with `-i`, variable `$1`
- 3 – Hardcodes filename or misses flag
- 1 – Doesn’t know `grep` or `wc`
For: Junior
Answer:
who | awk '{print $1}' | sort | uniq
Scorecard:
- 5 – Combines `who`, `awk`, `uniq` correctly
- 3 – Uses `who`, but redundant logic
- 1 – Doesn’t understand user sessions
For: Junior
Answer:
#!/bin/bash
find /var/log -name "*.log" -mtime +7 -print0 | tar -czvf logs.tar.gz --null -T -
find /var/log -name "*.log" -mtime +7 -delete
Scorecard:
- 5 – Uses `find`, `tar`, `-print0` safely
- 3 – Misses null separator or delete safety
- 1 – Doesn’t know archiving or `find` options
For: Senior
Answer:
#!/bin/bash
PORT=80
ss -tuln | grep ":$PORT"
Scorecard:
- 5 – Uses `ss` or `netstat` with filters
- 3 – Greps `ps`, less accurate
- 1 – Doesn’t know how to check ports
For: Senior
Answer:
#!/bin/bash
df -h | awk '$5+0 > 80 {print "Alert: " $6 " is at " $5}'
Scorecard:
- 5 – Parses `%` properly with `awk`
- 3 – Uses `df`, but static or grep-only logic
- 1 – Doesn’t know `df` or threshold logic
For: Senior
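The `$5+0` coercion in the 5-point answer turns a value like `90%` into the number 90, and the header's `Use%` becomes 0 and is skipped. A sketch on canned `df`-style input:

```shell
printf '%s\n' \
  'Filesystem Size Used Avail Use% Mounted' \
  '/dev/sda1  50G  45G  5G    90%  /' \
  '/dev/sdb1  50G  10G  40G   20%  /data' |
  awk '$5+0 > 80 {print "Alert: " $6 " is at " $5}'
# Alert: / is at 90%
```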
Answer:
#!/bin/bash
if ! pgrep nginx > /dev/null; then
  systemctl restart nginx
fi
Scorecard:
- 5 – Uses `pgrep`, `systemctl` correctly
- 3 – Uses `ps`, may have false positives
- 1 – Doesn’t know how to manage services
For: Senior
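For the 3-point `ps`-based answer, the classic false positive is `grep` matching its own command line; the bracket trick sidesteps it and is worth asking about. A sketch on simulated `ps` output (nginx is not actually running in either snapshot):

```shell
# Simulated "ps aux" lines: the only mention of nginx is grep itself.
naive='user  102  0.0  grep nginx'
trick='user  103  0.0  grep [n]ginx'

# Naive check: grep finds its own command line, a false positive.
printf '%s\n' "$naive" | grep -c nginx            # prints 1 (false positive)

# Bracket trick: the regex [n]ginx matches the text "nginx", but the
# literal "[n]ginx" on grep's own command line does not contain "nginx".
printf '%s\n' "$trick" | grep -c '[n]ginx' || true  # prints 0
```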
10. Create a script to tail a log file in real time, filter only “ERROR” lines, and write them to another file.
Answer:
#!/bin/bash
tail -F /var/log/app.log | grep --line-buffered "ERROR" >> error.log
Scorecard:
- 5 – Uses `tail -F`, `grep`, stream handling
- 3 – Misses `--line-buffered` or redirection
- 1 – Can’t construct pipeline
For: Senior
1. Read a text file line by line and print only lines containing the word “error” (case-insensitive).
Answer:
with open('app.log') as f:
    for line in f:
        if 'error' in line.lower():
            print(line.strip())
Scorecard:
- 5 – Uses file object and string methods cleanly
- 3 – Reads file but inefficient or misses case
- 1 – Doesn’t handle file IO properly
For: Junior
Answer:
def filter_logs(files):
    return [f for f in files if f.endswith('.log')]
Scorecard:
- 5 – Uses list comprehension and string method
- 3 – Uses for-loop instead
- 1 – Fails to filter or hardcodes logic
For: Junior
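The 3-point version of the same filter, spelled out as an explicit loop (the name `filter_logs_loop` is illustrative, not from the original):

```python
def filter_logs_loop(files):
    # Same behavior as the list comprehension, written as an explicit loop.
    result = []
    for f in files:
        if f.endswith('.log'):
            result.append(f)
    return result

print(filter_logs_loop(['app.log', 'notes.txt', 'sys.log']))  # ['app.log', 'sys.log']
```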
Answer:
import json
with open('config.json') as f:
    data = json.load(f)
print(list(data.keys()))
Scorecard:
- 5 – Uses `json.load`, `keys()`
- 3 – Knows json but manually parses
- 1 – Doesn’t know `json` module
For: Junior
Answer:
import os, sys
filename = sys.argv[1]
print(os.path.getsize(filename))
Scorecard:
- 5 – Uses `sys.argv` and `os.path` correctly
- 3 – Hardcodes path or doesn’t handle input
- 1 – Fails on argument handling or path usage
For: Junior
Answer:
try:
    with open('input.txt') as f:
        print(f.read())
except FileNotFoundError:
    print("File not found.")
Scorecard:
- 5 – Uses `try-except` cleanly with specific error
- 3 – Catches generic exception
- 1 – Doesn’t handle errors at all
For: Junior
Answer:
import time
with open('app.log') as f:
    f.seek(0, 2)  # Move to end
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.5)
            continue
        if 'critical' in line.lower():
            print(line.strip())
Scorecard:
- 5 – Implements tailing with filtering correctly
- 3 – Reads file but no loop or filter
- 1 – Misuses file read logic
For: Senior
Answer:
import yaml
with open('config.yml') as f:
    cfg = yaml.safe_load(f)
print(cfg['server']['port'])
Scorecard:
- 5 – Uses `PyYAML` correctly
- 3 – Parses YAML but hardcodes path
- 1 – Doesn’t use external libraries
For: Senior
Answer:
import subprocess
result = subprocess.run(['df', '-h'], stdout=subprocess.PIPE)
print(result.stdout.decode())
Scorecard:
- 5 – Uses `subprocess.run()` cleanly
- 3 – Uses `os.system()` (less secure)
- 1 – Doesn’t know shell execution from Python
For: Senior
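On Python 3.7+, `capture_output=True` with `text=True` removes the manual `decode()`, and `check=True` raises `CalledProcessError` on a non-zero exit instead of failing silently. A sketch using `echo` as a portable stand-in for `df -h`:

```python
import subprocess

# capture_output=True gathers stdout/stderr; text=True decodes them to str;
# check=True raises subprocess.CalledProcessError on a non-zero exit code.
result = subprocess.run(['echo', 'disk report'],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())  # disk report
```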
Answer:
from collections import Counter
with open('input.txt') as f:
    words = f.read().lower().split()
counts = Counter(words)
print(counts)
Scorecard:
- 5 – Uses `Counter`, handles case
- 3 – Uses manual dict, less efficient
- 1 – Fails string split/count logic
For: Senior
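The 3-point manual-dict version the rubric mentions looks like this; `dict.get` with a default stands in for `Counter`:

```python
words = "Error warn ERROR info error".lower().split()

counts = {}
for w in words:
    counts[w] = counts.get(w, 0) + 1  # default 0 for unseen words

print(counts)  # {'error': 3, 'warn': 1, 'info': 1}
```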
Answer:
(Requires `watchdog`)
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
import time

class ConfChangeHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path.endswith('.conf'):
            print(f"{event.src_path} was modified.")

observer = Observer()
observer.schedule(ConfChangeHandler(), path='.', recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
Scorecard:
- 5 – Uses observer pattern properly
- 3 – Knows polling, doesn’t use `watchdog`
- 1 – Doesn’t know how to monitor file changes
For: Senior
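The 3-point polling approach the rubric alludes to can be sketched as an mtime loop (the function name `wait_for_change` is illustrative, not from the original):

```python
import os
import time

def wait_for_change(path, interval=0.5, timeout=None):
    # Record the current modification time, then poll until it changes.
    last = os.stat(path).st_mtime
    start = time.monotonic()
    while timeout is None or time.monotonic() - start < timeout:
        time.sleep(interval)
        current = os.stat(path).st_mtime
        if current != last:
            return current  # file was modified
    return None  # timed out with no change
```

Polling trades CPU wakeups and detection latency for zero dependencies; `watchdog` instead uses OS-level notification (inotify on Linux, FSEvents on macOS).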
Answer:
#!/bin/bash
for host in "$@"; do
  if ssh -o BatchMode=yes -o ConnectTimeout=3 "$host" 'exit' 2>/dev/null; then
    echo "$host is reachable"
  else
    echo "$host is unreachable"
  fi
done
Scorecard:
- 5 – Uses `ssh` flags safely, loops over args
- 3 – Uses `ping` instead of `ssh`
- 1 – Doesn’t understand remote execution
For: Senior
Answer:
#!/bin/bash
remote_file="user@host:/path/to/file"
local_file="/tmp/file"
scp "$remote_file" "$local_file"
remote_sum=$(ssh user@host "sha256sum /path/to/file" | cut -d' ' -f1)
local_sum=$(sha256sum "$local_file" | cut -d' ' -f1)
[[ "$remote_sum" == "$local_sum" ]] && echo "Checksum match" || echo "Mismatch"
Scorecard:
- 5 – Uses `scp`, `sha256sum`, secure compare
- 3 – Fetches file but doesn’t verify
- 1 – Uses insecure logic or skips validation
For: Senior
3. Write a script that logs into a remote server and restarts a service (`nginx`) only if it’s not running.
Answer:
#!/bin/bash
ssh user@host '
  if ! pgrep nginx > /dev/null; then
    sudo systemctl restart nginx
    echo "NGINX restarted"
  else
    echo "NGINX already running"
  fi
'
Scorecard:
- 5 – Correct remote logic with service check
- 3 – Just restarts blindly
- 1 – Can’t form remote conditional logic
For: Senior
Answer:
#!/bin/bash
for host in "$@"; do
  scp -r ./mydir "$host:~/mydir" &
done
wait
echo "All copies done"
Scorecard:
- 5 – Uses background jobs, scp correctly
- 3 – Copies one at a time
- 1 – Doesn’t understand parallel execution
For: Senior
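A follow-up worth probing: a bare `wait` discards each copy's exit status, so failed transfers go unnoticed. Waiting on each PID recovers them; the sketch below uses `true`/`false` as stand-ins for the backgrounded `scp` jobs:

```shell
#!/bin/bash
# Stand-in jobs: in the real script these would be the scp calls.
pids=()
for cmd in true false true; do
  "$cmd" & pids+=($!)
done

status=0
for pid in "${pids[@]}"; do
  wait "$pid" || status=1   # remember any non-zero exit
done
echo "overall status: $status"  # overall status: 1
```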
Answer:
#!/bin/bash
for host in "$@"; do
  ssh "$host" "uptime" > "output_${host}.log" 2>&1 &
done
wait
Scorecard:
- 5 – Understands redirection and concurrency
- 3 – Runs serially or misuses `ssh`
- 1 – Outputs overwrite or fails entirely
For: Senior
1. Write a Python script that uses `paramiko` to run a command (`df -h`) on a remote host and print the output.
Answer:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("host", username="user", password="pass")
stdin, stdout, stderr = ssh.exec_command("df -h")
print(stdout.read().decode())
ssh.close()
Scorecard:
- 5 – Uses `paramiko` securely
- 3 – Runs command but no error handling
- 1 – Doesn’t know SSH in Python
For: Senior
Answer:
import json, requests, os
servers = requests.get("https://api.example.com/servers").json()
for server in servers['hosts']:
    response = os.system(f"ping -c 1 {server}")
    print(f"{server}: {'UP' if response == 0 else 'DOWN'}")
Scorecard:
- 5 – Parses API and uses system command
- 3 – Hardcodes data or misses error cases
- 1 – Can’t tie together HTTP and ping
For: Senior
Answer:
import boto3, os
s3 = boto3.client('s3')
s3.download_file('my-bucket', 'file.txt', '/tmp/file.txt')
remote_size = s3.head_object(Bucket='my-bucket', Key='file.txt')['ContentLength']
local_size = os.path.getsize('/tmp/file.txt')
print("Match" if remote_size == local_size else "Mismatch")
Scorecard:
- 5 – Uses `boto3`, checks metadata
- 3 – Downloads but doesn’t verify
- 1 – Doesn’t understand S3 API basics
For: Senior
Answer:
import paramiko
hosts = ['host1', 'host2']
for h in hosts:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(h, username='user', password='pass')
    _, stdout, _ = ssh.exec_command('df -h /')
    print(f"{h}:\n{stdout.read().decode()}")
    ssh.close()
Scorecard:
- 5 – Loops across multiple hosts cleanly
- 3 – Static host or poor formatting
- 1 – Repeats code or handles one host only
For: Senior
Answer:
import boto3, mimetypes, os
s3 = boto3.client('s3')
for root, _, files in os.walk('./static'):
    for f in files:
        path = os.path.join(root, f)
        key = path[len('./static/'):]
        mime = mimetypes.guess_type(f)[0] or 'binary/octet-stream'
        s3.upload_file(path, 'my-bucket', key, ExtraArgs={'ContentType': mime})
Scorecard:
- 5 – Uses MIME types and traverses recursively
- 3 – Uploads but static types or bad keys
- 1 – Uploads one file only, wrong types
For: Senior