linux: sort -u filename
to remove duplicate lines from a file (output is sorted)
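a quick sketch (names.txt is a placeholder file):
sort -u names.txt        # equivalent to: sort names.txt | uniq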
linux: use LC_ALL=C x ...
instead of x ...
to speed up x, for x in {grep, sed, awk, sort} (the C locale forces byte-wise instead of locale-aware comparisons)
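e.g. ("pattern" and bigfile.txt are placeholders):
LC_ALL=C grep "pattern" bigfile.txt      # instead of: grep "pattern" bigfile.txt
LC_ALL=C sort bigfile.txt                # byte-order sort, typically much faster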
latex: use the bm package and its \bm command for bold math symbols
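a minimal sketch (\mu and \Sigma are just example symbols):
\usepackage{bm}                 % in the preamble
$\bm{\mu}$, $\bm{\Sigma}$       % bold symbols in math mode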
linux: awk '{print $1}' your_file | sort | uniq | wc -l
to count the number of unique values in a column of a file (column 1 here)
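a related sketch if you want the frequency of each value instead of the total count (same placeholder your_file, column 1):
awk '{print $1}' your_file | sort | uniq -c | sort -rn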
slurm: sacct --starttime 2022-09-01 --format=User,JobID,Jobname,partition,state,time,start,end,elapsed,MaxRss,MaxVMSize,nnodes,ncpus,nodelist
to list all jobs since a specific date
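a shorter variant restricted to one user (your_username is a placeholder; -S is the short form of --starttime):
sacct -u your_username -S 2022-09-01 --format=JobID,JobName,State,Elapsed,MaxRSS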
slurm: seff 28387610
to get the CPU and memory utilization of a specific job by its job ID (numbers are only reliable once the job has finished)
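a small sketch to check several jobs in one go (both job IDs are made up):
for job in 28387610 28387611; do seff "$job"; done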
slurm: squeue -u abhisheksharma -o "%.18i %.9P %.45j %.8u %.2t %.10M %.6D %R"
to list the current jobs (running and pending) of a user with a custom output format
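a variant to show only jobs that are actually running, or to refresh the view periodically (username as above):
squeue -u abhisheksharma -t RUNNING
watch -n 30 squeue -u abhisheksharma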