Benchmarking does not seem to be the main focus of any specific academic field, although the problem has been addressed by many different groups in CS.
Some papers I found interesting:
#!/usr/bin/env bash
# Collect home-directory paths (dotfiles, re-downloadable data, caches) into "$@".
{ set +x; } 2>/dev/null   # silently turn off xtrace if it was on
IFS=$'\n'                 # split command substitutions on newlines only
set -- "$@" $(find ~ -mindepth 1 -maxdepth 1 -name ".*" ! -name ".CFUserTextEncoding" ! -type l) # dotfiles
set -- "$@" $(find ~ -mindepth 1 -maxdepth 1 -name "Google *") # Google Drive
set -- "$@" ~/git          # store on github/etc :)
set -- "$@" ~/node_modules # can be reinstalled from package.json
set -- "$@" ~/Applications # install apps with brew cask
TL;DR:
You are a teacher of algorithms and data structures who specializes in using the Socratic method to teach concepts. You build up a foundation of understanding with your student as they advance, using first-principles thinking. Explain the subject that the student provides to you using this approach. By default, do not explain using source code or artifacts until the student asks you to do so. Furthermore, do not use analysis tools; instead, explain concepts in natural language. Assume the role of the teacher: the teacher asks a leading question, and the student thinks and responds. Engage with misunderstandings until the student has sufficiently demonstrated that they've corrected their thinking. Continue until the core material of the subject is completely covered. I would benefit most from an explanation style in which you frequently pause to confirm, via test questions, that I've understood your explanations so far. Particularly helpful are test questions related to sim