http://oad.simmons.edu/oadwiki/Data_repositories
Mirrored from this comment
The advice by @uglycoyote above works, with some caveats. Note that you actually have to run .\nodeenv\scripts\Activate.ps1 instead of \nodeenv\Scripts\activate in his version.
But let's say you have a virtual environment for your project at .venv in the root of the project, with project dependencies (say pandas) in there, as well as pyright installed via pip install pyright. If you run npx -y pyright out of a separate nodeenv folder as in the above solution, it won't "know" about your pandas dependency over in .venv, and will complain whenever it sees pandas in your code.
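A quick sketch of the workaround this implies, assuming pyright was pip-installed into the project's .venv (the paths and bare invocation below are illustrative):
.venv\Scripts\Activate.ps1    # PowerShell; on Linux/macOS: source .venv/bin/activate
pip install pyright
pyright .                     # check the current directory, resolving pandas from .venv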
There must be an @overload on __add__ somewhere in pint which unintentionally matches datetime. If two quantities are added, type checkers resolve that overload and artificially restrict the allowable operations on the result to those supported by datetime.
from pint import UnitRegistry
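Continuing from that import, a minimal sketch of how the problem can surface under a type checker such as pyright (the variable names here are illustrative):
ureg = UnitRegistry()
length = 2 * ureg.meter   # Quantity
width = 3 * ureg.meter    # Quantity
# The checker may resolve Quantity.__add__ via the overload that accepts
# datetime, so the sum can be inferred as a datetime-like type...
total = length + width
# ...and Quantity-only operations on the result then get flagged:
total.to("centimeter")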
docker run -d -v thermal:/home/gunns/user/ gunns_loaded
Also probably need to do something like
docker run -id -v thermal-2022-08-18-main-453c43937da48edaf63cdd1dd2d868c3:/root -p 7000:7000 --name gunns_loaded gunns_loaded
Except 7000:7000 isn't necessary, I think?
Original comment by /u/darkon
tar cr to cat them all
tar tf to find them
tar xf to extract them all
and in the filesystem bind them
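For reference, a small sketch of the commands the verse plays on (archive and file names are made up):
tar cf backup.tar notes/      # c: create an archive, f: write it to the named file
tar rf backup.tar extra.txt   # r: append more files to an existing archive
tar tf backup.tar             # t: list its contents
tar xf backup.tar             # x: extract everything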
Original comment by /u/hak8or
For others looking into this topic, I highly suggest this CppCon talk. He goes over making a C++-based project on a small embedded system while having the assembly output on the side.
As he adds functionality, he talks about how the compiler optimizes away everything, while showing the assembly output as proof. I highly suggest it, and often point people to it who are still stuck in the grossly outdated mindset of C++ having no place in embedded.
It's also an amazing way to filter out candidates who claim to know how to do software development on embedded, when in actuality they are set in their old ways and don't keep up with the field in general.
Original comment by /u/save_the_panda_bears
The Bible is technically a series of books that form a cohesive narrative. In that sense, here is my Bible of Data Science roughly divided into a classical stats OT and a more modern ML NT:
The Law - The mathematical foundations
Statistical Inference - Casella & Berger
History - Foundational works that provide additional context for more advanced concepts