!pip install git+https://github.com/sevamoo/SOMPY.git
idea = """ | |
GitHub Gist can store code snippets, which can be called using their gist url... | |
This is going to be very handy for coding presentations especially when used in combination with | |
Steamlit (https://streamlit.io/). | |
Imagine: you are presenting at a major conference & to reduce your anxiety, you buy yourself | |
this free insurance by following these steps: | |
For each steps in your presentation, create a gist. Then, create another gist, i.e. confX_presentation.py that list each steps. | |
""" | |
print(idea) |
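A minimal sketch of how that could look, assuming a Streamlit app that fetches each step's gist by its raw-content URL; the step names and URLs below are placeholders, not real gists.

import requests
import streamlit as st

# Hypothetical raw URLs, one gist per presentation step (placeholders).
STEP_GISTS = {
    "Step 1 - load the data": "https://gist.githubusercontent.com/<user>/<gist_id>/raw",
    "Step 2 - fit the model": "https://gist.githubusercontent.com/<user>/<other_gist_id>/raw",
}

st.title("confX presentation")
step = st.selectbox("Pick a step", list(STEP_GISTS))
code = requests.get(STEP_GISTS[step], timeout=10).text  # raw gist content is plain text
st.code(code, language="python")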
from sklearn.decomposition import PCA

def get_min_pca(scaled_data, min_var_explained=0.95, verbose=False):
    """
    Decompose `scaled_data` into principal components.
    Return the number of components needed to reach the `min_var_explained`
    threshold, the threshold value and the transformed data.
    Args
    ----
    scaled_data: numpy array: scaled data
    min_var_explained: float, default 0.95: variance explained threshold
    """
    # fractional n_components + svd_solver='full' keeps just enough components
    pca = PCA(n_components=min_var_explained, svd_solver='full')
    transformed_data = pca.fit_transform(scaled_data)
    if verbose:
        print(f"{pca.n_components_} components kept")
    return pca.n_components_, min_var_explained, transformed_data
Scikit-learn's decomposition module includes many algorithms that can be used for dimensionality reduction. PCA and SVD are two such algorithms, and they differ in how they handle their parameters. PCA uses its n_components parameter in different ways depending on its type:
n_components: int, float or 'mle', default=None
Notably:
If 0 < n_components < 1 and svd_solver == 'full', PCA selects the number of components such that the amount of variance that needs to be explained is greater than the percentage specified by n_components.
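As a quick sanity check of that behaviour, here is a usage sketch of get_min_pca on a stand-in dataset (scikit-learn's iris, used here purely for illustration):

from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler

# iris is only a stand-in dataset for the example
X_scaled = StandardScaler().fit_transform(load_iris().data)
n_comp, threshold, X_pca = get_min_pca(X_scaled, min_var_explained=0.95, verbose=True)
print(n_comp, X_pca.shape)  # how many of the original features survive the 95% cut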
I used the command-line approach given in this SO post to upgrade my current git version:
(base) C:\Users\me>git --version
git version 2.34.1.windows.1
(base) C:\Users\me>git update-git-for-windows
import matplotlib.pyplot as plt

sandwich_list = [
    'Lobster roll',
    'Chicken caesar',
    # ...
]
Code source: DETR_panoptic notebook on colab
i will end (because out["pred_masks"][keep] was not saved into a variable, hence we can't access its length). #[...]
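A hedged sketch of the fix implied by that note; the dummy tensors below merely stand in for the notebook's DETR output (in the real notebook, out and keep come from the model and its confidence filter):

import torch

# Dummy stand-ins: out["pred_masks"] has shape [num_queries, H, W],
# keep is a boolean mask selecting the confident predictions.
out = {"pred_masks": torch.rand(100, 200, 300)}
keep = torch.rand(100) > 0.85

kept_masks = out["pred_masks"][keep]   # saved into a variable this time...
print(kept_masks.shape[0])             # ...so its length is accessible
for i in range(kept_masks.shape[0]):
    mask = kept_masks[i]               # process / plot each mask here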