Selected contributions:

- microsoft/python-type-stubs: Incorrect return type on `matplotlib.pyplot.subplot_mosaic`
- iterative/dvc: plots: support svg
- w4po/ExplorerTabUtility: Version 2.0 detected as trojan by windows defender
- w4po/ExplorerTabUtility: Thanks for the great tool, loving v2! However, Windows quarantines it as a "trojan", code signing may help
- microsoft/pylance-release: Feature request: Implement GitHub Action to set up and run `pyright` exactly as Pylance would do in VSCode
- microsoft/vscode: Associate a keyboard shortcut with the new "Fuzzy Match" toggle button
- MathWorks-Teaching-Resources/Programming-A-Starter-Project-Using-MATLAB-and-Python: Ignore API key file from tutorial
- microsoft/vscode-python-debugger: Disabling `debugpy.debugJustMyCode` in `settings.json` doesn't apply to the `debug-test` `purpose` in `launch.json`
- pydantic/pydantic#4072
In #3972, `# pyright: ignore` was added in multiple places in the docs, and `# pylance: ignore` only once. I believe it's a typo, as AFAIK such a typing-ignore comment flag doesn't exist. Related PR: #3972.
- Unit tests for the changes exist
- Tests pass on CI and coverage remains at 100%
- Documentation reflects the changes where applicable
- `changes/<pull request or issue id>-<github username>.md` file added describing change (see changes/README.md for details)
- My PR is ready to review, please add a comment including the phrase "please review" to assign reviewers
Fixes #1.

Also add pyright to tox and tool config in `pyrightconfig.json` to test this fix.

Introducing the dependency `typing_extensions` requires dropping support for Python 3.6.

Note that the new tox env `pyright` is flaky on Windows due to RobertCraigie/pyright-python#45, but works as expected on Linux (tested in GitHub Codespaces). The pyright tox env purposefully typechecks `tests/test_datadict.py`, because that is where the error `error: Expected no arguments to "Point" constructor (reportGeneralTypeIssues)` pops if the `@dataclass_transform()` decorator is missing (or commented out) in `datadict/datadict.py`.
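Not part of the PR itself, but a minimal self-contained sketch of the mechanism at play; the `datadict` decorator and `Point` class below are toy stand-ins, not the real library code:

```python
# Sketch of why @dataclass_transform() matters for pyright: it tells the type
# checker that the decorated factory synthesizes __init__ from annotations.
# The datadict/Point names here are hypothetical stand-ins for the real code.
try:
    from typing import dataclass_transform  # Python 3.11+
except ImportError:  # typing_extensions backports it to older Pythons
    def dataclass_transform(**kwargs):
        return lambda obj: obj


@dataclass_transform()
def datadict(cls):
    """Toy decorator: synthesize __init__ from the class annotations."""
    fields = list(cls.__annotations__)

    def __init__(self, *args, **kwargs):
        for name, value in zip(fields, args):
            setattr(self, name, value)
        for name, value in kwargs.items():
            setattr(self, name, value)

    cls.__init__ = __init__
    return cls


@datadict
class Point:
    x: int
    y: int


p = Point(x=1, y=2)
print(p.x, p.y)  # 1 2
```

Without the decorator, pyright sees no synthesized `__init__` and reports the quoted `Expected no arguments to "Point" constructor` error at call sites like `Point(x=1, y=2)`.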
- microsoft/python-type-stubs#238
- A very early PR of mine as I started using type annotations more heavily. I actually need to fix this again, as I used a `String` dtype instead of `str`! Whenever I use `subplot_mosaic` in VSCode, I chuckle at the yellow squiggly that's due to my merged PR (it was a yellow squiggly before, but for a different reason)
The return type is `dict[Text, Axes]`. Since this is the functional API, it returns a tuple of figure and axes dictionary, so it should be `tuple[Figure, dict[Text, Axes]]`. I can submit a PR on this, if you'd like. Is it alright to use `Figure` as the return type, or should I annotate it with some other abstract superclass?

`def subplot_mosaic( ... ) -> tuple[Figure, dict[Text, Axes]]: ...`
Fixes #4109.
Note
I'd open a PR for this minor change, but I'm pointing out two potential changes and don't want to bundle them into one commit in case only one change or the other is necessary.
This bug is related to the functionality introduced in #17. In the linked code below, it looks like the `if (matches2) {...}` clause is indented too shallowly, blocking the rest of the function from returning.

Also, maybe `citekeyRegex3` could be malformed? Per experimentation with https://regexr.com/, the `^` after the opening `\[` seems to be operating as a "beginning" anchor rather than a "negated set". Below I suggest an update to that regex which matches my cursory testing, but you could be trying to negate something related to carets or slashes that I'm not quite understanding at the beginning of your regex.

/\[^\^([^\s].*)\]\(/ig # Current citekeyRegex3
/\[([^\s].*)\]\(/ig # Suggested citekeyRegex3
Here is an example of one such Markdown link that is not being detected, but in fact all such links are undetected:
[aderaNonwettingDropletsHot2013](notes/aderaNonwettingDropletsHot2013.md)
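To double-check this outside regexr, here's a quick sketch in Python's `re` (the patterns are the same minus the `/.../ig` delimiters, which don't matter for this case): the mid-pattern `^` acts as a start-of-string anchor right after the `[` has been consumed, so the current pattern can never match.

```python
import re

link = "[aderaNonwettingDropletsHot2013](notes/aderaNonwettingDropletsHot2013.md)"

current = re.compile(r"\[^\^([^\s].*)\]\(", re.IGNORECASE)   # citekeyRegex3 today
suggested = re.compile(r"\[([^\s].*)\]\(", re.IGNORECASE)    # proposed fix

print(current.search(link))              # None: the ^ anchor can't match mid-string
print(suggested.search(link).group(1))   # aderaNonwettingDropletsHot2013
```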
Platform
- OS: Windows 10 Pro 10.0.19045
- Obsidian version v1.1.9
- Plugin version v1.1.1
anoopkcn/obsidian-reference-map: Finding citekey from links without prefix not working for Markdown links
Fixes #43. There were further deprecations of the use of `mplcm.get_cmap` and `mplcm.register_cmap`, which I have replaced with their recommended modern calls to `colormap.get_cmap` and `colormap.register`, respectively.
MathWorks-Teaching-Resources/Programming-A-Starter-Project-Using-MATLAB-and-Python: Ignore API key file from tutorial
Just a one-line change to the `.gitignore`, in case users are following this tutorial in a fork or clone of the repo, say, connected to their account. This should avoid the accidental commit of the user's API key to a public repo.

Notice I made this edit from the GitHub UI, and I think it must've automatically coerced the whitespace at the end of the `.gitignore` as well, so that's why you see two lines changed.
Just getting set up to implement this feature, no work done yet.
Closes #86
- Performed a self-review of my code
- Formatted my code with `pkgmt format`
- Added tests (when necessary).
- Added docstring documentation and update the changelog (when needed)
📚 Documentation preview 📚: ploomber-engine--88.org.readthedocs.build/en/88
Looks like this line got prefixed with `notebook` despite the setting targeting non-notebook files. Just a one-line change, so I hope it's alright I didn't raise an issue on it. Fixes what looks like a one-line error introduced in astral-sh/ruff-vscode#379?
Just a minor typo I saw while reading MathJax docs. Changed "ensurea" to "ensures".
This also happened to me, and I've given some detail on this over in a discussion as well. (I wasn't sure about the dev's preference for issues over discussions and decided to err on the side of creating a discussion; I can close it if desired to avoid duplicating info.) There I recommend obtaining a certificate and signing the binaries as a possible long-term solution, and also describe a workaround for the time being (see quoted bit below). I'll quote the relevant bits here, and leave the discussion to cater towards discussing the nuances of code signing as a potential way forward.
Code signing costs at least $120/yr, though, so it's not a free proposition, and for a hobby project it may be unreasonable to expect this kind of investment. Maybe it's a reasonable stretch goal to encourage GitHub sponsoring (I just sponsored at $1/mo!) or donations to keep the code signing bill paid! It's technically possible to craft an EXE that dodges antivirus heuristics, but it's brittle and not really a viable solution; then again, I may be under-informed on that matter.
...
Please note that my sponsorship is not contingent on whether or not you decide to implement code signing.
I've found that Windows Security heuristics flag the new binary as `Trojan:Win32/Bearfoos.A!ml` and quarantine it. I've posted a workaround below, but a potential long-term solution would be to digitally sign the binary, which used to be a pretty expensive endeavor (DigiCert, OV/EV keys, etc., $200+/yr), but is becoming more accessible as of late with Azure Trusted Signing, a $10/mo. proposition. (...SNIP...visit the discussion link for more detail on code signing...)
Install via `winget` on a machine that was not used to build the binary itself, with default Windows Security settings: `winget install --exact --dependency-source 'winget' --id 'w4po.ExplorerTabUtility'`

Winget extracts the binary to `%TEMP%/WinGet/w4po.ExplorerTabUtility.2.0.0/extracted/ExplorerTabUtility/ExplorerTabUtility.exe`, where it is quarantined before the process can finish, erroring out the command. Manually visiting "Virus & threat protection" (e.g. from Start menu search), then clicking "Protection history" under "Current threats", expanding the latest "Severe" threat, then clicking "Restore" will allow the threat next time, so the above `winget` command must be run one more time, and the installer will complete successfully.

The target of the newly-added `expltab` CLI alias, `%LOCALAPPDATA%/Microsoft/WinGet/Packages/w4po.ExplorerTabUtility_Microsoft.Winget.Source_8wekyb3d8bbwe/ExplorerTabUtility`, does not seem to be affected or quarantined after this process is followed, at least.
microsoft/pylance-release: Feature request: Implement GitHub Action to set up and run `pyright` exactly as Pylance would do in VSCode
- microsoft/pylance-release#3045
- A feature request was considered out of scope by the developers; I created an "unofficial" Pylance stubs repo that got a few stars from others in my predicament

This comment helps a user who is having trouble getting the `pyright` CLI and the Problems tab of VSCode to behave the same. In summary: there are a few settings that Pylance configures differently by default than `pyright`, Pylance doesn't always ship the very latest `pyright`, and Pylance leverages additional type stubs beyond those bundled with `pyright`. This issue also occurs when trying to get a GitHub Actions workflow to check a project in exactly the same way that the local VSCode Pylance extension is doing.

Would it be possible to supply an "official" GitHub Action to the Marketplace that sets up and runs `pyright` exactly as Pylance would do in VSCode? Such an action would properly handle the caveats in the comment above, especially pinning the `pyright` version, pulling Microsoft type stubs and putting them in `typings`, and reading from, e.g., the `pyrightconfig.json` in the repo. Such an Action would follow the release cadence of Pylance and automatically update its pinned `pyright` version in step with the latest release of Pylance.

I would understand if this feature is considered out of scope for this project, but facilitating `pyright` to run the same as Pylance in workflows would reduce the need to manually keep `mypy` roughly synced with Pylance just for workflow type-checking.
See my minimal repro repo with a `.vscode/.code-profile` with the following VSCode extensions:

This issue reproduces on my machine when importing the `.vscode/.code-profile` provided with no other settings configured, all default VSCode settings, and no other extensions installed.

The environment fails to activate, either by clicking the activate-environment button in the terminal pane strip or by automatic activation. See relevant system info and logs below. I'm getting `Failed to activate environment` followed by `TypeError: Cannot read properties of undefined (reading 'Sh')`.

The issue goes away in one of two scenarios:

- Deactivate the "Python Environments" extension
- Roll back to `[email protected]`

My system has no global Python versions installed; it only has uv version 0.5.20, and the environment was created with `uv sync`. Let me know if you need any more information!

Contents of "Help: About" command:
Version: 1.96.3 (user setup) Commit: 91fbdddc47bc9c09064bf7acf133d22631cbf083 Date: 2025-01-09T18:14:09.060Z Electron: 32.2.6 ElectronBuildId: 10629634 Chromium: 128.0.6613.186 Node.js: 20.18.1 V8: 12.8.374.38-electron.0 OS: Windows_NT x64 10.0.22631
Relevant log from "Python Environments" in the "Output" pane (anonymized paths)
2025-02-06 10:27:30.571 [error] Failed to activate environment: TypeError: Cannot read properties of undefined (reading 'Sh') at ~\.vscode\extensions\ms-python.vscode-python-envs-0.3.10371610-win32-x64\dist\extension.js:2:288062 at t.identifyTerminalShell (~\.vscode\extensions\ms-python.vscode-python-envs-0.3.10371610-win32-x64\dist\extension.js:2:288710) at t.getActivationCommand (~\.vscode\extensions\ms-python.vscode-python-envs-0.3.10371610-win32-x64\dist\extension.js:2:285816) at t.TerminalManagerImpl.activateUsingShellIntegration (~\.vscode\extensions\ms-python.vscode-python-envs-0.3.10371610-win32-x64\dist\extension.js:2:339871) at t.TerminalManagerImpl.activateInternal (~\.vscode\extensions\ms-python.vscode-python-envs-0.3.10371610-win32-x64\dist\extension.js:2:343526) at t.TerminalManagerImpl.activate (~\.vscode\extensions\ms-python.vscode-python-envs-0.3.10371610-win32-x64\dist\extension.js:2:343978) at ~\.vscode\extensions\ms-python.vscode-python-envs-0.3.10371610-win32-x64\dist\extension.js:2:511551 at cw.h (file:///%LOCALAPPDATA%/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/api/node/extensionHostProcess.js:115:32825)
It looks like there was a misunderstanding when this issue was closed as not planned. The perceived request was "assign a default keyboard shortcut to toggle Fuzzy Match", but what I meant was "just make it possible at all to assign a keyboard shortcut to toggle Fuzzy Match". As it stands, clicking the GUI element for toggling Fuzzy Match appears to be the only way to do it.

I'm not gonna spam a new issue about this minor nitpick again so soon. If someone stumbles upon this issue and wants to request the feature properly next time, be clearer with a title like "Allow the user to assign a keyboard shortcut to toggle Fuzzy Match in various places", e.g. in the Explorer filter menu, the "Terminal: Recent command" and "Terminal: Recent directory" pickers, etc.

A "Fuzzy Match" toggle was introduced in #164376 and is documented in the January 2023 release notes. Would it be possible to assign a keyboard shortcut to this toggle as well?
sourcery-ai/sourcery-vscode: Consider not binding `Ctrl+Y` (commonly "Redo") to "Sourcery: Ask Sourcery" aka `sourcery.chat.ask`
I see there are some variations on `Ctrl+Y` in your `package.json` keybind for `sourcery.chat.ask`. I don't know exactly why, perhaps it's related to me having had `settingsSync.keybindingsPerPlatform` disabled in `settings.json` as a workaround for another issue I once encountered, but `sourcery.chat.ask` was bound to `Ctrl+Y` for me, even on Windows.

Anyways, I think this choice of keybinding conflicts with the very common `Ctrl+Y` keybinding for "Redo" that I imagine many users will have. Could your team consider not binding `Ctrl+Y` to this command? It hijacked my "Redo" today, which was very surprising. I've unbound it manually, but it would be nice not to bind this particular keyboard combination by default. Thanks!
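For anyone else affected before a fix lands, a sketch of the user-level workaround: VSCode removes a default keybinding when the command id is prefixed with `-` in the user's `keybindings.json` (the command id comes from the extension's `package.json`):

```json
[
  { "key": "ctrl+y", "command": "-sourcery.chat.ask" }
]
```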
MathWorks-Teaching-Resources/Programming-A-Starter-Project-Using-MATLAB-and-Python: Weather examples don't work for newly-created OpenWeatherMap accounts
See domoticz/domoticz#5336 (comment) for this problem affecting another project as well. Basically, only the following URL works for newly-created accounts:

https://api.openweathermap.org/data/2.5/weather?lat=40&lon=-115&appid=$appid

This URL differs from the URL in the tutorial in that `onecall` is replaced with `weather`. I think the URL above should work for old and new accounts alike.

But this may break again some day as well, if OpenWeatherMap fully shuts down their 2.5 API. The `onecall` API requires users to follow this process (scroll to bottom of page, reply by Uwe Jacobs), which requires a credit card and a billing subscription even though you're signing up for the "free" plan. If this tutorial eventually has to go down that path (well, an alternative should be found in that case), then users should be instructed to set their billing limit to 1000 calls so that their credit card won't get charged for excess calls.

For now, swapping out `onecall` for `weather` in the URL should work, but you may want to try this on an old OpenWeatherMap account (over a year old) and a newly-created account just in case! The result of the API call is the following, so if the tutorial uses other keys not present here, the actual tutorial contents may need modification as well:

    {
      "coord": { "lon": -115, "lat": 40 },
      "weather": [{ "id": 800, "main": "Clear", "description": "clear sky", "icon": "01d" }],
      "base": "stations",
      "main": {
        "temp": 296.68, "feels_like": 295.48, "temp_min": 296.68, "temp_max": 296.68,
        "pressure": 1014, "humidity": 15, "sea_level": 1014, "grnd_level": 817
      },
      "visibility": 10000,
      "wind": { "speed": 4.37, "deg": 239, "gust": 5.02 },
      "clouds": { "all": 6 },
      "dt": 1697491946,
      "sys": { "country": "US", "sunrise": 1697464268, "sunset": 1697504381 },
      "timezone": -25200,
      "id": 5707899,
      "name": "Rock House",
      "cod": 200
    }
ploomber/ploomber-engine: Possible to avoid translating `tuple` parameter values as strings, or else allow a user-supplied `Translator`?
Parametrizing a notebook on `parameters={"list_param": [0,1], "tuple_param": (0,1)}` will result in the `list_param` value being injected as a list, but the `tuple_param` value being injected as the stringified representation of a `tuple`, e.g. `"(0, 1)"` (literally with the quotes). I understand that `ploomber-engine` strives to match the `papermill` API, so maybe this is intentional to maintain that compatibility, but is there a chance for the parameter translator to faithfully inject tuples?

Maybe a more sensible request would be to allow the user to pass a custom `Translator` to `PloomberClient`, which could pass the custom `Translator` subclass to `parametrize_notebook` (below), allowing the user to customize behavior, in the case that it is desired to mimic the `papermill` API surface by default.

Here we see that `tuple` falls through to the "last resort" translator which stringifies it.

ploomber-engine/src/ploomber_engine/_translator.py
Lines 76 to 95 in e00bdc2
    @classmethod
    def translate(cls, val):
        """Translate each of the standard json/yaml types to appropiate objects."""
        if val is None:
            return cls.translate_none(val)
        elif isinstance(val, str):
            return cls.translate_str(val)
        # Needs to be before integer checks
        elif isinstance(val, bool):
            return cls.translate_bool(val)
        elif isinstance(val, int):
            return cls.translate_int(val)
        elif isinstance(val, float):
            return cls.translate_float(val)
        elif isinstance(val, dict):
            return cls.translate_dict(val)
        elif isinstance(val, list):
            return cls.translate_list(val)
        # Use this generic translation as a last resort
        return cls.translate_escaped_str(val)

ploomber-engine/src/ploomber_engine/_util.py
Lines 72 to 73 in e00bdc2
    def parametrize_notebook(nb, parameters):
        """Add parameters to a notebook object"""
        ...
ploomber-engine/src/ploomber_engine/_util.py
Lines 92 to 94 in e00bdc2
    params_translated = translate_parameters(
        parameters=parameters, comment="Injected parameters"
    )
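To make the fall-through concrete, here's a self-contained toy reduction of the dispatch quoted above (not the real `ploomber-engine` code): a `tuple` matches none of the `isinstance` branches, so it lands in the escaped-string branch.

```python
# Toy reduction of the translate() dispatch; only enough branches are modeled
# to show why a tuple is stringified while a list round-trips.
def translate(val):
    if isinstance(val, (bool, int, float)) or val is None:
        return repr(val)
    if isinstance(val, list):
        return "[" + ", ".join(translate(v) for v in val) + "]"
    # "Last resort": escape as a quoted string, which is what hits tuples.
    return repr(str(val))


print(translate([0, 1]))   # [0, 1]     -> injected as a real list
print(translate((0, 1)))   # '(0, 1)'   -> injected as a quoted string
```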
microsoft/vscode-python-debugger: Disabling `debugpy.debugJustMyCode` in `settings.json` doesn't apply to the `debug-test` `purpose` in `launch.json`
I'm using "Python Debugger" extension `v2024.6.0`. It looks like microsoft/vscode-python#22903 closed related issues #174 and #112, but this is not exactly the same issue. Basically, disabling `debugpy.debugJustMyCode` in `settings.json` doesn't apply to the `debug-test` `purpose` in `launch.json`. See https://github.com/blakeNaccarato/vscode-python-debugger-335 for a minimal repro of three behaviors. The subfolders represent three different debug configurations:

- ❌ `1-debugpy-adapter-settings-justmycode`: Here we omit `justMyCode` in `launch.json`, specify `debug-test` in our `purpose`, and explicitly disable the global setting `debugpy.debugJustMyCode` in `settings.json`. We cannot stop at breakpoints in standard library code here.
- ✅ `2-debugpy-adapter`: Here we explicitly disable `justMyCode` in `launch.json` and specify `debug-test` in our `purpose`. We successfully stop at breakpoints in standard library code here.
- ✅ `3-vscode-python-debug-adapter`: Here we use the soon-to-be-deprecated `python` debug adapter, explicitly disable `justMyCode` in `launch.json`, and specify `debug-test` in our `purpose`. We successfully stop at breakpoints in standard library code here.

Could the `debugpy.debugJustMyCode` global toggle in `settings.json` be extended to cover test debugging as well? It's nice to have a global toggle for this, so that a `launch.json` config checked into the repo doesn't have a merge-conflict-prone `justMyCode` toggle embedded in it.

Here's the output of my VSCode `Help > About`:

Version: 1.88.1 (user setup) Commit: e170252f762678dec6ca2cc69aba1570769a5d39 Date: 2024-04-10T17:41:02.734Z Electron: 28.2.8 ElectronBuildId: 27744544 Chromium: 120.0.6099.291 Node.js: 18.18.2 V8: 12.0.267.19-electron.0 OS: Windows_NT x64 10.0.22631
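For reference, a minimal sketch of the kind of test-debugging `launch.json` entry under discussion (field values illustrative; `justMyCode` is deliberately omitted, since the point is that the global `debugpy.debugJustMyCode` setting should ideally apply):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug tests",
      "type": "debugpy",
      "request": "launch",
      "purpose": ["debug-test"]
    }
  ]
}
```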
copier-org/copier: Template update process on Copier 9.3.0/9.3.1 is slow/indefinite on Windows compared with 9.2.0 for repos with lots of `.gitignore`d files (I suspect a new `copytree` operation)
My template https://github.com/blakeNaccarato/copier-python is used for my project https://github.com/softboiler/boilercv, among others. Since Copier 9.3.0, I notice behavior where the cached repo contents in `%TEMP%` during e.g. `copier update --vcs-ref=HEAD` are 17GB+, and include even `.gitignore`d content such as `.venv` and, in this case, a `.dvc/cache` directory containing ~2000 files.

I think that this manifests as Copier trying to update from template and somehow starting to make an insanely large diff against the 17GB+ directory copied to `%TEMP%`. It causes a long-lasting hang during the template-updating phase, which was at least fifteen minutes long before I cancelled the operation.

In Copier 9.2.0 and prior, I see the cached repo contents over in `%TEMP%` are only ~1GB, as expected, and don't venture into `.gitignore`d territory. In comparing diffs across your tagged releases, I see the following section where a `copytree` operation seems to take place, only ignoring the `.git` folder. My suspicion is that this may be involved. See below for the permalink to the potentially offending code snippet.

By updating the template with Copier 9.2.0 instead of 9.3.0 or 9.3.1, I get the relatively fast template update operation that I expect.

https://github.com/blakeNaccarato/copier-python

- Potentially, be on Windows, but this may be OS-agnostic
- Have a repository with lots and lots of `.gitignore`d files (perhaps "lots" by number of files, or by size on disk, or both)
- Run `copier update` on this repository, with Copier 9.3.0 or 9.3.1
- Wait forever, `copier update` will not finish

Template updating should behave as in Copier 9.2.0, with no major slowdown and no massive `copytree` operation sending ~17GB to `%TEMP%`.
Windows
Windows 11
9.3.0
CPython 3.11
pipx+pypi
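A stdlib sketch (not Copier's actual code) of the difference between a `copytree` that ignores only `.git` and one that also skips gitignored directories; the directory names here are illustrative:

```python
import shutil
import tempfile
from pathlib import Path

# Build a fake repo with a .git folder and a gitignored .venv folder.
src = Path(tempfile.mkdtemp()) / "repo"
(src / ".git").mkdir(parents=True)
(src / ".venv").mkdir()
(src / ".venv" / "big.bin").write_bytes(b"x" * 1024)
(src / "code.py").write_text("print('hi')\n")

# Ignoring only .git (the suspected 9.3.x behavior) still copies .venv.
dst_all = Path(tempfile.mkdtemp()) / "copy"
shutil.copytree(src, dst_all, ignore=shutil.ignore_patterns(".git"))
print(sorted(p.name for p in dst_all.iterdir()))  # ['.venv', 'code.py']

# Also ignoring gitignored directories keeps the temp copy small.
dst_small = Path(tempfile.mkdtemp()) / "copy"
shutil.copytree(src, dst_small, ignore=shutil.ignore_patterns(".git", ".venv"))
print(sorted(p.name for p in dst_small.iterdir()))  # ['code.py']
```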
Taking some time to check out Cappa since discovering it through Mastodon. It's pretty slick!
This is probably just a slight oversight after refactoring from a slightly different form, I'm sure. The main issues are that the original `print` function name clobbers a built-in (renamed `print_cmd` below) and that `code` in `Fail.__call__` needs to be prefixed with `self`. I've applied these fixes below.
. I've applied these fixes below.from __future__ import annotations from dataclasses import dataclass import cappa @dataclass class Example: cmd: cappa.Subcommands[Print | Fail] def print_cmd(print_: Print): if print_.loudly: print("PRINTING!") else: print("printing!") @cappa.command(invoke=print_cmd) class Print: loudly: bool @dataclass class Fail: code: int def __call__(self): # again, __call__ is shorthand for the above explicit `invoke=` form. raise cappa.Exit(code=self.code) cappa.invoke(Example)
obsidian-tasks-group/obsidian-tasks: Behavior change in handling `\` at the very end of `tasks` queries [in Live Preview, due to Obsidian unintended change]
- I searched previous Bug Reports and didn't find any similar reports.

I recently updated Obsidian Tasks after a few months of not updating, and lots of my queries have started returning nearly every task in my vault. Queries which had a final `\`, e.g. in the "leaky" query below, would slow down Obsidian significantly and list all tasks in the vault. (The reason I often have a trailing `\` in queries is that I tend to mix and match query snippets, and keeping the trailing `\` is robust to changes in that query, e.g. moving that "clause" up or down, or elsewhere.)

I found that removing the `\` at the very end of the query restored functionality to what I expected from before updating, properly filtering tasks.

I would say tentatively that this apparent "regression" has cropped up in one of the updates in the past ~six months. If there's an easy methodology for me to quickly "bisect" Tasks plugin versions to narrow down the exact release this happened on, or Obsidian Sync logs I'm unaware of, please let me know. My Sync history manifest only shows my recent update to 7.10.2, then updating to 7.11.1, so I can't exactly recall which version I updated from when this regression crept in.
- Create a new vault
- Enable community plugins and install Tasks
- Create two files, `one.md` and `two.md`, with the following contents
- Switch to Live Preview (tested in Obsidian 1.7.4 - this line added by @claremacrae)
- Note the difference between the "leaky" and "correct" query in `two.md`. The only difference is the trailing slash. The leaky query "incorrectly" sees the task in `one.md`.
- (Not yet done by myself) To find the version in which the regression appeared, install progressively older versions of the plugin until slash handling in the "leaky" query is handled the same as in the new "correct" query
`one.md`:

### One

#### Task

- [ ] one

`two.md`:

### Two

#### Task

- [ ] two

#### Leaky query (trailing slash)

```tasks
( path includes {{ query.file.path }} )\
```

#### Correct query (no trailing slash)

```tasks
( path includes {{ query.file.path }} )
```

Queries with a trailing `\` should still function properly.

Queries with a trailing `\` fetch everything in the vault instead of filtering as expected.
- Android
- iPhone/iPad
- Linux
- macOS
- Windows
1.7.4
7.11.1
- I have tried it with all other plugins disabled and the error still occurs
- Verify reproduction of this behavior
- Determine whether this is intended behavior. If so, consider communicating it in docs. If not, proceed...
- Determine the version in which the behavior change was introduced and find the PR that changed it
- Consider implementing a "fix" for this "regression" if deemed as such
- Add a test to the test suite that checks the case of "backslash is the final character in the query before closing backticks"
PEP 723 specifies "inline script metadata", e.g. a way for Python scripts to specify their own requirements in a `pyproject.toml`-like field embedded in a specially-formatted comment in the script itself. This allows `pipx run`/`pip-run`/etc. to run dev-tooling Python scripts that specify their own dependencies and avoids the chicken-and-egg problem of needing a Python environment to set up a Python environment.

It would be nice if dependencies in standalone scripts (e.g. in dev tooling) could be managed by Renovate! Some signals to pay attention to regarding the acceptance/adoption of PEP 723 are as follows. This PEP is currently "Accepted" status, but has not yet been marked "Final":
- PEP 723 officially marked "Final"
But there's a note just below the table of contents suggesting that it will reach "Final" status under the following conditions (I've linked to indications that these steps are complete):
- It has been specified at https://packaging.python.org/ (merged on 2024-01-21).
Implemented in "a couple of tools", and they mention `pipx` and `pip-run` by name:

- `pipx` 1.4.2 (2024-01-12)
- `pip-run` 12.5.0 (2024-01-20)
- `uv` via `uv pipx run` or similar: a requested feature (astral-sh/uv#1207, astral-sh/uv#1173)

Additionally, the January 2024 history entry suggests:
- The specification is "no longer provisional" (2024-01)
So this spec seems stable enough to start considering a manager implementation. Once PEP 723 is officially marked "Final," maybe a manager implementation can begin at Renovate? Thank you for your consideration!
twisted/towncrier: Loving the orphan fragments feature! Could it be mentioned briefly in `towncrier create --help`?
pydantic/pydantic: Are the docs still correct in suggesting that `Annotated[..., Field(default=...)]` is unsupported?
In the docs for using `Annotated` (both stable, latest, and dev), a note is given that "the `Field.default` argument is not supported inside `Annotated`". Is this still true, or should that warning be removed? Here are the relevant lines:

I ask because `flag: Annotated[bool, Field(default_factory=lambda: False)]` is unwieldy compared with `flag: Annotated[bool, Field(default=False)]`, for instance, and the latter works just fine for model instantiation. Possibly it breaks down in some other usage, though, so that might need clarification. Of course you could do `flag: bool = False`, but say I'm rolling a model with primarily `Annotated` fields, I might as well use `Annotated` everywhere, even in the case of default `bool`s, for instance.

Even the code example given in the docs works fine with a straight `default` instead of `default_factory`. Here it is now:

    class User(BaseModel):
        id: Annotated[str, Field(default_factory=lambda: uuid4().hex)]

And here it is more simply stated:

    class User(BaseModel):
        id: Annotated[str, Field(default=uuid4().hex)]

In fact, in this toy example it's harmful to replace `default_factory` with a straight `default` (to get around a possibly phantom limitation?), because the `default` is actually a constant, evaluated once, so every user instantiated without an `id` will have the same `id` in a given session.

EDIT: Hmm, I guess if the expected interface to the user is to require that field (by type hints telling them it's missing), then specifying defaults in `Annotated` supports a default on the "backend" while still prompting the user for a value, but maybe it's just not a great practice to specify defaults inside of `Annotated` anyways.
renovatebot/renovate: Document the need to add `custom.regex` to `enabledManagers` in the custom manager documentation article
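For illustration, a sketch of a `renovate.json` where the easy-to-miss part is including `custom.regex` in `enabledManagers` alongside the custom manager itself; the matched file, regex, and datasource here are made up for the example:

```json
{
  "enabledManagers": ["custom.regex"],
  "customManagers": [
    {
      "customType": "regex",
      "fileMatch": ["^versions\\.txt$"],
      "matchStrings": ["python=(?<currentValue>\\S+)"],
      "depNameTemplate": "python",
      "datasourceTemplate": "docker"
    }
  ]
}
```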
If I have a `cappa.command`-decorated class (e.g. `Binarize` below) which I intend to be a terminating command with no subcommands, but one of its fields/attributes happens to be dataclass-like, I believe Cappa assumes it to be an incomplete continuation of CLI structure, e.g. an unsatisfied dependency in the CLI structure. Dataclass-like structures are special in that they are promoted to command/subcommand status, but is there a way to tell Cappa "treat this not as a subcommand/unsatisfied dep, but rather as a dict-like argument with dotted attribute access"?

In the workaround below, the `Deps` `TypedDict` mirrors the `_Deps` `BaseModel`, indicating this is a "leaf" node of the CLI, and a plain old dict becomes a parameter to the leaf command `Binarize`. If I were to place `_Deps` directly in `Binarize` instead, the CLI fails to parse/build.

Of course you might suggest unwrapping `Deps` and placing its args `raw` and `other` directly into `Binarize`, though here I'm implementing a common "pipeline stage" concept with each stage having a certain shape (e.g. all having a `Deps` argument which is a namespace of paths).

I also see the point of keeping dataclass-like classes single-purpose, so if there's no way to make dataclass-like classes act like arguments instead, I would understand the intent behind a design decision like that, especially if undoing that invariant adds overhead to every single class inference made in CLI parsing.
```python
from __future__ import annotations

from pathlib import Path
from typing import TypedDict

from cappa import command, invoke
from pydantic import BaseModel


class _Deps(BaseModel):
    raw: Path = Path("raw")
    other: Path = Path("other")


class Deps(TypedDict):
    raw: Path
    other: Path


def main(args: Binarize):
    print(args.deps["raw"], args.deps["other"])


@command(invoke=main)
class Binarize(BaseModel):
    deps: Deps = _Deps().model_dump()


invoke(Binarize)
```
renovatebot/renovate: `Error: missing a pyproject.toml` on `lockfileMaintenance`/`updateLockFiles` with `uv` submodule workspace members for `pep621`, even w/ `cloneSubmodules` enabled
pydata/xarray: Possible to select by label with integer slices from an integer dimension coordinate?
(EDIT: Here's a dataset and a notebook Gist with outputs illustrating the behavior. In summary: `sel` with `range`, not `slice`, if you want label-based selection of integer coordinates.)

So it seems `sel` is sort of the "preferred"/best-practice and best-documented method, as opposed to, say, `loc`, but maybe `loc` behaves differently.

I have a `DataArray` instance, call it `da`, with an indexed dimension coordinate, `frame`. The coordinate is a sorted, contiguous, integer index starting from `0`. If I call `da.sel(frame=slice(None, None, 10))`, I benefit from automatic unbounded selection and I get back frames `0`, `10`, `20`, etc. It's doing integer-based indexing under the hood, not label-based, but it's no issue here.

If I then just save back every tenth frame (a small subset for testing/checking into version control) and load it back in as `da_small`, then `da_small.sel(frame=slice(None, None, 10))` would return `0`, `100`, `200`, etc. I'd like it to still give `0`, `10`, `20`.

I could sidestep this by passing a `range` index instead, but I'd need to wrap `load_dataset` and conditionally pass a range indexer when an all-integer slice is passed, e.g. `range(start or 0, (stop or da.frame[-1]) + (step := step or 1), step)`.

Ideally, for my case, the same slice index would always return `0`, `10`, `20`, but I understand that doing the "right" thing with dynamic selection like `sel` means picking a sensible default, and integer slices aren't label-based by default.
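The drift described above is just positional slicing at work, and can be mimicked with plain Python lists (no xarray needed; `frames` stands in for the `frame` coordinate labels):

```python
# 300 frames whose labels happen to equal their positions.
frames = list(range(300))

# First pass: every tenth position, so labels 0, 10, 20, ...
every_tenth = frames[::10]
print(every_tenth[:3])  # [0, 10, 20]

# The same positional slice on the saved subset now strides over labels 0, 100, 200, ...
print(every_tenth[::10][:3])  # [0, 100, 200]
```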
- Is there a built-in, best-practice way to achieve this? Does `loc` make a different distinction? I think I could also do some `isin` masking, but it's a bit more verbose and I think it's not done "lazily" for an on-disk data array?
- Could I spin up my own custom index with minor changes to the default behavior, one that prefers label-based indexing even on integer slices (essentially baking in the `range` logic above)?
- Is `da.frame[-1]` "lazy", i.e. it's not going to fetch all the frames from disk if I do that?
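A sketch of the `range` logic mentioned above, as a standalone helper (hypothetical name `int_slice_to_range`; `last_label` stands in for `da.frame[-1]`, so no xarray is required; I've used `+ 1` rather than `+ step` so the stop label is included without overshooting past it):

```python
def int_slice_to_range(s: slice, last_label: int) -> range:
    """Convert an all-integer (possibly open-ended) slice into a label-based
    range indexer suitable for .sel(); last_label stands in for da.frame[-1]."""
    step = s.step or 1
    start = s.start or 0
    # + 1 makes the stop label inclusive, mirroring label-based slicing semantics.
    stop = (s.stop if s.stop is not None else last_label) + 1
    return range(start, stop, step)


print(list(int_slice_to_range(slice(None, None, 10), last_label=30)))  # [0, 10, 20, 30]
print(list(int_slice_to_range(slice(5, 25, 5), last_label=300)))       # [5, 10, 15, 20, 25]
```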
`xarray` is absolutely amazing, by the way, and serves my high-speed video processing data flows just as well as it does the geosciences. It's a heck of a lot easier than wrangling raw numpy arrays, for sure. Anyway, thanks!
pydantic/pydantic: Even though the experimental plugins docs have been removed, is the `plugin_settings` model config concept relatively "safe"?
w4po/ExplorerTabUtility: Thanks for the great tool, loving v2! However, Windows quarantines it as a "trojan"; code signing may help
I've found that Windows Security heuristics flag the new binary as `Trojan:Win32/Bearfoos.A!ml` and quarantine it. I've posted a workaround below, but a potential long-term solution would be to digitally sign the binary, which used to be a pretty expensive endeavor (DigiCert, OV/EV keys, etc., $200+/yr) but has become more accessible as of late with Azure Trusted Signing, a $10/mo. proposition.

It's still a bit ugly to get set up, but I recently figured out code signing with Azure Trusted Signing for some of my own projects, and found this guide pretty handy. I've linked to my down-page reply to the OP's guide, as my clarifications cover an individual with an individual/non-org Microsoft account, and how to sign without setting up a "dummy" intermediary signing account. I used this approach to get code signing working over at my repo as an example.

It's also possible to get this all working in CI, but I found it easier as a first step to get signing working locally, then to embark on the next step of configuring Azure Trusted Signing authentication with OIDC in GitHub Actions (which I have not yet done, but will manage someday soon...).

Whether this is all worth $10/mo. is another question! You may need to instruct users how to run the workaround below in the meantime.
Install via `winget` on a machine that was not used to build the binary itself, with default Windows Security settings:

```shell
winget install --exact --dependency-source 'winget' --id 'w4po.ExplorerTabUtility'
```

Winget extracts the binary to `%TEMP%/WinGet/w4po.ExplorerTabUtility.2.0.0/extracted/ExplorerTabUtility/ExplorerTabUtility.exe`, where it is quarantined before the process can finish, erroring out the command. Manually visit "Virus & threat protection" (e.g. from Start menu search), click "Protection history" under "Current threats", expand the latest "Severe" threat, and click "Restore" to allow the threat next time; the above `winget` command must then be run one more time, and the installer will complete successfully.

The target of the newly-added `expltab` CLI alias, `%LOCALAPPDATA%/Microsoft/WinGet/Packages/w4po.ExplorerTabUtility_Microsoft.Winget.Source_8wekyb3d8bbwe/ExplorerTabUtility`, does not seem to be affected or quarantined after this process is followed, at least.