This Rust line works...
title = subfield.value.clone().unwrap_or_else(|| "".to_string());
...and this doesn't:
title = subfield.value.unwrap_or_else(|| "".to_string());
The difference: `unwrap_or_else()` consumes the `Option` it's called on, so the second version tries to move `value` out of `subfield`. If `subfield` is only borrowed (or is used again afterward), the compiler rejects that with a "cannot move out of" error. Calling `.clone()` first hands `unwrap_or_else()` an owned copy to consume, leaving `subfield.value` intact.
For the stolen-relations project, we're doing some programmatic data-cleaning. I recently uploaded test SQL to an empty database and examined the load in phpMyAdmin.
I noticed that the collation for the few updated tables had changed from utf8mb4_unicode_520_ci, the collation I've ensured all my SQL tables use for the last few years, to utf8mb4_0900_ai_ci.
So I asked ChatGPT about the differences, and then did a few Google searches to verify the info. utf8mb4_0900_ai_ci is the default collation for utf8mb4 in MySQL 8.0 (based on Unicode 9.0.0 rules), so over time I'll move my tables to it.
I want to use timeline.js and pass it a date, and have it display that date on a timeline with other static, pre-populated dates.
To achieve this, you can use TimelineJS, an open-source tool developed by Knight Lab that lets you build visually rich, interactive timelines from a simple JSON or Google Sheets data source. In this example, I'll show how to create a timeline from a JSON data source that includes your static pre-populated dates plus the date you want to pass in.
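A minimal sketch of that idea, assuming the TimelineJS3 library (the `TL.Timeline` global from the Knight Lab CDN) and a container div with id `timeline-embed`; the event headlines and dates here are hypothetical placeholders:

```javascript
// Build a TimelineJS3 data object: static, pre-populated dates
// plus one date passed in at runtime.
function buildTimelineData(dynamicDate) {
  const data = {
    title: { text: { headline: 'Project timeline' } },  // placeholder title
    events: [
      // static, pre-populated events
      { start_date: { year: 2021, month: 3, day: 1 },
        text: { headline: 'Project kickoff' } },
      { start_date: { year: 2022, month: 6, day: 15 },
        text: { headline: 'First data release' } },
    ],
  };
  // append the date passed in at runtime
  data.events.push({
    start_date: {
      year: dynamicDate.getFullYear(),
      month: dynamicDate.getMonth() + 1,  // TimelineJS months are 1-based
      day: dynamicDate.getDate(),
    },
    text: { headline: 'Dynamic event' },
  });
  return data;
}

const data = buildTimelineData(new Date(2023, 2, 17));  // 17-March-2023

// In the browser, hand the object to TimelineJS:
if (typeof TL !== 'undefined') {
  new TL.Timeline('timeline-embed', data);
}
```

The `events` / `start_date` / `text` shape follows the TimelineJS JSON format; only the container id and headlines are made up for the sketch.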
User:
i just saw this js code: "import { describe, it } from 'node:test'" -- can you explain how javascript/node uses "describe" and "it"?
ChatGPT:
This code snippet is related to testing in JavaScript — specifically Node.js's built-in test runner, imported via the `node:test` module (added in Node 18). Its `describe` and `it` functions follow the BDD style popularized by frameworks like Mocha and Jasmine: `describe()` groups related tests into a named suite, and `it()` defines an individual test case as a name plus a callback containing assertions.
User: A pytorch tutorial contains this section:
"""
Step 7: Create a loss function and optimizer
Now we are going to set the loss function and optimizer. Here, we are using cross-entropy loss and the Adam optimizer. Cross-entropy loss measures the performance of a classification model whose output is a probability value between 0 and 1. Adam is a stochastic-gradient-descent technique that uses first- and second-order moment estimation: using exponential moving averages, it tracks the gradient's first-order moment (the gradient mean) and second-order moment (the element-wise squared gradient), and corrects their bias. The learning rate times the bias-corrected first moment, divided by the square root of the bias-corrected second moment, gives the final weight update.
"""
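The tutorial's verbal description corresponds to the standard Adam update rule (notation from Kingma & Ba's paper), where $g_t$ is the gradient at step $t$, $\beta_1, \beta_2$ are the decay rates, $\alpha$ is the learning rate, and $\epsilon$ avoids division by zero:

```latex
m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t          % first moment: gradient mean
v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2        % second moment: element-wise squared gradient
\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad
\hat{v}_t = \frac{v_t}{1 - \beta_2^t}                % bias correction
\theta_t = \theta_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}  % weight update
```

The last line is the "learning rate times first moment divided by the square root of the second moment" the tutorial describes.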
User:
I'm going through a pytorch tutorial, and see this statement and code:
"""
In order to keep the feature-extraction part of the network fixed (i.e., freeze its weights so they are not updated during training), we can use the code below:
for param in model.features.parameters():
    param.requires_grad = False
"""
Goal...
Context...
- Data from parsing the course-codes of 14,607 entries in the 2023-March-17 course export.
- Example course-code: "brown.biol.0080.2023-spring.s01"
- From that course-code, below are the --> unique <-- "institutions", "departments" (not numbers), "years", "terms", and "sections".
- The section 'EMPTY' wasn't in the OIT data; I put it there to show that there were sections-not-listed, and I provided a count (111).

[17/Mar/2023 13:45:42] DEBUG [prepare_oit_summer_2023_subset-<module>()::109] OIT course entries count, ``14607``
[17/Mar/2023 13:45:42] DEBUG [prepare_oit_summer_2023_subset-<module>()::115] course_code_institutions, ``['brown']``
[17/Mar/2023 13:45:42] DEBUG [prepare_oit_summer_2023_subset-<module>()::116] course_code_departments, ``['afri', 'amst', 'anth',
Timeframe: April-2022 through March-2023.
On this page...