Created
April 8, 2026 17:14
---
name: movement-science-lecture-update
description: Use this skill when updating a lecture with recent PubMed research. It extracts lecture objectives and baseline concepts from a slide deck, searches the last year of PubMed, filters for relevance and novelty, and produces a concise research summary plus optional slide-outline and email draft.
---

# Movement Science Lecture Update

Use this skill when a user wants to update an existing lecture deck with recent research while avoiding papers that are already redundant with the current lecture content.

## Inputs

Gather or confirm these inputs before starting:

- A lecture deck or lecture text source.
- The lecture identifier or topic to search around.
- Whether the user wants only a research summary, or also a slide outline and email draft.
- Optional delivery targets such as a Google Slides template, Drive folder, or email recipient.

If a deck is provided as Google Slides, extract slide text directly. If only text is available, work from that text.

## Core Goal

Find recent papers that are both:

- Relevant to the lecture objectives and key terms.
- Novel relative to what the current lecture already teaches.

Then produce a concise update package the instructor can use.

## Workflow

### 1. Extract the lecture baseline

Read the deck and identify:

- Lecture objectives.
- Key terms emphasized in the content.
- Concepts, mechanisms, definitions, and distinctions the lecture already covers.
- Activity or logistics slides that should be ignored.

Build a compact lecture baseline with fields like:

- `lecture_title`
- `known_points`
- `known_mechanisms`
- `known_definitions`
- `known_key_terms`
- `notes_for_novelty_check`

Do not invent facts beyond the deck.
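The baseline above can be sketched as a small dataclass. The field names follow the list in this step; the example values are hypothetical, since the real baseline is extracted from the deck.

```python
from dataclasses import dataclass, field

@dataclass
class LectureBaseline:
    """Compact summary of what the current lecture already teaches."""
    lecture_title: str
    known_points: list[str] = field(default_factory=list)
    known_mechanisms: list[str] = field(default_factory=list)
    known_definitions: list[str] = field(default_factory=list)
    known_key_terms: list[str] = field(default_factory=list)
    notes_for_novelty_check: list[str] = field(default_factory=list)

# Hypothetical example values -- the real baseline comes from the deck.
baseline = LectureBaseline(
    lecture_title="Motor Learning and Gait Adaptation",
    known_key_terms=["split-belt treadmill", "motor adaptation"],
)
```

Keeping the baseline as a single typed object makes it easy to pass into the novelty screen in step 5.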
### 2. Generate PubMed search queries

Using the lecture objectives and key terms, create multiple PubMed queries that are broad enough to find relevant recent work but specific enough to stay on-topic.

For each query, keep:

- `query`
- `rationale`

Prefer the last 12 months unless the user specifies a different range.
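A sketch of what the `query`/`rationale` pairs might look like. The topic terms here are hypothetical; `[Title/Abstract]` is a standard PubMed field tag, and the 12-month window is applied at search time rather than inside the query string.

```python
# Hypothetical queries derived from lecture objectives and key terms.
queries = [
    {
        "query": '("gait adaptation"[Title/Abstract]) AND ("motor learning"[Title/Abstract])',
        "rationale": "Targets the core objective linking gait adaptation to motor learning.",
    },
    {
        "query": '"split-belt treadmill"[Title/Abstract]',
        "rationale": "Key term emphasized on several slides.",
    },
]
```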
### 3. Search and normalize PubMed results

For each query:

- Search PubMed.
- Collect PMIDs.
- Deduplicate PMIDs across queries.
- Fetch article metadata and abstracts.
- Normalize each article into a consistent structure with:
  - `pmid`
  - `doi`
  - `title`
  - `abstract`
  - `journal`
  - `pubDate`
  - `authors`
  - `url`

Skip items with no usable abstract unless the user asks to include title-only records.
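One way to implement the search-and-dedupe steps is via NCBI's E-utilities. This is a minimal sketch: it builds an `esearch.fcgi` URL restricted to recent publication dates (the `db`, `term`, `retmode`, `reldate`, and `datetype` parameters are part of the E-utilities API) and merges PMIDs across queries, but it does not perform the network call.

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(query: str, days: int = 365, retmax: int = 100) -> str:
    """Build an E-utilities esearch URL restricted to recent publication dates."""
    params = urlencode({
        "db": "pubmed",
        "term": query,
        "retmode": "json",
        "retmax": retmax,
        "reldate": days,      # only articles from the last `days` days
        "datetype": "pdat",   # filter on publication date
    })
    return f"{EUTILS}/esearch.fcgi?{params}"

def dedupe_pmids(per_query_pmids: list[list[str]]) -> list[str]:
    """Merge PMID lists from all queries, keeping first-seen order."""
    seen: set[str] = set()
    merged: list[str] = []
    for pmids in per_query_pmids:
        for pmid in pmids:
            if pmid not in seen:
                seen.add(pmid)
                merged.append(pmid)
    return merged
```

Fetching the metadata and abstracts for the deduplicated PMIDs would then go through `efetch.fcgi` with `db=pubmed`.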
### 4. Score topical relevance

Evaluate each article against the lecture objectives and key terms.

Use this rule set:

- Score `0-100`.
- Mark `relevant=true` only when the paper clearly supports the lecture topic.
- Be strict about off-topic organisms, unrelated diseases, and weak conceptual overlap.

Return for each paper:

- `pmid`
- `score`
- `relevant`
- `reason`

Default threshold: keep only papers with relevance score `>= 75`.

### 5. Score novelty against the lecture baseline

For papers that passed relevance screening, score how much genuinely new information they add beyond the current lecture.

Use this rule set:

- Score `0-100`.
- Mark `novel=true` only when the abstract adds findings, mechanisms, distinctions, or evidence not already covered in the baseline.
- Low novelty means the paper mostly repeats concepts already taught.

Return for each paper:

- `pmid`
- `noveltyScore`
- `novel`
- `reason`
- `whatIsNew`

Default threshold: keep only papers with novelty score `>= 70`.
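The two screens above share the same gating pattern, so they can be sketched as one filter over the per-paper records. The field names (`score`, `relevant`, `noveltyScore`, `novel`) follow steps 4 and 5; the sample papers are hypothetical.

```python
RELEVANCE_MIN = 75  # default threshold from step 4
NOVELTY_MIN = 70    # default threshold from step 5

def passes_both_screens(paper: dict) -> bool:
    """Keep a paper only if it cleared both the relevance and novelty screens."""
    return (
        paper.get("relevant", False)
        and paper.get("score", 0) >= RELEVANCE_MIN
        and paper.get("novel", False)
        and paper.get("noveltyScore", 0) >= NOVELTY_MIN
    )

# Hypothetical scored records: paper 2 is relevant but adds nothing new.
papers = [
    {"pmid": "1", "score": 90, "relevant": True, "noveltyScore": 80, "novel": True},
    {"pmid": "2", "score": 90, "relevant": True, "noveltyScore": 40, "novel": False},
]
kept = [p["pmid"] for p in papers if passes_both_screens(p)]
```

Requiring both the boolean flag and the numeric threshold means a borderline score cannot slip through on the flag alone, and vice versa.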
### 6. Produce the update package

Summarize only the papers that passed both screens.

Required outputs:

- A concise overall summary.
- Takeaways grouped by lecture objective when possible.
- A citation list with PMID, title, and URL.
- Notes on why the chosen papers matter.

Optional outputs when requested:

- A slide outline for a short lecture update deck.
- Replacement text or placeholders for a slide template.
- An email draft to the instructor summarizing the update.

## Output Contract

Unless the user asks for a different format, produce:

```json
{
  "summary": "string",
  "by_objective": [
    {
      "objective": "string",
      "takeaways": ["string"],
      "supporting_pmids": ["string"]
    }
  ],
  "citations": [
    {
      "pmid": "string",
      "title": "string",
      "url": "string"
    }
  ],
  "notes": ["string"]
}
```
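A minimal check of the contract above, as a sketch; the function name and the returned problem strings are assumptions, not part of the skill.

```python
def validate_update_package(pkg: dict) -> list[str]:
    """Return a list of problems; empty means the package matches the contract."""
    problems = []
    if not isinstance(pkg.get("summary"), str):
        problems.append("summary must be a string")
    for obj in pkg.get("by_objective", []):
        if not {"objective", "takeaways", "supporting_pmids"} <= obj.keys():
            problems.append("by_objective entry missing required keys")
    for cit in pkg.get("citations", []):
        if not {"pmid", "title", "url"} <= cit.keys():
            problems.append("citation entry missing required keys")
    if not all(isinstance(n, str) for n in pkg.get("notes", [])):
        problems.append("notes must be strings")
    return problems
```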