To use the live-html format (webR/Pyodide) in Quarto, add the quarto-live extension to your project:

quarto add r-wasm/quarto-live
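Once the extension is installed, a document opts in through the `live-html` format and runs code in `{webr}` cells. A minimal sketch (based on the quarto-live README; the chunk contents are illustrative):

````markdown
---
format: live-html
---

```{webr}
# Runs R in the browser via webR; no server needed
summary(rnorm(100))
```
````

Use `{pyodide}` instead of `{webr}` for Python cells.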
---
title: "Github Repos for Topic"
author: "Sean Davis"
format: html
params:
  gh_topic: "r01ca230551"
---
## Required packages

# Install BiocManager, which manages Bioconductor package installation
install.packages("BiocManager")

# Use BiocManager to install the packages needed for data analysis and visualization
BiocManager::install(c("GEOquery", "SummarizedExperiment", "ggplot2",
                       "party", "ggparty", "partykit", "randomForest"))

# Load the packages needed for modeling and visualization
library(party) # For creating classification trees
Nextflow is a powerful workflow management system designed for creating scalable and reproducible scientific workflows. It enables you to write workflows in a declarative language, making it easy to define complex pipelines that can be executed on various platforms, including local machines, clusters, and cloud environments like Google Cloud.
This short tutorial is meant for informatics users who are comfortable with a command-line interface. It also assumes that the user is familiar with Nextflow and has run it on a local computer or HPC system.
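For orientation, a minimal Nextflow workflow (DSL2) looks like the following; the process name and script body are illustrative and not part of this tutorial:

```groovy
// hello.nf -- illustrative sketch, DSL2 syntax
process sayHello {
    output:
    stdout

    script:
    """
    echo 'Hello from Nextflow'
    """
}

workflow {
    sayHello() | view
}
```

Run it with `nextflow run hello.nf`; the same script can target a cluster or a cloud backend by changing the executor in `nextflow.config` rather than the workflow itself.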
Roughly, this document will walk through:
#!/bin/bash
# Fetch county-level USCS cancer data, strip the XML serialization wrapper,
# and emit one JSON record per line (JSONL).
curl https://gis.cdc.gov/Cancer/DataVizApi/GetJSON/USCS_County | sed -e 's/<string xmlns="http:\/\/schemas.microsoft.com\/2003\/10\/Serialization\/">//g' -e 's/<\/string>//g' | jq -c '.[] | .USCS_County[]' > output.jsonl
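To sanity-check the jq filter, you can run the same program on a tiny hand-made sample shaped like the CDC payload (the sample values below are made up):

```shell
# Made-up miniature payload: a top-level array of objects, each holding a
# USCS_County array of records -- the same shape the filter above expects.
echo '[{"USCS_County":[{"area":"A","count":1},{"area":"B","count":2}]}]' \
  | jq -c '.[] | .USCS_County[]'
```

Each county record comes out as one compact JSON object per line, which is what makes the `> output.jsonl` redirect produce valid JSONL.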
Here's the converted Docker Compose YAML file with a MySQL server as a separate container and a Docker volume for storage:
version: '3'
services:
  wandb-local:
    image: wandb/local
    container_name: wandb-local
    environment:
      - HOST=https://YOUR_DNS_NAME
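The compose file above cuts off before the MySQL service it mentions; a sketch of that service and its named volume might look like the following (the image tag, database name, and credentials are placeholders, not from the original):

```yaml
  mysql:
    image: mysql:8                         # placeholder tag
    container_name: wandb-mysql
    environment:
      - MYSQL_DATABASE=wandb_local         # placeholder
      - MYSQL_ROOT_PASSWORD=CHANGE_ME      # placeholder
    volumes:
      - mysql-data:/var/lib/mysql          # named volume for persistence

volumes:
  mysql-data:
```

The named volume keeps the database files outside the container so data survives `docker compose down` and container upgrades.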
You are an HR specialist and are evaluating the qualifications of job applicants for a high-performance computing (HPC) specialist position. You have been given a set of criteria to evaluate each candidate. The candidate materials are in the attached PDF. For each job applicant, fill in the following YAML-format criteria document. You may use the "comment" field to provide additional context or justification for your evaluation.

---
# candidate name
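The criteria document is truncated above; a sketch of how such a YAML form might be laid out follows. The criterion names and rating scale are hypothetical; only the "comment" field comes from the prompt itself:

```yaml
# candidate name (hypothetical layout)
name: ""
criteria:
  hpc_experience:          # hypothetical criterion
    rating: null           # e.g., 1-5
    comment: ""
  programming_skills:      # hypothetical criterion
    rating: null
    comment: ""
```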
# Convert all CMGD SummarizedExperiments to CSV files
# Should run more-or-less directly as a script
# Requires more than 128GB RAM to complete
# Generates about 200GB of files
#
# BiocManager::install('curatedMetagenomicData')
# BiocManager::install(c('arrow', 'data.table', 'dplyr', 'readr'))
library(curatedMetagenomicData)
#!/bin/bash
# Results are in JSON format
# Actual data are in the "results" array
#
# Opportunity numbers taken from https://commonfund.nih.gov/dataecosystem/FundedResearch
curl \
  -X POST \
  https://api.reporter.nih.gov/v2/projects/search \
  -d '{"criteria":{"opportunity_numbers": ["RFA-RM-23-003", "PA20-185", "OTA-23-004", "RFA-RM-22-007", "OTA-23-005", "RFA-RM-17-026", "RFA-RM-21-007", "RFA-RM-19-012"]}}' \
  -H 'Content-Type: application/json'
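To pull individual fields out of the "results" array, pipe the response through jq. The extraction is shown here on a tiny made-up response rather than a live call; the `project_title` field name is an assumption about the RePORTER v2 schema:

```shell
# Made-up miniature response shaped like the API output; only the "results"
# array matters for extraction.
echo '{"meta":{"total":1},"results":[{"project_num":"X01","project_title":"Example project"}]}' \
  | jq -r '.results[].project_title'
```

Appending `| jq -r '.results[].project_title'` to the curl command above would apply the same extraction to the real response.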