- Introduction
- Installation
- Basic Concepts
- Data Types
- Working with Strings
- Working with Lists
- Working with Tables
- Files and Filesystem
- Custom Commands
- Variables
- Modules
- Environment Variables
- Configuration
- Pipelines
- Control Flow
- Common Commands
- Coming from Bash
- Tips and Tricks
Nushell (Nu) is a modern, cross-platform shell that treats everything as structured data. Unlike traditional shells that work with text streams, Nushell works with tables, lists, and records, making data manipulation more intuitive and powerful.
- Cross-platform: Works on Linux, macOS, BSD, and Windows
- Structured data: Pipelines use tables, lists, and records instead of plain text
- Built-in data formats: Native support for JSON, YAML, CSV, TOML, XML, and more
- Type system: Strongly typed with excellent error messages
- Modern shell: Built in Rust with performance and safety in mind
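As a quick taste of the structured-data model: because ls yields a table rather than text, you can filter and project columns directly, with no awk/cut-style string parsing. A minimal sketch (the size threshold and columns are illustrative):

```nu
# Each row is a record; columns are addressed by name
ls
| where type == file and size > 10kb
| select name size modified
| sort-by size --reverse
```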
# macOS (Homebrew)
brew install nushell
# Nix
nix profile install nixpkgs#nushell
# Windows: user scope (default)
winget install nushell
# Machine scope (Run as admin)
winget install nushell --scope machine
# Clone the repository
git clone https://github.com/nushell/nushell
# Build with cargo
cd nushell
cargo build --release --features=extra
# Or install directly
cargo install nu
After installing, launch Nu by typing:
nu
Commands and Subcommands: Use kebab-case
# Good
def fetch-user [] { ... }
def "str my-command" [] { ... }
# Bad
def fetch_user [] { ... }
def fetchUser [] { ... }
Variables and Parameters: Use snake_case
# Good
let user_id = 123
def process [user_name: string] { ... }
# Bad
let userId = 123
def process [userName: string] { ... }
Environment Variables: Use SCREAMING_SNAKE_CASE
# Good
$env.MY_APP_CONFIG = "value"
# Bad
$env.myAppConfig = "value"
Prefer Immutable Variables
# Good - Immutable by default
let items = [1 2 3]
let sum = ($items | math sum)
# Use mut only when necessary
mut count = 0
for item in $items { $count += 1 }
Use Pipelines Over Loops
# Good - Functional approach
[1 2 3 4 5] | each { $in * 2 } | math sum
# Less ideal - Imperative approach
mut sum = 0
for x in [1 2 3 4 5] { $sum += ($x * 2) }
Keep Commands Focused
# Good - Single responsibility
def get-user-name [id: int] {
fetch-user $id | get name
}
# Bad - Doing too much
def process-everything [id: int] {
# fetching, transforming, saving, logging...
}
- Avoid collecting streams unnecessarily
# Good - Streaming
ls | where size > 1mb | get name
# Bad - Forces collection
let files = (ls | where size > 1mb)
$files | get name
- Use native Nushell commands over externals
# Good - Native and cross-platform
ls | where type == file
# Less ideal - External dependency
^find . -type f
- Leverage parallelism when appropriate
# Sequential
ls | each { |file| process $file }
# Parallel - much faster for independent operations
ls | par-each { |file| process $file }
When generating Nushell code:
- Always prefer pipelines over loops - Nushell is designed for pipeline-based data transformation
- Use structured data - Tables, records, and lists are first-class citizens
- Avoid string parsing - Use from json, from csv, etc. to convert text to structured data
- Type annotations - Include type annotations for clarity: def command [x: int]
- Error handling - Use try/catch blocks for operations that might fail
- Streaming - Remember that Nushell commands stream data when possible
- Immutability - Default to let unless mutation is explicitly needed
- Documentation - Include comments with # for command descriptions
- Examples - Provide working examples with expected output using # => comments
- Cross-platform - Use native Nushell commands for portability
# Brief description of what this command does
#
# Longer explanation if needed, including edge cases
# and usage notes
def my-command [
input: string # Description of input parameter
--flag (-f) # Description of flag
--option (-o): int # Description of option with value
]: string -> table { # Input type -> Output type
# Command implementation
$input | some-transformation
}
Nushell treats command output as structured data. For example, ls returns a table, not text:
ls
# Returns a table with columns: name, type, size, modified
Like traditional shells, Nushell uses pipes (|) to connect commands:
ls | where size > 1kb | sort-by modified
Nushell has a strong type system:
- Primitives: int, float, string, bool, nothing, binary, date, duration, filesize
- Collections: list, record, table, range
- Special: block, closure, glob, cell-path
# Integers
42
-17
0x2A # Hexadecimal
# Floats
3.14
-0.5
1.23e-4 # Scientific notation
# Simple strings
"hello"
'world'
# String interpolation
let name = "Alice"
$"Hello, ($name)!" # => "Hello, Alice!"
# Multi-line strings
"line one
line two"
true
false
null
# Current date
date now
# Parse date
"2024-01-15" | into datetime
# Durations
10sec
5min
2hr
1day
3ms
100b # bytes
1kb # kilobytes
5mb # megabytes
2gb # gigabytes
10kib # kibibytes (1024-based)
1..5 # Range from 1 to 5
1..3..10 # Range from 1 to 10, step 2 (start..next..end)
1.. # Open-ended range from 1
[1, 2, 3, 4, 5]
["apple", "banana", "cherry"]
[1, "mixed", true, 3.14] # Can contain different types
{ name: "Alice", age: 30, active: true }
{ x: 10, y: 20 }
[[name, age, city];
[Alice, 30, NYC]
[Bob, 25, SF]
[Carol, 35, LA]]
# String to int
"42" | into int
# String to datetime
"2024-01-15" | into datetime
# Int to string
42 | into string
# To JSON
{name: "Alice", age: 30} | to json
# From JSON
'{"name": "Bob"}' | from json
# Concatenation with interpolation
let name = "Alice"
$"Hello, ($name)!" # => "Hello, Alice!"
# Length
"hello" | str length # => 5
# Contains
"Hello, world!" | str contains "world" # => true
# Starts with / Ends with
"hello" | str starts-with "he" # => true
"hello" | str ends-with "lo" # => true
# Uppercase / Lowercase
"hello" | str upcase # => "HELLO"
"WORLD" | str downcase # => "world"
# Trim whitespace
" hello " | str trim # => "hello"
# Replace
"hello world" | str replace "world" "universe" # => "hello universe"
# Split
"one,two,three" | split row "," # => ["one", "two", "three"]
# Join
["one", "two", "three"] | str join ", " # => "one, two, three"
# Substring
"Hello World!" | str substring 0..4 # => "Hello" (ranges are inclusive)
"Hello World!" | str substring 6.. # => "World!"
# Parse structured text
"Nushell 0.80" | parse "{shell} {version}"
# => ╭───┬─────────┬─────────╮
# => │ # │ shell │ version │
# => ├───┼─────────┼─────────┤
# => │ 0 │ Nushell │ 0.80 │
# => ╰───┴─────────┴─────────╯
# Color text in terminal
$"(ansi red)Error!(ansi reset)"
$"(ansi green_bold)Success!(ansi reset)"
$"(ansi blue)Info(ansi reset)"
[1, 2, 3, 4, 5]
[one two three] # Space-separated
1..10 | each { $in } # From range to list
let list = [apple banana cherry]
$list.0 # => apple
$list.1 # => banana
$list | get 2 # => cherry
# Length
[1, 2, 3] | length # => 3
# Append
[1, 2, 3] | append 4 # => [1, 2, 3, 4]
# Prepend
[1, 2, 3] | prepend 0 # => [0, 1, 2, 3]
# Insert at index
[foo bar baz] | insert 1 beeze # => [foo, beeze, bar, baz]
# Update by index
[1, 2, 3, 4] | update 1 10 # => [1, 10, 3, 4]
# First / Last
[1, 2, 3, 4, 5] | first # => 1
[1, 2, 3, 4, 5] | first 3 # => [1, 2, 3]
[1, 2, 3, 4, 5] | last # => 5
[1, 2, 3, 4, 5] | last 2 # => [4, 5]
# Skip
[1, 2, 3, 4, 5] | skip 2 # => [3, 4, 5]
# Reverse
[1, 2, 3] | reverse # => [3, 2, 1]
# Unique
[1, 2, 2, 3, 3, 3] | uniq # => [1, 2, 3]
# Each - iterate over list
[1, 2, 3] | each { |x| $x * 2 } # => [2, 4, 6]
# Enumerate - add index
[a b c] | enumerate
# => ╭───┬───────┬──────╮
# => │ # │ index │ item │
# => ├───┼───────┼──────┤
# => │ 0 │ 0 │ a │
# => │ 1 │ 1 │ b │
# => │ 2 │ 2 │ c │
# => ╰───┴───────┴──────╯
# Reduce - fold list to single value
[1, 2, 3, 4] | reduce { |item, acc| $acc + $item } # => 10
# Reduce with initial value
[3, 8, 4] | reduce --fold 1 { |item, acc| $acc * $item } # => 96
# Where - filter by condition
[1, 2, 3, 4, 5] | where $it > 3 # => [4, 5]
# Any - check if any item matches
[1, 2, 3, 4] | any { |x| $x > 3 } # => true
# All - check if all items match
[1, 2, 3, 4] | all { |x| $x > 0 } # => true
# Find - search for element
[apple banana cherry] | find ban # => [banana]
# Inline table
[[name, age]; [Alice, 30] [Bob, 25] [Carol, 35]]
# From records
[
{name: Alice, age: 30}
{name: Bob, age: 25}
{name: Carol, age: 35}
]
# Select specific columns
ls | select name size
# Reject columns
ls | reject type
# Where clause
ls | where size > 1kb
ls | where type == dir
ls | where name =~ ".md" # Regex match
# Sort by column
ls | sort-by size
ls | sort-by modified --reverse
# Sort by multiple columns
[[name, age]; [Alice, 30] [Bob, 25] [Carol, 30]]
| sort-by age name
# Add column
ls | insert bigger_size { |row| $row.size * 2 }
# Update column
ls | update modified { |row| $row.modified | format date "%Y-%m-%d" }
# Rename column
ls | rename old_name new_name
# Group by
ls | group-by type
# Transpose
[[a, b]; [1, 2] [3, 4]] | transpose
# Count
ls | length
# Sum
[1, 2, 3, 4] | math sum
# Average
[1, 2, 3, 4, 5] | math avg
# Min / Max
[5, 2, 8, 1, 9] | math min
[5, 2, 8, 1, 9] | math max
# List files
ls
ls *.rs # With glob pattern
ls **/*.md # Recursive glob
# Change directory
cd /path/to/dir
cd ~ # Home directory
cd - # Previous directory
# Current directory
pwd
$env.PWD # As variable
# Create file
touch file.txt
# Create directory
mkdir mydir
mkdir -p path/to/nested/dir # Create parents
# Copy
cp source.txt dest.txt
cp -r sourcedir destdir # Recursive
# Move/Rename
mv old.txt new.txt
mv file.txt /other/location/
# Remove
rm file.txt
rm -r directory # Recursive
rm -t file.txt # Move to trash (safer)
# Read as structured data (auto-detects format)
open data.json
open data.csv
open data.toml
# Read as raw text
open --raw file.txt
# Read specific format
open --raw file.json | from json
# Save structured data
{name: "Alice", age: 30} | save person.json
# Save as specific format
[1, 2, 3] | to csv | save numbers.csv
# Append to file
"more text" | save --append log.txt
# Save raw text
"Hello, world!" | save --raw message.txt
# File metadata
ls -l file.txt
# File type
"file.txt" | path type # => file or dir
# File exists
"file.txt" | path exists # => true or false
# File size
ls file.txt | get size
# Basename
"/path/to/file.txt" | path basename # => file.txt
# Dirname
"/path/to/file.txt" | path dirname # => /path/to
# Extension
"/path/to/file.txt" | path extension # => txt
# Join paths
["path", "to", "file.txt"] | path join # => path/to/file.txt
# Expand tilde
"~/documents" | path expand # => /home/user/documents
# Parse path
"/path/to/file.txt" | path parse
# Find files recursively
glob **/*.rs
# Find with depth limit
glob **/*.md --depth 2
# Watch files and run command on change
watch . --glob=**/*.rs {|| cargo test }
# Define a command
def greet [name: string] {
$"Hello, ($name)!"
}
# Use it
greet Alice # => "Hello, Alice!"
def greet [name = "World"] {
$"Hello, ($name)!"
}
greet # => "Hello, World!"
greet Alice # => "Hello, Alice!"
def greet [
name: string
--shout (-s) # Boolean flag
--times (-t): int = 1 # Flag with value
] {
let message = if $shout {
($"Hello, ($name)!" | str upcase)
} else {
$"Hello, ($name)!"
}
1..$times | each { $message } | str join "\n"
}
greet Alice --shout
greet Bob --times 3
greet Carol -s -t 2
# Variable number of arguments
def greet-all [...names: string] {
$names | each { |name| $"Hello, ($name)!" }
}
greet-all Alice Bob Carol
# => Hello, Alice!
# => Hello, Bob!
# => Hello, Carol!
def process [
count: int # Integer
rate: float # Float
name: string # String
active: bool # Boolean
items: list # List
config: record # Record
] {
# Function body
}
# Accept pipeline input
def double [] {
$in * 2
}
5 | double # => 10
# Typed pipeline input
def sum-list []: list<int> -> int {
$in | reduce { |item, acc| $acc + $item }
}
[1, 2, 3, 4] | sum-list # => 10
# Basic assignment
let x = 42
let name = "Alice"
let items = [1, 2, 3]
# Cannot reassign without mut (though let can shadow)
# $x = 43 # Error!
# Declare with mut
mut count = 0
$count += 1
$count = $count * 2
# Works with all types
mut items = [1, 2, 3]
$items = ($items | append 4)
# Evaluated at parse time
const CONFIG_FILE = "config.toml"
const MAX_RETRIES = 5
# Can be used in module imports
source $CONFIG_FILE
let outer = "outside"
do {
let inner = "inside"
print $outer # OK - can access outer scope
print $inner # OK
}
# print $inner # Error - inner is out of scope
print $outer # OK
let x = 10
print $x # => 10
do {
let x = 20
print $x # => 20
}
print $x # => 10 (unchanged)
# Return null instead of error if path doesn't exist
let value = $record.field?.subfield?
# Useful for optional fields
let files = (ls)
$files.name?.0? # Safe access even if empty
module greetings {
export def hello [name: string] {
$"Hello, ($name)!"
}
export def goodbye [name: string] {
$"Goodbye, ($name)!"
}
}
use greetings hello
hello "World" # => "Hello, World!"
Create a file greetings.nu:
# greetings.nu
export def hello [name: string] {
$"Hello, ($name)!"
}
export def goodbye [name: string] {
$"Goodbye, ($name)!"
}
Use it:
use greetings.nu
greetings hello "Alice"
# Or import specific items
use greetings.nu hello
hello "Bob"
# Or import all
use greetings.nu *
hello "Carol"
goodbye "Dave"
# module.nu
export-env {
$env.MY_VAR = "some value"
}
export def command [] {
print $env.MY_VAR
}
# greetings.nu
export def main [] {
"Greetings and salutations!"
}
export def hello [name: string] {
$"Hello, ($name)!"
}
Use it:
use greetings.nu
greetings # => "Greetings and salutations!"
greetings hello Bob # => "Hello, Bob!"
# Get environment variable
$env.PATH
$env.HOME
$env.USER
# List all environment variables
$env
# Check if variable exists
'PATH' in $env
# Set for current session
$env.MY_VAR = "value"
# Set for single command
FOO=BAR some-command
# Unset variable
hide-env MY_VAR
# View path
$env.PATH # or $env.Path on Windows
# Add to path (prepend)
$env.PATH = ($env.PATH | prepend "/new/path")
# Add to path (append)
$env.PATH = ($env.PATH | append "/another/path")
# Using standard library helper
use std/util "path add"
path add "/my/path" # Prepends by default
# Convert colon-separated string to list
$env.ENV_CONVERSIONS = {
"XDG_DATA_DIRS": {
from_string: {|s| $s | split row (char esep) }
to_string: {|v| $v | str join (char esep) }
}
}
Nushell uses three main configuration files in $nu.default-config-dir:
- env.nu - Environment variables (legacy, use config.nu instead)
- config.nu - Main configuration and settings
- login.nu - Loaded only for login shells
# Edit config.nu
config nu
# Edit env.nu
config env
# View default config
config nu --default | nu-highlight | less -R
# View config documentation
config nu --doc | nu-highlight | less -R
# In config.nu
# Set default editor
$env.config.buffer_editor = "code"
# or
$env.config.buffer_editor = ["vim", "-n"]
# Disable welcome banner
$env.config.show_banner = false
# Table settings
$env.config.table.mode = "rounded"
$env.config.table.index_mode = "auto"
# History settings
$env.config.history.file_format = "sqlite"
$env.config.history.max_size = 100_000
# Completion settings
$env.config.completions.quick = true
$env.config.completions.partial = true
# Line editor mode
$env.config.edit_mode = "emacs" # or "vi"
# Simple prompt
$env.PROMPT_COMMAND = {
$"(pwd | path basename) > "
}
# Right prompt
$env.PROMPT_COMMAND_RIGHT = {
date now | format date "%H:%M:%S"
}
# Transient prompt (after command execution)
$env.TRANSIENT_PROMPT_COMMAND = ""
$env.TRANSIENT_PROMPT_COMMAND_RIGHT = ""
# Use vivid for LS_COLORS
$env.LS_COLORS = (vivid generate molokai)
# In config.nu
alias ll = ls -l
alias la = ls -a
alias g = git
Files in autoload directories are loaded automatically:
- $nu.vendor-autoload-dirs - For vendor/package manager scripts
- $nu.user-autoload-dirs - For user scripts
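For example, a script dropped into a user autoload directory is sourced at every startup with no extra config. A minimal sketch, assuming the first user autoload directory is writable (the filename my-aliases.nu is illustrative):

```nu
# Write a script into the first user autoload directory;
# Nushell sources it automatically on the next startup
let dir = ($nu.user-autoload-dirs | first)
mkdir $dir
"alias gs = git status" | save ($dir | path join "my-aliases.nu")
```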
# Chain commands with pipe
ls | where size > 1mb | sort-by modified
# Use $in to refer to pipeline input
ls | each { |row| $row.size * 2 }
# $in represents the entire input
"hello" | $in ++ " world" # => "hello world"
ls
| where size > 1kb
| sort-by modified
| select name size
| first 10
# Save intermediate results
command1
| tee { save intermediate.txt }
| command2
# Discard output
some-command | ignore
# Discard stdout and stderr
some-command out+err>| ignore
if $x > 10 {
"big"
} else if $x > 5 {
"medium"
} else {
"small"
}
match $value {
1 => "one",
2 => "two",
3..10 => "several",
_ => "many"
}
for item in [1, 2, 3, 4] {
print $item
}
# With range
for i in 1..10 {
print $i
}
mut count = 0
while $count < 5 {
print $count
$count += 1
}
loop {
# Do something
if $condition {
break
}
}
for i in 1..10 {
if $i == 5 {
continue # Skip to next iteration
}
if $i == 8 {
break # Exit loop
}
print $i
}
try {
# Code that might fail
open nonexistent.txt
} catch {
print "File not found"
}
# System info
sys
# CPU info
sys cpu
# Memory info
sys mem
# Disk info
sys disks
# Network info
sys net
# Processes
ps
# Current user
whoami
# Current date/time
date now
# Format date
date now | format date "%Y-%m-%d %H:%M:%S"
# Parse date
"2024-01-15" | into datetime
# Date arithmetic (durations work with + and -)
(date now) + 1day
(date now) - 2hr
# Basic math
2 + 3
10 - 5
4 * 6
15 / 3
10 mod 3 # Modulo
# Math functions
[1, 2, 3, 4, 5] | math sum
[1, 2, 3, 4, 5] | math avg
[1, 2, 3, 4, 5] | math min
[1, 2, 3, 4, 5] | math max
[1, 2, 3, 4, 5] | math median
# Single value functions
-5 | math abs # => 5
2.7 | math floor # => 2
2.3 | math ceil # => 3
2.5 | math round # => 3
9 | math sqrt # => 3
# GET request
http get https://api.example.com/data
# POST request
http post https://api.example.com/data {key: "value"}
# With headers
http get https://api.example.com/data --headers {Authorization: "Bearer token"}
# Parse JSON
'{"name": "Alice", "age": 30}' | from json
# Convert to JSON
{name: "Bob", age: 25} | to json
# Pretty print JSON
{name: "Carol"} | to json --indent 2
# YAML
open config.yaml
{key: "value"} | to yaml
# TOML
open Cargo.toml
{key: "value"} | to toml
# Get user input
let name = input "Enter your name: "
# Secret input (password)
let password = input -s "Enter password: "
# Random integer
random int 1..100
# Random float
random float 0..1
# Random boolean
random bool
# Random UUID
random uuid
# Random item from list
[apple banana cherry] | shuffle | first # Random item from list
| Bash | Nu | Description |
|---|---|---|
| ls | ls | List files |
| ls -la | ls --long --all or ls -la | List all files with details |
| cd <dir> | cd <dir> | Change directory |
| pwd | pwd or $env.PWD | Current directory |
| mkdir -p <path> | mkdir <path> | Create directory (parents auto-created) |
| touch file.txt | touch file.txt | Create file |
| cat <file> | open --raw <file> | Display file contents |
| cp <src> <dest> | cp <src> <dest> | Copy file |
| mv <src> <dest> | mv <src> <dest> | Move/rename file |
| rm <file> | rm <file> | Remove file |
| rm -rf <dir> | rm -r <dir> | Remove directory recursively |
| echo $PATH | $env.PATH (Linux/Mac) or $env.Path (Windows) | View PATH |
| export FOO=BAR | $env.FOO = "BAR" | Set environment variable |
| echo $FOO | $env.FOO | Use environment variable |
| grep <pattern> | where $it =~ <pattern> or find <pattern> | Filter strings |
| find . -name *.rs | ls **/*.rs | Find files recursively |
| which <cmd> | which <cmd> | Locate command |
| man <cmd> | help <cmd> | Get command help |
| command1 && command2 | command1; command2 | Run commands sequentially |
| > | out> or \| save | Redirect output |
| >> | out>> or \| save --append | Append output |
| 2>&1 | out+err>\| or o+e>\| | Combine stdout/stderr |
# Bash: for f in *.md; do echo $f; done
# Nu:
ls *.md | each { |file| $file.name }
# Bash: cat file1 file2 | grep pattern
# Nu:
[file1 file2] | each { open } | str join | find pattern
# Bash: export PATH=$PATH:/new/path
# Nu:
$env.PATH = ($env.PATH | append "/new/path")
# Bash: if [ -f file.txt ]; then echo "exists"; fi
# Nu:
if ("file.txt" | path exists) { "exists" }
# Bash: command=$(ls | wc -l)
# Nu:
let command = (ls | length)
5 == 5 # Equal
5 != 3 # Not equal
5 > 3 # Greater than
5 >= 5 # Greater than or equal
3 < 5 # Less than
3 <= 5 # Less than or equal
"a" =~ "a" # Regex match
"a" !~ "b" # Regex not match
5 in [1,2,3,4,5] # In list
3 + 2 # Addition => 5
5 - 2 # Subtraction => 3
3 * 4 # Multiplication => 12
10 / 2 # Division => 5
10 mod 3 # Modulo => 1
2 ** 3 # Exponentiation => 8
true and false # Logical AND
true or false # Logical OR
not true # Logical NOT
"hello" ++ " world" # Concatenation => "hello world"
"hello" starts-with "he" # true
"hello" ends-with "lo" # true
"hello" in "hello world" # true
[1 2] ++ [3 4] # Concatenation => [1, 2, 3, 4]
[1 2 3] | append 4 # Append => [1, 2, 3, 4]
[0 1 2 3] | prepend 4 # Prepend => [4, 0, 1, 2, 3]
mut x = 5
$x += 3 # x = x + 3
$x -= 2 # x = x - 2
$x *= 2 # x = x * 2
$x /= 2 # x = x / 2
mut list = [1, 2, 3]
$list ++= [4, 5] # Append a list to a list
# Spread list elements as separate arguments
def greet [...names] {
$names | each { |name| $"Hello, ($name)!" }
}
let guests = ["Alice", "Bob", "Carol"]
greet ...$guests # Spreads list as individual arguments
# Run command in parallel for each item
ls | par-each { |file|
# Process file
$file.name | str upcase
}
# Note: items are processed concurrently, so side effects may run out of order
# Try-catch block
try {
open nonexistent.txt
} catch {
print "File not found!"
}
# Create custom error
def my-command [x] {
if $x < 0 {
error make {
msg: "Value must be positive"
label: {
text: "negative value here"
span: (metadata $x).span
}
}
}
$x * 2
}
# Spawn a background job
let job_id = job spawn {
sleep 10sec
"Job complete" | save result.txt
}
# List active jobs
job list
# Kill a job
job kill $job_id
# Send data to a job
"data" | job send $job_id
# Receive data from a job
job recv
Polars is a blazingly fast DataFrame library that integrates with Nushell for high-performance data analysis.
First, install the polars plugin:
plugin add nu_plugin_polars
plugin use polars
# From Nushell table
[[name age city]; [Alice 30 NYC] [Bob 25 LA]]
| polars into-df
# From CSV file (creates lazy dataframe by default)
polars open data.csv
# From CSV file (eager dataframe)
polars open --eager data.csv
# From list to dataframe
[1 2 3 4 5] | polars into-df
# From record with schema
{a: [1 2 3], b: ["x" "y" "z"]}
| polars into-df --as-columns
# Create with custom schema
[[a b]; [1 "foo"] [2 "bar"]]
| polars into-df -s {a: u8, b: str}
# Read CSV
polars open data.csv
# Read Parquet
polars open data.parquet
# Read JSON lines
polars open data.ndjson
# Read multiple file types
polars open data.arrow
polars open data.avro
# Read with specific sheets (for Excel/ODS)
polars open data.xlsx --sheets [Sheet1 Sheet2]
# View first/last rows
polars open data.csv | polars first 10 | polars collect
polars open data.csv | polars last 5 | polars collect
# Get shape (rows and columns)
let df = polars open data.csv
$df | polars shape
# View schema
$df | polars schema
# View columns
$df | polars columns
# Select specific columns
$df | polars select [name age salary]
# Drop columns
$df | polars drop [unused_col1 unused_col2]
# Rename columns
$df | polars rename old_name new_name
# Filter rows
polars open data.csv
| polars filter (polars col age > 30)
| polars collect
# Multiple conditions
polars open data.csv
| polars filter (
((polars col age) > 25) and ((polars col salary) < 100000)
)
| polars collect
# Filter with string matching
polars open data.csv
| polars filter ((polars col name) =~ "^A")
| polars collect
# Filter nulls
polars open data.csv
| polars filter (polars col age | polars is-not-null)
| polars collect
# Complex filter with multiple columns
polars open data.csv
| polars filter (
((polars col department) == "Engineering") and
((polars col years_experience) >= 5)
)
| polars collect
# Select columns
polars open data.csv
| polars select [name email department]
| polars collect
# Select with expressions
polars open data.csv
| polars select [
(polars col name)
((polars col salary) * 1.1 | polars as new_salary)
]
| polars collect
# Select all columns matching regex
polars open data.csv
| polars select (polars col '^sales_.*$')
| polars collect
# Add new column
polars open data.csv
| polars with-column (
(polars col salary) * 0.15 | polars as tax
)
| polars collect
# Add multiple columns
polars open data.csv
| polars with-column [
((polars col first_name) ++ (polars col last_name) | polars as full_name)
((polars col salary) / 12 | polars as monthly_salary)
]
| polars collect
# Update existing column
polars open data.csv
| polars with-column (
(polars col salary) * 1.05 | polars as salary
)
| polars collect
# Cast column types
polars open data.csv
| polars cast str age
| polars collect
# Basic aggregations
polars open data.csv | polars sum | polars collect
polars open data.csv | polars mean | polars collect
polars open data.csv | polars median | polars collect
polars open data.csv | polars min | polars collect
polars open data.csv | polars max | polars collect
polars open data.csv | polars std | polars collect
# Count non-null values
polars open data.csv | polars count | polars collect
# Get summary statistics
polars open data.csv | polars summary
# Aggregation on specific columns
polars open data.csv
| polars select (polars col salary | polars sum)
| polars collect
# Group by single column
polars open data.csv
| polars group-by department
| polars agg (polars col salary | polars mean)
| polars collect
# Group by multiple columns
polars open data.csv
| polars group-by [department location]
| polars agg [
(polars col salary | polars mean | polars as avg_salary)
(polars col employee_id | polars count | polars as employee_count)
]
| polars collect
# Multiple aggregations
polars open data.csv
| polars group-by department
| polars agg [
(polars col salary | polars min | polars as min_salary)
(polars col salary | polars max | polars as max_salary)
(polars col salary | polars mean | polars as avg_salary)
(polars col employee_id | polars count | polars as count)
]
| polars collect
# Group by with complex expressions
polars open sales.csv
| polars group-by (polars col date | polars get-month)
| polars agg (polars col revenue | polars sum)
| polars collect
# Sort by single column
polars open data.csv
| polars sort-by age
| polars collect
# Sort by multiple columns
polars open data.csv
| polars sort-by [department salary]
| polars collect
# Sort descending
polars open data.csv
| polars sort-by salary --reverse
| polars collect
# Sort with nulls last
polars open data.csv
| polars sort-by age --nulls-last
| polars collect
# Inner join
let df1 = [[id name]; [1 Alice] [2 Bob]] | polars into-df
let df2 = [[id salary]; [1 50000] [2 60000]] | polars into-df
$df1 | polars join $df2 id id | polars collect
# Left join
$df1 | polars join --left $df2 id id | polars collect
# Outer join
$df1 | polars join --outer $df2 id id | polars collect
# Join on multiple columns
polars open employees.csv
| polars join (polars open departments.csv)
[dept_id location]
[dept_id location]
| polars collect
# Cross join (cartesian product)
$df1 | polars join --cross $df2 | polars collect
# String length
polars open data.csv
| polars select (polars col name | polars str-lengths)
| polars collect
# Uppercase/Lowercase
polars open data.csv
| polars select (polars col name | polars uppercase)
| polars collect
polars open data.csv
| polars select (polars col email | polars lowercase)
| polars collect
# String contains
polars open data.csv
| polars filter ((polars col name) =~ "smith")
| polars collect
# String replace
polars open data.csv
| polars with-column (
polars col phone
| polars str-replace --pattern "-" --replace ""
)
| polars collect
# String slicing
polars open data.csv
| polars select (
polars col id
| polars str-slice 0 --length 5
| polars as short_id
)
| polars collect
# Parse dates
["2021-12-30" "2021-12-31"] | polars into-df
| polars as-date "%Y-%m-%d"
# Extract date parts
polars open events.csv
| polars select [
(polars col timestamp | polars get-year | polars as year)
(polars col timestamp | polars get-month | polars as month)
(polars col timestamp | polars get-day | polars as day)
(polars col timestamp | polars get-hour | polars as hour)
]
| polars collect
# Date arithmetic
polars open data.csv
| polars filter (
(polars col created_date) > (
(polars col updated_date) - (polars lit 30day)
)
)
| polars collect
# Format dates
polars open data.csv
| polars with-column (
polars col timestamp
| polars strftime "%Y-%m-%d"
| polars as formatted_date
)
| polars collect
# Rolling sum
[1 2 3 4 5] | polars into-df
| polars rolling sum 2
| polars drop-nulls
# Cumulative sum
[[a]; [1] [2] [3] [4] [5]]
| polars into-df
| polars select (polars col a | polars cumulative sum)
| polars collect
# Window function with partition
[[a b]; [x 2] [x 4] [y 6] [y 4]]
| polars into-lazy
| polars select [
(polars col a)
(polars col b | polars cumulative sum | polars over a | polars as cum_b)
]
| polars collect
# Pivot (wide format)
[[name subject score];
[Alice Math 90]
[Alice English 85]
[Bob Math 88]
[Bob English 92]]
| polars into-df
| polars pivot --on [subject] --index [name] --values [score]
# Unpivot (long format)
[[name math english]; [Alice 90 85] [Bob 88 92]]
| polars into-df
| polars unpivot --index [name] --on [math english]
# Transpose
polars open data.csv | polars first 1 | polars collect
# Drop nulls
polars open data.csv
| polars drop-nulls
| polars collect
# Fill nulls with value
polars open data.csv
| polars fill-null 0
| polars collect
# Fill nulls with expression
[1 2 null 3 null] | polars into-df
| polars fill-null (polars col 0 | polars mean)
# Count nulls
polars open data.csv
| polars select (polars col age | polars count-null)
| polars collect
# Check for nulls
polars open data.csv
| polars with-column (
polars col age | polars is-null | polars as age_is_null
)
| polars collect
# Quantiles
polars open data.csv
| polars quantile 0.95
| polars collect
# Value counts
[apple banana apple cherry banana apple]
| polars into-df
| polars value-counts
# Unique values
polars open data.csv
| polars select (polars col category | polars unique)
| polars collect
# N-unique (count of unique values)
polars open data.csv
| polars select (polars col category | polars n-unique)
| polars collect
# Lazy dataframe (default for polars open)
let lazy_df = polars open large_file.csv
# View the query plan (doesn't execute)
$lazy_df | polars first 5
# Execute with collect
$lazy_df | polars first 5 | polars collect
# Convert eager to lazy
let eager_df = [[a b]; [1 2] [3 4]] | polars into-df
let lazy_df = $eager_df | polars into-lazy
# Chain operations on lazy dataframe
polars open data.csv
| polars filter ((polars col age) > 30)
| polars select [name email salary]
| polars sort-by salary --reverse
| polars first 10
| polars collect # Execute the entire chain
# Concatenate vertically (append rows)
let df1 = [[a b]; [1 2] [3 4]] | polars into-df
let df2 = [[a b]; [5 6] [7 8]] | polars into-df
polars concat $df1 $df2 | polars collect
# Concatenate horizontally (append columns)
$df1 | polars append $df2 --col
# Explode list column
[[id tags]; [1 [a b c]] [2 [d e]]]
| polars into-df -s {id: i64, tags: list<str>}
| polars explode tags
| polars collect
# Check if list contains value
[[id items]; [1 [a b c]] [2 [d e f]]]
| polars into-df -s {id: i64, items: list<str>}
| polars with-column (
polars col items
| polars list-contains (polars lit "b")
| polars as has_b
)
| polars collect
# Query with SQL
polars open data.csv
| polars query "SELECT name, age FROM df WHERE age > 30"
| polars collect
# Complex SQL query
polars open employees.csv
| polars query "
SELECT department,
AVG(salary) as avg_salary,
COUNT(*) as employee_count
FROM df
GROUP BY department
HAVING AVG(salary) > 50000
ORDER BY avg_salary DESC
"
| polars collect
# Save to CSV
polars open data.parquet | polars save output.csv
# Save to Parquet
polars open data.csv | polars save output.parquet
# Save to JSON lines
polars open data.csv | polars save output.ndjson
# Lazy save (streaming)
polars open huge_file.csv
| polars filter ((polars col age) > 18)
| polars save filtered.parquet # Streams directly to file
# Use lazy evaluation for large datasets
polars open large.csv
| polars filter ((polars col status) == "active")
| polars select [id name email]
| polars collect # Only executes here
# Cache intermediate results
polars open data.csv
| polars filter ((polars col category) == "A")
| polars cache # Cache this filtered result
| polars group-by region
| polars agg (polars col sales | polars sum)
| polars collect
# Use streaming for huge datasets
polars open massive.csv
| polars filter ((polars col year) == 2024)
| polars save filtered.parquet # Streams without loading all into memory

# Complex data transformation pipeline
polars open sales_data.csv
| polars with-column [
# Add calculated columns
((polars col quantity) * (polars col price) | polars as total)
((polars col date) | polars get-year | polars as year)
((polars col date) | polars get-month | polars as month)
]
| polars filter ((polars col total) > 1000)
| polars group-by [year month]
| polars agg [
(polars col total | polars sum | polars as revenue)
(polars col order_id | polars count | polars as order_count)
]
| polars sort-by [year month]
| polars collect
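Long lazy pipelines can also be built up across let bindings and collected once at the end. The sketch below reuses the hypothetical columns from the example above (total, year, month):

```nu
# Nothing executes until polars collect; the bindings only extend the lazy plan
let filtered = (
    polars open sales_data.csv
    | polars filter ((polars col total) > 1000)
)
let monthly = (
    $filtered
    | polars group-by [year month]
    | polars agg (polars col total | polars sum | polars as revenue)
)
$monthly | polars collect
```

Splitting the plan this way makes intermediate stages reusable without paying for repeated execution.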
# Time series analysis
polars open stock_prices.csv
| polars sort-by date
| polars with-column [
# 7-day moving average
(polars col price | polars rolling mean 7 | polars as ma_7)
# Daily returns
((polars col price) / (polars col price | polars shift 1) - 1
| polars as daily_return)
]
| polars collect
# Data cleaning pipeline
polars open messy_data.csv
| polars drop-nulls # Remove rows with any nulls
| polars drop-duplicates # Remove duplicate rows
| polars with-column [
# Trim whitespace from strings
(polars col name | polars str-strip-chars " " | polars as name)
# Fill missing values
(polars col age | polars fill-null 0)
]
| polars collect

Nushell has a built-in in-memory SQLite database accessible via stor commands.
# Create a table with schema
stor create --table-name employees --columns {
id: int,
name: str,
department: str,
salary: int,
hire_date: datetime
}
# Verify table was created
stor open | query db "SELECT name FROM sqlite_master WHERE type='table'"

# Insert single record
stor insert --table-name employees --data-record {
id: 1,
name: "Alice",
department: "Engineering",
salary: 90000,
hire_date: 2020-01-15
}
# Insert via pipeline (single record)
{id: 2, name: "Bob", department: "Sales", salary: 75000, hire_date: 2021-03-20}
| stor insert --table-name employees
# Insert multiple records (table)
[
{id: 3, name: "Carol", department: "Engineering", salary: 95000, hire_date: 2019-07-10}
{id: 4, name: "David", department: "Marketing", salary: 70000, hire_date: 2022-02-01}
]
| stor insert --table-name employees
# Insert ls output
ls | stor insert --table-name files
# Insert with nested data (stored as JSON)
[{
name: "Project Alpha",
metadata: {status: "active", priority: "high"},
tags: ["important", "q4"]
}]
| stor insert --table-name projects

# Basic SELECT query
stor open | query db "SELECT * FROM employees"
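Because query db returns an ordinary Nushell table, its output composes with the usual pipeline commands. A small illustration, using the table and columns from the examples above:

```nu
# Post-process SQL results with regular Nushell commands
stor open
| query db "SELECT name, salary FROM employees"
| where salary > 80000
| sort-by salary --reverse
| first 3
```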
# Query with WHERE clause
stor open
| query db "SELECT name, salary FROM employees WHERE salary > 80000"
# Query with ORDER BY
stor open
| query db "
SELECT department, AVG(salary) as avg_salary
FROM employees
GROUP BY department
ORDER BY avg_salary DESC
"
# Query with parameters (prevents SQL injection)
stor open
| query db "SELECT * FROM employees WHERE department = ?" -p ["Engineering"]
# Query with named parameters
stor open
| query db "
SELECT * FROM employees
WHERE salary > :min_salary AND department = :dept
" -p {min_salary: 80000, dept: "Engineering"}
# Join tables
stor open
| query db "
SELECT e.name, e.salary, d.location
FROM employees e
JOIN departments d ON e.department = d.name
"

# Update records
stor update --table-name employees --update-record {salary: 100000} --where-clause "name = 'Alice'"
# Update via pipeline
{salary: 78000}
| stor update --table-name employees --where-clause "id = 2"
# Bulk update
stor open
| query db "
UPDATE employees
SET salary = salary * 1.05
WHERE department = 'Engineering'
"

# Delete specific rows
stor delete --table-name employees --where-clause "id = 4"
# Delete all rows matching condition
stor delete --table-name employees --where-clause "hire_date < '2020-01-01'"
# Drop entire table
stor delete --table-name old_table

# Insert data with JSON/JSONB columns
stor create --table-name documents --columns {
id: int,
title: str,
metadata: jsonb
}
[{
id: 1,
title: "Report 2024",
metadata: {author: "Alice", version: 1, tags: ["finance", "q1"]}
}]
| stor insert --table-name documents
# Query JSON fields (automatically parsed)
stor open
| query db "SELECT id, title, metadata FROM documents"
| get metadata.0 # Access as Nushell record
# => {author: Alice, version: 1, tags: [finance, q1]}
# Query JSON sub-fields with SQLite JSON operators
stor open
| query db "SELECT metadata->>'author' as author FROM documents"
# Returns text that needs parsing
# Extract and parse JSON manually
stor open
| query db "SELECT metadata->'tags' as tags FROM documents"
| update tags { from json }

# Export in-memory database to file
stor export --file-name backup.db
# Import database from file
stor import --file-name backup.db
# Reset database (drop all tables)
stor reset
# Complex analytics query
stor open
| query db "
WITH monthly_sales AS (
SELECT
strftime('%Y-%m', sale_date) as month,
SUM(amount) as total_sales,
COUNT(*) as transaction_count
FROM sales
GROUP BY month
)
SELECT
month,
total_sales,
transaction_count,
total_sales / transaction_count as avg_transaction
FROM monthly_sales
WHERE total_sales > 10000
ORDER BY month DESC
"

# Load from database, process with Polars, save back
stor open
| query db "SELECT * FROM raw_data"
| polars into-df
| polars filter ((polars col value) > 100)
| polars group-by category
| polars agg (polars col value | polars mean | polars as avg_value)
| polars collect
| polars into-nu
| stor insert --table-name processed_data
# Use Polars for complex transformation, then query with SQL
polars open large_dataset.csv
| polars filter ((polars col status) == "active")
| polars select [id name revenue]
| polars collect
| polars into-nu
| stor insert --table-name active_customers
stor open
| query db "
SELECT name, revenue
FROM active_customers
WHERE revenue > (SELECT AVG(revenue) FROM active_customers)
ORDER BY revenue DESC
"

# Nushell native (good for small datasets)
ls | where size > 1mb | get name
# Polars (much faster for large datasets)
polars open file_list.csv
| polars filter ((polars col size) > 1048576)
| polars select name
| polars collect
# Benchmark example
use std bench
bench {
# Native Nushell
open large.csv | where value > 100 | length
}
# vs
bench {
# Polars
polars open large.csv
| polars filter ((polars col value) > 100)
| polars count
| polars collect
}

# List installed plugins
plugin list
# Add a plugin
plugin add ~/.cargo/bin/nu_plugin_query
# Use plugin commands
query web --query 'table' https://example.com

Hooks allow running commands at specific events:
# In config.nu
$env.config.hooks = {
pre_prompt: [{
# Run before each prompt
null
}]
pre_execution: [{
# Run before each command
null
}]
env_change: {
PWD: [{|before, after|
# Run when directory changes
null
}]
}
}

# Create overlay
overlay new my-overlay
# Activate module as overlay
overlay use my-module.nu
# Hide overlay
overlay hide my-overlay
# List active overlays
overlay list

# Use $in to refer to pipeline input
ls | where $in.size > 1mb
"hello" | $in ++ " world"

# Change table mode temporarily
ls | table --mode rounded
# Available modes: basic, compact, default, heavy, light,
# markdown, none, reinforced, rounded, etc.

# Interactively explore data
sys | explore
# Explore with specific configuration
ls | explore --mode table

# Format as JSON
{name: "Alice", age: 30} | to json
# Format with custom pattern
ls | format pattern "{name} is {size}"
# Format date
date now | format date "%Y-%m-%d %H:%M:%S"

# View command metadata
metadata ls
# Explain command
explain { ls | where size > 1mb }
# Profile execution
debug profile { ls | where size > 1mb }
# Inspect values
let x = [1 2 3]
$x | inspect

# View command history
history
# Search history
history | where command =~ "git"
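Since history returns a regular table with a command column, matching rows can be pulled back out and reused; a small illustration:

```nu
# Text of the most recent command mentioning git
history | where command =~ "git" | last | get command
```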
# History in SQLite format (default)
$env.config.history.file_format = "sqlite"

# Custom completions
def "nu-complete git-branches" [] {
^git branch | lines | each { |line| $line | str trim }
}
def "git checkout" [
branch: string@"nu-complete git-branches"
] {
^git checkout $branch
}

# In config.nu
$env.config.keybindings = [
{
name: my_custom_binding
modifier: control
keycode: char_t
mode: [emacs, vi_normal, vi_insert]
event: { send: ExecuteHostCommand cmd: "commandline edit --insert 'test'" }
}
]

# Fastest startup (no config, no stdlib)
nu -n --no-std-lib
# No config files
nu -n
# Custom config
nu --config my-config.nu --env-config my-env.nu

# Parse and manipulate JSON
open data.json | select name age | where age > 30
# Nested field access
open config.json | get server.host
# Update nested field
open config.json | upsert server.port 8080 | save -f config.json

# Match
"hello123" =~ '\d+' # => true
# Extract
"Price: $42.50" | parse --regex 'Price: \$(?P<price>[\d.]+)'
# Replace
"hello world" | str replace --all --regex '\w+' 'X' # => "X X"

# Access nested data
let data = {user: {name: "Alice", age: 30}}
$data.user.name # => "Alice"
# Optional access (doesn't error if missing)
$data.user?.email? # => null if doesn't exist
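Optional access pairs naturally with the default command to supply a fallback value:

```nu
let cfg = {server: {host: "localhost"}}
# A missing optional field yields null; default replaces null with a fallback
$cfg.server?.port? | default 8080 # => 8080
```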
# Use in commands
open config.json | get server.database.host

# File operations
ls, cd, pwd, mkdir, touch, rm, cp, mv, open, save
# Data manipulation
select, where, sort-by, group-by, first, last, take, skip
# Iteration
each, par-each, reduce, enumerate, filter
# Text operations
str upcase, str downcase, str trim, str replace, split, parse
# Data conversion
to json, to yaml, to csv, from json, from yaml, from csv
# System
ps, sys, whoami, which, date now
# Help
help <command>, help commands, help --find <term>

# Filter and sort
ls | where size > 1mb | sort-by size
# Group and count
ls | group-by type | get file | length
# Transform data
ls | select name size | update size { |row| $row.size / 1mb }
# Chain operations
open data.json
| where status == "active"
| select name email
| sort-by name
| save filtered.json
# Conditional processing
ls | each { |file|
if $file.size > 1mb {
$file.name
}
}

- Official Website: https://www.nushell.sh
- Documentation: https://www.nushell.sh/book/
- GitHub: https://github.com/nushell/nushell
- Discord: https://discord.gg/NtAbbGn
- Scripts Repository: https://github.com/nushell/nu_scripts
- Awesome Nu: https://github.com/nushell/awesome-nu
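The snippets below belong in config.nu; the built-in $nu constant records where that file and its siblings live:

```nu
# Built-in constant paths, handy for opening or sourcing your config
$nu.config-path # path to config.nu
$nu.env-path    # path to env.nu
$nu.default-config-dir
```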
# Set editor
$env.config.buffer_editor = "vim"
# Disable banner
$env.config.show_banner = false
# Table settings
$env.config.table.mode = "rounded"
# History settings
$env.config.history.file_format = "sqlite"
$env.config.history.max_size = 100_000

# In config.nu
alias ll = ls -l
alias la = ls -a
alias lla = ls -la
alias g = git
alias k = kubectl
alias d = docker
alias cat = bat # If bat is installed
alias grep = rg # If ripgrep is installed

# In config.nu
$env.PROMPT_COMMAND = {
let dir = (pwd | path basename)
let git_branch = (do -i { git branch --show-current } | complete | get stdout | str trim)
if ($git_branch | is-empty) {
$"($dir) > "
} else {
$"($dir) [($git_branch)] > "
}
}
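A right-side prompt can be defined the same way through PROMPT_COMMAND_RIGHT; the clock format here is only an example:

```nu
# In config.nu: show the current time at the right edge of the prompt
$env.PROMPT_COMMAND_RIGHT = { date now | format date "%H:%M:%S" }
```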
Note: 1..10..2 (intended as "range from 1 to 10, step by 2") is incorrect syntax. Stepped ranges in Nushell are written start..next..end, with the step inferred from the first two values:

1..3..10 # => 1 3 5 7 9
1..10 | where ($it mod 2) == 0 # filter a plain range instead
0..2..10 | skip 1 # stepped range from 0, dropping the leading 0

This is from the Nushell version 0.108 Manual.