@laoar
laoar / mmap_zcopy
Last active June 3, 2024 09:07
an example of kernel-space to user-space zero-copy via mmap, plus a comparison of mmap with read/write
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/device.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <asm/uaccess.h>
#define MAX_SIZE (PAGE_SIZE * 2) /* max size mmaped to userspace */
#define DEVICE_NAME "mchar"
@laoar
laoar / netns: tcp_rto_max.diff
Last active October 22, 2017 10:20
introduce a new sysctl knob, sysctl_tcp_rto_max, to control the maximum TCP RTO, and make it per-netns
diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
index 9a14a08..f12c655 100644
--- a/include/net/netns/ipv4.h
+++ b/include/net/netns/ipv4.h
@@ -125,6 +125,7 @@ struct netns_ipv4 {
int sysctl_tcp_sack;
int sysctl_tcp_window_scaling;
int sysctl_tcp_timestamps;
+ int sysctl_tcp_rto_max;
struct inet_timewait_death_row tcp_death_row;
@laoar
laoar / cgroup-aware-slab.diff
Last active October 22, 2017 10:33
memcg-aware slab
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -145,7 +145,7 @@ static int anon_pipe_buf_steal(struct pipe_inode_info *pipe,
if (page_count(page) == 1) {
if (memcg_kmem_enabled())
- memcg_kmem_uncharge(page, 0);
+ memcg_kmem_uncharge(page, 0, NULL);
__SetPageLocked(page);
return 0;
@laoar
laoar / per memcg score_adj
Created August 22, 2019 07:13
patch for memcg score_adj
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -21,6 +21,7 @@
#include <linux/vmstat.h>
#include <linux/writeback.h>
#include <linux/page-flags.h>
+#include <linux/oom.h>
struct mem_cgroup;
struct page;
@laoar
laoar / page-reclaim.py
Last active September 18, 2019 01:27
under tools/perf/scripts/python/
import os
import sys
import getopt
import signal
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
usage = "usage: perf script report page-reclaim -- [-h] [-p] [-v]\n"
latency_metric = ['total', 'max', 'avg', 'min']
@laoar
laoar / mm-memcg-introduce-multiple-level-memory-low-protect.patch
Last active October 25, 2019 08:20
mm, memcg: introduce multiple level memory low protection
From f16bb9724a4a9b802a981231c7021d2aea9f4dc2 Mon Sep 17 00:00:00 2001
From: Yafang Shao <[email protected]>
Date: Tue, 22 Oct 2019 22:17:15 -0400
Subject: [PATCH] mm, memcg: introduce multiple level memory low protection
This patch introduces a new memory controller file, memory.low.level,
which is used to set multiple levels of memory.low protection.
The valid value of memory.low.level is [0..3], meaning four levels of
protection are supported. This new controller file takes effect only
when memory.low is set.
Subject: [PATCH] mm, memcg: fix the stupid OOM killer when shrinking memcg
hard limit
When there are no more processes in a memcg (e.g., because the OOM
group was killed), file pages can still remain in the page cache.
If these pages are protected by memory.min, they can't be reclaimed.
If no process will ever run in this memcg again but the memcg is kept
online, we do want to drop these pages from the page cache.
@laoar
laoar / get_memcg_count.stp
Last active December 9, 2019 10:33
trace: a trace script to help analyze issues
# usage : stap -g get_memcg_count.stp
# I only verified it on CentOS 7 (kernel-3.10) @yafang
# With it we can calculate how many offline memcgs there are.
# This script counts all memcgs, while /proc/cgroups reports online memcgs.
#
# all memcgs = online memcgs + offline memcgs
%{
diff --git a/fs/inode.c b/fs/inode.c
index fef457a..fb4a0a0 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -753,6 +753,39 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
return LRU_ROTATE;
}
+ /* Page protection only works in reclaimer */
+ if (inode->i_data.nrpages && current->reclaim_state) {
Subject: [PATCH 1/4] mm, memcg: reduce size of struct mem_cgroup by using bit
field
Some members of struct mem_cgroup can only be 0 (false) or 1 (true), so
we can define them as bit fields to reduce the struct's size. With this
patch, the size of struct mem_cgroup can be reduced by 64 bytes in theory,
but because of the MEMCG_PADDING()s the real saving may differ, depending
on the cache line size. Either way, this patch reduces the size of
struct mem_cgroup to some extent.