[linus:master] [swap_state] 5649d113ff: vm-scalability.throughput -33.1% regression
Hello,

We noticed a -33.1% regression of vm-scalability.throughput due to commit:

commit: 5649d113ffce9f532a9ecc5ab96a93e02efbf283 ("swap_state: update shadow_nodes for anonymous page")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master

in testcase: vm-scalability
on test machine: 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz (Cascade Lake) with 128G memory
with following parameters:

thp_enabled: never
thp_defrag: never
nr_task: 128
nr_pmem: 2
priority: 1
test: swap-w-seq
cpufreq_governor: performance

test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
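
For reference, a minimal reproduction sketch (the LKP harness normally drives this; the clone URL below is the cgit link above, and "case-swap-w-seq" is an assumed script name inferred from the test=swap-w-seq parameter, so adjust both to whatever the checkout actually provides):

# Hedged sketch: fetch the vm-scalability suite and run the swap-w-seq case.
# The suite's own run wrapper is what normally sets nr_task and friends;
# this only shows the shape of a manual run.
import subprocess

REPO = "https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/"

subprocess.run(["git", "clone", REPO, "vm-scalability"], check=True)
subprocess.run(["./case-swap-w-seq"], cwd="vm-scalability", check=True)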

In addition, the commit has a significant impact on the following tests:

+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput -11.1% regression                                   |
| test machine     | 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz (Cascade Lake) with 128G memory |
| test parameters  | cpufreq_governor=performance                                                                   |
|                  | nr_pmem=2                                                                                      |
|                  | nr_task=8                                                                                      |
|                  | priority=1                                                                                     |
|                  | test=swap-w-seq                                                                                |
|                  | thp_defrag=never                                                                               |
|                  | thp_enabled=never                                                                              |
+------------------+------------------------------------------------------------------------------------------------+


If you fix the issue, kindly add the following tags
| Reported-by: kernel test robot <yujie.liu@intel.com>
| Link: https://lore.kernel.org/oe-lkp/202303201529.87356b9e-yujie.liu@intel.com


Details are as follows:
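
Each row in the tables below gives the parent-commit mean with its relative stddev, the percent change, and the patched-commit mean with its relative stddev. A quick sketch of the arithmetic behind the %change column, using the throughput row from the first table:

# Recompute the %change column from the two per-commit means; the sample
# values are the vm-scalability.throughput row reported below.
parent_mean = 10026093    # 04bac040bc (parent commit)
patched_mean = 6702748    # 5649d113ff (commit under test)

pct_change = (patched_mean - parent_mean) / parent_mean * 100
print(f"{pct_change:+.1f}%")    # -33.1%, matching the report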

=========================================================================================
compiler/cpufreq_governor/kconfig/nr_pmem/nr_task/priority/rootfs/tbox_group/test/testcase/thp_defrag/thp_enabled:
gcc-11/performance/x86_64-rhel-8.3/2/128/1/debian-11.1-x86_64-20220510.cgz/lkp-csl-2sp9/swap-w-seq/vm-scalability/never/never

commit:
04bac040bc ("mm/hugetlb: convert get_hwpoison_huge_page() to folios")
5649d113ff ("swap_state: update shadow_nodes for anonymous page")

04bac040bc71b4b3 5649d113ffce9f532a9ecc5ab96
---------------- ---------------------------
%stddev %change %stddev
\ | \
10026093 ? 3% -33.1% 6702748 ? 2% vm-scalability.throughput
74822 ? 4% -32.0% 50904 vm-scalability.median
2691 ? 24% -1370.3 1320 ? 14% vm-scalability.stddev%
4.19 ? 11% -52.6% 1.98 ? 6% vm-scalability.free_time
481301 ? 3% +5.3% 506880 ? 2% vm-scalability.time.maximum_resident_set_size
1251 ? 2% +21.6% 1522 ? 2% vm-scalability.time.system_time
88.12 -11.5% 77.95 vm-scalability.time.user_time
40.76 ? 7% -21.5% 32.00 ? 12% sched_debug.cfs_rq:/.util_est_enqueued.avg
22.51 ? 5% -18.8% 18.28 ? 5% iostat.cpu.idle
72.30 +7.1% 77.40 iostat.cpu.system
5.19 ? 2% -16.8% 4.31 ? 2% iostat.cpu.user
14.62 ? 9% -3.6 11.02 ? 10% mpstat.cpu.all.idle%
1.14 -0.2 0.90 ? 2% mpstat.cpu.all.irq%
5.67 ? 2% -1.0 4.66 ? 2% mpstat.cpu.all.usr%
13.20 ? 11% -18.1% 10.81 ? 10% turbostat.CPU%c1
23051407 ? 2% -14.4% 19727636 ? 4% turbostat.IRQ
224.88 -1.2% 222.24 turbostat.PkgWatt
16.80 -5.4% 15.90 turbostat.RAMWatt
434.67 ? 25% +325.1% 1847 ? 21% vmstat.memory.buff
6474867 ? 5% -20.8% 5126808 ? 4% vmstat.memory.free
3163131 ? 2% -21.1% 2497139 ? 2% vmstat.swap.so
8889 ? 4% -13.4% 7701 ? 3% vmstat.system.cs
596093 ? 2% -27.4% 432921 ? 5% vmstat.system.in
464.33 ? 24% +303.4% 1873 ? 22% meminfo.Buffers
440.83 ? 21% +294.9% 1740 ? 21% meminfo.Inactive(file)
176308 -37.1% 110933 meminfo.KReclaimable
5551950 ? 5% -21.3% 4371063 ? 6% meminfo.MemAvailable
5622919 ? 5% -20.4% 4474001 ? 6% meminfo.MemFree
105072 ? 3% -7.8% 96881 meminfo.PageTables
176308 -37.1% 110933 meminfo.SReclaimable
363906 -17.8% 299245 meminfo.Slab
8567910 ? 8% -26.1% 6329722 ? 2% numa-numastat.node0.local_node
3092009 ? 16% +64.1% 5072859 ? 17% numa-numastat.node0.numa_foreign
8575364 ? 8% -25.6% 6382391 ? 2% numa-numastat.node0.numa_hit
2604290 ? 34% -58.3% 1086131 ? 12% numa-numastat.node0.numa_miss
2624919 ? 35% -56.6% 1139437 ? 13% numa-numastat.node0.other_node
2603580 ? 34% -58.3% 1086128 ? 13% numa-numastat.node1.numa_foreign
3092797 ? 16% +64.0% 5073593 ? 17% numa-numastat.node1.numa_miss
3133318 ? 16% +62.2% 5083376 ? 17% numa-numastat.node1.other_node
21532 ? 15% -49.3% 10921 ? 36% numa-meminfo.node0.Active
21433 ? 15% -49.3% 10860 ? 36% numa-meminfo.node0.Active(anon)
13621410 ? 8% -13.5% 11786580 numa-meminfo.node0.AnonPages.max
958086 ?124% +180.6% 2688329 ? 2% numa-meminfo.node0.FilePages
3163357 ? 3% -34.1% 2083966 ? 8% numa-meminfo.node0.MemFree
46464 ? 10% -31.9% 31664 ? 4% numa-meminfo.node0.PageTables
953704 ?125% +181.4% 2683325 ? 2% numa-meminfo.node0.Unevictable
10476108 ? 8% +20.3% 12605919 numa-meminfo.node1.AnonPages
13236460 ? 8% +13.6% 15039440 numa-meminfo.node1.AnonPages.max
1784484 ? 67% -96.8% 56959 ?118% numa-meminfo.node1.FilePages
10463958 ? 8% +20.4% 12599017 numa-meminfo.node1.Inactive
10463702 ? 8% +20.4% 12597326 numa-meminfo.node1.Inactive(anon)
254.67 ? 80% +563.6% 1690 ? 19% numa-meminfo.node1.Inactive(file)
104016 ? 22% -67.6% 33731 ? 7% numa-meminfo.node1.KReclaimable
104016 ? 22% -67.6% 33731 ? 7% numa-meminfo.node1.SReclaimable
192710 ? 10% -40.4% 114923 ? 3% numa-meminfo.node1.Slab
1780916 ? 67% -97.1% 51307 ?132% numa-meminfo.node1.Unevictable
64526 ? 15% -63.9% 23321 ? 2% proc-vmstat.allocstall_movable
5288063 +5.9% 5600084 proc-vmstat.nr_anon_pages
137571 ? 6% -22.6% 106496 ? 3% proc-vmstat.nr_dirty_background_threshold
275479 ? 6% -22.6% 213254 ? 3% proc-vmstat.nr_dirty_threshold
1417431 ? 6% -22.0% 1105772 ? 3% proc-vmstat.nr_free_pages
5281604 +6.3% 5614707 proc-vmstat.nr_inactive_anon
109.67 ? 21% +296.5% 434.83 ? 20% proc-vmstat.nr_inactive_file
1512 ? 3% +36.8% 2069 proc-vmstat.nr_isolated_anon
26212 ? 3% -7.0% 24371 ? 2% proc-vmstat.nr_page_table_pages
43972 -36.9% 27738 proc-vmstat.nr_slab_reclaimable
9555764 -18.7% 7764551 ? 3% proc-vmstat.nr_vmscan_write
16334593 -10.2% 14664852 proc-vmstat.nr_written
5280581 +6.3% 5612439 proc-vmstat.nr_zone_inactive_anon
106.83 ? 20% +305.9% 433.67 ? 20% proc-vmstat.nr_zone_inactive_file
5346486 ? 14% +36.2% 7282563 ? 3% proc-vmstat.numa_pte_updates
1249754 ? 2% -24.6% 942522 ? 2% proc-vmstat.pgalloc_dma32
23052451 +1.2% 23336109 proc-vmstat.pgalloc_normal
3608 ? 38% +57.9% 5697 ? 17% proc-vmstat.pgmajfault
6975365 ? 26% -45.4% 3808817 ? 7% proc-vmstat.pgscan_kswapd
16335291 -10.2% 14665678 proc-vmstat.pgsteal_anon
15767255 -9.8% 14224534 proc-vmstat.pgsteal_direct
568595 ? 7% -22.3% 441711 ? 2% proc-vmstat.pgsteal_kswapd
3654 ? 38% +57.2% 5744 ? 17% proc-vmstat.pswpin
16334605 -10.2% 14664870 proc-vmstat.pswpout
51680 ? 69% +1346.9% 747751 ? 5% proc-vmstat.slabs_scanned
8.67 ? 43% +1e+05% 8981 ? 12% proc-vmstat.workingset_nodes
5292 ? 14% -49.2% 2690 ? 38% numa-vmstat.node0.nr_active_anon
239506 ?124% +180.6% 672173 ? 2% numa-vmstat.node0.nr_file_pages
770198 ? 7% -31.2% 530210 ? 4% numa-vmstat.node0.nr_free_pages
623.50 ? 14% -50.5% 308.83 ? 10% numa-vmstat.node0.nr_isolated_anon
11688 ? 10% -31.8% 7970 ? 4% numa-vmstat.node0.nr_page_table_pages
73.50 ?143% +469.4% 418.50 ? 43% numa-vmstat.node0.nr_swapcached
238425 ?125% +181.4% 670830 ? 2% numa-vmstat.node0.nr_unevictable
4093464 ? 13% -59.2% 1672046 ? 2% numa-vmstat.node0.nr_vmscan_write
7032593 ? 12% -56.8% 3037254 ? 2% numa-vmstat.node0.nr_written
5322 ? 14% -49.1% 2706 ? 38% numa-vmstat.node0.nr_zone_active_anon
238426 ?125% +181.4% 670829 ? 2% numa-vmstat.node0.nr_zone_unevictable
3092009 ? 16% +64.1% 5072859 ? 17% numa-vmstat.node0.numa_foreign
8575433 ? 8% -25.6% 6381099 ? 2% numa-vmstat.node0.numa_hit
8567979 ? 8% -26.1% 6328430 ? 2% numa-vmstat.node0.numa_local
2604290 ? 34% -58.3% 1086131 ? 12% numa-vmstat.node0.numa_miss
2624919 ? 35% -56.6% 1139437 ? 13% numa-vmstat.node0.numa_other
6.17 ? 79% +1.5e+05% 9305 ? 11% numa-vmstat.node0.workingset_nodes
37.33 ? 63% +137.9% 88.83 ? 27% numa-vmstat.node1.nr_active_file
2635461 ? 8% +19.3% 3144960 numa-vmstat.node1.nr_anon_pages
446121 ? 67% -96.8% 14145 ?119% numa-vmstat.node1.nr_file_pages
2632428 ? 8% +19.3% 3141695 numa-vmstat.node1.nr_inactive_anon
64.00 ? 79% +558.3% 421.33 ? 18% numa-vmstat.node1.nr_inactive_file
884.00 ? 14% +102.3% 1788 ? 5% numa-vmstat.node1.nr_isolated_anon
25994 ? 23% -67.5% 8445 ? 7% numa-vmstat.node1.nr_slab_reclaimable
445228 ? 67% -97.1% 12826 ?132% numa-vmstat.node1.nr_unevictable
5485756 ? 8% +15.5% 6335562 numa-vmstat.node1.nr_vmscan_write
9301996 ? 9% +25.0% 11627597 numa-vmstat.node1.nr_written
37.33 ? 63% +137.9% 88.83 ? 27% numa-vmstat.node1.nr_zone_active_file
2631806 ? 8% +19.3% 3140718 numa-vmstat.node1.nr_zone_inactive_anon
61.00 ? 78% +588.3% 419.83 ? 18% numa-vmstat.node1.nr_zone_inactive_file
445228 ? 67% -97.1% 12826 ?132% numa-vmstat.node1.nr_zone_unevictable
2603580 ? 34% -58.3% 1086128 ? 13% numa-vmstat.node1.numa_foreign
3092797 ? 16% +64.0% 5073593 ? 17% numa-vmstat.node1.numa_miss
3133318 ? 16% +62.2% 5083376 ? 17% numa-vmstat.node1.numa_other
2.33 ? 76% +4092.9% 97.83 ? 19% numa-vmstat.node1.workingset_nodes
12.30 ? 97% -12.3 0.00 perf-profile.calltrace.cycles-pp.record__mmap_read_evlist.__cmd_record
14.15 ? 75% -12.1 2.08 ?223% perf-profile.calltrace.cycles-pp.__cmd_record
9.92 ? 98% -9.9 0.00 perf-profile.calltrace.cycles-pp.perf_mmap__push.record__mmap_read_evlist.__cmd_record
11.31 ? 50% -7.8 3.47 ?145% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.31 ? 50% -7.8 3.47 ?145% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64
7.14 ?100% -5.8 1.39 ?223% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.14 ?100% -5.8 1.39 ?223% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.16 ?107% -5.2 0.00 perf-profile.calltrace.cycles-pp.record__pushfn.perf_mmap__push.record__mmap_read_evlist.__cmd_record
5.09 ?161% -5.1 0.00 perf-profile.calltrace.cycles-pp.vsnprintf.seq_printf.show_interrupts.seq_read_iter.proc_reg_read_iter
4.76 ?141% -4.8 0.00 perf-profile.calltrace.cycles-pp.perf_mmap__read_head.perf_mmap__push.record__mmap_read_evlist.__cmd_record
4.76 ?141% -4.8 0.00 perf-profile.calltrace.cycles-pp.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
4.76 ?141% -4.8 0.00 perf-profile.calltrace.cycles-pp.walk_component.link_path_walk.path_lookupat.filename_lookup.vfs_statx
4.17 ?152% -4.2 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.open64
4.17 ?152% -4.2 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
4.17 ?152% -4.2 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
4.17 ?152% -4.2 0.00 perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
5.09 ?161% -3.7 1.39 ?223% perf-profile.calltrace.cycles-pp.seq_printf.show_interrupts.seq_read_iter.proc_reg_read_iter.vfs_read
4.17 ?152% -2.8 1.39 ?223% perf-profile.calltrace.cycles-pp.open64
2.78 ?141% -2.8 0.00 perf-profile.calltrace.cycles-pp.lookup_fast.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.lockref_put_or_lock.dput.step_into.link_path_walk
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.up_read.kernfs_dop_revalidate.lookup_fast.walk_component.link_path_walk
4.76 ?141% -2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.ring_buffer_read_head.perf_mmap__read_head.perf_mmap__push.record__mmap_read_evlist.__cmd_record
4.76 ?141% -2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.asm_exc_page_fault.ring_buffer_read_head.perf_mmap__read_head.perf_mmap__push.record__mmap_read_evlist
4.76 ?141% -2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.ring_buffer_read_head.perf_mmap__read_head.perf_mmap__push
4.76 ?141% -2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.ring_buffer_read_head.perf_mmap__read_head
4.76 ?141% -2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.ring_buffer_read_head
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.__perf_sw_event.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.lockref_put_or_lock.dput.step_into.link_path_walk.path_openat
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.kernfs_dop_revalidate.lookup_fast.walk_component.link_path_walk.path_lookupat
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.folio_add_lru.shmem_get_folio_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.folio_batch_move_lru.folio_add_lru.shmem_get_folio_gfp.shmem_write_begin.generic_perform_write
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.lru_add_fn.folio_batch_move_lru.folio_add_lru.shmem_get_folio_gfp.shmem_write_begin
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.mem_cgroup_update_lru_size.lru_add_fn.folio_batch_move_lru.folio_add_lru.shmem_get_folio_gfp
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.getdents64
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.asm_exc_page_fault.record__mmap_read_evlist.__cmd_record
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.record__mmap_read_evlist.__cmd_record
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.record__mmap_read_evlist.__cmd_record
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.record__mmap_read_evlist
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.__check_object_size.strncpy_from_user.getname_flags.vfs_fstatat.__do_sys_newstat
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.__check_heap_object.__check_object_size.strncpy_from_user.getname_flags.vfs_fstatat
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.lockref_put_return.dput.terminate_walk.path_openat.do_filp_open
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.next_tgid.proc_pid_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.__do_fault.do_read_fault.do_fault.__handle_mm_fault.handle_mm_fault
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.perf_mmap_fault.__do_fault.do_read_fault.do_fault.__handle_mm_fault
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.dput.terminate_walk.path_openat.do_filp_open.do_sys_openat2
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.getname_flags.vfs_fstatat.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.__rb_insert_augmented.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.proc_pid_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.terminate_walk.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.vfs_fstatat.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.strncpy_from_user.getname_flags.vfs_fstatat.__do_sys_newstat.do_syscall_64
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.getdents64
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.__d_lookup_rcu.lookup_fast.walk_component.link_path_walk.path_lookupat
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.alloc_empty_file.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.__alloc_file.alloc_empty_file.path_openat.do_filp_open.do_sys_openat2
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.___slab_alloc.kmem_cache_alloc.__alloc_file.alloc_empty_file.path_openat
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.kmem_cache_alloc.__alloc_file.alloc_empty_file.path_openat.do_filp_open
2.38 ?223% -2.4 0.00 perf-profile.calltrace.cycles-pp.swap_range_free.swapcache_free_entries.free_swap_slot.__swap_entry_free.free_swap_and_cache
3.77 ?148% -2.4 1.39 ?223% perf-profile.calltrace.cycles-pp.link_path_walk.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat
5.62 ?104% -2.3 3.33 ?223% perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.62 ?104% -2.3 3.33 ?223% perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
6.48 ?143% -1.9 4.63 ?145% perf-profile.calltrace.cycles-pp.show_interrupts.seq_read_iter.proc_reg_read_iter.vfs_read.ksys_read
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.sched_setaffinity.evlist__cpu_begin.__evlist__enable.__cmd_record
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.sched_setaffinity.evlist__cpu_begin.__evlist__enable
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.memcpy_erms.vsnprintf.seq_printf.show_interrupts.seq_read_iter
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.vma_interval_tree_insert.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_sched_setaffinity.do_syscall_64.entry_SYSCALL_64_after_hwframe.sched_setaffinity.evlist__cpu_begin
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.__sched_setaffinity.sched_setaffinity.__x64_sys_sched_setaffinity.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.__set_cpus_allowed_ptr.__sched_setaffinity.sched_setaffinity.__x64_sys_sched_setaffinity.do_syscall_64
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.affine_move_task.__set_cpus_allowed_ptr.__sched_setaffinity.sched_setaffinity.__x64_sys_sched_setaffinity
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.__wait_for_common.affine_move_task.__set_cpus_allowed_ptr.__sched_setaffinity.sched_setaffinity
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.__cond_resched.__wait_for_common.affine_move_task.__set_cpus_allowed_ptr.__sched_setaffinity
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.__schedule.__cond_resched.__wait_for_common.affine_move_task.__set_cpus_allowed_ptr
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.__mmdrop.finish_task_switch.__schedule.__cond_resched.__wait_for_common
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.free_pcppages_bulk.free_unref_page.__mmdrop.finish_task_switch
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.evlist__cpu_begin.__evlist__enable.__cmd_record
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.sched_setaffinity.evlist__cpu_begin.__evlist__enable.__cmd_record
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.sched_setaffinity.__x64_sys_sched_setaffinity.do_syscall_64.entry_SYSCALL_64_after_hwframe.sched_setaffinity
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.finish_task_switch.__schedule.__cond_resched.__wait_for_common.affine_move_task
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.free_unref_page.__mmdrop.finish_task_switch.__schedule.__cond_resched
1.85 ?223% -1.9 0.00 perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page.__mmdrop.finish_task_switch.__schedule
4.76 ?141% -1.4 3.33 ?223% perf-profile.calltrace.cycles-pp.link_path_walk.path_lookupat.filename_lookup.vfs_statx.vfs_fstatat
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__xstat64
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.waitid
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__xstat64
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.waitid
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.pid_revalidate.lookup_fast.open_last_lookups.path_openat.do_filp_open
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.pid_revalidate.lookup_fast.walk_component.path_lookupat.filename_lookup
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.do_mas_munmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.do_mas_munmap.__vm_munmap.elf_map.load_elf_interp.load_elf_binary
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.do_mas_align_munmap.do_mas_munmap.mmap_region.do_mmap.vm_mmap_pgoff
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.do_mas_align_munmap.do_mas_munmap.__vm_munmap.elf_map.load_elf_interp
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.unmap_region.do_mas_align_munmap.do_mas_munmap.mmap_region.do_mmap
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.unmap_region.do_mas_align_munmap.do_mas_munmap.__vm_munmap.elf_map
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.lookup_fast.walk_component.path_lookupat.filename_lookup.vfs_statx
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe.__xstat64
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.get_pid_task.proc_pid_permission.inode_permission.link_path_walk.path_openat
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.lru_add_fn.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.unmap_region
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.pid_task.pid_revalidate.lookup_fast.walk_component.path_lookupat
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.__do_sys_waitid.do_syscall_64.entry_SYSCALL_64_after_hwframe.waitid
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.elf_map.load_elf_interp.load_elf_binary.search_binary_handler.exec_binprm
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.__vm_munmap.elf_map.load_elf_interp.load_elf_binary.search_binary_handler
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.filename_lookup.vfs_statx.vfs_fstatat.__do_sys_newstat.do_syscall_64
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.unmap_region.do_mas_align_munmap
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.shmem_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.vfs_write
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.inode_permission.link_path_walk.path_openat.do_filp_open.do_sys_openat2
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.waitid
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.kernel_waitid.__do_sys_waitid.do_syscall_64.entry_SYSCALL_64_after_hwframe.waitid
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.find_get_pid.kernel_waitid.__do_sys_waitid.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.search_binary_handler.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.load_elf_binary.search_binary_handler.exec_binprm.bprm_execve.do_execveat_common
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.load_elf_interp.load_elf_binary.search_binary_handler.exec_binprm.bprm_execve
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.lru_add_drain.unmap_region.do_mas_align_munmap.do_mas_munmap.__vm_munmap
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.unmap_region.do_mas_align_munmap.do_mas_munmap
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.__xstat64
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.vfs_fstatat.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe.__xstat64
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.vfs_statx.vfs_fstatat.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.path_lookupat.filename_lookup.vfs_statx.vfs_fstatat.__do_sys_newstat
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.proc_pid_permission.inode_permission.link_path_walk.path_openat.do_filp_open
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.lockref_get_not_dead.__legitimize_path.try_to_unlazy
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.number.vsnprintf.seq_printf.show_interrupts.seq_read_iter
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.page_add_file_rmap.do_set_pte.filemap_map_pages.do_read_fault.do_fault
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.fault_in_readable.fault_in_iov_iter_readable.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.fault_in_readable.fault_in_iov_iter_readable.generic_perform_write.__generic_file_write_iter
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.__irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.fault_in_readable.fault_in_iov_iter_readable
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.__do_softirq.__irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.fault_in_readable
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.asm_exc_page_fault.fopen
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.__legitimize_path.try_to_unlazy.lookup_fast.open_last_lookups.path_openat
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.lockref_get_not_dead.__legitimize_path.try_to_unlazy.lookup_fast
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.fopen
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.fopen
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.fopen
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.filemap_map_pages.do_read_fault.do_fault.__handle_mm_fault.handle_mm_fault
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.do_set_pte.filemap_map_pages.do_read_fault.do_fault.__handle_mm_fault
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.lockref_get_not_dead.__legitimize_path.try_to_unlazy.lookup_fast.open_last_lookups
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.fault_in_readable.fault_in_iov_iter_readable.generic_perform_write
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.run_rebalance_domains.__do_softirq.__irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.try_to_unlazy.lookup_fast.open_last_lookups.path_openat.do_filp_open
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.update_blocked_averages.run_rebalance_domains.__do_softirq.__irq_exit_rcu.sysvec_apic_timer_interrupt
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.update_rq_clock.update_blocked_averages.run_rebalance_domains.__do_softirq.__irq_exit_rcu
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.sched_clock_cpu.update_rq_clock.update_blocked_averages.run_rebalance_domains.__do_softirq
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.native_sched_clock.sched_clock_cpu.update_rq_clock.update_blocked_averages.run_rebalance_domains
1.39 ?223% -1.4 0.00 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.fopen
3.77 ?148% -1.4 2.38 ?223% perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
3.77 ?148% -1.4 2.38 ?223% perf-profile.calltrace.cycles-pp.do_read_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
2.38 ?223% -1.0 1.39 ?223% perf-profile.calltrace.cycles-pp.dput.step_into.link_path_walk.path_openat.do_filp_open
2.38 ?223% -1.0 1.39 ?223% perf-profile.calltrace.cycles-pp.step_into.link_path_walk.path_openat.do_filp_open.do_sys_openat2
4.23 ?143% -0.9 3.33 ?223% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__mmap
4.23 ?143% -0.9 3.33 ?223% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
4.23 ?143% -0.9 3.33 ?223% perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
4.23 ?143% -0.9 3.33 ?223% perf-profile.calltrace.cycles-pp.__mmap
4.23 ?143% -0.9 3.33 ?223% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
2.78 ?141% -0.7 2.08 ?223% perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat
13.29 ? 89% -0.6 12.66 ? 92% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
13.29 ? 89% -0.6 12.66 ? 92% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
43.12 ? 33% -0.6 42.54 ? 17% perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
43.12 ? 33% -0.6 42.54 ? 17% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
43.12 ? 33% -0.6 42.54 ? 17% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
43.12 ? 33% -0.6 42.54 ? 17% perf-profile.calltrace.cycles-pp.mwait_idle_with_hints.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
5.16 ?107% -0.4 4.76 ?223% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_write.writen.record__pushfn.perf_mmap__push
5.16 ?107% -0.4 4.76 ?223% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write.writen.record__pushfn
5.16 ?107% -0.4 4.76 ?223% perf-profile.calltrace.cycles-pp.writen.record__pushfn.perf_mmap__push.record__mmap_read_evlist.__cmd_record
5.16 ?107% -0.4 4.76 ?223% perf-profile.calltrace.cycles-pp.__libc_write.writen.record__pushfn.perf_mmap__push.record__mmap_read_evlist
5.16 ?107% -0.4 4.76 ?223% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write.writen
5.16 ?107% -0.4 4.76 ?223% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
5.16 ?107% -0.4 4.76 ?223% perf-profile.calltrace.cycles-pp.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.16 ?107% -0.4 4.76 ?223% perf-profile.calltrace.cycles-pp.__generic_file_write_iter.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64
5.16 ?107% -0.4 4.76 ?223% perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.vfs_write.ksys_write
2.38 ?223% -0.3 2.08 ?223% perf-profile.calltrace.cycles-pp.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
2.38 ?223% -0.3 2.08 ?223% perf-profile.calltrace.cycles-pp.__folio_alloc.vma_alloc_folio.do_swap_page.__handle_mm_fault.handle_mm_fault
2.38 ?223% -0.3 2.08 ?223% perf-profile.calltrace.cycles-pp.__alloc_pages.__folio_alloc.vma_alloc_folio.do_swap_page.__handle_mm_fault
2.38 ?223% -0.3 2.08 ?223% perf-profile.calltrace.cycles-pp.vma_alloc_folio.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
4.76 ?141% -0.0 4.72 ?158% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__lxstat64
4.76 ?141% -0.0 4.72 ?158% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__lxstat64
4.76 ?141% -0.0 4.72 ?158% perf-profile.calltrace.cycles-pp.__do_sys_newlstat.do_syscall_64.entry_SYSCALL_64_after_hwframe.__lxstat64
4.76 ?141% -0.0 4.72 ?158% perf-profile.calltrace.cycles-pp.filename_lookup.vfs_statx.vfs_fstatat.__do_sys_newlstat.do_syscall_64
4.76 ?141% -0.0 4.72 ?158% perf-profile.calltrace.cycles-pp.path_lookupat.filename_lookup.vfs_statx.vfs_fstatat.__do_sys_newlstat
4.76 ?141% -0.0 4.72 ?158% perf-profile.calltrace.cycles-pp.__lxstat64
4.76 ?141% -0.0 4.72 ?158% perf-profile.calltrace.cycles-pp.vfs_fstatat.__do_sys_newlstat.do_syscall_64.entry_SYSCALL_64_after_hwframe.__lxstat64
4.76 ?141% -0.0 4.72 ?158% perf-profile.calltrace.cycles-pp.vfs_statx.vfs_fstatat.__do_sys_newlstat.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.38 ?223% +0.0 2.38 ?223% perf-profile.calltrace.cycles-pp.shmem_write_begin.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.vfs_write
2.38 ?223% +0.0 2.38 ?223% perf-profile.calltrace.cycles-pp.shmem_get_folio_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
1.39 ?223% +0.0 1.39 ?223% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
1.39 ?223% +0.0 1.39 ?223% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
1.39 ?223% +0.0 1.39 ?223% perf-profile.calltrace.cycles-pp.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
1.39 ?223% +0.0 1.39 ?223% perf-profile.calltrace.cycles-pp.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
1.39 ?223% +0.0 1.39 ?223% perf-profile.calltrace.cycles-pp.execve
1.39 ?223% +0.0 1.39 ?223% perf-profile.calltrace.cycles-pp.walk_component.path_lookupat.filename_lookup.vfs_statx.vfs_fstatat
4.23 ?143% +0.0 4.23 ?143% perf-profile.calltrace.cycles-pp.free_swap_slot.__swap_entry_free.free_swap_and_cache.zap_pte_range.zap_pmd_range
4.23 ?143% +0.0 4.23 ?143% perf-profile.calltrace.cycles-pp.swapcache_free_entries.free_swap_slot.__swap_entry_free.free_swap_and_cache.zap_pte_range
1.85 ?223% +0.2 2.08 ?223% perf-profile.calltrace.cycles-pp.__evlist__enable.__cmd_record
6.48 ?143% +0.2 6.71 ?103% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
6.48 ?143% +0.2 6.71 ?103% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
6.48 ?143% +0.2 6.71 ?103% perf-profile.calltrace.cycles-pp.read
6.48 ?143% +0.2 6.71 ?103% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
6.48 ?143% +0.2 6.71 ?103% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
6.48 ?143% +0.2 6.71 ?103% perf-profile.calltrace.cycles-pp.proc_reg_read_iter.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.48 ?143% +0.2 6.71 ?103% perf-profile.calltrace.cycles-pp.seq_read_iter.proc_reg_read_iter.vfs_read.ksys_read.do_syscall_64
1.85 ?223% +0.5 2.38 ?223% perf-profile.calltrace.cycles-pp.asm_sysvec_reschedule_ipi
1.85 ?223% +0.5 2.38 ?223% perf-profile.calltrace.cycles-pp.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi
1.85 ?223% +0.5 2.38 ?223% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi
1.85 ?223% +0.5 2.38 ?223% perf-profile.calltrace.cycles-pp.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi
1.85 ?223% +0.5 2.38 ?223% perf-profile.calltrace.cycles-pp.arch_do_signal_or_restart.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi
1.85 ?223% +0.5 2.38 ?223% perf-profile.calltrace.cycles-pp.get_signal.arch_do_signal_or_restart.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode
1.85 ?223% +0.5 2.38 ?223% perf-profile.calltrace.cycles-pp.__mem_cgroup_uncharge_swap.swapcache_free_entries.free_swap_slot.__swap_entry_free.free_swap_and_cache
1.85 ?223% +0.5 2.38 ?223% perf-profile.calltrace.cycles-pp.swap_cgroup_record.__mem_cgroup_uncharge_swap.swapcache_free_entries.free_swap_slot.__swap_entry_free
1.39 ?223% +1.0 2.38 ?223% perf-profile.calltrace.cycles-pp.fault_in_iov_iter_readable.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.vfs_write
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.getxattr
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.getxattr
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp._raw_spin_trylock.dentry_kill.dput.step_into.link_path_walk
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.strnlen_user.copy_strings.do_execveat_common.__x64_sys_execve.do_syscall_64
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.xattr_resolve_name.__vfs_getxattr.do_getxattr.getxattr.path_getxattr
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.__d_alloc.d_alloc.d_alloc_parallel.__lookup_slow.walk_component
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.mod_objcg_state.memcg_slab_post_alloc_hook.kmem_cache_alloc_lru.__d_alloc
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.__mem_cgroup_charge.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.__lookup_slow.walk_component.path_lookupat.filename_lookup.vfs_statx
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.__vfs_getxattr.do_getxattr.getxattr.path_getxattr.do_syscall_64
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat.filename_lookup
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.d_alloc.d_alloc_parallel.__lookup_slow.walk_component.path_lookupat
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.copy_strings.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.do_getxattr.getxattr.path_getxattr.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.dentry_kill.dput.step_into.link_path_walk.path_openat
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.getxattr.path_getxattr.do_syscall_64.entry_SYSCALL_64_after_hwframe.getxattr
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.kmem_cache_alloc_lru.__d_alloc.d_alloc.d_alloc_parallel.__lookup_slow
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.memcg_slab_post_alloc_hook.kmem_cache_alloc_lru.__d_alloc.d_alloc.d_alloc_parallel
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.mod_objcg_state.memcg_slab_post_alloc_hook.kmem_cache_alloc_lru.__d_alloc.d_alloc
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.path_getxattr.do_syscall_64.entry_SYSCALL_64_after_hwframe.getxattr
0.00 +1.4 1.39 ?223% perf-profile.calltrace.cycles-pp.getxattr
6.15 ?170% +1.6 7.70 ? 74% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
43.12 ? 33% +1.8 44.92 ? 20% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
43.12 ? 33% +1.8 44.92 ? 20% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
43.12 ? 33% +1.8 44.92 ? 20% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
43.12 ? 33% +1.8 44.92 ? 20% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
43.12 ? 33% +1.8 44.92 ? 20% perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.00 +1.9 1.85 ?223% perf-profile.calltrace.cycles-pp._find_next_bit.show_interrupts.seq_read_iter.proc_reg_read_iter.vfs_read
0.00 +1.9 1.85 ?223% perf-profile.calltrace.cycles-pp.copy_mc_fragile.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +1.9 1.85 ?223% perf-profile.calltrace.cycles-pp.do_swap.sort_r.sort.swapcache_free_entries.free_swap_slot
0.00 +1.9 1.85 ?223% perf-profile.calltrace.cycles-pp.sort.swapcache_free_entries.free_swap_slot.__swap_entry_free.free_swap_and_cache
0.00 +1.9 1.85 ?223% perf-profile.calltrace.cycles-pp.sort_r.sort.swapcache_free_entries.free_swap_slot.__swap_entry_free
0.00 +1.9 1.85 ?223% perf-profile.calltrace.cycles-pp.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__open64_nocancel
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.sched_setaffinity.evlist_cpu_iterator__next.__evlist__enable.__cmd_record
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__open64_nocancel
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.sched_setaffinity.evlist_cpu_iterator__next.__evlist__enable
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.__list_del_entry_valid.rmqueue_bulk.rmqueue.get_page_from_freelist.__alloc_pages
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.__x64_sys_sched_setaffinity.do_syscall_64.entry_SYSCALL_64_after_hwframe.sched_setaffinity.evlist_cpu_iterator__next
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.__check_object_size.get_user_cpu_mask.__x64_sys_sched_setaffinity.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.check_heap_object.__check_object_size.get_user_cpu_mask.__x64_sys_sched_setaffinity.do_syscall_64
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.__virt_addr_valid.check_heap_object.__check_object_size.get_user_cpu_mask.__x64_sys_sched_setaffinity
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp._raw_spin_lock.d_alloc.d_alloc_parallel.lookup_open.open_last_lookups
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.vfs_write.ksys_write.do_syscall_64
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.vfs_write.ksys_write
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.__open64_nocancel
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.arch_show_interrupts.seq_read_iter.proc_reg_read_iter.vfs_read.ksys_read
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.vsnprintf.seq_printf.arch_show_interrupts.seq_read_iter.proc_reg_read_iter
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.asm_exc_page_fault.record__pushfn.perf_mmap__push.record__mmap_read_evlist.__cmd_record
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.vfs_write
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.__open64_nocancel
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.d_alloc_parallel.lookup_open.open_last_lookups.path_openat.do_filp_open
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.d_alloc.d_alloc_parallel.lookup_open.open_last_lookups.path_openat
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.record__pushfn.perf_mmap__push.record__mmap_read_evlist
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.record__pushfn.perf_mmap__push
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages.__folio_alloc.vma_alloc_folio.do_swap_page
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.sched_setaffinity.evlist_cpu_iterator__next.__evlist__enable.__cmd_record
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.get_user_cpu_mask.__x64_sys_sched_setaffinity.do_syscall_64.entry_SYSCALL_64_after_hwframe.sched_setaffinity
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.record__pushfn
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.write
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.lookup_open.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.pipe_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.rmqueue.get_page_from_freelist.__alloc_pages.__folio_alloc.vma_alloc_folio
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.rmqueue_bulk.rmqueue.get_page_from_freelist.__alloc_pages.__folio_alloc
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.seq_printf.arch_show_interrupts.seq_read_iter.proc_reg_read_iter.vfs_read
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.__open64_nocancel
0.00 +2.1 2.08 ?223% perf-profile.calltrace.cycles-pp.evlist_cpu_iterator__next.__evlist__enable.__cmd_record
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp._raw_spin_lock.finish_fault.do_read_fault.do_fault.__handle_mm_fault
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.menu_reflect.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.__schedule.schedule.smpboot_thread_fn.kthread.ret_from_fork
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.update_sg_lb_stats.update_sd_lb_stats.find_busiest_group.load_balance.newidle_balance
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.balance_fair.__schedule.schedule.smpboot_thread_fn.kthread
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page.exit_mmap.__mmput.exit_mm
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.free_unref_page.exit_mmap.__mmput.exit_mm.do_exit
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.finish_fault.do_read_fault.do_fault.__handle_mm_fault.handle_mm_fault
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.ret_from_fork
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.smpboot_thread_fn.kthread.ret_from_fork
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.schedule.smpboot_thread_fn.kthread.ret_from_fork
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.newidle_balance.balance_fair.__schedule.schedule.smpboot_thread_fn
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.load_balance.newidle_balance.balance_fair.__schedule.schedule
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.find_busiest_group.load_balance.newidle_balance.balance_fair.__schedule
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.update_sd_lb_stats.find_busiest_group.load_balance.newidle_balance.balance_fair
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.perf_mmap__read_head.perf_mmap__push.record__mmap_read_evlist.__cmd_record.cmd_record
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shmem_add_to_page_cache.shmem_get_folio_gfp.shmem_write_begin.generic_perform_write
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.swap_cgroup_record.__mem_cgroup_uncharge_swap.swapcache_free_entries.free_swap_slot
0.00 +2.4 2.38 ?223% perf-profile.calltrace.cycles-pp.shmem_add_to_page_cache.shmem_get_folio_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter
0.00 +2.8 2.78 ?223% perf-profile.calltrace.cycles-pp.__entry_text_start.open64
2.38 ?223% +2.8 5.18 ?149% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.38 ?223% +2.8 5.18 ?149% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.38 ?223% +2.8 5.18 ?149% perf-profile.calltrace.cycles-pp.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.38 ?223% +2.8 5.18 ?149% perf-profile.calltrace.cycles-pp.arch_do_signal_or_restart.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
2.38 ?223% +2.8 5.18 ?149% perf-profile.calltrace.cycles-pp.get_signal.arch_do_signal_or_restart.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode
0.00 +3.2 3.24 ?143% perf-profile.calltrace.cycles-pp.asm_exc_page_fault
0.00 +3.2 3.24 ?143% perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault
0.00 +3.2 3.24 ?143% perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 +3.2 3.24 ?143% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
4.23 ?143% +3.3 7.57 ?105% perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.arch_do_signal_or_restart.exit_to_user_mode_loop.exit_to_user_mode_prepare
4.23 ?143% +3.3 7.57 ?105% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.arch_do_signal_or_restart.exit_to_user_mode_loop
4.23 ?143% +3.3 7.57 ?105% perf-profile.calltrace.cycles-pp.exit_mm.do_exit.do_group_exit.get_signal.arch_do_signal_or_restart
4.23 ?143% +3.3 7.57 ?105% perf-profile.calltrace.cycles-pp.__mmput.exit_mm.do_exit.do_group_exit.get_signal
4.23 ?143% +3.3 7.57 ?105% perf-profile.calltrace.cycles-pp.free_swap_and_cache.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
4.23 ?143% +3.3 7.57 ?105% perf-profile.calltrace.cycles-pp.__swap_entry_free.free_swap_and_cache.zap_pte_range.zap_pmd_range.unmap_page_range
0.00 +3.3 3.33 ?223% perf-profile.calltrace.cycles-pp.mas_replace.mas_wmb_replace.mas_split.mas_wr_bnode.mas_store_prealloc
0.00 +3.3 3.33 ?223% perf-profile.calltrace.cycles-pp.lockref_put_return.dput.step_into.link_path_walk.path_lookupat
0.00 +3.3 3.33 ?223% perf-profile.calltrace.cycles-pp.dput.step_into.link_path_walk.path_lookupat.filename_lookup
0.00 +3.3 3.33 ?223% perf-profile.calltrace.cycles-pp.mas_store_prealloc.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
0.00 +3.3 3.33 ?223% perf-profile.calltrace.cycles-pp.mas_wr_bnode.mas_store_prealloc.mmap_region.do_mmap.vm_mmap_pgoff
0.00 +3.3 3.33 ?223% perf-profile.calltrace.cycles-pp.mas_split.mas_wr_bnode.mas_store_prealloc.mmap_region.do_mmap
0.00 +3.3 3.33 ?223% perf-profile.calltrace.cycles-pp.mas_wmb_replace.mas_split.mas_wr_bnode.mas_store_prealloc.mmap_region
0.00 +3.3 3.33 ?223% perf-profile.calltrace.cycles-pp.step_into.link_path_walk.path_lookupat.filename_lookup.vfs_statx
0.00 +4.8 4.76 ?223% perf-profile.calltrace.cycles-pp.asm_exc_page_fault.fault_in_readable.fault_in_iov_iter_readable.generic_perform_write.__generic_file_write_iter
0.00 +6.1 6.08 ?146% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +6.1 6.08 ?146% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +6.1 6.08 ?146% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +6.1 6.08 ?146% perf-profile.calltrace.cycles-pp.__mmput.exit_mm.do_exit.do_group_exit.__x64_sys_exit_group
0.00 +6.1 6.08 ?146% perf-profile.calltrace.cycles-pp.exit_mm.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.00 +6.8 6.84 ?156% perf-profile.calltrace.cycles-pp.record__pushfn.perf_mmap__push.record__mmap_read_evlist.__cmd_record.cmd_record
4.23 ?143% +7.0 11.27 ?111% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.__mmput.exit_mm.do_exit
4.23 ?143% +7.0 11.27 ?111% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.__mmput.exit_mm
4.23 ?143% +7.0 11.27 ?111% perf-profile.calltrace.cycles-pp.zap_pmd_range.unmap_page_range.unmap_vmas.exit_mmap.__mmput
4.23 ?143% +7.0 11.27 ?111% perf-profile.calltrace.cycles-pp.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas.exit_mmap
0.00 +9.2 9.23 ?114% perf-profile.calltrace.cycles-pp.__libc_start_main
0.00 +9.2 9.23 ?114% perf-profile.calltrace.cycles-pp.main.__libc_start_main
0.00 +9.2 9.23 ?114% perf-profile.calltrace.cycles-pp.run_builtin.main.__libc_start_main
0.00 +9.2 9.23 ?114% perf-profile.calltrace.cycles-pp.cmd_record.run_builtin.main.__libc_start_main
0.00 +9.2 9.23 ?114% perf-profile.calltrace.cycles-pp.__cmd_record.cmd_record.run_builtin.main.__libc_start_main
0.00 +9.2 9.23 ?114% perf-profile.calltrace.cycles-pp.record__mmap_read_evlist.__cmd_record.cmd_record.run_builtin.main
0.00 +9.2 9.23 ?114% perf-profile.calltrace.cycles-pp.perf_mmap__push.record__mmap_read_evlist.__cmd_record.cmd_record.run_builtin
4.23 ?143% +9.4 13.65 ? 84% perf-profile.calltrace.cycles-pp.exit_mmap.__mmput.exit_mm.do_exit.do_group_exit
6.74 ? 2% -24.2% 5.11 perf-stat.i.MPKI
1.396e+10 ? 2% -10.1% 1.254e+10 perf-stat.i.branch-instructions
36852425 ? 4% -15.0% 31322172 ? 3% perf-stat.i.branch-misses
44.63 +3.1 47.68 perf-stat.i.cache-miss-rate%
1.843e+08 ? 3% -35.0% 1.199e+08 perf-stat.i.cache-misses
4.069e+08 ? 4% -38.6% 2.5e+08 perf-stat.i.cache-references
8372 ? 7% -17.3% 6921 ? 3% perf-stat.i.context-switches
4.80 ? 2% +4.8% 5.03 perf-stat.i.cpi
2.275e+11 +2.3% 2.327e+11 perf-stat.i.cpu-cycles
4546555 ? 15% -27.9% 3277727 ? 7% perf-stat.i.dTLB-load-misses
1.389e+10 ? 2% -10.6% 1.243e+10 perf-stat.i.dTLB-loads
0.14 ? 4% +0.0 0.15 perf-stat.i.dTLB-store-miss-rate%
7181570 ? 4% -17.0% 5961634 perf-stat.i.dTLB-store-misses
4.619e+09 ? 3% -19.1% 3.739e+09 perf-stat.i.dTLB-stores
3082839 ? 3% -23.5% 2359116 ? 4% perf-stat.i.iTLB-load-misses
343748 ? 8% -22.0% 268269 ? 8% perf-stat.i.iTLB-loads
5.471e+10 ? 2% -10.3% 4.908e+10 perf-stat.i.instructions
0.27 ? 2% -12.5% 0.24 perf-stat.i.ipc
2.57 +2.1% 2.63 perf-stat.i.metric.GHz
755.43 ? 6% -25.8% 560.80 ? 3% perf-stat.i.metric.K/sec
370.23 ? 2% -12.0% 325.72 perf-stat.i.metric.M/sec
1478193 ? 4% -16.7% 1231768 perf-stat.i.minor-faults
74.96 +3.2 78.11 perf-stat.i.node-load-miss-rate%
24247876 ? 4% -21.3% 19080033 perf-stat.i.node-load-misses
6320470 ? 2% -25.8% 4687351 perf-stat.i.node-loads
80.10 +1.9 81.97 perf-stat.i.node-store-miss-rate%
13678740 ? 4% -19.8% 10971410 ? 2% perf-stat.i.node-store-misses
3135638 ? 7% -17.7% 2580680 perf-stat.i.node-stores
1478372 ? 4% -16.7% 1232012 perf-stat.i.page-faults
7.43 ? 3% -31.5% 5.09 perf-stat.overall.MPKI
45.55 +2.5 48.04 perf-stat.overall.cache-miss-rate%
4.23 +12.8% 4.77 perf-stat.overall.cpi
1251 ? 3% +55.8% 1950 perf-stat.overall.cycles-between-cache-misses
0.15 +0.0 0.16 perf-stat.overall.dTLB-store-miss-rate%
17689 ? 2% +17.6% 20810 ? 3% perf-stat.overall.instructions-per-iTLB-miss
0.24 -11.3% 0.21 perf-stat.overall.ipc
79.29 +1.0 80.26 perf-stat.overall.node-load-miss-rate%
8390 +6.6% 8948 perf-stat.overall.path-length
1.291e+10 ? 2% -8.7% 1.179e+10 perf-stat.ps.branch-instructions
33397822 ? 2% -12.9% 29096714 ? 3% perf-stat.ps.branch-misses
1.716e+08 ? 3% -34.2% 1.129e+08 perf-stat.ps.cache-misses
3.768e+08 ? 4% -37.6% 2.351e+08 perf-stat.ps.cache-references
7861 ? 6% -17.1% 6516 ? 3% perf-stat.ps.context-switches
2.145e+11 +2.7% 2.203e+11 perf-stat.ps.cpu-cycles
4230370 ? 15% -27.0% 3088833 ? 7% perf-stat.ps.dTLB-load-misses
1.288e+10 ? 2% -9.2% 1.169e+10 perf-stat.ps.dTLB-loads
6544297 ? 3% -14.9% 5567126 perf-stat.ps.dTLB-store-misses
4.263e+09 ? 2% -17.7% 3.51e+09 perf-stat.ps.dTLB-stores
2867704 ? 2% -22.5% 2221945 ? 4% perf-stat.ps.iTLB-load-misses
304019 ? 8% -18.7% 247129 ? 8% perf-stat.ps.iTLB-loads
5.07e+10 -8.9% 4.617e+10 perf-stat.ps.instructions
1347553 ? 3% -14.6% 1150302 perf-stat.ps.minor-faults
22641206 ? 4% -20.5% 17991646 perf-stat.ps.node-load-misses
5907643 ? 2% -25.1% 4424982 perf-stat.ps.node-loads
12779474 ? 4% -18.9% 10364793 ? 2% perf-stat.ps.node-store-misses
2805435 ? 5% -14.6% 2395268 ? 2% perf-stat.ps.node-stores
1347714 ? 3% -14.6% 1150544 perf-stat.ps.page-faults
9.001e+11 +6.6% 9.599e+11 perf-stat.total.instructions


***************************************************************************************************
lkp-csl-2sp9: 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz (Cascade Lake) with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_pmem/nr_task/priority/rootfs/tbox_group/test/testcase/thp_defrag/thp_enabled:
gcc-11/performance/x86_64-rhel-8.3/2/8/1/debian-11.1-x86_64-20220510.cgz/lkp-csl-2sp9/swap-w-seq/vm-scalability/never/never

commit:
04bac040bc ("mm/hugetlb: convert get_hwpoison_huge_page() to folios")
5649d113ff ("swap_state: update shadow_nodes for anonymous page")

04bac040bc71b4b3 5649d113ffce9f532a9ecc5ab96
---------------- ---------------------------
%stddev %change %stddev
\ | \
553378 -11.1% 492012 ? 2% vm-scalability.median
4397849 -11.1% 3909359 ? 2% vm-scalability.throughput
193.12 ? 2% +9.6% 211.73 ? 3% vm-scalability.time.system_time
9.03 +2.3% 9.23 iostat.cpu.system
1.90 -6.3% 1.78 ? 2% iostat.cpu.user
11.67 ? 32% +1.2e+05% 13992 ? 70% numa-vmstat.node0.workingset_nodes
2.17 ? 56% +4.1e+05% 8805 ?147% numa-vmstat.node1.workingset_nodes
26.50 -8.8% 24.17 perf-node.node-local-load-ratio
2.021e+08 ? 2% +10.8% 2.239e+08 ? 2% perf-node.node-store-misses
1957846 -6.5% 1831262 ? 2% vmstat.swap.so
601515 -4.7% 573111 vmstat.system.in
168704 -31.3% 115939 ? 2% meminfo.KReclaimable
168704 -31.3% 115939 ? 2% meminfo.SReclaimable
353022 -14.8% 300764 meminfo.Slab
6223 ? 10% -19.2% 5027 ? 8% sched_debug.cfs_rq:/.min_vruntime.stddev
2951 ? 92% -255.2% -4580 sched_debug.cfs_rq:/.spread0.avg
6223 ? 10% -19.1% 5032 ? 8% sched_debug.cfs_rq:/.spread0.stddev
42176 -31.3% 28973 ? 2% proc-vmstat.nr_slab_reclaimable
6643539 -3.5% 6413330 proc-vmstat.pgsteal_kswapd
38484 ? 4% +743.8% 324728 ? 20% proc-vmstat.slabs_scanned
366592 ? 2% +6.6% 390656 proc-vmstat.unevictable_pgs_scanned
14.33 ? 19% +1.6e+05% 22725 ? 22% proc-vmstat.workingset_nodes
7.84 +5.9% 8.31 ? 2% perf-stat.i.MPKI
5.816e+09 -5.9% 5.471e+09 ? 2% perf-stat.i.branch-instructions
0.31 +0.0 0.34 ? 2% perf-stat.i.branch-miss-rate%
89096547 ? 2% -4.3% 85261416 ? 2% perf-stat.i.cache-misses
5.821e+09 -5.5% 5.502e+09 ? 2% perf-stat.i.dTLB-loads
3953795 ? 2% -4.7% 3768612 ? 2% perf-stat.i.dTLB-store-misses
2.496e+09 -5.2% 2.365e+09 ? 2% perf-stat.i.dTLB-stores
2.286e+10 -5.5% 2.16e+10 ? 2% perf-stat.i.instructions
10182 -6.5% 9519 ? 2% perf-stat.i.instructions-per-iTLB-miss
0.77 -5.4% 0.73 ? 2% perf-stat.i.ipc
162.23 -5.6% 153.17 ? 2% perf-stat.i.metric.M/sec
748060 -6.7% 698091 ? 2% perf-stat.i.minor-faults
4204564 ? 4% -9.5% 3804975 ? 3% perf-stat.i.node-loads
748159 -6.7% 698208 ? 2% perf-stat.i.page-faults
7.21 +4.5% 7.54 ? 2% perf-stat.overall.MPKI
0.30 ? 2% +0.0 0.33 ? 2% perf-stat.overall.branch-miss-rate%
54.17 -1.6 52.57 ? 2% perf-stat.overall.cache-miss-rate%
1.29 +6.3% 1.37 ? 2% perf-stat.overall.cpi
7548 ? 2% -4.9% 7180 ? 2% perf-stat.overall.instructions-per-iTLB-miss
0.78 -5.9% 0.73 ? 2% perf-stat.overall.ipc
73.44 +2.2 75.64 perf-stat.overall.node-load-miss-rate%
81.58 +1.3 82.91 perf-stat.overall.node-store-miss-rate%
6823 +1.4% 6921 perf-stat.overall.path-length
5.64e+09 -6.2% 5.292e+09 ? 2% perf-stat.ps.branch-instructions
86605001 ? 2% -4.5% 82733559 ? 2% perf-stat.ps.cache-misses
5.647e+09 -5.7% 5.325e+09 ? 2% perf-stat.ps.dTLB-loads
3834445 ? 2% -5.0% 3640859 ? 2% perf-stat.ps.dTLB-store-misses
2.423e+09 -5.5% 2.29e+09 ? 2% perf-stat.ps.dTLB-stores
2.217e+10 -5.7% 2.09e+10 ? 2% perf-stat.ps.instructions
725375 -7.0% 674261 ? 2% perf-stat.ps.minor-faults
4087232 ? 4% -9.6% 3693484 ? 3% perf-stat.ps.node-loads
725470 -7.0% 674377 ? 2% perf-stat.ps.page-faults
7.32e+11 +1.4% 7.424e+11 perf-stat.total.instructions
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.calltrace.cycles-pp.__munmap
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.calltrace.cycles-pp.do_mas_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.calltrace.cycles-pp.do_mas_align_munmap.do_mas_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.calltrace.cycles-pp.unmap_region.do_mas_align_munmap.do_mas_munmap.__vm_munmap.__x64_sys_munmap
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
37.28 ? 6% -19.2 18.08 ? 17% perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.do_mas_align_munmap.do_mas_munmap.__vm_munmap
37.28 ? 6% -19.2 18.08 ? 17% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.do_mas_align_munmap.do_mas_munmap
37.28 ? 6% -19.2 18.08 ? 17% perf-profile.calltrace.cycles-pp.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region.do_mas_align_munmap
37.28 ? 6% -19.2 18.08 ? 17% perf-profile.calltrace.cycles-pp.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region
36.87 ? 7% -18.8 18.03 ? 17% perf-profile.calltrace.cycles-pp.free_swap_and_cache.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
36.56 ? 7% -18.7 17.85 ? 17% perf-profile.calltrace.cycles-pp.__swap_entry_free.free_swap_and_cache.zap_pte_range.zap_pmd_range.unmap_page_range
36.35 ? 7% -18.6 17.73 ? 17% perf-profile.calltrace.cycles-pp.free_swap_slot.__swap_entry_free.free_swap_and_cache.zap_pte_range.zap_pmd_range
35.82 ? 7% -18.4 17.45 ? 17% perf-profile.calltrace.cycles-pp.swapcache_free_entries.free_swap_slot.__swap_entry_free.free_swap_and_cache.zap_pte_range
25.98 ? 10% -13.6 12.34 ? 18% perf-profile.calltrace.cycles-pp._raw_spin_lock.swapcache_free_entries.free_swap_slot.__swap_entry_free.free_swap_and_cache
25.79 ? 10% -13.6 12.24 ? 18% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.swapcache_free_entries.free_swap_slot.__swap_entry_free
7.32 ? 7% -3.5 3.86 ? 14% perf-profile.calltrace.cycles-pp.__mem_cgroup_uncharge_swap.swapcache_free_entries.free_swap_slot.__swap_entry_free.free_swap_and_cache
2.92 ? 7% -1.4 1.51 ? 14% perf-profile.calltrace.cycles-pp.swap_cgroup_record.__mem_cgroup_uncharge_swap.swapcache_free_entries.free_swap_slot.__swap_entry_free
2.58 ? 10% -1.2 1.36 ? 14% perf-profile.calltrace.cycles-pp.page_counter_uncharge.__mem_cgroup_uncharge_swap.swapcache_free_entries.free_swap_slot.__swap_entry_free
1.89 ? 8% -0.9 0.98 ? 13% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.swap_cgroup_record.__mem_cgroup_uncharge_swap.swapcache_free_entries.free_swap_slot
1.43 ? 6% -0.7 0.75 ? 16% perf-profile.calltrace.cycles-pp.mem_cgroup_id_put_many.__mem_cgroup_uncharge_swap.swapcache_free_entries.free_swap_slot.__swap_entry_free
1.10 ? 32% +0.7 1.80 ? 13% perf-profile.calltrace.cycles-pp.folio_end_writeback.pmem_rw_page.bdev_write_page.__swap_writepage.pageout
2.39 ? 10% +0.8 3.16 ? 17% perf-profile.calltrace.cycles-pp.do_rw_once
3.29 ? 12% +1.1 4.36 ? 14% perf-profile.calltrace.cycles-pp.pmem_rw_page.bdev_write_page.__swap_writepage.pageout.shrink_folio_list
0.28 ?100% +1.3 1.62 ? 33% perf-profile.calltrace.cycles-pp.rmap_walk_anon.folio_referenced.shrink_folio_list.shrink_inactive_list.shrink_lruvec
0.47 ? 45% +1.4 1.84 ? 24% perf-profile.calltrace.cycles-pp.folio_referenced.shrink_folio_list.shrink_inactive_list.shrink_lruvec.shrink_node_memcgs
0.00 +1.8 1.75 ? 29% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.list_lru_add.workingset_update_node.xas_store
0.00 +1.8 1.82 ? 31% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.list_lru_del.workingset_update_node.xas_store
3.94 ? 12% +2.0 5.96 ? 19% perf-profile.calltrace.cycles-pp.smp_call_function_many_cond.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush_dirty.shrink_folio_list
4.00 ? 12% +2.0 6.03 ? 19% perf-profile.calltrace.cycles-pp.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush_dirty.shrink_folio_list.shrink_inactive_list
4.05 ? 12% +2.0 6.10 ? 19% perf-profile.calltrace.cycles-pp.try_to_unmap_flush_dirty.shrink_folio_list.shrink_inactive_list.shrink_lruvec.shrink_node_memcgs
4.04 ? 12% +2.0 6.09 ? 19% perf-profile.calltrace.cycles-pp.arch_tlbbatch_flush.try_to_unmap_flush_dirty.shrink_folio_list.shrink_inactive_list.shrink_lruvec
0.00 +2.3 2.32 ? 31% perf-profile.calltrace.cycles-pp._raw_spin_lock.list_lru_del.workingset_update_node.xas_store.add_to_swap_cache
0.00 +2.4 2.38 ? 31% perf-profile.calltrace.cycles-pp._raw_spin_lock.list_lru_add.workingset_update_node.xas_store.__delete_from_swap_cache
0.00 +2.9 2.91 ? 25% perf-profile.calltrace.cycles-pp.list_lru_del.workingset_update_node.xas_store.add_to_swap_cache.add_to_swap
0.00 +3.0 3.00 ? 21% perf-profile.calltrace.cycles-pp.list_lru_add.workingset_update_node.xas_store.__delete_from_swap_cache.__remove_mapping
0.00 +3.0 3.01 ? 21% perf-profile.calltrace.cycles-pp.workingset_update_node.xas_store.__delete_from_swap_cache.__remove_mapping.shrink_folio_list
0.00 +3.0 3.04 ? 21% perf-profile.calltrace.cycles-pp.workingset_update_node.xas_store.add_to_swap_cache.add_to_swap.shrink_folio_list
0.00 +3.1 3.12 ? 21% perf-profile.calltrace.cycles-pp.xas_store.add_to_swap_cache.add_to_swap.shrink_folio_list.shrink_inactive_list
0.00 +3.1 3.12 ? 21% perf-profile.calltrace.cycles-pp.xas_store.__delete_from_swap_cache.__remove_mapping.shrink_folio_list.shrink_inactive_list
0.00 +3.7 3.72 ? 20% perf-profile.calltrace.cycles-pp.__delete_from_swap_cache.__remove_mapping.shrink_folio_list.shrink_inactive_list.shrink_lruvec
4.26 ? 13% +3.9 8.13 ? 17% perf-profile.calltrace.cycles-pp.add_to_swap.shrink_folio_list.shrink_inactive_list.shrink_lruvec.shrink_node_memcgs
0.00 +4.0 3.97 ? 20% perf-profile.calltrace.cycles-pp.add_to_swap_cache.add_to_swap.shrink_folio_list.shrink_inactive_list.shrink_lruvec
0.41 ? 70% +4.2 4.66 ? 19% perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_folio_list.shrink_inactive_list.shrink_lruvec.shrink_node_memcgs
6.93 ? 29% +4.3 11.24 ? 28% perf-profile.calltrace.cycles-pp.ret_from_fork
6.93 ? 29% +4.3 11.24 ? 28% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
6.00 ? 31% +4.9 10.90 ? 29% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_lruvec.shrink_node_memcgs.shrink_node.balance_pgdat
6.00 ? 31% +4.9 10.90 ? 29% perf-profile.calltrace.cycles-pp.shrink_lruvec.shrink_node_memcgs.shrink_node.balance_pgdat.kswapd
6.00 ? 31% +4.9 10.92 ? 30% perf-profile.calltrace.cycles-pp.shrink_node_memcgs.shrink_node.balance_pgdat.kswapd.kthread
6.00 ? 31% +4.9 10.92 ? 30% perf-profile.calltrace.cycles-pp.shrink_node.balance_pgdat.kswapd.kthread.ret_from_fork
6.00 ? 31% +4.9 10.92 ? 30% perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
6.00 ? 31% +4.9 10.92 ? 30% perf-profile.calltrace.cycles-pp.balance_pgdat.kswapd.kthread.ret_from_fork
10.74 ? 8% +10.2 20.95 ? 20% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_lruvec.shrink_node_memcgs.shrink_node.shrink_zones
10.80 ? 8% +10.2 21.04 ? 20% perf-profile.calltrace.cycles-pp.shrink_lruvec.shrink_node_memcgs.shrink_node.shrink_zones.do_try_to_free_pages
11.12 ? 8% +10.6 21.68 ? 20% perf-profile.calltrace.cycles-pp.shrink_node_memcgs.shrink_node.shrink_zones.do_try_to_free_pages.try_to_free_pages
11.19 ? 8% +10.6 21.79 ? 19% perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages.__folio_alloc
11.19 ? 8% +10.6 21.78 ? 19% perf-profile.calltrace.cycles-pp.shrink_zones.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages
11.18 ? 8% +10.6 21.78 ? 19% perf-profile.calltrace.cycles-pp.shrink_node.shrink_zones.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
11.22 ? 8% +10.6 21.83 ? 19% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages.__folio_alloc.vma_alloc_folio
12.03 ? 8% +10.8 22.79 ? 19% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages.__folio_alloc.vma_alloc_folio.do_anonymous_page
12.30 ? 8% +10.9 23.20 ? 19% perf-profile.calltrace.cycles-pp.__alloc_pages.__folio_alloc.vma_alloc_folio.do_anonymous_page.__handle_mm_fault
12.31 ? 8% +10.9 23.22 ? 19% perf-profile.calltrace.cycles-pp.__folio_alloc.vma_alloc_folio.do_anonymous_page.__handle_mm_fault.handle_mm_fault
12.36 ? 8% +10.9 23.28 ? 19% perf-profile.calltrace.cycles-pp.vma_alloc_folio.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
14.28 ? 9% +11.6 25.84 ? 19% perf-profile.calltrace.cycles-pp.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
14.56 ? 9% +11.7 26.21 ? 19% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
14.42 ? 9% +11.7 26.14 ? 19% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
14.87 ? 9% +11.8 26.63 ? 19% perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
14.90 ? 9% +11.8 26.68 ? 19% perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.do_access
16.23 ? 10% +12.4 28.66 ? 19% perf-profile.calltrace.cycles-pp.asm_exc_page_fault.do_access
17.96 ? 9% +13.1 31.02 ? 19% perf-profile.calltrace.cycles-pp.do_access
16.40 ? 13% +13.1 29.52 ? 18% perf-profile.calltrace.cycles-pp.shrink_folio_list.shrink_inactive_list.shrink_lruvec.shrink_node_memcgs.shrink_node
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.children.cycles-pp.__munmap
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.children.cycles-pp.unmap_region
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.children.cycles-pp.unmap_page_range
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.children.cycles-pp.zap_pmd_range
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.children.cycles-pp.do_mas_munmap
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.children.cycles-pp.do_mas_align_munmap
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.children.cycles-pp.__x64_sys_munmap
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.children.cycles-pp.__vm_munmap
37.29 ? 6% -19.2 18.08 ? 17% perf-profile.children.cycles-pp.zap_pte_range
37.29 ? 6% -19.2 18.09 ? 17% perf-profile.children.cycles-pp.unmap_vmas
37.51 ? 6% -18.9 18.56 ? 15% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
37.51 ? 6% -18.9 18.56 ? 15% perf-profile.children.cycles-pp.do_syscall_64
36.88 ? 7% -18.8 18.03 ? 17% perf-profile.children.cycles-pp.free_swap_and_cache
36.56 ? 7% -18.7 17.86 ? 17% perf-profile.children.cycles-pp.__swap_entry_free
36.35 ? 7% -18.6 17.73 ? 17% perf-profile.children.cycles-pp.free_swap_slot
35.83 ? 7% -18.4 17.45 ? 17% perf-profile.children.cycles-pp.swapcache_free_entries
26.84 ? 9% -8.3 18.53 ? 9% perf-profile.children.cycles-pp._raw_spin_lock
26.22 ? 10% -7.7 18.52 ? 9% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
7.34 ? 7% -3.5 3.86 ? 14% perf-profile.children.cycles-pp.__mem_cgroup_uncharge_swap
3.37 ? 6% -1.4 2.02 ? 9% perf-profile.children.cycles-pp.swap_cgroup_record
2.66 ? 10% -1.2 1.42 ? 13% perf-profile.children.cycles-pp.page_counter_uncharge
2.29 ? 7% -0.9 1.41 ? 9% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.16 ? 7% -0.7 0.43 ? 17% perf-profile.children.cycles-pp.clear_shadow_from_swap_cache
1.44 ? 6% -0.7 0.76 ? 16% perf-profile.children.cycles-pp.mem_cgroup_id_put_many
0.79 ?109% -0.7 0.12 ?114% perf-profile.children.cycles-pp.kcompactd
0.79 ?109% -0.7 0.12 ?114% perf-profile.children.cycles-pp.compact_zone
0.78 ?111% -0.7 0.12 ?116% perf-profile.children.cycles-pp.proactive_compact_node
0.74 ?114% -0.6 0.11 ?118% perf-profile.children.cycles-pp.migrate_pages
0.74 ?114% -0.6 0.11 ?118% perf-profile.children.cycles-pp.unmap_and_move
0.69 ?115% -0.6 0.10 ?116% perf-profile.children.cycles-pp.__unmap_and_move
0.57 ?114% -0.5 0.09 ?119% perf-profile.children.cycles-pp.try_to_migrate
0.56 ?114% -0.5 0.09 ?119% perf-profile.children.cycles-pp.try_to_migrate_one
0.56 ?108% -0.4 0.11 ? 98% perf-profile.children.cycles-pp.ptep_clear_flush
0.56 ?109% -0.4 0.11 ? 98% perf-profile.children.cycles-pp.flush_tlb_mm_range
0.38 ? 11% -0.3 0.12 ? 18% perf-profile.children.cycles-pp.xas_find
0.35 ? 10% -0.2 0.11 ? 20% perf-profile.children.cycles-pp.xas_load
0.55 ? 9% -0.2 0.32 ? 12% perf-profile.children.cycles-pp.sort
0.55 ? 9% -0.2 0.32 ? 12% perf-profile.children.cycles-pp.sort_r
0.55 ? 12% -0.2 0.36 ? 15% perf-profile.children.cycles-pp.swap_range_free
0.55 ? 10% -0.2 0.37 ? 17% perf-profile.children.cycles-pp._swap_info_get
0.35 ? 16% -0.1 0.24 ? 4% perf-profile.children.cycles-pp.propagate_protected_usage
0.32 ? 11% -0.1 0.24 ? 11% perf-profile.children.cycles-pp.__mod_memcg_state
0.16 ? 10% -0.1 0.10 ? 8% perf-profile.children.cycles-pp.swp_entry_cmp
0.08 ? 17% +0.0 0.11 ? 17% perf-profile.children.cycles-pp.__perf_sw_event
0.02 ? 99% +0.0 0.06 perf-profile.children.cycles-pp.memset_erms
0.04 ? 72% +0.0 0.08 ? 14% perf-profile.children.cycles-pp.folio_mapping
0.04 ? 72% +0.0 0.09 ? 12% perf-profile.children.cycles-pp.__cond_resched
0.11 ? 29% +0.1 0.16 ? 14% perf-profile.children.cycles-pp.workingset_age_nonresident
0.14 ? 8% +0.1 0.19 ? 19% perf-profile.children.cycles-pp._find_next_bit
0.04 ? 73% +0.1 0.10 ? 21% perf-profile.children.cycles-pp.folio_unlock
0.25 ? 4% +0.1 0.33 ? 16% perf-profile.children.cycles-pp.llist_reverse_order
0.13 ? 21% +0.1 0.22 ? 21% perf-profile.children.cycles-pp.__mod_node_page_state
0.22 ? 9% +0.1 0.31 ? 13% perf-profile.children.cycles-pp.sync_regs
0.15 ? 8% +0.1 0.25 ? 19% perf-profile.children.cycles-pp.up_read
0.05 ? 45% +0.1 0.15 ? 26% perf-profile.children.cycles-pp.count_shadow_nodes
0.00 +0.1 0.12 ? 26% perf-profile.children.cycles-pp.__list_lru_walk_one
0.00 +0.1 0.12 ? 26% perf-profile.children.cycles-pp.shadow_lru_isolate
0.00 +0.1 0.12 ? 26% perf-profile.children.cycles-pp.list_lru_walk_one_irq
0.20 ? 20% +0.1 0.34 ? 20% perf-profile.children.cycles-pp.task_numa_work
0.20 ? 20% +0.1 0.34 ? 20% perf-profile.children.cycles-pp.change_prot_numa
0.20 ? 20% +0.1 0.34 ? 20% perf-profile.children.cycles-pp.change_protection_range
0.20 ? 20% +0.1 0.34 ? 20% perf-profile.children.cycles-pp.change_pmd_range
0.21 ? 21% +0.1 0.34 ? 21% perf-profile.children.cycles-pp.task_work_run
0.20 ? 21% +0.1 0.34 ? 20% perf-profile.children.cycles-pp.change_pte_range
0.21 ? 21% +0.1 0.34 ? 20% perf-profile.children.cycles-pp.exit_to_user_mode_loop
0.21 ? 22% +0.1 0.35 ? 20% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.44 ? 8% +0.1 0.58 ? 15% perf-profile.children.cycles-pp.flush_tlb_func
0.21 ? 21% +0.1 0.35 ? 21% perf-profile.children.cycles-pp.irqentry_exit_to_user_mode
0.07 ? 32% +0.2 0.26 ? 19% perf-profile.children.cycles-pp.xas_create_range
0.11 ? 16% +0.2 0.32 ? 18% perf-profile.children.cycles-pp.xas_create
0.00 +0.2 0.24 ? 21% perf-profile.children.cycles-pp.noop_dirty_folio
0.00 +0.2 0.24 ? 39% perf-profile.children.cycles-pp.move_folios_to_lru
0.16 ? 12% +0.3 0.42 ? 20% perf-profile.children.cycles-pp.shrink_slab
0.21 ? 9% +0.3 0.51 ? 20% perf-profile.children.cycles-pp.do_shrink_slab
0.02 ? 99% +0.3 0.32 ? 18% perf-profile.children.cycles-pp.__list_add_valid
1.10 ? 6% +0.4 1.47 ? 14% perf-profile.children.cycles-pp.__flush_smp_call_function_queue
1.12 ? 6% +0.4 1.49 ? 14% perf-profile.children.cycles-pp.__sysvec_call_function_single
0.18 ? 22% +0.4 0.55 ? 24% perf-profile.children.cycles-pp.isolate_lru_folios
0.39 ? 13% +0.4 0.77 ? 22% perf-profile.children.cycles-pp.folio_lock_anon_vma_read
1.22 ? 8% +0.4 1.62 ? 14% perf-profile.children.cycles-pp.sysvec_call_function_single
0.28 ? 14% +0.4 0.69 ? 18% perf-profile.children.cycles-pp.__list_del_entry_valid
1.40 ? 12% +0.4 1.82 ? 13% perf-profile.children.cycles-pp.folio_end_writeback
0.62 ? 16% +0.5 1.08 ? 22% perf-profile.children.cycles-pp.page_vma_mapped_walk
1.82 ? 8% +0.5 2.37 ? 14% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.44 ? 19% +0.6 1.03 ? 25% perf-profile.children.cycles-pp.folio_referenced_one
0.02 ?142% +0.6 0.63 ? 83% perf-profile.children.cycles-pp.lru_note_cost
2.76 ? 10% +0.9 3.65 ? 17% perf-profile.children.cycles-pp.do_rw_once
3.30 ? 12% +1.1 4.38 ? 14% perf-profile.children.cycles-pp.pmem_rw_page
0.82 ? 17% +1.1 1.90 ? 25% perf-profile.children.cycles-pp.folio_referenced
0.36 ? 99% +1.4 1.78 ? 61% perf-profile.children.cycles-pp.start_kernel
0.36 ? 99% +1.4 1.78 ? 61% perf-profile.children.cycles-pp.arch_call_rest_init
0.36 ? 99% +1.4 1.78 ? 61% perf-profile.children.cycles-pp.rest_init
4.55 ? 17% +1.6 6.15 ? 17% perf-profile.children.cycles-pp.smp_call_function_many_cond
4.56 ? 17% +1.6 6.16 ? 17% perf-profile.children.cycles-pp.on_each_cpu_cond_mask
0.50 ? 7% +1.7 2.15 ? 77% perf-profile.children.cycles-pp._raw_spin_lock_irq
4.08 ? 12% +2.1 6.14 ? 19% perf-profile.children.cycles-pp.try_to_unmap_flush_dirty
4.07 ? 12% +2.1 6.14 ? 19% perf-profile.children.cycles-pp.arch_tlbbatch_flush
0.00 +3.0 3.01 ? 21% perf-profile.children.cycles-pp.list_lru_del
0.00 +3.0 3.02 ? 21% perf-profile.children.cycles-pp.list_lru_add
0.26 ? 11% +3.5 3.74 ? 20% perf-profile.children.cycles-pp.__delete_from_swap_cache
0.46 ? 14% +3.6 4.09 ? 20% perf-profile.children.cycles-pp.add_to_swap_cache
4.29 ? 12% +3.9 8.19 ? 17% perf-profile.children.cycles-pp.add_to_swap
0.86 ? 14% +3.9 4.76 ? 19% perf-profile.children.cycles-pp.__remove_mapping
6.93 ? 29% +4.3 11.24 ? 28% perf-profile.children.cycles-pp.kthread
6.93 ? 29% +4.3 11.24 ? 28% perf-profile.children.cycles-pp.ret_from_fork
6.00 ? 31% +4.9 10.92 ? 30% perf-profile.children.cycles-pp.kswapd
6.00 ? 31% +4.9 10.92 ? 30% perf-profile.children.cycles-pp.balance_pgdat
0.40 ? 5% +5.9 6.34 ? 21% perf-profile.children.cycles-pp.xas_store
0.00 +6.1 6.10 ? 21% perf-profile.children.cycles-pp.workingset_update_node
11.29 ? 8% +11.1 22.44 ? 21% perf-profile.children.cycles-pp.shrink_zones
11.29 ? 8% +11.2 22.44 ? 21% perf-profile.children.cycles-pp.do_try_to_free_pages
11.32 ? 8% +11.2 22.48 ? 21% perf-profile.children.cycles-pp.try_to_free_pages
12.36 ? 8% +11.2 23.57 ? 20% perf-profile.children.cycles-pp.__folio_alloc
12.40 ? 8% +11.2 23.64 ? 20% perf-profile.children.cycles-pp.vma_alloc_folio
12.15 ? 8% +11.3 23.46 ? 20% perf-profile.children.cycles-pp.__alloc_pages_slowpath
12.42 ? 8% +11.5 23.88 ? 20% perf-profile.children.cycles-pp.__alloc_pages
14.36 ? 9% +11.6 26.00 ? 19% perf-profile.children.cycles-pp.do_anonymous_page
14.48 ? 9% +11.9 26.34 ? 19% perf-profile.children.cycles-pp.__handle_mm_fault
14.63 ? 9% +11.9 26.54 ? 19% perf-profile.children.cycles-pp.handle_mm_fault
14.94 ? 9% +12.0 26.92 ? 19% perf-profile.children.cycles-pp.do_user_addr_fault
14.97 ? 9% +12.0 26.96 ? 19% perf-profile.children.cycles-pp.exc_page_fault
15.95 ? 9% +12.3 28.29 ? 19% perf-profile.children.cycles-pp.asm_exc_page_fault
18.59 ? 9% +13.0 31.56 ? 18% perf-profile.children.cycles-pp.do_access
16.51 ? 13% +13.3 29.78 ? 18% perf-profile.children.cycles-pp.shrink_folio_list
16.83 ? 13% +15.7 32.50 ? 21% perf-profile.children.cycles-pp.shrink_inactive_list
16.90 ? 13% +15.7 32.59 ? 21% perf-profile.children.cycles-pp.shrink_lruvec
17.21 ? 13% +16.0 33.25 ? 20% perf-profile.children.cycles-pp.shrink_node_memcgs
17.28 ? 13% +16.1 33.35 ? 20% perf-profile.children.cycles-pp.shrink_node
26.14 ? 10% -7.7 18.46 ? 9% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
2.46 ? 10% -1.2 1.30 ? 13% perf-profile.self.cycles-pp.page_counter_uncharge
2.20 ? 8% -0.9 1.32 ? 10% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
1.41 ? 6% -0.7 0.74 ? 16% perf-profile.self.cycles-pp.mem_cgroup_id_put_many
1.14 ? 6% -0.4 0.71 ? 8% perf-profile.self.cycles-pp.swap_cgroup_record
0.30 ? 12% -0.2 0.08 ? 24% perf-profile.self.cycles-pp.xas_load
0.38 ? 7% -0.2 0.21 ? 14% perf-profile.self.cycles-pp.free_swap_slot
0.54 ? 10% -0.2 0.36 ? 16% perf-profile.self.cycles-pp._swap_info_get
0.47 ? 11% -0.2 0.30 ? 15% perf-profile.self.cycles-pp.swap_range_free
0.36 ? 8% -0.2 0.20 ? 18% perf-profile.self.cycles-pp.sort_r
0.21 ? 7% -0.1 0.08 ? 8% perf-profile.self.cycles-pp.xas_store
0.16 ? 42% -0.1 0.04 ? 71% perf-profile.self.cycles-pp.zap_pte_range
0.33 ? 16% -0.1 0.22 ? 5% perf-profile.self.cycles-pp.propagate_protected_usage
0.13 ? 3% -0.1 0.04 ? 75% perf-profile.self.cycles-pp.clear_shadow_from_swap_cache
0.46 ? 6% -0.1 0.38 ? 10% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.28 ? 10% -0.1 0.21 ? 12% perf-profile.self.cycles-pp.__mod_memcg_state
0.13 ? 9% -0.1 0.08 ? 19% perf-profile.self.cycles-pp.__mem_cgroup_uncharge_swap
0.14 ? 7% -0.1 0.09 ? 10% perf-profile.self.cycles-pp.swp_entry_cmp
0.10 ? 12% -0.0 0.05 perf-profile.self.cycles-pp.swapcache_free_entries
0.04 ? 45% +0.0 0.08 ? 19% perf-profile.self.cycles-pp.rmqueue_bulk
0.10 ? 4% +0.0 0.15 ? 18% perf-profile.self.cycles-pp._find_next_bit
0.11 ? 29% +0.1 0.16 ? 14% perf-profile.self.cycles-pp.workingset_age_nonresident
0.03 ?102% +0.1 0.09 ? 24% perf-profile.self.cycles-pp.folio_unlock
0.07 ? 22% +0.1 0.13 ? 23% perf-profile.self.cycles-pp.__might_resched
0.04 ? 71% +0.1 0.12 ? 17% perf-profile.self.cycles-pp.__remove_mapping
0.25 ? 4% +0.1 0.33 ? 16% perf-profile.self.cycles-pp.llist_reverse_order
0.09 ? 23% +0.1 0.17 ? 30% perf-profile.self.cycles-pp.rmap_walk_anon
0.13 ? 23% +0.1 0.21 ? 19% perf-profile.self.cycles-pp.__mod_node_page_state
0.20 ? 10% +0.1 0.29 ? 14% perf-profile.self.cycles-pp.flush_tlb_func
0.22 ? 9% +0.1 0.31 ? 13% perf-profile.self.cycles-pp.sync_regs
0.12 ? 17% +0.1 0.21 ? 19% perf-profile.self.cycles-pp.add_to_swap_cache
0.13 ? 9% +0.1 0.23 ? 18% perf-profile.self.cycles-pp.up_read
0.00 +0.1 0.10 ? 28% perf-profile.self.cycles-pp.count_shadow_nodes
0.18 ? 21% +0.1 0.30 ? 19% perf-profile.self.cycles-pp.change_pte_range
0.00 +0.1 0.14 ? 45% perf-profile.self.cycles-pp.move_folios_to_lru
0.44 ? 8% +0.1 0.58 ? 13% perf-profile.self.cycles-pp.__flush_smp_call_function_queue
0.18 ? 9% +0.2 0.34 ? 18% perf-profile.self.cycles-pp.folio_lock_anon_vma_read
0.09 ? 25% +0.2 0.25 ? 24% perf-profile.self.cycles-pp.folio_referenced_one
0.08 ? 14% +0.2 0.28 ? 21% perf-profile.self.cycles-pp.xas_create
0.00 +0.2 0.21 ? 21% perf-profile.self.cycles-pp.list_lru_del
0.11 ? 24% +0.2 0.32 ? 26% perf-profile.self.cycles-pp.isolate_lru_folios
0.00 +0.2 0.22 ? 20% perf-profile.self.cycles-pp.list_lru_add
0.19 ? 20% +0.2 0.41 ? 20% perf-profile.self.cycles-pp.shrink_folio_list
0.00 +0.2 0.22 ? 23% perf-profile.self.cycles-pp.noop_dirty_folio
0.29 ? 18% +0.2 0.53 ? 21% perf-profile.self.cycles-pp.page_vma_mapped_walk
0.01 ?223% +0.3 0.32 ? 19% perf-profile.self.cycles-pp.__list_add_valid
0.03 ?100% +0.4 0.40 ? 19% perf-profile.self.cycles-pp.__delete_from_swap_cache
0.98 ? 13% +0.4 1.36 ? 14% perf-profile.self.cycles-pp.folio_end_writeback
0.27 ? 15% +0.4 0.68 ? 19% perf-profile.self.cycles-pp.__list_del_entry_valid
2.45 ? 11% +0.7 3.18 ? 16% perf-profile.self.cycles-pp.do_access
2.58 ? 10% +0.8 3.41 ? 17% perf-profile.self.cycles-pp.do_rw_once
0.70 ? 8% +1.1 1.83 ? 18% perf-profile.self.cycles-pp._raw_spin_lock
3.82 ? 19% +1.5 5.28 ? 18% perf-profile.self.cycles-pp.smp_call_function_many_cond


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file

# if you come across any failure that blocks the test,
# please remove ~/.lkp and the /lkp dir to run from a clean state.


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests
[linus:master] [swap_state] 5649d113ff: vm-scalability.throughput -33.1% regression
> 04bac040bc71b4b3 5649d113ffce9f532a9ecc5ab96
> ---------------- ---------------------------
> %stddev %change %stddev
> \ | \
> 10026093 ± 3% -33.1% 6702748 ± 2% vm-scalability.throughput

I tried to reproduce this and saw vm-scalability.throughput really decrease.
Using ftrace, I found that the functions related to this patch
(add_to_swap_cache(), __delete_from_swap_cache() and
clear_shadow_from_swap_cache()) consume more time, while
workingset_update_node() is called many more times.
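
For anyone reproducing this, a minimal ftrace function-profiler setup along
these lines captures the numbers below (a sketch, assuming debugfs is mounted
at /sys/kernel/debug and the kernel has CONFIG_FUNCTION_PROFILER):

cd /sys/kernel/debug/tracing
echo add_to_swap_cache __delete_from_swap_cache \
     clear_shadow_from_swap_cache workingset_update_node > set_ftrace_filter
echo 1 > function_profile_enabled
# ... run the swap-w-seq workload ...
cat trace_stat/function*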

Since the patch results in consuming considerably more resources, and the
problem it tries to solve is not apparent to users, we may abandon this patch.

By the way, judging from this test result, mapping_set_update() should also
consume significant time. Should we be concerned about that?
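
For context, mapping_set_update() is what wires the same workingset callback
into the page-cache XArray. Roughly, paraphrased from mm/filemap.c (exact
guards vary by kernel version, so treat this as a sketch):

/* Paraphrased from mm/filemap.c; details differ across kernel versions. */
static void mapping_set_update(struct xa_state *xas,
			       struct address_space *mapping)
{
	/* DAX and shmem mappings don't participate in shadow tracking. */
	if (dax_mapping(mapping) || shmem_mapping(mapping))
		return;
	/* Keep shadow_nodes accounting in sync on every xas_store(). */
	xas_set_update(xas, workingset_update_node);
	xas_set_lru(xas, &shadow_nodes);
}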

Thanks.

Reproduce before this patch:
/vm-scalability-master # cat /sys/kernel/debug/tracing/trace_stat/function0
Function                      Hit     Time          Avg        s^2
--------                      ---     ----          ---        ---
add_to_swap_cache             26108   476290.6 us   18.243 us  487.762 us
__delete_from_swap_cache      26117   462492.6 us   17.708 us  77.801 us
clear_shadow_from_swap_cache  27840   199925.1 us   7.181 us   313.126 us

Reproduce after this patch:
/vm-scalability-master # cat /sys/kernel/debug/tracing/trace_stat/function*
Function                      Hit     Time          Avg        s^2
--------                      ---     ----          ---        ---
add_to_swap_cache             51268   1371819 us    26.757 us  676.311 us
__delete_from_swap_cache      51260   1322712 us    25.803 us  123.010 us
workingset_update_node        157455  770064.9 us   4.890 us   15.108 us
clear_shadow_from_swap_cache  52928   563597.4 us   10.648 us  199.766 us
[linus:master] [swap_state] 5649d113ff: vm-scalability.throughput -33.1% regression
> commit:
> 04bac040bc ("mm/hugetlb: convert get_hwpoison_huge_page() to folios")
> 5649d113ff ("swap_state: update shadow_nodes for anonymous page")
> 04bac040bc71b4b3 5649d113ffce9f532a9ecc5ab96
> ---------------- ---------------------------
> %stddev %change %stddev
> \ | \
> 10026093 ± 3% -33.1% 6702748 ± 2% vm-scalability.throughput

> 04bac040bc71b4b3 5649d113ffce9f532a9ecc5ab96
> ---------------- ---------------------------
> %stddev %change %stddev
> \ | \
> 553378 -11.1% 492012 ± 2% vm-scalability.median

I see the two results differ considerably: one is -33.1%, the other -11.1%.
So I ran the test several more times to reproduce on my machine, and saw
about an 8% regression in vm-scalability.throughput.

As this test adds/deletes/clears swap cache entries frequently, the impact
of commit 5649d113ff might be magnified?

Commit 5649d113ff tried to fix the problem that, when swap space is huge and
applications use many shadow entries, shadow nodes can waste a lot of memory.
So the shadow nodes should be reclaimed when their number grows large while
memory is under pressure.
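
Concretely, the reclaim side is a shrinker over the shadow_nodes list_lru;
the count_shadow_nodes()/shadow_lru_isolate() entries in the profile above
are exactly this path. A simplified sketch of the counting half (the real
count_shadow_nodes() in mm/workingset.c also caps the result against an
estimate of how many nodes the workingset actually needs):

/* Simplified from mm/workingset.c; the real version compares against
 * a max_nodes estimate before deciding how much to reclaim. */
static unsigned long count_shadow_nodes(struct shrinker *shrinker,
					struct shrink_control *sc)
{
	/* Number of XArray nodes currently holding only shadow entries. */
	return list_lru_shrink_count(&shadow_nodes, sc);
}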

I reviewed commit 5649d113ff carefully and didn't find any obvious problem.
If we want to correctly update shadow_nodes for anonymous pages, we have to
update them when adding/deleting/clearing swap cache entries.
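
That also matches the profile above, where list_lru_add()/list_lru_del()
show up under workingset_update_node(): when an XArray node ends up holding
only shadow (value) entries it goes onto the shadow LRU, and it comes back
off as soon as it holds real pages again. Simplified from mm/workingset.c
(the real version also adjusts the WORKINGSET_NODES statistics):

void workingset_update_node(struct xa_node *node)
{
	if (node->count && node->count == node->nr_values) {
		/* Only shadow entries left: make the node reclaimable. */
		if (list_empty(&node->private_list))
			list_lru_add(&shadow_nodes, &node->private_list);
	} else {
		/* Node holds real entries again: keep it off the LRU. */
		if (!list_empty(&node->private_list))
			list_lru_del(&shadow_nodes, &node->private_list);
	}
}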

Thanks.
Re: [linus:master] [swap_state] 5649d113ff: vm-scalability.throughput -33.1% regression
Hi Yang,

On Tue, 2023-03-21 at 07:56 +0000, Yang Yang wrote:
> > commit:
> >   04bac040bc ("mm/hugetlb: convert get_hwpoison_huge_page() to folios")
> >   5649d113ff ("swap_state: update shadow_nodes for anonymous page")
> > 04bac040bc71b4b3 5649d113ffce9f532a9ecc5ab96
> > ---------------- ---------------------------
> >          %stddev     %change         %stddev
> >              \          |                \
> >  10026093 ±  3%     -33.1%    6702748 ±  2%  vm-scalability.throughput
> >
> > 04bac040bc71b4b3 5649d113ffce9f532a9ecc5ab96
> > ---------------- ---------------------------
> >          %stddev     %change         %stddev
> >              \          |                \
> >    553378           -11.1%     492012 ±  2%  vm-scalability.median
>
> I see the two results differ considerably: one is -33.1%, the other
> -11.1%. So I ran the test several more times to reproduce on my machine,
> and saw about an 8% regression in vm-scalability.throughput.
>
> As this test adds/deletes/clears swap cache entries frequently, the
> impact of commit 5649d113ff might be magnified?
>
> Commit 5649d113ff tried to fix the problem that, when swap space is huge
> and applications use many shadow entries, shadow nodes can waste a lot
> of memory. So the shadow nodes should be reclaimed when their number
> grows large while memory is under pressure.
>
> I reviewed commit 5649d113ff carefully and didn't find any obvious
> problem. If we want to correctly update shadow_nodes for anonymous
> pages, we have to update them when adding/deleting/clearing swap cache
> entries.
Thanks for the info, and sorry for the delayed response. We didn't get your
replies in our company inbox (not sure why); we only noticed them on
lore.kernel.org when revising the ticket. We will mark this regression as
won't-fix, since the commit is a functional fix.


Regards
Yin, Fengwei

>
> Thanks.