
[PATCH 01/12] mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s
Before putting them to use (several commits later), add rcu_read_lock()
to pte_offset_map(), and rcu_read_unlock() to pte_unmap(). Make this a
separate commit, since it risks exposing imbalances: prior commits have
fixed all the known imbalances, but we may find some have been missed.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 include/linux/pgtable.h | 4 ++--
 mm/pgtable-generic.c    | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index a1326e61d7ee..8b0fc7fdc46f 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -99,7 +99,7 @@ static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
 	((pte_t *)kmap_local_page(pmd_page(*(pmd))) + pte_index((address)))
 #define pte_unmap(pte)	do {	\
 	kunmap_local((pte));	\
-	/* rcu_read_unlock() to be added later */	\
+	rcu_read_unlock();	\
 } while (0)
 #else
 static inline pte_t *__pte_map(pmd_t *pmd, unsigned long address)
@@ -108,7 +108,7 @@ static inline pte_t *__pte_map(pmd_t *pmd, unsigned long address)
 }
 static inline void pte_unmap(pte_t *pte)
 {
-	/* rcu_read_unlock() to be added later */
+	rcu_read_unlock();
 }
 #endif

diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index c7ab18a5fb77..674671835631 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -236,7 +236,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 {
 	pmd_t pmdval;
 
-	/* rcu_read_lock() to be added later */
+	rcu_read_lock();
 	pmdval = pmdp_get_lockless(pmd);
 	if (pmdvalp)
 		*pmdvalp = pmdval;
@@ -250,7 +250,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 	}
 	return __pte_map(&pmdval, addr);
 nomap:
-	/* rcu_read_unlock() to be added later */
+	rcu_read_unlock();
 	return NULL;
 }

--
2.35.3
Re: [PATCH 01/12] mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s
On Mon, May 29, 2023 at 8:15 AM Hugh Dickins <hughd@google.com> wrote:
> Before putting them to use (several commits later), add rcu_read_lock()
> to pte_offset_map(), and rcu_read_unlock() to pte_unmap(). Make this a
> separate commit, since it risks exposing imbalances: prior commits have
> fixed all the known imbalances, but we may find some have been missed.
[...]
> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index c7ab18a5fb77..674671835631 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -236,7 +236,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
>  {
>  	pmd_t pmdval;
>
> -	/* rcu_read_lock() to be added later */
> +	rcu_read_lock();
>  	pmdval = pmdp_get_lockless(pmd);
>  	if (pmdvalp)
>  		*pmdvalp = pmdval;

It might be a good idea to document that this series assumes that the
first argument to __pte_offset_map() is a pointer into a second-level
page table (and not a local copy of the entry) unless the containing
VMA is known to not be THP-eligible or the page table is detached from
the page table hierarchy or something like that. Currently a bunch of
places pass references to local copies of the entry, and while I think
all of these are fine, it would probably be good to at least document
why these are allowed to do it while other places aren't.

$ vgrep 'pte_offset_map(&'
Index File                      Line Content
    0 arch/sparc/mm/tlb.c        151 pte = pte_offset_map(&pmd, vaddr);
    1 kernel/events/core.c      7501 ptep = pte_offset_map(&pmd, addr);
    2 mm/gup.c                  2460 ptem = ptep = pte_offset_map(&pmd, addr);
    3 mm/huge_memory.c          2057 pte = pte_offset_map(&_pmd, haddr);
    4 mm/huge_memory.c          2214 pte = pte_offset_map(&_pmd, haddr);
    5 mm/page_table_check.c      240 pte_t *ptep = pte_offset_map(&pmd, addr);
Re: [PATCH 01/12] mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s
On Wed, 31 May 2023, Jann Horn wrote:
> On Mon, May 29, 2023 at 8:15 AM Hugh Dickins <hughd@google.com> wrote:
> > Before putting them to use (several commits later), add rcu_read_lock()
> > to pte_offset_map(), and rcu_read_unlock() to pte_unmap(). Make this a
> > separate commit, since it risks exposing imbalances: prior commits have
> > fixed all the known imbalances, but we may find some have been missed.
> [...]
> > diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> > index c7ab18a5fb77..674671835631 100644
> > --- a/mm/pgtable-generic.c
> > +++ b/mm/pgtable-generic.c
> > @@ -236,7 +236,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
> >  {
> >  	pmd_t pmdval;
> >
> > -	/* rcu_read_lock() to be added later */
> > +	rcu_read_lock();
> >  	pmdval = pmdp_get_lockless(pmd);
> >  	if (pmdvalp)
> >  		*pmdvalp = pmdval;
>
> It might be a good idea to document that this series assumes that the
> first argument to __pte_offset_map() is a pointer into a second-level
> page table (and not a local copy of the entry) unless the containing
> VMA is known to not be THP-eligible or the page table is detached from
> the page table hierarchy or something like that. Currently a bunch of
> places pass references to local copies of the entry, and while I think
> all of these are fine, it would probably be good to at least document
> why these are allowed to do it while other places aren't.

Thanks Jann: but I have to guess that here you are showing awareness of
an important issue that I'm simply ignorant of.

I have been haunted by a dim recollection that there is one architecture
(arm-32?) which is fussy about the placement of the pmdval being examined
(deduces info missing from the arch-independent interface, by following
up the address?), but I couldn't track it down when I tried.

Please tell me more; or better, don't spend your time explaining to me,
but please just send a link to a good reference on the issue. I'll be
unable to document what you ask there, without educating myself first.

Thanks,
Hugh

>
> $ vgrep 'pte_offset_map(&'
> Index File                      Line Content
>     0 arch/sparc/mm/tlb.c        151 pte = pte_offset_map(&pmd, vaddr);
>     1 kernel/events/core.c      7501 ptep = pte_offset_map(&pmd, addr);
>     2 mm/gup.c                  2460 ptem = ptep = pte_offset_map(&pmd, addr);
>     3 mm/huge_memory.c          2057 pte = pte_offset_map(&_pmd, haddr);
>     4 mm/huge_memory.c          2214 pte = pte_offset_map(&_pmd, haddr);
>     5 mm/page_table_check.c      240 pte_t *ptep = pte_offset_map(&pmd, addr);
Re: [PATCH 01/12] mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s
On Fri, Jun 2, 2023 at 4:50 AM Hugh Dickins <hughd@google.com> wrote:
> On Wed, 31 May 2023, Jann Horn wrote:
> > On Mon, May 29, 2023 at 8:15 AM Hugh Dickins <hughd@google.com> wrote:
> > > Before putting them to use (several commits later), add rcu_read_lock()
> > > to pte_offset_map(), and rcu_read_unlock() to pte_unmap(). Make this a
> > > separate commit, since it risks exposing imbalances: prior commits have
> > > fixed all the known imbalances, but we may find some have been missed.
> > [...]
> > > diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> > > index c7ab18a5fb77..674671835631 100644
> > > --- a/mm/pgtable-generic.c
> > > +++ b/mm/pgtable-generic.c
> > > @@ -236,7 +236,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
> > >  {
> > >  	pmd_t pmdval;
> > >
> > > -	/* rcu_read_lock() to be added later */
> > > +	rcu_read_lock();
> > >  	pmdval = pmdp_get_lockless(pmd);
> > >  	if (pmdvalp)
> > >  		*pmdvalp = pmdval;
> >
> > It might be a good idea to document that this series assumes that the
> > first argument to __pte_offset_map() is a pointer into a second-level
> > page table (and not a local copy of the entry) unless the containing
> > VMA is known to not be THP-eligible or the page table is detached from
> > the page table hierarchy or something like that. Currently a bunch of
> > places pass references to local copies of the entry, and while I think
> > all of these are fine, it would probably be good to at least document
> > why these are allowed to do it while other places aren't.
>
> Thanks Jann: but I have to guess that here you are showing awareness of
> an important issue that I'm simply ignorant of.
>
> I have been haunted by a dim recollection that there is one architecture
> (arm-32?) which is fussy about the placement of the pmdval being examined
> (deduces info missing from the arch-independent interface, by following
> up the address?), but I couldn't track it down when I tried.
>
> Please tell me more; or better, don't spend your time explaining to me,
> but please just send a link to a good reference on the issue. I'll be
> unable to document what you ask there, without educating myself first.

Sorry, I think I was somewhat confused about what was going on when I
wrote that message.

After this series, __pte_offset_map() looks as follows, with added
comments describing my understanding of the semantics:

// `pmd` points to one of:
// case 1: a pmd_t stored outside a page table,
//         referencing a page table detached by the caller
// case 2: a pmd_t stored outside a page table, which the caller copied
//         from a page table in an RCU-critical section that extends
//         until at least the end of this function
// case 3: a pmd_t stored inside a page table
pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
{
	unsigned long __maybe_unused flags;
	pmd_t pmdval;

	// begin an RCU section; this is needed for case 3
	rcu_read_lock();
	config_might_irq_save(flags);
	// read the pmd_t.
	// if the pmd_t references a page table, this page table can not
	// go away because:
	//  - in case 1, the caller is the main owner of the page table
	//  - in case 2, the caller started an RCU read-side critical
	//    section before it read the original pmd_t (this
	//    pmdp_get_lockless() is just reading a copied pmd_t off the
	//    stack)
	//  - in case 3, we started an RCU section above before reading
	//    the pmd_t out of the page table here
	pmdval = pmdp_get_lockless(pmd);
	config_might_irq_restore(flags);

	if (pmdvalp)
		*pmdvalp = pmdval;
	if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))
		goto nomap;
	if (unlikely(pmd_trans_huge(pmdval) || pmd_devmap(pmdval)))
		goto nomap;
	if (unlikely(pmd_bad(pmdval))) {
		pmd_clear_bad(pmd);
		goto nomap;
	}
	return __pte_map(&pmdval, addr);
nomap:
	rcu_read_unlock();
	return NULL;
}

case 1 is what happens in __page_table_check_pte_clear_range(),
__split_huge_zero_page_pmd() and __split_huge_pmd_locked().
case 2 happens in lockless page table traversal (gup_pte_range() and
perf_get_pgtable_size()).
case 3 is normal page table traversal under mmap lock or mapping lock.

I think having a function like this that can run in three different
contexts in which it is protected in three different ways is somewhat
hard to understand without comments. Though maybe I'm thinking about
it the wrong way?

Basically my point is: __pte_offset_map() normally requires that the
pmd argument points into a page table so that the rcu_read_lock() can
provide protection starting from the time the pmd_t is read from a
page table. The exceptions are cases where the caller has taken its own
precautions to ensure that the referenced page table can not have been
freed.