Mailing List Archive

[PATCH v3 2/2] regmap: Add maple tree based register cache
The current state of the art for sparse register maps is the
rbtree cache. This works well for most applications but isn't
always ideal for sparser register maps since the rbtree can get
deep, requiring a lot of walking. Fortunately the kernel has a
data structure intended to address this very problem, the maple
tree. Provide an initial implementation of a register cache
based on the maple tree to start taking advantage of it.

The entries stored in the maple tree are arrays of register
values, with the maple tree keys holding the register addresses.
We store data in host native format rather than device native
format as we do for rbtree. This will be a benefit for devices
where we don't marshal data within regmap and simplifies the code,
but will result in additional CPU overhead when syncing the cache
on devices where we do marshal data in regmap.
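As an illustration of the layout described above, here is a hypothetical
userspace sketch (plain C, no maple tree involved; all names are made up
for illustration): each cached entry covers a contiguous register range
[index, last] and a read is a single array index off the block base,
mirroring the entry[reg - mas.index] access in the code below.

```c
#include <assert.h>
#include <stddef.h>

/* One cached block: a contiguous register range and its values. */
struct cache_block {
	unsigned long index;	/* first register covered */
	unsigned long last;	/* last register covered */
	unsigned long *vals;	/* vals[reg - index] holds reg's value */
};

/* Return 0 and fill *value on a hit, -1 on a cache miss. */
static int block_read(const struct cache_block *blk, size_t nblk,
		      unsigned long reg, unsigned long *value)
{
	size_t i;

	for (i = 0; i < nblk; i++) {
		if (reg >= blk[i].index && reg <= blk[i].last) {
			*value = blk[i].vals[reg - blk[i].index];
			return 0;
		}
	}
	return -1;
}
```

In the real cache the linear scan is replaced by the maple tree walk,
which returns the block and its bounds directly.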

This should work well for a lot of devices, though there are some
additional areas that could be looked at, such as caching the
last accessed entry as we do for rbtree and trying to minimise
the maple tree level locking. We should also use bulk writes
rather than single register writes when resyncing the cache where
possible, even if we don't store in device native format.

Very small register maps may continue to do better with rbtree
longer term.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
drivers/base/regmap/Makefile | 2 +-
drivers/base/regmap/internal.h | 1 +
drivers/base/regmap/regcache-maple.c | 278 +++++++++++++++++++++++++++++++++++
drivers/base/regmap/regcache.c | 1 +
drivers/base/regmap/regmap-kunit.c | 3 +
include/linux/regmap.h | 1 +
6 files changed, 285 insertions(+), 1 deletion(-)

diff --git a/drivers/base/regmap/Makefile b/drivers/base/regmap/Makefile
index 4cb73468a197..f6c6cb017200 100644
--- a/drivers/base/regmap/Makefile
+++ b/drivers/base/regmap/Makefile
@@ -3,7 +3,7 @@
CFLAGS_regmap.o := -I$(src)

obj-$(CONFIG_REGMAP) += regmap.o regcache.o
-obj-$(CONFIG_REGMAP) += regcache-rbtree.o regcache-flat.o
+obj-$(CONFIG_REGMAP) += regcache-rbtree.o regcache-flat.o regcache-maple.o
obj-$(CONFIG_DEBUG_FS) += regmap-debugfs.o
obj-$(CONFIG_REGMAP_KUNIT) += regmap-kunit.o
obj-$(CONFIG_REGMAP_AC97) += regmap-ac97.o
diff --git a/drivers/base/regmap/internal.h b/drivers/base/regmap/internal.h
index 7b9ef43bcea6..6361df6f553a 100644
--- a/drivers/base/regmap/internal.h
+++ b/drivers/base/regmap/internal.h
@@ -282,6 +282,7 @@ enum regmap_endian regmap_get_val_endian(struct device *dev,
const struct regmap_config *config);

extern struct regcache_ops regcache_rbtree_ops;
+extern struct regcache_ops regcache_maple_ops;
extern struct regcache_ops regcache_flat_ops;

static inline const char *regmap_name(const struct regmap *map)
diff --git a/drivers/base/regmap/regcache-maple.c b/drivers/base/regmap/regcache-maple.c
new file mode 100644
index 000000000000..497cc708d277
--- /dev/null
+++ b/drivers/base/regmap/regcache-maple.c
@@ -0,0 +1,278 @@
+// SPDX-License-Identifier: GPL-2.0
+//
+// Register cache access API - maple tree based cache
+//
+// Copyright 2023 Arm, Ltd
+//
+// Author: Mark Brown <broonie@kernel.org>
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/maple_tree.h>
+#include <linux/slab.h>
+
+#include "internal.h"
+
+static int regcache_maple_read(struct regmap *map,
+ unsigned int reg, unsigned int *value)
+{
+ struct maple_tree *mt = map->cache;
+ MA_STATE(mas, mt, reg, reg);
+ unsigned long *entry;
+
+ rcu_read_lock();
+
+ entry = mas_find(&mas, reg);
+ if (!entry) {
+ rcu_read_unlock();
+ return -ENOENT;
+ }
+
+ *value = entry[reg - mas.index];
+
+ rcu_read_unlock();
+
+ return 0;
+}
+
+static int regcache_maple_write(struct regmap *map, unsigned int reg,
+ unsigned int val)
+{
+ struct maple_tree *mt = map->cache;
+ MA_STATE(mas, mt, reg, reg);
+ unsigned long *entry, *upper, *lower;
+ unsigned long index, last;
+ size_t lower_sz, upper_sz;
+ int ret;
+
+ rcu_read_lock();
+
+ entry = mas_find(&mas, reg);
+ if (entry) {
+ entry[reg - mas.index] = val;
+ rcu_read_unlock();
+ return 0;
+ }
+
+ /* Any adjacent entries to extend/merge? */
+ mas_set_range(&mas, reg - 1, reg + 1);
+ index = reg;
+ last = reg;
+
+ lower = mas_find(&mas, reg - 1);
+ if (lower) {
+ index = mas.index;
+ lower_sz = (mas.last - mas.index + 1) * sizeof(unsigned long);
+ }
+
+ upper = mas_find(&mas, reg + 1);
+ if (upper) {
+ last = mas.last;
+ upper_sz = (mas.last - mas.index + 1) * sizeof(unsigned long);
+ }
+
+ rcu_read_unlock();
+
+ entry = kmalloc((last - index + 1) * sizeof(unsigned long),
+ GFP_KERNEL);
+ if (!entry)
+ return -ENOMEM;
+
+ if (lower)
+ memcpy(entry, lower, lower_sz);
+ entry[reg - index] = val;
+ if (upper)
+ memcpy(&entry[reg - index + 1], upper, upper_sz);
+
+ /*
+ * This is safe because the regmap lock means the Maple lock
+ * is redundant, but we need to take it due to lockdep asserts
+ * in the maple tree code.
+ */
+ mas_lock(&mas);
+
+ mas_set_range(&mas, index, last);
+ ret = mas_store_gfp(&mas, entry, GFP_KERNEL);
+
+ mas_unlock(&mas);
+
+ if (ret == 0) {
+ kfree(lower);
+ kfree(upper);
+ }
+
+ return ret;
+}
+
+static int regcache_maple_drop(struct regmap *map, unsigned int min,
+ unsigned int max)
+{
+ struct maple_tree *mt = map->cache;
+ MA_STATE(mas, mt, min, max);
+ unsigned long *entry, *lower, *upper;
+ unsigned long lower_index, lower_last;
+ unsigned long upper_index, upper_last;
+ int ret = 0;
+
+ lower = NULL;
+ upper = NULL;
+
+ mas_lock(&mas);
+
+ mas_for_each(&mas, entry, max) {
+ /*
+ * This is safe because the regmap lock means the
+ * Maple lock is redundant, but we need to take it due
+ * to lockdep asserts in the maple tree code.
+ */
+ mas_unlock(&mas);
+
+ /* Do we need to save any of this entry? */
+ if (mas.index < min) {
+ lower_index = mas.index;
+ lower_last = min - 1;
+
+ lower = kmemdup(entry, ((min - mas.index) *
+ sizeof(unsigned long)),
+ GFP_KERNEL);
+ if (!lower) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ }
+
+ if (mas.last > max) {
+ upper_index = max + 1;
+ upper_last = mas.last;
+
+ upper = kmemdup(&entry[max + 1],
+ ((mas.last - max) *
+ sizeof(unsigned long)),
+ GFP_KERNEL);
+ if (!upper) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ }
+
+ kfree(entry);
+ mas_lock(&mas);
+ mas_erase(&mas);
+
+ /* Insert new nodes with the saved data */
+ if (lower) {
+ mas_set_range(&mas, lower_index, lower_last);
+ ret = mas_store_gfp(&mas, lower, GFP_KERNEL);
+ if (ret != 0)
+ goto out;
+ lower = NULL;
+ }
+
+ if (upper) {
+ mas_set_range(&mas, upper_index, upper_last);
+ ret = mas_store_gfp(&mas, upper, GFP_KERNEL);
+ if (ret != 0)
+ goto out;
+ upper = NULL;
+ }
+ }
+
+out:
+ mas_unlock(&mas);
+ kfree(lower);
+ kfree(upper);
+
+ return ret;
+}
+
+static int regcache_maple_sync(struct regmap *map, unsigned int min,
+ unsigned int max)
+{
+ struct maple_tree *mt = map->cache;
+ unsigned long *entry;
+ MA_STATE(mas, mt, min, max);
+ unsigned long lmin = min;
+ unsigned long lmax = max;
+ unsigned int r;
+ int ret = 0;
+
+ map->cache_bypass = true;
+
+ rcu_read_lock();
+
+ mas_for_each(&mas, entry, max) {
+ for (r = max(mas.index, lmin); r <= min(mas.last, lmax); r++) {
+ ret = regcache_sync_val(map, r, entry[r - mas.index]);
+ if (ret != 0)
+ goto out;
+ }
+ }
+
+out:
+ rcu_read_unlock();
+
+ map->cache_bypass = false;
+
+ return ret;
+}
+
+static int regcache_maple_exit(struct regmap *map)
+{
+ struct maple_tree *mt = map->cache;
+ MA_STATE(mas, mt, 0, UINT_MAX);
+ unsigned int *entry;
+
+ /* if we've already been called then just return */
+ if (!mt)
+ return 0;
+
+ mas_lock(&mas);
+ mas_for_each(&mas, entry, UINT_MAX)
+ kfree(entry);
+ __mt_destroy(mt);
+ mas_unlock(&mas);
+
+ kfree(mt);
+ map->cache = NULL;
+
+ return 0;
+}
+
+static int regcache_maple_init(struct regmap *map)
+{
+ struct maple_tree *mt;
+ int i;
+ int ret;
+
+ mt = kmalloc(sizeof(*mt), GFP_KERNEL);
+ if (!mt)
+ return -ENOMEM;
+ map->cache = mt;
+
+ mt_init(mt);
+
+ for (i = 0; i < map->num_reg_defaults; i++) {
+ ret = regcache_maple_write(map,
+ map->reg_defaults[i].reg,
+ map->reg_defaults[i].def);
+ if (ret)
+ goto err;
+ }
+
+ return 0;
+
+err:
+ regcache_maple_exit(map);
+ return ret;
+}
+
+struct regcache_ops regcache_maple_ops = {
+ .type = REGCACHE_MAPLE,
+ .name = "maple",
+ .init = regcache_maple_init,
+ .exit = regcache_maple_exit,
+ .read = regcache_maple_read,
+ .write = regcache_maple_write,
+ .drop = regcache_maple_drop,
+ .sync = regcache_maple_sync,
+};
diff --git a/drivers/base/regmap/regcache.c b/drivers/base/regmap/regcache.c
index e5d6b535c002..0b47721089e6 100644
--- a/drivers/base/regmap/regcache.c
+++ b/drivers/base/regmap/regcache.c
@@ -17,6 +17,7 @@

static const struct regcache_ops *cache_types[] = {
&regcache_rbtree_ops,
+ &regcache_maple_ops,
&regcache_flat_ops,
};

diff --git a/drivers/base/regmap/regmap-kunit.c b/drivers/base/regmap/regmap-kunit.c
index 6f2bfa4650fe..3486bf9e28b8 100644
--- a/drivers/base/regmap/regmap-kunit.c
+++ b/drivers/base/regmap/regmap-kunit.c
@@ -29,6 +29,7 @@ static const struct regcache_types regcache_types_list[] = {
{ REGCACHE_NONE, "none" },
{ REGCACHE_FLAT, "flat" },
{ REGCACHE_RBTREE, "rbtree" },
+ { REGCACHE_MAPLE, "maple" },
};

KUNIT_ARRAY_PARAM(regcache_types, regcache_types_list, case_to_desc);
@@ -36,12 +37,14 @@ KUNIT_ARRAY_PARAM(regcache_types, regcache_types_list, case_to_desc);
static const struct regcache_types real_cache_types_list[] = {
{ REGCACHE_FLAT, "flat" },
{ REGCACHE_RBTREE, "rbtree" },
+ { REGCACHE_MAPLE, "maple" },
};

KUNIT_ARRAY_PARAM(real_cache_types, real_cache_types_list, case_to_desc);

static const struct regcache_types sparse_cache_types_list[] = {
{ REGCACHE_RBTREE, "rbtree" },
+ { REGCACHE_MAPLE, "maple" },
};

KUNIT_ARRAY_PARAM(sparse_cache_types, sparse_cache_types_list, case_to_desc);
diff --git a/include/linux/regmap.h b/include/linux/regmap.h
index 24fc4a9ed1f9..11b360da199d 100644
--- a/include/linux/regmap.h
+++ b/include/linux/regmap.h
@@ -51,6 +51,7 @@ enum regcache_type {
REGCACHE_NONE,
REGCACHE_RBTREE,
REGCACHE_FLAT,
+ REGCACHE_MAPLE,
};

/**

--
2.34.1
Re: [PATCH v3 2/2] regmap: Add maple tree based register cache
* Mark Brown <broonie@kernel.org> [230329 20:10]:
> The current state of the art for sparse register maps is the
> rbtree cache. This works well for most applications but isn't
> always ideal for sparser register maps since the rbtree can get
> deep, requiring a lot of walking. Fortunately the kernel has a
> data structure intended to address this very problem, the maple
> tree. Provide an initial implementation of a register cache
> based on the maple tree to start taking advantage of it.
>
> The entries stored in the maple tree are arrays of register
> values, with the maple tree keys holding the register addresses.

Why not store the register-to-value mappings in the maple tree without
the array? From reading the code below, the maple tree will hold ranges
(based on registers) pointing to an array which stores the value at
the register offset. Could we just store the value in the maple tree
directly?

this:
mt -> value
Instead of:
mt -> array -> value


> We store data in host native format rather than device native
> format as we do for rbtree, this will be a benefit for devices
> where we don't marshal data within regmap and simplifies the code
> but will result in additional CPU overhead when syncing the cache
> on devices where we do marshal data in regmap.
>
> This should work well for a lot of devices, though there's some
> additional areas that could be looked at such as caching the
> last accessed entry like we do for rbtree and trying to minimise
> the maple tree level locking.

In the case of the VMAs, we had a vmacache, which was removed when the
maple tree was added since it wasn't providing any benefit. Any speed
increase was lost to cache misses and the cost of updating the cache.
I don't know your usecase or if it would result in the same outcome
here, but I thought I'd share what happened in the VMA space.

> We should also use bulk writes
> rather than single register writes when resyncing the cache where
> possible, even if we don't store in device native format.
>
> Very small register maps may continue to do better with rbtree
> longer term.
>
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
> drivers/base/regmap/Makefile | 2 +-
> drivers/base/regmap/internal.h | 1 +
> drivers/base/regmap/regcache-maple.c | 278 +++++++++++++++++++++++++++++++++++
> drivers/base/regmap/regcache.c | 1 +
> drivers/base/regmap/regmap-kunit.c | 3 +
> include/linux/regmap.h | 1 +
> 6 files changed, 285 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/base/regmap/Makefile b/drivers/base/regmap/Makefile
> index 4cb73468a197..f6c6cb017200 100644
> --- a/drivers/base/regmap/Makefile
> +++ b/drivers/base/regmap/Makefile
> @@ -3,7 +3,7 @@
> CFLAGS_regmap.o := -I$(src)
>
> obj-$(CONFIG_REGMAP) += regmap.o regcache.o
> -obj-$(CONFIG_REGMAP) += regcache-rbtree.o regcache-flat.o
> +obj-$(CONFIG_REGMAP) += regcache-rbtree.o regcache-flat.o regcache-maple.o
> obj-$(CONFIG_DEBUG_FS) += regmap-debugfs.o
> obj-$(CONFIG_REGMAP_KUNIT) += regmap-kunit.o
> obj-$(CONFIG_REGMAP_AC97) += regmap-ac97.o
> diff --git a/drivers/base/regmap/internal.h b/drivers/base/regmap/internal.h
> index 7b9ef43bcea6..6361df6f553a 100644
> --- a/drivers/base/regmap/internal.h
> +++ b/drivers/base/regmap/internal.h
> @@ -282,6 +282,7 @@ enum regmap_endian regmap_get_val_endian(struct device *dev,
> const struct regmap_config *config);
>
> extern struct regcache_ops regcache_rbtree_ops;
> +extern struct regcache_ops regcache_maple_ops;
> extern struct regcache_ops regcache_flat_ops;
>
> static inline const char *regmap_name(const struct regmap *map)
> diff --git a/drivers/base/regmap/regcache-maple.c b/drivers/base/regmap/regcache-maple.c
> new file mode 100644
> index 000000000000..497cc708d277
> --- /dev/null
> +++ b/drivers/base/regmap/regcache-maple.c
> @@ -0,0 +1,278 @@
> +// SPDX-License-Identifier: GPL-2.0
> +//
> +// Register cache access API - maple tree based cache
> +//
> +// Copyright 2023 Arm, Ltd
> +//
> +// Author: Mark Brown <broonie@kernel.org>
> +
> +#include <linux/debugfs.h>
> +#include <linux/device.h>
> +#include <linux/maple_tree.h>
> +#include <linux/slab.h>
> +
> +#include "internal.h"
> +
> +static int regcache_maple_read(struct regmap *map,
> + unsigned int reg, unsigned int *value)
> +{
> + struct maple_tree *mt = map->cache;
> + MA_STATE(mas, mt, reg, reg);
> + unsigned long *entry;
> +
> + rcu_read_lock();
> +
> + entry = mas_find(&mas, reg);

mas_walk() might be a better interface for this.

> + if (!entry) {
> + rcu_read_unlock();
> + return -ENOENT;
> + }
> +
> + *value = entry[reg - mas.index];
> +
> + rcu_read_unlock();
> +
> + return 0;
> +}
> +
> +static int regcache_maple_write(struct regmap *map, unsigned int reg,
> + unsigned int val)
> +{
> + struct maple_tree *mt = map->cache;
> + MA_STATE(mas, mt, reg, reg);
> + unsigned long *entry, *upper, *lower;
> + unsigned long index, last;
> + size_t lower_sz, upper_sz;
> + int ret;
> +
> + rcu_read_lock();
> +
> + entry = mas_find(&mas, reg);

I think you want mas_walk(&mas) here as well?

> + if (entry) {
> + entry[reg - mas.index] = val;
> + rcu_read_unlock();
> + return 0;
> + }
> +
> + /* Any adjacent entries to extend/merge? */
> + mas_set_range(&mas, reg - 1, reg + 1);
> + index = reg;
> + last = reg;
> +
> + lower = mas_find(&mas, reg - 1);

If you just want to check the previous, you can use:
mas_prev(&mas, reg - 1);
This will try the previous entry without rewalking from the top of the
tree and you don't need to mas_set_range() call.

> + if (lower) {
> + index = mas.index;
> + lower_sz = (mas.last - mas.index + 1) * sizeof(unsigned long);
> + }
> +
> + upper = mas_find(&mas, reg + 1);

mas_next(&mas, reg + 1) might also be better here.

> + if (upper) {
> + last = mas.last;
> + upper_sz = (mas.last - mas.index + 1) * sizeof(unsigned long);
> + }
> +
> + rcu_read_unlock();
> +
> + entry = kmalloc((last - index + 1) * sizeof(unsigned long),
> + GFP_KERNEL);
> + if (!entry)
> + return -ENOMEM;
> +
> + if (lower)
> + memcpy(entry, lower, lower_sz);
> + entry[reg - index] = val;
> + if (upper)
> + memcpy(&entry[reg - index + 1], upper, upper_sz);
> +
> + /*
> + * This is safe because the regmap lock means the Maple lock
> + * is redundant, but we need to take it due to lockdep asserts
> + * in the maple tree code.
> + */
> + mas_lock(&mas);
> +
> + mas_set_range(&mas, index, last);
> + ret = mas_store_gfp(&mas, entry, GFP_KERNEL);

You can avoid this walk as well by changing the order of the code
before:

mas_walk(&mas, reg);
if entry... return
mas_next(&mas, reg + 1);
...
mas_prev(&mas, reg - 1);
...

This should now be pointing at the location mas_store_gfp() expects:
mas.last = last;
ret = mas_store_gfp()

> +
> + mas_unlock(&mas);
> +
> + if (ret == 0) {
> + kfree(lower);
> + kfree(upper);
> + }
> +
> + return ret;
> +}
> +
> +static int regcache_maple_drop(struct regmap *map, unsigned int min,
> + unsigned int max)
> +{
> + struct maple_tree *mt = map->cache;
> + MA_STATE(mas, mt, min, max);
> + unsigned long *entry, *lower, *upper;
> + unsigned long lower_index, lower_last;
> + unsigned long upper_index, upper_last;
> + int ret;
> +
> + lower = NULL;
> + upper = NULL;
> +
> + mas_lock(&mas);
> +
> + mas_for_each(&mas, entry, max) {
> + /*
> + * This is safe because the regmap lock means the
> + * Maple lock is redundant, but we need to take it due
> + * to lockdep asserts in the maple tree code.
> + */
> + mas_unlock(&mas);
> +
> + /* Do we need to save any of this entry? */
> + if (mas.index < min) {
> + lower_index = mas.index;
> + lower_last = min -1;
> +
> + lower = kmemdup(entry, ((min - mas.index) *
> + sizeof(unsigned long)),
> + GFP_KERNEL);
> + if (!lower) {
> + ret = -ENOMEM;
> + goto out;

With the above unlock, you will call mas_unlock() twice here.

> + }
> + }
> +
> + if (mas.last > max) {
> + upper_index = max + 1;
> + upper_last = mas.last;
> +
> + upper = kmemdup(&entry[max + 1],
> + ((mas.last - max) *
> + sizeof(unsigned long)),
> + GFP_KERNEL);
> + if (!upper) {
> + ret = -ENOMEM;
> + goto out;

Again, the double unlock here.

> + }
> + }
> +
> + kfree(entry);
> + mas_lock(&mas);
> + mas_erase(&mas);
> +
> + /* Insert new nodes with the saved data */
> + if (lower) {
> + mas_set_range(&mas, lower_index, lower_last);
> + ret = mas_store_gfp(&mas, lower, GFP_KERNEL);
> + if (ret != 0)
> + goto out;
> + lower = NULL;
> + }
> +
> + if (upper) {
> + mas_set_range(&mas, upper_index, upper_last);
> + ret = mas_store_gfp(&mas, upper, GFP_KERNEL);
> + if (ret != 0)
> + goto out;
> + upper = NULL;
> + }
> + }
> +
> +out:
> + mas_unlock(&mas);
> + kfree(lower);
> + kfree(upper);
> +
> + return ret;
> +}
> +
> +static int regcache_maple_sync(struct regmap *map, unsigned int min,
> + unsigned int max)
> +{
> + struct maple_tree *mt = map->cache;
> + unsigned long *entry;
> + MA_STATE(mas, mt, min, max);
> + unsigned long lmin = min;
> + unsigned long lmax = max;
> + unsigned int r;
> + int ret;
> +
> + map->cache_bypass = true;
> +
> + rcu_read_lock();
> +
> + mas_for_each(&mas, entry, max) {
> + for (r = max(mas.index, lmin); r <= min(mas.last, lmax); r++) {
> + ret = regcache_sync_val(map, r, entry[r - mas.index]);
> + if (ret != 0)
> + goto out;
> + }
> + }
> +
> +out:
> + rcu_read_unlock();
> +
> + map->cache_bypass = false;
> +
> + return ret;
> +}
> +
> +static int regcache_maple_exit(struct regmap *map)
> +{
> + struct maple_tree *mt = map->cache;
> + MA_STATE(mas, mt, 0, UINT_MAX);
> + unsigned int *entry;;
> +
> + /* if we've already been called then just return */
> + if (!mt)
> + return 0;
> +
> + mas_lock(&mas);
> + mas_for_each(&mas, entry, UINT_MAX)
> + kfree(entry);
> + __mt_destroy(mt);
> + mas_unlock(&mas);
> +
> + kfree(mt);
> + map->cache = NULL;
> +
> + return 0;
> +}
> +
> +static int regcache_maple_init(struct regmap *map)
> +{
> + struct maple_tree *mt;
> + int i;
> + int ret;
> +
> + mt = kmalloc(sizeof(*mt), GFP_KERNEL);
> + if (!mt)
> + return -ENOMEM;
> + map->cache = mt;
> +
> + mt_init(mt);
> +
> + for (i = 0; i < map->num_reg_defaults; i++) {
> + ret = regcache_maple_write(map,
> + map->reg_defaults[i].reg,
> + map->reg_defaults[i].def);
> + if (ret)
> + goto err;
> + }
> +
> + return 0;
> +
> +err:
> + regcache_maple_exit(map);
> + return ret;
> +}
> +
> +struct regcache_ops regcache_maple_ops = {
> + .type = REGCACHE_MAPLE,
> + .name = "maple",
> + .init = regcache_maple_init,
> + .exit = regcache_maple_exit,
> + .read = regcache_maple_read,
> + .write = regcache_maple_write,
> + .drop = regcache_maple_drop,
> + .sync = regcache_maple_sync,
> +};
> diff --git a/drivers/base/regmap/regcache.c b/drivers/base/regmap/regcache.c
> index e5d6b535c002..0b47721089e6 100644
> --- a/drivers/base/regmap/regcache.c
> +++ b/drivers/base/regmap/regcache.c
> @@ -17,6 +17,7 @@
>
> static const struct regcache_ops *cache_types[] = {
> &regcache_rbtree_ops,
> + &regcache_maple_ops,
> &regcache_flat_ops,
> };
>
> diff --git a/drivers/base/regmap/regmap-kunit.c b/drivers/base/regmap/regmap-kunit.c
> index 6f2bfa4650fe..3486bf9e28b8 100644
> --- a/drivers/base/regmap/regmap-kunit.c
> +++ b/drivers/base/regmap/regmap-kunit.c
> @@ -29,6 +29,7 @@ static const struct regcache_types regcache_types_list[] = {
> { REGCACHE_NONE, "none" },
> { REGCACHE_FLAT, "flat" },
> { REGCACHE_RBTREE, "rbtree" },
> + { REGCACHE_MAPLE, "maple" },
> };
>
> KUNIT_ARRAY_PARAM(regcache_types, regcache_types_list, case_to_desc);
> @@ -36,12 +37,14 @@ KUNIT_ARRAY_PARAM(regcache_types, regcache_types_list, case_to_desc);
> static const struct regcache_types real_cache_types_list[] = {
> { REGCACHE_FLAT, "flat" },
> { REGCACHE_RBTREE, "rbtree" },
> + { REGCACHE_MAPLE, "maple" },
> };
>
> KUNIT_ARRAY_PARAM(real_cache_types, real_cache_types_list, case_to_desc);
>
> static const struct regcache_types sparse_cache_types_list[] = {
> { REGCACHE_RBTREE, "rbtree" },
> + { REGCACHE_MAPLE, "maple" },
> };
>
> KUNIT_ARRAY_PARAM(sparse_cache_types, sparse_cache_types_list, case_to_desc);
> diff --git a/include/linux/regmap.h b/include/linux/regmap.h
> index 24fc4a9ed1f9..11b360da199d 100644
> --- a/include/linux/regmap.h
> +++ b/include/linux/regmap.h
> @@ -51,6 +51,7 @@ enum regcache_type {
> REGCACHE_NONE,
> REGCACHE_RBTREE,
> REGCACHE_FLAT,
> + REGCACHE_MAPLE,
> };
>
> /**
>
> --
> 2.34.1
>
>
Re: [PATCH v3 2/2] regmap: Add maple tree based register cache
On Mon, Apr 03, 2023 at 11:45:08AM -0400, Liam R. Howlett wrote:
> * Mark Brown <broonie@kernel.org> [230329 20:10]:

> > The entries stored in the maple tree are arrays of register
> > values, with the maple tree keys holding the register addresses.

> Why not store the registers to values in the maple tree without the
> array? From reading the below code, the maple tree will hold a ranges
> (based on registers) pointing to an array which will store the value at
> the register offset. Could we just store the value in the maple tree
> directly?

AFAICT that means that we can't readily get the values back out en masse
to do bulk operations on them without doing a bunch of work to check for
adjacency and then doing some intermediate marshalling; with cache sync,
block operations are a noticeable win. I'm *hopeful* this might end up
working out fast enough to make the cache more viable on faster buses.
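To illustrate the extra work being described, here is a hypothetical
userspace sketch (plain C, no maple tree code; the struct and helper are
made up): if the tree held one value per register, a sync wanting bulk
writes would first have to walk the entries and coalesce adjacent
registers into runs, whereas with the value arrays each tree entry
already is such a run.

```c
#include <assert.h>
#include <stddef.h>

/* One register/value pair, as it would look without the value arrays. */
struct reg_val {
	unsigned long reg;
	unsigned long val;
};

/*
 * Count the contiguous runs in a sorted list of single-register
 * entries; each run would become one bulk write.
 */
static size_t count_runs(const struct reg_val *rv, size_t n)
{
	size_t i, runs = 0;

	for (i = 0; i < n; i++)
		if (i == 0 || rv[i].reg != rv[i - 1].reg + 1)
			runs++;
	return runs;
}
```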

> > This should work well for a lot of devices, though there's some
> > additional areas that could be looked at such as caching the
> > last accessed entry like we do for rbtree and trying to minimise
> > the maple tree level locking.

> In the case of the VMAs, we had a vmacache, which was removed when the
> maple tree was added since it wasn't providing any benefit. We lost any
> speed increase to cache misses and updating the cache. I don't know
> your usecase or if it would result in the same outcome here, but I
> thought I'd share what happened in the VMA space.

Yeah, I'm hopeful that the maple tree is fast enough that it's not worth
it. The main use case is read/modify/write sequences where you hit the
same register twice in quick succession.
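For what it's worth, the last-accessed-entry idea mentioned in the
commit message can be sketched in userspace C as a hot pointer checked
before the full lookup; everything here (struct names, the walk counter)
is hypothetical illustration, not regmap code.

```c
#include <assert.h>
#include <stddef.h>

struct cblock {
	unsigned long index, last;
	unsigned long *vals;
};

struct cache {
	struct cblock *blocks;
	size_t nblocks;
	struct cblock *hot;	/* last block hit, checked before walking */
	unsigned long walks;	/* full lookups performed, for illustration */
};

static struct cblock *cache_lookup(struct cache *c, unsigned long reg)
{
	size_t i;

	/* Fast path: a read/modify/write hits the same block twice. */
	if (c->hot && reg >= c->hot->index && reg <= c->hot->last)
		return c->hot;

	c->walks++;
	for (i = 0; i < c->nblocks; i++) {
		if (reg >= c->blocks[i].index && reg <= c->blocks[i].last) {
			c->hot = &c->blocks[i];
			return c->hot;
		}
	}
	return NULL;
}
```

Whether this is a win over a plain maple tree walk is exactly the open
question; the vmacache experience suggests it may not be.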

> > + rcu_read_lock();
> > +
> > + entry = mas_find(&mas, reg);

> mas_walk() might be a better interface for this.

Ah, that's not very discoverable. mas_find() should possibly be called
mas_find_pausable() or something?

> > + /* Any adjacent entries to extend/merge? */
> > + mas_set_range(&mas, reg - 1, reg + 1);
> > + index = reg;
> > + last = reg;
> > +
> > + lower = mas_find(&mas, reg - 1);

> If you just want to check the previous, you can use:
> mas_prev(&mas, reg - 1);
> This will try the previous entry without rewalking from the top of the
> tree and you don't need to mas_set_range() call.

Hrm, right - it looks like that doesn't actually apply the constraints
so that'd work. The way constraints are specified for some operations
in the mas is a bit confusing.

> > +
> > + mas_set_range(&mas, index, last);
> > + ret = mas_store_gfp(&mas, entry, GFP_KERNEL);

> You can avoid this walk as well by changing the order of the code
> before:

> mas_walk(&mas, reg);
> if entry... return
> mas_next(&mas, reg + 1);
> ...
> mas_prev(&mas, reg - 1);
> ...

> This should now be pointing at the location mas_store_gfp() expects:
> mas.last = last;
> ret = mas_store_gfp()

Don't we need to set mas.index as well? It does feel a bit wrong to be
just writing into the mas struct.
Re: [PATCH v3 2/2] regmap: Add maple tree based register cache
On Mon, Apr 03, 2023 at 11:45:08AM -0400, Liam R. Howlett wrote:
> * Mark Brown <broonie@kernel.org> [230329 20:10]:

> > + /* Any adjacent entries to extend/merge? */
> > + mas_set_range(&mas, reg - 1, reg + 1);
> > + index = reg;
> > + last = reg;
> > +
> > + lower = mas_find(&mas, reg - 1);

> If you just want to check the previous, you can use:
> mas_prev(&mas, reg - 1);
> This will try the previous entry without rewalking from the top of the
> tree and you don't need to mas_set_range() call.

I did give this a spin but it doesn't seem to do what I'd expect for
indexes 0 and 1: in a register map with an entry at index 0, if we
attempt to insert index 1, mas_prev() doesn't find the existing entry
for index 0, so we don't attempt to combine them. It seems to do the
right thing for non-zero indexes; I'll have a poke at some point.
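One thing worth ruling out for the low-index case is unsigned
wraparound: reg is unsigned, so reg - 1 wraps to the type's maximum
rather than going negative at reg == 0, and a range like
[reg - 1, reg + 1] isn't what it looks like there. A minimal standalone
demonstration (the prev_reg() helper is hypothetical):

```c
#include <assert.h>
#include <limits.h>

/*
 * With unsigned arithmetic, reg - 1 at reg == 0 wraps to ULONG_MAX,
 * so a "previous register" computed this way must be guarded.
 */
static unsigned long prev_reg(unsigned long reg)
{
	return reg ? reg - 1 : 0;	/* clamp instead of wrapping */
}
```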

> > + /*
> > + * This is safe because the regmap lock means the Maple lock
> > + * is redundant, but we need to take it due to lockdep asserts
> > + * in the maple tree code.
> > + */
> > + mas_lock(&mas);

> > + mas_set_range(&mas, index, last);
> > + ret = mas_store_gfp(&mas, entry, GFP_KERNEL);

> You can avoid this walk as well by changing the order of the code
> before:

> mas_walk(&mas, reg);
> if entry... return
> mas_next(&mas, reg + 1);
> ...
> mas_prev(&mas, reg - 1);
> ...

> This should now be pointing at the location mas_store_gfp() expects:
> mas.last = last;
> ret = mas_store_gfp()

This appears to be triggering data corruption for me in the cache drop
test; again, I'll have a poke at some point. We seem to be getting
rubbish for the upper data block, though only in the drop test which
means it's possibly an interaction with how the tree is affected by
dropping the middle of a block. Might well be something stupid I'm
doing either here or in the drop function.
Re: [PATCH v3 2/2] regmap: Add maple tree based register cache
* Mark Brown <broonie@kernel.org> [230403 12:58]:
> On Mon, Apr 03, 2023 at 11:45:08AM -0400, Liam R. Howlett wrote:
> > * Mark Brown <broonie@kernel.org> [230329 20:10]:
>
> > > The entries stored in the maple tree are arrays of register
> > > values, with the maple tree keys holding the register addresses.
>
> > Why not store the registers to values in the maple tree without the
> > array? From reading the below code, the maple tree will hold a ranges
> > (based on registers) pointing to an array which will store the value at
> > the register offset. Could we just store the value in the maple tree
> > directly?
>
> AFAICT that means that we can't readily get the values back out en masse
> to do bulk operations on them without doing a bunch of work to check for
> adjacency and then doing some intermediate marshalling, with cache sync
> block operations are a noticable win. I'm *hopeful* this might end up
> working out fast enough to make the cache more viable on faster buses.
>
> > > This should work well for a lot of devices, though there's some
> > > additional areas that could be looked at such as caching the
> > > last accessed entry like we do for rbtree and trying to minimise
> > > the maple tree level locking.
>
> > In the case of the VMAs, we had a vmacache, which was removed when the
> > maple tree was added since it wasn't providing any benefit. We lost any
> > speed increase to cache misses and updating the cache. I don't know
> > your usecase or if it would result in the same outcome here, but I
> > thought I'd share what happened in the VMA space.
>
> Yeah, I'm hopeful that the maple tree is fast enough that it's not worth
> it. The main use case is read/modify/write sequences where you hit the
> same register twice in quick succession.
>
> > > + rcu_read_lock();
> > > +
> > > + entry = mas_find(&mas, reg);
>
> > mas_walk() might be a better interface for this.
>
> Ah, that's not very discoverable. mas_find() should possibly be called
> mas_find_pausable() or something?

Well, it finds a value at reg or higher, within the limits you pass in.
It was designed for the VMA code where there was a find() that did just
this (but without limits so you actually had to check once it returned).

>
> > > + /* Any adjacent entries to extend/merge? */
> > > + mas_set_range(&mas, reg - 1, reg + 1);
> > > + index = reg;
> > > + last = reg;
> > > +
> > > + lower = mas_find(&mas, reg - 1);
>
> > If you just want to check the previous, you can use:
> > mas_prev(&mas, reg - 1);
> > This will try the previous entry without rewalking from the top of the
> > tree and you don't need to mas_set_range() call.
>
> Hrm, right - it looks like that doesn't actually apply the constraints
> so that'd work. The whole specifying constraints for some operations in
> the mas is a bit confusing.
>
> > > +
> > > + mas_set_range(&mas, index, last);
> > > + ret = mas_store_gfp(&mas, entry, GFP_KERNEL);
>
> > You can avoid this walk as well by changing the order of the code
> > before:
>
> > mas_walk(&mas, reg);
> > if entry... return
> > mas_next(&mas, reg + 1);
> > ...
> > mas_prev(&mas, reg - 1);
> > ...
>
> > This should now be pointing at the location mas_store_gfp() expects:
> > mas.last = last;
> > ret = mas_store_gfp()
>
> Don't we need to set mas.index as well? It does feel a bit wrong to be
> just writing into the mas struct.

Thinking about this more, it might be safer to set mas.index when
there isn't a previous entry - perhaps by using mas_set_range() in
that case.

Perhaps the interface needs to be expanded for setting mas.last. The write
path should be safe for changing where the write ends. I've tried to
avoid re-walking the tree where possible.

What you have will work. If you need more optimisations later, then we
can have another look.
Re: [PATCH v3 2/2] regmap: Add maple tree based register cache
On Mon, Apr 03, 2023 at 02:26:44PM -0400, Liam R. Howlett wrote:

> What you have will work. If you need more optimisations later, then we
> can have another look.

Yeah, what's there tests out OK. In terms of optimisation the baseline
requirement is that we have to be able to pull values out faster than
we'd be able to read them from the device, so until we're working with
MMIO devices (which people do use regmap with sometimes, but which
aren't the common case) we can typically burn a *lot* of CPU cycles
and still be
doing just fine. The hot spots are basic read/write of an existing
cached register, then sync and initial load in of cache defaults.