Subject: Re: [PATCH v15 5/5] remoteproc: Add initial zynqmp R5 remoteproc driver
Hey Ben,

Thanks for sending out the new series; this patchset is functional for
booting both R5 0 and R5 1 in split mode.

A few comments below; I'm still working my way through the rest of the
code, though, now that this works.

On Mon, Sep 21, 2020 at 09:14:06AM -0700, Ben Levinsky wrote:
<...>
> +static int zynqmp_r5_remoteproc_probe(struct platform_device *pdev)
> +{
> +	int ret, i = 0;
> +	struct device *dev = &pdev->dev;
> +	struct device_node *nc;
> +
> +	rpu_mode = of_get_property(dev->of_node, "lockstep-mode", NULL) ?
> +			PM_RPU_MODE_LOCKSTEP : PM_RPU_MODE_SPLIT;

Extra whitespace, and of_property_read_bool would read a bit nicer here
(does the same thing in the end, though).

Since rpu_mode is only used here and in r5_set_mode, I think it'd be
better to plumb it through zynqmp_r5_probe instead of making it global
in this file.
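Something like this (untested sketch; I'm assuming the enum
rpu_oper_mode type from the firmware header, and zynqmp_r5_probe
growing a mode parameter):

	bool lockstep = of_property_read_bool(dev->of_node, "lockstep-mode");
	enum rpu_oper_mode rpu_mode = lockstep ? PM_RPU_MODE_LOCKSTEP :
						 PM_RPU_MODE_SPLIT;
	...
	ret = zynqmp_r5_probe(&rpus[i], pdev, nc, rpu_mode);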

> +
> +	dev_dbg(dev, "RPU configuration: %s\n",
> +		rpu_mode == PM_RPU_MODE_LOCKSTEP ? "lockstep" : "split");
> +
> +	for_each_available_child_of_node(dev->of_node, nc) {
> +		/*
> +		 * if 2 RPUs provided but one is lockstep, then we have an
> +		 * invalid configuration.
> +		 */
> +		if (i > 0 && rpu_mode == PM_RPU_MODE_LOCKSTEP)
> +			return -EINVAL;
> +
> +		/* only call zynqmp_r5_probe if proper # of rpu's */
> +		ret = (i < MAX_RPROCS) ? zynqmp_r5_probe(&rpus[i], pdev, nc) :
> +					 -EINVAL;
> +		dev_dbg(dev, "%s to probe rpu %pOF\n",
> +			ret ? "Failed" : "Able",
> +			nc);

It'd be cleaner to check the child node count before the loop:

rpu_nodes = of_get_available_child_count(dev->of_node);
if ((rpu_mode == PM_RPU_MODE_LOCKSTEP && rpu_nodes != 1) || rpu_nodes > 2)
	return -EINVAL;

> +
> +		if (ret)
> +			return ret;
> +
> +		i++;
> +	}
> +
> +	return 0;
> +}
> +
> +static int zynqmp_r5_remoteproc_remove(struct platform_device *pdev)
> +{
> +	int i;
> +
> +	for (i = 0; i < MAX_RPROCS; i++) {
> +		struct zynqmp_r5_pdata *pdata = &rpus[i];
> +		struct rproc *rproc;
> +
> +		/* only do clean up for pdata with active rpu */
> +		if (pdata->pnode_id == 0)
> +			continue;

This seems like a bit of a hack, resulting from the use of a static
array to hold the zynqmp_r5_pdata for each RPU.

Consider allocating zynqmp_r5_pdata in zynqmp_r5_probe and adding each
instance to a linked list at file scope (rough sketch below):
- memory is only allocated for RPUs actually in use
- no need for this pnode_id == 0 hack
- MAX_RPROCS can be eliminated; just traverse the list in remove
- no reuse of the pdata across probe/remove, so all of the
  conditionals below (e.g. "if (rproc)") and the NULL assignments can
  be eliminated
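Something along these lines (untested; error handling and the lifetime
of the embedded struct device are glossed over):

	/* in struct zynqmp_r5_pdata */
	struct list_head node;

	/* at file scope, replacing the static rpus[] array */
	static LIST_HEAD(rpu_list);

	/* zynqmp_r5_probe(): allocate per RPU and track it */
	pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
	if (!pdata)
		return -ENOMEM;
	...
	list_add_tail(&pdata->node, &rpu_list);

	/* zynqmp_r5_remoteproc_remove(): walk the list instead of rpus[] */
	struct zynqmp_r5_pdata *pdata, *tmp;

	list_for_each_entry_safe(pdata, tmp, &rpu_list, node) {
		rproc_del(pdata->rproc);
		rproc_free(pdata->rproc);
		mbox_free_channel(pdata->tx_chan);
		mbox_free_channel(pdata->rx_chan);
		list_del(&pdata->node);
		device_unregister(&pdata->dev);
	}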

> +
> +		rproc = pdata->rproc;
> +		if (rproc) {
> +			rproc_del(rproc);
> +			rproc_free(rproc);
> +			pdata->rproc = NULL;
> +		}
> +		if (pdata->tx_chan) {
> +			mbox_free_channel(pdata->tx_chan);
> +			pdata->tx_chan = NULL;
> +		}
> +		if (pdata->rx_chan) {
> +			mbox_free_channel(pdata->rx_chan);
> +			pdata->rx_chan = NULL;
> +		}
> +		if (&(&pdata->dev)->dma_pools)
> +			device_unregister(&pdata->dev);

The condition here looks very wrong to me: it takes the address of the
dma_pools member, so it will always evaluate to true. What is this
trying to achieve?

> +	}
> +
> +	return 0;
> +}
> +
> +/* Match table for OF platform binding */
> +static const struct of_device_id zynqmp_r5_remoteproc_match[] = {
> +	{ .compatible = "xlnx,zynqmp-r5-remoteproc-1.0", },
> +	{ /* end of list */ },
> +};
> +MODULE_DEVICE_TABLE(of, zynqmp_r5_remoteproc_match);
> +
> +static struct platform_driver zynqmp_r5_remoteproc_driver = {
> +	.probe = zynqmp_r5_remoteproc_probe,
> +	.remove = zynqmp_r5_remoteproc_remove,
> +	.driver = {
> +		.name = "zynqmp_r5_remoteproc",
> +		.of_match_table = zynqmp_r5_remoteproc_match,
> +	},
> +};
> +module_platform_driver(zynqmp_r5_remoteproc_driver);
> +
> +MODULE_AUTHOR("Ben Levinsky <ben.levinsky@xilinx.com>");
> +MODULE_LICENSE("GPL v2");
> --
> 2.17.1
>

Thanks,
Michael
