    Subject: RE: [PATCH v5 2/2] PCI: xilinx-cpm: Add Versal CPM Root Port driver
    > Subject: Re: [PATCH v5 2/2] PCI: xilinx-cpm: Add Versal CPM Root Port driver
    >
    > [+MarcZ, FYI]
    >
    > On Tue, Feb 25, 2020 at 02:39:56PM +0000, Bharat Kumar Gogada wrote:
    >
    > [...]
    >
    > > > > +/* ECAM definitions */
    > > > > +#define ECAM_BUS_NUM_SHIFT 20
    > > > > +#define ECAM_DEV_NUM_SHIFT 12
    > > >
    > > > You don't need these ECAM_* defines, you can use pci_generic_ecam_ops.
    > > Does this need a separate ranges region for the ECAM space?
    > > We have ECAM and controller space in the same region.
    >
    > You can create an ECAM window with pci_ecam_create where *cfgres
    > represents the ECAM area; I don't get what you mean by "same region".
    >
    > Do you mean "contiguous" ? Or something else ?
    Yes, contiguous; within the ECAM region, some space is used for controller registers.
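
    [Editorial aside: a rough sketch of how a contiguous register block could be
    split into a controller window and an ECAM window created with
    pci_ecam_create() and pci_generic_ecam_ops, as suggested above. The
    CPM_CTRL_SIZE constant, the function name and the assumed layout (controller
    registers first, ECAM following) are placeholders, not the actual Versal CPM
    map.]

    #include <linux/pci-ecam.h>
    #include <linux/sizes.h>

    #define CPM_CTRL_SIZE	SZ_1M	/* hypothetical size of the controller window */

    static int xilinx_cpm_create_ecam(struct device *dev, struct resource *reg,
    				  struct resource *bus_range,
    				  struct pci_config_window **cfg)
    {
    	/* Carve the ECAM portion out of the shared "reg" region */
    	struct resource cfgres = {
    		.start = reg->start + CPM_CTRL_SIZE,
    		.end   = reg->end,
    		.flags = IORESOURCE_MEM,
    	};

    	*cfg = pci_ecam_create(dev, &cfgres, bus_range, &pci_generic_ecam_ops);
    	if (IS_ERR(*cfg))
    		return PTR_ERR(*cfg);

    	return 0;
    }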
    >
    > > > > +
    > > > > +/**
    > > > > + * struct xilinx_cpm_pcie_port - PCIe port information
    > > > > + * @reg_base: Bridge Register Base
    > > > > + * @cpm_base: CPM System Level Control and Status Register (SLCR) Base
    > > > > + * @irq: Interrupt number
    > > > > + * @root_busno: Root Bus number
    > > > > + * @dev: Device pointer
    > > > > + * @leg_domain: Legacy IRQ domain pointer
    > > > > + * @irq_misc: Legacy and error interrupt number
    > > > > + */
    > > > > +struct xilinx_cpm_pcie_port {
    > > > > + void __iomem *reg_base;
    > > > > + void __iomem *cpm_base;
    > > > > + u32 irq;
    > > > > + u8 root_busno;
    > > > > + struct device *dev;
    > > > > + struct irq_domain *leg_domain;
    > > > > + int irq_misc;
    > > > > +};
    > > > > +
    > > > > +static inline u32 pcie_read(struct xilinx_cpm_pcie_port *port, u32 reg)
    > > > > +{
    > > > > + return readl(port->reg_base + reg);
    > > > > +}
    > > > > +
    > > > > +static inline void pcie_write(struct xilinx_cpm_pcie_port *port,
    > > > > + u32 val, u32 reg)
    > > > > +{
    > > > > + writel(val, port->reg_base + reg);
    > > > > +}
    > > > > +
    > > > > +static inline bool cpm_pcie_link_up(struct xilinx_cpm_pcie_port *port)
    > > > > +{
    > > > > + return (pcie_read(port, XILINX_CPM_PCIE_REG_PSCR) &
    > > > > + XILINX_CPM_PCIE_REG_PSCR_LNKUP) ? 1 : 0;
    > > >
    > > > u32 val = pcie_read(port, XILINX_CPM_PCIE_REG_PSCR);
    > > >
    > > > return val & XILINX_CPM_PCIE_REG_PSCR_LNKUP;
    > > >
    > > > And this function call is not that informative anyway - it is used
    > > > just to print a log whose usefulness is questionable.
    > > We need this logging information; customers use it in case
    > > of link-down failures.
    >
    > Out of curiosity, to do what ?
    They use this information as a first-level debug step and then initiate a query to the Xilinx support team.
    >
    > [...]
    >
    > > > > +/**
    > > > > + * xilinx_cpm_pcie_intx_map - Set the handler for the INTx and mark IRQ as valid
    > > > > + * @domain: IRQ domain
    > > > > + * @irq: Virtual IRQ number
    > > > > + * @hwirq: HW interrupt number
    > > > > + *
    > > > > + * Return: Always returns 0.
    > > > > + */
    > > > > +static int xilinx_cpm_pcie_intx_map(struct irq_domain *domain,
    > > > > + unsigned int irq, irq_hw_number_t hwirq)
    > > > > +{
    > > > > + irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_simple_irq);
    > > >
    > > > INTX are level IRQs; the flow handler must be handle_level_irq.
    > > Accepted, will change.
    > > >
    > > > > + irq_set_chip_data(irq, domain->host_data);
    > > > > + irq_set_status_flags(irq, IRQ_LEVEL);
    > > >
    > > > The way INTX are handled in this patch is wrong. You must set up a
    > > > chained IRQ with the appropriate flow handler; the current code uses an
    > > > IRQ action, which is an IRQ layer violation, and it goes without saying
    > > > that it is almost certainly broken.
    > > In our controller we use the same IRQ line for controller errors and
    > > legacy interrupts. We have two cases here: error interrupts are
    > > self-consumed by the controller, and legacy interrupts are flow handled.
    > > It is not INTx handling alone on this IRQ line. So can a chained IRQ be
    > > used for self-consumed interrupts too?
    >
    > No. In this specific case neither solution is satisfying; we need to give it
    > some thought. I will talk to Marc (CC'ed) to find the best option here going
    > forward.
    >
    Ok, will wait for Marc to provide inputs.
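
    [Editorial aside: for reference, a minimal sketch of the chained-IRQ shape
    referred to above, combined with the handle_level_irq fix. This is only an
    illustration, not the agreed-upon solution (that is still pending Marc's
    input). The XILINX_CPM_PCIE_REG_IDRN register name and the port->intx_irq
    field are hypothetical, and the real controller shares this line with error
    interrupts, which is exactly the open question.]

    static void xilinx_cpm_pcie_intx_flow(struct irq_desc *desc)
    {
    	struct xilinx_cpm_pcie_port *port = irq_desc_get_handler_data(desc);
    	struct irq_chip *chip = irq_desc_get_chip(desc);
    	unsigned long status;
    	int bit;

    	chained_irq_enter(chip, desc);

    	/* Demultiplex pending INTx lines into the legacy IRQ domain */
    	status = pcie_read(port, XILINX_CPM_PCIE_REG_IDRN);
    	for_each_set_bit(bit, &status, PCI_NUM_INTX)
    		generic_handle_irq(irq_find_mapping(port->leg_domain, bit));

    	chained_irq_exit(chip, desc);
    }

    static int xilinx_cpm_pcie_intx_map(struct irq_domain *domain,
    				    unsigned int irq, irq_hw_number_t hwirq)
    {
    	/* INTx is level-triggered, so use the level flow handler */
    	irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_level_irq);
    	irq_set_chip_data(irq, domain->host_data);
    	irq_set_status_flags(irq, IRQ_LEVEL);

    	return 0;
    }

    /* In probe, instead of requesting an IRQ action for the INTx line: */
    /* irq_set_chained_handler_and_data(port->intx_irq,
    				    xilinx_cpm_pcie_intx_flow, port); */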

    Regards,
    Bharat
