 
Subject: Re: [PATCH 2/7] RISC-V: arch/riscv Makefile and Kconfigs
On Sat, May 27, 2017 at 2:57 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
> On Tue, 23 May 2017 04:46:22 PDT (-0700), Arnd Bergmann wrote:
>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <palmer@dabbelt.com> wrote:
>>> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>>> new file mode 100644
>>> index 000000000000..510ead1d3343
>>> --- /dev/null
>>> +++ b/arch/riscv/Kconfig
>>> @@ -0,0 +1,300 @@
>>> +#
>>> +# For a description of the syntax of this configuration file,
>>> +# see Documentation/kbuild/kconfig-language.txt.
>>> +#
>>> +
>>> +config RISCV
>>> + def_bool y
>>> + select OF
>>> + select OF_EARLY_FLATTREE
>>> + select OF_IRQ
>>> + select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
>>> + select ARCH_WANT_FRAME_POINTERS
>>> + select CLONE_BACKWARDS
>>> + select COMMON_CLK
>>> + select GENERIC_CLOCKEVENTS
>>> + select GENERIC_CPU_DEVICES
>>> + select GENERIC_IRQ_SHOW
>>> + select GENERIC_PCI_IOMAP
>>
>> You normally don't want GENERIC_PCI_IOMAP, unless your
>> inb()/outb() uses other instructions than your readl()/writel()
>
> We don't have any special instructions for inb/outb, but there is a special
> fence to ensure ordering.

Ah, interesting. What is the exact behavior of that fence? (this is slightly
unrelated but worth looking at anyway)

> It looks like most of my candidates for patterning things after use
> GENERIC_PCI_IOMAP, but I ended up setting GENERIC_IOMAP and things still at
> least build and boot without any PCI
>
> https://github.com/riscv/riscv-linux/commit/c43c599b0dd9d15886c03a9dd179f8936b0cbb2e
>
> I'll be sure to check if I broke PCI, as I don't actually know what's going on
> here.

Actually I made a mistake: GENERIC_PCI_IOMAP is fine, please use that.
GENERIC_IOMAP is the option you don't want, and you already don't
enable that one.
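
(For the record, GENERIC_PCI_IOMAP essentially just gives you the
shared pci_iomap() implementation from lib/, which drivers use along
the lines of the sketch below; the BAR number and register offset are
made up for illustration.)

#include <linux/pci.h>
#include <linux/io.h>

/* Driver-side sketch: pci_iomap() returns an __iomem cookie for a BAR
 * whether it is memory- or I/O-mapped, so the driver stays bus- and
 * architecture-agnostic. */
static int sketch_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	void __iomem *regs;
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	regs = pci_iomap(pdev, 0, 0);	/* BAR 0, map the whole resource */
	if (!regs) {
		pci_disable_device(pdev);
		return -ENOMEM;
	}

	iowrite32(0x1, regs + 0x04);	/* hypothetical "enable" register */

	pci_iounmap(pdev, regs);
	pci_disable_device(pdev);
	return 0;
}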

>>> +# even on 32-bit, physical (and DMA) addresses are > 32-bits
>>> +config ARCH_PHYS_ADDR_T_64BIT
>>> + def_bool y
>>> +
>>> +config ARCH_DMA_ADDR_T_64BIT
>>> + def_bool y
>>
>> Are you required to use 64-bit addressing for RAM on 32-bit
>> architectures though? Using 32-bit dma_addr_t and phys_addr_t
>> when possible makes some code noticeably more efficient.
>>
>>> +config PGTABLE_LEVELS
>>> + int
>>> + default 3 if 64BIT
>>> + default 2
>>
>> With 2-level page tables, you usually can't address much more
>> than 32-bit physical memory anyway, so I'd guess that most
>> 32-bit chips would actually put their RAM under the 4GB boundary.
>
> We can address 34 bits of physical address space on Sv32 (the 32-bit virtual
> addressing scheme in RV32). If this is a meaningful performance constraint
> then we could always make this configurable.

I'd suggest leaving it turned off initially and only enabling it once
someone actually builds a system that makes use of the high address
space.

Of course if there is already hardware that needs it, there has to
be a way to turn it on.
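
(The efficiency cost I mentioned is visible right in the core
typedefs: once the 64-bit options are selected, every phys_addr_t and
dma_addr_t is carried around as a 64-bit value even on a 32-bit
kernel, and all arithmetic on them becomes double-word operations.
Roughly:)

/* Roughly how the generic code sizes these types. */
#ifdef CONFIG_PHYS_ADDR_T_64BIT
typedef u64 phys_addr_t;
#else
typedef u32 phys_addr_t;
#endif

#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
typedef u64 dma_addr_t;
#else
typedef u32 dma_addr_t;
#endif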

This raises a much more general question about how you want to deal
with SoC implementations in the future. The two most common ways of
doing this are:

- Every major platform gets a Kconfig option in the architecture menu,
and that selects the essential drivers (irqchip, clocksource, pinctrl,
clk, ...) that you need for that platform, along with architecture features
(ISA level and optional features, ...)

- The architecture code knows nothing about the SoC and just keeps
to the basics (CPU architecture level selection, SMP/MMU/etc enabled,
selecting drivers that everyone needs) and leaves the rest up to be
selected in the defconfig file.

On ARM, we have a bit of both, which is not as good as being
consistent one way or another.

>>> +config RV_ATOMIC
>>> + bool "Use atomic memory instructions (RV32A or RV64A)"
>>> + default y
>>> +
>>> +config RV_SYSRISCV_ATOMIC
>>> + bool "Include support for atomic operation syscalls"
>>> + default n
>>> + help
>>> + If atomic memory instructions are present, i.e.,
>>> + CONFIG_RV_ATOMIC, this includes support for the syscall that
>>> + provides atomic accesses. This is only useful to run
>>> + binaries that require atomic access but were compiled with
>>> + -mno-atomic.
>>> +
>>> + If CONFIG_RV_ATOMIC is unset, this option is mandatory.
>>
>> I wonder what the cost would be of always providing the syscalls
>> for compatibility. This is also something worth putting into a VDSO
>> instead of exposing the syscall:
>>
>> That way, user space that is built with -mno-atomic can call into
>> the vdso, which depending on the hardware support will perform
>> the atomic operation directly or enter the syscall.
>
> I think the theory is that most Linux-capable systems are going to have the A
> extension, and that this is really there for educational/hobbyist use.
>
> The actual implementation isn't that expensive: it's no-SMP-only, so it's just
> a disable/enable interrupts that wraps a compare+exchange. I think putting it
> in the VDSO is actually the right thing to do: that way non-A user code will
> still have reasonable performance.
>
> I'll add it to my TODO list.

This would be another variant of the question above.
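
(For the archives, the non-SMP fallback described above amounts to
something like the sketch below; the helper name is invented, and the
careful user-memory access a real syscall would need is deliberately
left out.)

#include <linux/types.h>
#include <linux/irqflags.h>

/*
 * Conceptual sketch of a uniprocessor compare-and-exchange: with a
 * single hart and interrupts masked, a plain load/compare/store cannot
 * be interleaved with anything else, so it is atomic by construction.
 */
static inline u32 up_cmpxchg32(u32 *ptr, u32 old, u32 new)
{
	unsigned long flags;
	u32 cur;

	local_irq_save(flags);		/* nothing else can run on a UP kernel */
	cur = *ptr;
	if (cur == old)
		*ptr = new;
	local_irq_restore(flags);

	return cur;			/* caller checks whether it saw 'old' */
}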

>>> +config PCI_DOMAINS
>>> + def_bool PCI
>>> +
>>> +config PCI_DOMAINS_GENERIC
>>> + def_bool PCI
>>> +
>>> +config PCI_SYSCALL
>>> + def_bool PCI
>>
>> I don't think you want PCI_SYSCALL
>
> OK. I'm a bit worried we need this to run DOOM, but I'll give everything a
> shot without it and see what happens

I'd argue that's a user space bug that we want to fix anyway to make
the code portable to other architectures.
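
The portable way for user space to get at config space is through
sysfs (or a library like libpci on top of it) rather than the
pciconfig_* syscalls, along these lines; the device address below is
made up:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* each PCI device exposes its config space as a sysfs file */
	FILE *f = fopen("/sys/bus/pci/devices/0000:00:01.0/config", "rb");
	uint16_t vendor;

	if (!f)
		return 1;
	/* first two bytes are the vendor ID (little-endian, as is RISC-V) */
	if (fread(&vendor, sizeof(vendor), 1, f) != 1) {
		fclose(f);
		return 1;
	}
	printf("vendor id: 0x%04x\n", vendor);
	fclose(f);
	return 0;
}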

Arnd
