Subject: Re: unicode (char as abstract data type)

On Sat, 18 Apr 1998, Albert D. Cahalan wrote:

> > To convert what? We have multiple encodings because we have multiple
> > languages, and conversion through Unicode is useful only within the
> > language because otherwise there will be nothing to map into. koi8-r and
> > iso8859-1 charsets have no common characters except the 7-bit ASCII range.
>
> You have multiple Russian encodings.

I certainly don't. If anything arrives at my host in Russian, for any
reason, in an encoding other than koi-8, it gets converted once (and yes,
UTF-8 is used as the intermediate format). The same happens on all
properly installed Unix boxes with Cyrillic support.
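
A minimal sketch of that one-time conversion (my example, nothing from
the kernel), assuming a POSIX iconv that knows the CP1251, UTF-8 and
KOI8-R charset names:

#include <iconv.h>
#include <stdio.h>
#include <string.h>

/* Convert between two named charsets; returns bytes written,
   0 on failure.  Error handling kept minimal for brevity. */
static size_t convert(const char *from, const char *to,
                      char *in, size_t inlen, char *out, size_t outlen)
{
        iconv_t cd = iconv_open(to, from);
        char *outp = out;

        if (cd == (iconv_t) -1)
                return 0;
        if (iconv(cd, &in, &inlen, &outp, &outlen) == (size_t) -1)
                outp = out;
        iconv_close(cd);
        return outp - out;
}

int main(void)
{
        char msg[] = "\xEF\xF0\xE8\xE2\xE5\xF2"; /* "привет" in cp1251 */
        char utf8[64], koi8[64];
        size_t n;

        /* cp1251 -> UTF-8 -> koi8-r: UTF-8 as the intermediate format */
        n = convert("CP1251", "UTF-8", msg, strlen(msg), utf8, sizeof utf8);
        n = convert("UTF-8", "KOI8-R", utf8, n, koi8, sizeof koi8);
        fwrite(koi8, 1, n, stdout);
        putchar('\n');
        return 0;
}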

> There are multiple Czech encodings.
> I'm fairly sure most European languages have at least two encodings,
> thanks to Bill Gates and the ISO.

iso8859-5 is something that exists only in someone's sick mind; DOS and
Windows encodings, however, are alive, and people have already configured
their software to convert them when necessary. And we/they certainly
don't need yet another one.

> >> The kernel must convert for foreign filesystem support.
> >
> > What filesystem support? Linux currently can't use, read-write, the
> > most popular filesystem among Unix-like OSes, and no one seems to
> > have problems with that.
>
> SMB, which is getting extensions for better Unix support.

SMB is definitely not what I would describe as "the most popular
filesystem among Unix-like OSes" -- UFS is.

> >> The library & apps must convert for many other reasons.
> >> If libc can use UCS2 to call the kernel, then the kernel
> >> only needs to perform half of the conversion and libc won't
> >> need to convert back to UCS2. Put more of it in user-space!
> >
> > That will work only if absolutely everything in userspace uses Unicode
> > or always has charset information available for every string at the time
> > it is passed to the kernel. Neither of these situations exists in reality.
>
> No, only one thing in userspace really needs charset info: libc.

Where will it get the charset names used in MIME-compliant mailreaders?

> >> Think of a machine with several users and several filesystems.
> >> Maybe they are all Czech, which Martin Mares reports as having
> >> more than 5 character encodings. Each user wants to see the system
> >> in their preferred encoding. Solution: the kernel reads filenames
> >> from disk in whatever format is there, then converts to UCS2.
> >> The library converts UCS2 into the format which each user wants.
> >
> > I have never seen users voluntarily using different encodings of the
> > same language on the same OS -- originally multiple encodings for the
> > same languages were created because of incompatible operating systems
> > and hardware.
>
> People share both disk and network filesystems with other OSs.

...and in that case they have to keep the same charset on all systems
that are supposed to read the files -- for the simple reason of being
able to understand the data in them.

> > The real problem is, what will happen if a user uses a
> > multiple-charset-aware program (say, the knews newsreader) for multiple
> > languages, and the OS happily "converts" everything he copies into
> > files under the assumption that all those texts are in the default
> > charset and language.
>
> That's not a kernel issue either.

This problem will be created if the kernel is designed in such an
irresponsible way. It doesn't exist now.

> > And please, don't tell me that every program will be able to
> > label the charset before it writes -- I would like to see what will
> > convert encodings in
> >
> > ls -l >> "`ls | head -1`"
>
> Sure:
>
> 1. libc gets UCS2 directory listing from the kernel
> 2. ls (both of them) get KOI-8 from libc
> 3. head and the shell get KOI-8 -- it's in userspace
> 4. libc gets KOI-8 from the shell
> 5. the kernel gets UCS2 from libc

I mean a directory with filenames that must be represented in different
charsets, not the single-charset case that has no problems now anyway.
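
To make it concrete -- a sketch of what the kernel hands out today,
using nothing beyond POSIX opendir()/readdir(): filenames are opaque
byte strings, so a mandatory conversion to UCS2 would have to guess
the charset of every one of them.

#include <dirent.h>
#include <stdio.h>

int main(int argc, char **argv)
{
        DIR *d = opendir(argc > 1 ? argv[1] : ".");
        struct dirent *e;
        const unsigned char *p;

        if (!d)
                return 1;
        while ((e = readdir(d)) != NULL) {
                /* The name is an opaque byte string; the same bytes
                   read "привет" in koi8-r and "ÐÒÉ×ÅÔ" in iso8859-1,
                   and nothing on disk says which one was meant. */
                for (p = (const unsigned char *) e->d_name; *p; p++)
                        printf("%02x ", *p);
                printf("\n");
        }
        closedir(d);
        return 0;
}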

> >>> But in what encoding will it represent that to the terminal?
> >>
> >> KOI-8 if you prefer. Except for virtual consoles, this is not
> >> a kernel issue at all.
> >
> > Again, how will it know, what should be converted to what?
>
> This is NOT A KERNEL ISSUE, other than your setup scripts adjusting
> the console to assume whatever you want it to assume.

Again, it will be created by poor handling of data by the kernel.

> >> It is an ncurses issue if you want your consoles in UTF-8 mode.
> >> Note that you could have ncurses support dumb 8-bit apps on
> >> UTF-8 consoles, and it doesn't have to be ASCII or Latin-1.
> >
> > Only if the application knows what charset it is using. Right now
> > parts of my xemacs are still under the impression that I use the
> > iso8859-1 charset, while the default fonts are in koi8-r, and things
> > work just fine. One can say that xemacs could be designed better, but
> > there are thousands and thousands of programs that are used in this
> > manner, and no one has any intention to abandon or rewrite them.
>
> Xemacs will not see any change. It can't, because the "system calls"
> it uses must go through libc.

No, it will -- it will repeatedly tell libc that, to the best of its
knowledge, it does everything in iso8859-1, and my koi8 text will be
"unicodified" as if it consisted of the same bytes in iso8859-1, which
happen to correspond to completely different characters.
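
The failure is mechanical; a sketch, again assuming POSIX iconv: run
koi8-r bytes through the iso8859-1 conversion libc was told to use,
and every byte converts "successfully" into the wrong character.

#include <iconv.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        char koi8[] = "\xD0\xD2\xC9\xD7\xC5\xD4"; /* "привет" in koi8-r */
        char utf8[64];
        char *in = koi8, *out = utf8;
        size_t inlen = strlen(koi8), outlen = sizeof utf8;
        iconv_t cd = iconv_open("UTF-8", "ISO-8859-1");

        if (cd == (iconv_t) -1)
                return 1;
        /* No conversion error is possible: every byte is "valid"
           iso8859-1.  The result is just the wrong six characters. */
        iconv(cd, &in, &inlen, &out, &outlen);
        iconv_close(cd);
        fwrite(utf8, 1, out - utf8, stdout); /* prints "ÐÒÉ×ÅÔ" */
        putchar('\n');
        return 0;
}

The output is the perfectly valid Latin-1 word "ÐÒÉ×ÅÔ" -- six real
characters, none of them Russian.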

> >>> How will it know that it's koi8
> >>
> >> Option 1: compile that knowledge into libc
> >> Option 2: use an environment variable that libc interprets
> >
> > Yes, then a smart MIME parser will know one thing and an even
> > smarter libc will know something completely different.
>
> This has nothing to do with MIME. As a user and app programmer,
> you won't even notice a UCS2 kernel interface.

...except when I suddenly find that "new" applications that
_know_ about that interface see wrong characters everywhere.
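
(For the record, "option 2" is roughly what locale-aware libcs do
already; a minimal sketch, assuming a libc that implements POSIX
nl_langinfo():

#include <langinfo.h>
#include <locale.h>
#include <stdio.h>

int main(void)
{
        /* libc's charset "knowledge" is just the environment --
           LC_ALL / LC_CTYPE / LANG, e.g. LANG=ru_RU.KOI8-R.  Nothing
           verifies that any given string actually is in that charset. */
        setlocale(LC_CTYPE, "");
        printf("%s\n", nl_langinfo(CODESET));
        return 0;
}

Run under LANG=ru_RU.KOI8-R it prints something like KOI8-R; under
LANG=de_DE, something like ISO-8859-1 -- for the very same bytes,
which is exactly the mismatch I am describing.)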

> Well, the man page would mention some nice extra features.
>
> >>> if charset labeling will be eliminated (and this is the whole
> >>> point of Unicode -- to avoid need of charset labeling by
> >>> providing some flat space)?
> >>
> >> No & no.
> >>
> >> You don't use charset labeling on your filenames, do you?
> >
> > Because I don't use non-English filenames now. However I _do_ use
> > non-English headers in email, and they are separately charset-labeled,
> > as well as message body or message body parts.
>
> Oh man... WTF does that have to do with the kernel?

Nothing -- those things, when used as filenames, will be translated
incorrectly, that's all. Now they aren't translated at all, so anything
that knows what charset they are in will see the same text, and anything
that doesn't know can still open the file by name without any conversion.
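
(For comparison, this is what that labeling looks like: an RFC 2047
header carries its charset with it, so a Subject of "привет" in koi8-r
travels as

Subject: =?koi8-r?B?0NLJ18XU?=

and the receiver knows exactly which table to decode with. A filename
is just the same six raw bytes, with no such wrapper.)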

> >> Somehow you are able to interpret them anyway.
> >>
> >> Unicode can "limp by" without language information just like Latin-1
> >> can limp by without it. Full use of Latin-1 needs language information
> >> for sorting and selection of a desirable font. No difference here!
> >
> > Unicode is supposed to be used by people who don't and can't use Latin1.
> > Myself included.
>
> Your point?

It affects my ability to work with texts in my language. Latin1 doesn't.

> I used Latin-1 in the example only because I needed an encoding
> that is used for multiple languages. The example works for _any_
> other encoding that is used for multiple languages.

It only means that the language-labeling problem will remain with
Unicode, while a completely artificial problem of character
misidentification will be created.

> Perhaps I should have used Latin-2.
>
> >> At the kernel level, it's already done.
> >
> > No, it isn't. The kernel just uses wide characters, and no one in
> > userspace seriously relies on that. If one tries to use Unicode,
> > countless things in userspace will break, so actual Unicode text
> > almost never comes close to those filesystems in any form other than
> > ASCII or a conversion to the local charset.
>
> That is OK. The foundation is there.

A hell of a foundation. By that logic, all hard drives and memory chips
are a great foundation for a 128-bit charset.

> >> It is not often that those 3 companies agree on anything.
> >
> > Look at _how_ they use it. Try to find anything Unicode-based in
> > Solaris except text-converting utilities.
>
> All of Java is Unicode.

1. Java has nothing to do with Solaris.
2. All Java implementations use local charsets by converting Unicode to
them. And they do it extremely poorly, too, so no one bothers.

--
Alex


