Subject: Re: unicode (char as abstract data type)
On Fri, 17 Apr 1998, Albert D. Cahalan wrote:

> > In a decade Unicode most likely will be in the same place
> > where EBCDIC is now.
>
> That would be KOI-8, used only by Alex Belits.

Since the seventies koi8 and koi7 in various forms have been used on all
Russian computers except mainframes and, later, DOS-based PCs. koi8 is still
the only charset used in Russian newsgroups and email, and I will be very
surprised if it goes away any time soon (the same goes for ASCII).

> Look at it this way:
>
> We are stuck in a world with multiple character encodings.
> To convert, you generally need to go through UCS2.

To convert what? We have multiple encodings because we have multiple
languages, and conversion through Unicode is useful only within a language,
because otherwise there is nothing to map into. The koi8-r and iso8859-1
charsets have no common characters outside the 7-bit ASCII range.
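
To make that concrete, here is a minimal sketch of mine (not part of the
original exchange) using the standard iconv(3) interface: pushing Cyrillic
koi8-r text "through Unicode" into iso8859-1 has nowhere to land, because
the target charset has no such characters, and the conversion simply fails.

  #include <iconv.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      /* "Test" in Russian, spelled in koi8-r: four Cyrillic letters. */
      char in[] = "\xf4\xc5\xd3\xd4";
      char out[16];
      char *inp = in, *outp = out;
      size_t inleft = strlen(in), outleft = sizeof(out);

      iconv_t cd = iconv_open("ISO-8859-1", "KOI8-R");
      if (cd == (iconv_t)-1) {
          perror("iconv_open");
          return 1;
      }

      /* The conversion goes through Unicode internally, but the Cyrillic
       * code points do not exist in iso8859-1, so iconv() stops with an
       * error on the very first non-ASCII byte. */
      if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1)
          perror("iconv");

      iconv_close(cd);
      return 0;
  }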

> The kernel must convert for foreign filesystem support.

What filesystem support? Linux is currently incapable of using read-write
the most popular filesystem among Unix-like OSes, and no one seems to have
a problem with that.

> The library & apps must convert for many other reasons.
> If libc can use UCS2 to call the kernel, then the kernel
> only needs to perform half of the conversion and libc won't
> need to convert back to UCS2. Put more of it in user-space!

That will work only if absolutely everything in userspace uses Unicode, or
always has charset information available for every string at the time it is
passed to the kernel. Neither of these two situations exists in reality.
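
As an illustration (my example, not from the thread): the very same byte
means different things under different charsets, so a libc converting
strings to UCS2 for the kernel has to be told, per string, which charset
the bytes came from -- otherwise it will pick the wrong code point.

  #include <stdio.h>

  int main(void)
  {
      unsigned char byte = 0xE1;

      /* The same byte value is two different characters, hence two
       * different UCS-2 code points, depending on the charset it was
       * written in. */
      unsigned short if_koi8r  = 0x0410;  /* CYRILLIC CAPITAL LETTER A       */
      unsigned short if_latin1 = 0x00E1;  /* LATIN SMALL LETTER A WITH ACUTE */

      printf("byte 0x%02X: koi8-r -> U+%04X, iso8859-1 -> U+%04X\n",
             byte, if_koi8r, if_latin1);
      return 0;
  }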

> Think of a machine with several users and several filesystems.
> Maybe they are all Czech, which Martin Mares reports as having
> more than 5 character encodings. Each user wants to see the system
> in their preferred encoding. Solution: the kernel reads filenames
> from disk in whatever format is there, then converts to UCS2.
> The library converts UCS2 into the format which each user wants.

I have never seen users voluntarily using different encodings of the same
language on the same OS -- originally, multiple encodings for the same
language were created because of incompatible operating systems and
hardware. The real problem is what will happen when a user runs a
multiple-charset-aware program (say, the knews newsreader) for multiple
languages, and the OS happily "converts" everything he copies into files
under the assumption that all those texts are in the default charset and
language. And please, don't tell me that every program will be able to
label the charset before it writes -- I would like to see what will convert
the encodings in

ls -l >> "`ls | head -1`"

> The yucky alternative: the conversion from UCS2 to _one_ local
> encoding is also in the kernel and users that don't like the chosen
> encoding are screwed: live with it or suffer a _second_ conversion.

Users don't bring encodings with them. In Russia, even at the time when
every desktop PC running DOS was incapable of displaying anything but the
cp866 encoding because of the pseudographics in the IBM charset, all email
between those boxes was transferred in koi8 with no problems. Now even
Windows users have enough clue to configure koi8 fonts there.

> >>>> I certainly don't want to see 8-bit kernel calls on Merced.
> >>>
> >>> Then you won't see vi there either.
> >>
> >> Oh? The last time I heard, vi accessed system calls via libc.
> >
> > But in what encoding will it represent that to the terminal?
>
> KOI-8 if you prefer. Except for virtual consoles, this is not
> a kernel issue at all.

Again, how will it know what should be converted to what?

> It is an ncurses issue if you want your consoles in UTF-8 mode.
> Note that you could have ncurses support dumb 8-bit apps on
> UTF-8 consoles, and it doesn't have to be ASCII or Latin-1.

Only if the application knows what charset it is using. Right now parts of
my xemacs are still under the impression that I use the iso8859-1 charset,
while the default fonts are in koi8-r, and things work just fine. One could
say that xemacs should be designed better, but there are thousands and
thousands of programs that are used in this manner, and no one has any
intention of abandoning or rewriting them.
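
For what it's worth, here is a rough sketch (mine; ncurses itself is not
shown) of what such a layer would have to do for a "dumb" 8-bit application
on a UTF-8 console: re-encode every byte the application writes -- which
only works if somebody has already told the layer which charset the
application thinks it is using.

  #include <iconv.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      /* Output of an 8-bit application: "privet" (hello) in koi8-r. */
      char in[] = "\xf0\xd2\xc9\xd7\xc5\xd4";
      char out[64];
      char *inp = in, *outp = out;
      size_t inleft = strlen(in), outleft = sizeof(out);

      /* The layer has to be told "KOI8-R" somehow -- the application
       * itself usually has no idea what charset its strings are in. */
      iconv_t cd = iconv_open("UTF-8", "KOI8-R");
      if (cd == (iconv_t)-1) {
          perror("iconv_open");
          return 1;
      }
      if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1)
          perror("iconv");

      fwrite(out, 1, sizeof(out) - outleft, stdout);  /* UTF-8 for the console */
      putchar('\n');
      iconv_close(cd);
      return 0;
  }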

> >> That is the applications. This is the kernel mailing list.
> >> We have a library called "libc" that provides an interface
> >> between the applications and the kernel. Applications can
> >> still see filenames in KOI-8 if you so desire. (you won't
> >> care what libc does to non-KOI-8 filenames because you won't
> >> have any such names on your disk)
> >
> > How will it know that it's koi8
>
> Option 1: compile that knowledge into libc
> Option 2: use an environment variable that libc interprets

Yes, and then a smart MIME parser will know one thing while an even smarter
libc knows something completely different.
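
To spell out where that disagreement comes from (my sketch, assuming a
POSIX-style locale setup): under "Option 2" libc gets exactly one
process-wide answer from the environment, while a charset-aware application
carries a separate label for every piece of text it handles.

  #include <langinfo.h>
  #include <locale.h>
  #include <stdio.h>

  int main(void)
  {
      /* libc's single, process-wide notion of the charset, taken from
       * LANG / LC_CTYPE in the environment. */
      setlocale(LC_CTYPE, "");
      printf("libc believes everything is: %s\n", nl_langinfo(CODESET));

      /* Meanwhile a MIME-aware program reads per-part labels such as
       *   Content-Type: text/plain; charset=koi8-r
       * and those can name a different charset for every message part. */
      return 0;
  }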

> > if charset labeling will be eliminated (and this is the whole
> > point of Unicode -- to avoid need of charset labeling by
> > providing some flat space)?
>
> No & no.
>
> You don't use charset labeling on your filenames, do you?

Because I don't use non-English filenames now. However, I _do_ use
non-English headers in email, and they are charset-labeled separately, as
is the message body or each message body part.

> Somehow you are able to interpret them anyway.
>
> Unicode can "limp by" without language information just like Latin-1
> can limp by without it. Full use of Latin-1 needs language information
> for sorting and selection of a desirable font. No difference here!

Unicode is supposed to be used by people who don't and can't use Latin-1.
Myself included.

> >> Think about the consequences of UTF-8 at the system call level:
> >> Every system call that uses text must be first converted to UTF-8.
> >> This burden is with us forever. Meanwhile, Windows and MacOS can
> >> avoid conversion costs after the world converts to UCS2.
> >>
> >> The world _will_ convert too. As much as you may hate it, you
> >> must realize that when Sun, Microsoft, and Apple agree...
> >> It is only a matter of time -- perhaps a decade.
> >
> > They say it, but they don't _do_ it -- and they can't do that anyway.
>
> At the kernel level, it's already done.

No, it isn't. The kernel just uses wide characters internally, and no one
in userspace seriously relies on that. If one tries to actually use
Unicode, countless things in userspace break, so real Unicode text almost
never comes near those filesystems in any form other than ASCII or a
conversion into the local charset.
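
As a byte-level illustration of what that conversion would actually mean
(my numbers, not from the thread), here is one Cyrillic letter in the three
encodings involved; every filename crossing the system call boundary would
have to be rewritten from one of these forms into another:

  #include <stdio.h>

  int main(void)
  {
      /* CYRILLIC CAPITAL LETTER A, U+0410 */
      unsigned char  koi8r  = 0xE1;            /* 1 byte in koi8-r */
      unsigned short ucs2   = 0x0410;          /* 2 bytes in UCS-2 */
      unsigned char  utf8[] = { 0xD0, 0x90 };  /* 2 bytes in UTF-8 */

      printf("koi8-r: %02X\n", koi8r);
      printf("UCS-2 : %04X\n", ucs2);
      printf("UTF-8 : %02X %02X\n", utf8[0], utf8[1]);
      return 0;
  }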

> This is of course the
> Linux KERNEL mailing list, so only the KERNEL part matters here.
>
> It is not often that those 3 companies agree on anything.

Look at _how_ they use it. Try to find anything Unicode-based in Solaris
other than text-conversion utilities.

--
Alex

