Discussion: Creeping Bloat in Phobos
Walter Bright via Digitalmars-d
2014-09-27 20:57:58 UTC
From time to time, I take a break from bugs and enhancements and just look at
what some piece of code is actually doing. Sometimes, I'm appalled. Phobos, for
example, should be a lean and mean fighting machine:


http://www.nbcnews.com/id/38545625/ns/technology_and_science-science/t/king-tuts-chariots-were-formula-one-cars/#.VCceNmd0xjs

Instead, we have something more akin to:


http://untappedcities.com/2012/10/31/roulez-carrosses-carriages-of-versailles-arrive-in-arras/

More specifically, I looked at std.file.copy():

https://github.com/D-Programming-Language/phobos/blob/master/std/file.d

Which is 3 lines of code:

void copy(in char[] from, in char[] to) {
    immutable result = CopyFileW(from.tempCStringW(), to.tempCStringW(),
                                 false);
    if (!result)
        throw new FileException(to.idup);
}

Compiling this code for Windows produces the rather awful:

_D3std4file4copyFxAaxAaZv comdat
assume CS:_D3std4file4copyFxAaxAaZv
L0: push EBP
mov EBP,ESP
mov EDX,FS:__except_list
push 0FFFFFFFFh
lea EAX,-0220h[EBP]
push offset _D3std4file4copyFxAaxAaZv[0106h]
push EDX
mov FS:__except_list,ESP
sub ESP,8
sub ESP,041Ch
push 0
push dword ptr 0Ch[EBP]
push dword ptr 8[EBP]
call near ptr
_D3std8internal7cstring21__T11tempCSÇàÆTuTaZÇìÆFNbNixAaZSÇ┬├3Res
mov dword ptr -4[EBP],0
lea EAX,-0220h[EBP]
call near ptr
_D3std8internal7cstring21__T11tempCStringTuTaZ11tempCStringFNbNixAaZ3Res3ptrMxFNaNbNdNiNfZPxu
push EAX
lea EAX,-0430h[EBP]
push dword ptr 014h[EBP]
push dword ptr 010h[EBP]
call near ptr
_D3std8internal7cstring21__T11tempCSÇàÆTuTaZÇìÆFNbNixAaZSÇ┬├3Res
mov dword ptr -4[EBP],1
lea EAX,-0430h[EBP]
call near ptr
_D3std8internal7cstring21__T11tempCStringTuTaZ11tempCStringFNbNixAaZ3Res3ptrMxFNaNbNdNiNfZPxu
push EAX
call dword ptr __imp__CopyFileW at 12
mov -01Ch[EBP],EAX
mov dword ptr -4[EBP],0
call near ptr L83
jmp short L8F
L83: lea EAX,-0220h[EBP]
call near ptr
_D3std8internal7cstring21__T11tempCStringTuTaZ11tempCStringFNbNixAaZ3Res6__dtorMFNbNiZv
ret
L8F: mov dword ptr -4[EBP],0FFFFFFFFh
call near ptr L9D
jmp short LA9
L9D: lea EAX,-0430h[EBP]
call near ptr
_D3std8internal7cstring21__T11tempCStringTuTaZ11tempCStringFNbNixAaZ3Res6__dtorMFNbNiZv
ret
LA9: cmp dword ptr -01Ch[EBP],0
jne LF3
mov ECX,offset FLAT:_D3std4file13FileException7__ClassZ
push ECX
call near ptr __d_newclass
add ESP,4
push dword ptr 0Ch[EBP]
mov -018h[EBP],EAX
push dword ptr 8[EBP]
call near ptr _D6object12__T4idupTxaZ4idupFNaNbNdNfAxaZAya
push EDX
push EAX
call dword ptr __imp__GetLastError at 0
push EAX
push dword ptr _D3std4file13FileException6__vtblZ[02Ch]
push dword ptr _D3std4file13FileException6__vtblZ[028h]
push 095Dh
mov EAX,-018h[EBP]
call near ptr
_D3std4file13FileException6__ctorMFNfxAakAyakZC3std4file13FileException
push EAX
call near ptr __d_throwc
LF3: mov ECX,-0Ch[EBP]
mov FS:__except_list,ECX
mov ESP,EBP
pop EBP
ret 010h
mov EAX,offset FLAT:_D3std4file13FileException6__vtblZ[0310h]
jmp near ptr __d_framehandler

which is TWICE as much generated code as for D1's copy(), which does the same
thing. No, it is not because D2's compiler sux. It's because it has become
encrustified with gee-gaws, jewels, decorations, and other crap.

To scrape the barnacles off, I've filed:

https://issues.dlang.org/show_bug.cgi?id=13541
https://issues.dlang.org/show_bug.cgi?id=13542
https://issues.dlang.org/show_bug.cgi?id=13543
https://issues.dlang.org/show_bug.cgi?id=13544

I'm sure there's much more in std.file (and elsewhere) that can be done. Guys,
when developing Phobos/Druntime code, please look at the assembler once in a
while and see what is being wrought. You may be appalled, too.
Peter Alexander via Digitalmars-d
2014-09-27 21:59:17 UTC
On Saturday, 27 September 2014 at 20:57:53 UTC, Walter Bright wrote:
Post by Walter Bright via Digitalmars-d
From time to time, I take a break from bugs and enhancements
and just look at what some piece of code is actually doing.
Sometimes, I'm appalled.
Me too, and yes it can be appalling. It's pretty bad for even
simple range chains, e.g.

import std.algorithm, std.stdio;
int main(string[] args) {
    return cast(int)args.map!("a.length").reduce!"a+b"();
}

Here's what LDC produces (with -O -inline -release -noboundscheck)

__Dmain:
0000000100001480 pushq %r15
0000000100001482 pushq %r14
0000000100001484 pushq %rbx
0000000100001485 movq %rsi, %rbx
0000000100001488 movq %rdi, %r14
000000010000148b callq 0x10006df10 ## symbol stub for:
__D3std5array14__T5emptyTAyaZ5emptyFNaNbNdNfxAAyaZb
0000000100001490 xorb $0x1, %al
0000000100001492 movzbl %al, %r9d
0000000100001496 leaq _.str12(%rip), %rdx ## literal pool for:
"/Users/pja/ldc2-0.14.0-osx-x86_64/bin/../import/std/algorithm.d"
000000010000149d movq 0xcbd2c(%rip), %r8 ## literal pool symbol
address:
__D3std9algorithm24__T6reduceVAyaa3_612b62Z124__T6reduceTS3std9algorithm85__T9MapResultS633std10functional36__T8unaryFunVAyaa8_612e6c656e677468Z8unaryFunTAAyaZ9MapResultZ6reduceFNaNfS3std9algorithm85__T
00000001000014a4 movl $0x2dd, %edi
00000001000014a9 movl $0x3f, %esi
00000001000014ae xorl %ecx, %ecx
00000001000014b0 callq 0x10006e0a2 ## symbol stub for:
__D3std9exception14__T7enforceTbZ7enforceFNaNfbLAxaAyamZb
00000001000014b5 movq (%rbx), %r15
00000001000014b8 leaq 0x10(%rbx), %rsi
00000001000014bc leaq -0x1(%r14), %rdi
00000001000014c0 callq 0x10006df10 ## symbol stub for:
__D3std5array14__T5emptyTAyaZ5emptyFNaNbNdNfxAAyaZb
00000001000014c5 testb $0x1, %al
00000001000014c7 jne 0x1000014fa
00000001000014c9 addq $-0x2, %r14
00000001000014cd addq $0x20, %rbx
00000001000014d1 nopw %cs:(%rax,%rax)
00000001000014e0 addq -0x10(%rbx), %r15
00000001000014e4 movq %r14, %rdi
00000001000014e7 movq %rbx, %rsi
00000001000014ea callq 0x10006df10 ## symbol stub for:
__D3std5array14__T5emptyTAyaZ5emptyFNaNbNdNfxAAyaZb
00000001000014ef decq %r14
00000001000014f2 addq $0x10, %rbx
00000001000014f6 testb $0x1, %al
00000001000014f8 je 0x1000014e0
00000001000014fa movl %r15d, %eax
00000001000014fd popq %rbx
00000001000014fe popq %r14
0000000100001500 popq %r15
0000000100001502 ret

and for:

import std.algorithm, std.stdio;
int main(string[] args) {
    int r = 0;
    foreach (i; 0..args.length)
        r += args[i].length;
    return r;
}

__Dmain:
00000001000015c0 xorl %eax, %eax
00000001000015c2 testq %rdi, %rdi
00000001000015c5 je 0x1000015de
00000001000015c7 nopw (%rax,%rax)
00000001000015d0 movl %eax, %eax
00000001000015d2 addq (%rsi), %rax
00000001000015d5 addq $0x10, %rsi
00000001000015d9 decq %rdi
00000001000015dc jne 0x1000015d0
00000001000015de ret

(and sorry, don't even bother looking at what dmd does...)

I'm not complaining about LDC here (although I'm surprised
array.empty isn't inlined). The way ranges are formulated makes
them difficult to optimize. I think there are things we can do
here in the library. Maybe I'll write up something about that at
some point.

I think the takeaway here is that people should be aware of (a)
what kind of instructions their code is generating, (b) what kind
of instructions their code SHOULD be generating, and (c) what is
practically possible for present-day compilers. Like you say, it
helps to look at the assembled code once in a while to get a feel
for this kind of thing. Modern compilers are good, but they
aren't magic.
H. S. Teoh via Digitalmars-d
2014-09-27 22:09:46 UTC
Post by Walter Bright via Digitalmars-d
From time to time, I take a break from bugs and enhancements and just
look at what some piece of code is actually doing. Sometimes, I'm
appalled.
Me too, and yes it can be appalling. It's pretty bad for even simple
range chains, e.g.
import std.algorithm, std.stdio;
int main(string[] args) {
return cast(int)args.map!("a.length").reduce!"a+b"();
}
I vaguely recall somebody mentioning a while back that range-based code
is poorly optimized because compilers weren't designed to recognize
such patterns before. I wonder if there are ways for the compiler to
recognize range primitives and apply special optimizations to them.

I do find, though, that gdc -O3 generally tends to do a pretty good job
of reducing range-based code to near-minimal assembly. Sadly, dmd is
changing too fast for gdc releases to catch up with the latest and
greatest, so I haven't been using gdc very much recently. :-(


T
--
If Java had true garbage collection, most programs would delete
themselves upon execution. -- Robert Sewell
Brad Roberts via Digitalmars-d
2014-09-27 22:26:08 UTC
What we're seeing here is pretty much the same problem that early C++
suffered from: abstraction penalty. It took years of work to help
overcome it, both from the compiler and the library. Not having trivial
functions inlined and optimized down through standard techniques like
dead store elimination, value range propagation, various loop
restructurings, etc. means that code will look like what Walter and you
have shown. Given DMD's relatively weak inliner, I'm not shocked by
Walter's example. I am curious why LDC failed to inline those functions.
Walter Bright via Digitalmars-d
2014-09-27 22:26:35 UTC
Post by Walter Bright via Digitalmars-d
From time to time, I take a break from bugs and enhancements and just look at
what some piece of code is actually doing. Sometimes, I'm appalled.
Me too, and yes it can be appalling. It's pretty bad for even simple range
chains, e.g.
import std.algorithm, std.stdio;
int main(string[] args) {
return cast(int)args.map!("a.length").reduce!"a+b"();
}
Here's what LDC produces (with -O -inline -release -noboundscheck)
Part of the problem in this particular case is not a compiler optimizer
weakness, but that autodecode problem I've been throwing (!) chairs
through windows over.
H. S. Teoh via Digitalmars-d
2014-09-27 22:40:36 UTC
Post by Walter Bright via Digitalmars-d
Post by Walter Bright via Digitalmars-d
From time to time, I take a break from bugs and enhancements and
just look at what some piece of code is actually doing. Sometimes,
I'm appalled.
Me too, and yes it can be appalling. It's pretty bad for even simple
range chains, e.g.
import std.algorithm, std.stdio;
int main(string[] args) {
return cast(int)args.map!("a.length").reduce!"a+b"();
}
Here's what LDC produces (with -O -inline -release -noboundscheck)
Part of this particular case problem is not a compiler optimizer
weakness, but that autodecode problem I've been throwing (!) chairs
through windows on.
If we can get Andrei on board, I'm all for killing off autodecoding.


T
--
MAS = Mana Ada Sistem?
bearophile via Digitalmars-d
2014-09-27 22:52:21 UTC
Post by Walter Bright via Digitalmars-d
Post by Peter Alexander via Digitalmars-d
import std.algorithm, std.stdio;
int main(string[] args) {
return cast(int)args.map!("a.length").reduce!"a+b"();
}
Here's what LDC produces (with -O -inline -release
-noboundscheck)
Part of this particular case problem is not a compiler
optimizer weakness, but that autodecode problem I've been
throwing (!) chairs through windows on.
There is no char auto decoding in this program, right?

Note: in Phobos we now have std.algorithm.sum, which is better
than reduce!"a+b"().
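
For example, the first program above becomes (a sketch):

import std.algorithm, std.stdio;
int main(string[] args) {
    // Same result as the map/reduce version, with clearer intent:
    return cast(int)args.map!(a => a.length).sum;
}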

Bye,
bearophile
Walter Bright via Digitalmars-d
2014-09-27 23:03:56 UTC
Post by bearophile via Digitalmars-d
There is no char auto decoding in this program, right?
Notice the calls to autodecoding 'front' in the assembler dump.
Peter Alexander via Digitalmars-d
2014-09-27 23:06:02 UTC
On Saturday, 27 September 2014 at 23:04:00 UTC, Walter Bright wrote:
Post by Walter Bright via Digitalmars-d
Post by bearophile via Digitalmars-d
There is no char auto decoding in this program, right?
Notice the calls to autodecoding 'front' in the assembler dump.
I think you're imagining things, Walter!

There's no auto-decoding in my example; it's just adding up the
lengths.
Walter Bright via Digitalmars-d
2014-09-27 23:35:03 UTC
Post by Peter Alexander via Digitalmars-d
Post by Walter Bright via Digitalmars-d
Post by bearophile via Digitalmars-d
There is no char auto decoding in this program, right?
Notice the calls to autodecoding 'front' in the assembler dump.
I think you're imagining things Walter!
There's no auto-decoding my example, it's just adding up the lengths.
oh crap, I misread empty as front!
bearophile via Digitalmars-d
2014-09-27 23:00:16 UTC
Post by H. S. Teoh via Digitalmars-d
If we can get Andrei on board, I'm all for killing off
autodecoding.
Killing auto-decoding for std.algorithm functions will break most
of my D2 code... perhaps we can do that in a D3 language.

Bye,
bearophile
H. S. Teoh via Digitalmars-d
2014-09-27 23:31:21 UTC
Post by H. S. Teoh via Digitalmars-d
If we can get Andrei on board, I'm all for killing off autodecoding.
Killing auto-decoding for std.algorithm functions will break most of
my D2 code... perhaps we can do that in a D3 language.
[...]

Well, obviously it's not going to be done in a careless, drastic way!

There will be a proper migration path and deprecation cycle. We already
have byCodeUnit and byCodePoint, and the first step is probably to
migrate towards requiring usage of one or the other for iterating over
strings, and only once all code is using them, we will get rid of
autodecoding (the job now being done by byCodePoint). Then, the final
step would be to allow the direct use of strings in iteration constructs
again, but this time without autodecoding by default. Of course,
.byCodePoint will still be available for code that needs to use it.
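
For instance, the explicit style already looks like this (a sketch
using the std.utf adapters):

import std.range : walkLength;
import std.utf : byCodeUnit, byDchar;

void main() {
    string s = "héllo";
    assert(s.byCodeUnit.walkLength == 6); // UTF-8 code units, no decoding
    assert(s.byDchar.walkLength == 5);    // code points, explicit opt-in
    assert(s.walkLength == 5);            // today: implicit autodecoding
}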


T
--
Ph.D. = Permanent head Damage
via Digitalmars-d
2014-09-28 10:04:21 UTC
On Saturday, 27 September 2014 at 23:33:14 UTC, H. S. Teoh via
Digitalmars-d wrote:
On Sat, Sep 27, 2014 at 11:00:16PM +0000, bearophile via Digitalmars-d
Post by bearophile via Digitalmars-d
Post by H. S. Teoh via Digitalmars-d
If we can get Andrei on board, I'm all for killing off autodecoding.
Killing auto-decoding for std.algorithm functions will break most of
my D2 code... perhaps we can do that in a D3 language.
[...]
Well, obviously it's not going to be done in a careless, drastic way!
There will be a proper migration path and deprecation cycle. We already
have byCodeUnit and byCodePoint, and the first step is probably to
migrate towards requiring usage of one or the other for iterating over
strings, and only once all code is using them, we will get rid of
autodecoding (the job now being done by byCodePoint). Then, the final
step would be to allow the direct use of strings in iteration constructs
again, but this time without autodecoding by default. Of course,
.byCodePoint will still be available for code that needs to use it.
The final step would almost inevitably lead to Unicode
incorrectness, which was the reason why autodecoding was
introduced in the first place. Just require
byCodePoint/byCodeUnit, always. It might be a bit inconvenient,
but that's a consequence of the fact that we're dealing with
Unicode strings.
Marco Leise via Digitalmars-d
2014-09-28 10:58:10 UTC
Am Sun, 28 Sep 2014 10:04:21 +0000
Post by via Digitalmars-d
The final step would almost inevitably lead to Unicode
incorrectness, which was the reason why autodecoding was
introduced in the first place. Just require
byCodePoint/byCodeUnit, always. It might be a bit inconvenient,
but that's a consequence of the fact that we're dealing with
Unicode strings.
And I would go so far as to say that you have to make an informed
decision between code unit, code point and grapheme. They are all
useful, graphemes being the most generally useful: they hide away
normalization and allow cutting by "user perceived character".
--
Marco
Andrei Alexandrescu via Digitalmars-d
2014-09-28 12:23:57 UTC
The final step would almost inevitably lead to Unicode incorrectness,
which was the reason why autodecoding was introduced in the first place.
Just require byCodePoint/byCodeUnit, always. It might be a bit
inconvenient, but that's a consequence of the fact that we're dealing
with Unicode strings.
Also let's not forget how well it's worked for C++ to conflate arrays of
char with Unicode strings. -- Andrei
bearophile via Digitalmars-d
2014-09-28 10:14:44 UTC
Post by H. S. Teoh via Digitalmars-d
There will be a proper migration path and deprecation cycle.
I get refusals if I propose tiny breaking changes that require
changes in a small amount of user code. In comparison the user
code changes you are suggesting are very large.

Bye,
bearophile
Walter Bright via Digitalmars-d
2014-09-28 18:13:24 UTC
I get refusals if I propose tiny breaking changes that require changes in a
small amount of user code. In comparison the user code changes you are
suggesting are very large.
I'm painfully aware of what a large change removing autodecoding is. That means
it'll take a long time to do it. In the meantime, we can stop adding new code to
Phobos that does autodecoding. We have taken the first step by adding the
.byDchar and .byCodeUnit adapters.
bearophile via Digitalmars-d
2014-09-28 18:39:19 UTC
Post by Walter Bright via Digitalmars-d
I'm painfully aware of what a large change removing
autodecoding is. That means it'll take a long time to do it. In
the meantime, we can stop adding new code to Phobos that does
autodecoding. We have taken the first step by adding the
.byDchar and .byCodeUnit adapters.
We have .representation and .assumeUTF; I am using them to avoid
most autodecoding problems. Have you tried to use them in your D
code?
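
The pattern I mean, roughly (a sketch; afterSpace is just an example
name, and find here compares raw ubytes, so nothing decodes):

import std.algorithm : find;
import std.string : representation;
import std.utf : assumeUTF;

string afterSpace(string s) {
    // The ubyte[] view bypasses autodecoding; assumeUTF restores the
    // string view without copying.
    return s.representation.find(' ').assumeUTF;
}

unittest {
    assert(afterSpace("hello world") == " world");
}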

The changes you propose seem able to break almost every D program
I have written (most or all code that uses strings with Phobos
ranges/algorithms, and I use them everywhere). Compared to this
change, disallowing the comma operator to implement nice built-in
tuples would cause nearly no breakage in my code (I have done a
small analysis of the damage caused by disallowing the comma
operator in my code). It sounds like a change fit for a D3
language, even more than the introduction of reference counting.
I think this change will cause some people to permanently stop
using D.

In the end you are the designer and the benevolent dictator of D;
I am not qualified to refuse or oppose such changes. But before
making this change I suggest studying how many changes it causes
in an average small D program that uses strings and
ranges/algorithms.

Bye,
bearophile
Walter Bright via Digitalmars-d
2014-09-28 19:38:25 UTC
Post by Walter Bright via Digitalmars-d
I'm painfully aware of what a large change removing autodecoding is. That
means it'll take a long time to do it. In the meantime, we can stop adding new
code to Phobos that does autodecoding. We have taken the first step by adding
the .byDchar and .byCodeUnit adapters.
We have .representation and .assumeUTF, I am using it to avoid most autodecoding
problems. Have you tried to use them in your D code?
Yes. They don't work. Well, technically they do "work", but your code gets
filled with explicit casts, which is awful.

The problem is the "representation" of char[] is type char, not type ubyte.
The changes you propose seem able to break almost every D program I have written
(most or all code that uses strings with Phobos ranges/algorithms, and I use
them everywhere). Compared to this change, disallowing comma operator to
implement nice built-in tuples will cause nearly no breakage in my code (I have
done a small analysis of the damages caused by disallowing the tuple operator in
my code). It sounds like a change fit for a D3 language, even more than the
introduction of reference counting. I think this change will cause some people
to permanently stop using D.
It's quite possible we will be unable to make this change. But the question
that started all this was what I would change if breaking code were allowed.

I suggest that in the future you write code that is explicit about the
intention - by character or by decoded character - by using the adapters
.byChar or .byDchar.
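
For example (a sketch using those adapters):

import std.algorithm : canFind;
import std.utf : byChar, byDchar;

void main() {
    string s = "πr²";
    bool r1 = s.byChar.canFind('r');  // intent: match raw code units
    bool r2 = s.byDchar.canFind('π'); // intent: decode, match code points
    assert(r1 && r2);
}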
Marco Leise via Digitalmars-d
2014-09-29 12:09:18 UTC
Am Sun, 28 Sep 2014 12:38:25 -0700
Post by Walter Bright via Digitalmars-d
I suggest that in the future write code that is explicit about the intention -
by character or by decoded character - by using adapters .byChar or .byDchar.
... or by "user perceived character" or by "word" or by
"line". I'm always on the fence with code points. Sure they
are the code points, but what does it mean in practice?
Is it valid to start a Unicode string with just a diacritical
mark? Does it make sense to split in the middle of Korean
symbols, effectively removing parts of the glyphs and
rendering them invalid?

Bearophile, what does your code _do_ with the dchar ranges?
How is it not rendered into a caricature of its own attempts
to support non-ASCII by the above?
--
Marco
Andrei Alexandrescu via Digitalmars-d
2014-09-28 12:09:53 UTC
Post by H. S. Teoh via Digitalmars-d
Post by H. S. Teoh via Digitalmars-d
If we can get Andrei on board, I'm all for killing off autodecoding.
Killing auto-decoding for std.algorithm functions will break most of
my D2 code... perhaps we can do that in a D3 language.
[...]
Well, obviously it's not going to be done in a careless, drastic way!
Stuff that's missing:

* Reasonable effort to improve performance of auto-decoding;

* A study of the matter revealing either new artifacts and idioms, or
the insufficiency of such;

* An assessment of the impact on compilability of existing code

* An assessment of the impact on correctness of existing code (that
compiles and runs in both cases)

* An assessment of the improvement in speed of eliminating auto-decoding

I think there's a very strong need for this stuff, because claims that
current alternatives to selectively avoid auto-decoding are insufficient
rest on the throwing of hands (and occasional chairs out windows) without
any real investigation into how library artifacts may help. This approach
to justifying risky moves is frustratingly unprincipled.

Also I submit that diverting into this is a huge distraction at probably
the worst moment in the history of the D programming language.

C++ and GC. C++ and GC...



Andrei
Walter Bright via Digitalmars-d
2014-09-28 18:36:42 UTC
Post by Andrei Alexandrescu via Digitalmars-d
* Reasonable effort to improve performance of auto-decoding;
* A study of the matter revealing either new artifacts and idioms, or the
insufficiency of such;
* An assessment of the impact on compilability of existing code
* An assessment of the impact on correctness of existing code (that compiles and
runs in both cases)
* An assessment of the improvement in speed of eliminating auto-decoding
I think there's a very strong need for this stuff, because claims that current
alternatives to selectively avoid auto-decoding use the throwing of hands (and
occasional chairs out windows) without any real investigation into how library
artifacts may help. This approach to justifying risky moves is frustratingly
unprincipled.
I know I have to go a ways further to convince you :-) This is definitely a
longer term issue, not a stop-the-world-we-must-fix-it-now thing.
Post by Andrei Alexandrescu via Digitalmars-d
Also I submit that diverting into this is a huge distraction at probably the
worst moment in the history of the D programming language.
I don't plan to work on this particular issue for the time being, but do want to
stop adding more autodecoding functions like the proposed std.path.withExtension().
Post by Andrei Alexandrescu via Digitalmars-d
C++ and GC. C++ and GC...
Currently, the autodecoding functions allocate with the GC and throw as well.
(They'll GC allocate an exception and throw it if they encounter an invalid UTF
sequence. The adapters use the more common method of inserting a substitution
character and continuing on.) This makes it harder to make GC-free Phobos code.
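
For example (a sketch; byDchar is one of those adapters):

import std.utf : byDchar;

void main() {
    string bad = "abc\xFFdef"; // 0xFF never appears in valid UTF-8
    size_t replaced;
    // The autodecoding path would GC-allocate and throw a UTFException
    // at the bad byte; byDchar substitutes U+FFFD and carries on.
    foreach (dchar c; bad.byDchar)
        if (c == 0xFFFD)
            ++replaced;
    assert(replaced == 1);
}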
bearophile via Digitalmars-d
2014-09-28 18:51:00 UTC
but do want to stop adding more autodecoding functions like the
proposed std.path.withExtension().
I am not sure that can work. Perhaps you need to create range2
and algorithm2 modules, and keep adding some autodecoding
functions to the old modules.

Bye,
bearophile
Walter Bright via Digitalmars-d
2014-09-28 19:57:17 UTC
but do want to stop adding more autodecoding functions like the proposed
std.path.withExtension().
I am not sure that can work. Perhaps you need to create a range2 and algorithm2
modules, and keep adding some autodecoding functions to the old modules.
It can work just fine, and I wrote it. The problem is convincing someone to pull
it :-( as the PR was closed and reopened with autodecoding put back in.

As I've explained many times, very few string algorithms actually need decoding
at all. 'find', for example, does not. Trying to make a separate universe out of
autodecoding algorithms is missing the point.

Certainly, setExtension() does not need autodecoding, and in fact all the
autodecoding in it does is slow it down, allocate memory on errors, make it
throwable, and produce dchar output, meaning at some point later you'll need to
put it back to char.

I.e. there are no operations on paths that require decoding.

I know that you care about performance - you post about it often. I would expect
that unnecessary and pervasive decoding would be of concern to you.
bearophile via Digitalmars-d
2014-09-28 20:38:54 UTC
Post by Walter Bright via Digitalmars-d
It can work just fine, and I wrote it. The problem is
convincing someone to pull it :-( as the PR was closed and
reopened with autodecoding put back in.
Perhaps you need range2 and algorithm2 modules. Introducing
your changes in a sneaky way may not produce well-working and
predictable user code.
Post by Walter Bright via Digitalmars-d
I know that you care about performance - you post about it
often. I would expect that unnecessary and pervasive decoding
would be of concern to you.
I care first of all about program correctness (that's why I
proposed unusual things like optional strong typing for built-in
array indexes, or the "enum preconditions"). Secondly, I care
about performance in the functions or parts of code where
performance is needed. There is plenty of code where performance
is not the most important thing. That's why I have tons of
range-based code. In such large parts of code, having short,
correct, nice-looking code is more important. Please don't assume
I am simple minded :-)

Bye,
bearophile
Walter Bright via Digitalmars-d
2014-09-28 23:06:26 UTC
Post by Walter Bright via Digitalmars-d
It can work just fine, and I wrote it. The problem is convincing someone to
pull it :-( as the PR was closed and reopened with autodecoding put back in.
Perhaps you need a range2 and algorithm2 modules. Introducing your changes in a
sneaky way may not produce well working and predictable user code.
I'm not suggesting sneaky ways. setExt() was a NEW function.
Post by Walter Bright via Digitalmars-d
I know that you care about performance - you post about it often. I would
expect that unnecessary and pervasive decoding would be of concern to you.
I care first of all about program correctness (that's why I proposed unusual
things like optional strong typing for built-in array indexes, or I proposed the
"enum preconditions").
Ok, but you implied at one point that you were not aware of which parts of your
string code decoded and which did not. That's not consistent with being very
careful about correctness.

Note that autodecode does not always happen - it doesn't happen for ranges
of chars. It's very hard to look at a piece of code and tell whether
autodecode is going to happen or not.
Secondly I care for performance in the functions or parts
of code where performance is needed. There are plenty of code where performance
is not the most important thing. That's why I have tons of range-base code. In
such large parts of code having short, correct, nice looking code that looks
correct is more important. Please don't assume I am simple minded :-)
It's very hard to disable the autodecode when it is not needed, though the new
.byCodeUnit has made that much easier.
monarch_dodra via Digitalmars-d
2014-09-29 14:22:29 UTC
Post by Walter Bright via Digitalmars-d
Note that autodecode does not always happen - it doesn't happen
for ranges of chars. It's very hard to look at piece of code
and tell if autodecode is going to happen or not.
Arguably, this means we need to unify the behavior of strings,
and "string-like" objects. Pointing to an inconsistency doesn't
mean the design is flawed and void.
monarch_dodra via Digitalmars-d
2014-09-29 14:33:12 UTC
Post by Walter Bright via Digitalmars-d
It's very hard to disable the autodecode when it is not needed,
though the new .byCodeUnit has made that much easier.
One issue with this though is that "byCodeUnit" is not actually
an array. As such, by using "byCodeUnit", you have just as much
chance of improving performance as of *hurting* performance for
algorithms that are string-optimized.

For example, which would be fastest:
"hello world".find(' '); //(1)
"hello world".byCodeUnit.find(' '); //(2)

Currently, (1) is faster :/

This is a good argument though to instead use ubyte[] or
std.encoding.AsciiString.

What I think we (maybe) need though is std.encoding.UTF8Array,
which explicitly means: This is a range containing UTF8
characters. I don't want decoding. It's an array you may memchr
or slice operate on.
H. S. Teoh via Digitalmars-d
2014-09-28 20:39:02 UTC
Post by Walter Bright via Digitalmars-d
but do want to stop adding more autodecoding functions like the
proposed std.path.withExtension().
I am not sure that can work. Perhaps you need to create a range2 and
algorithm2 modules, and keep adding some autodecoding functions to
the old modules.
It can work just fine, and I wrote it. The problem is convincing
someone to pull it :-( as the PR was closed and reopened with
autodecoding put back in.
The problem with pulling such PRs is that they introduce a dichotomy
into Phobos. Some functions autodecode, some don't, and from a user's
POV, it's completely arbitrary and random. Which leads to bugs because
people can't possibly remember exactly which functions autodecode and
which don't.
Post by Walter Bright via Digitalmars-d
As I've explained many times, very few string algorithms actually need
decoding at all. 'find', for example, does not. Trying to make a
separate universe out of autodecoding algorithms is missing the point.
[...]

Maybe what we need to do is change the implementation of
std.algorithm so that it internally uses byCodeUnit for narrow strings
where appropriate. We're already special-casing Phobos code for narrow
strings anyway, so it wouldn't make things worse to have those special
cases not autodecode.

This doesn't quite solve the issue of composing ranges, since a
composed range that returns dchar in .front, composed with another
range, will have autodecoding built into it. For those cases, perhaps
one way to hack around the present situation is to use Phobos-private
enums in the wrapper ranges (e.g., enum isNarrowStringUnderneath=true;
in struct Filter or something) that ranges downstream can test for, and
do the appropriate bypasses.
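
Something along these lines (a sketch; every name in it is
hypothetical):

import std.traits : isNarrowString;

// A wrapper range advertises that a narrow string sits underneath:
struct Wrapped(R) {
    R src;
    enum isNarrowStringUnderneath = isNarrowString!R;
    alias src this; // forward the range primitives to src
}

// Downstream code tests for the flag and picks a bypass:
template hasNarrowStringUnderneath(R) {
    static if (is(typeof(R.isNarrowStringUnderneath) : bool))
        enum hasNarrowStringUnderneath = R.isNarrowStringUnderneath;
    else
        enum hasNarrowStringUnderneath = false;
}

void process(R)(R r) {
    static if (hasNarrowStringUnderneath!R) {
        // bypass: operate directly on code units
    } else {
        // generic path: front/popFront as usual
    }
}

void main() {
    process(Wrapped!string("hello")); // takes the bypass
    process([1, 2, 3]);               // takes the generic path
}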

(BTW, before you pick on specific algorithms you might want to actually
look at the code for things like find(), because I remember there were a
couple o' PRs where find() of narrow strings will use (presumably) fast
functions like strstr or strchr, bypassing a foreach loop over an
autodecoding .front.)


T
--
I think Debian's doing something wrong, `apt-get install pesticide',
doesn't seem to remove the bugs on my system! -- Mike Dresser
Andrei Alexandrescu via Digitalmars-d
2014-09-28 20:43:53 UTC
Post by H. S. Teoh via Digitalmars-d
Post by Walter Bright via Digitalmars-d
but do want to stop adding more autodecoding functions like the
proposed std.path.withExtension().
I am not sure that can work. Perhaps you need to create a range2 and
algorithm2 modules, and keep adding some autodecoding functions to
the old modules.
It can work just fine, and I wrote it. The problem is convincing
someone to pull it :-( as the PR was closed and reopened with
autodecoding put back in.
The problem with pulling such PRs is that they introduce a dichotomy
into Phobos. Some functions autodecode, some don't, and from a user's
POV, it's completely arbitrary and random. Which leads to bugs because
people can't possibly remember exactly which functions autodecode and
which don't.
I agree. -- Andrei
Dmitry Olshansky via Digitalmars-d
2014-09-28 20:55:59 UTC
Post by H. S. Teoh via Digitalmars-d
Post by Walter Bright via Digitalmars-d
but do want to stop adding more autodecoding functions like the
proposed std.path.withExtension().
I am not sure that can work. Perhaps you need to create a range2 and
algorithm2 modules, and keep adding some autodecoding functions to
the old modules.
It can work just fine, and I wrote it. The problem is convincing
someone to pull it :-( as the PR was closed and reopened with
autodecoding put back in.
The problem with pulling such PRs is that they introduce a dichotomy
into Phobos. Some functions autodecode, some don't, and from a user's
POV, it's completely arbitrary and random. Which leads to bugs because
people can't possibly remember exactly which functions autodecode and
which don't.
Agreed.
Post by H. S. Teoh via Digitalmars-d
Post by Walter Bright via Digitalmars-d
As I've explained many times, very few string algorithms actually need
decoding at all. 'find', for example, does not. Trying to make a
separate universe out of autodecoding algorithms is missing the point.
[...]
Maybe what we need to do, is to change the implementation of
std.algorithm so that it internally uses byCodeUnit for narrow strings
where appropriate. We're already specialcasing Phobos code for narrow
strings anyway, so it wouldn't make things worse by making those special
cases not autodecode.
This doesn't quite solve the issue of composing ranges, since one
composed range returns dchar in .front composed with another range will
have autodecoding built into it. For those cases, perhaps one way to
hack around the present situation is to use Phobos-private enums in the
wrapper ranges (e.g., enum isNarrowStringUnderneath=true; in struct
Filter or something), that ranges downstream can test for, and do the
appropriate bypasses.
We need to either generalize the hack we did for char[] and wchar[] or
start creating a whole new phobos without auto-decoding.

I'm not sure what's best but the latter is more disruptive.
Post by H. S. Teoh via Digitalmars-d
(BTW, before you pick on specific algorithms you might want to actually
look at the code for things like find(), because I remember there were a
couple o' PRs where find() of narrow strings will use (presumably) fast
functions like strstr or strchr, bypassing a foreach loop over an
autodecoding .front.)
Yes, it has fast path.
--
Dmitry Olshansky
Walter Bright via Digitalmars-d
2014-09-28 23:21:14 UTC
Post by H. S. Teoh via Digitalmars-d
Post by Walter Bright via Digitalmars-d
It can work just fine, and I wrote it. The problem is convincing
someone to pull it :-( as the PR was closed and reopened with
autodecoding put back in.
The problem with pulling such PRs is that they introduce a dichotomy
into Phobos. Some functions autodecode, some don't, and from a user's
POV, it's completely arbitrary and random. Which leads to bugs because
people can't possibly remember exactly which functions autodecode and
which don't.
That's ALREADY the case, as I explained to bearophile.

The solution is not to have the ranges autodecode, but to have the ALGORITHMS
decide to autodecode (if they need it) or not (if they don't).
Post by H. S. Teoh via Digitalmars-d
Post by Walter Bright via Digitalmars-d
As I've explained many times, very few string algorithms actually need
decoding at all. 'find', for example, does not. Trying to make a
separate universe out of autodecoding algorithms is missing the point.
[...]
Maybe what we need to do, is to change the implementation of
std.algorithm so that it internally uses byCodeUnit for narrow strings
where appropriate. We're already specialcasing Phobos code for narrow
strings anyway, so it wouldn't make things worse by making those special
cases not autodecode.
Those special cases wind up going everywhere and impacting everyone who attempts
to write generic algorithms.
Post by H. S. Teoh via Digitalmars-d
This doesn't quite solve the issue of composing ranges, since one
composed range returns dchar in .front composed with another range will
have autodecoding built into it. For those cases, perhaps one way to
hack around the present situation is to use Phobos-private enums in the
wrapper ranges (e.g., enum isNarrowStringUnderneath=true; in struct
Filter or something), that ranges downstream can test for, and do the
appropriate bypasses.
More complexity :-( for what should be simple tasks.
Post by H. S. Teoh via Digitalmars-d
(BTW, before you pick on specific algorithms you might want to actually
look at the code for things like find(), because I remember there were a
couple o' PRs where find() of narrow strings will use (presumably) fast
functions like strstr or strchr, bypassing a foreach loop over an
autodecoding .front.)
Oh, I know that many algorithms have such specializations. Doesn't it strike you
as sucky to have to special case a whole basket of algorithms when the
InputRange does not behave in a reliable manner?

It's very simple for an algorithm to decode if it needs to: it just adds a
.byDchar adapter to its input range. Done. No special casing needed. The
lines of code written drop in half. And it works with arrays of chars,
arrays of dchars, and input ranges of either.
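
For example (a sketch; countNonAscii is a made-up algorithm):

import std.utf : byDchar;

// The algorithm decides to decode, not the range:
size_t countNonAscii(R)(R r) {
    size_t n;
    foreach (dchar c; r.byDchar)
        if (c > 0x7F)
            ++n;
    return n;
}

unittest {
    assert(countNonAscii("héllo, wörld") == 2);
    assert(countNonAscii("ascii only") == 0);
}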

---

The stalling of setExt() has basically halted my attempts to adjust Phobos so
that one can write nothrow and @nogc algorithms that work on strings.
Marco Leise via Digitalmars-d
2014-09-29 12:36:24 UTC
Am Sun, 28 Sep 2014 16:21:14 -0700
Post by Walter Bright via Digitalmars-d
Post by H. S. Teoh via Digitalmars-d
The problem with pulling such PRs is that they introduce a dichotomy
into Phobos. Some functions autodecode, some don't, and from a user's
POV, it's completely arbitrary and random. Which leads to bugs because
people can't possibly remember exactly which functions autodecode and
which don't.
That's ALREADY the case, as I explained to bearophile.
The solution is not to have the ranges autodecode, but to have the ALGORITHMS
decide to autodecode (if they need it) or not (if they don't).
Yes, that sounds like the right abstraction!
--
Marco
Dicebot via Digitalmars-d
2014-09-29 12:47:08 UTC
Post by Walter Bright via Digitalmars-d
Post by H. S. Teoh via Digitalmars-d
Post by Walter Bright via Digitalmars-d
It can work just fine, and I wrote it. The problem is convincing
someone to pull it :-( as the PR was closed and reopened with
autodecoding put back in.
The problem with pulling such PRs is that they introduce a dichotomy
into Phobos. Some functions autodecode, some don't, and from a user's
POV, it's completely arbitrary and random. Which leads to bugs because
people can't possibly remember exactly which functions autodecode and
which don't.
That's ALREADY the case, as I explained to bearophile.
The solution is not to have the ranges autodecode, but to have
the ALGORITHMS decide to autodecode (if they need it) or not
(if they don't).
No it isn't, despite you pretending otherwise. Right now there is
a simple rule - Phobos does auto-decoding everywhere, and any
failure to do so is considered a bug. Sometimes it is possible to
bypass decoding for speed-up while preserving semantic
correctness, but that is always an implementation detail; from
the point of view of the API it can't be noticed (for valid
Unicode strings at least).

Your proposal would have been a precedent for adding an
_intentional_ exception. It is unacceptable.
monarch_dodra via Digitalmars-d
2014-09-29 14:25:32 UTC
Post by Walter Bright via Digitalmars-d
It's very simple for an algorithm to decode if it needs to, it
just adds in a .byDchar adapter to its input range. Done. No
special casing needed. The lines of code written drop in half.
And it works with both arrays of chars, arrays of dchars, and
input ranges of either.
This just misses the *entire* family of algorithms that operate
on generic types, such as "map" - e.g. the totality of
std.algorithm. Oops.
via Digitalmars-d
2014-09-29 15:07:31 UTC
Post by monarch_dodra via Digitalmars-d
This just misses the *entire* familly of algorithms that
operate on generic types, such as "map". EG: the totality of
std.algorithm. Oops.
But if you know the common operations on strings used in many
programs, then you want them to perform. So you need a mix of
special cased precomputed lookup-tables and/or special cased SIMD
instructions that will outperform a generic map() by a
significant margin.

I am not arguing against generic apis being desirable, I am
questioning of the feasibility of being competitive in the space
of utf-8 strings without designing for SIMD.

Obviously, doing a bitwise AND with 0x80808080...80 followed by a SIMD
compare with ' ' will beat anything scalar-based if you want to test
for a space in a UTF-8 string.

Intel has made a real effort at making UTF-8 SIMD optimizable
with the last few generations of instruction sets.

Figuring out how to tap into that potential is a lot more
valuable than defining an API a priori. That means writing SIMD
code and comparing it to what you have. If SIMD blows what you
have out of the water, then it ain't good enough.

If you can define the string APIs to fit what SIMD is good at,
then you are onto something that could be really good. D is in a
position where you can do that. C++ aint.
Andrei Alexandrescu via Digitalmars-d
2014-09-28 20:33:34 UTC
Post by Walter Bright via Digitalmars-d
Currently, the autodecoding functions allocate with the GC and throw as
well. (They'll GC allocate an exception and throw it if they encounter
an invalid UTF sequence. The adapters use the more common method of
inserting a substitution character and continuing on.) This makes it
harder to make GC-free Phobos code.
The right solution here is refcounted exception plus policy-based
functions in conjunction with RCString. I can't believe this focus has
already been lost and we're back to let's remove autodecoding and ban
exceptions. -- Andrei
Dmitry Olshansky via Digitalmars-d
2014-09-28 21:00:15 UTC
Post by Andrei Alexandrescu via Digitalmars-d
Post by Walter Bright via Digitalmars-d
Currently, the autodecoding functions allocate with the GC and throw as
well. (They'll GC allocate an exception and throw it if they encounter
an invalid UTF sequence. The adapters use the more common method of
inserting a substitution character and continuing on.) This makes it
harder to make GC-free Phobos code.
The right solution here is refcounted exception plus policy-based
functions in conjunction with RCString. I can't believe this focus has
already been lost and we're back to let's remove autodecoding and ban
exceptions. -- Andrei
I've already stated my perception of the "no stinking exceptions", and
"no destructors 'cause i want it fast" elsewhere.

Code must be correct and fast, with correct being a precondition for any
performance tuning and speed hacks.

Correct usually entails exceptions and automatic cleanup. I also do not
believe the "exceptions have to be slow" motto; they are costly, but the
proportion of such costs has been largely exaggerated.
--
Dmitry Olshansky
Walter Bright via Digitalmars-d
2014-09-28 23:48:53 UTC
I've already stated my perception of the "no stinking exceptions", and "no
destructors 'cause i want it fast" elsewhere.
Code must be correct and fast, with correct being a precondition for any
performance tuning and speed hacks.
Sure. I'm not arguing for preferring incorrect code.
Correct usually entails exceptions and automatic cleanup. I also do not believe
the "exceptions have to be slow" motto, they are costly but proportion of such
costs was largely exaggerated.
I think it was you that suggested that instead of throwing on invalid UTF, that
the replacement character be used instead? Or maybe not, I'm not quite sure.

Regardless, the replacement character method is widely used and accepted
practice. There's no reason to throw.
Dmitry Olshansky via Digitalmars-d
2014-09-29 07:56:12 UTC
Post by Walter Bright via Digitalmars-d
I've already stated my perception of the "no stinking exceptions", and "no
destructors 'cause i want it fast" elsewhere.
Code must be correct and fast, with correct being a precondition for any
performance tuning and speed hacks.
Sure. I'm not arguing for preferring incorrect code.
Correct usually entails exceptions and automatic cleanup. I also do not believe
the "exceptions have to be slow" motto, they are costly but proportion of such
costs was largely exaggerated.
I think it was you that suggested that instead of throwing on invalid
UTF, that the replacement character be used instead? Or maybe not, I'm
not quite sure.
Aye that was me. I'd much prefer nothrow decoding. There should be an
option to throw on bad input though (and we have it already), for
programs that do not expect to work with even partially broken input.
Post by Walter Bright via Digitalmars-d
Regardless, the replacement character method is widely used and accepted
practice. There's no reason to throw.
--
Dmitry Olshansky
Marco Leise via Digitalmars-d
2014-09-29 13:30:08 UTC
Am Sun, 28 Sep 2014 16:48:53 -0700
Post by Walter Bright via Digitalmars-d
Regardless, the replacement character method is widely used and accepted
practice. There's no reason to throw.
I feel a bit uneasy about this. Could it introduce a silent
loss of information? While the replacement character method is
widely used, so is the error method. APIs typically provide
flags for this.

MultiByteToWideChar: The flag MB_ERR_INVALID_CHARS decides
whether the API errors out or drops invalid chars.
ICU: You set up an "error callback". The default replaces
invalid characters with the Unicode substitution
character. (We are talking about characters from
arbitrary charsets like Amiga to Unicode.)
Other prefab error handlers drop the invalid character or
error out.
iconv: By default it errors out at the location where an
incomplete or invalid sequence is detected. With the
"//IGNORE" flag, it will silently drop invalid characters.

I'm not opposed to the decision, but I expected the reasoning
to be more along the lines of:
`string` is per definition correct UTF-8. Exception or
substitution character is of no concern to a correctly
written D program, because decoding errors won't happen.
Validate and treat all input as ubyte[]. (Especially when
coming from a Windows console.)
or:
We may lose information in the conversion, but it's the only
practical way to reach the @nogc goal. And we are far from
having reference-counted Exceptions.
instead of:
Many people use the substitution character [in unspecified
context], so it follows that it can replace Exceptions for
Phobos' string-dchar decoding. :)
--
Marco
monarch_dodra via Digitalmars-d
2014-09-29 14:27:28 UTC
Post by Walter Bright via Digitalmars-d
I think it was you that suggested that instead of throwing on
invalid UTF, that the replacement character be used instead? Or
maybe not, I'm not quite sure.
Regardless, the replacement character method is widely used and
accepted practice. There's no reason to throw.
This I'm OK to stand behind as an acceptable change (should we
decide to go with it). It will kill the "auto-decode throws and
uses the GC" argument.
via Digitalmars-d
2014-09-29 10:33:47 UTC
On Sunday, 28 September 2014 at 21:00:46 UTC, Dmitry Olshansky wrote:
Post by Dmitry Olshansky via Digitalmars-d
Post by Andrei Alexandrescu via Digitalmars-d
The right solution here is refcounted exception plus policy-based
functions in conjunction with RCString. I can't believe this focus has
already been lost and we're back to let's remove autodecoding and ban
exceptions. -- Andrei
Consider what end users are going to use first, then design the
library to fit the use cases while retaining general usefulness.

If UTF-8 decoding cannot be efficiently done as a generic adapter
then D's approach to generic programming is a failure: dead on
arrival.

Generic programming is not supposed to be
special-case-everything-programming. If you cannot do generic
programming well on strings, then don't. Provide a concrete
single dedicated utf-8 string type instead.
Post by Dmitry Olshansky via Digitalmars-d
I've already stated my perception of the "no stinking
exceptions", and "no destructors 'cause i want it fast"
elsewhere.
Code must be correct and fast, with correct being a
precondition for any performance tuning and speed hacks.
Correct usually entails exceptions and automatic cleanup. I
also do not believe the "exceptions have to be slow" motto,
they are costly but proportion of such costs was largely
exaggerated.
Correctness has nothing to do with exceptions and an
exception-specific cleanup model. It has to do with having a
well-specified model of memory management, understanding the
model, and implementing code to the model with rigour.

The alternative is to have isolates only, higher-level constructs
only, GC everywhere, and uniform activation records for everything
(no conceptual stack). Many high-level languages worked this way,
even in the 60s.

When it comes to exception efficiency you have many choices:

1. no exceptions, omit frame pointer

2. no extra overhead when not throwing, standard codegen and
linking, slow unwind

3. no extra overhead when not throwing, nonstandard codegen,
faster unwind

4. extra overhead when not throwing, nonstandard codegen, fast
unwind

5. small extra overhead when not throwing, no RAII/single
landingpad, omit frame pointer possible, very fast unwind
(C-style longjmp)

6. hidden return error value , fixed medium overhead

D has selected standard C++ (2), which means low overhead when
not throwing and that you can use regular C backend/linker. In a
fully GC language I think that (5) is quite acceptable and what
you usually want when doing a web service. You just bail out to
the root and you never really acquire resources except for
transactions that should be terminated in the handler before
exiting (and they will time out anyway).

But there is no best model. There are trade-offs based on what
kind of application you write.

So as usual it comes back to this: what kind of applications are
D actually targeting?

D is becoming less and less a system-level language, and more and
more a compiled scripting framework.

The more special casing, the less transparent D becomes, and the
more of a scripting framework it becomes.

A good system level language requires transparency:

- easy to visualize memory layout

- predictable compilation of code to machine language

- no fixed memory model

- no arbitrary limits and presumptions about execution model

- allows you to get close to the max hardware performance
potential

I have a hard time picturing D as a system level language these
days. And the "hacks" that try to make it GC free are not making
it a better system level language (I don't consider @nogc to be a
hack). It makes D even less transparent.

Despite all the flaws of the D1 compiler, D1 was fairly
transparent.

You really need to decide if D is supposed to be primarily a
system level programming or if it is supposed to provide system
level programming as an after thought on top of an application
level programming language.

Currently it is the latter and more so for every iteration.
Walter Bright via Digitalmars-d
2014-09-28 23:42:17 UTC
Post by Walter Bright via Digitalmars-d
Currently, the autodecoding functions allocate with the GC and throw as
well. (They'll GC allocate an exception and throw it if they encounter
an invalid UTF sequence. The adapters use the more common method of
inserting a substitution character and continuing on.) This makes it
harder to make GC-free Phobos code.
The right solution here is refcounted exception plus policy-based functions in
conjunction with RCString. I can't believe this focus has already been lost and
we're back to let's remove autodecoding and ban exceptions. -- Andrei
Or setExt() can simply insert .byCodeUnit as I suggested in the PR, and it's
done and working correctly and doesn't throw and doesn't allocate and goes fast.

Not everything in Phobos can be dealt with so easily, of course, but there's
quite a bit of low hanging fruit of this nature we can just take care of now.
Kagamin via Digitalmars-d
2014-10-01 11:52:08 UTC
Permalink
On Sunday, 28 September 2014 at 12:09:50 UTC, Andrei Alexandrescu
Post by Andrei Alexandrescu via Digitalmars-d
On Sat, Sep 27, 2014 at 11:00:16PM +0000, bearophile via
Post by bearophile via Digitalmars-d
Post by H. S. Teoh via Digitalmars-d
If we can get Andrei on board, I'm all for killing off
autodecoding.
Killing auto-decoding for std.algorithm functions will break
most of
my D2 code... perhaps we can do that in a D3 language.
[...]
Well, obviously it's not going to be done in a careless,
drastic way!
* Reasonable effort to improve performance of auto-decoding;
* A study of the matter revealing either new artifacts and
idioms, or the insufficiency of such;
* An assessment of the impact on compilability of existing code;
* An assessment of the impact on correctness of existing code
(that compiles and runs in both cases);
* An assessment of the improvement in speed of eliminating
auto-decoding.
I think there's a very strong need for this stuff, because
claims that the current alternatives for selectively avoiding
auto-decoding are insufficient rely on the throwing of hands (and
the occasional chair out the window) rather than any real
investigation into how library artifacts may help. This approach
to justifying risky moves is frustratingly unprincipled.
As far as I can see, backward compatibility is fairly easy.
Extract the autodecoding modules into an `autodecoding` dub
package and clean up the Phobos modules to non-decoding behavior.
The Phobos code will be simplified: it will deal with ranges
as-is, without specialization. The `autodecoding` dub package
will be simple: it just wraps strings into a dchar range and
invokes the non-decoding function from Phobos, preserving the
current module interface to keep legacy D code working.
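A minimal sketch of one such wrapper, assuming a non-decoding
std.algorithm and using std.utf.byDchar (the module layout and the
forwarding scheme are just illustrations, not a worked-out design):

// Hypothetical autodecoding/algorithm.d
module autodecoding.algorithm;

static import std.algorithm;
import std.traits : isNarrowString;
import std.utf : byDchar;

// Preserve the legacy decoding behaviour: wrap narrow strings into
// a dchar range, forward everything else unchanged.
auto find(alias pred = "a == b", R, E)(R haystack, E needle)
{
    static if (isNarrowString!R)
        return std.algorithm.find!pred(haystack.byDchar, needle);
    else
        return std.algorithm.find!pred(haystack, needle);
}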
Run dfix on your sources; it will replace `import std.algorithm`
with `import autodecoding.algorithm` - then the code should work.
What do you think? Worth a DIP?

Andrei Alexandrescu via Digitalmars-d
2014-09-28 00:14:00 UTC
Permalink
Post by H. S. Teoh via Digitalmars-d
If we can get Andrei on board, I'm all for killing off autodecoding.
That's rather vague; it's unclear what would replace it. -- Andrei
Peter Alexander via Digitalmars-d
2014-09-28 10:17:56 UTC
Permalink
On Sunday, 28 September 2014 at 00:13:59 UTC, Andrei Alexandrescu
Post by Andrei Alexandrescu via Digitalmars-d
Post by H. S. Teoh via Digitalmars-d
If we can get Andrei on board, I'm all for killing off
autodecoding.
That's rather vague; it's unclear what would replace it. --
Andrei
No autodecoding ;-)

Specifically:

1. ref T front(T[] r) always returns r[0]
2. popFront(ref T[] r) always does { ++r.ptr; --r.length; } (see
the sketch after this list)
3. Narrow strings will be hasLength, hasSlicing, and
isRandomAccessRange (i.e. they are just like any other array).

Also:

4. Disallow implicit conversions, comparisons, or any other
operation among char, wchar, and dchar. This makes things like
"foo".find('π') compile-time errors (or better, errors until we
specialize it to do "foo".find("π"), as it should)
5. Provide byCodePoint for narrow strings (although I suspect
this will be rarely used).
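Here is what items 1 and 2 mean in code (a sketch of the proposal,
not today's Phobos):

// Raw code-unit primitives for all arrays, including char[] and wchar[].
ref inout(T) front(T)(inout(T)[] r)
{
    assert(r.length);
    return r[0];
}

void popFront(T)(ref T[] r)
{
    assert(r.length);
    r = r[1 .. $];  // same effect as ++r.ptr; --r.length;
}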

The argument is as follows:
* First, this is a hell of a lot simpler for the implementation.
* People rarely ever search for single, non-ASCII characters in
strings, and #4 makes it an error if they do (until we specialize
to make it work).
* Searching, comparison, joining, and splitting functions will be
fast and correct by default.

One possible counter-argument is that this makes it easier to
corrupt strings (since you could, e.g., insert a substring into
the middle of a multi-byte code point). To that I say it's
unlikely. When inserting into a string, you're either doing it at
the front or back (which is safe), or at some point that you've
found by some other means (e.g. using find). I can't imagine a
scenario where you could find a point in the middle of a string
that is also in the middle of a code point.

Of course, I'd probably say this change isn't practical right
now, but this is how I'd do things if I were to start over.
Uranuz via Digitalmars-d
2014-09-28 12:06:16 UTC
Permalink
On Sunday, 28 September 2014 at 00:13:59 UTC, Andrei Alexandrescu
Post by Andrei Alexandrescu via Digitalmars-d
Post by H. S. Teoh via Digitalmars-d
If we can get Andrei on board, I'm all for killing off
autodecoding.
That's rather vague; it's unclear what would replace it. --
Andrei
I believe that removing autodecoding will make things even
worse. As far as I understand, if we remove it from the front()
function that operates on narrow strings, then front() will
return just a single byte of a char. I believe that processing
narrow strings by `user perceived chars` (graphemes) is the more
common use case. Operating on the single bytes of a multibyte
character is an uncommon task, and you can do that via direct
indexing of the char[] array. I believe the number of bytes in a
*user perceived char* is an internal implementation detail of the
UTF-8 encoding, and it should not have to be considered in common
tasks such as parsing, searching, replacing text and so on. If
you need the byte representation of a string, you should cast it
to ubyte[] and work with it using the same range functions,
without autodecoding.

The main problem that I see is that a programmer inexperienced in
D can be confused about whether he is operating on bytes or on
graphemes. This can especially happen when he migrates from C# or
Python, where a string is not treated as an array of its bytes.
Because a *char* in D is not a character: it's a part of a
character, not an entire character. That's the main inconsistency.

A possible solution is to include a class or struct implementation
of a string and hide the internal representation of narrow strings
from those users who don't need to operate on the single bytes of
UTF-8 characters. I believe it's the best way to kill all the
rabbits )) We could provide this String class with a method
returning ubyte[] (the better way) or char[] that exposes the
internal representation for those who need it.

A question: can you list some languages that represent UTF-8
narrow strings as arrays of single bytes?
H. S. Teoh via Digitalmars-d
2014-09-28 14:37:03 UTC
Permalink
Post by Andrei Alexandrescu via Digitalmars-d
Post by H. S. Teoh via Digitalmars-d
If we can get Andrei on board, I'm all for killing off autodecoding.
That's rather vague; it's unclear what would replace it. -- Andrei
I believe that removing autodecoding will make things even worse. As
far as I understand, if we remove it from the front() function that
operates on narrow strings, then front() will return just a single byte
of a char. I believe that processing narrow strings by `user perceived
chars` (graphemes) is the more common use case.
[...]

Unfortunately this is not what autodecoding does today. Today's
autodecoding only segments strings into code *points*, which are not the
same thing as graphemes. For example, combining diacritics are normally
not considered separate characters from the user's POV, but they *are*
separate codepoints from their base character. The only reason today's
autodecoding is even remotely considered "correct" from an intuitive POV
is because most Western character sets happen to use only precomposed
characters rather than combining diacritic sequences. If you were
processing, say, Korean text, the present autodecoding .front would
*not* give you what you might imagine is a "single character"; it would
only be halves of Korean graphemes. Which, from a user's POV, would
suffer from the same issues as dealing with individual bytes in a UTF-8
stream -- any mistake on the program's part in handling these half-units
will cause "corruption" of the text (not corruption in the same sense as
an improperly segmented UTF-8 byte stream, but in the sense that the
wrong glyphs will be displayed on the screen -- from the user's POV
these two are basically the same thing).

You might then be tempted to say, well let's make .front return
graphemes instead. That will solve the "single intuitive character"
issue, but the performance will be FAR worse than what it is today.

So basically, what we have today is neither efficient nor complete, but
a halfway solution that mostly works for Western character sets but
is incomplete for others. We're paying efficiency for only a partial
benefit. Is it worth the cost?

I think the correct solution is not for Phobos to decide for the
application at what level of abstraction a string ought to be processed.
Rather, let the user decide. If they're just dealing with opaque blocks
of text, decoding or segmenting by grapheme is completely unnecessary --
they should just operate on byte ranges as opaque data. They should use
byCodeUnit. If they need to work with Unicode codepoints, let them use
byCodePoint. If they need to work with individual user-perceived
characters (i.e., graphemes), let them use byGrapheme.
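For illustration, the three levels side by side (a sketch against
std.utf/std.uni as they stand; byGrapheme is also by far the most
expensive of the three):

import std.range : walkLength;
import std.uni : byGrapheme;
import std.utf : byCodeUnit;

void main()
{
    string s = "noe\u0308l";  // "noël" with a combining diaeresis
    assert(s.byCodeUnit.walkLength == 6);  // UTF-8 code units
    assert(s.walkLength == 5);             // code points (today's auto-decoding)
    assert(s.byGrapheme.walkLength == 4);  // user-perceived characters
}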

This is why I proposed the deprecation path of making it illegal to pass
raw strings to Phobos algorithms -- the caller should specify what level
of abstraction they want to work with -- byCodeUnit, byCodePoint, or
byGrapheme. The standard library's job is to empower the D programmer by
giving him the choice, not to shove a predetermined solution down his
throat.


T
--
Life is unfair. Ask too much from it, and it may decide you don't
deserve what you have now either.
John Colvin via Digitalmars-d
2014-09-28 17:03:29 UTC
Permalink
On Sunday, 28 September 2014 at 14:38:57 UTC, H. S. Teoh via Digitalmars-d
Post by H. S. Teoh via Digitalmars-d
[...]
This is why I proposed the deprecation path of making it illegal to pass
raw strings to Phobos algorithms -- the caller should specify what level
of abstraction they want to work with -- byCodeUnit, byCodePoint, or
byGrapheme. The standard library's job is to empower the D programmer by
giving him the choice, not to shove a predetermined solution down his
throat.
I totally agree with all of that.

It's one of those cases where correct by default is far too slow
(that would have to be graphemes) but fast by default is far too
broken. Better to force an explicit choice.

There is no magic bullet for unicode in a systems language such
as D. The programmer must be aware of it and make choices about
how to treat it.
Walter Bright via Digitalmars-d
2014-09-28 18:10:25 UTC
Permalink
There is no magic bullet for unicode in a systems language such as D. The
programmer must be aware of it and make choices about how to treat it.
That's really the bottom line.

The trouble with autodecode is that it is done at the lowest level,
meaning it is very hard to bypass. By moving the decision up a level (to
the .byDchar or .byCodeUnit adapters), the caller makes the decision.
Uranuz via Digitalmars-d
2014-09-28 19:44:39 UTC
Permalink
Post by John Colvin via Digitalmars-d
I totally agree with all of that.
It's one of those cases where correct by default is far too
slow (that would have to be graphemes) but fast by default is
far too broken. Better to force an explicit choice.
There is no magic bullet for unicode in a systems language such
as D. The programmer must be aware of it and make choices about
how to treat it.
I see. I didn't know about the difference between byCodeUnit and
byGrapheme, because I speak Russian, which is like English in
that it doesn't have diacritics. As far as I remember, German,
which I learned at school, has diacritics. So you have opened my
eyes on this question. My position as an ordinary programmer is
that I speak a language whose graphemes are coded by 2 bytes, and
I always need to do decoding, otherwise my program will be
broken. The other possibility is to use wstring or dstring, but
that is less memory efficient. Also, UTF-8 is more commonly used
on the Internet, so I don't want to do conversions to UTF-32, for
example.

Where can I read about byGrapheme? Isn't this approach
overcomplicated? I don't want to write Dostoevskiy's book "War
and Peace" just to write a parser for a simple DSL.
Dmitry Olshansky via Digitalmars-d
2014-09-28 20:08:29 UTC
Permalink
Post by Uranuz via Digitalmars-d
Post by John Colvin via Digitalmars-d
I totally agree with all of that.
It's one of those cases where correct by default is far too slow (that
would have to be graphemes) but fast by default is far too broken.
Better to force an explicit choice.
There is no magic bullet for unicode in a systems language such as D.
The programmer must be aware of it and make choices about how to treat
it.
I see. I didn't know about the difference between byCodeUnit and
byGrapheme, because I speak Russian, which is like English in
that it doesn't have diacritics. As far as I remember, German,
which I learned at school, has diacritics. So you have opened my
eyes on this question. My position as an ordinary programmer is
that I speak a language whose graphemes are coded by 2 bytes
In UTF-16 and UTF-8.
Post by Uranuz via Digitalmars-d
and I always need to do decoding, otherwise my program will be
broken. The other possibility is to use wstring or dstring, but
that is less memory efficient. Also, UTF-8 is more commonly used
on the Internet, so I don't want to do conversions to UTF-32, for
example.
Where can I read about byGrapheme?
std.uni docs:
http://dlang.org/phobos/std_uni.html#.byGrapheme
Post by Uranuz via Digitalmars-d
Isn't this approach
overcomplicated? I don't want to write Dostoevskiy's book "War
and Peace" just to write a parser for a simple DSL.
It's Tolstoy actually:
http://en.wikipedia.org/wiki/War_and_Peace

You don't need byGrapheme for a simple DSL. In fact, as long as the DSL
is simple enough (ASCII only), you may safely avoid decoding. If it's in
Russian, you might want to decode. Even in this case there are ways to
avoid decoding, though it may involve a bit of writing, as for a typical
short novel ;)

In fact I did a couple of such literary exercises in the standard library.

For codepoint lookups on non-decoded strings:
http://dlang.org/phobos/std_uni.html#.utfMatcher

And to create sets of codepoints to detect with matcher:
http://dlang.org/phobos/std_uni.html#.CodepointSet
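Roughly how the two combine (a sketch; see the std.uni docs linked
above for the exact matcher API):

import std.uni : CodepointSet, unicode, utfMatcher;

void main()
{
    // Build the set of code points once, then match directly on the
    // UTF-8 text without decoding the whole string.
    auto set = unicode.Cyrillic | CodepointSet('a', 'z' + 1);
    auto m = utfMatcher!char(set);

    string s = "привет";
    size_t n;
    while (s.length && m.match(s))  // match() advances s on success
        ++n;
    assert(n == 6);
}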
--
Dmitry Olshansky
Uranuz via Digitalmars-d
2014-09-28 20:44:29 UTC
Permalink
Post by Dmitry Olshansky via Digitalmars-d
http://en.wikipedia.org/wiki/War_and_Peace
You don't need byGrapheme for a simple DSL. In fact, as long as
the DSL is simple enough (ASCII only), you may safely avoid
decoding. If it's in Russian, you might want to decode. Even in
this case there are ways to avoid decoding, though it may involve
a bit of writing, as for a typical short novel ;)
Yes, my mistake ;) I was thinking about *Crime and Punishment*
but wrote *War and Peace*. I don't know why. Maybe because it is
longer.

Thanks for the useful links. As far as we are talking about the
standard library, I think some standard approach should be
provided for solving common tasks: searching, sorting, parsing,
splitting strings. I see that currently we have a lot of ways of
doing similar things with strings. I think this is partly a
problem of documentation. When I parse text, I can't understand
why I need to use all of these range interfaces instead of just
manipulating a raw narrow string. We have several modules for
working with strings: std.range, std.algorithm, std.string,
std.array, std.utf, and I can't see how they help me solve my
problems. On the contrary, they just create a new problem:
thinking about them in order to find the *right* way. So I spend
most of my time thinking about that rather than solving my task.

It is hard for me to accept that we don't need to decode to do
some operations. What is annoying is that I always need to think
about both the code-point length that I show to the user and the
byte length that is used to slice the char array. It's very easy
to confuse the two and do something wrong.

I see that it is all complicated: we have 3 character types and
more than 5 modules for trivial manipulations on strings, with
tens of functions. It all goes to hell. But I haven't even
started doing my job. And we don't have a *standard* way to deal
with it in the std lib. At the least, that way is not documented
well enough.
Dmitry Olshansky via Digitalmars-d
2014-09-28 21:13:08 UTC
Permalink
Post by Dmitry Olshansky via Digitalmars-d
http://en.wikipedia.org/wiki/War_and_Peace
You don't need byGrapheme for a simple DSL. In fact, as long as the DSL
is simple enough (ASCII only), you may safely avoid decoding. If it's in
Russian, you might want to decode. Even in this case there are ways to
avoid decoding, though it may involve a bit of writing, as for a typical
short novel ;)
Yes, my mistake ;) I was thinking about *Crime and Punishment* but
wrote *War and Peace*. I don't know why. Maybe because it is longer.
Admittedly both are way too long for my taste :)
Thanks for the useful links. As far as we are talking about the standard
library, I think some standard approach should be provided for solving
common tasks: searching, sorting, parsing, splitting strings. I see that
currently we have a lot of ways of doing similar things with strings. I
think this is partly a problem of documentation.
Some of this is historical; in particular, std.string is way older than
std.algorithm.
When I parse
text, I can't understand why I need to use all of these range interfaces
instead of just manipulating a raw narrow string. We have several
modules for working with strings: std.range, std.algorithm, std.string,
std.array,
std.range publicly imports std.array, thus I really do not see why we
still have std.array as a standalone module.

std.utf, and I can't see how they help me solve my
problems. On the contrary, they just create a new problem: thinking about
them in order to find the *right* way.
There is no single *right* way; every level of abstraction has its uses.
There is also a bit of a trade-off between performance and
easy/obvious/nice code.
So I spend
most of my time thinking about that rather than solving my task.
It takes time to get accustomed to a standard library. See also std.conv
and std.format. String processing is indeed shotgunned across all of Phobos.
It is hard for me to accept that we don't need to decode to do some
operations. What is annoying is that I always need to think about both
the code-point length that I show to the user and the byte length that
is used to slice the char array. It's very easy to confuse the two and
do something wrong.
As long as you use decoding primitives, you keep getting proper indices
back automatically. That must be what some folks considered the correct
way to do Unicode, until it became apparent to everybody that Unicode is
way more than this.
I see that it is all complicated: we have 3 character types and more
than 5 modules for trivial manipulations on strings, with tens of
functions. It all goes to hell.
There are many tools, but when I write parsers I actually use almost
none of them. Well, nowadays I'm going to use the stuff in std.uni like
CodepointSet, utfMatcher, etc. std.regex makes some use of these already,
but prior to that std.utf.decode was my lone workhorse.
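For reference, the decode-driven pattern is roughly this (a minimal
sketch):

import std.utf : decode;

// Walk a UTF-8 string code point by code point, keeping the byte
// indices around for slicing out tokens in a hand-written parser.
void scan(string s)
{
    size_t i = 0;
    while (i < s.length)
    {
        immutable start = i;
        immutable dchar c = decode(s, i);  // advances i past the sequence
        // s[start .. i] is the raw slice holding this code point
    }
}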
But I haven't even started doing my job. And we don't have a *standard*
way to deal with it in the std lib. At the least, that way is not
documented well enough.
Well, on the bright side, consider that C has lots of broken functions
in its stdlib, and even some that are _never_ safe, like "gets" ;)
--
Dmitry Olshansky
ketmar via Digitalmars-d
2014-09-29 01:33:08 UTC
Permalink
On Sun, 28 Sep 2014 19:44:39 +0000
I speak a language whose graphemes are coded by 2 bytes
UCS-4? KOI8? my locale is KOI8, and i HATE D for assuming that everyone
on the planet is using UTF-8 and happy with it. from my POV, almost all
string decoding is broken. a string i got from the filesystem? good god,
let it not contain anything out of the ASCII range! a string i got from
a text file? the same. a string i must write to a text file or stdout?
oh, c'mon, what do you mean telling me "п©я─п╊п╡п╣я┌"?! i can't read that!
Marco Leise via Digitalmars-d
2014-09-29 15:09:06 UTC
Permalink
Am Mon, 29 Sep 2014 04:33:08 +0300
Post by ketmar via Digitalmars-d
On Sun, 28 Sep 2014 19:44:39 +0000
I speak a language whose graphemes are coded by 2 bytes
UCS-4? KOI8? my locale is KOI8, and i HATE D for assuming that everyone
on the planet is using UTF-8 and happy with it. from my POV, almost all
string decoding is broken. a string i got from the filesystem? good god,
let it not contain anything out of the ASCII range! a string i got from
a text file? the same. a string i must write to a text file or stdout?
oh, c'mon, what do you mean telling me "п©я─п╊п╡п╣я┌"?! i can't read that!
My friend, we agree here! We must convert the whole world to
UTF-8 eventually to end this madness! But for now when we
write to a terminal, we have to convert to the system locale,
because there are still people who don't use Unicode. (On
Windows consoles the wide-char writing functions are good
enough for NFC strings.)
And a path from the filesystem is actually in no specific
encoding on Unix. We only know it is byte based and uses ASCII
'/' and '\0' as delimiters. On Windows it is ushort based IIRC.
To make matters more messy, Gtk assumes Unicode, while Qt
assumes the user's locale for file names. And in reality it is
determined by the IO charset at mount time.
--
Marco
Walter Bright via Digitalmars-d
2014-09-28 18:08:13 UTC
Permalink
A question: can you list some languages that represent UTF-8 narrow
strings as arrays of single bytes?
C and C++.
Marco Leise via Digitalmars-d
2014-09-29 15:12:44 UTC
Permalink
Am Sun, 28 Sep 2014 11:08:13 -0700
Post by Walter Bright via Digitalmars-d
A question: can you list some languages that represent UTF-8 narrow
strings as arrays of single bytes?
C and C++.
Not really; C strings are in the "C locale", not any specific
encoding. I.e., C _cannot_ deal with UTF-8 specifically.
Assuming UTF-8 would lead to funny output on consoles and the
like when passing D strings to C functions. :)
--
Marco
Walter Bright via Digitalmars-d
2014-09-27 22:54:23 UTC
Permalink
What we're seeing here is pretty much the same problem that early C++
suffered from: abstraction penalty. It took years of work to overcome
it, both in the compiler and in the library. Not having trivial
functions inlined and optimized down through standard techniques like
dead store elimination, value range propagation, various loop
restructurings, etc. means that code will look like what Walter and you
have shown. Given DMD's relatively weak inliner, I'm not shocked by
Walter's example. I am curious why ldc failed to inline those functions.
Again, this accumulation of barnacles is not a failure of the optimizer. It's a
failure of adding gee-gaws to the source code without checking their effect.
Brad Roberts via Digitalmars-d
2014-09-27 22:59:55 UTC
Permalink
Post by Walter Bright via Digitalmars-d
[...]
Again, this accumulation of barnacles is not a failure of the optimizer.
It's a failure of adding gee-gaws to the source code without checking
their effect.
Look at Peter's example; it's better for this, I believe. Why isn't
empty being inlined? That's a tiny little function with a lot of impact.

Of course there's more than just optimization, but it's a big player in
the game too.
Walter Bright via Digitalmars-d
2014-09-27 23:02:37 UTC
Permalink
Look at Peter's example; it's better for this, I believe. Why isn't empty
being inlined? That's a tiny little function with a lot of impact.
It's the autodecoding front(), which is a fairly complex function.
Andrei Alexandrescu via Digitalmars-d
2014-09-28 00:23:32 UTC
Permalink
Post by Walter Bright via Digitalmars-d
Look at Peter's example; it's better for this, I believe. Why isn't empty
being inlined? That's a tiny little function with a lot of impact.
It's the autodecoding front(), which is a fairly complex function.
front() should follow a simple pattern that's been very successful in
HHVM: a small inline function that covers most cases with "if (c < 0x80)",
followed by an out-of-line function for the multi-byte case. That
approach would make the cost of auto-decoding negligible in the
overwhelming majority of cases.
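A sketch of that shape (frontSlow is a hypothetical name for the
out-of-line path):

import std.utf : decode;

dchar front(const(char)[] s)
{
    assert(s.length);
    immutable c = s[0];
    return c < 0x80 ? c : frontSlow(s);  // ASCII stays on the inline fast path
}

private dchar frontSlow(const(char)[] s)
{
    size_t i = 0;
    return decode(s, i);  // full multi-byte decode, out of line
}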

Andrei
Martin Nowak via Digitalmars-d
2014-09-29 02:08:15 UTC
Permalink
Post by Andrei Alexandrescu via Digitalmars-d
front() should follow a simple pattern that's been very successful in
HHVM: a small inline function that covers most cases with "if (c < 0x80)",
followed by an out-of-line function for the multi-byte case. That
approach would make the cost of auto-decoding negligible in the
overwhelming majority of cases.
Andrei
Well, we've been using the same trick for 3 years already :).
https://github.com/D-Programming-Language/phobos/pull/299
Martin Nowak via Digitalmars-d
2014-09-29 02:29:15 UTC
Permalink
Post by Walter Bright via Digitalmars-d
It's the autodecoding front(), which is a fairly complex function.
At least for dmd it's caused by a long-standing compiler bug.
https://issues.dlang.org/show_bug.cgi?id=7625

https://github.com/D-Programming-Language/phobos/pull/2566
David Nadlinger via Digitalmars-d
2014-09-27 23:15:25 UTC
Permalink
On Saturday, 27 September 2014 at 23:00:20 UTC, Brad Roberts via
Post by Brad Roberts via Digitalmars-d
Look at Peter's example; it's better for this, I believe. Why
isn't empty being inlined? That's a tiny little function with
a lot of impact.
This is most likely due to an issue with how the new DMD template
emission strategy (needsCodegen() et al.) was integrated into
LDC: https://github.com/ldc-developers/ldc/issues/674

The issue in question was fixed recently in LDC Git master, but
regressed again when 2.066 was merged.

David
Andrei Alexandrescu via Digitalmars-d
2014-09-28 00:18:56 UTC
Permalink
Post by Walter Bright via Digitalmars-d
Again, this accumulation of barnacles is not a failure of the optimizer.
It's a failure of adding gee-gaws to the source code without checking
their effect.
The Go project has something nice set up - easy-to-run benchmarks that
are part of the acceptance testing. That's good prevention against
creeping inefficiencies. -- Andrei
deadalnix via Digitalmars-d
2014-09-28 00:34:44 UTC
Permalink
On Saturday, 27 September 2014 at 22:11:39 UTC, H. S. Teoh via
Post by H. S. Teoh via Digitalmars-d
I vaguely recall somebody mentioning a while back that range-based code
is poorly optimized because compilers weren't designed to recognize such
patterns before. I wonder if there are ways for the compiler to
recognize range primitives and apply special optimizations to them.
I do find, though, that gdc -O3 generally tends to do a pretty good job
of reducing range-based code to near-minimal assembly. Sadly, dmd is
changing too fast for gdc releases to catch up with the latest and
greatest, so I haven't been using gdc very much recently. :-(
That was me, specifically for LLVM (I don't know much about GCC's
innards). Hopefully, this is being worked on (as it also impacts
C++'s stdlib).
Andrei Alexandrescu via Digitalmars-d
2014-09-28 00:12:55 UTC
Permalink
Post by Walter Bright via Digitalmars-d
https://issues.dlang.org/show_bug.cgi?id=13541
https://issues.dlang.org/show_bug.cgi?id=13542
https://issues.dlang.org/show_bug.cgi?id=13543
https://issues.dlang.org/show_bug.cgi?id=13544
I'm sure there's much more in std.file (and elsewhere) that can be done.
Noice. It should be noted, though, that reading the actual file will
dominate execution time. -- Andrei
Martin Nowak via Digitalmars-d
2014-09-29 16:43:10 UTC
Permalink
Post by Andrei Alexandrescu via Digitalmars-d
Noice. It should be noted, though, that reading the actual file will
dominate execution time. -- Andrei
Absolutely, and we should have different priorities right now.

It would also help to fix compiler bugs that regularly cause performance
regressions.

https://issues.dlang.org/show_bug.cgi?id=7625
https://issues.dlang.org/buglist.cgi?keywords=performance&resolution=---
via Digitalmars-d
2014-09-29 17:07:30 UTC
Permalink
Post by Martin Nowak via Digitalmars-d
Post by Andrei Alexandrescu via Digitalmars-d
Noice. It should be noted, though, that reading the actual file will
dominate execution time. -- Andrei
Absolutely, and we should have different priorities right now.
That is not obvious. Modern parsing techniques that deal with
ambiguity are O(N^3) to O(N^4); add to this the desire for
lexer-free parsers and you'll start to see that performance
matters.
Andrei Alexandrescu via Digitalmars-d
2014-09-29 17:23:38 UTC
Permalink
Post by Martin Nowak via Digitalmars-d
Post by Andrei Alexandrescu via Digitalmars-d
Noice. It should be noted, though, that reading the actual file will
dominate execution time. -- Andrei
Absolutely, and we should have different priorities right now.
It would also help to fix compiler bugs that regularly cause performance regressions.
https://issues.dlang.org/show_bug.cgi?id=7625
https://issues.dlang.org/buglist.cgi?keywords=performance&resolution=---
Totally, thank you. -- Andrei
Dmitry Olshansky via Digitalmars-d
2014-09-28 11:46:11 UTC
Permalink
Post by Walter Bright via Digitalmars-d
From time to time, I take a break from bugs and enhancements and just
look at what some piece of code is actually doing. Sometimes, I'm appalled.
[...]
http://www.nbcnews.com/id/38545625/ns/technology_and_science-science/t/king-tuts-chariots-were-formula-one-cars/#.VCceNmd0xjs
http://untappedcities.com/2012/10/31/roulez-carrosses-carriages-of-versailles-arrive-in-arras/
https://github.com/D-Programming-Language/phobos/blob/master/std/file.d
void copy(in char[] from, in char[] to) {
immutable result = CopyFileW(from.tempCStringW(),
to.tempCStringW(), false);
if (!result)
throw new FileException(to.idup);
}
In all honesty - 2 RAII structs w/o inlining + setting up an exception
frame + creating and allocating an exception + idup-ing a string does
account for about this much.
Post by Walter Bright via Digitalmars-d
[...]
which is TWICE as much generated code as for D1's copy(), which does the
same thing. No, it is not because D2's compiler sux. It's because it has
become encrustified with gee-gaws, jewels, decorations, and other crap.
https://issues.dlang.org/show_bug.cgi?id=13541
https://issues.dlang.org/show_bug.cgi?id=13542
https://issues.dlang.org/show_bug.cgi?id=13543
https://issues.dlang.org/show_bug.cgi?id=13544
I'm sure there's much more in std.file (and elsewhere) that can be done.
Guys, when developing Phobos/Druntime code, please look at the assembler
once in a while and see what is being wrought. You may be appalled, too.
--
Dmitry Olshansky
Walter Bright via Digitalmars-d
2014-09-28 18:06:04 UTC
Permalink
In all honesty - 2 RAII structs w/o inlining + setting up an exception
frame + creating and allocating an exception + idup-ing a string does
account for about this much.
Twice as much generated code as actually necessary, and this is just for 3 lines
of source code.
Dicebot via Digitalmars-d
2014-09-29 12:43:31 UTC
Permalink
I refuse to accept any codegen complaints based on DMD. Its
optimization facilities are generally crappy compared to gdc /
ldc and not worth caring about - it is just a reference
implementation, after all. Clean and concise library code is more
important.

Now if the same inlining failure happens with the other two
compilers - that is something worth talking about (I don't know
if it does).
Dmitry Olshansky via Digitalmars-d
2014-09-30 18:15:12 UTC
Permalink
Post by Dicebot via Digitalmars-d
I refuse to accept any codegen complaints based on DMD. Its
optimization facilities are generally crappy compared to gdc / ldc and
not worth caring about - it is just a reference implementation, after
all. Clean and concise library code is more important.
Now if the same inlining failure happens with the other two compilers -
that is something worth talking about (I don't know if it does).
+1
--
Dmitry Olshansky