Post by Jacob Carlborg via Digitalmars-d
1. We're not breaking code where it wasn't broken previously
2. We're fixing broken code. That is, cases where opEquals and opCmp == 0 gave
inconsistent results.
Code that worked perfectly fine before is now slower, because it's
using opCmp for opEquals when it wasn't before.
I don't understand why you keep bringing up the point of being slower. I
thought the whole point of D was to be safe first, then performant if
you ask for it. In this case, sure there will be a (small!) performance
hit, but then the solution is just to define opEquals yourself -- which
you should have been doing in the first place! So this is really just
prodding the programmer in the right direction.
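For the record, "just define opEquals yourself" is a one-liner. A minimal sketch (Point is a hypothetical type, not anything from the thread):

```d
struct Point
{
    int x, y;

    // Custom ordering: by x, then by y.
    int opCmp(const Point rhs) const
    {
        if (x != rhs.x) return x < rhs.x ? -1 : 1;
        if (y != rhs.y) return y < rhs.y ? -1 : 1;
        return 0;
    }

    // Explicit opEquals, so == never has to fall back on opCmp == 0.
    bool opEquals(const Point rhs) const
    {
        return x == rhs.x && y == rhs.y;
    }
}
```

With opEquals spelled out like this, == stays a plain field comparison and pays no opCmp overhead.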
Even worse, if you define opEquals, you're then forced to define
toHash, which is much harder to get right.
If you're redefining opCmp and opEquals, I seriously question whether
the default toHash actually produces the correct result. If it does, that
raises the question: what's the point of redefining opCmp?
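To make that concrete, here's a sketch (Name is a hypothetical type) of how a custom opCmp can silently disagree with the default field-wise toHash:

```d
import std.uni : sicmp;

struct Name
{
    string value;

    // Case-insensitive ordering.
    int opCmp(const Name rhs) const
    {
        return sicmp(value, rhs.value);
    }
}

unittest
{
    auto a = Name("Alice"), b = Name("alice");
    assert(a.opCmp(b) == 0); // equal as far as opCmp is concerned
    // But the default toHash hashes the raw string bytes, so these
    // "equal" values would land in different hash buckets, e.g.
    // typeid(Name).getHash(&a) need not equal typeid(Name).getHash(&b).
}
```

Values that compare equal but hash differently break any hash-based container, which is exactly the kind of subtle wrongness being argued about here.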
So, in order to avoid a performance hit on opEquals from defining
opCmp, you now have to define toHash, which significantly increases
the chances of bugs. And regardless of the increased risk of bugs,
it's extra code that you shouldn't need to write anyway, because the
normal, default opEquals and toHash worked just fine.
I honestly have no sympathy for anyone who defined opCmp to be
different from the default opEquals but didn't define opEquals.
Getting that right is simple, and it's trivial to test if you're unit
testing like you should be.
Frankly, I find this rather incongruous. First you say that requiring
programmers to define toHash themselves is too high an expectation, then
you say that you have no sympathy for these same programmers 'cos they
can't get their opEquals code right. If it's too much to expect them to
write toHash properly, why would we expect them to write opEquals
correctly either? But if they *are* expected to get opEquals right, then
why is it a problem for them to also get toHash right? I'm honestly
baffled at what your point is.
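For what it's worth, the consistency check the quoted post calls trivial really is a few lines of unittest (Version is a hypothetical type used for illustration):

```d
struct Version
{
    int major, minor;

    int opCmp(const Version rhs) const
    {
        if (major != rhs.major) return major < rhs.major ? -1 : 1;
        if (minor != rhs.minor) return minor < rhs.minor ? -1 : 1;
        return 0;
    }

    bool opEquals(const Version rhs) const
    {
        return major == rhs.major && minor == rhs.minor;
    }
}

unittest
{
    auto a = Version(1, 2), b = Version(1, 2), c = Version(1, 3);
    // Consistency: a == b exactly when a.opCmp(b) == 0.
    assert((a == b) == (a.opCmp(b) == 0));
    assert((a == c) == (a.opCmp(c) == 0));
}
```

If opEquals and opCmp ever drift apart, this test catches it immediately, which is the whole argument for requiring the explicit definition.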
I don't want to pay in my code just to make the compiler friendlier to
someone who didn't even bother to do something so simple.
And you don't have to. You just define opEquals correctly as you have
always done, and you pay *nothing*. The only time you pay is when you
forgot to define opEquals -- in which case, which is worse, bad
performance, or incorrect code? Perhaps you have different priorities,
but I'd rather have bad performance than incorrect code, especially
*subtly* wrong code that's very difficult to track down.
I'd much rather be able to take advantage of the fast, default
opEquals and correct toHash than be forced to define them just because
I defined opCmp and didn't want a performance hit on opEquals.
So perhaps we should implement `bool opEquals = default;`.
My program has no bugs! Only unintentional features...