Discussion: Why don't other programming languages have ranges?
Justin Johansson
2010-07-24 13:36:34 UTC
It sounds like the D PL has invented the range idiom unlike any other PL.

Since the dawn of PL's, which must be about 50 years now since Lisp
for example, it is hard to imagine a new PL inventing a completely
new idiom as "ranges" seem to purport. Given the many academic
arguments for ranges as opposed to iterators, why is it that the
D PL seems to be the major if not only sponsor of the range idiom?

Is D really taking the lead here and is it likely that other PL's
will eventually take D's lead?

Cheers
Justin Johansson
Michal Minich
2010-07-24 13:49:43 UTC
maybe this will give some pointers

http://lambda-the-ultimate.org/node/3520
Jonathan M Davis
2010-07-24 14:56:48 UTC
Post by Justin Johansson
It sounds like the D PL has invented the range idiom unlike any other PL.
Since the dawn of PL's, which must be about 50 years now since Lisp
for example, it is hard to imagine a new PL inventing a completely
new idiom as "ranges" seem to purport. Given the many academic
arguments for ranges as opposed to iterators, why is it that the
D PL seems to be the major if not only sponsor of the range idiom?
Is D really taking the lead here and is it likely that other PL's
will eventually take D's lead?
Cheers
Justin Johansson
Part of it probably comes from how limited a role iterators play in most languages. Most
languages use them only for iteration and perhaps to aid in insertion into or erasure
from a container. They don't generally pass them around to algorithms and such.
C++ and D are the only ones that I'm aware of which have done so. There are, of
course, problems with using iterators in that fashion. Java's and C#'s solution
was to put that power on the list of C++ features they didn't adopt, and so their
iterators aren't powerful enough to really be used in algorithms. With D, it was
decided to keep that power, and ranges were the solution to the problems with C++
iterators.
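To make that concrete, here is a minimal sketch (mine, not Jonathan's; it assumes
present-day std.algorithm and nothing from this thread) of a D slice being handed to
algorithms as a range, with no iterator pair anywhere:

import std.algorithm : filter, map;
import std.stdio : writeln;

void main()
{
    int[] xs = [1, 2, 3, 4, 5];

    // The slice itself is the range: it is passed around whole,
    // never as a begin/end iterator pair.
    auto result = xs.filter!(x => x % 2 == 1)   // lazily keep the odd values
                    .map!(x => x * 10);         // lazily scale them

    writeln(result); // [10, 30, 50]
}

The same value can keep flowing through further algorithms, which is the "passing
them around" that iterators rarely get to do in other languages.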

Regardless, there are a number of ideas floating out there which never make their
way into a programming language - particularly serious and well-used programming
languages. That doesn't necessarily mean that they're bad ideas, but the
creators of the major programming languages either weren't aware of them, didn't
like them for some reason, or couldn't fit them into their language for one
reason or another. It doesn't help that programming languages aren't really
invented all that often - or at least it's not all that often that new ones come
along which actually become heavily used. Also, most languages which are heavily
used are 10+ years old if not 20+, so many newer ideas came along too late to be
incorporated into most of the major programming languages or their standard
libraries and philosophies.

As far as I can tell, ranges have not been a big thing in any way shape or form.
Andrei didn't invent them, but he appears to be the first to put them into major
use - at least in a manner that would get other programmers to use them rather
than them just being used for a particular project or company. So, they don't
appear to be a particularly well-known idea at this point - certainly not a
heavily used one.

- Jonathan M Davis
Leandro Lucarella
2010-07-24 15:43:21 UTC
Post by Justin Johansson
It sounds like the D PL has invented the range idiom unlike any other PL.
Since the dawn of PL's, which must be about 50 years now since Lisp
for example, it is hard to imagine a new PL inventing a completely
new idiom as "ranges" seem to purport. Given the many academic
arguments for ranges as opposed to iterators, why is it that the
D PL seems to be the major if not only sponsor of the range idiom?
Is D really taking the lead here and is it likely that other PL's
will eventually take D's lead?
http://www.boost.org/doc/libs/1_42_0/libs/range/index.html
--
Leandro Lucarella (AKA luca) http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
This is what you get,
when you mess with us.
Walter Bright
2010-07-24 18:05:28 UTC
Post by Justin Johansson
It sounds like the D PL has invented the range idiom unlike any other PL.
Pointer programming is deeply embedded into the C++ culture, and iterators segue
nicely into that culture. For D, however, programming revolves around arrays,
and ranges fit naturally into that.

It'll take years, but I'll be very surprised if ranges don't filter into many
major languages, as well as other ideas that D has proven to be solid.
levenshtein
2010-07-25 11:11:21 UTC
Post by Walter Bright
Post by Justin Johansson
It sounds like the D PL has invented the range idiom unlike any other PL.
Pointer programming is deeply embedded into the C++ culture, and iterators segue
nicely into that culture. For D, however, programming revolves around arrays,
and ranges fit naturally into that.
It'll take years, but I'll be very surprised if ranges don't filter into many
major languages, as well as other ideas that D has proven to be solid.
At least the C++ fellows already stole your 'auto' type inference and the new template functionality. C# stole your delegate system. They even use the same terms. The world dominance already started.
Peter Alexander
2010-07-25 12:30:02 UTC
Post by levenshtein
Post by Walter Bright
Post by Justin Johansson
It sounds like the D PL has invented the range idiom unlike any other PL.
Pointer programming is deeply embedded into the C++ culture, and iterators segue
nicely into that culture. For D, however, programming revolves around arrays,
and ranges fit naturally into that.
It'll take years, but I'll be very surprised if ranges don't filter into many
major languages, as well as other ideas that D has proven to be solid.
At least the C++ fellows already stole your 'auto' type inference and the new template functionality. C# stole your delegate system. They even use the same terms. The world dominance already started.
Not to belittle D, but type inference was around long before D came on
the scene, and I don't think they got the use of the auto keyword from D
(auto was already an (essentially unused) keyword in C++).
levenshtein
2010-07-25 13:01:51 UTC
Post by Peter Alexander
Post by levenshtein
Post by Walter Bright
Post by Justin Johansson
It sounds like the D PL has invented the range idiom unlike any other PL.
Pointer programming is deeply embedded into the C++ culture, and iterators segue
nicely into that culture. For D, however, programming revolves around arrays,
and ranges fit naturally into that.
It'll take years, but I'll be very surprised if ranges don't filter into many
major languages, as well as other ideas that D has proven to be solid.
At least the C++ fellows already stole your 'auto' type inference and the new template functionality. C# stole your delegate system. They even use the same terms. The world dominance already started.
Not to belittle D, but type inference was around long before D came on
the scene, and I don't think they got the use of the auto keyword from D
(auto was already an (essentially unused) keyword in C++).
Type inference might have been around, but I believe it was A. Alexandrescu's influence that made C++0x adopt the same 'auto' keyword for type inference. You can see here:

http://en.wikipedia.org/wiki/C++0x#Type_inference

It's essentially copied from D. Seems funny, but the mighty C++ committee is actually listening to us.
Don
2010-07-25 18:08:34 UTC
Post by levenshtein
Post by Peter Alexander
Post by levenshtein
Post by Walter Bright
Post by Justin Johansson
It sounds like the D PL has invented the range idiom unlike any other PL.
Pointer programming is deeply embedded into the C++ culture, and iterators segue
nicely into that culture. For D, however, programming revolves around arrays,
and ranges fit naturally into that.
It'll take years, but I'll be very surprised if ranges don't filter into many
major languages, as well as other ideas that D has proven to be solid.
At least the C++ fellows already stole your 'auto' type inference and the new template functionality. C# stole your delegate system. They even use the same terms. The world dominance already started.
Not to belittle D, but type inference was around long before D came on
the scene, and I don't think they got the use of the auto keyword from D
(auto was already an (essentially unused) keyword in C++).
http://en.wikipedia.org/wiki/C++0x#Type_inference
It's essentially copied from D. Seems funny, but the mighty C++ committee is actually listening to us.
C++0x announced that it was going to use the 'auto' keyword, so D copied
it from C++0x. But D implemented it before C++ did. <g>.
dsimcha
2010-07-25 18:19:03 UTC
== Quote from Don (nospam at nospam.com)'s article
Post by Don
Post by levenshtein
Post by Peter Alexander
Post by levenshtein
Post by Walter Bright
Post by Justin Johansson
It sounds like the D PL has invented the range idiom unlike any other PL.
Pointer programming is deeply embedded into the C++ culture, and iterators segue
nicely into that culture. For D, however, programming revolves around arrays,
and ranges fit naturally into that.
It'll take years, but I'll be very surprised if ranges don't filter into many
major languages, as well as other ideas that D has proven to be solid.
At least the C++ fellows already stole your 'auto' type inference and the
new template functionality. C# stole your delegate system. They even use the same
terms. The world dominance already started.
Post by Don
Post by levenshtein
Post by Peter Alexander
Not to belittle D, but type inference was around long before D came on
the scene, and I don't think they got the use of the auto keyword from D
(auto was already an (essentially unused) keyword in C++).
Type inference might have been around, but I believe it was A. Alexandrescu's
influence that made C++0x adopt the same 'auto' keyword for type inference. You can see here:
Post by Don
Post by levenshtein
http://en.wikipedia.org/wiki/C++0x#Type_inference
It's essentially copied from D. Seems funny, but the mighty C++ committee is
actually listening to us.
Post by Don
C++0x announced that it was going to use the 'auto' keyword, so D copied
it from C++0x. But D implemented it before C++ did. <g>.
This is my general impression of D as a language. It's not a very innovative
language in the sense that everything in it has been done somewhere, in some
context, and it breaks no completely new ground. However, its contribution to the
programming world is extremely important nonetheless in that it takes the best
features from lots of different languages, including some researchy languages that
almost no one uses in production, makes them work (more or less) well together, and
makes them all usable from one **practical, production-oriented** language.

Basically, my take as a practical programmer rather than a theoretical comp-sci
researcher is "Who cares if it's been done before if it's not implemented in any
practical language?".
Walter Bright
2010-07-25 18:42:52 UTC
Post by dsimcha
Basically, my take as a practical programmer rather than a theoretical comp-sci
researcher is "Who cares if it's been done before if it's not implemented in any
practical language?".
There's rarely anything truly new in programming. I agree with you that turning
an idea into something useful is where it's at, and how it's done. For example,
there's the idea of putting windows in a house, and then there's the design,
size, and placement of those windows that make it matter.

This is hardly unique to programming. For example, the Vikings 'discovered'
America centuries before Columbus. But they did no followup, forgot about it,
and it had no lasting impact.

Many argue that others invented the airplane before the Wright brothers. But
none of those claims can be properly documented, none of them did any followup,
and none of them had any impact on the evolution of airplane design.

Led Zeppelin based several of its major hits on obscure blues songs. Zep should
have given credit at the time, but also, if Zep hadn't reworked those songs into
hits, those songs would be quite forgotten.
Andrei Alexandrescu
2010-07-25 19:10:35 UTC
Post by Justin Johansson
It sounds like the D PL has invented the range idiom unlike any other PL.
Since the dawn of PL's, which must be about 50 years now since Lisp
for example, it is hard to imagine a new PL inventing a completely
new idiom as "ranges" seem to purport. Given the many academic
arguments for ranges as opposed to iterators, why is it that the
D PL seems to be the major if not only sponsor of the range idiom?
Is D really taking the lead here and is it likely that other PL's
will eventually take D's lead?
Cheers
Justin Johansson
I thought about this for a couple of days. Clearly, as the person using
ranges in D consistently, I'm very much at risk of accepting theories
that make me look and feel good, so even though I'm trying to stay
aware of that risk, I'm probably too biased for my answer to be useful.

At the same time there are a few lucid points that I feel compelled to note.

First, I agree with Walter that D's arrays have been a very strong
influence on ranges. Pointers pervade C and iterators were a natural
generalization of them; slices pervade D, and for a long time I was very
explicitly preoccupied with finding a formalism that legitimizes and
formalizes slices. Though it was clear that slices do offer access to
their underlying pointers (s.ptr and s.ptr + s.length), it was also
clear that that's an unclean mechanism, one of last resort that should
not be part of the formalism.
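As a small illustration of the point (the code is mine, not Andrei's), the slice is
the everyday window into an array that D code works with, and .ptr/.length are
exactly the unclean escape hatch he mentions:

void main()
{
    int[] data = [10, 20, 30, 40];
    int[] s = data[1 .. 3];          // a slice: the view D programmers pass around

    assert(s.length == 2 && s[0] == 20 && s[1] == 30);

    // The last-resort escape hatch: the underlying pointer pair.
    assert(s.ptr == data.ptr + 1);
    assert(s.ptr + s.length == data.ptr + 3);
}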

Second, clearly ranges are derivative of STL iterators, and STL iterators
Post by Justin Johansson
Since the dawn of PL's, which must be about 50 years now since Lisp
for example, it is hard to imagine a new PL inventing a completely
new idiom as "ranges" seem to purport.
Just wind the clock back by 16 years. We're now in 1994, and there had
been 34 years since Lisp. However, Stepanov did come up with a revolutionary
new idea.

Personally I'm not as much amazed about the time taken, as about the
fact that many people _today_ don't figure what's good about the STL. My
theory (which I expanded in my article On Iteration) is that STL
requires some unusual and borderline obscure PL features, and because of
that it could only be implemented in C++ using means that were at the
time highly non-idiomatic. The net result was that STL's beautiful
design was difficult to distinguish from the wall of syntax that
surrounded it. As such, most people who weren't C++ programmers didn't
"get" it and only some C++ programmers "got" it (even Bjarne took some
time). In my humble opinion, the design of Java, C#, and Go is proof
that their authors didn't get the STL. Otherwise, those languages would
be very different.

I strongly believe Walter got the STL and generic programming in
general. He might be fuzzy about some minor details, but he is plenty
good at plenty other things and always had a good listening ear for the
importance of genericity.

Third, ranges were "in the air" already at the time I formalized them.
Boost and Adobe had notions of "range", even though all their primitives
were to expose begin() and end(), so they were essentially lackeys of
the STL iterator abstraction. People were talking about "range" whenever
they discussed two iterators delimiting a portion of a container. It was
only a matter of time until someone said, hey, let's make range a
first-class abstraction.
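For readers who have not seen what that first-class abstraction turned into, here is
a minimal sketch of a D input range (the Counter type is hypothetical; only the
empty/front/popFront interface is D's):

import std.range : isInputRange;

// A toy range over the integers [lo, hi): the whole iteration state
// travels as one value instead of as a pair of iterators.
struct Counter
{
    int lo, hi;

    @property bool empty() const { return lo >= hi; }
    @property int front() const { return lo; }
    void popFront() { ++lo; }
}

static assert(isInputRange!Counter);

void main()
{
    int sum = 0;
    foreach (x; Counter(0, 5))  // foreach drives the three primitives directly
        sum += x;
    assert(sum == 10);          // 0 + 1 + 2 + 3 + 4
}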


Andrei
dsimcha
2010-07-25 19:38:59 UTC
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail at erdani.org)'s article
Post by Andrei Alexandrescu
Personally I'm not as much amazed about the time taken, as about the
fact that many people _today_ don't figure what's good about the STL. My
theory (which I expanded in my article On Iteration) is that STL
requires some unusual and borderline obscure PL features, and because of
that it could only be implemented in C++ using means that were at the
time highly non-idiomatic. The net result was that STL's beautiful
design was difficult to distinguish from the wall of syntax that
surrounded it. As such, most people who weren't C++ programmers didn't
"get" it and only some C++ programmers "got" it (even Bjarne took some
time). In my humble opinion, the design of Java, C#, and Go is proof
that their authors didn't get the STL. Otherwise, those languages would
be very different.
I see at least a few reasons why people (including me before ranges came along and
made me see in hindsight what STL was trying to do, but didn't quite get right)
miss the underlying beauty of STL:

1. Despite being extremely generic and good at making complicated things
possible, STL sucks compared to both std.range/std.algorithm and
Python/Ruby/Perl/PHP duck typing at making simple things simple.

a. Iterators come in pairs (both horribly verbose and statically uncheckable).
Yes, this enables a few things that wouldn't otherwise be possible, but at the
expense of making the other ~98% of use cases much more awkward and dangerous.
IMHO this is a very bad tradeoff.

b. Calling the iteration syntax horrible would be an understatement. Iterating
over a container (via iterators/ranges) is something that's done so often that
even small improvements in syntactic clarity and terseness are worth reaching for.
C++'s way of writing a loop to iterate over a container via iterators looks about
the same before and after RSA encryption, and the encrypted message looks really long.

2. Lack of variadic templates. This is a key thing that really makes
std.range/std.algorithm tick. Without them, you can't have chain or zip, and map
and reduce are severely neutered (see the sketch after this list).

Breaking outside of standard lib stuff into userland, I've found taking a variable
number of generic ranges to be unbelievably useful in my dstats library, and I use
it for everything from mutual information to regression to ANOVA to partial
correlations.

3. Lack of powerful compile-time reflection/introspection. Pretty much all
compile time reflection/introspection in C++ is a massive kludge, which makes it
virtually impossible for mere mortals to write their own generic algorithms and
containers in all but the simplest cases. They can still use ones written by
gurus, but it's a lot easier to appreciate stuff like this when you can actually
make your own.
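As referenced under point 2, here is a short sketch of what the variadic building
blocks buy (it uses today's std.range.chain and zip; the numbers are only for
illustration):

import std.algorithm : map, sum;
import std.range : chain, iota, zip;
import std.stdio : writeln;

void main()
{
    auto a = [1, 2, 3];
    auto b = iota(10, 13);              // 10, 11, 12

    // chain and zip accept any number of ranges, of differing types;
    // interfaces like these are what variadic templates make possible.
    auto joined = chain(a, b, [100]);
    writeln(joined.sum());              // 1+2+3 + 10+11+12 + 100 = 139

    auto products = zip(a, b).map!(t => t[0] * t[1]);
    writeln(products.sum());            // 10 + 22 + 36 = 68
}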
Don
2010-07-25 19:45:48 UTC
Post by Andrei Alexandrescu
Post by Justin Johansson
It sounds like the D PL has invented the range idiom unlike any other PL.
Since the dawn of PL's, which must be about 50 years now since Lisp
for example, it is hard to imagine a new PL inventing a completely
new idiom as "ranges" seem to purport. Given the many academic
arguments for ranges as opposed to iterators, why is it that the
D PL seems to be the major if not only sponsor of the range idiom?
Is D really taking the lead here and is it likely that other PL's
will eventually take D's lead?
Third, ranges were "in the air" already at the time I formalized them.
Boost and Adobe had notions of "range", even though all their primitives
were to expose begin() and end(), so they were essentially lackeys of
the STL iterator abstraction. People were talking about "range" whenever
they discussed two iterators delimiting a portion of a container. It was
only a matter of time until someone said, hey, let's make range a
first-class abstraction.
It's also worth noting that one of the primary advocates of ranges in
C++ was Matthew Wilson, who was hugely influential in the early years of
D. Even the limited ranges that exist in Boost are somewhat influenced
by D.
Walter Bright
2010-07-25 21:04:38 UTC
Post by Andrei Alexandrescu
I strongly believe Walter got the STL and generic programming in
general. He might be fuzzy about some minor details, but he is plenty
good at plenty other things and always had a good listening ear for the
importance of genericity.
To be fair, it took me many years to get it. STL's brilliance was nearly
completely obscured by the syntax of it, and I was thoroughly misled by that.

Bartosz Milewski once gave a fantastic talk where he showed some type
metaprogramming in Haskell. I don't know Haskell, but the examples were one
liners and easily understood. Then he showed the equivalent using C++ template
metaprogramming, and it was many lines of complex syntax. Then, the brilliant
part was he highlighted the Haskell bits that were embedded in the C++ template
syntax. It was one of those "ahaa!" moments where suddenly it made sense.
Post by Andrei Alexandrescu
Third, ranges were "in the air" already at the time I formalized them.
Boost and Adobe had notions of "range", even though all their primitives
were to expose begin() and end(), so they were essentially lackeys of
the STL iterator abstraction. People were talking about "range" whenever
they discussed two iterators delimiting a portion of a container. It was
only a matter of time until someone said, hey, let's make range a
first-class abstraction.
In the early days of D, we talked about using arrays as the basis for the "D
Template Library" rather than pointers. I can't find the thread about it, though.
bearophile
2010-07-25 21:54:30 UTC
Post by Andrei Alexandrescu
In my humble opinion, the design of Java, C#, and Go is proof
that their authors didn't get the STL. Otherwise, those languages would
be very different.
I don't believe you. Among the designers of Java, C# and Go there are people that are both experts and smart. C# designers have shown to be sometimes smarter than D designers. So I think some of them 'get the STL' and understand its advantages, but despite this in the universe they have found some other reasons to refuse it. I think they were unwilling to pay the large price in language complexity, bug-prone-ness, code bloat and compilation speed that C++ and D are willing to pay.

Here you can find why C# designers have refused C++-style templates & STL and chosen the generics instead:
http://msdn.microsoft.com/it-it/magazine/cc163683%28en-us%29.aspx
One important problem of C# generics can be solved adding IArithmetic<T>:
http://www.osnews.com/story/7930

I like D templates and I agree that adding them to D1 was a good idea (mostly because D1 is designed to be similar to C++) but you must accept that some language designers can understand STL and still refuse to add all the features necessary to implement it.

Maybe there is a way to build something like STL without C++/D-style templates :-)

Bye,
bearophile
Don
2010-07-25 22:17:16 UTC
Post by bearophile
Post by Andrei Alexandrescu
In my humble opinion, the design of Java, C#, and Go is proof
that their authors didn't get the STL. Otherwise, those languages would
be very different.
I don't believe you. Among the designers of Java, C# and Go there are people that are both experts and smart. C# designers have shown to be sometimes smarter than D designers. So I think some of them 'get the STL' and understand its advantages, but despite this in the universe they have found some other reasons to refuse it. I think they were unwilling to pay the large price in language complexity, bug-prone-ness, code bloat and compilation speed that C++ and D are willing to pay.
http://msdn.microsoft.com/it-it/magazine/cc163683%28en-us%29.aspx
IMHO, that link strongly suggests that they didn't grok the STL. The
pathetic arguments they give look to me like an attempt to rationalize a
decision they'd already made.
Walter Bright
2010-07-25 23:03:41 UTC
Post by bearophile
In my humble opinion, the design of Java, C#, and Go is proof that their
authors didn't get the STL. Otherwise, those languages would be very
different.
I don't believe you. Among the designers of Java, C# and Go there are people
that are both experts and smart.
Of course they are experts and smart. That doesn't mean they won't make
mistakes. Both Java and C# have added generic support only after several cycles.
We'll see how long Go manages to resist adding it. My prediction is Go will not
become a mainstream language without it.
Post by bearophile
C# designers have shown to be sometimes smarter than D designers.
And D sometimes has smarter decisions than C#. I think the statement is meaningless.
Post by bearophile
So I think some of them 'get the STL' and
understand its advantages, but despite this in the universe they have found
some other reasons to refuse it. I think they were unwilling to pay the large
price in language complexity, bug-prone-ness, code bloat and compilation
speed that C++ and D are willing to pay.
I think that's projecting a lot of the weaknesses of the C++ implementation onto
D, and I don't think that is justified. For example, D simply doesn't have the
compilation speed problems C++ has.

The code bloat issue is resolvable with implementation effort, and neither C#
nor Java have ever had any shortage of resources thrown into their development.
The same goes for bug-prone-ness.

D has gone a long way towards reducing the language complexity issue. D
templates are *far* simpler than C++ ones, yet more powerful. I also don't see
how C# generics are simpler (for the user) than D templates, and C# generics are
quite complex under the hood.
Post by bearophile
Here you can find why C# designers have refused C++-style templates & STL and
chosen the generics instead:
http://msdn.microsoft.com/it-it/magazine/cc163683%28en-us%29.aspx
One important problem of C# generics can be solved adding IArithmetic<T>:
http://www.osnews.com/story/7930
I think a better article is here:
http://windowsdevcenter.com/pub/a/oreilly/windows/news/hejlsberg_0800.html

The msdn article makes some misinformed statements.
Post by bearophile
I like D templates and I agree that adding them to D1 was a good idea (mostly
because D1 is designed to be similar to C++) but you must accept that some
language designers can understand STL and still refuse to add all the
features necessary to implement it.
The computer language space is far, far too large for any one programmer (no
matter how expert & smart) to be experienced enough in all the varied paradigms
to make informed tradeoffs.
bearophile
2010-07-26 12:20:53 UTC
Post by Walter Bright
http://windowsdevcenter.com/pub/a/oreilly/windows/news/hejlsberg_0800.html
Thank you for the link.
Post by Walter Bright
Indeed, if Foo is a reference type, and if we do the design right, we can share the instantiation for all reference types.
Bye,
bearophile
Walter Bright
2010-07-26 18:38:39 UTC
Post by bearophile
Post by Walter Bright
http://windowsdevcenter.com/pub/a/oreilly/windows/news/hejlsberg_0800.html
Thank you for the link.
You're welcome, it took me a fair bit of work to find. It's an important
historical document.
levenshtein
2010-07-26 13:14:37 UTC
Post by Walter Bright
Post by bearophile
In my humble opinion, the design of Java, C#, and Go is proof that their
authors didn't get the STL. Otherwise, those languages would be very
different.
I don't believe you. Among the designers of Java, C# and Go there are people
that are both experts and smart.
Of course they are experts and smart. That doesn't mean they won't make
mistakes. Both Java and C# have added generic support only after several cycles.
We'll see how long Go manages to resist adding it. My prediction is Go will not
become a mainstream language without it.
At least the hype around Go shows that the language's authors enjoy a huge respect. Some people believe that because of the divine effort from Rob Pike, no matter how bad Go looks now, it will replace ALL other languages very soon. Go is going to be used in kernel development, in systems programming, in application programming, as a low-level scripting language. It will replace everything.
Post by Walter Bright
Post by bearophile
C# designers have shown to be sometimes smarter than D designers.
And D sometimes has smarter decisions than C#. I think the statement is meaningless.
C# is already a legacy language. Now we have Clojure and Go.
Walter Bright
2010-07-26 18:40:19 UTC
Post by levenshtein
At least the hype around Go shows that the language's authors enjoy a huge
respect.
Not only the authors, but the respect for Google itself. It gives Go an enormous
and enviable head start. But, eventually, Go will have to deliver.
Andrei Alexandrescu
2010-07-26 01:21:05 UTC
Post by bearophile
In my humble opinion, the design of Java, C#, and Go is proof that
their authors didn't get the STL. Otherwise, those languages would
be very different.
I don't believe you. Among the designers of Java, C# and Go there are
people that are both experts and smart. C# designers have shown to be
sometimes smarter than D designers. So I think some of them 'get the
STL' and understand its advantages, but despite this in the universe
they have found some other reasons to refuse it. I think they were
unwilling to pay the large price in language complexity,
bug-prone-ness, code bloat and compilation speed that C++ and D are
willing to pay.
But then their containers and algorithms are markedly inferior to STL's.
They are toxic to the programmers who have only been exposed to them. Had
Java and C# gotten the STL and decided not to go with it, I'm sure they would
have at least gotten a few good bits from it. But no - the entire
Java container library, for example, is parallel to everything the STL stands for.
Post by bearophile
Here you can find why C# designers have refused C++-style templates & STL and
chosen the generics instead:
http://msdn.microsoft.com/it-it/magazine/cc163683%28en-us%29.aspx
One important problem of C# generics can be solved adding IArithmetic<T>:
http://www.osnews.com/story/7930
I like D templates and I agree that adding them to D1 was a good idea
(mostly because D1 is designed to be similar to C++) but you must
accept that some language designers can understand STL and still
refuse to add all the features necessary to implement it.
Maybe there is a way to build something like STL without C++/D-style templates :-)
I'm sure there is, and Java and C# could have come a long way. But
taking their libraries at face value, they are simply ignorant of STL's
values - to their detriment.


Andrei
retard
2010-07-26 13:41:28 UTC
Post by Andrei Alexandrescu
Post by bearophile
In my humble opinion, the design of Java, C#, and Go is proof that
their authors didn't get the STL. Otherwise, those languages would be
very different.
I don't believe you. Among the designers of Java, C# and Go there are
people that are both experts and smart. C# designers have shown to be
sometimes smarter than D designers. So I think some of them 'get the
STL' and understand its advantages, but despite this in the universe
they have found some other reasons to refuse it. I think they were
unwilling to pay the large price in language complexity,
bug-prone-ness, code bloat and compilation speed that C++ and D are
willing to pay.
But then their containers and algorithms are markedly inferior to STL's.
They are toxic to the programmers who have only been exposed to them. Had
Java and C# gotten the STL and decided not to go with it, I'm sure they would
have at least gotten a few good bits from it. But no - the entire
Java container library, for example, is parallel to everything the STL stands for.
I think the Java/C# developers gave up X % of the execution speed to
avoid hard crashes (exceptions instead of segfaults) AND to make it
possible for ordinary people to develop applications. Even Java was too
complex so the less bright people made PHP for the idiot part of the
community (I'm not saying all PHP users are idiots, but if you're an
idiot, you can't develop in Java or C++, but you can fluently write PHP).

It's a sad fact that not everyone is as intelligent as A. Alexandrescu or
W. Bright. A PHP website for a local fishing club doesn't need super
efficient code. Why? There are maybe 2..4 visitors daily and the only
interactive part on the page is a feedback form. A typical C/C++/D
program written by a mediocre developer crashes every 15..60 minutes
because of bug-prone low-level code. That's the reason why there are
"inferior" languages. PHP doesn't crash. You just get an interactive
error message. And you can fix the bug in 1..5 minutes without restarting
the server. A novice coder is willing to sacrifice 99.9% of performance
to get this feature.
Walter Bright
2010-07-26 18:42:54 UTC
Post by retard
I think the Java/C# developers gave up X % of the execution speed to
avoid hard crashes (exceptions instead of segfaults)
1. segfaults *are* exceptions.

2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.
bearophile
2010-07-26 19:29:48 UTC
Post by Walter Bright
1. segfaults *are* exceptions.
Aren't exceptions objects?

Bye,
bearophile
Simen kjaeraas
2010-07-26 20:50:51 UTC
Post by bearophile
Post by Walter Bright
1. segfaults *are* exceptions.
Aren't exceptions objects?
That's Exception, not exception. The latter may be a hardware thing.
--
Simen
Walter Bright
2010-07-26 21:05:40 UTC
Post by bearophile
Post by Walter Bright
1. segfaults *are* exceptions.
Aren't exceptions objects?
That's an implementation detail.
Jim Balter
2010-07-27 04:11:12 UTC
"bearophile" <bearophileHUGS at lycos.com> wrote in message
Post by bearophile
Post by Walter Bright
1. segfaults *are* exceptions.
Aren't exceptions objects?
Bye,
bearophile
Not at all -- exceptions are system-generated events that are implemented in
modern languages by the equivalent of a throw statement, and Exception
objects are used to communicate information about either system- or user-
generated throws. And Walter's point is getting lost -- 'retard'
misidentified the issue as giving up speed by going with "exceptions" to
avoid "hard crashes" such as segfaults, but of course there is no loss of
speed for catching a segfault signal -- an asynchronous event -- and
throwing an Exception object rather than calling a C signal catcher. The
speed cost doesn't come from using Exceptions, it comes from doing array
bounds checking (which, of course, results in an Exception object being
thrown if violated). Of course, in D, if you're desperate for it, you can
regain the speed by giving up the memory safety and using pointers, in which
case an array bounds violation might overwrite something -- and if the thing
overwritten is a pointer, that is likely to result in a segfault -- which is
an exception, which in D will cause an Exception object to be thrown. Of
course, in languages that have them, objects are used to communicate the
information about exceptions, but even in C there are (non standard)
libraries available that provide a catch/throw/exception-passing mechanism
using a stack of setjmp structs and a longjmp for the throw.
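A small D sketch of the trade-off described above (illustrative only; nothing here
is from Jim's post):

void main()
{
    int[] a = new int[4];

    a[3] = 1;        // in bounds: the per-access check passes
    // a[4] = 1;     // out of bounds: the check fails and a RangeError is thrown,
                     // instead of silently overwriting whatever follows the array

    // The unchecked path: raw pointers skip the check entirely,
    // trading the safety net back for speed.
    int* p = a.ptr;
    // p[4] = 1;     // no check; may corrupt memory or eventually segfault
}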
KennyTM~
2010-07-26 20:10:14 UTC
Post by Walter Bright
Post by retard
I think the Java/C# developers gave up X % of the execution speed to
avoid hard crashes (exceptions instead of segfaults)
1. segfaults *are* exceptions.
2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.
Catching exception is easy, but handling (segfault) signal is a mess.
retard
2010-07-26 20:45:58 UTC
Post by KennyTM~
Post by Walter Bright
Post by retard
I think the Java/C# developers gave up X % of the execution speed to
avoid hard crashes (exceptions instead of segfaults)
1. segfaults *are* exceptions.
2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.
Catching exception is easy, but handling (segfault) signal is a mess.
Indeed, I'd like to know how you recover from a segfault without help
from an external process. Sometimes you know that some routine might
fail once in a week, but the program MUST run non-stop for several
months. In Java you can achieve this with exceptions. And you can also
dynamically fix classes with the class loader.
Walter Bright
2010-07-26 21:04:53 UTC
Post by retard
Post by KennyTM~
Post by Walter Bright
Post by retard
I think the Java/C# developers gave up X % of the execution speed to
avoid hard crashes (exceptions instead of segfaults)
1. segfaults *are* exceptions.
2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.
Catching exception is easy, but handling (segfault) signal is a mess.
Indeed, I'd like to know how you recover from a segfault without help
from an external process.
On Windows, it's fairly straightforward. I did it once as part of a gc
implementation that would mark unmodified pages as hardware readonly. It would
catch the seg fault from writes to it, log the page as modified, make the page
writable, and restart the failed instruction.
Post by retard
Sometimes you know that some routine might
fail once in a week, but the program MUST run non-stop for several
months. In Java you can achieve this with exceptions. And you can also
dynamically fix classes with the class loader.
Any program that attempts to achieve reliability by "recovering" from program
bugs and continuing is an extremely badly designed one.

http://www.drdobbs.com/blog/archives/2009/10/safe_systems_fr.html

http://www.drdobbs.com/blog/archives/2009/11/designing_safe.html

Sadly, it's a topic that has not penetrated software engineering instructional
materials, and programmers have to learn it the hard way again and again.
retard
2010-07-27 15:29:58 UTC
Post by Walter Bright
Post by retard
Post by KennyTM~
Post by Walter Bright
Post by retard
I think the Java/C# developers gave up X % of the execution speed to
avoid hard crashes (exceptions instead of segfaults)
1. segfaults *are* exceptions.
2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.
Catching exception is easy, but handling (segfault) signal is a mess.
Indeed, I'd like to know how you recover from a segfault without help
from an external process.
On Windows, it's fairly straightforward. I did it once as part of a gc
implementation that would mark unmodified pages as hardware readonly. It
would catch the seg fault from writes to it, log the page as modified,
make the page writable, and restart the failed instruction.
Post by retard
Sometimes you know that some routine might fail once in a week, but the
program MUST run non-stop for several months. In Java you can achieve
this with exceptions. And you can also dynamically fix classes with the
class loader.
Any program that attempts to achieve reliability by "recovering" from
program bugs and continuing is an extremely badly designed one.
http://www.drdobbs.com/blog/archives/2009/10/safe_systems_fr.html
http://www.drdobbs.com/blog/archives/2009/11/designing_safe.html
Sadly, it's a topic that has not penetrated software engineering
instructional materials, and programmers have to learn it the hard way
again and again.
But these are your articles with no cited sources about the software
methodologies. It seems like they're written afterwards to advertise the
features implemented in D. The contract programming only covers a small
runtime dynamic part of programming. There's no mention about automated
theorem proving. No word about exceptions nor sandboxing with virtual
machines. Why? Because these would make D look ridiculous.

How the web programming world works:

''I'm not a real programmer. I throw together things until it works then
I move on. The real programmers will say "yeah it works but you're
leaking memory everywhere. Perhaps we should fix that." I'll just restart
apache every 10 requests.'' -- Rasmus Lerdorf

It is widely accepted that web applications fail often. It suffices if
the developers fix bad code eventually, but the remaining parts
should work reasonably well. Either a process is restarted or exceptions
are used to catch small error conditions so the server could serve
*something*. I'm talking about practical web applications, not X-ray
machines.
Andrei Alexandrescu
2010-07-27 15:59:31 UTC
Post by retard
Post by Walter Bright
http://www.drdobbs.com/blog/archives/2009/10/safe_systems_fr.html
http://www.drdobbs.com/blog/archives/2009/11/designing_safe.html
Sadly, it's a topic that has not penetrated software engineering
instructional materials, and programmers have to learn it the hard way
again and again.
But these are your articles with no cited sources about the software
methodologies. It seems like they're written afterwards to advertise the
features implemented in D. The contract programming only covers a small
runtime dynamic part of programming. There's no mention about automated
theorem proving. No word about exceptions nor sandboxing with virtual
machines. Why? Because these would make D look ridiculous.
Isn't it actually simpler to reason that Walter defined D according to
his views and expertise?

Andrei
Adam Ruppe
2010-07-27 16:05:01 UTC
Lerdorf's quote strikes me as actually being somewhat close to what
Walter is talking about. Web applications don't focus on making a
thing that never fails, but instead achieve reliability by having an
external watchdog switch to backups - that is, a fresh copy of the
program - when something goes wrong with the first one.

Doing this gives them adequately good results at low cost.
Walter Bright
2010-07-27 19:02:36 UTC
Post by retard
Post by Walter Bright
http://www.drdobbs.com/blog/archives/2009/10/safe_systems_fr.html
http://www.drdobbs.com/blog/archives/2009/11/designing_safe.html
Sadly, it's a topic that has not penetrated software engineering
instructional materials, and programmers have to learn it the hard way
again and again.
But these are your articles
Yes.
Post by retard
with no cited sources about the software methodologies.
As I said, it hasn't penetrated the software literature. That's one of the reasons
I wrote those articles: I feel they're a contribution, not a retread.

I did not invent those ideas; they come from my time working at Boeing, learning how
the aviation system builds reliable systems from unreliable parts. (Not just
the mechanical systems: Boeing applied those rules to its software subsystems too.)
Post by retard
It seems like they're written afterwards to advertise the
features implemented in D.
Or maybe it's the other way around, that the features in D were designed to
support those ideas. Why would I waste my time implementing complex features only
to have to come up with some after-the-fact justification for them? Please
remember that nobody is paying me to do this; I'm not saying "Yes Sir! Three
bags full, Sir!" to some middle-management phb.

I'm often wrong, but I really do believe that D's feature set is a good and
justifiable one.
Post by retard
The contract programming only covers a small
runtime dynamic part of programming. There's no mention about automated
theorem proving. No word about exceptions nor sandboxing with virtual
machines. Why? Because these would make D look ridiculous.
That misses the point about reliability. Again, you're approaching from the
point of view that you can make a program that cannot fail (i.e. prove it
correct). That view is WRONG WRONG WRONG and you must NEVER NEVER NEVER rely on
such for something important, like say your life. Software can (and will) fail
even if you proved it correct, for example, what if a memory cell fails and
flips a bit? Cosmic rays flip a bit?

Are you willing to bet your life?
Post by retard
''I'm not a real programmer. I throw together things until it works then
I move on. The real programmers will say "yeah it works but you're
leaking memory everywhere. Perhaps we should fix that." I'll just restart
apache every 10 requests.'' -- Rasmus Lerdorf
It is widely accepted that web applications fail often. It suffices if
the developers fix bad code eventually, but the remaining parts
should work reasonably well. Either a process is restarted or exceptions
are used to catch small error conditions so the server could serve
*something*. I'm talking about practical web applications, not X-ray
machines.
It's retarded (!) to pretend that this is how one makes reliable systems.
bearophile
2010-07-27 19:30:48 UTC
Post by Walter Bright
That misses the point about reliability. Again, you're approaching from the
point of view that you can make a program that cannot fail (i.e. prove it
correct). That view is WRONG WRONG WRONG and you must NEVER NEVER NEVER rely on
such for something important, like say your life. Software can (and will) fail
even if you proved it correct, for example, what if a memory cell fails and
flips a bit? Cosmic rays flip a bit?
Are you willing to bet your life?
If you have a critical system, you use most or all means you know to make it work, so if you can you use theorem proving too. If you have the economic resources to use those higher means, then refusing to use them is not wise. And then you also use error correction memory, 3-way or 6-way redundancy, plus watchdogs and more.

If it is a critical system, you can even design it to fail gracefully even if zero software is running; see the recent designs of the concrete channel under the core of nuclear reactors, to let a molten core go where you want it to go (where it will be kept acceptably cool and safe, making accidents like Chernobyl much less likely to happen) :-)

Bye,
bearophile
dsimcha
2010-07-27 19:46:20 UTC
== Quote from bearophile (bearophileHUGS at lycos.com)'s article
Post by bearophile
Post by Walter Bright
That misses the point about reliability. Again, you're approaching from the
point of view that you can make a program that cannot fail (i.e. prove it
correct). That view is WRONG WRONG WRONG and you must NEVER NEVER NEVER rely on
such for something important, like say your life. Software can (and will) fail
even if you proved it correct, for example, what if a memory cell fails and
flips a bit? Cosmic rays flip a bit?
Are you willing to bet your life?
If you have a critical system, you use most or all means you know to make it
work, so if you can you use theorem proving too. If you have the economic
resources to use those higher means, then refusing to use them is not wise. And
then you also use error correction memory, 3-way or 6-way redundancy, plus
watchdogs and more.
Post by bearophile
If it is a critical system, you can even design it to fail gracefully even if zero
software is running; see the recent designs of the concrete channel under the core
of nuclear reactors, to let a molten core go where you want it to go (where it
will be kept acceptably cool and safe, making accidents like Chernobyl much less
likely to happen) :-)
Post by bearophile
Bye,
bearophile
But the point is that redundancy is probably the **cheapest, most efficient** way
to get ultra-high reliability. Yes, cost matters even when people's lives are at
stake. If people accepted this more often, maybe the U.S. healthcare system
wouldn't be completely bankrupt.

Anyhow, if you try to design one ultra reliable system, you can't be stronger than
your weakest link. If your system has N components, each with independent
probability p_i of failure, your failure probability is:

1 - product_i=1 to n( 1 - p_i), i.e. 1 - the product of the probabilities that
everything works. If p_i is large for any component, you're very likely to fail
and if the system has a lot of components, you're bound to have at least one
oversight. In the limit where one link is a lot more prone to failure than any of
the rest, you're basically as strong as your weakest link.

For example, if all components except one are perfect and that one component has a
1% chance of failure then the system as a whole has a 1% chance of failure.

If, on the other hand, you have redundancy, you're at least as strong as your
strongest link because only one system needs to work. Assuming the systems were
designed by independent teams and have completely different weak points, we can
assume their failures are statistically independent. Assume we have m redundant
systems each with probability p_s of failing in some way or another. Then the
probability of the whole thing failing is:

product_{s=1 to m}(p_s), or the probability that ALL of your redundant systems
fail. Assuming they're all decent and designed independently, with different weak
points, they probably aren't going to fail at the same time.

For example, if each redundant system really sucks and has a 5% chance of failure,
then the probability that they both fail and you're up the creek is only 0.25%.
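A worked version of the two formulas above, as a sketch (the component numbers are
arbitrary apart from reusing the 5% backup figure):

import std.algorithm : fold, map;
import std.stdio : writefln;

void main()
{
    // Series system: it fails if ANY component fails.
    //   P(fail) = 1 - product over i of (1 - p_i)
    double[] componentFailure = [0.01, 0.001, 0.001];
    double series = 1.0 - componentFailure.map!(p => 1.0 - p)
                                          .fold!((a, b) => a * b)(1.0);
    writefln("series system:    %.4f", series);     // ~0.0120, dominated by the weakest link

    // Redundant system: it fails only if ALL independent backups fail.
    //   P(fail) = product over s of p_s
    double[] backupFailure = [0.05, 0.05];
    double redundant = backupFailure.fold!((a, b) => a * b)(1.0);
    writefln("redundant system: %.4f", redundant);  // 0.05 * 0.05 = 0.0025, i.e. 0.25%
}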
Walter Bright
2010-07-27 21:51:11 UTC
Post by dsimcha
But the point is that redundancy is probably the **cheapest, most efficient** way
to get ultra-high reliability.
It also works incredibly well. Airliners use a dual path system, which means
that no single failure can bring it down. If it didn't work, the skies would be
raining airplanes.

Triple path systems add a great deal of cost, with only a negligible increase in
safety.
Post by dsimcha
Yes, cost matters even when people's lives are at stake.
If the FAA required a triple path system in airliners, likely the airplane would
be so heavy it could barely lift itself, let alone have a payload.
Post by dsimcha
If, on the other hand, you have redundancy, you're at least as strong as your
strongest link because only one system needs to work. Assuming the systems were
designed by independent teams and have completely different weak points, we can
assume their failures are statistically independent. Assume we have m redundant
systems each with probability p_s of failing in some way or another. Then the
product_i = 1 to m(p_s), or the probability that ALL of your redundant systems
fail. Assuming they're all decent and designed independently, with different weak
points, they probably aren't going to fail at the same time.
For example, if each redundant system really sucks and has a 5% chance of failure,
then the probability that they both fail and you're up the creek is only 0.25%.
Yup. The important thing to make this work is to avoid coupling, where a failure
in one path causes the other path to fail as well. This can be tricky to get right.

Coupling is why a process attempting to diagnose and fix its own bugs is doomed.
Jérôme M. Berger
2010-07-27 20:34:06 UTC
Post by Walter Bright
The contract programming only covers a small runtime dynamic part of
programming. There's no mention about automated theorem proving. No
word about exceptions nor sandboxing with virtual machines. Why?
Because these would make D look ridiculous.
That misses the point about reliability. Again, you're approaching from
the point of view that you can make a program that cannot fail (i.e.
prove it correct). That view is WRONG WRONG WRONG and you must NEVER
NEVER NEVER rely on such for something important, like say your life.
Software can (and will) fail even if you proved it correct, for example,
what if a memory cell fails and flips a bit? Cosmic rays flip a bit?
Plus, how do you prove the proof? I know of at least two examples
of software that were proven formally and that AFAIK worked
perfectly to spec and yet failed spectacularly. One of them caused a
Mars probe to crash (it had been proven using measurements in feet,
but the sensors were reporting in kilometers IIRC) and the other
caused the ground control crew to activate the self destruct for the
first Ariane 5 rocket (it had been proven using data designed for
Ariane 4 and no longer valid for Ariane 5).

Jerome
--
mailto:jeberger at free.fr
http://jeberger.free.fr
Jabber: jeberger at jabber.fr

bearophile
2010-07-27 20:54:43 UTC
Post by Jérôme M. Berger
Plus, how do you prove the proof? I know of at least two examples
of software that were proven formally and that AFAIK worked
perfectly to spec and yet failed spectacularly.
You have to think about proofs as another (costly) tool to avoid bugs/bangs, but not as the ultimate and only tool you have to use (I think dsimcha was trying to say that there are more cost-effective tools. This can be true, but you can't be sure that is right in general).

Bye,
bearophile
Walter Bright
2010-07-27 22:00:20 UTC
Post by bearophile
You have to think about proofs as another (costly) tool to avoid bugs/bangs,
but not as the ultimate and only tool you have to use (I think dsimcha was
trying to say that there are more cost-effective tools. This can be true,
but you can't be sure that is right in general).
I want to re-emphasize the point that keeps getting missed.

Building reliable systems is not about trying to make components that cannot
fail. It is about building a system that can TOLERATE failure of any of its
components.

It's how you build safe systems from UNRELIABLE parts. And all parts are
unreliable. All of them. Really. All of them.
Jonathan M Davis
2010-07-27 22:17:52 UTC
Post by Walter Bright
Post by bearophile
You have to think about proofs as another (costly) tool to avoid
bugs/bangs, but not as the ultimate and only tool you have to use (I
think dsimcha was trying to say that there are more cost-effective
tools. This can be true, but you can't be sure that is right in
general).
I want to re-emphasize the point that keeps getting missed.
Building reliable systems is not about trying to make components that
cannot fail. It is about building a system that can TOLERATE failure of
any of its components.
It's how you build safe systems from UNRELIABLE parts. And all parts are
unreliable. All of them. Really. All of them.
Especially the programmer. ;)

- Jonathan M Davis
bearophile
2010-07-27 23:21:05 UTC
Post by Walter Bright
I want to re-emphasize the point that keeps getting missed.
Building reliable systems is not about trying to make components that cannot
fail. It is about building a system that can TOLERATE failure of any of its
components.
It's how you build safe systems from UNRELIABLE parts. And all parts are
unreliable. All of them. Really. All of them.
Each of those parts must be pretty reliable if you want to design a globally reliable system. Space Shuttle control systems are redundant as you say, and probably each single point of failure has a backup, but each software system is pretty reliable by itself, probably they have proved some of its parts for each of the independently written redundant software systems. If your subsystems are crap, your overall system is crap, unless you have tons of them and they can be all used in "parallel" (with no single coordinator that can be a failure point).

Bye,
bearophile
Walter Bright
2010-07-28 00:30:29 UTC
Post by bearophile
Each of those parts must be pretty reliable if you want to design a globally
reliable system. Space Shuttle control systems are redundant as you say, and
probably each single point of failure has a backup, but each software system
is pretty reliable by itself, probably they have proved some of its parts for
each of the independently written redundant software systems. If your
subsystems are crap, your overall system is crap, unless you have tons of
them and they can be all used in "parallel" (with no single coordinator that
can be a failure point).
The space shuttle computer systems that are on the critical path have redundant
backups that are written with different algorithms by different groups, and they
try very hard to not have any coupling between them.

Even so, there remain at least two obvious single points of failure.

Also, of course, they still try to make each component as reliable as they can.
BCS
2010-07-28 14:53:06 UTC
Hello Walter,
Post by Walter Bright
Post by bearophile
You have to think about proofs as another (costly) tool to avoid
bugs/bangs, but not as the ultimate and only tool you have to use (I
think dsimcha was trying to say that there are more costly-effective
tools. This can be true, but you can't be sure that is right in
general).
I want to re-emphasize the point that keeps getting missed.
Building reliable systems is not about trying to make components that
cannot fail. It is about building a system that can TOLERATE failure
of any of its components.
It's how you build safe systems from UNRELIABLE parts. And all parts
are unreliable. All of them. Really. All of them.
Agreed. You can make a system that can tolerate failures (e.g. not do something
damaging), but that doesn't make it acceptable (e.g. continue to do what
it's supposed to). Pure redundancy aside, if a part not working didn't degrade
the system, you would remove it from the design. Making parts more reliable
will increase the amount of time the system is in a non-degraded state.
--
... <IXOYE><
Jim Balter
2010-07-28 18:09:17 UTC
Permalink
"Walter Bright" <newshound2 at digitalmars.com> wrote in message
Post by Walter Bright
Post by bearophile
You have to think about proofs as another (costly) tool to avoid bugs/bangs,
but not as the ultimate and only tool you have to use (I think dsimcha was
trying to say that there are more costly-effective tools. This can be true,
but you can't be sure that is right in general).
I want to re-emphasize the point that keeps getting missed.
Building reliable systems is not about trying to make components that
cannot fail. It is about building a system that can TOLERATE failure of
any of its components.
It's how you build safe systems from UNRELIABLE parts. And all parts are
unreliable. All of them. Really. All of them.
You're being religious about this and arguing against a strawman. While all
parts are unreliable, they aren't *equally* unreliable. Unit tests, contract
programming, memory safe access, and other reliability techniques,
*including correctness proofs*, all increase reliability.

On the flip side, you can't guarantee reliability with simplistic rules like
"no continuing after an exception". Numerous (relatively) reliable systems
have demonstrated that religion to be a myth as well.
Walter Bright
2010-07-28 18:48:42 UTC
Permalink
Post by Jim Balter
You're being religious about this and arguing against a strawman. While
all parts are unreliable, they aren't *equally* unreliable.
They don't have to have equal reliability in order for redundancy to be very
effective.
Post by Jim Balter
Unit tests,
contract programming, memory safe access, and other reliability
techniques, *including correctness proofs*, all increase reliability.
True, but the problem is when one is seduced by that into thinking that
redundancy is not necessary.
Post by Jim Balter
On the flip side, you can't guarantee reliability with simplistic rules
like "no continuing after an exception".
Of course, but you can guarantee unreliability by thinking one can continue
after an exception thrown by a programming error. (In engineering, one can never
"guarantee" reliability anyway. What one does is set a maximum failure rate, and
prove a design is more reliable than that.)

Blindly applying rules without using one's brain is bad, and ignoring rules
without thoroughly understanding their rationale is equally bad.
Post by Jim Balter
Numerous (relatively) reliable systems have demonstrated that religion to be a myth as well.
If there's an interesting example here, please tell me about it!

As for the religion aspect, please consider that I get this from my experience
with how airliners are designed. I think there can be little doubt that these
techniques (religion) are extremely effective in producing incredibly reliable
and safe airline travel.
retard
2010-07-28 19:22:26 UTC
Permalink
Post by Walter Bright
Post by Jim Balter
You're being religious about this and arguing against a strawman. While
all parts are unreliable, they aren't *equally* unreliable.
They don't have to have equal reliability in order for redundancy to be
very effective.
Post by Jim Balter
Unit tests,
contract programming, memory safe access, and other reliability
techniques, *including correctness proofs*, all increase reliability.
True, but the problem is when one is seduced by that into thinking that
redundancy is not necessary.
I'll give you one use case where I really appreciate exceptions instead of
segfaults:

I've just spent 8 hours encoding a video file on my home computer. The
job gets done and my wife wants me to start playing it in 25 minutes (it
takes 15 minutes to burn the video to a CD/DVD). 250 guests are waiting
for me at my son's wedding. I have very little time to save the
results.

Now, the save file dialog has a bug when the target file system has too
little space. I want it to save the work SOMEWHERE. It should NOT just
crash! I'd donate my kidney to keep the software running without
problems. People typically buy Apple's computers in these kinds of
situations for a good reason. PC is just too unreliable and the Windows/
Linux developers have an unproductive mentality. I don't have time to
encode it twice; redundancy makes no sense.
Don
2010-07-29 10:11:21 UTC
Permalink
Post by Jim Balter
"Walter Bright" <newshound2 at digitalmars.com> wrote in message
Post by Walter Bright
Post by bearophile
You have to think about proofs as another (costly) tool to avoid bugs/bangs,
but not as the ultimate and only tool you have to use (I think dsimcha was
trying to say that there are more costly-effective tools. This can be true,
but you can't be sure that is right in general).
I want to re-emphasize the point that keeps getting missed.
Building reliable systems is not about trying to make components that
cannot fail. It is about building a system that can TOLERATE failure
of any of its components.
It's how you build safe systems from UNRELIABLE parts. And all parts
are unreliable. All of them. Really. All of them.
You're being religious about this and arguing against a strawman. While
all parts are unreliable, they aren't *equally* unreliable. Unit tests,
contract programming, memory safe access, and other reliability
techniques, *including correctness proofs*, all increase reliability.
I have to disagree with that. "Correctness proofs" are based on a total
fallacy. Attempting to prove that a program is correct (on a real
machine, as opposed to a theoretical one) is utterly ridiculous.
I'm genuinely astonished that such an absurd idea ever had any traction.
retard
2010-07-29 10:26:03 UTC
Permalink
Post by Don
Post by Jim Balter
"Walter Bright" <newshound2 at digitalmars.com> wrote in message
Post by Walter Bright
Post by bearophile
You have to think about proofs as another (costly) tool to avoid bugs/bangs,
but not as the ultimate and only tool you have to use (I think dsimcha was
trying to say that there are more costly-effective tools. This can be true,
but you can't be sure that is right in general).
I want to re-emphasize the point that keeps getting missed.
Building reliable systems is not about trying to make components that
cannot fail. It is about building a system that can TOLERATE failure
of any of its components.
It's how you build safe systems from UNRELIABLE parts. And all parts
are unreliable. All of them. Really. All of them.
You're being religious about this and arguing against a strawman. While
all parts are unreliable, they aren't *equally* unreliable. Unit tests,
contract programming, memory safe access, and other reliability
techniques, *including correctness proofs*, all increase reliability.
I have to disagree with that. "Correctness proofs" are based on a total
fallacy. Attempting to proving that a program is correct (on a real
machine, as opposed to a theoretical one) is utterly ridiculous. I'm
genuinely astonished that such an absurd idea ever had any traction.
What's your favorite then? 100% unit test coverage?
Don
2010-07-29 17:54:50 UTC
Permalink
Post by retard
Post by Don
Post by Jim Balter
"Walter Bright" <newshound2 at digitalmars.com> wrote in message
Post by Walter Bright
Post by bearophile
You have to think about proofs as another (costly) tool to avoid bugs/bangs,
but not as the ultimate and only tool you have to use (I think dsimcha was
trying to say that there are more costly-effective tools. This can be true,
but you can't be sure that is right in general).
I want to re-emphasize the point that keeps getting missed.
Building reliable systems is not about trying to make components that
cannot fail. It is about building a system that can TOLERATE failure
of any of its components.
It's how you build safe systems from UNRELIABLE parts. And all parts
are unreliable. All of them. Really. All of them.
You're being religious about this and arguing against a strawman. While
all parts are unreliable, they aren't *equally* unreliable. Unit tests,
contract programming, memory safe access, and other reliability
techniques, *including correctness proofs*, all increase reliability.
I have to disagree with that. "Correctness proofs" are based on a total
fallacy. Attempting to proving that a program is correct (on a real
machine, as opposed to a theoretical one) is utterly ridiculous. I'm
genuinely astonished that such an absurd idea ever had any traction.
What's your favorite then? 100% unit test coverage?
A completely independent code reviewer helps, too. It really helps if
you're aggressively trying to break the code.

I agree with Walter's statement that ALL of the components are
unreliable, and I think it's important to realize that proofs are the
same. Even in the case where the program perfectly implements the
algorithm, there can be bugs in the proof. Very many mathematical proofs
published in peer-reviewed journals have subsequently been found to have
errors in them.
I think the value in 'correctness proofs' mainly comes from the fact
that they have some kind of independence from the code. In practice, you
run your code, and find bugs in the proof...

My feeling is that the value of any reliability technique depends on how
independent it is from the code you're testing.
Walter Bright
2010-07-29 18:49:20 UTC
Permalink
Post by Don
I agree with Walter's statement that ALL of the components are
unreliable, and I think it's important to realize that proofs are the
same. Even in the case where the program perfectly implements the
algorithm, there can be bugs in the proof.
Also, the hardware running the correct program can fail.
BCS
2010-07-30 15:23:57 UTC
Permalink
Hello Don,
Post by Don
I agree with Walter's statement that ALL of the components are
unreliable, and I think it's important to realize that proofs are the
same.
That's where automatic proof checkers come in...
--
... <IXOYE><
dsimcha
2010-07-29 13:22:35 UTC
Permalink
== Quote from Don (nospam at nospam.com)'s article
Post by Don
Post by Jim Balter
"Walter Bright" <newshound2 at digitalmars.com> wrote in message
Post by Walter Bright
Post by bearophile
You have to think about proofs as another (costly) tool to avoid bugs/bangs,
but not as the ultimate and only tool you have to use (I think dsimcha was
trying to say that there are more costly-effective tools. This can be true,
but you can't be sure that is right in general).
I want to re-emphasize the point that keeps getting missed.
Building reliable systems is not about trying to make components that
cannot fail. It is about building a system that can TOLERATE failure
of any of its components.
It's how you build safe systems from UNRELIABLE parts. And all parts
are unreliable. All of them. Really. All of them.
You're being religious about this and arguing against a strawman. While
all parts are unreliable, they aren't *equally* unreliable. Unit tests,
contract programming, memory safe access, and other reliability
techniques, *including correctness proofs*, all increase reliability.
I have to disagree with that. "Correctness proofs" are based on a total
fallacy. Attempting to proving that a program is correct (on a real
machine, as opposed to a theoretical one) is utterly ridiculous.
I'm genuinely astonished that such an absurd idea ever had any traction.
Yea, here's a laundry list of stuff that theory doesn't account for that can go
wrong on real machines:

overflow
rounding error
compiler bugs
hardware bugs
OS bugs

I sincerely wish all my numerics code always worked if it was provably
mathematically correct.
bearophile
2010-07-29 16:08:17 UTC
Permalink
Post by dsimcha
overflow
Good provers take integral overflows into account too.
Post by dsimcha
rounding error
Interval (floating point) arithmetic can be used to address a large part of this problem. I hope to see a good interval arithmetic lib for D in a few years.
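A minimal sketch of the idea (hypothetical type; a real interval library would also set the FPU rounding modes so that lo always rounds down and hi always rounds up):

struct Interval
{
    double lo, hi;   // the true value is guaranteed to lie in [lo, hi]

    Interval opBinary(string op)(Interval rhs) const
        if (op == "+" || op == "*")
    {
        static if (op == "+")
            return Interval(lo + rhs.lo, hi + rhs.hi);
        else
        {
            import std.algorithm : max, min;
            immutable p = [lo * rhs.lo, lo * rhs.hi, hi * rhs.lo, hi * rhs.hi];
            return Interval(min(p[0], p[1], p[2], p[3]),
                            max(p[0], p[1], p[2], p[3]));
        }
    }

    bool contains(double x) const { return lo <= x && x <= hi; }
}

unittest
{
    auto a = Interval(1.0, 1.1);
    auto b = Interval(2.0, 2.2);
    assert((a + b).contains(3.15));   // the exact sum lies somewhere in [3.0, 3.3]
}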
Post by dsimcha
compiler bugs
OS bugs
That software can be proven too :-) Microsoft is working on a provable OS kernel.
Post by dsimcha
hardware bugs
You can use RAM with error correction, redundancy, etc.

Proofs are one useful tool for high-reliability systems. They are not enough, of course.

"Beware of bugs in the above code; I have only proved it correct, not tried it.''
-- Knuth

Bye,
bearophile
dsimcha
2010-07-29 16:38:58 UTC
Permalink
== Quote from bearophile (bearophileHUGS at lycos.com)'s article
Post by bearophile
Post by dsimcha
overflow
Good provers take in account integral overflows too.
Post by dsimcha
rounding error
Interval (floating point) arithmetic can be used to face a large part of this
problem. I hope to see a good Interval arithmetic lib for D in few years.
Post by bearophile
Post by dsimcha
compiler bugs
OS bugs
Those software too can be proven :-) Microsoft is working on provable OS kernel.
Post by dsimcha
hardware bugs
You can use RAM with error correction, redundancy, etc.
Proofs are one useful tool for high-reliability systems. They are not enough, of course.
"Beware of bugs in the above code; I have only proved it correct, not tried it.''
-- Knuth
Bye,
bearophile
Oh, I forgot to mention memory allocation issues:

heap fragmentation
stack overflows
just plain running out of memory
BCS
2010-07-31 23:33:36 UTC
Permalink
Hello dsimcha,
Post by dsimcha
stack overflows
just plain running out of memory
Easy to account for
Post by dsimcha
heap fragmentation
Not so easy, but if you can bound the maximum size of the allocated memory,
you might be able to prove there is no arrangement of the first n-1 parts that
fails to leave a hole big enough for the nth part.
--
... <IXOYE><
retard
2010-07-29 19:57:24 UTC
Permalink
Post by dsimcha
== Quote from Don (nospam at nospam.com)'s article
Post by Don
Post by Jim Balter
"Walter Bright" <newshound2 at digitalmars.com> wrote in message
Post by Walter Bright
Post by bearophile
You have to think about proofs as another (costly) tool to avoid bugs/bangs,
but not as the ultimate and only tool you have to use (I think dsimcha was
trying to say that there are more costly-effective tools. This can be true,
but you can't be sure that is right in general).
I want to re-emphasize the point that keeps getting missed.
Building reliable systems is not about trying to make components
that cannot fail. It is about building a system that can TOLERATE
failure of any of its components.
It's how you build safe systems from UNRELIABLE parts. And all parts
are unreliable. All of them. Really. All of them.
You're being religious about this and arguing against a strawman.
While all parts are unreliable, they aren't *equally* unreliable.
Unit tests, contract programming, memory safe access, and other
reliability techniques, *including correctness proofs*, all increase
reliability.
I have to disagree with that. "Correctness proofs" are based on a total
fallacy. Attempting to proving that a program is correct (on a real
machine, as opposed to a theoretical one) is utterly ridiculous. I'm
genuinely astonished that such an absurd idea ever had any traction.
Yea, here's a laundry list of stuff that theory doesn't account for that
overflow
rounding error
The proofs help here actually.
Post by dsimcha
compiler bugs
This is a bit unfortunate, and in some cases there's nothing
you can do - no matter what code you write, the compiler breaks it
fatally.
Post by dsimcha
hardware bugs
In many cases there's nothing you can do. The hardware bug / failure is
fatal. I'm talking about x86 microcomputers here since D isn't really
suitable yet for other targets. A typical desktop PC has zero redundancy.
A single error in any of the components will kill your program or at
least some part of the user experience.
Post by dsimcha
OS bugs
If your numerics code is fucked up, the proofs really do help. I don't
think your numerics code will use any (problematic non-tested) syscalls.
If the OS breaks independently regardless of your code, there's nothing
you can do. The best way to protect against random OS bugs is to not run
any code on that OS.
Post by dsimcha
I sincerely wish all my numerics code always worked if it was provably
mathematically correct.
I really love digitalmars.D because this is one of the few places where
99% of the community has zero experience with other languages, other
paradigms (non-imperative), automatic theorem provers, or anything not
related to D. There's a whole choir against theorem proving now. The
funniest thing is that none of you seem to have any clue about how those
programs work. Has anyone except the almighty Andrei ever even downloaded
a theorem prover?
Andrei Alexandrescu
2010-07-29 20:21:40 UTC
Permalink
Post by retard
I really love digitalmars.D because this is one of the few places where
99% of the community has zero experience with other languages, other
paradigms (non-imperative), automatic theorem provers, or anything not
related to D. There's a whole choir against theorem proving now. The
funniest thing is that none of you seem to have any clue about how those
programs work. Has anyone except the almighty Andrei ever even downloaded
a theorem prover?
From the desk of: Almighty Andrei, New H(e)aven, CT

Almighty Andrei confirms hereby that he tried to used an automated
theorem prover to simplify his Masters thesis work back in 2003. To his
shame he even forgot its name (Coq?) Unfortunately he decided it was too
difficult to set up and use, so he resorted to manual proofs. His MS
thesis contains twenty-something theorems, all proven by hand.

Almighty Andrei wishes to state that his perception of automated theorem
proving is that it can be useful for proving properties of systems as
long as both the properties and the systems are confined enough. For
interesting properties of large systems, theorem proving has notable
scaling difficulties. Also, as Don mentioned, the models used by theorem
provers often abstract away certain realities.


Andrei
Jim Balter
2010-08-03 06:40:21 UTC
Permalink
"Andrei Alexandrescu" <SeeWebsiteForEmail at erdani.org> wrote in message
Post by Andrei Alexandrescu
Post by retard
I really love digitalmars.D because this is one of the few places where
99% of the community has zero experience with other languages, other
paradigms (non-imperative), automatic theorem provers, or anything not
related to D. There's a whole choir against theorem proving now. The
funniest thing is that none of you seem to have any clue about how those
programs work. Has anyone except the almighty Andrei ever even downloaded
a theorem prover?
From the desk of: Almighty Andrei, New H(e)aven, CT
Almighty Andrei confirms hereby that he tried to used an automated theorem
prover to simplify his Masters thesis work back in 2003. To his shame he
even forgot its name (Coq?) Unfortunately he decided it was too difficult
to set up and use, so he resorted to manual proofs. His MS thesis contains
twenty-something theorems, all proven by hand.
Almighty Andrei wishes to state that his perception of automated theorem
proving is that it can be useful for proving properties of systems as long
as both the properties and the systems are confined enough. For
interesting properties of large systems, theorem proving has notable
scaling difficulties. Also, as Don mentioned, the models used by theorem
provers often abstract away certain realities.
Which does not make the idea "absurd", or "astonishing that it ever had any
traction". What Don said was stupid and ignorant, and you should be honest
enough to say so.
Post by Andrei Alexandrescu
Andrei
dsimcha
2010-07-29 20:32:43 UTC
Permalink
== Quote from retard (re at tard.com.invalid)'s article
Post by retard
I really love digitalmars.D because this is one of the few places where
99% of the community has zero experience with other languages, other
paradigms (non-imperative), automatic theorem provers, or anything not
related to D. There's a whole choir against theorem proving now. The
funniest thing is that none of you seem to have any clue about how those
programs work. Has anyone except the almighty Andrei ever even downloaded
a theorem prover?
Does the Maxima computer algebra system count? It has a basic theorem prover
built in, and I've played around with it a little. If this is how theorem provers
work, then they have a long way to go.
Walter Bright
2010-07-29 22:31:32 UTC
Permalink
Post by retard
Has anyone except the almighty Andrei ever even downloaded
a theorem prover?
That's *A*lmighty Andrei, note the caps. Please show due respect.
Jérôme M. Berger
2010-07-30 17:16:19 UTC
Permalink
Post by retard
I really love digitalmars.D because this is one of the few places where
99% of the community has zero experience with other languages, other
paradigms (non-imperative), automatic theorem provers, or anything not
related to D. There's a whole choir against theorem proving now. The
funniest thing is that none of you seem to have any clue about how those
programs work. Has anyone except the almighty Andrei ever even downloaded
a theorem prover?
I have lots of experience with Python. I have tried several other
languages including Haskell (hopelessly broken), Caml (broken too
but maybe not hopeless), SML, Felix, Nimrod, Cyclone, Java, PHP,
C and C++.

As far as formal proving of software is concerned, I have yet to
find a *free* solution worth the trouble. OTOH, I have had some
experience with Klocwork and my opinion is neither as strongly in
favour as you seem to be, nor as strongly against as you imply we
are. Basically:
- Even a tool like Klocwork *cannot* prove that your code is 100%
correct and will never fail;
- However, it *will* find bugs in your code that you wouldn't have
found any other way. Whether that's worth the cost depends on how
reliable you want your software to be and how confident you are of the
quality of your devs.

Jerome
--
mailto:jeberger at free.fr
http://jeberger.free.fr
Jabber: jeberger at jabber.fr

BCS
2010-07-31 23:38:17 UTC
Permalink
Hello retard,
Post by retard
Has anyone except the almighty
Andrei ever even downloaded a theorem prover?
Yes, ACL2.

http://www.dsource.org/projects/scrapple/browser/trunk/backmath

Now I know why that sort of thing isn't done more often.
--
... <IXOYE><
retard
2010-08-01 10:44:19 UTC
Permalink
Post by BCS
Hello retard,
Post by retard
Has anyone except the almighty
Andrei ever even downloaded a theorem prover?
Yes, ACL2.
http://www.dsource.org/projects/scrapple/browser/trunk/backmath
Now I know why that sort of thing isn't done more often.
Learning how to document your code might also help. It looks like a mess.
BCS
2010-08-01 13:32:56 UTC
Permalink
Hello retard,
Post by retard
Post by BCS
Hello retard,
Post by retard
Has anyone except the almighty
Andrei ever even downloaded a theorem prover?
Yes, ACL2.
http://www.dsource.org/projects/scrapple/browser/trunk/backmath
Now I know why that sort of thing isn't done more often.
Learning how to document your code might also help. It looks like a mess.
I wish that had been the main problem...
--
... <IXOYE><
Andrei Alexandrescu
2010-08-01 21:25:58 UTC
Permalink
Post by retard
Post by BCS
Hello retard,
Post by retard
Has anyone except the almighty
Andrei ever even downloaded a theorem prover?
Yes, ACL2.
http://www.dsource.org/projects/scrapple/browser/trunk/backmath
Now I know why that sort of thing isn't done more often.
Learning how to document your code might also help. It looks like a mess.
I think it takes a good amount of courage and confidence to put one's
identity and code out there for everyone to criticize. Conversely,
criticizing from the comfort of anonymity and without producing "do as I
do, not only as I say" examples doesn't sit well.

Andrei
Justin Johansson
2010-08-01 23:04:11 UTC
Permalink
Post by Andrei Alexandrescu
Post by retard
Post by BCS
Hello retard,
Post by retard
Has anyone except the almighty
Andrei ever even downloaded a theorem prover?
Yes, ACL2.
http://www.dsource.org/projects/scrapple/browser/trunk/backmath
Now I know why that sort of thing isn't done more often.
Learning how to document your code might also help. It looks like a mess.
I think it takes some good amount of courage and confidence to put one's
identity and code out there for everyone to criticize. Conversely,
criticizing from the comfort of anonymity and without producing "do as I
do, not only as I say" examples doesn't sit well.
Andrei
Well put. The same can be said about posting with anonymity on
a newsgroup, period. I for one have made numerous newsgroup bloopers,
whether an "academic" error or perhaps a socially tactless response.

Whenever I realize one of my bloops, I often wish for retrospective
anonymity.

Justin ( no last name given :-) )
Jim Balter
2010-08-03 06:47:49 UTC
Permalink
"Andrei Alexandrescu" <SeeWebsiteForEmail at erdani.org> wrote in message
Post by Andrei Alexandrescu
Post by retard
Post by BCS
Hello retard,
Post by retard
Has anyone except the almighty
Andrei ever even downloaded a theorem prover?
Yes, ACL2.
http://www.dsource.org/projects/scrapple/browser/trunk/backmath
Now I know why that sort of thing isn't done more often.
Learning how to document your code might also help. It looks like a mess.
I think it takes some good amount of courage and confidence to put one's
identity and code out there for everyone to criticize. Conversely,
criticizing from the comfort of anonymity and without producing "do as I
do, not only as I say" examples doesn't sit well.
Andrei
Ad hominem argument. If you disagree that it looks like a mess, you should
argue that. If you concur, then it doesn't matter who pointed it out. And
anonymity on the internet has been held up as a fundamental right for
decades.

Jim Balter
2010-08-03 06:37:39 UTC
Permalink
"retard" <re at tard.com.invalid> wrote in message
Post by retard
Post by dsimcha
== Quote from Don (nospam at nospam.com)'s article
Post by Don
Post by Jim Balter
"Walter Bright" <newshound2 at digitalmars.com> wrote in message
Post by Walter Bright
Post by bearophile
You have to think about proofs as another (costly) tool to avoid bugs/bangs,
but not as the ultimate and only tool you have to use (I think dsimcha was
trying to say that there are more costly-effective tools. This can be true,
but you can't be sure that is right in general).
I want to re-emphasize the point that keeps getting missed.
Building reliable systems is not about trying to make components
that cannot fail. It is about building a system that can TOLERATE
failure of any of its components.
It's how you build safe systems from UNRELIABLE parts. And all parts
are unreliable. All of them. Really. All of them.
You're being religious about this and arguing against a strawman.
While all parts are unreliable, they aren't *equally* unreliable.
Unit tests, contract programming, memory safe access, and other
reliability techniques, *including correctness proofs*, all increase
reliability.
I have to disagree with that. "Correctness proofs" are based on a total
fallacy. Attempting to proving that a program is correct (on a real
machine, as opposed to a theoretical one) is utterly ridiculous. I'm
genuinely astonished that such an absurd idea ever had any traction.
Yea, here's a laundry list of stuff that theory doesn't account for that
overflow
rounding error
The proofs help here actually.
Post by dsimcha
compiler bugs
This is a bit unfortunate and unfortunately in some cases there's nothing
you can do - no matter what code you write, the compiler breaks it
fatally.
Post by dsimcha
hardware bugs
In many cases there's nothing you can do. The hardware bug / failure is
fatal. I'm talking about x86 micro-computers here since D isn't that much
suitable yet for other targets. A typical desktop PC has zero redundancy.
A single error in any of the components will kill your program or at
least some part of the user experience.
Post by dsimcha
OS bugs
If your numerics code is fucked up, the proofs really do help. I don't
think your numerics code will use any (problematic non-tested) syscalls.
If the OS breaks independently regardless of your code, there's nothing
you can do. The best way to protect against random OS bugs is to not run
any code on that OS.
Post by dsimcha
I sincerely wish all my numerics code always worked if it was provably
mathematically correct.
I really love digitalmars.D because this is one of the few places
Nah, such appalling ignorance and arrogance is actually quite common.
Post by retard
where
99% of the community has zero experience with other languages, other
paradigms (non-imperative), automatic theorem provers, or anything not
related to D. There's a whole choir against theorem proving now. The
funniest thing is that none of you seem to have any clue about how those
programs work. Has anyone except the almighty Andrei ever even downloaded
a theorem prover?
BCS
2010-07-31 23:40:40 UTC
Permalink
Hello dsimcha,
Post by dsimcha
Yea, here's a laundry list of stuff that theory doesn't account for
overflow
Theory can
Post by dsimcha
rounding error
Theory has:

A mechanically checked proof of the AMD5K 86 floating-point division program:
http://ieeexplore.ieee.org/Xplore/login.jsp?url=http://ieeexplore.ieee.org/iel4/12/15456/00713311.pdf%3Farnumber%3D713311&authDecision=-203

Any proof that fails to take those into account is flawed.
Post by dsimcha
compiler bugs
hardware bugs
OS bugs
OTOH, a proof might be able to derive a unit test that will find most relevant
bugs.
Post by dsimcha
I sincerely wish all my numerics code always worked if it was provably
mathematically correct.
--
... <IXOYE><
Jim Balter
2010-08-03 06:33:37 UTC
Permalink
"dsimcha" <dsimcha at yahoo.com> wrote in message
Post by dsimcha
== Quote from Don (nospam at nospam.com)'s article
Post by Don
Post by Jim Balter
"Walter Bright" <newshound2 at digitalmars.com> wrote in message
Post by Walter Bright
Post by bearophile
You have to think about proofs as another (costly) tool to avoid bugs/bangs,
but not as the ultimate and only tool you have to use (I think dsimcha was
trying to say that there are more costly-effective tools. This can be true,
but you can't be sure that is right in general).
I want to re-emphasize the point that keeps getting missed.
Building reliable systems is not about trying to make components that
cannot fail. It is about building a system that can TOLERATE failure
of any of its components.
It's how you build safe systems from UNRELIABLE parts. And all parts
are unreliable. All of them. Really. All of them.
You're being religious about this and arguing against a strawman. While
all parts are unreliable, they aren't *equally* unreliable. Unit tests,
contract programming, memory safe access, and other reliability
techniques, *including correctness proofs*, all increase reliability.
I have to disagree with that. "Correctness proofs" are based on a total
fallacy. Attempting to proving that a program is correct (on a real
machine, as opposed to a theoretical one) is utterly ridiculous.
I'm genuinely astonished that such an absurd idea ever had any traction.
Yea, here's a laundry list of stuff that theory doesn't account for that can go
overflow
rounding error
compiler bugs
hardware bugs
OS bugs
I sincerely wish all my numerics code always worked if it was provably
mathematically correct.
I have no idea why any rational person would think that this shows that
correctness proofs don't increase reliability.
Jonathan M Davis
2010-07-29 17:12:54 UTC
Permalink
Post by Don
I have to disagree with that. "Correctness proofs" are based on a total
fallacy. Attempting to proving that a program is correct (on a real
machine, as opposed to a theoretical one) is utterly ridiculous.
I'm genuinely astonished that such an absurd idea ever had any traction.
It's likely because the idea of being able to _prove_ that your program is
correct is rather seductive. As it is, you never know for sure whether you found
all of the bugs or not. So, I think that it's quite understandable that the
academic types at the least have been interested in it. I'd also have expected
folks like Boeing to have some interest in it. However, since it's so time
consuming, error prone, and ultimately not enough even if you do it right, it
just doesn't make sense to do it in most cases, and it's never a good idea to
rely on it.

Given how computer science is effectively applied mathematics, I don't think that
it's at all surprising or ridiculous that correctness proofs have been tried (if
anything, it would be totally natural and obvious to a lot of math folks), but I
do think that it's fairly clear that they aren't really a solution for a
whole host of reasons.

- Jonathan M Davis
Jérôme M. Berger
2010-07-29 18:16:21 UTC
Permalink
Post by Don
I have to disagree with that. "Correctness proofs" are based on a total
fallacy. Attempting to proving that a program is correct (on a real
machine, as opposed to a theoretical one) is utterly ridiculous.
I'm genuinely astonished that such an absurd idea ever had any traction.
The idea isn't absurd, but you need to use it properly. Saying "I
have run a correctness prover on my code so there aren't any bugs"
is a fallacy, but so is "I have run unit tests with 100% coverage so
there aren't any bugs". The point of correctness proofs isn't that
they will find *all* the bugs, but rather that they will find a
completely different category of bugs than testing.

So you shouldn't run a correctness prover on your code to prove
that there aren't any bugs, but simply to find some of the bugs (and
you can find a lot of bugs with such a tool).
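A contrived sketch of the kind of bug meant here (hypothetical code, not the output of any particular tool): a prover explores the failure paths symbolically, so it flags the case below even if no hand-written test happens to exercise it.

// indexOf returns -1 when the value is absent.
int indexOf(int[] a, int value)
{
    foreach (i, x; a)
        if (x == value)
            return cast(int) i;
    return -1;
}

void clearFirst42(int[] a)
{
    int i = indexOf(a, 42);
    a[i] = 0;   // a prover reports the path where indexOf returned -1;
                // a test suite only catches it if 42 happens to be missing
}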

Jerome
--
mailto:jeberger at free.fr
http://jeberger.free.fr
Jabber: jeberger at jabber.fr

Don
2010-07-29 19:34:35 UTC
Permalink
Post by Jérôme M. Berger
Post by Don
I have to disagree with that. "Correctness proofs" are based on a total
fallacy. Attempting to proving that a program is correct (on a real
machine, as opposed to a theoretical one) is utterly ridiculous.
I'm genuinely astonished that such an absurd idea ever had any traction.
The idea isn't absurd, but you need to use it properly. Saying "I
have run a correctness prover on my code so there aren't any bugs"
is a fallacy, but so is "I have run unit tests with 100% coverage so
there aren't any bugs". The point of correctness proofs isn't that
they will find *all* the bugs, but rather that they will find a
completely different category of bugs than testing.
So you shouldn't run a correctness prover on your code to prove
that there aren't any bugs, but simply to find some of the bugs (and
you can find a lot of bugs with such a tool).
Jerome
You can certainly catch bugs with that technique, but the word "proof"
has no business being there. It's like the "unsinkable" Titanic.
(I think it's really similar, actually. Apparently the only reason the
Titanic sank was that many of the rivets were defective).
My recollection from university courses was not that "proofs will find
bugs in your programs". Rather, it was that "proofs will ensure your
program is correct".

The reason I think it's absurd is that (AFAIK) no other modern
engineering discipline has attempted to rely on correctness proofs.
Jonathan M Davis
2010-07-29 20:04:09 UTC
Permalink
Post by Don
The reason I think it's absurd is that (AFAIK) no other modern
engineering discpline has attempted to rely on correctness proofs.
Really, the reason that you even _can_ attempt such proofs is because computer
science is effectively applied mathematics. No other engineering discipline is so
purely mathematical. They have to deal with real-world physics, so while
they could undoubtedly prove _some_ stuff about what they do, it's not like you
could ever prove something like your bridge will never collapse. Best case, you
could show that it wouldn't collapse under certain types of stress. And since
what you're doing is so thoroughly rooted in the physical world (with it being a
physical structure built out of physical materials), it really doesn't make much
sense to try and prove much about it, since that would all be rooted in theory.

Computer science, on the other hand, being thoroughly rooted in mathematics and
generally having very little to do with physical reality allows for such proofs - even
begs for them in at least some cases because that's the sort of thing that you
do in math. However, even if you are able to correctly do such proofs, programs
have to run on a physical machine at _some_ point in order to be worth anything,
and then many of the assumptions of such proofs fall apart (due to hardware
defects or limitations or whatnot). Also, it's not all that hard for a program
to become complex enough that doing a proof for it is completely unreasonable if
not outright infeasible.

I'd be tempted to at least consider proofs as _one_ of the steps in ensuring a
correct and safe program for something like a plane or the space shuttle where
the cost of failure is extremely high. But relying on such proofs would be unwise
(at best, they're just one of the tools in your toolbox) and the cost is
prohibitive enough that there's no point in considering it for most projects.

Proofs are mainly used by academic types, and you're really not likely to see
them much elsewhere. They're just too costly to do, and actually proving your
program correct only gets you so far even if you can do it. That's probably a
good chunk of the reason why the only time that you've really run into such
proofs is with academic types interested in proving program correctness for the
sake of correctness rather than programmers looking to use it as a tool to help
ensure that they have safe and properly debugged code.

So, personally, I don't think that they're absurd, but it is unwise to rely on
them (particularly when it's so easy to get them wrong), and most of the time,
there's no point in dealing with them.

- Jonathan M Davis
Jérôme M. Berger
2010-07-30 17:58:16 UTC
Permalink
Post by Don
You can certainly catch bugs with that technique, but the word "proof"
has no business being there. It's like the "unsinkable" Titanic.
(I think it's really similar, actually. Apparently the only reason the
Titanic sank was that many of the rivets were defective).
Actually, the word "proof" is perfectly appropriate. You just have
to remember that *any* proof relies on some starting hypothesis.
What a theorem prover does is, given a certain set of starting
conditions and a certain set of assertions to check, tell you:

- Either that the assertions will fail, in which case it will
provide an example of valid starting conditions and an execution
path that causes the failure;

- Or tell you that the assertion will always pass. In which case,
barring bugs in the theorem prover, you can be 100% sure that so
long as the starting hypothesis hold then the assertion will pass
(compiler and hardware reliability are always part of the starting
hypothesis);

- Or that they cannot conclude.



What I just said refers to the *theorem* prover itself. A
*software* prover (like Klocwork) contains some extra code to
extract some starting hypotheses and some assertions from your
source and give them to the theorem prover, then post-process the
results to make them usable. This has several important implications:

- Usually, the software prover will not make the distinction between
options 2 and 3: if an assertion cannot be proven to fail, then it
will be silently assumed to pass;

- Everything depends on the assertions that the software prover
uses. Some of them are easy to define (when you dereference a
pointer then that pointer must be valid, or if the source code
contains a call to "assert" then that "assert" must pass), but some
are a lot less obvious (how do you define that "the output must make
sense"?)
Post by Don
My recollection from university courses was not that "proofs will find
bugs in your programs". Rather, it was that "proofs will ensure your
program is correct".
Well, I haven't taken any formal university courses on software
provers, but if that's the approach your teachers took, then they
were wrong (unfortunately, this is the kind of mistake teachers are
very often prone to).

Jerome
--
mailto:jeberger at free.fr
http://jeberger.free.fr
Jabber: jeberger at jabber.fr

BCS
2010-07-31 16:35:05 UTC
Permalink
Hello Don,
Post by Don
Post by Jérôme M. Berger
Post by Don
I have to disagree with that. "Correctness proofs" are based on a total
fallacy. Attempting to proving that a program is correct (on a real
machine, as opposed to a theoretical one) is utterly ridiculous.
I'm genuinely astonished that such an absurd idea ever had any traction.
The idea isn't absurd, but you need to use it properly. Saying "I
have run a correctness prover on my code so there aren't any bugs" is
a fallacy, but so is "I have run unit tests with 100% coverage so
there aren't any bugs". The point of correctness proofs isn't that
they will find *all* the bugs, but rather that they will find a
completely different category of bugs than testing.
So you shouldn't run a correctness prover on your code to prove
that there aren't any bugs, but simply to find some of the bugs (and
you can find a lot of bugs with such a tool).
Jerome
You can certainly catch bugs with that technique, but the word "proof"
has no business being there. It's like the "unsinkable" Titanic.
(I think it's really similar, actually. Apparently the only reason the
Titanic sank was that many of the rivets were defective).
My recollection from university courses was not that "proofs will find
bugs in your programs". Rather, it was that "proofs will ensure your
program is correct".
I think the best a correctness proof can do is assert that, under a set of
assumptions (including things like the machine working correctly) the program
will do certain things and will not do certain things. E.g. all memory allocations
have exactly one owner at all times and nothing is ever de-allocated that
is an owner of anything.

There are several issues with this: first, it can't be proven that the
things the program will and will not do are the correct things. Also, it
is very costly (>1 man-year per kLOC).
Post by Don
The reason I think it's absurd is that (AFAIK) no other modern
engineering discpline has attempted to rely on correctness proofs.
Every engineering discipline I have any experience with gets a heck of a
lot closer to producing formal proofs of correctness than programming. (BTW:
I don't consider CS as generally practiced to be an engineering discipline.)
--
... <IXOYE><
Walter Bright
2010-07-31 18:42:39 UTC
Permalink
Post by BCS
Every engineering discipline I have any experience with gets a heck of a
lot closer to producing formal proofs of correctness than programing.
Mechanical engineering designs also tend to be a lot simpler than programs,
although the environment they work in is far more complex. Modeling for the
design analysis also takes a very simplified view of the actual design,
justified by taking the worst case. For example, the strength calculations are
done for the weakest cross section, and are not bothered with for the obviously
stronger sections.

Furthermore, after a while a good mechanical engineer develops a "feel" for
things that is pretty darned accurate. Going through the analysis is a backup
for that and is used to fine tune the design.
Mike James
2010-07-31 19:19:29 UTC
Permalink
"Walter Bright" <newshound2 at digitalmars.com> wrote in message
Post by Walter Bright
Post by BCS
Every engineering discipline I have any experience with gets a heck of a
lot closer to producing formal proofs of correctness than programing.
Mechanical engineering designs also tend to be a lot simpler than
programs, although the environment they work in is far more complex.
Modeling for the design analysis also takes a very simplified view of the
actual design, justified by taking the worst case. For example, the
strength calculations are done for the weakest cross section, and are not
bothered with for the obviously stronger sections.
Furthermore, after a while a good mechanical engineer develops a "feel"
for things that is pretty darned accurate. Going through the analysis is a
backup for that and is used to fine tune the design.
Sometimes it gets complicated... :-)

http://mysite.du.edu/~etuttle/rail/lock.htm


-=mike=-
BCS
2010-07-31 23:41:45 UTC
Permalink
Hello Walter,
Post by Walter Bright
Post by BCS
Every engineering discipline I have any experience with gets a heck
of a lot closer to producing formal proofs of correctness than
programing.
Mechanical engineering designs also tend to be a lot simpler than
programs, although the environment they work in is far more complex.
Modeling for the design analysis also takes a very simplified view of
the actual design, justified by taking the worst case. For example,
the strength calculations are done for the weakest cross section, and
are not bothered with for the obviously stronger sections.
Nowadays they just jump to using finite elements and compute everything.
Post by Walter Bright
Furthermore, after a while a good mechanical engineer develops a
"feel" for things that is pretty darned accurate. Going through the
analysis is a backup
No, the analysis is mandated, by code if not law.
--
... <IXOYE><
Walter Bright
2010-08-01 00:57:50 UTC
Permalink
Post by BCS
Hello Walter,
Post by Walter Bright
Post by BCS
Every engineering discipline I have any experience with gets a heck
of a lot closer to producing formal proofs of correctness than
programing.
Mechanical engineering designs also tend to be a lot simpler than
programs, although the environment they work in is far more complex.
Modeling for the design analysis also takes a very simplified view of
the actual design, justified by taking the worst case. For example,
the strength calculations are done for the weakest cross section, and
are not bothered with for the obviously stronger sections.
Now days they just jump to using finite element and compute everything.
I still see calcs submitted for approval that are done by hand on paper.

If you want to see real seat of the pants engineering, look at one of those hot
rod shows like Musclecar. I don't think those guys have ever even seen a calculator.
Post by BCS
Post by Walter Bright
Furthermore, after a while a good mechanical engineer develops a
"feel" for things that is pretty darned accurate. Going through the
analysis is a backup
No, the analysis is mandated, by code if not law.
Not much. Even for buildings, only a few critical spots need checking. This is
possible because building structures are usually way over-designed, because it's
cheap and convenient to do so. Where every gram counts, like in a spacecraft,
everything is analyzed.

I once had a fire hydrant installed on my property. The city required an
engineering analysis, which ran to quite a stack of paper. After approval, the
workers came by to install it. They never looked at the analysis, or even the
drawings, they just dug up the water main and stuck a hydrant on it with a
specialized tool they had. Done in an hour or so.

The "engineering analysis" was a freakin' joke.
BCS
2010-08-01 01:48:31 UTC
Permalink
Hello Walter,
Post by Walter Bright
Post by BCS
Hello Walter,
Post by Walter Bright
Post by BCS
Every engineering discipline I have any experience with gets a heck
of a lot closer to producing formal proofs of correctness than
programing.
Mechanical engineering designs also tend to be a lot simpler than
programs, although the environment they work in is far more complex.
Modeling for the design analysis also takes a very simplified view
of the actual design, justified by taking the worst case. For
example, the strength calculations are done for the weakest cross
section, and are not bothered with for the obviously stronger
sections.
Now days they just jump to using finite element and compute
everything.
I still see calcs submitted for approval that are done by hand on paper.
If you want to see real seat of the pants engineering, look at one of
those hot rod shows like Musclecar. I don't think those guys have ever
even seen a calculator.
and anyone who knows what they are doing should be able to clean up... but
where's the fun in that.
Post by Walter Bright
Post by BCS
Post by Walter Bright
Furthermore, after a while a good mechanical engineer develops a
"feel" for things that is pretty darned accurate. Going through the
analysis is a backup
No, the analysis is mandated, by code if not law.
Not much. Even for buildings, only a few critical spots need checking.
This is possible because building structures are usually way
over-designed, because it's cheap and convenient to do so. Where every
gram counts, like in a spacecraft, everything is analyzed.
Mostly they avoid doing detailed analysis by reducing things to already solved
problems: i.e. they do what the building code says or look up the accepted
values or follow the best practices.

These sources can be treated as theorems: under conditions X, Y and Z if
you satisfy constraints A, B and C, things don't break. Thus we have design
by modus ponens.
Post by Walter Bright
I once had a fire hydrant installed on my property. The city required
an engineering analysis, which ran to quite a stack of paper. After
approval, the workers came by to install it. They never looked at the
analysis, or even the drawings, they just dug up the water main and
stuck a hydrant on it with a specialized tool they had. Done in an
hour or so.
I'd almost bet that buried somewhere in the fine print of the "engineering
analysis" was the assertion "the standard way works" or the same things in
10 times the words.
--
... <IXOYE><
Walter Bright
2010-08-01 06:35:37 UTC
Permalink
Post by BCS
Post by Walter Bright
I once had a fire hydrant installed on my property. The city required
an engineering analysis, which ran to quite a stack of paper. After
approval, the workers came by to install it. They never looked at the
analysis, or even the drawings, they just dug up the water main and
stuck a hydrant on it with a specialized tool they had. Done in an
hour or so.
I'd almost bet that buried somewhere in the fine print of the
"engineering analysis" was the assertion "the standard way works" or the
same things in 10 times the words.
It was painfully obvious that this was nothing more than a money-making scheme
for the water utility. It colluded with the city to get those regs written, so
they could literally quintuple the cost of a hydrant install and one had no
choice but to pay.
Jeff Nowakowski
2010-08-01 16:29:42 UTC
Permalink
Post by Walter Bright
Post by BCS
Post by Walter Bright
I once had a fire hydrant installed on my property. The city required
an engineering analysis, which ran to quite a stack of paper. After
approval, the workers came by to install it. They never looked at the
analysis, or even the drawings, they just dug up the water main and
stuck a hydrant on it with a specialized tool they had. Done in an
hour or so.
I'd almost bet that buried somewhere in the fine print of the
"engineering analysis" was the assertion "the standard way works" or
the same things in 10 times the words.
It was painfully obvious that this was nothing more than a money-making
scheme for the water utility. It colluded with the city to get those
regs written, so they could literally quintuple the cost of a hydrant
install and one had no choice but pay.
It's possible that's the reason. Then again, regulations are often in
response to an accident. You also make a big deal about them not looking
at the analysis. Couldn't they have already seen a copy before they went
to the site? This is obviously planned work, so you'd think they'd have
a meeting beforehand.

As BCS said, if the stack of paper is due diligence with the conclusion
"the standard way works", I don't see how you can tell from your experience.
Walter Bright
2010-08-02 04:46:40 UTC
Permalink
Post by Jeff Nowakowski
Post by Walter Bright
Post by BCS
Post by Walter Bright
I once had a fire hydrant installed on my property. The city required
an engineering analysis, which ran to quite a stack of paper. After
approval, the workers came by to install it. They never looked at the
analysis, or even the drawings, they just dug up the water main and
stuck a hydrant on it with a specialized tool they had. Done in an
hour or so.
I'd almost bet that buried somewhere in the fine print of the
"engineering analysis" was the assertion "the standard way works" or
the same things in 10 times the words.
It was painfully obvious that this was nothing more than a money-making
scheme for the water utility. It colluded with the city to get those
regs written, so they could literally quintuple the cost of a hydrant
install and one had no choice but pay.
It's possible that's the reason. Then again, regulations are often in
response to an accident. You also make a big deal about them not looking
at the analysis. Couldn't they have already seen a copy before they went
to the site? This is obviously planned work, so you'd think they'd have
a meeting beforehand.
I talked to them. They had no knowledge of the analysis. But they did know how
to install fire hydrants.
Post by Jeff Nowakowski
As BCS said, if the stack of paper is due diligence with the conclusion
"the standard way works", I don't see how you can tell from your experience.
Bah, one glance at the location would show it would be a standard install and
all it would have cost was the Xerox copy.
Jim Balter
2010-08-03 06:26:42 UTC
Permalink
"Don" <nospam at nospam.com> wrote in message
Post by Don
Post by Jim Balter
"Walter Bright" <newshound2 at digitalmars.com> wrote in message
Post by Walter Bright
Post by bearophile
You have to think about proofs as another (costly) tool to avoid bugs/bangs,
but not as the ultimate and only tool you have to use (I think dsimcha was
trying to say that there are more costly-effective tools. This can be true,
but you can't be sure that is right in general).
I want to re-emphasize the point that keeps getting missed.
Building reliable systems is not about trying to make components that
cannot fail. It is about building a system that can TOLERATE failure of
any of its components.
It's how you build safe systems from UNRELIABLE parts. And all parts are
unreliable. All of them. Really. All of them.
You're being religious about this and arguing against a strawman. While
all parts are unreliable, they aren't *equally* unreliable. Unit tests,
contract programming, memory safe access, and other reliability
techniques, *including correctness proofs*, all increase reliability.
I have to disagree with that. "Correctness proofs" are based on a total
fallacy. Attempting to proving that a program is correct (on a real
machine, as opposed to a theoretical one) is utterly ridiculous.
I'm genuinely astonished that such an absurd idea ever had any traction.
I've also heard from people genuinely astonished that such an absurd idea as
that humans evolved from worms ever had any traction. I'm equally impressed
by both. For a more rational discussion, see
http://c2.com/cgi/wiki?ProofOfCorrectness
retard
2010-07-27 20:58:40 UTC
Permalink
Post by Walter Bright
That misses the point about reliability. Again, you're approaching from
the point of view that you can make a program that cannot fail (i.e.
prove it correct). That view is WRONG WRONG WRONG and you must NEVER
NEVER NEVER rely on such for something important, like say your life.
Software can (and will) fail even if you proved it correct, for example,
what if a memory cell fails and flips a bit? Cosmic rays flip a bit?
ECC memory helps a bit and so do RAID arrays. A server might also have 2+
power supplies with batteries. On the software side you can spawn new
processes and kill old ones. Typical low-end server hardware used to
serve web pages doesn't have much more hardware support for reliability.
Post by Walter Bright
Post by retard
''I'm not a real programmer. I throw together things until it works
then I move on. The real programmers will say "yeah it works but you're
leaking memory everywhere. Perhaps we should fix that." I'll just
restart apache every 10 requests.'' -- Rasmus Lerdorf
It is widely accepted that web applications fail often. It suffices if
the developers are fixing bad code eventually, but the remaining parts
should work reasonably well. Either a process is restarted or
exceptions are used to catch small error conditions so the server can
serve *something*. I'm talking about practical web applications, not
X-ray machines.
It's retarded (!) to pretend that this is how one makes reliable systems.
I understand you meant reliable systems. But my original post was about
real world web sites. I've been on a project where the manager told us to
outsource development because the system written by the local workforce
would have been *too reliable*. If you can make the system only 20%
more unreliable by cutting off 80% from the salaries, it's a big win. The
idea is that since people accept a certain downtime and certain kind of
error states for typical web sites, it's ok to have those problems. You
just need to keep the number of bugs low enough. A 90..100% reliability
isn't important.
Walter Bright
2010-07-27 19:04:42 UTC
Permalink
Post by Adam Ruppe
Lerdorf's quote strikes me as actually being somewhat close to what
Walter is talking about. Web applications don't focus on making a
thing that never fails, but instead achieve reliability by having an
external watchdog switch to backups - that is, a fresh copy of the
program - when something goes wrong with the first one.
Doing this gives them adequately good results at low cost.
That part is true, but the part about catching exceptions thrown by programming
bugs and then continuing the same process runs completely counter to any
sensible notion of reliability.
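
For illustration, a minimal sketch of the external-watchdog pattern being
described, in D. This is not code from the thread; the worker binary name and
the restart-forever policy are made-up assumptions. The point is only that
recovery lives outside the possibly-buggy process:

    import std.process : spawnProcess, wait;
    import std.stdio : writeln;

    void main()
    {
        // Supervise a worker: whenever it dies, for any reason, start a fresh copy.
        while (true)
        {
            auto worker = spawnProcess(["./serve_requests"]); // hypothetical worker binary
            const status = wait(worker);                      // block until the worker exits
            writeln("worker exited with status ", status, "; restarting");
        }
    }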
Walter Bright
2010-07-26 20:51:19 UTC
Permalink
Post by KennyTM~
Post by Walter Bright
Post by retard
I think the Java/C# developers gave up X % of the execution speed to
avoid hard crashes (exceptions instead of segfaults)
1. segfaults *are* exceptions.
2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.
Catching exception is easy, but handling (segfault) signal is a mess.
Exceptions thrown because of programming bugs are just as unrecoverable as seg
faults are. Making them easier to catch has the unfortunate side effect of
seducing programmers into thinking that such are recoverable.
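
In D terms, the distinction Walter is drawing maps onto Exception versus Error.
A sketch (mine, not a quote from anyone here): catch the former, let the latter
kill the process so a watchdog can restart it.

    import std.stdio : writeln;

    void riskyIO()
    {
        // Stand-in for an operation that can fail for environmental reasons.
        throw new Exception("simulated transient failure");
    }

    void main()
    {
        try
        {
            riskyIO();
        }
        catch (Exception e)
        {
            // Environmental failure: report it, retry, degrade gracefully.
            writeln("I/O failed: ", e.msg);
        }
        // Errors (failed asserts, RangeError, ...) are deliberately not caught:
        // they indicate a bug, and the process should die and be restarted externally.
    }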
Michel Fortin
2010-07-26 21:02:52 UTC
Permalink
Post by Walter Bright
1. segfaults *are* exceptions.
At the processor level, yes. They're exceptions at the language level
only on Windows, which I'd consider 'implementation defined', so not
exceptions as far as the language is concerned.
Post by Walter Bright
2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.
That's more a wish than reality though. I have yet to see @safe applied
on ranges and algorithms. I'm pretty sure it'll require a couple of
adjustments to @safe. As it exists currently, @safe does not work very
well with templates and delegate literals.
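
For reference, the kind of code this is about (a sketch only; whether a given
compiler accepts it as @safe depends on how far attribute inference through
templates and delegate literals has come, which is exactly the complaint above):

    import std.algorithm : filter, map;
    import std.range : iota;

    @safe int sumOfEvenSquares(int n)
    {
        int total = 0;
        // filter/map take delegate literals; @safe works here only if the
        // instantiated templates are inferred @safe as well.
        foreach (x; iota(0, n).filter!(a => a % 2 == 0).map!(a => a * a))
            total += x;
        return total;
    }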
--
Michel Fortin
michel.fortin at michelf.com
http://michelf.com/
Walter Bright
2010-07-26 21:14:20 UTC
Permalink
Post by Michel Fortin
Post by Walter Bright
1. segfaults *are* exceptions.
At the processor level, yes. They're exceptions at the language level
only on Windows, which I'd consider 'implementation defined', so not
exceptions as far as the language is concerned.
They are exceptions in the sense that a condition required by the program has
been violated.
Post by Michel Fortin
Post by Walter Bright
2. D offers a memory safe subset, and D's ranges and algorithms are memory safe.
That's more a wish than reality though. I have yet to see @safe applied
on ranges and algorithms. I'm pretty sure it'll require a couple of
adjustments to @safe. As it exists currently, @safe does not work very
well with templates and delegate literals.
While it's true that @safe hasn't been thoroughly debugged yet, the ranges and
algorithms are still memory safe because they use the array paradigm rather than
the unchecked pointer paradigm.
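
A sketch of the distinction being drawn (not code from the thread): a D slice
carries its own length, so indexing is bounds-checked, while a raw pointer is not.

    @safe int sumSlice(const(int)[] a)
    {
        int s = 0;
        foreach (i; 0 .. a.length)
            s += a[i];        // an out-of-range index throws a RangeError instead of reading garbage
        return s;
    }

    @system int sumPointer(const(int)* p, size_t n)
    {
        int s = 0;
        foreach (i; 0 .. n)
            s += p[i];        // nothing stops n from overrunning the allocation
        return s;
    }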
%u
2010-07-26 16:24:30 UTC
Permalink
Post by Andrei Alexandrescu
But then their containers and algorithms are markedly inferior to STL's.
They are toxic to the programmers who have only been exposed to them. Had
Java and C# gotten the STL and then decided not to go with it, I'm sure they would
have at least taken a few good bits from it. But no - the entire e.g.
Java container library is parallel to everything STL stands for.
If you are talking about the original System.Collections you are right. But
System.Collections.Generic (.NET 2.0) and LINQ (3.0) have come a long way since
then and C# has gained much of the expressiveness that D is just reaching with
std.range and std.algorithm.

In fact you can see many similarities:

.Where() => filter()
.Select() => map()
.Aggregate() => reduce()
.Take() => take()
...
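
To make the correspondence concrete, a small D chain with its rough LINQ
counterpart in comments. This is just a sketch of the mapping above, not code
taken from either library's documentation:

    import std.algorithm : filter, map, reduce;
    import std.range : iota, take;
    import std.stdio : writeln;

    void main()
    {
        // C#: Enumerable.Range(1, 99).Where(x => x % 2 == 0).Select(x => x * x).Take(3)
        auto squares = iota(1, 100)
            .filter!(x => x % 2 == 0)   // .Where()
            .map!(x => x * x)           // .Select()
            .take(3);                   // .Take()

        // C#: squares.Aggregate((a, b) => a + b)
        writeln(reduce!((a, b) => a + b)(squares));  // 4 + 16 + 36 = 56
    }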
Sean Kelly
2010-07-26 04:32:17 UTC
Permalink
Post by bearophile
Post by Andrei Alexandrescu
In my humble opinion, the design of Java, C#, and Go is proof
that their authors didn't get the STL. Otherwise, those languages would
be very different.
I don't believe you. Among the designers of Java, C# and Go there are people who are both expert and smart. C# designers have shown themselves to be sometimes smarter than D designers. So I think some of them 'get the STL' and understand its advantages, but despite this they have found some other reasons to refuse it. I think they were unwilling to pay the large price in language complexity, bug-proneness, code bloat and compilation speed that C++ and D are willing to pay.
The reason behind the Java design is much more mundane: they chose binary backwards compatibility as a requirement for future language changes. So you can do ridiculous things like:

import java.util.List;

class A {
    void put(List x) {}              // raw List: the old, pre-generics signature
}

class B {
    void pass(List<Integer> x) {
        new A().put(x);              // legal: List<Integer> erases to the raw List
    }
}
Post by bearophile
http://msdn.microsoft.com/it-it/magazine/cc163683%28en-us%29.aspx
C# generics are a heck of a lot nicer than Java generics, but there also I think there were other practical reasons for the decision that they didn't fully address. C# is effectively the native language for .NET, and so its libraries should be as useful as possible to other languages compiled to CLR bytecode. If C# used C++ style templates, C++ would integrate well with it, but no other languages really would. Try telling some Visual Basic programmer that they have to define a different container interface for each stored type and see if they use your library. The "Binary Compatibility" section mentions this, but only briefly.
Post by bearophile
Maybe there is a way to build something like STL without C++/D-style templates :-)
If you figure out how, I'd love to know it :-)
Sean Kelly
2010-07-26 04:36:51 UTC
Permalink
Post by Sean Kelly
class A {
    void put(List x) {}
}
class B {
    void pass(List<Integer> x) {
        new A().put(x);
    }
}
Change that to:

class A {
    void put(List<Integer> x) {}     // now a generic signature
}

class B {
    void pass(List x) {              // raw List, as old pre-generics client code would have it
        new A().put(x);              // still compiles, with only an unchecked warning
    }
}
Bruno Medeiros
2010-07-26 18:45:05 UTC
Permalink
Post by Sean Kelly
C# generics are a heck of a lot nicer than Java generics, but there also I think there were other practical reasons for the decision that they didn't fully address. C# is effectively the native language for .NET, and so its libraries should be as useful as possible to other languages compiled to CLR bytecode. If C# used C++ style templates, C++ would integrate well with it, but no other languages really would. Try telling some Visual Basic programmer that they have to define a different container interface for each stored type and see if they use your library. The "Binary Compatibility" section mentions this, but only briefly.
Why is that? (My C# knowledge is very rusty.) Is it because they maintain
some runtime information, unlike Java generics, which are completely erased?
--
Bruno Medeiros - Software Engineer
Nick Sabalausky
2010-07-26 08:50:16 UTC
Permalink
"bearophile" <bearophileHUGS at lycos.com> wrote in message
Post by bearophile
http://www.osnews.com/story/7930
That's one of the reasons I've pretty much given up on C# despite having
initially been relatively impressed with it. They've had that giant, gaping,
and frankly quite easy to remedy, lack-of-IArithmetic hole since C# first
got generics (ie, forever ago), and there's been no shortage of people
complaining about it. And yet, last I checked (numerous versions and years
after the hole was created in the first place), MS still had yet to even
acknowledge the issue, much less remedy it. Not the best management
situation.
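
For contrast, the thing being complained about is trivial with D templates,
since operators are resolved per instantiation rather than through an
IArithmetic-style interface. A small sketch (mine, for illustration):

    // Works for the built-in numeric types (and anything initializable from 0
    // that supports +=), with no arithmetic interface needed.
    T sum(T)(const(T)[] values)
    {
        T total = 0;
        foreach (v; values)
            total += v;
        return total;
    }

    unittest
    {
        assert(sum([1, 2, 3]) == 6);
        assert(sum([1.5, 2.5]) == 4.0);
    }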
Sean Kelly
2010-07-26 19:02:12 UTC
Permalink
Post by Bruno Medeiros
Post by Sean Kelly
C# generics are a heck of a lot nicer than Java generics, but there also I think there were other practical reasons for the decision that they didn't fully address. C# is effectively the native language for .NET, and so its libraries should be as useful as possible to other languages compiled to CLR bytecode. If C# used C++ style templates, C++ would integrate well with it, but no other languages really would. Try telling some Visual Basic programmer that they have to define a different container interface for each stored type and see if they use your library. The "Binary Compatibility" section mentions this, but only briefly.
Why is that? (my C# knowledge is very rusty) Is it because they maintain
some runtime information unlike Java generics which are completely erased?
Yes. It's been a while since I looked at C# as well, but I think this is the crux of it.
BCS
2010-07-31 16:58:27 UTC
Permalink
Hello Jonathan,
Post by Jonathan M Davis
Post by Don
The reason I think it's absurd is that (AFAIK) no other modern
engineering discpline has attempted to rely on correctness proofs.
Really, the reason that you even _can_ attempt such proofs is because
computer science is effectively applied mathematics. No other
engineering discipline is so purely mathematical. They have to
deal with real world physics, so while they could undoubtedly prove
_some_ stuff about what they do, it's not like you could ever prove
something like your bridge will never collapse. Best case, you could
show that it wouldn't collapse under certain types of stress. And
since what you're doing is so thoroughly rooted in the physical world
(with it being a physical structure built out of physical materials),
it really doesn't make much sense to try and prove much about it, since
that would all be rooted in theory.
Computer science, on the other hand, being thoroughly rooted in
mathematics and generally having very little to do with physical reality
allows for such proofs - even begs for them in at least some cases
because that's the sort of thing that you do in math. However, even if
you are able to correctly do such proofs, programs have to run on a
physical machine at _some_ point in order to be worth anything, and
then many of the assumptions of such proofs fall apart (due to
hardware defects or limitations or whatnot). Also, it's not all that
hard for a program to become complex enough that doing a proof for it
is completely unreasonable if not outright infeasible.
I agree with you in theory but not in practice.

In mechanical or civil engineering, yes, there are huge amounts of very ill-defined
interactions and properties, and in CS there are well-defined abstractions
that are very nearly ideal. However, most programs are written and debugged
empirically; "if it runs all the test cases, it's correct". In the engineering
disciplines OTOH, it is more or less required by law that a "semi-formal
proof" be done for virtually every aspect of a design (keep reading :).

The procedure in a nutshell is: make a (mostly) formal statement of the
success criteria, pick a bunch of (conservative) assumptions (generally from
well-vetted research), and then formally show that under those assumptions,
you have satisfied the success criteria.
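
The everyday software analogue of that procedure is design by contract, which D
supports directly: the in-contract states the assumptions, the out-contract the
success criterion. A sketch only, using present-day contract syntax, and of
course a runtime-checked contract is still a far cry from a proof:

    // Integer square root with its assumptions and success criterion spelled out.
    int isqrt(int n)
    in (0 <= n && n <= 1_000_000)                    // assumptions (range kept small to avoid overflow)
    out (r; r * r <= n && (r + 1) * (r + 1) > n)     // success criterion
    {
        int r = 0;
        while ((r + 1) * (r + 1) <= n)
            ++r;
        return r;
    }

    unittest
    {
        assert(isqrt(0) == 0);
        assert(isqrt(8) == 2);
        assert(isqrt(1_000_000) == 1000);
    }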

Now I will grant that (aside from the last step) this is a far cry from a
formal math proof, but in CS the success criteria are rarely more than informal,
most projects don't even bother listing their assumptions, and rarely
is there any proof, formal or informal, that the success criteria were met.

Ironic.
--
... <IXOYE><