[Israel.pm] Some articles about the curse of code reuse.
shlomif at shlomifish.org
Sun Jan 1 12:10:29 PST 2012
On Sun, 1 Jan 2012 12:56:16 +0200
Erez Schatz <moonbuzz at gmail.com> wrote:
> On 30 December 2011 21:47, Shlomi Fish <shlomif at shlomifish.org> wrote:
> > Hi all,
> > The way a lot of programming goes today isn’t any fun because it’s just plugging in magic incantations—combine somebody else’s software and start it up. It doesn’t have much creativity. I’m worried that it’s becoming too boring because you don’t have a chance to do anything much new. Your kick comes out of seeing fun results coming out of the machine, but not the kind of kick that I always got by creating something new. The kick now is after you’ve done your boring work then all of a sudden you get a great image. But the work didn’t used to be boring. (page 594)
> That's quite correct, albeit with two glaring caveats. One, the
> number of moving parts a programmer needs to deal with these days is
> several orders of magnitude larger than what they needed to deal
> with 20, 30, or 40+ years ago. Back then, you had a machine,
> maybe a time-sharing mainframe or similar, and everything was handled
> by that machine, and all coding was done on that machine, and every
> workstation was using the power of that machine. You would write your
> own persistence layer that was closely matching your business logic
> and would be able to optimize your code to handle just that, almost
> without redundancies.
> Now, you want to create software that needs to handle different
> outputs from many different machines, across the internet. You have to
> deal with different databases, code bases, network protocols, clients,
> backends, etc. There is no way you can actually write all that from
> scratch and still create a working application in a competitive,
> practical time frame. You need frameworks, you need external
> libraries. At Knuth's time, programmers wrote systems; today we write
> applications.
> Think of it as Linux distros. Installing a Debian/Red Hat package is
> much less efficient than compiling it from source, as it is usually
> pre-compiled to suit the largest common denominator and is, as a
> result, more redundant and less optimised for the specific
> machine it runs on; but, as we all know, the benefits of using
> binary-package distros outweigh the cons. And you can draw the same
> distinction between writing in Perl and writing in C, or even assembly.
Well, some of my assembly code turned out to be slower than the code gcc
generated from the equivalent C, but I get your point.
Of course, there are some source-based distributions (Gentoo, etc.).
That aside, a certain benchmark I ran for a CPU+RAM-intensive task
( http://fc-solve.shlomifish.org/ ) ran somewhat faster on my Mandriva Cooker
partition than it did on my Arch Linux partition (which otherwise seemed
somewhat faster and more responsive). It wasn't a very representative
benchmark, but still.
> As an aside, "this isn't fun anymore, this is boring" is not
> exactly an objective claim. I can think of many programmers whose
> idea of "fun, not boring" programming is *not* having to reinvent the
> wheel, maintain the tires, implement a half-assed tube system and
> patch every hole caused because you were writing the outer tube in
> crunch mode. And remember, Knuth is the one who commented on his books
> "Beware of bugs in the above code; I have only proved it correct, not
> tried it." So there's fun for you.
Knuth did say this, but I'm not sure it was about his books (maybe it was
about something else). It was http://en.wikipedia.org/wiki/Edsger_W._Dijkstra
who was notorious for insisting that algorithms should be notated in
non-executable pseudocode, proven correct, and never tested on a computer.
The links I've given deal with what kind of programmers find what kind of
things fun (though I've only skimmed them).
> > Ten years ago, to write code, you needed to know a programming language, and you needed to know a library of maybe 50 functions that you used regularly. And those functions worked, every time, although some of them (gets) could not be used without creating security bugs.
> > Today, you need to know how to work with libraries of thousands of functions, representing buggy code written by other people. You can't possibly learn them all, and the documentation is never good enough to write solid code, so you learn to use online resources like Google, DejaNews, MSDN. (I became much more productive after a coworker at Google showed me that you're better off using Google to search Microsoft's knowledge base rather than the pathetic search engine Microsoft supplies). In this new world, you're better off using common languages like Visual Basic and common libraries like WinInet, because so many other people are using them it's easier to find bug fixes and sample code on the Web. Last week, Michael added a feature to CityDesk to check the installed version of Internet Explorer. It's not hard code to write, but why bother? It only took him a few seconds to find the code, in VB, on the Web and cut and paste it.
> > We used to write algorithms. Now we call APIs.
> > Nowadays a good programmer spends a lot of time doing defensive coding, working around other people's bugs. It's not uncommon to set up an exception handler to prevent your code from crashing when your crap library crashes.
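Incidentally, the "set up an exception handler around a crap library" pattern
Joel describes has a direct Perl analogue: wrap the untrusted call in eval {}.
A small sketch, where flaky_library_call() is an invented stand-in for a
buggy third-party routine:

```perl
use strict;
use warnings;

# Stand-in for a buggy third-party call; the name and behaviour are invented.
sub flaky_library_call {
    die "internal library error\n";
}

# Defensive coding: trap the library's death instead of letting it
# crash the whole program, and fall back to something sane.
my $result = eval { flaky_library_call() };
if ($@) {
    warn "library failed, using fallback: $@";
    $result = 'fallback value';
}
print "$result\n";
```

The same idea is what CPAN modules such as Try::Tiny later wrapped in nicer
syntax.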
> First, in Spolski's world, those libraries he uses are black-boxed .NET
> classes, and he has no idea what the hell goes on inside them.
Well, it's "Spolsky", not "Spolski", and the quote in question discusses
the older Visual Basic Classic and Visual C++ combo, not .NET (I think it
predates .NET). Furthermore, Fog Creek Software's product, FogBugz,
runs fine and is supported on UNIX systems using the open-source Mono, where
you have access to the full source code. There are differences in the
behaviour of Mono and the Microsoft .NET stack, and Mono is still incomplete
(or was the last time I checked).
> use CPAN, where every module can be scrutinized and analysed, picked
> apart, gutted, patched, forked, cloned or re-implemented. I can look
> at the internals as well as the Documentation and API. This is a part
> of why I can't read Spolski, because he blames the shortcomings of
> his technology of choice on "programming these days", while blatantly
> ignoring other solutions to these problems.
Well, I agree that availability of the source (under a suitable licence) is a
necessary condition for having the bugs fixed, but I disagree that it is
sufficient. Even if you have the source, you may not be able to understand it.
Even if you understand it, you may not be able to debug it or fix it. And even
if you are, you may not be able to integrate it into the mainline (and on CPAN,
forking a module to fix a bug often means changing its name, not to mention
that forks are frowned upon in the FOSS community and are seen as a
last resort).
As an example, I recently became a maintainer of XML-LibXML, and while I
have investigated, patched, and fixed some bugs, there are other bugs in its
queue which I've acknowledged as valid, but which I cannot tell how to
fix. They may be bugs in XML-LibXML's XS code (which is complex, and I'm not
much of an XS expert), or they may be in libxml2 (which is also complex). So
the bug reports remain open, and some people may be affected by them despite
all the source being available under a FOSS licence.
Furthermore, if you have read the links, you would know that part of the
problem is that many APIs (such as many CPAN modules) may work as advertised
and may not be too buggy, but may have impedance mismatches. That is, they
won't fully accept each other's output as input, or will otherwise require
writing a lot of glue code.
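To make the impedance-mismatch point concrete, here is a contrived Perl
sketch: two imaginary modules that each work fine on their own, but disagree
about field names and date representation, so glue code is needed. All the
names here are invented:

```perl
use strict;
use warnings;

# Pretend this came from some feed-fetching module: it returns a hashref
# with an 'author' field and an epoch-seconds 'posted' timestamp.
sub fetch_post {
    return { author => 'shlomif', posted => 1325419829 };
}

# Pretend this came from some reporting module: it expects a 'name' field
# and an ISO-8601 'date' string instead.
sub format_report {
    my ($rec) = @_;
    return "$rec->{name} on $rec->{date}";
}

# The glue: adapt one module's output to the other's expected input.
sub post_to_report_record {
    my ($post) = @_;
    my @t = gmtime $post->{posted};
    return {
        name => $post->{author},
        date => sprintf( '%04d-%02d-%02d', $t[5] + 1900, $t[4] + 1, $t[3] ),
    };
}

print format_report( post_to_report_record( fetch_post() ) ), "\n";
```

Neither imaginary module is buggy; the glue exists purely because their
interfaces don't line up.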
Anyway, I was quoting Joel on Software as an earlier echo (at least based on
what I read - not sure about the chronology) of Knuth's sentiments, though
I may have misremembered the point.
While we are in the neighbourhood, you may wish to read this:
"The Tale of J. Random Newbie" - about programmers writing code in proprietary
software shops.
> And speaking of "programming these days", every progression in the
> programming world since whenever has been towards this "unfun"
> direction. Every new language implements memory-management tools,
> higher-level abstraction layers, easier pluggability, stuff like
> object models, function calls, etc. A lot of people praise LISP, one
> of the most abstract, high-level languages, as "the most elegant/pure",
> and not as "unfun and boring". And why not? The basic tools are still
> there for anyone to use. Alternatively, solving problems in a fun,
> challenging way did not disappear, regardless of your choice of
> language.
Well, we should distinguish between language extensions that are APIs
( http://en.wikipedia.org/wiki/Application_programming_interface ) and those
that are meta-syntactical (or maybe also meta-semantical) abstractions. For
example, Moose is not an API per se, because you still need to write the
business logic using it (though naturally, it may be easier, and the result
may be more robust).
Similarly, the Common Lisp standard provides many meta-syntactical
abstractions, but defines very few APIs (at least in comparison to what is
provided by Perl 5+CPAN, C++ with Qt and other available libraries, Python,
Java with the JDK and the various Apache Java APIs, etc.). Most of the
"complaints" in the links were against the availability of APIs, rather than
abstractions, because even the best abstractions will still require you to
write some algorithmic code for implementing various stuff.
I agree that it's not a 100% distinction, and some language extensions are a
bit of both, but I think you get my point.
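A contrived Perl sketch of that distinction: List::Util's sum() is an API
call that hands you the functionality outright, while the OO machinery is an
abstraction that still leaves the business logic (total_price() here) for you
to write. The class and field names are invented; Moose would shorten the
boilerplate, but it would not write total_price() for us:

```perl
use strict;
use warnings;
use List::Util ();    # the API side: sum() does the actual work

package My::Invoice;  # invented class, for illustration only

# The abstraction side: Perl's OO gives us structure (constructors,
# methods), but no functionality by itself.
sub new {
    my ( $class, %args ) = @_;
    return bless { prices => $args{prices} || [] }, $class;
}

# The algorithmic part the abstraction cannot supply - our business logic.
sub total_price {
    my ($self) = @_;
    return List::Util::sum( 0, @{ $self->{prices} } );
}

package main;

my $invoice = My::Invoice->new( prices => [ 10, 20, 12 ] );
print $invoice->total_price(), "\n";
```

Swap the hand-rolled new() for Moose's `has` and the class shrinks, but
total_price() stays ours to write - that is the abstraction/API divide.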
P.S.: It seems my signature's quote was spot-on for this e-mail.
Shlomi Fish http://www.shlomifish.org/
What Makes Software Apps High Quality - http://shlom.in/sw-quality
Chuck Norris is a real programmer. He writes programs by implementing the most
optimised machines for them using real atoms.
Please reply to list if it's a mailing list post - http://shlom.in/reply .