Re: Circlemud design issues

From: James Turner (turnerjh@XTN.NET)
Date: 04/20/98


George <greerga@CIRCLEMUD.ORG> writes:

> >You are very correct that it can cause extra recompilation.  However, in
> >most cases, this isn't so big an issue, at least for me; recompiling my
> >code takes around a minute.
>
> I'm assuming high end Pentium or slow Pentium II.  Not everyone has such a
> fast computer to take time on.  A 486/33 I used to compile on took 20
> minutes to compile the entire code base. (30 minutes if the other users did
> intensive stuff.)

There is certainly a need for backwards compatibility, but that
attitude can be taken too far.  On one hand, a multithreaded Circle is
being suggested (which would run poorly on older machines, since such
an implementation carries a good deal more overhead); on the other,
compile times are held up as a serious reason to fragment the header
files.  That's not quite kosher IMO.

> >But rarely is a change in a .h file going to occur only in one of the
> >smaller ones.  More commonly, changes will occur in one of the bigger
> >files (utils.h, structs.h), which will need a major recompile anyway.
>
> I've been editing oasis.h often lately and I'm glad it only has to
> recompile about 6 things when I modify it.  Even when you do have a fast
> computer, it's annoying to wait for something that shouldn't need to
> be recompiled.

In cases like this, there is no reason not to use separate .h files --
I do so myself.  However, once this intense, localized development is
finished, the need for the separate .h files lessens significantly.

Development header design, and distribution header design, are two
different things.
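
To make that concrete: during development, a header can stay small,
guarded, and included by only the files that need it, so touching it
rebuilds just those files.  A minimal sketch (the names are invented,
not stock Circle code):

/* olc_local.h -- hypothetical header for in-progress OLC work.
 * Only the handful of .c files under active development include
 * this, so editing it recompiles just those files. */
#ifndef OLC_LOCAL_H
#define OLC_LOCAL_H

struct olc_scratch {
  int mode;             /* current editor mode */
  char *buffer;         /* text being edited */
};

void olc_start_edit(struct olc_scratch *s);
void olc_commit(struct olc_scratch *s);

#endif /* OLC_LOCAL_H */

Once the work settles down, those declarations can be folded into the
main headers for distribution.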

> >It's pointless to have the same group of includes in a large number of
> >different files; if several headers are included in all the sources,
> >it generally makes sense to put them all into one.  Particularly if
> >those headers are simply prototypes (something I've done away with in
> >my code, at least manually; see the script posted elsewhere for
> >automatic prototype generation).
>
> -Wmissing-prototypes
>
> But then if you put all prototypes together, minor changes in one file
> cause everything to recompile.

If you're adding a function, not necessarily.  If you're changing a
function's return value or parameters, then you most likely have other
changes ahead anyway.
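
For what it's worth, -Wmissing-prototypes makes the first case cheap
to police.  A minimal sketch (the function name is invented for the
example):

/* In utils.c, compiled with gcc -Wmissing-prototypes: defining a
 * global function with no prior prototype draws a warning. */
int half_of(int x)              /* warning: no previous prototype */
{
  return x / 2;
}

/* The cure is a declaration in a header that utils.c includes,
 * e.g. in utils.h: */
int half_of(int x);

Adding a brand-new declaration like that to a shared header costs only
a recompile of the files including it; changing an existing signature
forces you to edit every caller anyway, so the recompile is the least
of your worries.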

> >Compile-time memory requirements are not really an issue.
>
> It is for many people, and especially C++.

Circle isn't C++.  gcc takes quite a bit of memory regardless of the
size of the file it's compiling; the compiler itself carries a good
deal of overhead.

> >The compile process is inherently file based, and broken into passes.  The
> >extra headers won't increase the compilation's memory requirements by
> >anything noticeable
>
> Have you ever played with templates extensively?

gcc is (by GNU's own admission) a somewhat defective C++ compiler.
There are efforts underway to repair this deficiency, but they aren't
here yet.  That's another reason to stick with C -- the tools (Unix
and otherwise) aren't as mature for C++ as they are for C.

> >particularly since: a) the compilation process takes much more memory
> >when optimizing than it does processing headers -- should we turn
> >optimization off? (IMO in general yes, but for different reasons).
>
> Because Visual C++ cannot generate debugging info when optimizing?

No.  I don't particularly care about VC++, because I do all my
development (and indeed all my other computing) in Linux.
Optimization in code as I/O bound as CircleMUD is best done at the
algorithmic level; premature optimization is a Bad Thing.  Leaving
optimization off in all but the final stages of a production build is
generally a good idea -- besides, optimization has been known to break
otherwise good code.  And as I said, with Circle being so I/O bound,
compiler optimization has a stunted impact right from the start.
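
By "algorithmic level" I mean changes like the following: no -O flag
will rescue a linear scan through a command list, but switching to a
binary search over a sorted table will.  A minimal sketch (the table
is invented, not stock Circle code):

#include <string.h>

struct command {
  const char *name;
  void (*handler)(void);
};

/* Must stay sorted by name for the binary search to work. */
static struct command commands[] = {
  { "east",  NULL },
  { "look",  NULL },
  { "north", NULL },
  { "quit",  NULL },
};

/* O(log n) lookup in place of an O(n) scan. */
static struct command *find_command(const char *name)
{
  int lo = 0, hi = sizeof(commands) / sizeof(commands[0]) - 1;

  while (lo <= hi) {
    int mid = (lo + hi) / 2;
    int cmp = strcmp(name, commands[mid].name);

    if (cmp == 0)
      return &commands[mid];
    else if (cmp < 0)
      hi = mid - 1;
    else
      lo = mid + 1;
  }
  return NULL;
}

A change like that dwarfs anything -O2 can do to the scan, and it
works the same with optimization turned off.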

> >b)  the memory it adds would be tiny compared to other processes -- even
> >tcsh requires over 600k of memory while running.
>
> GCC usually reaches 3 MB of RAM, more if you have bigger files than stock.

Indeed it does.  But since gcc (and, afaik, every compiler) processes
each file separately, with a separate invocation of the actual
compiler proper, the overhead lies less in the code being compiled and
more in the compiler itself.  Linking is another issue, however.
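
That is, the compile-phase footprint is per-invocation rather than
cumulative.  Roughly (file names from the stock source, shown purely
to illustrate the shape of a build):

gcc -c comm.c        # one compiler run; its memory is freed on exit
gcc -c act.comm.c    # the next run starts over from scratch
...
gcc -o circle *.o    # only the link step sees everything at once

Extra headers make each individual run slightly fatter; they don't
pile up across the whole build.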

> >Memory is cheap.  Disk space is cheap.  Compile time is barely
> >affected.  Recompiling is an issue, but not a large one.
>
> Perhaps you'd like to donate money to everyone then?

Surely you're not denying that memory and drive space are cheap
compared to six months ago?  A year ago?  Progress moves on; we should
keep up with the mean.

--
James Turner               turnerjh@xtn.net
                           http://www.vuse.vanderbilt.edu/~turnerj1/

