[Israel.pm] Handling huge data-structures?
gaal at forum2.org
Sun Aug 29 10:03:33 PDT 2004
On Sun, Aug 29, 2004 at 07:29:05PM +0300, Offer Kaye wrote:
> > If by "works" you mean only "has the same semantics as", then yes; but
> > anything that's going to insert or delete -- or indeed, change the length
> > of an existing record -- is going to be very expensive.
> Not in memory - the file *is not* loaded into memory when using
> Tie::File. It will, however, be slow, as noted in the CAVEATS section
> of the Tie::File documentation:
I never said it's going to be expensive in memory, just that it's going
to be expensive.
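To make the cost concrete, here's a minimal, untested sketch of the
Tie::File pattern we're talking about (file name and records are made up
for illustration):

```perl
use strict;
use warnings;
use Tie::File;

# Toy data file for illustration -- any line-oriented file works.
my $file = "records.txt";
open my $fh, '>', $file or die "open: $!";
print $fh "record $_\n" for 1 .. 5;
close $fh;

# Tie the file: each array element is one line, fetched on demand,
# so the whole file is never slurped into memory.
tie my @lines, 'Tie::File', $file or die "tie: $!";

# Rewriting a record without changing its length is cheap
# (an in-place write at a known offset):
$lines[2] = "record X";

# But an insert or delete shifts every byte after that point on
# disk, so it costs on the order of the rest of the file:
splice @lines, 1, 0, "a brand new record";

untie @lines;
```

Same semantics as a plain array, but that splice is where the
expense hides.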
> However IMHO dealing with extremely large files when you don't have
> enough RAM will ALWAYS be slow - I don't think there is any way around this.
Depends on what you mean by slow. You can't expect the same performance
you get from a 300-record flat file (or, say, Storable), but you can
use database-ish techniques (B-trees, etc.). SQLite, for example, can be
used for this kind of thing.
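The kind of thing I have in mind, as an untested sketch (assumes
DBD::SQLite is installed; table and file names are made up):

```perl
use strict;
use warnings;
use DBI;

# Hypothetical on-disk database file for illustration.
my $dbh = DBI->connect("dbi:SQLite:dbname=records.db", "", "",
                       { RaiseError => 1, AutoCommit => 1 });

$dbh->do("CREATE TABLE IF NOT EXISTS records
          (id INTEGER PRIMARY KEY, body TEXT)");

# Inserts and deletes touch only the B-tree pages involved,
# not the whole file -- that's the point.
my $ins = $dbh->prepare("INSERT INTO records (body) VALUES (?)");
$ins->execute("record $_") for 1 .. 5;

$dbh->do("DELETE FROM records WHERE id = ?", undef, 3);

my ($count) = $dbh->selectrow_array("SELECT COUNT(*) FROM records");
print "$count records\n";

$dbh->disconnect;
```

Not as convenient as a tied array, but a delete doesn't rewrite the
rest of the file.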
> If you do consider DB_File, read this first:
> Note especially:
> DB_File reads the entire file into memory, modifies it in memory, and
> the writes out the entire file again when you untie the file. This
> is completely impractical for large files.
Ah, good, that's one option shot down quickly then.
Gaal Yahas <gaal at forum2.org>