[Israel.pm] Handling huge data-structures?
offer.kaye at gmail.com
Sun Aug 29 09:29:05 PDT 2004
On Sun, 29 Aug 2004 18:46:41 +0300, Gaal Yahas wrote:
> If by "works" you mean only "has the same semantics as", then yes; but
> anything that's going to insert or delete -- or indeed, change the length
> of an existing record -- is going to be very expensive.
Not in memory: the file *is not* loaded into memory when using
Tie::File. It will, however, be slow, as noted in the CAVEATS section
of the Tie::File documentation:
    Reasonable effort was made to make this module efficient.
    Nevertheless, changing the size of a record in the middle of a
    large file will always be fairly slow, because everything after
    the new record must be moved.
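A minimal sketch of what this looks like in practice (the filename and
sample records are hypothetical, just for illustration). Tie::File
maps the file's lines onto an array, fetching records from disk on
demand; a same-length rewrite is cheap, while an insert hits the slow
path described in the CAVEATS above:

```perl
use strict;
use warnings;
use Tie::File;

# Hypothetical demo file; in practice this would be the huge file.
my $file = 'tiefile_demo.txt';
open my $fh, '>', $file or die "Cannot create $file: $!";
print $fh "alpha\nbeta\ngamma\n";
close $fh;

# Tie the file to an array: records are read from disk on demand,
# so the file is never loaded into memory wholesale.
tie my @lines, 'Tie::File', $file or die "Cannot tie $file: $!";

# A same-length, in-place edit is cheap:
$lines[0] = 'ALPHA';

# A length-changing edit (insert/delete) forces everything after
# the record to be shifted on disk -- the slow case from CAVEATS:
splice @lines, 1, 0, 'inserted';

untie @lines;
```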
However, IMHO dealing with extremely large files when you don't have
enough RAM will ALWAYS be slow - I don't think there is any way around
it.
> Yuval, if I were you I'd do some research on DB_File and maybe SQLite
> via Class::DBI. Or just chuck the semantic equivalence requirement and
> bite the SQL bullet: sufficiently different problems tolerate different
If you do consider DB_File, read this first:
    DB_File reads the entire file into memory, modifies it in memory,
    and then writes out the entire file again when you untie the file.
    This is completely impractical for large files.
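For what it's worth, that caveat appears to describe DB_File's
DB_RECNO record-file mode. In the default DB_HASH mode, the data lives
in a Berkeley DB file on disk and is paged in as needed, so the data
set can exceed RAM. A hedged sketch (filename and key are made up for
the example):

```perl
use strict;
use warnings;
use DB_File;
use Fcntl;

# Hypothetical database file for illustration.
my $db = 'demo.db';

# Tie a hash to a Berkeley DB file in DB_HASH mode: lookups and
# updates go to disk page by page instead of slurping the whole
# data set into memory.
tie my %h, 'DB_File', $db, O_RDWR|O_CREAT, 0644, $DB_HASH
    or die "Cannot tie $db: $!";

$h{answer} = 42;        # written through to the .db file
print "$h{answer}\n";

untie %h;
```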