[Israel.pm] Next/Previous item from database
nisanov at netvision.net.il
Thu May 27 02:49:00 PDT 2004
On Wed, 2004-05-26 at 04:15, Gabor Szabo wrote:
> The problem with approaches 1-3 seems to be that when the result set is
> large (10,000 words), the whole thing gets quite slow.
> Maybe this is because I am using Class::DBI, or maybe because of SQLite?
> Approach 4 seems to be too much work :)
> Any other ideas on how to solve this?
I have an idea: index all your words in the DB with an integer value
(an ID field). On the first request, translate the resulting word list
into a list of IDs, join the IDs into a single string, and compress it
with zlib. The compression ratio will be very high because the string
contains only digits (and perhaps a delimiter); for a list of
20,000-30,000 words the compressed string will take only ~1-2 KB. You
can then pass this string along on subsequent requests via a hidden
POST field. With this solution you don't need to rebuild the word
subset on every request (just unzip the string and parse it), and
looking up a specific word via its indexed ID will be very fast.
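A minimal sketch of the round trip, assuming the core Compress::Zlib
and MIME::Base64 modules (Base64 makes the binary zlib output safe to
embed in a hidden form field; the ID values here are made up for
illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Compress::Zlib qw(compress uncompress);
    use MIME::Base64  qw(encode_base64 decode_base64);

    # First request: the query's word IDs (hypothetical values).
    my @ids = (1 .. 25_000);

    # Join with a delimiter and compress; runs of digits compress well.
    # The '' second argument suppresses newlines in the Base64 output.
    my $packed = encode_base64(compress(join ',', @ids), '');
    # $packed now goes into a hidden POST field.

    # Next request: unzip and parse -- no need to re-run the query.
    my @restored = split /,/, uncompress(decode_base64($packed));
    printf "ids: %d, field size: %d bytes\n",
        scalar(@restored), length($packed);

Note that Base64 inflates the compressed data by about a third, so
budget a bit more than the raw zlib size for the form field.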