Subject Re: SEARCHING VERY LARGE FILE
From Akshat Kapoor <akshat.kapoor@kapoorsons.in>
Date Thu, 2 Sep 2021 16:40:30 +0530
Newsgroups dbase.getting-started

Good Evening Mervyn,

>>
>>> This method works like a charm with medium/small files; it is very simple
> .......
>>> I'll keep  on researching until I get a perfect method to get a good
>>> incremental search for my large databases.
>>> Once again thank you very much and have a wonderful day.
>>
>> Have you tried using a timer?
>> I prefer waiting for 1 second before calling requery().
>
> Agostinho is using LIKE '%whatever%' to select records from over 1
> million records in a .dbf file.  This is a hard task for SQL and takes
> long enough without adding a delay between sending individual characters
> of the search string to the SQL engine.
>
> Agostinho, unless you change your approach and somehow limit the initial
> number of records to search (by introducing additional parameters, which
> should preferably access indexed fields, into the select process), you
> will not find a software solution to speeding up your character-by-character
> search of over 1 million records.
>
> Faster hardware for table access is the only viable solution to speed up
> the existing search and even that can only go so far.  This would be
> true even with a "proper" SQL server.
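
If I understand you correctly, you mean something along these lines,
where an extra condition on an indexed field trims the candidate rows
before the LIKE has to scan them (the table, field and parameter names
are only made up for illustration):

   // set once, when the form opens; branch is assumed to be an indexed field
   form.query1.sql = "select * from customers " + ;
      "where branch = :branch and name like :pat"

   // at search time, fill in both parameters and re-run the query
   form.query1.params["branch"] = form.entBranch.value
   form.query1.params["pat"]    = "%" + form.entSearch.value + "%"
   form.query1.requery()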


The timer is only a hit-and-try idea, but it should work well for fast typists.

Suppose the search string is "abc" and a single requery() takes 3 seconds.
Searching after every character means three requeries: 9 seconds in total.

With my method it is 1 + 3 = 4 seconds (the 1-second wait plus one
requery), provided "abc" is typed within that 1 second.

If it fails, no harm done; only a small amount of coding is involved.
BUT if it works, the cost of a hardware upgrade is saved.
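
In case it helps to see what I mean, here is a rough sketch in dBASE
Plus style. The names entSearch, searchTimer, query1 and the :pat
parameter are only made up for illustration, and I am assuming the
handlers below are wired to the matching events inside the form class.

   // set up once, e.g. in the form's onOpen
   function form_onOpen
      form.searchTimer = new Timer()
      form.searchTimer.interval   = 1      // wait 1 second after the last keystroke
      form.searchTimer.enabled    = false
      form.searchTimer.callerForm = form   // keep a reference back to the form
      form.searchTimer.onTimer    = class::SEARCHTIMER_ONTIMER
   return

   // every keystroke only restarts the countdown, no requery() yet
   function entSearch_onKey
      form.searchTimer.enabled = false
      form.searchTimer.enabled = true
   return

   // fires only after typing has paused for a full second
   function searchTimer_onTimer
      local f
      f = this.callerForm
      this.enabled = false                 // one shot per pause in typing
      f.query1.params["pat"] = "%" + f.entSearch.value + "%"
      f.query1.requery()                   // one requery instead of one per character
   return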

I have two forms, both accessing the same table (around 3,000 rows).
The one that is used many times daily has been optimised with a timer
and works fine (no delay is observable).
The other, which is used only a few times a month, has not been
optimised with a timer, and it shows problems similar to what Agostinho
is facing.

Regards
Akshat