Subject: Re: SEARCHING VERY LARGE FILE
From: Mervyn Bick <invalid@invalid.invalid>
Date: Thu, 2 Sep 2021 10:20:16 +0200
Newsgroups: dbase.getting-started
On 2021/09/01 18:59, Akshat Kapoor wrote:
> Good Evening Agostinho,
>
>> This method works as a charm in medium/small files, it is very simple
.......
>> I'll keep on researching until I get a perfect method to get a good
>> incremental search for my large databases.
>> Once again thank you very much and have a wonderful day.
>
> Have you tried using a timer?
> I prefer waiting for 1 second before requery()
Agostinho is using LIKE '%whatever%' to select records from a .dbf file
of over 1 million records. That is a hard task for SQL, and each
requery already takes long enough without adding a delay between sending
individual characters of the search string to the SQL engine.
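For what it's worth, the timer idea amounts to a debounce: restart a
short countdown on every keystroke and only call requery() once the
user pauses. A minimal sketch, assuming a form with an entryfield, a
parameterised query and a Timer (searchField, bigQuery and searchTimer
are invented names, and the event wiring is my assumption, not
Agostinho's actual code):

   // Hook this to the entryfield's keystroke event.
   function searchField_onKey(nChar, nPosition)
      form.searchTimer.enabled = false     // restart the countdown
      form.searchTimer.interval = 1        // one second, as Akshat suggests
      form.searchTimer.enabled = true
   return

   // Fires only after the user has stopped typing for a full second.
   // Assumes bigQuery.sql contains "... like :frag".
   function searchTimer_onTimer()
      form.searchTimer.enabled = false     // one requery per pause
      form.bigQuery.params["frag"] = "%" + form.searchField.value + "%"
      form.bigQuery.requery()
   return

That only cuts down the number of requeries, though. It does nothing
for the cost of each one.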
Agostinho, unless you change your approach and somehow limit the initial
number of records to search, by introducing additional parameters into
the select process (preferably ones that access indexed fields), you will
not find a software solution that speeds up a character-by-character
search of over 1 million records.
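Just to illustrate the kind of narrowing I mean (the table, field and
parameter names below are all invented): an equality test on an indexed
field lets the engine throw away most of the million-plus rows cheaply,
so the unindexable LIKE '%...%' comparison only has to look at what is
left.

   // Sketch only: invoices, cust_id and descr are made-up names.
   q = new Query()
   q.sql = "select * from invoices where cust_id = :cust and descr like :frag"
   q.params["cust"] = 1042        // indexed field shrinks the candidate set
   q.params["frag"] = "%pump%"    // substring test runs on the survivors only
   q.active = true

On each keystroke you would then only change the :frag parameter and
requery(), with the indexed condition still in place.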
Faster hardware for table access is the only viable way to speed up the
existing search, and even that can only go so far. That would be true
even with a "proper" SQL server.
Mervyn.