Subject Re: complex indexes vs index key fields
From Akshat Kapoor <akshat.kapoor@kapoorsons.in>
Date Sat, 25 Jul 2020 10:42:40 +0530
Newsgroups dbase.getting-started

On 25.07.2020 02:29, Gaetano De Luisi wrote:
> My DBF file has the potential of getting quite large, and to reduce the processing time for queries I need a couple of complex indexes.
>
> While creating the indexes, I noticed that saving the files took a fair amount of time for my test data of 25,000 records, and appending the CSV data into the DBF also took noticeably longer than without the complex indexes.
>
> For large tables, what do you think would optimize performance: use a complex index (e.g. ["SO2"+left("0000000"+siteID,13)+DTOS(eTimeStamp)]), or create a calculated field in the DBF file to build a simple index on?
>
> The database could potentially have daily uploads or deletions of data and exceed 1 million records.
>

Good Morning Gaetano,
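To your immediate question first: in dBL the two approaches would look
roughly like this. This is only a sketch; the table name sites and the
stored character field idxKey (24 characters wide, to hold the full
key) are my assumptions, while the expression itself is the one from
your post.

use sites exclusive

&& Option 1: one complex index built from an expression
index on "SO2" + left("0000000" + siteID, 13) + dtos(eTimeStamp) tag soComplex

&& Option 2: compute the key into a plain character field once per
&& append, then build a simple index on that field
replace all idxKey with "SO2" + left("0000000" + siteID, 13) + dtos(eTimeStamp)
index on idxKey tag soSimple

Either way the key has to be maintained on every append; the stored
field keeps the index expression trivial, at the cost of wider records
and of keeping idxKey in sync yourself.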
Before you build either of those on a large table, though, I would ask
you to have a look at the limits of DBF tables. With daily uploads and
the table expected to exceed a million records, you could reach those
limits sooner than you think (under the BDE, for instance, a .dbf file
is capped at 2 GB).

I would suggest shifting to a database server: they are much faster and
have much greater limits.

DBF tables have constraints controlled by the BDE, and the BDE is
32-bit software that has not been updated in a very long time.

Database servers are 64-bit and are constantly being updated, so they
are much faster and may not even need these indexes. In my tests on
tables of 25,000 rows, a plain ORDER BY was sufficient; I cannot say
how it behaves on tables with a few million rows.
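
As a sketch of what the server route looks like from dBL: the BDE alias
MYSERVER, the table readings, and the column gas are assumptions on my
part (siteID and eTimeStamp are from your post), and none of it has
been tested against your setup.

d = new Database()
d.databaseName = "MYSERVER"   && BDE alias pointing at the server
d.active = true

q = new Query()
q.database = d
q.sql = [select * from readings where gas = 'SO2' order by siteID, eTimeStamp]
q.active = true

&& walk the result set; the server does the sorting, no local index
do while not q.rowset.endOfSet
   ? q.rowset.fields["siteID"].value
   q.rowset.next()
enddo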

The choice is yours.

Regards
Akshat