Subject: Re: complex indexes vs index key fields
From: Gaetano De Luisi <gaetanodd@hotmail.com>
Date: Tue, 28 Jul 2020 01:02:54 -0400
Newsgroups: dbase.getting-started
It seems that the 32-bit driver is the only one visible to the BDE Administrator. Now I realise that those are drivers, and that the 32-bit and 64-bit versions are not interchangeable...
I still have two choices: ANSI or Unicode. For testing purposes I chose ANSI, as I only need accented characters rather than the whole range of encodable characters, but I must say I don't fully understand the implications. The description of the database on the server says latin-swedish-case insensitive, which seems to match the ANSI character set as far as my character-set knowledge goes.
I tried ANSI and was able to set up the ODBC alias and the BDE alias. After stumbling over a few hurdles, the following tutorial helped me create a table from dBase in the SQL database (http://www.dbasehost.com/tutorials/mysql1.php).
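In case it helps anyone following along, this is roughly what I ended up running; MYSQLDB is just what I called my BDE alias and airdata my table, so treat the names as placeholders:

   USE airdata                        && the local DBF
   COPY TO :MYSQLDB:airdata FOR .F.   && FOR clause never matches, so only the structure goes across
   USE                                && close the local table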
I then tried to append the data from my DBF file to the server, but that is taking forever, about 350 records per minute... Can I interrupt the append process without corrupting the database? Can you suggest a more efficient way to load data onto the database server?
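For reference, the append itself is nothing fancy; a sketch of what I am running, with the same placeholder names as above:

   USE :MYSQLDB:airdata      && open the server-side table through the BDE alias
   APPEND FROM airdata.dbf   && inserts the local rows one by one over ODBC
   USE

I assume each record ends up as its own insert across the connection, which would explain the speed, but I may be wrong about that.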
Thank you,
Gaetano.
Gaetano De Luisi Wrote:
> Hi Akshat,
>
> My apologies for the confusion, I didn't mean 1 million records a day after one year, but rather that the total volume could reach 1 million records after one year. Daily volumes are not expected to be very high until that time, if everything goes to plan...
>
> My web site plan supports MySQL databases; I have set one up and am now trying to connect to it with dBase.
>
> Any advice in this respect would be welcome. For example, there are 32-bit and 64-bit versions of the ODBC connector for MySQL, and I am not sure which one to use: I have 64-bit Windows 10 but the BDE is 32-bit, and I don't know whether either version is fine or whether it is best to stick to the 32-bit one.
>
> Akshat Kapoor Wrote:
>
> > On 25.07.2020 02:29, Gaetano De Luisi wrote:
> > > My DBF file has the potential of getting quite large, and to reduce the processing time for queries I need a couple of complex indexes.
> > >
> > > While creating the indexes, I noticed that saving the files took a fair amount of time for my test data of 25,000 records and appending the CSV data into the DBF also took noticeably longer than without the complex indexes.
> > >
> > > For large tables, what do you think would optimize performance: using a complex index (e.g. ["SO2"+left("0000000"+siteID,13)+DTOS(eTimeStamp)]), or creating a calculated field in the DBF file to build a simple index on?
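> > >
> > > To make the two options concrete, here is roughly what I mean; the tag and field names are placeholders:
> > >
> > >    && Option 1: a single complex index expression
> > >    INDEX ON "SO2"+LEFT("0000000"+siteID,13)+DTOS(eTimeStamp) TAG so2key
> > >
> > >    && Option 2: fill a plain character field (say cKey, C 24) with the
> > >    && same value whenever records are appended, then index the field alone
> > >    REPLACE ALL cKey WITH "SO2"+LEFT("0000000"+siteID,13)+DTOS(eTimeStamp)
> > >    INDEX ON cKey TAG so2key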
> > >
> > > The database could potentially have daily uploads or deletions of data and exceed 1 million records.
> > >
> >
> > Good Morning Gaetano,
> > Before proceeding further I would request you to have a look at the
> > limits of dbf tables.
> > With a million records a day you could easily reach the limit of
> > dbf tables.
> >
> > I would suggest shifting to a database server. They are much faster and
> > have much greater limits.
> >
> > dbf tables have constraints controlled by the BDE, and the BDE is 32-bit
> > and has not been upgraded for a long, long time.
> >
> > Database servers are 64-bit and are constantly being updated, hence much
> > faster, and they may not require indexes at all. ORDER BY has been
> > sufficient for the 25k-row tables that I have tested; I cannot say for
> > tables with a few million rows.
> >
> > Choice is yours
> >
> > Regards
> > Akshat
>