Subject Re: SQLite to DbF
From Mervyn Bick <invalid@invalid.invalid>
Date Fri, 17 Feb 2023 15:21:12 +0200
Newsgroups dbase.getting-started

On 2023/02/16 18:09, Ralf Krieg wrote:

> Hi Mervyn,
>
> thank you for your quick response.
>
> I already have the empty tables as a template. These come from the manufacturer of the database interface we use. All in-house applications rely on these table structures.
> There are no "memo" type fields in these dbf tables.

A character field in a .dbf file can't hold more than 254 characters. If
no character field in the SQLite tables ever holds more than 254
characters then there shouldn't be a problem writing to a .dbf character
field even if the field in the SQLite table is reported as MEMO by the
ODBC driver.
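
If you want to be sure before you start you could run a quick check
over the SQLite side first.  The sketch below is untested and assumes
q1 is a query on the SQLite table and "notes" stands in for one of the
fields reported as MEMO; adjust the names to suit.

     local nMax, xVal
     nMax = 0
     q1.rowset.first()
     do while not q1.rowset.endOfSet
        xVal = q1.rowset.fields["notes"].value  // hypothetical field name
        if xVal # null
           if len(xVal) > nMax
              nMax = len(xVal)
           endif
        endif
        q1.rowset.next()
     enddo
     ? "Longest value:", nMax  // anything over 254 won't fit a .dbf character field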


> A query object on the SQLite database connected via ODBC creates a rowset object with fields of type "memo" or "longvarchar".
> The content of the "memo" fields must then be written into the "chr" fields of the template .dbf. This works via "append" or "updateSet.append()" as long as the amount of data is manageable. With large tables the error message "Temporary table resource limit" appears.
> As a test I have already tried transferring between two query objects record by record, each field individually, via "beginAppend()", as you show in your example.

> This costs too much time, especially since I would first have to determine the field names for each table in order to address them correctly. Date fields would have to be handled separately, because SQLite only knows timestamps and has no pure date type. This is all possible, but it will cost a lot of runtime in the end.

As long as the structures in the two rowsets are compatible you don't
need the field names when writing from one rowset to another.  You can
use the field's ordinal number instead

     q2.rowset.fields[n].value = q1.rowset.fields[n].value

Dates can be dealt with reasonably easily.


     if q1.rowset.fields[n].type = 'Timestamp'
        q2.rowset.fields[n].value = dttod(q1.rowset.fields[n].value)
     else
        q2.rowset.fields[n].value = q1.rowset.fields[n].value
     endif

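Putting the two together, a record-by-record transfer could look
something like the sketch below.  It assumes q1 is the query on the
SQLite table, q2 is the query on the target .dbf, the fields line up
one for one and the type string reported by the ODBC driver really is
'Timestamp'; I can't test any of that against SQLite myself.

     q1.rowset.first()
     do while not q1.rowset.endOfSet
        q2.rowset.beginAppend()
        for n = 1 to q1.rowset.fields.size
           if q1.rowset.fields[n].type = 'Timestamp'
              // SQLite timestamps become plain dates in the .dbf
              q2.rowset.fields[n].value = dttod(q1.rowset.fields[n].value)
           else
              q2.rowset.fields[n].value = q1.rowset.fields[n].value
           endif
        next
        q2.rowset.save()
        q1.rowset.next()
     enddo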

> A simple "append from" also works, just as long as the table does not exceed a certain size.
> I would also like to use ADO instead of the BDE to access the SQLite database, but unfortunately I got the same errors with dBASE Plus 8.
> Is the behavior (ADO access via ODBC) handled better under dBASE 2019? If so, I would like to upgrade.

ADO in dBASE 8 had many shortcomings and I didn't use it.  The ADO
implementation has been significantly improved over the years, although
it is not complete and there are still some issues.  It is, however,
usable, and I use it to access my own data, which I have moved to
Firebird.

Any transfer of data from one rowset to another will be slow compared
to APPEND, which is handled by the BDE or ADO.  ADO in dBASE has
improved significantly since dBASE 8, but as I don't use SQLite I'm not
able to test transferring a large table.
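
For what it's worth, the BDE fast path is the updateSet.append() you
have already tried, presumably along these lines (the names here are
only placeholders):

     local u
     u = new UpdateSet()
     u.source = q1.rowset            // rowset on the SQLite/ODBC side
     u.destination = "template.dbf"  // hypothetical target table
     u.append()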

There is a commercial tool available from www.fullconvert.com which may
be worth looking at, but at a subscription of €599 for 3 months it's a
bit expensive.

Mervyn.