Subject Re: Updating buffer info to database file
From Akshat Kapoor <>
Date Fri, 10 Aug 2018 18:13:04 +0530
Newsgroups dbase.getting-started

On 10/08/2018 17:25, Ken Mayer wrote:
> On 8/9/2018 10:56 PM, john wade wrote:
>> Ken Mayer Wrote:
>>> On 8/9/2018 1:51 PM, john wade wrote:
>>>> Eventually I was able to hold other terminal users at bay while
>>>> updating a stock file by using a file with one field "Locked"
>>>> "Y/N". If the "Locked" field was "Y", the terminals were put into a
>>>> loop until the "Locked" field was "N". The terminal updating the
>>>> "stock" database changed the "Locked" field from "Y" to "N" when
>>>> update was complete.
>>>> Which command is used to write the buffered data to the field value?
>>>> In the multi-user environment, the terminal effects the change, but
>>>> the file server needs to close databases, and only then do the
>>>> changes made by the terminal get written away.
>>>> I have tried commit(), refresh(), flush(), save(), you name it.
>>>> Any suggestions, as I cannot close databases to update the changes
>>>> without errors coming up.
>>> Once the users are locked out, they shouldn't be able to do *anything*. If
>>> you want them to work with transactions, which handle the kind of
>>> buffering I think you're talking about, then that is where commit and
>>> flush come in. Take a look at transaction processing in online help.
>>> However, it may be a huge amount of work (I've never liked it myself so
>>> have avoided it). If I lock a user out of something I don't let them
>>> work with the data at all ... they have to wait until the table is no
>>> longer locked.
>>> Ken
>>> --
>>> *Ken Mayer*
>>> Ken's dBASE Page:
>>> The dUFLP:
>>> dBASE Books:
>>> dBASE Tutorial:
>> Ken,
>> The application is point of sale in particular. Stock is sold and the
>> closing quantity is updated online real time. I can now process the
>> multiple items being sold from the stock file by the process of the
>> loop while other sales points are processing the sale transactions.
>> Running the process on the terminal, the closing quantity is updated.
>> It looks as if the data has been written away, but the file server
>> picks up the previous closing quantity and the terminal data is thrown
>> away. If the file server exits the process with a "close databases"
>> command, the terminal process updates the closing qty real time online.
>> Opening the application on the file server again reflects that the
>> data HAS been written away.
>> The same process on two locations gives two results.
>> While the file server is running the application, as if in a single
>> user environment, the result is correct. As soon as the terminal
>> effects a change in the databases, the fileserver picks up the "old"
>> closing qty in the database, and data is thrown away. (My reason for
>> thinking that the data is in a buffer somewhere)
>> Is this because of the data being buffered, and what in your opinion
>> is the best way to write the buffered data to the database online,
>> real time?
>> If you need a copy of the "test" application, let me know and I will
>> forward to you. Thanks Ken
>> John
> I honestly have no clue. I see Akshat pointed to a setting in the BDE
> (the database engine), dealing with sharing. I don't know if that's
> enough. Hopefully those who do more with multi-user databases can pop
> in, as I really haven't done a *lot* with them. The small amount I've
> done I have never seen anything like this.
> Ken

I am also waiting for John's answer. I am 99% sure it is this setting, as
I have experienced the same thing a little less than a year ago.
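
For reference, the setting I have in mind is the BDE's LOCAL SHARE flag,
which (if I recall correctly) must be set to TRUE on every machine that
opens the shared tables, so that the engine's buffering stays consistent
with other access to the same files. In the BDE Administrator it lives
roughly here (paths from memory, so treat this as a sketch):

```
Configuration → System → INIT
    LOCAL SHARE = TRUE
```

Without it, each workstation can keep writing from its own cached copy of
the table header, which matches the "file server sees the old closing qty"
symptom John describes.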

As for the remaining 1%: if John is using OODML, then the rowset being used
to update values could be out of date, and a requery() should solve the problem.
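
A minimal OODML sketch of what I mean (the query name, key value, and field
names are made up for illustration; it assumes a Query object q already
attached to the stock table):

```
// Hypothetical sketch: re-fetch the rows from disk before editing,
// so we do not overwrite the closing quantity with a stale buffer.
q.requery()                       // discard the cached rowset, reread the table
if q.rowset.findKey( cItemCode )  // locate the item being sold (assumed key)
   q.rowset.beginEdit()
   q.rowset.fields[ "ClosingQty" ].value := ;
      q.rowset.fields[ "ClosingQty" ].value - nQtySold
   q.rowset.save()                // write the buffered row back to the table
endif
```

The point is only the order of operations: requery() first, then edit and
save(), so each terminal works from the current on-disk values.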

There could be a host of other problems that I am not aware of, but for
that we will have to wait for John's reply.
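
Incidentally, the "Locked" Y/N flag file John described could also be done
with the engine's own record locking, but if the flag approach is kept, the
waiting loop might look roughly like this in xDML (a sketch only; the lock
table and field names are assumed):

```
use Locks in select()          && hypothetical one-record lock table
do while Locks->Locked = "Y"
   inkey(1)                    && pause about a second before polling again
enddo
replace Locks->Locked with "Y" && claim the lock
* ... update the stock table here ...
replace Locks->Locked with "N" && release the lock for the other terminals
```

Note there is still a small race between testing the flag and setting it;
an rlock() on the lock record would close that gap.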