Subject: Re: csv field data
From: Mervyn Bick <invalid@invalid.invalid>
Date: Wed, 14 Jul 2021 07:48:22 +0200
Newsgroups: dbase.getting-started
On 2021/07/13 14:58, Charlie wrote:
> I have a program that Mervyn helped me with a while ago. Basically all it is doing is taking data that I have replaced into fields in a table and converting the data to a csv file. That is easy to do if it is cut and dried data. But I need to put an array in the postage column. It is like this: {"price":["5.00","10.00"],"method":["Standard","Priority"]}
>
> Of course the commas are screwing up the columns. Is there a way of changing this statement so that the commas will not interfere with the structure of the csv file?
>
> Thanks for any help
>
>
The only way to prevent the "extra" commas from causing problems in a
.csv file is for the contents of the field to be delimited as a whole.
If you use double quotes as the delimiter throughout the .csv file then
you can't have double quotes in the contents.
"{'price':['5.00','10.00'],'method':['Standard',"Priority']}"
If you change the delimiter to a single quote for the entire .csv file then the field can stay as it is:
'{"price":["5.00","10.00"],"method":["Standard","Priority"]}'
The delimiters don't get saved when the contents are placed in a table
field, so this is probably your best option. It does, however, present
problems if you have apostrophes in other text fields.
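For what it's worth, here is the single-quote-delimiter idea sketched in Python's csv module (just to illustrate the round trip, not dBASE code): every field is wrapped in single quotes, so the commas and double quotes inside the postage field pass through untouched.

```python
import csv
import io

# Sample record: an id plus the JSON-like postage field full of commas.
row = ["1", '{"price":["5.00","10.00"],"method":["Standard","Priority"]}']

# Write with a single quote as the field delimiter (quotechar) so the
# embedded commas and double quotes need no special treatment.
buf = io.StringIO()
writer = csv.writer(buf, quotechar="'", quoting=csv.QUOTE_ALL)
writer.writerow(row)
print(buf.getvalue().strip())
# '1','{"price":["5.00","10.00"],"method":["Standard","Priority"]}'

# Reading it back with the same quotechar recovers the original fields.
reader = csv.reader(io.StringIO(buf.getvalue()), quotechar="'")
print(next(reader) == row)  # True
```

As noted above, this falls over if another text field contains an apostrophe, since that collides with the delimiter.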
An alternative is to use, say, a semicolon instead of a comma as the
separator for the .csv file. usvout.prg in the dUFLP
makes provision for this but it does mean that anyone who needs to read
the .csv file will also need a custom reader.
Mervyn.