On 12/20/2016 02:55 AM, Per Jessen wrote:
> Anton Aylward wrote:
>> In the meantime, the real issue isn't that you can't import a snapshot
>> of the journalctl output into oocalc, it's that oocalc ignores the JSON
>> standard unless you have the add-in for it. It seems dumb that it should
>> handle CSV and not JSON in this day and age.
>
> For largish amounts of data (megabytes), JSON or XML is overkill, but
> CSV is very efficient.
If you're talking "byte efficient" as in low overhead, then yes. But the
whole point of something like XML is not byte efficiency, it is semantic
clarity and error handling. A few garbled bytes of CSV write off
everything that follows. A few garbled bytes of an XML transmission
resync on the following stanza because of the semantic structure. JSON is
similar, though the contrast between XML and JSON is rather like that
between ALGOL and C: keywords versus curly brackets.

Error resilience, semantic coherency and correct structuring matter more
when dealing with large amounts of data in a stream. At the present cost
of storage and bandwidth, the ideas of 'efficiency' we old-fashioned
engineers were brought up on now take second place.

There is, in my database of DotSigQuotes:

------------------
[Alice] has courage which can only be described as awesome. Against all
odds, over a noisy telephone line, tapped by the tax authorities and the
secret police, Alice will happily attempt, with someone [Bob] she doesn't
trust, whom she cannot hear clearly, and who is probably someone else, to
fiddle her tax returns and to organize a coup d'etat, while at the same
time minimizing the cost of the phone call.

A coding theorist is someone who doesn't think Alice is crazy.
--------------------

The moral here is that most people are not coding theorists, or engineers
striving for 'byte efficiency'. The efficiencies of XML and JSON have
more to do with Alice's noisy telephone line, in that they are streaming
formats, the data being supplied in large amounts from a remote source
over a high-speed network. The cost, considering the large amount of
data, of going back to the start when a burst error occurs is enormous.
Not just in terms of "the phone call" but in terms of time, since this is
real-time streaming data.
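A toy illustration of that resync point, assuming line-delimited JSON (as journalctl emits) against CSV with quoted fields. One lost closing quote in CSV silently swallows the following rows into a single mangled record, while the JSON parser can drop the one damaged record and carry on at the next line boundary:

```python
# Toy illustration (not a benchmark): a single garbled byte in quoted CSV
# corrupts everything up to the next stray quote, while line-delimited
# JSON resynchronises at the next record boundary.
import csv
import io
import json

def parse_csv(text):
    return list(csv.reader(io.StringIO(text)))

def parse_ndjson(text):
    good, bad = [], 0
    for line in text.splitlines():
        try:
            good.append(json.loads(line))
        except json.JSONDecodeError:
            bad += 1          # drop the damaged record, keep going
    return good, bad

CSV_DAMAGED = 'a,"ok",1\nb,"gar\nc,"ok",3\n'           # line 2 lost its closing quote
NDJSON_DAMAGED = '{"id": 1}\n{"id": gar!\n{"id": 3}\n'  # line 2 garbled

rows = parse_csv(CSV_DAMAGED)                 # rows 2 and 3 merge into one record
recs, dropped = parse_ndjson(NDJSON_DAMAGED)  # only record 2 is lost
```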
Yes, I'm perfectly aware that you could code up a packetized protocol for
CSV, with a checksum for each packet and acknowledgement of good receipt
versus a request for retransmission, sort of like TCP writ large. But
pretty soon you run into the overhead, and into all the issues with
queueing and flow control that have produced hundreds of studies and
papers on TCP packet management over the decades. You really don't want
to go there. Use XML or JSON instead.

--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?

--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
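For the record, the packetized-CSV idea dismissed above could be sketched like this, assuming a trivial frame of 4-byte length, 4-byte CRC32, then payload. A real protocol would also need sequence numbers, acknowledgements, retransmission and flow control, which is exactly the TCP-shaped rabbit hole being warned about:

```python
# Minimal per-packet checksum framing: length + CRC32 + payload.
# A corrupt packet is detected and would trigger a retransmission request.
import struct
import zlib

def frame(payload: bytes) -> bytes:
    # Network byte order: 4-byte length, 4-byte CRC32, then the payload.
    return struct.pack("!II", len(payload), zlib.crc32(payload)) + payload

def unframe(packet: bytes) -> bytes:
    length, crc = struct.unpack("!II", packet[:8])
    payload = packet[8:8 + length]
    if len(payload) != length or zlib.crc32(payload) != crc:
        raise ValueError("corrupt packet, request retransmission")
    return payload
```

Framing one CSV row and flipping a single byte is enough to make `unframe` reject the packet.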