[om] Re: [om-a] critique of OpenMath
Richard Fateman
fateman@cs.berkeley.edu
Thu Jan 18 18:17:05 CET 2001
David Carlisle wrote:
>
> [changed cc list to om@ from om at announce]
>
> > In fact Dan Zwillinger, in re-typesetting Gradshteyn & Ryzhik, encoded stuff like
> > zfunction(\sin, x). Two sets of macros would allow translation either
> > to TeX or to some computer algebra system (to the extent possible).
>
> So this is, in OpenMath parlance, defining a Content Dictionary.
Since it is done without reference to any pre-existing CDs, I assume it
would not be acceptable to OMS.
> Ie a fixed set of function names with some defined semantics.
Nowhere in the document are the semantics defined. They are merely
references to TeX via the generated code, on the one hand, and implicit
references to the G&R formulas on the other. Calling this semantics
is a stretch. Zwillinger does not, in his pseudo-CD, try to define
cos() via a series expansion, etc.
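(For concreteness, a defining property of the sort a CD could in
principle supply; my example, not Zwillinger's:)

```latex
% cos defined by its Maclaurin series -- the kind of "semantics"
% a Content Dictionary could in principle attach to a symbol.
\cos x \;=\; \sum_{n=0}^{\infty} \frac{(-1)^n \, x^{2n}}{(2n)!}
```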
> It happens to use TeX syntax rather than the XML syntax normally
> associated with Content Dictionaries, but that is really a small
> matter.
You may think so, but <emphasis> the syntax is so offensive that it
is hard to believe anyone would ever use it </emphasis>. And I know
that there are x, y, z tools to make it unnecessary to use it
directly. So why is it thrust in our faces all the time? If there
is a 1:1 correspondence between nice tool x and the CD language, then
tool x should be used, and x's language shown in all documents.
Personally, I prefer x = Lisp, where one can use a full-fledged
language to define substitutions, macros, parsers, contexts, etc.
Zwillinger used TeX's macro language. The idea that the existing
syntax is preferable because it can be easily parsed is hardly
plausible to anyone who has written a "parser" to read Lisp (6 lines
of code in Lisp) and a Lisp-to-XML printer (the code is probably 4
lines of Lisp, plus endless data on verbose equivalents to simple
data items). So far as I can tell, the reason to use XML is to
prevent anyone from doing interesting things. What I would consider
interesting is this: program a tells program b, "I'm going to send
you a glob of neat stuff that is a character string. Here's a
program to read it." (The program is sent in Lisp; the neat stuff is
sent in the encoding.) This can be done in conventional Lisp by
defining read macros, at which point one can switch syntax from
(this (kind (of stuff))) to
Any[stuff]*you+might^want-so_long_as_you_program_it.
The guiding principle that everything conveyed must be checkable by
an XML parser is a shot in the foot for data transmission. I'm
willing to go along with
<xml-metadata> stuff </xml-metadata> <ThisIsTheGoodStuff> morestuff </ThisIsTheGoodStuff>
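To back up the parenthetical above: a reader for s-expressions really
is a few lines, and the s-expression-to-XML printer not much more. A
sketch in Python rather than Lisp (all names here are mine, not from
any OpenMath tool):

```python
# A minimal s-expression reader and s-expression-to-XML printer.
# Sketch only, in Python rather than Lisp; error handling omitted.

def tokenize(text):
    # Pad parentheses with spaces, then split on whitespace.
    return text.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    # Consume one expression from the token list: a nested list or an atom.
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(read(tokens))
        tokens.pop(0)           # discard the closing ")"
        return expr
    return tok                  # an atom

def to_xml(expr):
    # (f a b) becomes <f>...</f>; atoms become leaf elements.
    if isinstance(expr, str):
        return "<atom>%s</atom>" % expr
    head, rest = expr[0], expr[1:]
    return "<%s>%s</%s>" % (head, "".join(to_xml(e) for e in rest), head)

sexpr = read(tokenize("(plus (sin x) 1)"))
print(to_xml(sexpr))  # <plus><sin><atom>x</atom></sin><atom>1</atom></plus>
```

The reader is six lines of real work; the "endless data on verbose
equivalents" would be whatever mapping you bolt onto to_xml.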
>
> > (he prefaced each macro with "z" to try to avoid name conflicts)
>
> Fine for a one off project, but such mechanisms don't scale too well.
> If you wanted to do the same thing but allow different people to
> contribute different macro sets in a coordinated fashion, then you would
> be moving from writing a single Content Dictionary to defining the
> OpenMath project.
I agree that if several people contributed and wrote new macros it
would be potentially confusing. Translating from separate macro sets
to a "universal" macro set, the union of all others (eliminating
duplications) would require some work. Since it is not clear that
any pre-existing CD would work for this (or any other) reference work,
nor is it clear that any ordinary person encoding such data would know
about all CDs, it is not obvious that using OM would be especially
helpful compared to (say) using TeX macros. So you've identified a
problem but not a particularly good solution. A good solution would
be a program that automatically reads math books and converts them
to OpenMath, producing additional CDs as needed. I mean reading
math books using an optical scanner.
This is not utterly infeasible: partial versions can be demonstrated.
(Here at Berkeley, and probably other places).
>
> > Actually, without hints from the source, it is hard to make breaks except
> > in situations you are quite familiar with.
>
> In a TeX document the author knows (or thinks he knows) the current page
> width. In a document that's designed to be multi-purpose (passed to a
> CA system, rendered on screen in an unknown font in a window that may
> change size at any time) then it isn't at all clear that such hints are
> at all desirable. It is true that it is currently hard (for general
> documents and even more so of Mathematics) to get the same quality
> from a TeX source that is machine generated from XML as it is from
> a hand written TeX document. However things are improving, and it wasn't
> so long ago that people said that printing presses would never catch on
> as you got better quality from a scribe. For production work (to make a
> paper book for example) it may currently be necessary to "hand tweak"
> the generated TeX, but such tweaks should be there, in the final print
> form for a known media, not in the MathML (or OpenMath).
In TeX I think there are ideas like \mathchoice, which supplies
alternative renderings (say 1/2 versus {1\over2}) and lets the
version chosen depend on some environment parameters (exponent vs.
baseline display mode?). It would be much better if we had variables
and a programming language and could (as advocated by Soiffer) deal
with common subexpressions by formatting them once.
MathML/XML/Display seems
to (as I've said) attack the easy problem. I guess I am picky about this
(sorry, David) because I've looked at the problem and am particularly
skeptical about others' claims to have solved the problem when the
hard parts are ignored.
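For reference, \mathchoice takes four branches, one per TeX math
style; a plain-TeX sketch (the macro name \half is mine):

```latex
% \mathchoice{<display>}{<text>}{<script>}{<scriptscript>}
% TeX selects the branch matching the current style, so one macro
% can print a built-up fraction on the baseline but 1/2 in exponents.
\def\half{\mathchoice
  {1\over2}   % display style
  {1\over2}   % text style
  {1/2}       % script style (first-level exponents)
  {1/2}}      % scriptscript style
$$a^{\half} + \half$$
```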
How to attack the hard problem? I don't know for sure, but I'm looking
at MINSE and glyphd. The author of this stuff (Ka-Ping Yee) is now
a grad student at UC Berkeley. He did this quite impressive stuff while
an undergrad at Waterloo, and it seems to have been totally ignored.
I ignored it myself, but then I was already somewhat disillusioned!
Not as disillusioned as Ping, though.
>
> > Say a commutative diagram, for arguments' sake. I think the
> > MathML renderer can't possibly do all that by itself.
>
> MathML doesn't do Commutative Diagrams at all.
I assume that's not really true, if you can set a table with arrow
symbols in it. Certainly someone can piece together an image.
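Something along these lines does seem expressible: a rough sketch of
a commutative square faked with an mtable and arrow characters (my
own fragment, not an example from the MathML spec):

```xml
<!-- A 2x2 commutative square approximated with a table of arrows. -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mtable>
    <mtr>
      <mtd><mi>A</mi></mtd>
      <mtd><mo>&#x2192;</mo></mtd>  <!-- horizontal arrow -->
      <mtd><mi>B</mi></mtd>
    </mtr>
    <mtr>
      <mtd><mo>&#x2193;</mo></mtd>  <!-- vertical arrow -->
      <mtd></mtd>
      <mtd><mo>&#x2193;</mo></mtd>
    </mtr>
    <mtr>
      <mtd><mi>C</mi></mtd>
      <mtd><mo>&#x2192;</mo></mtd>
      <mtd><mi>D</mi></mtd>
    </mtr>
  </mtable>
</math>
```

Whether the arrows line up acceptably is, of course, exactly the
layout problem at issue.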
> You could do them in SVG.
I don't know what SVG is, but maybe it's just a graphical language.
>You could lay them out by hand (and as for
> text that may currently be the best way, although difficult if you
> don't know how much space you've got) But automated tools for laying
> out graphs seem to be getting better and it may already be feasible just
> to specify the logical graph structure of your commutative diagram and
> let an automated tool render that into however much space it has.
>
> > Such a person could use a CD, but it would not be official.
> So nothing is hidden. Most (currently, all) CDs may not be approved,
> but it doesn't stop them being used in the interim.
The approval process itself seems to be hidden. One could post
the CD under the "extra" category, which is currently empty,
suggesting that this is not a hot activity.
> The Society could set up a web form that offered an "instant approval"
> system rather than the proposed review system, but what would be the
> point in that?
I guess there would be no need, since no one is even asking for
approval!
>
> > which turn out to be a non-existent page.
> These things happen in an imperfect world. It doesn't mean that there is
> a conspiracy of secrecy, which was what you appeared to imply.
Sorry, but if you expect people to use something, hyperlinks
should work. There should be tools run over the web site periodically
to find dead links.
>
> > As for approval, it seems that anyone using the software
> > written at INRIA for any commercial purpose may be subjected
> > to some unknown fee imposed by the authors.
>
> It's their software; they have chosen to make it available under a
> "non-commercial" licence. Commercial users need to contact them first.
> So it's not GPL, but it is not that unusual a licence, is it?
>
I have no idea what kind of license they might require so I
can't tell if it is unusual. It is certainly a turn-off to
anyone looking at this technology for commercial use.
(Essentially: should we subject ourselves to this standards
organization and pay INRIA money too?)
> > yes only if the new language is demonstrably able to solve problems that
> > cannot be solved by (say)
> > Mathematica's prefix form
> > Lisp s-expressions
> > Java RMI
> > Corba
> > MathML
> > TeX
>
> You could of course specify the OpenMath trees using s-expressions or
> TeX or MathML etc, but that misses the point of OpenMath which is
> to have a mechanism for recording the symbols used in the expression.
> If we switched the concrete syntax to lisp then as far as I can see,
> nothing in the OpenMath project would really need to change (except of
> course some software which would need to parse/write lisp rather than
> XML) The point of OpenMath is not the "language" if by that you mean a
> particular XML vocabulary. It is the collection of symbol definitions
> and combinations of those symbols.
There is also a collection of symbol definitions in any computer
algebra system. See also the Wolfram
"special functions" project. It has its peculiarities and flaws, but
also a certain consistency. I hope the NIST dlmf project will
also have something to offer. Apparently they have not bought
into OpenMath, even though Bruce Miller is there. (The encoding
of the sample chapter by Olver seems to include some OpenMath
encodings, but unless they are derived from the same source
as the TeX encodings, how are they going to be debugged?)
So what is
it that XML has to offer? A syntax which you say is inessential
(agreed; but it is the most visible component of the project) and
a symbol collection (which, so far as I can tell, is rather
incomplete).
>
> You could specify an OpenMath encoding in the \ and {} syntax that
> people think of as TeX. You can also get TeX to read OpenMath in the <
> and > syntax that people think of as XML. But if you do either, or both,
> of these, then OpenMath is still solving problems that are not
> addressed by TeX (or the other systems that you list).
>
I guess I don't see XML or openmath
as solving any problems that I need solved
when I am communicating between programs such as TILU
and browsers or the Mac graphing calculator or TeX.
I suspect that the only problem solved by the XML encoding
is to allow XML-aware programs to search through ASCII files
and say "this is XML!" and maybe "I found an integral."
This IS worth something.
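That something might look like the following sketch (Python; the
calculus1/int symbol follows the OpenMath XML encoding, but the
function name and the sample object are mine):

```python
# What the XML encoding buys you: a generic XML parser can
# recognize the markup ("this is XML!") and spot, say, an integral.
# Sketch only; assumes the OpenMath XML encoding, in which OMS
# elements carry cd/name attributes (here calculus1/int).
import xml.etree.ElementTree as ET

def found_integral(text):
    root = ET.fromstring(text)   # raises if it is not well-formed XML
    return any(el.get("cd") == "calculus1" and el.get("name") == "int"
               for el in root.iter("OMS"))

obj = """<OMOBJ>
  <OMA>
    <OMS cd="calculus1" name="int"/>
    <OMV name="f"/>
  </OMA>
</OMOBJ>"""
print(found_integral(obj))  # True: "I found an integral"
```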
My apologies for the argumentative tone I seem to be
assuming for this dialog. I hope I'm not too far off-base
on my understanding of what's been proposed and done.
In part I may be expressing my exasperation that this
activity has taken so long to achieve what appears to
me to be relatively little, while several other projects
gallop past.
RJF
> David
>
--
om@openmath.org - general discussion on OpenMath
Post public announcements to om-announce@openmath.org
Automatic list maintenance software at majordomo@openmath.org
Mail om-owner@openmath.org for assistance with any problems