[om] different representations for a/b and a*b^(-1)

Richard Fateman fateman at cs.berkeley.edu
Sun May 19 17:22:59 CEST 2002

Paul Libbrecht wrote:

> On Samedi, mai 18, 2002, at 09:40 PM, Richard Fateman wrote:
>> Well, this is not true; in fact I wrote a translator from
>> Macsyma to TeX.
> OK, now, what about a rendering engine for a collection of authored 
> documents ?

You mean like TeX? I would expect the author or the editor of
a document to specify details like how much space to put between
the equation number and the equation, and whether it is on the left
or the right, or what characters to use for indeterminates from
different classes.  If a renderer imposed new names, for example bold 
font for vectors in the display, then it would also have to re-typeset 
the text to make sure it was consistent.  So your rendering engine
would have to render complete articles and books.  So your OM
representation would have to be capable of everything that might
contain mathematics, not just all of mathematics. Text. Embedded
graphics. Animations.

>>> The advantage of OpenMath in precisely not distinguishing the big 
>>> fraction and the slash is because they have the exact same meaning 
>>> (I'm calling this semantic).
>> I thought that we disposed of this misconception.  OM does
>> not deal with meaning. It deals with syntax trees.  Since it
>> deals with syntax trees, I see no problem in having different
>> operators for division.

You might be amused to see that the Macsyma TeX rendering of sin(1/2) is
  \sin \left(\mathchoice {{1}\over{2}}{{1}\over{2}}{1/2}{1/2}\right)

which means that, depending on context as determined by the
renderer, 1/2 is either a built-up fraction or not.

> Yes, I agree that the naming "semantic" is just absolutely fuzzy. In a 
> sense, everything is a syntax tree, so this is also an absolute 
> non-sense...

No, because a dumb computer can read and write syntax trees, and
this can be done in C, Java, Lisp, ML, ... with the important
operations of reading/reconstructing in memory/writing all done
clearly and unambiguously.  The definition of cosine in OM is
not useful.  I haven't looked at it in years, but unless it
has changed, two computer systems can use this OM notation and
not agree on what has been transmitted.  How, for example,
is cos(0.5)  different from cos(1/2)?  Radians or degrees?
What does conversion to floating-point mean?  Are complex
arguments allowed?  The answer is "oh, you know what cosine
is, we don't really have to tell you."  This is hardly useful.
A more definitive alternative would be to say that each name
means exactly what it means in a particular computer algebra
system (say Maple V  or Mathematica 3.0).  I choose an OLD
version of these systems because they have stopped changing!
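The ambiguity can be made concrete with a small sketch (the tuple encoding and the evaluator names are mine, not OM's): two systems can receive the identical tree for cos(0.5) and, absent a pinned-down meaning for cos, produce substantially different values.

```python
import math

# A hypothetical receiver of the tree ("cos", 0.5): the tree itself does
# not pin down the interpretation, so two systems can decode the same
# bytes and still disagree on the value.
tree = ("cos", 0.5)

def eval_radians(node):
    # System A assumes the argument is in radians.
    op, arg = node
    return math.cos(arg)

def eval_degrees(node):
    # System B assumes the argument is in degrees.
    op, arg = node
    return math.cos(math.radians(arg))

a = eval_radians(tree)   # about 0.8776
b = eval_degrees(tree)   # about 0.99996
assert abs(a - b) > 0.1  # same tree, substantially different answers
```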

>> (Oddly enough, some computer algebra systems equate exp(x) and e^x  
>> which arguably are not the same.  exp(1/2) is a single value, but 
>> e^(1/2) = sqrt(e) = two values + and - .)
> You're in the right world as you consider this as a mistake.
>>> They are however, sorry to say it, >>not the same math expressions<<. 
>>> Whether they're the same function is the work of some processes (which 
>>> can take infinite amount of time if more complicated).
>> I'm not asking for OM to solve undecidable problems. I'm only
>> trying to get it to say what it does do.  I thought I got that
>> clarified.  It is transmitting trees.  Essentially
>> what  Lisp's functions   (setf x (read))  and (print x)
>> do.
> This looks like procedural stream-things... so I'm unclear here...

No, there are no procedural streams here.

If x is a lisp representation of  a+b*c  then the simplest way
of representing it as a data structure is as a lisp list. Call
the list L.
(print L)  would then result in showing the string
"(+ A (* B C))"

If you ran the program (read), and typed that string to
it... or read it from a file... then you would construct a
lisp list isomorphic to L.
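The round trip above can be sketched in Python (a minimal stand-in for Lisp's (print L) and (read); the parser handles only symbols and parentheses):

```python
# Serialize a nested list like ["+", "A", ["*", "B", "C"]] to the string
# "(+ A (* B C))", then reconstruct an isomorphic nested list from it.
def to_sexp(tree):
    if isinstance(tree, list):
        return "(" + " ".join(to_sexp(t) for t in tree) + ")"
    return tree

def from_sexp(s):
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    def parse(i):
        if tokens[i] == "(":
            out, i = [], i + 1
            while tokens[i] != ")":
                node, i = parse(i)
                out.append(node)
            return out, i + 1          # skip the closing ")"
        return tokens[i], i + 1
    return parse(0)[0]

L = ["+", "A", ["*", "B", "C"]]
s = to_sexp(L)                 # "(+ A (* B C))"
assert from_sexp(s) == L       # round-trip yields an isomorphic tree
```

Reading and writing can be done clearly and unambiguously in any language; that is the whole claim being made for syntax trees here.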

>> I suspect that you have not done what I have done, which is
>> look at tables of integrals published in France, USSR, USA, between 
>> the years 1837 and the present.  Styles vary.
> And don't you think that there were some special annotations between the 
> authors and the typists of these big works ? That's what I call style 
> hints....

I suppose there is a question as to whether the typesetters knew
any mathematics at all, and whether the transmission of information
was purely presentational or not.  But the point is, whatever
was conveyed had presentation information on a formula-by-formula
basis, most likely distinguishing 1/2 etc.  Which suggests
to me that OM without these hints, applied formula by formula,
would not be sufficient to communicate between a mathematician
and a typesetter of an important table of integrals.  Which
suggests that "semantic OM" fails to fulfill one of its most
obvious goals.
A simple test of OM and an OM renderer would be to take
a few random pages from a nicely typeset table of integrals, or
a few pages from Abramowitz & Stegun.  Describe the pages in
OM "semantics" and run it through a renderer and see how it
looks.  I have proposed such a test of OM before.  So far
as I know it has not been tried.

>>> Why don't you think of OpenMath very close to a Lisp expression 
>>> representation of a parse-tree ?
>> I will do so from now on.   A verbose uninterpreted (except by
>> occasional informal comments) tree.
> Oh, but if you find it too verbose, just pick another syntax. There is a 
> lisp syntax for XML and it would probably taste better to you (and it is 
> generally the way the very few Lispers that consider XML do it).

Most Lispers consider XML to be terrible.

>> Calling it "semantic" or
>> "content" is like calling it "standard".  The words should
>> not be taken literally, but only informally.
> Now, you're just fighting on naming...
> I believe OpenMath does a better job in standardization than TeX does... 
> The idea of making it a standard is to promote interoperability which 
> we're striving for.

TeX has no need to standardize  math.  It doesn't need to know
about any standards except fonts and position on a page.  It doesn't
even know if xyz  means x*y*z  or is a symbol.  I think that
OM attempts to set up a common language (I would not call it
a standard), and attempts to promote interoperability.
Given the passage of time and the combined efforts of quite
a few people, at least a few of whom I know to be intelligent,
there does not seem to be much to show for it.

>>  I suspect that OM is two orders of magnitude more verbose
>> and also insufficient without defining operators that
>> suggest the actual semantics I need.
> So what's your definition of "semantics"? I can't believe your semantics 
> want to differentiate the big fraction from the in-line one, or do they?

I have already been told that if I want to make this distinction,
I can do so in OM.  I would definitely expect to be able
to distinguish them in a lisp syntax tree, and if I cannot
do this in OM, then it would be clear that OM is  defective.
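In a lisp-style syntax tree the distinction is trivial to carry: attach a formula-level style hint to the division node. A sketch (the encoding is mine, not OM's; both nodes denote the same division):

```python
# A division node carrying a presentation hint that distinguishes the
# built-up fraction from the in-line slash.
def divide(num, den, style="builtup"):
    return ("divide", num, den, {"style": style})

big = divide(1, 2)                    # render as a built-up fraction
slash = divide(1, 2, style="inline")  # render as 1/2

def render(node):
    op, num, den, hints = node
    if hints["style"] == "builtup":
        return r"\frac{%s}{%s}" % (num, den)
    return "%s/%s" % (num, den)

assert render(big) == r"\frac{1}{2}"
assert render(slash) == "1/2"
```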

>> For example, if I find 2 or more answers in my data base I have a 
>> construction that says essentially that the user should pick one of 
>> them to continue the computation.
>> Sure I could write a CD for this, but anyone interested
>> in getting answers from Tilu can more easily get the
>> results by using lisp's read.
> Again, too much lisp for me.

Here it is in TeX:  \answerchoice{1/2}{0.5}

The renderer would make a choice of equivalent answers.   In
tilu, sometimes the answers look much different because arctan
and log can be used in the result.  Sometimes the formulas
simplify differently.
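The construction is easy to model as a tree node; a sketch (the names are hypothetical, not Tilu's actual interface):

```python
# An "answer choice" node: the database returns several equivalent
# forms, and the renderer (or the user) picks one of them.
def answer_choice(*forms):
    return ("answerchoice",) + forms

node = answer_choice("1/2", "0.5")

def render(node, prefer_exact=True):
    # One possible renderer policy: prefer the exact (first) form.
    _, *forms = node
    return forms[0] if prefer_exact else forms[-1]

assert render(node) == "1/2"
assert render(node, prefer_exact=False) == "0.5"
```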

> I don't catch the point of the CD for a choice, this looks quite odd 
> (but... you're free to add a symbol whenever you want as long as you 
> keep it for you, no-one will complain).

This, of course, defeats the goal of interoperability, and therefore
promoting private CDs as a solution is pointless.  All you have
is interoperability on the intersection of all systems.  And
since some of the systems may be extremely weak, the interoperability
is severely limited.

>> An alternative
>> that I have used, and takes about 25 lines of code, is
>> to program a simple infix traversal of the tree and print out a 
>> character string which is transmitted.
> So that's printing a little (linear) string out to represent the 
> expression.


> The thing is... you just care about a linear string, which makes Tilu's 
> output not so rich display (this is kind of ok for only real, integer or 
> complex functions). 

This is false; if you tried tilu, you would see that it tries 2-D display.
I send linear strings to the Mac graphing calculator because that is
what it is happiest with.

In fact, with some switch settings Tilu does variable-width font
typesetting using MINSE (look on the internet...).

> If TILU would ever think about using OpenMath, it 
> could take advantage of many available rendering things.

It can use MINSE, which is a beautiful program that typesets
into ready-to-display GIFs.  It requires no browser plug-ins
and works in browsers that know nothing about MathML.
OM does not provide a solution to any unsolved problem
in tilu display.  In fact, I suspect it explicitly fails
to have representations for things that tilu needs.  (Of
course I could write a CD, and a stylesheet for each of some unknown 
number of renderers, etc., if
I wanted to waste my time.)

>> I suspect that another reason not to use
>>  OM is that it is still not defined completely.

> Sarcasm: do you mean it should cover the whole of mathematics?

No, it was my impression that except for the very
simple basic CDs and those forced upon OM by mathml,
nothing was complete.  For example, the calculus1 CD
is not sufficient to describe the calculus that
one would find in a table of integrals, and
certainly not what you would find in a complex variables text.
If I wanted to use OM for tilu, I could not.

> Paul
> PS: since there is a fair similarity between lisp trees and OpenMath 
> trees, I've always been wondering why the beautiful and powerful Lisp 
> implementations never came to solve the hard problems of XML.

You will have to ask others why they came up with XML.

I think you can
find comments on XML on the comp.lang.lisp newsgroup.
From the perspective of OM, I think the XML-like syntax
was a political compromise that went along these lines:
html is acceptable.
sgml is accepted and standard.
if OM comes up with something neat and clean but does
not look like sgml  (later, xml), then some group of
morons will propose a math representation schema that
looks like xml, and we will be left out.  We won't be
able to get funding from government bureaucrats. So
let us be the morons to propose XML syntax for math.

> XML 
> databases are certainly one good example.

Yes, I think that many people in the database
community say this is an example of how a bad idea can become
popular.  I am not part of that community, but
it was my general impression that this XML database
stuff was a return to the hierarchical database
technology that was largely being replaced by
relational technology.  So in that respect
it is a return to 30-year-old ideas.

> And an XSL transformer that 
> could achieve the famous "a little change in the source, a little change 
> in the output" would also be great.

I don't know what this refers to. Sorry.

om at openmath.org  -  general discussion on OpenMath
Post public announcements to om-announce at openmath.org
Automatic list maintenance software at majordomo at openmath.org
Mail om-owner at openmath.org for assistance with any problems

More information about the Om mailing list