;; -*-mode: Outline; -*-

Impressions of the International Lisp Conference
ILC 2002, October 27-31, 2002 -- San Francisco, CA
http://www.international-lisp-conference.org/

I attended the conference on Oct 29, 2002.

TOC
-- XML, XPath, XSLT implementations as SXML, SXPath, and SXSLT
-- ITA Software and Orbitz: Lisp in the Online Travel World
-- Social issues in Lisp
-- Fair threads
-- Sharpening the parentheses: bringing Lisp ideas to programming the Web
-- RoboCup
-- Upcoming book on practical Lisp applications
-- Large-scale web services, and the programming languages that build them
-- Intelligent agents on the web and in the Ether

* General impression

Comparing the Lisp conference with the Scheme workshop a month ago, I
got a strong impression that Lisp and Scheme are different languages
and different communities. I think Python is closer to Haskell than
Lisp is to Scheme. CL seems to me like Smalltalk with round
parentheses.

The Scheme community is more diverse, and seems to value a functional
approach. In contrast, the CL community seems more pragmatic -- and
more ad hoc. The CL code I saw is very object-oriented, very stateful,
very Smalltalk-ish.

* XML, XPath, XSLT implementations as SXML, SXPath, and SXSLT

http://pobox.com/~oleg/ftp/papers/SXs-talk.pdf

The conference was attended by, I'd say, 150-300 people. My
presentation was in the morning, in a parallel session. I'd say there
were around 20 people in the audience. I don't know the attendance of
the other session; I know it had a bigger room. I did notice that
attendance notably increased after lunch: perhaps people woke up and
came to listen to Peter Norvig.

People who attended my talk seemed to be interested in it. I was asked
several (perhaps 10+) questions. One person was really curious about
SXML's handling of namespaces (a topic I had originally planned to
skip). Another person asked me if the SSAX parser is streaming.
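As an aside for readers unfamiliar with SXML: it represents an XML
document as ordinary s-expressions -- an element becomes a list headed
by the tag, with attributes in a sublist headed by @. A rough sketch
of the same idea in Python (a hypothetical helper for illustration
only; the actual tools are Scheme):

```python
import xml.etree.ElementTree as ET

def to_sxml(elem):
    """Render an ElementTree element as an SXML-style nested list:
    [tag, attr-list, children...].  Attributes go into a sublist
    headed by '@', mirroring SXML's (@ (name value) ...) convention."""
    node = [elem.tag]
    if elem.attrib:
        node.append(['@'] + [[k, v] for k, v in sorted(elem.attrib.items())])
    if elem.text and elem.text.strip():
        node.append(elem.text.strip())
    for child in elem:
        node.append(to_sxml(child))
        # text that follows a child element ("tail" in ElementTree)
        if child.tail and child.tail.strip():
            node.append(child.tail.strip())
    return node

doc = ET.fromstring('<para lang="en">Hello, <em>SXML</em> world</para>')
print(to_sxml(doc))
# -> ['para', ['@', ['lang', 'en']], 'Hello,', ['em', 'SXML'], 'world']
```

Once the document is plain nested lists like this, ordinary Lisp/Scheme
list processing takes the place of DOM APIs -- which is the whole point
of the SXML/SXPath/SXSLT tools.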
That was a very good question: it let me make the important point that
SSAX is indeed a SAX-style, streaming parser. One person asked if STX
can be used to generate dynamic content. I almost literally quoted
Kirill's message and pointed out the PLT Scheme server and mod_pipe.

_Several_ people wondered if the SXML/SXSLT tools could be re-written
in CL. I said, why not. I should point out that the conference was
dominated by CL people; Schemers were in a _distinct_ minority.

One person asked how he could find SXSLT (especially as related to
XSLT and (S)XPath) for DrScheme. I pointed out the SSAX CVS repository
and said that we are working on a uniform and complete packaging of
the code. I said that the code in the repository does work on PLT
Scheme, because that's what Kirill uses.

* ITA Software and Orbitz: Lisp in the Online Travel World
Rodney Daughtrey

"ITA Software is a 50-person company based in Cambridge, MA and the
creator of the world's most advanced airfare shopping search engine,
primarily written in Common Lisp. ITA licenses this technology to
airlines and to other companies in the travel industry."

Orbitz, a popular online (air) travel reservation system, uses Lisp
throughout! The engine accepts XML requests and generates XML and HTML
pages, but internally runs Lisp. The presenter said customers don't
care what's inside the system.

The presenter pointed out that the airline industry is in flux;
therefore, the search and reservation code has to be adjusted
frequently. Hence, clarity of expression of an algorithm is paramount
-- that's the main reason they use Lisp.

* Social issues in Lisp
Kent Pitman

I learned that Kent Pitman is an angry man -- he accused open-source
programmers of being irresponsible and of destroying businesses. It
turns out that Kent Pitman had privately developed a commercial web
server. When he was about to sell it, Franz announced a free version
of their web server.
For Franz, the free add-on is another way to attract customers and get
them to buy Allegro CL. This move, however, made it hard for Kent
Pitman to sell his server. Well, tough luck. Apple did something
similar: it bundled AppleScript as a free add-on to MacOS 7, and
destroyed UserLand Frontier, a commercial scripting language for
MacOS. Dave Winer, the creator of Frontier, never missed a chance to
complain about Apple.

I can understand Kent Pitman's frustration. I don't understand,
though, how open-source programmers, Free Software programmers and
Richard Stallman personally can be held responsible. Judging from the
questions, many people in the audience didn't understand it either.
Kent Pitman went on to say that open source is a threat to software
businesses, and that "responsible" programmers should not release
'hot' products until one to two years after commercial products have
shown up. Responsible programmers should not pre-empt commercial
products.

In the second part of the talk, Kent Pitman floated a proposal to
"unbundle" consensus building from assigning permanent names to the
descriptions of particular language features. He called the latter
'substandards': consensus would come afterwards, in the process of
implementing and using the features. Kent Pitman has already
registered a domain name and a company in NH. He went on to describe
substandards, which sound exactly like SRFIs. When asked (by Rob
Warnock) directly about SRFIs, he admitted that there are many
similarities. There are differences: Kent Pitman will charge the
submitters of substandards up to $300, and his web site will let
people log their preference for particular SRFIs (I mean,
substandards) -- people will be charged around $20 per year for that
privilege. Kent Pitman then went on to say that in his view, SRFIs
are elitist.

Some other presenters (such as Peter Norvig) also exhibited a new kind
of logic that I'm not familiar with.
SRFI editors are to be congratulated: they designed and implemented
yet another feature that CL folks are only now coming to appreciate.

* Fair threads
Manuel Serrano
http://www.international-lisp-conference.org/Speakers/People/Manuel-Serrano.htm

Manuel Serrano presented a talk about fair threads. He said that the
presenter right before him had enthusiastically advocated preemptive,
POSIX-style threads for CL. Manuel and I were puzzled at how people
fail to research threads and to appreciate the many drawbacks of
pre-emptive threads.

* Sharpening the parentheses: bringing Lisp ideas to programming the Web
Henry Lieberman, MIT Media Lab

Henry Lieberman said that Lisp is indeed good for web programming, but
people seem to prefer sharp parentheses (<>) to round ones. If you can
use Lisp, you should, he said -- but sometimes you're constrained: you
have to accept legacy XML documents and XSLT stylesheets. His
solution: design a programming language with an XML syntax. He went on
to describe such a language, Water. As it turns out, XML syntax is
indeed unsuitable for a programming language, so Water uses some kind
of simplified XML syntax. The language is not Lisp either -- neither
in notation (which is infix), nor in semantics. It looks a lot like
JavaScript. Programs in the Water language can run either on a server,
or on the client, in a browser plug-in.

This talk left several people puzzled: at first the author said he
wanted to use XML because it's popular, and Lisp because it's a good
language. He ended up using neither. BTW, Water requires a license for
commercial use.

I drew two conclusions. First, we need to advertise SXML better: SXML
can do everything Water does -- and can do more, and better. I also
need to look up Henry Lieberman's slides, which say "Web community
blew the web programming" and "web programming collapses under its
own weight."
Imagine a slide: Henry Lieberman, a colleague of Tim Berners-Lee,
says: "Web programming is collapsing under its own weight." We need to
save it.

I wanted to talk with Henry Lieberman and point out that there is
another way to assure interoperability with the XML culture: rather
than translating Lisp to XML, we can translate XML and XML tools into
Lisp. That's what the SXML talk was all about. I didn't catch him; the
conference schedule didn't leave much time for discussions. Anyway,
the SXML ideas are timely; we are not doing worse than other people --
and perhaps better. My overall impression from that talk is
disappointment: I thought people at the MIT Media Lab could design
better languages than I do.

* RoboCup

Daniel Polani presented a great talk about RoboCup, a soccer
tournament for robots. It is important to note the absence of a big
brother: nobody has a global view of the whole system. A player has
only the information it can see; it cannot know the intentions of the
other players. Each player is completely autonomous. It must
co-operate with other players to execute an attack or a defence
against the competing team.

The last 10 minutes of the talk were a video of a real RoboCup game in
the simulation league. The video showed how the players indeed
co-operated to execute a complex pass, and eventually scored. Watching
robots play soccer was indeed as exciting as watching real soccer.

http://www.robocup.org
The official website of the 2002 RoboCup Simulation League:
http://www.uni-koblenz.de/~fruit/orga/rc02

* Upcoming book on practical Lisp applications

Franz distributed the following notice to the attendees: Franz Inc. is
working with Apress Publishing on a book about practical Lisp
applications. Areas of interest: bioinformatics, robotics,
Web/Internet, databases. The applications should be Lisp-based, used
in the "real world", and have commercial/educational value.
Contact: Lisa Fettner, lisa@franz.com

* Large-scale web services, and the programming languages that build them
Peter Norvig, Google. The keynote address.

There was an interesting slide of one of the Google server rooms:
10,000 computers densely packed in racks. Peter Norvig said that they
pushed the envelope on how many computers one can fit in a room;
therefore, they have extra fans on the floor between the racks. That
slide was symptomatic: Google indeed pushes the envelope. Some day the
envelope may push Google. If an accident (fire, flood) occurs in that
machine room, an insurance investigator might find that the owner
disregarded safety rules and the electric code, and refuse to pay.

Peter Norvig seems to have turned to the dark side. He promoted a
statistical approach to knowledge representation. There is no need to
build knowledge databases like Cyc or the Semantic Web, he claimed; we
just index the humongous collection of all web pages. To answer the
question "Who killed Lincoln?" we enter that string into Google and
find the answer on the first page.

Alas, the correct answer is seldom the very first, so we have to
choose among several answers on the page -- that is, we have to
already know what the answer looks like. That fact, that we do need
some knowledge to find the answer in Google, seems to have escaped
Peter Norvig. I'm appalled that Peter Norvig, who wrote books on AI,
could fail to see the trap. BTW, a similar query "who killed McKinley"
gives the sought answer at the very bottom of the page. At the top of
the first page are a "We Killed McKinley" song and a newsbrief "Four
Killed in McKinley Crash." The results do vary depending on how you
ask the question.

Furthermore, Google merely indexes pages without ascertaining their
veracity. Google has the same bias as a TV poll: both reflect the
views of the people who actively chose to make their opinions known
-- of a vocal _minority_.

Peter Norvig was also cavalier about programming languages.
He said people often need more control over memory allocation --
that's why they choose C++; Google uses a lot of C++. Doesn't Peter
Norvig know how hard it is to manage memory? That's why it is better
to leave this task to professionals. One of his slides said: "It's
better to hire an active VB programmer than a guru who isn't
interested in your problem." This is exactly the attitude that keeps
bringing buffer-overflow news every week.

Peter Norvig also mentioned an experimental feature of Google: you
enter a few (key)words and Google will find sets or clusters of
'related' items.

My contention that the purely statistical approach to knowledge
acquisition is deficient found a surprising corroboration in a story
in the Washington Post:

  "With the Sniper, TV Profilers Missed Their Mark"
  By Paul Farhi and Linton Weeks
  Washington Post Staff Writers
  Friday, October 25, 2002; Page C01
  http://www.washingtonpost.com/wp-dyn/articles/A13761-2002Oct24.html

The article was a comment on the arrest of two alleged snipers who had
terrorized the Washington, DC area for over a month. It elaborated how
wrong various "profilers", "forensic psychologists" and "former FBI
investigators" turned out to be. These pundits dominated TV and online
news, and yet their attempts to develop a profile of, and to uncover
the motivation for, the sniper attacks largely missed the mark. More
chillingly, the speculations of the pundits might have diverted the
community and the police into looking for the wrong suspects. The
article concludes, "Criminal profilers may be the logical outgrowths
of a society that believes that all of human reality can be
quantified, a culture that has a touching faith in the truth-revealing
ability of statistical analysis [sic!]. It's part of the same belief
system that has given us governance by polls, insurance by actuarial
tables, newspapers by readership surveys and just about everything
else by focus groups.
It has also given us criminal investigation by number-crunching
spreadsheets and computer-enhanced conjecture. But as the sniper case
seems to reveal, profiling can be especially dicey when you're dealing
with the madness of the human mind."

In general, I got an uneasy feeling about Google. They seem to be too
much the 'cowboy programmers'. One day it will cost them.

* Intelligent agents on the web and in the Ether
Tim Finin, U. of Maryland, Baltimore County

A very good talk on the Semantic Web. Tim Finin made a good case for
why XML and XML Schemas are not enough: there is no hope of achieving
one and only one schema, and the XML Schema Recommendation offers no
way to relate different schemas. XML Schema is also weak on semantics.

RDF formally defines an "is-a" relationship. DAML+OIL builds on top of
it and provides more support for ontologies: in particular, DAML+OIL
defines cardinality constraints, classes, equivalence, and the
disjointness of classes. The next generation of DAML+OIL is OWL (the
Web Ontology Language), whose discussion nears completion.

RDF is widely used -- e.g., as Dublin Core. Also, Adobe has committed
to RDF and uses it in XMP (Extensible Metadata Platform): every PDF
document stores its meta-data in RDF. There is a DAML version of a
freely-available subset of Cyc!
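To see why a formally defined "is-a" relation matters, here is a toy
sketch in Python (made-up triples, not any real RDF library): once
subclass statements have fixed semantics, class membership can be
inferred mechanically by transitive closure, which is exactly what XML
Schema alone gives you no basis for.

```python
# Toy knowledge base of (subject, predicate, object) triples,
# in the spirit of rdf:type and rdfs:subClassOf.
triples = {
    ("Dog",    "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
    ("rex",    "type",       "Dog"),
}

def classes_of(instance, kb):
    """All classes an instance belongs to, directly or via the
    transitive closure of subClassOf."""
    found = {c for (s, p, c) in kb if s == instance and p == "type"}
    frontier = set(found)
    while frontier:
        parents = {c for (s, p, c) in kb
                   if p == "subClassOf" and s in frontier} - found
        found |= parents
        frontier = parents
    return found

print(sorted(classes_of("rex", triples)))
# -> ['Animal', 'Dog', 'Mammal']
```

DAML+OIL (and now OWL) layers richer constructs -- cardinality,
equivalence, disjointness -- on top of this same triple foundation.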