Thursday, December 27, 2007

Mutable Syntax in Mercury

Mutable Syntax in the Mercury Programming Language

Introduction

The Mercury programming language is a compiled, strict, pure, type-safe logical
and functional programming language. Its programming methodology is
based on predicate logic, with syntax and semantics in line with
that of Prolog. Along with logic, it has a fully integrated
Hindley-Milner type system with lambda-terms, very much in the
style of the Haskell programming language.

The smooth integration of Prolog with Haskell sounds like a
marriage of programming paradigms to release the programmer into
Coding Nirvana, as it were. This is how it is for most cases, as the
language has a consistent design philosophy backed up by
well-researched principles and explained by copious and clear
documentation with numerous practical examples. What remains are
niche constructs, that is: constructs that may be helpful for
specific problems, but are not strictly necessary, nor generally
applicable.

One such niche construct, one that I turn to quite often when
programming in Prolog, is the op/3 declaration, or,
the ability to introduce new syntax into the language so that I
may model the problem more naturally. This document covers
extending the language to include the op/3
declaration in its full breadth of functionality.

Alternatives, and Raison d'être

The approach we take is to modify the compiler so that it
accepts the op/3 directive in a module and
thereafter, within only that module, parses operators
declared with the specification and priority given in the
directive. This may seem like a drastic measure, so we must
consider the alternatives before choosing this course of action.
There are basically four viable, albeit inferior, alternatives:

  1. The Mercury programming language provides the grave
    syntactic construct which converts the standard prefixed call
    to an infix one:

    fn(X, Y) becomes X `fn` Y

    See, for example, the pprint module, as it used to
    make extensive use of this style (until it recently deprecated
    this approach to use one of the builtin operators, instead).
    Just as the Mercury language developers have discovered, this
    approach has at least two drawbacks:




    1. these infix "operators" are clearly marked as
      second-class citizens, unnecessarily lengthening what is
      supposed to be a concise
      representation;
      1
      and,

    2. only binary infix operators are allowed under this syntax; I
      often find it useful to type values using a postfix
      type.




  2. One could construct a specialized instance of
    the op_table typeclass from the ops
    module and thereafter use read_term_with_op_table/4
    from the term_io module to parse strings at
    runtime. See samples/calculator2.m provided with the
    distribution for an example program that demonstrates this
    approach.



    This approach also has its own set of associated problems:




    1. Constructing one's own op_table is excessive
      when using only a few operators and tedious when introducing
      many operators. This manual process steals precious time
      away from program development that addresses the problem,
      itself.

    2. Until now, there was no "cookbook" approach addressing
      the problem of how to create a mutable syntax.
      The ops module and the sample calculator program
      are well-documented and provide good examples of how to
      implement static syntax, but provide no guidance for
      constructing dynamic, mutable, syntax. For this, one had to
      design such a framework from first principles.2

    3. A user-defined op_table instance may only be
      used at runtime. The Mercury compiler, as implemented, does
      not allow such tables during module compilation.


  3. Third, use one of the available scanners (such as
    samples/lex/) or parser generators (such as samples/moose/) to
    create a language syntax-aware preprocessor that substitutes
    operators and their arguments with the well-formed term
    replacement. Problems:


    1. This is a highly redundant and fruitless exercise,
      as the compiler has its own parse phase that does the same
      work, and with the language itself in flux (as is the case for
      any living language) changes to the syntax quickly render a
      system created by these means obsolete. Parser generators
      for other programming languages provide complete grammars
      for every version of the host programming language; Mercury
      has no parser generator with such grammars, so this task is
      left to a user of these kinds of systems.

    2. Furthermore, although the domain-specific languages for
      these tools closely follow the Mercury programming language
      to do their work, they are still separate languages that
      require time and effort to master. When presented with
      powerful parsing facilities built right into logic
      programming languages (I'm referring specifically to Definite
      Clause Grammars (DCGs)), one must weigh the costs of learning
      these languages before embarking on such an endeavor.

  4. Worst for last: as with C/C++, create a specific
    preprocessor that parses the source file, converting annotated
    operators to equivalent Mercury terms by following the
    preprocessing directives. This approach requires so much work
    (the C preprocessor is a compiler-sized program) and has so many
    known pitfalls (such as replacing elements inappropriately (in
    a quoted string, for example) and causing an unacceptable
    disjunction between the generated executable and the original
    source base (confusing debugging and error reporting efforts)),
    that it should not receive serious consideration;3
    it does not in this document, at any rate.

By embedding the op/3 syntax into the compiler, the
changes we make are hygienic, in that they are part of the
language syntax, not external and blindly unaware of it, as is the
case with the C preprocessor, and immediate, in that they
may be used at compile time in the module in which they are
declared. This implementation also limits the lexical scope
of the operator to the module in which it is
declared,4
preventing these declarations from corrupting modules that
eventually use modules with specialized syntax.



The desired state is to integrate the op/3
declaration fully into the language, so that, e.g., facts may
be stated in their vernacular and still be compiled into
executable content in the Mercury idiom, as in this real-world
example:

for the open weekly timecard ending date(2006, 1, 6):
employee cgi_emp_001 billed [
3 hours on sunday - date(2006, 1, 1),
16.5 hours on monday - date(2006, 1, 2),
5 hours on tuesday - date(2006, 1, 3),
5 hours on wednesday - date(2006, 1, 4)
] against contract lt_2005_001.

Far from being a contrived pedagogical example,5 the
above illustrates the various typing uses of op/3
defined syntax, both prefix ('employee cgi_emp_001')
and postfix ('3 hours'). The above fact is certainly
"only" a data term (in fact, as well as being a data term, the
above fact also contains op/3-based data
terms), but fully actualized operators exist as well; the Prolog
syntax module is rife with such examples. These uses of
op/3-declared syntax (describing entity
relationships clearly and as activated syntax) are in no way
limited to the rather straightforward problems of accounting, but
are also used in production expert systems handling over 1,000,000
transactions per day; the use of these extensions is tied
directly to rule findings satisfying customer requirements.
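
A representative subset of the declarations behind the timecard fact above
might read as follows (the priorities and specifiers here are illustrative
assumptions for this sketch, not the production values; the remaining words
(for, the, open, weekly, timecard) are declared as prefix operators in the
same manner):

:- op(100, xf, hours).      % postfix:  3 hours
:- op(200, fy, employee).   % prefix:   employee cgi_emp_001
:- op(200, fy, contract).   % prefix:   contract lt_2005_001
:- op(300, xfx, on).        % infix:    5 hours on tuesday - date(2006, 1, 3)
:- op(400, xfx, billed).    % infix:    employee ... billed [ ... ]
:- op(500, xfx, against).   % infix:    [ ... ] against contract lt_2005_001
:- op(600, xfx, ending).    % infix:    ... timecard ending date(2006, 1, 6)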

In short, op/3-declared syntax is used extensively in
production systems built using Prolog serving real-world
requirements under heavy demands. With the preexisting extensions
for purity, typing, and functional programming, imagine the
utility and expressivity that could be obtained with Mercury so
extended!


Implementation

Now that we have justification for modifying the compiler,
nothing remains but to get to it. Fortunately, the Mercury
compiler, after some study, yields a straightforward
implementation approach.

First things first: the
ops module uses a discriminator (the type
category) to choose among
different uses for an operator (e.g. unary '-'
versus binary '-'). This discriminator is internal,
and, as we need the same functionality when defining new
operators, we externalize that type in library/ops.m by moving
the type declaration from the implementation section to the
interface.
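
The moved declaration is tiny; as used below, category is just a two-valued
discriminator (this is a sketch reconstructed from the description here; the
names in the actual library source may differ slightly):

:- type category
    --->    before      % prefix uses of an operator name
    ;       after.      % infix and postfix uses of an operator name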

Given this type, we now decorate the predefined (inflexible) op table
so that it will permit additional syntax declarations. For this, we
need to index the operator and its category to the
syntax declaration, and then make this new type an
op_table (typeclass) instance ... we add this type
to the interface of compiler/prog_io_util.m:

:- type op_map == map(pair(string, ops.category), op_info).
:- type mercury_op_map ---> mercury_op_map(ops.table, op_map).
:- instance ops.op_table(mercury_op_map).

To further support the new type, we need the information against
which we index, and supporting predicates to construct
that information for the parser when it encounters the
operator (these declarations also go into the interface of
compiler/prog_io_util.m):



:- type op_info ---> op_info(ops.specifier, ops.priority).
:- func op_specifier_from_string(string) = ops.specifier.
:- func op_category_from_specifier(ops.specifier) = ops.category.

The op_specifier_from_string function simply takes
an input string, e.g. "xfx", and converts it to the
equivalent specifier representation, e.g. the functor
xfx. The op_category_from_specifier
function follows the (implied) convention of the ops
module, which is that all prefix specifier types
(including binary prefix) are of the before
category and all other specifier types
(one of several different infix and postfix possibilities) are
of the after category. The complete
set of changes is enumerated explicitly in
the email on the implementation.
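
For example, with these definitions one expects behaviour along these lines
(illustrative only; the exact clause structure in compiler/prog_io_util.m may
differ):

op_specifier_from_string("xfy") = xfy
op_category_from_specifier(fy) = before    % a prefix form
op_category_from_specifier(xf) = after     % a postfix form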

After we augment the functionality of the ops module,
we need to integrate this into the compiler's parser module
(which is actually called prog_io). The efficacious
point is where the parser works at the module level;6 this
occurs, after some initialization in read_module/11,
in read_all_items/7. We initialize the op map here
(with a call to init_mercury_op_map), and then pass
along that nascent syntax map to the calls that parse the items in the
module (by modifying the signatures
of read_first_item/9
and the recursive calls read_items_loop_2/11
and read_items_loop/10).

So, for example, read_items_loop/9 becomes:

read_items_loop(ModuleName,
SourceFileName, !Msgs,
!Items, !Error, Syn0,
!IO)

... where Syn0 is the new syntax map. This map is
initialized in the new read_all_items/7 before
calling read_items_loop/10 with the goal:

init_mercury_op_map(init_mercury_op_table,
Syntax)

The magic occurs in module parser's
read_term_with_op_table/5 (called via
read_item/7), which normally scans and parses the
items in the module. When it encounters an op/3
declaration, however, it eventually resolves to
process_decl/8 back in the prog_io
module, which reads the declaration and then adds the syntax
declaration to the op map, enhancing the syntax for the current
module.
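
Schematically, the update triggered by an op/3 declaration amounts to one
map insertion (the predicate name add_op and its plumbing are assumptions for
this sketch; the real code threads the map through the parser state):

add_op(Name, SpecifierString, Priority,
        mercury_op_map(BaseTable, OpMap0),
        mercury_op_map(BaseTable, OpMap)) :-
    Specifier = op_specifier_from_string(SpecifierString),
    Category = op_category_from_specifier(Specifier),
    map.set(Name - Category, op_info(Specifier, Priority), OpMap0, OpMap).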

When read_all_items/7 completes its iteration on a
module's items, it exits, discarding the op_map
instance and any syntax it accumulated from op/3
declarations in that module, returning the compiler to the base,
Mercury-defined, syntax, so that the "next" module starts fresh
without syntax from other modules polluting the compilation.

Reconsideration

"Worse is Better"7

After some discussion on the Mercury mailing list, it was resolved
that dynamic syntactic extensions should be external to the
compiler. So, Logical Types has developed a separate compilation
system that converts modules with syntactic enhancements to
plain-jane Mercury equivalents. For modules with no syntactic
enhancements, `ltq' is equivalent to `mmc --make --infer-all'.
For modules with op/3
declarations in the implementation, `ltq' first
parses the module and writes out all terms canonically. After
this translation, the system compiles the modules into the
resulting executable or library.

Operation

This system reduces rather nicely by using facilities provided
by the Mercury compiler, and another declarative system:
make. `mmc -M <file>'
discovers the file's dependencies and stores these in a
makefile variable $(<file>.ms) in the file
<file>.dv. Given this,
ltq simply builds a makefile with the enumerated
dependencies and then calls the system that manages the dynamic
syntax (called `dopp'), which then writes out syntactically-enhanced
modules in their canonical form. Both
ltq and dopp are available, along
with samples, as
dynamic_ops.tgz.

Conclusion

This study came from my experience with the ease of use of
malleable syntax in Prolog and comments in the Mercury sources
about the need to add op/3 declarations, as well as
at least two aborted implementation attempts to do so. In the
ensuing process, where I did implement this solution, quite a
discussion emerged on the mailing list on the aesthetics of allowing the
user to introduce or to change syntax, and how to go about doing
it properly. This implementation is one approach, and is offered
to assist those who wish to add syntactic extensions to their
Mercury systems.



Endnotes


1

The normal infix operators do not have this
grave branding, and for good reason. Imagine
writing algebraic statements, such as the following:

Aroot = sqrt((B * B - 4 * A * C) / 2 * A) - B

while shackled to the grave syntax:

Aroot `=` (sqrt(((B `*` B) `-` (4 `*` A `*` C))
`/` (2 `*` A)) `-` B)

Note the extra parentheses -- these are now necessary, as the
grave syntax does not communicate operator
precedence. Also note that the single-character operators are
now three times their original size. Given the above, it's
tempting to avoid infix syntax altogether...

(setq aroot (- (sqrt (/ (- (* b b) (* 4 a c))
(* 2 a))) b))

...but I have no desire to write out the parsed internal
representation by hand (it may look like Lisp, circa
1965, because the syntax of most Lisps (with
one notable exception) is also its parsed internal
representation), so the Mercury prefix code is therefore
presented:

'='(Aroot, '-'(sqrt('/'('-'('*'(B, B),
'*'('*'(4, A), C)), '*'(2, A))), B))

There! Isn't the canonical tree syntax so much better than
the cons syntax? Drek!


2

This is not all that bad, given the
documentation and the calculator2.m sample. In
calc4.m we provide a
straightforward example using the map type.

3

This pronouncement in no way prevented this
author from submitting such a proposal to the Mercury team.
Ah, the blessed ignorance of youth! All was not in vain,
however: every misstep hides the seeds of greatness: one of the
responses showed that samples/expand_term.m (the responder was
the author of that module, in fact) provides the functionality
of Prolog's term_expansion/2 predicate, which is
an essential prerequisite for implementing
Aspect-Oriented Programming (AOP) in predicate-logic based languages
(specifically Prolog). How aspects are implemented in Mercury
shall be a topic for another paper.

4

In ISO Prolog op/3
declarations have global extent.

5

[Bratko2001],
§ 3.3 demonstrates op/3 declared syntax with
such charming statements...


:- op(300, xfx, plays).
:- op(200, xfy, and).

Term1 = jimmy plays football and squash.
Term2 = susan plays tennis and basketball and volleyball.

...but then the textbook quickly redeems itself -- it
is still my preferred Prolog textbook -- with a meatier
problem, which I adapt for your enjoyment:

ruth was the executive director at wncog.
sally was the executive administrative_assistant at wncog.
diane was the director of the human_resources department at wncog.
juan was the administrative_assistant of the human_resources department at wncog.
sunny was the director of the finance department at wncog.
stuart was the director of the operations department at wncog.
joe was the system_administrator of the operations department at wncog.

?- Who was the director of the What department at wncog.

Who = diane,
What = human_resources ;

Who = sunny,
What = finance ;

Who = stuart,
What = operations ;

no

I leave the op/3 declarations as a coding
challenge to the enterprising reader.

6

Prolog's op/3 declarations have
global extent, but I consider this a mistake in the presence
of a module system -- op/3 should only affect the
module in which it is declared.

7

"Worse Is Better" the catchy title of one
of the most fameous apologies (after Socrates', of course), is
available from several sources:

http://www.jwz.org/doc/worse-is-better.html
is one such.


Works Consulted

[Bratko2001] Prolog Programming for Artificial Intelligence, 3rd ed.,
Ivan Bratko, Addison-Wesley, Reading, Massachusetts, 2001.



author: Douglas M. Auclair
(email: dauclair at hotmail dot com)
date: January 3, 2006
This document is copyright © 2006 by
Logical Types, LLC. under the
GNU FDL,
version 1.2

Installation Instructions

Installation Instructions for Quicksilver
(PowerPC/Apple architecture)


  1. Decompress the archive in a directory of your choice
    (alias==$dir)

  2. Modify the script mmc in directory
    $dir/quicksilver-0.12.2.powerpc-apple-darwin8.3/bin
    so that the MERCURY_COMPILER variable points to
    $dir/quicksilver-0.12.2.powerpc-apple-darwin8.3/lib/mercury/bin/powerpc-apple-darwin8.3/lib/mercury_compile
    and so that the MERCURY_CONFIG_DIR variable points to
    $dir/quicksilver-0.12.2.powerpc-apple-darwin8.3/lib/mercury

  3. Export the following environmental variables with the following
    values:


    MERCURY_HOME $dir/quicksilver-0.12.2.powerpc-apple-darwin8.3
    MERCURY_STDLIB_DIR $MERCURY_HOME/lib/mercury

  4. Add the following paths to your DYLD_LIBRARY_PATH
    environmental variable:

    $MERCURY_STDLIB_DIR/lib/reg.gc/powerpc-apple-darwin8.3

    $MERCURY_STDLIB_DIR/lib/powerpc-apple-darwin8.3


  5. Add the following path to your PATH environmental
    variable:


    $MERCURY_HOME/bin
  6. Add the following path to your MANPATH environmental
    variable:

    $MERCURY_HOME/man
  7. You should be able to do the following with the file
    hello.m (a minimal hello.m is sketched just after this list):


    $ mmc --make hello
    $ ./hello

  8. Since this compiler allows op/3 declarations,
    the following module, play.m,
    demonstrates this capability. I intentionally left out some
    declarations, so compilation is slightly different:


    $ mmc --infer-all --make play
    $ ./play
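
For reference, the hello.m used in step 7 can be the canonical Mercury
greeting module (standard Mercury, not something shipped in the archive):

:- module hello.
:- interface.
:- import_module io.
:- pred main(io::di, io::uo) is det.
:- implementation.

main(!IO) :-
    io.write_string("Hello, world!\n", !IO).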

Creating syntax with op/3 can become complicated
when several operators interact to create a term. I've provided a
module that prints the canonical representation of a parsed term
(write_canonical.m) and a testing
module (test_op.m) that allows prototyping
of operator declarations and allows submitting terms under that
syntax. The whole test system may be built in the usual way:

$ mmake test_op.depend
$ mmake test_op


Copyright © 2006, Logical Types, LLC. All rights reserved.


qcheck 2.0


Introduction



QuickCheck is a system developed by John Hughes and Koen Claessen. Its
premise is that comprehensive testing can be obtained by combining
a testing specification language and test data automation
("random testing"). It has been ported to several languages from
its native Haskell: the Mercury programming language provides a QuickCheck facility in its
"extras" distribution, called qcheck.

As it stands, the original qcheck (hereinafter
referred to as Q1) is an excellent piece of work,
fully capable of testing entire systems. It can decipher how to
generate example data from user-defined types. It also provides
examples for various general and specific situations where the
user may wish to exercise control over the ranges or frequencies
of the data generated. What, then, would need to be changed?



Not much, it turns out. The fundamental aspects of the system
-- comprehensive unit testing fed by randomly-generated values and
directed by a test specification -- remain intact, but there are
several features of comprehensive testing that can be added to
improve this system. Q1 is good at verifying
that one particular predicate behaves as it should, but it gives
no indication that all the (interface) predicates of a system
have been tested. Further, the user has some control over the
detail reported for each test, but, for a user wishing a summary
report, even the smallest report allowed is too much detail:
adding a facility that gives the user complete control over
reportage becomes necessary for larger systems. Finally,
Q1 has an excellent facility to generate random
values for user-defined types, but for primitive types (such as
char, int, and float), the
approach is a bit arcane (only mentioned in one of the last
examples) and lossy; giving the user control over the random
number generator itself, and integrating that part of the system
with the goal of ease of use, will carry forward the automation of
user-defined typed values as well as simplify controlling ranges
of generated primitive values.



These improvements were the aim of this new version: keep the
essence of the previous version while adding these new layers to help
the tester verify much larger systems. The first two aims
outlined above, those of comprehension and detail, are addressed
by the new reporting facility that exists both outside
qcheck2 proper and is also integrated into its
state. The third aim, more and simpler control over
randomly-generated values, incorporates changes that now allow the
user to define their own random number generator (or to use the
very excellent one supplied) and also to change the generator's
behavior in the midst of testing. We will address each aim, and
their implementations, in the following sections.



Aims

Aim 1: Comprehension



One question that testing frameworks, such as QuickCheck, must
eventually address, particularly for larger systems, is the one
of completeness. Or, "has the entire functionality of the system
been tested?" Under the first implementation this is a difficult
question to answer, and the root of this problem is a rather
trivial one to fix: the second input argument to
qcheck/[4,7,8] is a string that has the
purpose of describing the test to be performed. Typing this
argument as a string is rather limiting: although
the (human) user can see the purpose from the description, the
problem is that encoding the test description as a
string does not facilitate reasoning about test
results mechanically, which makes it difficult for the system to
report on what was and, importantly, was not, covered in the
testing.



A new accumulated state variable



The fix to this problem is therefore simple: generalize the test
description argument to any (universal) type. This way other
systems may, for example, generate a model of the system being
tested and then, after the tests are completed, tabulate
and report the results. The internals of
Q1 change in two ways: first, the description
type signature is relaxed, and second, as reportage is spread
across several calls to qcheck/[4,7,8], we
defer the commitment of outputting results until the user so
determines. In this case, we convert the state variable from
io.state (which commits the output) to a specific
accumulator (that simply collects it):



:- pred qcheck(T, Desc, int, list(list(frequency)),
general_frequencies, list(user_gen_type(RNG)),
qstate(RNG, Desc), qstate(RNG, Desc))
<= (testable(T), random_number_generator(RNG)).

:- mode qcheck(in, in, in, in, in, list_skel_in(user_gen_inst(RNG)),
in, out) is det.

where:


T the type (pred/func) to test
Desc a generic type describing the test
int number of tests
list(list(frequency)) list of specific frequencies (as per Q1)
general_frequencies list of general frequencies for testing (described later)
list(user_gen_type(RNG)) user-defined types for value generation, as per Q1,
specialized on the random number generator
qstate(RNG, Desc) the state variable collecting test results and keeping the
random number generator au courant



The typeclass testable(T) is as per
Q1; and the random number generator typeclass
(random_number_generator(RNG)) will be
discussed under the value generator aim.

The new types for qcheck2 are:

:- type general_frequencies == assoc_list(type_desc, list(frequency)).
:- type qstate(RNG, Description)
---> qstate(generator(RNG), map(Description, test_results)).
:- type test_results
---> test_results(int, int, int, bag(univ))
; falsifiable(univ).



The above qstate type shows that the test
description is mapped to the test_results type; as
qcheck/[4,7,8] is called for each test, it
accumulates the test results indexed by the (generic)
description.


The Program Modelling Tool



As Description can be of any type, we can now allow
the user to pass in a simple description string, as
in Q1, or, alternatively, we can enhance the
description with something that we can use as a declarative
index. In Prolog, we could use the
module-qualified name of the predicate as the index. Mercury is
not so dynamic a system as to allow any number of
module/predicate/function values for Description,1
so we will employ a preprocessing system to identify and alias
all interface predicate/function types to a uniform (enumerated)
type. One such system is qcpt, available from
logicaltypes.com, packaged into the
ltq system. From an input set of files, qcpt
generates the following types:



:- type program_module
---> <names of modules in system>.
:- func all_program_modules = list(program_module).
:- func all_public_predicates = list(full_predicate_signature).

:- type public_predicate_signature
---> public_predicate / int.

:- type full_predicate_signature
---> public(program_module, public_predicate_signature).

:- type test_desc == pair(full_predicate_signature, string).

:- type public_predicate
---> <names of interface predicates>.
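
Under these types, a concrete test description is then a pair such as the one
that appears in the report output later in this document:

public(peano, increment/2) - "Makes sure we get a successor"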



Example


Say we wish to test a single module, for example the
peano module. By running ...



$ qcpt peano.m

... the file qcheck2.tests_digests_types.m is
generated with the above type declarations and the following
specific realizations:

% ...

:- type program_module
---> peano.

% ...

:- type public_predicate
---> increment
; peano
; to_peano.

:- implementation.

all_program_modules = [
peano
].

all_public_predicates = [
public(peano, increment/2),
public(peano, peano/2),
public(peano, to_peano/2)
].
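
For orientation, a module of roughly this shape could sit behind those names
(a hypothetical sketch built only from the names and arities above; the actual
peano.m shipped with the distribution may well differ):

:- module peano.
:- interface.
:- type peano ---> zero ; s(peano).
:- pred increment(peano::in, peano::out) is det.
:- pred peano(peano::in, int::out) is det.
:- pred to_peano(int::in, peano::out) is det.
:- implementation.
:- import_module int.

increment(P, s(P)).

peano(zero, 0).
peano(s(P), N + 1) :- peano(P, N).

to_peano(N, P) :-
    ( N =< 0 ->
        P = zero
    ;
        to_peano(N - 1, P0),
        P = s(P0)
    ).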


A new reporting facility



Given that we now have collected all the public (or interface)
predicates and functions of a system, it is now also possible to
determine the completeness of the testing, that is, whether all of those
predicates were tested. It simply now falls to a system to
collect the results of testing and report them. The user
may attack this task in any number of ways, but we also provide one
implementation in module
qcheck2.tests_digest_reports. The predicate



show_module_tests_summary(Results,
!IO)

does all this work by comparing the
accumulated test Results (typed as an
assoc_list(test_desc, test_results), where
test_desc is provided by module
qcheck2.tests_digest_types (generated from the
tested module using qcpt), and the
assoc_list is the accumulated result of calling a
set of qcheck/[4,7,8] goals).

The show_module_tests_summary/3 predicate reports
the total number of tests (and those that passed) within each
module and reports the number of predicates that succeeded all
unit testing. This predicate also reports, by name and arity,
the predicates not tested by the framework.

Example

The following report ...

$ ./peano_unit_tests_missing_to_peano

Qcheck2 tested the following modules:
*** 6/6 tests passed on 2/3 predicates in peano
(did not test to_peano/2)

... shows that my modifications to module
peano_unit_tests (I commented out the
to_peano/2 tests) resulted in all tests passing, but
the test suite did not provide full coverage of the
peano module's functionality.



Traditional Reportage



Module qcheck2.reports provides two ways to report
results as Q1 did:

show_test_results_sets(Results, !IO)
show_test_results(Description, test_results, !IO)


The predicate show_test_results_sets/3 simply
iterates over Results, calling
show_test_results/4 at each iteration.
The predicate show_test_results/4 reports the
results from qcheck/[4,7,8] in the style of
Q1.

Example

Here are some of the results reported from running
peano_unit_tests_with_failing_test with a call to
show_test_results_sets/3:


...
1)
Test description : public(peano, increment / 2) - "Makes sure we get a successor"
Number of test cases that succeeded : 100
Number of trivial tests : 0
Number of tests cases which failed the pre-condition : 0
2)
Test description : public(peano, increment / 2) - "Checks increment/2\'s compare"
Number of test cases that succeeded : 19
Number of trivial tests : 0
Number of tests cases which failed the pre-condition : 81
Distributions of selected arguments :
13 {0, 1}
5 {1, 2}
1 {2, 3}
3)
Test description : public(peano, to_peano / 2) - "Tests string creation"
Falsifiable :
i(-68)
...

The above examples demonstrate 1) a sample
test report where all the tests succeeded (with no reportage on
the test values used), 2) a sample where some
test values were unusable (with reportage), and
3) a sample where the predicate and test
values (by intention) failed the test. These tests are of the
same form as those of Q1. The major difference is
that the results are reported separately from the
qcheck/[4,7,8] call. This separation allows delayed
(even permanently delayed) reporting of results.



Aim 2: Generation




Q1 offers a dizzying array of options when giving
the user control over what test values are generated and how they
are generated, that is, if these values are to be generated from
user-defined, or complex (not builtin), types. It also offers a
generator of such power that it is dubbed "The Mother of all
Random Number Generators" by its creator. Impressive as the
above offerings are (and they are, in fact, impressive), there are
two glaring wants. First, one is required to
use the supplied generator; users are not permitted alternate
generators for their own testing, and this becomes an issue when,
for example, "pure" randomness (such as the values provided by
random.org) or
cryptographic-strength randomness is a requirement of the tested
system. Second, restricting the ranges of primitive
builtins (or other "unbound" enumerated types (such as
integer)) is not directly feasible (albeit possibly
with some esoteric indirection). We address each of these wants
in the new system by turns.



Want 1: User-supplied random number generators

Q1 was built to hide its inner workings. This
software principle is considered to be good practice, but, since
one of the inner workings is the random number generation used to create
test values, this good practice obstructs the ability to replace
the random number generator when so required. The new
architecture in qcheck2 exposes the random number
generator in the state variable, allowing its replacement by
alternate equivalent types. The change also comes with a
supplied state initialization predicate that provides the default
random number generator.

The design of the system requires the user to wrap their random
number generator (hereinafter referred to as the RNG)
in the type described in the next section, and
it must conform to the following typeclass:

:- typeclass random_number_generator(RNG) where [
% gives a random float on the range [0, 1)
pred rnd(float::out, RNG::in, RNG::out) is det,
pred reseed(int::in, RNG::unused, RNG::out) is det
].

where:

rnd/3 supplies a float between 0.0 and 1.0 and updates the state
of the RNG; and
reseed/3 reinitializes the RNG with the seed supplied as
the first argument.
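
As an illustration of this typeclass (not part of qcheck2 itself), here is a
minimal, low-quality linear congruential generator wrapped as an instance; the
type name lcg and the constants are assumptions for the sketch only, and it
assumes ':- import_module float, int.':

:- type lcg ---> lcg(int).

:- instance random_number_generator(lcg) where [
    pred(rnd/3) is lcg_rnd,
    pred(reseed/3) is lcg_reseed
].

:- pred lcg_rnd(float::out, lcg::in, lcg::out) is det.
lcg_rnd(F, lcg(Seed0), lcg(Seed)) :-
    % one classic LCG step, truncated to 30 bits
    Seed = (Seed0 * 1103515245 + 12345) mod 0x40000000,
    F = float(Seed) / float(0x40000000).

:- pred lcg_reseed(int::in, lcg::unused, lcg::out) is det.
lcg_reseed(Seed, _, lcg(Seed)).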


Want 2: User-controllable numeric ranges

It is very easy for the user of Q1 to specify a
discriminated-union type and have the system generate test
examples. The problem is for builtin types:
Q1 does not consider bounds when
generating floats, ints
and, surprisingly,
chars.2
One could argue that by choosing unbound types, the user must be
prepared to accept any value those types describe, and indeed
this is true. Where this argument falls apart is when the user
wishes to test predicates within their nominal ranges. Certainly
the test framework should generate test data that tests beyond
the ranges, but when every test case offered is extrinsic,
nothing is gained by submitting that predicate to random
testing. In short, the user must be allowed direct control over
ranges of test values of builtin types when the situation
warrants.



:- type generator(RNG)
---> generator(int_range, float_range, character_type, RNG).

where:

int_range is either range(int, int) or unbound

Endnotes


1

Well, this statement is not true in all
cases -- if all the tested predicates had the same type
and modality then one may simply use the name of the predicate
as the Description. See, for example, modules
foo and foo_unit_tests included in
the distribution. The material point here is that although it
is possible to use the name of a predicate as the
Description it is not realistic given that most
systems use predicates with different types, arities, and
modalities.


2

Q1 treats strings
as list(int), where each element
is a char-equivalent int, which can
be positive, negative or, as is most often the case, of very
large magnitude.



Contact Information: dauclair at hotmail.com 703-300-0447
Copyright © 2006-2007, Logical Types, LLC. All rights reserved.

Sunday, October 7, 2007

PADL

After Action Report: PADL 2006


Practical Aspects of Declarative Languages

Introduction

This is a highly-opinionated report on the presentations at the
PADL 2006. The sessions are presented chronologically, but are
tabled in the contents by topic, and cross-referenced by their
practicality (i.e. what I found useful in the presentations).

Presentations

Cross References

By Topic:
  Invited Speakers: Phil Wadler; Erik Meijer; no show: David Roundy
  Theorem Provers (no, really!)
  "Practical" "Applications"1: CHRs for testing; Probabilistic music;
    Genomes with distribution type
  Constraints (including compiler implementations and optimizations)3:
    Constraints in Mercury and tabling4; Informing constraints with failure
  Analysis and Verification: Model checking; Typing Prolog with cliques;
    Model Checking model checkers
  Logic Programming: Generic (foreign) Cuts; Tabling in Mercury;
    Correct Tabling in XSB; Declarative debugging
  Browsing and Querying: Logical Eclipse plugin; description logic queries;
    Queries with sets with regex

By Practicality:
  Practical Results: Dominators; Probabilistic Prolog; Distribution type;
    Declarative debugging; Improving the search from failed branches;
    Cliques as power sets; Complexity measures
  "Theoretical"2: Optimal paths with dominators; Tabling; Constraints5;
    Reimplementing cut; Description logics6


Retrospective

As you can see below, my criticisms are very harsh. This raises two
questions: harsh against whom? and would I go again to a PADL?
The answer to the latter is resoundingly in the affirmative (as
William Byrd would pipe up: 't'): the density of
useful concepts I took away from this conference was much greater
than most other conferences I've attended, including, not
surprisingly, the POPL.

So, why so harsh in my reviews? Was I criticizing the speakers?
In some cases, I was indeed criticizing the speakers, particularly
when they sacrificed rigor for herd thinking (particularly when
they did not acknowledge such lapses), but in the main, a
criticism such as 'this topic does not interest me' was not a
criticism of the speaker, nor even of the choice of topic (with the
caveat that, with 17 of the 33 submitted papers accepted, the papers
accepted should be more practical and less
esoteric; if the reviewers chose academic curiosities
over less polished papers on the topic of applications,
then I do fault the selection process), but a severe
criticism of me. What I mean here is that the PADL's
mission is to gather reports from the front lines, as it were.
That's where companies, like mine, operate. So, companies like
mine, including mine, should be submitting these
papers. We do not, and the PADL has suffered for
it.

In short, the PADL was what it set out to be -- a free exchange
of ideas of the application of research and an open channel
between researchers and industry. AND the PADL,
as good as it was, could be even better with more
practical, real, applications developed in the declarative style
showcased for all to see. It motivates the researchers ('my work
has relevance! an audience! a fan club!'), and it improves the
tools industry uses to deliver these applications ('you can
do that?'). Win/win. QED.

The Challenge

So you companies that use declarative programming ... you know
who you are (*cough* galois
*cough* ... um, Doug, what about you? Yeah, me, too), belly up to
the bar, 'cause drinks are on the house! (trans: submit
papers; you are not divulging company secrets, and you're gonna
get lots of neato ideas that will help you on your current
projects).

Day 1: Monday, January 9, 2006


9:00 am Links: Linking Theory to Practice
for the Web
Phil Wadler

Phil presented a proposal for a functional language
to cover the three tiers of building web apps. *YAWN*
What came out of this talk was nothing immediately useful for us, but
he did ask a series of questions to which he didn't have answers. He had
the guts to go to lunch with a bunch of us logic programmers and was
willing to listen to the logic programming side of designing a language
and the trade-offs using that methodology.

What I learnt ("Life Lessons")

  • first, tell everybody what you do not know how to do currently,
    and ask how they do it

  • second, mingle among the group that has a different
    perspective than you or that knows stuff you don't

What to learn ("Talk pointers")

  • Timbre is a functional concurrent programming
    language (typed?)

  • Hope was the first functional language to implement the
    Hindley-Milner type system

  • "Antinomies": n. things that are directly opposed.
10:30 am Using CHRs to generate
functional test cases for the Java Card Virtual Machine

Sandrine-Dominique Gourand (presenter),
Arnaud Gotlieb

Sandrine (or is it 'Dominique') presented a test generating
system that examines disjoint clauses of predicates and, by using
pattern-, or type-, matching generates appropriate unit test cases.
She pointed out the language JSL ('Jakarta Specification Language'
which has nothing to do with the much more popular Jakarta Struts)
which was used to generate the system to generate the test cases --
it's very Lisp-like. JSL is now an unsupported branch of COQ, a
theorem-prover.

I talked with both Sandrine and Arnaud after the presentation, re:
automating unit testing; Sandrine gave me two papers to review along
those lines. The first was on QuickCheck
for Haskell; the second was on using Isabelle
to generate test cases from a specification.

Life Lessons

  • CHRs (constraint handling rules) have real-world
    applicability (beyond Sudoku): generating test cases.

  • Theorem provers (COQ and Isabelle) have real-world
    applicability -- generating comprehensive test cases.

Talk Pointers

11:00 am Probabilistic-logical Modeling
of Music
Jon Sneyers, et al.


Jon presented a learning system for music. The learning system was
implemented on "probabilist Prolog" (PRISM). It took 75 selections from
Bach pieces and ~25 selections from Mozart pieces. He used a
description language to model the significant parts of the musical
selections (rhythm, tone (polyphony), etc, but not many other facets
of music) as an ordered list of triples (triples of pairs for two
voices, triples of triples for three voices). He then fed these
triples under the appropriate selections as training sets. The
learning system was a Markov chain ("It took two lines of code to
implement in PRISM"). It properly classified all test cases (~30);
when he removed rhythm, it properly classified most of the test
cases (~26/~30).



This program not only classifies, but also generates fresh example
selections from the learnt selections. Jon was pleased to report
the snippets sounded like things from the composers, but didn't sound
very good -- his pleasure was based on the implication that good
music is not machine generated (with a consequence that composers
still have a role).



Jon is currently pursuing a new approach, using CHRs for the learning
process. The work he presented is now two years old.

Life Lessons


  • In the after-session Q&A, I asked Jon about the memory footprint.
    Jon admitted that learning with the Markov Chains grows
    exponentially
    with the size of the selection, so that's
    why he used selections from music pieces, and not the entire
    piece.

    Markov Chains or connectionist systems are nice and all, but there
    is a large price to be paid for using them.

Talk Pointers

  • PRISM; a probabilistic Prolog.
11:30 am Modeling Genome Evolution with a
DSEL for Probabilistic Programming
Martin Erwig and Steve Kollmansberger
(presenter)


Steve wrote a system in Haskell using monads to describe genome
interaction. As these interactions are stochastic (not deterministic),
the standard way to go about this is algorithmically: writing a
random number generator and assigning values. Steve chose a novel
approach: he created a data type called "Distribution" that
encompassed all possible (finite) values as well as their
probabilities. Then, using monadic transforms, he described several
basic operations (addition being the one that sticks in my mind) that
could be used directly under the transformation. Using this approach,
modeling the genetic interaction (which is more like an inhibitor)
becomes facile, reducing nearly to the point of simple arithmetic.

Life Lessons


  • Of course, data modeling eliminates the alternative algorithmic
    implementation, so ...

  • Consider novel data structuring approaches
    before grabbing that same old algorithmic hammer.

Talk Pointers


2:00 pm Using Dominators for
Solving Constrained Path Problems
Luis Quesada, et al.

Luis demonstrated that using information from the
graph of the search for the shortest path (e.g. nodes 5 and 12
must be on the path; these nodes are said to dominate the path,
and if a particular mandatory node precedes another waypoint,
that mandatory node is said to dominate the waypoint) allows one to
eliminate paths that do not lead to an eventual solution.



Talking with Luis after his presentation, I asked about ACO (Ant
Colony Optimization). He was familiar with the approach, having talked
with a research group studying it, but opted for this approach. ACO is
not guaranteed to terminate, dominator-search does. He and I also talked
about the problem our company is facing, and I drew out the problem on
a napkin. He said he had studied resource-constrained search paths for
the problem (vehicles that have a limited fuel supply), and asked me to
provide a more detailed problem summary, so we could look at ACO vs.
dominator approaches.



Life Lessons




  • Luis' talk, while given in an abstract, theoretical, graph-theory
    approach, reeked of experience from solving real-world
    problems
    . More presentations at PADL should have this same
    smell.

  • I was sure that ACO was the only way to go with our problem-space,
    and after talking with Luis, I'm still pretty sure of this, but I've
    also become aware of another possibility: that the problem, once
    specified, may turn out to be so trivial as to need none of
    this
    -- the "shortest path" may actually turn out to be the
    smallest great circle, or some such. If one knows exactly what the
    problem is (intuitively), then a specification description
    might itself contain the solution
    .

  • Know what else is out there. I'm glad that Luis
    knew about ACO, considered its merits and decided on the dominator
    approach. I don't necessarily agree with his conclusions, but I admire
    his thoroughness.



Talk Pointers




2:30 pm Adding constraint solving to
Mercury
Ralph Becket, et al. presented by Zoltan
Somogyi (co-author)


Zoltan covered what the Mercury team is doing
to add constraint solvers as native constructs into the Mercury
language. One interesting thing that came out of this topic is that
the predicates that add constraints to the solver are
declarative, but those that read the state are
not
. This, of course, raised eyebrows in the room (particularly
Phil Wadler, who asked a question
about it). Zoltan explained this choice by stating that the solver
should be logically consistent, no matter the order in which
constraints are added (making added constraints declarative,
or order-independent), but reading out the solver's state (finding the
current solution set), actually results in a "write" elsewhere (into
users of the constraint solvers), in a way, destructively changing
the state of the program at large, and therefore a read is not
declarative. Zoltan owned that this is completely opposite to
prevalent declarative wisdom, but when examined in this light, is
consistent with the prevalent view.



As is always the case with these presentations, a holy war exploded. This
case: Prolog vs. Mercury, and on the subject of typing. How is this
even remotely related to the topic of this presentation? Isn't it
sweet that one
can go to these genteel sessions to peer into the vaulted chambers of
academe? I always love a good brawl. *sigh* I would offer some
advice about how such fisticuffs over such trivialities cement
views of the world at large about the researchers being out of touch
with reality and how that consequently makes adoption of research
ideas all the more difficult, but I know I would only be 1) wasting
my breath and 2) starting another pointless flame war.



Life Lessons



  • Constraints ... who needs 'em? Dunno; but that's a good thing
    that came out of this symposium: it points out things I don't know that
    I don't know, so I bought Constraint-based Local Search by
    Pascal Van Hentenryck and Laurent Michel, so I can learn a bit more
    about the topic.


Talk Pointers




3:00 pm A Hybrid BDD and SAT Finite
Domain Constraint Solver
Peter Hawkins (presenter) and Peter Stuckey


Peter's presentation piggybacked on
Zoltan's in that he used the
Mercury constraint solver for this work. Peter noticed that SAT solvers
are very fast (with the cost of expressivity: their expressions are
booleans in propositional logic, ugh!), because SAT solvers 'learn' from
rejected branches, whereas regular constraint
solvers perform not so well (but allow inequalities, etc). So, what he
did was use a binary decision diagram (the BDD) to accumulate learned
knowledge from rejected branches on the regular constraint solver
in the form of propositions, as in a SAT solver. He also did some
redundancy elimination. The profiled results were very good for his
hybrid execution-time-wise. Not only that, because of this 'learning'
his constraint solver was able to tackle problems that other constraint
solvers choked on (SAT solvers also suffered because their limited
syntax prevented formulating some constraint problems). Very cool.



BUT after the presentation I approached Peter. After
complimenting his presentation, I stated that it made no sense to me,
as I did not see a real-world application for constraint-based
programming. Peter was unable to provide
examples to me [that was exceedingly disappointing], but did refer me
to Pascal as an authoritative source, as his area of research is
constraints.



Life Lessons



  • Use information learnt from rejected branches to
    improve the search on new branches.



4:00 pm Efficient top-down set-sharing
analysis using cliques
Jorge Navas, Francisco Bueno and Manuel Hermenegildo
(presenter)


This presentation was about proving properties of Prolog programs,
such as parameter types and determinism, by using cliques (sets of
relations). Manuel demonstrated the system in action; it was slick!
It gave runtime feedback on a series of asserts he made about the
program (the assertions were ':-'/1 declarations in Ciao Prolog).
The key point of this presentation was that cliques could be represented
as power sets, so that 6 relations (xyz, xy, xz, x, y, z) could be
reduced to one powerset (over xyz). Then, by widening some assumptions,
the system was able to make more general inferences, but at the cost of
complete correctness.



Life Lessons



  • Data representation! Using the power set,
    instead of enumerating all the relations explicitly, saves both
    space and time, simplifying the programming task and the subsequent
    debugging/maintenance.


4:30 pm Automatic Verification of a
Model Checker by Reflection
Bow-Yaw Wang


Bow needed to verify the correctness of his model checker (that he
built for what practical purpose? I was unable to determine this), and
had an epiphany: the model checker could itself be checked by
reflection; that is, by building a model of the model checker and
verifying the correctness of that model. Of course, this begs the
question (which, of course, was asked after the presentation, but,
surprisingly, not by me (someone else beat me to it)): how does one
stop this line of induction? Bow really didn't have a satisfying
answer for this (but, practically speaking, one can make a pretty good
determination that "enough" rigor has been applied...).



Model checking is all well and good, I suppose, but, from what I've
seen of it, it seems like a silly exercise: pre-production model
checking is based on requirements that will change during the
development process, and post-production model checking is a foregone
conclusion ("Hey! this working system really works! Fancy that!"). I'm
filing model checking under 'works of
"theoretical" interest'.

Day 2: Tuesday, January 10, 2006


9:10 am8
Generic Cut Actions for External
Prolog Predicates
Tiago Soares, Ricardo Rocha and Michel Ferreira
(presenter)

The presentation noted that some Prologs, including XSB (*cough* Big
Surprise *cough* tabling
*cough*), had problems interacting with, e.g., C, particularly with
database cursors or other persistent structures that required eventual
deallocation. The problem is especially noticeable when a cut ('!'/0)
in the Prolog code prunes away the goal that calls the C deallocation
routine (go figure).

Michel's solution was to modify the (in this case, Yap) Prolog
compiler so that the foreign interface required a C function to call
when Prolog encountered a cut. This C function is an extra argument
for each predicate defined by a foreign (C) function.

This solves the problem of memory leaks, but I asked a different
question of the group. Dynamic languages pay a penalty whenever they
make a foreign call, something on the order of a factor of 40 for marshalling and
unmarshalling data between the languages. I asked whether there was a way
to represent the foreign interface natively, e.g. a database wrapper in
Prolog, not C; or, generally, whether there is a way to eliminate this
penalty.



The group shot me down. Zoltan said you either pay the penalty, or
write your own database, which (the latter) is a losing proposition.
Others concurred, saying that was in the nature of dynamic languages.
Zoltan later expressed surprise at the size of the penalty, saying he
observed only a factor of 2 for foreign calls in Mercury [*cough*
strongly typed means less marshalling *cough* Mercury's not
dynamic *cough*].



In a separate conversation over lunch, Michel explained to me that the
raison d'être for this system was that they built a classic
inductive system and stored the deductive database in MySQL (?), as the
deductions were numerous. I asked if his group had solved the problem
of inductive systems: a very slight perturbation in the data causes
such systems to fall into an undefined state or causes wildly incorrect
resulting rules. He said they hadn't, but were looking at that
problem.



Life Lessons


  • Use statistical methods for induction (Bayesian
    or neural); classical inductive systems are just too delicate to be
    useful with real-world problems.


9:40 am Tabling in Mercury: Design
and Implementation
Zoltan Somogyi (presenter) and Konstantinos
Sagonas


Zoltan presented tabling in Mercury given the constraints of the
type system. Given the system we are developing, tabling is not
apropos; I didn't get much out of this talk.



10:10 am Incremental Evaluation of
Tabled Prolog: Beyond Pure Logic Programs
Diptikalyan Saha and C. R. Ramakrishnan


Okay, here's the thing; you all
now know that tabling is not
one of my favorite topics, so the point of having two
talks on a compiler implementation detail
is what again? To prove that XSB Prolog doesn't do tabling correctly
presently? Isn't that clear enough already?



Anyway, what's the point of going beyond pure logic? Isn't
that going in a very bad direction? Perhaps one should explore
what happens when one moves toward pure logic, not away
from it? (hint, hint, Mercury, *cough*)



Life Lessons



  • Never! Ever! use XSB Prolog: it tables incorrectly
    and requires the user to turn off tabling
    explicitly!

  • Accommodating errors as a goal of design is
    erroneous!
    Instead of accommodating flaws, eliminate them
    (perhaps by going to a better system).



10:40 am Controlling search space
materialization in a practical declarative
debugger
Ian MacLarty and Zoltan Somogyi (presenter)


Holy Smart Debugger Design, Batman! This debugger,
instead of having you step through the code in the traditional manner,
does an inductive debugging search: it presents you with a predicate
with instantiated arguments and the result and asks if the result was
expected; if it wasn't, it drills into the predicate, seeking the
cause of the disconnect; if it was, it moves on. Groovy!


Life Lessons

  • Prolog's traditional debugging system is wonderful; much better than
    most every other thing out there, so it's unthinkable that something
    could be better -- imagine the impossible; then create
    it

  • Having the computer handle drudge work, repetitive tasks, and
    assisting the user in decision making ... What a
    concept!



2:00 pm LINQ: Reconciling objects,
relations and XML in the .NET framework: a Personal Perspective

Erik Meijer


Erik talked about the sociological aspects of linking a
semi-equivalent of Haskell's typeclasses into Visual Basic, making
sure it was backwards compatible, even down to the IL. He also showed
how XML syntax could be woven into VB procedures, allowing the
'copy-and-paste' coding style (the usual XML element instantiation
remains for those who wish to do it the longer, or more programmatic,
way). He followed with a whole side monologue about how the DOM-XML is
bad (oh, my goodness, how can that possibly be!) but not for
the usual reasons (ummmm, that would be, all of them?) but because
an element is entirely dependent on a document instance to exist, so,
things like moving a set of elements from one document to embed into
another are ridiculously hard. He solves this problem by reifying
element independently of the document. He solved this problem, and
probably thousands of others have, too, using the same approach. That's
called, 'reuse', right? One of the thrusts of his presentation
was that results from research were working their way into the
mainstream software engineering community.



Okay, a question: anything that permits and encourages bad
programming ('copy-and-paste' coding) ... that's good?



3:30 pm A Generic Code Browser with a
Declarative Configuration Language
Kris De Volder


Finally! a practical application! Kris demonstrated
a plugin for Eclipse (the Java IDE) that used a Mercury-like scripting
language to create on-demand code browsers. Slick! Kris was
also well-informed and reasonable: fully recognizing that he had
a small acceptance window ('5 seconds'), and a tiered expertise
base. So, he designed the tool so that most features were automatic
(GUI-selectable), some features obtainable via a regex-like query
system, and only the advanced features required the full language.
He also saw his niche: smack in between the rigid prebuilt GUI
browsers and the nothingness of rolling your own browser from
scratch, and he documented this niche thoroughly. Bravo! an
unqualified success!



4:00 pm Translating
Description Logic Queries to Prolog
Zsolt Nagy, Gergely Lukácsy (presenter)
and Péter Szeredi


Gergely presented a system that he and his colleagues had
developed that converts description logic assertions and
queries (in the ALC description language, which is
developed from, and is, a frame logic) down to Prolog terms.
He spent some time showing the differences between ALC
and Prolog, particularly highlighting Prolog's closed-world
model (where if a fact is not known, it is false) and
ALC's open-world assumption (OWA) model (where
something must be stated as false to be false; if something is
unknown, then it is simply unknown). This, of course, impacts
the semantics of negation: Prolog's negation as failure is not
congruent with the OWA model. But it also affects transitive
closures: he demonstrated Prolog's weakness (and the strengths
of frame logic) with the canonical Oedipus/Iocaste patricide
example (which brought forth chuckles from the audience when he
blushed at the moral implications of his example).
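
To make the closed-world point concrete, here is a minimal sketch of
my own (not the presenters' encoding; the module and predicate names
are hypothetical) showing what negation as failure concludes from a
fact base that simply never mentions Thersandros:

    :- module closed_world_demo.
    :- interface.
    :- import_module io.
    :- pred main(io::di, io::uo) is det.
    :- implementation.

    :- pred patricide(string::in) is semidet.
    patricide("oedipus").

    main(!IO) :-
        ( if patricide("thersandros") then
            io.write_string("thersandros is a patricide\n", !IO)
        else
            % Negation as failure: the fact is not asserted, so under
            % the closed-world assumption it is taken to be false.
            io.write_string("thersandros is not a patricide\n", !IO)
        ).

Under ALC's open-world assumption the same knowledge base licenses
neither conclusion: Thersandros's status remains unknown until the
negative (or positive) fact is asserted explicitly.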



Given all this, Gergely demonstrated that the system
efficiently reduced ALC terms and queries down to
Prolog. He also compared the results against other systems.
This system was very much faster than the others. Gergely pointed
out, in fairness, that the other systems also handled other
description logic languages, so they were perhaps not fully
optimized for ALC. No matter: this system is a full
frame logic implementation, and it performs well, even finding
solutions to queries on which other systems do not terminate;
and this system is very effective at ignoring "noise" in the
knowledge base (noise being in the form of many other terms
that have no relation to the queries being posed -- other
systems lag exponentially in the face of larger, noisier
knowledge bases).



Life Lessons



  • If you are a father, do not allow any of your sons
    to be named 'Oedipus'!

  • Too much generality in a tool can kill what you
    need to do. The best tool for the job may not be the
    most featureful.






4:30 pm Querying Complex
Graphs
Yanhong A. Liu (presenter) and Scott D. Stoller


Annie presented a system based on sets and regular
expressions that allowed intuitive querying of
(semi-?)structured data. The system converted the queries to
Datalog equivalents and eventually produced the query as C
code. The interesting thing about this system was that the
query itself was only half the answer; the other half of the
answer was a measure of the complexity of the query. In this
way the query could be reformulated so that the result would
entail less complexity. Very sweet!



Life Lessons



  • You don't miss what you don't know.
    Anyone, on seeing how complex their system is, sees the solid
    benefit of this measurement. Why is this not everywhere?
    Because nobody knows it is possible.


Endnotes



1 As all of the talks under the "Practical" "Applications"
heading were neither practical (they were all used to further research
or as dissertations) nor applications (several speakers
throughout the conference, when I
queried them about aspects of their presentations, stated that the works
they discussed were still under development), I query the accuracy of
the topic heading for these talks. A saving grace is that all
of the talks on this topic did have results that were practical
and immediately usable for real-world applications, unlike what
some of the other presentations had to offer.
2 "Theoretical" as in "yeah, I'll look into this
stuff after I incorporate all the practical stuff from the symposium into
the project, complete the project, build the world's fifth largest (if not
larger) supercomputer, build a demonstratively useful quantum computer
and finish the laundry."

3 Also, I do not consider compiler implementations
or optimizations at all a practical application. The compiler is a tool
to provide the ability to build a practical application: it, per se,
is not.

4 Yes, there were two presentations on tabling. Both of
them I found to be very "useful" in a "theoretical" sense of the word.
5 Okay, okay, so I am looking into constraints. This
does point out an issue with the symposium: it may point out things that
I don't know I don't know, but it wasn't very helpful in educating me
about the practical applications in these areas.
6 I think a more appropriate term for 'description
logics' or 'ontologies' is 'hooey'. Bluntly, description
logics are as innovative and as useful as Java is ('the most distressing
thing to hit computing since MS-DOS', Alan Kay). The claims proponents
grandiosely make about knowledge representation are much the same claims,
delivered in the same self-satisfied tone, that the AI community made for
the symbolic representation of knowledge (what, again, is
description logic? Oh, the 'symbolic representation of knowledge'? Ah,
yes, we have a winner here!) just before AI went dark for twenty years;
that, AND the bedfellows it keeps
(whispered: 'eggs'-'em'-'ill'), leaves me shocked that more gullible
consumers of this hogwash haven't made the obvious connections and
avoided this dead end...
7 The 'random' in "'random' testing" means 'comprehensive
with random generation of parameters', not 'a random selection
[e.g. not comprehensive] of possible tests'.
8 This is a departure from the published schedule;
the invited speaker for the morning was a no-show.




author: Douglas M. Auclair
dauclair at hotmail dot com
date: January 16, 2006
Copyright © 2006, Logical Types, LLC. All rights
reserved.

Thursday, March 1, 2007

Archived News: 2007-03

I have eliminated my "corrections" to the Mercury
extras xml library (I had an outdated
version of the library as outlined in the
2007-03-18 news post, so
my corrections were redundant with the Mercury team's).
At the same time, I have moved my XML enhancements
into the utils library and promoted
the doug_graph module from alpha; it is
now also included in utils.

18:

A new library and several improvements can be found in
the shared repository:

  • The xml library provided in the Mercury
    extras distribution is out-of-date; it no
    longer compiles. I have fixed the compile errors and added
    several modules (to assist in XML transformations and
    pretty-printing) and tests.

    Update 2007-03-28:

    I was in error in the above item: my
    copy
    of the xml library provided by the
    Mercury team was out of date; the version
    supplied with the extras
    distribution by the Mercury team is
    working correctly.

    This means my fixes to the xml library are
    redundant, so I withdraw them. The extended
    functionality, however, I do continue to find
    (very) useful, so I am moving these
    enhancements as an xml library under
    the utils umbrella.

  • Several improvements are available for
    qcheck2:



    1. I have modified the qcheck2 library so
      that it now uses the RNG protocol as proposed in the
      Mercury users' maillist. I have also modified the
      reporting feature to accept a polymorphic type for
      module.predicate unit tests ... this improvement
      'upgrades' qcheck2 to be an independent
      library (qcheck was also an independent
      library).

    2. The program qcpt that generates the
      module.predicate test points for a system has also
      been updated to use the new qcheck2
      reporting protocol. qcpt is bundled
      with ltq.



  • I have entirely changed the utils library:

    1. Although useful for a small number of repetitions,
      the peano module becomes unwieldy for
      large cycles (1,000,000 is represented as 1,000,001
      cons cells!). So, I have discarded it in favor of a
      slightly more sophisticated counting algorithm (where
      1,000,000 is represented by 7 cons cells) in the
      utils.series module that now also includes
      loop abstraction with func unfold/3 and
      pred svunfold/6 (the latter being used when
      one must also update a dependent state variable); see
      the sketch after this list.

    2. Julian Fondrant on the Mercury Users maillist
      proposed an RNG typeclass and protocol, and published a
      module implementation using the tausworthe3 algorithm.
      I have incorporated this module (as
      utils.random) with a simplified
      façade and other minor corrections.
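
For the curious, here is a minimal sketch of the loop-abstraction
idea; the signature below is an assumption of mine for illustration
only and may well differ from the actual utils.series declarations:

    :- module series_sketch.
    :- interface.
    :- import_module int.
    :- func sum_of_squares(int) = int.
    :- implementation.

        % Hypothetical shape of an unfold-style loop: apply F to the
        % counters N, N-1, ..., 1, threading an accumulator through.
    :- func unfold(func(int, T) = T, int, T) = T.
    unfold(F, N, Acc) =
        ( if N =< 0 then Acc else unfold(F, N - 1, F(N, Acc)) ).

        % Example use: the sum of the first N squares, with no explicit
        % recursion at the call site.
    sum_of_squares(N) = unfold((func(I, A) = A + I * I), N, 0).

The svunfold/6 variant, per the note above, additionally threads a
dependent state variable through the loop; it is not sketched here.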




Copyright © 2006, 2007, Logical Types, LLC.
All rights reserved.