My Constraint Programming Blog: March 2010 Archives


March 30, 2010

Comet version 2.1.0 released

Comet version 2.1.0 has been released. It can be downloaded here.

From the release notes (in the distribution):

Major additions since 2.0.1:
  • New Version of Comet Studio IDE
  • New Version of User Manual
  • New Scheduling Resources

Detailed list of improvements:

  • New Version of Comet Studio IDE
  • New Version of User Manual, covering:
    • set variables in CP
    • soft global constraints in CP
    • more aspects of scheduling in CP
    • over-constrained problems
    • concurrency
    • visualization
    • interfacing with databases, XML, C++ and Java

  • General
    • Extended getSnapshot method for set variables
    • switch instruction now works with enumerated types
    • Extended onFailure to work with all types of selectors
    • Made it possible to create sets of arrays
    • Added copy method for set{string}
    • Added directory browsing support (classes FSDirectory and FSFile in the Core module)
    • Added a reset method to the C++ API so that one can parse a Comet model once and solve several times with the same input
    • Added a method isEmpty() for sets
    • Added support for dictionaries of Objects (including string, Integer, Float, Boolean)
    • Allow definition of sets over enumerated types
    • Added filename option for seed tracking with -t and -r flags
    • Added the methods getNAN(), getPInf() and getNInf() to return the IEEE double representations for Not-a-Number (NaN), +infinity, and -infinity
    • Allow equality testing between two ranges

  • Constraint Programming
    • Reduced cost on soft-cardinality constraints
    • More flexibility on cardinality constraints: not all values from the domains need to be constrained in cardinality
    • Added built-in guarded constraints
    • Added CostRegular and a new propagator for Regular
    • Allowed access to min/max in solution
    • Added the methods: Outcome requires(int val) and Outcome excludes(int val), on var{set{int}}
    • Allow declaration of arrays of var in extension
    • Added method function Solver getCPSolver(expr{int} e) to get the Solver from an expr{int}
    • Extended getSnapshot to var{set{int}} variables
    • Added getValue() method for var{float}
    • Added label(var{set{int}}[]) and getIntSetVariables() methods on Solver

  • Mathematical Programming
    • Allow declaration of float LP variables without an initial lower bound
    • Added a method to manually release the memory of a MIP/LP solver

  • Scheduling
    • New resources:
      • UnarySequence
      • UnaryResourcePool
      • UnarySequencePool
      • DiscreteResourcePool

    • Deprecated AlternativeUnaryResource

  • Visualization
    • New VisualBarplot widget
    • Improved VisualHistogram behavior and functionalities
    • New slider widget: UISlider
    • Added tooltips on VisualText, VisualTextTable, etc
    • Added more methods to VisualActivity:
      • void setRow(int i);
      • void setColor(string c);
      • void setToolTip(string s);

    • Several enhancements to VisualGantt

March 29, 2010

MiniZinc Challenge 2010

MiniZinc Challenge 2010 has been announced:

The Challenge

The aim of the challenge is to start to compare various constraint solving technologies on the same problem sets. The focus is on finite domain propagation solvers. An auxiliary aim is to build up a library of interesting problem models, which can be used to compare solvers and solving technologies.

Entrants to the challenge provide a FlatZinc solver and global constraint definitions specialized for their solver. Each solver is run on 100 MiniZinc model instances. We run the translator mzn2fzn on the MiniZinc model and instance using the provided global constraint definitions to create a FlatZinc file. The FlatZinc file is input to the provided solver. Points are awarded for solving problems, speed of solution, and goodness of solutions (for optimization problems).

....

For the rules, see: MiniZinc Challenge 2010 -- Rules.

Also, see the earlier challenges:

March 26, 2010

MiniZinc version 1.1.1 released

MiniZinc version 1.1.1 has been released. Download.

From the NEWS:

G12 MiniZinc Distribution version 1.1.1
---------------------------------------

Bugs fixed in this release:

* A bug that caused predicate arguments to be incorrectly flattened in
reifying contexts has been fixed. [Bug #109]

* A bug in mzn2fzn that caused incorrect bounds to be calculated for the
result of a mod operation has been fixed. [Bug #107]

* A bug in mzn2fzn that caused out-of-range array accesses to be generated in
reified contexts, instead of just making the result of the reification
false, has been fixed. [Bug #110]

* The omission of the null annotation from the Zinc / MiniZinc specification
has been fixed.

* The rostering problem in the MiniZinc benchmark suite (benchmarks/roster)
has been reformulated. The old formulation was always unsatisfiable under
the change to the semantics of the mod operation introduced in MiniZinc 1.1.
[Bug #108]

* A bug in mzn2fzn that caused it to emit the null/0 annotation in the
generated FlatZinc has been fixed. [Bug #111]

March 22, 2010

Minion version 0.10 released

Minion version 0.10 has been released. From the News:
March 17, 2010. Minion 0.10 has been released.

Changelog:

  • Nauty is now included by default for automatic symmetry detection.
  • New features:
    • added watchvecexists_less constraint
    • added quick lexless constraint
    • added lightweight table constraint
    • new flag -instancestats that will give various properties about the instance
    • new flag -searchlimit to limit only time spent in search and not in preprocessing
  • Bugfixes:
    • time limits are now taken into account during preprocessing as well
    • some of the generators were getting old and have been updated
    • reification of unary constraints works properly now
    • set constraints on empty sets now work
    • several bugfixes for watched constraints
  • Misc improvements:
    • speed and memory improvements for the parser
    • reduced memory requirements
    • improved testing
    • tidier reimplementation of the search manager
    • bumped minimum Mac OSX version from 10.4 to 10.5

March 17, 2010

MiniZinc version 1.1 released

MiniZinc version 1.1 has been released. See below how my existing (and maybe others') MiniZinc models are affected by the changes.

From the NEWS:

G12 MiniZinc Distribution version 1.1
-------------------------------------

Changes to the MiniZinc language:

* The following built-in operations have been introduced:

int: lb_array(array[int] of var int)
float: lb_array(array[int] of var float)
set of int: lb_array(array[int] of var set of int)

int: ub_array(array[int] of var int)
float: ub_array(array[int] of var float)
set of int: ub_array(array[int] of var set of int)

set of int: dom_array(array[int] of var int)

These new operations are synonyms for the following existing built-in
MiniZinc operations:

int: lb(array[$T] of var int)
float: lb(array[$T] of var float)
set of int: lb(array[$T] of var set of int)

int: ub(array[$T] of var int)
float: ub(array[$T] of var float)
set of int: ub(array[$T] of var set of int)

set of int: dom(array[$T] of var int)

These latter operations are now deprecated. Support for them will
be removed in the next release. This change is being made in order
to preserve compatibility with the full Zinc language.

Note that only the versions of lb, ub and dom that take an array
as an argument are deprecated. The MiniZinc lb, ub and dom operations
on non-array values are *not* deprecated.
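To make the renaming concrete, here is a minimal sketch (mine, not from the NEWS file) contrasting the old and new forms on an array of decision variables:

```minizinc
array[1..3] of var 1..10: y;

int: lo = lb_array(y);   % new, preferred form in MiniZinc 1.1
int: hi = ub_array(y);   % new, preferred form in MiniZinc 1.1
% int: lo_old = lb(y);   % deprecated form; to be removed in a later release

solve satisfy;
```

For scalar variables, lb(x) and ub(x) stay as they are; only the array-taking overloads get the new _array names.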


Changes to the FlatZinc language:

* Boolean variable expressions as constraints are no longer supported.
All constraints in FlatZinc must now be predicate applications.

* String parameters are no longer supported. String literals are restricted
to appearing as the arguments of annotations.

* Set of bool and set of float parameters and literals are no longer
supported.

* The int_float_lin/4 objective expression is no longer supported.

* FlatZinc now has two additional evaluation outcomes: "unknown"
for when search terminates without having explored the whole search
space and "unbounded", for when the objective of an optimization
problem is unbounded.

* The semantics of the int_div/3 and int_mod/3 built-ins has been changed.
See the ``Specification of FlatZinc'' for further details.
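The NEWS file doesn't spell out the new semantics, but my reading of the FlatZinc specification is that div now truncates toward zero and mod takes the sign of the dividend, so that x = (x div y) * y + (x mod y) always holds. A small sketch of the behavior (my assumption, check the specification for the authoritative definition):

```minizinc
% Under the assumed 1.1 semantics:
%   (-7) div 2 = -3  (truncation toward zero, not flooring to -4)
%   (-7) mod 2 = -1  (sign follows the dividend)
var -10..10: q;
var -10..10: r;
constraint q = (-7) div 2;
constraint r = (-7) mod 2;
solve satisfy;
```

This sign convention is what made the old roster benchmark formulation unsatisfiable (Bug #108 above).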


Other Changes:

* The single pass MiniZinc interpreter, minizinc, has been deprecated.
It will be removed in a future release.

* The MiniZinc-to-FlatZinc converter, mzn2fzn, has been rewritten.
The new implementation is smaller and more efficient.
Computation of variable bounds has also been improved.

* mzn2fzn now outputs singleton sets as ranges. [Bug #94]

* A bug that caused expressions containing abs/1 to be incorrectly
flattened has been fixed. [Bug #91]

* The FlatZinc interpreter's finite-domain backend now implements
global_cardinality_low_up as a built-in.

* The FlatZinc interpreter's lazy clause generation solver now supports
the int_mod/3 built-in.

* Two additional modes of operation have been added to the FlatZinc
solution processing tool, solns2dzn, allowing it to extract the first
or last solution from a FlatZinc output stream. Also, there is no longer
a default mode of operation for solns2dzn; it must now be specified by
the user or an error will occur.

* The following new global constraints have been added to the MiniZinc
library:

all_equal
decreasing
diffn
lex2
lex_greater (Synonym for lex_less with arguments swapped.)
lex_greatereq (Synonym for lex_lesseq with arguments swapped.)
sliding_sum
strict_lex2

* The following synonyms for existing global constraints have been added
to the MiniZinc library (the existing name is given in parentheses):

alldifferent (all_different)
atleast (at_least)
atmost (at_most)
atmost1 (at_most1)

* The sequence constraint is deprecated. Models should use the new
sliding_sum constraint instead.
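As a sketch of the replacement, sliding_sum constrains every window of seq consecutive variables to sum to within [low, up]. The argument order below is my assumption from the MiniZinc library, i.e. sliding_sum(low, up, seq, vs):

```minizinc
include "globals.mzn";

array[1..7] of var 0..2: v;

% every 3 consecutive values must sum to between 2 and 5
constraint sliding_sum(2, 5, 3, v);

solve satisfy;
```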

* The 'table' constraint decompositions in the MiniZinc library have been
modified so as to fit better with the G12 MiniZinc-to-FlatZinc conversion:
now no scaling constraints are created.

* The decompositions of the constraints in the 'lex' family have been
tweaked to enable a little more propagation.

Documents

Here are the three new specification documents for MiniZinc version 1.1 (and Zinc version 0.11):

Comments

The next few days I will change my MiniZinc models so they comply with version 1.1, and I have already started this work. Update 2010-03-20: These changes have now been done, including updating the SVN repository.

The following will be changed:

  • lb/ub for arrays

    lb(array) and ub(array) will be changed to lb_array(array) and ub_array(array) respectively.
  • Comparing/copying arrays

    One thing that should not have worked even in MiniZinc version 1.0.3 - but for some reason did - was copying/equality/comparison of arrays in the constraint section or in predicates. This does not work in MiniZinc version 1.1. E.g., the following no longer works:

    int: n = 4;
    array[1..n] of var 1..n: x;
    constraint
    x = [1,2,3,4] % no longer works
    ;

    Instead, the arrays must now be handled element-wise. Since many of my models use the above construct, especially for testing the global constraints, the models use a new predicate family cp<n>d (where <n> is the dimension, 1, 2, etc), e.g. cp1d and cp2d. Example of one version of cp1d:

    int: n = 4;
    array[1..n] of var 1..n: x;
    solve satisfy;
    % arrays of 1d where both arguments are var int
    predicate cp1d(array[int] of var int: x, array[int] of var int: y) =
      assert(index_set(x) = index_set(y),
             "cp1d: x and y have different sizes",
             forall(i in index_set(x)) ( x[i] = y[i] ));
    constraint
    cp1d(x, [1,2,3,4]) % this works
    ;

    Some examples are collected in the model copy_arrays.mzn.

    I estimate that over 200 of my models have to be fixed in this way. As mentioned above, some of the models have already been changed.

  • Renamed models

    Some of my MiniZinc models have been renamed because they clash with new built-in predicates:

After the changes are done, I will also update the G12's MiniZinc SVN repository, the hakank directory.

Two more things

Also see, my MiniZinc Page.

March 16, 2010

Gecode version 3.3.0 released

Gecode version 3.3.0 has been released. From the Changelog:

Changes in Version 3.3.0 (2010-03-15)

This release provides some fixes, some performance improvements for domain propagators, and quite a few clean-ups in how propagators and advisors report their status to the kernel. Many of these clean-ups are essential to making it easier to program propagators and branchers with Gecode.

  • Kernel
    • Additions
      • Advisors can now force their propagator to be rescheduled, including recomputation of the cost used for scheduling (normally, a propagator is only rescheduled if its modification event delta changes). An advisor can signal forceful rescheduling by returning ES_NOFIX_FORCE or the return value of ES_NOFIX_FORCE_DISPOSE. Read the forthcoming "Modeling and Programming with Gecode" for details. (minor)
    • Other changes
      • The failure macros for posting GECODE_ES_FAIL and GECODE_ME_FAIL now only accept a single argument and assume that "home" actually refers to the home space. Read the forthcoming "Modeling and Programming with Gecode" for details. (minor)
      • The functions ES_FIX_PARTIAL, ES_NOFIX_PARTIAL, ES_FIX_DISPOSE, and ES_NOFIX_DISPOSE are now members of Space. Read the forthcoming "Modeling and Programming with Gecode" for details. (minor)
      • The function ES_SUBSUMED is now a member of Space and accepts a propagator as its single argument. The variant with a size as the second argument is available as ES_SUBSUMED_DISPOSED, but its use is highly discouraged. Read the forthcoming "Modeling and Programming with Gecode" for details. (minor)
      • The functions ES_SUBSUMED_FIX and ES_SUBSUMED_NOFIX for advisors have been renamed to ES_FIX_DISPOSE and ES_NOFIX_DISPOSE. Read the forthcoming "Modeling and Programming with Gecode" for details. (minor)
    • Bug fixes
      • ViewValBrancher with random value selection did not produce a random sequence of values. (minor)
  • Finite domain integers
    • Additions
      • Integer sets (IntSet) now have an in member function for testing whether an integer is included. (minor)
    • Other changes
      • Patterns for reified propagators have been moved to the Gecode::Int namespace. (minor)
    • Performance improvements
      • Considerably improved performance and memory consumption of the DFA-based extensional constraint (regular). (major)
  • Finite integer sets
    • Other changes
      • Patterns for set propagators have been moved to the Gecode::Set namespace. (minor)
  • Minimal modeling support
    • Additions
      • Linear expressions can freely mix integer and Boolean variables and support construction from variable arrays via a sum function. (minor)
    • Removals
      • Removed special cases for posting linear and Boolean expressions consisting of a single variable only (was highly ambiguous). (minor)
  • Support algorithms and datastructures
    • Performance improvements
      • Changed to a single, really efficient bitset implementation used all over the system. (major)
  • Gist
    • Bug fixes
      • Avoid inter-thread call to QWidget::update, which apparently causes a slight memory leak (and warning messages on stderr) on Mac OS. (minor)
  • Gecode/FlatZinc
    • Other changes
      • The FlatZinc interpreter can now be extended by plugins that implement custom search strategies. The plugins are implemented as dynamically loaded libraries using the Qt plugin mechanism. An example can be found in the directory gecode/flatzinc/exampleplugin. (minor)
      • The index of the variable used for optimization is now available in the FlatZincSpace class. (minor)
      • Added command line option -print, which controls whether all solutions are printed or only the last one that is found, and -search, to choose between branch-and-bound and restart optimization. (minor)
      • The FlatZinc library can now parse a FlatZinc model into any subclass of FlatZincSpace, so that custom extensions can be built. Annotations on the solve item can now be accessed from the returned FlatZincSpace, so that additional search strategies can be implemented. (minor)
    • Bug fixes
      • The FlatZinc interpreter ignored the -c-d and -a-d command line switches when used with Gist. (minor)

I'm really looking forward to the forthcoming document Modeling and Programming with Gecode mentioned above. Also, I'm doing some experiments with the new search annotations using AFC (Accumulated Failure Count, a.k.a. the weighted degree of a variable) for Gecode/fz (the FlatZinc solver):
  • afc_min
  • size_afc_min
  • afc_max
  • size_afc_max

March 12, 2010

Pi Day Sudoku 2009 - the models (MiniZinc and Comet)

March 14 is Pi Day (π Day), a celebration of Pi (3.14 in the mm.dd date notation). There are a lot of activities for this day. One of these activities is the Pi Day Sudoku.

Almost exactly one year ago, I blogged about the Pi Day Sudoku 2009 problem in two posts. The problem is an extended version of a Sudoku problem:
Rules: Fill in the grid so that each row, column, and jigsaw region contains 1-9 exactly once and π [Pi] three times.


Since it was a competition (closed June 1, 2009), I didn't publish any models of this problem when blogging about it. Now, a year later, it seems to be a good time to publish them. I implemented two versions, one in MiniZinc and one in Comet. Both models use the same approach (though the details differ). It is the same as for plain Sudoku, with two exceptions:
  • The common approach in plain Sudoku is to use the global constraint alldifferent to state that the values in the rows, columns, and regions should be different. Since there should be 3 occurrences of π in each row, column, and region, this approach doesn't work. As mentioned in Solving Pi Day Sudoku 2009 with the global cardinality constraint, my first approach was a home-made version of the constraint alldifferent except 0 and Pi, but it was too slow. Instead (via a suggestion of Mikael Zayenz Lagerkvist) I changed to the global constraint global_cardinality, which was much faster.
  • The other difference from the standard approach is that the regions are not 3x3 (or MxM), but "jigsaw regions". It was not especially hard to make this representation (though boring to type the regions in). The constraint for checking the regions is (in MiniZinc):
      /\ % the regions
      forall(i in 1..num_regions) (
        check([x[regions[j,1],regions[j,2]] | j in 1+(i-1)*n..1+((i-1)*n)+11 ])
      )
    
Here I will not publish the solution to the puzzle, since I gather that there are a lot of people out there who will solve it on their own. And there is at least one valid solution out there.

Summary

This was a fun problem, especially since I learned some new things by implementing the models. As a constraint programming challenge it was quite a bit harder than this year's puzzle, Pi Day 2010:
Rules: Fill in the grid so that each row, column, and block contains 1-9 exactly once. This puzzle only has 18 clues! That is conjectured to be the least number of clues that a unique-solution rotationally symmetric puzzle can have. To celebrate Pi Day, the given clues are the first 18 digits of π = 3.14159265358979323...
[Yes, I have implemented a MiniZinc model for this problem as well; it is a standard Sudoku problem. No, I will not publish the model or a solution until the deadline, June 1, 2010.]

For more about Pi Day Sudoku 2010, see the blog 360: Pi Day Sudoku is back.

Also, see the following models that implement Killer Sudoku, which use the same approach as Pi Day Sudoku 2009:

The MiniZinc code

Here is the MiniZinc model (sudoku_pi.mzn), slightly edited.
include "globals.mzn";
int: n = 12;
int: X = 0;  % the unknown
int: P = -1; % π (Pi)

predicate check(array[int] of var int: x) =
   global_cardinality(x, occ) % :: domain
;

array[1..n, 1..n] of var -1..9: x :: is_output;
array[-1..9] of 0..3: occ = array1d(-1..9, [3, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]);
array[1..11] of 0..3: occ2 = [3, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1];

% solve satisfy;
solve :: int_search([x[i,j] | i,j in 1..n], first_fail, indomain_min, complete) satisfy;

constraint

  % copy the hints
  forall(i in 1..n, j in 1..n) (
      x[i,j] != 0
      /\
      if dat[i,j] != X then
        x[i,j] = dat[i,j]
      else 
        true
      endif
  )

  /\ % rows
  forall(i in 1..n) (
    check([x[i,j] | j in 1..n]) 
  )

  /\ % columns
  forall(j in 1..n) (
    check([x[i,j] | i in 1..n])
  )

  /\ % the regions
  forall(i in 1..num_regions) (
    check([x[regions[j,1],regions[j,2]] | j in 1+(i-1)*n..1+((i-1)*n)+11 ])
  )
;

output 
[ show(occ) ++ "\n"] ++
[
  if j = 1 then "\n" else " " endif ++
   show(x[i,j])
  | i in 1..n, j in 1..n

] ++ ["\n"];


% data
array[1..n,1..n] of int: dat = array2d(1..n, 1..n,
[
 4,9,7, P,5,X,X,X,X, X,X,X,
 X,P,X, 8,X,X,9,6,1, 5,2,X,
 X,8,X, 1,X,X,X,P,X, 7,X,X,
 X,X,X, X,X,X,X,P,X, 4,X,X,
 5,3,9, 6,X,X,X,X,X, X,X,X,

 9,4,X, P,P,P,7,X,X, X,X,X,
 X,X,X, X,X,6,2,5,P, X,7,4,
 X,X,X, X,X,X,X,X,P, P,3,8,
 X,7,8, 4,6,9,X,X,X, X,X,X,

 X,X,3, X,P,X,X,4,7, 1,6,9,
 X,X,4, X,1,X,X,X,6, X,P,X,
 X,X,X, X,X,X,X,X,4, X,5,X
 ]);


% The regions
int: num_regions = 12;
array[1..num_regions*12, 1..2] of int: regions  = array2d(1..num_regions*12, 1..2,
[
  % Upper left dark green
  1,1  , 1,2  , 1,3  , 
  2,1  , 2,2  , 2,3  , 
  3,1  , 3,2  , 
  4,1  , 4,2  ,  
  5,1  , 5,2  , 
 
  % Upper mid light dark green
  1,4  ,  1,5  ,  1,6  ,  1,7  ,  1,8  ,  1,9  , 
  2,4  ,  2,5  ,  2,6  ,  2,7  ,  2,8  ,  2,9  , 

  % Upper right green
  1,10  ,  1,11  ,  1,12  , 
  2,10  ,  2,11  ,  2,12  , 
  3,11  ,  3,12  , 
  4,11  ,  4,12  , 
  5,11  ,  5,12   , 

  % Mid upper left "blue"
  3,3  ,  3,4  , 3,5  ,  3,6  , 
  4,3  ,  4,4  , 4,5  ,  4,6  , 
  5,3  ,  5,4  , 5,5  ,  5,6  , 

  % Mid Upper right blue
  3,7  ,  3,8  ,  3,9  ,  3,10  , 
  4,7  ,  4,8  ,  4,9  ,  4,10  , 
  5,7  ,  5,8  ,  5,9  ,  5,10  , 

  % Mid left green
  6,1  ,  6,2  , 6,3  , 
  7,1  ,  7,2  , 7,3  , 
  8,1  ,  8,2  , 8,3  , 
  9,1  ,  9,2  , 9,3  , 

  % Mid left blue
  6,4  , 6,5  , 
  7,4  , 7,5  , 
  8,4  , 8,5  , 
  9,4  , 9,5  , 
  10,4 , 10,5  , 
  11,4 , 11,5  , 

  % Mid mid green
  6,6  , 6,7  , 
  7,6  , 7,7  , 
  8,6  , 8,7  , 
  9,6  , 9,7  , 
  10,6 , 10,7  , 
  11,6 , 11,7  , 

  % Mid right blue
  6,8  ,  6,9  , 
  7,8  ,  7,9  , 
  8,8  ,  8,9  , 
  9,8  ,  9,9  , 
  10,8 ,  10,9  , 
  11,8 ,  11,9  , 

  % Mid right green
  6,10  ,  6,11  ,  6,12  , 
  7,10  ,  7,11  ,  7,12  , 
  8,10  ,  8,11  ,  8,12  , 
  9,10  ,  9,11  ,  9,12  , 

  % Lower left dark green
  10,1  , 10,2  ,  10,3  , 
  11,1  , 11,2  ,  11,3  , 
  12,1  , 12,2  ,  12,3  , 12,4  , 12,5  ,  12,6  , 

  % Lower right dark green
  10,10  ,  10,11  , 10,12  , 
  11,10  ,  11,11  , 11,12  , 
  12,7   ,  12,8   ,  12,9  , 12,10  , 12,11  ,  12,12  
]);

The Comet code

The Comet model (sudoku_pi.co) uses the same principle as the MiniZinc model. However, the representation of the regions is different: instead of a matrix, I use a more object-oriented approach with two tuples for the structures. For some reason that I have now forgotten, I didn't create a check function in this Comet model; instead I stated the cardinality constraints directly.
import cotfd;
int t0 = System.getCPUTime();

int n = 12;
int P = -1; // Pi
int X = 0; // unknown 
range R = -1..9; 

set{int} V = {-1,1,2,3,4,5,6,7,8,9};

// regions where 1..9 is alldiff + 3 Pi
tuple Pos {
  int row;
  int col;
}

tuple Region {
  set{Pos} p;
}

int num_regions = 12;
Region regions[1..num_regions] = 
[
 // Upper left dark green
 Region({Pos(1,1),Pos(1,2),Pos(1,3),
         Pos(2,1),Pos(2,2),Pos(2,3),
         Pos(3,1),Pos(3,2),
         Pos(4,1),Pos(4,2), 
         Pos(5,1), Pos(5,2)}),
 
 // Upper mid light dark green
 Region({Pos(1,4), Pos(1,5), Pos(1,6), Pos(1,7), Pos(1,8), Pos(1,9),
         Pos(2,4), Pos(2,5), Pos(2,6), Pos(2,7), Pos(2,8), Pos(2,9)}),

 // Upper right green
 Region({Pos(1,10), Pos(1,11), Pos(1,12),
         Pos(2,10), Pos(2,11), Pos(2,12),
         Pos(3,11), Pos(3,12),
         Pos(4,11), Pos(4,12),
         Pos(5,11), Pos(5,12) }),

 // Mid upper left "blue"
 Region({Pos(3,3), Pos(3,4),Pos(3,5), Pos(3,6),
         Pos(4,3), Pos(4,4),Pos(4,5), Pos(4,6),
         Pos(5,3), Pos(5,4),Pos(5,5), Pos(5,6)}),

 // Mid Upper right blue
 Region({Pos(3,7), Pos(3,8), Pos(3,9), Pos(3,10),
        Pos(4,7), Pos(4,8), Pos(4,9), Pos(4,10),
        Pos(5,7), Pos(5,8), Pos(5,9), Pos(5,10)}),

 // Mid left green
 Region({Pos(6,1), Pos(6,2),Pos(6,3),
         Pos(7,1), Pos(7,2),Pos(7,3),
         Pos(8,1), Pos(8,2),Pos(8,3),
         Pos(9,1), Pos(9,2),Pos(9,3)}),

 // Mid left blue
 Region({Pos(6,4),Pos(6,5),
         Pos(7,4),Pos(7,5),
         Pos(8,4),Pos(8,5),
         Pos(9,4),Pos(9,5),
         Pos(10,4),Pos(10,5),
         Pos(11,4),Pos(11,5)}),

 // Mid mid green
 Region({Pos(6,6),Pos(6,7),
         Pos(7,6),Pos(7,7),
         Pos(8,6),Pos(8,7),
         Pos(9,6),Pos(9,7),
         Pos(10,6),Pos(10,7),
         Pos(11,6),Pos(11,7)}),

 // Mid right blue
 Region({Pos(6,8), Pos(6,9),
         Pos(7,8), Pos(7,9),
         Pos(8,8), Pos(8,9),
         Pos(9,8), Pos(9,9),
         Pos(10,8), Pos(10,9),
         Pos(11,8), Pos(11,9)}),

 // Mid right green
 Region({Pos(6,10), Pos(6,11), Pos(6,12),
         Pos(7,10), Pos(7,11), Pos(7,12),
         Pos(8,10), Pos(8,11), Pos(8,12),
         Pos(9,10), Pos(9,11), Pos(9,12)}),

 // Lower left dark green
 Region({Pos(10,1),Pos(10,2), Pos(10,3),
         Pos(11,1),Pos(11,2), Pos(11,3),
         Pos(12,1),Pos(12,2), Pos(12,3),Pos(12,4),Pos(12,5), Pos(12,6)}),

 // Lower right dark green
 Region({Pos(10,10), Pos(10,11),Pos(10,12),
         Pos(11,10), Pos(11,11),Pos(11,12),
         Pos(12,7),Pos(12,8), Pos(12,9),Pos(12,10),Pos(12,11), Pos(12,12)})

 ];

// the hints
int data[1..n,1..n] = 
[
 [4,9,7, P,5,X,X,X,X, X,X,X],
 [X,P,X, 8,X,X,9,6,1, 5,2,X],
 [X,8,X, 1,X,X,X,P,X, 7,X,X],
 [X,X,X, X,X,X,X,P,X, 4,X,X],
 [5,3,9, 6,X,X,X,X,X, X,X,X],

 [9,4,X, P,P,P,7,X,X, X,X,X],
 [X,X,X, X,X,6,2,5,P, X,7,4],
 [X,X,X, X,X,X,X,X,P, P,3,8],
 [X,7,8, 4,6,9,X,X,X, X,X,X],

 [X,X,3, X,P,X,X,4,7, 1,6,9],
 [X,X,4, X,1,X,X,X,6, X,P,X],
 [X,X,X, X,X,X,X,X,4, X,5,X]
 ];

Integer num_solutions(0);

Solver m();
var{int} x[1..n,1..n](m, R);

// int occ[-1..9] = [3,0,1,1,1,1,1,1,1,1,1];
var{int} occ[-1..9](m,0..3);
int occ_count[-1..9] = [3,0,1,1,1,1,1,1,1,1,1];

exploreall {

  // get the hints
  forall(i in 1..n) {
    forall(j in 1..n) {
      int c = data[i,j];
      if (c == P) {
        cout << "P";
      } else {
        cout << data[i,j];
      }
      if (c != 0) {
        m.post(x[i,j] == data[i,j]);
      }
      cout << " ";
    }
    cout << endl;
   }

  forall(i in 1..n, j in 1..n) {
    m.post(x[i,j] != 0);
  }

  forall(i in -1..9) {
    m.post(occ[i] == occ_count[i]);
  }  
  
  // rows
  forall(i in 1..n) {
    m.post(cardinality(occ, all(j in 1..n) x[i,j])); 
  }
    
  // columns
  forall(j in 1..n) {
    m.post(cardinality(occ, all(i in 1..n) x[i,j]));
  }

  // regions
  forall(r in 1..num_regions) {
    m.post(cardinality(occ, all(i in regions[r].p) x[i.row,i.col])); 

  }

} using {

  // reversing i and j gives faster solution
  forall(i in 1..n, j in 1..n: !x[i,j].bound()) {
    tryall(v in V : x[i,j].memberOf(v)) by(v) 
      m.label(x[i,j], v);
    onFailure
      m.diff(x[i,j], v);
  }

  int t1 = System.getCPUTime();
  cout << "time:      " << (t1-t0) << endl;
  cout << "#choices = " << m.getNChoice() << endl;
  cout << "#fail    = " << m.getNFail() << endl;
  cout << "#propag  = " << m.getNPropag() << endl;
  

  num_solutions++;

   forall(i in 1..n) {
     forall(j in 1..n) {
       int c = x[i,j];
       if (c == P) {
         cout << "P";
       } else {
         cout << x[i,j];
       }
       cout << " ";
     }
     cout << endl;
   }
}

cout << "\nnum_solutions: " << num_solutions << endl;
cout << endl << endl;

int t1 = System.getCPUTime();
cout << "time:      " << (t1-t0) << endl;
cout << "#choices = " << m.getNChoice() << endl;
cout << "#fail    = " << m.getNFail() << endl;
cout << "#propag  = " << m.getNPropag() << endl;

March 07, 2010

New book: Antoni Niederliński: Programowanie w logice z ograniczeniami ("Programming in Logic with Constraints")

Some weeks ago Antoni Niederliński wrote to the ECLiPSe mailing list about his new book (in Polish) about constraint logic programming with ECLipSe: Programowanie w logice z ograniczeniami ("Programming in Logic with Constraints"), PKJS, Gliwice 2010, 308 pages. ISBN: 9788360716953.

The author was kind enough to send the book to me, and I received it yesterday. Since I don't understand Polish, I have so far just browsed the book and tried some of the examples. It is a challenge to read code where all the variable names and comments are in an unknown language, but many of them are standard examples or variations.

It is a beautiful book with nice color plates of the problems' solutions as well as of the principles of constraints, such as propagators, etc. There are also many examples that seem to be well explained in the text, and there are 69 accompanying ECLiPSe models which can be downloaded (see below). It would be nice if this book were translated into English; it would be a welcome addition to other introductory books on constraint (logic) programming, since it contains many different examples with explanations. However, I don't know if the author has any such plans.

The book is presented on the (Polish) site www.pwlzo.pl. Translated to English by Google Translate: Logic programming with constraints - A gentle introduction to ECLiPSe:

[N]ew book: Antoni Niederliński, "Programming in Logic with Constraints. A Gentle Introduction to ECLiPSe," PKJS, Gliwice 2010, 308 + XVIII pages. The book contains:
* an accessible presentation of the basics of logic programming with constraints (CLP) using the free ECLiPSe Constraint Programming System, available under the Cisco-style Mozilla Public License
* descriptions and examples of the basic methods of CLP, with an emphasis on their intuitive understanding and without theoretical justification
* a series of commented sample programs of increasing difficulty.

The book is intended for all those who want to quickly begin to find feasible and optimal solutions for combinatorial and continuous problems using the mechanisms of CLP.

The contents of the book are Google translated here. The translation is slightly edited, and I have not been able to find translations for some of the words.

Contents book "Programming in logic with constraints. Gentle introduction to the Eclipse"

Introduction, i
0.1 What is the book?, i
0.2 What is logic programming with constraints?, ii
0.3 Programming in logic with constraints and artificial intelligence, v
0.4 Programming in logic with constraints and knowledge engineering, vii
0.5 Programming in logic with constraints and operational research, viii
0.6 How was the book written?, ix
0.7 What does the book contain?, ix
0.8 How to use the book?, xiii
0.9 An inconvenience of Eclipse, xvi
0.10 Credits, xviii
1 In the beginning was Prolog, 1
1.1 Elements of Prolog, 1
1.2 Setting up a 3-element set, 8
1.2.1 Setting up by complete enumeration, 9
1.2.2 Setting up by depth-first search with standard backtracking, 9
1.2.3 Optimal setting, 15
1.3 Golfers, 18
1.4 Three balls, 21
1.5 Who killed him?, 24
1.6 Hetmani [n-queens problem] - complete enumeration, 26
1.7 Hetmani [n-queens problem] - depth-first search with standard backtracking, 28
1.8 Examination - depth-first search with standard backtracking, 31
1.9 Vicious circles in Prolog, 36
1.10 How to become his own grandfather?, 37
1.11 Maze, 39
1.12 Minefield, 42
1.13 The man-wolf-goat-cabbage, 46
1.14 Missionaries and cannibals, 49
1.15 Towers of Hanoi, 54
1.16 Decanting [3-jugs problem], 57
2 CLP with elementary constraints for feasible solutions, 63
2.1 Elementary constraints and variable domains, 63
2.2 What distinguishes CLP languages from Prolog?, 63
2.2.1 The n-queens problem in CLP, 64
2.2.2 Search and propagation with the Forward Checking method, 65
2.2.3 Search and propagation with the Forward Checking + Looking Ahead method, 68
2.3 Heuristic search, 69
2.4 Consistency techniques, 72
2.5 Constraint propagation yields a failure, 73
2.6 Constraint propagation yields a solution, 76
2.7 Who was with whom? - Propagation, 81
2.8 Students and languages - propagation, 83
2.9 Propagation of exploration: three equations in whole numbers, 89
2.10 Golfers again, 91
2.11 The guards - search and propagation, 93
2.12 Examination - search and propagation, 94
2.13 Hetman [n-queens] - search and propagation, 96
2.14 Setting - search and propagation, 97
2.15 Etosowcy, undercover collaborators and the opposition, 99
3.1 Declarations module, 103
3.2 The constraint 'alldifferent(?List)', 104
3.3 The constraint 'element(?Index, ++List, ?Value)', 106
3.4 Send More Money, 106
3.5 Who was with whom?, 107
3.6 Golfers once again, 110
3.7 Three balls once again, 113
3.8 Hetman [n-queens], 116
3.9 Five halls, 117
3.10 The ten classrooms, 120
3.11 All for All, 126
3.12 Seven machines - seven operations, 131
3.13 Three machines - three of five operations, 134
3.14 Three machines - five operations, 135
3.15 Using the data, 138
3.15.1 Structures and arrays, 138
3.15.2 Recursion and iteration, 141
3.16 Scalar product, 147
3.17 Hetman [n-queens] again, 148
3.18 Sudoku, 149
3.19 Hetman [n-queens] again, 152
4 CLP with elementary constraints for optimal solutions, 153
4.1 General, 153
4.2 Improved Branch and Bound, 154
4.2.1 Optimal placement of commanders - standard Branch and Bound, 155
4.2.2 Optimal placement of commanders - Branch and Bound + Forward Checking, 156
4.2.3 Optimal placement of commanders - Branch and Bound + Forward Checking + Looking Ahead, 157
4.3 The fundamental predicates, 158
4.3.1 The predicate 'bb_min()', 158
4.3.2 The predicate 'search()', 160
4.4 Setting the optimal, 162
4.5 Optimizing the load of seven machines, 165
4.6 Plan of the transport of coal 1, 168
4.7 Plan of the transport of coal 2, 171
4.8 Plan of the transport of coal 3, 173
4.9 Coloring maps, 176
4.10 knapsack problem 1, 177
4.11 Reified constraints, 180
4.12 Restrictions on the harvest, 181
4.13 knapsack problem 2, 185
4.14 Wholesale and consumers - 1, 186
4.15 Wholesale and consumers - 2, 191
4.16 Elementary scheduling, 194
4.16.1 Resource constraints - the crew, 195
4.16.2 The splendors and miseries of optimization, 199
4.16.3 Precedence constraints - building a house, 202
4.16.4 Disjunctive constraints - limited resources, 206
4.17 Optimization constraints and contradictions - photo, 209
4.18 Appointing a parliamentary committee, 213
4.19 We are fighting for even udeszczowienie, 216
4.20 How to cut optimally?, 220
4.21 Send Most Money, 222
5 CLP with global constraints for optimal solutions, 225
5.1 The constraint 'cumulative()', 225
5.2 Cumulative scheduling 1, 227
5.3 Cumulative scheduling 2, 228
5.4 The constraint 'disjunctive()', 230
5.5 Disjunctive scheduling 1, 231
5.6 We read newspapers 1, 232
5.7 We read newspapers 2, 238
5.8 We read newspapers 3, 242
5.9 Mount bicycles, 247
5.10 Unloading and loading a ship, 262
5.11 A very difficult example - the MT10 benchmark, 269
6 CLP for continuous variables, 287
6.1 The curse and blessing of compound interest, 288
6.1.1 On retirement as a millionaire - 1, 289
6.1.2 On retirement as a millionaire - 2, 290
6.1.3 Ah, those mortgages!, 292
6.2 Wholesalers - suppliers, 293
6.3 Mixing oils, 297
6.4 How do I earn and not narobić?, 299

The examples can be downloaded here.

End note

The book contains at least one reference to my own ECLiPSe page (attributed as "Constraint Programming Blog" in the reference section), which is quite fun since it is the first time my constraint programming blog/sites have been referenced in a book. The example that starts on page 222 is attributed to my send_most_money.ecl model.

Here is the original text in Polish, as far as I have been able to type the accents and diacritics:


Popularna łamigłówka Send More Money (patrz rozdzial 3.4) ma rowniez swą wersje optymalizacyjna o nazwie Send Most Money, ktorej celem jest maksymalizacja wartosci zmiennej Money. Odpowiedni program 4_20_smm.ecl, zaczerpnięty z witryny [Kjellerstrand-10], jest postaci:

Google Translate translates the text as:

The popular puzzle Send More Money (see Chapter 3.4) also has its version of optimization, called Send Money Bridge, whose objective is to maximize the value of the variable Money. 4_20_smm.ecl appropriate program, taken from the site [Kjellerstrand-10], is of the form:

...
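For readers without ECLiPSe at hand, the optimization version of the puzzle can be checked with plain brute force. The sketch below is not the book's program nor my send_most_money.ecl model - a CLP solver would use constraint propagation and branch-and-bound instead - it simply enumerates digit assignments in Python to confirm the maximal MONEY:

```python
from itertools import permutations

def send_most_money():
    """Brute-force SEND + MOST = MONEY, maximizing MONEY.

    The letters S, E, N, D, M, O, T, Y take pairwise distinct digits,
    with S and M nonzero. Returns the best MONEY value and the list
    of (SEND, MOST) pairs that achieve it."""
    best, witnesses = 0, []
    for s, e, n, d, m, o, t, y in permutations(range(10), 8):
        if s == 0 or m == 0:
            continue
        send = 1000 * s + 100 * e + 10 * n + d
        most = 1000 * m + 100 * o + 10 * s + t
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + most == money:
            if money > best:
                best, witnesses = money, [(send, most)]
            elif money == best:
                witnesses.append((send, most))
    return best, witnesses

if __name__ == "__main__":
    best, witnesses = send_most_money()
    print(best, witnesses)   # maximal MONEY and the sums reaching it
```

The maximal value of MONEY is 10876, reached by more than one assignment (e.g. 9784 + 1092 = 10876), which is why the optimization version asks for all optimal solutions.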

March 03, 2010

Survey of Nonogram Solvers updated: one addition is a JaCoP solver

Survey of Paint-by-Number Puzzle Solvers by Jan Wolter has been updated. One interesting addition is Ben Raziel's JaCoP solver:


[from Ben Raziel:]
"""
We use JaCoP (a free java CSP solving library) to do the line solving for us. For searching, we use a modified version of JaCoP's DFS search. I employed your probing approach and tweaked it a little in order to achieve better performance.

The basic idea is to probe more cells, in order to reach a contradiction - which means we don't have to guess our next move (since a contradiction tells us what color the current cell is). The probes are ordered into "levels": each level contains cells where a contradiction is more likely to be found - which minimizes the number of cells that we have to probe until a contradiction is reached.
"""
The solver is not currently publicly available, but the author plans to make it available once this is cleared with the university.
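The probing scheme described above is essentially singleton consistency: tentatively fix a cell to one of its candidate values, propagate, and if a contradiction (a domain wipe-out) follows, that value can be eliminated without any guessing. Since the solver itself is not available, here is a generic Python sketch of the idea on a toy CSP with AC-3-style propagation; all names and the toy constraints are my own illustration, not Ben Raziel's code:

```python
from itertools import permutations

def revise(domains, x, y, rel):
    """Drop values of x that have no supporting value in y's domain."""
    pruned = {vx for vx in domains[x]
              if not any(rel(vx, vy) for vy in domains[y])}
    domains[x] -= pruned
    return bool(pruned)

def ac3(domains, arcs):
    """AC-3-style fixpoint propagation; False means contradiction."""
    queue = list(arcs)
    while queue:
        x, y, rel = queue.pop()
        if revise(domains, x, y, rel):
            if not domains[x]:
                return False          # domain wipe-out: contradiction
            queue.extend(a for a in arcs if a[1] == x)
    return True

def probe(domains, arcs, var, val):
    """Contradiction probing: assume var = val on a copy and propagate.
    If propagation fails, val is provably wrong and is removed for good."""
    trial = {v: set(d) for v, d in domains.items()}
    trial[var] = {val}
    if not ac3(trial, arcs):
        domains[var].discard(val)
        return True                   # a deduction, no guessing needed
    return False

# Toy pigeonhole CSP: three pairwise-different variables, two values.
# Plain AC-3 sees no contradiction, but probing exposes it.
ne = lambda a, b: a != b
doms = {v: {1, 2} for v in "xyz"}
arcs = [(a, b, ne) for a, b in permutations("xyz", 2)]
assert ac3({v: set(d) for v, d in doms.items()}, arcs)  # AC alone: looks fine
for val in (1, 2):
    probe(doms, arcs, "x", val)
print(doms["x"])   # empty set: probing proved infeasibility
```

Ordering the probes into "levels", as the quote describes, then amounts to trying the cells where such wipe-outs are most likely first, so that fewer probes are wasted before a deduction is found.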

From the summary of this solver:

Assessment: Very good at hard puzzles, a bit slow at easy ones.

...

The run times on simple problems are not spectacular. Java is an interpreted language, and thus, naturally, rather slow. It is furthermore also built on top of a general solving library, JaCoP, and those typically add some overhead. But my suspicion is that part of this is also a basic design trade off - the authors were focusing more on solving hard puzzles fast than on solving easy puzzles fast.

But this really works very well on the harder puzzles. It solves every puzzle in the full 2,491 webpbn puzzle data set in under 15 minutes and only the "Lion" puzzle, which most solvers can't solve at all, takes more than 5 minutes. No other solver tested has accomplished this.

Of course, this solver, like pbnsolve, was trained on the webpbn data set, which certainly gives it an advantage when being tested against that data set. In fact the authors had the full dataset available from early in their development cycle, which is more than can be said for webpbn.

There are also other updates, such as general reflections about the state of Nonogram solvers.