August 31, 2010

Nontransitive dice, Ormat game, 17x17 challenge

Here are some new models from the last few weeks, along with some older ones not mentioned before. (I have also spent some time with Numberjack, a Python-based solver, and will blog about it later on.)

Nontransitive dice

MiniZinc: nontransitive_dice.mzn

(Sidenote: This was triggered by a problem in Numberjack's Tutorial.)

Nontransitive dice are presented at Wikipedia as:
A set of nontransitive dice is a set of dice for which the relation "is more likely to roll a higher number" is not transitive. See also intransitivity. This situation is similar to that in the game Rock, Paper, Scissors, in which each element has an advantage over one choice and a disadvantage to the other.
And then gives the following example:
Consider a set of three dice, A, B and C such that
* die A has sides {2,2,4,4,9,9},
* die B has sides {1,1,6,6,8,8}, and
* die C has sides {3,3,5,5,7,7}.

* the probability that A rolls a higher number than B is 5/9 (55.55 %),
* the probability that B rolls a higher number than C is 5/9, and
* the probability that C rolls a higher number than A is 5/9.

Thus A is more likely to roll a higher number than B, B is more likely to roll a higher number than C, and C is more likely to roll a higher number than A. This shows that the relation "is more likely to roll a higher number" is not transitive with these dice, and so we say this is a set of nontransitive dice.
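The probabilities quoted above are easy to verify by brute force. Here is a small Python sketch (not part of the constraint models) that counts, for each of the 36 roll pairs, how often one die beats the other:

```python
from fractions import Fraction
from itertools import product

# the Wikipedia example dice
A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def p_beats(x, y):
    """Probability that a roll of die x is higher than a roll of die y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return Fraction(wins, len(x) * len(y))

print(p_beats(A, B), p_beats(B, C), p_beats(C, A))  # 5/9 5/9 5/9
```

Each of the three matchups indeed gives 20 winning pairs out of 36, i.e. 5/9.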
How easy is it to model such nontransitive dice in a high level constraint programming system such as MiniZinc or Comet? The basic MiniZinc model nontransitive_dice.mzn - with no bells and whistles - is this (somewhat edited):
int: m = 3; % number of dice
int: n = 6; % number of sides of each die

% the dice
array[1..m, 1..n] of var 0..n*2: dice :: is_output;

% the competitions: 
% The last wrap around is the one that breaks 
% the transitivity.
array[0..m-1, 1..2] of var 0..n*n: comp :: is_output;

solve satisfy;

constraint
   % order the number of each die
   forall(d in 1..m) (
       increasing([dice[d,i] | i in 1..n])
   )

   /\ % and now we roll...
   % Number of wins for [A vs B, B vs A]
   forall(d in 0..m-1) (
      comp[d,1] = sum(r1, r2 in 1..n) (
          bool2int(dice[1+(d mod m), r1] >
                   dice[1+((d + 1) mod m), r2]))
      /\
      comp[d,2] = sum(r1, r2 in 1..n) (
          bool2int(dice[1+((d+1) mod m), r1] >
                   dice[1+((d) mod m), r2]))
   )

   /\ % non-transitivity
   % All dice 1..m-1 must beat the follower, 
   % and die m must beat die 1
   forall(d in 0..m-1) (
     comp[d,1] > comp[d,2]
   );
The result of the m competitions is placed in the (m x 2) matrix comp, where for match i the winner's score is in comp[i,1] and the loser's score in comp[i,2]. The last constraint section in the code above ensures that the winner is always in comp[i,1].

In the full model I have added the following:
  • max_val: maximum value of the dice, to be minimized
  • max_win: maximum number of winnings, to be maximized
  • gap and gap_sum: the difference of wins of a specific competition, to be maximized or minimized
  • the example setups from the Wikipedia page
The Comet version includes about the same features as the MiniZinc model. Here is the basic model (also edited for presentational purposes):
import cotfd;
int m = 3; // number of dice
int n = 6; // number of sides of each die
Solver cp();

// the dice
var{int} dice[1..m, 1..n](cp, 1..n*2);

// The competitions: 
var{int} comp[0..m-1, 1..2](cp, 0..n*n);

explore {
  // symmetry breaking: order the number of each die
  forall(d in 1..m) {
    forall(i in 2..n) {,i-1] <= dice[d,i]);
    }
  }

  // and now we roll...
  // Number of wins for [A vs B, B vs A]
  forall(d in 0..m-1) {,1] == 
        sum(r1 in 1..n, r2 in 1..n) (
            dice[1+(d % m), r1] >
            dice[1+((d + 1) % m), r2]));,2] == 
         sum(r1 in 1..n, r2 in 1..n) (
            dice[1+((d+1) % m), r1] >
            dice[1+((d) % m), r2]));
  }

  // non-transitivity:
  // all dice 1..m-1 must beat the follower, and die m must beat die 1
  forall(d in 0..m-1) {,1] > comp[d,2]);
  }

} using {

  labelFF(cp); // label the variables (first-fail)

  cout << "dice:" << endl;
  forall(i in 1..m) {
    forall(j in 1..n) {
      cout << dice[i,j] << " ";
    }
    cout << endl;
  }
  cout << endl;
}
Minimizing max_val in the full Comet model for 3 dice with 6 sides each (with domain 1..12) gives this result, i.e. the maximum value used on the dice is 4:
solution #1
1 1 1 2 4 4
1 1 1 3 3 3 
1 2 2 2 2 2 
1 vs 2: 15 (p:0.416667) 12 (p:0.333333) 
2 vs 3: 18 (p:0.500000) 15 (p:0.416667) 
3 vs 1: 15 (p:0.416667) 13 (p:0.361111) 
max_val: 4
max_win: 18
gap_sum: 8
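As a sanity check, the win counts above can be recomputed outside the constraint model. A small Python sketch (not part of the models):

```python
from itertools import product

# the dice found by the Comet model
dice = [[1, 1, 1, 2, 4, 4],
        [1, 1, 1, 3, 3, 3],
        [1, 2, 2, 2, 2, 2]]

def wins(x, y):
    """Number of the 36 roll pairs where die x shows more than die y."""
    return sum(1 for a, b in product(x, y) if a > b)

# the matches 1 vs 2, 2 vs 3, 3 vs 1 (the last one wraps around)
results = [(wins(dice[d], dice[(d + 1) % 3]),
            wins(dice[(d + 1) % 3], dice[d]))
           for d in range(3)]
print(results)  # [(15, 12), (18, 15), (15, 13)]
```

Every "first" die beats its follower, so the set is indeed nontransitive, and the maximum value used is 4.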
Somewhat related is another dice problem: sicherman_dice.mzn.

17x17 challenge (not solved, though)

Last November, William Gasarch published a challenge in The 17x17 challenge. Worth $289.00. This is not a joke. Also, see bit-player's The 17×17 challenge, and O.R. by the Beach's Let’s Join O.R. Forces to Crack the 17×17 Challenge.

The objective is to create a 17x17 matrix with no monochromatic rectangles, i.e. no four cells that form the corners of a (sub)rectangle of the matrix may all have the same color (number). Here's a solution of the 5x5x3 problem, i.e. a 5 x 5 matrix with 3 colors (1..3):
1 3 3 2 1 
3 1 2 2 1 
3 2 1 2 1 
2 2 2 1 1 
1 1 1 1 2 
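The "no monochromatic rectangle" condition is simple to state procedurally. This Python checker (a restatement of the rule, not one of the models) verifies the 5x5x3 solution above:

```python
from itertools import combinations

def rectangle_free(grid):
    """True if no four cells forming the corners of an axis-aligned
       rectangle all have the same color."""
    rows = range(len(grid))
    cols = range(len(grid[0]))
    for r1, r2 in combinations(rows, 2):
        for c1, c2 in combinations(cols, 2):
            corners = {grid[r1][c1], grid[r1][c2],
                       grid[r2][c1], grid[r2][c2]}
            if len(corners) == 1:  # all four corners have the same color
                return False
    return True

solution_5x5x3 = [[1, 3, 3, 2, 1],
                  [3, 1, 2, 2, 1],
                  [3, 2, 1, 2, 1],
                  [2, 2, 2, 1, 1],
                  [1, 1, 1, 1, 2]]
print(rectangle_free(solution_5x5x3))  # True
```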
When I first saw this problem in December, I tried some models but didn't pursue it further. However, when Karsten Konrad published some OPL models, I picked it up again. We also discussed this problem to see what joining forces could do.

Here is my take in MiniZinc: 17_b.mzn. It is inspired by (or rather a translation of) Karsten Konrad's OPL model from his blog post Let’s do real CP: forbiddenAssignment. Karsten had earlier posted two OPL models as well. However, 17x17x4 is very hard, and - as I understand it - no one has yet cracked it.

Here are three MiniZinc models with some different approaches. A Comet version is mentioned below.

This is a translation of Karsten Konrad's OPL model in Let’s do real CP: forbiddenAssignment, which uses a forbiddenAssignments constraint. However, MiniZinc doesn't have this constraint as a built-in, so I had to roll my own, here called extensional_conflict. forbidden is a table of forbidden combinations in the matrix. For 4 colors it looks like this:
1 1 1 1
2 2 2 2
3 3 3 3
4 4 4 4
i.e. there can be no rectangle whose four corners all have the same value. (If you now start to think about the global cardinality constraint, please read on.) Here is the complete model:
int: NbColumns = 10;
int: NbRows = 10;
int: NbColors = 4;
set of int: Columns = 1..NbColumns;
set of int: Rows = 1..NbRows;
set of int: Colors = 1..NbColors;

array[Rows,Columns] of var Colors: space :: is_output;
array[Colors, 1..4] of int: forbidden = 
   array2d(Colors, 1..4,[ i | i in Colors, j in 1..4]);

% The pattern x must not occur in the table 'table'.
predicate extensional_conflict(array[int, int] of var int: table, 
                               array[int] of var int: x) =
   not exists(pattern in index_set_1of2(table)) (
      forall(j in index_set_2of2(table)) (
           x[j] = table[pattern, j]
      )
   );

solve :: int_search(
   [space[i,j] | i in Rows, j in Columns], 
   first_fail, indomain_min, complete) satisfy;

constraint
  space[1,1] = 1 /\
  space[NbRows,NbColumns] = 2 /\
  forall(r in Rows, r2 in 1..r-1, c in Columns, c2 in 1..c-1) (
     extensional_conflict(forbidden,
        [space[r,c], space[r2,c], space[r,c2], space[r2,c2]])
  );
This is almost exactly the same as the model above, except that it has another implementation of extensional_conflict, a kind of MIP version:
predicate extensional_conflict(array[int, int] of var int: table, 
                               array[int] of var int: x) =
 forall(pattern in index_set_1of2(table)) (
     sum(j in index_set_2of2(table)) ( 
           bool2int(x[j] = table[pattern, j])
     ) < 4
 );
This third version uses the global cardinality constraint instead of forbidden assignments/extensional_conflict. We create a temporary array (gcc) and then check that no count reaches 4 (i.e. the number of occurrences of each color among the four corners must be less than 4). Also, I skipped the symmetry breaking that assigns the corner values in the matrix space (it is commented out in the model).
constraint
  forall(r in Rows, r2 in 1..r-1, c in Columns, c2 in 1..c-1) (
    let {
       array[1..4] of var 0..4: gcc
    } in
       global_cardinality(
          [space[r,c], space[r2,c], space[r,c2], space[r2,c2]], 
          gcc)
       /\
       forall(i in 1..4) (  gcc[i] < 4  )
  );
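The idea behind this gcc variant - count the colors among the four corners of a rectangle and require every count to stay below 4 - can be mimicked in a few lines of Python (a sketch for illustration, not part of the model):

```python
from collections import Counter

def corners_ok(corners):
    """gcc-style test: no single color occupies all 4 corners."""
    return max(Counter(corners).values()) < 4

print(corners_ok([1, 1, 2, 1]), corners_ok([3, 3, 3, 3]))  # True False
```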
As you can see, these are rather simple (a.k.a. naive) models. I suspect that there are better search strategies than the ones I tested; for example, Gecode's size_afc_max with indomain_random is one of the better:
solve :: int_search(
         [space[i,j] | i in Rows, j in Columns], 
         size_afc_max, indomain_random, complete) satisfy;
However, none of these models could find a solution of the original 17x17x4 problem. The longest run was 36 hours on my new 8-core Linux (Ubuntu) machine with 12 Gb RAM, but no luck.

Here is a solution of the 17x17x5 (i.e. 5 colors), using the LazyFD solver (it took 1:19 minutes), c.f. Karsten's Improved Model for Coloring Problem.
1 5 1 5 1 5 3 2 5 5 4 4 2 3 1 3 2 
5 1 4 3 5 5 2 3 2 2 3 2 4 1 4 4 4 
3 2 2 2 1 3 2 4 1 3 5 3 4 4 2 5 5 
2 2 3 3 5 4 4 2 5 4 4 1 3 1 1 1 5 
4 4 5 3 1 4 1 1 3 5 2 2 5 3 2 5 3 
5 1 3 1 1 3 3 2 2 1 4 1 5 4 2 4 5 
4 2 4 3 5 3 5 1 2 4 2 5 2 4 1 2 1 
2 4 2 4 1 5 4 3 4 2 5 3 4 1 5 2 3 
4 3 5 2 1 3 4 3 5 2 5 5 3 3 4 1 2 
1 5 3 1 2 1 4 1 3 2 3 5 2 4 2 5 4 
2 1 3 5 2 1 5 3 2 3 5 4 5 2 4 1 1 
5 3 5 4 3 4 3 2 1 3 3 2 1 2 1 2 4 
4 3 3 4 4 1 5 5 1 5 4 1 2 2 5 3 3 
5 4 4 1 3 2 1 4 2 5 4 5 3 5 3 1 3 
2 4 1 3 4 2 1 5 1 3 1 4 5 5 2 4 1 
3 3 5 2 4 2 5 4 3 1 1 2 1 5 3 1 4 
1 5 2 3 4 2 1 5 4 4 5 1 1 2 3 3 2 
I also wrote a Comet model, and experimented with different cardinality constraints (cardinality, atmost, a decomposition of forbiddenAssignments) and different labelings.

Ormat game

Ormat game is yet another grid problem. From bit-player: The Ormat Game (the nice pictures have been replaced with 0/1 matrices below):
Here’s the deal. I’m going to give you a square grid, with some of the cells colored and others possibly left blank. We’ll call this a template. Perhaps the grid will be one of these 3×3 templates:
1 0 0     1 1 1     1 1 1
0 1 1     1 1 1     1 1 1
0 1 1     1 1 1     1 1 0
 (1)       (2)       (3)
You have a supply of transparent plastic overlays that match the grid in size and shape and that also bear patterns of black dots:
1 0 0     1 0 0     0 1 0
0 1 0     0 0 1     1 0 0
0 0 1     0 1 0     0 0 1
 (a)       (b)       (c)

0 0 1     0 1 0     0 0 1
1 0 0     0 0 1     0 1 0
0 1 0     1 0 0     1 0 0
 (d)       (e)       (f)
Note that each of these patterns has exactly three dots, with one dot in each row and each column. The six overlays shown are the only 3×3 grids that have this property.

Your task is to assemble a subset of the overlays and lay them on the template in such a way that dots cover all the colored squares but none of the blank squares. You are welcome to superimpose multiple dots on any colored square, but overall you want to use as few overlays as possible. To make things interesting, I’ll suggest a wager. I’ll pay you $3 for a correct covering of a 3×3 template, but you have to pay me $1 for each overlay you use. Is this a good bet?
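Before modeling this as a constraint problem, the 3×3 case is small enough to solve by brute force. The Python sketch below (hypothetical helper names, not one of the models) enumerates the n! permutation-matrix overlays and searches for the smallest subset whose union equals a template. For template (1) above the answer is 2 overlays, so the wager pays $3 − $2 = $1:

```python
from itertools import permutations, combinations

n = 3
# all n! overlays: the permutation matrices of size n x n
overlays = [[[1 if perm[r] == c else 0 for c in range(n)]
             for r in range(n)]
            for perm in permutations(range(n))]

def min_cover(template):
    """Smallest number of overlays whose union equals the template,
       or None if no subset works."""
    f = len(overlays)
    for k in range(1, f + 1):
        for subset in combinations(range(f), k):
            union = [[max(overlays[o][r][c] for o in subset)
                      for c in range(n)] for r in range(n)]
            if union == template:
                return k
    return None

template1 = [[1, 0, 0],
             [0, 1, 1],
             [0, 1, 1]]
print(min_cover(template1))  # 2
```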
This is the first version in MiniZinc: ormat_game.mzn, where the problem instances are included in the model file. As with the 17x17 problem above, this is - in concept, at least - quite simple. The model, sans the overlays, is shown below:
include "globals.mzn"; 
% the number of overlays, n!
int: f :: is_output = product([i | i in 1..n]);

% Which overlay to use.
array[1..f] of var 0..1: x :: is_output;

% number of overlays used (to minimize)
var 1..f: num_overlays :: is_output = sum(x);

% There are n! possible overlays
% They are in a separate .dzn file
%    ormat_game_n3.dzn etc
array[1..f, 1..n,1..n] of 0..1: overlays;

solve :: int_search(
   x, first_fail, indomain_min, complete)
   minimize num_overlays;

constraint
    % if problem has a black cell (=1) then there 
    % must be a selected overlay that has a black cell
    forall(i,j in 1..n) (
        problem[i,j] = 1 -> 
            exists(o in 1..f) (
               x[o]*overlays[o,i,j] = 1 
            )
    )
    /\ % the inverse: wherever a selected overlay 
       %  has a black cell, problem must have 
       %  a black cell
    forall(o in 1..f) (
        x[o] = 1 -> 
           forall(i,j in 1..n) (
              overlays[o,i,j] = 1 -> problem[i,j] = 1
           )
    );

% Problem grid 3
include "ormat_game_n3.dzn";
array[1..n, 1..n] of 0..1: problem = array2d(1..n, 1..n,
  [
    % ... (the 0/1 cells of problem grid 3)
  ]);
The only constraints are:
  • if the problem matrix has a black cell (1), then at least one of the selected overlay matrices must have a 1 in that cell
  • the converse: wherever a selected overlay has a black cell, the problem must have a black cell.
The output and solution:

x: [0, 1, 0, 1, 1, 1]
num_overlays: 4

Overlay #2

Overlay #4

Overlay #5

Overlay #6
Ideally, the overlays should be calculated on the fly, but it was much easier to generate overlay files for each size (using ormat_game_generate.mzn and some post-processing in Perl). These must be included in the model as well. Note: there are n! overlay matrices for problems of size n x n. However, changing the model file each time a new problem is made or tested is not very convenient. It is better to have the problem instances in separate files which include the model and the overlay file of appropriate size. The following problem instances include the general model ormat_game_nodel.mzn and an overlay file. Note: as of writing, there are problems running instances of size 7 on my 64-bit Ubuntu; it works on the 32-bit machine (also Linux), though. This has been reported to the G12 team.

Other new models

Here are some other new models (or not mentioned earlier):

Contiguity using regular

contiguity_regular.mzn: a decomposition of the global constraint contiguity using the regular constraint.

Combinatorial auction

These two MiniZinc models implement a simple combinatorial auction problem (Wikipedia): combinatorial_auction.mzn, and a "talkative" MIP version combinatorial_auction_lp.mzn (translated from Numberjack's Tutorial).

Compare with the Gecode version combinatorial_auction.cpp.

March 30, 2010

Comet version 2.1.0 released

Comet version 2.1.0 is released. It can be downloaded here.

From the release notes (in the distribution):

Major additions since 2.0.1:
  • New Version of Comet Studio IDE
  • New Version of User Manual
  • New Scheduling Resources

Detailed list of improvements:

  • New Version of Comet Studio IDE
  • New Version of User Manual, covering:
    • set variables in CP
    • soft global constraints in CP
    • more aspects of scheduling in CP
    • over-constrained problems
    • concurrency
    • visualization
    • interfacing with databases, XML, C++ and Java

  • General
    • Extended getSnapshot method for set variables
    • switch instruction now works with enumerated types
    • Extended onFailure to work with all types of selectors
    • Made possible to create set of arrays
    • Added copy method for set{string}
    • Added directory browsing support (classes FSDirectory and FSFile on Core module)
    • Added a reset method to the C++ API so that one can parse a Comet model once and solve several times with the same input
    • Added a method isEmpty() for sets
    • Added support for dictionaries of Objects (including string, Integer, Float, Boolean)
    • Allow definition of sets defined over enumerated type
    • Added filename option for seed tracking with -t and -r flags
    • Add the methods getNAN(), getPInf() and getNInf() to return the IEEE double representation for Not-a-number (NaN), +infinity, and -infinity
    • Allows equality testing between two ranges

  • Constraint Programming
    • Reduced cost on soft-cardinality constraints
    • More flexibility on cardinality constraints: not all values from the domains need to be constrained in cardinality
    • Added built-in guarded constraints
    • Added CostRegular and a new propagator for Regular
    • Allowed access to min/max in solution
    • Added the methods: Outcome requires(int val) and Outcome excludes(int val), on var{set{int}}
    • Allow declaration of arrays of var in extension
    • Added method function Solver getCPSolver(expr{int} e) to get the Solver from an expr{int}
    • Extended getSnapshot to var{set{int}} variables
    • Added getValue() method for var{float}
    • Added label(var{set{int}}[]) and getIntSetVariables() methods on Solver

  • Mathematical Programming
    • Allow declaration of float LP variables without an initial lower bound
    • Added Method to manually release the memory of a MIP/LP solver

  • Scheduling
    • New resources:
      • UnarySequence
      • UnaryResourcePool
      • UnarySequencePool
      • DiscreteResourcePool

    • Deprecated AlternativeUnaryResource

  • Visualization
    • New VisualBarplot widget
    • Improved VisualHistogram behavior and functionalities
    • New slider widget: UISlider
    • Added tooltips on VisualText, VisualTextTable, etc
    • Added more methods to VisualActivity:
      • void setRow(int i);
      • void setColor(string c);
      • void setToolTip(string s);

    • Several enhancement to VisualGantt

March 12, 2010

Pi Day Sudoku 2009 - the models (MiniZinc and Comet)

March 14 is Pi Day (π Day), a celebration of Pi (3.14 in the mm.dd date notation). There are a lot of activities for this day. One of these activities is the Pi Day Sudoku.

Almost exactly one year ago, I blogged about the Pi Day Sudoku 2009 problem in two posts. The problem is an extended version of a Sudoku problem:
Rules: Fill in the grid so that each row, column, and jigsaw region contains 1-9 exactly once and π [Pi] three times.


Since it was a competition (closed June 1, 2009), I didn't publish any models of this problem when blogging about it. Now, a year later, it seems to be a good time to publish them. I implemented two versions, one in MiniZinc and one in Comet. Both models use the same approach (the details differ, however). It is the same as for plain Sudoku, with two exceptions:
  • The common approach in plain Sudoku is to use the global constraint alldifferent to state that the values in the rows, columns, and regions must be different. Since there should be 3 occurrences of π in each row, column, and region, this approach doesn't work here. As mentioned in Solving Pi Day Sudoku 2009 with the global cardinality constraint, my first approach was a home-made version of the constraint alldifferent except 0 and Pi, but it was too slow. Instead (via a suggestion from Mikael Zayenz Lagerkvist) I changed to the global constraint global_cardinality, which was much faster.
  • The other difference from the standard approach is that the regions are not 3x3 (or MxM) but "jigsaw regions". It was not especially hard to make this representation (though boring to type in). The constraint for checking the regions is (in MiniZinc):
      /\ % the regions
      forall(i in 1..num_regions) (
        check([x[regions[j,1],regions[j,2]] | j in 1+(i-1)*n..1+((i-1)*n)+11 ])
      )
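The counting logic behind check/global_cardinality is easy to illustrate in Python (a sketch with hypothetical names, not the model itself): each row, column, and jigsaw region of 12 cells must contain the digits 1..9 once each and π three times:

```python
from collections import Counter

PI = -1  # the π marker, as in the MiniZinc model (P = -1)

def check_group(cells):
    """A group of 12 cells is valid if it holds 1..9 exactly once
       each and the π marker exactly three times."""
    counts = Counter(cells)
    return counts[PI] == 3 and all(counts[d] == 1 for d in range(1, 10))

print(check_group([PI, PI, PI] + list(range(1, 10))))  # True
```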
I will not publish the solution to the puzzle here, since I gather that there are a lot of people out there who want to solve it on their own. And there is at least one valid solution out there.


This was a fun problem, especially since I learned some new things by implementing the models. As a constraint programming challenge it was quite a bit harder than this year's puzzle, Pi Day 2010:
Rules: Fill in the grid so that each row, column, and block contains 1-9 exactly once. This puzzle only has 18 clues! That is conjectured to be the least number of clues that a unique-solution rotationally symmetric puzzle can have. To celebrate Pi Day, the given clues are the first 18 digits of π = 3.14159265358979323...
[Yes, I have implemented a MiniZinc model for this problem as well; it is a standard Sudoku problem. No, I will not publish the model or a solution until the deadline, June 1, 2010.]

For more about Pi Day Sudoku 2010, see the blog 360: Pi Day Sudoku is back.

Also, see the following models, which implement Killer Sudoku using the same approach as Pi Day Sudoku 2009:

The MiniZinc code

Here is the MiniZinc model (sudoku_pi.mzn), slightly edited.
include "globals.mzn";
int: n = 12;
int: X = 0;  % the unknown
int: P = -1; % π (Pi)

predicate check(array[int] of var int: x) =
   global_cardinality(x, occ); % :: domain

array[1..n, 1..n] of var -1..9: x :: is_output;
array[-1..9] of 0..3: occ = array1d(-1..9, [3, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]);
array[1..11] of 0..3: occ2 = [3, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1];

% solve satisfy;
solve :: int_search([x[i,j] | i,j in 1..n], first_fail, indomain_min, complete) satisfy;


constraint
  % copy the hints
  forall(i in 1..n, j in 1..n) (
      x[i,j] != 0
      /\
      if dat[i,j] != X then
        x[i,j] = dat[i,j]
      else
        true
      endif
  )

  /\ % rows
  forall(i in 1..n) (
    check([x[i,j] | j in 1..n]) 
  )

  /\ % columns
  forall(j in 1..n) (
    check([x[i,j] | i in 1..n])
  )

  /\ % the regions
  forall(i in 1..num_regions) (
    check([x[regions[j,1],regions[j,2]] | j in 1+(i-1)*n..1+((i-1)*n)+11 ])
  );

output 
[ show(occ) ++ "\n"] ++
[
  if j = 1 then "\n" else " " endif ++
    show(x[i,j])
  | i in 1..n, j in 1..n
] ++ ["\n"];

% data
array[1..n,1..n] of int: dat = array2d(1..n, 1..n,
[
 4,9,7, P,5,X,X,X,X, X,X,X,
 X,P,X, 8,X,X,9,6,1, 5,2,X,
 X,8,X, 1,X,X,X,P,X, 7,X,X,
 X,X,X, X,X,X,X,P,X, 4,X,X,
 5,3,9, 6,X,X,X,X,X, X,X,X,

 9,4,X, P,P,P,7,X,X, X,X,X,
 X,X,X, X,X,6,2,5,P, X,7,4,
 X,X,X, X,X,X,X,X,P, P,3,8,
 X,7,8, 4,6,9,X,X,X, X,X,X,

 X,X,3, X,P,X,X,4,7, 1,6,9,
 X,X,4, X,1,X,X,X,6, X,P,X,
 X,X,X, X,X,X,X,X,4, X,5,X
]);

% The regions
int: num_regions = 12;
array[1..num_regions*12, 1..2] of int: regions  = array2d(1..num_regions*12, 1..2,
  % Upper left dark green
  1,1  , 1,2  , 1,3  , 
  2,1  , 2,2  , 2,3  , 
  3,1  , 3,2  , 
  4,1  , 4,2  ,  
  5,1  , 5,2  , 
  % Upper mid light dark green
  1,4  ,  1,5  ,  1,6  ,  1,7  ,  1,8  ,  1,9  , 
  2,4  ,  2,5  ,  2,6  ,  2,7  ,  2,8  ,  2,9  , 

  % Upper right green
  1,10  ,  1,11  ,  1,12  , 
  2,10  ,  2,11  ,  2,12  , 
  3,11  ,  3,12  , 
  4,11  ,  4,12  , 
  5,11  ,  5,12   , 

  % Mid upper left "blue"
  3,3  ,  3,4  , 3,5  ,  3,6  , 
  4,3  ,  4,4  , 4,5  ,  4,6  , 
  5,3  ,  5,4  , 5,5  ,  5,6  , 

  % Mid Upper right blue
  3,7  ,  3,8  ,  3,9  ,  3,10  , 
  4,7  ,  4,8  ,  4,9  ,  4,10  , 
  5,7  ,  5,8  ,  5,9  ,  5,10  , 

  % Mid left green
  6,1  ,  6,2  , 6,3  , 
  7,1  ,  7,2  , 7,3  , 
  8,1  ,  8,2  , 8,3  , 
  9,1  ,  9,2  , 9,3  , 

  % Mid left blue
  6,4  , 6,5  , 
  7,4  , 7,5  , 
  8,4  , 8,5  , 
  9,4  , 9,5  , 
  10,4 , 10,5  , 
  11,4 , 11,5  , 

  % Mid mid green
  6,6  , 6,7  , 
  7,6  , 7,7  , 
  8,6  , 8,7  , 
  9,6  , 9,7  , 
  10,6 , 10,7  , 
  11,6 , 11,7  , 

  % Mid right blue
  6,8  ,  6,9  , 
  7,8  ,  7,9  , 
  8,8  ,  8,9  , 
  9,8  ,  9,9  , 
  10,8 ,  10,9  , 
  11,8 ,  11,9  , 

  % Mid right green
  6,10  ,  6,11  ,  6,12  , 
  7,10  ,  7,11  ,  7,12  , 
  8,10  ,  8,11  ,  8,12  , 
  9,10  ,  9,11  ,  9,12  , 

  % Lower left dark green
  10,1  , 10,2  ,  10,3  , 
  11,1  , 11,2  ,  11,3  , 
  12,1  , 12,2  ,  12,3  , 12,4  , 12,5  ,  12,6  , 

  % Lower right dark green
  10,10  ,  10,11  , 10,12  , 
  11,10  ,  11,11  , 11,12  , 
  12,7   ,  12,8   ,  12,9  , 12,10  , 12,11  ,  12,12  
]);

The Comet code

The Comet model uses the same principles as the MiniZinc model. However, the representation of the regions is different: instead of a matrix I use a more object-oriented approach with two tuples for the structures. For some reason that I have now forgotten, I didn't create a check function in this Comet model; instead I stated the cardinality constraints directly.
import cotfd;
int t0 = System.getCPUTime();

int n = 12;
int P = -1; // Pi
int X = 0; // unknown 
range R = -1..9; 

set{int} V = {-1,1,2,3,4,5,6,7,8,9};

// regions where 1..9 is alldiff + 3 Pi
tuple Pos {
  int row;
  int col;
}

tuple Region {
  set{Pos} p;
}

int num_regions = 12;
Region regions[1..num_regions] = 
[
 // Upper left dark green
 Region({Pos(1,1), Pos(1,2), Pos(1,3),
         Pos(2,1), Pos(2,2), Pos(2,3),
         Pos(3,1), Pos(3,2),
         Pos(4,1), Pos(4,2),
         Pos(5,1), Pos(5,2)}),
 // Upper mid light dark green
 Region({Pos(1,4), Pos(1,5), Pos(1,6), Pos(1,7), Pos(1,8), Pos(1,9),
         Pos(2,4), Pos(2,5), Pos(2,6), Pos(2,7), Pos(2,8), Pos(2,9)}),

 // Upper right green
 Region({Pos(1,10), Pos(1,11), Pos(1,12),
         Pos(2,10), Pos(2,11), Pos(2,12),
         Pos(3,11), Pos(3,12),
         Pos(4,11), Pos(4,12),
         Pos(5,11), Pos(5,12) }),

 // Mid upper left "blue"
 Region({Pos(3,3), Pos(3,4),Pos(3,5), Pos(3,6),
         Pos(4,3), Pos(4,4),Pos(4,5), Pos(4,6),
         Pos(5,3), Pos(5,4),Pos(5,5), Pos(5,6)}),

 // Mid Upper right blue
 Region({Pos(3,7), Pos(3,8), Pos(3,9), Pos(3,10),
        Pos(4,7), Pos(4,8), Pos(4,9), Pos(4,10),
        Pos(5,7), Pos(5,8), Pos(5,9), Pos(5,10)}),

 // Mid left green
 Region({Pos(6,1), Pos(6,2),Pos(6,3),
         Pos(7,1), Pos(7,2),Pos(7,3),
         Pos(8,1), Pos(8,2),Pos(8,3),
         Pos(9,1), Pos(9,2),Pos(9,3)}),

 // Mid left blue
 Region({Pos(6,4), Pos(6,5),
         Pos(7,4), Pos(7,5),
         Pos(8,4), Pos(8,5),
         Pos(9,4), Pos(9,5),
         Pos(10,4), Pos(10,5),
         Pos(11,4), Pos(11,5)}),

 // Mid mid green
 Region({Pos(6,6), Pos(6,7),
         Pos(7,6), Pos(7,7),
         Pos(8,6), Pos(8,7),
         Pos(9,6), Pos(9,7),
         Pos(10,6), Pos(10,7),
         Pos(11,6), Pos(11,7)}),
 // Mid right blue
 Region({Pos(6,8), Pos(6,9),
         Pos(7,8), Pos(7,9),
         Pos(8,8), Pos(8,9),
         Pos(9,8), Pos(9,9),
         Pos(10,8), Pos(10,9),
         Pos(11,8), Pos(11,9)}),

 // Mid right green
 Region({Pos(6,10), Pos(6,11), Pos(6,12),
         Pos(7,10), Pos(7,11), Pos(7,12),
         Pos(8,10), Pos(8,11), Pos(8,12),
         Pos(9,10), Pos(9,11), Pos(9,12)}),

 // Lower left dark green
 Region({Pos(10,1),Pos(10,2), Pos(10,3),
         Pos(11,1),Pos(11,2), Pos(11,3),
         Pos(12,1),Pos(12,2), Pos(12,3),Pos(12,4),Pos(12,5), Pos(12,6)}),

 // Lower right dark green
 Region({Pos(10,10), Pos(10,11),Pos(10,12),
         Pos(11,10), Pos(11,11),Pos(11,12),
         Pos(12,7),Pos(12,8), Pos(12,9),Pos(12,10),Pos(12,11), Pos(12,12)})
];


// the hints
int data[1..n,1..n] = 
[
 [4,9,7, P,5,X,X,X,X, X,X,X],
 [X,P,X, 8,X,X,9,6,1, 5,2,X],
 [X,8,X, 1,X,X,X,P,X, 7,X,X],
 [X,X,X, X,X,X,X,P,X, 4,X,X],
 [5,3,9, 6,X,X,X,X,X, X,X,X],

 [9,4,X, P,P,P,7,X,X, X,X,X],
 [X,X,X, X,X,6,2,5,P, X,7,4],
 [X,X,X, X,X,X,X,X,P, P,3,8],
 [X,7,8, 4,6,9,X,X,X, X,X,X],

 [X,X,3, X,P,X,X,4,7, 1,6,9],
 [X,X,4, X,1,X,X,X,6, X,P,X],
 [X,X,X, X,X,X,X,X,4, X,5,X]
];

Integer num_solutions(0);

Solver m();
var{int} x[1..n,1..n](m, R);

// int occ[-1..9] = [3,0,1,1,1,1,1,1,1,1,1];
var{int} occ[-1..9](m,0..3);
int occ_count[-1..9] = [3,0,1,1,1,1,1,1,1,1,1];

exploreall {

  // get the hints
  forall(i in 1..n) {
    forall(j in 1..n) {
      int c = data[i,j];
      if (c == P) {
        cout << "P";
      } else {
        cout << data[i,j];
      }
      if (c != 0) {,j] == data[i,j]);
      }
      cout << " ";
    }
    cout << endl;
  }

  forall(i in 1..n, j in 1..n) {,j] != 0);
  }

  forall(i in -1..9) {] == occ_count[i]);
  }

  // rows
  forall(i in 1..n) {, all(j in 1..n) x[i,j]));
  }

  // columns
  forall(j in 1..n) {, all(i in 1..n) x[i,j]));
  }

  // regions
  forall(r in 1..num_regions) {, all(i in regions[r].p) x[i.row,i.col]));
  }


} using {

  // reversing i and j gives faster solution
  forall(i in 1..n, j in 1..n: !x[i,j].bound()) {
    tryall(v in V : x[i,j].memberOf(v)) by(v) 
      m.label(x[i,j], v);
    onFailure
      m.diff(x[i,j], v);
  }

  num_solutions := num_solutions + 1;

  int t1 = System.getCPUTime();
  cout << "time:      " << (t1-t0) << endl;
  cout << "#choices = " << m.getNChoice() << endl;
  cout << "#fail    = " << m.getNFail() << endl;
  cout << "#propag  = " << m.getNPropag() << endl;

  forall(i in 1..n) {
    forall(j in 1..n) {
      int c = x[i,j];
      if (c == P) {
        cout << "P";
      } else {
        cout << x[i,j];
      }
      cout << " ";
    }
    cout << endl;
  }
}

cout << "\nnum_solutions: " << num_solutions << endl;
cout << endl << endl;

int t1 = System.getCPUTime();
cout << "time:      " << (t1-t0) << endl;
cout << "#choices = " << m.getNChoice() << endl;
cout << "#fail    = " << m.getNFail() << endl;
cout << "#propag  = " << m.getNPropag() << endl;

November 08, 2009

Update on Nonogram: Jan Wolter's Survey and my own new benchmark

Survey of Paint-by-Number Puzzle Solvers

In Some new models, etc I mentioned the great Survey of Paint-by-Number Puzzle Solvers, created by Jan Wolter (also author of the Nonogram solver pbnsolve).

In this survey he included both Gecode's Nonogram solver written by Mikael Lagerkvist as well as my own Nonogram model (with Gecode/FlatZinc).

Since the last update on the former blog post, the following has happened:
  • both our solvers now have the assessment "An amazingly good solver, especially for a simple demo program", and are placed 4, 5, and 6 of the 10 tested systems
  • my Gecode/FlatZinc model has been tested for "Full results"; it came 4th out of 5
  • my Nonogram model with the Lazy FD solver is now included in the "Sample result", in 6th place
It seems that Wolter has come to appreciate constraint programming as a general tool for solving these kinds of combinatorial problems, much for its ease of experimentation, e.g. with labeling strategies and (for the MiniZinc models) changing solvers:

From the analysis of Lagerkvist's Gecode model:
This is especially impressive because the faster solvers are large, complex programs custom designed to solve paint-by-number puzzles. This one is a general purpose solver with just a couple hundred custom lines of code to generate the constraints, run the solver, and print the result. Considering that this is a simple application of a general purpose solving tool rather than hugely complex and specialized special purpose solving tool, this is an astonishingly good result.

Getting really first class search performance usually requires a lot of experimentation with different search strategies. This is awkward and slow to do if you have to implement each new strategies from scratch. I suspect that a tool like Gecode lets you try out lots of different strategies with relatively little coding to implement each one. That probably contributes a lot to getting to better solvers faster.
From the analysis of my MiniZinc model:
If you tried turning this into a fully useful tool rather than a technology demo, with input file parsers and such, it would get a lot bigger, but clearly the constraint programming approach has big advantages, achieving good search results with low development cost.


These two results [Gecode/FlatZinc and LazyFD] highlight the advantage of a solver-independent modeling language like MiniZinc. You can describe your problem once, and then try out a wide variety of different solvers and heuristics without having to code every single one from scratch. You can benefit from the work of the best and the brightest solver designers. It's hard to imagine that this isn't where the future lies for the development of solvers for applications like this.
And later in the Conclusions:
The two constraint-based systems by Lagerkvist and Kjellerstrand come quite close in performance to the dedicated solvers, although both are more in the category of demonstrations of constraint programming than fully developed solving applications. The power of the underlaying search libraries and the ease of experimentation with alternative search heuristics obviously serves them well. I think it very likely that approaches based on these kinds of methods will ultimately prove the most effective.
I think this is an important lesson: before starting to write very specialized tools, first try a general tool like a constraint programming system and see how well it performs.

The Lazy FD solver and the Lion problem

Most of the problems in the Sample Results were solved by some solver within the time limit of 30 minutes. However, one problem stands out as extra hard: the Lion problem. When I tested MiniZinc's Lazy FD solver on my machine, I was very excited that it took just about 2 minutes, and mentioned this to Wolter. He also tested it, but on his 64-bit machine it took 80 minutes to solve (and since this was above the time limit, it is not in the result table). This is how he describes the Lazy FD solver:
But the remarkable thing is that [the Lazy FD solver] solves almost everything. Actually, it even solved the Lion puzzle that no other solver was able to solve, though, on my computer, it took 80 minutes to do it. Now, I didn't allow a lot of other solvers 80 minutes to run, but that's still pretty impressive. (Note that Kjellerstrand got much faster solving times for the Lion than I did. Lagerkvist also reported that his solver could solve it, but I wasn't able to reproduce that result even after 30 CPU hours. I don't know why.)
After some discussion, we came to the conclusion that the difference was probably due to the fact that I used a 32-bit machine (and the 32-bit version of MiniZinc) with 2 Gb memory, while Wolter used a 64-bit machine with 1 Gb memory.

One should also note that all the other solvers were compiled without optimization, including Gecode/FlatZinc; the LazyFD solver, however, does not come with source, so it runs optimized. This may be an unfair advantage for the LazyFD solver.

My own benchmark of the Sample Results

The times in the Sample Results are, as mentioned above, for solvers compiled without optimization. I have now run the same problems on my machine (Linux Mandriva, Intel Dual 3.40GHz, with 2Gb memory), but with the solvers built with standard optimization. All problems were run with a time limit of 10 minutes (compared to Wolter's 30 minutes), searching for 2 solutions, which checks for unique solutions. The last three problems (Karate, Flag, Lion) have multiple solutions, and it is counted as a failure if two solutions were not found within the time limit. I should also note that during the benchmark I was using the machine for other things, such as surfing the web.

The problems
I downloaded the problems from Wolter's Webpbn: Puzzle Export. For copyright reasons I cannot republish these models, but it is easy to download each problem. Select ".DZN" for the MiniZinc files, and "A compiled in C++ format" for Gecode. There is no support for Comet's format, but it is quite easy to convert a .dzn file to Comet's format.

The solvers + labeling strategies
Here is a description of each solver and its labeling strategy:
  • fz, "normal" (column_row)
    MiniZinc model with Gecode/FlatZinc. The usual labeling in nonogram_create_automaton2.mzn, i.e. where the columns are labeled before rows:
    solve :: int_search(
          [x[i,j] | j in 1..cols, i in 1..rows], 
  • fz, "row_column"
    MiniZinc model with Gecode/FlatZinc. Here the order of labeling is reversed, rows are labeled before columns. Model is nonogram_create_automaton2_row_column.mzn
    solve :: int_search(
          [x[i,j] | i in 1..rows, j in 1..cols], 
  • fz, "mixed"
    MiniZinc model with Gecode/FlatZinc: nonogram_create_automaton2_mixed.mzn.
    I have long been satisfied with the "normal" labeling in the MiniZinc model because P200 (the hardest problem I had tested until now) was solved so fast. However, the labeling used in the Comet Nonogram model described in Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second, which is also used in the Gecode model, is somewhat more complicated since it bases the exact labeling on a comparison of the hints for the rows and the columns.

    I decided to try this labeling in MiniZinc as well. However, labeling in MiniZinc is not as flexible as in Comet and Gecode. Instead we have to add a dedicated array for the labeling (called labeling):
    array[1..rows*cols] of var 1..2: labeling;
    and then copy the element in the grid to that array based on the relation between rows and column hints:
          % prepare for the labeling
          labeling = 
               if rows*row_rule_len < cols*col_rule_len then
                    % label on rows first
                    [x[i,j] | i in 1..rows, j in 1..cols]
               else
                    % label on columns first
                    [x[i,j] | j in 1..cols, i in 1..rows]
               endif
          % .... 
    and last, the search is now based just on this labeling array:
    solve :: int_search(
  • jacop, "normal"
    MiniZinc normal model with JaCoP/FlatZinc using the same model as for fz "normal".
  • lazy, satisfy
    Model: nonogram_create_automaton2_mixed.mzn. This uses the MiniZinc LazyFD solver with the search strategy:
    solve satisfy;
    This labeling is recommended by the authors of LazyFD. See MiniZinc: the lazy clause generation solver for more information about this solver.

    Note: The solver in MiniZinc's latest official version (1.0.3) doesn't support set variables. Instead I (as did Jan Wolter) used the latest "Release Of The Day" version (as of 2009-11-02).
  • Comet, normal
    Model: This is the Comet model I described in Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second. No changes have been made.
  • Gecode, normal
    This is the Nonogram model distributed with Gecode version 3.2.1. The labeling is much like the one used in the Comet model, as well as the one in fz "mixed". (In fact, the labeling in the Gecode model was inspired by the labeling in the Comet model.)
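The row/column decision used by the "mixed" labeling above can be sketched outside of MiniZinc. Here is a minimal Python illustration (the helper name and the sample hint lengths are my own, not taken from the models):

```python
def labeling_order(rows, row_rule_len, cols, col_rule_len):
    """Mirror the comparison in the 'mixed' labeling: if the rows carry
    fewer hint entries in total, label row-by-row first, otherwise
    label column-by-column first."""
    if rows * row_rule_len < cols * col_rule_len:
        return "rows_first"
    return "columns_first"

# A 10x20 grid whose row hints have length 3 and column hints length 4:
print(labeling_order(10, 3, 20, 4))  # rows_first, since 10*3 < 20*4
```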

Here are the results. For each model (+ labeling strategy) two values are presented:
  • time (in seconds)
  • number of failures if applicable (the LazyFD solver always returns 0 here).
The result
(The results table is not reproduced in this text.)

Some conclusions, or rather notes

Here are some conclusions (or notes) about the benchmark.
  • The same solver, Gecode/FlatZinc, is here compared with three different labelings. No single labeling is better than the others. I initially had some hopes that the "mixed" labeling would combine the best of the two simpler row/column labelings, but this is not really the case. For example, for Tragic the "row_column" strategy is better than both "normal" and "mixed". I am somewhat tempted to use the "row_column" labeling, but the drawback is that the P200 problem (not included in Wolter's sample problems) takes much longer with this labeling.
  • The same model and labeling is compared across different solvers: Gecode/FlatZinc is faster than JaCoP/FlatZinc on all the problems. For the easier problems this could be explained by the extra startup time of Java for JaCoP, but that is not the complete explanation for the harder problems. Note: Both Gecode/FlatZinc and JaCoP/FlatZinc have dedicated and fast regular constraints (whereas the LazyFD and Comet solvers use a decomposition).
  • The LazyFD solver is the only one that solves all problems (including Lion), but it is somewhat slower on the middle problems than most of the others. This underscores that it is a very interesting solver.
  • It is also interesting to compare the results of the Comet model and Gecode/FlatZinc "mixed", since they use the same labeling principle. However, there are some differences: the MiniZinc model with Gecode/FlatZinc uses a dedicated regular constraint, while Comet uses my own decomposition of the constraint. For the Merka problem the Comet version outperforms the Gecode/FlatZinc version; otherwise they take about the same time (and number of failures).
  • The Light problem: It seemed weird that this problem was solved in almost exactly 10 minutes (the timeout is 10 minutes) by both Gecode/FlatZinc and JaCoP/FlatZinc. The solutions seemed correct, but I was suspicious. Update: Christian Schulte got me on the right track. Here is what happened: the first (unique) solution was found quite quickly and printed, but the solvers could not prove that it was unique, so they timed out. JaCoP/FlatZinc actually printed "TIME-OUT" but I didn't notice that. Case closed: they both FAILED on this test. Thanks, Christian. End update
As noted above, I can only agree with Jan Wolter's comment that the ease of experimenting (for example, changing the labeling or the solver for the FlatZinc solvers) is a very nice feature.

Last word

No benchmark or comparison of (constraint programming) models is really complete without the reference to the article On Benchmarking Constraint Logic Programming Platforms. Response to Fernandez and Hill's "A Comparative Study of Eight Constraint Programming Languages over the Boolean and Finite Domains" by Mark Wallace, Joachim Schimpf, Kish Shen, Warwick Harvey. (Link is to ACM.)

From the Abstract:
... The article analyses some pitfalls in benchmarking, recalling previous published results from benchmarking different kinds of software, and explores some issues in comparative benchmarking of CLP systems.

September 30, 2009

This week's news

This is mostly a consolidation (with some additional information) of my tweets since last time.

The Perl program is used to produce somewhat nicer output from the FlatZinc solvers.

The parameters to the program are now:
  • {tr:from:to:}: translate digit to another digit/character
  • {trvar:var:from:to:}: translate digit to another digit/character for a specific variable var
  • {trtr:from_string:replacement_string:}: translates all digits in from_string to the corresponding digit/character in replacement_string (inspired from the Unix tr command)
  • {trtrvar:var:from_string:replacement_string:}: as trtr but for a specific variable var
  • {nospaces}: don't show space after character
  • {noprintf}: don't try to be clever with printf stuff
Example: for showing a nicer picture of a Nonogram:

flatzinc nonogram_create_automaton2.fzn | "{tr:1: :} {tr:2:#:} {nospaces}"

Here {tr:from:to:} means translate the digit from to the character to, and {nospaces} means that no spaces are shown after each digit/character.

This is now probably better written with trtr:

flatzinc nonogram_create_automaton2.fzn | "{trtr:12: #:} {nospaces}"

Also, see below for some more examples.
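Since {trtr:...} is essentially the Unix tr command, its behavior can be sketched in a few lines of Python (my own illustration, not the actual Perl code; the function name and the nospaces keyword are assumptions):

```python
def trtr(text, from_string, replacement_string, nospaces=False):
    """Translate characters as Unix tr does: each character in
    from_string maps to the character at the same position in
    replacement_string. nospaces first strips the separating spaces."""
    if nospaces:
        text = text.replace(" ", "")
    table = str.maketrans(from_string, replacement_string)
    return text.translate(table)

# Roughly what "{trtr:12: #:} {nospaces}" does to a solver's "1 2 2 1" row:
print(trtr("1 2 2 1", "12", " #", nospaces=True))  # " ## "
```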

MiniZinc/Comet: Nonogram model fixed for "zero length" clues

I discovered that my MiniZinc and Comet models had a little bug in the generation of the automaton for "zero length" clues (i.e. when the row/column is completely empty, just 0's).

The fixed versions are here:
* MiniZinc: nonogram_create_automaton2.mzn
* Comet:

It was quite tricky to debug the generated automata, but I found a fix surprisingly soon. I won't publish the fixes here; suffice it to say that the MiniZinc model is, shall we say, not more beautiful than before.

A test instance for this: T2 (source unknown)
* Comet:
* MiniZinc: nonogram_t2.dzn

T2 has the following two solutions:
   ##   ##  ###
  #  # #  # #  #
  #      #  #  #
  #     #   ###
  #  # #  # #
   ##   ##  #


   ##   ##  ###
  #  # #  # #  #
  #     #   #  #
  #      #  ###
  #  # #  # #
   ##   ##  #

One more Nonogram example

Besides the T2 problem instance mentioned above, there is a new Nonogram example, Gondola, for both Comet and MiniZinc:
* Comet:
* MiniZinc: nonogram_gondola.dzn.
For the curious, here is the solution, using the following command (see above for the part):
$ mzn2fzn -G jacop --data nonogram_gondola.dzn nonogram_create_automaton2.mzn
$ java JaCoP.fz.Fz2jacop -a -s  nonogram_create_automaton2.fzn | "{trtr:12: #:} {nospaces}"
  #####      ######
  ######     #    #        #
     ###  ###########     ###
  ######   # #    #       # #
  #######  # # # ##   #   ###
     ####    #    #  ##   # ####
  #######    #  # # ##    ### #
  #######     #  # ###    # #
     ####     # #  # # #########
     ####   ######## # #
     ####  #     ####  #   ###
     #### # #######    #  #####
     #### # #  ## # ####  #    #
     #### #########      ## # ##
     #### # ###   #      ##    #
     #### # ######       #   # #
     #### ########      ###    #
     ########## ###    ##### ###
      #### # ## ###    #####  ##
      ### #####  ##     ########
      ## ######  ###      #    #
      # ############     #
  ####################   # #
    ## #########################
   ##   ### ####################
  ##     ##### ### ## ## ## ## #
  #   ##  ######################
    ###       ##################
  #        ##

Updated Comet's SONET model

In About 20 more constraint programming models in Comet, Pierre Schaus (one of Comet's developers) commented that my use of tryall outside the using block in the SONET problem was not recommended.

I have updated the model with his suggestion, which uses or instead:
m.post(or(ring in 1..r) (rings[ring,client1] + rings[ring, client2] >= 2));
Thanks for the correction (suggestion), Pierre.

New MiniZinc model: Letter Square problem

letter_square.mzn is (yet another) grid problem. From my comment in the model file:

This problem is from the Swedish book Paul Vaderlind: Vaderlinds nya hjärngympa ('Vaderlind's new brain gymnastics'), page 63ff. Unfortunately, I don't know the origin of this problem.

The objective is to create a matrix where all values in each row/column are different, except for the blanks.

A set of hints is given: the first non-blank value seen from each of the following views (only some of the hints may be given):
- upper row
- lower row
- left column
- right column

    B B
 |A   B C| 
B|  B C A| 
B|B C A  | 
 |C A   B| 
This model codes the hints as follows:
 blank -> 0
 A     -> 1
 B     -> 2
 C     -> 3

row_upper = [0,2,2,0];
row_lower = [3,0,0,0];
col_left  = [0,2,0,0];
col_right = [0,0,0,0];
Note: there are no hints for the right column.
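To make the hint semantics concrete, here is a small Python checker (my own sketch, not part of the MiniZinc model) that verifies a grid against hints coded as above, where hint 0 means "no hint given":

```python
def first_seen(cells):
    """First non-blank (non-zero) value in the given scan order, 0 if none."""
    for v in cells:
        if v != 0:
            return v
    return 0

def check_hints(grid, row_upper, row_lower, col_left, col_right):
    """Check each given hint against the first non-blank value
    seen from that side (a hint of 0 is simply skipped)."""
    n = len(grid)
    for j in range(n):
        col = [grid[i][j] for i in range(n)]
        if row_upper[j] and first_seen(col) != row_upper[j]:
            return False
        if row_lower[j] and first_seen(col[::-1]) != row_lower[j]:
            return False
    for i in range(n):
        if col_left[i] and first_seen(grid[i]) != col_left[i]:
            return False
        if col_right[i] and first_seen(grid[i][::-1]) != col_right[i]:
            return False
    return True

# The example grid above, with blank=0, A=1, B=2, C=3:
grid = [[1, 0, 2, 3],
        [0, 2, 3, 1],
        [2, 3, 1, 0],
        [3, 1, 0, 2]]
print(check_hints(grid, [0,2,2,0], [3,0,0,0], [0,2,0,0], [0,0,0,0]))  # True
```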

Here is yet another opportunity to use (0 is here translated to blank, etc):

solver model.fzn | "{trtr:0123456: ABCDEF:}"

With the following output:
  A B   C
      A B C
  B C   A
    A C   B
  C   B   A
The following problem instances from the book have been created:

July 11, 2009

Comet version 2.0-Beta released

Comet version 2.0-Beta can now be downloaded from Dynadec's Download page.

Here are some of the new things in this version, from the announcement:

Dynadec is happy to announce the pre-release of Comet 2.0. This new version is a major development and here are some of the main new features:

1. New XML library (cotxml) to read and write XML files.
2. New visualization library (qtvisual) using the Qt toolkit on all platforms (Linux, Mac, Windows).
3. New graphical debugger (-gqt command line option) on all platforms.
4. New support for external plugins.
5. New Comet language documentation with code samples.

Please look at the release notes for more details on this new version.

I haven't checked much of the list yet, but I'm very excited about the over-320-page Comet Tutorial, which is really great, and the circa 60 new accompanying examples.

From the Preface of the Comet Tutorial:

This document is a tutorial about Comet, an hybrid optimization system, combining constraint programming, local search, and linear and integer programming. It is also a full object-oriented, garbage collected programming language, featuring some advanced control structures for search and parallel programming.
This tutorial describes the basic functionalities of Comet 2.0 and how Comet can be used to solve a variety of combinatorial optimization applications. It complements the API description which describes the various classes, functions, interfaces, and libraries through a set of HTML pages. This document is constantly evolving to cover more functionalities of the Comet system and to explain how to use Comet on increasingly complex problems. The Comet system itself is constantly evolving and the material will be updated accordingly.
The tutorial is currently divided in four parts: the description of the programming language, constraint programming, local search, and mathematical programming. A fifth part, describing some of the Comet libraries, and a sixth part, describing the graphical and visualization layers, will be added subsequently. Questions about Comet, its uses, and bug reports can be posted at the Dynadec forum at

For more about Comet, for example my own (about 160) Comet models, see My Comet page.

June 03, 2009

Scheduling with the cumulatives constraint in Gecode

One of the few of my learning problems that had not been modeled in Gecode before was the Furniture moving problem (from Marriott & Stuckey: "Programming with Constraints", page 112f). The reason is that Gecode doesn't have the common cumulative constraint, but instead the generalized and extended cumulatives constraint (note the trailing "s"). This cumulatives constraint is defined and explained in the paper A new multi-resource cumulatives constraint with negative heights (ps.Z) by Nicolas Beldiceanu and Mats Carlsson.

Furniture moving

The last couple of days I have read that paper and experimented with the cumulatives constraint in modeling the Furniture moving problem. The Gecode program is here: furniture_moving.cpp.

Some notes
The cumulatives paper shows 8 examples/variants of usage (A..H, which are explained below), and the Furniture moving problem is modeled after the G variant, a covering model of producers/consumers. The main approach is that we have four tasks (consumers, the furniture to move) and three movers (producers). The heights of the consumers are coded as negative, and the movers have height 1 (positive), i.e.
int _durations[] = {   30,    10,  15,   15                };
int _height[] =    {   -2,    -1,  -2,   -2,      1,  1,  1};
int _limit[] =     {0}; // just one machine
Just for fun, I also left the durations of the movers free to see how long they would have to work according to this model. The result was that movers 2 and 3 have to work full time (60 minutes), but the first mover only has to work 10 minutes.
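The producer/consumer encoding can be sanity-checked outside of Gecode. Here is a hedged Python sketch (the start times and the helper function are my own illustration, not taken from the Gecode model's solution) that sums the heights of the active tasks at each time point; with atmost=false and limit 0 the cumulated level must never drop below 0:

```python
def resource_profile(tasks, horizon):
    """tasks: list of (start, duration, height) triples. Returns the
    summed height of the tasks active at each time point 0..horizon-1."""
    profile = [0] * horizon
    for start, dur, height in tasks:
        for t in range(start, start + dur):
            profile[t] += height
    return profile

# Consumers (furniture, negative height) and producers (movers, height 1).
tasks = [
    (0, 30, -2), (30, 10, -1), (30, 15, -2), (45, 15, -2),  # furniture
    (0, 60, 1), (0, 60, 1), (0, 60, 1),                     # three movers
]
profile = resource_profile(tasks, 60)
# The 'at least' reading of cumulatives: level >= 0 at every time point.
print(all(level >= 0 for level in profile))  # True
```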

8 Cumulatives examples

In the paper cited above, Beldiceanu and Carlsson show 8 different variants (A to H) of how to model with the cumulatives constraint. I have tried to implement all these examples in Gecode as faithfully as possible.

The Gecode code is cumulatives_examples.cpp and is fairly general, with a short explanation (cited from the paper) for each variant. For selecting a specific problem instance it takes a command line parameter, from 0 for example A up to 7 for example H.

Below are some comments on each example; as said before, there are more comments in the Gecode code. Please note that Gecode is 0-based, so we number tasks and machines from 0 and onward.

Example A

This is the "plain old" cumulative with one machine.

Example B

cumulative with two machines.

Example C

Note that the atmost parameter of cumulatives is set to false since we have an at least constraint. Also, a dummy task (0) is used here.

Example D

Same as the above, except for the "dummy" task. Also, I have added another cumulatives constraint with atmost=true so all tasks are not "stacked" on top of each other.

Example E

Producer/consumer model. This one was quite tricky to understand at first, but after reading the description a few times I realized its beauty: all producers start at time 0 and produce whatever they produce, and all consumers end at the "last date" (here time 6) and consume whatever they consume.

Example F

Generalization of example E with two machines.

Example G

A "covering problem", also elegantly stated with cumulatives. Here there are two tasks (consumers, tasks 0 and 1) which have negative height, and 4 persons (producers) with positive height 1. The atmost parameter must be false.

It is this variant that the above mentioned Furniture moving problem was modeled after.

Example H

A generalization of Example G with two machines. Here I have hard-coded which machine each task/person should belong to, since otherwise only task 6 would be on machine 1. Also, I added an atmost limit for each machine for esthetic reasons.

Another scheduling example: Building a house

This example is a standard example from OPL: how to build a house with some dependencies of different tasks, e.g. that masonry must be finished before carpentry, painting must be finished before moving in etc.

This problem was modeled in Gecode with cumulatives (as variant A, see above) and was very straightforward to implement, both the cumulatives constraint and the dependency graph. The code is here: building_a_house.cpp.

See also: Some scheduling models in Comet

Comet has great support for scheduling with a special Schedule framework, much like OPL's. Here are my Comet models that use this Schedule framework. The first two models are standard OPL examples. Also, the Comet model is a variant of the Furniture moving problem, using a cumulative constraint implemented as a decomposition.

May 09, 2009

Learning Constraint Programming IV: Logical constraints: Who killed Agatha? revisited

Here is a comparison of how different constraint programming systems implement the logical constraints in the problem Who Killed Agatha?, also known as The Dreadsbury Mansion Murder Mystery. In Learning constraint programming - part II: Modeling with the Element constraint the problem was presented, and it was shown how the systems implement the Element constraint.

Problem formulation from The h1 Tool Suite
Someone in Dreadsbury Mansion killed Aunt Agatha. Agatha, the butler, and Charles live in Dreadsbury Mansion, and are the only ones to live there. A killer always hates, and is no richer than his victim. Charles hates noone that Agatha hates. Agatha hates everybody except the butler. The butler hates everyone not richer than Aunt Agatha. The butler hates everyone whom Agatha hates. Noone hates everyone. Who killed Agatha?
Originally from F. J. Pelletier: Seventy-five problems for testing automatic theorem provers, Journal of Automated Reasoning, 2: 191-216, 1986.

Here we compare how the different constraint programming systems implement the three emphasized conditions in the problem formulation above:
  • the concept of richer
  • Charles hates noone that Agatha hates
  • No one hates everyone
All models define the concepts of hates and richer as two matrices. The matrix declarations are omitted in the code snippets below.


Here are the different models for the Who killed Agatha? problem. JaCoP and Choco each have two versions of how to implement the Element constraint, see the link above. Also, there is no Gecode/R version of this problem.

Defining the concept of richer

First we define the concept of richer, which consists of two parts:
  • No one is richer than him-/herself
  • if i is richer than j then j is not richer than i
This is an antisymmetric relation which is explored somewhat more (with an alternative predicate of the concept) in the MiniZinc model antisymmetric.mzn.

The logical concept used here is equivalence (if and only if).


% define the concept of richer: no one is richer than him-/herself
forall(i in r) (
   richer[i,i] = 0
)
/\ % if i is richer than j then j is not richer than i
forall(i, j in r where i != j) (
   richer[i,j] = 1 <-> richer[j,i] = 0
)
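As a quick cross-check of what this equivalence encodes, here is a brute-force Python sketch (my own, independent of the systems compared here) that counts the valid 0/1 richer matrices for three persons:

```python
from itertools import product

def is_valid_richer(matrix):
    """Check the two 'richer' conditions: nobody is richer than
    themselves, and richer[i][j] == 1 iff richer[j][i] == 0 (i != j)."""
    n = len(matrix)
    for i in range(n):
        if matrix[i][i] != 0:
            return False
    for i in range(n):
        for j in range(n):
            if i != j and (matrix[i][j] == 1) != (matrix[j][i] == 0):
                return False
    return True

# Enumerate all 2^9 candidate matrices for n = 3:
n = 3
count = sum(
    is_valid_richer([[bits[i * n + j] for j in range(n)] for i in range(n)])
    for bits in product([0, 1], repeat=n * n)
)
print(count)  # 8: one free direction per unordered pair, 2^3
```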


Note that Comet doesn't support equivalence directly; instead we have to use two implications. Update: equivalence in Comet is written as == (I tested <=> which didn't work). Thanks to Pascal Van Hentenryck for pointing this out.
// no one is richer than him-/herself
forall(i in r)
  m.post(richer[i,i] == 0);

// if i is richer than j then j is not richer than i
forall(i in r, j in r : i != j) {
  /* earlier version: two implications
  m.post(richer[i,j] == 1 => richer[j,i] == 0);
  m.post(richer[j,i] == 0 => richer[i,j] == 1);
  */
  // equivalence
  m.post((richer[i,j] == 1) == (richer[j,i] == 0));
}


//  no one is richer than him-/herself
for(int i = 0; i < n; i++) {
    store.impose(new XeqC(richer[i][i], 0));
}

//  if i is richer than j then j is not richer than i
for(int i = 0; i < n; i++) {
    for(int j = 0; j < n; j++) {
        if (i != j) {
            store.impose(
                          new Eq(
                                 new XeqC(richer[i][j], 1),
                                 new XeqC(richer[j][i], 0)
                                 )
                          );
        }
    }
}



//   a) no one is richer than him-/herself
for(int i = 0; i < n; i++) {
    m.addConstraint(eq(richer[i][i], 0));
}

//   b) if i is richer than j then j is not richer than i
for(int i = 0; i < n; i++) {
    for(int j = 0; j < n; j++) {
        if (i != j) {
            m.addConstraint(ifOnlyIf(
                                    eq(richer[i][j], 1),
                                    eq(richer[j][i], 0)
                                    ));
        }
    }
}


// no one is richer than him-/herself
for(int i = 0; i < n; i++) {
  rel(*this, richer_m(i,i), IRT_EQ, 0, opt.icl());
}

// if i is richer than j then j is not richer than i
for(int i = 0; i < n; i++) {
  for(int j = 0; j < n; j++) {
    if (i != j) {
      post(*this, tt(eqv(
                 richer_m(j,i) == 1, // <=>
                 richer_m(i,j) == 0)), opt.icl());
    }
  }
}

Charles hates noone that Agatha hates

Here are the definitions of the condition Charles hates noone that Agatha hates, which is simply the implication:
  if Agatha hates X then Charles doesn't hate X


When starting to model these kinds of problems, I tend to follow the order of the conditions, which here means that the Charles part comes before the Agatha part. When remodeling in another system, the order tends to be fixed (cf. the Comet version).
forall(i in r) (
   hates[charles, i] = 0 <- hates[agatha, i] = 1
)

forall(i in r)
  m.post(hates[agatha, i] == 1 => hates[charles, i] == 0);


for(int i = 0; i < n; i++) {
    store.impose(
                 new IfThen(
                            new XeqC(hates[agatha][i], 1),
                            new XeqC(hates[charles][i], 0)
                            )
                 );
}


I tend to copy/paste the models for Choco and JaCoP and just change the functions that are different. A consequence of this is that some special features in one of these two systems are not used.
for(int i = 0; i < n; i++) {
    m.addConstraint(implies(
                            eq(hates[agatha][i], 1),
                            eq(hates[charles][i], 0)
                            ));
}


for(int i = 0; i < n; i++) {
   post(*this, tt(imp(
                        hates_m(i,agatha) == 1, 
                        // =>
                        hates_m(i,charles) == 0)), opt.icl());
}

No one hates everyone

This is the last condition to compare: No one hates everyone. It is implemented by summing, for each person, the number of persons that he or she hates; this sum must be 2 or less. Note that it is possible to hate oneself.
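Before looking at each system, the condition itself can be sanity-checked with a small Python sketch (my own illustration; the matrix values below are an assumption, not the model's solution):

```python
def no_one_hates_everyone(hates):
    """Each person must hate at most n-1 of the n people,
    i.e. the row sum of the hates matrix is at most 2 for n = 3."""
    n = len(hates)
    return all(sum(row) <= n - 1 for row in hates)

# Indices 0=Agatha, 1=butler, 2=Charles; Agatha hates herself and Charles:
hates = [[1, 0, 1],
         [0, 0, 0],
         [0, 0, 0]]
print(no_one_hates_everyone(hates))  # True: every row sum is <= 2
```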


forall(i in r) (
  sum(j in r) (hates[i,j]) <= 2  
)

  forall(i in r)
    m.post(sum(j in r) (hates[i,j]) <= 2);


Note: We could save the XlteqC constraint by restricting the domain of a_sum to 0..2 instead of 0..n (=3), but this explicit use of XlteqC is clearer.
for(int i = 0; i < n; i++) {
    FDV a[] = new FDV[n];
    for (int j = 0; j < n; j++) {
        a[j] = hates[i][j];
    }
    FDV a_sum = new FDV(store, "a_sum"+i, 0, n);
    store.impose(new Sum(a, a_sum));
    store.impose(new XlteqC(a_sum, 2));
}


Note: sum is an operator in Choco, which makes the condition somewhat easier to state than in JaCoP.
for(int i = 0; i < n; i++) {
    IntegerVariable a[] = makeIntVarArray("a", n, 0, 1);
    for (int j = 0; j < n; j++) {
        a[j] = hates[i][j];
    }
    m.addConstraint(leq(sum(a), 2));
}


In Gecode this condition is quite easy to state using linear. In order to use it there is a Matrix "view" of the hates matrix, hates_m.
Matrix hates_m(hates, n, n);
// ...
for(int i = 0; i < n; i++) {
  linear(*this, hates_m.row(i), IRT_LQ, 2, opt.icl());
}

End comment

The mandatory end comment: there are probably better ways of modeling the problem than shown above, either by changing some details or by modeling the problem completely differently. Maybe this will be done sometime...

May 08, 2009

Learning Constraint Programming III: decomposition of a global constraint: alldifferent_except_0

The series Learning Constraint Programming is a preparation for my talk at SweConsNet Workshop 2009: "Learning Constraint Programming (MiniZinc, JaCoP, Choco, Gecode/R, Comet, Gecode): Some Lessons Learned". Confusingly, the entries are not numbered in any logical order. Sorry about that.

Here are the previous entries:


The global constraint alldifferent_except_0 (or the more general variant alldifferent_except_c) is one of my favorite global constraints. It is very handy when 0 (or any constant c) is used to encode an unknown or irrelevant value; we can then constrain the remaining values to be distinct.

The great Global Constraint Catalog entry alldifferent_except_0 explains this constraint as:
Enforce all variables of the collection VARIABLES to take distinct values, except those variables that are assigned to 0.

The alldifferent_except_0 constraint holds since all the values (that are different from 0) 5, 1, 9 and 3 are distinct.
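The decomposition idea is simple enough to state in a few lines of Python (my own sketch, mirroring the decompositions shown below): any two values that are both non-zero must differ.

```python
def all_different_except_0(xs):
    """Decomposition of alldifferent_except_0: every pair of
    values that are both non-zero must be distinct."""
    n = len(xs)
    return all(
        xs[i] != xs[j]
        for i in range(n) for j in range(i + 1, n)
        if xs[i] != 0 and xs[j] != 0
    )

print(all_different_except_0([5, 1, 0, 9, 0, 3]))  # True: non-zeros distinct
print(all_different_except_0([5, 1, 0, 5, 0, 3]))  # False: 5 is repeated
```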


I have modeled a decomposition of alldifferent_except_0 in the following models, where the constraint is just tested, perhaps combined with some other constraint, e.g. sortedness or a requirement that there must be at least some zeros:

- MiniZinc alldifferent_except_0.mzn
- Comet:
- Gecode/R: all_different_except_0.rb
- Choco:
- JaCoP:
- Gecode: alldifferent_except_0.cpp

Some models using alldifferent_except_0

And here is some real use of the constraint:

- Nonogram (Comet): (A faster model using the regular constraint is described here and here.)
- I wrote about alldifferent_except_0 in Pi Day Sudoku 2009. However, a faster way of solving the problem was found and is described in Solving Pi Day Sudoku 2009 with the global cardinality constraint. Note: the competition is still on, so there is no link to any model.
- Sudoku generate (Comet):
- all paths graph (MiniZinc): all_paths_graph.mzn
- Cube sum (MiniZinc): cube_sum.mzn
- Message sending (MiniZinc): message_sending.mzn

As the first two entries indicate, there may be faster solutions than using (a decomposition of) alldifferent_except_0, but even as a decomposition it is a great conceptual tool when modeling a problem.


In the implementations below we also see how to define a function (predicate) in the constraint programming systems.

For the Gecode/R model there are different approaches:
- the "standard" ("direct") approach, where we loop over all distinct pairs of elements and ensure that if both values are different from 0 then they must be different
- using count
- using global cardinality ("simulated" in Gecode/R, see below)

Also, in some models we use the slightly more general version alldifferent_except_c, where c is any constant (e.g. "Pi" in the Pi Day Sudoku puzzle mentioned above).


Model: alldifferent_except_0.mzn.
predicate all_different_except_0(array[int] of var int: x) =
   let {
      int: n = length(x)
   } in
   forall(i,j in 1..n where i != j) (
        (x[i] > 0 /\ x[j] > 0) 
        ->
        x[i] != x[j]
   );

% usage:
constraint all_different_except_0(x);


function void alldifferent_except_0(Solver<CP> m, var<CP>{int}[] x) {
  int n = x.getSize();
  forall(i in 1..n, j in i+1..n) {
    m.post(x[i] > 0 && x[j] > 0 => x[i] != x[j]);
  }
}

// usage
exploreall<m> {
  // ...
  alldifferent_except_0(m, x);
}



When modeling the constraint in Gecode/R, I experimented with different approaches. The reification variant all_different_except_0_reif is actually quite fast.
# The simplest and the fastest implementation 
# using count for 1..max (poor man's global cardinality)
def all_different_except_0
    self.count(i).must <= 1

# global cardinality version using an extra array with the counts
def global_cardinality(xgcc)
    xgcc[i].must == self.count(i)

# The standard approach using reification.
def all_different_except_0_reif(x)
  n = x.length
  b1_is_an bool_var_matrix(n,n)
  b2_is_an bool_var_matrix(n,n)
  b3_is_an bool_var_matrix(n,n)
      if i != j then 
        x[i].must_not.equal(0, :reify => b1[i,j]) 
        x[i].must_not.equal(0, :reify => b2[i,j]) 
        x[i].must_not.equal(x[j], :reify => b3[i,j])
        (b1[i,j] & b2[i,j]).must.imply(b3[i,j])

    # ...
    # usage:
    # all_different_except_0_gcc(x)
    # all_different_except_0_reif(x)



Note that here alldifferent_except_0 is derived from the more general version alldifferent_except_c.
// decomposition of alldifferent except 0
public void allDifferentExcept0(CPModel m, IntegerVariable[] v) {
    allDifferentExceptC(m, v, 0);
}

// slightly more general: alldifferent except c
public void allDifferentExceptC(CPModel m, IntegerVariable[] v, int c) {
    int len = v.length;
    for(int i = 0; i < v.length; i++) {
        for(int j = i+1; j < v.length; j++) {
            m.addConstraint(implies(
                                       and(
                                           gt(v[i], c), 
                                           gt(v[j], c)
                                           ),
                                       neq(v[i], v[j])
                                       ));
        }
    }
}
// ...
// usage: 




This is exactly the same approach as the Choco version.
// decomposition of alldifferent except 0
public void allDifferentExcept0(FDstore m, FDV[] v) {
    allDifferentExceptC(m, v, 0);
} // end allDifferentExcept0

// slightly more general: alldifferent except c
public void allDifferentExceptC(FDstore m, FDV[] v, int c) {
    int len = v.length;
    for(int i = 0; i < v.length; i++) {
        for(int j = i+1; j < v.length; j++) {
            m.impose(new IfThen(
                                       new And(
                                           new XneqC(v[i], c), 
                                           new XneqC(v[j], c)
                                           ),
                                       new XneqY(v[i], v[j])
                                       ));
        }
    }
} // end allDifferentExceptC

        // ...
        // usage:
        allDifferentExcept0(store, x);



The Gecode version is very succinct since it uses overloaded Boolean operators. Very nice.
void alldifferent_except_0(Space& space, IntVarArray x, IntConLevel icl = ICL_BND) {
  for(int i = 0; i < x.size(); i++) {
    for(int j = i+1; j < x.size(); j++) {
      post(space,
           tt(
           imp(x[i] != 0 && x[j] != 0, 
           // =>
           x[i] != x[j])),
           icl);
    }
  }
} // alldifferent_except_0

// ...
// usage:
    alldifferent_except_0(*this, x, opt.icl());
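All these decompositions encode the same implication: if two positions are both non-zero, they must take different values. As a plain Python sketch of the semantics being posted (a checker, not from any of the models above):

```python
def all_different_except_0(xs):
    # every pair of non-zero values must be pairwise distinct;
    # zeros may repeat freely
    return all(xs[i] != xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs))
               if xs[i] != 0 and xs[j] != 0)

print(all_different_except_0([0, 1, 0, 2, 0, 3]))  # True
print(all_different_except_0([0, 1, 1, 2]))        # False
```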

May 05, 2009

Learning constraint programming - part II: Modeling with the Element constraint

As indicated in Learning constraint programming (languages) - part I, here are some findings from implementing Crossword, Word square, and Who killed Agatha?. See the links below for the implementations.

The first constraint programming system I learned, after constraint logic programming in Prolog, was MiniZinc. When implementing the problems below I realized that I have been quite spoiled by using MiniZinc. The way MiniZinc (and also Comet) supports the Element constraint, i.e. access of variable arrays/matrices, is very straightforward, and it doesn't matter whether the array to access is an array of integers or of variable integers. In the Java (Choco, JaCoP) and C++ (Gecode) systems, however, this is another matter. Due to different circumstances I have not implemented these models in Gecode/R.

Element in MiniZinc and Comet
Accessing arrays and matrices in MiniZinc and Comet is simply done by using the [] construct, no matter what the types of the array and the index are (I assume integers and variable integers here). For the other systems we must explicitly use the Element constraint (called nth in Choco).


Crossword

This is a standard constraint programming problem, used as a running example in Apt's great book Principles of Constraint Programming. Here is a formulation of the problem (stated differently than in the book):
Place the words listed below in the following crossword. The '#' means a blocked cell, and the numbers indicate the overlappings of the words.

      1   2   3   4   5
  1 | 1 |   | 2 |   | 3 |
  2 | # | # |   | # |   |
  3 | # | 4 |   | 5 |   |
  4 | 6 | # | 7 |   |   |
  5 | 8 |   |   |   |   |
  6 |   | # | # |   | # |
We can use the following words


MiniZinc: crossword.mzn
Gecode/R: (Not implemented)
Gecode: crossword.cpp

Note: I have seen more general models for solving crossword problems in Choco, JaCoP, Gecode/R, and Gecode, with constructions other than the simple Elements used here. Since I wanted to compare the same way of solving the problem using the same Element construct, this may be an unfair comparison between the systems. Well, it is at least a finding of how to implement this problem with Element...

Explanation of variables
The matrix A is the individual chars of the words (Comet variant):
int A[1..num_words, 1..word_len] = 
  [
   [h, o, s, e, s], //  HOSES
   [l, a, s, e, r], //  LASER
   [s, a, i, l, s], //  SAILS
   [s, h, e, e, t], //  SHEET
   [s, t, e, e, r], //  STEER
   [h, e, e, l, 0], //  HEEL
   [h, i, k, e, 0], //  HIKE
   [k, e, e, l, 0], //  KEEL
   [k, n, o, t, 0], //  KNOT
   [l, i, n, e, 0], //  LINE
   [a, f, t, 0, 0], //  AFT
   [a, l, e, 0, 0], //  ALE
   [e, e, l, 0, 0], //  EEL
   [l, e, e, 0, 0], //  LEE
   [t, i, e, 0, 0]  //  TIE
  ];
overlapping is the matrix of the overlapping cells. This is the Comet version:
 [1, 3, 2, 1],   //  s
 [1, 5, 3, 1],   //  s 
 [4, 2, 2, 3],   //  i
 [4, 3, 5, 1],   //  k
 [4, 4, 3, 3],   //  e
 [7, 1, 2, 4],   //  l
 [7, 2, 5, 2],   //  e
 [7, 3, 3, 4],   //  e
 [8, 1, 6, 2],   //  l
 [8, 3, 2, 5],   //  s
 [8, 4, 5, 3],   //  e
 [8, 5, 3, 5]    //  r
E is the variable array deciding which word to use for each overlapping. This is in fact the only variable array needed in the problem, apart from utility/convenience variables.

The main constraint for the crossword example in each system is stated thus:

MiniZinc:
forall(i in 1..num_overlapping) (
   A[E[overlapping[i,1]], overlapping[i,2]] = A[E[overlapping[i,3]], overlapping[i,4]]
)
Comet:
forall(i in 1..num_overlapping) {
   m.post(A[E[overlapping[i,1]], overlapping[i,2]] == A[E[overlapping[i,3]], overlapping[i,4]], onDomains);
}
Note that Choco has a special Element which supports two-dimensional arrays (matrices), which we use.
for(int I = 0; I < num_overlapping; I++) {
  IntegerVariable tmp = makeIntVar("tmp" + I, 1, 26);
  M.addConstraint(nth(E[overlapping[I][0]], W[overlapping[I][1]], AX, tmp));
  M.addConstraint(nth(E[overlapping[I][2]], W[overlapping[I][3]], AX, tmp));
}
Here we use some trickery: a transposed version of the word matrix, since JaCoP has no special Element constraint for two-dimensional arrays.
for (int I = 0; I < num_overlapping; I++) {
   FDV tmp = new FDV(store, "TMP" + I, 0, num_words*word_len);
   store.impose(new Element(E[overlapping[I][0]], words_t[overlapping[I][1]], tmp));
   store.impose(new Element(E[overlapping[I][2]], words_t[overlapping[I][3]], tmp));
}
This is more complicated compared to the two Java systems, since in Gecode we use an array (of length rows*cols) to simulate the matrix. (There is a Matrix "view" in Gecode, but the indices must be of type integer, not IntVar, so it cannot be used.) Also, the constraints plus and mult take IntVar arguments.

The first overlapping crossing is "expanded" like this (Gecode is 0-based):
   A[E[overlapping[i,0]], overlapping[i,1]] // MiniZinc/Comet
   a1 = A[ E[I*4+0] * word_len + overlapping[I*4+1]] // Gecode
Here is the complete code. The comments hopefully explains what is going on.

First we define a utility function for accessing an element according to the above principle.
/*
 * Special version of element for an array version of a "matrix" words,
 * E is an integer variable array, C is an array of IntVars for
 * the offset j in the words "matrix".
 * The call 
 *    element_offset(*this, words, E[i], word_len_v, C[j], res, opt.icl());
 * corresponds to:
 *    res = words[E[i], j] --> words[E[i]*word_len+j]
 */
void element_offset(Space& space,
                   IntArgs words,
                   IntVar e,
                   IntVar word_len,
                   IntVar c,
                   IntVar res,
                   IntConLevel icl = ICL_DOM) {

      element(space, words, 
              plus(space, 
                   mult(space, 
                        e, 
                        word_len, icl), 
                   c, icl), 
              res, icl);
}

The function is then used as follows:
for(int I = 0; I < num_overlapping; I++) {
   IntVar e1(*this, 0, num_overlapping*4);
   IntVar e2(*this, 0, num_overlapping*4);

   IntVarArray o(*this, 4, 0, num_overlapping*4);
   for(int J = 0; J < 4; J++) {
     post(*this, o[J] == overlapping[I*4+J], opt.icl());
   }

   element(*this, E, o[0], e1, opt.icl());      // e1 = E[I*4+0]
   element(*this, E, o[2], e2, opt.icl());      // e2 = E[I*4+2]

   IntVar a1(*this, 0, num_words*word_len);
   element_offset(*this, A, e1, word_len_v, o[1], a1, opt.icl());
   element_offset(*this, A, e2, word_len_v, o[3], a1, opt.icl());
}
(The same element_offset function is also used in the Word square problem below.) It took quite some time to get the function and the temporary variables (and their domains) right. With training (and element_offset as a skeleton) similar problems should be easier to implement.
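Stripped of the constraint machinery, what element_offset computes is plain row-major index arithmetic. A minimal Python illustration (hypothetical helper names, mirroring the element/plus/mult chain above):

```python
def element_offset(words, e, word_len, c):
    # res = words[e, c] on a matrix flattened row-major:
    # words[e*word_len + c]
    return words[e * word_len + c]

# a 3x2 "matrix" [[10, 11], [20, 21], [30, 31]] stored flat
words = [10, 11, 20, 21, 30, 31]
print(element_offset(words, 2, 2, 1))  # 31
```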

Note: this is not a bashing of Gecode. Gecode is a great system; it just happens that for this specific problem Gecode does not have the appropriate support. I should also mention that it was a long time since I programmed in C++, and I am a little rusty.

As mentioned earlier, I have been very spoiled by the MiniZinc (and Comet) constructs. Also, I'm a very 'lazy' person (in the Perl sense of the word) and like the agile programming languages - Perl, Ruby, Python, etc. - largely for their high-level constructs.

Word square problem

The word square problem is a cousin of the crossword problem, and is described in Wikipedia's Word_square:
A word square is a special case of acrostic. It consists of a set of words, all having the same number of letters as the total number of words (the "order" of the square); when the words are written out in a square grid horizontally, the same set of words can be read vertically.
Here is an example of order 7 found by the Comet model, where the first row word (aalborg) is also the first column word.



Here are the models for solving the Word square problem:
MiniZinc: word_square.mzn
Gecode/R: (Not implemented in Gecode/R)
Gecode: word_square.cpp

It is somewhat easier than the crossword problem. As before, E is the array of indices of the words to use, and words is a matrix of the words. Also, these models are an experiment in how to read a file, here the word list /usr/dict/words (standard on Unix/Linux systems).

MiniZinc:
forall(I, J in 1..word_len) (
  words[E[I], J] = words[E[J], I]
)
Comet:
forall(i in 1..word_len) {
  forall(j in 1..word_len) {
    m.post(words[E[i], j] == words[E[j], i], onDomains);
  }
}
// The overlappings (crossings).
// Note that we use a transposed word matrix for the Element.
for(int i = 0; i < word_length ; i++) {
    for(int j = 0; j < word_length ; j++) {
        // Comet: words[E[i], j] ==  words[E[j],i]
        FDV tmp = new FDV(store, "tmp" + i + " " + j, 0, dict_size);
        store.impose(new Element(E[i], words[j], tmp));
        store.impose(new Element(E[j], words[i], tmp));
    }
}
// Constants for the nth constraint below
IntegerVariable [] C = new IntegerVariable[dict_size];
for (int I = 0; I < word_length; I++) {
    C[I] = makeIntVar("C"+I, I, I);
}

// The overlappings (crossings)
for(int I = 0; I < word_length ; I++) {
    for(int J = 0; J < word_length ; J++) {
        // Comet: words[E[i], j] ==  words[E[j],i]
        IntegerVariable tmp = makeIntVar("tmp" + I + " " + J, 0, dict_size);
        M.addConstraint(nth(E[I], C[J], words, tmp));
        M.addConstraint(nth(E[J], C[I], words, tmp));
    }
}
Note that this model uses the same function element_offset that was used in the Crossword problem. It took some time to realize that it could be used here as well.
// convenience variables for the element constraints below
// since element, plus, and mult want IntVars.
IntVar word_len_v(*this, word_len, word_len);
IntVarArray C(*this, word_len, 0, word_len-1);
for(int i = 0; i < word_len; i++) {
  rel(*this, C[i], IRT_EQ, i, opt.icl());
}

for(int i = 0; i < word_len; i++) {
  for(int j = 0; j < word_len; j++) {
    // words[E[i], j] ==  words[E[j],i]

    IntVar tmp(*this, 0, num_words);

    // tmp == words[E[i], j] --> words[E[i]*word_len+j]
    element_offset(*this, words, E[i], word_len_v, C[j], tmp, opt.icl());

    // tmp == words[E[j], i]  --> words[E[j]*word_len+i]
    element_offset(*this, words, E[j], word_len_v, C[i], tmp, opt.icl());
  }
}
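The condition all these models post - cell (i,j) equals cell (j,i) - can be stated directly in Python as a checker (a sketch, not part of any of the models):

```python
def is_word_square(words):
    # words: n words of length n; reading column i must give word i
    n = len(words)
    return (all(len(w) == n for w in words) and
            all(words[i][j] == words[j][i]
                for i in range(n) for j in range(n)))

print(is_word_square(["sator", "arepo", "tenet", "opera", "rotas"]))  # True
print(is_word_square(["abc", "def", "ghi"]))                          # False
```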

Who killed Agatha?

This is a standard benchmark for theorem proving, also known as The Dreadsbury Mansion Murder Mystery.

Problem formulation from The h1 Tool Suite
Someone in Dreadsbury Mansion killed Aunt Agatha. Agatha, the butler, and Charles live in Dreadsbury Mansion, and are the only ones to live there. A killer always hates, and is no richer than his victim. Charles hates noone that Agatha hates. Agatha hates everybody except the butler. The butler hates everyone not richer than Aunt Agatha. The butler hates everyone whom Agatha hates. Noone hates everyone. Who killed Agatha?
Originally from F. J. Pelletier: Seventy-five problems for testing automatic theorem provers., Journal of Automated Reasoning, 2: 191-216, 1986.


MiniZinc: who_killed_agatha.mzn
JaCoP: (variant 1, see below)
JaCoP: (variant 2, see below)
Gecode: who_killed_agatha.cpp

In Some new Gecode models I wrote about the findings from implementing this problem in Gecode compared to Comet/MiniZinc.

The models use two 3x3 matrices for representing the two relations hates and richer, with 0..1 as domain (i.e. boolean). The Element constraint is used for implementing the condition A killer always hates, and is no richer than his victim, where the_killer is an integer variable; the_victim is in some models replaced by agatha (the integer 0). The interesting thing here is that at least one of the indices is an integer variable, which is what caused the difficulties in the two problems above.

These models also use a lot of boolean constructs. A comparison of how these are implemented in the different CP systems may be described in a future blog post.

MiniZinc:
hates[the_killer, the_victim] = 1 /\
richer[the_killer, the_victim] = 0
Comet:
m.post(hates[the_killer, the_victim] == 1);
m.post(richer[the_killer, the_victim] == 0);
Note: In the models below I have simplified the problem by using agatha (defined as the integer 0) instead of the integer variable the_victim. This is not a problem since we know that Agatha is the victim, and is the reason why the Element is easier to use than for Crossword and Word square.

JaCoP variant 1 (no Element):
JaCoP doesn't have direct support for the case when the index i (in matrix[i][j]) is an integer variable, so the first variant of modeling the condition A killer always hates, and is no richer than his victim does not use Element at all. Instead we simply loop over all integers (0..2), check if "this" i equals the_killer, and then we can use plain integers for accessing the matrices. Also, note the IfThen construct.
for(int i = 0; i < n; i++) {
    store.impose(
                 new IfThen(
                            new XeqC(the_killer, i),
                            new XeqC(hates[i][agatha], 1)
                 ));
    store.impose(
                 new IfThen(
                            new XeqC(the_killer, i),
                            new XeqC(richer[i][agatha], 0)
                 ));
}
This was the first variant I implemented, but then I recalled the "trickery" used in Crossword and Word square, where the matrices were transposed so that Element could be used. The problem with this approach is that all constraints must be rewritten in a way that may be confusing. Come to think of it, maybe the names of the matrices should have been changed to is_hated_by and poorer.

JaCoP variant 2 (transposed matrices, Element)
This method of transposing and using Element is implemented in the second JaCoP model. The constraint is now much simpler:
int shift = -1;
store.impose(new Element(the_killer, hates[agatha], one, shift));
store.impose(new Element(the_killer, richer[agatha], zero, shift));
Note: the Element in JaCoP defaults to start index 1, but has support for shifting it to 0 by using -1 as the shift parameter.

Choco variant 1 (no Element)
I implemented exactly the same principles as in the two JaCoP models in the two Choco models. The first - no Element - is:

for(int i = 0; i < n; i++) {
    m.addConstraint(implies(
                            eq(the_killer, i),
                            eq(hates[i][agatha], 1)
                            ));
    m.addConstraint(implies(
                            eq(the_killer, i),
                            eq(richer[i][agatha], 0)
                            ));
}
Choco variant 2 (transposed matrices, nth)
Note: one and zero are integer variables since nth cannot handle plain integers as the last argument.
m.addConstraint(nth(the_killer, hates[agatha], one));
m.addConstraint(nth(the_killer, richer[agatha], zero));


Here we have seen - not surprisingly - that using the Element constraint is quite different depending on which CP system we use, and it can be easy or not so easy. It was my explicit intention to see how to solve the same problem as similarly as possible. We should also note that most (if not all) problems can be modeled in many ways, some not using Element at all.

One last comment: The two Java models of Who killed Agatha? took quite a long time to implement. The main reason was not the handling of Element, but a bug where I confused the two matrices in one of the conditions. Sigh.
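As a cross-check of the puzzle logic itself (independent of any CP system), the problem is small enough to brute-force in Python. This sketch encodes the clues directly (people numbered 0=Agatha, 1=butler, 2=Charles; a hypothetical re-statement, not one of the blog's models):

```python
from itertools import product

def killers():
    n = 3
    agatha, butler, charles = 0, 1, 2
    found = set()
    # "hates" is any 0/1 matrix; "richer" is irreflexive and, for each
    # distinct pair, exactly one direction holds
    pairs = [(0, 1), (0, 2), (1, 2)]
    for hb in range(2**9):
        hates = [[(hb >> (3*i + j)) & 1 for j in range(n)] for i in range(n)]
        # Charles hates no one that Agatha hates
        if any(hates[agatha][j] and hates[charles][j] for j in range(n)):
            continue
        # Agatha hates everybody except the butler
        if not (hates[agatha][agatha] and hates[agatha][charles]) or hates[agatha][butler]:
            continue
        # the butler hates everyone whom Agatha hates
        if any(hates[agatha][j] and not hates[butler][j] for j in range(n)):
            continue
        # no one hates everyone
        if any(all(hates[i][j] for j in range(n)) for i in range(n)):
            continue
        for dirs in product([0, 1], repeat=3):
            richer = [[0]*n for _ in range(n)]
            for (i, j), d in zip(pairs, dirs):
                richer[i][j], richer[j][i] = d, 1 - d
            # the butler hates everyone not richer than Agatha
            if any(not richer[j][agatha] and not hates[butler][j] for j in range(n)):
                continue
            # the killer hates, and is no richer than, the victim (Agatha)
            for k in range(n):
                if hates[k][agatha] and not richer[k][agatha]:
                    found.add(k)
    return found

print(killers())  # {0}, i.e. Agatha killed herself
```

This agrees with the well-known answer: Agatha killed herself.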

April 26, 2009

Comet: version 1.2-rev2 released and a facelift of the site

Two things are new this weekend.

1) Version 1.2-rev2 has been released. To cite the release note: It contains a number of bug fixes and significant improvement in performance on a variety of benchmarks.

Download this version here.

2) The Dynadec site was also updated this weekend. It is now better organized.

Also, I have forgotten to mention that the Forums are now quite active (and the mailing list is practically closed). Fortunately there is an option of getting an email whenever a new message is posted.

March 24, 2009

Comet version 1.2 released

Comet version 1.2 is released. From the release notes:

New Features
This release has the following new features [with bug tracking id where applicable]:

* New linear programming library (cotln) with interfaces to LPSolve and COINOR-CLP.
* New database library (cotdb) using ODBC.
* Addition of events to Boolean CP variables [#112].
* Implementation mouse tracking on Gtk [#33].
* Improved error messages for semantic errors [#169, 160, 141]
* Allow filtering inside of a collect statement [#153].
* Addition of FunctionFloatExpr [#151].
* Support for classes as keys in dictionaries [#143].
* Added new mark method for 2D plots [#127].
* Improved handling of arrays of arrays [#138].
* Implemented return statement in void functions [#142].
* The compiler now gives an error if a class has no constructors [#156].
* The compiler gives an error if a non-void method has no return statement [#102].
* Unified syntax of select, selectFirst, and selectCircular [#154].
* Unified syntax for arrays and matrices [#140].
* Unified syntax of queues and stacks [#139].
* Support for with atomic in CP search [#99, 121, 123].
* Support for sqrt in CP objectives [#135].
* Support for getSnapshot on float CP variables. [#131].
* Support for swapping operator :=: on int and tuple set variables in LS [#101].
* Added getLocalSolver on var{set{int}}, var{bool}, and var{float} [#119].

Bug Fixes
This release fixes the following bugs [with bug tracking id where applicable]:

* Fixed memory leaks in CP library [#148].
* Allow assignments to variables stored in a dict using the operator := [#172].
* Packaging of additional library .cob files [#159,#167].
* Fixes to implementation of built-in stack [#161].
* Fixed implementation of ofstream on Mac OS X [#150].
* Allow casting to float inside of constraint expressions [#149].
* Improved handling of float CP variables with large domains [#147].
* The operator := now works on Float objects [#155].
* Improved consistency in floating point operations [#122].
* Added evalFloat() to var{float} [#129].
* Improved handling of failed constraints outside explore constructs [#130].
* Implemented printing of arrays of ranges [#128].
* Fixed race condition in parallel loops resulting in deadlock [#124].
* Fixed bug in initial state of queues [#180].
* Fixed performance degradation for certain local search problems [#181].

Comet can be downloaded from Dynadec.

For more info about Comet, see My Comet page.

March 14, 2009

Solving Pi Day Sudoku 2009 with the global cardinality constraint

Here is an update of the Pi Day Sudoku 2009 problem.

After a suggestion from Mikael Zayenz Lagerkvist (of the Gecode team) the model now uses just the global cardinality constraint and is much faster than the model which used my own decomposition of all_different_except_0 (or rather all_different_except_Pi) and the count constraint.

Both of the new MiniZinc and Comet models solve the Pi Day Sudoku 2009 problem in less than a second (compared to a couple of seconds before).

The Comet model has the following statistics for finding the first (and unique) solution.

time: 9
#choices = 6
#fail = 1
#propag = 1306

For Gecode/FlatZinc:

runtime: 20
solutions: 1
propagators: 35
propagations: 255
nodes: 3
failures: 0
peak depth: 2
peak memory: 158 KB

Compared to the former model, there are now far fewer failures.

Some more about the global cardinality constraint
Global cardinality (in Sudoku) is a way of stating how many occurrences there should be of each number per row, column or region. For the Pi Day Sudoku problem the following occurrences are needed:

Pi: 3 occurrences (coded as -1 in the model)
0: 0 occurrences (0 is a dummy value for handling the indata)
1..9: exactly one occurrence each

In MiniZinc this is coded as:

array[1..n, 1..n] of var -1..9: x;
array[-1..9] of 0..3: occ = array1d(-1..9, [3, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]);
forall(i in 1..n) ( % rows
   check([x[i,j] | j in 1..n], occ)
)

In Comet the function cardinality is used, i.e. m.post(cardinality(occ, all(j in 1..n) x[i,j]));.

For more information about the global cardinality constraint, see the entry in Global Constraint Catalog.
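In plain Python, the per-row condition that global cardinality enforces amounts to this (a sketch; Pi is coded as -1 as in the models):

```python
from collections import Counter

def gcc_ok(row, occ):
    # occ maps each value to its required number of occurrences
    counts = Counter(row)
    return all(counts.get(v, 0) == k for v, k in occ.items())

# 3 Pi, no dummy zeros, exactly one each of 1..9
occ = {-1: 3, 0: 0, **{v: 1 for v in range(1, 10)}}
print(gcc_ok([-1, 1, 2, -1, 3, 4, 5, -1, 6, 7, 8, 9], occ))  # True
print(gcc_ok([-1, 1, 2, -1, 3, 4, 5, 5, 6, 7, 8, 9], occ))   # False
```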

A comment
Introductions to constraint programming nowadays tend to start with the Sudoku problem (or SEND + MORE = MONEY). These models often use the alldifferent constraint, and seldom, if ever, have I seen any presentation using global cardinality. One reason may be that alldifferent is easier to understand in an introduction; another may be that it is in general a better constraint to use for these problems.

"Vanilla" Sudoko solver using global cardinality
I have compared a few "vanilla" Sudoku problems with either alldifferent or global cardinality but haven't done any systematic comparison. Sometimes the global cardinality approach is better, sometimes it is somewhat worse.

sudoku_gcc.mzn is a "vanilla" Sudoku solver in MiniZinc using the global cardinality constraint, and there is a Comet model using the same approach. Note: these models include just a few problems.

Also see: Gecode's Sudoku solver is very fast, and includes 88 different problems.

March 11, 2009

Pi Day Sudoku 2009

Brainfreeze Puzzles has a Pi day Sudoku competition:

Pi Day is a celebration of the number π that occurs every March 14 (3.14...). Math geeks all over the world celebrate their math-geekiness on this day with pie-eating contests, recitations of the digits of pi, and occasionally fundraisers where math faculty get hit in the face with pies. At Brainfreeze Puzzles we celebrate Pi Day - how else? - with a Sudoku puzzle.


Rules: Fill in the grid so that each row, column, and jigsaw region contains 1-9 exactly once and π [Pi] three times.

The puzzle is also here (as a PDF file)


I programmed this puzzle in both Comet (using the constraint programming module) and MiniZinc using different solvers. The models use the same principle for solving the problem: a (home-brewed) alldifferent except 0 constraint and counting exactly 3 occurrences of Pi for each row/column and region.

All the solvers give the same unique answer, which of course may mean that there is some systematic error on my part.


Comet solves the problem in about 9 seconds with the following statistics, using exploreall to make sure that it is a unique solution.

time: 8821
#choices = 9999
#fail = 16231
#propag = 23168886

Somewhat surprisingly, selecting j before i in the forall loop gives an improvement from 15 seconds to 9 seconds. This was the same labeling strategy that was used for improving Nonogram (see Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second). Here is the labeling:

// reversing i and j gives faster solution
forall(j in 1..n, i in 1..n: !x[i,j].bound()) {
  tryall(v in V : x[i,j].memberOf(v)) by(v)
    m.label(x[i,j], v);
  onFailure
    m.diff(x[i,j], v);
}

MiniZinc and Gecode/flatzinc

Interestingly, the same improvement was observed with MiniZinc and the Gecode/flatzinc solver (the interface to Gecode). Here is the labeling function, with the same swapping of i and j (I didn't know that this mattered in MiniZinc).

solve :: int_search([x[i,j] | j, i in 1..n], "input_order", "indomain", "complete") satisfy;

With the swap, the time went from 16 seconds to 10 seconds with the following statistics:

propagators: 9460
propagations: 13461717
failures: 11221
clones: 11221
commits: 31478
peak memory: 5638 KB
sudoku_pi.mzn --num 2  9,41s user 0,72s system 99% cpu 10,139 total

Fzntini, FlatZinc, ECLiPSe and searching for the first solution

MiniZinc's builtin solver flatzinc and fzntini (a SAT solver) show only one solution and no statistics.

* fzntini solves the problem in about 13 seconds
* flatzinc takes 3 seconds.

The ECLiPSe MiniZinc solver ic takes 6.5 seconds using the same labeling as Gecode/flatzinc. No statistics are shown for this solver either.

For comparison, both Comet and Gecode/flatzinc show the first solution in 2.5 seconds.


Since it is a competition I won't show my solution or the models on/via the blog. Sorry about that.


I learned about Pi Days Sudoku from the 360 blog: Pi Day Sudoku 2009.

March 08, 2009

Comet: About 15 new Comet models

The latter part of the last week was dominated by the optimization of the Nonogram solver, which I wrote about in Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second. I would like to thank Pascal Van Hentenryck, Mikael Zayenz Lagerkvist, and Christian Schulte for help, comments, and inspiration.

Here are the other Comet models written the last week.

Puzzles, recreation mathematics, etc

Pascal Van Hentenryck's OPL book

These models are translations from Pascal Van Hentenryck's "The OPL Optimization Programming Language".

Operations research

  • Furniture moving using the global constraint cumulative (a simple scheduling problem from Marriott & Stuckey: Programming with Constraints, page 112f). Compare with the model that uses Comet's Schedule constraint.

Global constraints

Many of these models have been inspired by the great collection in Global Constraint Catalog.

March 05, 2009

Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second

In Comet: regular constraint, a much faster Nonogram with the regular constraint, some OPL models, and more I reported the running time of the Nonogram P200 problem (one of the hardest standard problems) as 1:30 minutes with the Comet model (using the regular constraint).

In the last paragraph of the Nonogram section, I noted: Maybe some creative labeling or a "proper" regular constraint can fasten things up...

Well, it was "creative labeling" part that made it. After some help from Pascal Van Hentenryck (one of the creators of Comet), the time for generating all the solutions of P200 - which is unique - went to about 1 second.

The statistics for the full search of P200 are shown below. Note: the unit for "time" is milliseconds; it reports the solving time excluding the startup of Comet itself, i.e. 0.627 seconds.

num_solutions: 1
time: 621
#choices = 520
#fail = 1040
#propag = 1213237
comet -j2 1,48s user 0,10s system 99% cpu 1,582 total

For comparison the Gecode program Nonogram displays one solution in about the same time, but searching the complete search tree takes about 45 seconds, using the following command line option:
nonogram -pk speed -solutions 0 8.

Maybe there are some more options that will make the Gecode program work faster.

What was changed?

Pascal showed me two improvements (annotated by "Pascal's improvement" in the model).

The first change didn't make much of a difference. It was a change in the regular function. The following

forall(i in x_range) {
  cp.inside(x[i], 1..S); // Do this in case it's a var.
  m.post(a[i+1] == d2[a[i], x[i]], onDomains); // Determine a[i+1].
}

was changed into

forall(i in x_range) {[i] >= 1); // hakank: Pascal's improvement[i] <= S); // hakank: Pascal's improvement[i+1] == d2[a[i], x[i]], onDomains); // Determine a[i+1].

The second change, however, made the whole thing. In the using section, the labeling (search strategy) was originally stated as

forall(i in 1..rows, j in 1..cols : !board[i,j].bound()) {
  tryall(v in 1..2)
    m.post(board[i,j] == v, onDomains);
}

Pascal's suggested that j in 1..cols and i in 1..rows should switch place, which made the whole improvement.

The labeling is now as follows, i.e. it starts with the smallest "dimension" (rows or cols).

if (rows * row_rule_len < cols * col_rule_len) {
    forall(i in 1..rows, j in 1..cols: !board[i,j].bound()) {
      tryall(v in 1..2) by (-v)
        m.post(board[i,j] == v, onDomains);
    }
  } else {
    forall(j in 1..cols, i in 1..rows: !board[i,j].bound()) {
      tryall(v in 1..2) by (-v)
        m.post(board[i,j] == v, onDomains);
    }
  }

Thanks again Pascal for your help and inspiration!

February 28, 2009

Comet: regular constraint, a much faster Nonogram with the regular constraint, some OPL models, and more

Since last time, some more Comet models have been written.

Much faster Nonogram model using the regular constraint

In More Comet models, e.g. Nonogram, Steiner triplets, and different set covering problems I presented a solver for Nonogram puzzles, and also noted that it was quite slow: Well, it is nice to have some more optimization to do (or more probably, a complete remodelling)....

Inspired by the announcement this week of the ECLiPSe example of a Nonogram solver using the regular constraint (see nono_regular.ecl.txt and regular.ecl.txt) - and also my earlier Gecode/R model nonogram.rb which used "regular expressions" - I created a Nonogram model in Comet using the regular constraint:

Let us first look at the regular constraint.

Regular constraint
The regular constraint (see the Comet model for my implementation) is a global constraint using a DFA (deterministic finite automaton) which accepts (or rejects) an input sequence given a transition matrix. The constraint was presented by Gilles Pesant in "A Regular Language Membership Constraint for Finite Sequences of Variables" (2004).

My implementation of the regular constraint is heavily borrowed from MiniZinc's builtin regular predicate (from lib/zinc/globals.mzn); the translation to Comet was very straightforward (omitting just some asserts).

An example of the usage of the constraint: we want to match the regular expression "123*21" (i.e. first "1", then "2", then zero or more "3", then "2", and last a "1"). Note: the length of the sequence is 10, so there must be six "3"s, since "123" and "21" are "anchored" at the beginning and the end.

int len = 10;
int n_states = 5; // number of states
int input_max = 3; // the states are 1,2, and 3
int initial_state = 1; // we start with the 1 state
set{int} accepting_states = {n_states}; // This is the last state
// The transition matrix
int transition_fn[1..n_states, 1..input_max] =
[[2, 0, 0], // transitions from state 1: "1" -> state 2
[0, 3, 0], // transitions from state 2: "2" -> state 3
[0, 4, 3], // transitions from state 3: "2" -> state 4 | "3" -> state 3 (loop)
[5, 0, 0], // transitions from state 4: "1" -> state 5
[0, 0, 0]]; // transitions from state 5: END state
exploreall {
regular(reg_input, n_states, input_max, transition_fn, initial_state, accepting_states);
} using {
// ....

The unique sequence resulting from this automaton is thus 1 2 3 3 3 3 3 3 2 1.
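The same automaton can be simulated in a few lines of Python (a sketch, using 0 as the failing state as in the model):

```python
def dfa_accepts(seq, transition, initial, accepting):
    # transition[state-1][input-1] -> next state; 0 means failure
    state = initial
    for s in seq:
        state = transition[state - 1][s - 1]
        if state == 0:
            return False
    return state in accepting

transition = [[2, 0, 0],  # state 1: "1" -> state 2
              [0, 3, 0],  # state 2: "2" -> state 3
              [0, 4, 3],  # state 3: "2" -> state 4 | "3" -> state 3 (loop)
              [5, 0, 0],  # state 4: "1" -> state 5
              [0, 0, 0]]  # state 5: END state
print(dfa_accepts([1, 2, 3, 3, 3, 3, 3, 3, 2, 1], transition, 1, {5}))  # True
```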

For using regular in the Nonogram problem, the automaton for each row/column clue must be built, preferably by a function. In the model this is done with the function make_transition_matrix, which was by far the hardest part of this problem (and surely could be written in a smoother way).

For the Nonogram clue [3,2,1] - which represents the regular expression "0*1110*110*10*" - the following automaton (transition matrix) is generated:
1 2
0 3
0 4
5 0
5 6
0 7
8 0
8 9
9 0

Note that the regular function uses 0 (zero) as the failing state, so the states must start with 1. This is taken care of in the model: the resulting value is subtracted by 1.
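A quick way to sanity-check such an automaton is to compare it against a regular expression built from the clue. A Python sketch (hypothetical helper; note that the separators between blocks need at least one zero, i.e. 0+, which is what the automaton above actually enforces):

```python
import re

def clue_pattern(clue):
    # [3,2,1] -> "0*" + "111" + "0+" + "11" + "0+" + "1" + "0*"
    return "0*" + "0+".join("1" * c for c in clue) + "0*"

pat = clue_pattern([3, 2, 1])
print(pat)                                    # 0*1110+110+10*
print(bool(re.fullmatch(pat, "0111011010")))  # True
print(bool(re.fullmatch(pat, "0111101100")))  # False
```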

As usual in my models, the regular constraint is just a "convenience function", i.e. it does not use specially written propagation methods etc.

The regular constraint has - of course - more applications than for Nonogram solving. I plan to look more into this.

The Nonogram solver:
After the regular constraint and the automaton generator were written, it was quite easy to change the old Nonogram solver to use these new tools. I was quite curious how fast this version would be compared to the former slow model. In short: it is much faster.

As a comparison, the Lambda instance took about 12 seconds with the old model; with the new model it takes 0.5 seconds, which is a nice improvement. I never finished a run of the Nonunique problem with the older model; the new model takes 0.5 seconds (including the startup of the Comet program). Etc.

The P200 problem now takes 2:05 minutes, which can be compared with 57 seconds for the Gecode/R model. Thus, the Comet model is still slow compared to the Gecode version and the ECLiPSe version, which solve the P200 problem in just a second or two. Maybe some creative labeling or a "proper" regular constraint could speed things up...

Update an hour later: One way to gain about 30 seconds (down to about 1:30 minutes) on the P200 problem was to explicitly state the consistency level onDomains, e.g.[m] == q0, onDomains), and to use another labeling strategy:

forall(i in 1..rows, j in 1..cols : !board[i,j].bound()) {
  // label(board[i,j]); // the former strategy
  tryall(v in 1..2)[i,j] == v, onDomains);
}

Some more Nonogram problems have been coded:

An aside about regular expressions
I have been very interested in regular expressions (especially the more expressive Perl type) for a long time. In 1997 I wrote a simple Perl program MakeRegex which returns a regular expression given a list of words. It was later Appletized in MakeRegexApplet. Now there are better programs/modules for this.

OPL Models

One of my projects is to translate the OPL models from Pascal Van Hentenryck's "The OPL Optimization Programming Language" into Comet. Even though OPL and Comet are different languages, reading the book has been very rewarding. Thanks again, Pascal!

Some OPL models already have been published, but now I've been more systematic and started from the beginning. More models to come.

Finding: arrays in a tuple
In Comet: New models, e.g. Einstein puzzle, KenKen, Kakuro, Killer Sudoku, Stigler's Diet problem, translations of OPL models I wrote:

One big difference here: In Comet it is not allowed to have an array in a tuple, so the use data must be a separate matrix.

I've found a way of using arrays in a tuple: first initialize the values in a matrix and then - by using the all function - "slice" the values into the tuple array. This has been done in the models.

Example:

tuple productType {
  int profit;
  set{Machines} machines;
  int[] use;
}

int use[Products, Resources] = [[3,4], [2,3], [6,4]];
productType Product[Products] =
  [productType(6, {shirtM}, all(i in Resources) use[shirt,i]),
   productType(4, {shortM}, all(i in Resources) use[shorts,i]),
   productType(7, {pantM}, all(i in Resources) use[pants,i])];

Combining different models

The alphametic problem SEND + MOST = MONEY has the additional requirement to maximize the value of MONEY. The older model does that and nothing more.

One natural extension of the problem is the following:
* first find the maximum value of MONEY
* then find all the solutions with this value.

The model has two functions:
* smm which is just the constraint for the alphametic problem
* send_most_money which has a parameter money.

send_most_money is first called with 0, indicating that it should maximize the value of MONEY, and it then returns that value. The next call to send_most_money is with the calculated MONEY value, which indicates that all solutions should be generated.

The answer is

check all solutions for MONEY = 10876
x[9,7,8,2,1,0,4,6] MONEY: 10876
x[9,7,8,4,1,0,2,6] MONEY: 10876
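The maximize-then-enumerate pattern can also be checked with a brute-force Python sketch (illustration only, not the Comet model): first collect all solutions, take the maximum MONEY, then keep the solutions attaining it.

```python
from itertools import permutations

def send_most_money():
    """SEND + MOST = MONEY: find the maximum MONEY, then all solutions
    with that value (brute force over digit assignments)."""
    solutions = []
    for s, e, n, d, m, o, t, y in permutations(range(10), 8):
        if s == 0 or m == 0:   # no leading zeros
            continue
        send = 1000*s + 100*e + 10*n + d
        most = 1000*m + 100*o + 10*s + t
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + most == money:
            solutions.append((money, (s, e, n, d, m, o, t, y)))
    best = max(money for money, _ in solutions)
    return best, [sol for money, sol in solutions if money == best]

best, sols = send_most_money()
print(best, sols)   # 10876, with the two assignments shown above
```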

Jim Orlin's Logic Puzzle

In Colored letters, labeled dice: a logic puzzle Jim Orlin stated a logic puzzle:
My daughter Jenn bought a puzzle book, and showed me a cute puzzle. There are 13 words as follows: BUOY, CAVE, CELT, FLUB, FORK, HEMP, JUDY, JUNK, LIMN, QUIP, SWAG, VISA, WISH.

There are 24 different letters that appear in the 13 words. The question is: can one assign the 24 letters to 4 different cubes so that the four letters of each word appears on different cubes. (There is one letter from each word on each cube.) It might be fun for you to try it. I’ll give a small hint at the end of this post. The puzzle was created by Humphrey Dudley.


If anyone wants to write an Excel spreadsheet and solve it via integer programming, please let me know. I’d be happy to post the Excel spreadsheet if you send it to me, or link to it if you post it and send me the URL.

This was a fun puzzle, so I modeled the problem in Comet, and mailed Jim the link.
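For illustration, here is a brute-force Python sketch of the puzzle (my own, unrelated to Pierre's or my Comet models): it assigns each of the 24 letters to one of four cubes by backtracking, with a cube-capacity check and a simple symmetry breaking rule.

```python
words = ["BUOY", "CAVE", "CELT", "FLUB", "FORK", "HEMP", "JUDY",
         "JUNK", "LIMN", "QUIP", "SWAG", "VISA", "WISH"]
letters = sorted(set("".join(words)))   # the 24 distinct letters

def solve():
    """Assign letters to 4 cubes (6 letters each) so that the four
    letters of every word land on four different cubes."""
    assign = {}

    def consistent(letter, cube):
        if sum(1 for c in assign.values() if c == cube) >= 6:
            return False                 # a cube holds only 6 letters
        for w in words:                  # no two letters of a word on one cube
            if letter in w and any(assign.get(l) == cube
                                   for l in w if l != letter):
                return False
        return True

    def search(i):
        if i == len(letters):
            return dict(assign)
        # symmetry breaking: open a new cube only if all lower ones are used
        used = max(assign.values(), default=-1)
        for cube in range(min(used + 2, 4)):
            if consistent(letters[i], cube):
                assign[letters[i]] = cube
                result = search(i + 1)
                if result:
                    return result
                del assign[letters[i]]
        return None

    return search(0)

solution = solve()
cubes = [[l for l in letters if solution[l] == c] for c in range(4)]
print(cubes)
```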

Some days later he wrote Update on Logic Puzzle, where the contributed models were presented. There was another Comet model by Pierre Schaus, one of Comet's developers. Pierre's model uses a different and more elegant approach than mine.

Jim also linked to some other logic puzzles from the great collection (which has solutions in ECLiPSe/Prolog). One of these puzzles was Building Blocks, in the same vein as his labeled dice puzzle. Hence I had to make a Comet model of this problem:

He concludes the post with the following:

Incidentally, the ease for solving it using Constraint Programming makes me think that Constraint Programming should be considered a fundamental tool in the OR toolkit.

Other Comet models this week

Here are the other Comet models created/published this week. Some of them were very easy to do since they are translations of my MiniZinc models.

February 21, 2009

Comet: New models, e.g. Einstein puzzle, KenKen, Kakuro, Killer Sudoku, Stigler's Diet problem, translations of OPL models

Here are some of the new Comet models written the last week.

Findings this week

Representing KenKen, Kakuro, and Killer Sudoku

One important thing in modelling a problem is how to represent the data. For grid puzzles such as KenKen, Kakuro, and Killer Sudoku, where one of the objectives is to operate (add, multiply, etc.) on an array of unknown length, I found one representation which works quite well (at least on the simple problems I tested).

Let's take the KenKen problem from Wikipedia.

The first version was quite large, and stated explicitly all the operations (addition, subtraction, multiplication, or division). This was slightly generalized in the same model, though.

After I relaxed the requirement to state exactly which operation to perform, things actually got simpler to model. In the Comet model we represent a cell with a tuple (struct) of two integers:

tuple cell {
  int r;
  int c;
}

The cage of numbers to add (or multiply, etc.) is represented by another tuple: a set of cells and the result (an integer):

tuple P {
  set{cell} c;
  int res;
}

Now we simply state the problem using this structure:

int num_p = 15; // number of cages/segments
P problem[1..num_p] =
  [P({cell(1,1), cell(2,1)}, 11),
   P({cell(1,2), cell(1,3)}, 2),
   P({cell(1,4), cell(2,4)}, 20),
   P({cell(1,5), cell(1,6), cell(2,6), cell(3,6)}, 6),
   P({cell(2,2), cell(2,3)}, 3),
   P({cell(2,5), cell(3,5)}, 3),
   P({cell(3,1), cell(3,2), cell(4,1), cell(4,2)}, 240),
   P({cell(3,3), cell(3,4)}, 6),
   P({cell(4,3), cell(5,3)}, 6),
   P({cell(4,4), cell(5,4), cell(5,5)}, 7),
   P({cell(4,5), cell(4,6)}, 30),
   P({cell(5,1), cell(5,2)}, 6),
   P({cell(5,6), cell(6,6)}, 9),
   P({cell(6,1), cell(6,2), cell(6,3)}, 8),
   P({cell(6,4), cell(6,5)}, 2)];

This representation makes it possible to read a problem from a file, etc. (which has not been done, though).

The single function that calculates the values in the segment given the result is as follows. Note: here I assume that division and subtraction are only done for two-element sets.

function void calc(Solver<CP> m, set{cell} cc, var<CP>{int}[,] x, int res) {
  if (cc.getSize() == 2) {
    var<CP>{int} a(m, x.getRange(0));
    var<CP>{int} b(m, x.getRange(0)); == x[cc.atRank(0).r, cc.atRank(0).c]); == x[cc.atRank(1).r, cc.atRank(1).c]); + b == res) || (a * b == res) ||
           (a * res == b) || (b * res == a) ||
           (a - b == res) || (b - a == res));
  } else { in cc) x[i.r, i.c] == res ||
           prod(i in cc) x[i.r, i.c] == res);
  }
}
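The same disjunctive check can be sketched in Python (illustration only; here the cage values are assumed known, whereas the Comet model posts the disjunction over decision variables):

```python
from math import prod

def cage_ok(values, res):
    """Check a KenKen cage without knowing its operator: for two cells
    try +, *, and - and / both ways; for larger cages only sum or
    product, matching the model's assumption above."""
    if len(values) == 2:
        a, b = values
        return (a + b == res or a * b == res or
                a - b == res or b - a == res or
                a * res == b or b * res == a)
    return sum(values) == res or prod(values) == res

print(cage_ok([5, 6], 11))     # True: 5 + 6
print(cage_ok([6, 1, 1], 6))   # True: 6 * 1 * 1 (or the sum... no: 8)
```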

In the exploreall we state the following constraints:

// all rows and columns must be unique
forall(i in R) in R) x[i,j]));
forall(j in R) in R) x[i,j]));

// the specific problem
forall(i in 1..num_p) {
  calc(m, problem[i].c, x, problem[i].res);
}
The other two grid puzzles, Killer Sudoku and Kakuro, are represented in Comet in the same way, using just slight modifications of the calc function and added or altered constraints.

For more about these three grid puzzles see Wikipedia's entries:
* KenKen
* Killer Sudoku
* Kakuro

Different solvers in one model: Sudoku generator

Speaking of grid puzzles, here is a (proof-of-concept) generator of Sudoku puzzles. It uses a simple brute force approach:

* Generate a problem matrix with the proper number of unknowns
* Try to solve the problem. If there is exactly one solution, all is fine; otherwise generate another problem.

It also uses two limits to ensure that one generation/solution round is not too costly (which may miss some good problems):
* timeout
* number of failures

The thing to note here is the way two different functions work together: first one constraint solver (model) generates a Sudoku matrix, then another constraint solver (model) actually solves the Sudoku problem.
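The generate-and-test idea can be sketched in Python (a stand-alone illustration using a backtracking solver; it omits the timeout and failure limits of the Comet version):

```python
import random

def candidates(grid, r, c):
    """Values not yet used in row r, column c, or the 3x3 box."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)} | \
           {grid[3 * (r // 3) + i][3 * (c // 3) + j]
            for i in range(3) for j in range(3)}
    return [v for v in range(1, 10) if v not in used]

def count_solutions(grid, limit=2):
    """Backtracking solver that stops once `limit` solutions are found."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                found = 0
                for v in candidates(grid, r, c):
                    grid[r][c] = v
                    found += count_solutions(grid, limit - found)
                    grid[r][c] = 0
                    if found >= limit:
                        return found
                return found
    return 1  # no empty cell left: one complete solution

def full_grid(rng):
    """Fill an empty grid with a random complete Sudoku solution."""
    grid = [[0] * 9 for _ in range(9)]
    def fill(pos):
        if pos == 81:
            return True
        r, c = divmod(pos, 9)
        values = candidates(grid, r, c)
        rng.shuffle(values)
        for v in values:
            grid[r][c] = v
            if fill(pos + 1):
                return True
        grid[r][c] = 0
        return False
    fill(0)
    return grid

def generate(n_unknowns=30, seed=1):
    """Generate-and-test: blank some cells of a full grid and keep the
    puzzle only if its solution is unique."""
    rng = random.Random(seed)
    while True:
        puzzle = full_grid(rng)
        for p in rng.sample(range(81), n_unknowns):
            puzzle[p // 9][p % 9] = 0
        if count_solutions([row[:] for row in puzzle]) == 1:
            return puzzle

puzzle = generate(n_unknowns=30, seed=1)
print(sum(row.count(0) for row in puzzle))  # 30
```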

User defined constraints

Earlier I wrote about how to define functions in Comet, e.g. in Some Comet constraint programming (CP) models.

This week I wanted to state my own constraints in the same way as the built-in ones, e.g. as;.

The following class implements the (global) constraint increasing, which then can be stated as;. Please note that this is just a "modelling" constraint; it relies on the underlying solver's propagation etc.

class increasing extends UserConstraint<CP> {
  var<CP>{int}[] _x;
  int _n;
  increasing(var<CP>{int}[] x) : UserConstraint<CP>() {
    _x = x;
    _n = _x.getSize();
  }
  Outcome<CP> post(Consistency<CP> cl) {
    Solver<CP> cp = _x[1].getSolver();
    forall(i in 2.._n) {[i-1] <= _x[i], cl);
    }
    return Suspend;
  }
  Outcome<CP> propagate() {
    return Suspend;
  }
}
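For intuition, the pairwise decomposition x[i-1] <= x[i] yields bounds propagation like the following Python sketch on (min, max) domains (illustration only, not how the solver is implemented):

```python
def propagate_increasing(domains):
    """Tighten (min, max) bounds under x[0] <= x[1] <= ... -- the effect
    of posting the pairwise constraints above, at the bounds level."""
    lo = [l for l, _ in domains]
    hi = [h for _, h in domains]
    for i in range(1, len(domains)):
        lo[i] = max(lo[i], lo[i - 1])          # pass minimums forward
    for i in range(len(domains) - 2, -1, -1):
        hi[i] = min(hi[i], hi[i + 1])          # pass maximums backward
    return list(zip(lo, hi))

print(propagate_increasing([(1, 9), (2, 3), (1, 5)]))
# -> [(1, 3), (2, 3), (2, 5)]
```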

The Comet model implements the following constraints:

  • alldifferent_except_0
  • increasing
  • decreasing
  • toNum
  • correspondence

Translating OPL models

I've started to read Pascal Van Hentenryck's great book about OPL, "The OPL Optimization Programming Language" (ISBN: 9780262720304, unfortunately out of print), and it is quite clear that Comet "is not very unlike OPL". Which is not a big surprise, since Van Hentenryck is the designer of both languages.

There are differences between Comet and OPL, sometimes quite big, but often they are easy to circumvent. Maybe later findings will discuss more about this.

The following Comet models are translations of OPL models, some of these are from the examples from ILOG's OPL Models.

  • Scheduling building a house (OPL). Comet uses a different way of stating scheduling constraints than OPL.
  • Set covering problem. Also note the labeling (in the using section), which solves the problem faster than the default label or labelFF.
  • Einstein puzzle, a translation of the OPL model from Daniel Selman's Einstein's Puzzle - Or, 'The Right Tool for the Job'. The presentation in the Comet model is different from the OPL model (which is shown commented out), but it is quite close.
  • Fixed-charge problem. One big difference here: in Comet it is not allowed to have an array in a tuple, so the use data must be a separate matrix.
  • From page 32 of Van Hentenryck's OPL book: finding an eight-digit square which remains a square when 1 is concatenated in front of it. This simple problem translates very well to Comet.
  • P-median problem. This also translates very well to Comet.
  • Production planning problem. Same difference as in the fixed-charge model, where arrays in a tuple are not allowed.
  • Warehouse location problem.


Here are the new Comet models. As usual more information about the problem/model is in the header of the model.

February 15, 2009

More Comet models, e.g. Nonogram, Steiner triplets, and different set covering problems

During the week some more Comet models have been published. Many of them are translations of my MiniZinc models.

Findings this week

Here are some findings for the previous week.

tryall again: Nonogram

One of the models I really wanted to make was Nonogram, since it would be interesting to compare with the MiniZinc model nonogram.mzn and the Gecode/R model nonogram.rb. The Comet model is

The first version was very influenced by the MiniZinc model, and used tryall where the MiniZinc model used exists (see the previous post About 20 more constraint programming models in Comet for some more comments about this). But this was way too slow.

After some thought, I rewrote the tryall:s and it got faster. For a simple Nonogram problem like The Hen (see picture below), it originally took about 30 seconds. After the changes, it now takes a fraction of a second (with a lot fewer propagations).

Nonogram The hen
. X X X . . . .
X X . X . . . .
. X X X . . X X
. . X X . . X X
. . X X X X X X
X . X X X X X .
X X X X X X . .
. . . . X . . .
. . . X X . . .

Still, compared to the Gecode/R (and - of course - Gecode) models, which use a "regular expression" representation, the Comet model is slow, but it is faster than the MiniZinc version. For example, the Lambda picture (see below) takes about 12 seconds with the Comet model, compared to 0.5 seconds with the Gecode/R model. Well, it is nice to have some more optimization to do (or, more probably, a complete remodelling)...

Nonogram: Lambda problem
. X X . . . . . . .
X . X X . . . . . .
X . . X . . . . . .
. . . X X . . . . .
. . . . X . . . . .
. . . X X X . . . .
. . . X X X . . . .
. . X X . X X . . .
. . X X . . X . . .
. X X . . . X X . X
. X X . . . . X X X
X X . . . . . X X .

To summarize, the main finding here was to replace two tryall statements with two decision variables, and change the rest accordingly. The model also includes two symmetry breaking constraints ("alldifferent except 0" and ordering of a temporary array), but these make no big difference.

binary representation of set variables

As far as I know, Comet doesn't (yet) have set variables as decision variables. Instead, another representation must be used, and probably the most natural one is an array of binary decision variables representing each value of the set.

The following models use this representation:
* partitions a range of integers 1..n into two equal-sized sets where the sums and the sums of squares of the two sets are equal.
* the well known problem of Steiner triples.
* partitions values according to a function of some kind.
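The encoding itself is simple; here is a Python sketch of it (illustration only), together with a brute-force check of the equal-sums/equal-squares partition for n = 8:

```python
from itertools import combinations

def set_to_binary(s, n):
    """Encode a subset of 1..n as a 0/1 array, one entry per value."""
    return [1 if v in s else 0 for v in range(1, n + 1)]

def binary_to_set(b):
    return {i + 1 for i, x in enumerate(b) if x == 1}

# Equal-sized partition of 1..8 with equal sums and equal sums of squares.
n = 8
full = set(range(1, n + 1))
parts = [(set(a), full - set(a))
         for a in combinations(full, n // 2)
         if sum(a) == sum(full - set(a))
         and sum(x * x for x in a) == sum(x * x for x in full - set(a))]
print(parts[0], set_to_binary(parts[0][0], n))
```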


These models are available on My Comet page. As usual, the problems are presented in the header of each file.

February 09, 2009

About 20 more constraint programming models in Comet

Since Some Comet constraint programming (CP) models, I've continued to translate some more of my (approx. 600) MiniZinc models to Comet. There are also a few new models.

Findings this week


One finding this week is that tryall almost behaves like MiniZinc's exists, a very useful modelling tool. See for example the following models for the usage:
* Hidato puzzle
* Knights path
* SONET problem
* Word golf.

tryall behaves slightly differently depending on where it is placed: in the main explore/exploreall section, it may not generate all the solutions; but when placed in the using section, it generates all the solutions (and makes the model a bit slower).

Search strategies (labelling)

I've started to experiment with different search strategies (labelling), but for most of the models the two builtin strategies label and labelFF (first fail) are reasonably fast. This is an area I will look into more.

Other things

* strings/dictionaries: It is very nice to be able to use "real" strings in modelling. See for example Word golf, and Set covering deployment. The latter model also - just for fun - has a hash table for counting the number of occurrences of the selected countries.

* more than one model in the same program: MiniZinc only allows one model per file. As an experiment, (two player zero sum game) first solves for one player, and then another model solves for the second player (which is not necessary, since the solutions form a dual problem). Also, in Word golf, there is a loop of models solving for different ladder lengths.

The models

As usual, the description of the problem is included in the model, together with references to my other model(s) of the same problem.

* Bus scheduling (Taha)
* Calculs d'enfer puzzle (from the NCL manual)
* Crew scheduling (Gecode)
* Discrete Tomography (one color model)
* Fill in the squares, recreational mathematics (from the ZDC system; origin unknown)
* Optimizing furniture moving (Marriott and Stuckey)
* 2 player zero sum game (Taha). This uses the MIP (mixed integer programming) module.
* Heterosquare problem
* Hidato puzzle
* Knight path
* Least square (from Lundgren, Rönnqvist, Värbrand: "Optimeringslära"), using the MIP module
* Maximum flow problem (Winston)
* Pandigital numbers (any base up to about 12)
* Photo problem
* Eight number placement puzzle
* Quasigroup completion problem, with separate problem files
* Set covering, placing of firestations (Winston)
* Set covering deployment
* SONET problem
* Survo puzzle
* Traffic lights problem (CSPLib problem 16)
* Who killed Agatha? (The Dreadsbury Mansion Murder Mystery, an automated reasoning problem)
* Word golf (word chain, word ladder), with a data file of 2044 3-letter words
* Young tableaux and partitions

January 31, 2009

Some Comet constraint programming (CP) models

Earlier this week, I wrote about Comet in Comet version 1.1. For more about Comet, see the link mentioned above and newly created My Comet Page.

A year ago, when I first tested Comet, it was with the constraint-based local search module (LocalSolver, LS), but now I've modelled some of my standard problems in Comet's "classic" constraint programming module (CP). I hopefully will write more about local search modelling in Comet later.

Notes about constraint modelling in Comet

I really like working with Comet's constraint programming module (cotfd, finite domain). The comments below (and the models) should be read in light of the fact that this is my first modelling with this CP module. There are probably other/better ways to do things.

My main experience with modelling in constraint programming is in MiniZinc so I use that system as a comparison.

The Comet models below were mostly translated from the MiniZinc models, and it was quite easy to do this translation, with some exceptions, e.g. the declaration of variables.

Comet has some features that I have really missed in MiniZinc:
- command line arguments and I/O. See for an example.

- seamless conversion of floats and integers, and between integers and booleans.

- recursion. Well, I haven't had to use recursion in Comet yet, but it's nice to know it works.

- generating and collecting all solutions. Using exploreall, all solutions to a problem may be obtained and collected in the model. In some of the MiniZinc solvers (Gecode and ECLiPSe) all solutions can be generated, but not collected in the model. Also, in Comet it is easy to model a problem with different input values (see for an example).

- writing search heuristics. Search heuristics can be a real time saver. Comet supports writing your own labelling functions. I have not used this feature very much, but experimented with it in some models, e.g. (Langford's number problem). For my models, the built-in labelling functions label and labelFF (first-fail) seem to be enough. The examples in the distribution use more advanced labelling techniques.

Comet is a big system, with many more features than those used in my first models, e.g. the local search, writing user defined constraints, tuples, classes etc etc.

In summary:
I have not found any great differences in speed, propagations etc. between MiniZinc (and its solvers) and Comet, but these problems are not especially demanding, and I didn't tune the models. This would be future work.

Comet is a great system and fun to use. I've now started to reread the book Constraint-Based Local Search by Pascal Van Hentenryck and Laurent Michel, about the constraint-based local search (LS) module in Comet, which has features for solving very large problems.

Code example

In Comet version 1.1 two Comet models were shown. Here are some small examples of coding in Comet. Since I like recreational mathematics, let's convert between an array of numbers and an integer.

function void toNum(Solver<CP> m, var<CP>{int}[] x, var<CP>{int} num, int base) { == sum(i in 1..x.getSize()) base^(x.getSize()-i) * x[i]);
}

It is then used in the model, with the Solver as an argument, like this, where we also constrain the integer variable z to 123. The output of x will - of course - be [0,0,1,2,3]:

Solver<CP> m();
var<CP>{int} x[i in 1..5](m, Digits);
var<CP>{int} z(m, 0..100000);
exploreall<m> {
  toNum(m, x, z, base); == 123);
} using {
  label(m);
  cout << x << endl;
}

This whole test model is
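The relation that toNum posts can be checked in plain Python (illustration only):

```python
def to_num(digits, base=10):
    """Interpret a digit array as a number in the given base --
    the same relation the toNum constraint posts over variables."""
    num = 0
    for d in digits:
        num = num * base + d
    return num

print(to_num([0, 0, 1, 2, 3]))  # 123
print(to_num([1, 0, 1], 2))     # 5
```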

Here are two other functions, alldifferent_except_0 and increasing (ensure that an array is sorted). Both are used in the model

function void alldifferent_except_0(Solver<CP> m, var<CP>{int}[] x) {
  int n = x.getSize();
  forall(i in 1..n, j in i+1..n) {[i] > 0 && x[j] > 0 => x[i] != x[j]);
  }
}

function void increasing(Solver<CP> m, var<CP>{int}[] x) {
  forall(i in 2..x.getSize())[i-1] <= x[i]);
}

My Comet models

Below are some of my Comet models for (classic) constraint programming. Explanation of the problem, and references are in each specific file.

For all models, with one exception, the constraint programming module (CP) was used. The Coin grid model, however, uses mixed integer programming (MIP, in the LP module). I first did a constraint programming variant, but it was too slow (as were all the classic constraint programming models I've written in MiniZinc, Gecode/R etc.). Integer programming really shines here.

So, here are the models.

Most of these models were translated from my models in other constraint programming systems:

January 25, 2009

Comet version 1.1

The other day the constraint-based local search system Comet version 1.1 was released.

The release contains the following new features (mailed to the mailing list, and slightly edited):

Dynadec is proud to announce Comet Release 1.1. New features in this release include:
* Addition of a priority to user-defined constraints.
* Addition of the function sqrt for the local solver.
* Addition of the method "tryPost" on the CP solver for posting constraints in user-defined constraints.
* Addition of the total number of failures (i.e., across restarts).
* Addition of a method startWithRestart to start LNS with a restart which may generate an initial solution.
* Function minAssignment now includes an alldifferent and a computation of the cost
* Added the ability to select from a set of strings
* Added a onFailure clause to select to handle elegantly the case where no value is selected
* Text based debugger on all platforms. (-gtext)
* GUI for debugger on OSX and Linux (-ggui) [experimental]
* C++ API: with good packaging

We encourage all current customers and evaluators to download the latest at

The distribution includes 18 examples of using Comet with three different paradigms (modules):
* constraint-based local search
* traditional constraint programming
* linear programming

It also contains detailed descriptions of the API (HTML). Note that the distribution is now via Dynadec, not Comet online (which links to an older version).

An earlier version of Comet is documented in the great book Constraint-Based Local Search by Pascal Van Hentenryck and Laurent Michel, the main developers of Comet. The book explains Comet and the principles of constraint-based local search very well, and I really recommend it. The newer development of Comet has been published in papers by the Comet developers; see Publications. (By the way, I also recommend Van Hentenryck's classic and pathbreaking book "Constraint Satisfaction in Logic Programming", which was my second read in this field, and I still enjoy re-reading it.)

I like Comet, but have been somewhat frustrated when version 1.02 (the former version) broke older code (sometimes only slight changes were needed, though), but now it has stabilized. In the future I hope to come back with some of my own models.

Example: N-queen
As can be seen from the examples below, the Comet language is quite C++-ish (or maybe rather OPL-ish?).

Here are two models of the n-queens problem. First, using constraint-based local search (the example ls/

import cotls;
int t0 = System.getCPUTime();
Solver<LS> m();
int n = 16;
range Size = 1..n;
UniformDistribution distr(Size);
var{int} queen[Size](m, Size) := distr.get();
ConstraintSystem<LS> S(m);; in Size) queen[i] + i)); in Size) queen[i] - i));
m.close();
int it = 0;
while (S.violations() > 0 && it < 50 * n) {
  selectMax(q in Size)(S.violations(queen[q]))
    selectMin(v in Size)(S.getAssignDelta(queen[q], v))
      queen[q] := v;
  it = it + 1;
}
cout << queen << endl;

It solves the 1000-queens problem in about 1.5 seconds.
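The same local search idea - pick a bad queen, move it to a better value - can be sketched in Python as a min-conflicts search (an illustration, not a translation: it picks a random conflicted queen instead of selectMax, and adds restarts):

```python
import random

def queens_local_search(n, seed=0, max_iters=None):
    """Min-conflicts local search for n-queens: repeatedly pick a
    conflicted queen and move it to a column minimizing its conflicts."""
    rng = random.Random(seed)
    queen = [rng.randrange(n) for _ in range(n)]
    max_iters = max_iters or 100 * n

    def conflicts(q, v):
        return sum(1 for j in range(n)
                   if j != q and (queen[j] == v or
                                  abs(queen[j] - v) == abs(j - q)))

    for _ in range(max_iters):
        violated = [q for q in range(n) if conflicts(q, queen[q]) > 0]
        if not violated:
            return queen
        q = rng.choice(violated)
        best = min(conflicts(q, v) for v in range(n))
        queen[q] = rng.choice([v for v in range(n)
                               if conflicts(q, v) == best])
    return None  # did not converge within max_iters

def solve_queens(n, restarts=50):
    for seed in range(restarts):
        sol = queens_local_search(n, seed)
        if sol:
            return sol
    return None

sol = solve_queens(16)
print(sol)
```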

Here is the same problem, using the (traditional) constraint programming module (the example cp/

import cotfd;
Solver<CP> m();
int t0 = System.getCPUTime();
int n = 8; // 1000;
range S = 1..n;
var<CP>{int} q[i in S](m, S);
Integer np(m.getNPropag());
cout << "Initiating search..." << endl;
Integer c(0);
solveall<m> { in S) q[i] + i), onBounds); in S) q[i] - i), onBounds);, onBounds);
} using {
  forall(i in S) by (q[i].getSize()) { // : !q[i].bound()) by (q[i].getSize()) {
    tryall<m>(v in S : q[i].memberOf(v))
      m.label(q[i], v);
  }
  c := c + 1;
}
cout << "Nb : " << c << endl;
int t1 = System.getCPUTime();
cout << "Solution = " << q << endl;
cout << "time: " << t1 - t0 << endl;
cout << "#choices = " << m.getNChoice() << endl;
cout << "#fail = " << m.getNFail() << endl;
cout << "#propag = " << m.getNPropag() - np << endl;

Also see:
* Download Comet
* Dynadec forums
* Comet online
* Publications, which contains many publications about Comet
* Mailing list
* Wiki. Note: the wiki has not been updated for a while. For documentation about the API, see the documentation in the distribution.