
November 01, 2014

Decision Management Community November 2014 Challenge: Who killed Agatha?

This blog post was inspired by Decision Management Community November 2014 Challenge: Challenge Nov-2014 Decision model 'Who killed Agatha?'. The original text - and the one linked to from the Challenge page - is here.

From DM Community Challenge Nov-2014:
Someone in Dreadsbury Mansion killed Aunt Agatha.
Agatha, the butler, and Charles live in Dreadsbury Mansion, and
are the only ones to live there. A killer always hates, and is
no richer than his victim. Charles hates noone that Agatha hates.
Agatha hates everybody except the butler. The butler hates everyone
not richer than Aunt Agatha. The butler hates everyone whom Agatha hates.
Noone hates everyone. Who killed Agatha?
A model for this problem is actually one of the first I implement whenever I learn a new Constraint Programming (CP) system, since my approach uses some features that are important to test (they are explained in some detail below):
  • reification: "reasoning about constraints"
  • matrix element: how to index a matrix with decision variables
The same general approach is implemented in most of the CP systems that I've tested so far: http://hakank.org/common_cp_models/#whokilledagatha See also below for the full list with links.

All the encodings use the same approach: defining two binary 3x3 matrices of decision variables:
  • a "hates" matrix, i.e. who hates who
  • a "richer" than matrix: who is richer than who
And a Killer decision variable with range 1..3 (Agatha=1, Butler=2, and Charles=3). In some encodings (such as the one below), Victim is also a decision variable, but since we know from the start that Agatha is the victim, this decision variable can be skipped.

The constraints are then stated on these two matrices plus the Killer (and Victim) decision variables.

First I describe the MiniZinc model in detail, and after that I discuss the solution and why the model returns 8 solutions (all with the same killer: Agatha).

The MiniZinc model

My original MiniZinc model is here who_killed_agatha.mzn.

The encoding shown below is slightly altered for simpler presentation.

Initialization of constants and domains

The following defines the dimension of the matrices (3x3) and also the domain - the possible values - of the killer and victim variables (1..3).
int: n = 3;
set of int: r = 1..3;
Here we define the constants agatha, butler, and charles. The values are fixed to the integers 1..3 since the solvers used handle finite domains in terms of integers. (There are exceptions to this. In short: finite domains over integers are the most common in CP. Some systems also support set variables and/or float decision variables.)
% the people involved
int: agatha  = 1;
int: butler  = 2;
int: charles = 3;

Decision variables

The two decision variables the_killer and the_victim both have the domain 1..3, i.e. the set {agatha, butler, charles}.
var r: the_killer;
var r: the_victim;
The 3x3 entries in the two matrices hates and richer are binary decision variables: 1 represents true, 0 represents false.
array[r,r] of var 0..1: hates;
array[r,r] of var 0..1: richer;
The following statement just states that the CP solver should use its default search method.
solve satisfy;
Aside: There is a range of different search heuristics available for most CP solvers that can speed up more complex CSPs significantly; e.g., for an array x of decision variables one could state:
   solve :: int_search(x, first_fail, indomain_split, complete);
but for this simple problem we just settle for the default.

Constraints

Next is the constraint section, where all the requirements (constraints) of the problem are specified. The constraints appear in the same order as in the problem description. In contrast to the MiniZinc version on my MiniZinc site (the link above), I have here separated each constraint into its own section to emphasize the specific constraint.

Constraint: A killer always hates, and is no richer than his victim.

Here we see an example of a "matrix element" constraint, i.e. a decision variable (the_killer) is used as an index into a matrix of decision variables, e.g.
     hates[the_killer, the_victim] = 1
[Aside: In MiniZinc this can be done in this syntactically pleasing ("natural") way, though most of the other CP systems cannot handle this syntax, so an alternative syntax must be used. However, most CP systems have a syntax for the one-dimensional element constraint, which is then stated as
    element(Y,X,Z)
which corresponds to
    X[Y] = Z,
where X is an array of decision variables, and both Y and Z can be decision variables. The element constraint is central to CP and is one of the features that separates it from traditional LP/MIP modeling systems.] The first constraint
   hates[the_killer, the_victim] = 1
simply means that the killer hates the victim, and the second that the killer is not richer than the victim.
constraint
   % A killer always hates, and is no richer than his victim. 
   hates[the_killer, the_victim] = 1 /\
   richer[the_killer, the_victim] = 0
;
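In systems without direct matrix indexing, a matrix element constraint is typically decomposed into a one-dimensional element constraint over the flattened matrix. Here is a small sketch of that idea as a ground-value check in Python (the helper names are mine, just for illustration):

```python
def element(y, x, z):
    # element(Y, X, Z): X[Y] = Z, with 1-based indexing as in most CP systems
    return x[y - 1] == z

def matrix_element(m, n, i, j, z):
    # M[i, j] = Z decomposed as element((i - 1) * n + j, flatten(M), Z)
    flat = [v for row in m for v in row]
    return element((i - 1) * n + j, flat, z)

# e.g. checking hates[the_killer, the_victim] = 1
# for the_killer = the_victim = agatha (= 1)
hates = [[1, 0, 1],
         [1, 0, 1],
         [0, 0, 0]]
print(matrix_element(hates, 3, 1, 1, 1))  # True
```

In a real CP solver, the indices would of course be decision variables whose domains are pruned; the sketch only shows the index arithmetic of the decomposition.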

The concept of richer

The only relations we are using here are hates and richer, and we must be careful about the meaning of these concepts.

Hates: There is no logic restricting how the hatred relation can be used: A and B can both hate each other, one of them can hate the other, or neither hates the other. Note that A can hate him-/herself.

Richer: The concept of richer is a completely different matter, however. There is a logic involved in this relation:
  • if A is richer than B, then B cannot be richer than A
  • A cannot be richer than him/herself
Realizing that the richer relation is special is important for this model. Without it, there would be many more solutions (256 instead of 8), though all these 256 solutions point to the same killer: Agatha. (See below for an analysis of the 8 different solutions.)

As mentioned above, we don't need to - and cannot - do a similar analysis on (the semantic meaning of) the hate relation.

See A note on the "richer" relation below for a comment on the assumption that either A is richer than B or B is richer than A.
constraint
   % define the concept of richer

   % no one is richer than him-/herself
   forall(i in r) (
      richer[i,i] = 0
   )

   % (contd...) if i is richer than j then j is not richer than i
   /\ 
   forall(i, j in r where i != j) (
      richer[i,j] = 1 <-> richer[j,i] = 0
   )
;
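With these two rules, each off-diagonal pair gets exactly one orientation, so for n people there are 2^(n*(n-1)/2) possible richer matrices. A quick brute-force sanity check in Python (not part of the model):

```python
from itertools import product

n = 3
count = 0
for bits in product([0, 1], repeat=n * n):
    r = [list(bits[n * i:n * i + n]) for i in range(n)]
    # no one is richer than him-/herself
    irreflexive = all(r[i][i] == 0 for i in range(n))
    # richer[i,j] = 1  <->  richer[j,i] = 0, for i != j
    oriented = all((r[i][j] == 1) == (r[j][i] == 0)
                   for i in range(n) for j in range(n) if i != j)
    if irreflexive and oriented:
        count += 1
print(count)  # 2^(3*2/2) = 8
```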

Constraint: Charles hates noone that Agatha hates.

Here again, reification is handy. In pseudo code:
  FOR EACH person I in the house:
     IF Agatha hates I THEN Charles doesn't hate I
constraint
  % Charles hates noone that Agatha hates. 
   forall(i in r) (
      hates[charles, i] = 0 <- hates[agatha, i] = 1
   )
;

Constraint: Agatha hates everybody except the butler

Here we simply state three hate facts.

It is important not to forget that Agatha can hate herself. Missing this fact in the model would yield that either Agatha or Charles could be the killer.
constraint
   % Agatha hates everybody except the butler. 
   hates[agatha, charles] = 1  /\
   hates[agatha, agatha] = 1 /\
   hates[agatha, butler] = 0
;

Constraint: The butler hates everyone not richer than Aunt Agatha.

Same as above, we use reification to handle this:
   FOR EACH person I in the house
     IF I is not richer than Agatha THEN Butler hates I
which is implemented as
constraint
   % The butler hates everyone not richer than Aunt Agatha. 
   forall(i in r) (
     hates[butler, i] = 1 <- richer[i, agatha] = 0
   )
;

Constraint: The butler hates everyone whom Agatha hates.

Same reasoning with reifications as above.
constraint
   % The butler hates everyone whom Agatha hates. 
   forall(i in r) (
      hates[butler, i] = 1 <- hates[agatha, i] = 1
   )
;

Constraint: Noone hates everyone.

Here we count - for each person - the number of 1's (i.e. true) for each row in the hates matrix, and ensure that they are less than 3 (i.e. not all).
constraint
   % Noone hates everyone. 
   forall(i in r) (
     sum(j in r) (hates[i,j]) <= 2     
   )
;

Who killed Agatha?

As mentioned above, this is not really needed since we could have hard coded the_victim to agatha without changing anything.
constraint
   % Who killed Agatha? 
   the_victim = agatha
;
To summarize: This model uses a couple of important concepts in Constraint Programming:
  • decision variables with specific (finite) domains
  • reifications, implication and equivalence between constraints
  • element constraint (here in terms of the more complex variant: matrix element)

Solution and analysis

There are in total 8 different solutions for this MiniZinc model, all stating that Agatha is the killer. (Being able to get all solutions to a combinatorial problem is an excellent feature of Constraint Programming.)

The reason this model gives more than one solution is that it is - in my interpretation of the problem - under-constrained: there is not enough information about certain relations in the hates and richer matrices to yield a unique solution.

Below are the values of the hates and richer matrices after all constraints have been activated, but before search (i.e. before traversing the search tree and assigning all the decision variables to give a solution), as well as the killer variable.

The entries with "0..1" are the decision variables that are not constrained (decided) enough before search. (Note: Not all CP solvers can show the inferred variable domains before search. The table below is from my Picat model, slightly altered: who_killed_agatha.pi)
    Killer:
    killer= 1

    Hates
    --------
    Agatha : Agatha : 1     Butler : 0        Charles: 1                   
    Butler : Agatha : 1     Butler : 0        Charles: 1                   
    Charles: Agatha : 0     Butler : 0..1     Charles: 0                   

    Richer
    --------
    Agatha : Agatha : 0     Butler : 0        Charles: 0..1       
    Butler : Agatha : 1     Butler : 0        Charles: 0..1       
    Charles: Agatha : 0..1  Butler : 0..1     Charles: 0                  
These undecided variables involve whether:
  • Charles hates Butler (or not)
  • Agatha is richer than Charles (or not)
  • Butler is richer than Charles (or not)
  • Charles is richer than Agatha (or not)
  • Charles is richer than Butler (or not)
We see that:
  • The killer has been assigned to 1 (= Agatha).
  • Many (namely 5) of the variables involving Charles are undecided before search, i.e. the constraints alone cannot determine whether they are true or not. This is perhaps not too surprising since most constraints in the model - and in the problem statement - involve only Agatha and the Butler.
Also, two pairs of these (under-constrained) Charles variables cannot both be true at the same time in the same solution. In each solution, one variable of each pair must be true and the other false:
  • "Agatha is richer than Charles" and "Charles is richer than Agatha"
  • "Butler is richer than Charles" and "Charles is richer than Butler"
This setting is basically the same as having 5 binary variables, V1..V5 (after a suitable reordering), where the variables V1 and V2 are xored (i.e. exactly one of V1 and V2 is true and the other false), and the variables V3 and V4 are xored; V5 is unconstrained.

Thus there are 2*2*2 = 8 different solutions:
      [0,1,0,1,0]
      [0,1,0,1,1]
      [0,1,1,0,0]
      [0,1,1,0,1]
      [1,0,0,1,0]
      [1,0,0,1,1]
      [1,0,1,0,0]
      [1,0,1,0,1]
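These eight vectors can be generated directly from the xor description (a small Python check; the ordering matches the listing above):

```python
from itertools import product

# 5 binary variables where V1 xor V2 and V3 xor V4 must hold; V5 is free
sols = [v for v in product([0, 1], repeat=5)
        if v[0] != v[1] and v[2] != v[3]]
print(len(sols))          # 2 * 2 * 2 = 8
print(sols[0], sols[-1])  # (0, 1, 0, 1, 0) (1, 0, 1, 0, 1)
```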
This concludes the analysis.

A note on the "richer" relation

Nothing rules out that A can be as rich as B (i.e. that neither is richer than the other). However, in this simple model we assume a stricter variant: that either A is richer than B, or B is richer than A. If we changed the equivalence relation
   richer[i,j] = 1 <-> richer[j,i] = 0
(which means:
   IF A is richer than B THEN B is not richer than A
   AND
   IF B is not richer than A THEN A is richer than B)
to an implication
    richer[i,j] = 1 -> richer[j,i] = 0
(IF A is richer than B THEN B is not richer than A)
then the principal solution doesn't change, but we would get 18 solutions instead of 8, all pointing to the same killer: Agatha.
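Both solution counts can be checked with a brute-force sketch in Python that enumerates all hates/richer matrices and killer candidates. This is my own re-statement of the constraints outside MiniZinc, only meant as a sanity check:

```python
from itertools import product

AGATHA, BUTLER, CHARLES = 0, 1, 2
PEOPLE = range(3)

def count_solutions(equivalence=True):
    n_sols, killers = 0, set()
    for hbits in product([0, 1], repeat=9):
        hates = [list(hbits[3 * i:3 * i + 3]) for i in PEOPLE]
        for rbits in product([0, 1], repeat=9):
            richer = [list(rbits[3 * i:3 * i + 3]) for i in PEOPLE]
            # the "concept of richer": irreflexive, plus either the
            # equivalence variant or the weaker implication variant
            if any(richer[i][i] == 1 for i in PEOPLE):
                continue
            if equivalence:
                if any((richer[i][j] == 1) != (richer[j][i] == 0)
                       for i in PEOPLE for j in PEOPLE if i != j):
                    continue
            else:
                if any(richer[i][j] == 1 and richer[j][i] == 1
                       for i in PEOPLE for j in PEOPLE if i != j):
                    continue
            for killer in PEOPLE:
                victim = AGATHA
                if (hates[killer][victim] == 1
                        and richer[killer][victim] == 0
                        and all(hates[CHARLES][i] == 0
                                for i in PEOPLE if hates[AGATHA][i] == 1)
                        and hates[AGATHA][AGATHA] == 1
                        and hates[AGATHA][CHARLES] == 1
                        and hates[AGATHA][BUTLER] == 0
                        and all(hates[BUTLER][i] == 1
                                for i in PEOPLE if richer[i][AGATHA] == 0)
                        and all(hates[BUTLER][i] == 1
                                for i in PEOPLE if hates[AGATHA][i] == 1)
                        and all(sum(hates[i]) <= 2 for i in PEOPLE)):
                    n_sols += 1
                    killers.add(killer)
    return n_sols, killers

print(count_solutions(True))   # (8, {0})  - equivalence variant
print(count_solutions(False))  # (18, {0}) - implication variant
```

The equivalence variant yields 8 solutions and the implication variant 18, all with Agatha (index 0) as the killer.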

OTHER ENCODINGS

Here is a list of all the encodings of the Agatha murder problem that I have implemented so far. Whenever I learn a new CP system, the Agatha problem will probably be among the first I test. See http://hakank.org/common_cp_models/#whokilledagatha.

October 04, 2011

Crossword construction in MiniZinc using table constraints - a small benchmark on "72 Gecode problems"

Almost all models, wordlists, and files mentioned in this blog post are available at my MiniZinc Crossword page.

Abstract

The method presented here for constructing (solving, generating) crosswords is based on "just a bunch" of table constraints representing the words, together with a rather simple unicity constraint. After an introduction to the problem and a description of the general approach, a benchmark over 72 crossword problem instances is reported, comparing two FlatZinc solvers (Gecode/fz and Chuffed) and the crossword solver distributed with Gecode (written in C++), which uses element constraints for the intersections and an alldifferent constraint for the unicity requirement. We will find that this MiniZinc model and the FlatZinc solvers are competitive and in some cases even faster than the Gecode (C++) model.

Introduction

Some weeks ago I started to read Ivan Bratko's "Prolog Programming for Artificial Intelligence" (the very new 4th edition, ISBN: 9780321417466). On page 27f there was a very simple crossword problem:
The problem is to fill this crossword with words:
    
    L1   L2    L3   L4    L5   XXX
    L6   XXX   L7   XXX   L8   XXX
    L9   L10   L11  L12   L13  L14
    L15  XXX   XXX  XXX   L16  XXX

Where the L* are letters to be identified.
and also a couple of words to use:
dog, run, top
five, four, lost, mess, unit
baker, forum, green, super
prolog, vanish, wonder, yellow
One common approach to solving/generating crosswords is to identify the intersections of the words and then use a wordlist to match these intersections (e.g. the very simple crossword.mzn). This is usually done with element constraints for the intersections and an alldifferent constraint to ensure unicity of the selected words.

Instead of this approach, I got the idea (not very revolutionary, but still) to try using only "a bunch of" table constraints representing the words (called "segments" in the model), which would then handle the intersections via the naming of the variables. This was implemented in the MiniZinc model crossword_bratko.mzn. The table constraints were manually created by just using the numbers of the "free" letters in the problem (L1, L2, etc.). The array of decision variables (L) has the domain 1..num_letters (1..26, for "a".."z"). The dummy variable L[0] was later added to handle the fill-outs (and is hard-coded to a single value).

Here is the kernel of the Bratko problem which consists of the two row (across) words, and the three column (down) words:
array[0..N] of var 1..num_letters: L;
constraint
   % across
   table([L[1],L[2],L[3],L[4],L[5]], words5)
   /\
   table([L[9],L[10],L[11],L[12],L[13],L[14]], words6)


   /\ % down
   table([L[1],L[6],L[9],L[15]], words4)
   /\
   table([L[3],L[7],L[11]], words3)
   /\
   table([L[5],L[8],L[13],L[16]], words4);
The second argument of table is a matrix where all words of the same size are collected, e.g. words5 contains all words of length 5. In earlier crossword models I have tended to collect all words in a single - and often huge - matrix with a lot of "0" as fillers to make it a proper matrix.

As mentioned above, the intersections are not explicitly identified in the model; instead they are just a consequence of the same letter identifier "happening" to appear in an across word and a down word. E.g. L[3] represents the third letter of the first across word and the first letter of the second down word. The index (3) is just a counter of the "free" letters. In this problem there are 16 free letters, represented by L[1]..L[16].

Also note that in this problem we don't have to care about 1-letter words. In the general model presented below, we will see that 1-letter words must be handled in a special way.
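To see the approach in action outside MiniZinc, here is a brute-force sketch of the Bratko instance in Python, where the segment/index lists mirror the table constraints above and the intersections are handled purely by the shared letter indices (the helper structure is mine):

```python
from itertools import product

words3 = ["dog", "run", "top"]
words4 = ["five", "four", "lost", "mess", "unit"]
words5 = ["baker", "forum", "green", "super"]
words6 = ["prolog", "vanish", "wonder", "yellow"]

# each segment lists the letter indices (L1..L16) it covers,
# mirroring the table constraints in the MiniZinc model
segments = [
    ([1, 2, 3, 4, 5], words5),          # across
    ([9, 10, 11, 12, 13, 14], words6),  # across
    ([1, 6, 9, 15], words4),            # down
    ([3, 7, 11], words3),               # down
    ([5, 8, 13, 16], words4),           # down
]

solutions = []
for choice in product(*[ws for _, ws in segments]):
    L = {}
    ok = True
    for (idxs, _), word in zip(segments, choice):
        for i, ch in zip(idxs, word):
            if L.setdefault(i, ch) != ch:  # intersection letters must agree
                ok = False
                break
        if not ok:
            break
    if ok and len(set(choice)) == len(choice):  # all words distinct
        solutions.append(choice)

print(solutions)  # [('forum', 'vanish', 'five', 'run', 'mess')]
```

Here the distinctness check simply requires distinct chosen words, which for this instance has the same effect as the model's "not all letters equal" unicity constraint.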

The constraint that all words should be distinct is implemented with the constraint shown below. The matrix segments contains the segments (i.e. the words) in the crossword grid, where each letter is identified by a unique integer (the index in L), and "0" (zero) is used as a filler so that the segments have the same length and can be represented as a matrix. It has two parts, the segments across (rows) and the segments down (columns), and it has the same structure as the table constraints.
segments = array2d(1..num_segments, 1..max_length, 
[
   % across
   1, 2, 3, 4, 5, 0,
   9,10,11,12,13,14,

   % down
   1, 6, 9,15, 0, 0,
   3, 7,11, 0, 0, 0,
   5, 8,13,16, 0, 0
]);

% the segments/words should be all distinct
constraint
   forall(I,J in 1..num_segments where I < J) (
      not(forall(K in 1..max_length) (
          L[segments[I,K]] = L[segments[J,K]]
      ))
   )
;
(The structure of the table constraints and the segments is much the same, and I have tried to represent this in a single matrix, but stumbled on the problem of how to represent an appropriate wordlist structure. This may very well be fixed in a later version.)

This model has a single solution (given the wordlist shown above):
L = [6, 15, 18, 21, 13, 9, 21, 5, 22, 1, 14, 9, 19, 8, 5, 19]

f o r u m *
i * u * e *
v a n i s h
e * * * s *

Generalization of the model and Gecode's 72 crossword problem

Since Bratko's problem was so small, I then wondered how well MiniZinc would do on a variety of larger crossword problems using the same basic approach, for example the 72 crossword instances in Gecode's crossword.cpp (it is a large file, but most of it is definitions of these 72 problem instances). This model uses element constraints by projecting words onto their individual letters. In contrast, the MiniZinc model uses table constraints encoding the entire words. One of the objectives of the benchmark is to compare these two approaches.

The 72 problem instances are of different sizes, ranging from small problems with grids of 5x5 cells to larger 23x23 grids. See the comments in the Gecode model for more details. The problems have been collected from different sources, e.g. Herald Tribune Crosswords and early studies of backtracking, e.g. M.L. Ginsberg Dynamic Backtracking and M.L. Ginsberg et al. Search Lessons Learned from Crossword Puzzles. Some of them have also been used in later studies, e.g. the paper by Anbulagan and Adi Botea on phase transitions in crossword problems: Crossword Puzzles as a Constraint Problem (PDF).

The problems are represented as a grid in text form. For example, the Bratko problem mentioned above would be represented as the following grid, where "_" represents a letter to be identified and "*" is a blocked cell.
_ _ _ _ _ *
_ * _ * _ *
_ _ _ _ _ _
_ * * * _ *
A larger example is the 15x15 problem #10 (from Gecode's crossword.cpp), which is also identified as "15.01, 15 x 15".
_ _ _ _ * _ _ _ _ _ * _ _ _ _
_ _ _ _ * _ _ _ _ _ * _ _ _ _
_ _ _ _ _ _ _ _ _ _ * _ _ _ _
_ _ _ _ _ _ _ * * _ _ _ _ _ _
* * * _ _ _ * _ _ _ _ _ _ * *
_ _ _ _ _ * _ _ _ * _ _ _ _ _
_ _ _ * _ _ _ _ _ _ * _ _ _ _
_ _ _ * _ _ _ _ _ _ _ * _ _ _
_ _ _ _ * _ _ _ _ _ _ * _ _ _
_ _ _ _ _ * _ _ _ * _ _ _ _ _
* * _ _ _ _ _ _ * _ _ _ * * *
_ _ _ _ _ _ * * _ _ _ _ _ _ _
_ _ _ _ * _ _ _ _ _ _ _ _ _ _
_ _ _ _ * _ _ _ _ _ * _ _ _ _
_ _ _ _ * _ _ _ _ _ * _ _ _ _
However, since MiniZinc doesn't have the facility to parse this kind of text representation, I wrote some Perl programs to convert a problem instance into a MiniZinc model. The MiniZinc model for problem #10 is crossword3_10.mzn, which contains the problem-specific table constraints and segments definitions, much in the same way as in the Bratko model.

Here is one solution (using Gecode's SCOWL wordlist, see below for more about the wordlists):
roar*breed*area
urge*lager*weal
laughingly*also
elegant**aerate
***and*gadget**
acmes*err*osier
bra*ornate*tore
aah*magneto*non
snap*madras*add
heron*gay*idles
**imaged*pea***
vesper**surmise
echo*ambassador
trim*beach*gear
suss*snaky*ears


Note: By construction, the unicity constraint would require all 1-letter segments (words) to be distinct. This means that 1-letter segments must be handled with care, since such a segment is the same when seen across and down. More specifically, the unicity constraint has a special check for 1-letter words:
forall(I,J in 1..num_segments where I < J) (
  if sum(K in 1..max_length) (
     bool2int(segments[I,K] > 0 ) ) = 1
     /\
     segments[I,1] = segments[J,1]
  then
     true 
  else 
    not(forall(K in 1..max_length) (
      L[segments[I,K]] = L[segments[J,K]]
    ))
  endif
)
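Restated over ground letters in Python (the helper names and data layout are mine), the unicity check with the 1-letter special case looks like this, illustrated on the Bratko solution:

```python
def distinct_segments(L, segments):
    # L: letters indexed 1..num_letters, with L[0] as the "0" filler;
    # segments: rows of letter indices, padded with 0 (as in the model)
    num = len(segments)
    for i in range(num):
        for j in range(i + 1, num):
            seg_i, seg_j = segments[i], segments[j]
            # special case: a 1-letter segment seen both across and down
            # is the very same cell, so don't compare it with itself
            if sum(k > 0 for k in seg_i) == 1 and seg_i[0] == seg_j[0]:
                continue
            if all(L[a] == L[b] for a, b in zip(seg_i, seg_j)):
                return False  # two segments spell the same word
    return True

# the Bratko solution: L[1..16] = "forumiuevanishes", L[0] = filler
L = "*forumiuevanishes"
segments = [[1, 2, 3, 4, 5, 0], [9, 10, 11, 12, 13, 14],
            [1, 6, 9, 15, 0, 0], [3, 7, 11, 0, 0, 0], [5, 8, 13, 16, 0, 0]]
print(distinct_segments(L, segments))  # True
```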

Structure of the MiniZinc files

The structure of the model is:
  • the general model crossword3.mzn, which contains the unicity constraint and includes the wordlist
  • a wordlist, which is included by crossword3.mzn
  • a problem instance, e.g. crossword3_10.mzn, which contains the specific data instance and table constraints. The instance file includes crossword3.mzn
The instances can be run as follows:
# Using G12/MiniZinc default solver
$ minizinc -v --no-optimize -s crossword3_0.mzn

# Using Gecode/fz
$ minizinc -v --no-optimize -G gecode -s crossword3_0.mzn -f "fz -mode stat -n 1 "

For larger problems, the parameter --no-optimize might be a good choice; otherwise mzn2fzn (the converter to FlatZinc) can take a very long time.

More about the plain Gecode model

The (plain) Gecode model crossword.cpp is described in detail in chapter 19 of Modeling and Programming with Gecode (PDF), which also includes some benchmark results.

Benchmarking

One of the objectives of the benchmark was to see how well the MiniZinc model (together with a FlatZinc solver) would compete with Gecode's crossword model (Gecode's crossword.cpp, available in the Gecode distribution). This model will be named the "plain Gecode model" to be able to discern it when also mentioning the Gecode/fz FlatZinc solver.

After doing some preliminary tests of several FlatZinc solvers using different labelings, I selected these two solvers to continue with the full benchmarks of all 72 instances:
  • Gecode/fz: SVN version (based on the version 3.7.0) per 2011-09-21.
  • Chuffed: Version compiled 2011-04-17 (no explicit version). Chuffed is a lazy clause generation solver written by Geoffrey Chu. As of this writing it is not yet publicly available, but it has been mentioned several times in connection with the MiniZinc Challenge, where it has shown very impressive results; see the MiniZinc Challenge 2011 results. For more information see this description.
After systematically testing all labelings on some problem instances (#20, #30, and with some wordlists #40), some labelings were selected for the full tests. Note: I looked for a problem instance that could be used as a "proxy" for the complete problem set in order to pick the best labelings, but found no instance that was representative enough.

Benchmark

The benchmark consists of testing all 72 problem instances with the following wordlists:
  • Gecode's SCOWL wordlist (66400 English words), available from MiniZinc Crossword page
  • Swedish wordlist based on an earlier version of Den stora svenska ordlistan ("The Big Swedish Wordlist") (388493 Swedish words), available from my MiniZinc Crossword page. Note that crossword3.mzn must be changed to handle the national characters "å", "ä", and "ö" (this is commented out in the current model).
  • 73871 English words from /usr/share/dict/words (some words were filtered out, e.g. those not matching ^[A-Za-z]$), available from my MiniZinc Crossword page
  • 80.txt (242606 English words)
Notes:
  • The time reported is for all 72 problem instances, with a timeout of 10 minutes (600 seconds) per problem instance.
  • The plain Gecode model was run with default parameters (except for the timeout of 10 minutes and statistics)
  • The solver stated as chuffed4 below is Chuffed using the single parameter --toggle-vsids (and the timeout)
  • The solver Gecode/fz was run with the parameter for timeout and statistics.
The reported runs are not all the tests that were made; only the top results are shown.

Which time to compare: Total time, runtime, or solvetime?

The time for running the plain Gecode executable was measured via the Unix time command. This includes everything: parsing the problem (which is compiled into the model), setting up the constraints, and then solving the model.

However, MiniZinc works differently: the MiniZinc model (.mzn) is first parsed and then translated (flattened) into the FlatZinc format (.fzn), which is the file used by the solvers. This flattening process can take some time: as the number of words in the wordlist increases, the flattening time increases as well, from a couple of seconds for the smaller wordlists to about 30 seconds for the largest (Swedish) wordlist. Also, it takes some seconds to generate the "fancy" output where the resulting grid is presented (see the output section in the MiniZinc model). This output is used to check that the solution is correct and doesn't include any duplicate words (this is done via a separate Perl program).

Example: The time for running problem #33 (a grid of size 21x21) using the plain Gecode crossword model with the SCOWL wordlist is 44.9 seconds. The total time for the Gecode/fz solver using size_afc_min/indomain_min on the same problem is 59.6 seconds. However, Gecode/fz reports two times in its output: runtime: 49.1s and solvetime: 47.6s. This means that - for this problem instance - there is an overhead of about 10 seconds to generate the FlatZinc file and then output the nice output grid. In comparison, Chuffed using first_fail/indomain_max took 13.22s total time, with a reported 3.41s runtime and 1.56s solvetime.

In the benchmark below, all three different times for the FlatZinc solvers are reported: total time, runtime, and solvetime (summed over all 72 instances). One way of comparing the performance of the FlatZinc solvers with the plain Gecode model is to use the runtime, which excludes the overhead of flattening to FlatZinc and generating the output. On the other hand, comparing runtime values is not really fair to plain Gecode, since the flattening process might do some relevant optimization of the model. As we will see, some of the solvers in fact have a total time that is better than the plain Gecode model's.

The benchmarks below are grouped by wordlist, and the ordering is by runtime. We will see that there is no single FlatZinc solver + labeling that has the best runtime (or total time) for all wordlists.

Wordlist Gecode SCOWL

This is the wordlist used in the "plain" Gecode model containing 66400 English words.

The total time for the plain Gecode crossword model (with default parameters) on all 72 problem instances is 57:16.25 minutes. The timeouts/failures are for problems #15, #30, #39, #40, #45, and #49.

Problem #40 is not possible to solve with this wordlist since the problem requires two words of length 23, but the wordlist contains no word of that length.

It is interesting that Chuffed's total time (36 minutes and 37 seconds) is significantly less than plain Gecode's total time (and the runtime is of course even faster). We also see that the overhead of flattening to the FlatZinc format and then handling the output is about 7.5 minutes.
Solver    Var select        Val select      #solved  Total time            Runtime                Solvetime
chuffed4  first_fail        indomain_max    69       2197.65s (36m 37s)    1744.44s (29m 4s)      1681.32s (28m 1s)
chuffed4  most_constrained  indomain_max    69       2198.98s (36m 38s)    1746.56s (29m 6s)      1683.45s (28m 3s)
chuffed4  max_regret        indomain_max    67       3368.83s (56m 8s)     2917.64s (48m 37s)     2854.8s (47m 34s)
fz        size_afc_min      indomain_min    66       4386.15s (1h 13m 6s)  3931.794s (1h 5m 31s)  3883.713s (1h 4m 43s)
fz        size_afc_min      indomain_split  66       4564.53s (1h 16m 4s)  4114.111s (1h 8m 34s)  4066.457s (1h 7m 46s)

Wordlist 80.txt

The 80.txt wordlist contains 242606 English words.

Total time for plain Gecode: 21:48.28 minutes with 70 instances solved.

Here we see that the best total time among the FlatZinc solvers, solving all 72 problems, is about the same as the plain Gecode model's. The reported runtime is just about 3 minutes, which is kind of weird. (The reported solvetime of 17 seconds is even weirder.)
Solver    Var select        Val select       #solved  Total time          Runtime            Solvetime
chuffed4  first_fail        indomain_min     72       1324.96s (22m 4s)   157.42s (2m 37s)   17.71s (17s)
fz        size_afc_min      indomain_max     72       1375.62s (22m 55s)  214.765s (3m 34s)  103.848s (1m 43s)
fz        size_afc_min      indomain_median  72       1370.48s (22m 50s)  214.772s (3m 34s)  103.495s (1m 43s)
chuffed4  max_regret        indomain_split   72       1410.68s (23m 30s)  266.42s (4m 26s)   126.43s (2m 6s)

/usr/share/dict/words

This wordlist contains 73871 English words from /usr/share/dict/words. Some words were filtered out from the original file: those not matching ^[A-Za-z]$.

Total time for plain Gecode: 33:43.19 minutes, 69 instances solved.

This is another benchmark where Chuffed is the best FlatZinc solver. The total time (34 minutes and 1 second) is almost the same as plain Gecode's, while the runtime (25 minutes and 56 seconds) is significantly faster.
Solver    Var select        Val select      #solved  Total time          Runtime              Solvetime
chuffed4  first_fail        indomain_max    69       2041.68s (34m 1s)   1556.97s (25m 56s)   1484.82s (24m 44s)
chuffed4  most_constrained  indomain_max    69       2048.91s (34m 8s)   1563.16s (26m 3s)    1491.13s (24m 51s)
chuffed4  first_fail        indomain_split  69       2196.82s (36m 36s)  1712.9s (28m 32s)    1639.85s (27m 19s)
chuffed4  first_fail        indomain_min    68       2522.89s (42m 2s)   2045.56s (34m 5s)    1974.13s (32m 54s)
fz        size_afc_min      indomain_min    68       2563.63s (42m 43s)  2085.042s (34m 45s)  2029.719s (33m 49s)

Swedish wordlist

This wordlist contains 388493 Swedish words.

There is no plain Gecode runtime for this wordlist; instead this is just a comparison of the FlatZinc solvers. I wanted to include it in the benchmark for two reasons: to see how/if MiniZinc could handle this large wordlist, and also because I'm quite curious about Swedish solutions to the problems (much because I am a Swede).

The best solver this time is Gecode/fz (using size_afc_min/indomain_min) with a runtime of 3 minutes and a solvetime of 42 seconds, though the total time is much larger: almost 39 minutes. This means that there is a massive overhead for flattening to FlatZinc and presenting the output.
Solver    Var select        Val select      #solved  Total time          Runtime            Solvetime
fz        size_afc_min      indomain_min    72       2330.14s (38m 50s)  181.685s (3m 1s)   42.129s (42s)
fz        size_afc_min      indomain_split  72       2349.9s (39m 9s)    197.006s (3m 17s)  57.683s (57s)
fz        size_afc_min      indomain_max    72       2393.97s (39m 53s)  255.495s (4m 15s)  116.09s (1m 56s)
chuffed4  most_constrained  indomain_split  72       2415.61s (40m 15s)  258.14s (4m 18s)   89.89s (1m 29s)
chuffed4  first_fail        indomain_split  72       2413.29s (40m 13s)  258.3s (4m 18s)    89.9s (1m 29s)

Summary and conclusions

The objective of the benchmark was to see how well the MiniZinc model would compete with the plain Gecode model. For all wordlists there is at least one FlatZinc solver with a total time that is near plain Gecode's total times, and for one wordlist (SCOWL) there is a solver that is much faster. Comparing the reported runtimes, all measurable wordlists have a FlatZinc solver that is faster. For one wordlist (Swedish words) there was no run of the plain Gecode model.

As mentioned above, there is no single FlatZinc solver/labeling that is the best for all wordlists. Comparing just the FlatZinc solvers, we see that the Chuffed solver (with some labeling) was the best for SCOWL, 80.txt, and /usr/share/dict/words, whereas Gecode/fz was the best for the Swedish wordlist.

In the preliminary tests the best variable selection for Gecode/fz is size_afc_min, though the best value selection is not that clear. For Chuffed there is no single variable/value selection combination that dominates, though both first_fail and most_constrained often gave quite good results.

As noted several times before in the CP field, these kinds of benchmarks might be misleading and of limited value. The comparison between the plain Gecode model and the MiniZinc model (+ FlatZinc solvers) might be even worse since it compares at the same time:
  • two different CP systems: compiled C++ code vs MiniZinc eco system
  • two different approaches: element constraint on intersections vs. table constraint on words.
  • different time measurements
Still, it is probably not unfair to conclude that the MiniZinc model and the two tested FlatZinc solvers at least gave the plain Gecode model a good match in generating/solving the selected problem instances.

Also, there might be some parameter or labeling that is much better than those tested. This includes - of course - parameters of the plain Gecode model.

Further research

It would be interesting to study how well the table approach would do as a plain Gecode model. It would also be interesting to do a MiniZinc model using an element/alldifferent approach.

System

Here is the system and version used in the benchmark:
  • Linux Ubuntu 11.04, 64-bit 8 cores (i7 930, 2.80GHz) with 12Gb RAM
  • Gecode: SVN version (based on the version 3.7.0) per 2011-09-21.
  • MiniZinc version: Version 1.4
  • Chuffed version: Version compiled 2011-04-17 (no explicit version).

Thanks

Thanks to Christian Schulte and Guido Tack for some explanations and intuitions. Also, thanks to Geoffrey Chu and Peter Stuckey for the opportunity to test Chuffed.

March 06, 2011

A matching problem, a related seating problem, and some other seating problems

The INFORMS blog challenge last month (February) was O.R. and Love. The entry by the Capgemini O.R. Blog Team was Cupids with Computers, where a bunch of OR'ers (including me) were matched according to some - not exactly stated - compatibility scores. It is both interesting and fun to read. Also, see their companion blog post Two Deadly Sins, which discusses some of the problems with matching.

I was curious how well a Constraint Programming system like MiniZinc would do on this matching problem so I wrote the model or_matching.mzn.

And it did quite well: it (Gecode/fz) solves the problem in 0.1 seconds (with 191 failures). Some comments about the model:
  • Since few CP systems have good support for floats I converted the scores to integers (x 10), i.e. 3.7 is represented as 37, etc. However, this can lead to some errors since I assume that the scores are rounded.
  • It also identifies the left-out person, which is just for fun.
The result is the following matchings (slightly edited):
Total score: 519

P1 P2 Score
--------------
 1  9: 92
 5 14: 69 (tie)
10 18: 69 (tie)
 8 16: 59
 2 19: 55
 3 12: 52
 6 11: 44
13 15: 42
 4  7: 37

Left out: 17
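As a quick arithmetic check (not in the original post), the nine pair scores listed above do add up to the reported total. A tiny Python sketch:

```python
# Pair scores from the matching result above.
scores = [92, 69, 69, 59, 55, 52, 44, 42, 37]
print(sum(scores))
```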
One reason that I modeled the problem was to see if it was possible to identify the OR'ers, or at least myself, and with some assumptions I have some guesses. The assumption is that the matchings in the blog post are presented in score order, best match first.

My model includes a symmetry breaking constraint which places the person with the lower index in the left position of the pair, which means that I can only identify the pair, not who is who within it. Also, the second and third pairs have a tie in score, so these are even harder to guess.

Ranking My guess (pair)
--------------------------------------------------------------------
- Anna Nagurney and Laura McLay: 1,9
- Ian Frommer and Paul Rubin: 5,14 or 10,18 (the score is a tie)
- Michael Trick and Tallys Yunes: 5,14 or 10,18 (ibid.)
- Larry D'Agostino and Robert Randall: 8,16
- David Smith and Thiago Serra: 2,19
- Francisco Marco-Serrano and Nicola Petty: 3,12
- John Angelis and Michael Watson: 6,11
- John Poppelaars and Patricia Randall: 13,15
- Hakan Kjellerstrand and Phillip Mah: 4,7

- Shiva Subramanian (the leftout): 17

So, based on this, I guess that I am either #4 or #7.

Let us further investigate this. In Two Deadly Sins there are some smaller score matrices where I happen to be named:
 Hakan John Patricia Phillip Shiva
  -     3.7   2.2     2.2     1.3   Hakan
  -      -    4.4     2.2     1.8   John
  -      -    -       4.2     2.7   Patricia
  -      -    -       -       2.2   Philip
  -      -    -       -       -     Shiva
Now assume that these are the same scores and names as in the main matrix. My (Hakan's) scores for John, Patricia, Phillip, and Shiva are 3.7, 2.2, 2.2, and 1.3 respectively, and these values can be found for #4 but not for #7. There is just one 1.3 score, and it is between #4 and #17, so we might draw the conclusion that #17 is Shiva.

However, since Hakan is #4 and is connected to #7, this would mean that Philip should be #7 if my matchings are correct. But it can't be since there is no 4.4 score for Hakan and Philip. Hmmm.

OK. If we skip my ranking and instead look at the score matrices, we can see that there is one 1.8 score, between John and Shiva in the small score matrix, and between #17 and #7. So this means that #7 is John. And if #7 is John it makes more sense, since there is a score of 3.7 between Hakan (#4) and John (#7) in the big score matrix. John and Patricia have a score of 4.4, and the only 4.4 for #7 is with #13, so Patricia is #13. And the last one to find is Philip, who is #11, given by the score 4.2 with Patricia.

So now we have found the following identities from the small scores:
Hakan: #4
John: #7
Patricia: #13
Philip: #11
Shiva: #17
And let us compare these findings with my matchings.
  • Hakan: #4. This seems to be correct. But the matching to #7 is not Philip (he is #11).
  • John: #7. Now, there are two Johns but neither of them has been attributed to #7 in my matching, so this is plain #fail.
  • Patricia: #13. My matching matched Patricia with John Poppelaars, and it could be correct. This means that the "other John" is #15 (according to my matching).
  • Philip: #11. Well, this is the same #fail as for Hakan.
  • Shiva: #17. This seems to be correct.
Well, my method of identifying the persons with the matching seems to be quite wrong, but it was fun trying.

Back to the model and some more details. The kernel of the model is really just the following code, including some symmetry breaking constraints.

% decision variables:
% the matchings
array[1..m, 1..2] of var 1..n: x;
% the scores for the found matching
array[1..m] of var 0..max_score: s;

constraint
alldifferent([x[i,j] | i in 1..m, j in 1..2]) :: domain
;

constraint
forall(i in 1..m) (
s[i] = scores[x[i,1], x[i,2]]

% symmetry breaking
/\ x[i,1] <= x[i,2]
)
;

% another symmetry breaking
constraint decreasing(s);
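To make the kernel concrete, here is a small brute-force Python sketch (mine, not part of the MiniZinc model) of the same maximum-score matching idea, using a made-up 4-person score matrix; the values are hypothetical, not the real compatibility scores. It applies the same symmetry breaking: the lower index first in each pair, and the pairs listed in order.

```python
from itertools import permutations

def best_matching(scores):
    """Brute-force maximum-weight perfect matching for an even number
    of persons (0-based). scores[a][b] is the score of pairing a and b."""
    n = len(scores)
    best, best_pairs = -1, None
    for perm in permutations(range(n)):
        # Read the permutation off as consecutive pairs.
        pairs = [tuple(sorted(perm[i:i + 2])) for i in range(0, n, 2)]
        # Symmetry breaking, as in the MiniZinc model: pairs in order.
        if pairs != sorted(pairs):
            continue
        total = sum(scores[a][b] for a, b in pairs)
        if total > best:
            best, best_pairs = total, pairs
    return best, best_pairs

# Hypothetical symmetric scores (x 10, as in the model).
s = [[0, 37, 22, 22],
     [37, 0, 44, 22],
     [22, 44, 0, 42],
     [22, 22, 42, 0]]
print(best_matching(s))
```

This is exponential in the number of persons, of course; the point of the CP model is that propagation and branching avoid most of this enumeration.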

I optimized this model for Gecode/fz using these two things found by some experimentation:
  • branching is on the score array s instead of the seating array x
  • alldifferent has the :: domain annotation. Without this, it takes much longer.
These two changes make Gecode solve the problem quite fast: Gecode/fz: 0.16s, 191 failures

However, for other solvers the branching on the score array s is not as good, so here are the statistics for branching on the array x instead, using the variable/value heuristics first_fail/indomain_median, which seem to give the best overall result for most solvers.
  • Gecode/fz: 4.5s, 98469 failures
  • G12/LazyFD: 2.16s
  • fzn2smt: 6.0s
  • fzntini: 16:02 minutes (I'm not sure why this is so hard for fzntini)
  • G12/fd: 10.127s, 189346 choice points
  • G12/minizinc: 10.465s, 189346 choice points
  • G12/fdmip: 10.226s, 189346 choice points
  • JaCoP/fz: 10.358s, 192957 wrong decisions
  • SICStus/fz: 18.419s, 362122 backtracks
  • ECLiPSe/ic: 48.313s
  • ECLiPSe/fd: 19.557s
Update 2011-03-08
Julien Fischer and Sebastian Brand (of the G12 team) were kind enough to point out that there is a better way of getting the identification of the leftout:
constraint
   if n mod 2 != 0 then
     % This version was suggested by 
     % Sebastian Brand and Julien Fischer.
     leftout > 0 /\
     forall(j in 1..m, k in 1..2) (
        leftout != x[j,k]
     )
   else 
     leftout = 0
   endif;
Below are the statistics using this version instead. Julien also pointed out that G12/fd and G12/minizinc are really the same solver and that - for this problem - G12/fdmip doesn't make use of the MIP component, so it's the same as G12/fd. So I skipped these latter two in this test. The branching is (still) first_fail/indomain_median.
    * Gecode/fz: 3.3s, 98469 failures
    * G12/LazyFD: 1.5s
    * fzn2smt: (I stopped after 10 minutes.)
    * fzntini: 5:52 minutes
    * G12/fd: 6.4s, 107178 choice points
    * JaCoP/fz: 5.4s, 192975 wrong decisions
    * SICStus/fz: 7.5s, 362122 backtracks
    * ECLiPSe/ic: 35.5s
    * ECLiPSe/fd: 11.3s    
This change boosted the times for most solvers, quite a lot for some. It's strange that fzn2smt was much slower, and I don't understand why. For fzn2smt I also tested different things, e.g. removing the :: domain annotation on alldifferent, but it made no difference.

Testing indomain_random
I was also curious how some solvers would do with the heuristic indomain_random, so I ran each solver three times and took the best result. Some solvers were "stable" and got exactly the same statistics (Gecode, G12/fd, SICStus, ECLiPSe/ic), but some got quite varying results, so all three results are shown (JaCoP and ECLiPSe/fd).
    * Gecode/fz: 1.7, 27422 failures (stable)
    * G12/fd: 19.0s, 1401656 choice points (stable)
    * JaCoP/fz: 2.3s, 22178 wrong decisions
        8.5s 164599 wrong decisions
        4.1s 50575 wrong decisions
        2.3s 22178 wrong decisions
    * SICStus/fz: 3.3s, 116386 backtracks (stable)
    * ECLiPSe/ic: 34.5s (stable)
    * ECLiPSe/fd: 5.6s
       7.41s
       5.6s
      26.7s
Almost all of the solvers got at least one result better than before, except G12/fd, which is surprising.
End of Update 2011-03-08


2011-03-12 Yet another update
After some interesting discussions with Sebastian Brand (thank you, Sebastian!) I have included some improvements.
1) Since the calculation of the leftout doesn't contribute much to the constraint store, it has been removed as a decision variable and is instead just calculated in the output section:

output [
let {
int: leftout = max(0..n diff { fix(x[j,k]) | j in 1..m, k in 1..2 })
}
in
% ...
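In ordinary set terms this is just a set difference. Here is a Python sketch (illustration only, not part of the model) that mirrors the MiniZinc expression above, using the pair numbers from the matching result earlier in the post:

```python
def leftout(n, pairs):
    """Return the person in 1..n that occurs in no pair (0 if none),
    mirroring max(0..n diff { x[j,k] ... }) in the MiniZinc output."""
    matched = {p for pair in pairs for p in pair}
    # 0 is included so that an even n (nobody left out) yields 0.
    return max(set(range(0, n + 1)) - matched)

# The 9 pairs from the matching result above (19 persons).
pairs = [(1, 9), (5, 14), (10, 18), (8, 16), (2, 19),
         (3, 12), (6, 11), (13, 15), (4, 7)]
print(leftout(19, pairs))
```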

2) The symmetry breaking constraint to sort the matchings in score order
  decreasing(s)
has been replaced with another symmetry breaking constraint which seems to be faster. Unfortunately, there is no sorting anymore but since it is faster for all solvers I prefer it.
  forall(i in 2..m)( x[i-1,1] < x[i,1] )
3) The branching has been changed to first_fail/indomain_split

With these three changes there are now improvements for most solvers; for ECLiPSe's solvers a radical improvement (compare with the listing above):

- G12/lazy: 0.89s
- Gecode: 0.10s and 649 failures
Note: without the ::domain annotation on alldifferent it's slightly worse: 0.16s and 1898 failures.
That difference has been much larger in earlier versions.
- JaCoP: 1.4s: 3253 wrong search decisions
- SICStus: 0.29s 3291 backtracks
- Choco: 1.8s 5928 backtracks
- ECLiPSe/ic: 1.6s
- ECLiPSe/fd: 0.94s
- fzn2smt is still very slow (I contacted the fzn2smt developers about this)
- tini: 4:44min (slightly better, but still very slow)


4) Sebastian also suggested that I should test a table constraint. That is, instead of
s[i] = scores[x[i,1], x[i,2]]

we use this:
table([x[i,1], x[i,2], s[i]], scores_table)

This also requires another representation of the score matrix:

scores_table = array2d(1..n*n, 1..3,
[
1, 1, 00,
1, 2, 30,
1, 3, 10,
1, 4, 33,
1, 5, 49,
% ....

Here is the result of using this table approach (together with the previous changes):

- G12/fd: 2.0s, 708 choice points (slower)
- G12/lazy: 2.442s (slower)
- Gecode: 0.11s 686 failures (about the same)
- JaCoP: 1.2s 785 wrong search decisions (slightly better)
- SICStus: 0.43 746 backtracks (better in backtracks, but slightly slower)
- Choco: 1.9s: 2299 backtracks (better in backtracks, but slightly slower)
- ECLiPSe/ic: 2.8s (slower)
- ECLiPSe/fd: 1.4s (slower)
- fzn2smt: 11.1s (and SMT is back in the game again!)
Note: Using the decreasing/1 constraint instead, it takes 7.0s.
- tini: stopped after 10 minutes, so it's slower than the previous version.

To sum up: most solvers were a tiny bit slower with this table approach, and some have slightly fewer choice points/failures/etc. Hmmm, maybe this (slight) increase in time for some solvers is caused by the slightly larger file to be processed by mzn2fzn?

The big winner here is SMT, but for this problem it's still slower than most other solvers.

I've put the improvements in the same model as before or_matching.mzn (the older model is now in or_matching_orig.mzn).

Also, thanks to Krzysztof Kuchcinski for an interesting discussion about this.

End of update 2011-03-12

A related seating problem

This matching problem inspired me also to model a related problem using the score matrix from the matching model:
When all these persons meet, what is the best way to place them around a (topologically circular) table if the objective is to get the highest possible compatibility score for adjacently seated persons?
Another MiniZinc model (or_seating.mzn) gives an optimal value of 996 score points, in about 1.5 seconds (with 17629 failures) using Gecode.

There are some possible ways of approaching this problem, and I took the TSP way using a global circuit constraint. This model uses my decomposition of a circuit constraint which, given the score matrix, gives an optimal circuit. However, what we really want is the path (the seating), so we have to extract the path. After a while I realized that the temporary array z used in the circuit_me decomposition is exactly this path, so we can simply expose it as a parameter of the predicate. A consequence of this is that all seatings end with person #1, which shouldn't really be a problem since the path is a permutation.

predicate circuit_me(array[int] of var int: x, array[int] of var int: z) =
let {
int: lbx = min(index_set(x)),
int: ubx = max(index_set(x))
}
in
all_different(x) :: domain /\
all_different(z) :: domain /\

% put the orbit of x[1] in in z[1..n]
z[lbx] = x[lbx] /\
forall(i in lbx+1..ubx) (
z[i] = x[z[i-1]]
)
/\ % may not be 1 for i < n
forall(i in lbx..ubx-1) (
z[i] != lbx
)
/\ % when i = n it must be 1
z[n] = lbx
;

Here are the two optimal solutions found by this model (with the implicit symmetry breaking that #1 is always in the last position). However, the second solution is just the first reversed.

9 10 18 8 16 13 15 12 3 17 19 2 11 6 14 5 7 4 1
4 7 5 14 6 11 2 19 17 3 12 15 13 16 8 18 10 9 1

For the systems that support the circuit/1 constraint (right now only Gecode and JaCoP support this), there is a variant: one can use circuit to get the circuit and then extract the path (seating) with this "extractor" predicate, where the solution always starts with #1.

predicate circuit_path(array[int] of var int: x,
array[int] of var int: p) =
% we always starts the path at 1
p[1] = 1
/\
forall(i in 2..n) (
p[i] = x[p[i-1]]
);
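The orbit-following in circuit_path is easy to see outside MiniZinc. Here is a Python sketch with a made-up successor array (not one of the actual seating solutions):

```python
def circuit_path(x):
    """Given a 1-based successor array x describing a single circuit,
    return the path starting at 1: p[i] = x[p[i-1]]."""
    p = [1]
    for _ in range(len(x) - 1):
        p.append(x[p[-1] - 1])  # -1 since Python lists are 0-based
    return p

# Example circuit: 1 -> 3 -> 5 -> 2 -> 4 -> 1
x = [3, 4, 5, 1, 2]
print(circuit_path(x))
```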

Then we got these two optimal solutions, which are just the two above rotated so that #1 comes first:

1 9 10 18 8 16 13 15 12 3 17 19 2 11 6 14 5 7 4
1 4 7 5 14 6 11 2 19 17 3 12 15 13 16 8 18 10 9

Connection to the matching problem:
Comparing these seating placements with the matchings above, they seem consistent: e.g. #9 and #1 sit together, as do #10 and #18, #2 and #19, etc.

Some other seating problems

For some reason I am fascinated by seating problems and I'm not really alone. John Toczek's PuzzlOr problem Best Host this February was a seating problem (the object is to minimize seating conflicts given that some persons would like to sit together) that I solved using a rather simple MiniZinc model. However, I will not link to the model since the contest (to win a “O.R. The Science of Better” T-shirt) is still open. And let us not forget Karen Petrie's Wedding planning problem: How a computer scientist organised her wedding.

Here are two (classical) combinatorial seating problems, the formulation taken from Fei Dai, Mohit Dhawan, Changfu Wu and Jie Wu: On Solutions to a Seating Problem, page 1:

Seating, row
MiniZinc model seating_row.mzn
n couples are seated in one row. Each person, except two seated at the two ends, has two neighbors. It is required that each neighbor of person k should either have the same gender or be the spouse of k.
Seating, table:
MiniZinc model seating_table.mzn
A classical [...] seating problem is the following: n couples are seated around a table in such a way that two neighbors of each person k are the different gender of k and they are not the spouse of k.
Both these problems are about constraints on couples and gender. There are different ways of representing who is in a couple with "anonymous persons", i.e. using just integers to represent the couples and individuals. Here are two variants (I included both of these in the seating_row model):

1) Using math:
predicate is_couple1(int: n, var int: a, var int: b) =
% exists(i in 1..n) (
exists(i in max(lb(a), lb(b))..min(ub(a), ub(b))) (
(2*(i-1)+1 = a /\ 2*(i-1)+2 = b)
\/
(2*(i-1)+1 = b /\ 2*(i-1)+2 = a)
);

2) Using set representation, i.e. an array of the pairs:
% define the couple
array[1..n] of set of int: couples = [{2*(i-1)+1, 2*(i-1)+2} | i in 1..n];

predicate is_couple(int: n, var int: a, var int: b, array[int] of set of int: c) =
exists(j in 1..n) (
a in c[j] /\ b in c[j]
);

The second approach, using sets, is in my opinion the nicest, and it's also easier to use if one has a concrete list of couples.

The gender is simply represented by modulo 2.
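Both representations reduce to simple integer arithmetic on the person numbers. A small Python sketch (illustration only) of the two tests, with couple i = {2i-1, 2i} and gender by parity:

```python
def is_couple(a, b):
    """Persons a and b are spouses iff they belong to the same
    couple block {2i-1, 2i}, i.e. have the same value of (p-1)//2."""
    return (a - 1) // 2 == (b - 1) // 2 and a != b

def same_gender(a, b):
    """Gender is represented by parity (modulo 2)."""
    return a % 2 == b % 2

print(is_couple(1, 2), is_couple(2, 3), same_gender(1, 3))
```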

The paper "On Solutions to a Seating Problem" (cited above) studies "reasonable patterns":
A reasonable pattern is an assignment of n couples without indices that meet the above requirement. For example, when n = 3 all reasonable patterns are as follows:

wwwhhh whhwwh whhhww hhwwwh
hhhwww hwwhhw hwwwhh wwhhhw
However, I am more interested in the concrete seating placements, so it's not quite the same problem. For n=3 there is a total of 60 solutions. Here are all 8 different patterns generated for 3 pairs (n=3). The first number indicates the number of occurrences of each pattern.
12 gender2:hhhwww
 6 gender2:hhwwwh
 6 gender2:hwwhhw
 6 gender2:hwwwhh
 6 gender2:whhhww
 6 gender2:whhwwh
 6 gender2:wwhhhw
12 gender2:wwwhhh
I tried to generate exactly one of each pattern but had no luck. A model with the following simple symmetry breaking generates 20 solutions, fewer than 60 but more than exactly 8 patterns.

constraint (x[1] = 1 \/ x[1] = 2);

Statistics for Seating, row problem
Here are the numbers of solutions (using no symmetry breaking) for some different numbers of couples (n). The solver statistics below are for Gecode/fz where just the last solution is printed (it took about 2.5 minutes to print all the solutions for n=6; without printing it took just 15.7 seconds).

n #solutions
-------------
1 2 (0.001s, 0 failures)
2 8 (0.001s, 5 failures)
3 60 (0.01s, 73 failures)
4 816 (0.02s, 1181 failures)
5 17520 (0.45s, 29872 failures)
6 550080 (15.7s, 1051417 failures)
7 23839200 (12:36min, 49685576 failures)
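The smaller entries in this table are easy to confirm by brute force. Here is a Python sketch (an independent check, not the MiniZinc model) that counts rows of 2n persons where every pair of neighbors has the same gender or is a couple:

```python
from itertools import permutations

def count_row_seatings(n):
    """Count rows of 2n persons (couple i = {2i-1, 2i}, gender = parity)
    where each pair of neighbours has the same gender or is a couple."""
    def ok(a, b):
        return a % 2 == b % 2 or (a - 1) // 2 == (b - 1) // 2
    people = range(1, 2 * n + 1)
    return sum(all(ok(p[i], p[i + 1]) for i in range(2 * n - 1))
               for p in permutations(people))

print([count_row_seatings(n) for n in range(1, 4)])
```

This reproduces 2, 8, 60 for n = 1..3; larger n quickly become infeasible for brute force, which is where the CP model earns its keep.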

The sequence of these numbers seems to correspond to the OEIS integer sequence A096121, which is described as: "A rook must land on each square exactly once, but may start and end anywhere and may intersect its own path. Inspired by Leroy Quet in a Jul 05 2004 posting to the Seqfan mailing list."

Well, I'm not sure if there is actually a deep connection between these two problems or just a freaky coincidence...

Statistics for Seating, table problem
For the seating table problem with 3 couples (n=3) there are 12 solutions (two different patterns). Here are the number of solutions for different n.

n #solutions
-------------
1 0 (0s, 0 failures)
2 0 (0s, 4 failures)
3 12 (0s, 0 failures)
4 96 (0.01s, 36 failures)
5 3120 (0.04s, 340 failures)
6 115200 (0.9s, 4436 failures)
7 5836320 (43.4s, 146938 failures)
8 382072320 (47:28 minutes, 6593012 failures)
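Again the small cases can be confirmed by brute force (a Python sketch, independent of the MiniZinc model), now with circular (wrap-around) adjacency and the reversed conditions: neighbors must have different gender and must not be spouses:

```python
from itertools import permutations

def count_table_seatings(n):
    """Count circular seatings (labelled seats) of 2n persons where both
    neighbours of each person have the other gender and are not the spouse."""
    def ok(a, b):
        return a % 2 != b % 2 and (a - 1) // 2 != (b - 1) // 2
    m = 2 * n
    return sum(all(ok(p[i], p[(i + 1) % m]) for i in range(m))
               for p in permutations(range(1, m + 1)))

print([count_table_seatings(n) for n in range(1, 5)])
```

This reproduces 0, 0, 12, 96 for n = 1..4, matching both the table above and the menage formula 2*n!*A000179(n).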

The related OEIS sequence is A059375 (Total menage numbers 2*n!*A000179(n)) (the sequence starts with 1 which is the value for n=0, but this is weird in our current setting). This problem is also known as the Menage problem (Wikipedia) so the correspondence between these numbers is not surprising at all. Also see Wolfram MathWorld: Married Couples Problem.

January 09, 2011

Rogo grid puzzle in Answer Set Programming (Clingo) and MiniZinc

Encoding/models:
ASP (Clingo): rogo.lp, rogo2.lp
MiniZinc: rogo.mzn, rogo2.mzn
(See below for some problem instances.)

Later update: I have now also implemented versions with symmetry breaking constraints in the two encodings: rogo2.lp and rogo2.mzn. See below for more info.

In Operations Research, Sudoko, Rogo, and Puzzles Mike Trick presented the Rogo puzzle, a grid puzzle where the object is to find a loop of a number of steps and get as many points as possible. He writes
The loop must be a real loop: it must return where it started and can’t cross itself or double back. Steps can be either horizontal or vertical: they cannot be diagonal. The loop cannot include any of the black squares. ... Rogo was created by two faculty members (Nicola Petty and Shane Dye)  at the University of Canterbury in Christchurch, New Zealand.  Not surprisingly, Nicola and Shane teach management science there:  the problem has a very strong operations research flavor.
From Creative Heuristics Ltd (the Rogo puzzle site): Rogo is an entirely new type of puzzle. The object is to collect the biggest score possible using a given number of steps in a loop around a grid. The best possible score for a puzzle is given with it, so you can easily check that you have solved the puzzle. Rogo puzzles can also include forbidden squares, which must be avoided in your loop..

Below I have assumed that the path must be exactly the given number of steps, and the programs are written with this assumption in mind (further reading of the problem descriptions supports this interpretation). However, my first approach - probably caused by sloppy reading - was that the optimal path could possibly be in fewer steps. I spent several hours trying to implement a program supporting this, but abandoned it after re-reading the problem descriptions. This seems to be a harder problem; maybe it could be used as a variant of the original problem?

I was inspired to solve these puzzles in part because of Mike's last words: Creating a solver would make a nice undergraduate project (and I suspect there are at least a few master's theses and perhaps a doctoral dissertation on algorithmic aspects of creating and solving these). One other reason was to see how to do this with Answer Set Programming - here using Clingo (a Potassco tool) - and to compare it with a Constraint Programming system, MiniZinc.

Some more links about Rogo:
  • Instructions
  • Rogo blog
  • YouTube clip.
  • Nicola Petty, Shane Dye: Determining Degree Of Difficulty In Rogo, A TSP-based Paper Puzzle (PDF)
    From the Conclusions: The Rogo puzzle format has a number of aspects that can be controlled to potentially affect degree of difficulty of solving. As a pilot, this study showed that there are many aspects of puzzle-solving related to the nature of the puzzle that can be explored, and there appear to be some general effects, though there are still marked individual differences between people solving the puzzles. This research has the potential to provide interesting insights into both human behaviour, and the nature of puzzles.

    Note: I didn't notice this until I was almost finished with this blog post (and have just glanced through it).

Example

Here is the Rogo example from Mike Trick's site (pictures borrowed from his site; click on them for larger versions).

Problem:

Rogo puzzle, problem.

One solution:

Rogo puzzle, solution

Note that there is not a unique solution to these puzzles. All three problem instances I tested have more than one solution. For example, Mike Trick's problem has 48 solutions including path symmetries. Since there are 12 steps there are (removing the path symmetry) 48 / 12 = 4 distinct paths. These different paths are shown below as MiniZinc solutions, where the first step has been fixed to (2,2), i.e. x[1]=2 and y[1]=2, and that cell gives 3 points (points[1]):

points = array1d(1..12, [3, 0, 0, 2, 0, 0, 2, 0, 0, 1, 0, 0]);
sum_points = 8;
x = array1d(1..12, [2, 3, 4, 4, 5, 5, 5, 4, 3, 2, 2, 2]);
y = array1d(1..12, [2, 2, 2, 3, 3, 4, 5, 5, 5, 5, 4, 3]);
----------
points = array1d(1..12, [3, 0, 0, 2, 0, 0, 2, 0, 0, 1, 0, 0]);
sum_points = 8;
x = array1d(1..12, [2, 3, 3, 4, 5, 5, 5, 4, 3, 2, 2, 2]);
y = array1d(1..12, [2, 2, 3, 3, 3, 4, 5, 5, 5, 5, 4, 3]);
----------
points = array1d(1..12, [3, 0, 0, 1, 0, 0, 2, 0, 0, 2, 0, 0]);
sum_points = 8;
x = array1d(1..12, [2, 2, 2, 2, 3, 4, 5, 5, 5, 4, 3, 3]);
y = array1d(1..12, [2, 3, 4, 5, 5, 5, 5, 4, 3, 3, 3, 2]);
----------
points = array1d(1..12, [3, 0, 0, 1, 0, 0, 2, 0, 0, 2, 0, 0]);
sum_points = 8;
x = array1d(1..12, [2, 2, 2, 2, 3, 4, 5, 5, 5, 4, 4, 3]);
y = array1d(1..12, [2, 3, 4, 5, 5, 5, 5, 4, 3, 3, 2, 2]);
----------

Answer Set Programming, Clingo

Here is the full ASP (Clingo) encoding, without the data:

% domains
rows(1..r).
cols(1..c).

% max number of steps
steps(1..max_steps).

% define adjacency between cells
adj(R,C, R1,C1) :- rows(R;R1), cols(C;C1), |R-R1| + |C-C1|==1.

% the path: unique index
0 { path(I, Row, Col) : steps(I) } 1 :- rows(Row), cols(Col).
1 { path(I, Row, Col) : rows(Row) : cols(Col) } 1 :- steps(I).

% close the circuit: ensure that the first and last cells
% in the path are connected.
:- path(1, R1, C1), path(max_steps, R2, C2), not adj(R1,C1,R2,C2).

% remove bad paths
:- path(I-1,R1,C1), path(I,R2,C2), not adj(R1,C1, R2,C2).

% no black cells in the path
:- path(I, R,C), black(R,C).

% total points, needed since
% "Optimization:" doesn't show the proper value.
total(Total) :- Total = #sum[got_points(R,C,Value) = Value].

% list the cells in path with points
got_points(R,C, Value) :- point(R,C,Value), path(I, R, C).

% maximize the number of points
% #maximize [ path(I,R,C) : steps(I) : point(R,C,P) = P ].

% alternative: we can add an second objective to
% start with the cell with lowest indices
#maximize [ path(I,R,C) : steps(I) : point(R,C,P) = P@2 ].
#minimize [ path(1,R,C) = R*c+C@1].

#hide.
#show path(I, Row, Col).
#show total(Total).
#show got_points(R,C,Value).

Here is the encoding for Mike Trick's problem instance:
#const max_steps = 12.
#const r = 5.
#const c = 9.

% the point cells
point(1,1,2).
point(2,2,3).
point(2,5,1).
point(2,8,2).
point(3,9,3).
point(4,3,2).
point(5,5,2).
point(5,8,1).

% black cells (to avoid)
black(3,7).
black(4,4).
The solution of Mike Trick's problem (edited), using the following command line:
clingo --heuristic=Vmtf --stat rogo_data_mike_trick.lp rogo.lp
total: 8
got_points:
  5,5,2
  4,3,2
  2,5,1
  2,2,3

path:
  1,2,2
  2,2,3
  3,2,4
  4,2,5
  5,3,5
  6,4,5
  7,5,5
  8,5,4
  9,5,3
  10,4,3
  11,4,2
  12,3,2
Statistics for this solution:
Models      : 1     
  Enumerated: 6
  Optimum   : yes
Optimization: 184 20 
Time        : 0.960
  Prepare   : 0.060
  Prepro.   : 0.020
  Solving   : 0.880
Choices     : 19826
Conflicts   : 16539
Restarts    : 1

Atoms       : 912   
Bodies      : 22839 
Tight       : Yes

  Deleted   : 10406 
Update: With the following symmetry breaking constraint added, the problem is solved in 0.58 seconds.

% symmetry breaking: the cell with the lowest coordinates
% should be in the first step
:- path(1, R, C), steps(Step), Step > 1, path(Step, R2, C2),
R*c+C > R2*c+C2.


The statistics for this variant:

Time        : 0.580
  Prepare   : 0.080
  Prepro.   : 0.030
  Solving   : 0.470
Choices     : 8727
Conflicts   : 6914
Restarts    : 2
End of update

Some notes:
One nice feature in Clingo (and lparse) is that it is possible to have multiple optimization objectives. Here we first maximize the number of points (#maximize [ path(I,R,C) : steps(I) : point(R,C,P) = P@2 ]), and as a second objective (with lower priority @1) we minimize the start cell, so that we start with the cell with the lowest coordinates: #minimize [ path(1,R,C) = R*c+C@1]. Sometimes this is faster than the plain #maximize objective, sometimes not.

The size of "core" of the encoding is quite small. Here is the code with the comments and the helper predicates (for outputs) removed.

rows(1..r).
cols(1..c).
steps(1..max_steps).
adj(R,C, R1,C1) :- rows(R;R1), cols(C;C1), |R-R1| + |C-C1|==1.
0 { path(I, Row, Col) : steps(I) } 1 :- rows(Row), cols(Col).
1 { path(I, Row, Col) : rows(Row) : cols(Col) } 1 :- steps(I).
:- path(1, R1, C1), path(max_steps, R2, C2), not adj(R1,C1,R2,C2).
:- path(I-1,R1,C1), path(I,R2,C2), not adj(R1,C1, R2,C2).
:- path(I, R,C), black(R,C).
#maximize [ path(I,R,C) : steps(I) : point(R,C,P) = P ].

The corresponding "core" of the MiniZinc program (see below for the full code) is larger.

Constraint Programming, MiniZinc

Here is the full MiniZinc code (without data). Compared with the ASP approach, the decision variables are represented in another way: the path is represented by two arrays x and y, and the points are collected in a separate array (points) so we can simply sum over it for the optimization.

include "globals.mzn";

int: W = 0; % white (empty) cells
int: B = -1; % black cells
int: max_val = max([problem[i,j] | i in 1..rows, j in 1..cols]);

% define the problem
int: rows;
int: cols;
int: max_steps; % max length of the loop
array[1..rows, 1..cols] of int: problem;

% the coordinates in the path
array[1..max_steps] of var 1..rows: x :: is_output;
array[1..max_steps] of var 1..cols: y :: is_output;

% the collected points
int: max_point = max([problem[i,j] | i in 1..rows, j in 1..cols]);
array[1..max_steps] of var 0..max_point : points :: is_output;

% objective: sum of points in the path
int: max_sum = sum([problem[i,j] | i in 1..rows, j in 1..cols where problem[i,j] > 0]);
var 0..max_sum: sum_points :: is_output;

% solve satisfy;
solve maximize sum_points;
% solve :: int_search(x ++ y, first_fail, indomain_min, complete) maximize sum_points;

% all coordinates must be unique
constraint forall(s in 1..max_steps, t in s+1..max_steps) (
x[s] != x[t] \/ y[s] != y[t]
);

% calculate the points (to maximize)
constraint forall(s in 1..max_steps) (
points[s] = problem[x[s], y[s]]
)
/\
sum_points = sum(points);

% ensure that there are no black cells
% in the path
constraint forall(s in 1..max_steps) (
problem[x[s],y[s]] != B
);

% get the path
constraint forall(s in 1..max_steps-1) (
abs(x[s] - x[s+1]) + abs(y[s] - y[s+1]) = 1
)
/\ % close the path around the corner
abs(x[max_steps] - x[1]) + abs(y[max_steps] - y[1]) = 1;


Except for more declarations of the arrays and decision variables, this code doesn't have more logic than the ASP encoding. However, it is more verbose.

The solution for Mike Trick's problem, using LazyFD, takes 1.1 seconds, slightly slower than using Clingo (see below for more comparisons of times):

points = array1d(1..12, [0, 0, 3, 0, 0, 2, 0, 0, 2, 0, 0, 1]);
sum_points = 8;
x = array1d(1..12, [2, 2, 2, 3, 3, 4, 5, 5, 5, 4, 3, 2]);
y = array1d(1..12, [4, 3, 2, 2, 3, 3, 3, 4, 5, 5, 5, 5]);
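This solution can be cross-checked outside MiniZinc. Here is a small pure-Python sketch (my own helper, not part of the model) that re-applies the model's constraints - unique cells, orthogonal adjacency with a closed loop, no black cells - to the path above and recomputes the points on Mike Trick's grid:

```python
# Mike Trick's 5x9 grid: 0 = white, -1 = black, >0 = the cell's points
# (same encoding as in the MiniZinc model).
W, B = 0, -1
problem = [
    [2, W, W, W, W, W, W, W, W],
    [W, 3, W, W, 1, W, W, 2, W],
    [W, W, W, W, W, W, B, W, 2],
    [W, W, 2, B, W, W, W, W, W],
    [W, W, W, W, 2, W, W, 1, W],
]

def check_path(x, y, grid):
    """Check the model's constraints on a closed path of 1-based coordinates
    and return the collected points."""
    steps = list(zip(x, y))
    assert len(set(steps)) == len(steps)                  # all coordinates unique
    for (r1, c1), (r2, c2) in zip(steps, steps[1:] + steps[:1]):
        assert abs(r1 - r2) + abs(c1 - c2) == 1           # adjacent; loop closed
    assert all(grid[r - 1][c - 1] != B for r, c in steps)  # no black cells
    return sum(grid[r - 1][c - 1] for r, c in steps)       # collected points

x = [2, 2, 2, 3, 3, 4, 5, 5, 5, 4, 3, 2]
y = [4, 3, 2, 2, 3, 3, 3, 4, 5, 5, 5, 5]
total = check_path(x, y, problem)   # 8, matching sum_points above
```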


Update
After some thought I decided to try the same symmetry breaking that was an option in the ASP encoding. It is implemented in rogo2.mzn and uses the following extra constraint, which ensures that the cell with the lowest coordinates is in the first step:

% symmetry breaking: the cell with lowest coordinates
% should be in the first step.
constraint forall(i in 2..max_steps) (
  x[1]*cols+y[1] < x[i]*cols+y[i]
);
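To see what this constraint does, here is a small Python sketch (helper names are mine, not from the model): among all rotations of a cyclic path, it picks the one starting at the cell with the lowest linearized coordinate, which is exactly the representative the symmetry-breaking constraint accepts.

```python
def canonical_rotation(x, y, cols):
    """Rotate a cyclic path so the cell with the lowest linearized
    coordinate (r*cols + c) comes first."""
    coords = [r * cols + c for r, c in zip(x, y)]
    k = coords.index(min(coords))
    return x[k:] + x[:k], y[k:] + y[:k]

# The solution shown earlier, rotated into canonical form:
x = [2, 2, 2, 3, 3, 4, 5, 5, 5, 4, 3, 2]
y = [4, 3, 2, 2, 3, 3, 3, 4, 5, 5, 5, 5]
cx, cy = canonical_rotation(x, y, 9)   # now starts at cell (2,2)
```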

With this model, LazyFD solves Mike Trick's problem in 0.627 seconds. Also see under "Comparison" below.

End of update

Comparison

I was curious how well the systems would do, so here is a "recreational" comparison. Please don't read too much into it:
  • it is just 3 problems.
  • there are probably better heuristics for both Clingo and Gecode/fz
The following 3 problem instances were used in the test. Unfortunately, I have not found any direct links for the two latter instances (see below for links to my encodings). Instead a variant of the encoding used in MiniZinc is shown, where "_" ("W" in the MiniZinc code) is a white/blank cell, "B" is a black cell to avoid, and a number represents the points of the cell.
  • Mike Trick's example. 5 rows, 9 columns, 12 steps; good: 6, best: 8.
    %1 2 3 4 5 6 7 8 9  
     2,_,_,_,_,_,_,_,_, % 1
     _,3,_,_,1,_,_,2,_, % 2
     _,_,_,_,_,_,B,_,2, % 3
     _,_,2,B,_,_,_,_,_, % 4
     _,_,_,_,2,_,_,1,_, % 5
    
  • The Paper Rogo puzzle from Creative Heuristics Ltd for 20110106. 9 rows, 7 columns, 16 steps; good: 28, best: 31 points.
     %1 2 3 4 5 6 7
      B,_,6,_,_,3,B, % 1
      2,_,3,_,_,6,_, % 2
      6,_,_,2,_,_,2, % 3
      _,3,_,_,B,B,B, % 4
      _,_,_,2,_,2,B, % 5
      _,_,_,3,_,_,_, % 6
      6,_,6,B,_,_,3, % 7
      3,_,_,_,_,_,6, % 8
      B,2,_,6,_,2,B, % 9
    
  • The Paper Rogo puzzle from Creative Heuristics Ltd for 20110107. 12 rows, 7 columns, 16 steps; good: 34 points, best: 36 points.
     %1 2 3 4 5 6 7
      4,7,_,_,_,_,3, % 1
      _,_,_,_,3,_,4, % 2
      _,_,4,_,7,_,_, % 3
      7,_,3,_,_,_,_, % 4
      B,B,B,_,3,_,_, % 5
      B,B,_,7,_,_,7, % 6
      B,B,_,_,_,4,B, % 7
      B,4,4,_,_,_,B, % 8
      B,_,_,_,_,3,B, % 9
      _,_,3,_,4,B,B, % 10
      3,_,_,_,_,B,B, % 11
      7,_,7,4,B,B,B  % 12
    
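For reference, this notation can be turned into the matrix form the models use with a few lines of Python (parse_rogo is my own hypothetical helper, using the models' encoding of 0 for white and -1 for black):

```python
def parse_rogo(rows_spec):
    """Parse the instance notation above: '_' = white, 'B' = black,
    a number = the cell's points."""
    W, B = 0, -1
    return [[B if t == 'B' else W if t == '_' else int(t)
             for t in row.replace(' ', '').split(',') if t != '']
            for row in rows_spec]

mike_trick = parse_rogo([
    "2,_,_,_,_,_,_,_,_",
    "_,3,_,_,1,_,_,2,_",
    "_,_,_,_,_,_,B,_,2",
    "_,_,2,B,_,_,_,_,_",
    "_,_,_,_,2,_,_,1,_",
])
```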
For the ASP encoding I used clingo (a combined grounder and solver) with the parameter --heuristic=Vmtf after some minimal testing with different parameters. For MiniZinc, both the LazyFD and Gecode/fz solvers were used, and I settled on the search heuristic solve maximize sum_points, which seems to be fast enough for this experiment. Note that LazyFD tends to behave best without any explicit search heuristics. (Also: there is always a better search heuristic than the one you settle on.)

The time reported is the total time, i.e. including the grounding time/convert to FlatZinc.

Update: I have added the second version of the MiniZinc model, with the symmetry breaking constraint, as separate entries below. End of update

Mike Trick problem

Clingo: 0.96 seconds, 19826 choices, 16539 conflicts, 1 restart.
Clingo with symmetry breaking: 0.58 seconds, 8727 choices, 6914 conflicts, 2 restarts.
LazyFD: 1.1 second. (failure is not reported)
LazyFD with symmetry breaking: 0.6 seconds (failure is not reported)
Gecode/fz: 2.62 seconds, 92113 failures
Gecode/fz with symmetry breaking: 0.4 seconds, 9418 failures

20110106 problem

Clingo: 1:57.07 minutes, 1155290 choices, 1044814 conflicts, 1 restart
Clingo with symmetry breaking: 20.4 seconds, 157146 choices, 135178 conflicts, 3 restarts
LazyFD: 2:58 minutes
LazyFD with symmetry breaking: 19.9 seconds (failure is not reported)
Gecode/fz: 58.6 seconds, 1380512 failures
Gecode/fz with symmetry breaking: 7.8 seconds

20110107 problem

Clingo: 3:13.72 minutes, 1541808 choices, 1389396 conflicts, 1 restart
Clingo with symmetry breaking: 31.6 seconds, 178301 choices, 151439 conflicts, 1 restart
LazyFD: 2:55.18 minutes
LazyFD with symmetry breaking: 44.5 seconds (failure is not reported)
Gecode/fz: 1:54.50 minutes, 2577853 failures
Gecode/fz with symmetry breaking: 11.3 seconds

Here we see that Gecode/fz without symmetry breaking is the fastest on the two larger problems (but the slowest on the first), and Clingo and LazyFD each took a second place on the harder problems. So I'm not really sure we can draw any real conclusions from this small test.

Update: With symmetry breaking added the result is more clear-cut. All three solvers benefit a lot from it: Gecode/fz is the fastest on all three problems, and the other two take one second place each. We also see how much symmetry breaking can do. End of update

Some comments/conclusions

Here are some general comments about the problem and the coding.

ASP

As mentioned above, my initial interpretation of the problem was that the given number of steps was not always the best path length, and trying to code this was not easy. After a long time sitting with this approach (say 6 hours of coding time?), I re-read the description and couldn't find any real support for this assumption, so I skipped it in favor of the "fixed length" approach.

Getting to the final version took several hours, say 9-10 hours of total coding (in smaller sessions over several days). This includes the time for coding the initial assumption/interpretation. It also includes the time spent trying to get the first approach to work with the incremental solver iclingo, which - I hoped - would first try the longest length and, if that gave no solution, try the lengths in decrements of 1; but I didn't get this to work.

As usual I realized that several of the predicates I first thought were needed could simply be thrown away. An example is this snippet, which ensures "positively" that there is a path between the cell R1,C1 and the cell R2,C2.

{ x(I-1, R1, C1) : adj(R1,C1, R2,C2) : rows(R1;R2) : cols(C1;C2) : not black(R1,C1) : not black(R2,C2) } max_steps :- x(I, R2, C2), steps(I).

Instead it could be replaced with :- x(I, R,C), black(R,C).
:- x(I-1,R1,C1), x(I,R2,C2), not adj(R1,C1, R2,C2).


Well, I hope that with time I'll recognize these things faster.

MiniZinc

It took about 30 minutes to code the MiniZinc model (after an unnecessarily long detour of debugging bad thinking and a bug in the data). The reason for the much shorter time is two-fold:
  • It's much easier to code something when the general strategy (approach) for the problem is known and has been coded in another system. All the (de)tours taken when writing the ASP version made me much more comfortable with the given problem.
  • I'm much more comfortable coding in MiniZinc than in ASP, for two reasons: 1) I have programmed in MiniZinc much longer than in ASP. 2) I am also more used to the high-level CP approach of stating the requirements/constraints with traditional loops and arrays/matrices.
Programs and data files

Here are all the files used, both programs and data files, for the two systems.

ASP (Clingo)
MiniZinc

November 11, 2010

Google CP Solver: A much faster Nonogram solver using DefaultSearch

This is an extension of the comparison done in Comparison of some Nonogram solvers: Google CP Solver vs Gecode and two MiniZinc solvers, which includes a new and much faster variant in Google CP Solver.

The day after the above blog post was written, Laurent Perron and I discussed alternative ways of solving these Nonogram problems. Laurent suggested another solving strategy: DefaultSearch (solver.DefaultPhase), which gave excellent overall results. Then I did some more experimenting and tweaking, which gave the results shown in the table below.

The new Nonogram model

Model: nonogram_default_search.py

The changes compared to the previous version, nonogram_table2.py, were very small in the Python program (there were also some changes in the C++ code). Here are the changes:
parameters = pywrapcp.DefaultPhaseParameters()
parameters.heuristic_period = 200000
db = solver.DefaultPhase(board_label, parameters)  

DefaultSearch

To be honest, I am not exactly sure why DefaultSearch is so fast for the Nonogram problems. It is a variant of Impact-Based Search (see Philippe Refalo: Impact-Based Search Strategies for Constraint Programming, 2004) combined with periodic heuristics. The source code is here: constraint_solver/default_search.cc.

heuristic_period

The value of 200000 for parameters.heuristic_period was found by experimenting. For quite a long time I used the value 11000, and later found that even though increasing this value didn't make much difference for the simpler problems (same time and same number of failures), for the harder problems - such as Lion, Marley, and Nature - the number of failures decreased and the time improved a bit. Values larger than that, such as 2000000, decreased the number of failures somewhat for the Nature problem, but the time seemed to be slightly worse. So I settled on 200000.

This may or may not be a good value for general Nonogram problems. Doing this kind of testing on a limited - albeit mixed - problem set may have biased the search in some unfortunate way.

After deciding on the above heuristics, I finally tested (as a verification set) some other problem instances and found that they were still easily solved:
  • P200: 0.25s / 1637 failures
  • P199: 0.06s / 242 failures
  • Dragonfly: 0.03s / 0 failures
P200 was slightly worse in this version: the previous version solved it in 0.14s and with 520 failures. The other two problems were better or about the same with the current version.

As an experiment, I changed the order of the variables to board_flat (instead of board_label); then Lion was solved in 2 seconds (compared to 4.45 seconds) and Marley in 11 seconds (compared to 7:18 minutes). On the downside, 9-Dom then took 9 seconds (compared to 3.5 seconds) and Nature timed out (compared to 20:25 minutes), and other problems took slightly longer. So this is probably not the way to go.

I also experimented with restarts (which is reported to work well with Impact-Based Search), and found some initial promising results. I may come back to this later.

I should also note that the current model is quite different from my first model nonogram_regular.py presented in Google CP Solver: Regular constraint, Nonogram solver, nurse rostering, etc. Laurent has done a great job of optimizing all sort of things, both in the Python code and in the C++ code. For example, he added the "regular like" constraint solver.TransitionConstraint which obsoleted my decomposition of regular for this problem.

A comment on the results

None of the Nonogram solvers that Jan Wolter has tested in his excellent Survey of Paint-by-Number Puzzle Solvers had solved all the problem instances within the 30 minute time limit. The table below shows that the DefaultSearch variant solves all the problems - at least on my computer, and hopefully also on Wolter's benchmark computer if he wants to test this solver. All problems - except Marley and Nature - were solved well under 10 seconds, including the infamous Lion problem (4.46s).

In the previous version of the Nonogram solver a couple of the problems timed out, marked + in the table. Some of these, such as Hot and Light, are now solved in under 2 seconds.

Test machine

The machine used for the test is the same as in the former test: Linux Ubuntu 10.4 LTS with 12Gb RAM, 8 core (Intel i7/930, 2.80GHz). All programs ran on a single processor. Python version 2.6. Google CP Solver used was revision 276.

Comparison - table

Here is the full table with the new version (DefaultSearch) in the last column. The numbers in parentheses are Wolter's times; the second value in each entry is my own timing, with the number of failures where applicable. LazyFD always returned 0 choice points, so these are skipped. A + indicates a time out (30 minutes).

The links for the puzzles go to Wolter's puzzle pages.
| Puzzle | Size | Notes | Gecode | MiniZinc Gecode/fz | MiniZinc LazyFD | Google CP Solver | Google CP Solver DefaultSearch |
|---|---|---|---|---|---|---|---|
| #1: Dancer* | 5 x 10 | Line | (0.01s) 0.01s/0 failures | (0.01s) 0.08s/0 failures | (0.04s) 0.1s | 0.02s/0 failures | 0.02s/0 failures |
| #6: Cat* | 20 x 20 | Line | (0.01s) 0.1s/0 failures | (0.02s) 0.1s/0 failures | (0.44s) 0.95s | 0.12s/0 failures | 0.03s/0 failures |
| #21: Skid* | 14 x 25 | Line, Blanks | (0.01s) 0.01s/0 failures | (0.02s) 0.09s/13 failures | (0.64s) 0.9s | 0.03s/0 failures | 0.03s/0 failures |
| #27: Bucks* | 27 x 23 | Blanks | (0.01s) 0.2s/2 failures | (0.02s) 0.1s/3 failures | (1.1s) 1.6s | 0.05s/3 failures | 0.04s/6 failures |
| #23: Edge* | 10 x 11 | | (0.01s) 0.01s/15 failures | (0.01s) 0.09s/25 failures | (0.08s) 0.16s | 0.03s/25 failures | 0.03s/58 failures |
| #2413: Smoke | 20 x 20 | | (0.01s) N/A | (0.02s) 0.11s/5 failures | (0.60s) 1.18s | 0.05s/8 failures | 0.04s/20 failures |
| #16: Knot* | 34 x 34 | Line | (0.01s) 0.01s/0 failures | (0.02s) 0.14s/0 failures | (5.5s) 5.6s | 0.12s/0 failures | 0.06s/0 failures |
| #529: Swing* | 45 x 45 | Line | (0.02s) 0.02s/0 failures | (0.04s) 0.24s/0 failures | (13.2s) 22.3s | 0.3s/0 failures | 0.11s/0 failures |
| #65: Mum* | 34 x 40 | | (0.02s) 0.02s/17 failures | (0.04s) 0.18s/20 failures | (11.0s) 12.4s | 0.15s/22 failures | 0.10s/46 failures |
| #7604: DiCap | 52 x 63 | | (0.02s) N/A | (0.06s) 0.29s/0 failures | (+) 1:48m | 0.45s/0 failures | 0.18s/0 failures |
| #1694: Tragic | 45 x 50 | | (0.14s) N/A | (3.6m) 2:14m/394841 failures | (32.1s) 34.57s | 2:33m/394841 failures | 0.35s/1862 failures |
| #1611: Merka* | 55 x 60 | Blanks | (0.03s) N/A | (+) + | (1.1m) 1:10m | 0.47s/27 failures | 0.20s/96 failures |
| #436: Petro* | 40 x 35 | | (1.4m[s?]) 0.05s/48 failures | (1.6s) 1.15s/1738 failures | (6.2s) 9.3s | 1.19s/1738 failures | 0.245s/770 failures |
| #4645: M&M | 50 x 70 | Blanks | (0.93s) N/A | (0.28s) 0.41s/89 failures | (48.1s) 1:14m | 0.48s/82 failures | 0.46s/376 failures |
| #3541: Signed | 60 x 50 | | (0.57s) N/A | (0.56s) 0.61s/929 failures | (1.0m) 1:02m | 11.7s/6484 failures | 0.92s/3821 failures |
| #803: Light* | 50 x 45 | Blanks | (+) N/A | (+) + | (4.7s) 18.32s | + | 1.5s/4052 failures |
| #6574: Forever* | 25 x 25 | | (4.7s) N/A | (2.3s) 1.5s/17143 failures | (1.7s) 1.99s | 1.6s/17143 failures | 0.26s/2421 failures |
| #2040: Hot | 55 x 60 | | (+) N/A | (+) + | (2.6m) 1:05m | + | 0.62s/2394 failures |
| #6739: Karate | 40 x 40 | Multiple | (56.0s) N/A | (57.8s) 38.0s/215541 failures | (13.6s) 22.52s | 56.1s/215541 failures | 0.2s/1361 failures |
| #8098: 9-Dom* | 19 x 19 | | (12.6m) N/A | (3.8s) 2.59s/45226 failures | (1.8s) 3.11s | 3.9s/45226 failures | 3.54s/27650 failures |
| #2556: Flag | 65 x 45 | Multiple, Blanks | (3.4s) N/A | (+) + | (4.2s) 16.18s | 1.6s/14859 failures | 0.15s/442 failures |
| #2712: Lion | 47 x 47 | Multiple | (+) N/A | (+) + | (+) 8:54m | + | 4.34s/15429 failures |
| #10088: Marley | 52 x 63 | Multiple | (+) N/A | (+) + | (2.9m) + | + | 7:18m/309293 failures |
| #9892: Nature | 50 x 40 | Multiple | (+) N/A | (+) + | (2.4m) 25.29s | + | 20:25m/7743891 failures |

November 08, 2010

Comparison of some Nonogram solvers: Google CP Solver vs Gecode and two MiniZinc solvers

After writing Google CP Solver: Regular constraint, Nonogram solver, nurse rostering, etc yesterday, I thought it would be interesting to benchmark the new Nonogram solver written in Google CP Solver against some other solvers. And the benchmark is of course from Jan Wolter's great Survey of Paint-by-Number Puzzle Solvers, though I compare only with Gecode and two MiniZinc solvers: Gecode/fz (Gecode's FlatZinc solver) and MiniZinc's LazyFD, since I know them best and can test them myself.

In the table below, I have kept Wolter's original values in parentheses, to get a feeling for the differences between our machines and to cover the problems I could not run myself with Gecode.

System notes:
  • The timeout is 30 minutes as in Wolter's test.
  • My machine is a Linux Ubuntu 10.4 LTS with 12Gb RAM, 8 core (Intel i7/930, 2.80GHz), though all programs ran on a single processor.
  • Also, I was "working" (Spotify, web surfing, running other tests in parallel, etc) on the machine as the test ran.
  • Due to copyright issues, I cannot distribute the examples. They can be downloaded from Wolter's Sample Paint-by-Number Puzzles Download.
And here are some comments about the different solvers.

Google CP Solver

Google CP Solver revision 259, with revision 259 of the Python solver nonogram_table2.py, which includes a lot of nice improvements thanks to Laurent Perron.

Gecode

Gecode and Gecode/fz are version 3.4.2 (latest svn revision: 11517).
Please note that I have only tested the problem instances in the existing files for Gecode. Hence a lot of instances were not tested on my machine (they are marked N/A).

Also, I think that Wolter's time for Petro should be 1.4s (not 1.4m).

MiniZinc

MiniZinc version: 64bit Linux, snapshot version per 2010-11-05.

Note: In contrast to Wolter's results, the times for Gecode/fz and LazyFD include generating the FlatZinc file, which may be considerable for large problem instances. Hence some of my results are larger for these solvers.

Some comments about Google CP Solver / Python solver

Wolter has done a great job analyzing the three other solvers (Gecode, Gecode/fz, and LazyFD), so I will just comment on the Google CP Solver model.

It looks like the Google CP Solver/Python solver is quite similar to the Gecode/fz solver, and in some cases (e.g. 9-Dom, Bucks, Tragic, Petro, etc.) it has exactly the same number of failures. There are some exceptions, though:
  • It solved Merka and DiCap quite fast; Gecode/fz timed out on both of these problems.
  • It also solved Flag fast, where Gecode/fz timed out. Here it is also faster than LazyFD and Gecode (note: Wolter's time).
  • It is slower than Gecode/fz on Karate and Signed.
  • It is slightly slower than Gecode/fz on Tragic, with the same number of failures.

Comparison - table

Here is the full table. The numbers in parentheses are Wolter's times; the second value in each entry is my own timing. I also added the number of failures where applicable; LazyFD always returned 0 choice points so I skipped that. A + indicates a time out (30 minutes).

The links for the puzzles go to Wolter's puzzle pages.
| Puzzle | Size | Notes | Gecode | MiniZinc Gecode/fz | MiniZinc LazyFD | Google CP Solver |
|---|---|---|---|---|---|---|
| #1: Dancer* | 5 x 10 | Line | (0.01s) 0.01s/0 failures | (0.01s) 0.08s/0 failures | (0.04s) 0.1s | 0.02s/0 failures |
| #6: Cat* | 20 x 20 | Line | (0.01s) 0.1s/0 failures | (0.02s) 0.1s/0 failures | (0.44s) 0.95s | 0.12s/0 failures |
| #21: Skid* | 14 x 25 | Line, Blanks | (0.01s) 0.01s/0 failures | (0.02s) 0.09s/13 failures | (0.64s) 0.9s | 0.03s/0 failures |
| #27: Bucks* | 27 x 23 | Blanks | (0.01s) 0.2s/2 failures | (0.02s) 0.1s/3 failures | (1.1s) 1.6s | 0.05s/3 failures |
| #23: Edge* | 10 x 11 | | (0.01s) 0.01s/15 failures | (0.01s) 0.09s/25 failures | (0.08s) 0.16s | 0.03s/25 failures |
| #2413: Smoke | 20 x 20 | | (0.01s) N/A | (0.02s) 0.11s/5 failures | (0.60s) 1.18s | 0.05s/8 failures |
| #16: Knot* | 34 x 34 | Line | (0.01s) 0.01s/0 failures | (0.02s) 0.14s/0 failures | (5.5s) 5.6s | 0.12s/0 failures |
| #529: Swing* | 45 x 45 | Line | (0.02s) 0.02s/0 failures | (0.04s) 0.24s/0 failures | (13.2s) 22.3s | 0.3s/0 failures |
| #65: Mum* | 34 x 40 | | (0.02s) 0.02s/17 failures | (0.04s) 0.18s/20 failures | (11.0s) 12.4s | 0.15s/22 failures |
| #7604: DiCap | 52 x 63 | | (0.02s) N/A | (0.06s) 0.29s/0 failures | (+) 1:48m | 0.45s/0 failures |
| #1694: Tragic | 45 x 50 | | (0.14s) N/A | (3.6m) 2:14m/394841 failures | (32.1s) 34.57s | 2:33m/394841 failures |
| #1611: Merka* | 55 x 60 | Blanks | (0.03s) N/A | (+) + | (1.1m) 1:10m | 0.47s/27 failures |
| #436: Petro* | 40 x 35 | | (1.4m[s?]) 0.05s/48 failures | (1.6s) 1.15s/1738 failures | (6.2s) 9.3s | 1.19s/1738 failures |
| #4645: M&M | 50 x 70 | Blanks | (0.93s) N/A | (0.28s) 0.41s/89 failures | (48.1s) 1:14m | 0.48s/82 failures |
| #3541: Signed | 60 x 50 | | (0.57s) N/A | (0.56s) 0.61s/929 failures | (1.0m) 1:02m | 11.7s/6484 failures |
| #803: Light* | 50 x 45 | Blanks | (+) N/A | (+) + | (4.7s) 18.32s | + |
| #6574: Forever* | 25 x 25 | | (4.7s) N/A | (2.3s) 1.5s/17143 failures | (1.7s) 1.99s | 1.6s/17143 failures |
| #2040: Hot | 55 x 60 | | (+) N/A | (+) + | (2.6m) 1:05m | + |
| #6739: Karate | 40 x 40 | Multiple | (56.0s) N/A | (57.8s) 38.0s/215541 failures | (13.6s) 22.52s | 56.1s/215541 failures |
| #8098: 9-Dom* | 19 x 19 | | (12.6m) N/A | (3.8s) 2.59s/45226 failures | (1.8s) 3.11s | 3.9s/45226 failures |
| #2556: Flag | 65 x 45 | Multiple, Blanks | (3.4s) N/A | (+) + | (4.2s) 16.18s | 1.6s/14859 failures |
| #2712: Lion | 47 x 47 | Multiple | (+) N/A | (+) + | (+) 8:54m | + |
| #10088: Marley | 52 x 63 | Multiple | (+) N/A | (+) + | (2.9m) + | + |
| #9892: Nature | 50 x 40 | Multiple | (+) N/A | (+) + | (2.4m) 25.29s | + |

November 07, 2010

Google CP Solver: Regular constraint, Nonogram solver, nurse rostering, etc

After some sweat and confusion, I have now implemented a Nonogram solver in Google CP Solver using a (decomposition of the) regular constraint. The model is here: nonogram_regular.py. There is also a test model for the regular constraint in regular.py

The hardest part by far was to translate the regular constraint from Comet to Google CP Solver. The Comet version (nonogram_regular.co) was described in a sequence of blog posts. As we will see below, other problems were also implemented using the regular constraint.

Update: Laurent Perron suggested an improvement of the regular constraint, which makes some of the models much faster. I have commented on this below.

Translation of regular

As mentioned above, the hardest part was the translation of the regular constraint from Comet (nonogram_regular.co) to Google CP Solver (the Comet version was in turn a translation of the MiniZinc version). I started with the simpler example regular.py, which basically consists of two parts, also included in the nonogram model:

make_transition_matrix

The function translates an array of numbers to a finite state automaton representing a regular expression of 0's and 1's. E.g. the array [3,2,1] represents the regular expression 0*1{3}0+1{2}0+1{1}0* and is translated into the following automaton (the valid states), which must start in state 1 and end in state 9 (0 is a failing state):
1 2
0 3
0 4
5 0
5 6
0 7
8 0
8 9
9 0
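The construction behind this table can be sketched in Python (a re-derivation from the automaton above, not the model's actual make_transition_matrix code):

```python
def make_transition_matrix(pattern):
    """Build the automaton for a Nonogram clue: for each 1-based state,
    a pair (next state on 0, next state on 1), with 0 as the fail state."""
    t, s = [], 1
    for i, b in enumerate(pattern):
        t.append((s, s + 1))              # 0*/0+ state: stay on 0, start block on 1
        s += 1
        for j in range(b):
            if j < b - 1:
                t.append((0, s + 1))      # inside a block: must read another 1
            elif i < len(pattern) - 1:
                t.append((s + 1, 0))      # block finished: must read a separating 0
            else:
                t.append((s, 0))          # accepting state: trailing 0* self-loop
            s += 1
    return t
```

For [3,2,1] this reproduces the nine states listed above, with state 9 accepting.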
The hardest part here was that the Comet constraint is 1-based, but Python is 0-based. There are many things that can - and will - go wrong when translating from a 1-based system to a 0-based system. And it doesn't help that state 0 is used as a fail state.

regular constraint

Then, given the automaton (the valid states), the regular constraint transforms the states into an array of decision variables. The domains are 1..2 (representing the values of 0 and 1).

Update Laurent Perron suggested an improvement using the table constraint (called solver.AllowedAssignments in Google CP Solver). I have changed to this version here.

This was his first improvement (last in the method):
#
# Global constraint regular
#
# This is a translation of MiniZinc's regular constraint (defined in
# lib/zinc/globals.mzn), via the Comet code refered above.
# All comments are from the MiniZinc code.
# '''
# The sequence of values in array 'x' (which must all be in the range 1..S)
# is accepted by the DFA of 'Q' states with input 1..S and transition
# function 'd' (which maps (1..Q, 1..S) -> 0..Q)) and initial state 'q0'
# (which must be in 1..Q) and accepting states 'F' (which all must be in
# 1..Q).  We reserve state 0 to be an always failing state.
# '''
#
# x : IntVar array
# Q : number of states
# S : input_max
# d : transition matrix
# q0: initial state
# F : accepting states
def regular(x, Q, S, d, q0, F):

  solver = x[0].solver()

  assert Q > 0, 'regular: "Q" must be greater than zero'
  assert S > 0, 'regular: "S" must be greater than zero'

  # d2 is the same as d, except we add one extra transition for
  # each possible input;  each extra transition is from state zero
  # to state zero.  This allows us to continue even if we hit a
  # non-accepted input.

  d2 = []
  for i in range(Q + 1):
    for j in range(S):
      if i == 0:
        d2.append((0, j, 0))
      else:
        d2.append((i, j, d[i - 1][j]))

  # If x has index set m..n, then a[m-1] holds the initial state
  # (q0), and a[i+1] holds the state we're in after processing
  # x[i].  If a[n] is in F, then we succeed (ie. accept the
  # string).
  x_range = range(0, len(x))
  m = 0
  n = len(x)

  a = [solver.IntVar(0, Q + 1, 'a[%i]' % i) for i in range(m, n + 1)]

  # Check that the final state is in F
  solver.Add(solver.MemberCt(a[-1], F))
  # First state is q0
  solver.Add(a[m] == q0)
  for i in x_range:
    solver.Add(x[i] >= 1)
    solver.Add(x[i] <= S)
    # Determine a[i+1]: a[i+1] == d2[a[i], x[i]]
    # Laurent Perron suggested using this: 
    solver.Add(solver.AllowedAssignments((a[i], x[i] - 1, a[i + 1]), d2))
Update 2: Some hours later, Laurent Perron added C++ support simplifying this with solver.TransitionConstraint, so the definition of regular is now much neater:
def regular(x, Q, S, d, q0, F):

  solver = x[0].solver()

  assert Q > 0, 'regular: "Q" must be greater than zero'
  assert S > 0, 'regular: "S" must be greater than zero'

  # d2 is the same as d, except we add one extra transition for
  # each possible input;  each extra transition is from state zero
  # to state zero.  This allows us to continue even if we hit a
  # non-accepted input.

  d2 = []
  for i in range(Q + 1):
    for j in range(1, S + 1):
      if i == 0:
        d2.append((0, j, 0))
      else:
        d2.append((i, j, d[i - 1][j - 1]))

  solver.Add(solver.TransitionConstraint(x, d2, q0, F))
This improvement is in the model: regular_table2.py and the Nonogram model: nonogram_table2.py
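To illustrate the semantics these constraints enforce, here is a plain-Python walk over such a transition matrix (my own sketch - the solver propagates the constraint, it does not simulate it). State 0 acts as the always-failing state that the padded table d2 in the code above encodes explicitly:

```python
def accepts(d, q0, F, xs):
    """Walk the DFA over a sequence xs of values in 1..S. d is the 1-based
    transition matrix (d[q-1][x-1]); state 0 always fails."""
    q = q0
    for x in xs:
        q = 0 if q == 0 else d[q - 1][x - 1]
    return q in F

# The automaton for the Nonogram clue [3,2,1] (values: 1 = white, 2 = filled):
d = [(1, 2), (0, 3), (0, 4), (5, 0), (5, 6), (0, 7), (8, 0), (8, 9), (9, 0)]
ok = accepts(d, 1, {9}, [1, 2, 2, 2, 1, 2, 2, 1, 2])   # 0 111 0 11 0 1 -> accepted
bad = accepts(d, 1, {9}, [2, 2, 1, 2, 2, 1, 2, 1, 1])  # blocks 2,2,1 -> rejected
```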

Nonogram

After some more problems were fixed, it was really a simple matter of translating the rest of the Nonogram model (nonogram_regular.py).

Labeling: I use the same labeling, or as close as possible, as in the Comet model:
# This labeling was inspired by a suggestion from
# Pascal Van Hentenryck about my Comet nonogram model.
board_label = []
if rows * row_rule_len < cols * col_rule_len:
    for i in range(rows):
        for j in range(cols):
            board_label.append(board[i,j])
else:
    for j in range(cols):        
        for i in range(rows):
            board_label.append(board[i,j])
Update: Here are Laurent Perron's versions with the improved regular: nonogram_table.py and nonogram_table2.py

Testing and problem instances for Nonogram

I have compared two problem instances, p200 (nonogram_p200.py) and p199 (nonogram_p199.py), with some of my other Nonogram solvers: the Comet model nonogram_regular.co (with my decomposition of regular) and the MiniZinc model nonogram_create_automaton2.mzn with Gecode/fz, using Gecode's built-in regular.

Update: I have added Laurent Perron's version using table (solver.AllowedAssignments) as a separate entry for comparison:

| Problem | Google CP Solver | Google CP Solver with table | Comet | Gecode/fz |
|---|---|---|---|---|
| P200 | 1.8s/11615 failures | 0.2s/520 failures | 0.4s/794 failures | 0.2s/520 failures |
| P199 | 17.607s/104239 failures | 0.3s/772 failures | 0.6s/1175 failures | 0.1s/772 failures |

The result for P200 is not too bad for Google CP Solver, but I am a little surprised by the large time/failure count for P199: almost 18 seconds. However, one could note that using a plain ordering of the variables for this problem instance, i.e.
board_label = [board[i,j] for i in range(rows) for j in range(cols)]
then P199 is solved in 0.4 seconds with 2005 failures. However, with this ordering, P200 takes a very long time.

Update: It's interesting to see that the improved table version has exactly the same number of failures as the version using Gecode/fz.
Update 2: The second improvement from Laurent Perron didn't change the number of failures, but the times were slightly improved by a few tens of milliseconds: 169ms vs 209ms for P200, and 264ms vs 296ms for P199.

Thanks for the improvements, Laurent!

Here are some other problem instances:

Nurse rostering

Model: nurse_rostering.py

A standard application of regular constraint is in different types of scheduling, e.g. nurse rostering. The model nurse_rostering.py is a small example based on the model in the MiniZinc Tutorial (PDF), page 35f (the graph of the DFA is shown at page 37).

The example is for 7 nurses and 14 days using three states: day shift, night shift, and day off. There are two kind of constraints:

* Rules using the regular constraint:
  • one day off every 4 days
  • no 3 nights in a row.
These are translated as the following transition function:
transition_fn =  [
   # d,n,o
    [2,3,1], # state 1
    [4,4,1], # state 2
    [4,5,1], # state 3 
    [6,6,1], # state 4
    [6,0,1], # state 5
    [0,0,1]  # state 6
    ]
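The two rules can be checked against this transition function in plain Python (roster_ok is my own hypothetical helper, not part of the model): state 0 means a rule was violated.

```python
# The transition function above: rows are states 1..6, columns are the
# shifts (d)ay, (n)ight, (o)ff; 0 is the failing state.
transition_fn = [
    # d, n, o
    [2, 3, 1],  # state 1
    [4, 4, 1],  # state 2
    [4, 5, 1],  # state 3
    [6, 6, 1],  # state 4
    [6, 0, 1],  # state 5
    [0, 0, 1],  # state 6
]

def roster_ok(roster, q0=1):
    """Walk the DFA over a string of shifts such as 'dddo...'."""
    idx = {'d': 0, 'n': 1, 'o': 2}
    q = q0
    for shift in roster:
        q = transition_fn[q - 1][idx[shift]]
        if q == 0:          # a rule was violated
            return False
    return True
```

For example, roster_ok("dddd") fails (four work days without a day off), roster_ok("nnn") fails (three nights in a row), while Nurse0's row from the solution below passes.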
Note: This transition function can be used for any number of nurses and days.

* Other rules
Here I went wild and required some extra rules for this specific number of nurses and days:
  • Each nurse must work between 7 and 10 days during the period (14 days)
  • On weekends there must be 2 nurses on day shift, 1 on night shift, and 4 have a day off
  • On working days: 3 nurses on day shift, 2 on night shift, and 2 have a day off
Below is one solution, including statistics for each nurse and day, respectively:
Nurse0:  d d d o d d d o d d d o d o  day_stat: [('d', 10), ('o', 4)] total: 10 workdays
Nurse1:  d d d o d d d o d d d o d o  day_stat: [('d', 10), ('o', 4)] total: 10 workdays
Nurse2:  d d o d d n o d d d o d n o  day_stat: [('d', 8), ('o', 4), ('n', 2)] total: 10 workdays
Nurse3:  n n o d n o n d n o d d o d  day_stat: [('d', 5), ('o', 4), ('n', 5)] total: 10 workdays
Nurse4:  n o d n n o o d n n o n o o  day_stat: [('d', 2), ('o', 6), ('n', 6)] total: 8 workdays
Nurse5:  o n n d o o o n o o n n o d  day_stat: [('d', 2), ('o', 7), ('n', 5)] total: 7 workdays
Nurse6:  o o n n o o o n o n n d o n  day_stat: [('d', 1), ('o', 7), ('n', 6)] total: 7 workdays

Statistics per day:
        d n o
Day 0:  3 2 2
Day 1:  3 2 2
Day 2:  3 2 2
Day 3:  3 2 2
Day 4:  3 2 2
Day 5:  2 1 4
Day 6:  2 1 4
Day 7:  3 2 2
Day 8:  3 2 2
Day 9:  3 2 2
Day10:  3 2 2
Day11:  3 2 2
Day12:  2 1 4
Day13:  2 1 4

Solving 3 jugs problem using regular

Model: 3_jugs_regular.py
Model: 3_jugs_regular2.py

This is a model solving the classic 3 jugs problem (a.k.a. water jugs problem), based on Taha's network flow model (Introduction to Operations Research, page 245f) but using regular instead. Compare, for example, with the Comet model 3_jugs.co using a network flow approach.

The different possible states in the problem are represented as nodes, and we start at state 1 (node 1). The names of the nodes represent the amount of water in each jug; e.g. 8,0,0 means that 8 liters are in the first jug, and 0 in both jug 2 and jug 3. The goal is then to go from state 8,0,0 to 4,4,0. Here are all the nodes:
nodes = [
    '8,0,0', # 1 start
    '5,0,3', # 2
    '5,3,0', # 3 
    '2,3,3', # 4 
    '2,5,1', # 5
    '7,0,1', # 6
    '7,1,0', # 7
    '4,1,3', # 8
    '3,5,0', # 9
    '3,2,3', # 10
    '6,2,0', # 11
    '6,0,2', # 12
    '1,5,2', # 13
    '1,4,3', # 14
    '4,4,0'  # 15 goal
    ]
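Each node name encodes the contents of the three jugs, which have capacities 8, 5, and 3 liters. As a quick sanity check (my own, not part of the model), every state holds exactly 8 liters in total and respects the capacities:

```python
# Parse the node names above into tuples and check the jug invariants.
nodes = ['8,0,0', '5,0,3', '5,3,0', '2,3,3', '2,5,1', '7,0,1', '7,1,0',
         '4,1,3', '3,5,0', '3,2,3', '6,2,0', '6,0,2', '1,5,2', '1,4,3',
         '4,4,0']
capacities = (8, 5, 3)
states_l = [tuple(map(int, n.split(','))) for n in nodes]
ok = all(sum(s) == 8 and all(v <= c for v, c in zip(s, capacities))
         for s in states_l)
```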
The valid transitions between these states are as follows.
states = [
    [2,9],  # state 1
    [3],    # state 2
    [4, 9], # state 3
    [5],    # state 4
    [6,9],  # state 5
    [7],    # state 6
    [8,9],  # state 7
    [15],   # state 8
    [10],   # state 9
    [11],   # state 10
    [12],   # state 11
    [13],   # state 12
    [14],   # state 13
    [15]    # state 14
    ]
And from these adjacency lists we can build the DFA:
transition_fn =  [
   # 1  2  3  4  5  6  7  8  9  0  1  2  3  4  5 
    [0, 2, 0, 0, 0, 0, 0, 0, 9, 0, 0, 0, 0, 0, 0],  # 1
    [0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # 2
    [0, 0, 0, 4, 0, 0, 0, 0, 9, 0, 0, 0, 0, 0, 0],  # 3
    [0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # 4
    [0, 0, 0, 0, 0, 6, 0, 0, 9, 0, 0, 0, 0, 0, 0],  # 5
    [0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0],  # 6
    [0, 0, 0, 0, 0, 0, 0, 8, 9, 0, 0, 0, 0, 0, 0],  # 7
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 15], # 8
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0], # 9
    [0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 11, 0, 0, 0, 0], # 10
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12, 0, 0, 0], # 11
    [0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 13, 0, 0], # 12
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 14, 0], # 13
    [0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 15], # 14
                                                    # 15
    ]
As you can see, this is very much like the representation in the network flow model, but the entries are state numbers instead of just 0/1 (connected/not connected).

Running this program gives the following solution where x contains the chosen states, and then the nodes of the states are shown.
x: [1, 9, 10, 11, 12, 13, 14, 15]
8,0,0 -> 3,5,0
3,5,0 -> 3,2,3
3,2,3 -> 6,2,0
6,2,0 -> 6,0,2
6,0,2 -> 1,5,2
1,5,2 -> 1,4,3
1,4,3 -> 4,4,0

num_solutions: 1
failures: 1
branches: 2
wall_time: 0 ms

Found a solution of length 8: [1, 9, 10, 11, 12, 13, 14, 15]
Note: This model doesn't have any explicit minimization objective for the number of states (which is implied in the problem). Instead it calls the main function with the length of the array x ranging from 1 to 15, and thus finds the first, minimal, solution of length 8.
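As a quick sanity check, independent of the CP model, we can verify in plain Python that the reported path really follows the adjacency lists above:

```python
# Adjacency lists from the model above (1-based state numbers;
# state 15 is the goal and has no successors).
states = [[2, 9], [3], [4, 9], [5], [6, 9], [7], [8, 9], [15],
          [10], [11], [12], [13], [14], [15]]

def valid_path(path, adj):
    """Check that every consecutive pair in path is a legal transition."""
    return all(b in adj[a - 1] for a, b in zip(path, path[1:]))

assert valid_path([1, 9, 10, 11, 12, 13, 14, 15], states)
assert not valid_path([1, 2, 4], states)   # 2 -> 4 is not a transition
```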

3 jugs problem, alternative solution

Model: 3_jugs_regular2.py

However, I was not satisfied with calling main repeatedly. Instead an alternative solution was created which uses an extra (last) state that may be looped into (15->15->15...):
# ....
[15],   # state 14
[15]    # state 15: loop ("fix-point")
With this state we can fix the length of the array of decision variables (x) to 15, the number of states, and add an objective variable cost which is then minimized. This variable counts the number of elements in x that are not in the final state (state 15):
cost = solver.IntVar(0, input_max, 'cost')
# ...
sum_b = [solver.IsLessCstVar(x[i], input_max-1) for i in range(input_max)]
solver.Add(cost == solver.Sum(sum_b))

objective = solver.Minimize(cost, 1)
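What the cost variable counts can be illustrated in plain Python (a sketch only; the real model uses or-tools decision variables, and the IsLessCstVar trick presupposes the model's particular state numbering):

```python
FINAL = 15  # the absorbing "fix-point" state

def cost_of(x, final=FINAL):
    # Number of positions that have not yet reached the final state,
    # i.e. the length of the actual path before the padding.
    return sum(1 for v in x if v != final)

# The length-8 solution, padded with the fix-point state to length 15:
x = [1, 9, 10, 11, 12, 13, 14] + [15] * 8
assert cost_of(x) == 7   # 7 states before reaching state 15
```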

Global contiguity

Model: contiguity_regular.py

The definition of the global constraint global_contiguity from Global Constraint Catalog is:
Enforce all variables of the VARIABLES collection to be assigned to 0 or 1. In addition, all variables assigned to value 1 appear contiguously.

Example:
(<0, 1, 1, 0>)

The global_contiguity constraint holds since the sequence 0 1 1 0 contains no more than one group of contiguous 1.
The regular expression for this is ^0*1*0*$ (or perhaps ^0*1+0*$ if we assume there must be at least one 1). The transition function is thus:
transition_fn =  [
    [1,2], # state 1: 0*
    [3,2], # state 2: 1* 
    [3,0], # state 3: 0* 
    ]
For an array of length 4 there are 11 solutions:
[0, 0, 0, 0]
[0, 0, 0, 1]
[0, 0, 1, 0]
[0, 0, 1, 1]
[0, 1, 0, 0]
[0, 1, 1, 0]
[0, 1, 1, 1]
[1, 0, 0, 0]
[1, 1, 0, 0]
[1, 1, 1, 0]
[1, 1, 1, 1]

num_solutions: 11
failures: 0
branches: 20
wall_time: 0 ms
For an array of length 100 there are 5051 solutions, still with 0 failures (and 10100 branches). It takes about 4 seconds to generate and print all solutions; without printing it takes 0.2 seconds.
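The counts can be cross-checked in plain Python by running every binary string through the DFA (a 0 entry denotes failure; all three states are accepting):

```python
from itertools import product

# transition_fn[state-1][symbol]: next state on reading 0 or 1 (0 = fail).
transition_fn = [[1, 2],   # state 1: leading 0*
                 [3, 2],   # state 2: the block of 1s
                 [3, 0]]   # state 3: trailing 0*

def accepts(bits):
    state = 1                     # start state
    for b in bits:
        state = transition_fn[state - 1][b]
        if state == 0:
            return False
    return True                   # all three states are accepting

n4 = sum(accepts(bits) for bits in product([0, 1], repeat=4))
assert n4 == 11
# In general there are 1 + n*(n+1)/2 solutions; for n = 100: 5051.
assert 1 + 100 * 101 // 2 == 5051
```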

Regular expressions

Model: regexp.py

The history behind this is that my last name (Kjellerstrand) is quite often misspelled, in ways captured by this regular expression:
k(je|ä)ll(er|ar)?(st|b)r?an?d
Note: In Swedish the two constructs "kje(ll)" and "kä(ll)" sound alike.

This model generates all the words that can be constructed from this regular expression. As with the other models, we create a DFA. Note that the states are groups of characters (maybe they should be just single characters):
transition_fn =  [
   # 1 2 3 4 5 6 7 8 9 0 1 2     # 
    [0,2,3,0,0,0,0,0,0,0,0,0],   #  1 k 
    [0,0,0,4,0,0,0,0,0,0,0,0],   #  2 je
    [0,0,0,4,0,0,0,0,0,0,0,0],   #  3 ä
    [0,0,0,0,5,6,7,8,0,0,0,0],   #  4 ll
    [0,0,0,0,0,0,7,8,0,0,0,0],   #  5 er
    [0,0,0,0,0,0,7,8,0,0,0,0],   #  6 ar
    [0,0,0,0,0,0,0,0,9,10,0,0],  #  7 st 
    [0,0,0,0,0,0,0,0,9,10,0,0],  #  8 b
    [0,0,0,0,0,0,0,0,0,10,0,0],  #  9 r
    [0,0,0,0,0,0,0,0,0,0,11,12], # 10 a
    [0,0,0,0,0,0,0,0,0,0,0,12],  # 11 n
                                 # 12 d 
    ]

s = ['k','je','ä','ll','er','ar','st','b','r','a','n','d']
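The generation itself can be sketched in plain Python: a depth-first walk over the DFA from the start state (1, emitting 'k') to the final state (12, emitting 'd'), concatenating the character group of each visited state. The successor lists below are just the non-zero entries of the transition matrix above.

```python
s = ['k', 'je', 'ä', 'll', 'er', 'ar', 'st', 'b', 'r', 'a', 'n', 'd']

# succ[i-1] lists the states reachable from state i
# (the non-zero entries of transition_fn above).
succ = [[2, 3], [4], [4], [5, 6, 7, 8], [7, 8], [7, 8],
        [9, 10], [9, 10], [10], [11, 12], [12], []]

def words(state=1, prefix=''):
    prefix += s[state - 1]
    if state == 12:              # final state: 'd'
        return [prefix]
    out = []
    for nxt in succ[state - 1]:
        out += words(nxt, prefix)
    return out

names = words()
assert len(names) == 48
assert 'kjellerstrand' in names and 'källbad' in names
```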
Running the program generates all the 48 variants of my last name. Though, I have to confess that I have never been called "Kjellbad", "Källbad" or "Kjellbrad". Yet.
kjellstad
kjellbad
källstad
källbad
kjellerstad
kjellerbad
kjellarstad
kjellarbad
kjellstrad
kjellstand
kjellbrad
kjellband
källerstad
källerbad
källarstad
källarbad
källstrad
källstand
källbrad
källband
kjellerstrad
kjellerstand
kjellerbrad
kjellerband
kjellarstrad
kjellarstand
kjellarbrad
kjellarband
kjellstrand
kjellbrand
källerstrad
källerstand
källerbrad
källerband
källarstrad
källarstand
källarbrad
källarband
källstrand
källbrand
kjellerstrand
kjellerbrand
kjellarstrand
kjellarbrand
källerstrand
källerbrand
källarstrand
källarbrand
Compare with the Gecode model all_regexp.cpp, which was described in Regular expressions in Gecode, where I also describe my fascination with regular expressions.

August 31, 2010

Nontransitive dice, Ormat game, 17x17 challenge

Here are some new models from the last few weeks, plus some older ones not mentioned before. (I have also spent some time with Numberjack, a Python based solver, and will blog about that later on.)

Nontransitive dice

MiniZinc: nontransitive_dice.mzn
Comet: nontransitive_dice.co

(Sidenote: This was triggered by a problem in Numberjack's Tutorial.)

Nontransitive dice is presented at Wikipedia as:
A set of nontransitive dice is a set of dice for which the relation "is more likely to roll a higher number" is not transitive. See also intransitivity. This situation is similar to that in the game Rock, Paper, Scissors, in which each element has an advantage over one choice and a disadvantage to the other.
And then gives the following example:
Consider a set of three dice, A, B and C such that
* die A has sides {2,2,4,4,9,9},
* die B has sides {1,1,6,6,8,8}, and
* die C has sides {3,3,5,5,7,7}.

Then:
* the probability that A rolls a higher number than B is 5/9 (55.55 %),
* the probability that B rolls a higher number than C is 5/9, and
* the probability that C rolls a higher number than A is 5/9.

Thus A is more likely to roll a higher number than B, B is more likely to roll a higher number than C, and C is more likely to roll a higher number than A. This shows that the relation "is more likely to roll a higher number" is not transitive with these dice, and so we say this is a set of nontransitive dice.
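The Wikipedia probabilities are easy to verify in plain Python by counting winning pairs over all 36 roll combinations:

```python
from fractions import Fraction

A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def p_beats(x, y):
    """Probability that die x rolls a higher number than die y."""
    wins = sum(1 for a in x for b in y if a > b)
    return Fraction(wins, len(x) * len(y))

# Each die beats the next one in the cycle with probability 5/9.
assert p_beats(A, B) == p_beats(B, C) == p_beats(C, A) == Fraction(5, 9)
```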
How easy is it to model such nontransitive dice in a high level constraint programming system such as MiniZinc or Comet? The basic MiniZinc model nontransitive_dice.mzn - with no bells and whistles - is this (somewhat edited):
int: m = 3; % number of dice
int: n = 6; % number of sides of each die

% the dice
array[1..m, 1..n] of var 0..n*2: dice :: is_output;

% the competitions: 
% The last wrap around is the one that breaks 
% the transitivity.
array[0..m-1, 1..2] of var 0..n*n: comp :: is_output;

solve satisfy;
constraint
   % order the number of each die
   forall(d in 1..m) (
       increasing([dice[d,i] | i in 1..n])
   )
;
 
% and now we roll...
% Number of wins for [A vs B, B vs A]
constraint
   forall(d in 0..m-1) (
      comp[d,1] = sum(r1, r2 in 1..n) (
          bool2int(dice[1+(d mod m), r1] >      
                   dice[1+((d + 1) mod m), r2]))
      /\
      comp[d,2] = sum(r1, r2 in 1..n) (
          bool2int(dice[1+((d+1) mod m), r1] >
                   dice[1+((d) mod m), r2]))
   )
;

% non-transitivity
% All dice 1..m-1 must beat the follower, 
% and die m must beat die 1
constraint 
  forall(d in 0..m-1) (
    comp[d,1] > comp[d,2]
   )
;
The result of the m competitions is placed in the (m x 2) matrix comp, where the winner of match i is in comp[i,1] and the loser in comp[i,2]. The last constraint section in the code above ensures that the winner is always in comp[i,1].

In the full model I have added the following:
  • max_val: maximum value of the dice, to be minimized
  • max_win: maximum number of winnings, to be maximized
  • gap and gap_sum: the difference of wins of a specific competition, to be maximized or minimized
  • the example setups from the Wikipedia page
The Comet version nontransitive_dice.co, includes about the same functions as the MiniZinc model. Here is the basic model (also edited for presentational purposes):
import cotfd;
int m = 3; // number of dice
int n = 6; // number of sides of each die
Solver cp();

// the dice
var{int} dice[1..m, 1..n](cp, 1..n*2);

// The competitions: 
var{int} comp[0..m-1, 1..2](cp, 0..n*n);

explore  {
  // symmetry breaking: order the number of each die
  forall(d in 1..m) {
    forall(i in 2..n) {
      cp.post(dice[d,i-1] <= dice[d,i]);
    }
  }

  // and now we roll...
  // Number of wins for [A vs B, B vs A]
  forall(d in 0..m-1) {
    cp.post(comp[d,1] == 
        sum(r1 in 1..n, r2 in 1..n) (
            dice[1+(d % m), r1] >
            dice[1+((d + 1) % m), r2]));

    cp.post(comp[d,2] == 
         sum(r1 in 1..n, r2 in 1..n) (
            dice[1+((d+1) % m), r1] >
            dice[1+((d) % m), r2]));
  }

  forall(d in 0..m-1) {
    cp.post(comp[d,1] > comp[d,2]);
  }


} using {

  labelFF(dice);
  labelFF(comp);      

  cout << "dice:" << endl;
  forall(i in 1..m) {
    forall(j in 1..n) {
      cout << dice[i,j] << " ";
    }
    cout << endl;
  }
  cout << endl;
}
The full Comet model, when minimizing max_val for 3 dice with 6 sides each (with domain 1..12), gives this result, i.e. the maximum value used in the dice is 4:
solution #1
dice:
1 1 1 2 4 4
1 1 1 3 3 3 
1 2 2 2 2 2 
comp:
1 vs 2: 15 (p:0.416667) 12 (p:0.333333) 
2 vs 3: 18 (p:0.500000) 15 (p:0.416667) 
3 vs 1: 15 (p:0.416667) 13 (p:0.361111) 
gap[3,3,2]
max_val: 4
max_win: 18
gap_sum: 8
Somewhat related is another dice problem: sicherman_dice.mzn.

17x17 challenge (not solved, though)

Last November, William Gasarch published a challenge in The 17x17 challenge. Worth $289.00. This is not a joke.. Also, see bit-player's The 17×17 challenge, and O.R. by the Beach Let’s Join O.R. Forces to Crack the 17×17 Challenge.

The objective is to create a 17x17 matrix colored with 4 colors such that there are no monochromatic sub-rectangles, i.e. no four cells forming the corners of a rectangle may all have the same color (number). Here's a solution of the 5x5x3 problem, i.e. a 5x5 matrix with 3 colors (1..3):
1 3 3 2 1 
3 1 2 2 1 
3 2 1 2 1 
2 2 2 1 1 
1 1 1 1 2 
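Whether a grid is rectangle-free is cheap to check in plain Python: for every pair of rows and pair of columns, the four corner cells must not all share one color. The 5x5x3 solution above passes:

```python
from itertools import combinations

grid = [[1, 3, 3, 2, 1],
        [3, 1, 2, 2, 1],
        [3, 2, 1, 2, 1],
        [2, 2, 2, 1, 1],
        [1, 1, 1, 1, 2]]

def rectangle_free(g):
    rows, cols = len(g), len(g[0])
    for r1, r2 in combinations(range(rows), 2):
        for c1, c2 in combinations(range(cols), 2):
            if g[r1][c1] == g[r1][c2] == g[r2][c1] == g[r2][c2]:
                return False   # found a monochromatic rectangle
    return True

assert rectangle_free(grid)
assert not rectangle_free([[1, 1], [1, 1]])
```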
When I first saw this problem in December, I tried some models but didn't pursue it further. However, when Karsten Konrad published some OPL models, I picked it up again. We also discussed the problem to see what joining forces could do.

Here is my take in MiniZinc: 17_b.mzn. It is inspired by (or rather a translation of) Karsten Konrad's OPL model from his blog post Let’s do real CP: forbiddenAssignment. Karsten earlier posted two OPL models. However, 17x17x4 is very hard, and - as I understand it - no one has yet cracked it.

Here are three MiniZinc models with some different approaches. A Comet version is mentioned below.

17_b.mzn
This is a translation of Karsten Konrad's OPL model in Let’s do real CP: forbiddenAssignment, which uses a forbiddenAssignments constraint. However, MiniZinc doesn't have this constraint as a built-in, so I had to roll my own, here called extensional_conflict. The forbidden table contains the forbidden combinations for the matrix. For 4 colors it looks like this:
1 1 1 1
2 2 2 2
3 3 3 3
4 4 4 4
i.e. there can be no (sub)rectangle whose four corners all have the same value. (If you now start to think about the global cardinality constraint, please read on.) Here is the complete model:
int: NbColumns = 10;
int: NbRows = 10;
int: NbColors = 4;
set of int: Columns = 1..NbColumns;
set of int: Rows = 1..NbRows;
set of int: Colors = 1..NbColors;

array[Rows,Columns] of var Colors: space :: is_output;
array[Colors, 1..4] of int: forbidden = 
   array2d(Colors, 1..4,[ i | i in Colors, j in 1..4]);

% The pattern x must not be in table table.
predicate extensional_conflict(array[int, int] of var int: table, 
                               array[int] of var int: x) =
   not exists(pattern in index_set_1of2(table)) (
      forall(j in index_set_2of2(table)) (
           x[j] = table[pattern, j]
      )
   )
;

solve :: int_search(
   [space[i,j] | i in Rows, j in Columns], 
   first_fail, 
   indomain_min, 
  complete) 
satisfy;

constraint
  space[1,1] = 1 /\
  space[NbRows,NbColumns] = 2 /\
   
  forall(r in Rows, r2 in 1..r-1, c in Columns, c2 in 1..c-1) (
    extensional_conflict(forbidden, 
       [space[r,c], space[r2,c], space[r,c2], space[r2,c2]])
  )
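The logic of extensional_conflict is straightforward; here is a plain-Python rendering of the same predicate (a sketch, not part of the MiniZinc model):

```python
def extensional_conflict(table, x):
    """True iff x matches no row (forbidden pattern) of table."""
    return not any(all(xi == ti for xi, ti in zip(x, row)) for row in table)

# For 4 colors the forbidden table consists of the four constant rows.
forbidden = [[c] * 4 for c in range(1, 5)]

assert not extensional_conflict(forbidden, [1, 1, 1, 1])  # monochromatic corners
assert extensional_conflict(forbidden, [1, 2, 1, 1])      # mixed colors: allowed
```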
17_b2.mzn
This is almost exactly the same as the one above, except that it has another implementation of extensional_conflict, a kind of MIP version:
predicate extensional_conflict(array[int, int] of var int: table, 
                               array[int] of var int: x) =
 forall(pattern in index_set_1of2(table)) (
     sum(j in index_set_2of2(table)) ( 
           bool2int(x[j] = table[pattern, j])
     ) < 4
  )
;
17_b3.mzn
This third version uses the global cardinality constraint instead of forbidden assignments/extensional_conflict. We create a temporary array (gcc) and then check that no value occurs 4 times (i.e. the number of occurrences of each number must be less than 4). Also, I skipped the symmetry breaking of assigning the corner values in the matrix space (commented below).
constraint
  forall(r in Rows, r2 in 1..r-1, c in Columns, c2 in 1..c-1) (
    let {
       array[1..4] of var 0..4: gcc
    } in
    global_cardinality(
        [space[r,c], space[r2,c], space[r,c2], space[r2,c2]], 
        gcc) 

    /\ 
    forall(i in 1..4) (  gcc[i] < 4  )
  )
;
As you can see, these are rather simple (a.k.a. naive) models. I suspect that there are better search strategies than the ones I tested; for example, Gecode's size_afc_max with indomain_random is one of the better:
solve :: int_search(
         [space[i,j] | i in Rows, j in Columns], 
         size_afc_max,
         indomain_random,
         fail) 
     satisfy;
However, none of these models could find any solution of the original 17x17x4 problem. The longest run was 36 hours on my new 8-core Linux (Ubuntu) machine with 12 GB RAM, but no luck.

Here is a solution of the 17x17x5 problem (i.e. 5 colors), using the LazyFD solver (it took 1:19 minutes); cf. Karsten's Improved Model for Coloring Problem.
1 5 1 5 1 5 3 2 5 5 4 4 2 3 1 3 2 
5 1 4 3 5 5 2 3 2 2 3 2 4 1 4 4 4 
3 2 2 2 1 3 2 4 1 3 5 3 4 4 2 5 5 
2 2 3 3 5 4 4 2 5 4 4 1 3 1 1 1 5 
4 4 5 3 1 4 1 1 3 5 2 2 5 3 2 5 3 
5 1 3 1 1 3 3 2 2 1 4 1 5 4 2 4 5 
4 2 4 3 5 3 5 1 2 4 2 5 2 4 1 2 1 
2 4 2 4 1 5 4 3 4 2 5 3 4 1 5 2 3 
4 3 5 2 1 3 4 3 5 2 5 5 3 3 4 1 2 
1 5 3 1 2 1 4 1 3 2 3 5 2 4 2 5 4 
2 1 3 5 2 1 5 3 2 3 5 4 5 2 4 1 1 
5 3 5 4 3 4 3 2 1 3 3 2 1 2 1 2 4 
4 3 3 4 4 1 5 5 1 5 4 1 2 2 5 3 3 
5 4 4 1 3 2 1 4 2 5 4 5 3 5 3 1 3 
2 4 1 3 4 2 1 5 1 3 1 4 5 5 2 4 1 
3 3 5 2 4 2 5 4 3 1 1 2 1 5 3 1 4 
1 5 2 3 4 2 1 5 4 4 5 1 1 2 3 3 2 
I also wrote a Comet model, 17_b.co, and experimented with different cardinality constraints (cardinality, atmost, a decomposition of forbiddenAssignments) and different labelings.

Ormat game

The Ormat game is yet another grid problem. From bit-player: The Ormat Game (where the nice pictures have been replaced with 0/1 matrices below):
Here’s the deal. I’m going to give you a square grid, with some of the cells colored and others possibly left blank. We’ll call this a template. Perhaps the grid will be one of these 3×3 templates:
 
1 0 0    1 1 1    1 1 1
0 1 1    1 1 1    1 1 1
0 1 1    1 1 1    1 1 0
 (1)      (2)      (3)
You have a supply of transparent plastic overlays that match the grid in size and shape and that also bear patterns of black dots:
1 0 0    1 0 0    0 1 0
0 1 0    0 0 1    1 0 0
0 0 1    0 1 0    0 0 1
 (a)      (b)      (c)

0 0 1    0 1 0    0 0 1
1 0 0    0 0 1    0 1 0
0 1 0    1 0 0    1 0 0
 (d)      (e)      (f)
Note that each of these patterns has exactly three dots, with one dot in each row and each column. The six overlays shown are the only 3×3 grids that have this property.

Your task is to assemble a subset of the overlays and lay them on the template in such a way that dots cover all the colored squares but none of the blank squares. You are welcome to superimpose multiple dots on any colored square, but overall you want to use as few overlays as possible. To make things interesting, I’ll suggest a wager. I’ll pay you $3 for a correct covering of a 3×3 template, but you have to pay me $1 for each overlay you use. Is this a good bet?
This is the first version in MiniZinc: ormat_game.mzn where the problem instances are included in the model file. As with the 17x17 problem above, this is - in concept, at least - quite simple. The model, sans the overlays, is shown below:
include "globals.mzn"; 
% the number of overlays, n!
int: f :: is_output = product([i | i in 1..n]);

% Which overlay to use.
array[1..f] of var 0..1: x :: is_output;

% number of overlays used (to minimize)
var 1..f: num_overlays :: is_output = sum(x);

% There are n! possible overlays
% They are in a separate .dzn file
%    ormat_game_n3.dzn etc
array[1..f, 1..n,1..n] of 0..1: overlays;

solve :: int_search(
   x, 
   smallest, 
   indomain_min, 
   complete) 
  minimize num_overlays;

constraint 
    % if problem has a black cell (=1) then there 
    % must be a selected overlay that has a black cell
    forall(i,j in 1..n) (
        (
        problem[i,j] = 1 -> 
            exists(o in 1..f) (
                               x[o]*overlays[o,i,j] = 1 
                               )
        )                                            
    )
    /\ % the inverse: wherever a selected overlay 
       %  has a black cell, problem must have 
       %  a black cell
    forall(o in 1..f) (
        x[o] = 1 -> 
           forall(i,j in 1..n) (
              overlays[o,i,j] = 1 -> problem[i,j] = 1
           ) 
    ) 
;

% Problem grid 3
include "ormat_game_n3.dzn";
array[1..n, 1..n] of 0..1: problem = array2d(1..n, 1..n,
[
   1,1,1,
   1,1,1,
   1,1,0
]);
The only constraints are:
  • if the problem matrix has a black cell (1), then at least one of the selected overlay matrices must have a 1 in that cell
  • the converse: wherever a selected overlay has a black cell, the problem must have a black cell.
The output and solution:
Problem:
111
111
110

Solution:
x: [0, 1, 0, 1, 1, 1]
num_overlays: 4

Overlay #2
100
001
010

Overlay #4
001
100
010

Overlay #5
010
001
100

Overlay #6
001
010
100
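That the four selected overlays really solve the template can be verified in plain Python: every black template cell must be covered by at least one dot, and no dot may fall on a blank cell:

```python
template = [[1, 1, 1],
            [1, 1, 1],
            [1, 1, 0]]

# Overlays #2, #4, #5 and #6 from the solution above.
chosen = [[[1, 0, 0], [0, 0, 1], [0, 1, 0]],   # 2
          [[0, 0, 1], [1, 0, 0], [0, 1, 0]],   # 4
          [[0, 1, 0], [0, 0, 1], [1, 0, 0]],   # 5
          [[0, 0, 1], [0, 1, 0], [1, 0, 0]]]   # 6

def solves(template, overlays):
    n = len(template)
    # Every black cell covered by some dot; no dot on a blank cell.
    return all(any(o[i][j] for o in overlays) == bool(template[i][j])
               for i in range(n) for j in range(n))

assert solves(template, chosen)
assert not solves(template, chosen[:1])  # one overlay leaves black cells uncovered
```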
Ideally, the overlays should be calculated in real-time, but it was much easier to generate overlay files for each size (using ormat_game_generate.mzn and some post-processing in Perl). These must be included in the model as well. Note: there are n! overlay matrices for problems of size n x n. However, changing the model file each time a new problem is made or tested is not very convenient. It's better to have the problem instances in separate files which include the model and the overlay file of the appropriate size. The following problem instances include the general model ormat_game_nodel.mzn and an overlay file. Note: As of writing, there are problems running instances of size 7 on my 64-bit Ubuntu; it works on the 32-bit machine (also Linux), though. This has been reported to the G12 team.

Other new models

Here are some other new models (or not mentioned earlier):

Contiguity using regular

contiguity_regular.mzn: decomposition of the global constraint contiguity using regular constraint.

Combinatorial auction

These two MiniZinc models implement a simple combinatorial auction problem (Wikipedia): combinatorial_auction.mzn, and a "talkative" MIP version combinatorial_auction_lp.mzn (which is translated from Numberjack's Tutorial).

Compare with the Gecode version combinatorial_auction.cpp.

March 12, 2010

Pi Day Sudoku 2009 - the models (MiniZinc and Comet)

March 14 is Pi Day (π Day), a celebration of Pi (3.14 in the mm.dd date notation). There are a lot of activities for this day. One of these activities is the Pi Day Sudoku.

Almost exactly one year ago, I blogged about the Pi Day Sudoku 2009 problem in two posts. The problem is an extended version of a Sudoku problem:
Rules: Fill in the grid so that each row, column, and jigsaw region contains 1-9 exactly once and π [Pi] three times.

(Click to enlarge the picture)

Since it was a competition (closed June 1, 2009), I didn't publish any models of this problem when blogging about it. Now, a year later, it seems to be a good time to publish them. I implemented two versions, one in MiniZinc and one in Comet. Both models use the same approach (the details differ, however). It is the same as for plain Sudoku, with two exceptions:
  • The common approach in plain Sudoku is to use the global constraint alldifferent for stating that the values in the rows, columns, and regions should be different. Since there should be 3 occurrences of π in each row, column and region, this approach doesn't work. As mentioned in Solving Pi Day Sudoku 2009 with the global cardinality constraint, my first approach was a home made version of the constraint alldifferent except 0 and Pi, but it was too slow. Instead (via a suggestion from Mikael Zayenz Lagerkvist) I changed to the global constraint global_cardinality, which was much faster.
  • The other difference from the standard approach is that the regions are not 3x3 (or MxM) but "jigsaw regions". It was not especially hard to create this representation (though boring to type in). The constraint for checking the regions is (in MiniZinc):
      /\ % the regions
      forall(i in 1..num_regions) (
        check([x[regions[j,1],regions[j,2]] | j in 1+(i-1)*n..1+((i-1)*n)+11 ])
      )
    
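The cardinality condition itself (1-9 exactly once, π exactly three times per 12-cell group) is easy to express in plain Python with a Counter, here with π encoded as -1 as in the models:

```python
from collections import Counter

PI = -1  # π is encoded as -1, as in the MiniZinc and Comet models

def check(cells):
    """Each of 1..9 exactly once and π exactly three times (12 cells)."""
    want = Counter({PI: 3, **{d: 1 for d in range(1, 10)}})
    return Counter(cells) == want

assert check([PI, 1, 2, 3, PI, 4, 5, 6, 7, PI, 8, 9])
assert not check(list(range(1, 13)))   # 10, 11, 12 are not valid values
```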
Here I will not publish the solution to the puzzle, since I gather that there are a lot of people out there who want to solve it on their own. And there is at least one valid solution out there.

Summary

This was a fun problem, especially since I learned some new things by implementing the models. As a constraint programming challenge it was quite a bit harder than this year's puzzle, Pi Day 2010:
Rules: Fill in the grid so that each row, column, and block contains 1-9 exactly once. This puzzle only has 18 clues! That is conjectured to be the least number of clues that a unique-solution rotationally symmetric puzzle can have. To celebrate Pi Day, the given clues are the first 18 digits of π = 3.14159265358979323...
[Yes, I have implemented a MiniZinc model for this problem as well; it is a standard Sudoku problem. No, I will not publish the model or a solution until the deadline, June 1, 2010.]

For more about Pi Day Sudoku 2010, see the blog 360: Pi Day Sudoku is back.

Also, see the following models, which implement Killer Sudoku using the same approach as Pi Day Sudoku 2009:

The MiniZinc code

Here is the MiniZinc model (sudoku_pi.mzn), slightly edited.
include "globals.mzn";
int: n = 12;
int: X = 0;  % the unknown
int: P = -1; % π (Pi)

predicate check(array[int] of var int: x) =
   global_cardinality(x, occ) % :: domain
;

array[1..n, 1..n] of var -1..9: x :: is_output;
array[-1..9] of 0..3: occ = array1d(-1..9, [3, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]);
array[1..11] of 0..3: occ2 = [3, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1];

% solve satisfy;
solve :: int_search([x[i,j] | i,j in 1..n], first_fail, indomain_min, complete) satisfy;

constraint

  % copy the hints
  forall(i in 1..n, j in 1..n) (
      x[i,j] != 0
      /\
      if dat[i,j] != X then
        x[i,j] = dat[i,j]
      else 
        true
      endif
  )

  /\ % rows
  forall(i in 1..n) (
    check([x[i,j] | j in 1..n]) 
  )

  /\ % columns
  forall(j in 1..n) (
    check([x[i,j] | i in 1..n])
  )

  /\ % the regions
  forall(i in 1..num_regions) (
    check([x[regions[j,1],regions[j,2]] | j in 1+(i-1)*n..1+((i-1)*n)+11 ])
  )
;

output 
[ show(occ) ++ "\n"] ++
[
  if j = 1 then "\n" else " " endif ++
   show(x[i,j])
  | i in 1..n, j in 1..n

] ++ ["\n"];


% data
array[1..n,1..n] of int: dat = array2d(1..n, 1..n,
[
 4,9,7, P,5,X,X,X,X, X,X,X,
 X,P,X, 8,X,X,9,6,1, 5,2,X,
 X,8,X, 1,X,X,X,P,X, 7,X,X,
 X,X,X, X,X,X,X,P,X, 4,X,X,
 5,3,9, 6,X,X,X,X,X, X,X,X,

 9,4,X, P,P,P,7,X,X, X,X,X,
 X,X,X, X,X,6,2,5,P, X,7,4,
 X,X,X, X,X,X,X,X,P, P,3,8,
 X,7,8, 4,6,9,X,X,X, X,X,X,

 X,X,3, X,P,X,X,4,7, 1,6,9,
 X,X,4, X,1,X,X,X,6, X,P,X,
 X,X,X, X,X,X,X,X,4, X,5,X
 ]);


% The regions
int: num_regions = 12;
array[1..num_regions*12, 1..2] of int: regions  = array2d(1..num_regions*12, 1..2,
[
  % Upper left dark green
  1,1  , 1,2  , 1,3  , 
  2,1  , 2,2  , 2,3  , 
  3,1  , 3,2  , 
  4,1  , 4,2  ,  
  5,1  , 5,2  , 
 
  % Upper mid light dark green
  1,4  ,  1,5  ,  1,6  ,  1,7  ,  1,8  ,  1,9  , 
  2,4  ,  2,5  ,  2,6  ,  2,7  ,  2,8  ,  2,9  , 

  % Upper right green
  1,10  ,  1,11  ,  1,12  , 
  2,10  ,  2,11  ,  2,12  , 
  3,11  ,  3,12  , 
  4,11  ,  4,12  , 
  5,11  ,  5,12   , 

  % Mid upper left "blue"
  3,3  ,  3,4  , 3,5  ,  3,6  , 
  4,3  ,  4,4  , 4,5  ,  4,6  , 
  5,3  ,  5,4  , 5,5  ,  5,6  , 

  % Mid Upper right blue
  3,7  ,  3,8  ,  3,9  ,  3,10  , 
  4,7  ,  4,8  ,  4,9  ,  4,10  , 
  5,7  ,  5,8  ,  5,9  ,  5,10  , 

  % Mid left green
  6,1  ,  6,2  , 6,3  , 
  7,1  ,  7,2  , 7,3  , 
  8,1  ,  8,2  , 8,3  , 
  9,1  ,  9,2  , 9,3  , 

  % Mid left blue
  6,4  , 6,5  , 
  7,4  , 7,5  , 
  8,4  , 8,5  , 
  9,4  , 9,5  , 
  10,4 , 10,5  , 
  11,4 , 11,5  , 

  % Mid mid green
  6,6  , 6,7  , 
  7,6  , 7,7  , 
  8,6  , 8,7  , 
  9,6  , 9,7  , 
  10,6 , 10,7  , 
  11,6 , 11,7  , 

  % Mid right blue
  6,8  ,  6,9  , 
  7,8  ,  7,9  , 
  8,8  ,  8,9  , 
  9,8  ,  9,9  , 
  10,8 ,  10,9  , 
  11,8 ,  11,9  , 

  % Mid right green
  6,10  ,  6,11  ,  6,12  , 
  7,10  ,  7,11  ,  7,12  , 
  8,10  ,  8,11  ,  8,12  , 
  9,10  ,  9,11  ,  9,12  , 

  % Lower left dark green
  10,1  , 10,2  ,  10,3  , 
  11,1  , 11,2  ,  11,3  , 
  12,1  , 12,2  ,  12,3  , 12,4  , 12,5  ,  12,6  , 

  % Lower right dark green
  10,10  ,  10,11  , 10,12  , 
  11,10  ,  11,11  , 11,12  , 
  12,7   ,  12,8   ,  12,9  , 12,10  , 12,11  ,  12,12  
]);

The Comet code

The Comet model (sudoku_pi.co) uses the same principle as the MiniZinc model. However, the representation of the regions is different: instead of a matrix I use a more object oriented approach with two tuples for the structures. For some reason that I have now forgotten, I didn't create a check function in this Comet model, but instead stated the cardinality constraints directly.
import cotfd;
int t0 = System.getCPUTime();

int n = 12;
int P = -1; // Pi
int X = 0; // unknown 
range R = -1..9; 

set{int} V = {-1,1,2,3,4,5,6,7,8,9};

// regions where 1..9 is alldiff + 3 Pi
tuple Pos {
  int row;
  int col;
}

tuple Region {
  set{Pos} p;
}

int num_regions = 12;
Region regions[1..num_regions] = 
[
 // Upper left dark green
 Region({Pos(1,1),Pos(1,2),Pos(1,3),
         Pos(2,1),Pos(2,2),Pos(2,3),
         Pos(3,1),Pos(3,2),
         Pos(4,1),Pos(4,2), 
         Pos(5,1), Pos(5,2)}),
 
 // Upper mid light dark green
 Region({Pos(1,4), Pos(1,5), Pos(1,6), Pos(1,7), Pos(1,8), Pos(1,9),
         Pos(2,4), Pos(2,5), Pos(2,6), Pos(2,7), Pos(2,8), Pos(2,9)}),

 // Upper right green
 Region({Pos(1,10), Pos(1,11), Pos(1,12),
         Pos(2,10), Pos(2,11), Pos(2,12),
         Pos(3,11), Pos(3,12),
         Pos(4,11), Pos(4,12),
         Pos(5,11), Pos(5,12) }),

 // Mid upper left "blue"
 Region({Pos(3,3), Pos(3,4),Pos(3,5), Pos(3,6),
         Pos(4,3), Pos(4,4),Pos(4,5), Pos(4,6),
         Pos(5,3), Pos(5,4),Pos(5,5), Pos(5,6)}),

 // Mid Upper right blue
 Region({Pos(3,7), Pos(3,8), Pos(3,9), Pos(3,10),
        Pos(4,7), Pos(4,8), Pos(4,9), Pos(4,10),
        Pos(5,7), Pos(5,8), Pos(5,9), Pos(5,10)}),

 // Mid left green
 Region({Pos(6,1), Pos(6,2),Pos(6,3),
         Pos(7,1), Pos(7,2),Pos(7,3),
         Pos(8,1), Pos(8,2),Pos(8,3),
         Pos(9,1), Pos(9,2),Pos(9,3)}),

 // Mid left blue
 Region({Pos(6,4),Pos(6,5),
         Pos(7,4),Pos(7,5),
         Pos(8,4),Pos(8,5),
         Pos(9,4),Pos(9,5),
         Pos(10,4),Pos(10,5),
         Pos(11,4),Pos(11,5)}),

 // Mid mid green
 Region({Pos(6,6),Pos(6,7),
         Pos(7,6),Pos(7,7),
         Pos(8,6),Pos(8,7),
         Pos(9,6),Pos(9,7),
         Pos(10,6),Pos(10,7),
         Pos(11,6),Pos(11,7)}),

 // Mid right blue
 Region({Pos(6,8), Pos(6,9),
         Pos(7,8), Pos(7,9),
         Pos(8,8), Pos(8,9),
         Pos(9,8), Pos(9,9),
         Pos(10,8), Pos(10,9),
         Pos(11,8), Pos(11,9)}),

 // Mid right green
 Region({Pos(6,10), Pos(6,11), Pos(6,12),
         Pos(7,10), Pos(7,11), Pos(7,12),
         Pos(8,10), Pos(8,11), Pos(8,12),
         Pos(9,10), Pos(9,11), Pos(9,12)}),

 // Lower left dark green
 Region({Pos(10,1),Pos(10,2), Pos(10,3),
         Pos(11,1),Pos(11,2), Pos(11,3),
         Pos(12,1),Pos(12,2), Pos(12,3),Pos(12,4),Pos(12,5), Pos(12,6)}),

 // Lower right dark green
 Region({Pos(10,10), Pos(10,11),Pos(10,12),
         Pos(11,10), Pos(11,11),Pos(11,12),
         Pos(12,7),Pos(12,8), Pos(12,9),Pos(12,10),Pos(12,11), Pos(12,12)})

 ];

// the hints
int data[1..n,1..n] = 
[
 [4,9,7, P,5,X,X,X,X, X,X,X],
 [X,P,X, 8,X,X,9,6,1, 5,2,X],
 [X,8,X, 1,X,X,X,P,X, 7,X,X],
 [X,X,X, X,X,X,X,P,X, 4,X,X],
 [5,3,9, 6,X,X,X,X,X, X,X,X],

 [9,4,X, P,P,P,7,X,X, X,X,X],
 [X,X,X, X,X,6,2,5,P, X,7,4],
 [X,X,X, X,X,X,X,X,P, P,3,8],
 [X,7,8, 4,6,9,X,X,X, X,X,X],

 [X,X,3, X,P,X,X,4,7, 1,6,9],
 [X,X,4, X,1,X,X,X,6, X,P,X],
 [X,X,X, X,X,X,X,X,4, X,5,X]
 ];

Integer num_solutions(0);

Solver m();
var{int} x[1..n,1..n](m, R);

// int occ[-1..9] = [3,0,1,1,1,1,1,1,1,1,1];
var{int} occ[-1..9](m,0..3);
int occ_count[-1..9] = [3,0,1,1,1,1,1,1,1,1,1];

exploreall {

  // get the hints
  forall(i in 1..n) {
    forall(j in 1..n) {
      int c = data[i,j];
      if (c == P) {
        cout << "P";
      } else {
        cout << data[i,j];
      }
      if (c != 0) {
        m.post(x[i,j] == data[i,j]);
      }
      cout << " ";
    }
    cout << endl;
   }

  forall(i in 1..n, j in 1..n) {
    m.post(x[i,j] != 0);
  }

  forall(i in -1..9) {
    m.post(occ[i] == occ_count[i]);
  }  
  
  // rows
  forall(i in 1..n) {
    m.post(cardinality(occ, all(j in 1..n) x[i,j])); 
  }
    
  // columns
  forall(j in 1..n) {
    m.post(cardinality(occ, all(i in 1..n) x[i,j]));
  }

  // regions
  forall(r in 1..num_regions) {
    m.post(cardinality(occ, all(i in regions[r].p) x[i.row,i.col])); 

  }

} using {

  // reversing i and j gives faster solution
  forall(i in 1..n, j in 1..n: !x[i,j].bound()) {
    tryall(v in V : x[i,j].memberOf(v)) by(v) 
      m.label(x[i,j], v);
    onFailure
      m.diff(x[i,j], v);
  }

  int t1 = System.getCPUTime();
  cout << "time:      " << (t1-t0) << endl;
  cout << "#choices = " << m.getNChoice() << endl;
  cout << "#fail    = " << m.getNFail() << endl;
  cout << "#propag  = " << m.getNPropag() << endl;
  

  num_solutions++;

   forall(i in 1..n) {
     forall(j in 1..n) {
       int c = x[i,j];
       if (c == P) {
         cout << "P";
       } else {
         cout << x[i,j];
       }
       cout << " ";
     }
     cout << endl;
   }
}

cout << "\nnum_solutions: " << num_solutions << endl;
cout << endl << endl;

int t1 = System.getCPUTime();
cout << "time:      " << (t1-t0) << endl;
cout << "#choices = " << m.getNChoice() << endl;
cout << "#fail    = " << m.getNFail() << endl;
cout << "#propag  = " << m.getNPropag() << endl;

March 03, 2010

Survey of Nonogram Solvers updated: one addition is a JaCoP solver

Survey of Paint-by-Number Puzzle Solvers by Jan Wolter has been updated. One interesting addition to the survey is a JaCoP-based solver, Ben Raziel's JaCoP Solver:


[from Ben Raziel:]
"""
We use JaCoP (a free java CSP solving library) to do the line solving for us. For searching, we use a modified version of JaCoP's DFS search. I employed your probing approach and tweaked it a little in order to achieve better performance.

The basic idea is to probe more cells, in order to reach a contradiction - which means we don't have to guess our next move (since a contradiction tells us what color the current cell is). The probes are ordered into "levels": each level contains cells where a contradiction is more likely to be found - which minimizes the number of cells that we have to probe until a contradiction is reached.
"""
The solver is not currently publicly available, but the author plans to make it available once this is cleared with the University.

From the summary of this solver:

Assessment: Very good at hard puzzles, a bit slow at easy ones.

...

The run times on simple problems are not spectacular. Java is an interpreted language, and thus, naturally, rather slow. It is furthermore also built on top of a general solving library, JaCoP, and those typically add some overhead. But my suspicion is that part of this is also a basic design trade off - the authors were focusing more on solving hard puzzles fast than on solving easy puzzles fast.

But this really works very well on the harder puzzles. It solves every puzzle in the full 2,491 webpbn puzzle data set in under 15 minutes and only the "Lion" puzzle, which most solvers can't solve at all, takes more than 5 minutes. No other solver tested has accomplished this.

Of course, this solver, like pbnsolve, was trained on the webpbn data set, which certainly gives it an advantage when being tested against that data set. In fact the authors had the full dataset available from early in their development cycle, which is more than can be said for webpbn.

There are also other updates, such as general reflections about the state of Nonogram solvers.

January 14, 2010

Some new MiniZinc models, mostly Enigma problems

Over the last week I have solved some (more) of New Scientist's Enigma problems. Note: You may have to be a subscriber to read all the articles, though there is a grace period with some free peeks (7, it seems).

Enigma problems

Here are the new models of Enigma problems. All are written in MiniZinc.
  • enigma_counting_pennies.mzn: Enigma Counting pennies (from 2005)
    Here all 24 ways of permuting a list of 4 items are needed. Instead of calculating them, I simply listed all the variants and used them later as indices.
  • enigma_843.mzn: Enigma How many are whole? (#843)
    In this model there was a problem using the following type of reifications:
      square(ONE) /\ ( square(TWO) \/ square(THREE) \/ square(FOUR)) % /\ ..
      % or
      (square(ONE) /\ ( bool2int(square(TWO)) + bool2int(square(THREE)) + bool2int(square(FOUR)) = 1) % /\ ...
    
    where square is defined as:
    predicate square(var int: y) =
      let {
         var 1..ceil(sqrt(int2float(ub(y)))): z
      } in
       z*z = y;
    
    In some of the later versions of MiniZinc (probably 1.0.2), the handling of reification was changed. My fix was to add a boolean matrix which handles the boolean constraints. It is not beautiful:
       % In the list ONE TWO THREE FOUR just the first and one other are perfect squares.
       square(ONE) /\ 
       [bools[1,j] | j in 1..4] = [square(ONE),square(TWO),square(THREE),square(FOUR)] /\
       sum([bool2int(bools[1,j]) | j in 1..4]) = 2 /\
       % ...
    
  • enigma_1530.mzn: Enigma Tom Daley (#1530)
    A simple alphametic puzzle.
  • enigma_1535.mzn: Enigma Back to front (#1535)
    Magic square with some more constraints.
  • enigma_1555.mzn: Enigma Not a square (#1555)
    Another alphametic puzzle on a grid.
  • enigma_1557.mzn: Enigma Reverse Division (#1557)
    Division and reversing some numbers.
  • enigma_1568.mzn: Enigma Odd puzzle (#1568)
    Long multiplication alphametic puzzle.
  • enigma_1570.mzn: Enigma Set of cubes (#1570)
    Pandigital problem over a set of cubes. Here I simply listed the cubes between 0^3 and 1000^3 instead of calculating them. This may be considered cheating...
  • enigma_1575.mzn: Enigma All our days (#1575)
    Another alphametic puzzle with some more constraints.
Some of these problems may have been solved better using other means, e.g. pure (or not so pure) mathematics, but this is after all a blog about constraint programming, and modelling them is good exercise. And fun.
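As a sanity check of the square predicate shown for enigma_843.mzn above, here is a minimal Python counterpart of the same idea: a plain existence check over z in 1..ceil(sqrt(y)), rather than a reified CP constraint:

```python
import math

# y is a perfect square iff some z with 1 <= z <= floor(sqrt(y))
# satisfies z*z = y (mirroring the MiniZinc predicate's local var z).
def square(y):
    return any(z * z == y for z in range(1, math.isqrt(y) + 1))

print([n for n in range(1, 30) if square(n)])  # [1, 4, 9, 16, 25]
```

In the MiniZinc version the local variable z makes the predicate reifiable, which is exactly what caused the trouble described above.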

I have also solved - or at least modeled - some of the open problems: 1573, 1574, 1576, and 1577, but will not publish the models until they are closed. I "accidentally" published problem #1574 the other day before I realized it was still open, but so be it.

Some other models/data files

Here are two simple new problems taken from Choco's (forthcoming site) example distribution. Also, a new data file for the Bridge and Torch model: bridge_and_torch_problem8.dzn, which is, as I understand it, the same as Choco's U2planning problem.

Birthdays 2010 puzzle

Last, here is a very simple puzzle of my own based on a coincidence of birth years of me and my brother and the current year:
This year (2010) my older brother Anders will be the same age as the year I was born in the last century, and I will be the same age as the year he was born in the last century. My brother is four years older than me. How old are we this year?
An (unnecessarily complex) MiniZinc model of this problem is birthdays_2010.mzn. Of course, it can be solved much more easily with the simple equation system: {H+A=110, A-H=4} (link to Wolfram Alpha), or in MiniZinc:
constraint H+A = 110 /\ A-H = 4;
But this latter version is not very declarative, and I - in general - prefer the declarative way.

Update: Added link to Wolfram Alpha for the simple equation above.
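For the curious, the equation system above can also be checked by brute force; a small Python sketch using the same variable names H and A as in the MiniZinc snippet:

```python
# Brute-force check of {H+A=110, A-H=4} over plausible ages.
solutions = [(h, a) for h in range(1, 100) for a in range(1, 100)
             if h + a == 110 and a - h == 4]
print(solutions)  # [(53, 57)]
```

So I am 53 and Anders is 57 this year, consistent with birth years 1957 and 1953.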

December 29, 2009

1 year anniversary and Secret Santa problem II

Exactly one year ago, I started this blog. Little did I know what to expect. Through this blog and its accompanying pages, I have met many very interesting people who have taught me much in my pursuit of learning constraint programming. I am very grateful for your help and inspiration! And thanks to all my (other) readers.

I hope the following year will be as rewarding as this last.

Secret Santa problem II

As an anniversary gift, here is another Secret Santa problem (compare with Merry Christmas: Secret Santas Problem) with a slightly different touch.

The problem formulation is from Maple Primes forum Secret Santa Graph Theory:
Every year my extended family does a "secret santa" gift exchange. Each person draws another person at random and then gets a gift for them. At first, none of my siblings were married, and so the draw was completely random. Then, as people got married, we added the restriction that spouses should not draw each others names. This restriction meant that we moved from using slips of paper on a hat to using a simple computer program to choose names. Then people began to complain when they would get the same person two years in a row, so the program was modified to keep some history and avoid giving anyone a name in their recent history. This year, not everyone was participating, and so after removing names, and limiting the number of exclusions to four per person, I had data something like this:

Name: Spouse, Recent Picks

Noah: Ava, Ella, Evan, Ryan, John
Ava: Noah, Evan, Mia, John, Ryan
Ryan: Mia, Ella, Ava, Lily, Evan
Mia: Ryan, Ava, Ella, Lily, Evan
Ella: John, Lily, Evan, Mia, Ava
John: Ella, Noah, Lily, Ryan, Ava
Lily: Evan, John, Mia, Ava, Ella
Evan: Lily, Mia, John, Ryan, Noah
I have interpreted the problem as follows:
  • one cannot be a Secret Santa of one's spouse nor of oneself
  • one cannot be a Secret Santa for somebody two years in a row
  • objective: maximize the "Secret Santa distance", i.e. the number of years since the last assignment of the same person
My MiniZinc model for this problem is secret_santa2.mzn.

This is a more traditional linear programming problem compared to Secret Santa I, using a distance matrix for maximizing the "Secret Santa distance". M is a "large" number (number of persons + 1) used to encode that there has been no previous Secret Santa assignment.
rounds = array2d(1..n, 1..n, [
%N  A  R  M  El J  L  Ev 
 0, M, 3, M, 1, 4, M, 2, % Noah
 M, 0, 4, 2, M, 3, M, 1, % Ava 
 M, 2, 0, M, 1, M, 3, 4, % Ryan
 M, 1, M, 0, 2, M, 3, 4, % Mia 
 M, 4, M, 3, 0, M, 1, 2, % Ella
 1, 4, 3, M, M, 0, 2, M, % John
 M, 3, M, 2, 4, 1, 0, M, % Lily
 4, M, 3, 1, M, 2, M, 0  % Evan
]);

The original problem doesn't say anything about single persons, i.e. those without spouses. In this model, singleness (no-spouseness) is coded as spouse = 0, and the no-spouse-Santa constraint has been adjusted to take care of this.

The constraint part is the following, where n is the number of persons:
constraint
   all_different(santas)

   /\ % no Santa for one self or the spouse
   forall(i in 1..n) (
      santas[i] != i /\
      if spouses[i] > 0 then santas[i] != spouses[i] else true endif
   )

   /\ % the "santa distance" (
   forall(i in 1..n) ( santa_distance[i] = rounds[i,santas[i]] )

   /\ % cannot be a Secret Santa for the same person two years in a row.
   forall(i in 1..n) (
      let { var 1..n: j } in
       rounds[i,j] = 1 /\ santas[i] != j
   )

   /\
   z = sum(santa_distance)

Solution

This model gives - when using solve satisfy and the constraint z >= 67 - the following 8 solutions, all with a total Secret Santa distance of 67 (sum(santa_distance)). If all Secret Santa assignments were new, the total would have been n*(n+1) = 8*9 = 72. As we can see, there is always one Santa with a previously existing assignment. (With one single person and the data I faked, we can get all brand new Secret Santas. See the model for this.)
santa_distance: 9 9 4 9 9 9 9 9
santas        : 7 5 8 6 1 4 3 2

santa_distance: 9 9 4 9 9 9 9 9
santas        : 7 5 8 6 3 4 1 2

santa_distance: 9 9 9 4 9 9 9 9
santas        : 7 5 6 8 1 4 3 2

santa_distance: 9 9 9 4 9 9 9 9
santas        : 7 5 6 8 3 4 1 2

santa_distance: 9 9 9 9 4 9 9 9
santas        : 4 7 1 6 2 8 3 5

santa_distance: 9 9 9 9 4 9 9 9
santas        : 4 7 6 1 2 8 3 5

santa_distance: 9 9 9 9 9 9 4 9
santas        : 4 7 1 6 3 8 5 2

santa_distance: 9 9 9 9 9 9 4 9
santas        : 4 7 6 1 3 8 5 2
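As a check, the first solution above can be verified against the constraints in plain Python. The rounds matrix and spouse pairs are transcribed from the post; this is an illustration, not part of the MiniZinc model:

```python
M = 9  # "no previous assignment"
#          N  A  R  M  El J  L  Ev
rounds = [[0, M, 3, M, 1, 4, M, 2],   # Noah
          [M, 0, 4, 2, M, 3, M, 1],   # Ava
          [M, 2, 0, M, 1, M, 3, 4],   # Ryan
          [M, 1, M, 0, 2, M, 3, 4],   # Mia
          [M, 4, M, 3, 0, M, 1, 2],   # Ella
          [1, 4, 3, M, M, 0, 2, M],   # John
          [M, 3, M, 2, 4, 1, 0, M],   # Lily
          [4, M, 3, 1, M, 2, M, 0]]   # Evan
spouses = [2, 1, 4, 3, 6, 5, 8, 7]    # Noah-Ava, Ryan-Mia, Ella-John, Lily-Evan
santas  = [7, 5, 8, 6, 1, 4, 3, 2]    # first solution above (1-based)

assert len(set(santas)) == 8                      # all_different
for i, s in enumerate(santas):
    assert s != i + 1 and s != spouses[i]         # not self, not spouse
    assert rounds[i][s - 1] != 1                  # not last year's pick
print(sum(rounds[i][s - 1] for i, s in enumerate(santas)))  # -> 67
```

Only Ryan's assignment (Evan, distance 4) reuses an old pairing; the other seven distances are all M = 9, giving 7*9 + 4 = 67.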

November 08, 2009

Update on Nonogram: Jan Wolter's Survey and my own new benchmark

Survey of Paint-by-Number Puzzle Solvers

In Some new models, etc I mentioned the great Survey of Paint-by-Number Puzzle Solvers, created by Jan Wolter (also author of the Nonogram solver pbnsolve).

In this survey he included both Gecode's Nonogram solver written by Mikael Lagerkvist as well as my own Nonogram model (with Gecode/FlatZinc).

Since the last update of that blog post, the following has happened:
  • both our solvers now have the assessment "An amazingly good solver, especially for a simple demo program", and are placed 4, 5, and 6 of the 10 tested systems
  • my Gecode/FlatZinc model has been tested for "Full results"; it came in 4th out of 5
  • my Nonogram model with the Lazy FD solver is now included in the "Sample result", in 6th place
It seems that Wolter has come to appreciate constraint programming as a general tool for solving these kinds of combinatorial problems, much for its ease of experimentation, e.g. with labeling strategies and (for the MiniZinc models) changing solvers:

From the analysis of Lagerkvist's Gecode model:
This is especially impressive because the faster solvers are large, complex programs custom designed to solve paint-by-number puzzles. This one is a general purpose solver with just a couple hundred custom lines of code to generate the constraints, run the solver, and print the result. Considering that this is a simple application of a general purpose solving tool rather than hugely complex and specialized special purpose solving tool, this is an astonishingly good result.

Getting really first class search performance usually requires a lot of experimentation with different search strategies. This is awkward and slow to do if you have to implement each new strategies from scratch. I suspect that a tool like Gecode lets you try out lots of different strategies with relatively little coding to implement each one. That probably contributes a lot to getting to better solvers faster.
From the analysis of my MiniZinc model:
If you tried turning this into a fully useful tool rather than a technology demo, with input file parsers and such, it would get a lot bigger, but clearly the constraint programming approach has big advantages, achieving good search results with low development cost.

...

These two results [Gecode/FlatZinc and LazyFD] highlight the advantage of a solver-independent modeling language like MiniZinc. You can describe your problem once, and then try out a wide variety of different solvers and heuristics without having to code every single one from scratch. You can benefit from the work of the best and the brightest solver designers. It's hard to imagine that this isn't where the future lies for the development of solvers for applications like this.
And later in the Conclusions:
The two constraint-based systems by Lagerkvist and Kjellerstrand come quite close in performance to the dedicated solvers, although both are more in the category of demonstrations of constraint programming than fully developed solving applications. The power of the underlaying search libraries and the ease of experimentation with alternative search heuristics obviously serves them well. I think it very likely that approaches based on these kinds of methods will ultimately prove the most effective.
I think this is an important lesson: Before starting to write very special tools, first try a general tool like a constraint programming system and see how well it perform.

The Lazy FD solver and the Lion problem

Most of the problems in the Sample Results were solved by some solver within the time limit of 30 minutes. However, one problem stands out as extra hard: the Lion problem. When I tested MiniZinc's Lazy FD solver on my machine, I was very excited that it took just about 2 minutes, and mentioned this to Wolter. He also tested this, but on his 64-bit machine it took 80 minutes to solve (and since that is above the time limit it is not in the result table). This is how he describes the Lazy FD solver:
But the remarkable thing is that [the Lazy FD solver] solves almost everything. Actually, it even solved the Lion puzzle that no other solver was able to solve, though, on my computer, it took 80 minutes to do it. Now, I didn't allow a lot of other solvers 80 minutes to run, but that's still pretty impressive. (Note that Kjellerstrand got much faster solving times for the Lion than I did. Lagerkvist also reported that his solver could solve it, but I wasn't able to reproduce that result even after 30 CPU hours. I don't know why.)
After some discussion, we came to the conclusion that the difference was probably due to the fact that I use a 32-bit machine (and the 32-bit version of MiniZinc) with 2 Gb memory, while Wolter uses a 64-bit machine with 1 Gb memory.

One should also note that all the other solvers were compiled without optimization, including Gecode/FlatZinc; however, LazyFD does not come with source, so it runs optimized. This may be an unfair advantage for the LazyFD solver.

My own benchmark of the Sample Results

The times in the Sample Results are, as mentioned above, for solvers compiled without optimization. I have now run the same problems on my machine (Linux Mandriva, Intel Dual 3.40GHz, with 2Gb memory), but with the solvers using standard optimization. All problems were run with a time limit of 10 minutes (compared to Wolter's 30 minutes) and searched for 2 solutions, which checks for unique solutions. The last three problems (Karate, Flag, Lion) have multiple solutions, and it is considered a failure if two were not found within the time limit. I should also note that during the benchmark I used the machine for other things, such as surfing etc.

The problems
I downloaded the problems from Wolter's webpbn: Puzzle Export. For copyright reasons I cannot republish these models, but it is easy to download each problem. Select ".DZN" for the MiniZinc files, and "A compiled in C++ format" for Gecode. There is no support for Comet's format, but it's quite easy to convert a .dzn file to Comet's format.

The solvers + labeling strategies
Here is a description of each solver and its labeling strategy:
  • fz, "normal" (column_row)
    MiniZinc model with Gecode/FlatZinc. The usual labeling in nonogram_create_automaton2.mzn, i.e. where the columns are labeled before rows:
    solve :: int_search(
          [x[i,j] | j in 1..cols, i in 1..rows], 
          first_fail, 
          indomain_min, 
          complete) 
    satisfy;
    
  • fz, "row_column"
    MiniZinc model with Gecode/FlatZinc. Here the order of labeling is reversed, rows are labeled before columns. Model is nonogram_create_automaton2_row_column.mzn
    solve :: int_search(
          [x[i,j] | i in 1..rows, j in 1..cols], 
          first_fail, 
          indomain_min, 
          complete) 
    satisfy;
    
  • fz, "mixed"
    MiniZinc model with Gecode/FlatZinc: nonogram_create_automaton2_mixed.mzn.
    I have long been satisfied with the "normal" labeling in the MiniZinc model because P200 (the hardest problem I had tested until now) was solved so fast. However, the labeling used in the Comet Nonogram model described in Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second, which is also used in the Gecode model, is somewhat more complicated since it bases the exact labeling on a comparison of the hints for the rows and the columns.

    I decided to try this labeling in MiniZinc as well. However, labeling in MiniZinc is not as flexible as in Comet and Gecode. Instead, we have to add a dedicated array for the labeling (called labeling):
    array[1..rows*cols] of var 1..2: labeling;
    
    and then copy the elements of the grid to that array based on the relation between the row and column hints:
    constraint
          % prepare for the labeling
          if rows*row_rule_len < cols*col_rule_len then
               % label on rows first
               labeling = [x[i,j] | i in 1..rows, j in 1..cols]
          else 
               % label on columns first
               labeling = [x[i,j] | j in 1..cols, i in 1..rows]
          endif
          /\
          % .... 
    
    and last, the search is now based just on this labeling array:
    solve :: int_search(
            labeling,
            first_fail, 
            indomain_min, 
            complete)
    satisfy;
    
  • jacop, "normal"
    MiniZinc model with JaCoP/FlatZinc, using the same model as for fz "normal".
  • lazy, satisfy
    Model: nonogram_create_automaton2_mixed.mzn. This use the MiniZinc LazyFD solver with the search strategy:
    solve satisfy;
    
    This labeling is recommended by the authors of LazyFD. See MiniZinc: the lazy clause generation solver for more information about this solver.

    Note: The solver in the latest official MiniZinc version (1.0.3) doesn't support set variables. Instead I (and also Jan Wolter) used the latest "Release Of The Day" version (as per 2009-11-02).
  • Comet, normal
    Model: nonogram_regular.co. This is the Comet model I described in Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second. No changes have been made.
  • Gecode, normal
    This is the Nonogram model distributed with Gecode version 3.2.1. The labeling is much like the one used in the Comet model, as well as fz, "mixed". (In fact the labeling in the Gecode model was inspired by the labeling in the Comet model).
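The decision rule behind the "mixed" labeling above can be sketched outside MiniZinc as well. Here is a small Python illustration of the same row/column choice; the function name and toy values are mine:

```python
# Mirror of the MiniZinc condition: label row-by-row when
# rows*row_rule_len < cols*col_rule_len, otherwise column-by-column.
def labeling_order(rows, cols, row_rule_len, col_rule_len):
    if rows * row_rule_len < cols * col_rule_len:
        # label on rows first
        return [(i, j) for i in range(rows) for j in range(cols)]
    # label on columns first
    return [(i, j) for j in range(cols) for i in range(rows)]

print(labeling_order(2, 2, 1, 3)[:2])  # rows first: [(0, 0), (0, 1)]
```

The point is that the whole "mixed" strategy is just this one comparison of the hint sizes, selecting which of the two simpler orders to use per instance.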


Here are the results. For each model (+ labeling strategy) two values are presented:
  • time (in seconds)
  • number of failures if applicable (the LazyFD solver always returns 0 here).
The result
Problem          | fz normal          | fz row_column      | fz mixed           | jacop normal       | lazy satisfy | Comet normal       | Gecode normal
Dancer (#1)      | 0.48s (0)          | 0.31s (0)          | 1.00s (0)          | 3.64s (0)          | 0.91s (0)    | 0.691s (0)         | 0.199s (0)
Cat (#6)         | 0.24s (0)          | 0.24s (0)          | 0.25s (0)          | 1.20s (0)          | 1.13s (0)    | 0.6s (0)           | 0.025s (0)
Skid (#21)       | 0.24s (13)         | 0.23s (3)          | 0.28s (13)         | 0.78s (13)         | 1.37s (0)    | 0.586s (0)         | 0.217s (0)
Bucks (#27)      | 0.32s (3)          | 0.32s (9)          | 0.37s (3)          | 0.96s (3)          | 2.37s (0)    | 1.366s (5)         | 0.026s (2)
Edge (#23)       | 0.16s (25)         | 0.17s (25)         | 0.18s (25)         | 0.59s (25)         | 0.31s (0)    | 0.521s (43)        | 0.175s (25)
Smoke (#2413)    | 0.27s (5)          | 0.27s (8)          | 0.28s (8)          | 0.83s (5)          | 1.44s (0)    | 0.616s (14)        | 0.275s (5)
Knot (#16)       | 0.42s (0)          | 0.43s (0)          | 0.48s (0)          | 1.19s (0)          | 8.15s (0)    | 1.307s (0)         | 0.329s (0)
Swing (#529)     | 0.95s (0)          | 0.94s (0)          | 0.96s (0)          | 2.19s (0)          | 21.94s (0)   | 1.782s (0)         | 0.431s (0)
Mum (#65)        | 0.64s (20)         | 0.64s (22)         | 0.66s (22)         | 1.68s (20)         | 16.34s (0)   | 1.268s (39)        | 0.491s (22)
Tragic (#1694)   | 340.32s (394841)   | 1.02s (255)        | 436.92s (394841)   | -- (198329)        | 45.97s (0)   | 477.39s (702525)   | 1.139s (255)
Merka (#1611)    | -- (361587)        | 1.44s (79)         | -- (294260)        | -- (136351)        | 80.92s (0)   | 1.654s (46)        | 0.645s (13)
Petro (#436)     | 2.97s (1738)       | 143.09s (106919)   | 3.42s (1738)       | 7.27s (1738)       | 9.86s (0)    | 3.103s (3183)      | 151.09s (106919)
M_and_m (#4645)  | 1.41s (89)         | 601.27s (122090)   | 1.59s (89)         | 3.43s (89)         | 66.98s (0)   | 2.215s (162)       | 2.797s (428)
Signed (#3541)   | 1.87s (929)        | 23.12s (6484)      | 28.23s (6484)      | 5.75s (929)        | 73.02s (0)   | 20.369s (12231)    | 1.648s (929)
Light (#803)     | 600.47s-- (400660) | -- (621547)        | -- (485056)        | 601.53s-- (171305) | 8.64s (0)    | -- (0)             | -- (538711)
Forever (#6574)  | 4.14s (17143)      | 7.86s (30900)      | 6.22s (17143)      | 12.21s (17143)     | 3.27s (0)    | 7.5s (27199)       | 8.077s (30900)
Hot (#2040)      | -- (303306)        | -- (330461)        | -- (248307)        | -- (119817)        | 165.72s (0)  | -- (0)             | -- (312532)
Karate (#6739)   | 95.78s (215541)    | 67.27s (130934)    | 133.43s (215541)   | 373.02s (215541)   | 19.32s (0)   | 120.02s (272706)   | 80.56s (170355)
Flag (#2556)     | -- (1686545)       | 5.69s (14915)      | 7.93s (14915)      | -- (243222)        | 9.29s (0)    | 7.28s (24678)      | 3.998s (16531)
Lion (#2712)     | -- (542373)        | -- (1124697)       | -- (420216)        | -- (187215)        | 115.56s (0)  | -- (0)             | -- (869513)

Some conclusions, or rather notes

Here are some conclusions (or notes) about the benchmark.
  • The same solver, Gecode/FlatZinc, is here compared with three different labelings. There is no single labeling that is better than the others. I initially had some hopes that the "mixed" labeling would pick the best labeling from the two simpler row/column labelings, but this is not really the case. For example, for Tragic the row_column strategy is better than both "normal" and "mixed". I am, however, somewhat tempted to use the "row_column" labeling, but the drawback is that the "P200" problem (not included in Wolter's sample problems) takes much longer with this labeling.
  • The same model and labeling but with different solvers is also compared: Gecode/FlatZinc is faster than JaCoP/FlatZinc on all the problems. For the easier problems this could be explained by the extra startup time of Java for JaCoP, but that is not the complete explanation for the harder problems. Note: Both Gecode/FlatZinc and JaCoP/FlatZinc have dedicated and fast regular constraints (whereas the LazyFD and Comet solvers use a decomposition).
  • The LazyFD solver is the only one that solves all problems (including Lion), but it is somewhat slower than most of the others on the middle problems. This emphasizes that it is a very interesting solver.
  • It is also interesting to compare the result of the Comet model and Gecode/FlatZinc "mixed", since they use the same principle of labeling. However there are some differences. First, the MiniZinc model with Gecode/FlatZinc use a dedicated regular constraint, and Comet use my own decomposition of the constraint. For the Merka problem the Comet version outperforms the Gecode/FlatZinc version, otherwise it's about the same time (and number of failures).
  • The Light problem: It is weird that this problem was solved in almost exactly 10 minutes (the timeout is 10 minutes) by both Gecode/FlatZinc and JaCoP/FlatZinc. The solutions seemed correct, but I was suspicious of this. Update: Christian Schulte got me on the right track. Here is what happened: the first (unique) solution was found pretty quickly and was printed, but the solvers could not prove that it was unique, so they timed out. JaCoP/FlatZinc actually printed "TIME-OUT", but I didn't observe that. Case closed: they both FAILED on this test. Thanks, Christian. End update
As said above, I can only agree with Jan Wolter's comment that the ease of experimenting - for example, changing the labeling and solver for the FlatZinc solvers - is a very nice feature.

Last word

No benchmark or comparison of (constraint programming) models is really complete without the reference to the article On Benchmarking Constraint Logic Programming Platforms. Response to Fernandez and Hill's "A Comparative Study of Eight Constraint Programming Languages over the Boolean and Finite Domains" by Mark Wallace, Joachim Schimpf, Kish Shen, Warwick Harvey. (Link is to ACM.)

From the Abstract:
... The article analyses some pitfalls in benchmarking, recalling previous published results from benchmarking different kinds of software, and explores some issues in comparative benchmarking of CLP systems.

October 31, 2009

Some new models, etc

Readers who follow me on Twitter (hakankj) have probably already seen this, but here follows a list of models, etc., not yet blogged about.

MiniZinc: Different models

The following four models are translations of my Comet models: And here are some new models:

Nonogram related

A large 40x50 Nonogram problem instance in MiniZinc: nonogram_stack_overflow.dzn, to be used with the nonogram_create_automaton2.mzn model. The problem was mentioned in the Stack Overflow thread Solving Nonograms (Picross). It is solved in under 1 second with 0 failures.

Today my Nonogram model nonogram_create_automaton2.mzn was included in the great Survey of Paint-by-Number Puzzle Solvers (created by Jan Wolter).

My MiniZinc model is described and analyzed here. I'm not at all surprised that it's slower compared to the other solvers; it was quite expected.

Some comments:
Assessment: Slow on more complex puzzles.

...

Results: Run times are not really all that impressive, especially since it is only looking for one solution, not for two like most of the other programs reviewed here. I don't know what the differences are between this and Lagerkvist's system, but this seems much slower in all cases, even though both are ultimately being run in the Gecode library.
Update 2009-11-01
I later realized two things:

1) The mzn2fzn translator did not use the -G gecode flag, which means that Gecode/FlatZinc used a decomposition of the regular constraint instead of Gecode's built-in one, which is really the heart of this model. The model basically does two things: builds an automaton for a pattern and then runs regular on it.
2) When Jan compiled Gecode, he turned off all optimization for comparison reasons. This is quite unfortunate since Gecode is crafted with knowledge of the specific optimizations.

I have therefore rerun all the problems myself to see how well the model would have done (at least as a ballpark figure) when using a normally optimized Gecode and the -G gecode flag for mzn2fzn.

Explanation of the values:
Problem: Name of the problem instance
Runtime: The value of runtime from Gecode/FlatZinc solver
Solvetime: The value of solvetime from Gecode/FlatZinc solver
Failures: Number of failures.
Total time: The Unix time for running the complete problem, including the time of mzn2fzn (which was not included in the benchmark).
A "--" means that a solution was not reached in 10 minutes.
Problem Runtime Solvetime Failures Total time
Dancer 0.002 0.000 0 1.327s
Cat 0.009 0.002 0 0.965s
Skid 0.012 0.006 13 0.660s
Bucks 0.015 0.004 3 0.866s
Edge 0.008 0.005 25 0.447s
Smoke 0.011 0.004 5 0.963s
Knot 0.026 0.006 0 1.450s
Swing 0.059 0.012 0 1.028s
Mum 0.120 0.093 20 1.811s
Tragic 6:24.273 6:23.607 394841 6:28.10
Merka -- -- -- --
Petro 2.571 2.545 1738 4.071s
M&M 0.591 0.510 89 1.961s
Signed 1.074 1.004 929 2.461s
Hot -- -- -- --
Flag -- [lazy solver: 2 solutions in 10 seconds] -- -- --
Lion -- -- -- --

It's interesting to note that the Lazy solver finds some solutions quite fast for the "Flag" problem. However, there were no other big differences compared to Gecode/FlatZinc. I also tested the problems with JaCoP's FlatZinc solver, which solved them in about the same time as Gecode/FlatZinc with no dramatic differences.

As mentioned above, the exact values are not really comparable to the benchmark values. But they should give an indication of the result when using -G gecode and a normally optimized Gecode.

Unfortunately, I cannot link to the specific models due to copyright issues, but they can all be downloaded from the page Web Paint-by-Number Puzzle Export.

(End update)

Update 2009-11-02
Jan Wolter has now rerun the tests of my solver with the -G gecode option, and the times are much more like mine in the table above. The analysis is now quite different, with an assessment of "Pretty decent", and the following under Result:
...

When comparing this to other solvers, it's important to note that nonogram_create_automaton2.mzn contains only about 100 lines of code. From Kjellerstrand's blog, it is obvious that these 100 lines are the result of a lot of thought and experimentation, and a lot of experience with MiniZinc, but the Olšák solver is something like 2,500 lines and pbnsolve approaches 7,000. If you tried turning this into a fully useful tool rather than a technology demo, with input file parsers and such, it would get a lot bigger, but clearly the CSP approach has big advantages.

(End update 2)

Also, the Gecode nonogram solver is included in the survey, called Lagerkvist. I'm not sure when it was added to the survey. It uses the latest version of Gecode, so it must have been added quite recently.

Some comments:
Assessment: Pretty decent.

...

Puzzles with blank lines seem to cause the program to crash with a segmentation fault.

Otherwise it performs quite well. There seems to be about 0.02 seconds of overhead in all runs, so even simple puzzles take at least that long to run. Aside from that, it generally outperforms the Wilk solver. It's not quite in the first rank, especially considering that it was only finding one solution, not checking for uniqueness, but it's still pretty darn good.

...
I mailed Jan today about the -solutions n option in the Gecode related solvers (he tested my MiniZinc model with Gecode/FlatZinc), as well as some other comments about my model. Also, I will download the tested problems and play with them.

Tailor/Essence': Zebra puzzle

Suddenly I realized that there was no Zebra problem, neither at Andrea's site nor here. So here it is: zebra.eprime: Zebra puzzle.

Gecode: Words square problem

See Gecode: Modeling with Element for matrices -- revisited for an earlier discussion of this problem.

Thanks to Christian Schulte my Word square model is now much faster: word_square2.cpp.

From the comment in the model:
Christian Schulte suggested 
using branch strategy INT_VAL_SPLIT_MIN 
instead of INT_VAL_MAX .
This made an improvement for size 7 from
322 failures to 42 and from 1:16 minutes
to 10 seconds (for 1 solution).
 
Now it manages to solve a size 8 in a reasonable 
time (1:33 minutes and 1018 failures):
  abastral
  bichrome
  achiotes
  shippers
  troponin
  rotenone
  amerinds
  lessness
 
But wait, there's more!

In the SVN trunk version of Gecode, there is now a version of this model: examples/word-square.cpp, where Christian Schulte and Mikael Zayenz Lagerkvist have done a great job of improving it (using a slightly smaller word list, though). It solves a size 8 problem in about 14 seconds. There are also two different approaches with different strengths, etc. I have great hopes that it will be improved even further...

September 30, 2009

This week's news

This is mostly a consolidation (with some additional information) of the tweets on twitter.com/hakankj since last time.

mzn_show.pl

The Perl program mzn_show.pl is used for producing slightly better output from the FlatZinc solvers.

The parameters to the program are now:
  • {tr:from:to:}: translate digit to another digit/character
  • {trvar:var:from:to:}: translate digit to another digit/character for a specific variable var
  • {trtr:from_string:replacement_string:}: translates all digits in from_string to the corresponding digit/character in replacement_string (inspired from the Unix tr command)
  • {trtrvar:var:from_string:replacement_string:}: as trtr but for a specific variable var
  • {nospaces}: don't show space after character
  • {noprintf}: don't try to be clever with printf stuff
Example: for showing a nicer picture of a Nonogram:

flatzinc nonogram_create_automaton2.fzn | mzn_show.pl "{tr:1: :} {tr:2:#:} {nospace}"

Here {tr:from:to:} means translate the digit from to the character to, and {nospace} means that no spaces are shown after each digit/character.

This is now probably better written with trtr:

flatzinc nonogram_create_automaton2.fzn | mzn_show.pl "{trtr:12: #:} {nospace}"

Also, see below for some more examples.
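As an aside, the core of the {trtr:...} translation is easy to sketch in Python. This is a hypothetical re-implementation (not the actual mzn_show.pl code), using str.translate much like Unix tr:

```python
# Hypothetical sketch of mzn_show.pl's {trtr:from:to:} parameter:
# map each digit in from_string to the character at the same position
# in replacement_string (like Unix tr), leaving other characters alone.
def trtr(line, from_string, replacement_string):
    return line.translate(str.maketrans(from_string, replacement_string))

# A row of a Nonogram solution, with 1 = blank and 2 = filled:
print(trtr("122112", "12", " #"))  # -> " ##  #"
```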

MiniZinc/Comet: Nonogram model fixed for "zero length" clues

I discovered that my MiniZinc and Comet models had a little bug in the generation of the automaton for "zero length" clues (i.e. the row/column is completely empty, just 0's).

The fixed versions are here:
* MiniZinc: nonogram_create_automaton2.mzn
* Comet: nonogram_regular.co

It was quite tricky to debug the generated automata, but I found a fix surprisingly soon. I won't publish the fixes here; it suffices to say that the MiniZinc model is - shall we say - not more beautiful than before.

A test instance for this: T2 (source unknown)
* Comet: nonogram_t2.co
* MiniZinc: nonogram_t2.dzn

T2 has the following two solutions:
   ##   ##  ###
  #  # #  # #  #
  #      #  #  #
  #     #   ###
  #  # #  # #
   ##   ##  #

----------

   ##   ##  ###
  #  # #  # #  #
  #     #   #  #
  #      #  ###
  #  # #  # #
   ##   ##  #

One more Nonogram example

Besides the T2 problem instance mentioned above, there is a new Nonogram example: Gondola for both Comet and MiniZinc:
* Comet: nonogram_gondola.co
* MiniZinc: nonogram_gondola.dzn.
For the curious, here is the solution, using the following command (see above for the mzn_show.pl part):
$ mzn2fzn -G jacop --data nonogram_gondola.dzn nonogram_create_automaton2.mzn
$ java JaCoP.fz.Fz2jacop -a -s  nonogram_create_automaton2.fzn | mzn_show.pl "{trtr:12: #:} {nospaces}"
  #####      ######
  ######     #    #        #
     ###  ###########     ###
  ######   # #    #       # #
  #######  # # # ##   #   ###
     ####    #    #  ##   # ####
  #######    #  # # ##    ### #
  #######     #  # ###    # #
     ####     # #  # # #########
     ####   ######## # #
     ####  #     ####  #   ###
     #### # #######    #  #####
     #### # #  ## # ####  #    #
     #### #########      ## # ##
     #### # ###   #      ##    #
     #### # ######       #   # #
     #### ########      ###    #
     ########## ###    ##### ###
      #### # ## ###    #####  ##
      ### #####  ##     ########
      ## ######  ###      #    #
      # ############     #
  ####################   # #
    ## #########################
   ##   ### ####################
  ##     ##### ### ## ## ## ## #
  #   ##  ######################
            ####################
    ###       ##################
  #        ##

Updated Comet's SONET model

In About 20 more constraint programming models in Comet, Pierre Schaus (one of Comet's developers) commented that my use of tryall outside the using block in the SONET problem (sonet_problem.co) was not recommended.

I have updated the model with his suggestion that uses or instead:
m.post(or(ring in 1..r) (rings[ring,client1] + rings[ring, client2] >= 2));
Thanks for the correction (suggestion), Pierre.

New MiniZinc model: Letter Square problem

letter_square.mzn is (yet another) grid problem. From my comment in the model file:

This problem is from the Swedish book Paul Vaderlind: Vaderlinds nya hjärngympa ('Vaderlind's new brain gymnastics'), page 63ff. Unfortunately, I don't know the origin of this problem.

The objective is to create a matrix where all values in each row/column are different, except for the blanks.

A set of hints is given: the first (or some early) non-blank value of each
- upper row
- lower row
- left column
- right column
as seen from that direction.

Example:
    B B
  -------
 |A   B C| 
B|  B C A| 
B|B C A  | 
 |C A   B| 
  -------
  C
This model codes the hints as follows:
 blank -> 0
 A     -> 1
 B     -> 2
 C     -> 3

row_upper = [0,2,2,0];
row_lower = [3,0,0,0];
col_left  = [0,2,0,0];
col_right = [0,0,0,0];
Note: there are no hints for the right column.

Here is yet another opportunity to use mzn_show.pl (0 is here translated to blank, etc):

solver model.fzn | mzn_show.pl "{trtr:0123456: ABCDEF:}"

With the following output:
  A B   C
      A B C
  B C   A
    A C   B
  C   B   A
The following problem instances from the book have been created:

August 11, 2009

Strimko - Latin squares puzzle with "streams"

Via the post A New Twist on Latin Squares from the interesting math blog 360, I learned the other day about a new grid puzzle Strimko, based on Latin squares (as, for example, Sudoku also is).

The rules of Strimko are quite simple:
Rule #1: Each row must contain different numbers.
Rule #2: Each column must contain different numbers.
Rule #3: Each stream must contain different numbers.
The stream is a third dimension of the puzzle: a connected "stream" of cells whose numbers must also be distinct.

Here is an example of a Strimko puzzle (weekly #068, same as in the 360 post):
Strimko puzzle
(the link goes to a playable version).

MiniZinc model

This problem was simple to solve in MiniZinc: strimko.mzn, and the data file strimko_068.dzn.

Here are the constraints of the model: calls to all_different, one each for the rows and columns (these two form the Latin square requirement), and also one for each stream. The placed part handles the "hints" (the given numbers) in the puzzle.
constraint 
  % latin square
  forall(i in 1..n) (
      all_different([ x[i, j] | j in 1..n]) /\
      all_different([ x[j, i] | j in 1..n])
  )
  /\ % streams
  forall(i in 1..n) (
     all_different([x[streams[i,2*j+1],streams[i,2*j+2]] | j in 0..n-1])
  )
  /\ % placed
  forall(i in 1..num_placed) (
      x[placed[i,1], placed[i,2]] = placed[i,3]
  )
;
The data file strimko_068.dzn is shown below. Note the representation of the puzzle: each cell is represented by two numbers, row,column.
% Strimko Set 068
n = 4;
streams = array2d(1..n, 1..n*2, [
                    1,1, 2,2, 3,3, 4,4,
                    2,1, 1,2, 1,3, 2,4,
                    3,1, 4,2, 4,3, 3,4,
                    4,1, 3,2, 2,3, 1,4
                   ]);
num_placed = 3;
placed = array2d(1..num_placed, 1..3, [
                             2,2,3,
                             2,3,2,
                             3,3,1
                           ]);
Solution:
  4 2 3 1
  1 3 2 4
  2 4 1 3
  3 1 4 2

A better version

But it was quite boring and error-prone to code the problem instance in that representation. There is - of course - a simpler way: represent the streams themselves, i.e. give each stream a unique "stream number"; all cells with the same stream number must then contain distinct values. Data file: strimko2_068.dzn
% Strimko Weekly Set 068
streams = array2d(1..n, 1..n, [
                    1,2,2,4,
                    2,1,4,2,
                    3,4,1,3,
                    4,3,3,1
                  ]);
% ...
The model also needed a simple change to reflect this representation: strimko2.mzn:
  % ...
  /\ 
  % streams
  forall(s in 1..n) (
     all_different([x[i,j] | i,j in 1..n where streams[i,j] = s])
  )
  % ....
The main advantage of the second model is that it makes it easier to state the problem instances. The grid representation is also somewhat smaller: n x n versus n x n*2. However, I haven't noticed any performance difference between these two models, but the problems are all very small so it may not be visible.
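To make the stream-number representation concrete, here is a small brute-force sketch in Python (an illustration only, not the MiniZinc model) that solves the Strimko #068 instance above:

```python
from itertools import permutations, product

n = 4
# Stream numbers for each cell, as in strimko2_068.dzn
streams = [[1, 2, 2, 4],
           [2, 1, 4, 2],
           [3, 4, 1, 3],
           [4, 3, 3, 1]]
placed = [(2, 2, 3), (2, 3, 2), (3, 3, 1)]  # 1-based (row, col, value) hints

def valid(grid):
    # columns must contain distinct values (rows are distinct by construction)
    if any(len({grid[i][j] for i in range(n)}) != n for j in range(n)):
        return False
    # each stream must contain distinct values
    for s in range(1, n + 1):
        vals = [grid[i][j] for i in range(n) for j in range(n)
                if streams[i][j] == s]
        if len(set(vals)) != n:
            return False
    # the given hints
    return all(grid[r - 1][c - 1] == v for r, c, v in placed)

rows = list(permutations(range(1, n + 1)))          # each row is a permutation
solutions = [grid for grid in product(rows, repeat=n) if valid(grid)]
print(solutions)
```

This finds exactly the single solution shown above; the set-size checks here play the role of the all_different calls in the MiniZinc model.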

The problem instances

Here are all the implemented models and the problem instances:
For more information about Strimko, and some problems, see:
For more about MiniZinc and some of my models, see My MiniZinc page.

June 24, 2009

My old swedish blog posts about constraint programming translated (via Google)

Before starting this blog (i.e. My Constraint Programming blog) in late December 2008, I blogged about constraint programming in my Swedish blog (hakank.blogg). Translations of those Swedish blog posts have not been collected before, and now it is time.

So, here are links to most of the blog posts from the category Constraint Programming, translated via Google's Language Tools. Most of the translations are intelligible, but if you have questions about some particular post, please feel free to mail me at hakank@bonetmail.com for more information.


November 4, 2005
Swedish: Sesemans matematiska klosterproblem samt lite Constraint Logic Programming
Translated: Sesemans kloster mathematical problems and little Constraint Logic Programming ("kloster" means "convent").


April 18, 2006
Swedish: Choco: Constraint Programming i Java
Translated: Choco: Constraint Programming in Java


The two posts above were written when I had just a cursory interest in constraint programming. From about February 2008 onward, it became my main interest.


April 5, 2008
Swedish: Constraint Programming: Minizinc, Gecode/flatzinc och ECLiPSe/minizinc,
Translated: Constraint Programming: Minizinc, Gecode / flatzinc and ECLIPSE / minizinc.


April 14, 2008
Swedish: MiniZinc-sida samt fler MiniZinc-exempel
Translated: MiniZinc-page and multi-MiniZinc example.


April 21, 2008
Swedish: Applikationer med constraint programming, lite om operations research samt nya MiniZinc-modeller
Translated: Applications of constraint programming, a little of operations research and new MiniZinc models


April 26, 2008
Swedish: Ett litet april-pyssel
Translated: A small-craft in April (a better translation would be "A small April puzzle")


April 27, 2008
Swedish: Mitt OpenOffice Calc-/Excel-skov
Translated: My Open Office Calc-/Excel-skov (better translation: My Open Office Calc-/Excel period).


May 26, 2008
Swedish: Några fler MiniZinc-modeller, t.ex. Smullyans Knights and Knaves-problem
Translated: Some more MiniZinc models, eg Smullyans Knights and Knaves-problem Smullyans Knights and Knaves problem


June 2, 2008
Swedish: MiniZinc/FlatZinc version 0.8 släppt
Translated: MiniZinc / FlatZinc version 0.8 released


June 2, 2008
Swedish: Nya MiniZinc-modeller, flera global constraints, däribland clique
Translated: New MiniZinc models, several global constraints, including Clique


June 5, 2008
Swedish: Mats Anderssons tävling kring fotbolls-EM 2008 - ett MiniZinc-genererat tips
Translated: Mats Andersson racing around championship in 2008 - a MiniZinc-generated tips (it is about a competition on predicting the European soccer championship 2008 using MiniZinc)


June 24, 2008
Swedish: Tre matematiska / logiska pyssel med constraint programming-lösningar: n-puzzle, SETS, M12 (i MiniZinc)
Translated: Three mathematical / logical craft with constraint programming solutions: n-puzzle, SETS, M12 (in MiniZinc) ("craft" should be translated to "puzzles")


June 24, 2008
Swedish: MiniZinc-nyheter
Translated: MiniZinc news


June 29, 2008
Swedish: Gruppteoretisk lösning av M12 puzzle i GAP
Translated: Group Theoretical solution of the M12 puzzle in GAP (well, this is not really a constraint programming solution, but it is another way of solving the M12 puzzle blogged about in June 24)


June 30, 2008
Swedish: Gruppteoretisk lösning av M12 puzzle i GAP - take 2
Translated: Group Theoretical solution of the M12 puzzle in GAP - take 2


July 4, 2008
Swedish: Martin Chlond's Integer Programming Puzzles i MiniZinc
Translated: Martin's Chlond Integer Programming Puzzles in MiniZinc


July 7, 2008
Swedish: Fler MiniZinc modeller kring recreational mathematics
Translated: More MiniZinc models on recreational mathematics


July 20, 2008
Swedish: Fler constraint programming-modeller i MiniZinc, t.ex. Minesweeper och Game of Life
Translated: More constraint programming models in MiniZinc, eg Minesweeper och Game of Life Minesweeper and the Game of Life


August 17, 2008
Swedish: JaCoP - Java Constraint Programming solver
Translated: JaCoP - Java Constraint Programming solver


September 14, 2008
Swedish: Constraint programming i Java: Choco version 2 släppt - samt jämförelse med JaCoP
Translated: Constraint programming in Java: Choco version 2 released - and comparison with JaCoP


September 28, 2008
Swedish: Constraint programming: Fler MiniZinc-modeller, t.ex. Martin Gardner och nonogram
Translated: Constraint programming: More MiniZinc models, eg Martin Gardner och nonogram Martin Gardner and nonogram


December 27, 2008
Swedish: Constraint programming-nyheter samt nya MiniZinc-modeller
Translated: Constraint programming, news and new MiniZinc models


December 29, 2008
Swedish: My Constraint Programming Blog
Translated: My Constraint Programming Blog


After that, the entries about constraint programming on my Swedish blog were just summaries of the material written here at My Constraint Programming blog.

March 14, 2009

Solving Pi Day Sudoku 2009 with the global cardinality constraint

Here is an update of the Pi Day Sudoku 2009 problem.

After a suggestion from Mikael Zayenz Lagerkvist (of the Gecode team) the model now uses just the global cardinality constraint and is much faster than the model which used my own decomposition of all_different_except_0 (or rather all_different_except_Pi) and the count constraint.

Both of the new MiniZinc and Comet models solve the Pi Day Sudoku 2009 problem in less than a second (compared to a couple of seconds before).

The Comet model has the following statistics for finding the first (and unique) solution.

time: 9
#choices = 6
#fail = 1
#propag = 1306

For Gecode/FlatZinc:

runtime: 20
solutions: 1
propagators: 35
propagations: 255
nodes: 3
failures: 0
peak depth: 2
peak memory: 158 KB

Compared to the former model, there are now far fewer failures.


Some more about the global cardinality constraint
Global cardinality (in Sudoku) is a way of stating how many occurrences there should be of each number per row, column or region. For the Pi Day Sudoku problem the following occurrences are needed:

Pi: 3 occurrences (coded as -1 in the model)
0: 0 occurrences (0 is a dummy value for handling the input data)
1..9: exactly one occurrence each

In MiniZinc this is coded as

array[1..n, 1..n] of var -1..9: x;
array[-1..9] of 0..3: occ = array1d(-1..9, [3, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]);
constraint
  forall(i in 1..n) ( % rows
    check([x[i,j] | j in 1..n], occ)
  )
...

In Comet the function cardinality is used, i.e. m.post(cardinality(occ, all(j in 1..n) x[i,j]));.
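The counting requirement itself is easy to state outside any CP system; here is a small Python check (for illustration only) of what the global cardinality constraint demands of each row/column/region in the Pi Day puzzle:

```python
from collections import Counter

PI = -1  # Pi is coded as -1, as in the models above
# exactly three Pi, no dummy 0, and each of 1..9 exactly once
required = {PI: 3, 0: 0, **{d: 1 for d in range(1, 10)}}

def satisfies_gcc(cells):
    """Each 12-cell row/column/region must match the required occurrences."""
    counts = Counter(cells)
    return all(counts[v] == k for v, k in required.items())

ok_row  = [PI, 1, 2, PI, 3, 4, 5, PI, 6, 7, 8, 9]
bad_row = [1, 1, 2, PI, 3, 4, 5, PI, 6, 7, 8, 9]  # two 1s, only two Pi
```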


For more information about the global cardinality constraint, see the entry in Global Constraint Catalog.


A comment
Introductions to constraint programming nowadays tend to start with the Sudoku problem (or SEND + MORE = MONEY). These models often use the alldifferent constraint, and seldom, if ever, have I seen any presentation using global cardinality. One reason may be that alldifferent is easier to understand in an introduction; another may be that it is in general a better constraint to use for these problems.

"Vanilla" Sudoku solver using global cardinality
I have compared a few "vanilla" Sudoku problems with either alldifferent or global cardinality but haven't done any systematic comparison. Sometimes the global cardinality approach is better, sometimes it is somewhat worse.

sudoku_gcc.mzn is a "vanilla" Sudoku solver in MiniZinc using the global cardinality constraint. And the same approach in Comet: sudoku_gcc.co. Note: these models include just a few problems.


Also see: Gecode's Sudoku solver is very fast, and includes 88 different problems.

March 11, 2009

Pi Day Sudoku 2009

Brainfreeze Puzzles has a Pi day Sudoku competition:


Pi Day is a celebration of the number π that occurs every March 14 (3.14...). Math geeks all over the world celebrate their math-geekiness on this day with pie-eating contests, recitations of the digits of pi, and occasionally fundraisers where math faculty get hit in the face with pies. At Brainfreeze Puzzles we celebrate Pi Day - how else? - with a Sudoku puzzle.

...

Rules: Fill in the grid so that each row, column, and jigsaw region contains 1-9 exactly once and π [Pi] three times.

The puzzle is also here (as a PDF file)

Models

I programmed this puzzle in both Comet (using the constraint programming module) and MiniZinc using different solvers. The models use the same principle for solving the problem: a (homebrewed) all different except 0 constraint and counting exactly 3 occurrences of Pi for each row/column and region.

All the solvers give the same unique answer, which of course may mean that there is some systematic error on my part.


Comet


Comet solves the problem in about 9 seconds with the following statistics, using exploreall to make sure that the solution is unique.

time: 8821
#choices = 9999
#fail = 16231
#propag = 23168886

Somewhat surprisingly, selecting j before i in the forall loop gives an improvement from 15 seconds to 9 seconds. This was the same labeling strategy that was used for improving Nonogram (see Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second). Here is the labeling:

// reversing i and j gives faster solution
forall(j in 1..n, i in 1..n: !x[i,j].bound()) {
  tryall(v in V : x[i,j].memberOf(v)) by(v)
    m.label(x[i,j], v);
  onFailure
    m.diff(x[i,j], v);
}

MiniZinc and Gecode/flatzinc

Interestingly, the same improvement was observed with MiniZinc and the Gecode/flatzinc solver (the interface to Gecode). Here is the labeling function, with the same swapping of i and j (I didn't know that this mattered in MiniZinc).


solve :: int_search([x[i,j] | j, i in 1..n], "input_order", "indomain", "complete") satisfy;

With the swap, the time went from 16 seconds to 10 seconds with the following statistics:

propagators: 9460
propagations: 13461717
failures: 11221
clones: 11221
commits: 31478
peak memory: 5638 KB
mzx.pl sudoku_pi.mzn --num 2 9,41s user 0,72s system 99% cpu 10,139 total

Fzntini, FlatZinc, ECLiPSe and searching for the first solution

MiniZinc's builtin solver flatzinc and fzntini (a SAT solver) show only one solution and no statistics.

* fzntini solves the problem in about 13 seconds
* flatzinc takes 3 seconds.

ECLiPSe's MiniZinc solver ic takes 6.5 seconds using the same labeling as Gecode/flatzinc. No statistics are shown for this solver either.

For comparison, both Comet and Gecode/flatzinc show the first solution in 2.5 seconds.

Solution

Since it is a competition I won't show my solution or the models on/via the blog. Sorry about that.

Reference

I learned about Pi Days Sudoku from the 360 blog: Pi Day Sudoku 2009.

March 08, 2009

Comet: About 15 new Comet models

The latter part of the last week was dominated by the optimization of the Nonogram solver, which I wrote about in Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second. I would like to thank Pascal Van Hentenryck, Mikael Zayenz Lagerkvist, and Christian Schulte for help, comments, and inspiration.


Here are the other Comet models written the last week.

Puzzles, recreation mathematics, etc

Pascal Van Hentenryck's OPL book

These models are translations from Pascal Van Hentenryck's "The OPL Optimization Programming Language".

Operations research

  • furniture_moving_cumulative.co: Furniture moving using the global constraint cumulative (a simple scheduling problem from Marriott & Stuckey: 'Programming with constraints', page 112f). Compare with furniture_moving.co, which uses Comet's Schedule constraint.

Global constraints

Many of these models have been inspired by the great collection in the Global Constraint Catalog.

February 28, 2009

Comet: regular constraint, a much faster Nonogram with the regular constraint, some OPL models, and more

Since last time, some more Comet models have been written.

Much faster Nonogram model using the regular constraint

In More Comet models, e.g. Nonogram, Steiner triplets, and different set covering problems I presented nonogram.co, a solver for Nonogram puzzles, and also noted that it was quite slow: Well, it is nice to have some more optimization to do (or more probably, a complete remodelling)....

Inspired by the announcement this week of the ECLiPSe example of a Nonogram solver using the regular constraint (see nono_regular.ecl.txt and regular.ecl.txt) - and also my earlier Gecode/R model nonogram.rb which used "regular expressions" - I created a Nonogram model in Comet using the regular constraint: nonogram_regular.co.

Let us first look at the regular constraint.


Regular constraint
The regular constraint (see the Comet model regular.co for my implementation) is a global constraint using a DFA (deterministic finite automaton) which accepts (or rejects) an input sequence given a "transition matrix". The constraint was presented by Gilles Pesant in "A Regular Language Membership Constraint for Finite Sequences of Variables" (2004).

My implementation of the regular constraint is heavily borrowed from MiniZinc's builtin regular predicate (from lib/zinc/globals.mzn); the translation to Comet was very straightforward (omitting just some asserts).

An example of the usage of the constraint: we want to match the regular expression "123*21" (i.e. first "1", then "2", then zero or more "3", then "2", and finally "1"). Note: the length of the sequence is 10, so there must be six "3"s since "123" and "21" are "anchored" at the beginning and the end.

int len = 10;
int n_states = 5;       // number of states
int input_max = 3;      // the input values are 1, 2, and 3
int initial_state = 1;  // we start in state 1
set{int} accepting_states = {n_states}; // the last state is accepting
// The transition matrix
int transition_fn[1..n_states, 1..input_max] =
  [[2, 0, 0],  // transitions from state 1: "1" -> state 2
   [0, 3, 0],  // transitions from state 2: "2" -> state 3
   [0, 4, 3],  // transitions from state 3: "2" -> state 4 | "3" -> state 3 (loop)
   [5, 0, 0],  // transitions from state 4: "1" -> state 5
   [0, 0, 0]]; // transitions from state 5: END state
...
exploreall {
  regular(reg_input, n_states, input_max, transition_fn, initial_state, accepting_states);
} using {
  // ....
}

The unique sequence resulting from this automaton is thus 1 2 3 3 3 3 3 3 2 1.
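The mechanics of the automaton can be sketched in plain Python (an illustration of the DFA itself, not of the propagation that the regular constraint performs):

```python
# The transition matrix above, with 0 as the failing state.
transition_fn = [
    [2, 0, 0],  # state 1: "1" -> state 2
    [0, 3, 0],  # state 2: "2" -> state 3
    [0, 4, 3],  # state 3: "2" -> state 4, "3" -> state 3 (loop)
    [5, 0, 0],  # state 4: "1" -> state 5
    [0, 0, 0],  # state 5: END state
]

def accepts(seq, initial_state=1, accepting_states={5}):
    state = initial_state
    for sym in seq:
        state = transition_fn[state - 1][sym - 1]
        if state == 0:  # 0 is the failing state
            return False
    return state in accepting_states

print(accepts([1, 2, 3, 3, 3, 3, 3, 3, 2, 1]))  # the unique length-10 sequence
```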

To use regular in the Nonogram problem, the automaton for each row/column clue must be built, preferably by a function. In regular.co this is done with the function make_transition_matrix, which was by far the hardest part of this problem (and surely could be written in a smoother way).

For the Nonogram clue [3,2,1] - which represents the regular expression "0*1110*110*10*" - the following automaton (transition matrix) is generated:
1 2
0 3
0 4
5 0
5 6
0 7
8 0
8 9
9 0


Note that the regular function uses 0 (zero) as the failing state, so the states must start with 1. This is taken care of in the model by subtracting 1 from the resulting values.
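To give an idea of the construction (this is my own Python sketch, not the Comet code in regular.co): for each block the automaton chains the required 1s, and between blocks it demands at least one 0 before the next block may start.

```python
def make_transition_matrix(clue):
    """Build a transition matrix (columns = inputs 0/1, 0 = failing state)
    for a Nonogram clue such as [3,2,1], i.e. the regexp 0*1110*110*10*."""
    rows = [[1, 2]]          # state 1: loop on 0, a 1 enters the first block
    state = 2                # the state currently being defined
    for i, block in enumerate(clue):
        for _ in range(block - 1):        # inside a block only 1s are allowed
            rows.append([0, state + 1])
            state += 1
        if i == len(clue) - 1:            # final state: loop on 0, accepting
            rows.append([state, 0])
        else:                             # block done: at least one 0 ...
            rows.append([state + 1, 0])
            state += 1
            rows.append([state, state + 1])  # ... loop on 0 or start next block
            state += 1
    return rows

for row in make_transition_matrix([3, 2, 1]):
    print(*row)
```

For the clue [3,2,1] this prints exactly the 9-state matrix shown above.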

As usual in my models, the regular constraint is just a "convenience function", i.e. it does not use specially written propagation methods etc.

The regular constraint has - of course - more applications than for Nonogram solving. I plan to look more into this.


The Nonogram solver: nonogram_regular.co
After the regular constraint and the automaton generator were written, it was quite easy to change the old Nonogram solver to use these new tools. The result is in nonogram_regular.co. I was quite curious how fast this version would be compared to the former slow model. In short: it is much faster.

As a comparison, the Lambda instance took about 12 seconds with the old model; with the new model it takes 0.5 seconds, which is a nice improvement. I never finished a run of the Nonunique problem with the older model; the new model takes 0.5 seconds (including the startup of the Comet program). Etc.

The P200 problem nonogram_p200.co now takes 2:05 minutes, which can be compared with 57 seconds for the Gecode/R model. Thus, the Comet model is still slow compared to the Gecode version and the ECLiPSe version, which solve the P200 problem in just a second or two. Maybe some creative labeling or a "proper" regular constraint can speed things up...

Update an hour later: one way to gain about 30 seconds (down to 1:30 minutes) on the P200 problem was to explicitly set the consistency level to onDomains, e.g. cp.post(a[m] == q0, onDomains), and to use another labeling strategy:
forall(i in 1..rows, j in 1..cols : !board[i,j].bound()) {
  // label(board[i,j]); // the former strategy
  tryall(v in 1..2) m.post(board[i,j] == v, onDomains);
}


Some more Nonogram problems have been coded:

An aside about regular expressions
I have been very interested in regular expressions (especially the more expressive Perl type) for a long time. In 1997 I wrote a simple Perl program MakeRegex which returns a regular expression given a list of words. It was later Appletized in MakeRegexApplet. Now there are better programs/modules for this.

OPL Models

One of my projects is to translate the OPL models from Pascal Van Hentenryck's "The OPL Optimization Programming Language" into Comet. Even if OPL and Comet are different languages, reading the book has been very rewarding. Thanks again, Pascal!

Some OPL models have already been published, but now I've been more systematic and have started from the beginning. More models to come.

Finding: arrays in a tuple
In Comet: New models, e.g. Einstein puzzle, KenKen, Kakuro, Killer Sudoku, Stigler's Diet problem, translations of OPL models I wrote

One big difference here: In Comet it is not allowed to have an array in a tuple, so the use data must be a separate matrix.

I've found a way of using arrays in a tuple: first initialize the values in a matrix and then - using the all function - "slice" the values into the tuple array. This has been done in the models fixed_charge2.co and production3.co.

Example from fixed_charge2.co:

tuple productType {
  int profit;
  set{Machines} machines;
  int[] use;
}
int use[Products, Resources] = [[3,4], [2,3], [6,4]];
productType Product[Products] =
  [productType(6, {shirtM}, all(i in Resources) use[shirt,i]),
   productType(4, {shortM}, all(i in Resources) use[shorts,i]),
   productType(7, {pantM}, all(i in Resources) use[pants,i])];

Combining different models

The alphametic problem SEND + MOST = MONEY has the additional requirement to maximize the value of MONEY. The older model send_most_money.co does that and nothing more.

One natural extension of the problem is the following:
* first find the maximum value of MONEY
* then find all the solutions with this value.

The model send_most_money2.co has two functions:
* smm which is just the constraint for the alphametic problem
* send_most_money which has a parameter money.

send_most_money is first called with 0, indicating that it should maximize the value of MONEY, and then returns that value. The next call to send_most_money is with the calculated MONEY value, which indicates that all solutions should be generated.

The answer is

check all solutions for MONEY = 10876
x[9,7,8,2,1,0,4,6] MONEY: 10876
x[9,7,8,4,1,0,2,6] MONEY: 10876
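The two-phase idea is easy to emulate with a brute-force Python sketch (illustrative only; the Comet model of course uses constraint propagation instead of enumeration):

```python
from itertools import permutations

# SEND + MOST = MONEY: track the maximum MONEY seen, and collect all
# assignments that attain it (letters are pairwise distinct digits).
best, best_sols = 0, []
for s, e, n, d, m, o, t, y in permutations(range(10), 8):
    if s == 0 or m == 0:  # no leading zeros
        continue
    send  = 1000 * s + 100 * e + 10 * n + d
    most  = 1000 * m + 100 * o + 10 * s + t
    money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
    if send + most == money:
        if money > best:
            best, best_sols = money, []
        if money == best:
            best_sols.append((s, e, n, d, m, o, t, y))

print(best, best_sols)
```

This reproduces the answer above: MONEY = 10876 with two solutions.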


Jim Orlin's Logic Puzzle

In Colored letters, labeled dice: a logic puzzle Jim Orlin stated a logic puzzle:
My daughter Jenn bought a puzzle book, and showed me a cute puzzle. There are 13 words as follows: BUOY, CAVE, CELT, FLUB, FORK, HEMP, JUDY, JUNK, LIMN, QUIP, SWAG, VISA, WISH.

There are 24 different letters that appear in the 13 words. The question is: can one assign the 24 letters to 4 different cubes so that the four letters of each word appears on different cubes. (There is one letter from each word on each cube.) It might be fun for you to try it. I’ll give a small hint at the end of this post. The puzzle was created by Humphrey Dudley.

...

If anyone wants to write an Excel spreadsheet and solve it via integer programming, please let me know. I’d be happy to post the Excel spreadsheet if you send it to me, or link to it if you post it and send me the URL.

This was a fun puzzle, so I modeled the problem in labeled_dice.co, and mailed Jim the link.

Some days later he wrote Update on Logic Puzzle, where the contributed models were presented. There was another Comet model, WordPuzzleSchaus.co, by Pierre Schaus, one of Comet's developers. Pierre's model uses a different and more elegant approach than mine.

Jim also linked to some other logical puzzles from the great collection brownbuffalo.sourceforge.net (which have solutions in ECLiPSe/Prolog). One of these puzzles was Building Blocks, in the same vein as his labeled dice puzzle. Hence I had to make a Comet model of this problem: building_blocks.co.

He concludes the post with the following:

Incidentally, the ease for solving it using Constraint Programming makes me think that Constraint Programming should be considered a fundamental tool in the OR toolkit.


Other Comet models this week

Here are the other Comet models created/published this week. Some of them were very easy to do since they are translations of my MiniZinc models.

February 21, 2009

Comet: New models, e.g. Einstein puzzle, KenKen, Kakuro, Killer Sudoku, Stigler's Diet problem, translations of OPL models

Here are some of the new Comet models written the last week.

Findings this week

Representing KenKen, Kakuro, and Killer Sudoku

One important thing in modelling a problem is how to represent the data. For grid puzzles such as KenKen, Kakuro, and Killer Sudoku, where one of the objectives is to operate (add, multiply, etc.) on an array of unknown length, I found one representation which works quite well (at least on the simple problems I tested).

Let's take the KenKen problem from Wikipedia.

The first version kenken.co was quite large, and stated explicitly all the operations (addition, subtraction, multiplication, or division). This was slightly generalized in the same model, though.

After I relaxed the requirement to state exactly which operation to use, things actually got simpler to model. In the Comet model kenken2.co we represent a cell with a tuple (struct) of two integers:

tuple cell {
  int r;
  int c;
}

The cage of numbers to add (or multiply, etc.) is represented by another tuple: a set of cells and the result (an integer):

tuple P {
  set{cell} c;
  int res;
}

Now we simply state the problem using this structure:

int num_p = 15; // number of cages/segments
P problem[1..num_p] = 
  [
   P({cell(1,1), cell(2,1)}, 11),
   P({cell(1,2), cell(1,3)}, 2),
   P({cell(1,4), cell(2,4)}, 20),
   P({cell(1,5), cell(1,6), cell(2,6), cell(3,6)}, 6),
   P({cell(2,2), cell(2,3)}, 3),
   P({cell(2,5), cell(3,5)}, 3),
   P({cell(3,1), cell(3,2), cell(4,1), cell(4,2)}, 240),
   P({cell(3,3), cell(3,4)}, 6),  
   P({cell(4,3), cell(5,3)}, 6),
   P({cell(4,4), cell(5,4), cell(5,5)}, 7),
   P({cell(4,5), cell(4,6)}, 30),  
   P({cell(5,1), cell(5,2)}, 6),
   P({cell(5,6), cell(6,6)}, 9),
   P({cell(6,1), cell(6,2), cell(6,3)}, 8),
   P({cell(6,4), cell(6,5)}, 2)
   ];

This representation makes it possible to read a problem from a file, etc. (which has not been done, though).

The single function that calculates the values in the segment given the result is as follows. Note: here I assume that division and subtraction will only be done for two-element sets.

function void calc(Solver m, set{cell} cc, var{int}[,] x, int res) {
  if (cc.getSize() == 2) {
    var{int} a(m, x.getRange(0));
    var{int} b(m, x.getRange(0));
    m.post(a == x[cc.atRank(0).r, cc.atRank(0).c]);
    m.post(b == x[cc.atRank(1).r, cc.atRank(1).c]);
    m.post((a + b == res) || (a * b == res) ||
           (a * res == b) || (b * res == a) ||
           (a - b == res) || (b - a == res));
  } else {
    m.post(sum(i in cc) x[i.r, i.c] == res || prod(i in cc) x[i.r, i.c] == res);
  }
}
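The same cage logic can be written as a plain Python predicate over known values (a check, not a constraint; this hypothetical cage_ok mirrors calc above, including the assumption that subtraction and division only occur in two-cell cages):

```python
from math import prod

def cage_ok(values, res):
    """Does some arithmetic operation on the cage's values give res?"""
    if len(values) == 2:
        a, b = values
        return (a + b == res or a * b == res or
                a * res == b or b * res == a or   # division, either order
                a - b == res or b - a == res)     # subtraction, either order
    # larger cages: only addition or multiplication
    return sum(values) == res or prod(values) == res
```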


In the exploreall we state the following constraints:

// all rows and columns must be unique
forall(i in R)
  m.post(alldifferent(all(j in R) x[i,j]));
forall(j in R)
  m.post(alldifferent(all(i in R) x[i,j]));
// the specific problem
forall(i in 1..num_p) {
  calc(m, problem[i].c, x, problem[i].res);
}

Now, the other two grid puzzles in Comet, Killer Sudoku and Kakuro, are represented in the same way, with just slight modifications of the calc function and added or altered constraints.

For more about these three grid puzzles see Wikipedia's entries:
* KenKen
* Killer Sudoku
* Kakuro


Different solvers in one model: Sudoku generator


Speaking of grid puzzles, here is a (proof-of-concept) generator of Sudoku puzzles: sudoku_generate.co. It uses a simple, brute-force approach:

* Generate a problem matrix with the proper number of unknowns
* Try to solve the problem. If there is exactly one solution, all is fine; otherwise generate another problem.

It also uses two limits to ensure that a single generate/solve round is not too costly (which may miss some good problems):
* timeout
* number of failures

The thing to note here is the way two different functions work together: first one constraint solver (model) generates a Sudoku matrix, then another constraint solver (model) actually solves the Sudoku problem.
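The generate-and-test loop can be sketched in Python. To keep it small, this sketch (all names mine) uses 4x4 Latin squares instead of full Sudoku, and a plain backtracking counter instead of a second constraint model:

```python
import random

N = 4  # 4x4 Latin squares stand in for full Sudoku in this sketch

def count_solutions(grid, limit=2):
    # Backtracking solver that stops counting after `limit` solutions.
    for r in range(N):
        for c in range(N):
            if grid[r][c] == 0:
                total = 0
                for v in range(1, N + 1):
                    if all(grid[r][j] != v for j in range(N)) and \
                       all(grid[i][c] != v for i in range(N)):
                        grid[r][c] = v
                        total += count_solutions(grid, limit - total)
                        grid[r][c] = 0
                        if total >= limit:
                            break
                return total
    return 1  # no blanks left: one complete solution

def generate_puzzle(num_unknowns, tries=1000):
    # Start from a full (cyclic) Latin square, blank random cells,
    # and keep only puzzles with exactly one solution.
    full = [[(i + j) % N + 1 for j in range(N)] for i in range(N)]
    cells = [(i, j) for i in range(N) for j in range(N)]
    for _ in range(tries):
        grid = [row[:] for row in full]
        for r, c in random.sample(cells, num_unknowns):
            grid[r][c] = 0
        if count_solutions(grid) == 1:
            return grid
    return None
```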


User defined constraints


Earlier I wrote about how to define functions in Comet, e.g. in Some Comet constraint programming (CP) models.

This week I wanted to state my own constraints in the same way as the built-in ones, e.g. as m.post(myConstraint(...));.

The following class implements the (global) constraint increasing, which can then be stated as m.post(increasing(x));. Please note that this is just a "modelling" constraint; it relies on the underlying solver's propagation.

class increasing extends UserConstraint<CP> {
  var<CP>{int}[] _x;
  int _n;

  increasing(var<CP>{int}[] x) : UserConstraint<CP>() {
    _x = x;
    _n = _x.getSize();
  }

  Outcome<CP> post(Consistency<CP> cl) {
    Solver<CP> cp = _x[1].getSolver();
    forall(i in 2.._n) {
      cp.post(_x[i-1] <= _x[i], cl);
    }
    return Suspend;
  }

  Outcome<CP> propagate() {
    return Suspend;
  }
}
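The post method simply decomposes increasing into the pairwise comparisons x[i-1] <= x[i]; here is the same condition as a plain Python check (illustration only):

```python
# `increasing` decomposed into pairwise comparisons, as in post() above.
def increasing_ok(xs):
    return all(xs[i - 1] <= xs[i] for i in range(1, len(xs)))

print(increasing_ok([1, 1, 2, 5]))  # True
print(increasing_ok([2, 1]))        # False
```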

The Comet model constraint_test.co implements the following constraints:

  • alldifferent_except_0
  • increasing
  • decreasing
  • toNum
  • correspondence

Translating OPL models

I've started to read Pascal Van Hentenryck's great book about OPL, "The OPL Optimization Programming Language" (ISBN: 9780262720304, unfortunately out of print), and it is quite clear that Comet "is not very unlike OPL". Which is not a big surprise, since Van Hentenryck is the designer of both languages.

There are differences between Comet and OPL, sometimes quite big, but often they are easy to circumvent. Maybe later findings will discuss more about this.

The following Comet models are translations of OPL models, some of these are from the examples from ILOG's OPL Models.

  • building_a_house.co: Scheduling building a house (OPL). Comet uses a different way of stating scheduling constraints than OPL.
  • covering_opl.co: Set covering problem. Also note the labeling (in the uses section), which solves the problem faster than with the default label, or labelFF.
  • einstein_opl.co: Einstein puzzle, translation of the OPL model from Daniel Selman's Einstein's Puzzle - Or, 'The Right Tool for the Job'. The presentation in the Comet model differs from the OPL model (which is shown in comments), but it is quite close.
  • fixed_charge.co: Fixed-charge problem. One big difference here: In Comet it is not allowed to have an array in a tuple, so the use data must be a separate matrix.
  • number_square.co: From page 32 of Van Hentenryck's OPL book: finding an eight-digit square which remains a square when 1 is concatenated in front of it. This simple problem translates very well to Comet.
  • p_median_opl.co: P-median problem. This also translates very well to Comet.
  • production2.co: Production planning problem. Same difference as in the fixed-charge model: arrays in a tuple are not allowed.
  • warehouse.co: Warehouse location problem.


Models


Here are the new Comet models. As usual more information about the problem/model is in the header of the model.

February 15, 2009

More Comet models, e.g. Nonogram, Steiner triplets, and different set covering problems

During the week some more Comet models have been published. Many of them are translations of my MiniZinc models.

Findings this week

Here are some findings for the previous week.

tryall again: Nonogram

One of the models I really wanted to make was Nonogram, since it would be interesting to compare with the MiniZinc model nonogram.mzn and the Gecode/R model nonogram.rb. The Comet model is nonogram.co.

The first version was heavily influenced by the MiniZinc model, and used tryall where the MiniZinc model used exists (see the previous post About 20 more constraint programming models in Comet for some more comments about this). But this was way too slow.

After some thought, I rewrote the tryall:s and the model got faster. For a simple Nonogram problem like The Hen (see picture below), it originally took about 30 seconds. After the changes, it now takes a fraction of a second (with far fewer propagations).

Nonogram The hen
. X X X . . . .
X X . X . . . .
. X X X . . X X
. . X X . . X X
. . X X X X X X
X . X X X X X .
X X X X X X . .
. . . . X . . .
. . . X X . . .

Still, compared to the Gecode/R (and - of course - Gecode) models, which use a "regular expression" representation, the Comet model is slow, but it is faster than the MiniZinc version. For example, the Lambda picture (see below) takes about 12 seconds with the Comet model, compared to 0.5 seconds with the Gecode/R model. Well, it is nice to have some more optimization to do (or, more probably, a complete remodelling)...

Nonogram: Lambda problem
. X X . . . . . . .
X . X X . . . . . .
X . . X . . . . . .
. . . X X . . . . .
. . . . X . . . . .
. . . X X X . . . .
. . . X X X . . . .
. . X X . X X . . .
. . X X . . X . . .
. X X . . . X X . X
. X X . . . . X X X
X X . . . . . X X .

To summarize, the main finding here was to replace two tryall statements with two decision variables, and change the rest accordingly. The model also includes two symmetry breaking constraints ("alldifferent except 0" and an ordering over a temporary array), but these make no big difference.

binary representation of set variables

As far as I know, Comet doesn't (yet) have set variables as decision variables. Instead, another representation must be used, and probably the most natural one is an array of binary decision variables, one for each possible value of the set.

The following models use this representation:
* set_partition.co: partitions the integers 1..n into two equal-sized sets where the sums and the sums of squares of the two sets are equal.
* steiner_triplets.co: the well known problem of Steiner triples.
* partition_function.co: partitions values according to some function.
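To make the binary 0/1 representation concrete, here is the set_partition problem brute-forced in Python over 0/1 vectors (a sketch with my own names, not the Comet model):

```python
from itertools import product

# A 0/1 vector of length n plays the role of a set variable over 1..n.
# Find a partition of 1..n into two equal-sized sets with equal sums
# and equal sums of squares (as in set_partition.co).
def set_partition(n):
    for x in product([0, 1], repeat=n):
        if sum(x) != n // 2:
            continue
        a = [i + 1 for i in range(n) if x[i]]
        b = [i + 1 for i in range(n) if not x[i]]
        if sum(a) == sum(b) and sum(i * i for i in a) == sum(i * i for i in b):
            return a, b
    return None
```

For n = 4 there is no such partition, while n = 8 has one.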

Models

These models are available on My Comet page. As usual, the problems are presented in the header of the file.

February 09, 2009

About 20 more constraint programming models in Comet

Since Some Comet constraint programming (CP) models, I've continued to translate some more of my (approx. 600) MiniZinc models to Comet. There are a few new models: game_theory_taha.co and fill_in_the_squares.co.

Findings this week

tryall

One finding this week is that tryall almost behaves like MiniZinc's exists, a very useful modelling tool. See for example the following models for the usage:
* Hidato puzzle
* Knights path
* SONET problem
* Word golf.

tryall behaves slightly differently depending on where it is placed: in the main explore/exploreall section it may not generate all the solutions, but when placed in the using section it generates all the solutions (and makes the model a bit slower).

Search strategies (labelling)

I've started to experiment with different search strategies (labelling), but for most of the models the two built-in strategies label and labelFF (first fail) are reasonably fast. This is an area I will look into more.

Other things

* strings/dictionaries: It is very nice to be able to use "real" strings in modelling. See for example Word golf, and Set covering deployment. The latter model also - just for fun - has a hash table for counting the number of occurrences of the selected countries.

* more than one model in the same program. MiniZinc only allows one model per file. As an experiment, game_theory_taha.co (two player zero sum game) first solves for one player, and then solves another model for the second player (which is not strictly necessary, since the two problems are duals). Also, in Word golf, there is a loop of models solving for different ladder lengths.

The models

As usual, the description of the problem is included in the model, and a reference to my other model(s) of the same problem.

bus_schedule.co: Bus scheduling (Taha)
calculs_d_enfer.co: Calculs d'enfer puzzle (from the NCL manual)
crew.co: Crew scheduling (Gecode)
discrete_tomography.co: Discrete Tomography (one color model)
fill_in_the_squares.co: Fill in the squares, recreational mathematics. (from ZDC system). Origin unknown.
furniture_moving.co: Optimizing furniture moving (Marriott and Stuckey)
game_theory_taha.co: 2 player zero sum game (Taha). This uses the MIP (mixed integer programming) module.
heterosquare_cp.co: Heterosquare problem
hidato.co: Hidato puzzle
knights_path.co: Knight path
least_square.co: Least square (From Lundgren, Rönnqvist, Värbrand: "Optimeringslära"), using the MIP module.
max_flow_winston1.co: Maximum flow problem (Winston)
pandigital_numbers.co: Pandigital numbers (any base up to about 12)
photo_problem.co: Photo problem
place_number_puzzle.co: Eight number placement puzzle
quasigroup_completion.co: Quasigroup completion problem.
set_covering.co: Set covering, placing of firestations (Winston)
set_covering_deployment.co: Set covering deployment
sonet_problem.co: SONET problem
survo_puzzle.co: Survo puzzle
traffic_lights.co: Traffic lights problem (CSPLib problem 16)
who_killed_agatha.co: Who killed Agatha? (The Dreadsbury Mansion Murder Mystery, an automated reasoning problem)
word_golf.co: Word golf (word chain, word ladder), word_golf_len3.co: Data file with 2044 3-letter words.
young_tableaux.co: Young tableaux and partitions

January 19, 2009

Nonogram in Gecode/R

I have now implemented a model for solving Nonograms in Gecode/R: nonogram.rb. See My Gecode/R page for more about Gecode/R.

Wikipedia explains Nonogram in this way:


Nonograms or Paint by Numbers are picture logic puzzles in which cells in a
grid have to be colored or left blank according to numbers given at the
side of the grid to reveal a hidden picture. In this puzzle type, the
numbers measure how many unbroken lines of filled-in squares there are
in any given row or column. For example, a clue of "4 8 3" would mean
there are sets of four, eight, and three filled squares, in that order,
with at least one blank square between successive groups.


A problem file may look like this (see the file nonogram_heart.txt). First are the rules for the rows, then the rules for the columns.

row_rules
2,2
4,4
1,3,1
2,1,2
1,1
2,2
2,2
3
1

col_rules
3
2,3
2,2
2,2
2,2
2,2
2,2
2,3
3

The solution is a heart:


 ##   ##
#### ####
#  ###  #
##  #  ##
 #     #
 ##   ##
  ## ##
   ###
    #

The program parses a nonogram problem file (or uses a default problem if no file is given) and translates the rules to "regular expressions", e.g. the rule 3,2 is translated to the "regular expression" [repeat(0), repeat(1,3,3), at_least_once(0), repeat(1,2,2), repeat(0)] (i.e. the regexp /^0*1{3}0+1{2}0*$/), which means:
* start with 0 or more 0's
* then exactly 3 1's
* then 1 or more 0's
* then exactly 2 1's
* and last 0 or more 0's
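The same translation idea can be sketched with ordinary regular expressions in Python (an illustration of the principle, not the Gecode/R operands):

```python
import re

# A nonogram rule like [3, 2] becomes: any leading 0s, 3 ones,
# at least one 0, 2 ones, any trailing 0s -- i.e. /^0*1{3}0+1{2}0*$/.
def rule_to_regex(rule):
    body = "0+".join("1{%d}" % k for k in rule)
    return re.compile("^0*" + body + "0*$")

print(bool(rule_to_regex([3, 2]).match("011100110")))  # True
```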


For more about regular expressions in Gecode/R, see Integer Enum operands


The code
The translation is done via this method (debug printouts have been omitted):

def parse_regex(a, debug = false)
  r = [repeat(0)]
  a.each_with_index{|e,i|
    r << repeat(1,e,e)
    if i < a.length-1 then
      r << at_least_once(0)
    end
  }
  r << repeat(0)
  return r
end

The constraint part is

def initialize(row_rules, col_rules)
  rows = row_rules.length
  cols = col_rules.length
  x_is_an int_var_matrix(rows, cols, 0..1)  # decision matrix
  rows.times{|i| # rows
    x.row(i).must.match(parse_regex(row_rules[i], debug))
  }
  cols.times{|j| # columns
    x.column(j).must.match parse_regex(col_rules[j], debug)
  }
  branch_on x, :variable => :none, :value => :max
end

Files
Except for the P200 problem, all problems listed below were solved in about a second or two. For the P200 problem the solution is printed in about 3 seconds, but it then takes nearly a minute to verify that there are no other solutions.

Most problems are from Gecode examples:
* nonogram.rb: The model
* nonogram_simple.txt: Very simple problem
* nonogram_bear.txt: Bear
* nonogram_crocodile.txt: Crocodile
* nonogram_difficult.txt: Difficult
* nonogram_dragonfly.txt: Dragonfly
* nonogram_heart.txt: Heart
* nonogram_house.txt: House
* nonogram_nonunique.txt: Non unique problem (43 solutions)
* nonogram_p200.txt: P200
* nonogram_pinwheel.txt: Pinwheel
* nonogram_soccer_player.txt: Soccer player
* nonogram_unknown.txt: "Unknown"

Also see my MiniZinc model nonogram.mzn, which is way more complicated and often much slower.

January 18, 2009

Some other Gecode/R models, mostly recreational mathematics

Here are some new Gecode/R models. They are mostly in recreational mathematics, since I am still learning the limits of Gecode/R and want to use simple problems.


SEND + MOST = MONEY


This is a simple alphametic puzzle: SEND + MOST = MONEY and the model is based on the example in the Gecode/R distribution (send_most_money.rb). The difference with its cousin SEND + MORE = MONEY is that there are many solutions, and we want to maximize the value of MONEY (10876).

The model is send_most_money2.rb.

Now, there are actually 4 different solutions for MONEY = 10876 which we want to find.


s e n d m o s t y
3 7 8 2 1 0 9 4 6
5 7 8 2 1 0 9 4 6
3 7 8 4 1 0 9 2 6
5 7 8 4 1 0 9 2 6

In order to do that, the model first solves the maximization problem and assigns the value (10876) to max_money; the second step then finds all 4 solutions. These two steps use the same code, with the difference that the second step activates the following constraint:

if (max_money > 0) then
  money.must == max_money
end

Nothing fancy, but it is quite easy to do in Ruby and Gecode/R.


Steiner triplets


This is another standard problem in the constraint programming (and recreational mathematics) community: find a set of triplets of numbers from 1 to n such that any two (different) triplets have at most one element in common.

For more about this, see Mathworld: SteinerTripleSystem, and Wikipedia Steiner_system.

The Gecode/R model is steiner.rb.

The problem can be simply stated as:

nb = n * (n-1) / 6 # size of the array
sets_is_an set_var_array(nb, [], 1..n)
sets.must.at_most_share_one_element(:size => 3)
branch_on sets, :variable => :smallest_unknown, :value => :min

This, however, is very slow (and I didn't care to wait for a solution). I tried some other branching strategies but found none that made any real difference.

When the following constraint was added, it really sped things up. It is in effect the same as the constraint sets.must.at_most_share_one_element(:size => 3) above:

nb.times{|i|
  sets[i].size.must == 3
  (i+1..nb-1).each{|j|
    (sets[i].intersection(sets[j])).size.must <= 1
  }
}
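The intersection constraint is easy to sanity-check outside the solver; here is a small Python checker (my own helper) applied to the n = 7 solution printed below, which is the Fano plane:

```python
from itertools import combinations

# Check the defining property: every triple has size 3 and any two
# triples share at most one element.
def steiner_ok(triples):
    return (all(len(t) == 3 for t in triples) and
            all(len(a & b) <= 1 for a, b in combinations(triples, 2)))

fano = [{1,2,3}, {1,4,5}, {1,6,7}, {2,4,6}, {2,5,7}, {3,4,7}, {3,5,6}]
print(steiner_ok(fano))  # True
```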

The first 10 solutions for n = 7 took about 1 second. The first solution is:

Solution #1
{1, 2, 3}
{1, 4, 5}
{1, 6, 7}
{2, 4, 6}
{2, 5, 7}
{3, 4, 7}
{3, 5, 6}
memory: 25688
propagations: 302
failures: 0
clones: 2
commits: 16

For n = 9, the first 10 solutions took slightly longer: 1.3 seconds.


Devil's Words


Gecode/R model: devils_word.rb.

Devil's word is a "coincidence game" where the ASCII codes of a name, often a famous person's, are summed with + or - signs to reach 666, and some point is made about that fact (which of course is nonsense).

There are 189 different ways my own name HakanKjellerstrand (where the umlaut "å" in my first name is replaced with "a") can be "devilized" to 666. With output it took about 2 seconds to generate the solutions, without output it took 0.5 seconds.

The first solution is:

+72 +97 +107 +97 +110 -75 +106 +101 +108 -108 -101 +114 +115 +116 +114 -97 -110 -100
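The underlying search is just a choice of + or - sign for each ASCII code; here is a brute-force sketch in Python (an illustration, not the Gecode/R model):

```python
from itertools import product

# Count the sign assignments of the ASCII codes of `name` whose
# signed sum equals the target (666 by default).
def devils_word_count(name, target=666):
    codes = [ord(c) for c in name]
    return sum(1 for signs in product([1, -1], repeat=len(codes))
               if sum(s * c for s, c in zip(signs, codes)) == target)

print(devils_word_count("abc"))  # 0 (the codes cannot reach 666)
```

For "HakanKjellerstrand" this loops over 2^18 sign vectors.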


Also, see
* MiniZinc model: devils_word.mzn
* my CGI program: Devil's words.

And see Skeptic's law of truly large numbers (coincidence) for more about coincidences. The CGI program mentioned above was presented in the Swedish blog post Statistisk data snooping - att leta efter sammanträffanden ("Statistical data snooping - looking for coincidences"), which contains some more references to this kind of coincidence.


Pandigital Numbers, "any" base


Pandigital numbers are a recreational mathematics construction. From MathWorld Pandigital Number

A number is said to be pandigital if it contains each of the digits
from 0 to 9 (and whose leading digit must be nonzero). However,
"zeroless" pandigital quantities contain the digits 1 through 9.
Sometimes exclusivity is also required so that each digit is
restricted to appear exactly once.

The Gecode/R model pandigital_numbers.rb extends this to handle "all" bases. Or rather bases from 2 to 10, since larger bases cannot be handled by the Gecode solver.

For base 10 using the digits 1..9 there are 9 solutions:

4 * 1738 = 6952 (base 10)
4 * 1963 = 7852 (base 10)
18 * 297 = 5346 (base 10)
12 * 483 = 5796 (base 10)
28 * 157 = 4396 (base 10)
27 * 198 = 5346 (base 10)
39 * 186 = 7254 (base 10)
42 * 138 = 5796 (base 10)
48 * 159 = 7632 (base 10)

For base 10, using digits 0..9, there are 22 solutions.

Here is the number of solutions for bases from 2 to 10, with start digit either 0 or 1 (there are no solutions for bases 2..4):
* base 2, start 0: 0
* base 2, start 1: 0
* base 3, start 0: 0
* base 3, start 1: 0
* base 4, start 0: 0
* base 4, start 1: 0
* base 5, start 0: 0
* base 5, start 1: 1
* base 6, start 0: 0
* base 6, start 1: 1
* base 7, start 0: 2
* base 7, start 1: 2
* base 8, start 0: 4
* base 8, start 1: 4
* base 9, start 0: 10
* base 9, start 1: 6
* base 10, start 0: 22
* base 10, start 1: 9
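The pandigital check itself is straightforward in any base; here is a Python sketch (my own helper, not the model's code):

```python
# Check whether a, b and a*b together use each digit start..base-1
# exactly once (in the given base).
def is_pandigital_product(a, b, base=10, start=1):
    digits = []
    for x in (a, b, a * b):
        while x:
            digits.append(x % base)
            x //= base
    return sorted(digits) == list(range(start, base))

print(is_pandigital_product(4, 1738))  # True: 4 * 1738 = 6952
```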

See also the MiniZinc model pandigital_numbers.mzn with a slightly different approach using the very handy MiniZinc construct exists instead of looping through the model for different len1 and len2.


SEND + MORE = MONEY (any base)


And talking of "all base" problems, send_more_money_any_base.rb is a model for solving SEND+MORE=MONEY in any base from 10 and onward.


Pattern
The number of solutions from base 10 and onward are the triangle numbers, i.e. 1, 3, 6, 10, 15, 21, ...:

* Base 10: 1 solution
* Base 11: 3 solutions
* Base 12: 6
* Base 13: 10
* Base 14: 15
* Base 15: 21
* etc

For more about triangle numbers, see Wikipedia Triangular numbers.
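If the pattern holds, the number of solutions for base b (b >= 10) is the (b-9)-th triangular number; here is a small sketch of this conjectured formula:

```python
# Conjectured from the pattern above: base b (b >= 10) has
# T(b-9) = (b-9)(b-8)/2 solutions, the (b-9)-th triangular number.
def conjectured_solutions(base):
    k = base - 9
    return k * (k + 1) // 2

print([conjectured_solutions(b) for b in range(10, 16)])
# [1, 3, 6, 10, 15, 21]
```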

There seems to be a pattern in the solutions for a given base b:

 
           S  E  N  D  M  O  R  Y
           ----------------------
 Base 10:  9  5  6  7  1  0  8  2
 Base 11: 10  7  8  6  1  0  9  2
          10  6  7  8  1  0  9  3
          10  5  6  8  1  0  9  2
 Base 12:
          11  8  9  7  1  0 10  3
          11  8  9  6  1  0 10  2
          11  7  8  9  1  0 10  4
          11  6  7  9  1  0 10  3
          11  6  7  8  1  0 10  2
          11  5  6  9  1  0 10  2
 ...
 Base 23:
          22 19 20 18  1  0 21 14
          22 19 20 17  1  0 21 13
 ...

Some patterns:

S: always base-1 (e.g. 9 for base 10)
M: always 1 (in any base)
O: always 0 (in any base)
R: always base-2 (e.g. 8 for base 10)
E, N, D: from base-3 down to 5 (e.g. {5,6,7,8,9} for base 10)
Y: between 2 and ??? (e.g. {2,3,4} for base 12)


I haven't read any mathematical analysis of these patterns. There is an article in Math Horizons, April 2006: "SENDing MORE MONEY in any base" by Christopher Kribs-Zaleta, with the summary: Dudeney's classic "Send More Money" cryptarithmetic puzzle has a unique solution in base ten. This article explores solutions in bases other than ten. I haven't read this article, though.


Also, see my MiniZinc model send_more_money_any_base.mzn.