November 01, 2014

Decision Management Community November 2014 Challenge: Who killed Agatha?

This blog post was inspired by Decision Management Community November 2014 Challenge: Challenge Nov-2014 Decision model 'Who killed Agatha?'. The original text - and the one linked to from the Challenge page - is here.

From DM Community Challenge Nov-2014:
Someone in Dreadsbury Mansion killed Aunt Agatha.
Agatha, the butler, and Charles live in Dreadsbury Mansion, and
are the only ones to live there. A killer always hates, and is
no richer than his victim. Charles hates noone that Agatha hates.
Agatha hates everybody except the butler. The butler hates everyone
not richer than Aunt Agatha. The butler hates everyone whom Agatha hates.
Noone hates everyone. Who killed Agatha?
A model for this problem is actually one of the first I implement whenever I learn a new Constraint Programming (CP) system, since my approach uses some features that are important to test (they are explained in some detail below):
  • reification: "reasoning about constraints"
  • matrix element: how to index a matrix with decision variables
The same general approach is implemented in most of the CP systems that I've tested so far; see below for the full list with links.

All the encodings take the same approach: define two binary 3x3 matrices of decision variables:
  • a "hates" matrix, i.e. who hates whom
  • a "richer than" matrix: who is richer than whom
and a Killer decision variable with domain 1..3 (Agatha=1, Butler=2, and Charles=3). In some encodings (such as the one below), Victim is also a decision variable, but since we know from the start that Agatha is the victim, this decision variable can be skipped.

The constraints are then stated over these decision variables: the two matrices plus the Killer decision variable.

First I describe the MiniZinc model in detail, and after that follows a discussion of the solution and why the model returns 8 solutions (all with the same killer: Agatha).

The MiniZinc model

My original MiniZinc model is here who_killed_agatha.mzn.

The encoding shown below is slightly altered for simpler presentation.

Initialization of constants and domains

The following defines the dimension of the matrices (3x3) and also the domain - the possible values - of the killer and victim variables (1..3).
int: n = 3;
set of int: r = 1..3;
Here we define the constants agatha, butler, and charles. The values are fixed to the integers 1..3 since the solvers used handle finite domains in terms of integers. (There are exceptions to this. In short: finite domains over integers are the most common in CP, but some systems also support set variables and/or float decision variables.)
% the people involved
int: agatha  = 1;
int: butler  = 2;
int: charles = 3;

Decision variables

The two decision variables the_killer and the_victim both have the domain 1..3, i.e. the set {agatha, butler, charles}.
var r: the_killer;
var r: the_victim;
The 3x3 entries in the two matrices hates and richer are binary decision variables: 1 represents true, 0 represents false.
array[r,r] of var 0..1: hates;
array[r,r] of var 0..1: richer;
The following statement just states that the CP solver should use its default search method.
solve satisfy;
Aside: There is a range of different search heuristics available for most CP solvers that can speed up more complex CSPs significantly, e.g. for an array x of decision variables one could state:
   solve :: int_search(x, first_fail, indomain_split, complete);
but for this simple problem we just settle for the default.


Next is the constraint section, where all the requirements (constraints) of the problem are specified. The constraints appear in the same order as in the problem description. In contrast to the MiniZinc version on my MiniZinc site (the link above), I have here separated each constraint into its own section to emphasize the specific constraint.

Constraint: A killer always hates, and is no richer than his victim.

Here we see an example of a "matrix element" constraint, i.e. a decision variable (the_killer) is used to index a matrix of decision variables, e.g.
     hates[the_killer, the_victim] = 1
[Aside: In MiniZinc this can be written in this syntactically pleasing ("natural") way, though most of the other CP systems cannot handle this syntax, so an alternative syntax must be used. However, most CP systems have a syntax for the one-dimensional element constraint, which is then stated as
    element(Y, X, Z)
which corresponds to
    X[Y] = Z,
where X as well as Y and Z can be decision variables. The element constraint is central to CP and is one of the features that separates it from traditional LP/MIP modeling systems.] The first constraint
   hates[the_killer, the_victim] = 1
simply means that the killer hates the victim, and the second that the killer is not richer than the victim.
   % A killer always hates, and is no richer than his victim. 
   hates[the_killer, the_victim] = 1 /\
   richer[the_killer, the_victim] = 0
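To make the decomposition in the aside concrete, here is a small sketch in plain Python (not solver code; the helper name matrix_element is made up) of the index arithmetic that reduces a 2-D matrix element to a 1-D element lookup over a row-wise flattened matrix:

```python
def matrix_element(flat, n, i, j):
    """matrix[i, j] over a row-wise flattened n x n matrix,
    using the same 1-based indices as the MiniZinc model:
    matrix[i, j] corresponds to flat[(i - 1) * n + (j - 1)]."""
    return flat[(i - 1) * n + (j - 1)]

# One of the model's hates matrices (who hates whom), flattened row-wise:
hates_flat = [1, 0, 1,   # Agatha  hates Agatha and Charles
              1, 0, 1,   # Butler  hates Agatha and Charles
              0, 1, 0]   # Charles hates Butler (in this particular solution)
agatha, butler, charles = 1, 2, 3

print(matrix_element(hates_flat, 3, agatha, agatha))  # -> 1 (Agatha hates herself)
```

In a real CP system the index arguments would of course be decision variables and the lookup a constraint; the arithmetic above only shows the flattening scheme.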

The concept of richer

The only relations we are using here are hates and richer, and we must take care of the meaning of these concepts.

Hates: Regarding the concept of hatred, there is no logic restricting how it can be used: A and B can both hate each other, or one of them can hate the other, or neither hates the other. Note that A can also hate him/herself.

Richer: The concept of richer is a completely different matter, however. There is a logic involved in this relation:
  • if A is richer than B, then B cannot be richer than A
  • A cannot be richer than him/herself
Realizing that the richer relation is special is important for this model. Without it, there would be many more different solutions (256 instead of 8), though all these 256 solutions point to the same killer: Agatha. (See below for an analysis of the 8 different solutions.)

As mentioned above, we don't need to - and cannot - do a similar analysis on (the semantic meaning of) the hate relation.

See below A note on the richer relation for a comment on the assumption that either A is richer than B or B is richer than A.
   % define the concept of richer

   % no one is richer than him-/herself
   forall(i in r) (
      richer[i,i] = 0
   )
   /\
   % if i is richer than j then j is not richer than i
   forall(i, j in r where i != j) (
      richer[i,j] = 1 <-> richer[j,i] = 0
   )

Constraint: Charles hates noone that Agatha hates.

Here again, reification is handy. In pseudo code:
  FOR EACH person I in the house:
     IF Agatha hates I THEN Charles doesn't hate I
  % Charles hates noone that Agatha hates. 
   forall(i in r) (
      hates[charles, i] = 0 <- hates[agatha, i] = 1
   )

Constraint: Agatha hates everybody except the butler

Here we simply state three hate facts.

It is important not to forget that Agatha can hate herself. Missing this fact in the model would make it yield either Agatha or Charles as the killer.
   % Agatha hates everybody except the butler. 
   hates[agatha, charles] = 1  /\
   hates[agatha, agatha] = 1 /\
   hates[agatha, butler] = 0

Constraint: The butler hates everyone not richer than Aunt Agatha.

Same as above, we use reification to handle this:
   FOR EACH person I in the house
     IF I is not richer than Agatha THEN Butler hates I
which is implemented as
   % The butler hates everyone not richer than Aunt Agatha. 
   forall(i in r) (
     hates[butler, i] = 1 <- richer[i, agatha] = 0
   )

Constraint: The butler hates everyone whom Agatha hates.

Same reasoning with reifications as above.
   % The butler hates everyone whom Agatha hates. 
   forall(i in r) (
      hates[butler, i] = 1 <- hates[agatha, i] = 1
   )

Constraint: Noone hates everyone.

Here we count - for each person - the number of 1's (i.e. true) in that person's row of the hates matrix, and ensure that the sum is at most 2 (i.e. no one hates everyone).
   % Noone hates everyone. 
   forall(i in r) (
     sum(j in r) (hates[i,j]) <= 2
   )

Who killed Agatha?

As mentioned above, this constraint is not really needed, since we could have hard coded the_victim to agatha without changing anything.
   % Who killed Agatha? 
   the_victim = agatha
To summarize: this model uses a couple of important concepts in Constraint Programming:
  • decision variables with specific (finite) domains
  • reification: implication and equivalence between constraints
  • the element constraint (here in terms of the more complex variant: matrix element)

Solution and analysis

There are in total 8 different solutions to this MiniZinc model, all stating that Agatha is the killer. (Being able to get all solutions to a combinatorial problem is an excellent feature of Constraint Programming.)

The reason this model gives more than one solution is that it is - in my interpretation of the problem - under-constrained: there is not enough information about certain relations in the hates and richer matrices to yield a unique solution.

Below are the values of the hates and richer matrices after all constraints have been propagated, but before search (i.e. before the search tree is explored and the remaining decision variables are assigned), as well as the killer variable.

The entries with "0..1" are the decision variables that are not constrained (decided) enough before search. (Note: Not all CP solvers can show the inferred variable domains before search. The table below is from my Picat model, slightly altered: who_killed_agatha.pi)
    killer = 1

    hates:
    Agatha : Agatha : 1     Butler : 0        Charles: 1
    Butler : Agatha : 1     Butler : 0        Charles: 1
    Charles: Agatha : 0     Butler : 0..1     Charles: 0

    richer:
    Agatha : Agatha : 0     Butler : 0        Charles: 0..1
    Butler : Agatha : 1     Butler : 0        Charles: 0..1
    Charles: Agatha : 0..1  Butler : 0..1     Charles: 0
These undecided variables concern whether:
  • Charles hates Butler (or not)
  • Agatha is richer than Charles (or not)
  • Butler is richer than Charles (or not)
  • Charles is richer than Agatha (or not)
  • Charles is richer than Butler (or not)
We see that:
  • The killer has been assigned to 1 (= Agatha).
  • Many (namely 5) of the variables involving Charles are undecided before search, i.e. the constraints alone cannot determine whether they are true or not. This is perhaps not too surprising since most constraints in the model - and in the problem statement - involve only Agatha and the Butler.
Also, two pairs of these (under-constrained) Charles variables cannot both be true at the same time in the same solution. In each solution, one variable of each pair must be true and the other false:
  • "Agatha is richer than Charles" and "Charles is richer than Agatha"
  • "Butler is richer than Charles" and "Charles is richer than Butler"
This setting is basically the same as having 5 binary variables, V1..V5, where V1 and V2 are xored (i.e. one of them must be true and the other false), and V3 and V4 are likewise xored.

Thus there are 2 · 2 · 2 = 8 different solutions.

This concludes the analysis.

A note on the "richer" relation

Nothing rules out that A can be as rich as B (i.e. that neither is richer than the other). However, in this simple model we assume a stricter variant: that either A is richer than B, or B is richer than A. If we change the equivalence
   richer[i,j] = 1 <-> richer[j,i] = 0
(which means:
   IF A is richer than B THEN B is not richer than A, and
   IF B is not richer than A THEN A is richer than B)
to an implication
    richer[i,j]  = 1 -> richer[j,i] = 0
(IF A is richer than B THEN B is not richer than A)
then it doesn't change the principal solution, but we would get 18 solutions instead of 8, all pointing to the same killer: Agatha.
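Both solution counts can be double-checked by brute force, since the full search space is only 2^18 matrix assignments times 3 killer candidates. The following Python sketch (an illustration only, independent of the MiniZinc model) enumerates all assignments under both variants of the richer relation:

```python
from itertools import product

AGATHA, BUTLER, CHARLES = 0, 1, 2
PEOPLE = (AGATHA, BUTLER, CHARLES)

def solutions(richer_exactly_one=True):
    """Enumerate all (hates, richer, killer) assignments satisfying the
    puzzle. With richer_exactly_one=True the richer relation uses the
    equivalence richer[i][j]=1 <-> richer[j][i]=0; with False only the
    implication richer[i][j]=1 -> richer[j][i]=0."""
    sols = []
    for bits in product((0, 1), repeat=18):
        hates = [bits[0:3], bits[3:6], bits[6:9]]
        richer = [bits[9:12], bits[12:15], bits[15:18]]
        # no one is richer than him-/herself
        if any(richer[i][i] for i in PEOPLE):
            continue
        if richer_exactly_one:
            ok = all((richer[i][j] == 1) == (richer[j][i] == 0)
                     for i in PEOPLE for j in PEOPLE if i != j)
        else:
            ok = all(not (richer[i][j] and richer[j][i])
                     for i in PEOPLE for j in PEOPLE if i != j)
        if not ok:
            continue
        # Charles hates noone that Agatha hates
        if any(hates[AGATHA][i] and hates[CHARLES][i] for i in PEOPLE):
            continue
        # Agatha hates everybody except the butler
        if not (hates[AGATHA][AGATHA] and hates[AGATHA][CHARLES]
                and not hates[AGATHA][BUTLER]):
            continue
        # the butler hates everyone not richer than Agatha
        if any(not richer[i][AGATHA] and not hates[BUTLER][i] for i in PEOPLE):
            continue
        # the butler hates everyone whom Agatha hates
        if any(hates[AGATHA][i] and not hates[BUTLER][i] for i in PEOPLE):
            continue
        # noone hates everyone
        if any(sum(hates[i]) == 3 for i in PEOPLE):
            continue
        # a killer always hates, and is no richer than, his victim (Agatha)
        for killer in PEOPLE:
            if hates[killer][AGATHA] and not richer[killer][AGATHA]:
                sols.append(killer)
    return sols

sols = solutions()
print(len(sols), set(sols))                      # -> 8 {0}: 8 solutions, all with Agatha as killer
print(len(solutions(richer_exactly_one=False)))  # -> 18
```

This confirms the counts: 8 solutions with the equivalence, 18 with the implication only, and in every solution the killer is Agatha (= 0 in this encoding).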


Here is a list of all the encodings of the Agatha murder problem that I have implemented so far. Whenever I learn a new CP system, the Agatha problem will probably be one of the first I test.

April 11, 2013

The MiniZinc Challenge 2013 and other MiniZinc news

Here are some news from the G12 MiniZinc world.

MiniZinc main page:

The MiniZinc main page has changed to

MiniZinc challenge

From MiniZinc Challenge 2013:
The aim of the MiniZinc Challenge is to start to compare various constraint solving technology on the same problems sets. The focus is on finite domain propagation solvers. An auxiliary aim is to build up a library of interesting problem models, which can be used to compare solvers and solving technologies.

Entrants to the challenge provide a FlatZinc solver and global constraint definitions specialized for their solver. Each solver is run on 100 MiniZinc model instances. We run the translator mzn2fzn on the MiniZinc model and instance using the provided global constraint definitions to create a FlatZinc file. The FlatZinc file is input to the provided solver. Points are awarded for solving problems, speed of solution, and goodness of solutions (for optimization problems).

Registration opens: Wed, 1 May 2013.
Problem submission deadline: Fri, 14 June 2013.
Initial submission round begins: Mon, 1 July 2013.
Initial submission round ends: Fri, 19 July 2013.
Final submissions: Fri, 2 August 2013.
Announcement of results at CP2013: 17 - 20 September 2013.

For details of the competition see:

Note that the scoring system has changed slightly this year, so that solvers that obtain indistinguishable results in quality of solution split the points inversely proportionally to the time taken.

To register for the challenge please email

Here is the Call for Problem Submission:
The MiniZinc Challenge is an annual solver competition in the Constraint Programming (CP) community held before the International Conference on Principles and Practice of Constraint Programming. The MiniZinc Challenge 2013 is seeking interesting problem sets on which various constraint solving technologies should be compared on this year. Everyone is allowed to submit problems regardless of whether they are an entrant in the challenge.

Important dates and deadlines:

Problem submission open: now
Problem submission deadline: Fri, 14 June 2013

Problem submission

Send an email with the subject line “[MZNC13] benchmark” to mzn-challenge 'at'

There are no restrictions on the kind of problems, but ideally they should be of an interesting nature, such as practice-related problems, puzzles, etc. Problem submissions with real-world instances are warmly welcome. Models for the 2013 challenge can only use integer and Boolean variables.

The problem submitter provides a MiniZinc model of the problem and 20 instances ranging from easy-to-solve to hard-to-solve for an “ordinary” CP system. It is strongly encouraged to make use of the global constraint definitions provided in the MiniZinc 1.6 distribution. Please, follow the links below for submission instructions and requirements.
MiniZinc Challenge 2013
Also see: MiniZinc Challenge Medals 2008-2012

MiniZinc forum

Also just released are the MiniZinc forums, with three sections:
  • Beginners: All beginner-level questions about MiniZinc.
  • Users: General discussion about MiniZinc. For beginners' questions please use the dedicated Beginners' Forum.
  • Developers: Discussions about developing the MiniZinc system and solvers that interface with MiniZinc.

October 14, 2012

Results MiniZinc Challenge 2012

The results from MiniZinc Challenge 2012 were presented this Friday (the last day of CP2012, the 18th International Conference on Principles and Practice of Constraint Programming).

The official contestants (solvers) this year were: Of these solvers, the only one I haven't tested (yet) is izplus.

Official results

The official results are:
  • Fixed search:
    • Gold medal: Gecode
    • Silver medal: Jacop
    • Bronze medal: OR-Tools
  • Free search:
    • Gold medal: Gecode
    • Silver medal: Fzn2smt
    • Bronze medal: izplus
  • Parallel search:
    • Gold medal: Gecode
    • Silver medal: Fzn2smt
    • Bronze medal: izplus
Congratulations to all!

Result including all solvers

It can be interesting to see the results for all solvers in the Challenge, including G12's internal solvers such as Chuffed and CPX (which "are not eligible for prizes, but do modify the scoring results"). For a short description of these non-eligible solvers, see the result page.

I took the results from the "Selection" section of the result page and for each category selected "Select all problems" and "Compute results", and then sorted on the points (more is better). The result is quite interesting, since it shows that Chuffed and G12 CPX got the most points in all three categories, and G12 Lazy FD also placed well.

Note: The mixing of the categories is explained by the management: entries in the FD search category were automatically included in the free search category, while entries in the free search category (including promoted FD entries) were automatically included in the parallel search category. The official winners (gold, silver, bronze) have been emboldened.
  • Fixed search ("FD category solvers")

  • Free category ("Free category solvers")

  • Par category ("Par category solvers")

Note that all the problem instances are available for download from the Result page (mznc12-problems.tar.gz).

Also see: MiniZinc Challenge Medals 2008-2012

May 16, 2012

Manufacturing Cell Design Problem (MCDP): My first Constraint Programming related academic papers

Some days ago I was told that the journal paper I have co-authored about the Manufacturing Cell Design Problem (MCDP, see below) has been accepted for publication. Also, some weeks ago a short conference paper about the same topic was accepted. My part in both papers was that I created a couple of MiniZinc models (first the standard formulation and then some others using different approaches) and ran a large number of benchmarks on a couple of FlatZinc solvers. This is really fun, since these papers are my first academic papers related to Constraint Programming.

Since the papers are not yet published/presented, I cannot reveal much more than the following. After publication, I will blog more.

The journal paper

The journal paper is

Ricardo Soto, Hakan Kjellerstrand, Orlando Durán, Broderick Crawford, Eric Monfroy, Fernando Paredes: Cell formation in group technology using constraint programming and Boolean satisfiability

Published in the journal Expert Systems with Applications (ScienceDirect page, "In Press, Corrected Proof")

Cell formation consists in organizing a plant as a set of cells, each of them containing machines that process similar types or families of parts. The idea is to minimize the part flow among cells in order to reduce costs and increase productivity. The literature presents different approaches devoted to solve this problem, which are mainly based on mathematical programming and on evolutionary computing. Mathematical programming can guarantee a global optimal solution, however at a higher computational cost than an evolutionary algorithm, which can assure a good enough optimum in a fixed amount of time. In this paper, we model and solve this problem by using state-of-the-art constraint programming (CP) techniques and Boolean satisfiability (SAT) technology. We present different experimental results that demonstrate the efficiency of the proposed optimization models. Indeed, CP and SAT implementations are able to reach the global optima in all tested instances and in competitive runtime.

Keywords: Manufacturing cells; Machine grouping; Constraint programming; Boolean satisfiability

Conference paper

The short conference paper is the following (with almost the same authors as the journal paper):

Ricardo Soto, Hakan Kjellerstrand, Juan Gutiérrez, Alexis López, Broderick Crawford, and Eric Monfroy: Solving Manufacturing Cell Design Problems using Constraint Programming

for the conference IEA/AIE 2012 (International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, 2012, Dalian, China)

A manufacturing cell design problem (MCDP) consists in creating an optimal production plant layout. The production plant is composed of cells which in turn are composed of machines that process part families of products. The goal is to minimize part flow among cells in order to reduce production costs and increase productivity. In this paper, we focus on modeling and solving the MCDP by using state-of-the-art constraint programming (CP) techniques. We implement different optimization models and we solve it by using two solving engines. Our preliminary results demonstrate the efficiency of the proposed implementations, indeed the global optima is reached in all instances and in competitive runtime.

More about MCDP

The Manufacturing Cell Design Problem is a kind of clustering problem where the objective is to cluster machines that belong to the same part families as well as possible. For a little more about the problem, see the following (PowerPoint) presentation by Mingang Fu, Lin Ben, and Kuowei Chen: Manufacturing Cell Design - Problem Formulation.

April 20, 2012

G12 MiniZinc version 1.5.1 released

G12 MiniZinc version 1.5.1 has been released. It can be downloaded here.

From the NEWS:
G12 MiniZinc Distribution 1.5.1

* We have added the following variants of the count/3 constraint to the MiniZinc library:

count_eq (synonym for count)

Bugs fixed in this release:

* mzn2fzn now correctly flattens the built-in functions xorall/1 and iffall/1 when they appear in reified contexts with at least two variables and at least one literal "true" in their array argument. [Bug #340]

* A bug in mzn2fzn that caused models that were satisfiable under the relational semantics to be incorrectly flattened into unsatisfiable FlatZinc instances has been fixed. [Bug #337]

* A bug that caused mzn2fzn to infer incorrect bounds for arrays of set variables has been fixed. [Bug #341]

April 04, 2012

MiniZinc Challenge 2012 is now underway

The MiniZinc Challenge 2012 is now underway:
The Challenge

The aim of the challenge is to start to compare various constraint solving technology on the same problems sets. The focus is on finite domain propagation solvers. An auxiliary aim is to build up a library of interesting problem models, which can be used to compare solvers and solving technologies.

Entrants to the challenge provide a FlatZinc solver and global constraint definitions specialized for their solver. Each solver is run on 100 MiniZinc model instances. We run the translator mzn2fzn on the MiniZinc model and instance using the provided global constraint definitions to create a FlatZinc file. The FlatZinc file is input to the provided solver. Points are awarded for solving problems, speed of solution, and goodness of solutions (for optimization problems).


  • Registration opens: 1 May 2012.
  • Problem submission deadline: 12 July 2012.
  • Initial submission round begins: 1 August 2012.
  • Initial submission round ends: 20 August 2012.
  • Final submissions: 1 September 2012.
  • Announcement of results at CP2012: 8-12 October 2012.
For more about the result of the last year MiniZinc Challenge: MiniZinc Challenge 2011 Results.

March 26, 2012

My talk about Constraint Programming at Google (Paris)

The presentation can be downloaded here: google_talk_20120323.ppt.

In late January this year, I was invited by Laurent Perron - head of the or-tools group - to talk about my view on Constraint Programming and my experience with the or-tools system (I have done quite a few models using the Python, Java, and C# interfaces).

The talk was this Friday (March 23) at Google's Paris office. It was a lovely day, but unfortunately I got a common cold the day before, so it was a little hard to enjoy all the things Paris can offer.

Friday started with Laurent and me just talking about CP in general and or-tools in particular, and it was really fun and interesting. Later on we were joined by two other guests: Charles Proud'Homme and Nicolas Beldiceanu, both from Ecole des Mines de Nantes, and it was great talking with them as well and, above all, listening when they discussed various CP things.

The Google office in Paris was very impressive: very high ceilings, and it seemed to be built so that one could get lost easily (though none of us guests got completely lost).

At 14:00 I started the talk in front of an audience of about 20 engineers at the Google office (plus the two guests from Ecole des Mines de Nantes), and I think it went quite well considering the cold and all. It was recorded for internal use at Google. I don't know how public it will be, but I will blog about it when it has been edited etc. After the 50-minute talk there was a little Q&A session.

Thanks Laurent for the invitation and a memorable day.

Little more about the talk

The talk was aimed at programmers who don't know very much about Constraint Programming, and I especially wanted to convey my own fascination with CP by using this agenda:
  • Introducing the principles of CP (very simplified)
  • Showing the declarativeness of CP via some code in the high-level G12 MiniZinc and then in or-tools Python, C#, and sometimes Java.
  • The basic principle of propagation of constraints and domains is shown via a very simple 4x4 Sudoku problem.
  • After that, some of the - IMHO - most fascinating concepts in CP modeling were presented:
    • Global constraints
    • Element constraint
    • Reification
    • Bi-directedness
      Note: After the talk Nicolas Beldiceanu commented that this is more known as "reversibility" in the Prolog world.
    • 1/N/All solutions
    • Symmetry breaking
Here is the talk: google_talk_20120323.ppt.

I would like to thank the following for various degrees of comments, suggestions, and encouragement regarding the presentation:
  • Magnus Bodin
  • Carl Mäsak
  • Mikael Lagerkvist
  • Christian Schulte
  • Laurent Perron
  • Alastair Andrew
And a special thanks to Nikolaj van Omme for his very detailed comments.

March 16, 2012

G12 MiniZinc version 1.5 released

MiniZinc version 1.5 has been released. It can be downloaded here.

From the NEWS:
G12 MiniZinc Distribution 1.5

* G12/CPX solver

We have added the solver G12/CPX (Constraint Programming with eXplanations) to the distribution. G12/CPX is the successor to the LazyFD solver. The FlatZinc interface to G12/CPX is named fzn_cpx and MiniZinc models can be solved with G12/CPX using mzn-g12cpx, for example to solve the model foo.mzn using G12/CPX, do

$ mzn-g12cpx foo.mzn

The existing LazyFD solver is now deprecated and will be removed in a future release.

Changes to the MiniZinc language:

* We have added some new built-in functions to assist with formatting complex output: show_int/2, show_float/3, join/2 and concat/1.

* The built-in annotation is_output/0 is no longer supported.

* The built-in functions sum/1, product/1, forall/1, exists/1, xorall/1 and iffall/1 now also work with multi-dimensional arrays.

Changes to the FlatZinc language:

* We have added two new FlatZinc built-ins: bool_lin_eq/3 and bool_lin_le/3.

Other changes in this release:

* The following new global constraints have been added to the MiniZinc library:

Bugs fixed in this release:

* mzn2fzn now supports flattening expressions containing the built-in operation abort/1.

* mzn2fzn no longer turns optimisation problems that have a fixed objective into satisfaction problems. [Bug #277]

* The FlatZinc interpreter's MIP backend no longer aborts in the presence of a constant assignment to the objective variable. [Bug #319]

* The FlatZinc interpreter's FD backend no longer erroneously reports unsatisfiability in the presence of a constant assignment to the objective variable. [Bug #319]

* mzn2fzn now correctly reports that the built-in fix operation has aborted if given an argument that is not fixed. [Bug #158]

* A bug in mzn2fzn that caused it to not completely flatten array expressions in var array lookups has been fixed. [Bug #318]

* A bug that caused the FlatZinc interpreter to not indicate that search was complete for optimization problems has been fixed.
Quite a few of my MiniZinc models contain is_output (now not supported). I will update them to the current version as soon as possible.

March 14, 2012

Tom Schrijvers, Guido Tack Search Combinators - paper and implementation

A paper and an implementation of Search Combinators - a framework for defining application-tailored search strategies - have become available.


The paper is:
Tom Schrijvers, Guido Tack, Pieter Wuille, Horst Samulowitz, Peter J. Stuckey: Search Combinators (ArXiv). Abstract:
The ability to model search in a constraint solver can be an essential asset for solving combinatorial problems. However, existing infrastructure for defining search heuristics is often inadequate. Either modeling capabilities are extremely limited or users are faced with a general-purpose programming language whose features are not tailored towards writing search heuristics. As a result, major improvements in performance may remain unexplored. This article introduces search combinators, a lightweight and solver-independent method that bridges the gap between a conceptually simple modeling language for search (high-level, functional and naturally compositional) and an efficient implementation (low-level, imperative and highly non-modular). By allowing the user to define application-tailored search strategies from a small set of primitives, search combinators effectively provide a rich domain-specific language (DSL) for modeling search to the user. Remarkably, this DSL comes at a low implementation cost to the developer of a constraint solver.
The article discusses two modular implementation approaches and shows, by empirical evaluation, that search combinators can be implemented without overhead compared to a native, direct implementation in a constraint solver.


An implementation using MiniZinc and Gecode is available from Gecode's FlatZinc page. The README file describes the tools as:
These two tools [minizinc-to-minizinc pre-compiler and FlatZinc interpreter], together with the G12 mzn2fzn translator, comprise a complete toolchain for solving MiniZinc models using search combinators. The pre-compiler translates a slightly extended version of MiniZinc to standards-compliant MiniZinc. The FlatZinc interpreter was modified to understand search combinators expressed as annotations.

In order to use the tools, you will need the standard mzn2fzn compiler from the G12 MiniZinc distribution, which can be downloaded at


Two examples are included in the distribution: golomb.mzn and radiation.mzn. golomb.mzn is shown here, with the specific changes marked:
include "globals.mzn";
include "searchcombinators.mzn";

int: m;
int: n = m*m;

array[1..m] of var 0..n: mark;
array[1..(m*(m-1)) div 2] of var 0..n: differences =
    [ mark[j] - mark[i] | i in 1..m, j in i+1..m];

constraint mark[1] = 0;
constraint forall ( i in 1..m-1 ) ( mark[i] < mark[i+1] );
constraint alldifferent(differences);

% Symmetry breaking
constraint differences[1] < differences[(m*(m-1)) div 2];

solve :: dicho(print(mark,int_search(mark,input_order,assign_lb)),
The result using the included data file golomb-10.dzn (m=10) is:
{0, 1, 3, 7, 12, 20, 30, 44, 65, 80}
{0, 1, 3, 11, 17, 29, 36, 51, 56, 60}
{0, 1, 6, 10, 23, 26, 34, 41, 53, 55}
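As a quick sanity check of this output (outside MiniZinc): a list of marks is a Golomb ruler exactly when all pairwise differences between the marks are distinct. A small Python sketch:

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """True iff all pairwise differences between the marks are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

# The three rulers printed by the dichotomic search above:
rulers = [
    [0, 1, 3, 7, 12, 20, 30, 44, 65, 80],
    [0, 1, 3, 11, 17, 29, 36, 51, 56, 60],
    [0, 1, 6, 10, 23, 26, 34, 41, 53, 55],
]
print([is_golomb_ruler(r) for r in rulers])  # -> [True, True, True]
print([r[-1] for r in rulers])               # -> [80, 60, 55]: the lengths improve as search proceeds
```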
I have just started to experiment with this and might return with a longer report.

March 03, 2012

Some newer models (most in MiniZinc, some in or-tools/C#)

For some time I have not blogged about all the models I've created; instead I have just tweeted about them (I'm hakankj on Twitter). And sometimes not even that - some were just published directly on my <CP system> pages without any noise.

Well, here I have collected some of these newer and unblogged models in - roughly - chronological order. Most are in MiniZinc (since I often use MiniZinc for prototyping), but there are also some in other systems. (See Constraint Programming for a list of these CP system pages.)

  • xkcd_among_diff_0.mzn: Xkcd problem using among_diff_0
    This is another approach to the Xkcd "knapsack problem", where the object is to order dishes for a total of exactly 15.05.
    This version was inspired by a comment in Helmut Simonis' presentation Acquiring Global Constraint Models (page 3), where he uses the global constraint among_diff_0 ("Count how many variables are different from 0").
    However, my implementation differs in some ways:
    • it uses integers instead of floats.
    • it implements a slightly more general approach
    For more about the global constraint among_diff_0: see my MiniZinc model among_diff_0.mzn. Also see xkcd.mzn, my first approach to the problem.
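For reference, the underlying xkcd problem is small enough to brute-force; a Python sketch using the appetizer prices from the comic, in cents to avoid floating-point issues:

```python
from itertools import product

# Prices in cents (mixed fruit, french fries, side salad,
# hot wings, mozzarella sticks, sampler plate)
prices = [215, 275, 335, 355, 420, 580]
target = 1505

solutions = []
for counts in product(*(range(target // p + 1) for p in prices)):
    if sum(c * p for c, p in zip(counts, prices)) == target:
        solutions.append(counts)

# There are exactly two ways to order for $15.05,
# among them seven orders of mixed fruit
assert (7, 0, 0, 0, 0, 0) in solutions
assert len(solutions) == 2
```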

  • monorail.mzn: Monorail puzzle
    From Aaron Iba's Blog Users of my iOS Game Teach Me a Lesson MIT Didn't:
    The object of Monorail puzzles is to complete a closed-circuit loop through all the stations (dots) by drawing rails. The loop must pass through each station exactly once and close back on itself, like an actual monorail system might in a city.
        .   .   .   .       1  2  3  4
        .   .___.   .       5  6  7  8
        .   .   .   .       9 10 11 12
        .   .___.   .      13 14 15 16
        |          |
        .  .___.___.
        |  | 
        .  .___.___.
        |          |
    Also see
    Problem instances:

  • dennys_menu.mzn
    From Mind Your Decisions (about game theory and personal finance) Denny's math commercial:
    So there’s the question: how many different price combinations will total $10 when menu items are priced at $2, $4, $6, and $8?
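Assuming each item can be ordered any number of times and order doesn't matter (the commercial doesn't spell this out), a brute-force count is a few lines of Python:

```python
from itertools import product

prices = [2, 4, 6, 8]
target = 10

# Count multisets of menu items whose prices sum to $10
combos = [
    counts
    for counts in product(*(range(target // p + 1) for p in prices))
    if sum(c * p for c, p in zip(counts, prices)) == target
]
# 2+8, 4+6, 2+2+6, 2+4+4, 2+2+2+4, and five 2s
assert len(combos) == 6
```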

  • Some Rosetta Code implementations of various knapsack problems

  • Newspaper problem (job-shop)
    Problem statement from Snehal Patel's CS course
    There are four students: Algy, Bertie, Charlie and Digby, who share a flat. Four newspapers are delivered to the house: the Financial Times, the Guardian, the Daily Express and the Sun. Each of the students reads all of the newspapers, in a particular order and for a specified amount of time (see below). Question: Given that Algy gets up at 8:30, Bertie and Charlie at 8:45 and Digby at 9:30, what is the earliest that they can all set off for college?
         Algy           Bertie        Charlie      Digby
    1st  FT       60    Guardian 75   Express  5   Sun      90
    2nd  Guardian 30    Express   3   Guardian 15  FT        1
    3rd  Express   2    FT       25   FT       10  Guardian  1
    4th  Sun       3    Sun      10   Sun      30  Express   1
    Extra requirements: All reads the newspaper in a specific order:
    - Algy order   : - FT, Guardian, Express, Sun
    - Bertie order : - Guardian, Express, FT, Sun
    - Charlie order: - Express, Guardian, FT, Sun
    - Digby order  : - Sun, FT, Guardian, Express
    The origin of this problem is S. French: "Sequencing and Scheduling : an introduction to the mathematics of the job-shop", Ellis Horwood Limited, 1982.

    Tim Duncan wrote about it in his paper "Scheduling Problems and Constraint Logic Programming: A Simple Example and its Solution", AIAI-TR-120, 1990, page 5. The paper also includes a program in CHIP solving the problem.

    The two versions differ in that the first (newspaper0.mzn) is not loaded with as much output stuff as the latter (newspaper.mzn).

  • schedule2.mzn
    Problem from Dennis E. Shasha's book "Puzzles for Programmers and Pros", page 131f:
    In which order do you schedule the tasks starting from current
    day 0?:
    Task  T1 takes 4 days with deadline on day 45
    Task  T2 takes 4 days with deadline on day 48
    Task  T3 takes 5 days with deadline on day 25
    Task  T4 takes 2 days with deadline on day 49
    Task  T5 takes 5 days with deadline on day 36
    Task  T6 takes 2 days with deadline on day 31
    Task  T7 takes 7 days with deadline on day 9
    Task  T8 takes 5 days with deadline on day 39
    Task  T9 takes 4 days with deadline on day 13
    Task T10 takes 6 days with deadline on day 17
    Task T11 takes 4 days with deadline on day 29
    Task T12 takes 1 day  with deadline on day 19
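For this kind of single-machine deadline problem, the classic earliest-deadline-first rule is a handy cross-check: sorting the tasks by deadline happens to meet every deadline here, so it also answers the ordering question. A Python sketch with the task data transcribed from the list above:

```python
durations = [4, 4, 5, 2, 5, 2, 7, 5, 4, 6, 4, 1]
deadlines = [45, 48, 25, 49, 36, 31, 9, 39, 13, 17, 29, 19]

# Earliest-deadline-first: schedule tasks in order of deadline
order = sorted(range(12), key=lambda t: deadlines[t])

time, feasible = 0, True
for t in order:
    time += durations[t]
    if time > deadlines[t]:
        feasible = False

assert feasible            # every task meets its deadline
assert order[0] == 6       # T7 (deadline day 9) goes first
```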

  • Some Project Euler problems
    Whenever I learn a new programming language, I tend to solve at least the first - say - 20 Project Euler problems. Unfortunately many of the problems require arbitrary precision arithmetic or recursive approaches, and neither has good support in MiniZinc. Here are some of the problems:

  • hitting_set.mzn
    From MathWorld: VertexCover
    Let S be a collection of subsets of a finite set X. A subset Y of X that meets every member of S is called the vertex cover, or hitting set. The smallest possible such subset for a given graph G is known as a minimum vertex cover (Skiena 1990, p. 218), and its size is called the vertex cover number, denoted tau(G).
    This model contains some different problem instances, for example those from the article cited above, but also from other sources.

    By the way, this model was implemented after I read the paper There is no 16-Clue Sudoku: Solving the Sudoku Minimum Number of Clues Problem by Gary McGuire, Bastian Tugemann, and Gilles Civario. The abstract states: We apply our new hitting set enumeration algorithm to solve the sudoku minimum number of clues problem, which is the following question: What is the smallest number of clues (givens) that a sudoku puzzle may have? [...]
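The minimum hitting set itself is easy to state as a brute-force search, which is useful for validating small instances of the model. A Python sketch on a tiny toy instance of my own (not one of the model's instances):

```python
from itertools import combinations

def min_hitting_set(universe, subsets):
    """Smallest subset of `universe` that intersects every set in `subsets`."""
    for size in range(len(universe) + 1):
        for cand in combinations(universe, size):
            if all(set(cand) & s for s in subsets):
                return set(cand)

S = [{1, 2}, {2, 3}, {3, 4}]
hs = min_hitting_set([1, 2, 3, 4], S)
assert len(hs) == 2  # e.g. {1, 3} or {2, 4} hits every subset
```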
  • maximal_independent_sets.mzn
    From Wikipedia: Maximal independent set:
    In graph theory, a maximal independent set or maximal stable set is an independent set that is not a subset of any other independent set. That is, it is a set S such that every edge of the graph has at least one endpoint not in S and every vertex not in S has at least one neighbor in S. A maximal independent set is also a dominating set in the graph, and every dominating set that is independent must be maximal independent, so maximal independent sets are also called independent dominating sets.

    A graph may have many maximal independent sets of widely varying sizes; a largest maximal independent set is called a maximum independent set.
    The model contains a few problem instances, e.g. those from the Wikipedia article cited above.
    Also see Wikipedia Independent set (graph theory), and compare with the MiniZinc model misp.mzn which uses another representation and approach (inspired by GLPK's model).
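A brute-force enumeration is useful for cross-checking the model on tiny graphs. A Python sketch for the 4-cycle (my own toy instance, not necessarily one from the model):

```python
from itertools import combinations

def maximal_independent_sets(n, edges):
    """Enumerate maximal independent sets of a graph on vertices 0..n-1."""
    def independent(s):
        return not any(u in s and v in s for u, v in edges)
    sets = [set(c) for k in range(n + 1)
            for c in combinations(range(n), k)
            if independent(set(c))]
    # maximal = not a proper subset of another independent set
    return [s for s in sets if not any(s < t for t in sets)]

# 4-cycle: 0-1-2-3-0 has exactly the two maximal independent sets
mis = maximal_independent_sets(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
assert sorted(map(sorted, mis)) == [[0, 2], [1, 3]]
```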

  • magic_square_frenicle_form.mzn
    From Wikipedia Frénicle standard form
    A magic square is in Frénicle standard form, named for Bernard Frénicle de Bessy, if the following two conditions apply:
    - the element at position [1,1] (top left corner) is the smallest of the four corner elements; and
    - the element at position [1,2] (top edge, second from left) is smaller than the element in [2,1].
    Activating all these constraints, we get the "standard" way of counting the number of solutions:
    (1), 0, 1, 880, 275305224
    which is sequence A006052 in the excellent On-Line Encyclopedia of Integer Sequences.

    Without these symmetry constraints the number of solutions is:
    N  #solutions
    1     1
    2     0
    3     8
    4  7040
    5  many...
    (Counting the number of solutions of a CP model is a very good way of ensuring that the model is correct, or rather: if the number of solutions is not the expected one, then the model is wrong.)
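The 3x3 case is small enough to verify both counts (8 solutions in total, 1 in Frénicle standard form) by brute force in Python:

```python
from itertools import permutations

def is_magic(s):
    """s is a 3x3 square as a tuple of 9 values, rows first."""
    rows = [s[0:3], s[3:6], s[6:9]]
    cols = [s[0::3], s[1::3], s[2::3]]
    diags = [(s[0], s[4], s[8]), (s[2], s[4], s[6])]
    return len({sum(line) for line in rows + cols + diags}) == 1

solutions = [s for s in permutations(range(1, 10)) if is_magic(s)]
assert len(solutions) == 8  # all rotations/reflections of one square

# Frénicle standard form: corner [1,1] is the smallest corner,
# and element [1,2] is smaller than element [2,1]
frenicle = [s for s in solutions
            if s[0] == min(s[0], s[2], s[6], s[8]) and s[1] < s[3]]
assert len(frenicle) == 1
```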

  • scheduling_with_assignments.mzn
    This was done directly after I had done newspaper.mzn (see above). It then struck me that most examples I've seen of scheduling in Constraint Programming (especially those demonstrating the workings of cumulative) just show the times of the jobs, not the assignments of the workers.
    In this model, both the job assignments and the worker assignments are shown in different ways. For example, the solution of one of my own standard problems, furniture_moving.mzn (from Marriott and Stuckey: "Programming with Constraints", page 112f), is shown as
    earliest_end_time: 60
    num_jobs   : 4
    num_workers: 4
    start_time : [0, 0, 30, 45]
    duration   : [30, 10, 15, 15]
    end_time   : [30, 10, 45, 60]
    resource   : [3, 1, 3, 2]
    allow_idle : true
    collect_workers : false
    do_precendences: false
    Assignment matrix (jobs/workers):
    Job1: 1 0 1 1
    Job2: 0 1 0 0
    Job3: 1 0 1 1
    Job4: 1 1 0 0
    Assignment matrix (workers/jobs):
    Worker1: 1 0 1 1
    Worker2: 0 1 0 1
    Worker3: 1 0 1 0
    Worker4: 1 0 1 0
    Time range for the jobs and the assigned worker(s):
    Job1(0..30): 1 3 4 
    Job2(0..10): 2 
    Job3(30..45): 1 3 4 
    Job4(45..60): 1 2 
    Schedule: worker(job), timeline: (earliest_end_time: 60)
    Worker: 1:    1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4 
    Worker: 2:    2  2  2  2  2  2  2  2  2  2  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4 
    Worker: 3:    1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  -  -  -  -  -  -  -  -  -  -  -  -  -  -  - 
    Worker: 4:    1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  3  3  3  3  3  3  3  3  3  3  3  3  3  3  3  -  -  -  -  -  -  -  -  -  -  -  -  -  -  - 
    Time:         0  1  2  3  4  5  6  7  8  9  10 1  2  3  4  5  6  7  8  9  20 1  2  3  4  5  6  7  8  9  30 1  2  3  4  5  6  7  8  9  40 1  2  3  4  5  6  7  8  9  50 1  2  3  4  5  6  7  8  9  
    The model presents:
    • start time, duration, and end time for all jobs
    • assignment matrix jobs/workers
    • assignment matrix workers/jobs
    • jobs with time range, and the assigned workers
    • schedule (time line in time units) for the jobs, showing the assigned worker
    • schedule (time line) for the workers, showing the time the workers work
    • and last, a Gantt-like schedule showing which job each worker is scheduled to do at each time.

    The model also has some other "bells & whistles" such as
    • handling precedences
    • modeling as a bin pack problem
    • "collecting workers", which may be useful for certain problem, such as perfect square placements

    The two last "features" shows that the job scheduling problem has a family resemblance with bin pack and perfect square placement problem. Unfortunately these are not very fast in this model.

    Well, since I plan to blog about this more, including a benchmark, I leave it for now.

    Here are the problem instances. They have been taken from various sources:

  • equal_sized_groups.mzn
    This is a problem from or-exchange (where many from the area of Operations Research hang out): dividing into roughly equal sized groups, with a sorted list
    I have a problem, and it seems like it should be something that someone has studied before. I have a sorted list of N elements, and I want to divide them into K groups, by choosing K-1 split points between them. There may be elements with the same value, and we want to have items with same value in the same group. Find K groups as close in size to round(N/K) as possible.

    For example, divide these 32 elements in to 4 groups of size 8:
     1 1 1 1 2 2 3 3 3 3 3 3 3 3 4 4 4 4 5 5 5 5 5 6 6 6 6 7 8 9 10 10
    One solution would be these 3 break points:
     1 1 1 1 2 2 | 3 3 3 3 3 3 3 3 | 4 4 4 4 5 5 5 5 5 | 6 6 6 6 7 8 9 10 10
    [            6                 14                 23                    ]
    [    6 elts      8 elts               9 elts              9 elts          ]
     1 1 1 1 2 2         = 6 elements,  error = abs(8-6)=2
     3 3 3 3 3 3 3 3     = 8 elements,  error = abs(8-8)=0
     4 4 4 4 5 5 5 5 5     = 9 elements,  error = abs(8-9)=1
     6 6 6 6 7 8 9 10 10 = 9 elements,  error = abs(8-9)=1
     total error = 4
    Does this look familiar to anyone? I'd like an approximation algorithm if possible.

    Thanks, Craig Schmidt
    The model contains some examples and results.
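Craig's 32-element example can be brute-forced over the valid split points (splits are only allowed between two unequal neighbors, so equal values stay in the same group). A Python sketch confirming that total error 4 is optimal for this instance:

```python
from itertools import combinations

data = [1,1,1,1,2,2,3,3,3,3,3,3,3,3,4,4,4,4,5,5,5,5,5,6,6,6,6,7,8,9,10,10]
K = 4
target = round(len(data) / K)  # 8

# A split point is allowed only between two different values
valid = [i for i in range(1, len(data)) if data[i - 1] != data[i]]

def total_error(splits):
    bounds = [0] + list(splits) + [len(data)]
    return sum(abs(target - (b - a)) for a, b in zip(bounds, bounds[1:]))

best = min(total_error(s) for s in combinations(valid, K - 1))
assert best == 4  # matches the 6 | 8 | 9 | 9 solution in the question
```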

  • houses.mzn
    Problem from a Kanren example: houses.scm
    Taken from _Algebra 1_, Glencoe/McGraw-Hill, New York, New York, 1998 pg. 411, Problem 56

    There are 8 houses on McArthur St, all in a row. These houses are numbered from 1 to 8.

    Allison, whose house number is greater than 2, lives next door to her best friend, Adrienne. Belinda, whose house number is greater than 5, lives 2 doors away from her boyfriend, Benito. Cheri, whose house number is greater than Benito's, lives three doors away from her piano teacher, Mr. Crawford. Daryl, whose house number is less than 4, lives 4 doors from his teammate, Don. Who lives in each house?
    One thing to note is the use of the global constraint inverse for channeling each person to the houses and vice versa.
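The puzzle is also small enough to brute-force over all 8! assignments, which confirms that the constraints pin down a unique arrangement. A Python sketch, interpreting "lives n doors away" as an absolute house-number difference of n:

```python
from itertools import permutations

people = ["Allison", "Adrienne", "Belinda", "Benito",
          "Cheri", "Crawford", "Daryl", "Don"]

solutions = []
for perm in permutations(range(1, 9)):
    al, ad, be, bn, ch, cr, da, do = perm
    if (al > 2 and abs(al - ad) == 1 and
        be > 5 and abs(be - bn) == 2 and
        ch > bn and abs(ch - cr) == 3 and
        da < 4 and abs(da - do) == 4):
        solutions.append(dict(zip(people, perm)))

assert len(solutions) == 1
assert solutions[0]["Daryl"] == 1 and solutions[0]["Belinda"] == 8
```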

  • Finding an optimal wedding seating chart
    This problem has been implemented in both MiniZinc and or-tools/C#:
    The problem is from Meghan L. Bellows and J. D. Luc Peterson Finding an optimal seating chart for a wedding (PDF), via Improbable Research Finding an optimal seating chart for a wedding:
    Every year, millions of brides (not to mention their mothers, future mothers-in-law, and occasionally grooms) struggle with one of the most daunting tasks during the wedding-planning process: the seating chart. The guest responses are in, banquet hall is booked, menu choices have been made. You think the hard parts are over, but you have yet to embark upon the biggest headache of them all. In order to make this process easier, we present a mathematical formulation that models the seating chart problem. This model can be solved to find the optimal arrangement of guests at tables. At the very least, it can provide a starting point and hopefully minimize stress and arguments…
    As mentioned before (e.g. in A matching problem, a related seating problem, and some other seating problems) I'm quite fascinated by this type of seating problems.

    And I'm not the only one. After I tweeted about my MiniZinc implementation (wedding_optimal_chart.mzn), Erwin Kalvelagen (who writes the excellent blog Yet Another Math Programming Consultant) showed a GAMS (MIP) model in Weddings and optimal seating. (He also found a bug in my model which was fixed quite easily. Thanks Erwin.)
  • grime_puzzle.mzn
    This problem was taken from the blog Travels in a mathematical world A puzzle from James Grime about abcdef:
    Today James Grime tweeted this question/puzzle:

    Is there a six digit number abcdef such that the following all hold?

    a+b+c+d+e+f = y

    If not, show why not.

    A little tweeting back and forth verified that "ab" means 10a+b, not a×b.

  • balanced_brackets.mzn
    This model generates balanced brackets of size m*2. The number of generated solutions for m:
     m        #
     1       1
     2       2
     3       5
     4      14
     5      42
     6     132
     7     429
     8    1430
     9    4862
    10   16796
    11   58786
    12  208012
    13  742900
    Which - of course - are the Catalan numbers. See OEIS: 1,2,5,14,42,132,429,1430,4862,16796,58786,208012, and the entry for Catalan numbers: A000108.
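The counts can be reproduced with a simple recursive generator in Python:

```python
def balanced_brackets(m):
    """Generate all balanced bracket strings with m '[' and m ']'."""
    def go(s, opened, closed):
        if opened == m and closed == m:
            yield s
            return
        if opened < m:
            yield from go(s + "[", opened + 1, closed)
        if closed < opened:
            yield from go(s + "]", opened, closed + 1)
    return list(go("", 0, 0))

counts = [len(balanced_brackets(m)) for m in range(1, 8)]
assert counts == [1, 2, 5, 14, 42, 132, 429]  # Catalan numbers
```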

  • The "8809 = 6" puzzle
    This problem has been encoded in both MiniZinc and or-tools/C# (and was created yesterday):
    The problem seems to have been around for a couple of years, but it wasn't until I read about it in another excellent blog (Michael Lugo's God Plays Dice): A puzzle, that I really tried it (and was lucky to solve it directly without any computational aid). The problem was stated thus:
    Here’s a puzzle.
    8809 = 6
    7662 = 2
    9312 = 1
    8193 = 3
    8096 = 5
    7756 = 1
    6855 = 3
    9881 = 5
    2581 = ?
    After a few days a mathematical solution came in the blog post: An answer to a puzzle, but then the problem had been expanded a little:
    8809 = 6
    7111 = 0
    2172 = 0
    6666 = 4
    1111 = 0
    3213 = 0
    7662 = 2
    9312 = 1
    0000 = 4
    2222 = 0
    3333 = 0
    5555 = 0
    8193 = 3
    8096 = 5
    7777 = 0
    9999 = 4
    7756 = 1
    6855 = 3
    9881 = 5
    5531 = 0
    2581 = ?
    The first version of the problem actually has two different solutions for the unknown "?" (represented as x in the models), since some of the variables are underdefined. The second version has a unique solution for x, but there are 10 slightly different solutions since one of the variables is (still) underdefined, i.e. the x is the same in all these solutions.

    The approach in my models was inspired by Michael Lugo's mathematical solution (as an equation system), though the or-tools/C# model implements another version using a matrix to represent the problem.
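For reference, the answer that Michael Lugo's equation system arrives at corresponds to counting the closed loops in each digit's printed shape: 8 has two, 0/6/9 have one, the rest none. A Python check against all the given equations (note that digit 4 never occurs in the givens, which is exactly the underdefined variable mentioned above; its value here is an arbitrary choice):

```python
# Closed loops in each digit's printed shape
# (the value for "4" is arbitrary: it appears in no given equation)
loops = {"0": 1, "1": 0, "2": 0, "3": 0, "4": 0,
         "5": 0, "6": 1, "7": 0, "8": 2, "9": 1}

def value(number):
    return sum(loops[d] for d in number)

givens = {"8809": 6, "7111": 0, "2172": 0, "6666": 4, "1111": 0,
          "3213": 0, "7662": 2, "9312": 1, "0000": 4, "2222": 0,
          "3333": 0, "5555": 0, "8193": 3, "8096": 5, "7777": 0,
          "9999": 4, "7756": 1, "6855": 3, "9881": 5, "5531": 0}

assert all(value(k) == v for k, v in givens.items())
assert value("2581") == 2
```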

February 08, 2012

G12 MiniZinc version 1.4.3 released

G12 MiniZinc version 1.4.3 has been released. Download here.

From the NEWS
G12 MiniZinc Distribution 1.4.3

Changes in this release:

* We have added more redefinitions of FlatZinc built-ins to the "linear" MiniZinc library specialisation. This allows a wider range of MiniZinc models to be solved using LP/MIP solvers.

* We have added decompositions of lex_less/2, lex_lesseq/2, lex_greater/2 and lex_greatereq/2 for Boolean arrays.

Bugs fixed in this release:

* We have fixed a bug in mzn2fzn that caused it to generate invalid FlatZinc for some var array lookups. [Bug #312]

January 20, 2012

G12 Zinc version 2.0 has been released

G12 Zinc version 2.0 has been released. The supported OS/machines are: Linux x86, Linux x86-64, Mac OS X x86, Mac OS X x86-64, Windows.

Release 2.0

New features in this release:

* G12/CPX: G12/CPX is a new lazy clause generation solver that can be used as a finite-domain solver with G12. It provides substantial performance and scalability improvements over the old lazy clause generation solver G12/FDX. G12/FDX itself is now deprecated and support for it will be dropped in a later release.

* Windows support: the platform is now supported on systems running Microsoft Windows.

Changes to the Zinc implementation:
* Zinc generated executables support a new command line option, --logging-level, that controls the amount of verbose output that is generated.

Release 1.1

This release of G12 is restricted to the MIP subset of G12. Only the Zinc MIP backend and MIP solver interfaces are included.
New features in this release:

* Gurobi support: the Gurobi optimizer may now be used as an LP/MIP solver for G12.
Changes to the Zinc language:

* Zinc model and data files must now be UTF-8 encoded.

* The Zinc specification now defines the content type `application/x-zinc-output' which specifies how output should be produced by a Zinc implementation.

By default, the G12 implementation of Zinc now produces output that conforms to this content type. See Appendix D of the Zinc specification for further details.

* Zinc now supports C-style block comments.

* Unfixed sets with non-finite elements may now occur in more contexts. For example, the following is now accepted:

predicate disjoint(var set of $T: s1, var set of $T: s2) =
let {var set of $T: s = s1 intersect s2} in s = {};

* The following builtin operations are no longer supported:
- show_cond/1
- first/1
- second/1
- fst/1
- snd/1

Changes to the Zinc standard library:

* We have generalised the interface to the cumulative constraint so that it is polymorphic in the task type.

Changes to the Zinc implementation:

* The implementation now supports solver parameter annotations. These provide the Zinc modeller with more control over the low-level behaviour of a solver.
For example:

bool: do_logging; % Runtime data file parameter.
solve :: solver_parameters(cplex, [
    logging(do_logging),
    presolve(false)]) satisfy;

The above solver_parameters/2 annotation says that if we are using CPLEX as the solver then low-level tracing output should be enabled in CPLEX, we should limit ourselves to at most 50 integer solutions, and presolving should be disabled.

Note that the arguments of solver parameter annotations may be specified in runtime data files, as with the logging/1 annotation above.

* The implementation now supports backend parameter annotations. These provide the Zinc modeller with more control over the behaviour of a Zinc backend. For example:

bool: name_flag; % Runtime data file parameter.
solve :: backend_parameters([pass_through_names(name_flag)]) satisfy;

The above backend_parameters/1 annotation tells the backend to make Zinc variable names available to the underlying solver(s) (where possible).

* Executables generated by the Zinc compiler may now have runtime data specified directly on the command line rather than in a file. This is done using the new command line option --cmdline-data (-D for short).
For example:

./foo -D "n = 7; m = 8;"

The above command runs the model foo with the integer parameters "n" and "m" set to 7 and 8 respectively.

* The Zinc compiler now generates more efficient code for comprehension expressions, particularly those with four or more generators.

* array3d, array4d, array5d and array6d casts are now supported in runtime data files.

Bugs fixed in this release:

* A transformation bug that was causing variables to be incorrectly annotated has been fixed. [Bug #165]

* Some bugs that caused transformation aborts have been fixed. [Bugs #155, #159 and #162]
Also, see the documentation page. And perhaps also my Zinc page.

November 24, 2011

MiniZinc version 1.4.2 released

MiniZinc version 1.4.2 has been released. It can be downloaded here for several operating systems.

From the NEWS file:
G12 MiniZinc Distribution 1.4.2

Bugs fixed in this release:

* We have fixed a bug in mzn2fzn that caused it to incorrectly treat the condition of a where clause that evaluated to false as model inconsistency.

* A bug in mzn2fzn that caused it to create self-assignments for introduced variables has been fixed. [Bug #290]

* A bug in the g12_fd solver's cardinality constraint, which also affected the domain consistent alldifferent constraint, has been fixed. [Bug #287]

November 11, 2011

MiniZinc version 1.4.1 released

MiniZinc version 1.4.1 has been released. It can be downloaded here.

From the NEWS file:
Bugs fixed in this release:

* A bug in mzn2fzn's optimisation pass that caused it to leave dangling variable references in search annotations has been fixed. [Bugs #282 and #283]

* Some bugs that caused mzn2fzn to abort with models containing large 2d array literals have been fixed. [Bug #284]

* The solns2out tool now always outputs solution separators on a separate line even when the model output item does not contain a final newline character. [Bug #288]

Some related things:

I also noticed that there is a new logo for the MiniZinc project:


On YouTube there is a short video, Installing Minizinc on OS X Lion, which may help some.

My MiniZinc page.

October 04, 2011

Crossword construction in MiniZinc using table constraints - a small benchmark on "72 Gecode problems"

Almost all models, wordlists, and files mentioned in this blog post are available at my MiniZinc Crossword page.


The method presented here for constructing (solving, generating) crosswords is based on "just a bunch" of table constraints representing the words, together with a rather simple unicity constraint. After an introduction to the problem and a description of the general approach, a benchmark of 72 crossword problem instances is reported, comparing two FlatZinc solvers (Gecode/fz and Chuffed) with the crossword solver distributed with Gecode (written in C++), which uses element constraints for the intersections and an alldifferent constraint for the unicity requirement. We will find that this MiniZinc model and the FlatZinc solvers are competitive and in some cases even faster than the Gecode (C++) model.


Some weeks ago I started to read Ivan Bratko's "Prolog Programming for Artificial Intelligence" (the very new 4th edition, ISBN: 9780321417466). On page 27f there was a very simple crossword problem:
The problem is to fill this crossword with words:
    L1   L2    L3   L4    L5   XXX
    L6   XXX   L7   XXX   L8   XXX
    L9   L10   L11  L12   L13  L14
    L15  XXX   XXX  XXX   L16  XXX

Where the L* are letters to be identified.
and also a couple of words to use:
dog, run, top
five, four, lost, mess, unit
baker, forum, green, super
prolog, vanish, wonder, yellow
One common approach to solving/generating crosswords is to identify the intersections of the words and then use a wordlist for matching these intersections (e.g. the very simple crossword.mzn). This is usually done with element constraints for the intersections and an alldifferent constraint to ensure unicity of the selected words.

Instead of this approach I got the idea (not very revolutionary, but still) to try using only "a bunch of" table constraints representing the words (called "segments" in the model), which then handle the intersections via the naming of the variables. This was implemented in the MiniZinc model crossword_bratko.mzn. The table constraints were manually created by just using the numbers of the "free" letters in the problem (L1, L2, etc). The array of decision variables (L) has the domain 1..num_letters (1..26, for "a".."z"). The dummy variable L[0] was later added to handle the fill-outs (and is hard-coded to a single value).

Here is the kernel of the Bratko problem, which consists of the two across (row) words and the three down (column) words:
array[0..N] of var 1..num_letters: L;

constraint
   % across
   table([L[1],L[2],L[3],L[4],L[5]], words5) /\
   table([L[9],L[10],L[11],L[12],L[13],L[14]], words6)

   /\ % down
   table([L[1],L[6],L[9],L[15]], words4) /\
   table([L[3],L[7],L[11]], words3) /\
   table([L[5],L[8],L[13],L[16]], words4);
The second argument of table is a matrix where all words of the same length are collected, e.g. words5 contains all words of length 5. In earlier crossword models I have tended to collect all words in a single - and often huge - matrix with a lot of "0" as fillers to make it a proper matrix.

As mentioned above, the intersections are not explicitly identified in the model; instead they are just a consequence of the same letter identifier "happening" to be in both an across word and a down word. E.g. L[3] represents the third letter of the first across word and the first letter of the first down word. The index (3) is just a counter of the "free" letters. In this problem there are 16 free letters, represented by L[1]..L[16].

Also note that in this problem we don't have to care about 1-letter words. In the general model presented below, we will see that 1-letter words must be handled in a special way.

The constraint that all words should be distinct is implemented with the constraint shown below. The matrix segments contains the segments (i.e. the words) in the crossword grid, where each letter is identified by a unique integer (the index in L), and "0" (zero) is used as a filler so that all segments have the same length and can be represented as a matrix. It has two parts, the segments across (rows) and the segments down (columns), and it has the same structure as the table constraints.
segments = array2d(1..num_segments, 1..max_length, [
   % across
   1, 2, 3, 4, 5, 0,
   9,10,11,12,13,14,

   % down
   1, 6, 9,15, 0, 0,
   3, 7,11, 0, 0, 0,
   5, 8,13,16, 0, 0
]);

% the segments/words should be all distinct
constraint
   forall(I,J in 1..num_segments where I < J) (
      not(forall(K in 1..max_length) (
          L[segments[I,K]] = L[segments[J,K]]
      ))
   );
(The structure of the table constraints and the segments is much the same, and I have tried to represent this in a single matrix, but stumbled on the problem of how to represent an appropriate wordlist structure. This may very well be fixed in a later version.)

This model has a single solution (given the wordlist shown above):
L = [6, 15, 18, 21, 13, 9, 21, 5, 22, 1, 14, 9, 19, 8, 5, 19]

f o r u m *
i * u * e *
v a n i s h
e * * * s *
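The same "table constraints on shared letter indices" idea can be mimicked in plain Python for the Bratko instance, brute-forcing over the word choices per segment and checking the shared positions; this cross-check confirms the single solution above:

```python
from itertools import product

words3 = ["dog", "run", "top"]
words4 = ["five", "four", "lost", "mess", "unit"]
words5 = ["baker", "forum", "green", "super"]
words6 = ["prolog", "vanish", "wonder", "yellow"]

# Segments as lists of letter indices L1..L16 (as in the model)
segments = [([1, 2, 3, 4, 5], words5),          # across
            ([9, 10, 11, 12, 13, 14], words6),
            ([1, 6, 9, 15], words4),            # down
            ([3, 7, 11], words3),
            ([5, 8, 13, 16], words4)]

solutions = []
for choice in product(*(ws for _, ws in segments)):
    L = {}
    ok = len(set(choice)) == len(choice)  # all words distinct
    for (idxs, _), word in zip(segments, choice):
        for i, c in zip(idxs, word):
            ok = ok and L.setdefault(i, c) == c  # shared letters agree
    if ok:
        solutions.append(choice)

assert solutions == [("forum", "vanish", "five", "run", "mess")]
```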

Generalization of the model and Gecode's 72 crossword problem

Since Bratko's problem was so small, I then wondered how well MiniZinc would do on a variety of larger crossword problems using the same basic approach, for example on the 72 crossword instances in Gecode's crossword.cpp (it is a large file but most of it consists of definitions of these 72 problem instances). The Gecode model uses element constraints, projecting words onto their individual letters. In contrast, the MiniZinc model uses table constraints encoding the entire words. One of the objectives of the benchmark is to compare these two approaches.

The 72 problem instances are of different sizes, ranging from small problems with grids of 5x5 cells to larger 23x23 grids. See the comments in the Gecode model for more details. The problems have been collected from different sources, e.g. Herald Tribune Crosswords and early studies of backtracking, e.g. M.L. Ginsberg Dynamic Backtracking and M.L. Ginsberg et al Search Lessons Learned from Crossword Puzzles. Some of them have also been used in later studies, e.g. the paper by Anbulagan and Adi Botea on phase transitions in crossword problems: Crossword Puzzles as a Constraint Problem (PDF).

The problems are represented as a grid in text form. For example, the Bratko problem mentioned above would be represented as the following grid, where "_" represents a letter to be identified, and "*" is a blocked cell.
_ _ _ _ _ *
_ * _ * _ *
_ _ _ _ _ _
_ * * * _ *
A larger example is the 15x15 problem #10 (from Gecode's crossword.cpp), which is also identified as "15.01, 15 x 15".
_ _ _ _ * _ _ _ _ _ * _ _ _ _
_ _ _ _ * _ _ _ _ _ * _ _ _ _
_ _ _ _ _ _ _ _ _ _ * _ _ _ _
_ _ _ _ _ _ _ * * _ _ _ _ _ _
* * * _ _ _ * _ _ _ _ _ _ * *
_ _ _ _ _ * _ _ _ * _ _ _ _ _
_ _ _ * _ _ _ _ _ _ * _ _ _ _
_ _ _ * _ _ _ _ _ _ _ * _ _ _
_ _ _ _ * _ _ _ _ _ _ * _ _ _
_ _ _ _ _ * _ _ _ * _ _ _ _ _
* * _ _ _ _ _ _ * _ _ _ * * *
_ _ _ _ _ _ * * _ _ _ _ _ _ _
_ _ _ _ * _ _ _ _ _ _ _ _ _ _
_ _ _ _ * _ _ _ _ _ * _ _ _ _
_ _ _ _ * _ _ _ _ _ * _ _ _ _
However, since MiniZinc doesn't have the facility of parsing this kind of text representation, I wrote some Perl programs to convert a problem instance to a MiniZinc model. The MiniZinc model for problem #10 is crossword3_10.mzn, which contains the problem-specific table constraints and segments definitions, much in the same way as in the Bratko model.

Here is one solution (using Gecode's SCOWL wordlist, see below for more about the wordlists):

Note: By construction this approach requires that all 1-letter segments (words) are distinct. This means that 1-letter segments must be handled with care, since such a segment is the same whether seen across or down. More specifically, the unicity constraint has a special check for 1-letter words:
constraint
  forall(I,J in 1..num_segments where I < J) (
    if sum(K in 1..max_length) (
         bool2int(segments[I,K] > 0)) = 1
       /\ segments[I,1] = segments[J,1]
    then
      true % the same 1-letter segment seen across and down
    else
      not(forall(K in 1..max_length) (
        L[segments[I,K]] = L[segments[J,K]]
      ))
    endif
  );

Structure of the MiniZinc files

The structure of the model is as follows:
  • the general model crossword3.mzn, which includes the wordlist and the unicity constraint
  • a wordlist, which is included by crossword3.mzn
  • a problem instance, e.g. crossword3_10.mzn, which contains the specific data and table constraints. The instance file includes crossword3.mzn
The instances can be run as follows:
# Using G12/MiniZinc default solver
$ minizinc -v --no-optimize -s crossword3_0.mzn

# Using Gecode/fz
$ minizinc -v --no-optimize -G gecode -s crossword3_0.mzn -f "fz -mode stat -n 1 "

For larger problems, the parameter --no-optimize might be a good choice; otherwise mzn2fzn (the converter to FlatZinc) can take a very long time.

More about the plain Gecode model

The (plain) Gecode model crossword.cpp is described in detail in chapter 19 of Modeling and Programming with Gecode (PDF), which also includes some benchmark results.


One of the objectives of the benchmark was to see how well the MiniZinc model (together with a FlatZinc solver) would compete with Gecode's crossword model (crossword.cpp, available in the Gecode distribution). This model will be called the "plain Gecode model" to discern it from the Gecode/fz FlatZinc solver.

After doing some preliminary tests of several FlatZinc solvers using different labelings, I selected these two solvers to continue with the full benchmarks of all 72 instances:
  • Gecode/fz: SVN version (based on the version 3.7.0) per 2011-09-21.
  • Chuffed: Version compiled 2011-04-17 (no explicit version). Chuffed is a lazy clause generation solver written by Geoffrey Chu. As of this writing it is not yet publicly available, but it has been mentioned several times in connection with the MiniZinc Challenge, where it has shown very impressive results; see MiniZinc challenge 2010 Results and MiniZinc challenge 2011 Results. For more information see this description.
After systematically testing all labelings on some problem instances (#20, #30, and with some wordlists #40), some labelings were selected for the full tests. Note: I looked for a problem instance that could be used as a "proxy" for the complete problem set in order to pick the best labelings, but found no instance that was representative enough.


The benchmark consists of testing all 72 problem instances with the following wordlists:
  • Gecode's SCOWL wordlist (66400 English words), available from MiniZinc Crossword page
  • Swedish wordlist based on an earlier version of Den stora svenska ordlistan ("The Big Swedish Wordlist"), (388493 Swedish words), available from MiniZinc Crossword page. Note that crossword3.mzn must be changed to handle the national characters "å", "ä", and "ö" (this is commented out in the current model).
  • 73871 English words from /usr/share/dict/words (some words were filtered out, e.g. those not matching ^[A-Za-z]+$), available from MiniZinc Crossword page
  • 80.txt (242606 English words)
Some notes about the runs:
  • The time reported is for all 72 problem instances, with a timeout of 10 minutes (600 seconds) per problem instance.
  • The plain Gecode model was run with default parameters (except for the timeout of 10 minutes and statistics)
  • The solver stated as chuffed4 below is Chuffed using the single parameter --toggle-vsids (and the timeout)
  • The solver Gecode/fz was run with the parameter for timeout and statistics.
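The wordlist filtering mentioned above (keeping only words that consist solely of ASCII letters) can be sketched in Python; filter_words is a hypothetical helper for illustration, not one of the actual conversion scripts:

```python
import re

def filter_words(words):
    """Keep only words consisting solely of ASCII letters (dropping
    apostrophes, accented characters, etc.), lowercased and deduplicated."""
    pattern = re.compile(r"^[A-Za-z]+$")
    seen = set()
    result = []
    for w in words:
        w = w.strip()
        if pattern.match(w):
            lw = w.lower()
            if lw not in seen:
                seen.add(lw)
                result.append(lw)
    return result

print(filter_words(["hello", "it's", "Zürich", "Hello", "abc"]))
# → ['hello', 'abc']
```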
The reported runs are not all the tests that were made; only the top results are shown.

Which time to compare: Total time, runtime, or solvetime?

The time for running the plain Gecode executable was measured via the Unix time command. This includes everything: parsing the problem (compiled into the model), setting up the constraints, and then solving the model.

However, MiniZinc works differently: the MiniZinc model (.mzn) is first parsed and translated (flattened) into FlatZinc (.fzn), which is the file used by the solvers. This flattening process can take some time, and it grows with the size of the wordlist: from a couple of seconds for the smaller wordlists to about 30 seconds for the largest (Swedish) wordlist. It also takes some seconds to generate the "fancy" output where the resulting grid is presented (see the output section in the MiniZinc model). This output is used to check that the solution is correct and doesn't include any duplicate words (this is done via a separate Perl program).
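The duplicate-word check (done by a separate Perl program in the benchmark) can be sketched in Python; the grid representation, with '#' for blocked cells, is an assumption of this sketch:

```python
def extract_words(grid):
    """Extract all maximal horizontal and vertical words (length >= 2)
    from a crossword grid; '#' marks a blocked cell."""
    words = []
    rows = ["".join(r) for r in grid]
    cols = ["".join(c) for c in zip(*grid)]
    for line in rows + cols:
        for seg in line.split("#"):
            if len(seg) >= 2:
                words.append(seg)
    return words

def has_duplicates(grid):
    """True if any word appears more than once in the grid."""
    ws = extract_words(grid)
    return len(ws) != len(set(ws))

grid = [list("ab#"), list("cd#"), list("###")]
print(extract_words(grid))   # → ['ab', 'cd', 'ac', 'bd']
print(has_duplicates(grid))  # → False
```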

Example: The time for running problem #33 (a grid of size 21x21) using the plain Gecode crossword model with the SCOWL wordlist is 44.9 seconds. The total time for the Gecode/fz solver using size_afc_min/indomain_min on the same problem is 59.6 seconds. However, Gecode/fz reports two times in its output: runtime: 49.1s and solvetime: 47.6s. This means that - for this problem instance - there is an overhead of about 10 seconds for generating the FlatZinc file and then producing the nice output grid. In comparison, Chuffed using first_fail/indomain_max took 13.22s total time, with a reported 3.41s runtime and 1.56s solvetime.
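As a small illustration in Python (using the figures from the problem #33 example; "overhead" here is simply total wall-clock time minus the solver-reported runtime):

```python
def overhead(total_time, runtime):
    """Overhead of flattening to FlatZinc plus formatting the output:
    total wall-clock time minus the solver-reported runtime (seconds)."""
    return total_time - runtime

# Gecode/fz on problem #33: total 59.6s, reported runtime 49.1s
print(round(overhead(59.6, 49.1), 1))  # → 10.5
```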

In the benchmark below, all three times are reported for the FlatZinc solvers: total time, runtime, and solvetime (summed over all 72 instances). One way of comparing the FlatZinc solvers with the plain Gecode model is to use the runtime, which excludes the overhead of flattening to FlatZinc and generating the output. On the other hand, comparing runtimes is not really fair to plain Gecode, since the flattening process may do some relevant optimization of the model. As we will see, some of the solvers in fact have a total time that is better than the plain Gecode model's.

The benchmarks below are grouped by wordlist, and the ordering is by runtime. We will see that no single FlatZinc solver + labeling has the best runtime (or total time) for all wordlists.

Wordlist Gecode SCOWL

This is the wordlist used in the "plain" Gecode model containing 66400 English words.

The total time for the plain Gecode crossword model (with default parameters) on all 72 problem instances is 57:16.25 minutes. The timeouts/failures are for problems #15, #30, #39, #40, #45, and #49.

Problem #40 is not possible to solve with this wordlist, since the problem requires two words of length 23 and the wordlist contains no word of that length.
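A quick feasibility check of this kind - does the wordlist contain words of every required length? - can be sketched in Python (missing_lengths is a hypothetical helper, not part of the benchmark scripts):

```python
def missing_lengths(required_lengths, wordlist):
    """Return the required word lengths for which the wordlist has no
    candidate word at all (making the instance trivially unsolvable)."""
    available = {len(w) for w in wordlist}
    return sorted(set(required_lengths) - available)

print(missing_lengths([3, 5, 23], ["cat", "house", "apple"]))  # → [23]
```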

It is interesting that Chuffed's total time (36 minutes and 37 seconds) is significantly less than plain Gecode's total time (and the runtime is of course even faster). We also see that the overhead of flattening to FlatZinc and handling the output is about 7.5 minutes.
Solver | var select | val select | #solved | Total time | Runtime | Solvetime
69 | 2197.65s (36 minutes and 37 seconds) | 1744.44s (29 minutes and 4 seconds) | 1681.32s (28 minutes and 1 second)
69 | 2198.98s (36 minutes and 38 seconds) | 1746.56s (29 minutes and 6 seconds) | 1683.45s (28 minutes and 3 seconds)
67 | 3368.83s (56 minutes and 8 seconds) | 2917.64s (48 minutes and 37 seconds) | 2854.8s (47 minutes and 34 seconds)
66 | 4386.15s (1 hour, 13 minutes, and 6 seconds) | 3931.794s (1 hour, 5 minutes, and 31 seconds) | 3883.713s (1 hour, 4 minutes, and 43 seconds)
66 | 4564.53s (1 hour, 16 minutes, and 4 seconds) | 4114.111s (1 hour, 8 minutes, and 34 seconds) | 4066.457s (1 hour, 7 minutes, and 46 seconds)

Wordlist 80.txt

The 80.txt wordlist contains 242606 English words.

Total time for plain Gecode: 21:48.28 minutes with 70 instances solved.

Here we see that the best FlatZinc solver solves all 72 problems in about the same total time as the plain Gecode model. The reported runtime is just about 3 minutes, which is kind of weird. (The reported solvetime of 17 seconds is even weirder.)
Solver | var select | val select | #solved | Total time | Runtime | Solvetime
72 | 1324.96s (22 minutes and 4 seconds) | 157.42s (2 minutes and 37 seconds) | 17.71s (17 seconds)
72 | 1375.62s (22 minutes and 55 seconds) | 214.765s (3 minutes and 34 seconds) | 103.848s (1 minute and 43 seconds)
72 | 1370.48s (22 minutes and 50 seconds) | 214.772s (3 minutes and 34 seconds) | 103.495s (1 minute and 43 seconds)
72 | 1410.68s (23 minutes and 30 seconds) | 266.42s (4 minutes and 26 seconds) | 126.43s (2 minutes and 6 seconds)


Wordlist /usr/share/dict/words

This wordlist contains 73871 English words from /usr/share/dict/words. Some words were filtered out from the original file: those not matching ^[A-Za-z]+$.

Total time for plain Gecode: 33:43.19 minutes, 69 instances solved.

This is another benchmark where Chuffed is the best FlatZinc solver. Its total time (34 minutes and 1 second) is almost the same as plain Gecode's, while its runtime (25 minutes and 56 seconds) is significantly faster.
Solver | var select | val select | #solved | Total time | Runtime | Solvetime
69 | 2041.68s (34 minutes and 1 second) | 1556.97s (25 minutes and 56 seconds) | 1484.82s (24 minutes and 44 seconds)
69 | 2048.91s (34 minutes and 8 seconds) | 1563.16s (26 minutes and 3 seconds) | 1491.13s (24 minutes and 51 seconds)
69 | 2196.82s (36 minutes and 36 seconds) | 1712.9s (28 minutes and 32 seconds) | 1639.85s (27 minutes and 19 seconds)
68 | 2522.89s (42 minutes and 2 seconds) | 2045.56s (34 minutes and 5 seconds) | 1974.13s (32 minutes and 54 seconds)
68 | 2563.63s (42 minutes and 43 seconds) | 2085.042s (34 minutes and 45 seconds) | 2029.719s (33 minutes and 49 seconds)

Swedish wordlist

This wordlist contains 388493 Swedish words.

There is no plain Gecode runtime for this wordlist; instead this is just a comparison of the FlatZinc solvers. I wanted to include it in the benchmark for two reasons: to see how/if MiniZinc could handle this large wordlist, and because I'm quite curious about Swedish solutions to the problems (much because I am a Swede).

The best solver this time is Gecode/fz (using size_afc_min/indomain_min) with a runtime of 3 minutes and a solvetime of 42 seconds. The total time is much larger, though: almost 39 minutes. This means there is a massive overhead from flattening to FlatZinc and presenting the output.
Solver | var select | val select | #solved | Total time | Runtime | Solvetime
72 | 2330.14s (38 minutes and 50 seconds) | 181.685s (3 minutes and 1 second) | 42.129s (42 seconds)
72 | 2349.9s (39 minutes and 9 seconds) | 197.006s (3 minutes and 17 seconds) | 57.683s (57 seconds)
72 | 2393.97s (39 minutes and 53 seconds) | 255.495s (4 minutes and 15 seconds) | 116.09s (1 minute and 56 seconds)
72 | 2415.61s (40 minutes and 15 seconds) | 258.14s (4 minutes and 18 seconds) | 89.89s (1 minute and 29 seconds)
72 | 2413.29s (40 minutes and 13 seconds) | 258.3s (4 minutes and 18 seconds) | 89.9s (1 minute and 29 seconds)

Summary and conclusions

The objective of the benchmark was to see how well the MiniZinc model would compete with the plain Gecode model. For all wordlists there is at least one FlatZinc solver with a total time near plain Gecode's, and for one wordlist (SCOWL) there is a solver that is much faster. Comparing the reported runtimes, every wordlist with a plain Gecode run has a FlatZinc solver that is faster. For one wordlist (Swedish) there was no run of the plain Gecode model.

As mentioned above, there is no single FlatZinc solver/labeling that is the best for all wordlists. Comparing just the FlatZinc solvers, we see that Chuffed (with some labeling) was the best for SCOWL, 80.txt, and /usr/share/dict/words, whereas Gecode/fz was the best for the Swedish wordlist.

In the preliminary tests the best variable selection for Gecode/fz was size_afc_min, though the best value selection is not as clear. For Chuffed there is no single variable/value selection combination that dominates, though both first_fail and most_constrained often gave quite good results.

As has been noted many times in the CP field, these kinds of benchmarks can be misleading and of limited value. The comparison between the plain Gecode model and the MiniZinc model (+ FlatZinc solvers) might be even more problematic, since it compares at the same time:
  • two different CP systems: compiled C++ code vs. the MiniZinc ecosystem
  • two different approaches: element constraint on intersections vs. table constraint on words.
  • different time measurements
Still, it is probably not unfair to conclude that the MiniZinc model and the two tested FlatZinc solvers at least gave the plain Gecode model a good match in generating/solving the selected problem instances.

Also, there might be some parameter or labeling that is much better than those tested. This includes - of course - parameters of the plain Gecode model.

Further research

It would be interesting to study how well the table approach would do as a plain Gecode model. It would also be interesting to write a MiniZinc model using an element/alldifferent approach.


Here are the system and versions used in the benchmark:
  • Linux Ubuntu 11.04, 64-bit, 8 cores (i7 930, 2.80GHz) with 12GB RAM
  • Gecode: SVN version (based on the version 3.7.0) per 2011-09-21.
  • MiniZinc version: Version 1.4
  • Chuffed version: Version compiled 2011-04-17 (no explicit version).


Thanks to Christian Schulte and Guido Tack for some explanations and intuitions. Also, thanks to Geoffrey Chu and Peter Stuckey for the opportunity to test Chuffed.

September 21, 2011

G12 MiniZinc version 1.4 released

G12 MiniZinc version 1.4 has been released. Download it here.

From the NEWS:
G12 MiniZinc Distribution 1.4
Changes to the MiniZinc language:

* Input files are now encoded in UTF-8.

* A new built-in function, is_fixed/1, can be used to test whether a value is known to be fixed at flattening time.

* Two new built-in functions, iffall/1 and xorall/1, can be used to perform n-ary iff and xor operations on arrays of Booleans. They are defined as follows:

iffall([a1, a2, ..., aN]) <=> a1 xor a2 xor ... xor aN xor true

xorall([a1, a2, ..., aN]) <=> a1 xor a2 xor ... xor aN xor false
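As a sketch of these semantics in Python (rather than MiniZinc): xorall is true when an odd number of its arguments are true, and iffall is its complement, an n-ary xnor:

```python
from functools import reduce
import operator

def xorall(bs):
    """n-ary xor: a1 xor a2 xor ... xor aN xor false."""
    return reduce(operator.xor, bs, False)

def iffall(bs):
    """n-ary iff (xnor): a1 xor a2 xor ... xor aN xor true."""
    return not xorall(bs)

print(xorall([True, True, False]))  # → False
print(iffall([True, True, False]))  # → True
```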

Changes to the FlatZinc language:

* Variable and parameter names may now optionally be prefixed with one or more leading underscores. For example, the following are now valid FlatZinc variable names:


The rationale for this change is to provide a way for mzn2fzn to name introduced variables in a way that is guaranteed not to clash with the variable and parameter names from the MiniZinc model.

Names for predicates, predicate parameters and annotations cannot have leading underscores.

* A new FlatZinc builtin has been added: array_bool_xor/1.

Changes to the MiniZinc evaluation driver:

* A new command line option, --random-seed, can be used to specify a seed for the FlatZinc implementation's random number generator. (The options -r and --seed are synonyms for this option.)

The evaluation driver will invoke FlatZinc implementations with the -r option if it is invoked with --random-seed. (See the description of the FlatZinc command line interface in the minizinc(1) manual page for details.)

Changes to the G12 FlatZinc interpreter:

* There is now a domain consistent version of the alldifferent/1 constraint for G12/FD. (The domain consistent version is selected by annotating the constraint with the domain/0 annotation.)

* The --random-seed (-r) command line option described above is now supported by the interpreter.

Other changes in this release:

* The global constraints value_precede/3 and value_precede_chain/2 have been added to the MiniZinc globals library. The existing precedence/1 global constraint has been deprecated in favour of these.

* The problems from the 2011 MiniZinc challenge are now included in the MiniZinc benchmark suite.

Bugs fixed in this release:

* A bug in G12/FD's cumulative/4 global constraint that caused poor runtime performance when (extended) edge finding filtering was enabled has been fixed. [Bug #221]

* A bug that caused mzn2fzn to abort instead of printing an error message if a variable had the same name as a built-in function has been fixed. [Bug #231]

* A bug that caused an abort in flatzinc's LazyFD backend if there was an application of the built-in int_lin_le_reif/4 constraint with a zero coefficient has been fixed. [Bug #232]

* A bug that caused mzn2fzn and solns2out to abort when processing array literals whose elements were all empty set literals has been fixed. [Bug #256]

* mzn2fzn and solns2out no longer abort on array5d and array6d expressions. [Bug #259]

* The incorrect default definition of the int_set_channel/2 global constraint has been fixed. [Bug #255]

* A bug that caused mzn2fzn to use the wrong predicate name when flattening an application of a bodyless reified predicate definition has been fixed.

* The incorrect default decomposition of the roots/3 global constraint has been fixed.

* mzn2fzn and solns2out no longer abort if they encounter an arrayNd cast containing an empty array of decision variables.

* A bug that caused mzn2fzn to abort if a model contained an array lookup where the array expression was a literal containing only anonymous variables has been fixed. [Bug #68]

* A bug that caused reified var array lookups to be incorrectly flattened has been fixed. [Bug #244]

* Assignments to annotation variables are no longer emitted in MiniZinc output specifications. [Bug #269]

* solns2out now prints trailing comments in the solution stream if search terminates before it is complete. [Bug #270]

* A bug that caused a stack overflow in solns2out has been fixed. [Bug #272]

* A bug that caused a stack overflow in mzn2fzn on Windows systems has been fixed. [Bug #228]

September 13, 2011

MiniZinc Challenge 2011 Results

The results of the MiniZinc Challenge 2011 have been published.

From the result page:
The entrants for this year (with their descriptions, when provided):

BProlog. A CLP(FD) solver.
Bumblebee. Translates to SAT, uses CryptoMiniSAT.
Gecode. A C++ FD solver.
JaCoP. A Java FD solver.
Fzn2smt. Translates to SMT, uses Yices.
SCIP. A CP/MIP solver.

In addition, the challenge organisers entered the following FlatZinc implementations:

Chuffed. A C++ FD solver using Lazy clause generation.
CPX. A C++ FD Solver using Lazy Clause Generation.
G12/FD. A Mercury FD solver (the G12 FlatZinc interpreter's default solver).
G12/LazyFD. A Mercury FD solver using Lazy Clause Generation.
G12/CBC. Translates to MIP, uses Cbc version 2.6.2.
G12/CPLEX. Translates to MIP, uses CPLEX version 12.1.
G12/Gurobi. Translates to MIP, uses Gurobi Optimizer version 4.5.

As per the challenge rules, these entries are not eligible for prizes, but do modify the scoring results. Furthermore, entries in the FD search category (BProlog, Gecode, JaCoP, Chuffed, CPX and G12/FD) were automatically included in the free search category, while entries in the free search category (Bumblebee, Fzn2smt, SCIP, CBC, CPLEX, Gurobi and promoted FD entries) were automatically included in the parallel search category.
The slides for the presentation of the results at CP2011 are here (PDF).

Summary of results

The results for the MiniZinc Challenge 2011 were

Fixed search:
* Gold medal: Gecode
* Silver medal: JaCoP
* Bronze medal: BProlog

Free search:
* Gold medal: Gecode
* Silver medal: fzn2smt
* Bronze medal: JaCoP

Parallel search:
* Gold medal: Gecode
* Silver medal: fzn2smt
* Bronze medal: JaCoP

Congratulations to all!

As in the MiniZinc Challenge 2010, the Chuffed solver did very well on the benchmarks (though it was not eligible for a prize). More details in the presentation and on the result page (where the models and data instances are shown).

July 26, 2011

Guido Tack: libmzn - a prototype implementation of a modular compilation architecture for MiniZinc

Guido Tack has released libmzn, a prototype implementation of a modular compilation architecture for MiniZinc.

From the presentation:
This project proposes to develop an infrastructure for constraint modeling based on MiniZinc, rather than just a modeling language. This will ensure a greater impact of MiniZinc, and a better chance of it being accepted as a standard.

The infrastructure will feature Application Programming Interfaces (APIs) in C and C++ for both modeling and solving. This architecture will make it easy to integrate the MiniZinc toolchain into applications and general-purpose programming languages, as well as provide a direct interface for solver backends.
libmzn is presented in the position paper (to be presented at the 2011 MiniZinc workshop): libmzn - A modular CP infrastructure based on MiniZinc (PDF). The abstract:
The main obstacle to a successful integration of MiniZinc models within general applications is the monolithic, text-based interface to the MiniZinc toolchain. Both the frontend and the individual solver backends require a custom, error-prone implementation of data exchange via text files.

This position paper proposes to develop an infrastructure for constraint modeling based on MiniZinc, rather than just a modeling language. This will ensure a greater impact of MiniZinc, and a better chance of it being accepted as a standard. Such an infrastructure could be based on a modular library for MiniZinc, called libmzn, featuring Application Programming Interfaces (APIs) in C and C++ for both modeling and solving. This architecture will make it easy to integrate the MiniZinc toolchain into applications and general-purpose programming languages, as well as provide a direct interface for solver backends.

April 14, 2011

MiniZinc Challenge 2011 announced

Today the MiniZinc Challenge 2011 (the fourth) was announced:
The aim of the challenge is to start to compare various constraint solving technology on the same problems sets. The focus is on finite domain propagation solvers. An auxiliary aim is to build up a library of interesting problem models, which can be used to compare solvers and solving technologies.

Entrants to the challenge provide a FlatZinc solver and global constraint definitions specialized for their solver. Each solver is run on 100 MiniZinc model instances. We run the translator mzn2fzn on the MiniZinc model and instance using the provided global constraint definitions to create a FlatZinc file. The FlatZinc file is input to the provided solver. Points are awarded for solving problems, speed of solution, and goodness of solutions (for optimization problems).
See the MiniZinc Challenge 2011 -- Rules for more details. Note, for example, that the scoring procedure has been changed from previous competitions.

If you haven't already, now may be a good time to read the paper Philosophy of the MiniZinc Challenge by Peter J. Stuckey, Ralph Becket, and Julien Fischer (from 2010), which discusses the MiniZinc Challenge in more detail.

March 21, 2011

Version 1.3.2 of G12 MiniZinc released

Version 1.3.2 of MiniZinc has been released.

From the NEWS:
G12 MiniZinc Distribution 1.3.2

Bugs fixed in this release:

* We have fixed a series of problems in mzn2fzn and solns2out with flattening and printing of expressions that contain anonymous variables. (This also resolves bug #67.)

* A bug that caused mzn2fzn to abort if an annotation contained a string literal argument has been fixed. [Bug #212]

* A problem with mzn2fzn that caused it to sometimes not print error messages on Windows XP, despite errors being present, has been fixed. [Bugs #97 and #215]

Other changes in this release:

* The distribution now includes a language syntax definition for the Zinc family of languages for use with GtkSourceView. The definition is in the directory tools/gtksourceview.

And here is the release notes for version 1.3.1 (released some weeks ago):
G12 MiniZinc Distribution 1.3.1
Bugs fixed in this release:

* The CP-Viz support now correctly renders solutions for optimisation problems.

February 12, 2011

MiniZinc version 1.3 released

MiniZinc version 1.3 is released. Download here (snapshots can be downloaded here)

From the NEWS:
G12 MiniZinc Distribution 1.3

* New evaluation and output framework

We have implemented a new evaluation and output framework for MiniZinc that simplifies evaluating a model and producing output formatted according to the model's output item.

The new framework is based around two new tools. The first, solns2out, takes a model output specification produced by mzn2fzn, and reads the solution stream from a FlatZinc implementation. It then formats and prints each solution according to the output specification. An example of its use is as follows:

$ mzn2fzn model.mzn
$ flatzinc model.fzn | solns2out model.ozn

Model output specifications are contained in files with the ".ozn" extension. Such files are now generated by default by mzn2fzn.

The second new tool, named minizinc, is an evaluation driver that automates the process of evaluating a MiniZinc model. For example, the following command:

$ minizinc model.mzn

will flatten, evaluate, and generate formatted output for the specified model. The FlatZinc interpreter used by minizinc is pluggable, so any FlatZinc implementation that can be invoked from the command line can in principle be used with it. (The manual page for minizinc contains a complete description of how it interacts with the FlatZinc implementation.)

The minizinc program replaces the mzn script; since it also has support for CP-Viz, it replaces the minizinc-viz script as well. Unlike the scripts, the minizinc program works directly from the Windows command prompt, i.e. neither Cygwin nor MSYS is required to use it.
We have added some wrapper scripts (on Windows, batch files) around the minizinc program for invoking each of the G12 FlatZinc interpreter's backends with the appropriate global constraint definitions.

These new wrapper scripts are:

mzn-g12fd (Evaluate MiniZinc using G12/FD.)
mzn-g12lazy (Evaluate MiniZinc using G12/Lazy.)
mzn-g12mip (Evaluate MiniZinc using G12 and a MIP solver.)
mzn-g12sat (Evaluate MiniZinc using G12 and a SAT solver.)

For example, the following evaluates a MiniZinc model using the G12 Lazy Clause Generation solver:

$ mzn-g12lazy model.mzn

Changes to the MiniZinc language:

* The built-in operation show_cond/3 is no longer supported.

* The built-in annotation is_output/0 is deprecated. Support for it will be removed in a later release.

* The following operations, which were deprecated in MiniZinc 1.1, are no longer supported:

int: lb(array[$T] of var int)
float: lb(array[$T] of var float)
set of int: lb(array[$T] of var set of int)
int: ub(array[$T] of var int)
float: ub(array[$T] of var float)
set of int: ub(array[$T] of var set of int)
set of int: dom(array[$T] of var int)

Changes to the G12 MiniZinc-to-FlatZinc converter:

* The --no-output-pred-decls option is no longer supported.

* The --target-flatzinc-version is no longer supported.

Bugs fixed in this release:

* A bug in mzn2fzn that caused it to infer incorrect bounds on absolute value expressions has been fixed.

* A bug in mzn2fzn's optimisation pass that caused it to delete equality constraints between output variables has been fixed.

* The FlatZinc interpreter now rejects output_var/0 annotations on array declarations and output_array/1 annotations on scalar variable declarations.

Some notes

* One thing to note with the new minizinc program is that it can be used as a wrapper for all FlatZinc solvers, and it shows the nice output from the output section. Here is how to run Gecode/fz (the program name is fz) on a Rogo model (rogo2.mzn):
$ minizinc rogo2.mzn rogo_mike_trick.dzn -f "fz -mode stat -solutions 0"


x     : [2, 2, 2, 2, 3, 4, 5, 5, 5, 4, 4, 3]
y     : [2, 3, 4, 5, 5, 5, 5, 4, 3, 3, 2, 2]
points: [3, 0, 0, 1, 0, 0, 2, 0, 0, 2, 0, 0]
sum_points: 8

(2, 2): 3 points
(2, 3): 0 points
(2, 4): 0 points
(2, 5): 1 point
(3, 5): 0 points
(4, 5): 0 points
(5, 5): 2 points
(5, 4): 0 points
(5, 3): 0 points
(4, 3): 2 points
(4, 2): 0 points
(3, 2): 0 points

%%  runtime:       1.216 (1216.804000 ms)
%%  solvetime:     1.213 (1213.208000 ms)
%%  solutions:     3
%%  variables:     230
%%  propagators:   284
%%  propagations:  9641152
%%  nodes:         78247
%%  failures:      39121
%%  peak depth:    30
%%  peak memory:   712 KB
The output statement for this model is:

output [
  "x     : " ++ show(x) ++ "\n" ++
  "y     : " ++ show(y) ++ "\n" ++
  "points: " ++ show(points) ++ "\n" ++
  "sum_points: " ++ show(sum_points) ++ "\n"
] ++
[
  "(" ++ show(x[i]) ++ ", " ++ show(y[i]) ++ "): " ++
  show(points[i]) ++ if fix(points[i]) == 1 then " point"
                     else " points" endif ++ "\n"
  | i in 1..max_steps
] ++ ["\n"];

* Many of my MiniZinc models do not contain an explicit output statement yet; I will fix that.

January 27, 2011

FlatZinc solver fzn2smt 2.0 released

One of the new contestants in MiniZinc Challenge 2010 was fzn2smt and it did quite well:
  • Silver medal in the Free search category
  • Tied gold medal (with Gecode) in the Parallel search category
(For more details about the challenge, see the CP2010 presentation.)

From the fzn2smt page:
fzn2smt is a compiler from the FlatZinc language to the standard SMT-LIB language version 1.2. SMT stands for Satisfiability Modulo Theories: the problem of deciding the satisfiability of a formula with respect to background theories --such as linear arithmetic, arrays, etc-- for which specialized decision procedures do exist.

fzn2smt was designed with the idea in mind of help testing the adequacy of SMT technology outside the field of verification, where it has its roots. It aims at solving CSP instances with state-of-the art SMT solvers, by taking profit of recent advances in this tools and other already well-established and powerful implementation features of SAT technology such as non-chronological backtracking, learning and restarts, which seem to be rarely exploited in the context of Constraint Programming.

fzn2smt supports all standard data types and constraints of FlatZinc. The logic required for solving each instance is determined automatically during the translation, and the translation is done in a straightforward way at the current stage of development. Search annotations are ignored, as they do not make sense in the context of SMT. Only the alldifferent and cumulative MiniZinc global constraints are supported (encoding them into SMT).

The fzn2smt compiler is written in Java, and uses the ANTLR runtime for parsing. Working in cooperation with an SMT solver, fzn2smt is able to solve decision problems as well as optimization problems. However, since most SMT solvers do no support optimization, we have currently implemented it by means of iterative calls performing a binary search on the domain of the variable to optimize.

The output of fzn2smt could be fed into any SMT solver supporting the standard SMT-LIB language. By default works in conjunction with Yices 2 with the authorization of their authors, and was intended to be used only in the MiniZinc Challenge 2010, where the tool made good results.
See the fzn2smt page for installation instructions.
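The optimization-by-binary-search scheme described in the quote above can be sketched in Python, with the SMT solver abstracted as a yes/no oracle (satisfiable is a hypothetical stand-in for "does an SMT call with objective <= v return sat?"):

```python
def minimize(lo, hi, satisfiable):
    """Binary search for the smallest objective value v in [lo, hi]
    for which satisfiable(v) holds, assuming the oracle is monotone
    (once satisfiable, it stays satisfiable for larger v).
    Returns None if no value in the range is satisfiable."""
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if satisfiable(mid):
            best = mid      # a solution exists; try to tighten the bound
            hi = mid - 1
        else:
            lo = mid + 1
    return best

print(minimize(0, 100, lambda v: v >= 37))  # → 37
```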

Some comments

fzn2smt can sometimes quickly solve problems where other, more "traditional" CP solvers take much longer. However, since fzn2smt only generates a single solution, it is less useful for problems where all solutions are required, or for checking whether a problem has a unique solution (e.g. when debugging a model). Since I use fzn2smt mostly for harder/larger problems, I allow Java 4GB of memory: java -Xmx4096M fzn2smt -ce "yices -f" -i file.fzn

It's great that we now have yet another powerful tool for solving MiniZinc/FlatZinc problems.

January 09, 2011

Rogo grid puzzle in Answer Set Programming (Clingo) and MiniZinc

ASP (Clingo): rogo.lp, rogo2.lp
MiniZinc: rogo.mzn, rogo2.mzn
(See below for some problem instances.)

Later update: I have now also implemented versions with symmetry-breaking constraints in the two encodings: rogo2.lp and rogo2.mzn. See below for more info.

In Operations Research, Sudoko, Rogo, and Puzzles Mike Trick presented the Rogo puzzle, a grid puzzle where the object is to find a loop of a number of steps and get as many points as possible. He writes
The loop must be a real loop: it must return where it started and can’t cross itself or double back. Steps can be either horizontal or vertical: they cannot be diagonal. The loop cannot include any of the black squares. ... Rogo was created by two faculty members (Nicola Petty and Shane Dye)  at the University of Canterbury in Christchurch, New Zealand.  Not surprisingly, Nicola and Shane teach management science there:  the problem has a very strong operations research flavor.
From Creative Heuristics Ltd (the Rogo puzzle site): Rogo is an entirely new type of puzzle. The object is to collect the biggest score possible using a given number of steps in a loop around a grid. The best possible score for a puzzle is given with it, so you can easily check that you have solved the puzzle. Rogo puzzles can also include forbidden squares, which must be avoided in your loop..

Below I have assumed that the path must be exactly the given number of steps, and the programs are written with this assumption in mind (further reading of the problem descriptions supports this interpretation). However, my first approach - probably caused by sloppy reading - was that the optimal path could possibly be in fewer steps. I spent several hours trying to implement a program supporting this, but abandoned it after re-reading the problem descriptions. This seems to be a harder problem; maybe it could be used as a variant of the original problem?

I was inspired to solve these puzzles in part because of Mike's last words: Creating a solver would make a nice undergraduate project (and I suspect there are at least a few master's theses and perhaps a doctoral dissertation on algorithmic aspects of creating and solving these). One other reason was to see how to do this with Answer Set Programming - here using Clingo (a Potassco tool) - and to compare it with a Constraint Programming system, MiniZinc.

Some more links about Rogo:
  • Instructions
  • Rogo blog
  • YouTube clip.
  • Nicola Petty, Shane Dye: Determining Degree Of Difficulty In Rogo, A TSP-based Paper Puzzle (PDF)
    From the Conclusions: The Rogo puzzle format has a number of aspects that can be controlled to potentially affect degree of difficulty of solving. As a pilot, this study showed that there are many aspects of puzzle-solving related to the nature of the puzzle that can be explored, and there appear to be some general effects, though there are still marked individual differences between people solving the puzzles. This research has the potential to provide interesting insights into both human behaviour, and the nature of puzzles.

    Note: I didn't notice this paper until I was almost finished with this blog post (and have just glanced through it).


Here is the Rogo example from Mike Trick's site (pictures borrowed from his site; click on them for larger versions).


Rogo puzzle, problem.

One solution:

Rogo puzzle, solution

Note that there is no unique solution to these puzzles. All three problem instances I tested have more than one solution. For example, Mike Trick's problem has 48 solutions including path symmetries. Since there are 12 steps, there are (removing the path symmetry) 48 / 12 = 4 distinct paths. These four paths are shown below as MiniZinc solutions, where the first step has been fixed to (2,2), i.e. x[1]=2 and y[1]=2, and it gives 3 points (points[1]):

points = array1d(1..12, [3, 0, 0, 2, 0, 0, 2, 0, 0, 1, 0, 0]);
sum_points = 8;
x = array1d(1..12, [2, 3, 4, 4, 5, 5, 5, 4, 3, 2, 2, 2]);
y = array1d(1..12, [2, 2, 2, 3, 3, 4, 5, 5, 5, 5, 4, 3]);
points = array1d(1..12, [3, 0, 0, 2, 0, 0, 2, 0, 0, 1, 0, 0]);
sum_points = 8;
x = array1d(1..12, [2, 3, 3, 4, 5, 5, 5, 4, 3, 2, 2, 2]);
y = array1d(1..12, [2, 2, 3, 3, 3, 4, 5, 5, 5, 5, 4, 3]);
points = array1d(1..12, [3, 0, 0, 1, 0, 0, 2, 0, 0, 2, 0, 0]);
sum_points = 8;
x = array1d(1..12, [2, 2, 2, 2, 3, 4, 5, 5, 5, 4, 3, 3]);
y = array1d(1..12, [2, 3, 4, 5, 5, 5, 5, 4, 3, 3, 3, 2]);
points = array1d(1..12, [3, 0, 0, 1, 0, 0, 2, 0, 0, 2, 0, 0]);
sum_points = 8;
x = array1d(1..12, [2, 2, 2, 2, 3, 4, 5, 5, 5, 4, 4, 3]);
y = array1d(1..12, [2, 3, 4, 5, 5, 5, 5, 4, 3, 3, 2, 2]);
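As a sanity check outside the CP/ASP models, the first solution above can be verified with a few lines of Python: the path must be a closed loop of 4-neighbour steps, visit 12 distinct cells, avoid the black cells, and collect 8 points. The grid is transcribed from Mike Trick's instance (shown further down); this is just an illustrative check, not part of the models.

```python
# Verify one of the 12-step Rogo solutions printed above.
# Grid values: 0 = white, -1 = black, >0 = points (1-based rows/cols).
B = -1
grid = [
    [2, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 3, 0, 0, 1, 0, 0, 2, 0],
    [0, 0, 0, 0, 0, 0, B, 0, 2],
    [0, 0, 2, B, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 2, 0, 0, 1, 0],
]

x = [2, 3, 4, 4, 5, 5, 5, 4, 3, 2, 2, 2]  # rows of the path
y = [2, 2, 2, 3, 3, 4, 5, 5, 5, 5, 4, 3]  # columns of the path

def check_path(grid, x, y):
    path = list(zip(x, y))
    assert len(set(path)) == len(path)            # all cells distinct
    for (r1, c1), (r2, c2) in zip(path, path[1:] + path[:1]):
        assert abs(r1 - r2) + abs(c1 - c2) == 1   # 4-neighbour steps, loop closed
    assert all(grid[r - 1][c - 1] != B for r, c in path)  # no black cells
    return sum(grid[r - 1][c - 1] for r, c in path)

print(check_path(grid, x, y))  # total points collected: 8
```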

Answer Set Programming, Clingo

Here is the full ASP (Clingo) encoding, without the data:

% domains

% max number of steps

% define adjacency between cells
adj(R,C, R1,C1) :- rows(R;R1), cols(C;C1), |R-R1| + |C-C1|==1.

% the path: unique index
0 { path(I, Row, Col) : steps(I) } 1 :- rows(Row), cols(Col).
1 { path(I, Row, Col) : rows(Row) : cols(Col) } 1 :- steps(I).

% close the circuit: ensure that the first and last cells
% in the path are connected.
:- path(1, R1, C1), path(max_steps, R2, C2), not adj(R1,C1,R2,C2).

% remove bad paths
:- path(I-1,R1,C1), path(I,R2,C2), not adj(R1,C1, R2,C2).

% no black cells in the path
:- path(I, R,C), black(R,C).

% total points, needed since the
% "Optimization:" line doesn't show the proper value.
total(Total) :- Total = #sum[got_points(R,C,Value) = Value].

% list the cells in path with points
got_points(R,C, Value) :- point(R,C,Value), path(I, R, C).

% maximize the number of points
% #maximize [ path(I,R,C) : steps(I) : point(R,C,P) = P ].

% alternative: we can add an second objective to
% start with the cell with lowest indices
#maximize [ path(I,R,C) : steps(I) : point(R,C,P) = P@2 ].
#minimize [ path(1,R,C) = R*c+C@1].

#show path(I, Row, Col).
#show total(Total).
#show got_points(R,C,Value).
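The adj/4 predicate simply encodes 4-neighbour adjacency: two cells are adjacent iff their Manhattan distance is 1. A small Python sketch of the same definition, grounded over the 5x9 grid of Mike Trick's instance (the grid size is taken from the instance below; this is just an illustration of what the grounder produces):

```python
# adj(R,C,R1,C1) holds iff |R-R1| + |C-C1| == 1, as in the ASP encoding.
rows, cols = 5, 9

adj = [(r, c, r1, c1)
       for r in range(1, rows + 1) for c in range(1, cols + 1)
       for r1 in range(1, rows + 1) for c1 in range(1, cols + 1)
       if abs(r - r1) + abs(c - c1) == 1]

# Each undirected grid edge appears in both directions:
# rows*(cols-1) horizontal + cols*(rows-1) vertical edges, doubled.
print(len(adj))  # 2 * (5*8 + 9*4) = 152
```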

Here is the encoding for Mike Trick's problem instance:
#const max_steps = 12.
#const r = 5.
#const c = 9.

% the point cells

% black cells (to avoid)
The solution of Mike Trick's problem (edited), using the following command line:
clingo --heuristic=Vmtf --stat rogo_data_mike_trick.lp rogo.lp
total: 8

Statistics for this solution:
Models      : 1     
  Enumerated: 6
  Optimum   : yes
Optimization: 184 20 
Time        : 0.960
  Prepare   : 0.060
  Prepro.   : 0.020
  Solving   : 0.880
Choices     : 19826
Conflicts   : 16539
Restarts    : 1

Atoms       : 912   
Bodies      : 22839 
Tight       : Yes

  Deleted   : 10406 
Update With the following symmetry breaking added, the problem is solved in 0.58 seconds.

% symmetry breaking: the cell with the lowest coordinates
% should be in the first step
:- path(1, R, C), steps(Step), Step > 1, path(Step, R2, C2),
R*c+C > R2*c+C2.
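The symmetry breaking works by linearizing a cell (R,C) to the single number R*c+C (with c = 9 columns here) and requiring that the first step has the smallest such value along the loop. A quick Python illustration of this ordering, using one of the 12-step paths from the MiniZinc output shown elsewhere in this post:

```python
cols = 9  # number of columns, the constant c in the encoding

def cell_key(r, c):
    # Linearize (row, col) so cells can be compared as single numbers.
    return r * cols + c

# One of the 12-step paths through Mike Trick's grid.
path = [(2, 2), (3, 2), (4, 2), (4, 3), (5, 3), (5, 4),
        (5, 5), (4, 5), (3, 5), (2, 5), (2, 4), (2, 3)]

# With the symmetry breaking, the first cell must have the minimum key,
# so only one of the 12 rotations of each loop is a valid answer.
keys = [cell_key(r, c) for r, c in path]
print(keys[0] == min(keys))  # True: (2,2) has the lowest linearized value
```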

The statistics for this variant:

Time        : 0.580
  Prepare   : 0.080
  Prepro.   : 0.030
  Solving   : 0.470
Choices     : 8727
Conflicts   : 6914
Restarts    : 2
End of update

Some notes:
One nice feature in Clingo (and lparse) is that it is possible to have many optimization objectives. Here we first maximize the number of points (#maximize [ path(I,R,C) : steps(I) : point(R,C,P) = P@2 ]), and as a second objective (with lower priority @1) we minimize the start cell so the loop starts with the cell with the lowest coordinate: #minimize [ path(1,R,C) = R*c+C@1]. Sometimes this is faster than the plain #maximize objective, sometimes not.

The size of "core" of the encoding is quite small. Here is the code with the comments and the helper predicates (for outputs) removed.

adj(R,C, R1,C1) :- rows(R;R1), cols(C;C1), |R-R1| + |C-C1|==1.
0 { path(I, Row, Col) : steps(I) } 1 :- rows(Row), cols(Col).
1 { path(I, Row, Col) : rows(Row) : cols(Col) } 1 :- steps(I).
:- path(1, R1, C1), path(max_steps, R2, C2), not adj(R1,C1,R2,C2).
:- path(I-1,R1,C1), path(I,R2,C2), not adj(R1,C1, R2,C2).
:- path(I, R,C), black(R,C).
#maximize [ path(I,R,C) : steps(I) : point(R,C,P) = P ].

The corresponding "core" of the MiniZinc program (see below for the full code) is larger.

Constraint Programming, MiniZinc

Here is the full MiniZinc code (without data). Compared with the ASP approach, the decision variables are represented in another way: the path is represented by two arrays, x and y, and the points are collected in a separate array (points) so we can simply sum over it for the optimization.

include "globals.mzn";

int: W = 0; % white (empty) cells
int: B = -1; % black cells
int: max_val = max([problem[i,j] | i in 1..rows, j in 1..cols]);

% define the problem
int: rows;
int: cols;
int: max_steps; % max length of the loop
array[1..rows, 1..cols] of int: problem;

% the coordinates in the path
array[1..max_steps] of var 1..rows: x :: is_output;
array[1..max_steps] of var 1..cols: y :: is_output;

% the collected points
int: max_point = max([problem[i,j] | i in 1..rows, j in 1..cols]);
array[1..max_steps] of var 0..max_point : points :: is_output;

% objective: sum of points in the path
int: max_sum = sum([problem[i,j] | i in 1..rows, j in 1..cols where problem[i,j] > 0]);
var 0..max_sum: sum_points :: is_output;

% solve satisfy;
solve maximize sum_points;
% solve :: int_search(x ++ y, first_fail, indomain_min, complete) maximize sum_points;

% all coordinates must be unique
constraint forall(s in 1..max_steps, t in s+1..max_steps) (
   x[s] != x[t] \/ y[s] != y[t]
);

% calculate the points (to maximize)
constraint forall(s in 1..max_steps) (
   points[s] = problem[x[s], y[s]]
);
constraint sum_points = sum(points);

% ensure that there are no black cells
% in the path
constraint forall(s in 1..max_steps) (
   problem[x[s],y[s]] != B
);

% get the path
constraint forall(s in 1..max_steps-1) (
   abs(x[s] - x[s+1]) + abs(y[s] - y[s+1]) = 1
)
/\ % close the loop: the last and first cells must be adjacent
abs(x[max_steps] - x[1]) + abs(y[max_steps] - y[1]) = 1;

Except for the declarations of arrays and decision variables, this code doesn't contain more logic than the ASP encoding. It is, however, more verbose.

The solution for Mike Trick's problem, using LazyFD, takes 1.1 seconds, slightly slower than Clingo (see below for more comparisons of times):

points = array1d(1..12, [0, 0, 3, 0, 0, 2, 0, 0, 2, 0, 0, 1]);
sum_points = 8;
x = array1d(1..12, [2, 2, 2, 3, 3, 4, 5, 5, 5, 4, 3, 2]);
y = array1d(1..12, [4, 3, 2, 2, 3, 3, 3, 4, 5, 5, 5, 5]);

After some thought I decided to try the same symmetry breaking that was used as an option in the ASP encoding. It is implemented in rogo2.mzn and uses the following extra constraint, which ensures that the cell with the lowest coordinates is in the first step:

% symmetry breaking: the cell with lowest coordinates
% should be in the first step
constraint forall(i in 2..max_steps) (
   x[1]*cols+y[1] < x[i]*cols+y[i]
);

With this model, LazyFD solves Mike Trick's problem in 0.627 seconds. Also see under "Comparison" below.

End of update


Comparison

I was curious how well the systems would do, so here is a "recreational" comparison. Please don't read too much into it:
  • it is just 3 problems.
  • there are probably better heuristics for both Clingo and Gecode/fz.
The following 3 problem instances were used in the test. Unfortunately, I have not found any direct links for the two latter instances (see below for links to my encodings). Instead, a variant of the coding used in MiniZinc is shown, where "_" ("W" in the MiniZinc code) is a white/blank cell, "B" is a black cell to avoid, and a number represents the points of the cell.
  • Mike Trick's example. 5 rows, 9 column, 12 steps; good: 6, best: 8.
    %1 2 3 4 5 6 7 8 9  
     2,_,_,_,_,_,_,_,_, % 1
     _,3,_,_,1,_,_,2,_, % 2
     _,_,_,_,_,_,B,_,2, % 3
     _,_,2,B,_,_,_,_,_, % 4
     _,_,_,_,2,_,_,1,_, % 5
  • The Paper Rogo puzzle from Creative Heuristics Ltd for 20110106. 9 rows, 7 columns, 16 steps; good: 28, best: 31 points.
     %1 2 3 4 5 6 7
      B,_,6,_,_,3,B, % 1
      2,_,3,_,_,6,_, % 2
      6,_,_,2,_,_,2, % 3
      _,3,_,_,B,B,B, % 4
      _,_,_,2,_,2,B, % 5
      _,_,_,3,_,_,_, % 6
      6,_,6,B,_,_,3, % 7
      3,_,_,_,_,_,6, % 8
      B,2,_,6,_,2,B, % 9
  • The Paper Rogo puzzle from Creative Heuristics Ltd for 20110107. 12 rows, 7 columns, 16 steps; good: 34 points, best: 36 points.
     %1 2 3 4 5 6 7
      4,7,_,_,_,_,3, % 1
      _,_,_,_,3,_,4, % 2
      _,_,4,_,7,_,_, % 3
      7,_,3,_,_,_,_, % 4
      B,B,B,_,3,_,_, % 5
      B,B,_,7,_,_,7, % 6
      B,B,_,_,_,4,B, % 7
      B,4,4,_,_,_,B, % 8
      B,_,_,_,_,3,B, % 9
      _,_,3,_,4,B,B, % 10
      3,_,_,_,_,B,B, % 11
      7,_,7,4,B,B,B  % 12
For the ASP encoding I used clingo (a combined grounder and solver) with the parameter --heuristic=Vmtf after some minimal testing with different parameters. For MiniZinc, both the LazyFD and Gecode/fz solvers were used, and I settled with the plain solve maximize sum_points, which seems to be fast enough for this experiment. Note that LazyFD tends to behave best without any explicit search heuristics. (Also: there is always a better search heuristic than the one you settle with.)

The time reported is the total time, i.e. including the grounding time/convert to FlatZinc.

Update I have added the second version of the MiniZinc model, with the added symmetry breaking constraint, as a separate entry. End of update

Mike Trick problem

Clingo: 0.96 seconds, 19826 choices, 16539 conflicts, 1 restart.
Clingo with symmetry breaking: 0.58 seconds, 8727 choices, 6914 conflicts, 2 restarts.
LazyFD: 1.1 seconds (failures are not reported)
LazyFD with symmetry breaking: 0.6 seconds (failures are not reported)
Gecode/fz: 2.62 seconds, 92113 failures
Gecode/fz with symmetry breaking: 0.4 seconds, 9418 failures

20110106 problem

Clingo: 1:57.07 minutes, 1155290 choices, 1044814 conflicts, 1 restart
Clingo with symmetry breaking: 20.4 seconds, 157146 choices, 135178 conflicts, 3 restarts
LazyFD: 2:58 minutes
LazyFD with symmetry breaking: 19.9 seconds (failures are not reported)
Gecode/fz: 58.6 seconds, 1380512 failures
Gecode/fz with symmetry breaking: 7.8 seconds

20110107 problem

Clingo: 3:13.72 minutes, 1541808 choices, 1389396 conflicts, 1 restart
Clingo with symmetry breaking: 31.6 seconds, 178301 choices, 151439 conflicts, 1 restart
LazyFD: 2:55.18 minutes
LazyFD with symmetry breaking: 44.5 seconds (failures are not reported)
Gecode/fz: 1:54.50 minutes, 2577853 failures
Gecode/fz with symmetry breaking: 11.3 seconds

Here we see that Gecode/fz (without symmetry breaking) is the fastest for the two larger problems (but the slowest on the first), and Clingo and LazyFD each placed second on one of the harder problems. So I'm not really sure we can draw any real conclusions from this small test.

Update With symmetry breaking added, the result is more clear-cut. All three solvers benefit greatly from it. Gecode/fz is the fastest on all three problems, and the other two still take one second place each. We also see how much symmetry breaking can do. End of update

Some comments/conclusions

Here are some general comments about the problem and the coding.


As mentioned above, my initial assumption was that the given number of steps was not always the best path length, and trying to encode this was not easy. After a long time with this approach (say 6 hours of coding time?), I re-read the description, couldn't find any real support for this assumption, and skipped it in favor of the "fixed length" approach.

Getting to the final version took several hours, say 9-10 hours of total coding (in smaller sessions over several days). This includes the time for coding the initial assumption/interpretation. It also includes the time spent trying to get the first approach to work with the incremental solver iclingo, which - I hoped - would first try the longest length and, if that gave no solution, try shorter lengths in decrements of 1; but I didn't get this to work.

As usual I realized that several of the predicates I first thought were needed could simply be thrown away. An example is this snippet, which ensures "positively" that there is a path between the cell (R1,C1) and the cell (R2,C2).

{ x(I-1, R1, C1) : adj(R1,C1, R2,C2) : rows(R1;R2) : cols(C1;C2) : not black(R1,C1) : not black(R2,C2) } max_steps :- x(I, R2, C2), steps(I).

Instead it could be replaced with:

:- x(I, R,C), black(R,C).
:- x(I-1,R1,C1), x(I,R2,C2), not adj(R1,C1, R2,C2).

Well, I hope that with time I'll recognize these things faster.


It took about 30 minutes to code the MiniZinc model (after an unnecessarily long detour of debugging bad thinking and a bug in the data). The reason for this much shorter time is twofold:
  • It's much easier to code something when the general strategy (approach) for the problem is known and has been coded in another system. All the (de)tours made when writing the ASP version made me much more comfortable with the given problem.
  • I'm much more comfortable coding in MiniZinc than in ASP, for two reasons: 1) I have programmed in MiniZinc much longer than in ASP. 2) I am also more used to the high-level CP approach of stating the requirements/constraints with traditional loops and arrays/matrices.
Programs and data files

Here are all the files used, both programs and data files, for the two systems.

ASP (Clingo)

December 13, 2010

Christmas Company Competition Problem: Mixing teams

This blog post is my entry in December Blog Challenge: O.R. and the Holidays. (Note: since I'm not an INFORMS member, this entry might get disqualified.)

This week my company (the local office) is having the annual Christmas gathering and we will - after eating some good Brazilian food - go bowling.

Mixing the teams as well as possible in this kind of gathering can be quite important, and I have created a MiniZinc model (company_competition.mzn) for this.

Problem statement

In this problem I have decided that the teams should be picked (mixed) according to the following requirements (see below for other considerations):
  • We are 18 contestants in total and there should be 4 or 5 persons in each team, which gives 4 teams. (Different team sizes are discussed below.)
  • There should be as even a distribution of the sexes in each team as possible. There are 12 males and 6 females.
  • There are 3 departments (IT, Customer relations 1, Customer relations 2) and these should be mixed as much as possible. As it happens, each of the 3 departments consists of 6 persons.
  • The managers for each department should be in different teams, if possible.
The number of violations of these requirements is then minimized (the variable z in the model).
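Given the first requirement, the number of teams and the 4-or-5 split follow directly from the arithmetic; a tiny Python sketch of that reasoning (illustrative only - the model itself derives the team sizes with constraints):

```python
n, t_size = 18, 4                 # contestants and base team size
num_teams = n // t_size           # 18 // 4 = 4 teams
extra = n - num_teams * t_size    # 2 people left over
# Spread the leftovers: 'extra' teams get t_size+1 members.
sizes = [t_size + 1] * extra + [t_size] * (num_teams - extra)
print(num_teams, sizes)  # 4 [5, 5, 4, 4]
```

These sizes, [5, 5, 4, 4], are exactly the Team_size values in the solution shown below.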

MiniZinc model

The MiniZinc model used is company_competition.mzn.

This model is slightly simplified and includes our first names and departments for realism (I'm "hakan", as you may have guessed). For general use of this model - e.g. for next year's Christmas competition, or by some other company - the problem instance should be in a separate data file (this is easy to fix), but for clarity I've kept everything in the same file.

The hardest parts in modeling this problem were the following, which required a lot of experimenting:
  • The way the violations are measured is very important for getting a good (fair) mixing, and took quite some time to settle. Some of the rejected measurements have been kept (commented out) in the model. See below for a comparison of two different measurements.
  • Unsurprisingly, it took quite some time to get the labeling as good as possible. There may - of course - be a better labeling, but I have not found one.
  • Testing different symmetry breaking constraints.


For Gecode/fz, MiniZinc/fd, and JaCoP/fz the optimal value of z (14) was found almost immediately (< 1 second). However, it then took quite a while to prove that this was the optimal value. Here are the times (including generating the FlatZinc file) and the number of failures for each solver:
  • FzTini: 52 seconds (no failures reported)
  • Gecode/fz: 1:37 minutes, 6213161 failures
  • JaCoP/fz : 4:02 minutes, 6591510 failures
  • MiniZinc/fd: 4:30 minutes, 3726 choice points explored (it doesn't report the number of failures)
  • SMT : 11:49 minutes (no failures are reported)
  • ECLiPSe/ic: 14:50 minutes (no support for set_search, so I simply commented it out)
  • ECLiPSe/fd: 16:32 minutes (no support for set_search, so I simply commented it out)
  • LazyFD : > 1 hour
  • Choco/fz : error (the solver didn't like the way I use set variables)
  • SICStus/fz: error (ibid)
  • SCIP: error (doesn't handle sets)
Later update
Thanks to Joachim Schimpf (ECLiPSe team) I found a small bug in the manager constraint. This caused some solvers to behave badly: ECLiPSe/ic, ECLiPSe/fd, and FzTini. After the fix, FzTini solves the problem in 52 seconds, which is the fastest. ECLiPSe doesn't have support for set_search, so I just commented it out when running its solvers, which may degrade their performance.
End of update

For the presentation of the results, however, the MiniZinc helper program mzn was used, since it is the only way to show the output statements. This additional constraint was also added:
/\ z = 14
Also, please note that to get a "nice" mix of sexes and departments I actually did this in two steps: 1) Run with minimize z to obtain the minimum value (as described above), ignoring (in principle) the specific mixing. 2) Before running mzn, change the labeling somewhat by adding team_sex first in the labeling list (it was not used in the labeling for optimization), since the distribution of sexes tends to be somewhat off. This approach seemed easier than looking through many thousands of solutions (with the optimal value of z).

One solution

Below is one solution (of many) which seems to have a quite fair mixing: the departments and sexes are mixed very well. Since the number of competitors (18) doesn't divide evenly by t_size (4), we allow team sizes of either t_size (4) or t_size+1 (5).
z (#violations): 14

Teams: [{1, 7, 8, 13, 17}, {2, 5, 9, 10, 18}, {3, 6, 11, 14}, {4, 12, 15, 16}]

Team Departments:
1 2 2
2 2 1
2 1 1
1 1 2

Team Sexes:
3 2
3 2
3 1
3 1

Team_size: [5, 5, 4, 4]

which_team: [1, 2, 3, 4, 2, 3, 1, 1, 2, 2, 3, 4, 1, 3, 4, 4, 1, 2]
1: hakan	1
2: andersj	2
3: robert	3
4: markus	4
5: johan	2
6: micke	3
7: alex	        1
8: andersh	1
9: jennyk	2
10: kenneth	2
11: sara	3
12: cecilia	4
13: stefan	1
14: jacob	3
15: roger	4
16: henrik	4
17: line	1
18: hanna	2

The teams:
Team 1: hakan(M,it) alex(F,cr1) andersh(M,cr1) stefan(M,cr2) line(F,cr2) 
Team 2: andersj(M,it) johan(M,it) jennyk(F,cr1) kenneth(M,cr1) hanna(F,cr2) 
Team 3: robert(M,it) micke(M,it) sara(F,cr1) jacob(M,cr2) 
Team 4: markus(M,it) cecilia(F,cr1) roger(M,cr2) henrik(M,cr2) 

johan(it) belongs to team 2
cecilia(cr1) belongs to team 4
stefan(cr2) belongs to team 1

Some explanations

The mixing of departments ("Team Departments"):
1 2 2  (team 1)
2 2 1  (team 2)
2 1 1  (team 3)
1 1 2  (team 4)
means that the first team consists of 1 person from department 1 (it), and 2 persons from departments 2 (cr1) and 3 (cr2) respectively. And so on.
Team Sexes:
3 2
3 2
3 1
3 1
shows the number of males and females for each team.

It took Gecode/fz 6:20 minutes (and 6006337 failures) to generate all the 467424 optimal solutions (where z = 14).

Different violation measurements: departments

As stated above, measuring the violations was one of the hardest parts. The measure I selected as best for the department mixing was the following:
sum(t in 1..num_teams, d1,d2 in 1..num_departments where d1 < d2) (abs(team_departments[t,d1] - team_departments[t,d2]))
One alternative is to measure the department "mixedness" against some ideal value: the team size divided by the number of departments:
sum(t in 1..num_teams, d in 1..num_departments) (abs(team_departments[t,d] - (team_size[t] div num_departments)))
Using this latter version we get the following as the first solution from mzn, but it doesn't look as fair as the first variant shown above: both the mixing of the departments and of the sexes could be better. Note: I realize that there are many solutions with z = 12, and there may be some other optimal solution that looks more fair.
z (#violations): 12

Teams: [{1, 7, 8, 9, 13}, {2, 5, 10, 14, 17}, {3, 6, 11, 15}, {4, 12, 16, 18}]

Team Departments:
1 3 1
2 1 2
2 1 1
1 1 2

Team Sexes:
3 2
4 1
3 1
2 2
Here are the times to solve this optimization problem (to prove that z = 12 is the optimal value) for the three fastest solvers above:
  • Gecode/fz: 1:30 minutes, 6502528 failures
  • JaCoP/fz : 3:45 minutes, 6885666 failures
  • MiniZinc/fd: 4:33 minutes, 21 choice points explored
Well, it seems that the times and the number of failures are about the same as for the first measurement for Gecode/fz and MiniZinc/fd; JaCoP/fz is slightly faster.
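The two department measures can be compared directly in Python on the "Team Departments" matrices of the two solutions above (team sizes [5, 5, 4, 4] in both). Note that the numbers below are only the department-mixing component of z, which also aggregates the sex mixing and the manager placement:

```python
from itertools import combinations

def pairwise_measure(team_departments):
    # Sum of |d1 - d2| over all department pairs within each team
    # (the measure selected in the model).
    return sum(abs(a - b)
               for team in team_departments
               for a, b in combinations(team, 2))

def ideal_measure(team_departments, team_sizes):
    # Deviation from the ideal count: team size div number of departments
    # (the rejected alternative).
    return sum(abs(d - size // len(team))
               for team, size in zip(team_departments, team_sizes)
               for d in team)

sizes = [5, 5, 4, 4]
first  = [[1, 2, 2], [2, 2, 1], [2, 1, 1], [1, 1, 2]]  # solution with z = 14
second = [[1, 3, 1], [2, 1, 2], [2, 1, 1], [1, 1, 2]]  # solution with z = 12

print(pairwise_measure(first), pairwise_measure(second))     # 8 10
print(ideal_measure(first, sizes), ideal_measure(second, sizes))  # 6 6
```

Interestingly, under the pairwise measure the first solution's departments look better mixed (8 vs 10), while the ideal-value measure cannot tell the two apart (6 vs 6) - which matches the observation above that the second solution doesn't look as fair.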

Different team sizes

Given the same problem instance, constraints, and search labeling, how do different team sizes change the time to solve the problem? By changing t_size we see that a team size of 4 was the hardest.

For t_size = 3 it took Gecode/fz 15 seconds (742196 failures) to find that it is quite easy to pick mixed teams. This seems to be an optimal mixing:
1 1 1 
1 1 1 
1 1 1 
1 1 1 
1 1 1 
1 1 1 

2 1 
2 1 
2 1 
2 1 
2 1 
2 1 

team_size: 3 3 3 3 3 3

which_team: 1 2 3 4 5 6 1 2 3 4 5 6 1 3 5 6 2 4
z = 6;

For t_size = 5 it took Gecode/fz 2.2 seconds (122300 failures) to get the following optimal solution (z = 6). However, a better mixing of the sexes must be sought in another optimal solution.
2 2 2 
2 2 2 
2 2 2 

5 1 
4 2 
3 3 

z = 6;
For t_size = 6 we get the same solution as for 5, but it took Gecode/fz slightly longer: 3.5 seconds (202517 failures).

Final notes

The mixing model presented above can - of course - be used for competitions other than Christmas company competitions.

Also, other mixing requirements could have been taken into consideration, such as:
  • different offices: people from different offices (say different cities) should be mixed
  • ages: a fair mixing of age groups may have some merit.
  • time of employment.
  • experience in the target activity of the competition: if some are very experienced in the target activity (e.g. former bowling pros), they should be put in different teams. Some kind of handicap system might also be used, e.g. such that these pros count as two persons, etc.
  • it might also be that some persons cannot stand each other. Depending on the management principle, these should either be in the same team (to learn to cooperate) or in different teams (so they don't ruin a team). The opposite case, i.e. where two persons are a couple (or family, etc.), may be handled in the same way.

Christmas related

Related to this (Christmas and OR) are the two Secret Santa models I wrote about a year ago: Merry Christmas: Secret Santas Problem and 1 year anniversary and Secret Santa problem II.

December 07, 2010

MiniZinc version 1.2.2 released

MiniZinc version 1.2.2 has been released. It can be downloaded here.

From NEWS file:

G12 MiniZinc Distribution 1.2.2

Changes to the MiniZinc language:

* We have added a new built-in function trace/2 that can be used to
print debugging output during flattening, for example the following
MiniZinc fragment:

constraint forall (i in 1 .. 5) (
trace("Processing i = " ++ show(i) ++ "\n",
x[i] < x[i + 1]

will cause mzn2fzn to print the following as the above constraint
is flattened:

Processing i = 1
Processing i = 2
Processing i = 3
Processing i = 4
Processing i = 5

Other changes in this release:

* The FlatZinc interpreter's -s option is now a synonym for the --solver-statistics option instead of the --solver-backend option.

* The FlatZinc interpreter's LazyFD backend can now print out the number of search nodes explored after each solution is generated.

* The mzn script has been extended so that comments in the FlatZinc output stream, such as those containing solver statistics, are printed after the output produced by processing the output item.

Bugs fixed in this release:

* A bug that caused the mzn script to abort if the model contained a large array literal has been fixed.

November 24, 2010

MiniZinc version 1.2.1 released

MiniZinc version 1.2.1 released. Download.

From the NEWS:

G12 MiniZinc Distribution 1.2.1
Bugs fixed in this release:

* Flattening of expressions containing the built-in operation log/2 no longer causes mzn2fzn to abort.

* We have fixed a number of bugs in mzn2fzn and flatzinc that caused them to abort when processing large array literals.

* We have fixed a bug in mzn2fzn that caused it to generate variable declarations in which the variable was initialised with an assignment to itself.

* A bug that caused mzn2fzn to abort if it encountered of an empty array of Boolean, integer or float decision variables as a predicate application argument has been fixed. [Bug #187]

* Some bugs in mzn2fzn's optimisation pass that resulted in dangling variable references in the generated FlatZinc have been fixed.

November 14, 2010

MiniZinc version 1.2: More about CP-Viz and some models changed

In MiniZinc version 1.2 released I cited the news in MiniZinc version 1.2. Here I describe a little more about the support for CP-Viz (visualization of MiniZinc models), and also which of my MiniZinc models were changed to comply with this version.


Ever since I watched Helmut Simonis' excellent ECLiPSe ELearning videos some years ago, I have wanted to be able to play with the kind of visualizations shown in those videos, i.e. ones that show both the search tree and - above all - how/when the variables were assigned values or had values removed from their domains, etc.

MiniZinc version 1.2 now has (limited) support for CP-Viz, a Java program supporting different methods of visualizing constraint programming models. CP-Viz is presented in the paper A Generic Visualization Platform for CP by Helmut Simonis, Paul Davern, Jacob Feldman, Deepak Mehta, Luis Quesada, and Mats Carlsson. Slides from the CP-Viz presentation at CP 2010 are also available.

Here is a visualization of the 8-queens problem (requires a web browser with support for SVG files), using an array of 8 variables (the rows), each having the domain 1..8 (columns). To see the progress, either click Forward a couple of times manually, or Animate for a nice animation.

The last event of this progress is shown in the picture below (it's a screenshot; click on it for a larger version). The left pane shows the search tree, and the right pane the grid of variables and their assigned/removed values. An explanation of the colors is given below (type vector).

Here is the MiniZinc model used for the visualization, queens_viz.mzn, slightly edited:
include "globals.mzn";
int: n = 8;
array[1..n] of var 1..n :: is_output: queens :: viz([

solve :: int_search(

constraint
    forall(i, j in 1..n where i < j) (
         queens[i] != queens[j] /\
         queens[i] + i != queens[j] + j /\
         queens[i] - i != queens[j] - j
    );
output [ show(queens) ++ "\n" ];
Supported types
There are a couple of visualization types supported. The descriptions are from Mark Brown's Visualizing MiniZinc models with CP-Viz Version 1.2 (file doc/pdf/mzn-viz.pdf in the distribution):
  • vector
    This is the type shown above.
    Shows a grid with one column per variable and one row per domain value. The variable (column) selected at the current search node is enclosed in a rectangle. Squares are color coded as follows:
    • red: failed or assigned value
    • pale green: value in domain
    • dark cyan: value just removed from domain
    • white: value earlier removed from domain
  • vector_waterfall
    Shows a grid with one column per variable and one row per search level.
  • vector_size
    Shows a graph of domain size versus depth for the current derivation.
  • binary_vector
    Shows a row with one square per variable
  • alldifferent
    This visualization type is used in the same way as vector.
Known limitations
The last section in Visualizing MiniZinc models with CP-Viz Version 1.2 describes the current known limitations:
The following limitations of minizinc-viz will be addressed in future releases.
  • Only the fd backend is supported.
  • Not many visualization types are supported.
  • MiniZinc should choose sensible default values for the optional parameters.
  • Not much error checking is currently performed. Bad annotations may be silently ignored, or may lead to bad input being given to CP-Viz. MiniZinc does not try to validate the XML it generates.
I'll wait eagerly for further support on this...

Changed models

Due to some changes in version 1.2, I had to change my existing MiniZinc models. The changes varied, but here are the most common (I also made some other small fixes):
  • Changed global_cardinality/2 to global_cardinality_old
    Although global_cardinality_old is deprecated, I chose it since right now only MiniZinc's own solvers support the new global_cardinality/3 version. When other FlatZinc solvers support the new version I will change to that.
  • limit to limitx
    limit is an annotation now.
  • time to timex
    time is an annotation now.
These models have also been updated in the G12 SVN repository.

November 12, 2010

MiniZinc version 1.2 released

MiniZinc version 1.2 has been released (download). From NEWS:

G12 MiniZinc Distribution 1.2
* CP-Viz support

We have added support for visualizing MiniZinc models using CP-Viz to
the FlatZinc interpreter's FD backend.

See the ``Visualizing MiniZinc models with CP-Viz'' guide in the doc
directory for further details.

* New MiniZinc tutorial

We have added a new MiniZinc tutorial. It introduces the MiniZinc language
in much greater depth than the old tutorial and includes chapters on
predicates, search, and effective modelling practices.

* XML-FlatZinc redesigned

We have redesigned the XML representation of FlatZinc. The new version
is much less verbose than previous version of XML-FlatZinc. The conversion
tools, fzn2xml and xml2fzn, have been updated to work with new version.

Note that the new version of XML-FlatZinc is *not* compatible with previous
versions of XML-FlatZinc.

* FlatZinc to XCSP converter

We have added a new tool, fzn2xcsp, that converts FlatZinc model instances
into XCSP 2.1 format. The MiniZinc globals library contains a new set of
solver-specific constraints in the directory "xcsp" for use with models that
are going to be converted into XCSP.

Changes to the MiniZinc language:

* The following built-in operation has been removed from MiniZinc:

int: dom_size(array[$T] of var int)

Changes to the FlatZinc language:

* The following built-in constraints have been removed from FlatZinc:





* Constrained type-insts for parameters are no longer supported in FlatZinc.
For example, the following parameter declarations are no longer allowed:

1..10: p = 4;
1.0..10.0: f = 5.0;
set of {2, 5, 6} = {2, 5};
array[1..2] of set of 1..5 = [{}, {3}];

Constrained type-insts may still appear in variable declarations and
also as the argument types in predicate declarations.

Changes to the G12 MiniZinc-to-FlatZinc converter:

* We have added a FlatZinc optimisation pass to mzn2fzn. This pass is
enabled by default. Turning the optimiser off (see the '--no-optimise'
option) results in faster conversion, but may leave certain obvious
simplifications for the backend to handle. In particular, unoptimised
FlatZinc models are likely to contain many intermediate variables with
known values.

* mzn2fzn supports a new command line option that allows model data to
be specified directly on the command line. The new option is
'--cmdline-data', or '-D' for short. An example of its use is:

mzn2fzn -D "n = 4;" queens.mzn

The above causes the parameter assignment "n = 4;" to be included
when flattening queens.mzn.

Changes to the G12 MiniZinc interpreter:

* The deprecated 1-pass MiniZinc interpreter, minizinc, has been removed from
the distribution.

Changes to the G12 FlatZinc interpreter:

* "indomain_random" is now supported as a value choice method for integer
search in the FD backend.

* We have significantly improved the worst-case complexity of the element
constraint in G12/FD.

Other changes in this release:

* The problems from the 2010 MiniZinc challenge are now included in the
MiniZinc benchmark suite.

* The following new global constraints have been added to the MiniZinc
globals library:


* We have modified the interface to the global_cardinality constraint
so that it conforms more closely to the description in the Global
Constraint Catalog. The new interface is:

global_cardinality(array[int] of var int: x,
array[int] of int: cover,
array[int] of var int: counts);

The old definition of the global_cardinality constraint is still
available under the name global_cardinality_old, but it is now
deprecated and will be removed in a future release.

* We have added "closed" versions of the global_cardinality and
global_cardinality_low_up constraints. In the closed versions
the decision variables are restricted to taking their values from
the cover. The closed forms are named:


Bugs fixed in this release:

* Flattening of expressions containing the built-in operation dom_size/1
is now supported. [Bug #158]

* A bug that caused mzn2fzn to erroneously report that a model was inconsistent
if the condition of an if-then-else was false has been fixed. [Bug #158]

* Output annotations are now attached to decision variables that only occur
in the output expression in the where clause of a comprehension. [Bug #160]

* mzn2fzn now outputs all parameter declarations before any variable
declarations, as the FlatZinc specification requires.

* A bug that caused mzn2fzn to erroneously treat assignments to string
parameters as a source of model unsatisfiability has been fixed. [Bug #170]

* The FlatZinc interpreter now emits an error if overloaded predicate
declarations are encountered.

Note: There are a few changes in this version which may break existing models, for example that
global_cardinality now has 3 arguments instead of 2 (one may use global_cardinality_old instead, but it is deprecated). During the next days I will update my MiniZinc models to comply with this version.
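To see what the change means in practice, here is a small sketch of a call using the new 3-argument interface (my own illustration, not taken from the release notes):

```minizinc
% Sketch of the new global_cardinality/3 interface:
% counts[i] is the number of occurrences of cover[i] in x.
include "globals.mzn";

array[1..5] of var 1..3: x;
array[1..3] of var 0..5: counts;

constraint global_cardinality(x, [1, 2, 3], counts);
constraint counts = [2, 2, 1];  % e.g. two 1s, two 2s, one 3

solve satisfy;
```

With global_cardinality_old the same counts array was implied by the second argument alone; the new form makes the cover values explicit.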

November 08, 2010

Comparison of some Nonogram solvers: Google CP Solver vs Gecode and two MiniZinc solvers

After writing Google CP Solver: Regular constraint, Nonogram solver, nurse rostering, etc yesterday, I thought it would be interesting to benchmark the new Nonogram solver written in Google CP Solver against some other solvers. The benchmark is of course from Jan Wolter's great Survey of Paint-by-Number Puzzle Solvers, though I compare only with Gecode and two MiniZinc solvers: Gecode/fz (Gecode's FlatZinc solver) and MiniZinc's LazyFD, since I know them best and can test them myself.

In the table below, I have kept Wolter's original values in parentheses to get a feeling for the differences between our machines, and to check against the problem instances I could not run with Gecode.

System notes:
  • The timeout is 30 minutes as in Wolter's test.
  • My machine is a Linux Ubuntu 10.4 LTS with 12Gb RAM and 8 cores (Intel i7/930, 2.80GHz), though all programs ran on a single processor.
  • Also, I was "working" (Spotify, web surfing, running other tests in parallel, etc) on the machine as the test ran.
  • Due to copyright issues, I cannot distribute the examples. They can be downloaded from Wolter's Sample Paint-by-Number Puzzles Download.
And here are some comments about the different solvers.

Google CP Solver

Google CP Solver revision 259, with revision 259 of the Python solver, which includes a lot of nice improvements, thanks to Laurent Perron.


Gecode and Gecode/fz are version 3.4.2 (latest svn revision: 11517).
Please note that I have only tested the problem instances in the existing files for Gecode. Hence a lot of instances were not tested on my machine (they are marked N/A).

Also, I think that Wolter's time for Petro should be 1.4s (not 1.4m).


MiniZinc version: 64-bit Linux, snapshot version as of 2010-11-05.

Note: In contrast to Wolter's results, the times for Gecode/fz and LazyFD include generating the FlatZinc file, which may be considerable for large problem instances. Hence some of my results are larger for these solvers.

Some comments about Google CP Solver / Python solver

Wolter has done a great job analyzing the three other solvers (Gecode, Gecode/fz, and LazyFD), so I will just comment on the Google CP Solver.

It looks like the Google CP Solver/Python solver is quite similar to the Gecode/fz solver, and in some cases (e.g. 9-Dom, Bucks, Tragic, Petro, etc.) it has exactly the same number of failures. There are some exceptions, though:
  • Solved Merka and Dicap quite fast. Gecode/fz timed out for both these problems
  • Also solved Flag fast where Gecode/fz timed out. Here it is also faster than LazyFD and Gecode (note: Wolter's time).
  • Slower than Gecode/fz on Karate, Signed
  • Slightly slower than Gecode/fz on Tragic, with the same number of failures

Comparison - table

Here is the full table. The numbers in parentheses are Wolter's times; the second row is my own timing. I also added the number of failures where applicable; LazyFD always returned 0 choice points so I skipped that. A + indicates a timeout (30 minutes).

The links for the puzzles go to Wolter's puzzle pages.
Puzzle Size Notes Gecode MiniZinc
CP Solver
#1: Dancer* 5 x 10 Line (0.01s)
0.01s/0 failures
0.08s/0 failures
0 failures
#6: Cat* 20 x 20 Line (0.01s)
0.1s/0 failures
0.1s/0 failures
0 failures
#21: Skid* 14 x 25 Line, Blanks (0.01s)
0.01s/0 failures
0.09s/13 failures
0 failures
#27: Bucks* 27 x 23 Blanks (0.01s)
0.2s/2 failures
0.1s/3 failures
3 failures
#23: Edge* 10 x 11   (0.01s)
0.01s/15 failures
0.09s/25 failures
25 failures
#2413: Smoke 20 x 20   (0.01s)
0.11s/5 failures
8 failures
#16: Knot* 34 x 34 Line (0.01s)
0.01s/0 failures
0.14s/0 failures
0 failures
#529: Swing* 45 x 45 Line (0.02s)
0.02s/0 failures
0.24s/0 failures
0 failures
#65: Mum* 34 x 40   (0.02s)
0.18s/20 failures
22 failures
#7604: DiCap 52 x 63   (0.02s)
0.29s/0 failures
0 failures
#1694: Tragic 45 x 50   (0.14s)
2:14m/394841 failures
394841 failures
#1611: Merka* 55 x 60 Blanks (0.03s)
27 failures
#436: Petro* 40 x 35   (1.4m[s?])
0.05s/48 failures
1.15s/1738 failures
1738 failures
#4645: M&M 50 x 70 Blanks (0.93s)
0.41s/89 failures
82 failures
#3541: Signed 60 x 50   (0.57s)
0.61s/929 failures
6484 failures
#803: Light* 50 x 45 Blanks (+)
#6574: Forever* 25 x 25   (4.7s)
1.5s/17143 failures
17143 failures
#2040: Hot 55 x 60   (+)
#6739: Karate 40 x 40 Multiple (56.0s)
38.0s/215541 failures
215541 failures
#8098: 9-Dom* 19 x 19   (12.6m)
2.59s/45226 failures
45226 failures
#2556: Flag 65 x 45 Multiple, Blanks (3.4s)
14859 failures
#2712: Lion 47 x 47 Multiple (+)
#10088: Marley 52 x 63 Multiple (+)
#9892: Nature 50 x 40 Multiple (+)

September 26, 2010

A first look at G12 Zinc: Basic learning models etc

There is a separate page for the new Zinc models: My Zinc page.

About a week ago, version 1.0.0 of the NICTA G12 Constraint Programming Platform was released, which includes Zinc version 1.0.0. I have looked forward to this day since I first learned (early 2008) about the G12 solvers (Zinc/MiniZinc).

In Some exciting news today I just had time to collect some links etc. Later that day I started to learn more about Zinc, and the best way of learning a new constraint programming system is - for me at least - to implement my "learning problems" (a collection of "about 17 different models").

Here I will not go through all the features of Zinc or all the differences between MiniZinc and Zinc. Both are described in the Specification of Zinc and MiniZinc (there is also a PDF version). Appendix C contains an overview of the differences.

That said, some of the most significant differences are that Zinc supports the following:
  • functions
  • tuples
  • records
  • enums
  • type synonyms (including constraints). See furniture_moving2.zinc for an example of types with constraints
  • type-inst variables ($T) for creating polymorphic functions/predicates.
  • sets can contain arbitrary objects
  • arrays can be indexed with enums etc
  • implicit type conversion, e.g. int-to-float, set-to-array
  • compiling the model+data to an executable program
  • Different support for solvers and annotations. (I have not fully looked into this.)
  • The following built-in functions: powerset, cartesian_product, concat, head, last, tail, condense, condense_int_index, show_float, first, second, fst, snd, foldl, foldr
When testing this very first version of Zinc, one should be aware of some of its disadvantages, which will surely be fixed in later versions:
  • the compilation to an executable file is not blazingly fast. For most Zinc models compilation took about 6-10 seconds (on my 64-bit Linux Ubuntu 10.4 with 8 cores and 12Gb RAM).
  • running the model can be slower since there is much more happening in a Zinc program.
  • for some Zinc models (with some problem instances), the memory usage can be very high (compared to the MiniZinc variant) and in some rare cases I had to stop the execution since it claimed all the RAM. This happened in the Zinc version of the 17x17 problem (for sizes 15x15x4).
  • unsurprisingly, there are bugs (some are mentioned below)
  • the support for data files (.dzn files) is more restricted than in MiniZinc, and one may have to either change the data file or compile it into the Zinc program. For an example of this, see the Survo puzzle below. However, I'm not sure if this restriction is going to be lifted in later versions.
One very nice feature in Zinc is that the MiniZinc language is a subset of the Zinc language, so all valid MiniZinc models are also valid Zinc models. Thus they can be compiled as is:
  $ zinc minizinc_model.mzn
  $ ./minizinc_model
Well, at least if the model is runnable by the tools from the G12 MiniZinc distribution. Added functionality, such as definitions added by an "external" solver, e.g. Gecode, JaCoP, SICStus Prolog, ECLiPSe, SMT, etc., may not work (or at least I have not gotten these to work). Also, I have not checked that all my MiniZinc models can be compiled and run with Zinc.

Functions and foldr

Besides predicates, Zinc has support for functions, which is really nice. Some examples are shown in the models, e.g. in the general alphametic solver (see below). Here are some other examples:
function var int: mysum(array[$T] of var int: a) = foldr('+', 0, a);

function var int: scalar_product(array[$T] of var int: a, array[$T] of var int: b) = 
   sum(i in index_set(a)) (  a[i]*b[i] );

function var int: myplus(var int: a, var int: b) = a + b;

function var int: mysum2(array[int] of var int: a) = foldr(myplus, 0, a);
It may be interesting to know that the two standard accumulators forall and exists are predicates defined with foldr (in the file g12-1.0.0/lib/zinc/stdlib.zinc):
predicate forall(array[$T] of var bool: xs) = foldr('/\', true, xs);
predicate exists(array[$T] of var bool: xs) = foldr('\/', false, xs);
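A function defined this way can then be used just like the built-in sum. Here is a small usage sketch (my own illustration, repeating the mysum definition from above):

```minizinc
% A foldr-defined sum, used like the built-in sum.
function var int: mysum(array[$T] of var int: a) = foldr('+', 0, a);

array[1..3] of var 1..9: a;
constraint mysum(a) = 15;

solve satisfy;
```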

The Zinc models

Here are my Zinc models so far. Apart from the Nonogram model (which triggered some bugs), all of the "about 17 learning models" have been translated to Zinc. Some of these have very few changes compared to the MiniZinc model and could be compiled as a MiniZinc (.mzn) model. However, I wanted to list them all here with the .zinc suffix.

I have collected all these Zinc models in a separate page: My Zinc page.

Least difference

MiniZinc model: least_diff.mzn
Zinc model: least_diff.zinc

This model uses a type synonym:
type digits = var 0..9;
Instead of the traditional (MiniZinc) way of declaring decision variables:
  set of 0..9: digits = 0..9;
  var digits: A;
  var digits: B;
  % ...
we now can write:
  type digits = var 0..9;
  digits: A;
  digits: B;
  % ...
Another use of type declarations is to attach a constraint to the variable directly. Instead of first declaring a decision variable and then adding the constraint that it is >= 0 in a constraint section, we can combine the two in Zinc:
  % define a constrained type definition for positive numbers
  type varintp = (var int: i where i >= 0);
  varintp: X;
This model also uses a Zinc-specific search annotation:
  solve :: backend_fdic(g12_fd, none, none) minimize difference;
Note: Zinc also supports the MiniZinc annotation form (and converts it to the Zinc way):
  solve :: int_search(FD, first_fail, indomain, complete) minimize difference;
Note: Since the MiniZinc language is a subset of the Zinc language, the Zinc solver can run any MiniZinc model.

Simple diet problem

MiniZinc model: diet1.mzn
Zinc model: diet1.zinc

This is just about the same as the MiniZinc version.


SEND + MORE = MONEY

MiniZinc model: send_more_money.mzn
Zinc model: send_more_money.zinc
Zinc model: send_more_money2.zinc (with enums)

This is the classic alphametic puzzle: SEND + MORE = MONEY.

The changes in the Zinc model send_more_money.zinc are about the same as in the least_diff problem mentioned above, i.e. type definitions of the domains. Here we use two different types: one for the range 0..9, and one for the range 1..9, used for the letters S and M:
  type digits0_9 = var 0..9;
  type digits1_9 = var 1..9;
  digits1_9: S; % S > 0
  digits0_9: E;
  digits0_9: N;
  digits0_9: D;
  digits1_9: M; % M > 0
  digits0_9: O;
  digits0_9: R;
  digits0_9: Y;

    all_different(fd) /\
                 1000*S + 100*E + 10*N + D  +  
                 1000*M + 100*O + 10*R + E  = 
       10000*M + 1000*O + 100*N + 10*E + Y 
    % /\ S > 0 % not needed
    % /\ M > 0 % not needed
The variant send_more_money2.zinc uses another new feature in Zinc: an enum digits for declaring the names of the variables, together with an array of decision variables (of range 0..9) indexed by the enum digits.
  enum digits = {S,E,N,D,M,O,R,Y};
  array[digits] of var 0..9: x;

     alldifferent(x) /\
                1000*x[S] + 100*x[E] + 10*x[N] + x[D]  +  
                1000*x[M] + 100*x[O] + 10*x[R] + x[E]  = 
   10000*x[M] + 1000*x[O] + 100*x[N] + 10*x[E] + x[Y] 
   /\ x[S] > 0 /\ x[M] > 0
It may be a matter of taste which version one would use for these small (toy) problems. I tend to prefer the first version (where the decision variables are used directly), but for larger problems the second version is probably better.

Also, see the general alphametic solver below.

Seseman Convent problem

MiniZinc model: seseman.mzn
Zinc model: seseman.zinc

This is just the same as the MiniZinc version, apart from correcting some typos.

Coins Grid (Hurlimann)

MiniZinc model: coins_grid.mzn
Zinc model: coins_grid.zinc

This is a problem that plain constraint programming solvers have some problems with, but it is a cinch for MIP solvers.

Here the declarations of the variables and constraints are about the same. However, I changed the search strategy so that Zinc uses the MIP solver, which solves the problem very fast:
solve :: backend_mip(osi_cbc) minimize z;
Using the fdic (finite domain solver) is much slower:
solve :: backend_fdic(g12_fd, g12_ic, osi_cbc) minimize z;
However, with the above solve annotation (fdic), there is another way of using the MIP solver: as an option when compiling the model:
 $ zinc -b mip coins_grid.zinc
I have not yet fully tested all the features of the new search annotations. They are explained in the Zinc Manual (as PDF).

de Bruijn sequence

MiniZinc model: debruijn_binary.mzn
Zinc model: debruijn_binary.zinc

There are no big differences between the MiniZinc model and the Zinc model, just these:
  • it uses data files for the problem instances
  • pow(base, n) is used instead of converting back and forth to floats. There were some problems with the integer version of pow in earlier MiniZinc versions, but they have been fixed now.
  • the gcc part is skipped
Some data files: debruijn_binary_2_3_8.dzn debruijn_binary_3_3_27.dzn


Alldifferent except 0

MiniZinc model: alldifferent_except_0.mzn
Zinc model: alldifferent_except_0.zinc

This Zinc version doesn't have any built-in support for the constraint increasing, which is just used for symmetry breaking in the example. However - since MiniZinc is a subset of Zinc - the MiniZinc version in increasing.mzn can be used instead:
include "increasing.mzn";

Furniture moving (scheduling)

MiniZinc model: furniture_moving.mzn
Zinc model: furniture_moving.zinc
Zinc model: furniture_moving2.zinc, defining the tasks as record
The Zinc version uses an enum for defining the tasks:
% declaration
enum Tasks; 

% ....
% data instance:
enum Tasks = {piano, chair, bed, table};
(I was somewhat surprised that enum is also needed when stating the data instances.)

Originally, I planned to use these Tasks as indices of the arrays (e.g. array[Tasks] of var 0..upperLimit: Starts), but this doesn't work in Zinc version 1.0.0 due to a bug (which will hopefully be fixed in the next release).

A variant, furniture_moving2.zinc, uses a record for defining the tasks. Note the constraint which calculates t.end as t.start + t.duration.
type Task = (record(string: desc,
             var 0..upper_limit: start, 
             int: duration, 
             int: resource, 
             var 0..upper_limit*2: end): 
                  t where t.end = t.start + t.duration);
And then tasks is defined as
tasks = [
         ("piano", _, 30, 3, _) ,
         ("chair", _, 10, 1, _),
         ("bed"  , _, 15, 3, _),
         ("table", _, 15, 2, _)
];
Here we use the anonymous value (_) for the decision variables.

Printing the tasks array, we can now see all fields in the record, including the decision variables (start and end):
(desc:"piano", start:30, duration:30, resource:3, end:60)
(desc:"chair", start:0, duration:10, resource:1, end:10)
(desc:"bed", start:15, duration:15, resource:3, end:30)
(desc:"table", start:0, duration:15, resource:2, end:15)
This is much clearer compared to the traditional output of just the contents of the decision variable arrays.

A drawback with this method is that if tasks is put in a separate data file, this data file must be compiled with the model.

The (slightly modified) version furniture_moving2b.zinc and the data file furniture_moving2b-zinc.dzn exemplify this:
  $ zinc furniture_moving2b.zinc furniture_moving2b-zinc.dzn
  $ ./furniture_moving2b


Minesweeper

MiniZinc model: minesweeper.mzn
Zinc model: minesweeper.zinc
Zinc model: minesweeper_model.zinc (general model)

There are no changes in the Zinc model.

Problem instances (they include minesweeper_model.zinc):

Quasigroup completion

MiniZinc model: quasigroup_completion.mzn
Zinc model: quasigroup_completion.zinc
Zinc model: quasigroup_completion_model.zinc (general model)

The changes compared to the MiniZinc model:
  • using show_array2d()
  • in Zinc the global constraint is spelled alldifferent; in MiniZinc there is also the alias all_different
Problem instances:

Survo puzzle

MiniZinc model: survo_puzzle.mzn
Zinc model: survo_puzzle.zinc

The Zinc model is almost the same, except that the output uses
output [ show_array2d(x) ];
instead of the traditional
output [
  if j = 1 then "\n" else " " endif ++
    show(x[i,j])
  | i in 1..r, j in 1..c
] ++ ["\n"];
Zinc is more restricted with regard to the data files that are included via the command line, such as:
  $ zinc survo_puzzle.zinc
  $ ./survo_puzzle survo_puzzle1.dzn
The following doesn't work (in the current version 1.0.0 at least):
r = 3;
c = 4;
matrix = array2d(1..r, 1..c, [
    0, 6, 0, 0,
    8, 0, 0, 0,
    0, 0, 3, 0]);
Instead, one has to explicitly state the dimension of the matrix:
r = 3;
c = 4;
matrix = array2d(1..3, 1..4, [
    0, 6, 0, 0,
    8, 0, 0, 0,
    0, 0, 3, 0]);
However, if the data file is included at compile time, the variables r and c can be used to state the dimensions:
  $  zinc survo_puzzle.zinc survo_puzzle1.dzn
Data files:

Young Tableaux

MiniZinc model: young_tableaux.mzn
Zinc model: young_tableaux.zinc
The only change is slightly better output, which is not really Zinc specific.

SEND + MORE = MONEY in any base

MiniZinc model: send_more_money_any_base.mzn
Zinc model: send_more_money_any_base.zinc
  • type digits = var 0..base-1;
  • using pow(base,n) instead of base*base*base (for n=2..4)
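The core of the model can be sketched like this (an illustration of the idea, not necessarily the exact code in the linked model):

```minizinc
% Sketch of SEND + MORE = MONEY in an arbitrary base,
% using pow(base, n) for the positional weights.
int: base = 10;
type digits = var 0..base-1;

digits: S; digits: E; digits: N; digits: D;
digits: M; digits: O; digits: R; digits: Y;

constraint
   alldifferent([S,E,N,D,M,O,R,Y]) /\ S > 0 /\ M > 0 /\
               pow(base,3)*S + pow(base,2)*E + base*N + D +
               pow(base,3)*M + pow(base,2)*O + base*R + E =
   pow(base,4)*M + pow(base,3)*O + pow(base,2)*N + base*E + Y;

solve satisfy;
```

Changing base then only requires changing one parameter, instead of rewriting the hard-coded base*base*base weights.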

Simple map coloring

MiniZinc model: map.mzn
Zinc model: map.zinc

  • using enum and tuple
  • using data files with tuples of neighbours
The data files include the names of the countries (as an enum) and tuples of the neighbours:
enum country = {belgium, denmark, france, germany, netherlands, luxembourg};
neighbors = {
    (france, belgium),
    (france, luxembourg),
    (france, germany),
    (luxembourg, germany),
    (luxembourg, belgium),
    (netherlands, belgium),
    (germany, belgium),
    (germany, netherlands),
    (germany, denmark)
};
The constraint stating that two neighbours must have different colors uses the special notation for tuples: nn.1 for the first element in the tuple, and nn.2 for the second element.

Here is the complete model:
int: num_colors;
enum country;
set of tuple(country, country): neighbors;
array[country] of var 1..num_colors: colors;

constraint
    forall(nn in neighbors) (
       colors[nn.1] != colors[nn.2]
    )
    /\ % symmetry breaking
    colors[country[1]] = 1;

solve satisfy;

output [
   show(colors) ++ "\n"
];
Data files:


Nonogram

MiniZinc model: nonogram_create_automaton2.mzn
Zinc model: nonogram_create_automaton2.zinc
Data file: nonogram_p200-zinc.dzn

Note: This is the only model where I had serious problems when translating to Zinc or running a MiniZinc model with Zinc. It triggered a translation bug in the Zinc compiler, mentioned in the Mantis tracker here.

xkcd knapsack problem

MiniZinc model: xkcd.mzn
Zinc model: xkcd.zinc

  • enum as indices and presentation
  • using show_float to show the real prices with 2 decimals
This model uses an enum as indices:
enum products = {mixed_fruit, french_fries, side_salad, host_wings, mozzarella_sticks, samples_place};
This means that the indices of the products are mixed_fruit, french_fries, ..., not 1, 2, 3, ... as we are used to in MiniZinc. As a consequence, the prices must use these indices as well:
price = [mixed_fruit:215, french_fries:275, side_salad:335, 
         host_wings:355, mozzarella_sticks:420, samples_place:580];
(It is a type error if one tries to define the price array as [215, 275, 335, 355, 420, 580].) There are two solutions to the problem:
[ mixed_fruit:1, french_fries:0, side_salad:0, host_wings:2, mozzarella_sticks:0, samples_place:1 ]
1 of mixed_fruit price: 2.15 (= 2.15)
2 of host_wings price: 3.55 (= 7.10)
1 of samples_place price: 5.80 (= 5.80)
[ mixed_fruit:7, french_fries:0, side_salad:0, host_wings:0, mozzarella_sticks:0, samples_place:0 ]
7 of mixed_fruit price: 2.15 (= 15.05)

Simple crossword problem

MiniZinc model: crossword2.mzn
Zinc model: crossword2.zinc

The Zinc model uses an enum for the letters, which is used to get a better output:
    E1: 1  = hoses
    E2: 3  = sails
    E3: 5  = steer
    E4: 7  = hike
    E5: 8  = keel
    E6: 12  = ale
    E7: 14  = lee
    E8: 2  = laser

Word square

MiniZinc model: word_square.mzn
Zinc model: word_square.zinc

Just as with crossword2.zinc, this uses an enum for the letters, which makes the model and output neater than the MiniZinc model.

However, it is very slow. Even for the trivial 2x2 problem, the Zinc model took 1 minute 30 seconds (with solve satisfy). I have not found any faster labeling...

Who killed Agatha

MiniZinc model: who_killed_agatha.mzn
Zinc model: who_killed_agatha.zinc

The only change compared to the MiniZinc model is that the persons are defined as an enum (not ints).

Conference scheduling

MiniZinc model: conference.mzn
Zinc model: conference.base.zinc

The Zinc model itself is about the same as the MiniZinc model except for two things: First, the output is slightly more elaborate. Second, I experimented with different backends for solving the problem by using separate configuration files:
  • conference.fd.zinc
    This uses the fd solver:
    include "conference.base.zinc";
    backend = backend_fdic(default, default, default);
    search = tree_search(sessions, in_order, min_assign);
  • conference.fdx.zinc
    The fdx is the Lazy solver:
    include "conference.base.zinc";
    backend = backend_fdic(g12_fdx, default, default);
    search = tree_search(sessions, in_order, min_assign);
  • conference.mip.zinc
    MIP solver
    include "conference.base.zinc";
    backend = backend_mip(default);
    % search = null;
    % search = tree_search(sessions, in_order, min_assign);
    search = tree_search(sessions, min_domsize, min_assign);

Labeled dice, Building blocks

MiniZinc model: labeled_dice.mzn
Zinc model: labeled_dice.zinc
Zinc model: labeled_dice2.zinc (general method)
Zinc model: coloring_blocks.model.zinc (more general model)

The Zinc model uses enums instead of explicitly assigning each letter to an int.

labeled_dice2.zinc is a more general version which uses the same tuple and lookup method as in alphametic.zinc (see below).

coloring_blocks_building_blocks.zinc is an even more general solution which solves both the labeled_dice problem and the similar building_blocks problem (see the MiniZinc model building_blocks.mzn).

Note: There is one additional quirk in this general model which I didn't notice before, when there were two separate MiniZinc models. There are not 24 distinct letters in the building blocks problem, just 23 (the letter "F" is not in any of the listed words). So I had to add a special variable, additional, which contains "F" and is unioned with the letters for this problem instance. I'm not very happy about this solution. (I tried to do a union with all the letters from "a" to "z", but this didn't work.)

Problem instances:

New Zinc models

I have also done some new models (i.e. not just translated MiniZinc models).

alphametic.zinc A general alphametic solver

Zinc model: alphametic.zinc
This is a general alphametic solver, i.e. it solves puzzles such as SEND + MORE = MONEY and SEND + MOST = MONEY. This may have been possible to do in MiniZinc, but using Zinc's tuples as a lookup table makes it quite easy.

The words must be stated letter by letter since Zinc has no support for accessing the characters in a string.
base = 10;
num_words = 3; 
num_letters = 5;
words = 
  array2d(1..3, 1..5, [
  "", "s","e","n","d", % +
  "", "m","o","r","e", % = 
  "m","o","n","e","y"]);
Here are some of the features of this model:
  • The distinct letters used in the problem are put into a set (letters), which is used to create the lookup table letters2 using a tuple (string, int):
    set of string: letters = {""} union {words[i,j] | i in 1..num_words, j in 1..num_letters};
    array[1..card(letters)] of tuple(string, int): letters2 = [(letters[i],i) | i in 1..card(letters)];
    We have to do a union with the set containing the empty string ("") since all strings must be of the same length in the problem matrix (words).
  • Notice how it calculates the first letter in each word (which must be > 0):
    set of string: non_zeros = {  
         words[i, min([j | j in 1..num_letters 
                      where words[i,j] != ""])] | 
               i in 1..num_words                       
    };
  • using the operation string_intersperse (cf. join("", array) in other programming languages)
  • The function lookup/2 is a mapping between a letter and its index in the array of decision variables x:
    function int: lookup($T: c, % the key (here a character)
                          array[int] of tuple($T, int): h) =
             % take the first (and only) value in the list
             head([k.2 | k in h where k.1 = c]);
    Here is an example of how it is used to state that some of the letters must be > 0:
    % ...
      % ....
      forall(z in non_zeros) (
         x[lookup(z,letters2)] > 0
      )
Data files

Dudeney numbers

MiniZinc/Zinc model: dudeney_numbers.mzn

This is a new MiniZinc/Zinc model for calculating Dudeney Numbers:
A Dudeney number is a positive integer that is a perfect cube such that the sum of its decimal digits is equal to the cube root of the number. There are exactly six such integers.
Pierre Schaus wrote about this problem some days ago in Dudeney number, and showed an or-tools (Operations Research Tools developed at Google, a.k.a. Google CP Solver) model for solving this using Python. (Note: I will check out the Google CP Solver more later.)
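The core constraints can be sketched like this (my own illustration; the linked model may differ in details):

```minizinc
% Dudeney numbers: nb is a perfect cube whose decimal
% digit sum equals its cube root s.
var 1..30: s;                 % the cube root
var 1..27000: nb;             % the cube (here at most 30^3)
array[1..6] of var 0..9: x;   % the decimal digits of nb

constraint
   nb = s * s * s /\
   s = sum(x) /\
   nb = sum(i in 1..6) (pow(10, 6-i) * x[i]);

solve satisfy;
```

The channeling between nb and its digit array x is what makes the digit-sum condition easy to state.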

Running this model with Zinc and generating all the solutions:
  $ zinc dudeney_numbers.mzn
  $ ./dudeney_numbers -s all
gives the following output. Note that the indices are here explicitly stated in the array x.
s: 1
nb: 1
x: [ 1:0, 2:0, 3:0, 4:0, 5:0, 6:1 ]
s: 17
nb: 4913
x: [ 1:0, 2:0, 3:4, 4:9, 5:1, 6:3 ]
s: 18
nb: 5832
x: [ 1:0, 2:0, 3:5, 4:8, 5:3, 6:2 ]
s: 26
nb: 17576
x: [ 1:0, 2:1, 3:7, 4:5, 5:7, 6:6 ]
s: 27
nb: 19683
x: [ 1:0, 2:1, 3:9, 4:6, 5:8, 6:3 ]
s: 8
nb: 512
x: [ 1:0, 2:0, 3:0, 4:5, 5:1, 6:2 ]
Note: In the current Zinc version there are only two options for the number of solutions: first or all. I hope there will soon be an option for a given number of solutions, at least two, since it can be interesting to know whether a solution is unique or not.

More info about G12 system/Zinc

Here is some information about G12 and Zinc:

September 11, 2010

Results of MiniZinc Challenge 2010

The results of the MiniZinc Challenge 2010 have been published: MiniZinc Challenge 2010 Results.
The entrants for this year (with their descriptions, when provided):

In addition the challenge organisers entered the following FlatZinc implementations:

  • Chuffed (description). A C++ FD solver using Lazy clause generation.
  • Fzntini. Translates to SAT, uses TiniSAT.
  • G12/FD. A Mercury FD solver (the G12 FlatZinc interpreter's default solver).
  • G12/CPLEX. Translates to MIP, uses CPLEX12.1.

As per the challenge rules, these entries are not eligible for prizes, but do modify the scoring results. Furthermore, entries in the FD search category (Gecode, JaCoP, Chuffed, G12/FD) were automatically included in the free search category, while entries in the free search category (Fzn2smt, Fzntini, SCIP, CPLEX and promoted FD entries) were automatically included in the parallel search category.

I'm really curious about Chuffed (which I have not tested). As far as I know, it is not publicly available.


This year the results are not in fixed result lists. Instead there is a Javascript application where one can choose different combinations of solvers and problems.

Update some hours later: I have been informed that my summaries were not correct, so I have removed them to not confuse anyone. Sorry for any confusion. Please see the Results page.

See also

The results of the earlier MiniZinc Challenges:
* MiniZinc Challenge 2009 Results. Results of the MiniZinc challenge 2009.
* MiniZinc Challenge 2008 Results. Results of the MiniZinc challenge 2008

August 31, 2010

Nontransitive dice, Ormat game, 17x17 challenge

Here are some new models done over the last weeks, and some older ones not mentioned before. (I have also spent some time with Numberjack, a Python based solver, and will blog about this later on.)

Nontransitive dice

MiniZinc: nontransitive_dice.mzn

(Sidenote: This was triggered by a problem in Numberjack's Tutorial.)

Nontransitive dice is presented at Wikipedia as:
A set of nontransitive dice is a set of dice for which the relation "is more likely to roll a higher number" is not transitive. See also intransitivity. This situation is similar to that in the game Rock, Paper, Scissors, in which each element has an advantage over one choice and a disadvantage to the other.
And then gives the following example:
Consider a set of three dice, A, B and C such that
* die A has sides {2,2,4,4,9,9},
* die B has sides {1,1,6,6,8,8}, and
* die C has sides {3,3,5,5,7,7}.

* the probability that A rolls a higher number than B is 5/9 (55.55 %),
* the probability that B rolls a higher number than C is 5/9, and
* the probability that C rolls a higher number than A is 5/9.

Thus A is more likely to roll a higher number than B, B is more likely to roll a higher number than C, and C is more likely to roll a higher number than A. This shows that the relation "is more likely to roll a higher number" is not transitive with these dice, and so we say this is a set of nontransitive dice.
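Before modeling anything, the quoted probabilities are easy to verify with a short Python sketch (the helper name `p_beats` is mine):

```python
from itertools import product

def p_beats(a, b):
    # Probability that die a rolls strictly higher than die b.
    wins = sum(x > y for x, y in product(a, b))
    return wins / (len(a) * len(b))

A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

print(p_beats(A, B), p_beats(B, C), p_beats(C, A))
# → each is 20/36 = 5/9 ≈ 0.5556
```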
How easy is it to model such nontransitive dice in a high level constraint programming system such as MiniZinc or Comet? The basic MiniZinc model nontransitive_dice.mzn - with no bells and whistles - is this (somewhat edited):
int: m = 3; % number of dice
int: n = 6; % number of sides of each die

% the dice
array[1..m, 1..n] of var 0..n*2: dice :: is_output;

% the competitions: 
% The last wrap around is the one that breaks 
% the transitivity.
array[0..m-1, 1..2] of var 0..n*n: comp :: is_output;

solve satisfy;

constraint
   % order the numbers of each die
   forall(d in 1..m) (
       increasing([dice[d,i] | i in 1..n])
   )

   /\ % and now we roll...
   % Number of wins for [A vs B, B vs A]
   forall(d in 0..m-1) (
      comp[d,1] = sum(r1, r2 in 1..n) (
          bool2int(dice[1+(d mod m), r1] >      
                   dice[1+((d + 1) mod m), r2]))
      /\
      comp[d,2] = sum(r1, r2 in 1..n) (
          bool2int(dice[1+((d+1) mod m), r1] >
                   dice[1+((d) mod m), r2]))
   )

   /\ % non-transitivity
   % All dice 1..m-1 must beat the follower, 
   % and die m must beat die 1
   forall(d in 0..m-1) (
     comp[d,1] > comp[d,2]
   );
The result of the m competitions is placed in the (m x 2) matrix comp, where the winner is in comp[i,1] and the loser in comp[i,2] for match i. The last constraint section in the code above ensures that the winner is always in comp[i,1].

In the full model I have added the following:
  • max_val: maximum value of the dice, to be minimized
  • max_win: maximum number of winnings, to be maximized
  • gap and gap_sum: the difference of wins of a specific competition, to be maximized or minimized
  • the example setups from the Wikipedia page
The Comet version includes about the same features as the MiniZinc model. Here is the basic model (also edited for presentational purposes):
import cotfd;
int m = 3; // number of dice
int n = 6; // number of sides of each die
Solver<CP> cp();

// the dice
var<CP>{int} dice[1..m, 1..n](cp, 1..n*2);

// The competitions: 
var<CP>{int} comp[0..m-1, 1..2](cp, 0..n*n);

explore<cp> {
  // symmetry breaking: order the numbers of each die
  forall(d in 1..m)
    forall(i in 2..n)
      cp.post(dice[d,i-1] <= dice[d,i]);

  // and now we roll...
  // Number of wins for [A vs B, B vs A]
  forall(d in 0..m-1) {
    cp.post(comp[d,1] == 
        sum(r1 in 1..n, r2 in 1..n) (
            dice[1+(d % m), r1] >
            dice[1+((d + 1) % m), r2]));
    cp.post(comp[d,2] == 
         sum(r1 in 1..n, r2 in 1..n) (
            dice[1+((d+1) % m), r1] >
            dice[1+((d) % m), r2]));
  }

  // non-transitivity
  forall(d in 0..m-1)
    cp.post(comp[d,1] > comp[d,2]);

} using {

  label(cp);

  cout << "dice:" << endl;
  forall(i in 1..m) {
    forall(j in 1..n)
      cout << dice[i,j] << " ";
    cout << endl;
  }
  cout << endl;
}
The full Comet model, when minimizing max_val for 3 dice with 6 sides each (domain 1..12), gives this result, i.e. the maximum value used in the dice is 4:
solution #1
1 1 1 2 4 4
1 1 1 3 3 3 
1 2 2 2 2 2 
1 vs 2: 15 (p:0.416667) 12 (p:0.333333) 
2 vs 3: 18 (p:0.500000) 15 (p:0.416667) 
3 vs 1: 15 (p:0.416667) 13 (p:0.361111) 
max_val: 4
max_win: 18
gap_sum: 8
Somewhat related is another dice problem: sicherman_dice.mzn.

17x17 challenge (not solved, though)

Last November, William Gasarch published a challenge in The 17x17 challenge. Worth $289.00. This is not a joke.. Also, see bit-player's The 17×17 challenge, and O.R. by the Beach Let’s Join O.R. Forces to Crack the 17×17 Challenge.

The objective is to create a 17x17 matrix of colors where no sub-rectangle has four corners of the same color (number). Here's a solution of the 5x5x3 problem, i.e. a 5 x 5 matrix with 3 colors (1..3):
1 3 3 2 1 
3 1 2 2 1 
3 2 1 2 1 
2 2 2 1 1 
1 1 1 1 2 
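A quick way to convince yourself that the grid above is valid is to check every pair of rows against every pair of columns; here is a Python sketch of such a checker (the function name is mine):

```python
from itertools import combinations

def rectangle_free(grid):
    # True if no axis-aligned sub-rectangle has four equal-colored corners.
    rows, cols = len(grid), len(grid[0])
    for r1, r2 in combinations(range(rows), 2):
        for c1, c2 in combinations(range(cols), 2):
            if grid[r1][c1] == grid[r1][c2] == grid[r2][c1] == grid[r2][c2]:
                return False
    return True

grid = [[1,3,3,2,1],
        [3,1,2,2,1],
        [3,2,1,2,1],
        [2,2,2,1,1],
        [1,1,1,1,2]]
print(rectangle_free(grid))  # → True
```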
When I first saw this problem in December, I tried some models but didn't pursue it further. However, when Karsten Konrad published some OPL models, I picked it up again. We also discussed this problem to see what joined forces could do.

Here is my take in MiniZinc: 17_b.mzn. It is inspired by (or rather a translation of) Karsten Konrad's OPL model from his blog post Let’s do real CP: forbiddenAssignment. Karsten earlier posted two OPL models. However, 17x17x4 is very hard, and - as I understand it - no one has yet cracked it.

Here are three MiniZinc models with some different approaches. A Comet version is mentioned below.

This is a translation of Karsten Konrad's OPL model in Let’s do real CP: forbiddenAssignment, which uses a forbiddenAssignments constraint. However, MiniZinc doesn't have this constraint as a built-in, so I had to roll my own, here called extensional_conflict. The forbidden table contains the forbidden combinations in the matrix. For 4 colors it looks like this:
1 1 1 1
2 2 2 2
3 3 3 3
4 4 4 4
i.e. no rectangle may have the same value in all four corners. (If you now start to think about the global cardinality constraint, please read on.) Here is the complete model:
int: NbColumns = 10;
int: NbRows = 10;
int: NbColors = 4;
set of int: Columns = 1..NbColumns;
set of int: Rows = 1..NbRows;
set of int: Colors = 1..NbColors;

array[Rows,Columns] of var Colors: space :: is_output;
array[Colors, 1..4] of int: forbidden = 
   array2d(Colors, 1..4,[ i | i in Colors, j in 1..4]);

% The pattern x must not be in table table.
predicate extensional_conflict(array[int, int] of var int: table, 
                               array[int] of var int: x) =
   not exists(pattern in index_set_1of2(table)) (
      forall(j in index_set_2of2(table)) (
           x[j] = table[pattern, j]
      )
   );

solve :: int_search(
   [space[i,j] | i in Rows, j in Columns], 
   first_fail, indomain_min, complete) satisfy;

constraint
  space[1,1] = 1 /\
  space[NbRows,NbColumns] = 2 /\
  forall(r in Rows, r2 in 1..r-1, c in Columns, c2 in 1..c-1) (
     extensional_conflict(forbidden,
        [space[r,c], space[r2,c], space[r,c2], space[r2,c2]])
  );
This is almost exactly the same as the one above, except that it has another implementation of extensional_conflict, a kind of MIP version:
predicate extensional_conflict(array[int, int] of var int: table, 
                               array[int] of var int: x) =
   forall(pattern in index_set_1of2(table)) (
       sum(j in index_set_2of2(table)) ( 
            bool2int(x[j] = table[pattern, j])
       ) < 4
   );
This third version uses the global cardinality constraint instead of forbidden assignments/extensional_conflict. We create a temporary array (gcc), and then check that there is no value of 4 (i.e. the number of occurrences of each number must be less than 4). Also, I skipped the symmetry breaking of assigning the corner values in the matrix space.
constraint
  forall(r in Rows, r2 in 1..r-1, c in Columns, c2 in 1..c-1) (
    let {
       array[1..4] of var 0..4: gcc
    } in
       global_cardinality(
         [space[r,c], space[r2,c], space[r,c2], space[r2,c2]], gcc)
       /\
       forall(i in 1..4) (  gcc[i] < 4  )
  );
As you can see these are rather simple (a.k.a. naive) models. I suspect that there are better search strategies than the ones I tested; for example, Gecode's size_afc_max with indomain_random is one of the better:
solve :: int_search(
         [space[i,j] | i in Rows, j in Columns], 
         size_afc_max, indomain_random, complete) satisfy;
However, none of these models could find any solution of the original 17x17x4 problem. The longest run took 36 hours on my new 8-core Linux (Ubuntu) machine with 12 GB RAM, but no luck.

Here is a solution of the 17x17x5 problem (i.e. 5 colors), using the LazyFD solver (it took 1:19 minutes); cf. Karsten's Improved Model for Coloring Problem.
1 5 1 5 1 5 3 2 5 5 4 4 2 3 1 3 2 
5 1 4 3 5 5 2 3 2 2 3 2 4 1 4 4 4 
3 2 2 2 1 3 2 4 1 3 5 3 4 4 2 5 5 
2 2 3 3 5 4 4 2 5 4 4 1 3 1 1 1 5 
4 4 5 3 1 4 1 1 3 5 2 2 5 3 2 5 3 
5 1 3 1 1 3 3 2 2 1 4 1 5 4 2 4 5 
4 2 4 3 5 3 5 1 2 4 2 5 2 4 1 2 1 
2 4 2 4 1 5 4 3 4 2 5 3 4 1 5 2 3 
4 3 5 2 1 3 4 3 5 2 5 5 3 3 4 1 2 
1 5 3 1 2 1 4 1 3 2 3 5 2 4 2 5 4 
2 1 3 5 2 1 5 3 2 3 5 4 5 2 4 1 1 
5 3 5 4 3 4 3 2 1 3 3 2 1 2 1 2 4 
4 3 3 4 4 1 5 5 1 5 4 1 2 2 5 3 3 
5 4 4 1 3 2 1 4 2 5 4 5 3 5 3 1 3 
2 4 1 3 4 2 1 5 1 3 1 4 5 5 2 4 1 
3 3 5 2 4 2 5 4 3 1 1 2 1 5 3 1 4 
1 5 2 3 4 2 1 5 4 4 5 1 1 2 3 3 2 
I also wrote a Comet model, and experimented with different cardinality constraints (cardinality, atmost, a decomposition of forbiddenAssignments) and different labelings.

Ormat game

Ormat game is yet another grid problem. From bit-player: The Ormat Game (where the nice pictures have been replaced with 0/1 matrices below):
Here’s the deal. I’m going to give you a square grid, with some of the cells colored and others possibly left blank. We’ll call this a template. Perhaps the grid will be one of these 3×3 templates:
1 0 0    1 1 1    1 1 1
0 1 1    1 1 1    1 1 1
0 1 1    1 1 1    1 1 0
 (1)      (2)      (3)
You have a supply of transparent plastic overlays that match the grid in size and shape and that also bear patterns of black dots:
1 0 0    1 0 0    0 1 0
0 1 0    0 0 1    1 0 0
0 0 1    0 1 0    0 0 1
 (a)      (b)      (c)

0 0 1    0 1 0    0 0 1
1 0 0    0 0 1    0 1 0
0 1 0    1 0 0    1 0 0
 (d)      (e)      (f)
Note that each of these patterns has exactly three dots, with one dot in each row and each column. The six overlays shown are the only 3×3 grids that have this property.

Your task is to assemble a subset of the overlays and lay them on the template in such a way that dots cover all the colored squares but none of the blank squares. You are welcome to superimpose multiple dots on any colored square, but overall you want to use as few overlays as possible. To make things interesting, I’ll suggest a wager. I’ll pay you $3 for a correct covering of a 3×3 template, but you have to pay me $1 for each overlay you use. Is this a good bet?
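For the 3x3 case the whole game is small enough to brute-force; this Python sketch (all names are mine) enumerates the n! permutation-matrix overlays and finds the smallest covering subset:

```python
from itertools import permutations, combinations

def overlays(n):
    # All n! permutation matrices of size n x n (one dot per row and column).
    return [[[1 if p[r] == c else 0 for c in range(n)] for r in range(n)]
            for p in permutations(range(n))]

def solve_ormat(template):
    # Smallest number of overlays whose union of dots equals the template.
    n = len(template)
    ovs = overlays(n)
    for k in range(1, len(ovs) + 1):
        for subset in combinations(ovs, k):
            union = [[max(o[r][c] for o in subset) for c in range(n)]
                     for r in range(n)]
            if union == template:
                return k
    return None

# Template (3) above: every cell black except the lower-right corner.
print(solve_ormat([[1,1,1],[1,1,1],[1,1,0]]))  # → 4
```

So template (3) needs 4 overlays, in agreement with the num_overlays = 4 reported by the MiniZinc model's output below, and it makes the proposed wager a losing bet.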
This is the first version in MiniZinc: ormat_game.mzn where the problem instances are included in the model file. As with the 17x17 problem above, this is - in concept, at least - quite simple. The model, sans the overlays, is shown below:
include "globals.mzn"; 

% the number of overlays, n!
int: f :: is_output = product([i | i in 1..n]);

% Which overlays to use.
array[1..f] of var 0..1: x :: is_output;

% number of overlays used (to minimize)
var 1..f: num_overlays :: is_output = sum(x);

% There are n! possible overlays.
% They are in a separate .dzn file:
%    ormat_game_n3.dzn etc.
array[1..f, 1..n,1..n] of 0..1: overlays;

solve :: int_search(x, first_fail, indomain_min, complete)
  minimize num_overlays;

constraint
    % if the problem has a black cell (=1) then there 
    % must be a selected overlay that has a black cell
    forall(i,j in 1..n) (
        problem[i,j] = 1 -> 
            exists(o in 1..f) (
                x[o]*overlays[o,i,j] = 1 
            )
    )
    /\ % the inverse: wherever a selected overlay 
       % has a black cell, problem must have 
       % a black cell
    forall(o in 1..f) (
        x[o] = 1 -> 
           forall(i,j in 1..n) (
              overlays[o,i,j] = 1 -> problem[i,j] = 1
           )
    );
% Problem grid 3
include "ormat_game_n3.dzn";
array[1..n, 1..n] of 0..1: problem = array2d(1..n, 1..n,
The only constraints are:
  • if the problem matrix has a black cell (1), then at least one of the selected overlay matrices must have a 1 in that cell
  • the converse: for every black cell in a selected overlay, the problem must have a black cell.
The output and solution:

x: [0, 1, 0, 1, 1, 1]
num_overlays: 4

Overlay #2

Overlay #4

Overlay #5

Overlay #6
Ideally, the overlays should be calculated in real-time, but it was much easier to generate overlay files for each size (using ormat_game_generate.mzn and some post-processing in Perl). These must be included in the model as well. Note: there are n! overlay matrices for problems of size n x n. However, changing the model file each time a new problem is made or tested is not very convenient. It's better to have the problem instances in separate files which include the model and the overlay file of the appropriate size. The following problem instances include the general model ormat_game_model.mzn and an overlay file. Note: As of writing, there are problems running instances of size 7 on my 64-bit Ubuntu machine; it works on the 32-bit machine (also Linux), though. This has been reported to the G12 team.

Other new models

Here are some other new models (or not mentioned earlier):

Contiguity using regular

contiguity_regular.mzn: decomposition of the global constraint contiguity using regular constraint.
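The idea behind the decomposition is that the feasible 0/1 strings form the regular language 0*1*0*; here is a Python sketch of the same check, with a regex standing in for the DFA used by the regular constraint:

```python
import re

def contiguity(seq):
    # Global contiguity: all 1s form one contiguous block, i.e. the
    # 0/1 string matches the regular language 0*1*0*.
    return re.fullmatch("0*1*0*", "".join(map(str, seq))) is not None

print(contiguity([0, 1, 1, 1, 0]))  # → True
print(contiguity([1, 0, 1]))        # → False
```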

Combinatorial auction

These two MiniZinc models implement a simple combinatorial auction problem (Wikipedia): combinatorial_auction.mzn, and a "talkative" MIP version combinatorial_auction_lp.mzn (which is translated from Numberjack's Tutorial).

Compare with the Gecode version combinatorial_auction.cpp.
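The underlying problem: select a subset of item-disjoint bids with maximum total price. A brute-force Python sketch on a made-up instance (the data and names are mine, not from the models above):

```python
from itertools import combinations

def best_allocation(bids):
    # bids: list of (item set, price). Pick pairwise item-disjoint bids
    # maximizing the total price; returns that maximum.
    best_val = 0
    for k in range(1, len(bids) + 1):
        for subset in combinations(bids, k):
            items = [i for s, _ in subset for i in s]
            if len(items) == len(set(items)):  # no item sold twice
                best_val = max(best_val, sum(p for _, p in subset))
    return best_val

bids = [({0, 1}, 10), ({1, 2}, 8), ({2, 3}, 7), ({0, 3}, 6)]
print(best_allocation(bids))  # → 17 (bids {0,1} and {2,3})
```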

August 27, 2010

MiniZinc version 1.1.6 released

MiniZinc release 1.1.6 has been released. It can be downloaded here.

From the NEWS

G12 MiniZinc Distribution 1.1.6

Changes in this release:

* We have modified the decomposition of the global constraint lex_lesseq
in order to avoid the introduction of an auxiliary Boolean variable.
(Thanks to Chris Mears and Todd Niven for pointing this out.)

Bugs fixed in this release:

* mzn2fzn now correctly computes the set of output variables when the
output item contains let expressions. [Bug #141]

* A bug that caused mzn2fzn to infer incorrect bounds for integer and
float var array elements has been fixed. [Bug #149]

* mzn2fzn now prints the source locations of all solve (output) items when
there are multiple such items. [Bug #143]

* mzn2fzn now flattens par expressions containing the built-in operation
pow/2 correctly.

* mzn2fzn now flattens arrayNd expressions containing arrays of strings

* The mzn script no longer aborts if the model contains an array of
decision variables. [Bug #140]

July 22, 2010

Minizinc version 1.1.5 released

Version 1.1.5 of MiniZinc has been released. It can be downloaded here.

From the NEWS:

Bugs fixed in this release:

* We have fixed a number of problems that caused stack overflows in mzn2fzn.

* The FlatZinc interpreter's MIP backend no longer reports "unknown" for
satisfaction problems.

July 20, 2010

Minizinc version 1.1.4 released

MiniZinc version 1.1.4 has been released. It can be downloaded here.

From the NEWS:

Changes in this release:

* We have added a library of global constraint definitions and FlatZinc built-in redefinitions suitable for use with LP/MIP solvers; both are in the "linear" directory of the MiniZinc library.

Bugs fixed in this release:

* Some performance issues with mzn2fzn that occurred when flattening models that generate and traverse large arrays have been fixed.

* An omission that caused mzn2fzn not to recognise the MiniZinc built-in function round/1 has been corrected.

* A bug in flatzinc that caused the MIP backend to abort when the model instance contained an unused set parameter has been fixed. [Bug #134]

* A bug in mzn2fzn that caused it not to place domain constraints on the FlatZinc variables generated for an array of variables introduced via a let expression has been fixed. [Bug #133]

* The implementation of the div propagator in flatzinc's FD backend has been modified to avoid potentially long fixpoint computations.

July 18, 2010

Some new MiniZinc models

Here are some new MiniZinc models. Some are completely new - or previously unannounced - but there are also some that have been implemented in some other constraint programming system before.

New models

Here are some new - or at least previously not announced - models:

Latin square card puzzle

latin_square_card_puzzle.mzn: Latin square card puzzle

I found this problem in Mario Livio's nice pop-sci book about the development of group theory, The Equation That Couldn't Be Solved, page 22:
... Incidentally, you may get a kick out of solving this eighteenth century card puzzle: Arrange all the jacks, queens, kings, and aces from a deck of cards in a square so that no suit or value would appear twice in any row, column, or the two main diagonals.
My general approach is to use integers of the following form, where n is the size of the matrix (here 4) and m is the number we use for the modulo operation (here 10). These values are calculated automatically by the model depending on n.
 % values: i mod 10, suits: i div 10
  0, 1, 2, 3,  % suit 1: i div 10 = 0
 10,11,12,13,  % suit 2: i div 10 = 1
 20,21,22,23,  % suit 3: i div 10 = 2
 30,31,32,33   % suit 4: i div 10 = 3
We then require that the values divided by 10 (the suits) are different in each row, column, and the two diagonals, and also that the values modulo 10 (the card values) are different in each row, column, and the two diagonals.

With the symmetry breaking constraint that the value 0 must be in the upper leftmost cell, there are 72 solutions for n = 4. Here is one of them:
 0 33 22 11
21 12  3 30
13 20 31  2
32  1 10 23
Note: There are solutions for n = 4 and n = 5 but not for n = 6. The n = 6 problem is the same as Euler's 36 officer's problem, which thus is not solvable. Also see MathWorld.
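The solution above can be verified mechanically; here is a Python sketch of the check (the function name is mine), using v div 10 for the suit and v mod 10 for the value:

```python
def valid_card_square(grid, m=10):
    # Suits (v // m) and values (v % m) must all be different in every
    # row, column, and both main diagonals.
    n = len(grid)
    lines = [list(row) for row in grid]
    lines += [[grid[r][c] for r in range(n)] for c in range(n)]
    lines.append([grid[i][i] for i in range(n)])
    lines.append([grid[i][n - 1 - i] for i in range(n)])
    return all(len({v // m for v in line}) == n and
               len({v % m for v in line}) == n
               for line in lines)

solution = [[ 0, 33, 22, 11],
            [21, 12,  3, 30],
            [13, 20, 31,  2],
            [32,  1, 10, 23]]
print(valid_card_square(solution))  # → True
```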

Investment problem

This problem is from ORMM (Operations Research Models and Methods):
A portfolio manager with a fixed budget of $100 million is considering the eight investment opportunities shown in Table 1. The manager must choose an investment level for each alternative ranging from $0 to $40 million. Although an acceptable investment may assume any value within the range, we discretize the permissible allocations to intervals of $10 million to facilitate the modeling. This restriction is important to what follows. For convenience we define a unit of investment to be $10 million. In these terms, the budget is 10 and the amounts to invest are the integers in the range from 0 to 4.
Here are two implementations:

Arg max


argmax/argmin predicate
  • argmax_ge(pos, x)
    pos is the index in x of the maximum value(s). There can be several maximum values.
  • argmax_gt(pos, x)
    pos is the index in x of the maximum value. There can be only one maximum value.
  • argmin_le(pos, x)
    pos is the index in x of the minimum value(s). There can be several minimum values.
  • argmin_lt(pos, x)
    pos is the index in x of the minimum value. There can be only one minimum value.
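In plain Python the ge/gt distinction looks like this (a sketch; the names mirror the predicates above, but note that the MiniZinc predicates are 1-based while Python is 0-based):

```python
def argmax_ge(x):
    # All indices of the maximum value: there may be several.
    mx = max(x)
    return [i for i, v in enumerate(x) if v == mx]

def argmax_gt(x):
    # The index of the maximum value when it is unique, else None.
    idx = argmax_ge(x)
    return idx[0] if len(idx) == 1 else None

print(argmax_ge([3, 1, 3, 2]))  # → [0, 2]
print(argmax_gt([3, 1, 3, 2]))  # → None (maximum is not unique)
print(argmax_gt([3, 1, 4, 2]))  # → 2
```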

Permutation number


A permutation number is the number of transpositions in a permutation, see Wikipedia Permutation.

Sicherman dice


From Wikipedia Sicherman_dice:
Sicherman dice are the only pair of 6-sided dice which are not normal dice, bear only positive integers, and have the same probability distribution for the sum as normal dice.

The faces on the dice are numbered 1, 2, 2, 3, 3, 4 and 1, 3, 4, 5, 6, 8.
I read about this problem in a book/column by Martin Gardner a long time ago, and got inspired to model it now by the recent Wolfram Blog post Sicherman Dice.

Here is the vital part of the code:
array[2..12] of int: standard_dist = 
       array1d(2..12, [1,2,3,4,5,6,5,4,3,2,1]);

% the two dice
array[1..n] of var 1..m: x1 :: is_output;
array[1..n] of var 1..m: x2 :: is_output;

constraint
  forall(k in 2..12) (
    standard_dist[k] = 
      sum(i,j in 1..n) ( 
         bool2int(x1[i]+x2[j] == k)
      )
  )
  % symmetry breaking
  /\ increasing(x1) 
  /\ increasing(x2)

  /\ % x1 is less than or equal to x2
  forall(i in 1..n) (
     x1[i] <= x2[i]
  );
  % alternative: /\ lex_lesseq(x1, x2)
This model finds two solutions: first the standard dice and then the Sicherman dice:
[1, 2, 3, 4, 5, 6]
[1, 2, 3, 4, 5, 6]


[1, 2, 2, 3, 3, 4]
[1, 3, 4, 5, 6, 8]
Extra: If we also allow 0 (zero) as a valid value, then the following two solutions also appear. The only thing we have to do is change the domains of x1 and x2:
% the two dice
array[1..n] of var 0..m: x1 :: is_output;
array[1..n] of var 0..m: x2 :: is_output;
Here are the two new solutions:
[0, 1, 1, 2, 2, 3]
[2, 4, 5, 6, 7, 9]


[0, 1, 2, 3, 4, 5]
[2, 3, 4, 5, 6, 7]
These two extra cases are mentioned in MathWorld SichermanDice.
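All four pairs above are easy to verify in Python: a pair of dice is a valid solution exactly when its sum distribution equals that of two standard dice.

```python
from collections import Counter

def sum_distribution(d1, d2):
    # Distribution of the sum of one roll of each die.
    return Counter(a + b for a in d1 for b in d2)

standard = sum_distribution([1,2,3,4,5,6], [1,2,3,4,5,6])

print(sum_distribution([1,2,2,3,3,4], [1,3,4,5,6,8]) == standard)  # → True
print(sum_distribution([0,1,1,2,2,3], [2,4,5,6,7,9]) == standard)  # → True
print(sum_distribution([0,1,2,3,4,5], [2,3,4,5,6,7]) == standard)  # → True
```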

Translations of other models

The following MiniZinc models are ports of models written in at least one other constraint programming system before, mostly Comet:

June 11, 2010

MiniZinc version 1.1.3 released

MiniZinc version 1.1.3 has been released. It can be downloaded here.

From the NEWS file:

Changes in this release:

* We have added a new script, mzn, that allows output items to work with two-pass MiniZinc evaluation. (The script requires a Unix-like system -- we hope to lift this restriction in later versions.)
* The files alldifferent.mzn, atmost1.mzn, atmost.mzn and atleast.mzn have been added to the MiniZinc globals library. At the moment these files merely cause all_different.mzn, at_most1.mzn etc to be included. Eventually the latter will be replaced by the former.

Bugs fixed in this release:

* A bug in mzn2fzn that caused an internal error when flattening predicates with a reified form has been fixed. [Bug #131]
* The MiniZinc type checker now correctly reports an error for attempts to use the built-in function index_set/1 with arrays that have more than one dimension. [Bug #68]
* The broken definition of reified all_different for the lazy clause generation solver has been fixed.
* A bug where mzn2fzn was mishandling arrays of strings has been fixed.

May 23, 2010

Two new tools for MiniZinc, and a paper

Some days ago, the G12 group released two new tools for MiniZinc. I haven't used them much yet, but hopefully will in the not too distant future. Also, a recent paper is mentioned.


The G12 IDE is an application for writing, running, visualizing, and debugging MiniZinc models. It is based on (the Java IDE) Eclipse. It can be downloaded here.


fzn2xcsp is a tool for converting a subset of FlatZinc to XCSP 2.1. As of writing, this tool is only available in the development version.

Paper: Philosophy of the MiniZinc challenge

Peter J. Stuckey, Ralph Becket, and Julien Fischer: Philosophy of the MiniZinc challenge (Springer Link). It is published in the Constraints Journal, but the paper is not available there (yet).
MiniZinc arose as a response to the extended discussion at CP2006 of the need for a standard modelling language for CP. This is a challenging problem, and we believe MiniZinc makes a good attempt to handle the most obvious obstacle: there are hundreds of potential global constraints, most handled by few or no systems. A standard input language for solvers gives us the capability to compare different solvers. Hence, every year since 2008 we have run the MiniZinc Challenge comparing different solvers that support MiniZinc. In this report we discuss the philosophy behind the challenge, why we do it, how we do it, and why we do it that way.

Besides being an interesting paper about the MiniZinc challenge (see the MiniZinc homepage for links to the last two years' challenges, and this year's), it is also the first constraint programming paper where I'm mentioned (in the Acknowledgments). Thanks for this, I'm honored and appreciate it very much.

May 14, 2010

Optimizing shopping baskets: The development of a MiniZinc model

Yesterday (Thursday) was a holiday in Sweden and I had a busy day. Apart from writing a modulo propagator for ECLiPSe/ic, I also spent a lot of time with the Stack Overflow question How to use constraint programming for optimizing shopping baskets?. It asked for tips (in a Java constraint programming system) on solving the problem of optimizing a shopping basket where different shops have different prices for items, as well as a delivery cost for orders below a certain limit.

You can read what I wrote and the developments of the models in my answer (I am hakank).

Update 2010-05-16: Added a better version. See below, Version 6.

First version

Even though the question was about a Java constraint system (I mentioned JaCoP, though), I rather quickly threw together a MiniZinc model instead as some kind of prototype for a Java model. I'm a very lazy person and MiniZinc is excellent for prototyping.

This first version is shopping_basket.mzn (which actually includes the next requirement: to handle shops which don't have all items). It solved the very simple example described in the question without any problem.

This model is quite straightforward: x is a matrix of 0/1 decision variables for which shop to buy which item from (the "knapsack" part). The trickier part is to calculate the delivery costs if the total is below a certain limit (delivery_costs_limits). Here is the constraint part of the model:
   % we must buy all items (from some shop)
   forall(i in 1..num_items) (
       sum([x[i,j] | j in 1..num_shops]) = 1
   )
   /\
   total = sum(i in 1..num_items) (
       % costs of the products
       sum(j in 1..num_shops) (
           costs[i,j]*x[i,j]
       )
   ) + sum(j in 1..num_shops) (
       % and delivery costs if total for a shop < limit
       let {
         var int: this_cost = sum([costs[i,j]*x[i,j] | i in 1..num_items])
       } in
       delivery_costs[j]*bool2int(this_cost > 0 /\ this_cost < delivery_costs_limits[j])
   )

The N/A problem: Not all shops sells all items

The next question was how to handle the "N/A" cases, i.e. shops that don't carry some item. It was solved by setting the price to a large number (999999 or 99999). However, this is not very nice to the constraint solver. I really wanted to set these N/A values to 0 (zero) but didn't get it correct (see below for a wrong turn on this).

As mentioned above the model shopping_basket.mzn includes the N/A problem.

Another representation

There are almost always different approaches to modeling a problem. In the next version, shopping_basket2.mzn, I represented x - instead of as a 0/1 matrix - as an array of length num_items with domain 1..num_shops, representing which shop to buy each item from. The idea was that the constraint solvers would now have much less work to do.

The constraint section is now simpler in some part, but calculating the delivery costs required a little more code:
 total = sum(i in 1..num_items) (
    costs[i,x[i]]
 ) + 
 sum(j in 1..num_shops) (
   let {
   var int: this_cost = 
     sum([costs[i,j]*bool2int(x[i]==j) | i in 1..num_items])
   } in
   delivery_costs[j]*bool2int(
      this_cost > 0 /\ 
      this_cost < delivery_costs_limits[j])
 )
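To make the delivery-cost logic concrete, here is a brute-force Python sketch of the same idea on a tiny made-up instance (all data and names here are mine, not from the Stack Overflow question); assign[i] is the shop for item i, and 0 marks N/A:

```python
from itertools import product

# costs[i][j] = price of item i at shop j; 0 means the shop doesn't carry it.
costs = [[4, 5, 0],
         [3, 0, 2],
         [6, 4, 5]]
delivery_costs  = [2, 3, 1]
delivery_limits = [8, 8, 8]   # free delivery at or above this shop total

def basket_cost(assign):
    # assign[i] = shop for item i; total incl. delivery, or None if N/A.
    if any(costs[i][s] == 0 for i, s in enumerate(assign)):
        return None
    total = 0
    for j in range(len(delivery_costs)):
        shop_total = sum(costs[i][j] for i, s in enumerate(assign) if s == j)
        total += shop_total
        if 0 < shop_total < delivery_limits[j]:
            total += delivery_costs[j]
    return total

feasible = [a for a in product(range(3), repeat=3) if basket_cost(a) is not None]
best = min(feasible, key=basket_cost)
print(best, basket_cost(best))  # → (1, 2, 1) 12
```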

Real data

The simple example data was easily solved by these two models, but the real challenge began with real data: 29 items and 37 shops. The two simplistic models were, however, not strong enough to handle this, so I started to think harder about the problem.

A wrong turn

The first attempt was to continue with the matrix approach and try to be clever by representing N/A with 0 (zero), see above. This was done in shopping_basket3.mzn, but this model is plain wrong! The reason for this rather embarrassing bug was that the test case I used was not correct (completely mea culpa, of course).

Final(?) version, part I: limiting the domains

OK, back to the drawing board. After some deep thoughts (i.e. an afternoon nap) I realized that the first models were too much of a proof-of-concept, too much prototype. In them I had broken one of the cardinal rules of constraint programming: always think about the domains of all decision variables.

In the first models the total cost (total) was defined as
var int: total :: is_output;
and the temporary variable this_cost was also defined as var int. This means that there are no limits on these variables (not even that they should be positive), which gives few hints to the constraint solvers.

The remedy is shown in the final(?) version shopping_basket5.mzn (shopping_basket4.mzn is basically the same thing for the matrix approach, and was not published).

Here the maximum total for any shop is first calculated, which can be quite large given the 99999 for N/A, but it is still limited and positive.
% total price
int: max_total :: is_output = 
      sum(i in 1..num_items) (
         max([costs[i,j] | j in 1..num_shops])
      );
total is then defined with this limit:
var 0..max_total: total :: is_output; 
The temporary variable this_cost is handled accordingly.

Final(?) version, part II: labeling

This model was now in better shape, but still too slow. For all these versions I tried a lot of different labelings (search heuristics) with many solvers (including the two MIP solvers, MiniZinc/mip and ECLiPSe/eplex, but they didn't accept the models).

I first saw a real improvement with the combination of Gecode/fz and the largest,indomain_max labeling. It solved the problem in 25 seconds (and with 69964 failures):
total = 42013
x = [13, 20, 17, 18, 18, 13, 17, 17, 20, 13, 17, 17, 13, 13, 
     18, 17, 13, 13, 17, 8, 13, 36, 10, 17, 13, 13, 17, 20, 13] 
Just a minute or so after publishing this result (which seemed to please the person asking the question), I tested a different labeling, largest,indomain_split, and it solved the problem in 12 seconds (51013 failures). It gave a slightly different solution for where to buy item 12:
total = 42013
x = [13, 20, 17, 18, 18, 13, 17, 17, 20, 13, 17, 13, 13, 13, 
     18, 17, 13, 13, 17, 8, 13, 36, 10, 17, 13, 13, 17, 20, 13]
These two solutions are the only two solutions for the (optimal) total of 42013. The test for all solutions takes about 1 second.

Some comments

This was a fun problem: quite simple, but instructive about what happens when one ignores proper domain declarations in the first versions of a model.

Even though the "real problem" was of the magnitude of 29 items and 37 shops (29 * 37 = 1073), I am somewhat surprised that the problem was so hard to solve. This reminds me of the coins_grid problem, which MIP solvers solve very fast but constraint solvers have problems with.

Update 2010-05-16: Version 6: Now faster

Well, version 5 was not the final version. Maybe this version 6 (shopping_basket6.mzn) is?

By two simple changes the problem is now solved in about 2 seconds and with 4460 failures (compared with 12 seconds and 51013). Here are the changes:
* The first change was the representation of the N/A:s, from 99999 to 0 (zero). As noted above, the former representation is not good since it makes all the calculations, and the domains, unnecessarily large.
* And finally, the following constraint states that all prices to consider must be larger than 0:
   forall(i in 1..num_items) (
      costs[i,x[i]] > 0
   )
Sigh. Of course! No defense of my silliness there is.

This also lessens the surprise considerably in Some comments above.

May 11, 2010

MiniZinc version 1.1.2 released

Version 1.1.2 of MiniZinc has been released. Download.

Changes from version 1.1.1 (from NEWS):

G12 MiniZinc Distribution 1.1.2
Bugs fixed in this release:

* The file diffn.mzn is now included in globals.mzn.

* A bug in mzn2fzn that caused it to abort when flattening int2float expressions has been fixed.

* An error in the FlatZinc specification has been fixed. All var types may have assignments, not just arrays.

* A bug in mzn2fzn that caused it generate an array_int_element constraint where an array_var_int_element constraint was required has been fixed. [Bug #122]

* A bug that caused mzn2fzn to generate invalid FlatZinc rather than emit an error message when the bound of an unbounded variable is taken has been fixed.

May 06, 2010

Some 40 new ECLiPSe models

Over the last few days I have published about 40 new ECLiPSe models on my ECLiPSe page. They are all listed below.

A major part are ports of my SICStus Prolog models that I wrote some months ago, and I have not changed much from the SICStus versions. Also, most of them were - in turn - first written in MiniZinc (see my MiniZinc page).

However, some small puzzles are completely new. By a freaky coincidence, two of them are weighing problems (for comparison, the corresponding MiniZinc models are also linked):

The new models

Here are all the new ECLiPSe models.


I think I have forgotten to mention Helmut Simonis' great ECLiPSe ELearning Website, which includes 20 interesting chapters, most with video lectures. Even if the code examples are in ECLiPSe, the material is definitely of general interest for anyone who wants to learn more about Constraint Programming.

March 29, 2010

MiniZinc Challenge 2010

MiniZinc Challenge 2010 has been announced:

The Challenge

The aim of the challenge is to start to compare various constraint solving technology on the same problems sets. The focus is on finite domain propagation solvers. An auxiliary aim is to build up a library of interesting problem models, which can be used to compare solvers and solving technologies.

Entrants to the challenge provide a FlatZinc solver and global constraint definitions specialized for their solver. Each solver is run on 100 MiniZinc model instances. We run the translator mzn2fzn on the MiniZinc model and instance using the provided global constraint definitions to create a FlatZinc file. The FlatZinc file is input to the provided solver. Points are awarded for solving problems, speed of solution, and goodness of solutions (for optimization problems).


For the rules, see: MiniZinc Challenge 2010 -- Rules.

Also, see the earlier challenges:

March 26, 2010

MiniZinc version 1.1.1 released

MiniZinc version 1.1.1 has been released. Download.

From the NEWS:

G12 MiniZinc Distribution version 1.1.1

Bugs fixed in this release:

* A bug that caused predicate arguments to be incorrectly flattened in
reifying contexts has been fixed. [Bug #109]

* A bug in mzn2fzn that caused incorrect bounds to be calculated for the
result of a mod operation has been fixed. [Bug #107]

* A bug in mzn2fzn that caused out of range array accesses to be generated in
reified contexts, instead of just making the result of the reification
false. [Bug #110]

* The omission of the null annotation from the Zinc / MiniZinc specification
has been fixed.

* The rostering problem in the MiniZinc benchmark suite (benchmarks/roster),
has been reformulated. The old formulation was always unsatisfiable under
the change to the semantics of the mod operation introduced in MiniZinc 1.1.
[Bug #108]

* A bug in mzn2fzn that caused it to emit the null/0 annotation in the
generated FlatZinc. [Bug #111]

March 17, 2010

MiniZinc version 1.1 released

MiniZinc version 1.1 has been released. See below for how my existing (and maybe others') MiniZinc models are affected by the changes.

From the NEWS:

G12 MiniZinc Distribution version 1.1

Changes to the MiniZinc language:

* The following built-in operations have been introduced:

int: lb_array(array[int] of var int)
float: lb_array(array[int] of var float)
set of int: lb_array(array[int] of var set of int)

int: ub_array(array[int] of var int)
float: ub_array(array[int] of var float)
set of int: ub_array(array[int] of var set of int)

set of int: dom_array(array[int] of var int)

These new operations are synonyms for the following existing built-in
MiniZinc operations:

int: lb(array[$T] of var int)
float: lb(array[$T] of var float)
set of int: lb(array[$T] of var set of int)

int: ub(array[$T] of var int)
float: ub(array[$T] of var float)
set of int: ub(array[$T] of var set of int)

set of int: dom(array[$T] of var int)

These latter operations are now deprecated. Support for them will
be removed in the next release. This change is being made in order
to preserve compatibility with the full Zinc language.

Note that only the versions of lb, ub and dom that take an array
as an argument are deprecated. The MiniZinc lb, ub and dom operations
on non-array values are *not* deprecated.

Changes to the FlatZinc language:

* Boolean variable expressions as constraints are no longer supported.
All constraints in FlatZinc must now be predicate applications.

* String parameters are no longer supported. String literals are restricted
to appearing as the arguments of annotations.

* Set of bool and set of float parameters and literals are no longer supported.

* The int_float_lin/4 objective expression is no longer supported.

* FlatZinc now has two additional evaluation outcomes: "unknown"
for when search terminates without having explored the whole search
space and "unbounded", for when the objective of an optimization
problem is unbounded.

* The semantics of the int_div/3 and int_mod/3 built-ins has been changed.
See the ``Specification of FlatZinc'' for further details.

Other Changes:

* The single pass MiniZinc interpreter, minizinc, has been deprecated.
It will be removed in a future release.

* The MiniZinc-to-FlatZinc converter, mzn2fzn, has been rewritten.
The new implementation is smaller and more efficient.
Computation of variable bounds has also been improved.

* mzn2fzn now outputs singleton sets as ranges. [Bug #94]

* A bug that caused expressions containing abs/1 to be incorrectly
flattened has been fixed. [Bug #91]

* The FlatZinc interpreter's finite-domain backend now implements
global_cardinality_low_up as a built-in.

* The FlatZinc interpreter's lazy clause generation solver now supports
the int_mod/3 built-in.

* Two additional modes of operation have been added to the FlatZinc
solution processing tools, solns2dzn, that allow it to extract the first
or last solution from a FlatZinc output stream. Also, there is no longer
a default mode of operation for solns2dzn, it must now be specified by
the user or an error will occur.

* The following new global constraints have been added to the MiniZinc library:

lex_greater (Synonym for lex_less with arguments swapped.)
lex_greatereq (Synonym for lex_lesseq with arguments swapped.)

* The following synonyms for existing global constraints have been added
to the MiniZinc library (the existing name is given in parentheses):

alldifferent (all_different)
atleast (at_least)
atmost (at_most)
atmost1 (at_most1)

* The sequence constraint is deprecated. Models should use the new
sliding_sum constraint instead.

* The 'table' constraint decompositions in the MiniZinc library have been
modified so as to fit better with the G12 MiniZinc-to-FlatZinc conversion:
now no scaling constraints are created.

* The decompositions of the constraints in the 'lex' family have been
tweaked to enable a little more propagation.


Here are the three new specification documents for MiniZinc version 1.1 (and Zinc version 0.11):


The next few days I will change my MiniZinc models so they comply with version 1.1; I have already started this work. Update 2010-03-20: These changes have now been done, including updating the SVN repository.

The following will be changed:

  • lb/ub for arrays

    lb(array) and ub(array) will be changed to lb_array(array) and ub_array(array) respectively.
  • Comparing/copying arrays

    One thing that should not have worked even in MiniZinc version 1.0.3 - but for some reason did - was copying/equality/comparison of arrays in the constraint section or in predicates. This does not work in MiniZinc version 1.1. E.g., the following no longer works:

    int: n = 4;
    array[1..n] of var 1..n: x;
    constraint x = [1,2,3,4]; % no longer works

    Instead, the arrays must now be handled element-wise. Since many of my models use the above construct, especially for testing the global constraints, the models now use a new family of predicates cp<n>d (where <n> is the dimension: 1, 2, etc.), e.g. cp1d and cp2d. Example of one version of cp1d:

    int: n = 4;
    array[1..n] of var 1..n: x;
    solve satisfy;
    % arrays of 1d where both arguments are var int
    predicate cp1d(array[int] of var int: x, array[int] of var int: y) =
      assert(index_set(x) = index_set(y),
        "cp1d: x and y have different sizes",
        forall(i in index_set(x)) ( x[i] = y[i] ));
    constraint cp1d(x, [1,2,3,4]); % this works

    Some examples are collected in the model copy_arrays.mzn.

    I estimate that over 200 of my models have to be fixed in this way. As mentioned above, some of the models have already been changed.

  • Renamed models

    Some of my MiniZinc models have been renamed since they clash with new built-in predicates:

After the changes are done, I will also update the G12's MiniZinc SVN repository, the hakank directory.

Two more things

Also see my MiniZinc Page.

March 12, 2010

Pi Day Sudoku 2009 - the models (MiniZinc and Comet)

March 14 is Pi Day (π Day), a celebration of Pi (3.14 in the mm.dd date notation). There are a lot of activities for this day. One of them is the Pi Day Sudoku.

Almost exactly one year ago, I blogged about the Pi Day Sudoku 2009 problem in two posts. The problem is an extended version of a Sudoku problem:
Rules: Fill in the grid so that each row, column, and jigsaw region contains 1-9 exactly once and π [Pi] three times.


Since it was a competition (closed June 1, 2009), I didn't publish any models of this problem when blogging about it. Now, a year later, it seems to be a good time to publish them. I implemented two versions, one in MiniZinc and one in Comet. Both models use the same approach (though the details differ). It is the same as for plain Sudoku, with two exceptions:
  • The common approach in plain Sudoku is to use the global constraint alldifferent for stating that the values in the rows, columns, and regions should be different. Since there should be 3 occurrences of π in each row, column and region, this approach doesn't work. As mentioned in Solving Pi Day Sudoku 2009 with the global cardinality constraint, my first approach was a home-made version of the constraint alldifferent except 0 and Pi, but it was too slow. Instead (via a suggestion of Mikael Zayenz Lagerkvist) I changed to the global constraint global_cardinality, which was much faster.
  • The other difference from the standard approach is that the regions are not 3x3 (or MxM) but "jigsaw regions". It was not especially hard to create this representation (though boring to type in). The constraint for checking the regions is (in MiniZinc):
      /\ % the regions
      forall(i in 1..num_regions) (
        check([x[regions[j,1],regions[j,2]] | j in 1+(i-1)*n..1+((i-1)*n)+11 ])
      )
Here I will not publish the solution to the puzzle, since I gather that there are a lot of people out there who want to solve it on their own. And there is at least one valid solution out there.
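The check predicate used in the models below is essentially a cardinality test. Here is a plain Python sketch of the rule, with π encoded as -1 as in the models:

```python
from collections import Counter

PI = -1  # the encoding of π used in the models below

# A row/column/region of 12 cells is valid iff each of 1..9 occurs
# exactly once and π occurs exactly three times.
def check(cells):
    want = {PI: 3, **{v: 1 for v in range(1, 10)}}
    return len(cells) == 12 and Counter(cells) == want

print(check([PI, 1, 2, 3, PI, 4, 5, 6, 7, PI, 8, 9]))  # True
print(check([1, 1, 2, 3, PI, 4, 5, 6, 7, PI, 8, 9]))   # False
```

In the constraint models this counting is done declaratively by global_cardinality over decision variables, rather than over known values as here.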


This was a fun problem, especially since I learned some new things by implementing the models. As a constraint programming challenge it was quite a bit harder than this year's puzzle, Pi Day 2010:
Rules: Fill in the grid so that each row, column, and block contains 1-9 exactly once. This puzzle only has 18 clues! That is conjectured to be the least number of clues that a unique-solution rotationally symmetric puzzle can have. To celebrate Pi Day, the given clues are the first 18 digits of π = 3.14159265358979323...
[Yes, I have implemented a MiniZinc model for this problem as well; it is a standard Sudoku problem. No, I will not publish the model or a solution until the deadline, June 1, 2010.]

For more about Pi Day Sudoku 2010, see the blog 360: Pi Day Sudoku is back.

Also, see the following models that implement Killer Sudoku, using the same approach as Pi Day Sudoku 2009:

The MiniZinc code

Here is the MiniZinc model (sudoku_pi.mzn), slightly edited.
include "globals.mzn";
int: n = 12;
int: X = 0;  % the unknown
int: P = -1; % π (Pi)

predicate check(array[int] of var int: x) =
   global_cardinality(x, occ); % :: domain

array[1..n, 1..n] of var -1..9: x :: is_output;
array[-1..9] of 0..3: occ = array1d(-1..9, [3, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]);
array[1..11] of 0..3: occ2 = [3, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1];

% solve satisfy;
solve :: int_search([x[i,j] | i,j in 1..n], first_fail, indomain_min, complete) satisfy;


constraint
  % copy the hints
  forall(i in 1..n, j in 1..n) (
      x[i,j] != 0
      /\
      if dat[i,j] != X then
        x[i,j] = dat[i,j]
      else
        true
      endif
  )

  /\ % rows
  forall(i in 1..n) (
    check([x[i,j] | j in 1..n]) 
  )

  /\ % columns
  forall(j in 1..n) (
    check([x[i,j] | i in 1..n])
  )

  /\ % the regions
  forall(i in 1..num_regions) (
    check([x[regions[j,1],regions[j,2]] | j in 1+(i-1)*n..1+((i-1)*n)+11 ])
  );

output 
[ show(occ) ++ "\n"] ++
[
  if j = 1 then "\n" else " " endif ++
  show(x[i,j])
  | i in 1..n, j in 1..n
] ++ ["\n"];

% data
array[1..n,1..n] of int: dat = array2d(1..n, 1..n, [
 4,9,7, P,5,X,X,X,X, X,X,X,
 X,P,X, 8,X,X,9,6,1, 5,2,X,
 X,8,X, 1,X,X,X,P,X, 7,X,X,
 X,X,X, X,X,X,X,P,X, 4,X,X,
 5,3,9, 6,X,X,X,X,X, X,X,X,

 9,4,X, P,P,P,7,X,X, X,X,X,
 X,X,X, X,X,6,2,5,P, X,7,4,
 X,X,X, X,X,X,X,X,P, P,3,8,
 X,7,8, 4,6,9,X,X,X, X,X,X,

 X,X,3, X,P,X,X,4,7, 1,6,9,
 X,X,4, X,1,X,X,X,6, X,P,X,
 X,X,X, X,X,X,X,X,4, X,5,X
]);

% The regions
int: num_regions = 12;
array[1..num_regions*12, 1..2] of int: regions  = array2d(1..num_regions*12, 1..2, [
  % Upper left dark green
  1,1  , 1,2  , 1,3  , 
  2,1  , 2,2  , 2,3  , 
  3,1  , 3,2  , 
  4,1  , 4,2  ,  
  5,1  , 5,2  , 
  % Upper mid light dark green
  1,4  ,  1,5  ,  1,6  ,  1,7  ,  1,8  ,  1,9  , 
  2,4  ,  2,5  ,  2,6  ,  2,7  ,  2,8  ,  2,9  , 

  % Upper right green
  1,10  ,  1,11  ,  1,12  , 
  2,10  ,  2,11  ,  2,12  , 
  3,11  ,  3,12  , 
  4,11  ,  4,12  , 
  5,11  ,  5,12   , 

  % Mid upper left "blue"
  3,3  ,  3,4  , 3,5  ,  3,6  , 
  4,3  ,  4,4  , 4,5  ,  4,6  , 
  5,3  ,  5,4  , 5,5  ,  5,6  , 

  % Mid Upper right blue
  3,7  ,  3,8  ,  3,9  ,  3,10  , 
  4,7  ,  4,8  ,  4,9  ,  4,10  , 
  5,7  ,  5,8  ,  5,9  ,  5,10  , 

  % Mid left green
  6,1  ,  6,2  , 6,3  , 
  7,1  ,  7,2  , 7,3  , 
  8,1  ,  8,2  , 8,3  , 
  9,1  ,  9,2  , 9,3  , 

  % Mid left blue
  6,4  , 6,5  , 
  7,4  , 7,5  , 
  8,4  , 8,5  , 
  9,4  , 9,5  , 
  10,4 , 10,5  , 
  11,4 , 11,5  , 

  % Mid mid green
  6,6  , 6,7  , 
  7,6  , 7,7  , 
  8,6  , 8,7  , 
  9,6  , 9,7  , 
  10,6 , 10,7  , 
  11,6 , 11,7  , 

  % Mid right blue
  6,8  ,  6,9  , 
  7,8  ,  7,9  , 
  8,8  ,  8,9  , 
  9,8  ,  9,9  , 
  10,8 ,  10,9  , 
  11,8 ,  11,9  , 

  % Mid right green
  6,10  ,  6,11  ,  6,12  , 
  7,10  ,  7,11  ,  7,12  , 
  8,10  ,  8,11  ,  8,12  , 
  9,10  ,  9,11  ,  9,12  , 

  % Lower left dark green
  10,1  , 10,2  ,  10,3  , 
  11,1  , 11,2  ,  11,3  , 
  12,1  , 12,2  ,  12,3  , 12,4  , 12,5  ,  12,6  , 

  % Lower right dark green
  10,10  ,  10,11  , 10,12  , 
  11,10  ,  11,11  , 11,12  , 
  12,7   ,  12,8   ,  12,9  , 12,10  , 12,11  ,  12,12  
]);

The Comet code

The Comet model uses the same principle as the MiniZinc model. However, the representation of the regions is different: instead of a matrix, I use a more object-oriented approach with two tuples for the structures. For some reason that I have now forgotten, I didn't create a check function in this Comet model; instead I stated the cardinality constraints directly.
import cotfd;
int t0 = System.getCPUTime();

int n = 12;
int P = -1; // Pi
int X = 0; // unknown 
range R = -1..9; 

set{int} V = {-1,1,2,3,4,5,6,7,8,9};

// regions where 1..9 is alldiff + 3 Pi
tuple Pos {
  int row;
  int col;
}

tuple Region {
  set{Pos} p;
}

int num_regions = 12;
Region regions[1..num_regions] = 
[
 // Upper left dark green
 Region({Pos(1,1), Pos(1,2), Pos(1,3),
         Pos(2,1), Pos(2,2), Pos(2,3),
         Pos(3,1), Pos(3,2),
         Pos(4,1), Pos(4,2),
         Pos(5,1), Pos(5,2)}),
 // Upper mid light dark green
 Region({Pos(1,4), Pos(1,5), Pos(1,6), Pos(1,7), Pos(1,8), Pos(1,9),
         Pos(2,4), Pos(2,5), Pos(2,6), Pos(2,7), Pos(2,8), Pos(2,9)}),

 // Upper right green
 Region({Pos(1,10), Pos(1,11), Pos(1,12),
         Pos(2,10), Pos(2,11), Pos(2,12),
         Pos(3,11), Pos(3,12),
         Pos(4,11), Pos(4,12),
         Pos(5,11), Pos(5,12) }),

 // Mid upper left "blue"
 Region({Pos(3,3), Pos(3,4),Pos(3,5), Pos(3,6),
         Pos(4,3), Pos(4,4),Pos(4,5), Pos(4,6),
         Pos(5,3), Pos(5,4),Pos(5,5), Pos(5,6)}),

 // Mid Upper right blue
 Region({Pos(3,7), Pos(3,8), Pos(3,9), Pos(3,10),
        Pos(4,7), Pos(4,8), Pos(4,9), Pos(4,10),
        Pos(5,7), Pos(5,8), Pos(5,9), Pos(5,10)}),

 // Mid left green
 Region({Pos(6,1), Pos(6,2),Pos(6,3),
         Pos(7,1), Pos(7,2),Pos(7,3),
         Pos(8,1), Pos(8,2),Pos(8,3),
         Pos(9,1), Pos(9,2),Pos(9,3)}),

 // Mid left blue
 Region({Pos(6,4), Pos(6,5),
         Pos(7,4), Pos(7,5),
         Pos(8,4), Pos(8,5),
         Pos(9,4), Pos(9,5),
         Pos(10,4), Pos(10,5),
         Pos(11,4), Pos(11,5)}),

 // Mid mid green
 Region({Pos(6,6), Pos(6,7),
         Pos(7,6), Pos(7,7),
         Pos(8,6), Pos(8,7),
         Pos(9,6), Pos(9,7),
         Pos(10,6), Pos(10,7),
         Pos(11,6), Pos(11,7)}),
 // Mid right blue
 Region({Pos(6,8), Pos(6,9),
         Pos(7,8), Pos(7,9),
         Pos(8,8), Pos(8,9),
         Pos(9,8), Pos(9,9),
         Pos(10,8), Pos(10,9),
         Pos(11,8), Pos(11,9)}),

 // Mid right green
 Region({Pos(6,10), Pos(6,11), Pos(6,12),
         Pos(7,10), Pos(7,11), Pos(7,12),
         Pos(8,10), Pos(8,11), Pos(8,12),
         Pos(9,10), Pos(9,11), Pos(9,12)}),

 // Lower left dark green
 Region({Pos(10,1),Pos(10,2), Pos(10,3),
         Pos(11,1),Pos(11,2), Pos(11,3),
         Pos(12,1),Pos(12,2), Pos(12,3),Pos(12,4),Pos(12,5), Pos(12,6)}),

 // Lower right dark green
 Region({Pos(10,10), Pos(10,11),Pos(10,12),
         Pos(11,10), Pos(11,11),Pos(11,12),
         Pos(12,7),Pos(12,8), Pos(12,9),Pos(12,10),Pos(12,11), Pos(12,12)})
];


// the hints
int data[1..n,1..n] = 
[
 [4,9,7, P,5,X,X,X,X, X,X,X],
 [X,P,X, 8,X,X,9,6,1, 5,2,X],
 [X,8,X, 1,X,X,X,P,X, 7,X,X],
 [X,X,X, X,X,X,X,P,X, 4,X,X],
 [5,3,9, 6,X,X,X,X,X, X,X,X],

 [9,4,X, P,P,P,7,X,X, X,X,X],
 [X,X,X, X,X,6,2,5,P, X,7,4],
 [X,X,X, X,X,X,X,X,P, P,3,8],
 [X,7,8, 4,6,9,X,X,X, X,X,X],

 [X,X,3, X,P,X,X,4,7, 1,6,9],
 [X,X,4, X,1,X,X,X,6, X,P,X],
 [X,X,X, X,X,X,X,X,4, X,5,X]
];

Integer num_solutions(0);

Solver m();
var{int} x[1..n,1..n](m, R);

// int occ[-1..9] = [3,0,1,1,1,1,1,1,1,1,1];
var{int} occ[-1..9](m,0..3);
int occ_count[-1..9] = [3,0,1,1,1,1,1,1,1,1,1];

exploreall {

  // get the hints
  forall(i in 1..n) {
    forall(j in 1..n) {
      int c = data[i,j];
      if (c == P) {
        cout << "P";
      } else {
        cout << data[i,j];
      }
      if (c != 0) m.post(x[i,j] == data[i,j]);
      cout << " ";
    }
    cout << endl;
  }

  forall(i in 1..n, j in 1..n) m.post(x[i,j] != 0);

  forall(i in -1..9) m.post(occ[i] == occ_count[i]);

  // rows
  forall(i in 1..n) m.post(cardinality(occ, all(j in 1..n) x[i,j])); 

  // columns
  forall(j in 1..n) m.post(cardinality(occ, all(i in 1..n) x[i,j]));

  // regions
  forall(r in 1..num_regions) m.post(cardinality(occ, all(i in regions[r].p) x[i.row,i.col])); 


} using {

  // reversing i and j gives faster solution
  forall(i in 1..n, j in 1..n: !x[i,j].bound()) {
    tryall(v in V : x[i,j].memberOf(v)) by(v) 
      m.label(x[i,j], v);
    onFailure 
      m.diff(x[i,j], v);
  }

  int t1 = System.getCPUTime();
  cout << "time:      " << (t1-t0) << endl;
  cout << "#choices = " << m.getNChoice() << endl;
  cout << "#fail    = " << m.getNFail() << endl;
  cout << "#propag  = " << m.getNPropag() << endl;


   forall(i in 1..n) {
     forall(j in 1..n) {
       int c = x[i,j];
       if (c == P) {
         cout << "P";
       } else {
         cout << x[i,j];
       }
       cout << " ";
     }
     cout << endl;
   }

cout << "\nnum_solutions: " << num_solutions << endl;
cout << endl << endl;

int t1 = System.getCPUTime();
cout << "time:      " << (t1-t0) << endl;
cout << "#choices = " << m.getNChoice() << endl;
cout << "#fail    = " << m.getNFail() << endl;
cout << "#propag  = " << m.getNPropag() << endl;

January 14, 2010

Some new MiniZinc models, mostly Enigma problems

The last week I have solved some (more) of New Scientist's Enigma problems. Note: You may have to be a subscriber to read the full articles, though there is a grace period with some free peeks (7, it seems).

Enigma problems

Here are the new models of Enigma problems. All are written in MiniZinc.
  • enigma_counting_pennies.mzn: Enigma Counting pennies (from 2005)
    Here all 24 ways of permuting a list of 4 items are needed. Instead of calculating them, I simply listed all the variants and used them later as indices.
  • enigma_843.mzn: Enigma How many are whole? (#843)
    In this model there were problems using the following type of reifications:
      square(ONE) /\ ( square(TWO) \/ square(THREE) \/ square(FOUR)) % /\ ..
      % or
      (square(ONE) /\ ( bool2int(square(TWO)) + bool2int(square(THREE)) + bool2int(square(FOUR)) = 1) % /\ ...
    where square is defined as:
    predicate square(var int: y) =
      let {
         var 1..ceil(sqrt(int2float(ub(y)))): z
      } in
       z*z = y;
    In some of the later versions of MiniZinc (probably 1.0.2), the handling of reification was changed. My fix was to add a boolean matrix which handles the boolean constraints. It is not beautiful:
       % In the list ONE TWO THREE FOUR just the first and one other are perfect squares.
       square(ONE) /\ 
       [bools[1,j] | j in 1..4] = [square(ONE),square(TWO),square(THREE),square(FOUR)] /\
       sum([bool2int(bools[1,j]) | j in 1..4]) = 2 /\
       % ...
  • enigma_1530.mzn: Enigma Tom Daley (#1530)
    A simple alphametic puzzle.
  • enigma_1535.mzn: Enigma Back to front (#1535)
    Magic square with some more constraints.
  • enigma_1555.mzn: Enigma Not a square (#1555)
    Another alphametic puzzle on a grid.
  • enigma_1557.mzn: Enigma Reverse Division (#1557)
    Division and reversing some numbers.
  • enigma_1568.mzn: Enigma Odd puzzle (#1568)
    Long multiplication alphametic puzzle.
  • enigma_1570.mzn: Enigma Set of cubes (#1570)
    Pandigital problem over a set of cubes. Here I simply listed the cubes between 0^3 and 1000^3 instead of calculating them. This may be considered cheating...
  • enigma_1575.mzn: Enigma All our days (#1575)
    Another alphametic puzzle with some more constraints.
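As an aside, the square predicate used in the enigma_843 model above can be sketched as a plain Python function (my own illustration, not part of any of the models):

```python
import math

# y is a perfect square iff some z in 1..floor(sqrt(y)) satisfies
# z*z = y, mirroring the MiniZinc square(y) predicate above.
def square(y):
    return any(z * z == y for z in range(1, math.isqrt(y) + 1))

print([y for y in range(1, 30) if square(y)])   # [1, 4, 9, 16, 25]
```

In the MiniZinc predicate the existential z is a decision variable, so the same test can also be reified and combined with other constraints.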
Some of these problems may have been better solved using other means, e.g. pure (or not so pure) mathematics, but this is after all a blog about constraint programming, and modelling them is good exercise. And fun.

I have also solved - or at least modeled - some of the open problems: 1573, 1574, 1576, and 1577, but will not publish the models until they are closed. I "accidentally" published problem #1574 the other day before I realized it was still open, but so be it.

Some other models/data files

Here are two simple new problems taken from Choco's (forthcoming site) example distribution. Also, a new data file for the Bridge and Torch model: bridge_and_torch_problem8.dzn, which is, as I understand it, the same as Choco's U2planning problem.

Birthdays 2010 puzzle

Last, here is a very simple puzzle of my own based on a coincidence of birth years of me and my brother and the current year:
This year (2010) my older brother Anders will be the same age as the year I was born in the last century, and I will be the same age as the year he was born in the last century. My brother is four years older than me. How old are we this year?
An (unnecessarily complex) MiniZinc model of this problem is birthdays_2010.mzn. Of course, it can be solved much more easily with the simple equation system {H+A=110, A-H=4} (link to Wolfram Alpha), or in MiniZinc:
constraint H+A = 110 /\ A-H = 4;
But this latter version is not very declarative, and I - in general - prefer the declarative way.

Update: Added link to Wolfram Alpha for the simple equation above.
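The equation system can also be checked by brute force (a Python sketch, assuming both ages lie in 0..110):

```python
# H = my age this year, A = my brother's age;
# the puzzle gives H + A = 110 and A - H = 4.
solutions = [(H, A) for H in range(111) for A in range(111)
             if H + A == 110 and A - H == 4]
print(solutions)   # [(53, 57)]
```

So this year I am 53 and my brother is 57, consistent with the equations.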

January 04, 2010

Finding celebrities at a party

This problem is from Uwe Hoffmann's Finding celebrities at a party (PDF):
Problem: Given a list of people at a party and for each person the list of people they know at the party, we want to find the celebrities at the party. A celebrity is a person that everybody at the party knows but that only knows other celebrities. At least one celebrity is present at the party.
(This paper contains an implementation of the problem in Scala.)

Note: The original of this problem is Richard Bird and Sharon Curtis: "Functional pearls: Finding celebrities: A lesson in functional programming" (J. Funct. Program., 16(1):13–20, 2006), but neither I nor Hoffmann had been able to access that paper. Update: I have now got hold of it. Thank you SW!

The example in Hoffmann's paper is to find of who are the celebrity/celebrities in this party graph:
Adam  knows {Dan,Alice,Peter,Eva},
Dan   knows {Adam,Alice,Peter},
Eva   knows {Alice,Peter},
Alice knows {Peter},
Peter knows {Alice}
Solution: the celebrities are Peter and Alice.

MiniZinc model

The MiniZinc model of this problem is finding_celebrities.mzn.

Following is some discussion of the two constraints:
  • All must know a celebrity
  • Celebrities only know each other
But first, a comment on how the party graph is represented.

Party graph

Here I have chosen to represent the party as an array of sets:
  Adam  (1) knows {Dan,Eva,Alice,Peter}  {2,3,4,5}
  Dan   (2) knows {Adam,Alice,Peter}     {1,4,5}
  Eva   (3) knows {Alice,Peter}          {4,5}
  Alice (4) knows {Peter}                {5}
  Peter (5) knows {Alice}                {4}
This is coded in MiniZinc as:
party = [
          {2,3,4,5}, % 1, Adam
          {1,4,5},   % 2, Dan 
          {4,5},     % 3, Eva
          {5},       % 4, Alice
          {4}        % 5, Peter
This corresponds to the following incidence matrix, which is calculated in the model. To simplify the calculations, we assume that a person knows him/herself (this is also handled in the model).
% 1 2 3 4 5
  1,1,1,1,1, % 1
  1,1,0,1,1, % 2
  0,0,1,1,1, % 3
  0,0,0,1,1, % 4
  0,0,0,1,1  % 5
This conversion from incidence sets (party) to incidence matrix (graph) is done by the set2matrix predicate:
predicate set2matrix(array[int] of var set of int: s,
                     array[int,int] of var int: mat) =
     forall(i in index_set(s)) ( graph[i,i] = 1)
     /\
     forall(i,j in index_set(s) where i != j) (
       j in party[i] <-> graph[i,j] = 1
     );
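The same conversion can be sketched in a few lines of Python (an illustration only, using the party data above; 1 means "knows"):

```python
# Build the incidence matrix from the party sets, with graph[i][i] = 1
# since everyone is assumed to know him/herself.
party = {1: {2, 3, 4, 5}, 2: {1, 4, 5}, 3: {4, 5}, 4: {5}, 5: {4}}
n = len(party)
graph = [[1 if i == j or j in party[i] else 0 for j in range(1, n + 1)]
         for i in range(1, n + 1)]
for row in graph:
    print(row)   # prints the incidence matrix shown above, row by row
```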

All must know a celebrity

I started this model by considering the celebrities as a clique, and therefore used the constraint clique (which is still included in the model). However, identifying a clique is not enough, since there may be other cliques whose members are not celebrities. In the above example there is another clique: {Dan,Adam}.

In fact, the clique constraint is not needed at all (and it actually makes the model slower). Instead we can just look for person(s) that everybody knows, i.e. where there are all 1's in the column of a celebrity in the party graph matrix (graph in the model). This is covered by the constraint:
forall(i in 1..n) (
   (i in celebrities -> sum([graph[j,i] | j in 1..n]) = n)
)
This is a necessary condition but not sufficient for identifying celebrities.

Celebrities only know each other

We must also add the constraint that the people in the celebrity clique don't know anyone outside the clique. This is handled by the constraint that all celebrities must know the same number of persons, namely the size (cardinality) of the clique:
forall(i in 1..n) (
   i in celebrities -> sum([graph[i,j] | j in 1..n]) = num_celebrities
)
As noted above, a person is assumed to know him/herself.

Running the model

If we run the MiniZinc model finding_celebrities.mzn we get the following solution:
celebrities = 4..5;
num_celebrities = 2;
which means that the celebrities are {4,5}, i.e. {Alice, Peter}.

Another, slightly different party

We now change the party matrix slightly by assuming that Alice (4) also knows Adam (1), i.e. the following incidence matrix:
 % 1 2 3 4 5
   1,1,1,1,1, % 1
   1,1,0,1,1, % 2
   0,0,1,1,1, % 3
   1,0,0,1,1, % 4
   0,0,0,1,1  % 5
This makes Alice a non-celebrity since she knows the non-celebrity Adam, and this in turn also makes Peter a non-celebrity since he knows the non-celebrity Alice. In short: in this party there are no celebrities.
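Both party variants can be cross-checked with a small brute-force sketch in Python (my own cross-check of the two conditions, not a port of the MiniZinc model):

```python
from itertools import combinations

def celebrities(party):
    # everyone is assumed to know him/herself
    knows = {i: party[i] | {i} for i in party}
    # search for the largest set C such that everybody knows all of C
    # and every member of C knows exactly C
    for size in range(len(party), 0, -1):
        for c in combinations(sorted(party), size):
            cs = set(c)
            if all(cs <= knows[j] for j in knows) and \
               all(knows[i] == cs for i in cs):
                return cs
    return set()

# the original party: Adam, Dan, Eva, Alice, Peter (1..5)
party = {1: {2, 3, 4, 5}, 2: {1, 4, 5}, 3: {4, 5}, 4: {5}, 5: {4}}
print(celebrities(party))   # {4, 5}, i.e. Alice and Peter

# the variant where Alice (4) also knows Adam (1): no celebrities
party2 = {1: {2, 3, 4, 5}, 2: {1, 4, 5}, 3: {4, 5}, 4: {1, 5}, 5: {4}}
print(celebrities(party2))  # set()
```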

The model also contains a somewhat larger party graph with 10 persons.

Update, about 8 hours later: There is now a version which uses just set constraints, i.e. no conversion to an incidence matrix: finding_celebrities2.mzn. The constraint section is now:
  num_celebrities >= 1

  /\ % all persons know the celebrities,
     % and the celebrities only know celebrities
  forall(i in 1..n) (
     (i in celebrities -> 
             forall(j in 1..n where j != i) (i in party[j]))
     /\
     (i in celebrities -> 
             card(party[i]) = num_celebrities-1)
  )
I have kept the same representation of the party (the array of sets of who knows whom) as in the earlier model, which means that a person is now not assumed to know him/herself. The code reflects this change by using where j != i and num_celebrities-1.

December 29, 2009

1 year anniversary and Secret Santa problem II

Exactly one year ago, I started this blog. Little did I know what to expect. Through this blog and its accompanying pages I have met many very interesting people who have taught me much in my pursuit of learning constraint programming. I am very grateful for your help and inspiration! And thanks to all my (other) readers.

I hope the following year will be as rewarding as this last.

Secret Santa problem II

As an anniversary gift, here is another Secret Santa problem (compare with Merry Christmas: Secret Santas Problem) with a slightly different touch.

The problem formulation is from Maple Primes forum Secret Santa Graph Theory:
Every year my extended family does a "secret santa" gift exchange. Each person draws another person at random and then gets a gift for them. At first, none of my siblings were married, and so the draw was completely random. Then, as people got married, we added the restriction that spouses should not draw each others names. This restriction meant that we moved from using slips of paper on a hat to using a simple computer program to choose names. Then people began to complain when they would get the same person two years in a row, so the program was modified to keep some history and avoid giving anyone a name in their recent history. This year, not everyone was participating, and so after removing names, and limiting the number of exclusions to four per person, I had data something like this:

Name: Spouse, Recent Picks

Noah: Ava, Ella, Evan, Ryan, John
Ava: Noah, Evan, Mia, John, Ryan
Ryan: Mia, Ella, Ava, Lily, Evan
Mia: Ryan, Ava, Ella, Lily, Evan
Ella: John, Lily, Evan, Mia, Ava
John: Ella, Noah, Lily, Ryan, Ava
Lily: Evan, John, Mia, Ava, Ella
Evan: Lily, Mia, John, Ryan, Noah
I have interpreted the problem as follows:
  • one cannot be a Secret Santa of one's spouse nor of oneself
  • one cannot be a Secret Santa for somebody two years in a row
  • objective: maximize the "Secret Santa distance", i.e. the number of years since the last assignment of the same person
My MiniZinc model for this problem is secret_santa2.mzn.

This is a more traditional linear programming problem compared to Secret Santa I, using a distance matrix for maximizing the "Secret Santa distance". M is a "large" number (number of persons + 1) used for coding that there has been no previous Secret Santa assignment.
rounds = array2d(1..n, 1..n, [
%N  A  R  M  El J  L  Ev 
 0, M, 3, M, 1, 4, M, 2, % Noah
 M, 0, 4, 2, M, 3, M, 1, % Ava 
 M, 2, 0, M, 1, M, 3, 4, % Ryan
 M, 1, M, 0, 2, M, 3, 4, % Mia 
 M, 4, M, 3, 0, M, 1, 2, % Ella
 1, 4, 3, M, M, 0, 2, M, % John
 M, 3, M, 2, 4, 1, 0, M, % Lily
 4, M, 3, 1, M, 2, M, 0  % Evan
]);

The original problem doesn't say anything about single persons, i.e. those without a spouse. In this model, singleness (no-spouseness) is coded as spouse = 0, and the no-spouse-Santa constraint has been adjusted to take care of this.
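As an aside, the rounds matrix above can be derived mechanically from the "Recent Picks" lists. Here is a sketch in Python (the data layout and names are my own, not part of the model):

```python
# Hypothetical sketch: build the "rounds" distance matrix from the
# "Recent Picks" data. M = n + 1 marks "no recent assignment"
# (which also covers the spouse, who is never a recent pick).
people = ["Noah", "Ava", "Ryan", "Mia", "Ella", "John", "Lily", "Evan"]
recent = {  # most recent pick first
    "Noah": ["Ella", "Evan", "Ryan", "John"],
    "Ava":  ["Evan", "Mia", "John", "Ryan"],
    "Ryan": ["Ella", "Ava", "Lily", "Evan"],
    "Mia":  ["Ava", "Ella", "Lily", "Evan"],
    "Ella": ["Lily", "Evan", "Mia", "Ava"],
    "John": ["Noah", "Lily", "Ryan", "Ava"],
    "Lily": ["John", "Mia", "Ava", "Ella"],
    "Evan": ["Mia", "John", "Ryan", "Noah"],
}
n = len(people)
M = n + 1
rounds = [[M] * n for _ in range(n)]
for i, p in enumerate(people):
    rounds[i][i] = 0
    for years_ago, q in enumerate(recent[p], start=1):
        rounds[i][people.index(q)] = years_ago
print(rounds[0])  # Noah's row -> [0, 9, 3, 9, 1, 4, 9, 2]
```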

The constraint part is the following, where n is the number of persons:

   /\ % no Santa for oneself or the spouse
   forall(i in 1..n) (
      santas[i] != i /\
      if spouses[i] > 0 then santas[i] != spouses[i] else true endif
   )

   /\ % the "Santa distance"
   forall(i in 1..n) ( santa_distance[i] = rounds[i,santas[i]] )

   /\ % cannot be a Secret Santa for the same person two years in a row
   forall(i in 1..n) (
      let { var 1..n: j } in
       rounds[i,j] = 1 /\ santas[i] != j
   )

   /\
   z = sum(santa_distance)


This model gives - when using solve satisfy and the constraint z >= 67 - the following 8 solutions, all with a total Secret Santa distance of 67 (sum(santa_distance)). If all Secret Santa assignments were new, the total would have been n*(n+1) = 8*9 = 72. As we can see, there is always one Santa with a previously existing assignment. (With one single person and the data I faked, we can get all brand new Secret Santas. See the model for this.)
santa_distance: 9 9 4 9 9 9 9 9
santas        : 7 5 8 6 1 4 3 2

santa_distance: 9 9 4 9 9 9 9 9
santas        : 7 5 8 6 3 4 1 2

santa_distance: 9 9 9 4 9 9 9 9
santas        : 7 5 6 8 1 4 3 2

santa_distance: 9 9 9 4 9 9 9 9
santas        : 7 5 6 8 3 4 1 2

santa_distance: 9 9 9 9 4 9 9 9
santas        : 4 7 1 6 2 8 3 5

santa_distance: 9 9 9 9 4 9 9 9
santas        : 4 7 6 1 2 8 3 5

santa_distance: 9 9 9 9 9 9 4 9
santas        : 4 7 1 6 3 8 5 2

santa_distance: 9 9 9 9 9 9 4 9
santas        : 4 7 6 1 3 8 5 2
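As a quick sanity check (my own, outside the model), the first solution above can be verified against the constraints in a few lines of Python:

```python
# Verify the first solution against the constraints and the distance sum 67.
M = 9  # n + 1 = "no recent assignment"
rounds = [
    [0, M, 3, M, 1, 4, M, 2],  # Noah
    [M, 0, 4, 2, M, 3, M, 1],  # Ava
    [M, 2, 0, M, 1, M, 3, 4],  # Ryan
    [M, 1, M, 0, 2, M, 3, 4],  # Mia
    [M, 4, M, 3, 0, M, 1, 2],  # Ella
    [1, 4, 3, M, M, 0, 2, M],  # John
    [M, 3, M, 2, 4, 1, 0, M],  # Lily
    [4, M, 3, 1, M, 2, M, 0],  # Evan
]
spouses = [2, 1, 4, 3, 6, 5, 8, 7]  # 1-based spouse of each person
santas = [7, 5, 8, 6, 1, 4, 3, 2]   # the first solution above (1-based)

assert sorted(santas) == list(range(1, 9))  # everyone gives and receives
for i, s in enumerate(santas, start=1):
    assert s != i and s != spouses[i - 1]   # not oneself, not the spouse
    assert rounds[i - 1][s - 1] != 1        # not the same person as last year
print(sum(rounds[i][santas[i] - 1] for i in range(8)))  # -> 67
```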

December 25, 2009

Merry Christmas: Secret Santas Problem

Here is a fun little problem related to the holiday. Merry Christmas, everyone! (For the Swedish readers: sorry for the one-day-late greeting.)

This problem is from the Ruby Quiz#2 Secret Santas
Honoring a long standing tradition started by my wife's dad, my friends all play a Secret Santa game around Christmas time. We draw names and spend a week sneaking that person gifts and clues to our identity. On the last night of the game, we get together, have dinner, share stories, and, most importantly, try to guess who our Secret Santa was. It's a crazily fun way to enjoy each other's company during the holidays.

To choose Santas, we used to draw names out of a hat. This system was tedious, and prone to many "Wait, I got myself..." problems. This year, we made a change to the rules that further complicated picking and we knew the hat draw would not stand up to the challenge. Naturally, to solve this problem, I scripted the process. Since that turned out to be more interesting than I had expected, I decided to share.

This week's Ruby Quiz is to implement a Secret Santa selection script.

Your script will be fed a list of names on STDIN.


Your script should then choose a Secret Santa for every name in the list. Obviously, a person cannot be their own Secret Santa. In addition, my friends no longer allow people in the same family to be Santas for each other and your script should take this into account.
The MiniZinc model secret_santa.mzn skips the input-parsing and mailing parts. Instead, we assume that the friends are identified with a unique number from 1..n, and the families with a number from 1..num_families.

We use two arrays:
  • the array x represents whom a person should be a Santa of. x[1] = 10 means that person 1 is a Secret Santa of person 10, etc.
  • the family array consists of the family identifier of each person.
Now, the three constraints can easily be stated in a constraint programming system like MiniZinc:
  • "everyone gives and receives a Secret Santa gift": this is handled by requiring that x is a permutation of the values 1..n, using all_different(x).
  • "one cannot be one's own Secret Santa": this is captured in the no_fix_points predicate, stating that there is no i for which x[i] = i (i.e. no "fix point").
  • "no Secret Santa to a person in the same family": here we use the family array and check that for each person i, the family of i (family[i]) is not the same as the family of the person who receives the gift (family[x[i]]).
Here is the complete MiniZinc model (in a slightly compact form):
include "globals.mzn"; 
int: n = 12;
int: num_families = 4;
array[1..n] of 1..num_families: family = [1,1,1,1, 2, 3,3,3,3,3, 4,4];
array[1..n] of var 1..n: x :: is_output;

% Ensure that there are no fix points in the array.
predicate no_fix_points(array[int] of var int: x) = 
      forall(i in index_set(x)) ( x[i] != i  );

solve satisfy;

constraint
  % Everyone gives and receives a Secret Santa
  all_different(x) /\

  % Can't be one's own Secret Santa
  no_fix_points(x) /\

  % No Secret Santa to a person in the same family
  forall(i in index_set(x)) ( family[i] != family[x[i]] );

% output (just for the minizinc solver)
output [
   "Person " ++ show(i) ++ 
   " (family: " ++ show(family[i]) ++ ") is a Secret Santa of " ++ 
    show(x[i]) ++ 
   " (family: " ++ show(family[x[i]]) ++ ")\n"
   | i in 1..n
];


Here is the first solution (of many):
[10, 9, 8, 5, 12, 4, 3, 2, 1, 11, 7, 6]
This means that person 1 should be a Secret Santa of person 10, etc.

The minizinc solver gives the following, using the output code (slightly edited):
Person  1 (family: 1) is a Secret Santa of 10 (family: 3)
Person  2 (family: 1) is a Secret Santa of  9 (family: 3)
Person  3 (family: 1) is a Secret Santa of  8 (family: 3)
Person  4 (family: 1) is a Secret Santa of  5 (family: 2)
Person  5 (family: 2) is a Secret Santa of 12 (family: 4)
Person  6 (family: 3) is a Secret Santa of  4 (family: 1)
Person  7 (family: 3) is a Secret Santa of  3 (family: 1)
Person  8 (family: 3) is a Secret Santa of  2 (family: 1)
Person  9 (family: 3) is a Secret Santa of  1 (family: 1)
Person 10 (family: 3) is a Secret Santa of 11 (family: 4)
Person 11 (family: 4) is a Secret Santa of  7 (family: 3)
Person 12 (family: 4) is a Secret Santa of  6 (family: 3)
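The same three constraints can also be solved by naive backtracking; here is a small Python sketch of my own (not the MiniZinc model), using the same family data:

```python
# A backtracking sketch (0-based internally): santas[i] = j means
# person i is a Secret Santa of person j.
family = [1, 1, 1, 1, 2, 3, 3, 3, 3, 3, 4, 4]  # family of each person
n = len(family)

def assign(santas):
    """Extend a partial assignment one person at a time."""
    i = len(santas)
    if i == n:
        return santas
    for j in range(n):
        # not oneself, not already taken, not the same family
        if j != i and j not in santas and family[j] != family[i]:
            result = assign(santas + [j])
            if result is not None:
                return result
    return None

x = assign([])
print([j + 1 for j in x])  # 1-based, as in the model's output
```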

Bales of Hay

As an extra, here is another MiniZinc model: bales_of_hay.mzn which solves the following problem (from The Math Less Traveled the other day):
You have five bales of hay.

For some reason, instead of being weighed individually, they were weighed in all possible combinations of two. The weights of each of these combinations were written down and arranged in numerical order, without keeping track of which weight matched which pair of bales. The weights, in kilograms, were 80, 82, 83, 84, 85, 86, 87, 88, 90, and 91.

How much does each bale weigh? Is there a solution? Are there multiple possible solutions?
The answer? There is a unique solution (when the bales are ordered by weight): 39, 41, 43, 44, 47.
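For comparison, here is a brute-force sketch in Python (my own reasoning, not the MiniZinc model) that reconstructs the answer from the pairwise sums:

```python
from itertools import combinations

def pair_sums(bales):
    """The ten pairwise sums of five bales, in sorted order."""
    return sorted(x + y for x, y in combinations(bales, 2))

target = [80, 82, 83, 84, 85, 86, 87, 88, 90, 91]
# Each bale occurs in 4 of the 10 pairs, so the bales sum to sum(target)/4 = 214.
total = sum(target) // 4
# With bales ordered a <= b <= c <= d <= e: a+b is the smallest pair sum (80)
# and d+e the largest (91), so the middle bale is c = 214 - 80 - 91 = 43.
c = total - target[0] - target[-1]

solutions = set()
for a in range(1, c + 1):
    b = target[0] - a              # a + b = 80
    for d in range(c, target[-1]):
        e = target[-1] - d         # d + e = 91
        bales = sorted([a, b, c, d, e])
        if pair_sums(bales) == target:
            solutions.add(tuple(bales))
print(solutions)  # -> {(39, 41, 43, 44, 47)}
```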

November 08, 2009

Update on Nonogram: Jan Wolter's Survey and my own new benchmark

Survey of Paint-by-Number Puzzle Solvers

In Some new models, etc I mentioned the great Survey of Paint-by-Number Puzzle Solvers, created by Jan Wolter (also author of the Nonogram solver pbnsolve).

In this survey he included both Gecode's Nonogram solver written by Mikael Lagerkvist as well as my own Nonogram model (with Gecode/FlatZinc).

Since the last update of the former blog post, the following has happened:
  • both our solvers now have the assessment "An amazingly good solver, especially for a simple demo program", and are placed 4, 5, and 6 of the 10 tested systems
  • my Gecode/FlatZinc model has been tested for "Full results"; it came in 4th out of 5
  • my Nonogram model with the Lazy FD solver is now included in the "Sample result", in 6th place
It seems that Wolter has come to appreciate constraint programming as a general tool for solving these kinds of combinatorial problems, much for its ease of experimentation, e.g. with labeling strategies and (for the MiniZinc models) changing solvers:

From the analysis of Lagerkvist's Gecode model:
This is especially impressive because the faster solvers are large, complex programs custom designed to solve paint-by-number puzzles. This one is a general purpose solver with just a couple hundred custom lines of code to generate the constraints, run the solver, and print the result. Considering that this is a simple application of a general purpose solving tool rather than hugely complex and specialized special purpose solving tool, this is an astonishingly good result.

Getting really first class search performance usually requires a lot of experimentation with different search strategies. This is awkward and slow to do if you have to implement each new strategies from scratch. I suspect that a tool like Gecode lets you try out lots of different strategies with relatively little coding to implement each one. That probably contributes a lot to getting to better solvers faster.
From the analysis of my MiniZinc model:
If you tried turning this into a fully useful tool rather than a technology demo, with input file parsers and such, it would get a lot bigger, but clearly the constraint programming approach has big advantages, achieving good search results with low development cost.


These two results [Gecode/FlatZinc and LazyFD] highlight the advantage of a solver-independent modeling language like MiniZinc. You can describe your problem once, and then try out a wide variety of different solvers and heuristics without having to code every single one from scratch. You can benefit from the work of the best and the brightest solver designers. It's hard to imagine that this isn't where the future lies for the development of solvers for applications like this.
And later in the Conclusions:
The two constraint-based systems by Lagerkvist and Kjellerstrand come quite close in performance to the dedicated solvers, although both are more in the category of demonstrations of constraint programming than fully developed solving applications. The power of the underlying search libraries and the ease of experimentation with alternative search heuristics obviously serves them well. I think it very likely that approaches based on these kinds of methods will ultimately prove the most effective.
I think this is an important lesson: before starting to write very specialized tools, first try a general tool like a constraint programming system and see how well it performs.

The Lazy FD solver and the Lion problem

Most of the problems in the Sample Results were solved by some solver within the time limit of 30 minutes. However, one problem stands out as extra hard: the Lion problem. When I tested MiniZinc's Lazy FD solver on my machine, I was very excited that it took just about 2 minutes, and mentioned this to Wolter. He also tested this, but on his 64-bit machine it took 80 minutes to solve (and since this is above the time limit, it is not in the result table). This is how he describes the Lazy FD solver:
But the remarkable thing is that [the Lazy FD solver] solves almost everything. Actually, it even solved the Lion puzzle that no other solver was able to solve, though, on my computer, it took 80 minutes to do it. Now, I didn't allow a lot of other solvers 80 minutes to run, but that's still pretty impressive. (Note that Kjellerstrand got much faster solving times for the Lion than I did. Lagerkvist also reported that his solver could solve it, but I wasn't able to reproduce that result even after 30 CPU hours. I don't know why.)
After some discussion, we came to the conclusion that the difference was probably due to the fact that I use a 32-bit machine (and the 32-bit version of MiniZinc) with 2 Gb of memory, while Wolter uses a 64-bit machine with 1 Gb of memory.

One should also note that all the other solvers were compiled without optimization, including Gecode/FlatZinc; however, LazyFD does not come with source, so it runs optimized. This may be an unfair advantage for the LazyFD solver.

My own benchmark of the Sample Results

The times in the Sample Results are, as mentioned above, for solvers compiled without optimization. I have now run the same problems on my machine (Linux Mandriva, Intel Dual 3.40GHz, with 2Gb memory), but with the solvers using standard optimization. All problems were run with a time limit of 10 minutes (compared to Wolter's 30 minutes), searching for 2 solutions, which checks for unique solutions. The last three problems (Karate, Flag, Lion) have multiple solutions, and it is considered a failure if two solutions were not found within the time limit. I should also note that during the benchmark I was using the machine for other things, such as surfing, etc.

The problems
I downloaded the problems from Wolter's Webpbn: Puzzle Export. For copyright reasons I cannot republish these models, but it is easy to download each problem. Select ".DZN" for the MiniZinc files, and "A compiled in C++ format" for Gecode. There is no support for Comet's format, but it's quite easy to convert a .dzn file to Comet's format.

The solvers + labeling strategies
Here is a description of each solver and its labeling strategy:
  • fz, "normal" (column_row)
    MiniZinc model with Gecode/FlatZinc. The usual labeling in nonogram_create_automaton2.mzn, i.e. where the columns are labeled before rows:
    solve :: int_search(
          [x[i,j] | j in 1..cols, i in 1..rows], 
  • fz, "row_column"
    MiniZinc model with Gecode/FlatZinc. Here the order of labeling is reversed, rows are labeled before columns. Model is nonogram_create_automaton2_row_column.mzn
    solve :: int_search(
          [x[i,j] | i in 1..rows, j in 1..cols], 
  • fz, "mixed"
    MiniZinc model with Gecode/FlatZinc: nonogram_create_automaton2_mixed.mzn.
    I have long been satisfied with the "normal" labeling in the MiniZinc model because P200 (the hardest problem I had tested until now) was solved so fast. However, the labeling used in the Comet Nonogram model described in Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second, which is also used in the Gecode model, is somewhat more complicated, since it bases the exact labeling on comparing the hints for the rows and the columns.

    I decided to try this labeling in MiniZinc as well. However, labeling in MiniZinc is not as flexible as in Comet and Gecode. Instead we have to add a dedicated array for the labeling (called labeling):
    array[1..rows*cols] of var 1..2: labeling;
    and then copy the element in the grid to that array based on the relation between rows and column hints:
          % prepare for the labeling
          if rows*row_rule_len < cols*col_rule_len then
               % label on rows first
               labeling = [x[i,j] | i in 1..rows, j in 1..cols]
          else
               % label on columns first
               labeling = [x[i,j] | j in 1..cols, i in 1..rows]
          endif
          % .... 
    and last, the search is now based just on this labeling array:
    solve :: int_search(
  • jacop, "normal"
    MiniZinc model with JaCoP/FlatZinc, using the same model as fz "normal".
  • lazy, satisfy
    Model: nonogram_create_automaton2_mixed.mzn. This uses the MiniZinc LazyFD solver with the search strategy:
    solve satisfy;
    This labeling is recommended by the authors of LazyFD. See MiniZinc: the lazy clause generation solver for more information about this solver.

    Note: The solver in MiniZinc's latest official version (1.0.3) doesn't support set vars. Instead I (and also Jan Wolter) used the latest "Release Of The Day" version (as of 2009-11-02).
  • Comet, normal
    Model: This is the Comet model I described in Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second. No changes have been made.
  • Gecode, normal
    This is the Nonogram model distributed with Gecode version 3.2.1. The labeling is much like the one used in the Comet model, as well as fz, "mixed". (In fact the labeling in the Gecode model was inspired by the labeling in the Comet model).
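As an aside, the ordering rule behind the "mixed" strategy can be sketched in Python (a reconstruction with my own names, not part of any of the models):

```python
# Pick row-major or column-major variable order depending on the
# total hint lengths of the rows vs. the columns.
def labeling_order(rows, cols, row_rule_len, col_rule_len):
    if rows * row_rule_len < cols * col_rule_len:
        # label on rows first (row-major order)
        return [(i, j) for i in range(rows) for j in range(cols)]
    else:
        # label on columns first (column-major order)
        return [(i, j) for j in range(cols) for i in range(rows)]

print(labeling_order(2, 3, 1, 2))
# -> [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```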

Here are the results. For each model (+ labeling strategy), two values are presented:
  • time (in seconds)
  • number of failures if applicable (the LazyFD solver always returns 0 here).
The results

Some conclusions, or rather notes

Here are some conclusions (or notes) about the benchmark.
  • The same solver, Gecode/FlatZinc, is here compared with three different labelings. There is no single labeling that is better than the others. I initially had some hopes that the "mixed" labeling would take the best from the two simpler row/column labelings, but this is not really the case. For example, for Tragic the "row_column" strategy is better than both "normal" and "mixed". I am, however, somewhat tempted to use the "row_column" labeling, but the drawback is that the P200 problem (not included in Wolter's sample problems) takes much longer with this labeling.
  • The same model and labeling but with different solvers is compared: Gecode/FlatZinc is faster than JaCoP/FlatZinc on all the problems. For the easier problems this could be explained by the extra startup time of Java for JaCoP, but that is not the complete explanation for the harder problems. Note: Both Gecode/FlatZinc and JaCoP/FlatZinc have dedicated and fast regular constraints (whereas the LazyFD and Comet solvers use a decomposition).
  • The LazyFD solver is the only one that solves all problems (including Lion), but it is somewhat slower on the middle problems than most of the others. This emphasizes that it is a very interesting solver.
  • It is also interesting to compare the results of the Comet model and Gecode/FlatZinc "mixed", since they use the same labeling principle. However, there are some differences. First, the MiniZinc model with Gecode/FlatZinc uses a dedicated regular constraint, while Comet uses my own decomposition of the constraint. For the Merka problem the Comet version outperforms the Gecode/FlatZinc version; otherwise it's about the same time (and number of failures).
  • The Light problem: It is weird that this problem was solved in almost exactly 10 minutes (the timeout is 10 minutes) for both Gecode/FlatZinc and JaCoP/FlatZinc. The solutions seem correct but I was suspicious of this. Update: Christian Schulte got me on the right track. Here is what happened: the first (unique) solution was found quite quickly and was printed, but the solvers could not prove uniqueness, so they timed out. JaCoP/FlatZinc actually printed "TIME-OUT" but I didn't observe that. Case closed: they both FAILED on this test. Thanks, Christian. End update
As said above, I can only agree with Jan Wolter's comment that the ease of experimenting, for example changing labeling and solver for the FlatZinc solvers, is a very nice feature.

Last word

No benchmark or comparison of (constraint programming) models is really complete without the reference to the article On Benchmarking Constraint Logic Programming Platforms. Response to Fernandez and Hill's "A Comparative Study of Eight Constraint Programming Languages over the Boolean and Finite Domains" by Mark Wallace, Joachim Schimpf, Kish Shen, Warwick Harvey. (Link is to ACM.)

From the Abstract:
... The article analyses some pitfalls in benchmarking, recalling previous published results from benchmarking different kinds of software, and explores some issues in comparative benchmarking of CLP systems.

October 31, 2009

Some new models, etc

The readers who follow me on Twitter (hakankj) have probably already seen this, but here follows a list of models, etc., not yet blogged about.

MiniZinc: Different models

The following four models are translations of my Comet models. And here are some new models:

Nonogram related

A large 40x50 Nonogram problem instance in MiniZinc: nonogram_stack_overflow.dzn, to be used with the nonogram_create_automaton2.mzn model. The problem was mentioned in the Stack Overflow thread Solving Nonograms (Picross). It is solved in under 1 second with 0 failures.

Today my Nonogram model nonogram_create_automaton2.mzn was included in the great Survey of Paint-by-Number Puzzle Solvers (created by Jan Wolter).

My MiniZinc model is described and analyzed here. I'm not at all surprised that it's slower compared to the other solvers; it was quite expected.

Some comments:
Assessment: Slow on more complex puzzles.


Results: Run times are not really all that impressive, especially since it is only looking for one solution, not for two like most of the other programs reviewed here. I don't know what the differences are between this and Lagerkvist's system, but this seems much slower in all cases, even though both are ultimately being run in the Gecode library.
Update 2009-11-01
I later realized two things:

1) The mzn2fzn translator did not use the -G gecode flag, which means that Gecode/FlatZinc used a decomposition of the regular constraint instead of Gecode's built-in one, which is really the heart of this model. The model basically does two things: build an automaton for a pattern and then run regular on it.
2) When Jan compiled Gecode, he turned off all optimization for comparison reasons. This is quite unfortunate, since Gecode is crafted with knowledge of the specific optimizations.

I have therefore run all the problems myself to see how well it would have done (at least in the ballpark) when using a normally optimized Gecode and the -G gecode flag for mzn2fzn.
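The automaton-plus-regular idea can be illustrated with ordinary regular expressions. This is a sketch of my own (the model builds a DFA for the regular constraint rather than using a regex engine):

```python
import re

# A row hint [h1, h2, ...] corresponds to the regular pattern
# 0* 1^h1 0+ 1^h2 ... 0*: runs of 1s of the given lengths, in order,
# separated by at least one 0. This is the language the automaton encodes.
def hint_pattern(hint):
    return "0*" + "0+".join("1" * h for h in hint) + "0*"

print(hint_pattern([2, 1]))                               # -> 0*110+10*
print(bool(re.fullmatch(hint_pattern([2, 1]), "01101")))  # -> True
print(bool(re.fullmatch(hint_pattern([2, 1]), "01011")))  # -> False
```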

Explanation of the values:
Problem: Name of the problem instance
Runtime: The value of runtime from Gecode/FlatZinc solver
Solvetime: The value of solvetime from Gecode/FlatZinc solver
Failures: Number of failures.
Total time: The Unix time for running the complete problem, including the time of mzn2fzn (which was not included in the benchmark).
A "--" means that a solution was not reached in 10 minutes.
Problem Runtime Solvetime Failures Total time
Dancer 0.002 0.000 0 1.327s
Cat 0.009 0.002 0 0.965s
Skid 0.012 0.006 13 0.660s
Bucks 0.015 0.004 3 0.866s
Edge 0.008 0.005 25 0.447s
Smoke 0.011 0.004 5 0.963s
Knot 0.026 0.006 0 1.450s
Swing 0.059 0.012 0 1.028s
Mum 0.120 0.093 20 1.811s
Tragic 6:24.273 6:23.607 394841 6:28.10
Merka -- -- -- --
Petro 2.571 2.545 1738 4.071s
M&M 0.591 0.510 89 1.961s
Signed 1.074 1.004 929 2.461s
Hot -- -- -- --
Flag -- [lazy solver: 2 solutions in 10 seconds] -- -- --
Lion -- -- -- --

It's interesting to note that the Lazy solver finds some solutions quite fast for the "Flag" problem. However, there were no other big differences compared to Gecode/FlatZinc. I also tested the problems with JaCoP's FlatZinc solver, which solved the problems in about the same time as Gecode/FlatZinc, with no dramatic differences.

As mentioned above, the exact values are not really comparable to the benchmark values, but they should give an indication of the result when using -G gecode and a normally optimized Gecode.

Unfortunately, I cannot link to the specific models due to copyright issues, but they can all be downloaded from the page Web Paint-by-Number Puzzle Export.

(End update)

Update 2009-11-02
Jan Wolter has now rerun the tests of my solver with the -G gecode option, and the time is much more like mine in the table above. The analysis is quite different, with an assessment of "Pretty decent", and the following under Results:

When comparing this to other solvers, it's important to note that nonogram_create_automaton2.mzn contains only about 100 lines of code. From Kjellerstrand's blog, it is obvious that these 100 lines are the result of a lot of thought and experimentation, and a lot of experience with MiniZinc, but the Olšák solver is something like 2,500 lines and pbnsolve approaches 7,000. If you tried turning this into a fully useful tool rather than a technology demo, with input file parsers and such, it would get a lot bigger, but clearly the CSP approach has big advantages.

(End update 2)

Also, the Gecode nonogram solver is now included in the survey, called Lagerkvist. I'm not sure when it was added to the survey. It uses the latest version of Gecode, so it must have been quite recently.

Some comments:
Assessment: Pretty decent.


Puzzles with blank lines seem to cause the program to crash with a segmentation fault.

Otherwise it performs quite well. There seems to be about 0.02 seconds of overhead in all runs, so even simple puzzles take at least that long to run. Aside from that, it generally outperforms the Wilk solver. It's not quite in the first rank, especially considering that it was only finding one solution, not checking for uniqueness, but it's still pretty darn good.

I mailed Jan today about the -solutions n option in the Gecode related solvers (he tested my MiniZinc model with Gecode/FlatZinc), as well as some other comments about my model. Also, I will download the tested problems and play with them.

Tailor/Essence': Zebra puzzle

Suddenly I realized that there was no Zebra problem, neither at Andrea's nor here. So here it is: zebra.eprime: Zebra puzzle.

Gecode: Words square problem

See Gecode: Modeling with Element for matrices -- revisited for an earlier discussion of this problem.

Thanks to Christian Schulte my Word square model is now much faster: word_square2.cpp.

From the comment in the model:
Christian Schulte suggested 
using branch strategy INT_VAL_SPLIT_MIN 
instead of INT_VAL_MAX .
This made an improvement for size 7 from
322 failures to 42 and from 1:16 minutes
to 10 seconds (for 1 solution).
Now it manages to solve a size 8 problem in a reasonable 
time (1:33 minutes and 1018 failures).
But wait, there's more!

In the SVN trunk version of Gecode, there is now a version of this model: examples/word-square.cpp, where Christian Schulte and Mikael Zayenz Lagerkvist have done a great job of improving it (using a slightly smaller word list, though). It solves a size 8 problem in about 14 seconds. There are also two different approaches with different strengths, etc. I have great hopes that it will be improved even further...

October 14, 2009

MiniZinc version 1.0.3 released

MiniZinc version 1.0.3 has been released. It can be downloaded here.

From the NEWS:

G12 MiniZinc Distribution version 1.0.3

Bugs fixed in this release:

* A fencepost error that was being introduced into flattened array access
reifications has been fixed.

* Common subexpression elimination has been improved in order to eliminate
redundant int and float linear equations during flattening.

* A bug that caused flattening to abort if array_*_element built-ins were
redefined has been fixed. [Bug #82]

* A bug in the implementation of the FlatZinc set_lt and set_gt built-ins
has been fixed. Note that the expected outputs for the corresponding
tests in the FCTS were also previously incomplete.

* The omission of the string_lit tag from the XML-FlatZinc DTD has been
fixed.

October 08, 2009

MiniZinc: All my public MiniZinc models are now at G12 Subversion repository

All my public MiniZinc models (and data files) are now in the G12 SVN MiniZinc examples repository, subdirectory hakank/.

The file hakank/README states the following:
In this directory I have collected all the MiniZinc
models, data files, and tools that are available from

Any new models on the site will be transferred to this
SVN directory as soon as possible.

Hakan Kjellerstrand,
This means that the models (and other files) published at My MiniZinc page are the master copies, and I will put the files into the G12 SVN repository as soon as possible, hopefully directly after publishing.

The structure in the repository is exactly the same as for the web site, which means that all models are directly in the hakank directory, and then two specific data collections in nonogram_examples and sudoku_problems.

Some statistics of the collection (as of writing):
* .mzn files: 742
* .dzn files: 234
* .pl files (Perl program): 2
* README files: 1
* index.html files: 3
* .zip files: 2
* .java files: 1

For a total of 985 files.

I hope they are of some use.

Also, see The MiniZinc Wiki.

October 01, 2009

MiniZinc: 151 new Nonogram problem instances (from JaCoP)

The latest versions of JaCoP (download here) includes 151 Nonogram instances (ExamplesJaCoP/nonogramRepository/). I wrote about these and the included Nonogram solver in JaCoP: a request from the developers (Knapsack and Geost) and Nonogram labeling.

When I beta-tested the new FlatZinc support for this version, I converted these instances to MiniZinc's data file format (.dzn) for some tests. It is a fun way of testing a system.

Now all these problem instances have been published. They are to be used with the MiniZinc Nonogram model nonogram_create_automaton2.mzn. See At last 2: A Nonogram solver using regular written in "all MiniZinc" for more about this model.

All problem instances - in MiniZinc's .dzn format - are available in, and also packed in the Zip file The name of each file corresponds to the files in JaCoP's ExamplesJaCoP/nonogramRepository/; the file data000.nin is here called nonogram_jacop_data000.dzn, etc. (A larger file, also includes the generated .fzn files. Note: I used mzn2fzn for MiniZinc version ROTD/2009-09-13 for this. See below how to generate these files.)

Batch running with JaCoP/Fz2jacop

Running many FlatZinc models with JaCoP's Fz2jacop is not optimal because of the overhead of the Java startup. Instead I used a Java program (based on a program from Krzysztof Kuchcinski, one of JaCoP's main developers. Thanks, Kris!).

The main call is

fz.main(new String[] {"-s", "-a", m});

which means that statistics should be shown and that all solutions of the problem are requested.

The FlatZinc (.fzn) files were generated from the .dzn files with the following command:

foreach_file 'mzn2fzn -G jacop --data $f -o $f.fzn nonogram_create_automaton2.mzn' '*.dzn'

ExamplesJaCoP.Nonogram vs. JaCoP/Fz2jacop

This time I just compared the run time of the two different approaches for solving Nonograms with JaCoP:
* The Java program ExamplesJaCoP.Nonogram
* Running JaCoP's MiniZinc/FlatZinc solver Fz2jacop

The problem instances are, with one exception, the same as in the distributed ExamplesJaCoP.Nonogram. Since I wanted both programs to solve for all solutions, I excluded instance #83 (see below), since it has too many solutions. However, the P200 problem is included, since it is hardwired in ExamplesJaCoP.Nonogram. This means that there are still 151 problems to run.

The result:
* ExamplesJaCoP.Nonogram took 14.8 seconds
* The Fz2jacop version took 17.8 seconds

I'm quite impressed with both of these results, especially Fz2jacop's. As shown in JaCoP: a request from the developers (Knapsack and Geost) and Nonogram labeling, many problems are solved in "0" milliseconds, but still.

Instance #83

As mentioned above, problem instance #83 was excluded since it has too many solutions. Here are two of them. It looks like a person (alien?) standing on a spaceship, waving.
                              #           #         
                            #       #               
                      # # # #       # # #           
                          #   # # #   # # #         
                  #                   #     #       
                #         #     # # # # #           
              #     #   #           # # #     #     
              #   #   #             # # #   #       
              # #   #               # # #           
              # #                   #   #           
                #                   #   #           
          # # # # # # # # # # # # # # # # #         
      # # # # # # # # # # # # # # # # # # # # #     
    # # # #     # #     # # #     # #     # # # #   
  # # # # #     # #     # # #     # #     # # # # # 
    # # # #     # #     # # #     # #     # # # #   
      # # # # # # # # # # # # # # # # # # # # #     
          # # # # # # # # # # # # # # # # #         
              # # # # # # # # # # # # #             
                  #       #       #                 
                # #       #       # #               
                #         #         #               
              # #         #         # #             
              #           #           #             
            # # #       # # #       # # #           
                              #           #         
                            #       #               
                      # # # #       # # #           
                          #   # # #   # # #         
                  #                   #     #       
                #         #     # # # # #           
              #     #   #           # # #   #       
              #   #   #             # # #     #     
              # #   #               # # #           
              # #                   #   #           
                #                   #   #           
          # # # # # # # # # # # # # # # # #         
      # # # # # # # # # # # # # # # # # # # # #     
    # # # #     # #     # # #     # #     # # # #   
  # # # # #     # #     # # #     # #     # # # # # 
    # # # #     # #     # # #     # #     # # # #   
      # # # # # # # # # # # # # # # # # # # # #     
          # # # # # # # # # # # # # # # # #         
              # # # # # # # # # # # # #             
                  #       #       #                 
                # #       #       # #               
                #         #         #               
              # #         #         # #             
              #           #           #             
            # # #       # # #       # # #           
This output was generated with the (very new) option {noresult}. (This option doesn't display the real numeric results.)

September 30, 2009

This week's news

This is mostly a consolidation (with some additional information) of the tweets since last time.

The Perl program is used for making a little better output from the FlatZinc solvers.

The parameters to the program are now:
  • {tr:from:to:}: translate digit to another digit/character
  • {trvar:var:from:to:}: translate digit to another digit/character for a specific variable var
  • {trtr:from_string:replacement_string:}: translates all digits in from_string to the corresponding digit/character in replacement_string (inspired from the Unix tr command)
  • {trtrvar:var:from_string:replacement_string:}: as trtr but for a specific variable var
  • {nospaces}: don't show a space after each character
  • {noprintf}: don't try to be clever with printf stuff
Example: for showing a nicer picture of a Nonogram:

flatzinc nonogram_create_automaton2.fzn | "{tr:1: :} {tr:2:#:} {nospaces}"

Where {tr:from:to:} means translate the digit from to the character to, and {nospaces} means that no spaces are shown after each digit/character.

This is now probably better written with trtr:

flatzinc nonogram_create_automaton2.fzn | "{trtr:12: #:} {nospaces}"
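The translation options are easy to emulate. Here is a rough Python sketch of the {trtr:...} plus {nospaces} behaviour (my own approximation, not the actual Perl filter; the function name show is made up):

```python
def show(line, from_string, to_string, nospaces=False):
    # {nospaces}: drop the separator spaces between the digits
    if nospaces:
        line = line.replace(" ", "")
    # {trtr:from:to:}: translate each digit to the corresponding
    # character, like the Unix tr command
    return line.translate(str.maketrans(from_string, to_string))

# One row of FlatZinc output, rendered as a nonogram row (1 -> blank, 2 -> '#'):
print(show("1 2 2 1 2", "12", " #", nospaces=True))  # -> " ## #"
```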

Also, see below for some more examples.

MiniZinc/Comet: Nonogram model fixed for "zero length" clues

I discovered that my MiniZinc and Comet models had a little bug in the generation of the automaton for "zero length" clues (i.e. the row/column is completely empty, just 0's).

The fixed versions are here:
* MiniZinc: nonogram_create_automaton2.mzn
* Comet:

It was quite tricky to debug the generated automata, but I found a fix surprisingly soon. I won't publish the fixes here; it suffices to say that the MiniZinc model is - shall we say - not more beautiful than before.

A test instance for this: T2 (source unknown)
* Comet: * MiniZinc: nonogram_t2.dzn

T2 has the following two solutions:
   ##   ##  ###
  #  # #  # #  #
  #      #  #  #
  #     #   ###
  #  # #  # #
   ##   ##  #


   ##   ##  ###
  #  # #  # #  #
  #     #   #  #
  #      #  ###
  #  # #  # #
   ##   ##  #

One more Nonogram example

Besides the T2 problem instance mentioned above, there is a new Nonogram example: Gondola for both Comet and MiniZinc:
* Comet:
* MiniZinc: nonogram_gondola.dzn.
For the curious, here is the solution, using the following command (see above for the part):
$ mzn2fzn -G jacop --data nonogram_gondola.dzn nonogram_create_automaton2.mzn
$ java JaCoP.fz.Fz2jacop -a -s  nonogram_create_automaton2.fzn | "{trtr:12: #:} {nospaces}"
  #####      ######
  ######     #    #        #
     ###  ###########     ###
  ######   # #    #       # #
  #######  # # # ##   #   ###
     ####    #    #  ##   # ####
  #######    #  # # ##    ### #
  #######     #  # ###    # #
     ####     # #  # # #########
     ####   ######## # #
     ####  #     ####  #   ###
     #### # #######    #  #####
     #### # #  ## # ####  #    #
     #### #########      ## # ##
     #### # ###   #      ##    #
     #### # ######       #   # #
     #### ########      ###    #
     ########## ###    ##### ###
      #### # ## ###    #####  ##
      ### #####  ##     ########
      ## ######  ###      #    #
      # ############     #
  ####################   # #
    ## #########################
   ##   ### ####################
  ##     ##### ### ## ## ## ## #
  #   ##  ######################
    ###       ##################
  #        ##

Updated Comet's SONET model

In About 20 more constraint programming models in Comet, Pierre Schaus (one of Comet's developers) commented that my use of tryall outside the using block in the SONET problem was not recommended.

I have updated the model with his suggestion, which uses or instead: or(ring in 1..r) (rings[ring,client1] + rings[ring, client2] >= 2));
Thanks for the correction (suggestion), Pierre.

New MiniZinc model: Letter Square problem

letter_square.mzn is (yet another) grid problem. From my comment in the model file:

This problem is from the Swedish book Paul Vaderlind: Vaderlinds nya hjärngympa ('Vaderlind's new brain gymnastics'), page 63ff. Unfortunately, I don't know the origin of this problem.

The objective is to create a matrix where all values on each row/column are different, except for the blanks.

A set of hints is given: for some (or all) rows and columns, the first non-blank value that is seen from
- above (the upper hint row)
- below (the lower hint row)
- the left (the left hint column)
- the right (the right hint column).

    B B
 |A   B C| 
B|  B C A| 
B|B C A  | 
 |C A   B| 
This model codes the hints as follows:
 blank -> 0
 A     -> 1
 B     -> 2
 C     -> 3

row_upper = [0,2,2,0];
row_lower = [3,0,0,0];
col_left  = [0,2,0,0];
col_right = [0,0,0,0];
Note: there are no hints for the right column.
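The hint semantics can be checked mechanically. The following Python sketch (my own illustration, not part of the model) verifies the example grid above against its hint arrays; a hint of 0 means "no hint given":

```python
def first_seen(cells):
    # first non-blank (nonzero) value along a line of cells; 0 if all blank
    return next((c for c in cells if c != 0), 0)

# the solved example grid, encoded as in the model (blank=0, A=1, B=2, C=3)
grid = [[1, 0, 2, 3],
        [0, 2, 3, 1],
        [2, 3, 1, 0],
        [3, 1, 0, 2]]

row_upper = [0, 2, 2, 0]  # first value of each column, seen from above
row_lower = [3, 0, 0, 0]  # first value of each column, seen from below
col_left  = [0, 2, 0, 0]  # first value of each row, seen from the left
col_right = [0, 0, 0, 0]  # first value of each row, seen from the right

n = len(grid)
for j in range(n):
    col = [grid[i][j] for i in range(n)]
    assert row_upper[j] in (0, first_seen(col))
    assert row_lower[j] in (0, first_seen(col[::-1]))
for i in range(n):
    assert col_left[i] in (0, first_seen(grid[i]))
    assert col_right[i] in (0, first_seen(grid[i][::-1]))
print("all hints consistent")
```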

Here is yet another opportunity to use the output filter described above (0 is here translated to blank, etc.):

solver model.fzn | "{trtr:0123456: ABCDEF:}"

With the following output:
  A B   C
      A B C
  B C   A
    A C   B
  C   B   A
The following problem instances from the book have been created:

September 23, 2009

MiniZinc Challenge 2009 Results

The result of MiniZinc Challenge 2009 is presented in MiniZinc Challenge 2009 Results:
There were two entrants this year:

* Gecode
* SICStus

In addition, the challenge organisers entered the following three FlatZinc implementations:

* G12/FD
* G12/LazyFD
* ECLiPSe

As per the challenge rules, these entries are not eligible for prizes, but do modify the scoring results.

Summary of Results


sicstus 1651.8
eclipse_ic 322.1
gecode 4008.8
g12_fd 2040.6
g12_lazyfd 1376.6


sicstus 1841.0
gecode 4535.5
g12_fd 1112.4
g12_lazyfd 2511.1

Congratulations to the Gecode team!

September 21, 2009

A few new MiniZinc models, and a lot of improved ones

Some news about my MiniZinc models.

New MiniZinc models

These last weeks I have implemented the following new MiniZinc models:

Corrected some models

When testing the MiniZinc/FlatZinc support for the new version of JaCoP, I found problems in some models. These are now corrected:
  • stretch_path.mzn: The former implementation was not correct.
  • min_index.mzn and max_index.mzn:
    The constraints minimum(x[i], x) and maximum(x[i], x) don't work with the current MiniZinc ROTD version. They were substituted with x[i] = min(x) and x[i] = max(x).

Improved all global constraints models

The global constraints section of My MiniZinc Page contains about 160 decompositions of global constraints from Global Constraint Catalog (and some not in the Catalog). The following improvements have been made on all models, especially the older ones:
  • Corrected the links to Global Constraint Catalog in the presentation of the constraint (only older models)
  • Removed some strange characters in the quoted text from Global Constraint Catalog (I hope all of these have been removed now).
  • Made older models more general by using index_set, ub, lb, etc., instead of assuming that all arrays start with index 1. Some examples of this generality:
           let {
             int: lbx = min(index_set(x)),
             int: ubx = max(index_set(x))
           } in
             forall(i in lbx+1..ubx) (
               forall(j in i+1..ubx-1) (
                  % ...
            forall(i in index_set_1of2(x)) (
              all_different([x[i,j] | j in index_set_2of2(x)])
            )

September 16, 2009

JaCoP version 2.4 released

From JaCoP's news JaCoP version 2.4 is released:
Dear all,

We are happy to announce the release of a new version of our
Java-based solver JaCoP. This new version 2.4 has a number of new
features in addition to some bug fixes. The most important additions
in this version are:

1. The flatzinc interface that makes it possible to run minizinc
programs using JaCoP. The distribution contains number of different
minizinc examples.

2. Geometrical constraint, geost, based on pruning algorithms
originally proposed by Nicolas Beldiceanu et al. This constraint makes
it possible to define placement problems of non-convex objects in
k-dimensional space.

3. Knapsack constraint, which is based on the work published by Irit
Katriel et al. We extend the original work in number of ways, for example by
making it possible to use non-boolean quantity variables.

4. Set constraints defining typical operation on sets using set
interval variables.

This work would not be possible without help of several people. We
would like to thank Hakan Kjellerstrand for his help in testing
flatzinc parser as well as providing a number of examples in minizinc
format. We would also like to thank Meinolf Sellmann for his comments
on the initial implementation of knapsack constraint which have helped to
improve it further. Marc-Olivier Fleury has implemented most of the
functionality of the geost constraint. Robert Åkemalm has implemented the
first version of set constraint package. Wadeck Follonier has implemented
the first version of the Knapsack constraint. We would like to thank them
for their contributions.

As always feel free to send us ( radoslaw [dot] szymanek [at] gmail [dot] com )
feedback. We are always looking for cooperation to improve JaCoP. If you miss
some functionality of JaCoP we can help you to develop it so it can be done
efficiently and fast.

best regards,
Radoslaw Szymanek and Kris Kuchcinski
The latest version of JaCoP can be downloaded here.

For more information, see:
Also see My JaCoP page.

MiniZinc/FlatZinc support

I have especially tested the FlatZinc solver (Fz2jacop) and it is fast. For example, here are the statistics for nonogram_create_automaton2.mzn with the P200 problem (I wrote about this some days ago here):
Model variables : 629
Model constraints : 50

Search CPU time : 640ms
Search nodes : 1040
Search decisions : 520
Wrong search decisions : 520
Search backtracks : 520
Max search depth : 22
Number solutions : 1

Total CPU time : 1010ms
Note: JaCoP uses a special optimized regular constraint.
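As an aside, what a regular constraint enforces is easy to state in plain code: running the row through the clue's transition table must never reach the failure state 0, and must end in an accepting state. A minimal Python sketch (mine, not JaCoP's optimized implementation), using the transition table for the clue [3,2,1] that appears further down this page:

```python
def regular_accepts(seq, table, final):
    # table[s-1] = (successor on 0, successor on 1); state 1 is the
    # start state, state 0 the failure state
    s = 1
    for v in seq:
        s = table[s - 1][v]
        if s == 0:
            return False
    return s in final

# transition table for the clue [3,2,1], i.e. the regex 0*1110*110*10*
t = [(1, 2), (0, 3), (0, 4), (5, 0), (5, 6), (0, 7), (8, 0), (8, 9), (9, 0)]
print(regular_accepts([0, 1, 1, 1, 0, 1, 1, 0, 1], t, {9}))  # True: a valid line
print(regular_accepts([1, 1, 0, 1, 1, 0, 1, 0, 0], t, {9}))  # False: first block too short
```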

The FlatZinc solver has the following options:
$ java JaCoP.fz.Fz2jacop --help
Usage: java Fz2jacop [<options>] <file>.fzn
    -h, --help
        Print this message.
    -a, --all-solutions
    -t <value>, --time-out <value>
        <value> - time in second.
    -s, --statistics
    -n <value>, --num-solutions <value>
        <value> - limit on solution number.
Great work!

September 13, 2009

At last 2: A Nonogram solver using regular written in "all MiniZinc"

The model: nonogram_create_automaton2.mzn.


In At last, a Nonogram solver using regular constraint in MiniZinc I wrote about a Nonogram solver in MiniZinc using the regular constraint, nonogram_regular.mzn. The drawback of this version is that an external program was needed to convert the Nonogram patterns (clues) to an automaton (finite states) for use in the regular constraint (see below for more information about this program).


In an update some hours later in the same blog post, the variant nonogram_create_automaton.mzn was mentioned. This is an "all MiniZinc" solution where the model also calculates the automaton. The drawback was that the states are var int (decision variables), so it couldn't use the optimized regular constraint that some solvers (e.g. Gecode/FlatZinc) have implemented (this optimized version of regular is in fact the reason that Gecode/FlatZinc is so fast on the nonogram_regular.mzn model). Instead a tweaked version of MiniZinc's standard regular constraint was used.


After a short discussion about this with Mikael Lagerkvist (of the Gecode team) the other day, we agreed that it would be a nice thing to have the states calculated as "par" values (i.e. not decision variables), so that the optimized regular could be used.

So I went back to the drawing board and wondered how to do this. The solution I've found is not pretty; in fact it is hairy, very hairy: nonogram_create_automaton2.mzn.

It is now as fast as nonogram_regular.mzn for solvers that use an optimized version of regular, especially on the P200 problem.

Problem instances

Here are the problem instances I have tested (they are the same as used before.)

Comparison of calculating the states

Now, let's compare the two different versions of calculating the automaton states.

First method: using decision variables

This is the version that is used in nonogram_create_automaton.mzn. The states are represented by the 2-dimensional array states.
states[1,1] = 1 /\
states[1,2] = 2 /\
forall(i in 2..len-1) (
   if i in zero_positions then
      states[i,1] = i+1 /\
      states[i,2] = 0 /\
      states[i+1,1] = i+1 /\
      states[i+1,2] = i+2
   elseif not(i - 1 in zero_positions) then
      states[i,1] = 0 /\
      states[i,2] = i+1
   else
      true
   endif
) /\
states[len,1] = len /\
states[len,2] = 0
Quite neat, and relatively easy to understand.

Second method: no decision variables

And this is the non-decision variable version of calculating the finite states used in nonogram_create_automaton2.mzn. Note that the states are represented by a 1-dimensional array, which is - as I have understood it - a requirement for this kind of initialisation of an array. It is also the cause of this hairiness.
array[1..2*len] of 0..len*2: states = 
[1, 2] ++
[
   if i div 2 in zero_positions then
       if i mod 2 = 0 then
        (i div 2) + 1
   elseif (i-1) div 2 in zero_positions then
       if i mod 2 = 0 then
        (i div 2)+1
        (i div 2)+2
     if not( (((i-1) div 2) - 1) in zero_positions) then
        if i mod 2 = 0 then
           (i div 2) + 1
          if (i div 2) + 1 in zero_positions then
              (i div 2) + 2
         if i mod 2 = 0 then
             (i div 2) + 1
            if not((i div 2) + 1 in zero_positions) then
               (i div 2) + 2 
| i in 3..2*(len-1)] ++
[len, 0];
It could most probably be done in a more elegant fashion. And maybe I will think more about this later on.

September 09, 2009

At last, a Nonogram solver using regular constraint in MiniZinc

Here it is: nonogram_regular.mzn, a MiniZinc solver for Nonogram problems, using regular constraint.

In Comet version 2.0 released I wrote about the rewritten automata handler for the new Comet model (using the built-in constraint regular and helper function Automaton, both new in Comet version 2.0). This inspired me to finish the project of a "real" Nonogram solver for MiniZinc which uses the regular constraint instead of the old (very slow) model nonogram.mzn.

Since MiniZinc has a built-in regular constraint, the hardest part was to create an automaton (finite state machine) given a Nonogram pattern. To be honest, I didn't write it purely in MiniZinc; instead a Perl program was written for this conversion (see below for more information about this program). Update: Well, I do have a version fully written in MiniZinc, nonogram_create_automaton.mzn, but it is not fast enough to be really interesting (much faster than the old version, nonogram.mzn, though). End of update.

The conversion pattern -> automaton is the same as described in Comet: regular constraint, a much faster Nonogram with the regular constraint, some OPL models, and more. To quote verbatim:
For the Nonogram clue [3,2,1] - which represents the regular expression 
"0*1110*110*10*" - the following automaton (transition matrix) is used:
1 2
0 3
0 4
5 0
5 6
0 7
8 0
8 9
9 0

Note that the regular function uses 0 (zero) as the failing state, so the states
must start with 1.
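The pattern-to-automaton conversion itself is mechanical. Here is a Python sketch of it (my own reimplementation for illustration, not the Perl converter; make_automaton is a made-up name). It reproduces the transition matrix quoted above for the clue [3,2,1], and treats a "zero length" clue as the regex 0*:

```python
def make_automaton(clue):
    # returns entry s-1 = (next state on 0, next state on 1) for each
    # state s; state 0 is the failing state, state 1 the start state
    if not clue or clue == [0]:
        return [(1, 0)]               # empty line: regex 0*
    table = []
    s = 1
    for bi, block in enumerate(clue):
        table.append((s, s + 1))      # 0* before the block
        s += 1
        for _ in range(block - 1):    # inside the block: only 1 is allowed
            table.append((0, s + 1))
            s += 1
        if bi < len(clue) - 1:
            table.append((s + 1, 0))  # block done, a 0 must follow
            s += 1
        else:
            table.append((s, 0))      # trailing 0* after the last block
    return table

for on0, on1 in make_automaton([3, 2, 1]):
    print(on0, on1)   # prints the same 9 rows as the matrix above
```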

Results: P200

In fact, the only problem I was really curious about was the P200 problem, since it has been the challenge for the Comet Nonogram solver. See the following posts for more about this quest: First, here is a picture of the solution of the P200 problem (nonogram_p200_aut.dzn), generated by the minizinc solver (currently the only solver that uses the output item for formatting):
 ##   ##     ###
####  # #    # ####
 #### # ##   #     #
####  # # #  #  #  #
 ##   # # ##  ###  #####
      #   # #   #  ##  #
     ###  # #####  #  ##
   ### ##  ##    # ##  ##
  ## #  ####   # # #    #
 ##      ## # ## #     ##
 #  #     # ### ##   ###
 #  # ##  #######  ###
 # ##     ## # #####
 ###  ## ##  #  ##
  ###   ##   #   ##
    #####    #    ##
  ##   ##    #     ##
 ####   ##   #    ##
######   ##  ### ##
#######   #### ###  ##
 #######    ####   ####
#######      #      ####
######       #     ####
 ####       ##      ##
  ##         #

P200: How does different solver do?

Here is a small benchmark for solving the P200 problem with different solvers. It was run on a Linux machine (Mandriva), dual CPU 3.40GHz, 2Gb memory. "Runtime" is the total runtime, including converting the MiniZinc model to FlatZinc, startup, and also a small amount of time for running a wrapper program (where I can choose solver etc). Also, the solvers searched through the whole search tree, i.e. for all solutions (there happens to be exactly one).

All versions of the solvers were "the latest" as of 2009-09-08, i.e. the versions from the respective CVS, SVN, or release of the day.

The search strategy used was first_fail where the columns (j) are labeled before the rows (i):
solve :: int_search(
        [x[i,j] | j in 1..col_max, i in 1..row_max],
        first_fail, indomain_min, complete) satisfy;
For the MiniZinc/lazy solver, I also tested with solve satisfy since it often does very well without any specific search heuristics.
  • Gecode/FlatZinc: runtime 1.0s, solvetime 0.2s, failures 520 (also see below)
  • MiniZinc/minizinc: runtime 9s, 7697 choice points
  • MiniZinc/lazy: runtime 6s, choice points 2572
  • MiniZinc/lazy (with solve satisfy): runtime 13s, choice points 0
  • MiniZinc/fd: runtime 14s, choice points 11615
  • MiniZinc/fdmip: runtime 14s, choice points 11615
  • ECLiPSe/ic: runtime 109s
  • ECLiPSe/fd: runtime 34s
The Gecode/FlatZinc solver was by far the fastest. Here is the full statistics (for one random instance):
runtime:       0.207 (207.693000 ms)
solvetime:     0.198 (198.988000 ms)
solutions:     1
variables:     625
propagators:   50
propagations:  22940
nodes:         1041
failures:      520
peak depth:    22
peak memory:   1220 KB
nonogram_regular.mzn --fz --data nonogram_p200_aut.dzn  0,96s user 0,10s system 98% cpu 1,078 total
For comparison, here are the statistics for running the Comet model:
time:      459
#choices = 520
#fail    = 794
#propag  = 693993
comet  1,32s user 0,09s system 90% cpu 1,558 total
And, finally, the statistics for the Gecode (C++) program nonogram solving the P200 problem:
$ time nonogram -solutions 0 9
# ....
        propagators:  50
        branchings:   25

        runtime:      1.342 (1342.926000 ms)
        solutions:    1
        propagations: 35728
        nodes:        1409
        failures:     704
        peak depth:   25
        peak memory:  1027 KB
nonogram -solutions 0 9  1,45s user 0,06s system 99% cpu 1,517 total
It seems that the Gecode/FlatZinc version compares quite well.

For completeness, here is the time for generating the automata of the P200 problem using the Perl program:
$ time perl nonogram_p200.dzn 1 > nonogram_p200_aut.dzn
0,20s user 0,02s system 96% cpu 0,233 total

Model and problem instances

Below are the problem instances as MiniZinc data files (.dzn): all the Nonogram problems listed on My Comet page, and some more. For each instance there are two variants: the "normal" version (which is the input file of the transformation), called "nonogram_name.dzn", and the generated automaton version, called "nonogram_name_aut.dzn". It is the latter version ("_aut") that is used with nonogram_regular.mzn.

The program for converting to automata

As noted above, the Perl program converts a Nonogram problem instance in the "normal" pattern (clue) format to a format using automata.

The requirements on the data file are semi-strict: the names must be the ones in the example below, and each pattern must be on a separate line. The program uses regular expressions to extract the information and can handle some variants of the format. The result is printed to standard output.
%% This is the problem instance of 'Hen', in "normal" pattern format.
%% Comments are kept as they are
rows = 9;
row_rule_len = 2;
row_rules = array2d(1..rows, 1..row_rule_len,

   cols = 8;
   col_rule_len = 2;
   col_rules = array2d(1..cols, 1..col_rule_len,
To run the program:
$ perl nonogram_hen.dzn 1 > nonogram_hen_aut.dzn
The second parameter (1) gives some more debugging info. The result is nonogram_hen_aut.dzn.

Minor note: For ease of debugging and further development I decided to keep the 1-based version of arrays from the Comet model (Perl is 0-based), which made the code somewhat uglier.

September 04, 2009

The MiniZinc Wiki

The MiniZinc Wiki is brand new and includes (right now) for example: I have great hopes for the MiniZinc tutorial ("a work in progress"), which now includes some commented models. Hopefully this will expand into a full tutorial of MiniZinc.

Worth of notice is also the SVN repository of MiniZinc examples: From the README file:
This subversion repository can be read and written by anyone in the constraint programming community who wishes to contribute MiniZinc models to the public domain.

August 11, 2009

Strimko - Latin squares puzzle with "streams"

Via the post A New Twist on Latin Squares from the interesting math blog 360, I learned the other day about a new grid puzzle, Strimko, which is based on Latin squares (as, for example, Sudoku is).

The rules of Strimko are quite simple:
Rule #1: Each row must contain different numbers.
Rule #2: Each column must contain different numbers.
Rule #3: Each stream must contain different numbers.
The stream is a third dimension of the puzzle: a connected "stream" of numbers which also must be distinct.

Here is an example of a Strimko puzzle (weekly #068, same as in the 360 post):
Strimko puzzle
(the link goes to a playable version).

MiniZinc model

This problem was simple to solve in MiniZinc: strimko.mzn, and the data file strimko_068.dzn.

Here are the constraints of the model: three sets of all_different calls, one each for the rows and the columns (these two form the Latin square requirement), and one for the streams. The placed part handles the "hints" (the given numbers) in the puzzle.
  % latin square
  forall(i in 1..n) (
      all_different([ x[i, j] | j in 1..n]) /\
      all_different([ x[j, i] | j in 1..n])
  /\ % streams
  forall(i in 1..n) (
     all_different([x[streams[i,2*j+1],streams[i,2*j+2]] | j in 0..n-1])
  /\ % placed
  forall(i in 1..num_placed) (
      x[placed[i,1], placed[i,2]] = placed[i,3]
The data file strimko_068.dzn is shown below. Note the representation of the streams: each cell is represented by two numbers, row,column.
% Strimko Set 068
n = 4;
streams = array2d(1..n, 1..n*2, [
                    1,1, 2,2, 3,3, 4,4,
                    2,1, 1,2, 1,3, 2,4,
                    3,1, 4,2, 4,3, 3,4,
                    4,1, 3,2, 2,3, 1,4
num_placed = 3;
placed = array2d(1..num_placed, 1..3, [
  4 2 3 1
  1 3 2 4
  2 4 1 3
  3 1 4 2

A better version

But it was quite boring and error prone to code the problem instance in that representation. There is - of course - a simpler way: represent the streams themselves, i.e. give each stream a unique "stream number"; all cells with the same stream number must then contain distinct values. Data file: strimko2_068.dzn
% Strimko Weekly Set 068
streams = array2d(1..n, 1..n, [
% ...
The model also needed a simple change to reflect this representation: strimko2.mzn:
  % ...
  % streams
  forall(s in 1..n) (
     all_different([x[i,j] | i,j in 1..n where streams[i,j] = s])
  % ....
The main advantage of the second model is that it makes it easier to state the problem instances. The representation of the grid is also somewhat smaller: n x n versus n x n*2. However, I haven't noticed any difference in performance between these two models, but the problems are all very small so it may not be visible.
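Checking a candidate solution under the second representation is a one-liner per rule. Here is a Python sketch (my own illustration, not part of the model; the streams matrix below is my stream-number encoding of weekly set #068, derived from the row,column lists shown earlier):

```python
def is_strimko_solution(x, streams):
    """Check the three Strimko rules: distinct rows, columns and streams."""
    n = len(x)
    rows_ok = all(len(set(row)) == n for row in x)
    cols_ok = all(len({x[i][j] for i in range(n)}) == n for j in range(n))
    streams_ok = all(
        len({x[i][j] for i in range(n) for j in range(n)
             if streams[i][j] == s}) == n
        for s in range(1, n + 1))
    return rows_ok and cols_ok and streams_ok

# weekly set #068: stream numbers per cell, and the solution shown above
streams = [[1, 2, 2, 4],
           [2, 1, 4, 2],
           [3, 4, 1, 3],
           [4, 3, 3, 1]]
solution = [[4, 2, 3, 1],
            [1, 3, 2, 4],
            [2, 4, 1, 3],
            [3, 1, 4, 2]]
print(is_strimko_solution(solution, streams))  # True
```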

The problem instances

Here are all the implemented models and the problem instances: For more information about Strimko, and some problems, see: For more about MiniZinc and some of my models, see My MiniZinc page.

August 02, 2009

MiniZinc: Some new implemented global constraints (decompositions)

For some reason I have implemented some more (decompositions of) global constraints from the great Global constraint catalog; all models use the stated example from that site.

I have tried as much as possible - and mostly succeeded - to make the predicates fully multi-directional, i.e. so that each and every parameter of a predicate can either be fix or free (decision variable).

See my other implementations of global constraints in MiniZinc here. Right now there are about 150 of them, about half of the 313 listed in the Global constraint catalog. Some other new MiniZinc models:

MiniZinc: the lazy clause generation solver

In the following two papers (for CP2009), a new Zinc/MiniZinc solver using both finite domain and SAT is discussed: the lazy clause generation solver:

  • T. Feydy and P.J. Stuckey: Lazy clause generation reengineered:
    Abstract: Lazy clause generation is a powerful hybrid approach to combinatorial optimization that combines features from SAT solving and finite domain (FD) propagation. In lazy clause generation finite domain propagators are considered as clause generators that create a SAT description of their behaviour for a SAT solver. The ability of the SAT solver to explain and record failure and perform conflict directed backjumping are then applicable to FD problems. The original implementation of lazy clause generation was constructed as a cut down finite domain propagation engine inside a SAT solver. In this paper we show how to engineer a lazy clause generation solver by embedding a SAT solver inside an FD solver. The resulting solver is flexible, efficient and easy to use. We give experiments illustrating the effect of different design choices in engineering the solver.

  • A. Schutt, T. Feydy, P.J. Stuckey, and M. Wallace: Why cumulative decomposition is not as bad as it sounds.
    Abstract: The global cumulative constraint was proposed for modelling cumulative resources in scheduling problems for finite domain (FD) propagation. Since that time a great deal of research has investigated new stronger and faster filtering techniques for cumulative, but still most of these techniques only pay off in limited cases or are not scalable. Recently, the "lazy clause generation" hybrid solving approach has been devised which allows a finite domain propagation engine to take advantage of advanced SAT technology, by "lazily" creating a SAT model of an FD problem as computation progresses. This allows the solver to make use of SAT nogood learning and autonomous search capabilities. In this paper we show that using lazy clause generation where we model the cumulative constraint by decomposition gives a very competitive implementation of cumulative resource problems. We are able to close a number of open problems from the well-established PSPlib benchmark library of resource-constrained project scheduling problems.

This solver (the lazy solver) has been included in the MiniZinc distribution since version 1.0, and has been improved in each version, e.g. by adding to the primitives it supports. See the NEWS file for more information.

I have tested the lazy solver on some problems, for example nonogram.mzn (which is implemented inefficiently, without the regular constraint), and am very impressed by it. It is very fast, though it cannot solve the P200 problem in a reasonable time.

Some comments about the lazy solver (some limitations will hopefully be lifted in the future):

  • it cannot handle set vars.
  • all decision variables must be explicitly bounded, e.g. var int: x; will not work; instead use something like var 1..10: x;.
  • Labeling: The papers state that the default labeling (e.g. solve satisfy) is often very good.

Conclusion: The lazy solver is definitely worth more testing.

July 08, 2009

MiniZinc version 1.0.1 released

From the MiniZinc mailing list:

Version 1.0.1 of the G12 MiniZinc distribution has been released.

Version 1.0.1 of the distribution can be downloaded from Download.
Snapshots of the development version of the G12 MiniZinc distribution
are also available from this page.

Further information about MiniZinc and FlatZinc is available from MiniZinc and FlatZinc

Bugs may be reported via the G12 bug tracking system, which can be accessed at

From the NEWS file:

G12 MiniZinc Distribution version 1.0.1


There have been no changes to the definitions of the MiniZinc and FlatZinc
languages in this release. There have been no changes to the definition
of XML-FlatZinc.

Changes in this release:

* MiniZinc tools command line changes

The MiniZinc interpreter and MiniZinc-to-FlatZinc converter now
recognise files with the .dzn extension as MiniZinc data files, i.e.
you no longer need to use the --data option to use such files as data
files.

* FlatZinc output processing tool

We have added a new tool, solns2dzn, that can be used to process the
output of FlatZinc implementations in various ways. For example, it
can extract each individual solution and write it to a separate file.

* The FlatZinc interpreter's lazy clause generation solver now supports
the int_times/3 built-in.

* The global_cardinality_low_up global constraint has been added to the
MiniZinc library.

* The MiniZinc-to-FlatZinc converter now propagates annotations through
assertions during flattening. For example, the following fragment of MiniZinc:

predicate foo(int: x, array[int] of var int y);

predicate bar(int: x, array[int] of var int y) =
assert(x > 3, "value of x must be greater than 3", foo(x, y));

constraint bar(4, ys) :: baz;

will be flattened into the following fragment of FlatZinc:

constraint foo(4, ys) :: baz;

Bugs fixed in this release:

* A bug in the implementation of the FlatZinc built-in int_mod/3 in the
FlatZinc interpreter's finite-domain backend has been fixed.

* The MiniZinc-to-FlatZinc converter now does a better job of extracting
output variables from output items.

* The solver-specific global constraint definitions for the G12 lazy
clause generation solver are now documented. They are in the
``g12_lazyfd'' directory of the MiniZinc library.

* A bug that caused a segmentation fault in the type checker when
checking large FlatZinc instances has been fixed.

* A bug where a cycle of equalities caused mzn2fzn to go into
a loop has been fixed. [Bug #65]

* mzn2fzn now imposes constraints on arguments induced by predicate
parameter types. [Bug #69]

* Flattening of set2array coercions is now supported. [Bug #70]

* A bug that caused mzn2fzn to abort on min/max expressions over empty arrays
has been fixed. [Bug #71]

* mzn2fzn now computes bounds on set cardinality variables correctly.

June 24, 2009

My old Swedish blog posts about constraint programming translated (via Google)

Before starting this blog (i.e. My Constraint Programming blog) late December 2008, I blogged about constraint programming in my Swedish blog hakank.blogg. Translations of those Swedish blog posts have not been collected before, and now it is time.

So, here are links to most of the blog posts from the category Constraint Programming, translated via Google's Language Tools. Most of the translations are intelligible, but if you have questions about some particular post, please feel free to mail me for more information.

November 4, 2005
Swedish: Sesemans matematiska klosterproblem samt lite Constraint Logic Programming
Translated: Sesemans kloster mathematical problems and little Constraint Logic Programming ("kloster" means "convent").

April 18, 2006
Swedish: Choco: Constraint Programming i Java
Translated: Choco: Constraint Programming in Java

The two posts above were written when I had just a cursory interest in constraint programming. From about February 2008 onward, it became my main interest.

April 5, 2008
Swedish: Constraint Programming: Minizinc, Gecode/flatzinc och ECLiPSe/minizinc,
Translated: Constraint Programming: Minizinc, Gecode / flatzinc and ECLIPSE / minizinc.

April 14, 2008
Swedish: MiniZinc-sida samt fler MiniZinc-exempel
Translated: MiniZinc-page and multi-MiniZinc example.

April 21, 2008
Swedish: Applikationer med constraint programming, lite om operations research samt nya MiniZinc-modeller
Translated: Applications of constraint programming, a little of operations research and new MiniZinc models

April 26, 2008
Swedish: Ett litet april-pyssel
Translated: A small-craft in April (a better translation would be "A small April puzzle")

April 27, 2008
Swedish: Mitt OpenOffice Calc-/Excel-skov
Translated: My Open Office Calc-/Excel-skov (better translation: My Open Office Calc-/Excel period).

May 26, 2008
Swedish: Några fler MiniZinc-modeller, t.ex. Smullyans Knights and Knaves-problem
Translated: Some more MiniZinc models, eg Smullyans Knights and Knaves problem

June 2, 2008
Swedish: MiniZinc/FlatZinc version 0.8 släppt
Translated: MiniZinc / FlatZinc version 0.8 released

June 2, 2008
Swedish: Nya MiniZinc-modeller, flera global constraints, däribland clique
Translated: New MiniZinc models, several global constraints, including Clique

June 5, 2008
Swedish: Mats Anderssons tävling kring fotbolls-EM 2008 - ett MiniZinc-genererat tips
Translated: Mats Andersson racing around championship in 2008 - a MiniZinc-generated tips (it is about a competition on how to predict the Euro 2008 soccer championship using MiniZinc)

June 24, 2008
Swedish: Tre matematiska / logiska pyssel med constraint programming-lösningar: n-puzzle, SETS, M12 (i MiniZinc)
Translated: Three mathematical / logical craft with constraint programming solutions: n-puzzle, SETS, M12 (in MiniZinc) ("craft" should be translated to "puzzles")

June 24, 2008
Swedish: MiniZinc-nyheter
Translated: MiniZinc news

June 29, 2008
Swedish: Gruppteoretisk lösning av M12 puzzle i GAP
Translated: Group Theoretical solution of the M12 puzzle in GAP (well, this is not really a constraint programming solution, but it is another way of solving the M12 puzzle blogged about in June 24)

June 30, 2008
Swedish: Gruppteoretisk lösning av M12 puzzle i GAP - take 2
Translated: Group Theoretical solution of the M12 puzzle in GAP - take 2

July 4, 2008
Swedish: Martin Chlond's Integer Programming Puzzles i MiniZinc
Translated: Martin's Chlond Integer Programming Puzzles in MiniZinc

July 7, 2008
Swedish: Fler MiniZinc modeller kring recreational mathematics
Translated: More MiniZinc models on recreational mathematics

July 20, 2008
Swedish: Fler constraint programming-modeller i MiniZinc, t.ex. Minesweeper och Game of Life
Translated: More constraint programming models in MiniZinc, eg Minesweeper and the Game of Life

August 17, 2008
Swedish: JaCoP - Java Constraint Programming solver
Translated: JaCoP - Java Constraint Programming solver

September 14, 2008
Swedish: Constraint programming i Java: Choco version 2 släppt - samt jämförelse med JaCoP
Translated: Constraint programming in Java: Choco version 2 released - and comparison with JaCoP

September 28, 2008
Swedish: Constraint programming: Fler MiniZinc-modeller, t.ex. Martin Gardner och nonogram
Translated: Constraint programming: More MiniZinc models, eg Martin Gardner and nonogram

December 27, 2008
Swedish: Constraint programming-nyheter samt nya MiniZinc-modeller
Translated: Constraint programming, news and new MiniZinc models

December 29, 2008
Swedish: My Constraint Programming Blog
Translated: My Constraint Programming Blog

After that, the entries about constraint programming at my Swedish blog were just summaries of the stuff written here at My Constraint Programming blog.

June 13, 2009

Miscellaneous news

Here are some miscellaneous news.


Choco version 2.1

I have forgotten to mention that Choco has released version 2.1. Download this version via the download page. The Sourceforge download page is here.

Changed models for version 2.1

I have changed some of my Choco models for version 2.1. The biggest change was in the use of cumulative, where the call to the cumulative constraint has changed and now uses TaskVariable.

I added the following:
// Create TaskVariables
TaskVariable[] t = new TaskVariable[starts.length];
for(int i = 0; i < starts.length; i++){
   t[i] = makeTaskVar("", starts[i], ends[i], durations[i]);
}
and changed the call to cumulative to:
m.addConstraint(cumulative(null, t, constantArray(_heights), constant(NumPersons)));
Also, the line
durations[i] = makeConstantVar("durations" + i, durationsInts[i]);
was changed to
durations[i] = makeIntVar("durations" + i, durationsInts[i], durationsInts[i]);
For some of the other models (compiled for version 2.0.*), I just recompiled the source and it worked without any change in the code.


List of global constraints

At the MiniZinc site, there is now a list of the supported global constraints in MiniZinc: MiniZinc: Global Constraints.

MiniZinc Library

MiniZinc Library is a great collection of problems, examples, and tests. (For some reason it is not linked from the main MiniZinc site.)

Output in MiniZinc

I wrote here about the change of output in MiniZinc version 1.0. Only the distributed solver minizinc supports the output statement (for some kind of "pretty printing" of the output); the solvers using the FlatZinc format have no such support. Since I want to see matrices presented in a normal way for all solvers, I wrote a simple Perl program that shows matrices more prettily than just a single line. Also, single arrays are shown without the array1d(...) wrapper.

The Perl program simply takes the result from a solver and reformats it.

An example: the output from debruijn_binary.mzn from a solver using the FlatZinc format, such as Gecode/FlatZinc or MiniZinc's fdmip, is:
bin_code = array1d(1..8, [0, 0, 0, 0, 1, 0, 1, 1]);
binary = array2d(1..8, 1..4, [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0]);
x = array1d(1..8, [0, 1, 2, 5, 11, 6, 12, 8]);
When filtered through the program it is shown as:
bin_code: 0 0 0 0 1 0 1 1
% binary = array2d(1..8, 1..4, [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0]);
  0 0 0 0
  0 0 0 1
  0 0 1 0
  0 1 0 1
  1 0 1 1
  0 1 1 0
  1 1 0 0
  1 0 0 0
x: 0 1 2 5 11 6 12 8
Very simple, but quite useful.
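The Perl script itself isn't included in the post, but the idea is simple enough to sketch. Here is a hypothetical Python stand-in (the function name and the exact output format are mine, not the original script's):

```python
import re

def reformat(line):
    """Reformat FlatZinc-style array1d/array2d output lines into rows of values."""
    # e.g. binary = array2d(1..8, 1..4, [0, 0, ...]);  ->  one row per line
    m = re.match(r'(\w+) = array2d\(1\.\.(\d+), 1\.\.(\d+), \[(.*)\]\);', line)
    if m:
        name, rows, cols = m.group(1), int(m.group(2)), int(m.group(3))
        nums = [v.strip() for v in m.group(4).split(',')]
        body = '\n'.join('  ' + ' '.join(nums[r * cols:(r + 1) * cols])
                         for r in range(rows))
        return f'{name}:\n{body}'
    # e.g. x = array1d(1..8, [0, 1, ...]);  ->  x: 0 1 ...
    m = re.match(r'(\w+) = array1d\(1\.\.(\d+), \[(.*)\]\);', line)
    if m:
        return m.group(1) + ': ' + ' '.join(v.strip() for v in m.group(3).split(','))
    return line  # leave anything else untouched

print(reformat('x = array1d(1..8, [0, 1, 2, 5, 11, 6, 12, 8]);'))
# x: 0 1 2 5 11 6 12 8
```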

Somewhat related: The NEWS file of the latest ROTD (Release of The Day) states the following:
FlatZinc output processing tool

We have added a new tool, solns2dzn, that can be used to process the output of FlatZinc implementations in various ways. For example, it can extract each individual solution and write it to a separate file.
This program is distributed as source (Mercury), not as an executable, so I have not been able to test it.

Hidato and exists

In the hidato.mzn model I have changed the exists construct
forall(k in 1..r*c-1) (
  exists(i in 1..r, j in 1..c) (
    k = x[i, j] % fix this k
    /\
    exists(a, b in {-1, 0, 1} where
      i+a >= 1 /\ j+b >= 1 /\
      i+a <= r /\ j+b <= c
      /\ not(a = 0 /\ b = 0)) (
       % find the next k
       k + 1 = x[i+a, j+b]
    )
  )
)
to a version using just var ints:
forall(k in 1..r*c-1) (
    let {
       var 1..r: i,
       var 1..c: j,
       var {-1,0,1}: a,
       var {-1,0,1}: b
    } in
    k = x[i, j] % fix this k
    /\
    i+a >= 1 /\ j+b >= 1 /\
    i+a <= r /\ j+b <= c
    /\ not(a = 0 /\ b = 0)
    /\
    % find the next k
    k + 1 = x[i+a, j+b]
)
This replacement of exists with int vars, if possible, seems to always be more effective.

However, there is one use of exists where it is harder to replace in this way. As an example, take Pandigital numbers in any base (the model includes a presentation of the problem) where - among other things - we want to find some array indices of the integer array x in order to find the positions of three numbers num1, num2 and res (the result of num1 * num2).
% num1
exists(j in 1..x_len) (
   j = len1 /\
   toNum([x[i] | i in 1..j], num1, base)
)
/\  % num2
exists(j, k in 1..x_len) (
   j = len1+1 /\
   k = len1+len2 /\ k > j /\
   toNum([x[i] | i in j..k], num2, base)
)
/\ % the product
exists(k in 1..x_len) (
   k = len1+len2+1 /\
   toNum([x[i] | i in k..x_len], res, base)
)
Using this approach, we have to use exists, since the bounds of a range (e.g. the j in 1..j) must not be var ints.
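For reference, the relation that toNum constrains can be stated in plain Python (a ground check of the relation only; the CP predicate of course works on decision variables):

```python
def to_num(digits, base=10):
    """The number represented by `digits` in the given base, most significant first."""
    n = 0
    for d in digits:
        n = n * base + d
    return n

print(to_num([1, 2, 3]))        # 123
print(to_num([1, 0, 1, 1], 2))  # 11
```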


Both Choco and MiniZinc sites now have links to my site, which is quite fun:
* Choco, Users (last)
* MiniZinc and FlatZinc (also last)

Work related page in English

The page Håkan Kjellerstrand, CV and work related interests is an English version of my work-related page. (The original, Swedish, version is here.)

May 29, 2009

MiniZinc Challenge 2009

From MiniZinc Challenge 2009:
The aim of the challenge is to start to compare various constraint solving technology on the same problems sets. The focus is on finite domain propagation solvers. An auxiliary aim is to build up a library of interesting problem models, which can be used to compare solvers and solving technologies.
Announcement of the results will be at CP2009: 20 September 2009.

Also see
* MiniZinc Challenge 2009 -- Rules
* MiniZinc Challenge 2008 and Results.

May 20, 2009

MiniZinc version 1.0 released!

MiniZinc is now official, i.e. version 1.0. Congratulations to the G12 team!

This version can be downloaded from here.

Compared to the beta version 0.9 (released last Christmas), the following has changed:

G12 MiniZinc Distribution version 1.0

* Licence change

The source code in the G12 MiniZinc distribution has now been released
under a BSD-style licence. See the files README and COPYING in the
distribution for details.

The MiniZinc examples, global constraint definitions and libraries
have been placed in the public domain.

* XML-FlatZinc

We have defined an XML representation for FlatZinc called XML-FlatZinc.
Two new tools, fzn2xml and xml2fzn, can be used to convert between FlatZinc
and XML-FlatZinc.

* FlatZinc Conformance Test Suite

We have added a suite of conformance tests for FlatZinc implementations.
It includes tests for built-in constraints, output and the behaviour of the
standard search annotations.

Changes to the MiniZinc language:

* Reification has been fixed. It is now a top-down process that correctly
handles partial functions such as integer division. Users can now also
supply alternative definitions for reified forms of predicates (this is
useful if a backend does not provide reified forms of all predicates).

* Users can supply alternative definitions for FlatZinc built-in constraints
(e.g., one can force the generated FlatZinc to use just int_lt rather than
int_lt and int_gt).

* A new variable annotation has been added: is_output is used to indicate
variables to be printed as part of the solution if no output item is
supplied. This annotation is converted to output_var or output_array as
appropriate by mzn2fzn.

Changes to the FlatZinc language:

* The outdomain_min and outdomain_max value choice methods are now supported
in the finite-domain solver backend.

* A new search annotation, seq_search, allows a sequential ordering to
be imposed on search annotations.

* The standard solve annotations now use nested annotations instead of
strings to describe variable selection strategies, value choice methods,
and exploration strategies.

* FlatZinc model instances may now contain bodyless predicate declarations.
This is to allow tools to type check FlatZinc that contains non-standard
built-in predicates.

* Two new annotations have been added that allow functional relationships
between variables to be defined: is_view_var on a variable declaration
states that this var is defined as a function of some other variables by
a constraint; defines_view_var(x) on a constraint states that the
constraint provides a definition for the view variable x.

* The FlatZinc specification now specifies how multiple solutions should be
output.

* Two new variable annotations have been added to indicate which variables
should be printed if a solution is found: output_var is used
for non-array variables; output_array([IndexSet1, IndexSet2, ...]) is used
for array variables.

* Output items are no longer supported in FlatZinc. The built-in string
operations, show/1 and show_cond/3, have also been removed from the language.

Changes to the G12 MiniZinc-to-FlatZinc converter:

* The converter now outputs array_xxx_element constraints instead of
array_var_xxx_element constraints when the array argument is fixed.

* An error is now reported if a variable is defined in a let expression
in a negated or reified context.

* The ZINC_STD_SEARCH_DIRS environment variable is no longer supported.
The new environment variable MZN_STDLIB_DIR or the command line option
``--stdlib-dir'' can be used to set the MiniZinc library directory.

* String array lookups are now supported.

* Comparison of fixed string expressions is now supported.

* Bodyless predicates in MiniZinc are now emitted at the head of the
generated FlatZinc. For backwards compatibility this behaviour
can be disabled using the ``--no-output-pred-decls'' command line option.

Changes to the G12 MiniZinc and FlatZinc interpreters:

* There is a new solver backend for the FlatZinc interpreter based upon
the G12 lazy clause generation solver. This backend is selected with
the ``lazy'' or ``lazyfd'' argument to the interpreter's ``--backend'' option.

* The implementation of the int_negate/2 builtin constraint has been fixed.

* The interpreters now take a flag (--solver-statistics or --solver-stats)
causing any statistical information gathered by the solver to be appended to
the output in the form of a Zinc comment.

* The interpreters now take a flag (-a or --all-solutions) that causes
them to return all solutions.

* The interpreters now take a flag (-n or --num-solutions) taking an
integer argument giving the maximum number of solutions to display.

* The MiniZinc interpreter behaviour has changed for models with no
output item: now only the values of variables annotated with
is_output are printed.

Other Changes:

* The following global constraints have been added to the MiniZinc library:


For developers of FlatZinc solvers there is a transition guide to the changes.

Some comments

One of the greatest pieces of news is that it is now possible to state the number of solutions: -a for all solutions, or -n/--num-solutions for some fixed number of solutions. I have missed that feature in the distributed solvers since day one. Other solvers, e.g. Gecode/FlatZinc, ECLiPSe's, and SICStus Prolog's, already supported this feature.

I'm also excited that string array lookups are supported. And there is other stuff which I will now test in more detail.

My MiniZinc page

On My MiniZinc page there are over 540 MiniZinc models which right now are for MiniZinc version 0.9. I have just started to convert these to the new version and it will take some time. Some models don't have to be changed at all, though.

Update some hours later
All models have now been converted to version 1.0. Here are some findings from doing this conversion:

* If there is a file in the current directory with the same name as one in lib/minizinc/std/ (etc.) then there will be a conflict, with the message structure error: more than one solve item

* solve: the search strategies should not have quotes. Also the search strategy occurrences is not available for the minizinc/flatzinc solvers.

* if there is no output clause, then define some (or all) decision variables as :: is_output and they will be printed.

* it seems that it is just the minizinc solver that handles the output section. For other solvers in the MiniZinc distribution (e.g. mip) the output is done via the :: is_output annotation.

* always add "\n" last in the output clause.

May 09, 2009

Learning Constraint Programming IV: Logical constraints: Who killed Agatha? revisited

Here is a comparison of how different constraint programming systems implement the logical constraints in the problem Who Killed Agatha?, also known as The Dreadsbury Mansion Murder Mystery. In Learning constraint programming - part II: Modeling with the Element constraint the problem was presented, and it was shown how the systems implement the Element constraint.

Problem formulation from The h1 Tool Suite
Someone in Dreadsbury Mansion killed Aunt Agatha. Agatha, the butler, and Charles live in Dreadsbury Mansion, and are the only ones to live there. A killer always hates, and is no richer than his victim. Charles hates noone that Agatha hates. Agatha hates everybody except the butler. The butler hates everyone not richer than Aunt Agatha. The butler hates everyone whom Agatha hates. Noone hates everyone. Who killed Agatha?
Originally from F. J. Pelletier: Seventy-five problems for testing automatic theorem provers., Journal of Automated Reasoning, 2: 191-216, 1986.

Here we compare how different constraint programming systems implement the three emphasized conditions in the problem formulation above:
  • the concept of richer
  • Charles hates noone that Agatha hates
  • No one hates everyone
All models define the concepts of hates and richer as two matrices. The matrix declarations are omitted in the code snippets below.


Here are the different models for the Who killed Agatha? problem. JaCoP and Choco have two versions of how to implement the Element constraint; see the link above. Also, there is no Gecode/R version of this problem.

Defining the concept of richer

First we define the concept of richer, which consists of two parts:
  • No one is richer than him-/herself
  • if i is richer than j then j is not richer than i
This is an antisymmetric relation which is explored somewhat more (with an alternative predicate of the concept) in the MiniZinc model antisymmetric.mzn.

The logical concept used here is equivalence (if and only if).
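As a quick sanity check of what these two conditions admit, here is a plain-Python enumeration (not one of the compared systems; the encoding is mine): for three persons, each unordered pair must be directed exactly one way, so 2^3 = 8 richer matrices satisfy the conditions.

```python
from itertools import product

def antisymmetric_total(r):
    """The two 'richer' conditions: no one is richer than him-/herself,
    and for i != j: richer[i,j] = 1  <->  richer[j,i] = 0."""
    n = len(r)
    return (all(r[i][i] == 0 for i in range(n)) and
            all((r[i][j] == 1) == (r[j][i] == 0)
                for i in range(n) for j in range(n) if i != j))

# enumerate all 3x3 binary matrices and keep the feasible ones
ok = [bits for bits in product([0, 1], repeat=9)
      if antisymmetric_total([list(bits[0:3]), list(bits[3:6]), list(bits[6:9])])]
print(len(ok))  # 8
```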


MiniZinc:
% define the concept of richer: no one is richer than him-/herself
forall(i in r) (
   richer[i,i] = 0
)
/\ % if i is richer than j then j is not richer than i
forall(i, j in r where i != j) (
   richer[i,j] = 1 <-> richer[j,i] = 0
)


Comet:
Note that Comet doesn't have support for equivalence directly; instead we have to use two implications. Update: equivalence in Comet is written as == (I tested <=>, which didn't work). Thanks to Pascal Van Hentenryck for pointing this out.
// no one is richer than him-/herself
forall(i in r)
  m.post(richer[i,i] == 0);

// if i is richer than j then j is not richer than i
forall(i in r, j in r : i != j) {
  /* earlier version: two implications
  m.post(richer[i,j] == 1 => richer[j,i] == 0);
  m.post(richer[j,i] == 0 => richer[i,j] == 1);
  */
  // equivalence
  m.post((richer[i,j] == 1) == (richer[j,i] == 0));
}


JaCoP:
//  no one is richer than him-/herself
for(int i = 0; i < n; i++) {
    store.impose(new XeqC(richer[i][i], 0));
}

//  if i is richer than j then j is not richer than i
for(int i = 0; i < n; i++) {
    for(int j = 0; j < n; j++) {
        if (i != j) {
            store.impose(
                new Eq(
                    new XeqC(richer[i][j], 1),
                    new XeqC(richer[j][i], 0)
                )
            );
        }
    }
}


Choco:
//   a) no one is richer than him-/herself
for(int i = 0; i < n; i++) {
    m.addConstraint(eq(richer[i][i], 0));
}

//   b) if i is richer than j then j is not richer than i
for(int i = 0; i < n; i++) {
    for(int j = 0; j < n; j++) {
        if (i != j) {
            m.addConstraint(ifOnlyIf(
                                eq(richer[i][j], 1),
                                eq(richer[j][i], 0)
                            ));
        }
    }
}


Gecode:
// no one is richer than him-/herself
for(int i = 0; i < n; i++) {
  rel(*this, richer_m(i,i), IRT_EQ, 0, opt.icl());
}

// if i is richer than j then j is not richer than i
for(int i = 0; i < n; i++) {
  for(int j = 0; j < n; j++) {
    if (i != j) {
      post(*this, tt(eqv(
                 richer_m(j,i) == 1, // <=>
                 richer_m(i,j) == 0)), opt.icl());
    }
  }
}

Charles hates noone that Agatha hates

Here are the definitions of the condition Charles hates noone that Agatha hates, which simply means the implication:
  if Agatha hates X then Charles doesn't hate X


When starting to model these kinds of problems, I tend to follow the order of the conditions, which here means that the Charles part comes before the Agatha part. When remodeling in another system the order tends to be fixed (cf. the Comet version).
MiniZinc:
forall(i in r) (
   hates[charles, i] = 0 <- hates[agatha, i] = 1
)


Comet:
forall(i in r)
  m.post(hates[agatha, i] == 1 => hates[charles, i] == 0);


JaCoP:
for(int i = 0; i < n; i++) {
    store.impose(
        new IfThen(
            new XeqC(hates[agatha][i], 1),
            new XeqC(hates[charles][i], 0)
        )
    );
}


I tend to copy/paste the models for Choco and JaCoP and just change the functions that are different. A consequence of this is that some special feature in one of these two systems is not used.
Choco:
for(int i = 0; i < n; i++) {
    m.addConstraint(implies(
                        eq(hates[agatha][i], 1),
                        eq(hates[charles][i], 0)
                    ));
}


Gecode:
for(int i = 0; i < n; i++) {
   post(*this, tt(imp(
                        hates_m(i,agatha) == 1,
                        // =>
                        hates_m(i,charles) == 0)), opt.icl());
}

No one hates everyone

This is the last condition to compare: No one hates everyone. It is implemented by a sum of the number of persons that each person hates, and this sum must be 2 or less. Note that it is possible to hate oneself.


MiniZinc:
forall(i in r) (
  sum(j in r) (hates[i,j]) <= 2
)


Comet:
forall(i in r)
  m.post(sum(j in r) (hates[i,j]) <= 2);


JaCoP:
Note: we could save the XlteqC constraint by restricting the domain of a_sum to 0..2 instead of 0..n (= 3), but the explicit use of XlteqC is clearer.
for(int i = 0; i < n; i++) {
    FDV a[] = new FDV[n];
    for (int j = 0; j < n; j++) {
        a[j] = new FDV(store, "a"+j, 0, 1);
        a[j] = hates[i][j];
    }
    FDV a_sum = new FDV(store, "a_sum"+i, 0, n);
    store.impose(new Sum(a, a_sum));
    store.impose(new XlteqC(a_sum, 2));
}


Choco:
Note: sum is an operator, which makes the condition somewhat easier to state than in JaCoP.
for(int i = 0; i < n; i++) {
    IntegerVariable a[] = makeIntVarArray("a", n, 0, 1);
    for (int j = 0; j < n; j++) {
        a[j] = hates[i][j];
    }
    m.addConstraint(leq(sum(a), 2));
}


In Gecode this condition is quite easy to state using linear. In order to use it, there is a Matrix "view" hates_m of the hates matrix.
Matrix hates_m(hates, n, n);
// ...
for(int i = 0; i < n; i++) {
  linear(*this, hates_m.row(i), IRT_LQ, 2, opt.icl());
}

End comment

The mandatory end comment: there are probably better ways of modeling the problem than shown above, either by changing some details or by modeling the problem completely differently. Maybe this will be done sometime...
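One cheap way to double-check the constraint reading above, outside any CP system, is to brute-force the two 3x3 binary matrices in plain Python. This is only a sanity-check sketch; the 0-based person encoding and the translation of the conditions are mine:

```python
from itertools import product

AGATHA, BUTLER, CHARLES = 0, 1, 2

def solutions():
    sols = []
    # enumerate all 3x3 binary "hates" and "richer" matrices (2^18 combinations)
    for bits in product([0, 1], repeat=18):
        h = [list(bits[0:3]), list(bits[3:6]), list(bits[6:9])]
        r = [list(bits[9:12]), list(bits[12:15]), list(bits[15:18])]
        # richer: no one is richer than him-/herself, and for i != j:
        # richer[i,j] = 1  <->  richer[j,i] = 0
        if any(r[i][i] for i in range(3)):
            continue
        if any(i != j and (r[i][j] == 1) != (r[j][i] == 0)
               for i in range(3) for j in range(3)):
            continue
        # Agatha hates everybody except the butler
        if h[AGATHA] != [1, 0, 1]:
            continue
        # Charles hates noone that Agatha hates
        if any(h[AGATHA][i] and h[CHARLES][i] for i in range(3)):
            continue
        # the butler hates everyone not richer than Agatha ...
        if any(not r[i][AGATHA] and not h[BUTLER][i] for i in range(3)):
            continue
        # ... and everyone whom Agatha hates
        if any(h[AGATHA][i] and not h[BUTLER][i] for i in range(3)):
            continue
        # noone hates everyone
        if any(sum(row) == 3 for row in h):
            continue
        # a killer always hates, and is no richer than, his victim (Agatha)
        for k in range(3):
            if h[k][AGATHA] and not r[k][AGATHA]:
                sols.append(k)
    return sols

sols = solutions()
print(len(sols), set(sols))  # 8 {0}: all solutions have the same killer, Agatha
```

The check agrees with the CP models: several hates/richer matrices are feasible, but every one of them points at Agatha.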

May 08, 2009

Learning Constraint Programming III: decomposition of a global constraint: alldifferent_except_0

The series Learning Constraint Programming is a preparation for My talk at SweConsNet Workshop 2009: "Learning Constraint Programming (MiniZinc, JaCoP, Choco, Gecode/R, Comet, Gecode): Some Lessons Learned". Confusingly, the entries are not numbered in any logical order. Sorry about that.

Here are the previous entries:


The global constraint alldifferent_except_0 (or the more general variant alldifferent_except_c) is one of my favorite global constraints. It is very handy to use when 0 (or any constant c) is coded as an unknown or irrelevant value. Then we can constrain the rest to be all distinct.

The great Global Constraint Catalog entry (alldifferent_except_0) explains this constraint as:
Enforce all variables of the collection VARIABLES to take distinct values, except those variables that are assigned to 0.

The alldifferent_except_0 constraint holds since all the values (that are different from 0) 5, 1, 9 and 3 are distinct.


I have modeled a decomposition of alldifferent_except_0 in the following models, where the constraint is just tested, perhaps combined with some other constraint, e.g. sorted, or that there must be at least some zeros:

- MiniZinc alldifferent_except_0.mzn
- Comet:
- Gecode/R: all_different_except_0.rb
- Choco:
- JaCoP:
- Gecode: alldifferent_except_0.cpp

Some models using alldifferent_except_0

And here is some real use of the constraint:

- Nonogram (Comet): (A faster model using the regular constraint is described here and here.)
- I wrote about alldifferent_except_0 in Pi Day Sudoku 2009. However, a faster way of solving the problem was found and is described in Solving Pi Day Sudoku 2009 with the global cardinality constraint. Note: the competition is still on, so there is no link to any model.
- Sudoku generate (Comet):
- all paths graph (MiniZinc): all_paths_graph.mzn
- Cube sum (MiniZinc): cube_sum.mzn
- Message sending (MiniZinc): message_sending.mzn

As the first two entries indicate, there may be faster solutions than using (a decomposition of) alldifferent_except_0, but even as a decomposition it is a great conceptual tool when modeling a problem.


In the implementations below we also see how to define a function (predicate) in the constraint programming systems.

For the Gecode/R model there are different approaches:
- "standard" ("direct") approach where we loop over all different pairs of elements and ensures that if both values are different from 0 then they should be different
- using count
- using global cardinality ("simulated" in Gecode/R, see below)

Also, in some models we use the slightly more general version alldifferent_except_c, where c is any constant (e.g. "Pi" in the Pi Day Sudoku puzzle mentioned above).
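The intended meaning of the decomposition (in its alldifferent_except_c form) can be stated as a ground check in a few lines of Python. This checks fixed values only; it is not a propagating constraint:

```python
def all_different_except_c(xs, c=0):
    """Pairwise decomposition: any two values both different from c must differ."""
    n = len(xs)
    return all(xs[i] == c or xs[j] == c or xs[i] != xs[j]
               for i in range(n) for j in range(i + 1, n))

print(all_different_except_c([5, 1, 0, 9, 0, 3]))  # True: 5, 1, 9, 3 are distinct
print(all_different_except_c([5, 1, 0, 5]))        # False: 5 occurs twice
```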


Model: alldifferent_except_0.mzn.
predicate all_different_except_0(array[int] of var int: x) =
   let {
      int: n = length(x)
   } in
   forall(i,j in 1..n where i != j) (
        (x[i] > 0 /\ x[j] > 0) ->
        x[i] != x[j]
   );

% usage:
constraint all_different_except_0(x);


Comet:
function void alldifferent_except_0(Solver<CP> m, var<CP>{int}[] x) {
  int n = x.getSize();
  forall(i in 1..n, j in i+1..n) {
    m.post(
           x[i] > 0 && x[j] > 0
           =>
           x[i] != x[j]
    );
  }
}

// usage
exploreall {
  // ...
  alldifferent_except_0(m, x);



When modeling the constraint in Gecode/R, I experimented with different approaches. The reification variant all_different_except_0_reif is actually quite fast.
# The simplest and the fastest implementation
# using count for 1..max (poor man's global cardinality)
def all_different_except_0
  (1..max).each{|i|
    self.count(i).must <= 1
  }
end

# global cardinality version using an extra array with the counts
def global_cardinality(xgcc)
  (1..max).each{|i|
    xgcc[i].must == self.count(i)
  }
end

# The standard approach using reification.
def all_different_except_0_reif(x)
  n = x.length
  b1_is_an bool_var_matrix(n,n)
  b2_is_an bool_var_matrix(n,n)
  b3_is_an bool_var_matrix(n,n)
  n.times{|i|
    n.times{|j|
      if i != j then
        x[i].must_not.equal(0, :reify => b1[i,j])
        x[j].must_not.equal(0, :reify => b2[i,j])
        x[i].must_not.equal(x[j], :reify => b3[i,j])
        (b1[i,j] & b2[i,j]).must.imply(b3[i,j])
      end
    }
  }
end

# ...
# usage:
# all_different_except_0_gcc(x)
# all_different_except_0_reif(x)



Note that here alldifferent_except_0 is derived from the more general version alldifferent_except_c.
Choco:
// decomposition of alldifferent except 0
public void allDifferentExcept0(CPModel m, IntegerVariable[] v) {
    allDifferentExceptC(m, v, 0);
}

// slightly more general: alldifferent except c
public void allDifferentExceptC(CPModel m, IntegerVariable[] v, int c) {
    int len = v.length;
    for(int i = 0; i < v.length; i++) {
        for(int j = i+1; j < v.length; j++) {
            m.addConstraint(implies(
                                and(
                                    gt(v[i], c),
                                    gt(v[j], c)
                                ),
                                neq(v[i], v[j])
                            ));
        }
    }
}

// ...
// usage:
allDifferentExcept0(m, x);




This is exactly the same approach as the Choco version.
JaCoP:
// decomposition of alldifferent except 0
public void allDifferentExcept0(FDstore m, FDV[] v) {
    allDifferentExceptC(m, v, 0);
} // end allDifferentExcept0

// slightly more general: alldifferent except c
public void allDifferentExceptC(FDstore m, FDV[] v, int c) {
    int len = v.length;
    for(int i = 0; i < v.length; i++) {
        for(int j = i+1; j < v.length; j++) {
            m.impose(new IfThen(
                         new And(
                             new XneqC(v[i], c),
                             new XneqC(v[j], c)
                         ),
                         new XneqY(v[i], v[j])
                     ));
        }
    }
} // end allDifferentExceptC

        // ...
        // usage:
        allDifferentExcept0(store, x);



The Gecode version is very succinct since it uses overloaded Boolean operators. Very nice.
void alldifferent_except_0(Space& space, IntVarArray x, IntConLevel icl = ICL_BND) {
  for(int i = 0; i < x.size(); i++) {
    for(int j = i+1; j < x.size(); j++) {
      post(space, tt(
           imp(x[i] != 0 && x[j] != 0,
           // =>
           x[i] != x[j])), icl);
    }
  }
} // alldifferent_except_0

// ...
// usage:
    alldifferent_except_0(*this, x, opt.icl());

May 05, 2009

Learning constraint programming - part II: Modeling with the Element constraint

As indicated last in Learning constraint programming (languages) - part I here are some findings when implementing Crossword, Word square, and Who killed Agatha?. See links below for the implementations.

The first constraint programming system I learned after constraint logic programming in Prolog was MiniZinc. When implementing the problems below I realized that I have been quite spoiled by using MiniZinc. The way MiniZinc (and also Comet) supports the Element constraint, i.e. access of variable arrays/matrices, is very straightforward in these systems, and it doesn't matter whether the array to access is an array of plain integers or of variable integers. In the Java (Choco, JaCoP) and C++ (Gecode) systems, however, this is another matter. Due to different circumstances I have not implemented these models in Gecode/R.

Element in MiniZinc and Comet
Accessing arrays and matrices in MiniZinc and Comet is simply done by using the [] construct, no matter what the types of the array or the index are (I assume integers and variable integers here). For the other systems we must explicitly use the Element constraint (called nth in Choco).
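For readers new to Element: element(x, a, y) holds iff a[x] = y, where x and y may be decision variables. A plain-Python rendering of the relation on fixed values (1-based, as in MiniZinc; the helper function is mine):

```python
def element(x, a, y):
    """element(x, a, y) holds iff a[x] = y (1-based indexing, as in MiniZinc)."""
    return 1 <= x <= len(a) and a[x - 1] == y

a = [10, 30, 20]
# the feasible (x, y) pairs a solver could enumerate for x in 1..3:
pairs = [(x, y) for x in range(1, 4) for y in (10, 20, 30) if element(x, a, y)]
print(pairs)  # [(1, 10), (2, 30), (3, 20)]
```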


Crossword

This is a standard constraint programming problem, used as a running example in Apt's great book Principles of Constraint Programming. Here is a formulation of the problem (stated differently than in the book):
Place the words listed below in the following crossword. The '#' means a blocked cell, and the numbers indicate the overlappings of the words.

      1   2   3   4   5
  1 | 1 |   | 2 |   | 3 |
  2 | # | # |   | # |   |
  3 | # | 4 |   | 5 |   |
  4 | 6 | # | 7 |   |   |
  5 | 8 |   |   |   |   |
  6 |   | # | # |   | # |
We can use the following words


MiniZinc: crossword.mzn
Gecode/R: (Not implemented)
Gecode: crossword.cpp

Note: I have seen more general models for solving crossword problems in Choco, JaCoP, Gecode/R, and Gecode, with constructions other than the simple Element used here. Since I wanted to compare the same way of solving the problem using the same Element construct, this may be an unfair comparison between the systems. Well, it is at least a finding about how to implement this problem with Element...

Explanation of variables
The matrix A contains the individual characters of the words (Comet variant):
int A[1..num_words,1..word_len] = [
   [h, o, s, e, s], //  HOSES
   [l, a, s, e, r], //  LASER
   [s, a, i, l, s], //  SAILS
   [s, h, e, e, t], //  SHEET
   [s, t, e, e, r], //  STEER
   [h, e, e, l, 0], //  HEEL
   [h, i, k, e, 0], //  HIKE
   [k, e, e, l, 0], //  KEEL
   [k, n, o, t, 0], //  KNOT
   [l, i, n, e, 0], //  LINE
   [a, f, t, 0, 0], //  AFT
   [a, l, e, 0, 0], //  ALE
   [e, e, l, 0, 0], //  EEL
   [l, e, e, 0, 0], //  LEE
   [t, i, e, 0, 0]  //  TIE
];
overlapping is the matrix of the overlapping cells.
This is the Comet version:
 [1, 3, 2, 1],   //  s
 [1, 5, 3, 1],   //  s 
 [4, 2, 2, 3],   //  i
 [4, 3, 5, 1],   //  k
 [4, 4, 3, 3],   //  e
 [7, 1, 2, 4],   //  l
 [7, 2, 5, 2],   //  e
 [7, 3, 3, 4],   //  e
 [8, 1, 6, 2],   //  l
 [8, 3, 2, 5],   //  s
 [8, 4, 5, 3],   //  e
 [8, 5, 3, 5]    //  r
E is the variable array indicating which word to use for the different overlappings. This is in fact the only decision variable (array) that is needed in the problem, apart from the utility/convenience variables.

The main constraint for the crossword example in each system is stated thus:

MiniZinc:
forall(i in 1..num_overlapping) (
   A[E[overlapping[i,1]], overlapping[i,2]] = A[E[overlapping[i,3]], overlapping[i,4]]
)
Comet:
forall(i in 1..num_overlapping)
 [E[overlapping[i,1]], overlapping[i,2]] == A[E[overlapping[i,3]], overlapping[i,4]], onDomains);
Choco: Note that Choco has a special Element constraint (nth) which supports two-dimensional arrays (matrices), which we use here.
for(int I = 0; I < num_overlapping; I++) {
  IntegerVariable tmp = makeIntVar("tmp" + I, 1, 26);
  M.addConstraint(nth(E[overlapping[I][0]], W[overlapping[I][1]], AX, tmp));
  M.addConstraint(nth(E[overlapping[I][2]], W[overlapping[I][3]], AX, tmp));
JaCoP: Here we use some trickery: a transposed version of the word matrix, since JaCoP has no special Element constraint for two-dimensional arrays.
for (int I = 0; I < num_overlapping; I++) {
   FDV tmp = new FDV(store, "TMP" + I, 0, num_words*word_len);
   store.impose(new Element(E[overlapping[I][0]], words_t[overlapping[I][1]], tmp));
   store.impose(new Element(E[overlapping[I][2]], words_t[overlapping[I][3]], tmp));
Gecode: This is more complicated than in the two Java systems, since in Gecode we use an array (of length rows*cols) to simulate the matrix. (There is a Matrix "view" in Gecode, but the indices must be of type integer, not IntVar, so it cannot be used.) Also, the constraints plus and mult take IntVar arguments.

The first overlapped crossing is "expanded" like this (Gecode is 0-based):
   A[E[overlapping[i,0]], overlapping[i,1]] // MiniZinc/Comet
   a1 = A[ E[I*4+0] * word_len + overlapping[I*4+1]] // Gecode
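The flattened-index arithmetic behind this expansion can be checked with a tiny Python sketch (the names A_flat and element_offset here are hypothetical stand-ins for the Gecode code, not the actual model):

```python
# Simulating a matrix with a one-dimensional array, as in the Gecode model:
# A[e][j] on the matrix view corresponds to A_flat[e * word_len + j].

word_len = 5
words = ["hoses", "laser", "sails"]
A_flat = [ch for w in words for ch in w]  # row-major flattening

def element_offset(A, e, j, word_len):
    """res = A[e][j] expressed on the flat array."""
    return A[e * word_len + j]

assert element_offset(A_flat, 1, 0, word_len) == "l"  # first char of "laser"
assert element_offset(A_flat, 2, 4, word_len) == "s"  # last char of "sails"
```

In the real model e is an IntVar, which is why the multiplication and addition themselves have to be posted as mult and plus constraints rather than computed directly.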
Here is the complete code. The comments hopefully explain what is going on.

First we define a utility function for accessing the element according to the above principle.
/**
 * Special version of element for an array version of a "matrix" words,
 * E is an integer variable array, C is an array of IntVars for
 * the offset j in the words "matrix".
 * The call 
 *    element_offset(*this, words, E[i], word_len_v, C[j], res, opt.icl());
 * corresponds to:
 *    res = words[E[i], j] --> words[E[i]*word_len+j]
 */
void element_offset(Space& space,
                   IntArgs words,
                   IntVar e,
                   IntVar word_len,
                   IntVar c,
                   IntVar res,
                   IntConLevel icl = ICL_DOM) {

      element(space, words, 
              plus(space, 
                   mult(space, 
                        e, 
                        word_len, icl), 
                   c, icl), 
              res, icl);
}

The function is then used as follows:
for(int I = 0; I < num_overlapping; I++) {
   IntVar e1(*this, 0, num_overlapping*4);
   IntVar e2(*this, 0, num_overlapping*4);

   IntVarArray o(*this, 4, 0, num_overlapping*4);
   for(int J = 0; J < 4; J++) {
     post(*this, o[J] == overlapping[I*4+J], opt.icl());

   element(*this, E, o[0], e1, opt.icl());      // e1 = E[I*4+0]
   element(*this, E, o[2], e2, opt.icl());      // e2 = E[I*4+2]

   IntVar a1(*this, 0, num_words*word_len);
   element_offset(*this, A, e1, word_len_v, o[1], a1, opt.icl());
   element_offset(*this, A, e2, word_len_v, o[3], a1, opt.icl());
(The same element_offset function is also used in the Word square problem below.) It took quite some time to get the function and the temporary variables (and their domains) right. With training (and element_offset as a skeleton) similar problems should be easier to implement.

Note: this is not a bashing of Gecode. Gecode is a great system; it just happens that for this specific problem, Gecode does not have the appropriate support. I should also mention that it was a long time since I programmed in C++, so I am a little rusty.

As mentioned earlier, I have been very spoiled by the MiniZinc (and Comet) constructs. Also: I'm a very 'lazy' person (in the Perl sense of the word) and like the agile programming languages - Perl, Ruby, Python, etc. - a lot for their high-level constructs.

Word square problem

The word square problem is a cousin of the crossword problem, and is described in Wikipedia's Word_square:
A word square is a special case of acrostic. It consists of a set of words, all having the same number of letters as the total number of words (the "order" of the square); when the words are written out in a square grid horizontally, the same set of words can be read vertically.
Here is an example of order 7 found by the Comet model, where we see that the first row word (aalborg) is also the first column word.



Here are the models for solving the Word square problem:
MiniZinc: word_square.mzn
Gecode/R: (Not implemented in Gecode/R)
Gecode: word_square.cpp

It is somewhat easier than the crossword problem. As before, E is the array of indices of the words to use, and words is a matrix of the words. Also, these models are an experiment in how to read a file, the word list /usr/dict/words (standard on Unix/Linux systems).

MiniZinc:
forall(I, J in 1..word_len) (
  words[E[I], J] = words[E[J], I]
)
Comet:
forall(i in 1..word_len) {
  forall(j in 1..word_len) {
   [E[i], j] == words[E[j], i], onDomains);
  }
}
JaCoP:
// The overlappings (crossings).
// Note that we use a transposed word matrix for the Element.
for(int i = 0; i < word_length ; i++) {
    for(int j = 0; j < word_length ; j++) {
        // Comet: words[E[i], j] ==  words[E[j],i]
        FDV tmp = new FDV(store, "tmp" + i + " " + j, 0, dict_size);
        store.impose(new Element(E[i], words[j], tmp));
        store.impose(new Element(E[j], words[i], tmp));
Choco:
// Constants for the nth constraint below
IntegerVariable [] C = new IntegerVariable[dict_size];
for (int I = 0; I < word_length; I++) {
    C[I] = makeIntVar("C"+I, I,I);

// The overlappings (crossings)
for(int I = 0; I < word_length ; I++) {
    for(int J = 0; J < word_length ; J++) {
        // Comet: words[E[i], j] ==  words[E[j],i]
        IntegerVariable tmp = makeIntVar("tmp" + I + " " + J, 0, dict_size);
        M.addConstraint(nth(E[I], C[J], words, tmp));
        M.addConstraint(nth(E[J], C[I], words, tmp));
Gecode: Note that this model uses the same function element_offset that was used in the Crossword problem. It took some time to realize that it could be used here as well.
// convenience variables for the element constraints below
// since element, plus, and mult wants IntVars.
IntVar word_len_v(*this, word_len, word_len);
IntVarArray C(*this, word_len, 0, word_len-1);
for(int i = 0; i < word_len; i++) {
  rel(*this, C[i], IRT_EQ, i, opt.icl());

for(int i = 0; i < word_len; i++) {
  for(int j = 0; j < word_len; j++) {
    // words[E[i], j] ==  words[E[j],i]

    IntVar tmp(*this, 0, num_words);

    // tmp == words[E[i], j] --> words[E[i]*word_len+j]
    element_offset(*this, words, E[i], word_len_v, C[j], tmp, opt.icl());

    // tmp == words[E[j], i]  --> words[E[j]*word_len+i]
    element_offset(*this, words, E[j], word_len_v, C[i], tmp, opt.icl());
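Independently of the CP system, the word square condition words[E[i], j] = words[E[j], i] can be brute-forced in a few lines. A Python sketch with a hypothetical tiny word list (not the /usr/dict/words models above):

```python
# Brute-force the word square condition: the grid formed by the chosen
# words must be symmetric, so rows read the same as columns.
from itertools import product

def is_word_square(E, words):
    n = len(E)
    return all(words[E[i]][j] == words[E[j]][i]
               for i in range(n) for j in range(n))

words = ["bat", "ace", "tea", "car"]  # hypothetical tiny word list
solutions = [E for E in product(range(len(words)), repeat=3)
             if is_word_square(E, words)]
assert (0, 1, 2) in solutions  # bat / ace / tea reads the same across and down
```

Of course the real models search over a dictionary of thousands of words, which is exactly where Element (indexing words by the decision variables E) replaces this exhaustive loop.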

Who killed Agatha?

This is a standard benchmark for theorem proving, also known as The Dreadsbury Mansion Murder Mystery.

Problem formulation from The h1 Tool Suite
Someone in Dreadsbury Mansion killed Aunt Agatha. Agatha, the butler, and Charles live in Dreadsbury Mansion, and are the only ones to live there. A killer always hates, and is no richer than his victim. Charles hates noone that Agatha hates. Agatha hates everybody except the butler. The butler hates everyone not richer than Aunt Agatha. The butler hates everyone whom Agatha hates. Noone hates everyone. Who killed Agatha?
Originally from F. J. Pelletier: Seventy-five problems for testing automatic theorem provers, Journal of Automated Reasoning, 2: 191-216, 1986.


MiniZinc: who_killed_agatha.mzn
JaCoP: (two variants, see below)
Gecode: who_killed_agatha.cpp

In Some new Gecode models I wrote about the findings from implementing this problem in Gecode, compared to Comet/MiniZinc.

The models use two 3x3 matrices for representing the two relations hates and richer, with 0..1 as domain (i.e. boolean). The Element constraint is used for implementing the condition A killer always hates, and is no richer than his victim, where the_killer is an integer variable; the_victim is in some models replaced by agatha (the integer 0). The interesting thing here is that at least one of the indices is an integer variable, which is what caused the difficulties in the two problems above.

These models also use a lot of boolean constructs. A comparison of how these are implemented in the different CP systems may be described in a future blog post.

MiniZinc:
hates[the_killer, the_victim] = 1 /\
richer[the_killer, the_victim] = 0
Comet:
[the_killer, the_victim] == 1);
[the_killer, the_victim] == 0);
Note: In the models below I have simplified the problem by using agatha (defined as the integer 0) instead of the integer variable the_victim. This is not a problem since we know that Agatha is the victim, and it is the reason why Element is easier to use here than in Crossword and Word square.

JaCoP variant 1 (no Element):
JaCoP doesn't have direct support for the case when the index i (in matrix[i][j]) is an integer variable, so the first variant of modeling the condition A killer always hates, and is no richer than his victim does not use Element at all. Instead we simply loop over all integers (0..2), check if "this" i equals the_killer, and can then use two plain integers for accessing the matrices. Also, note the IfThen construct.
for(int i = 0; i < n; i++) {
    store.impose(
        new IfThen(
            new XeqC(the_killer, i),
            new XeqC(hates[i][agatha], 1)));
    store.impose(
        new IfThen(
            new XeqC(the_killer, i),
            new XeqC(richer[i][agatha], 0)));
}
This was the first variant I implemented, but then I recalled the "trickery" used in Crossword and Word square, where the matrices were transposed so that Element could be used. The problem with this approach is that all constraints must be rewritten in a way that may be confusing. Come to think of it, maybe the names of the matrices should have been changed to is_hated_by and poorer.

JaCoP variant 2 (transposed matrices, Element)
This method of transposing and using Element is implemented in the second JaCoP model. The constraint is now much simpler:
int shift = -1;
for(int i = 0; i < n; i++) {
    store.impose(new Element(the_killer, hates[agatha], one, shift));
    store.impose(new Element(the_killer, richer[agatha], zero, shift));
Note: Element in JaCoP defaults to start index 1, but supports shifting it to 0 by using -1 as the shift parameter.

Choco variant 1 (no Element)
I implemented exactly the same principle that was used in the two JaCoP models in the two Choco models. The first - no Element - is:

for(int i = 0; i < n; i++) {
    m.addConstraint(implies(
                        eq(the_killer, i),
                        eq(hates[i][agatha], 1)));
    m.addConstraint(implies(
                        eq(the_killer, i),
                        eq(richer[i][agatha], 0)));
}
Choco variant 2 (transposed matrices, nth)
Note: one and zero are integer variables since nth cannot handle plain integers as the last argument.
for(int i = 0; i < n; i++) {
   m.addConstraint(nth(the_killer, hates[agatha], one));
   m.addConstraint(nth(the_killer, richer[agatha], zero));
}
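The whole puzzle is in fact small enough to solve by exhaustive enumeration. The following Python sketch (not one of the linked models) uses the same two 0/1 matrices plus a killer variable, under the usual additional assumption that richer is irreflexive and that for distinct persons exactly one of richer[i][j] and richer[j][i] holds:

```python
# Brute-force the Dreadsbury Mansion puzzle: enumerate the hates matrix,
# the richer relation, and the killer, keeping only consistent worlds.
from itertools import product

AGATHA, BUTLER, CHARLES = 0, 1, 2

def agatha_solutions():
    sols = []
    for h in product([0, 1], repeat=9):
        hates = [list(h[3 * r:3 * r + 3]) for r in range(3)]
        # Agatha hates everybody except the butler
        if not (hates[AGATHA][AGATHA] and not hates[AGATHA][BUTLER]
                and hates[AGATHA][CHARLES]):
            continue
        # Charles hates noone that Agatha hates
        if any(hates[AGATHA][i] and hates[CHARLES][i] for i in range(3)):
            continue
        # The butler hates everyone whom Agatha hates
        if any(hates[AGATHA][i] and not hates[BUTLER][i] for i in range(3)):
            continue
        # Noone hates everyone
        if any(all(row) for row in hates):
            continue
        for r in product([0, 1], repeat=3):
            richer = [[0] * 3 for _ in range(3)]
            for bit, (i, j) in zip(r, [(0, 1), (0, 2), (1, 2)]):
                richer[i][j], richer[j][i] = bit, 1 - bit
            # The butler hates everyone not richer than Aunt Agatha
            if any(not richer[i][AGATHA] and not hates[BUTLER][i]
                   for i in range(3)):
                continue
            for killer in (AGATHA, BUTLER, CHARLES):
                # A killer always hates, and is no richer than, his victim
                if hates[killer][AGATHA] and not richer[killer][AGATHA]:
                    sols.append(killer)
    return sols

killers = agatha_solutions()
print(len(killers), set(killers))  # 8 solutions, killer is always Agatha (0)
```

The 8 solutions differ only in "don't care" choices (whether Charles hates the butler, and the richer ordering between the pairs not pinned down by the clues), which is why every CP model of this puzzle reports multiple solutions with the same killer.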


Here we have seen - not surprisingly - that using the Element constraint is quite different depending on which CP system we use, and it can be easy or not so easy. It was my explicit intention to solve the same problem in as similar a way as possible. We should also note that most (if not all) problems can be modeled in many ways, some not using Element at all.

One last comment: The two Java models of Who killed Agatha? took quite a long time to implement. The main reason was not the handling of Element but a bug where I confused the two matrices in one of the conditions. Sigh.

April 04, 2009

MiniZinc/FlatZinc support in SICStus Prolog version 4.0.5

The other day I downloaded a demo of SICStus Prolog version 4.0.5, with the sole intention of testing the new support for MiniZinc/FlatZinc. The version I downloaded was for Linux 32 bit glibc 2.7 (my machine uses glibc 2.6, but it seems to work well anyway).

The support for MiniZinc/FlatZinc in SICStus Prolog is described more in 10.35 Zinc interface - library(zinc). Some restrictions are described in Notes:
  • Domain variables
    Only variables with finite integer domains are supported. This includes boolean variables which are considered finite integer domain variables with the domain 0..1.
  • Optimization problems
    Only variables with finite integer domains can be optimized in minimize and maximize solve items. The int_float_lin/4 expression as described in the FlatZinc specification is thus not supported.
  • Solve annotations
    • The solve annotations currently supported are int_search/4, bool_search/4, and labelling_ff/0.
    • The FlatZinc specification describes several exploration strategies. Currently, the only supported exploration strategy is "complete".
    • When no solve annotation is given, a most constrained heuristic is used on all problem variables (excluding those that have a var_is_introduced annotation, see below). This corresponds to labeling/2 of library(clpfd) with the option ffc.
    • The choice method "indomain_random" as described in the FlatZinc specification uses random_member/2 of library(random). The random generator of SICStus is initialized using the same seed on each start up, meaning that the same sequence will be tried for "indomain_random" on each start up. This behavior can be changed by setting a different random seed using setrand/1 of library(random).
  • Constraint annotations
    Constraint annotations are currently ignored.
  • Variable annotations
    Variable annotations are currently ignored, except var_is_introduced, which means that the corresponding variable is not considered in any default labeling (such as when no search annotation is given or when the labelling_ff search annotation is given).


For testing the MiniZinc solver I used exactly the same principle as for the ECLiPSe solver, so hooking it up into my system was very easy. All this is done via a Perl script of my own, which generates a Prolog file with the content below, where model.mzn is the MiniZinc file to run and number_of_solutions is the number of solutions to generate (an integer, or all for all solutions).

% Generated by
:- use_module(library(zinc)).

go :-
And then running the following command (from the Perl program):
sicstus -l --goal go.


Most things work very well, and with about the same performance as the ECLiPSe solver. I will investigate some more before (perhaps) buying a private license of SICStus Prolog (or upgrading from my old version 3.9.1, if that is possible). However, I did find some problems.
  • global_cardinality/2
    The support for the builtin global_cardinality/2 is broken. The following error occurs:
    ! Existence error
    ! `global_cardinality/2' is not defined
    Example: sudoku_gcc.mzn.

    There is an easy fix which works but is slower than the builtin support: in the file globals.mzn (in the SICStus Prolog distribution), just use the decomposition variant (the commented-out one) instead.
  • cumulative
    For the model furniture_moving.mzn the following error occurs:
    ! Instantiation error in argument 2 of user:cumulative/2
    ! goal:  cumulative([task(_4769,30,_4771,3,1),task(_4777,10,_4779,1,2),task(_4785,15,_4787,3,3),task(_4793,15,_4795,2,4)],[limit(_4801)])
    Note: the model cumulative_test_mats_carlsson.mzn works without problems (it is a simple example from one of Mats Carlsson's lectures).
  • integer overflow
    For the Grocery example (grocery.mzn) an integer overflow error is thrown:
    ! Item ending on line 3:
    ! Representation error in argument 1 of user:range_to_fdset/2
    ! CLPFD integer overflow
    ! goal:  range_to_fdset(1..359425431,_4771)
    Notes: MiniZinc's own solver also overflows, so this is probably not a big thing. The solvers for Gecode/FlatZinc and ECLiPSe ic handle this problem correctly, though.
  • Statistics
    I have not seen any relevant statistics (e.g. number of failures, nodes, propagations etc.) from the SICStus MiniZinc solver. The standard SICStus Prolog predicate statistics/0 is somewhat useful, but it is not what is needed when writing MiniZinc models and comparing with other versions and/or solvers.

    What I have in mind is something like the statistics from Gecode (and Gecode/FlatZinc):
    runtime:       50
    solutions:     1
    propagators:   8
    propagations:  3995
    nodes:         249
    failures:      124
    peak depth:    12
    peak memory:   27 KB

Final note

I have contacted the developers of SICStus Prolog about these things. They responded that the bugs are now fixed and will be included in the next version (4.0.6). They also indicated that more detailed statistics may make it into a later version. That is great!

I have now also added the SICStus Prolog solver on my MiniZinc page.

March 24, 2009

Gecode version 3.0.1 and Gecode/FlatZinc 1.5 released

Version 3.0.1 of Gecode is released. It contains mostly bug fixes:

This is a bug fix release fixing an embarassing bug in reified Boolean linear constraints.
  • Finite domain integers

    • Bug fixes
      • IntSetArgs no longer inherit from PrimArgArray, which was wrong as IntSet is no primitive type and hence does not support vararg initializers. (minor)
      • Fixed bug in reified Boolean linear constraints (an optimization is currently disabled and will be active in the next release: the optimization was incorrect and was never tested). (major, thanks to Alberto Delgado)

  • Example scripts
    • Additions
    • Bug fixes
      • The examples now pass the c-d and a-d command line options correctly to Gist. (minor)
      • The Steel Mill Slab Design example had two bugs in the search heuristic and a missing redundant constraint. (minor, bugzilla entry, thanks to Chris Mears)

Gecode/FlatZinc has been updated to version 1.5. The bug fix here is very interesting (and exciting) since Gist now also works with Gecode/FlatZinc.

Gist in Gecode and Gecode/FlatZinc
Gist for Gecode has been around for some time, and was officially included in the distribution in version 3.0.0. In Modeling with Gecode (PDF) there is a section on how to program and use Gist.

Stable Marriage Problem.
As a first example of Gist with Gecode/FlatZinc (i.e. MiniZinc models): stable_marriage.pdf is a PDF file which shows the full search tree for the MiniZinc model stable_marriage.mzn. The green nodes are solutions, the blue nodes are choice points, the red squares are failures, and the big red triangles are failed subtrees.

Running the model with fz -mode stats stable_marriage.fzn gives the following result.

wife : [7, 5, 9, 8, 3, 6, 1, 4, 2]
husband: [7, 9, 5, 8, 2, 6, 1, 4, 3]
wife : [6, 5, 9, 8, 3, 7, 1, 4, 2]
husband: [7, 9, 5, 8, 2, 1, 6, 4, 3]
wife : [6, 4, 9, 8, 3, 7, 1, 5, 2]
husband: [7, 9, 5, 2, 8, 1, 6, 4, 3]
wife : [6, 1, 4, 8, 5, 9, 3, 2, 7]
husband: [2, 8, 7, 3, 5, 1, 9, 4, 6]
wife : [6, 4, 1, 8, 5, 7, 3, 2, 9]
husband: [3, 8, 7, 2, 5, 1, 6, 4, 9]
wife : [6, 1, 4, 8, 5, 7, 3, 2, 9]
husband: [2, 8, 7, 3, 5, 1, 6, 4, 9]

runtime: 10
solutions: 6
propagators: 346
propagations: 12426
nodes: 67
failures: 28
peak depth: 8
peak memory: 132 KB

Also see My MiniZinc Page for MiniZinc models which may be used with Gecode/FlatZinc.

March 12, 2009

MiniZinc Challenge 2008 Results

The MiniZinc Challenge 2008 was held in the summer of 2008. I don't know exactly when the results were published (probably in the last week), but here they are: MiniZinc Challenge 2008 Results.

The summary states (a higher score is better):

The results of the challenge are available here; congratulations go to GeCode on a very convincing win! The summary of results is given in this table:

Contestant Total Score
eclipse_fd 787.3
eclipse_ic 938.8
g12_fd 1655.1
gecode 3418.8

Congratulations to the Gecode team!

This result confirms my impression of the MiniZinc solvers: Gecode/FlatZinc is my favorite solver, since it is often very fast, and it also shows important statistics. See Pi Day Sudoku 2009 for some examples of the latter.

As of writing (2009-03-12) the links to the models and data files on the result page don't work, but all the files are included in the latest ROTD (release of the day).

By the way, two of the models are from my MiniZinc collection:
* debruijn_binary.mzn: de Bruijn sequences
* quasiGroup7.mzn: quasigroup problem 7

March 11, 2009

Pi Day Sudoku 2009

Brainfreeze Puzzles has a Pi day Sudoku competition:

Pi Day is a celebration of the number π that occurs every March 14 (3.14...). Math geeks all over the world celebrate their math-geekiness on this day with pie-eating contests, recitations of the digits of pi, and occasionally fundraisers where math faculty get hit in the face with pies. At Brainfreeze Puzzles we celebrate Pi Day - how else? - with a Sudoku puzzle.


Rules: Fill in the grid so that each row, column, and jigsaw region contains 1-9 exactly once and π [Pi] three times.

The puzzle is also here (as a PDF file)


I programmed this puzzle in both Comet (using the constraint programming module) and MiniZinc, using different solvers. The models use the same principle for solving the problem: a (homebrewed) all different except 0 constraint, and counting exactly 3 occurrences of Pi for each row/column and region.
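The per-group rule boils down to a simple multiset condition. A Python sketch of the check the models encode (not the Comet/MiniZinc code itself; encoding Pi as the value 10 and the group size as 12 are my assumptions for illustration):

```python
# Each row, column, and region must contain 1..9 exactly once
# and the Pi symbol exactly three times (12 cells per group).
from collections import Counter

PI = 10  # arbitrary integer encoding of the Pi symbol

def valid_group(cells):
    return Counter(cells) == Counter({**{d: 1 for d in range(1, 10)}, PI: 3})

assert valid_group([PI, 1, 2, PI, 3, 4, 5, 6, PI, 7, 8, 9])
assert not valid_group([PI, 1, 2, PI, 3, 4, 5, 6, 6, 7, 8, 9])  # two 6s, two Pi
```

In CP terms this is exactly "all different except Pi" plus a count constraint on Pi, which is why the alldifferent_except_0 trick plus an exact-count does the job.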

All the solvers give the same unique answer, which of course may mean that there is some systematic error on my part.


Comet solves the problem in about 9 seconds with the following statistics, using exploreall to make sure that the solution is unique.

time: 8821
#choices = 9999
#fail = 16231
#propag = 23168886

Somewhat surprisingly, selecting j before i in the forall loop gives an improvement from 15 seconds to 9 seconds. This was the same labeling strategy that was used for improving Nonogram (see Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second). Here is the labeling:

// reversing i and j gives faster solution
forall(j in 1..n, i in 1..n: !x[i,j].bound()) {
  tryall(v in V : x[i,j].memberOf(v)) by(v)
    m.label(x[i,j], v);
  onFailure
    m.diff(x[i,j], v);
}

MiniZinc and Gecode/flatzinc

Interestingly, the same improvement was observed with MiniZinc and the Gecode/FlatZinc solver (the interface to Gecode). Here is the labeling, with the same swapping of i and j (I didn't know that this mattered in MiniZinc).

solve :: int_search([x[i,j] | j, i in 1..n], "input_order", "indomain", "complete") satisfy;

With the swap, the time went from 16 seconds to 10 seconds with the following statistics:

propagators: 9460
propagations: 13461717
failures: 11221
clones: 11221
commits: 31478
peak memory: 5638 KB
sudoku_pi.mzn --num 2  9,41s user 0,72s system 99% cpu 10,139 total

Fzntini, FlatZinc, ECLiPSe and searching for the first solution

MiniZinc's builtin solver flatzinc and fzntini (a SAT solver) show only one solution and no statistics.

* fzntini solves the problem in about 13 seconds
* flatzinc takes 3 seconds

ECLiPSe's MiniZinc solver ic takes 6.5 seconds using the same labeling as Gecode/FlatZinc. No statistics are shown for this solver either.

For comparison, both Comet and Gecode/FlatZinc show the first solution in 2.5 seconds.


Since it is a competition I won't show my solution or the models on/via the blog. Sorry about that.


I learned about Pi Days Sudoku from the 360 blog: Pi Day Sudoku 2009.

February 28, 2009

Comet: regular constraint, a much faster Nonogram with the regular constraint, some OPL models, and more

Since the last time, some more Comet models have been written.

Much faster Nonogram model using the regular constraint

In More Comet models, e.g. Nonogram, Steiner triplets, and different set covering problems I presented a solver for Nonogram puzzles, and also noted that it was quite slow: Well, it is nice to have some more optimization to do (or more probably, a complete remodelling)....

Inspired by this week's announcement of the ECLiPSe example of a Nonogram solver using the regular constraint (see nono_regular.ecl.txt and regular.ecl.txt) - and also my earlier Gecode/R model nonogram.rb which used "regular expressions" - I created a Nonogram model in Comet using the regular constraint.

Let us first look at the regular constraint.

Regular constraint
The regular constraint (see the Comet model for my implementation) is a global constraint using a DFA (deterministic finite automaton) which accepts (or rejects) an input sequence, given a "transition matrix". The constraint was presented by Gilles Pesant in "A Regular Language Membership Constraint for Finite Sequences of Variables" (2004).

My implementation of the regular constraint is heavily borrowed from MiniZinc's builtin regular predicate (from lib/zinc/globals.mzn); the translation to Comet was very straightforward (omitting just some asserts).

An example of the usage of the constraint: we want to match the regular expression "123*21" (i.e. first "1", then "2", then zero or more "3", then "2", and last a "1"). Note: the length of the sequence is 10, so there must be six "3"s, since "123" and "21" are "anchored" at the beginning and the end.

int len = 10;
int n_states = 5; // number of states
int input_max = 3; // the states are 1,2, and 3
int initial_state = 1; // we start with the 1 state
set{int} accepting_states = {n_states}; // This is the last state
// The transition matrix
int transition_fn[1..n_states, 1..input_max] =
[[2, 0, 0], // transitions from state 1: "1" -> state 2
[0, 3, 0], // transitions from state 2: "2" -> state 3
[0, 4, 3], // transitions from state 3: "2" -> state 4 | "3" -> state 3 (loop)
[5, 0, 0], // transitions from state 4: "1" -> state 5
[0, 0, 0]]; // transitions from state 5: END state
exploreall {
regular(reg_input, n_states, input_max, transition_fn, initial_state, accepting_states);
} using {
// ....

The unique sequence resulting from this automaton is thus 1 2 3 3 3 3 3 3 2 1.
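This uniqueness claim is easy to cross-check by brute force. A Python sketch that runs the DFA above over all 3^10 sequences (with state 0 taken as the failure state, as in the model):

```python
# Run the transition matrix from the example over every sequence of
# length 10 and collect the accepted ones.
from itertools import product

transition = [[2, 0, 0],  # state 1: "1" -> state 2
              [0, 3, 0],  # state 2: "2" -> state 3
              [0, 4, 3],  # state 3: "2" -> state 4 | "3" -> state 3 (loop)
              [5, 0, 0],  # state 4: "1" -> state 5
              [0, 0, 0]]  # state 5: END state

def accepted(seq, initial=1, accepting=(5,)):
    state = initial
    for symbol in seq:
        state = transition[state - 1][symbol - 1]
        if state == 0:  # 0 is the failure state
            return False
    return state in accepting

sols = [s for s in product([1, 2, 3], repeat=10) if accepted(s)]
print(sols)  # [(1, 2, 3, 3, 3, 3, 3, 3, 2, 1)]
```

The CP version of regular does the same walk, except that the sequence consists of decision variables, so the constraint prunes the variables' domains instead of testing concrete sequences.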

For using regular in the Nonogram problem, the automaton for each row/column clue must be built, preferably by a function. In the model this is done with the function make_transition_matrix, which was by far the hardest part of this problem (and surely could be written in a smoother way).

For the Nonogram clue [3,2,1] - which represents the regular expression "0*1110+110+10*" (blocks must be separated by at least one 0) - the following automaton (transition matrix) is generated:
1 2
0 3
0 4
5 0
5 6
0 7
8 0
8 9
9 0

Note that the regular function uses 0 (zero) as the failure state, so the states must start with 1. This is taken care of in the model by subtracting 1 from the resulting values.
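As a cross-check of the generated matrix, the following Python sketch runs the automaton above over every 0/1 row of length 9 and compares it with the run-length meaning of the clue [3,2,1]. The input encoding (1 = blank, 2 = filled) and state 9 as the single accepting state are my reading of the table:

```python
# Verify: the automaton accepts exactly the rows whose blocks of filled
# cells are [3, 2, 1].
from itertools import groupby, product

table = [[1, 2], [0, 3], [0, 4], [5, 0], [5, 6],
         [0, 7], [8, 0], [8, 9], [9, 0]]  # the matrix shown above

def dfa_accepts(row):
    state = 1
    for cell in row:
        state = table[state - 1][cell - 1]
        if state == 0:  # failure state
            return False
    return state == 9

def blocks(row):
    # reference semantics: lengths of the runs of filled cells (value 2)
    return [len(list(g)) for v, g in groupby(row) if v == 2]

for row in product([1, 2], repeat=9):
    assert dfa_accepts(row) == (blocks(row) == [3, 2, 1])
```

So each clue compiles to a small DFA, and posting regular on a row forces its cells to spell out exactly that block pattern.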

As usual in my models, the regular constraint is just a "convenience function", i.e. not using special written propagation methods etc.

The regular constraint has - of course - more applications than for Nonogram solving. I plan to look more into this.

The Nonogram solver:
After the regular constraint and the automaton generator were written, it was quite easy to change the old Nonogram solver to use these new tools; the result is a new Comet model. I was quite curious how fast this version would be compared to the former slow model. In short: it is much faster.

As a comparison, the Lambda instance took about 12 seconds with the old model; with the new model it takes 0.5 seconds, which is a nice improvement. I never finished a run of the Nonunique problem with the older model; the new model takes 0.5 seconds (including the startup of the Comet program). Etc.

The P200 problem now takes 2:05 minutes, which can be compared with 57 seconds for the Gecode/R model. Thus, the Comet model is still slow compared to the Gecode version and the ECLiPSe version, which solve the P200 problem in just a second or two. Maybe some creative labeling or a "proper" regular constraint can speed things up...

Update an hour or so later: One way to gain about 30 seconds - down to 1:30 minutes on the P200 problem - was to explicitly state the consistency level onDomains when posting the constraints, e.g.[m] == q0, onDomains), and to use another labeling strategy:
forall(i in 1..rows, j in 1..cols : !board[i,j].bound()) {
  // label(board[i,j]); // the former strategy
  tryall(v in 1..2)
   [i,j] == v, onDomains);
}

Some more Nonogram problems have been coded:

An aside about regular expressions
I have been very interested in regular expressions (especially the more expressive Perl variants) for a long time. In 1997 I wrote a simple Perl program MakeRegex which returns a regular expression given a list of words. It was later Applet-ized in MakeRegexApplet. Now there are better programs/modules for this.

OPL Models

One of my projects is to translate the OPL models from Pascal Van Hentenryck's The OPL Optimization Programming Language into Comet. Even if OPL and Comet are different languages, reading the book has been very rewarding. Thanks again, Pascal!

Some OPL models already have been published, but now I've been more systematic and started from the beginning. More models to come.

Finding: arrays in a tuple
In Comet: New models, e.g. Einstein puzzle, KenKen, Kakuro, Killer Sudoku, Stigler's Diet problem, translations of OPL models I wrote

One big difference here: in Comet it is not allowed to have an array in a tuple, so the use data must be in a separate matrix.

I've found a way of using arrays in a tuple: first initialize the values in a matrix, and then - by using the all function - "slice" the values into the tuple array. This has been done in two of the models.

Example from one of these models:

tuple productType {
  int profit;
  set{Machines} machines;
  int[] use;
}
int use[Products, Resources] = [[3,4], [2,3], [6,4]];
productType Product[Products] =
  [productType(6, {shirtM}, all(i in Resources) use[shirt,i]),
   productType(4, {shortM}, all(i in Resources) use[shorts,i]),
   productType(7, {pantM}, all(i in Resources) use[pants,i])];

Combining different models

The alphametic problem SEND + MOST = MONEY has the additional requirement to maximize the value of MONEY. The older model does that and nothing more.

One natural extension of the problem is the following:
* first find the maximum value of MONEY
* then find all the solutions with this value.

The model has two functions:
* smm which is just the constraint for the alphametic problem
* send_most_money which has a parameter money.

send_most_money is first called with 0, indicating that it should maximize the value of MONEY, and then it returns that value. The next call to send_most_money is with the calculated MONEY value, which indicates that all solutions with this value should be generated.

The answer is

check all solutions for MONEY = 10876
x[9,7,8,2,1,0,4,6] MONEY: 10876
x[9,7,8,4,1,0,2,6] MONEY: 10876
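For readers without Comet at hand, the same maximize-then-enumerate idea can be sketched as a brute-force search in Python (an illustration only, not the Comet model):

```python
from itertools import permutations

def send_most_money():
    """Find the maximum value of MONEY in SEND + MOST = MONEY,
    together with all letter assignments attaining it."""
    best, sols = 0, []
    for s, e, n, d, m, o, t, y in permutations(range(10), 8):
        if s == 0 or m == 0:  # no leading zeros
            continue
        send = 1000*s + 100*e + 10*n + d
        most = 1000*m + 100*o + 10*s + t
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + most == money:
            if money > best:
                best, sols = money, []   # a better MONEY: restart the list
            if money == best:
                sols.append((s, e, n, d, m, o, t, y))
    return best, sols
```

Running it reproduces the answer above: MONEY = 10876 with exactly the two assignments of (s, e, n, d, m, o, t, y).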

Jim Orlin's Logic Puzzle

In Colored letters, labeled dice: a logic puzzle Jim Orlin stated a logic puzzle:
My daughter Jenn bought a puzzle book, and showed me a cute puzzle. There are 13 words as follows: BUOY, CAVE, CELT, FLUB, FORK, HEMP, JUDY, JUNK, LIMN, QUIP, SWAG, VISA, WISH.

There are 24 different letters that appear in the 13 words. The question is: can one assign the 24 letters to 4 different cubes so that the four letters of each word appears on different cubes. (There is one letter from each word on each cube.) It might be fun for you to try it. I’ll give a small hint at the end of this post. The puzzle was created by Humphrey Dudley.


If anyone wants to write an Excel spreadsheet and solve it via integer programming, please let me know. I’d be happy to post the Excel spreadsheet if you send it to me, or link to it if you post it and send me the URL.

This was a fun puzzle, so I modeled the problem in Comet and mailed Jim the link.

Some days later he wrote Update on Logic Puzzle, where the contributed models were presented. There was another Comet model by Pierre Schaus, one of Comet's developers. Pierre's model uses a different and more elegant approach than mine.

Jim also linked to some other logic puzzles from the great collection (which has solutions in ECLiPSe/Prolog). One of these puzzles was Building Blocks, in the same vein as his labeled dice puzzle. Hence I had to make a Comet model of this problem:

He concludes the post with the following:

Incidentally, the ease for solving it using Constraint Programming makes me think that Constraint Programming should be considered a fundamental tool in the OR toolkit.
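To illustrate why this puzzle is easy for constraint-style search, here is a small backtracking sketch in Python - my own illustration, not any of the Comet models mentioned above. The word list is from the puzzle:

```python
WORDS = ["BUOY", "CAVE", "CELT", "FLUB", "FORK", "HEMP", "JUDY",
         "JUNK", "LIMN", "QUIP", "SWAG", "VISA", "WISH"]
LETTERS = sorted(set("".join(WORDS)))  # the 24 distinct letters
N_CUBES, CUBE_SIZE = 4, 6

def solve():
    """Assign each letter to one of 4 cubes (6 letters per cube) so that
    the 4 letters of every word land on 4 different cubes."""
    assign, counts = {}, [0] * N_CUBES

    def ok(letter, cube):
        if counts[cube] == CUBE_SIZE:       # cube already full
            return False
        # no two letters of the same word may share a cube
        return all(assign.get(ch) != cube
                   for w in WORDS if letter in w
                   for ch in w if ch != letter)

    def backtrack(i):
        if i == len(LETTERS):
            return dict(assign)
        letter = LETTERS[i]
        # symmetry breaking: the first letter always goes on cube 0
        for cube in range(1 if i == 0 else N_CUBES):
            if ok(letter, cube):
                assign[letter] = cube
                counts[cube] += 1
                result = backtrack(i + 1)
                if result:
                    return result
                del assign[letter]
                counts[cube] -= 1
        return None

    return backtrack(0)
```

The per-word pruning keeps the search tree tiny, which is essentially why CP systems find this puzzle so easy.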

Other Comet models this week

Here are the other Comet models created/published this week. Some of them were very easy to do since they are translations of my MiniZinc models.

January 18, 2009

Some other Gecode/R models, mostly recreational mathematics

Here are some new Gecode/R models. They are mostly in recreational mathematics, since I am still learning the limits of Gecode/R and want to use simple problems.


This is a simple alphametic puzzle: SEND + MOST = MONEY. The model is based on the example in the Gecode/R distribution (send_most_money.rb). The difference from its cousin SEND + MORE = MONEY is that there are many solutions, and we want to maximize the value of MONEY (10876).

The model is send_most_money2.rb.

Now, there are actually 4 different solutions for MONEY = 10876 which we want to find.

s e n d m o s t y
3 7 8 2 1 0 9 4 6
5 7 8 2 1 0 9 4 6
3 7 8 4 1 0 9 2 6
5 7 8 4 1 0 9 2 6

In order to do that, the model first solves the maximization problem and assigns the value (10876) to max_money; the second step finds all 4 solutions. These two steps use the same code, the difference being that the second step activates the following constraint:

if max_money > 0
  money.must == max_money
end

Nothing fancy, but it is quite easy to do in Ruby and Gecode/R.

Steiner triplets

This is another standard problem in the constraint programming (and recreational mathematics) community: find a set of triplets of the numbers from 1 to n such that any two (different) triplets have at most one element in common.

For more about this, see Mathworld: SteinerTripleSystem, and Wikipedia Steiner_system.

The Gecode/R model is steiner.rb.

The problem can be simply stated as:

nb = n * (n-1) / 6 # size of the array
sets_is_an set_var_array(nb, [], 1..n)
sets.must.at_most_share_one_element(:size => 3)
branch_on sets, :variable => :smallest_unknown, :value => :min

This, however, is very slow (and I didn't care to wait that long for a solution). I tried some other branching strategies but found none that made any real difference.

When the following constraints were added, things really sped up. They are in effect the same as the constraint sets.must.at_most_share_one_element(:size => 3) above:

nb.times{|i| sets[i].size.must == 3 }
nb.times{|i| (i+1...nb).each{|j| (sets[i].intersection(sets[j])).size.must <= 1 } }

The first 10 solutions for n = 7 took about 1 second. The first solution is:

Solution #1
{1, 2, 3}
{1, 4, 5}
{1, 6, 7}
{2, 4, 6}
{2, 5, 7}
{3, 4, 7}
{3, 5, 6}
memory: 25688
propagations: 302
failures: 0
clones: 2
commits: 16

For n = 9, 10 solutions took slightly longer, 1.3 seconds.
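For comparison, the underlying problem can also be attacked with plain backtracking; here is a Python sketch (an illustration, not the Gecode/R model):

```python
from itertools import combinations

def steiner(n):
    """Find n*(n-1)/6 triples from 1..n such that any two triples
    share at most one element (a Steiner triple system)."""
    nb = n * (n - 1) // 6
    triples = list(combinations(range(1, n + 1), 3))
    sol = []

    def backtrack(start):
        if len(sol) == nb:
            return True
        for i in range(start, len(triples)):
            t = triples[i]
            # any two chosen triples may share at most one element
            if all(len(set(t) & set(u)) <= 1 for u in sol):
                sol.append(t)
                if backtrack(i + 1):
                    return True
                sol.pop()
        return False

    return sol if backtrack(0) else None
```

For n = 7 this returns the same first solution as shown above.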

Devil's Words

Gecode/R model: devils_word.rb.

Devil's word is a "coincidence game": the ASCII codes of the letters in a name, often a famous person's, are given signs so that they sum to 666, and some point is then made about that fact (which of course is nonsense).

There are 189 different ways my own name HakanKjellerstrand (where the umlaut "å" in my first name is replaced with "a") can be "devilized" to 666. With output it took about 2 seconds to generate the solutions; without output it took 0.5 seconds.

The first solution is:

+72 +97 +107 +97 +110 -75 +106 +101 +108 -108 -101 +114 +115 +116 +114 -97 -110 -100
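The count is easy to reproduce by sweeping over all 2^18 sign combinations; here is a Python sketch (the name and the target 666 are from this post):

```python
from itertools import product

def devils_word(name, target=666):
    """Count the ways of putting + or - in front of each character's
    ASCII code so that the signed sum equals the target."""
    codes = [ord(c) for c in name]
    solutions = []
    for signs in product((1, -1), repeat=len(codes)):
        if sum(s * v for s, v in zip(signs, codes)) == target:
            solutions.append([s * v for s, v in zip(signs, codes)])
    return solutions
```

Running devils_word("HakanKjellerstrand") gives the 189 sign assignments mentioned above, among them the first solution shown.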

Also, see
* MiniZinc model: devils_word.mzn
* my CGI program: Devil's words.

And see Skeptic's law of truly large numbers (coincidence) for more about coincidences. The CGI program mentioned above was presented in the Swedish blog post Statistisk data snooping - att leta efter sammanträffanden ("Statistical data snooping - looking for coincidences"), which contains some more references to these kinds of coincidences.

Pandigital Numbers, "any" base

Pandigital numbers are a recreational mathematics construction. From MathWorld's Pandigital Number:

A number is said to be pandigital if it contains each of the digits
from 0 to 9 (and whose leading digit must be nonzero). However,
"zeroless" pandigital quantities contain the digits 1 through 9.
Sometimes exclusivity is also required so that each digit is
restricted to appear exactly once.

The Gecode/R model pandigital_numbers.rb extends this to handle "all" bases. Or rather bases from 2 to 10, since larger bases cannot be handled by the Gecode solver.

For base 10 using the digits 1..9 there are 9 solutions:

4 * 1738 = 6952 (base 10)
4 * 1963 = 7852 (base 10)
18 * 297 = 5346 (base 10)
12 * 483 = 5796 (base 10)
28 * 157 = 4396 (base 10)
27 * 198 = 5346 (base 10)
39 * 186 = 7254 (base 10)
42 * 138 = 5796 (base 10)
48 * 159 = 7632 (base 10)

For base 10, using digits 0..9, there are 22 solutions.

Here are the numbers of solutions for each base from 2 to 10, with start digit either 0 or 1 (there are no solutions for bases 2..4):
* base 2, start 0: 0
* base 2, start 1: 0
* base 3, start 0: 0
* base 3, start 1: 0
* base 4, start 0: 0
* base 4, start 1: 0
* base 5, start 0: 0
* base 5, start 1: 1
* base 6, start 0: 0
* base 6, start 1: 1
* base 7, start 0: 2
* base 7, start 1: 2
* base 8, start 0: 4
* base 8, start 1: 4
* base 9, start 0: 10
* base 9, start 1: 6
* base 10, start 0: 22
* base 10, start 1: 9
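The underlying search can be sketched by brute force in Python (an illustration, not the Gecode/R model; the a <= b normalization and the loop bounds are my own assumptions):

```python
def to_digits(x, base):
    """Digits of x in the given base, least significant first."""
    ds = []
    while x:
        ds.append(x % base)
        x //= base
    return ds

def pandigital_products(base=10, start=1):
    """All a * b = c where the digits of a, b and c together use each
    of the digits start..base-1 exactly once."""
    n = base - start                     # total number of digits required
    target = list(range(start, base))
    sols = set()
    for a in range(1, base ** (n // 2)): # a <= b, so a needs few digits
        for b in range(a, base ** n):
            c = a * b
            ds = to_digits(a, base) + to_digits(b, base) + to_digits(c, base)
            if len(ds) > n:
                break                    # the digit count only grows with b
            if len(ds) == n and sorted(ds) == target:
                sols.add((a, b, c))
    return sorted(sols)
```

For base 10 with start digit 1 this reproduces the nine solutions listed above.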

See also the MiniZinc model pandigital_numbers.mzn with a slightly different approach using the very handy MiniZinc construct exists instead of looping through the model for different len1 and len2.

SEND + MORE = MONEY (any base)

And talking of "all base" problems, send_more_money_any_base.rb is a model for solving SEND+MORE=MONEY in any base from 10 and onward.

The numbers of solutions from base 10 and onward are the triangular numbers 1, 3, 6, 10, 15, 21, ...:

* Base 10: 1 solution
* Base 11: 3 solutions
* Base 12: 6
* Base 13: 10
* Base 14: 15
* Base 15: 21
* etc

For more about triangle numbers, see Wikipedia Triangular numbers.
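These counts are easy to verify by brute force; here is a Python sketch (fixing M = 1, which is forced since SEND + MORE < 2 * base**4 while MONEY >= base**4):

```python
from itertools import permutations

def send_more_money(base):
    """All solutions of SEND + MORE = MONEY in the given base."""
    m = 1  # M is forced to 1: SEND + MORE < 2 * base**4
    digits = [d for d in range(base) if d != m]
    sols = []
    for s, e, n, d, o, r, y in permutations(digits, 7):
        if s == 0:  # no leading zero in SEND
            continue
        send = ((s * base + e) * base + n) * base + d
        more = ((m * base + o) * base + r) * base + e
        money = (((m * base + o) * base + n) * base + e) * base + y
        if send + more == money:
            sols.append((s, e, n, d, m, o, r, y))
    return sols
```

len(send_more_money(b)) for b = 10, 11, 12 gives 1, 3, 6, matching the triangular-number pattern.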

There seems to be a pattern in the solutions for a given base b:

           S  E  N  D  M  O  R  Y
 Base 10:  9  5  6  7  1  0  8  2
 Base 11: 10  7  8  6  1  0  9  2
          10  6  7  8  1  0  9  3
          10  5  6  8  1  0  9  2
 Base 12:
          11  8  9  7  1  0 10  3
          11  8  9  6  1  0 10  2
          11  7  8  9  1  0 10  4
          11  6  7  9  1  0 10  3
          11  6  7  8  1  0 10  2
          11  5  6  9  1  0 10  2
 Base 23:
          22 19 20 18  1  0 21 14
          22 19 20 17  1  0 21 13

Some patterns:

S: always base-1 e.g. 9 for base 10
M: always 1 e.g. 1 any base
O: always 0 e.g. 0 any base
R: always base-2 e.g. 8 for base 10
E, N, D: from base-3 down to 5 e.g. {5,6,7,8,9} for base 10
Y: between 2 and ??? e.g. {2,3,4} for base 12

I haven't read any mathematical analysis of these patterns. There is, however, an article in Math Horizons, April 2006: "SENDing MORE MONEY in any base" by Christopher Kribs-Zaleta, with the summary: Dudeney's classic "Send More Money" cryptarithmetic puzzle has a unique solution in base ten. This article generalizes the puzzle and explores solutions in bases other than ten. I haven't read this article, though.

Also, see my MiniZinc model send_more_money_any_base.mzn.

January 15, 2009

Some models in Gecode/R (Ruby interface to Gecode)

Gecode/R is a great Ruby interface to Gecode (implemented in C++). At last I have now done some modeling in Gecode/R, and it's nice.

The models and some other information about Gecode/R can be seen at My Gecode/R page.

Since Ruby is a very high level language, the modelling can be done "high levelish", more so than with, for example, the Java solvers Choco and JaCoP. And that is a feature I really like.

I'm still learning Gecode/R and have surely missed some stuff. There are not many examples in the package (these are also commented at Examples). Some things have been of great help:
* The Sitemap
* The Documentation, especially the RDocs
* The test files in the specs directory.

An example: Survo Puzzle
From Wikipedia's Survo Puzzle

In a Survo puzzle the task is to fill an m * n table by integers 1,2,...,m*n so
that each of these numbers appears only once and their row and column sums are
equal to integers given on the bottom and the right side of the table.
Often some of the integers are given readily in the table in order to
guarantee uniqueness of the solution and/or for making the task easier.

E.g. the puzzle 128/2008 is presented with the following clues:

* * * * * * 30
* * 18 * * * 86
* * * * * * 55
22 11 42 32 27 37

where * marks a number to find. The numbers to the right are the row sums (what each row must sum to), and the last row contains the column sums.

The unique solution of the problem, obtained by running ruby survo_puzzle.rb survo_puzzle_128_2008.txt, is:

Solution #1
4 1 10 5 3 7 = 30
12 8 18 16 15 17 = 86
6 2 14 11 9 13 = 55
= = = = = =
22 11 42 32 27 37

propagations: 3784
failures: 174
clones: 179
memory: 25740
commits: 461
Number of solutions: 1
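For comparison outside any CP system, the same instance can be solved by a column-by-column backtracking search; here is a Python sketch (the sums and the single clue 18 are taken from puzzle 128/2008 above):

```python
from itertools import permutations

ROWSUMS = [30, 86, 55]
COLSUMS = [22, 11, 42, 32, 27, 37]
CLUES = {(1, 2): 18}  # 0-based (row, column): the given 18

def solve():
    """Fill the 3x6 grid with 1..18, all distinct, matching all sums."""
    r, c = len(ROWSUMS), len(COLSUMS)
    grid = [[0] * c for _ in range(r)]
    used, rsum, sols = set(), [0] * r, []

    def backtrack(j):
        if j == c:
            if rsum == ROWSUMS:
                sols.append([row[:] for row in grid])
            return
        free = [v for v in range(1, r * c + 1) if v not in used]
        for col in permutations(free, r):
            if sum(col) != COLSUMS[j]:
                continue  # the column must hit its sum exactly
            if any(CLUES.get((i, j), col[i]) != col[i] for i in range(r)):
                continue  # respect the given clues
            if any(rsum[i] + col[i] > ROWSUMS[i] for i in range(r)):
                continue  # partial row sums may not overshoot
            for i in range(r):
                grid[i][j] = col[i]
                used.add(col[i])
                rsum[i] += col[i]
            backtrack(j + 1)
            for i in range(r):
                used.remove(col[i])
                rsum[i] -= col[i]

    backtrack(0)
    return sols
```

Since the search is exhaustive, it also confirms that the solution is unique.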

The relevant constraint programming code is below (slightly edited). I think it's quite nice.

def initialize(clues, rowsums, colsums)
  r = rowsums.length # number of rows
  c = colsums.length # number of columns
  x_is_an int_var_matrix(r, c, 1..r*c) # the solution matrix
  x.must_be.distinct # all values in x must be distinct
  # clues with values > 0 are copied straight off to x
  r.times{|i| c.times{|j| x[i,j].must == clues[i][j] if clues[i][j] > 0 } }
  r.times{|i| x[i,0..c-1].sum.must == rowsums[i] } # check row sums
  c.times{|j| x.transpose[j,0..r-1].sum.must == colsums[j] } # check column sums
  branch_on x, :variable => :smallest_size, :value => :min
end

The full program is here: survo_puzzle.rb, and three data files:

The Gecode/R models
Below are the models shown at My Gecode/R page. The selection is mostly for comparison with models implemented in other constraint programming languages, see respectively:
* My MiniZinc page (which has a lot more models)
* My JaCoP page
* My Choco page

The models I wrote first (e.g. diet, least_diff) are very "un-Rubyish", since I modeled them straight after the MiniZinc models. Maybe I'll Rubyfy them later.

After a while the modeling went quite easily, and both de Bruijn and Minesweeper were done surprisingly fast. I do, however, miss MiniZinc's sum construct, which would make some things easier (e.g. summing the neighbours in minesweeper.rb).

The execution times of the models are approximately the same as for the corresponding MiniZinc models with the Gecode/flatzinc solver, which is normally quite fast. The big exception among these examples is coins_grid, which seems to be slow for constraint programming systems but fast with linear programming systems, e.g. the MiniZinc solvers ECLiPSe/ic and MiniZinc/mip.

References to the problems etc are in the header of the model.

January 06, 2009

Map coloring problem: Lichtenstein

In The Chromatic Number of Liechtenstein, bit-player asked (2008-10-28) the following about coloring the map of Liechtenstein:

It seems that Liechtenstein is divided into 11 communes, which emphatically do not satisfy the connectivity requirement of the four color map theorem. Just four of the communes consist of a single connected area (Ruggell, Schellenberg and Mauren in the north, and Triesen in the south). All the rest of the communes have enclaves and/or exclaves.


In the map above, each commune is assigned its own color, and so we have an 11-coloring. It’s easy to see we could make do with fewer colors, but how many fewer? I have found a five-clique within the map; that is, there are five communes that all share a segment of border with one another. It follows that a four-coloring is impossible. Is there a five-coloring? What is the chromatic number of Liechtenstein?

I wrote a MiniZinc model for this minimization problem: lichtenstein_coloring.mzn.

The model has two variable arrays:
* color_communes: the color of the 11 communes
* color: the color of the 29 en-/exclaves

Objective: minimize the number of colors used.
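The modelling idea - one color variable per region, different colors across shared borders, and minimizing the number of colors used - can be shown in miniature in Python. The toy graph below (a 5-clique plus one pendant vertex) is my own stand-in for illustration, not the Liechtenstein commune data:

```python
from itertools import combinations, product

def chromatic_number(n, edges):
    """Smallest k such that the n vertices can be colored with k colors
    and no edge joins two vertices of the same color."""
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            if all(coloring[u] != coloring[v] for u, v in edges):
                return k
    return n

# a 5-clique (vertices 0..4) plus one extra vertex attached to vertex 4
edges = list(combinations(range(5), 2)) + [(4, 5)]
```

Here chromatic_number(6, edges) returns 5: the 5-clique forces at least five colors, mirroring the argument in the quoted post.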

The Gecode/flatzinc solver gives the following solution in less than 1 second, which shows that 5 different colors (n_colors) are sufficient. The model allows for up to 11 different colors, hence the large color numbers.

n_colors: 5
color_communes: [1, 1, 10, 8, 8, 1, 9, 9, 8, 10, 11]
color: [1, 1, 1, 1, 1, 1, 10, 10, 8, 8, 8, 8, 8, 1, 9, 9, 9, 9, 9, 9, 8, 10, 10, 11, 11, 11, 11, 11, 11]

Optimal solution found.

runtime: 290
solutions: 4
propagators: 1235
propagations: 1992711
failures: 1045
clones: 1046
commits: 2757
peak memory: 1414 KB

Times for other MiniZinc solvers:
* MiniZinc's flatzinc: 2 seconds
* MiniZinc's fdmip: 2 seconds
* ECLiPSe's ic: 4 seconds
* tinifz: 5.5 seconds

Also see
The chromatic number of Lichtenstein by Michi (from whom I borrowed the edges).

January 05, 2009

Tom Schrijvers: "Monadic Constraint Programming" (in Haskell)

Tom Schrijvers presents in the blog post Monadic Constraint Programming a draft version of the paper Monadic Constraint Programming written by him, Peter Stuckey, and Philip Wadler:

A constraint programming system combines two essential components: a constraint solver and a search engine. The constraint solver reasons about satisfiability of conjunctions of constraints, and the search engine controls the search for solutions by iteratively exploring a disjunctive search tree defined by the constraint program. In this paper we give a monadic definition of constraint programming where the solver is defined as a monad threaded through the monadic search tree. We are then able to define search and search strategies as first class objects that can themselves be built or extended by composable search transformers. Search transformers give a powerful and unifying approach to viewing search in constraint programming, and the resulting constraint programming system is first class and extremely flexible.

Prototype code in Haskell can be downloaded here.

December 29, 2008

Temporal reasoning model in MiniZinc

temporal_reasoning.mzn is a MiniZinc model of temporal reasoning. The example is from Krzysztof R. Apt's Principles of Constraint Programming, page 23ff:

The meeting ran non-stop the whole day.
Each person stayed at the meeting for a continuous period of time.
The meeting began while Mr Jones was present and finished
while Ms White was present.
Ms White arrived after the meeting had begun.
In turn, Director Smith was also present, but he arrived after Jones had left.
Mr Brown talked to Ms White in the presence of Smith.
Could Jones and White possibly have talked during this meeting?

The coding was inspired by the ECLiPSe (Prolog) model in Apt's presentation of chapter 2, ch2-sli.pdf.gz (gzipped PDF file), slides 15ff.
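The flavour of the model can be conveyed with plain interval arithmetic in Python: pick concrete time points for each person's interval, check every stated constraint, and see whether Jones and White can overlap. The specific numbers below are my own illustrative witness, not values from Apt's model:

```python
# intervals as (arrival, departure) on an abstract time axis
meeting = (1, 7)
jones   = (0, 4)   # present when the meeting began
white   = (2, 8)   # arrived after it began, present when it finished
smith   = (5, 8)   # arrived after Jones had left
brown   = (5, 8)   # talked to White in the presence of Smith

def overlap(a, b):
    """Nonempty intersection of two intervals."""
    return max(a[0], b[0]) < min(a[1], b[1])

# check each constraint from the puzzle statement
assert jones[0] <= meeting[0] <= jones[1]   # began while Jones was present
assert white[0] <= meeting[1] <= white[1]   # finished while White was present
assert meeting[0] < white[0]                # White arrived after it began
assert jones[1] < smith[0]                  # Smith arrived after Jones left
assert overlap(brown, white) and overlap(brown, smith) and overlap(white, smith)

# the question: could Jones and White have talked?
print(overlap(jones, white))  # True
```

Since one scenario consistent with all the facts has Jones and White overlapping, the answer to the puzzle's question is yes - they could possibly have talked.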

Also see My MiniZinc page for other MiniZinc models.

Welcome to My Constraint Programming Blog

Welcome to My Constraint Programming Blog!

This is an extension of my "normal" Swedish blog hakank.blogg and will contain news etc. about constraint programming and related paradigms. It will also link to my newly written constraint programming models.

As stated (in Swedish) in Constraint programming-nyheter samt nya MiniZinc-modeller (~ "Constraint programming news and new MiniZinc models"), the target group for this kind of thing (especially in Swedish) is quite small. Hence this new blog, in English.

Some links as introduction to what I have done so far:
* My Constraint Programming page

* My MiniZinc page
* My JaCoP page
* My Choco page

The latter three pages contain information about the systems and some models. I regard the MiniZinc page as my main constraint programming page, since MiniZinc is - at the time of writing - my favorite system.

If you know Swedish, you may also read the Constraint programming category at hakank.blogg.


And we start with some new MiniZinc models written this weekend.

Three Rosetta Code programs, just to test the limits of MiniZinc.

* 99_bottles_of_beer.mzn: 99 bottles of beer
* knapsack_problem.mzn: Knapsack problem
* pyramid_of_numbers.mzn: Pyramid of numbers

And then an operations research model.
* sportsScheduling.mzn: Sports scheduling, using channeling for symmetry breaking. I didn't find out how to generate the channeling matrix automatically, so a Perl one-liner is used instead (contained in the model). It was inspired by the Essence' model sportsScheduling.eprime from the Tailor (Minion) distribution.