November 01, 2014

Decision Management Community November 2014 Challenge: Who killed Agatha?

This blog post was inspired by Decision Management Community November 2014 Challenge: Challenge Nov-2014 Decision model 'Who killed Agatha?'. The original text - and the one linked to from the Challenge page - is here.

From DM Community Challenge Nov-2014:
Someone in Dreadsbury Mansion killed Aunt Agatha.
Agatha, the butler, and Charles live in Dreadsbury Mansion, and
are the only ones to live there. A killer always hates, and is
no richer than his victim. Charles hates noone that Agatha hates.
Agatha hates everybody except the butler. The butler hates everyone
not richer than Aunt Agatha. The butler hates everyone whom Agatha hates.
Noone hates everyone. Who killed Agatha?
A model for this problem is actually one of the first I implement whenever I learn a new Constraint Programming (CP) system, since my approach uses some features that are important to test (they are explained in some detail below):
  • reification: "reasoning about constraints"
  • matrix element: how to index a matrix with decision variables
The same general approach is implemented in most of the CP systems that I've tested so far; see below for the full list with links.

All the encodings take the same approach: defining two binary 3x3 matrices of decision variables:
  • a "hates" matrix, i.e. who hates whom
  • a "richer than" matrix: who is richer than whom
And a Killer decision variable with range 1..3 (Agatha=1, Butler=2, and Charles=3). In some encodings (such as the one below), Victim is also a decision variable, but we know from the start that Agatha is the victim, so this decision variable can be skipped.

The constraints are then stated over these decision variables: the two matrices plus the Killer (and Victim) variables.

First I describe the MiniZinc model in detail, and after that follows a discussion of the solution and why the model returns 8 solutions (all with the same killer: Agatha).

The MiniZinc model

My original MiniZinc model is here: who_killed_agatha.mzn.

The encoding shown below is slightly altered for simpler presentation.

Initialization of constants and domains

The following defines the dimension of the matrices (3x3) and also the domain - the possible values - of the killer and victim variables (1..3).
int: n = 3;
set of int: r = 1..3;
Here we define the constants agatha, butler, and charles. The values are fixed to the integers 1..3 since the solvers used handle finite domains in terms of integers. (There are exceptions to this. In short: finite domains over integers are the most common in CP; some systems also support set variables and/or float decision variables.)
% the people involved
int: agatha  = 1;
int: butler  = 2;
int: charles = 3;

Decision variables

The two decision variables the_killer and the_victim both have the domain 1..3, i.e. the set {agatha, butler, charles}.
var r: the_killer;
var r: the_victim;
The 3x3 entries in the two matrices hates and richer are binary decision variables: 1 represents true, 0 represents false.
array[r,r] of var 0..1: hates;
array[r,r] of var 0..1: richer;
The following statement just states that the CP solver should use its default search method.
solve satisfy;
Aside: There is a range of different search heuristics available for most CP solvers that can speed up more complex CSPs significantly, e.g. for an array x of decision variables one could state:
   solve :: int_search(x, first_fail, indomain_split, complete);
but for this simple problem we just settle for the default.


Next is the constraint section, where all the requirements (constraints) of the problem are specified. The constraints follow the same order as the problem description. In contrast to the MiniZinc version on my MiniZinc site (the link above), I have here separated each constraint into its own section to emphasize the specific constraint.

Constraint: A killer always hates, and is no richer than his victim.

Here we see an example of a "matrix element" constraint, i.e. a decision variable (the_killer) used as an index into a matrix of decision variables, e.g.
     hates[the_killer, the_victim] = 1
[Aside: In MiniZinc this can be done in this most syntactically pleasing ("natural") way, though most of the other CP systems cannot handle this syntax, so an alternative syntax must be used. However, most CP systems have a syntax for the one-dimensional element constraint, which is then stated as
    element(Y, X, Z)
which corresponds to
    X[Y] = Z,
where X is an array and both Y and Z can be decision variables. The element constraint is central to CP and is one of the features that separates it from traditional LP/MIP modeling systems.] The first constraint
   hates[the_killer, the_victim] = 1
simply means that the killer hates the victim, and the second that the killer is not richer than the victim.
   % A killer always hates, and is no richer than his victim. 
   hates[the_killer, the_victim] = 1 /\
   richer[the_killer, the_victim] = 0
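To make the element idea concrete, here is a tiny Python sketch (my own illustration, not part of the model): when the index Y and the value Z of X[Y] = Z are decision variables with finite domains, a solver keeps exactly the index/value pairs consistent with the array.

```python
# A toy illustration of the element constraint X[Y] = Z, where the
# index Y and the value Z are decision variables with finite domains.
# A CP solver would prune exactly the pairs that this enumeration keeps.
X = [3, 1, 4]                  # a known array (could also hold variables)
dom_Y = {0, 1, 2}              # domain of the index variable Y
dom_Z = {1, 3}                 # domain of the value variable Z
consistent = [(y, X[y]) for y in sorted(dom_Y) if X[y] in dom_Z]
print(consistent)              # [(0, 3), (1, 1)]
```

Here the pair (2, 4) is pruned because 4 is not in the domain of Z; the remaining pairs are the only supports for the constraint.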

The concept of richer

The only relations we are using here are hates and richer, and we must take care of the meaning of these concepts.

Hates: Regarding the concept of hatred there is no logic restricting how it can be used: A and B can both hate each other, or one of them can hate the other, or neither hates the other. Note that A can hate him/herself.

Richer: The concept of richer is a completely different matter, however. There is a logic involved in this relation:
  • if A is richer than B, then B cannot be richer than A
  • A cannot be richer than him/herself
Realizing that the richer relation is special is important for this model. Without it, there would be many more solutions (256 instead of 8), though all these 256 solutions point to the same killer: Agatha. (See below for an analysis of the 8 different solutions.)

As mentioned above, we don't need to - and cannot - do a similar analysis on (the semantic meaning of) the hate relation.

See below A note on the richer relation for a comment on the assumption that either A is richer than B or B is richer than A.
   % define the concept of richer

   % no one is richer than him-/herself
   forall(i in r) (
      richer[i,i] = 0
   )

   % (contd...) if i is richer than j then j is not richer than i
   forall(i, j in r where i != j) (
      richer[i,j] = 1 <-> richer[j,i] = 0
   )
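As a sanity check (a Python sketch of my own, not part of the model): with these two rules, each of the three unordered pairs of people has exactly one free direction, so there are 2^3 = 8 possible richer matrices, which a brute-force enumeration confirms.

```python
from itertools import product

# Count 3x3 binary "richer" matrices satisfying the two rules above:
# zero diagonal, and for i != j exactly one of richer[i][j], richer[j][i] is 1.
count = 0
for bits in product((0, 1), repeat=9):
    r = [bits[3 * i:3 * i + 3] for i in range(3)]
    if any(r[i][i] == 1 for i in range(3)):
        continue  # no one is richer than him-/herself
    if any(r[i][j] + r[j][i] != 1 for i in range(3) for j in range(3) if i != j):
        continue  # exactly one direction per pair
    count += 1
print(count)  # 8 = 2 ** 3
```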

Constraint: Charles hates noone that Agatha hates.

Here again, reification is handy. In pseudo code:
  FOR EACH person I in the house:
     IF Agatha hates I THEN Charles doesn't hate I
  % Charles hates noone that Agatha hates. 
   forall(i in r) (
      hates[charles, i] = 0 <- hates[agatha, i] = 1
   )

Constraint: Agatha hates everybody except the butler

Here we simply state three hate facts.

It is important not to forget that Agatha can hate herself. Missing this fact in the model would yield that either Agatha or Charles could be the killer.
   % Agatha hates everybody except the butler. 
   hates[agatha, charles] = 1  /\
   hates[agatha, agatha] = 1 /\
   hates[agatha, butler] = 0

Constraint: The butler hates everyone not richer than Aunt Agatha.

Same as above, we use reification to handle this:
   FOR EACH person I in the house
     IF I is not richer than Agatha THEN Butler hates I
which is implemented as
   % The butler hates everyone not richer than Aunt Agatha. 
   forall(i in r) (
     hates[butler, i] = 1 <- richer[i, agatha] = 0
   )

Constraint: The butler hates everyone whom Agatha hates.

Same reasoning with reification as above.
   % The butler hates everyone whom Agatha hates. 
   forall(i in r) (
      hates[butler, i] = 1 <- hates[agatha, i] = 1
   )

Constraint: Noone hates everyone.

Here we count - for each person - the number of 1's (i.e. true) in that person's row of the hates matrix, and ensure that the count is at most 2 (i.e. not everyone is hated).
   % Noone hates everyone. 
   forall(i in r) (
     sum(j in r) (hates[i,j]) <= 2
   )

Who killed Agatha?

As mentioned above, this is not really needed since we could have hard coded the_victim to agatha without changing anything.
   % Who killed Agatha? 
   the_victim = agatha
To summarize: This model uses a couple of important concepts in Constraint Programming:
  • decision variables with specific (finite) domains
  • reifications: implication and equivalence between constraints
  • the element constraint (here in the form of the more complex variant: matrix element)

Solution and analysis

There are in total 8 different solutions for this MiniZinc model, all stating that Agatha is the killer. (Being able to get all solutions to a combinatorial problem is an excellent feature of Constraint Programming.)

The reason this model gives more than one solution is that it is - in my interpretation of the problem - under-constrained: there is not enough information about certain relations in the hates and richer matrices to yield a unique solution.

Below are the values of the hates and richer matrices after all constraints have been applied, but before search (i.e. before traversing the search tree and assigning all the decision variables to produce a solution), together with the killer variable.

The entries with "0..1" are the decision variables that are not constrained (decided) before search. (Note: Not all CP solvers can show the inferred variable domains before search. The table below is from my Picat model, slightly altered: who_killed_agatha.pi)
    killer = 1

    hates:
    Agatha : Agatha : 1     Butler : 0        Charles: 1
    Butler : Agatha : 1     Butler : 0        Charles: 1
    Charles: Agatha : 0     Butler : 0..1     Charles: 0

    richer:
    Agatha : Agatha : 0     Butler : 0        Charles: 0..1
    Butler : Agatha : 1     Butler : 0        Charles: 0..1
    Charles: Agatha : 0..1  Butler : 0..1     Charles: 0
These undecided variables concern whether:
  • Charles hates Butler (or not)
  • Agatha is richer than Charles (or not)
  • Butler is richer than Charles (or not)
  • Charles is richer than Agatha (or not)
  • Charles is richer than Butler (or not)
We see that:
  • The killer has been assigned to 1 (= Agatha).
  • Many (namely 5) of the variables involving Charles are undecided before search, i.e. the constraints alone cannot determine whether they are true or not. This is perhaps not too surprising since most constraints in the model - and in the problem statement - involve only Agatha and the butler.
Also, two pairs of these (under-constrained) Charles variables cannot both be true at the same time in the same solution. In each solution, one variable of each pair must be true and the other false:
  • "Agatha is richer than Charles" and "Charles is richer than Agatha"
  • "Butler is richer than Charles" and "Charles is richer than Butler"
This setting is basically the same as having 5 binary variables, V1..V5, where V1 and V2 are xored (i.e. exactly one of them must be true), V3 and V4 are xored, and V5 is free.
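That counting argument can be sketched in a few lines of Python (just the combinatorics, not the model itself):

```python
from itertools import product

# Five free binary variables; V1 xor V2 and V3 xor V4 must hold, V5 is free.
# That gives 2 (for the first pair) * 2 (second pair) * 2 (V5) combinations.
count = sum(1 for v in product((0, 1), repeat=5)
            if (v[0] ^ v[1]) == 1 and (v[2] ^ v[3]) == 1)
print(count)  # 2 * 2 * 2 = 8
```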

Thus there are 2 x 2 x 2 = 8 different solutions.

This concludes the analysis.

A note on the "richer" relation

Nothing rules out that A can be as rich as B (i.e. that neither is richer than the other). However, in this simple model, we assume a stricter variant: that either A is richer than B, or B is richer than A. If we change the equivalence relation
   richer[i,j] = 1 <-> richer[j,i] = 0
(which means:
   IF A is richer than B THEN B is not richer than A
   IF B is not richer than A THEN A is richer than B)
to an implication
    richer[i,j]  = 1 -> richer[j,i] = 0
(IF A is richer than B THEN B is not richer than A)
then it doesn't change the principal solution, but we would get 18 solutions instead of 8, all pointing to the same killer: Agatha.
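Both counts (8 with the equivalence, 18 with the implication) can be checked by brute force. The following Python sketch is my own hypothetical re-implementation of the constraints (names like solutions and strict_richer are mine), not the MiniZinc model itself:

```python
from itertools import product

AGATHA, BUTLER, CHARLES = 0, 1, 2
PEOPLE = (AGATHA, BUTLER, CHARLES)

def solutions(strict_richer=True):
    """Enumerate killers in all assignments satisfying the puzzle.

    strict_richer=True uses the equivalence richer[i][j]=1 <-> richer[j][i]=0;
    False uses only the implication richer[i][j]=1 -> richer[j][i]=0.
    """
    sols = []
    for h in product((0, 1), repeat=9):
        hates = [h[3 * i:3 * i + 3] for i in range(3)]
        # Agatha hates everybody except the butler (including herself).
        if (hates[AGATHA][AGATHA], hates[AGATHA][BUTLER],
                hates[AGATHA][CHARLES]) != (1, 0, 1):
            continue
        # Charles hates noone that Agatha hates.
        if any(hates[AGATHA][i] and hates[CHARLES][i] for i in PEOPLE):
            continue
        # The butler hates everyone whom Agatha hates.
        if any(hates[AGATHA][i] and not hates[BUTLER][i] for i in PEOPLE):
            continue
        # Noone hates everyone.
        if any(sum(hates[i]) == 3 for i in PEOPLE):
            continue
        for r in product((0, 1), repeat=9):
            richer = [r[3 * i:3 * i + 3] for i in range(3)]
            # No one is richer than him-/herself.
            if any(richer[i][i] for i in PEOPLE):
                continue
            ok = all((richer[i][j] != richer[j][i]) if strict_richer
                     else not (richer[i][j] and richer[j][i])
                     for i in PEOPLE for j in PEOPLE if i != j)
            if not ok:
                continue
            # The butler hates everyone not richer than Aunt Agatha.
            if any(not richer[i][AGATHA] and not hates[BUTLER][i]
                   for i in PEOPLE):
                continue
            # A killer always hates, and is no richer than, his victim (Agatha).
            for killer in PEOPLE:
                if hates[killer][AGATHA] and not richer[killer][AGATHA]:
                    sols.append(killer)
    return sols

print(len(solutions(True)), len(solutions(False)))   # 8 18
print(set(solutions(True)) | set(solutions(False)))  # {0}, i.e. always Agatha
```

With the strict (equivalence) version the enumeration finds 8 solutions, with the implication version 18, and in every solution the killer is Agatha.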


Here is a list of all the encodings of the Agatha murder problem that I have implemented so far. Whenever I learn a new CP system, the Agatha problem will probably be one of the first I test.

September 28, 2013

CP2013: Jacob Feldman blogs about the CP Solver workshop

In his blog post After CP-2013 Jacob Feldman writes about the workshop CP Solvers: Modeling, Applications, Integration, and Standardization (CPSOLVERS). He summarizes the workshop in general and some of the discussions we had, and also mentions the CP Solver Catalog (that he initiated some months before the conference).

Jacob has a plea about the details of the discussions:
It’s hard for me to reproduce all answers and I want to invite all CPSOLVERS-2013 panelists and other CP practitioners to provide their answers as comments to this post.
I also agree with Jacob's end paragraphs:
Overall, I think the workshop has actually provided us with an industrial overview of CP products that was its main objective. We plan to get such overviews every 4 years – so I hope to see many authors of current and brand new CP tools at CPSOLVERS-2017.

P.S. I want to thank CP-2013 organizers for their constant support and a great organization of the conference.

September 27, 2013

Charles Prod'homme blogs about developing Choco

One week ago, Charles Prod'homme (one of the main developers of the Choco CP system) started the blog Within a Constraint Solver. His first blog post was this:
With this blog I would like to share the good and bad ideas we've had (and have) while developing our constraint solver. As we almost started from scratch, there could be articles about every part of the solver. If you don't know what Constraint Programming is, wikipedia is a good start point.

Since 2008, I've been working on Choco, a Java library for constraint programming. Articles from this blog will mostly talk about the choices made from the version 2 to version 3, but not only.
Welcome to the CP blog sphere, Charles!

A side note: Some days before Charles started his blog, I held a talk at the CP-2013 workshop CP Solvers: Modeling, Applications, Integration, and Standardization, and one of the things I mentioned was that it would be great if there were more CP bloggers to explain what Constraint Programming is and why we are so fascinated by it. I'm not sure if there is some cause and effect here or just a happy coincidence. :-)

If you are interested, here is my talk: CP Solvers/Learning Constraint Programming, and here's Charles' talk at the workshop: Choco3: an Open Source Java Constraint Programming Library.

August 08, 2013

All my public CP models are now available via GitHub

Yesterday I finalized the first round of making all my public CP models - as well as many other programs - available via GitHub. Everything is available from my site, but it is easier for others to fetch the files from a single repository instead of downloading each file separately.

Here's a summary of the directories so far. Most of the directories have the same name as on my site (with a few exceptions).


As of writing the repository has these statistics (according to GitHub):
1 author has pushed 
42 commits to all branches, excluding merges. 
On master, 5,747 files have changed and there have been 
2,417,980 additions and 1 deletions. 
The "2,417,980 additions" is the total number of lines in these files.

Let's start with the CP systems. After that follows some non-CP systems.

Constraint Programming systems (and related paradigms)

  • aimms: AIMMS system, linear/integer programming and constraint programming
  • ampl: AMPL, integer/linear programming and constraint programming
  • answer_set_programming : Answer Set Programming
  • bprolog: B-Prolog, logic programming, constraint programming, tabling, loops etc
  • choco3: Choco v3, constraint programming
  • comet: Comet, constraint programming, linear/integer programming, constraint-based local search
  • eclipse_clp: ECLiPSe CLP, Prolog, logic programming, constraint programming, loops etc
  • essence_savile_row: Essence'/Savile Row, constraint programming
  • essence_tailor: Essence'/Tailor, constraint programming
  • gecode: Gecode, constraint programming
  • gecode_r: Gecode/R, Ruby interface to Gecode
  • google_or_tools: Google or-tools, constraint programming, integer/linear programming, Java, Python, and C#
  • jacop: JaCoP and JaCoP/Scala, constraint programming
  • jsr_331: JSR-331, Java API for constraint programming
  • minizinc: MiniZinc, constraint programming. Also G12 Zinc files.
  • numberjack: Numberjack, constraint programming
  • oocalc_excel: oocalc/Excel, some few linear/integer programming models
  • oscar: OscaR, constraint programming
  • picat: Picat, constraint programming, logic programming, loops, tabling, etc
  • sicstus: SICStus Prolog, constraint programming, logic programming, loops, etc

Non-CP systems, mostly Very High Level programming languages/systems

  • apl: APL (note: index.html contains codes etc)
  • frink: Frink, high level programming language
  • j: J, array programming language (note: index.html contains code etc)
  • k: K, array programming language (note: index.html contains code etc)
  • mathematica: Mathematica, mathematical programming
  • pddl: PDDL (planning language)
  • perl6: Perl6 programming language
  • poplog: Poplog Pop-11 high-level programming language
  • sabr: SABR, constraint-based planning language
  • setlx: SETL and SetlX, high level programming language
  • unicon: Unicon/Icon, high level programming language

Non-CP systems, data mining/machine learning

  • eureqa: Eureqa/Formulize, genetic programming
  • jgap: JGAP, genetic programming
  • weka: Weka, data mining/machine learning, Java files, HTML, and data files (ARFF and CSV)
Some comments:
  • There are some duplicates, especially data files that are included in several system directories
  • I'll try to keep this repository as updated as possible, but there might be some time lag since I will probably not update each single new model.
  • There is no guarantee that models/programs will work with the latest version of a specific system. However - for some systems at least - I tend to keep up with major releases.
  • All directories include the latest index.html file from my site. Also, some directories just contain that file (and perhaps a single other file), e.g. for APL, K, J, and Mathematica.
  • This was the first round of making things available. Later on I might publish some of the code from (i.e. my "useless programs").
Hope you will enjoy it!

April 25, 2013

CP-2013 CP Solver Workshop: Survey of CP Solvers

An addendum to CP-2013 Workshop "CP Solvers: Modeling, Applications, Integration, and Standardization" (Uppsala, Sweden):
If you are an author of a CP solver, we ask you to fill out the following questionnaire (even if you do not plan to submit a presentation or attend the workshop). We plan to present a summary of all CP Solvers at this website before the start of the conference.
And here is the link to the workshop page: Workshop "CP Solvers: Modeling, Applications, Integration, and Standardization".

January 23, 2013

CP2013: Organization and Call for Papers

The site of CP2013 (The 19th International Conference on Principles and Practice of Constraint Programming, September 16-20, 2013, Uppsala, Sweden) has been updated with the following:
  • Organization: now has information about the full program committees for both the technical and the application track.
  • Call for Workshop Proposals:
    The CP 2013 Workshop Chair invites proposals for the Workshops immediately preceding the main conference in Uppsala.

    The CP workshops offer an informal venue where participants are given the opportunity to present discuss and brainstorm on new ideas, technical topics, exciting new application areas and cross-fertilization with other domains. The forums are meant as a petri dish for new ideas and emerging topics in the field of constraint programming. The topics can overspread many areas ranging from modeling, tools, techniques, algorithms, methodology, discrete and continuous optimization as well as complete and incomplete methods. Relevant areas showcasing or contributing to CP such as cloud computing, cryptography, energy, sustainability, databases, machine learning or data mining — to name just a few — are all equally encouraged.

    All workshops will take place on September 16, 2013 at the site of the main conference. Workshops can follow a half-day or a full-day format. The details of the organization of each workshop are left to its organizers
  • Call for Tutorials:
    The CP 2013 Tutorial Chair invites proposals for tutorials in Uppsala.

    Tutorials are expected to give an in-depth presentation of emerging and exciting topics that are relevant to a broad swath of the constraint programming community. Past CP conferences featured tutorials on numeral CSPs, global constraints, languages, Disaster Management, Decision Diagrams and connection with mathematical programming. Topics currently at the fore-front of research activities with successes, major advances and significant impact are particularly welcome.

    The tutorials will take place during the main technical program.

October 20, 2012

Beldiceanu and Simonis: A Model Seeker

Nicolas Beldiceanu and Helmut Simonis have written a very interesting paper about A Model Seeker, a system that finds the problem given a solution. Or, closer to the truth: it suggests the appropriate global constraints given positive data instances (solutions to a certain problem). It was presented at the technical track of CP2012 (the 18th International Conference on Principles and Practice of Constraint Programming).

The system (A Model Seeker) is described in the paper: A Model Seeker: Extracting Global Constraint Models From Positive Examples (also: Presentation and Poster from the CP2012 conference).

From the Abstract:
We describe a system which generates finite domain constraint models from positive example solutions, for highly structured problems. The system is based on the global constraint catalog, providing the library of constraints that can be used in modeling, and the Constraint Seeker tool, which finds a ranked list of matching constraints given one or more sample call patterns.
We have tested the modeler with 230 examples, ranging from 4 to 6,500 variables, using between 1 and 7,000 samples. These examples come from a variety of domains, including puzzles, sports-scheduling, packing & placement, and design theory. When comparing against manually specified “canonical” models for the examples, we achieve a hit rate of 50%, processing the complete benchmark set in less than one hour on a laptop. Surprisingly, in many cases the system finds usable candidate lists even when working with a single, positive example.
A companion technical report presents the details of the 230 tested examples: A Model Seeker Description and Detailed Results (over 1400 pages).

This is a system I really would like to test...

October 16, 2012

CP2013 (Constraint Programming Conference) will be held in Uppsala, 16 - 20 September 2013

Just a short note about the next Constraint Programming Conference (CP2013). It will be in Uppsala, Sweden. From SweConsNet:
CP'13, the 19th International Conference on the Principles and Practice of Constraint Programming, will be held at Uppsala University on 16 - 20 September 2013.
More details will come.

I'm quite excited about this since it's in my home country, which makes everything easier. At last I will have a chance to meet all the nice, smart and interesting people that I have read and read about (and in some cases have had email conversations with).

See a list of the former CP conferences: International Conference on Principles and Practice of Constraint Programming. The last one finished some days ago: CP2012.

May 16, 2012

SweConsNet Workshop 2012 - including my presentation "Comparison of >= 14 CP systems - and counting"

This Monday and Tuesday I visited Örebro (Sweden), attending the SweConsNet Workshop 2012 (Monday), co-located with the Swedish Artificial Intelligence Society Workshop 2012 (program; Monday and Tuesday). SweConsNet is the network for Sweden-based researchers and practitioners of Constraint Programming and is a Special Interest Group of SAIS.

SweConsNet Workshop 2012

One of the greatest joys of attending these workshops is to meet and talk to all the clever and nice CP people in Sweden/Denmark/Norway. But of course, the talks are also very interesting and yield much inspiration and many thoughts. Here's the program (the page SweConsNet Workshop 2012 has links to most presentations):
  • Mats Carlsson, SICS. On the Reification of Global Constraints.
    Joint Work with: Nicolas Beldiceanu (EMN, Nantes, France), Pierre Flener and Justin Pearson (Uppsala University).
  • Federico Pecora: Constraint reasoning and motion planning in the SAUNA project (This is a joint talk with SAIS).
  • Uwe Köckemann, Federico Pecora and Lars Karlsson. Towards Planning With Very Expressive Languages via Problem Decomposition Into Multiple CSPs (This is a joint talk with SAIS).
  • Invited talk: Associate professor Rune Møller Jensen from the IT University of Copenhagen will talk about AI and CP methods for optimizing liner shipping operations. (This is a joint talk with SAIS).
  • Krzysztof Kuchcinski, Lund University. Operation Scheduling, Binding and Data Routing for Run-Time Reconfigurable Architecture.
  • Håkan Kjellerstrand. Comparison of >= 14 CP systems - and counting.
  • Stefano Di Alesio, Simula School of Research and Innovation AS, Norway. Testing Deadline Misses for Real-Time Systems Using Constraint Optimization Techniques.
  • SAIS Masters Thesis Award Winner Rodrigo Gumucio. (This is a joint talk with SAIS).
  • Maria Andreina Francisco Rodriguez, Uppsala University. Consistency of Automaton-Induced Constraint Decompositions.
    Joint work with: Pierre Flener and Justin Pearson (Uppsala University).
  • Roberto Castañeda Lozano, SICS & KTH - ICT. Code Generation is a Constraint Problem.
    Joint work with: Mats Carlsson (SICS), Frej Drejhammar (SICS), Christian Schulte (KTH - ICT & SICS).
The two talks that I liked most this year were:
  • Mats Carlsson's talk about "On the Reification of Global Constraints" which will probably/hopefully have a deep impact on the CP field
  • and Roberto Castañeda Lozano about "Code Generation is a Constraint Problem", both for the interesting problem and the excellent presentation.
Thanks to Christian Schulte, who together with Federico Pecora and Lars Karlsson arranged these great days.

My SweConsNet talk: "Comparison of >= 14 CP systems - and counting"

My own talk at the workshop was Comparison of >= 14 CP systems - and counting (PowerPoint). The purpose of it was twofold:
  • Point out that my models - and especially the page Common constraint programming problems - can be used to compare different CP systems, since quite a few implement the same problem using the same ("as similar as possible") approach. As of now there are:
    • 276 problems that are implemented in >= 2 systems (in total 1260 implemented models)
    • 73 problems in >= 6 systems
    • 44 problems in >= 8 systems
    • 15 problems in >= 10 systems
  • Mention some strengths/weaknesses - or rather: my likes/dislikes - of these 14 tested systems, especially for a person learning a CP system or learning CP in general. One of my running themes is that the syntax for the most common things in modeling (e.g. the element constraint and reifications) should be as easy as possible. The links are to my model pages for each system.
After the "Thanks" slide (which concluded the talk), there are some more slides for the Q/A session or as reserves.

Note: This is kind of an extension of my SweConsNet Workshop 2009 talk where I presented my take on the then tested 6 CP systems. See Report from SweConsNet2009, including my presentation.

Manufacturing Cell Design Problem (MCDP): My first Constraint Programming related academic papers

Some days ago I was told that the journal paper I have co-authored about the Manufacturing Cell Design Problem (MCDP, see below) has been accepted for publication. Also, some weeks ago a short conference paper on the same topic was accepted. My part in both papers was that I created a couple of MiniZinc models (first the standard formulation and then some others using different approaches) and ran a large number of benchmarks on a couple of FlatZinc solvers. This is really fun, since these papers are my first academic papers related to Constraint Programming.

Since the papers are not yet published/presented, I cannot reveal much more than the following. After the publications, I will blog more.

The journal paper

The journal paper is

Ricardo Soto, Hakan Kjellerstrand, Orlando Durán, Broderick Crawford, Eric Monfroy, Fernando Paredes: Cell formation in group technology using constraint programming and Boolean satisfiability

Published in the journal Expert Systems with Applications (ScienceDirect page, "In Press, Corrected Proof")

Cell formation consists in organizing a plant as a set of cells, each of them containing machines that process similar types or families of parts. The idea is to minimize the part flow among cells in order to reduce costs and increase productivity. The literature presents different approaches devoted to solve this problem, which are mainly based on mathematical programming and on evolutionary computing. Mathematical programming can guarantee a global optimal solution, however at a higher computational cost than an evolutionary algorithm, which can assure a good enough optimum in a fixed amount of time. In this paper, we model and solve this problem by using state-of-the-art constraint programming (CP) techniques and Boolean satisfiability (SAT) technology. We present different experimental results that demonstrate the efficiency of the proposed optimization models. Indeed, CP and SAT implementations are able to reach the global optima in all tested instances and in competitive runtime.

Keywords: Manufacturing cells; Machine grouping; Constraint programming; Boolean satisfiability

Conference paper

The short conference paper is the following (with almost the same authors as the journal paper)

Ricardo Soto, Hakan Kjellerstrand, Juan Gutiérrez, Alexis López, Broderick Crawford, and Eric Monfroy: Solving Manufacturing Cell Design Problems using Constraint Programming

for the conference IEA/AIE 2012 (International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, 2012, Dalian, China)

A manufacturing cell design problem (MCDP) consists in creating an optimal production plant layout. The production plant is composed of cells which in turn are composed of machines that process part families of products. The goal is to minimize part flow among cells in order to reduce production costs and increase productivity. In this paper, we focus on modeling and solving the MCDP by using state-of-the-art constraint programming (CP) techniques. We implement different optimization models and we solve it by using two solving engines. Our preliminary results demonstrate the efficiency of the proposed implementations, indeed the global optima is reached in all instances and in competitive runtime.

More about MCDP

The Manufacturing Cell Design Problem is a kind of clustering problem where the objective is to cluster machines that belong to the same part families as well as possible. For a little more about the problem, see the following (PowerPoint) presentation by Mingang Fu, Lin Ben, and Kuowei Chen: Manufacturing Cell Design - Problem Formulation.

March 26, 2012

My talk about Constraint Programming at Google (Paris)

The presentation can be downloaded here: google_talk_20120323.ppt.

In late January this year, I was invited by Laurent Perron - head of the or-tools group - to talk about my view on Constraint Programming and my experience with the or-tools system (I have done quite a few models using the Python, Java, and C# interfaces).

The talk was this Friday (March 23) at Google's Paris office. It was a lovely day, but unfortunately I got a common cold the day before, so it was a little hard to enjoy all the things Paris can offer.

Friday started with Laurent and me just talking about CP in general and or-tools in particular, which was really fun and interesting. Later on we were joined by two other guests: Charles Prud'homme and Nicolas Beldiceanu, both from Ecole des Mines de Nantes, and it was great talking with them as well and, above all, listening when they discussed various CP things.

The Google office in Paris was very impressive, with very high ceilings, and seemed to be built so that one could easily get lost (though none of us guests got completely lost).

At 14:00 I started the talk in front of an audience of about 20 engineers at the Google office (plus the two guests from Ecole des Mines de Nantes), and I think it went quite well considering the cold and all. It was recorded for internal use at Google. I don't know how public it will be, but I will blog about it when it has been edited etc. After the 50-minute talk there was a little Q/A session.

Thanks Laurent for the invitation and a memorable day.

Little more about the talk

The talk was aimed at programmers who don't know very much about Constraint Programming, and I especially wanted to convey my own fascination with CP using this agenda:
  • Introducing the principles of CP (very simplified)
  • Showing the declarativeness of CP with some code in the high-level G12 MiniZinc and then in or-tools Python, C#, and sometimes Java.
  • The basic principle of propagation of constraints and domains is shown via a very simple 4x4 Sudoku problem.
  • After that, some of the - IMHO - most fascinating concepts in CP modeling were presented:
    • Global constraints
    • Element constraint
    • Reification
    • Bi-directedness
      Note: After the talk Nicolas Beldiceanu commented that this is more known as "reversibility" in the Prolog world.
    • 1/N/All solutions
    • Symmetry breaking
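
To give a flavor of the 4x4 Sudoku example used to show propagation, here is a minimal MiniZinc sketch. This is my reconstruction of the idea, not the exact model from the talk:

include "globals.mzn";
array[1..4,1..4] of var 1..4: x;
% rows, columns, and the four 2x2 blocks must all be distinct
constraint forall(i in 1..4) ( alldifferent([x[i,j] | j in 1..4]) );
constraint forall(j in 1..4) ( alldifferent([x[i,j] | i in 1..4]) );
constraint forall(r in {0,2}, c in {0,2}) (
  alldifferent([x[r+i,c+j] | i,j in 1..2])
);
solve satisfy;

With a few given clues added, one can watch the domains of the remaining cells shrink by propagation alone.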
Here is the talk: google_talk_20120323.ppt.

I would like to thank the following for various degrees of comments, suggestions, and encouragement regarding the presentation:
  • Magnus Bodin
  • Carl Mäsak
  • Mikael Lagerkvist
  • Christian Schulte
  • Laurent Perron
  • Alastair Andrew
And a special thanks to Nikolaj van Omme for his very detailed comments.

March 14, 2012

Tom Schrijvers, Guido Tack Search Combinators - paper and implementation

A paper and an implementation of Search Combinators - a framework for defining application-tailored search strategies - are now available.


The paper is:
Tom Schrijvers, Guido Tack, Pieter Wuille, Horst Samulowitz, Peter J. Stuckey: Search Combinators (ArXiv). Abstract:
The ability to model search in a constraint solver can be an essential asset for solving combinatorial problems. However, existing infrastructure for defining search heuristics is often inadequate. Either modeling capabilities are extremely limited or users are faced with a general-purpose programming language whose features are not tailored towards writing search heuristics. As a result, major improvements in performance may remain unexplored. This article introduces search combinators, a lightweight and solver-independent method that bridges the gap between a conceptually simple modeling language for search (high-level, functional and naturally compositional) and an efficient implementation (low-level, imperative and highly non-modular). By allowing the user to define application-tailored search strategies from a small set of primitives, search combinators effectively provide a rich domain-specific language (DSL) for modeling search to the user. Remarkably, this DSL comes at a low implementation cost to the developer of a constraint solver.
The article discusses two modular implementation approaches and shows, by empirical evaluation, that search combinators can be implemented without overhead compared to a native, direct implementation in a constraint solver.


An implementation using MiniZinc and Gecode is available from Gecode's FlatZinc page. The README file describes the tools as:
These two tools [minizinc-to-minizinc pre-compiler and FlatZinc interpreter], together with the G12 mzn2fzn translator, comprise a complete toolchain for solving MiniZinc models using search combinators. The pre-compiler translates a slightly extended version of MiniZinc to standards-compliant MiniZinc. The FlatZinc interpreter was modified to understand search combinators expressed as annotations.

In order to use the tools, you will need the standard mzn2fzn compiler from the G12 MiniZinc distribution.


Two examples are included in the distribution: golomb.mzn and radiation.mzn. golomb.mzn is shown here, where the specific changes have been marked:
include "globals.mzn";
include "searchcombinators.mzn";

int: m;
int: n = m*m;

array[1..m] of var 0..n: mark;
array[1..(m*(m-1)) div 2] of var 0..n: differences =
    [ mark[j] - mark[i] | i in 1..m, j in i+1..m];

constraint mark[1] = 0;
constraint forall ( i in 1..m-1 ) ( mark[i] < mark[i+1] );
constraint alldifferent(differences);

% Symmetry breaking
constraint differences[1] < differences[(m*(m-1)) div 2];

solve :: dicho(print(mark,int_search(mark,input_order,assign_lb)),
The result using the included data file golomb-10.dzn (m=10) is
{0, 1, 3, 7, 12, 20, 30, 44, 65, 80}
{0, 1, 3, 11, 17, 29, 36, 51, 56, 60}
{0, 1, 6, 10, 23, 26, 34, 41, 53, 55}
I have just started to experiment with this and might return with a longer report.

March 04, 2012

Talk at Google, Paris (France): My view on Constraint Programming

I'm very honored to announce that I have been invited by Laurent Perron (Google or-tools) to talk about my view on Constraint Programming at Google, Paris later this month.

The talk will be about Constraint Programming and what is so fascinating about it, including - of course - examples in or-tools (which I have tested and modeled in for the Python, Java - and now lately - C# interfaces). There will also be more general views about CP and perhaps comparisons between different CP systems. The audience will be the engineers at Google (Paris) which makes it an extra challenge (and fun). It will also be really great to meet the or-tools group.

After the event I will blog about the details, including publishing the slides.

Also see: My or-tools page.

March 06, 2011

A matching problem, a related seating problem, and some other seating problems

The INFORMS blog challenge last month (February) was O.R. and Love. The entry by the Capgemini O.R. Blog Team was Cupids with Computers, where a bunch of OR'ers (including me) were matched according to some - not exactly stated - compatibility scores. It is both interesting and fun to read. Also see their companion blog post Two Deadly Sins, which discusses some of the problems with matching.

I was curious how well a Constraint Programming system like MiniZinc would do on this matching problem so I wrote the model or_matching.mzn.

And it did quite well: it (Gecode/fz) solves the problem in 0.1 seconds (with 191 failures). Some comments about the model:
  • Since few CP systems have good support for floats, I converted the scores to integers (×10), i.e. 3.7 is represented as 37, etc. However, this can lead to some errors since I assume that the scores are rounded.
  • It also identifies the left-out person, which is just for fun.
The result is the following matchings (slightly edited):
Total score: 519

P1 P2 Score
 1  9: 92
 5 14: 69 (tie)
10 18: 69 (tie)
 8 16: 59
 2 19: 55
 3 12: 52
 6 11: 44
13 15: 42
 4  7: 37

Left out: 17
One reason I modeled the problem was to see if it was possible to identify the OR'ers, or at least myself, and with some assumptions I have some guesses. The assumption is that the matchings in the blog post are listed in order of decreasing score.

My model includes a symmetry breaking constraint which places the person with the lower index in the left position of the pair, which means that I can only identify a pair, not the individuals in it. Also, the second and third pairs have a tie score, so these are even harder to guess.

Ranking My guess (pair)
- Anna Nagurney and Laura McLay: 1,9
- Ian Frommer and Paul Rubin: 5,14 or 10,18 (the score is a tie)
- Michael Trick and Tallys Yunes: 5,14 or 10,18 (ibid.)
- Larry D'Agostino and Robert Randall: 8,16
- David Smith and Thiago Serra: 2,19
- Francisco Marco-Serrano and Nicola Petty: 3,12
- John Angelis and Michael Watson: 6,11
- John Poppelaars and Patricia Randall: 13,15
- Hakan Kjellerstrand and Phillip Mah: 4,7

- Shiva Subramanian (the leftout): 17

So, based on this, I guess that I am either #4 or #7.

Let us further investigate this. In Two Deadly Sins there is some smaller score matrices where I happen to be named:
 Hakan John Patricia Phillip Shiva
  -     3.7   2.2     2.2     1.3   Hakan
  -      -    4.4     2.2     1.8   John
  -      -    -       4.2     2.7   Patricia
  -      -    -       -       2.2   Philip
  -      -    -       -       -     Shiva
Now assume that this is the same scores and names as in the main matrix. My (Hakan's) scores for John, Patricia, Phillip, and Shiva are 3.7, 2.2, 2.2, and 1.3 respectively, and these values can be found for #4 but not for #7. There is just one score of 1.3, and it is between #4 and #17, so we might draw the conclusion that #17 is Shiva.

However, since Hakan is #4 and is matched to #7, this would mean that Philip should be #7 if my matchings are correct. But that can't be, since the Hakan-Philip score in the small matrix (2.2) doesn't match the 3.7 score between #4 and #7. Hmmm.

OK. If we skip my ranking and instead look at the score matrices, we can see that there is one 1.8 score: between John and Shiva in the small score matrix, and between #7 and #17 in the big one. So #7 is John. And if #7 is John it makes more sense, since there is a score of 3.7 between Hakan (#4) and John (#7) in the big score matrix. John and Patricia have a score of 4.4, and the only 4.4 for #7 is with #13, so Patricia is #13. The last one to identify is Philip, who is #11, given the 4.2 score with Patricia.

So now we have found the following identities from the small score matrix:
Hakan: #4
John: #7
Patricia: #13
Philip: #11
Shiva: #17
And let us compare these findings with my matchings.
  • Hakan: #4. This seems to be correct. But the matching to #7 is not Philip (he is #11).
  • John: #7. Now, there are two Johns, but neither of them was attributed to #7 in my matching, so this is a plain #fail.
  • Patricia: #13. My matching matched Patricia with John Poppelaars, and this could be correct. This means that the "other John" is #15 (according to my matching).
  • Philip: #11. Well, this is the same #fail as for Hakan.
  • Shiva: #17. This seems to be correct.
Well, my method of identifying the persons with the matching seems to be quite wrong, but it was fun trying.

Back to the model and some more details. The kernel of the model is really just the following code, including some symmetry breaking constraints.

% decision variables:
% the matchings
array[1..m, 1..2] of var 1..n: x;
% the scores for the found matching
array[1..m] of var 0..max_score: s;

constraint alldifferent([x[i,j] | i in 1..m, j in 1..2]) :: domain;

constraint
  forall(i in 1..m) (
    s[i] = scores[x[i,1], x[i,2]]

    % symmetry breaking
    /\ x[i,1] <= x[i,2]
  );

% another symmetry breaking
constraint decreasing(s);
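
The kernel above has no solve item; a hedged sketch of the objective part (the name total_score is mine, not necessarily from the original model) would be:

% maximize the sum of the matched pairs' scores
var int: total_score = sum(s);
solve maximize total_score;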

I optimized this model for Gecode/fz using these two things, found by some experimentation:
  • branching on the score array s instead of the matching array x
  • the :: domain annotation on alldifferent; without it, solving takes much longer
These two make Gecode solve the problem quite fast: Gecode/fz: 0.16s, 191 failures

However, for other solvers branching on the score array s is not as good, so here are the statistics when branching on the array x instead, using the variable/value heuristics first_fail/indomain_median, which seem to give the best overall result for most solvers.
  • Gecode/fz: 4.5s, 98469 failures
  • G12/LazyFD: 2.16s
  • fzn2smt: 6.0s
  • fzntini: 16:02 minutes (I'm not sure why this is so hard for fzntini)
  • G12/fd: 10.127s, 189346 choice points
  • G12/minizinc: 10.465s, 189346 choice points
  • G12/fdmip: 10.226s, 189346 choice points
  • JaCoP/fz: 10.358s, 192957 wrong decisions
  • SICStus/fz: 18.419s, 362122 backtracks
  • ECLiPSe/ic: 48.313s
  • ECLiPSe/fd: 19.557s
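
For concreteness, such a branching is stated as a search annotation on the solve item. This is only a sketch using the array names from the model above; the exact solve item in the published model may differ:

solve :: int_search([x[i,j] | i in 1..m, j in 1..2],
                    first_fail, indomain_median, complete)
      maximize sum(s);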
Update 2011-03-08
Julien Fischer and Sebastian Brand (of the G12 team) were kind enough to point out that there is a better way of identifying the leftout:
   if n mod 2 != 0 then
     % This version was suggested by
     % Sebastian Brand and Julien Fischer.
     leftout > 0 /\
     forall(j in 1..m, k in 1..2) (
        leftout != x[j,k]
     )
   else
     leftout = 0
   endif
Below are the statistics using this version instead. Julien also pointed out that G12/fd and G12/minizinc are really the same solver, and that - for this problem - G12/fdmip doesn't make use of the MIP component, so it's the same as G12/fd. I therefore skipped these two in this test. The branching is (still) first_fail/indomain_median.
    * Gecode/fz: 3.3s, 98469 failures
    * G12/LazyFD: 1.5s
    * fzn2smt: (I stopped after 10 minutes.)
    * fzntini: 5:52 minutes
    * G12/fd: 6.4s, 107178 choice points
    * JaCoP/fz: 5.4s, 192975 wrong decisions
    * SICStus/fz: 7.5s, 362122 backtracks
    * ECLiPSe/ic: 35.5s
    * ECLiPSe/fd: 11.3s    
This change improved the times for most solvers, considerably for some. It's strange that fzn2smt was much slower, and I don't understand why. For fzn2smt I also tested different things, e.g. removing the :: domain annotation on alldifferent, but it made no difference.

Testing indomain_random
I was also curious how some solvers would do with the heuristic indomain_random, so I ran each solver three times and took the best result. Some solvers were "stable" and got exactly the same statistics each run (Gecode, G12/fd, SICStus, ECLiPSe/ic), but some got quite varying results, and for those (JaCoP and ECLiPSe/fd) all three results are shown.
    * Gecode/fz: 1.7, 27422 failures (stable)
    * G12/fd: 19.0s, 1401656 choice points (stable)
    * JaCoP/fz: 2.3s, 22178 wrong decisions
        8.5s 164599 wrong decisions
        4.1s 50575 wrong decisions
        2.3s 22178 wrong decisions
    * SICStus/fz: 3.3s, 116386 backtracks (stable)
    * ECLiPSe/ic: 34.5s (stable)
    * ECLiPSe/fd: 5.6s
Almost all of the solvers got at least one result better than before, except G12/fd, which is surprising.
End of Update 2011-03-08

2011-03-12 Yet another update
After some interesting discussions with Sebastian Brand (thank you, Sebastian!) I have included some improvements.
1) Since the calculation of the leftout doesn't contribute much to the constraint store, it has been removed as a decision variable and is just calculated in the output section:

output [
  let {
    int: leftout = max(0..n diff { fix(x[j,k]) | j in 1..m, k in 1..2 })
  } in
  % ...

2) The symmetry breaking constraint that sorts the matchings in score order has been replaced with another symmetry breaking constraint which seems to be faster. Unfortunately, there is no sorting anymore, but since it is faster for all solvers I prefer it.
  constraint forall(i in 2..m)( x[i-1,1] < x[i,1] );
3) The branching has been changed to first_fail/indomain_split

With these three changes there are now improvements for most solvers; for ECLiPSe's solvers a radical improvement (compare with the listing above):

- G12/lazy: 0.89s
- Gecode: 0.10s and 649 failures
Note: without the :: domain annotation on alldifferent it's slightly worse: 0.16s and 1898 failures.
That difference has been much larger in earlier versions.
- JaCoP: 1.4s: 3253 wrong search decisions
- SICStus: 0.29s 3291 backtracks
- Choco: 1.8s 5928 backtracks
- ECLiPSe/ic: 1.6s
- ECLiPSe/fd: 0.94s
- fzn2smt is still very slow (I contacted the fzn2smt developers about this)
- tini: 4:44min (slightly better, but still very slow)

4) Sebastian also suggested that I should test a table constraint. That is, instead of
s[i] = scores[x[i,1], x[i,2]]

we use this:
table([x[i,1], x[i,2], s[i]], scores_table)

This also requires another representation of the score matrix:

scores_table = array2d(1..n*n, 1..3, [
  1, 1, 00,
  1, 2, 30,
  1, 3, 10,
  1, 4, 33,
  1, 5, 49,
  % ....
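
The table constraint is then posted once per matched pair; a sketch using the names above:

constraint
  forall(i in 1..m) (
    table([x[i,1], x[i,2], s[i]], scores_table)
  );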

Here is the result of using this table approach (together with the previous changes):

- G12/fd: 2.0s, 708 choice points (slower)
- G12/lazy: 2.442s (slower)
- Gecode: 0.11s 686 failures (about the same)
- JaCoP: 1.2s 785 wrong search decisions (slightly better)
- SICStus: 0.43 746 backtracks (better in backtracks, but slightly slower)
- Choco: 1.9s: 2299 backtracks (better in backtracks, but slightly slower)
- ECLiPSe/ic: 2.8s (slower)
- ECLiPSe/fd: 1.4s (slower)
- fzn2smt: 11.1s (and SMT is back in the game again!)
Note: Using the decreasing/1 constraint instead, it takes 7.0s.
- tini: stopped after 10 minutes, so it's slower than the previous version.

To sum up: most solvers were a tiny bit slower with this table approach, and some have slightly fewer choice points/failures/etc. Hmmm, maybe this (slight) increase in time for some solvers is caused by the slightly larger file that mzn2fzn has to process?

The big winner here is SMT, but for this problem it's still slower than most other solvers.

I've put the improvements in the same model as before or_matching.mzn (the older model is now in or_matching_orig.mzn).

Also, thanks to Krzysztof Kuchcinski for an interesting discussion about this.

End of update 2011-03-12

A related seating problem

This matching problem also inspired me to model a related problem using the score matrix from the matching model:
When all these persons meet, what is the best way to place them around a (topologically circular) table, if the objective is to get the highest possible compatibility score for adjacently seated persons?
Another MiniZinc model (or_seating.mzn) gives an optimal value of 996 score points, in about 1.5 seconds (with 17629 failures) using Gecode.

There are some possible ways of approaching this problem, and I took the TSP way using the global constraint circuit. This model uses my decomposition of the circuit constraint which, given the score matrix, gives an optimal circuit. However, what we really want is the path (the seating), so we have to extract it. After a while I realized that the temporary array z used in the circuit_me decomposition is exactly this path, so it can simply be exposed as a parameter of the predicate. A consequence of this is that all seatings end with person #1, which shouldn't really be a problem since the path is a permutation.

predicate circuit_me(array[int] of var int: x, array[int] of var int: z) =
  let {
    int: lbx = min(index_set(x)),
    int: ubx = max(index_set(x))
  } in
  all_different(x) :: domain /\
  all_different(z) :: domain /\

  % put the orbit of x[lbx] in z[lbx..ubx]
  z[lbx] = x[lbx] /\
  forall(i in lbx+1..ubx) (
    z[i] = x[z[i-1]]
  )
  /\ % z[i] may not be lbx for i < ubx
  forall(i in lbx..ubx-1) (
    z[i] != lbx
  )
  /\ % for i = ubx it must be lbx
  z[ubx] = lbx;

Here are the two optimal solutions found by this model (with the implicit symmetry breaking that #1 is always in the last position). Note that the second solution is just the first reversed.

9 10 18 8 16 13 15 12 3 17 19 2 11 6 14 5 7 4 1
4 7 5 14 6 11 2 19 17 3 12 15 13 16 8 18 10 9 1

For the systems that support the circuit/1 constraint (right now only Gecode and JaCoP do), there is a variant: one can use circuit to get the circuit and then extract the path (seating) with this "extractor" predicate, where the solution always starts with #1.

predicate circuit_path(array[int] of var int: x,
                       array[int] of var int: p) =
  % we always start the path at 1
  p[1] = 1 /\
  forall(i in 2..n) (
    p[i] = x[p[i-1]]
  );

Then we get these two optimal solutions, which are just rotations of the two above:
1 9 10 18 8 16 13 15 12 3 17 19 2 11 6 14 5 7 4
1 4 7 5 14 6 11 2 19 17 3 12 15 13 16 8 18 10 9

Connection to the matching problem:
Comparing these seating placements with the matching above, they seem consistent: e.g. #9 and #1 sit together, as do #10 and #18, #2 and #19, etc.

Some other seating problems

For some reason I am fascinated by seating problems, and I'm not alone. John Toczek's PuzzlOr problem Best Host this February was a seating problem (the objective is to minimize seating conflicts given that some persons would like to sit together) which I solved using a rather simple MiniZinc model. However, I will not link to the model since the contest (to win an "O.R. The Science of Better" T-shirt) is still open. And let us not forget Karen Petrie's wedding planning problem: How a computer scientist organised her wedding.

Here are two (classical) combinatorial seating problems, with formulations taken from Fei Dai, Mohit Dhawan, Changfu Wu and Jie Wu: On Solutions to a Seating Problem, page 1:

Seating, row
MiniZinc model seating_row.mzn
n couples are seated in one row. Each person, except two seated at the two ends, has two neighbors. It is required that each neighbor of person k should either have the same gender or be the spouse of k.
Seating, table:
MiniZinc model seating_table.mzn
A classical [...] seating problem is the following: n couples are seated around a table in such a way that two neighbors of each person k are the different gender of k and they are not the spouse of k.
Both these problems are about constraints on couples and gender. There are different ways of representing who is in a couple with "anonymous persons", i.e. using just integers to represent the couples and individuals. Here are two variants (I included both in the seating_row model):

1) Using math:
predicate is_couple1(int: n, var int: a, var int: b) =
  % exists(i in 1..n) (
  exists(i in max(lb(a), lb(b))..min(ub(a), ub(b))) (
    (2*(i-1)+1 = a /\ 2*(i-1)+2 = b)
    \/
    (2*(i-1)+1 = b /\ 2*(i-1)+2 = a)
  );

2) Using set representation, i.e. an array of the pairs:
% define the couple
array[1..n] of set of int: couples = [{2*(i-1)+1, 2*(i-1)+2} | i in 1..n];

predicate is_couple(int: n, var int: a, var int: b, array[int] of set of int: c) =
  exists(j in 1..n) (
    a in c[j] /\ b in c[j]
  );

The second approach, using sets, is in my opinion the nicest, and it's also easier to use if one has a concrete list of couples.

The gender is simply represented by modulo 2.
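
With the numbering used here (persons 2i-1 and 2i form couple i), a same-gender test is then just a comparison modulo 2. This little predicate is a sketch of the idea, not necessarily verbatim from the models:

% two persons have the same gender iff their parities agree
predicate same_gender(var int: a, var int: b) =
  a mod 2 = b mod 2;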

The paper "On Solutions to a Seating Problem" (cited above) studies "reasonable patterns":
A reasonable pattern is an assignment of n couples without indices that meet the above requirement. For example, when n = 3 all reasonable patterns are as follows:

wwwhhh whhwwh whhhww hhwwwh
hhhwww hwwhhw hwwwhh wwhhhw
However, I am more interested in the concrete seating placements, so it's not quite the same problem. For n=3 there is a total of 60 placements. Here are all 8 different patterns generated for 3 pairs (n=3); the first number indicates the number of occurrences of each pattern.
12 gender2:hhhwww
 6 gender2:hhwwwh
 6 gender2:hwwhhw
 6 gender2:hwwwhh
 6 gender2:whhhww
 6 gender2:whhwwh
 6 gender2:wwhhhw
12 gender2:wwwhhh
I tried to generate exactly one of each pattern but had no luck. A model with the following simple symmetry breaking generates 20 solutions: fewer than 60, but more than exactly 8 patterns.

constraint (x[1] = 1 \/ x[1] = 2);

Statistics for Seating, row problem
Here are the numbers of solutions (using no symmetry breaking) for some different numbers of couples (n). The solver statistics below are for Gecode/fz where just the last solution is printed (it took about 2.5 minutes to print all solutions for n=6; without printing it took just 15.7 seconds).

n #solutions
1 2 (0.001s, 0 failures)
2 8 (0.001s, 5 failures)
3 60 (0.01s, 73 failures)
4 816 (0.02s, 1181 failures)
5 17520 (0.45s, 29872 failures)
6 550080 (15.7s, 1051417 failures)
7 23839200 (12:36min, 49685576 failures)

The sequence of these numbers seems to correspond to the OEIS integer sequence A096121, which is described as: A rook must land on each square exactly once, but may start and end anywhere and may intersect its own path. Inspired by Leroy Quet in a Jul 05 2004 posting to the Seqfan mailing list.

Well, I'm not sure if there is actually a deep connection between these two problems or just a freaky coincidence...

Statistics for Seating, table problem
For the seating table problem with 3 couples (n=3) there are 12 solutions (two different patterns). Here are the number of solutions for different n.

n #solutions
1 0 (0s, 0 failures)
2 0 (0s, 4 failures)
3 12 (0s, 0 failures)
4 96 (0.01s, 36 failures)
5 3120 (0.04s, 340 failures)
6 115200 (0.9s, 4436 failures)
7 5836320 (43.4s, 146938 failures)
8 382072320 (47:28 minutes, 6593012 failures)

The related OEIS sequence is A059375 (Total menage numbers 2*n!*A000179(n)); the sequence starts with 1, which is the value for n=0, but that is a bit odd in our current setting. This problem is also known as the Ménage problem (Wikipedia), so the correspondence between these numbers is not surprising at all. Also see Wolfram MathWorld: Married Couples Problem.

September 24, 2010

JSR-331 wins JCP Award “The Most Innovative JSR of 2010″

From Jacob Feldman: JSR-331 wins JCP Award “The Most Innovative JSR of 2010″:

Surprise, surprise! Our JSR-331 actually won the JCP Award for The Most Innovative JSR of 2010. It was announced at Java One on September 22 in San Francisco. It is certainly nice, and what the JCP notification page says about JSR-331 is flattering: “JSR 331, Constraint Programming. This JSR represents the real innovation in SE/EE since the 300 number of JSRs started; where almost every other JSR especially in SE/EE is just an upgrade to existing technologies.”


Another nice thing about this process is that we did nothing to nominate or somehow promote our JSR – I simply had no idea about this competition. Of course, I understand this award is more related to the recognition of the CP technology itself. But it also shows that CP is still not within main stream of real-world application development where it certainly deserves to be. Hopefully this award and the standard API will make CP more visible and accessible to Java developers. So, now it becomes even more important to expedite creation of a JSR-331 User Manual and making initial implementations available for free downloads to all application developers.


Also see:
* Sun: JCP Program Office
* JCP Program and Industry Awards: The 8th JCP Annual Awards Nominations
* JSR 331: Constraint Programming API
* CP-Standards (Standardization of Constraint Programming)

September 16, 2010

Some exciting news today

This has been an exciting day with a lot of interesting news.
  • G12 version 1 released
    It is the NICTA G12 - Constraint Programming Platform which includes Zinc, MiniZinc, etc.

    Here is the presentation of Zinc:
    ZINC - A state-of-the-art solver-independent constraint modelling system.

    Constraint Modelling
    Zinc is a state-of-the-art constraint modelling system. A model is a set of variables, a set of constraints on and between variables, an objective function to be optimised, and problem specific data. The Zinc compiler converts a model into a program that solves the given problem.

    Solver Independence
    Zinc is a rich language that is not tailored for a specific solver or constraint technology. Almost any type (Booleans, integers, reals, sets, or structures) can be a decision variable in a model.

    Third-party Solver Support
    The Zinc compiler works with many third-party constraint solvers, such as ILOG CPLEX, GeCode, and SCIP.

    Different parts of the same model can be handled by different solvers. The Zinc compiler automatically sets up the necessary communication between the solvers.
    Some links: There are also interactive versions of the systems:
  • The latest MiniZinc Development Snapshots (Release of The Day) now contains support for CP-VIZ, visualization of search tree. CP-VIZ can be downloaded from SourceForge: cpviz.

    In the ./bin directory of the ROTD there is a program minizinc-viz which does everything. It works directly if the CP-VIZ jar file (cpviz/visual/lib/viz.jar) is in the CLASSPATH.

    It is run like the minizinc program:
      minizinc-viz model.mzn
    A directory viz.out is then created with a lot of SVG files. In this directory there is a file aaa.html which can be viewed with a web browser (if it has support for SVG). My Firefox (under Linux) works very nicely.

    There is now also a tutorial for MiniZinc: An Introduction to MiniZinc (PDF).

  • Jean-Charles Régin has started a blog: Constraint Programming (I like the URL).

    (Thanks Pierre S for the tip.)

  • Jean-Charles Régin wrote about the Google OR/CP Solver: or-tools (Operations Research Tools developed at Google).
An interesting day, indeed.

July 28, 2010

Jacob Feldman has started to blog about constraint programming standardization

Jacob Feldman has started to blog about constraint programming standardization: Constraint Programming Standardization Blog. In the first larger post Road to CP Standardization he explains the purpose of the blog, and the road to constraint programming standardization.

For more information, see Standardization of Constraint Programming, its forum, and the JSR 331: Constraint Programming API.

Also see Open Rules, and do as I do: follow Jacob on Twitter: Jacob_OpenRules.

July 16, 2010

Helmut Simonis' blog: Constraint Applications Blog

Helmut Simonis has started to blog: Constraint Applications Blog - Lots of Constraint Applications. This is great news and it will be very interesting to follow his writings.

He was kind enough to email me and also described his intentions with the blog:

I will look at constraint applications (in the wider sense).


At the moment, I'm talking about projects I have been involved in myself, but I imagine I will run out of material quickly, so I will also consider papers written on applications by other authors, to abstract and classify them. The idea is to develop a collection of applications in different fields, where one can check what has been done by whom, and where to find more information.

He has been quite busy the last weeks and has blogged a lot about his projects. Here are some of the posts so far:

May 23, 2010

SweConsNet 2010 - some brief comments

This Friday (May 21) I attended the SweConsNet Workshop 2010, co-located with the SAIS Workshop 2010 in Uppsala. I wrote about the SweConsNet workshop and the talks in SweConsNet 2010: Abstract for the presentations.

Here are some brief comments.

As with last year's workshop, it was very interesting to listen to all the talks: a good mix of theory and practical results. Somewhat to my own surprise, two of the talks I enjoyed most were the more theoretical ones by two PhD students: Serdar Kadıoğlu on Efficient Context-Free Grammar Constraints, and Jun He on An Automaton Constraint for Local Search. I also very much enjoyed Helmut Simonis' two talks: CPTV - Generating Personalized TV Schedules, and VIZ - A Generic Visualization Toolkit for Constraint Programming (I'm not at all surprised by this enjoyment).

This year I finally met Pierre Flener, who introduced me to SweConsNet in 2009 (and convinced me to talk at the 2009 workshop). I had a great time discussing many things with him, especially during the "Mosquito evening". Also, it was fun to meet some readers of this blog who actually use (or got inspiration from) my constraint programming models. Some business cards were exchanged.

Thanks to the organizers of this great workshop, and to all the persons I listened to and discussed with (and for the nice weather in Uppsala :-). I hope we all meet at SweConsNet Workshop 2011.

April 28, 2010

SweConsNet 2010: Abstract for the presentations

Below is a more detailed program for SweConsNet 2010 (the 9th workshop of the Network for Sweden-based researchers and practitioners of Constraint Programming), co-located with the SAIS Workshop 2010 on May 21 2010 in Uppsala, Sweden. See my earlier blog post for more about this workshop.

Here are the forthcoming presentations (as of writing), including the abstracts. This will be a very interesting workshop.
  • Helmut Simonis, Cork Constraint Computation Centre, Ireland:
    • CPTV - Generating Personalized TV Schedules (SAIS/SweConsNet joint invited talk)
      Today's cable and satellite systems provide a multitude of TV viewing choices for a user, making it difficult to select the best program to watch at any one time. Digital video recording and IPTV add even more options, allowing to time- and sometimes place-shift TV watching. CPTV is a system which generates a personalized viewing schedule for a user, integrated with his current calendar tools. The system is based on three main elements: a rule based system to express viewing preferences, a content-based recommender system to suggest films based on past viewing selections and information extracted from Wikipedia, and a constraint-based scheduler which plans and schedules recording and viewing schedules around existing events and availability time periods of a user. The application thus combines different AI methods in one Web-based application, and is implemented using the constraint logic programming system ECLiPSe.
    • VIZ - A Generic Visualization Toolkit for Constraint Programming
      Visualization is one of the best techniques to understand the behaviour of a constraint program, allowing to directly observe the impact of changes by visual inspection instead of tedious debugging. So far, most constraint visualization tools have been closely linked to specific solvers, making it difficult to compare alternative solvers and to reuse development effort spent on other systems. The new, light-weight VIZ system provides a simple XML based interface for solvers, and can be easily extended by writing basic Java code. In this talk we describe the overall system architecture, give examples of the visualization possible and show how the tool has been used to improve a number of constraint models in different domains.
      (I just noted the paper about this by Helmut Simonis, Paul Davern, Jacob Feldman, Deepak Mehta, Luis Quesada and Mats Carlsson: A Generic Visualization Platform for CP )
  • Arnaud Gotlieb, INRIA, France
    • Using SICStus Prolog to build automatic test data generation tools
      This talk will review our experience in building automatic test data generation tools with SICStus Prolog in various contexts. Using Logic Programming in this area is not new but putting theory into practice requires to take care of some non-trivial aspects of this real-world implementation of Prolog. We will also discuss of the use of global constraints to handle structure and control of classical programming language features in order to benefit from powerful deduction rules available in SICStus Prolog.
  • Jun He, Uppsala University
    • An automaton Constraint for Local Search
      We explore the idea of using automata to implement new constraints for local search. This is already a successful approach in constraint-based global search. We show how to maintain the violations of a constraint and its decision variables via a (deterministic or nondeterministic) finite automaton that describes a ground checker for that constraint. We extend the approach to counter automata, which are often much more convenient than finite automata, if not more independent of the constraint instance, and can even recognise non-regular languages. We establish the practicality of our approach on several real-life combinatorial problems. This is joint work with Pierre Flener and Justin Pearson.
  • Serdar Kadıoğlu, Brown University, USA
    • Efficient Context-Free Grammar Constraints
      Context-free grammar constraints enforce that an ordered set of variables must form a word in a language defined by a context-free grammar. For this constraint, we devise an efficient, complete filtering algorithm that has the same asymptotic complexity as the Cocke-Younger-Kasami algorithm for parsing context-free languages. Particularly, we show how to filter a sequence of monotonically tightening problems in cubic time and quadratic space. We present experiments on a shift scheduling problem that shows orders of magnitude improvements in time and space consumption.
  • Per Kreuger, Swedish Institute of Computer Science
    • Scheduling IPTV Content Distribution Using Caches
      Recently many new IPTV providers have appeared, typically telephone and broadband companies, that now distribute scheduled TV channels, sometimes high-definition TV and video-on-demand (VoD) over IP in their own network. From a network management perspective, distribution of IPTV can be seen as a scheduling problem with the network links and storage facilities as resources and the objective to balance the overall load in the network. We show that scheduling and optimisation techniques can be used to make the distribution of IPTV more efficient by pushing content out to caches in the network to the right place at the right time.
  • Toni Mancini, Sapienza Università di Roma, Italy
    • Constraint-Directed Local Search in Relational Databases
      Solving constraint satisfaction and optimisation problems is increasingly crucial in industry, in order to cope effectively with hard problems of practical relevance. However, the approach of exploiting, in production scenarios, current constraint or mathematical programming solvers has severe limitations, which clearly demand new methods: data is usually stored in potentially very large databases, and building a problem model in central memory suitable for current solvers could be very challenging or impossible. We present a declarative language based on SQL for modelling combinatorial problems, and novel techniques for local search algorithms explicitly designed to work directly on relational databases. We also discuss and experiment with a solver implementation that, working on top of any DBMS, exploits such algorithms in a way transparent to the user, allowing a smooth integration of combinatorial problem solving into industry environments.
  • Tomas Nordlander, Sintef, Norway
    • CP with Iterated Local Search for Norwegian hospitals
      I will describe the work our optimisation group has done in the domain of Nurse rostering. Rostering is the process of creating a working plan that shows employees working hours over a given planning horizon. Our approach is a hybrid one that uses Constraint Programming techniques to find an initial solution that becomes the starting point for an Iterated Local Search algorithm. We have tested and verified our approach on real world instances and it's currently implemented at several hospitals in Norway.
  • Carl Christian Rolf, Lund Institute of Technology
    • Combining Task and Data Parallelism in Constraint Programming
      Automatic parallelization in constraint programming (CP) becomes increasingly important when new multi-core architectures provide ways to improve performance. Previous research on parallelism in CP has mostly focused on parallel search. However, the consistency part of the solving process is often magnitudes more time-consuming. Before, we have looked at parallel consistency. In this paper we investigate how to combine parallel search with parallel consistency. We evaluate which problems are suitable and which are not. Our results show that parallelizing the entire solving process in CP is a major challenge as parallel search and parallel consistency typically suit different problems.

December 15, 2009

SweConsNet 2010 (May 21, 2010 in Uppsala, Sweden)

SweConsNet 2010 is to be held May 21 2010 in Uppsala, Sweden. This is the 9th workshop of the Network for Sweden-based researchers and practitioners of Constraint programming (SweConsNet). It is collocated with the SAIS Workshop 2010 (May 20-21).

From the SweConsNet 2010 site:
The purpose of this workshop is to learn about ongoing research in constraint programming, existing projects and products, and further development of the network.

The workshop is open to everybody interested in the theory and practice of constraint programming, whether based in Sweden or elsewhere.


Please forward this information to people who might be interested in this workshop but are not yet on the SweConsNet mailing list. They can subscribe to it by sending a message to Justin Pearson.



Uppsala University, Information Technology Center (ITC), Polacksbacken, house 1 (Directions).


List of speakers (preliminary):


SweConsNet 2010 is chaired by Magnus Ågren (SICS). Local arrangements are made by Pierre Flener and Justin Pearson (Uppsala University).

The last workshop (in May this year) was very interesting and fun, and I'm really looking forward to SweConsNet 2010. For more about SweConsNet 2009, see its homepage and my own Report from SweConsNet2009, including my presentation.

December 14, 2009

The Global Constraint Catalog has been updated

A new version of the Global Constraint Catalog has been released. That's great! The update date is 2009-12-10, but I didn't see it until today.

Here is the presentation of the catalog:
About the catalogue

The catalogue presents a list of 348 global constraints issued from the literature in constraint programming and from popular constraint systems. The semantic of each constraint is given together with a description in terms of graph properties and/or automata.

The catalogue is periodically updated by Nicolas Beldiceanu, Mats Carlsson and Jean-Xavier Rampon. Feel free to contact the first author for any questions about the content of the catalogue.

Download the Global Constraint Catalog in pdf format:

The news in this release:
2009-12-10 working version update: 348 constraints
Another nice thing is that it is now possible to search just the HTML pages, or PDF documents.

Sophie Demassey has done wonderful work with the online version.

Also, I am happy for the links to this blog and my MiniZinc page, see the section on (decompositions of) global constraints.

November 08, 2009

Update on Nonogram: Jan Wolter's Survey and my own new benchmark

Survey of Paint-by-Number Puzzle Solvers

In Some new models, etc I mentioned the great Survey of Paint-by-Number Puzzle Solvers, created by Jan Wolter (also author of the Nonogram solver pbnsolve).

In this survey he included both Gecode's Nonogram solver, written by Mikael Lagerkvist, and my own Nonogram model (with Gecode/FlatZinc).

Since the last update of the former blog post the following has happened:
  • both our solvers now have the assessment "An amazingly good solver, especially for a simple demo program", and are placed 4th, 5th, and 6th of the 10 tested systems
  • my Gecode/FlatZinc model has been tested for "Full results"; it came 4th out of 5
  • my Nonogram model with the Lazy FD solver is now included in the "Sample result", in 6th place
It seems that Wolter has come to appreciate constraint programming as a general tool for solving these kinds of combinatorial problems, much for its ease of experimentation, e.g. with labeling strategies and (for the MiniZinc models) changing solvers:

From the analysis of Lagerkvist's Gecode model:
This is especially impressive because the faster solvers are large, complex programs custom designed to solve paint-by-number puzzles. This one is a general purpose solver with just a couple hundred custom lines of code to generate the constraints, run the solver, and print the result. Considering that this is a simple application of a general purpose solving tool rather than hugely complex and specialized special purpose solving tool, this is an astonishingly good result.

Getting really first class search performance usually requires a lot of experimentation with different search strategies. This is awkward and slow to do if you have to implement each new strategies from scratch. I suspect that a tool like Gecode lets you try out lots of different strategies with relatively little coding to implement each one. That probably contributes a lot to getting to better solvers faster.
From the analysis of my MiniZinc model:
If you tried turning this into a fully useful tool rather than a technology demo, with input file parsers and such, it would get a lot bigger, but clearly the constraint programming approach has big advantages, achieving good search results with low development cost.


These two results [Gecode/FlatZinc and LazyFD] highlight the advantage of a solver-independent modeling language like MiniZinc. You can describe your problem once, and then try out a wide variety of different solvers and heuristics without having to code every single one from scratch. You can benefit from the work of the best and the brightest solver designers. It's hard to imagine that this isn't where the future lies for the development of solvers for applications like this.
And later in the Conclusions:
The two constraint-based systems by Lagerkvist and Kjellerstrand come quite close in performance to the dedicated solvers, although both are more in the category of demonstrations of constraint programming than fully developed solving applications. The power of the underlaying search libraries and the ease of experimentation with alternative search heuristics obviously serves them well. I think it very likely that approaches based on these kinds of methods will ultimately prove the most effective.
I think this is an important lesson: before starting to write very specialized tools, first try a general tool like a constraint programming system and see how well it performs.

The Lazy FD solver and the Lion problem

Most of the problems in the Sample Results were solved by some solver within the time limit of 30 minutes. However, one problem stands out as extra hard: the Lion problem. When I tested MiniZinc's Lazy FD solver on my machine I was very excited that it took just about 2 minutes, and mentioned this to Wolter. He also tested this, but on his 64-bit machine it took 80 minutes to solve (and since this is above the time limit it is not in the result table). This is how he describes the Lazy FD solver:
But the remarkable thing is that [the Lazy FD solver] solves almost everything. Actually, it even solved the Lion puzzle that no other solver was able to solve, though, on my computer, it took 80 minutes to do it. Now, I didn't allow a lot of other solvers 80 minutes to run, but that's still pretty impressive. (Note that Kjellerstrand got much faster solving times for the Lion than I did. Lagerkvist also reported that his solver could solve it, but I wasn't able to reproduce that result even after 30 CPU hours. I don't know why.)
After some discussion, we came to the conclusion that the difference was probably due to the fact that I used a 32-bit machine (and the 32-bit version of MiniZinc) with 2 Gb memory, while Wolter used a 64-bit machine with 1 Gb memory.

One should also note that all the other solvers were compiled without optimization, including Gecode/FlatZinc; however, LazyFD does not come with source, so it runs optimized. This may be an unfair advantage for the LazyFD solver.

My own benchmark of the Sample Results

The times in the Sample Results are, as mentioned above, for solvers compiled without optimization. I have now run the same problems on my machine (Linux Mandriva, Intel Dual 3.40GHz, with 2Gb memory), but with the solvers using the standard optimization. All problems were run with a time limit of 10 minutes (compared to Wolter's 30 minutes), searching for 2 solutions, which checks for unique solutions. The last three problems (Karate, Flag, Lion) have multiple solutions, and it is considered a failure if two were not found within the time limit. I should also note that during the benchmark I was using the machine for other things, such as surfing the web.

The problems
I downloaded the problems from Wolter's Webpbn: Puzzle Export. For copyright reasons I cannot republish these models, but it is easy to download each problem. Select ".DZN" for the MiniZinc files, and "A compiled in C++ format" for Gecode. There is no support for Comet's format, but it's quite easy to convert a .dzn file to Comet's.

The solvers + labeling strategies
Here is a description of each solver and its labeling strategy:
  • fz, "normal" (column_row)
    MiniZinc model with Gecode/FlatZinc. The usual labeling in nonogram_create_automaton2.mzn, i.e. where the columns are labeled before rows:
    solve :: int_search(
          [x[i,j] | j in 1..cols, i in 1..rows], 
  • fz, "row_column"
    MiniZinc model with Gecode/FlatZinc. Here the order of labeling is reversed, rows are labeled before columns. Model is nonogram_create_automaton2_row_column.mzn
    solve :: int_search(
          [x[i,j] | i in 1..rows, j in 1..cols], 
  • fz, "mixed"
    MiniZinc model with Gecode/FlatZinc: nonogram_create_automaton2_mixed.mzn.
    I have long been satisfied with the "normal" labeling in the MiniZinc model because P200 (the hardest problem I had tested until now) was solved so fast. However, the labeling used in the Comet Nonogram model described in Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second, and which is also used in the Gecode model, is somewhat more complicated since it bases the exact labeling on a comparison of the hints for the rows and the columns.

    I decided to try this labeling in MiniZinc as well. However, labeling in MiniZinc is not as flexible as in Comet and Gecode. Instead we have to add a dedicated array for the labeling (called labeling):
    array[1..rows*cols] of var 1..2: labeling;
    and then copy the elements of the grid to that array, based on the relation between the row and column hints:
          % prepare for the labeling
          if rows*row_rule_len < cols*col_rule_len then
               % label on rows first
               labeling = [x[i,j] | i in 1..rows, j in 1..cols]
          else
               % label on columns first
               labeling = [x[i,j] | j in 1..cols, i in 1..rows]
          endif
          % .... 
    and last, the search is now based just on this labeling array:
    solve :: int_search(
  • jacop, "normal"
    The same MiniZinc model as for fz "normal", run with JaCoP/FlatZinc.
  • lazy, satisfy
    Model: nonogram_create_automaton2_mixed.mzn. This uses the MiniZinc LazyFD solver with the search strategy:
    solve satisfy;
    This labeling is recommended by the authors of LazyFD. See MiniZinc: the lazy clause generation solver for more information about this solver.

    Note: The solver in the latest official MiniZinc version (1.0.3) doesn't support set vars. Instead I (and also Jan Wolter) used the latest "Release Of The Day" version (as of 2009-11-02).
  • Comet, normal
    Model: This is the Comet model I described in Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second. No changes have been made.
  • Gecode, normal
    This is the Nonogram model distributed with Gecode version 3.2.1. The labeling is much like the one used in the Comet model, as well as fz, "mixed". (In fact the labeling in the Gecode model was inspired by the labeling in the Comet model).
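To make the "mixed" labeling idea above concrete, here is a minimal, self-contained MiniZinc sketch. This is not the full nonogram model: the grid and hint sizes are made-up toy values, and the first_fail/indomain_min choices in int_search are my assumptions for illustration (the actual heuristics used in the model are not shown here). The names rows, cols, row_rule_len, col_rule_len and labeling follow the snippets above.

```minizinc
% Sketch of the "mixed" labeling: pick the label order by comparing
% the total size of the row hints against the column hints.
int: rows = 3;              % toy values, for illustration only
int: cols = 4;
int: row_rule_len = 2;      % assumed max number of hints per row
int: col_rule_len = 1;      % assumed max number of hints per column

array[1..rows, 1..cols] of var 1..2: x;

% dedicated array used only for the search annotation
array[1..rows*cols] of var 1..2: labeling;

constraint
  labeling = if rows*row_rule_len < cols*col_rule_len then
               [x[i,j] | i in 1..rows, j in 1..cols]  % label rows first
             else
               [x[i,j] | j in 1..cols, i in 1..rows]  % label columns first
             endif;

% variable/value choice (first_fail, indomain_min) is an assumption here
solve :: int_search(labeling, first_fail, indomain_min, complete) satisfy;

output [ show(x) ];
```

The comparison rows*row_rule_len < cols*col_rule_len is a cheap estimate of which dimension is more constrained, so the more constrained dimension gets labeled first.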

Here are the results. For each model (+ labeling strategy) two values are presented:
  • time (in seconds)
  • number of failures if applicable (the LazyFD solver always returns 0 here).
The result

Some conclusions, or rather notes

Here are some conclusions (or notes) about the benchmark.
  • The same solver, Gecode/FlatZinc, is here compared with three different labelings. There is no single labeling that is better than the others. I initially had some hope that the "mixed" labeling would take the best from the two simpler row/column labelings, but this is not really the case. For example, for Tragic the "row_column" strategy is better than "normal" and "mixed". I am, however, somewhat tempted to use the "row_column" labeling, but the drawback is that the "P200" problem (not included in Wolter's sample problems) takes much longer with this labeling.
  • The same model and labeling is compared with different solvers: Gecode/FlatZinc is faster than JaCoP/FlatZinc on all the problems. For the easier problems this could be explained by the extra startup time of Java for JaCoP, but that is not the complete explanation for the harder problems. Note: Both Gecode/FlatZinc and JaCoP/FlatZinc have dedicated and fast regular constraints (whereas the LazyFD and Comet solvers use a decomposition).
  • The LazyFD solver is the only one that solves all problems (including Lion), but it is somewhat slower than most of the others on the middle problems. Still, this is a very interesting solver.
  • It is also interesting to compare the results of the Comet model and Gecode/FlatZinc "mixed", since they use the same labeling principle. However, there are some differences: the MiniZinc model with Gecode/FlatZinc uses a dedicated regular constraint, while Comet uses my own decomposition of the constraint. For the Merka problem the Comet version outperforms the Gecode/FlatZinc version; otherwise they take about the same time (and number of failures).
  • The Light problem: It is weird that this problem was solved in almost exactly 10 minutes (the timeout is 10 minutes) by both Gecode/FlatZinc and JaCoP/FlatZinc. The solutions seemed correct, but I was suspicious of this. Update: Christian Schulte got me on the right track. Here is what happened: the first (unique) solution was found pretty quickly and was printed, but the solvers could not prove that the solution was unique, so they timed out. JaCoP/FlatZinc actually printed "TIME-OUT", but I didn't notice that. Case closed: they both FAILED on this test. Thanks, Christian. End update
As said above, I can only agree with Jan Wolter's comment that the ease of experimentation with the FlatZinc solvers, for example changing the labeling or the solver, is a very nice feature.

Last word

No benchmark or comparison of (constraint programming) models is really complete without the reference to the article On Benchmarking Constraint Logic Programming Platforms. Response to Fernandez and Hill's "A Comparative Study of Eight Constraint Programming Languages over the Boolean and Finite Domains" by Mark Wallace, Joachim Schimpf, Kish Shen, Warwick Harvey. (Link is to ACM.)

From the Abstract:
... The article analyses some pitfalls in benchmarking, recalling previous published results from benchmarking different kinds of software, and explores some issues in comparative benchmarking of CLP systems.

September 23, 2009

MiniZinc Challenge 2009 Results

The result of MiniZinc Challenge 2009 is presented in MiniZinc Challenge 2009 Results:
There were two entrants this year:

* Gecode
* SICStus

In addition, the challenge organisers entered the following three FlatZinc implementations:

* G12/FD
* G12/LazyFD

As per the challenge rules, these entries are not eligible for prizes, but do modify the scoring results.

Summary of Results


sicstus 1651.8
eclipse_ic 322.1
gecode 4008.8
g12_fd 2040.6
g12_lazyfd 1376.6


sicstus 1841.0
gecode 4535.5
g12_fd 1112.4
g12_lazyfd 2511.1

Congratulations to the Gecode team!

September 17, 2009

At Twitter:

Today I decided to use Twitter more, and more systematically. This doesn't mean that I will stop blogging, not at all. On Twitter I can notify directly when new models have been published, instead of waiting some weeks and then blogging about a couple of new models, about some newly found papers, etc. And also about new blog posts.

Sometimes there will be tweets about other things than constraint programming (and related paradigms), and - not very often - in Swedish.

My Twitter page is: (note the username, with an ending "j").

I hope to see you over there.

July 21, 2009

Common constraint programming problems

When learning a new constraint programming system I tend to start by modeling some 17 "learning models" (base models). I have implemented these in all 7 constraint programming systems I've learned so far (with a few exceptions).

Common Constraint Programming Problems
But there are many other problems that have been implemented in more than one system. These "common problems" are now collected on the new page Common Constraint Programming Problems. "Common" is defined as a problem that has been implemented in at least two systems. Right now there are 126 common problems consisting of 364 implemented models.

Please note that this page is generated and may contain some peculiarities.

Number of implemented models
While speaking of statistics, here is the (approximate) number of models implemented so far in each constraint programming system:

* MiniZinc: 560
* Comet: 142
* ECLiPSe: 110 (*)
* Gecode: 45
* Gecode/R: 27
* Choco: 18
* JaCoP: 18

Total: Approx. 920 models

(*) Right now, my focus is on ECLiPSe so this number will probably increase during the next weeks.

June 24, 2009

My old swedish blog posts about constraint programming translated (via Google)

Before starting this blog (i.e. My Constraint Programming blog) in late December 2008, I blogged about constraint programming in my Swedish blog (hakank.blogg). Translations of those Swedish blog posts have not been collected before, and now it is time.

So, here are links to most of the blog posts from the category Constraint Programming, translated via Google's Language Tools. Most of the translation is intelligible, but if you have questions about something particular, please feel free to mail me for more information.

November 4, 2005
Swedish: Sesemans matematiska klosterproblem samt lite Constraint Logic Programming
Translated: Sesemans kloster mathematical problems and little Constraint Logic Programming ("kloster" means "convent").

April 18, 2006
Swedish: Choco: Constraint Programming i Java
Translated: Choco: Constraint Programming in Java

The two posts above were written when I had just a cursory interest in constraint programming. From about February 2008 and onward, it became my main interest.

April 5, 2008
Swedish: Constraint Programming: Minizinc, Gecode/flatzinc och ECLiPSe/minizinc,
Translated: Constraint Programming: Minizinc, Gecode / flatzinc and ECLIPSE / minizinc.

April 14, 2008
Swedish: MiniZinc-sida samt fler MiniZinc-exempel
Translated: MiniZinc-page and multi-MiniZinc example.

April 21, 2008
Swedish: Applikationer med constraint programming, lite om operations research samt nya MiniZinc-modeller
Translated: Applications of constraint programming, a little of operations research and new MiniZinc models

April 26, 2008
Swedish: Ett litet april-pyssel
Translated: A small-craft in April (a better translation would be "A small April puzzle")

April 27, 2008
Swedish: Mitt OpenOffice Calc-/Excel-skov
Translated: My Open Office Calc-/Excel-skov (better translation: My Open Office Calc-/Excel period).

May 26, 2008
Swedish: Några fler MiniZinc-modeller, t.ex. Smullyans Knights and Knaves-problem
Translated: Some more MiniZinc models, eg Smullyans Knights and Knaves-problem Smullyans Knights and Knaves problem

June 2, 2008
Swedish: MiniZinc/FlatZinc version 0.8 släppt
Translated: MiniZinc / FlatZinc version 0.8 released

June 2, 2008
Swedish: Nya MiniZinc-modeller, flera global constraints, däribland clique
Translated: New MiniZinc models, several global constraints, including Clique

June 5, 2008
Swedish: Mats Anderssons tävling kring fotbolls-EM 2008 - ett MiniZinc-genererat tips
Translated: Mats Andersson racing around championship in 2008 - a MiniZinc-generated tips (it is about a competition how to predict Soccer World Cup 2008 using MiniZinc)

June 24, 2008
Swedish: Tre matematiska / logiska pyssel med constraint programming-lösningar: n-puzzle, SETS, M12 (i MiniZinc)
Translated: Three mathematical / logical craft with constraint programming solutions: n-puzzle, SETS, M12 (in MiniZinc) ("craft" should be translated to "puzzles")

June 24, 2008
Swedish: MiniZinc-nyheter
Translated: MiniZinc news

June 29, 2008
Swedish: Gruppteoretisk lösning av M12 puzzle i GAP
Translated: Group Theoretical solution of the M12 puzzle in GAP (well, this is not really a constraint programming solution, but it is another way of solving the M12 puzzle blogged about in June 24)

June 30, 2008
Swedish: Gruppteoretisk lösning av M12 puzzle i GAP - take 2
Translated: Group Theoretical solution of the M12 puzzle in GAP - take 2

July 4, 2008
Swedish: Martin Chlond's Integer Programming Puzzles i MiniZinc
Translated: Martin's Chlond Integer Programming Puzzles in MiniZinc

July 7, 2008
Swedish: Fler MiniZinc modeller kring recreational mathematics
Translated: More MiniZinc models on recreational mathematics

July 20, 2008
Swedish: Fler constraint programming-modeller i MiniZinc, t.ex. Minesweeper och Game of Life
Translated: More constraint programming models in MiniZinc, eg Minesweeper och Game of Life Minesweeper and the Game of Life

August 17, 2008
Swedish: JaCoP - Java Constraint Programming solver
Translated: JaCoP - Java Constraint Programming solver

September 14, 2008
Swedish: Constraint programming i Java: Choco version 2 släppt - samt jämförelse med JaCoP
Translated: Constraint programming in Java: Choco version 2 released - and comparison with JaCoP

September 28, 2008
Swedish: Constraint programming: Fler MiniZinc-modeller, t.ex. Martin Gardner och nonogram
Translated: Constraint programming: More MiniZinc models, eg Martin Gardner och nonogram Martin Gardner and nonogram

December 27, 2008
Swedish: Constraint programming-nyheter samt nya MiniZinc-modeller
Translated: Constraint programming, news and new MiniZinc models

December 29, 2008
Swedish: My Constraint Programming Blog
Translated: My Constraint Programming Blog

After that, the entries about constraint programming on my Swedish blog were just summaries of the stuff written here at My Constraint Programming blog.

May 30, 2009

New constraint programming blog: Be-cool Blog

Be-cool Blog is a new blog about constraint programming (there are very few blogs of this kind). It is written by the Belgian constraints group Be-cool @ UCLouvain, Belgium, among them Jean-Noël Monette and Pierre Schaus.

The first blog post New Be-Cool Blog states

Upon popular request, here is the new blog of the Be-Cool team. This blog will give the team the opportunity to share its thoughts and ideas about Constraints and a lot of related topics.

The post Solving a Fifteen Puzzle with Lightsolver implements a "technology-neutral" solver in Comet (available from Dynadec).

Good luck with the blog!

May 29, 2009

Report from SweConsNet2009, including my presentation

This Wednesday I attended SweConsNet2009 (SweConsNet is the Network for Sweden-based researchers and practitioners of Constraint programming), co-located with the SAIS Workshop 2009 (SAIS, the Swedish AI Society; that workshop was held both Wednesday and Thursday).

Since this is a blog about constraint programming I will just focus on the SweConsNet part. But first I want to thank the organizers of the workshops for two very well organized and interesting days. I'm looking forward to next year.


It was great fun to actually meet and talk to people I had only had mail contact with before, e.g. Mikael Zayenz Lagerkvist (KTH, Gecode), Christian Schulte (KTH, Gecode), Justin Pearson (Uppsala), Andreas Launila (Gecode/R), Magnus Ågren (SICS, SICStus Prolog), Krzysztof Kuchcinski (Lund, JaCoP) and others I had only read about. All very nice people, and helpful to a newcomer to this field like me. It was - of course - also very interesting to listen to the talks (some were somewhat over my head, though). The workshop was held in an informal environment that I really liked.

My talk

As blogged before in My talk at SweConsNet Workshop 2009: "Learning Constraint Programming (MiniZinc, JaCoP, Choco, Gecode/R, Comet, Gecode): Some Lessons Learned", I was one of the speakers. The talk was about my experience and findings after having programmed (about) the same 17 models in the following six systems (with a lot more models in some of them): MiniZinc, JaCoP, Choco, Gecode/R, Comet, Gecode, and was focused on learning these systems. See My Constraint Programming Page for more information about each system.

The Powerpoint of the talk is here: Learning Constraint Programming: Some lessons learned (PPT).

Some comments:
* The slides for the actual talk are from slide #1 to the "Thank You" slide (#15). The slides after that are from earlier drafts; I kept them in case of time shortage (which was not likely) or for the Q/A part of the talk. The talk was about 20 minutes plus 10 minutes of Q/A.

* I originally planned to show a comparison of two small implementations: decompositions of the global constraint all_different_except_0, and an example of how to implement Element for an integer-variable matrix in the Word square problem, but these simply took too much time so I skipped them.

* Much of the talk compared different features of the systems (with regard to learning them). Slide 14 is a comparison table of the features I talked about.

* Slide 22 "Introduction in Constraint Programming: Books" was in the context of me missing a general introductory book on modeling in constraint programming (and not just the theory of constraints and search trees). The listed publications are examples of approaches I have liked, either for a specific system (Mozart/Oz and OPL) or in operations research.

"Single missed feature"

During the Q/A part Christian asked me to name the one thing that I considered the biggest hurdle/problem for each system, and the following (very subjective) list was presented. I realize that different systems have very different goals of what is important to pursue, but these were the obstacles I had when learning and using the systems:
  • MiniZinc: That it is "closed", i.e. it is not possible to extend it with one's own search strategies, propagators etc., except in Mercury (which I don't know and can't even compile on my computer).
  • Comet: Shortage of documentation of the language. The constraints are briefly documented but not the language itself. Fortunately I have the OPL book, which is of great help, but the object model (very C++-like) etc. is not documented. Possibly more examples would help here. Update 2009-09-05: Version 2.0 (released 2009-09-02) includes a very good tutorial explaining (with full examples) both Comet and its three different modeling paradigms: constraint programming, constraint-based local search, and mathematical programming.
  • Choco: One of the biggest problems I have with Choco is its class structure, which makes it very hard to find constraints or other functions.
  • JaCoP: Shortage of documentation. (I respect the "minimalistic approach" of not wanting to implement a lot of convenience constraints etc.; even though it can make the code more verbose, I very much respect this decision.) Update: With version 2.4 (released 2009-09-16) there is a good tutorial describing the constraints and search heuristics.
  • Gecode: Make it possible to state Element for matrices of integer variables more easily. I have blogged about this, for example in Learning constraint programming - part II: Modeling with the Element constraint. Update 2009-10-13: Version 3.2 (released 2009-10-05) now has support for this.
  • Gecode/R: For Gecode/R I regard the biggest problem to be that some of the Gecode constraints (e.g. global cardinality) were not implemented.

Full presentation

The early drafts were actually written as a "full sentence" version and I plan to publish that version as well, though only after some serious reworking. It will cover all (or at least some) of the parts I skipped.

May 09, 2009

Learning Constraint Programming IV: Logical constraints: Who killed Agatha? revisited

Here is a comparison of how different constraint programming systems implement the logical constraints in the problem Who Killed Agatha?, also known as The Dreadsbury Mansion Murder Mystery. In Learning constraint programming - part II: Modeling with the Element constraint the problem was presented, showing how the systems implement the Element constraint.

Problem formulation from The h1 Tool Suite
Someone in Dreadsbury Mansion killed Aunt Agatha. Agatha, the butler, and Charles live in Dreadsbury Mansion, and are the only ones to live there. A killer always hates, and is no richer than his victim. Charles hates noone that Agatha hates. Agatha hates everybody except the butler. The butler hates everyone not richer than Aunt Agatha. The butler hates everyone whom Agatha hates. Noone hates everyone. Who killed Agatha?
Originally from F. J. Pelletier: Seventy-five problems for testing automatic theorem provers., Journal of Automated Reasoning, 2: 191-216, 1986.

Here we compare how different constraint programming systems implement the three emphasized conditions in the problem formulation above:
  • the concept of richer
  • Charles hates noone that Agatha hates
  • No one hates everyone
All models define the concepts of hates and richer as two matrices. The matrix declarations are omitted in the code snippets below.


Here are the different models for the Who killed Agatha? problem. JaCoP and Choco have two versions each, differing in how the Element constraint is implemented; see the link above. Also, there is no Gecode/R version of this problem.

Defining the concept of richer

First we define the concept of richer, which consists of two parts:
  • No one is richer than him-/herself
  • if i is richer than j then j is not richer than i
This is an antisymmetric relation, which is explored somewhat more (with an alternative predicate for the concept) in the MiniZinc model antisymmetric.mzn.

The logical concept used here is equivalence (if and only if).
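Before the CP encodings, the two conditions can be checked in plain Python - an illustration of the logic only, not one of the models (the function name is mine):

```python
def richer_ok(richer):
    """Check the two parts of the 'richer' concept on an n x n 0/1 matrix."""
    n = len(richer)
    # no one is richer than him-/herself
    if any(richer[i][i] == 1 for i in range(n)):
        return False
    # equivalence: richer[i,j] = 1  <->  richer[j,i] = 0  (for i != j)
    return all((richer[i][j] == 1) == (richer[j][i] == 0)
               for i in range(n) for j in range(n) if i != j)
```

Note that the equivalence forces, for each pair i != j, exactly one direction of "richer" to hold.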


% define the concept of richer:
% no one is richer than him-/herself
forall(i in r) (
   richer[i,i] = 0
)
/\
% if i is richer than j then j is not richer than i
forall(i, j in r where i != j) (
   richer[i,j] = 1 <-> richer[j,i] = 0
)


Note that Comet doesn't support equivalence directly; instead we have to use two implications. Update: equivalence in Comet is written as == (I tested <=> which didn't work). Thanks to Pascal Van Hentenryck for pointing this out.
// no one is richer than him-/herself
forall(i in r)
  m.post(richer[i,i] == 0);

// if i is richer than j then j is not richer than i
forall(i in r, j in r : i != j) {
  /* earlier version: two implications
  m.post(richer[i,j] == 1 => richer[j,i] == 0);
  m.post(richer[j,i] == 0 => richer[i,j] == 1);
  */
  // equivalence
  m.post((richer[i,j] == 1) == (richer[j,i] == 0));
}


//  no one is richer than him-/herself
for(int i = 0; i < n; i++) {
    store.impose(new XeqC(richer[i][i], 0));
}

//  if i is richer than j then j is not richer than i
for(int i = 0; i < n; i++) {
    for(int j = 0; j < n; j++) {
        if (i != j) {
            store.impose(new Eq(
                             new XeqC(richer[i][j], 1),
                             new XeqC(richer[j][i], 0)));
        }
    }
}


//   a) no one is richer than him-/herself
for(int i = 0; i < n; i++) {
    m.addConstraint(eq(richer[i][i], 0));
}

//   b) if i is richer than j then j is not richer than i
for(int i = 0; i < n; i++) {
    for(int j = 0; j < n; j++) {
        if (i != j) {
            m.addConstraint(ifOnlyIf(
                                eq(richer[i][j], 1),
                                eq(richer[j][i], 0)));
        }
    }
}


// no one is richer than him-/herself
for(int i = 0; i < n; i++) {
  rel(*this, richer_m(i,i), IRT_EQ, 0, opt.icl());
}

// if i is richer than j then j is not richer than i
for(int i = 0; i < n; i++) {
  for(int j = 0; j < n; j++) {
    if (i != j) {
      post(*this, tt(eqv(
                 richer_m(j,i) == 1, // <=>
                 richer_m(i,j) == 0)), opt.icl());
    }
  }
}

Charles hates noone that Agatha hates

Here is the definition of the condition Charles hates noone that Agatha hates, which is simply the implication:
  if Agatha hates X then Charles doesn't hate X


When starting to model these kinds of problems, I tend to follow the order of the conditions, which here means that the Charles part comes before the Agatha part. When remodeling in another system the order tends to get fixed (cf. the Comet version).
forall(i in r) (
   hates[charles, i] = 0 <- hates[agatha, i] = 1
)


forall(i in r)
  m.post(hates[agatha, i] == 1 => hates[charles, i] == 0);


for(int i = 0; i < n; i++) {
    store.impose(new IfThen(
                     new XeqC(hates[agatha][i], 1),
                     new XeqC(hates[charles][i], 0)));
}


I tend to copy/paste the models between Choco and JaCoP and just change the functions that differ. A consequence of this is that some special features in one of these two systems may go unused.
for(int i = 0; i < n; i++) {
    m.addConstraint(implies(
                        eq(hates[agatha][i], 1),
                        eq(hates[charles][i], 0)));
}


for(int i = 0; i < n; i++) {
   post(*this, tt(imp(
                  hates_m(i,agatha) == 1,
                  // =>
                  hates_m(i,charles) == 0)), opt.icl());
}

No one hates everyone

This is the last condition to compare: No one hates everyone. It is implemented as a sum over the number of persons each person hates, and this sum must be 2 or less. Note that it is possible to hate oneself.


forall(i in r) (
  sum(j in r) (hates[i,j]) <= 2
)


forall(i in r)
  m.post(sum(j in r) (hates[i,j]) <= 2);


Note: We could save the XlteqC constraint by restricting the domain of a_sum to 0..2 instead of 0..n (= 3), but the explicit use of XlteqC is clearer.
for(int i = 0; i < n; i++) {
    FDV a[] = new FDV[n];
    for (int j = 0; j < n; j++) {
        a[j] = hates[i][j];
    }
    FDV a_sum = new FDV(store, "a_sum"+i, 0, n);
    store.impose(new Sum(a, a_sum));
    store.impose(new XlteqC(a_sum, 2));
}


Note: In Choco, sum is an operator, which makes the condition somewhat easier to state than in JaCoP.
for(int i = 0; i < n; i++) {
    IntegerVariable a[] = makeIntVarArray("a", n, 0, 1);
    for (int j = 0; j < n; j++) {
        a[j] = hates[i][j];
    }
    m.addConstraint(leq(sum(a), 2));
}


In Gecode this condition is quite easy to state using linear. In order to use it there is a Matrix "view" of the hates matrix, hates_m.
Matrix<IntVarArray> hates_m(hates, n, n);
// ...
for(int i = 0; i < n; i++) {
  linear(*this, hates_m.row(i), IRT_LQ, 2, opt.icl());
}
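As a cross-check of all the encodings above, the whole puzzle is small enough to brute-force in plain Python. This is only an illustration (the function and constant names are mine), enumerating every 0/1 hates/richer matrix plus the killer:

```python
from itertools import product

AGATHA, BUTLER, CHARLES = 0, 1, 2
N = 3

def find_killers():
    """Enumerate all 0/1 hates/richer matrices; return (killers, solution count)."""
    killers, n_solutions = set(), 0
    for bits in product([0, 1], repeat=2 * N * N):
        hates  = [list(bits[i * N:(i + 1) * N]) for i in range(N)]
        richer = [list(bits[(N + i) * N:(N + i + 1) * N]) for i in range(N)]
        # no one is richer than him-/herself
        if any(richer[i][i] for i in range(N)):
            continue
        # richer[i][j] = 1  <->  richer[j][i] = 0  (for i != j)
        if not all((richer[i][j] == 1) == (richer[j][i] == 0)
                   for i in range(N) for j in range(N) if i != j):
            continue
        # Charles hates noone that Agatha hates
        if any(hates[AGATHA][i] and hates[CHARLES][i] for i in range(N)):
            continue
        # Agatha hates everybody except the butler
        if not (hates[AGATHA][AGATHA] and hates[AGATHA][CHARLES]
                and not hates[AGATHA][BUTLER]):
            continue
        # the butler hates everyone not richer than Agatha,
        # and everyone whom Agatha hates
        if not all(hates[BUTLER][i] for i in range(N)
                   if not richer[i][AGATHA] or hates[AGATHA][i]):
            continue
        # noone hates everyone
        if any(sum(row) == N for row in hates):
            continue
        # a killer always hates, and is no richer than, his victim
        for killer in range(N):
            if hates[killer][AGATHA] and not richer[killer][AGATHA]:
                killers.add(killer)
                n_solutions += 1
    return killers, n_solutions
```

It finds Agatha as the only possible killer, in 8 solutions; the 8 solutions differ only in the hates/richer entries that the clues leave unconstrained.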

End comment

The mandatory end comment: There are probably better ways of modeling the problem than those shown above, either by changing some details or by modeling the problem completely differently. Maybe this will be done sometime...

May 08, 2009

Learning Constraint Programming III: decomposition of a global constraint: alldifferent_except_0

The series Learning Constraint Programming is a preparation for My talk at SweConsNet Workshop 2009: "Learning Constraint Programming (MiniZinc, JaCoP, Choco, Gecode/R, Comet, Gecode): Some Lessons Learned". Confusingly, the entries are not numbered in any logical order. Sorry about that.

Here are the previous entries:


The global constraint alldifferent_except_0 (or the more general variant alldifferent_except_c) is one of my favorite global constraints. It is very handy to use when 0 (or any constant c) is coded as an unknown or irrelevant value. Then we can constrain the rest of the values to be distinct.

The great Global Constraint Catalog entry for alldifferent_except_0 explains this constraint as:
Enforce all variables of the collection VARIABLES to take distinct values, except those variables that are assigned to 0.

For example, for a sequence containing zeros together with the values 5, 1, 9 and 3, the alldifferent_except_0 constraint holds, since all the values that are different from 0 are distinct.
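As a plain check (outside any CP system), the decomposition simply says that every pair of values that are both nonzero must differ. A Python sketch of that test:

```python
def all_different_except_0(xs):
    """Decomposition as a check: every pair of nonzero values must differ."""
    return all(xs[i] != xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs))
               if xs[i] != 0 and xs[j] != 0)
```

In the CP decompositions below the same pairwise condition is stated as reified constraints over decision variables instead of a check over known values.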


I have modeled a decomposition of alldifferent_except_0 in the following models, where the constraint is just tested, perhaps combined with some other constraint, e.g. sorted or a requirement that there be at least some zeros:

- MiniZinc alldifferent_except_0.mzn
- Comet:
- Gecode/R: all_different_except_0.rb
- Choco:
- JaCoP:
- Gecode: alldifferent_except_0.cpp

Some models using alldifferent_except_0

And here is some real use of the constraint:

- Nonogram (Comet): (A faster model using the regular constraint is described here and here).
- I wrote about alldifferent_except_0 in Pi Day Sudoku 2009. However, a faster way of solving the problem was found and is described in Solving Pi Day Sudoku 2009 with the global cardinality constraint. Note: the competition is still on, so there is no link to any model.
- Sudoku generate (Comet):
- all paths graph (MiniZinc): all_paths_graph.mzn
- Cube sum (MiniZinc): cube_sum.mzn
- Message sending (MiniZinc): message_sending.mzn

As the first two entries indicate, there may be faster solutions than using (a decomposition of) alldifferent_except_0, but even as a decomposition it is a great conceptual tool when modeling a problem.


In the implementations below we also see how to define a function (predicate) in each of the constraint programming systems.

For the Gecode/R model there are different approaches:
- the "standard" ("direct") approach, where we loop over all different pairs of elements and ensure that if both values are different from 0 then they must be different
- using count
- using global cardinality ("simulated" in Gecode/R, see below)

Also, in some models we use the slightly more general version alldifferent_except_c, where c is any constant (e.g. "Pi" in the Pi Day Sudoku puzzle mentioned above).


Model: alldifferent_except_0.mzn.
predicate all_different_except_0(array[int] of var int: x) =
   let {
      int: n = length(x)
   } in
   forall(i,j in 1..n where i != j) (
        (x[i] > 0 /\ x[j] > 0) ->
        x[i] != x[j]
   );

% usage:
constraint all_different_except_0(x);


function void alldifferent_except_0(Solver<CP> m, var<CP>{int}[] x) {
  int n = x.getSize();
  forall(i in 1..n, j in i+1..n) {
    m.post(
           x[i] > 0 && x[j] > 0
           =>
           x[i] != x[j]
    );
  }
}

// usage
exploreall<m> {
  // ...
  alldifferent_except_0(m, x);
}



When modeling the constraint in Gecode/R, I experimented with different approaches. The reification variant all_different_except_0_reif is actually quite fast.
# The simplest and the fastest implementation
# using count for 1..max (poor man's global cardinality)
def all_different_except_0
  # loop over the values that can occur (here assumed to be 1..max_val)
  (1..max_val).each{|i|
    self.count(i).must <= 1
  }
end

# global cardinality version using an extra array with the counts
def global_cardinality(xgcc)
  (1..max_val).each{|i|
    xgcc[i].must == self.count(i)
  }
end

# The standard approach using reification.
def all_different_except_0_reif(x)
  n = x.length
  b1_is_an bool_var_matrix(n,n)
  b2_is_an bool_var_matrix(n,n)
  b3_is_an bool_var_matrix(n,n)
  n.times{|i|
    n.times{|j|
      if i != j then
        x[i].must_not.equal(0, :reify => b1[i,j])
        x[j].must_not.equal(0, :reify => b2[i,j])
        x[i].must_not.equal(x[j], :reify => b3[i,j])
        (b1[i,j] & b2[i,j]).must.imply(b3[i,j])
      end
    }
  }
end

    # ...
    # usage:
    # all_different_except_0_gcc(x)
    # all_different_except_0_reif(x)



Note that here alldifferent_except_0 is derived from the more general version alldifferent_except_c.
// decomposition of alldifferent except 0
public void allDifferentExcept0(CPModel m, IntegerVariable[] v) {
    allDifferentExceptC(m, v, 0);
}

// slightly more general: alldifferent except c
public void allDifferentExceptC(CPModel m, IntegerVariable[] v, int c) {
    int len = v.length;
    for(int i = 0; i < v.length; i++) {
        for(int j = i+1; j < v.length; j++) {
            m.addConstraint(implies(
                                and(gt(v[i], c),
                                    gt(v[j], c)),
                                neq(v[i], v[j])));
        }
    }
}

// ...
// usage:
allDifferentExcept0(m, x);




This is exactly the same approach as the Choco version.
// decomposition of alldifferent except 0
public void allDifferentExcept0(FDstore m, FDV[] v) {
    allDifferentExceptC(m, v, 0);
} // end allDifferentExcept0

// slightly more general: alldifferent except c
public void allDifferentExceptC(FDstore m, FDV[] v, int c) {
    int len = v.length;
    for(int i = 0; i < v.length; i++) {
        for(int j = i+1; j < v.length; j++) {
            m.impose(new IfThen(
                         new And(
                             new XneqC(v[i], c),
                             new XneqC(v[j], c)),
                         new XneqY(v[i], v[j])));
        }
    }
} // end allDifferentExceptC

// ...
// usage:
allDifferentExcept0(store, x);



The Gecode version is very succinct since it uses overloaded boolean operators. Very nice.
void alldifferent_except_0(Space& space, IntVarArray x, IntConLevel icl = ICL_BND) {
  for(int i = 0; i < x.size(); i++) {
    for(int j = i+1; j < x.size(); j++) {
      post(space, tt(
           imp(x[i] != 0 && x[j] != 0,
           // =>
           x[i] != x[j])), icl);
    }
  }
} // alldifferent_except_0

// ...
// usage:
alldifferent_except_0(*this, x, opt.icl());

May 05, 2009

Learning constraint programming - part II: Modeling with the Element constraint

As indicated at the end of Learning constraint programming (languages) - part I, here are some findings from implementing Crossword, Word square, and Who killed Agatha?. See the links below for the implementations.

The first constraint programming system I learned, after constraint logic programming in Prolog, was MiniZinc. When implementing the problems below I realized that I have been quite spoiled by using MiniZinc. The way MiniZinc (and also Comet) supports the Element constraint, i.e. access into variable arrays/matrices, is very straightforward in these systems, and it doesn't matter whether the array to access is an array of integers or of integer variables. In the Java (Choco, JaCoP) and C++ (Gecode) systems, however, this is another matter. Due to different circumstances I have not implemented these models in Gecode/R.

Element in MiniZinc and Comet
Accessing arrays and matrices in MiniZinc and Comet is simply done using the [] construct, no matter what the types of the array and the index are (I assume integers and integer variables here). For the other systems we must explicitly use the Element constraint (called nth in Choco).
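To spell out what Element means: element(a, x, y) constrains y = a[x], where x (and y) may be decision variables. A plain Python sketch of the domain filtering this implies (an illustration only; the function name is mine):

```python
def element_filter(a, x_dom, y_dom):
    """Propagate y = a[x]: drop x values whose a[x] is outside y's domain,
    then restrict y's domain to the values still reachable through a."""
    new_x = {x for x in x_dom if a[x] in y_dom}
    new_y = {a[x] for x in new_x}
    return new_x, new_y
```

For example, with a = [3, 1, 4, 1] and y restricted to {1, 3}, index 2 is pruned from x's domain because a[2] = 4 is not a possible value for y.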


Crossword

This is a standard constraint programming problem, used as a running example in Apt's great book Principles of Constraint Programming. Here is a formulation of the problem (stated differently than in the book):
Place the words listed below in the following crossword. The '#' means a blocked cell, and the numbers indicate the overlappings of the words.

      1   2   3   4   5
  1 | 1 |   | 2 |   | 3 |
  2 | # | # |   | # |   |
  3 | # | 4 |   | 5 |   |
  4 | 6 | # | 7 |   |   |
  5 | 8 |   |   |   |   |
  6 |   | # | # |   | # |
We can use the following words


MiniZinc: crossword.mzn
Gecode/R: (Not implemented)
Gecode: crossword.cpp

Note: I have seen more general models for solving crossword problems in Choco, JaCoP, Gecode/R, and Gecode, with constructions other than the simple Element used here. Since I wanted to compare the same way of solving the problem, using the same Element construct, this may be an unfair comparison between the systems. Well, it is at least a finding about how to implement this problem with Element...

Explanation of variables
The matrix A holds the individual chars of the words (Comet variant):
int A[1..num_words,1..word_len] =
  [
   [h, o, s, e, s], //  HOSES
   [l, a, s, e, r], //  LASER
   [s, a, i, l, s], //  SAILS
   [s, h, e, e, t], //  SHEET
   [s, t, e, e, r], //  STEER
   [h, e, e, l, 0], //  HEEL
   [h, i, k, e, 0], //  HIKE
   [k, e, e, l, 0], //  KEEL
   [k, n, o, t, 0], //  KNOT
   [l, i, n, e, 0], //  LINE
   [a, f, t, 0, 0], //  AFT
   [a, l, e, 0, 0], //  ALE
   [e, e, l, 0, 0], //  EEL
   [l, e, e, 0, 0], //  LEE
   [t, i, e, 0, 0]  //  TIE
  ];
overlapping is the matrix of the overlapping cells. This is the Comet version:
int overlapping[1..num_overlapping, 1..4] =
 [
  [1, 3, 2, 1],   //  s
  [1, 5, 3, 1],   //  s
  [4, 2, 2, 3],   //  i
  [4, 3, 5, 1],   //  k
  [4, 4, 3, 3],   //  e
  [7, 1, 2, 4],   //  l
  [7, 2, 5, 2],   //  e
  [7, 3, 3, 4],   //  e
  [8, 1, 6, 2],   //  l
  [8, 3, 2, 5],   //  s
  [8, 4, 5, 3],   //  e
  [8, 5, 3, 5]    //  r
 ];
E is the variable array deciding which word to use in the different overlappings. This is in fact the only variable (array) that is needed in the problem, apart from the utility/convenience variables.

The main constraint for the crossword example is stated in each system as follows.

MiniZinc:

forall(i in 1..num_overlapping) (
   A[E[overlapping[i,1]], overlapping[i,2]] = A[E[overlapping[i,3]], overlapping[i,4]]
)

Comet:

forall(i in 1..num_overlapping) {
  m.post(A[E[overlapping[i,1]], overlapping[i,2]] == A[E[overlapping[i,3]], overlapping[i,4]], onDomains);
}
Choco: Note that Choco has a special Element (nth) which supports two-dimensional arrays (matrices), which we use.
for(int I = 0; I < num_overlapping; I++) {
  IntegerVariable tmp = makeIntVar("tmp" + I, 1, 26);
  M.addConstraint(nth(E[overlapping[I][0]], W[overlapping[I][1]], AX, tmp));
  M.addConstraint(nth(E[overlapping[I][2]], W[overlapping[I][3]], AX, tmp));
}
JaCoP: Here we use some trickery: a transposed version of the word matrix, since JaCoP has no special Element constraint for two-dimensional arrays.
for (int I = 0; I < num_overlapping; I++) {
   FDV tmp = new FDV(store, "TMP" + I, 0, num_words*word_len);
   store.impose(new Element(E[overlapping[I][0]], words_t[overlapping[I][1]], tmp));
   store.impose(new Element(E[overlapping[I][2]], words_t[overlapping[I][3]], tmp));
}
Gecode: This is more complicated compared to the two Java systems, since in Gecode we use an array (of length rows*cols) to simulate the matrix. (There is a Matrix "view" in Gecode but the indices must be plain integers, not IntVars, so it cannot be used here.) Also, the constraints plus and mult take IntVar arguments.

The first overlapped crossing is "expanded" like this (Gecode is 0-based):
   A[E[overlapping[i,0]], overlapping[i,1]] // MiniZinc/Comet
   a1 = A[ E[I*4+0] * word_len + overlapping[I*4+1]] // Gecode
Here is the complete code. The comments hopefully explains what is going on.

First we define a utility function for accessing the elements according to the above principle.
/*
 * Special version of element for an array version of a "matrix" words.
 * E is an integer variable array, C is an array of IntVars for
 * the offset j in the words "matrix".
 * The call
 *    element_offset(*this, words, E[i], word_len_v, C[j], res, opt.icl());
 * corresponds to:
 *    res = words[E[i], j] --> words[E[i]*word_len+j]
 */
void element_offset(Space& space,
                   IntArgs words,
                   IntVar e,
                   IntVar word_len,
                   IntVar c,
                   IntVar res,
                   IntConLevel icl = ICL_DOM) {

      element(space, words,
              plus(space,
                   mult(space, e, word_len, icl),
                   c, icl),
              res, icl);
}

The function is then used as follows:
for(int I = 0; I < num_overlapping; I++) {
   IntVar e1(*this, 0, num_overlapping*4);
   IntVar e2(*this, 0, num_overlapping*4);

   IntVarArray o(*this, 4, 0, num_overlapping*4);
   for(int J = 0; J < 4; J++) {
     post(*this, o[J] == overlapping[I*4+J], opt.icl());
   }

   element(*this, E, o[0], e1, opt.icl());      // e1 = E[I*4+0]
   element(*this, E, o[2], e2, opt.icl());      // e2 = E[I*4+2]

   IntVar a1(*this, 0, num_words*word_len);
   element_offset(*this, A, e1, word_len_v, o[1], a1, opt.icl());
   element_offset(*this, A, e2, word_len_v, o[3], a1, opt.icl());
}
(The same element_offset function is also used in the Word square problem below.) It took quite some time to get the function and the temporary variables (and their domains) right. With training (and element_offset as a skeleton) similar problems should be easier to implement.
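The index arithmetic that element_offset encodes is just flattening a matrix to one dimension; in plain Python (an illustration only, mirroring the name of the Gecode function above):

```python
def element_offset(words_flat, e, word_len, c):
    """res = words[e][c] on a flattened rows*cols "matrix":
    words_flat[e*word_len + c]."""
    return words_flat[e * word_len + c]
```

The CP version has to express the same e*word_len + c index as constraints (mult and plus) because e is a decision variable, not a known integer.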

Note: this is not Gecode-bashing. Gecode is a great system; it just happens that for this specific problem Gecode does not have the appropriate support. I should also mention that it was a long time since I programmed in C++, and I am a little rusty.

As mentioned earlier, I have been very spoiled by the MiniZinc (and Comet) constructs. Also: I'm a very 'lazy' person (in the Perl sense of the word) and like the agile programming languages - Perl, Ruby, Python etc. - much for their high-level constructs.

Word square problem

The Word square problem is a cousin of the crossword problem, and is described in Wikipedia's Word_square entry:
A word square is a special case of acrostic. It consists of a set of words, all having the same number of letters as the total number of words (the "order" of the square); when the words are written out in a square grid horizontally, the same set of words can be read vertically.
An example of order 7 found by the Comet model, where we see that the first row word (aalborg) is also the first column word.
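The defining property - the character in cell (i,j) equals the one in cell (j,i) - can be stated as a plain Python check (an illustration, not one of the models):

```python
def is_word_square(words):
    """A word square of order n: n words of length n whose
    rows read the same as the corresponding columns."""
    n = len(words)
    return (all(len(w) == n for w in words) and
            all(words[i][j] == words[j][i]
                for i in range(n) for j in range(n)))
```

The classic order-5 Sator square (sator, arepo, tenet, opera, rotas) passes this check.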



Here are the models for solving the Word square problem:
MiniZinc: word_square.mzn
Gecode/R: (Not implemented in Gecode/R)
Gecode: word_square.cpp

It is somewhat easier than the crossword problem. As before, E is the array of indices of the words to use, and words is a matrix of the words. Also, these models are an experiment in how to read a file, the word list /usr/dict/words (standard on Unix/Linux systems).

MiniZinc:

forall(I, J in 1..word_len) (
  words[E[I], J] = words[E[J], I]
)

Comet:

forall(i in 1..word_len) {
  forall(j in 1..word_len) {
    m.post(words[E[i], j] == words[E[j], i], onDomains);
  }
}
// The overlappings (crossings).
// Note that we use a transposed word matrix for the Element.
for(int i = 0; i < word_length ; i++) {
    for(int j = 0; j < word_length ; j++) {
        // Comet: words[E[i], j] ==  words[E[j],i]
        FDV tmp = new FDV(store, "tmp" + i + " " + j, 0, dict_size);
        store.impose(new Element(E[i], words[j], tmp));
        store.impose(new Element(E[j], words[i], tmp));
    }
}
// Constants for the nth constraint below
IntegerVariable [] C = new IntegerVariable[dict_size];
for (int I = 0; I < word_length; I++) {
    C[I] = makeIntVar("C"+I, I, I);
}

// The overlappings (crossings)
for(int I = 0; I < word_length ; I++) {
    for(int J = 0; J < word_length ; J++) {
        // Comet: words[E[i], j] ==  words[E[j],i]
        IntegerVariable tmp = makeIntVar("tmp" + I + " " + J, 0, dict_size);
        M.addConstraint(nth(E[I], C[J], words, tmp));
        M.addConstraint(nth(E[J], C[I], words, tmp));
    }
}
Note that this model uses the same function element_offset that was used in the Crossword problem. It took some time to realize that it could be used here too.
// convenience variables for the element constraints below,
// since element, plus, and mult want IntVars.
IntVar word_len_v(*this, word_len, word_len);
IntVarArray C(*this, word_len, 0, word_len-1);
for(int i = 0; i < word_len; i++) {
  rel(*this, C[i], IRT_EQ, i, opt.icl());
}

for(int i = 0; i < word_len; i++) {
  for(int j = 0; j < word_len; j++) {
    // words[E[i], j] ==  words[E[j],i]

    IntVar tmp(*this, 0, num_words);

    // tmp == words[E[i], j] --> words[E[i]*word_len+j]
    element_offset(*this, words, E[i], word_len_v, C[j], tmp, opt.icl());

    // tmp == words[E[j], i]  --> words[E[j]*word_len+i]
    element_offset(*this, words, E[j], word_len_v, C[i], tmp, opt.icl());
  }
}

Who killed Agatha?

This is a standard benchmark for theorem proving, also known as The Dreadsbury Mansion Murder Mystery.

Problem formulation from The h1 Tool Suite
Someone in Dreadsbury Mansion killed Aunt Agatha. Agatha, the butler, and Charles live in Dreadsbury Mansion, and are the only ones to live there. A killer always hates, and is no richer than his victim. Charles hates noone that Agatha hates. Agatha hates everybody except the butler. The butler hates everyone not richer than Aunt Agatha. The butler hates everyone whom Agatha hates. Noone hates everyone. Who killed Agatha?
Originally from F. J. Pelletier: Seventy-five problems for testing automatic theorem provers., Journal of Automated Reasoning, 2: 191-216, 1986.


MiniZinc: who_killed_agatha.mzn
JaCoP :
JaCoP :
Gecode: who_killed_agatha.cpp

In Some new Gecode models I wrote about the findings from implementing this problem in Gecode, compared to Comet/MiniZinc.

The models use two 3x3 matrices representing the two relations hates and richer, with 0..1 as domain (i.e. boolean). The Element constraint is used for implementing the condition A killer always hates, and is no richer than his victim, where the_killer is an integer variable; the_victim is in some models replaced by agatha (the integer 0). The interesting thing here is that at least one of the indices is an integer variable, which is what caused the difficulties in the two problems above.

These models also use a lot of boolean constructs. A comparison of how these are implemented in the different CP systems may be the topic of a future blog post.

MiniZinc:

hates[the_killer, the_victim] = 1 /\
richer[the_killer, the_victim] = 0

Comet:

m.post(hates[the_killer, the_victim] == 1);
m.post(richer[the_killer, the_victim] == 0);
Note: In the models below I have simplified the problem by using agatha (defined as the integer 0) instead of the integer variable the_victim. This is not a problem since we know that Agatha is the victim, and it is the reason why Element is easier to use here than in Crossword and Word square.

JaCoP variant 1 (no Element):
JaCoP doesn't have direct support for the case where the index i (in matrix[i][j]) is an integer variable, so the first variant of modeling the condition A killer always hates, and is no richer than his victim does not use Element at all. We simply loop over all integers (0..2), check whether "this" i equals the_killer, and can then use plain integers for accessing the matrices. Also, note the IfThen construct.
for(int i = 0; i < n; i++) {
    store.impose(new IfThen(
                     new XeqC(the_killer, i),
                     new XeqC(hates[i][agatha], 1)));
    store.impose(new IfThen(
                     new XeqC(the_killer, i),
                     new XeqC(richer[i][agatha], 0)));
}
This was the first variant I implemented, but then I recalled the "trickery" used in Crossword and Word square, where the matrices were transposed so that Element could be used. The problem with this approach is that all constraints must be rewritten in a way that may be confusing. Come to think of it, maybe the names of the matrices should have been changed to is_hated_by and poorer.

JaCoP variant 2 (transposed matrices, Element)
This method of transposition and using Element is implemented in the second JaCoP model. The constraint is now much simpler:
int shift = -1;
store.impose(new Element(the_killer, hates[agatha], one, shift));
store.impose(new Element(the_killer, richer[agatha], zero, shift));
Note: Element in JaCoP defaults to start index 1, but has support for shifting it to 0 by using -1 as the shift parameter.

Choco variant 1 (no Element)
I implemented exactly the same principle as in the two JaCoP models in the two Choco models. The first - no Element - is:

for(int i = 0; i < n; i++) {
    m.addConstraint(implies(
                        eq(the_killer, i),
                        eq(hates[i][agatha], 1)));
    m.addConstraint(implies(
                        eq(the_killer, i),
                        eq(richer[i][agatha], 0)));
}
Choco variant 2 (transposed matrices, nth)
Note: one and zero are integer variables since nth cannot handle plain integers as the last argument.
m.addConstraint(nth(the_killer, hates[agatha], one));
m.addConstraint(nth(the_killer, richer[agatha], zero));


Here we have seen - not surprisingly - that using the Element constraint is quite different depending on which CP system we use, and it can be easy or not so easy. It was my explicit intention to solve the same problem in as similar a way as possible. We should also note that most (if not all) problems can be modeled in many ways, some not using Element at all.

One last comment: The two Java models of Who killed Agatha? took quite a long time to implement. The main reason was not the handling of Element but a bug where I confused the two matrices in one of the conditions. Sigh.

April 26, 2009

Learning constraint programming (languages) - part I

While preparing for My talk at SweConsNet Workshop 2009: "Learning Constraint Programming (MiniZinc, JaCoP, Choco, Gecode/R, Comet, Gecode): Some Lessons Learned", I plan to blog about summarizations, findings etc.

This part is a description of the problems I tend to start modeling in a new constraint programming system.

First run
The first model to run is probably SEND+MORE=MONEY or n-queens, since one of them is usually distributed with the package. This is to make sure that the system works, that environment variables are correct, that compilation works, to see how the constraints and search strategies are stated in the system, what to expect in terms of output, etc.
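For reference, SEND+MORE=MONEY is small enough to brute-force without any CP system; a plain Python sketch (the function name is mine):

```python
from itertools import permutations

def send_more_money():
    """Brute-force SEND + MORE = MONEY: all eight letters take
    distinct digits, with no leading zeros."""
    solutions = []
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:
            continue
        send  = 1000 * s + 100 * e + 10 * n + d
        more  = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:
            solutions.append((send, more, money))
    return solutions
```

A CP system, of course, does not enumerate all assignments like this: the constraints prune the search tree, which is part of what makes the problem a good first test of a solver.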

First models
If SEND+MORE=MONEY or n-queens is not included in the distribution (which would be a surprise), I start by implementing that model.

The models
The models below are all the first problems I start with, sorted in the order they usually are implemented. There is also a short explanation of what is learned about the system. After all these models are done, or a larger part of them, I publish them at a My [CP System] page.

The list is very personal and may not suit everybody (if anyone, except me). One reason is that my goal has been to learn about modeling problems using constraint programming, and not so much about implementing propagators etc. (which is why I tend to use decompositions a lot). However, lately I have somewhat shifted the focus to more gory details.

Also, I think it is important when learning new stuff (especially on one's own) to use examples that are well known, fun, and perhaps even have a personal history. Some examples of this "personal attachment" to a problem:
  • The Least Diff problem was a problem in a company competition in 1998 and was actually one of my inspirations to start learning constraint programming. The first version was programmed in a Prolog system, SICStus Prolog.
  • The Seseman problem: this was a problem stated by a fellow blogger in 2005, and I blogged about it (in Swedish) in Sesemans matematiska klosterproblem samt lite Constraint Logic Programming.
  • de Bruijn sequences have been a personal favorite problem for a long time.
  • etc.
The implementations are sorted in (approximately) the order I start to learn the systems.
  • MiniZinc (feb 2008)
  • Choco (first "touch" in 2006, and then more in September 2008)
  • JaCoP (August 2008)
  • Gecode/R (January 2009)
  • Comet (local search module 2007, constraint programming module: January 2009)
  • Gecode (March 2009)
Note: Before 2008 I had sporadically studied (played with) some Prolog systems with support for constraint logic programming, e.g. ECLiPSe and SICStus Prolog, and also some other CP systems (e.g. OPL, XPress, ESSENCE'/Tailor/Minion), but not at all as systematically as with the systems mentioned in the list above. Later on, the following problems have been useful and rewarding to model early. Note: The two problems Crosswords and Word square have not been published before, and I plan to blog more about the findings when modeling these "non-trivial Element" models.

April 07, 2009

My talk at SweConsNet Workshop 2009: "Learning Constraint Programming (MiniZinc, JaCoP, Choco, Gecode/R, Comet, Gecode): Some Lessons Learned"

I'm proud to announce that I will talk at SweConsNet Workshop 2009 on May 27th 2009 in Linköping, Sweden. (See the blog post SweConsNet Workshop 2009 for more info about the workshop.)

The title of the talk is Learning Constraint Programming (MiniZinc, JaCoP, Choco, Gecode/R, Comet, Gecode): Some Lessons Learned, and it will be about my findings when learning different constraint programming systems, including comparisons and - perhaps - some wishes.

For more about these systems, see the following. Also, there is more written in my Swedish blog, mostly about MiniZinc, JaCoP, and Choco: category Constraint programming.

March 22, 2009

Gecode: Guido Tack 'Constraint Propagation - Models, Techniques, Implementation'

Guido Tack is a member of the Gecode team. His doctoral dissertation Constraint Propagation - Models, Techniques, Implementation (2009, as PDF) discusses constraint propagation in general and gives many examples of how propagation works in Gecode.


This dissertation presents the design of a propagation-based constraint solver. The design is based on models that span several levels of abstraction, ranging from a mathematical foundation, to a high-level implementation architecture, to concrete data structures and algorithms. This principled design approach results in a well-understood, correct, modular, and efficient implementation.

The core of the developed architecture is the propagation kernel. It provides the propagation infrastructure and is thus crucial for correctness and efficiency of the solver. Based on a mathematical model as well as a careful design of the employed algorithms and data structures, the presented architecture results in an efficient and domain-independent kernel. Constraints are realized by propagators, and implementing a propagator is a challenging, error-prone, and time-consuming task. A practically useful solver must however provide a comprehensive propagator library. This dissertation introduces two techniques for automatically deriving correct and efficient propagators. Views generalize variables and are used to derive propagators from existing propagators. For constraints over set variables, propagators are derived from formal constraint specifications.

The presented techniques are the basis of Gecode, a production-quality, highly efficient, and widely deployed constraint solver. Gecode is the empirical evidence for success and relevance of the principled design approach of this dissertation.

Some comments
My interest right now is to step up from "just modeling" with decompositions to understanding more about (and hopefully also writing) "real" propagators, such as global constraints. Programming propagators is not possible in the MiniZinc language, the system I have used most during the last year. Both Comet and Gecode have support for this, as do the Java-based systems Choco and JaCoP. Until now, I have only used Gecode's examples as an inspiration for and comparison to my own models, but I plan to start modeling in Gecode as well.

In view of this, Constraint Propagation - Models, Techniques, Implementation is a perfect book, explaining what constraint propagation is and how it is implemented in Gecode. It is therefore great for understanding how Gecode works. It is also a modern introduction to the state-of-the-art techniques and implementation of propagators, which is one of the most important parts of constraint programming.

Although quite theoretical in parts (it is a dissertation in computer science, after all), the book has a nice pedagogic pace and explains all core concepts very well, often with examples. Also, the proofs - there are a lot of them - are often explained in "normal prose" before or after the proof, which makes it easy to follow the reasoning. (I didn't follow all the proofs, meaning that some I either didn't understand or just skimmed.)

It was very interesting to read the reasons, supported by benchmarks, behind some of the design decisions that were made in Gecode.

I enjoyed the "Related work" part at the end of each chapter or section, which relates the discussed concepts to earlier research. Also, I liked that the bibliography shows on which pages a specific work is referred to in the book.

I happened to read Modeling with Gecode just before Constraint Propagation - Models, Techniques, Implementation, and that was a good thing: Modeling with Gecode shows how to model in Gecode, so the core concepts were laid out before the dissertation. (The Documentation page shows that a document "Programming with Gecode" is in preparation, which I look forward to reading.)

Last, thanks to Mikael Zayenz Lagerkvist for recommending the book.

March 18, 2009

SweConsNet Workshop 2009

SweConsNet (the Network for Sweden-based researchers and practitioners of Constraint programming) arranges the following:

SweConsNet Workshop 2009

The 8th workshop of SweConsNet,
the Network for Sweden-based researchers and practitioners
of Constraint programming

Collocated with the SAIS Workshop 2009
on May 27th 2009 in Linköping, Sweden

Following the previous successful workshops, it is now time for the 8th
SweConsNet workshop, which will take place at Linköping University on 27
May 2009. The collocated SAIS (Swedish AI Society) workshop continues the
day after.

The purpose of this workshop is to learn about ongoing research in
constraint programming, existing projects and products, and further
development of the network.

The workshop is open to everybody interested in the theory and practice of
constraint programming, whether based in Sweden or elsewhere.

We hope for your participation, if not a presentation of your ongoing work,
recent results, or proposal for discussion. There are no paper
submissions, reviews, or proceedings, hence recent conference/journal
papers may also be presented. For better planning, we need a statement of
intent, and desirably the title and abstract of your talk. Please email
this information as soon as possible to us, the programme chairs. A
preliminary list of presentations is at the workshop website, and several
time slots remain available.

Please forward this message to people who might be interested in this
workshop but are not yet on the SweConsNet mailing list. They can
subscribe to it by sending a message to Justin.Pearson at

Justin Pearson and Pierre Flener

This is the program (from SweConsNet Workshop 2009)

There will be a joint invited talk with the SAIS Workshop 2009:

together with the following preliminary list of talks:
  • 10:30 - 11:00 Krzysztof Kuchcinski (Lund University): Design of Processor Accelerators with Constraints
  • 11:00 - 11:30 Mikael Zayenz Lagerkvist (KTH): Propagating Berge-acyclic propagator graphs
  • 12:30 - 13:10 Tomas Nordlander (Sintef, Norway): Nurse Rostering
  • 13:10 - 13:40 Federico Pecora (Örebro University): From Space to Smart Homes: Constraint-Based Planning for Domestic Assistance
  • 13:50 - 14:20 Håkan Kjellerstrand (Sweden): Learning Constraint Programming (MiniZinc, JaCoP, Choco, Gecode/R, Comet, Gecode): Some Lessons Learned
  • 14:20 - 14:50 Wlodek Drabent (Linköping University): Negation with the Well-Founded Semantics for CLP

I will (at least) attend the SweConsNet Workshop, and I am looking forward to listening to all the interesting researchers and practitioners in constraint programming, and hopefully talking to some of them.

Update 2009-05-06
The list of talks is now updated with complete titles and time slots.