November 01, 2014

Decision Management Community November 2014 Challenge: Who killed Agatha?

This blog post was inspired by Decision Management Community November 2014 Challenge: Challenge Nov-2014 Decision model 'Who killed Agatha?'. The original text - and the one linked to from the Challenge page - is here.

From DM Community Challenge Nov-2014:
Someone in Dreadsbury Mansion killed Aunt Agatha.
Agatha, the butler, and Charles live in Dreadsbury Mansion, and
are the only ones to live there. A killer always hates, and is
no richer than his victim. Charles hates noone that Agatha hates.
Agatha hates everybody except the butler. The butler hates everyone
not richer than Aunt Agatha. The butler hates everyone whom Agatha hates.
Noone hates everyone. Who killed Agatha?
A model for this problem is actually one of the first I implement whenever I learn a new Constraint Programming (CP) system, since it exercises some features that are important to test (they are explained in some detail below):
  • reification: "reasoning about constraints"
  • matrix element: how to index a matrix with decision variables
The same general approach is implemented in most of the CP systems that I've tested so far; see below for the full list with links.

All the encodings take the same approach: define two binary 3x3 matrices of decision variables:
  • a "hates" matrix, i.e. who hates whom
  • a "richer than" matrix: who is richer than whom
And a Killer decision variable of range 1..3 (Agatha=1, Butler=2, and Charles=3). In some encodings (such as the one below), Victim is also a decision variable, but we know from the start that Agatha is the victim, so this decision variable can be skipped.

The constraints are then stated over the two matrices plus the Killer decision variable.

First I describe the MiniZinc model in detail, followed by a discussion of the solution and why the model returns 8 solutions (all with the same killer: Agatha).
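Before going into the MiniZinc details, note that the problem is small enough to verify by brute force. The following Python sketch (my own illustration, separate from the models) enumerates every pair of 3x3 binary matrices plus a killer and checks the constraints stated above; it confirms that there are 8 solutions, all with Agatha as the killer:

```python
from itertools import product

AGATHA, BUTLER, CHARLES = 0, 1, 2   # 0-based here; the MiniZinc model uses 1..3
PEOPLE = (AGATHA, BUTLER, CHARLES)

def solutions():
    sols = []
    for h in product((0, 1), repeat=9):
        hates = [h[0:3], h[3:6], h[6:9]]
        # Agatha hates everybody except the butler
        if not (hates[AGATHA][AGATHA] and hates[AGATHA][CHARLES]) \
           or hates[AGATHA][BUTLER]:
            continue
        # Charles hates noone that Agatha hates
        if any(hates[AGATHA][i] and hates[CHARLES][i] for i in PEOPLE):
            continue
        # the butler hates everyone whom Agatha hates
        if any(hates[AGATHA][i] and not hates[BUTLER][i] for i in PEOPLE):
            continue
        # noone hates everyone
        if any(sum(hates[i]) == 3 for i in PEOPLE):
            continue
        for r in product((0, 1), repeat=9):
            richer = [r[0:3], r[3:6], r[6:9]]
            # richer is irreflexive, and exactly one of richer[i][j],
            # richer[j][i] holds for i != j (the assumption discussed below)
            if any(richer[i][i] for i in PEOPLE):
                continue
            if any(richer[i][j] == richer[j][i]
                   for i in PEOPLE for j in PEOPLE if i < j):
                continue
            # the butler hates everyone not richer than Agatha
            if any(not richer[i][AGATHA] and not hates[BUTLER][i]
                   for i in PEOPLE):
                continue
            victim = AGATHA
            for killer in PEOPLE:
                # a killer always hates, and is no richer than, his victim
                if hates[killer][victim] and not richer[killer][victim]:
                    sols.append(killer)
    return sols

sols = solutions()
print(len(sols), set(sols))   # → 8 {0}, i.e. 8 solutions, all with Agatha
```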

The MiniZinc model

My original MiniZinc model is here: who_killed_agatha.mzn.

The encoding shown below is slightly altered for simpler presentation.

Initialization of constants and domains

The following defines the dimension of the matrices (3x3) and also the domain - the possible values - of the killer and victim variables (1..3).
int: n = 3;
set of int: r = 1..3;
Here we define the constants agatha, butler, and charles. The values are fixed to the integers 1..3 since the solvers used handle finite domains in terms of integers. (There are exceptions to this; in short: finite domains over integers are the most common in CP. Some systems also support set variables and/or float decision variables.)
% the people involved
int: agatha  = 1;
int: butler  = 2;
int: charles = 3;

Decision variables

The two decision variables the_killer and the_victim both have the domain 1..3, i.e. the set {agatha, butler, charles}.
var r: the_killer;
var r: the_victim;
The 3x3 entries in the two matrices hates and richer are binary decision variables: 1 represents true, 0 represents false.
array[r,r] of var 0..1: hates;
array[r,r] of var 0..1: richer;
The following statement just states that the CP solver should use its default search method.
solve satisfy;
Aside: There is a range of different search heuristics available for most CP solvers, which can speed up more complex CSPs significantly; e.g. for an array x of decision variables one could state:
   solve :: int_search(x, first_fail, indomain_split, complete);
but for this simple problem we just settle for the default.


Next is the constraint section where all the requirements (constraints) of the problem are specified. The order of the constraints is the same order as the problem description. In contrast to the MiniZinc version on my MiniZinc site (the link above), I have here separated each constraint in its own section to emphasize the specific constraint.

Constraint: A killer always hates, and is no richer than his victim.

Here we see an example of a "matrix element" constraint, i.e. a decision variable (the_killer) is used as an index into a matrix of decision variables, e.g.
     hates[the_killer, the_victim] = 1
[Aside: In MiniZinc this can be done in this most syntactically pleasing ("natural") way, though most of the other CP systems cannot handle this syntax, so an alternative syntax must be used. However, most CP systems have a syntax for the one-dimensional element constraint, which is then stated as
    element(Y, X, Z),
which corresponds to
    X[Y] = Z,
where X and both Y and Z can be decision variables. The element constraint is central to CP and is one of the features that separates it from traditional LP/MIP modeling systems.] The first constraint
   hates[the_killer, the_victim] = 1
simply means that the killer hates the victim, and the second that the killer is not richer than the victim.
   % A killer always hates, and is no richer than his victim. 
   hates[the_killer, the_victim] = 1 /\
   richer[the_killer, the_victim] = 0
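To make the element constraint more concrete, here is a Python sketch of one round of the domain filtering it performs for X[Y] = Z (an illustration only, with 0-based indices; real solvers implement this as a propagator):

```python
def element_filter(x_doms, y_dom, z_dom):
    """One round of domain filtering for the constraint X[Y] = Z.

    x_doms : list of sets, the domains of the array entries X[i]
    y_dom  : set of still-possible indices for Y
    z_dom  : set of still-possible values for Z
    """
    # an index i is feasible only if X[i] can still take some value in Z's domain
    y_dom = {i for i in y_dom if x_doms[i] & z_dom}
    # a value v is feasible only if some still-feasible X[i] can take it
    z_dom = {v for v in z_dom if any(v in x_doms[i] for i in y_dom)}
    return y_dom, z_dom

# X = [X0, X1, X2] with domains {1,2}, {3}, {4,5}; Y in {0,1,2}; Z in {3,4}:
# index 0 is pruned from Y, since X0 cannot take any value in {3,4}
y_dom, z_dom = element_filter([{1, 2}, {3}, {4, 5}], {0, 1, 2}, {3, 4})
print(y_dom, z_dom)
```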

The concept of richer

The only relations we are using here are hates and richer, and we must take care of the meaning of these concepts.

Hates: Regarding the concept of hatred, there is no logic restricting how it can be used: A and B can both hate each other, one of them can hate the other, or neither hates the other. Note that A can hate him/herself.

Richer: The concept of richer is a completely different matter, however. There is a logic involved in this relation:
  • if A is richer than B, then B cannot be richer than A
  • A cannot be richer than him/herself
Realizing that the richer relation is special is important for this model. Without it, there would be many more solutions (256 instead of 8), though all these 256 solutions point to the same killer: Agatha. (See below for an analysis of the 8 different solutions.)

As mentioned above, we don't need to - and cannot - do a similar analysis on (the semantic meaning of) the hate relation.

See below A note on the richer relation for a comment on the assumption that either A is richer than B or B is richer than A.
   % define the concept of richer

   % no one is richer than him-/herself
   forall(i in r) (
      richer[i,i] = 0
   )
   /\
   % (contd...) if i is richer than j then j is not richer than i
   forall(i, j in r where i != j) (
      richer[i,j] = 1 <-> richer[j,i] = 0
   )

Constraint: Charles hates noone that Agatha hates.

Here again, reification is handy. In pseudo code:
  FOR EACH person I in the house:
     IF Agatha hates I THEN Charles doesn't hate I
  % Charles hates noone that Agatha hates. 
   forall(i in r) (
      hates[charles, i] = 0 <- hates[agatha, i] = 1
   )

Constraint: Agatha hates everybody except the butler

Here we simply state three hate facts.

It is important not to forget that Agatha can hate herself. Missing this fact in the model would yield that either Agatha or Charles is the killer.
   % Agatha hates everybody except the butler. 
   hates[agatha, charles] = 1  /\
   hates[agatha, agatha] = 1 /\
   hates[agatha, butler] = 0

Constraint: The butler hates everyone not richer than Aunt Agatha.

Same as above, we use reification to handle this:
   FOR EACH person I in the house
     IF I is not richer than Agatha THEN Butler hates I
which is implemented as
   % The butler hates everyone not richer than Aunt Agatha. 
   forall(i in r) (
     hates[butler, i] = 1 <- richer[i, agatha] = 0
   )

Constraint: The butler hates everyone whom Agatha hates.

Same reasoning with reifications as above.
   % The butler hates everyone whom Agatha hates. 
   forall(i in r) (
      hates[butler, i] = 1 <- hates[agatha, i] = 1
   )

Constraint: Noone hates everyone.

Here we count - for each person - the number of 1's (i.e. true) for each row in the hates matrix, and ensure that they are less than 3 (i.e. not all).
   % Noone hates everyone. 
   forall(i in r) (
     sum(j in r) (hates[i,j]) <= 2
   )

Who killed Agatha?

As mentioned above, this is not really needed since we could have hard coded the_victim to agatha without changing anything.
   % Who killed Agatha? 
   the_victim = agatha
To summarize: This model uses a couple of important concepts in Constraint Programming:
  • decision variables with specific (finite) domains
  • reifications, implication and equivalence between constraints
  • element constraint (here in terms of the more complex variant: matrix element)

Solution and analysis

There are in total 8 different solutions for this MiniZinc model, all stating that Agatha is the killer. (Being able to get all solutions to a combinatorial problem is an excellent feature of Constraint Programming.)

The reason this model gives more than one solution is that it is - in my interpretation of the problem - under-constrained: there is not enough information about certain relations in the hates and richer matrices to yield a unique solution.

Below are the values of the hates and richer matrices after all constraints have been activated, but before search (i.e. before traversing the search tree and assigning all the decision variables to give a solution), as well as the killer variable.

The entries with "0..1" are the decision variables that are not constrained (decided) enough before search. (Note: Not all CP solvers have the feature to show the inferred variable domains before search. The table below is from my Picat model, slightly altered: who_killed_agatha.pi)
    killer= 1

    hates:
    Agatha : Agatha : 1     Butler : 0        Charles: 1
    Butler : Agatha : 1     Butler : 0        Charles: 1
    Charles: Agatha : 0     Butler : 0..1     Charles: 0

    richer:
    Agatha : Agatha : 0     Butler : 0        Charles: 0..1
    Butler : Agatha : 1     Butler : 0        Charles: 0..1
    Charles: Agatha : 0..1  Butler : 0..1     Charles: 0
These undecided variables concern whether:
  • Charles hates Butler (or not)
  • Agatha is richer than Charles (or not)
  • Butler is richer than Charles (or not)
  • Charles is richer than Agatha (or not)
  • Charles is richer than Butler (or not)
We see that:
  • The killer has been assigned to 1 (= Agatha).
  • Many (namely 5) of the variables involving Charles are undecided before search, i.e. the constraints alone cannot determine whether they are true or not. This is perhaps not too surprising, since most constraints in the model - and in the problem statement - involve only Agatha and the Butler.
Also, two pairs of these (under-constrained) Charles variables cannot both be true at the same time in the same solution. In each solution, one variable of each pair must be true and the other false:
  • "Agatha is richer than Charles" and "Charles is richer than Agatha"
  • "Butler is richer than Charles" and "Charles is richer than Butler"
This setting is basically the same as having 5 binary variables, V1..V5, where the variables V1 and V2 are xored (i.e. that one of V1 and V2 must be true and the other false), and variables V3 and V4 are xored.
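This count can be checked mechanically; here is a small Python count over the 2^5 assignments (the pairing of V1..V5 with the concrete statements is my own reading of the list above):

```python
from itertools import product

# V1 = "Agatha is richer than Charles", V2 = "Charles is richer than Agatha",
# V3 = "Butler is richer than Charles", V4 = "Charles is richer than Butler",
# V5 = "Charles hates Butler" (unconstrained).
# V1/V2 are xored, V3/V4 are xored, V5 is free.
count = sum(1 for v in product((0, 1), repeat=5)
            if v[0] != v[1] and v[2] != v[3])
print(count)   # → 8
```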

Thus there are 2 * 2 * 2 = 8 different solutions: the two xored pairs contribute a factor 2 each, and the free variable ("Charles hates Butler") contributes another factor 2.
This concludes the analysis.

A note on the "richer" relation

Nothing rules out that A can be as rich as B (i.e. that neither is richer than the other). However, in this simple model, we assume a stricter variant: that either A is richer than B, or B is richer than A. If we change the equivalence
   richer[i,j] = 1 <-> richer[j,i] = 0
(which means:
   IF A is richer than B THEN B is not richer than A
   IF B is not richer than A THEN A is richer than B)
to an implication
    richer[i,j] = 1 -> richer[j,i] = 0
(IF A is richer than B THEN B is not richer than A)
then the principal solution doesn't change, but we would get 18 solutions instead of 8, all pointing to the same killer: Agatha.


Here is a list of all the encodings of the Agatha murder problem that I have implemented so far. Whenever I learn a new CP system, the Agatha problem will probably be one of the first to test.

May 19, 2014

JaCoP 4.1 released (with support for float variables)

JaCoP version 4.1 has been released:
Dear users,

We have just released JaCoP 4.1. Feel free to clone JaCoP repository - and contribute. In this release we introduce floating-point variables (FloatVar) and constraints on these variables. The methods used in floating point domain of JaCoP are based on floating point intervals and consistency methods build around them. We provide a set of basic arithmetic constraints as well as square root, absolute value, trigonometric constraints (sin, cos,tan, asin, acos, atan), exponential constraint and natural logarithm. Moreover element constraint and equation between integer and float variables makes it possible to build models that mix different JaCoP variables. We have also included experimental implementation of several features that is not consider as final and can change in future.

The floating point domain is fully integrated with JaCoP solver. The same search methods can be used as well as floating point variables can be used as cost functions for minimization. We provide also specialized optimization methods that are specially developed for floating point variables.

The released version provides also flatzinc interpreter for floating point variables. Minizinc models containing float variables can be compiled using standard mzn2fzn compiler and used by JaCoP solver.

This release fixes also few bugs.

Best regards, JaCoP Core Developer Team
The GitHub repo is:

I tested some MiniZinc models with float variables, and it worked quite nicely.

March 29, 2014

SweConsNet 2014: Call for presentations and participation (June 13, 2014, Kista, Sweden)

The SweConsNet (Network for Sweden-based researchers and practitioners of Constraint programming) Workshops are held annually in the spring. This year, SweConsNet-2014 is to be held in Kista, Sweden, June 13. Here's the call for presentations and participation:
The 13th workshop of SweConsNet, the Network for Sweden-based researchers and practitioners of Constraint programming

Hosted by:
SCALE - KTH & SICS Collaboration in Scalable Computing Systems
SICS, Electrum, Kista, Sweden
June 13th, 2014

The Network for Sweden-based researchers and practitioners of Constraint technology (SweConsNet) kindly invites you to participate in the yearly SweConsNet Workshop. The purpose of the workshop is to learn about ongoing research in Constraint Programming, existing projects and products, and further development of the network. SweConsNet Workshop 2014 is the 13th edition of this annual event.

The workshop is open to everybody interested in the theory and practice of constraint programming, whether based in Sweden or elsewhere. The scope of the workshop spans all areas of Constraint Programming, and is open to presentations and discussions addressing topics related to both theory and application.

We hope for your participation, and highly encourage you to submit a proposal for a presentation of your ongoing work, recent results, or of a relevant discussion topic. There are no paper submissions, reviews, or proceedings, hence recent conference/journal papers may also be presented.

To register, please send a brief statement of intent, and desirably the title and abstract of your talk, to Christian Schulte (cschulte at In order to facilitate organization, please notify us of your intention to participate as soon as possible, and at the latest before May 28th, 2014. The workshop does not have a registration fee.

A preliminary list of speakers will be provided at the workshop website, and several time slots remain available.

Please forward this message to anyone who might be interested in this workshop but is not yet on the SweConsNet mailing list.

Best regards,
Christian Schulte

P.S.: If you wish to subscribe to the mailing list, please send a message to Justin Pearson (Justin.Pearson at
Also, see the earlier workshops.

February 16, 2014

CP 2014 (September 8-12 in Lyon, France)

The site for CP2014 (The 20th International Conference on Principles and Practice of Constraint Programming) went up (or rather: updated) the other day:
The International Conference on Principles and Practice of Constraint Programming will be in Lyon, France, from September 8th to September 12th 2014.

This will be the 20th edition of the annual conference on all aspects of computing with constraints, including: theory, algorithms, environments, languages, models and systems, applications such as decision making, resource allocation, and agreement technologies.
The site currently shows:
  • Call for contributions to the CP 2014 journal track
  • Call for Doctoral Program
  • Call for Tutorials
  • Call for Workshops
  • Call for Application Papers
  • Call For Technical Papers
See Dates, Calls, Submissions for more details.

Here are some placeholders for things that will be filled with content (as of writing, some are TBD'ed). I have every intention to attend this year as well. Last year I blogged a bit about the conference (I was actually the conference blogger, and also co-organized the CP Solver Workshop and the First International Lightning Model and Solve Competition). Also, see the official CP2013 site.

Update: This year I'm in the Application Track Committee. That will be interesting and fun.

September 29, 2013

A first look at Picat programming language

For easy reference, here are the links to My Picat page and to my GitHub repo /picat.

Picat is a new programming language created by Neng-Fa Zhou and Jonathan Fruhman. The specification was published in December 2012. The first official (beta) release was in May 2013. Right from the beginning I liked this language, and in this blog post I hope to show why. I should also say that the code shown here is not just Constraint Programming, but also traditional backtracking (a la Prolog), tabling (memoization), imperative constructs etc., i.e. what makes Picat Picat.

The name Picat is an acronym and is explained in the quote below, borrowed from the Picat site:
Picat is a general-purpose language that incorporates features from logic programming, functional programming, and scripting languages. The letters in the name summarize Picat's features:
  • Pattern-matching: A predicate defines a relation, and can have zero, one, or multiple answers. A function is a special kind of a predicate that always succeeds with one answer. Picat is a rule-based language. Predicates and functions are defined with pattern-matching rules.
  • Imperative: Picat provides assignment and loop statements for programming everyday things. An assignable variable mimics multiple logic variables, each of which holds a value at a different stage of computation. Assignments are useful for computing aggregates and are used with the foreach loop for implementing list comprehensions.
  • Constraints: Picat supports constraint programming. Given a set of variables, each of which has a domain of possible values, and a set of constraints that limit the acceptable set of assignments of values to variables, the goal is to find an assignment of values to the variables that satisfies all of the constraints. The Picat system provides modules for different solvers with the same interface.
  • Actors: Actors are event-driven calls. Picat provides action rules for describing event-driven behaviors of actors. Events are posted through channels. An actor can be attached to a channel in order to watch and to process its events. In the future, Picat will treat threads as channels, and allow the use of action rules to program concurrent threads.
  • Tabling: Tabling can be used to store the results of certain calculations in memory, allowing the program to do a quick table lookup instead of repeatedly calculating a value. As computer memory grows, tabling is becoming increasingly important for offering dynamic programming solutions for many problems such as planning problems.
Of these general features, I have mostly tested pattern-matching, imperative programming constructs, constraints (of course!) and tabling, but have not played much with actors directly and will not mention them much here.

Let's start with some simple examples to show these features.


First we show how to use constraints with send_more_money.pi, and we clearly see the influences of Prolog:
import cp.

sendmore(Digits) =>
   Digits = [S,E,N,D,M,O,R,Y],
   Digits in 0..9,
   all_different(Digits),

   S #> 0,
   M #> 0,
          1000*S + 100*E + 10*N + D
   +      1000*M + 100*O + 10*R + E
   #= 10000*M + 1000*O + 100*N + 10*E + Y,

   solve(Digits).
Except for the =>, it looks very much like Prolog, right? I explain the => and how it differs from Prolog's :- later on, and will also mention some other similarities to and differences from Prolog.

Another mandatory example is sudoku.pi, where we see more differences from traditional Prolog, e.g. the foreach loop:
import cp.
import util.

go => sudoku(1).

sudoku(ProblemName) =>
   problem(ProblemName, Board),
   sudoku(3, Board),
   print_board(Board).

sudoku(N, Board) =>
   N2 = N*N,

   Vars = array_matrix_to_list(Board),
   Vars in 1..N2,

   foreach(Row in Board) all_different(Row) end,
   foreach(Column in transpose(Board)) all_different(Column) end,
   foreach(I in 1..N..N2, J in 1..N..N2)
      all_different([Board[I+K,J+L] : K in 0..N-1, L in 0..N-1])
   end,

   solve([ff,down], Vars).

print_board(Board) =>
   N = Board.length,
   foreach(I in 1..N)
      foreach(J in 1..N)
         X = Board[I,J],
         if var(X) then printf("  _") else printf("  %w", X) end
      end,
      nl
   end.

problem(1, Data) => 
Data = [
    [_, _, 2, _, _, 5, _, 7, 9],
    [1, _, 5, _, _, 3, _, _, _],
    [_, _, _, _, _, _, 6, _, _],
    [_, 1, _, 4, _, _, 9, _, _],
    [_, 9, _, _, _, _, _, 8, _],
    [_, _, 4, _, _, 9, _, 1, _],
    [_, _, 9, _, _, _, _, _, _],
    [_, _, _, 1, _, _, 3, _, 6],
    [6, 8, _, 3, _, _, 4, _, _]].
From what I have understood, these loop constructs were one of the reasons that Neng-Fa Zhou wanted to create a new language: he wasn't satisfied with the foreach loops he had implemented in B-Prolog. [You can compare many of my Picat models/programs with the ones I implemented in B-Prolog earlier this year; see my B-Prolog page, e.g.] One thing that differs between Picat's foreach loops and B-Prolog's (as well as ECLiPSe's and SICStus Prolog's) is that the user doesn't have to explicitly declare the local (or global) variables, which makes them simpler to use and read.

I will describe more CP in Picat below, but first I will describe some other Picat features.


Tabling here refers to memoization ("caching the results"). A simple example of tabling starts with the standard recursive definition of a Fibonacci number:
fibt(0) = 1.
fibt(1) = 1.
fibt(N) = F, N > 1 => F = fibt(N-1) + fibt(N-2).
This is normally very slow for larger numbers because of the massive number of recursive steps needed. Adding the table declaration before the predicate will automatically cache the calculated values and reuse them in subsequent calls.

An example of its use is in euler2.pi (Project Euler #2), where the objective is to find the sum of all even-valued terms of the sequence that do not exceed four million. Here is one way of implementing this, using a list comprehension:
euler2 => 
   writeln(sum([F : N in 1..100, F = fibt(N), F < 4000000, F mod 2 =:= 0])).
Finding this sum takes 0.015s (including the overhead of starting Picat etc). If we don't table the values then it's much (much!) slower: fibt(100) is 573147844013817084101 (~5.73e+20), which is roughly the number of recursion steps needed. By the way, here we see that Picat supports big integers as well.
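The same tabling effect can be sketched in Python, where functools.lru_cache plays the role of Picat's table declaration (my illustration, using the same base cases as above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)    # memoization, i.e. what Picat's `table` provides
def fibt(n):
    return 1 if n < 2 else fibt(n - 1) + fibt(n - 2)

# Project Euler #2: sum the even-valued terms that do not exceed four million.
total, n = 0, 1
while fibt(n) < 4_000_000:
    if fibt(n) % 2 == 0:
        total += fibt(n)
    n += 1
print(total)   # → 4613732
```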

Please note that tabling might in some cases be slower because of the memory overhead involved, so you have to be careful and test it.

Another example of tabling is for dynamic programming, for example calculating the edit distance. This example is from Picat's exs.pi:
% computing the minimal editing distance
% of two given lists
table(+,+,min)
edit([],[],D) => D=0.
edit([X|Xs],[X|Ys],D) =>    % copy
   edit(Xs,Ys,D).
edit(Xs,[_Y|Ys],D) ?=>      % insert
   edit(Xs,Ys,D1),
   D = D1+1.
edit([_X|Xs],Ys,D) =>       % delete
   edit(Xs,Ys,D1),
   D = D1+1.
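For comparison, here is the same copy/insert/delete recursion in Python, with memoization standing in for the mode-directed tabling (my sketch; note that, exactly as in the Picat rules, there is no substitution move):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def edit(xs, ys):
    if not xs:
        return len(ys)                 # insert the remaining ys
    if not ys:
        return len(xs)                 # delete the remaining xs
    if xs[0] == ys[0]:
        return edit(xs[1:], ys[1:])    # copy
    return 1 + min(edit(xs, ys[1:]),   # insert
                   edit(xs[1:], ys))   # delete

print(edit("abc", "abd"))   # → 2 (delete "c", insert "d")
```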
Another example is shortest path (also from exs.pi), using mode-directed tabling to minimize the weight argument:
% mode-directed tabling
% finding shortest paths on a graph given by the relation edge/3.
table(+,+,-,min)
sp(X,Y,Path,W) ?=>
   Path = [(X,Y)],
   edge(X,Y,W).
sp(X,Y,Path,W) =>
   Path = [(X,Z)|Path1],
   edge(X,Z,W1),
   sp(Z,Y,Path1,W2),
   W = W1+W2.

The edge/3 facts are declared with an index directive for fast lookups:

index(+,-,-) (+,+,-)

Imperative programming

Regarding loops, we have already seen the foreach loop in the Sudoku example. Picat has more imperative programming constructs: while loops and assignments.

The Fibonacci example shown above (Project Euler #2, using the list comprehension) has the drawback of a hard-coded limit: it checks all N in the range 1..100; this range was found by some experimentation and it's not the smallest possible. In Picat it's also possible to use a while loop for this (though not as nice looking as the list comprehension version). This is traditional imperative programming. It also shows an example of (re)assignments using the := operator:
euler2b => 
  I = 1,
  Sum = 0,
  F = fibt(I),
  while (F < 4000000) 
     if F mod 2 == 0 then
        Sum := Sum + F
     end,
     I := I + 1,
     F := fibt(I)
  end,
  writeln(Sum).
This version also calculates the solution in 0.015s.

More Constraint Programming

Here are some more examples of CP in Picat.

Element constraint

In Picat, as in B-Prolog (and many other CP systems), the element constraint (List[Ith] = Value) has to be written explicitly using the predicate element(Ith, List, Value) when Ith is a decision variable. However, if Ith is a plain integer, it can be stated more naturally as List[Ith] = Value, or List[Ith] #= Value if List is a list of decision variables.

Older readers of this blog might remember that I sometimes whine about how hard it is to write matrix element in different CP systems, and this applies to Picat as well. For example, in circuit.pi I would like to use this construct, but it's not allowed:
foreach(I in 2..N)
  % this is not allowed
  Z[I] = X[Z[I-1]]
end
Instead, one has to rewrite it. Here's my version:
import cp. 

% ...

foreach(I in 2..N)
   Z1 #= Z[I-1],
   element(Z1, X, X2),
   Z[I] #= X2
end
In stable_marriage.pi a more general version, matrix_element, is used:
matrix_element(X, I, J, Val) =>
  element(I, X, Row),
  element(J, Row, Val).
This version (using two element constraints) often works well, but depending on the types of the variables one of the other variants below might be needed; sometimes I have to test them all. Only one is active at a time; the others are deactivated by comments (as shown in the example):
import cp.

% ...

matrix_element(X, I, J, Val) =>
  element(I, X, Row),
  element(J, Row, Val).

% matrix_element(X, I, J, Val) =>
%   nth(I, X, Row),
%   element(J, Row, Val).

% matrix_element(X, I, J, Val) =>
%   freeze(I, (nth(I, X, Row),freeze(J,nth(J,Row,Val)))).

% matrix_element(X, I, J, Val) =>
%   freeze(I, (element(I, X, Row),freeze(J,element(J,Row,Val)))).
The freeze(X, Call) predicate delays the call to Call until X becomes a non-variable term.


Picat supports reification with the nice syntax I expect from a high-level CP system. Note that #/\ is "and" for FD variables. Here is the alldifferent_except_0 constraint (see alldifferent_except_0.pi for the full model):
import cp.

alldifferent_except_0(Xs) =>
   foreach(I in 1..Xs.length, J in 1..I-1)
      (Xs[I] #!= 0 #/\ Xs[J] #!= 0) #=> (Xs[I] #!= Xs[J])
   end.
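The meaning of the constraint is easy to state in plain Python as a checker over ground values (my illustration; the Picat decomposition above posts the same condition over decision variables):

```python
def alldifferent_except_0(xs):
    # all non-zero values must be pairwise distinct; zeros may repeat freely
    nonzero = [x for x in xs if x != 0]
    return len(nonzero) == len(set(nonzero))

print(alldifferent_except_0([0, 3, 0, 2]))   # → True
print(alldifferent_except_0([0, 3, 0, 3]))   # → False (3 occurs twice)
```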


Below are some different implementations of N-queens, taken from nqueens.pi.
import cp.

queens3(N, Q) =>
   Q = new_list(N),
   Q in 1..N,
   all_different(Q),
   all_different([$Q[I]-I : I in 1..N]),
   all_different([$Q[I]+I : I in 1..N]),
   solve([ff], Q).

% Using all_distinct instead
queens3b(N, Q) =>
    Q = new_list(N),
    Q in 1..N,
    all_distinct(Q),
    all_distinct([$Q[I]-I : I in 1..N]),
    all_distinct([$Q[I]+I : I in 1..N]),
    solve([ff], Q).

queens4(N, Q) =>
   Q = new_list(N),
   Q in 1..N,
   foreach(A in [-1,0,1])
      all_different([$Q[I]+I*A : I in 1..N])
   end,
   solve([ff], Q).

Note that in queens3 we have to use $Q[I]-I (i.e. prepended with a $) since it's an expression that should be evaluated ("escaped").

Now we can benchmark the different variants (the model nqueens.pi also includes some variants not shown here) with different N. This is merely to show a use of the meta predicates (time2, once, and call).
go3 => 
   Ps = [queens3=1000, queens3b=1000, queens4=100, queens5=100, queens6=13],
   foreach(P=N in Ps) 
      printf("%w(%d)\n", P, N),
      time2(once(call(P, N)))
   end.
queens3(1000) takes 4s and 2 backtracks; queens3b(1000) takes 9.9s and 0 backtracks. Not extremely fast, but quite good.

While we're at it, here is one way of benchmarking the time and number of backtracks for one of the methods (the predicate queens3_all in nqueens.pi), getting all the solutions for N from 2 to 14. statistics/2 is used to get the number of backtracks and the run time for each run:
go2 => 
   foreach(N in 2..14) 
      statistics(backtracks, Backtracks),
      statistics(runtime, [_, _Runtime1]),
      queens3_all(N, Solutions),
      Len = Solutions.length,
      statistics(backtracks, Backtracks2),
      B = Backtracks2 - Backtracks,
      statistics(runtime, [_, Runtime2]),
      writef("N:%3d %10d solutions (%d backtracks, %d millis)\n", N, Len, B, Runtime2)
   end.
The result:
N:  2          0 solutions (1 backtracks, 0 millis)
N:  3          0 solutions (2 backtracks, 0 millis)
N:  4          2 solutions (5 backtracks, 0 millis)
N:  5         10 solutions (13 backtracks, 0 millis)
N:  6          4 solutions (39 backtracks, 0 millis)
N:  7         40 solutions (107 backtracks, 0 millis)
N:  8         92 solutions (383 backtracks, 0 millis)
N:  9        352 solutions (1477 backtracks, 4 millis)
N: 10        724 solutions (5715 backtracks, 4 millis)
N: 11       2680 solutions (24475 backtracks, 48 millis)
N: 12      14200 solutions (116081 backtracks, 232 millis)
N: 13      73712 solutions (588949 backtracks, 1340 millis)
N: 14     365596 solutions (3195965 backtracks, 7472 millis)
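The solution counts in this table can be reproduced with a tiny Python brute force over permutations (a sketch for small N only), mirroring the two diagonal all_different constraints in queens3:

```python
from itertools import permutations

def n_queens_count(n):
    # a permutation already makes rows and columns distinct; it remains to
    # check the diagonals, i.e. Q[i]-i all distinct and Q[i]+i all distinct
    return sum(1 for q in permutations(range(n))
               if len({q[i] - i for i in range(n)}) == n
               and len({q[i] + i for i in range(n)}) == n)

print([n_queens_count(n) for n in range(2, 9)])   # → [0, 0, 2, 10, 4, 40, 92]
```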

Magic square

This is a magic squares implementation in Picat, taken from magic_square.pi. One thing to notice is the use of rows(), columns(), diagonal1(), and diagonal2() from the util module.
import cp.
import util.

magic2(N,Square) =>
   writef("\n\nN: %d\n", N),
   NN = N*N,
   Sum = N*(NN+1)//2, % the magic sum
   writef("Sum = %d\n", Sum),

   Square = new_array(N,N),
   Square in 1..NN,
   all_different([Square[I,J] : I in 1..N, J in 1..N]),


   foreach(Row in Square.rows()) Sum #= sum(Row) end,
   foreach(Column in Square.columns()) Sum #= sum(Column) end,

   % diagonal sums
   Sum #= sum(Square.diagonal1()),
   Sum #= sum(Square.diagonal2()),

   % Symmetry breaking
   Square[1,1] #< Square[1,N],
   Square[1,1] #< Square[N,N],
   Square[1,1] #< Square[N,1],
   Square[1,N] #< Square[N,1],
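As a quick sanity check of the row/column/diagonal sums used above, here is a small Python predicate (my illustration) that tests a ground square against the magic sum N*(N*N+1)//2:

```python
def is_magic(square):
    n = len(square)
    s = n * (n * n + 1) // 2   # the magic sum, as in the Picat model
    rows = all(sum(row) == s for row in square)
    cols = all(sum(row[j] for row in square) == s for j in range(n))
    diag1 = sum(square[i][i] for i in range(n)) == s
    diag2 = sum(square[i][n - 1 - i] for i in range(n)) == s
    return rows and cols and diag1 and diag2

# the classic 3x3 magic square (magic sum 15)
print(is_magic([[2, 7, 6], [9, 5, 1], [4, 3, 8]]))   # → True
```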



Magic sequence

Another standard CP problem is magic_sequence.pi:
import cp.

magic_sequence(N, Sequence) =>
   writef("\n%d: ",N),
   Sequence = new_list(N),
   Sequence in 0..N-1,
   foreach(I in 0..N-1) count(I,Sequence,#=,Sequence[I+1]) end,
   N #= sum(Sequence),
   Integers = [ I : I in 0..N-1],
   N #= scalar_product(Integers, Sequence),
   solve([ff], Sequence).

scalar_product(A, X) = Product => 
   Product #= sum([S : I in 1..A.length, S #= A[I]*X[I]]).
Note that I tend to use count(Var, Vars, Relation, N) instead of the built-in global_cardinality(List, Pairs), since it's more natural for me.
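The defining property is easy to state and brute-force in Python (an illustration only; the CP model above expresses the same thing with count and a sum constraint):

```python
from itertools import product

def is_magic_sequence(s):
    # s[i] must equal the number of occurrences of the value i in s
    return all(s.count(i) == s[i] for i in range(len(s)))

def magic_sequences(n):
    # domain 0..n-1, as in the Picat model
    return [list(s) for s in product(range(n), repeat=n)
            if is_magic_sequence(s)]

print(magic_sequences(4))   # → [[1, 2, 1, 0], [2, 0, 2, 0]]
```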

Least diff problem, optimization

The Least diff problem is a simple optimization problem in Picat: minimize the difference ABCDE-FGHIJ, where the letters A..J are distinct digits in the domain 0..9.
import cp.

least_diff(L, Diff) =>
  L = [A,B,C,D,E,  F,G,H,I,J],
  L in 0..9,

  all_different(L),

  X #= 10000*A + 1000*B + 100*C + 10*D + E,
  Y #= 10000*F + 1000*G + 100*H + 10*I + J,

  Diff #= X - Y,
  Diff #> 0,
  solve([$min(Diff)], L).
The interesting part is the objective $min(Diff) (note the "$") which defines the objective value.
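For reference, the optimum can be confirmed with a small Python search (my own sketch, not part of the Picat model). To keep it fast, it only tries leading digits with A = F+1: A = F is impossible with distinct digits, and A >= F+2 forces a difference of at least 2*10000 - 9876, far above the optimum:

```python
from itertools import permutations

def least_diff():
    # minimize ABCDE - FGHIJ > 0 over distinct digits 0..9;
    # fix the leading pair to A = F+1 (see the bound argument above)
    best = None
    for f in range(9):
        a = f + 1
        rest = [d for d in range(10) if d not in (a, f)]
        for p in permutations(rest):
            x = a*10000 + p[0]*1000 + p[1]*100 + p[2]*10 + p[3]
            y = f*10000 + p[4]*1000 + p[5]*100 + p[6]*10 + p[7]
            d = x - y
            if d > 0 and (best is None or d < best[0]):
                best = (d, x, y)
    return best

print(least_diff())   # → (247, 50123, 49876)
```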

For more advanced models one can use the $report(Goal) to show the progress of the objective. Here is an example from einav_puzzle.pi:
   solve($[min(TotalSum), report(printf("Found %d\n", TotalSum)), ffd],Vars),
Without report(Goal), the solution is only shown after the solver has finished the search, which may take considerable time.

Global constraint table

Picat has a table constraint: the in construct (in is also used to define the domain of a decision variable). Here is a simple example using the table constraint: traffic_lights.pi (CSPLib #16), which uses the list Allowed to state which combinations are allowed.
import cp.

traffic_lights_table(V, P) =>
  N  = 4,
  % allowed/1 as a table (translated)
  Allowed = [(1,1,3,3),  % allowed combinations (r=1, ry=2, g=3, y=4)
             (2,1,4,1),
             (3,3,1,1),
             (4,1,2,1)],
  V = new_list(N),
  V in 1..N,
  P = new_list(N), 
  P in 1..N,
  foreach(I in 1..N, J in 1..N)
     JJ = (I mod N) + 1, % the next light, with wrap-around
     if J == JJ then
        VI = V[I], PI = P[I],
        VJ = V[J], PJ = P[J],
        % Table constraint
        (VI, PI, VJ, PJ) in Allowed
     end
  end,
  Vars = V ++ P,
  solve(Vars).
The use of in - as a table constraint - requires that the values are integers, so we have to translate to/from integers in this version. The model (traffic_lights.pi) also contains some other approaches to this problem that don't use the table constraint.

Another example of table constraint is hidato.pi (though it's not very efficient).
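As a minimal sketch of in used as a table constraint (the pair relation here is made up just for illustration):

```picat
import cp.

% (X,Y) must be one of the listed pairs
go =>
   X in 1..3,
   Y in 1..3,
   (X,Y) in [(1,2),(2,3),(3,1)],
   solve([X,Y]),
   println([X,Y]).
```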

Global constraints

Picat supports the most common global constraints as shown in the list below. See Picat Guide, section 11.5 for details.
  • all_different(List)
  • all_distinct(List)
  • assignment(List) (a.k.a. inverse)
  • circuit(List)
  • count(Value,List,Rel,N)
  • cumulative(Starts,Durations,Resources, Limit)
  • diffn(RectangleList)
  • disjunctive_tasks(Tasks)
  • element(I,List, V)
  • global_cardinality(List, Pairs) (I tend not to use this much since it's not as natural as count/4)
  • neqs(NegList)
  • serialized(Starts,Durations)
  • subcircuit(List)
  • in used as the global constraint table (not to be confused with tabling)
In the module cp_utils.pi, I have also defined some other useful decompositions of global constraints:
  • matrix_element (some different versions)
  • to_num
  • latin_square
  • increasing
  • decreasing
  • scalar_product (with a relation)
  • alldifferent_except_0
  • nvalue
  • nvalues
  • global_cardinality(Array, GccList)
  • lex2, lex2eq, lex_less, lex_lesseq, lex_greater, lex_greatereq, lex_less2 (alternative hack)
Labeling: Most of the standard variable and value heuristics are supported, though I miss random variants. Some of the examples above show how to label the variables in the solve predicate. Here, ff is the first-fail variable selection, and down means that a variable is labeled from the largest value to the smallest.
solve([ff,down], Vars)
See the Picat Guide documentation for more description of the options to solve and other solver options.
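For example (a toy problem, just to show the labeling call with these options):

```picat
import cp.

go =>
   L = new_list(5),
   L in 1..5,
   all_different(L),
   solve([ff,down], L), % first-fail variable order, largest value first
   println(L).
```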

Pattern matching

Pattern matching in Picat is more influenced by the pattern matching in functional programming than by Prolog's unification. This may confuse some programmers with a Prolog background.

Here is a definition of quicksort which shows some of the pattern matching constructs of Picat, and also shows how to define functions, i.e. predicates with a return value.
qsort([])    = L => L = [].
qsort([H|T]) = L => L = qsort([E : E in T, E =< H]) ++ [H] ++
                        qsort([E : E in T, E > H]).
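For example:

```
Picat> X = qsort([4,1,3,1,2])
X = [1,1,2,3,4]
```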
The two functions drop and take (standard in functional programming) can be defined as:
drop(Xs,0) = Xs.
drop([],_N) = [].
drop([_X|Xs],N) = drop(Xs,N-1).

take(_Xs,0) = [].
take([],_N) = [].
take([X|Xs],N) = [X] ++ take(Xs,N-1).
Reverse can be defined as:
reverse(L1,L2) => my_rev(L1,L2,[]).
my_rev([],L2,L3) => L2 = L3.
my_rev([X|Xs],L2,Acc) => my_rev(Xs,L2,[X|Acc]).
As will be mentioned again below, the matching of different variables - or parts of variables - is often done in the body and not in the rule head. Here is an example from 99_problems.pi showing this (the problem is "P09 (**): Pack consecutive duplicates of list elements into sublists.")
% pack(L1,L2) :- the list L2 is obtained from the list L1 by packing
%    repeated occurrences of elements into separate sublists.
%    (list,list) (+,?)

pack([],Y) => Y = [].
pack(X1,Z1) => X1=[X|Xs],Z1=[Z|Zs],transfer(X,Xs,Ys,Z), pack(Ys,Zs).
There are more comments about pattern matching below.

Nondeterminism and other Prolog influences (and differences)

Some of the examples above show clear influences from Prolog. Here we list some more influences (and differences).

Picat has - as Prolog has - support for nondeterminism, using a backtracking mechanism to try different rules, as will be seen below. This is a really great feature in Picat and one of the reasons I like Picat so much. Or rather: it is the combination of backtracking together with the imperative features, the support for constraint programming, etc. that I really like in Picat.


The nondeterministic member predicate in Picat works as in Prolog, i.e. can be used both as a test for membership and also for generating all the elements:
Picat> member(2, [1,2,3])
Picat> member(4, [1,2,3])
Picat> member(X, [1,2,3])
As in Prolog we can get the next solution with a ; (semicolon) in the Picat shell.

This double nature version of member can be defined in Picat as:
member(X,[Y|_]) ?=> X=Y.
member(X,[_|L]) => member(X,L).
In this definition we see two features of Picat:
  • pattern matching: the pattern [Y|_] matches any list and "extracts" the first elements Y to be matched against X in the first clause.
  • backtracking: The first rule is defined as a backtrackable rule using ?=> (the prepended "?" is the marker for backtrackable rules), which makes the call nondeterministic. The last rule in a definition should be non-backtrackable, i.e. => (without the ?).
This predicate now works in the same way as the built-in member/2.

Predicate facts

Predicate facts are bodyless definitions, akin to Prolog's "data definitions" (but be careful not to take this too literally).

Predicate facts are prepended by an index declaration to make them accessible as data (and this is one difference from Prolog). For example, here is a stripped-down version of the transitive closure example transitive_closure.pi (ported from a B-Prolog program):
table
% Transitive closure right
reach(X,Y) ?=> edge(X,Y).
reach(X,Y)  => edge(X,Z),reach(Z,Y).

% the graph (a couple of illustrative edges)
index (+,-)
edge(1,2).
edge(2,3).
% ....
Note the use of table to cache the reach predicate. In this simple example it is not needed, but for other applications it can be a great advantage.


append in Picat is also nondeterministic as in Prolog. Here is an example where we get the different X and Y that makes up the list [1,2,3,4]. Collecting all the results is done with findall which will be discussed a little more below:
Picat> Z=findall([X,Y], append(X, Y, [1,2,3,4]))    
Z = [[[],[1,2,3,4]],[[1],[2,3,4]],[[1,2],[3,4]],[[1,2,3],[4]],[[1,2,3,4],[]]]
As with member, we can define it on our own:
append(Xs,Ys,Zs) ?=> Xs=[], Ys=Zs.
append(Xs,Ys,Zs) => Xs=[X|XsR], Zs=[X|ZsR], append(XsR,Ys,ZsR).


As we saw in the example above, findall in Picat works much as in Prolog. One difference is that findall in Picat is a function, i.e. it returns a value, which is why it was assigned to the Z variable above.

Other nondeterministics predicates

Here are the built-in nondeterministic predicates in Picat. They are mostly wrappers to the underlying B-Prolog predicates.
  • between(From, To, X)
  • append(X,Y,Z)
  • append(W,X,Y,Z)
  • member(Term, List)
  • select(X, List, ResList)
  • find(String, Substring, From, To)
  • find_ignore_case(String, Substring, From, To)
  • repeat
  • permutation(List, Perm)
  • nth(I, List, Val)
  • indomain(Var) (CP)
  • indomain_down(Var) (CP)
  • solve(Vars) (CP)
  • solve(Options, Vars) (CP)
  • statistics(Name,Value)
One can also note that - in contrast to Prolog - length is a "plain" function and does not create a list. It just returns the length of the list (array, structure). (For creating a new list one has to use new_list(Length) instead.)
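A tiny sketch of the difference:

```picat
go =>
   L = [a,b,c],
   println(L.length),   % 3: length just inspects the list
   New = new_list(3),   % creates a list of three fresh variables
   println(New.length). % 3
```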

The nondeterministic predicates are great when they are needed, but don't overuse them. The Picat documentation generally recommends writing functions rather than predicates, since functions are easier to debug: they return exactly one value.

Advice to Prolog addicts

In general, Prolog addicts should be a little careful when trying to write Prolog code directly in Picat. Sometimes it works very well (with some minor modifications), but sometimes it doesn't. The biggest gotcha is probably pattern matching - as mentioned above - which looks much like Prolog's unification but is more like the pattern matching used in functional languages such as Haskell.


SAT solver

Picat also supports solving problems with a SAT solver, using almost exactly the same code as for the CP solver. The only thing one might need to change is
import cp.
to
import sat.
However, the SAT solver doesn't support all the global constraints of the CP solver, but it can be worth testing sometimes, since it can be very fast compared to a CP approach. [Still, I tend to rely on CP since then I can get all the solutions - or two - which is not possible with the SAT solver used in Picat.]

Also, I haven't used the SAT solver very much. See Neng-Fa's (and others') use in the Projects section, e.g. numberlink_b.pi and mqueens.pi (one of the problems from the CP-2013 Model and Solve Competition).


Planning

One of the things I really like in Picat is that it is now easy to implement certain planning problems. I first wrote a small planning module (bplan.pi), inspired by Hector J. Levesque's version in "Thinking as Computation", a rather new Prolog AI book. Neng-Fa later built a much richer module, planner, with a lot of different variants: both optimality versions (best_plan*) and non-optimality versions, both bounded and unbounded, and both with and without costs.

The planning module is quite simple to use:
  • Define the action predicates (the legal moves): action(FromState, ToState, Move, Cost)
  • Define the initial state (the first FromState)
  • Define the goal state(s): final(State), which checks whether the goal state has been reached
  • Call the plan* predicate with the appropriate arguments
A simple example is the M12 problem (test_planner_M12.pi) which has
  • two actions: merge (perfect shuffle) and reverse
  • initial state (the problem instance), e.g. [8,11,6,1,10,9,4,3,12,7,2,5]
  • the goal state: [1,2,3,4,5,6,7,8,9,10,11,12]
Here is the code for the planner part. Using the table directive is often a booster in these kinds of planning problems, but not always:
go =>
  Init = [8,11,6,1,10,9,4,3,12,7,2,5],
  time(best_plan_downward(Init, Plan)),
  println(Plan).

final(Goal) => Goal=[1,2,3,4,5,6,7,8,9,10,11,12].

% merge move
action([M1,M12,M2,M11,M3,M10,M4,M9,M5,M8,M6,M7], To, M, Cost) ?=>
   M = m, To = [M1,M2,M3,M4,M5,M6,M7,M8,M9,M10,M11,M12], Cost = 1.
% reverse move
action([M12,M11,M10,M9,M8,M7,M6,M5,M4,M3,M2,M1], To, M, Cost) => 
   M = r, To = [M1,M2,M3,M4,M5,M6,M7,M8,M9,M10,M11,M12], Cost = 1.
As mentioned above, it's important to declare the first action with ?=> so it is backtrackable, i.e. that if that clause fails it will backtrack and test the next rule.

Picat finds the shortest solution (27 steps) in 1.8s: [m,m,m,r,m,m,m,r,m,m,m,m,r,m,r,m,r,m,m,r,m,m,r,m,m,m,r] where "m" refers to the merge action/move, and "r" to the reverse action/move.

For a maze problem, the action (move) is simply defined as
action(From,To,Move,Cost) => 
   maze(From,To),
   Move = [From,To],
   Cost = 1.
where the predicate facts maze(From,To) define which nodes are connected. And that is about as simple as it could be.
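As a minimal self-contained sketch (the four-node graph here is made up just for illustration), the whole maze program could look like this:

```picat
import planner.

% hypothetical graph: 1 -> 2 -> 3 -> 4
maze(1,2).
maze(2,3).
maze(3,4).

final(State) => State == 4.

% a move goes from a node to any connected node, at cost 1
action(From,To,Move,Cost) =>
   maze(From,To),
   Move = [From,To],
   Cost = 1.

go =>
   best_plan(1, Plan), % shortest plan from node 1
   println(Plan).
```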

For these kinds of "straight" planning problems it's - IMHO - much easier to use this approach in Picat than using Constraint Programming. However, if the problem has a lot of extra constraints, then CP might be both easier to write and faster.

My other planning models are here, mostly quite simple standard problems. Neng-Fa has written more advanced programs, shown at the Projects page. One neat example is his solution of the Doubleclock problem from the CP-2013 competition: doubleclock.pi which solves each of the five problem instances in about 0.1s (except for #4 which takes 0.6s).

File handling

Here is an example that uses file handling (and some string handling) in Picat. It's Project Euler problem #96, where the objective is to sum the 3-digit numbers found in the top left corners of fifty Sudoku problems. See euler96.pi for the full encoding.
import util.
import cp.

% ...

euler96 =>
  Sudokus = read_file("sudoku.txt"),
  Total = 0,
  foreach(S in Sudokus) 
     Solved = sudoku(3, S),
     Total := Total + [Solved[1,J].to_string() :
                        J in 1..3].flatten().parse_term()
  end,
  println(Total).

convert(Sudoku) = [[ R.to_integer() : R in Row] : Row in Sudoku].

sudoku(N, Board) = Board2 =>
  % see above

read_file(File) = Sudokus => 
   FD = open(File),
   Sudokus = [],
   Sudoku = [],
   C = 0,
   while (not at_end_of_stream(FD))
      Line = read_line(FD),
      if once(find(Line, "Grid",1,_To)) then
         if C > 0 then Sudokus := Sudokus ++ [Sudoku.convert()] end,
         Sudoku := [],
         C := C + 1
      else
         Sudoku := Sudoku ++ [Line]
      end
   end,
   % the last
   Sudokus := Sudokus ++ [Sudoku.convert()],
   close(FD).
It solves the problem in 0.03s.

This way of handling files is much easier than in standard Prolog, but perhaps not as easy as in more agile programming languages. However, Picat also has some "slurp" variants that read the complete file into a variable: read_file_bytes, read_file_lines, and read_file_terms.

Structure and Map (hash table)

Picat has a construct for creating a structure, new_struct (a kind of poor man's OOP "object"), though I don't use this much. Here is a small example:
Picat> S = new_struct(point,3), Name = S.name, Len = S.length
S = point(_3b0,_3b4,_3b8)
Name = point
Len = 3
A data structure that I use much more is the inevitable map (hash table):
Picat> M = new_map([one=1,two=2]), M.put(three,3), One = M.get(one)
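Here is a slightly longer toy sketch of the map operations (assuming the map built-ins put, get - with and without a default value - and keys):

```picat
go =>
   M = new_map(),
   M.put(apple, 3),
   M.put(pear, 4),
   println(M.get(apple)),     % 3
   println(M.get(orange, 0)), % 0: the default, since orange is missing
   println(M.keys().sort()).  % [apple,pear]
```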
Here is a longer example (from one_dimensional_cellular_automata.pi) which uses a map (Seen) to detect fixpoints and cycles in the evolution of the patterns, i.e. whether we have seen the pattern before. It uses new_map() for creating the map, has_key(S) for checking whether pattern S already exists in the map, and put(S,1) to add the pattern S to the map. Also note the definition of the rules as predicate facts.
go =>
     %    _ # # # _ # # _ # _ # _ # _ # _ _ # _ _
     S = [0,1,1,1,0,1,1,0,1,0,1,0,1,0,1,0,0,1,0,0],
     All = ca(S),
     foreach(A in All) println(A.convert()) end.

convert(L) = Res =>
    B = "_#",
    Res = [B[L[I]+1] : I in 1..L.length].

ca(S) = All => 
   Len = S.length,
   All := [S],
   Seen = new_map(), % detect fixpoint and cycle
   while (not Seen.has_key(S))
      Seen.put(S,1),
      T = [S[1]] ++ [rule(sublist(S,I-1,I+1)) : I in 2..Len-1] ++ [S[Len]],
      All := All ++ [T],
      S := T
   end.

% the rules
rule([0,0,0]) = 0. % 
rule([0,0,1]) = 0. %
rule([0,1,0]) = 0. % Dies without enough neighbours
rule([0,1,1]) = 1. % Needs one neighbour to survive
rule([1,0,0]) = 0. %
rule([1,0,1]) = 1. % Two neighbours giving birth
rule([1,1,0]) = 1. % Needs one neighbour to survive
rule([1,1,1]) = 0. % Starved to death.


Strings

There is some support for strings in Picat, such as:
  • ++ (for list/string concatenation)
  • atom_chars(Atom)
  • find(String, Substring,From,To) (nondet)
  • find_ignore_case(String, Substring,From,To)
  • join(Tokens)
  • join(Tokens,Separator)
  • number_chars(Num)
  • parse_term(String)
  • split(String)
  • split(String, Separators)
  • to_atom(String)
  • to_binary_string(Int)
  • to_lowercase(String)
  • to_uppercase(String)
  • to_string(Term)
Unfortunately there is not yet any support for regular expressions (I use regexps quite a lot in other programming languages for my "daily programming"), and I hope that will come soon. However, since a string is (just) a list of characters, much of the general list handling can be used for manipulating strings.
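A small sketch using a few of these built-ins (assuming split and join live in the util module):

```picat
import util.

go =>
   Words = split("picat is fun"),  % split on white space
   println(Words.length),          % 3
   println(join(Words, "-")),      % picat-is-fun
   println(to_uppercase("picat")). % PICAT
```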

Other features in Picat

Picat has many other features not much mentioned in this blog post, for example:
  • debugging and trace, akin to Prolog's debug facilities
  • os module for handling files and directories (see above for some file reading)
  • math module for standard math functions. I tend to use only primes(Int), prime(Int) and the random* predicates from this module.
There are also many other useful predicates that are not mentioned here. See the Picat User's Guide for more about these.

Missing in Picat

Since Picat is such a new programming language there are some missing features that will - hopefully - come later, though I don't know the exact timelines.

MIP solver

A forthcoming project is to add a MIP solver to Picat.

C/Java interface

Another planned project is to add C and/or Java interface to Picat. Then it will probably be easier to add features such as regular expressions, http/tcpip access etc.

Some other utilities

Here are some few utilities that I tend to use.

Get N solutions, given a goal Goal

Sometimes one just wants to find some specified number of solutions, for example for debugging purposes. There is no built-in support for this in Picat, so I've written the following simple ersatz predicate. It uses the global get_global_map for storing and checking the number of solutions so far; if the number of solutions has not yet been reached, it fails, which gives a new solution.
get_n_solutions(Goal, N) =>
  printf("Get %d solutions:\n", N),
  time2(Goal), % run the Goal
  C = get_global_map().get(solcount,1), % the count so far (defaults to 1)
  if C < N then get_global_map().put(solcount,C+1), fail end.

Running Picat, start parameters

When running Picat programs/models, I often want to test different predicates. Picat supports just one entry point, main, which is called by the picat executable. Let's call the program program1.pi to make the discussion more concrete.
main => go.

go =>
  println("This is a Picat program.").

go2 =>
  println("This is another predicate.").
When running the Picat program like this:
  $ picat program1.pi
Picat checks if the predicate main is defined and if so calls it, which in this case calls the go predicate.

However, there is currently no support for calling the go2 predicate (i.e. predicates not called from main) via the command line. Therefore I've written the following Linux shell script (picat_run) for that. Note: you should change the path in PICAT_ME.
#!/bin/sh
# This is just a wrapper for running the 
#    picat run_program.pi picat_program arg1 [arg2]
RUN_PROGRAM=run_program.pi  # adjust this path
if [ "$#" -eq 1 ]; then
   picat $RUN_PROGRAM $@ go
else
   picat $RUN_PROGRAM $@
fi
If run with just the program file as parameter, it runs the go predicate (which is my default predicate name).

The shell program runs the Picat program run_program.pi, simply defined as:
main([Program|Goals]) =>
   cl(Program), % compile and load the program
   foreach(Goal in Goals) 
     G = parse_term(Goal), % convert the goal string to an atom
     call(G)
   end.
Using this we can now run the go2 predicate:
  $ picat_run program1.pi go2
The magic is done with cl(Program) (which compiles and loads the program) and parse_term(Goal) which converts the Goal (the parameter string from the command line) to an atom G which is then called via call.

Right now this can only handle simple predicates (with constants as arguments), and cannot execute code that involves variables etc. I hope that later versions of Picat will have this as a built-in feature.

Project Euler benchmark

Whenever a new version of Picat is out, I test and benchmark it with my 50 Euler programs. Here is one way of testing this (test_euler.pi):
go =>
  Skip1 = ["euler67.pi","euler.pi","euler_utils.pi","test_euler.pi"], % etc.
  Skip = new_map([(S=1) : S in Skip1]), 
  Timeout = 10000, % 10s timeout
  Programs = [PP : PP in listdir("."), 
              match(PP, "euler"), 
              match(PP, ".pi")].sort(),
  Fails = [],
  foreach(P in Programs) 
     if not Skip.has_key(P) then
       printf("Testing %w\n", P),
       cl(P), % compile and load the program
       time_out(time(go), Timeout, Status),
       if Status == time_out then
          Fails := Fails ++ [P]
       end
     end
  end,
  println(fails=Fails).
match(String, Substring) => once(find(String,Substring,_,_)).
The interesting parts are the loading and testing of each program, as well as the timeout check.

To be honest, I often do these tests with another program instead, which also checks that the correct answers are shown and reports various other things. Since I don't want to display the solutions to the Project Euler problems, this program is not public. Sorry about that.

FlatZinc solver

Neng-Fa started to write a FlatZinc solver (originally a translation of his FlatZinc solver in B-Prolog) some weeks before the MiniZinc Challenge, and I helped a bit with testing and added some functionality. Interestingly, Picat was the only logic programming based submission this year. However, it didn't go very well: in fact we came last in all the different categories. Well, it was fun doing this and being in the challenge. The FlatZinc solver can be downloaded from the Projects page.

My contributions to Picat

Picat was originally created by Neng-Fa Zhou and Jonathan Fruhman. After the release in May this year, I have made some contributions to the code, as well as bug reports, suggestions etc., and am now sometimes mentioned as one of the Picat developers/collaborators. Note that I just develop things in Picat, not at the C or B-Prolog level. Some of the modules I've contributed that are in the Picat distribution are:
  • util.pi: One of the utilities modules
  • picat_lib_aux.pi: Utilities used by other modules
  • apl_util.pi: Utilities inspired by the languages APL and K. Most of this is as a proof-of-concept.
  • set_util.pi: Set utilities (ported from Neng-Fa's set utilities in B-Prolog)
Also, many of the Project Euler programs above #20 at the Project page are mine. See also my own collection where I have different variants for the problems below #20.

And see below for a list of the almost 400 public Picat programs/models I've done so far.


My public Picat programs

Here are all my current public Picat models/programs. They are also available from my Picat page and my GitHub repo. I have categorized them into different groups since I guess that Picat might attract different target groups.

Constraint Programming

Here are models that mainly use Constraint Programming (import cp.).

Project Euler problems

Here are some solutions for the Project Euler problems:
Total time for running all problems #1..#50 is < 14 seconds (on my Linux Ubuntu 12.04, 8 core Intel i7 CPU 930@2.80GHz). Some programs contain several alternative solutions.

test_euler.pi: Simple Picat program for testing all the Project Euler programs (#1..#50).

Rosetta Code problems

Picat solutions for some Rosetta Code problems.

Miscellaneous problems

Here are some other programs, modules, utilities, e.g. standard Prolog/GOFAI examples etc.

September 28, 2013

CP2013: Jacob Feldman blogs about the CP Solver workshop

In his blog post After CP-2013 Jacob Feldman writes about the workshop CP Solvers: Modeling, Applications, Integration, and Standardization (CPSOLVERS). He summarizes the workshop in general and some of the discussions we had, and also mentions the CP Solver Catalog (that he initiated some months before the conference).

Jacob has a plea about the details of the discussions:
It’s hard for me to reproduce all answers and I want to invite all CPSOLVERS-2013 panelists and other CP practitioners to provide their answers as comments to this post.
I also agree with Jacob's end paragraphs:
Overall, I think the workshop has actually provided us with an industrial overview of CP products that was its main objective. We plan to get such overviews every 4 years – so I hope to see many authors of current and brand new CP tools at CPSOLVERS-2017.

P.S. I want to thank CP-2013 organizers for their constant support and a great organization of the conference.

September 27, 2013

Charles Prod'homme blogs about developing Choco

One week ago, Charles Prod'homme (one of the main developers of the Choco CP system) started the blog Within a Constraint Solver. His first blog post was this:
With this blog I would like to share the good and bad ideas we've had (and have) while developing our constraint solver. As we almost started from scratch, there could be articles about every part of the solver. If you don't know what Constraint Programming is, wikipedia is a good start point.

Since 2008, I've been working on Choco, a Java library for constraint programming. Articles from this blog will mostly talk about the choices made from the version 2 to version 3, but not only.
Welcome to the CP blog sphere, Charles!

A side note: Some days before Charles started his blog, I held a talk at the CP-2013 Workshop CP Solvers: Modeling, Applications, Integration, and Standardization, and one of the things I mentioned was that it would be great if there were more CP bloggers to explain what Constraint Programming is and why we are so fascinated by it. I'm not sure if there is some cause and effect here or just a happy coincidence. :-)

If you are interested, here is my talk: CP Solvers/Learning Constraint Programming, and here's Charles' talk at the workshop: Choco3: an Open Source Java Constraint Programming Library.

CP-2013: Presentation slides from the talks

The presentation slides for many CP-2013 talks are now available from the Schedule page.

September 23, 2013

CP2013: Photos from the conference

After failing to publish some of my CP-2013 photos on my Google+ page, I have now collected them all at a special page, organized per day. Please note:
  • The photos are large and are simply scaled at the presentation page. Click on the photo to get a larger version with proper ratio.
  • Many of the photos are not good, but they are still kept since they are mementos. It's mostly because I didn't realize that using maximum zoom on my camera easily gives distortions, especially with unsteady hands. Sorry about that.
  • They are not labeled in any way, though they all have time stamps which might give some clues of the target.
And if you study the slides that I photographed, you get a hint in what areas of CP I'm most interested in. :-)

September 22, 2013

CP2013: Ian Gent blogs about the Lightning Model and Solve Competition

Ian Gent, a member of the winning team (Mano) in the First International Lightning Model and Solve Competition at this year's conference, has just published a blog post about how and why he and his team won: How I became one of the three "best optimizers in the world".

Ian writes very interesting details about how the team approached the problems, why they decided to solve all problems by hand, etc. I like Ian's conclusions:
What Have We Learnt For Constraint Programming?

One word. Modelling.

To expand, Modelling is hard.

To expand, we need better tools to make it easier to model constraint problems quickly. Every group should have been able to use a simple constraints tool to model the easier problems (like queens) and then get on with the other ones.

This is what I take away, and it's nice that this is research that we in St Andrews are already doing, so I hope we can push this harder after this experience.
As one of the organizers of the competition, I also thank Ian for his comments about what we can do better next time. [This means that I have read it, but will not promise anything. :-)]

Also, see my summary of the competition: CP-2013: The Conference Day 3 (Thursday) and CP-2013: Competition problems, rules, and instances.

September 21, 2013

CP-2013: The Conference Day 4 (Friday)

Friday, Day 4 (last day)

After yesterday's quite hectic agenda, today was - for me at least - a much more relaxed day. Though the organizers and the orange guys were busy to the last minute.

First, I would like to thank all the organizers and the "volunteers" [the quotes were official] for a really well organized and interesting conference. It was my first CP conference so I didn't really know what to expect, but it was just fun being there, both for the talks and to meet and discuss CP with the leading experts in the field (or just to listen to their discussions).

So, the day was fun and started with the invited talk by Torsten Schaub on ASP (Answer Set Programming), which is a paradigm different from CP in details but quite close in goals and some principles. The system mentioned was Potassco. You can see more about it at my Answer Set Programming page.

After Torsten's talk was the Modeling track with very interesting talks; I especially liked the CP systems talks: Ozgur Akgun talked about Automated Symmetry Breaking and Model Selection in Conjure. More about Conjure is here.

After that Daniel Fontaine talked about "Model Combinatorics for Hybrid Optimization" in Objective-CP (sorry there is no link to Objective-CP yet) and then Kevin Leo about "Globalizing Constraint Models", a project which tries to automatically make more efficient MiniZinc models.

The tutorial by Phil Kilby about CP for Vehicle Routing Problems was very inspiring. He explained a general CP model which can solve many of the different VRP problems. Really interesting and important stuff.

Last session of the day, and the conference, was "Global Constraints and Conflicts", with four talks which - I have to admit - sometimes were a bit over my head. Though I think the results they showed were quite impressive.

And the conference is now over. Again, thanks for organizing this! And thanks to all the wonderful, inspiring, and bright people in the CP world.

This is the last daily report blog post as CP-2013's official blogger. I hope you have enjoyed it as much as I enjoyed writing it; blogging is a great way to digest things. Feel free to contact me in CP related matters:

Oh, I almost forgot today's first greetings with e-contacts (with a slightly extended meaning of e-contacts): Torsten Schaub, Toby Walsh, Christopher Mears, Roland Yap, Thibaut Feydy.

September 20, 2013

CP-2013: Competition problems, rules, and instances

Here are the CP-2013 competition problems (with some small corrections by Peter Stuckey): cp2013_competition_20130919.pdf. The rules might also be interesting to read, to see the conditions of the competition: cp2013_competition_rules_20130919.pdf. The problem instances are here: and are named "problem_instance.dzn", where instance 0 is the instance shown in the problem documentation.

If you have a nice model (in CP or some other paradigm) that solves the competition instances for any of the problems, feel free to mail me at If I get more than a few models/programs, I might put them on a web page.

Have fun!

Update 2013-09-24: The problem document, the rules, and the instances have now been added to the competition page. Thanks, Guido.

September 19, 2013

CP-2013: The Conference Day 3 (Thursday)

Thursday, Day 3

Today was an exciting and hectic day. It started with Pascal Van Hentenryck and Laurent Michel presenting Objective-CP, a CP system (in progress) built in Objective-C. It looked very versatile and flexible, so it will be really fun to test it, even though I haven't written a single line of code in Objective-C. (Right after I read the paper I ordered a book on Objective-C just to be prepared.)

One other talk I enjoyed very much today was Tias Guns' ACP Doctoral Research Award talk: "Declarative Pattern Mining Using Constraint Programming". The system he built is CP4IM (which I tested a while ago; it's impressive work).

I have to admit that I skipped some of the other talks since I was preparing (both mentally and technically) for the penultimate event of the day, namely:

The hectic part of the day was the competition (formally: the First International Lightning Model and Solve Competition), since I was one of the organizers, together with Peter Stuckey, and with a huge amount of technical/practical help from Farshid Hassani Bijarbooneh. Peter wrote the problems and the checkers for the problems, and I wrote the framework for reading the submission files (in MiniZinc data format, .dzn) to be fed into Peter's checkers. (I also wrote code for detecting whether any of the teams was trying to "beat the system" in certain ways, but all the teams behaved very honourably.)

It was a fun competition, although the problems were perhaps a bit too hard. Later on - I'm not sure when, hopefully this Friday or Saturday - we will publish the problems somewhere so anyone can see what the teams were exposed to. I will of course then blog about it. Later: And here are the problems (with some corrections by Peter): cp2013_competition_20130919.pdf (PDF). I also think that the rules can be interesting to read: cp2013_competition_rules_20130919.pdf.

Here is the table of the teams that got at least one point, i.e. submitted at least one correct solution. There were four teams - not mentioned below - that didn't get any points. In all there were 10 competing teams.
Id  Team                 Points  Team members                                                     Medal
 7  Mano                 71      Allan Van Gelder, Ian Gent, Ian Miguel                           Gold (Competition winner!)
 4  Be Cool and Friends  57      Pierre Schaus, Renauld Hartert, Jean-Noël Monette                Silver
 8  CO4                  47      Valentin Mayer-Eichberger, Johannes Waldemann, Sebastian Will    Bronze
 1  Lazy Guy             39      Andreas Schutt, Thibault Feydy, Geoffrey Chu                     -
 5  Monash               26      Kevin Leo, Guido Tack, Christopher Mears                         -
10  Other                 6      Gabriel Hjort Blinell, Roberto Castañeda Lozano, Mohammed Siala  -

Congratulations to the medallists! And especially to the winners: team Mano, who - interestingly enough - solved the problems only by hand (as indicated by their team name).

And thanks to all the other teams that attended the competition!

After the competition it was a very pleasant Banquet Dinner, with lots of interesting discussions as well as both formal and not so formal speeches.

Handshakes (with the people I've had just e-contacts with before): Andreas Schutt, David Rijsman, Ian Miguel.

Tomorrow, I'm especially looking forward to the invited talk about ASP by Torsten Schaub. (I tested ASP a bit a year or so ago: see My ASP page; modeling in ASP is different and fun.) Another interesting talk will be Ozgur Akgun's about "Automated Symmetry Breaking and Model Selection in Conjure".

September 18, 2013

CP-2013: The Conference Day 2 (Wednesday)

Wednesday, Day 2

Today started and ended with two very inspiring talks, both showing how to use optimization for important issues, and both also showing the need for multi-disciplinary approaches for these kinds of projects, i.e. not only CP or optimization in general but also machine learning, simulation, and agent-based modeling (all areas in which I've been interested on and off, especially machine learning/data mining).

Michela Milano started the day with the invited talk Optimization for Policy Making: the Cornerstone for an Integrated Approach, where the general goal is to build a policy making system (decision support) for energy consumption.

The last lecture was Pascal Van Hentenryck's exciting public lecture Decide Different! about many projects he and his colleagues at NICTA (Australia) and other universities were involved in, such as disaster recovery. He also showed two interesting examples of optimization in crowd funding and similar systems.

A related aside: Pascal is also behind the very interesting Coursera online course Discrete Optimization. I have not yet seen all the lectures, but the one about Dynamic Programming and Constraint Programming was excellent, especially for introducing CP. (I will later see the rest of the course, Local Search and Integer Programming, and perhaps be more serious and do some of the homework.) [In his wonderful introduction lecture, Pascal showed Kidney Exchange as an example of a hard discrete optimization problem, and that inspired me to write a small MiniZinc model to solve these kinds of problems, though it cannot at all handle the huge number of people that is required: about 80,000 people are in need of a kidney. My model solves a problem with 500 people (randomly generated) fairly easily, and 1000 with some more sweat.]
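The core idea of an assignment-based kidney exchange model can be sketched in plain Python (a hypothetical brute-force rendering of one common formulation, not the MiniZinc model itself, and only feasible for tiny instances): x[i] is the person whose kidney person i receives, x must be a permutation where x[i] is either i (no transplant) or a compatible donor, and we maximize the number of actual transplants; the disjoint exchange cycles then fall out of the permutation for free.

```python
from itertools import permutations

def max_exchanges(compatible):
    """Brute-force kidney exchange: compatible[i] is the set of people
    whose kidney person i can receive. Returns (num_transplants, x),
    where x[i] is person i's donor and x[i] == i means no transplant.
    Only workable for tiny instances; a CP model scales much further."""
    n = len(compatible)
    best, best_x = 0, list(range(n))
    for x in permutations(range(n)):
        # valid assignment: keep your own "slot" or get a compatible donor
        if all(x[i] == i or x[i] in compatible[i] for i in range(n)):
            count = sum(1 for i in range(n) if x[i] != i)
            if count > best:
                best, best_x = count, list(x)
    return best, best_x
```

For example, with compatible = [{1}, {0}, set()], persons 0 and 1 form a two-cycle and person 2 gets nothing, giving 2 transplants. In a CP model the permutation part would typically be an all_different constraint over the x array, with the search left to the solver.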

Some other interesting talks today: In the Search track Geoffrey Chu talked about "Dominance Driven Search" (yet another approach that might lead us to powerful black box solving?), and Marc Schoenauer talked about bandit-based search for CP (which - to be honest - was more interesting than I first anticipated). Jean-Charles Régin's talk about Embarrassingly Parallel Search showed what seems to be a good way to do massively parallel search (though I'm definitely not an expert in this area).

Ian Gent and Lars Kotthoff held a kind of iconoclastic tutorial about recomputation, which may very well be the future way of doing repeatable experiments in computer science (and surely in other disciplines that use computer experiments). It seems to be quite easy both to submit an experiment to the site and especially to install and redo the experiment. See for more information. The tag line for the blog/project is If we can compute your experiment now, anyone can recompute it 20 years from now. Six of the accepted papers have their experiments at, see here. For a simple demo experiment, see Our First Experiment: A Chess Puzzle.

My favorite talk from the Global Constraint track was "Solving String Constraints: The Case for Constraint Programming", where Pierre Flener presented the paper by Jun He. The paper compared a CP approach with dedicated solvers such as Hampi, Kaluza and Sushi on certain string constraint benchmarks. The CP solver dominated these solvers completely. (I actually played with Hampi this summer and missed much of the things I'm used to with a CP solver.)

Tomorrow (Thursday) will be interesting. It starts with a presentation of a new CP system: Objective-CP (a CP solver written in Objective-C) and ends with the First International Lightning Model and Solve Competition. I'm also looking forward to Tias Guns' ACP Doctoral Research Award talk: "Declarative Pattern Mining using Constraint Programming". Oh, sorry. The day actually ends with the Conference Banquet.

(Greeting list today: Barry Hurley, Geoffrey Chu, Laurent Michel.)

CP-2013: Photos so far (I)

Here are some of my CP-2013 photos so far (Google+ album), taken from my POV.

[Later: I've got some complaints that the link to the photos doesn't work. Sorry about that. Perhaps it's required that one is a Google+ user to look at the album? I will post these at my site, but that will be after the conference.]

Even later: See CP2013: Photos from the conference for links to the photos.

September 17, 2013

CP-2013: The real Conference Day 1 (Tuesday)

Tuesday, September 17

This is the first "real" day of the Conference. Yesterday, the workshop and doctoral program day, was the zeroth day.

Let me first say I'm sorry that I misled you yesterday. The talks by Jun He and Pascal are not until tomorrow, Wednesday. That blog post has been changed now. Also, it should probably be called "Day 0", but changing that would destroy the links.

And as of this morning, I'm the official official conference blogger (yesterday I was only the unofficial official conference blogger). As you've already seen, it's not about everything at the conference, just a report of what I happen to like or have noticed - as always on my blogs.

Here are some highlights from today.

The following two talks by Peter Stuckey I had really looked forward to:

First, Peter's invited talk "Those who forget the past are condemned to repeat it." The final words were "The Search is Dead. Long Live Proof". A very interesting and provocative talk; and the research is very interesting too, of course. It will be very interesting to follow this. As a modeler it would be nice to have this as the basis for a black box solver (at least in my dreams).

And then his report about the MiniZinc Challenge 2013 results. Here are the medals (the four categories are fixed search, free search, parallel, and open):

          Fixed     Free      Parallel  Open
GOLD      opturion  opturion  or-tools  or-tools
SILVER    or-tools  or-tools  choco     choco
BRONZE    gecode    iz_plus   opturion  opturion

Congratulations to all!!

This year Opturion CPX and or-tools did really well. As Peter said: The king [Gecode] is dead, Long Live the King. Gecode was finally knocked off its perch after winning ALL previous gold medals.

As in previous years, the chuffed solver was actually the best solver in both the fixed and free categories, but since it's a G12 solver it can't get any official place. However, or-tools was better than chuffed in the parallel class (and open) this year (with chuffed second).

If you wonder how the FlatZinc solver by Neng-Fa Zhou and me (the Picat solver) did: it came last in all categories, which was not especially surprising to me; I'm glad we got some points. We'll try to do better next year (and this is - as Peter mentioned - one of the purposes of the challenge).

Here is the URL from Peter's talk to the result page: [later: this link works now.]

Related: MiniZinc Challenge Medals 2008-2012

Peter also mentioned the competition at this conference: the Lightning Model and Solve competition. The registration form is at the boards. After just some hours there were already about 10 registered. That's great!

Before lunch there were also the three Best Paper talks. Not all were within my forte, but they were really worth listening to.

After the lunch, I went strictly to the left column in the program: first the Applications track, and then the Bin Packing track. One of my two favorite talks was Helmut Simonis' talk about an application of Model Seeker (last year I blogged about it in Beldiceanu and Simonis: A Model Seeker). The other talk was Andrea Rendl's about balancing bike sharing systems. I like these kinds of practical applications (which are not very hard to understand).

It was interesting to learn that CP was used to calculate orbits in chaotic systems; a long time ago I read some popular science books - as well as some not-so-popular books - about this, and played with quite a few chaos systems.

(I didn't attend the ACP General Assembly.)

The scheduled day ended with a reception drink at Östgöta nation. I hope everyone else had lovely discussions and made new contacts.

Assorted things

I was asked about notes from the panel discussion at CP Solver workshop yesterday. Unfortunately we didn't take any notes, but I hope to collect the highlights later this week.

Today's people greet (i.e. the ones I've only e-communicated with before and today at least shook hands with): Narendra Jussien, Barry O'Sullivan, Andrea Rendl, Peter Nightingale. There are still some left to greet. I also made some new friends, and this is actually one of the reasons I'm attending this conference. They were mostly CP solver guys, which should be no surprise to the followers of this blog.

Finally, for those of you who don't know me: I'm the guy who sometimes wears an eye patch. Please don't worry, it's just because I don't have a lens in my left eye after an unsuccessful cataract operation this May, and sometimes too much light tends to confuse my small brain. Eventually this will be fixed with a special contact lens.

September 16, 2013

CP-2013: Conference Day 1: Workshops

Today was the first day [later: zeroth day] of the CP-2013 conference: the Doctoral Program and Workshops day.

The venue, Uppsala University's main building, is very old (the oldest in Scandinavia, from the late 15th century or so), beautiful, and very impressive (though the quarter bell should perhaps be fixed since it's off by some minutes).

When we arrived in the morning, we met a lot of CP guys in wonderful orange T-shirts (designed by Joseph Scott) which indicate the tech and handy persons, those who do everything behind the scenes, i.e. just make everything work. This is much appreciated!

As mentioned earlier, I was co-organizing the CP Solver workshop (the presentations are available via the link), also gave a talk (CP Solvers/Learning Constraint Programming), and was responsible for the "5 minutes left" warning messages (a helpful feature for the speakers, I guess), so I didn't attend other workshops (from what I heard they were great as well).

Our workshop was quite well visited (perhaps in part because almost all speakers were in the audience, but there were of course many non-speakers in the audience as well). I liked the mix of the talks: commercial systems and open source systems, old systems and newer systems, and all talkers had some point of view that was interesting. The workshop ended with an hour-long Panel Discussion - moderated by Helmut Simonis - with many interesting general discussions about the role and future(s) of CP solvers.

And from a personal point of view, it was really great to be able to at least greet some of those I've only had electronic communication with earlier: Jacob Feldman, Chris Kuip, Victor Zverovich, Peter Stuckey, Pascal Van Hentenryck, Toshimitsu Fujiwara, Jean-Guillaume Fages, Guido Tack, Pierre Schaus.

The (scheduled) day ended with a "Welcome drink" in the Chancellor room with a lot of old portrait paintings (from what I understood, this is one of the largest collections in Scandinavia). In all, this first day was attended by about 100 people, about half of the total.

Tomorrow (Tuesday) is the first "real" conference day. Some of the talks I'm looking forward to are:
  • Peter J. Stuckey's Invited talk: "Those Who Cannot Remember the Past are Condemned to Repeat it"
  • Peter J. Stuckey: "MiniZinc Challenge results"
  • Jun He, "Solving String Constraints: The Case for Constraint Programming" [later: sorry, wrong day]
  • Pascal Van Hentenryck Invited Public Lecture: "Decide Different" [later: sorry, wrong day]

September 14, 2013

CP-2013: Conference blogging

In a few days it's time for CP-2013 (The 19th International Conference on Principles and Practice of Constraint Programming), in Uppsala, Sweden.

During the conference, I will:
  • blog (at this blog)
  • tweet, at hakankj, using the tag #cp2013 (not all the hits are about the real CP-2013 conference though). Some tweets will probably also be tagged with #copr.
  • and perhaps also at Google+
I don't know when, how often or how much this reporting will be. It will be just from my own perspective and not a full report on every talk. Hopefully it will convey that I have looked forward to this conference for a long time, both to listening to the talks (of course) and to finally meeting all the great guys I've only had mail contact with - or just read about - since I started to be interested in constraint programming (in 2008).

I'm quite sure there will be others that blog/tweet/etc. about and during the conference.

Some especially interesting things that I'm looking forward to (for miscellaneous reasons): This will be a fun week!

Here are some useful links:

August 08, 2013

All my public CP models are now available via GitHub

Yesterday I finalized the first round of making all my public CP models - as well as many other programs - available via GitHub. Everything is available from my site, but it is easier for others to fetch them from a single repository instead of downloading each single file.

Here's a summary of the directories so far. Most of the directories have the same name as on my site (with a few exceptions).


As of writing the repository has these statistics (according to GitHub):
1 author has pushed 
42 commits to all branches, excluding merges. 
On master, 5,747 files have changed and there have been 
2,417,980 additions and 1 deletions. 
The "2,417,980 additions" is the total number of lines in these files.

Let's start with the CP systems. After that follows some non-CP systems.

Constraint Programming systems (and related paradigms)

  • aimms: AIMMS system, linear/integer programming and constraint programming
  • ampl: AMPL, integer/linear programming and constraint programming
  • answer_set_programming : Answer Set Programming
  • bprolog: B-Prolog, logic programming, constraint programming, tabling, loops etc
  • choco3: Choco v3, constraint programming
  • comet: Comet, constraint programming, linear/integer programming, constraint-based local search
  • eclipse_clp: ECLiPSe CLP, Prolog, logic programming, constraint programming, loops etc
  • essence_savile_row: Essence'/Saville Row, constraint programming
  • essence_tailor: Essence'/Tailor, constraint programming
  • gecode: Gecode, constraint programming
  • gecode_r: Gecode/R, Ruby interface to Gecode
  • google_or_tools: Google or-tools, constraint programming, integer/linear programming, Java, Python, and C#
  • jacop: JaCoP and JaCoP/Scala, constraint programming
  • jsr_331: JSR-331, Java API for constraint programming
  • minizinc: MiniZinc, constraint programming. Also G12 Zinc files.
  • numberjack: Numberjack, constraint programming
  • oocalc_excel: oocalc/Excel, a few linear/integer programming models
  • oscar: OscaR, constraint programming
  • picat: PICAT, constraint programming, logic programming, loops, tabling, etc
  • sicstus: SICStus Prolog, constraint programming, logic programming, loops, etc.

Non-CP systems, mostly Very High Level programming languages/systems

  • apl: APL (note: index.html contains codes etc)
  • frink: Frink, high level programming language
  • j: J, array programming language (note: index.html contains code etc)
  • k: K, array programming language (note: index.html contains code etc)
  • mathematica: Mathematica, mathematical programming
  • pddl: PDDL (planning language)
  • perl6: Perl6 programming language
  • poplog: Poplog Pop-11 high-level programming language
  • sabr: SABR, constraint-based planning language
  • setlx: SETL and SetlX, high level programming language
  • unicon: Unicon/Icon, high level programming language

Non-CP systems, data mining/machine learning

  • eureqa: Eureqa/Formulize, genetic programming
  • jgap: JGAP, genetic programming
  • weka: Weka, data mining/machine learning, Java files, HTML, and data files (ARFF and CSV)
Some comments:
  • There are some duplicates, especially data files that are included in several system directories
  • I'll try to keep this repository as updated as possible, but there might be some time lag since I will probably not update each single new model.
  • There is no guarantee that models/programs will work with the latest version of a specific system. However - for some systems at least - I tend to keep up with major releases.
  • All directories include the latest index.html file from my site. Also, some directories just contain that file (and perhaps some single other file), e.g. for APL, K, J, and Mathematica.
  • This was the first round of making things available. Later on I might publish some of the code from (i.e. my "useless programs").
Hope you will enjoy it!

August 06, 2013

CP-2013 (Uppsala, Sweden): Early registration deadline soon: 8 August

Just a short notice that the early registration of CP 2013 (Uppsala, Sweden) has a deadline very soon: 8 August. The not so early registration is a month away...

From the Call for Participation:
CP 2013
Nineteenth International Conference on Principles and Practice of Constraint Programming
Uppsala, Sweden, September 16-20, 2013
Also, see the full conference program.

July 03, 2013

Gecode version 4.1.0 released

Gecode version 4.1.0 was released some weeks ago. From the Changelog:
Changes in Version 4.1.0 (2013-06-13)

This release adds a really useful feature and the required infrastructure to Gist (it now can show information on the alternatives in a search tree), has some new features, and fixes quite a number of bugs (some of which are quite serious, in particular for: LDSB, restart-based search, and FlatZinc).
  • Kernel
    • Additions
      • Branchers now support a print() member function. (major)
      • Both AFC and Activity can be changed by a set() function. (minor)
  • Search engines
    • Bug fixes
      • Fixed a bug in restart-based search, which would crash if the problem is failed at the root or only had a single solution during best-solution search. (major, thanks to Chris Mears, Roberto Castañeda Lozano)
  • Finite domain integers
    • Additions
      • Integer and Boolean branchings now take an optional argument for a user-defined print function. See MPG for details. (major)
      • Added if-then-else constraint (called ite) for integer variables. (minor)
    • Other changes
      • Added classes IntMinimizeSpace and IntMaximizeSpace for cost-based optimization with an integer cost variable. While the classes MinimizeSpace and MaximizeSpace are still available, their use is deprecated and a later release might remove them. (minor)
    • Bug fixes
      • Fixed crash due to combination of LDSB and Activity. (major, thanks to Stefano Gualandi)
      • The internal representation of integer variable domains could leak memory to the space. (major)
      • Fixed binpacking crash when the b and s arrays are empty. (minor, thanks to David Rijsman)
  • Finite integer sets
    • Additions
      • Set branchings now take an optional argument for a user-defined print function. See MPG for details. (major)
    • Bug fixes
      • Fixed crash due to combination of LDSB and Activity. (major, thanks to Stefano Gualandi)
  • Floats
    • Additions
      • Float branchings now take an optional argument for a user-defined print function. See MPG for details. (major)
      • Added classes FloatMinimizeSpace and FloatMaximizeSpace for cost-based optimization with a float cost variable. (minor, thanks to Pascal Francq)
    • Bug fixes
      • Fixed FLOAT_VAR_ACTIVITY_SIZE_MIN/MAX, which used to compute size/activity instead of activity/size. (minor)
      • Random value selection for float branchings was not declared with the right types. (minor)
      • Fixed missing propagation in reified rel-constraints. (minor, thanks to Duong Khanh Chuong)
  • Minimal modeling support
    • Bug fixes
      • Boolean expressions could lead to a segfault when initialized with default constructor. (minor, thanks to Roberto Castañeda Lozano)
  • Script commandline driver
    • Additions
      • Added classes FloatMinimizeScript and FloatMaximizeScript for cost-based optimization with a float cost variable. (minor, thanks to Pascal Francq)
    • Other changes
      • Added classes IntMinimizeScript and IntMaximizeScript for cost-based optimization with an integer cost variable. While the classes MinimizeScript and MaximizeScript are still available, their use is deprecated and a later release might remove them. (minor)
  • Example scripts
    • Additions
      • Added LDSB-based symmetry breaking to graph coloring example. (minor, thanks to Stefano Gualandi)
  • Gist
    • Additions
      • Gist now can show information on the alternatives in a search tree, using the new menu options "Label branches" and "Label path". (major)
  • Gecode/FlatZinc
    • Additions
      • The FlatZinc interpreter now supports float variables as objective functions. (major)
      • Added {int,bool,set,float}_default_search annotations. These can be used on solve items to declare the default search strategy for the respective variable types. (minor)
      • Added int2float constraint which was missing in the FlatZinc interpreter. (minor, thanks to Tias Guns)
    • Bug fixes
      • Fixed printing of float variables to be compatible with MiniZinc output. (minor, thanks to Peter Nightingale)
      • Fixed a segmentation fault when using FlatZinc with float variables that have initializers. (minor, thanks to Peter Nightingale)
      • Fixed the random search strategies to all use the same random number generator instead of different generators all initialised with the same seed (and therefore all producing the same sequences). (minor)
  • General
    • Other changes
      • CMake build system tweaked for out-of-source-dir building. (minor, thanks to Cliff Yapp)

June 11, 2013

CP 2013: First International Lightning Model and Solve Competition

I am happy and proud to announce the First International Lightning Model and Solve Competition which will be held at CP 2013 (The 19th International Conference on Principles and Practice of Constraint Programming, September 16-20, 2013, Uppsala, Sweden).

Here is the description of the Competition:
First International Lightning Model and Solve Competition

Organizers: Peter J. Stuckey and Håkan Kjellerstrand [that's me]

In order to crown the world champion optimizers we are organizing the first "Lightning Model and Solve" competition for teams of up to 3 people.

The competition presents 6 satisfaction and optimization problems, with 3-5 instances each, and each team tries to find good solutions to these problems within 2 hours. The team can use any optimization technology they desire for any problem, including solving problems by hand. Problem data is presented as text files containing numbers. Solutions are submitted as text files.

Solutions are ranked first by quality (for optimization problems) and then by time of submission. Each team scores one point for each instance and each other team for which their solution is better. Any correct solution is better than no submission at all.
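My reading of this scoring rule can be sketched in Python (a hypothetical illustration; the official checkers were written by Peter Stuckey and may differ in details). Here results[team][instance] is a (quality, submission_time) pair, absent if the team submitted nothing for that instance; higher quality wins (assuming a maximization objective), ties are broken by earlier submission, and any correct solution beats a missing one:

```python
def beats(a, b):
    """True if submission a is strictly better than submission b.
    A submission is a (quality, submission_time) pair, or None if missing."""
    if a is None:
        return False
    if b is None:
        return True          # any correct solution beats no submission at all
    if a[0] != b[0]:
        return a[0] > b[0]   # higher quality wins (maximization assumed)
    return a[1] < b[1]       # same quality: earlier submission wins

def scores(results):
    """results[team][instance] = (quality, submission_time).
    Each team scores one point per instance and per other team
    whose submission it beats on that instance."""
    teams = list(results)
    instances = {inst for team in teams for inst in results[team]}
    points = {team: 0 for team in teams}
    for inst in instances:
        for t in teams:
            for u in teams:
                if t != u and beats(results[t].get(inst), results[u].get(inst)):
                    points[t] += 1
    return points
```

For example, with one instance where team A submits quality 10, team B quality 8, and team C nothing, A scores 2 points, B scores 1, and C scores 0.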

The competition entry is for teams of up to 3 people. Each team can make use of ONLY ONE laptop, and cannot make use of any computing resources other than the laptop itself (no connecting to your CLUSTER at home). We recommend teams of 3; smaller teams will be merged unless the number of entrants is small.

The main aim of the competition is to have fun, and earn the eternal glory of the title of "best optimizers in the world".

To take part in the competition, you need to be a registered participant of the CP 2013 conference and of course physically present.

More information will be added closer to the conference.

May 12, 2013

Program for SweConsNet Workshop 2013 (Lund, 27 May 2013)

Here is the program for The 12th Workshop of the Network of Sweden-based researchers and practitioners of Constraint programming:
SweConsNet is the Network for Sweden-based researchers and practitioners of Constraint programming.

Following the previous successful workshops, we would like to announce the 12th SweConsNet workshop, which will take place in Lund, Sweden on May 27th 2013. The purpose of this workshop is to learn about ongoing research in constraint programming, as well as existing projects and products. We will also discuss the further development of the network, such as a possible widening to other Nordic countries.

The workshop is open to everybody interested in the theory and practice of constraint programming, whether based in Sweden or elsewhere. The scope of the workshop spans all areas of Constraint Programming, and is open to presentations and discussions addressing topics related to both theory and application.

Please forward a link of this page to people who might be interested but are not yet on the SweConsNet mailing list. They can subscribe to it by sending a message to Justin.Pearson [at]

We hope for your participation, and highly encourage you to submit a proposal for a presentation of your ongoing work, recent results, or of a relevant discussion topic. There are no paper submissions, reviews, or proceedings, hence recent conference/journal papers may also be presented.

To register for participation, fill in the registration form no later than May 22. The workshop does not have a registration fee.


09.30  Registration and coffee

Jean-Noël Monette (Uppsala University)
Towards solver-independent propagators

We present an extension to indexicals to describe propagators for global constraints. The resulting language is compiled into actual propagators for different solvers, and is solver-independent. In addition, we show how this high-level description eases the proof of propagator properties, such as correctness and monotonicity. Experimental results show that propagators compiled from their indexical descriptions are sometimes not significantly slower than built-in propagators of Gecode. Therefore, our language can be used for the rapid prototyping of new global constraints.


Mats Carlsson (SICS) (joint work with Nicolas Beldiceanu & Arnaud Letort)
Scalable Scheduling Constraints

The topic of this talk is synchronized sweep-based filtering algorithms for resource and precedence constraints that are common in scheduling problems.  We first describe a sweep-based implementation of the time-tabling method for "cumulative". Step by step, we then extend this algorithm to handle colored resources, multiple resources, and precedences.  We also derive from the filtering algorithm a greedy assignment algorithm, which is useful for very large scale problems where normal tree search would run out of memory.


Mehmet A. Arslan (LTH)
Instruction Selection and Scheduling for DSP Kernels on Custom Architectures

As custom architectures become more and more common for DSP applications, instruction selection and scheduling for such applications and architectures become important topics. In this paper, we explore the effects of defining the problem of finding an optimal instruction selection and scheduling as a constraint satisfaction problem (CSP). We incorporate methods based on sub-graph isomorphism and global constraints designed for scheduling. We experiment using several media applications on a custom architecture, a generic VLIW architecture and a RISC architecture, all three with several cores. Our results show that defining the problem with constraints gives flexibility in modeling, while state-of-the-art constraint solvers enable optimal solutions for large problems, hinting a new method for code generation. 


Kim-Anh Tran (SICS)
Investigation of Implied Constraints in a Constraint-Based Compiler Back-end

Adding implied constraints addressing register allocation and instruction scheduling to the constraint model of a compiler back-end can be crucial to the performance of the code generator. In the hope of cutting down the search effort, implied constraints are investigated, implemented and evaluated. The talk will cover current findings of a master thesis on a set of implied constraints and give an insight into their actual impact on the code generation problem.


Håkan Kjellerstrand
What I like about Constraint Programming

This is a presentation about fascinating features in Constraint Programming: those I personally find very appealing and/or unique (e.g. compared to other programming paradigms/systems), especially regarding the modeling aspect.


Björn Regnell (LTH)
A Scala DSL for Constraint-based Requirements Engineering

Software Requirements Engineering (RE) includes the challenging activities of release planning and prioritization, which both are suitable to be tackled by constraint solving. However, there is a need for integrating the representation of software requirements with the representation of constraints in a way that is easy to use by practitioners. This talk aims to interactively discuss what types of constraints are essential in this domain and present and invite feedback on a prototype tool that provides a Scala-internal DSL for Constraint-based RE, wrapping the JaCoP solver.

15.00  Coffee break

Krzysztof Kuchcinski (LTH) 
Mapping Streaming Application into MPSoc

16.10  Workshop closure

May 06, 2013

CP-2013 Workshops

The workshops for CP-2013 (Uppsala, 16 September 2013) have been published. From the Workshops page:

The following workshops will take place before the main conference on September 16.

WCB13: Constraint Based Methods for Bioinformatics

WCB'13 is the 9th in a series of workshops on Constraint Based Methods for Bioinformatics. The topics of interest are all those concerning bioinformatics and constraints and related techniques.
workshop homepage

TRICS13: Techniques foR Implementing Constraint programming Systems

Constraint programming systems are software systems that support the modeling and solving of problems using constraint programming. Such systems include constraint programming libraries and runtime systems for constraint programming languages. This workshop, the fourth in the series, will focus on implementation issues of such systems.
workshop homepage

CSP-SAT: 3rd International Workshop on the Cross-Fertilization between CSP and SAT

Constraint Satisfaction Problems (CSPs) and Boolean Satisfiability Problems (SAT) have much in common. However, they also differ in many important aspects, which result in major differences in solution techniques. This workshop is designed as a venue for bridging the gap and for cross-fertilization between the two communities.

CP Solvers: Modeling, Applications, Integration, and Standardization

The objective of this workshop is to get an industrial overview of currently available CP solvers and their use in real-world applications. The focus of the workshop will be not on implementation but rather on modeling, applications, integration, and standardization. Such an overview will fill a gap in the program of recent CP conferences and will allow CP developers and researchers to share the most recent experience of the actual use of different CP-based tools.
workshop homepage

Workshop on Optimization for Smart Cities

Policy makers who run the complex network of diverse people, expected services and aging infrastructure are on a constant search for more efficient ways to analyse data, anticipate problems and coordinate resources in their cities. This workshop has the purpose of bringing together scholars and practitioners that work on the solution of problems related to the improvement of the quality of life and the use of resources in cities, to make cities smarter.
workshop homepage

COSpeL: Domain Specific Languages in Combinatorial Optimization

Domain Specific Languages (DSLs) are programming languages or libraries developed to handle specific tasks. The aim of COSpeL is to bring together people interested in the development and use of DSLs in the context of combinatorial optimization.
workshop homepage

ModRef: Constraint Modelling and Reformulation

This workshop covers modelling and reformulation techniques that make constraint programming more accessible and easier to use, widening the use of CP technology.
workshop homepage

As mentioned earlier, I have a special interest in the CP Solvers: Modeling, Applications, Integration, and Standardization workshop.

April 25, 2013

CP-2013 CP Solver Workshop: Survey of CP Solvers

An addendum to CP-2013 Workshop "CP Solvers: Modeling, Applications, Integration, and Standardization" (Uppsala, Sweden):
If you are an author of a CP solver, we ask you to fill out the following questionnaire (even if you do not plan to submit a presentation or attend the workshop). We plan to present a summary of all CP Solvers at this website before the start of the conference.
And here is the link to the workshop page: Workshop "CP Solvers: Modeling, Applications, Integration, and Standardization".

April 23, 2013

CP-2013 Workshop "CP Solvers: Modeling, Applications, Integration, and Standardization" (Uppsala, Sweden)

I'm very happy to announce the CP-2013 Workshop "CP Solvers: Modeling, Applications, Integration, and Standardization":
To be held at the 19th International Conference on the Principles and Practice of Constraint Programming (CP 2013) in Uppsala, Sweden on Monday September 16, 2013.

Objectives and Scope
The objective of this workshop is to get an industrial overview of currently available CP solvers and their use in real-world applications. The focus of the workshop will be not on implementation but rather on modeling, applications, integration, and standardization. Such an overview will fill a gap in the program of recent CP conferences and will allow CP developers and researchers to share the most recent experience of the actual use of different CP-based tools.

We invite authors of all commercial and open source CP solvers to submit their presentations that should include (but not be limited to) the following topics:

  • Demonstration of modeling facilities using a commonly known problem from the CSPLib
  • Implementation programming languages and why they were selected
  • Covered variable types, global constraints, and search algorithms
  • Integration with LP/MIP/SAT tools
  • Integration with business rules, predictive analytics, and other decision making techniques
  • Real-world use: positive and negative experience
  • Standardization and future plans
If you are an author of a CP solver, we ask you to fill out the following questionnaire (even if you do not plan to submit a presentation or attend the workshop). We plan to present a summary of all CP Solvers at this website before the start of the conference.

We also welcome application developers and researchers who use different CP tools and who are willing to share their practical experience.

A special focus will be on CP standardization efforts including:

  • Development of CP APIs for different programming environments
  • CP XML for interchange of constraint satisfaction/optimization problems
  • CP design patterns
  • Libraries of commonly used constraints
  • Collections of constraint satisfaction and optimization problems.

An expert panel discussion will be held at the end of this one-day workshop.

We hope that the workshop will help to improve communication between different CP vendors, between solver developers and their users, and will ultimately encourage a wider use of CP technology as a key component of the modern decision making applications.

Target Audience

  • Developers and users of different CP solvers
  • Business application developers who want to incorporate CP-based tools into their decision support systems
  • CP researchers.

[To be defined]

Important Dates

  • Sunday, June 16 Presentation title and abstract submission
  • Tuesday, July 16 Acceptance notification
  • Friday, August 16 Complete presentation submission
  • Monday, September 16 Workshop

Organizing Committee

• Jacob Feldman, OpenRules, Monroe, NJ, USA
• Helmut Simonis, 4C, Cork, Ireland
• Hakan Kjellerstrand, Independent Researcher, Malmo, Sweden

All abstracts and presentations must be submitted in PDF format using EasyChair. All accepted presentations will be made publicly available on the workshop's website. At least one author of any accepted presentation must attend the event. An accepted presentation will be withdrawn if no such participation is secured with the payment of the workshop dues.

Please note that we invite authors of all CP solvers written using any languages to make brief presentations of their tools.

This is exciting, not only for the workshop itself (which is about my favorite areas in CP), but also because I'm a co-organizer (an honourable task). This will be very interesting. I hope to see you there!

April 22, 2013

SweConsNet Workshop 2013 - Second call for presentations and participation

Second call for presentations and participation: The 12th workshop of SweConsNet, the Network for Sweden-based researchers and practitioners of Constraint programming, Hosted at Lund University, Lund, Sweden, May 27th, 2013.
The 12th Workshop of the Network of Sweden-based researchers and practitioners of Constraint programming

SweConsNet is the Network for Sweden-based researchers and practitioners of Constraint programming.

Following the previous successful workshops, we would like to announce the 12th SweConsNet workshop, which will take place in Lund, Sweden on May 27th 2013. The purpose of this workshop is to learn about ongoing research in constraint programming, as well as existing projects and products. We will also discuss the further development of the network, such as a possible widening to other Nordic countries.

The workshop is open to everybody interested in the theory and practice of constraint programming, whether based in Sweden or elsewhere. The scope of the workshop spans all areas of Constraint Programming, and is open to presentations and discussions addressing topics related to both theory and application.

Please forward a link of this page to people who might be interested but are not yet on the SweConsNet mailing list. They can subscribe to it by sending a message to Justin.Pearson [at]

We hope for your participation, and highly encourage you to submit a proposal for a presentation of your ongoing work, recent results, or of a relevant discussion topic. There are no paper submissions, reviews, or proceedings, hence recent conference/journal papers may also be presented.

To register, please fill your data in the registration form. If you want to propose a presentation, please send also the title and the abstract of your talk. In order to facilitate organization, please notify us of your intention to participate as soon as possible, and at the latest before May 10th, 2013. The workshop does not have a registration fee.

In the program so far

Participants will be presented continuously.

1. Jean-Noël Monette (Uppsala University): Towards solver-independent propagators

2. Håkan Kjellerstrand: What I (still) like about Constraint Programming

3. Mats Carlsson (SICS), TBD.

4. Christian Schulte (KTH), TBD.

5. Björn Regnell (LTH), A Scala DSL for Constraint-based Requirement Engineering

6. Mehmet A. Arslan (LTH), Instructions Selection and Scheduling for DSP Kernels on Custom Architectures.
I just registered my own talk, What I (still) like about Constraint Programming, with the presentation abstract:
This is a presentation about fascinating features in Constraint Programming: those I personally find very appealing and/or unique (e.g. compared to other programming paradigms/systems), especially regarding the modeling aspect.
I guess that many/most of the attendants know these "fascinating features" already, but all might not agree (I hope) that they are as important (or fascinating) as I find them.

April 11, 2013

The MiniZinc Challenge 2013 and other MiniZinc news

Here are some news from the G12 MiniZinc world.

MiniZinc main page:

The MiniZinc main page has changed to

MiniZinc challenge

From MiniZinc Challenge 2013:
The aim of the MiniZinc Challenge is to start to compare various constraint solving technology on the same problems sets. The focus is on finite domain propagation solvers. An auxiliary aim is to build up a library of interesting problem models, which can be used to compare solvers and solving technologies.

Entrants to the challenge provide a FlatZinc solver and global constraint definitions specialized for their solver. Each solver is run on 100 MiniZinc model instances. We run the translator mzn2fzn on the MiniZinc model and instance using the provided global constraint definitions to create a FlatZinc file. The FlatZinc file is input to the provided solver. Points are awarded for solving problems, speed of solution, and goodness of solutions (for optimization problems).

Registration opens: Wed, 1 May 2013.
Problem submission deadline: Fri, 14 June 2013.
Initial submission round begins: Mon, 1 July 2013.
Initial submission round ends: Fri, 19 July 2013.
Final submissions: Fri, 2 August 2013.
Announcement of results at CP2013: 17 - 20 September 2013.

For details of the competition see:

Note that the scoring system has slightly changed this year, so that solvers that obtain indistinguishable results in quality of solution split the points inversely proportional to the time taken.
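As a hypothetical illustration only (the exact formula is defined by the challenge rules, not here), splitting a point inversely proportionally to solve time could look like this:

```python
def split_points(times, total=1.0):
    """Hypothetical sketch: solvers with indistinguishable solution
    quality share `total` points inversely proportionally to their
    solve times, so the faster solver gets the larger share."""
    weights = [1.0 / t for t in times]
    s = sum(weights)
    return [total * w / s for w in weights]
```

With times [1.0, 3.0] the faster solver would get 0.75 of the point and the slower one 0.25.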

To register for the challenge please email

Here is the Call for Problem Submission:
The MiniZinc Challenge is an annual solver competition in the Constraint Programming (CP) community held before the International Conference on Principles and Practice of Constraint Programming. The MiniZinc Challenge 2013 is seeking interesting problem sets on which various constraint solving technologies should be compared on this year. Everyone is allowed to submit problems regardless of whether they are an entrant in the challenge.

Important dates and deadlines:

Problem submission open: now
Problem submission deadline: Fri, 14 June 2013

Problem submission

Send an email with the subject line “[MZNC13] benchmark” to mzn-challenge 'at'

There are no restrictions on the kind of problems, but ideally they should be of an interesting nature, such as practice-related problems, puzzles, etc. Problem submissions with real-world instances are warmly welcome. Models for the 2013 challenge can only use integer and Boolean variables.

The problem submitter provides a MiniZinc model of the problem and 20 instances ranging from easy-to-solve to hard-to-solve for an “ordinary” CP system. It is strongly encouraged to make use of the global constraint definitions provided in the MiniZinc 1.6 distribution. Please, follow the links below for submission instructions and requirements.
MiniZinc Challenge 2013
Also see: MiniZinc Challenge Medals 2008-2012

MiniZinc forum

Also just released is The MiniZinc forums with three forums:
  • Beginners: All beginner-level questions about MiniZinc.
  • Users: General discussion about MiniZinc. For beginners' questions please use the dedicated Beginners' Forum.
  • Developers: Discussions about developing the MiniZinc system and solvers that interface with MiniZinc.

March 14, 2013

Gecode version 4.0.0 released

Gecode version 4.0.0 has been released. Congrats to the developers!

From the Changelog:
Changes in Version 4.0.0 (2013-03-14)

This release adds a multitude of new features, fixes many bugs, and offers a number of performance improvements. There are too many major improvements to even summarize them here all, so here are just some highlights: LDSB as automatic symmetry breaking during search; restart-based search; complete redesign and reimplementation of branching enhancing the expressiveness massively (AFC with decay, activity, user-defined variable and value selection, tie-break control, better randomization, ...); half-reification; addition of floating point constraints.

It is recommended to read the new Chapter in MPG on branching as the changes are substantial and require you to change your programs, see also the full list of changes below.

As the interfaces have changed, please consult How to Change to Gecode 4.0.0.

  • Kernel
    • Additions
      • The random seeds for variable and value branching options can now be initialized from a hardware random number generator (see MPG for details). (minor)
      • All ArgArrays now accept STL vectors and input iterators for construction. (minor)
      • The kernel now can record the activity of variables. The activity of a variable is defined as how often the domain of a variable has been pruned during search. For details, see "Modeling and Programming with Gecode". (major)
    • Other changes
      • The default constrain() function for best-solution search now does by default nothing (it used to throw an exception). (minor)
      • The entire infrastructure for variable-value branchers has been reimplemented from scratch. The new design makes a much better compromise between code size, efficiency, and expressiveness: the efficiency is about the same (for examples with no propagation and just branching one can note a slowdown of 2-4%) while code size shrinks drastically (the overall code size for integer variables shrinks by 20%) and the architecture is much more expressive (in particular, it supports tie-break limits, see MPG). (major)
      • Throw exception when the user-defined copy constructor of a class that inherits from Space does not call the Space copy constructor. (minor)
    • Performance improvements
      • Fixed a bug in the main memory allocation routine: now heap block sizes are decreased dynamically as they should be. Also changed the memory configuration parameters as explained in: Zandra Norman, Memory Management for Gecode. KTH Royal Institute of Technology, Sweden, Bachelor thesis, TRITA-ICT-EX-2012:143, 2012. (major, thanks to Zandra Norman)
  • Search engines
    • Additions
      • Added support for restart-based search, see MPG for details. (major)
    • Other changes
      • Variable selection for branching used the quotient of size divided by degree, accumulated failure count, or activity. They now use the inverse. That is, for example, it is not any longer INT_VAR_SIZE_DEGREE_MIN() but INT_VAR_DEGREE_SIZE_MAX() (that is, largest degree divided by size). (major) That looks like an annoying change but is in fact essential: the strategies using accumulated failure count and activity now could have run into division by zero issues. And just changing the implementation is not good enough because the values of these measures can now be exposed during tie-breaking.
      • The restart best solution search engine has been removed (it is subsumed by the new restart-based meta search engine RBS). (major)
    • Performance improvements
      • The search engines now do not allocate memory on the search stack for the rightmost branch of each node. This means that the search tree depth is now computed differently. It now represents the actual peak depth required at any one time, taking into account that rightmost branches reuse the stack entry of their parents. (minor)
  • Finite domain integers
    • Additions
      • Gecode now supports Lightweight Dynamic Symmetry Breaking (LDSB), see MPG for details. (major , contributed by Christopher Mears)
      • Added value selection strategies for branching INT_VAL_NEAR_MIN(), INT_VAL_NEAR_MAX(), INT_VAL_NEAR_INC(), and INT_VAL_NEAR_DEC(). They can take a previous assignment to the variables to branch on and try to choose values which are near (see MPG for details). (major, thanks to Manuel Loth)
      • Added domain constraint that constrains a variable (or an array) according to the domain of another variable. (minor)
      • Added variants of dom that copy the domain from one variable to another. (minor)
      • The nooverlap constraint now allows sharing of unassigned variables in its argument arrays. (minor)
      • Added half-reification for reified constraints (see Modeling and Programming with Gecode for details). (major)
      • Added pow and nroot constraints. (major)
      • The binpacking constraint now also accepts items of size zero. (minor, thanks to Kathrin Dannmann, Roberto Castañeda Lozano)
      • Added activity-based branching strategies for integer and Boolean variables: INT_VAR_ACTIVITY_MIN, INT_VAR_ACTIVITY_MAX, INT_VAR_ACTIVITY_SIZE_MIN, INT_VAR_ACTIVITY_SIZE_MAX. For details, see "Modeling and Programming with Gecode". (major)
    • Other changes
      • The coefficients for linear constraints are now divided by their greatest common divisor. That means that some equations can be handled now that previously threw an OutOfLimits exception, and some equations can be handled with the more efficient integer precision propagators that previously required double precision. (minor)
      • Generalized definition of no-overlap propagators for better reuse. (minor, thanks to Roberto Castañeda Lozano)
      • The interface for branching on integer and Boolean variables has changed considerably (supporting user-defined variable and value selection, tie-break limit functions, handlers to created branchers, and more). In order to change, you have to add () to all variants of INT_VAR, INT_VAL, and INT_ASSIGN. For example, INT_VAR_SIZE_MIN becomes the function call INT_VAR_SIZE_MIN() and INT_VAL_MIN_MIN becomes the function call INT_VAL_MIN_MIN(). Some of these functions expect additional arguments and can take also optional arguments (this replaces the VarBranchOptions and ValBranchOptions). Please read the new "Branching" chapter in MPG. (major)
      • Respect IntConLevel argument for reified linear constraints with a single integer variable. (minor)
    • Bug fixes
      • Fixed precede constraint with less than two values. (minor, thanks to Roberto Castañeda Lozano)
      • Fixed a bug where bounds consistent distinct reported subsumption instead of failure in certain cases. (major, thanks to Lin Yong)
      • Fixed potential rounding issues in sqr and sqrt constraints. (minor)
      • Fixed copying of tuple sets in extensional constraints and IntSets in sequence constraints (could lead to crashes when using parallel search). (major, thanks to Manuel Baclet)
      • Added missing propagation for nary min/max constraint. (minor, thanks to Jean-Noël Monette)
      • Make extensional constraints work with empty tuple sets. (minor, thanks to Peter Nightingale)
      • Fix count (global cardinality constraint) for multiple occurrences of the same value in the cover array. (minor, thanks to Peter Nightingale)
    • Performance improvements
      • Arithmetic, linear, and cumulative constraints now resort to internal operations using "long long int" rather than "double". This improves performance but also extends the range over integer coefficients that can be handled by linear constraints. (major)
  • Finite integer sets
    • Additions
      • Gecode now supports Lightweight Dynamic Symmetry Breaking (LDSB), see MPG for details. (major , contributed by Christopher Mears)
      • Added domain constraint that constrains a variable (or an array) according to the domain of another variable. (minor)
      • Added domain constraints for arrays of set variables. (minor)
      • Added half-reification for reified constraints (see Modeling and Programming with Gecode for details). (major)
      • Added activity-based branching strategies for set variables: SET_VAR_ACTIVITY_MIN, SET_VAR_ACTIVITY_MAX, SET_VAR_ACTIVITY_SIZE_MIN, SET_VAR_ACTIVITY_SIZE_MAX. For details, see "Modeling and Programming with Gecode". (major)
      • Added channeling constraint between arrays of set variables. (major , contributed by Denys Duchier)
    • Other changes
      • The interface for branching on set variables has changed considerably (supporting user-defined variable and value selection, tie-break limit functions, handlers to created branchers, and more). In order to change, you have to add () to all variants of SET_VAR, SET_VAL, and SET_ASSIGN. For example, SET_VAR_SIZE_MIN becomes the function call SET_VAR_SIZE_MIN() and SET_VAL_MIN_INC becomes the function call SET_VAL_MIN_INC(). Some of these functions expect additional arguments and can take also optional arguments (this replaces the VarBranchOptions and ValBranchOptions). Please read the new "Branching" chapter in MPG. (major)
    • Bug fixes
      • Fixed precede constraint with less than two values. (minor, thanks to Roberto Castañeda Lozano)
  • Floats
    • Additions
      • Added support for float variables. (major , contributed by Vincent Barichard)
  • Minimal modeling support
    • Additions
      • Added pow and nroot expressions for integer variables. (minor)
    • Other changes
      • Made implementations of MiniModel expressions private, so that the MiniModel headers do not have to include propagator headers like gecode/int/linear.hh any longer. (minor)
  • Script commandline driver
    • Additions
      • Added commandline options to control restart-based search, see MPG for details. (major)
      • Decay values can now be passed on the command line using the switch -decay. (minor)
      • Added options -file-sol and -file-stat for writing solutions and statistics to arbitrary files and streams. (minor, thanks to Andrea Pretto)
      • The command line -print-last configures whether only the last solution found is printed. (minor, thanks to Josef Eisl)
    • Other changes
      • Boolean options (BoolOption) can now be given a false or true argument and hence are in-line with all other option types. (minor)
  • Range and value iterators
    • Documentation fixes
      • Clarified for several iterators that when using the assignment operator both iterators must be allocated from the very same region. (minor)
  • Support algorithms and datastructures
    • Bug fixes
      • Fixed a concurrency problem that caused an exception to be thrown at the end of a multi-threaded search on some platforms. (minor)
      • Fixed a bug in the allocation of very large bitsets. (minor, thanks to Max Ostrowski)
  • Example scripts
    • Additions
      • New example: Colored matrix without monochromatic rectangles. (minor)
  • Gist
    • Additions
      • Added option to invoke move cursors during the automatic search. (minor)
    • Other changes
      • Updated to compile with Qt version 5.0 (still works with Qt >= 4.3 as well). (minor)
  • Gecode/FlatZinc
    • Additions
      • Added support for more search annotations (defined in gecode.mzn), and for the restart and decay command line options. (minor)
      • Added native support for lex_less_bool and lex_lesseq_bool. (minor)
      • Gecode/FlatZinc now supports float constraints and variables. (major)
      • The FlatZinc interpreter now supports comparing of nodes in Gist. (major, thanks to Gabriel Hjort Blindell)
      • Added native support for the inverse_set constraint. (minor)
    • Other changes
      • Renamed the FlatZinc executable to fzn-gecode, to better distinguish it when installed alongside other FlatZinc implementations. (minor)
      • The FlatZinc interpreter no longer performs a complete search on variables annotated as var_is_introduced, but tries to extend a solution on the model variables to a solution that includes the introduced variables. Each solution on the model variables is therefore only reported once. (major)
    • Bug fixes
      • The mzn-gecode shell script now passes arguments correctly to the FlatZinc interpreter. (minor)
      • Removed spurious debug output for among constraint. (minor, thanks to Simon Ekvy)
      • Exported registry and helper functions so that users can add constraint handlers to the FlatZinc interpreter. (minor, thanks to Jean-Noël Monette)
  • General
    • Additions
      • Added CMake build script for Gecode (CMakeLists.txt). (minor, thanks to Victor Zverovich)
      • Variable selection using AFC now supports decay. Read more in MPG. (major)
    • Other changes
      • Compiles with MSVC 2012. (minor)
Also see:

March 10, 2013

A first look at B-Prolog

First, here's my B-Prolog page with a lot of CLP models.

B-Prolog is a Prolog implementation that, besides extensions such as CLP(FD), CLP(Set), and CLP(Bool), also shines with some further features: it has built-in support for memoization (table declarations), which makes dynamic programming easier (see some examples here), and it has almost seamless support for switching between the CLP(FD), LP/MIP, and SAT solvers.
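The effect of table is essentially memoization; as a rough analogy in Python (not B-Prolog), caching a naive recursion turns exponential time into linear:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Without the cache this naive recursion is exponential in n;
    # with it, each value is computed once -- the same effect that
    # a table declaration has on a tabled B-Prolog predicate.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```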

However, here I have almost exclusively concentrated on the CLP(FD) extensions, with heavy use of foreach loops, list comprehensions, and arrays/matrices (since this is how I tend to think when modeling these kinds of combinatorial problems). Most problems have been implemented in other C(L)P systems before; see Common constraint programming problems.

More info about B-Prolog

First model: SEND+MORE=MONEY


Let's start with one of the standard problems just to get a feeling for the syntax of B-Prolog:
sendmore(Digits) :-
    Digits = [S,E,N,D,M,O,R,Y],
    Digits :: 0..9,
    alldifferent(Digits),
    S #> 0, M #> 0,
                 1000*S + 100*E + 10*N + D
    +            1000*M + 100*O + 10*R + E
    #= 10000*M + 1000*O + 100*N + 10*E + Y,
    labeling(Digits).
This is no different from most CLP(FD) systems. Note that there is no need to load a specific clpfd module since everything is loaded already (B-Prolog currently doesn't support modules).

Here's a short explanation:
  • Digits :: 0..9: all the elements in the list Digits must be in the domain 0..9.
  • The specific CLP(FD) operators starts with #, e.g. #= and #>.
  • labeling(Digits): starts the search for a solution.
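The same equation can be checked outside any CP system. Here is a brute-force Python sketch of the model (for illustration only; a CLP(FD) solver prunes the search by propagation instead of enumerating everything):

```python
from itertools import permutations

def send_more_money():
    # Enumerate assignments of distinct digits to S,E,N,D,M,O,R,Y
    # and keep the one satisfying SEND + MORE = MONEY with S,M > 0.
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:
            continue
        send  =  1000*s + 100*e + 10*n + d
        more  =  1000*m + 100*o + 10*r + e
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + more == money:
            return {'S': s, 'E': e, 'N': n, 'D': d,
                    'M': m, 'O': o, 'R': r, 'Y': y}
```

This recovers the classic unique solution 9567 + 1085 = 10652.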

N-Queens: introducing list comprehensions


This N-Queens model shows more of the differences between B-Prolog and standard Prolog with CLP(FD) support, namely its use of list comprehensions.
queens2(N, Q) :-
        length(Q, N),
        Q :: 1..N,
        Q2 @= [Q[I]+I : I in 1..N],
        Q3 @= [Q[I]-I : I in 1..N],
        alldifferent(Q),
        alldifferent(Q2),
        alldifferent(Q3),
        labeling([ff], Q).
The extraction of the two diagonals is done via list comprehensions (Q2 and Q3), i.e.
  Q2 @= [Q[I]+I : I in 1..N],
This works since the special operator V @= [...] extracts the elements Q[I] into a new list (note the use of array indexing here).
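To see why constraining Q[I]+I and Q[I]-I to be all different rules out attacks along the diagonals, here is a small Python check (an illustration, not part of the model):

```python
def is_nqueens_solution(q):
    # q[i] is the column of the queen in row i (0-based here).
    # Distinct columns, distinct q[i]+i (one diagonal direction),
    # and distinct q[i]-i (the other) <=> no two queens attack.
    n = len(q)
    return (len(set(q)) == n
            and len({q[i] + i for i in range(n)}) == n
            and len({q[i] - i for i in range(n)}) == n)
```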

Here is a small benchmark using the above encoding (predicate queens2 in the file); queens/2 is the same as the above but uses alldistinct instead of alldifferent.
Both take a very long time for N=500, but are much faster for N=499 and N=501. As we see, using alldifferent is faster than using alldistinct; the latter uses stronger consistency checking, but that doesn't help in this encoding:
  For N=400:
    queens2/2: 0.62s    10 backtracks
    queens5/2: 0.71s    10 backtracks
    queens/2:  2.16s     1 backtrack
  For N=499:
    queens2/2: 0.76s     1 backtrack
    queens5/2: 1.1s      1 backtrack
    queens/2:  4.168s    0 backtracks
  For N=500: too slow
  For N=501:
    queens5/2: 1.1s      1 backtrack
    queens2/2: 3.14s     1 backtrack
    queens/2:  3.524s    2 backtracks
  For N=1000:
    queens5/2: 8.9s      2 backtracks
    queens2/2: 24.2s     2 backtracks
    queens/2:  34.146s   0 backtracks

[I don't understand why N=500 is so hard when N=499 and N=501 are solved quite fast. No other CP solver/system I've tested shows this behaviour.]

B-Prolog is not the only Prolog-based system using foreach loops. See below for some discussion of the differences between B-Prolog's loops and ECLiPSe CLP's do-loop construct (also used in SICStus Prolog).

A fairly general alphametic solver


Using list comprehensions, foreach loops, and arrays makes it quite easy to implement a fairly general solver for this kind of alphametic problem. It's not completely general since it (still) uses Prolog's feature of handling variables.
go :-
    L = [[_S,E,N,_D],[M,O,_R,E],[M,O,N,E,_Y]],
    alphametic(L, 10, Res),
    writeln(Res).

alphametic(L, Base, Vars) :-
    reverse(L, Rev),
    Rev = [Last|Sums],

    term_variables(L, Vars),
    Vars :: 0..Base-1,

    Vals #= sum([Val : S in Sums,[Val],calc(S,Base,Val)]),
    calc(Last, Base, Vals),
    foreach(S in Sums, S[1] #> 0),
    Last[1] #> 0,
    alldifferent(Vars),

    labeling([ff,split], Vars).

calc(X, Base, Y) :-
    length(X, Len),
    Y #= sum([X[I]*Base**(Len-I) : I in 1..Len]).
The main work here is done by term_variables/2, which extracts the variables used; sum/1, which sums all the terms; and calc/3, which is really a special case of scalar product.
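The arithmetic that calc/3 posts is just positional notation. A plain Python version of the same sum (illustration only, not the constraint version):

```python
def calc(digits, base=10):
    # Value of the digit list in the given base, i.e.
    # sum(digits[i] * base**(len-1-i)) -- what calc/3 posts
    # over decision variables in the B-Prolog model.
    value = 0
    for d in digits:
        value = value * base + d
    return value
```

For instance, calc([9,5,6,7]) + calc([1,0,8,5]) == calc([1,0,6,5,2]) is exactly the SEND+MORE=MONEY equation.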

Sudoku solver: more lists/arrays


Here is a Sudoku solver in B-Prolog:
go :-
   solve(1).

solve(ProblemName) :-
   problem(ProblemName, Board),
   sudoku(3, Board),
   print_board(Board).

print_board(Board) :-
   N @= Board^length,
   foreach(I in 1..N,
      (foreach(J in 1..N, [X],
         (X @= Board[I,J],
          (var(X) -> write('  _')
          ; format('  ~q', [X])))),
       nl)),
   nl.

problem(1, [](
    [](_, _, 2, _, _, 5, _, 7, 9),
    [](1, _, 5, _, _, 3, _, _, _),
    [](_, _, _, _, _, _, 6, _, _),
    [](_, 1, _, 4, _, _, 9, _, _),
    [](_, 9, _, _, _, _, _, 8, _),
    [](_, _, 4, _, _, 9, _, 1, _),
    [](_, _, 9, _, _, _, _, _, _),
    [](_, _, _, 1, _, _, 3, _, 6),
    [](6, 8, _, 3, _, _, 4, _, _))).

alldifferent_matrix(Board) :-
   Rows @= Board^rows,
   foreach(Row in Rows, alldifferent(Row)),
   Columns @= Board^columns,
   foreach(Column in Columns, alldifferent(Column)).

sudoku(N, Board) :-
   N2 is N*N,
   BoardVar @= [Board[I,J] : I in 1..N2, J in 1..N2],
   BoardVar :: 1..N2,
   alldifferent_matrix(Board),
   foreach(I in 1..N..N2, J in 1..N..N2, [SubSquare],
          (SubSquare @= [Board[I+K,J+L] :
                        K in 0..N-1, L in 0..N-1],
           alldifferent(SubSquare))),
   labeling([ff,down], BoardVar).
One of the most interesting things here is the code for the sub squares:
  SubSquare @= [Board[I+K,J+L] : K in 0..N-1, L in 0..N-1]

Unlike in most Prolog implementations, it works to access the entries in the matrix with I+K and J+L. However, this works only when the indices (I, J, K, L) are ground integers. If any of the indices were decision variables, one would have to use element/3 (or perhaps nth1/3) to obtain the same thing. See below for more about this.
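The "matrix element" trick behind element/3 is usually modeled by flattening the matrix. A Python sketch of the index arithmetic (with ground indices this is trivial, but the same arithmetic is what a matrix element constraint posts when the indices are decision variables):

```python
def matrix_element(matrix, i, j):
    # Flatten the matrix row by row and index it as flat[i*ncols + j].
    # In a CP model with decision-variable indices, one would post an
    # element constraint over this flattened list and index expression
    # (1-based in most Prolog systems).
    ncols = len(matrix[0])
    flat = [x for row in matrix for x in row]
    return flat[i * ncols + j]
```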

Running this model yields the following result:
  _  _  2  _  _  5  _  7  9
  1  _  5  _  _  3  _  _  _
  _  _  _  _  _  _  6  _  _
  _  1  _  4  _  _  9  _  _
  _  9  _  _  _  _  _  8  _
  _  _  4  _  _  9  _  1  _
  _  _  9  _  _  _  _  _  _
  _  _  _  1  _  _  3  _  6
  6  8  _  3  _  _  4  _  _

  3  6  2  8  4  5  1  7  9
  1  7  5  9  6  3  2  4  8
  9  4  8  2  1  7  6  3  5
  7  1  3  4  5  8  9  6  2
  2  9  6  7  3  1  5  8  4
  8  5  4  6  2  9  7  1  3
  4  3  9  5  7  6  8  2  1
  5  2  7  1  8  4  3  9  6
  6  8  1  3  9  2  4  5  7
Time and backtracks: Since there is no built-in predicate in B-Prolog for reporting both the time and the number of backtracks (failures etc.), let's add one:
time2(Goal) :-
   cputime(Start),
   statistics(backtracks, Backtracks1),
   call(Goal),
   statistics(backtracks, Backtracks2),
   cputime(End),
   T is (End-Start)/1000,
   Backtracks is Backtracks2 - Backtracks1,
   format('CPU time ~w seconds. Backtracks: ~d\n', [T, Backtracks]).
Running this on the command line:
 $ time bp -g "[sudoku], time2(solve(1)),halt"
writes the following after the solution, i.e. it's a 0-backtrack problem.
CPU time 0.0 seconds. Backtracks: 0
In the model there is a predicate go3/0 which runs all the 13 problem instances and shows the statistics (time and backtracks). The len:1 in the output is for ensuring that the solution is unique (a requirement for traditional Sudoku problems). This check caught two problem instances that were either typed in erroneously by me or by the source I got them from (I have thus removed these two problems from this model).
go3 :-
   foreach(P in 1..13, [L,Len],
      (writeln(problem:P),
       time2(findall(_, solve(P), L)),
       length(L, Len),
       writeln(len:Len),
       (Len > 1 ->
          writeln('This has more than one solution!') 
       ;
          true
       ), nl)).
The result is:
CPU time 0.004 seconds. Backtracks: 0


CPU time 0.04 seconds. Backtracks: 0

What you can - and cannot - do with list comprehensions

Used with @=, a list comprehension just works (provided that there's no problem with array access etc.). If one wants to use a comprehension directly in a (global) constraint, the requirement is that it must appear in an arithmetic context (I first missed that in the documentation).

The following works since sum/1 is a special function in an arithmetic context:
  Sum #= sum([X[I] : I in 1..N])
However, using comprehensions in global constraints such as alldifferent doesn't work, since that is not an arithmetic context:
  % this doesn't work
  alldifferent([X[I]+I : I in 1..N])
The following works (as we saw above in the N-Queens model) since we extract the elements to a list using @= before using it.
  % this works
  Q1 @= [X[I]+I : I in 1..N],
  % ...
As mentioned above, array indexing doesn't work when the indices are decision variables. Then one must use element/3 (or nth1/3). See more about element below.

foreach loops

The foreach loop extension in B-Prolog is quite intuitive. It is often easier to use than the do-loops in ECLiPSe CLP (and SICStus Prolog): in B-Prolog one declares the local variables in the loop (i.e. variables are "global by default"), whereas in ECLiPSe do-loops one declares the global variables to be used ("local by default"), which is perhaps not as intuitive. Both systems give helpful warnings when a local/global variable is used out of context.

Note that B-Prolog's foreach construct doesn't have all the features found in ECLiPSe's do-loop, so certain things might be easier to state in ECLiPSe CLP and SICStus Prolog than in B-Prolog.

In B-Prolog there is also support for accumulators via ac(Var, StartValue) (or ac1/2, which I have not used much), which collects the local variables in the loop into a global variable (here Var).

An example of a foreach loop that uses ac works over a list of preference pairs, defined by a fact such as (rest of the list elided):
preferences(1, 7, [[1,5], ...]).

foreach([Pref1,Pref2] in Preferences,
        ac(Diffs,[]), [P1,P2,Diff],
        (% P1 and P2 (the positions of Pref1 and Pref2) are
         % constrained here; elided
         Diff #= (abs(P1-P2) #= 1),
         Diffs^1 = [Diff|Diffs^0])),
Here the Diffs list collects the differences of the positions (from the local Diff variable). The local declaration [P1,P2,Diff] is needed here; otherwise these would be considered global variables in the model (with disastrous results).

Also note that this foreach loop iterates over the list of lists in Preferences, so foreach is not limited to the integer ranges shown in the other examples here.

More about foreach loops is found in the manual, section Declarative Loops and List Comprehensions.

new_array and lists

Creating an array (of "any" dimension) in B-Prolog is done with new_array(Var,Dimensions):
   N = 3,
   new_array(X, [N,N]),
where Dimensions is a list of the dimensions (of "any" size; I don't know if there is any limitation on the number of dimensions).

The result of new_array is an array structure such as
  | ?- new_array(X,[3,3]).
  X = []([](_7a8,_7b0,_7b8),[](_7c8,_7d0,_7d8),[](_7e8,_7f0,_7f8))
Converting an array structure to a list is done with array_to_list(Array,List):
| ?- new_array(X,[3,3]), array_to_list(X,List).
X = []([](_8e0,_8e8,_8f0),[](_900,_908,_910),[](_920,_928,_930))
List = [_8e0,_8e8,_8f0,_900,_908,_910,_920,_928,_930]
Setting the domains for the variables in an array must be done via the converted list (i.e. not directly on the array structure):
   N = 3,
   new_array(X, [N,N]),
   array_to_list(X, Vars),
   Vars :: 1..N*N,
   % ...
After that conversion, one can freely post constraints on either the array structure (e.g. as a matrix) or on the list representation, such as alldifferent(Vars). This "dual representation" sometimes simplifies modeling considerably. Note that labeling must be done via the list representation, or by using term_variables/2.
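A small sketch of this dual representation (my own hypothetical example, a Latin square, not a model discussed above):

  % Sketch: constraints posted both on the matrix X (rows/columns)
  % and on its flattened list Vars.
  latin_square(N) :-
     new_array(X, [N,N]),
     array_to_list(X, Vars),
     Vars :: 1..N,
     foreach(I in 1..N, [Row,Col],
             (Row @= [X[I,J] : J in 1..N],
              alldifferent(Row),
              Col @= [X[J,I] : J in 1..N],
              alldifferent(Col))),
     labeling([ff], Vars).   % labeling on the list representation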

Reifications: alldifferent_except_0


Using reifications (reasoning about the satisfiability of constraints using boolean decision variables) is quite direct, using #= (equality), #=> (implication) and #<=> (equivalence). Here is a decomposition of the global constraint alldifferent_except_0:
alldifferent_except_0(Xs) :-
   Len @= Xs^length,
   foreach(I in 1..Len, J in 1..Len,
      ((I #\= J #/\ Xs[I] #\= 0 #/\ Xs[J] #\= 0)
       #=>
       (Xs[I] #\= Xs[J]))).
One thing I noticed when playing with reifications was that foreach doesn't seem to work in a boolean context. Example: the following doesn't work as I expected (or hoped), i.e. that B is 1 exactly when all X[I] are larger than 0 (and vice versa):
  % doesn't work
  foreach(I in 1..N, X[I] #> 0) #<=> (B #= 1)
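A workaround that I believe works is to reify each comparison separately, collect the 0/1 variables with an ac accumulator, and then state the equivalence over their sum. This is my own sketch (assuming N and the array X as above), not code from any of the models:

  % Sketch: B #= 1 exactly when all X[I] #> 0.
  foreach(I in 1..N, ac(Bs,[]), [B1],
          (B1 #<=> (X[I] #> 0),
           Bs^1 = [B1|Bs^0])),
  B #<=> (sum(Bs) #= N)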

Table constraint


The table constraint is quite easy to use.

First, define the valid connections that can be used. This is - of course - done via a list comprehension:
   Connections @= [(I1,J1,I2,J2) :
       I1 in 1..N, J1 in 1..N, 
       I2 in 1..N, J2 in 1..N,
       (abs(I1-I2) =< 1,
        abs(J1-J2) =< 1,
        (I1 \= I2 ; J1 \= J2))],
Then place all the numbers 1..N*N in the grid, where the placements are restricted by the connections in Connections. Note that since we are using decision variables for the indices (I1,J1,I2,J2) of the grid (X), we cannot use array extraction. My solution is to use the XVar list (which we must have for the labeling anyway) and calculate the corresponding position in this list for each (I,J) position in the grid.
XVar @= [X[I,J] : I in 1..N, J in 1..N],
XVar :: 1..N*N,

% ....

% place all integers from 1..N*N
foreach(K in 1..(N*N)-1,
     [K2,I1,J1,I2,J2,Offset1,Offset2],
     ac(AllConn,[]), ac(Offsets,[]),
     (K2 is K+1,
      [I1,J1,I2,J2] :: 1..N,
      % [(I1,J1,I2,J2)] in Connections,
      [Offset1,Offset2] :: 1..N*N,
      Offset1 #= (I1-1)*N+J1,
      Offset2 #= (I2-1)*N+J2,
      element(Offset1, XVar, K),   % position of K in the grid
      element(Offset2, XVar, K2),  % position of K+1 in the grid
      AllConn^1 = [(I1,J1,I2,J2)|AllConn^0],
      Offsets^1 = [[Offset1,Offset2]|Offsets^0])),

% the table constraint
AllConn in Connections,
The table constraint is used by simply putting all the variables in a list of (I1,J1,I2,J2) structures and then checking all of them using AllConn in Connections.

Also, note that both AllConn (the collection of selected connections) and the offsets (collected in the Offsets list) are included in the labeling to get faster results.
% ...
% ...
In my experiments with this model, it seems to be faster to put Offsets before XVar and AllConn in the labeling.

This model is not blazingly fast, though it's much faster than the model without the table constraint. For example, problem instance #6 takes 6.5 seconds without the table constraint, and 0.17 seconds with it (both versions have 0 backtracks).

Note: B-Prolog also has a powerful built-in memoisation feature called table. Please don't confuse these two usages of "table".
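For completeness, here is a tiny sketch of that tabling (memoisation) feature, a memoised Fibonacci. The example is mine and is unrelated to the table constraint above:

  :- table fib/2.   % memoise fib/2 (tabling, not the table constraint)
  fib(0, 1).
  fib(1, 1).
  fib(N, F) :-
     N > 1,
     N1 is N-1, N2 is N-2,
     fib(N1, F1), fib(N2, F2),
     F is F1+F2.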


As usual when porting models from my MiniZinc models, I had the most problems with the element constraint. (Porting to B-Prolog has not been the worst case in this matter, though.)

B-Prolog's support for arrays with decision variables is not as good as in the ECLiPSe CLP system, and both have quite a way to go before they are as nice as MiniZinc's and Comet's support (where element is encoded very naturally). It was especially when using element on a matrix that I had to stop in my "porting flow" (often the port was quite "flowy") and resort to element/nth1 constraints and often a few temporary variables.

When there are many matrices with decision-variable indices, I tend to use these two predicates instead (skipping B-Prolog's arrays):
matrix_element(X, I, J, Val) :-
   element(I, X, Row),
   element(J, Row, Val).
matrix(_, []) :- !.
matrix(L, [Dim|Dims]) :-
   length(L, Dim),
   foreach(X in L, matrix(X, Dims)).
(The latter predicate was suggested by Mats Carlsson when I implemented a lot of SICStus Prolog models some years ago.)
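A sketch of how these predicates can be used together when both indices are decision variables (the names go_matrix, M, I, J, Val and the domains are mine, chosen just for illustration):

  % Sketch: Val becomes M[I,J] where I and J are decision variables.
  go_matrix :-
     N = 3,
     matrix(M, [N,N]),             % N x N matrix as a list of lists
     term_variables(M, MVars),
     MVars :: 1..N,
     [I,J] :: 1..N,
     matrix_element(M, I, J, Val), % Val "=" M[I,J]
     Val #= N,
     term_variables([M,I,J], AllVars),
     labeling(AllVars).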

Here is an example from the stable marriage model.

In MiniZinc one can code element within decision variables like this (where both husband and wife are lists of decision variables). Here's a snippet from the MiniZinc model stable_marriage.mzn:
forall(m in 1..num_men) ( 
  husband[wife[m]] = m 
)
In B-Prolog, I use the following (using the same overall approach):
foreach(M in 1..NumMen, [WifeM,HusbandWifeM],
     (element(M, Wife, WifeM),
      element(WifeM, Husband, HusbandWifeM),
      HusbandWifeM #= M)),
Another example is the Five element problem (from Charles W. Trigg):

The problem statement:
Charles W. Trigg, PI MU EPSILON JOURNAL, Volume 6, Fall 1977, Number 5
From the following square array of the first 25 positive integers, choose five, no two of the same row or column, so that the maximum of the five elements is as small as possible.

2 13 16 11 23
15 1 9 7 10
14 12 21 24 8
3 25 22 18 4
20 19 6 5 17

Ensuring the unicity of rows and columns is simple:
foreach(I in 1..N, 
       (sum([X[I,J] : J in 1..N]) #= 1,
        sum([X[J,I] : J in 1..N]) #= 1)),
However, we also want to find the specific numbers in the matrix for which X[I,J] #= 1. The following does not work:
% This doesn't work
foreach(J in 1..N, [I], ac(Is,[]),
   (I :: 1..N,
    1 #= X[I,J],
    Y[J] #= Matrix[I,J])),
If we - still - want to use this matrix approach, then freeze/2 can be used. However, we then have to collect the I indices in an accumulator list Is to be used in the labeling:
% Find the specific row I for which X[I,J] is 1.
foreach(J in 1..N, [I], ac(Is,[]),
    (I :: 1..N,
     freeze(I, 1 #= X[I,J]),
     freeze(I, Y[J] #= Matrix[I,J]),
     Is^1 = [I|Is^0])),

MaxY #= max(Y),

term_variables([XVar,Y,MaxY,Is], Vars),
minof(labeling([ff], Vars),MaxY),
% ....
Using freeze/2 is not ideal, but it makes the code more intuitive than using a lot of element/3 or nth1/3 constraints, if that would even work.


As mentioned in the introduction above, B-Prolog also has support for LP/MIP problems (using GLPK as the solver) and an interface to a SAT solver.

Here I just show the MIP solver on one of my standard problems for which CP solvers tend to be worse than MIP solvers, namely the coins grid problem. The model also implements CLP(FD) and cp_solve variants.

Some differences between the other CLP(FD) models shown here:
  • The CP, MIP, and SAT solvers have a slightly different syntax than the CLP(FD) solver:
    • The numeric constraints use $= instead of #= (i.e. "$" instead of "#") to mark that it's a CLP constraint.
    • The labeling is different: cp_solve, lp_solve, ip_solve, and sat_solve.
  • There are just a few global constraints that are supported this way: $alldifferent and $element, plus the usual arithmetic sum/1, min/1, max/1, min/2, max/2. The CLP(FD) solver supports many more global constraints.
coins(N, C) :-
   % N = 10, % 31 the grid size
   % C = 6,  % 14, number of coins per row/column
   new_array(X, [N,N]),
   array_to_list(X, Vars),
   Vars :: 0..1,

   Sum $>= 0,
   foreach(I in 1..N, 
       (C $= sum([T : J in 1..N, [T], T @= X[I,J]]), % rows
        C $= sum([T : J in 1..N, [T], T @= X[J,I]])  % columns
       )),

   % quadratic horizontal distance
   Sum $= sum([
               T : I in 1..N, J in 1..N, [T],
               T @= (X[I,J] * abs(I-J)*abs(I-J))
              ]),

   % cp_solve([min(Sum),ff,reverse_split], Vars),
   % sat_solve([min(Sum),ff,reverse_split], Vars),
   ip_solve([min(Sum),ff,reverse_split], Vars).
By changing ip_solve to cp_solve (or sat_solve) we use the CP solver (or SAT solver). This is quite nifty.

And as usual the MIP solver solves this problem immediately, whereas CP solvers tend to be much slower.

Things not tested or mentioned

Here are some other things that I have not mentioned above:
  • Action rules and events. These are used to implement propagators and more efficient constraints. However, I have not played much with these. Also, see Programming Constraint Propagators.
  • B-Prolog is not an open source project, i.e. the free distribution just contains the executable, documentation, and examples. (This was a little unfortunate, since it meant I had no opportunity to study how certain features, e.g. the global constraints, are implemented in Action rules.) For individual, academic, and non-commercial use, B-Prolog is free. There are commercial licenses where source code is included. See more here: Licenses.
  • And then there are other features that I have not mentioned. See the Manual for detailed descriptions about these.


I like B-Prolog, especially the nice support for list comprehensions, array access, and foreach loops. They are a notch neater than those in ECLiPSe CLP and SICStus Prolog, though not as good as in MiniZinc (or Comet).

It was quite easy to port my ECLiPSe and SICStus Prolog models to B-Prolog, mostly because those already used the do-loop constructs, and it was mostly easy to port directly from my MiniZinc models, except when matrices are indexed by decision variables. Right now I think that B-Prolog's approach, especially the list comprehensions, is often neater than that of the other Prologs.


Finally, I must mention Picat (Predicates, Imperative, Constraints, Actors, and Tabling). Picat is a C(L)P language also created by B-Prolog's creator Neng-Fa Zhou (together with Jonathan Fruhman), which is inspired by Prolog (especially B-Prolog). From the Picat site:
Picat is a general-purpose hybrid language that incorporates many declarative language features for better productivity of software development, including explicit non-determinism, explicit unification, functions, constraints, and tabling. Picat also provides imperative language constructs for programming everyday things. The Picat system will be used for not only symbolic computations, such as knowledge engineering, NLP, and search problems, but also for scripting tasks for the Web, games, and mobile applications.


The Picat implementation will be based on the B-Prolog engine and the first version that has the basic functionality is expected to be released by May, 2013. The project is open to anybody and you are welcome to join, as a developer, a sponsor, a user, or a reviewer. Please contact .
This sounds very interesting, and I will definitely give it a try when it's released (and perhaps blog about it as well).

My B-Prolog models/programs

Here are my public B-Prolog encodings. As of this writing there are over 200; most use CLP(FD), some use CLP(Set), and a few are plain Prolog with foreach/list comprehensions.
