
May 29, 2011

Global Constraint Seeker

Yesterday I visited Helmut Simonis' site (searching for another thing), and found this following announcement:

A Constraint Seeker

The global constraint catalog is a very useful resource for learning about all the different global constraints that have been described in the literature. But it can be challenging to find the right constraint, if you do not already know its name. Nicolas Beldiceanu and myself have been working on a web application which helps you find constraints in the catalog, given positive or negative examples. We have written a paper on this system, and a prototype is now running at Web Application. You may want to read Basic Help before entering your own queries.
Wow, that seems to be very interesting, and I just have to test it.

First some links, and after that some examples of the Global Constraint Seeker in action. I haven't tested that many samples, but according to the paper it can handle many of the existing global constraints. However:
Most of the missing constraints use finite set variables or are graph constraints, which are not provided in the SICStus Prolog constraint engine.


The Constraint Seeker is written in SICStus Prolog, integrated with an Apache web server. Nearly all of the HTML output is generated, the rest is fairly simple forms and css data. The Prolog code is just over 6,400 lines, developed over a period of 3 months by one of the authors. The six constraint models in the Seeker make up 2,000 lines, roughly one third of the code. This line count does not include the 50,000 lines of Prolog describing the constraints themselves.


From Summary and Future Work:
Besides introducing the Constraint Seeker, the key contribution of the paper is to illustrate the power of meta data and meta programming in the context of future constraints platforms.

....

The electronic version of the global constraint catalog provides such information in a systematic way for the different global constraints. The Constraint Seeker presented in this paper illustrates the fact that the same meta data can be used for different purposes (unlike ad-hoc code in a procedural language which is designed for a specific (and unique) usage and a single system).

Some examples

Here are some examples I've tried (played with) which may give a taste of what the Constraint Seeker can do. First are the three instances from the examples in Global Constraint Seeker: How to enter examples and how to interpret results. Note that I just show the global constraints here; there is much more information in the results, such as rank, density, links, etc., which I leave out.

p(5,7)

A search on p(5,7) means that we are looking for a global constraint with the two arguments 5 and 7; p stands for a positive instance. What constraint (relation) is this?

The Seeker Matches (best matches) for p(5,7) are:
  • lt
  • same_sign
  • leq
  • gt
  • geq
  • neq
Note that the Constraint Seeker also detects global constraints where the order of the arguments is swapped.

p(2,[4,2,4,5],4)

This is a slightly harder example: we are searching with three arguments, two scalar values (2 and 4) and the list [4,2,4,5].

A search on p(2, [4,2,4,5], 4) yields these Seeker Matches:
  • exactly
  • atleast
  • atmost
  • minimum_greater_than
  • int_value_precede
  • atmost
And the Wider Matches:
  • element
  • count
These "Wider Matches" are found using more general transformations than the "Seeker Matches".
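
To see why the counting constraints match, here is a minimal MiniZinc check of the instance (my own sketch, using the standard globals library; the catalog names atleast/atmost correspond to MiniZinc's at_least/at_most):

include "globals.mzn";

array[1..4] of int: x = [4, 2, 4, 5];

% p(2, [4,2,4,5], 4): the value 4 occurs exactly / at least / at most 2 times
constraint exactly(2, x, 4);
constraint at_least(2, x, 4);
constraint at_most(2, x, 4);

% one of the Wider Matches: count the occurrences of 4 in x
constraint count(x, 4, 2);

solve satisfy;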

p([[1,3,4,1],[2,9,11,2],[3,10,13,1],[6,6,12,1],[7,2,9,3]],8)

The first two queries were quite simple (for a human), but what constraint is p([[1,3,4,1],[2,9,11,2],[3,10,13,1],[6,6,12,1],[7,2,9,3]],8)?

Oh, it's a cumulative:
  • cumulative
  • cumulative_product
  • ordered_atmost_nvector
  • atmost_nvector
  • cumulative
  • coloured_cumulative
  • cumulative_product
  • coloured_cumulative
(Why two occurrences of cumulative, cumulative_product, etc. are listed is explained in the paper.)
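
My reading of the instance, assuming the catalog's task format [origin, duration, end, height] and the final argument 8 as the resource limit: a minimal MiniZinc check could look like this (a sketch, not the Seeker's internal representation):

include "globals.mzn";

% each row [origin, duration, end, height] of the query, split into columns
array[1..5] of int: origin   = [1, 2,  3, 6, 7];
array[1..5] of int: duration = [3, 9, 10, 6, 2];
array[1..5] of int: height   = [1, 2,  1, 1, 3];
% (the end times 4, 11, 13, 12, 9 are implied by origin + duration)

constraint cumulative(origin, duration, height, 8);  % 8 = resource limit

solve satisfy;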

Below are some of my own tests, selected for different (and personal) reasons.

Looking for circuit

This test (searching for the circuit constraint) uses one negative instance (n()) and three positive instances.
p([2,4,5,3,6,8,1,7])
p([4,1,2,5,6,8,3,7])
p([6,4,2,5,1,8,3,7])
n([1,2,3,4,5,6,7,8])
In this case, the Seeker doesn't yield any "direct" matches, but finds the following Wider Matches:
  • circuit
  • derangement
  • derangement
Maybe more examples (positive and negative) would help?
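
For reference, here is how one of the positive instances can be checked in MiniZinc (a sketch; reading the list as a successor representation is my assumption):

include "globals.mzn";

% successor representation: x[i] is the node that follows node i
array[1..8] of int: x = [2, 4, 5, 3, 6, 8, 1, 7];

% holds: 1 -> 2 -> 4 -> 3 -> 5 -> 6 -> 8 -> 7 -> 1 visits every node once
constraint circuit(x);

% the negative instance [1,2,3,4,5,6,7,8] consists only of self-loops,
% so it cannot form a single Hamiltonian circuit

solve satisfy;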

p(3, [1,2,4,3,5,6,0,0,0,18,7], 0)

A search on p(3, [1,2,4,3,5,6,0,0,0,18,7], 0) gives these Seeker Matches:
  • exactly
  • atleast
  • atmost
  • atleast
  • int_value_precede
Wider matches:
  • count

Cumulative again: Furniture moving

In Marriott & Stuckey: "Programming with constraints" (page 112f) there is a problem of how to schedule furniture moving. The MiniZinc model furniture_moving.mzn is one way of modeling this. The result is
Durations  = array1d(1..4, [30, 10, 15, 15]);
EndTimes   = array1d(1..4, [60, 10, 30, 15]);
Resources  = array1d(1..4, [3, 1, 3, 2]);
Starts     = array1d(1..4, [30, 0, 15, 0]);
numPersons = 3;
Could the Constraint Seeker find that this is a cumulative, and that it is valid? What happens if we change numPersons to 2 (i.e. an invalid solution)?

Let's test that. We have just a single positive instance where the order of the variables is the same as above: p([30, 10, 15, 15], [60, 10, 30, 15], [3, 1, 3, 2], [30, 0, 15, 0], 3). Ah, it finds cumulative_product and cumulative. Great.
  • cumulative_product
  • cumulative
  • cumulative
  • coloured_cumulative
  • coloured_cumulative
  • all_differ_from_at_least_k_pos
  • atleast_nvector
These constraints are found using the transformation T6.

Now, what will happen if we change the number of persons from 3 to 2:
p([30,10,15,15], [60,10,30,15], [3,1,3,2], [30,0,15,0], 2)?

Well, it doesn't find cumulative_product or cumulative anymore, just
  • coloured_cumulative
  • coloured_cumulative
  • all_differ_from_at_least_k_pos
  • atleast_nvector
Hmm, I'm not really sure I understand all the consequences of this...
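
One plausible explanation: with the limit lowered to 2, the schedule itself violates cumulative, so the positive instance can no longer match that constraint. A minimal MiniZinc check (my own sketch; note that the argument order Starts/Durations/Resources/limit below is MiniZinc's, not the order used in the query):

include "globals.mzn";

array[1..4] of int: Starts    = [30,  0, 15,  0];
array[1..4] of int: Durations = [30, 10, 15, 15];
array[1..4] of int: Resources = [ 3,  1,  3,  2];
int: numPersons = 3;   % holds with 3; fails with 2, since e.g. at time 0
                       % tasks 2 and 4 together need 1 + 2 = 3 persons

constraint cumulative(Starts, Durations, Resources, numPersons);

solve satisfy;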

p([2,2,2,0,0,0,0], [0,1,2,3,4,5,6], [2,0,1,2,0,1])

I first tried it stated as p([2, 2, 2, 0, 0, 0, 0], [2, 0, 1, 2, 0, 1]) but the Seeker could not find it, probably since this form is not in the Global Constraint Catalog. Can you guess what this is? It's one of my favorite global constraints (no, it's not alldifferent_except_0 :-).

However, stating it as p([2,2,2,0,0,0,0], [0,1,2,3,4,5,6], [2,0,1,2,0,1]) worked very well. Just a single result: the one I had in mind.

Summary and a comment

Even though I have only played a little with the Global Constraint Seeker, I like it very much. And it's quite safe to say that I have not realized its full potential.

The only thing I miss right now is that the Seeker should - if possible - give a message when there is a syntax error, etc. E.g. for the erroneous query p(2,[4,2,4,5],4 (no right parenthesis) it just shows ----end of run---.

[Now, why did I visit Helmut's site? Ah, his (et al.) Using Constraint Visualization Tools (PDF). Great!]

December 09, 2010

Global Constraint Catalog has been updated

The great Global Constraint Catalog has been updated.

The PDF version (about 20Mb) of the catalog is: Nicolas Beldiceanu, Mats Carlsson, Jean-Xavier Rampon: Global Constraint Catalog 2nd Edition, Working version of SICS Technical Report T2010:07, ISSN: 1100-3154, ISRN: SICS-T–2010/07-SE (November 22, 2010).

Abstract: This report presents a catalogue of global constraints where each constraint is explicitly described in terms of graph properties and/or automata and/or first order logical formulae with arithmetic. When available, it also presents some typical usage as well as some pointers to existing filtering algorithms.

The website (managed by Sophie Demassey): Global Constraint Catalog.

Here is the changelog:

2010-11-18 working version update: 354 constraints

Great work, and very useful additions. I especially like the new modelling exercises with global constraints.

December 14, 2009

The Global Constraint Catalog has been updated

A new version of Global Constraint Catalog has been released. That's great! The update date is 2009-12-10, but I didn't see it until today.

Here is the presentation of the catalog:
About the catalogue

The catalogue presents a list of 348 global constraints issued from the literature in constraint programming and from popular constraint systems. The semantics of each constraint is given together with a description in terms of graph properties and/or automata.

The catalogue is periodically updated by Nicolas Beldiceanu, Mats Carlsson and Jean-Xavier Rampon. Feel free to contact the first author for any questions about the content of the catalogue.

Download the Global Constraint Catalog in pdf format:

The news in this release:
2009-12-10 working version update: 348 constraints
Another nice thing is that it is now possible to search just the HTML pages, or PDF documents.

Sophie Demassey has done wonderful work with the online version.

Also, I am happy about the links to this blog and my MiniZinc page; see the section on (decompositions of) global constraints.

May 29, 2009

Global Constraint Catalog, update

I completely missed blogging about the update of the great online Global Constraint Catalog. It was done 2009-04-03 and had the following changes:
  • gccat_systems.xml: references to global constraints implemented within the constraint systems Choco and Gecode. (For now, the references appear in this XML file only; they will soon be integrated within the catalog.)
  • gccat_schema.xml: the XML Schema of the catalog schema.xsd with a stylesheet
  • navigation bar (prev/next) added
  • google tools added: search and analytics
I really like the comparison of the constraint names in Choco and Gecode, it's very helpful. Maybe other systems will be added later, say JaCoP, MiniZinc and Comet?

May 08, 2009

Learning Constraint Programming III: decomposition of a global constraint: alldifferent_except_0

The series Learning Constraint Programming is a preparation for My talk at SweConsNet Workshop 2009: "Learning Constraint Programming (MiniZinc, JaCoP, Choco, Gecode/R, Comet, Gecode): Some Lessons Learned". Confusingly, the entries are not numbered in any logical order. Sorry about that.

Here are the previous entries:

Introduction

The global constraint alldifferent_except_0 (or the more general variant alldifferent_except_c) is one of my favorite global constraints. It is very handy to use when 0 (or any constant c) is coded as an unknown or irrelevant value. Then we can constrain the rest to be distinct.

The great Global Constraint Catalog entry alldifferent_except_0 explains this constraint as:
Purpose
Enforce all variables of the collection VARIABLES to take distinct values, except those variables that are assigned to 0.

Example
(<5, 0, 1, 9, 0, 3>)
The alldifferent_except_0 constraint holds since all the values (that are different from 0) 5, 1, 9 and 3 are distinct.

Models

I have modeled a decomposition of alldifferent_except_0 in the following models, where the constraint is just tested, perhaps combined with some other constraint, e.g. sorted or that there must be at least some zeros:

- MiniZinc alldifferent_except_0.mzn
- Comet: alldifferent_except_0.co
- Gecode/R: all_different_except_0.rb
- Choco: AllDifferentExcept0_test.java
- JaCoP: AllDifferentExcept0_test.java
- Gecode: alldifferent_except_0.cpp

Some models using alldifferent_except_0

And here is some real use of the constraint:

- Nonogram (Comet): nonogram.co. (A faster model using the regular constraint, nonogram_regular.co, is described here and here).
- I wrote about alldifferent_except_0 in Pi Day Sudoku 2009. However, a faster way of solving the problem was found and is described in Solving Pi Day Sudoku 2009 with the global cardinality constraint. Note: the competition is still on, so there is no link to any model.
- Sudoku generate (Comet): sudoku_generate.co
- all paths graph (MiniZinc): all_paths_graph.mzn
- Cube sum (MiniZinc): cube_sum.mzn
- Message sending (MiniZinc): message_sending.mzn

As the first two entries indicate, there may be faster solutions than using (a decomposition of) alldifferent_except_0, but even as a decomposition it is a great conceptual tool when modeling a problem.

Implementations

In the implementations below we also see how to define a function (predicate) in the constraint programming systems.

For the Gecode/R model there are different approaches:
- the "standard" ("direct") approach where we loop over all different pairs of elements and ensure that if both values are different from 0 then they must be different
- using count
- using global cardinality ("simulated" in Gecode/R, see below)

Also, in some models we use the slightly more general version alldifferent_except_c where c is any constant (e.g. "Pi" in the Pi Day Sudoku puzzle mentioned above).

MiniZinc:

Model: alldifferent_except_0.mzn.
predicate all_different_except_0(array[int] of var int: x) =
   let {
      int: n = length(x)
   }
   in
   forall(i,j in 1..n where i != j) (
        (x[i] > 0 /\ x[j] > 0) 
        -> 
        x[i] != x[j]
   )
;

// usage:
constraint all_different_except_0(x);

Comet:

Model: alldifferent_except_0.co.
function void alldifferent_except_0(Solver m, var{int}[] x) {
  int n = x.getSize();
  forall(i in 1..n, j in i+1..n) {
    m.post(
           x[i] > 0 && x[j] > 0 
           => 
           x[i] != x[j]
           );
  }
}

// usage
exploreall {
  // ...
  alldifferent_except_0(m, x);
}

Gecode/R

Model:all_different_except_0.rb.

When modeling the constraint in Gecode/R, I experimented with different approaches. The reification variant all_different_except_0_reif is actually quite fast.
# The simplest and the fastest implementation 
# using count for 1..max (poor man's global cardinality)
def all_different_except_0
  (1..self[0].domain.max).each{|i|
    self.count(i).must <= 1
  }
end

# global cardinality version using an extra array with the counts
def global_cardinality(xgcc)
  (self[0].domain.max+1).times{|i|
    xgcc[i].must == self.count(i)
  }
end

#
# The standard approach using reification.
#
def all_different_except_0_reif(x)
  n = x.length
  b1_is_an bool_var_matrix(n,n)
  b2_is_an bool_var_matrix(n,n)
  b3_is_an bool_var_matrix(n,n)
  n.times{|i|
    n.times{|j|
      if i != j then 
        x[i].must_not.equal(0, :reify => b1[i,j]) 
        x[j].must_not.equal(0, :reify => b2[i,j]) 
        x[i].must_not.equal(x[j], :reify => b3[i,j])
        (b1[i,j] & b2[i,j]).must.imply(b3[i,j])
      else 
        b1[i,j].must.true
        b2[i,j].must.true
        b3[i,j].must.true
      end
    }
  }
end

    # ...
    # usage:
    x.all_different_except_0
    # all_different_except_0_gcc(x)
    # all_different_except_0_reif(x)

Choco

Model: AllDifferentExcept0_test.java.

Note that here alldifferent_except_0 is derived from the more general version alldifferent_except_c.
//
// decomposition of alldifferent except 0
//
public void allDifferentExcept0(CPModel m, IntegerVariable[] v) {
    allDifferentExceptC(m, v, 0);
}

//
// slightly more general: alldifferent except c
//
public void allDifferentExceptC(CPModel m, IntegerVariable[] v, int c) {
    int len = v.length;
    for(int i = 0; i < v.length; i++) {
        for(int j = i+1; j < v.length; j++) {
            m.addConstraint(ifThenElse(
                                       and(
                                           gt(v[i], c), 
                                           gt(v[j], c)
                                           ), 
                                       neq(v[i], v[j]),
                                       TRUE
                                       )
                            );
        }
    }    
}

// ...
// usage: 

    allDifferentExcept0(m, x);

JaCoP

Model: AllDifferentExcept0_test.java

This is exactly the same approach as the Choco version.
//
// decomposition of alldifferent except 0
//
public void allDifferentExcept0(FDstore m, FDV[] v) {
    allDifferentExceptC(m, v, 0);
} // end allDifferentExcept0


//
// slightly more general: alldifferent except c
//
public void allDifferentExceptC(FDstore m, FDV[] v, int c) {
    int len = v.length;
    for(int i = 0; i < v.length; i++) {
        for(int j = i+1; j < v.length; j++) {
            m.impose(new IfThen(
                                       new And(
                                           new XneqC(v[i], c), 
                                           new XneqC(v[j], c)
                                           ), 
                                       new XneqY(v[i], v[j])
                                       )
                            );
        }
    }        
} // end allDifferentExceptC


        // ...
        // usage:
        allDifferentExcept0(store, x);

Gecode

alldifferent_except_0.cpp

The Gecode version is very succinct since it uses overloaded Boolean operators. Very nice.
void alldifferent_except_0(Space& space, IntVarArray x, IntConLevel icl = ICL_BND) {
  for(int i = 0; i < x.size(); i++) {
    for(int j = i+1; j < x.size(); j++) {
      post(space,
           tt(
           imp(x[i] != 0 && x[j] != 0, 
           // =>
           x[i] != x[j])),
           icl
           );
    }
  }
} // alldifferent_except_0


// ...
// usage:
    alldifferent_except_0(*this, x, opt.icl());

March 14, 2009

Solving Pi Day Sudoku 2009 with the global cardinality constraint

Here is an update of the Pi Day Sudoku 2009 problem.

After a suggestion from Mikael Zayenz Lagerkvist (of the Gecode team) the model now uses just the global cardinality constraint and is much faster than the model which used my own decomposition of all_different_except_0 (or rather all_different_except_Pi) and the count constraint.

Both of the new MiniZinc and Comet models solve the Pi Day Sudoku 2009 problem in less than a second (compared to a couple of seconds before).

The Comet model has the following statistics for finding the first (and unique) solution.

time: 9
#choices = 6
#fail = 1
#propag = 1306

For Gecode/FlatZinc:

runtime: 20
solutions: 1
propagators: 35
propagations: 255
nodes: 3
failures: 0
peak depth: 2
peak memory: 158 KB

Compared to the former model, there are now far fewer failures.


Some more about the global cardinality constraint
Global cardinality (in Sudoku) is a way of stating how many occurrences there should be of each number in each row, column, and region. For the Pi Day Sudoku problem the following occurrences are needed:

Pi: 3 occurrences (coded as -1 in the model)
0: 0 occurrences (0 is a dummy value for handling the indata)
1..9: exactly one occurrence each

In MiniZinc this is coded as

array[1..n, 1..n] of var -1..9: x;
array[-1..9] of 0..3: occ = array1d(-1..9, [3, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]);
constraint
   forall(i in 1..n) ( % rows
      check([x[i,j] | j in 1..n], occ)
   )
...
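
The check predicate is defined elsewhere in the model; presumably it posts a global cardinality constraint over the values -1..9. A sketch of what such a predicate could look like (an assumption on my part, not necessarily the model's actual definition):

predicate check(array[int] of var int: x, array[int] of int: occ) =
   global_cardinality(x,
                      [ i | i in index_set(occ) ],        % the values -1..9
                      [ occ[i] | i in index_set(occ) ]);  % their occurrence counts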

In Comet the function cardinality is used, i.e. m.post(cardinality(occ, all(j in 1..n) x[i,j]));.


For more information about the global cardinality constraint, see the entry in Global Constraint Catalog.


A comment
Introductions to constraint programming nowadays tend to start with the Sudoku problem (or SEND + MORE = MONEY). These models often use the alldifferent constraint, and seldom, if ever, have I seen any presentation using global cardinality. One reason may be that alldifferent is easier to understand in an introduction; another reason may be that it is generally a better constraint to use for these problems.

"Vanilla" Sudoku solver using global cardinality
I have compared a few "vanilla" Sudoku problems with either alldifferent or global cardinality but haven't done any systematic comparison. Sometimes the global cardinality approach is better, sometimes it is somewhat worse.

sudoku_gcc.mzn is a "vanilla" Sudoku solver in MiniZinc using the global cardinality constraint. And the same approach in Comet: sudoku_gcc.co. Note: these models include just a few problems.


Also see Gecode's Sudoku solver, which is very fast and includes 88 different problems.

March 08, 2009

Comet: About 15 new Comet models

The latter part of the last week was dominated by the optimization of the Nonogram solver, which I wrote about in Comet: Nonogram improved: solving problem P200 from 1:30 minutes to about 1 second. I would like to thank Pascal Van Hentenryck, Mikael Zayenz Lagerkvist, and Christian Schulte for help, comments, and inspiration.


Here are the other Comet models written during the last week.

Puzzles, recreation mathematics, etc

Pascal Van Hentenryck's OPL book

These models are translations from Pascal Van Hentenryck's "The OPL Optimization Programming Language".

Operations research

  • furniture_moving_cumulative.co: Furniture moving using the global constraint cumulative (a simple scheduling problem from Marriott & Stuckey: 'Programming with constraints', page 112f). Compare with furniture_moving.co which uses Comet's Schedule constraint.

Global constraints

Many of these models have been inspired by the great collection in the Global Constraint Catalog.

February 28, 2009

Comet: regular constraint, a much faster Nonogram with the regular constraint, some OPL models, and more

Since last time, some more Comet models have been written.

Much faster Nonogram model using the regular constraint

In More Comet models, e.g. Nonogram, Steiner triplets, and different set covering problems I presented nonogram.co, a solver for Nonogram puzzles, and also noted that it was quite slow: Well, it is nice to have some more optimization to do (or more probably, a complete remodelling)....

Inspired by this week's announcement of the ECLiPSe example of a Nonogram solver using the regular constraint (see nono_regular.ecl.txt and regular.ecl.txt) - and also my earlier Gecode/R model nonogram.rb which used "regular expressions" - I created a Nonogram model in Comet using the regular constraint: nonogram_regular.co.

Let us first look at the regular constraint.


Regular constraint
The regular constraint (see the Comet model regular.co for my implementation) is a global constraint using a DFA (deterministic finite automaton) which accepts (or rejects) an input sequence given a "transition matrix". The constraint was presented by Gilles Pesant in "A Regular Language Membership Constraint for Finite Sequences of Variables" (2004).

My implementation of the regular constraint borrows heavily from MiniZinc's built-in regular predicate (from lib/zinc/globals.mzn); the translation to Comet was very straightforward (omitting just some asserts).

An example of the usage of the constraint: we want to match the regular expression "123*21" (i.e. first "1", then "2", then zero or more "3", then "2", and last a "1"). Note: the length of the sequence is 10, so there must be six "3"s, since "123" and "21" are "anchored" at the beginning and the end.

int len = 10;
int n_states = 5;       // number of states
int input_max = 3;      // the input values are 1, 2, and 3
int initial_state = 1;  // we start in state 1
set{int} accepting_states = {n_states}; // the last state is the accepting state
// The transition matrix
int transition_fn[1..n_states, 1..input_max] =
  [[2, 0, 0],  // transitions from state 1: "1" -> state 2
   [0, 3, 0],  // transitions from state 2: "2" -> state 3
   [0, 4, 3],  // transitions from state 3: "2" -> state 4 | "3" -> state 3 (loop)
   [5, 0, 0],  // transitions from state 4: "1" -> state 5
   [0, 0, 0]]; // transitions from state 5: END state
...
exploreall {
  regular(reg_input, n_states, input_max, transition_fn, initial_state, accepting_states);
} using {
  // ....
}

The unique sequence resulting from this automata is thus 1 2 3 3 3 3 3 3 2 1.

For using regular in the Nonogram problem, the automaton for each row/column clue must be built, preferably by a function. In regular.co this is done with the function make_transition_matrix, which by far was the hardest part of this problem (and surely could be written in a smoother way).

For the Nonogram clue [3,2,1] - which represents the regular expression "0*1110*110*10*" - the following automaton (transition matrix) is generated:
1 2
0 3
0 4
5 0
5 6
0 7
8 0
8 9
9 0


Note that the regular function uses 0 (zero) as the failing state, so the states must start with 1. This is taken care of in the model, as the resulting value is subtracted by 1.
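
As a cross-check, the same automaton can be fed to MiniZinc's built-in regular predicate (my own sketch; the row length 8 and the encoding 1 = blank, 2 = filled are assumptions):

include "globals.mzn";

int: n_states = 9;
% the automaton above for clue [3,2,1]; column 1 = input "blank", column 2 = input "filled"
array[1..n_states, 1..2] of int: d =
  [| 1, 2
   | 0, 3
   | 0, 4
   | 5, 0
   | 5, 6
   | 0, 7
   | 8, 0
   | 8, 9
   | 9, 0 |];

int: len = 8;                  % assumed row length
array[1..len] of var 1..2: x;  % x[i] - 1 is the 0/1 board value
constraint regular(x, n_states, 2, d, 1, {n_states});

solve satisfy;
% the unique length-8 solution is 2 2 2 1 2 2 1 2, i.e. the pattern 1 1 1 0 1 1 0 1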

As usual in my models, the regular constraint is just a "convenience function", i.e. it does not use specially written propagation methods etc.

The regular constraint has - of course - more applications than for Nonogram solving. I plan to look more into this.


The Nonogram solver: nonogram_regular.co
After the regular constraint and the automaton generator were written, it was quite easy to change the old Nonogram solver to use these new tools. The result is in nonogram_regular.co. I was quite curious how fast this version would be compared to the former slow model. In short: it is much faster.

As a comparison, the Lambda instance took about 12 seconds with the old model; with the new model it takes 0.5 seconds, which is a nice improvement. I never finished a run of the Nonunique problem with the older model; the new model takes 0.5 seconds (including the startup of the Comet program). Etc.

The P200 problem nonogram_p200.co now takes 2:05 minutes, which can be compared with 57 seconds for the Gecode/R model. Thus, the Comet model is still slow compared to the Gecode version and the ECLiPSe version, which solve the P200 problem in just a second or two. Maybe some creative labeling or a "proper" regular constraint can speed things up...

Update, some hours later: One way to gain about 30 seconds on the P200 problem (bringing it down to 1:30 minutes) was to explicitly set the consistency level to onDomains, e.g. cp.post(a[m] == q0, onDomains), and to use another labeling strategy:
forall(i in 1..rows, j in 1..cols : !board[i,j].bound()) {
  // label(board[i,j]); // the former strategy
  tryall(v in 1..2) m.post(board[i,j] == v, onDomains);
}


Some more Nonogram problems have been coded:

An aside about regular expressions
I have been very interested in regular expressions (especially the more expressive Perl type) for a long time. In 1997 I wrote a simple Perl program MakeRegex which returns a regular expression given a list of words. It was later Appletized in MakeRegexApplet. Now there are better programs/modules for this.

OPL Models

One of my projects is to translate the OPL models from Pascal Van Hentenryck's "The OPL Optimization Programming Language" into Comet. Even if OPL and Comet are different languages, reading the book has been very rewarding. Thanks again, Pascal!

Some OPL models already have been published, but now I've been more systematic and started from the beginning. More models to come.

Finding: arrays in a tuple
In Comet: New models, e.g. Einstein puzzle, KenKen, Kakuro, Killer Sudoku, Stigler's Diet problem, translations of OPL models I wrote:

One big difference here: In Comet it is not allowed to have an array in a tuple, so the use data must be a separate matrix.

I've found a way of using arrays in a tuple: first initialize the values in a matrix and then - using the all function - "slice" the values into the tuple array. This has been done in the models fixed_charge2.co and production3.co.

Example from fixed_charge2.co:

tuple productType {
  int profit;
  set{Machines} machines;
  int[] use;
}
int use[Products, Resources] = [[3,4], [2,3], [6,4]];
productType Product[Products] =
  [productType(6, {shirtM}, all(i in Resources) use[shirt,i]),
   productType(4, {shortM}, all(i in Resources) use[shorts,i]),
   productType(7, {pantM},  all(i in Resources) use[pants,i])];

Combining different models

The alphametic problem SEND + MOST = MONEY has the additional requirement to maximize the value of MONEY. The older model send_most_money.co does that and nothing more.

One natural extension of the problem is the following:
* first find the maximum value of MONEY
* then find all the solutions with this value.

The model send_most_money2.co has two functions:
* smm which is just the constraint for the alphametic problem
* send_most_money which has a parameter money.

send_most_money is first called with 0, indicating that it should maximize the value of MONEY, and then returns that value. The next call to send_most_money is with the calculated MONEY value, which indicates that all solutions should be generated.

The answer is

check all solutions for MONEY = 10876
x[9,7,8,2,1,0,4,6] MONEY: 10876
x[9,7,8,4,1,0,2,6] MONEY: 10876
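
For comparison, phase one (maximizing MONEY) can be sketched in MiniZinc like this (my own sketch, not the author's Comet code; phase two would then fix MONEY = 10876 and enumerate all solutions):

include "globals.mzn";

var 0..9: S;  var 0..9: E;  var 0..9: N;  var 0..9: D;
var 0..9: M;  var 0..9: O;  var 0..9: T;  var 0..9: Y;
var int: MONEY = 10000*M + 1000*O + 100*N + 10*E + Y;

constraint all_different([S, E, N, D, M, O, T, Y]);
constraint S > 0 /\ M > 0;
constraint   1000*S + 100*E + 10*N + D    % SEND
           + 1000*M + 100*O + 10*S + T    % MOST
           = MONEY;                       % MONEY

solve maximize MONEY;
% consistent with the output above: S,E,N,D,M,O,T,Y = 9,7,8,2,1,0,4,6 gives MONEY = 10876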


Jim Orlin's Logic Puzzle

In Colored letters, labeled dice: a logic puzzle, Jim Orlin stated the following puzzle:
My daughter Jenn bought a puzzle book, and showed me a cute puzzle. There are 13 words as follows: BUOY, CAVE, CELT, FLUB, FORK, HEMP, JUDY, JUNK, LIMN, QUIP, SWAG, VISA, WISH.

There are 24 different letters that appear in the 13 words. The question is: can one assign the 24 letters to 4 different cubes so that the four letters of each word appears on different cubes. (There is one letter from each word on each cube.) It might be fun for you to try it. I’ll give a small hint at the end of this post. The puzzle was created by Humphrey Dudley.

...

If anyone wants to write an Excel spreadsheet and solve it via integer programming, please let me know. I’d be happy to post the Excel spreadsheet if you send it to me, or link to it if you post it and send me the URL.

This was a fun puzzle, so I modeled the problem in labeled_dice.co, and mailed Jim the link.

Some days later he wrote Update on Logic Puzzle where the contributed models were presented. There was another Comet model, WordPuzzleSchaus.co, by Pierre Schaus, one of Comet's developers. Pierre's model uses a different and more elegant approach than mine.

Jim also linked to some other logical puzzles from the great collection brownbuffalo.sourceforge.net (which have solutions in ECLiPSe/Prolog). One of these puzzles was Building Blocks, in the same vein as his labeled dice puzzle. Hence I had to make a Comet model of this problem: building_blocks.co.

He concludes the post with the following:

Incidentally, the ease for solving it using Constraint Programming makes me think that Constraint Programming should be considered a fundamental tool in the OR toolkit.


Other Comet models this week

Here are the other Comet models created/published this week. Some of them were very easy to do since they are translations of my MiniZinc models.

February 15, 2009

More Comet models, e.g. Nonogram, Steiner triplets, and different set covering problems

During the week some more Comet models have been published. Many of them are translations of my MiniZinc models.

Findings this week

Here are some findings for the previous week.

tryall again: Nonogram

One of the models I really wanted to make was Nonogram, since it would be interesting to compare with the MiniZinc model nonogram.mzn and the Gecode/R model nonogram.rb. The Comet model is nonogram.co.

The first version was very influenced by the MiniZinc model, and used tryall where the MiniZinc model used exists (see the previous post About 20 more constraint programming models in Comet for some more comments about this). But this was way too slow.

After some thought, I rewrote the tryall:s and it got faster. For a simple Nonogram problem like The Hen (see picture below), it originally took about 30 seconds. After the changes, it now takes a fraction of a second (with a lot fewer propagations).

Nonogram The hen
. X X X . . . .
X X . X . . . .
. X X X . . X X
. . X X . . X X
. . X X X X X X
X . X X X X X .
X X X X X X . .
. . . . X . . .
. . . X X . . .

Still, compared to the Gecode/R (and - of course - Gecode) models, which use a "regular expression" representation, the Comet model is slow, but it is faster than the MiniZinc version. For example, the Lambda picture (see below) takes about 12 seconds with the Comet model, compared to 0.5 seconds with the Gecode/R model. Well, it is nice to have some more optimization to do (or more probably, a complete remodelling)...

Nonogram: Lambda problem
. X X . . . . . . .
X . X X . . . . . .
X . . X . . . . . .
. . . X X . . . . .
. . . . X . . . . .
. . . X X X . . . .
. . . X X X . . . .
. . X X . X X . . .
. . X X . . X . . .
. X X . . . X X . X
. X X . . . . X X X
X X . . . . . X X .

To summarize, the main finding here was to replace two tryall statements with two decision variables, and change the rest accordingly. The model also includes two symmetry breaking constraints ("alldifferent except 0" and ordering of a temporary array), but these make no big difference.

binary representation of set variables

As far as I know, Comet doesn't (yet) have set variables as decision variables. Instead, another representation must be used, and probably the most natural one is an array of binary decision variables, one for each potential element of the set (see the sketch after the list below).

The following models use this representation:
* set_partition.co: partitions a range of integers 1..n into two equal-sized sets where the sums and the sums of squares of the two sets are equal.
* steiner_triplets.co: the well known problem of Steiner triples.
* partition_function.co: partitions values according to a function of some kind.
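
A minimal MiniZinc sketch of the representation itself (my own illustration of the idea, not a translation of the models above): a subset of 1..n is encoded as a 0/1 array x with x[i] = 1 iff i is in the subset. For example, splitting 1..n into two parts with equal sums:

int: n = 8;
array[1..n] of var 0..1: x;   % x[i] = 1 iff i belongs to the first set

% both parts get the same sum (set_partition.co additionally requires
% equal sizes and equal sums of squares)
constraint sum(i in 1..n) (x[i] * i) = sum(i in 1..n) ((1 - x[i]) * i);
constraint x[1] = 1;          % simple symmetry breaking

solve satisfy;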

Models

These models are available on My Comet page. As usual, the problems are presented in the header of each file.

January 22, 2009

New Gecode/R models: langford, all_interval, alpha, golomb_ruler and some more

I have now started to model some of the Gecode examples, and some other things, in Gecode/R.


Langford's number problem (CSPLib problem 24)


Langford's number problem is a classic problem in constraint programming. It is problem 24 in the CSPLib collection.

The problem is defined as:


Arrange 2 sets of positive integers 1..k to a sequence,
such that, following the first occurrence of an integer i,
each subsequent occurrence of i, appears i+1 indices later
than the last.
For example, for k=4, a solution would be 41312432

Also see
* John E. Miller: Langford's Problem
* Encyclopedia of Integer Sequences for the number of solutions for each k
* My MiniZinc model langford2.mzn

The Gecode/R model for this problem is langford.rb.
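
A minimal MiniZinc sketch of the problem (my own sketch, not langford2.mzn or langford.rb): position[i] and position[k+i] hold the two positions of digit i, and solution is the sequence itself.

include "globals.mzn";

int: k = 4;
array[1..2*k] of var 1..2*k: position;
array[1..2*k] of var 1..k:   solution;

constraint all_different(position);
constraint forall(i in 1..k) (
   position[k+i] = position[i] + i + 1  /\   % second occurrence i+1 places later
   solution[position[i]]   = i          /\
   solution[position[k+i]] = i
);

solve satisfy;
% one solution for k = 4 is 4 1 3 1 2 4 3 2, as in the example above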


All interval problem (CSPLib problem 7)


Another CSP classic. CSPLib formulates the problem as follows:

Given the twelve standard pitch-classes (c, c#, d, ...), represented by
numbers 0,1,...,11, find a series in which each pitch-class occurs exactly
once and in which the musical intervals between neighbouring notes cover
the full set of intervals from the minor second (1 semitone) to the major
seventh (11 semitones).

...


The constraints in the Gecode/R model all_interval.rb are simply

x_is_an int_var_array(n, 1..n)          # The sequence
diffs_is_an int_var_array(n-1, 1..n-1)  # The differences
x.must_be.distinct
diffs.must_be.distinct
(0..n-2).each{|k| diffs[k].must == (x[k+1] - x[k]).abs }

And two symmetry breaking constraints which, for n=12, reduce the number of unique solutions from 1328 (in 11 seconds) to 332 (in 3 seconds).


x[0].must < x[1]
diffs[0].must > diffs[n-2]

The Gecode model all-interval (implemented in C++) solves the problem for n=1000 in 1 second. This Gecode/R model takes about 7 seconds after some tweaking of the :strength parameter.

Maybe I should benchmark the propagation strategies as for the Golomb ruler problem, see below.


Golomb ruler (CSPLib problem 6)


Yet another classic. See CSPLib problem number 6 for a formulation, and here for more references. Wikipedia's Golomb ruler has more info.

This Gecode/R model golomb_ruler.rb was quite slow at first. When testing for m = 10 it took about 15 seconds, which was about twice as slow as the MiniZinc model (this is from the MiniZinc online examples).

Gecode's program golomb-ruler solves n = 12 in slightly under 1 second (with default settings). Something had to be done!

After some tweaking, like setting :strength => :bounds without any satisfying result, I decided on a more "scientific" approach, and benchmarked (some of) the different branch/search strategies for the branch_on method. There are a lot of these in Gecode/R.

After about 30 minutes of running, the ranking list below was completed. Note: the time is for running the specific model, not counting the time for startup of the program.

The winner was

:variable => :largest_size, :value => :split_min 

which is used for all three variables in the model. Now m = 10 takes about 1 second.
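
A rough MiniZinc analogue of the winning strategy (my own sketch, and my own mapping of the strategies: largest_size roughly corresponds to anti_first_fail, split_min to indomain_split):

include "globals.mzn";

int: m = 10;
array[1..m] of var 0..m*m: mark;
array[1..(m*(m-1)) div 2] of var 1..m*m: diffs =
   [ mark[j] - mark[i] | i in 1..m, j in i+1..m ];

constraint mark[1] = 0;
constraint forall(i in 1..m-1) (mark[i] < mark[i+1]);
constraint all_different(diffs);
constraint diffs[1] < diffs[(m*(m-1)) div 2];   % symmetry breaking

solve :: int_search(mark, anti_first_fail, indomain_split, complete)
      minimize mark[m];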


The listing below states the time for the combination of the different strategies as
variable strategy ":" value strategy "->" running time

largest_size:split_min -> 1.114403
smallest_min:split_min -> 5.275035
smallest_min_regret:split_min -> 5.289433
none:split_min -> 5.320299
smallest_degree:split_min -> 5.344738
smallest_degree:min -> 5.551923
none:min -> 5.556326
smallest_min:min -> 5.567722
largest_max_regret:split_min -> 5.621116
smallest_min_regret:min -> 5.657271
largest_max_regret:min -> 5.783116
largest_size:min -> 6.067107
smallest_size:split_min -> 7.006949
largest_min:split_min -> 8.229713
smallest_size:min -> 11.800805
largest_min:split_max -> 14.558584
largest_min:min -> 14.736679
largest_min:max -> 17.301876
largest_size:split_max -> 17.564312
largest_min:med -> 23.655615
largest_size:max -> 26.835556
smallest_min:split_max -> 60.566207
largest_max_regret:split_max -> 60.605995
smallest_min_regret:split_max -> 60.79834
smallest_degree:split_max -> 60.948791
none:split_max -> 63.229398
smallest_size:split_max -> 80.155812
smallest_min_regret:max -> 84.988649
none:max -> 85.208536
smallest_degree:max -> 85.541162
largest_max_regret:max -> 85.64104
smallest_min:max -> 85.859321
smallest_size:max -> 93.474409
none:med -> 102.263753
largest_max_regret:med -> 103.352322
smallest_degree:med -> 103.688327
smallest_min:med -> 103.78953
smallest_size:med -> 131.829423
largest_size:med -> 160.26752
smallest_min_regret:med -> 181.760703

It is quite a span from the fastest to the slowest. It seems that :value => :med is bad for this problem.


alpha (alphametic problem)


Gecode/R model alpha.rb.

This alphametic problem asks for a solution of the equations below. The letters "a".."z" shall each be assigned a distinct number from 1..26.

b+a+l+l+e+t       == 45 
c+e+l+l+o         == 43 
c+o+n+c+e+r+t     == 74 
f+l+u+t+e         == 30 
f+u+g+u+e         == 50 
g+l+e+e           == 66 
j+a+z+z           == 58 
l+y+r+e           == 47 
o+b+o+e           == 53 
o+p+e+r+a         == 65 
p+o+l+k+a         == 59 
q+u+a+r+t+e+t     == 50 
s+a+x+o+p+h+o+n+e == 134
s+c+a+l+e         == 51 
s+o+l+o           == 37 
s+o+n+g           == 61 
s+o+p+r+a+n+o     == 82 
t+h+e+m+e         == 72 
v+i+o+l+i+n       == 100
w+a+l+t+z         == 34 

See also
* the Gecode program alpha.cpp
* the MiniZinc models crypto.mzn, crypto_ip.mzn (integer programming model).
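
A direct MiniZinc encoding of the equations might look like this (my own sketch, not the author's crypto.mzn or alpha.rb):

include "globals.mzn";

var 1..26: a;  var 1..26: b;  var 1..26: c;  var 1..26: d;  var 1..26: e;
var 1..26: f;  var 1..26: g;  var 1..26: h;  var 1..26: i;  var 1..26: j;
var 1..26: k;  var 1..26: l;  var 1..26: m;  var 1..26: n;  var 1..26: o;
var 1..26: p;  var 1..26: q;  var 1..26: r;  var 1..26: s;  var 1..26: t;
var 1..26: u;  var 1..26: v;  var 1..26: w;  var 1..26: x;  var 1..26: y;
var 1..26: z;

constraint all_different([a,b,c,d,e,f,g,h,i,j,k,l,m,
                          n,o,p,q,r,s,t,u,v,w,x,y,z]);

constraint b+a+l+l+e+t       = 45  /\  c+e+l+l+o         = 43  /\
           c+o+n+c+e+r+t     = 74  /\  f+l+u+t+e         = 30  /\
           f+u+g+u+e         = 50  /\  g+l+e+e           = 66  /\
           j+a+z+z           = 58  /\  l+y+r+e           = 47  /\
           o+b+o+e           = 53  /\  o+p+e+r+a         = 65  /\
           p+o+l+k+a         = 59  /\  q+u+a+r+t+e+t     = 50  /\
           s+a+x+o+p+h+o+n+e = 134 /\  s+c+a+l+e         = 51  /\
           s+o+l+o           = 37  /\  s+o+n+g           = 61  /\
           s+o+p+r+a+n+o     = 82  /\  t+h+e+m+e         = 72  /\
           v+i+o+l+i+n       = 100 /\  w+a+l+t+z         = 34;

solve satisfy;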

global cardinality

global_cardinality.rb implements the global constraint global_cardinality (the link is to the wonderful collection of global constraints: the Global Constraint Catalog).

Nothing fancy, just this method, using the built-in method count:

class Array
  def global_cardinality(gcc)
    @n = self.length
    (@n).times{|i|
      gcc[i].must == self.count(i)
    }
  end
end

Please note that this global constraint is just on the modelling level; there is no special (Gecode) code with optimized propagation etc. Even if they are not faster than "inline code", I see global constraints as conceptual building blocks, very useful in modelling.

I intend to implement more global constraints in the future.


sliding sum


sliding_sum.rb implements another global constraint: sliding_sum, which constrains all sliding (running) sums to be between low and high.
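
A decomposition of the constraint can be sketched in MiniZinc as follows (my own sketch, with seq as the window length, intended to match the catalog's sliding_sum(LOW, UP, SEQ, VARIABLES)):

predicate sliding_sum_decomp(int: low, int: up, int: seq,
                             array[int] of var int: x) =
   forall(i in min(index_set(x))..max(index_set(x)) - seq + 1) (
      let { var int: s = sum(j in i..i+seq-1) (x[j]) } in
      low <= s /\ s <= up
   );

% usage: every window of length 3 sums to between 2 and 5
% constraint sliding_sum_decomp(2, 5, 3, x);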

January 05, 2009

Tom Schrijvers: "Monadic Constraint Programming" (in Haskell)

Tom Schrijvers presents in the blog post Monadic Constraint Programming a draft version of the paper Monadic Constraint Programming written by him, Peter Stuckey, and Philip Wadler:


A constraint programming system combines two essential components: a constraint solver and a search engine. The constraint solver reasons about satisfiability of conjunctions of constraints, and the search engine controls the search for solutions by iteratively exploring a disjunctive search tree defined by the constraint program. In this paper we give a monadic definition of constraint programming where the solver is defined as a monad threaded through the monadic search tree. We are then able to define search and search strategies as first class objects that can themselves be built or extended by composable search transformers. Search transformers give a powerful and unifying approach to viewing search in constraint programming, and the resulting constraint programming system is first class and extremely flexible.


Prototype code in Haskell can be downloaded here.