% A pile is ascending if every disk is smaller than the disk below it.
ascending([]).
ascending([_]).
ascending([H | [S | T]]) :-
    H < S,
    ascending([S | T]),
    !.

% A position is valid if all three piles are ascending.
valid([P1, P2, P3]) :-
    ascending(P1),
    ascending(P2),
    ascending(P3),
    !.

% make_a_move(Before, After, [Disk, FromPile, ToPile])
make_a_move([[H|R], P2, P3], [R, [H|P2], P3], [H,1,2]).
make_a_move([[H|R], P2, P3], [R, P2, [H|P3]], [H,1,3]).
make_a_move([P1, [H|R], P3], [[H|P1], R, P3], [H,2,1]).
make_a_move([P1, [H|R], P3], [P1, R, [H|P3]], [H,2,3]).
make_a_move([P1, P2, [H|R]], [[H|P1], P2, R], [H,3,1]).
make_a_move([P1, P2, [H|R]], [P1, [H|P2], R], [H,3,2]).

% hanoi(From, To, AlreadyTried, Moves): Moves transforms From into To
% without revisiting any position already in AlreadyTried.
hanoi(Goal, Goal, _, []).
hanoi(From, To, AlreadyTried, [Move | Rest]) :-
    make_a_move(From, Intermediate, Move),
    not(member(Intermediate, AlreadyTried)),
    valid(Intermediate),
    hanoi(Intermediate, To, [Intermediate | AlreadyTried], Rest).

% h(Stack, Moves): move Stack from pile 1 to pile 2.
h(Stack, Moves) :-
    hanoi([Stack, [], []], [[], Stack, []], [[Stack, [], []]], Moves).
Aside from declaring the max length, could you somehow declare that all intermediate positions must be unique? I'm guessing that the longer solutions repeat a position.
Or is there a property of every intermediate position that tells you what the next position on the shortest path must be? Then maybe you could add a constraint for that property.
With pure code, you can simply use iterative deepening to get a shortest solution as the first solution:
?- length(Ms, _), h([1,2,3], Ms).
Ms = [[1, 1, 2], [2, 1, 3], [1, 2, 3], [3, 1, 2], [1, 3, 1], [2, 3, 2], [1, 1, 2]] .

length/2 generates increasingly longer lists on backtracking, so it will create shorter lists first.
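To see why, consider how length/2 by itself enumerates lists of increasing length on backtracking (answer substitutions shown schematically):

?- length(Ms, _).
Ms = [] ;
Ms = [_] ;
Ms = [_, _] ;
Ms = [_, _, _] .

Conjoining it in front of h/2 therefore tries all 0-move solutions first, then all 1-move solutions, and so on.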
Note that iterative deepening relies on an important logical property called monotonicity. Hence, consider for example using dif/2 to stay in the pure monotonic subset of Prolog when stating disequality of terms.
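For example, dif/2 states that two terms are different, and it can safely be used even when its arguments are still variables:

?- dif(X, Y), X = a, Y = b.
X = a,
Y = b.

?- dif(X, Y), X = a, Y = a.
false.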
That being said, your code can be significantly generalized, because I get, for the most general query:

?- h(S, Ms), false.
false.

Expected: nontermination.
For clarity, consider using DCGs to describe the list of moves!
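For example, a minimal sketch of such a formulation, reusing your make_a_move/3 and valid/1 (hanoi_moves//3 is just an illustrative name):

hanoi_moves(Goal, Goal, _) --> [].
hanoi_moves(From, To, AlreadyTried) -->
        [Move],
        { make_a_move(From, Intermediate, Move),
          \+ member(Intermediate, AlreadyTried),
          valid(Intermediate) },
        hanoi_moves(Intermediate, To, [Intermediate|AlreadyTried]).

h(Stack, Moves) :-
        phrase(hanoi_moves([Stack,[],[]], [[],Stack,[]], [[Stack,[],[]]]), Moves).

The iterative-deepening query ?- length(Ms, _), h([1,2,3], Ms). works unchanged with this definition.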
Absolutely: Every uninformed search strategy must visit every leaf, and iterative deepening does so too.
To significantly improve performance, problem-specific information must be taken into account.
A really cool thing to do here would be to write a program that "specializes" a simplistic, given search strategy by deducing or using problem-specific information.
First, it seems that you have a very specific idea of what declarative means. From what I gathered, you expect a declarative description/program primarily to be "algorithm free" and free of any procedural and non-functional concerns like resource consumption and non-termination. In the context of Prolog and logic programming, declarative has a much more restricted meaning: that the meaning of a program is given by a declarative reading; and that the actual execution will yield that meaning too, modulo some procedural issues like time and space consumption. Ensuring that Prolog programs have such a declarative reading is not that obvious, since many problematic constructs have entered the language. See [logical-purity] on SO for attempts to regain that territory for purity.
Now for the current problem: evidently, the search space is quite large; maybe there are loops inside that cannot be found by your loop checker. What about constructing redundant situations and thinking about them? Touching the same disk twice in a row does not make sense (for two reasons: it either loops or passes through a redundant intermediate position). But what about cycles of length 3? What is their nature?
By adapting the program minimally:
:- use_module(library(clpq)).

ascending([]).
ascending([H|Hs]) :-
        iascending(Hs, H).

iascending([], _).
iascending([S|_], H) :-
        {H < S}.    % for diagnosis, otherwise H < S

make_a_move([[H|R], P2, P3], [R, [H|P2], P3], H:(1->2)) :- ascending([H|P2]).
make_a_move([[H|R], P2, P3], [R, P2, [H|P3]], H:(1->3)) :- ascending([H|P3]).
make_a_move([P1, [H|R], P3], [[H|P1], R, P3], H:(2->1)) :- ascending([H|P1]).
make_a_move([P1, [H|R], P3], [P1, R, [H|P3]], H:(2->3)) :- ascending([H|P3]).
make_a_move([P1, P2, [H|R]], [[H|P1], P2, R], H:(3->1)) :- ascending([H|P1]).
make_a_move([P1, P2, [H|R]], [P1, [H|P2], R], H:(3->2)) :- ascending([H|P2]).
% ^^^ more readable move representation
(Most important is the < from clpq, which already fails for: {X<Y}, X = Y.)
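To see the difference concretely (a small demonstration; exact error messages and residual constraints vary between systems):

?- X < Y.
% instantiation error: </2 requires ground arithmetic arguments

?- {X < Y}, X = Y.
% fails: the solver knows both cannot hold
false.

?- {X < Y}, X = 1, Y = 2.
X = 1,
Y = 2.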
| ?- _=[A,B,C], make_a_move(A,B, QuaAB), make_a_move(B,C,QuaBC), make_a_move(C,A,QuaCA).
% ^^^ to get A,B,C first as answer substitutions.
A = [[_A],[],[]],
B = [[],[_A],[]],
C = [[],[],[_A]],
QuaAB = _A:(1->2),
QuaBC = _A:(2->3),
QuaCA = _A:(3->1) ? ;
...
% most interesting case with all three lists not fixed in length:
A = [[_A,_B|_C],[_D|_E],[_F|_G]],
B = [[_B|_C],[_A,_D|_E],[_F|_G]],
C = [[_B|_C],[_D|_E],[_A,_F|_G]],
QuaAB = _A:(1->2),
QuaBC = _A:(2->3),
QuaCA = _A:(3->1),
{_A-_D<0},
{_A-_F<0},
{_A-_B<0} ? ;
From these answers it seems that, in every case, all three moves touch the same disk _A. Is this really true? Let's ask the opposite:
| ?- _=[A,B,C], make_a_move(A,B, QuaAB), QuaAB=AB:_, make_a_move(B,C,QuaBC), QuaBC=BC:_, dif(AB,BC),
make_a_move(C,A,QuaCA), QuaCA=CA:_, dif(BC,CA).
no
Therefore: when the same disk may not be moved again in the immediately following move, cycles of length 3 are impossible.
Where else can you come to comparable conclusions so easily?
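One way to exploit this observation, as a sketch (hanoi_nodup/5, h_nodup/2 and the none start marker are illustrative names, not from the gist): thread the most recently moved disk through the search and use dif/2 to demand that consecutive moves touch different disks. With the adapted make_a_move/3 above, the separate valid/1 check is no longer needed, since every move already checks ascending/1.

hanoi_nodup(Goal, Goal, _, _, []).
hanoi_nodup(From, To, LastDisk, AlreadyTried, [Move|Rest]) :-
        make_a_move(From, Intermediate, Move),
        Move = Disk:_,
        dif(Disk, LastDisk),          % never move the same disk twice in a row
        \+ member(Intermediate, AlreadyTried),
        hanoi_nodup(Intermediate, To, Disk, [Intermediate|AlreadyTried], Rest).

h_nodup(Stack, Moves) :-
        hanoi_nodup([Stack,[],[]], [[],Stack,[]], none, [[Stack,[],[]]], Moves).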
BTW, with the help of length/2 as indicated by @triska, and the above optimization, I conclude that:
| ?- N +\ ( length(Hs,N), h([1,2,3,4,5],Hs) ).
N = 31 ? ...
| ?- N+\ ( findall(t,h([1,2,3,4],Hs),Ts),length(Ts,N) ).
N = 1872
I've been searching for a while for a proper declarative solution to the Towers of Hanoi puzzle in Prolog. The only things I've come across have been implementations of the classic recursive algorithm:
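They all look more or less like this (a representative version, not any particular one I found):

% Classic recursive solution: it prints the moves rather than searching for them.
hanoi(N) :-
    move(N, left, centre, right).

move(0, _, _, _) :- !.
move(N, From, To, Via) :-
    M is N - 1,
    move(M, From, Via, To),
    format("move disk ~w from ~w to ~w~n", [N, From, To]),
    move(M, Via, To, From).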
This isn't declarative; it doesn't express the problem, it encodes a solution.
The code in this gist is an attempt to describe the problem declaratively. And it works: it finds 2 solutions when there's one disk, 12 when there are 2, 1872 for 3, and I'm sure it would find the 6,563,711,232 solutions for 4 disks if only I had time before the universe ends.
My problem is that I need to find the shortest solution, but I don't see how I can know that any particular solution is shortest without knowing the lengths of all those that follow. I could use the fact that I analytically know that the shortest is 2ⁿ-1 long, but that feels like it's cheating.
My question: is there a purely declarative way to solve the puzzle?
Thanks for thinking about this.
Dave