Wed, Oct. 7th, 2009, 07:35 pm
Sudoku and the way of the SAT solver

Writing a Sudoku solver is a slightly tricky but reasonably straightforward problem. Here I'll present a non-obvious way of implementing it which is short in terms of lines of code and also much easier to modify and extend than the more direct approaches. The general technique is applicable to a very wide range of problems, so hopefully someone will read this and then find it useful in the future.

My general approach works in three steps. First, express the Sudoku problem as a much more general type of problem called a SAT problem (short for 'satisfiability', traditionally capitalized for no particular reason). Then solve the SAT problem, then translate the solution back into the original format.

A SAT problem works on boolean variables, and the goal is to find an assignment which satisfies every one of a list of clauses. An individual clause states that at least one of a set of variables and their negations must be true; for example 'A or B or C' and 'not B or not D or E' are both typical clauses. To translate a Sudoku problem into a SAT problem, I make one boolean variable for each possible state of each cell of the original problem, so there's one variable for whether the upper-left cell is a one, another for whether it's a two, etc. Translating the constraints of the Sudoku problem is then straightforward: each cell must hold exactly one digit, each digit must occur at least once in each row, column, and subsquare (one clause listing the nine relevant variables), and no two cells in the same row, column, or subsquare may hold the same digit (expressed as, for each such pair of variables, either the first one is false or the second one is false).
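To make that concrete, here is a small sketch of what the clauses for a single cell look like. The variable numbering here is simplified for illustration and is not the one the code below uses; positive integers stand for variables, negative integers for their negations, and a clause is a list of literals of which at least one must be true.

# Illustrative only: variable k means "this cell holds digit k".
cell_vars = list(range(1, 10))

# At least one digit is present in the cell.
clauses = [cell_vars[:]]

# No two digits at once: for every pair, at least one is false.
for i in range(9):
	for j in range(i):
		clauses.append([-cell_vars[i], -cell_vars[j]])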

To solve the SAT problem, the solver first scans for a clause containing no variables. If there is one, then the problem is insoluble and you return. Otherwise, it scans for clauses containing exactly one variable. If there is such a thing, it makes that assignment and recurses. If not, it finds the clause with the smallest number of variables in it (skipping the 'not A or not B' clauses, because everything contains about the same number of those) and recurses twice, once with one of that clause's variables set to true and once with it set to false.
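As a small illustration of the assignment step, here it is applied by hand to a toy problem (this mirrors what the assign function in the code below does, but is not taken from it):

# Toy problem: (1 or 2) and (not 1 or 3) and (not 2 or not 3).
clauses = [[1, 2], [-1, 3], [-2, -3]]

# Assign variable 1 true: clauses containing 1 are satisfied and dropped,
# and the now-false literal -1 is removed from the rest.
after = []
for c in clauses:
	if 1 in c:
		continue
	elif -1 in c:
		after.append([x for x in c if x != -1])
	else:
		after.append(c)

# after == [[3], [-2, -3]]: the unit clause [3] now forces 3 to be true.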

The advantage of this approach is that the heuristics for noticing when there's only one possible value left in a row, column, subsquare, or individual cell don't have to be special cased - they're all subsumed by the single simple deduction that if a clause only contains a single variable, then that assignment has to be made. As a result, adding, for example, a constraint that each digit may occur only once on each diagonal is trivial, and making irregular shapes and adding novel types of constraints is also fairly easy, while it would be quite difficult in the more obvious but less flexible approaches to writing Sudoku solvers.
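For instance, here is a sketch of what the diagonal constraint might look like with the code below. The cell indices are the row-major numbers 0 through 80 used by the svals list, and this extension is illustrative rather than part of the solver as written:

# Hypothetical extension: treat the two main diagonals as extra groups,
# so each digit must appear exactly once on each of them.
svals.append([0, 10, 20, 30, 40, 50, 60, 70, 80])	# main diagonal
svals.append([8, 16, 24, 32, 40, 48, 56, 64, 72])	# anti-diagonal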

It would be possible to make this code a lot faster, even without changing the architecture - for example, one could build an index of what variables occur in what clauses, to get rid of all the linear scans. But for solving a small number of Sudokus, or even a largeish number, taking a few seconds each is no big deal, and this approach is much more about maintainability, flexibility, and extensibility. Interestingly, for a large number of real world problems the fastest known way to solve them is to translate to SAT and run a general SAT solver with appropriately tuned parameters on the result. Obviously this is a hack, but applying custom heuristics to the unencoded problem is so messy and difficult, and so much work has been done on fine-tuning the general purpose solvers, that more often than not the theoretically better approach can't compete in practice. Sudoku has some very special properties which tip the balance in favor of a specialized custom bit of code: in fact I gave my SAT solver a branching heuristic which is very Sudoku-specific, and skipped the common pure-literal rule (if a variable only ever appears positively or only negatively, just assign it that value) because it doesn't apply here. But such customization being warranted is highly unusual, and even in this case the general SAT solver has some clear advantages, starting with it containing fewer lines of code.
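As a sketch of the kind of index mentioned above (not something the code below actually builds): a dictionary from each literal to the clauses mentioning it would let an assignment touch only the affected clauses instead of scanning the whole list.

# Hypothetical speedup: map each literal to the positions of the clauses
# that contain it, so an assignment only visits clauses that mention it.
def build_index(clauses):
	index = {}
	for i, c in enumerate(clauses):
		for lit in c:
			index.setdefault(lit, []).append(i)
	return index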

I used exception handling to indicate when a solution is found, and I add assignments to it in handlers that then re-raise, mostly to make the point that this is a perfectly valid and highly pragmatic way of using exceptions.

A flat file of the code can be found here, but I've included it verbatim below.

By the way, the method of testing I used on this is to run it on a Sudoku problem with no initial constraints, which is a simple one-line test and does a good job of hitting all the edge cases regardless of the implementation details.

# Raised when a satisfying assignment is found; the assignment accumulates
# in self.s as the exception propagates back up the call stack.
class SolutionFound(BaseException):
	def __init__(self):
		self.s = []

# Return a new clause list with the literal val assigned true: clauses
# containing val are satisfied and dropped, and the now-false literal
# -val is removed from the remaining clauses.
def assign(clauses, val):
	n = []
	for c in clauses:
		if val in c:
			continue
		elif -val in c:
			n.append([x for x in c if x != -val])
		else:
			n.append(c)
	return n

# Assign val and keep solving; if a solution bubbles up, record val as part
# of it and re-raise so callers can add their own assignments.
def try_sat(clauses, val):
	try:
		solve_sat(assign(clauses, val))
	except SolutionFound as s:
		s.s.append(val)
		raise s

def solve_sat(clauses):
	# An empty clause can't be satisfied, so this branch is a dead end.
	if [] in clauses:
		return
	# No clauses left means every constraint has been satisfied.
	if len(clauses) == 0:
		raise SolutionFound()
	# Unit propagation: a one-literal clause forces that assignment.
	for x in clauses:
		if len(x) == 1:
			try_sat(clauses, x[0])
			return
	# Otherwise branch on the shortest positive clause (skipping the
	# all-negative 'not A or not B' pair clauses), trying its first
	# variable both ways.
	smallest = clauses[0]
	for c in clauses:
		if c[0] > 0 and (smallest[0] < 0 or len(smallest) > len(c)):
			smallest = c
	try_sat(clauses, smallest[0])
	try_sat(clauses, -smallest[0])

# Cell indices (0..80, row-major) of every row, column, and 3x3 subsquare.
svals = [list(range(x, x+9)) for x in range(0, 81, 9)] + \
		[list(range(x, x+81, 9)) for x in range(9)] + \
		[[x,x+1,x+2,x+9,x+10,x+11,x+18,x+19,x+20] for x in [0,3,6,27,30,33,54,57,60]]

# Add clauses requiring exactly one of the variables in c to be true: at
# least one (the clause c itself) and no two at once (pairwise exclusions).
def group(c, p):
	p.append(c)
	for i in range(len(c)):
		for j in range(i):
			p.append([-c[i], -c[j]])

# For each digit, require it to appear exactly once among the cells in c.
# Variable numbering: cell index + 100 * digit index + 1.
def groups(c, p):
	for i in range(9):
		group([x + 100 * i + 1 for x in c], p)

# Turn a list of true variables back into a 9x9 grid of digits.
def decode(r):
	r2 = [[0 for x in range(9)] for y in range(9)]
	for v in r:
		if v > 0:
			pos = (v-1) % 100
			r2[pos // 9][pos % 9] = (v // 100) + 1
	return r2

def solve_sudoku(problem):
	p = []
	# Each cell holds exactly one digit.
	for i in range(81):
		group(list(range(i+1, 901, 100)), p)
	# Each digit appears exactly once in each row, column, and subsquare.
	for s in svals:
		groups(s, p)
	# Apply the given clues up front.
	pre = []
	for x in range(9):
		for y in range(9):
			if problem[x][y]:
				m = x * 9 + y + 100 * (problem[x][y] - 1) + 1
				pre.append(m)
				p = assign(p, m)
	try:
		solve_sat(p)
	except SolutionFound as s:
		return decode(s.s + pre)
	return None

import sys
# The solver recurses once per assignment, so raise the recursion limit.
sys.setrecursionlimit(10000)

if __name__ == '__main__':
	tt = [[0] * 9 for i in range(9)]
	print('\n'.join(' '.join([str(p) for p in q]) for q in solve_sudoku(tt)))
	print(' ')
	tt[0][0] = 6
	tt[3][4] = 1
	print('\n'.join(' '.join([str(p) for p in q]) for q in solve_sudoku(tt)))

Thu, Oct. 8th, 2009 01:29 pm (UTC)
knowbuddy: TMTOWTDI

I teach an Advanced Database Structures course for a Bachelor's program. Our review for the mid-term exam is to write a Sudoku solver in SQL. It's a great review, as it ends up using unions, several types of joins, aggregate functions, and most of the base concepts they'll need for the second half of the course. I get everyone up at the white board and we figure out, together, how to solve the generic case, then variations.

It turns out that you need 4 or 5 tables and as many (very simple) queries to populate them with data. Once you've done that, it's just a repetition of two queries to iteratively solve the puzzle.

The students always have fun with it, even though they are disappointed to see how easy it is.