
Conversation

@antmarakis
Collaborator

Hopefully this will take care of the issue where random tests would occasionally fail (#627).

The tests that were failing were on csp (min_conflicts) and learning (random_forest). I fixed the latter by using the existing grade_learner function.
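For context, `grade_learner` scores a learner by the fraction of test examples it predicts correctly, which gives a deterministic threshold to assert against instead of comparing exact (random) outputs. A minimal sketch of the idea, assuming a `(input, expected_output)` tuple format; the `predict` function here is a made-up stand-in for a trained learner:

```python
from statistics import mean

def grade_learner(predict, tests):
    """Return the fraction of (input, expected_output) pairs
    that `predict` gets right."""
    return mean(int(predict(x) == y) for x, y in tests)

# Hypothetical predictor: classify a number by its sign.
def predict(x):
    return 'pos' if x > 0 else 'neg'

# Correct on 1 and -2, wrong on the (1, 'neg') pair.
score = grade_learner(predict, [(1, 'pos'), (-2, 'neg'), (1, 'neg')])
```

A test can then assert something like `score >= 0.6` rather than an exact prediction, so occasional randomness in training no longer flips the result.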

For csp, I wrote a new function, failure_test. It takes as input a function and a list of tests, each a tuple of the form (input, failure_output), and it counts how many tests output something other than failure_output. I went down this route (as explained in the comments) because many algorithms return something like False or None on failure, while on success they return something arbitrary (a dictionary, a list, etc.), so it is much easier to check whether a function failed than whether it succeeded. I think this is a simple enough way to take care of this issue.
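The counting logic described above can be sketched as follows. This is an illustrative reconstruction, not the merged code verbatim: it scores an algorithm by the fraction of inputs on which it does *not* return the known failure output, and the `solve` function below is a hypothetical example of a solver that returns None on failure:

```python
from statistics import mean

def failure_test(algorithm, tests):
    """Score `algorithm` by the fraction of test cases on which it
    returns something other than the known failure output.
    tests: list of (input, failure_output) tuples."""
    return mean(int(algorithm(x) != y) for x, y in tests)

# Hypothetical solver: succeeds (returns a dict) on non-negative
# inputs, fails (returns None) on negative ones.
def solve(x):
    return {'answer': x} if x >= 0 else None

# Succeeds on 1 and 2, fails on -1.
score = failure_test(solve, [(1, None), (2, None), (-1, None)])
```

Because every failure mode collapses to a single sentinel value, the test only needs `score` to clear a threshold, regardless of what arbitrary structure the algorithm returns on success.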

@norvig norvig merged commit 3ff5f40 into aimacode:master Oct 19, 2017
dj5x5 pushed a commit to dj5x5/aima-python that referenced this pull request Jul 17, 2025
* add failure_test method to utils

* comment fix

* Update test_learning.py

* Update test_csp.py
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
