
Conversation

@wenting-zhao
Collaborator

To work with HumanEval:

commit0 setup --dataset-name commit0/openai_humaneval all
commit0 build
# test the solution in outputs on example HumanEval/0
commit0 test HumanEval/0 outputs --backend local
# test the reference solution on example HumanEval/0
commit0 test HumanEval/0 outputs --backend local --reference

where the contents of outputs are

    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx == idx2:
                distance = abs(elem - elem2)
                if distance < threshold:
                    return True
    print("here")
    return False

which produces the following output:

Traceback (most recent call last):
  File "/testbed/test.py", line 43, in <module>
    check(has_close_elements)
  File "/testbed/test.py", line 34, in check
    assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
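
The assertion fails because the completion above compares each element with itself (idx == idx2), so distance is always 0 and the function returns True for any non-empty input; the stray print("here") is only debug output. As a sketch (not part of this PR), a completion that should pass the reference tests for HumanEval/0 compares only distinct indices:

    # completion body for HumanEval/0: compare only distinct pairs of elements
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = abs(elem - elem2)
                if distance < threshold:
                    return True
    return False

Re-running commit0 test HumanEval/0 outputs --backend local with this completion in outputs should then pass the check in test.py.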

@wenting-zhao wenting-zhao requested a review from srush December 2, 2024 01:55
@wenting-zhao wenting-zhao merged commit fa81bc4 into main Dec 2, 2024
2 checks passed
@wenting-zhao wenting-zhao deleted the integration branch December 2, 2024 20:20