STRIPS Check and Match Tree Implementation #118
base: devel
Conversation
Codecov Report
@@            Coverage Diff            @@
##             devel     #118   +/-   ##
=========================================
  Coverage    97.71%   97.71%
=========================================
  Files           51       51
  Lines         3108     3108
  Branches       121      121
=========================================
  Hits          3037     3037
  Misses          55       55
  Partials        16       16

Continue to review the full report at Codecov.
@beckydvn: Can you give this branch a go to see if we have a speedup on generation? Alternatively, you can let me know what the best way to test it would be and I'll have a go. If the problem is STRIPS, we should automagically get the speedup.
@haz Sure, I'll give it a go :) If you want to see for yourself: if you're just testing the speed of trace generation, you can try making a large Vanilla Sampler generator. As it stands, I think anything more than a few hundred traces gets pretty slow, so hopefully that's faster on this branch.
That should do the trick.
When I try to run that line, though, I'm getting an error. Here's the stack trace:
What's the problem ID you're using?
Just two local files for blocks world :)
Working with this problem, and this sampling...

traces = VanillaSampling(dom='src/d.pddl', prob='src/p.pddl', plan_len=20, num_traces=100)

...we see about a 3x speedup. The bug you found is fixed. I'll try converting to numeric IDs to improve the performance further.
Ok, that's about as far as I can push it. Want to have a look, @gfrances / @miquelramirez? If we want to bump the performance on random trace sampling from here, it would require better progression (STRIPS-specific, bit vectors, etc.).
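For reference, a minimal sketch (illustrative only, not part of this PR) of the kind of STRIPS-specific, bit-vector progression meant above: fluents are indexed, a state is a single integer used as a bit mask, and each ground operator carries precondition/add/delete masks. All names are made up.

from collections import namedtuple

# Each ground operator as three bit masks over the fluent indices.
Op = namedtuple('Op', ['pre', 'add', 'delete'])

def applicable(state, op):
    # Every precondition bit must already be set in the state.
    return state & op.pre == op.pre

def progress(state, op):
    # Clear the delete-effect bits, then set the add-effect bits.
    return (state & ~op.delete) | op.add

# Example with three fluents p0, p1, p2 (bits 0, 1, 2):
op = Op(pre=0b011, add=0b100, delete=0b010)
s0 = 0b011
assert applicable(s0, op)
s1 = progress(s0, op)   # 0b101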
Hi @haz, @gfrances is on holidays, I presume for the next couple of weeks. I will try to set aside a moment today to look over the PR. What I could see very quickly is that one of the tests is failing, probably because the match tree is changing the order of expansions and finding a goal state earlier than the test expects. Do we have tests to check that the match tree isn't pruning actions? I remember we had bugs like that back in
A great point... How do you feel about random tests? I.e., generate random traces and confirm the MT and naive approaches coincide on action applicability.
Hi @haz, that sounds like a good concept to me. It would be useful, though, to capture inputs (printing to
Yep, both listing the path taken and the seed to generate things again reliably. Will circle back here once I have something implemented...
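A rough sketch of what such a randomized check could look like (naive_applicable, match_tree_applicable and random_state are hypothetical stand-ins for the two applicability computations and a random state generator; the seed is printed so any failure can be replayed):

import random

def check_match_tree_agrees_with_naive(operators, naive_applicable,
                                       match_tree_applicable, random_state,
                                       n_trials=1000, seed=None):
    # Record the seed so a failing run can be reproduced exactly.
    seed = seed if seed is not None else random.randrange(2 ** 32)
    print(f"match-tree equivalence check, seed = {seed}")
    rng = random.Random(seed)
    for _ in range(n_trials):
        state = random_state(rng)
        expected = {op for op in operators if naive_applicable(state, op)}
        actual = set(match_tree_applicable(state))
        assert actual == expected, f"mismatch on state {state} (seed {seed})"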
Cheers @haz, this looks to me like a perfectly reasonable first pass on the concept of match trees. The most significant gains I can fathom re: performance would be to use namedtuple or dataclass instead of dictionaries for nodes, and to use frozenset or numpy arrays when possible.
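On the frozenset point, a small illustration (the names below are made up, not the PR's API): with preconditions and states stored as frozensets of ground fluents, applicability becomes a plain subset test, and the sets themselves are hashable, so they can be memoised or used as dictionary keys.

precondition = frozenset({'clear(a)', 'ontable(a)', 'handempty()'})
state = frozenset({'clear(a)', 'ontable(a)', 'handempty()',
                   'clear(b)', 'ontable(b)'})

# Applicability as a subset test; frozensets are hashable, so results
# keyed on (state, precondition) can be cached.
is_applicable = precondition <= state   # True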
return formula.subformulas

def _ignore_cost_metric(self, state):
    def _is_cost(f):
Hi @haz, a much more straightforward and efficient way to do this could be:

def _ignore_cost_metric(self, state):
    # This may throw an exception if there is no `total-cost` function
    # defined (e.g. this is a pre-2008 classical planning benchmark
    # without support for numeric/object fluents).
    total_cost = self.problem.language.get('total-cost')
    return [f for f in state.as_atoms() if f[0] != total_cost]
Instances of Function do have the __eq__ operator overloaded to do something sensible (check out src/tarski/syntax/function.py, line 50). The __hash__ operator is also implemented to do something useful, so you can safely use Python standard containers if you wish to.
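If the missing-function case matters, a guarded variant of the suggestion above might look like this (the exact exception raised by language.get() depends on the tarski version, so it is caught broadly here, purely as a sketch):

def _ignore_cost_metric(self, state):
    try:
        total_cost = self.problem.language.get('total-cost')
    except Exception:  # no `total-cost` declared, e.g. a pre-2008 benchmark
        return list(state.as_atoms())
    return [f for f in state.as_atoms() if f[0] != total_cost]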
fluents |= set(self._extract_atoms(self.problem.goal))
fluents |= set(self._ignore_cost_metric(self.problem.init))

return fluents
Once you've got the set of fluents, wouldn't it be better to return a frozenset rather than a set? It may have a significant performance impact further down the road.
def check_strips(self):
    """Confirms if this operator is a STRIPS operator"""
    def _strips_condition(c):
This looks to me like a valuable function to have in the fstrips.representation module.
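For illustration, such a standalone helper might look roughly like this (a sketch, not the PR's implementation, assuming tarski's Atom, CompoundFormula, Connective, Tautology, AddEffect and DelEffect classes):

from tarski.syntax import Atom, CompoundFormula, Connective, Tautology
from tarski.fstrips import AddEffect, DelEffect

def is_strips_action(action):
    """True iff the action has a conjunctive, positive precondition and
    only unconditional add/delete effects."""
    def conjunction_of_atoms(phi):
        if isinstance(phi, (Atom, Tautology)):
            return True
        return (isinstance(phi, CompoundFormula)
                and phi.connective == Connective.And
                and all(isinstance(sub, Atom) for sub in phi.subformulas))

    if not conjunction_of_atoms(action.precondition):
        return False
    return all(isinstance(eff, (AddEffect, DelEffect))
               and isinstance(eff.condition, Tautology)
               for eff in action.effects)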
# Finds the best fluent based on how well it splits the remaining operators
def _best_fluent(ops, decisions):
    best_fluent = None
    best_score = 999999
Not that we will see any time soon planners capable of handling operators with a million fluents in the precondition, but we could use float('inf') rather than 999999 for clarity.
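In other words, something along these lines (illustrative signature, with a hypothetical score function):

def _best_fluent(ops, decisions, candidate_fluents, score):
    best_fluent, best_score = None, float('inf')
    for fluent in candidate_fluents:
        s = score(fluent, ops, decisions)  # e.g. operators left undistinguished
        if s < best_score:
            best_fluent, best_score = fluent, s
    return best_fluent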
# Recursively computes the match tree
def _match_tree(ops, decisions):

    node = {
It is a shame that we cannot use data classes due to compatibility with 3.6, but perhaps using collections.namedtuple will allow you to simplify the code a bit (avoiding the verbose and error-prone dictionary['attribute name'] idiom below).
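A tiny sketch of the namedtuple alternative (field names here are illustrative, not the PR's):

from collections import namedtuple

MatchTreeNode = namedtuple('MatchTreeNode',
                           ['fluent', 'true_child', 'false_child', 'ops'])

node = MatchTreeNode(fluent=None, true_child=None, false_child=None, ops=[])
# node.fluent replaces node['fluent']; a mistyped field name fails at
# construction time, and the nodes are immutable and cheaper than dicts.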
Closes #117
Related to discussion @ #112