Question
Hypothesis allows two different ways to define derived strategies: @composite and flatmap. As far as I can tell the former can do anything the latter can do. However, the implementation of the numpy arrays strategy speaks of some hidden costs:
# We support passing strategies as arguments for convenience, or at least
# for legacy reasons, but don't want to pay the perf cost of a composite
# strategy (i.e. repeated argument handling and validation) when it's not
# needed. So we get the best of both worlds by recursing with flatmap,
# but only when it's actually needed.
which I assume means worse shrinking behavior, but I am not sure, and I could not find this documented anywhere else. So when should I use @composite, when flatmap, and when should I go this halfway route as in the implementation linked above?
Answer 1:
@composite and .flatmap are indeed exactly equivalent - anything you can do with one you can also do with the other, and it will have the same performance too.
I actually wrote that comment, and the reason is that we only sometimes want to use a flatmap/composite, but always want to carefully validate our logic. The way I've set it up, we can avoid calling the validators more than once by using .flatmap - which would require a second function definition if we wanted to use @composite.
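A rough sketch of that pattern, using a made-up sized_text() helper rather than the real arrays() internals:

from hypothesis import strategies as st
from hypothesis.errors import InvalidArgument

def sized_text(alphabet, size):
    # Validate the always-required arguments up front, once.
    if not isinstance(alphabet, str) or not alphabet:
        raise InvalidArgument("alphabet must be a non-empty string")
    # Rare case: size was passed as a strategy. Recurse via .flatmap so the
    # drawn value goes through this same function - the extra pass is only
    # paid when it's actually needed, and no second @composite-decorated
    # function has to exist.
    if isinstance(size, st.SearchStrategy):
        return size.flatmap(lambda n: sized_text(alphabet, n))
    # Common case: size is a plain int, so build the strategy directly.
    if not isinstance(size, int) or size < 0:
        raise InvalidArgument("size must be a non-negative int or a strategy")
    return st.text(alphabet=alphabet, min_size=size, max_size=size)

With @composite we would need a second, decorated function wrapping the same logic; recursing through the public function keeps it to one definition and keeps the common plain-value case cheap.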
(There's also an issue of API style, in that those arguments are almost always values but can sometimes be strategies. We now ban such APIs, based largely on the confusion arrays() has caused, in favor of letting users write their own .flatmaps.)
Source: https://stackoverflow.com/questions/59342856/composite-vs-flatmap-in-complex-strategies