
WIP: support for `crosshair` backend

Open Zac-HD opened this issue 1 year ago • 8 comments

See https://github.com/HypothesisWorks/hypothesis/issues/3086 for design and previous discussion.

Ping @pschanely - fyi there are some interface changes from the previous draft, both in how you register the backend and in that we now expect a global context manager which yields a per-test-case-ctx function.

I'm inclined to require https://github.com/HypothesisWorks/hypothesis/pull/3801; it might slow us down a bit but I think will make converting the failing example to bytes practical, and from there it's a small step to database and shrinking support 😁
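The registration-plus-context-manager shape described above might look roughly like this. This is a minimal sketch; all names (`global_backend_context`, `per_test_case_ctx`, the `state` dict) are illustrative, not the actual Hypothesis interface:

```python
from contextlib import contextmanager

# Sketch: the backend supplies one global context manager whose entry
# yields a function; each call to that function returns a fresh
# per-test-case context manager, as described in the comment above.
@contextmanager
def global_backend_context():
    state = {"cases_run": 0}  # solver/session state shared across test cases

    def per_test_case_ctx():
        @contextmanager
        def one_case():
            state["cases_run"] += 1  # per-case setup happens here
            yield state
            # per-case teardown (e.g. realizing symbolic values) goes here
        return one_case()

    yield per_test_case_ctx
```

The engine would enter `global_backend_context()` once per run, then call the yielded function around every individual test-case execution.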

Zac-HD avatar Dec 03 '23 03:12 Zac-HD

Ping @pschanely - fyi there are some interface changes from the previous draft, both in how you register the backend and in that we now expect a global context manager which yields a per-test-case-ctx function.

No problem - this works; updated on my side.

I'm inclined to require #3801; it might slow us down a bit but I think will make converting the failing example to bytes practical, and from there it's a small step to database and shrinking support 😁

Sounds good. IIUC, I would then, at context manager exit, blast all of my realized primitives into the run's ConjectureData via the forced operations on a regular PrimitiveProvider. Is that right? We might need to be more surgical about what the context manager covers; I noticed last night that ConjectureRunner.test_function seems to want to operate on the buffer pretty immediately after executing the user code.

pschanely avatar Dec 05 '23 17:12 pschanely

IIUC, I would then, at context manager exit, blast all of my realized primitives into the run's conjecturedata via the forced operations on a regular PrimitiveProvider. Is that right?

I'm aiming to have a new attribute which is just the list of primitive values provided, so we'd want to materialize the contents of that list (working title self.primitives_prefix in this draft).

Converting to a buffer would, I think, force realization, so we'd want to delay that until the end of the test, which means we'd actually have to re-run the test... and at that point we might as well do the buffer construction only for failing examples, which we pass off to the shrinker or to replay of the minimal failure. If we currently realize earlier, we can probably make some changes to defer that; at worst, waiting for some more work on the original issue.

Zac-HD avatar Dec 05 '23 20:12 Zac-HD

@tybug - I've now exceeded my timebox for this, and won't be able to come back to it until mid-January at earliest. If you want to use it as a starting point, you'd be welcome to take it over 🙂

Zac-HD avatar Dec 11 '23 04:12 Zac-HD

@Zac-HD roger, thanks! I'll likely end up working on this after the datatree ir migration. (which is slower than I'd hoped, but I am chugging along.)

tybug avatar Dec 11 '23 04:12 tybug

Makes sense! Good chance this one will be easier once the datatree has native IR support too, I got a bit bogged down in adding special cases where an alternative backend means we have to ignore the usual semantics of an empty buffer.

Zac-HD avatar Dec 11 '23 06:12 Zac-HD

I'm taking a look at getting this working. I've got a branch with full shrinking + database playback support for PrngProvider (but not crosshair; see below): https://github.com/HypothesisWorks/hypothesis/compare/master...tybug:hypothesis:provider-plugins-2?expand=1.

I took a different approach from this branch, where primitives are forced to their buffer representation immediately when drawn. I tried forcing them only at the end in freeze, but I think there are places where the engine wants to access the buffer before a ConjectureData concludes, so I didn't get very far with that.

Even though "normal" backends work with this, I don't know whether this approach is OK for crosshair? It seems like "only convert to buffer at the end" may be a requirement for crosshair given the above conversation. I.e., is it the responsibility of hypothesis (convert at end) or crosshair (use per_test_case context manager) to respect the lifetime of the crosshair primitives? I should disclaim that I have no knowledge of crosshair internals and was only vaguely following the above conversation.

If only converting at the end is a requirement, it'll be a bit more work to change things in hypothesis to account for that (unsure how much yet.)

@pschanely, on a related note, I tried running the crosshair plugin with my above branch, but got:

from hypothesis import HealthCheck, given, settings
from hypothesis import strategies as st

@settings(backend="crosshair", suppress_health_check=HealthCheck.all())
@given(st.integers())
def test(value):
    print("called", value)
    assert value != 150

test()

Traceback (most recent call last):
  File "/Users/tybug/Desktop/Liam/coding/hypothesis/sandbox.py", line 20, in <module>
    test()
  File "/Users/tybug/Desktop/Liam/coding/hypothesis/sandbox.py", line 15, in test
    @given(st.integers())
                   ^^^
  File "/Users/tybug/Desktop/Liam/coding/hypothesis/hypothesis-python/src/hypothesis/core.py", line 1620, in wrapped_test
    raise the_error_hypothesis_found
  File "/Users/tybug/Desktop/Liam/coding/hypothesis/hypothesis-python/src/hypothesis/core.py", line 1587, in wrapped_test
    state.run_engine()
  File "/Users/tybug/Desktop/Liam/coding/hypothesis/hypothesis-python/src/hypothesis/core.py", line 1115, in run_engine
    runner.run()
  File "/Users/tybug/Desktop/Liam/coding/hypothesis/hypothesis-python/src/hypothesis/internal/conjecture/engine.py", line 500, in run
    self._run()
  File "/Users/tybug/Desktop/Liam/coding/hypothesis/hypothesis-python/src/hypothesis/internal/conjecture/engine.py", line 917, in _run
    self.generate_new_examples()
  File "/Users/tybug/Desktop/Liam/coding/hypothesis/hypothesis-python/src/hypothesis/internal/conjecture/engine.py", line 638, in generate_new_examples
    zero_data = self.cached_test_function(bytes(BUFFER_SIZE))
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tybug/Desktop/Liam/coding/hypothesis/hypothesis-python/src/hypothesis/internal/conjecture/engine.py", line 1089, in cached_test_function
    self.tree.simulate_test_function(dummy_data)
  File "/Users/tybug/Desktop/Liam/coding/hypothesis/hypothesis-python/src/hypothesis/internal/conjecture/datatree.py", line 727, in simulate_test_function
    v = draw(
        ^^^^^
  File "/Users/tybug/Desktop/Liam/coding/hypothesis/hypothesis-python/src/hypothesis/internal/conjecture/datatree.py", line 716, in draw
    value = draw_func(**kwargs, forced=forced)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tybug/Desktop/Liam/coding/hypothesis/hypothesis-python/src/hypothesis/internal/conjecture/data.py", line 1768, in draw_integer
    value = self.provider.draw_integer(**kwargs, forced=forced)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.12/site-packages/hypothesis_crosshair_provider/crosshair_provider.py", line 104, in draw_integer
    symbolic = proxy_for_type(int, self._next_name("int"), allow_subtypes=False)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.12/site-packages/crosshair/core.py", line 586, in proxy_for_type
    space = context_statespace()
            ^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.12/site-packages/crosshair/statespace.py", line 243, in context_statespace
    raise CrosshairInternal
  File "/opt/homebrew/lib/python3.12/site-packages/crosshair/util.py", line 594, in __init__
    debug("CrosshairInternal", str(self))
                               ^^^^^^^^^
RecursionError: maximum recursion depth exceeded while getting the str of an object

(Note: I believe the real error, CrosshairInternal, is hidden behind a RecursionError red herring.) Just curious whether this is expected or I've broken things in my provider-plugins version.

tybug avatar Feb 16 '24 03:02 tybug

Unfortunately yeah, for crosshair we have to convert only at the end. Converting while the test is in progress forces crosshair to choose a concrete value for that element, and thus give up the main benefit of concolic execution, earlier than would otherwise be the case.

I think we can get around this reasonably well with only one additional epicycle: add a .from_ir(seq) method to PrimitiveProvider where it records that sequence and replays from it in the same style as ConjectureData.from_buffer(buf). Then to replay, we add one additional execution where we materialize the crosshair choice sequence at the end, and finally push that through the convert-as-we-go logic in your branch.
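A toy sketch of that record-and-replay idea, in the same spirit as `ConjectureData.from_buffer(buf)`. The class and method names here are illustrative placeholders, not the actual Hypothesis API:

```python
import random

class ReplayableProvider:
    # Sketch of the proposed .from_ir(seq): record the primitive values
    # drawn during a run, and optionally replay a previously recorded
    # sequence instead of drawing fresh ones.
    def __init__(self, replay=None):
        self._replay = list(replay) if replay is not None else None
        self.recorded = []

    @classmethod
    def from_ir(cls, seq):
        return cls(replay=seq)

    def draw_integer(self, min_value, max_value):
        if self._replay is not None:
            value = self._replay.pop(0)  # replay the recorded choice
        else:
            value = random.randint(min_value, max_value)
        self.recorded.append(value)
        return value
```

A failing run's `recorded` sequence could then drive the extra replay execution described above, with realization deferred to the end.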

Happily, that'll go away again when we convert the shrinker and database format to IR-native. I guess the places where we try to access an unfinished buffer might turn out to be a problem then, but let's not borrow trouble - they might go away in the refactoring.

Zac-HD avatar Feb 16 '24 18:02 Zac-HD

Yup, Zac's on the ball; draw-time will be too early for CrossHair.

And @tybug thanks - it's not you. Looks like I already broke my plugin with mainline changes. I've got a suspicion about that crash; will be able to investigate this weekend and report back.

pschanely avatar Feb 16 '24 19:02 pschanely

@tybug I hooked up tybug/provider-plugins-2 to the latest versions of CrossHair and crosshair-hypothesis; and made some additional changes to both. (please pull latest)

Some of those updates will hopefully reduce some of the confusion around error reporting. (though expect this to be challenging in general - whenever either Hypothesis or Pytest tries to do some exception handling involving symbolics, things will generally go haywire)

Going forward, I'd call out some likely kinds of exceptions:

  • CrossHairInternal @ statespace.py:243 - if this happens, we're likely attempting to use a symbolic value outside the per-run context manager.
  • NotDeterministic - CrossHair expects that, on each run, symbolics will be accessed in the same manner. When they aren't, you'll see this exception (which will then likely trigger Hypothesis's similar Flaky error). An easy failure scenario here is when hypothesis caches something which causes the symbolics to be used or accessed differently on a subsequent iteration. In my recent edits, I've special-cased my plugin to ignore runs where the symbolics aren't accessed at all, which seems to be somewhat common, at least in the current implementation.

In particular, right now, I see the following immediate issues, though I expect they might just go away with the additional changes you're planning:

  • Hypothesis attempts a run with a zero data buffer here, which uses the crosshair provider, but I believe is not covered by the context manager.
  • NotDeterministic gets raised when the TreeRecordingObserver attempts to lookup the current symbolic in a dictionary, here.

At any rate, I'm happy to continue to diagnose issues as you encounter them. Don't be shy!

pschanely avatar Feb 20 '24 03:02 pschanely

Thanks @pschanely! Especially for the elaboration on the exceptions. Hypothesis's usage of DataTree is the cause of both of the current crosshair-hypothesis errors mentioned above, due to DataTree's fresh ConjectureData not using the context manager, and premature reification, respectively.

I'm thinking about how this pull will play with other backends going forward. Crosshair seems like roughly the most complicated backend we would ever want to support in terms of the requirements it places on how we interact with its provider. Crosshair values can't be reified until the end of a test run, so it seems we have to avoid using DataTree at all here, which tracks what values hypothesis has already tried in order to avoid duplication. In fact, crosshair may effectively reimplement DataTree with its NotDeterministic (flaky) and try-something-new (deduplication) logic.

But other backends may not have this restriction and would benefit from DataTree, lest they produce duplicate inputs. I'd think/hope this is the default mode of execution, and a backend can opt out of using DataTree? @Zac-HD do you have thoughts about how this opting-out might look from the backend's perspective? We could have a wants_datatree = True class-level attr on PrimitiveProvider which CrosshairProvider overrides to False.
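A sketch of how that opt-out attribute might look. `wants_datatree` is the working title from this discussion, and `should_use_datatree` is a hypothetical engine-side helper; the eventual names may differ:

```python
class PrimitiveProvider:
    # Proposed class-level flag: whether the engine should route this
    # provider's draws through DataTree for novelty tracking.
    wants_datatree = True

class CrosshairLikeProvider(PrimitiveProvider):
    # Symbolic values can't be reified at draw time, so a crosshair-style
    # backend would opt out and do its own deduplication.
    wants_datatree = False

def should_use_datatree(provider):
    # sketch of the check the engine would make before consulting DataTree
    return type(provider).wants_datatree
```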

I know the stated goal is to get the crosshair backend working, and I don't mean to create extra work here 🙂. I'm personally invested in this because I have ideas for other backends (TargetedProvider) which do benefit from the assurances provided by DataTree. And it's probably good to get at least the forward direction hashed out early, even if we defer some things to later.

tybug avatar Feb 21 '24 18:02 tybug

I've called out crosshair in the PR title, but mostly because I figured if we had that working everything else would work too! Specifically, I think we should work out a design that can support each of crosshair, atheris, hypofuzz, targeted-PBT, and of course hypothesis' own default provider.

It seems to me that the basic split is between backends which do their own tracking of what's happened - whether fuzzing or solving - and those which don't. Having an attribute on PrimitiveProvider which determines whether we use DataTree seems pretty reasonable to me; I think we can get away with a boolean and naming it by function (though no specific suggestions on that yet) rather than e.g. a black/grey/white-box enum or something. wants_datatree= is OK but maybe a bit too specific?

Seems like we're pretty close to something which kinda-works, though; if we thread through the context manager and manage to defer reification, I think that might actually work?

If you think of a subset that would make sense to ship sooner, I'd also be happy to release an explicitly-unstable feature with e.g. Atheris integration... although that involves arguing over who controls the main loop, so maybe not.

Zac-HD avatar Feb 21 '24 18:02 Zac-HD

I'm also hopeful we're almost there. I'll continue working here and try to get something which works for both crosshair and simpler backends. (thanks for your fixes above, by the way — merge gone wrong on my end).

tybug avatar Feb 21 '24 22:02 tybug

I pushed an approach of tracking the ir tree structure and only ramming it through to a buffer when it finds a counterexample, as discussed. It's so close, and works for PrngProvider, but I can't quite get it to play nicely with crosshair. I figure either of you may spot the issue faster than I could.

I'm at a spot where a counterexample has been found and I want to reify the crosshair values into a buffer (self.__stoppable_test_function(data)):

https://github.com/HypothesisWorks/hypothesis/blob/93d46369e64e2976d6631f62231abe89f0674023/hypothesis-python/src/hypothesis/internal/conjecture/engine.py#L308-L314

This raises CrosshairInternal. But I thought that self.__stoppable_test_function was always under the influence of the hacky context manager, via core.py. So I'm not sure if I'm mistaken, or there is more to this than 'ensure under context manager'.

Here's the function I'm using to test (database=None because I'm still figuring out database-saving related issues).

from hypothesis import given, settings
from hypothesis import strategies as st

@settings(backend="crosshair", database=None)
@given(st.integers())
def test(value):
    assert value < 92

test()

I hope you'll forgive the slight interim mess on this branch Zac; this code needs some cleanup before I'd ask for a proper review 🙂

tybug avatar Feb 23 '24 00:02 tybug

OK, sorry, my print is at fault above; that's triggering reification outside the context manager. There is an issue here somewhere, because I only added that print to debug the buffer being empty when it shouldn't be, but what I posted above isn't it.

tybug avatar Feb 23 '24 00:02 tybug

Here's the real problem: crosshair raises NotDeterministic when converting from ir to bits. The flow looks something like this:

  • we're testing @given(integers()) def f(n): assert n != 12345
  • crosshair provides a symbolic integer n
  • eventually (or maybe even on the first iteration?) crosshair reifies n as 12345 at n != 12345
  • we've found a bug! we hand back to hypothesis, which has tracked the symbolic n inside ir_tree
  • hypothesis pushes the ir tree through PrimitiveProvider to convert to bits...
  • ...but it has the symbolic n. When it tries to use it via forced=n, it hits a different constraint than n != 12345, because we're not passing it to the property anymore — we're using it to produce a bitstream value.
    • in practice, this is something like using n.bit_count() while converting to bits.

I think crosshair is correct to raise NotDeterministic here, but I think we also explicitly want to use n in a nondeterministic way? Some bad ideas: (1) crosshair walks the ir tree and replaces stored symbolics with reified values (when? also very fragile), (2) we tell crosshair when a test is finished and it disables NotDeterministic checking, (3) some secret third thing.

By the way, reproducing this requires some work, because hypothesis swallows the NotDeterministic exception and produces its own Flaky. Details in case it's useful.

  • set a breakpoint at self.__stoppable_test_function(data)
  • at the breakpoint, enable "raised exceptions"
  • resume debugging

it will break at the first usage of the symbolic int.


tybug avatar Feb 23 '24 01:02 tybug

I think the problem is that we shouldn't track symbolic values in our DataTree - if we just appended the symbolic values to a list, then asked crosshair to reify them at the end of the test, and then added them to the tree... would that work?

Zac-HD avatar Feb 23 '24 01:02 Zac-HD

what would "ask crosshair to reify them" look like? We basically are appending the symbolic values to a list by tracking them in ir_tree, and we basically are asking crosshair to reify them by passing them as forced=n. That reification just happens to take what I assume is the form of "give me a new value for this symbolic, and by the way, we're using it in a different context than past iterations."

If there was a magic function from crosshair we could call, where we give it a symbolic value and it returns the most-recently-reified (?) value for that symbolic, then I think that may work? But that's quite a tight coupling for what is supposedly a backend.

tybug avatar Feb 23 '24 01:02 tybug

I can provide something like that function; in fact I'd already drafted one here, thinking I might need it. I'm just not clear on a reasonable way to integrate.

I have no clue how difficult it would be, but could the maintenance of the DataTree become the responsibility of the provider? Then, in my plugin, I could remember my own values (without having to hash them into a dict) and then reify and inject them into the tree when my context manager is about to exit.

EDIT: I don't think I understand enough about how and why forced is used. (lol, which is one reason I shouldn't be in a position to be suggesting anything!) But, if it's essentially used as a tree-search optimization, perhaps my plugin should be completely ignoring that parameter instead of returning the forced value?

pschanely avatar Feb 23 '24 14:02 pschanely

Nice! I think if you could update data.export_value, I'll try hooking it up and seeing if it works at all, and then we can settle on how exactly to interop here. An end-of-test-case callback defined on Provider which takes "list of ir values we returned from our provider" (e.g. symbolic values) and returns "actual ir values" (int, str, ...) may be reasonable? I could imagine other providers having similar pseudo-concolic execution, with the same issue.
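The callback shape proposed here might look something like the following toy sketch. `SymbolicValue` is a stand-in for a backend's symbolic primitive, and `post_test_case` is a placeholder name (the thread later wires this up as `post_test_case_hook`, applied per value):

```python
class SymbolicValue:
    # stand-in for a concolic backend's symbolic primitive
    def __init__(self, concrete):
        self.concrete = concrete

class Provider:
    # Sketch of the end-of-test-case callback: map the possibly-symbolic
    # values the provider returned during the test case to concrete
    # ints/strs/etc. Default backends already return concrete values.
    def post_test_case(self, values):
        return values

class ConcolicProvider(Provider):
    def post_test_case(self, values):
        # "realize" each symbolic into the concrete value the solver chose
        return [v.concrete if isinstance(v, SymbolicValue) else v
                for v in values]
```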

I think making DataTree the responsibility of the provider would not work without some serious refactorings, which are probably too specific to crosshair-like providers to be worthwhile. When hypothesis asks DataTree for a novel input we haven't tried before, there's logic in DataTree like "when we're in a branch (we have seen n different values here before), sample until we hit either a new value, or a value where that branch has more to explore (not child.is_exhausted)".

As written this is a dict lookup and so requires reifying the value immediately when drawing. But this could probably be expressed as a set of constraints instead that are amenable to crosshair? Regardless, I think this approach being feasible is quite far down the line, if we take it at all.

For forced: as far as non-hypothesis providers are concerned, this is mostly a way for hypothesis to force a distribution on the provider. It's used for things like, in st.integers(a, b), force endpoint values to be drawn with slightly larger probability than otherwise. We could maybe even not give providers a choice here, and always force the appropriate value in ConjectureData, without passing forced= to the provider. (is there a provider which has a valid reason not to respect forced values?)

PrimitiveProvider (ab)uses forced for something quite different: taking a value v \in (int, bool, str, float, bytes) and turning it into the bits that would have generated that value. I don't think other providers need to worry about this use-case, though.
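The distribution-forcing use of forced= described above can be illustrated with a toy sketch (not Hypothesis's actual implementation; `engine_side_draw` and the 5% endpoint bias are made up for illustration):

```python
import random

def draw_integer(min_value, max_value, *, forced=None):
    # Provider side: when the engine passes forced=<value>, the provider
    # must return exactly that value; otherwise it may choose freely.
    if forced is not None:
        assert min_value <= forced <= max_value
        return forced
    return random.randint(min_value, max_value)

def engine_side_draw(min_value, max_value):
    # Engine side: occasionally force an endpoint, so that the bounds of
    # integers(a, b) are drawn with slightly higher probability.
    if random.random() < 0.05:
        return draw_integer(min_value, max_value,
                            forced=random.choice((min_value, max_value)))
    return draw_integer(min_value, max_value)
```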

tybug avatar Feb 23 '24 18:02 tybug

Understood. This all makes sense to me. If you are always going to call CrossHairPrimitiveProvider.export_value inside my context manager for a run, then I actually don't need to make any changes. (it already does a "deep" reification, so feel free to pass a list of symbolics)

Otherwise, I think I might need to remember my draws and reify them at context manager exit, but that's not hard to do - LMK!

pschanely avatar Feb 23 '24 22:02 pschanely

If you can do it from the context manager and that lets us avoid putting any reification logic in Hypothesis, I'd be keen on that option.

Zac-HD avatar Feb 23 '24 22:02 Zac-HD

It would definitely be easier to implement if export_value could be called outside of the context manager 😄. I think export_value is currently broken though, because it references self.get_contxt_manager, which no longer exists.

To Zac's point, I'm not sure there's a path here which doesn't involve some sort of "hey backend, convert these ir values" callback? If crosshair can do a magic deep_realize(returned_symbolics) and have anywhere we stored those symbolics be turned into values, that would of course be amazing! But given that export_value has a return deep_realize(value), my gut is this is not an in-place modification, and we will need some ir_values = provider.after_test_run(ir_symbolics) in hypothesis somewhere. But I would love to be mistaken here.

tybug avatar Feb 23 '24 22:02 tybug

Yup; it wasn't hard to make export_value work anywhere; try pulling latest. I also added (gasp!) a test, which you might want to review to understand whether we're imagining the same thing.

Agree that probably it makes sense to just get it to some kind of functional and then iterate on the interface.

pschanely avatar Feb 24 '24 13:02 pschanely

Thanks! This looks like what I was imagining as well, though the IgnoreAttempt comment is a bit worrying? I saw IgnoreAttempt when drawing bounded integers from crosshair as well. Hypothesis of course doesn't know how to deal with this error, so I hope it's something internal to crosshair that can be handled by the provider?

It seems like space.detach_path() throws on the second iteration (already generated one failing counterexample, trying to generate more) due to assert self._search_position.is_stem() -> NodeStem.is_stem -> return self.evolution is None failing. Am I misusing the interface or is this something that can be fixed crosshair-side?

tybug avatar Feb 24 '24 18:02 tybug

Woohoo! Self-tests all passing, with only coverage and some type-annotations failing 🤩

(already generated one failing counterexample, trying to generate more)

Probably we should just not try to generate more here? Here's the conditional where I'd add or settings.backend != "hypothesis".

Zac-HD avatar Feb 25 '24 00:02 Zac-HD

Thanks! This looks like what I was imagining as well, though the IgnoreAttempt comment is a bit worrying? I saw IgnoreAttempt when drawing bounded integers from crosshair as well. Hypothesis of course doesn't know how to deal with this error, so I hope it's something internal to crosshair that can be handled by the provider?

Yeah, in my previous edits, I added a catch for IgnoreAttempt (I only expect it to get raised inside the context manager). So I think this is fine. In that test case, the IgnoreAttempt exception is getting raised while trying to produce the symbolic bytes object, so it's fine that export_value won't be able to realize it.

It seems like space.detach_path() throws on the second iteration (already generated one failing counterexample, trying to generate more) due to assert self._search_position.is_stem() -> NodeStem.is_stem -> return self.evolution is None failing. Am I misusing the interface or is this something that can be fixed crosshair-side?

Interesting. I've got at least one theory here: crosshair cannot signal to hypothesis that it's exhausted all paths, and we might run into some strange behaviors if we continue to attempt to run. But there might be something else going on. Can I reproduce with some example and hypothesis branch? (I'd love to understand what's going on, even if you intend to stop running under the crosshair provider after the first counterexample)

pschanely avatar Feb 25 '24 01:02 pschanely

Probably we should just not try to generate more here?

I noticed you had a similar conditional in your initial implementation, but I'm hopeful we can get the full power of multiple-bug finding here...possibly the only thing standing in our way is the ability for crosshair to tell hypothesis when it's exhausted? I don't see any other blocker in principle.

I think in general, if a backend sets wants_datatree to false (better name for this var pending?), then we could also give a way for that backend to communicate that the tree is exhausted and hypothesis should stop generating. A new raise Exhausted exception could work.
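The proposed exhaustion signal might look like this minimal sketch. `Exhausted` is the name floated above, not an existing Hypothesis exception, and `generate_inputs`/`next_input` are hypothetical:

```python
class Exhausted(Exception):
    # Raised by a backend that has proven there are no unexplored
    # inputs left (e.g. a solver that has covered every path).
    pass

def generate_inputs(provider, budget):
    # Sketch of the engine-side loop: stop early once the backend
    # declares itself exhausted, instead of spinning until the budget
    # runs out.
    found = []
    for _ in range(budget):
        try:
            found.append(provider.next_input())
        except Exhausted:
            break
    return found
```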

@pschanely here's a reproducer, for this branch:

from hypothesis import settings, given
from hypothesis import strategies as st

@settings(backend="crosshair")
@given(st.integers())
def test(x):
    assert x != 12345

test()

stacktraces go haywire if you run normally, so the most reliable way to examine the error I've found is breakpointing on space.detach_path(), continuing playback the first time it breaks (where the error doesn't manifest) and then stepping the second time it breaks (which will raise the assertion error). Finding this error in the first place was a blind manual binary search 😅

tybug avatar Feb 25 '24 04:02 tybug

I think in general, if a backend sets wants_datatree to false (better name for this var pending?), then we could also give a way for that backend to communicate that the tree is exhausted and hypothesis should stop generating. A new raise Exhausted exception could work.

For the moment, I've coded around this on my side by essentially just restarting. I think that's fine for now. In practice, exhaustion almost never happens.

@pschanely here's a reproducer, for this branch: [...]

Ah! Got it; fixed in this commit. (I also hooked up post_test_case_hook to export_value, which I presume I was supposed to do!) Now, on first run, I get an error at assert self.__random is not None conjecture/data.py@2076 ... which I haven't attempted to debug yet. But it seems to save something to the database, because it works fine on the next run. A real counterexample; woo!

pschanely avatar Feb 25 '24 08:02 pschanely

(I also hooked up post_test_case_hook to export_value, which I presume I was supposed to do!)

Yup, my bad for not mentioning it.

Initial debugging of that error: data.provider.post_test_case_hook(node.value) returns None, because _PREVIOUS_REALIZED_DRAWS is None, because space.detach_path is I believe still erroring the second time around? Same reproducer as last time; set a breakpoint at space.detach_path(), resume once, then step the second time.


e: possibly the root cause here is NotDeterministic this time — unsure whether hypothesis or crosshair is at fault yet.

tybug avatar Feb 25 '24 20:02 tybug

e: possibly the root cause here is NotDeterministic this time — unsure whether hypothesis or crosshair is at fault yet.

Yes, I think you're right here. CrossHair uses stack traces to detect nondeterminism ... and we're triggering off some variance (in this case, this conditional) in the way hypothesis calls the function under test (which simply doesn't happen to be a problem when running CrossHair standalone). I think I'll probably just cap the trace at the point CrossHair's context manager starts - this is something I'll handle on my side.

pschanely avatar Feb 26 '24 07:02 pschanely