Name Description Size
error.rs 8606
limited.rs This module defines two bespoke reverse DFA search routines (one for the lazy DFA and one for the fully compiled DFA). These routines differ from the usual ones by permitting the caller to specify a minimum starting position. That is, the search will begin at `input.end()` and will usually stop at `input.start()`, unless `min_start > input.start()`, in which case the search will stop at `min_start`. In other words, this lets you say, "no, the search must not extend past this point, even if it's within the bounds of the given `Input`." And if the search *does* want to go past that point, it stops and returns a "may be quadratic" error, which indicates that the caller should retry with some other technique.

These routines exist specifically to protect against quadratic behavior when employing the "reverse suffix" and "reverse inner" optimizations. Without the backstop these routines provide, it is possible for parts of the haystack to get re-scanned over and over again. The backstop not only prevents this, but tells you *when* it is happening so that you can change the strategy.

Why can't we just use the normal search routines? We could use them and simply set the start bound on the provided `Input` to our `min_start` position. The problem is that it's then impossible to distinguish between "no match because we reached the end of the input" and "determined there was no match well before the end of the input." The former case is what we care about with respect to quadratic behavior; the latter case is totally fine.

Why don't we modify the normal search routines to report the position at which the search stops? I considered this, and I still wonder if it is indeed the right thing to do. However, the straightforward way to do that would be to complicate the return type signature of almost every search routine in this crate, which I really do not want to do. It therefore might make more sense to provide a richer way for search routines to report metadata, but that was beyond my bandwidth to work on at the time of writing.

See the 'opt/reverse-inner' and 'opt/reverse-suffix' benchmarks in rebar for a real demonstration of how quadratic behavior is mitigated. 10602
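The `min_start` backstop described for limited.rs can be sketched with a toy stand-in. The real routines drive a DFA; here a naive reverse scan for a single byte is enough to show the mechanism. `MayBeQuadratic` and `rev_find_byte` are hypothetical names invented for this sketch, not this crate's API.

```rust
/// Hypothetical stand-in for the "may be quadratic" error described above.
#[derive(Debug, PartialEq)]
struct MayBeQuadratic;

/// Reverse-scan `haystack[start..end]` for `needle`, but refuse to scan
/// past `min_start` when `min_start > start`.
fn rev_find_byte(
    haystack: &[u8],
    start: usize,
    end: usize,
    min_start: usize,
    needle: u8,
) -> Result<Option<usize>, MayBeQuadratic> {
    let stop = min_start.max(start);
    for i in (stop..end).rev() {
        if haystack[i] == needle {
            return Ok(Some(i));
        }
    }
    if stop > start {
        // We stopped early because of `min_start`. Continuing could
        // re-scan bytes an earlier search already visited, so report the
        // condition and let the caller switch strategies.
        Err(MayBeQuadratic)
    } else {
        Ok(None)
    }
}
```

Note the three distinct outcomes: a match, a definitive non-match over the full range, and a "stopped at the backstop" error that only the caller can act on.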
literal.rs 3261
mod.rs Provides a regex matcher that composes several other regex matchers automatically. This module is home to a meta [`Regex`], which provides a convenient high level API for executing regular expressions in linear time.

# Comparison with the `regex` crate

A meta `Regex` is the implementation used directly by the `regex` crate. Indeed, the `regex` crate API is essentially just a light wrapper over a meta `Regex`. This means that if you need the full flexibility offered by this API, then you should be able to switch to using this API directly without any changes in match semantics or syntax. However, there are some API level differences:

* The `regex` crate API returns match objects that include references to the haystack itself, which in turn makes it easy to access the matching strings without having to slice the haystack yourself. In contrast, a meta `Regex` returns match objects that only have offsets in them.
* At time of writing, a meta `Regex` doesn't have some of the convenience routines that the `regex` crate has, such as replacements. Note though that [`Captures::interpolate_string`](crate::util::captures::Captures::interpolate_string) will handle the replacement string interpolation for you.
* A meta `Regex` supports the [`Input`](crate::Input) abstraction, which provides a way to configure a search in more ways than is supported by the `regex` crate. For example, [`Input::anchored`](crate::Input::anchored) can be used to run an anchored search, regardless of whether the pattern itself is anchored with a `^`.
* A meta `Regex` supports multi-pattern searching everywhere. Indeed, every [`Match`](crate::Match) returned by the search APIs includes a [`PatternID`](crate::PatternID) indicating which pattern matched. In the single pattern case, all matches correspond to [`PatternID::ZERO`](crate::PatternID::ZERO). In contrast, the `regex` crate has distinct `Regex` and `RegexSet` APIs. The former only supports a single pattern, while the latter supports multiple patterns but cannot report the offsets of a match.
* A meta `Regex` provides the explicit capability of bypassing its internal memory pool for automatically acquiring mutable scratch space required by its internal regex engines. Namely, a [`Cache`] can be explicitly provided to lower level routines such as [`Regex::search_with`]. 2714
regex.rs 141715
reverse_inner.rs A module dedicated to plucking inner literals out of a regex pattern, and then constructing a prefilter for them. We also include a regex pattern "prefix" that corresponds to the bits of the regex that need to match before the literals do. The reverse inner optimization then proceeds by looking for matches of the inner literal(s), and then doing a reverse search of the prefix from the start of the literal match to find the overall start position of the match.

The essential invariant we want to uphold here is that the literals we return reflect a set where *at least* one of them must match in order for the overall regex to match. We also need to maintain the invariant that the regex prefix returned corresponds to the entirety of the regex up until the literals we return.

This somewhat limits what we can do. That is, if we have a regex like `\w+(@!|%%)\w+`, then we can pluck the `{@!, %%}` out and build a prefilter from it. Then we just need to compile `\w+` in reverse. No fuss no muss. But if we have a regex like `\d+@!|\w+%%`, then we get kind of stymied. Technically, we could still extract `{@!, %%}`, and it is true that at least one of them must match. But then, what is our regex prefix? Again, in theory, that could be `\d+|\w+`, but that's not quite right, because `\d+` only matches when `@!` matches, and `\w+` only matches when `%%` matches.

All of that is technically possible to do, but it seemingly requires a lot of sophistication and machinery. Probably the way to tackle it is with some kind of formalism, approaching this problem more generally. For now, the code below basically just looks for a top-level concatenation. And if it can find one, it looks for literals in each of the direct child sub-expressions of that concatenation. If some good ones are found, we return those and a concatenation of the Hir expressions seen up to that point. 9918
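The reverse inner flow for the `\w+(@!|%%)\w+` example above can be illustrated with a toy sketch: scan for one of the inner literals, then run the "prefix" (`\w+`) in reverse from the literal's start to find the overall match start. All names here (`find_start`, `is_word`) are hypothetical simplifications invented for this sketch, not this crate's API, and the prefix check is a naive loop rather than a reverse-compiled regex.

```rust
/// Simplified ASCII-only `\w` test.
fn is_word(b: u8) -> bool {
    b.is_ascii_alphanumeric() || b == b'_'
}

/// Find the start of the first candidate match of `\w+(@!|%%)...`, if any.
fn find_start(haystack: &[u8]) -> Option<usize> {
    let lits: [&[u8]; 2] = [b"@!", b"%%"];
    let mut at = 0;
    while at < haystack.len() {
        // Step 1: look for the leftmost inner literal candidate.
        let (lit_at, lit_len) = lits
            .iter()
            .filter_map(|lit| {
                haystack[at..]
                    .windows(lit.len())
                    .position(|w| w == *lit)
                    .map(|i| (at + i, lit.len()))
            })
            .min()?;
        // Step 2: reverse search the prefix `\w+` from the literal start.
        let mut start = lit_at;
        while start > 0 && is_word(haystack[start - 1]) {
            start -= 1;
        }
        if start < lit_at {
            // `\w+` requires at least one word byte before the literal.
            return Some(start);
        }
        // No prefix match here; resume the literal scan past this candidate.
        at = lit_at + lit_len;
    }
    None
}
```

This also shows why the invariant matters: the literal set must be such that one of `@!` or `%%` *must* appear in any overall match, otherwise step 1 could skip real matches.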
stopat.rs This module defines two bespoke forward DFA search routines, one for the lazy DFA and one for the fully compiled DFA. These routines differ from the normal ones by reporting the position at which the search terminates when a match *isn't* found.

The position at which a search terminates is useful in contexts where the meta regex engine runs optimizations that could go quadratic if we aren't careful. Namely, a regex search *could* scan to the end of the haystack only to report a non-match. If the caller doesn't know that the search scanned to the end of the haystack, it might restart the search at the next literal candidate it finds and repeat the process. Providing the caller with the position at which the search stopped gives the caller a way to determine the point that subsequent scans should not pass.

This is principally used in the "reverse inner" optimization, which works like this:

1. Look for a match of an inner literal. Say, 'Z' in '\w+Z\d+'.
2. At the spot where 'Z' matches, do a reverse anchored search from there for '\w+'.
3. If the reverse search matches, it corresponds to the start position of a (possible) match. At this point, do a forward anchored search to find the end position. If an end position is found, then we have a match and we know its bounds.

If the forward anchored search in (3) searches the entire rest of the haystack but reports a non-match, then a naive implementation of the above will continue back at step 1 looking for more candidates. There might still be a match to be found! It's possible. But we already scanned the whole haystack. So if we keep repeating the process, then we might wind up taking quadratic time in the size of the haystack, which is not great.

So if the forward anchored search in (3) reports the position at which it stops, then we can detect whether quadratic behavior might be occurring in steps (1) and (2). For (1), it occurs if the literal candidate found occurs *before* the end of the previous search in (3), since that means we're now going to look for another match in a place where the forward search has already scanned. It is *correct* to do so, but our technique has become inefficient. For (2), quadratic behavior occurs similarly when the reverse search extends past the point where the previous forward search in (3) terminated. Indeed, to implement (2), we use the sibling 'limited' module to ensure our reverse scan doesn't go further than we want.

See the 'opt/reverse-inner' benchmarks in rebar for a real demonstration of how quadratic behavior is mitigated. 9263
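The "stop at" idea for stopat.rs can be sketched with a toy forward search. Instead of `\d+` from the example, the stand-in "regex" here is `\d+!`, chosen so that a failing search can scan a long way before giving up, which is exactly when the stop position matters. The names (`find_fwd_stopat`, `candidate_rescans`) are hypothetical, not this crate's API.

```rust
/// Toy anchored forward search for `\d+!` starting at `start`. On a match
/// it returns the (exclusive) end position; on a non-match it returns the
/// position where the scan stopped instead of a bare "no match".
fn find_fwd_stopat(haystack: &[u8], start: usize) -> Result<usize, usize> {
    let mut at = start;
    while at < haystack.len() && haystack[at].is_ascii_digit() {
        at += 1;
    }
    if at > start && at < haystack.len() && haystack[at] == b'!' {
        Ok(at + 1) // end of match
    } else {
        Err(at) // non-match; `at` is where the scan terminated
    }
}

/// The caller's quadratic-behavior check for step (1): a literal candidate
/// that sits before the previous forward search's stop position would force
/// a re-scan of bytes that search has already seen.
fn candidate_rescans(candidate: usize, last_stop: usize) -> bool {
    candidate < last_stop
}
```

With `\d+!` on `b"12345x"`, the search scans five digits before failing, and `Err(5)` tells the caller not to let the next candidate restart inside that already-scanned region.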
strategy.rs 75373
wrappers.rs This module contains a boat load of wrappers around each of our internal regex engines. They encapsulate a few things:

1. The wrappers manage the conditional existence of each regex engine. Namely, the PikeVM is the only required regex engine; the rest are optional. These wrappers present a uniform API regardless of which engines are available, and availability might be determined by compile time features or by dynamic configuration via `meta::Config`. Encapsulating the conditional compilation features is in particular a huge simplification for the higher level code that composes these engines.
2. The wrappers manage construction of each engine, including skipping it if the engine is unavailable or configured not to be used.
3. The wrappers manage whether an engine *can* be used for a particular search configuration. For example, `BoundedBacktracker::get` only returns a backtracking engine when the haystack is no bigger than the maximum supported length. The wrappers also sometimes take a position on when an engine *ought* to be used, but only in cases where the logic is extremely local to the engine itself. Otherwise, things like "choose between the backtracker and the one-pass DFA" are managed by the higher level meta strategy code.

There are also corresponding wrappers for the various `Cache` types for each regex engine that needs them. If an engine is unavailable or not used, then a cache for it will *not* actually be allocated. 44211
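The wrapper pattern described for wrappers.rs can be sketched as an `Option`-holding newtype: the engine may be absent (feature off or disabled by config), and `get` additionally refuses searches the engine cannot handle. The `Backtracker` type and the length limit of 16 are hypothetical stand-ins for this sketch, not this crate's actual types or limits.

```rust
/// Hypothetical inner engine with a maximum haystack length it supports.
struct Backtracker {
    max_haystack_len: usize,
}

/// Wrapper managing the engine's conditional existence.
struct BoundedBacktracker(Option<Backtracker>);

impl BoundedBacktracker {
    fn new(enabled: bool) -> BoundedBacktracker {
        // Construction is skipped entirely when the engine is unavailable,
        // so no memory is spent on an engine that will never run.
        BoundedBacktracker(
            enabled.then(|| Backtracker { max_haystack_len: 16 }),
        )
    }

    /// Return the engine only if it exists *and* can run this search.
    fn get(&self, haystack: &[u8]) -> Option<&Backtracker> {
        self.0
            .as_ref()
            .filter(|e| haystack.len() <= e.max_haystack_len)
    }
}
```

Callers then uniformly write `if let Some(engine) = wrapper.get(haystack)` and fall back to another engine otherwise, which is how the higher level strategy code stays free of per-engine availability logic.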