(Edit) "Bug 1452376 - Replace GetDescendantFolders with a recursive subquery" https://hg.mozilla.org/integration/autoland/rev/827cc04dacce
"Recursive Queries Using Common Table Expressions" https://gist.github.com/jbrown123/b65004fd4e8327748b650c7738...
https://en.m.wikipedia.org/wiki/Nested_set_model
https://imrannazar.com/articles/modified-preorder-tree-trave...
SQLite is the default option on Android, and it's pretty common to have time-series sensor data that needs to be captured, stored, and analyzed...
But SQLite isn't really meant for a time-series workload.
There is also DuckDB, but I'm not sure about the status of the Android bindings.
Depending on the recursion pattern and the overhead of your SQLite driver, it can be faster to do many ID lookups than to try to cram it all into one mega CTE query.
https://www.sqlite.org/np1queryprob.html
Source: we have the CTE for loading page data in the Notion Android app, and the network regularly beats disk on lower-end Android devices, whichever query we pick.
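To make the shape of the tradeoff concrete, here's a rough sketch with made-up table and column names (not our actual query): option A is one recursive CTE per load, option B is the "N+1" pattern driven from application code.

-- Option A: one recursive CTE that pulls a whole subtree in a single round trip.
WITH RECURSIVE subtree(id) AS (
  SELECT id FROM blocks WHERE id = :root_id
  UNION ALL
  SELECT b.id FROM blocks b JOIN subtree s ON b.parent_id = s.id
)
SELECT * FROM blocks WHERE id IN (SELECT id FROM subtree);

-- Option B: the "N+1" shape -- the app loops, running this once per already-fetched
-- parent and feeding the returned ids back in as the next batch of parents.
SELECT * FROM blocks WHERE parent_id = :parent_id;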
On their own, I'd guess likely not a lot. If a view would help, a CTE will help similarly without needing the external structure; if a correlated subquery would help, then yes, similarly, especially if the pattern is repeated in the overall query.
In conjunction with other things (good indexing, materialised sequences ("numbers" tables), etc.), I'd guess yes.
Though you need to be much more specific about your data and the queries in question before I can do better than these vague guesses.
Then when you use "WITH RECURSIVE", the queries can now refer back to themselves. This is just a for loop over a queue of SQL results, conceptually. The part before the UNION ALL fills the queue to start, and the part after the UNION ALL runs once for each result in the queue, adding all results back into the queue.
If you understand this, then you can understand the Sudoku or Mandelbrot examples (definitely don't start with trying to understand these two though). For example, the Sudoku example contains one recursive query, "x(s, ind)". As explained on the page, "s" is the puzzle (unsolved spaces shown as ".") and "ind" is the index of the first unsolved space (or 0 for a solved puzzle). It creates an unsolved puzzle in the initial setup. The for loop body finds all valid values for the first unsolved space, and the next index to solve; then puts all these results into the queue. The final (non-recursive) SELECT in the CTE looks over all results in the queue, and returns the ones where the index is 0 (the solved puzzles).
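A minimal sketch of that queue picture, with a toy counter instead of the Sudoku query:

WITH RECURSIVE counter(n) AS (
  SELECT 1                                -- before UNION ALL: put n = 1 into the queue
  UNION ALL
  SELECT n + 1 FROM counter WHERE n < 5   -- after UNION ALL: runs once per queued row, pushing n + 1 back on
)
SELECT n FROM counter;                    -- returns 1, 2, 3, 4, 5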
Learn to think in terms of the relational algebra, and how to translate that to/from SQL, and it starts making sense.
I was implementing Newton-Raphson in SQL a few years ago (so as not to implement it as a straight loop), and iterating over the problem several times really helped.
Get a query. Now think of the next step in the iteration, and you're basically writing a query that connects to that. And now, it runs again and again based on the previous criteria.
If you can isolate each part of the query for each logical step you're going to have a much simpler problem to mentally solve.
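For a flavour of what that looks like (a rough sketch for sqrt(2), not the actual query I wrote back then): each step refines the previous guess with x := (x + 2/x) / 2.

WITH RECURSIVE newton(step, x) AS (
  SELECT 0, 1.0                            -- initial guess
  UNION ALL
  SELECT step + 1, (x + 2.0 / x) / 2.0     -- one Newton-Raphson refinement
  FROM newton
  WHERE step < 6                           -- a handful of iterations is plenty here
)
SELECT x FROM newton ORDER BY step DESC LIMIT 1;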
Please have a look at the formal definition of regular expressions at https://en.wikipedia.org/wiki/Regular_expression#Formal_defi... on Wikipedia, and let me know where the stack and the iterations are. I can't find them.
https://en.wikipedia.org/wiki/Recursive_definition#Well_form... is also a good example.
Regular expressions are a particularly interesting example because you brought up 'loops', so I'm assuming you are interested in how you can implement some of these recursive definitions on a computer.
So for regular expressions a common technique is to compile them to a finite state machine. You can model that in your computer as one block of, e.g., assembly instructions per machine state. To move to a different state, you just 'jmp' to the corresponding code's address.
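Not assembly, but since this thread is about SQL anyway: you can sketch the same state-machine idea as a transition table driven by a recursive CTE. This toy DFA for a*b (states: 0 start, 1 accept, 2 dead) and the input 'aaab' are entirely made up for illustration.

WITH RECURSIVE
  dfa(state, ch, next) AS (
    VALUES (0,'a',0), (0,'b',1), (1,'a',2), (1,'b',2), (2,'a',2), (2,'b',2)
  ),
  run(pos, state) AS (
    SELECT 0, 0                            -- before the first character, in the start state
    UNION ALL
    SELECT run.pos + 1, dfa.next           -- consume one character, "jmp" to the next state
    FROM run
    JOIN dfa ON dfa.state = run.state
            AND dfa.ch = substr('aaab', run.pos + 1, 1)
    WHERE run.pos < length('aaab')
  )
SELECT state = 1 AS accepted FROM run WHERE pos = length('aaab');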
That's pretty fun to work out, but I still don't see anything I would describe as a 'loop' here; but the recursive analysis still makes perfect sense.
Yes, some programming languages have special purpose looping constructs. But they are of limited usefulness, and not particularly fundamental: they usually get compiled away.
In contrast, non-tail-recursive functions will make the call stack grow with each call.
1. Recursion and iteration are duals of each other. Anywhere a "recursive" CTE is a good tool for the job, there exists a natural for-loop or while-loop you would write in an ordinary programming language to solve the same job. Figure out what that looks like, and then you can translate it to SQL. Optimizing further often isn't necessary. The general form isn't terrible (ellipses hide the extra columns you'd have to manually include in most SQL dialects):
WITH RECURSIVE t(n, ...) AS (
  SELECT 1 AS n, * FROM base_case
  UNION ALL
  SELECT n+1, f(...) FROM t WHERE not_terminated(n, ...)
)
SELECT something_interesting(...) FROM t
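For instance (numbers and names made up), "keep compounding 5% until the balance doubles" drops straight into that skeleton:

-- while (balance < 2000) { balance *= 1.05; years += 1; }
WITH RECURSIVE growth(years, balance) AS (
  SELECT 0, 1000.0                      -- base case
  UNION ALL
  SELECT years + 1, balance * 1.05      -- f(...): one loop iteration
  FROM growth
  WHERE balance < 2000.0                -- not_terminated(...)
)
SELECT max(years) AS years_to_double FROM growth;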
1 (continued). You can explicitly encode for-loops and while-loops, doing all the normal sorts of things an ordinary programming language allows. SQL is just a different language representing all the things you already know how to do; don't make it complicated until performance becomes a problem.
2. There exists a concept of "equality saturation" that's exceptionally powerful in a number of domains. Everything in (1) is familiar to an ordinary working programmer, but the database's mental model of a recursive CTE is
while (not_done(working_set)) {
  working_set.extend(process(working_set));
}
2 (continued 0). One model of solving a sudoku puzzle is an iterative/recursive branch-and-bound algorithm (try a reasonable thing, expand all the easily learnable knowledge, exit if impossible or if done (recursing at each level), go to the next reasonable thing). One "equality saturation" version of that solution is (plus a stopping condition somewhere):
2a. For each partial potential solution
2b. For all the ways in which the potential solution is viable
2c. Expand into all those new potentialities
2 (continued 1). That ability to describe a monotonically increasing set of things which finitely approaches a boundary is powerful in math, powerful in search, powerful in optimization (the Rust egg crate (e-graphs good) has reasonable documentation for one particular concrete way in which that idea could manifest, if such concreteness helps you learn -- whether you know/like Rust or not), and so on. Gradient descent is just expanding your information till you don't have much more to learn. Optimizing a program is just expanding the set of optimization-based re-writes till you don't have any more re-writes (and pick the best one). Parsing a document is just adding to the things you know about it till no parsing rules can glean any more information. Since that thinking modality is how the database treats your query, you'll usually have better optimized queries if you can naturally formulate your problem in that language (as opposed to the naive iterative solution I proposed in (1)). Not all problems can be thus phrased, but if you're doing a lot of work in your database with recursive CTEs, it's a good idea to spend a week or three hammering the idea home.
3. Combining (1) and (2) a bit, your database will usually have a cost somewhere around O(all_the_rows_you_produce) when evaluating recursive CTEs. These tasks only get hard when performance is a problem and you have to figure out how to take the naive ideas from (1) and transform them into the expected model of (2) in a way that actually reduces unnecessary work. For the sudoku example, you can do that by adding a bit more internal state to transform the breadth-first-search I described into a depth-first-search (making the solution _more_ iterative, interestingly; the "natural" solution is very slow comparatively), but in general you might have to get very creative or might not actually have a reasonable solution available.
That's it. It's just a simple loop.
(Yes, recursion is looping. But in this case it's very clear that a stack is not needed, that the recursion is "tail recursion" if you know what that is.)
The hard part lies in getting the JOIN in the "recursive" side of the UNION query right. Here's a transitive closure query:
WITH RECURSIVE closure AS (
  SELECT parent, child FROM relationships
  UNION
  SELECT c.parent AS ancestor, r.child AS descendant
  FROM closure c
  JOIN relationships r ON c.child = r.parent
)
SELECT * FROM closure;
Here `SELECT parent, child FROM relationships` is the seed, and the rest (`SELECT c.parent, r.child ...`) is the recursive part of the query that gets repeated until the whole set of results from the seed and all the repetitions of the recursive part stops growing. The recursive part simply says to add to the `closure` all the children of children of tuples in the closure, but with the parents from the closure as the parents. So if this were human parent/child relationships then initially you might have your parents' relationships to you, and also yours to your children, and this will find that your parents are ancestors to your children.
EDIT: Removed a bunch of stuff that was partly based on a faulty memory of how UNION works vs UNION ALL. Indeed, UNION ALL means the RDBMS does not need to keep around the whole result set.
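And if it helps to see the query above run on tiny concrete data (names made up):

CREATE TABLE relationships (parent TEXT, child TEXT);
INSERT INTO relationships VALUES
  ('grandma', 'mum'),
  ('mum',     'you'),
  ('you',     'kid');

-- The closure query then returns the three seed rows plus
--   grandma|you, mum|kid, grandma|kid
-- i.e. your parents and grandparents come out as ancestors of your children.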
Every query response in SQL is a rectangular matrix [1]. JOINs add columns sideways. UNIONs add rows vertically [2].
[1] Which is why tree shaped data is awkward to query in SQL in one round-trip.
[2] From this you realize why the column names have to match to apply a UNION, and why recursion is something to do with a UNION.
Not using formulas in Excel but just using it as a rows/columns editor.
This makes it visually clearer.
As much as I enjoy the Mandelbrot set (I bought the Fractint book as a kid), has anyone done any outlandish but useful recursive queries?
PS: awesome explanation of how exactly the recursive query works. Wish I had read it when I first needed this in some other DB whose help docs did not have such a clear explanation. Tore out a lot of hair before I got it working right.
I made it into a neat little Django portal with configurable permissions, interactive charts for the data, filtering, etc.
It became a ~10-min async Celery task running in the background, replacing what previously took the company weeks: producing that report in an error-prone way with macros written by someone long gone / in another department. I'm still pretty proud of that app even though it never got implemented. I got promoted, moved to a different department, and I don't think it ever saw the light of day, but I do have the code lying around somewhere.
Here's the source: https://aprogrammerwrites.eu/?p=878
Cheers
I can't find a good reason not to left-align everything vs trying to right-align keywords and keep everything else on one line until the next keyword.
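For anyone who hasn't seen the two styles side by side (same made-up query both times):

-- Everything left-aligned:
SELECT p.id, p.title
FROM pages p
JOIN authors a ON a.id = p.author_id
WHERE a.active = 1;

-- Keywords right-aligned into a "river", everything else staying on the keyword's line:
SELECT p.id, p.title
  FROM pages p
  JOIN authors a ON a.id = p.author_id
 WHERE a.active = 1;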
If you don't have the time to use one of those, or to build your own sort of thing on top of SQL (and you probably shouldn't), then you can instead define a bunch of VIEWs using recursive queries, GROUP BY, and aggregation functions to provide something simple for your users to query.
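Something along these lines, with invented names: a view that hides the recursive walk and the aggregation behind a plain table-shaped thing.

CREATE VIEW folder_totals AS
WITH RECURSIVE tree(root, id) AS (
  SELECT id, id FROM folders               -- every folder is the root of its own subtree
  UNION ALL
  SELECT t.root, f.id
  FROM folders f JOIN tree t ON f.parent_id = t.id
)
SELECT t.root AS folder_id,
       count(*) AS file_count,
       sum(fl.size) AS total_bytes
FROM tree t
JOIN files fl ON fl.folder_id = t.id
GROUP BY t.root;

-- Users then just run: SELECT * FROM folder_totals WHERE folder_id = 42;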
If you do want to build something more general purpose... If you begin by modeling your schema in something other than SQL DDLs (like with an AST) and then model FKs as bi-directional relationships (because they are), and if you add a way to group those relationships, and further provide a way to define a subset of like columns from all the tables in a graph defined by a group of those relationships, then suddenly you can also come up with a simple(ish) language for expressing [sub-]graphs that you want to fetch. And there's your ad-hoc query language for dealing with recursive graphs in your relational data.
Turns out both of those examples have been in that documentation for over a decade now! https://web.archive.org/web/20140331191105/http://sqlite.org...