rquery is a data wrangling system designed to express complex data manipulation as a series of simple data transforms. This is in the spirit of
dplyr::mutate() and uses a pipe in the style popularized in
magrittr. The operators themselves follow the selections in Codd’s relational algebra, with the addition of the traditional
SQL “window functions.” More on the background and context of
rquery can be found here.
In transform formulations data manipulation is written as transformations that produce new
data.frames, instead of as alterations of a primary data structure (as is the case with
data.table). Transform systems can use more space and time than in-place methods; however, in our opinion, transform systems have a number of pedagogical advantages.
In rquery’s case the primary set of data operators is as follows:

- drop_columns
- select_columns
- rename_columns
- select_rows
- order_rows
- extend
- project
- natural_join
- convert_records (supplied by the cdata package)
These operations break into a small number of themes:

- simple column operations
- simple row operations
- creating or replacing columns with calculated values
- aggregating or summarizing data
- combining tables
- record transformation
The point is: Codd worked out that a great number of data transformations can be decomposed into a small number of the above steps.
rquery supplies a high performance implementation of these methods that scales from in-memory data up through big data (to just about anything that supplies a sufficiently powerful
SQL interface, such as PostgreSQL, Apache Spark, or Google BigQuery).
We will work through simple examples/demonstrations of the
rquery data manipulation operators.
The simple column operations are as follows:

- drop_columns
- select_columns
- rename_columns
These operations are easy to demonstrate.
We set up some simple data.
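For concreteness, here is one small data.frame consistent with the outputs shown later (the x column is pinned down by the results below; the exact y and z values are illustrative assumptions):

d <- data.frame(
  x = c(1, 1, 2),  # matches the drop_columns() output shown below
  y = c(5, 4, 3),  # illustrative values
  z = c(6, 7, 8))  # illustrative values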
drop_columns creates a new
data.frame without certain columns. It works as follows.
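A minimal sketch of such a call (attaching rquery here, which triggers the wrapr loading message below; the choice to drop y and z is our assumption, made to match the shown output):

library("rquery")

drop_columns(d, c('y', 'z'))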
## Loading required package: wrapr
##   x
## 1 1
## 2 1
## 3 2
In all cases the first argument of a
rquery operator is either the data to be processed, or an earlier
rquery pipeline to be extended. We will talk about composing
rquery operations after we work through examples of all of the basic operations.
We can write the above in piped notation (using the
wrapr pipe in this case):
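A sketch of the same drop (again assuming we drop y and z):

d %.>%
  drop_columns(., c('y', 'z'))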
Notice the first argument is an explicit “dot” in
wrapr pipe notation.
select_columns’s action is also clear from an example:
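(The particular columns kept here are an illustrative choice.)

select_columns(d, c('x', 'y'))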
rename_columns is given as name-assignments of the form
'new_name' = 'old_name':
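(The new name x_new_name below is a made-up illustration.)

rename_columns(d, c('x_new_name' = 'x'))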
The simple row operations are:

- select_rows
- order_rows
select_rows keeps the set of rows that meet a given predicate expression.
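For example (the predicate shown is an illustrative choice):

select_rows(d, x == 1)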
Notes on how to use a variable to specify column names in
select_rows can be found here.
order_rows re-orders rows by a selection of column names (and allows reverse ordering by naming which columns to reverse in the optional
reverse argument). Multiple columns can be used in the ordering, with each later column breaking ties left by the earlier comparisons.
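A sketch (both the ordering columns and the reversed column are illustrative choices):

order_rows(d,
           c('y', 'x'),
           reverse = 'x')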
rquery operations do not depend on row-order and are not guaranteed to preserve row-order, so if you do want to order rows you should make it the last step of your pipeline.
The important create or replace column operation is:
extend accepts arbitrary expressions to create new columns (or replace existing ones). For example:
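(Here the expression and the new column name v are illustrative.)

extend(d, v := y + 1)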
We can use
:= for column assignment. In these examples we will use
:= to keep column assignment clearly distinguishable from argument binding.
extend allows for very powerful per-group operations akin to what
SQL calls “window functions”. When the optional
partitionby argument is set to a vector of column names then aggregate calculations can be performed per-group. For example:
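(The aggregates below are illustrative choices: max(y) is a per-partition aggregate, and cumsum(y) is a running sum computed in the orderby order.)

extend(d,
       max_y := max(y),            # per-partition aggregate
       cumulative_y := cumsum(y),  # running sum in the orderby order
       partitionby = 'x',
       orderby = 'y')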
Notice the aggregates were performed per-partition (a set of rows with matching partition key values, specified by
partitionby) and in the order determined by the
orderby argument (without the
orderby argument order is not guaranteed, so always set
orderby for windowed operations that depend on row order!).
The main aggregation method in
rquery is project. project performs per-group calculations, and returns only the grouping columns (specified by
groupby) and derived aggregates. For example:
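(The aggregate max(y) is an illustrative choice.)

project(d,
        max_y := max(y),
        groupby = 'x')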
Notice we only get one row for each unique combination of the grouping variables. We can also aggregate into a single row by not specifying any groupby columns:
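(Continuing the illustrative aggregate, now with no grouping columns:)

project(d,
        max_y := max(y))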
Notes on how to use a variable to specify column names in
project can be found here.
To combine multiple tables in
rquery one uses what we call the
natural_join operator. In the
natural_join, rows are matched by column keys and any two columns with the same name are coalesced (meaning the first table with a non-missing value supplies the answer). This is easiest to demonstrate with an example.
Let’s set up new example tables.
d_left <- data.frame(
  k = c('a', 'a', 'b'),
  x = c(1, NA, 3),
  y = c(1, NA, NA),
  stringsAsFactors = FALSE)

knitr::kable(d_left)
d_right <- data.frame(
  k = c('a', 'b', 'q'),
  y = c(10, 20, 30),
  stringsAsFactors = FALSE)

knitr::kable(d_right)
To perform a join we specify which set of columns form our row-matching condition (using the
by argument) and what type of join we want (using the
jointype argument). For example we can use
jointype = 'LEFT' to augment our
d_left table with additional values from d_right.
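A sketch of that left join, using the by and jointype arguments as described:

natural_join(d_left, d_right,
             by = 'k',
             jointype = 'LEFT')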
In a left-join (as above) if the right-table has unique keys then we get a table with the same structure as the left-table, but with more information per row. This is a very useful type of join in data science projects. Notice columns with matching names are coalesced into each other, which we interpret as “take the value from the left table, unless it is missing.”
Record transformation is “simple once you get it”. However, we suggest reading up on that as a separate topic here.
We could, of course, perform complicated data manipulation by sequencing
rquery operations. For example, to select one row with minimal
y (ties broken by z) per x-group, we could work in steps as follows.
. <- d
. <- extend(.,
            row_number := row_number(),
            partitionby = 'x',
            orderby = c('y', 'z'))
. <- select_rows(., row_number == 1)
. <- drop_columns(., "row_number")
knitr::kable(.)
The above discipline has the advantage that it is easy to debug, as we can run line by line and inspect intermediate values. We can even use the Bizarro pipe to make this look like a pipeline of operations.
d ->.;
  extend(.,
         row_number := row_number(),
         partitionby = 'x',
         orderby = c('y', 'z')) ->.;
  select_rows(., row_number == 1) ->.;
  drop_columns(., "row_number") ->.;
  knitr::kable(.)
We can also write the same sequence with the wrapr pipe:

d %.>%
  extend(.,
         row_number := row_number(),
         partitionby = 'x',
         orderby = c('y', 'z')) %.>%
  select_rows(., row_number == 1) %.>%
  drop_columns(., "row_number") %.>%
  knitr::kable(.)
rquery operators can also act on
rquery pipelines instead of acting on data. We can write our operations as follows:
ops <- local_td(d) %.>%
  extend(.,
         row_number := row_number(),
         partitionby = 'x',
         orderby = c('y', 'z')) %.>%
  select_rows(., row_number == 1) %.>%
  drop_columns(., "row_number")

cat(format(ops))
## mk_td("d", c( ## "x", ## "y", ## "z")) %.>% ## extend(., ## row_number := row_number(), ## partitionby = c('x'), ## orderby = c('y', 'z'), ## reverse = c()) %.>% ## select_rows(., ## row_number == 1) %.>% ## drop_columns(., ## c('row_number'))
And we can re-use this pipeline, both on local data and to generate
SQL to be run in remote databases. Applying this operator pipeline to our
d is performed as follows.
d %.>% ops %.>% knitr::kable(.)
And for SQL we have the following.
raw_connection <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
RSQLite::initExtension(raw_connection)

db <- rquery_db_info(
  connection = raw_connection,
  is_dbi = TRUE,
  connection_options = rq_connection_tests(raw_connection))

cat(to_sql(ops, db))
## SELECT
##  `x`,
##  `y`,
##  `z`
## FROM (
##  SELECT * FROM (
##   SELECT
##    `x`,
##    `y`,
##    `z`,
##    row_number ( ) OVER ( PARTITION BY `x` ORDER BY `y`, `z` ) AS `row_number`
##   FROM (
##    SELECT
##     `x`,
##     `y`,
##     `z`
##    FROM
##     `d`
##   ) tsql_29495934854436339343_0000000000
##  ) tsql_29495934854436339343_0000000001
##  WHERE `row_number` = 1
## ) tsql_29495934854436339343_0000000002
# clean up
DBI::dbDisconnect(raw_connection)
For more SQL examples, please see here.
What we are trying to illustrate above: there is a continuum of notations possible between:

- working over values with explicit intermediate variables,
- working over values with a pipeline, and
- working over operators with a pipeline.
rquery supplies a very teachable grammar of data manipulation based on Codd’s relational algebra and experience with pipelined data transforms (such as
base::transform(), dplyr, and data.table).
For in-memory situations
rquery uses
data.table as the implementation provider (through the small adapter package
rqdatatable) and is routinely faster than any other
R data manipulation system except
data.table itself.
For bigger than memory situations
rquery can translate to any sufficiently powerful
SQL dialect, allowing
rquery pipelines to be executed on PostgreSQL, Apache Spark, or Google BigQuery.
In addition the
data_algebra Python package supplies a nearly identical system for working with data in Python.

# Background
There are many prior relational algebra inspired specialized query languages. Just a few include:

- ISBL / Information system based language, ~1973
- IBM System R, ~1974
rquery is realized as a thin translation to an underlying
SQL provider. We are trying to put the Codd relational operators front and center (using the original naming, and back-porting
SQL progress such as window functions to the appropriate relational operator).
Some related work includes:
To install
rquery please try install.packages('rquery').
rquery is intended to work with “tame column names”, that is column names that are legitimate symbols in R and SQL.
A longer rquery introduction is available here.