dask-sql comes with a small test implementation of a SQL server.
Instead of rebuilding a full ODBC driver, we re-use the presto wire protocol.
It is - so far - only a starting point of the development and is missing important concepts, such as authentication.
You can test the presto SQL server by running (after installation):

dask-sql-server
or by running these lines of code
from dask_sql import run_server

run_server()
or directly with a created context, on which run_server is also available as a method:
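from dask_sql import Context

c = Context()
c.run_server()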
or by using the created docker image
docker run --rm -it -p 8080:8080 nbraun/dask-sql
This will spin up a server on port 8080 (by default).
The port and bind interface can be controlled with the --port and --host command line arguments (or with the corresponding options to run_server).
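For example, to bind to the loopback interface on a non-default port (the values here are illustrative, and this assumes run_server accepts host and port keyword arguments matching the CLI flags):

dask-sql-server --host 127.0.0.1 --port 9090

or equivalently from python:

from dask_sql import run_server

run_server(host="127.0.0.1", port=9090)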
To any presto client, the running server looks like a normal presto database and can therefore be used with any presto-compatible library, e.g. the presto CLI client or sqlalchemy via the PyHive package:
presto --server localhost:8080
Now you can fire simple SQL queries (as no data is loaded by default):
=> SELECT 1 + 1;
 EXPR$0
--------
      2
(1 row)
sqlalchemy (after having installed PyHive):
from sqlalchemy.engine import create_engine
import pandas as pd

engine = create_engine('presto://localhost:8080/')

pd.read_sql_query("SELECT 1 + 1", con=engine)
Of course, it is also possible to load data first by calling the usual CREATE TABLE commands.
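For example, loading a CSV file via SQL could look similar to the following sketch (this assumes dask-sql's CREATE TABLE ... WITH syntax; the path is illustrative):

CREATE TABLE my_table WITH (
    location = "/some/path/to/data.csv",
    format = "csv"
)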
Preregister your own data sources¶
The python function run_server() accepts an already created Context.
This means you can preload your data sources and register them with a context before starting your server.
This way, your server already has data to query:
from dask_sql import Context

c = Context()
c.create_table(...)

# Then spin up the ``dask-sql`` server
from dask_sql import run_server

run_server(context=c)
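As a concrete sketch of this pattern, any dask dataframe can be registered before startup (the table name "my_table" and the demo timeseries data are illustrative):

import dask.datasets
from dask_sql import Context, run_server

c = Context()

# Register a demo dask dataframe under the (illustrative) name "my_table"
df = dask.datasets.timeseries()
c.create_table("my_table", df)

# The server now starts with "my_table" already queryable
run_server(context=c)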
Run it in your own dask cluster¶
The SQL server implementation in dask-sql allows you to run a SQL server as a service connected to your dask cluster. This enables your users to run SQL commands leveraging the full power of your dask cluster without the need to write python code, and it also allows the use of non-python tools (such as BI tools), as long as they can speak the presto protocol.
To run a standalone SQL server in your dask cluster, follow these three steps:

Create a startup script to connect dask-sql to your cluster. There exist many different ways to connect to a dask cluster (e.g. direct access to the scheduler, dask gateway, …). Choose the one suitable for your cluster and create a small startup script:
# Connect to your cluster here, e.g.
from dask.distributed import Client

client = Client(scheduler_address)
...

# Then spin up the ``dask-sql`` server
from dask_sql import run_server

run_server(client=client)
Deploy this script to your cluster as a service. How you do this depends on your cluster infrastructure (kubernetes, mesos, openshift, …). For example, you could create a docker image with a Dockerfile similar to this:
FROM nbraun/dask-sql

COPY startup_script.py /opt/dask_sql/startup_script.py

ENTRYPOINT [ "/opt/conda/bin/python", "/opt/dask_sql/startup_script.py" ]
After your service is deployed, you can use it in your applications as a “normal” presto database.
The dask-sql SQL server was successfully tested with Apache Hue and Apache Superset.
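For example, a BI tool that accepts SQLAlchemy connection URIs (as Apache Superset does) can be pointed at the service with a presto URI; the host name below is a hypothetical service address:

presto://my-dask-sql-service:8080/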
Running from a jupyter notebook¶
If you want to quickly bridge the gap between your jupyter notebook and a BI tool, you can run a temporary SQL server directly from your jupyter notebook.
# Create a Context and work with it
from dask_sql import Context

c = Context()
...

# Later create a temporary server
c.run_server(blocking=False)

# Continue working
This allows you to access the same context, with all its registered tables, both from the jupyter notebook and by connecting to the SQL server started on port 8080 (e.g. with your BI tool).
Once you are done with the SQL server, you can close it again with c.stop_server().
Please note that this feature should not be used for production SQL servers, but only for quick analyses via an external application.