stock-supply-forecast module for Tryton application server.
stock-supply-production module for Tryton application server.
timesheet module for Tryton application server.
timesheet-cost module for Tryton application server.
web-user module for Tryton application server.
webdav module for Tryton application server.
bcolz provides columnar and compressed data containers. Columnar storage allows for efficient querying of tables with a large number of columns. It also allows for cheap addition and removal of columns. In addition, bcolz objects are compressed by default to reduce memory/disk I/O needs. The compression is carried out internally by Blosc, a high-performance compressor that is optimized ...
Blaze translates a subset of modified NumPy and Pandas-like syntax to databases and other computing systems. Blaze gives Python users a familiar interface to query data living in other data storage systems. Example: we point Blaze to a simple dataset in a foreign database (PostgreSQL). Instantly we see results as we would see them in a Pandas ...
In bqplot, every single attribute of the plot is an interactive widget. This allows the user to integrate any plot with IPython widgets to create a complex and feature-rich GUI from just a few simple lines of Python code. Goals: provide a unified framework for 2-D visualizations with a Pythonic API, and provide a sensible API for adding user interactions (panning, zooming, selection, etc.). Two APIs are provided. Users can build custom visualizations using the internal object model, which is inspired by the constructs of the Grammar of Graphics (figure, marks, axes, scales), and enrich their visualizations with the Interaction Layer. Or they can use the context-based API, similar to Matplotlib's pyplot, which provides sensible default choices for most parameters.
Dask is a flexible parallel computing library for analytic computing. Dask is composed of two components: dynamic task scheduling optimized for computation (similar to Airflow, Luigi, Celery, or Make, but optimized for interactive computational workloads), and "Big Data" collections like parallel arrays, dataframes, and lists that extend common interfaces like NumPy, Pandas, or Python iterators to larger-than-memory or distributed environments. These parallel collections run on top of the dynamic task schedulers. Dask emphasizes the following virtues: familiar (provides parallelized NumPy array and Pandas DataFrame objects); flexible (provides a task-scheduling interface for more custom workloads and integration with other projects); native (enables distributed computing in pure Python with access to the PyData stack); fast (operates with low overhead, low latency, and the minimal serialization necessary for fast numerical algorithms); scales up (runs resiliently on clusters with thousands of cores); scales down (trivial to set up and run on a laptop in a single process); responsive (designed with interactive computing in mind, it provides rapid feedback and diagnostics to aid humans).
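The array collection described above can be sketched in a few lines (assuming dask is installed; the shapes and chunk sizes are arbitrary examples):

```python
import dask.array as da

# A 10000x10000 array of ones, split into 1000x1000 chunks.
# No memory is allocated for the full array and nothing runs yet.
x = da.ones((10000, 10000), chunks=(1000, 1000))

# Familiar NumPy syntax builds a task graph lazily.
y = (x + x.T).sum()

# compute() hands the graph to the scheduler, which executes the
# chunk-level tasks in parallel and combines the partial results.
print(y.compute())  # 200000000.0
```

The same code runs unchanged on a laptop with the default threaded scheduler or on a cluster via the distributed scheduler, which is what "scales up" and "scales down" refer to.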