AstraPy

A Pythonic client for DataStax Astra DB.

This README targets AstraPy version 1.0.0+, which introduces a whole new API. Click here for the pre-existing API (fully compatible with newer versions).

Quickstart

Install with pip install astrapy. Get the API Endpoint and the Token to your Astra DB instance at astra.datastax.com. Try the following code after replacing the connection parameters:

import astrapy

ASTRA_DB_APPLICATION_TOKEN = "AstraCS:..."
ASTRA_DB_API_ENDPOINT = "https://01234567-....apps.astra.datastax.com"

my_client = astrapy.DataAPIClient()
my_database = my_client.get_database(
    ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
my_collection = my_database.create_collection(
    "dreams",
    dimension=3,
    metric=astrapy.constants.VectorMetric.COSINE,
)
my_collection.insert_one({"summary": "I was flying", "$vector": [-0.4, 0.7, 0]})
my_collection.insert_many(
    [
        {
            "_id": astrapy.ids.UUID("018e65c9-e33d-749b-9386-e848739582f0"),
            # summary/$vector values below are illustrative; the original
            # excerpt truncates at this point
            "summary": "A dinner on the Moon",
            "$vector": [0.2, -0.3, -0.5],
        },
    ]
)
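The collection created above is vector-enabled, so it can also be queried by similarity. A minimal sketch of an ANN search with the AstraPy 1.0 API, assuming the find method's sort={"$vector": ...} syntax:

# run a vector-similarity (ANN) search over the collection
results = my_collection.find(
    sort={"$vector": [-0.4, 0.7, 0]},  # query vector, same dimension as the collection
    limit=2,
)
for doc in results:
    print(doc["summary"])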
Asyncer

Asyncer, async and await, focused on developer experience.

Documentation: https://asyncer.tiangolo.com
Source Code: https://github.com/fastapi/asyncer

Asyncer is a small library built on top of AnyIO. It provides a small number of utility functions that allow working with async, await, and concurrent code in a more convenient way under my (@tiangolo - Sebastián Ramírez) very opinionated and subjective point of view.

The main goal of Asyncer is to improve developer experience by providing better support for autocompletion and inline errors in the editor, and more certainty that the code is bug-free by providing better support for type checking tools like mypy.

Asyncer also tries to improve convenience and simplicity when working with async code mixed with regular blocking code, allowing them to be used together in a simpler way... again, under my very subjective point of view.

🚨 Warning

This small library only exists to be able to use these utility functions until (and if) they are integrated into AnyIO.
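As an illustration of the convenience Asyncer aims for, here is a minimal sketch using its asyncify() helper, which runs a blocking function in a worker thread (do_sync_work is a made-up example function):

import time

import anyio
from asyncer import asyncify


def do_sync_work(name: str) -> str:
    # a regular blocking function that must not run on the event loop
    time.sleep(1)
    return f"Hello, {name}"


async def main() -> None:
    # asyncify() returns an awaitable wrapper that runs the blocking
    # call in a worker thread, preserving the original signature for
    # editors and type checkers
    message = await asyncify(do_sync_work)(name="World")
    print(message)


anyio.run(main)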
Beniget

A static analyzer for Python 2 and Python 3 code. Beniget provides a static over-approximation of the global and local definitions inside a Python Module/Class/Function. It can also compute def-use chains from each definition.
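A minimal sketch of computing def-use chains with beniget's DefUseChains visitor; note that beniget operates on gast trees rather than the standard ast module (an assumption based on beniget's documented usage):

import gast
import beniget

code = "x = 1\ndef foo():\n    return x\n"
tree = gast.parse(code)

# collect the module-level definitions and the places they are used
chains = beniget.DefUseChains()
chains.visit(tree)
for definition in chains.locals[tree]:
    print(definition.name(), "is used by", [user.name() for user in definition.users()])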
cassIO

A framework-agnostic Python library to seamlessly integrate Apache Cassandra with ML/LLM/genAI workloads.

Note: this is currently an alpha release.

Users

Installation is as simple as:

pip install cassio

For example usages and integration with higher-level LLM frameworks such as LangChain, please visit cassio.org.

CassIO developers

Setup

To develop cassio, we use poetry:

pip install poetry

Use poetry to install dependencies:

poetry install

Use cassio's current code in other Poetry-based projects

If the integration is Poetry-based (e.g. LangChain itself), you should have this in your pyproject.toml:

cassio = {path = "../../cassio", develop = true}

Then you do:

poetry remove cassio  # if necessary
poetry lock --no-update
poetry install -E all --with dev --with test_integration  # or similar, this is for langchain

Inspired by this. You also need a recent Poetry for this to work.

Versioning

We are still at 0.*. Occasional breaking changes are to be expected.
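For a sense of the user-facing API, a minimal sketch of initializing cassio against an Astra DB instance (the token and database ID values are placeholders):

import cassio

# set up the global connection that cassio's abstractions will use;
# token and database_id are placeholders for your Astra DB credentials
cassio.init(
    token="AstraCS:...",
    database_id="01234567-...",
)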
Click

Click is a Python package for creating beautiful command line interfaces in a composable way with as little code as necessary.
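A minimal sketch of a Click command, adapted from Click's well-known hello-world example:

import click


@click.command()
@click.option("--count", default=1, help="Number of greetings.")
@click.option("--name", prompt="Your name", help="The person to greet.")
def hello(count, name):
    """Greet NAME a total of COUNT times."""
    for _ in range(count):
        click.echo(f"Hello, {name}!")


if __name__ == "__main__":
    hello()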
ClickHouse Connect

A high-performance core database driver for connecting ClickHouse to Python, Pandas, and Superset:

* Pandas DataFrames
* Numpy Arrays
* PyArrow Tables
* Superset Connector
* SQLAlchemy 1.3 and 1.4 (limited feature set)

ClickHouse Connect currently uses the ClickHouse HTTP interface for maximum compatibility.

Installation

pip install clickhouse-connect

ClickHouse Connect...
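A minimal sketch of the driver's basic query API, assuming a ClickHouse server reachable on localhost (HTTP port 8123 by default):

import clickhouse_connect

# open an HTTP connection to a local ClickHouse server
client = clickhouse_connect.get_client(host="localhost")

# run a simple query and print the raw result rows
result = client.query("SELECT version()")
print(result.result_rows)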
cloudpickle makes it possible to serialize Python constructs not supported by the default pickle module from the Python standard library. cloudpickle is especially useful for cluster computing where Python expressions are shipped over the network to execute on remote hosts, possibly close to the data. Among other things, cloudpickle supports pickling for lambda expressions, functions and classes defined interactively in the __main__ module.
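To make the use case concrete, a minimal sketch of serializing an interactively defined lambda, which the stdlib pickle module rejects:

import pickle

import cloudpickle

# lambdas defined at the top level of __main__ cannot be pickled by the
# standard library, but cloudpickle serializes them by value
square = lambda x: x * x
payload = cloudpickle.dumps(square)

# the payload can be deserialized with plain pickle on the receiving end
restored = pickle.loads(payload)
print(restored(4))  # 16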
datasets

🤗 Datasets is a library providing:

* one-line dataloaders for the many public datasets provided on the HuggingFace Datasets Hub. With a simple command like squad_dataset = load_dataset("rajpurkar/squad"), get any of these datasets ready to use in a dataloader for training/evaluating a ML model (Numpy/Pandas/PyTorch/TensorFlow/JAX),
* efficient data pre-processing: simple, fast and reproducible data pre-processing for the public datasets as well as your own local datasets in CSV, JSON, text, PNG, JPEG, WAV, MP3, Parquet, etc. With simple commands like processed_dataset = dataset.map(process_example), efficiently prepare the dataset for inspection and ML model evaluation and training.

🎓 Documentation
🔎 Find a dataset in the Hub
🌟 Share a dataset on the Hub

🤗 Datasets is designed to let the community easily add and share new datasets.
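A minimal sketch of the two commands mentioned above, loading SQuAD and mapping a trivial pre-processing function over it:

from datasets import load_dataset

# one-line dataloader for the SQuAD dataset from the Hub
squad_dataset = load_dataset("rajpurkar/squad", split="train")

# reproducible pre-processing: lowercase every question
def process_example(example):
    return {"question": example["question"].lower()}

processed_dataset = squad_dataset.map(process_example)
print(processed_dataset[0]["question"])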
What is E2B?

E2B is an open-source infrastructure that allows you to run AI-generated code in secure isolated sandboxes in the cloud. To start and control sandboxes, use our JavaScript SDK or Python SDK.

Run your first Sandbox

1. Install SDK

pip install e2b-code-interpreter

2. Get your E2B API key

Sign up to E2B here. Get your API key here. Set the environment variable with your API key:

E2B_API_KEY=e2b_***

3. Execute code with code interpreter inside Sandbox

from e2b_code_interpreter import Sandbox

with Sandbox() as sandbox:
    sandbox.run_code("x = 1")
    execution = sandbox.run_code("x+=1; x")
    print(execution.text)  # outputs 2

4. Check docs

Visit E2B documentation.

5. E2B cookbook

Visit our Cookbook to get inspired by examples with different LLMs and AI frameworks.