48 RPM packages matching your search terms.
-
python3-langchain-google-community-2.0.3-1.lbn42.noarch.rpm
Sep 15, 2025 - langchain-google-community: This package contains the LangChain integrations for Google products that are not part of the langchain-google-vertexai or langchain-google-genai packages. Installation: pip install -U langchain-google-community -
python3-langchain-google-vertexai-2.0.7-1.lbn42.noarch.rpm
Sep 15, 2025 - langchain-google-vertexai: This package contains the LangChain integrations for Google Cloud generative models.
Installation: pip install -U langchain-google-vertexai
Chat Models: The ChatVertexAI class exposes models such as gemini-pro and chat-bison. To use it, you need a Google Cloud project with the APIs enabled and configured credentials. Initialize the model as:
from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model_name="gemini-pro")
llm.invoke("Sing a ballad of LangChain.")
You can use other models, e.g. chat-bison:
from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model_name="chat-bison", temperature=0.3)
llm.invoke("Sing a ballad of LangChain.")
Multimodal inputs: The Gemini vision model supports image inputs within a single chat message. Example:
from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model_name="gemini-pro-vision")
message = HumanMessage( c -
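The multimodal example above is cut off mid-construction. As a sketch of the payload shape only: LangChain chat messages accept a content list mixing text and image parts as plain dicts. The URL below is a placeholder, and this builds only the data structure, not a live model call.

```python
# Shape of a multimodal chat-message payload, as plain dicts.
# HumanMessage(content=...) accepts a list of such parts; the URL
# here is a placeholder, not a real image.
image_url = "https://example.com/picture.png"
message_content = [
    {"type": "text", "text": "What is shown in this image?"},
    {"type": "image_url", "image_url": {"url": image_url}},
]
print(message_content[0]["type"])  # → text
```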
python3-langchain-graph-retriever-0.6.1-1.lbn42.noarch.rpm
Sep 13, 2025 - LangChain Graph Retriever: a Python library that supports traversing a document graph on top of vector-based similarity search. It works seamlessly with LangChain's retriever framework and supports various graph traversal strategies for efficient document discovery.
Features:
- Vector Search: perform similarity searches using vector embeddings.
- Graph Traversal: apply traversal strategies such as breadth-first (Eager) or Maximal Marginal Relevance (MMR) to explore document relationships.
- Customizable Strategies: easily extend and configure traversal strategies to meet your specific use case.
- Multiple Adapters: support for various vector stores, including AstraDB, Cassandra, Chroma, OpenSearch, and in-memory storage.
- Synchronous and Asynchronous Retrieval: supports both sync and async workflows for flexibility in different applications.
Installation: pip install langchain-graph-retriever
Getting Started: Here is an example of how to -
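To make the breadth-first ("Eager") strategy concrete, here is a minimal stdlib-only sketch of the traversal idea: starting from an initial similarity-search hit, visit every linked document up to a depth bound. The graph, node names, and helper below are illustrative only, not the langchain-graph-retriever API.

```python
from collections import deque

# Toy document graph: node -> linked document ids. Hypothetical data;
# langchain-graph-retriever derives such links from document metadata
# on top of a vector store.
links = {
    "seed": ["a", "b"],
    "a": ["c"],
    "b": ["c", "d"],
    "c": [],
    "d": [],
}

def eager_traverse(start, links, max_depth=2):
    """Breadth-first ("Eager") traversal: collect every document
    reachable within max_depth hops of the starting hit."""
    visited, order = {start}, [start]
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # depth bound reached; do not expand further
        for nbr in links[node]:
            if nbr not in visited:
                visited.add(nbr)
                order.append(nbr)
                queue.append((nbr, depth + 1))
    return order

print(eager_traverse("seed", links))  # → ['seed', 'a', 'b', 'c', 'd']
```

The MMR strategy differs only in how the frontier is ranked (trading off relevance against redundancy) rather than in the mechanics of expansion.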
python3-langchain-groq-0.3.2-1.lbn42.noarch.rpm
Sep 13, 2025 - langchain-groq: Welcome to Groq! At Groq, we've developed the world's first Language Processing Unit™, or LPU. The Groq LPU has a deterministic, single-core streaming architecture that sets the standard for GenAI inference speed, with predictable and repeatable performance for any given workload. Beyond the architecture, our software is designed to empower developers like you with the tools you need to create innovative, powerful AI applications. With Groq as your engine, you can: achieve uncompromised low latency and performance for real-time AI and HPC inference; know the exact performance and compute time for any given workload; take advantage of our cutting-edge technology to stay ahead of the competition. Want more Groq? Check out our website for more resources and join our Discord community to connect with our developers! -
python3-langchain-huggingface-0.1.2-1.lbn42.noarch.rpm
Sep 14, 2025 - langchain-huggingface: This package contains the LangChain integrations for Hugging Face-related classes. Installation and Setup: install the LangChain partner package: pip install langchain-huggingface -
python3-langchain-ibm-0.3.18-1.lbn42.noarch.rpm
Sep 15, 2025 - langchain-ibm: This package provides the integration between LangChain and IBM watsonx.ai through the ibm-watsonx-ai SDK.
Installation: pip install langchain-ibm
Usage, Setting up: To use IBM's models, you must have an IBM Cloud user API key. Here's how to obtain and set up your API key. Obtain an API key: for details on how to create and manage an API key, refer to IBM's documentation. Set the API key as an environment variable: for security reasons, it's recommended not to hard-code your API key directly in your scripts. Instead, set it as an environment variable. You can use the following code to prompt for the API key and set it as an environment variable:
import os
from getpass import getpass
watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key
Alternatively, you can set the environment variable in your terminal. Linux/macOS: open your terminal and execute the following command: ex -
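The prompt-and-set snippet above blocks for interactive input every run. A small variant reads the variable from the environment first and only falls back to prompting when it is missing; the helper name below is ours, not part of langchain-ibm.

```python
import os

def ensure_api_key(var="WATSONX_APIKEY"):
    """Return the watsonx API key from the environment, prompting
    interactively only when it is missing. The function name is a
    local convenience, not a langchain-ibm API."""
    key = os.environ.get(var)
    if key is None:
        from getpass import getpass  # interactive fallback
        key = getpass(f"{var}: ")
        os.environ[var] = key  # visible to langchain-ibm in this process
    return key

# Non-interactive usage: set the variable first, then call the helper.
os.environ["WATSONX_APIKEY"] = "dummy-key-for-demo"
print(ensure_api_key())  # → dummy-key-for-demo
```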
python3-langchain-mistralai-0.2.10-1.lbn42.noarch.rpm
Sep 14, 2025 - langchain-mistralai: This package contains the LangChain integrations for MistralAI through their mistralai SDK.
Installation: pip install -U langchain-mistralai
Chat Models: This package contains the ChatMistralAI class, which is the recommended way to interface with MistralAI models. To use it, install the requirements and configure your environment:
export MISTRAL_API_KEY=your-api-key
Then initialize:
from langchain_core.messages import HumanMessage
from langchain_mistralai.chat_models import ChatMistralAI
chat = ChatMistralAI(model="mistral-small")
messages = [HumanMessage(content="say a brief hello")]
chat.invoke(messages)
ChatMistralAI also supports async and streaming functionality:
await chat.ainvoke(messages)
for chunk in chat.stream(messages):
    print(chunk.content, end="", flush=True)
Embeddings: With MistralAIEmbeddings, you can directly use the default model 'mistral-embed', or set a different one if available. Choose model embedding.mode
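The invoke / ainvoke / stream trio above is a general LangChain chat-model calling convention. The stub below, which is not the real ChatMistralAI and just echoes canned string chunks, sketches how the three entry points relate: one blocking call, one awaitable, and one incremental generator.

```python
import asyncio

class FakeChat:
    """Minimal stand-in for a LangChain chat model, illustrating the
    invoke / ainvoke / stream calling convention. Not ChatMistralAI:
    real models yield chunk objects with a .content attribute, while
    this stub yields plain strings."""
    def __init__(self, chunks):
        self.chunks = chunks

    def invoke(self, messages):
        # Blocking call: return the whole reply at once.
        return "".join(self.chunks)

    async def ainvoke(self, messages):
        # Async variant: same result, awaited.
        return self.invoke(messages)

    def stream(self, messages):
        # Streaming variant: chunks arrive incrementally.
        yield from self.chunks

chat = FakeChat(["Hel", "lo", "!"])
print(chat.invoke([]))                # → Hello!
print(asyncio.run(chat.ainvoke([])))  # → Hello!
print("".join(chat.stream([])))       # → Hello!
```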