📄️ Argilla
Argilla is an open-source data curation platform for LLMs.
📄️ Comet Tracing
There are two ways to trace your LangChain executions with Comet.
📄️ Confident
DeepEval is a package for unit testing LLMs.
📄️ Context
Context provides user analytics for LLM-powered products and features.
📄️ Fiddler
Fiddler offers a unified platform for enterprise generative and predictive ML operations, enabling Data Science, MLOps, Risk, Compliance, Analytics, and other line-of-business teams to monitor, explain, analyze, and improve ML deployments at enterprise scale.
📄️ Infino
Infino is a scalable telemetry store designed for logs, metrics, and traces. Infino can function as a standalone observability solution or as the storage layer in your observability stack.
📄️ Label Studio
Label Studio is an open-source data labeling platform that gives LangChain users flexibility in labeling data for fine-tuning large language models (LLMs). It also enables preparing custom training data and collecting and evaluating responses through human feedback.
📄️ LLMonitor
LLMonitor is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
📄️ PromptLayer
PromptLayer is a platform for prompt engineering. It also helps with LLM observability by letting you visualize requests, version prompts, and track usage.
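As a minimal sketch of how this fits into LangChain (assuming the `langchain-community` and `langchain-openai` packages are installed and `PROMPTLAYER_API_KEY`/`OPENAI_API_KEY` are set; the tag value is illustrative):

```python
# A minimal sketch: sending LangChain requests to PromptLayer via its callback handler.
# Assumes `promptlayer`, `langchain-community`, and `langchain-openai` are installed,
# and that PROMPTLAYER_API_KEY and OPENAI_API_KEY are set in the environment.
from langchain_community.callbacks import PromptLayerCallbackHandler
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    temperature=0,
    # pl_tags is optional; "example" is an illustrative tag, not a required value.
    callbacks=[PromptLayerCallbackHandler(pl_tags=["example"])],
)

# The request and its response then show up in the PromptLayer dashboard.
llm.invoke("Say hello from LangChain.")
```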
📄️ SageMaker Tracking
Amazon SageMaker is a fully managed service for quickly and easily building, training, and deploying machine learning (ML) models.
📄️ Streamlit
Streamlit is a faster way to build and share data apps.
📄️ Trubrics
Trubrics is an LLM user analytics platform that lets you collect, analyse, and manage user prompts and feedback on AI models.
📄️ Upstash Ratelimit Callback
In this guide, we will go over how to add rate limiting based on the number of requests or the number of tokens using UpstashRatelimitHandler. This handler uses Upstash's ratelimit library, which relies on Upstash Redis.
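As a rough sketch of what the handler looks like in practice (the identifier and limit values below are illustrative assumptions, and a `RunnableLambda` stands in for a real chain or LLM):

```python
# A minimal sketch of request-based rate limiting with UpstashRatelimitHandler.
# Assumes `upstash-ratelimit`, `upstash-redis`, and `langchain-community` are installed,
# and that UPSTASH_REDIS_REST_URL / UPSTASH_REDIS_REST_TOKEN are set in the environment.
from upstash_ratelimit import FixedWindow, Ratelimit
from upstash_redis import Redis

from langchain_community.callbacks import (
    UpstashRatelimitError,
    UpstashRatelimitHandler,
)
from langchain_core.runnables import RunnableLambda

# Allow at most 10 requests per 10-second window (illustrative values).
rate_limit = Ratelimit(
    redis=Redis.from_env(),
    limiter=FixedWindow(max_requests=10, window=10),
)

# "user-123" is a hypothetical per-user key used to scope the limit.
handler = UpstashRatelimitHandler(identifier="user-123", request_ratelimit=rate_limit)

chain = RunnableLambda(str)  # stand-in for a real chain or LLM
try:
    chain.invoke("hello", config={"callbacks": [handler]})
except UpstashRatelimitError:
    print("Rate limit exceeded for this user.")
```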
📄️ UpTrain
UpTrain is an open-source platform to evaluate and improve LLM applications. It provides grades for 20+ preconfigured checks (covering language, code, and embedding use cases), performs root cause analysis on failure cases, and provides guidance for resolving them.