Hello everyone 👋, in the past month we've concentrated on enhancing integrations and adding more trace context capabilities to Falcon.
- Optimized trace data ingestion
- Langchain compatibility for JavaScript/TypeScript developers
- Implementation of Langchain within a Trace context
- Monitoring of software releases and iterations in Python and JavaScript/TypeScript
- Advanced filtering on the Traces table
Advanced Analytics Features
- USD cost computation for token usage
- Token utilization charts
Additional Updates
- GET API enhancements: filtering by user, model, and date; access to unprocessed traces
- Streamlined self-hosting process with Docker
... along with numerous incremental improvements and bug fixes.
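To illustrate the USD cost computation for token usage mentioned above, here is a minimal sketch. The per-1K-token prices below are placeholder values for illustration, not Falcon's actual pricing table:

```python
# Sketch of USD cost computation from token counts.
# Prices are hypothetical per-1K-token rates, not Falcon's real pricing data.
PRICES_PER_1K = {
    "gpt-3.5-turbo": {"prompt": 0.0015, "completion": 0.002},
}

def usage_cost_usd(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Compute USD cost as (tokens / 1000) * per-1K-token price."""
    p = PRICES_PER_1K[model]
    return (prompt_tokens / 1000) * p["prompt"] + (completion_tokens / 1000) * p["completion"]

# Example: 1200 prompt tokens and 300 completion tokens -> ~0.0024 USD
cost = usage_cost_usd("gpt-3.5-turbo", 1200, 300)
```

In practice the observed token counts from each generation are multiplied by the model's rate per 1K tokens and summed per trace.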
Detailed Breakdown 👇
🦜🔗 JavaScript/TypeScript Langchain Integration
Following the release of the Python integration for Langchain, we're excited to introduce the equivalent for JavaScript/TypeScript teams. The new falcon-langchain package includes a CallbackHandler that seamlessly integrates complex Langchain chains and agents into your traces. Simply pass it as a callback.
// Initialize Falcon Callback Handler
import CallbackHandler from "falcon-langchain";
const handler = new CallbackHandler({
secretKey: process.env.FALCON_SECRET_KEY, // sk-lf-...
publicKey: process.env.FALCON_PUBLIC_KEY, // pk-lf-...
// additional options
});
// Configure Langchain
import { OpenAI } from "langchain/llms/openai";
const llm = new OpenAI();
// Integrate Falcon Callback Handler
const res = await llm.call("<user-input>", { callbacks: [handler] });
→ Integration Documentation
⛓️ Enhanced Trace Context with Langchain Integrations [#langchain-trace]
With the Langchain Python integration, you can now add context to your traces, such as user IDs, metadata, or custom identifiers, so that evaluations are associated directly with the trace.
import uuid
from falcon.client import Falcon
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
# Initialize Falcon client
falcon = Falcon(ENV_PUBLIC_KEY, ENV_SECRET_KEY, ENV_HOST)
# Generate a unique trace_id
trace_id = str(uuid.uuid4())
# Establish the Trace
trace = falcon.trace(id=trace_id)
# Generate a handler associated with the Trace
handler = trace.getNewHandler()
# Set up Langchain
llm = OpenAI()
chain = LLMChain(llm=llm, prompt=PromptTemplate(...))
chain.run("<your-input>", callbacks=[handler])
→ Documentation
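The example above creates the trace with only an id. To attach the user IDs and metadata described earlier, the trace payload might be assembled like this; note that the userId and metadata field names are assumptions for illustration, not taken from the Falcon API reference:

```python
import uuid

# Hypothetical sketch: assembling extra context fields for a trace.
# The "userId" and "metadata" keys are assumed field names, not confirmed API.
def trace_attributes(user_id: str, metadata: dict) -> dict:
    """Build the attribute payload to pass to falcon.trace(**attrs)."""
    return {
        "id": str(uuid.uuid4()),  # unique trace id, as in the example above
        "userId": user_id,
        "metadata": metadata,
    }

attrs = trace_attributes("user-1234", {"release": "v2.1.0"})
# Assumed usage: trace = falcon.trace(**attrs); handler = trace.getNewHandler()
```

Keeping this assembly in one place makes it easy to tag every chain run with the same user and release context.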
📦 Incorporating Trace Context: Tracking Releases and Versions [#releases-and-versions]