December 19, 2024

User Histograms: Analyze LLM Usage Patterns

We’re excited to introduce User Histograms, a powerful new visualization tool that helps you understand user behavior patterns across your LLM applications.

Key Features

  • Distribution Visualization: See how your users are distributed across different metrics like token usage, costs, and request volumes
  • Percentile Analysis: Quickly identify power users and understand usage patterns at different percentiles
  • Interactive Filtering: Filter and segment your user data to focus on specific time periods or user groups

Use Cases

  1. Usage Pattern Analysis

    • Identify usage clusters and understand how different user segments interact with your LLM applications
    • Spot outliers and investigate unusual usage patterns
  2. Cost Optimization

    • Understand cost distribution across your user base
    • Make informed decisions about pricing tiers and usage limits
  3. Capacity Planning

    • Analyze token usage patterns to better predict and plan for scaling
    • Understand peak usage patterns across your user base

To access User Histograms, navigate to the Users tab in your Helicone dashboard and click on the Histograms view.

December 10, 2024

🎉 Experiments is here!

We are thrilled to announce that Experiments is out of beta.

Experiments is designed to help you tune your LLM prompt, test it on production data, and verify your iterations with quantifiable data.

Main use cases

1. Continuous Improvement

Analyze production edge cases to refine your application’s performance.

2. Pre-deployment Testing

Benchmark new releases rigorously before rolling out to production environments.

3. Structured Testing

Implement LLM-as-a-judge or custom evaluation metrics, then compare prompt variations side-by-side with quick, actionable feedback loops.

4. Prompt Optimization

Determine the best prompt for production by running evaluators to prevent performance regressions.

For detailed documentation, refer to our updated docs.

December 6, 2024

Support for AWS Bedrock Models

We’re excited to announce support for tracking AWS Bedrock model requests through Helicone.

How to track your requests

To track your Bedrock requests through Helicone, set the Bedrock client’s endpoint to the Helicone proxy:

endpoint="https://bedrock.helicone.ai/v1/<region>"
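
For example, with the AWS SDK for JavaScript v3 this might look like the sketch below. The middleware that attaches the Helicone API key is an assumption based on the SDK’s standard middleware hooks; check the docs for the exact requirements of your setup.

import {
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";

// Point the client at the Helicone proxy instead of the default Bedrock endpoint
const client = new BedrockRuntimeClient({
  region: "us-east-1",
  endpoint: "https://bedrock.helicone.ai/v1/us-east-1",
});

// Attach the Helicone API key to every outgoing request (assumed header name)
client.middlewareStack.add(
  (next) => async (args: any) => {
    args.request.headers["Helicone-Auth"] = `Bearer ${process.env.HELICONE_API_KEY}`;
    return next(args);
  },
  { step: "build" }
);

// Requests now flow through Helicone and appear in your dashboard
const response = await client.send(
  new InvokeModelCommand({
    modelId: "anthropic.claude-3-5-sonnet-20241022-v2:0", // example model ID
    contentType: "application/json",
    body: JSON.stringify({
      anthropic_version: "bedrock-2023-05-31",
      max_tokens: 256,
      messages: [{ role: "user", content: "Hello!" }],
    }),
  })
);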

For detailed API documentation, please refer to our updated docs.

November 12, 2024

Cerebras: New Model Provider Integration

We’re excited to announce the addition of Cerebras as a new model provider on our platform. This integration expands our suite of available AI models and provides more options for our users.

Getting Started

To start using Cerebras models, create a Cerebras account and generate an API key. Then point your client at the Helicone proxy by setting the base URL:

base_url="https://cerebras.helicone.ai/v1"
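
Since Cerebras exposes an OpenAI-compatible API, one possible setup with the OpenAI SDK looks like the sketch below; the Helicone-Auth header follows Helicone’s standard proxy pattern, and the model name is only an example.

import OpenAI from "openai";

// Cerebras is OpenAI-compatible, proxied here through Helicone
const client = new OpenAI({
  apiKey: process.env.CEREBRAS_API_KEY,
  baseURL: "https://cerebras.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

const completion = await client.chat.completions.create({
  model: "llama3.1-8b", // example Cerebras-hosted model
  messages: [{ role: "user", content: "Hello!" }],
});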

For detailed API documentation, please refer to our updated docs.

October 24, 2024

Webhooks: Real-Time Integration and Automation

We are excited to announce the addition of webhooks to our platform, enhancing real-time integration and automation capabilities. With this update, you can:

  • Set up and monitor webhooks for real-time data processing.
  • Seamlessly integrate webhook routes with your applications.
  • Test webhooks locally using tools like ngrok.
  • Randomly sample webhook events and filter them by specific properties to tailor your data processing needs.
  • Utilize webhooks for evaluations and score tracking, enabling more precise and automated performance assessments.
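
As a rough sketch of the receiving side, assuming an Express app (the route name and payload handling are illustrative, not Helicone’s prescribed schema; see the setup guide for signature verification and the exact payload shape):

import express from "express";

const app = express();
app.use(express.json());

// Helicone POSTs event data as JSON to your registered route
app.post("/helicone-webhook", (req, res) => {
  const event = req.body; // e.g. metadata and body of a logged request
  console.log("Received Helicone webhook:", event);
  res.status(200).send("ok");
});

app.listen(3000);

During local development you can expose a route like this with ngrok (for example, ngrok http 3000), as noted above.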

For detailed instructions, please refer to our Webhooks Setup Guide.

October 23, 2024

Prompt UI Refresh

We’ve refreshed the Prompts interface to align with our new UI style — now simpler, more productive, and consistent throughout. Key improvements include:

  • Full-width interface
  • Ability to view production inputs in a prompt template
  • Easily roll back to a previous prompt
  • Ability to edit prompts and push to production from the UI

Check it out in the Prompts tab in Helicone!

October 23, 2024

New Claude 3.5 Sonnet (claude-3-5-sonnet-20241022-v2): Full Cost Support and Tracking

We’re excited to announce immediate support for Anthropic’s latest Claude 3.5 Sonnet model (claude-3-5-sonnet-20241022-v2), released in October 2024.

What’s New in This Version?

  • Improved performance across various tasks
  • Enhanced capabilities in analysis, coding, and creative writing
  • Better contextual understanding and response relevance

Performance Tracking and Cost Management

Our platform now offers:

  • Comprehensive performance tracking for claude-3-5-sonnet-20241022-v2
  • Updated pricing calculator for accurate cost estimation

How to Use

Refer to our Anthropic Integration Guide for details on how to use the new model with Helicone.
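
As a minimal sketch using the official @anthropic-ai/sdk (the proxy URL and auth header follow Helicone’s standard proxy pattern; confirm the specifics in the integration guide):

import Anthropic from "@anthropic-ai/sdk";

// Route Anthropic traffic through Helicone for usage and cost tracking
const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: "https://anthropic.helicone.ai",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

const message = await anthropic.messages.create({
  model: "claude-3-5-sonnet-20241022", // the updated Sonnet release
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello, Claude" }],
});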

Learn More About Claude 3.5 Sonnet

October 4, 2024

🎉 Prompt Experiments V2 Launch! 🎉

Discover Helicone’s experiments, a new spreadsheet-like interface designed for efficient LLM prompt experimentation. Easily manage multiple prompt variations, run flexible experiments, and gain data-driven insights to optimize your AI prompts.

Get early access now 👉 helicone.ai/experiments

October 3, 2024

Redesigned Requests Page for Enhanced LLM Observability

We’re excited to announce a major redesign of our Requests page, enhancing the user experience and efficiency for AI LLM observability.

Key Improvements

  • Streamlined Navigation: Quick toggle between requests without closing the drawer, allowing for faster review and comparison.
  • Compact Information Display: More data visible at a glance with a sleeker, more compact row design.
  • Reduced Visual Clutter: A cleaner interface that focuses on essential information.
  • Enhanced Time Selector: Improved configuration options and quick select features for more precise data filtering.
  • Unobstructed Page Navigation: Chat widget no longer blocks page navigation, ensuring a smoother user experience.

Benefits for LLM Developers and Data Scientists

  • Efficient Prompt Analysis: Easily view and compare prompts across multiple requests.
  • Improved Performance Monitoring: Quickly identify trends and anomalies in your LLM applications.
  • Streamlined Workflow: Navigate through large volumes of request data with ease.

This redesign reflects our commitment to providing the best tools for AI LLM observability. We’ve focused on enhancing the core features that matter most to our users, making it easier than ever to gain insights from your LLM application data.

We encourage you to explore the new Requests page and experience the improvements firsthand. Your feedback is valuable as we continue to refine and enhance Helicone’s observability platform.

October 2, 2024

Introducing new NPM packages for Helicone

We are thrilled to announce the addition of two essential npm packages: @helicone/async and @helicone/helpers. We are also deprecating the @helicone/helicone package.

Why These Changes?

  • Optimized Package Size: The previous @helicone/helicone package wrapped the entire OpenAI SDK, resulting in a bulky package size.
  • Enhanced Function Utilization: Many functions within the old package were unused and outdated. The new approach ensures that only necessary functions are included and are up to date.

Detailed Changes

  • Deprecated @helicone/helicone:

    • This package is officially deprecated and will no longer receive updates.
    • Existing functions within this package will continue to operate as expected to ensure a smooth transition.
  • Added @helicone/async:

    • HeliconeAsyncLogger Class: Previously part of @helicone/helicone, this class is now housed within @helicone/async. It retains all existing functionalities, offering robust asynchronous logging capabilities.
  • Added @helicone/helpers:

    • HeliconeManualLogger Class: Moved from @helicone/helicone to @helicone/helpers, this class now adopts a more functional approach. Visit the docs to learn more.
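
A rough usage sketch for HeliconeManualLogger follows; the method names (logRequest, appendResults) reflect the docs at the time of writing, and callMyModel is a hypothetical stand-in for your own LLM call:

import { HeliconeManualLogger } from "@helicone/helpers";

const logger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY!,
});

// Wrap any LLM call: log the request body, run the call, record the result
const result = await logger.logRequest(
  { model: "my-model", prompt: "Hello" }, // illustrative request body
  async (resultRecorder) => {
    const response = await callMyModel(); // hypothetical helper for your provider
    resultRecorder.appendResults(response);
    return response;
  }
);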

September 23, 2024

Summary Reports

Get weekly summary reports of your LLM usage

We’ve launched a new feature that keeps you updated on your LLM usage with detailed weekly reports delivered directly to your inbox every Monday at 10 AM UTC. These reports provide a comprehensive overview of key metrics, including total usage, cost analysis, number of requests, error rate, active users, threats, number of sessions, and average session costs.

With these automated reports, you can easily monitor your AI performance, optimize your usage, and make data-driven decisions for your projects. Ensure you’re staying on top of your LLM utilization and maximizing the value of your resources.

Ready to get started? Configure your weekly summaries now.

September 16, 2024

O1 Models: Support Added with Token and Cost Tracking

Immediate Support for OpenAI’s o1 Models

We’re excited to announce support for OpenAI’s new o1 models, along with comprehensive tracking of token counts and spending.

What Are o1 Models?

OpenAI’s o1 models represent a significant advancement in language AI. They use reinforcement learning to perform complex reasoning tasks, generating an internal chain of thought before producing a final response. This leads to enhanced performance and new capabilities for your applications.

Accurate Cost Tracking

Our platform now fully supports cost tracking for o1 model usage. Due to the unique way these models process information, it’s important to provide token counts for both input and output to ensure accurate cost calculations.

How to Ensure Accurate Tracking

  • Using Integrations: If you’re using integrations like Langchain, LlamaIndex, or LiteLLM, token usage is automatically tracked.
  • Streaming Usage: For accurate cost calculation while streaming, refer to our guide on Correct Cost Calculation While Streaming; a brief sketch follows below.
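
For the streaming case, a brief sketch with the OpenAI SDK is shown below. It relies on OpenAI’s stream_options.include_usage flag, and assumes streaming is enabled for the o1 model you’re calling (support has varied by model version and account tier):

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1", // Helicone proxy
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

const stream = await openai.chat.completions.create({
  model: "o1-mini",
  messages: [{ role: "user", content: "Summarize the plan in three steps." }],
  stream: true,
  stream_options: { include_usage: true }, // final chunk carries token usage
});

for await (const chunk of stream) {
  // The final chunk has an empty choices array and a populated usage object
  if (chunk.usage) {
    console.log("Token usage:", chunk.usage);
  }
}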

Learn More About o1 Models

September 12, 2024

Datasets

Streamline your AI data organization and analysis with Helicone’s new Datasets feature. Designed for LLM developers and data scientists, this tool simplifies data handling for improved AI model performance.

Key Features of Helicone Datasets:

  1. Dataset Creation: Quickly set up and organize your AI training data within the requests page.
  2. Export: Easily export your data as JSONL for training or fine-tuning (see the sample record after this list).
  3. Edit: Edit your dataset and save it as a new version.
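
For reference, a JSONL export aimed at OpenAI-style fine-tuning holds one chat record per line, roughly like the sample below; the exact field layout of Helicone’s export is described in the docs, so treat this shape as illustrative:

{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is Helicone?"}, {"role": "assistant", "content": "Helicone is an open-source observability platform for LLM applications."}]}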

To begin using the Datasets feature:

  1. Navigate to the Requests page in your Helicone dashboard.
  2. Enter select mode by clicking the select icon in the top right corner.
  3. Select the data points you want to include in your dataset.
  4. Click on “Create Dataset” and give it a name.
  5. Access your datasets from the new Datasets tab to export or edit as needed.

September 11, 2024

Collapsible Sidebar

Enhance your workflow with our new collapsible sidebar feature. Users can now easily toggle the sidebar visibility, maximizing screen real estate and improving focus. This update offers:

  • One-click sidebar collapse/expand
  • Increased workspace flexibility
  • Improved screen space utilization
  • Seamless transition between full and minimized views

Optimize your productivity by customizing your interface on demand. Experience a cleaner, more adaptable workspace with our latest sidebar enhancement.

September 10, 2024

Slack Alerts

Real-Time Alerts Now Available in Slack for Faster Issue Resolution

Stay on top of critical issues with Helicone’s latest update: Slack Integration for Alerts. In addition to email notifications, you can now receive real-time alerts directly in your Slack workspace for faster action when something goes wrong.

To get started, visit the Alerts page to create or edit an alert. Enhance your team’s productivity by responding to key notifications without delay.

August 29, 2024

#1 Product of the Day on Product Hunt

Helicone Reaches #1 on Product Hunt!

This achievement reflects our team’s hard work and the incredible support from our community. We’re thrilled about the boost in visibility for our platform!

Highlights:

  • #1 on Product Hunt’s daily leaderboard
  • Positive feedback from the open-source community
  • Surge in new user sign-ups and engagement

A huge thank you to everyone who upvoted, commented, and shared Helicone. Your support motivates us to keep improving!

For more on our Product Hunt journey, check out our blog posts.

Links:

Product Hunt: Helicone on Product Hunt

August 25, 2024

Docker images on Docker Hub

We’ve started publishing Docker images on Docker Hub.

This update simplifies Helicone deployment on platforms that don’t natively support the Google Container Registry. For detailed instructions, please refer to our updated self-hosting guide.

Links:

Docker Hub: helicone

August 12, 2024

New hpstatic Function for Static Prompts in LLM Applications

We’ve added a new hpstatic function to our Helicone Prompt Formatter (HPF) package. This function allows users to create static prompts that don’t change between requests, which is particularly useful for system prompts or other constant text. The hpstatic function wraps the text in <helicone-prompt-static> tags, indicating to Helicone that this part of the prompt should not be treated as variable input.

Here’s a quick example of how to use hpstatic:

import OpenAI from "openai";
import { hpf, hpstatic } from "@helicone/prompts";

// Route requests through the Helicone proxy so prompts are versioned and tracked
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

// Static prompt: wrapped in <helicone-prompt-static> tags, excluded from input tracking
const systemPrompt = hpstatic`You are a helpful assistant.`;

// Variable prompt: `character` is captured as a named input
const character = "a brave knight";
const userPrompt = hpf`Write a story about ${{ character }}`;

const chatCompletion = await openai.chat.completions.create(
  {
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userPrompt },
    ],
    model: "gpt-3.5-turbo",
  },
  {
    headers: {
      "Helicone-Prompt-Id": "prompt_story",
    },
  }
);

This new feature enhances our prompt management capabilities, allowing for more flexible and efficient prompt structuring in your applications.

Start Using Static Prompts 🚀

August 9, 2024

Ragas Integration for RAG System Evaluation

We’re excited to announce our integration with Ragas, an open-source framework for evaluating Retrieval-Augmented Generation (RAG) systems. This integration allows you to:

  • Monitor and analyze the performance of your RAG pipelines
  • Gain insights into RAG effectiveness using metrics like faithfulness, answer relevancy, and context precision
  • Easily identify areas for improvement in your RAG systems

To get started with the Ragas integration, visit our documentation for step-by-step instructions and code examples.

August 6, 2024

Optimistic Updates & Asynchronous Loading in Requests Page

We’ve improved data loading in the Requests page of the Helicone platform. By fetching metadata and request bodies separately and loading data asynchronously, we’ve reduced the time it takes to render large tables by almost 6x, improving speed and UX.

July 26, 2024

New Assistants UI Playground

We’re thrilled to announce a major update to our Assistants UI Playground! Head to the Playground and click the “Try New Playground” button to explore the latest improvements:

  • Streamed responses for real-time interaction
  • Enhanced tool rendering for better visualization
  • Improved reliability for a smoother experience

Coming soon:

  • Expanded model support
  • Advanced prompt management
  • Integrated Markdown editor

Try out the new Playground today and elevate your LLM testing experience!

July 24, 2024

Fireworks AI + Helicone

We’re excited to announce our integration with Fireworks AI, the high-performance LLM platform! Enhance your AI applications with Helicone’s powerful observability tools in just two easy steps:

  1. Generate a write-only API key in your Helicone account.
  2. Update your Fireworks AI base URL to:
    https://fireworks.helicone.ai
    

That’s all it takes! Now you can monitor, analyze, and optimize your Fireworks AI models with Helicone’s comprehensive insights.
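
As a sketch with the OpenAI SDK (the /inference/v1 path mirrors Fireworks’ native API layout and the model ID is an example; confirm both in the integration guide):

import OpenAI from "openai";

// Fireworks AI is OpenAI-compatible, so the standard SDK works through Helicone
const fireworks = new OpenAI({
  apiKey: process.env.FIREWORKS_API_KEY,
  baseURL: "https://fireworks.helicone.ai/inference/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

const completion = await fireworks.chat.completions.create({
  model: "accounts/fireworks/models/llama-v3p1-8b-instruct", // example model ID
  messages: [{ role: "user", content: "Hello!" }],
});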

For more details, check out our Fireworks AI integration guide.

July 23, 2024

Dify + Helicone

We’re thrilled to announce our integration with Dify, the open-source LLM app development platform! Now you can easily add Helicone’s powerful observability features to your Dify projects in just two simple steps:

  1. Generate a write-only API key in your Helicone account.
  2. Set your API base URL in Dify to:
    https://oai.helicone.ai/<API_KEY>
    

That’s it! Enjoy comprehensive logs and insights for your Dify LLM applications.

Check out our integration guide for more details.

July 22, 2024

Prompts package

We’re excited to announce the release of our new @helicone/prompts package! This lightweight library simplifies prompt formatting for Large Language Models, offering features like:

  • Automated versioning with change detection
  • Support for chat-like prompt templates
  • Efficient variable handling and extraction
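
For instance, a minimal sketch of the variable handling (the tagged output shown in the comment is approximate; see the package README for the exact format):

import { hpf } from "@helicone/prompts";

const character = "a curious robot";
const prompt = hpf`Write a story about ${{ character }}`;

// hpf wraps each variable in a tag so Helicone can version the template and
// extract inputs, producing roughly:
// 'Write a story about <helicone-prompt-input key="character">a curious robot</helicone-prompt-input>'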

Check it out on GitHub and enhance your LLM workflow today!