Documentation

Complete guide to getting started with OpenFinOps

Getting Started

OpenFinOps is an open-source FinOps platform designed for AI/ML cost observability and optimization. It provides comprehensive visibility into your cloud and AI infrastructure spending with real-time dashboards, intelligent recommendations, and multi-cloud support.

Key Features

  • AI/ML Cost Tracking - Monitor training and inference costs in real-time
  • Multi-Cloud Support - AWS, Azure, and GCP integration
  • Data Platform Monitoring - Databricks DBU tracking, Snowflake credit consumption
  • SaaS Service Tracking - MongoDB Atlas, Redis Cloud, GitHub Actions, DataDog
  • Executive Dashboards - Role-based dashboards for CFO, COO, Infrastructure
  • Smart Recommendations - AI-powered cost optimization suggestions
  • Real-time Monitoring - WebSocket-based live updates
  • Telemetry Agents - Distributed cost data collection across all services

Installation

OpenFinOps can be installed via pip, from source, or using Docker. The package supports Python 3.8 and above.

Install via pip

BASH
pip install openfinops

Install with All Dependencies

BASH
# Install with all cloud providers and features
pip install openfinops[all]

# Or install specific integrations
pip install openfinops[aws]
pip install openfinops[azure]
pip install openfinops[gcp]

Install from Source

BASH
git clone https://github.com/rdmurugan/openfinops.git
cd openfinops
pip install -e .

Note: For Web UI functionality, ensure you have flask-socketio and eventlet installed. These are included in the base installation.

Quick Start

Get up and running with OpenFinOps in under 5 minutes. This guide will show you how to launch the Web UI and start monitoring your costs.

1. Start the Web UI Server

BASH
# Using the CLI
openfinops server --host 127.0.0.1 --port 8080

# Or using Python
python -m openfinops server

2. Access the Dashboards

Open your browser and navigate to:

  • Overview Dashboard: http://localhost:8080/
  • CFO Executive: http://localhost:8080/dashboard/cfo
  • COO Operational: http://localhost:8080/dashboard/coo
  • Infrastructure: http://localhost:8080/dashboard/infrastructure

3. Deploy a Telemetry Agent

Important: OpenFinOps uses agents that automatically discover resources and calculate costs.

PYTHON
from agents.aws_telemetry_agent import AWSTelemetryAgent

# Initialize AWS agent
agent = AWSTelemetryAgent(
    openfinops_endpoint="http://localhost:8080",
    aws_region="us-west-2"
)

# Register and run
if agent.register_agent():
    print("✓ Agent registered")
    # Collects metrics every 5 minutes
    agent.run_continuous(interval_seconds=300)

Web UI Server

The Web UI provides beautiful, real-time dashboards with WebSocket-based live updates. All dashboards feature glassmorphism design, Chart.js visualizations, and responsive layouts.

Server Configuration

PYTHON
from openfinops.webui import start_server

# Start with custom configuration
start_server(
    host='0.0.0.0',
    port=8080,
    debug=False
)

Available Features

  • Real-time Updates - WebSocket pushes every 5 seconds
  • Interactive Charts - Click-to-zoom, tooltips, animations
  • Responsive Design - Mobile and desktop optimized
  • Live Metrics - Cost trends, resource usage, performance

Observability Hub

The Observability Hub receives and processes telemetry data from agents. Note: You don't call these methods directly - agents call them automatically.

Architecture

Agent-Based Design: Deploy telemetry agents in your cloud accounts. Agents automatically discover resources, collect metrics, calculate costs, and send to the ObservabilityHub server.

Query Data

PYTHON
from openfinops import ObservabilityHub
from openfinops.observability.cost_observatory import CostObservatory

# Initialize (server-side)
hub = ObservabilityHub()
cost_obs = CostObservatory()

# Query cluster health (populated by agents)
health = hub.get_cluster_health_summary()
for cluster_id, metrics in health.items():
    print(f"{cluster_id}: {metrics['health_status']}")

# Query costs (populated by agents)
summary = cost_obs.get_cost_summary(time_range_hours=24)
print(f"Total 24h cost: ${summary['total_cost']:.2f}")

Dashboards

OpenFinOps provides role-based dashboards tailored for different stakeholders in your organization.

CFO Dashboard

PYTHON
from openfinops.dashboard import CFODashboard

# Initialize CFO dashboard
cfo_dash = CFODashboard()

# Generate financial report
report = cfo_dash.generate_financial_report()
print(report.total_spend)
print(report.roi_analysis)
print(report.budget_status)

Infrastructure Dashboard

PYTHON
from openfinops.dashboard import InfrastructureLeaderDashboard

# Initialize infrastructure dashboard
infra_dash = InfrastructureLeaderDashboard()

# Get resource utilization
utilization = infra_dash.get_resource_utilization()
print(f"CPU: {utilization.cpu_percent}%")
print(f"Memory: {utilization.memory_percent}%")

Telemetry Agents

Deploy telemetry agents as separate processes to automatically collect metrics and costs.

AWS Telemetry Agent

PYTHON
# deploy_aws_agent.py
from agents.aws_telemetry_agent import AWSTelemetryAgent

# Initialize agent (uses boto3 credential chain)
agent = AWSTelemetryAgent(
    openfinops_endpoint="http://localhost:8080",
    aws_region="us-west-2"
)

# Register with server
if agent.register_agent():
    print("✓ Agent registered")

    # Run continuous collection (every 5 minutes)
    # Agent automatically:
    # - Discovers EC2, EKS, Lambda, RDS, S3
    # - Queries CloudWatch for metrics
    # - Calculates costs from instance types
    # - Sends to OpenFinOps server
    agent.run_continuous(interval_seconds=300)

Azure & GCP Agents

PYTHON
# Azure agent
from agents.azure_telemetry_agent import AzureTelemetryAgent

agent = AzureTelemetryAgent(
    openfinops_endpoint="http://localhost:8080",
    subscription_id="your-subscription-id"
)
agent.register_agent()
agent.run_continuous(interval_seconds=300)

# GCP agent
from agents.gcp_telemetry_agent import GCPTelemetryAgent

agent = GCPTelemetryAgent(
    openfinops_endpoint="http://localhost:8080",
    project_id="your-project-id"
)
agent.register_agent()
agent.run_continuous(interval_seconds=300)

Security Note: Always use IAM roles with minimal permissions when deploying telemetry agents. Never hardcode credentials.

Data Platform Services

OpenFinOps provides specialized agents for data platforms like Databricks and Snowflake, with automatic cost calculation based on DBUs, credits, and storage usage.

Databricks Cost Tracking

Track Databricks DBU consumption, cluster costs, job execution, and SQL warehouse usage.

BASH
# Install Databricks SDK
pip install databricks-sdk requests

# Set credentials
export DATABRICKS_HOST=https://your-workspace.cloud.databricks.com
export DATABRICKS_TOKEN=dapi***

# Run agent
python agents/databricks_telemetry_agent.py \
    --openfinops-endpoint http://localhost:8080 \
    --databricks-host $DATABRICKS_HOST \
    --databricks-token $DATABRICKS_TOKEN \
    --interval 300

DBU Cost Calculation

Databricks Pricing:

  • All-Purpose Compute: $0.40/DBU
  • Jobs Compute: $0.15/DBU
  • SQL Pro: $0.55/DBU
  • Serverless SQL: $0.70/DBU
  • Delta Live Tables: $0.20-$0.30/DBU
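As a worked example, the published per-DBU rates above can be combined with measured DBU consumption to estimate spend. The rate table and helper function below are an illustrative sketch for this guide, not part of the OpenFinOps API.

```python
# Illustrative only: per-DBU rates from the pricing list above.
DBU_RATES = {
    "all_purpose": 0.40,     # $/DBU
    "jobs": 0.15,
    "sql_pro": 0.55,
    "serverless_sql": 0.70,
}

def estimate_databricks_cost(workload: str, dbus_consumed: float) -> float:
    """Estimate cost in USD for a workload type and DBU count."""
    return DBU_RATES[workload] * dbus_consumed

# A jobs cluster burning 8 DBU/hour for a 6-hour nightly ETL run:
nightly_cost = estimate_databricks_cost("jobs", 8 * 6)
print(f"Nightly ETL: ${nightly_cost:.2f}")              # $7.20
print(f"Monthly (30 runs): ${nightly_cost * 30:.2f}")   # $216.00
```

The same arithmetic is what the Databricks agent applies per cluster when it reports DBU-based costs to the server.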

Snowflake Cost Tracking

Monitor Snowflake credit consumption, warehouse usage, storage, and query patterns.

BASH
# Install Snowflake connector
pip install snowflake-connector-python requests

# Set credentials
export SNOWFLAKE_USER=your_user
export SNOWFLAKE_PASSWORD=your_password

# Run agent
python agents/snowflake_telemetry_agent.py \
    --openfinops-endpoint http://localhost:8080 \
    --snowflake-account xy12345.us-east-1 \
    --snowflake-warehouse COMPUTE_WH \
    --edition enterprise \
    --interval 300

Credit Pricing

Snowflake Editions:

  • Standard: $2.00 per credit
  • Enterprise: $3.00 per credit
  • Business Critical: $4.00 per credit
  • Storage: $40/TB/month (on-demand)
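To see how these rates translate into a bill, the sketch below multiplies credits burned by the edition rate and adds on-demand storage. The helper is illustrative documentation, not an OpenFinOps API; the 4-credits/hour figure is Snowflake's published rate for a Medium warehouse.

```python
# Illustrative only: per-credit rates from the edition list above.
CREDIT_RATES = {
    "standard": 2.00,            # $/credit
    "enterprise": 3.00,
    "business_critical": 4.00,
}
STORAGE_RATE = 40.0              # $/TB/month, on-demand

def estimate_snowflake_monthly(edition: str, credits: float,
                               storage_tb: float) -> float:
    """Estimate a monthly Snowflake bill from credits burned and storage held."""
    return CREDIT_RATES[edition] * credits + STORAGE_RATE * storage_tb

# A Medium warehouse (4 credits/hour) running 8 hours/day for 30 days
# on Enterprise, plus 2 TB of storage:
credits = 4 * 8 * 30             # 960 credits
monthly = estimate_snowflake_monthly("enterprise", credits, 2.0)
print(f"Monthly estimate: ${monthly:,.2f}")   # $2,960.00
```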

Metrics Collected

  • Databricks: Cluster uptime, DBU consumption, job costs, SQL warehouse usage, storage
  • Snowflake: Warehouse credits, storage (database + failsafe), query patterns, user attribution

SaaS Services Monitoring

Track costs from popular SaaS services including MongoDB Atlas, Redis Cloud, GitHub Actions, DataDog, and more.

Multi-Service Agent

The SaaS services agent can monitor multiple services from a single configuration.

BASH
# Create sample configuration
python agents/saas_services_telemetry_agent.py \
    --create-config saas_config.json

# Run agent
python agents/saas_services_telemetry_agent.py \
    --openfinops-endpoint http://localhost:8080 \
    --config saas_config.json \
    --interval 3600

Configuration Example

JSON
{
  "mongodb_atlas": {
    "enabled": true,
    "public_key": "your_public_key",
    "private_key": "your_private_key",
    "project_id": "your_project_id"
  },
  "redis_cloud": {
    "enabled": true,
    "api_key": "your_api_key",
    "secret_key": "your_secret_key",
    "account_id": "your_account_id"
  },
  "github_actions": {
    "enabled": true,
    "token": "ghp_your_token",
    "org_name": "your_org"
  },
  "datadog": {
    "enabled": true,
    "api_key": "your_api_key",
    "app_key": "your_app_key"
  }
}

Supported Services

Current Support:

  • MongoDB Atlas: Cluster costs, replication, sharding, storage
  • Redis Cloud: Database instances, throughput, subscriptions
  • GitHub Actions: Workflow minutes, storage, per-OS pricing
  • DataDog: Host count, custom metrics, estimated costs

Coming Soon: Elasticsearch, Confluent Kafka, Vercel, Docker Hub

Cost Estimation

  • MongoDB Atlas M10: $0.08/hour (~$58/month)
  • MongoDB Atlas M30: $0.54/hour (~$389/month)
  • GitHub Actions (Linux): $0.008/minute
  • GitHub Actions (macOS): $0.08/minute
  • DataDog: ~$15/host/month
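Combining the unit rates above gives a quick back-of-envelope monthly figure. The function below is a hedged sketch for this guide (it assumes ~730 hours per month for hourly-billed clusters), not a calculator shipped with OpenFinOps.

```python
# Illustrative only: unit rates from the estimation list above.
RATES = {
    "gha_linux_minute": 0.008,   # $/minute
    "gha_macos_minute": 0.08,
    "datadog_host_month": 15.0,  # $/host/month
    "atlas_m10_hour": 0.08,      # $/hour
}
HOURS_PER_MONTH = 730

def estimate_saas_monthly(linux_min: int, macos_min: int,
                          datadog_hosts: int, m10_clusters: int) -> float:
    """Rough monthly SaaS spend across GitHub Actions, DataDog, and Atlas M10s."""
    return (linux_min * RATES["gha_linux_minute"]
            + macos_min * RATES["gha_macos_minute"]
            + datadog_hosts * RATES["datadog_host_month"]
            + m10_clusters * RATES["atlas_m10_hour"] * HOURS_PER_MONTH)

# 5,000 Linux minutes, 500 macOS minutes, 10 DataDog hosts, 1 M10 cluster:
print(f"${estimate_saas_monthly(5000, 500, 10, 1):.2f}")   # $288.40
```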

Configuration

OpenFinOps can be configured via environment variables, configuration files, or programmatically.

Environment Variables

BASH
export OPENFINOPS_HOST=0.0.0.0
export OPENFINOPS_PORT=8080
export OPENFINOPS_DEBUG=false

# Cloud provider credentials
export AWS_ACCESS_KEY_ID=your_key
export AWS_SECRET_ACCESS_KEY=your_secret
export AZURE_SUBSCRIPTION_ID=your_sub_id
export GCP_PROJECT_ID=your_project

Configuration File

YAML
# config.yaml
server:
  host: 0.0.0.0
  port: 8080
  debug: false

observability:
  update_interval: 5
  retention_days: 90

alerts:
  budget_threshold: 10000
  anomaly_detection: true
  notification_channels:
    - email
    - slack