I've been using the Massive financial data API for a while now. It's a solid API — real-time and historical market data across stocks, options, crypto, forex, futures, and indices. But every time I wanted to pull data, I was either writing throwaway scripts or wiring up an MCP server so my AI agent could fetch it for me.
Both approaches had problems. Scripts are disposable. MCP servers flood your agent's context window with massive JSON payloads, eating up tokens and degrading response quality. And when something breaks, debugging is opaque — you can't see what the agent actually sent or received.
So I built a CLI instead. I call it Massive CLI.
Why a CLI Instead of an MCP Server?
This is the question I keep getting, so let me address it head-on. There are four reasons I chose a CLI over an MCP server:
Context Window Friendly
MCP servers dump entire JSON responses into your agent's context, burning through tokens fast. A CLI lets you run targeted queries and pipe just the relevant output into the conversation.
Debuggable
When an AI agent calls an API through an MCP server and something breaks, good luck figuring out what went wrong. With a CLI, you run the exact same command yourself and see the raw output in seconds.
Works Everywhere
No SDK to install, no server to run, no configuration files to wire into your editor. Just a single binary that works in terminals, shell scripts, CI pipelines, and as a tool any AI coding assistant can shell out to.
Human and Machine Readable
Every command outputs clean aligned tables for human scanning. Add --output json and you get machine-readable JSON for piping into jq, scripts, or AI context.
"A CLI is the universal interface. Every language can shell out to it, every agent can call it, and every human can read its output."
What It Covers
Massive CLI wraps the entire Massive API into a single binary. That means access to six major asset classes, real-time WebSocket streaming, bulk historical data downloads, and partner data sources — all from your terminal.
Stocks
OHLC bars, snapshots, quotes, trades, news, fundamentals, corporate actions, SEC filings, technical indicators, and market status.
Options
Aggregate bars, contract details, snapshots, option chains, trades, quotes, and technical indicators.
Crypto & Forex
Bars, snapshots, trades, tickers, currency conversion, and technical indicators across global markets.
Futures & Indices
Contracts, products, schedules, exchanges, snapshots, and aggregate data for futures and index markets.
WebSocket Streaming
Real-time (or 15-minute delayed) streaming of trades, quotes, and aggregates across all asset classes.
Flat File Downloads
Bulk historical data as gzipped CSV files via S3 — trades, quotes, and aggregates for backtesting and research.
On top of that, there's partner data from Benzinga (news, ratings, earnings), economic data (inflation, labor market, treasury yields), ETF analytics, and Canadian market data via TMX.
Getting Started
Installation is straightforward. If you're on macOS or Linux, Homebrew is the easiest path:
$ brew tap cloudmanic/massive https://github.com/cloudmanic/massive
$ brew install massive
Then configure your API key:
$ massive config init
# Prompts for your Massive API key and optional S3 credentials
That's it. You can also set MASSIVE_API_KEY as an environment variable or drop it in a .env file — whatever fits your workflow. Pre-built binaries for macOS, Linux, and Windows are available on the releases page if you don't use Homebrew.
How It Works in Practice
The command structure is predictable: pick an asset class, pick a command, and pass in your parameters. Here are some examples I use daily:
Pull OHLC Bars for a Stock
$ massive stocks bars AAPL --from 2025-01-01 --to 2025-01-31
This gives you a clean table of daily bars. Want weekly bars instead? Add --timespan week. Want JSON? Add -o json.
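Once you have JSON bars, a few lines of Python turn them into something you can analyze. A minimal sketch, using a stand-in payload since the real schema isn't shown here (the "close" field name is my assumption, not documented):

```python
import json

# Stand-in for `massive stocks bars AAPL ... -o json` output;
# the field names are illustrative assumptions.
raw = '''[
  {"date": "2025-01-02", "close": 243.9},
  {"date": "2025-01-03", "close": 243.4},
  {"date": "2025-01-06", "close": 245.0},
  {"date": "2025-01-07", "close": 242.2}
]'''

bars = json.loads(raw)
closes = [b["close"] for b in bars]

def sma(values, n):
    """Trailing simple moving average over the last n values."""
    return sum(values[-n:]) / n

print(round(sma(closes, 3), 2))
```

The same script works unchanged whether the bars came from a terminal session, a cron job, or an agent shelling out to the CLI.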
Check Today's Market Movers
$ massive stocks snapshots gainers
$ massive stocks snapshots losers
Get Company Fundamentals
$ massive stocks fundamentals balance-sheet AAPL
$ massive stocks fundamentals income-statement AAPL
$ massive stocks fundamentals ratios AAPL
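With `-o json`, fundamentals plug straight into whatever screening logic you already have. A sketch with made-up numbers and assumed field names (the actual balance-sheet schema may differ):

```python
import json

# Stand-in for `massive stocks fundamentals balance-sheet AAPL -o json`;
# field names and figures here are illustrative, not the documented schema.
raw = '{"total_current_assets": 152987000000, "total_current_liabilities": 176392000000}'

bs = json.loads(raw)
current_ratio = bs["total_current_assets"] / bs["total_current_liabilities"]
print(round(current_ratio, 2))
```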
Stream Real-Time Trades
$ massive ws stocks trades AAPL MSFT
# Press Ctrl+C to disconnect
The WebSocket streaming is particularly useful. You can subscribe to trades, quotes, or minute/second aggregates across any asset class. By default you get 15-minute delayed data — add --realtime if your subscription supports it.
Download Bulk Historical Data
$ massive files list stocks trades --year 2024
$ massive files download stocks trades 2024-06-15 --output-dir ./data
This pulls gzipped CSV files from Massive's S3-compatible storage — perfect for backtesting or building research datasets.
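Because the files are plain gzipped CSV, the standard library is enough to process them. This sketch builds a tiny stand-in file and computes a VWAP from it; the column names are assumptions, since the actual flat-file layout isn't documented here:

```python
import csv
import gzip

# Build a stand-in for one downloaded trades file; the real column
# names in Massive's flat files may differ from these.
sample = (
    "timestamp,ticker,price,size\n"
    "2024-06-15T09:30:00Z,AAPL,214.10,100\n"
    "2024-06-15T09:30:01Z,AAPL,214.20,300\n"
)
path = "trades-sample.csv.gz"
with gzip.open(path, "wt") as f:
    f.write(sample)

# Volume-weighted average price over the whole file.
notional = volume = 0.0
with gzip.open(path, "rt") as f:
    for row in csv.DictReader(f):
        price, size = float(row["price"]), float(row["size"])
        notional += price * size
        volume += size

vwap = notional / volume
print(round(vwap, 4))
```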
The AI Agent Use Case
This is the use case I'm most excited about. I use Claude Code daily, and having a CLI that Claude can shell out to is far more practical than an MCP server for financial data.
Here's why: when Claude needs stock data, it runs a command and gets back a focused result. It doesn't need to parse a massive JSON blob or manage an open connection. It just runs a command like any other tool:
# Agent pipes JSON into its context
$ massive stocks bars AAPL --from 2025-01-01 --to 2025-01-31 -o json | head -100
# Agent captures streaming data to a file for later analysis
$ massive ws stocks agg-minute --all -o json > market_data.jsonl &
The --output json flag is the key. Table output is great for humans; JSON output is great for agents and scripts. Same command, same data, different format. No adapter layer needed.
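A captured stream file like the one above is just JSON lines, so post-hoc analysis is trivial. A sketch with a stand-in for a few captured messages (the message shape is my assumption, not the documented wire format):

```python
import json
from collections import Counter

# Stand-in for lines captured by
# `massive ws stocks trades ... -o json > market_data.jsonl`;
# the message fields are illustrative assumptions.
lines = [
    '{"event": "trade", "ticker": "AAPL", "price": 214.1, "size": 100}',
    '{"event": "trade", "ticker": "MSFT", "price": 447.5, "size": 50}',
    '{"event": "trade", "ticker": "AAPL", "price": 214.2, "size": 200}',
]

# Count trades per ticker across the captured stream.
counts = Counter(json.loads(line)["ticker"] for line in lines)
print(counts.most_common(1))
```

Swap `lines` for `open("market_data.jsonl")` and the same loop runs over a real capture.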
"The best tool for an AI agent is the same tool that works for a human — just with a different output flag."
Under the Hood
The tool is written in Go and built on Cobra — the same CLI framework used by Kubernetes, Hugo, and GitHub's CLI. The architecture is clean and predictable:
- REST client — A straightforward HTTP client that authenticates with your API key and returns typed Go structs. Every command follows the same pattern: build params, call the API, render output as a table or JSON.
- WebSocket client — Uses gorilla/websocket for streaming. Connects, subscribes to channels (trades, quotes, aggregates), and delivers messages until you hit Ctrl+C.
- S3 client — Uses the AWS SDK to connect to Massive's S3-compatible endpoint for flat file downloads. List files by year/month, download gzipped CSVs by date.
- Configuration — API keys stored at ~/.config/massive/config.json with 0600 permissions. Environment variables take priority over the config file. Supports .env files.
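That precedence (environment variable first, config file as fallback) can be sketched in a few lines. This is my reading of the described behavior, not the tool's actual source, and the "api_key" field name is an assumption:

```python
import json
import os
from pathlib import Path

def resolve_api_key(config_path="~/.config/massive/config.json"):
    """Env var wins; otherwise fall back to the config file.
    A sketch of the described precedence, not the tool's real code."""
    key = os.environ.get("MASSIVE_API_KEY")
    if key:
        return key
    path = Path(config_path).expanduser()
    if path.exists():
        return json.loads(path.read_text()).get("api_key")
    return None

os.environ["MASSIVE_API_KEY"] = "demo-key"
print(resolve_api_key())  # the env var shadows any config file
```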
The whole project compiles to a single binary with no external runtime dependencies. CI/CD automatically cross-compiles for macOS (Intel and Apple Silicon), Linux, and Windows on every push to main.
The Full Command Tree
To give you a sense of how comprehensive the coverage is, here's the full command tree. Every endpoint in the Massive API is accessible from the terminal:
massive config init | show
massive stocks bars | open-close | market | snapshots | quotes | trades
massive stocks news | tickers | exchanges
massive stocks fundamentals balance-sheet | income-statement | cash-flow | ratios | ...
massive stocks corporate-actions dividends | splits
massive stocks filings sections | risk-factors | risk-categories
massive stocks indicators sma | ema | rsi | macd
massive stocks market-ops holidays | status
massive options bars | contracts | snapshots | trades | quotes | indicators
massive indices bars | snapshots | tickers | indicators
massive crypto bars | snapshots | trades | tickers | indicators
massive forex bars | convert | quotes | snapshots | tickers | indicators
massive futures bars | contracts | products | schedules | exchanges | snapshot
massive ws stocks | options | indices | crypto | forex | futures
massive files assets | types | list | download
massive benzinga news | ratings | earnings | guidance | analysts
massive economy inflation | labor-market | treasury-yields
Give It a Try
The project is open source and available on GitHub: github.com/cloudmanic/massive. If you have a Massive API key, you can be pulling data in under a minute.
If you're using AI coding assistants like Claude Code and need financial data, I think you'll find this approach — a simple CLI with table and JSON output — far more practical than an MCP server. Your agent gets exactly the data it needs without blowing up its context window, and you can debug any issue by running the same command yourself.
One binary. Six asset classes. Tables for humans, JSON for machines.
That's the whole idea.
Star the repo, try it out, and open an issue if you run into anything. I'm actively developing this and shipping updates regularly.