Your API is the heart of your application, but what happens when it develops a performance issue? A slow endpoint can lead to frustrated users, cascading failures, and a frantic search for the root cause. Digging through raw logs with `grep` to find performance bottlenecks is slow, tedious, and often inconclusive.
You need to move from guessing to knowing. This requires aggregating data to see the bigger picture. That’s precisely what the LogLens Pro `stats` command is for. It transforms your logs from a stream of text into a source of actionable performance insights.
Step 1: The Baseline - Is My Endpoint Slow?
Let's start with a simple, common problem. You suspect the main payment processing endpoint, `/v1/payment/charge`, has become sluggish. Your logs contain a `latency_ms` field for every request. For a quick first look, you can find the average latency for just that endpoint using the `stats legacy` subcommand.
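For illustration, assume each record in `./api-logs/` is a structured entry shaped roughly like this (a hypothetical layout; only the `endpoint`, `latency_ms`, and `service` fields are used in this guide):
{"timestamp": "2024-05-14T09:21:07Z", "endpoint": "/v1/payment/charge", "service": "payments-db", "latency_ms": 1843, "status": 200}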
# Get a simple average latency for a specific endpoint
loglens stats ./api-logs/ legacy --avg "latency_ms" --where 'endpoint == "/v1/payment/charge"'
In seconds, you get a single, concrete number. That's a useful baseline, but an average can be misleading: a handful of very slow requests can drag the figure up even when the typical request is fast. To truly understand the user experience, you need to see the full picture.
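To make the skew concrete, here is a tiny Python sketch with made-up latencies (purely illustrative; these are not LogLens numbers): one outlier among ten requests is enough to triple the apparent average.
# One slow outlier drags the mean far above the typical request
import statistics

latencies_ms = [80, 82, 79, 85, 81, 84, 80, 83, 78, 2000]
print(round(statistics.mean(latencies_ms), 1))  # 273.2 -- what an average reports
print(statistics.median(latencies_ms))          # 81.5  -- the typical request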
Step 2: The Full Story with `stats describe`
To move beyond a simple average, the `stats describe` subcommand gives you a complete statistical breakdown. It calculates not just the average, but also the median, high percentiles such as p95 and p99, and the maximum. This is crucial for understanding the difference between the typical experience and the worst-case experience.
# Get a full statistical summary for the endpoint's latency
loglens stats ./api-logs/ describe "latency_ms" --where 'endpoint == "/v1/payment/charge"'
The output provides immediate, deep insight:
Statistics for 'latency_ms'
===========================
Metric | Value
---------------------------------
Count | 8210
Average | 254.33
p50 (Median) | 85.10
p95 | 850.75
p99 | 3012.40
Max | 5502.11
This tells a much richer story. The median (`p50`) is a brisk 85ms, meaning half of all requests finish in under 85ms. The average, however, is roughly three times higher at 254ms, and the 99th percentile (`p99`) shows that 1% of requests take more than 3 seconds. You have a long-tail latency problem: the typical request is fine, but the slowest ones are painfully slow.
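If you want to see how those percentile rows relate to raw samples, the idea is simple: sort the values and read off the point below which a given share of requests falls. A conceptual Python sketch with stand-in data (not how LogLens computes its output):
# Percentiles from a sample: compute the cut points, then read off p50/p95/p99
import statistics

latencies_ms = [85, 78, 92, 88, 120, 81, 860, 95, 83, 3100]  # stand-in values
cuts = statistics.quantiles(latencies_ms, n=100, method="inclusive")
print("p50:", cuts[49], "p95:", cuts[94], "p99:", cuts[98])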
Step 3: Finding the Noisy Neighbor
Now you know *what* the problem is, but not *why*. The slowdown might be coming from an overloaded database or a slow downstream microservice. You can use `stats legacy` again, this time with `--count-by`, to pinpoint the source.
Let's find out which downstream service is contributing the most latency to our checkout process by grouping the logs.
# For the slow endpoint, find the average latency grouped by downstream service
loglens stats ./api-logs/ legacy --count-by "service" --avg "latency_ms" --where 'endpoint == "/v1/payment/charge"'
This command groups all matching logs by the `service` field and calculates the average latency for each one. The output gives you an instant leaderboard of performance, immediately highlighting the slowest component in the chain.
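Under the hood this is a group-by average. If it helps to see the shape of the computation, here is a rough Python equivalent (the service names are hypothetical; LogLens does this for you):
# Rough equivalent of --count-by "service" --avg "latency_ms"
from collections import defaultdict
from statistics import mean

records = [  # stand-ins for the parsed, filtered log lines
    {"service": "payments-db", "latency_ms": 1843},
    {"service": "fraud-check", "latency_ms": 95},
    {"service": "payments-db", "latency_ms": 2210},
    {"service": "notifications", "latency_ms": 40},
]

by_service = defaultdict(list)
for rec in records:
    by_service[rec["service"]].append(rec["latency_ms"])

# Print a slowest-first leaderboard, one line per downstream service
for service, values in sorted(by_service.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{service}: avg {mean(values):.1f} ms over {len(values)} requests")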
Stop Guessing, Start Measuring
Performance tuning without data is just guesswork. The `loglens stats` command gives you a powerful, multi-step workflow to diagnose issues right from the command line:
- Get a quick baseline with `stats legacy --avg`.
- Understand the full user impact with `stats describe`.
- Pinpoint the source of the problem with `stats legacy --count-by`.
This workflow turns your logs into a diagnostic tool in their own right. Performance issues can no longer hide in plain sight, and you get the answers you need in seconds.