LogLens Documentation

Welcome to the official documentation for LogLens. Here you'll find everything you need to know to go from a beginner to a power user.

Installation

Install the latest version of LogLens with a single command:

curl -sSL https://download.getloglens.com/install.sh | sh

The script will install the binary to a local directory and add it to your shell's PATH.

Once installed, you can keep LogLens up-to-date by running loglens update.

Configuration

LogLens can be configured by creating a `config.toml` file in the application's configuration directory. This allows you to tailor its behavior to your specific needs, especially for the interactive TUI.

Config File Location:

  • Linux/macOS: ~/.config/loglens/config.toml
  • Windows: C:\Users\YourName\AppData\Roaming\loglens\config.toml
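
On Linux/macOS, you can create the directory and file ahead of time with standard shell commands:

# Create the config directory and an empty config file
mkdir -p ~/.config/loglens
touch ~/.config/loglens/config.toml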

Customizing the TUI Display

The TUI needs to know which fields in your structured logs represent the timestamp, log level, and main message. Since these can vary between logging schemas, you can define a prioritized list of keys for LogLens to check.

# Example ~/.config/loglens/config.toml

[tui]
# LogLens will check for "message", then "msg", then "text" to find the main log content.
message_keys = ["message", "msg", "text"]

# It will check for "level", then "lvl", then "severity" for the log level.
level_keys = ["level", "lvl", "severity"]

# It will check for "timestamp", then "ts", then "@timestamp" for the event time.
timestamp_keys = ["timestamp", "ts", "@timestamp"]

This flexibility ensures the TUI can correctly parse and display your logs, even if they don't use the most common field names.

Core Concept: Structured Logs

LogLens shines when working with structured logs (like JSON or logfmt). While the search command works like a super-fast grep on any text, the Pro commands understand the key-value structure of your logs.

JSON Example:

{"timestamp":"2025-08-26T10:00:00Z", "level":"error", "message":"User login failed", "user_id":"usr_123"}

Logfmt Example:

ts=2025-08-26T10:00:01Z level=info message="Request processed" duration_ms=55

LogLens automatically detects the format, allowing you to write powerful queries against fields like level or duration_ms.
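
For example, the logfmt line above can be filtered on its numeric duration field with a single query (a small sketch; the ./logs path is illustrative):

# Match structured entries whose duration_ms value exceeds 50
loglens query ./logs 'duration_ms > 50'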

Core Concept: Working with Compressed Files

You don't need to decompress your .gz log files before analysis. LogLens handles them automatically. All commands that read logs (search, query, stats, and tui) will transparently decompress Gzip files on the fly, saving you time and disk space.

# This works out-of-the-box, no extra flags needed!
loglens query "/var/log/archive/app-2025-08-27.log.gz" --query 'level == "error"'

Core Concept: Query Syntax

The Pro query syntax is simple yet powerful. It allows you to filter your structured logs based on field values using natural, readable operators.

  • Comparison: ==, is (equals), !=, isnot (not equals), >, <, >=, <=.
  • Case-Insensitive: Use ~= for case-insensitive string matching and !~= for negation (e.g., level ~= "error").
  • Existence: Check if a field is present with exists or absent with !exists (e.g., user_id exists).
  • Text Search: Use contains and !contains for substring matching on structured fields or raw text.
  • Logic: Use && (or AND) and || (or OR) to combine conditions. AND has higher precedence.
  • Values: Wrap string values in double quotes (e.g., "error"). Numbers do not need quotes.

# Example: Find all error or warning logs from a specific service that took
# longer than 1 second. AND binds tighter than OR, so each branch is spelled
# out in full.
'level is "error" and service is "api-gateway" and duration_sec > 1 or level is "warn" and service is "api-gateway" and duration_sec > 1'
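
A few more standalone query strings, one per operator family (field names are illustrative):

# Case-insensitive match on the log level
'level ~= "ERROR"'

# Only logs that carry a user_id field
'user_id exists'

# Substring match on a field's value
'message contains "timeout"'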

Querying Nested Fields

For deeply nested JSON objects, you can query fields using a JSON Pointer path. Start the field name with a forward slash / and separate nested keys with slashes.

# Given a log like: {"data": {"user": {"id": 123}}}
# Query the nested 'id' field
loglens query ./logs --query '/data/user/id == 123'

Querying Raw Text

You can also query the raw, unstructured text of a log line by using the special text field. This is useful for finding logs that contain certain keywords but may not be fully structured.

# Find high-latency logs that also mention a database timeout
'latency_ms > 500 and text contains "database timeout"'

Core Concept: Time Formats

The --since and --until arguments accept flexible time formats.

  • Relative: Human-readable strings like "1h ago", "30m", "2 days ago".
  • Absolute: Full RFC3339 / ISO 8601 timestamps like "2025-08-22T18:00:00Z".
  • Keywords: The special keyword "now" can be used to refer to the current time.
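
For example, --since and --until can be combined to bracket an exact window (the path is illustrative):

# Errors between an absolute start time and one hour ago
loglens query ./logs 'level == "error"' --since "2025-08-22T18:00:00Z" --until "1h ago"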

Command Reference

loglens query PRO

Query structured logs with an advanced query language.

Arguments:

  • path: [Required] The directory or file path to query (including .gz files).
  • query: [Required] The query string (e.g., 'level == "error"').
  • --since: Filter logs since a specific time (e.g., "1h ago", "2025-08-22T18:00:00Z").
  • --until: Filter logs until a specific time (e.g., "now", "10m ago").
  • --raw: Output only the raw log line, without any formatting.
  • --json: Output matching lines as compact JSON objects.
  • -C, --context N: Show N lines of context around a match.
  • -B, --before N: Show N lines of context before a match.
  • -A, --after N: Show N lines of context after a match.
  • --squash: Squash consecutive, identical matching lines.

Example:

# Find all 5xx errors that happened in the last day
loglens query /var/log/ 'status_code >= 500' --since "1 day ago"
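
The context and output flags compose with any query. For instance (path illustrative):

# Show two lines of context around each error and squash repeated matches
loglens query ./logs 'level == "error"' -C 2 --squash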

loglens stats PRO

Perform statistical analysis on log data. Use the global --where flag to filter the logs before calculating statistics.

Subcommand: summary

Provides a full, dynamic summary of the entire log dataset. It automatically discovers numeric and low-cardinality categorical fields and provides statistics for them.

Arguments:

  • path: [Required] The directory or file path to analyze.
  • --top N: The number of top values to show for categorical fields (Default: 5).

Example:

# Get a dynamic summary of all fields in the production logs
loglens stats ./logs/production/ summary
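
The --top flag can be combined with the global --where filter to summarize a subset of the logs:

# Show the top 10 values per categorical field, for one service only
loglens stats ./logs/production/ summary --top 10 --where 'service == "api-gateway"'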

Subcommand: describe

Provides a detailed statistical summary (count, sum, min, max, average) for a single numeric field. This is ideal for analyzing performance metrics.

Arguments:

  • path: [Required] The directory or file path to analyze.
  • field: [Required] The numeric field to describe (e.g., "latency_ms").

Example:

# Get a performance summary for the 'latency_ms' field
loglens stats ./logs describe latency_ms --where 'service == "api-gateway"'

Example Output:

Statistics for 'latency_ms'
===========================
Metric      |           Value
------------------------------
Count       |            1532
Sum         |        18432.50
Min         |            5.12
Average     |           12.03
Max         |         5012.77

Subcommand: legacy

Provides simple aggregations for counting by a category or finding an average.

Arguments:

  • path: [Required] The directory or file path to analyze.
  • --count-by: The field to count unique values by (e.g., --count-by "level").
  • --avg: The field to calculate the average value of (e.g., --avg "response_time_ms").

Example:

# Count the number of logs by level for the payments service
loglens stats ./logs legacy --count-by "level" --where 'service == "payments"'
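
The --avg flag works the same way. For example (field name illustrative):

# Average response time for the payments service
loglens stats ./logs legacy --avg "response_time_ms" --where 'service == "payments"'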

loglens count PRO

Quickly count the number of lines that match a query, without printing the lines themselves. It's an efficient way to get a total number of occurrences.

Arguments:

  • path: [Required] The directory or file path to search in.
  • query: [Required] The query string to match.
  • --since: Count logs since a specific time.
  • --until: Count logs until a specific time.

Example:

# Count the number of critical errors in the last 2 hours
loglens count /var/log/ --query 'level == "critical"' --since "2h ago"

loglens watch PRO

Watch files and tail logs in real-time, with optional filtering.

Arguments:

  • path: [Required] The directory or file path to watch.
  • --where: An optional query filter to apply to the live logs.
  • --highlight: A comma-separated list of terms to highlight in the output.
  • --json: Output matching lines as compact JSON objects.
  • --squash: Squash consecutive, identical matching lines from the stream.

Example:

# Watch for any new critical errors in the production API logs
loglens watch ./prod/api.log --where 'level == "critical"'

Example with highlighting:

# Watch for any new errors and highlight the words "timeout" and "database"
loglens watch ./prod/api.log --where 'level == "error"' --highlight="timeout,database"

loglens tui PRO

Start an interactive Terminal User Interface for log exploration. It supports filtering, browsing multiple files, and viewing stats dynamically. It can also open .gz files directly.

Arguments:

  • path: [Optional] The directory or file path to explore (can be .gz).
  • -f, --follow: Follow the file and append new entries in real-time.

Example:

# Open the TUI to explore all logs in the current directory
loglens tui .
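
Combined with --follow, the TUI behaves like an interactive tail (path illustrative):

# Explore a live log file and append new entries as they arrive
loglens tui ./prod/api.log --follow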

loglens license

Manage your LogLens Pro license.

  • activate <key>: Activate your license key for Pro features.
  • purchase: Get a link to purchase a Pro license.
  • status: Check the status of your current license.

Example:

# Activate your purchased license key
loglens license activate YOUR-LICENSE-KEY-HERE
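
You can verify the activation afterwards:

# Check the status of the current license
loglens license status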

loglens compress

Compress a log file using Gzip. By default, this command replaces the original file with its compressed version (e.g., app.log becomes app.log.gz). Use the --output flag to keep the original.

Arguments:

  • path: [Required] The file to compress (e.g., "app.log").
  • -o, --output: Optional output directory or full file path. If specified, the original file is not deleted.

Example:

# Compress and replace a large log file
loglens compress large-app.log
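
To keep the original file, pass an output location (directory path illustrative):

# Write the compressed copy to ./archive/ and keep large-app.log in place
loglens compress large-app.log -o ./archive/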

loglens decompress

Decompress a Gzip-compressed log file. By default, this command replaces the original file. Use the --output flag to keep the original .gz file.

Arguments:

  • path: [Required] The compressed file to decompress (e.g., "app.log.gz").
  • -o, --output: Optional output directory or full file path. If specified, the original file is not deleted.

Example:

# Decompress an archived log file to a specific directory
loglens decompress ./archive/app.log.gz -o ./unarchived/

loglens update

Checks for a new version of LogLens and automatically updates it if one is available and your license is eligible. This is the easiest way to stay up-to-date.

Arguments:

  • --force: Force re-installation even if the tool is up-to-date.

Example:

# Check for updates and install if available
loglens update
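
If an installation looks broken, --force reinstalls the current version:

# Reinstall even when already up-to-date
loglens update --force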

Cookbook: Practical Recipes

This section provides step-by-step solutions to common, real-world problems using LogLens.

Recipe: Debugging a 5xx Server Error

Problem: Your monitoring system alerted you to a spike in 5xx errors in the last hour, and you need to find the root cause quickly.

Step 1: Isolate the Initial Errors

First, use query to find all the logs that represent the initial server errors. We'll filter by status code and the time window.

loglens query ./logs/production/ --query 'status_code >= 500' --since "1h ago"

This gives you a list of the failed requests. Look for a common field like a trace_id or request_id that you can use to track a single request through the entire system.

Example output might show a log with "trace_id": "trace_abc_123".

Step 2: Trace the Problematic Request

Now, use the basic search command to find every single log line associated with that specific trace ID across all log files. This is extremely fast and doesn't require the logs to be structured.

loglens search ./logs/ "trace_abc_123"

The output will show the complete lifecycle of the failed request, from the initial API gateway entry, through various microservices, to the final database query that likely caused the failure. This allows you to pinpoint exactly where the error occurred.

Recipe: Monitoring a Specific User's Activity

Problem: A user is reporting strange behavior in their account. You need to watch their activity in real-time to see what's happening as they reproduce the issue.

Step 1: Find the User's ID

First, identify the user's unique identifier in your logs (e.g., user_id, customer_uuid).

Step 2: Watch Their Logs Live

Use the watch command with a --where filter to tail the logs and only show entries matching that user's ID. This command will run continuously, printing new matching logs as they are written.

loglens watch /var/log/app.log --where 'user_id == "usr_fgh_456"'

Now, ask the user to perform the actions that are causing the issue. As they navigate the application, you will see a clean, filtered stream of their specific activities, making it easy to spot anomalies or errors as they happen.

Recipe: Analyzing API Endpoint Performance

Problem: You suspect that the checkout API endpoint has become slow after a recent deployment, and you need to verify it with data.

Step 1: Get a Full Performance Picture

A simple average can be misleading, as a few very slow requests can hide otherwise good performance. Use the stats describe command to get a full statistical summary.

loglens stats ./api_logs/ describe "response_time_ms" --where 'endpoint == "/api/v1/checkout"'

The output gives you a richer story than a single average. The `Min`, `Max`, and `Average` values show the performance range. A high `Max`, or a large gap between the `Average` and the values you typically expect, is a clear sign of performance outliers that need investigation.

Step 2: Isolate Slowdowns to a Specific Service

If you find that performance is poor, you can check if the problem is isolated to a specific microservice. Run the same command, but this time add a filter for a downstream service that you suspect might be the bottleneck.

# Check the latency specifically for calls involving the 'payment-service'
loglens stats ./api_logs/ describe "response_time_ms" --where 'endpoint == "/api/v1/checkout" AND service == "payment-service"'

By comparing the performance statistics for different services, you can quickly determine where the slowdown is occurring in your request chain.