Reports
The Reports page provides a centralized view of all test executions in a BlinqIO project. Each execution represents a complete test run that may include multiple scenarios and example rows.
Types of Reports
BlinqIO supports multiple types of reports, all accessible from the Reports page. These reflect different ways tests can be executed or generated:
Run Reports
Reports from manual or automated test runs, showing the scenarios, executed steps, logs, and outcomes.
AI Recovery Reports
Reports for test runs where AI detected and fixed failures, including recovered scenarios and retraining details.
AI Generation Reports
Reports generated from test runs where scenarios were created or modified by AI, showing details of AI-generated steps and their execution results.
Execution Summary
At the top of the Reports page, you can view a quick summary of all test executions in the project. This section helps users get an immediate sense of test activity and overall project health.
The summary displays:
Metric | Description |
---|---|
Total Executions | The total number of test executions run in the project. |
Successful | Number of executions where all scenarios passed. |
Failed | Number of executions where one or more scenarios failed. |
These numbers automatically update as new tests are executed.
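As a rough illustration, the three summary counts follow directly from the per-scenario results of each execution: an execution is Successful only when every scenario in it passed, and Failed otherwise. The sketch below is a hypothetical model for illustration only; the `Execution` structure and field names are assumptions, not BlinqIO's actual data model or API.

```python
# Hypothetical sketch: deriving the summary counts from execution records.
# The Execution structure below is an assumption for illustration.
from dataclasses import dataclass

@dataclass
class Execution:
    # One status per scenario in the run, e.g. "passed" or "failed"
    scenario_statuses: list[str]

def summarize(executions: list[Execution]) -> dict[str, int]:
    # Successful: every scenario in the execution passed
    successful = sum(
        1 for e in executions
        if all(s == "passed" for s in e.scenario_statuses)
    )
    return {
        "total": len(executions),
        "successful": successful,
        # Failed: one or more scenarios failed
        "failed": len(executions) - successful,
    }

runs = [
    Execution(["passed", "passed"]),   # all scenarios passed -> Successful
    Execution(["passed", "failed"]),   # one scenario failed  -> Failed
]
print(summarize(runs))  # {'total': 2, 'successful': 1, 'failed': 1}
```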
Below the summary, each test execution appears as a row in a table. This table provides detailed information about every test run, including its status and environment.
Column | Description |
---|---|
Execution Info | A unique identifier or summary of the run. |
Execution Date | Timestamp when the execution started. |
Execution Type | Indicates how the run was triggered (e.g., Local Execution, Local AI Generation). |
Status | The execution outcome: Passed, Failed, or Error fixed by AI. |
Pass Ratio | Number of scenarios that passed versus total executed. |
User | The person who initiated the run. |
Environment | Target environment or environment URL where the test was executed. |
You can also filter the executions by time range using quick filters like Last 24 hours, Last week, Last 2 weeks, or Show all. These help you focus on recent or specific runs efficiently.
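The quick filters above amount to keeping only executions whose start time falls within a rolling window. The snippet below is a hypothetical sketch of that logic, not BlinqIO's implementation; the execution records and the `execution_date` field name are assumptions for illustration.

```python
# Hypothetical sketch of the time-range quick filters
# (Last 24 hours, Last week, Last 2 weeks, Show all).
from datetime import datetime, timedelta

QUICK_FILTERS = {
    "Last 24 hours": timedelta(hours=24),
    "Last week": timedelta(weeks=1),
    "Last 2 weeks": timedelta(weeks=2),
    "Show all": None,  # no cutoff: keep everything
}

def filter_executions(executions, label, now=None):
    """Keep executions whose start time falls inside the selected window."""
    window = QUICK_FILTERS[label]
    if window is None:
        return list(executions)
    now = now or datetime.now()
    cutoff = now - window
    return [e for e in executions if e["execution_date"] >= cutoff]

now = datetime(2024, 1, 15, 12, 0)
runs = [
    {"id": 1, "execution_date": datetime(2024, 1, 15, 3, 0)},   # within 24h
    {"id": 2, "execution_date": datetime(2024, 1, 10, 9, 0)},   # older
]
recent = filter_executions(runs, "Last 24 hours", now=now)
print([e["id"] for e in recent])  # [1]
```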
Opening a Report
Click on any execution row to open a detailed report for that test run. Inside the execution, you’ll find all the test scenarios that were part of that run.
Each scenario provides the following details:
- Start Time – The time the scenario execution began.
- Status – Whether the scenario Passed, Failed, or was Recovered using AI.
- Error Message – Specific error message if the scenario failed.
- Root Cause – AI-generated explanation of why the failure occurred.
- Retraining Reason – Reason for AI retraining, if applicable.
- Duration – Total time taken for the scenario to run.
- Past Executions – Color-coded history of the last 5 runs for the same scenario.
- User Comment – Optional comment area to document findings or decisions.
Click the dropdown arrow next to a scenario name to expand a list of the most recent executions for that specific scenario.
You can also expand each scenario to view a detailed step-by-step breakdown.
Scenario Step Details
Within each scenario, BlinqIO captures every step and substep in detail to help with debugging and analysis.
For every step, you can view:
- Screenshots – Captured at key points during the step for visual reference.
- Console Logs – Any relevant console output from the browser or application.
- Network Logs – Records of network requests and responses triggered during the step.
- Errors – Detailed error messages and stack traces, if a step fails.
- Substeps – Nested steps, shown when the action involves multiple interactions or validations.
This full breakdown allows users to trace exactly what happened during execution and where issues may have occurred.