Logs and Events in Studio

Table of Contents
- Introduction
- What is Logs and Events and when to use it?
- How to access and generate logs
- Getting to know the Logs and Events screen
- How to locate a specific conversation
- How to interpret events and actions
- How to view errors
- How to view trackings
- How to analyze executions with AI
- How to use Logs and Events with Unit Tests
- How to consult and clear contact data
- Debugging best practices
- Common problems
- Frequently Asked Questions (FAQ)
- Limitations and points of attention
- Quick glossary

Introduction

The Logs and Events screen helps Studio users monitor and investigate the execution of conversational flows. With it, you can view exchanged messages, generated events, executed actions, errors, trackings, technical payloads, and token consumption in interactions with AI agents. The new experience organizes this information as an investigation journey: first you locate the conversation, then you follow the interaction timeline, expand the execution tree, and analyze the technical details of each event.

What is Logs and Events and when to use it?

Logs and events are records generated during the execution of a conversational flow. They help you understand what happened in a conversation, which path the flow followed, and which technical events were generated at each stage.

Use Logs and Events to:
- Test flows under construction;
- Investigate conversations of published bots;
- Validate whether the flow followed the expected path;
- Identify errors or unexpected behavior;
- Consult payloads, JSONs, and variables;
- Verify trackings;
- Analyze AI agent executions;
- Monitor token consumption;
- Support troubleshooting before or after publishing a flow.

The screen helps answer questions such as: Which message started the execution? Which block or agent was triggered? Did the flow follow the expected path? Was there an error? Was any tracking recorded? Was there a tool call? How many tokens were consumed? What output was generated?

How to access and generate logs

To use Logs and Events, open the bot or flow you want to investigate:
1. In Studio, open the desired flow.
2. In the top right corner of the screen, click the testing/laboratory icon.
3. Select Logs and Events.
4. The screen opens with the list of available conversations and events.

To generate new logs, start an interaction with the bot. You can do this by testing a flow under construction, a published chatbot, or a real conversation on an integrated channel. In published bots, use a known contact, filter by channel, and note the approximate time of the interaction so the conversation is easier to find.

Getting to know the Logs and Events screen

The new screen is organized into four main areas:
- Conversation list: shows the interactions available for investigation. Select a conversation to view its timeline and associated events.
- Conversation timeline: presents the interaction in a conversational format, so you can understand the context before analyzing the technical details. It can display messages sent by the user, bot responses, AI agent responses, and other elements related to the execution.
- Execution tree: shows the technical events and actions generated during the conversation. By expanding its items, you can follow the path the flow took.
- Side panel: displays the details of the selected event. Depending on the event type, you can consult input, output, JSON, payloads, trackings, errors, Tool Calls, execution time, and token consumption.
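To make these fields concrete, the sketch below shows one possible shape for an event record as it might appear in the side panel's JSON view. This is a minimal illustration with hypothetical field names; the actual schema displayed by Studio may differ.

```typescript
// Hypothetical shape of a single execution event as inspected in the
// side panel. Field names are illustrative, not the real Studio schema.
interface ExecutionEvent {
  type: string;              // e.g. "SendMessage", "Request HTTP", "LLM Invoke"
  blockName?: string;        // flow block where the event occurred
  timestamp: string;         // when the event was recorded
  input?: unknown;           // data received by the action or agent
  output?: unknown;          // data produced by the action or agent
  payload?: Record<string, unknown>;              // raw technical payload (JSON)
  error?: { message: string; details?: unknown }; // present on failures
  durationMs?: number;       // execution time, when available
  tokens?: { input: number; output: number; total: number }; // AI events only
}
```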
How to locate a specific conversation

In testing environments, it is usually simple to locate the recently created conversation. In published bots, or bots with a high volume of messages, you may need to use filters.

Using the channel filter: use the channel filter to view only conversations originating from a specific channel, such as Blip Chat, WhatsApp, or another integrated channel.

Using contact search: when available, use the contact search to locate a conversation associated with a specific user. You can search by user id or name.

Recommendations for bots with many conversations. When testing bots in production:
- Use a known contact;
- Send a message that is easy to identify;
- Note the approximate time of the test;
- Filter by the channel used;
- Search for the contact;
- Wait a few seconds if the conversation does not appear yet.

What to check if the conversation does not appear. Check whether:
- the message was actually sent;
- the correct channel is selected;
- the bot is published or being tested correctly;
- you are in the correct bot or flow;
- active filters are hiding the conversation;
- the screen needs to be refreshed;
- the logs are arriving with a delay.

How to interpret events and actions

The execution tree shows the technical events associated with the selected conversation. By expanding the items in the tree, you can follow the path the flow took and consult the details of each stage. Some of the events you may find:

Event type | What it means | Visual marker
User Input | Message or input sent by the user. | Gray box
Block Name | Name of each block the user navigates through. | Blue icon
Output/Input Actions | Group of actions executed in that block. | Green icon
SendMessage/SendRawMessage/SendMessageFromHttp | Message sent by the bot to the user. | Yellow icon
Execute Script/ExecuteScriptV2/ExecuteBlipFunction | Execution of a script configured in the flow. | Yellow icon
Request HTTP | HTTP call to an external service. | Yellow icon
Track Event/TrackContactsJourney | Tracking event recorded by the flow. | Yellow icon
ManageList | Management of a distribution list. | Yellow icon
MergeContact | Contact update. | Yellow icon
Redirect | Redirect between subbots. | Yellow icon
ForwardMessageToDesk/ForwardToDesk/LeavingFromDesk | Desk-specific events. | Yellow icon
SetVariable | Variable assignment. | Yellow icon
ProcessCommand | Processing of a command. | Yellow icon
ProcessContentAssistant | NLP processing. | Yellow icon
ForwardToAgent | Agent input events. | Yellow icon
KnowledgeBaseConsult | Knowledge base consultation. | Yellow icon
Agent Run | AI agent execution. | Yellow icon
LLM Invoke | Language model call. | Yellow icon
Tool Call | Tool call performed by an agent. | Yellow icon
Error | Failure recorded during execution. | Red

To inspect an event:
1. Select a conversation.
2. Expand the execution tree.
3. Click the event you want to analyze.
4. Consult the information displayed in the side panel.

Use the JSON or technical payload when you need to confirm sent or received fields, validate variable values, analyze integration returns, or share evidence with support and engineering teams.

How to view errors

When an error is recorded, it may appear associated with an execution event or action. To investigate errors:
1. Locate the conversation;
2. Expand the execution tree;
3. Look for events marked as error or failure;
4. Click the event;
5. Consult the side panel;
6. Check the error message, payload, related action, and previous context.

When analyzing an error, try to answer:
- Which message started the execution?
- Which block was being executed?
- Which action failed?
- Did the error occur before or after an external call?
- Did the error occur in a deterministic block, an AI agent, or a Tool Call?
- Is there a return payload?
- Is there a clear error message?
- Does the error repeat in new tests?
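As an illustration of the kind of payload these checks apply to, here is what a failed HTTP call event might look like. All field names, values, and the block name are made up; the real structure depends on the action and the integration involved.

```typescript
// Hypothetical payload of a failed "Request HTTP" event.
// Every name and value here is illustrative only.
const failedHttpEvent = {
  type: "Request HTTP",
  blockName: "Check order status", // illustrative block name
  input: { method: "GET", uri: "https://api.example.com/orders/123" },
  output: { statusCode: 504, body: "Gateway Timeout" },
  error: { message: "The external service did not respond in time" },
};

// Typical questions when reading a payload like this:
// - did the request carry the expected fields and variable values?
// - does the status code point at the bot (4xx) or the service (5xx)?
// - does the error repeat on a new test, or was it intermittent?
```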
How to view trackings

Trackings help monitor important flow events, such as conversions, completed stages, user actions, or business events. In the new experience, trackings appear integrated into event inspection, so they can be analyzed within the execution context.

To view trackings:
1. Select a conversation;
2. Expand the execution tree;
3. Click an event or block;
4. Access the trackings area in the side panel, when available.

A tracking can contain information such as: category; action; event name; additional properties; sent values; execution context (see the sketch at the end of this section).

Use trackings to validate whether important events are being recorded correctly, such as the start of a journey, stage completion, a user click or choice, a conversion, a business error, or passage through a specific block.
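The sketch below shows a hypothetical tracking payload carrying the fields just listed. The field names and values are illustrative only and may not match the exact structure Studio displays.

```typescript
// Hypothetical tracking payload, mirroring the fields listed above.
const tracking = {
  category: "checkout",        // grouping of the business event
  action: "payment-confirmed", // what happened
  extras: {                    // additional properties sent with it
    orderId: "123",
    channel: "whatsapp",
  },
};
```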
How to analyze executions with AI

In flows with AI agents, Logs and Events helps you understand not only the flow path but also the agent's behavior. You can analyze:
- which agent was triggered;
- which input reached the agent;
- which instructions or prompt were considered;
- which tools were available;
- whether any Tool Call was executed;
- which response was generated;
- whether there was an error;
- how long the execution took;
- how many tokens were consumed.

Agent Run

Agent Run represents the execution of an AI agent. It indicates that the flow triggered an agent to interpret the context, execute a task, generate a response, or trigger tools. Use this event to understand when an agent was triggered and which part of the conversation relates to its execution.

LLM Invoke

LLM Invoke represents the call to the language model. This is the event where the input sent to the model, the generated output, the execution time, and the token consumption typically appear. Use this event to analyze:
- what was sent to the model;
- what response was generated;
- how long the execution took;
- how many tokens were consumed;
- whether the agent's behavior is consistent with what is expected.

Tool Calls

Tool Calls are calls to tools performed by AI agents. They can represent, for example: consulting an API; searching a knowledge base; setting a contact; recording information; executing an external action; triggering a tool configured in the agent. When an agent uses a tool, the Tool Call appears associated with the agent's execution. When analyzing a Tool Call, check:
- which tool was called;
- which parameters were sent;
- which return was received;
- whether there was an error;
- how long the call took;
- whether the call impacted the agent's final output.

Tool Calls and trackings

When a tracking is recorded directly as a Track Event, it appears in the trackings area. When the tracking happens through a Tool Call, you may need to check the Tool Call itself in the execution tree and, if necessary, validate the event in complementary reports.

Token consumption

Tokens are units of text processed by the language model. Everything the agent reads and writes is converted into tokens. Token consumption can help you understand: execution cost; the size of the context sent to the agent; the size of the generated response; the impact of conversation history; the impact of tools and instructions; possible optimization opportunities.

You can view tokens in events related to the execution of AI agents, especially in events such as LLM Invoke.

Indicator | What it means
Input tokens | Quantity of tokens sent to the model. Includes instructions, context, history, and user input.
Input cached | Portion of the input reused from cache, when applicable.
Output tokens | Quantity of tokens generated by the model in the response.
Total tokens | Total tokens processed in the execution.

A worked example of how these indicators combine appears after the list below. Consumption may increase when:
- the prompt is very long;
- the agent has many instructions;
- the conversation history is large;
- many tools are available;
- the context base sent to the agent is extensive;
- the generated response is long;
- there are multiple LLM calls in the same execution.
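The sketch below works through the arithmetic of these indicators on made-up numbers, assuming the common convention that total tokens equal input tokens plus output tokens (with the cached portion counted as part of the input).

```typescript
// Illustrative token accounting for a single LLM Invoke.
// All numbers are invented for the example.
const usage = {
  inputTokens: 1_200, // instructions + context + history + user input
  inputCached: 400,   // portion of the input served from cache
  outputTokens: 350,  // tokens generated in the response
};

// Total tokens processed in the execution (assumed convention).
const totalTokens = usage.inputTokens + usage.outputTokens; // 1550

// A high input share usually means the context (prompt, history,
// tools) dominates the cost, which is where to optimize first.
const inputShare = usage.inputTokens / totalTokens; // ≈ 0.77
```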
How to investigate an unexpected AI response

When an agent responds unexpectedly:
1. Open the conversation in Logs and Events;
2. Locate the agent execution;
3. Open the LLM Invoke event;
4. Check the input that was sent;
5. Check the generated output;
6. Analyze the prompt or instructions, when available;
7. See whether the history influenced the response;
8. Check whether any Tool Call was triggered;
9. Consult tokens and execution time;
10. Adjust the flow, prompt, tools, or knowledge base as needed.

How to use Logs and Events with Unit Tests

Logs and Events and Unit Tests have different objectives, but they can be used together.

Functionality | When to use
Unit Tests | To validate whether an input generates the expected output.
Logs and Events | To technically investigate the path taken and the events generated during execution.

Use both together when a unit test fails and you need to understand why. Recommended flow:
1. Run the unit test;
2. If the test fails, open Logs and Events;
3. Locate the conversation or execution generated by the test;
4. Expand the execution tree;
5. Check where the behavior diverged from what was expected;
6. Analyze input, output, errors, trackings, Tool Calls, and tokens.

Some flows depend on keywords, triggers, or initial conditions to route the conversation correctly. If the unit test does not include the input needed to trigger that path, the flow may never reach the expected agent or block, as the sketch below illustrates.
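The sketch below illustrates that keyword point conceptually. Unit tests are configured in the Studio interface, so this structure is only an illustration of the idea, not a real API.

```typescript
// Conceptual sketch of a unit test case; not a real Studio structure.
const testCase = {
  // If the flow only routes to the "Order status" block when the user
  // mentions "order", a test input without that keyword never reaches it.
  input: "I want to check my order",
  expectedOutput: "Sure! What is your order number?",
};

// When the real output diverges from expectedOutput, open Logs and
// Events, find the execution generated by the test, and walk the tree
// to the first event where the path differs from the expected one.
```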
How to consult and clear contact data

During testing, you may need to clear a contact's data to restart the journey and validate the flow from the beginning. On the Logs and Events screen, when you select a conversation or contact, you can access details related to the contact, such as variables and context information. To clear or restore contact data:
1. Select the desired conversation.
2. Click the header or contact details area.
3. Open the contact information side panel.
4. Locate the option to restore or clear data.
5. Confirm the action, if applicable.
6. Run the test again.

Debugging best practices

Before publishing or changing a flow, use Logs and Events to validate that the execution happens as expected. Check whether:
- the conversation appears in the Logs and Events list;
- the timeline correctly represents the interaction;
- the execution tree follows the expected path;
- the correct blocks were executed;
- the redirects happened correctly;
- there are no unexpected errors;
- the expected trackings were recorded;
- HTTP calls returned success;
- scripts executed correctly;
- AI agents received the correct input;
- Tool Calls were triggered when expected;
- AI responses are coherent;
- token consumption is within expectations;
- execution time is acceptable.

To report a problem, include:
- bot name;
- environment;
- channel used;
- approximate time of the test;
- contact used;
- message sent by the user;
- screenshot or video of the screen;
- event or error observed;
- relevant JSON or payload, when possible;
- expected behavior;
- observed behavior.
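As an optional convention, the evidence above can be collected in a single structured object before opening a ticket, which makes it easy to paste into a support channel. Everything in the sketch below is a made-up example.

```typescript
// Illustrative evidence bundle for a bug report; all values invented.
const bugReport = {
  bot: "my-bot",
  environment: "production",
  channel: "WhatsApp",
  approximateTime: "2026-05-12T18:30Z",
  contact: "5511999999999",
  messageSent: "I want to check my order",
  observedEvent: "Request HTTP returned 504",
  expectedBehavior: "Bot replies with the order status",
  occurredBehavior: "Bot replied with the fallback message",
  attachments: ["screenshot.png", "payload.json"],
};
```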
Common problems

1. No logs appear
Possible causes: no conversation was started; the bot did not receive a message; the selected channel is not the correct one; a filter is active; there is a rendering delay; the opened bot or flow is not the one used in the test.
What to do: send a new message through the test chat; check that you are in the correct bot; remove active filters; wait a few seconds; refresh the screen; try again.

2. I can't find my conversation in production
Possible causes: many conversations are happening at the same time; the contact was not correctly identified; the filtered channel is incorrect; the conversation has not been rendered yet.
What to do: filter by the channel used; use the contact search; look for the approximate time of the interaction; send a new, easy-to-identify test message; wait for the list to update.

3. The filter does not open or does not respond
Possible causes: intermittent interface behavior; local session conflict; loading delay; browser problem.
What to do: wait a few seconds; close and reopen the test chat; refresh the page; try again; record a video if the problem persists; report it to the responsible team.

4. The order of events seems incorrect
Possible causes: events may arrive from different sources; there may be a rendering delay; sorting may depend on the available timestamp; some AI events and deterministic events may be recorded at different times.
What to do: check the details of each event; consult the time and type of action; compare with the conversation timeline; rerun the test, if necessary; report it if the order hinders the analysis.

5. The expected tracking did not appear
Possible causes: the flow did not pass through the block that records the tracking; the tracking was not configured correctly; the tracking was done via Tool Call; the event does not appear in the Trackings tab but does appear in the execution tree; there is a rendering delay.
What to do: confirm that the flow passed through the expected block; check the execution tree; consult the trackings area; look for related Tool Calls; validate in external reports, if necessary.

6. Token consumption seems high
Possible causes: long prompt; extensive conversation history; many instructions; many tools available; long response; multiple LLM calls; large context sent to the agent.
What to do: open the LLM Invoke event; check input tokens; check output tokens; see whether the history is very extensive; review the prompt and instructions; evaluate whether unnecessary tools are available; compare different executions.

7. The unit test failed, but I don't know why
The unit test shows the validation result but may not show all the investigative details of the execution.
What to do: open Logs and Events in parallel; locate the execution generated by the test; expand the tree; check at which stage there was a divergence; analyze input, output, errors, Tool Calls, and tokens; confirm that the test included the necessary triggers or keywords.

Frequently Asked Questions (FAQ)

Does Logs and Events work in published bots?
Yes. In published bots, the screen can display conversations from multiple users. Use filters, contact search, channel, and approximate time to locate the desired conversation.

Does Logs and Events replace Beholder?
The new experience centralizes log investigation directly in Studio and was created to make the debugging process clearer, more accessible, and integrated into flow construction.

Does Logs and Events save history?
Logs and Events was primarily designed to support debugging and inspection of executions. Historical data availability may vary depending on the type of information, environment, channel, and the functionality's implementation. For auditing or consolidated historical analysis, use the reports and dashboards recommended by the responsible team.

What is the difference between Logs and Events and Unit Tests?
Unit Tests validate whether an input generates the expected output. Logs and Events helps you technically investigate the path taken and the events generated during execution.

Where do I see token consumption?
In events related to AI execution, especially in LLM Invoke.

What are input tokens?
They are the tokens sent for the model to process, including instructions, context, history, available tools, and the user's message.

What are output tokens?
They are the tokens generated by the model in the response.

What is input cached?
It is the portion of the input that may have been reused from cache by the model, when applicable.

Why does token consumption increase throughout the conversation?
Because the agent may receive more history and context as the conversation progresses. The more information sent to the model, the higher the token consumption tends to be.

Do Tool Calls appear on the screen?
Yes. When there are recorded Tool Calls, they may appear in the execution tree, associated with the agent's execution.

Do Tool Calls appear in the Trackings tab?
When a tracking is recorded directly as a Track Event, it appears in the trackings area. When it occurs via a Tool Call, you may need to consult the Tool Call itself in the execution tree.

What does “inactivity” mean on the timeline?
The inactivity indication represents a flow behavior or event, not a message typed by the user.

How do I clear contact data to test again?
Access the contact details through the selected conversation and use the option to restore, clear, or reset the contact data.

What should I do when I find a bug?
Record the bot, environment, channel, time, contact, sent message, screenshots or video, error event, relevant payload, expected behavior, and observed behavior. Then send the material to the support channel or the team responsible for the functionality.

Limitations and points of attention

Before using Logs and Events, consider the following points:

Point of attention | What it means
The screen supports real-time debugging | Use Logs and Events to investigate executions as they happen or while the screen is open.
Can be used in published flows | The functionality can also support technical analysis of flows in production, provided the investigation happens in real time.
Data stays in local storage | The logs currently displayed are stored locally in the browser.
Logs are not persisted in a database | When the screen is reloaded, the displayed logs are reset.
It is not an analytics tool | The screen does not generate dashboards, historical reports, consolidated metrics, or retroactive monitoring.
Does not replace analytical reports | For historical analysis, aggregated metrics, or persistent observability, use the reports and dashboards designed for that purpose.
Coverage can still evolve | In complex scenarios, the logs may not clearly show the root cause or the event that led the flow to an exception.

The purpose of the new experience is to centralize the technical debugging process in Studio, gradually reducing reliance on parallel tools such as Beholder and the Blip Chat debug tab.

Quick glossary

Term | Meaning
Log | Technical record of an execution.
Event | Occurrence recorded during the conversation or flow.
Timeline | Visualization of the conversation in order of interaction.
Execution tree | Expandable structure with execution actions and events.
Payload | Technical content sent or received by an event.
JSON | Structured format used to represent technical data.
Tracking | Event used to track actions, metrics, or stages.
Agent Run | AI agent execution.
LLM Invoke | Language model call.
Tool Call | Tool call made by an agent.
Input tokens | Tokens sent to the model.
Output tokens | Tokens generated by the model.
Input cached | Portion of the input reused from cache.
Total tokens | Total tokens processed.
Debug | Process of investigating a behavior or error.
Deterministic flow | Flow based on previously defined rules and blocks.
AI flow | Flow that uses agents or language models.

Need more help? Explore our content at Blip Academy or Blip Community, watch tutorials on our YouTube channel, or clear your doubts in our service channel 😃

Related articles
- Unit Tests
- Studio: Knowledge Base
- Studio: Best Practices
- Topic Analysis
- Block libraries - Ready-made skills