Run your team with Team.run() (sync) or Team.arun() (async).
Basic Execution
from agno.team import Team
from agno.agent import Agent
from agno.models.openai import OpenAIResponses
from agno.tools.hackernews import HackerNewsTools
from agno.tools.yfinance import YFinanceTools
from agno.utils.pprint import pprint_run_response
news_agent = Agent(name="News Agent", role="Get tech news", tools=[HackerNewsTools()])
finance_agent = Agent(name="Finance Agent", role="Get stock data", tools=[YFinanceTools()])

team = Team(
    name="Research Team",
    members=[news_agent, finance_agent],
    model=OpenAIResponses(id="gpt-4o")
)

# Run and get response
response = team.run("What are the trending AI stories?")
print(response.content)

# Run with streaming
stream = team.run("What are the trending AI stories?", stream=True)
for chunk in stream:
    print(chunk.content, end="", flush=True)
Execution Flow
When you call run():
1. Pre-hooks execute (if configured)
2. Reasoning runs (if enabled) to plan the task
3. Context is built from the system message, history, memories, and session state
4. The model decides whether to respond directly, use tools, or delegate to members
5. Members execute their tasks (concurrently in async mode)
6. The leader synthesizes member results into a final response
7. Post-hooks execute (if configured)
8. The session and metrics are stored (if a database is configured)
In TeamMode.tasks, the leader uses task management tools to build and execute a shared task list, looping until the goal is complete or max_iterations is reached.
Teams can pause for human-in-the-loop requirements (e.g., approvals or user input). When a run requires confirmation, it returns with its pending requirements so you can collect input or resolve approvals before continuing.
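For example, a minimal sketch of resuming a paused run, assuming the team exposes an agent-style pause API (is_paused, tools_requiring_confirmation, and continue_run are assumptions here; check the human-in-the-loop reference for the exact names):

response = team.run("Send the weekly report to finance")

# Assumed attributes/methods; adjust to the actual HITL API.
if response.is_paused:
    for tool in response.tools_requiring_confirmation:
        tool.confirmed = True  # or collect user input before approving
    response = team.continue_run(run_response=response)

print(response.content)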
Streaming
Enable streaming with stream=True. This returns an iterator of events instead of a single response.
stream = team.run("What are the top AI stories?", stream=True)
for chunk in stream:
    print(chunk.content, end="", flush=True)
Streaming is not supported in TeamMode.tasks. If you set stream=True, the run falls back to non-streaming execution.
Stream All Events
By default, only content is streamed. Set stream_events=True to get tool calls, reasoning steps, and other internal events:
from agno.run.team import TeamRunEvent  # import path is an assumption; adjust to your agno version

stream = team.run(
    "What are the trending AI stories?",
    stream=True,
    stream_events=True
)
for event in stream:
    if event.event == TeamRunEvent.run_content:
        print(event.content, end="", flush=True)
    elif event.event == TeamRunEvent.tool_call_started:
        print("Tool call started")
    elif event.event == TeamRunEvent.tool_call_completed:
        print("Tool call completed")
Stream Member Events
When using arun() with multiple members, they execute concurrently. Member events arrive as they happen, not in order.
Disable member event streaming with stream_member_events=False:
team = Team(
    name="Research Team",
    members=[news_agent, finance_agent],
    model=OpenAIResponses(id="gpt-4o"),
    stream_member_events=False
)
Run Output
Team.run() returns a TeamRunOutput object containing:
| Field | Description |
| --- | --- |
| content | The final response text |
| messages | All messages sent to the model |
| metrics | Token usage, execution time, etc. |
| member_responses | Responses from delegated members |
See TeamRunOutput reference for the full schema.
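For example, a minimal sketch of inspecting the output after a run (field names come from the table above; each member response is assumed to carry a content field of its own):

response = team.run("What are the trending AI stories?")

print(response.content)        # final response text
print(response.metrics)        # token usage, execution time, etc.
print(len(response.messages))  # messages sent to the model

for member_response in response.member_responses:
    print(member_response.content)  # assumed: member responses expose their own content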
Async Execution
Use arun() for async execution. Members run concurrently when the leader delegates to multiple members at once.
import asyncio

async def main():
    response = await team.arun("Research AI trends and stock performance")
    print(response.content)

asyncio.run(main())
Tasks Mode
Tasks mode runs an iterative loop that creates, executes, and updates tasks until the goal is complete.
from agno.team.mode import TeamMode
from agno.models.openai import OpenAIResponses

team = Team(
    name="Ops Team",
    members=[news_agent, finance_agent],
    model=OpenAIResponses(id="gpt-4o"),
    mode=TeamMode.tasks,
    max_iterations=6
)

response = team.run("Compile a short report on recent AI agent frameworks.")
print(response.content)
Specifying User and Session
Associate runs with a user and session for history tracking:
team.run(
    "Get my monthly report",
    user_id="john@example.com",
    session_id="session_123"
)
See Sessions for details.
Passing Files
Pass images, audio, video, or files to the team:
from agno.media import Image

team.run(
    "Analyze this image",
    images=[Image(url="https://example.com/image.jpg")]
)
See Multimodal for details.
Structured Output
Pass an output schema to get structured responses:
from pydantic import BaseModel

class Report(BaseModel):
    overview: str
    findings: list[str]

response = team.run("Analyze the market", output_schema=Report)
See Input & Output for details.
Cancelling Runs
Cancel a running team with Team.cancel_run(). See Run Cancellation.
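A minimal sketch of cancelling a long run from another thread, assuming cancel_run() is keyed by a run id (how you obtain the id, for example from a run-started event or the stored session, depends on your setup; see Run Cancellation for the exact signature):

import threading
import time

def run_team():
    response = team.run("Write a long research report")
    print(response.content)

worker = threading.Thread(target=run_team)
worker.start()

time.sleep(2)
team.cancel_run(run_id="run_123")  # hypothetical run id; cancel_run's signature is an assumption
worker.join()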
Print Response
For development, use print_response() to display formatted output:
team.print_response("What are the top AI stories?", stream=True)

# Show member responses too
team.print_response("What are the top AI stories?", show_members_responses=True)
Core Events

| Event | Description |
| --- | --- |
| TeamRunStarted | Run started |
| TeamRunContent | Response text chunk |
| TeamRunContentCompleted | Content streaming complete |
| TeamRunCompleted | Run completed successfully |
| TeamRunError | Error occurred |
| TeamRunCancelled | Run was cancelled |

Tool Events

| Event | Description |
| --- | --- |
| TeamToolCallStarted | Tool call started |
| TeamToolCallCompleted | Tool call completed |

Reasoning Events

| Event | Description |
| --- | --- |
| TeamReasoningStarted | Reasoning started |
| TeamReasoningStep | Single reasoning step |
| TeamReasoningCompleted | Reasoning completed |

Memory Events

| Event | Description |
| --- | --- |
| TeamMemoryUpdateStarted | Memory update started |
| TeamMemoryUpdateCompleted | Memory update completed |

Hook Events

| Event | Description |
| --- | --- |
| TeamPreHookStarted | Pre-hook started |
| TeamPreHookCompleted | Pre-hook completed |
| TeamPostHookStarted | Post-hook started |
| TeamPostHookCompleted | Post-hook completed |
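As a rough illustration of what triggers the hook events above, a minimal sketch assuming Team accepts pre_hooks and post_hooks lists of callables (the parameter names and hook signatures are assumptions; see the hooks reference for the actual API):

# Assumed parameter names (pre_hooks/post_hooks) and hook signatures.
def log_input(run_input, **kwargs):
    print(f"Run starting with: {run_input}")   # runs before the leader model is called

def log_output(run_output, **kwargs):
    print(f"Run finished with: {run_output}")  # runs after the final response is produced

team = Team(
    name="Research Team",
    members=[news_agent, finance_agent],
    model=OpenAIResponses(id="gpt-4o"),
    pre_hooks=[log_input],    # emits TeamPreHookStarted / TeamPreHookCompleted
    post_hooks=[log_output],  # emits TeamPostHookStarted / TeamPostHookCompleted
)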
Developer Resources