igp2.agents package

Submodules

igp2.agents.agent module

class igp2.agents.agent.Agent(agent_id: int, initial_state: AgentState, goal: Goal = None, fps: int = 20)[source]

Bases: ABC

Abstract class for all agents.

property agent_id: int

ID of the agent.

property alive: bool

Whether the agent is alive in the simulation.

done(observation: Observation) → bool[source]

Check whether the agent has completed executing its assigned task.

property fps: int

Simulation frames per second.

property goal: Goal

Final goal of the agent.

property metadata: AgentMetadata

Metadata describing the physical properties of the agent.

next_action(observation: Observation) → Action[source]

Return the next action the agent will take.

next_state(observation: Observation, return_action: bool = False) → AgentState[source]

Return the next agent state after it executes an action.

reset()[source]

Reset agent to initialisation defaults.

property state: AgentState

Return current state of the agent as given by its vehicle, or initial state if no vehicle is attached.

property trajectory_cl

The closed-loop trajectory that was actually driven by the agent.

update_goal(new_goal: Goal)[source]

Overwrite the current goal of the agent.

property vehicle: Vehicle

Return the physical vehicle attached to this agent.
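
Concrete agents override done() and next_action(). A minimal sketch of a custom subclass is shown below; the Goal.reached() check and the AgentState.position attribute are assumptions made for illustration, not guaranteed by this page:

    from igp2.agents.agent import Agent

    class IdleAgent(Agent):
        """Illustrative agent that issues no control input."""

        def done(self, observation):
            # Finished once the goal is reached (assumes Goal exposes
            # a reached() check taking the agent's current position).
            return self.goal is not None and self.goal.reached(self.state.position)

        def next_action(self, observation):
            # A real agent would compute an Action from the observation;
            # None stands in for "no control input" in this sketch.
            return None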

igp2.agents.macro_agent module

class igp2.agents.macro_agent.MacroAgent(agent_id: int, initial_state: AgentState, goal: Goal | None = None, fps: int = 20)[source]

Bases: Agent

Agent executing a pre-defined macro action. Useful for simulating the ego vehicle during MCTS.

property current_macro: MacroAction

The current macro action of the agent.

done(observation: Observation) → bool[source]

Returns True if the current macro action has reached a completion state.

property maneuver_end_idx: List[int]

The index in the closed-loop trajectory at which each maneuver of the macro action completes.

next_action(observation: Observation) → Action[source]

Get the next action from the macro action.

Parameters:

observation – Observation of current environment state and road layout.

Returns:

The next action of the agent.

next_state(observation: Observation, return_action: bool = False) → AgentState[source]

Get the next action from the macro action and execute it through the attached vehicle of the agent.

Parameters:
  • observation – Observation of current environment state and road layout.

  • return_action – If True return the underlying action as well.

Returns:

The new state of the agent.

reset()[source]

Reset the vehicle and macro action of the agent.

update_macro_action(macro_action: ABCMeta, args: Dict, observation: Observation) → MacroAction[source]

Overwrite and initialise the current macro action of the agent using the given arguments.

Parameters:
  • macro_action – The new macro action to execute.

  • args – Initialisation arguments for the new macro action.

  • observation – Observation of the environment.
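
A usage sketch for MacroAgent follows. The macro action class, its initialisation arguments, the initial state and the observation are assumed to be supplied by the caller; keeping a single fixed observation is a simplification of a real simulation loop:

    from igp2.agents.macro_agent import MacroAgent

    def run_macro(initial_state, observation, macro_action, macro_args):
        """Drive a MacroAgent until its macro action completes (sketch)."""
        agent = MacroAgent(agent_id=0, initial_state=initial_state)
        agent.update_macro_action(macro_action, macro_args, observation)
        while not agent.done(observation):
            # A real loop would refresh the observation after each step.
            state = agent.next_state(observation)
        return agent.trajectory_cl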

igp2.agents.maneuver_agent module

class igp2.agents.maneuver_agent.ManeuverAgent(maneuver_configs: List[ManeuverConfig], agent_id: int, initial_state: AgentState, fps: int = 20, view_radius: float | None = None)[source]

Bases: Agent

Agent that executes a predefined sequence of maneuvers. Intended for testing purposes.

create_next_maneuver(agent_id, observation)[source]

done(observation: Observation) → bool[source]

Check whether the agent has completed executing its assigned task.

next_action(observation: Observation | None = None) → Action[source]

Return the next action the agent will take.
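
A construction sketch for ManeuverAgent. The ManeuverConfig import path and the configuration keys ("type", "termination_point") are assumptions for illustration:

    from igp2.agents.maneuver_agent import ManeuverAgent
    from igp2.planlibrary.maneuver import ManeuverConfig  # assumed path

    def make_maneuver_agent(initial_state):
        # Hypothetical configuration: follow the lane up to a point.
        configs = [ManeuverConfig({"type": "follow-lane",
                                   "termination_point": (30.0, 0.0)})]
        return ManeuverAgent(configs, agent_id=1, initial_state=initial_state)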

igp2.agents.mcts_agent module

class igp2.agents.mcts_agent.MCTSAgent(agent_id: int, initial_state: AgentState, t_update: float, scenario_map: Map, goal: Goal | None = None, view_radius: float = 50.0, fps: int = 20, kinematic: bool = False, n_simulations: int = 5, max_depth: int = 5, store_results: str = 'final', trajectory_agents: bool = True, cost_factors: Dict[str, float] | None = None, reward_factors: Dict[str, float] | None = None, velocity_smoother: dict | None = None, goal_recognition: dict | None = None, stop_goals: bool = False)[source]

Bases: TrafficAgent

done(observation: Observation)[source]

Returns True if the agent has reached its goal.

get_goals(observation: Observation, threshold: float = 2.0) → List[Goal][source]

Retrieve all possible goals reachable from the current position on the map in any direction. If more than one goal is found on a single lane, only the one furthest along the midline of the lane is kept.

Parameters:
  • observation – Observation of the environment

  • threshold – The goal checking threshold

property goal_probabilities: Dict[int, GoalsProbabilities]

Return the currently stored goal prediction probabilities of the ego.

property mcts: MCTS

Return the MCTS planner of the agent.

next_action(observation: Observation) → Action[source]

Returns the next action for the agent.

If the current macro action has finished, it is updated. If no macro actions are left in the plan, or the planning time step has been reached, goal recognition and MCTS are called.

property observations: Dict[int, Tuple[StateTrajectory, AgentState]]

Returns the ego's knowledge about other agents, stored in a dictionary keyed by agent ID. For each agent it holds the trajectory observed so far and the frame in which the agent was first observed. Currently, any agent out of view is immediately forgotten.

property possible_goals: List[Goal]

Return the current list of possible goals.

reset()[source]

Reset the vehicle and macro action of the agent.

update_observations(observation: Observation)[source]

update_plan(observation: Observation)[source]

Runs MCTS to generate a new sequence of macro actions to execute.

property view_radius: float

The view radius of the agent.
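
A planning-loop sketch for MCTSAgent, using only parameters from the signature above; the interpretation of t_update as a replanning period in seconds is an assumption, and the fixed observation again simplifies a real loop:

    from igp2.agents.mcts_agent import MCTSAgent

    def plan_ego(initial_state, scenario_map, goal, observation):
        """Plan and drive the ego with MCTS (sketch)."""
        ego = MCTSAgent(
            agent_id=0,
            initial_state=initial_state,
            t_update=2.0,            # assumed: seconds between replans
            scenario_map=scenario_map,
            goal=goal,
            n_simulations=15,
            max_depth=5,
        )
        while not ego.done(observation):
            # Runs goal recognition and MCTS when a replan is due.
            action = ego.next_action(observation)
        return ego.trajectory_cl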

igp2.agents.traffic_agent module

class igp2.agents.traffic_agent.TrafficAgent(agent_id: int, initial_state: AgentState, goal: Goal | None = None, fps: int = 20, macro_actions: List[MacroAction] | None = None)[source]

Bases: MacroAgent

Agent that follows a list of macro actions, optionally calculated using A*.

done(observation: Observation) → bool[source]

Returns True if there are no more macro actions in the list and the current macro action has finished.

property macro_actions: List[MacroAction]

The current macro actions to be executed by the agent.

next_action(observation: Observation) → Action[source]

Get the next action from the macro action.

Parameters:

observation – Observation of current environment state and road layout.

Returns:

The next action of the agent.

reset()[source]

Reset the vehicle and macro action of the agent.

set_destination(observation: Observation, goal: Goal | None = None)[source]

Set the current destination of this vehicle and calculate the shortest path to it using A*.

Parameters:
  • observation – The current observation.

  • goal – Optional new goal to override the current one.

set_macro_actions(new_macros: List[MacroAction])[source]

Specify a new set of macro actions to follow.
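
A routing sketch for TrafficAgent; the initial state, observation and goal are assumed to come from the surrounding simulation:

    from igp2.agents.traffic_agent import TrafficAgent

    def run_traffic_agent(initial_state, observation, goal):
        """Route a background agent to a goal via A* (sketch)."""
        agent = TrafficAgent(agent_id=2, initial_state=initial_state)
        agent.set_destination(observation, goal)  # plans macro actions with A*
        while not agent.done(observation):
            state = agent.next_state(observation)
        return agent.trajectory_cl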

igp2.agents.trajectory_agent module

class igp2.agents.trajectory_agent.TrajectoryAgent(agent_id: int, initial_state: AgentState, goal: Goal | None = None, fps: int = 20, open_loop: bool = False)[source]

Bases: Agent

Agent that follows a predefined trajectory.

done(observation: Observation) → bool[source]

Check whether the agent has completed executing its assigned task.

next_action(observation: Observation) → Action | None[source]

Calculate the next action based on the trajectory and optionally step the current state of the agent forward.

next_state(observation: Observation, return_action: bool = False) → AgentState[source]

Calculate the next action based on the trajectory, set the appropriate fields on the vehicle, and return the next agent state.

property open_loop: bool

Whether to use open-loop predictions directly instead of closed-loop control.

parked(tol=1.0) → bool[source]

reset()[source]

Reset agent to initialisation defaults.

set_trajectory(new_trajectory: Trajectory)[source]

Override current trajectory of the vehicle and resample to match execution frequency of the environment. If the trajectory given is empty or None, then the vehicle will stay in place for 10 seconds.

property trajectory: Trajectory

Return the currently defined trajectory of the agent.
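
A replay sketch for TrajectoryAgent; predicted_trajectory stands in for a Trajectory produced elsewhere, e.g. by a prediction module:

    from igp2.agents.trajectory_agent import TrajectoryAgent

    def replay(initial_state, observation, predicted_trajectory):
        """Replay a precomputed trajectory in open-loop mode (sketch)."""
        agent = TrajectoryAgent(agent_id=3, initial_state=initial_state,
                                open_loop=True)
        agent.set_trajectory(predicted_trajectory)  # resampled to agent fps
        while not agent.done(observation):
            state = agent.next_state(observation)
        return agent.trajectory_cl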

Module contents