igp2.recognition package¶
Submodules¶
igp2.recognition.astar module¶
- class igp2.recognition.astar.AStar(cost_function: Callable[[VelocityTrajectory, PointGoal], float] | None = None, heuristic_function: Callable[[VelocityTrajectory, PointGoal], float] | None = None, next_lane_offset: float | None = None, max_iter: int = 100)[source]¶
Bases: object
Class implementing A* search over trajectories to goals.
- NEXT_LANE_OFFSET = 0.01¶
- cost_function(trajectory: VelocityTrajectory, goal: Goal) → float [source]¶
- static goal_reached(goal: Goal, trajectory: VelocityTrajectory) → bool [source]¶
- search(agent_id: int, frame: Dict[int, AgentState], goal: Goal, scenario_map: Map, n_trajectories: int = 1, open_loop: bool = True, debug: bool = False, visible_region: Circle | None = None) → Tuple[List[VelocityTrajectory], List[List[MacroAction]]] [source]¶
Run A* search from the current frame to find trajectories to the given goal.
- Parameters:
agent_id – The agent to plan for
frame – State of the environment to search from
goal – The target goal
scenario_map – The Map of the scenario
n_trajectories – The number of trajectories to return
open_loop – Whether to generate open-loop or closed-loop macro actions at the end of the search
debug – If True, then plot the evolution of the frontier at each step
visible_region – Region of the map that is visible to the ego vehicle
- Returns:
List of VelocityTrajectories ordered by increasing cost. The best trajectory is at index 0 and the worst at index -1.
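A minimal usage sketch following the documented signature. Here scenario_map, frame (a Dict[int, AgentState]) and goal (e.g. a PointGoal) are placeholders assumed to come from the surrounding code:

from igp2.recognition.astar import AStar

astar = AStar(max_iter=100)

# Search for up to three candidate trajectories for agent 0 to the goal.
trajectories, plans = astar.search(
    agent_id=0,
    frame=frame,                # current state of the environment
    goal=goal,                  # the target goal
    scenario_map=scenario_map,  # the Map of the scenario
    n_trajectories=3,
    open_loop=True,
)

best_trajectory = trajectories[0]  # lowest-cost trajectory comes first
best_plan = plans[0]               # the macro-action sequence that produced it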
- static time_to_goal(trajectory: VelocityTrajectory, goal: Goal) → float [source]¶
- static trajectory_duration(trajectory: VelocityTrajectory, goal: Goal) → float [source]¶
igp2.recognition.goalprobabilities module¶
- class igp2.recognition.goalprobabilities.GoalWithType(goal: Goal | None = None, goal_type: str | None = None)[source]¶
Bases: object
Tuple of a Goal object and a goal_type string defining an action the vehicle has to perform to reach the goal (e.g. turn left). In the current implementation, goal_types are not defined.
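For illustration only, a GoalWithType can be constructed from an existing Goal instance (a placeholder here) and used as a dictionary key, which is how the properties below index their results:

from igp2.recognition.goalprobabilities import GoalWithType

# goal is an existing Goal instance; goal_types are currently unused, so None is typical.
goal_with_type = GoalWithType(goal=goal, goal_type=None)
rewards = {goal_with_type: 0.0}  # the Dict[GoalWithType, float] pattern used below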
- class igp2.recognition.goalprobabilities.GoalsProbabilities(goals: List[Goal] | None = None, goal_types: List[List[str]] | None = None, priors: List[float] | None = None)[source]¶
Bases: object
Class used to store and update goal probabilities, as well as useful associated results such as the priors, likelihoods, generated trajectories and rewards.
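A brief construction sketch, assuming a list of Goal objects named goals exists in the surrounding code (when priors are omitted, a uniform prior over goals is assumed here):

from igp2.recognition.goalprobabilities import GoalsProbabilities

goals_probabilities = GoalsProbabilities(goals=goals)

# Inspect the distribution over goals before any observations are processed.
for goal_with_type, prob in goals_probabilities.goals_probabilities.items():
    print(goal_with_type, prob)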
- add_smoothing(alpha: float = 1.0, uniform_goals: bool = False)[source]¶
Perform add-alpha smoothing on the probability distribution in place.
- Parameters:
alpha – Additive factor for smoothing.
uniform_goals – Whether to normalise the goal probabilities to a uniform distribution.
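The method smooths the internal distribution in place. The stand-alone function below is only an illustration of what add-alpha (Laplace) smoothing computes, not the class's implementation:

def add_alpha_smoothing(probs: dict, alpha: float = 1.0) -> dict:
    # Add alpha to every probability, then renormalise so the values sum to one.
    total = sum(probs.values()) + alpha * len(probs)
    return {goal: (p + alpha) / total for goal, p in probs.items()}

# A peaked distribution becomes less extreme, and zero-probability goals stay reachable.
print(add_alpha_smoothing({"left": 0.9, "right": 0.1, "straight": 0.0}))
# -> approximately {'left': 0.475, 'right': 0.275, 'straight': 0.25}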
- property all_plans: Dict[GoalWithType, List[List[MacroAction]]]¶
Returns all plans generated from the most recent vehicle position to each goal.
- property all_reward_differences: Dict[GoalWithType, List[float]]¶
Returns the rewards generated by all_trajectories for each goal, if we are using the reward_as_difference toggle.
- property all_rewards: Dict[GoalWithType, List[float]]¶
Returns a list of rewards generated by all_trajectories for each goal.
- property all_trajectories: Dict[GoalWithType, List[VelocityTrajectory]]¶
Returns the real vehicle trajectory, extended by all possible generated paths to a given goal.
- property current_reward: Dict[GoalWithType, float]¶
Returns the reward generated by the current_trajectory for each goal.
- property current_trajectory: Dict[GoalWithType, VelocityTrajectory]¶
Returns the real vehicle trajectory, extended by the trajectory generated from the current vehicle position to each goal, which is used to calculate the likelihood.
- property goals_and_types: List[GoalWithType]¶
Return each goal and its possible corresponding types.
- property goals_priors: Dict[GoalWithType, float]¶
Return the goals priors.
- property goals_probabilities: Dict[GoalWithType, float]¶
Returns the current goals probabilities.
- property likelihood: Dict[GoalWithType, float]¶
Returns the computed likelihoods for each goal.
- map_prediction() → Tuple[GoalWithType, VelocityTrajectory] [source]¶
Return the maximum a posteriori (MAP) goal and trajectory prediction for the agent.
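Continuing the construction sketch above, once the probabilities have been updated (see GoalRecognition below), the MAP estimate can be read off directly:

# Most probable goal and its associated trajectory under the current posterior.
map_goal, map_trajectory = goals_probabilities.map_prediction()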
- property optimum_plan: Dict[GoalWithType, List[MacroAction]]¶
Returns the plan generated from the initial vehicle position to each goal.
- property optimum_reward: Dict[GoalWithType, float]¶
Returns the reward generated by the optimum_trajectory for each goal.
- property optimum_trajectory: Dict[GoalWithType, VelocityTrajectory]¶
Returns the trajectory generated from the initial vehicle position to each goal, which is used to calculate the likelihood.
- plot(scenario_map: Map | None = None, max_n_trajectories: int = 1, cost: Cost | None = None) → Axes [source]¶
Plot the optimal and predicted trajectories.
- Parameters:
scenario_map – Optional road layout to be plotted as background.
max_n_trajectories – The maximum number of trajectories to plot for each goal if they exist.
cost – If given, re-calculate cost factors for plotting.
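A short plotting sketch, assuming goals_probabilities has already been updated and a scenario_map is available:

import matplotlib.pyplot as plt

# Overlay up to two predicted trajectories per goal on the road layout.
ax = goals_probabilities.plot(scenario_map=scenario_map, max_n_trajectories=2)
plt.show()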
- property reward_difference: Dict[GoalWithType, float]¶
Returns the reward generated by the optimum_trajectory for each goal, if we are not using the reward_as_difference toggle.
- sample_goals(k: int = 1) → List[GoalWithType] [source]¶
Randomly sample k goals according to the goal probability distribution.
- sample_trajectories_to_goal(goal: GoalWithType, k: int = 1) → Tuple[List[VelocityTrajectory], List[List[MacroAction]]] [source]¶
Randomly sample up to k trajectories to the given goal from all_trajectories, using the trajectory probability distribution.
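A sampling sketch using both methods together (goals_probabilities is assumed to hold an updated distribution):

# Draw ten goals according to the current goal posterior.
sampled_goals = goals_probabilities.sample_goals(k=10)

# Draw up to five trajectories to the first sampled goal.
trajectories, plans = goals_probabilities.sample_trajectories_to_goal(
    sampled_goals[0], k=5)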
- property trajectories_probabilities: Dict[GoalWithType, List[float]]¶
Return the probability distribution over trajectories to each goal.
- trajectory_to_plan(goal: GoalWithType, trajectory: VelocityTrajectory) → List[MacroAction] [source]¶
Return the plan that generated the trajectory. Not used for optimal trajectories.
igp2.recognition.goalrecognition module¶
- class igp2.recognition.goalrecognition.GoalRecognition(astar: AStar, smoother: VelocitySmoother, scenario_map: Map, cost: Cost | None = None, n_trajectories: int = 1, beta: float = 1.0, gamma=1, reward_as_difference: bool = True)[source]¶
Bases: object
This class updates existing goal probabilities using likelihoods computed from the vehicle's current trajectory. It also calculates the probabilities of up to n_trajectories paths to each goal.
- update_goals_probabilities(goals_probabilities: GoalsProbabilities, observed_trajectory: Trajectory, agent_id: int, frame_ini: Dict[int, AgentState], frame: Dict[int, AgentState], visible_region: Circle | None = None, debug: bool = False) → GoalsProbabilities [source]¶
Updates the goal probabilities, and stores relevant information in the GoalsProbabilities object.
- Parameters:
goals_probabilities – GoalsProbabilities object to update
observed_trajectory – Current vehicle trajectory
agent_id – ID of the agent in the current frame
frame_ini – Frame corresponding to the first state of the agent’s trajectory
frame – Current frame
visible_region – Region of the map that is visible to the ego vehicle
debug – Whether to plot A* planning
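A sketch of the full recognition loop under stated assumptions: scenario_map, goals, observed_trajectory, frame_ini and frame come from the surrounding simulation code, and VelocitySmoother is assumed to be importable from igp2 with default construction (its exact import path and arguments may vary by version):

from igp2.recognition.astar import AStar
from igp2.recognition.goalrecognition import GoalRecognition
from igp2.recognition.goalprobabilities import GoalsProbabilities

astar = AStar(max_iter=100)
smoother = VelocitySmoother()  # assumed default construction; see igp2 for options
recognition = GoalRecognition(astar=astar, smoother=smoother,
                              scenario_map=scenario_map, n_trajectories=2)

goals_probabilities = GoalsProbabilities(goals=goals)
goals_probabilities = recognition.update_goals_probabilities(
    goals_probabilities=goals_probabilities,
    observed_trajectory=observed_trajectory,  # trajectory observed so far
    agent_id=0,
    frame_ini=frame_ini,  # frame at the start of the observed trajectory
    frame=frame,          # current frame
)
print(goals_probabilities.goals_probabilities)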