pytsc.wrappers package

Submodules

pytsc.wrappers.epymarl module

pytsc.wrappers.pymarl module

pytsc.wrappers.rllib module

pytsc.wrappers.epymarl module

class pytsc.wrappers.epymarl.DomainRandomizedEPyMARLTrafficSignalNetwork(map_names, simulator_backend='sumo', **kwargs)[source]

Bases: smac.env.multiagentenv.MultiAgentEnv

A wrapper for the TrafficSignalNetwork environment that supports domain randomization. It randomly selects a map from a provided list at each reset, and pads outputs so that the observation and action interfaces remain fixed (with a maximum number of agents).

Parameters
  • map_names (list) – A list of map names to choose from.

  • max_n_agents (int) – The maximum number of agents.

  • simulator_backend (str) – The simulator backend to use (default “sumo”).

  • **kwargs – Additional keyword arguments for TrafficSignalNetwork.
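
A minimal construction sketch, not taken from the package itself: the map names, the max_n_agents value, and the assumption that max_n_agents is passed through **kwargs are illustrative only:

    from pytsc.wrappers.epymarl import DomainRandomizedEPyMARLTrafficSignalNetwork

    # Hypothetical map list and agent cap; use the maps available in your
    # pytsc installation. max_n_agents is assumed to be accepted via **kwargs.
    env = DomainRandomizedEPyMARLTrafficSignalNetwork(
        map_names=["pasubio", "monaco"],
        simulator_backend="sumo",
        max_n_agents=16,
    )

    env.reset()                      # reinitializes with a randomly chosen map
    env_info = env.get_env_info()    # n_agents is reported as max_n_agents
    obs = env.get_obs()              # padded to max_n_agents observations
    avail = env.get_avail_actions()  # padded to max_n_agents availability masks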

apply_actions(actions)[source]

Apply actions and remove padding if necessary.

close()[source]
get_avail_actions()[source]

Get available actions from the current environment and pad the list.

get_env_info()[source]

Get environment info from the current environment, pad the adjacency matrix, and set n_agents to max_n_agents.

get_local_rewards()[source]

Get local rewards from the current environment and pad the list.

get_obs()[source]

Get observations from the current environment and pad the list of observations.

get_obs_size()[source]

Returns the size of the observation.

get_state()[source]

Get the state from the current environment. The state is padded to match max_n_agents.

get_state_size()[source]

Returns the size of the global state.

get_stats()[source]
get_total_actions()[source]

Returns the total number of actions an agent could ever take.

reset()[source]

Reset the environment. Reinitialize the underlying TrafficSignalNetwork using a (potentially) different map to achieve domain randomization.

step(actions)[source]

Step through the environment using valid (unpadded) actions, then pad the observations before returning.
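
A sketch of one padded interaction loop, reusing the hypothetical construction above and assuming step() returns (reward, terminated, info) like the EPyMARL wrapper below:

    from pytsc.wrappers.epymarl import DomainRandomizedEPyMARLTrafficSignalNetwork

    env = DomainRandomizedEPyMARLTrafficSignalNetwork(
        map_names=["pasubio", "monaco"],  # hypothetical map list
        simulator_backend="sumo",
        max_n_agents=16,                  # assumed **kwargs option
    )
    env.reset()

    terminated = False
    while not terminated:
        avail = env.get_avail_actions()
        # One action per padded agent slot; entries for padding slots are
        # stripped by apply_actions() before the simulator advances.
        actions = [next((a for a, ok in enumerate(mask) if ok), 0) for mask in avail]
        reward, terminated, info = env.step(actions)
        obs = env.get_obs()               # padded back to max_n_agents entries
    env.close()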

step_stats = None

class pytsc.wrappers.epymarl.EPyMARLTrafficSignalNetwork(map_name='pasubio', simulator_backend='sumo', **kwargs)[source]

Bases: smac.env.multiagentenv.MultiAgentEnv

A wrapper that exposes the TrafficSignalNetwork environment for multi-agent reinforcement learning with the EPyMARL framework.

Parameters
  • map_name (str) – The name of the map to use (default “pasubio”).

  • simulator_backend (str) – The simulator backend to use (default “sumo”).

  • **kwargs – Additional keyword arguments for TrafficSignalNetwork.
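
A minimal usage sketch under the defaults above; the (observations, state) return of reset() follows its description below, and the random policy is purely illustrative:

    import random

    from pytsc.wrappers.epymarl import EPyMARLTrafficSignalNetwork

    env = EPyMARLTrafficSignalNetwork(map_name="pasubio", simulator_backend="sumo")
    env_info = env.get_env_info()

    obs, state = env.reset()             # initial observations and global state
    terminated = False
    while not terminated:
        avail = env.get_avail_actions()  # one availability mask per agent
        # Placeholder policy: pick a random available action for each agent.
        actions = [
            random.choice([a for a, ok in enumerate(mask) if ok]) for mask in avail
        ]
        reward, terminated, info = env.step(actions)
        obs = env.get_obs()
        state = env.get_state()
    env.close()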

apply_actions(actions)[source]
close()[source]
get_avail_actions()[source]

Returns the available actions of all agents in a list.

get_domain_class()[source]
get_env_info()[source]
get_local_rewards()[source]
get_network_flow()[source]
get_obs()[source]

Returns all agent observations in a list.

get_obs_size()[source]

Returns the size of the observation.

get_pressures()[source]
get_state()[source]

Returns the global state.

get_state_size()[source]

Returns the size of the global state.

get_stats()[source]
get_total_actions()[source]

Returns the total number of actions an agent could ever take.

is_terminated()[source]
reset()[source]

Returns initial observations and states.

set_domain_class(domain_class)[source]
sim_step()[source]
step(actions)[source]

Returns reward, terminated, info.

step_stats = None

pytsc.wrappers.pymarl module

class pytsc.wrappers.pymarl.PyMARLTrafficSignalNetwork(map_name='monaco', simulator_backend='sumo', **kwargs)[source]

Bases: smac.env.multiagentenv.MultiAgentEnv
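
This wrapper exposes the same MultiAgentEnv interface as the EPyMARL wrapper above, with "monaco" as the default map. A brief sketch; treating get_env_info() as returning a dict with an "n_agents" key follows the SMAC MultiAgentEnv convention and is an assumption here:

    from pytsc.wrappers.pymarl import PyMARLTrafficSignalNetwork

    env = PyMARLTrafficSignalNetwork(map_name="monaco", simulator_backend="sumo")
    env_info = env.get_env_info()  # "n_agents" key assumed (SMAC convention)
    print(env_info["n_agents"], env.get_obs_size(), env.get_total_actions())
    env.close()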

apply_actions(actions)[source]
close()[source]
get_avail_actions()[source]

Returns the available actions of all agents in a list.

get_domain_class()[source]
get_env_info()[source]
get_local_rewards()[source]
get_network_flow()[source]
get_obs()[source]

Returns all agent observations in a list.

get_obs_size()[source]

Returns the size of the observation.

get_pressures()[source]
get_state()[source]

Returns the global state.

get_state_size()[source]

Returns the size of the global state.

get_stats()[source]
get_total_actions()[source]

Returns the total number of actions an agent could ever take.

is_terminated()[source]
reset()[source]

Returns initial observations and states.

set_domain_class(domain_class)[source]
sim_step()[source]
step(actions)[source]

Returns reward, terminated, info.

step_stats = None

pytsc.wrappers.rllib module

Module contents