Make your own environment

Here are the steps required to create a new environment.

Note

Pull requests are welcome!

Set up files

  1. Create a new your_env.py file in highway_env/envs/

  2. Define a class YourEnv that inherits from AbstractEnv

This class provides several useful functions (a minimal skeleton is sketched after this list):

  • A default_config() method, which provides a default configuration dictionary that can be overloaded.

  • A define_spaces() method, which gives access to a choice of observation and action types, set from the environment configuration.

  • A step() method, which executes the desired action (at the policy frequency) and simulates the environment (at the simulation frequency).

  • A render() method, which renders the environment.
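
As a minimal sketch (illustrative only; the _make_road() and _make_vehicles() helpers are filled in over the next two sections):

# highway_env/envs/your_env.py: illustrative skeleton
from highway_env.envs.common.abstract import AbstractEnv


class YourEnv(AbstractEnv):
    def _reset(self) -> None:
        # Build the scene: the road first, then the vehicles that populate it
        self._make_road()
        self._make_vehicles()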

Create the scene

The first step is to create a RoadNetwork that describes the geometry and topology of roads and lanes in the scene. This should be achieved in a YourEnv._make_road() method, called from YourEnv._reset() to set the self.road field.

See Roads for reference, and existing environments as examples.
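
A minimal sketch of this method, assuming a straight two-lane road between two nodes "a" and "b" (the geometry and values are illustrative):

# your_env.py: illustrative _make_road() method of YourEnv
from highway_env.road.lane import LineType, StraightLane
from highway_env.road.road import Road, RoadNetwork

def _make_road(self) -> None:
    net = RoadNetwork()
    # Two parallel 500 m straight lanes between nodes "a" and "b"
    net.add_lane("a", "b", StraightLane(
        [0, 0], [500, 0], line_types=(LineType.CONTINUOUS, LineType.STRIPED)))
    net.add_lane("a", "b", StraightLane(
        [0, 4], [500, 4], line_types=(LineType.NONE, LineType.CONTINUOUS)))
    self.road = Road(network=net, np_random=self.np_random,
                     record_history=self.config["show_trajectories"])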

Create the vehicles

The second step is to populate your road network with vehicles. This should be achieved in a YourEnv._make_vehicles() method, called from YourEnv._reset() to fill the self.road.vehicles list with instances of Vehicle.

First, define the controlled ego-vehicle by setting self.vehicle. The class of the controlled vehicle depends on the chosen action type, and can be accessed as self.action_type.vehicle_class. Other vehicles can be created more freely, and added to the self.road.vehicles list.

See vehicle behaviors for reference, and existing environments as examples.
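
A possible sketch, assuming the two-lane road above (positions, speeds and lane indices are illustrative):

# your_env.py: illustrative _make_vehicles() method of YourEnv
from highway_env.vehicle.behavior import IDMVehicle

def _make_vehicles(self) -> None:
    # The controlled ego-vehicle, of the class matching the configured action type
    ego_lane = self.road.network.get_lane(("a", "b", 0))
    self.vehicle = self.action_type.vehicle_class(
        self.road, ego_lane.position(30, 0), speed=20)
    self.road.vehicles.append(self.vehicle)
    # Another vehicle following its default behaviour
    other_lane = self.road.network.get_lane(("a", "b", 1))
    self.road.vehicles.append(
        IDMVehicle(self.road, other_lane.position(60, 0), speed=25))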

Make the environment configurable

To make a part of your environment configurable, overload the default_config() method to define new {"config_key": value} pairs with default values. These configurations can then be accessed in your environment implementation with self.unwrapped.config["config_key"], and once the environment is created, it can be configured with env.unwrapped.config["config_key"] = other_value, followed by env.reset() for the change to take effect.
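
For example (the "vehicles_count" key and its values are illustrative):

# your_env.py: define a new configurable key with its default value
import gymnasium as gym
from highway_env.envs.common.abstract import AbstractEnv


class YourEnv(AbstractEnv):
    @classmethod
    def default_config(cls) -> dict:
        config = super().default_config()
        config.update({"vehicles_count": 20})  # illustrative key and default
        return config


# Once the environment is registered and created:
env = gym.make('your-env-v0')
env.unwrapped.config["vehicles_count"] = 50
obs, info = env.reset()  # the new value takes effect on reset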

Register the environment

In highway_env/envs/__init__.py, add the following import:

from highway_env.envs.your_env import *

so that you can register your env in the register_highway_envs() function of highway_env/__init__.py:

# highway_env/__init__.py, inside register_highway_envs()
register(
    id='your-env-v0',
    entry_point='highway_env.envs:YourEnv',
)

This registration will be performed automatically as a hook if you reinstall the highway_env module with:

python setup.py install

Alternatively, you can call it manually with:

import highway_env
highway_env.register_highway_envs()

Profit

That’s it! You should now be able to run the environment.

# Only required if you did not reinstall highway_env
import highway_env
highway_env.register_highway_envs()

import gymnasium as gym
env = gym.make('your-env-v0', render_mode='rgb_array')
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
env.render()

API

highway_env.__init__.register_highway_envs()[source]

Import the envs module so that envs register themselves.

class highway_env.envs.common.abstract.AbstractEnv(config: dict = None, render_mode: str | None = None)[source]

A generic environment for various tasks involving a vehicle driving on a road.

The environment contains a road populated with vehicles, and a controlled ego-vehicle that can change lane and speed. The action space is fixed, but the observation space and reward function must be defined in the environment implementations.

PERCEPTION_DISTANCE = 200.0

The maximum distance of any vehicle present in the observation [m]

property vehicle: Vehicle

First (default) controlled vehicle.

classmethod default_config() dict[source]

Default environment configuration.

Can be overloaded in environment implementations, or by calling configure().

Returns:

a configuration dict

define_spaces() None[source]

Set the types and spaces of observation and action from config.

_reward(action: int | ndarray) float[source]

Return the reward associated with performing a given action and ending up in the current state.

Parameters:

action – the last action performed

Returns:

the reward

_rewards(action: int | ndarray) dict[str, float][source]

Returns a multi-objective vector of rewards.

If implemented, this reward vector should be aggregated into a scalar in _reward(). This vector value should only be returned inside the info dict.

Parameters:

action – the last action performed

Returns:

a dict of {‘reward_name’: reward_value}
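
A sketch of this pattern (the component names, terms and weights are illustrative):

def _rewards(self, action) -> dict[str, float]:
    # Individual reward components, also reported in the info dict
    return {
        "collision_reward": float(self.vehicle.crashed),
        "high_speed_reward": self.vehicle.speed / 30,
    }

def _reward(self, action) -> float:
    # Aggregate the components into a scalar, weighted by config entries
    return sum(self.config.get(name, 0) * value
               for name, value in self._rewards(action).items())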

_is_terminated() bool[source]

Check whether the current state is a terminal state.

Returns:

whether the state is terminal
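
For example, a typical implementation ends the episode when the ego-vehicle has crashed:

def _is_terminated(self) -> bool:
    # The episode terminates on collision of the controlled vehicle
    return self.vehicle.crashed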

_is_truncated() bool[source]

Check whether the episode is truncated at the current step.

Returns:

whether the episode is truncated
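
A common pattern truncates the episode after a fixed duration (the "duration" config key is an assumption, defined by several built-in environments):

def _is_truncated(self) -> bool:
    # Truncate once the elapsed time exceeds the configured duration
    return self.time >= self.config["duration"]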

_info(obs: Observation, action: Action | None = None) dict[source]

Return a dictionary of additional information.

Parameters:
  • obs – current observation

  • action – current action

Returns:

info dict

reset(*, seed: int | None = None, options: dict | None = None) tuple[Observation, dict][source]

Reset the environment to its initial configuration.

Parameters:
  • seed – The seed that is used to initialize the environment’s PRNG

  • options – Allows the environment configuration to be specified through options["config"]

Returns:

the observation of the reset state, and an info dict
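
For example (the configuration key is illustrative):

obs, info = env.reset(options={"config": {"duration": 60}})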

_reset() None[source]

Reset the scene: roads and vehicles.

This method must be overloaded by the environments.

step(action: int | ndarray) tuple[Observation, float, bool, bool, dict][source]

Perform an action and step the environment dynamics.

The action is executed by the ego-vehicle, and all other vehicles on the road perform their default behaviour for several simulation timesteps until the next decision-making step.

Parameters:

action – the action performed by the ego-vehicle

Returns:

a tuple (observation, reward, terminated, truncated, info)

_simulate(action: Action | None = None) None[source]

Perform several steps of simulation with constant action.

render() np.ndarray | None[source]

Render the environment.

Create a viewer if none exists, and use it to render an image.

close() None[source]

Close the environment.

Will close the environment viewer if it exists.

_automatic_rendering() None[source]

Automatically render the intermediate frames while an action is still ongoing.

This allows rendering of the whole video, and not only the single frames corresponding to agent decisions. If a RecordVideo wrapper has been set, use it to capture intermediate frames.

simplify() AbstractEnv[source]

Return a simplified copy of the environment where distant vehicles have been removed from the road.

This is meant to lower the policy computational load while preserving the set of optimal actions.

Returns:

a simplified environment state

change_vehicles(vehicle_class_path: str) AbstractEnv[source]

Change the type of all vehicles on the road.

Parameters:

vehicle_class_path – The path of the class of behavior for other vehicles. Example: “highway_env.vehicle.behavior.IDMVehicle”

Returns:

a new environment with modified behavior model for other vehicles
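
For instance, reusing the class path from the example above:

import gymnasium as gym

env = gym.make('your-env-v0')
env.reset()
# A copy of the environment where other vehicles follow the IDM behaviour model
idm_env = env.unwrapped.change_vehicles("highway_env.vehicle.behavior.IDMVehicle")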

class highway_env.envs.common.abstract.MultiAgentWrapper(env)[source]

Wraps an environment to allow a modular transformation of the step() and reset() methods.

Parameters:

env – The environment to wrap

step(action)[source]

Uses the step() of the env, which can be overwritten to change the returned data.