sustaingym.algorithms.evcharging.baselines

This module implements baseline algorithms for EVChargingEnv.

Module Contents

Classes

GreedyAlgorithm

Per-timestep greedy charging: outputs the maximum allowed pilot signal, whether the action space is continuous or discrete.

RandomAlgorithm

Samples a random charging action at each timestep.

MPC

Model predictive control.

OfflineOptimal

Calculates the best performance of a controller that knows the future.

Attributes

sustaingym.algorithms.evcharging.baselines.MAX_ACTION = 1
sustaingym.algorithms.evcharging.baselines.D_MAX_ACTION = 4
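
The two constants above cap the per-station action magnitude. The sketch below shows how they plausibly relate to the two action-space modes, assuming continuous actions lie in [0, MAX_ACTION] and discrete actions in {0, ..., D_MAX_ACTION} (an inference from GreedyAlgorithm's behavior below; num_stations is a hypothetical station count):

import numpy as np

from sustaingym.algorithms.evcharging.baselines import D_MAX_ACTION, MAX_ACTION

num_stations = 54  # hypothetical number of charging stations in the environment

# Maximum-magnitude action in each action-space mode
max_continuous = np.full(num_stations, MAX_ACTION, dtype=np.float32)  # all 1.0
max_discrete = np.full(num_stations, D_MAX_ACTION, dtype=np.int64)    # all 4
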
class sustaingym.algorithms.evcharging.baselines.GreedyAlgorithm(env: sustaingym.envs.evcharging.EVChargingEnv)

Bases: sustaingym.algorithms.base.BaseAlgorithm

Per-timestep greedy charging. Whether the action space is continuous or discrete, GreedyAlgorithm outputs the maximum allowed pilot signal.

Parameters:

env (sustaingym.envs.evcharging.EVChargingEnv) – EV charging environment

get_action(observation: dict[str, Any]) → numpy.ndarray

Returns greedy charging action.

Parameters:

observation (dict[str, Any]) –

Return type:

numpy.ndarray
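
A minimal usage sketch for GreedyAlgorithm. The EVChargingEnv constructor arguments are elided (they depend on the chosen trace generator; see the EVChargingEnv docs), and a Gymnasium-style reset/step API is assumed:

from sustaingym.algorithms.evcharging.baselines import GreedyAlgorithm
from sustaingym.envs.evcharging import EVChargingEnv

env = EVChargingEnv(...)  # constructor arguments elided; see the EVChargingEnv docs
algo = GreedyAlgorithm(env)

obs, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action = algo.get_action(obs)  # maximum allowed pilot signal for every station
    obs, reward, terminated, truncated, info = env.step(action)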

class sustaingym.algorithms.evcharging.baselines.RandomAlgorithm(env: sustaingym.envs.evcharging.EVChargingEnv)

Bases: sustaingym.algorithms.base.BaseAlgorithm

Samples a random charging action at each timestep.

Parameters:

env (sustaingym.envs.evcharging.EVChargingEnv) –

get_action(observation: dict[str, Any]) → numpy.ndarray

Returns random charging action.

Parameters:

observation (dict[str, Any]) –

Return type:

numpy.ndarray
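
A sketch of RandomAlgorithm as a naive lower baseline on episode return, reusing the env from the GreedyAlgorithm sketch above and again assuming the Gymnasium step API:

from sustaingym.algorithms.evcharging.baselines import RandomAlgorithm

algo = RandomAlgorithm(env)  # `env` as constructed in the GreedyAlgorithm sketch

obs, info = env.reset(seed=0)
total_reward = 0.0
terminated = truncated = False
while not (terminated or truncated):
    obs, reward, terminated, truncated, info = env.step(algo.get_action(obs))
    total_reward += reward
print(f"random-baseline return: {total_reward:.2f}")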

class sustaingym.algorithms.evcharging.baselines.MPC(env: sustaingym.envs.evcharging.EVChargingEnv, lookahead: int = 12)

Bases: sustaingym.algorithms.base.BaseAlgorithm

Model predictive control.

See BaseAlgorithm for more attributes.

Parameters:

env (sustaingym.envs.evcharging.EVChargingEnv) – EV charging environment

lookahead (int) – number of timesteps over which to forecast the future trajectory. Note that MPC cannot see future car arrivals and does not take them into account.

get_action(observation: dict[str, Any]) → numpy.ndarray

Returns first action of the MPC trajectory.

Parameters:

observation (dict[str, Any]) –

Return type:

numpy.ndarray
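
A sketch of the lookahead trade-off. With OfflineOptimal.TOTAL_TIMESTEPS = 288 steps per day, each timestep spans 5 minutes, so the default lookahead=12 plans one hour ahead; the horizon values below are illustrative:

from sustaingym.algorithms.evcharging.baselines import MPC

mpc_1h = MPC(env, lookahead=12)  # default horizon: 12 steps = 1 hour
mpc_3h = MPC(env, lookahead=36)  # longer horizon: more foresight, more computation

obs, info = env.reset()
action = mpc_1h.get_action(obs)  # first action of the optimized trajectory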

class sustaingym.algorithms.evcharging.baselines.OfflineOptimal(env: sustaingym.envs.evcharging.EVChargingEnv)

Bases: sustaingym.algorithms.base.BaseAlgorithm

Calculates the best performance of a controller that knows the future.

Parameters:

env (sustaingym.envs.evcharging.EVChargingEnv) – EV charging environment

TOTAL_TIMESTEPS = 288
reset() → None

Reset timestep count.

Return type:

None

get_action(observation: dict[str, Any]) → numpy.ndarray

On the first call, computes the actions for all timesteps, so it should be called directly after the environment is reset.

Parameters:

observation (dict[str, Any]) –

Return type:

numpy.ndarray
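
A sketch of the intended call order for OfflineOptimal: query it immediately after the environment reset (the full-horizon plan is computed on the first get_action call), and call reset() before reusing it on a new episode. The env is assumed to be constructed as in the GreedyAlgorithm sketch:

from sustaingym.algorithms.evcharging.baselines import OfflineOptimal

algo = OfflineOptimal(env)  # `env` as constructed in the GreedyAlgorithm sketch

obs, info = env.reset()
algo.reset()                   # clear the internal timestep counter
action = algo.get_action(obs)  # first call plans all 288 timesteps
obs, reward, terminated, truncated, info = env.step(action)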