No Target Deep Q-Networks

class srl.algorithms.not_dqn.Config(
    observation_mode: Literal['', 'render_image'] = '',
    override_env_observation_type: srl.base.define.SpaceTypes = <SpaceTypes.UNKNOWN: 1>,
    override_observation_type: Union[srl.base.define.RLBaseTypes, str] = <RLBaseTypes.NONE: 1>,
    override_action_type: Union[srl.base.define.RLBaseTypes, str] = <RLBaseTypes.NONE: 1>,
    action_division_num: int = 10,
    observation_division_num: int = 1000,
    frameskip: int = 0,
    extend_worker: Optional[Type[ForwardRef('ExtendWorker')]] = None,
    processors: List[ForwardRef('RLProcessor')] = <factory>,
    render_image_processors: List[ForwardRef('RLProcessor')] = <factory>,
    enable_rl_processors: bool = True,
    enable_state_encode: bool = True,
    enable_action_decode: bool = True,
    window_length: int = 1,
    render_image_window_length: int = 1,
    render_last_step: bool = True,
    render_rl_image: bool = True,
    render_rl_image_size: Tuple[int, int] = (128, 128),
    enable_sanitize: bool = True,
    enable_assertion: bool = False,
    dtype: str = 'float32',
    test_epsilon: float = 0,
    epsilon: float = 0.1,
    epsilon_scheduler: srl.rl.schedulers.scheduler.SchedulerConfig = <factory>,
    discount: float = 0.995,
    max_n_step: int = 500,
    alignment_loss_coeff: float = 0.1,
    alignment_loss_coeff_scheduler: srl.rl.schedulers.scheduler.SchedulerConfig = <factory>,
    memory: srl.rl.memories.priority_replay_buffer.PriorityReplayBufferConfig = <factory>,
    batch_size: int = 32,
    lr: float = 0.0002,
    input_block: srl.rl.models.config.input_block.InputBlockConfig = <factory>,
)
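A minimal usage sketch, assuming the library's general Runner workflow; the environment ID "Grid" and the training budget below are placeholders, not part of this page::

    import srl
    from srl.algorithms import not_dqn

    # Override a few of the parameters documented below; everything else
    # keeps its default from the signature above.
    rl_config = not_dqn.Config(
        epsilon=0.1,     # exploration rate while training
        discount=0.995,  # discount rate
        batch_size=32,
    )

    runner = srl.Runner("Grid", rl_config)  # "Grid" is a placeholder env ID
    runner.train(timeout=10)                # placeholder training budget
    print(runner.evaluate())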
test_epsilon: float = 0

ε-greedy exploration rate used during testing (evaluation)

epsilon: float = 0.1

ε-greedy exploration rate used during training

epsilon_scheduler: SchedulerConfig

<Scheduler>
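For reference, a plain-NumPy sketch of the ε-greedy rule these parameters drive (``epsilon`` during training, ``test_epsilon`` during testing, with the scheduler optionally annealing ``epsilon`` over training). This is illustrative, not the library's internal implementation::

    import numpy as np

    def epsilon_greedy(q_values: np.ndarray, epsilon: float, rng: np.random.Generator) -> int:
        # With probability epsilon take a random action, otherwise the greedy one.
        if rng.random() < epsilon:
            return int(rng.integers(len(q_values)))
        return int(np.argmax(q_values))

    rng = np.random.default_rng(0)
    epsilon_greedy(np.array([0.1, 0.5, 0.2]), epsilon=0.1, rng=rng)  # train
    epsilon_greedy(np.array([0.1, 0.5, 0.2]), epsilon=0.0, rng=rng)  # test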

discount: float = 0.995

Discount rate

max_n_step: int = 500

Maximum number of steps over which the cumulative discounted reward is computed

alignment_loss_coeff: float = 0.1

Coefficient of the regularization term that keeps the Q-values from deviating too far from the N-step discounted cumulative reward
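A sketch of how ``discount``, ``max_n_step``, and ``alignment_loss_coeff`` fit together. The squared-error form of the penalty is an assumption for illustration (the page above only says the term keeps Q close to the N-step return)::

    import numpy as np

    def n_step_return(rewards: np.ndarray, discount: float, max_n_step: int) -> float:
        # Discounted cumulative reward over at most max_n_step steps.
        r = rewards[:max_n_step]
        return float(np.sum(r * discount ** np.arange(len(r))))

    def alignment_loss(q_value: float, n_return: float, coeff: float) -> float:
        # Hypothetical penalty keeping Q near the N-step return.
        return coeff * (q_value - n_return) ** 2

    g = n_step_return(np.ones(600), discount=0.995, max_n_step=500)
    alignment_loss(q_value=150.0, n_return=g, coeff=0.1)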

alignment_loss_coeff_scheduler: SchedulerConfig

<Scheduler>

memory: PriorityReplayBufferConfig

<PriorityReplayBuffer>
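The buffer's own options are documented under <PriorityReplayBuffer>. As background, proportional prioritized replay samples transitions with probability proportional to their priority; a generic sketch, not the library's buffer code::

    import numpy as np

    def sample_proportional(priorities: np.ndarray, batch_size: int,
                            rng: np.random.Generator) -> np.ndarray:
        # Index i is drawn with probability priorities[i] / sum(priorities).
        p = priorities / priorities.sum()
        return rng.choice(len(priorities), size=batch_size, p=p)

    rng = np.random.default_rng(0)
    sample_proportional(np.array([1.0, 2.0, 4.0]), batch_size=2, rng=rng)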

batch_size: int = 32

Batch size

lr: float = 0.0002

Learning rate

input_block: InputBlockConfig

<InputBlock>

hidden_block: DuelingNetworkConfig

<DuelingNetwork> hidden and output layers
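The library builds these layers from DuelingNetworkConfig. As background, the standard dueling aggregation combines a value stream and an advantage stream as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a); a generic sketch::

    import numpy as np

    def dueling_combine(value: np.ndarray, advantage: np.ndarray) -> np.ndarray:
        # Subtract the mean advantage so the value/advantage split is identifiable.
        return value + advantage - advantage.mean(axis=-1, keepdims=True)

    dueling_combine(value=np.array([[1.0]]), advantage=np.array([[0.2, -0.1, 0.5]]))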

get_processors(prev_observation_space: SpaceBase) → List[RLProcessor]

Override this if you want to add preprocessing (see the sketch at the end of this section).

setup_from_env(env: EnvRun) → None

Called after the env is initialized. Implement this if you have env-related initialization to perform.
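A hedged sketch of overriding both hooks. The method names and signatures come from the documentation above; the subclass name and the override bodies are hypothetical placeholders::

    from typing import List

    from srl.algorithms import not_dqn

    class MyConfig(not_dqn.Config):  # hypothetical subclass
        def get_processors(self, prev_observation_space) -> List["RLProcessor"]:
            # Start from the defaults and append your own RLProcessor
            # instances here (none are added in this placeholder).
            processors = super().get_processors(prev_observation_space)
            return processors

        def setup_from_env(self, env) -> None:
            super().setup_from_env(env)
            # Hypothetical env-dependent initialization would go here.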