gym RL with MultiDiscrete ActionSpace AttributeError: 'MultiDiscrete' object has no attribute 'spaces'

Question:

I’m trying to build a reinforcement learning algorithm that can play the MasterMind game. I’m using a MultiDiscrete action and observation space: the action space has 4 slots with 6 colors each, and the observation space is 2×4. I created a custom environment to connect to my programmed game. The environment isn’t finished yet because of the error below. Maybe someone can help me solve this issue.

import numpy as np
import gym
from gym import Env
from gym.spaces import Discrete, Box, MultiDiscrete, Dict
from stable_baselines3 import A2C
from stable_baselines3.common.policies import MultiInputActorCriticPolicy

action_space = MultiDiscrete(np.array([6, 6, 6, 6]), dtype=int)
observation_space = MultiDiscrete(np.array([4, 4]), dtype=int)

...

class MasterMindEnv(Env):
    def __init__(self) -> None:
        super(MasterMindEnv, self).__init__()
        self.action_space = action_space
        self.observation_space = observation_space

    def step(self, action:np.ndarray):
        pass_action(action)
        output = get_output()
        print(output)

        reward = output[0] + output[1]
        print(reward)
        
        done = False
        info = {}

        return self.observation_space.sample(), reward, done, info

    def reset(self):
        return self.observation_space.sample()
        
...

model = A2C(MultiInputActorCriticPolicy, env)
model.learn(total_timesteps=1000)

And the Error is:

AttributeError                            Traceback (most recent call last)
c:...model.ipynb Cell 10 in <module>
----> 1 model = A2C(MultiInputActorCriticPolicy, env)
      2 model.learn(total_timesteps=1000)


File c:...\Python310\lib\site-packages\stable_baselines3\a2c\a2c.py:126, in A2C.__init__(self, policy, env, learning_rate, n_steps, gamma, gae_lambda, ent_coef, vf_coef, max_grad_norm, rms_prop_eps, use_rms_prop, use_sde, sde_sample_freq, normalize_advantage, tensorboard_log, create_eval_env, policy_kwargs, verbose, seed, device, _init_setup_model)
    123     self.policy_kwargs["optimizer_kwargs"] = dict(alpha=0.99, eps=rms_prop_eps, weight_decay=0)
    125 if _init_setup_model:
--> 126     self._setup_model()

File c:...\Python310\lib\site-packages\stable_baselines3\common\on_policy_algorithm.py:123, in OnPolicyAlgorithm._setup_model(self)
    112 buffer_cls = DictRolloutBuffer if isinstance(self.observation_space, gym.spaces.Dict) else RolloutBuffer
    114 self.rollout_buffer = buffer_cls(
    115     self.n_steps,
    116     self.observation_space,
   (...)
    121     n_envs=self.n_envs,
    122 )
--> 123 self.policy = self.policy_class(  # pytype: disable=not-instantiable
...
--> 258 for key, subspace in observation_space.spaces.items():
    259     if is_image_space(subspace):
    260         extractors[key] = NatureCNN(subspace, features_dim=cnn_output_dim)

AttributeError: 'MultiDiscrete' object has no attribute 'spaces'

Asked By: AR_Jini


Answers:

observation_space = MultiDiscrete(np.array([4,4]), dtype=int)
...
model = A2C(MultiInputActorCriticPolicy, env)
...
for key, subspace in observation_space.spaces.items():

MultiInputActorCriticPolicy is not needed for a MultiDiscrete space. A MultiDiscrete space is still a single observation space, whereas the MultiInput policies are for environments that provide multiple observation spaces at once.

Either do not use the MultiInput policy (use the plain ActorCriticPolicy, i.e. "MlpPolicy"), or wrap the observation in a composite space (Stable Baselines3 supports spaces.Dict for this, not spaces.Tuple).

Stable Baselines3 supports handling of multiple inputs by using Dict Gym space. 
This can be done using MultiInputPolicy, which by default uses the 
CombinedExtractor feature extractor to turn multiple inputs into a single 
vector, handled by the net_arch network.
Answered By: c.lorenz