When training minispot using spot_ars.py, there are two ways to train: with multiprocessing or without.
1. If multiprocessing is used, the process freezes when ars.py is called. It gets stuck at line 605; I suspect parentPipe.send() is sending data larger than the pipe buffer on the machine (I have tried running the code on a few computers). Is there an alternative way to send the data with multiprocessing? A rough sketch of the kind of alternative I have in mind (a multiprocessing.Queue) follows after this list.
2. If multiprocessing is not used, ars.py is called again. This time it gets stuck at line 403 with the following error message:
Traceback (most recent call last):
File "C:\Users\TON93824\OneDrive - Mott MacDonald\Desktop\scripts\083. dmd\spot_bullet\src\spot_ars.py", line 227, in
main()
File "C:\Users\Desktop\scripts\083. dmd\spot_bullet\src\spot_ars.py", line 184, in main
episode_reward, episode_timesteps = agent.train()#parallel(parentPipes)
File "C:\Users\Desktop\scripts\083. dmd\spot_bullet\src\ars_lib\ars.py", line 564, in train
positive_rewards[i] = self.deploy(direction="+", delta=deltas[i])
File "C:\Users\Desktop\scripts\083. dmd\spot_bullet\src\ars_lib\ars.py", line 407, in deploy
state, reward, done, _ = self.env.step(action)
File "../..\spotmicro\GymEnvs\spot_bezier_env.py", line 163, in step
action = self.ja # during training state have to use action[2:]
AttributeError: 'spotBezierEnv' object has no attribute 'ja'
A closer look shows that the "def pass_joint_angles" was not called under spot_bezier_env.py, thus "def step" cannot call for self.ja. I suspect the no multiprocessing training function was not complete but I cannot fix the script myself, any hint?
3. Under the "contact" folder where trained policies are stored, they do not seem to be able to be loaded during evaluation. The following error message is found:
Traceback (most recent call last):
File "C:\Users\Desktop\scripts\083. dmd\spot_bullet\src\spot_ars_eval.py", line 313, in
main()
File "C:\Users\Desktop\scripts\083. dmd\spot_bullet\src\spot_ars_eval.py", line 186, in main
agent.load(models_path + "/" + file_name + str(agent_num))
File "C:\Users\Desktop\scripts\083. dmd\spot_bullet\src\ars_lib\ars.py", line 702, in load
self.policy.theta = pickle.load(filehandle)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xcf in position 0: ordinal not in range(128)
It looks like the original policy files were pickled differently, maybe on a Linux machine or under a different Python version? A sketch of the workaround I am considering follows below.
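For item 3, the workaround I am considering is an untested sketch that assumes the policies were pickled under Python 2: Python 3's pickle.load() takes an encoding argument that decodes the old 8-bit strings as latin-1 instead of ascii, which may avoid the decode error.

import pickle

# policy_path is a placeholder for models_path + "/" + file_name + str(agent_num)
with open(policy_path, "rb") as filehandle:
    theta = pickle.load(filehandle, encoding="latin1")  # may avoid the ascii decode error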
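For item 1, the kind of alternative I have in mind is replacing the Pipe with a multiprocessing.Queue. This is an untested sketch; worker, task_q, result_q and the payload are placeholders, not the actual names used in ars.py.

import multiprocessing as mp

def worker(task_q, result_q):
    while True:
        msg = task_q.get()        # blocks until the parent puts work on the queue
        if msg is None:           # sentinel telling the worker to shut down
            break
        # ... run one rollout with the received perturbation here ...
        result_q.put(0.0)         # send the episode reward back to the parent

if __name__ == "__main__":
    task_q, result_q = mp.Queue(), mp.Queue()
    p = mp.Process(target=worker, args=(task_q, result_q))
    p.start()
    task_q.put([0.01] * 12)       # placeholder payload instead of parentPipe.send()
    print(result_q.get())
    task_q.put(None)
    p.join()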
To strip the clearance height and penetration depth from the action space, just grab the last 12 items in the action space (the joint angles, ja).
Change the line
action = self.ja
to
action = action[2:]
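In context, a rough sketch of the edited step() in spot_bezier_env.py (body abbreviated; the action layout is assumed to be clearance height, penetration depth, then the 12 joint angles):

def step(self, action):
    # was: action = self.ja  -- self.ja only exists after pass_joint_angles() is called,
    # which the non-multiprocessing training path never does
    action = action[2:]  # keep only the 12 joint angles
    ...  # rest of step() unchanged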