Question
I've tried going through the documentation, but the steps mentioned there aren't quite clear enough. Is there a good step-by-step video tutorial that would help? The ones I saw on YouTube are pretty old and don't work with the latest updates to the GitHub repo: https://github.com/Unity-Technologies/ml-agents
Answer 1:
This will help you set up ML-Agents version 14.
https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md
I am currently using this version.
I suggest you create a new Python environment just for this purpose.
Don't run pip install mlagents in your Python environment's terminal; the PyPI package was not updated to version 14 when I installed it. Use the link above for the complete guide, but here's the important stuff.
Download the repository as a zip file (ml-agents-master). When you extract this zip, open the extracted folder.
Then open cmd at that location and activate your Python environment. You should install the packages from the cloned repo rather than from PyPI; to do this, you will need to install ml-agents and ml-agents-envs separately. From the repo's root directory (inside the ml-agents-master folder), run:
cd ml-agents-envs
pip3 install -e ./
cd ..
cd ml-agents
pip3 install -e ./
It is very important that both packages are installed from the same ml-agents folder; this will not work if the versions are not compatible. If they are installed from the same folder, both packages will have the same version, i.e. 14 in this case.
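If you want to double-check that the two versions match, here is a minimal sketch, assuming setuptools (and hence pkg_resources) is available in your environment, which it normally is whenever pip works:

# Print the installed versions of both packages; they should be identical (e.g. 0.14.x).
# Assumption: the distribution names are "mlagents" and "mlagents-envs", as used by the repo's setup files.
import pkg_resources
print(pkg_resources.get_distribution("mlagents").version)
print(pkg_resources.get_distribution("mlagents-envs").version)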
These two packages will let you use the predefined PPO and SAC algorithms.
I suppose you have Unity 2018 or 2019 installed. Open it and go to File -> Open Project.
In the open dialog box, select the Project folder inside the ml-agents-master folder that you downloaded.
Sorry that most of the things are named Project, but don't be confused: earlier, Project was a folder inside ml-agents-master, but after opening it you will see a Project panel. Navigate to Assets\ML-Agents\Examples\3DBall\Scenes and double-click 3DBall.
This will open the scene. You can also see TFModels and Scripts; they contain the predefined neural network and the code, respectively.
Select Agent in the Hierarchy panel on the left side. (This will change only that one instance of 3DBall. It is better to go to Prefabs and double-click 3DBall; this opens a single 3DBall whose settings are applied to all instances. In the Hierarchy you will now see only one 3DBall; select its Agent as before, but now the changes will affect all copies of 3DBall. Prefabs are used to control all the copies, which helps to train multiple agents at the same time.) On the right side the Inspector view will open; inside Behavior Parameters you can see Model and an input box next to it. Keep the Agent selected, otherwise this Inspector view will disappear. Now go to the TFModels folder; you will see a 3DBall file that looks like a neural network. Drag it onto the Agent's Behavior Parameters -> Model. After following all these steps, click the Play option at the top. The predefined model will start playing and you will see that it can balance the ball pretty well.
Now that you have seen how the trained model works and want to train again using the predefined PPO and SAC, follow this:
Go to ml-agents-master\config; here you will find the file trainer_config.yaml. Now open cmd there, activate your environment, and enter:
mlagents-learn trainer_config.yaml --run-id=firstRun --train
When the message "Start training by pressing the Play button in the Unity Editor" is displayed on the screen, you can press the ▶️ button in Unity to start training in the Editor. You can press Ctrl+C to stop the training, and your trained model will be at models/<run-identifier>/<behavior_name>.nn, where <behavior_name> is the Behavior Name of the agents corresponding to the model.
Move your model file into Project/Assets/ML-Agents/Examples/3DBall/TFModels/.
Open the Unity Editor, and select the 3DBall scene as described above.
Select the 3DBall prefab Agent object.
Drag the .nn file from the Project window of the Editor to the Model placeholder in the Ball3DAgent inspector window.
Press the ▶️ button at the top of the Editor.
Now, for your own RL algorithm in Python:
See the Jupyter notebook that ships with the repo; it shows how to start a Unity gym environment, get observations and rewards, and reset the environment. For this you can also create a Unity executable; it's just building an exe with some settings, which are described in the docs.
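To give a feel for that loop, here is a minimal sketch of driving a Unity executable from Python with the gym wrapper. It assumes you also install the gym-unity package from the repo (it sits in its own gym-unity folder and is installed the same way as the other two packages), that the UnityEnv class and its arguments match this release (newer releases renamed it to UnityToGymWrapper), and that "YourEnv.exe" is a hypothetical executable built from a scene with a single agent (the wrapper expects one agent unless multiagent=True):

# Random-action loop against a Unity executable via the gym wrapper.
# Assumptions: gym-unity from this repo is installed; the scene contains one agent;
# "YourEnv.exe" is a placeholder path to your own build.
from gym_unity.envs import UnityEnv

env = UnityEnv("YourEnv.exe", worker_id=0, use_visual=False)

obs = env.reset()                       # reset the environment, get the first observation
for _ in range(100):
    action = env.action_space.sample()  # random action, just to show the API
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()

From here you can plug in your own algorithm by replacing the random action with whatever your policy outputs.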
Hope this works without any issues. And good luck with this.
Source: https://stackoverflow.com/questions/60019337/how-to-get-ml-agents-to-run-in-unity