Hacker News

From my initial reading, it looks like the end user can't create environments? Is that a feature I can expect to come eventually?


It looks like the screen image from the server, and control input to the server, are sent over the VNC protocol. Other information, such as the reward signal from the environment server, is sent over a WebSocket connection using JSON:

https://github.com/openai/universe/blob/master/doc/protocols...
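To make the split concrete, here is a sketch of what a rewarder-channel message might look like as JSON. The field names (`v0.env.reward`, `body`, `headers`, etc.) are my reading of the protocol doc linked above, not a verified schema — treat them as an assumption:

```python
import json
import time

def make_reward_message(reward, done, episode_id, message_id):
    # Build one rewarder message in the shape described by the linked
    # protocol doc. Field names are assumed from a quick reading.
    return json.dumps({
        "method": "v0.env.reward",  # message type on the WebSocket channel
        "body": {"reward": reward, "done": done, "info": {}},
        "headers": {
            "sent_at": time.time(),   # epoch seconds
            "episode_id": episode_id,
            "message_id": message_id,
        },
    })

msg = make_reward_message(reward=1.0, done=False, episode_id="1", message_id=2)
```

A custom environment server would emit one such message per reward tick over the WebSocket, while the pixels travel separately over VNC.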

You should be able to implement this protocol for your environment and run a VNC server for the rest. A new client-side class representing your environment can be based on this:

https://github.com/openai/universe/blob/master/universe/envs...

Then register the class with OpenAI Gym:

https://github.com/openai/universe/blob/master/universe/__in...
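Registration boils down to mapping an environment ID to an entry point. The stand-in registry below illustrates that convention with plain Python — it is not gym's actual implementation, and the entry point string is a made-up example:

```python
# Minimal stand-in for gym's registry, to show the id -> entry_point
# pattern used in the __init__.py linked above. Not gym's real code.
registry = {}

def register(id, entry_point, **kwargs):
    # Gym IDs follow a "name-vN" convention, e.g. 'gtav.SaneDriving-v0'.
    registry[id] = {"entry_point": entry_point, "kwargs": kwargs}

register(
    id="gtav.SaneDriving-v0",            # ID from the thread's example
    entry_point="universe.envs:VNCEnv",  # hypothetical entry point, for illustration
    tags={"vnc": True},
)
```

With the real `gym.envs.registration.register`, the same call shape makes the ID resolvable via `gym.make`.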

After creating the environment with gym.make, you need to pass information about your remote in the call to configure:

    env = gym.make('gtav.SaneDriving-v0')
    env.configure(remotes="vnc://localhost:vnc_port+rewarder_port")

https://github.com/openai/gym/blob/master/gym/core.py#L234

https://github.com/openai/universe/blob/master/universe/envs...
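The remotes string packs the host plus both ports into one spec. Here's a small sketch of how that addressing convention splits apart — this is my own parsing for illustration, not code from universe itself, and the 5900/15900 ports are just the usual VNC/rewarder defaults:

```python
def parse_remote(remote):
    # Split a spec of the form 'vnc://host:vnc_port+rewarder_port'
    # into its three parts. Illustrative only, not universe's parser.
    assert remote.startswith("vnc://")
    host, ports = remote[len("vnc://"):].split(":")
    vnc_port, rewarder_port = ports.split("+")
    return host, int(vnc_port), int(rewarder_port)

# 5900 is the conventional VNC port; 15900 the rewarder's WebSocket port.
host, vnc_port, rewarder_port = parse_remote("vnc://localhost:5900+15900")
```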

This is only based on a cursory reading, but it should be possible to use custom environments with OpenAI Universe as it is today.


You will be able to create environments - it's coming! We'll be releasing many components over the next few months.


Brilliant :)


Does that mean we can't train it on new games, only preexisting ones?


If that's true, I believe we'll have to wait for the OpenAI team to build new Gym environments before we can train on new games.

I only briefly poked around because it's nearing midnight here - maybe you can dig into the included examples and work out how to rewire them for new games, maybe not. Either way, I've got a particular use case I'd like to build a gym for, so I'm interested in finding out.



