#minerl #minecraft #deeplearning




The MineRL BASALT challenge has no reward functions or technical descriptions of what is to be achieved. Instead, the goal of each task is given as a short natural language string, and the agent is evaluated by a team of human judges who rate both how well the goal has been fulfilled and how human-like the agent behaved. In this video, I interview KAIROS, the winning team of the 2021 challenge, and discuss how they used a combination of machine learning, efficient data collection, hand engineering, and a bit of knowledge about Minecraft to beat all other teams.




OUTLINE:


0:00 - Introduction


4:10 - Paper Overview


11:15 - Start of Interview


17:05 - First Approach


20:30 - State Machine


26:45 - Efficient Label Collection


30:00 - Navigation Policy


38:15 - Odometry Estimation


46:00 - Pain Points & Learnings


50:40 - Live Run Commentary


58:50 - What other tasks can be solved?


1:01:55 - What made the difference?


1:07:30 - Recommendations & Conclusion


1:11:10 - Full Runs: Waterfall


1:12:40 - Full Runs: Build House


1:17:45 - Full Runs: Animal Pen


1:20:50 - Full Runs: Find Cave




Paper: https://arxiv.org/abs/2112.03482


Code: https://github.com/viniciusguigo/kair...


Challenge Website: https://minerl.io/basalt/




Paper Title: Combining Learning from Human Feedback and Knowledge Engineering to Solve Hierarchical Tasks in Minecraft




Abstract:


Real-world tasks of interest are generally poorly defined by human-readable descriptions and have no pre-defined reward signals unless one is defined by a human designer. Conversely, data-driven algorithms are often designed to solve a specific, narrowly defined task, with performance metrics that drive the agent's learning. In this work, we present the solution that won first place and was awarded the most human-like agent in the 2021 NeurIPS Competition MineRL BASALT Challenge: Learning from Human Feedback in Minecraft, which challenged participants to use human data to solve four tasks defined only by a natural language description and no reward function. Our approach uses the available human demonstration data to train an imitation learning policy for navigation and additional human feedback to train an image classifier. These modules, together with an estimated odometry map, are then combined into a state machine, designed based on human knowledge of the tasks, that breaks them down into a natural hierarchy and controls which macro behavior the learning agent should follow at any instant. We compare this hybrid intelligence approach to both end-to-end machine learning and purely engineered solutions, which are then judged by human evaluators. Codebase is available at this https URL.
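To make the "state machine over macro behaviors" idea concrete, here is a minimal sketch of how a hand-designed controller might switch between a learned navigation policy and a goal-approach behavior based on a learned image classifier's output. The state names, method signatures, and transition conditions are illustrative assumptions, not the actual KAIROS implementation:

```python
from enum import Enum, auto

class Macro(Enum):
    """Hypothetical macro behaviors the state machine can dispatch."""
    FIND_GOAL = auto()    # explore using the learned navigation policy
    GO_TO_GOAL = auto()   # approach the goal once the classifier spots it
    END_EPISODE = auto()  # terminate (in BASALT: throw the snowball)

class TaskStateMachine:
    """Sketch of a hand-engineered state machine that decides which
    macro behavior the agent follows at each step, driven by the
    (assumed) outputs of a learned image classifier."""

    def __init__(self) -> None:
        self.state = Macro.FIND_GOAL

    def step(self, goal_visible: bool, goal_reached: bool) -> Macro:
        if self.state == Macro.FIND_GOAL and goal_visible:
            self.state = Macro.GO_TO_GOAL
        elif self.state == Macro.GO_TO_GOAL:
            if goal_reached:
                self.state = Macro.END_EPISODE
            elif not goal_visible:
                # Lost sight of the goal: fall back to searching.
                self.state = Macro.FIND_GOAL
        return self.state
```

The point of this hybrid design is that the learned modules only need to solve narrow subproblems (navigate, classify frames), while the human-authored transitions encode the task hierarchy.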




Authors: Vinicius G. Goecks, Nicholas Waytowich, David Watkins, Bharat Prakash




Links:


TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick


YouTube: https://www.youtube.com/c/yannickilcher


Twitter: https://twitter.com/ykilcher


Discord: https://discord.gg/4H8xxDF


BitChute: https://www.bitchute.com/channel/yann...


LinkedIn: https://www.linkedin.com/in/ykilcher


BiliBili: https://space.bilibili.com/2017636191




If you want to support me, the best thing to do is to share out the content :)




If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):


SubscribeStar: https://www.subscribestar.com/yannick...


Patreon: https://www.patreon.com/yannickilcher


Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq


Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2


Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
