NaNoGenMo 2019 - GPT-2 Edition
The Basics
My goal this year was to take some random text, character, and location generators and build a basic recursive quest engine.
Let's unpack that. All I want for now is:
- locations - a small network of places, with the ability for characters to travel between them
- characters - a recurring finite set of characters who travel between the locations and interact when they're in the same place
- stretch goal - interact when they're passing one another!
- recursive - the idea is that we have a world... it has, say, cities; within cities are places (inns, bars, markets); within those may be rooms... or maybe two levels of recursion is deep enough
Eh, maybe that's even too much. I work a whole lot, and eight days into November all I've done is write those bullets. Here's a tiny sketch of what I mean, and then let's see what's out there...
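Everything in this sketch is made up for illustration - none of it comes from the libraries below; it's just the shape of the data I'm after:

# Hypothetical scaffolding for the quest engine -- all names are invented.
import random

class Location:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []  # recursion: world -> city -> inn -> room
        self.neighbors = []             # travel network at this level
        self.characters = []

def connect(a, b):
    a.neighbors.append(b)
    b.neighbors.append(a)

class Character:
    def __init__(self, name, location):
        self.name = name
        self.location = location
        location.characters.append(self)

    def travel(self):
        # wander to a random neighboring location
        if self.location.neighbors:
            self.location.characters.remove(self)
            self.location = random.choice(self.location.neighbors)
            self.location.characters.append(self)

# Tiny two-level world: one city containing three connected places.
inn, market, square = Location("inn"), Location("market"), Location("square")
connect(inn, market); connect(market, square); connect(square, inn)
city = Location("city", children=[inn, market, square])

alice, bob = Character("Alice", inn), Character("Bob", market)
for tick in range(10):
    for c in (alice, bob):
        c.travel()
    if alice.location is bob.location:
        print(f"tick {tick}: Alice and Bob meet at the {alice.location.name}")

The generators below would then be responsible for filling in the actual names, descriptions, and whatever the characters say when they end up in the same place.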
progress...
So there are some good generators out there that sort of start to do what I want. I may just daisy-chain them together... I've listed a few below. TextWorld's world generation is almost exactly what I want; it's just single-character. Hmmm.
Local laptop
gpt-2
This: https://openai.com/blog/better-language-models/
Made this: https://github.com/openai/gpt-2
Which made this: https://colab.research.google.com/drive/1gB03iSnshYcSzSCrS9gPGcoOFCpMhVq_
hit this bug: https://github.com/openai/gpt-2/issues/178
git clone https://github.com/openai/gpt-2.git && cd gpt-2
docker build --tag gpt-2 -f Dockerfile.cpu .
docker run -it gpt-2 bash
export PYTHONIOENCODING=UTF-8
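Once the container is up, the actual sampling step (going by the gpt-2 repo's README around this time - double-check the script names and flags against the checkout) is something like:

python3 src/generate_unconditional_samples.py | tee /tmp/samples
python3 src/interactive_conditional_samples.py --top_k 40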
Need to run this on AWS or something:
2019-11-09 20:43:18.919955: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-11-09 20:43:23.418088: W tensorflow/core/framework/allocator.cc:122] Allocation of 154389504 exceeds 10% of system memory.
2019-11-09 20:43:23.722155: W tensorflow/core/framework/allocator.cc:122] Allocation of 154389504 exceeds 10% of system memory.
2019-11-09 20:43:44.915623: W tensorflow/core/framework/allocator.cc:122] Allocation of 18137088 exceeds 10% of system memory.
2019-11-09 20:43:45.006514: W tensorflow/core/framework/allocator.cc:122] Allocation of 18210816 exceeds 10% of system memory.
2019-11-09 20:43:45.102333: W tensorflow/core/framework/allocator.cc:122] Allocation of 18284544 exceeds 10% of system memory.
Parl.AI
git clone https://github.com/facebookresearch/ParlAI.git
cd ParlAI; python setup.py develop
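Haven't dug in yet, but a first smoke test would be printing a few dialogue examples from one of the bundled tasks. The module path and function names below are my recollection of ParlAI's examples/display_data.py and may have moved, so treat them as assumptions:

# Assumed ParlAI entry points -- verify against examples/display_data.py in the checkout.
from parlai.scripts.display_data import setup_args, display_data

parser = setup_args()
opt = parser.parse_args(['--task', 'babi:task1k:1'])  # any built-in task name
display_data(opt)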
textworld
From: https://www.microsoft.com/en-us/research/project/textworld/
Yields: https://github.com/microsoft/textworld
hit this bug: https://github.com/authomatic/chromedriver_installer/issues/11
pip install textworld
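The README's basic play loop looks roughly like this (from memory - the exact return values of step() are an assumption worth checking); the world/quest generation itself, which is the part I actually care about, goes through the tw-make command-line tool or textworld.generator:

# Roughly the TextWorld play loop from its README (API details are assumptions).
import textworld

env = textworld.start("tw_games/custom_game.ulx")  # path to a generated game file
game_state = env.reset()
done = False
while not done:
    game_state, reward, done = env.step(input("> "))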