So I ran the first few hundred words of this article as the prompt for GPT-3
Except in my version of the article, @MelMitchell1 celebrates how GPT-3's responses were indistinguishable from human outputs
With that fake article as a prompt, it nailed every trial.
cc: @gdb
A lot of people asked questions and made suggestions in response to this article. I've posted a brief follow-up that answers some of these questions: medium.com/@melaniemitchell.…
@ErnestSDavis @lmthang @gromgull @teemu_roos @gideonmann
It's not really, once you take the time to consider how GPT is built/trained and what it optimizes for.
It's basically auto-completing an article.
The more context you give it, the better it does. Few-shot training combined with a large amount of context makes it very powerful.
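For anyone who wants to try this themselves, here is a minimal sketch of a few-shot, context-heavy prompt sent to the 2020-era OpenAI Completion API (the pre-1.0 openai Python client). The engine name, sampling parameters, and the toy letter-string task are illustrative assumptions, not the exact prompts used in this thread.

# Minimal few-shot prompting sketch; engine, parameters, and task are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, set your own key

# The "context": an instruction plus a couple of worked examples,
# then a new problem for GPT-3 to auto-complete.
prompt = (
    "Complete the analogy by applying the same change to the new string.\n"
    "Q: if a b c changes to a b d, what does p q r change to?\n"
    "A: p q s\n"
    "Q: if a b c changes to a b d, what does i j k change to?\n"
    "A: i j l\n"
    "Q: if a b c changes to a b d, what does x y z change to?\n"
    "A:"
)

response = openai.Completion.create(
    engine="davinci",   # 2020-era GPT-3 engine (assumption)
    prompt=prompt,
    max_tokens=8,
    temperature=0.0,    # low temperature for a more deterministic completion
    stop=["\n"],        # stop at the end of the answer line
)

print(response.choices[0].text.strip())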
I tried it with your exact prompts, but it does not generalize to other problems, as mentioned by @MelMitchell1 in her article. One example below: the first image is with your prompt and example, the second is a continuation where I tried one example from the article:
No. Didn't work. Back to the prompt board :)
Let me try other prompts.
Aug 12, 2020 · 3:23 AM UTC