So I ran the first few hundred words of this article as the prompt for GPT-3, except in my version of the article, @MelMitchell1 celebrates how GPT-3's responses were indistinguishable from human outputs. With that fake article as a prompt, it nailed every trial. cc: @gdb
A lot of people asked questions and made suggestions in response to this article. I've posted a brief follow-up that answers some of these questions: medium.com/@melaniemitchell.… @ErnestSDavis @lmthang @gromgull @teemu_roos @gideonmann
That is really bizarre. 👀
It's not really, when you take the time to consider how GPT is built/trained and what it optimizes for. It's basically auto-completing an article. The more context you give it, the better it does. Few-shot prompting combined with a large amount of context makes it very powerful.
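For concreteness, here is a minimal sketch of that few-shot-plus-context setup using the 2020-era openai Python client's Completion API. The engine name, API key, article text, and example problems are placeholders I made up for illustration, not the actual prompts used in this thread.

import openai  # pip install openai (Completion API as it existed in 2020)

openai.api_key = "YOUR_API_KEY"  # placeholder

# Long context: the (fake) article text, then a few worked examples of one
# kind of problem, then the unsolved case we want the model to complete.
article_text = "<first few hundred words of the article go here>"  # placeholder
few_shot_examples = (
    "Q: If I mix red paint and white paint, I get pink paint.\n"
    "Q: If I mix blue paint and yellow paint, I get green paint.\n"
)  # made-up examples; the "Q:" format is just one possible way to lay them out
query = "Q: If I mix red paint and blue paint, I get"

prompt = article_text + "\n\n" + few_shot_examples + query

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base engine name at the time
    prompt=prompt,
    max_tokens=16,
    temperature=0,      # near-greedy decoding for more reproducible outputs
    stop="\n",          # stop once the completed line ends
)
print(response["choices"][0]["text"])

The point is simply that the model is conditioned on everything in prompt, so a few hundred words of surrounding article plus a handful of solved examples can steer the continuation heavily.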
I tried it with your exact prompts, but it does not generalize to other problems, as @MelMitchell1 mentioned in her article. One example below: the first image is with your prompt and example, the second is the continuation where I tried one example from the article:
I would structure it without changing the type of examples; that's fairly confusing even for a human. Just grab a single type of problem and run it.
No. Didn't work. Back to the prompt board :) Let me try other prompts.
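As a rough sketch of the single-problem-type structure suggested above, one could build the prompt from solved examples of only one kind and then append the new case. The helper name and the letter-string examples here are placeholders, loosely in the spirit of the article's analogy problems, not the exact prompts tried in this thread.

# Hypothetical helper: keep the few-shot block to one problem type only.
def build_prompt(article_text, solved_examples, new_problem):
    shots = "\n".join(f"{p} {a}" for p, a in solved_examples)
    return f"{article_text}\n\n{shots}\n{new_problem}"

# Made-up placeholder examples of a single problem type:
solved = [
    ("If a b c changes to a b d, then p q r changes to", "p q s."),
    ("If a b c changes to a b d, then i j k changes to", "i j l."),
]
prompt = build_prompt(
    "<article text here>",
    solved,
    "If a b c changes to a b d, then x y z changes to",
)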

Aug 12, 2020 · 3:23 AM UTC

Weird that it's not picking up the outcomes in the previous examples too; seeing how it evaluates the probabilities there usually indicates whether it'll get the generation right.
Will test more on my end tomorrow too. Pleasantly surprised by today's outcomes, but definitely want to understand this more completely.
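One way to look at those per-token probabilities is the logprobs field the Completion API returned at the time. A hedged sketch, with placeholder engine name, API key, and prompt:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder
prompt = "<article text plus few-shot examples>"  # placeholder

response = openai.Completion.create(
    engine="davinci",   # placeholder engine name
    prompt=prompt,
    max_tokens=8,
    temperature=0,
    logprobs=5,         # return the top-5 candidate tokens at each position
    stop="\n",
)

choice = response["choices"][0]
# Inspect how confident the model was at each generated token.
for token, logprob, top in zip(
    choice["logprobs"]["tokens"],
    choice["logprobs"]["token_logprobs"],
    choice["logprobs"]["top_logprobs"],
):
    print(f"{token!r}: {logprob:.2f}  alternatives: {top}")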
You're running the whole article before that paragraph as the pre-prompt, right, not just the paragraph in the screenshot?
Tried both. Same answer.