I think what's disturbing me so much about these #GPT3 examples is that for the first time we're really seeing computer programs optimized not to solve problems, but to convince their programmer/operator/user that they have solved those problems.
This distinction was almost irrelevant before (when fooling us was harder)... but not anymore.
The distinction isn't really novel; heck, I myself have written about one aspect of it before. But I still find it shocking to see it in action.