More GPT-2, the ‘writer’ of Unicorn AI – Computerphile

35 thoughts on “More GPT-2, the ‘writer’ of Unicorn AI – Computerphile”

  1. Everest is actually the better answer. If you look at the paper, a previous question was answered in the same way (a yes/no question answered with a name), so the model did exactly what was asked of it.

    In another test the question was about a location: all the humans answered Sweden, but the model answered Stockholm!

  2. To create a true common-sense test for NLP systems, I think we should try not to use regular nouns; otherwise there will most likely be statistical correlations between the words that make the answer obvious without any understanding of what the sentence means. Phrases like "roads are wide" and "chickens are scared" are common and you may encounter them everywhere (a sketch of the nonce-word idea follows).
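
    One way to make that concrete: fill Winograd-style templates with made-up nonce nouns, so that word co-occurrence statistics can't give the answer away. A minimal Python sketch; the templates and nonce words here are invented purely for illustration:

    ```python
    import random

    # Winograd-style templates (invented for this example). The two noun
    # slots {a} and {b} are disambiguated by the predicate, not the nouns.
    TEMPLATES = [
        ("The {a} could not cross the {b} because it was too wide.", "b"),
        ("The {a} could not cross the {b} because it was too scared.", "a"),
    ]

    # Nonce nouns with no corpus statistics attached to them.
    NONCE_NOUNS = ["blicket", "dax", "wug", "toma", "fep"]

    def make_item(template: str, answer_slot: str) -> tuple[str, str]:
        """Fill a template with two distinct nonce nouns; return (sentence, answer)."""
        a, b = random.sample(NONCE_NOUNS, 2)
        answer = a if answer_slot == "a" else b
        return template.format(a=a, b=b), answer

    for template, slot in TEMPLATES:
        sentence, answer = make_item(template, slot)
        print(f"{sentence}  ->  'it' = the {answer}")
    ```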

  3. I think the humans provided a better answer than the A.I. when asked: "Did they climb any mountains?"
    The humans understood this clearly as a yes or no question, which it is, so the asker is only interested in whether or not they climbed any mountains. The A.I., on the other hand, did not understand the nature of the question and thus gave an unnatural answer.

  4. I wonder how that algorithm would do if given knitting patterns? (Ever seen “SkyKnit”? It’s a neural network that was given Ravelry patterns and made to generate new ones, and Ravelry users test-knitted several of the patterns. The results are amusing.)

  5. I've only watched the first minute so far, but wouldn't it be an interesting solution to the AGI threat if building one turned out to be impractical and to have no real use, because there is no useful or easily defined task that actually requires general intelligence?

    EDIT: There could also be an interesting parallel to draw with one of the responses to the simulation hypothesis: that nobody would ever want to build one because it's impractical and useless.

  6. 8:21 Top-notch French indeed! "un poulet" ("a chicken") → "il", "une route" ("a road") → "elle"; the chicken is masculine and the road is feminine 🙂

  7. It would be interesting to train it on Wikipedia and prime it on Jeopardy or some other quiz show. The answers might be wrong, but perhaps just coherent enough to be right sometimes? It could be fun on Tom Scott's experimental liars game, too 🙂 (A rough priming sketch follows.)
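
    You can already try the priming half of that with the released GPT-2 weights. A minimal sketch, assuming the Hugging Face transformers library and its small public "gpt2" checkpoint; the quiz questions are illustrative, and the answers it generates may well be wrong:

    ```python
    from transformers import pipeline

    # Load the small public GPT-2 checkpoint as a text generator.
    generator = pipeline("text-generation", model="gpt2")

    # Prime the model with a few quiz-style Q/A pairs, then leave the
    # last answer blank and let it continue the pattern.
    prompt = (
        "Q: What is the capital of France?\n"
        "A: Paris\n"
        "Q: Who wrote Romeo and Juliet?\n"
        "A: William Shakespeare\n"
        "Q: What is the largest planet in the solar system?\n"
        "A:"
    )

    result = generator(prompt, max_new_tokens=10, do_sample=True, temperature=0.7)
    print(result[0]["generated_text"])
    ```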

  8. I want to give this AI complex legal questions and prompt it to fill in what comes next after the initialism IANAL (short for "I Am Not A Lawyer"). It's a common disclaimer Redditors use before giving legal advice that they are usually not at all qualified to give. The potential for comedy seems endless.

  9. The way we use language carries so many assumptions that communication seems to consist almost entirely of approximate meaning.

  10. "Has anyone made any of those recipes?"

    I can't think of a better way to get at the heart of language processing and its disconnect from the real world.

    Also yeah, "because it was the style at the time" lol

  11. T usually denotes the tesla, the SI unit of magnetic flux density.

    Now the recipe makes total sense.

  12. The most surprising thing is that it shows promise of becoming better than this with a larger dataset. You know this is going to happen at some point!

  13. Can this thing do subtle dialect stuff? For example, if the prompt says “torch” or “lift” instead of “flashlight” or “elevator,” will it tend to use more British terms for things? (See the probing sketch below.)
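
    One way to probe that beyond eyeballing samples: compare the probability the model assigns to a British versus an American continuation after a British-flavoured prompt. A minimal sketch, assuming the Hugging Face transformers library; the prompt and word pair are made up for illustration:

    ```python
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def continuation_logprob(prompt: str, continuation: str) -> float:
        """Sum of log-probabilities the model assigns to `continuation` after `prompt`."""
        prompt_ids = tokenizer.encode(prompt)
        cont_ids = tokenizer.encode(continuation)
        ids = torch.tensor([prompt_ids + cont_ids])
        with torch.no_grad():
            log_probs = torch.log_softmax(model(ids).logits, dim=-1)
        # The logits at position p predict the token at position p + 1.
        return sum(
            log_probs[0, len(prompt_ids) + i - 1, tok].item()
            for i, tok in enumerate(cont_ids)
        )

    # A prompt full of British vocabulary; does the model stay in dialect?
    prompt = "I grabbed my torch, took the lift down from my flat, and switched on the"
    for word in [" torch", " flashlight"]:
        print(f"{word!r}: {continuation_logprob(prompt, word):.2f}")
    ```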
