• @lolola
    8 • 1 month ago

    What’s the y axis?

    • @kciwsnurb@aussie.zone
      9 • 1 month ago

      The temperature scale, I think. You divide the logits by the temperature before feeding them to the softmax function. A larger (resp. smaller) temperature gives a higher (resp. lower) entropy distribution.
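
      A rough sketch of that scaling, with a made-up logit vector for three candidate tokens (not taken from any real model):

      ```python
      import numpy as np

      def softmax_with_temperature(logits, temperature):
          """Divide the logits by the temperature, then apply softmax."""
          scaled = np.asarray(logits, dtype=float) / temperature
          scaled -= scaled.max()          # subtract max for numerical stability
          exp = np.exp(scaled)
          return exp / exp.sum()

      # Toy logits for three candidate tokens (hypothetical values)
      logits = [2.0, 1.0, 0.5]
      print(softmax_with_temperature(logits, 0.5))  # peaky distribution, low entropy
      print(softmax_with_temperature(logits, 2.0))  # flatter distribution, higher entropy
      ```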

      • @lolola
        3 • 1 month ago

        I still don’t understand.

        • @kciwsnurb@aussie.zone
          1 • 1 month ago

          Each row in the figure is a probability distribution over possible outputs (x-axis labels). The more yellow, the more likely (see the colour map on the right). With a small temperature (e.g., the last row), all the probability mass is on 42. This is a low entropy distribution because if you sample from it you’ll constantly get 42, so there’s no randomness whatsoever (think of entropy as a measure of randomness/chaos). As temperature increases (rows closer to the first/topmost one), 42 is still the most likely output, but the probability mass gets dispersed to other possible outputs too (other outputs get a bit more yellow), resulting in higher entropy distributions. Sampling from such a distribution gives you more random outputs (42 would still be frequent, but you’d get 37 or others too occasionally). Hopefully this is clearer.
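
          If it helps, here’s a toy numerical version of the same idea (the logits are made up, since the actual values behind the figure aren’t given):

          ```python
          import numpy as np

          def softmax(logits, temperature):
              z = np.asarray(logits, dtype=float) / temperature
              z -= z.max()                  # numerical stability
              p = np.exp(z)
              return p / p.sum()

          def entropy(p):
              return -(p * np.log(p + 1e-12)).sum()

          # Made-up logits; index 2 plays the role of "42", the favourite output
          logits = [1.0, 0.5, 4.0, 0.2, 0.8]

          for t in (0.1, 1.0, 5.0):
              p = softmax(logits, t)
              print(f"T={t}: p(42)={p[2]:.2f}, entropy={entropy(p):.2f} nats")
          ```

          At a small temperature nearly all the mass sits on index 2 and the entropy is close to zero; as the temperature grows, the mass spreads out and the entropy rises.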

          Someone in another reply uses the word “creativity” to describe the effect of temperature scaling. The more commonly used term in the literature is “diversity”.

    • The Octonaut
      7 • 1 month ago

      Temperature is basically how creative you want the AI to be. The lower the temperature, the more predictable (and repeatable) the response.
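
      As a rough illustration (toy logits, not any particular model), sampling repeatedly at a low vs. a high temperature:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      def sample(logits, temperature, n=10):
          z = np.asarray(logits, dtype=float) / temperature
          p = np.exp(z - z.max())
          p /= p.sum()
          return rng.choice(len(logits), size=n, p=p)

      logits = [4.0, 1.0, 0.5]            # made-up scores for tokens 0, 1, 2

      print(sample(logits, 0.2))          # low temperature: almost always token 0
      print(sample(logits, 2.0))          # high temperature: noticeably more varied
      ```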

      • @lolola
        1 • 1 month ago

        Creativity is hot. That makes more sense, thanks.