8 comments

  • Fiveplus 7 minutes ago
    The jump from CPU to GPU makes total sense, but I am looking at your shape vectors and thinking about the "L" vs "J" distinction. Since you are normalizing the vectors, you are basically creating a perceptual hash of the cell. Have you tried weighting the 6 dimensions differently? I imagine the middle two circles (3 and 4) carry less information about edges than the corners, so you could perhaps pack the cache even tighter by lowering the bit-depth on the center y-axis.
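
    Roughly what I mean by lowering the bit-depth, as a sketch (the bit layout and names here are my own guesses, not the article's code): give the corner samples 3 bits each and the two center samples 2 bits, then pack the quantized values into a single integer cache key.

        // Hypothetical non-uniform quantization of the 6 sampling circles.
        // Corners keep 3 bits each; the two center circles get only 2 bits,
        // on the assumption that they carry less edge information.
        const BITS = [3, 3, 2, 2, 3, 3];

        function packCacheKey(samples: number[]): number {
          let key = 0;
          for (let i = 0; i < samples.length; i++) {
            const levels = 1 << BITS[i]; // 8 or 4 quantization levels
            const q = Math.min(levels - 1, Math.floor(samples[i] * levels));
            key = (key << BITS[i]) | q; // append this sample's bits
          }
          return key; // 16 bits total, i.e. 65,536 possible keys
        }

    Compared to a uniform 3 bits per sample (18 bits, 262,144 keys), that shrinks the key space by 4x.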

    I am also curious: since you're already paying the cost of a 6-pass GPU pipeline, did you consider using signed distance fields for the character lookup instead of the 6-sample heuristic? To me, it feels like you could get infinite 'shape' resolution that way compared to discrete sampling circles. Kudos on shipping this.

  • chrisra 1 minute ago
    Next up: proportional fonts and font weights?
  • sph 1 hour ago
    With every example I thought "yeah, this is cool, but I can see there's space for improvement" — and lo! did the author satisfy my curiosity and improve his technique further.

    Bravo, beautiful article! The rest of this blog is at this same level of depth, worth a sub: https://alexharri.com/blog

  • chrisra 10 minutes ago
    > To increase the contrast of our sampling vector, we might raise each component of the vector to the power of some exponent.

    How do you arrive at that? It's presented like it's a natural conclusion, but if I were trying to adjust contrast... I don't see the connection.
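
    To make sure I'm reading it right, the operation being described is presumably something like this (my sketch, not the author's actual code):

        // Raise each component of the sampling vector to an exponent.
        // With components in [0, 1], an exponent > 1 shrinks small values
        // much faster than large ones, widening the gap between weak and
        // strong samples.
        function applyContrast(v: number[], exponent: number): number[] {
          return v.map((x) => Math.pow(x, exponent));
        }

        // e.g. applyContrast([0.9, 0.5], 2) -> [0.81, 0.25]
        // the ratio between the two grows from 1.8 to ~3.2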

  • nickdothutton 44 minutes ago
    What a great post. There is an element of ascii rendering in a pet project of mine and I’m definitely going to try and integrate this work. From great constraints comes great creativity.
  • nathaah3 1 hour ago
    that was so brilliant! i loved it! thanks for putting it out :)
  • adam_patarino 45 minutes ago
    Tell me someone has turned this into a library we can use
  • Jyaif 1 hour ago
    It's important to note that the approach described focuses on giving fast results, not the best results.

    Simply trying every character, considering its entire bitmap, and keeping the one that minimizes the distance to the target gives better results, at the cost of more CPU.
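
    As a rough sketch of what I mean (my own illustration, not code from the article): rasterize each candidate glyph once, then for every cell keep the glyph whose bitmap has the smallest squared distance to the cell's pixels.

        // Brute-force matching: compare a cell's pixel block against every
        // glyph bitmap and return the index of the closest one. Exhaustive,
        // but much slower than the sampling approach in the article.
        function bestGlyph(cell: Float32Array, glyphs: Float32Array[]): number {
          let best = 0;
          let bestDist = Infinity;
          for (let g = 0; g < glyphs.length; g++) {
            let dist = 0;
            for (let i = 0; i < cell.length; i++) {
              const d = cell[i] - glyphs[g][i];
              dist += d * d; // squared distance over the full bitmap
            }
            if (dist < bestDist) {
              bestDist = dist;
              best = g;
            }
          }
          return best;
        }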

    This is a well-known problem because early computers with monitors could only display characters.

    At some point we were able to define custom character bitmaps, but not enough of them to cover the entire screen, so the problem became more complex: which new characters do you create to reproduce an image optimally?

    And separately we could choose the foreground/background color of individual characters, which opened up more possibilities.

    • spuz 6 minutes ago
      Thinking more about the "best results". Could this not be done by transforming the ascii glyphs into bitmaps, and then using some kind of matrix multiplication or dot product calculation to find the ascii character with the highest similarity to the underlying pixel grid? This would presumably lend itself to SIMD or GPU acceleration. I'm not that familiar with this type of image processing, so I'm sure someone with more experience can clarify.
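
      Something like this sketch is what I have in mind (made-up names, and I may be glossing over normalization, e.g. you'd probably want cosine similarity so dense glyphs don't always win):

          // Rows of `glyphMatrix` are the flattened glyph bitmaps; scoring a
          // cell against all glyphs is then a matrix-vector product (one dot
          // product per glyph), which maps well onto SIMD or a GPU.
          function mostSimilarGlyph(glyphMatrix: Float32Array[], cell: Float32Array): number {
            let best = 0;
            let bestScore = -Infinity;
            for (let g = 0; g < glyphMatrix.length; g++) {
              let score = 0;
              for (let i = 0; i < cell.length; i++) {
                score += glyphMatrix[g][i] * cell[i]; // dot product term
              }
              if (score > bestScore) {
                bestScore = score;
                best = g;
              }
            }
            return best;
          }
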
    • brap 18 minutes ago
      You said “best results”, but I imagine that the theoretical “best” may not necessarily be the most aesthetically pleasing in practice.

      For example, limiting output to a small set of characters gives it a more uniform look which may be nicer. Then also there’s the “retro” effect of using certain characters over others.

    • Sharlin 48 minutes ago
      And a (the?) solution is using an algorithm like k-means clustering to find the tileset of size k that can represent a given image most faithfully. Of course that’s only for a single frame at a time.
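
      Roughly, as a sketch of the idea (my own illustration, not anyone's actual code): treat every cell's pixel block as a vector, run k-means over those vectors, and use the k centroids, thresholded back into bitmaps, as the custom tileset.

          // Cluster the image's cell blocks into k representative tiles.
          function kMeansTiles(cells: Float32Array[], k: number, iters = 10): Float32Array[] {
            const dim = cells[0].length;
            // Start from the first k cells as initial centroids (a real
            // implementation would seed more carefully, e.g. k-means++).
            let centroids = cells.slice(0, k).map((c) => Float32Array.from(c));
            for (let it = 0; it < iters; it++) {
              const sums = Array.from({ length: k }, () => new Float32Array(dim));
              const counts = new Array(k).fill(0);
              for (const cell of cells) {
                // Assign the cell to its nearest centroid by squared distance.
                let best = 0;
                let bestDist = Infinity;
                for (let c = 0; c < k; c++) {
                  let dist = 0;
                  for (let i = 0; i < dim; i++) {
                    const d = cell[i] - centroids[c][i];
                    dist += d * d;
                  }
                  if (dist < bestDist) {
                    bestDist = dist;
                    best = c;
                  }
                }
                counts[best]++;
                for (let i = 0; i < dim; i++) sums[best][i] += cell[i];
              }
              // Move each centroid to the mean of the cells assigned to it.
              centroids = centroids.map((c, idx) =>
                counts[idx] === 0 ? c : Float32Array.from(sums[idx], (v) => v / counts[idx])
              );
            }
            return centroids; // threshold these to get the k tile bitmaps
          }
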
    • finghin 34 minutes ago
      In practice, isn’t a large HashMap best for lookup, based on compile-time or static constants describing the character-space?
      • spuz 25 minutes ago
        In the appendix, he talks about reducing the lookup space by quantising the sampled points to just 8 possible values. That allowed him to make a lookup table of about 2MB in size, which was apparently incredibly fast.
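
        If I follow, the indexing would look roughly like this (my reconstruction, not code from the article); 6 samples at 8 levels each gives 8^6 = 262,144 possible keys, which at, say, 8 bytes per entry lands right around the 2MB mentioned.

            // Quantize each of the 6 samples to 8 levels (3 bits) and combine
            // them into a single index into the precomputed character table.
            function lutIndex(samples: number[]): number {
              let index = 0;
              for (const s of samples) {
                const q = Math.min(7, Math.floor(s * 8)); // clamp to 0..7
                index = index * 8 + q;
              }
              return index; // 0 .. 8^6 - 1
            }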