
If I set 175% scaling in Gnome Settings, the value is saved as 1.7518248558044434 in ~/.config/monitors.xml:

<monitors version="2">
  <configuration>
    <logicalmonitor>
      <x>0</x>
      <y>0</y>
      <scale>1.7518248558044434</scale>
      <primary>yes</primary>
      <monitor>
        <monitorspec>
          <connector>DP-3</connector>

Why is that? At first I thought it could be a floating-point rounding error, but 1.75 is one of those happy numbers whose value can be represented exactly in binary.

Gnome Wayland 43.3

Jonathon Reinhart
Damn Vegetables

2 Answers


The preset scale factors (100%, 125%, etc.) get adjusted to the closest values that give a whole number of pre-scaling virtual pixels, both horizontally and vertically, for your resolution. Judging by your value of 1.7518248558044434, the virtual size is probably 2192 x 1233, which means you have a 3840 x 2160 display.
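You can verify that arithmetic directly; here is a quick sketch in Python (the 3840 x 2160 resolution is the assumption above):

```python
scale = 1.7518248558044434  # the value stored in monitors.xml

# Dividing the native resolution by the scale gives almost exactly
# whole numbers of virtual pixels in both directions:
print(3840 / scale)  # 2191.9999520937613 -> 2192
print(2160 / scale)  # ~1232.99997 -> 1233

# The exact ratio the preset was adjusted to, in double precision:
print(3840 / 2192)   # 1.7518248175182483
```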

Also, as to why the width you would calculate with that value, 3840/1.7518248558044434 = 2191.9999520937613, is only accurate to about four places after the decimal point: the scale has clearly been through single-precision floating point (IEEE-754 32-bit). The double-precision approximation of 3840/2192 is more like 1.7518248175182483, but if you convert that value to single precision and back to double precision, you get 1.7518248558044434 precisely. I did it with Python, as suggested by the answer https://stackoverflow.com/a/43405711/60422:

>>> import struct
>>> struct.unpack('f', struct.pack('f', 1.7518248175182483))[0]
1.7518248558044434

Stéphane Chazelas suggests the corresponding one-liner in Perl:

perl -e 'printf "%.17g\n", unpack "f", pack "f", 1.7518248175182483'

Why does converting a floating-point number to a higher precision give a decimal representation with more digits that are of no use? This is the kind of floating-point rounding error the question alludes to: the internal representation of the number is binary, so the digits after the point (the "binary point", since it's binary) represent power-of-2 fractions (1/2, 1/4, 1/8, and so on). A number you can express in a finite number of decimal places does not necessarily have a finite representation in binary. For more on this, see https://stackoverflow.com/a/588014/60422
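As a quick illustration (using Python's fractions module, which recovers the exact rational value a float stores): 1.75 is a sum of power-of-2 fractions (1 + 1/2 + 1/4), so a binary float stores it exactly, whereas a finite decimal like 0.1 is not exactly representable:

```python
from fractions import Fraction

# Fraction(float) gives the exact rational value the float stores.
print(Fraction(1.75))  # 7/4 -- exactly representable in binary
print(Fraction(0.1))   # 3602879701896397/36028797018963968 -- 0.1 is not
```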

Single precision is generally said to be good for about 7 significant decimal figures, and that's what we're seeing here.

To get an idea of how the adjustment that produces this number actually works: the get_closest_scale_factor_for_resolution function in mutter calculates the virtual width and height from the scale factor. If these aren't whole numbers, it starts from the calculated width rounded down and tries whole-number widths on both sides of it, expanding outward one pixel at a time, until it finds a width whose adjusted scale factor also makes the virtual height a whole number, or until it gives up because the scale has gone out of range or past the search threshold. https://gitlab.gnome.org/GNOME/mutter/-/blob/176418d0e7ac6a0418eea46669f33c8e3b03c4bd/src/backends/meta-monitor.c#L1960
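In outline, the search looks roughly like this (a loose Python paraphrase of the C function linked above; the whole-number check is done with integer arithmetic here, which the real code does differently, and the names are simplified):

```python
import math

def closest_scale_for_resolution(width, height, scale, threshold=0.1):
    # Loose paraphrase of mutter's get_closest_scale_factor_for_resolution;
    # the real implementation is C, in src/backends/meta-monitor.c.
    if (width / scale).is_integer() and (height / scale).is_integer():
        return scale  # already a whole number of virtual pixels both ways

    base_w = math.floor(width / scale)  # calculated width, rounded down
    for delta in range(width):
        # expand outward from the starting width, one pixel at a time
        for candidate_w in (base_w - delta, base_w + delta):
            if candidate_w <= 0:
                continue
            candidate_scale = width / candidate_w
            # skip candidates whose scale drifts too far from the request
            if abs(candidate_scale - scale) > threshold:
                continue
            # accept when the height is also a whole number of virtual
            # pixels: height / candidate_scale == height * candidate_w / width
            if (height * candidate_w) % width == 0:
                return candidate_scale
    return None  # nothing suitable within the threshold

print(closest_scale_for_resolution(3840, 2160, 1.75))  # 1.7518248175182483
```

For a 3840 x 2160 display and a requested scale of 1.75, the search lands on a virtual width of 2192, giving the adjusted scale 3840/2192 that then gets stored (after a trip through single precision) as the value in the question.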

If you want to know why the developers decided to do this, I don't have the answer, but my guess is backwards compatibility: developers are used to people's monitors having whole numbers of pixels, and so this is what the existing software out there is designed for.

rakslice
  • As an aside, what's up with the iterative approach there and testing of values that can't possibly ever be returned? Maybe it was intended to eventually allow the best approximation if no exact match was found in range? – rakslice Mar 20 '23 at 06:12
  • Thanks. Yeah, it's a 4K monitor. I was trying to automate what the Gnome display settings GUI does; I must pass a scaling value, and I was not sure if I could pass just 1.75 (for 175% scaling) or had to pass that exact long fractional number. – Damn Vegetables Mar 20 '23 at 08:07
  • "if you round-trip that value to single-precision and back to double-precision you get 1.7518248558044434 precisely.": could you explain that a bit more clearly please? It seems to make perfect sense to those who know how this works, but for those of us more ignorant, that is really confusing. Or maybe just for me :) – terdon Mar 20 '23 at 12:08
  • @terdon, see `perl -e 'printf "%.17g\n", unpack "f", pack "f", 1.7518248175182483'` for instance. – Stéphane Chazelas Mar 20 '23 at 13:33
  • @terdon Might make more sense to you as "if you store the calculation in a 32-bit float, then copy that into a 64-bit float..."? Essentially, introducing a rounding error, but in the limit of the binary floating point representation, rather than a number of decimal digits. – IMSoP Mar 20 '23 at 14:55
  • @terdon: https://www.h-schmidt.net/FloatConverter/IEEE754.html shows how a value is represented as a single-precision float, `2^exponent * 1.mantissa`; you can input a decimal number and it will show you the nearest representable `float` and the binary representation. Converting back to `double` doesn't introduce any additional error because it's also binary floating point, with more exponent and mantissa bits, so encode the same exponent and zero-pad the low mantissa bits. See also https://en.wikipedia.org/wiki/Single-precision_floating-point_format for details of how floats work. – Peter Cordes Mar 21 '23 at 10:37
  • Or more simply, every float32 can be exactly represented as a float64. But if you first convert a decimal digit-string to `double` (float64) and then to `float` (float32), you do potentially have two rounding steps, first to nearest representable float64, then to nearest float32, assuming the default rounding mode. – Peter Cordes Mar 21 '23 at 10:41
  • Rounding a number to single precision is really easy in Java's [jshell](https://en.wikipedia.org/wiki/JShell): `(double)(float)1.7518248175182483` – Nayuki Mar 21 '23 at 15:31
  • @Nayuki: Why the cast back to `(double)`? Is that to get something to print more decimal digits, enough to round-trip the `double` with all those low zero bits in the mantissa, instead of just a string that will round back to same `float`? – Peter Cordes Mar 22 '23 at 14:09
  • @PeterCordes Well, my intent was to show shorter code than Python and Perl. Yes, the side effect is to print more decimal digits, at `double` granularity rather than `float` granularity. – Nayuki Mar 22 '23 at 14:49

Another theory: the rational that 1.7518248558044434 approximates is not 2192/1233 but the simpler 240/137 = 1.7518248175182481... (To get a rational that's closer, you'd need a numerator and denominator larger by a factor of 1390. And yes, the decimal representations of multiples of 1/137 have an 8-digit cycle.) So there are several possibilities for the width and height in pixels that would give this ratio, including 2160 x 1233.
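The reduction is easy to check with Python's fractions module: both of the candidate width/virtual-width ratios collapse to the same fraction.

```python
from fractions import Fraction

# Fraction reduces to lowest terms automatically.
print(Fraction(3840, 2192))  # 240/137
print(Fraction(2160, 1233))  # 240/137
```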

But, you say, 240/137 is close but not that close. Another good approximation is 3673843/2097152; to get a rational that's closer, you'd need a numerator and denominator larger by a factor of thousands. 1/2097152 is 2^-21. So that suggests 240/137 was stored in a binary floating-point format with enough room for 22 mantissa bits: one bit to the left and 21 bits to the right of the binary point. (These bit counts neglect any trailing 0s there might be.) It was then converted to decimal with far more precision than there was accuracy in that binary representation.
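You can recover that exact dyadic rational from the single-precision value in Python (a quick check, not part of the original reasoning):

```python
import struct
from fractions import Fraction

# Round 240/137 to IEEE-754 single precision, then read back the exact
# rational value that the 32-bit float stores.
f32 = struct.unpack('f', struct.pack('f', 240 / 137))[0]
print(f32)            # 1.7518248558044434 -- the value from monitors.xml
print(Fraction(f32))  # 3673843/2097152; note that 2097152 == 2**21
```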

Rosie F
  • pixel width/pixel height (2192/1233 and so on) gives the aspect ratio, which the scale factor is probably a very bad approximation of since it's unrelated =) – rakslice Mar 20 '23 at 20:48
  • 1
    As it turns out, the 240/137 ratio is basically the rub in `get_closest_scale_factor_for_resolution`'s weird approach in its current form, it's just `3840/2192` or `2160/1233` reduced, but why do they reduce to that? Well, you might notice it puts the ratio in terms of the basic 16:9 pixel steps where a scale applied in both directions can land on a pixel boundary in both at a 16:9 native resolution. – rakslice Mar 20 '23 at 21:11
  • IEEE single-precision float32 stores 23 mantissa bits explicitly, and a leading 1 implied by a non-zero exponent. https://en.wikipedia.org/wiki/Single-precision_floating-point_format . So for purposes of precision, it's really a 24-bit mantissa. – Peter Cordes Mar 21 '23 at 10:44
  • @PeterCordes OK, there could have been a couple of trailing 0s there. Answer edited to suit. – Rosie F Mar 21 '23 at 10:48