All of the sRGB-compatible color functions (everything from Color 3) eagerly normalize into an sRGB tuple, and serialize with rgb(), so you lose information about whether it was originally written as #000, hsl(), etc. This lets UAs eagerly translate it into an efficient internal format (a 32-bit int) and not worry about additional details.
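As a minimal sketch of that legacy model (the helper names and packing order are hypothetical, not from any spec): four 8-bit channels packed into one 32-bit integer, always serialized back out through rgb()/rgba():

```ts
// Hypothetical sketch: after parsing, only four 8-bit channels packed into
// a 32-bit integer remain; the author's original notation is gone.
function packRGBA(r: number, g: number, b: number, a: number): number {
  return (((r & 0xff) << 24) | ((g & 0xff) << 16) | ((b & 0xff) << 8) | (a & 0xff)) >>> 0;
}

function serializeLegacy(packed: number): string {
  const r = (packed >>> 24) & 0xff;
  const g = (packed >>> 16) & 0xff;
  const b = (packed >>> 8) & 0xff;
  const a = (packed & 0xff) / 255;
  // Per CSSOM, opaque colors serialize as rgb(), non-opaque ones as rgba()
  // (the real alpha rounding rules are subtler; simplified here).
  return a === 1 ? `rgb(${r}, ${g}, ${b})` : `rgba(${r}, ${g}, ${b}, ${a})`;
}
```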
How should we serialize color(), lab(), etc? Chrome is looking into implementing, and is currently planning to eagerly convert these into extended-sRGB, likely stored as four 32-bit floats. Which serialization form should we prefer?
Reasonable options seem to be:
- rgb(), with decimals and values outside of [0,255] to represent the extended-sRGB value. Benefit: should be backwards-compatible with at least some existing color code; doesn't require us to distinguish between legacy color formats (which must stay serializing as rgb()) and newer ones. Cons: requires us to define that rgb() supports extended-sRGB.
- color(), with some arbitrary choice of colorspace. Benefit: perhaps clearer, especially if it happens to match the colorspace the author chose. Downside: requires us to carry an extra bit around for whether the color was in a legacy format or not; might still be an unexpected result if you used a different colorspace.
- lab(), same benefits/downsides as color(), but maybe more appropriate as a "universal" serialization format for higher-def colors.

@smfr Since y'all just announced support for color(display-p3), any opinions?
I don't think we should convert between colorspaces when serializing, and by colorspace I mean the space the color is represented in internally (so hsl/hwb/rgb/gray are all in the sRGB colorspace, lab/lch are in the Lab colorspace, and color(foo, ...) is in the foo colorspace).
So I think color(display-p3, ...) should serialize as color(display-p3, ...), lab() and lch() should serialize as either lab or lch.
We have to then decide how color(sRGB, ...) and color(lab, ...) serialize. My slight preference is to preserve them as color() functions.
Okay, so if we go that way, our current planned impl would have to carry around a colorspace enum as well, so it knows what colorspace to convert back into when serializing.
lab() and lch() should serialize as either lab or lch.
Analogously to rgb() and hsl() both serializing as rgb(), I think both of these should serialize to lab(). (Serializing arbitrary colors to lch() is fraught anyway, since a non-chromatic color doesn't have a unique angle.)
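For illustration, a sketch of why lch() is fraught as a universal target: the hue angle comes out of atan2, which has no meaningful value when chroma is zero (the helper name here is hypothetical, not a spec term):

```ts
// Lab -> LCH: chroma is the length of the (a, b) vector, hue its angle.
function labToLCH(l: number, a: number, b: number): [number, number, number] {
  const c = Math.hypot(a, b);
  let h = (Math.atan2(b, a) * 180) / Math.PI;
  if (h < 0) h += 360;
  // For an achromatic color (a === b === 0), c is 0 and every hue angle
  // names the same color; atan2(0, 0) just happens to return 0.
  return [l, c, h];
}

labToLCH(50, 0, 0); // [50, 0, 0], yet [50, 0, 120] is the exact same color
```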
We have to then decide how color(sRGB, ...) and color(lab, ...) serialize. My slight preference is to preserve them as color() functions.
I agree.
I think both of these should serialize to lab(). (Serializing arbitrary colors to lch() is fraught anyway, since a non-chromatic color doesn't have a unique angle.)
I agree that lab and lch should serialize to lab.
We have to then decide how color(sRGB, ...) and color(lab, ...) serialize. My slight preference is to preserve them as color() functions.
Agreed, because that is also clearer for color(prophoto-rgb ...) etc.
Could browsers always serialize to color() with the color space used, i.e. all Level 3 notations would yield color(srgb <red> <green> <blue> / <alpha>)?
Sure, it breaks backwards compatibility, but otherwise it seems like the cleanest solution.
PS: #4649 adds color(lab …) which would make this possible.
No, because it absolutely breaks backwards compat of something that has been stable for many, many years and that lots of scripts rely on.
Do they, though?
I tried to find examples on Github of scripts that assume a particular notation for a serialized <color> (apart from assertions in Blink / Webkit / Chromium conformance test files). Maybe I'm searching the wrong way, but I have not found any yet, certainly not using getPropertyValue().
By the way, while reading the respective part of the CSSOM draft, I noticed that
shortest base-ten integer serialization of the color’s [red/green/blue] component
seems underspecified, because it does not say whether the RGB components are in [0…255] (apparently intended), [0…100]%, or [0.0…1.0]; e.g., should a half-intensity channel come out as 128, 50%, or 0.5?
I know that I've written plenty of code that looks at color serializations for various reasons, and I absolutely depend on it coming out as rgb() so I can parse it and do math on it. (Old IE versions instead giving back whatever input format you put in was thus extremely annoying as I had to implement all the named keywords too.) I assume that my experiences aren't unique.
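A hypothetical example of the kind of script that depends on this, assuming the serialized form is always rgb()/rgba() with comma-separated integers:

```ts
// Hypothetical: parse a serialized <color> so we can do math on it.
// This breaks the moment the browser hands back color(srgb ...) instead.
function parseSerializedColor(s: string): { r: number; g: number; b: number; a: number } {
  const m = s.match(/^rgba?\(\s*(\d+),\s*(\d+),\s*(\d+)(?:,\s*([\d.]+))?\s*\)$/);
  if (!m) throw new Error(`expected rgb()/rgba(), got: ${s}`);
  return { r: +m[1], g: +m[2], b: +m[3], a: m[4] !== undefined ? +m[4] : 1 };
}

// e.g. darken by 10%:
const { r, g, b } = parseSerializedColor("rgb(200, 100, 50)");
const darker = `rgb(${Math.round(r * 0.9)}, ${Math.round(g * 0.9)}, ${Math.round(b * 0.9)})`;
```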
So, looking first at what is in CSSOM currently for <color>:
- rgb() or rgba(), based on the alpha being 1.0 or not

Questions as I start porting this to CSS Color 4. I'm looking to maximize backwards compatibility but also extend to what CSS Color 4 allows for sRGB values, notably more than 8 bits per component:

- Should this continue to serialize non-unity-alpha as rgba() or should it use rgb() with the optional alpha component?
- Should this continue to use the still-allowed comma-separated list form, or the newly allowed space-separated-list form?
- How close to an integer should the value be, to be snapped to an integer serialization?
- What is the minimum bit depth for non-integer serialization? I assume this should be 12, because rec2020 can use either 10 or 12 bits; the percent form of rgb() can use arbitrary precision.

@therealglazou @emilio @tabatkins particularly looking for your input on those questions.
Do you agree that it would be ideal if everything could be serialized to color()? (i.e. without backcompat concerns)
If so, just make the cut already and serialize anything that uses features (notation, color space) beyond what Level 3 offered to color(), even if the author was just leaving out commas in rgb().
it generates the shortest base-10 integer representation of each color component
What does that even mean?
I would have chosen percentages for all four RGBA components, because they are unambiguous and of arbitrary precision.
Do you agree that it would be ideal if everything could be serialized to color()?
No, not in the slightest, for reasons that have already been explained.
Should this continue to serialize non-unity-alpha as rgba() or should it use rgb() with the optional alpha component?
I don't think we should break from the older pattern here just because the bit depth is higher. So the rgb()/rgba() distinction should be maintained.
Should this continue to use the still-allowed comma-separated list form, or the newly allowed space-separated-list form?
Same.
How close to an integer should the value be, to be snapped to an integer serialization?
Hm, should depend on what depth people can actually distinguish. I presume it would be safe to put the epsilon at .001 or so? For it to not work would require a thousandth of an sRGB unit to be visually distinguishable, which I doubt.
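A sketch of that snapping rule (the helper and the exact threshold are assumptions, not spec text):

```ts
// Snap a float channel (in 0..255 terms) to an integer when it is within
// EPSILON of one; otherwise keep the fractional value.
const EPSILON = 0.001; // assumed threshold, per the discussion above

function serializeChannel(v: number): string {
  const nearest = Math.round(v);
  return Math.abs(v - nearest) < EPSILON ? String(nearest) : String(v);
}

serializeChannel(127.9996); // "128"
serializeChannel(127.94);   // "127.94"
```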
What is the minimum bit depth for non-integer serialization? I assume this should be 12, because rec2020 can use either 10 or 12 bits; the percent form of rgb() can use arbitrary precision.
Not sure I understand the question - are you asking what the smallest bit depth is that could possibly trigger a non-integer serialization? If so, it's 9 bits, right?
Going from 8 bits to 12 bits: 1 / 16 = 0.0625 which is much bigger than 0.001. So maybe 0.05?
What is the minimum bit depth for non-integer serialization? I assume this should be 12, because rec2020 can use either 10 or 12 bits; the percent form of rgb() can use arbitrary precision.
Not sure I understand the question - are you asking what the smallest bit depth is that could possibly trigger a non-integer serialization? If so, it's 9 bits, right?
No, I'm asking what is the minimum bit depth that we need to preserve. Since Rec 2020 allows 10 and recommends 12, we need to preserve at least 12 bits.
From CSSOM:
If the alpha component of the color is not equal to one, then return the serialization of the rgba() functional equivalent of the non-opaque color.
That seems to be comparing two floats, alpha and 1.0, which is typically seen as poor practice. Shouldn't that be updated to say that if 1.0 - α > ε for some suitably defined ε?
Aha okay
If the value is internally represented as an integer between 0 and 255 inclusive (i.e. 8-bit unsigned integer), follow these steps:
So I'm guessing 255/255 is one and anything else/255 is not one. The case where alpha is stored as a number rather than a [0..255] integer is still underdefined.
Going from 8 bits to 12 bits: 1 / 16 = 0.0625 which is much bigger than 0.001. So maybe 0.05?
If they're using bit channels like that, then we don't have to worry about rounding; you get an integer when it's exactly an integer value (last four bits are zero), and don't otherwise.
What we have to worry about is things like using float-channel extended sRGB, where it's not guaranteed that we'll get back to an integer value. (Chrome's current plan is to just upgrade its color handling to always be 32-bit float channels in extended sRGB. Our textures will be f16, to reduce memory usage, but CSS colors can afford to pay for the convenience of f32.)
A .001 epsilon is probably fine for float32, but .01 is likely fine as well.
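A quick sanity check of that (a sketch; Math.fround is used to emulate f32 storage): an 8-bit integer scaled into a float32 channel and back lands orders of magnitude inside a .001 epsilon:

```ts
// Round-trip an 8-bit value through a float32 channel in [0, 1] and
// measure the error in 0..255 units.
function roundTripError(v8: number): number {
  const f32 = Math.fround(v8 / 255); // f32 precision for the stored channel
  return Math.abs(f32 * 255 - v8);
}

roundTripError(128); // on the order of 1e-5, far below a .001 epsilon
```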
No, I'm asking what is the minimum bit depth that we need to preserve. Since Rec 2020 allows 10 and recommends 12, we need to preserve at least 12 bits.
Still a little confused. Are you thinking that browsers would round to a particular bit depth, then write it out in decimal?
They'll just do the conversion to sRGB with floats and serialize the result, using the normal serialization rules.
That seems to be comparing two floats, alpha and 1.0, which is typically seen as poor practice. Shouldn't that be updated to say that if 1.0 - α > ε for some suitably defined ε?
Yeah, probably. Lower priority since 1 is exactly representable and alpha isn't usually mathed at, but might as well make all these changes together.
Still a little confused. Are you thinking that browsers would round to a particular bit depth, then write it out in decimal?
No, I'm looking at the minimum bit depth that _must_ be preserved, and ensuring ε is smaller than 1LSB at that bit depth. For rec2020, that depth is 12.
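Concretely (assumed arithmetic, consistent with the 1/16 ≈ 0.0625 figure earlier): one LSB at n bits, expressed in 0..255 units, is 255 / (2^n - 1):

```ts
// Size of one least-significant bit at a given bit depth, in 0..255 units.
const lsb = (bits: number): number => 255 / (2 ** bits - 1);

lsb(10); // ~0.249
lsb(12); // ~0.0623, so the epsilon must sit below this; .01 and .001 both do
```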
Okay, so you're just making sure that, if a browser is using 12-bit channels, the ε for rounding to integers is smaller than the minimum a 12-bit channel can differ from the integer value.
So yeah, still, .01 or .001 sound fine.
How should we serialize color(), lab(), etc? Chrome is looking into implementing, and is currently planning to eagerly convert these into extended-sRGB, likely stored as four 32-bit floats. Which serialization form should we prefer?
Internal storage as four 32-bit floats sounds fine (storage as 16-bit half-float would also have met the precision requirements). BTW, is the transfer curve for extended sRGB fully defined for negative values (does the straight portion continue forever, or does it have a curve mirroring the positive portion)?
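For reference, one common convention (a sketch only; whether the spec pins this down is exactly the question being asked) is to mirror the positive curve through the origin rather than continue the straight segment forever:

```ts
// One common convention for extended sRGB: apply the transfer function to
// |x| and restore the sign, so the curve is point-symmetric about the origin.
function srgbToLinearExtended(x: number): number {
  const sign = x < 0 ? -1 : 1;
  const abs = Math.abs(x);
  return abs <= 0.04045
    ? x / 12.92 // straight segment; sign is already carried by x
    : sign * Math.pow((abs + 0.055) / 1.055, 2.4);
}
```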
color(), with some arbitrary choice of colorspace. Benefit: perhaps clearer, especially if it happens to match the colorspace the author chose. Downside: requires us to carry an extra bit around for whether the color was in a legacy format or not; might still be an unexpected result if you used a different colorspace.
Anything originally specified as color(foo c1 c2 c3 ... cn) will now be serialized as exactly that (color( in lowercase, exactly one ASCII space between component values, etc.). See Serializing values of the color() function.
lab(), same benefits/downsides as color(), but maybe more appropriate as a "universal" serialization format for higher-def colors.
If originally specified as lab() or lch(), yes. See Serializing Lab and LCH values
I tried to be explicit about the exact serialization format but if any issues or ambiguities are noticed, please comment on this issue or raise a more specific new one as needed.
Okay, so you're just making sure that, if a browser is using 12-bit channels, the ε for rounding to integers is smaller than the minimum a 12-bit channel can differ from the integer value.
There is a new issue specifically on minimum bit depth
@tabatkins I would appreciate extra eyes on the whole of Resolving <color> values and Serializing <color> Values