Preact: Odd behaviour when rendering a list of elements.

Created on 22 Sep 2017 · 13 Comments · Source: preactjs/preact

This codepen [1] illustrates a problem I am having in an application. Essentially the desired behaviour is that a list of items is displayed with CSS that gradually changes over time. However, the list has a maximum size _n_, and when it reaches this size only the last _n_ elements should be displayed. In the code this is done by _concat_ing the array of new elements onto the base array and then, if it's too long, _splice_ing the excess off the front of the array. Fairly simple stuff.
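
For reference, a minimal sketch of the capping pattern described above; the names (items, newItems, MAX_SIZE) are illustrative and not taken from the codepen:

// Minimal sketch, assuming a plain array held in component state (names illustrative).
const MAX_SIZE = 30;

function appendCapped(items, newItems) {
  const next = items.concat(newItems);
  if (next.length > MAX_SIZE) {
    // splice the excess off the front so only the last MAX_SIZE entries remain
    next.splice(0, next.length - MAX_SIZE);
  }
  return next;
}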

This works fine in all cases where the new array of elements to be added contains only one entry: the bottom border of the older entries gradually fades to nothing. It also works fine when the new array is larger than one element _until_ the point where the maximum size is reached and the list is truncated; then it no longer works, and the borders stay permanently solid, as if the li elements were recreated on each render.

Is anyone able to shed any light on why this might be? Around line 20 in the codepen there is a commented-out line which can be used instead of the following line to toggle between the two behaviours.

question

All 13 comments

Since you're using key={count}, you're forcing preact to remove and re-add each element once you hit the max. This is because there will always be a different key at each index on every render. Removing the keys seems to fix the issue and likely is the behavior you are looking for.

Thanks for your reply, though I'm still not quite sure I'm getting it. I thought the point of having keys was so that preact could tell which elements were still required, add or remove only those that changed, and leave the others alone. If the list contains elements with keys 10-40 and I add two new elements, I want the list to then contain keys 12-42: so remove the elements with keys 10 & 11 from the DOM, add elements with keys 41 & 42, and leave the others exactly as they are.

Also, the example works in the special case of adding only one new element each time but fails in the general case of adding multiple elements. The behaviour of evaluating the keys of the children in the list should be the same in both cases, shouldn't it? What is different between these cases?

Finally, removing the keys doesn't work anyway. In the simplified example each new element should have a bottom border that fades out. In the original failure, once the list exceeds the maximum, each element's border is recreated at full strength on each pass. If the keys are removed, then in this case new elements are created with an already-faded bottom border, I guess because preact just reuses the same _n_ DOM elements, which is not correct.

The specific behaviour I'm after, and which I tried to generalise, was that of periodically adding several elements to a list, the first of which would initially have a border marking the beginning of the new entries that would gradually fade out as newer elements were added below. The codepen just gives each element a border for a better understanding of what might be happening.

I have forked the codepen to change the native li to an Li component to try to get a handle on when they are being created and/or destroyed, the lifecycle calls etc., and everything appears to be behaving as it should. It's just that, as far as the DOM is concerned, if you add more than one element _and_ elements are removed from the front of the list, then all the nodes appear to be generated from scratch. Don't drop any from the start of the list and everything is fine; add one element at a time and everything is fine. What is happening differently between these scenarios?
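
For illustration, a hedged sketch of the kind of wrapper component described above, used purely to log when instances are created and destroyed; the names are illustrative and this is not the actual forked code:

// Sketch only: an <li> wrapper that logs its lifecycle so DOM creation and
// destruction can be observed. Not taken from the forked codepen.
import { h, Component } from 'preact';

class Li extends Component {
  componentDidMount()    { console.log('mounted', this.props.id); }
  componentWillUnmount() { console.log('unmounting', this.props.id); }
  render({ id, children }) {
    return <li data-id={id}>{children}</li>;
  }
}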

Preact doesn't rerender/recreate all elements when you use keys, but it moves them to ensure the correct order.
Here is an example to show the difference between using keys and not using them.
Imagine your example from above but with only 3 elements:

Without keys:
When the list looks like [1, 2, 3] it will be rendered like this:

<li>1</li>
<li>2</li>
<li>3</li>

When you update the list to [3, 4, 5], the DOM will look like this after diffing the first node:

<li>3</li>
<li>2</li>
<li>3</li>

Preact "knows" that the first element matches the type of the first vnode and just updates the text of the nodes.

Now the same state with keys after diffing the first node:

<li>3</li>
<li>1</li>
<li>2</li>

So what happened?
Preact can't simply update the first node in place, because it sees keys on the elements.
So it takes the node with the right key and reinserts it at the first position, because the index of the vnode (0) doesn't match the index of the DOM node (2).
If we had more nodes, Preact would move them too.
After that, Preact inserts the new nodes (for the values 4 & 5) and then removes the old, unused nodes (values 1 & 2).

Preact handles this case differently if the node to update is the next sibling; in that case it just removes the next node. That's why the example works if you only add and remove one node at a time.
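
For concreteness, a hedged sketch of the kind of keyed render the examples above assume; the component and prop names are illustrative, not taken from the codepen:

// Illustrative only: a keyed list render of the shape discussed above.
// With keys each <li> gets key={v}; without keys the key prop is simply omitted.
import { h } from 'preact';

function List({ values }) {   // e.g. values = [1, 2, 3], then [3, 4, 5]
  return (
    <ul>
      {values.map(v => <li key={v}>{v}</li>)}
    </ul>
  );
}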

Thanks for chipping in; this is interesting. I have been looking at the source, trying to work out the logic. So it appears there is no way of preserving long-lived nodes in the DOM if they are adjacent to others that may be removed? Any CSS transitions in progress will always be reset if keys are used, or will be associated with the wrong content if keys are not used.

Is there any way to hint at or modify the update sequence? In my example case, if the update was done as remove -> update -> insert, as opposed to update -> insert -> remove, then I believe it would resolve correctly. Given that in my real application there may be several hundred li elements, it would seem to be a considerable optimisation if the surplus elements were removed first, as then all the other existing elements in the list would be matched between the DOM and vdom (if I'm understanding things correctly) and would not need to be reinserted, breaking the ongoing CSS transitions.

Or can anyone suggest a way to get my desired behaviour in a 'preact' style? I have at least two projects I am migrating from raw JS that have this type of behaviour (a 'live' logger and a chat interface) and it's the one thing I've not been able to get working.

@web2wire Adding keys does not remount - your example is remounting nodes because the keys all change on every render. You can see a demo of the intended use of keys here:
https://codepen.io/developit/pen/bEXBXW

Also, I gave a talk a while back that might help explain keys - you can see the relevant part here:
https://youtu.be/LY6y3HbDVmg?t=13m49s

TL;DR: if you use keys, you have to make sure they match on subsequent renders.

Example misuse of keys:

// render 1
<li key="1">one</li>
<li key="2">two</li>
<li key="3">three</li>

// render 2
<li key="4">one</li>   // no matching key in old set, gets recreated
<li key="5">two</li>   // no matching key in old set, gets recreated
<li key="6">three</li>   // no matching key in old set, gets recreated

Example correct usage of keys:

// render 1
<li key="1">one</li>
<li key="2">two</li>
<li key="3">three</li>

// render 2
---    // the element with key=1 isn't in this render, so it gets removed
<li key="2">two</li>   // matches the existing key=2 el, already in the right place
<li key="3">three</li>   // matches the existing key=3 el, already in the right place
<li key="4">four</li>   // no matching key in old set, gets created

Thanks for your comment. AFAIK I am using keys correctly; the value used for the key is the same as the one that is printed, so it's relatively easy to see in the example.

The problem I have can be seen in your codepen with some minor tweaks (forked at https://codepen.io/web2wire/pen/oGELow?editors=0110). Basically I have changed the remove function to shift off the front of the items array instead of filtering (so that it's easier to do multiple removes in one call). If I shift off one element it is fine; with two or more, the problem appears, just as in my example. However, it is not apparent just by looking at the render count (which doesn't increase), so I have added a CSS fade-out to highlight the actual DOM activity. In this case you can see the remaining elements are reset to 100% when they should continue to fade (as they do with a single 'shift' off the array).
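
Roughly, the tweaked removal looks like the sketch below; the names are illustrative and this is not the forked codepen's code:

// Sketch only: shift n items off the front of the array instead of filtering
// a single one out, so several can be removed in one call.
function removeFromFront(items, n) {
  const next = items.slice();               // copy so the original isn't mutated
  for (let i = 0; i < n; i++) next.shift(); // drop the first n entries
  return next;
}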

I think @Kanaye pinpointed the cause of the problem above. Basically, as the innerDiffNode function iterates through the list of children, it finds that every single one of them is different and so mutates them all to be what it expects. The special case of a one-element difference is picked up by the sibling checks, I believe.

So if I have 1000 elements (my app may have many more) and remove the first two from the underlying data, then the other 998 will all be changed/recreated to match another element that already exists just a little further along the tree and could conceivably have been kept and reused. Basically, element 0 is changed to match element 2, 1 to 3, 2 to 4, ..., 998 to 1000. Then the two redundant elements are removed at the end. While in the general case this approach may be fine, in a large 'FIFO'-style component it is obviously fairly sub-optimal, even discounting the undesired side-effects on the DOM nodes, which are my main problem.

For my application it would be more appropriate to remove first, then compare/update, and then insert, as this would result in very few DOM operations or updates compared to potentially thousands. However, I do not see a way to either hint this to the diffing algorithm or override and customise it on a per-component basis, but if there is a way I would really be grateful to hear it.

Any suggestions?

Maybe I'm not following along correctly here, but in the example you linked, the fact that the render count does not increase in the right (keyed) column demonstrates that things are working correctly.

In terms of the code, removing 3 items from the front of a keyed list will result in this condition being triggered 3 times, and no other operations:
https://github.com/developit/preact/blob/master/src/vdom/diff.js#L227-L229

Ack! Bloody Codepen reverted my example for some reason. Should be updated now.

It's not about the rendering; that's all fine. It's about the management of DOM nodes: the same component with the same key gets recreated, so CSS transitions get restarted anomalously. If you try again and click to delete any element, you will see the keyed list elements don't get re-rendered but do get recreated in the DOM, as the fade restarts. The other lists do get re-rendered but reuse the DOM nodes, so they continue to fade.

Alright, I think I follow now. @Kanaye is definitely on the right track with that line reference.

Here's an updated codepen that also shows the effect doesn't happen in reverse (adding items to the beginning of the list):
https://codepen.io/developit/pen/EwJWEm?editors=0010

Whoops. I somehow managed to link the wrong lines above.
I've edited my comment to link to the correct lines.

So the issue is that preact reinserts all nodes to match the order of the vdom, which restarts the animation.

I thought a little bit about this today, and it might be possible to skip reinserting nodes if the node's index in the DOM is larger than the vnode index.
This would assume that all previous nodes will either be used later in the vdom (keyed) and moved then, or that they can't be reused and will be removed later.

Unfortunately this requires tracking the index of the DOM nodes and also requires a lot of changes to the current code.

I've played a little bit with this idea but I couldn't come up with a solution that passes all tests (for different reasons, e.g. inserting new nodes in case a placeholder node gets replaced by a component).

Also, I haven't thought about how this change could be implemented with little performance and size impact.

I think I have a promising approach but I have to fiddle a little more with it.

What do you think?

Correct me if I'm wrong, but I think the current diffing algorithm iterates through the children and compares and morphs at the same time, only removing any remnant nodes at the end. Have I got that right?

If it did a simpler comparison in one stage, only to establish whether a node is valid to remain in the DOM, then removed any unused nodes, and then applied the more detailed comparison and morphing, it would probably be much more efficient in my case, by vastly reducing DOM operations. Of course it might possibly be less efficient in the general case, though how much so would be difficult to predict without trying it.
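
As a rough illustration only (this is not Preact's diff, and it ignores reordering of surviving keys), the remove-first idea applied directly to a keyed ul of li elements might look like the following sketch, where a data-key attribute stands in for the vnode key:

// Toy illustration of the "remove first, then match" idea, not Preact's
// actual algorithm; it only covers the append/trim (FIFO) case discussed here.
function syncKeyedList(ul, newKeys) {
  const wanted = new Set(newKeys.map(String));

  // 1. Remove every existing node whose key is no longer wanted.
  for (const li of Array.from(ul.children)) {
    if (!wanted.has(li.dataset.key)) ul.removeChild(li);
  }

  // 2. Walk the new keys; surviving nodes are now already in order, so only
  //    genuinely new keys cause DOM insertions and nothing is re-morphed.
  newKeys.forEach((key, i) => {
    const existing = ul.children[i];
    if (existing && existing.dataset.key === String(key)) return; // left untouched
    const li = document.createElement('li');
    li.dataset.key = String(key);
    li.textContent = String(key);
    ul.insertBefore(li, existing || null);
  });
}

For the FIFO case in question, step 1 removes the pruned entries and step 2 leaves every surviving node alone, so any in-flight CSS transitions on those nodes keep running.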

I think FIFO-style components (chat clients, news tickers, event viewers etc.) are common enough for some consideration to be given to how to improve the efficiency of rendering when there is a maximum size and nodes are pruned off the top, resulting in multiple domino re-morphs and DOM inserts. This is apart from the CSS effects that are my current issue, though solving one would tend to solve the other.

Also, I want to say thanks to you guys for looking at this for me. Whilst it is something I'm obviously keen to solve, I know it's not the most important issue in the world (probably not even the most important 'Preact issue'!) and AFAIK it is present in React itself. Your comments and suggestions on this are always much appreciated.

I don't think a "simple" compare, followed by node removal and then node updates, will be a viable solution size- and performance-wise.
Also, in the end it will not make a difference to my approach.
Let's take my example from above to explain the differences a bit:

Both examples assume that keys are used.
In "2-step rendering" the DOM will look like this after the first compare and node removal:

<div>3</div>

And after the second step the DOM will look like this:

<div>3</div>
<div>4</div>
<div>5</div>

The approach I'm trying to achieve is different, as it keeps updates, inserts and compares basically the way they are now.

After the updates (before node removal) the DOM will look like this:

<div>1</div>
<div>2</div>
<div>3</div>
<div>4</div>
<div>5</div>

So no nodes were reinserted/moved, just added (or, if necessary, updated).
Now we should remove all unused nodes. The DOM will look like this:

<div>3</div>
<div>4</div>
<div>5</div>

Basically I'm trying to keep code changes, runtime overhead and file size as small as possible.

Looks like this is resolved 🎉
