The chat module in Status is the oldest module and still carries legacy code from the beginning, but it is the most important one and should be more performant.
Why do we recalculate all messages for each new message?
Why do we calculate the identicon in the view for every message?
Two directions to move:
1) Since we moved all data from Realm to status-go, messages can be prepared in status-go, so status-react can render them without any additional calculations.
Move all logic from views and subscriptions to an event, then move all this logic from the event to status-go.
Event
https://github.com/status-im/status-react/blob/ff2b24b1c69bb953bf60d50922b2ef93da5e8f2c/src/status_im/transport/message/core.cljs#L71
Subscription
https://github.com/status-im/status-react/blob/d07c9c6613551a6b68ff1087276f08aa64f61a71/src/status_im/subs.cljs#L638
https://github.com/status-im/status-react/blob/dcb741520844126582427d855eb784065e800592/src/status_im/chat/db.cljs#L140
Render
https://github.com/status-im/status-react/blob/4e7c7c254b2868d5e75e7c9a3a29d9406fb94e63/src/status_im/ui/screens/chat/message/message.cljs#L236
https://github.com/status-im/status-react/blob/843de6aa901f745e19350f49310f2d226f8fffdf/src/status_im/ui/screens/chat/photos.cljs#L21
2) Optimize the UI: fewer containers and nesting levels, no calculations in renderers, FlatList optimizations.
It's possible to render messages from status-go without additional calculations, so that rendering is performant.
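Direction 1 could look roughly like the sketch below, with hypothetical names (`prepare-message`, `::message-received`, the `:messages` db path) rather than the actual status-react code: per-message work happens once in the re-frame event handler when the message arrives, never in the view.

```clojure
;; Hypothetical sketch: do per-message work once, when the message arrives,
;; instead of on every render. Names and db paths are illustrative only.
(defn prepare-message [message]
  (assoc message
         :identicon     (identicon/identicon (:from message))            ; computed once
         :timestamp-str (datetime/timestamp->time (:timestamp message))))

(re-frame/reg-event-db
 ::message-received
 (fn [db [_ {:keys [chat-id message-id] :as message}]]
   (assoc-in db [:messages chat-id message-id] (prepare-message message))))
```

Once the same preparation moves into status-go, the handler would simply store the already-prepared message it receives over RPC.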
@flexsurfer could you add more information about the performance issue? For example, is it less performant when switching chats/scrolling/receiving new messages, etc.? That will help pinpoint the issue and understand exactly what it is due to (rendering? sorting? timestamp calculation, etc.). Moving sorting etc. to status-go is a fairly large chunk of work, and although I think it's best to do it, it is also useful to understand exactly why it is not performant.
@cammellos yes, sure, sorry. When you switch between the chat list and a chat it takes 1-3s and the UI freezes; even the loading indicator (which has a native animation, not on the JS thread) freezes. Maybe at first we need to cache messages in status-react and not load them each time, just add new ones to the cache, but yeah, better to dig first and find the reason.
Here you can find some results from Serhy: https://github.com/status-im/status-react/pull/8943#issuecomment-529990030
@flexsurfer I will do some investigation on the causes, to make sure that it's worth moving to status-go and which part is worth (for example if timestamp is the issue, we can just start from calculating the timestamp and take it from there)
@Serhy helped me profile a couple of cases. What we found so far:
1) We tried to cache all the computations in the subscription (basically returning a constant list of messages): no fetching messages from status-go, no sorting, no computation of timestamps. There was no meaningful improvement in performance, which seems to indicate that just moving message sorting to status-go might not be a huge performance boost.
2) We tried to memoize identicon images: loading a chat with no memoized images takes 3.6 seconds on average, while loading a chat with memoized images takes 2 seconds on average, so that's an improvement worth doing.
I will work on 2 unless there are any objections. Ideally I won't just memoize images, but possibly compute them and save them on receiving the message; I will check what the best plan is.
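A minimal sketch of the memoization idea, assuming hypothetical names (`identicon/identicon`, `photo`): `clojure.core/memoize` caches the generated identicon data-URI per public key, so it is computed at most once instead of on every render.

```clojure
;; Cache the identicon data-uri per public key; `memoize` returns the cached
;; value on repeated calls instead of regenerating the image each render.
(def memoized-identicon (memoize identicon/identicon))

(defn photo [public-key]
  [react/image {:source {:uri (memoized-identicon public-key)}
                :style  {:width 36 :height 36 :border-radius 18}}])
```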
cc @flexsurfer @yenda
It would be great to have a third experiment: trying to render only the plain text of a message, for example with a render function like [react/text message-text]. I think we have a really complex item renderer; it's also not optimized for FlatList, and we re-render all items for every new message, but at least a plain renderer is worth trying.
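The plain-renderer experiment could be as small as this sketch (helper names are hypothetical; `list/flat-list` stands in for the React Native FlatList wrapper): each row is bare text with no containers, styles, or per-row calculations.

```clojure
;; Render each row as bare text: no containers, styles or per-row work,
;; to isolate how much of the cost comes from the complex item renderer.
(defn plain-message [message]
  [react/text (:content message)])

(defn messages-list [messages]
  [list/flat-list {:data      messages
                   :key-fn    :message-id
                   :render-fn plain-message}])
```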
> I won't just memoize images ideally, but possibly compute them and save on receiving the message
Yes, no calculations in the view for sure.
For 1 it's still worth caching. If I understand correctly, if I have 100 messages in a vector and I receive 1 message, all 101 messages will be recalculated, regrouped, etc.?
@flexsurfer not sure, I haven't checked exactly, but I might do, so far we are investigating switching chats rather than receiving a message when a chat is on, but I can focus on that later.
I have also done as you suggested and rendered only text; it's clearly much, much faster. Another thing that seems to slow down rendering is that we also compute the three-word name for each message, so I will include that in the fix above, and we can take it from there.
I know it's not a bug, but I am lumping tech debt in with bugs while prioritising for the v1 release + want to make sure I count this, hence the label.
What's the state of this one guys? Is it actually a v1 requirement? @flexsurfer @cammellos
@rachelhamlin I am not working on this actively; we improved chat performance by about 45% in the previous task. I can spend more time on this if necessary, but maybe it's time to re-assess whether performance is now acceptable?
Nope, I think it makes good sense to measure again before continuing work. I'm removing this from v1 and deprioritising on the backlog. Thanks @cammellos.
Sounds good @StatusSceptre