Premium support for our pure JavaScript UI components


Post by rakshith.snps »

Hi Team,
Thank you for your efforts.
In one of our previous posts, because Salesforce wraps records in Proxy objects, we had to use a deep-cloning approach to assign arrays to the project.
With deep cloning via JSON.parse and JSON.stringify there is no issue when the record sets are small, but it is very inefficient for larger sets, especially 1000 to 5000 records.
Could you take a look at the source code (provided with debug points and console logs) and help us improve the performance of the Scheduler?
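For context, a minimal sketch of the cloning options being compared (the record shape here is hypothetical; `structuredClone` exists in modern browsers and Node 17+, though Locker Service may restrict what is available):

```javascript
// Hypothetical record shape standing in for the proxied Salesforce data.
const records = [{ id: 1, name: 'Task A' }, { id: 2, name: 'Task B' }];

// JSON round-trip: simple, but serializes the whole graph to a string and
// back on every call, which gets expensive for thousands of records.
const viaJson = JSON.parse(JSON.stringify(records));

// structuredClone avoids string serialization and is usually faster for
// large plain-data arrays, where the environment provides it.
const viaStructured = typeof structuredClone === 'function'
  ? structuredClone(records)
  : viaJson;

// If the records are flat (no nested objects), a shallow per-record copy
// is the cheapest option of all.
const viaSpread = records.map(r => ({ ...r }));
```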

Steps to reproduce the performance issue:

  1. Wait until the progress bar loads the 1000 records.
  2. Once the 1000 records are loaded, in the picklist named "Number of records to display", choose 1000 records.

We will send the org details to you by mail. Thank you.


Post by marcio »

Hi rakshith.snps,

Thanks for the report. We're aware of that performance issue; we have a ticket for it here: https://github.com/bryntum/support/issues/1899

Best regards,
Márcio


Post by rakshith.snps »

Hi Márcio, thank you for the ticket.
In the ticket it was recommended to use forEach loops. We did try that, but when we load 1000 records it is too slow and the screen breaks.
The changes can be seen in the org that I mailed. Is there a way to fix that? Thank you.


Post by Maxim Gorkovsky »

Hello.
In the ticket it is recommended to clone the data before passing it to the grid. Have you tried it?


Post by rakshith.snps »

Hi Maxim,
Yes, we are using forEach loops, cloning the elements in the loop and creating a project object. The assignments happen fairly quickly, but once the data is passed to the grid it just keeps loading. The issue can be reproduced in the org I sent in the mail.
Thank you.


Post by Maxim Gorkovsky »

Would you mind providing steps to reproduce the issue? There are several applications and I am not sure which one I should check.


Post by rakshith.snps »

Hi Maxim,
The app name is UMGANTT.
The screen name is Scheduler.
I will attach a GIF to reproduce the problem. Thank you.

performanceDrop.gif

Post by Maxim Gorkovsky »

I recorded two performance profiles with about 500 records loaded. I see that the scheduler rendering takes the same time, but onChangeNumberOfRecords grows really fast:

100records.png
500records.png

A few problems that I see:

  1. The arrays are Proxy objects, so accessing properties in a loop really adds up.
  2. You're iterating over a 13k-element array, and for each record you iterate over another array (with however many records are currently added) only to compare ids.

So instead of nested loops to check ids, you can build a lookup map:

// This should help with the lookup: a single pass through the array.
// If you rebuild this map as you load events, it should be even better.
const map = Object.fromEntries(this.events.map(r => [r.id, true]));

const resultEvents = this.allEvents.filter((all) => all.planObjectid in map);

The same applies to assignments and resources. Try to optimize this code and see how it works.
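A minimal, self-contained sketch of the same idea using a Set (the sample data and values below are made up; `planObjectid` is the field name from the snippet above):

```javascript
// Hypothetical data standing in for this.events / this.allEvents.
const events = [{ id: 'e1' }, { id: 'e2' }];
const allEvents = [
  { planObjectid: 'e1', name: 'A' },
  { planObjectid: 'e3', name: 'C' },
];

// Build the lookup once: O(n + m) overall instead of O(n * m) nested scans.
const ids = new Set(events.map(r => r.id));

// Single-pass filter using the Set.
const resultEvents = allEvents.filter(e => ids.has(e.planObjectid));
// resultEvents → [{ planObjectid: 'e1', name: 'A' }]
```

The same pattern would apply to the assignment and resource arrays: build one Set of ids per array, then filter in a single pass.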


Post by rakshith.snps »

Hi Maxim, thanks for the reply.
We made the necessary changes and reduced the execution time of onChangeNumberOfRecords; we verified this with performance profiling.
But when loading 1000 records it takes too long to even profile, even after the changes. Could you help us with this?


Post by rakshith.snps »

Also, we have noticed that the time to draw the grid increases with every thousand records:
it took 6 seconds to load 2000 records and 13 seconds to load 3000 records.
Is there a way to reduce or optimise this? Please let us know.

perform.PNG
