Describe the bug
The y-websocket server keeps restarting automatically (I deploy this application in a Kubernetes pod). I am using Grafana to monitor the y-websocket memory, and I found that memory increases suddenly even though there are no new connections to y-websocket. My document is small (less than 1 MB).
Expected behavior
The memory should go back to normal.
Screenshots
Environment Information
Node.js
y-websocket 1.5.0
I dumped the memory and found 50,000+ Map objects taking 68% of the heap, like this:
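For reference, I capture the dump from inside the pod roughly like this. This is a minimal sketch using Node's built-in v8 module; the output path is just an example for a mounted volume, not my exact setup:

  const v8 = require('v8')

  // Writes a .heapsnapshot file that can be loaded in Chrome DevTools
  // (Memory tab) or diffed against an earlier snapshot.
  const file = v8.writeHeapSnapshot(`/data/heapdump-${Date.now()}.heapsnapshot`)
  console.log('heap snapshot written to', file)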
This is the LevelDB file size:
[root@k8smasterone yjs-storage]# pwd
/data/k8s/reddwarf-pro/texhub-server-service/yjs-storage
[root@k8smasterone yjs-storage]# ls -alh
total 13M
drwxr-xr-x 2 root root 4.0K Nov 28 22:13 .
drwxr-xr-x 9 root root 4.0K Oct 18 22:54 ..
-rw-r--r-- 1 root root 450 Nov 28 22:16 001891.log
-rw-r--r-- 1 root root 5.3M Nov 28 22:13 001893.ldb
-rw-r--r-- 1 root root 3.3M Nov 28 22:13 001894.ldb
-rw-r--r-- 1 root root 3.3M Nov 28 22:13 001895.ldb
-rw-r--r-- 1 root root 780K Nov 28 22:13 001896.ldb
-rw-r--r-- 1 root root 16 Nov 28 22:03 CURRENT
-rw-r--r-- 1 root root 0 Sep 12 16:08 LOCK
-rw-r--r-- 1 root root 1.6K Nov 28 22:13 LOG
-rw-r--r-- 1 root root 1.7K Nov 28 20:32 LOG.old
-rw-r--r-- 1 root root 1.6K Nov 28 22:13 MANIFEST-001889
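For context, persistence here is the stock y-leveldb path (enabled through the YPERSISTENCE environment variable described in the y-websocket README), which is what produces the .ldb files above. To rule out the persisted data itself, this is a rough sketch of how I inspect what is stored in that directory, assuming y-leveldb's LeveldbPersistence API (getAllDocNames / getYDoc); the directory path is my deployment value:

  const Y = require('yjs')
  const { LeveldbPersistence } = require('y-leveldb')

  // Point this at a copy of the storage directory; LevelDB holds an exclusive
  // lock (see the LOCK file above), so don't run it against the live server.
  const ldb = new LeveldbPersistence('/data/k8s/reddwarf-pro/texhub-server-service/yjs-storage')

  const run = async () => {
    const names = await ldb.getAllDocNames()
    console.log('persisted docs:', names.length)
    for (const name of names) {
      const ydoc = await ldb.getYDoc(name)
      // Encoded update size gives a rough idea of how big each stored doc is.
      console.log(name, Y.encodeStateAsUpdate(ydoc).byteLength, 'bytes')
    }
  }

  run()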
I tried to diff the heap dumps; some Map inside y-websocket is not being released. This issue looks the same as #145.
Does the destroy function just create a new map? I am confused by this:
if (doc.conns.size === 0 && persistence !== null) {
// if persisted, we store state and destroy ydocument
persistence.writeState(doc.name, doc).then(() => {
doc.destroy()
})
docs.delete(doc.name)
}
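To check whether the WSSharedDoc instances are actually released once all connections are closed, this is a rough sketch I can drop into the server process; the require path and the exported docs map are assumptions based on the 1.5.0 package layout:

  // 'docs' is (as I understand it) the in-memory Map of room name -> WSSharedDoc
  // kept by y-websocket's utils; if cleanup works, docs.size should drop back
  // to 0 after the last client disconnects.
  const { docs } = require('y-websocket/bin/utils')

  setInterval(() => {
    let conns = 0
    docs.forEach(doc => { conns += doc.conns.size })
    console.log(`in-memory docs: ${docs.size}, total connections: ${conns}`)
  }, 60 * 1000)

If docs.size keeps growing even though all clients are gone, the leak would be in that map rather than in Yjs itself.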