Reading data returns only empty buffer #1162

Closed
jeffrson opened this issue Apr 13, 2022 · 12 comments

@jeffrson

Although I'm using ssh2-promise as a wrapper, this issue is not related to it as far as I can tell.

I'm connecting to a server that's also written with ssh2. When I try to read a file (3289 bytes), I can see the server filling buffers, but on the client they come back empty.

I traced it to https://github.com/mscdex/ssh2/blob/master/lib/protocol/SFTP.js#L2013, where req.nb is 0, so bufferSlice creates an empty buffer for data.

The file I'm trying to read is 3289 bytes and my buffer is 32 kB, so req.nb is initialized to 0 and never changed.
nb is the correct size of the file, len is the size of the buffer (32 kB), and overflow is 816 (what does that mean?). data (incoming in the cb) does contain the correct file contents.

So what might be wrong here? Why is data (outgoing) empty?
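For what it's worth, the empty outgoing data is consistent with the slice being taken with a zero length. A minimal standalone sketch (plain Node, not ssh2 internals) of that slicing behavior:

```javascript
// Standalone sketch (plain Node, not ssh2 internals): slicing a buffer with
// a zero end index yields an empty view, no matter what the buffer holds.
const data = Buffer.from('file contents received from the server');

const reqNb = 0;                    // what req.nb holds in the failing path
const outgoing = data.slice(0, reqNb);
console.log(outgoing.length);       // 0 -> the empty buffer seen by the client

const nb = data.length;             // the local nb, which has the real size
const expected = data.slice(0, nb);
console.log(expected.equals(data)); // true
```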

@mscdex
Owner

mscdex commented Apr 13, 2022

Can you provide some minimal code to reproduce this issue, especially the server side?

@jeffrson
Author

jeffrson commented Apr 14, 2022

Thanks, yes, sure. The server side is pure ssh2, while the client side uses ssh2-promise, which probably won't help much.

Server side:

                    sftpStream.on('READ', async (reqId, aHandle, offset, length) => {
                        log(`file read off=${offset}, len=${length}`)

                        const handle = aHandle.toString() // encoding 'utf8'
                        const openHandle = openHandles[handle]
                        //
                        if (!openHandle) {
                            return sftpStream.status(reqId, STATUS_CODE.FAILURE, 'Invalid handle')
                        }
                        if (openHandle.mode !== 'READ') {
                            return sftpStream.status(reqId, STATUS_CODE.FAILURE, 'Invalid mode of handle')
                        }
                        try {
                            const readData = await readAsync(openHandle.fd, Buffer.alloc(length), 0, length, offset)
                            if (readData.bytesRead === 0) {
                                //
                                return sftpStream.status(reqId, STATUS_CODE.EOF)
                            } else {
                                //
                                sftpStream.data(reqId, readData.buffer.slice(0, readData.bytesRead))
                            }
                        } catch (err) {
                            //
                            return sftpStream.status(reqId, STATUS_CODE.PERMISSION_DENIED, errorToString(err))
                        }
                    })

Client side:

        const readChunks: any[] = []
        for (let i = 0; i < concurrency; i++) {
            readChunks.push(srcSftp.read(srcFd, readBuffers[i], 0, chunkSize, bytesRead + i * chunkSize))
        }
        const read = await Promise.all(readChunks)
        if (cancelToken.isCancelled) {
            throw cancelToken.reason
        }

        const writeChunks: any[] = []
        read.forEach((r: any) => {
            if (r[0] > 0) {
                writeChunks.push(dstSftp.write(dstFd, r[1] as Buffer, 0, r[0] as number, r[2] as number))
            }
        })
        const write = await Promise.all(writeChunks)
        bytesRead += write.reduce((total, w) => total + w, 0)
        if (cancelToken.isCancelled) {
            throw cancelToken.reason
        }

On the client, dstSftp.write fails because the buffer in r[1] has length zero.

I don't know if this is sufficient; runnable code would be harder, though.

One thing I didn't mention: the client worked without issues until I updated ssh2 from 0.8.9 to 1.9.0.

And if I replace req.nb in https://github.com/mscdex/ssh2/blob/master/lib/protocol/SFTP.js#L2013 with nb, it works fine for a small file. I didn't test further. AFAICT, req.nb is never set to anything other than zero.

@mscdex
Owner

mscdex commented Apr 14, 2022

What is chunkSize?

@jeffrson
Author

chunkSize was 32 * 1024, which is related to the overflow of 816, because maxLen in read_ is 31952.

I tried chunkSize = 31952: overflow is zero, but nothing else changed (as somewhat expected).
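(The overflow values reported in this thread fit a simple model, which is only an inference from the numbers, not a reading of read_'s actual code: a read of chunkSize bytes is split into full 31952-byte requests plus a remainder "overflow" request.)

```javascript
const maxLen = 31952; // per-request cap observed in read_ (assumption)

// Hypothetical model inferred from the reported numbers: a read of
// chunkSize bytes becomes floor(chunkSize / maxLen) full requests plus a
// remainder ("overflow") request.
function splitRead(chunkSize) {
  return {
    full: Math.floor(chunkSize / maxLen),
    overflow: chunkSize % maxLen,
  };
}

console.log(splitRead(32 * 1024)); // { full: 1, overflow: 816 }
console.log(splitRead(64 * 1024)); // { full: 2, overflow: 1632 }
console.log(splitRead(31952));     // { full: 1, overflow: 0 }
```

This reproduces both the 816-byte overflow for 32 kB and the 1632-byte overflow for 64 kB mentioned later in the thread.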

@jeffrson
Author

It's only a small file of 3289 bytes; also, concurrency is 1 for this test.

However, I tried larger files, with the same result.

@jeffrson
Author

jeffrson commented Apr 14, 2022

I tried this little test program against openssh-sftp-server (Debian 1:8.4p1-5), reading another (arbitrary) file of 15706 bytes.

const { Client } = require('ssh2')

const buf = Buffer.alloc(32 * 1024, 0)
const conn = new Client()
conn.on('ready', () => {
  console.log('Client :: ready')
  conn.sftp((err, sftp) => {
    if (err) throw err
    sftp.open('/foo/bar', 'r+', (err, handle) => {
      if (err) throw err
      sftp.read(handle, buf, 0, 16 * 1024, 0, (err, bytesRead, buf0, pos) => {
        if (err) throw err
        console.log(bytesRead)
        console.log(buf0.length)
        console.log(pos)
        sftp.close(handle, (err) => {
          if (err) throw err
          conn.end()
        })
      })
    })
  })
}).connect({
  host: '192.168.100.100',
  port: 22,
  username: 'frylock',
  password: 'nodejsrules'
})

using ssh2@^1.9.0. Here's the output:

Client :: ready
15706
0
0

This looks like the same problem: although 15706 bytes are read, the buffer is empty.

The same test program with ssh2 0.8.9 gives this output:

15706
16384
0

@mscdex mscdex closed this as completed in bd4c41a Apr 14, 2022
@mscdex
Owner

mscdex commented Apr 14, 2022

This should be fixed now.

@jeffrson
Author

Great, thank you!

@jeffrson
Author

Do you have an estimate of when the fixed version will be released on npm?

@jeffrson
Author

jeffrson commented Apr 23, 2022

@mscdex I'm afraid this issue should be reopened.

Actually, it's another bug, but a related one.

Say I have a chunkSize of 32 kB (which does not seem to be effective because of the small "overflow" packet of 816 bytes; in fact I'd like to use 2*31952, but 32 kB makes the explanation clearer).

So the client reads 31952 bytes and then 816 bytes. Obviously, for the second request the client needs to ask the server for 816 bytes starting at 31952 (see https://github.com/mscdex/ssh2/blob/master/lib/protocol/SFTP.js#L2004). However, the callback should report position 0, not the 31952 from the request (see https://github.com/mscdex/ssh2/blob/master/lib/protocol/SFTP.js#L2015).

So you probably need something like OrigPos in addition to OrigOffs.

BTW, if chunkSize is larger than 2*31952, it looks to me as if read_ fails even more (TM): for a chunkSize of 64 kB (with overflow 1632), the position reported in the callback is 63904...
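The offset/position distinction described above can be sketched as follows (assumed semantics based on this thread, not ssh2's actual code; planRequests is a hypothetical name): the wire offset of each follow-up request advances, while the position reported to the caller's callback should stay fixed at the originally requested position.

```javascript
// Sketch of the intended behavior (assumption, not ssh2 internals): split a
// caller-level read into per-request plans. Each request's wire offset
// advances, but the callback position stays at the caller's original value.
const maxLen = 31952; // per-request cap observed in read_ (assumption)

function planRequests(position, chunkSize) {
  const requests = [];
  for (let sent = 0; sent < chunkSize; sent += maxLen) {
    requests.push({
      wireOffset: position + sent,                // what goes on the wire
      length: Math.min(maxLen, chunkSize - sent), // bytes in this request
      callbackPosition: position,                 // what the cb should report
    });
  }
  return requests;
}

console.log(planRequests(0, 32 * 1024));
// [ { wireOffset: 0, length: 31952, callbackPosition: 0 },
//   { wireOffset: 31952, length: 816, callbackPosition: 0 } ]
```

Under this model the bug is that the overflow request's wire offset (31952, or 63904 for 64 kB) leaks into the callback's position argument.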

@mscdex
Owner

mscdex commented Apr 23, 2022

@jeffrson That's probably a duplicate of #1166

@jeffrson
Author

Hmmm, maybe.

I don't know how fastGet works, though: does it use read_? And if so, why wasn't it affected by this issue here? The (somewhat :-)) fixed version hasn't been released yet, AFAICT; I'm using a dependency on the git repo.

BTW, I tried the "origPosition" approach (no PR yet):

 const req = (req_ || {
...
    position,
    origPosition: position,
...
      cb(undefined, req.nb + nb, data, req.origPosition);
...

Looks good to me so far. Would you please check?
