
Test v3.14 roundtrip #34

Merged — 4 commits merged from test-v3.14-roundtrip into main on Dec 3, 2024

Conversation

ricklupton
Owner

No description provided.

@Seb-sti1
Contributor

Could it be the new paper_size property in the SceneInfo?

{
  "SceneInfo": {
    "backgroundVisible": {
      "timestamp": "0:0",
      "value": 1
    },
    "currentLayer": {
      "timestamp": "0:0",
      "value": "0:0"
    },
    "paperSize": {
      "first": 1404,
      "second": 1872
    },
    "rootDocumentVisible": {
      "timestamp": "0:0",
      "value": 1
    }
  }
}

(this is from one of my files, dumped using the debug env var of the reMarkable software)

In your 3.14.4 file, it is (1620, 2160). I started a branch yesterday to try to fix some small issues/missing properties, and I have done Seb-sti1@8b74595 for the paper_size property. I didn't create an MR yet because I wanted to have a look at #32 first.
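
For illustration only, here is a minimal sketch (hypothetical, not the code in Seb-sti1@8b74595) of how a paper_size pair could sit on a SceneInfo-style dataclass; the field types are placeholders for the real LWW values:

# Hypothetical sketch only; field names follow the JSON above, types are
# placeholders for the LWW/CrdtId values used by the real format.
from dataclasses import dataclass
import typing as tp


@dataclass
class SceneInfoSketch:
    current_layer: tp.Any                              # LWW value holding a layer id
    background_visible: tp.Optional[tp.Any] = None     # LWW bool
    root_document_visible: tp.Optional[tp.Any] = None  # LWW bool
    paper_size: tp.Optional[tuple[int, int]] = None    # e.g. (1404, 1872) or (1620, 2160)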

Hope it can help :)

@Seb-sti1
Contributor

Me again :)

I just had time to run the pipeline/tests with the commit I mentioned plus one more, and it appears to fix the pipeline; see Seb-sti1#1 (of course, I did not deactivate the pipeline that publishes to PyPI, so the corresponding jobs fail).

Regarding the two commits:

The first one (Seb-sti1@9505d1d) adds paperSize to the SceneInfo class.

In the files that had excess bytes, the extra bytes were 92, 8, 0, 0, 0, 124, 5, 0, 0, 80, 7, 0, 0, which seems to correspond to (see the decoding sketch just after this list):

  • 92 = (5 << 4) | 12, i.e. index 5 and TagType.Length4, as read and checked by DataStream.read_tag
  • 8, 0, 0, 0: the payload size (8 bytes here)
  • 124, 5, 0, 0 = 124 + (5 << 8) = 1404 = first
  • 80, 7, 0, 0 = 80 + (7 << 8) = 1872 = second
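
For concreteness, here is a small decoding sketch for those thirteen bytes (it assumes the tag byte packs the index in the high nibble and the TagType in the low nibble, and that integers are little-endian, as elsewhere in the format):

# Decode the 13 excess bytes listed above.
import struct

excess = bytes([92, 8, 0, 0, 0, 124, 5, 0, 0, 80, 7, 0, 0])

tag = excess[0]
index, tag_type = tag >> 4, tag & 0x0F       # 5, 12 -> index 5, TagType.Length4
length, = struct.unpack("<I", excess[1:5])   # 8 -> the subblock payload is 8 bytes
first, second = struct.unpack("<II", excess[5:13])
print(index, tag_type, length, first, second)  # 5 12 8 1404 1872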

The first implementation I tried was the code below, since it seemed closer to a simple value, but given the existence of what appears to be a size field I changed it and used TaggedBlockReader.read_subblock instead. By the way, I don't know if it would be better to read size/4 uint32 values instead of the hardcoded two.

# Methods on TaggedBlockReader (excerpt); note this first attempt does not
# consume the 4-byte length that follows the tag.
def read_two_int(self, index: int) -> tuple[int, int]:
    """Read a tagged pair of 4-byte ints."""
    self.data.read_tag(index, TagType.Length4)
    first = self.data.read_uint32()
    second = self.data.read_uint32()
    return first, second


def read_two_int_optional(
        self, index: int, default: tp.Optional[tuple[int, int]] = None
) -> tp.Optional[tuple[int, int]]:
    """Read a tagged pair of 4-byte ints, or return `default` if absent."""
    return self._read_optional(self.read_two_int, index, default)
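
For what it's worth, here is a hedged sketch of the size/4 variant mentioned above (not from the linked commits; it is meant as another method on the same reader, and assumes the same read_tag/read_uint32 context as the snippet above):

def read_int_array(self, index: int) -> tuple[int, ...]:
    """Read a tagged, length-prefixed run of 4-byte ints (size // 4 of them)."""
    self.data.read_tag(index, TagType.Length4)
    size = self.data.read_uint32()  # payload size in bytes (8 for paperSize)
    return tuple(self.data.read_uint32() for _ in range(size // 4))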

The second one (Seb-sti1@ecd56a0) makes background_visible, root_document_visible and paper_size optional on SceneInfo.

I had some files where the whole SceneInfo contained only the current_layer property:

UnreadableBlock(extra_data=b'',
                error='Bad tag type 0x0 at position 144',
                data=b'\x1c\x06\x00\x00\x00\x1f\x00\x00/\x00\x00',
                info=MainBlockInfo(offset=132,
                                   size=11,
                                   extra_data=b'',
                                   block_type=13,
                                   min_version=0,
                                   current_version=1))
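
To make the optionality concrete, here is a purely hypothetical parse sketch in the spirit of that commit; the LWW helper names (read_lww_id, read_lww_bool) and the indices 1-3 are assumptions, and only index 5 for paperSize comes from the tag byte decoded above:

def read_scene_info(reader) -> dict:
    # Hypothetical sketch, not the code in Seb-sti1@ecd56a0: only current_layer
    # is required; everything else falls back to None when absent.
    return {
        "current_layer": reader.read_lww_id(1),  # assumed index
        "background_visible": reader._read_optional(reader.read_lww_bool, 2, None),  # assumed
        "root_document_visible": reader._read_optional(reader.read_lww_bool, 3, None),  # assumed
        "paper_size": reader.read_two_int_optional(5),  # index 5, per the bytes above
    }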



Let me know what you think and whether you want me to create an MR ([email protected] -> main / [email protected] -> test-v3.14-roundtrip), or you can cherry-pick the commits from my fork (do whatever is easier/better from your point of view).

This allows for round-tripping of data where blocks contain new tagged
values which have not been added to the parsing code.
Currently this is xfailing for `Color_and_tool_v3.14.4.rm` because the
parser is not updated yet. When it is, the xfail mark should be removed.
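
For context, a minimal round-trip test sketch along the lines this commit message describes (read_blocks/write_blocks are the entry points rmscene exposes, but the data path, test layout and which test carries the xfail are assumptions here, not the PR's actual test code):

import io
import pathlib

import pytest
from rmscene import read_blocks, write_blocks

DATA = pathlib.Path(__file__).parent / "data"  # assumed location of the sample .rm files


@pytest.mark.parametrize("name", [
    pytest.param("Color_and_tool_v3.14.4.rm",
                 marks=pytest.mark.xfail(reason="parser not updated for v3.14 yet")),
])
def test_roundtrip(name):
    original = (DATA / name).read_bytes()
    blocks = list(read_blocks(io.BytesIO(original)))  # unknown tagged values end up in extra_data
    out = io.BytesIO()
    write_blocks(out, blocks)
    assert out.getvalue() == original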
@ricklupton force-pushed the test-v3.14-roundtrip branch from 63e9baa to 0287369 on December 3, 2024 20:46
@ricklupton marked this pull request as ready for review on December 3, 2024 20:50
@ricklupton merged commit 68203f7 into main on Dec 3, 2024
10 checks passed
@ricklupton
Owner Author

I think you're right that those are the parts that are not parsed, but given that we keep the extra_data on the blocks, it should still be possible to round-trip the data even when not everything is parsed fully. This PR makes sure that's true and tested.

It also adds another test, which checks that all the blocks were actually parsed completely. One of these is xfailing, but it should be fixed by your changes in #39 as a follow-up.
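
Roughly, that second check could look like the sketch below (the import path and the extra_data attribute are assumptions based on the UnreadableBlock repr earlier in the thread, not the exact test added by the PR):

from rmscene.scene_stream import UnreadableBlock  # assumed import path


def assert_fully_parsed(blocks):
    """Fail if any block could not be parsed or left unparsed bytes behind."""
    for block in blocks:
        assert not isinstance(block, UnreadableBlock), block
        assert block.extra_data == b"", block  # extra_data as shown in the repr above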
