
ERROR Uncaught error from Logz.io sender java.lang.IndexOutOfBoundsException: Index 214 is out of bounds [217 - 937] #52

Open
slavag opened this issue Jul 26, 2023 · 36 comments

@slavag

slavag commented Jul 26, 2023

Hi,
We're using the logz.io appender for log4j2 to ship logs. While it runs fine on x64 servers, on ARM instances we see the following exception:

2023-07-25 13:20:55,788 Log4j2-TF-1-LogzioAppender-1 ERROR Uncaught error from Logz.io sender java.lang.IndexOutOfBoundsException: Index 214 is out of bounds [217 - 937]
	at io.logz.sender.org.kairosdb.bigqueue.BigArrayImpl.validateIndex(BigArrayImpl.java:471)
	at io.logz.sender.org.kairosdb.bigqueue.BigArrayImpl.get(BigArrayImpl.java:411)
	at io.logz.sender.org.kairosdb.bigqueue.BigQueueImpl.dequeue(BigQueueImpl.java:124)
	at io.logz.sender.DiskQueue.dequeue(DiskQueue.java:65)
	at io.logz.sender.LogzioSender.dequeueUpToMaxBatchSize(LogzioSender.java:218)
	at io.logz.sender.LogzioSender.drainQueue(LogzioSender.java:234)
	at io.logz.sender.LogzioSender.drainQueueAndSend(LogzioSender.java:133)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
	at java.base/java.lang.Thread.run(Thread.java:832)

Please advise, what could be the issue here?
Thanks
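
For context, the error indicates that the queue's dequeue index (214) fell below the valid index window ([217 - 937]) tracked by the disk-backed queue, which typically points to stale or corrupted queue metadata on disk. A simplified, hypothetical sketch of this kind of range check follows (not the actual BigArrayImpl code; class and field names are illustrative only):

```java
public class IndexRangeCheck {
    // Simplified stand-in for a disk-backed array's valid index window.
    private final long tailIndex; // oldest valid index still on disk
    private final long headIndex; // newest valid index

    public IndexRangeCheck(long tailIndex, long headIndex) {
        this.tailIndex = tailIndex;
        this.headIndex = headIndex;
    }

    // An index outside the valid window (e.g. a stale dequeue pointer left
    // behind by inconsistent metadata) is rejected with the reported error.
    public void validateIndex(long index) {
        if (index < tailIndex || index > headIndex) {
            throw new IndexOutOfBoundsException(
                "Index " + index + " is out of bounds [" + tailIndex + " - " + headIndex + "]");
        }
    }

    public static void main(String[] args) {
        IndexRangeCheck queueWindow = new IndexRangeCheck(217, 937);
        try {
            queueWindow.validateIndex(214); // stale pointer below the valid window
        } catch (IndexOutOfBoundsException e) {
            System.out.println(e.getMessage()); // prints: Index 214 is out of bounds [217 - 937]
        }
    }
}
```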

@tamir-michaeli tamir-michaeli self-assigned this Jul 26, 2023
@tamir-michaeli
Contributor

Hi @slavag, can you provide the appender version you are using, and the JDK version?

@slavag
Author

slavag commented Jul 26, 2023

Appender is 1.0.19 and the JDK is AWS Corretto 15. Also reproduced with OpenJDK 8.
Thanks

@slavag
Author

slavag commented Jul 26, 2023

@tamir-michaeli As a temporary solution we set <inMemoryQueue>true</inMemoryQueue> and it solved the issue.
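
For anyone hitting the same error, the workaround switches the appender from the default disk-backed queue to the in-memory queue; a minimal sketch (token is a placeholder, other parameters omitted):

```xml
<LogzioAppender name="Logzio">
    <logzioToken>....</logzioToken>
    <logzioUrl>https://listener-eu.logz.io:8071</logzioUrl>
    <!-- bypass the disk-backed queue that triggers the IndexOutOfBoundsException -->
    <inMemoryQueue>true</inMemoryQueue>
</LogzioAppender>
```

Note the trade-off: logs buffered in memory are lost if the process crashes, whereas the disk queue survives restarts.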

@tamir-michaeli
Contributor

@slavag Good to hear. We will release a new version next week with a better-maintained bigqueue module for the underlying queue implementation; hopefully it will fix the issue.

@tamir-michaeli
Contributor

@slavag The new version is out. Please note that it is supported only with JDK 11 and above.

@tamir-michaeli
Contributor

@slavag Did you have a chance to check the new version?

@slavag
Author

slavag commented Aug 6, 2023

@tamir-michaeli sorry, I missed that; I will check in the coming days and let you know.
Thanks

@slavag
Author

slavag commented Aug 7, 2023

@tamir-michaeli Hi, unfortunately it's not fixed:

2023-08-07 10:32:37,886 Log4j2-TF-1-LogzioAppender-2 ERROR Uncaught error from Logz.io sender java.lang.IndexOutOfBoundsException: Index 214 is out of bounds [217 - 1430]
        at io.logz.sender.org.ikasan.bigqueue.BigArrayImpl.validateIndex(BigArrayImpl.java:444)
        at io.logz.sender.org.ikasan.bigqueue.BigArrayImpl.get(BigArrayImpl.java:385)
        at io.logz.sender.org.ikasan.bigqueue.BigQueueImpl.dequeue(BigQueueImpl.java:112)
        at io.logz.sender.DiskQueue.dequeue(DiskQueue.java:65)
        at io.logz.sender.LogzioSender.dequeueUpToMaxBatchSize(LogzioSender.java:222)
        at io.logz.sender.LogzioSender.drainQueue(LogzioSender.java:238)
        at io.logz.sender.LogzioSender.drainQueueAndSend(LogzioSender.java:133)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
        at java.base/java.lang.Thread.run(Thread.java:832)

@Nico-CloudAlly

Hey @tamir-michaeli,
Our Java application is using /mnt as the tmp folder (-Djava.io.tmpdir=/mnt).
Permissions for that folder are 777.

If we are not using that folder, the appender works without the <inMemoryQueue>true</inMemoryQueue> flag.

Any suggestions?

Thanks Nicolas

@slavag
Author

slavag commented Aug 7, 2023

> Hey @tamir-michaeli, Our Java application is using /mnt as the tmp folder (-Djava.io.tmpdir=/mnt). Permissions for that folder are 777.
>
> If we are not using that folder, the appender works without the <inMemoryQueue>true</inMemoryQueue> flag.
>
> Any suggestions?
>
> Thanks Nicolas

I need to add that /mnt is a different disk. But as Nico said, it's writable by every user.

@tamir-michaeli
Contributor

@slavag Did the appender work for some time after setting the tmpdir, or not at all?

@slavag
Author

slavag commented Aug 16, 2023

@tamir-michaeli not at all, it fails during initialization.

@slavag
Author

slavag commented Aug 22, 2023

@tamir-michaeli any update?

@tamir-michaeli
Contributor

@slavag We will probably start working on it next week.

@tamir-michaeli
Contributor

@slavag Can you please share your config?

@slavag
Author

slavag commented Aug 24, 2023

@tamir-michaeli

```xml
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<Configuration status="WARN">
    <Properties>
        <Property name="LOG_PATTERN">%d{HH:mm:ss,SSS} [%p] [%t] %notEmpty{[SID:%X{SID}]} - %m [%c]%n</Property>
    </Properties>

    <Appenders>
        <Console name="Console" target="SYSTEM_OUT" follow="true">
            <PatternLayout pattern="${LOG_PATTERN}"/>
        </Console>
        <LogzioAppender name="Logzio">
            <logzioToken>....</logzioToken>
            <logzioType>Tika</logzioType>
            <logzioUrl>https://listener-eu.logz.io:8071</logzioUrl>
            <additionalFields>hostname=${hostName}</additionalFields>
            <inMemoryQueue>true</inMemoryQueue>
        </LogzioAppender>
        <RollingFile name="File" fileName="/mnt/logs/tikalogs.log"
                     filePattern="/mnt/logs/tikalogs-%d{MM-dd-yyyy}-%i.log.gz">
            <PatternLayout>
                <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
            </PatternLayout>
            <Policies>
                <TimeBasedTriggeringPolicy />
                <SizeBasedTriggeringPolicy size="1 G"/>
            </Policies>
            <DefaultRolloverStrategy>
                <Delete basePath="/mnt/logs/">
                    <IfFileName glob="*/tikalogs-*.log.gz"/>
                    <IfAccumulatedFileCount exceeds="10"/>
                </Delete>
            </DefaultRolloverStrategy>
        </RollingFile>
    </Appenders>

    <Loggers>
        <Root level="debug">
            <AppenderRef ref="Console" level="warn"/>
            <AppenderRef ref="Logzio" level="warn"/>
            <AppenderRef ref="File" level="info"/>
        </Root>
        <Logger name="com.yyy.tika.parser" level="error" additivity="false">
            <AppenderRef ref="Console"/>
            <AppenderRef ref="Logzio"/>
            <AppenderRef ref="File"/>
        </Logger>

        <!-- disable the below, as sometimes they fill our logs with garbage and increase log size in logz.io -->
        <Logger name="org.apache.pdfbox" level="error" additivity="false">
        </Logger>
        <Logger name="org.apache.fontbox" level="error" additivity="false">
        </Logger>
        <Logger name="org.mp4parser" level="error" additivity="false">
        </Logger>
    </Loggers>
</Configuration>
```

Please note that with <inMemoryQueue>true</inMemoryQueue> it works OK; without it, it fails.
Thanks

@tamir-michaeli
Contributor

Hi @slavag,
unfortunately I could not reproduce the issue; these are the steps I took:

  1. Build a jar with the latest log4j2 appender, setting inMemoryQueue to false.
  2. Create an EC2 instance (arm64) with 2 volumes.
  3. Run the jar from the first drive while setting the tmpdir to a folder on the second drive.

The jar ran with no issues, across multiple runs as well (to test the loading of data saved on disk).

Maybe setting the appender debug flag to true will give us more information.

@slavag
Author

slavag commented Aug 30, 2023

@tamir-michaeli Hi, is there any update? Thanks

@tamir-michaeli
Contributor

@slavag Please see the message above. I could not reproduce the issue. Can you try running the appender in debug mode?

@slavag
Author

slavag commented Aug 31, 2023

@tamir-michaeli sorry, somehow I didn't see your message. Sure, we'll run it in debug; can you please tell me where to put that debug property?
Thanks

@tamir-michaeli
Contributor

@slavag, add <debug>true</debug>:

```xml
<LogzioAppender name="Logzio">
    <logzioToken>....</logzioToken>
    <logzioType>Tika</logzioType>
    <debug>true</debug>
    <logzioUrl>https://listener-eu.logz.io:8071</logzioUrl>
    <additionalFields>hostname=${hostName}</additionalFields>
    <inMemoryQueue>true</inMemoryQueue>
</LogzioAppender>
```

@Nico-CloudAlly

Hey @tamir-michaeli,
I've tested with <debug>true</debug>.
The error is the same.

INFO [main] 05:59:38,972 org.apache.tika.server.core.TikaServerConfig As of Tika 2.5.0, you can set the digester via the AutoDetectParserConfig in tika-config.xml. We plan to remove this commandline option in 2.8.0
I> No access restrictor found, access to any MBean is allowed
Jolokia: Agent started with URL http://127.0.0.1:8778/jolokia/
2023-09-04 05:59:39,564 Log4j2-TF-2-LogzioAppender-2 ERROR Uncaught error from Logz.io sender java.lang.IndexOutOfBoundsException: Index 214 is out of bounds [217 - 1744]
at io.logz.sender.org.ikasan.bigqueue.BigArrayImpl.validateIndex(BigArrayImpl.java:444)
at io.logz.sender.org.ikasan.bigqueue.BigArrayImpl.get(BigArrayImpl.java:385)
at io.logz.sender.org.ikasan.bigqueue.BigQueueImpl.dequeue(BigQueueImpl.java:112)
at io.logz.sender.DiskQueue.dequeue(DiskQueue.java:65)
at io.logz.sender.LogzioSender.dequeueUpToMaxBatchSize(LogzioSender.java:222)
at io.logz.sender.LogzioSender.drainQueue(LogzioSender.java:238)
at io.logz.sender.LogzioSender.drainQueueAndSend(LogzioSender.java:133)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:832)
Sep 04, 2023 5:59:40 AM org.apache.cxf.endpoint.ServerImpl initDestination

When using <inMemoryQueue>true</inMemoryQueue> it works with no issues.

@slavag
Author

slavag commented Sep 10, 2023

@tamir-michaeli Hi, is there any update? Thanks

@tamir-michaeli
Contributor

@slavag Unfortunately no, I could not reproduce the issue; I tried multiple combinations of disks and jar locations.
I will keep this issue open.

@slavag
Author

slavag commented Sep 11, 2023

@tamir-michaeli we can debug together with you on our servers, where the issue reproduces consistently. Would you like to do a conference call (screen sharing)?

@tamir-michaeli
Contributor

@slavag What kind of disk are you using (EFS, EBS, FSx)?

@slavag
Author

slavag commented Sep 11, 2023

@tamir-michaeli we're using EBS gp3. Maybe I forgot to mention it's an LVM-managed disk on top of EBS gp3.

@tamir-michaeli
Contributor

@slavag I will try to reproduce the issue with LVM.

@slavag
Author

slavag commented Sep 13, 2023

@tamir-michaeli thanks, please use the XFS file system on it.

@tamir-michaeli
Contributor

@slavag I'm keeping the issue open until we can solve it; no ETA.
As of today, our recommendation is not to store integration queues on LVM.

@slavag
Author

slavag commented Sep 20, 2023

@tamir-michaeli the problem is that the Java temp folder in the app that uses the logger points at the LVM disk, and this of course affects the integration queue location. So, if we can somehow define a different location for the integration queue, we can definitely put it in the standard /tmp folder. Is there any way to point the integration queue storage somewhere other than the default Java temp folder?
Thanks

@tamir-michaeli
Contributor

@slavag Not sure that I understand; doesn't -Djava.io.tmpdir work?

@slavag
Author

slavag commented Sep 26, 2023

@tamir-michaeli we're using -Djava.io.tmpdir for our own needs to point at the LVM volume, and the logz.io appender picks it up as well.
I'm asking if there's another option in the appender to point to a different location, other than -Djava.io.tmpdir?

@tamir-michaeli
Contributor

tamir-michaeli commented Sep 27, 2023

@slavag You can use the queueDir parameter for the disk queue: https://github.com/logzio/logzio-log4j2-appender#parameters-for-disk-queue
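
For reference, pointing the disk queue somewhere other than java.io.tmpdir would look roughly like this (the path is an example; see the linked README for the exact parameter semantics):

```xml
<LogzioAppender name="Logzio">
    <logzioToken>....</logzioToken>
    <logzioUrl>https://listener-eu.logz.io:8071</logzioUrl>
    <!-- store the disk-backed queue under /tmp instead of java.io.tmpdir
         (/tmp/logzio-queue is an example path) -->
    <queueDir>/tmp/logzio-queue</queueDir>
</LogzioAppender>
```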

@slavag
Author

slavag commented Sep 27, 2023

@tamir-michaeli thanks, will check.

@slavag
Author

slavag commented Oct 2, 2023

@tamir-michaeli works, thanks
