There are 1 datanode(s) running and 1 node(s) are excluded in this operation #51
@lokinell hi! Can you check the logs of the datanode/namenode and paste them here? Most likely there is a problem with datanode startup, and it has not been registered with the namenode.
How was this solved in the end?
@shicanLeft if you have the same problem, please share the logs from your namenode/datanode.
@earthquakesan
2018-10-12 17:00:19,363 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30008 milliseconds
I have a strange problem, pretty much like described above. When I use Hue I can upload files (as long as they are below 64MB), but when I use HDFS from the outside in, it fails.
Check your Docker version. In version 18.03 everything is OK. I had the same problem with Docker 18.06, and after downgrading to 18.03 the problem was solved.
Mine was 18.09, but I downgraded to 18.03 as suggested. Unfortunately I get the same result as described above. I should clarify that I'm running Windows and have a Hyper-V VM running Ubuntu 18.04 Server, on which I run Docker with this project's docker-compose.yml. On Windows I have Hadoop downloaded (not running) so that I can run hadoop fs -copyFromLocal against the VM's IP on port 8020.
I think this problem is caused either by the datanode not being started or by permissions; you can start from these two aspects. I met this problem outside of the Docker environment, but I solved it from these two directions.
How did you solve it in the end? I have the same problem. |
any updates on this? |
The reason for this exception is that the client cannot connect to the datanode.
First, in docker-compose.yml, specify hostnames and ports for both the datanode and the namenode. Ports 50010 and 50020 are used by the datanode to communicate, while port 8020 is used by the namenode to communicate. After this, map those hostnames in /etc/hosts (if your OS is Linux or macOS), where "xxx.xx.xx.xx" is the IP address of your server machine (if you are using your own computer, you can set it to localhost).
Hi,
when I try to write a parquet file into HDFS, I get the issue below:
File /data_set/hello/crm_last_month/2534cb7a-fc07-401e-bdd3-2299e7e657ea.parquet could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1733)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2496)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)