Why not copy the directory "src/" in Dockerfile #2
Hi Yachao, I think you can solve this issue if you first clone the repository to your machine and then mount the repository directory using the `-v` option, so something like: `docker run -v /path/to/BOLSTM/:/BOLSTM bolstm bash`. Then you can run all the scripts inside /BOLSTM. Let me know if that works.
@AndreLamurias Thanks for your quick reply! I can get the repository's scripts into the container now, but I can't find a proper way to run them. I mean, if I use the `docker run -v /path/to/BOLSTM/:/BOLSTM bolstm bash` command, a container is created but it exits right away; I've seen this with `docker ps -a`. I searched Google and found the reason: there's already a bash file in the image. Another way is interactive mode, `docker run -it -v /path/to/BOLSTM/:/BOLSTM bolstm bash`; then I can run all the scripts, but when I exit the bash, all my changes are gone. I don't know what to do now. Do you have any other solutions?
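For what it's worth, changes made inside a container are not necessarily lost when the shell exits (unless the container was started with `--rm`); a stopped container can be restarted and re-attached. A sketch, with the container ID taken from `docker ps -a`:

```shell
# list all containers, including ones that have exited
docker ps -a
# restart a previously exited container and re-attach to its shell
# (<container-id> is a placeholder for the ID shown by docker ps -a)
docker start -ai <container-id>
```

Changes written to a bind-mounted directory (the `-v` mount) live on the host anyway, so they persist regardless of what happens to the container.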
Sorry, I forgot to include the `-it` option. The changes you make inside /BOLSTM should persist. What are you losing? I guess another option would be to follow the instructions in the Dockerfile and configure your system manually, without Docker.
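With that correction, the run command would look something like the following sketch (the image tag `bolstm` and the host path are placeholders):

```shell
# interactive shell in the container, with the cloned repo mounted at /BOLSTM
docker run -it -v /path/to/BOLSTM:/BOLSTM bolstm bash
# anything written under /BOLSTM goes to the host side of the bind mount,
# so it survives after the container exits
```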
@AndreLamurias Hi Andre, now I can save the changes using the docker commit command, but I've run into the same issue as @mjlorenzo305: I can't generate the chebi.db with "python dishin.py", either in Docker or locally. Looking forward to your reply and solutions! Thank you!
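For reference, the commit-based workflow mentioned here looks roughly like this (the container ID and image names are placeholders):

```shell
# find the ID of the container whose state you want to keep
docker ps -a
# save that container's filesystem as a new image
docker commit <container-id> bolstm:mychanges
# later runs can start from the saved image
docker run -it bolstm:mychanges bash
```

Note that `docker commit` captures the container's filesystem but not the contents of bind-mounted volumes, which already live on the host.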
@dearchill @mjlorenzo305 I am working on an updated Dockerfile that includes all the dependencies; it should be ready next week.
@AndreLamurias (@dearchill),
Also, I noticed the code expects certain directories to be created in advance (temp, models, data, etc.) or it will fail during preprocessing or training. Hope this is helpful.
I am currently hitting an issue (as mentioned above) in the middle of training the model. The error looks like this: `AttributeError: 'ProgbarLogger' object has no attribute 'log_values'`. I have searched around and see others have hit this problem, but I haven't found any solution yet. Here is the traceback console output:
@mjlorenzo305 From the error log you seem to be using Anaconda, which I'm really not familiar with, so I can't help you with that, although it's probably something to do with the Keras/TensorFlow versions.

I have updated Dockerfile and Dockerfile_gpu to include all the dependencies. The build will download a patched version of sst-light, the pubmed-w2v binary, and all the other dependencies you mentioned, except the DDICorpus, as the authors require a form submission to download it. In the future I may update the repo again to make the image smaller, because as it is now it's too big to upload to Docker Hub (7 GB). But I tested building the image and running the commands inside the container, and it seemed to work. I think this is the best way to do it; otherwise you will have issues with the package versions.

You will probably want to use the `-v` option to mount some directories such as results/ and models/, but do not mount src/, as it would overwrite what is already in the container. @mjlorenzo305 @dearchill let me know if the new Dockerfile is working for you.
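A sketch of the suggested invocation, mounting only the output directories and leaving src/ as baked into the image (host paths and the image tag are placeholders):

```shell
# mount only the output directories; do NOT mount src/,
# or it will shadow the copy already inside the image
docker run -it \
  -v /path/to/results:/BOLSTM/results \
  -v /path/to/models:/BOLSTM/models \
  bolstm bash
```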
This usually shouldn't be an issue, but you can add some lines to create the dirs if they don't exist. The paths are at the top of the script files, so it should be simple, and you can then open a pull request.
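A minimal way to do this from the shell before running the pipeline (directory names are assumed from this thread; check the paths at the top of each script):

```shell
# create the directories the scripts expect, if they don't already exist
# (names assumed from this thread: temp, models, data, results)
mkdir -p temp models data results
```

Equivalently, inside the Python scripts themselves this would be `os.makedirs(path, exist_ok=True)` next to the path definitions at the top of each file.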
I was able to get beyond the issue I reported above: "AttributeError: 'ProgbarLogger' object has no attribute 'log_values'" as well as a few others that followed. I'll mention a few things here in case others run into those issues.
When I use the docker run command in an interactive way, I can't prepare data or train the model, because of "python3: can't open file 'src/train_rnn.py': [Errno 2] No such file or directory". And in the image directory I can't find prepare_ddi.sh or pipeline_ddi.sh. So maybe there's a better way to run the docker image you provided; could you please offer specific instructions? I'm very new to Docker, thanks a lot!