Training a BERT fake-tweet classification model with MySQL, PostgreSQL, and Neo4j, and comparing their training-process performance
- conda install -c huggingface transformers
- conda install -c pytorch pytorch
- conda install -c anaconda flask
- conda install -c anaconda jinja2
- conda install -c anaconda numpy
- conda install -c anaconda mysql-connector-python
- conda install -c anaconda psycopg2
- pip install neo4j
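
The training scripts compare how each database backend supplies tweet data to the model. Below is a minimal sketch of opening the three connections with the drivers installed above; the hosts, credentials, database names, and the `tweets` table / `Tweet` node schema are placeholder assumptions, not taken from this repository.

```python
# Minimal sketch: connect to MySQL, PostgreSQL, and Neo4j.
# Hosts, credentials, database names, and schema are placeholders.
import mysql.connector
import psycopg2
from neo4j import GraphDatabase

mysql_conn = mysql.connector.connect(
    host="localhost", user="root", password="secret", database="fake_tweets"
)
pg_conn = psycopg2.connect(
    host="localhost", user="postgres", password="secret", dbname="fake_tweets"
)
neo4j_driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

# Fetch (text, label) pairs from each store so training times can be compared.
cur = mysql_conn.cursor()
cur.execute("SELECT text, label FROM tweets")
mysql_rows = cur.fetchall()
cur.close()

with pg_conn.cursor() as cur:
    cur.execute("SELECT text, label FROM tweets")
    pg_rows = cur.fetchall()

with neo4j_driver.session() as session:
    neo4j_rows = session.run("MATCH (t:Tweet) RETURN t.text, t.label").values()
```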
- Run training_model.py OR training_model_db.py to train the classifier
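
Both scripts fine-tune bert-base-cased for binary real/fake classification. The condensed sketch below shows a standard Hugging Face + PyTorch fine-tuning loop; the in-line example texts, batch size, learning rate, epoch count, and output path are illustrative assumptions, while the actual data loading and hyperparameters live in the scripts.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizer, BertForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2
).to(device)

# In the real scripts, texts/labels come from the CSVs or one of the databases.
texts = ["example real tweet", "example fake tweet"]
labels = [0, 1]

enc = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(labels))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for input_ids, attention_mask, batch_labels in loader:
        optimizer.zero_grad()
        out = model(
            input_ids=input_ids.to(device),
            attention_mask=attention_mask.to(device),
            labels=batch_labels.to(device),
        )
        out.loss.backward()
        optimizer.step()

# Save the fine-tuned weights and tokenizer for the Flask app to load.
model.save_pretrained("model")
tokenizer.save_pretrained("model")
```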
- python app.py
- access the local server: http://127.0.0.1:5000/
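
app.py serves predictions through Flask on the address above. A minimal sketch of such an endpoint is shown here, assuming the fine-tuned model and tokenizer were saved to a local model/ directory; the /predict route name and JSON shape are illustrative, not taken from this repository.

```python
import torch
from flask import Flask, request, jsonify
from transformers import BertTokenizer, BertForSequenceClassification

app = Flask(__name__)
# "model" is an assumed output directory from the training step.
tokenizer = BertTokenizer.from_pretrained("model")
model = BertForSequenceClassification.from_pretrained("model")
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"text": "some tweet"} (illustrative shape).
    text = request.get_json()["text"]
    enc = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    label = torch.argmax(logits, dim=-1).item()
    return jsonify({"fake": bool(label)})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```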
- Hugging Face bert-base-cased: https://huggingface.co/bert-base-cased
- Datasets: https://github.com/prathameshmahankal/Fake-News-Detection-Using-BERT/tree/main/data
- BERT Paper Reference: https://arxiv.org/abs/1810.04805
- Frontend Reference: https://github.com/ac4mm/Fake-Detector