X-PERT: Cross-Platform Error ReporTer

Notice: This is an academic, open-source reimplementation of the X-PERT paper. It is copyright of the Georgia Institute of Technology and has been released under the MIT License. Fujitsu Labs of America holds patents protecting commercial interests in this line of research.

Here are some differences from the original version of X-PERT:

  • The tool checks only for the layout issues described in the paper.
  • The new tool only provides command-line output in CSV format, unlike the HTML reports generated by the tool described in the paper.

Running X-PERT via its web front-end

Install the gevent and flask Python packages:

pip install gevent flask

If you don't have pip, see the pip installation instructions.

Configure the VM or make suitable changes

The web app assumes that you are running a Windows virtual machine using VMware and runs the crawler inside the VM. To tweak this, or to run the crawler on your local machine, change the vm_cmd option in the x.py file.
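For illustration only, a vm_cmd value might look like the sketch below. The option name comes from this README, but the exact format expected by x.py, and all paths and credentials shown here, are assumptions you would adapt to your setup:

# In xpert/web/x.py (hypothetical values).
# Default assumption: launch the crawler inside a VMware guest via vmrun.
vm_cmd = 'vmrun -T ws -gu <user> -gp <password> runProgramInGuest win7.vmx C:\\xpert\\crawl.bat'
# To run the crawler on the local machine instead, point vm_cmd at a local command:
# vm_cmd = 'java -cp xpert.jar CrawlDriver'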

Start the web app

cd xpert/web
python app.py

It should start the web app on port 8000. Open http://localhost:8000 in your web browser and navigate through the wizard.

Running X-PERT via the command line

Crawl

java -cp xpert.jar CrawlDriver

If you're using the command-line option, you need to write this CrawlDriver class yourself, based on Crawljax's specification; the repository links to an example CrawlDriver.
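As a rough illustration, a minimal driver might look like the sketch below. It assumes the Crawljax 3.x builder API; the Crawljax version bundled with xpert.jar may differ, and the X-PERT-specific plugins that record per-state DOM data during the crawl are omitted here.

import com.crawljax.browser.EmbeddedBrowser.BrowserType;
import com.crawljax.core.CrawljaxRunner;
import com.crawljax.core.configuration.BrowserConfiguration;
import com.crawljax.core.configuration.CrawljaxConfiguration;
import com.crawljax.core.configuration.CrawljaxConfiguration.CrawljaxConfigurationBuilder;

public class CrawlDriver {
    public static void main(String[] args) throws Exception {
        // Crawl a target site (hypothetical URL).
        CrawljaxConfigurationBuilder builder =
                CrawljaxConfiguration.builderFor("http://example.com");
        // One Firefox instance; repeat the crawl with a second browser
        // (e.g. CHROME) so X-PERT has two crawls to compare.
        builder.setBrowserConfig(new BrowserConfiguration(BrowserType.FIREFOX, 1));
        builder.crawlRules().clickDefaultElements();
        builder.setMaximumStates(10);
        // X-PERT's data-collection plugins would be registered here via
        // builder.addPlugin(...); omitted in this sketch.
        new CrawljaxRunner(builder.build()).call();
    }
}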

The web front-end generates a suitable CrawlDriver from the information provided in the form, using a Jinja template.

Compare

java -cp xpert.jar edu.gatech.xpert.XpertMain <output_dir> <browser1> <browser2>

output_dir refers to the folder in which the crawled data is saved. browser1 and browser2 are the names of the two browsers against which X-PERT checks for XBIs in an application. The valid browser values supported by Crawljax are CHROME, FIREFOX, and INTERNET_EXPLORER.
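For example, assuming the crawl data was saved under output/abc123 (a hypothetical directory name), comparing Firefox against Chrome would look like:

java -cp xpert.jar edu.gatech.xpert.XpertMain output/abc123 FIREFOX CHROME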

The web front-end creates an output folder for each crawl performed, with a random 32-character alphanumeric name; the repository includes an example output directory.

Troubleshooting

If things seem broken, you might need to update the Selenium library, and perhaps the Chrome and IE drivers if you are trying to use the latest versions of these browsers. These files are located inside the exec folder.

You can find the latest versions of these files from the Selenium website. Download the latest Selenium standalone jar from https://code.google.com/p/selenium/downloads/list

Issues

You can create bugs/issues on GitHub. In case of any questions, you can also email shauvik [at] gatech {dot} edu