Package: reapr
Type: Package
Title: Reap Information from Websites
Version: 0.1.0
Date: 2019-01-15
Authors@R: c(
    person("Bob", "Rudis", email = "[email protected]", role = c("aut", "cre"),
           comment = c(ORCID = "0000-0001-5670-2640"))
    )
Maintainer: Bob Rudis <[email protected]>
Description: There's no longer a need to fear getting at the gnarly bits of web pages.
    For the vast majority of web scraping tasks, the 'rvest' package does a
    phenomenal job of providing just enough of what you need to get by. But, if you
    want more of the details of the site you're scraping, some handy shortcuts to
    the page elements in use, and the ability to not have to think too hard about
    serialization during scraping tasks, then you may be interested in reaping
    more than harvesting. Tools are provided to interact with web site content
    and metadata at a more granular level than 'rvest' but at a higher level than
    'httr'/'curl'.
URL: https://gitlab.com/hrbrmstr/reapr
BugReports: https://gitlab.com/hrbrmstr/reapr/issues
NeedsCompilation: yes
Encoding: UTF-8
License: MIT + file LICENSE
Suggests:
    testthat,
    covr
Depends:
    R (>= 3.2.0)
Imports:
    httr,
    jsonlite,
    xml2,
    selectr,
    magrittr,
    curl,
    methods,
    xslt,
    stats
Roxygen: list(markdown = TRUE)
RoxygenNote: 6.1.1