In this ~90 minute introduction, the goal is to introduce the new dataRetrieval functions.
The intended audience is someone:
Seasoned dataRetrieval user AND/OR intermediate R user
Familiar with USGS water data
New to dataRetrieval
Introduction to dataRetrieval
NWIS servers are shutting down.
That means all readNWIS functions will eventually stop working.
Timeline is very uncertain, so we wanted to get information out on replacement functions ASAP.
New dataRetrieval functions are available to replace the NWIS functions.
read_waterdata_ functions are the modern functions.
They use the new USGS Water Data APIs.
The features shown in this presentation are available on the most recent CRAN update:
Functions added when new API endpoints are introduced will initially be pushed to the "develop" branch on GitHub. To test those updates, use the remotes package:
The "develop" branch WILL change frequently, and there are no promises of future behavior.
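A sketch of that install step (the repository path DOI-USGS/dataRetrieval follows the package's documentation site; double-check the branch name before running):

```r
# Install the development version from the "develop" branch on GitHub
# (assumes the remotes package is already installed)
remotes::install_github("DOI-USGS/dataRetrieval", ref = "develop")
```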
The Open Geospatial Consortium (OGC) is a non-profit international organization that develops and promotes open standards for geospatial information. dataRetrieval now offers OGC-compliant interfaces to USGS water data:
read_waterdata_monitoring_location - Monitoring location information
read_waterdata_ts_meta - Time series availability
read_waterdata_daily - Daily data
read_waterdata_latest_continuous - Latest continuous data
read_waterdata_field_measurements - Discrete hydrologic data (gage height, discharge, and readings of groundwater levels)
read_waterdata - Generalized function
read_waterdata_metadata - Metadata
The Water Data APIs limit how many queries a single IP address can make per hour
You can run new dataRetrieval functions without a token.
You might run into errors quickly. If you (or your IP!) have exceeded the quota, you will see:
! HTTP 429 Too Many Requests.
• You have exceeded your rate limit. Make sure you provided your API key from https://api.waterdata.usgs.gov/signup/, then either try again later or contact us at https://waterdata.usgs.gov/questions-comments/?referrerUrl=https://api.waterdata.usgs.gov for assistance.
Request a USGS Water Data API Token: https://api.waterdata.usgs.gov/signup/
Save it in a safe place (KeyPass or other password management tool)
Add it to your .Renviron file as API_USGS_PAT.
Restart R
Run after restarting R:
See next slide for a demonstration.
My favorite method to add your token to .Renviron is to use the usethis package. Let's pretend the token sent to you was "abc123". Run usethis::edit_r_environ() and add this line to the file that opens:
API_USGS_PAT = "abc123"
Save that file
Restart R/RStudio.
Check that it worked by running (you should see your token printed in the Console):
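A sketch of that check (base R only):

```r
# Should print your token (here, the placeholder value from the
# previous slide) to the Console
Sys.getenv("API_USGS_PAT")
```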
Use your “tab” key!
Replaces readNWISsite:
All the columns that you retrieve, you can also filter on.
You should not specify all of these parameters.
You should not specify too few of these parameters.
Let’s get all the monitoring locations for Dane County, Wisconsin:
site_info <- read_waterdata_monitoring_location(state_name = "Wisconsin",
                                                county_name = "Dane County")
Requesting:
https://api.waterdata.usgs.gov/ogcapi/v0/collections/monitoring-locations/items?f=json&lang=en-US&limit=10000&state_name=Wisconsin&county_name=Dane%20County
Remaining requests this hour:138
[1] 1036
Now that we’ve seen the whole data set, maybe we realize in the future we can ask for just stream sites, and we only really need a few of those columns:
site_info_refined <- read_waterdata_monitoring_location(
  state_name = "Wisconsin",
  county_name = "Dane County",
  site_type = "Stream",
  properties = c("monitoring_location_id",
                 "monitoring_location_name",
                 "drainage_area",
                 "geometry"))
Requesting:
https://api.waterdata.usgs.gov/ogcapi/v0/collections/monitoring-locations/items?f=json&lang=en-US&limit=10000&properties=monitoring_location_name%2Cdrainage_area&state_name=Wisconsin&county_name=Dane%20County&site_type=Stream
Remaining requests this hour:138
The "geometry" column means it's an sf object (from the sf package), and makes mapping easy!
Use skipGeometry = TRUE in the query to return a plain data frame with no geometry.

Time-Series Metadata
Kind of replaces whatNWISdata:
Let's get all the time series in Dane County, WI with daily mean (statistic_id = "00003") discharge (parameter code = "00060") or temperature (parameter code = "00010"):
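A sketch of that query (the argument names parameter_code and statistic_id follow the patterns used elsewhere in these slides; treat them as assumptions and check ?read_waterdata_ts_meta):

```r
# Daily mean (00003) discharge (00060) or temperature (00010) time series
# within the bounding box of the Dane County sites pulled earlier
ts_meta <- read_waterdata_ts_meta(bbox = sf::st_bbox(site_info),
                                  parameter_code = c("00060", "00010"),
                                  statistic_id = "00003")
```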
Tip
Geographic filters are limited to monitoring_location_id and bbox in "waterdata" functions other than read_waterdata_monitoring_location.
Using sf::st_bbox() is a convenient way to take advantage of the spatial features integration.
Replaces readNWISdv:
The “time” argument has a few options:
A single date (or date-time): "2024-10-01" or "2024-10-01T23:20:50Z"
A bounded interval: c("2024-10-01", "2025-07-02")
Half-bounded intervals: c("2024-10-01", NA)
Duration objects: "P1M" for data from the past month or "PT36H" for the last 36 hours
Here are a bunch of valid inputs:
# Ask for exact times:
time = "2025-01-01"
time = as.Date("2025-01-01")
time = "2025-01-01T23:20:50Z"
time = as.POSIXct("2025-01-01T23:20:50Z",
                  format = "%Y-%m-%dT%H:%M:%S",
                  tz = "UTC")
# Ask for specific range
time = c("2024-01-01", "2025-01-01") # or Dates or POSIXs
# Asking beginning of record to specific end:
time = c(NA, "2024-01-01") # or Date or POSIX
# Asking specific beginning to end of record:
time = c("2024-01-01", NA) # or Date or POSIX
# Ask for period
time = "P1M" # past month
time = "P7D" # past 7 days
time = "PT12H" # past 12 hours
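Putting the pieces together, a sketch of a full read_waterdata_daily call (the statistic_id argument is assumed to match the pattern shown for time-series metadata; check ?read_waterdata_daily):

```r
# Last month of daily mean discharge for one site
daily_q <- read_waterdata_daily(monitoring_location_id = "USGS-01491000",
                                parameter_code = "00060",
                                statistic_id = "00003",
                                time = "P1M")
```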
Most recent observation for each time series of continuous data. Continuous data are collected via automated sensors installed at a monitoring location. They are collected at a high frequency and often at a fixed 15-minute interval.
latest_uv_data <- read_waterdata_latest_continuous(monitoring_location_id = "USGS-01491000",
                                                   parameter_code = "00060")

latest_dane_county <- read_waterdata_latest_continuous(bbox = sf::st_bbox(site_info),
                                                       parameter_code = "00060")

single_ts <- read_waterdata_latest_continuous(time_series_id = "202345d175874d2c814648ac9bea5deb")
Note
Nearly all arguments can be vectors.
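For example (the site and parameter combinations here are hypothetical, just to illustrate vector arguments):

```r
# One call, two sites, two parameter codes
latest_multi <- read_waterdata_latest_continuous(
  monitoring_location_id = c("USGS-01491000", "USGS-05428500"),
  parameter_code = c("00060", "00010"))
```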
Latest discharge (00060) in Dane County, WI:
library(leaflet)

# leaflet expects coordinates in WGS84 (EPSG:4326)
leaflet_crs <- 4326

pal <- colorNumeric("viridis", latest_dane_county$value)

leaflet(data = latest_dane_county |>
          sf::st_transform(crs = leaflet_crs)) |>
  addProviderTiles("CartoDB.Positron") |>
  addCircleMarkers(popup = paste(latest_dane_county$monitoring_location_id, "<br>",
                                 latest_dane_county$time, "<br>",
                                 latest_dane_county$value,
                                 latest_dane_county$unit_of_measure),
                   color = ~ pal(value),
                   radius = 3,
                   opacity = 1) |>
  addLegend(pal = pal,
            position = "bottomleft",
            title = "Latest Discharge",
            values = ~value)
This function is totally different!
It uses CQL2 queries (Common Query Language).
Great examples here: https://api.waterdata.usgs.gov/docs/ogcapi/complex-queries/
Wisconsin and Minnesota sites with a drainage area greater than 1000 mi^2:
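A sketch of that query (the CQL2 property names and the read_waterdata arguments shown here are assumptions based on the complex-queries examples linked above; adjust to the actual schema):

```r
# CQL2 (JSON encoding): state is Wisconsin or Minnesota AND drainage_area > 1000
cql <- '{
  "op": "and",
  "args": [
    {"op": "in", "args": [{"property": "state_name"}, ["Wisconsin", "Minnesota"]]},
    {"op": ">",  "args": [{"property": "drainage_area"}, 1000]}
  ]
}'

sites_mn_wi <- read_waterdata(service = "monitoring-locations",
                              CQL = cql)
```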
pal <- colorNumeric("viridis", sites_mn_wi$drainage_area)

leaflet(data = sites_mn_wi |>
          sf::st_transform(crs = leaflet_crs)) |>
  addProviderTiles("CartoDB.Positron") |>
  addCircleMarkers(popup = ~monitoring_location_name,
                   color = ~ pal(drainage_area),
                   radius = 3,
                   opacity = 1) |>
  addLegend(pal = pal,
            position = "bottomleft",
            title = "Drainage Area",
            values = ~drainage_area)
Let's find a list of sites with HUCs that fall within 02070010, using the wildcard %:
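A sketch of that lookup (the property name hydrologic_unit_code and the like-operator usage are assumptions; check the CQL2 examples linked earlier for the actual schema):

```r
# Match any HUC that starts with "02070010"
cql_huc <- '{
  "op": "like",
  "args": [{"property": "hydrologic_unit_code"}, "02070010%"]
}'

sites_huc <- read_waterdata(service = "monitoring-locations",
                            CQL = cql_huc)
unique(sites_huc$hydrologic_unit_code)
```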
[1] "020700100305" "020700100307" "020700100303" "020700100203" "020700100102"
[6] "020700100101" "020700100201" "020700100204" "020700100103" "020700100202"
[11] "020700100301" "020700100302" "020700100304" "020700100306" "02070010"
[16] "020700100402" "020700100401" "020700100805" "020700100601" "020700100602"
[21] "020700100603" "020700100604" "020700100606" "020700100501" "020700100502"
[26] "020700100504" "020700100503" "020700100801" "020700100701" "020700100702"
[31] "020700100703" "020700100704" "020700100705" "020700100802" "020700100803"
[36] "020700100804" "020700100605"
Flexible Queries
Lots of options to define your query
Do NOT define all of them
Do NOT define too few of them
Flexible Columns Returned
Simple Features (sf) support
CQL query support
There is a character limit to how big your query can be
Possible alternatives to large site lists are bounding box queries, or loops/applys/etc to chunk up the request
Need to balance the character size of the request with the requests per hour limit.
max_results lets you define how many rows are returned.
limit lets you define how many rows are returned per page of data. With a good internet connection, you can probably get away with ignoring this argument.
I would ignore both most of the time.
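For illustration (the cap of 100 is a hypothetical value, just to show where the argument goes):

```r
# Stop after the first 100 matching monitoring locations
sites_small <- read_waterdata_monitoring_location(state_name = "Wisconsin",
                                                  max_results = 100)
```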
If you run dataRetrieval calls in a CI job, you’ll need to add an API Token to the configuration.
Go to: Settings -> CI/CD -> Variables -> Add Variable
Key should be API_USGS_PAT, value will be the token
Click on Masked and hidden
Add to your .gitlab-ci.yml file:
variables:
  API_USGS_PAT: "${API_USGS_PAT}"
In GitHub:
Settings -> Secrets and variables -> Actions -> Secrets
Secret can be stored in Environment or Repository
If you created an Environment called “CI_config”, your CI yaml will need:
environment: CI_config

env:
  API_USGS_PAT: ${{ secrets.API_USGS_PAT }}
You’ll want to add a token for any Posit Connect product (Shiny app, Quarto slides, etc.).
USGS switched to Aquarius Samples on March 11, 2024.
On that day, the USGS data in the Water Quality Portal was frozen.
“modern USGS discrete data” = data that includes pre and post Aquarius Samples conversion.
The new function read_waterdata_samples gets modern USGS discrete data.
Water Quality Portal (WQP) also has modern USGS discrete data, but not by default.
If you only need USGS data, use read_waterdata_samples; if you need USGS and non-USGS, use readWQPdata.
Replaces readNWISqw.
read_waterdata_samples is the SAME as read_USGS_samples, but going forward we want to use waterdata for consistent branding.
site <- "USGS-01631000"
pcode <- "00660"
qw_data <- read_waterdata_samples(monitoringLocationIdentifier = site,
                                  usgsPCode = pcode,
                                  dataType = "results",
                                  dataProfile = "basicphyschem")
ncol(qw_data)
[1] 100
That’s a LOT of columns that come back.
Let’s pull just a few columns out and look at those:
library(dplyr)
qw_data_slim <- qw_data |>
  select(Date = Activity_StartDate,
         Result_Measure,
         DL_cond = Result_ResultDetectionCondition,
         DL_val_A = DetectionLimit_MeasureA,
         DL_type_A = DetectionLimit_TypeA) |>
  mutate(Result = if_else(!is.na(DL_cond), DL_val_A, Result_Measure),
         Detected = if_else(!is.na(DL_cond), "Not Detected", "Detected")) |>
  arrange(Detected)
A summary service exists for 1 site at a time (so in this case, monitoringLocationIdentifier cannot be a vector of sites):
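A sketch of that call (the function name summarize_waterdata_samples is an assumption based on the package's naming pattern; check the documentation for the exact name):

```r
# Summary of available discrete sample data for a single site
site_summary <- summarize_waterdata_samples(
  monitoringLocationIdentifier = "USGS-01631000")
```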
If you use readWQPqw, add legacy = FALSE to get modern USGS data:
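For example (the site and parameter code reuse the values from the earlier read_waterdata_samples example):

```r
# Modern USGS data via the Water Quality Portal
qw_wqp <- readWQPqw(siteNumbers = "USGS-01631000",
                    parameterCd = "00660",
                    legacy = FALSE)
```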
If you use readWQPdata, add service = "ResultWQX3":
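For example (the characteristicName filter here is a hypothetical illustration of a typical WQP query argument):

```r
# Modern results profile from the Water Quality Portal
wqp_data <- readWQPdata(siteid = "USGS-01631000",
                        characteristicName = "Phosphorus",
                        service = "ResultWQX3")
```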
There’s a lot of new information and changes being presented. There are going to be scripts that have been passed down through the years that will start breaking once the NWIS servers are decommissioned.
Check back on the documentation often: https://doi-usgs.github.io/dataRetrieval/
Peruse the “Additional Articles” - when we find common issues people have with converting their old workflows, we will try to add articles to clarify new workflows.
If you have additional questions, email comptools@usgs.gov.
Any use of trade, firm, or product name is for descriptive purposes only and does not imply endorsement by the U.S. Government.