Question
My Shiny app reads content from a local file on my desktop every 5 minutes, because the file's content gets updated every 5 minutes as well. The app reads in the new content, appends the data to the existing dataframe, and plots the new content every 5 minutes.
Question: Ultimately, I would like to host this online. If I host this on shinyapps.io, would I still be able to read the local file on my desktop that is updated every 5 minutes? If not, what can I do?
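For reference, here is a minimal sketch of my current local setup, assuming the file is a CSV at the hypothetical path "data.csv" (it re-reads the whole file rather than appending, for simplicity):
library(shiny)

ui <- fluidPage(
  plotOutput("live_plot")
)

server <- function(input, output, session) {
  # Check the local file every 5 minutes and re-read it when it changes;
  # "data.csv" is a hypothetical path standing in for the real file
  df <- reactiveFileReader(
    intervalMillis = 5 * 60 * 1000,
    session = session,
    filePath = "data.csv",
    readFunc = read.csv
  )

  output$live_plot <- renderPlot({
    plot(df())
  })
}

shinyApp(ui = ui, server = server)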
Answer 1:
I don't have extensive experience with Shiny deployments on shinyapps.io, but I'll try to keep this as general as possible. The main limitation is that you can't schedule a CRON job on shinyapps.io to grab data from your machine. Hence I would consider the following:
- Push your data to a storage provider (Dropbox will be used as an example) every 5 minutes using a CRON job
- Grab the data in your Shiny dashboard.
Below you can find a couple of examples built around Dropbox, but you can apply pretty much the same concepts to Google Drive, AWS, and GCP (although you'll have to fiddle with passing secrets or encrypting your auth tokens).
Dropbox example
rdrop2 offers an easy-to-use wrapper around the Dropbox API. Below you can find a simple example of how to push and retrieve a text file from an account (taken from the rdrop2 readme).
library(rdrop2)
# Authenticate and save token for later use
token <- drop_auth()
saveRDS(token, "~/dropbox_token.rds")
# Create a folder
drop_create('upload_test')
# You can also create a public folder if data is not sensitive
# drop_create('public/upload_test')
# Upload the file in the freshly created folder
drop_upload("~/mtcars.csv", path = "upload_test")
## Retrieving your file is as simple as
drop_download("upload_test/mtcars.csv", local_path = "~/new_file.csv")
Implementing it in Shiny
The cleanest way to apply the example above in Shiny would be to place the data acquisition in a global.R file that is sourced into your Shiny application before it runs.
global.R:
library(rdrop2)
# Authenticate using the token saved earlier
token <- drop_auth(rdstoken = "dropbox_token.rds")
# Retrieving your file is as simple as
drop_download("upload_test/mtcars.csv", local_path = "data.csv",
overwrite = TRUE)
drop_df <- read.csv("data.csv", sep = ",")
print("Downloaded and imported data!")
Your app.R file will look something like this:
library(shiny)
source("global.R")
ui <- fluidPage(
  # Application title
  titlePanel("Pulling data from Dropbox"),
  mainPanel(
    tableOutput("df_output")
  )
)

server <- function(input, output) {
  output$df_output <- renderTable({
    drop_df
  })
}
shinyApp(ui = ui, server = server)
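One caveat: global.R runs once when the app starts, so the deployed app shows a snapshot of the data. If the dashboard itself should refresh every 5 minutes, one option is to move the download into the server with reactivePoll; a minimal sketch, reusing the token and Dropbox paths from above:
library(shiny)
library(rdrop2)

token <- drop_auth(rdstoken = "dropbox_token.rds")

ui <- fluidPage(
  titlePanel("Pulling data from Dropbox"),
  mainPanel(tableOutput("df_output"))
)

server <- function(input, output, session) {
  # Re-download the Dropbox file every 5 minutes
  drop_df <- reactivePoll(
    intervalMillis = 5 * 60 * 1000,
    session = session,
    checkFunc = function() Sys.time(),  # always "changed", so valueFunc runs each interval
    valueFunc = function() {
      drop_download("upload_test/mtcars.csv", local_path = "data.csv",
                    overwrite = TRUE, dtoken = token)
      read.csv("data.csv")
    }
  )

  output$df_output <- renderTable({
    drop_df()
  })
}

shinyApp(ui = ui, server = server)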
Deploy to shinyapps
You can then deploy your app as usual (including the auth token).
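With rsconnect that might look something like this; a sketch assuming the app directory holds app.R, global.R, and the saved token, with a hypothetical app name:
library(rsconnect)

# Deploy the app, bundling the saved Dropbox token with it
deployApp(appFiles = c("app.R", "global.R", "dropbox_token.rds"),
          appName = "dropbox-dashboard")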
Scheduling data upload
Since your data gets refreshed every 5 minutes on your local machine, you'll need an upload schedule with the same cadence. Here I'll be using the cronR package, but using crontab on Linux will work just fine.
library(cronR)
# Build the shell command that runs the upload script via Rscript
cmd <- cron_rscript("data_upload.R")
cron_add(command = cmd, frequency = "*/5 * * * *",
         description = "Push data to Dropbox")
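The data_upload.R script referenced above isn't shown; a minimal sketch, assuming the local file lives at the hypothetical path ~/mtcars.csv:
library(rdrop2)

# Authenticate with the previously saved token and push the latest file
token <- drop_auth(rdstoken = "~/dropbox_token.rds")
drop_upload("~/mtcars.csv", path = "upload_test", dtoken = token)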
plumber API
As @Chris mentioned, calling an API might be an option, especially if the data will be needed outside of R scripts and Shiny dashboards. Below you can find a short endpoint one could call to retrieve the data in csv format. Shinyapps.io doesn't support hosting plumber APIs, hence you'd have to host one on your favorite cloud provider.
library(plumber)
library(rdrop2)
#* @apiTitle Plumber Example API
#* Echo dropbox .csv file
#* @get /get-data
function(req, res) {
  auth_token <- drop_auth(rdstoken = "token.rds")
  drop_download('upload_test/mtcars.csv', dtoken = auth_token,
                local_path = "mtcars.csv", overwrite = TRUE)
  include_file("mtcars.csv", res, 'text/csv')
}
Building and starting the service with:
r <- plumb("plumber.R")
r$run()
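Once the service is running, any HTTP client can pull the data. For instance, from R, assuming the API was started on port 8000 (e.g. r$run(port = 8000)):
# Read the csv returned by the /get-data endpoint
df <- read.csv("http://localhost:8000/get-data")
head(df)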
Source: https://stackoverflow.com/questions/60900297/shinyapp-io-to-read-a-local-file-that-update-its-content-every-5-minutes