
REST API in R using RestRserve & Plumber

Following the last post on how to make an API in Python, we will build an API in R using two methods: RestRserve and Plumber.

The model of the API

We have built a machine learning model on the iris dataset that predicts the species. The code for the model is the following:

library(datasets)
library(caret)

ir_data <- iris
levels(ir_data$Species)   # check the target classes

# train an LDA classifier on all four features
fit.lda <- train(Species ~ ., data = ir_data, method = "lda")

# save the fitted model to disk
saveRDS(fit.lda, "./final_model.rds")

Now that we have created and saved the model, we can make an API.
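As a quick sanity check before wiring the model into an API, we can reload it and score a single row. This is a minimal sketch, assuming the model was saved in the current working directory:

library(caret)

# reload the saved model and score one row as a smoke test
model <- readRDS("./final_model.rds")
predict(model, iris[1, 1:4])
# expected: the predicted species for the first iris row (setosa)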

RestRserve

The first method uses RestRserve. Basically, we have to create a function that takes the request and returns the response. In this example, request$body is a JSON payload containing “Sepal.Length”, “Sepal.Width”, “Petal.Length” and “Petal.Width”. Using the jsonlite library, we parse these values and build a data frame that we feed to our model to get the prediction. Then, we set response$body with the result.

We only have to set up the RestRserve application, which means adding the endpoint “/iris_predict” and mapping it to our function. The full code of the API is the following:

library(RestRserve)
library(caret)
library(jsonlite)

# load the trained model
super_model <- readRDS("./final_model.rds")

iris_predict <- function(request, response) {
  # parse the JSON request body
  x <- fromJSON(rawToChar(request$body))
  test <- data.frame(
    "Sepal.Length" = x[["Sepal.Length"]],
    "Sepal.Width"  = x[["Sepal.Width"]],
    "Petal.Length" = x[["Petal.Length"]],
    "Petal.Width"  = x[["Petal.Width"]]
  )
  result <- predict(super_model, test)
  # return the predicted species as a JSON string
  response$body <- toJSON(as.character(result), auto_unbox = TRUE)
  response$content_type <- "application/json"
}

app <- RestRserve::Application$new()
app$add_post(path = "/iris_predict", FUN = iris_predict)

backend <- BackendRserve$new()
backend$start(app, http_port = 8080)

Running the above locally, the API will be served at the URL http://0.0.0.0:8080/iris_predict. As an example, we can set the request body to the following JSON:

{
    "Sepal.Length": 5,
    "Sepal.Width": 3,
    "Petal.Length": 6,
    "Petal.Width": 1
}

We get the following result:

"virginica"

Plumber

The second method uses Plumber. Here we declare the parameters in comments starting with @param and set the endpoint with @post for a POST method. Then we write a function that takes these parameters as input and continue as before. Save the following as plumber.R:

library(caret)

# load the trained model
super_model <- readRDS("./final_model.rds")

#' @param Sepal.Length
#' @param Sepal.Width
#' @param Petal.Length
#' @param Petal.Width
#' @post /iris_predict
function(Sepal.Length, Sepal.Width, Petal.Length, Petal.Width){
  # parameters may arrive as strings, so convert them to numeric
  test <- data.frame(
    "Sepal.Length" = as.numeric(Sepal.Length),
    "Sepal.Width"  = as.numeric(Sepal.Width),
    "Petal.Length" = as.numeric(Petal.Length),
    "Petal.Width"  = as.numeric(Petal.Width)
  )
  predict(super_model, test)
}

Now, to launch this API we run the following code in R:

library(plumber)
r <- plumb("plumber.R")
r$run(port=8080, host="0.0.0.0", swagger=TRUE)

Here, plumber.R is the file containing our code, and for the port you can use any free port you want.

If you run the code above, the output should be the following:

Starting server to listen on port 8080
Running the swagger UI at http://127.0.0.1:8080/__swagger__/

That means that we are live and we can use our model!
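As with the RestRserve version, we can call the Plumber endpoint from R. This is a minimal sketch, again assuming the API is running locally on port 8080 and the httr package is installed:

library(httr)

# POST the four measurements as a JSON body; Plumber maps them to the function arguments
res <- POST(
  "http://localhost:8080/iris_predict",
  body = list(
    Sepal.Length = 5,
    Sepal.Width  = 3,
    Petal.Length = 6,
    Petal.Width  = 1
  ),
  encode = "json"
)

content(res)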
