chattr
is a package that enables interaction with large language models (LLMs), such as GitHub Copilot Chat and OpenAI's GPT-3.5 and GPT-4. The main vehicle is a Shiny app that runs within the RStudio IDE. Here's an example of what it looks like running inside the Viewer pane:
Figure 1: The chattr Shiny app
Although this article highlights chattr's integration with the RStudio IDE, it is worth mentioning that it works outside of RStudio as well, for example in the terminal.
Getting started
To get started, install the package from CRAN and then call the Shiny app using the chattr_app()
function:
# Install from CRAN
install.packages("chattr")
# Run the app
chattr::chattr_app()
#> ── chattr - Available models
#> Select the number of the model you would like to use:
#>
#> 1: GitHub - Copilot Chat - (copilot)
#>
#> 2: OpenAI - Chat Completions - gpt-3.5-turbo (gpt35)
#>
#> 3: OpenAI - Chat Completions - gpt-4 (gpt4)
#>
#> 4: LlamaGPT - ~/ggml-gpt4all-j-v1.3-groovy.bin (llamagpt)
#>
#>
#> Selection:
After selecting the model you want to interact with, the app will open. The following screenshot provides an overview of the different buttons and keyboard shortcuts you can use with the app:
Figure 2: The chattr user interface
You can start typing your requests in the main text box at the top left of the app. Then submit your question by clicking the ‘Submit’ button or by pressing Shift+Enter.
chattr
parses the output of the LLM, and displays the code inside chunks. It also places three buttons at the top of each chunk: one copies the code to the clipboard, another copies it directly to your active script in RStudio, and a third copies the code to a new script. To close the app, press the Escape key.
Pressing the ‘Settings’ button will open the defaults used by the chat session. These can be changed as you see fit. The ‘Ask’ text box is the additional text that is sent to the LLM as part of your question.
Figure 3: The chattr UI – Settings page
Custom settings
chattr
will try to identify which models you have set up, and will include only those in the selection menu. For Copilot and OpenAI, chattr confirms that an authentication token is available before displaying them in the menu. For example, if you only have OpenAI configured, the prompt will look like this:
chattr::chattr_app()
#> ── chattr - Available models
#> Select the number of the model you would like to use:
#>
#> 2: OpenAI - Chat Completions - gpt-3.5-turbo (gpt35)
#>
#> 3: OpenAI - Chat Completions - gpt-4 (gpt4)
#>
#> Selection:
If you want to bypass the menu, use the chattr_use()
function. Below is an example of how to set GPT-4 as the default:
library(chattr)
chattr_use("gpt4")
chattr_app()
You can also select a model by setting the CHATTR_USE
environment variable.
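For instance, a minimal sketch of selecting GPT-4 via the environment variable (assuming your OpenAI token is already configured):
# Set before chattr loads, e.g. in your .Renviron file
# or at the top of your R session
Sys.setenv(CHATTR_USE = "gpt4")
library(chattr)
chattr_app()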
Advanced customization
Many aspects of your interaction with the LLM can be customized. To do this, use the chattr_defaults()
function. This function displays and sets the additional prompt sent to the LLM, the model to be used, whether the chat history is to be sent to the LLM, and any model-specific arguments.
For example, you may want to change the maximum number of tokens used per response; for OpenAI you can use this:
# Default for max_tokens is 1,000
library(chattr)
chattr_use("gpt4")
chattr_defaults(model_arguments = list("max_tokens" = 100))
#>
#> ── chattr ──────────────────────────────────────────────────────────────────────
#>
#> ── Defaults for: Default ──
#>
#> ── Prompt:
#> • {{readLines(system.file('prompt/base.txt', package = 'chattr'))}}
#>
#> ── Model
#> • Provider: OpenAI - Chat Completions
#> • Path/URL: https://api.openai.com/v1/chat/completions
#> • Model: gpt-4
#> • Label: GPT 4 (OpenAI)
#>
#> ── Model Arguments:
#> • max_tokens: 100
#> • temperature: 0.01
#> • stream: TRUE
#>
#> ── Context:
#> Max Data Files: 0
#> Max Data Frames: 0
#> ✔ Chat History
#> ✖ Document contents
If you want to keep changes to the default values, use the chattr_defaults_save()
function. This will create a YAML file, named 'chattr.yml' by default. If chattr finds this file, it will use it to load all of the defaults, including the selected model.
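For example, a minimal sketch of persisting the customized defaults from the session above:
# Writes the current defaults to a 'chattr.yml' file in the
# working directory, which chattr will pick up on the next run
chattr_defaults_save()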
A more extensive description of this feature is available on the chattr official website, under 'Modify prompt enhancements'.
Beyond the app
In addition to the Shiny app, chattr
offers a couple of other ways to interact with the LLM:
- Use the chattr() function
- Highlight a question in your script, and use it as the prompt
> chattr("how do I remove the legend from a ggplot?")
#> You can remove the legend from a ggplot by adding
#> `theme(legend.position = "none")` to your ggplot code.
A more detailed article is available on the chattr website, here.
RStudio add-ins
chattr
comes with two RStudio add-ins:
Figure 4: The chattr add-ins
You can bind these add-in calls to keyboard shortcuts, making it easy to open the app without having to type the command each time. To learn how to do this, see the Keyboard Shortcut section on the chattr official website.
Work with local LLMs
Today, capable open-source models that can run on your laptop are widely available. Instead of integrating with each model individually, chattr works with LlamaGPTJ-chat, a lightweight application that communicates with a variety of local models. At this time, LlamaGPTJ-chat integrates with the following model families:
- GPT-J (ggml and gpt4all models)
- LLaMA (ggml Vicuna models from Meta)
- Mosaic Pretrained Transformers (MPT)
LlamaGPTJ-chat works directly from the terminal. chattr
integrates with it by starting a "hidden" terminal session, in which it initializes the selected model and makes it available for chatting.
To get started, you need to install LlamaGPTJ-chat and download a compatible model. More detailed instructions can be found here.
chattr
expects the LlamaGPTJ-chat program, and the installed model, to be in specific folder locations on your machine. If the installation paths do not match the locations expected by chattr, then LlamaGPT will not show up in the menu. That is OK; you can still access it with chattr_use():
library(chattr)
chattr_use(
"llamagpt",
path = "(path to compiled program)",
model = "(path to model)"
)
#>
#> ── chattr
#> • Provider: LlamaGPT
#> • Path/URL: (path to compiled program)
#> • Model: (path to model)
#> • Label: GPT4ALL 1.3 (LlamaGPT)
Extending chattr
chattr aims to make it easy to add new LLM APIs. chattr has two components: the user interface (the Shiny app and the chattr() function), and the included back-ends (GPT, Copilot, LlamaGPT). New back-ends do not need to be added directly to chattr. If you are a package developer and would like to take advantage of the chattr UI, all you need to do is define a ch_submit() method in your package.
The two output requirements for ch_submit() are:
- As the final return value, send the full response from the model you are integrating into chattr.
- If streaming (stream is TRUE), output the current result as it occurs, generally through a cat() function call.
Here is a simple toy example showing how to create a custom method for chattr:
library(chattr)

ch_submit.ch_my_llm <- function(defaults,
                                prompt = NULL,
                                stream = NULL,
                                prompt_build = TRUE,
                                preview = FALSE,
                                ...) {
  # Use `prompt_build` to prepend the prompt
  if (prompt_build) prompt <- paste0("Use the tidyverse\n", prompt)
  # If `preview` is TRUE, return the resulting prompt back
  if (preview) return(prompt)
  llm_response <- paste0("You said this: \n", prompt)
  if (stream) {
    cat(">> Streaming:\n")
    for (i in seq_len(nchar(llm_response))) {
      # If `stream` is TRUE, make sure to `cat()` the current output
      cat(substr(llm_response, i, i))
      Sys.sleep(0.1)
    }
  }
  # Make sure to return the entire output from the LLM at the end
  llm_response
}
chattr_defaults("console", provider = "my llm")
#>
chattr("hello")
#> >> Streaming:
#> You said this:
#> Use the tidyverse
#> hello
chattr("I can use it right from RStudio", prompt_build = FALSE)
#> >> Streaming:
#> You said this:
#> I can use it right from RStudio
For more details, visit the feature's reference page, linked here.
Thoughts and comments welcome
After trying it out, feel free to post your thoughts or issues in chattr's GitHub repository.