Last updated on 2026-01-10 21:49:43 CET.
| Flavor | Version | Tinstall | Tcheck | Ttotal | Status | Flags |
|---|---|---|---|---|---|---|
| r-devel-linux-x86_64-debian-clang | 0.3.0 | 107.46 | 83.62 | 191.08 | OK | |
| r-devel-linux-x86_64-debian-gcc | 0.3.0 | 75.05 | 60.92 | 135.97 | OK | |
| r-devel-linux-x86_64-fedora-clang | 0.3.0 | 214.00 | 127.82 | 341.82 | OK | |
| r-devel-linux-x86_64-fedora-gcc | 0.3.0 | 240.00 | 129.28 | 369.28 | ERROR | |
| r-devel-windows-x86_64 | 0.3.0 | 19.00 | 136.00 | 155.00 | OK | |
| r-patched-linux-x86_64 | 0.3.0 | 111.84 | 71.79 | 183.63 | OK | |
| r-release-linux-x86_64 | 0.3.0 | 102.06 | 72.10 | 174.16 | OK | |
| r-release-macos-arm64 | 0.3.0 | 2.00 | 26.00 | 28.00 | OK | |
| r-release-macos-x86_64 | 0.3.0 | 7.00 | 148.00 | 155.00 | OK | |
| r-release-windows-x86_64 | 0.3.0 | 19.00 | 129.00 | 148.00 | OK | |
| r-oldrel-macos-arm64 | 0.3.0 | 2.00 | 30.00 | 32.00 | OK | |
| r-oldrel-macos-x86_64 | 0.3.0 | 7.00 | 124.00 | 131.00 | OK | |
| r-oldrel-windows-x86_64 | 0.3.0 | 21.00 | 141.00 | 162.00 | ERROR | |
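The failing flavors above can often be reproduced locally before resubmitting. A minimal sketch using the `rcmdcheck` package (assuming it is installed; `devtools::check()` is an equivalent alternative):

```r
# Run R CMD check with CRAN settings on the package source directory.
# The path is an assumption; adjust to wherever the tidyprompt source lives.
results <- rcmdcheck::rcmdcheck(
  path = ".",          # package source directory
  args = "--as-cran",  # mimic CRAN's incoming checks
  error_on = "never"   # report rather than stop on ERROR/WARNING
)
print(results)
```

Note that this will not reproduce flavor-specific failures (such as the fedora-gcc session timeout below) unless it is run on a comparable platform.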
Version: 0.3.0
Check: tests
Result: ERROR
Running ‘testthat.R’ [14s/49s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:

```
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview
> # * https://testthat.r-lib.org/articles/special-files.html
>
> library(testthat)
> library(tidyprompt)
>
> test_check("tidyprompt")
trying URL 'https://upload.wikimedia.org/wikipedia/commons/3/3a/Cat03.jpg'
Content type 'image/jpeg' length 279603 bytes (273 KB)
==================================================
downloaded 273 KB
stdout:]
stderr:]
Saving _problems/test-answer_using_r-41.R
--- Sending request to LLM provider (llama3.1:8b): ---
Hi!
--- Receiving response from LLM provider: ---
I'm a fake LLM! This is my default response.
--- Sending request to LLM provider (llama3.1:8b): ---
Hi
--- Receiving response from LLM provider: ---
I'm a fake LLM! This is my default response.
--- Sending request to LLM provider (llama3.1:8b): ---
Hi
How are you?
--- Receiving response from LLM provider: ---
I'm a fake LLM! This is my default response.
--- Sending request to LLM provider (llama3.1:8b): ---
Hi
What is 2 + 2?
You must answer with only an integer (use no other characters).
--- Receiving response from LLM provider: ---
4
--- Sending request to LLM provider (llama3.1:8b): ---
You are given a user's prompt.
To answer the user's prompt, you need to think step by step to arrive at a final answer.
----- START OF USER'S PROMPT -----
What is 2 + 2?
You must answer with only an integer (use no other characters).
----- END OF USER'S PROMPT -----
What are the steps you would take to answer the user's prompt?
Describe your thought process in the following format:
>> step 1: <step 1 description>
>> step 2: <step 2 description>
(etc.)
When you are done, you must type:
FINISH[<put here your final answer to the user's prompt>]
Make sure your final answer follows the logical conclusion of your thought process.
--- Receiving response from LLM provider: ---
>> step 1: Identify the mathematical operation in the prompt,
which is a simple addition problem.
>> step 2: Recall the basic arithmetic fact that 2 + 2 equals a specific
numerical value.
>> step 3: Apply this knowledge to determine the result of the addition problem,
using the known facts about numbers and their operations.
>> step 4: Conclude that based on this mathematical understanding, the
solution to the prompt "What is 2 + 2?" is a fixed numerical quantity.
FINISH[4]
--- Sending request to LLM provider (no model specified): ---
Hello
NA
Hello!
--- Sending request to LLM provider (llama3.1:8b): ---
Hi
--- Receiving response from LLM provider: ---
beepido boop ba
--- Sending request to LLM provider (llama3.1:8b): ---
Hi
--- Receiving response from LLM provider: ---
I'm a fake LLM! This is my default response.
--- Sending request to LLM provider (llama3.1:8b): ---
You are given a user's prompt.
To answer the user's prompt, you need to think step by step to arrive at a final answer.
----- START OF USER'S PROMPT -----
What is 2 + 2?
You must answer with only an integer (use no other characters).
----- END OF USER'S PROMPT -----
What are the steps you would take to answer the user's prompt?
Describe your thought process in the following format:
>> step 1: <step 1 description>
>> step 2: <step 2 description>
(etc.)
When you are done, you must type:
FINISH[<put here your final answer to the user's prompt>]
Make sure your final answer follows the logical conclusion of your thought process.
--- Receiving response from LLM provider: ---
>> step 1: Identify the mathematical operation in the prompt,
which is a simple addition problem.
>> step 2: Recall the basic arithmetic fact that 2 + 2 equals a specific
numerical value.
>> step 3: Apply this knowledge to determine the result of the addition problem,
using the known facts about numbers and their operations.
>> step 4: Conclude that based on this mathematical understanding, the
solution to the prompt "What is 2 + 2?" is a fixed numerical quantity.
FINISH[4]
--- Sending request to LLM provider (llama3.1:8b): ---
hi
--- Receiving response from LLM provider: ---
I'm a fake LLM! This is my default response.
[ FAIL 1 | WARN 0 | SKIP 36 | PASS 187 ]
══ Skipped tests (36) ══════════════════════════════════════════════════════════
• OPENAI_API_KEY env var is not configured (1): 'test-ellmer_streaming.R:43:3'
• OPENAI_API_KEY not set; skipping OpenAI tests (1):
  'test-openai_request.R:14:1'
• OPENAI_API_KEY not set; skipping stream_callback test (1):
  'test-stream_callback.R:14:3'
• On CRAN (2): 'test-add-image.R:27:3', 'test-add-image.R:72:3'
• skip test: Ollama not available (10): 'test-add-image-integration.R:98:3',
  'test-add-image-integration.R:112:3', 'test-answer-as-boolean.R:2:3',
  'test-answer-as-json.R:180:3', 'test-answer_as_integer.R:2:3',
  'test-answer_as_key_value.R:32:3', 'test-answer_as_list.R:27:3',
  'test-answer_using_r.R:2:3', 'test-answer_using_tools.R:88:3',
  'test-quit-if.R:2:3'
• skip test: OpenAI API key not found (21):
  'test-add-image-integration.R:40:3', 'test-add-image-integration.R:54:3',
  'test-add-image-integration.R:67:3', 'test-add-image-integration.R:83:3',
  'test-answer-as-json.R:2:3', 'test-answer-as-json.R:16:3',
  'test-answer-as-json.R:142:3', 'test-answer-as-json.R:218:3',
  'test-answer-as-json.R:301:3', 'test-answer-as-json.R:345:3',
  'test-answer-as-json.R:381:3', 'test-answer-as-json.R:416:3',
  'test-answer-as-json.R:452:3', 'test-answer_using_tools.R:30:3',
  'test-answer_using_tools.R:42:3', 'test-answer_using_tools.R:98:3',
  'test-answer_using_tools.R:113:3', 'test-answer_using_tools.R:128:3',
  'test-answer_using_tools.R:146:3', 'test-answer_using_tools.R:161:3',
  'test-answer_using_tools.R:176:3'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-answer_using_r.R:37:3'): answer_using_r handles complex objects in formatted_output ──
<rlib_error_3_0/rlib_error/error/condition>
Error in `"rs_init(self, private, super, options, wait, wait_timeout)"`: ! Could not start R session, timed out
Backtrace:
    ▆
 1. └─tidyprompt::answer_using_r("test", evaluate_code = TRUE, return_mode = "formatted_output") at test-answer_using_r.R:37:3
 2.   └─callr::r_session$new(options = r_session_options)
 3.     └─callr (local) initialize(...)
 4.       └─callr:::rs_init(self, private, super, options, wait, wait_timeout)
 5.         └─throw(new_error("Could not start R session, timed out"))
[ FAIL 1 | WARN 0 | SKIP 36 | PASS 187 ]
Error:
! Test failures.
Execution halted
```
Flavor: r-devel-linux-x86_64-fedora-gcc
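The single test failure above is a `callr` startup timeout on the check machine, not a defect in the code under test. One way to make such a test resilient is a skip guard that probes whether a background R session can be started at all. A sketch (the helper `skip_if_no_r_session` is hypothetical, not part of tidyprompt):

```r
library(testthat)

# Hypothetical helper for tests/testthat/helper.R: skip tests that need a
# background R session when callr cannot start one, e.g. on a loaded
# check machine where startup exceeds the default wait timeout.
skip_if_no_r_session <- function() {
  skip_if_not_installed("callr")
  session <- tryCatch(
    callr::r_session$new(wait_timeout = 5000),  # 5 s instead of the 3 s default
    error = function(e) NULL
  )
  if (is.null(session)) skip("Could not start a callr R session")
  session$close()
}
```

The failing test could then call `skip_if_no_r_session()` before invoking `answer_using_r(..., evaluate_code = TRUE)`, turning a platform hiccup into a SKIP rather than an ERROR.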
Version: 0.3.0
Check: re-building of vignette outputs
Result: ERROR
Error(s) in re-building vignettes:

```
--- re-building 'creating_prompt_wraps.Rmd' using rmarkdown
--- finished re-building 'creating_prompt_wraps.Rmd'
--- re-building 'getting_started.Rmd' using rmarkdown
File ../man/figures/answer_using_r1-1.png not found in resource path
Error: processing vignette 'getting_started.Rmd' failed with diagnostics:
pandoc document conversion failed with error 99
--- failed re-building 'getting_started.Rmd'
--- re-building 'sentiment_analysis.Rmd' using rmarkdown
File ../man/figures/plot_sentiment_analysis-1.png not found in resource path
Error: processing vignette 'sentiment_analysis.Rmd' failed with diagnostics:
pandoc document conversion failed with error 99
--- failed re-building 'sentiment_analysis.Rmd'
--- re-building 'streaming_shiny_ipc.Rmd' using rmarkdown
--- finished re-building 'streaming_shiny_ipc.Rmd'
SUMMARY: processing the following files failed:
  'getting_started.Rmd' 'sentiment_analysis.Rmd'
Error: Vignette re-building failed.
Execution halted
```
Flavor: r-oldrel-windows-x86_64
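Both vignette failures stem from images referenced via `../man/figures/`, a path outside pandoc's resource path when vignettes are rebuilt during checking. One defensive sketch is to include such pre-rendered figures only when the file is actually present, so the vignette still builds if it is not (chunk body as it might appear in `getting_started.Rmd`; the path is taken from the log above):

```r
# In a vignette code chunk: only include the pre-rendered figure when it
# exists, so pandoc never sees a dangling image reference.
fig <- "../man/figures/answer_using_r1-1.png"
if (file.exists(fig)) {
  knitr::include_graphics(fig)
}
```

Alternatively, copying the figures into the `vignettes/` directory (so they ship with the vignette sources) avoids the relative path into `man/` entirely.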