OpenAI::API::Request::Chat(3) User Contributed Perl Documentation OpenAI::API::Request::Chat(3)

NAME
    OpenAI::API::Request::Chat - Request class for OpenAI API chat-based completion

SYNOPSIS

    use OpenAI::API::Request::Chat;
    my $chat = OpenAI::API::Request::Chat->new(
        messages => [
            { role => 'system', content => 'You are a helpful assistant.' },
            { role => 'user', content => 'Who won the world series in 2020?' },
        ],
    );
    my $res     = $chat->send();                  # or: my $res = $chat->send(%args);
    my $message = $res->{choices}[0]{message};    # or: my $message = "$res";
    # continue the conversation...
    # $res = $chat->send_message('What is the capital of France?');

DESCRIPTION
    This module provides a request class for interacting with the OpenAI API's chat-based completion endpoint. It inherits from OpenAI::API::Request.
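    The response returned by send() can be treated as the raw API hash, and it also overloads stringification, as the SYNOPSIS shows. A minimal sketch of extracting the reply text (the "choices" indexing is taken from the SYNOPSIS; the "content" key follows the OpenAI chat message format):

        my $res     = $chat->send();
        my $message = $res->{choices}[0]{message};    # { role => ..., content => ... }
        my $text    = $message->{content};            # the assistant's reply text
        print "$res\n";                               # or use the overloaded stringification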

ATTRIBUTES
    model
        ID of the model to use.

        See Models overview <https://platform.openai.com/docs/models/overview> for a reference of available models.

    messages
        The messages to generate chat completions for, in the chat format <https://platform.openai.com/docs/guides/chat/introduction>.

    max_tokens
        The maximum number of tokens to generate.

        Most models have a context length of 2048 tokens (except for the newest models, which support 4096).

    temperature
        What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

    top_p
        An alternative to sampling with temperature, called nucleus sampling.

        We generally recommend altering this or "temperature" but not both.

    n
        How many completions to generate for each prompt.

        Use carefully and ensure that you have reasonable settings for "max_tokens" and "stop".

    stop
        Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

    frequency_penalty
        Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far.

    presence_penalty
        Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far.

    user
        A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
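    The attributes above correspond to fields of the JSON request body sent to the chat completions endpoint. A standalone sketch of that payload, built with the core JSON::PP module (the model name and parameter values here are illustrative assumptions, not defaults of this module):

        use strict;
        use warnings;
        use JSON::PP;

        # Illustrative request body; field names mirror the attributes above.
        my $payload = {
            model       => 'gpt-3.5-turbo',
            temperature => 0.2,
            max_tokens  => 100,
            messages    => [
                { role => 'system', content => 'You are a helpful assistant.' },
                { role => 'user',   content => 'Who won the world series in 2020?' },
            ],
        };

        print JSON::PP->new->canonical->pretty->encode($payload);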

METHODS
    add_message($role, $content)
        Adds a new message with the given role and content to the "messages" attribute.

    send_message($content)
        Adds a user message with the given content, sends the request, and returns the response. The assistant's reply is also appended to the "messages" attribute.
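    A sketch of a multi-turn exchange built on send_message() (assumes a configured API key in the environment; the question strings are illustrative):

        use OpenAI::API::Request::Chat;

        my $chat = OpenAI::API::Request::Chat->new(
            messages => [
                { role => 'system', content => 'You are a helpful assistant.' },
            ],
        );

        # send_message() appends the user message, sends the request,
        # and records the assistant's reply in "messages":
        my $res = $chat->send_message('What is the capital of France?');
        print "$res\n";

        # Because the reply was appended, a follow-up keeps context:
        $res = $chat->send_message('And what is its population?');
        print "$res\n";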

INHERITED METHODS
    This module inherits the following methods from OpenAI::API::Request:

    send()

    send_async()

SEE ALSO
    OpenAI::API::Request, OpenAI::API::Config

2023-04-09 perl v5.40.2
