API Reference
types
type ChatConfig
```ts
interface ChatConfig {
  model_name: string;
  endpoint: string;
  api_key: string;
  api_version?: string;
  max_tokens?: number;
}
```
type ChatOptions
```ts
interface ChatOptions {
  temperature?: number;
  presence_penalty?: number;
  frequency_penalty?: number;
  stop?: string[];
  top_p?: number;
  response_format?: any;
  max_tokens?: number;
  quiet?: boolean;
}
```
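For orientation, a filled-in config and options pair might look like the following (a sketch only; every concrete value here is a placeholder, not a library default):

```ts
// Sketch: placeholder values for a ChatConfig / ChatOptions pair.
// The model name, endpoint, and key below are assumptions for illustration.
const config = {
  model_name: "gpt-4o",
  endpoint: "https://api.openai.com/v1/chat/completions",
  api_key: "YOUR_API_KEY",
  max_tokens: 2048,
};

const options = {
  temperature: 0.7,
  top_p: 0.9,
};
```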
Ling extends EventEmitter
```ts
constructor(private config: ChatConfig, private options: ChatOptions = {}) {
  super();
  this.tube = new Tube();
}
```
createBot
```ts
createBot(root: string | null = null, config: Partial<ChatConfig> = {}, options: Partial<ChatOptions> = {}) {
  const bot = new Bot(this.tube, { ...this.config, ...config }, { ...this.options, ...options });
  bot.setJSONRoot(root);
  bot.setCustomParams(this.customParams);
  this.bots.push(bot);
  return bot;
}
```
Creates a Bot object using the given config and options; `root` indicates the default root URI path for the Bot's output JSON content.
setCustomParams
```ts
setCustomParams(params: Record<string, string>) {
  this.customParams = { ...params };
}
```
Adds default variables to all Bot objects created by this Ling instance; they can be referenced when rendering prompt templates. Prompt templates are parsed with Nunjucks by default.
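To illustrate how these variables reach a template, here is a minimal stand-in for the `{{ variable }}` substitution that Nunjucks performs (a simplified sketch, not the actual Nunjucks engine, which also supports filters, conditionals, and loops):

```ts
// Simplified stand-in for Nunjucks {{ var }} substitution, for illustration only.
function renderTemplate(tpl: string, params: Record<string, string>): string {
  return tpl.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, name) => params[name] ?? "");
}

// Variables set via setCustomParams are merged into every bot's template data:
const customParams = { lang: "English" };
const prompt = renderTemplate("Answer in {{ lang }}.", customParams);
// prompt === "Answer in English."
```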
setSSE
```ts
setSSE(sse: boolean) {
  this.tube.setSSE(sse);
}
```
Enable or disable SSE (Server-Sent Events) mode.
TIP
Traditionally, a web page has to send a request to the server to receive new data; that is, the page requests data from the server. With server-sent events, it's possible for a server to send new data to a web page at any time, by pushing messages to the web page. These incoming messages can be treated as Events + data inside the web page.
See more about [SSE](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events).
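In SSE mode the stream carries `text/event-stream` framing. The following sketch shows how one event might be serialized on the wire (illustrative only, not Ling's actual implementation):

```ts
// Illustrative SSE framing: "id" and "data" fields, terminated by a blank line.
function formatSSE(id: string, data: unknown): string {
  return `id: ${id}\nevent: message\ndata: ${JSON.stringify(data)}\n\n`;
}

const frame = formatSSE("t2ke48g1m3:293", { uri: "related_question/2", delta: "s" });
// frame contains:
// id: t2ke48g1m3:293
// event: message
// data: {"uri":"related_question/2","delta":"s"}
```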
sendEvent
```ts
sendEvent(event: any) {
  this.tube.enqueue(event);
}
```
Send a custom event to the client through the data stream.
async close
```ts
async close() {
  while (!this.isAllBotsFinished()) {
    await sleep(100);
  }
  this.tube.close();
  this.bots = [];
}
```
Close the data stream when the workflow ends.
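The waiting step above can be sketched as a small polling helper (names here are illustrative, not the library's internals; `close()` runs this shape with `tube.close()`, `cancel()` with `tube.cancel()`):

```ts
// Resolve after ms milliseconds.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Poll until the predicate is true, then run the final action —
// the same wait-then-finish shape close() and cancel() use.
async function waitThen(finished: () => boolean, action: () => void): Promise<void> {
  while (!finished()) {
    await sleep(100);
  }
  action();
}
```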
async cancel
```ts
async cancel() {
  while (!this.isAllBotsFinished()) {
    await sleep(100);
  }
  this.tube.cancel();
  this.bots = [];
}
```
Cancel the stream when an exception occurs.
prop stream
```ts
get stream() {
  return this.tube.stream;
}
```
The Readable Stream object created by Ling.
prop closed
```ts
get closed() {
  return this.tube.closed;
}
```
Whether the workflow has been closed.
prop canceled
```ts
get canceled() {
  return this.tube.canceled;
}
```
Whether the workflow has been canceled.
event message
The message sent to the client, with a unique event id:
```json
{
  "id": "t2ke48g1m3:293",
  "data": { "uri": "related_question/2", "delta": "s" }
}
```
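On the client, each message's `uri` addresses a location in the JSON being built, and `delta` is the next fragment for that field. The following is a simplified sketch of applying such a delta (a real front end might use the jsonuri package instead):

```ts
// Append a string delta at a "path/like/this" location inside a JSON object.
// Simplified sketch of what a client can do with Ling's message events.
function applyDelta(root: any, uri: string, delta: string): void {
  const keys = uri.split("/");
  const last = keys.pop()!;
  let node = root;
  for (const key of keys) {
    node = node[key] ?? (node[key] = {});
  }
  node[last] = (node[last] ?? "") + delta;
}

const doc: any = { related_question: ["What", "Why", ""] };
applyDelta(doc, "related_question/2", "s");
// doc.related_question[2] is now "s"
```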
Bot extends EventEmitter
addPrompt
```ts
addPrompt(promptTpl: string, promptData: Record<string, string> = {}) {
  const promptText = nunjucks.renderString(promptTpl, {
    chatConfig: this.config,
    chatOptions: this.options,
    ...this.customParams,
    ...promptData,
  });
  this.prompts.push({ role: "system", content: promptText });
}
```
Set the prompt for the current Bot, supporting Nunjucks templates.
addHistory
```ts
addHistory(messages: ChatCompletionMessageParam[]) {
  this.history.push(...messages);
}
```
Add chat history records.
async chat
```ts
async chat(message: string) {
  this.chatState = ChatState.CHATTING;
  const messages = [...this.prompts, ...this.history, { role: "user", content: message }];
  return getChatCompletions(this.tube, messages, this.config, this.options,
    (content) => { // on complete
      this.chatState = ChatState.FINISHED;
      this.emit('response', content);
    }, (content) => { // on string response
      this.emit('string-response', content);
    }, ({ id, data }) => {
      this.emit('message', { id, data });
    }).then((content) => {
      this.emit('inference-done', content);
    });
}
```
event string-response
This event is triggered when a string field in the JSON output by the AI has been completed, returning a jsonuri object.
event inference-done
This event is triggered when the AI has completed its current inference, returning the complete output content. At this point, streaming output may not have ended, and data continues to be sent to the front end.
event response
This event is triggered when all data generated by the AI during this session has been sent to the front end.
INFO
Typically, the `string-response` event occurs before `inference-done`, which in turn occurs before `response`.
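This ordering can be demonstrated with a plain EventEmitter (a standalone sketch of the event sequence, not the library's internals):

```ts
import { EventEmitter } from "node:events";

// Record the order in which a Bot-like emitter fires its events.
const bot = new EventEmitter();
const order: string[] = [];

bot.on("string-response", () => order.push("string-response"));
bot.on("inference-done", () => order.push("inference-done"));
bot.on("response", () => order.push("response"));

// Simulated lifecycle: string fields complete first, then inference finishes,
// then the final flush to the front end completes.
bot.emit("string-response", { uri: "answer", delta: "done" });
bot.emit("inference-done", "{...}");
bot.emit("response", "{...}");
// order is ["string-response", "inference-done", "response"]
```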