
How To Request And Receive A Written Response From ChatGPT In Storyline

The development process for your AI-enhanced eLearning experience using the eLearning Magic Toolkit all takes place within Articulate Storyline 360. Once you have published your project to your WordPress website and made it available for users to access (for example via an iframe or lightbox on a page or post), no further setup work is required.

To make a Create Chat Completion request (a prompt to ChatGPT) from within your Storyline activity, add an Execute JavaScript trigger. The trigger can fire whenever you wish, such as at the start of a timeline, on a slide layer, or when the user clicks a button or performs another action on the slide:

Clicking on the red coloured ‘JavaScript’ link within the trigger panel will open the JavaScript Code Editor window. Here you can type or paste the JavaScript code that will send the prompt request to the OpenAI API and store the returned response in a Storyline text variable, which you can then make use of within your project.

Let’s start with the JavaScript code snippet required to make a simple one-question request for the AI to answer (visit the All JavaScript And Shortcode article page for all code snippet examples you can take advantage of within Articulate Storyline):

var player = GetPlayer();
var prompt_text = player.GetVar("StorylineTextVariable");

var messages = [];

var system_message = { role: "system", content: "You are a helpful and knowledgeable AI chatbot." };
var user_prompt00 = { role: "user", content: "Please answer the following question: " + prompt_text };

messages.push(system_message);
messages.push(user_prompt00);

var sendData = {
    'nonce': parent.storylineMagicNonce,
    'value': JSON.stringify( messages ),
    'api': 'chatCompletion',
    'jsonresponse': 'false',
};

sendData = JSON.stringify( sendData );

const myHeaders = new Headers();

myHeaders.append("Content-Type", "application/json");
myHeaders.append("X-WP-Nonce", parent.storylineMagicRestNonce);

function openai_req() {
    fetch(parent.storylineMagicEndpoint, {
        method: 'POST',
        headers: myHeaders,
        body: sendData
    })
    .then(res => res.json())
    .then(data => {
        if ( Object.prototype.hasOwnProperty.call(data.data, 'data') ) {
            var gpt_content = data.data.data.choices[0].message.content;
            gpt_content = gpt_content.trim();
            player.SetVar("StorylineAnswerTextVariable", gpt_content);
        } else {
            // The endpoint replied, but without a completion payload
            console.error('Error fetching data:', data);
        }
    })
    .catch(error => {
        console.error('Error fetching data:', error);
    });
}

openai_req();

The code above provides everything the eLearning Magic Toolkit needs to take a prompt generated by the user (e.g. a text entry field converted into a project variable, in this case called StorylineTextVariable), send it to the ChatGPT API, and return a response on the same Storyline slide.

GPT-4 model update (Nov 23): Please be aware that access to the latest GPT-4 model APIs is contingent on you having spent at least $1 on your OpenAI Platform account. If you have just created your account, you should accrue some credit spend using the GPT-3.5 models before attempting to switch to GPT-4 – https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4

If you are using the latest GPT-4 model for your request, you also have the option of having the response data always returned in JSON format. This can be useful if you intend to format the response with further JavaScript, i.e. to break it up and place it into Articulate Storyline variables. Simply change the ‘jsonresponse’: ‘false’ value to ‘true’ in this case.
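To illustrate, here is a minimal sketch of how a JSON-formatted reply could be broken up into separate Storyline variables. This assumes you have set ‘jsonresponse’ to ‘true’ and prompted the model to reply with an object such as {"question": "...", "answer": "..."} – the key names and the StorylineQuestionVariable variable are hypothetical examples, not part of the toolkit itself:

```javascript
// Hypothetical helper: split a JSON-formatted model reply into
// separate Storyline variables. The keys ("question", "answer") are
// whatever structure YOUR prompt asked the model to return.
function storeJsonReply(gpt_content, setVar) {
    var parsed = JSON.parse(gpt_content);
    if (typeof parsed.question === "string") {
        // Example variable name -- create it in your own project first
        setVar("StorylineQuestionVariable", parsed.question.trim());
    }
    if (typeof parsed.answer === "string") {
        setVar("StorylineAnswerTextVariable", parsed.answer.trim());
    }
}

// Inside the .then() handler of the main snippet you would call:
// storeJsonReply(gpt_content, player.SetVar.bind(player));
```

Passing player.SetVar in as a callback keeps the parsing logic testable outside of Storyline, since GetPlayer() only exists in the published output.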

Once the prompt has returned, it will be stored to another project variable, which in this case would be called StorylineAnswerTextVariable (but you can rename this to anything you wish).

You can see that the System message and the User message content are both very straightforward and do not provide much in the way of prompt engineering to tailor the type or format of answer we would like to receive from the AI.

If that is something you wish to do, feel free to customise the System message, the User message, or both, to discover what works best for your situation! What the user has typed (contained within the prompt_text variable) can be processed by the AI in any way you wish. In the prompt above we simply ask for the question to be answered, but you could ask the AI to do something else with the user's input, such as creating a poem on a subject the user submits, providing a classroom plan for a topic the user submits, or even creating a pilot for a TV show based on a theme the user submits!
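As one possible sketch, the poem idea above only requires swapping out the two messages from the main snippet. The user_topic variable here stands in for player.GetVar("StorylineTextVariable"), and the exact wording of both messages is just an example to adapt:

```javascript
// Example prompt engineering: ask for a poem instead of an answer.
// 'user_topic' stands in for player.GetVar("StorylineTextVariable").
var user_topic = "autumn leaves";

var messages = [];

// A more tailored system message constrains the tone and format of the reply.
var system_message = {
    role: "system",
    content: "You are a poet. Reply with a single four-line rhyming poem and nothing else."
};

// The user message wraps whatever the learner typed into the interaction.
var user_prompt00 = {
    role: "user",
    content: "Please write a short poem about the following subject: " + user_topic
};

messages.push(system_message);
messages.push(user_prompt00);

// 'messages' is then stringified into sendData.value exactly as in the main snippet.
```

Everything after building the messages array (sendData, headers, fetch) stays the same as in the full snippet above.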

The possibilities are evidently boundless, and the eLearning Magic Toolkit provides the technical bridge for developers to bring their ideas to reality!

Read the rest of our Knowledge Base pages to make the most of eLearning Magic Toolkit.
