I ran across a recent "Choose Your Own Adventure" book at a local Little Free Library and was curious whether ChatGPT could navigate branching narrative fiction, so I used the book to find out. With this type of book you often wind up going back to an earlier section and taking a different branch; I wanted to see whether ChatGPT would remember the available options and adjust its choices to reach a goal.
My first prompt was to describe how the book was organized and to set the task:
Kazim Ali's "The Citadel of Whispers" is a `Choose Your Own Adventure Book`. The book has 110 sections. The first time we visit a section I will tell you what sections you can go to next. I will prompt you to tell me your choice of next section. If you choose to go to a section that is not correct I will send you back to the previous correct section. The goal is to reach section 33. We will start at section 1 and if we reach a dead end you will go back to section 1. Do you understand the task?
ChatGPT responded:
Yes, I understand the task. We will start at section 1 and I will make choices based on the options you provide. If I choose an incorrect section, you will redirect me back to the previous correct section. The goal is to reach section 33. Let's begin!
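In effect the book is a small directed graph, and the task asks ChatGPT to do bookkeeping a short program could handle. Here is a minimal sketch of that bookkeeping in Python; the section layout below is made up for illustration and does not match the real book:

```python
# Hypothetical section map: each section lists the sections it branches to.
# (The real book's branches differ; this is just for illustration.)
book = {
    1: [2, 36],
    2: [22, 41],
    22: [],        # a dead end: no onward choices
    36: [33],
    33: [],        # the goal
}

GOAL = 33

def move(current: int, choice: int) -> int:
    """Return the new section, or stay put if the choice is invalid."""
    if choice in book.get(current, []):
        return choice
    print(f"Section {choice} is not reachable from {current}; staying put.")
    return current

position = 1
position = move(position, 2)    # legal move: now at section 2
position = move(position, 33)   # illegal from section 2: stays at 2
```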
Here is a selection of the prompts I used as we navigated through the sections (a sketch of scripting this exchange follows the list):
- You can go to section 2 or section 36. What section do you choose?
- Go to section 22. What section are you at?
- Go back to section 1.
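If you wanted to run the same experiment without typing each prompt by hand, the exchange is easy to script against the official `openai` Python client (v1 and later). This is a sketch, not what I actually ran; the model name and prompt wording here are assumptions:

```python
# Hypothetical automation of the manual exchange using the openai client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Seed the conversation with the same task description I gave ChatGPT.
messages = [{"role": "user", "content": (
    'Kazim Ali\'s "The Citadel of Whispers" is a Choose Your Own Adventure '
    "book with 110 sections. I will tell you which sections you can go to "
    "next; reply with the section number you choose. The goal is section 33."
)}]

def ask(prompt: str) -> str:
    """Send one prompt, keeping the full history so the model sees prior moves."""
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # an assumption; any chat model would do
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("You can go to section 2 or section 36. What section do you choose?"))
```

Because the whole message history is resent on every call, the model has everything it needs to avoid a dead end it has already seen; whether it actually uses that history is the question.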
ChatGPT seemed to keep track of its current section and to remember what moves it could make: when I sent it to a section and asked it to choose the next one, it always selected one of the available options. As expected it eventually reached a dead end and I sent it back to section 1. It then made the same sequence of moves as before and wound up at the same dead end, and even regenerating the answer produced the same response. In this simple experiment ChatGPT seemed unable to adjust its behaviour based on what had happened earlier in the conversation.
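For contrast, the adjustment I was hoping for is trivial to state as an algorithm: remember the branches that have already failed and take a different one on the next pass. A minimal sketch, reusing the same made-up section map as above:

```python
# Depth-first search that remembers dead ends, so a restart from
# section 1 never repeats an exhausted branch. Hypothetical map, as before.
book = {1: [2, 36], 2: [22, 41], 22: [], 41: [], 36: [33], 33: []}
GOAL = 33

def find_path(section, visited):
    if section == GOAL:
        return [section]
    visited.add(section)
    for choice in book.get(section, []):
        if choice not in visited:          # skip branches already exhausted
            path = find_path(choice, visited)
            if path is not None:
                return [section] + path
    return None                            # dead end: caller backtracks

print(find_path(1, set()))  # [1, 36, 33]
```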