On social media sites Reddit and Twitter, users had shared images of chat histories that they said were not theirs.
OpenAI CEO Sam Altman said the company feels "awful", but the "significant" error had now been fixed.
Many users, however, remain concerned about privacy on the platform.
Millions of people have used ChatGPT to draft messages, write songs and even code since it launched in November of last year.
Each conversation with the chatbot is stored in the user's chat history bar where it can be revisited later.
But as early as Monday, users began to see conversations appear in their history that they said they hadn't had with the chatbot.
One user on Reddit shared a photo of their chat history including titles like "Chinese Socialism Development", as well as conversations in Mandarin.
On Tuesday, the company told Bloomberg that it had briefly disabled the chatbot late on Monday to fix the error.
The company also said that users had not been able to access the actual content of the chats.
OpenAI's chief executive tweeted that there would be a "technical postmortem" soon. But the error has drawn concern from users who fear their private information could be released through the tool.
The glitch seemed to indicate that OpenAI has access to user chats.
The company's privacy policy does say that user data, such as prompts and responses, may be used to continue training the model.
But that data is only used after personally identifiable information has been removed.
The blunder also comes just a day after Google unveiled its chatbot Bard to a group of beta testers and journalists.
Google and Microsoft, a major investor in OpenAI, have been jostling for control of the burgeoning market for artificial intelligence tools.
But the pace of new product updates and releases has many concerned that missteps like these could be harmful or have unintended consequences.