As the conversation about ChatGPT — the artificial intelligence (AI) chatbot developed by OpenAI — continues, the question of what the chatbot can do has been covered exhaustively. From writing essays to standing in as a therapist, ChatGPT can seemingly do anything — well, almost anything.

Despite its many supposed benefits, certain tasks remain beyond the technology's reach. Follow along with GlobalSpec as we explore the tasks ChatGPT cannot perform despite its many uses.

Source: Andrew Neel/Pexels

ChatGPT cannot:

Discuss post-2021 content

Perhaps the most glaring limitation of ChatGPT is its inability to recall information about anything that happened after 2021. Because the model was trained only on data gathered through that year, it is unaware of more recent events — who won the 2022 or 2023 Super Bowls, for instance. Unlike search engines such as Google, ChatGPT cannot look up post-2021 information on demand.
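
To see the cutoff in action, consider a minimal sketch that asks such a question programmatically (assuming the 0.x-series openai Python package and an API key in the OPENAI_API_KEY environment variable; the model name and prompt here are illustrative only):

    import openai  # pip install "openai<1.0"; the 0.x-style API is assumed here

    # Assumes an API key is set in the OPENAI_API_KEY environment variable,
    # which the 0.x library reads automatically.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative; any chat model with a 2021 cutoff
        messages=[
            {"role": "user", "content": "Who won the 2023 Super Bowl?"}
        ],
    )

    # The model typically replies that it has no knowledge of events after
    # its 2021 training cutoff, rather than naming a winner.
    print(response["choices"][0]["message"]["content"])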

Make predictions (sporting and otherwise)

Despite having been trained on ample historical sports and political data, ChatGPT cannot (and, more importantly, should not) be used to predict the outcomes of sporting events or political races. Tempted though users might be, OpenAI advises against asking ChatGPT who the next president might be or who might win tonight's Yankees game.

Experts explain that though ChatGPT can be trained to recognize patterns and correlations in such competitions, the technology reportedly cannot account for the unpredictable variables that also shape outcomes. As such, ChatGPT is not considered reliable for this kind of forecasting.

Offer financial advice

Though ChatGPT's responses may sound convincing, OpenAI cautions that the chatbot should not be used for certain tasks, with financial planning at the top of that list.

The reasoning is much the same as for sports predictions and political forecasts: too many variables affect the outcome.

Talk politics

By design, ChatGPT will not engage with users in discussions about partisan politics, according to OpenAI. The reasoning, the tech firm suggests, is that ChatGPT is meant to be objective, offering only informative responses to user queries and avoiding the divisiveness that such discussions invite.

Though ChatGPT can offer historical facts about political issues, the chatbot largely avoids taking partisan stances on them.

Cause harm

ChatGPT has reportedly been programmed not to engage with questions that encourage or promote harmful behavior, according to OpenAI. The chatbot will not answer questions involving hate speech, discrimination, explicit language, the promotion of violence or self-harm, illegal activities, harassment, threats or intimidation, sexually explicit content, the promotion of illegal drug use, conspiracy theories, attacks on a person's reputation and more.

Operate seamlessly

Despite ChatGPT's ease of use, the chatbot does not always operate smoothly.

Users report that wait times can be fairly lengthy, especially for prompts that feature several keywords. Likewise, requests for articles, essays or any other final product with a hefty word count can mean a longer wait.

For now, the chatbot is free to use, but speeds are reportedly faster with the paid version, ChatGPT Plus.

Solve complex math

Sure, ChatGPT can be tasked with solving simple addition, subtraction, multiplication and division problems, but the chatbot reportedly struggles when an equation strings together several operations. Because the model generates answers by predicting likely text rather than by performing calculations, multi-step arithmetic is easy for it to get wrong.

Further, the chatbot struggles with problems that require a specific method or formula and, according to experts, may not always provide the most efficient solution.
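
For anything beyond trivial arithmetic, a deterministic tool is the safer route. The short sketch below (plain Python, with an expression chosen purely for illustration) evaluates a multi-operation expression exactly, the kind of calculation the chatbot may fumble:

    from fractions import Fraction

    # A multi-operation expression of the kind ChatGPT may mis-evaluate:
    #   (3 + 4 * 2) / (1 - 5) ** 2
    # Order of operations: 4 * 2 = 8, then 3 + 8 = 11; (1 - 5) ** 2 = 16.
    result = Fraction(3 + 4 * 2, (1 - 5) ** 2)

    print(result)         # 11/16
    print(float(result))  # 0.6875

Using Fraction keeps the result exact instead of introducing floating-point rounding, which makes it easy to double-check a chatbot's answer.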

Cite its sources

While ChatGPT answers questions convincingly enough, the chatbot does not reveal the sources behind its responses. As such, it is not always clear where the information comes from, and users cannot verify its accuracy.

Consequently, ChatGPT is not always trustworthy. It can usually answer general knowledge questions accurately, but it often gives misleading answers on more specialized topics.

Check back with GlobalSpec for more on the ever-evolving topic of ChatGPT.

To contact the author of this article, email mdonlon@globalspec.com