OpenAI pushed out a small fix that changes how the model reacts when a user writes a clear instruction inside the personalization panel. Altman framed it as a simple win, and the move arrived shortly after the company rolled out the new GPT-5.1 model. It sounds minor on the surface. Yet people who rely on the tool know how often the bot kept slipping that long mark in even when asked to avoid it.
Writers said the habit broke their tone and made their work stand out for the wrong reasons. Many stopped using the dash in their own writing because they did not want readers to assume a chatbot drafted their text. Complaints piled up across forums where people kept posting examples of the model promising to avoid it, then slipping it back into the very next sentence.
The new behavior kicks in only when the user plants the instruction in the custom settings area. Altman did not promise it would hold every time in regular chats. That fits the broader reality of LLM behavior: these models shape output by leaning on probability patterns rather than fixed rules. If a user places the instruction in the right slot, the odds of a clean output go up, though nothing becomes absolute.
Some critics pushed the conversation in another direction. They pointed out that if OpenAI struggled for years to control one simple punctuation mark, talk of near-term general intelligence feels a bit premature. The model may look sharp on the surface. Yet it still works like a giant pattern engine that tries to anticipate what should come next rather than follow strict commands with mechanical precision.
Older training data also played a role. People have used the long dash for centuries. It showed up across novels, editorials and essays that filled older datasets. Because the model tries to echo the shape of the writing it has seen, the dash became a default move. Once reinforcement learning kicked in and evaluators rewarded responses that felt polished, the preference grew stronger. That gave the model a habit that stuck around even as users pushed back.
OpenAI now says the fix is part of its work to hand people more control. The company already introduced tools that remember user preferences and let people fine-tune how the bot behaves across sessions. The long dash update shows that simple choices matter to users just as much as headline features. For many, this is less about punctuation and more about making the output feel like their own voice.
Every change will still depend on how the model handles probabilities in the background. That leaves room for odd behavior to creep back after future updates. Some users already say the fix works inside the settings panel but still fails if you only mention it inside the chat. With a system that keeps learning from new interactions, small shifts can break old tuning in unpredictable ways. Anyone expecting a crisp on-off switch will need patience.
Still, for now, people who truly want to avoid the long dash have a practical way to do it.
How To Add a No Em Dash (—) Rule in Custom Instructions
Below is a clear set of steps based on OpenAI’s official customization guide. You only need to do it once. After that, ChatGPT will try to follow the rule in every conversation.
Step 1: Open the Custom Instructions Panel
- Open ChatGPT in your browser or app.
- Look for your profile picture in the bottom corner, then open Settings and go to the Personalization tab.
- In the Personalization tab you will see the Custom Instructions option.
Step 2: Add Your Style Requirement
You will see two large text boxes. One controls how ChatGPT should respond. This is where you add the rule.
Write something like:
"Do not use em dashes (—) in your replies. Use commas, periods or parentheses instead."
or
"Avoid using em dashes unless necessary for clarity or emphasis; otherwise, use standard punctuation."
Keep it short and clear so the model can pull the instruction into every session.
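If you reach the model through the API rather than the web app, there is no Custom Instructions panel, but a system-level message sent with every request plays roughly the same role. Below is a minimal sketch using the official OpenAI Python SDK; the model name and the prompt wording are placeholders for illustration, not something OpenAI prescribes.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

NO_EM_DASH_RULE = (
    "Avoid using em dashes unless necessary for clarity or emphasis; "
    "otherwise, use standard punctuation."
)

# The system message is resent with every request, which mirrors how
# Custom Instructions get attached to every new conversation.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": NO_EM_DASH_RULE},
        {"role": "user", "content": "Write three sentences about autumn."},
    ],
)

print(response.choices[0].message.content)
```

As with the settings panel, this only raises the odds of compliance; the rule still passes through the same probabilistic machinery described above.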
Step 3: Save the Setting
Scroll down and hit Save.
The instruction becomes active across all chats unless you turn the feature off or erase it later.
Step 4: Test the Behavior
Start a new conversation and ask the model to write a few lines of text, then check whether any em dashes slip through.
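If you want a repeatable check instead of eyeballing the reply, a few lines of Python will flag the character. This is just an illustrative sketch; paste any response into the sample variable.

```python
# Quick test: does the reply contain an em dash (U+2014)?
sample = "Paste the model's reply here."

if "\u2014" in sample:
    print("Em dash found; the rule did not hold in this reply.")
else:
    print("No em dashes in this reply.")
```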
Step 5: Adjust Anytime
You can change, refine or remove the rule by visiting the same panel.
Note: This post was edited/created using GenAI tools and proofread/fact-checked by human editors.
by Asim BN via Digital Information World
