Tuesday, April 28, 2026

Canva Fixes Design Tool After Reported “Palestine” to “Ukraine” Change, Audit Underway

Canva says it has fixed an issue in its Magic Layers feature after users reported that the tool changed the phrase “Cats for Palestine” to “Cats for Ukraine” inside a design.

"this shouldn't have happened and we're very sorry for your experience!", Canva said in a response to a user.

The issue was first highlighted on X by user @ros_ie9 and was later reported by The Verge and Gizmodo this week. According to those reports, the behavior appeared to affect the word “Palestine” specifically, while related words such as “Gaza” or “Israel” were reportedly unaffected.

Image: ros_ie9 / X

A separate statement provided to Gizmodo said the company had launched an audit into how the issue happened and was reviewing its internal testing processes to detect and prevent unexpected outputs in the future. Canva also said the problem was isolated and did not affect designs broadly.

The company has not publicly explained what caused the substitution or which technical layer triggered it.

That question has drawn attention because Magic Layers is promoted as a tool for converting flat designs into editable layers, allowing users to manually adjust text and visual elements after processing. Users reported that the wording changed during that process without being requested.

The incident has also received attention because Canva publicly promotes its AI governance framework, Canva Shield, as focused on safe, fair, and secure AI. In its January 2026 update, Canva says that its generative AI products go through “rigorous safety reviews,” that certain prompts involving political topics are automatically moderated, and that the company works to reduce bias and improve fairness in AI outputs.

Online discussion following the reports focused on whether the issue reflected a model error, moderation behavior, or another system failure. Some users argued that AI tools should preserve original content exactly when performing layout conversion, while others said companies remain responsible for unexpected outputs regardless of whether the issue came from training data, moderation layers, or external model providers.

The incident follows previous criticism of AI systems across the technology and social media industry over disputed or politically sensitive outputs related to Palestinians, including earlier concerns about chatbot responses and image generation tools from other major platforms.

DIW has contacted Canva with follow-up questions about the root cause of the Magic Layers issue, whether third-party AI systems were involved, how the company’s audit classified the problem, and what specific safeguards have been added beyond the additional checks already mentioned. Canva has not publicly specified a timeline for the completion or publication of the audit findings. No further response had been received at the time of publication.

Note: This post was improved using a generative AI tool.

by Asim BN via Digital Information World
