Meta AI Tag Feature on Threads Can’t Be Blocked by Users, Reports Say

Meta is testing a new way to pull artificial intelligence into everyday conversations on Threads—and early user feedback suggests the company may have underestimated how quickly people will demand control over what shows up in their feeds.

According to reports, Meta has begun rolling out a Threads feature that lets users tag a Meta AI account within a post or conversation. The promise is simple: instead of leaving people to guess, search, or ask someone else for context, you can bring Meta AI directly into the thread to answer questions, clarify details, or provide additional background. In practice, it’s a familiar pattern for anyone who has watched social platforms evolve over the last year—AI is no longer confined to separate chat apps. It’s being woven into the places where people already talk, argue, share links, and coordinate.

But the rollout has hit a snag that matters more than most product teams expect: users say they can’t block the Meta AI account associated with the feature. That means even if someone doesn’t want AI responses, they may still be unable to remove the account from their experience in the way they would with other accounts. For many users, blocking isn’t just about avoiding harassment; it’s about shaping the environment. It’s a form of personalization and boundary-setting. When a platform introduces a new “always present” actor—especially one tied to a major company—people naturally look for the same controls they’ve used elsewhere.

This is where the controversy begins. Threads users reportedly discovered that the Meta AI account can’t be blocked, even though it behaves like a normal account in other respects: it can be tagged and referenced. The result is frustration that feels less like a technical complaint and more like a trust issue: if you can’t block it, you can’t fully opt out. And if you can’t opt out, the feature risks feeling less like a helpful tool and more like an unavoidable layer on top of the conversation.

To understand why this is such a big deal, it helps to look at how social platforms treat “blocking” as a concept. Blocking is often framed as safety—preventing unwanted contact. But in reality, it also functions as a visibility control. People block accounts to reduce noise, avoid certain viewpoints, stop spam, and curate their attention. Even when the content isn’t harmful, the ability to block is part of how users maintain a sense of ownership over their feed.

When Meta introduces an AI account that can be tagged, it effectively creates a new category of participant in conversations. That participant may not be a person, but it can still influence what users see and how they interpret posts. If users can’t block it, they may feel like the platform is changing the rules midstream: the conversation becomes partially mediated by an entity you can’t silence.

The feature itself, however, is not surprising. Meta has been investing heavily in AI across its ecosystem, and Threads is simply another surface where those investments can show up. The company’s broader strategy has been to make AI feel native—less like a separate product and more like a capability embedded in the flow of social media. That approach mirrors what other tech giants have done: rather than asking users to open a dedicated chatbot, platforms are trying to meet them where they already are.

In Threads, tagging Meta AI is positioned as a way to get answers or context. Think of it as a shortcut for explanation. If someone posts something confusing, you can tag AI to summarize. If a conversation needs background, AI can supply it. If you’re trying to respond but want to ensure accuracy or tone, AI can help draft or refine. The appeal is obvious: it reduces friction. It turns “I’ll look it up later” into “I can get context right now.”

Yet the friction that users are reporting—specifically the inability to block—highlights a tension that runs through nearly every AI integration into social platforms. AI features are often marketed as optional assistance, but they can behave like persistent infrastructure. Once AI is integrated into the interface, it can become difficult to ignore. Even if the user isn’t actively using it, the presence of AI-related accounts and prompts can shape the conversation.

This is why the block issue matters beyond personal preference. It’s about governance. When a platform adds a new interactive element, users want to know what they can control: visibility, participation, and influence. If the AI account is treated differently from normal accounts—especially in ways that limit user control—users may interpret that as a sign that the platform is prioritizing engagement over autonomy.

There’s also a deeper question underneath the reported behavior: what does it mean to “block” an AI account? Blocking is designed for entities that generate content and interact with users. An AI account can do both. It can be tagged, it can respond, and it can appear in contexts that users didn’t initiate. If blocking is disabled, users may wonder whether the platform is treating AI as a system-level feature rather than a user-level actor.

That distinction could be intentional. Meta may view the AI account as part of the product’s core functionality during testing. If so, disabling blocking might be a temporary limitation while the company figures out how to handle opt-out behavior safely and consistently. But even if that’s the case, the user experience still matters. A “temporary” lack of control can still shape how people perceive the feature—and perceptions can harden quickly.

Another angle is the business logic behind AI integrations. AI features can increase time spent on the platform, encourage more replies, and create new interaction loops. Tagging AI is a direct invitation to engage with the platform’s capabilities. If users can block the AI account, adoption might slow. That doesn’t necessarily mean Meta is trying to force AI on people, but it does suggest that the company may be balancing user control against growth metrics.

Still, there’s a difference between encouraging use and removing choice. The best AI integrations tend to offer clear, consistent controls: the ability to hide AI suggestions, disable AI interactions, or manage which AI accounts can appear. If Meta wants Threads users to trust the feature, it will likely need to demonstrate that opt-out is real—not just theoretical.

The timing of this rollout also matters. Threads is still evolving rapidly, and Meta has been experimenting with how AI should appear in social contexts. The company’s AI efforts aren’t limited to one model or one product: it has launched new models and capabilities, and it has been hiring and investing to expand that work. That momentum makes it likely that AI features will continue to appear across Meta’s apps. Threads is simply one of the most visible places where those changes can be tested publicly.

But public testing is where user expectations collide with product experimentation. When users discover limitations—like the inability to block—the reaction tends to be immediate because the feature is happening in real conversations. People don’t experience AI as a distant roadmap item; they experience it as something that appears in their feed, interrupts their flow, or changes the tone of a discussion.

This is why the current situation is likely to become a focal point for how Meta handles AI governance on Threads. If Meta responds by adding block controls, it could be seen as a quick course correction. If Meta doesn’t, the feature may face ongoing resistance, especially from users who value strict curation and minimal algorithmic interference.

There’s also the question of how the AI account behaves in threads. Tagging implies a kind of direct interaction: if the AI account can be tagged, it can be summoned. That shifts the social dynamics. Instead of asking a human for clarification, users can call on AI. That might be useful, but it also changes who participates in the conversation. Over time, if AI becomes a default helper, some users may feel that human discourse is being replaced by machine assistance.

This doesn’t have to be negative. AI can improve accessibility and understanding. It can help people who are new to a topic, who speak different languages, or who need help interpreting complex information. But the tradeoff is that AI can also homogenize responses. If AI drafts replies or provides context in a similar style across many threads, conversations can start to sound alike. Users may notice that shift even if they can’t articulate it.

Control features like blocking are one way to mitigate that risk. They allow users to decide when AI is welcome and when it isn’t. Without those controls, the platform may inadvertently push AI into roles it wasn’t meant to occupy.

From a product perspective, Meta’s testing approach suggests the company is still refining the feature. Testing implies iteration: adjusting how tagging works, how responses are displayed, and how users can manage their experience. The block limitation could be a bug, a policy decision, or a temporary constraint. But regardless of cause, the user reaction indicates that Meta needs to treat this as a priority issue, not a minor detail.

What would “better” look like? At minimum, users would expect parity with other accounts: if the AI account is taggable and appears in the same space as other accounts, it should likely be blockable. Alternatively, Meta could offer a different control mechanism—such as an option to disable AI tagging entirely, hide AI responses, or prevent AI accounts from appearing in replies. The key is that users need a clear, reliable way to opt out.

There’s also room for more nuanced settings. For example, users might want AI help only for certain types of requests—summaries, translations, or factual explanations—while disabling AI drafting or persuasive responses. They might want AI to appear only when explicitly requested, not when the platform proactively suggests it. They might want to limit AI to certain contexts or topics. These are the kinds of controls that can turn AI from a disruptive presence into a genuinely user-centered tool.
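To make the idea concrete, the kinds of per-user controls described above can be sketched as a simple preferences object plus a gate function. This is a purely hypothetical illustration—none of these names or fields correspond to any real Meta or Threads API—showing how task-level allow-lists and an “explicit request only” switch might compose:

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: these names do not reflect Meta's actual
# settings model. They illustrate the control surface described above.

@dataclass
class AIPreferences:
    show_ai_replies: bool = True         # do AI responses render in my feed at all?
    explicit_request_only: bool = True   # AI appears only when I tag it myself
    allowed_tasks: set = field(
        default_factory=lambda: {"summary", "translation", "factual"}
    )  # task-level allow-list; e.g. drafting is excluded by default

def ai_may_respond(prefs: AIPreferences, task: str, user_requested: bool) -> bool:
    """Decide whether an AI reply should appear for this user."""
    if not prefs.show_ai_replies:
        return False
    if prefs.explicit_request_only and not user_requested:
        return False
    return task in prefs.allowed_tasks

prefs = AIPreferences()
print(ai_may_respond(prefs, "summary", user_requested=True))   # allowed task, explicit request
print(ai_may_respond(prefs, "drafting", user_requested=True))  # task not on the allow-list
print(ai_may_respond(prefs, "summary", user_requested=False))  # not explicitly requested
```

The point of the sketch is that every check is a user-owned setting, evaluated before an AI reply ever renders—which is exactly the parity with normal account controls that users are asking for.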

Meta’s challenge is that AI features are often built quickly and integrated deeply. Once integrated, they can become hard to disentangle from the interface. That’s why early governance decisions matter. If Meta waits too long to add controls, users may develop habits and expectations that are difficult to reverse.

There’s also a trust dimension. Users are increasingly aware that AI can be wrong, biased, or incomplete.