In a remarkable shift, the market research industry has rapidly integrated artificial intelligence (AI) into its daily operations, with a staggering 98% of professionals now utilizing AI tools. According to a recent survey conducted by QuestDIY, a research platform owned by The Harris Poll, 72% of these professionals employ AI on a daily basis or more frequently. This widespread adoption reflects the transformative potential of AI in enhancing productivity and efficiency within the sector. However, the survey also uncovers a significant paradox: while AI is heralded for its ability to streamline processes and deliver insights at unprecedented speeds, it simultaneously raises critical concerns regarding accuracy and trustworthiness.
The findings from the survey, which gathered responses from 219 U.S. market research and insights professionals in August 2025, paint a complex picture of an industry grappling with the dual pressures of delivering rapid business insights and ensuring the reliability of AI-generated outputs. More than half of the respondents—56%—reported saving at least five hours per week by leveraging AI tools. Yet, nearly four in ten participants expressed concerns about their increasing reliance on technology that can produce errors. Specifically, 39% of researchers noted that AI has led to a heightened dependence on error-prone systems, while 37% cited new risks related to data quality and accuracy. Additionally, 31% indicated that the use of AI has resulted in more work dedicated to re-checking or validating AI outputs.
This disconnect between the productivity gains afforded by AI and the persistent issues surrounding its reliability has created what can be described as a “grand bargain” within the research industry. Professionals are willing to accept the time savings and enhanced capabilities that AI offers, but this comes at the cost of constant vigilance over the technology’s shortcomings. As a result, the workflow in market research is evolving, with researchers increasingly viewing AI as a “junior analyst”—a tool that can process vast amounts of data quickly but requires human oversight to ensure the accuracy and relevance of its outputs.
The rapid transition from skepticism to daily usage of AI among market researchers has been striking. The survey indicates that 39% of researchers use AI about once per day, while 33% use it several times a day or more, together accounting for the 72% who use it at least daily. Adoption rates are accelerating, with 80% of researchers reporting increased usage compared to six months prior, and 71% anticipating further growth in the coming months. Only 8% expect their usage to decline. This swift integration of AI into research practices underscores a fundamental shift in how insights are generated and delivered.
Erica Parker, Managing Director of Research Products at The Harris Poll, emphasizes the importance of human judgment in this evolving landscape. She notes, “While AI provides excellent assistance and opportunities, human judgment will remain vital. The future is a teamwork dynamic where AI will accelerate tasks and quickly unearth findings, while researchers will ensure quality and provide high-level consultative insights.” This perspective highlights the necessity of maintaining a balance between leveraging AI’s capabilities and exercising critical human oversight.
The top use cases for AI in market research reflect its strengths in handling data at scale. According to the survey, 58% of researchers utilize AI for analyzing multiple data sources, 54% for analyzing structured data, 50% for automating insight reports, 49% for analyzing open-ended survey responses, and 48% for summarizing findings. These tasks, traditionally labor-intensive and time-consuming, can now be accomplished in a fraction of the time, allowing researchers to focus on higher-order analysis and strategic interpretation.
Despite the clear benefits of AI, the survey reveals deep-seated unease regarding the technology’s reliability. Researchers have articulated a range of concerns, including increased reliance on error-prone technology (39%), new risks surrounding data quality or accuracy (37%), additional validation work (31%), uncertainty about job security (29%), and ethical considerations related to data privacy (28%). The report underscores that accuracy remains the primary frustration experienced by researchers when using AI, with one participant succinctly capturing the tension: “The faster we move with AI, the more we need to check if we’re moving in the right direction.”
This paradox—wherein time savings are accompanied by the creation of new validation work—reflects a fundamental characteristic of current AI systems. While AI can produce outputs that appear authoritative, they may also contain what researchers refer to as “hallucinations,” or fabricated information presented as fact. This challenge is particularly acute in a profession where credibility hinges on methodological rigor, and where erroneous data can lead clients to make costly business decisions.
The metaphor of AI as a junior analyst aptly encapsulates the industry’s current operating model. Researchers treat AI outputs as drafts requiring senior review rather than finished products, establishing a workflow that provides necessary guardrails while simultaneously highlighting the technology’s limitations. This approach necessitates a careful balancing act, as researchers must navigate the complexities of AI-generated insights while ensuring that the final deliverables meet the rigorous standards expected by clients.
Data privacy and security concerns emerge as the most significant barriers to AI adoption in market research. When asked what would limit their use of AI at work, 33% of researchers identified these concerns as paramount. Given that researchers handle sensitive customer data, proprietary business information, and personally identifiable information subject to regulations such as GDPR and CCPA, the implications of sharing this data with AI systems—particularly cloud-based large language models—raise legitimate questions about data control and potential misuse.
Other notable barriers include the time required to experiment with and learn new tools (32%), the need for training (32%), integration challenges (28%), internal policy restrictions (25%), and cost considerations (24%). Additionally, 31% of respondents cited a lack of transparency in AI use as a concern, complicating the task of explaining results to clients and stakeholders. The transparency issue is particularly problematic; when an AI system generates an analysis or insight, researchers often struggle to trace how the system arrived at its conclusion. This lack of clarity conflicts with the scientific method’s emphasis on replicability and clear methodology, leading some clients to impose no-AI clauses in their contracts. Such clauses force researchers to either avoid AI entirely or utilize it in ways that skirt ethical boundaries.
Despite these challenges, researchers are not abandoning AI; rather, they are developing frameworks to use it responsibly. The consensus model emerging from the survey is one of “human-led research supported by AI,” where AI handles repetitive tasks such as coding, data cleaning, and report generation, while humans concentrate on interpretation, strategy, and business impact. Nearly three in ten researchers (29%) describe their current workflow as “human-led with significant AI support,” while 31% characterize it as “mostly human with some AI help.” Looking ahead to 2030, 61% envision AI as a “decision-support partner” with expanded capabilities, including generative features for drafting surveys and reports (56%), AI-driven synthetic data generation (53%), automation of core processes like project setup and coding (48%), predictive analytics (44%), and deeper cognitive insights (43%).
This evolving division of labor positions researchers as “Insight Advocates”—professionals who validate AI outputs, connect findings to stakeholder challenges, and translate machine-generated analysis into strategic narratives that drive business decisions. In this model, technical execution becomes less central to the researcher’s value proposition than judgment, context, and storytelling. Gary Topiol, Managing Director at QuestDIY, emphasizes that while AI can surface missed insights, it still requires a human touch to determine what truly matters.
The experience of the market research industry serves as a potential harbinger for other knowledge work professions where AI promises to accelerate analysis and synthesis. The lessons learned by researchers—early adopters who have integrated AI into their daily workflows—offer valuable insights into both the opportunities and pitfalls associated with this technology.
First and foremost, speed genuinely matters. One boutique agency research lead recounted the experience of watching survey results accumulate in real-time after fielding: “After submitting it for fielding, I literally watched the survey count climb and finish the same afternoon. It was a remarkable turnaround.” This velocity enables researchers to respond to business questions within hours rather than weeks, making insights actionable while decisions are still being made.
Second, while productivity gains are evident, they are not uniformly distributed. Saving five hours per week represents a meaningful efficiency for individual contributors, but those savings can evaporate if spent validating AI outputs or correcting errors. The net benefit derived from AI depends on the specific task, the quality of the AI tool, and the user’s skill in prompting and reviewing the technology’s work.
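This trade-off reduces to simple arithmetic: the net gain is the time AI saves minus the time spent checking its work. The validation figures below are illustrative assumptions, not numbers from the survey.

```python
# Illustrative sketch only: the validation-hour values are assumed,
# not taken from the QuestDIY survey.
def net_hours_saved(hours_saved: float, validation_hours: float) -> float:
    """Net weekly benefit of AI use after subtracting review overhead."""
    return hours_saved - validation_hours

# A researcher saving 5 hours/week who spends 2 hours re-checking
# AI outputs nets 3 hours; at 5 hours of validation the gain vanishes.
print(net_hours_saved(5.0, 2.0))  # 3.0
print(net_hours_saved(5.0, 5.0))  # 0.0
```

The point of the sketch is that the headline "five hours saved" figure is a gross number; whether AI is a net win depends on how much of that time is reclaimed by the verification work the survey's respondents describe.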
Third, the skills required for effective research are evolving. The report identifies future competencies that will be essential, including cultural fluency, strategic storytelling, ethical stewardship, and what it terms “inquisitive insight advocacy”—the ability to ask the right questions, validate AI outputs, and frame insights for maximum business impact. While technical execution remains important, it is becoming less differentiating as AI takes on more of the mechanical work.
The survey’s most striking finding may be the persistence of trust issues despite widespread adoption. In most technology adoption curves, trust builds as users gain experience and tools mature. However, in the case of AI, researchers appear to be using tools intensively while simultaneously questioning their reliability—a dynamic driven by the technology’s tendency to perform well most of the time but fail unpredictably. This creates a verification burden that lacks a clear endpoint. Unlike traditional software bugs that can be identified and fixed, AI systems’ probabilistic nature means they may produce different outputs for the same inputs, complicating the development of reliable quality assurance processes.
The data privacy concerns highlighted by 33% of respondents reflect a different dimension of trust. Researchers are not only concerned about whether AI produces accurate outputs but also about the fate of the sensitive data they input into these systems. QuestDIY’s approach, as outlined in the report, is to build AI directly into a research platform with ISO/IEC 27001 certification, rather than relying on general-purpose tools like ChatGPT that may store and learn from user inputs.
As the market research industry navigates this complex landscape, the future of research work appears poised for transformation. The report positions 2026
