Cognitive Warfare and Large Language Models: Devis leads the way in research

On May 16th, our senior leadership team published research concerning cognitive warfare on TikTok. Shane Morris (Senior Executive Advisor) and David Gurzick, Ph.D. (CTO), collaborated with Sean Guillory, Ph.D., and Glenn Borsky to better illuminate the landscape of peer and near-peer cognitive warfare enabled by new media and social media platforms.

Link: https://information-professionals.org/countering-cognitive-warfare-in-the-digital-age/

This article, published on the IPA Blog, details what appears to be the use of large language models (LLMs) to influence social media dialogue. While no source API could be identified, it is possible that the OpenAI API, backing ChatGPT models such as GPT-3.5 and GPT-4, was used. It is just as plausible that a competing LLM from Google or Meta, or a self-hosted model, was involved. Our article does not say definitively which LLM was used, only that the patterns are attributable to LLMs.
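
To make "patterns attributable to LLMs" concrete, below is a minimal, hypothetical sketch of the simplest class of signal: stock phrases that often survive in unedited model output. The phrase list and scoring function here are our own illustrative assumptions, not the methodology used in the IPA article, which draws on much richer behavioral evidence.

```python
import re

# Illustrative heuristics only (our own assumptions, not the article's method):
# stock phrases that frequently survive in unedited LLM output.
TELLTALE_PHRASES = [
    r"as an ai language model",
    r"i (?:cannot|can't) (?:express|have) personal opinions",
    r"it is important to note that",
    r"in conclusion, ",
]

def llm_likelihood_score(post: str) -> float:
    """Crude 0-1 score: the fraction of telltale phrases found in the post."""
    text = post.lower()
    hits = sum(bool(re.search(pattern, text)) for pattern in TELLTALE_PHRASES)
    return hits / len(TELLTALE_PHRASES)

if __name__ == "__main__":
    posts = [
        "As an AI language model, I cannot express personal opinions on this.",
        "Great game last night, what a finish!",
    ]
    for post in posts:
        print(f"{llm_likelihood_score(post):.2f}  {post}")
```

In practice, operators who strip these boilerplate phrases defeat this kind of check, which is why serious attribution also leans on posting cadence, account coordination, and stylometric analysis.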

It would be presumptuous to assume OpenAI is responding directly to our article. We consider the timing coincidental rather than responsive. Based on what OpenAI has published, they have likely been battling this kind of adversarial abuse for months.

OpenAI Publishes a Statement Confirming Adversarial Use of Its API

Link: https://openai.com/index/disrupting-deceptive-uses-of-AI-by-covert-influence-operations/

"We’ve terminated accounts linked to covert influence operations; no significant audience increase due to our services."

Our Take:

OpenAI has strong incentives to understate malicious use of their large language model API (and related tools), since acknowledging it would harm their public image. Plausible deniability is on their side, and only OpenAI has the internal analytics needed to tell the full story.

This development raises concerns about the ethical implications and security risks of powerful AI technologies. OpenAI has every motivation to downplay and brush off those risks, and it is unlikely they allow any third party to audit the use of their API or prompting tools.

Motivations for Downplaying Malicious Use

  • Preserving Public Trust: OpenAI has built a reputation as a leading AI research organization committed to the ethical development and deployment of artificial intelligence. Acknowledging widespread malicious use of their API could damage public trust and undermine their credibility. By downplaying the extent of misuse, OpenAI aims to maintain a positive image and reassure stakeholders of their commitment to ethical standards.

  • Commercial Interests: OpenAI operates in a competitive market where trust and reliability are key factors influencing adoption. Highlighting the negative uses of their API could deter potential clients and partners, affecting their market position and financial performance. Therefore, minimizing public awareness of such misuse helps protect their commercial interests.

  • Avoiding Regulatory Scrutiny: Increased attention to malicious uses of AI could lead to heightened regulatory scrutiny and potentially restrictive legislation. OpenAI may downplay these issues to avoid triggering regulatory actions that could impose stringent requirements on their operations and limit their flexibility.

  • Liability Issues: Acknowledging the extent of malicious use could expose OpenAI to legal liabilities. Victims of cognitive warfare campaigns might pursue legal action against the company for negligence or insufficient safeguards. By downplaying the issue, OpenAI can mitigate the risk of legal repercussions.

  • Highlighting Beneficial Applications: OpenAI is motivated to emphasize the positive applications of their technology in areas such as healthcare, education, and research. By focusing on success stories and beneficial outcomes, they aim to balance public perception and demonstrate the value of their innovations.

  • Research and Development Goals: OpenAI's mission includes advancing AI for the benefit of humanity. Focusing on the constructive uses of their technology aligns with their long-term research goals and helps attract funding and talent to further their mission.

Reasons for Limited Transparency in Use Logs

  • User Privacy: OpenAI collects and stores sensitive information from users of their API. Publicly disclosing use logs could compromise user privacy and violate data protection regulations. Ensuring the confidentiality of user data is a legal and ethical obligation that limits the transparency OpenAI can provide; a sketch of one privacy-preserving alternative appears after this list.

  • Security Risks: Making use logs publicly accessible could expose vulnerabilities and operational details that malicious actors could exploit. This information could be used to identify weaknesses in the system or develop more sophisticated attacks, increasing the overall risk to the platform and its users.

  • Protecting Intellectual Property: OpenAI's use logs contain valuable insights into how their API is used, including application patterns and performance metrics. Disclosing this information could reveal proprietary knowledge and give competitors an advantage. Maintaining confidentiality helps protect their intellectual property and sustain their competitive edge.

  • Market Positioning: Transparent access to use logs could inadvertently reveal business strategies, customer bases, and market trends. Competitors could leverage this information to gain market share or develop competing products, eroding OpenAI's market position.

  • Volume of Data: The sheer volume of data generated by API usage makes comprehensive logging and transparent access a challenging task. Managing and presenting this data in a meaningful and secure way would require significant resources and could detract from other operational priorities.

  • Data Interpretation: Use logs are complex and require context to be properly understood. Without that context, raw logs could be misinterpreted, leading to incorrect conclusions and potentially damaging the company's reputation.
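
As the User Privacy bullet above notes, full transparency is constrained by privacy obligations; a middle ground is aggregated, suppressed reporting. Below is a minimal sketch, assuming a made-up log schema (ApiLogRecord is hypothetical; OpenAI's internal schema is not public), of k-anonymity-style aggregation: categories of flagged usage are reported only when enough distinct accounts fall into them that no single account is identifiable.

```python
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass
class ApiLogRecord:
    # Hypothetical schema, invented for illustration.
    account_id: str   # sensitive: never published directly
    category: str     # e.g., "influence-op", "spam", "benign"
    tokens: int       # volume of usage in the request

def aggregate_report(logs: list[ApiLogRecord], k: int = 5) -> dict:
    """Report per-category totals, suppressing categories with < k accounts.

    The k-account threshold is a k-anonymity-style safeguard: no published
    row can be traced back to an individual account.
    """
    accounts = defaultdict(set)
    tokens = Counter()
    for rec in logs:
        accounts[rec.category].add(rec.account_id)
        tokens[rec.category] += rec.tokens
    return {
        cat: {"accounts": len(ids), "tokens": tokens[cat]}
        for cat, ids in accounts.items()
        if len(ids) >= k  # suppress groups too small to anonymize
    }
```

Even a report this coarse would let outside researchers sanity-check claims like "no significant audience increase" without exposing any individual user's data.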
