
Cognitive Warfare and Large Language Models: Devis leads the way in research

On May 16th, our senior leadership team published research on cognitive warfare taking place on TikTok. Shane Morris (Senior Executive Advisor) and David Gurzick, Ph.D. (CTO) collaborated with Sean Guillory, Ph.D. and Glenn Borsky to illuminate the landscape of peer and near-peer cognitive warfare enabled by new media and social media platforms.

Link: https://information-professionals.org/countering-cognitive-warfare-in-the-digital-age/

This article, published on the IPA Blog, details what appeared to be the use of large language models (LLMs) to influence social media dialogue. While no source API could be identified, it is possible that the OpenAI API and ChatGPT models (GPT-3, GPT-4, etc.) were used. It is just as plausible that a competing LLM from Google or Meta, or a self-hosted LLM, was used. Our article does not definitively say which LLM was involved, only that the observed patterns are attributable to LLMs.

It would be presumptuous to assume OpenAI is responding directly to our article. We consider the timing coincidental rather than responsive. Based on what OpenAI has published, it is likely they have been battling this kind of adversarial abuse for months.

OpenAI Publishes a Statement Confirming Adversarial Use of OpenAI API

Link: https://openai.com/index/disrupting-deceptive-uses-of-AI-by-covert-influence-operations/

“We’ve terminated accounts linked to covert influence operations; no significant audience increase due to our services.”

Our Take:

OpenAI has many motivations to understate malicious use of their large language model API (and related tools), because acknowledging it would harm their public image. Plausible deniability is also on their side, and only OpenAI has the analytics tools to tell the full story.

This development has raised concerns about the ethical implications and security risks associated with powerful AI technologies. OpenAI has every incentive to downplay and brush off those risks, and it is unlikely their team allows a third party to audit the use of their API or prompting engines.

Motivations for Downplaying Malicious Use

  • Preserving Public Trust: OpenAI has built a reputation as a leading AI research organization committed to the ethical development and deployment of artificial intelligence. Acknowledging widespread malicious use of their API could damage public trust and undermine their credibility. By downplaying the extent of misuse, OpenAI aims to maintain a positive image and reassure stakeholders of their commitment to ethical standards.
  • Commercial Interests: OpenAI operates in a competitive market where trust and reliability are key factors influencing adoption. Highlighting the negative uses of their API could deter potential clients and partners, affecting their market position and financial performance. Therefore, minimizing public awareness of such misuse helps protect their commercial interests.
  • Avoiding Regulatory Scrutiny: Increased attention to malicious uses of AI could lead to heightened regulatory scrutiny and potentially restrictive legislation. OpenAI may downplay these issues to avoid triggering regulatory actions that could impose stringent requirements on their operations and limit their flexibility.
  • Liability Issues: Acknowledging the extent of malicious use could expose OpenAI to legal liabilities. Victims of cognitive warfare campaigns might pursue legal action against the company for negligence or insufficient safeguards. By downplaying the issue, OpenAI can mitigate the risk of legal repercussions.
  • Highlighting Beneficial Applications: OpenAI is motivated to emphasize the positive applications of their technology in areas such as healthcare, education, and research. By focusing on success stories and beneficial outcomes, they aim to balance public perception and demonstrate the value of their innovations.
  • Research and Development Goals: OpenAI’s mission includes advancing AI for the benefit of humanity. Focusing on the constructive uses of their technology aligns with their long-term research goals and helps attract funding and talent to further their mission.

Reasons for Limited Transparency in Use Logs

  • User Privacy: OpenAI collects and stores sensitive information from users of their API. Publicly disclosing use logs could compromise user privacy and violate data protection regulations. Ensuring the confidentiality of user data is a legal and ethical obligation that limits the extent of transparency OpenAI can provide.
  • Security Risks: Making use logs publicly accessible could expose vulnerabilities and operational details that malicious actors could exploit. This information could be used to identify weaknesses in the system or develop more sophisticated attacks, increasing the overall risk to the platform and its users.
  • Protecting Intellectual Property: OpenAI’s use logs contain valuable insights into how their API is used, including application patterns and performance metrics. Disclosing this information could reveal proprietary knowledge and give competitors an advantage. Maintaining confidentiality helps protect their intellectual property and sustain their competitive edge.
  • Market Positioning: Transparent access to use logs could inadvertently reveal business strategies, customer bases, and market trends. Competitors could leverage this information to gain market share or develop competing products, eroding OpenAI’s market position.
  • Volume of Data: The sheer volume of data generated by API usage makes comprehensive logging and transparent access a challenging task. Managing and presenting this data in a meaningful and secure way would require significant resources and could detract from other operational priorities.
  • Data Interpretation: Use logs can be complex and require context to be properly understood. Without proper interpretation, raw logs could be misinterpreted, leading to incorrect conclusions and potentially damaging the company’s reputation.

Revolutionizing Government IT with Gleam and Fly: A New Approach to Coding and Hosting

The launch of Gleam v1.0 this week marks an exhilarating milestone for the programming community and, by extension, the technology landscape at large. Reaching version 1.0 signifies a level of maturity and stability that developers and organizations have eagerly anticipated, and it signals to the world that Gleam is ready for production use: no longer an experimental language, but a robust tool for building reliable, efficient applications. The stability of a 1.0 release instills confidence in developers and enterprises alike, encouraging the adoption of Gleam in a wider array of projects and environments. For industries where reliability and performance are critical, such as finance, healthcare, and government, Gleam v1.0 offers a promising new option for building systems that require fault tolerance and efficiency. This launch is a testament to the hard work and dedication of the Gleam community and a beacon for the future of functional programming in system-critical applications.

In the rapidly evolving landscape of government IT, the quest for efficiency, security, and rapid deployment of applications is a never-ending pursuit. Enter Gleam, a statically typed functional programming language, and Fly, a cutting-edge microVM hosting service. Together, they offer a unique and powerful approach to coding and hosting, poised to significantly enhance how government agencies deliver applications.

Gleam: The Statically Typed Functional Programming Star

Gleam, designed to run on the Erlang virtual machine, brings the robustness and reliability of Erlang to the table, with a twist. It’s statically typed, which means errors can be caught at compile time, significantly reducing runtime errors. For government applications where reliability and security are paramount, this feature is a game-changer. Gleam’s functional programming paradigm promotes immutability and pure functions, further minimizing side effects and making the codebase easier to reason about, test, and maintain.
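
To make this concrete, here is a minimal sketch (illustrative only; the RequestStatus type and describe function are hypothetical, not code from any Devis project) of how Gleam's static types and exhaustive pattern matching surface mistakes at compile time rather than at runtime:

```gleam
import gleam/io

// A custom type: the compiler knows every possible variant.
pub type RequestStatus {
  Approved
  Pending
  Denied
}

// A pure function: the same input always produces the same output,
// and the case expression must handle every variant of RequestStatus
// or the program will not compile.
pub fn describe(status: RequestStatus) -> String {
  case status {
    Approved -> "Request approved"
    Pending -> "Request pending review"
    Denied -> "Request denied"
  }
}

pub fn main() {
  io.println(describe(Pending))
}
```

If a new variant were later added to RequestStatus and describe were not updated, the build would fail instead of the omission surfacing as a runtime error in production.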

The benefits for government agencies are clear: more reliable code means fewer service disruptions and a reduced risk of security vulnerabilities. Additionally, the ease of maintenance and scalability inherent in functional programming allows for quicker updates and adaptation to changing requirements, a critical capability in the dynamic world of government IT.

Fly: MicroVM Hosting for the Modern Age

Fly steps into the hosting arena with its microVM technology, offering lightweight, isolated environments for applications. This isolation ensures that applications do not interfere with one another, enhancing security, which is crucial for government operations. Furthermore, Fly’s global network of servers ensures low-latency access to applications no matter where end users are located, whether they are government employees, military personnel, or members of the public.

For agencies, this means being able to deploy and scale applications rapidly across the globe, ensuring that critical services are always available and responsive. The ease of use and deployment on Fly also accelerates the development cycle, enabling agencies to bring more applications to the users faster than ever before.
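
As a rough sketch of what that deployment workflow can look like (the app name, region, and port below are hypothetical, chosen purely for illustration), a minimal fly.toml for a small web service might resemble the following; `fly launch` can generate a file like this and `fly deploy` builds and releases the app onto Fly's network:

```toml
# fly.toml -- hypothetical app name, region, and port for illustration only
app = "agency-demo-service"
primary_region = "iad"

[http_service]
  internal_port = 8080        # port the application listens on inside the microVM
  force_https = true          # redirect plain HTTP to HTTPS at Fly's edge
  auto_stop_machines = true   # stop idle machines when there is no traffic
  auto_start_machines = true  # start them again when requests arrive
  min_machines_running = 1    # keep at least one instance warm
```

Scaling out is then a matter of commands such as `fly scale count 3`, which is part of what makes rapid, globally distributed deployment attractive for agency workloads.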

Impact on Government Services and Military Operations

The combination of Gleam and Fly offers a compelling proposition for government agencies looking to modernize their IT infrastructure. The reliability and efficiency of Gleam, coupled with the secure and scalable hosting provided by Fly, pave the way for a new era of government applications that are both robust and rapidly deployable.

For Government Employees

Government employees stand to benefit from applications that are more reliable and easier to use, with less downtime and faster access to the services they need. This translates to improved efficiency in day-to-day operations and the ability to provide better services to the public.

For Members of Our Military

(Note: To our knowledge, Fly is not FedRAMP authorized, nor has it been submitted for an ATO. This is purely about future capability rather than the current state.) For military operations, the implications are profound. The ability to quickly deploy secure, reliable applications can enhance operational capabilities, improve logistics, and provide commanders and soldiers with the information and tools they need in a timely manner. The global reach of Fly’s hosting services ensures that these applications can be accessed quickly, from anywhere, which is often a critical requirement for military operations.

Conclusion

The synergy between Gleam and Fly represents a forward-thinking approach to coding and hosting that could transform how government agencies and the military deploy applications. By embracing these technologies, agencies can deliver more applications quickly, securely, and reliably, ultimately enhancing the services provided to government employees, military personnel, and the public. This unique combination of programming and hosting is not just a technological advancement; it’s a step towards a more efficient, effective, and secure future for government IT.