21 Aug 2023

Experimenting with ChatGPT

Filed Under: blog
Tags: blog  ChatGPT  OpenAI  artificial-intelligence 

An Experiment with ChatGPT

Over the coming weeks and months, I will be experimenting with ChatGPT and looking at how ChatGPT-4 produces content. There has been a lot of buzz about how AI and Large Language Models (LLMs) like ChatGPT will replace mainstream journalism and other forms of content creation.

Such efforts are not without very visible failures, though. For example, when ChatGPT was used to publish a Star Wars article on Gizmodo, the generated text was error-filled and largely hallucinated. The same thing happened at CNET, which caught a lot of backlash for its efforts.

Note: The link above referencing the issues at Gizmodo is a paywall-free gift link to the Washington Post.

Within the security space, I am personally aware of at least one company that is using ChatGPT to generate blogs and various other types of enablement content. Granted, their content editor catches many of the articles and seriously scrubs or rejects them completely, but the fact that it is already happening at such an early stage is a bit alarming. Therefore, one of the experiments you’ll be seeing on this site is the publication of ChatGPT-4 generated articles that I will then edit and critique. In some cases, I will compare ChatGPT-3 articles to those generated by ChatGPT-4.

Government Compliance

Under a new law, schools in Iowa have to remove titles with specific sexual content from libraries. Asking an AI chatbot proved to be the easiest way.

A report in Wired, following a brief in The Gazette, talks about ChatGPT being used to enforce book-banning laws. The Wired article goes into more detail, but the deeper problem isn’t ChatGPT: laws like the one in Iowa have placed educators between a rock and a hard place, often so that people completely removed from the education system can score political points. Again, I think this is another example of expecting artificial intelligence to do too much without any verification.

No Silver Bullet for Criminals

It’s already been established that LLMs can assist with some tasks, but they’re not a silver bullet. The Register had a good write-up on some of the things criminals have been attempting to use LLMs for, including a note on what is and isn’t working.

If anything, criminals are likely to focus AI on helping with common tasks, streamlining operations, and creating efficiencies where possible. For example, they can pull language from breach-related press releases, FAQs, and press comments, then use it to craft believable text. From there, it is a simple matter of modifying a template, and there are scores of programs out there to do that already; some spam developers are even including ChatGPT in their applications and services for this purpose.
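To be clear about how low the bar is here, what follows is a minimal, hypothetical sketch of that templating step in Python. None of the names, phrases, or addresses come from a real lure or a real breach; the point is only that once plausible language exists, filling in a template is trivial.

    # Hypothetical mail-merge templating, as described above. All
    # names, strings, and addresses are made up for illustration.
    from string import Template

    # Phrases like these could be lifted from a company's own breach
    # press release or FAQ, which is what makes the text read as real.
    lure = Template(
        "Dear $name,\n\n"
        "As noted in our recent security notice, $company is asking all "
        "customers to verify their account details at $link.\n"
    )

    recipients = [
        {"name": "Pat Example", "company": "ExampleCorp",
         "link": "hxxp://example.invalid/verify"},
    ]

    for r in recipients:
        print(lure.substitute(r))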

Criminals can also use AI to process large amounts of compromised data. Using basic processing and enrichment methods, they can focus that data on specific attributes and connections, for example, targeting people based on their personal connections to other people, vendors, hobbies, professional networks, regions, or likes and dislikes. This is already happening, but AI could make it faster.
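As another hypothetical sketch, the core of that pivoting is little more than a group-by over stolen records; AI mostly speeds up the cleanup and enrichment around it. Everything below is invented for illustration.

    # Hypothetical enrichment/pivoting over compromised records.
    # The records and fields are made up; the point is that the
    # underlying operation is a simple group-by.
    from collections import defaultdict

    records = [
        {"email": "a@example.invalid", "employer": "ExampleCorp", "hobby": "sailing"},
        {"email": "b@example.invalid", "employer": "ExampleCorp", "hobby": "chess"},
        {"email": "c@example.invalid", "employer": "OtherCo",     "hobby": "sailing"},
    ]

    # Pivot on any attribute of interest: employer, hobby, region,
    # professional network, and so on.
    by_employer = defaultdict(list)
    for rec in records:
        by_employer[rec["employer"]].append(rec["email"])

    print(dict(by_employer))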

Bug Bounties and Partial-Full Disclosure

I will also be poking at ChatGPT as part of its bug bounty program, and there is a lot to do there.

Since jailbreaks are out of scope for the program, I’ll stick to posting about the funny ones here. If there is something really serious, I will do partial full disclosure: confirming a jailbreak and disclosing the results while keeping the exact details of the prompt and follow-up methods redacted. If an issue I discover is covered by the bounty program, I will report it there first and disclose as soon as possible.

For now, I’m going to start with generated articles. If you have topic suggestions, feel free to ping me on social media and share them.


-30-
