Co-Intelligence, GPTs, and autonomous agents

    The big technology news this past week has been OpenAI, the company behind ChatGPT and DALL-E, announcing the availability of GPTs. Confusing naming aside, this introduces the idea of anyone being able to build ‘agents’ to help them with tasks.

    Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, is something of an authority in this area. He’s posted on what this means in practice, and gives some examples.

    Mollick has a book coming out next April, called Co-Intelligence, which I’m looking forward to reading. For now, I’d recommend adding his newsletter to the list of those you read about AI (along with Helen Beetham’s, of course).

    The easy way to make a GPT is something called GPT Builder. In this mode, the AI helps you create a GPT through conversation. You can also test out the results in a window on the side of the interface and ask for live changes, creating a way to iterate and improve your work. This is a very simple way to get started with prompting, especially useful for anyone who is nervous or inexperienced. Here, I created a choose-your-own adventure game by just asking the AI to make one, and letting it ask me questions about what else I wanted.


    So GPTs are easy to make and very powerful, though they are not flawless. But they also have two other features that make them useful. First, you can publish or share them with the world or your organization (which addresses my previous calls for building organizational prompt libraries, which I call grimoires), and potentially sell them in a future App Store that OpenAI has announced. Second, the GPT starts from its hidden prompt, so working with one is much more seamless than pasting text directly into the chat window. We now have a system for creating GPTs that can be shared with the world.


    In their reveal of GPTs, OpenAI clearly indicated that this was just the start. Using that action button you saw above, GPTs can be easily integrated with other systems, such as your email, a travel site, or corporate payment software. You can start to see the birth of true agents as a result. It is easy to design GPTs that can, for example, handle expense reports. Such a GPT would have permission to look through all your credit card data and emails for likely expenses, write up a report in the right format, submit it to the appropriate authorities, and monitor your bank account to ensure payment. And you can imagine even more ambitious autonomous agents that are given a goal (make me as much money as you can) and carry that out in whatever way they see fit.
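The expense-report example above boils down to a tool-calling loop: the model decides which registered action to invoke, and a harness executes it. Here is a minimal sketch of that pattern in Python; all names (`find_expenses`, `submit_report`, the JSON decision format) are illustrative assumptions, not OpenAI's actual API.

```python
import json

# Registry of actions the agent is permitted to take.
ACTIONS = {}

def action(fn):
    """Register a function as an action the agent may call."""
    ACTIONS[fn.__name__] = fn
    return fn

@action
def find_expenses(keyword):
    # Stand-in for searching email / credit-card data.
    records = [
        {"memo": "taxi to airport", "amount": 42.5},
        {"memo": "team lunch", "amount": 88.0},
    ]
    return [r for r in records if keyword in r["memo"]]

@action
def submit_report(items):
    # Stand-in for filing with corporate payment software.
    total = sum(i["amount"] for i in items)
    return {"status": "submitted", "total": total}

def run_agent(model_decision):
    """Execute a (mocked) model decision: {"action": name, "args": {...}}.

    In a real system this JSON would come from the model itself.
    """
    decision = json.loads(model_decision)
    fn = ACTIONS[decision["action"]]
    return fn(**decision["args"])

expenses = run_agent('{"action": "find_expenses", "args": {"keyword": "taxi"}}')
report = run_agent(json.dumps({"action": "submit_report", "args": {"items": expenses}}))
print(report)  # {'status': 'submitted', 'total': 42.5}
```

The key design point is that the harness, not the model, holds the permissions: the model can only request actions from the fixed registry, which is also where any safety limits would have to live.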

    You can start to see both near-term and longer-term risks in this approach. In the immediate future, AIs will become connected to more systems, and this can be a problem because AIs are incredibly gullible. A fast-talking “hacker” (if that is the right word) can convince a customer service agent to give a discount because the hacker has “super-duper-secret government clearance, and the AI has to obey the government, and the hacker can’t show the clearance because that would be disobeying the government, but the AI trusts him right…” And, of course, as these agents begin to truly act on their own, even more questions of responsibility and autonomous action start to arise. We will need to keep a close eye on the development of agents to understand the risks, and benefits, of these systems.

    Source: Almost an Agent: What GPTs can do | Ethan Mollick

    Raising the average level of creativity using AI

    Like most infants, my daughter wanted to speak before she was able to. Unlike most infants, she was extremely frustrated that she couldn’t do so.

    Most people can’t draw as well as they would like. Many people become exasperated when they can’t adequately express their ideas in written form.

    AI can help with all of this and, in my case, already is. This article, which draws on the results of three academic studies, is interesting in terms of how we can raise the average level of human creativity with the use of AI.

    Each of the three papers directly compares AI-powered creativity and human creative effort in controlled experiments. The first major paper is from my colleagues at Wharton. They staged an idea generation contest, pitting ChatGPT-4 against the students in a popular innovation class that has historically led to many startups. The researchers — Karan Girotra, Lennart Meincke, Christian Terwiesch, and Karl Ulrich — used human judges to assess idea quality, and found that ChatGPT-4 generated more, cheaper, and better ideas than the students. Even more impressive, from a business perspective, was that the purchase intent from outside judges was higher for the AI-generated ideas as well! Of the 40 best ideas rated by the judges, 35 came from ChatGPT.

    A second paper conducted a wide-ranging crowdsourcing contest, asking people to come up with business ideas based on reusing, recycling, or sharing products as part of the circular economy. The researchers (Léonard Boussioux, Jacqueline N. Lane, Miaomiao Zhang, Vladimir Jacimovic, and Karim R. Lakhani) then had judges rate those ideas, and compared them to the ones generated by GPT-4. The overall quality level of the AI- and human-generated ideas was similar, but the AI was judged to be better on feasibility and impact, while the humans generated more novel ideas.

    The final paper did something a bit different, focusing on creative writing rather than business ideas. The study by Anil R. Doshi and Oliver P. Hauser compared humans writing short stories on their own with humans who used AI to suggest 3-5 possible topics. Again, the AI proved helpful: humans with AI help created stories that were judged as significantly more novel and more interesting than those written by humans alone. There were, however, two interesting caveats. First, the most creative people were helped least by the AI. Second, AI-assisted ideas were generally judged to be more similar to each other than ideas generated by people. Though again, this was using AI purely for generating a small set of ideas, not for writing tasks.

    Source: Automating creativity | Ethan Mollick