The Rise of Prompt Engineering, and Why Lawyers May Be the Top Candidates
Happy Sunday and welcome to the Investing in AI Newsletter. I’m Rob May, a Partner at PJC, focused on AI and Robotics. Be sure to check out the latest Investing in AI podcasts on iTunes, featuring tech columnist Kevin Roose of the NYTimes discussing automation and Davis Sawyer of Deeplite discussing edge AI.
I want to point out a super interesting thread from a week ago. Karl Higley, who works on recommendation systems at Nvidia, wrote about how recommenders can collapse in on themselves when they are fed data generated by a recommender. It’s a big problem that few people are thinking about: when AI makes recommendations, and those outputs become inputs to a new AI system, you lose data diversity, and that feedback loop can degrade the system.
— Other Interesting Links —
The state of mental health apps in AI. The Cut.
A gaming site reviews Intel’s AI powered hate speech censoring tool. Kotaku.
A critique of machine learning for finding accounting fraud. EconJournal.
ARM’s new architecture explains why NVIDIA is buying them. NextPlatform.
Geoff Hinton on what’s next for AI. MIT Tech Review.
State of the art AI models for every field. Towards AI.
— Commentary —
Want to hear about a new job title you might see popping up at some companies? Try “Prompt Engineer.” It’s a new way to think about interacting with AI, and comes from the launch of OpenAI’s GPT-3 model.
For the non-technical reader: GPT-3 is, at the moment, the world’s top language model for natural language processing. You access GPT-3 through an API, a programmatic interface for sending requests into the model and getting results back out. APIs have been around a long time and have become fairly standardized: you ask the API for specific data, like the balance a customer owes, and that data gets returned. With GPT-3 it’s different.
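To make the contrast concrete, here’s a minimal sketch. The customer-balance endpoint is hypothetical; the GPT-3 request shape (`prompt`, `max_tokens`, `temperature`) follows OpenAI’s completions API as it launched, though the values here are just illustrative.

```python
# A traditional API request names the exact data you want back.
traditional_request = {
    "endpoint": "/v1/customers/42/balance",  # hypothetical REST endpoint
    "method": "GET",
}
# The response is structured and predictable, e.g. {"balance": 103.50}.

# A GPT-3 request instead hands the model free-form text to continue.
gpt3_request = {
    "endpoint": "/v1/completions",
    "method": "POST",
    "body": {
        "prompt": "Summarize this invoice in one sentence:\n...",
        "max_tokens": 50,    # upper bound on how much text comes back
        "temperature": 0.7,  # randomness of the continuation
    },
}
# The response is open-ended text whose shape depends on the prompt.
```

The difference in the request is the whole story: one names a field, the other supplies language and hopes the continuation comes back in a useful shape.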
GPT-3 requires a “prompt”: the text you send it, which shapes how much language it gives back and in what format. It’s less precise than a traditional API because you aren’t necessarily saying “give me 14 words to finish this sentence.” Often you want GPT-3 to produce enough words to finish something coherently, within a range, and a hard limit could produce an awkward cutoff. Other times, say when you ask GPT-3 to write a haiku, you do need a very precise limit.
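A hedged sketch of those two ways of bounding output: a tight token cap for the haiku case, versus a generous cap plus a stop sequence that lets the model end naturally within a range. Parameter names follow OpenAI’s GPT-3 completions API; the prompts and numbers are made up for illustration.

```python
# Hard limit: a haiku is short, so cap the output tightly.
haiku_params = {
    "prompt": "Write a haiku about autumn leaves:\n",
    "max_tokens": 30,  # a tight cap for a tiny, fixed form
}

# Soft limit: let the model finish a thought naturally, but stop at
# a paragraph break so it doesn't ramble into new material.
sentence_params = {
    "prompt": "The most surprising thing about the demo was",
    "max_tokens": 60,    # generous ceiling, rarely hit
    "stop": ["\n\n"],    # end generation at the first blank line
}
```

The stop sequence is what makes the “within a range” behavior possible: the cap is a safety net, and the model decides where the text actually ends.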
Here is an example of a prompt: the first sentences are provided by the prompt writer, and GPT-3 finishes it off.
Prompt engineering is a little bit art and a little bit science, and it’s quickly becoming an important skill. The essence of prompt engineering is getting the model to give you the output you want, in the format you want. It’s like playing a game with a person where you lead them toward the answer you want them to give, purely through what you say to them.
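One common way to do that leading is “few-shot” prompting: show the model a couple of worked examples in the exact format you want, then leave the last answer blank for it to fill in. A minimal sketch, with a hypothetical helper and made-up review data:

```python
def build_few_shot_prompt(examples, query):
    """Lead the model into a fixed output format by showing examples.

    Each worked example demonstrates the input/output shape we want;
    the final line leaves the answer blank for the model to complete.
    """
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    [("I loved every minute.", "Positive"),
     ("Total waste of money.", "Negative")],
    "The plot dragged, but the acting was superb.",
)
print(prompt)
```

Because the prompt ends mid-pattern, at “Sentiment:”, the model’s most natural continuation is a one-word label in the same format as the examples, which is exactly the game described above.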
For that reason, I wonder if we will see some lawyers become GPT-3 prompt engineers. If any job has, as its core skill, getting people to say what you want them to say by leading them along with prompts, it’s trial law.
And I expect prompt engineering to grow. As huge, powerful ML models like GPT-3 proliferate across all kinds of domains, these models will be smart and flexible, and getting the right data out of them will, in some cases, take more skill than reading a database table or pinging an API.
We often hear that automation will take jobs. GPT-3 will certainly reduce the number of writers we need, but it, and tools like it, will increase the number of prompt engineers we need to make these tools work. It’s definitely something to watch.
Thanks for reading.
@robmay