Large language models and generative AI

Large language models and generative artificial intelligence (AI) are extremely powerful because they allow us, as humans, to interact with AI tools in ways we never could before. But working with large language models, or LLMs, requires prompting.

How we "design" our prompts (their order, specificity, length, style, tone, and other aspects) is important to consider. We've led the teams that developed Prompt Builder and Einstein Studio, and along the way we've gained critical lessons we'd love to share.

What’s a prompt?
First, let's define "prompts": a prompt is the message, or "instructions," we submit to an LLM to be processed, which gives us an answer, or "prompt response." In natural language, we might say "question and answer." When we refer to LLMs (deep-learning algorithms that perform a variety of natural language processing (NLP) tasks), it's common to use "prompt and response" as the parallel terminology. Prompt design, or prompt engineering, is the practice we're learning as we embark on this new way of working with machines, and it's a critical skill for success in interacting with AI.
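To make the prompt-and-response terminology concrete, here is a minimal Python sketch of sending a prompt to an LLM API and reading back the response. The OpenAI client and the model name are illustrative stand-ins only, not the tooling discussed in this post; Prompt Builder and Einstein Studio expose prompts through their own interfaces, but the basic prompt-in, response-out shape is the same.

```python
# Minimal sketch: submit a prompt to an LLM and read back the "prompt response".
# The OpenAI client and model name are illustrative assumptions; any LLM API
# follows the same prompt -> response pattern.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

prompt = "Summarize this customer case in two sentences: ..."

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration only
    messages=[{"role": "user", "content": prompt}],
)

response = completion.choices[0].message.content  # the prompt response
print(response)
```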


Improving your prompt design
To be effective at prompting LLMs and getting good results, we must unlearn what we know about English conversation and writing. Talking to a computer is not the same as an exchange between two humans, and the polished grammar and full sentences we use with people do not work as well when talking to machines. We have devised seven tips that can help you design powerful prompts (a sketch of a prompt that applies them follows the list):

Know that AI doesn’t know you: Provide clear context on the who, what, and purpose of your prompt.
Always bowl with bumpers: Provide guidelines or “policies” in your prompt to keep AI in its lane.
Shuffle the playlist: The order of your instructions drastically impacts the response.
Do and do not…you must try: Telling language models what not to do can be as important as telling them what to do.
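As a rough illustration of how these tips can come together, here is a hypothetical prompt template in Python that bakes in clear context (who, what, purpose), guardrails to keep the AI in its lane, deliberately ordered instructions, and explicit "do" and "do not" rules. The structure, field names, and wording are our own sketch, not output from Prompt Builder or Einstein Studio.

```python
# Hypothetical prompt template applying the tips above: context, guardrails
# ("bumpers"), ordered instructions, and explicit do/do-not rules.
# All field names and values are illustrative only.
PROMPT_TEMPLATE = """You are a support agent for {company}, writing to {customer_name}.
Purpose: draft a reply about their open case #{case_id}.

Guidelines (stay within these at all times):
- Only discuss the products mentioned in the case details below.
- If information is missing, say so instead of guessing.

Instructions, in order:
1. Thank the customer by name.
2. Summarize the case status in one sentence.
3. List the next steps as bullet points.

Do: keep the reply under 150 words and use a friendly, professional tone.
Do not: promise refunds, quote prices, or mention internal systems.

Case details:
{case_details}
"""

prompt = PROMPT_TEMPLATE.format(
    company="Acme Corp",
    customer_name="Jordan",
    case_id="00123",
    case_details="Customer reports the mobile app crashes on login since the last update.",
)
print(prompt)
```

Keeping the context and guardrails ahead of the numbered instructions is one way to act on the ordering tip: what the model reads first tends to frame everything that follows.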