Overall, it seems that providing examples is useful for solving some tasks. When zero-shot prompting and few-shot prompting are not sufficient, it might mean that whatever was learned by the model isn't enough to do well at the task. From here it is recommended to start thinking about fine-tuning your models or experimenting with more advanced prompting techniques. More recently, chain-of-thought (CoT) prompting has been popularized to address more complex arithmetic, commonsense, and symbolic reasoning tasks. In other words, it might help if we break the problem down into steps and demonstrate that to the model. If you take a closer look, the type of task we have introduced involves a few more reasoning steps, and few-shot prompting alone is not enough to get reliable responses for this kind of reasoning problem. Let's first try an example with random labels (meaning the labels Negative and Positive are randomly assigned to the inputs):
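To make the two ideas concrete, here is a minimal sketch of how such prompts can be assembled as plain strings. The helper name `few_shot_random_labels`, the example sentences, and the `//` label separator are assumptions for illustration, not part of any particular library; the chain-of-thought prompt simply demonstrates one worked reasoning step before asking a new question.

```python
import random

def few_shot_random_labels(examples, query, seed=0):
    """Build a few-shot prompt in which the Negative/Positive labels
    are randomly reassigned to the example inputs."""
    rng = random.Random(seed)
    labels = ["Negative", "Positive"]
    lines = []
    for text in examples:
        # Each demonstration gets a randomly chosen label,
        # ignoring the true sentiment of the text.
        lines.append(f"{text} // {rng.choice(labels)}")
    # The final line is the query the model is asked to label.
    lines.append(f"{query} //")
    return "\n".join(lines)

examples = [
    "This is awesome!",
    "This is bad!",
    "Wow that movie was rad!",
]
prompt = few_shot_random_labels(examples, "What a horrible show!")
print(prompt)

# A chain-of-thought style prompt instead demonstrates the
# intermediate reasoning steps before posing a new question.
cot_prompt = (
    "Q: The odd numbers in this group add up to an even number: "
    "4, 8, 9, 15, 12, 2, 1.\n"
    "A: Adding all the odd numbers (9, 15, 1) gives 25. "
    "The answer is False.\n"
    "Q: The odd numbers in this group add up to an even number: "
    "15, 32, 5, 13, 82, 7, 1.\n"
    "A:"
)
print(cot_prompt)
```

In practice, each prompt string would be sent to a language model; the point of the random-label variant is to test how much the model relies on the demonstrated labels versus the overall format of the examples.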