If knowledge was not seen during pre-training, adding it at fine-tuning time may make the model more prone to hallucination. By that reasoning, fine-tuning is unlikely to improve the factuality of open-source models, and GPT-4's factual accuracy likewise comes from its pre-training.

anton: This is a useful recent talk on why LLMs hallucinate. It seems that fine-tuning can teach the model to hallucinate more if that knowledge was not previously seen during training.