{"id":3244,"date":"2025-12-20T21:54:10","date_gmt":"2025-12-21T05:54:10","guid":{"rendered":"https:\/\/www.tiptinker.com\/the-llm-alignment-frontier-a-deep-dive-into-ppo-dpo-grpo-dapo-and-gspo\/"},"modified":"2026-02-02T10:09:27","modified_gmt":"2026-02-02T18:09:27","slug":"the-llm-alignment-frontier-a-deep-dive-into-ppo-dpo-grpo-dapo-and-gspo","status":"publish","type":"post","link":"https:\/\/www.tiptinker.com\/zh-hans\/the-llm-alignment-frontier-a-deep-dive-into-ppo-dpo-grpo-dapo-and-gspo\/","title":{"rendered":"Farewell to PPO and DPO: A Deep Dive into GRPO, DAPO, and GSPO, the Next-Generation LLM Alignment Stack"},"content":{"rendered":"<p>In 2023-2024, the RLHF (Reinforcement Learning from Human Feedback) landscape was a binary choice:<\/p>\n<ol>\n<li><strong>PPO (Proximal Policy Optimization):<\/strong> the \"orthodox\" but expensive option. To keep training stable, you must load a <strong>Critic Model (Value Network)<\/strong> as large as the Policy Model, which roughly doubles GPU memory usage (2x parameters plus optimizer states). For 70B+ models, that means costly H100 clusters. On top of this, PPO's hyperparameter sensitivity has broken countless engineers tuning <code>kl_coeff<\/code> and <code>clip_range<\/code>.<\/li>\n<li><strong>DPO (Direct Preference Optimization):<\/strong> the \"efficient\" alternative. By recasting the original RL problem as binary classification (preference learning), DPO removes the Critic Model. By 2025, however, DPO's limits on <strong>reasoning-intensive tasks (Reasoning\/CoT)<\/strong> had become clear. DPO mostly performs \"style alignment\" rather than genuine exploration and exploitation; unlike RL, it cannot discover better solution paths through rollouts.<\/li>\n<\/ol>\n<p><strong>The pain point today:<\/strong> we need an algorithm with RL-grade \"search\" capability (enough to elicit DeepSeek-R1-level reasoning) but without PPO's heavy memory overhead.<\/p>\n<p><strong>The answer is the Group-Based Policy Optimization family: GRPO, DAPO, and GSPO.<\/strong><\/p>\n<hr \/>\n<h2>\ud83c\udfd7\ufe0f Architectural Evolution: From Token-Level to Group-Level<\/h2>\n<h3>1. 
GRPO (Group Relative Policy Optimization)<\/h3>\n<p>Source: <a href=\"https:\/\/arxiv.org\/abs\/2402.03300\">DeepSeek-Math \/ DeepSeek-R1 Paper<\/a><\/p>\n<p>GRPO is the cornerstone of this paradigm shift. Its core innovation is <strong>removing the Critic Model entirely<\/strong>.<\/p>\n<p><strong>How it works:<\/strong><br \/>\nInstead of relying on a Value Network to estimate the baseline as PPO does, GRPO uses group statistics as the baseline.<br \/>\nFor a single prompt, the model samples a group of G outputs.<br \/>\nEach output's advantage is computed by normalizing its reward within the group:<\/p>\n<div class=\"easy-katex-wrapper easy-katex-block\" id=\"katex-1\" data-formula=\"A_i = \\frac{r_i - \\text{mean}(\\{r_1, \\dots, r_G\\})}{\\text{std}(\\{r_1, \\dots, r_G\\}) + \\epsilon}\" data-display=\"true\"><\/div>\n<p><strong>Key advantages:<\/strong><\/p>\n<ul>\n<li><strong>Memory savings:<\/strong> only the Policy Model and the Reference Model (used for the KL-divergence term) need to be loaded.<\/li>\n<li><strong>Suited to reasoning:<\/strong> by sampling multiple paths, the model learns on its own which reasoning steps work better, a great fit for math and code tasks.<\/li>\n<\/ul>\n<h3>2. 
DAPO (Decoupled Clip &amp; Dynamic Sampling)<\/h3>\n<p>Source: <a href=\"https:\/\/arxiv.org\/abs\/2503.14476\">DAPO Paper (ArXiv 2025)<\/a><\/p>\n<p>GRPO solves the memory problem, but long chain-of-thought training with it is prone to <strong>entropy collapse<\/strong> and instability. DAPO is a version optimized for large-scale reasoning training.<\/p>\n<p><strong>Key improvements:<\/strong><\/p>\n<ul>\n<li><strong>Clip-Higher:<\/strong> standard PPO\/GRPO clips the update ratio symmetrically. DAPO found that for reasoning tasks, relaxing the upper clip bound (Clip-Higher) prevents the model from converging prematurely to a local optimum and preserves output diversity.<\/li>\n<li><strong>Dynamic Sampling:<\/strong> dynamically filter out prompt groups that are \"all right\" or \"all wrong\". If all G samples in a group score full marks, or all score zero, the within-group variance is 0, the gradient carries no signal, and the compute is wasted. DAPO resamples dynamically to keep gradients informative.<\/li>\n<\/ul>\n<h3>3. 
GSPO (Group Sequence Policy Optimization)<\/h3>\n<p>Source: <a href=\"https:\/\/arxiv.org\/abs\/2507.18071\">GSPO Paper (ArXiv 2025)<\/a><\/p>\n<p>When you scale to an <strong>MoE (Mixture of Experts)<\/strong> architecture (e.g., Qwen3-MoE or DeepSeek-V3), token-level GRPO updates can become unstable because of the sparsity of expert routing.<\/p>\n<p><strong>Core idea:<\/strong><br \/>\nGSPO lifts the optimization granularity from the <strong>token level<\/strong> to the <strong>sequence level<\/strong>.<\/p>\n<ul>\n<li><strong>Token-level (GRPO):<\/strong> every token has its own importance ratio.<\/li>\n<li><strong>Sequence-level (GSPO):<\/strong> the whole sequence shares a single importance ratio.<\/li>\n<\/ul>\n<div class=\"easy-katex-wrapper easy-katex-block\" id=\"katex-2\" data-formula=\"J_{GSPO}(\\theta) = \\mathbb{E} \\left[ \\min \\left( \\frac{\\pi_\\theta(o|q)}{\\pi_{old}(o|q)} A, \\text{clip}\\left(\\frac{\\pi_\\theta(o|q)}{\\pi_{old}(o|q)}, 1-\\epsilon, 1+\\epsilon\\right) A \\right) \\right]\" data-display=\"true\"><\/div>\n<p>This approach has proven extremely robust when training very large models, especially at preventing router collapse in MoE models.<\/p>\n<hr \/>\n<h2>\ud83d\udcca Algorithm Decision Matrix<\/h2>\n<table>\n<thead>\n<tr>\n<th>Feature<\/th>\n<th>PPO<\/th>\n<th>DPO<\/th>\n<th>GRPO<\/th>\n<th>DAPO<\/th>\n<th>GSPO<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Critic Model<\/strong><\/td>\n<td>\u2705 Required (expensive)<\/td>\n<td>\u274c Not needed<\/td>\n<td>\u274c Not needed<\/td>\n<td>\u274c Not needed<\/td>\n<td>\u274c Not needed<\/td>\n<\/tr>\n<tr>\n<td><strong>Primary Use Case<\/strong><\/td>\n<td>General RLHF<\/td>\n<td>Chat \/ style alignment<\/td>\n<td><strong>Math \/ code \/ reasoning<\/strong><\/td>\n<td><strong>Long-form reasoning (CoT)<\/strong><\/td>\n<td><strong>MoE \/ very large models<\/strong><\/td>\n<\/tr>\n<tr>\n<td><strong>Memory Footprint<\/strong><\/td>\n<td>High<\/td>\n<td>Low<\/td>\n<td>Low<\/td>\n<td>Low<\/td>\n<td>Low<\/td>\n<\/tr>\n<tr>\n<td><strong>Implementation Complexity<\/strong><\/td>\n<td>Very high<\/td>\n<td>Low<\/td>\n<td>Medium<\/td>\n<td>High<\/td>\n<td>High<\/td>\n<\/tr>\n<tr>\n<td><strong>Stability<\/strong><\/td>\n<td>Low (sensitive)<\/td>\n<td>High<\/td>\n<td>Medium<\/td>\n<td>High<\/td>\n<td>Very high<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr \/>\n<h2>\ud83d\udcbb Implementation: GRPO with TRL<\/h2>\n<p>As of late 2025, Hugging Face's <code>trl<\/code> library supports GRPO natively. This is currently the default production path for most engineering teams.<\/p>\n<h3>1. Environment Setup<\/h3>\n<pre><code class=\"language-bash\">pip install trl transformers accelerate bitsandbytes\r\n<\/code><\/pre>\n<h3>2. Core Implementation (GRPOTrainer)<\/h3>\n<p><strong>Note:<\/strong> the key here is the design of the <code>Reward Function<\/code>. GRPO is a natural fit for rule-based rewards (e.g., code pass rate, math answer matching).<\/p>\n<pre><code class=\"language-python\">import torch\r\nfrom datasets import load_dataset\r\nfrom trl import GRPOTrainer, GRPOConfig\r\n\r\n# 1. 
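(Aside, illustration only) GRPOTrainer computes the group-relative\r\n# advantage from the formula above internally; this hypothetical helper just\r\n# shows the arithmetic and is not used by the rest of this script.\r\ndef group_relative_advantages(rewards, eps=1e-4):\r\n    t = torch.tensor(rewards, dtype=torch.float32)\r\n    # Normalize each reward against its group's mean and std\r\n    return (t - t.mean()) \/ (t.std() + eps)\r\n\r\n# A half-right group such as [1.0, 0.0, 0.0, 1.0] gets symmetric\r\n# positive\/negative advantages that sum to (approximately) zero.\r\n\r\n# 1. 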
Configure paths and model\r\nmodel_id = \"deepseek-ai\/DeepSeek-R1-Distill-Llama-8B\"\r\noutput_dir = \".\/grpo_reasoning_v1\"\r\n\r\n# 2. Define the reward functions\r\n# GRPO supports multiple reward functions in parallel; the final reward is\r\n# their weighted sum (or a plain sum)\r\ndef correctness_reward_func(prompts, completions, answer, **kwargs):\r\n    \"\"\"\r\n    Check whether the completion contains the correct answer.\r\n    \"\"\"\r\n    rewards = []\r\n    for completion, gold_answer in zip(completions, answer):\r\n        # Simple string matching; real pipelines should use a proper answer parser\r\n        if str(gold_answer) in completion:\r\n            rewards.append(1.0)\r\n        else:\r\n            rewards.append(0.0)\r\n    return rewards\r\n\r\ndef format_reward_func(completions, **kwargs):\r\n    \"\"\"\r\n    Reward a specific CoT output format, e.g. &lt;think&gt;...&lt;\/think&gt;\r\n    \"\"\"\r\n    rewards = []\r\n    for completion in completions:\r\n        if \"&lt;think&gt;\" in completion and \"&lt;\/think&gt;\" in completion:\r\n            rewards.append(0.5)\r\n        else:\r\n            rewards.append(0.0)\r\n    return rewards\r\n\r\n# 3. Load the dataset\r\n# Assumes a JSONL dataset with 'prompt' and 'answer' columns\r\ndataset = load_dataset(\"json\", data_files=\"math_reasoning_data.jsonl\", split=\"train\")\r\n\r\n# 4. 
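(Aside, optional) Sanity-check the reward functions above before training:\r\n# a completion containing both the gold answer and well-formed think tags\r\n# should earn 1.0 + 0.5 from the two functions respectively.\r\n_demo = [\"&lt;think&gt;work step by step...&lt;\/think&gt; The answer is 42.\"]\r\nassert correctness_reward_func(prompts=None, completions=_demo, answer=[\"42\"]) == [1.0]\r\nassert format_reward_func(completions=_demo) == [0.5]\r\n\r\n# 4. 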
Configure GRPO\r\ntraining_args = GRPOConfig(\r\n    output_dir=output_dir,\r\n    num_train_epochs=1,\r\n    per_device_train_batch_size=8,  # global batch must be divisible by num_generations\r\n    gradient_accumulation_steps=4,\r\n    learning_rate=5e-6,             # GRPO usually wants a relatively low LR\r\n    logging_steps=10,\r\n    max_completion_length=1024,     # long enough to hold the CoT\r\n    num_generations=8,              # G: samples per prompt (group size)\r\n    beta=0.04,                      # KL penalty coefficient\r\n    fp16=True,\r\n)\r\n\r\n# 5. Initialize the trainer\r\n# GRPOTrainer handles policy-model loading and group sampling automatically\r\ntrainer = GRPOTrainer(\r\n    model=model_id,\r\n    args=training_args,\r\n    train_dataset=dataset,\r\n    reward_funcs=[correctness_reward_func, format_reward_func],\r\n)\r\n\r\n# 6. Start training\r\nprint(\"\ud83d\ude80 Starting GRPO Training...\")\r\ntrainer.train()\r\ntrainer.save_model(output_dir)\r\n<\/code><\/pre>\n<hr \/>\n<h2>\ud83d\udee0\ufe0f Production Checklist<\/h2>\n<ol>\n<li><strong>Data Curation (Crucial):<\/strong> GRPO relies on the model occasionally \"stumbling\" onto a correct answer during exploration. If your base model cannot reach even a 1% success rate, GRPO cannot get started (the cold-start problem).<\/li>\n<\/ol>\n<ul>\n<li><em>Action:<\/em> run a round of <strong>SFT (Supervised Fine-Tuning)<\/strong> first on high-quality CoT data (e.g., OpenO1 or DeepSeek distillation data) so the model already knows the basic <code>&lt;think&gt;<\/code> format.<\/li>\n<\/ul>\n<ol start=\"2\">\n<li><strong>Reward Engineering:<\/strong> 
Do not rely on a single sparse reward (right\/wrong).<\/li>\n<\/ol>\n<ul>\n<li><em>Action:<\/em> mix process rewards (correct format, clear steps) with outcome rewards (correct answer).<\/li>\n<\/ul>\n<ol start=\"3\">\n<li><strong>Group Size (G):<\/strong><\/li>\n<\/ol>\n<ul>\n<li><em>Action:<\/em> the larger G, the better, as far as memory allows (the config above uses <code>num_generations=8<\/code>). A larger group size yields a more accurate baseline estimate.<\/li>\n<\/ul>\n<ol start=\"4\">\n<li><strong>Monitor KL Divergence:<\/strong><\/li>\n<\/ol>\n<ul>\n<li><em>Action:<\/em> even without a Critic, KL blow-up is still the main cause of training collapse. If you see KL spiking in TensorBoard, increase <code>beta<\/code> or decrease <code>learning_rate<\/code>.<\/li>\n<\/ul>\n<hr \/>\n<p>The PPO era is not over, but in <strong>reasoning and code generation<\/strong>, <strong>GRPO and its variants (DAPO\/GSPO)<\/strong> have taken over. They buy stronger exploration with far less GPU memory.<\/p>\n<ul>\n<li>For routine fine-tuning: <strong>GRPO<\/strong> is the current sweet spot.<\/li>\n<li>If long-form CoT training is unstable: try <strong>DAPO<\/strong>'s dynamic-sampling strategy.<\/li>\n<li>If you are training a giant MoE model: <strong>GSPO<\/strong> is a must.<\/li>\n<\/ul>\n<p><strong>Now go refactor your training pipeline.<\/strong><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In 
2023-2024, the RLHF (Reinforcement Learning from Human Feedback) landscape was a binary choice: PPO (Proximal Policy Optimization): the \"orthodox\" but expensive option. To keep training stable, you must load a Critic Model (Value Network) as large as the Policy Model, which roughly doubles GPU memory usage (2x parameters plus optimizer states). For 70B+ models, that means costly H100 clusters. On top of this, PPO's hyperparameter sensitivity has broken countless engineers tuning kl_coeff and clip_range. DPO (Direct Preference Optimization): the \"efficient\" alternative. By recasting the original RL problem as binary classification (preference learning), DPO removes the Critic Model. By 2025, however, DPO's limits on reasoning-intensive tasks (Reasoning\/CoT) had become clear. DPO mostly performs \"style alignment\" rather than genuine exploration and exploitation; unlike RL, it cannot discover better solution paths through rollouts. 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3213,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[55],"tags":[],"class_list":["post-3244","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tips-tutorials"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.tiptinker.com\/zh-hans\/wp-json\/wp\/v2\/posts\/3244","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tiptinker.com\/zh-hans\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tiptinker.com\/zh-hans\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tiptinker.com\/zh-hans\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tiptinker.com\/zh-hans\/wp-json\/wp\/v2\/comments?post=3244"}],"version-history":[{"count":0,"href":"https:\/\/www.tiptinker.com\/zh-hans\/wp-json\/wp\/v2\/posts\/3244\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.tiptinker.com\/zh-hans\/wp-json\/wp\/v2\/media\/3213"}],"wp:attachment":[{"href":"https:\/\/www.tiptinker.com\/zh-hans\/wp-json\/wp\/v2\/media?parent=3244"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tiptinker.com\/zh-hans\/wp-json\/wp\/v2\/categories?post=3244"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tiptinker.com\/zh-hans\/wp-json\/wp\/v2\/tags?post=3244"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}