{"id":3247,"date":"2025-12-20T21:54:10","date_gmt":"2025-12-21T05:54:10","guid":{"rendered":"https:\/\/www.tiptinker.com\/the-llm-alignment-frontier-a-deep-dive-into-ppo-dpo-grpo-dapo-and-gspo\/"},"modified":"2026-02-02T10:09:56","modified_gmt":"2026-02-02T18:09:56","slug":"the-llm-alignment-frontier-a-deep-dive-into-ppo-dpo-grpo-dapo-and-gspo","status":"publish","type":"post","link":"https:\/\/www.tiptinker.com\/ja\/the-llm-alignment-frontier-a-deep-dive-into-ppo-dpo-grpo-dapo-and-gspo\/","title":{"rendered":"PPO\u3068DPO\u306b\u5225\u308c\u3092\uff1aGRPO\u3001DAPO\u3001GSPO\u5fb9\u5e95\u89e3\u8aac \u2014\u2014 \u6b21\u4e16\u4ee3LLM\u30a2\u30e9\u30a4\u30e1\u30f3\u30c8\u6280\u8853\u30b9\u30bf\u30c3\u30af"},"content":{"rendered":"<p>2023\u5e74\u304b\u30892024\u5e74\u306b\u304b\u3051\u3066\u3001RLHF\uff08Reinforcement Learning from Human Feedback\uff09\u306e\u30e9\u30f3\u30c9\u30b9\u30b1\u30fc\u30d7\u306f\u4e8c\u9805\u5bfe\u7acb\u306e\u69cb\u56f3\u3067\u3057\u305f\uff1a<\/p>\n<ol>\n<li><strong>PPO (Proximal Policy Optimization):<\/strong> \u300c\u6b63\u7d71\u300d\u3060\u304c\u9ad8\u30b3\u30b9\u30c8\u306a\u9078\u629e\u80a2\u3002\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u306e\u5b89\u5b9a\u6027\u3092\u7dad\u6301\u3059\u308b\u305f\u3081\u306b\u3001Policy Model\u3068\u540c\u7b49\u306e\u30b5\u30a4\u30ba\u3092\u6301\u3064 <strong>Critic Model (Value Network)<\/strong> \u3092\u30ed\u30fc\u30c9\u3059\u308b\u5fc5\u8981\u304c\u3042\u308a\u3001VRAM\u6d88\u8cbb\u91cf\u304c\u500d\u5897\u3057\u307e\u3059\uff082x Parameters + Optimizer States\uff09\u300270B\u4ee5\u4e0a\u306e\u30e2\u30c7\u30eb\u306b\u3068\u3063\u3066\u3001\u3053\u308c\u306f\u9ad8\u4fa1\u306aH100\u30af\u30e9\u30b9\u30bf\u30fc\u306e\u30b3\u30b9\u30c8\u3092\u610f\u5473\u3057\u307e\u3059\u3002\u3055\u3089\u306b\u3001PPO\u306e\u30cf\u30a4\u30d1\u30fc\u30d1\u30e9\u30e1\u30fc\u30bf\u611f\u5ea6\uff08Hyperparameter Sensitivity\uff09\u306f\u9ad8\u304f\u3001<code>kl_coeff<\/code> \u3084 <code>clip_range<\/code> \u306e\u8abf\u6574\u3067\u591a\u304f\u306e\u30a8\u30f3\u30b8\u30cb\u30a2\u3092\u75b2\u5f0a\u3055\u305b\u307e\u3057\u305f\u3002<\/li>\n<li><strong>DPO (Direct Preference Optimization):<\/strong> \u300c\u52b9\u7387\u7684\u300d\u306a\u4ee3\u66ff\u6848\u3002RL\u306e\u554f\u984c\u3092\u4e8c\u5024\u5206\u985e\uff08Preference Learning\uff09\u306b\u5909\u63db\u3059\u308b\u3053\u3068\u3067\u3001Critic Model\u3092\u6392\u9664\u3057\u307e\u3057\u305f\u3002\u3057\u304b\u30572025\u5e74\u73fe\u5728\u3001DPO\u306f <strong>\u63a8\u8ad6\u96c6\u7d04\u578b\u30bf\u30b9\u30af\uff08Reasoning\/CoT\uff09<\/strong> \u306b\u304a\u3044\u3066\u9650\u754c\u304c\u3042\u308b\u3053\u3068\u304c\u5224\u660e\u3057\u3066\u3044\u307e\u3059\u3002DPO\u306f\u672c\u8cea\u7684\u306b\u300c\u30b9\u30bf\u30a4\u30eb\u8abf\u6574\u300d\u3092\u884c\u3046\u3082\u306e\u3067\u3042\u308a\u3001\u771f\u306e\u300c\u63a2\u7d22\u3068\u6d3b\u7528\uff08Exploration &amp; Exploitation\uff09\u300d\u3067\u306f\u3042\u308a\u307e\u305b\u3093\u3002RL\u306e\u3088\u3046\u306b\u8a66\u884c\u932f\u8aa4\uff08Rollout\uff09\u3092\u901a\u3058\u3066\u3001\u3088\u308a\u512a\u308c\u305f\u89e3\u6cd5\u30d1\u30b9\u3092\u767a\u898b\u3059\u308b\u3053\u3068\u306f\u56f0\u96e3\u3067\u3059\u3002<\/li>\n<\/ol>\n<p><strong>\u73fe\u5728\u306e\u8ab2\u984c\uff08Pain Point\uff09\uff1a<\/strong> 
<p><strong>The current pain point:</strong> we need an algorithm with RL-grade "search" capability (DeepSeek-R1-level reasoning) that does not carry PPO's punishing VRAM bill.</p>
<p><strong>The answer is the group-based policy optimization family: GRPO, DAPO, and GSPO.</strong></p>
<hr />
<h2>🏗️ The Architectural Evolution: From Token Level to Group Level</h2>
<h3>1. GRPO (Group Relative Policy Optimization)</h3>
<p>Source: <a href="https://arxiv.org/abs/2402.03300">DeepSeek-Math / DeepSeek-R1 Paper</a></p>
<p>GRPO is the foundation of this paradigm shift. Its core innovation: <strong>it eliminates the Critic Model entirely</strong>.</p>
<p><strong>How it works:</strong><br />
Where PPO relies on a Value Network to estimate a baseline V(s), GRPO uses <strong>group statistics</strong> as the baseline.<br />
For a single prompt q, the model samples a group of outputs {o_1, ..., o_G} (e.g., G = 8).<br />
The advantage of each output is computed by normalizing its reward within the group:</p>
<div class="easy-katex-wrapper easy-katex-block" id="katex-1" data-formula="A_i = \frac{r_i - \text{mean}(\{r_1, \dots, r_G\})}{\text{std}(\{r_1, \dots, r_G\}) + \epsilon}" data-display="true"></div>
<p><strong>Key advantages:</strong></p>
<ul>
<li><strong>VRAM savings:</strong> Only the Policy Model and a Reference Model (for the KL divergence term) need to be loaded.</li>
<li><strong>Built for reasoning:</strong> By sampling multiple paths, the model automatically learns which reasoning steps are better, making GRPO an excellent fit for math and code-generation tasks (see the sketch below).</li>
</ul>
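<p>A minimal sketch of the group-relative advantage formula above; <code>rewards</code> holds the G scalar rewards for one prompt, and the names are illustrative.</p>
<pre><code class="language-python">import torch

def group_relative_advantage(rewards, eps=1e-4):
    """Normalize each reward against its own group's statistics,
    replacing PPO's learned value baseline (A_i in the formula above)."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: G = 8 rollouts of one prompt with rule-based 0/1 correctness rewards
rewards = torch.tensor([1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0])
print(group_relative_advantage(rewards))  # positive for correct, negative for incorrect
</code></pre>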
<h3>2. DAPO (Decoupled Clip and Dynamic Sampling Policy Optimization)</h3>
<p>Source: <a href="https://arxiv.org/abs/2503.14476">DAPO Paper (ArXiv 2025)</a></p>
<p>GRPO solved the VRAM problem, but long chain-of-thought training can still trigger <strong>entropy collapse</strong> and general training instability. DAPO is an optimized variant specialized for large-scale reasoning training.</p>
<p><strong>Key improvements:</strong></p>
<ul>
<li><strong>Clip-Higher:</strong> Conventional PPO/GRPO clips the update symmetrically. DAPO relaxes the upper clip bound (Clip-Higher) on reasoning tasks, preventing the model from converging early onto a local solution and preserving output diversity.</li>
<li><strong>Dynamic Sampling:</strong> Prompt groups that are all-correct or all-incorrect are filtered out on the fly. If all G sampled outputs score full marks or zero, the gradient carries no discriminative signal (variance = 0) and the compute is wasted. DAPO resamples dynamically to guarantee that every batch contributes a useful gradient (see the sketch below).</li>
</ul>
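<p>A hedged sketch of the dynamic-sampling idea: zero-variance groups are dropped and replaced by fresh samples. <code>sample_group</code> is a hypothetical stand-in for your rollout-plus-reward logic, not a DAPO or TRL API.</p>
<pre><code class="language-python">def has_learning_signal(rewards):
    """A group is useful only if its rewards differ: std > 0 means
    the group-relative advantages are non-zero."""
    return rewards.std() > 0

def dynamic_sampling(prompt_stream, sample_group, target_batch_size):
    """Keep sampling until the batch holds only groups with non-zero variance.
    `prompt_stream` is assumed to be a long iterator of prompts;
    `sample_group(prompt)` returns a reward tensor of size G."""
    kept = []
    for prompt in prompt_stream:
        rewards = sample_group(prompt)
        if has_learning_signal(rewards):
            kept.append((prompt, rewards))
        if len(kept) == target_batch_size:
            break
    return kept
</code></pre>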
<h3>3. GSPO (Group Sequence Policy Optimization)</h3>
<p>Source: <a href="https://arxiv.org/abs/2507.18071">GSPO Paper (ArXiv 2025)</a></p>
<p>When scaling to <strong>MoE (Mixture of Experts)</strong> architectures (e.g., Qwen3-MoE or DeepSeek-V3), token-level GRPO updates can become unstable because of the sparsity of expert routing.</p>
<p><strong>Core idea:</strong><br />
GSPO raises the optimization granularity from the <strong>token level</strong> to the <strong>sequence level</strong>.</p>
<ul>
<li><strong>Token-level (GRPO):</strong> Each token carries its own importance ratio.</li>
<li><strong>Sequence-level (GSPO):</strong> The whole sequence shares a single importance ratio.</li>
</ul>
<div class="easy-katex-wrapper easy-katex-block" id="katex-2" data-formula="J_{GSPO}(\theta) = \mathbb{E} \left[ \min \left( \frac{\pi_\theta(o|q)}{\pi_{old}(o|q)} A, \text{clip}\left(\frac{\pi_\theta(o|q)}{\pi_{old}(o|q)}, 1-\epsilon, 1+\epsilon\right) A \right) \right]" data-display="true"></div>
<p>This approach is remarkably robust when training very large models, and it is particularly effective at preventing router collapse in MoE models. The difference in granularity is sketched below.</p>
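<p>A minimal sketch contrasting the two granularities, assuming per-token log-probabilities of shape <code>(seq_len,)</code>. The length normalization (mean over tokens, i.e. the geometric mean of token ratios) reflects the GSPO paper's sequence-level ratio; treat the exact form as an assumption of this sketch.</p>
<pre><code class="language-python">import torch

def token_level_ratios(logp_new, logp_old):
    """GRPO: one importance ratio per token. On sparse MoE routing,
    these per-token ratios can become high-variance and destabilize updates."""
    return torch.exp(logp_new - logp_old)

def sequence_level_ratio(logp_new, logp_old):
    """GSPO: a single ratio for the whole sequence. Averaging the
    log-ratio over tokens keeps long sequences from blowing it up."""
    return torch.exp((logp_new - logp_old).mean())
</code></pre>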
<hr />
<h2>📊 Algorithm Decision Matrix</h2>
<table>
<thead>
<tr>
<th>Property</th>
<th>PPO</th>
<th>DPO</th>
<th>GRPO</th>
<th>DAPO</th>
<th>GSPO</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Critic Model</strong></td>
<td>✅ Required (costly)</td>
<td>❌ Not needed</td>
<td>❌ Not needed</td>
<td>❌ Not needed</td>
<td>❌ Not needed</td>
</tr>
<tr>
<td><strong>Primary scenario</strong></td>
<td>General RLHF</td>
<td>Chat / style tuning</td>
<td><strong>Math / code / reasoning</strong></td>
<td><strong>Long-form reasoning (CoT)</strong></td>
<td><strong>MoE / very large models</strong></td>
</tr>
<tr>
<td><strong>VRAM consumption</strong></td>
<td>High</td>
<td>Low</td>
<td>Low</td>
<td>Low</td>
<td>Low</td>
</tr>
<tr>
<td><strong>Implementation complexity</strong></td>
<td>Very high</td>
<td>Low</td>
<td>Medium</td>
<td>High</td>
<td>High</td>
</tr>
<tr>
<td><strong>Stability</strong></td>
<td>Low (sensitive)</td>
<td>High</td>
<td>Medium</td>
<td>High</td>
<td>Very high</td>
</tr>
</tbody>
</table>
<hr />
<h2>💻 Implementation: GRPO with TRL</h2>
<p>As of late 2025, Hugging Face's <code>trl</code> library supports GRPO natively, which makes it the standard implementation path for most engineering teams.</p>
<h3>1. Environment Setup</h3>
<pre><code class="language-bash">pip install trl transformers accelerate bitsandbytes
</code></pre>
<h3>2. Core Implementation (GRPOTrainer)</h3>
<p><strong>Note:</strong> The key here is the design of the <code>Reward Function</code>. GRPO is an excellent fit for rule-based rewards (code pass rates, exact-match math answers, and so on).</p>
<pre><code class="language-python">from datasets import load_dataset
from trl import GRPOTrainer, GRPOConfig

# 1. Paths and model
model_id = "deepseek-ai/deepseek-r1-distill-llama-8b"
output_dir = "./grpo_reasoning_v1"

# 2. Define the reward functions
# GRPO supports multiple reward functions in parallel; the final reward
# is their (weighted) sum.
def correctness_reward_func(prompts, completions, answer, **kwargs):
    """Check whether each completion contains the gold answer."""
    rewards = []
    for completion, gold_answer in zip(completions, answer):
        # Naive substring match; use a proper parser in production
        if str(gold_answer) in completion:
            rewards.append(1.0)
        else:
            rewards.append(0.0)
    return rewards

def format_reward_func(completions, **kwargs):
    """Enforce a specific CoT format (e.g., &lt;think&gt;...&lt;/think&gt;) on the model."""
    rewards = []
    for completion in completions:
        if "&lt;think&gt;" in completion and "&lt;/think&gt;" in completion:
            rewards.append(0.5)
        else:
            rewards.append(0.0)
    return rewards

# 3. Load the dataset
# Expects a dataset with 'prompt' and 'answer' columns
dataset = load_dataset("json", data_files="math_reasoning_data.jsonl", split="train")

# 4. GRPO configuration
training_args = GRPOConfig(
    output_dir=output_dir,
    num_train_epochs=1,
    per_device_train_batch_size=4,  # as large as VRAM allows
    gradient_accumulation_steps=4,
    learning_rate=5e-6,             # GRPO typically needs a low LR
    logging_steps=10,
    max_completion_length=1024,     # long enough to hold the CoT
    num_generations=8,              # G: samples per prompt (group size)
    beta=0.04,                      # KL penalty coefficient
    fp16=True,
)

# 5. Initialize the trainer
# GRPOTrainer handles policy-model loading and group sampling automatically
trainer = GRPOTrainer(
    model=model_id,
    args=training_args,
    train_dataset=dataset,
    reward_funcs=[correctness_reward_func, format_reward_func],
)

# 6. Start training
print("🚀 Starting GRPO Training...")
trainer.train()
trainer.save_model(output_dir)
</code></pre>
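<p>To scale beyond a single GPU, the script can be launched through <code>accelerate</code> (installed above). The filename <code>train_grpo.py</code> is illustrative; save the code above under any name.</p>
<pre><code class="language-bash"># Configure once (GPU count, mixed precision, DeepSpeed/FSDP), then launch
accelerate config
accelerate launch train_grpo.py
</code></pre>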
<hr />
<h2>🛠️ Adoption and Execution Checklist</h2>
<ol>
<li><strong>Data cleaning (most important):</strong> GRPO depends on the model occasionally stumbling onto a "correct" answer during exploration. If the base model's accuracy is below even 1%, GRPO cannot get off the ground (the cold-start problem).</li>
</ol>
<ul>
<li><em>Action:</em> First run <strong>SFT (Supervised Fine-Tuning)</strong> on high-quality CoT data (e.g., OpenO1 or DeepSeek distillation data) so the model understands the basic <code>&lt;think&gt;</code> format.</li>
</ul>
<ol start="2">
<li><strong>Reward engineering:</strong> Do not settle for a single sparse reward (correct/incorrect).</li>
</ol>
<ul>
<li><em>Action:</em> Mix Process Rewards (correct format, clear steps) with Outcome Rewards (correct answer), as in the sketch below.</li>
</ul>
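<p>A hedged illustration of mixing the two signal types into one scalar per completion. The <code>Step N</code> pattern and the weights are illustrative starting points, not values from the original post, and are worth sweeping.</p>
<pre><code class="language-python">import re

def process_reward(completion):
    """Dense shaping signal: reward visible intermediate structure
    even when the final answer is wrong."""
    score = 0.0
    if "&lt;think&gt;" in completion and "&lt;/think&gt;" in completion:
        score += 0.25                    # correct CoT scaffolding
    steps = re.findall(r"Step \d+", completion)
    score += min(len(steps), 4) * 0.05   # up to 4 clearly labelled steps
    return score

def outcome_reward(completion, gold_answer):
    """Sparse terminal signal: only the final answer matters."""
    return 1.0 if str(gold_answer) in completion else 0.0

def total_reward(completion, gold_answer, w_process=0.3, w_outcome=1.0):
    # Weighted mix; keep the outcome term dominant to limit reward hacking
    return (w_process * process_reward(completion)
            + w_outcome * outcome_reward(completion, gold_answer))
</code></pre>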
<ol start="3">
<li><strong>Group size (G):</strong></li>
</ol>
<ul>
<li><em>Action:</em> Make G as large as VRAM allows (the config above uses <code>num_generations=8</code>). The larger the group, the more accurate the baseline estimate.</li>
</ul>
<ol start="4">
<li><strong>Monitoring KL divergence:</strong></li>
</ol>
<ul>
<li><em>Action:</em> Even without a Critic, KL blow-up remains the leading cause of training collapse. If KL spikes in TensorBoard, increase <code>beta</code> or lower the <code>learning_rate</code>.</li>
</ul>
<h2>🔚 Conclusion</h2>
<p>The PPO era is not over, but in <strong>reasoning</strong> and <strong>code generation</strong>, <strong>GRPO and its descendants (DAPO/GSPO)</strong> have established a dominant position. They deliver stronger search capability with less VRAM.</p>
<ul>
<li>For ordinary fine-tuning: <strong>GRPO</strong> is the current sweet spot.</li>
<li>If long-form CoT training becomes unstable: try <strong>DAPO</strong>'s Dynamic Sampling strategy.</li>
<li>For training giant models such as MoE: <strong>GSPO</strong> is essential.</li>
</ul>