{"id":3248,"date":"2025-12-20T21:54:10","date_gmt":"2025-12-21T05:54:10","guid":{"rendered":"https:\/\/www.tiptinker.com\/the-llm-alignment-frontier-a-deep-dive-into-ppo-dpo-grpo-dapo-and-gspo\/"},"modified":"2026-02-02T10:09:42","modified_gmt":"2026-02-02T18:09:42","slug":"the-llm-alignment-frontier-a-deep-dive-into-ppo-dpo-grpo-dapo-and-gspo","status":"publish","type":"post","link":"https:\/\/www.tiptinker.com\/ko\/the-llm-alignment-frontier-a-deep-dive-into-ppo-dpo-grpo-dapo-and-gspo\/","title":{"rendered":"PPO\uc640 DPO\ub97c \ub118\uc5b4: GRPO, DAPO, GSPO &#8211; \ucc28\uc138\ub300 LLM \uc815\ub82c(Alignment) \uae30\uc220 \uc2a4\ud0dd \uc2ec\uce35 \ubd84\uc11d"},"content":{"rendered":"<p>2023-2024\ub144 RLHF(Reinforcement Learning from Human Feedback)\uc758 \uc9c0\ud615\uc740 \ud06c\uac8c \ub450 \uac00\uc9c0\ub85c \ub098\ub258\uc5c8\uc2b5\ub2c8\ub2e4:<\/p>\n<ol>\n<li><strong>PPO (Proximal Policy Optimization):<\/strong> &#8216;\uc815\uc11d&#8217;\uc774\uc9c0\ub9cc \ube44\uc6a9\uc774 \ub9e4\uc6b0 \ub9ce\uc774 \ub4ed\ub2c8\ub2e4. \ud559\uc2b5 \uc548\uc815\uc131\uc744 \uc704\ud574 Policy \ubaa8\ub378\uacfc \ub3d9\uc77c\ud55c \ud06c\uae30\uc758 <strong>Critic \ubaa8\ub378(Value Network)<\/strong>\uc744 \ub85c\ub4dc\ud574\uc57c \ud558\uba70, \uc774\ub294 \uba54\ubaa8\ub9ac \uc810\uc720\uc728\uc744 \uc989\uc2dc \ub450 \ubc30\ub85c \ub192\uc785\ub2c8\ub2e4(2x Parameters + Optimizer States). 70B \uc774\uc0c1\uc758 \ubaa8\ub378\uc744 \ud559\uc2b5\ud558\ub824\uba74 \uace0\uac00\uc758 H100 \ud074\ub7ec\uc2a4\ud130\uac00 \ud544\uc218\uc801\uc785\ub2c8\ub2e4. \ub610\ud55c <code>kl_coeff<\/code>, <code>clip_range<\/code>\uc640 \uac19\uc740 \ud558\uc774\ud37c\ud30c\ub77c\ubbf8\ud130 \ubbfc\uac10\ub3c4\uac00 \ub192\uc544 \uc5d4\uc9c0\ub2c8\uc5b4\uc758 \ud53c\ub85c\ub3c4\uac00 \uadf9\uc2ec\ud569\ub2c8\ub2e4.<\/li>\n<li><strong>DPO (Direct Preference Optimization):<\/strong> &#8216;\ud6a8\uc728\uc801&#8217;\uc778 \ub300\uc548\uc785\ub2c8\ub2e4. 
It recasts the RL problem as binary classification (Preference Learning), eliminating the Critic model. Entering 2025, however, DPO proved to have clear limits on <strong>reasoning-intensive tasks (Reasoning\/CoT)<\/strong>. DPO is closer to &#8216;style alignment&#8217; than to genuine &#8216;Exploration &amp; Exploitation&#8217;, so it falls short of eliciting the advanced reasoning seen in models like DeepSeek-R1.<\/li>\n<\/ol>\n<p><strong>The current challenge:<\/strong> an algorithm that keeps RL&#8217;s powerful &#8216;exploration&#8217; ability (DeepSeek-R1-level reasoning) without PPO&#8217;s massive memory cost.<\/p>\n<p><strong>The answer is the Group-Based Policy Optimization family: GRPO, DAPO, and GSPO.<\/strong><\/p>\n<hr \/>\n<h2>\ud83c\udfd7\ufe0f Architectural Evolution: From the Token Level to the Group Level<\/h2>\n<h3>1. GRPO (Group Relative Policy Optimization)<\/h3>\n<p>Source: <a href=\"https:\/\/arxiv.org\/abs\/2402.03300\">DeepSeek-Math \/ DeepSeek-R1 Paper<\/a><\/p>\n<p>GRPO is the cornerstone of this paradigm shift. 
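<\/p>\n<p>Before unpacking the mechanics, the group-baseline idea can be sketched in a few lines of plain Python (an illustrative sketch of the normalization only, not the DeepSeek implementation):<\/p>

```python
def group_relative_advantages(rewards, eps=1e-8):
    # GRPO-style baseline: center and scale each reward by its own
    # group's mean and standard deviation (no value network involved)
    g = len(rewards)
    mean = sum(rewards) / g
    std = (sum((r - mean) ** 2 for r in rewards) / g) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# one prompt, G = 4 sampled answers scored by a binary correctness reward
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])  # approx [1, -1, -1, 1]
```

<p>Above-average answers receive a positive advantage and are reinforced; below-average answers are pushed down.<\/p>\n<p>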
Its key innovation is <strong>removing the Critic model entirely<\/strong>.<\/p>\n<p><strong>How it works:<\/strong><br \/>\nInstead of estimating a baseline $V(s)$ with a Value Network as PPO does, GRPO uses &#8216;group statistics&#8217; as the baseline.<br \/>\nFor the same prompt, the model generates a group of $G$ outputs. Each output&#8217;s advantage is then computed by normalizing its reward within the group.<\/p>\n<div class=\"easy-katex-wrapper easy-katex-block\" id=\"katex-1\" data-formula=\"A_i = \\frac{r_i - \\text{mean}(\\{r_1, \\dots, r_G\\})}{\\text{std}(\\{r_1, \\dots, r_G\\}) + \\epsilon}\" data-display=\"true\"><\/div>\n<p><strong>Key advantages:<\/strong><\/p>\n<ul>\n<li><strong>Memory savings:<\/strong> only the Policy model and a Reference model (for the KL-divergence term) need to be loaded.<\/li>\n<li><strong>Optimized for reasoning:<\/strong> by sampling multiple solution paths, the model learns for itself which reasoning steps are better.<\/li>\n<\/ul>\n<h3>2. DAPO (Decoupled Clip &amp; Dynamic Sampling)<\/h3>\n<p>Source: <a href=\"https:\/\/arxiv.org\/abs\/2503.14476\">DAPO Paper (ArXiv 2025)<\/a><\/p>\n<p>GRPO solves the memory problem, but <strong>entropy collapse<\/strong> can still occur when training on long chains of thought (Long CoT). 
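<\/p>\n<p>One of the remedies introduced for this, dynamic sampling, can be sketched as a simple batch filter (a hypothetical helper assuming binary outcome rewards; the actual DAPO recipe keeps resampling until the batch is full):<\/p>

```python
def filter_informative_groups(groups):
    # Drop groups whose rewards are all identical (all correct or all
    # wrong): their group-relative advantages are zero, so they carry
    # no gradient signal and only dilute the batch.
    return [g for g in groups if max(g) != min(g)]

batches = [
    [1.0, 1.0, 1.0, 1.0],  # all correct -> dropped
    [1.0, 0.0, 0.0, 1.0],  # mixed       -> kept
    [0.0, 0.0, 0.0, 0.0],  # all wrong   -> dropped
]
kept = filter_informative_groups(batches)  # only the mixed group remains
```

<p>Filtering like this keeps every gradient step informative, which is part of what stabilizes long-CoT training.<\/p>\n<p>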
DAPO is a version optimized for large-scale reasoning training.<\/p>\n<p><strong>Key improvements:<\/strong><\/p>\n<ul>\n<li><strong>Clip-Higher:<\/strong> standard PPO\/GRPO clips the update ratio strictly on both sides; DAPO relaxes the upper bound so the model preserves diversity instead of collapsing into a local optimum.<\/li>\n<li><strong>Dynamic Sampling:<\/strong> sample groups with no discriminative signal, where every reward is identical (all correct or all wrong), are filtered out dynamically to maximize training efficiency.<\/li>\n<\/ul>\n<h3>3. GSPO (Group Sequence Policy Optimization)<\/h3>\n<p>Source: <a href=\"https:\/\/arxiv.org\/abs\/2507.18071\">GSPO Paper (ArXiv 2025)<\/a><\/p>\n<p>In <strong>MoE (Mixture of Experts)<\/strong> architectures such as Qwen3-MoE or DeepSeek-V3, token-level GRPO updates can become unstable because of the sparsity of expert routing.<\/p>\n<p><strong>Core logic:<\/strong><br \/>\nGSPO raises the unit of optimization from the <strong>token level<\/strong> to the <strong>sequence level<\/strong>.<\/p>\n<div class=\"easy-katex-wrapper easy-katex-block\" id=\"katex-2\" data-formula=\"J_{GSPO}(\\theta) = \\mathbb{E} \\left[ \\min \\left( \\frac{\\pi_\\theta(o|q)}{\\pi_{old}(o|q)} A, \\text{clip}\\left(\\frac{\\pi_\\theta(o|q)}{\\pi_{old}(o|q)}, 1-\\epsilon, 1+\\epsilon\\right) A \\right) \\right]\" 
data-display=\"true\"><\/div>\n<p>This approach excels at preventing collapse of the MoE Router when training ultra-large models.<\/p>\n<hr \/>\n<h2>\ud83d\udcca Algorithm Decision Matrix<\/h2>\n<table>\n<thead>\n<tr>\n<th>Criterion<\/th>\n<th>PPO<\/th>\n<th>DPO<\/th>\n<th>GRPO<\/th>\n<th>DAPO<\/th>\n<th>GSPO<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Critic model<\/strong><\/td>\n<td>\u2705 Required (costly)<\/td>\n<td>\u274c Not needed<\/td>\n<td>\u274c Not needed<\/td>\n<td>\u274c Not needed<\/td>\n<td>\u274c Not needed<\/td>\n<\/tr>\n<tr>\n<td><strong>Primary scenario<\/strong><\/td>\n<td>General RLHF<\/td>\n<td>Chat \/ style alignment<\/td>\n<td><strong>Math \/ code \/ reasoning<\/strong><\/td>\n<td><strong>Long-form reasoning (CoT)<\/strong><\/td>\n<td><strong>MoE \/ ultra-large models<\/strong><\/td>\n<\/tr>\n<tr>\n<td><strong>Memory footprint<\/strong><\/td>\n<td>High<\/td>\n<td>Low<\/td>\n<td>Low<\/td>\n<td>Low<\/td>\n<td>Low<\/td>\n<\/tr>\n<tr>\n<td><strong>Implementation complexity<\/strong><\/td>\n<td>Very high<\/td>\n<td>Low<\/td>\n<td>Medium<\/td>\n<td>High<\/td>\n<td>High<\/td>\n<\/tr>\n<tr>\n<td><strong>Stability<\/strong><\/td>\n<td>Low (sensitive)<\/td>\n<td>High<\/td>\n<td>Medium<\/td>\n<td>High<\/td>\n<td>Very high<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr \/>\n<h2>\ud83d\udcbb Implementation: Applying GRPO with TRL<\/h2>\n<p>As of late 2025, Hugging Face&#8217;s <code>trl<\/code> 
library supports GRPO out of the box.<\/p>\n<h3>Core Code Implementation (GRPOTrainer)<\/h3>\n<pre><code class=\"language-python\">from trl import GRPOTrainer, GRPOConfig\r\n\r\n# 1. Define the reward function\r\ndef correctness_reward_func(prompts, completions, answer, **kwargs):\r\n    # Reward 1.0 if the gold answer string appears in the completion, else 0.0\r\n    rewards = []\r\n    for completion, gold_answer in zip(completions, answer):\r\n        if str(gold_answer) in completion:\r\n            rewards.append(1.0)\r\n        else:\r\n            rewards.append(0.0)\r\n    return rewards\r\n\r\n# 2. GRPO configuration\r\ntraining_args = GRPOConfig(\r\n    output_dir=\".\/grpo_model\",\r\n    per_device_train_batch_size=4,\r\n    num_generations=8,              # G: group size\r\n    max_completion_length=1024,\r\n    beta=0.04,                      # KL penalty coefficient\r\n    fp16=True,\r\n)\r\n\r\n# 3. Initialize and run the trainer\r\n# (model_id and dataset are assumed to be defined earlier)\r\ntrainer = GRPOTrainer(\r\n    model=model_id,\r\n    args=training_args,\r\n    train_dataset=dataset,\r\n    reward_funcs=[correctness_reward_func],\r\n)\r\n\r\ntrainer.train()\r\n<\/code><\/pre>\n<hr \/>\n<h2>\ud83d\udee0\ufe0f Execution Checklist<\/h2>\n<ol>\n<li><strong>Data curation (required):<\/strong> GRPO relies on the model occasionally stumbling onto a correct answer during exploration. If the base model&#8217;s accuracy is 0%, training never gets off the ground. 
Always run <strong>SFT<\/strong> on high-quality CoT data first.<\/li>\n<li><strong>Reward engineering:<\/strong> design not just a right\/wrong signal (Outcome Reward) but also a check that the reasoning follows the required format (Process Reward).<\/li>\n<li><strong>Group Size ($G$):<\/strong> make $G$ as large as memory allows. The recommended range is 8&#8211;64.<\/li>\n<\/ol>\n<h2>\ud83d\udd1a Conclusion<\/h2>\n<p>In reasoning and code generation, <strong>GRPO and its variants (DAPO\/GSPO)<\/strong> have already displaced PPO. Delivering stronger exploration with less memory, these algorithms are now the standard for LLM engineering.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Through 2023-2024, the RLHF (Reinforcement Learning from Human Feedback) landscape split broadly into two camps: PPO (Proximal Policy Optimization): The &#8216;textbook&#8217; method, but very expensive. For training stability it must load a Critic model (Value Network) the same size as the Policy model, which immediately doubles memory usage (2x Parameters + Optimizer States). 
Training a model of 70B+ parameters requires a costly H100 cluster. On top of that, its sensitivity to hyperparameters such as kl_coeff and clip_range [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3213,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[57],"tags":[],"class_list":["post-3248","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tips-tutorials"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.tiptinker.com\/ko\/wp-json\/wp\/v2\/posts\/3248","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tiptinker.com\/ko\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tiptinker.com\/ko\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tiptinker.com\/ko\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tiptinker.com\/ko\/wp-json\/wp\/v2\/comments?post=3248"}],"version-history":[{"count":0,"href":"https:\/\/www.tiptinker.com\/ko\/wp-json\/wp\/v2\/posts\/3248\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.tiptinker.com\/ko\/wp-json\/wp\/v2\/media\/3213"}],"wp:attachment":[{"href":"https:\/\/www.tiptinker.com\/ko\/wp-json\/wp\/v2\/media?parent=3248"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tiptinker.com\/ko\/wp-json\/wp\/v2\/categories?post=3248"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tiptinker.com\/ko\/wp-json\/wp\/v2\/tags?post=3248"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}