From 69f25fd85abb06e06345c4f63e71319887a1b6bc Mon Sep 17 00:00:00 2001
From: Alma Geer
Date: Sun, 16 Feb 2025 08:48:16 +0100
Subject: [PATCH] Add 'DeepSeek Open-Sources DeepSeek-R1 LLM with Performance Comparable To OpenAI's O1 Model'

---
 ...R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md | 2 ++
 1 file changed, 2 insertions(+)
 create mode 100644 DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md

diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
new file mode 100644
index 0000000..20d6059
--- /dev/null
+++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
@@ -0,0 +1,2 @@
+DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
+DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. This base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models and released several versions of each.
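+
+GRPO is mentioned only in passing above; as a rough illustration, the sketch below shows the group-relative advantage normalization that gives GRPO its name, assuming scalar rewards for a group of completions sampled from the same prompt. Function names and the rule-based reward here are illustrative assumptions, not taken from the DeepSeek release.
+
+```python
+import numpy as np
+
+def group_relative_advantages(rewards, eps=1e-8):
+    """GRPO-style advantages: normalize each completion's reward against the
+    mean and standard deviation of its own sampled group, so no separate
+    value/critic network is required."""
+    r = np.asarray(rewards, dtype=float)
+    return (r - r.mean()) / (r.std() + eps)
+
+# Example: four completions sampled for one prompt, scored 1 if the final
+# answer is correct and 0 otherwise (a simple rule-based reward).
+print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))  # -> [ 1. -1. -1.  1.]
+```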