From 1c253055c4f32547fbe9c42384336d79ae478417 Mon Sep 17 00:00:00 2001
From: octavioirwin5
Date: Tue, 8 Apr 2025 05:14:06 +0200
Subject: [PATCH] Add 'DeepSeek Open-Sources DeepSeek-R1 LLM with Performance
 Comparable To OpenAI's O1 Model'

---
 ...R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
 create mode 100644 DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md

diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
new file mode 100644
index 0000000..3aca7ee
--- /dev/null
+++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
@@ -0,0 +1,24 @@
+DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve its reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
+
+DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. The base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models and released several versions of each.
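+
+The key idea in GRPO is to drop PPO's learned value network and instead use a group-relative baseline: for each prompt the policy samples a group of responses, and each response's advantage is its reward standardized against the group's mean and standard deviation. The snippet below is a minimal sketch of that advantage computation only; the function and variable names are illustrative and not taken from DeepSeek's code.
+
+```python
+import numpy as np
+
+def group_relative_advantages(rewards, eps=1e-8):
+    """GRPO-style advantages for one group of responses to the same prompt.
+
+    Each advantage is the response's scalar reward standardized against
+    the group mean and standard deviation, so no value network is needed.
+    """
+    rewards = np.asarray(rewards, dtype=np.float64)
+    baseline = rewards.mean()
+    scale = rewards.std() + eps  # eps guards against a zero-variance group
+    return (rewards - baseline) / scale
+
+# Example: four sampled responses to one prompt, rewarded 1.0 for a
+# correct final answer and 0.0 otherwise.
+print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
+```
\ No newline at end of file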