commit d344e461b3b81a9bdf367f8fcb31990c2a880f5d
Author: kraigmackinlay <kraig-mackinlay_7506@quirkyemails.fun>
Date:   Sat Feb 15 02:39:51 2025 +0100

    Add 'DeepSeek Open-Sources DeepSeek-R1 LLM with Performance Comparable To OpenAI's O1 Model'

diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
new file mode 100644
index 0000000..522d6fc
--- /dev/null
+++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
@@ -0,0 +1,2 @@
+DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
+DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. The base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models and released several versions of each.
\ No newline at end of file