Search results for "SFT"
04:28
🔥 StartupMining ongoing projects: $UNIO, $SFT, $X, $CYCON, $CYRUS, $CROS
✅ $UNIO mining supports $UNIO and $USDT mining pools, with an annualized yield of up to 1,898.32% in the $UNIO pool;
✅ $SFT mining supports $SFT and $ETH pools, with an annualized yield of up to 518.2% in the $SFT pool;
✅ The annualized mining yield of $X is as high as 508.06%;
✅ $CYCON mining supports $CYCON and $GT pools, with an annualized yield of up to 596.86% in the $CYCON pool;
✅ $CYRUS Phase 2 mining offers an annual percentage rate (APR) of up to 282.41%;
✅ $CROS mining supports $CROS and $BTC pools, with an APR of up to 576.19% in the $CROS pool;
💰 Start mining now: https://www.gate.io/zh/startup-mining
08:30
On the afternoon of January 12, 2024, the first SFT Innovation Forum was officially launched. During the meeting, the founders of zCloak Network and Solv Protocol jointly announced that they have formally become strategic partners. In November 2023, DESFT, a project jointly designed and incubated by zCloak and Solv, was showcased at the Singapore FinTech Festival with the joint support of the Monetary Authority of Singapore (MAS) and the Central Bank of Ghana. The project aims to help micro, small and medium-sized enterprises (MSMEs) in developing regions participate in international trade through enterprise digital identity, digital credentials and ERC-3525 technology, and to obtain fair, accurate and affordable financial services in cross-border digital economy activities. Through their deep cooperation on the DESFT project, both parties agreed that zCloak's zero-knowledge proof and digital identity technology and Solv Protocol's ERC-3525 standard are highly complementary, combine strongly in the development of practical applications, and leave broad room for innovation. To better explore the path of real-world asset tokenization, the two parties formally established a business strategic partnership to jointly expand future technical cooperation and commercial deployment scenarios.
03:59
According to TechWeb's report on September 19, the domestic authoritative evaluation system Flag_ (Libra) announced its September results for the latest large models. Based on the latest CLCC v2.0 subjective evaluation dataset, the September list focuses on 7 recently popular open-source dialogue models. Overall, Baichuan2-13b-chat, Qwen-7b-chat, and Baichuan2-7b-chat are among the best, with accuracy rates exceeding 65%. On the base-model list, the objective evaluation results of Baichuan2, Qwen, InternLM, and Aquila all surpassed the Llama and Llama2 models of the same parameter scale. On the SFT-model list, Baichuan2-13B-chat, YuLan-Chat-2-13B, and AquilaChat-7B rank in the top three. On both objective evaluation lists, Baichuan2 performed excellently, and its base model surpassed Llama2 in both the Chinese and English tests. Flag_ (Libra) is a large-model evaluation system and open platform launched by the Beijing Academy of Artificial Intelligence. It aims to establish scientific, fair and open evaluation benchmarks, methods and toolsets to help researchers comprehensively evaluate the performance of base models and training algorithms. The evaluation system currently covers 6 major evaluation tasks, nearly 30 evaluation datasets, and more than 100,000 evaluation questions.
07:30

Shizhi AI: neutral, open AI open-source community platform wisemodel officially launched

According to 36Kr's report on September 6, the Shizhi AI team announced that the neutral and open AI open-source community platform (wisemodel.cn) has officially launched. According to the announcement, the platform's goal is to gather commonly used open-source AI models, datasets and other resources from China and abroad, and to build a neutral, open AI open-source innovation platform. Models such as Tsinghua/Zhipu's ChatGLM2-6B, Stable Diffusion v1.5, AlphaFold2 and SeamlessM4T-Large, as well as datasets such as ShareGPT, UltraChat and moss-sft, are already available.
07:58
According to a report by Xinzhiyuan on September 5, the latest research from the Google team proposes using large models in place of humans for preference annotation, i.e. reinforcement learning from AI feedback (RLAIF). The study found that RLAIF can produce improvements comparable to RLHF without relying on human annotators, with the two roughly tied at a 50% win rate against each other. The research also showed once again that both RLAIF and RLHF achieve win rates of more than 70% against supervised fine-tuning (SFT).
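The core idea described above can be sketched in a few lines: an LLM, rather than a human, judges which of two candidate responses is better, and a win rate is computed over many comparisons. This is an illustrative sketch only; the `ai_judge` function here is a toy stand-in for a real LLM preference call, not Google's actual setup.

```python
# Toy sketch of RLAIF-style preference labeling and win-rate computation.
# ai_judge stands in for an LLM call; here it uses response length as a
# deliberately simple heuristic so the sketch is self-contained.

def ai_judge(prompt: str, response_a: str, response_b: str) -> str:
    """Stand-in for an LLM preference call; returns 'a' or 'b'."""
    return "a" if len(response_a) >= len(response_b) else "b"

def win_rate(comparisons) -> float:
    """Fraction of comparisons where policy A's response is preferred."""
    wins = sum(1 for prompt, a, b in comparisons
               if ai_judge(prompt, a, b) == "a")
    return wins / len(comparisons)

comparisons = [
    ("Summarize the article.", "A detailed summary...", "Short."),
    ("Explain SFT.", "Supervised fine-tuning trains...", "It's tuning."),
]
print(win_rate(comparisons))  # 1.0 with this toy judge
```

In the real method, the AI-labeled preferences are then used to train a reward model or directly steer policy optimization, in place of human labels.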
03:08
According to a report by Machine Heart on September 1, Fudan University's Data Intelligence and Social Computing Laboratory (FudanDISC) released DISC-MedLLM, a Chinese medical and health assistant. In medical consultation evaluations covering single-round question answering and multi-round dialogue, the model shows clear advantages over existing large medical dialogue models. In addition, the research team released DISC-Med-SFT, a high-quality supervised fine-tuning (SFT) dataset containing 470,000 examples. The model parameters and a technical report are also open source.
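To make the "multi-round dialogue SFT dataset" idea concrete, here is a hedged sketch of what one multi-turn training record might look like. The field names, the example content, and the flattening scheme are illustrative assumptions, not DISC-Med-SFT's actual schema.

```python
# Hypothetical multi-turn SFT record (illustrative schema, invented content).
example = {
    "conversation": [
        {"role": "user", "content": "I have had a mild headache for three days."},
        {"role": "assistant", "content": "Is the pain constant, and have you taken any medication?"},
        {"role": "user", "content": "It comes and goes; I have not taken anything."},
        {"role": "assistant", "content": "Intermittent headaches are common; rest may help, but see a doctor if it worsens."},
    ]
}

def flatten(conv) -> str:
    """Render a multi-turn conversation into one training string,
    one '<role>: <content>' line per turn."""
    return "\n".join(f"{t['role']}: {t['content']}" for t in conv)

print(flatten(example["conversation"]))
```

During SFT, records like this are typically rendered into a single sequence and the model is trained to predict the assistant turns.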
05:33
According to Machine Heart, on the 22nd two large models from Stability AI and the CarperAI lab, FreeWilly 1 and FreeWilly 2, surpassed Llama-2-70b-hf, which Meta had released three days earlier, and topped HuggingFace's Open LLM leaderboard. FreeWilly 2 also beat ChatGPT (GPT-3.5) on many benchmarks, becoming the first open-source large model that can truly compete with GPT-3.5, something Llama 2 had not achieved. FreeWilly 1 is built on the original LLaMA 65B base model with careful supervised fine-tuning (SFT) on new synthetic datasets in the standard Alpaca format. FreeWilly 2 is based on the latest LLaMA 2 70B base model.
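The "standard Alpaca format" mentioned above is a simple JSON layout with `instruction`/`input`/`output` fields that get rendered into a single prompt string before fine-tuning. A minimal sketch (the record contents are invented for illustration):

```python
# One record in the Alpaca SFT format: instruction / input / output.
record = {
    "instruction": "Translate the sentence into French.",
    "input": "Hello, world.",
    "output": "Bonjour, le monde.",
}

def to_prompt(r: dict) -> str:
    """Render an Alpaca-style record into one training string."""
    return (
        "Below is an instruction that describes a task.\n\n"
        f"### Instruction:\n{r['instruction']}\n\n"
        f"### Input:\n{r['input']}\n\n"
        f"### Response:\n{r['output']}"
    )

print(to_prompt(record))
```

Datasets in this format are usually stored as a JSON list of such records; during SFT the loss is computed on the response portion.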
14:48
Odaily Planet Daily News: Meta released the multimodal language model CM3leon. CM3leon is a retrieval-augmented, token-based, decoder-only multimodal language model capable of generating and infilling both text and images. It is the first multimodal model trained with a recipe adapted from plain-text language models, consisting of a large-scale retrieval-augmented pre-training stage and a second multi-task supervised fine-tuning (SFT) stage. As a general-purpose model, it can perform both text-to-image and image-to-text generation, which enables the introduction of independent contrastive decoding methods that produce high-quality output.