Microsoft introduces Phi-3-mini AI model, cost-effective and versatile

Ars Technica April 23, 2024, 10:00 PM UTC

Summary: Microsoft introduced Phi-3-mini, a lightweight AI language model with 3.8 billion parameters that is simpler and cheaper to run than large models such as GPT-4. Unlike models that require data-center GPUs, Phi-3-mini can run on consumer GPUs and smartphones. Microsoft plans to release larger versions of Phi-3; the model's performance already rivals Mixtral 8x7B and GPT-3.5. It is available on Azure, Hugging Face, and Ollama for local use. Phi-3's key innovation lies in its curated training data and its alignment for robustness and safety.

Full article

Article metrics:
Significance: 5.3
Scale: 7.0
Magnitude: 6.5
Potential: 7.5
Novelty: 8.0
Actionability: 6.0
Immediacy: 9.0
Positivity: 7.0
Credibility: 7.5

Timeline:

  1. [5.6]
    Microsoft introduces cost-effective Phi-3-Mini language model series (MarTech)
    23d

  2. [4.6]
    Microsoft introduces Phi-3-Mini, part of efficient SLM family (The Indian Express)
    24d
  3. [3.2]
    Microsoft launched cost-effective Phi-3-mini Small Language Model (MediaNama.com)
    24d
  4. [4.4]
    Microsoft launches cost-effective Phi-3-mini language model in 2024 (The Express Tribune)
    25d
  5. [3.1]
    Microsoft introduces Phi-3 Mini AI model with 3.8B parameters (ReadWrite)
    26d
  6. [5.4]
    Microsoft launches Phi-3-mini AI model for businesses (CNBCTV18)
    26d
  7. [5.0]
    Microsoft launches Phi-3 Mini AI model with 3.8B parameters (The Verge)
    26d