

@Sunshine@piefed.ca This list is missing many names: to mention a few platforms I know of, there are also Qwen, QLM and Kimi (Chinese), and Maritaca's Sabiá IA and Amazônia IA (Brazilian). There are also smaller, often hobbyist-built small language models (SLMs) found on HuggingFace.
Online platforms aside, self-hosted (offline) inference is the most private way to run LLMs, regardless of who built the model (be it Llama from Meta, Gemma from Google, Mixtral, DeepSeek or Qwen): the developers can't really collect data from offline usage, especially if the computer is fully air-gapped.
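For context, here's a minimal sketch of what fully offline local inference can look like with llama.cpp; the model filename and local paths are assumptions, but the environment variables and `llama-cli` flags are real:

```shell
# Tell Hugging Face tooling to never touch the network
# (these env vars are honored by huggingface_hub / transformers).
export HF_HUB_OFFLINE=1
export TRANSFORMERS_OFFLINE=1

# Run a locally stored GGUF model with llama.cpp's CLI.
# -m: model file (hypothetical path), -p: prompt, -n: max tokens to generate
./llama-cli -m ./models/mistral-7b-instruct.gguf -p "Hello" -n 64
```

Since the weights and the inference both live on your machine, no prompt or output ever leaves it; air-gapping just makes that guarantee physical.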
@raspberriesareyummy@lemmy.world @themachinestops@lemmy.dbzer0.com
Plus, on the bright side, it'd slow the rate of climate change considerably, if not bring global temperatures back toward pre-Industrial Revolution levels (seriously). But maybe I'm being too optimistic here.