
LessWrong (Curated & Popular)

Latest episodes

734 episodes

  • "Good if make prior after data instead of before" by dynomight

    2025-12-27 | 17 min.

    They say you’re supposed to choose your prior in advance. That's why it's called a “prior”. First, you’re supposed to say how plausible different things are, and then you update your beliefs based on what you see in the world. For example, currently you are—I assume—trying to decide if you should stop reading this post and do something else with your life. If you’ve read this blog before, then lurking somewhere in your mind is some prior for how often my posts are good. For the sake of argument, let's say you think 25% of my posts are funny and insightful and 75% are boring and worthless. OK. But now here you are reading these words. If they seem bad/good, then that raises the odds that this particular post is worthless/non-worthless. For the sake of argument again, say you find these words mildly promising, meaning that a good post is 1.5× more likely than a worthless post to contain words with this level of quality. If you combine those two assumptions, that implies that the probability that this particular post is good is 33.3%. That's true because the red rectangle below has half the area of the blue [...]

    Outline:
    (03:28) Aliens
    (07:06) More aliens
    (09:28) Huh?

    First published: December 18th, 2025
    Source: https://www.lesswrong.com/posts/JAA2cLFH7rLGNCeCo/good-if-make-prior-after-data-instead-of-before
    Narrated by TYPE III AUDIO.
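    The 33.3% figure in the excerpt is just Bayes' rule in odds form: prior odds of 25:75 times a likelihood ratio of 1.5 gives posterior odds of 0.5, i.e. a probability of 1/3. A minimal sketch of that arithmetic, using the excerpt's numbers (the helper name is illustrative, not from the post):

    ```python
    # Bayes' rule in odds form, with the numbers from the excerpt.
    def posterior_probability(prior_good, likelihood_ratio):
        prior_odds = prior_good / (1 - prior_good)        # 0.25 / 0.75 = 1/3
        posterior_odds = prior_odds * likelihood_ratio    # (1/3) * 1.5 = 0.5
        return posterior_odds / (1 + posterior_odds)      # 0.5 / 1.5 = 1/3

    print(posterior_probability(0.25, 1.5))  # 0.333..., the 33.3% in the excerpt
    ```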

  • "Measuring no CoT math time horizon (single forward pass)" by ryan_greenblatt

    2025-12-27 | 12 min.

    A key risk factor for scheming (and misalignment more generally) is opaque reasoning ability. One proxy for this is how good AIs are at solving math problems immediately without any chain-of-thought (CoT) (as in, in a single forward pass). I've measured this on a dataset of easy math problems and used this to estimate a 50% reliability no-CoT time horizon using the same methodology introduced in Measuring AI Ability to Complete Long Tasks (the METR time horizon paper). Important caveat: To get human completion times, I ask Opus 4.5 (with thinking) to estimate how long it would take the median AIME participant to complete a given problem. These times seem roughly reasonable to me, but getting some actual human baselines and using these to correct Opus 4.5's estimates would be better. Here are the 50% reliability time horizon results: I find that Opus 4.5 has a no-CoT 50% reliability time horizon of 3.5 minutes and that time horizon has been doubling every 9 months. In an earlier post (Recent LLMs can leverage filler tokens or repeated problems to improve (no-CoT) math performance), I found that repeating the problem substantially boosts performance. In the above plot, if [...]

    Outline:
    (02:33) Some details about the time horizon fit and data
    (04:17) Analysis
    (06:59) Appendix: scores for Gemini 3 Pro
    (11:41) Appendix: full result tables
    (11:46) Time Horizon - Repetitions
    (12:07) Time Horizon - Filler

    The original text contained 2 footnotes which were omitted from this narration.

    First published: December 26th, 2025
    Source: https://www.lesswrong.com/posts/Ty5Bmg7P6Tciy2uj2/measuring-no-cot-math-time-horizon-single-forward-pass
    Narrated by TYPE III AUDIO.
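    The METR-style fit the excerpt refers to is, roughly, a logistic regression of per-problem success on the log of the estimated human completion time; the 50% reliability time horizon is the time at which the fitted curve crosses 50%. A minimal sketch under that assumption, with made-up toy data rather than the post's actual results:

    ```python
    # Sketch of a 50%-reliability time-horizon fit in the METR style:
    # logistic regression of success vs. log2(human time), then solve for
    # the time where predicted success equals 50%. Toy data only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    human_minutes = np.array([0.5, 1, 2, 4, 8, 16, 32])  # estimated human completion times
    solved = np.array([1, 1, 1, 1, 0, 0, 0])             # did the model solve it without CoT?

    X = np.log2(human_minutes).reshape(-1, 1)
    fit = LogisticRegression().fit(X, solved)

    a, b = fit.coef_[0, 0], fit.intercept_[0]
    t50 = 2 ** (-b / a)  # P(solve) = 0.5 where a*log2(t) + b = 0
    print(f"50% reliability time horizon: {t50:.1f} minutes")
    ```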

  • "Recent LLMs can use filler tokens or problem repeats to improve (no-CoT) math performance" by ryan_greenblatt

    2025-12-23 | 36 min.

    Prior results have shown that LLMs released before 2024 can't leverage 'filler tokens'—unrelated tokens prior to the model's final answer—to perform additional computation and improve performance.[1] I did an investigation on more recent models (e.g. Opus 4.5) and found that many recent LLMs improve substantially on math problems when given filler tokens. That is, I force the LLM to answer some math question immediately without being able to reason using Chain-of-Thought (CoT), but do give the LLM filler tokens (e.g., text like "Filler: 1 2 3 ...") before it has to answer. Giving Opus 4.5 filler tokens[2] boosts no-CoT performance from 45% to 51% (p=4e-7) on a dataset of relatively easy (competition) math problems[3]. I find a similar effect from repeating the problem statement many times, e.g. Opus 4.5's no-CoT performance is boosted from 45% to 51%. Repeating the problem statement generally works better and is more reliable than filler tokens, especially for relatively weaker models I test (e.g. Qwen3 235B A22B), though the performance boost is often very similar, especially for Anthropic models. The first model that is measurably uplifted by repeats/filler is Opus 3, so this effect has been around for a [...]

    Outline:
    (03:59) Datasets
    (06:01) Prompt
    (06:58) Results
    (07:01) Performance vs number of repeats/filler
    (08:29) Alternative types of filler tokens
    (09:03) Minimal comparison on many models
    (10:05) Relative performance improvement vs default absolute performance
    (13:59) Comparing filler vs repeat
    (14:44) Future work
    (15:53) Appendix: How helpful were AIs for this project?
    (16:44) Appendix: more information about Easy-Comp-Math
    (25:05) Appendix: tables with exact values
    (25:11) Gen-Arithmetic - Repetitions
    (25:36) Gen-Arithmetic - Filler
    (25:59) Easy-Comp-Math - Repetitions
    (26:21) Easy-Comp-Math - Filler
    (26:45) Partial Sweep - Gen-Arithmetic - Repetitions
    (27:07) Partial Sweep - Gen-Arithmetic - Filler
    (27:27) Partial Sweep - Easy-Comp-Math - Repetitions
    (27:48) Partial Sweep - Easy-Comp-Math - Filler
    (28:09) Appendix: more plots
    (28:14) Relative accuracy improvement plots
    (29:05) Absolute accuracy improvement plots
    (29:57) Filler vs repeat comparison plots
    (30:02) Absolute
    (30:30) Relative
    (30:58) Appendix: significance matrices
    (31:03) Easy-Comp-Math - Repetitions
    (32:27) Easy-Comp-Math - Filler
    (33:50) Gen-Arithmetic - Repetitions
    (35:12) Gen-Arithmetic - Filler

    The original text contained 9 footnotes which were omitted from this narration.

    First published: December 22nd, 2025
    Source: https://www.lesswrong.com/posts/NYzYJ2WoB74E6uj9L/recent-llms-can-use-filler-tokens-or-problem-repeats-to
    Narrated by TYPE III AUDIO.
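    Both interventions described in the excerpt are prompt-construction tricks applied before forcing an immediate (no-CoT) answer. A minimal sketch of what the two prompt variants might look like; the exact wording, filler format, and answer instruction are assumptions, not the post's actual prompts:

    ```python
    # Illustrative prompt builders for the two no-CoT interventions:
    # counting-style filler tokens vs. repeating the problem statement.
    def prompt_with_filler(problem: str, n_filler: int = 100) -> str:
        filler = "Filler: " + " ".join(str(i) for i in range(1, n_filler + 1))
        return f"{problem}\n\n{filler}\n\nAnswer immediately with only the final answer."

    def prompt_with_repeats(problem: str, n_repeats: int = 10) -> str:
        repeated = "\n\n".join([problem] * n_repeats)
        return f"{repeated}\n\nAnswer immediately with only the final answer."

    print(prompt_with_filler("Compute 23 * 17.", n_filler=20))
    ```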

  • "Turning 20 in the probable pre-apocalypse" by Parv Mahajan

    2025-12-23 | 5 min.

    Master version of this on https://parvmahajan.com/2025/12/21/turning-20.html

    I turn 20 in January, and the world looks very strange. Probably, things will change very quickly. Maybe, one of those things is whether or not we’re still here. This moment seems very fragile, and perhaps more than most moments will never happen again. I want to capture a little bit of what it feels like to be alive right now. 1. Everywhere around me there is this incredible sense of freefall and of grasping. I realize with excitement and horror that over a semester Claude went from not understanding my homework to easily solving it, and I recognize this is the most normal things will ever be. Suddenly, the ceiling for what is possible seems so high - my classmates join startups, accelerate their degrees; I find myself building bespoke bioinformatics tools in minutes, running month-long projects in days. I write dozens of emails and thousands of lines of code a week, and for the first time I no longer feel limited by my ability but by my willpower. I spread the gospel to my friends - “there has never been a better time to have a problem” - even as [...]

    Outline:
    (00:34) 1.
    (03:12) 2.

    First published: December 21st, 2025
    Source: https://www.lesswrong.com/posts/S5dnLsmRbj2JkLWvf/turning-20-in-the-probable-pre-apocalypse
    Narrated by TYPE III AUDIO.

  • "Alignment Pretraining: AI Discourse Causes Self-Fulfilling (Mis)alignment" by Cam, Puria Radmard, Kyle O’Brien, David Africa, Samuel Ratnam, andyk

    2025-12-23 | 20 min.

    TL;DR: LLMs pretrained on data about misaligned AIs themselves become less aligned. Luckily, pretraining LLMs with synthetic data about good AIs helps them become more aligned. These alignment priors persist through post-training, providing alignment-in-depth. We recommend labs pretrain for alignment, just as they do for capabilities.

    Website: alignmentpretraining.ai
    Us: geodesicresearch.org | x.com/geodesresearch

    Note: We are currently garnering feedback here before submitting to ICML. Any suggestions here or on our Google Doc are welcome! We will be releasing a revision on arXiv in the coming days. Folks who leave feedback will be added to the Acknowledgment section. Thank you!

    Abstract: We pretrained a suite of 6.9B-parameter LLMs, varying only the content related to AI systems, and evaluated them for misalignment. When filtering the vast majority of the content related to AI, we see significant decreases in misalignment rates. The opposite was also true - synthetic positive AI data led to self-fulfilling alignment. While post-training decreased the effect size, benign fine-tuning[1] degrades the effects of post-training: models revert toward their midtraining misalignment rates. Models pretrained on realistic or artificial upsampled negative AI discourse become more misaligned with benign fine-tuning, while models pretrained on only positive AI discourse become more aligned. This [...]

    Outline:
    (00:15) TL;DR
    (01:10) Abstract
    (02:52) Background and Motivation
    (04:38) Methodology
    (04:41) Misalignment Evaluations
    (06:39) Synthetic AI Discourse Generation
    (07:57) Data Filtering
    (08:27) Training Setup
    (09:06) Post-Training
    (09:37) Results
    (09:41) Base Models: AI Discourse Causally Affects Alignment
    (10:50) Post-Training: Effects Persist
    (12:14) Tampering: Pretraining Provides Alignment-In-Depth
    (14:10) Additional Results
    (15:23) Discussion
    (15:26) Pretraining as Creating Good Alignment Priors
    (16:09) Curation Outperforms Naive Filtering
    (17:07) Alignment Pretraining
    (17:28) Limitations
    (18:16) Next Steps and Call for Feedback
    (19:18) Acknowledgements

    The original text contained 1 footnote which was omitted from this narration.

    First published: December 20th, 2025
    Source: https://www.lesswrong.com/posts/TcfyGD2aKdZ7Rt3hk/alignment-pretraining-ai-discourse-causes-self-fulfilling
    Narrated by TYPE III AUDIO.


About LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.