Is this the dawn of cheap AI?
DeepSeek was just the beginning 🤖
AI researchers at Stanford and the University of Washington were able to train an AI “reasoning” model for under $50 in cloud compute credits.
Cheap training methods appear to be proliferating along with the number of available models:
The s1 paper suggests that reasoning models can be distilled with a relatively small dataset using a process called supervised fine-tuning (SFT), in which an AI model is explicitly instructed to mimic certain behaviors in a dataset.
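A minimal sketch of what building such a distillation dataset might look like: each question is paired with a stronger "teacher" model's reasoning trace and answer, then serialized into prompt/completion records that a fine-tuning job could consume. The field names and the `<think>` tag format here are illustrative assumptions, not the s1 paper's exact data format.

```python
# Sketch: assembling an SFT distillation dataset from teacher reasoning traces.
# Record layout ("prompt"/"completion" keys, <think> tags) is an assumption
# for illustration, not the actual s1 training format.

def to_sft_record(question: str, reasoning: str, answer: str) -> dict:
    """Pack one teacher example into a prompt/completion pair for SFT."""
    return {
        "prompt": f"Question: {question}\n",
        "completion": f"<think>{reasoning}</think>\n{answer}",
    }

# A stand-in for traces collected from a stronger teacher model.
teacher_traces = [
    ("What is 12 * 7?", "12 * 7 = 84.", "84"),
]

dataset = [to_sft_record(*trace) for trace in teacher_traces]
```

The point of the sketch is only the shape of the data: SFT then trains the student to reproduce the completion (reasoning included) token by token, which is why a surprisingly small dataset can transfer the behavior.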
A race to the bottom we can actually benefit from:
Training s1 took less than 30 minutes on 16 Nvidia H100 GPUs, and the resulting model achieved strong performance on certain AI benchmarks.
& a lesson we can all learn from:
The researchers used a nifty trick to get s1 to double-check its work and extend its “thinking” time: they told it to wait. Adding the word “wait” during s1’s reasoning helped the model arrive at slightly more accurate answers.
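The trick above can be sketched in a few lines: when the model tries to emit its end-of-thinking token, suppress it and inject "Wait" instead, forcing another pass of reasoning. The `toy_model` stub and the `</think>` token below are assumptions standing in for a real language model, just to show the control flow.

```python
# Sketch of the "wait" trick: intercept the end-of-thinking token and
# inject "Wait" so the model keeps reasoning. `toy_model` is a stand-in
# (assumption) for a real LM's next-chunk generator.

END_OF_THINKING = "</think>"

def toy_model(trace: str) -> str:
    # Pretend the model always wants to stop thinking immediately.
    return END_OF_THINKING

def generate_with_budget(prompt: str, model, max_waits: int = 2):
    """Suppress end-of-thinking up to `max_waits` times, appending 'Wait,'
    each time so the model re-examines its reasoning before answering."""
    trace, waits = prompt, 0
    while True:
        chunk = model(trace)
        if chunk == END_OF_THINKING and waits < max_waits:
            trace += " Wait,"  # force more reasoning instead of stopping
            waits += 1
            continue
        trace += chunk
        return trace, waits

trace, waits = generate_with_budget("<think>", toy_model)
```

With a real model, the injected "Wait," nudges it to re-read its own partial reasoning, which is where the slight accuracy gain comes from.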
via TechCrunch
There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
Or, as Nudie says:
Not all change is progress
Amazon is ramping up to release its gen-AI powered Alexa (unless it gets delayed again).
The pricing model is up in the air at this point, but $5 to $10 a month sounds pretty locked in.
This is the make or break moment for voice assistants.
You get personalities out of the shop and all that’s left is the retail experience.
You need the crust of the human.
Scale necessitates the removal of personality.
Which means personality becomes a differentiator.
How long until we’re asked to measure return on attention and return on personality? And how will we do it?
via the Weird Studies podcast
