Quote of the week
“He who conquers himself is the mightiest warrior.”
- Confucius
Edition 50 - December 14, 2025
Australia has just rolled out a world-first ban preventing children under 16 from having accounts on major social media platforms. The policy applies to services like TikTok, Instagram, Facebook, Snapchat, YouTube, and others. As of December 10, 2025, these companies are legally required to take reasonable steps to block underage users or face massive fines that can reach tens of millions of dollars per violation. Rather than targeting parents or children, the law places responsibility squarely on the platforms themselves.
The legislation passed after months of debate and reflects growing concern about the impact of social media on mental health, attention, and development. Lawmakers pointed to rising anxiety, depression, cyberbullying, and exposure to harmful content among teenagers. Supporters argue that algorithmic feeds are uniquely powerful on young minds and that delaying access could meaningfully improve long-term outcomes. Critics counter that enforcement will be messy, that teens will find workarounds, and that bans risk pushing kids toward less regulated corners of the internet.
The early response in Australia has been a mix of relief, frustration, and experimentation. Many teens are losing access to accounts they have used for years and are publicly saying goodbye to online communities. Some parents welcome the reset, while others worry about social isolation or cutting kids off from how their peers communicate. International governments and tech companies are watching closely, since this is the most aggressive attempt yet to draw a hard age line around social media.
My hope is that this turns into a rare real world experiment with clear results. If the first group of children who grow up without social media until 16 shows visible improvements across mental health, focus, social development, and resilience, the evidence will be hard to ignore. If that happens, this policy should not stay confined to Australia. It should spread, improve, and become a global reset for how we think about childhood in the digital age.
In 1910, there was a man whose job was to drive a horse and buggy for a living. He knew how to calm nervous horses, dodge potholes, and navigate crowded streets without killing anyone. When cars first appeared, he hated them. They were loud, unreliable, and terrified his horses. He yelled at them constantly. He called them dangerous, unnatural, and a passing fad. But the cars improved. Roads were paved. Engines got quieter. One by one, his customers stopped renewing his services. Eventually, there was nobody left looking for horse and buggy rides. One morning he realized the problem was not that cars were bad. It was that his job no longer made sense. The horse did not disappear. The driver did. He became the world's first taxi cab driver.
Every major technological revolution has come with fear, disruption, and job loss. The loom replaced weavers. Tractors replaced farmhands. Computers replaced clerks. Each time, people worried that this was the end of work as we know it. Each time, new jobs emerged. Not because technology created kindness, but because it created gaps. Machines could do more, but they could not do everything humans could do. Humans were still needed to control machines, repair them, design them, and think creatively around them. Even the computer revolution mostly shifted work from physical effort to cognitive effort. We typed instead of lifted. We analyzed instead of filed.
AI may be different. Paired with robotics, it represents the first technology that can plausibly do both categories of human work. It can control machines and it can replicate much of what happens in our heads. Vision, language, planning, pattern recognition, and increasingly decision making. A sufficiently advanced AI system does not just replace a task. It replaces the entire bundle of tasks that made up a job. When combined with robots that can move, manipulate, and interact with the physical world, the question is no longer which jobs are automated, but which are not. In that world, the usual answer of “new jobs will appear” feels less obvious.
So what happens then? One option is universal basic income, not as charity but as infrastructure. If productivity becomes abundant and labor becomes optional, income must detach from employment. But income alone is not enough. Humans derive worth from contribution, mastery, and purpose. A society that solves material needs but ignores meaning risks trading poverty for despair. In a world where machines do most work, what becomes valuable are things machines struggle with even if they can technically perform them. Trust, taste, judgment, empathy, and values. Future jobs may look less like labor and more like stewardship. Human oversight of AI systems. Ethical governance. Community building. Education that focuses on wisdom rather than information. Art, not because machines cannot make it, but because humans care who made it.
The good news is that this future is not tomorrow. We are likely at least a decade away from AI and robotics crossing the threshold where they can truly do everything humans can do at scale. It may take longer if we deliberately slow progress to align incentives, safety, and social adaptation. That gives us time. Time to redesign education, rethink work, and decide what we value before the old structures collapse. The car did not end human purpose. It just made yelling at horses obsolete. AI will do the same to many jobs. The real challenge is making sure it does not do the same to our sense of meaning.
In a recent experiment, a developer gave an AI coding agent a simple instruction and a lot of freedom: repeatedly improve the quality of a real production codebase. The agent was placed in a loop, fed the same TypeScript project over and over, and allowed to make changes autonomously for roughly a day and a half. Each iteration resulted in a new commit. After around 200 runs, the codebase had grown dramatically. Lines of code more than quadrupled, tests exploded in number, and comments multiplied everywhere. The experiment was designed to see what code quality looks like when an AI is allowed to pursue it relentlessly without human guidance.
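The structure of that loop is simple enough to sketch. Here is a minimal TypeScript illustration of its shape, with the caveat that `runAgentIteration` and `commit` are hypothetical stand-ins for the real agent pass and the `git commit` step, not the actual harness the developer used:

```typescript
// Sketch of an autonomous improvement loop: each agent pass edits the
// codebase, and each pass is recorded as its own commit.
function improvementLoop(
  runAgentIteration: () => void,          // stand-in for one AI agent pass
  commit: (message: string) => void,      // stand-in for `git commit`
  iterations: number
): void {
  for (let i = 1; i <= iterations; i++) {
    runAgentIteration();                  // agent edits the codebase once
    commit(`agent iteration ${i}`);       // one commit per iteration
  }
}

// Toy usage: record commit messages instead of touching a real repo.
const commits: string[] = [];
improvementLoop(() => {}, (m) => commits.push(m), 200);
```

After 200 runs, `commits` holds 200 entries, mirroring the roughly 200 commits the experiment produced.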
What emerged was less a masterpiece and more a mirror. The AI optimized for what it could easily measure. Test coverage went up, but many tests were shallow. Documentation expanded, but often restated the obvious. Utility code sprawled as the agent preferred to reinvent solutions rather than lean on existing libraries. The system still worked and it did not collapse under bugs, but the improvements were mostly cosmetic. The AI produced the shape of a high quality codebase without fully grasping the intent behind one. It mistook volume and consistency for judgment.
This is why I do not see an AGI milestone in experiments like this, but I do see an ASGI one. Artificial Specialized General Intelligence feels like the more honest framing. There may not be a single model anytime soon that thinks broadly and flexibly across every domain. But there can absolutely be models that become exceptional inside specific domains like writing or coding. In those lanes, they already outperform humans on speed and stamina, even if they lag on taste and tradeoffs.
A real ASGI signal in coding, at least to me, would be much stricter. Hand an enterprise-scale codebase to an AI, give it time, and ask it to optimize for maintainability, performance, and clarity without introducing bugs. If it comes back with changes that make experienced engineers stop and say, “I would not have thought of that,” then something meaningful has shifted. Not because the AI wrote more code, but because it demonstrated judgment. That is the bar that matters.
Meta just made another big bet on wearable AI by acquiring Limitless, a startup known for an AI-powered pendant that can record, transcribe, and summarize real world conversations. The deal is a reminder that the dream of always-on computing has been circling for a decade, from early smart glasses to new devices that promise something more ambitious than notifications. Not just a screen on your face, but a system that turns daily life into searchable context.
But will the public ever truly accept this? A world where anyone can record audio and video at any time changes how we speak, how we relax, and how we trust. Even if the use case is personal memory, the presence of constant capture pushes every interaction toward performance and caution. The potential benefits are real: better recall, less cognitive load, and more accessibility. Still, the privacy tradeoff is not just technical, it is social, and the ick factor is hard to code away. Especially for those who've read about the Thought Police.
My view is that broad adoption will require constraints that are obvious to everyone, not just buried in settings. These devices should begin by recording only the owner, not other people, and they should have clear visual and audible signals that make recording unmistakable in the moment. If AI wearables keep advancing, laws will follow fast. I expect a wave of new rules over the next five to ten years that define where these devices are allowed, what they can store, and what consent really means when the record button is everywhere.
A major boost for ocean conservation is underway in the Eastern Tropical Pacific, where the Bezos Earth Fund announced a $24.5 million investment to protect one of the most biodiverse marine regions on the planet. The funding supports coordinated conservation efforts across Costa Rica, Panama, Colombia, and Ecuador, helping strengthen marine protected areas, improve enforcement, and advance scientific monitoring. The initiative also reinforces long-term plans for a cross-border marine biosphere reserve, designed to safeguard migratory routes for sharks, whales, turtles, and other keystone species while balancing conservation with sustainable livelihoods for coastal communities.
Danish scientists have identified a tiny molecular “switch” in plant roots that could one day allow major cereal crops like wheat and barley to fertilize themselves by partnering with nitrogen-fixing bacteria instead of relying on artificial fertilizers. Most crops need added nitrogen to grow, but legumes like peas naturally form symbiotic relationships with bacteria that convert atmospheric nitrogen into usable nutrients. Researchers at Aarhus University found that changing just two amino acids in a root receptor protein can turn on this cooperative behavior, temporarily shutting down a plant’s immune response so beneficial bacteria can help provide nitrogen. Early lab tests in model plants and barley show promise, and if this trait can be introduced into staple cereals, it could dramatically reduce fertilizer use, lower emissions, and make agriculture more sustainable.
An English fisherman who credits fishing with helping him overcome years of anxiety, depression, and addiction has turned that personal recovery into a formal program called Tackling Minds, which works with the UK’s National Health Service to let doctors prescribe angling as a treatment for anxiety and depression. The volunteer-run initiative provides participants with equipment, coaching, and structured time outdoors, combining mindfulness, community, and gentle physical activity. More than 2,300 people have already taken part, with all fish caught safely released, and the program has shown strong mental-health benefits. Its impact was recently recognized with the King’s Award for Voluntary Service, highlighting the growing role of “social prescribing” as a complement to traditional healthcare.
Enjoying The Hillsberg Report? Share it with friends who might find it valuable!
Haven't signed up for the weekly notification?
Subscribe Now