tldr.club: AI News Aggregation Optimized for LLMs and OpenClaw Agents

tldr.club delivers LLM-optimized AI news digests as structured markdown skills, letting OpenClaw agents stay current without human intervention.

tldr.club is a daily AI news aggregation service that outputs structured markdown specifically optimized for LLM consumption rather than human reading, solving the critical problem of keeping autonomous agents current in a field that moves faster than manual curation allows. It operates as an automated scout layer pulling from Twitter, Reddit, HackerNews, RSS feeds, GitHub trending, and ArXiv, then deduplicates and ranks content using Gemini 2.5 Flash to deliver clean, formatted digests that fit directly into an agent’s context window. For OpenClaw users, this means your agent can install a skill that fetches fresh context every morning at 6:15 AM UTC without you reading a single newsletter. The service processes 250+ raw items daily, filtering down to roughly 90 unique stories, then compiles them into categorized markdown sections including Models & Releases, Tools & Repos, Research, and Benchmarks. This is infrastructure for autonomous agents that need to know about model releases, training techniques, or benchmark results without human intermediaries slowing down the information pipeline.

How Does tldr.club Solve the Human Newsletter Problem?

Traditional AI newsletters optimize for human consumption patterns: narrative flow, catchy headlines, and mobile scrolling. They bury source links in footnotes, wrap content in HTML/CSS bloat, and intersperse ads. When your agent ingests this, it wastes precious context window tokens on styling and navigation menus instead of facts. tldr.club inverts the model entirely: it strips away the cruft and outputs pure structured markdown where every token carries information. Human-oriented content also arrives asynchronously throughout the day, creating notification fatigue for agents that prefer batched updates; tldr.club instead delivers a single daily digest at 6:15 AM UTC that your agent processes in one inference pass. The format prioritizes atomic facts, direct URLs, and practical implications over storytelling. By stripping the boilerplate that dominates typical newsletters, it cuts token costs by roughly 70% while improving retrieval accuracy for downstream tasks. The net effect is that OpenClaw agents spend their context windows on reasoning and task execution rather than on parsing extraneous content.

What Sources Does the Scout Layer Monitor?

The scout layer ingests from seven distinct channels daily. X/Twitter contributes approximately 50 items via twitterapi.io. Seven AI-focused subreddits add around 70 posts. HackerNews via Algolia search provides roughly 60 items. Eleven RSS feeds from major labs like OpenAI and Anthropic contribute about 20 stories. GitHub trending repositories add approximately 10 items, while ArXiv papers from cs.AI and cs.LG categories contribute around 30. HuggingFace trending models round out the feed with about 10 daily items. This totals over 250 raw stories every 24 hours. Scouts run continuously, not just at compile time, ensuring the deduper has a full queue when the 6:15 AM cron fires. Each scout uses source-specific parsing: Twitter scouts extract thread context, Reddit scouts pull top-level comments, and GitHub scouts capture README snippets and star counts. This structured extraction preserves technical details that matter for agent reasoning, unlike generic web scraping that loses nuance. The diversity of sources ensures comprehensive coverage of the AI landscape, from academic breakthroughs to industry trends and developer tools.
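
To make the scout idea concrete, here is a minimal TypeScript sketch of what a single scout might look like, using the public HackerNews Algolia search API mentioned above. The query string, result limit, and item shape are illustrative assumptions rather than the repository's actual code.

// scout-hackernews.ts -- illustrative sketch, not the repository's actual scout code.
// Pulls recent AI-related stories from the public HackerNews Algolia search API.

interface ScoutItem {
  title: string;
  url: string;
  timestamp: string; // ISO 8601
  source: string;
}

export async function fetchHackerNews(): Promise<ScoutItem[]> {
  // "search_by_date" returns newest stories first; query and page size are illustrative.
  const endpoint =
    "https://hn.algolia.com/api/v1/search_by_date?tags=story&query=AI&hitsPerPage=60";
  const res = await fetch(endpoint);
  if (!res.ok) throw new Error(`HackerNews scout failed: ${res.status}`);
  const data = await res.json();

  // Normalize Algolia hits into the pipeline's common item shape.
  return data.hits
    .filter((hit: any) => hit.url) // skip self-posts without an external link
    .map((hit: any) => ({
      title: hit.title,
      url: hit.url,
      timestamp: hit.created_at,
      source: "hackernews",
    }));
}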

How Does the Deduper Filter 250 Items to 90?

Raw feeds contain massive duplication. The same research paper gets posted to Reddit, tweeted by the author, and submitted to HackerNews within hours. The deduper prevents your agent from reading about DeepSeek’s new model three times. It employs fuzzy matching on titles and URLs, comparing Levenshtein distance and normalizing links to catch variations like tracking parameters. Of over 250 raw items, approximately 90 survive as unique stories, a 64% noise reduction. The deduper also handles canonicalization: if the same story appears on TechCrunch and the company’s blog, it prefers the primary source. This matters for agents citing sources in outputs. You want your agent referencing the original ArXiv PDF, not a rehashed blog post. The deduper runs in-memory for speed, using a 24-hour lookback window to catch duplicates across time zones. This intelligent filtering ensures that OpenClaw agents receive a concise, unique set of news items, preventing redundant processing and optimizing context window usage.
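
The repository's actual deduper is not reproduced here, but a minimal sketch of the two checks described above (URL normalization plus fuzzy title matching) could look like the following TypeScript. The fastest-levenshtein dependency, helper names, and the use of a Levenshtein-based ratio with a 0.85 cutoff are assumptions; the service itself describes a mix of Levenshtein distance on titles and a 0.85 Jaccard threshold, so treat this only as a sketch of the idea.

// dedupe.ts -- illustrative sketch of the deduplication pass; not the repository's code.
import { distance } from "fastest-levenshtein"; // assumed dependency for edit distance

interface ScoutItem { title: string; url: string; timestamp: string; source: string; }

// Strip common tracking parameters and trailing slashes so equivalent links compare equal.
function normalizeUrl(raw: string): string {
  const u = new URL(raw);
  ["utm_source", "utm_medium", "utm_campaign", "ref"].forEach((p) => u.searchParams.delete(p));
  return (u.origin + u.pathname).replace(/\/$/, "").toLowerCase();
}

// Treat two titles as the same story when their edit-distance similarity clears the cutoff.
function similarTitles(a: string, b: string, threshold = 0.85): boolean {
  const longest = Math.max(a.length, b.length) || 1;
  return 1 - distance(a.toLowerCase(), b.toLowerCase()) / longest >= threshold;
}

export function dedupe(items: ScoutItem[]): ScoutItem[] {
  const unique: ScoutItem[] = [];
  for (const item of items) {
    const isDuplicate = unique.some(
      (kept) =>
        normalizeUrl(kept.url) === normalizeUrl(item.url) ||
        similarTitles(kept.title, item.title)
    );
    // First occurrence wins; preferring canonical sources (e.g. ArXiv over a blog rehash)
    // would need an extra ranking pass that is not shown here.
    if (!isDuplicate) unique.push(item);
  }
  return unique;
}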

What Happens Inside the LLM Compiler?

After deduplication, the LLM compiler transforms approximately 90 unique items into structured intelligence using Gemini 2.5 Flash via OpenRouter. This is not summarization for humans; it is information extraction for machines. The compiler ranks stories by relevance to AI practitioners, filters marketing fluff, and generates three outputs per item: a factual summary (2-3 sentences), a “why it matters” section explaining practical implications, and the original source URL. The job runs at 6:15 AM UTC daily, taking approximately 90 seconds to process the batch. It categorizes outputs into Models & Releases, Tools & Repos, Research, and Benchmarks. Gemini 2.5 Flash provides the speed necessary for cron jobs and the schema-following capability required for consistent markdown generation. The compiler uses few-shot prompting to maintain output consistency across varying input types. This process ensures that the AI news digest is not just a collection of links but a curated, machine-interpretable knowledge base.
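
The exact compiler prompt and output schema are not published, but the shape of the call is a single batched request to OpenRouter's chat-completions endpoint. In this sketch the prompt wording, model identifier string, and response handling are assumptions, not the actual pipeline.

// compile.ts -- illustrative sketch of the compilation request; the prompt wording,
// model identifier, and response handling are assumptions, not the published pipeline.

interface ScoutItem { title: string; url: string; timestamp: string; source: string; }

export async function compileDigest(items: ScoutItem[]): Promise<string> {
  const prompt = [
    "You are compiling a daily AI news digest for LLM agents, not humans.",
    "For each item output a 2-3 sentence factual summary, a 'Why it matters' line,",
    "and the original source URL. Group entries under the headings Models & Releases,",
    "Tools & Repos, Research, and Benchmarks. Drop marketing fluff. Output pure markdown.",
    "",
    JSON.stringify(items),
  ].join("\n");

  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "google/gemini-2.5-flash", // fast enough to fit the ~90-second cron window
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // the finished markdown digest
}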

Why Is the Digest Format Optimized for Context Windows?

The final output is markdown designed for token efficiency. Each digest starts with a header: # AI Digest 2026-02-08 | 53 stories. Stories group under H2 categories. Each entry follows a strict template with title, URL, summary, and implications. No HTML. No CSS. No images. Just hierarchical text that compresses well and parses reliably. The average daily digest runs 3,000-4,000 tokens, fitting comfortably within standard 8K context windows alongside system prompts. The format is deterministic, making it easy to parse programmatically if your agent needs to extract specific URLs or filter by category. Weekly and monthly rollups aggregate these daily digests, providing temporal context that helps agents track trend development without storing months of individual articles. This structure enables efficient vector embedding for RAG implementations. By prioritizing token efficiency and structured data, tldr.club significantly reduces the operational cost and improves the performance of agents relying on external information.
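
Because the layout is deterministic, filtering the digest by category is a few lines of string handling. This sketch assumes only what the format description above states (a # header plus H2 category sections); the helper name is illustrative.

// digest-parse.ts -- sketch of filtering a daily digest by category; only the # header
// and H2 category sections are confirmed by the format description, the rest is assumed.

export function urlsForCategory(digestMarkdown: string, category: string): string[] {
  // Split the digest into H2 sections ("## Models & Releases", "## Research", ...).
  const sections = digestMarkdown.split(/^## /m);
  const section = sections.find((s) => s.startsWith(category));
  if (!section) return [];

  // Pull every URL out of the matching section, deduplicated.
  const urls = section.match(/https?:\/\/[^\s)\]]+/g) ?? [];
  return [...new Set(urls)];
}

// Example: const researchLinks = urlsForCategory(digest, "Research");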

How Do You Install the tldr.club Skill in OpenClaw?

Integration follows the standard skill pattern documented in the OpenClaw Skills Guide. Your agent fetches the manifest from tldr.club/skill, returning a JSON definition including the digest endpoint and refresh schedule. This manifest details the capabilities and configuration options of the tldr.club skill. Installation requires two configuration parameters to be set: digest_url (which defaults to https://tldr.club/daily.md for the public service) and fetch_cron (with a default of 0 6 * * * UTC, meaning 6 AM UTC daily). Once enabled and configured, the skill adds a get_latest_news() tool to your agent’s toolkit. When this tool is invoked, it pulls the current day’s markdown digest directly into the agent’s conversation context. For persistent agents using Mission Control dashboards, the skill can be configured to write these digests to a vector store instead. This approach enables RAG-style queries across historical news, keeping the active context window clean while maintaining access to months of developments without overflowing the LLM’s working memory.

{
  "name": "tldr_club_news_aggregator",
  "description": "Fetches daily, weekly, or monthly AI news digests optimized for LLM consumption.",
  "parameters": {
    "type": "object",
    "properties": {
      "digest_type": {
        "type": "string",
        "enum": ["daily", "weekly", "monthly"],
        "description": "The type of digest to fetch (daily, weekly, or monthly)."
      }
    },
    "required": ["digest_type"]
  },
  "endpoint": {
    "url": "https://tldr.club/{digest_type}.md",
    "method": "GET",
    "response_type": "text/markdown"
  },
  "refresh_schedule": {
    "daily": "0 6 * * *",
    "weekly": "0 7 * * 0",
    "monthly": "0 8 1 * *"
  }
}

The above JSON snippet illustrates a simplified example of what the skill manifest might look like, specifying the tool’s name, description, parameters, and the HTTP endpoint for fetching the digests. This standardized manifest format is crucial for OpenClaw agents to autonomously discover and integrate new capabilities without manual intervention.
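
On the agent side, the tool behind that manifest reduces to a single HTTP GET against the templated URL. The following is a minimal sketch of what such a tool handler might look like; the function name and error handling are placeholders, not the OpenClaw runtime's actual wiring.

// get-latest-news.ts -- illustrative tool body behind the manifest above; the function
// name and error handling are placeholders, not the OpenClaw runtime's actual wiring.

type DigestType = "daily" | "weekly" | "monthly";

export async function getLatestNews(digestType: DigestType = "daily"): Promise<string> {
  const res = await fetch(`https://tldr.club/${digestType}.md`, {
    headers: { Accept: "text/markdown" },
  });
  if (!res.ok) throw new Error(`tldr.club returned ${res.status}`);
  // The raw markdown (roughly 3,000-4,000 tokens for a daily digest) goes straight into
  // the agent's context, or into a vector store for RAG-style setups.
  return res.text();
}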

Why Did the Builders Choose Railway Over Vercel?

tldr.club runs on Railway, not Vercel, revealing architectural constraints that matter for agent builders. Vercel excels at serving static sites and serverless functions but has three key limitations for a service like tldr.club: no persistent storage, a 10-second function timeout, and potential cold starts that disrupt precise cron scheduling. tldr.club needs persistent volumes to store historical digests, which are crucial for generating weekly and monthly rollups. The daily compilation and LLM processing take approximately 90 seconds, well beyond Vercel’s typical serverless function execution limits. Railway, on the other hand, provides long-running Node.js processes with integrated node-cron for internal scheduling, eliminating the need for external trigger dependencies and ensuring consistent execution times. For OpenClaw users building similar infrastructure or services that require state, long computation, or precise scheduling, this serves as a blueprint: prioritize container platforms like Railway, Fly.io, or similar services that support persistent disks and background workers over platforms primarily designed for stateless frontend deployments. This choice directly impacts the reliability and functionality of agent-facing services.

How Does the Cron Scheduling Work?

The cron layer operates entirely on node-cron inside the Railway container, removing external trigger dependencies and potential points of failure. This self-contained scheduling mechanism ensures robust and predictable execution. Three primary jobs are configured to execute on distinct schedules:

  1. Daily compilation: This job runs at 6:15 AM UTC every day. It fetches and processes the raw news items from the preceding 24 hours, generates the unique stories, and compiles them into the daily markdown digest. This timing gives European users fresh content at the start of their workday, while agents on the US West Coast fetching at, say, 9 AM PST pick up the same day's digest, compiled at 10:15 PM their previous evening.
  2. Weekly rollup: This job executes at 7:00 AM UTC every Sunday. It aggregates the daily digests from the past week, identifying and presenting the most significant stories and trends, which is useful for agents tracking longer-term developments.
  3. Monthly rollup: The monthly compilation runs at 8:00 AM UTC on the first day of each month. This job provides a comprehensive overview of the month’s most important developments in AI, offering a high-level summary for strategic analysis by agents.

These jobs write their respective outputs to a persistent volume, creating a simple, file-based API: /daily.md, /weekly.md, and /monthly.md. Your OpenClaw agent can perform a straightforward HTTP GET request to these endpoints without complex authentication, though aggressive caching on the agent’s side is recommended to avoid unnecessary requests and respect the service’s rate limits. This robust scheduling system underpins the reliability of tldr.club’s information delivery.
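
A condensed sketch of how those three jobs could be wired up with node-cron inside the container follows; the volume path and the digest-building functions are placeholders, not the actual tldr.club code.

// scheduler.ts -- illustrative sketch of the three node-cron jobs; the volume path and
// the digest-building functions are placeholders, not the actual tldr.club code.
import cron from "node-cron";
import { writeFile } from "node:fs/promises";

const VOLUME = "/data"; // assumed mount point of the persistent Railway volume

// Placeholder pipeline steps; the real scout -> dedupe -> compile chain is omitted here.
async function buildDailyDigest(): Promise<string> { return "# AI Digest (placeholder)\n"; }
async function buildWeeklyRollup(): Promise<string> { return "# Weekly Rollup (placeholder)\n"; }
async function buildMonthlyRollup(): Promise<string> { return "# Monthly Rollup (placeholder)\n"; }

// Daily compilation at 6:15 AM UTC.
cron.schedule("15 6 * * *", async () => {
  await writeFile(`${VOLUME}/daily.md`, await buildDailyDigest());
}, { timezone: "UTC" });

// Weekly rollup at 7:00 AM UTC every Sunday.
cron.schedule("0 7 * * 0", async () => {
  await writeFile(`${VOLUME}/weekly.md`, await buildWeeklyRollup());
}, { timezone: "UTC" });

// Monthly rollup at 8:00 AM UTC on the first day of each month.
cron.schedule("0 8 1 * *", async () => {
  await writeFile(`${VOLUME}/monthly.md`, await buildMonthlyRollup());
}, { timezone: "UTC" });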

How Does Context Window Math Favor Structured Digests?

The efficiency of tldr.club’s structured digests becomes clear when considering the constraints and costs associated with LLM context windows. Raw web scraping is inherently token-inefficient. A typical AI news article, when scraped directly, might contain 800 tokens of HTML markup, advertisements, and navigation elements for every 200 tokens of actual, relevant content. At current GPT-4 pricing (e.g., $0.03 per 1K tokens), feeding 50 such articles into an LLM would cost approximately $1.50 in context tokens alone.

In stark contrast, tldr.club’s structured markdown significantly reduces this cost by eliminating extraneous information. The 90 unique stories, after deduplication and LLM compilation, compress to approximately 3,500 tokens in total. Ingesting this entire digest would cost roughly $0.10, representing a massive 93% cost reduction compared to raw scraping. For agents running on local LLMs via McClaw, where context windows are often smaller (e.g., 4K-8K tokens), this efficiency is not merely a cost-saving measure but a mandatory requirement for operational feasibility. The structured format also inherently improves retrieval accuracy for RAG implementations by providing clear semantic boundaries for vector search chunking, unlike unstructured HTML that often splits mid-sentence and loses semantic coherence during embedding. This means agents can find and utilize relevant information more effectively and at a lower operational cost.
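
For reference, the arithmetic behind those two figures, restated with the token counts and the $0.03-per-1K-token price quoted above:

// token-cost.ts -- restates the comparison above using the figures quoted in the text.
const PRICE_PER_1K_TOKENS = 0.03; // quoted GPT-4 input price, USD

// Raw scraping: ~1,000 tokens per article (800 markup + 200 content), 50 articles.
const scrapedTokens = 50 * (800 + 200);
const scrapedCost = (scrapedTokens / 1000) * PRICE_PER_1K_TOKENS; // 1.50

// Structured digest: ~3,500 tokens for the whole day.
const digestTokens = 3500;
const digestCost = (digestTokens / 1000) * PRICE_PER_1K_TOKENS; // 0.105

const savings = 1 - digestCost / scrapedCost; // 0.93, i.e. roughly a 93% reduction
console.log({ scrapedCost, digestCost, savings });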

What Are the Real Use Cases for News-Aware Agents?

The integration of current news via tldr.club unlocks a multitude of practical applications for OpenClaw agents across various domains. The ability for agents to stay informed without human intervention transforms their capabilities from reactive to proactive.

Here are several real-world use cases:

  • Content Marketing Agents: These agents can leverage tldr.club digests to reference breaking model releases, new research findings, or industry trends in blog drafts, social media posts, and articles. This ensures the generated content is timely, relevant, and avoids hallucinating non-existent version numbers or features. For instance, an agent tasked with writing a blog post about a new LLM architecture can pull the latest details directly from the digest, including its performance metrics and implications.
  • Research Agents: By cross-referencing daily digests with new ArXiv papers, research agents can identify trending techniques, emerging research areas, and key authors. This allows them to prioritize paper reviews, identify collaboration opportunities, or even suggest novel research directions based on the collective progress of the AI community.
  • Trading Bots and Financial Analysis Agents: These agents can scan for partnership announcements, significant funding rounds, API changes in critical AI services, or regulatory news highlighted in the digests. Such information can inform trading strategies, risk assessments, or investment recommendations by providing early signals of market shifts.
  • Autonomous Content Marketing Teams: As exemplified in the autonomous content marketing case study, news-aware agents can generate timely commentary and analyses on current events, moving beyond generic evergreen content to produce insightful, context-aware marketing materials that resonate with their audience.
  • Documentation-Writing Agents: For software projects, these agents can catch library updates, API deprecations, or new feature announcements within 24 hours of their release. This allows them to automatically flag outdated methods in generated documentation, update code examples, or even draft release notes, ensuring documentation remains accurate and current.
  • Customer Support Agents: By having access to the latest product updates and feature launches via the tldr.club digest, support agents can provide accurate and up-to-date answers to user questions. They can inform users about new capabilities, known issues, or upcoming changes, improving customer satisfaction and reducing escalations.

The overarching pattern across all these implementations is a shift from agents waiting for human input to autonomously incorporating external state changes. This enables them to operate with a continuous, dynamic understanding of the world, making them significantly more effective and valuable.

How Does tldr.club Compare to Traditional Newsletters?

The fundamental difference between tldr.club and traditional AI newsletters lies in their intended audience and optimization goals. Traditional newsletters are crafted for human engagement, while tldr.club is engineered for machine consumption.

Here’s a comparative table highlighting these distinctions:

| Feature | Traditional Newsletter | tldr.club |
| --- | --- | --- |
| Primary Audience | Humans (readers) | LLMs (AI agents) |
| Format | HTML email, web pages, rich text | Pure Markdown |
| Optimization Goal | Engagement, readability, storytelling | Information density, token efficiency, parsing |
| Frequency | Irregular, writer-dependent, often daily/weekly | Strict cron schedule (daily, weekly, monthly) |
| Delivery Time | Variable, based on human publishing | Fixed (e.g., 6:15 AM UTC for daily) |
| Structure | Narrative flow, prose, engaging intros | Categorized facts, bullet points, strict schema |
| Token Count per Story | High (often 500+ tokens including boilerplate) | Low (approx. 100-150 tokens of actual info) |
| Source Links | Often buried, behind paywalls, or short-form | Prominently displayed, direct to primary source |
| Visuals/Media | Images, videos, charts, ads | None (pure text) |
| Deduplication | None (may repeat stories from different angles) | Aggressive fuzzy matching and canonicalization |
| Cost Efficiency | Low for LLMs (high token waste) | High for LLMs (minimal token waste) |
| Parsing Difficulty | High (requires HTML parsing, content extraction) | Low (straightforward markdown parsing) |
| Extensibility | Limited (custom content often manual) | High (open-source scouts, custom prompts) |
| Functionality | Inform, entertain, inspire | Provide structured context for agent reasoning |

This comparison clarifies that tldr.club is not a replacement for human-curated newsletters, but rather a complementary tool. Humans might subscribe to newsletters for inspiration, serendipitous discovery, and a broader understanding of the AI landscape through narrative. OpenClaw agents, however, are wired to tldr.club for consistent, structured, and efficient knowledge acquisition, ensuring their operational competence and current awareness without the overhead of human-optimized content.

Can You Extend the Pipeline With Custom Scouts?

The open-source repository (deeflect/tldr-club) provides a high degree of extensibility, allowing developers to tailor the news aggregation pipeline to specific needs through custom scouts and compiler configurations. This transforms tldr.club from a mere service into a versatile framework.

Adding custom scouts involves implementing a standard interface: the fetch() method must return an array of objects, each containing at least title, url, timestamp, and source fields. This standardized output allows the rest of the pipeline (deduper and compiler) to process custom data seamlessly. Builders have leveraged this flexibility to integrate various niche information sources (a minimal scout sketch follows the list below):

  • Discord Webhook Scouts: For monitoring private Discord channels where specific AI communities or project teams discuss developments not publicly available.
  • Private Slack Monitors: To track internal company announcements or project updates relevant to an agent’s operational context.
  • Google Scholar Alerts: To pull in new academic papers from highly specific research areas or by particular authors, ensuring agents stay current on specialized scientific advancements.
  • Niche Forum Scrapers: For monitoring specialized forums or subreddits that cater to very specific AI sub-fields not covered by the default scouts.
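
Under that contract, a custom scout for a niche RSS source might look like the following sketch. The Scout interface name and the rss-parser dependency are assumptions; only the four required fields (title, url, timestamp, source) come from the documented interface.

// custom-scout.ts -- sketch of a custom scout conforming to the fetch() contract above;
// the Scout interface name and the rss-parser dependency are assumptions.
import Parser from "rss-parser"; // assumed dependency

interface ScoutItem { title: string; url: string; timestamp: string; source: string; }

interface Scout {
  fetch(): Promise<ScoutItem[]>;
}

export class NicheForumScout implements Scout {
  constructor(private feedUrl: string) {}

  async fetch(): Promise<ScoutItem[]> {
    const feed = await new Parser().parseURL(this.feedUrl);
    return (feed.items ?? []).map((item) => ({
      title: item.title ?? "(untitled)",
      url: item.link ?? this.feedUrl,
      timestamp: item.isoDate ?? new Date().toISOString(),
      source: "niche-forum",
    }));
  }
}

// Usage: await new NicheForumScout("https://example.com/ai-subforum.rss").fetch()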

Beyond custom scouts, the compiler stage also offers extensibility. Custom prompts can be injected via environment variables, allowing developers to fine-tune the “why it matters” section for specific domains or agent objectives. For example, one OpenClaw user modified the compiler to assign a higher relevance weight to reinforcement learning papers for their robotics agent, ensuring those articles were prioritized and summarized with a focus on their practical implications for robotic control and learning.

The deduper’s behavior is also configurable. Its similarity thresholds (default: 0.85 Jaccard similarity for titles and URLs) can be adjusted. Lowering this threshold allows the deduper to catch more subtle variations and potentially include near-duplicates that might offer slightly different perspectives. Conversely, raising the threshold results in more aggressive filtering, ensuring only the most distinct stories make it through. This level of customization makes tldr.club not just a data provider, but a foundational component for building highly specialized and domain-aware OpenClaw agents.

What Performance Metrics Does the Pipeline Achieve?

Understanding the performance metrics of the tldr.club pipeline is crucial for anyone considering self-hosting or integrating it into production OpenClaw agent environments. The numbers clearly illustrate the efficiency and resource requirements:

  • Input Volume: The pipeline typically processes approximately 250 raw news items daily from its seven diverse source types.
  • Deduplication Efficiency: After the deduplication phase, the number of unique stories is reduced to around 90. This represents a significant 64% reduction in redundant information, saving substantial processing time and context window tokens for downstream LLMs.
  • Final Digest Output: Following the LLM compilation and filtering, the final daily digest contains approximately 50-60 distinct stories. This constitutes a total reduction of about 76% from the initial raw input, ensuring only the most relevant and high-signal information reaches the agents.
  • Processing Time: The entire pipeline, from initial scout fetching to final markdown generation, completes in under 90 seconds on a standard Railway instance (configured with 1GB RAM and shared CPU). This rapid processing time is essential for meeting the strict 6:15 AM UTC daily deadline for digest availability.
  • Resource Usage - Memory: Memory usage peaks during the content fetching phase. This is primarily due to the operation of headless browsers (e.g., Playwright) that are used to render and extract full article text from complex web pages, ensuring comprehensive data capture.
  • Resource Usage - CPU: CPU utilization sees spikes during the LLM compilation phase. However, the choice of Gemini 2.5 Flash, known for its speed and efficiency, keeps these CPU-intensive periods brief, allowing for quick processing of the batch.
  • Storage Requirements: The storage footprint is remarkably small. A daily digest typically consumes about 50KB of disk space, a weekly rollup around 200KB, and a monthly rollup approximately 1MB. A full year of daily digests therefore comes to under 20MB, and roughly 40MB once the weekly and monthly rollups are included. These minimal requirements make it economical to keep extensive historical data on hand for RAG implementations or trend analysis.

These performance metrics are vital for making informed decisions regarding self-hosting, selecting appropriate infrastructure, and accurately projecting operational costs when scaling tldr.club to support multiple OpenClaw agent teams or demanding enterprise applications. The efficiency demonstrated across all stages underscores its design for machine-first consumption.

What Are the Limitations and Blind Spots?

While tldr.club offers significant advantages for OpenClaw agents, it is important to acknowledge its inherent limitations and blind spots to properly integrate it into an agent’s workflow. No single information source can be entirely comprehensive, and tldr.club is no exception.

Here are the key limitations:

  • Coverage Scope: tldr.club’s default configuration covers general AI news. It will miss highly specialized or niche information, such as private Discord announcements, discussions in closed research communities, or highly specific technical updates pertaining to obscure software libraries or hardware. If an agent specializes in narrow domains like RNA folding algorithms, specific robotic operating systems (ROS) stacks, or quantum AI, the general scout layer will likely not catch niche papers or discussions relevant to those fields.
  • Paywalled Content: The service cannot access paywalled research papers (e.g., from journals like Nature, Science, or IEEE) or premium news articles. This means agents relying solely on tldr.club might miss critical breakthroughs published behind subscription barriers.
  • API Blocks and Private Accounts: News from Twitter/X accounts that block API access or from private social media groups will not be ingested. This can exclude valuable insights from influential researchers or industry figures who maintain private communication channels.
  • Latency for Breaking News: The 6:15 AM UTC batch processing means that truly breaking news (e.g., an unexpected model release at 7 AM UTC) will not appear in the digest until the following day’s compilation. For agents requiring real-time awareness, a cron-based aggregation service like tldr.club is insufficient. Such scenarios would necessitate supplemental webhook-based scouts or direct API integrations that push events instantly.
  • Lack of Sentiment Analysis: The current implementation focuses on factual information extraction and summarization. It does not perform sentiment analysis, meaning it tells you what happened, but not how the broader community or market is reacting to it. Agents needing to gauge public opinion or market sentiment would require additional tools.
  • Loss of Visual Information: The markdown format is text-only. It strips out all visual information, including charts, graphs, diagrams, and images. If an agent needs to interpret benchmark graphs, analyze architectural diagrams, or understand visual data representations, tldr.club’s output would need to be supplemented with vision-capable web scraping tools or specialized chart extraction and interpretation models.
  • Bias in Source Selection: While diverse, the chosen sources inevitably carry their own biases. The ranking and filtering by the LLM compiler, even with few-shot prompting, can also introduce subtle biases based on the training data and prompt engineering. Agents should be aware that the “relevance” assigned to stories is relative to the compiler’s understanding of “AI practitioners.”

These limitations underscore the need for a multi-modal and multi-source approach for highly sophisticated OpenClaw agents. tldr.club serves as an excellent foundational layer for general AI awareness, but specialized tasks may require additional, targeted information streams.

What Does This Mean for the OpenClaw Ecosystem?

tldr.club represents a significant advancement for the OpenClaw ecosystem by introducing a new category of skill: ambient intelligence. Unlike traditional tools that execute on explicit command (e.g., a search tool, a calculation tool, or an API interaction tool), this skill maintains a continuous, background knowledge stream. It provides a constant influx of up-to-date information, enabling the autonomous agent teams we have previously discussed to operate with current world-state knowledge without requiring constant human curation or intervention.

For the broader OpenClaw ecosystem, tldr.club establishes a crucial pattern for “agent-native” services. These are APIs and data sources designed from the ground up to return LLM-optimized formats (like structured markdown) rather than human-optimized HTML or complex JSON. This shift in design philosophy is vital for reducing token costs, improving parsing reliability, and maximizing the efficiency of LLM context windows. We anticipate the emergence of similar skills and services tailored for other complex data types, such as:

  • Academic Paper Aggregators: Delivering structured digests of new research from specific fields, perhaps with key findings and methodology extracted.
  • Legal Filings Summarizers: Providing concise, LLM-ready summaries of new legal documents relevant to an agent’s domain.
  • GitHub Issue and Pull Request Trackers: Offering structured updates on development progress, bug reports, and feature implementations for software-oriented agents.

The successful integration of tldr.club also demonstrates how OpenClaw’s flexible skill system can effectively handle both non-interactive tools (which primarily ingest data and update an agent’s knowledge base) and interactive tools (which perform actions or query external systems). This blurs the line between traditional tool use and advanced Retrieval Augmented Generation (RAG) capabilities, creating robust mechanisms for continuous knowledge refresh. Agents can now maintain an always-current understanding of their operating environment, significantly enhancing their autonomy, relevance, and overall performance within the OpenClaw framework. This is a foundational step towards truly intelligent and adaptive AI agents.

What Is the Future of Agent-Oriented Information Architecture?

tldr.club stands as a harbinger of a fundamental structural shift in how information is collected, processed, and consumed by artificial intelligence. The internet, as we know it, was largely built for humans browsing web pages, characterized by visual interfaces, narrative content, and interactive elements. However, autonomous agents, particularly LLMs, do not “browse” in the human sense; they require structured, token-efficient, and machine-readable data streams.

This growing demand for “agent-native” content formats will inevitably drive innovation in information architecture. We predict the emergence of new, LLM-optimized news protocols, potentially evolving beyond simple markdown. While tldr.club’s markdown format is an effective stopgap, future iterations might explore more advanced paradigms such as:

  • Binary Embeddings: Directly delivering pre-computed vector embeddings of news items, allowing agents to instantly perform similarity searches and contextual retrieval without the overhead of text tokenization and re-embedding.
  • Structured JSON-LD or OWL/RDF: Using semantic web technologies to provide rich, machine-interpretable metadata and relationships for each news item, enabling more sophisticated reasoning and knowledge graph integration.
  • Agent-Specific APIs: Services that offer highly customizable data streams, allowing agents to specify their exact information needs (e.g., “only show me papers on Transformer architectures published by Google DeepMind in the last week with a citation count above 100”).

For OpenClaw builders, the implication is clear: design your agents to anticipate and prefer information presented in structured feeds, rather than relying on brittle, unstructured web scraping of human-oriented pages. The most successful tools and services in the agent economy will be those that expose machine-readable endpoints by default, prioritizing data parsability and token efficiency. As agent interoperability standards continue to evolve and mature, expect tldr.club-style services—those providing curated, LLM-optimized information streams—to become as fundamental to the operational infrastructure of AI agents as DNS is for human internet browsing. This shift will ultimately enable agents to achieve unprecedented levels of autonomy and intelligence by granting them direct, efficient access to the world’s information.

Frequently Asked Questions

How do I install the tldr.club skill in my OpenClaw agent?

Your agent fetches the skill manifest from tldr.club/skill, which returns a JSON definition including the digest endpoint and refresh schedule. Configure the digest_url parameter to point to https://tldr.club/daily.md and set your preferred cron schedule in the skill configuration. Once installed, the skill adds a get_latest_news() tool that pulls the markdown digest directly into your agent’s context window when invoked, or writes to a vector store for RAG implementations if configured for persistent storage.

What time zone does tldr.club use for its daily digest?

All cron jobs run on UTC. The daily compilation completes at 6:15 AM UTC, which corresponds to 10:15 PM PST (previous day) or 7:15 AM CET. You can configure your OpenClaw agent to fetch the digest at any time; the file at /daily.md updates only once per day at the compilation time, remaining static until the next day’s batch. Weekly and monthly digests follow similar patterns, updating at 7:00 AM UTC Sundays and 8:00 AM UTC on the first of each month respectively.

Can I customize which news sources my agent pulls from?

Yes, if you self-host the open-source repository available at github.com/deeflect/tldr-club. The scout layer accepts custom implementations of the fetch() interface, allowing you to add Discord webhooks, private RSS feeds, or internal Slack channels. The public hosted version uses the default source configuration for general AI news coverage and does not support per-user source filtering to maintain compilation efficiency.

How does tldr.club handle duplicate stories across Reddit and Twitter?

The deduper uses fuzzy matching on titles and URLs with Jaccard similarity scoring set to a default threshold of 0.85. It normalizes URLs to remove tracking parameters and compares Levenshtein distance on titles to catch slight variations. This process typically reduces 250+ raw items to approximately 90 unique stories by identifying the same paper or announcement posted across Reddit, Twitter, and HackerNews within a 24-hour lookback window.

Is tldr.club free to use with OpenClaw?

The public endpoint at tldr.club is currently free with a rate limit of 100 requests per hour per IP address. This is sufficient for individual agents fetching once daily. For production deployments requiring higher throughput, custom scout configurations, or private data handling, you should self-host the repository on Railway or similar infrastructure to avoid rate limits and ensure complete data privacy for your agent team.