Olshansk
Note

This analysis was done by Claude Opus 4 on Twitter's 2023 algorithm release.
It may well be (and likely is) outdated with respect to how things work today.

Key Takeaways for Improvement

  1. Critical Bias: The reply weight (13.5x) heavily favors controversial content that generates arguments over informational content (see the sketch after this list)
  2. Echo Chamber Risk: Strong similarity-based matching with limited diversity injection creates preference bubbles
  3. Safety Gaps: Negative feedback weights are too small, allowing engaging-but-harmful content to overcome safety signals
  4. Missing Diversity: No explicit mechanisms to introduce cross-demographic or ideological content diversity
  5. Performance Issues: Several bugs and inefficiencies were identified, including a critical runtime error in MaskNet
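
To make points 1 and 3 concrete, here is a minimal Python sketch of how a linear engagement score of this shape behaves. The feature names and all weights except the 13.5x reply weight cited above are illustrative assumptions, not the production values from the ranker:

```python
# Illustrative sketch of a weighted engagement score. Only the 13.5x reply
# weight comes from the analysis above; the other feature names and weights
# are assumptions for demonstration purposes.
ENGAGEMENT_WEIGHTS = {
    "prob_favorite": 0.5,    # assumed weight for a predicted like
    "prob_retweet": 1.0,     # assumed weight for a predicted retweet
    "prob_reply": 13.5,      # replies weighted 13.5x (the bias in point 1)
    "prob_report": -100.0,   # assumed negative-feedback weight (point 3)
}

def score_tweet(predictions):
    """Score a tweet as a linear combination of predicted engagement probabilities."""
    return sum(
        weight * predictions.get(name, 0.0)
        for name, weight in ENGAGEMENT_WEIGHTS.items()
    )

# A tweet likely to provoke argumentative replies can outscore a purely
# informative one, even when it carries some probability of being reported:
controversial = {"prob_favorite": 0.05, "prob_reply": 0.10, "prob_report": 0.002}
informative = {"prob_favorite": 0.30, "prob_reply": 0.01}

print(score_tweet(controversial))  # 0.025 + 1.35 - 0.20 = 1.175
print(score_tweet(informative))    # 0.15 + 0.135 = 0.285
```

Because the reply term dominates the sum, a modest report probability barely dents the score; under these assumed numbers, the negative weights would have to be far larger (or the reply weight far smaller) for safety signals to flip the ordering.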

Personal Motivation

I personally believe that Public Source != Open Source.

For example, if Google were to open source their entire google3 repository, almost no one would know how, or have the resources, to use it.

Similarly, everyone made a big deal about Twitter's algorithm being closed source, but I have not seen any effort to review, analyze, or improve it since it first became public.

People often forget how hard it is to onboard and understand a new codebase.

Fortunately, AI agents have made this much easier. I've personally been able to start adding small features to OSS projects I use or experiment with.

This investigation was done to satisfy my curiosity about what an agent could find.

Three-Step Approach

journal.md contains:

  1. My original (simple) prompt
  2. An improved prompt by Claude Opus 4
  3. The final prompt I used to kick off Claude Code

@CLAassistant

CLAassistant commented Jul 12, 2025

CLA assistant check
All committers have signed the CLA.

