Methodology
How does scoring work?
Overview
Every job is broken down into its core tasks, sourced from O*NET, the U.S. Department of Labor's comprehensive occupational database. Each task is evaluated individually for AI automation risk, and the task scores are then combined into an overall job score using a weighted average.
Task-Level Analysis
Rather than scoring a job as a whole, we analyze each individual task that makes up the role. This gives a much more nuanced picture. Most jobs have a mix of highly automatable tasks and tasks that remain deeply human. A financial analyst might have routine data compilation (high risk) alongside client relationship management (low risk).
Risk Scoring
Each task receives a risk score from 0 to 100, reflecting how likely AI is to automate that task in the near to medium term. The score considers:
- Whether the task is primarily cognitive or physical
- How much the task relies on interpersonal skills, empathy, or trust
- Whether AI tools already exist that can perform the task
- The degree of judgment, creativity, or novel problem-solving required
- Regulatory or safety constraints that limit automation
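To make the factors above concrete, here is a minimal sketch of how per-task factors could map to a 0–100 risk score. The factor names, weights, and linear combination are hypothetical illustrations, not the rubric the model actually applies.

```python
from dataclasses import dataclass

@dataclass
class TaskFactors:
    """Hypothetical 0-1 ratings for the five factors listed above."""
    cognitive: float        # primarily cognitive (1.0) vs. physical (0.0)
    interpersonal: float    # reliance on interpersonal skills, empathy, trust
    tooling_exists: float   # AI tools already perform this task
    judgment: float         # judgment, creativity, novel problem-solving
    regulated: float        # regulatory or safety constraints on automation

def task_risk(f: TaskFactors) -> float:
    """Toy linear combination: cognitive work with existing tooling raises
    risk; human judgment, interpersonal trust, and regulation lower it.
    Weights are invented for illustration. Result is clamped to 0-100."""
    raw = (0.30 * f.cognitive
           + 0.30 * f.tooling_exists
           - 0.15 * f.interpersonal
           - 0.15 * f.judgment
           - 0.10 * f.regulated)
    return round(max(0.0, min(1.0, raw)) * 100, 1)

# Routine data compilation: cognitive, well-tooled, little human judgment.
print(task_risk(TaskFactors(1.0, 0.0, 1.0, 0.2, 0.0)))  # high risk
```

In practice the model produces this judgment holistically rather than from fixed weights, which is why each task's score comes with a written explanation.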
Weighted Averages
Not all tasks matter equally. Each task has a weight reflecting how central it is to the role. A surgeon's “perform operations” task carries more weight than “update patient records.” The overall job score is the weighted average of all task scores.
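The weighted average described above can be sketched in a few lines. The financial-analyst task scores and weights below are made-up illustrations:

```python
def job_score(tasks: list[tuple[float, float]]) -> float:
    """Overall job score: weighted average of (risk_score, weight) pairs.
    Weights need not sum to 1; they are normalized here."""
    total_weight = sum(w for _, w in tasks)
    weighted_sum = sum(score * w for score, w in tasks)
    return round(weighted_sum / total_weight, 2)

# Hypothetical financial-analyst breakdown (scores and weights invented):
analyst = [
    (85.0, 0.40),  # routine data compilation: high risk, central task
    (25.0, 0.35),  # client relationship management: low risk
    (60.0, 0.25),  # drafting reports
]
print(job_score(analyst))  # 57.75
```

Because central tasks carry larger weights, automating a peripheral task moves the overall score far less than automating a core one.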
The Model
All scoring is currently performed by Gemini 3.1 Pro Preview. The model evaluates each task and provides a detailed explanation of its reasoning, which you can read by clicking on any task in a job's breakdown. We plan to add cross-validation from multiple AI models in the future.
Risk Levels
Limitations
These scores represent an AI model's assessment, not a definitive prediction. The actual pace of automation depends on regulation, economics, social acceptance, and technological breakthroughs that are difficult to forecast. Use these scores as a starting point for thinking about your career, not as a verdict.