AI Proficiency

Oct 14, 2025

You're Not Measuring AI — Here's How To Start

AI Proficiency vs Usage Matrix
Carla Lubin

Ameya Kanitkar

Co-founder & CTO

Reading time: 6 minutes

TL;DR


  • Most companies aren't measuring AI at all—just tracking licenses or collecting anecdotes.

  • Without measurement, you're guessing where AI creates value and where it doesn't.

  • Larridin Scout makes three signals visible: Utilization × Proficiency × Value—in 30 minutes.

Most companies aren't measuring AI at all.

At best, there's a license count. Maybe a few adoption anecdotes. Some dashboard tracking MAUs.

That's like running a gym and tracking key fobs handed out, not whether anyone got stronger.

If you can't see who's using AI, how well they're using it, and whether it creates value, you're guessing.

This post is a practical starting point: a simple model—Utilization × Proficiency × Value—and how to capture each signal in days, not months.

We built Larridin Scout to make those signals visible with a 30‑minute deployment and privacy by design.

Here's how to begin.


The question you can't answer (yet)

Ask most enterprise leaders:

"Which teams are getting real value from AI, and which ones are spinning their wheels?"

Silence. Or: "We think marketing is doing well with it... I heard someone mention a good use case..."

That's the symptom of zero measurement.

You've invested in tools. Maybe ran enablement sessions. Sent Slack announcements about "responsible AI use."

But you have no data on:

  • What AI tools are actually being used (sanctioned or shadow IT)

  • Who's using them effectively vs. struggling

  • Whether usage creates time savings, quality improvements, or just busy work

Without measurement, every decision about AI—training, tools, policy, investment—is a guess.


Why utilization alone isn't enough

Some companies do track one thing: adoption.

"42% of employees logged into our AI tool this month."

Great. Now what?

The problem: Same tools, same training, 10x difference in outcomes.

  • One employee writes prompts that turn 3 hours of work into 20 minutes.

  • Another copies a template, gets generic output, gives up, and does it manually anyway.

Both show up as "active users" in your dashboard.

Utilization without proficiency measurement is just vanity metrics.

If you can't see who's good at using AI—and where value actually shows up—you can't manage, coach, or invest intelligently.


A simple model: Utilization × Proficiency × Value

At Larridin, we see AI effectiveness as the product of three signals:

1. Utilization: Are people using AI at all, and how often?
  • Which tools are in use (sanctioned and shadow)?

  • Who's using them, and in which teams or functions?

  • How frequently, and for what types of work?

2. Proficiency: Are they using it well?
  • Are prompts clear, structured, and iterative?

  • Do people know how to refine outputs, or do they accept first drafts?

  • Who are your 10x users vs. your beginners?

3. Value: Did the interaction save time, improve quality, or unlock capability?
  • Real-time feedback: "Was this helpful?"

  • Perceived impact: time saved, quality lift, new capability unlocked

  • Connection to business outcomes

All three matter.
  • Double utilization with low proficiency → value stays flat.

  • Raise proficiency → value climbs, even if usage doesn't spike.

  • Measure value → you learn which use cases to amplify and which to stop.
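The multiplicative framing above can be made concrete with a tiny sketch. This is an illustration only, not Larridin Scout's actual scoring: the field names, the 0–1 normalization, and the equal weighting are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class TeamSignals:
    """Illustrative per-team signals, each normalized to 0.0-1.0 (assumed scale)."""
    utilization: float  # share of the team actively using AI tools
    proficiency: float  # e.g. an average prompt-quality score
    value: float        # e.g. an average "was this helpful?" rating

def effectiveness(s: TeamSignals) -> float:
    """Multiplicative model: any one weak signal drags the whole score down."""
    return s.utilization * s.proficiency * s.value

# High usage but low proficiency yields a low score...
low_prof = effectiveness(TeamSignals(utilization=0.8, proficiency=0.2, value=0.5))
# ...while raising proficiency lifts the score even though usage is halved.
high_prof = effectiveness(TeamSignals(utilization=0.4, proficiency=0.8, value=0.5))

print(f"{low_prof:.2f} vs {high_prof:.2f}")  # → 0.08 vs 0.16
```

The product (rather than a sum) captures the point of the bullets: doubling utilization while proficiency stays low barely moves the score, whereas lifting proficiency raises it even when usage is flat.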


What Larridin Scout does (and how to start in 30 minutes)

Phase 1: AI Discovery + Usage Intelligence (ships today)

Deploy in 30 minutes. Get immediate visibility.

Scout discovers:
  • Every AI tool across your organization (the sanctioned, the shadow, the surprisingly popular)

  • Who's using what, how often, and where usage is concentrated

  • Utilization patterns by team, function, and role

What you learn in week one:
  • The long tail of tools you didn't know existed

  • Hotspots where AI is already indispensable (and why)

  • Gaps where adoption is low or attempts are unproductive

  • A shortlist of power users and use cases to amplify now

Most companies discover 3–5x more AI usage than expected.

That discovery feels like progress—until the real question surfaces:

"Is any of this usage valuable?"


Phase 2: AI Proficiency Intelligence (coming next)

This is where measurement becomes strategic.

Prompt Proficiency Assessment (zero data storage)
  • Evaluate prompt quality to map AI capability across your organization

  • Identify power users who can mentor others

  • Spot teams struggling with prompt engineering basics

  • Target enablement where it drives maximum impact

Privacy by design: We assess structure and technique without storing your prompt content.

In-Context Value Measurement
  • Lightweight micro‑surveys at the point of use

  • "How valuable was this AI interaction?"

  • "Did this save you time or improve quality?"

  • Capture perceived value in the moment, not weeks later in a survey no one remembers

Execution Intelligence Dashboard

Combine all three signals: Utilization × Proficiency × Value

See where AI is truly working—and where it isn't.


The conversations measurement unlocks

Once you have data, you can have better conversations:

Instead of:
"We should probably do more AI training."

You can say:
"Sales ops has high utilization but low proficiency—let's run a targeted workshop. Meanwhile, marketing has 3 power users we should turn into coaches."

Instead of:
"I think people like the new AI tool."

You can say:
"Tool A shows 4x higher value scores than Tool B for the same use case. Let's consolidate licenses and reinvest the savings."

Instead of:
"AI is part of our strategy."

You can say:
"Here are the 5 use cases driving measurable ROI. Here's where we're investing next quarter, and here's what we're stopping."


Why this matters now

AI spending is hitting $644B in 2025, up 76% from 2024.

The winners won't be the companies with the most AI tools.

They'll be the organizations that measure and develop AI proficiency at scale.

You can't manage what you don't measure.

You can't optimize AI effectiveness without knowing who's effective and why.


What changes when you start measuring
  • You stop guessing where to invest: training vs. tools vs. workflow redesign.

  • You find your internal AI coaches: the 10x users who can lift entire teams.

  • You target enablement precisely: the right nudge, to the right people, at the right time.

  • You connect usage to outcomes: not just "who logged in," but "who delivered value."


Privacy and trust by design

We built Scout with privacy at the core:

  • Zero prompt data storage in proficiency assessment

  • Lightweight, browser-based deployment in ~30 minutes

  • Clear controls and aggregate reporting to respect user privacy while giving leaders the signals they need

You get the insights. Your teams keep their trust.


How to begin

Step 1: Get your AI baseline

Deploy Scout and discover what's actually happening with AI in your organization.

  • See utilization across tools, teams, and functions

  • Identify hotspots and gaps

  • Get a shortlist of power users and high-value use cases

Step 2: Add proficiency and value measurement

Layer in prompt assessment and in-context feedback to understand how well people are using AI and what value it's creating.

Step 3: Build your enablement roadmap with data

Now you can answer:

  • Which teams are AI‑fluent and driving ROI?

  • Where will training lift performance the most?

  • Which use cases actually create business value here?

  • What enablement should we stop doing because it's not moving the needle?


Ready to start?

Get your AI Productivity Report: We'll deploy Scout, map your current utilization, and highlight top opportunities to raise proficiency and value.

[Request a Demo] | [Try Scout Free]

You can't transform what you can't see. Let's make AI effectiveness visible—and manageable—across your organization.


About the author

Ameya Kanitkar builds products that turn AI from hype into operational advantage. At Larridin, Ameya leads the work behind Scout's Utilization × Proficiency × Value model for measuring AI effectiveness at scale.


Connect: LinkedIn | ameya@larridin.com

Tags: #AIStrategy #EnterpriseAI #AIProductivity #AIAnalytics #FutureOfWork #DigitalTransformation


Larridin is the complete platform for enterprise AI — from discovery to adoption to impact.
