StyleMatch compares a student's known in-class writing against submitted work across eight research-based stylometric metrics — the same signals forensic linguists have used for decades. All analysis runs in your browser. No text ever leaves your device.
StyleMatch is designed around how teachers actually work — the submitted document is already open in Google Docs, and the controlled sample is whatever in-class writing you have on hand.
Open the student's submitted Google Doc. Click Fill from open doc in the StyleMatch sidebar — the submitted work populates automatically from the active document.
No copy-pasting. No file uploads. One click fills the field directly from the document you're reviewing.
Paste any in-class writing you have from the same student into the Controlled Sample field — a timed in-class essay, a Google Classroom response, a previous assignment.
The controlled sample is the anchor. The closer in genre to the submitted work, the stronger the signal on function words and discourse markers.
Eight metrics compute instantly in your browser. StyleMatch then generates a printable Authorship Consistency Report with divergence ratings, the raw numbers, and plain-language research context for each metric.
The report is designed to be documentable — something you can save, print, or share with a department head before a student conversation.
Every metric in StyleMatch is drawn from peer-reviewed stylometric and computational linguistics research. These are the same signals used in authorship attribution studies, forensic linguistics, and academic integrity research. The citations appear in every report we generate — not as decoration, but because you should know what you're looking at.
Stylometric analysis has been used in academic and legal contexts for decades. Function words in particular — "the," "and," "of," "in," the words writers use without thinking — are among the most stable and reliable authorship signals in free prose, precisely because no one writes them deliberately.
Report generates instantly in the sidebar.
Print or save as PDF for documentation.
Every metric was selected because it's stable within a writer, difficult to consciously replicate, and grounded in published research. The report explains each one — so you understand what you're looking at, not just whether a number is red or green.
Burrows' Delta over 30 core function words — the, and, of, in, that, and so on. These words are used unconsciously and are among the most reliable stylometric signals known. Writers cannot easily change their function word profile even when they try.
Delta over 29 connectives and stance markers — however, therefore, furthermore, suggests. Reflects how a writer structures argument and signals reasoning. Strong signal for same-genre comparisons; the report flags when genres differ.
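Both of the Delta metrics above are variants of the same computation: standardize each feature word's frequency against a reference corpus, then average the absolute z-score differences between the two samples. A minimal sketch in Python, using an illustrative five-word list and a tiny reference corpus (StyleMatch's actual word lists and reference statistics are not published here, so these are placeholder assumptions):

```python
import re
from statistics import mean, stdev

FUNCTION_WORDS = ["the", "and", "of", "in", "that"]  # illustrative subset, not StyleMatch's list

def freqs(text, words):
    # relative frequency of each feature word in the text
    tokens = re.findall(r"[a-z']+", text.lower())
    n = len(tokens)
    return {w: tokens.count(w) / n for w in words}

def burrows_delta(sample_a, sample_b, reference_texts, words=FUNCTION_WORDS):
    # z-score each word's frequency against the reference corpus,
    # then average the absolute z-score differences between the samples
    ref = [freqs(t, words) for t in reference_texts]
    fa, fb = freqs(sample_a, words), freqs(sample_b, words)
    zdiffs = []
    for w in words:
        mu = mean(r[w] for r in ref)
        sd = stdev(r[w] for r in ref) or 1e-9  # guard against zero variance
        zdiffs.append(abs((fa[w] - mu) / sd - (fb[w] - mu) / sd))
    return mean(zdiffs)
```

A lower Delta means the two samples sit closer together in function-word space; identical texts score 0.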
Comma, semicolon, em-dash, colon, exclamation point, and question mark rates per 1,000 characters. Punctuation habits are largely unconscious — most people don't know how often they use a semicolon, which makes it a surprisingly strong authorship signal.
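The per-1,000-character rate is straightforward to compute; a minimal sketch:

```python
def punctuation_rates(text, marks=",;—:!?"):
    # occurrences of each punctuation mark per 1,000 characters of text
    n = len(text)
    return {m: 1000 * text.count(m) / n for m in marks}
```

Normalizing by characters rather than words keeps the rate comparable across samples of different lengths.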
Average sentence length and its standard deviation, separately. Average length catches obvious differences in sentence construction; SD catches rhythm — a writer who mixes short and long sentences has a signature distinct from someone who writes uniformly.
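A minimal sketch of the two statistics above, using a naive split on terminal punctuation (StyleMatch's sentence segmentation is presumably more robust):

```python
import re
from statistics import mean, pstdev

def sentence_length_profile(text):
    # split on terminal punctuation, count words per sentence,
    # and return average length and standard deviation separately
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return mean(lengths), pstdev(lengths)
```

Two writers can share the same average while differing sharply in standard deviation, which is why the report treats the two numbers separately.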
Computed from sentence length and syllable count. Reading complexity is relatively stable across writing tasks for the same writer. A gap of more than 3 grade levels between samples is flagged as worth examining.
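Grade-level formulas built from sentence length and syllable count generally follow the Flesch-Kincaid shape; a minimal sketch, assuming the Flesch-Kincaid grade formula and a crude vowel-group syllable counter (both assumptions — StyleMatch may use a different formula or counter):

```python
import re

def count_syllables(word):
    # crude heuristic: count vowel groups, minimum one per word
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    # Flesch-Kincaid grade level from words-per-sentence and syllables-per-word
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)
```

Under this sketch, comparing the grade for each sample and flagging a gap of more than 3 levels is a single subtraction.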
StyleMatch gives you a structured, repeatable data point. It's not a verdict machine, and we've deliberately designed it not to be one. Here's what to keep in mind.
Function words and discourse markers perform best when both samples are the same genre. A timed in-class response compared against a polished analytical essay will show divergence on these metrics regardless of authorship. The report flags this — but you need to account for it.
A minimum of 500 words per sample is required. Below that, statistical noise dominates. Even above 500, shorter samples produce wider confidence intervals — a 600-word sample is less reliable than a 1,200-word one.
A Divergent rating means the metric differs substantially between samples — not that the student didn't write it. Heavy editing, collaborative revision, significant time between samples, or genuine writing development can all produce divergence.
StyleMatch gives you eight metrics and a printable report. When you want AI-assisted sentence-level analysis — or follow-up questions to use in the student conversation — Verify takes it further. Both tools are in the same extension.
See how the submitted document was built — every session, paste event, and revision cluster. Free. Start here if you haven't already.
AI-assisted deep analysis — sentence-level consistency scoring, internal coherence, and AI-suggested follow-up questions for the student conversation. Uses StyleMatch scores as anchors.
All eight metrics compute entirely in your browser. Student writing never leaves your device — not to PaperTrail, not anywhere.
Students never interact with PaperTrail. No logins, no sign-ups, no data of any kind collected from students.
StyleMatch is pure browser computation. The only thing that leaves your device is the report you choose to print or save yourself.
Designed with North American school privacy requirements in mind. No advertising, no data resale, no third-party tracking.
Install the extension, run Inspect on any student document, and activate StyleMatch when you have an in-class sample to compare against. One extension. Three tools.
⬇ Add PaperTrail to Chrome