Voice Recognition Software Dictation Test

By Dan Hope

Comparing the features of voice recognition software can only tell you so much. To really know which product is best, you need to run the programs head to head. So we put together a dictation test to see how accurately each program turns your speech into text on the screen.

While these programs do more than dictation (they’re also designed to help you navigate your operating system and other programs), the dictation test is actually a good way to compare them because it uses words that aren’t programmed as commands. If a program can recognize all the words in a regular, conversational sentence, it should be worth the money. If it can’t recognize them, then what’s the point?

The paragraph below is the sample text we used to test the software. It explains how the test works and how to see the results. The sample text was read twice to each program. The first results are from a steady, enunciated reading, slightly slower than normal conversation. The second reading was done at a slightly faster, conversational pace. Check out the results.

Sample Text
This text was created as a trial of the voice recognition software we tested for this review. The same reviewer read this text in the dictation mode of each program. By using the same text and the same voice profile, we can get a pretty good idea of how accurately each program translates speech, allowing us to compare them. Any errors in dictation will be in bold, so you can easily see where the program made the mistakes. The number of errors will be counted (including punctuation errors) and combined to give an accuracy score for each program. And to create a real challenge, we’ll include a few humdinger colloquialisms, by golly. We’ll see if these groovy programs can jive with less stuffy lingo. Savvy?

MacSpeech Dictate
This text was created as a trial of the voice recognition software we tested for this review. The same reviewer read this text in the dictation mode of each program. By using the same text in the same voice profile, we can get a pretty good idea of how accurately each program translates speech, allowing us to compare them. Any errors in dictation will be in bold, so you can easily see where the program made the mistakes. The number of errors will be counted (including punctuation errors) and combined to give an accuracy score for each program. And to create a real challenge, [will] include a few humdinger colloquialisms, by golly. We’ll see if these groovy programs can jive with less stuffy lingo. Savvy?

Results: 1 error

This text was created as a trial of the voice recognition software we tested for this review. The same reviewer read this text in the dictation mode of each program. By using the same text and the same voice profile, we can get a pretty good idea of how accurately each program translates speech, allowing us to compare them. Any errors in dictation will be in bold, so you can easily see where the program made [the] mistakes. The number of errors will be counted (including punctuation errors) and combine to give an accuracy score for each program. And to create a real challenge, [will] include a few humdinger colloquialisms, by golly. We’ll see if these groovy programs can jive with less stuffy lingo. Sadly rushed and Mark.

Results: 8 errors (including 2 missed words)

Dragon Naturally Speaking

This text was created as a trial of the voice recognition software we tested for this review. The same reviewer read this text in the dictation mode of each program. By using the same text in the same voice profile, we can get a pretty good idea of how accurately each program translates speech, allowing us to compare them. Any errors in dictation will be in bold and underlined, so you can easily see where the program made the mistakes. The number of errors will be counted fantasies including punctuation errors print this and combined to give an accuracy score for each program. And to create a real challenge, will include a few humdinger colloquialisms, by golly. We'll see if these groovy programs can jive with Lester feeling they'll. 70?

Results: 8 errors

This text was created as a trial of the voice recognition software we tested for this review. The same reviewer read this text in the dictation mode of each program. By using the same text in the same voice profile, we can get a pretty good idea of how accurately each program translates speech, allowing us to compare them. Any errors in dictation will be in bold and underlined, so you can easily see where the program made [the] mistakes. The number of errors will be counted for emphasis including punctuation errors parenthesis and combined to give an accuracy score for each program. And [to] create a real challenge, will include a few humdinger colloquialisms, by golly. We'll see if these groovy programs can jive with Lester feeling they'll. Savvy?

Results: 9 errors (including 2 missed words)

e-Speaking

This text was created as a trial of the voice recognition software we tested for this review. The same reviewer read this text and the dictation mode of each program. By using the same text in the same voice profile, we can get a pretty good idea of how accurately each program translates speech, allowing us to compare them. Any errors in dictation will be in bold and underlined, so you can easily see where the program made the mistakes. The number of errors will be counted parenthesis including punctuation errors parenthesis and combines to give an accuracy score for each program. And to create a real challenge, we’ll include a few humdinger cool colloquialisms, by golly. We’ll see if these groovy programs can jive with less stuffy lingo. Setting?

Results: 7 errors

This text was created as a trial of the voice recognition software we tested for this review. The same reviewer read this text and the dictation mode of each program. By using the same text in the same voice profile, we can get a pretty good idea of how accurately each program translates speech, allowing us to compare them. Any errors and dictation will be in bold and underlined, so you can easily see where the program made the mistakes. The number of errors or be counted parenthesis including punctuation errors parenthesis and scum binds to give an accuracy score for each program. And to create a real challenge, will include a few humdinger colloquialisms, by golly. We’ll see if these groovy programs can jibe with less to feeling go. Sadly Russian mark

Results: 12 errors

Talking Desktop
This text was created as a trial of the voice recognition software we tested for this review. The same remover read this text in the dictation motivates [of] [each] program. By using the same text and the same voice profile, we can get [a] pretty good idea of how accurately to program translates speech, allowing us to compare them. Any errors indication will be in bold and underlined, so you can easily see where the program made the mistakes. The number of errors will be counted parenthesis including punctuation errors but this is an combined to give an accuracy score for each program. And to create a real challenge, we’ll include a few humdinger colloquialisms, by golly. We’ll see if these groovy programs can drive with west of the lingo. Sadly? Can

Results: 15 errors

This text was created as a trial [of] the voice recognition software we tested for this review. The same reviewer of this text in the dictation mode of each program. By using the same text in the same voice profile, we can get a pretty good idea of how accurately to program translates beach, allowing us to compare them. Any errors in dictation will be in bold and underlined, so you can easily see where the program made the mistakes. The number of errors to be counted princess including the jewishness parenthesis and combines to give an actress to score for each program. And to create a real challenge, will include a few humdinger colloquialisms, by golly. We’ll see if these groovy programs can jibe with what’s the feeling go. Adding? And

Results: 19 errors

Final Results
1. MacSpeech Dictate: 9 errors, 98% accuracy
2. Dragon Naturally Speaking: 17 errors, 93% accuracy
3. e-Speaking: 19 errors, 92% accuracy
4. Talking Desktop: 34 errors, 86% accuracy
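If you're curious how error counts turn into accuracy percentages, a simple approach is correct words divided by total words. Here's a minimal sketch of that math in Python, assuming the roughly 125-word sample text read twice per program (250 words total); the article doesn't spell out its exact formula, so these figures are illustrative rather than a reproduction of the scores above:

```python
# Rough per-program accuracy from total error counts across both readings.
# Assumption: the sample text is about 125 words, read twice (250 words total).
TOTAL_WORDS = 125 * 2

# Total errors per program, summed from the two readings above.
results = {
    "MacSpeech Dictate": 9,
    "Dragon Naturally Speaking": 17,
    "e-Speaking": 19,
    "Talking Desktop": 34,
}

def accuracy(errors: int, total: int = TOTAL_WORDS) -> float:
    """Percentage of words transcribed correctly, rounded to one decimal."""
    return round(100 * (1 - errors / total), 1)

for name, errors in results.items():
    print(f"{name}: {errors} errors -> {accuracy(errors)}% accuracy")
```

Run with the error totals above, this lands within a point or so of the published scores for most of the programs, which suggests the review's percentages were computed along similar lines.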

For more information on these products, see the side-by-side comparison of features and price.

At TopTenREVIEWS We Do the Research So You Don’t Have To.™ 


 