Putting a statement into old calligraphy is nice. But it all boils down to “because I said so.” If you’re going to go to that effort, you might as well put the rationale for why it can’t possibly parse the language into the explanation, rather than “because I said so.”
I appreciate the Zalgo calligraphy in particular.
The section about “regular language” is the reason. That’s not being cheeky; that’s a technical term. It immediately dives into some complex set theory stuff, but that’s the place to start understanding.
English isn’t a regular language either. So that means you can’t use regex to parse text. But everyone does anyway.
Huh? Show me the regex to parse the English language.
Parsing text is the reason regex was created!
Page 1, Chapter 1, “Mastering Regular Expressions”, Friedl, O’Reilly 1997.
"Introduction to Regular Expressions
Here’s the scenario: you’re given the job of checking the pages on a web server for doubled words (such as “this this”), a common problem with documents subject to heavy editing. Your job is to create a solution that will:
Accept any number of files to check, report each line of each file that has doubled words, highlight (using standard ANSI escape sequences) each doubled word, and ensure that the source filename appears with each line in the report.
Work across lines, even finding situations where a word at the end of one line is repeated at the beginning of the next.
Find doubled words despite capitalization differences, such as with The the, as well as allow differing amounts of whitespace (spaces, tabs, newlines, and the like) to lie between the words.
Find doubled words even when separated by HTML tags. HTML tags are for marking up text on World Wide Web pages, for example, to make a word bold: it is <b>very</b> very important
That’s certainly a tall order! But, it’s a real problem that needs to be solved. At one point while working on the manuscript for this book, I ran such a tool on what I’d written so far and was surprised at the way numerous doubled words had crept in. There are many programming languages one could use to solve the problem, but one with regular expression support can make the job substantially easier.
Regular expressions are the key to powerful, flexible, and efficient text processing. Regular expressions themselves, with a general pattern notation almost like a mini programming language, allow you to describe and parse text… With additional support provided by the particular tool being used, regular expressions can add, remove, isolate, and generally fold, spindle, and mutilate all kinds of text and data.
Chapter 1: Introduction to Regular Expressions "
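For a taste of what that excerpt’s task looks like, here is a minimal sketch in Python (my own illustration, not the book’s Perl program, and skipping the multi-file and ANSI-highlighting requirements): one pattern that catches doubled words across case differences, whitespace, and simple HTML tags.

```python
import re

# Rough sketch of the doubled-word finder described above (not the book's
# actual solution): a word, then any mix of whitespace and/or simple HTML
# tags, then the same word again, compared case-insensitively.
doubled = re.compile(
    r"\b(\w+)"            # capture a word
    r"(?:\s|<[^>]+>)+"    # whitespace and/or simple tags in between
    r"(\1)\b",            # the same word again, via backreference
    re.IGNORECASE,
)

text = "it is <b>very</b> very important that The the editing is clean"
for m in doubled.finditer(text):
    print(m.group(1), m.group(2))
```

This finds both “very … very” across the `</b>` tag and “The the” despite the case difference, because Python backreferences honor `re.IGNORECASE`.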
None of these examples are for parsing English sentences. They parse completely different formal languages. That it’s text is irrelevant; regex usually operates on text.
You cannot write a regex to give you, for example, “the subject of an English sentence,” just as you can’t write a regex to give you “the contents of a complete div tag,” because neither of those are regular languages (HTML is context-free; not sure about English, my guess is it would be considered recursively enumerable).
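To make the counting problem concrete, here is a small Python illustration (my own sketch, not from the book or the thread): any concrete regex can only hard-code a fixed set of nesting depths, so some depth always falls outside it.

```python
import re

# A regex can enumerate balanced <div>...</div> nests only up to a FIXED
# depth. This pattern hard-codes depths 1 through 3, so depth 4 must fail,
# no matter how many alternatives we keep bolting on.
balanced_to_3 = re.compile(
    r"<div></div>"
    r"|<div><div></div></div>"
    r"|<div><div><div></div></div></div>"
)

def nested(n: int) -> str:
    # n opening divs followed by n closing divs
    return "<div>" * n + "</div>" * n

for n in range(1, 5):
    print(n, bool(balanced_to_3.fullmatch(nested(n))))
```

Depths 1–3 match, depth 4 does not; recognizing *every* depth needs a counter, which is exactly what a regular language cannot have.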
You can’t even write a regex to just consume <div> repeated exactly n times followed by </div> repeated exactly n times, because that is already a context-free language instead of a regular language; in fact, it is the classic example of a minimal context-free language that Wikipedia also uses.
Read it again:
“At one point while working on the manuscript for this book. I ran such a tool on what I’d written so far”
The author explicitly stated that he used regex to parse his own book for errors! The example was using regex to parse HTML.
Just because regex can’t do everything in all cases doesn’t mean it isn’t useful to parse some html and English text.
It’s like screaming, “You can’t build an operating system with C because it doesn’t solve the halting problem!”
There’s a difference between ‘processing’ the text and ‘parsing’ it. The processing described in the section you posted is fine, and you can manage a similar level of processing on HTML. The tricky/impossible bit is parsing the languages. For instance, you can’t write a regex that’ll reliably find the subject, object, and verb in any English sentence, and you can’t write a regex that’ll break an HTML document down into a hierarchy of tags, as regexes don’t support counting depth of recursion, and HTML is irregular anyway, meaning it can’t be reliably parsed with a regular parser.
“you can’t write a regex that’ll reliably find the subject, object, and verb in any English sentence”
Identifying parts of speech isn’t a requirement of the word parse. That’s the linguistic definition. In computer science, identifying tokens is parsing.
https://en.m.wikipedia.org/wiki/Parsing
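That token-level sense of “parsing” is exactly where regexes are comfortable. A toy Python sketch (my example, not from the thread’s sources) that splits HTML-ish text into tag and text tokens:

```python
import re

# Tokenizing is regular work: split markup into tag tokens and text tokens.
# (Building the tag *hierarchy* out of these tokens is the part a regex
# alone can't do.)
token = re.compile(r"<[^>]+>|[^<]+")

print(token.findall("it is <b>very</b> very important"))
```

Each match is either a whole tag or a run of text, so the stream comes out as `['it is ', '<b>', 'very', '</b>', ' very important']`; nesting them into a tree is a job for a stack, not a pattern.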
The text does technically give the reason on the first page:
“It is not a regular language and hence cannot be parsed by regular expressions.”
Here, “regular language” is a technical term, and the statement is correct.
The text goes on to discuss Perl regexes, which I think are able to parse at least all languages in LL(*). I’m fairly sure that is sufficient to recognize XML, but am not quite certain about HTML5. The WHATWG standard doesn’t define HTML5 syntax with a grammar, but with a stateful parsing procedure which defies normal placement in the Chomsky hierarchy.
This, of course, is the real reason: even if such a regex is technically possible with some regex engines, creating it is extremely exhausting, and each time you look into the spec to understand an edge case you suffer 1D6 SAN damage.