PLY not matching the correct terminal
I created a simple parser in PLY that has two rules:
- when a ':' comes first, a name appears
- when a '=' comes first, a number appears
Corresponding code:
from ply import lex, yacc

tokens = ['Name', 'Number']

def t_Number(t):
    r'[0-9]'
    return t

def t_Name(t):
    r'[a-zA-Z0-9]'
    return t

literals = [':', '=']

def t_error(t):
    print("lex error: " + str(t.value[0]))
    t.lexer.skip(1)

lex.lex()
def p_name(p):
    '''
    expression : ':' Name
    '''
    print("name: " + str(list(p)))

def p_number(p):
    '''
    expression : '=' Number
    '''
    print("number: " + str(list(p)))

def p_error(p):
    print("yacc error: " + str(p.value))

yacc.yacc()
yacc.parse("=3")
yacc.parse(":a")
yacc.parse(":3")
My expectation is that if the parser sees a ':' or '=' it enters the corresponding rule and tries to match the corresponding terminal. Yet in the third example it matches a Number token where a Name is expected, and then fails.
AFAIK the grammar should be context-free (which is needed for it to be parsed); is this the case? Also, how would I handle the case where one token is a superset of another token?
python parsing yacc lex ply
Sorry, I misread. There are parser generators which work as you expect, but AFAIK they are all predictive parsers (like ANTLR) and not bottom-up parsers like PLY or yacc. The logic of bottom-up parsing means that the parser doesn't know which production to pick until it gets to the end. That makes it possible to parse a larger set of languages.
– rici
Nov 13 '18 at 16:34
“Afaik the grammar should be context free” — Correct, but the lexer isn’t (the input 1 could be either a name or a number). And as rici’s answer explains, PLY performs lexical analysis first.
– Konrad Rudolph
Nov 13 '18 at 16:51
asked Nov 13 '18 at 14:10 by jklmnn
1 Answer
Ply tokenises before the grammar is consulted, so the context does not influence the tokenisation. (To be more precise, the parser receives a stream of tokens produced by the lexer. The two processes are interleaved in practice, but they are kept independent.)
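To see why context cannot help, it is enough to model what a standalone lexer does. The following is a minimal sketch (illustrative names, not PLY's actual API): each rule is tried in order against the input, with no knowledge of what the parser expects next, so "3" becomes a Number token whether or not a ':' precedes it.

```python
import re

# Token rules tried in order, mirroring a standalone lexer:
# the winner depends only on the text, never on parser context.
RULES = [
    ('Number', re.compile(r'[0-9]')),
    ('Name',   re.compile(r'[a-zA-Z0-9]')),
    ("':'",    re.compile(r':')),
    ("'='",    re.compile(r'=')),
]

def tokenize(text):
    tokens = []
    pos = 0
    while pos < len(text):
        for kind, pattern in RULES:
            m = pattern.match(text, pos)
            if m:
                tokens.append((kind, m.group()))
                pos = m.end()
                break
        else:
            pos += 1  # skip unrecognised characters, like t_error does
    return tokens

print(tokenize(":3"))  # [("':'", ':'), ('Number', '3')]
print(tokenize("=3"))  # [("'='", '='), ('Number', '3')]
```

Both inputs produce a Number for the digit; the preceding ':' or '=' makes no difference, which is exactly the behaviour the question observed.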
You can build context into your lexer, but that gets ugly really fast. (Nonetheless, it is a common strategy.)
Your best bet is to write your lexical rules to produce the most granular result possible, and then write your grammar to accept all alternatives:
def p_name(p):
    '''
    expression : ':' Name
    expression : ':' Number
    '''
    print("name: " + str(list(p)))

def p_number(p):
    '''
    expression : '=' Number
    '''
    print("number: " + str(list(p)))
That assumes you change your lexical rules to put the most specific pattern first.
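Put together, the fix has two halves: the lexer keeps its most specific rule (Number) ahead of the more general one (Name), and the grammar accepts every token kind that can legally follow ':'. A minimal sketch of the resulting behaviour outside PLY (hypothetical helper names, not PLY's API):

```python
import re

# Most specific lexical rule first; the grammar then accepts
# both Name and Number after ':'.
TOKEN_RULES = [
    ('Number', re.compile(r'[0-9]')),       # most specific first
    ('Name',   re.compile(r'[a-zA-Z0-9]')),
    ('COLON',  re.compile(r':')),
    ('EQUALS', re.compile(r'=')),
]

def tokenize(text):
    tokens, pos = [], 0
    while pos < len(text):
        for kind, pattern in TOKEN_RULES:
            m = pattern.match(text, pos)
            if m:
                tokens.append((kind, m.group()))
                pos = m.end()
                break
        else:
            pos += 1  # skip unrecognised characters
    return tokens

def parse(text):
    toks = tokenize(text)
    # expression : ':' Name | ':' Number   -> a "name" expression
    if len(toks) == 2 and toks[0][0] == 'COLON' and toks[1][0] in ('Name', 'Number'):
        return ('name', toks[1][1])
    # expression : '=' Number              -> a "number" expression
    if len(toks) == 2 and toks[0][0] == 'EQUALS' and toks[1][0] == 'Number':
        return ('number', toks[1][1])
    raise SyntaxError("no rule matches " + text)

print(parse(":3"))  # ('name', '3') -- the case that previously failed
print(parse("=3"))  # ('number', '3')
```

With the grammar widened this way, ":3" parses as a name expression even though its second token is lexed as a Number.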
Thanks for that explanation! I'll try to make my grammar accept more rules (especially with tokens that are a subset of the allowed tokens).
– jklmnn
Nov 14 '18 at 10:42
answered Nov 13 '18 at 16:27 by rici (edited Nov 13 '18 at 16:48)