| Age | Commit message | Author |
|
This avoids creating a dictionary every time an arithmetic operator
is evaluated.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
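A minimal sketch of the idea (names hypothetical, not the actual Meson code): hoist the operator table to module scope so it is built once, instead of rebuilding a dict literal on every evaluation.

```python
import operator

# Built once at import time; a dict literal inside the function body
# would be recreated on every call.
ARITHMETIC_OPS = {
    'add': operator.add,
    'sub': operator.sub,
    'mul': operator.mul,
    'mod': operator.mod,
}

def evaluate_arithmetic(op_name: str, left: int, right: int) -> int:
    # Single hash lookup into the module-level table.
    return ARITHMETIC_OPS[op_name](left, right)
```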
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Tuples are inefficient; instead, provide the ability to use hash table
lookup via either a frozenset or a dictionary.
This also allows using accept_any with COMPARISON_MAP.
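A rough illustration with hypothetical names: both a frozenset and a dict give O(1) hashed lookups, where membership in a tuple would be a linear scan.

```python
# Hashed membership test instead of scanning a tuple.
KEYWORDS = frozenset(['if', 'else', 'endif', 'foreach', 'endforeach'])

# Hashed dispatch from operator text to a node kind in one step.
COMPARISON_MAP = {
    '==': 'equal',
    '!=': 'not_equal',
    '<': 'less',
    '>': 'greater',
}

def is_keyword(word: str) -> bool:
    return word in KEYWORDS

def comparison_node(op: str) -> str:
    return COMPARISON_MAP[op]
```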
|
|
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Identifiers are more common than strings, so check against 'id'
first.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
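A sketch of the reordering (node names hypothetical): the hot identifier case is tested before the rarer token kinds.

```python
def classify(tid: str, value):
    # 'id' is the most common token kind, so test it first; the rarer
    # cases only pay for their own comparison plus the ones above them.
    if tid == 'id':
        return ('IdNode', value)
    elif tid == 'string':
        return ('StringNode', value)
    elif tid == 'number':
        return ('NumberNode', value)
    raise ValueError(f'unexpected token {tid!r}')
```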
|
|
Match single-character tokens with a separate dictionary lookup.
As pointed out by dcbaker, this is even faster than str.index
and gives the syntax error check for free (via KeyError).
It also enables splitting the special-case "if" into two parts, one
for long tokens and one for short tokens, thus providing a further
speedup.
This shaves about 2/3rds of the time spent in lex().
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
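A hypothetical sketch of the dispatch described above: one dictionary maps each single-character token to its id, and a failed lookup (KeyError) doubles as the syntax-error check.

```python
SHORT_TOKENS = {
    '(': 'lparen', ')': 'rparen',
    '[': 'lbracket', ']': 'rbracket',
    '{': 'lcurl', '}': 'rcurl',
    ',': 'comma', ':': 'colon',
}

def lex_short(ch: str) -> str:
    try:
        # Faster than str.index over a string of candidates, and the
        # KeyError branch is the syntax-error check for free.
        return SHORT_TOKENS[ch]
    except KeyError:
        raise SyntaxError(f'unexpected character {ch!r}')
```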
|
|
- Split long expressions in () according to max line length
- Partly revert d028502. Fixes #14935.
- Fixes #15032.
|
|
Change the semantics of IntrospectionBuildTarget.source_nodes
and IntrospectionBuildTarget.extra_files.
The rewriter and the static introspection tool used to be very broken;
now they are *less* broken, hence we add some tests in this commit.
Fixes #11763
|
|
Without this commit, meson thinks that the `var` token in the code below
starts at a different column number than it actually starts, because the
old author forgot to account for the length of the triple quotes.
```
'''
some multiline strings
abc''' + var
```
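A hypothetical sketch of the corrected accounting (not the actual lexer code): the closing triple quote contributes three characters to the final column, and for multiline strings the column restarts on the last physical line.

```python
def column_after_string(start_col: int, token_text: str) -> int:
    """Column reached after lexing a string token.

    token_text includes its quotes, e.g. "'''...'''" or "'abc'".
    """
    lines = token_text.split('\n')
    if len(lines) == 1:
        return start_col + len(lines[0])
    # Multiline string: the column restarts on the last physical line;
    # len() of that line already counts the closing triple quote, which
    # is the length the old code forgot to account for.
    return 1 + len(lines[-1])
```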
|
|
The parser should behave exactly as before,
but the code is a bit easier to understand now.
|
|
By default we point to the start of the most recent token we parsed.
This is used when erroring out on parser issues, to print the line that
caused the error, with a pointer to where we were when we got the error.
In this particular case, the pointer pointed to the start of the last
token we successfully parsed (col), but was not updated if we hit a
token we didn't understand at all. Instead use a pointer to the
unrecognized token itself.
Fixes: https://github.com/mesonbuild/meson/issues/14415
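A minimal sketch of the fix, with hypothetical names: the error location is taken from the token we failed to recognize, not from the last token we successfully parsed.

```python
from dataclasses import dataclass

@dataclass
class Token:
    tid: str
    lineno: int
    colno: int

class ParseError(Exception):
    def __init__(self, msg: str, lineno: int, colno: int):
        super().__init__(msg)
        self.lineno = lineno
        self.colno = colno

def expect(current: Token, previous: Token, wanted: str) -> None:
    if current.tid != wanted:
        # Point at the unrecognized token itself; `previous` (the last
        # accepted token) would place the error marker too early.
        raise ParseError(f'expected {wanted!r}, got {current.tid!r}',
                         current.lineno, current.colno)
```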
|
|
Fixes #14415
|
|
|
|
Except for set_variable(), the variable name is certainly an identifier because it
comes from the parser; thus, the check is unnecessary. Move the regular expression
match to func_set_variable().
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
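A sketch of where the check ends up (names hypothetical): only set_variable() receives arbitrary strings, so only it needs the regular-expression validation.

```python
import re

variables: dict = {}

# Valid Meson-style identifier: letter or underscore, then
# letters, digits or underscores.
_VALID_NAME = re.compile(r'[_a-zA-Z][_0-9a-zA-Z]*$')

def func_set_variable(name: str, value) -> None:
    # Names coming from the parser are identifiers by construction;
    # only this user-facing entry point must validate its argument.
    if not _VALID_NAME.match(name):
        raise ValueError(f'invalid variable name {name!r}')
    variables[name] = value
```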
|
|
Fixes #13566. Fixes #13567.
|
|
|
|
Fixes #13508
- Fix indentation of comments in arrays
- Fix indentation of comments in dicts
- Fix indentation of comments in if clauses
- Fix indentation of comments in foreach clauses
|
|
`meson format` was adding a new empty line each time it tried to split a long line containing a function call with no arguments
|
|
|
|
This will allow transforming string types in the formatter.
|
|
In #02ff955, I used the word `columns` instead of `colons`, but the
meaning really was about the ':' symbol.
|
|
Some text editors on Windows may use UTF-8 with a BOM by default.
Prevent crash and properly report misencoded files.
Fixes #12766.
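A hypothetical sketch of the handling: strip a leading UTF-8 BOM, and turn a decode failure into a clear error naming the file rather than an unhandled traceback.

```python
import codecs

def decode_source(raw: bytes, path: str = '<source>') -> str:
    # Some Windows editors prepend an EF BB BF byte-order mark.
    if raw.startswith(codecs.BOM_UTF8):
        raw = raw[len(codecs.BOM_UTF8):]
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError as e:
        # Report the misencoded file properly instead of crashing.
        raise RuntimeError(f'{path} is not valid UTF-8: {e}') from e
```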
|
|
IfClauseNode is only ever initialized in such a way that this attribute
is immediately set to something valid. And attempting to access its
value when the value is None would be a pretty broken error anyway. The
assignment served no purpose, but did perform a frivolous runtime op in
addition to angering mypy's checks for implicit None.
|
|
This replaces all of the Apache blurbs at the start of each file with an
`# SPDX-License-Identifier: Apache-2.0` string. It also fixes existing
uses to be consistent in capitalization, and to be placed above any
copyright notices.
This removes nearly 3000 lines of boilerplate from the project (only
python files), which no developer cares to look at.
SPDX is in common use, particularly in the Linux kernel, and is the
recommended format for Meson's own `project(license: )` field.
|
|
FIXME: another approach would be to consider cont_eol as comment (i.e.
add backslash and whitespaces to the comment regex). In both cases it
works until we want to parse comments separately.
TODO?: handle eol_cont inside a string (to split long string without
breaking lines). Probably a bad idea and better to simply join a
multiline string.
|
|
use separate Node for multiline strings
|
|
Performed using https://github.com/ilevkivskyi/com2ann
This has no actual effect on the codebase as type checkers (still)
support both and negligible effect on runtime performance since
__future__ annotations ameliorates that. Technically, the bytecode would
be bigger for non function-local annotations, of which we have many
either way.
So if it doesn't really matter, why do a large-scale refactor? Simple:
because people keep wanting to, but it's getting nickel-and-dimed. If
we're going to do this we might as well do it consistently in one shot,
using tooling that guarantees repeatability and correctness.
Repeat with:
```
com2ann mesonbuild/
```
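An illustrative before/after of what the conversion does (example mine, not taken from the diff):

```python
import typing as T

# Before com2ann, the annotation lived in a comment that only type
# checkers read:
#     sources = []  # type: T.List[str]

# After com2ann, it is a real variable annotation:
sources: T.List[str] = []
```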
|
|
These annotations all had a default initializer of the correct type, or
a parent class annotation.
|
|
- Include BaseNode position in hash methods, integer is the most
straightforward way of differentiating nodes.
- Exclude non hashable fields from hash method.
- Avoid using default values in BaseNode so that subclasses can have
  fields without default values without repeating init=False.
- Nodes that do not add fields do not need `@dataclass`.
- Make all node types hashable because they can be used for feature_key
in FeatureCheckBase.use().
- Remove unused type annotations
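A hypothetical sketch of the hashing scheme above: the node position participates in `__hash__` as the most straightforward differentiator, while a mutable (unhashable) field is excluded from both hashing and comparison.

```python
from dataclasses import dataclass, field

@dataclass(unsafe_hash=True)
class Node:
    lineno: int
    colno: int
    # Lists are unhashable, so keep this field out of __hash__ and
    # __eq__ via hash=False / compare=False.
    comments: list = field(default_factory=list, hash=False, compare=False)
```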
|
|
This makes use of dataclasses, but without a dataclass generated
initializer. This means that we get nice `__repr__` and `__eq__` methods
without having to type them by hand.
Pylance understands `dataclass(init=False)`, but mypy doesn't.
https://github.com/microsoft/pyright/issues/1753
https://github.com/python/mypy/issues/10309
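A small sketch of the pattern (class name hypothetical): `dataclass(init=False)` still generates `__repr__` and `__eq__`, while the initializer stays hand-written.

```python
from dataclasses import dataclass

@dataclass(init=False)
class Pos:
    lineno: int
    colno: int

    def __init__(self, lineno: int, colno: int) -> None:
        # Hand-written initializer; the decorator skips generating one
        # but still supplies __repr__ and __eq__ from the fields.
        self.lineno = lineno
        self.colno = colno
```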
|
|
Surprisingly enough we need to do this twice. In some cases
(failing-meson/72 triggers this) we can error out after parsing the
codeblock, but without getting the expected eof.
We need to catch both exceptions as either one can interrupt the built
codeblock object.
Co-authored-by: Xavier Claessens <xavier.claessens@collabora.com>
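A simplified sketch of the double guard (exception classes and structure hypothetical): the error can surface while building the codeblock or afterwards when expecting EOF, and either exception type can interrupt the partially built result.

```python
class ParseException(Exception):
    pass

class LexException(Exception):
    pass

def parse_all(steps):
    # Both the block-building loop and the trailing EOF check live in
    # the same try, and both exception types are caught so the partial
    # codeblock can be attached to the error.
    block = []
    try:
        for step in steps:
            block.append(step())
    except (ParseException, LexException) as e:
        e.partial_block = block
        raise
    return block
```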
|
|
|
|
This reverts commit 348248f0a19bdc80e8a184befb2faaa1d5e66f40.
The rules were relaxed in commit ccc4ce28cc9077d77a0bc9e72b1177eba1be7186
to permit this, so it's never possible to raise this exception anymore.
But that commit was incomplete, and didn't remove the now-useless
infrastructure for exception handling.
The test needed to test this was always broken, and then removed in
commit 465ef856ac9b978f13414db4aff649c66f2e6be5, and still this useless
try/except persisted.
|
|
This is currently only enabled when running unit tests to facilitate
writing failing unit tests.
Fixes: #11394
|