[tokenize.pyc, CPython 3.6: compiled bytecode for /usr/lib64/python3.6/tokenize.py.
The bytecode itself is not human-readable; the recoverable docstrings, string
constants, and reconstructed definitions follow.]

Tokenization help for Python programs.
tokenize(readline) is a generator that breaks a stream of bytes into
Python tokens. It decodes the bytes according to PEP-0263 for
determining source file encoding.
It accepts a readline-like method which is called repeatedly to get the
next line of input (or b"" for EOF). It generates 5-tuples with these
members:
    the token type (see token.py)
    the token (a string)
    the starting (row, column) indices of the token (a 2-tuple of ints)
    the ending (row, column) indices of the token (a 2-tuple of ints)
    the original line (string)
It is designed to match the working of the Python tokenizer exactly, except
that it produces COMMENT tokens for comments and gives type OP for all
operators. Additionally, all token lists start with an ENCODING token
which tells you which encoding was used to decode the bytes stream.
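For a concrete picture of those 5-tuples, here is a minimal sketch that
drives the generator over an in-memory buffer (the sample source is made up):

    import io
    from tokenize import tokenize

    source = b"x = 1 + 2\n"
    for tok in tokenize(io.BytesIO(source).readline):
        # Each item is a TokenInfo 5-tuple: type, string, start, end, line;
        # the first one is always the ENCODING token described above.
        print(tok.type, repr(tok.string), tok.start, tok.end)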
__author__ = 'Ka-Ping Yee <ping@lfw.org>'
__credits__ = ('GvR, ESR, Tim Peters, Thomas Wouters, Fred Drake, '
               'Skip Montanaro, Raymond Hettinger, Trent Nelson, '
               'Michael Foord')
from builtins import open as _builtin_open
from codecs import lookup, BOM_UTF8
import collections
from io import TextIOWrapper
from itertools import chain
import itertools as _itertools
import re
import sys
from token import *

cookie_re = re.compile(r'^[ \t\f]*#.*?coding[:=][ \t]*([-\w.]+)', re.ASCII)
blank_re = re.compile(br'^[ \t\f]*(?:[#\r\n]|$)', re.ASCII)

import token
__all__ = token.__all__ + ["COMMENT", "tokenize", "detect_encoding",
                           "NL", "untokenize", "ENCODING", "TokenInfo"]
del token

COMMENT = N_TOKENS
tok_name[COMMENT] = 'COMMENT'
NL = N_TOKENS + 1
tok_name[NL] = 'NL'
ENCODING = N_TOKENS + 2
tok_name[ENCODING] = 'ENCODING'
N_TOKENS += 3
EXACT_TOKEN_TYPES = {
    '(': LPAR, ')': RPAR, '[': LSQB, ']': RSQB, ':': COLON,
    ',': COMMA, ';': SEMI, '+': PLUS, '-': MINUS, '*': STAR,
    '/': SLASH, '|': VBAR, '&': AMPER, '<': LESS, '>': GREATER,
    '=': EQUAL, '.': DOT, '%': PERCENT, '{': LBRACE, '}': RBRACE,
    '==': EQEQUAL, '!=': NOTEQUAL, '<=': LESSEQUAL, '>=': GREATEREQUAL,
    '~': TILDE, '^': CIRCUMFLEX, '<<': LEFTSHIFT, '>>': RIGHTSHIFT,
    '**': DOUBLESTAR, '+=': PLUSEQUAL, '-=': MINEQUAL, '*=': STAREQUAL,
    '/=': SLASHEQUAL, '%=': PERCENTEQUAL, '&=': AMPEREQUAL,
    '|=': VBAREQUAL, '^=': CIRCUMFLEXEQUAL, '<<=': LEFTSHIFTEQUAL,
    '>>=': RIGHTSHIFTEQUAL, '**=': DOUBLESTAREQUAL, '//': DOUBLESLASH,
    '//=': DOUBLESLASHEQUAL, '@': AT, '@=': ATEQUAL}
class TokenInfo(collections.namedtuple('TokenInfo',
                                       'type string start end line')):
    def __repr__(self):
        annotated_type = '%d (%s)' % (self.type, tok_name[self.type])
        return ('TokenInfo(type=%s, string=%r, start=%r, end=%r, line=%r)' %
                self._replace(type=annotated_type))

    @property
    def exact_type(self):
        if self.type == OP and self.string in EXACT_TOKEN_TYPES:
            return EXACT_TOKEN_TYPES[self.string]
        else:
            return self.type

def group(*choices): return '(' + '|'.join(choices) + ')'
def any(*choices): return group(*choices) + '*'
def maybe(*choices): return group(*choices) + '?'
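As a quick check of the exact_type property above, a sketch that only
assumes the stdlib tokenize module:

    import io
    from tokenize import tokenize, tok_name

    toks = list(tokenize(io.BytesIO(b"a += 1\n").readline))
    plus_equal = next(t for t in toks if t.string == '+=')
    print(tok_name[plus_equal.type])        # OP
    print(tok_name[plus_equal.exact_type])  # PLUSEQUAL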
# Note: we use unicode matching for names ("\w") but ascii matching for
# number literals.
Whitespace = r'[ \f\t]*'
Comment = r'#[^\r\n]*'
Ignore = Whitespace + any(r'\\\r?\n' + Whitespace) + maybe(Comment)
Name = r'\w+'

Hexnumber = r'0[xX](?:_?[0-9a-fA-F])+'
Binnumber = r'0[bB](?:_?[01])+'
Octnumber = r'0[oO](?:_?[0-7])+'
Decnumber = r'(?:0(?:_?0)*|[1-9](?:_?[0-9])*)'
Intnumber = group(Hexnumber, Binnumber, Octnumber, Decnumber)
Exponent = r'[eE][-+]?[0-9](?:_?[0-9])*'
Pointfloat = group(r'[0-9](?:_?[0-9])*\.(?:[0-9](?:_?[0-9])*)?',
                   r'\.[0-9](?:_?[0-9])*') + maybe(Exponent)
Expfloat = r'[0-9](?:_?[0-9])*' + Exponent
Floatnumber = group(Pointfloat, Expfloat)
Imagnumber = group(r'[0-9](?:_?[0-9])*[jJ]', Floatnumber + r'[jJ]')
Number = group(Imagnumber, Floatnumber, Intnumber)

def _all_string_prefixes():
    # The valid string prefixes. Only contain the lowercase versions,
    # and don't contain any permutations (include 'fr', but not 'rf').
    _valid_string_prefixes = ['b', 'r', 'u', 'f', 'br', 'fr']
    result = {''}
    for prefix in _valid_string_prefixes:
        for t in _itertools.permutations(prefix):
            # create a list with upper and lower versions of each character
            for u in _itertools.product(*[(c, c.upper()) for c in t]):
                result.add(''.join(u))
    return result

def _compile(expr):
    return re.compile(expr, re.UNICODE)

# Note that since _all_string_prefixes includes the empty string,
# StringPrefix can be the empty string (making it optional).
StringPrefix = group(*_all_string_prefixes())

# Tail end of ' string.
Single = r"[^'\\]*(?:\\.[^'\\]*)*'"
# Tail end of " string.
Double = r'[^"\\]*(?:\\.[^"\\]*)*"'
# Tail end of ''' string.
Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
# Tail end of """ string.
Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'
Triple = group(StringPrefix + "'''", StringPrefix + '"""')
# Single-line ' or " string.
String = group(StringPrefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*'",
               StringPrefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*"')

# Longest operators first (e.g., if = came before ==, == would get
# recognized as two instances of =).
Operator = group(r"\*\*=?", r">>=?", r"<<=?", r"!=",
                 r"//=?", r"->",
                 r"[+\-*/%&@|^=<>]=?",
                 r"~")
Bracket = '[][(){}]'
Special = group(r'\r?\n', r'\.\.\.', r'[:;.,@]')
Funny = group(Operator, Bracket, Special)

PlainToken = group(Number, Funny, String, Name)
Token = Ignore + PlainToken

# First (or only) line of ' or " string.
ContStr = group(StringPrefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*" +
                group("'", r'\\\r?\n'),
                StringPrefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*' +
                group('"', r'\\\r?\n'))
PseudoExtras = group(r'\\\r?\n|\Z', Comment, Triple)
PseudoToken = Whitespace + group(PseudoExtras, Number, Funny, ContStr, Name)

# For a given string prefix plus quotes, endpats maps it to the pattern
# that matches the remainder of that string.
endpats = {}
for _prefix in _all_string_prefixes():
    endpats[_prefix + "'"] = Single
    endpats[_prefix + '"'] = Double
    endpats[_prefix + "'''"] = Single3
    endpats[_prefix + '"""'] = Double3

# All single and triple quoted string prefixes, including opening quotes.
single_quoted = set()
triple_quoted = set()
for t in _all_string_prefixes():
    for u in (t + '"', t + "'"):
        single_quoted.add(u)
    for u in (t + '"""', t + "'''"):
        triple_quoted.add(u)

tabsize = 8
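For intuition about the prefix set built above, a small sketch that calls
the same (private) helper from the stdlib module:

    from tokenize import _all_string_prefixes  # private helper, CPython 3.6

    prefixes = _all_string_prefixes()
    print(len(prefixes))          # 25 distinct prefixes in 3.6, '' included
    print(sorted(prefixes)[:8])   # ['', 'B', 'BR', 'Br', 'F', 'FR', 'Fr', 'R']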
class TokenError(Exception): pass

class StopTokenizing(Exception): pass


class Untokenizer:

    def __init__(self):
        self.tokens = []
        self.prev_row = 1
        self.prev_col = 0
        self.encoding = None

    def add_whitespace(self, start):
        row, col = start
        if row < self.prev_row or row == self.prev_row and col < self.prev_col:
            raise ValueError("start ({},{}) precedes previous end ({},{})"
                             .format(row, col, self.prev_row, self.prev_col))
        row_offset = row - self.prev_row
        if row_offset:
            self.tokens.append("\\\n" * row_offset)
            self.prev_col = 0
        col_offset = col - self.prev_col
        if col_offset:
            self.tokens.append(" " * col_offset)

    def untokenize(self, iterable):
        it = iter(iterable)
        indents = []
        startline = False
        for t in it:
            if len(t) == 2:
                self.compat(t, it)
                break
            tok_type, token, start, end, line = t
            if tok_type == ENCODING:
                self.encoding = token
                continue
            if tok_type == ENDMARKER:
                break
            if tok_type == INDENT:
                indents.append(token)
                continue
            elif tok_type == DEDENT:
                indents.pop()
                self.prev_row, self.prev_col = end
                continue
            elif tok_type in (NEWLINE, NL):
                startline = True
            elif startline and indents:
                indent = indents[-1]
                if start[1] >= len(indent):
                    self.tokens.append(indent)
                    self.prev_col = len(indent)
                startline = False
            self.add_whitespace(start)
            self.tokens.append(token)
            self.prev_row, self.prev_col = end
            if tok_type in (NEWLINE, NL):
                self.prev_row += 1
                self.prev_col = 0
        return "".join(self.tokens)

    def compat(self, token, iterable):
        indents = []
        toks_append = self.tokens.append
        startline = token[0] in (NEWLINE, NL)
        prevstring = False
        for tok in chain([token], iterable):
            toknum, tokval = tok[:2]
            if toknum == ENCODING:
                self.encoding = tokval
                continue
            if toknum in (NAME, NUMBER, ASYNC, AWAIT):
                tokval += ' '
            # Insert a space between two consecutive strings
            if toknum == STRING:
                if prevstring:
                    tokval = ' ' + tokval
                prevstring = True
            else:
                prevstring = False
            if toknum == INDENT:
                indents.append(tokval)
                continue
            elif toknum == DEDENT:
                indents.pop()
                continue
            elif toknum in (NEWLINE, NL):
                startline = True
            elif startline and indents:
                toks_append(indents[-1])
                startline = False
            toks_append(tokval)
def untokenize(iterable):
    """Transform tokens back into Python source code.
    It returns a bytes object, encoded using the ENCODING
    token, which is the first token sequence output by tokenize.

    Each element returned by the iterable must be a token sequence
    with at least two elements, a token number and token value.  If
    only two tokens are passed, the resulting output is poor.

    Round-trip invariant for full input:
        Untokenized source will match input source exactly

    Round-trip invariant for limited input:
        # Output bytes will tokenize back to the input
        t1 = [tok[:2] for tok in tokenize(f.readline)]
        newcode = untokenize(t1)
        readline = BytesIO(newcode).readline
        t2 = [tok[:2] for tok in tokenize(readline)]
        assert t1 == t2
    """
    ut = Untokenizer()
    out = ut.untokenize(iterable)
    if ut.encoding is not None:
        out = out.encode(ut.encoding)
    return out
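The limited-input invariant quoted in the docstring can be exercised
directly; a runnable sketch:

    import io
    from tokenize import tokenize, untokenize

    source = b"x = 1 + 2\n"
    t1 = [tok[:2] for tok in tokenize(io.BytesIO(source).readline)]
    newcode = untokenize(t1)  # bytes, re-encoded per the ENCODING token
    t2 = [tok[:2] for tok in tokenize(io.BytesIO(newcode).readline)]
    assert t1 == t2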
def _get_normal_name(orig_enc):
    """Imitates get_normal_name in tokenizer.c."""
    # Only care about the first 12 characters.
    enc = orig_enc[:12].lower().replace("_", "-")
    if enc == "utf-8" or enc.startswith("utf-8-"):
        return "utf-8"
    if enc in ("latin-1", "iso-8859-1", "iso-latin-1") or \
       enc.startswith(("latin-1-", "iso-8859-1-", "iso-latin-1-")):
        return "iso-8859-1"
    return orig_enc
r�cs�y�jj�Wntk
r$d�YnXd�d}d}�fdd�}��fdd�}|�}|jt�rpd�|d d�}d
}|s||gfS||�}|r�||gfStj|�s�||gfS|�}|s�||gfS||�}|r�|||gfS|||gfS)a
The detect_encoding() function is used to detect the encoding that
should
be used to decode a Python source file. It requires one argument,
readline,
in the same way as the tokenize() generator.
It will call readline a maximum of twice, and return the encoding used
(as a string) and a list of any lines (left as bytes) it has read in.
It detects the encoding from the presence of a utf-8 bom or an encoding
cookie as specified in pep-0263. If both a bom and a cookie are
present,
but disagree, a SyntaxError will be raised. If the encoding cookie is
an
invalid charset, raise a SyntaxError. Note that if a utf-8 bom is
found,
'utf-8-sig' is returned.
If no encoding is specified, then the default of 'utf-8' will
be returned.
NFzutf-8cs y��Stk
rdSXdS)N�)�
StopIterationr-)�readliner-r.�read_or_stop{sz%detect_encoding.<locals>.read_or_stopcs�y|jd�}Wn4tk
rBd}�dk r6dj|��}t|��YnXtj|�}|sVdSt|jd��}yt|�}Wn:t k
r��dkr�d|}ndj�|�}t|��YnX�r�|dkr؈dkr�d}n
dj��}t|��|d 7}|S)
Nzutf-8z'invalid or missing encoding declarationz{} for
{!r}rzunknown encoding: zunknown encoding for {!r}: {}zencoding
problem: utf-8z encoding problem for {!r}: utf-8z-sig)
�decode�UnicodeDecodeErrorre�SyntaxError� cookie_re�matchr�r:r�LookupError)r}�line_string�msgr�ra�codec)� bom_found�filenamer-r.�find_cookie�s6
z$detect_encoding.<locals>.find_cookieTrz utf-8-sig)�__self__�name�AttributeErrorr�r�blank_rer�)r�ra�defaultr�r��first�secondr-)r�r�r�r.r
cs8
&
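A short sketch of the two return values on an in-memory file carrying a
pep-0263 cookie:

    import io
    from tokenize import detect_encoding

    buf = io.BytesIO(b"# -*- coding: latin-1 -*-\nx = 1\n")
    encoding, lines = detect_encoding(buf.readline)
    print(encoding)  # 'iso-8859-1' (the normalized name for latin-1)
    print(lines)     # the raw byte lines consumed while sniffing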
def open(filename):
    """Open a file in read only mode using the encoding detected by
    detect_encoding().
    """
    buffer = _builtin_open(filename, 'rb')
    try:
        encoding, lines = detect_encoding(buffer.readline)
        buffer.seek(0)
        text = TextIOWrapper(buffer, encoding, line_buffering=True)
        text.mode = 'r'
        return text
    except:
        buffer.close()
        raise
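A sketch contrasting tokenize.open() with the builtin open (the path
sample.py is hypothetical):

    import tokenize

    with tokenize.open('sample.py') as f:  # text mode, detected encoding
        print(f.encoding, f.mode)          # e.g. 'utf-8' 'r'
        src = f.read()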
def tokenize(readline):
    """
    The tokenize() generator requires one argument, readline, which
    must be a callable object which provides the same interface as the
    readline() method of built-in file objects.  Each call to the function
    should return one line of input as bytes.  Alternatively, readline
    can be a callable function terminating with StopIteration:
        readline = open(myfile, 'rb').__next__  # Example of alternate readline

    The generator produces 5-tuples with these members: the token type; the
    token string; a 2-tuple (srow, scol) of ints specifying the row and
    column where the token begins in the source; a 2-tuple (erow, ecol) of
    ints specifying the row and column where the token ends in the source;
    and the line on which the token was found.  The line passed is the
    logical line; continuation lines are included.

    The first token sequence will always be an ENCODING token
    which tells you which encoding was used to decode the bytes stream.
    """
    from itertools import chain, repeat
    encoding, consumed = detect_encoding(readline)
    rl_gen = iter(readline, b"")
    empty = repeat(b"")
    return _tokenize(chain(consumed, rl_gen, empty).__next__, encoding)
def _tokenize(readline, encoding):
    [The main tokenizer loop; the bytecode of this generator is not
    reproduced here.  Its recoverable string constants include the error
    messages "EOF in multi-line string", "unindent does not match any
    outer indentation level" (raised as an IndentationError against the
    pseudo-filename "<tokenize>"), and "EOF in multi-line statement",
    plus the keywords "async", "await", and "def" used to decide when
    ASYNC/AWAIT tokens are emitted inside async function bodies.]
def generate_tokens(readline):
    return _tokenize(readline, None)
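Unlike tokenize(), the generate_tokens() wrapper reads str lines and emits
no ENCODING token; a minimal sketch:

    import io
    from tokenize import generate_tokens

    for tok in generate_tokens(io.StringIO("y = [1, 2]\n").readline):
        print(tok)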
def main():
    import argparse

    # Helper error handling routines
    def perror(message):
        print(message, file=sys.stderr)

    def error(message, filename=None, location=None):
        if location:
            args = (filename,) + location + (message,)
            perror("%s:%d:%d: error: %s" % args)
        elif filename:
            perror("%s: error: %s" % (filename, message))
        else:
            perror("error: %s" % message)
        sys.exit(1)

    # Parse the arguments and options
    parser = argparse.ArgumentParser(prog='python -m tokenize')
    parser.add_argument(dest='filename', nargs='?',
                        metavar='filename.py',
                        help='the file to tokenize; defaults to stdin')
    parser.add_argument('-e', '--exact', dest='exact', action='store_true',
                        help='display token names using the exact type')
    args = parser.parse_args()

    try:
        # Tokenize the input
        if args.filename:
            filename = args.filename
            with _builtin_open(filename, 'rb') as f:
                tokens = list(tokenize(f.readline))
        else:
            filename = "<stdin>"
            tokens = _tokenize(sys.stdin.readline, None)

        # Output the tokenization
        for token in tokens:
            token_type = token.type
            if args.exact:
                token_type = token.exact_type
            token_range = "%d,%d-%d,%d:" % (token.start + token.end)
            print("%-20s%-15s%-15r" %
                  (token_range, tok_name[token_type], token.string))
    except IndentationError as err:
        line, column = err.args[1][:2]
        error(err.args[0], filename, (line, column))
    except TokenError as err:
        line, column = err.args[1]
        error(err.args[0], filename, (line, column))
    except SyntaxError as err:
        error(err, filename)
    except OSError as err:
        error(err)
    except KeyboardInterrupt:
        print("interrupted\n")
    except Exception as err:
        perror("unexpected error: %s" % err)
        raise
if __name__ == "__main__":
    main()
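The main() above backs the python -m tokenize command line; an illustrative
session (spacing follows the %-20s%-15s%-15r format string, names shown via
exact_type because of -e, output approximate):

    $ echo 'x = 1' | python -m tokenize -e
    1,0-1,1:            NAME           'x'
    1,2-1,3:            EQUAL          '='
    1,4-1,5:            NUMBER         '1'
    1,5-1,6:            NEWLINE        '\n'
    2,0-2,0:            ENDMARKER      ''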