jinja2.lexer
~~~~~~~~~~~~

This module implements a Jinja / Python combination lexer. The
`Lexer` class provided by this module is used to do some preprocessing
for Jinja.

On the one hand it filters out invalid operators like the bitshift
operators we don't allow in templates. On the other hand it separates
template code and python code in expressions.

:copyright: (c) 2010 by the Jinja Team.
:license: BSD, see LICENSE for more details.
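The separation of raw template data from expression code that the docstring describes can be illustrated with a small stand-alone sketch. This is a toy, not the real jinja2 lexer; the `toy_lex` name and the two-token scheme are illustrative assumptions:

```python
import re

# A toy lexer that separates raw template data from {{ ... }} expressions,
# in the spirit of the module described above (not the real jinja2 lexer).
# Either alternative matches: an expression delimited by {{ }}, or a run
# of plain data (a lone '{' also counts as data).
TOKEN_RE = re.compile(r"\{\{\s*(?P<expr>.*?)\s*\}\}|(?P<data>[^{]+|\{)", re.S)

def toy_lex(source):
    """Yield ('variable', expr) and ('data', text) pairs for *source*."""
    for m in TOKEN_RE.finditer(source):
        if m.group("expr") is not None:
            yield ("variable", m.group("expr"))
        else:
            yield ("data", m.group("data"))
```

The real lexer additionally handles blocks, comments, operators, string and number literals, and configurable delimiters, but the basic idea is the same: alternate between "data" and "code" modes while scanning.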
[The rest of the file is marshalled CPython 2.7 bytecode for
/usr/lib/python2.7/site-packages/jinja2/lexer.py; only the embedded
docstrings are recoverable:]

describe_token
    Returns a description of the token.

describe_token_expr
    Like `describe_token` but for token expressions.

count_newlines
    Count the number of newline characters in the string. This is
    useful for extensions that filter a stream.

compile_rules
    Compiles all the rules from the environment into a list of rules.

class Failure
    Class that raises a `TemplateSyntaxError` if called. Used by the
    `Lexer` to specify known errors.

class Token
    Token class.

    test -- Test a token against a token expression. This can either
        be a token type or ``'token_type:token_value'``. This can only
        test against string values and types.
    test_any -- Test against multiple token expressions.

class TokenStreamIterator
    The iterator for token streams. Iterate over the stream until the
    eof token is reached.

class TokenStream
    A token stream is an iterable that yields :class:`Token`\s. The
    parser however does not iterate over it but calls :meth:`next` to
    go one token ahead. The current active token is stored as
    :attr:`current`.

    push -- Push a token back to the stream.
    look -- Look at the next token.
    skip -- Go n tokens ahead.
    next_if -- Perform the token test and return the token if it
        matched. Otherwise the return value is `None`.
    skip_if -- Like :meth:`next_if` but only returns `True` or `False`.
    expect -- Expect a given token type and return it. This accepts
        the same argument as :meth:`jinja2.lexer.Token.test`.

get_lexer
    Return a lexer which is probably cached.

class Lexer
    Class that implements a lexer for a given environment.
    Automatically created by the environment class, usually you don't
    have to do that.

    Note that the lexer is not automatically bound to an environment.
    Multiple environments can share the same lexer.

    tokenize -- Calls tokeniter + tokenize and wraps it in a token
        stream.
    wrap -- This is called with the stream as returned by `tokenize`
        and wraps every token in a :class:`Token` and converts the
        value.
    tokeniter -- This method tokenizes the text and returns the
        tokens in a generator. Use this method if you just want to
        tokenize a template.
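The pushback and look-ahead behaviour described by the TokenStream docstrings can be sketched minimally. This is an assumed shape modelled on those docstrings, not the real class; tokens here are plain `(type, value)` tuples rather than `Token` instances:

```python
from collections import deque

class ToyTokenStream:
    """Minimal sketch of a pushback-capable token stream (a toy modelled
    on the TokenStream docstrings, not the real jinja2 class)."""

    def __init__(self, tokens):
        self._iter = iter(tokens)
        self._pushed = deque()
        self.current = None
        next(self)  # load the first token into self.current

    def __next__(self):
        # Go one token ahead and return the old one; pushed-back
        # tokens take priority over the underlying iterator.
        rv = self.current
        if self._pushed:
            self.current = self._pushed.popleft()
        else:
            self.current = next(self._iter, ("eof", ""))
        return rv

    def push(self, token):
        """Push a token back so it is returned before the iterator resumes."""
        self._pushed.append(token)

    def look(self):
        """Look at the next token without consuming the current one."""
        old = next(self)
        result = self.current
        self.push(result)
        self.current = old
        return result
```

A parser built on such a stream advances with `next(stream)` and peeks with `stream.look()`, which is exactly the single-token look-ahead the docstrings describe.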