# tokenize.py -- recovered from a marshalled bytecode (.pyc) dump of
# /opt/alt/python33/lib64/python3.3/tokenize.py (CPython 3.3).
# Docstrings, regular-expression patterns, constant tables, and small
# function bodies were reconstructed from the strings visible in the dump;
# bodies that could not be faithfully recovered are elided with `...`.

__credits__ = ('GvR, ESR, Tim Peters, Thomas Wouters, Fred Drake, '
               'Skip Montanaro, Raymond Hettinger, Trent Nelson, '
               'Michael Foord')

import builtins
import collections
import re
import sys
from codecs import lookup, BOM_UTF8
from io import TextIOWrapper
from itertools import chain
from token import *

cookie_re = re.compile(r'^[ \t\f]*#.*coding[:=][ \t]*([-\w.]+)', re.ASCII)
blank_re = re.compile(br'^[ \t\f]*(?:[#\r\n]|$)', re.ASCII)

import token
__all__ = token.__all__ + ["COMMENT", "tokenize", "detect_encoding",
                           "NL", "untokenize", "ENCODING", "TokenInfo"]
del token

COMMENT = N_TOKENS
tok_name[COMMENT] = 'COMMENT'
NL = N_TOKENS + 1
tok_name[NL] = 'NL'
ENCODING = N_TOKENS + 2
tok_name[ENCODING] = 'ENCODING'
N_TOKENS += 3

EXACT_TOKEN_TYPES = {
    '(':   LPAR,           ')':   RPAR,
    '[':   LSQB,           ']':   RSQB,
    ':':   COLON,          ',':   COMMA,
    ';':   SEMI,           '+':   PLUS,
    '-':   MINUS,          '*':   STAR,
    '/':   SLASH,          '|':   VBAR,
    '&':   AMPER,          '<':   LESS,
    '>':   GREATER,        '=':   EQUAL,
    '.':   DOT,            '%':   PERCENT,
    '{':   LBRACE,         '}':   RBRACE,
    '==':  EQEQUAL,        '!=':  NOTEQUAL,
    '<=':  LESSEQUAL,      '>=':  GREATEREQUAL,
    '~':   TILDE,          '^':   CIRCUMFLEX,
    '<<':  LEFTSHIFT,      '>>':  RIGHTSHIFT,
    '**':  DOUBLESTAR,     '+=':  PLUSEQUAL,
    '-=':  MINEQUAL,       '*=':  STAREQUAL,
    '/=':  SLASHEQUAL,     '%=':  PERCENTEQUAL,
    '&=':  AMPEREQUAL,     '|=':  VBAREQUAL,
    '^=':  CIRCUMFLEXEQUAL,
    '<<=': LEFTSHIFTEQUAL, '>>=': RIGHTSHIFTEQUAL,
    '**=': DOUBLESTAREQUAL,
    '//':  DOUBLESLASH,    '//=': DOUBLESLASHEQUAL,
    '@':   AT,
}


class TokenInfo(collections.namedtuple('TokenInfo', 'type string start end line')):
    def __repr__(self):
        annotated_type = '%d (%s)' % (self.type, tok_name[self.type])
        return ('TokenInfo(type=%s, string=%r, start=%r, end=%r, line=%r)' %
                self._replace(type=annotated_type))

    @property
    def exact_type(self):
        if self.type == OP and self.string in EXACT_TOKEN_TYPES:
            return EXACT_TOKEN_TYPES[self.string]
        else:
            return self.type


def group(*choices): return '(' + '|'.join(choices) + ')'
def any(*choices): return group(*choices) + '*'
def maybe(*choices): return group(*choices) + '?'

# Regular expressions for the lexical grammar.
Whitespace = r'[ \f\t]*'
Comment = r'#[^\r\n]*'
Ignore = Whitespace + any(r'\\\r?\n' + Whitespace) + maybe(Comment)
Name = r'\w+'

Hexnumber = r'0[xX][0-9a-fA-F]+'
Binnumber = r'0[bB][01]+'
Octnumber = r'0[oO][0-7]+'
Decnumber = r'(?:0+|[1-9][0-9]*)'
Intnumber = group(Hexnumber, Binnumber, Octnumber, Decnumber)
Exponent = r'[eE][-+]?[0-9]+'
Pointfloat = group(r'[0-9]+\.[0-9]*', r'\.[0-9]+') + maybe(Exponent)
Expfloat = r'[0-9]+' + Exponent
Floatnumber = group(Pointfloat, Expfloat)
Imagnumber = group(r'[0-9]+[jJ]', Floatnumber + r'[jJ]')
Number = group(Imagnumber, Floatnumber, Intnumber)

StringPrefix = r'(?:[bB][rR]?|[rR][bB]?|[uU])?'

# Tail end of ' string.
Single = r"[^'\\]*(?:\\.[^'\\]*)*'"
# Tail end of " string.
Double = r'[^"\\]*(?:\\.[^"\\]*)*"'
# Tail end of ''' string.
Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
# Tail end of """ string.
Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'
Triple = group(StringPrefix + "'''", StringPrefix + '"""')
# Single-line ' or " string.
String = group(StringPrefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*'",
               StringPrefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*"')

Operator = group(r"\*\*=?", r">>=?", r"<<=?", r"!=",
                 r"//=?", r"->",
                 r"[+\-*/%&|^=<>]=?",
                 r"~")

Bracket = '[][(){}]'
Special = group(r'\r?\n', r'\.\.\.', r'[:;.,@]')
Funny = group(Operator, Bracket, Special)

PlainToken = group(Number, Funny, String, Name)
Token = Ignore + PlainToken

# First (or only) line of ' or " string.
ContStr = group(StringPrefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*" +
                group("'", r'\\\r?\n'),
                StringPrefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*' +
                group('"', r'\\\r?\n'))
PseudoExtras = group(r'\\\r?\n|\Z', Comment, Triple)
PseudoToken = Whitespace + group(PseudoExtras, Number, Funny, ContStr, Name)


def _compile(expr):
    return re.compile(expr, re.UNICODE)

endpats = {"'": Single, '"': Double,
           "'''": Single3, '"""': Double3,
           "r'''": Single3, 'r"""': Double3,
           "b'''": Single3, 'b"""': Double3,
           "R'''": Single3, 'R"""': Double3,
           "B'''": Single3, 'B"""': Double3,
           "br'''": Single3, 'br"""': Double3,
           "bR'''": Single3, 'bR"""': Double3,
           "Br'''": Single3, 'Br"""': Double3,
           "BR'''": Single3, 'BR"""': Double3,
           "rb'''": Single3, 'rb"""': Double3,
           "Rb'''": Single3, 'Rb"""': Double3,
           "rB'''": Single3, 'rB"""': Double3,
           "RB'''": Single3, 'RB"""': Double3,
           "u'''": Single3, 'u"""': Double3,
           "U'''": Single3, 'U"""': Double3,
           "r": None, "R": None, "b": None, "B": None,
           "u": None, "U": None}

triple_quoted = {}
for t in ("'''", '"""',
          "r'''", 'r"""', "R'''", 'R"""',
          "b'''", 'b"""', "B'''", 'B"""',
          "br'''", 'br"""', "Br'''", 'Br"""',
          "bR'''", 'bR"""', "BR'''", 'BR"""',
          "rb'''", 'rb"""', "rB'''", 'rB"""',
          "Rb'''", 'Rb"""', "RB'''", 'RB"""',
          "u'''", 'u"""', "U'''", 'U"""'):
    triple_quoted[t] = t
single_quoted = {}
for t in ("'", '"',
          "r'", 'r"', "R'", 'R"',
          "b'", 'b"', "B'", 'B"',
          "br'", 'br"', "Br'", 'Br"',
          "bR'", 'bR"', "BR'", 'BR"',
          "rb'", 'rb"', "rB'", 'rB"',
          "Rb'", 'Rb"', "RB'", 'RB"',
          "u'", 'u"', "U'", 'U"'):
    single_quoted[t] = t

tabsize = 8


class TokenError(Exception):
    pass


class StopTokenizing(Exception):
    pass


class Untokenizer:

    def __init__(self):
        self.tokens = []
        self.prev_row = 1
        self.prev_col = 0
        self.encoding = None

    def add_whitespace(self, start):
        row, col = start
        if row < self.prev_row or row == self.prev_row and col < self.prev_col:
            raise ValueError("start ({},{}) precedes previous end ({},{})"
                             .format(row, col, self.prev_row, self.prev_col))
        row_offset = row - self.prev_row
        if row_offset:
            self.tokens.append("\\\n" * row_offset)
            self.prev_col = 0
        col_offset = col - self.prev_col
        if col_offset:
            self.tokens.append(" " * col_offset)

    def untokenize(self, iterable):
        it = iter(iterable)
        for t in it:
            if len(t) == 2:
                self.compat(t, it)
                break
            tok_type, token, start, end, line = t
            if tok_type == ENCODING:
                self.encoding = token
                continue
            if tok_type == ENDMARKER:
                break
            self.add_whitespace(start)
            self.tokens.append(token)
            self.prev_row, self.prev_col = end
            if tok_type in (NEWLINE, NL):
                self.prev_row += 1
                self.prev_col = 0
        return "".join(self.tokens)

    def compat(self, token, iterable):
        indents = []
        toks_append = self.tokens.append
        startline = token[0] in (NEWLINE, NL)
        prevstring = False

        for tok in chain([token], iterable):
            toknum, tokval = tok[:2]
            if toknum == ENCODING:
                self.encoding = tokval
                continue

            if toknum in (NAME, NUMBER):
                tokval += ' '

            # Insert a space between two consecutive strings.
            if toknum == STRING:
                if prevstring:
                    tokval = ' ' + tokval
                prevstring = True
            else:
                prevstring = False

            if toknum == INDENT:
                indents.append(tokval)
                continue
            elif toknum == DEDENT:
                indents.pop()
                continue
            elif toknum in (NEWLINE, NL):
                startline = True
            elif startline and indents:
                toks_append(indents[-1])
                startline = False
            toks_append(tokval)


def untokenize(iterable):
    """Transform tokens back into Python source code.
    It returns a bytes object, encoded using the ENCODING
    token, which is the first token sequence output by tokenize.

    Each element returned by the iterable must be a token sequence
    with at least two elements, a token number and token value.  If
    only two tokens are passed, the resulting output is poor.

    Round-trip invariant for full input:
        Untokenized source will match input source exactly

    Round-trip invariant for limited input:
        # Output bytes will tokenize back to the input
        t1 = [tok[:2] for tok in tokenize(f.readline)]
        newcode = untokenize(t1)
        readline = BytesIO(newcode).readline
        t2 = [tok[:2] for tok in tokenize(readline)]
        assert t1 == t2
    """
    ut = Untokenizer()
    out = ut.untokenize(iterable)
    if ut.encoding is not None:
        out = out.encode(ut.encoding)
    return out


def _get_normal_name(orig_enc):
    """Imitates get_normal_name in tokenizer.c."""
    # Only care about the first 12 characters.
    enc = orig_enc[:12].lower().replace('_', '-')
    if enc == 'utf-8' or enc.startswith('utf-8-'):
        return 'utf-8'
    if enc in ('latin-1', 'iso-8859-1', 'iso-latin-1') or \
       enc.startswith(('latin-1-', 'iso-8859-1-', 'iso-latin-1-')):
        return 'iso-8859-1'
    return orig_enc


def detect_encoding(readline):
    """
    The detect_encoding() function is used to detect the encoding that
    should be used to decode a Python source file.  It requires one
    argument, readline, in the same way as the tokenize() generator.

    It will call readline a maximum of twice, and return the encoding used
    (as a string) and a list of any lines (left as bytes) it has read in.

    It detects the encoding from the presence of a UTF-8 BOM or an encoding
    cookie as specified in PEP 263.  If both a BOM and a cookie are present,
    but disagree, a SyntaxError will be raised.  If the encoding cookie is
    an invalid charset, raise a SyntaxError.  Note that if a UTF-8 BOM is
    found, 'utf-8-sig' is returned.

    If no encoding is specified, then the default of 'utf-8' will be
    returned.
    """
    try:
        filename = readline.__self__.name
    except AttributeError:
        filename = None
    bom_found = False
    encoding = None
    default = 'utf-8'

    def read_or_stop():
        try:
            return readline()
        except StopIteration:
            return b''

    def find_cookie(line):
        try:
            line_string = line.decode('utf-8')
        except UnicodeDecodeError:
            msg = "invalid or missing encoding declaration"
            if filename is not None:
                msg = '{} for {!r}'.format(msg, filename)
            raise SyntaxError(msg)

        match = cookie_re.match(line_string)
        if not match:
            return None
        encoding = _get_normal_name(match.group(1))
        try:
            codec = lookup(encoding)
        except LookupError:
            # This behaviour mimics the Python interpreter.
            if filename is None:
                msg = "unknown encoding: " + encoding
            else:
                msg = "unknown encoding for {!r}: {}".format(filename, encoding)
            raise SyntaxError(msg)

        if bom_found:
            if encoding != 'utf-8':
                # This behaviour mimics the Python interpreter.
                if filename is None:
                    msg = 'encoding problem: utf-8'
                else:
                    msg = 'encoding problem for {!r}: utf-8'.format(filename)
                raise SyntaxError(msg)
            encoding += '-sig'
        return encoding

    first = read_or_stop()
    if first.startswith(BOM_UTF8):
        bom_found = True
        first = first[3:]
        default = 'utf-8-sig'
    if not first:
        return default, []

    encoding = find_cookie(first)
    if encoding:
        return encoding, [first]
    if not blank_re.match(first):
        return default, [first]

    second = read_or_stop()
    if not second:
        return default, [first]

    encoding = find_cookie(second)
    if encoding:
        return encoding, [first, second]

    return default, [first, second]


def open(filename):
    """Open a file in read only mode using the encoding detected by
    detect_encoding().
    """
    buffer = builtins.open(filename, 'rb')
    encoding, lines = detect_encoding(buffer.readline)
    buffer.seek(0)
    text = TextIOWrapper(buffer, encoding, line_buffering=True)
    text.mode = 'r'
    return text


def tokenize(readline):
    """
    The tokenize() generator requires one argument, readline, which
    must be a callable object which provides the same interface as the
    readline() method of built-in file objects.  Each call to the
    function should return one line of input as bytes.  Alternately,
    readline can be a callable function terminating with StopIteration:
        readline = open(myfile, 'rb').__next__  # Example of alternate readline

    The generator produces 5-tuples with these members: the token type;
    the token string; a 2-tuple (srow, scol) of ints specifying the row
    and column where the token begins in the source; a 2-tuple (erow,
    ecol) of ints specifying the row and column where the token ends in
    the source; and the line on which the token was found.  The line
    passed is the logical line; continuation lines are included.

    The first token sequence will always be an ENCODING token which
    tells you which encoding was used to decode the bytes stream.
    """
    from itertools import chain, repeat
    encoding, consumed = detect_encoding(readline)
    rl_gen = iter(readline, b"")
    empty = repeat(b"")
    return _tokenize(chain(consumed, rl_gen, empty).__next__, encoding)


def _tokenize(readline, encoding):
    # The main tokenization loop: tracks indentation levels, parenthesis
    # nesting, continuation lines, and multi-line string state, yielding
    # TokenInfo 5-tuples (preceded by an ENCODING token when an encoding
    # is given).  The full body could not be faithfully recovered from
    # the bytecode dump and is elided here.
    ...


def generate_tokens(readline):
    return _tokenize(readline, None)


def main():
    # Command-line interface ("python -m tokenize [-e|--exact]
    # [filename.py]"): tokenizes the named file (or stdin), printing one
    # token per line; with --exact, token names use the exact type.  The
    # argparse-based body is not reproduced here.
    ...


if __name__ == "__main__":
    main()
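The limited round-trip invariant quoted in the `untokenize()` docstring can be checked directly. The sketch below uses the standard-library `tokenize` module of any modern CPython (not the specific 3.3 build this dump came from); the sample `source` string is illustrative only.

```python
# Round-trip demo for the tokenize()/untokenize() pair described above,
# using the standard-library tokenize module.
import io
import tokenize

source = b"x = 1 + 2\nprint(x)\n"

# tokenize() wants a readline callable over bytes; the first token
# it yields is always an ENCODING token.
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))
assert tokens[0].type == tokenize.ENCODING

# The limited (two-element) round-trip invariant: untokenizing
# (type, string) pairs and re-tokenizing reproduces the same pairs.
t1 = [tok[:2] for tok in tokens]
newcode = tokenize.untokenize(t1)       # bytes, encoded per the ENCODING token
t2 = [tok[:2] for tok in tokenize.tokenize(io.BytesIO(newcode).readline)]
assert t1 == t2
```

Because only two-element tuples are passed, `Untokenizer.compat()` is used and the regenerated source has approximate whitespace; the full 5-tuple path instead reconstructs the input exactly.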