This is a deserialization puzzle built around pickle: the banner.p mentioned in the page source is the address of a serialized file. Loading it is not the end of the story, though; it took me half a day to work out that the result is a picture drawn with '#' characters. Heh. The script:
import pickle
import urllib.request

if __name__ == '__main__':
    url = 'http://www.pythonchallenge.com/pc/def/banner.p'
    request = urllib.request.Request(url)
    # my pc must use proxy to connect
    request.set_proxy('172.16.0.252:80', 'http')
    try:
        response = urllib.request.urlopen(request)
        # banner.p holds a pickled object; load it straight from the response
        banner = pickle.load(response)
        response.close()
        for line in banner:
            # each line is a list of (char, count) pairs; expand and join
            print(''.join(map(lambda x: x[0] * x[1], line)))
    except Exception as ex:
        print(ex)
Program output:
##### #####
#### ####
#### ####
#### ####
#### ####
#### ####
#### ####
#### ####
### #### ### ### ##### ### ##### ### ### ####
### ## #### ####### ## ### #### ####### #### ####### ### ### ####
### ### ##### #### ### #### ##### #### ##### #### ### ### ####
### #### #### ### ### #### #### #### #### ### #### ####
### #### #### ### #### #### #### #### ### ### ####
#### #### #### ## ### #### #### #### #### #### ### ####
#### #### #### ########## #### #### #### #### ############## ####
#### #### #### ### #### #### #### #### #### #### ####
#### #### #### #### ### #### #### #### #### #### ####
### #### #### #### ### #### #### #### #### ### ####
### ## #### #### ### #### #### #### #### #### ### ## ####
### ## #### #### ########### #### #### #### #### ### ## ####
### ###### ##### ## #### ###### ########### ##### ### ######
This gives the next level's address: channel
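For the record, the rendering loop implies what pickle.load returns here: a list of rows, each row a list of (character, run-length) pairs. A minimal round-trip sketch on invented data (the tuples below are not the real banner contents):

import io
import pickle

# Hypothetical run-length-encoded picture: two rows of (char, count) pairs.
rows = [[(' ', 2), ('#', 5)],
        [('#', 3), (' ', 1), ('#', 3)]]

# Serialize, then load and render the same way the solution above does.
buf = io.BytesIO(pickle.dumps(rows))
for line in pickle.load(buf):
    print(''.join(ch * n for ch, n in line))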
To be honest, it took some digging on Google just to understand what this puzzle wants: repeatedly fetch a web page from the server, then dig the address of the next page out of its source. The catch is that although each page's source is trivial, not every number in it is valid; you need a regular expression with the right pattern, and for this puzzle r'nothing is (\d+)' works. At first I used ''.join([x for x in text if x.isdigit()]) to glue all the digits in each page together; only after chasing the chain past 4000 pages with no end in sight did I realize I had been fooled...
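To see why gluing digits together goes astray, consider a hypothetical page source that carries a decoy number next to the real one (both strings below are made up for illustration):

import re

# Hypothetical page text: '2005' is a decoy, '89456' names the next page.
sample = 'Copyright 2005. and the next nothing is 89456'

# Gluing every digit together yields the bogus index '200589456'...
print(''.join(c for c in sample if c.isdigit()))

# ...while anchoring on the surrounding words extracts only '89456'.
print(re.search(r'nothing is (\d+)', sample).group(1))

The script that walks the chain with the working pattern: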
import re
import urllib.request

if __name__ == '__main__':
    url = 'http://www.pythonchallenge.com/pc/def/linkedlist.php?nothing='
    index = '17675'
    counter = 1
    pattern = re.compile(r'nothing is (\d+)')
    while True:
        try:
            request = urllib.request.Request(url + index)
            # my pc must use proxy to connect
            request.set_proxy('172.16.0.252:80', 'http')
            response = urllib.request.urlopen(request)
            content = str(response.read().decode())
            response.close()
            print(counter, content)
            # stop when the page no longer names a next number
            result = pattern.search(content)
            if not result:
                break
            index = result.group(1)
            counter += 1
        except Exception as ex:
            print(ex)
            break

Program output:
2 and the next nothing is 89456
3 and the next nothing is 43502
4 and the next nothing is 45605
5 and the next nothing is 12970
6 and the next nothing is 91060
7 and the next nothing is 27719
8 and the next nothing is 65667
9 peak.html
This yields the next puzzle's address: peak.html. (Note: my initial index was 17675, which is not the starting value the puzzle gives; I just picked a number from later in the chain, so don't worry about the mismatch.)
One small letter, surrounded by EXACTLY three big bodyguards on each of its sides. What are "big bodyguards"? My guess was capital letters, i.e. the task is to find each lowercase letter surrounded by three capitals, and that is indeed what the puzzle asks. I took a detour at first, though: I printed all seven consecutive letters (three capitals on the left, one lowercase letter in the middle, three capitals on the right), and it took me a while to realize that only the middle lowercase letter is wanted.
Also pay attention to the word EXACTLY: a match only counts when each side has exactly three capital letters, no more and no fewer.
The code turns out to be simple, too:
import re

if __name__ == '__main__':
    finpath = 'fin.txt'
    with open(finpath) as fin:
        # join the text into a single string, dropping all whitespace
        text = ''.join(fin.read().split())
    pattern = re.compile(r'[^A-Z][A-Z]{3}([a-z])[A-Z]{3}[^A-Z]')
    print(''.join(pattern.findall(text)))
Program output: linkedlist
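A quick check on made-up strings shows what the [^A-Z] guards buy: without them, a letter flanked by four capitals on one side would also match, violating the EXACTLY rule. (Both toy strings below are invented for illustration.)

import re

strict = re.compile(r'[^A-Z][A-Z]{3}([a-z])[A-Z]{3}[^A-Z]')
loose = re.compile(r'[A-Z]{3}([a-z])[A-Z]{3}')

# 'a' has exactly three bodyguards per side; 'b' has four on its left.
toy = 'xXXXaXXXx' + 'yXXXXbXXXy'
print(strict.findall(toy))  # ['a']
print(loose.findall(toy))   # ['a', 'b'] -- 'b' should not count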
According to the hint, the task is to find the rare characters in a block of text from the page's HTML source. What counts as rare? No idea yet, but that's fine: first save the whole block into a file called fin.txt and do a little preprocessing:
if __name__ == '__main__':
    finpath = 'fin.txt'
    with open(finpath) as fin:
        # join the text into a single string, dropping all whitespace
        text = ''.join(fin.read().split())
    # count occurrences of each character
    d = {}
    for c in text:
        d[c] = d.get(c, 0) + 1
    for k, v in d.items():
        print(k, v)
Output:
! 6079
# 6115
% 6104
$ 6046
& 6043
) 6186
( 6154
+ 6066
* 6034
@ 6157
[ 6108
] 6152
_ 6112
^ 6030
a 1
e 1
i 1
l 1
q 1
u 1
t 1
y 1
{ 6046
} 6105
Good, now it's obvious: the rare characters are the letters that occur exactly once. A small tweak to the code prints the answer:
if __name__ == '__main__':
    finpath = 'fin.txt'
    with open(finpath) as fin:
        text = ''.join(fin.read().split())
    d = {}
    for c in text:
        d[c] = d.get(c, 0) + 1
    # keep only the characters that appear exactly once
    print(''.join([c for c in text if d[c] == 1]))
Program output: equality
Noticing that everything outside the answer is a non-letter, the following approach also works:
if __name__ == '__main__':
    finpath = 'fin.txt'
    with open(finpath) as fin:
        text = ''.join(fin.read().split())
    # only print letters
    print(''.join([c for c in text if c.isalpha()]))
    # another method
    print(''.join(filter(lambda x: x.isalpha(), text)))
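For reference, the standard library's collections.Counter expresses the frequency count above more compactly; a minimal sketch, assuming the same fin.txt:

from collections import Counter

with open('fin.txt') as fin:
    text = ''.join(fin.read().split())

counts = Counter(text)
# keep the characters that occur exactly once, in their original order
print(''.join(c for c in text if counts[c] == 1))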
Since this is only the second puzzle, it's still easy. Following the hint in the picture, replace each letter with the one two places after it in the alphabet. The solution:
def trans(char):
    '''Translate a letter as the one two places after it'''
    if char.isalpha():
        return chr((ord(char) - ord('a') + 2) % 26 + ord('a'))
    return char

if __name__ == '__main__':
    text = '''g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj. '''
    print(''.join(map(trans, text)))
Program output: i hope you didnt translate it by hand. thats what computers are for. doing it in by hand is inefficient and that's why this text is so long. using string.maketrans() is recommended. now apply on the url.
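As the decoded message suggests, a translation table does the same job; in Python 3 the function lives on str as str.maketrans rather than in the string module. A minimal sketch, decoding the opening words of the same ciphertext:

import string

# Build a table mapping each lowercase letter to the one two places later.
shifted = string.ascii_lowercase[2:] + string.ascii_lowercase[:2]
table = str.maketrans(string.ascii_lowercase, shifted)

print('g fmnc wms bgblr'.translate(table))  # -> 'i hope you didnt'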