Posted on 2018-3-16 11:10:40
Last edited by weiwulai on 2018-3-16 16:17
OK, let me first paste the version from another forum member; it runs without problems. The issue is the first line, `import urllib.request`: my IDE greys it out with the hint "unused import statement". If I comment that line out, the script fails with `AttributeError: module 'urllib' has no attribute 'request'`. Conversely, if I instead comment out `import urllib.parse`, the first line highlights normally and the program runs fine.
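Both symptoms come from how submodule imports work: `import urllib.parse` binds the package name `urllib` and loads only the `parse` submodule, so `urllib.request` stays undefined unless it is imported explicitly (the IDE hint is just cosmetic and misleading). A minimal sketch, running each snippet in a fresh interpreter via a subprocess so no earlier import interferes:

```python
import subprocess
import sys

# In a fresh interpreter, importing only urllib.parse binds the name
# "urllib" but does NOT load the sibling submodule urllib.request.
snippet = "import urllib.parse\nurllib.request"
proc = subprocess.run([sys.executable, "-c", snippet],
                      capture_output=True, text=True)
print(proc.returncode != 0)                     # True: the snippet crashes
print("AttributeError" in proc.stderr)          # True
print("no attribute 'request'" in proc.stderr)  # True, same error as the post

# The reverse direction works because urllib/request.py itself does
# "import urllib.parse", so after importing urllib.request both
# submodules are reachable as attributes of urllib.
snippet2 = "import urllib.request\nprint(urllib.parse.quote('a b'))"
proc2 = subprocess.run([sys.executable, "-c", snippet2],
                       capture_output=True, text=True)
print(proc2.stdout.strip())  # a%20b
```

This is also why the script keeps working when only `import urllib.parse` is commented out: importing `urllib.request` pulls in `urllib.parse` as a side effect.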
import urllib.request
import urllib.parse
import time
import hashlib  # provides common digest algorithms
import json

url = 'http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule&sessionFrom=null'  # the URL shared in the group earlier stopped working; removing the _o fixes it
u = 'fanyideskweb'
d = input('Enter the text to translate: ')
f = str(int(time.time() * 1000))
c = "rY0D^0'nM0}g5Mm1z%1G4"
g = hashlib.md5()
g.update((u + d + f + c).encode('utf-8'))

head = {}
head['User-Agent'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:54.0) Gecko/20100101 Firefox/54.0'
head['Host'] = 'fanyi.youdao.com'
head['Referer'] = 'http://fanyi.youdao.com/'

data = {}
data['i'] = d  # the string we want translated
data['from'] = 'AUTO'
data['to'] = 'AUTO'
data['smartresult'] = 'dict'
data['client'] = u
data['salt'] = f  # the salt used in the hash; one key to getting past Youdao's anti-crawler check
data['sign'] = g.hexdigest()  # the signature string; the other key to the anti-crawler check
data['doctype'] = 'json'
data['version'] = '2.1'
data['keyfrom'] = 'fanyi.web'
data['action'] = 'FY_BY_CL1CKBUTTON'
data['typoResult'] = 'true'

data = urllib.parse.urlencode(data).encode('utf-8')
req = urllib.request.Request(url, data, head)
response = urllib.request.urlopen(req)
html = response.read().decode('utf-8')
target = json.loads(html)
print('Translation: %s' % (target['translateResult'][0][0]['tgt']))
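For reference, here is a small offline sketch of how the two anti-crawler fields and the POST body are built. The client string and secret are taken from the script above; the timestamp is pinned to a fixed value (instead of `int(time.time() * 1000)`) purely so the output is reproducible:

```python
import hashlib
import urllib.parse

u = 'fanyideskweb'
d = 'hello'              # text to translate
f = '1521169840000'      # pinned millisecond timestamp; normally str(int(time.time() * 1000))
c = "rY0D^0'nM0}g5Mm1z%1G4"

# sign = md5(client + text + salt + secret), exactly as in the script above
sign = hashlib.md5((u + d + f + c).encode('utf-8')).hexdigest()
print(len(sign))  # 32 hex characters

# urlencode percent-escapes each key/value pair and joins them with '&'
body = urllib.parse.urlencode({'i': d, 'salt': f, 'sign': sign}).encode('utf-8')
print(body.startswith(b'i=hello&salt=1521169840000&sign='))  # True
```

Because the server recomputes the same md5 from `client`, `i`, `salt`, and its secret, any mismatch between `salt` and `sign` gets the request rejected; that is why both fields must be generated together.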