RECURRENT DONATION
Donate monthly to support
the NeculaiFantanaru.com project
An essential quality of this book, compared with others on the market in the same field, is that it defines a leader's competence through examples. I never said it is easy to become a good leader, but if one truly wants to...
I wrote this book to connect personal development and leadership in a simple way, like a puzzle in which you must fit all the pieces together to restore the overall picture.
The purpose of this book is to give you fresh information through concrete examples and to show you how to acquire the skills that make others see things the way you do.
Without claiming to be a treatise, the book presents the experience of an ordinary person - the author - who, through simple words, real facts and familiar examples, teaches ordinary people courage and hope in their personal quest to become masters of themselves and, who knows... perhaps even leaders.
You can see the full code here: https://pastebin.com/kmuu6hv. Install Python, then install these two libraries from the command interpreter (cmd) on Windows 10: py -m pip install pydeepl and py -m pip install beautifulsoup4. The Python code translates the page's HTML tags with the DeepL library (as the code below shows, the <title> tag and the <meta name="description"> content are translated unconditionally).
In addition, the Python code will translate the content of the other tags (your text) - the h1, p, span, li, a, h4 and h5 elements with the classes listed in the code - but only if those tags sit between two HTML comment markers. Of course, you need to replace those tags and classes with your own.
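To see how the comment-marker restriction works, here is a minimal, self-contained sketch; the marker strings and the sample paragraphs are placeholders of my own, not the ones from the real site:

```python
from bs4 import BeautifulSoup

html = """
<html><body>
<p class="text_obisnuit">Outside the markers - left alone.</p>
<!-- START -->
<p class="text_obisnuit">Inside the markers - translated.</p>
<!-- END -->
</body></html>
"""

soup = BeautifulSoup(html, 'html.parser')
page = str(soup)
begin = page.index('<!-- START -->')
end = page.index('<!-- END -->')

# Keep only the paragraphs whose serialized form lies between the two comments
to_translate = [p.get_text() for p in soup.findAll('p', class_='text_obisnuit')
                if begin < page.index(str(p)) < end]
print(to_translate)  # ['Inside the markers - translated.']
```

The position test is purely textual: the whole document is serialized, and an element qualifies only if its own serialized form appears between the two comment strings.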
The code: copy the code below and run it in a Python editor (I use PyScripter). Don't forget to change the path on the "files_from_folder" line. Don't forget to change the API key (it appears 3 times in the code); the list of languages that can be translated: Lang... Sign up at DeepL to obtain an API key.

Note: the HTML comments that delimit the translatable zone were lost when this page was saved, so the two marker strings below are placeholders - replace them with your own comments.

from bs4 import BeautifulSoup
from bs4.formatter import HTMLFormatter
from googletrans import Translator
import requests
import json
import os

# Quick API test (flip to True to try it)
if False:
    test = requests.post('https://api-free.deepl.com/v2/translate',
                         data={'auth_key': 'PUT HERE YOUR API KEY:fx',
                               'text': 'hello',
                               'source_lang': 'EN',
                               'target_lang': 'ZH'  # translates into Chinese
                               }).content
    print(json.loads(test)['translations'][0]['text'])

translator = Translator()

class UnsortedAttributes(HTMLFormatter):
    # Keep attribute order unchanged when re-serializing the HTML
    def attributes(self, tag):
        for k, v in tag.attrs.items():
            yield k, v

files_from_folder = r"c:\test"  # change to your base path, e.g. the folder holding your English website
source_language = 'EN'          # translates from English
use_translate_folder = False
destination_language = 'ZH'     # translates into Chinese
extension_file = ".html"

# Placeholders: replace with the HTML comments that mark your translatable zone
START_COMMENT = '<!-- YOUR START COMMENT -->'
END_COMMENT = '<!-- YOUR END COMMENT -->'

directory = os.fsencode(files_from_folder)

def recursively_translate(node):
    for x in range(len(node.contents)):
        if isinstance(node.contents[x], str):
            if node.contents[x].strip() != '':
                try:
                    newtext = requests.post('https://api-free.deepl.com/v2/translate',
                                            data={'auth_key': 'PUT HERE YOUR API KEY:fx',
                                                  'text': node.contents[x],
                                                  'source_lang': source_language,
                                                  'target_lang': destination_language
                                                  }).content
                    node.contents[x].replace_with(json.loads(newtext)['translations'][0]['text'])
                except:
                    pass
        elif node.contents[x] is not None:
            recursively_translate(node.contents[x])

# (tag, attributes) pairs translated only when they sit between the comment markers
ELEMENTS = [
    ('h1', {'itemprop': 'name', 'class': 'den_articol'}),
    ('p', {'class': 'text_obisnuit'}),
    ('p', {'class': 'text_obisnuit2'}),
    ('span', {'class': 'text_obisnuit2'}),
    ('li', {'class': 'text_obisnuit'}),
    ('a', {'class': 'linkMare'}),
    ('h4', {'class': 'text_obisnuit2'}),
    ('h5', {'class': 'text_obisnuit2'}),
]

for file in os.listdir(directory):
    filename = os.fsdecode(file)
    print(filename)
    if filename == 'y_key_e479323ce281e459.html' or filename == 'TS_4fg4_tr78.html':
        continue
    if filename.endswith(extension_file):
        with open(os.path.join(files_from_folder, filename), encoding='utf-8') as html:
            soup = BeautifulSoup(html, 'html.parser')
        for title in soup.findAll('title'):
            recursively_translate(title)
        for meta in soup.findAll('meta', {'name': 'description'}):
            try:
                newtext = requests.post('https://api-free.deepl.com/v2/translate',
                                        data={'auth_key': 'PUT HERE YOUR API KEY:fx',
                                              'text': meta['content'],
                                              'source_lang': source_language,
                                              'target_lang': destination_language
                                              }).content
                meta['content'] = json.loads(newtext)['translations'][0]['text']
            except:
                pass
        for tag_name, attrs in ELEMENTS:
            for element in soup.findAll(tag_name, attrs):
                page = str(soup)  # recompute each time: translations change the document
                begin_comment = page.index(START_COMMENT)
                end_comment = page.index(END_COMMENT)
                if begin_comment < page.index(str(element)) < end_comment:
                    recursively_translate(element)
        print(f'{filename} translated')
        soup = soup.encode(formatter=UnsortedAttributes()).decode('utf-8')
        new_filename = f'{filename.split(".")[0]}_{destination_language}.html'
        if use_translate_folder:
            try:
                with open(os.path.join(files_from_folder + r'\translated', new_filename), 'w', encoding='utf-8') as new_html:
                    new_html.write(soup[5:-6])
            except:
                os.mkdir(files_from_folder + r'\translated')
                with open(os.path.join(files_from_folder + r'\translated', new_filename), 'w', encoding='utf-8') as new_html:
                    new_html.write(soup[5:-6])
        else:
            with open(os.path.join(files_from_folder, new_filename), 'w', encoding='utf-8') as new_html:
                new_html.write(soup[5:-6])

That's all folks. If you like my code, then do me a favor: translate your website into Romanian, "ro". See also another version of this translator: BeautifulSoup Library or Version 2 or Google Translate API Key.
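If you want to check the traversal logic before spending API credit, you can swap the DeepL request for any local function. This sketch is my own stub, not part of the original script: it "translates" by upper-casing every non-empty text node, while keeping the same recursive shape as the real recursively_translate:

```python
from bs4 import BeautifulSoup

def recursively_translate(node, translate=str.upper):
    # Same walk as the DeepL version: strings are replaced in place,
    # nested tags are descended into recursively.
    for x in range(len(node.contents)):
        if isinstance(node.contents[x], str):
            if node.contents[x].strip() != '':
                node.contents[x].replace_with(translate(node.contents[x]))
        elif node.contents[x] is not None:
            recursively_translate(node.contents[x], translate)

soup = BeautifulSoup('<p class="text_obisnuit">hello <b>world</b></p>', 'html.parser')
for p in soup.findAll('p', class_='text_obisnuit'):
    recursively_translate(p)
print(soup)  # <p class="text_obisnuit">HELLO <b>WORLD</b></p>
```

Because the stub has the same signature as the real call site, any string-to-string function (a dictionary lookup, a cached API wrapper) can be dropped in the same way.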