RECURRENT DONATION
Donate monthly to support
the NeculaiFantanaru.com project
A quality that sets this book apart from others on the market in the same field is that it defines a leader's abilities through examples. I have never claimed that becoming a good leader is easy, but if a person wants...
I wrote this book to connect personal development to leadership in a simple way, like a puzzle whose pieces you must fit together to recover the whole picture.
The aim of this book is to give you fresh information through concrete examples and to show you how to acquire the skills that make others see things the way you do.
Without claiming to be a treatise, the book presents the experience of an ordinary person (the author) who, through simple words, real situations, and familiar examples, teaches ordinary people courage and hope in their personal quest to become masters of themselves and, who knows... perhaps even leaders.
You can see the full code here: HTTPS: // passine.com / 7 Matahotra 27PP Q6. Install Python, then install these two libraries from the command interpreter (cmd) on Windows 10; Python will translate the HTML tags below using the Google translation library:

py -m pip install googletrans==4.0.0rc1
py -m pip install beautifulsoup4

The Python code will translate the content of the following tags (your text), but only if those tags are delimited by HTML comments. Of course, you will need to replace those tags with your own.
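The script below only translates a tag when it sits between two HTML comment markers in the page. A minimal standard-library sketch of that boundary check (the marker strings here are hypothetical placeholders, since the original comment text did not survive extraction):

```python
# Sketch of the script's region check: a fragment is processed only if its
# position falls between two HTML comment markers, found with str.index.
# The marker strings below are hypothetical placeholders.
page = (
    '<html><body>'
    '<p class="text_obisnuit">Header, do not translate</p>'
    '<!-- begin translate -->'
    '<p class="text_obisnuit">Translate me</p>'
    '<!-- end translate -->'
    '<p class="text_obisnuit">Footer, do not translate</p>'
    '</body></html>'
)

def in_marked_region(html: str, fragment: str) -> bool:
    begin = html.index('<!-- begin translate -->')
    end = html.index('<!-- end translate -->')
    # True only when the fragment's first occurrence lies between the markers
    return begin < html.index(fragment) < end

print(in_marked_region(page, '<p class="text_obisnuit">Translate me</p>'))  # True
print(in_marked_region(page, '<p class="text_obisnuit">Header, do not translate</p>'))  # False
```

Note that `str.index` finds the first occurrence only, so this check is reliable when each fragment appears once in the page, which is the situation the full script assumes.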
The code: copy the code below and run it in a Python editor (I use PyScripter). Don't forget to change the path on the files_from_folder line. And here is the list of languages that can be translated: Lang..

from bs4 import BeautifulSoup
from bs4.formatter import HTMLFormatter
from googletrans import Translator
import os

translator = Translator()

class UnsortedAttributes(HTMLFormatter):
    # keep tag attributes in their original order instead of sorting them
    def attributes(self, tag):
        for k, v in tag.attrs.items():
            yield k, v

files_from_folder = r"e:\Carte\BB\17 - Site Leadership\Principal"
use_translate_folder = False
destination_language = 'ceb'
extension_file = ".html"

directory = os.fsencode(files_from_folder)

# NOTE: the HTML comment markers that delimit the translatable region were
# lost when this page was extracted; replace these placeholders with the
# comment markers used in your own pages.
BEGIN_MARKER = '<!-- begin translate -->'
END_MARKER = '<!-- end translate -->'

def recursively_translate(node):
    # walk the tag's children: translate string leaves, recurse into tags
    for x in range(len(node.contents)):
        if isinstance(node.contents[x], str):
            if node.contents[x].strip() != '':
                try:
                    node.contents[x].replace_with(translator.translate(
                        node.contents[x], dest=destination_language).text)
                except Exception:
                    pass
        elif node.contents[x] is not None:
            recursively_translate(node.contents[x])

for file in os.listdir(directory):
    filename = os.fsdecode(file)
    print(filename)
    if filename in ('y_key_e479323ce281e459.html', 'TS_4fg4_tr78.html'):
        continue  # ignore these two files
    if not filename.endswith(extension_file):
        continue
    with open(os.path.join(files_from_folder, filename), encoding='utf-8') as html:
        # wrap the document in <pre>...</pre> so the parser preserves it
        # verbatim; the wrapper is stripped again when writing (soup[5:-6])
        soup = BeautifulSoup('<pre>' + html.read() + '</pre>', 'html.parser')

    for title in soup.findAll('title'):
        recursively_translate(title)

    for meta in soup.findAll('meta', {'name': 'description'}):
        try:
            meta['content'] = translator.translate(
                meta['content'], dest=destination_language).text
        except Exception:
            pass

    def in_marked_region(tag):
        # translate a tag only if it lies between the two comment markers
        text = str(soup)
        begin_comment = text.index(BEGIN_MARKER)
        end_comment = text.index(END_MARKER)
        return begin_comment < text.index(str(tag)) < end_comment

    for h1 in soup.findAll('h1', {'itemprop': 'name'}, class_='den_articol'):
        if in_marked_region(h1):
            recursively_translate(h1)
    for p in soup.findAll('p', class_='text_obisnuit'):
        if in_marked_region(p):
            recursively_translate(p)
    for p in soup.findAll('p', class_='text_obisnuit2'):
        if in_marked_region(p):
            recursively_translate(p)
    for span in soup.findAll('span', class_='text_obisnuit2'):
        if in_marked_region(span):
            recursively_translate(span)
    for li in soup.findAll('li', class_='text_obisnuit'):
        if in_marked_region(li):
            recursively_translate(li)
    for a in soup.findAll('a', class_='linkMare'):
        if in_marked_region(a):
            recursively_translate(a)
    for h4 in soup.findAll('h4', class_='text_obisnuit2'):
        if in_marked_region(h4):
            recursively_translate(h4)
    for h5 in soup.findAll('h5', class_='text_obisnuit2'):
        if in_marked_region(h5):
            recursively_translate(h5)

    print(f'{filename} translated')
    soup = soup.encode(formatter=UnsortedAttributes()).decode('utf-8')
    new_filename = f'{filename.split(".")[0]}.html'
    if use_translate_folder:
        os.makedirs(os.path.join(files_from_folder, 'translated'), exist_ok=True)
        out_path = os.path.join(files_from_folder, 'translated', new_filename)
    else:
        out_path = os.path.join(files_from_folder, new_filename)
    with open(out_path, 'w', encoding='utf-8') as new_html:
        new_html.write(soup[5:-6])  # strip the added <pre> and </pre>

That's all folks.
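At the end, the script either overwrites the source file or writes into a translated subfolder, creating it the first time it is needed. A small standard-library sketch of that output-path logic (the folder here is a temporary stand-in for the script's files_from_folder):

```python
import os
import tempfile

def output_path(folder: str, filename: str, use_translate_folder: bool) -> str:
    # Mirror of the script's branch: write next to the source file, or into
    # a 'translated' subfolder created on demand (makedirs with exist_ok
    # avoids the try/except around os.mkdir in the original).
    if use_translate_folder:
        subfolder = os.path.join(folder, 'translated')
        os.makedirs(subfolder, exist_ok=True)
        return os.path.join(subfolder, filename)
    return os.path.join(folder, filename)

with tempfile.TemporaryDirectory() as folder:
    path = output_path(folder, 'page.html', use_translate_folder=True)
    with open(path, 'w', encoding='utf-8') as fh:
        fh.write('<p>translated text</p>')
    print(os.path.exists(os.path.join(folder, 'translated', 'page.html')))  # True
```

Writing to a separate subfolder is the safer default while testing, since running with use_translate_folder = False replaces the original pages in place.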
If you like my code, then do me a favor: translate your website into Romanian, "ro". There are also Version 2, Version 3, Version 4, Version 5, and Version 6 of this code.
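The heart of the script, recursively_translate, walks a tag's contents, translating string leaves and recursing into child tags. Its control flow can be sketched without the network or BeautifulSoup by modeling the tree as nested lists and swapping the Google call for a hypothetical stub:

```python
def stub_translate(text: str) -> str:
    # hypothetical stand-in for translator.translate(...).text;
    # the real script calls Google Translate here
    return text.upper()

def recursively_translate(node: list) -> None:
    # mirrors the script's loop over node.contents:
    # translate non-blank strings, recurse into nested nodes
    for i in range(len(node)):
        if isinstance(node[i], str):
            if node[i].strip() != '':
                node[i] = stub_translate(node[i])
        elif node[i] is not None:
            recursively_translate(node[i])

tree = ['Title', ['intro ', ['deep text'], ' outro'], '  ']
recursively_translate(tree)
print(tree)  # ['TITLE', ['INTRO ', ['DEEP TEXT'], ' OUTRO'], '  ']
```

The blank-string guard matters: whitespace-only nodes (indentation between tags) are left alone, so the page's formatting survives the translation pass.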
Donate via PayPal
RECURRENT DONATION: donate monthly to support the project.
SINGLE DONATION: donate the desired amount to support the project.
Donate by bank transfer. RON account: RO34INGB0000999900448439 (account opened at ING Bank)