RECURRENT DONATION
Donate monthly to support
the NeculaiFantanaru.com project
The main feature of this book, compared with others on the same segment of the market, is that it illustrates, through examples, the qualities of a good leader. I never said it is easy to be a good leader, but if people truly want it...
I wrote this book as an easy approach to personal development and leadership; like a puzzle, you have to fit all the pieces together to recover the whole picture.
The aim of this book is to pass information on to you through concrete examples and to show you how you can get others to see things the way you see them.
It does not claim to be a definitive verdict; the book presents the endeavour of an ordinary man - the author - who, through simple words, stories and everyday examples, builds up the ordinary reader's confidence and respect for his own pursuit of becoming better and wiser... perhaps a leader.
You can view the whole code here: Https: // Sandatin.com / 7 Te wehi 27p q6 WhakanohoPenita. Then install the following libraries from the command-line interpreter (CMD) in Windows 10; Python will translate the HTML tags listed below using the Googletrans library:

py -m pip install "googletrans"
py -m pip install googletrans==4.0.0rc1
py -m pip install beautifulsoup4

Also, the Python code automatically translates the text of the tags listed below (your text), but only where those tags sit between the HTML comment markers. Of course, you should replace these tags with your own tags.
The code: copy the code below and run it under an IDE (I use PyScripter). Don't forget to change the path in the "files_from_folder" line. Here is the list of languages the text can be translated into: Lang.

from bs4 import BeautifulSoup
from bs4.formatter import HTMLFormatter
from googletrans import Translator
import requests
import os

translator = Translator()

class UnsortedAttributes(HTMLFormatter):
    # Keep attribute order as written, instead of bs4's alphabetical default.
    def attributes(self, tag):
        for k, v in tag.attrs.items():
            yield k, v

files_from_folder = r"e:\Carte\BB\17 - Site Leadership\Principal"
use_translate_folder = False
destination_language = 'ceb'
extension_file = ".html"

directory = os.fsencode(files_from_folder)

def recursively_translate(node):
    # Walk the children of a node and translate non-empty text nodes in place.
    for x in range(len(node.contents)):
        if isinstance(node.contents[x], str):
            if node.contents[x].strip() != '':
                try:
                    node.contents[x].replaceWith(
                        translator.translate(node.contents[x],
                                             dest=destination_language).text)
                except:
                    pass
        elif node.contents[x] is not None:
            recursively_translate(node.contents[x])

for file in os.listdir(directory):
    filename = os.fsdecode(file)
    print(filename)
    if filename == 'y_key_e479323ce281e459.html' or filename == 'TS_4fg4_tr78.html':
        continue  # ignore these 2 files
    if filename.endswith(extension_file):
        with open(os.path.join(files_from_folder, filename), encoding='utf-8') as html:
            # The file content is wrapped in a dummy tag so the [5:-6] slice at
            # the end can strip it again; the original tag name was garbled in
            # this copy, '<xml>' is an assumption that matches the slice lengths.
            soup = BeautifulSoup('<xml>' + html.read() + '</xml>', 'html.parser')

            for title in soup.findAll('title'):
                recursively_translate(title)

            for meta in soup.findAll('meta', {'name': 'description'}):
                try:
                    meta['content'] = translator.translate(
                        meta['content'], dest=destination_language).text
                except:
                    pass

            # NOTE: the marker strings inside index('') did not survive in this
            # copy; the original searched for the HTML comments delimiting the
            # article body.
            for h1 in soup.findAll('h1', {'itemprop': 'name'}, class_='den_articol'):
                begin_comment = str(soup).index('')
                end_comment = str(soup).index('')
                if begin_comment < str(soup).index(str(h1)) < end_comment:
                    recursively_translate(h1)

            for p in soup.findAll('p', class_='text_obisnuit'):
                begin_comment = str(soup).index('')
                end_comment = str(soup).index('')
                if begin_comment < str(soup).index(str(p)) < end_comment:
                    recursively_translate(p)

            for p in soup.findAll('p', class_='text_obisnuit2'):
                begin_comment = str(soup).index('')
                end_comment = str(soup).index('')
                if begin_comment < str(soup).index(str(p)) < end_comment:
                    recursively_translate(p)

            for span in soup.findAll('span', class_='text_obisnuit2'):
                begin_comment = str(soup).index('')
                end_comment = str(soup).index('')
                if begin_comment < str(soup).index(str(span)) < end_comment:
                    recursively_translate(span)

            for li in soup.findAll('li', class_='text_obisnuit'):
                begin_comment = str(soup).index('')
                end_comment = str(soup).index('')
                if begin_comment < str(soup).index(str(li)) < end_comment:
                    recursively_translate(li)

            for a in soup.findAll('a', class_='linkMare'):
                begin_comment = str(soup).index('')
                end_comment = str(soup).index('')
                if begin_comment < str(soup).index(str(a)) < end_comment:
                    recursively_translate(a)

            for h4 in soup.findAll('h4', class_='text_obisnuit2'):
                begin_comment = str(soup).index('')
                end_comment = str(soup).index('')
                if begin_comment < str(soup).index(str(h4)) < end_comment:
                    recursively_translate(h4)

            for h5 in soup.findAll('h5', class_='text_obisnuit2'):
                begin_comment = str(soup).index('')
                end_comment = str(soup).index('')
                if begin_comment < str(soup).index(str(h5)) < end_comment:
                    recursively_translate(h5)

            print(f'{filename} translated')
            soup = soup.encode(formatter=UnsortedAttributes()).decode('utf-8')
            new_filename = f'{filename.split(".")[0]}.html'
            if use_translate_folder:
                try:
                    with open(os.path.join(files_from_folder + r'\translated', new_filename),
                              'w', encoding='utf-8') as new_html:
                        new_html.write(soup[5:-6])
                except:
                    os.mkdir(files_from_folder + r'\translated')
                    with open(os.path.join(files_from_folder + r'\translated', new_filename),
                              'w', encoding='utf-8') as new_html:
                        new_html.write(soup[5:-6])
            else:
                with open(os.path.join(files_from_folder, new_filename),
                          'w', encoding='utf-8') as html:
                    html.write(soup[5:-6])

That's all folks.
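The heart of the script is recursively_translate(), which walks a tag's children and replaces each non-empty text node in place. A minimal offline sketch of the same walk, with the googletrans API call swapped for upper-casing so it runs without a network connection (the HTML snippet and the transform are illustrative, not from the original):

```python
from bs4 import BeautifulSoup, NavigableString

def recursively_transform(node, transform):
    # Same shape as recursively_translate above: visit each child, replace
    # string nodes in place, recurse into nested tags. The transform callable
    # stands in for translator.translate(...).text.
    for child in list(node.contents):
        if isinstance(child, NavigableString):
            if child.strip():
                child.replace_with(transform(str(child)))
        else:
            recursively_transform(child, transform)

soup = BeautifulSoup('<p class="text_obisnuit">hello <b>world</b></p>', 'html.parser')
recursively_transform(soup, str.upper)
print(soup)  # <p class="text_obisnuit">HELLO <b>WORLD</b></p>
```

Note that only the text nodes change; the tags, classes and nesting are left untouched, which is exactly why the real script can rewrite whole HTML pages without breaking their layout.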
If you like my code, then do me a favor: translate your website into Romanian, "ro". Also, there is a Version 2 there as well.