While web scraping, looping through each URL to write a separate text file per URL raises an error

Question:

for i in urls:
   text = fetch_text(i)
   listToStr = ' '.join([str(elem) for elem in text])
   result = re.sub(r'<.*?>', '', listToStr)
   basename = "file_"
   file_name = ["{}_{}.txt".format(basename, j) for j in range(37, 151)]
   with open(file_name[i], 'w') as f:    # ----> Error
       f.write(result)

I wrote the above code to fetch data from each URL, and I want to create a separate file for every URL's data, but I am getting an error on the `with open` line for `file_name`: "list indices must be integers or slices, not str".

Can someone help me with it?

Asked By: Gopi Kishan


Answers:

You need an integer index to pick the right file name for each URL; `i` in your loop is the URL string itself, which is why indexing `file_name[i]` fails.

I also changed the iteration variable for `urls` to `url` instead of `i`, since `i` should be reserved for indexes and here you are iterating over the elements directly.

Note that in your code you have one result per URL, but you generate a whole list of file names inside each iteration and would store the same result in all of them. This looks wrong, so you should really check the logic of your loops:

for i, url in enumerate(urls):
   text = fetch_text(url)
   listToStr = ' '.join([str(elem) for elem in text])
   result = re.sub(r'<.*?>', '', listToStr)
   basename = "file_"
   file_name = f"{basename}_{i+37}.txt"
   with open(file_name, 'w') as f:
       f.write(result)
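As a side note, `enumerate` accepts a `start` argument, so the `+37` offset can be folded into the counter directly. Below is a minimal, self-contained sketch of that variant; `fetch_text` is stubbed out because the real function is not shown in the question, and file names are collected in a list instead of written to disk just for illustration.

```python
import re

# Stub standing in for the asker's fetch_text(), which is not shown.
def fetch_text(url):
    return ["<p>content from", url, "</p>"]

urls = ["http://example.com/a", "http://example.com/b"]
basename = "file_"

written = []
for i, url in enumerate(urls, start=37):  # start=37 replaces i + 37
    text = fetch_text(url)
    listToStr = ' '.join(str(elem) for elem in text)
    result = re.sub(r'<.*?>', '', listToStr)  # strip HTML tags
    file_name = f"{basename}_{i}.txt"
    # In the real loop you would do: with open(file_name, 'w') as f: f.write(result)
    written.append((file_name, result))

print(written[0][0])  # file__37.txt
```

This keeps one file name per URL while still starting the numbering at 37.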
Answered By: Sembei Norimaki